├── .DS_Store
├── .github
│   └── PULL_REQUEST_TEMPLATE.md
├── 01-access-as-workshop
│   └── readme.adoc
├── 01-deploy-od-workshop
│   └── readme.adoc
├── 02-connect-to-instances
│   ├── .DS_Store
│   └── readme.adoc
├── 03-examine-efs-console
│   └── readme.adoc
├── 04-iops-zero-byte
│   ├── .DS_Store
│   └── readme.adoc
├── 05-iops-4kb
│   ├── .DS_Store
│   └── readme.adoc
├── 06-instance-throughput
│   ├── .DS_Store
│   └── readme.adoc
├── 07-provisioned-throughput
│   ├── .DS_Store
│   └── readme.adoc
├── 08-throughput-dd
│   ├── .DS_Store
│   └── readme.adoc
├── 09-throughput-ior
│   ├── .DS_Store
│   └── readme.adoc
├── 10-transfer-tools
│   ├── .DS_Store
│   ├── readme.adoc
│   └── readme_20200608.adoc
├── 11-monitor-performance
│   ├── .DS_Store
│   └── readme.adoc
├── 12-client-access
│   ├── .DS_Store
│   └── readme.adoc
├── 13-takeaways
│   ├── .DS_Store
│   └── readme.adoc
├── 14-tear-down-as-workshop
│   ├── .DS_Store
│   └── readme.adoc
├── 14-tear-down-od-workshop
│   ├── .DS_Store
│   └── readme.adoc
├── 14-tear-down-workshop
│   ├── .DS_Store
│   └── readme.adoc
├── LICENSE
├── aws-sponsored
│   └── readme.adoc
├── on-demand
│   └── readme.adoc
├── readme.adoc
└── resources
    ├── .DS_Store
    └── images
        ├── .DS_Store
        ├── Picture1.png
        ├── access-as-workshop.gif
        ├── client-access.gif
        ├── client-access.png
        ├── cloudformation-capabilities.png
        ├── connect-linux-instances-efs.gif
        ├── connect-to-instances.png
        ├── create-alarm.png
        ├── dashboard.png
        ├── deploy-to-aws.png
        ├── efs-as-workshop.png
        ├── efs-as-workshops.png
        ├── efs-aws-logos.png
        ├── efs-od-workshop.png
        ├── efs-od-workshops.png
        ├── efs-workshop-architecture-v1.png
        ├── efs-workshop-architecture.png
        ├── efs-workshop-icons.graffle
        │   ├── data.plist
        │   ├── image32.pdf
        │   ├── image33.pdf
        │   ├── image34.png
        │   └── preview.jpeg
        ├── efs-workshops.png
        ├── examine-efs-console.gif
        ├── examine-efs-console.png
        ├── instance-throughput.png
        ├── iops-4kb-duration-graph.png
        ├── iops-4kb-iops-graph.png
        ├── iops-4kb.png
        ├── iops-zero-byte-graph.png
        ├── iops-zero-byte.gif
        ├── iops-zero-byte.png
        ├── ior-fpp-duration.png
        ├── ior-fpp-throughput.png
        ├── ior-ssf-duration.png
        ├── ior-ssf-throughput.png
        ├── monitor-performance.gif
        ├── monitor-performance.png
        ├── mount-filesystem.png
        ├── next-section.png
        ├── provisioned-throughput.gif
        ├── provisioned-throughput.png
        ├── takeaways.png
        ├── tear-down-as-workshop.png
        ├── tear-down-od-workshop.png
        ├── tear-down-workshop.png
        ├── test-performance.png
        ├── throughput-dd-duration-graph.png
        ├── throughput-dd-throughput-graph.png
        ├── throughput-dd.png
        ├── throughput-ior.gif
        ├── throughput-ior.png
        ├── transfer-tool-graph.png
        ├── transfer-tools.gif
        └── transfer-tools.png

--------------------------------------------------------------------------------
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/.DS_Store
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
*Issue #, if available:*

*Description of changes:*


By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
--------------------------------------------------------------------------------
/01-access-as-workshop/readme.adoc:
--------------------------------------------------------------------------------
= Access Workshop Environment
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

A new AWS environment will be pre-built and available to you for use during this workshop. See the *Workshop Diagram* below.

== Workshop Diagram

image::efs-workshop-architecture-v1.png[align="left"]

== Step-by-step Guide

IMPORTANT: Follow the instructions given by the workshop administrators on how to log in to the AWS account provided for this workshop. Do NOT use your personal or business account to run this workshop, as the required pre-built resources will not be available.

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::access-as-workshop.gif[align="left", width=600]

. Go to the link:https://dashboard.eventengine.run[AWS Event Engine Dashboard] - link:https://dashboard.eventengine.run[https://dashboard.eventengine.run] - or access the direct link provided by the workshop administrators.
* The AWS Event Engine was created to help AWS field teams run Workshops, GameDays, Bootcamps, Immersion Days, and other events that require hands-on access to AWS accounts.
. *_Read_* the *Terms & Conditions*.
. *_Enter_* the team hash you were given by the workshop administrators and *_click_* *[Accept terms and login]*.
. From the *Team Dashboard* page, *_click_* *[AWS Console]*.
. From the *AWS Console Login* page, *_click_* the *[Open AWS Console]* link in the *Login Link* section to open the AWS Console using the AWS sponsored account.

== Next section

Click the link below to go to the next section.

image::connect-to-instances.png[link=../02-connect-to-instances/, align="right",width=420]
--------------------------------------------------------------------------------
/01-deploy-od-workshop/readme.adoc:
--------------------------------------------------------------------------------
= Deploy On-Demand Workshop
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

Deploy a new AWS environment for use during this workshop. See the *Workshop Diagram* below. It will take approximately 5 minutes for the workshop environment to be created.

== Workshop Diagram

image::efs-workshop-architecture-v1.png[align="center"]

=== Deploy the workshop using AWS CloudFormation

IMPORTANT: Read through all steps below and watch the quick video before *_clicking_* the *Deploy to AWS* button.

image::create-environment.gif[align="left", width=600]

. Click on the *Deploy to AWS* button and follow the CloudFormation prompts to begin.
+
Amazon EFS is currently available in 20 AWS regions.
+
TIP: *_Context-click (right-click)_* the *Deploy to AWS* button and open the link in a new tab or window to make it easy to navigate between this GitHub tutorial and the AWS Console.
+
|===
|Region | Launch template with a new VPC
| *N. Virginia* (us-east-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Ohio* (us-east-2)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *N. California* (us-west-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=us-west-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Oregon* (us-west-2)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Cape Town* (af-south-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=af-south-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Hong Kong* (ap-east-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=ap-east-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Mumbai* (ap-south-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=ap-south-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Seoul* (ap-northeast-2)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-2#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Singapore* (ap-southeast-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Sydney* (ap-southeast-2)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Tokyo* (ap-northeast-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Canada* (ca-central-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=ca-central-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Frankfurt* (eu-central-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=eu-central-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Ireland* (eu-west-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *London* (eu-west-2)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=eu-west-2#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Milan* (eu-south-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=eu-south-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Paris* (eu-west-3)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=eu-west-3#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Stockholm* (eu-north-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=eu-north-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *Bahrain* (me-south-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=me-south-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]

| *São Paulo* (sa-east-1)
a| image::deploy-to-aws.png[link=https://console.aws.amazon.com/cloudformation/home?region=sa-east-1#/stacks/new?stackName=efs-workshop&templateURL=https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml]
|===
+
. Accept the defaults on the *Prerequisite - Prepare template* page and *_click_* *Next*.
+
. Accept the default stack name and *_enter_* values for all parameters.
+
[cols="3,10"]
|===
| *VPC CIDR*
a| Select a CIDR that will be used for the VPC.

| *Availability Zones*
a| Select two (2) availability zones for your VPC.

|===
+
. After you have entered values for all parameters, *_click_* *Next*.
. *_Accept_* the default values of the *Configure stack options* and *Advanced options* sections and *_click_* *Next*.
. *_Review_* the CloudFormation stack settings.
. *_Click_* both checkboxes in the blue *Capabilities* box at the bottom of the page.
+
image::cloudformation-capabilities.png[align="left", width=420]
+
. *_Click_* *Create stack*.

The workshop environment will be available in approximately 5 minutes.
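If you prefer to launch from a terminal, the same stack can be created with the AWS CLI. This is a minimal sketch - the `VpcCidr` parameter key below is an assumption, so inspect the template's Parameters section for the real keys (including the Availability Zones parameter) before running it:

[source,bash]
----
# Launch the workshop stack (adjust --region to any of the supported regions above).
# The parameter key is illustrative - check the template for the actual names.
aws cloudformation create-stack \
  --region us-east-1 \
  --stack-name efs-workshop \
  --template-url https://amazon-elastic-file-system.s3.amazonaws.com/workshop/efs-od-workshop.yaml \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=VpcCidr,ParameterValue=10.0.0.0/16

# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete --region us-east-1 --stack-name efs-workshop
----

== Next section

Click the button below to go to the next section.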
image::connect-to-instances.png[link=../02-connect-to-instances/, align="right",width=420]
--------------------------------------------------------------------------------
/02-connect-to-instances/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/02-connect-to-instances/.DS_Store
--------------------------------------------------------------------------------
/02-connect-to-instances/readme.adoc:
--------------------------------------------------------------------------------
= Connect to instances
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

In this section you will establish connections to the Linux instances you will use for the remainder of the workshop.


== Duration

NOTE: It will take approximately 5 minutes to complete this section.


== Step-by-step Guide

=== Connect to Linux instances

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::connect-linux-instances-efs.gif[align="left", width=600]


. Open the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console.
+
TIP: *_Context-click (right-click)_* the link above and open the link in a new tab or window to make it easy to navigate between this GitHub workshop and the AWS Console.
+
. Make sure you are in the same *AWS Region* as the workshop environment. If you need to change the *AWS Region* of the Amazon EC2 console, in the top right corner of the browser window *_click_* the region name next to *Support* and *_click_* the appropriate *AWS Region* from the drop-down menu.

. *_Click_* *Running Instances*.

. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*.

. *_Click_* the *Connect* button.

. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*.

* If you prefer to use another terminal emulator (e.g. Terminal, iTerm2, PuTTY, etc.), *_click_* the radio button next to *A standalone SSH client* and follow the directions in the *To access your instance:* section to establish an SSH session to the EC2 instances. Throughout the workshop, please access these terminal session windows when asked to connect to the *browser-based SSH connection session* windows. You will need to *_download_* the *SSH Key* from the Event Engine Team Dashboard where you launched the AWS Console. If you're not familiar with this process of downloading an SSH key, converting a .pem file to .ppk (Windows only), and making it not publicly viewable (chmod 400 ee-default-keypair.pem), we recommend you continue the workshop using *browser-based SSH connection session* windows. A sketch of the standalone flow is shown at the end of this section.

. Leave the default user name as *ec2-user* and *_click_* *Connect*.

NOTE: Throughout this workshop you will be asked to connect to *browser-based SSH connection session* windows of three Amazon EC2 instances - *EFS Workshop Linux Instance 0*, *EFS Workshop Linux Instance 1*, and *EFS Workshop Linux Instance 2*. Follow these steps for the specific EC2 instance when asked to do so.
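If you do choose the standalone SSH client, the session setup looks like the following - a minimal sketch, assuming the *ee-default-keypair.pem* key named above and a placeholder public DNS name for the instance:

[source,bash]
----
# Restrict permissions on the downloaded key so the SSH client will accept it
chmod 400 ee-default-keypair.pem

# Connect as ec2-user (replace the host with your instance's public DNS name
# from the EC2 console)
ssh -i ee-default-keypair.pem ec2-user@ec2-192-0-2-10.compute-1.amazonaws.com
----

== Next section

Click the link below to go to the next section.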
image::examine-efs-console.png[link=../03-examine-efs-console/, align="left",width=420]
--------------------------------------------------------------------------------
/03-examine-efs-console/readme.adoc:
--------------------------------------------------------------------------------
= Examine Amazon EFS console
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

This section will help you become more familiar with the Amazon EFS console.


== Duration

NOTE: It will take approximately 5 minutes to complete this section.


== Step-by-step Guide

=== Examine the Amazon EFS console

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::examine-efs-console.gif[align="left", width=600]

. *_Go_* to the link:https://console.aws.amazon.com/efs/[Amazon EFS] console.

. *_Examine_* the *Summary* section of the file system from the main console. *_Find_* the values of the following file system attributes:
* Name - this is a tag or key/value pair
* File system ID
* Metered size
* Number of mount targets
* Creation date

. *_Move_* your cursor over the Metered size value to see when the metered size was last updated.
* The computed metered size doesn't represent a consistent snapshot of the file system at any particular time during that hour. Instead, it represents the sizes of the objects that existed in the file system at varying times within each hour, or possibly the hour before it. These sizes are summed to determine the file system's metered size for the hour. The metered size of a file system is thus eventually consistent with the metered sizes of the objects stored when there are no writes to the file system.

. *_Examine_* more details about the file system. *_Click_* the radio button next to the file system. *_Find_* the values of the following file system attributes:
* Owner Id
* File system state
* Performance mode
* Throughput mode
* Encrypted
* Lifecycle policy
* Tags
* DNS name

. *_Examine_* the mount targets of the file system.
* You mount your file system on an EC2 instance in your virtual private cloud (VPC) using a mount target that you create for the file system. Managing file system network accessibility refers to managing the mount targets.

. *_Find_* the values of the following mount target attributes:
* VPC
* Availability Zones
* Subnets
* IP addresses
* Mount target IDs
* Network interface IDs
* Security groups
* Mount target states

. *_Examine_* the instructions to mount a file system from a local VPC. *_Click_* the *Amazon EC2 mount instructions (from local VPC)* link.

. *_Examine_* the instructions to mount a file system from a remote VPC. *_Click_* the *Amazon EC2 mount instructions (across VPC peering connection)* link.

. *_Examine_* the instructions to mount a file system from on-premises. *_Click_* the *On-premises mount instructions* link.

. *_Examine_* the *Tags* section of the console. *_Click_* the *Manage tags* link.
* What tags (key/value) pairs are assigned to the file system?
* Add a new tag (key/value) pair. *_Click_* the *[Add]* button and enter a *key* / *value* of your choice (e.g. Environment/Production). *_Click_* the *[Save]* button. A CLI equivalent is sketched below.
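+
If you prefer the CLI, the same tag can be applied with the AWS CLI - a minimal sketch, using a placeholder file system ID in place of your own:
+
[source,bash]
----
# Add (or overwrite) a tag on the file system
aws efs create-tags \
  --file-system-id fs-0123456789abcdef0 \
  --tags Key=Environment,Value=Production

# List all tags to confirm
aws efs describe-tags --file-system-id fs-0123456789abcdef0
----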
. *_Examine_* how to add and remove mount targets. *_Click_* the *Manage network access* link.
* Don't make any changes to the existing mount targets. *_Click_* Cancel.

. *_Examine_* how to add file system policies and access points to control client access to the file system. *_Click_* the *Manage client access* link.
* Don't make any changes. *_Click_* the browser's back button to return to the main Amazon EFS console.


== Next section

Click the link below to go to the next section.

image::iops-zero-byte.png[link=../04-iops-zero-byte/, align="left",width=420]
--------------------------------------------------------------------------------
/04-iops-zero-byte/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/04-iops-zero-byte/.DS_Store
--------------------------------------------------------------------------------
/04-iops-zero-byte/readme.adoc:
--------------------------------------------------------------------------------
= IOPS Zero-byte Files
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

This section will demonstrate how parallelism increases the achievable IOPS of an EFS file system.


== Duration

NOTE: It will take approximately 10 minutes to complete this section.


== Step-by-step Guide

=== Generate zero-byte files using touch

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::iops-zero-byte.gif[align="left", width=600]

. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2* instance.
+
TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*.
+
. *_Copy_*, *_paste_*, and *_run_* the following command in the browser-based SSH connection window to see how the Amazon EFS file system has been mounted. The rest of the bash commands below will also be *_run_* in the same browser-based SSH connection window.
+
[source,bash]
----
mount -t nfs4
----
+

. What is the mount point of the EFS file system?
* The output of the command should look similar to this:
+
[source,bash]
----
fs-01234abc.efs.us-east-1.amazonaws.com:/ on /efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.12,local_lock=none,addr=10.0.1.176,_netdev)
----
+
* Answer: /efs

. *_Run_* the following script to test how long it takes to generate 1024 zero-byte files on an EFS file system.
+
[source,bash]
----
directory=$(echo $(uuidgen)| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}

time for i in {1..1024}; do
  touch /efs/touch/${directory}/test-1.1-$i;
done;
----
+
. How many seconds did it take to generate 1024 zero-byte files on the EFS file system?
* The output of the script should look similar to this:
+
[source,bash]
----
real    0m12.545s
user    0m0.522s
sys     0m0.210s
----
+
. Do you consider this fast or slow?
. Let's see how long it takes to generate the same number of zero-byte files on an EBS volume. *_Run_* the following script.
+
[source,bash]
----
directory=$(echo $(uuidgen)| grep -o ".\{6\}$")
mkdir -p /ebs/touch/${directory}

time for i in {1..1024}; do
  touch /ebs/touch/${directory}/test-1.1-$i;
done;
----
+
. How many seconds did it take to generate 1024 zero-byte files on the EBS volume?
* The output of the script should look similar to this:
+
[source,bash]
----
real    0m0.648s
user    0m0.525s
sys     0m0.182s
----
+
. Do you consider this fast or slow?
. What if you re-wrote the script to use multiple threads and ran it against the EFS file system again? *_Run_* the following script.
* The script uses 32 threads to generate 1024 zero-byte files in parallel.
+
[source,bash]
----
directory=$(echo $(uuidgen)| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}

time seq 1 1024 | parallel --will-cite -j 32 touch /efs/touch/${directory}/test-1.2-{}
----
+
. How many seconds did it take to generate 1024 zero-byte files in parallel using multiple threads on the EFS file system?
* The output of the script should look similar to this:
+
[source,bash]
----
real    0m6.138s
user    0m3.039s
sys     0m2.440s
----
+
. Why was this so much faster than the first test against the EFS file system?
* Generating files in parallel using multiple threads takes advantage of the distributed data storage design of Amazon EFS.
. What if you re-wrote the script again so each thread writes to its own directory in parallel? *_Run_* the following script.
* The script uses 32 threads - each writing 32 files in its own directory - generating a total of 1024 zero-byte files (32x32=1024).
+
[source,bash]
----
directory=$(echo $(uuidgen)| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}/{1..32}

time seq 1 32 | parallel --will-cite -j 32 touch /efs/touch/${directory}/{}/test1.3{1..32}
----
+
. How many seconds did it take to generate 1024 zero-byte files in parallel using multiple threads on the EFS file system?
* The output of the script should look similar to this:
+
[source,bash]
----
real    0m0.658s
user    0m0.186s
sys     0m0.142s
----
+
. Why was this so much faster than all the other tests?
* Having each thread write to its own unique directory avoids inode contention. An inode is a data structure on Linux file systems that stores certain file and directory metadata about file system objects. Instead of the script needing to update one directory inode for every file being generated, it updates all directory inodes in parallel for every file being generated.
This, along with generating files in parallel using multiple threads, helps to maximize the achievable IOPS by taking advantage of the distributed data storage design of Amazon EFS.
. Experiment running the previous script using different numbers of threads. *_Run_* the commands in the table below. To validate the creation of all 1024 files, run the following *tree* command after each parallel touch command to get a count of all the files created.
+
[source,bash]
----
tree --du -h /efs/touch/${directory}
----
+
[cols="3,10"]
|===
|*Threads* |*Parallel touch command*

|1 a|
....
directory=$(echo $(uuidgen)\| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}/{1..1}
time seq 1 1 \| parallel --will-cite -j 1 touch /efs/touch/${directory}/{}/test.{1..1024}
....

|2 a|
....
directory=$(echo $(uuidgen)\| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}/{1..2}
time seq 1 2 \| parallel --will-cite -j 2 touch /efs/touch/${directory}/{}/test.{1..512}
....

|4 a|
....
directory=$(echo $(uuidgen)\| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}/{1..4}
time seq 1 4 \| parallel --will-cite -j 4 touch /efs/touch/${directory}/{}/test.{1..256}
....

|8 a|
....
directory=$(echo $(uuidgen)\| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}/{1..8}
time seq 1 8 \| parallel --will-cite -j 8 touch /efs/touch/${directory}/{}/test.{1..128}
....

|16 a|
....
directory=$(echo $(uuidgen)\| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}/{1..16}
time seq 1 16 \| parallel --will-cite -j 16 touch /efs/touch/${directory}/{}/test.{1..64}
....

|32 a|
....
directory=$(echo $(uuidgen)\| grep -o ".\{6\}$")
mkdir -p /efs/touch/${directory}/{1..32}
time seq 1 32 \| parallel --will-cite -j 32 touch /efs/touch/${directory}/{}/test.{1..32}
....

|===
+
. The following table and graph show an example of the results you should expect.
+
[cols="3,3,3",options="header"]
|===
|Threads |IOPS |Duration (seconds)

|1
a|86.6
a|11.8

|2
a|184.1
a|5.6

|4
a|367.7
a|2.8

|8
a|634.4
a|1.6

|16
a|820.5
a|1.2

|32
a|1771.6
a|0.6

|===
+
[.left]
.IOPS and Duration
image::iops-zero-byte-graph.png[450, scaledwidth="75%"]
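Before leaving this section, you may want to clear out the thousands of test files these runs created. A minimal cleanup sketch - deleting in parallel, which benefits from the same distributed design the create tests exercised:

[source,bash]
----
# Remove all the touch test directories in parallel
ls -d /efs/touch/*/ | parallel --will-cite -j 32 rm -rf {}
----

== Next section

Click the link below to go to the next section.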
image::iops-4kb.png[link=../05-iops-4kb/, align="left",width=420]
--------------------------------------------------------------------------------
/05-iops-4kb/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/05-iops-4kb/.DS_Store
--------------------------------------------------------------------------------
/05-iops-4kb/readme.adoc:
--------------------------------------------------------------------------------
= IOPS 4KB Files
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

This section will demonstrate how parallelism increases the achievable IOPS of an EFS file system.


== Duration

NOTE: It will take approximately 10 minutes to complete this section.


== Step-by-step Guide

=== Generate 4KB files using smallfile

NOTE: smallfile is a python-based distributed POSIX workload generator which can be used to quickly measure performance for a variety of metadata-intensive workloads across an entire cluster. It is an excellent tool to test file systems and I use it regularly to measure and demonstrate operation performance of Amazon EFS and Amazon FSx file systems. Learn more about smallfile from Ben England's github repo - link:https://github.com/distributed-system-analysis/smallfile[smallfile].

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::iops-4k.gif[align="left", width=600]

. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2* instance.
+
TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*.
+
. *_Copy_*, *_paste_*, and *_run_* the following command in the browser-based SSH connection window to see how long it takes for smallfile to generate 1024 4KB files on the EFS file system.
+
[source,bash]
----
job_name=$(echo $(uuidgen)| grep -o ".\{6\}$")
prefix=$(echo $(uuidgen)| grep -o ".\{6\}$")
path=/efs/smallfile/${job_name}
sudo mkdir -p ${path}

threads=1
file_size=4
file_count=1024
operation=create
same_dir=Y

sudo python ~/smallfile/smallfile_cli.py \
--operation ${operation} \
--threads ${threads} \
--file-size ${file_size} \
--files ${file_count} \
--same-dir ${same_dir} \
--hash-into-dirs Y \
--prefix ${prefix} \
--dirs-per-dir ${file_count} \
--files-per-dir ${file_count} \
--top ${path}
----
+

. How many seconds did it take to generate 1024 4KB files on the EFS file system?
* The output of the script should look similar to this:
+
[source,bash]
----
host = ip-10-0-0-12,thr = 00,elapsed = 11.924979,files = 1024,records = 1024,status = ok
total threads = 1
total files = 1024
total IOPS = 85
total data = 0.004 GiB
100.00% of requested files processed, warning threshold is 70.00
elapsed time = 11.925
files/sec = 85.870172
IOPS = 85.870172
MiB/sec = 0.335430
----
+
. What was the IOPS?
. How many threads were used?
. Were the files generated in the same directory?
* HINT: Look at the value of the variable "--same-dir".
. *_Copy_* the previous smallfile script to your favorite text editor. Experiment with different smallfile parameter settings. Use the table below as a guide. Test with different threads (--threads), file size (--file-size), file count (--files) and same directory (--same-dir).
+
[cols="10,5"]
|===
| Parameter | Description

| `--threads`
a| Number of threads.

| `--file-size`
a| File size in KB.

| `--files`
a| Number of files per thread. For example, if you want to see how long it takes to generate 1024 files using 16 threads, change the --threads parameter to 16 and the --files parameter to 64 (1024÷16=64).

| `--same-dir`
a| Y will generate all files in the same directory - increasing inode contention. N will generate files in different directories, one for each thread - decreasing inode contention.

|===
+

* What different parameters did you test?
* How did the different parameter options alter the results?
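* To run an entire sweep unattended, a wrapper loop like the following can help - a sketch assuming the same smallfile setup used above, keeping the total file count at 1024:
+
[source,bash]
----
# Sweep thread counts; threads x files-per-thread stays at 1024
for threads in 1 2 4 8 16 32; do
  job_name=$(uuidgen | grep -o ".\{6\}$")
  path=/efs/smallfile/${job_name}
  sudo mkdir -p ${path}
  sudo python ~/smallfile/smallfile_cli.py \
    --operation create \
    --threads ${threads} \
    --file-size 4 \
    --files $((1024 / threads)) \
    --same-dir N \
    --hash-into-dirs Y \
    --prefix $(uuidgen | grep -o ".\{6\}$") \
    --dirs-per-dir 1024 \
    --files-per-dir 1024 \
    --top ${path}
done
----
* The following table and graphs show the sample results of a few tests. Look how increasing the number of threads (increasing parallelism) and writing to different subdirectories (decreasing inode contention) impact the IOPS and duration.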
+
[cols="3,3,2,3,3,3,3",options="header"]
|===
|Threads |File size (KB) |File count (per thread) |File count (total) |Same directory |Duration (seconds) |IOPS

| 1
| 4
| 1024
| 1024
| Y
| 11.369
| 90.066095

| 2
| 4
| 512
| 1024
| Y
| 5.820
| 176.009550

| 4
| 4
| 256
| 1024
| Y
| 5.883
| 174.591562

| 8
| 4
| 128
| 1024
| Y
| 5.882
| 175.117492

| 16
| 4
| 64
| 1024
| Y
| 5.629
| 184.055531

| 32
| 4
| 32
| 1024
| Y
| 5.641
| 186.835993

| 1
| 4
| 1024
| 1024
| N
| 11.958
| 85.633895

| 2
| 4
| 512
| 1024
| N
| 5.452
| 188.621103

| 4
| 4
| 256
| 1024
| N
| 2.755
| 372.936600

| 8
| 4
| 128
| 1024
| N
| 1.390
| 746.051127

| 16
| 4
| 64
| 1024
| N
| 0.819
| 1281.790673

| 32
| 4
| 32
| 1024
| N
| 0.535
| 1973.441341

|===
{empty} +
{empty} +
--
[.left]
.IOPS
image::iops-4kb-iops-graph.png[450, scaledwidth="75%"]
{empty} +
{empty} +
[.left]
.Duration
image::iops-4kb-duration-graph.png[450, scaledwidth="75%"]
--

== Next section

Click the link below to go to the next section.

image::instance-throughput.png[link=../06-instance-throughput/, align="left",width=420]
--------------------------------------------------------------------------------
/06-instance-throughput/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/06-instance-throughput/.DS_Store
--------------------------------------------------------------------------------
/06-instance-throughput/readme.adoc:
--------------------------------------------------------------------------------
= Instance throughput
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

This section will demonstrate how different instance types impact the achievable throughput of an EFS file system.


== Duration

NOTE: It will take approximately 15 minutes to complete this section.


== Step-by-step Guide

=== Generate throughput using dd

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::throughput-dd.gif[align="left", width=600]

. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 0* - t2.micro instance.
+
TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 0*. *_Click_* the *Connect* button.
*_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*.
+
. *_Copy_*, *_paste_*, and *_run_* the following command in the browser-based SSH connection window to see how the Amazon EFS file system has been mounted.
+
[source,bash]
----
mount -t nfs4
----
+

. What is the mount point of the EFS file system?
* The output of the command should look similar to this:
+
[source,bash]
----
fs-01234abc.efs.us-east-1.amazonaws.com:/ on /efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.12,local_lock=none,addr=10.0.1.176,_netdev)
----
+
* Answer: /efs

. Determine how much throughput a t2.micro instance can achieve writing to an EFS file system. *_Run_* the following script in the same browser-based SSH connection window. The last command runs nload, a real-time network monitoring tool, to see incoming and outgoing network traffic.
+
[source,bash]
----
time dd if=/dev/zero of=/efs/dd/17G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=17408 conv=fsync &
nload -u M
----
+
. Once outgoing network traffic drops to zero, the dd command has finished and you can end *nload* by pressing *control+Z*.
. How much throughput did the instance generate at the start?
. What happened to the throughput over the 3-4 minutes?
. What throughput did the instance finally bottom out at?
. Why?
* t2 instances have variable network performance and are able to achieve high levels of burst network performance for a short period of time, followed by consistent baseline network performance.
. Close the browser-based SSH connection window of the *EFS Workshop Linux Instance 0* - t2.micro instance.

. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 1* - m4.large instance.
+
TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 1*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*.
+
. *_Copy_*, *_paste_*, and *_run_* the following command in the browser-based SSH connection window to see how the Amazon EFS file system has been mounted.
+
[source,bash]
----
mount -t nfs4
----
+

. What is the mount point of the EFS file system?
* The output of the command should look similar to this:
+
[source,bash]
----
fs-01234abc.efs.us-east-1.amazonaws.com:/ on /efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.12,local_lock=none,addr=10.0.1.176,_netdev)
----
+
* Answer: /efs

. Determine how much throughput an m4.large instance can achieve writing to an EFS file system. *_Run_* the following script in the same browser-based SSH connection window.
The last command runs nload, a real-time network monitoring tool, to see incoming and outgoing network traffic.
+
[source,bash]
----
time dd if=/dev/zero of=/efs/dd/17G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=17408 conv=fsync &
nload -u M
----
+
. Once outgoing network traffic drops to zero, the dd command has finished and you can end *nload* by pressing *control+Z*.
. How much throughput did the instance generate at the start?
. What happened to the throughput over the 3-4 minutes?
. Did you see any variance in the throughput for the duration of the test?
. Why did the t2.micro achieve much higher throughput than the m4.large at the start of the test?
* t2 instances have variable network performance while m4.large instances have consistent moderate network performance. Because EFS file systems are accessed over the network, selecting instance types with sufficient network performance helps determine how much throughput these instances can drive to an EFS file system.
. Close the browser-based SSH connection window of the *EFS Workshop Linux Instance 1*.


== Next section

Click the link below to go to the next section.

image::provisioned-throughput.png[link=../07-provisioned-throughput/, align="left",width=420]
--------------------------------------------------------------------------------
/07-provisioned-throughput/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/07-provisioned-throughput/.DS_Store
--------------------------------------------------------------------------------
/07-provisioned-throughput/readme.adoc:
--------------------------------------------------------------------------------
= Provisioned throughput
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

This section will demonstrate how to scale throughput in the provisioned throughput mode of an EFS file system.


== Duration

NOTE: It will take approximately 10 minutes to complete this section.


== Step-by-step Guide

=== Generate throughput

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::provisioned-throughput.gif[align="left", width=600]

. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2*.
+
TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*.
+
. Create test files. *_Run_* the following script in the same browser-based SSH connection window.
+
[source,bash]
----
module load mpi/openmpi-x86_64
sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches'
mpirun --npernode 32 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 64 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin
----
+
. Generate file system activity by continuously reading the test files for five minutes. *_Run_* the following script in the same browser-based SSH connection window. The last command runs nload, a real-time network monitoring tool, to see incoming and outgoing network traffic. After running the following script to start the read test, continue with the remaining steps in this section.
+
[source,bash]
----
module load mpi/openmpi-x86_64
sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches'
mpirun --npernode 32 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 64 -g -v -r -i 60 -u -F -k -D 0 -T=5 -o /efs/ior/ior.bin >> /tmp/mpi_ior.log 2>&1 &
nload -u M
----
+
. While the above script is running in the browser-based SSH connection window, return to the Amazon EC2 console and *_click_* *Services* in the top right area of the window.
. *_Context-click (right-click)_* *EFS*, which is under the *Storage* section, and *_click_* *Open link in new tab*.
. *_Click_* the new tab that was just opened to go to the *Amazon EFS console*.
. *_Select_* the radio button next to the only EFS file system.
. *_Click_* *Actions* >> *Manage throughput mode*.
. From the *Manage throughput mode* window, *_click_* the radio button next to *Provisioned* and *_enter_* *50* in the *Throughput (MiB/s)* text field. *_Click_* *Save*.
. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2*.
. Monitor the outgoing network traffic (throughput) to the EFS file system.
. How long does it take for the outgoing network traffic (throughput) to the EFS file system to drop to 50 MB/s?
. Repeat the previous steps and change the provisioned throughput from 50 MiB/s to 400 MiB/s.
. How long does it take for the outgoing network traffic (throughput) to the EFS file system to change?
+
NOTE: With provisioned throughput mode you can provision the desired throughput independent of the amount of data stored in the file system. Throughput modes can be changed (bursting to provisioned or provisioned to bursting) as long as it has been more than 24 hours since the last throughput mode change. When in provisioned throughput mode, the amount of throughput provisioned can be decreased as long as it has been more than 24 hours since the last throughput decrease or the last throughput mode change. The amount of throughput provisioned can be increased at any time.
+
. *_Press_* control+Z on the keyboard to exit nload.
. If you want to cancel the read activity before it completes in five minutes, *_run_* the following script in the same browser-based SSH connection window.
+
[source,bash]
----
sudo pkill -9 ior
----
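If you prefer the CLI, the same throughput mode changes can be made with the AWS CLI - a minimal sketch, using a placeholder file system ID in place of your own:

[source,bash]
----
# Switch the file system to provisioned throughput mode at 50 MiB/s
aws efs update-file-system \
  --file-system-id fs-0123456789abcdef0 \
  --throughput-mode provisioned \
  --provisioned-throughput-in-mibps 50

# Verify the change
aws efs describe-file-systems \
  --file-system-id fs-0123456789abcdef0 \
  --query 'FileSystems[0].[ThroughputMode,ProvisionedThroughputInMibps]'
----

== Next section

Click the link below to go to the next section.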
image::throughput-dd.png[link=../08-throughput-dd/, align="left",width=420]
--------------------------------------------------------------------------------
/08-throughput-dd/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/08-throughput-dd/.DS_Store
--------------------------------------------------------------------------------
/08-throughput-dd/readme.adoc:
--------------------------------------------------------------------------------
= Parallelism and throughput using dd
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

This section will demonstrate how increasing the number of threads accessing an EFS file system will significantly improve throughput.

== Duration

NOTE: It will take approximately 15 minutes to complete this section.


== Step-by-step Guide

=== Different IO sizes and sync frequencies

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::throughput-dd.gif[align="left", width=600]

. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2* - m5n.2xlarge instance.
+
TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*.
+


. *_Run_* this command to generate 2GB of data on the EBS volume using a 1 MB block size and issuing a sync once at the end to ensure everything is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/ebs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=2048 status=progress conv=fsync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 15.826s


. *_Run_* this command to generate 2GB of data on the EFS file system using a 1 MB block size and issuing a sync once at the end to ensure everything is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/efs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=2048 status=progress conv=fsync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 11.516s


. *_Run_* this command to generate 2GB of data on the EBS volume using a 16 MB block size and issuing a sync once at the end to ensure everything is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/ebs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=16M count=128 status=progress conv=fsync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 17.757s


. *_Run_* this command to generate 2GB of data on the EFS file system using a 16 MB block size and issuing a sync once at the end to ensure everything is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/efs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=16M count=128 status=progress conv=fsync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 11.611s


. *_Run_* this command to generate 2GB of data on the EBS volume using a 1 MB block size and issuing a sync after each block to ensure each block is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/ebs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=2048 status=progress oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 15.002s


. *_Run_* this command to generate 2GB of data on the EFS file system using a 1 MB block size and issuing a sync after each block to ensure each block is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/efs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=2048 status=progress oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 1m26.699s
. Why did this take so long?
* A sync operation that persists data and metadata to disk is issued after every 1MB block, so there are 2048 sync operations issued. The distributed data storage design of Amazon EFS introduces slightly higher latencies per file system operation, and because this dd command is a serial operation, it needs to wait for each 1MB block to persist to disk before starting to write the next 1MB block. This type of operation magnifies the higher latencies of Amazon EFS.


. *_Run_* this command to generate 2GB of data on the EBS volume using a 16 MB block size and issuing a sync after each block to ensure each block is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/ebs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=16M count=128 status=progress oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 15.002s


. *_Run_* this command to generate 2GB of data on the EFS file system using a 16 MB block size and issuing a sync after each block to ensure each block is written to disk.
+
[source,bash]
----
time dd if=/dev/zero of=/efs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=16M count=128 status=progress oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 30.574s
. Is there a significant duration difference between the commands writing to the EBS volume?
. Why not?
* The latency per file system operation is very low, so the number of sync operations doesn't make a significant difference.
. Why is there such a duration variance between the commands writing to the EFS file system?
* The distributed data storage design of Amazon EFS introduces slightly higher latencies per file system operation, and because this dd command is a serial operation, it needs to wait for each block to persist to disk before starting to write the next block. Fewer sync operations increase achievable throughput.


=== Different levels of parallelism

IMPORTANT: Read through all steps below and watch the quick video before continuing.

image::throughput-dd.gif[align="left", width=600]

. *_Run_* this command to generate 2GB of data on the EBS volume using 4 threads in parallel and a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.
+
[source,bash]
----
time seq 1 4 | parallel --will-cite -j 4 dd if=/dev/zero of=/ebs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{} bs=1M count=512 oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 15.083s

. *_Run_* this command to generate 2GB of data on the EFS file system using 4 threads in parallel and a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.
+
[source,bash]
----
time seq 1 4 | parallel --will-cite -j 4 dd if=/dev/zero of=/efs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{} bs=1M count=512 oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 0m23.292s
. Compare this to the results above when you wrote to the EFS file system using 1 thread and a 1 MB block size, issuing a sync after each block. Is there a big difference? Why?


. *_Run_* this command to generate 2GB of data on the EBS volume using 16 threads in parallel and a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.
+
[source,bash]
----
time seq 1 16 | parallel --will-cite -j 16 dd if=/dev/zero of=/ebs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{} bs=1M count=128 oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 15.093s

. *_Run_* this command to generate 2GB of data on the EFS file system using 16 threads in parallel and a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.
+
[source,bash]
----
time seq 1 16 | parallel --will-cite -j 16 dd if=/dev/zero of=/efs/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{} bs=1M count=128 oflag=sync
----
+
. Record these results somewhere (e.g. your favorite text editor).
. How long did it take? - 0m10.581s
. Compare this to the results above when you wrote to the EFS file system using 1 thread and a 1 MB block size, issuing a sync after each block. Is there a big difference? Why?
. Review the results of all the EBS tests. Was there a significant difference between any of them?
. Review the results of all the EFS tests. Why was there a significant difference between them?
. Were you able to achieve higher overall throughput writing to an EFS file system than to a local EBS volume?
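The throughput column in the table below is simply the amount of data written divided by the duration. A quick sketch of the arithmetic for the single-thread EBS run:

[source,bash]
----
# 2048 MB written in 15.002 seconds -> ~136.5 MB/s
echo "scale=1; 2048 / 15.002" | bc
----

* The following table and graphs show the sample results of these tests. Look how increasing the size of the IO (reducing sync operations) and increasing the number of threads (increasing parallelism) impact the throughput and duration.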
178 | * The following table and graphs show the sample results of these tests. Look how increasing the size of the IO (reducing sync operations) and increasing the number of threads (increasing parallelism) impacts the throughput and duration. 
179 | 
180 | + 
181 | 
182 | |============================================================================================== 
183 | | Storage | Threads | Data size (MB) | Block size (MB) | Duration (seconds) | Throughput (MB/s) 
184 | | EBS | 1 | 2048 | 1 | 15.002 | 136.5 
185 | | EFS | 1 | 2048 | 1 | 86.699 | 23.6 
186 | | EBS | 1 | 2048 | 16 | 15.002 | 136.5 
187 | | EFS | 1 | 2048 | 16 | 30.574 | 67.0 
188 | | EBS | 4 | 2048 | 1 | 15.083 | 135.8 
189 | | EFS | 4 | 2048 | 1 | 23.292 | 87.9 
190 | | EBS | 16 | 2048 | 1 | 15.093 | 135.7 
191 | | EFS | 16 | 2048 | 1 | 10.581 | 193.6 
192 | |============================================================================================== 
193 | 
194 | -- 
195 | {empty} + 
196 | {empty} + 
197 | [.left] 
198 | .Throughput 
199 | image::throughput-dd-throughput-graph.png[450, scaledwidth="75%"] 
200 | {empty} + 
201 | {empty} + 
202 | [.left] 
203 | .Duration 
204 | image::throughput-dd-duration-graph.png[450, scaledwidth="75%"] 
205 | -- 
206 | 
207 | 
208 | == Next section 
209 | 
210 | Click the link below to go to the next section. 
211 | 
212 | image::throughput-ior.png[link=../09-throughput-ior, align="left",width=420] 
213 | 
214 | 
215 | 
216 | 
217 | 
--------------------------------------------------------------------------------
/09-throughput-ior/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/09-throughput-ior/.DS_Store
--------------------------------------------------------------------------------
/09-throughput-ior/readme.adoc:
--------------------------------------------------------------------------------
1 | = Parallelism and throughput using IOR 
2 | :toc: 
3 | :icons: 
4 | :linkattrs: 
5 | :imagesdir: ../resources/images 
6 | 
7 | 
8 | == Summary 
9 | 
10 | This section will demonstrate how increasing the number of threads accessing an EFS file system will significantly improve throughput. 
11 | 
12 | == Duration 
13 | 
14 | NOTE: It will take approximately 15 minutes to complete this section. 
15 | 
16 | 
17 | == Step-by-step Guide 
18 | 
19 | === Parallelism 
20 | 
21 | IMPORTANT: Read through all steps below and watch the quick video before continuing. 
22 | 
23 | image::throughput-ior.gif[align="left", width=600] 
24 | 
25 | . Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2*. 
26 | + 
27 | TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*. 
28 | 
29 | 
30 | ==== IOR single shared file (SSF) write tests 
31 | 
32 | IOR is a file system benchmarking application commonly used to evaluate the performance of distributed and parallel file systems. You can read more about IOR here - link:http://wiki.lustre.org/IOR[http://wiki.lustre.org/IOR]. 
33 | 
34 | . *_Run_* the following IOR command to generate one 2 GiB file using one thread on an EFS file system. 
35 | + 36 | [source,bash] 37 | ---- 38 | module load mpi/openmpi-x86_64 39 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 40 | mpirun --npernode 1 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 2048 -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin 41 | 42 | ---- 43 | + 44 | * The results should look similar to this. 45 | + 46 | [source,bash] 47 | ---- 48 | IOR-3.3.0+dev: MPI Coordinated Test of Parallel I/O 49 | Began : Mon Jun 1 19:21:30 2020 50 | Command line : ior --posix.odirect -t 1m -b 1m -s 2048 -g -v -w -i 1 -D 0 -o /efs/ior/ior.bin 51 | Machine : Linux ip-10-0-0-11 52 | Start time skew across all tasks: 0.00 sec 53 | TestID : 0 54 | StartTime : Mon Jun 1 19:21:30 2020 55 | Path : /efs/ior 56 | FS : 8388608.0 TiB Used FS: 0.0% Inodes: 0.0 Mi Used Inodes: -nan% 57 | Participating tasks: 1 58 | 59 | Options: 60 | api : POSIX 61 | apiVersion : 62 | test filename : /efs/ior/ior.bin 63 | access : single-shared-file 64 | type : independent 65 | segments : 2048 66 | ordering in a file : sequential 67 | ordering inter file : no tasks offsets 68 | nodes : 1 69 | tasks : 1 70 | clients per node : 1 71 | repetitions : 1 72 | xfersize : 1 MiB 73 | blocksize : 1 MiB 74 | aggregate filesize : 2 GiB 75 | 76 | Results: 77 | 78 | access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter 79 | ------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ---- 80 | Commencing write performance test: Mon Jun 1 19:21:30 2020 81 | write 24.38 24.39 83.98 1024.00 1024.00 0.004828 83.98 0.000897 83.99 0 82 | Max Write: 24.38 MiB/sec (25.57 MB/sec) 83 | 84 | Summary of all tests: 85 | Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum 86 | write 24.38 24.38 24.38 0.00 24.38 24.38 24.38 0.00 83.99048 NA NA 0 1 1 1 0 0 1 0 0 2048 1048576 1048576 2048.0 POSIX 0 87 | Finished : Mon Jun 1 19:22:54 2020 88 | ---- 89 | + 90 | * Record the results somewhere (e.g. your favorite text editor). 91 | * How long did it take (total seconds)? 92 | * What was the bandwidth or throughput (MB/s)? 93 | + 94 | . *_Run_* the following IOR command to generate one 2 GiB file using two threads on an EFS file system. 95 | + 96 | [source,bash] 97 | ---- 98 | module load mpi/openmpi-x86_64 99 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 100 | mpirun --npernode 2 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 1024 -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin 101 | 102 | ---- 103 | + 104 | * Record the results somewhere (e.g. your favorite text editor). 105 | * How long did it take (total seconds)? 106 | * What was the bandwidth or throughput (MB/s)? 107 | + 108 | . *_Run_* the following IOR command to generate one 2 GiB file using four threads on an EFS file system. 109 | + 110 | [source,bash] 111 | ---- 112 | module load mpi/openmpi-x86_64 113 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 114 | mpirun --npernode 4 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 512 -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin 115 | 116 | ---- 117 | + 118 | * Record the results somewhere (e.g. your favorite text editor). 119 | * How long did it take (total seconds)? 120 | * What was the bandwidth or throughput (MB/s)? 121 | + 122 | . *_Run_* the following IOR command to generate one 2 GiB file using eight threads on an EFS file system. 
123 | + 
124 | [source,bash] 
125 | ---- 
126 | module load mpi/openmpi-x86_64 
127 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
128 | mpirun --npernode 8 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 256 -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin 
129 | 
130 | ---- 
131 | + 
132 | * Record the results somewhere (e.g. your favorite text editor). 
133 | * How long did it take (total seconds)? 
134 | * What was the bandwidth or throughput (MB/s)? 
135 | + 
136 | . *_Run_* the following IOR command to generate one 2 GiB file using sixteen threads on an EFS file system. 
137 | + 
138 | [source,bash] 
139 | ---- 
140 | module load mpi/openmpi-x86_64 
141 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
142 | mpirun --npernode 16 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 128 -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin 
143 | 
144 | ---- 
145 | + 
146 | * Record the results somewhere (e.g. your favorite text editor). 
147 | * How long did it take (total seconds)? 
148 | * What was the bandwidth or throughput (MB/s)? 
149 | + 
150 | . *_Run_* the following IOR command to generate one 2 GiB file using thirty-two threads on an EFS file system. 
151 | + 
152 | [source,bash] 
153 | ---- 
154 | module load mpi/openmpi-x86_64 
155 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
156 | mpirun --npernode 32 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 64 -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin 
157 | 
158 | ---- 
159 | + 
160 | * Record the results somewhere (e.g. your favorite text editor). 
161 | * How long did it take (total seconds)? 
162 | * What was the bandwidth or throughput (MB/s)? 
163 | + 
164 | . *_Run_* the following IOR command to generate one 2 GiB file using sixty-four threads on an EFS file system. 
165 | + 
166 | [source,bash] 
167 | ---- 
168 | module load mpi/openmpi-x86_64 
169 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
170 | mpirun --npernode 64 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 32 -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin 
171 | 
172 | ---- 
173 | 
* Record the results somewhere (e.g. your favorite text editor). 
* How long did it take (total seconds)? 
* What was the bandwidth or throughput (MB/s)? 
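The write tests above vary only the thread count (--npernode) and the segment count (-s); halving -s as the threads double keeps the aggregate file size at 2 GiB. The whole sweep can also be driven from a single loop. This is a sketch using the same flags as the individual steps:

[source,bash]
----
# SSF write sweep: double the threads and halve the segment count so
# every run writes the same 2 GiB aggregate file to /efs/ior/ior.bin.
module load mpi/openmpi-x86_64
for np in 1 2 4 8 16 32 64; do
  sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches'
  mpirun --npernode ${np} --oversubscribe ior --posix.odirect \
    -t 1m -b 1m -s $((2048 / np)) -g -v -w -i 1 -k -D 0 -o /efs/ior/ior.bin
done
----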
174 | ==== IOR single shared file (SSF) read tests 
175 | 
176 | 
177 | . *_Run_* the following IOR command to read one 2 GiB file using one thread. 
178 | + 
179 | [source,bash] 
180 | ---- 
181 | module load mpi/openmpi-x86_64 
182 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
183 | mpirun --npernode 1 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 2048 -g -v -r -i 1 -k -D 0 -o /efs/ior/ior.bin 
184 | 
185 | ---- 
186 | + 
187 | * Record the results somewhere (e.g. your favorite text editor). 
188 | * How long did it take (total seconds)? 
189 | * What was the bandwidth or throughput (MB/s)? 
190 | + 
191 | . *_Run_* the following IOR command to read one 2 GiB file using two threads. 
192 | + 
193 | [source,bash] 
194 | ---- 
195 | module load mpi/openmpi-x86_64 
196 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
197 | mpirun --npernode 2 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 1024 -g -v -r -i 1 -k -D 0 -o /efs/ior/ior.bin 
198 | 
199 | ---- 
200 | + 
201 | * Record the results somewhere (e.g. your favorite text editor). 
202 | * How long did it take (total seconds)? 
203 | * What was the bandwidth or throughput (MB/s)? 
204 | + 
205 | . *_Run_* the following IOR command to read one 2 GiB file using four threads. 
206 | + 
207 | [source,bash] 
208 | ---- 
209 | module load mpi/openmpi-x86_64 
210 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
211 | mpirun --npernode 4 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 512 -g -v -r -i 1 -k -D 0 -o /efs/ior/ior.bin 
212 | 
213 | ---- 
214 | + 
215 | * Record the results somewhere (e.g. your favorite text editor). 
216 | * How long did it take (total seconds)? 
217 | * What was the bandwidth or throughput (MB/s)? 
218 | + 
219 | . *_Run_* the following IOR command to read one 2 GiB file using eight threads. 
220 | + 
221 | [source,bash] 
222 | ---- 
223 | module load mpi/openmpi-x86_64 
224 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
225 | mpirun --npernode 8 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 256 -g -v -r -i 1 -k -D 0 -o /efs/ior/ior.bin 
226 | 
227 | ---- 
228 | + 
229 | * Record the results somewhere (e.g. your favorite text editor). 
230 | * How long did it take (total seconds)? 
231 | * What was the bandwidth or throughput (MB/s)? 
232 | + 
233 | . *_Run_* the following IOR command to read one 2 GiB file using sixteen threads. 
234 | + 
235 | [source,bash] 
236 | ---- 
237 | module load mpi/openmpi-x86_64 
238 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
239 | mpirun --npernode 16 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 128 -g -v -r -i 1 -k -D 0 -o /efs/ior/ior.bin 
240 | 
241 | ---- 
242 | + 
243 | * Record the results somewhere (e.g. your favorite text editor). 
244 | * How long did it take (total seconds)? 
245 | * What was the bandwidth or throughput (MB/s)? 
246 | + 
247 | . *_Run_* the following IOR command to read one 2 GiB file using thirty-two threads. 
248 | + 
249 | [source,bash] 
250 | ---- 
251 | module load mpi/openmpi-x86_64 
252 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
253 | mpirun --npernode 32 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 64 -g -v -r -i 1 -k -D 0 -o /efs/ior/ior.bin 
254 | 
255 | ---- 
256 | + 
257 | * Record the results somewhere (e.g. your favorite text editor). 
258 | * How long did it take (total seconds)? 
259 | * What was the bandwidth or throughput (MB/s)? 
260 | + 
261 | . *_Run_* the following IOR command to read one 2 GiB file using sixty-four threads. 
262 | + 
263 | [source,bash] 
264 | ---- 
265 | module load mpi/openmpi-x86_64 
266 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
267 | mpirun --npernode 64 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 32 -g -v -r -i 1 -D 0 -o /efs/ior/ior.bin 
268 | 
269 | ---- 
270 | * Record the results somewhere (e.g. your favorite text editor). 
271 | * How long did it take (total seconds)? 
272 | * What was the bandwidth or throughput (MB/s)? 
273 | 
274 | 
275 | ==== IOR file per process (FPP) write tests 
276 | 
277 | . *_Run_* the following IOR command to generate 2 GiBs of data with one file per thread per directory using one thread (e.g. one file one directory). Notice the two new flags: *-u* (uniqueDir - use a unique directory name for each file-per-process) and *-F* (filePerProc - one file per process). 
278 | + 
279 | [source,bash] 
280 | ---- 
281 | module load mpi/openmpi-x86_64 
282 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
283 | mpirun --npernode 1 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 2048 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin 
284 | 
285 | ---- 
286 | + 
287 | * Record the results somewhere (e.g. your favorite text editor). 
288 | * How long did it take (total seconds)? 
289 | * What was the bandwidth or throughput (MB/s)? 
290 | + 
291 | . *_Run_* the following IOR command to generate 2 GiBs of data with one file per thread per directory using two threads (e.g. two files two directories). 
292 | + 293 | [source,bash] 294 | ---- 295 | module load mpi/openmpi-x86_64 296 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 297 | mpirun --npernode 2 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 1024 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin 298 | 299 | ---- 300 | + 301 | * Record the results somewhere (e.g. your favorite text editor). 302 | * How long did it take (total seconds)? 303 | * What was the bandwidth or throughput (MB/s)? 304 | + 305 | . *_Run_* the following IOR command to generate 2 GiBs of data with one file per thread per directory using four threads (e.g. four files four directories). 306 | + 307 | [source,bash] 308 | ---- 309 | module load mpi/openmpi-x86_64 310 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 311 | mpirun --npernode 4 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 512 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin 312 | 313 | ---- 314 | + 315 | * Record the results somewhere (e.g. your favorite text editor). 316 | * How long did it take (total seconds)? 317 | * What was the bandwidth or throughput (MB/s)? 318 | + 319 | . *_Run_* the following IOR command to generate 2 GiBs of data with one file per thread per directory using eight threads (e.g. eight files eight directories). 320 | + 321 | [source,bash] 322 | ---- 323 | module load mpi/openmpi-x86_64 324 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 325 | mpirun --npernode 8 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 256 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin 326 | 327 | ---- 328 | + 329 | * Record the results somewhere (e.g. your favorite text editor). 330 | * How long did it take (total seconds)? 331 | * What was the bandwidth or throughput (MB/s)? 332 | + 333 | . *_Run_* the following IOR command to generate 2 GiBs of data with one file per thread per directory using sixteen threads (e.g. sixteen files sixteen directories). 334 | + 335 | [source,bash] 336 | ---- 337 | module load mpi/openmpi-x86_64 338 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 339 | mpirun --npernode 16 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 128 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin 340 | 341 | ---- 342 | + 343 | * Record the results somewhere (e.g. your favorite text editor). 344 | * How long did it take (total seconds)? 345 | * What was the bandwidth or throughput (MB/s)? 346 | + 347 | . *_Run_* the following IOR command to generate 2 GiBs of data with one file per thread per directory using thirty-two threads (e.g. thirty-two files thirty-two directories). 348 | + 349 | [source,bash] 350 | ---- 351 | module load mpi/openmpi-x86_64 352 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 353 | mpirun --npernode 32 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 64 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin 354 | 355 | ---- 356 | + 357 | * Record the results somewhere (e.g. your favorite text editor). 358 | * How long did it take (total seconds)? 359 | * What was the bandwidth or throughput (MB/s)? 360 | + 361 | . *_Run_* the following IOR command to generate 2 GiBs of data with one file per thread per directory using sixty-four threads (e.g. sixty-four files sixty-four directories). 362 | + 363 | [source,bash] 364 | ---- 365 | module load mpi/openmpi-x86_64 366 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 367 | mpirun --npernode 64 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 32 -g -v -w -i 1 -u -F -k -D 0 -o /efs/ior/ior.bin 368 | 369 | ---- 370 | + 371 | * Record the results somewhere (e.g. your favorite text editor). 
372 | * How long did it take (total seconds)? 373 | * What was the bandwidth or throughput (MB/s)? 374 | 375 | 376 | ==== IOR file per process (FPP) read tests 377 | 378 | . *_Run_* the following IOR command to read 2 GiBs of data from the previous write test with one file per thread per directory using one thread (e.g. one file one directory). 379 | + 380 | [source,bash] 381 | ---- 382 | module load mpi/openmpi-x86_64 383 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 384 | mpirun --npernode 1 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 2048 -g -v -r -i 1 -u -F -D 0 -o /efs/ior/ior.bin 385 | 386 | ---- 387 | + 388 | * Record the results somewhere (e.g. your favorite text editor). 389 | * How long did it take (total seconds)? 390 | * What was the bandwidth or throughput (MB/s)? 391 | + 392 | . *_Run_* the following IOR command to read 2 GiBs of data from the previous write test with one file per thread per directory using two threads (e.g. two files two directories) on an EFS file system. 393 | + 394 | [source,bash] 395 | ---- 396 | module load mpi/openmpi-x86_64 397 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 398 | mpirun --npernode 2 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 1024 -g -v -r -i 1 -u -F -D 0 -o /efs/ior/ior.bin 399 | 400 | ---- 401 | + 402 | * Record the results somewhere (e.g. your favorite text editor). 403 | * How long did it take (total seconds)? 404 | * What was the bandwidth or throughput (MB/s)? 405 | + 406 | . *_Run_* the following IOR command to read 2 GiBs of data from the previous write test with one file per thread per directory using four threads (e.g. four files four directories) on an EFS file system. 407 | + 408 | [source,bash] 409 | ---- 410 | module load mpi/openmpi-x86_64 411 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 412 | mpirun --npernode 4 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 512 -g -v -r -i 1 -u -F -D 0 -o /efs/ior/ior.bin 413 | 414 | ---- 415 | + 416 | * Record the results somewhere (e.g. your favorite text editor). 417 | * How long did it take (total seconds)? 418 | * What was the bandwidth or throughput (MB/s)? 419 | + 420 | . *_Run_* the following IOR command to read 2 GiBs of data from the previous write test with one file per thread per directory using eight threads (e.g. eight files eight directories) on an EFS file system. 421 | + 422 | [source,bash] 423 | ---- 424 | module load mpi/openmpi-x86_64 425 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 426 | mpirun --npernode 8 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 256 -g -v -r -i 1 -u -F -D 0 -o /efs/ior/ior.bin 427 | 428 | ---- 429 | + 430 | * Record the results somewhere (e.g. your favorite text editor). 431 | * How long did it take (total seconds)? 432 | * What was the bandwidth or throughput (MB/s)? 433 | + 434 | . *_Run_* the following IOR command to read 2 GiBs of data from the previous write test with one file per thread per directory using sixteen threads (e.g. sixteen files sixteen directories) on an EFS file system. 435 | + 436 | [source,bash] 437 | ---- 438 | module load mpi/openmpi-x86_64 439 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 440 | mpirun --npernode 16 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 128 -g -v -r -i 1 -u -F -D 0 -o /efs/ior/ior.bin 441 | 442 | ---- 443 | + 444 | * Record the results somewhere (e.g. your favorite text editor). 445 | * How long did it take (total seconds)? 446 | * What was the bandwidth or throughput (MB/s)? 447 | + 448 | . 
*_Run_* the following IOR command to read 2 GiBs of data from the previous write test with one file per thread per directory using thirty-two threads (e.g. thirty-two files thirty-two directories) on an EFS file system. 449 | + 450 | [source,bash] 451 | ---- 452 | module load mpi/openmpi-x86_64 453 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 454 | mpirun --npernode 32 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 64 -g -v -r -i 1 -u -F -D 0 -o /efs/ior/ior.bin 455 | 456 | ---- 457 | + 458 | * Record the results somewhere (e.g. your favorite text editor). 459 | * How long did it take (total seconds)? 460 | * What was the bandwidth or throughput (MB/s)? 461 | + 462 | . *_Run_* the following IOR command to read 2 GiBs of data from the previous write test with one file per thread per directory using sixty-four threads (e.g. sixty-four files sixty-four directories) on an EFS file system. 463 | + 464 | [source,bash] 465 | ---- 466 | module load mpi/openmpi-x86_64 467 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 468 | mpirun --npernode 64 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 32 -g -v -r -i 1 -u -F -D 0 -o /efs/ior/ior.bin 469 | 470 | ---- 471 | + 472 | * Record the results somewhere (e.g. your favorite text editor). 473 | * How long did it take (total seconds)? 474 | * What was the bandwidth or throughput (MB/s)? 475 | 476 | . Compare the results from the tests above. Is there a big difference? Why? 477 | 478 | === Test results 479 | 480 | The following table and graphs show the sample results of the IOR 2 GiB single shared file (SSF) tests. Look how increasing the number of threads (increasing parallelism) impacts the throughput and duration. 481 | 482 | 483 | |========================================================================= 484 | | Operation | Threads | File count| Throughput (MB/s) | Duration (seconds) 485 | | Write | 1 | 1 | 25.34 | 84.74 486 | | Write | 2 | 1 | 34.93 | 61.48 487 | | Write | 4 | 1 | 87.78 | 24.46 488 | | Write | 8 | 1 | 150.79 | 14.24 489 | | Write | 16 | 1 | 198.36 | 10.83 490 | | Write | 32 | 1 | 208.32 | 10.31 491 | | Write | 64 | 1 | 221.82 | 9.68 492 | | Read | 1 | 1 | 67.92 | 31.62 493 | | Read | 2 | 1 | 104.04 | 20.64 494 | | Read | 4 | 1 | 193.34 | 11.11 495 | | Read | 8 | 1 | 402.23 | 5.34 496 | | Read | 16 | 1 | 421.85 | 5.09 497 | | Read | 32 | 1 | 422.93 | 5.08 498 | | Read | 64 | 1 | 420.38 | 5.11 499 | |========================================================================= 500 | 501 | -- 502 | {empty} + 503 | {empty} + 504 | [.left] 505 | .Single Shared File Throughput 506 | image::ior-ssf-throughput.png[align="left"] 507 | {empty} + 508 | {empty} + 509 | [.left] 510 | .Single Shared File Duration 511 | image::ior-ssf-duration.png[align="left"] 512 | -- 513 | 514 | The following table and graphs show the sample results of the IOR 2 GiB file per process (FPP) tests. Look how increasing the number of threads (increasing parallelism) impacts the throughput and duration. 
515 | 
516 | 
517 | |========================================================================== 
518 | | Operation | Threads | File count | Throughput (MB/s) | Duration (seconds) 
519 | | Write | 1 | 1 | 25.36 | 84.69 
520 | | Write | 2 | 2 | 50.35 | 42.65 
521 | | Write | 4 | 4 | 97.37 | 22.05 
522 | | Write | 8 | 8 | 175.41 | 12.24 
523 | | Write | 16 | 16 | 263.02 | 8.16 
524 | | Write | 32 | 32 | 279.16 | 7.69 
525 | | Write | 64 | 64 | 281.12 | 7.64 
526 | | Read | 1 | 1 | 62.01 | 34.63 
527 | | Read | 2 | 2 | 126.09 | 17.03 
528 | | Read | 4 | 4 | 239.82 | 8.95 
529 | | Read | 8 | 8 | 418.44 | 5.13 
530 | | Read | 16 | 16 | 415.94 | 5.16 
531 | | Read | 32 | 32 | 415.96 | 5.16 
532 | | Read | 64 | 64 | 412.06 | 5.21 
533 | |========================================================================== 
534 | 
535 | 
536 | -- 
537 | {empty} + 
538 | {empty} + 
539 | [.left] 
540 | .File Per Process Throughput 
541 | image::ior-fpp-throughput.png[align="left"] 
542 | {empty} + 
543 | {empty} + 
544 | [.left] 
545 | .File Per Process Duration 
546 | image::ior-fpp-duration.png[align="left"] 
547 | -- 
548 | 
549 | 
550 | == Next section 
551 | 
552 | Click the link below to go to the next section. 
553 | 
554 | image::transfer-tools.png[link=../10-transfer-tools, align="left",width=420] 
555 | 
556 | 
--------------------------------------------------------------------------------
/10-transfer-tools/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/10-transfer-tools/.DS_Store
--------------------------------------------------------------------------------
/10-transfer-tools/readme.adoc:
--------------------------------------------------------------------------------
1 | = Transfer tools 
2 | :toc: 
3 | :icons: 
4 | :linkattrs: 
5 | :imagesdir: ../resources/images 
6 | 
7 | 
8 | == Summary 
9 | 
10 | This section will compare and demonstrate how different file transfer tools affect performance when accessing an EFS file system. 
11 | 
12 | 
13 | == Duration 
14 | 
15 | NOTE: It will take approximately 15 minutes to complete this section. 
16 | 
17 | 
18 | == Step-by-step Guide 
19 | 
20 | === Transfer tools 
21 | 
22 | IMPORTANT: Read through all steps below and watch the quick video before continuing. 
23 | 
24 | image::transfer-tools.gif[align="left", width=600] 
25 | 
26 | . Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2* instance. 
27 | + 
28 | TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*. 
29 | + 
30 | . *_Run_* the following command in the browser-based SSH connection window to generate new sample data of five thousand 1 MB files totaling approx. 5 GB of data. This will be stored on the attached EBS GP2 volume. 
31 | * This section will use different transfer tools to copy this sample data from EBS to EFS. 
32 | + 
33 | [source,bash] 
34 | ---- 
35 | sudo rm /ebs/data-1m/* -r 
36 | sudo rm /ebs/smallfile/* -r 
37 | 
38 | sudo python /home/ec2-user/smallfile/smallfile_cli.py --operation create --threads 1 --file-size 1024 --files 5000 --same-dir Y --dirs-per-dir 1024 --hash-into-dirs Y --files-per-dir 10240 --pause 500 --top /ebs/smallfile 
39 | 
40 | cp -R /ebs/smallfile/file_srcdir/ /ebs/data-1m/ 
41 | 
42 | ---- 
43 | + 
44 | . Run this command to validate the total size and count of files to be copied. 
45 | + 
46 | [source,bash] 
47 | ---- 
48 | tree --du -h /ebs/data-1m 
49 | 
50 | ---- 
51 | + 
52 | * The output should look similar to this: 
53 | + 
54 | [source,bash] 
55 | ---- 
56 | ├── [1.0M]  _ip-10-0-0-11_00_994_ 
57 | ├── [1.0M]  _ip-10-0-0-11_00_995_ 
58 | ├── [1.0M]  _ip-10-0-0-11_00_996_ 
59 | ├── [1.0M]  _ip-10-0-0-11_00_997_ 
60 | ├── [1.0M]  _ip-10-0-0-11_00_998_ 
61 | └── [1.0M]  _ip-10-0-0-11_00_999_ 
62 | 
63 | 4.9G used in 1 directory, 5000 files 
64 | ---- 
65 | 
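Every tool section below repeats the same measurement pattern: drop the page cache, time the copy, then divide 5000 MB by the elapsed seconds. If you would rather not do the division by hand, a helper along these lines works. It is a sketch only; the bench function name is illustrative, not part of the workshop scripts:

[source,bash]
----
# Drop caches, time an arbitrary copy command, and report the average
# throughput for the 5000 MB sample dataset generated above.
# Usage: bench rsync -r /ebs/data-1m/ /efs/rsync/test
bench () {
  sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches'
  local start=$(date +%s.%N)
  "$@"
  local end=$(date +%s.%N)
  awk -v s="${start}" -v e="${end}" \
    'BEGIN { d = e - s; printf "duration: %.2fs, throughput: %.2f MB/s\n", d, 5000 / d }'
}
----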
66 | 
67 | === Rsync 
68 | 
69 | rsync is a fast, versatile, remote (and local) file-copying tool. Learn more about rsync - link:https://linux.die.net/man/1/rsync[https://linux.die.net/man/1/rsync]. 
70 | 
71 | . *_Run_* the following rsync command in the browser-based SSH connection window to see how long it takes to copy the sample dataset from EBS to EFS. Caches are dropped so all reads come from the EBS volume and not memory. 
72 | + 
73 | [source,bash] 
74 | ---- 
75 | instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) 
76 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
77 | time rsync -r /ebs/data-1m/ /efs/rsync/${instance_id} 
78 | 
79 | ---- 
80 | + 
81 | * How long did it take for the copy? 
82 | * The output of the script should look similar to this: 
83 | + 
84 | [source,bash] 
85 | ---- 
86 | real	5m59.205s 
87 | user	0m19.346s 
88 | sys	0m6.019s 
89 | ---- 
90 | + 
91 | * Calculate the average throughput achieved. Divide 5000 MB by the number of seconds it took for the copy, e.g. 5000(MB)÷359(seconds)=13.93MB/s. 
92 | * Why is rsync so slow? 
93 | ** rsync is a single-threaded copy tool that is very chatty over the network. These two attributes don't make rsync a good tool to use to copy data to and from an EFS file system because it doesn't take advantage of the distributed data storage design of EFS. 
94 | . Run this command to validate the total size and count of the files copied. 
95 | + 
96 | [source,bash] 
97 | ---- 
98 | tree --du -h /efs/rsync/${instance_id} 
99 | 
100 | ---- 
101 | 
102 | === Copy (cp) 
103 | 
104 | . *_Run_* the following cp command to see how long it takes to copy the sample dataset from EBS to EFS. Caches are dropped so all reads come from the EBS volume and not memory. 
105 | + 
106 | [source,bash] 
107 | ---- 
108 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
109 | time cp -r /ebs/data-1m/* /efs/cp/${instance_id} 
110 | 
111 | ---- 
112 | + 
113 | * How long did it take for the copy? 
114 | * The output of the script should look similar to this: 
115 | + 
116 | [source,bash] 
117 | ---- 
118 | real	4m34.786s 
119 | user	0m0.048s 
120 | sys	0m4.584s 
121 | ---- 
122 | + 
123 | * Calculate the average throughput achieved. Divide 5000 MB by the number of seconds it took for the copy, e.g. 5000(MB)÷274(seconds)=18.25MB/s. 
124 | * Why is cp slow, but still faster than rsync? 
125 | * cp is also a single-threaded copy tool, but it isn't as chatty over the network as rsync, so throughput is higher. 
126 | . *_Run_* the following command to validate the total size and count of the files copied. 
127 | + 
128 | [source,bash] 
129 | ---- 
130 | tree --du -h /efs/cp/${instance_id} 
131 | 
132 | ---- 
133 | 
134 | 
135 | === fpsync 
136 | 
137 | fpsync, a tool included with fpart, is a powerful shell script that wraps fpart and rsync to launch multiple transfer jobs in parallel. Learn more about fpsync - link:https://github.com/martymac/fpart#fpsync-[https://github.com/martymac/fpart#fpsync-]. 
138 | 
139 | Copyright (c) 2011-2020 Ganael LAPLANCHE 
140 | 
141 | . *_Run_* the following fpsync command to see how long it takes to copy the sample dataset from EBS to EFS. Caches are dropped so all reads come from the EBS volume and not memory. 
142 | * The first command sets the $threads variable to 4 threads per virtual cpu (vcpu). This will be used by the multi-threaded transfer tools. 
143 | + 
144 | [source,bash] 
145 | ---- 
146 | threads=$(($(nproc --all) * 4)) 
147 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
148 | time fpsync -n ${threads} -v /ebs/data-1m/ /efs/fpsync/${instance_id} 
149 | 
150 | ---- 
151 | + 
152 | * How long did it take for the copy? 
153 | * The output of the script should look similar to this: 
154 | + 
155 | [source,bash] 
156 | ---- 
157 | 1591078644 ===> Job name: 1591078644-21223 
158 | 1591078645 ===> Analyzing filesystem... 
159 | 1591078646 ===> Waiting for sync jobs to complete... 
160 | 1591078808 <=== Parts done: 3/3 (100%), remaining: 0 
161 | 1591078808 <=== Time elapsed: 163s, remaining: ~0s (~54s/job) 
162 | 1591078808 <=== Fpsync completed without error. 
163 | 
164 | real	2m43.147s 
165 | user	0m20.845s 
166 | sys	0m7.651s 
167 | ---- 
168 | + 
169 | * Calculate the average throughput achieved. Divide 5000 MB by the number of seconds it took for the copy, e.g. 5000(MB)÷163(seconds)=30.67MB/s. 
170 | . *_Run_* the following command to validate the total size and count of the files copied. 
171 | + 
172 | [source,bash] 
173 | ---- 
174 | tree --du -h /efs/fpsync/${instance_id} 
175 | 
176 | ---- 
177 | 
178 | 
179 | === GNU Parallel + cp 
180 | 
181 | GNU Parallel is an amazing tool that parallelizes single-threaded commands. Learn more about GNU Parallel - link:https://www.gnu.org/software/parallel/[https://www.gnu.org/software/parallel/]. O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014. 
182 | 
183 | . *_Run_* the following cp + GNU parallel command to see how long it takes to copy the sample dataset from EBS to EFS. Caches are dropped so all reads come from the EBS volume and not memory. 
184 | + 
185 | [source,bash] 
186 | ---- 
187 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
188 | time find /ebs/data-1m/. -type f | parallel --will-cite -j ${threads} cp {} /efs/parallelcp/${instance_id} 
189 | 
190 | ---- 
191 | + 
192 | * How long did it take for the copy? 
193 | * The output of the script should look similar to this: 
194 | + 
195 | [source,bash] 
196 | ---- 
197 | real	0m38.320s 
198 | user	0m16.115s 
199 | sys	0m19.323s 
200 | ---- 
201 | + 
202 | * Calculate the average throughput achieved. Divide 5000 MB by the number of seconds it took for the copy, e.g. 5000(MB)÷38(seconds)=131.58MB/s. 
203 | . *_Run_* the following command to validate the total size and count of the files copied. 
204 | + 
205 | [source,bash] 
206 | ---- 
207 | tree --du -h /efs/parallelcp/${instance_id} 
208 | 
209 | ---- 
210 | 
211 | 
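In the next test, fpart's only job is to produce lists of files that the parallel workers split between themselves. To see what it emits, you can run it standalone first. A sketch; the /tmp/parts output prefix is just an example:

[source,bash]
----
# Split the sample tree into 4 partitions of roughly equal size.
# fpart writes one file list per partition: /tmp/parts.0 .. /tmp/parts.3.
fpart -n 4 -o /tmp/parts /ebs/data-1m
wc -l /tmp/parts.*
head -3 /tmp/parts.0
----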
212 | === Fpart + GNU Parallel + cpio 
213 | 
214 | Fpart is a tool that helps sort file trees and pack them into pages or partitions. Learn more about fpart - link:https://github.com/martymac/fpart/[https://github.com/martymac/fpart]. Copyright (c) 2011-2020 Ganael LAPLANCHE 
215 | 
216 | GNU Parallel is an amazing tool that parallelizes single-threaded commands. Learn more about GNU Parallel - link:https://www.gnu.org/software/parallel/[https://www.gnu.org/software/parallel/]. O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014. 
217 | 
218 | . *_Run_* the following fpart + GNU Parallel + cpio command to see how long it takes to copy the sample dataset from EBS to EFS. Caches are dropped so all reads come from the EBS volume and not memory. 
219 | + 
220 | [source,bash] 
221 | ---- 
222 | cd /ebs/data-1m/ 
223 | time fpart -z -n 1 -o /home/ec2-user/fpart-files-to-transfer . 
224 | time parallel --will-cite -j ${threads} --pipepart --round-robin --delay .1 --block 1M -a /home/ec2-user/fpart-files-to-transfer.0 sudo "cpio -dpmL /efs/parallelcpio/${instance_id}" 
225 | 
226 | ---- 
227 | + 
228 | * How long did it take for the copy? 
229 | * The output of the script should look similar to this: 
230 | + 
231 | [source,bash] 
232 | ---- 
233 | 319488 blocks 
234 | 319488 blocks 
235 | 319488 blocks 
236 | 319488 blocks 
237 | 319488 blocks 
238 | 
239 | real	0m33.113s 
240 | user	0m7.065s 
241 | sys	0m14.830s 
242 | ---- 
243 | + 
244 | * Calculate the average throughput achieved. Divide 5000 MB by the number of seconds it took for the copy, e.g. 5000(MB)÷33(seconds)=151.52MB/s. 
245 | . *_Run_* the following command to validate the total size and count of the files copied. 
246 | + 
247 | [source,bash] 
248 | ---- 
249 | tree --du -h /efs/parallelcpio/${instance_id} 
250 | 
251 | ---- 
252 | + 
253 | . Compare the results from the tests above. Is there a big difference? Why? 
254 | 
255 | === Test results 
256 | 
257 | The following table and graph show the sample results of using different transfer tools to copy 5000 1MB files (5000 MB total size) from an EBS volume to an EFS file system. Look how choosing the most efficient transfer tool impacts duration and throughput. 
258 | 
259 | 
260 | |=========================================================================================== 
261 | | Tool | Data size (MB) | File count | Duration (seconds) | Throughput (MB/s) 
262 | | rsync | 5000 | 5000 | 359.21 | 13.93 
263 | | cp | 5000 | 5000 | 274.79 | 18.25 
264 | | fpsync | 5000 | 5000 | 163.14 | 30.67 
265 | | parallel+cp | 5000 | 5000 | 38.32 | 131.58 
266 | | fpart+parallel+cpio | 5000 | 5000 | 33.11 | 151.52 
267 | |=========================================================================================== 
268 | 
269 | 
270 | -- 
271 | {empty} + 
272 | {empty} + 
273 | [.left] 
274 | .Transfer tools 
275 | image::transfer-tool-graph.png[align="left"] 
276 | -- 
277 | 
278 | == Next section 
279 | 
280 | Click the link below to go to the next section. 
281 | 
282 | image::monitor-performance.png[link=../11-monitor-performance, align="left",width=420] 
283 | 
284 | 
285 | 
--------------------------------------------------------------------------------
/11-monitor-performance/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/11-monitor-performance/.DS_Store
--------------------------------------------------------------------------------
/11-monitor-performance/readme.adoc:
--------------------------------------------------------------------------------
1 | = Monitor performance 
2 | :toc: 
3 | :icons: 
4 | :linkattrs: 
5 | :imagesdir: ../resources/images 
6 | 
7 | 
8 | == Summary 
9 | 
10 | This section will demonstrate how to monitor the performance of an Amazon EFS file system using Amazon CloudWatch. 
11 | 
12 | 
13 | == Duration 
14 | 
15 | NOTE: It will take approximately 10 minutes to complete this section. 
16 | 
17 | 
18 | == Step-by-step Guide 
19 | 
20 | === Monitor performance 
21 | 
22 | IMPORTANT: Read through all steps below and watch the quick video before continuing. 
23 | 
24 | image::monitor-performance.gif[align="left", width=600] 
25 | 
26 | . Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 2*. 
27 | + 
28 | TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 2*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*. 
29 | + 
30 | . *_Run_* the following IOR command to generate throughput against the EFS file system. 
31 | + 
32 | [source,bash] 
33 | ---- 
34 | module load mpi/openmpi-x86_64 
35 | sudo bash -c 'echo 3 > /proc/sys/vm/drop_caches' 
36 | mpirun --npernode 64 --oversubscribe ior --posix.odirect -t 1m -b 1m -s 1024 -g -v -w -r -i 2 -u -F -k -T 300 -o /efs/ior/ior.bin 
37 | 
38 | ---- 
39 | + 
40 | . While the above script is running in the browser-based SSH connection window, return to the Amazon EC2 console and *_click_* *Services* in the top right area of the window. 
41 | . *_Context-click (right-click)_* *CloudWatch*, which is under the *Management & Governance* section, and *_click_* *Open link in new tab*. 
42 | . *_Click_* the new tab that was just opened to go to the *Amazon CloudWatch console*. 
43 | . *_Click_* *Dashboard* in the left frame. 
44 | . *_Click_* the pre-created dashboard. The name of the dashboard should use the <region>_<file-system-id> naming convention. 
45 | * The name should be similar to this: 
46 | + 
47 | [source,bash] 
48 | ---- 
49 | us-east-1_fs-0123abdc 
50 | 
51 | ---- 
52 | + 
53 | . Change the time window for all the widgets. *_Click_* *1h* in the top right of the window. 
54 | . *_Move_* the cursor over lines of graphs in different widgets. Examine the information available from all the widgets during the workshop. Notice the time indicator line appears in all widgets. 
55 | 
56 | * The easy way to understand how hard your workload is driving the file system is by monitoring the metrics Amazon CloudWatch collects and processes. By monitoring the EFS CloudWatch metrics TotalIOBytes, DataWriteIOBytes, DataReadIOBytes, and MetaDataIOBytes in the CloudWatch console, you can see, in near real-time, your file system's performance. These metrics are sent to CloudWatch at 1-minute intervals and are available for the next 15 months, so you can access historical information about the workload that has run on your file system over time. 
57 | 
58 | * Metric Math, a feature within Amazon CloudWatch, makes it easy to perform math analytics on your metrics to derive additional insights into the health and performance of your AWS resources and applications. 
59 | 
60 | * Amazon CloudWatch dashboards are customizable home pages in the CloudWatch console that you can use to monitor resources in a single view. You can use CloudWatch dashboards to create customized views of the metrics and alarms for your AWS resources. You can have up to 500 dashboards in your AWS account. All dashboards are global, not region-specific. 
61 | 
62 | -- 
63 | {empty} + 
64 | {empty} + 
65 | [.left] 
66 | .EFS Dashboard 
67 | image::dashboard.png[450, scaledwidth="75%"] 
68 | -- 
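The Metric Math feature mentioned above is also available from the AWS CLI. The sketch below follows the throughput expression pattern from the EFS documentation to derive average throughput in MiB/s from TotalIOBytes; the file system ID is a placeholder to replace with your own:

[source,bash]
----
# Average throughput (MiB/s) over the last hour, derived from TotalIOBytes.
# Replace fs-0123abcd with your file system ID.
aws cloudwatch get-metric-data \
  --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --metric-data-queries '[
    {"Id": "m1",
     "MetricStat": {"Metric": {"Namespace": "AWS/EFS",
                               "MetricName": "TotalIOBytes",
                               "Dimensions": [{"Name": "FileSystemId", "Value": "fs-0123abcd"}]},
                    "Period": 60, "Stat": "Sum"},
     "ReturnData": false},
    {"Id": "throughput_mibs", "Expression": "(m1/1048576)/PERIOD(m1)"}
  ]'
----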
69 | 
70 | 
71 | == Next section 
72 | 
73 | Click the link below to go to the next section. 
74 | 
75 | image::client-access.png[link=../12-client-access, align="left",width=420] 
76 | 
77 | 
78 | 
79 | 
--------------------------------------------------------------------------------
/12-client-access/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-efs-workshop/e074e130ff386ef688a6c9ebf1f19f7161663abf/12-client-access/.DS_Store
--------------------------------------------------------------------------------
/12-client-access/readme.adoc:
--------------------------------------------------------------------------------
1 | = Client access 
2 | :toc: 
3 | :icons: 
4 | :linkattrs: 
5 | :imagesdir: ../resources/images 
6 | 
7 | 
8 | == Summary 
9 | 
10 | This section will demonstrate how to control client access using access points. 
11 | 
12 | 
13 | == Duration 
14 | 
15 | NOTE: It will take approximately 10 minutes to complete this section. 
16 | 
17 | 
18 | == Step-by-step Guide 
19 | 
20 | === Control client access using access points 
21 | 
22 | IMPORTANT: Read through all steps below and watch the quick video before continuing. 
23 | 
24 | image::client-access.gif[align="left", width=600] 
25 | 
26 | . Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 1* instance. 
27 | + 
28 | TIP: If the SSH connection has timed out, e.g. the session is unresponsive, refresh or reload the current browser tab. If that doesn't resolve the issue, close the browser-based SSH connection window and create a new one. Return to the link:https://console.aws.amazon.com/ec2/[Amazon EC2] console. *_Click_* the radio button next to the instance with the name *EFS Workshop Linux Instance 1*. *_Click_* the *Connect* button. *_Click_* the radio button next to *EC2 Instance Connect (browser-based SSH connection)*. Leave the default user name as *ec2-user* and *_click_* *Connect*. 
29 | + 
30 | . *_Copy_*, *_paste_*, and *_run_* the following command in the browser-based SSH connection window to see how the Amazon EFS file system has been mounted. The rest of the bash commands below will also be *_run_* in the same browser-based SSH connection window. 
31 | + 
32 | [source,bash] 
33 | ---- 
34 | mount -t nfs4 
35 | 
36 | ---- 
37 | + 
38 | . What is the mount point of the EFS file system? 
39 | * The output of the command should look similar to this: 40 | + 41 | [source,bash] 42 | ---- 43 | fs-01234abc.efs.us-east-1.amazonaws.com:/ on /efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.12,local_lock=none,addr=10.0.1.176,_netdev) 44 | ---- 45 | + 46 | * Answer: /efs 47 | . *_Copy_*, *_paste_*, and *_run_* the following command in the browser-based SSH connection window to get a list of all files and directories under the mount point */efs*. The rest of the bash commands below will also be *_run_* in the same browser-based SSH connection window. 48 | + 49 | [source,bash] 50 | ---- 51 | ll /efs 52 | 53 | ---- 54 | + 55 | . What directories are under the root of the file system in the mount point */efs*? 56 | * The output of the command should look similar to this: 57 | + 58 | [source,bash] 59 | ---- 60 | drwxr-xr-x 6 ec2-user ec2-user 6144 Jun 1 01:52 cp 61 | drwxr-xr-x 6 ec2-user ec2-user 6144 Jun 1 02:58 dd 62 | drwxr-xr-x 6 ec2-user ec2-user 6144 Jun 1 01:52 fpsync 63 | drwxr-xr-x 2 ec2-user ec2-user 6144 Jun 1 00:22 ior 64 | drwxr-xr-x 6 ec2-user ec2-user 227328 Jun 1 06:31 parallelcp 65 | drwxr-xr-x 6 ec2-user ec2-user 6144 Jun 1 01:52 parallelcpio 66 | drwxr-xr-x 6 ec2-user ec2-user 6144 Jun 1 01:52 rsync 67 | drwxr-xr-x 24 ec2-user ec2-user 6144 May 31 06:02 smallfile 68 | drwxrwxr-x 12 ec2-user ec2-user 6144 Jun 1 01:52 touch 69 | ---- 70 | + 71 | . Make a note of all the directories under the mount point */efs*. 72 | . Create a new directory called *client* and some subdirectories with zero-byte files. *_Run_* the following script. 73 | + 74 | [source,bash] 75 | ---- 76 | mkdir -p /efs/client/touch1/{1..32} 77 | sudo chown ec2-user:ec2-user /efs/client/touch1/{1..32} 78 | time seq 1 32 | parallel --will-cite -j 32 touch /efs/client/touch1/{}/test1.{1..32} 79 | 80 | ---- 81 | + 82 | . List the contents of the mount point. *_Run_* the following script. 83 | + 84 | [source,bash] 85 | ---- 86 | ll /efs/client 87 | 88 | ---- 89 | + 90 | * The output of the script should look similar to this: 91 | + 92 | [source,bash] 93 | ---- 94 | total 4 95 | drwxr-xr-x 34 root root 6144 Jun 1 14:45 touch1 96 | ---- 97 | + 98 | . List directories, files, including users and groups. *_Run_* the following script. 99 | + 100 | [source,bash] 101 | ---- 102 | tree --du -hug /efs/client/touch1 103 | 104 | ---- 105 | + 106 | * The end of the output should look similar to this: 107 | + 108 | [source,bash] 109 | ---- 110 | ├── [ec2-user ec2-user 0] test1.7 111 | ├── [ec2-user ec2-user 0] test1.8 112 | └── [ec2-user ec2-user 0] test1.9 113 | 114 | 198K used in 32 directories, 1024 files 115 | ---- 116 | + 117 | . Unmount the file system. *_Run_* the following script. 118 | + 119 | [source,bash] 120 | ---- 121 | cd 122 | sudo umount /efs 123 | 124 | ---- 125 | + 126 | . Return to the Amazon EFS console. 127 | . *_Click_* the radio button next to the file system. 128 | . *_Click_* *Actions* >> *Manage client access* from the File systems tool bar. 129 | . Create a simple file system policy. From the *File system policy* section, *_click_* the check boxes of the following policy statements: 130 | * Disable root access by default 131 | * Enforce in-transit encryption for all clients 132 | . *_Click_* *Set policy*. 133 | . *_Click_* *Save policy*. 134 | . Create an access point and configure the POSIX identity and root directory for all connections using this access point. 
=== Validate file system policies and access point

. Return to the browser-based SSH connection of the *EFS Workshop Linux Instance 1* instance.
. See if you can mount the file system using an unencrypted connection. *_Run_* the following script. Replace the file-system-id placeholder with the file system ID you copied in the earlier step.
+
[source,bash]
----
sudo mount -t efs file-system-id /efs

----
+
* The actual command should look similar to this:
+
[source,bash]
----
sudo mount -t efs fs-0123abcd /efs

----
+
. Did the mount command succeed? Why not?
. The output of the command should look similar to this:
+
[source,bash]
----
mount.nfs4: access denied by server while mounting fs-0123abcd.efs.us-east-1.amazonaws.com:/
----
+
. What must you do to the mount command to successfully mount the file system?
. Change the mount command to use an encrypted connection by inserting *-o tls*. *_Run_* the following script. Replace the file-system-id placeholder with the file system ID you copied in the earlier step.
+
[source,bash]
----
sudo mount -t efs -o tls file-system-id /efs

----
+
* The actual command should look similar to this:
+
[source,bash]
----
sudo mount -t efs -o tls fs-0123abcd /efs

----
+
. Did the mount command succeed?
. Verify the file system successfully mounted. *_Run_* the following script.
+
[source,bash]
----
mount -t nfs4

----
+
* The output should look similar to this:
+
[source,bash]
----
127.0.0.1:/ on /efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,port=20279,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
----
+
. Notice the DNS name of the file system is no longer in the mount output. It has been replaced by the loopback (localhost) address, because the EFS mount helper routes NFS traffic through a local stunnel process that encrypts it with TLS. To identify the DNS name of a file system mounted with an encrypted connection, query the mount.log file and find the last successful mount operation. *_Run_* the following script.
+
[source,bash]
----
grep -E "Successfully mounted.*/efs" /var/log/amazon/efs/mount.log | tail -1

----
+
. The output of the command should look similar to this:
+
[source,bash]
----
2020-06-01 14:55:46,279 - INFO - Successfully mounted fs-0123abcd.efs.us-east-1.amazonaws.com at /efs
----
+
. Verify you can access the file system. List the file system objects under the root of the mount point. *_Run_* the following script.
+
[source,bash]
----
ll /efs

----
+
. What directories are under the root of the file system in the mount point */efs*?
* The output of the command should look similar to this:
+
[source,bash]
----
total 256
drwxrwxr-x  3 ec2-user ec2-user   6144 Jun  1 15:25 client
drwxr-xr-x  6 ec2-user ec2-user   6144 Jun  1 01:52 cp
drwxr-xr-x  6 ec2-user ec2-user   6144 Jun  1 02:58 dd
drwxr-xr-x  6 ec2-user ec2-user   6144 Jun  1 01:52 fpsync
drwxr-xr-x  2 ec2-user ec2-user   6144 Jun  2 00:22 ior
drwxr-xr-x  6 ec2-user ec2-user 227328 Jun  2 06:31 parallelcp
drwxr-xr-x  6 ec2-user ec2-user   6144 Jun  1 01:52 parallelcpio
drwxr-xr-x  6 ec2-user ec2-user   6144 Jun  1 01:52 rsync
drwxr-xr-x 24 ec2-user ec2-user   6144 May 31 06:02 smallfile
drwxrwxr-x 12 ec2-user ec2-user   6144 Jun  1 01:52 touch
----
+
. Create more zero-byte files. *_Run_* the following script.
+
[source,bash]
----
mkdir -p /efs/client/touch2/{1..32}
time seq 1 32 | parallel --will-cite -j 32 sudo touch /efs/client/touch2/{}/test1.{1..32}

----
+
. Did the parallel touch command succeed? Why not?
* Because the file system policy disables root access, clients running as *root* (via *sudo*) are mapped to an anonymous user with no write permission in these directories.
. Rerun the script, but remove *sudo*. *_Run_* the following script.
+
[source,bash]
----
time seq 1 32 | parallel --will-cite -j 32 touch /efs/client/touch2/{}/test1.{1..32}

----
+
. Did the parallel touch command succeed?
. List the directories and files, including their user and group owners. *_Run_* the following script.
+
[source,bash]
----
sudo tree --du -hug /efs/client/touch2

----
+
* The output of the script should look similar to this:
+
[source,bash]
----
├── [ec2-user ec2-user    0]  test1.6
├── [ec2-user ec2-user    0]  test1.7
├── [ec2-user ec2-user    0]  test1.8
└── [ec2-user ec2-user    0]  test1.9

 198K used in 32 directories, 1024 files
----
+
. Unmount the file system. *_Run_* the following script.
+
[source,bash]
----
cd
sudo umount /efs

----
+
. Return to the Amazon EFS console.
. *_Click_* the radio button next to the file system.
. *_Click_* *Actions* >> *Manage client access* from the File systems toolbar.
. From the *Access points* section, *_copy_* the *Access point ID*. It should look similar to this:
* fsap-0d3c794aa17bcc98d
. Run the mount command using an encrypted connection and the access point (to make such a mount persist across reboots, see the /etc/fstab sketch after this list). *_Run_* the following script. Replace the file-system-id and access-point-id placeholders with the values you copied earlier.
+
[source,bash]
----
sudo mount -t efs -o tls,accesspoint=access-point-id file-system-id /efs

----
+
* The actual command should look similar to this:
+
[source,bash]
----
sudo mount -t efs -o tls,accesspoint=fsap-0123456789abdcef0 fs-0123abcd /efs

----
+
. List the contents of the mount point. *_Run_* the following script.
+
[source,bash]
----
ll /efs

----
+
* The output should look similar to this:
+
[source,bash]
----
total 8
drwxrwxr-x 34 ec2-user ec2-user 6144 Jun  1 15:25 touch1
drwxrwxr-x 34 ec2-user ec2-user 6144 Jun  1 15:31 touch2
----
+
. What happened to all the other directories that were under */efs*?
* Earlier you created an access point with the path */client*, so all connections using that access point see */client* as their root. These connections can only access file system contents within */client*.
. Create a new directory called */touch3* and some subdirectories with zero-byte files. *_Run_* the following script.
+
[source,bash]
----
sudo mkdir -p /efs/touch3/{1..32}
time seq 1 32 | parallel --will-cite -j 32 sudo touch /efs/touch3/{}/test1.{1..32}

----
+
. List the contents of the mount point. *_Run_* the following script.
+
[source,bash]
----
ll /efs

----
+
* The output of the script should look similar to this:
+
[source,bash]
----
total 12
drwxrwxr-x 34 ec2-user ec2-user 6144 Jun  1 15:25 touch1
drwxrwxr-x 34 ec2-user ec2-user 6144 Jun  1 15:31 touch2
drwxr-xr-x 34 ec2-user ec2-user 6144 Jun  1 15:43 touch3
----
+
. List the directories and files, including their user and group owners, in the *touch3* directory. *_Run_* the following script.
+
[source,bash]
----
tree --du -hug /efs/touch3

----
+
* The output of the script should look similar to this:
+
[source,bash]
----
├── [ec2-user ec2-user    0]  test1.6
├── [ec2-user ec2-user    0]  test1.7
├── [ec2-user ec2-user    0]  test1.8
└── [ec2-user ec2-user    0]  test1.9

 198K used in 32 directories, 1024 files
----
+
. Who is the user owner and group owner of all these directories and files?
* Notice the owner of all directories and files created in the */touch3* directory is *ec2-user*. Because this instance is using the access point that is mapped to *User ID: 1000 (ec2-user)* and *Group ID: 1000 (ec2-user)*, all file system objects are created as *ec2-user*, even those created with *sudo*.
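To make the access point mount persistent, you could add an entry to /etc/fstab using the EFS mount helper (a sketch using the same placeholder IDs; this is not one of the workshop steps):

[source,bash]
----
# Append an fstab entry so the file system is remounted at boot with
# TLS and the access point, then test it with a plain "mount /efs".
echo "fs-0123abcd /efs efs _netdev,tls,accesspoint=fsap-0123456789abdcef0 0 0" | sudo tee -a /etc/fstab
sudo mount /efs
----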
== Next section

Click the link below to go to the next section.

image::takeaways.png[link=../13-takeaways/, align="left",width=420]

--------------------------------------------------------------------------------
/13-takeaways/readme.adoc:
--------------------------------------------------------------------------------
= Workshop takeaways
:toc:
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Summary

This section lists some of the best practices demonstrated during this workshop.


== Best practices for optimal performance

. Use multiple threads when accessing the file system.
. Use multiple clients when accessing the file system.
. Spread writes and updates over multiple directories to minimize inode contention.
. Make sure clients accessing the file system have sufficient network performance to drive the IOPS and throughput needed.
. Start with the General Purpose performance mode and Bursting Throughput mode. Change to Provisioned Throughput mode when the amount of data stored in the file system doesn't provide the desired throughput.
. Use Amazon CloudWatch to monitor how hard your workloads drive the file system (see the alarm sketch below).
. Mount using the EFS mount helper, which applies the recommended mount options for optimal performance.
. When required, add file system policies and access points to manage and secure client access.
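As one way to act on the monitoring best practice (a sketch, not part of the workshop): create a CloudWatch alarm that fires when the file system's burst credit balance trends low, which is a signal that you may need Provisioned Throughput. The file system ID, threshold, and SNS topic ARN below are placeholders.

[source,bash]
----
# Alarm when the average BurstCreditBalance stays below ~1 TiB
# (expressed in bytes) for three consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
  --alarm-name efs-burst-credits-low \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-0123abcd \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 1099511627776 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:efs-alerts
----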
== Next section

Click the link below to go to the next section.

image::tear-down-workshop.png[link=../14-tear-down-workshop/, align="left",width=420]

--------------------------------------------------------------------------------
/14-tear-down-as-workshop/readme.adoc:
--------------------------------------------------------------------------------
= Tear down AWS sponsored workshop
:icons:
:linkattrs:
:imagesdir: ../resources/images


*Congratulations!* You have completed the Amazon EFS Workshop.

Because you completed the Amazon EFS AWS sponsored workshop, there are no additional steps you need to take to tear down your workshop environment. The AWS account provided to you for the workshop will be deleted at the end of the workshop event.

If you're interested in taking this workshop again, please return to this GitHub repo, select the link:/../../[Amazon EFS On-Demand Workshop], and complete the workshop using your own account, or click on the link below.
--
{empty} +
{empty} +
--
image::efs-workshops.png[link=/../../, align="right",width=420]

--------------------------------------------------------------------------------
/14-tear-down-od-workshop/readme.adoc:
--------------------------------------------------------------------------------
= Tear down On-Demand workshop
:icons:
:linkattrs:
:imagesdir: ../resources/images


== Coming soon!
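Until this page is published, the general approach is as follows (a sketch, not official teardown instructions): the On-Demand workshop environment is created by an AWS CloudFormation stack, so deleting that stack removes the workshop resources and stops the associated charges. The stack name below is a placeholder; use the name you chose when you deployed the workshop.

[source,bash]
----
# Delete the workshop CloudFormation stack and wait for it to finish.
aws cloudformation delete-stack --stack-name efs-workshop
aws cloudformation wait stack-delete-complete --stack-name efs-workshop
----

Afterwards, confirm in the Amazon EFS and Amazon EC2 consoles that the workshop file system and instances are gone.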
--------------------------------------------------------------------------------
/14-tear-down-workshop/readme.adoc:
--------------------------------------------------------------------------------
= Tear down workshop
:icons:
:linkattrs:
:imagesdir: ../resources/images


*Congratulations!* You have completed the Amazon EFS Workshop.

IMPORTANT: *_Click_* the link below that matches the type of workshop you started, **AWS Sponsored** or **On-Demand**. The link outlines the steps you need to follow to tear down your workshop environment. It is important to tear down and delete all the AWS resources you created during this workshop so you are no longer charged for these resources.


[cols="1,1"]
|===
a|image::tear-down-od-workshop.png[link=../14-tear-down-od-workshop/]
a|image::tear-down-as-workshop.png[link=../14-tear-down-as-workshop/]
|===
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

--------------------------------------------------------------------------------
/aws-sponsored/readme.adoc:
--------------------------------------------------------------------------------
= Amazon EFS AWS Sponsored Workshops
:icons:
:linkattrs:
:imagesdir: ../resources/images

image:efs-aws-logos.png[align="left",width=420]

This is a workshop designed for architects and engineers who would like to learn more about Amazon Web Services (AWS) fully managed Network File System (NFS) service, link:https://aws.amazon.com/efs/[Amazon EFS].

Click the link below to access the AWS sponsored workshop, **Amazon EFS**.

[cols="1"]
|===
a|image::efs-as-workshop.png[align="left",width=440,link=../01-access-as-workshop/]
|===

NOTE: You will incur charges as you go through this workshop, as the resources launched will exceed the link:http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier-limits.html[limits of the AWS free tier].
--------------------------------------------------------------------------------
/on-demand/readme.adoc:
--------------------------------------------------------------------------------
= Cloud File Storage the AWSome Way!
:icons:
:linkattrs:
:imagesdir: resources/images

image:fsx-aws-logos.png[alt="fsx and aws logos", align="left",width=420]

This is a set of workshops designed for architects and engineers who would like to learn more about Amazon Web Services (AWS) fully managed third-party file system service, link:https://aws.amazon.com/fsx/[Amazon FSx].

There are two types of Amazon FSx workshops. **On-demand workshops** are designed for individuals to run the workshop at any time using their own AWS account. These workshops include instructions that guide you through the steps to create the workshop environment using your own AWS account. **AWS sponsored workshops** are designed for individuals or groups taking the workshop during an AWS sponsored event, such as an AWS Summit, AWS Storage Days Workshop, AWS Workshop, or AWS re:Invent. These workshops use AWS owned accounts that are created and distributed at the time of the event. Instructions on how to gain access to these AWS accounts will be shared by the event sponsors at the start of the event.

Click the link below to access the type of workshop you want to run.

image::fsx-windows-od-workshop.png[link=stg326/, align="left",width=420]


=== Participation

We encourage participation; if you find a problem, please submit an issue. However, if you want to help raise the bar, **submit a PR**!


=== License

This sample code is made available under a modified MIT license. See the LICENSE file.

--------------------------------------------------------------------------------
/readme.adoc:
--------------------------------------------------------------------------------
= Cloud File Storage the AWSome Way!
:icons:
:linkattrs:
:imagesdir: resources/images

image:efs-aws-logos.png[alt="efs and aws logos", align="left",width=420]

This is a set of workshops designed for architects and engineers who would like to learn more about Amazon Web Services (AWS) fully managed, general-purpose Network File System (NFS) service, link:https://aws.amazon.com/efs/[Amazon Elastic File System (EFS)].

There are two types of Amazon EFS workshops. **On-Demand workshops** are designed for individuals to run the workshop at any time using their own AWS account. These workshops include instructions to create the workshop environment using your own AWS account. **AWS Sponsored workshops** are designed for individuals or groups taking the workshop during an AWS sponsored event, such as an AWS Summit, AWS Storage Days Workshop, AWS Workshop, or AWS re:Invent. These workshops use AWS owned accounts that will be created and distributed at the time of the event. Instructions on how to gain access to these AWS accounts will be shared by AWS at the start of the event.

Click the link below to access the type of workshop you want to run.

|===
a|image::efs-od-workshop.png[link=01-deploy-od-workshop/] a| image::efs-as-workshop.png[link=01-access-as-workshop/]
|===

=== Participation

We encourage participation; if you find a problem, please submit an issue. However, if you want to help raise the bar, **submit a PR**!
=== License

This sample code is made available under a modified MIT license. See the LICENSE file.