├── CONTRIBUTING.md ├── LICENSE ├── LICENSE-SUMMARY ├── README.md ├── images ├── aws_configure.png ├── cf_complete.png ├── cli_1.png ├── ebs_part1_1.png ├── ebs_part1_2.png ├── ebs_part1_3.png ├── ebs_part1_4.png ├── ebs_part2_1.png ├── ebs_part2_2.png ├── mod1cf1.png ├── mod1ssh1.png └── s3_perf_1.png └── storage_performance.json /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 
29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 
60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Creative Commons Attribution-ShareAlike 4.0 International Public License 2 | 3 | By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. 4 | 5 | Section 1 – Definitions. 6 | 7 | a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. 8 | 9 | b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. 10 | 11 | c. BY-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License. 12 | 13 | d. 
Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. 14 | 15 | e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. 16 | 17 | f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. 18 | 19 | g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike. 20 | 21 | h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. 22 | 23 | i. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. 24 | 25 | j. Licensor means the individual(s) or entity(ies) granting rights under this Public License. 26 | 27 | k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. 
28 | 29 | l. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. 30 | 31 | m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. 32 | 33 | Section 2 – Scope. 34 | 35 | a. License grant. 36 | 37 | 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: 38 | 39 | A. reproduce and Share the Licensed Material, in whole or in part; and 40 | 41 | B. produce, reproduce, and Share Adapted Material. 42 | 43 | 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 44 | 45 | 3. Term. The term of this Public License is specified in Section 6(a). 46 | 47 | 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material. 48 | 49 | 5. Downstream recipients. 50 | 51 | A. Offer from the Licensor – Licensed Material. 
Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. 52 | 53 | B. Additional offer from the Licensor – Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply. 54 | 55 | C. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 56 | 57 | 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). 58 | 59 | b. Other rights. 60 | 61 | 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 62 | 63 | 2. Patent and trademark rights are not licensed under this Public License. 64 | 65 | 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties. 66 | 67 | Section 3 – License Conditions. 
68 | 69 | Your exercise of the Licensed Rights is expressly made subject to the following conditions. 70 | 71 | a. Attribution. 72 | 73 | 1. If You Share the Licensed Material (including in modified form), You must: 74 | 75 | A. retain the following if it is supplied by the Licensor with the Licensed Material: 76 | 77 | i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); 78 | 79 | ii. a copyright notice; 80 | 81 | iii. a notice that refers to this Public License; 82 | 83 | iv. a notice that refers to the disclaimer of warranties; 84 | 85 | v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; 86 | 87 | B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and 88 | 89 | C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 90 | 91 | 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 92 | 93 | 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. 94 | 95 | b. ShareAlike.In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply. 96 | 97 | 1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License. 98 | 99 | 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. 
You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material. 100 | 101 | 3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply. 102 | 103 | Section 4 – Sui Generis Database Rights. 104 | 105 | Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: 106 | 107 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database; 108 | 109 | b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and 110 | 111 | c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. 112 | For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. 113 | 114 | Section 5 – Disclaimer of Warranties and Limitation of Liability. 115 | 116 | a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. 
This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You. 117 | 118 | b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You. 119 | 120 | c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. 121 | 122 | Section 6 – Term and Termination. 123 | 124 | a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. 125 | 126 | b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 127 | 128 | 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 129 | 130 | 2. upon express reinstatement by the Licensor. 131 | 132 | c. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. 133 | 134 | d. 
For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. 135 | 136 | e. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. 137 | 138 | Section 7 – Other Terms and Conditions. 139 | 140 | a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. 141 | 142 | b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. 143 | 144 | Section 8 – Interpretation. 145 | 146 | a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. 147 | 148 | b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. 149 | 150 | c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. 151 | 152 | d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. 153 | -------------------------------------------------------------------------------- /LICENSE-SUMMARY: -------------------------------------------------------------------------------- 1 | Copyright 2019 Amazon.com, Inc. 
or its affiliates. All Rights Reserved. 2 | 3 | The documentation is made available under the Creative Commons Attribution-ShareAlike 4.0 International License. See the LICENSE file. 4 | 5 | The sample code within this documentation is made available under the MIT-0 license. See the LICENSE-SAMPLECODE file. 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # **Maximizing Storage Throughput and Performance** 2 | 3 | © 2019 Amazon Web Services, Inc. and its affiliates. All rights reserved. 4 | This sample code is made available under the MIT-0 license. See the LICENSE file. 5 | 6 | Errors or corrections? Contact [mburbey@amazon.com](mailto:mburbey@amazon.com). 7 | --- 8 | ## Workshop Summary 9 | 10 | In this workshop you learn how to obtain higher levels of performance with EBS, S3 and EFS. 11 | 12 | #### 1. Deploy AWS resources using CloudFormation 13 | 14 | 1. Click one of the launch links in the table below to deploy the resources using CloudFormation. To avoid errors during deployment, select a region in which you have previously created AWS resources. 15 | 16 | | **Region Code** | **Region Name** | **Launch** | 17 | | --- | --- | --- | 18 | | us-west-1 | US West (N. California) | [Launch in us-west-1](https://console.aws.amazon.com/cloudformation/home?region=us-west-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 19 | | us-west-2 | US West (Oregon) | [Launch in us-west-2](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 20 | | us-east-1 | US East (N. 
Virginia) | [Launch in us-east-1](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 21 | | us-east-2 | US East (Ohio) | [Launch in us-east-2](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 22 | | ca-central-1 | Canada (Central) | [Launch in ca-central-1](https://console.aws.amazon.com/cloudformation/home?region=ca-central-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 23 | | eu-central-1 | EU (Frankfurt) | [Launch in eu-central-1](https://console.aws.amazon.com/cloudformation/home?region=eu-central-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 24 | | eu-west-1 | EU (Ireland) | [Launch in eu-west-1](https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 25 | | eu-west-2 | EU (London) | [Launch in eu-west-2](https://console.aws.amazon.com/cloudformation/home?region=eu-west-2#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 26 | | eu-west-3 | EU (Paris) | [Launch in 
eu-west-3](https://console.aws.amazon.com/cloudformation/home?region=eu-west-3#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 27 | | eu-north-1 | EU (Stockholm) | [Launch in eu-north-1](https://console.aws.amazon.com/cloudformation/home?region=eu-north-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 28 | | ap-east-1 | Asia Pacific (Hong Kong) | [Launch in ap-east-1](https://console.aws.amazon.com/cloudformation/home?region=ap-east-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 29 | | ap-northeast-1 | Asia Pacific (Tokyo) | [Launch in ap-northeast-1](https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 30 | | ap-northeast-2 | Asia Pacific (Seoul) | [Launch in ap-northeast-2](https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-2#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 31 | | ap-northeast-3 | Asia Pacific (Osaka-Local) | [Launch in ap-northeast-3](https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-3#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 32 | | ap-southeast-1 | Asia Pacific (Singapore) | [Launch in 
ap-southeast-1](https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 33 | | ap-southeast-2 | Asia Pacific (Sydney) | [Launch in ap-southeast-2](https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 34 | | ap-south-1 | Asia Pacific (Mumbai) | [Launch in ap-south-1](https://console.aws.amazon.com/cloudformation/home?region=ap-south-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 35 | | me-south-1 | Middle East (Bahrain) | [Launch in me-south-1](https://console.aws.amazon.com/cloudformation/home?region=me-south-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 36 | | sa-east-1 | South America (São Paulo) | [Launch in sa-east-1](https://console.aws.amazon.com/cloudformation/home?region=sa-east-1#/stacks/new?stackName=StoragePerformanceWorkshop&templateURL=https://storage-specialists-cf-templates.s3-us-west-2.amazonaws.com/2019/storage_performance.json) | 37 | 38 | 2. Click **Next** on the Create Stack page. 39 | 3. Click **Next**. 40 | 4. Click **Next** again (skipping the Options and Advanced options sections). 41 | 5. On the Review page, scroll to the bottom and check the boxes to acknowledge that CloudFormation will create IAM resources, then click **Create stack**. 42 | 43 | ![](/images/mod1cf1.png) 44 | 6. Click **Events**. Events will not auto-refresh; you will need to refresh the page manually using the refresh button on the right side of the page. 45 | 7. 
Watch for **StoragePerformanceWorkshop** and a status of **CREATE_COMPLETE**. 46 | 47 | ![](/images/cf_complete.png) 48 | 49 | **Note:** Instances that are launched as part of this CloudFormation template may be in the initializing state for a few minutes. 50 | 51 | #### 2. Connect to the EC2 Instance using EC2 Instance Connect 52 | 53 | 1. From the AWS console, click **Services** and select **EC2.** 54 | 2. Select **Instances** from the menu on the left. 55 | 3. Wait until the state of the **Storage_Performance_Workshop** instance shows as _running_ and all Status Checks have completed (i.e. **not** in the _Initializing_ state). 56 | 4. Right-click on the **Storage_Performance_Workshop** instance and select **Connect** from the menu. 57 | 5. From the dialog box, select the EC2 Instance Connect option, as shown below: 58 | 59 | ![](/images/mod1ssh1.png) 60 | 61 | 6. For the **User name** field, enter "ec2-user", then click **Connect**. A new dialog box or tab in your browser should appear, providing you with a command line interface (CLI). Keep this open - you will use the command line on the instance throughout this workshop. 62 | 7. Verify that your login prompt contains storage-workshop. If it does not, the initialization tasks for the workshop have not yet completed; close the SSH window and repeat the EC2 Instance Connect steps in a couple of minutes. 63 | 64 | ![](/images/cli_1.png) 65 | 66 | Note: The SSH session will disconnect after a period of inactivity. If your session becomes unresponsive, close the window and repeat the steps above to reconnect. 67 | 68 | ## EBS Performance Part 1 69 | 70 | FIO was automatically launched by the CloudFormation template. We will use FIO to exhaust the burst credits on the 1 GB GP2 volume. 71 | 72 | 1. Run the following command to verify that FIO is running on the instance. The output should be similar to the screenshot below. 73 | ``` 74 | ps -ef | grep fio 75 | ``` 76 | ![](/images/ebs_part1_1.png) 77 | 2. 
From the AWS console, click **Services** and select **EC2.** 78 | 3. Select **Instances** from the menu on the left. 79 | 4. Select the instance named **Storage_Performance_Workshop**. 80 | 81 | ![](/images/ebs_part1_2.png) 82 | 5. Click on **/dev/sdb** in the lower pane next to Block devices. 83 | 84 | ![](/images/ebs_part1_3.png) 85 | 6. In the pop-up window, click the **EBS ID**. 86 | 87 | ![](/images/ebs_part1_4.png) 88 | 7. Click on the **Monitoring** tab. 89 | 8. There are two graphs of interest: Read Throughput and Burst Balance. You may need to click on the graph to see any data, as there is likely only one data point so far. 90 | 1. Read Throughput should be at roughly 3000 IOPS until the Burst Balance is exhausted. 91 | 2. Burst Balance will start out at 100% and will decrease steadily over the next 30 minutes. 92 | 9. We will come back to EBS performance later, giving the Burst Balance a chance to be exhausted. 93 | 94 | ## S3 Performance - Optimize Throughput of Large Files 95 | 96 | In this section we will demonstrate parallelizing the transfer of a large object by breaking it into smaller chunks and increasing the number of threads used to transfer it. 97 | 98 | 1. In the CLI for the instance, run the following command to set up the AWS CLI. 99 | ``` 100 | aws configure 101 | ``` 102 | Leave the Access Key and Secret Key blank, set the region to the one in which you deployed your CloudFormation template, and leave the output format at the default. 103 | 104 | ![](/images/aws_configure.png) 105 | 106 | 2. Configure the AWS CLI S3 settings by running the following commands. 107 | ``` 108 | aws configure set default.s3.max_concurrent_requests 1 109 | aws configure set default.s3.multipart_threshold 64MB 110 | aws configure set default.s3.multipart_chunksize 16MB 111 | ``` 112 | **Note** 113 | Commands starting with aws s3 will use the settings above; commands starting with aws s3api will not. aws s3 uses the Transfer Manager, which can optimize transfers. 
aws s3api simply makes the specific API call you specify. 114 | 115 | 3. Verify that the settings match the screenshot below. 116 | ``` 117 | cat ~/.aws/config 118 | ``` 119 | ![](/images/s3_perf_1.png) 120 | 121 | 4. Run the following command to create a 5 GB file (count=0 with seek creates a sparse file, so this completes almost instantly). 122 | ``` 123 | dd if=/dev/urandom of=5GB.file bs=1 count=0 seek=5G 124 | ``` 125 | 5. Upload the 5 GB file to the S3 bucket using 1 thread. Record the time to complete. 126 | ``` 127 | time aws s3 cp 5GB.file s3://${bucket}/upload1.test 128 | ``` 129 | 6. Upload the 5 GB file to the S3 bucket using 2 threads. Record the time to complete. 130 | ``` 131 | aws configure set default.s3.max_concurrent_requests 2 132 | time aws s3 cp 5GB.file s3://${bucket}/upload2.test 133 | ``` 134 | 7. Upload the 5 GB file to the S3 bucket using 10 threads. Record the time to complete. 135 | ``` 136 | aws configure set default.s3.max_concurrent_requests 10 137 | time aws s3 cp 5GB.file s3://${bucket}/upload3.test 138 | ``` 139 | 8. Upload the 5 GB file to the S3 bucket using 20 threads. Record the time to complete. 140 | ``` 141 | aws configure set default.s3.max_concurrent_requests 20 142 | time aws s3 cp 5GB.file s3://${bucket}/upload4.test 143 | ``` 144 | At some point the AWS CLI will limit the performance that can be achieved; this is likely the case if you didn't see any performance increase between 10 and 20 threads. This is a limitation of the CLI itself: with other software, increasing the thread count into the hundreds can continue to increase performance. 145 | 146 | 9. Run the following command to create a 1 GB file. 147 | ``` 148 | dd if=/dev/urandom of=1GB.file bs=1 count=0 seek=1G 149 | ``` 150 | Data can also be segmented into multiple pieces. The next step will demonstrate moving 5 GB of data using multiple source files. 151 | 152 | 10. Upload 5 GB of data to S3 by uploading five 1 GB files in parallel. The -j flag is the number of concurrent jobs to run; this results in 100 total threads (20 from aws configure times 5 jobs). Record the time to complete. 
153 | ``` 154 | time seq 1 5 | parallel --will-cite -j 5 aws s3 cp 1GB.file s3://${bucket}/parallel/object{}.test 155 | ``` 156 | **Note** 157 | 158 | 1. These exercises showed that workloads can be parallelized by breaking up a large object into chunks or by having smaller files. 159 | 2. One trade-off to keep in mind is that each PUT is billed at $0.05 per 1,000 requests. When you break up a 5 GB file into 16 MB chunks, that results in 313 PUTs instead of 1 PUT. Essentially, you can view it as paying a toll to go faster. 160 | 161 | ## S3 Performance- Optimize the Sync command 162 | 163 | This exercise will use the `aws s3 sync` command to move 2,000 files totaling 2 GB of data. 164 | 165 | 1. In the CLI for the instance, perform the sync using 1 thread. Record the time to complete. 166 | ``` 167 | aws configure set default.s3.max_concurrent_requests 1 168 | time aws s3 sync /ebs/tutorial/data-1m/ s3://${bucket}/sync1/ 169 | ``` 170 | 2. Perform the sync using 10 threads. Record the time to complete. 171 | ``` 172 | aws configure set default.s3.max_concurrent_requests 10 173 | time aws s3 sync /ebs/tutorial/data-1m/ s3://${bucket}/sync2/ 174 | ``` 175 | ## S3 Performance- Optimize Small File Operations 176 | 177 | This exercise will demonstrate how to increase the transactions per second (TPS) while moving small objects. 178 | 179 | 1. In the CLI for the instance, create a text file that represents a list of object IDs. 180 | ``` 181 | seq 1 500 > object_ids 182 | cat object_ids 183 | ``` 184 | 2. Create a 1 KB file. 185 | ``` 186 | dd if=/dev/urandom of=1KB.file bs=1 count=0 seek=1K 187 | ``` 188 | 3. Upload 500 1 KB files to S3 using 1 thread. Record the time to complete. 189 | ``` 190 | time parallel --will-cite -a object_ids -j 1 aws s3 cp 1KB.file s3://${bucket}/run1/{} 191 | ``` 192 | 4. Upload 500 1 KB files to S3 using 10 threads. Record the time to complete. 193 | ``` 194 | time parallel --will-cite -a object_ids -j 10 aws s3 cp 1KB.file s3://${bucket}/run2/{} 195 | ``` 196 | 5.
Upload 500 1 KB files to S3 using 50 threads. Record the time to complete. 197 | ``` 198 | time parallel --will-cite -a object_ids -j 50 aws s3 cp 1KB.file s3://${bucket}/run3/{} 199 | ``` 200 | 6. Upload 500 1 KB files to S3 using 100 threads. Record the time to complete. 201 | ``` 202 | time parallel --will-cite -a object_ids -j 100 aws s3 cp 1KB.file s3://${bucket}/run4/{} 203 | ``` 204 | **Note** 205 | Going from 50 to 100 threads likely didn't result in higher performance. For ease of demonstration we are running multiple instances of the AWS CLI to show a concept. In the real world, developers would create thread pools that are much more efficient than our demonstration method, and it is reasonable to assume that added threads will continue to add performance until another bottleneck, such as running out of CPU, occurs. 206 | 207 | ## S3 Performance- Optimize Copying Objects to Different Locations 208 | 209 | In this exercise we will demonstrate how to copy files from one location in S3 to another more efficiently. 210 | 211 | 1. In the CLI for the instance, run this command to download a 5 GB file that was uploaded to the S3 bucket in an earlier test and then upload it to a different prefix. Record the time to complete. 212 | ``` 213 | time (aws s3 cp s3://$bucket/upload1.test 5GB.file; aws s3 cp 5GB.file s3://$bucket/copy/5GB.file) 214 | ``` 215 | 2. Use PUT COPY (`copy-object`) to move the file. Record the time to complete. 216 | ``` 217 | time aws s3api copy-object --copy-source $bucket/upload1.test --bucket $bucket --key copy/5GB-2.file 218 | ``` 219 | 3. Copy the file between S3 locations using a single command. Record the time to complete. 220 | ``` 221 | time aws s3 cp s3://$bucket/upload1.test s3://$bucket/copy/5GB-3.file 222 | ``` 223 | **Note** 224 | The first command had to GET the data from S3 to the EC2 instance and then PUT it back to S3. The second command uses a PUT COPY but is only single-threaded.
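Much of the difference comes down to how many bytes cross the instance's network interface. As a rough illustration for the 5 GB object:

```
# Approximate data crossing the EC2 NIC for each approach (5 GB object).
object_gb=5
echo "download then upload: $((object_gb * 2)) GB over the NIC"
echo "copy-object:          0 GB (the copy happens inside S3)"
```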
The third command also uses a PUT COPY, but additionally uses the Transfer Manager, which is multi-threaded depending on the AWS CLI configuration. Both the second and third commands perform the copy internally within S3. Only API calls are made from the EC2 host, and the data transfer happens entirely inside S3. 225 | 226 | ## EFS Performance- Optimize IOPS 227 | 228 | In this exercise we will demonstrate different methods of creating 1,024 files and compare the performance of each method. 229 | 230 | 1. In the CLI for the instance, run this command to generate 1,024 zero-byte files. 231 | Record the time to complete. 232 | ``` 233 | directory=$(echo $(uuidgen)| grep -o ".\\{6\\}$") 234 | mkdir -p /efs/tutorial/touch/${directory} 235 | time for i in {1..1024}; do 236 | touch /efs/tutorial/touch/${directory}/test-1.3-$i; 237 | done; 238 | ``` 239 | 2. Run this command to generate 1,024 zero-byte files using multiple threads. Record the time to complete. 240 | ``` 241 | directory=$(echo $(uuidgen)| grep -o ".\\{6\\}$") 242 | mkdir -p /efs/tutorial/touch/${directory} 243 | time seq 1 1024 | parallel --will-cite -j 128 touch /efs/tutorial/touch/${directory}/test-1.4-{} 244 | ``` 245 | 3. Run this command to generate 1,024 zero-byte files in multiple directories using multiple threads. Record the time to complete. 246 | ``` 247 | directory=$(echo $(uuidgen)| grep -o ".\\{6\\}$") 248 | mkdir -p /efs/tutorial/touch/${directory}/{1..32} 249 | time seq 1 32 | parallel --will-cite -j 32 touch /efs/tutorial/touch/${directory}/{}/test1.5{1..32} 250 | ``` 251 | **Note** 252 | The best way to leverage the distributed data storage design of Amazon EFS is to use multiple threads and inodes in parallel. 253 | 254 | ## EFS Performance- I/O Sizes and Sync Frequency 255 | 256 | In this exercise we will demonstrate how different I/O sizes and sync frequencies affect throughput to EFS. 257 | 258 | 1.
In the CLI for the instance, write a 2 GB file to EFS using a 1 MB block size and sync once after the file. Record the time to complete. 259 | ``` 260 | time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=2048 status=progress conv=fsync 261 | ``` 262 | 2. Write a 2 GB file to EFS using a 16 MB block size and sync once after the file. Record the time to complete. 263 | ``` 264 | time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=16M count=128 status=progress conv=fsync 265 | ``` 266 | 3. Write a 2 GB file to EFS using a 1 MB block size and sync after each block. Record the time to complete. 267 | ``` 268 | time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=1M count=2048 status=progress oflag=sync 269 | ``` 270 | 4. Write a 2 GB file to EFS using a 16 MB block size and sync after each block. Record the time to complete. 271 | ``` 272 | time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N) bs=16M count=128 status=progress oflag=sync 273 | ``` 274 | **Note** 275 | Syncing after each block dramatically decreases the performance of the file system. The best performance is obtained by syncing once after each file; block size has little impact on performance. 276 | 277 | ## EFS Performance- Multi-threaded Performance 278 | 279 | This exercise will demonstrate how multi-threaded access improves throughput and IOPS. 280 | 281 | 1. Each command will write 2 GB of data to EFS using a 1 MB block size. 282 | 2. Write to EFS using 4 threads and sync after each block. Record the time to complete. 283 | ``` 284 | time seq 0 3 | parallel --will-cite -j 4 dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{} bs=1M count=512 oflag=sync 285 | ``` 286 | 3. Write to EFS using 16 threads and sync after each block. Record the time to complete.
287 | ``` 288 | time seq 0 15 | parallel --will-cite -j 16 dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{} bs=1M count=128 oflag=sync 289 | ``` 290 | **Note** 291 | The distributed data storage design of EFS means that multi-threaded applications can drive substantial levels of aggregate throughput and IOPS. Parallelizing your writes to EFS by increasing the number of threads increases the overall throughput and IOPS. 292 | 293 | ## EFS Performance- Compare File Transfer Tools 294 | 295 | In this section we will compare the performance of different file transfer utilities when copying data to EFS. 296 | 297 | 1. Review the data to transfer: 2,000 files totaling 2 GB. 298 | ``` 299 | du -csh /ebs/tutorial/data-1m/ 300 | find /ebs/tutorial/data-1m/. -type f | wc -l 301 | ``` 302 | 2. Transfer files from EBS to EFS using rsync. Record the time to complete. 303 | ``` 304 | sudo su 305 | sync && echo 3 > /proc/sys/vm/drop_caches 306 | exit 307 | time rsync -r /ebs/tutorial/data-1m/ /efs/tutorial/rsync/ 308 | ``` 309 | 310 | 3. Transfer files from EBS to EFS using cp. Record the time to complete. 311 | ``` 312 | sudo su 313 | sync && echo 3 > /proc/sys/vm/drop_caches 314 | exit 315 | time cp -r /ebs/tutorial/data-1m/* /efs/tutorial/cp/ 316 | ``` 317 | 4. Set the `$threads` variable to 4 threads per CPU. 318 | ``` 319 | threads=$(($(nproc --all) * 4)) 320 | echo $threads 321 | ``` 322 | 5. Transfer files from EBS to EFS using fpsync. Record the time to complete. 323 | ``` 324 | sudo su 325 | sync && echo 3 > /proc/sys/vm/drop_caches 326 | exit 327 | time fpsync -n ${threads} -v /ebs/tutorial/data-1m/ /efs/tutorial/fpsync/ 328 | ``` 329 | 6. Transfer files from EBS to EFS using cp + GNU Parallel. Record the time to complete. 330 | ``` 331 | sudo su 332 | sync && echo 3 > /proc/sys/vm/drop_caches 333 | exit 334 | time find /ebs/tutorial/data-1m/. -type f | parallel --will-cite -j ${threads} cp {} /efs/tutorial/parallelcp 335 | ``` 336 | **Note** 337 | Not all file transfer utilities are created equal.
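One knob that matters for all of these utilities is the thread count from step 4, which scales with instance size. For example, on the workshop's c5.4xlarge instance (16 vCPUs, per the CloudFormation template), the calculation works out as follows (vCPU count hard-coded here for illustration):

```
# 4 transfer threads per vCPU; 16 stands in for $(nproc --all)
# on the workshop's c5.4xlarge instance.
vcpus=16
threads=$((vcpus * 4))
echo "$threads threads"
```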
File systems are distributed across an unconstrained number of storage servers, and this distributed data storage design means that multi-threaded tools like fpsync, mcp, and GNU Parallel can drive substantial levels of throughput and IOPS to EFS compared to single-threaded tools. 338 | 339 | ## EBS Performance Part 2 340 | 1. From the AWS console, click **Services** and select **EC2**. 341 | 2. Select **Instances** from the menu on the left. 342 | 3. Select the instance named **Storage_Performance_Workshop**. 343 | 344 | ![](/images/ebs_part1_2.png) 345 | 4. Click on **/dev/sdb** in the lower pane by Block devices. 346 | 347 | ![](/images/ebs_part1_3.png) 348 | 5. In the pop-up window, click the **EBS ID**. 349 | 350 | ![](/images/ebs_part1_4.png) 351 | 352 | 6. Click on the **Monitoring** tab. 353 | 7. There are two graphs of interest: Read Throughput and Burst Balance. 354 | 1. Read Throughput should now be at 100 IOPS. 355 | 2. Burst Balance should be at 0%. 356 | 8. In the CLI for the instance, look at the output of the fio job to see the current IOPS. 357 | ``` 358 | sudo screen -r 359 | ``` 360 | 9. In the AWS console, click **Actions**, then select **Modify Volume**. 361 | 10. In the pop-up window, configure the following values: 362 | 363 | Volume Type: Provisioned IOPS SSD (IO1) 364 | Size: 100GB 365 | Iops: 5000 366 | 367 | ![](/images/ebs_part2_1.png) 368 | 11. Click **Modify**. 369 | 12. Click **Yes**. 370 | 13. Click **Close**. 371 | 14. Go back to your SSH session and check the output of fio; you should see the increase in IOPS. Press ctrl-a d to detach from the screen session. 372 | 15. Verify that the larger volume is seen by the instance. 373 | ``` 374 | lsblk 375 | ``` 376 | ![](/images/ebs_part2_2.png) 377 | 378 | 16. Resize the filesystem. The increase in IOPS is available right away, but you need to resize the filesystem to use the added capacity. 379 | ``` 380 | sudo umount /ebsperftest 381 | sudo resize2fs /dev/nvme1n1 382 | sudo mount /ebsperftest 383 | ``` 384 | 17.
Verify that the filesystem is now using the 100 GB volume. 385 | ``` 386 | df -h 387 | ``` 388 | 18. Run the following command to ensure fio is running on the instance. The output should be similar to the screenshot below. 389 | ``` 390 | ps -ef | grep fio 391 | ``` 392 | ![](/images/ebs_part1_1.png) 393 | 394 | 19. Allow fio to run for several more minutes so the graphs can update. 395 | 20. Click on the **Monitoring** tab. 396 | 21. Refresh the graphs. Burst Balance will no longer report any values because the volume is now IO1. Read Throughput should be at 5,000 IOPS. 397 | 398 | ## Clean Up Resources 399 | 400 | To ensure you don't continue to be billed for services in your account, follow the steps below to remove all resources created during the workshop. 401 | 402 | 1. In the CLI for the instance, remove the objects from the S3 bucket. 403 | ``` 404 | aws configure set default.s3.max_concurrent_requests 20 405 | aws s3 rm s3://${bucket} --recursive 406 | ``` 407 | 2. From the AWS console, click **Services** and select **CloudFormation**. 408 | 3. Select **StoragePerformanceWorkshop**. 409 | 4. Click **Delete**. 410 | 5. Click **Delete stack**. 411 | 6. It will take a few minutes to delete everything. Refresh the page to see an updated status. **StoragePerformanceWorkshop** will be removed from the list once everything has been deleted correctly.
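As a closing footnote on the EBS exercises: the roughly 30-minute Burst Balance drain seen in Part 1 follows from gp2's published credit model (a 5.4 million I/O credit bucket, a 3,000 IOPS burst ceiling, and refill at 3 credits per second per GiB). A back-of-the-envelope check for the workshop's 1 GiB test volume:

```
# Time to drain the gp2 burst bucket under a sustained 3,000 IOPS read load.
bucket_credits=5400000   # initial I/O credit balance
burst_iops=3000          # gp2 burst ceiling
refill_per_sec=3         # 3 credits/sec per GiB, for a 1 GiB volume
drain_secs=$(( bucket_credits / (burst_iops - refill_per_sec) ))
echo "$((drain_secs / 60)) minutes"
```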
412 | -------------------------------------------------------------------------------- /images/aws_configure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/aws_configure.png -------------------------------------------------------------------------------- /images/cf_complete.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/cf_complete.png -------------------------------------------------------------------------------- /images/cli_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/cli_1.png -------------------------------------------------------------------------------- /images/ebs_part1_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/ebs_part1_1.png -------------------------------------------------------------------------------- /images/ebs_part1_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/ebs_part1_2.png -------------------------------------------------------------------------------- /images/ebs_part1_3.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/ebs_part1_3.png -------------------------------------------------------------------------------- /images/ebs_part1_4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/ebs_part1_4.png -------------------------------------------------------------------------------- /images/ebs_part2_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/ebs_part2_1.png -------------------------------------------------------------------------------- /images/ebs_part2_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/ebs_part2_2.png -------------------------------------------------------------------------------- /images/mod1cf1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/mod1cf1.png -------------------------------------------------------------------------------- /images/mod1ssh1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/mod1ssh1.png -------------------------------------------------------------------------------- /images/s3_perf_1.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/maximizing-storage-throughput-and-performance/f307912a11e3b0fbd18ac07bf8a5ecf46585cde9/images/s3_perf_1.png -------------------------------------------------------------------------------- /storage_performance.json: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion":"2010-09-09", 3 | "Transform":{ 4 | "Name":"AWS::Include", 5 | "Parameters":{ 6 | "Location":"s3://storage-specialists-cf-templates/regional_ami_mapping.json" 7 | } 8 | }, 9 | "Resources":{ 10 | "myVPC":{ 11 | "Type":"AWS::EC2::VPC", 12 | "Properties":{ 13 | "CidrBlock":"10.11.12.0/24", 14 | "EnableDnsSupport":"true", 15 | "EnableDnsHostnames":"true", 16 | "InstanceTenancy":"default", 17 | "Tags":[ 18 | { 19 | "Key":"Name", 20 | "Value":"StoragePerformanceWorkshopVPC" 21 | } 22 | ] 23 | } 24 | }, 25 | "PublicSubnet1":{ 26 | "Type":"AWS::EC2::Subnet", 27 | "Properties":{ 28 | "VpcId":{ 29 | "Ref":"myVPC" 30 | }, 31 | "CidrBlock":"10.11.12.0/24", 32 | "MapPublicIpOnLaunch":"True", 33 | "Tags":[ 34 | { 35 | "Key":"Name", 36 | "Value":"StoragePerformanceWorkshopSubnet" 37 | } 38 | ] 39 | } 40 | }, 41 | "myInternetGateway":{ 42 | "Type":"AWS::EC2::InternetGateway", 43 | "Properties":{ 44 | "Tags":[ 45 | { 46 | "Key":"Name", 47 | "Value":"StoragePerformanceWorkshopVPC" 48 | } 49 | ] 50 | } 51 | }, 52 | "AttachGateway":{ 53 | "Type":"AWS::EC2::VPCGatewayAttachment", 54 | "Properties":{ 55 | "VpcId":{ 56 | "Ref":"myVPC" 57 | }, 58 | "InternetGatewayId":{ 59 | "Ref":"myInternetGateway" 60 | } 61 | } 62 | }, 63 | "myRouteTable":{ 64 | "Type":"AWS::EC2::RouteTable", 65 | "Properties":{ 66 | "VpcId":{ 67 | "Ref":"myVPC" 68 | }, 69 | "Tags":[ 70 | { 71 | "Key":"Name", 72 | "Value":"StoragePerformanceWorkshopVPC" 73 | } 74 | ] 75 | } 76 | }, 77 | "PublicSubnet1RouteAssociaton":{ 78 | 
"Type":"AWS::EC2::SubnetRouteTableAssociation", 79 | "Properties":{ 80 | "SubnetId":{ 81 | "Ref":"PublicSubnet1" 82 | }, 83 | "RouteTableId":{ 84 | "Ref":"myRouteTable" 85 | } 86 | } 87 | }, 88 | "RoutetoInternet":{ 89 | "Type":"AWS::EC2::Route", 90 | "DependsOn":"myInternetGateway", 91 | "Properties":{ 92 | "RouteTableId":{ 93 | "Ref":"myRouteTable" 94 | }, 95 | "DestinationCidrBlock":"0.0.0.0/0", 96 | "GatewayId":{ 97 | "Ref":"myInternetGateway" 98 | } 99 | } 100 | }, 101 | "SSHAccessSG":{ 102 | "Type":"AWS::EC2::SecurityGroup", 103 | "Properties":{ 104 | "GroupDescription":"SSH Access", 105 | "VpcId":{ 106 | "Ref":"myVPC" 107 | }, 108 | "SecurityGroupIngress":{ 109 | "IpProtocol":"tcp", 110 | "FromPort":"22", 111 | "ToPort":"22", 112 | "CidrIp":"0.0.0.0/0" 113 | } 114 | } 115 | }, 116 | "EFSSG":{ 117 | "Type":"AWS::EC2::SecurityGroup", 118 | "Properties":{ 119 | "GroupDescription":"Access to EFS", 120 | "VpcId":{ 121 | "Ref":"myVPC" 122 | }, 123 | "SecurityGroupIngress":{ 124 | "IpProtocol":"tcp", 125 | "FromPort":"2049", 126 | "ToPort":"2049", 127 | "CidrIp":"0.0.0.0/0" 128 | } 129 | } 130 | }, 131 | "S3Role":{ 132 | "Type":"AWS::IAM::Role", 133 | "Properties":{ 134 | "AssumeRolePolicyDocument":{ 135 | "Version":"2012-10-17", 136 | "Statement":[ 137 | { 138 | "Effect":"Allow", 139 | "Principal":{ 140 | "Service":[ 141 | "ec2.amazonaws.com" 142 | ] 143 | }, 144 | "Action":[ 145 | "sts:AssumeRole" 146 | ] 147 | } 148 | ] 149 | }, 150 | "Path":"/" 151 | } 152 | }, 153 | "RolePolicies":{ 154 | "Type":"AWS::IAM::Policy", 155 | "Properties":{ 156 | "PolicyName":"admin", 157 | "PolicyDocument":{ 158 | "Version":"2012-10-17", 159 | "Statement":[ 160 | { 161 | "Effect":"Allow", 162 | "Action":"s3:*", 163 | "Resource":"*" 164 | } 165 | ] 166 | }, 167 | "Roles":[ 168 | { 169 | "Ref":"S3Role" 170 | } 171 | ] 172 | } 173 | }, 174 | "InstanceProfile":{ 175 | "Type":"AWS::IAM::InstanceProfile", 176 | "Properties":{ 177 | "Path":"/", 178 | "Roles":[ 179 | { 180 | 
"Ref":"S3Role" 181 | } 182 | ] 183 | } 184 | }, 185 | "bucket01":{ 186 | "Type":"AWS::S3::Bucket", 187 | "Properties":{ 188 | "BucketEncryption":{ 189 | "ServerSideEncryptionConfiguration":[ 190 | { 191 | "ServerSideEncryptionByDefault":{ 192 | "SSEAlgorithm":"AES256" 193 | } 194 | } 195 | ] 196 | } 197 | } 198 | }, 199 | "FileSystem":{ 200 | "Type":"AWS::EFS::FileSystem", 201 | "Properties":{ 202 | "PerformanceMode":"generalPurpose", 203 | "ThroughputMode":"provisioned", 204 | "ProvisionedThroughputInMibps":"300", 205 | "FileSystemTags":[ 206 | { 207 | "Key":"Name", 208 | "Value":"StoragePerformanceWorkshopFS" 209 | } 210 | ] 211 | } 212 | }, 213 | "MountTarget":{ 214 | "Type":"AWS::EFS::MountTarget", 215 | "Properties":{ 216 | "FileSystemId":{ 217 | "Ref":"FileSystem" 218 | }, 219 | "SubnetId":{ 220 | "Ref":"PublicSubnet1" 221 | }, 222 | "SecurityGroups":[ 223 | { 224 | "Fn::GetAtt":[ 225 | "EFSSG", 226 | "GroupId" 227 | ] 228 | } 229 | ] 230 | } 231 | }, 232 | "Instance01":{ 233 | "Type":"AWS::EC2::Instance", 234 | "DependsOn":"MountTarget", 235 | "Properties":{ 236 | "ImageId":{ 237 | "Fn::FindInMap":[ 238 | "RegionalAmiMap", 239 | { 240 | "Ref":"AWS::Region" 241 | }, 242 | "AmazonLinux2" 243 | ] 244 | }, 245 | "SubnetId":{ 246 | "Ref":"PublicSubnet1" 247 | }, 248 | "InstanceType":"c5.4xlarge", 249 | "Tags":[ 250 | { 251 | "Key":"Name", 252 | "Value":"Storage_Performance_Workshop" 253 | } 254 | ], 255 | "SecurityGroupIds":[ 256 | { 257 | "Ref":"SSHAccessSG" 258 | }, 259 | { 260 | "Ref":"EFSSG" 261 | } 262 | ], 263 | "IamInstanceProfile":{ 264 | "Ref":"InstanceProfile" 265 | }, 266 | "BlockDeviceMappings":[ 267 | { 268 | "DeviceName":"/dev/xvda", 269 | "Ebs":{ 270 | "VolumeType":"gp2", 271 | "DeleteOnTermination":"true", 272 | "VolumeSize":"40" 273 | } 274 | }, 275 | { 276 | "DeviceName":"/dev/sdb", 277 | "Ebs":{ 278 | "VolumeType":"gp2", 279 | "DeleteOnTermination":"true", 280 | "VolumeSize":"1" 281 | } 282 | } 283 | ], 284 | "UserData":{ 285 | "Fn::Base64":{ 
286 | "Fn::Join":[ 287 | "", 288 | [ 289 | "#!/bin/bash -xe\n", 290 | "sudo yum update -y\n", 291 | "sudo yum install fio amazon-efs-utils git -y\n", 292 | "sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm\n", 293 | "sudo yum install fpart -y\n", 294 | "sudo wget https://ftpmirror.gnu.org/parallel/parallel-20191022.tar.bz2\n", 295 | "sudo bzip2 -dc parallel-20191022.tar.bz2 | tar xvf -\n", 296 | "cd parallel-20191022\n", 297 | "sudo ./configure && make && sudo make install\n", 298 | "sudo mkfs -t ext4 /dev/nvme1n1\n", 299 | "sudo mkdir /ebsperftest\n", 300 | "sudo mount /dev/nvme1n1 /ebsperftest\n", 301 | "echo '/dev/nvme1n1 /ebsperftest ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab\n", 302 | "screen -d -m -S fiotest fio --filename=/dev/nvme1n1 --rw=randread --bs=16k --runtime=9600 --time_based=1 --iodepth=32 --ioengine=libaio --direct=1 --name=gp2-16kb-burst-bucket-test\n", 303 | "sudo mkdir /efs\n", 304 | "sudo chown ec2-user:ec2-user /efs\n", 305 | { 306 | "Fn::Join":[ 307 | "", 308 | [ 309 | "sudo mount -t efs ", 310 | { 311 | "Ref":"FileSystem" 312 | }, 313 | ":/ /efs\n" 314 | ] 315 | ] 316 | }, 317 | "sudo mkdir -p /efs/tutorial/{dd,touch,rsync,cp,parallelcp,parallelcpio}/\n", 318 | "sudo chown ec2-user:ec2-user /efs/tutorial/ -R\n", 319 | "cd /home/ec2-user/\n", 320 | "sudo git clone https://github.com/bengland2/smallfile.git\n", 321 | "sudo mkdir -p /ebs/tutorial/{smallfile,data-1m}\n", 322 | "sudo chown ec2-user:ec2-user //ebs/tutorial/ -R\n", 323 | "echo '#!/bin/bash' > /etc/profile.d/script.sh\n", 324 | { 325 | "Fn::Join":[ 326 | "", 327 | [ 328 | "sudo echo export bucket=", 329 | { 330 | "Ref":"bucket01" 331 | }, 332 | " >> /etc/profile.d/script.sh\n" 333 | ] 334 | ] 335 | }, 336 | "echo 'storage-workshop' | sudo tee -a /proc/sys/kernel/hostname\n", 337 | "python /home/ec2-user/smallfile/smallfile_cli.py --operation create --threads 10 --file-size 1024 --file-size-distribution exponential --files 200 
--same-dir N --dirs-per-dir 1024 --hash-into-dirs Y --files-per-dir 10240 --top /ebs/tutorial/smallfile\n", 338 | "cp -R /ebs/tutorial/smallfile/file_srcdir/storage-workshop /ebs/tutorial/data-1m/\n" 339 | ] 340 | ] 341 | } 342 | } 343 | } 344 | } 345 | }, 346 | "Outputs":{ 347 | "Bucket":{ 348 | "Value":{ 349 | "Ref":"bucket01" 350 | } 351 | }, 352 | "EC2Instance":{ 353 | "Value":{ 354 | "Fn::GetAtt":[ 355 | "Instance01", 356 | "PublicIp" 357 | ] 358 | } 359 | } 360 | } 361 | } 362 | --------------------------------------------------------------------------------