├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── images
│   ├── fullarch.png
│   ├── mod1-cfout.png
│   ├── mod2-newgw.png
│   ├── mod2-s3.png
│   ├── mod2-share.png
│   ├── mod3-edit-sg.png
│   ├── mod3-endpoint.png
│   ├── mod3-sg-rules.png
│   ├── mod3-sglist.png
│   ├── mod3-user-list.png
│   ├── mod3-user.png
│   ├── mod4-refresh.png
│   └── mod4-s3.png
├── module1
│   └── README.md
├── module2
│   └── README.md
├── module3
│   └── README.md
├── module4
│   └── README.md
├── module5
│   └── README.md
└── templates
    └── transfer-storage-gateway-workshop.yaml
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
5 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 |
10 | ## Reporting Bugs/Feature Requests
11 |
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 |
14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 |
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 |
22 |
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 |
26 | 1. You are working against the latest source on the *master* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 |
30 | To send us a pull request, please:
31 |
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 |
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 |
42 |
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute to. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
45 |
46 |
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 |
52 |
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue.
55 |
56 |
57 | ## Licensing
58 |
59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
60 |
61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
62 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of
4 | this software and associated documentation files (the "Software"), to deal in
5 | the Software without restriction, including without limitation the rights to
6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
7 | the Software, and to permit persons to whom the Software is furnished to do so.
8 |
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 |
16 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Access data in Amazon S3 using AWS Transfer Family and AWS Storage Gateway
2 |
3 | © 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved.
4 | This sample code is made available under the MIT-0 license. See the LICENSE file.
5 |
6 | Errors or corrections? Contact [jeffbart@amazon.com](mailto:jeffbart@amazon.com).
7 |
8 | ---
9 |
10 | ## Workshop scenario
11 |
12 | In many industries, organizations frequently need to exchange data with other parties. This data is often encapsulated in reports stored in a well-known format. Today, many organizations generate these reports internally and then make them available to external entities using common file transfer protocols such as SFTP. In many cases, these reports must be preserved to meet compliance requirements and are stored in on-premises storage systems or offsite vaults, often for many years. In these types of workflows, organizations need to provide access to data both through standard file storage protocols such as NFS and SMB and through common file transfer protocols such as SFTP, FTP, or FTPS.
13 |
14 | This workshop will show you how to use [AWS Transfer Family](https://aws.amazon.com/aws-transfer-family/) and [AWS Storage Gateway](https://aws.amazon.com/storagegateway/) to provide access to data from different file protocols. Each of these services allows you to store and access data in [Amazon S3](https://aws.amazon.com/s3/) for scalable, durable cloud storage.
15 |
16 | ## Topics covered
17 |
18 | - Deploying resources using CloudFormation
19 | - Using AWS Storage Gateway to create and access files in Amazon S3 using NFS/SMB
20 | - Using AWS Transfer Family to create and access data in Amazon S3 using SFTP/FTP/FTPS
21 | - File Gateway RefreshCache API
22 |
23 | ## Prerequisites
24 |
25 | #### AWS Account
26 |
27 | In order to complete this workshop, you will need an AWS account with rights to create AWS IAM roles, EC2 instances, Storage Gateway shares, AWS Transfer servers, and CloudFormation stacks in the AWS regions you select.
28 |
29 | #### Software
30 |
31 | - **Internet Browser** – It is recommended that you use the latest version of Chrome or Firefox for this workshop.
32 |
33 | ## Cost
34 |
35 | It will cost approximately **3.00 USD** to run this workshop. It is recommended that you follow the cleanup instructions once you have completed the workshop to remove all deployed resources and limit ongoing costs to your AWS account.
36 |
37 | ## Related workshops
38 |
39 | - [NFS server migration using AWS DataSync and Storage Gateway](https://github.com/aws-samples/aws-datasync-migration-workshop/blob/master/workshops/nfs-migration)
40 | - [IP whitelisting with AWS Transfer Family](https://github.com/aws-samples/aws-transfer-sftp-ip-whitelisting-workshop)
41 |
42 | ## Workshop modules
43 |
44 | This workshop consists of the following modules:
45 |
46 | - [Module 1](/module1) - Deploy resources using CloudFormation
47 | - [Module 2](/module2) - Configure the File Gateway
48 | - [Module 3](/module3) - Configure the AWS Transfer server
49 | - [Module 4](/module4) - Using RefreshCache to see changes in S3
50 | - [Module 5](/module5) - Cleanup resources
51 |
52 | To get started, go to [Module 1](/module1).
53 |
--------------------------------------------------------------------------------
/images/fullarch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/fullarch.png
--------------------------------------------------------------------------------
/images/mod1-cfout.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod1-cfout.png
--------------------------------------------------------------------------------
/images/mod2-newgw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod2-newgw.png
--------------------------------------------------------------------------------
/images/mod2-s3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod2-s3.png
--------------------------------------------------------------------------------
/images/mod2-share.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod2-share.png
--------------------------------------------------------------------------------
/images/mod3-edit-sg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod3-edit-sg.png
--------------------------------------------------------------------------------
/images/mod3-endpoint.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod3-endpoint.png
--------------------------------------------------------------------------------
/images/mod3-sg-rules.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod3-sg-rules.png
--------------------------------------------------------------------------------
/images/mod3-sglist.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod3-sglist.png
--------------------------------------------------------------------------------
/images/mod3-user-list.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod3-user-list.png
--------------------------------------------------------------------------------
/images/mod3-user.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod3-user.png
--------------------------------------------------------------------------------
/images/mod4-refresh.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod4-refresh.png
--------------------------------------------------------------------------------
/images/mod4-s3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-transfer-storage-gateway-workshop/1aed00ef086060371d99e05f973b7bbe8f1b1fdd/images/mod4-s3.png
--------------------------------------------------------------------------------
/module1/README.md:
--------------------------------------------------------------------------------
1 | # Access data in Amazon S3 using AWS Transfer Family and AWS Storage Gateway
2 |
3 | © 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved.
4 | This sample code is made available under the MIT-0 license. See the LICENSE file.
5 |
6 | Errors or corrections? Contact [jeffbart@amazon.com](mailto:jeffbart@amazon.com).
7 |
8 | ---
9 |
10 | # Module 1
11 | ## Deploy resources
12 |
13 | In this module, you will use CloudFormation to deploy all AWS resources necessary to complete this workshop, as shown in the diagram below. The resources include an AWS Transfer server, an EC2 instance running Storage Gateway in File mode (i.e. File Gateway), and a Linux server running on EC2. An S3 bucket will also be created in the region you select. IAM roles will be automatically created to secure access to the S3 bucket.
14 |
15 | ![Workshop architecture](/images/fullarch.png)
16 |
17 | ## Module Steps
18 |
19 | #### 1. Deploy AWS resources
20 |
21 | 1. Click one of the launch links in the table below to deploy workshop resources using CloudFormation. To avoid errors during deployment, select a region in which you have previously created AWS resources.
22 |
23 | | **Region Code** | **Region Name** | **Launch** |
24 | | --- | --- | --- |
25 | | us-west-1 | US West (N. California) | [Launch in us-west-1](https://console.aws.amazon.com/cloudformation/home?region=us-west-1#/stacks/new?stackName=TransferWorkshop&templateURL=https://aws-transfer-samples.s3-us-west-2.amazonaws.com/workshops/transfer-storage-gateway/transfer-storage-gateway-workshop.yaml) |
26 | | us-west-2 | US West (Oregon) | [Launch in us-west-2](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=TransferWorkshop&templateURL=https://aws-transfer-samples.s3-us-west-2.amazonaws.com/workshops/transfer-storage-gateway/transfer-storage-gateway-workshop.yaml) |
27 | | us-east-1 | US East (N. Virginia) | [Launch in us-east-1](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=TransferWorkshop&templateURL=https://aws-transfer-samples.s3-us-west-2.amazonaws.com/workshops/transfer-storage-gateway/transfer-storage-gateway-workshop.yaml) |
28 | | us-east-2 | US East (Ohio) | [Launch in us-east-2](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=TransferWorkshop&templateURL=https://aws-transfer-samples.s3-us-west-2.amazonaws.com/workshops/transfer-storage-gateway/transfer-storage-gateway-workshop.yaml) |
29 | | eu-west-1 | Ireland | [Launch in eu-west-1](https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/new?stackName=TransferWorkshop&templateURL=https://aws-transfer-samples.s3-us-west-2.amazonaws.com/workshops/transfer-storage-gateway/transfer-storage-gateway-workshop.yaml) |
30 | | eu-central-1 | Frankfurt | [Launch in eu-central-1](https://console.aws.amazon.com/cloudformation/home?region=eu-central-1#/stacks/new?stackName=TransferWorkshop&templateURL=https://aws-transfer-samples.s3-us-west-2.amazonaws.com/workshops/transfer-storage-gateway/transfer-storage-gateway-workshop.yaml) |
31 |
32 | 2. Click **Next** on the Create Stack page.
33 | 3. Keep the Stack Name as-is. Under the **Parameters** section, enter a CIDR block to use for the VPC that will be created, or leave the default as-is. Do not edit the values of the AMI IDs. When you are done, click **Next**.
34 | 4. Click **Next** again, skipping the Options and Advanced options sections.
35 | 5. On the Review page, scroll to the bottom and check the box to acknowledge that CloudFormation will create IAM resources, then click **Create stack**.
36 |
37 | Wait for the CloudFormation stack to reach the CREATE\_COMPLETE state before proceeding to the next steps. It will take about **3 minutes** for the CloudFormation stack to complete.
38 |
39 | **NOTE:** If the stack fails to deploy because an EC2 instance type is not available in a particular availability zone, delete the stack and retry in the same region or in a different region.
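
If you prefer the command line, the same stack can be launched with the AWS CLI. A minimal sketch, assuming your CLI credentials and default region are already configured (the stack name and template URL match the launch links above):

    $ aws cloudformation create-stack \
        --stack-name TransferWorkshop \
        --template-url https://aws-transfer-samples.s3-us-west-2.amazonaws.com/workshops/transfer-storage-gateway/transfer-storage-gateway-workshop.yaml \
        --capabilities CAPABILITY_IAM

    # Block until the stack reaches CREATE_COMPLETE (fails if creation fails)
    $ aws cloudformation wait stack-create-complete --stack-name TransferWorkshop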
40 |
41 | #### 2. Stack Outputs
42 |
43 | Upon completion, the CloudFormation stack will have a list of "Outputs". These are values such as IP addresses and resource names that will be used throughout the workshop. You can either copy these values elsewhere or keep the page open in your browser and refer to them as you go through the workshop.
44 |
45 | On the CloudFormation page, click on the **Outputs** tab, as shown in the image below. You should see the following values listed:
46 |
47 | - **bucketName** – This is the name of the S3 bucket that was automatically created. You will use this when you create an NFS share on the File Gateway.
48 | - **fileGatewayPublicIP** – This is the public IP address of the EC2 instance running the Storage Gateway appliance. You will use this when you activate the gateway.
49 | - **iamRoleForS3Access** - This is an IAM role that provides access to the S3 bucket. This role will be used both by File Gateway and by AWS Transfer.
50 | - **linuxServerPrivateIP** – This is the private IP address of the Linux server. You will use this when you create the File Gateway share and when you configure the security group for the AWS Transfer VPC endpoint.
51 | - **transferServerId** – This is the ID of the AWS Transfer server that was created.
52 |
53 | ![CloudFormation stack outputs](/images/mod1-cfout.png)
54 |
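The same outputs can be retrieved from the command line; a quick sketch, assuming the default stack name:

    $ aws cloudformation describe-stacks --stack-name TransferWorkshop \
        --query 'Stacks[0].Outputs' --output table
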
55 | ## Validation Step
56 |
57 | Open a new tab in your browser and navigate to the AWS management console for EC2. You should see two new instances created: one called "Workshop-FileGateway" and one called "Workshop-LinuxServer".
58 |
59 | If you do not see these resources, verify that the CloudFormation stack completed with state "CREATE_COMPLETE".
60 |
61 | ## Module Summary
62 |
63 | In this module, you deployed all resources necessary to complete this workshop and verified that resources were deployed correctly.
64 |
65 | In the next module, you will activate the File Gateway, create an NFS share to provide file access to the S3 bucket, and then write a file to the share from the Linux server.
66 |
67 | Go to [Module 2](/module2).
68 |
--------------------------------------------------------------------------------
/module2/README.md:
--------------------------------------------------------------------------------
1 | # Access data in Amazon S3 using AWS Transfer Family and AWS Storage Gateway
2 |
3 | © 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved.
4 | This sample code is made available under the MIT-0 license. See the LICENSE file.
5 |
6 | Errors or corrections? Contact [jeffbart@amazon.com](mailto:jeffbart@amazon.com).
7 |
8 | ---
9 |
10 | # Module 2
11 | ## Configure the File Gateway
12 |
13 | In the previous module, you deployed various AWS resources using CloudFormation. This included an EC2 instance running File Gateway and another EC2 instance running a Linux server. In this module, you will activate the File Gateway and then create an NFS share, backed by the S3 bucket created in CloudFormation. You will then mount the NFS share on the Linux server and write a file to it, showing how you can use File Gateway to access data in S3 using standard file protocols.
14 |
15 | ## Module Steps
16 |
17 | #### 1. Activate the File Gateway
18 |
19 | Follow the steps below to activate the gateway.
20 |
21 | 1. Go to the AWS Management console page, click **Services** then select **Storage Gateway.**
22 | 2. If no gateways exist, click the **Get started** button, otherwise click the **Create gateway** button.
23 | 3. Select the **File gateway** type and click **Next.**
24 | 4. Select **Amazon EC2** as the host platform, then click **Next**.
25 | 5. Select the **Public** endpoint type, then click **Next**.
26 | 6. Enter the **Public IP address** of the File Gateway instance that was created in the first module using CloudFormation. The IP address is available in the outputs of the CloudFormation stack. Click **Connect to gateway**.
27 | 7. Name the gateway "WorkshopGateway" then click **Activate gateway**.
28 | 8. The gateway will be activated and then it will spend a minute or so preparing the local disk devices. Allocate the **300 GiB /dev/sdc** device to **Cache.** This is the local disk on the gateway that will be used to cache frequently accessed files.
29 | 9. Click **Configure logging.**
30 | 10. Configure the setting to _Disable logging_ then click **Save and continue.**
31 | 11. From the main Storage Gateway page, you will see your gateway listed.
32 |
33 | ![New gateway listed in the console](/images/mod2-newgw.png)
34 |
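For reference, activation can also be scripted with the AWS CLI. A rough sketch, assuming us-west-2 and placeholder values in angle brackets; the gateway returns its activation key as a query parameter in an HTTP redirect:

    # Fetch the activation key from the gateway's redirect response
    $ curl -s -D - -o /dev/null "http://<fileGatewayPublicIP>/?activationRegion=us-west-2" | grep -io 'activationkey=[^&]*'

    # Activate the gateway as a File Gateway backed by S3
    $ aws storagegateway activate-gateway \
        --activation-key <activation-key> \
        --gateway-name WorkshopGateway \
        --gateway-timezone GMT \
        --gateway-region us-west-2 \
        --gateway-type FILE_S3

    # Find the 300 GiB local disk and allocate it to cache
    $ aws storagegateway list-local-disks --gateway-arn <gateway-arn>
    $ aws storagegateway add-cache --gateway-arn <gateway-arn> --disk-ids <disk-id>
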
35 | #### 2. Create an NFS share
36 |
37 | 1. Click on the **Create file share** button
38 | 2. For the **Amazon S3 bucket name** , enter the name of the S3 bucket created by CloudFormation. You can find the bucket name in the outputs of the CloudFormation stack. The name begins with "workshop-" followed by a GUID.
39 | 3. Select **NFS** as the access method and make sure your gateway from the previous step is selected.
40 | 4. Click **Next**.
41 | 5. For the **Storage class**, select **S3 Standard**.
42 | 6. Under **Access to your S3 bucket**, select **Use an existing IAM role** and then enter the ARN of the role from the outputs in the CloudFormation stack.
43 | 7. Click **Next**.
44 | 8. Under the **Allowed clients** section, click **Edit** and change "0.0.0.0/0" to the **Private IP Address** of the Linux server, followed by "/32". This will allow only the Linux server to access the NFS file share on the gateway. You can find the private IP address in the outputs of the CloudFormation stack. Click the **Close** button.
45 | 9. Under the **Mount options** section, change the **Squash level** to "No root squash". Click the **Close** button.
46 | 10. Click **Create file share**.
47 | 11. Select the check box next to the new file share and note the mount instructions.
48 |
49 | ![File share details and mount instructions](/images/mod2-share.png)
50 |
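The same share can also be created with the AWS CLI. A sketch, with placeholder values in angle brackets (the gateway ARN comes from the Storage Gateway console; the role ARN and bucket name come from the CloudFormation outputs):

    $ aws storagegateway create-nfs-file-share \
        --client-token workshop-nfs-share \
        --gateway-arn <gateway-arn> \
        --role <iamRoleForS3Access> \
        --location-arn arn:aws:s3:::<bucketName> \
        --default-storage-class S3_STANDARD \
        --client-list <linuxServerPrivateIP>/32 \
        --squash NoSquash
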
51 | #### 3. Connect to the Linux server using Session Manager
52 |
53 | 1. From the AWS console, click **Services** and select **EC2.**
54 | 2. Select **Instances** from the menu on the left.
55 | 3. Right-click on the **Workshop-LinuxServer** instance and select **Connect** from the menu.
56 | 4. From the dialog box, select the **Session Manager** option.
57 | 5. Click **Connect**. A new tab will be opened in your browser with a command line interface (CLI) to the Linux server. Keep this tab open - you will use the command line on the Linux server throughout this workshop.
58 |
59 | #### 4. Mount the NFS share on the Linux server
60 |
61 | 1. From the CLI on the Linux server, run the following command to create a new mount point for the File Gateway NFS share:
62 |
63 | $ sudo mkdir /mnt/nfs
64 |
65 | 2. Copy the Linux mount command from the Storage Gateway file share page and replace "[MountPath]" with "/mnt/nfs" (a typical form is sketched after this list). **You must run the command as sudo.**
66 | 3. You now have an NFS mount point on your Linux server that connects you to the File Gateway.
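
For reference, the mount command from the console typically takes the form below; the gateway IP and bucket name here are placeholders, so use the exact command shown for your share:

    $ sudo mount -t nfs -o nolock,hard <fileGatewayIP>:/<bucketName> /mnt/nfs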
67 |
68 | ## Validation Step
69 |
70 | To verify that the File Gateway is working correctly, create a simple file using the following command on the Linux server:
71 |
72 | $ echo "Hello World" > /mnt/nfs/file-via-nfs.txt
73 |
74 | Return to the AWS console, click **Services** and select **S3**. Click on the bucket that was created via CloudFormation in the previous module. Inside the bucket you should see a single object with a name that matches the file you just created on the Linux server.
75 |
76 | ![Object in the S3 bucket](/images/mod2-s3.png)
77 |
78 | When a file is written to File Gateway, it is cached locally on the gateway and then persisted as an object in the S3 bucket configured for the gateway share. The object stores both file data and metadata. Click on the file and then click the **Open** button. You should see "Hello World" displayed in your browser.
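
You can also confirm the object from the command line; a quick sketch, using the bucket name from the CloudFormation outputs:

    $ aws s3 ls s3://<bucketName>/
    # Stream the object to stdout; this should print "Hello World"
    $ aws s3 cp s3://<bucketName>/file-via-nfs.txt -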
79 |
80 | ## Module Summary
81 |
82 | In this module, you activated the File Gateway, created an NFS share on the gateway, mounted the share on the Linux server, and then wrote a file to the NFS share. You verified that the file was written successfully to the S3 bucket. With the file in the S3 bucket, it can now be accessed by many other AWS services that integrate with Amazon S3, including AWS Transfer Family.
83 |
84 | In the next module, you will configure your AWS Transfer server to allow access to the S3 bucket using common file transfer protocols such as SFTP.
85 |
86 | Go to [Module 3](/module3).
87 |
--------------------------------------------------------------------------------
/module3/README.md:
--------------------------------------------------------------------------------
1 | # Access data in Amazon S3 using AWS Transfer Family and AWS Storage Gateway
2 |
3 | © 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved.
4 | This sample code is made available under the MIT-0 license. See the LICENSE file.
5 |
6 | Errors or corrections? Contact [jeffbart@amazon.com](mailto:jeffbart@amazon.com).
7 |
8 | ---
9 |
10 | # Module 3
11 | ## Configure the AWS Transfer server
12 |
13 | In the previous module, you activated the File Gateway and created an NFS share to provide access to the S3 bucket. You saw how writing a file to the NFS share automatically created an object in S3. In this module, you will configure the AWS Transfer server to also provide access to the S3 bucket, but using the SFTP file transfer protocol instead. You will start by creating a user on the AWS Transfer server. You will then configure the Security Group on the VPC endpoint used by AWS Transfer to allow access from the Linux server. Finally, you will transfer a test file and see that it was placed correctly in S3.
14 |
15 | ## Module Steps
16 |
17 | #### 1. Create an SSH key on the Linux server
18 |
19 | AWS Transfer is designed to provide access to S3 using common file transfer protocols. Users or applications that want to access the Transfer server must be able to authenticate. AWS Transfer Family provides two methods of authentication: service-managed or a [custom identity provider](https://docs.aws.amazon.com/transfer/latest/userguide/authenticating-users.html). In this workshop, you'll use the service-managed identity provider, which authenticates users via SSH keys.
20 |
21 | Before you can create a user in AWS Transfer, you will first need to create an SSH key pair on the Linux server. Run the following command on the Linux server to generate an SSH key pair:
22 |
23 | $ ssh-keygen
24 |
25 | Press Enter several times to accept the default settings, then run the following command to display the public key:
26 |
27 | $ cat /home/ssm-user/.ssh/id_rsa.pub
28 |
29 | #### 2. Create a user on the AWS Transfer server
30 |
31 | 1. Go to the AWS Management console page, click **Services** then select **AWS Transfer Family.**
32 | 2. Click on the Server ID that matches the one in the CloudFormation outputs.
33 | 3. Scroll down to the **Users** section and click on the **Add user** button.
34 | 4. For the username, enter **userA**.
35 | 5. Under the **Access** drop-down, select the same IAM role that you used when creating the File Gateway NFS share. You can search for the string "s3BucketIamRole".
36 | 6. Keep **Policy** set to None.
37 | 7. For **Home directory**, select the S3 bucket that was created by CloudFormation. It begins with **"workshop-"**.
38 | 8. Check the **Restricted** check box. This will limit the user to viewing only the contents of the S3 bucket.
39 |
40 | ![Add user settings](/images/mod3-user.png)
41 |
42 | 9. Copy the entire SSH key string from the previous step and paste it into the box under **SSH public key**.
43 | 10. Click **Add**.
44 |
45 | You should now have a user listed in the **Users** section, as shown below:
46 |
47 | ![User list](/images/mod3-user-list.png)
48 |
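The same user can be created with the AWS CLI. A sketch, using placeholder values from the CloudFormation outputs; the logical home-directory mapping below approximates the **Restricted** check box:

    $ aws transfer create-user \
        --server-id <transferServerId> \
        --user-name userA \
        --role <iamRoleForS3Access> \
        --home-directory-type LOGICAL \
        --home-directory-mappings 'Entry=/,Target=/<bucketName>' \
        --ssh-public-key-body "$(cat /home/ssm-user/.ssh/id_rsa.pub)"
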
49 | #### 3. Configure the Security Group for the VPC endpoint
50 |
51 | Before you can connect to the AWS Transfer server from the Linux server, you will need to whitelist the Linux server IP address in the Security Group configured on the VPC endpoint generated by the AWS Transfer server. This is shown in the architecture diagram below:
52 |
53 | ![Workshop architecture](/images/fullarch.png)
54 |
55 | 1. Go to the AWS Management console page, click **Services** then select **VPC.**
56 | 2. On the left side of the page, click on **Endpoints**.
57 | 3. Check the box next to the endpoint where the **Service name** contains "transfer.server".
58 | 4. Click on the **Security Groups** tab to view the security groups assigned to the VPC endpoint. There should be only one security group in the list, which is the default security group for the VPC.
59 |
60 | ![Security groups assigned to the VPC endpoint](/images/mod3-sglist.png)
61 |
62 | 5. Click on the security group ID to view the rules for the security group.
63 | 6. In the new page that opens, click on the **Inbound rules** tab.
64 |
65 | ![Inbound rules for the security group](/images/mod3-sg-rules.png)
66 |
67 | 7. Click on the **Edit inbound rules** button.
68 | 8. Click on **Add rule**.
69 | 9. Under **Port range** on the new rule, enter "22", which is the default port used by the SFTP protocol.
70 | 10. Under the **Source**, select "Custom" and then in the adjacent box, enter the **Private IP address** of the Linux server, followed by "/32". You can get the private IP address from the CloudFormation outputs.
71 | 11. Scroll down and click the **Save rules** button.
72 |
73 | ![Editing inbound rules](/images/mod3-edit-sg.png)
74 |
75 | In AWS, security groups act like firewalls, controlling inbound and outbound traffic to AWS resources. By editing the security group, you added a new rule allowing inbound traffic from the Linux server to reach the VPC endpoint, which then automatically routes traffic to the AWS Transfer server.
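
The same rule can be added from the command line; a sketch with placeholder values:

    $ aws ec2 authorize-security-group-ingress \
        --group-id <security-group-id> \
        --protocol tcp --port 22 \
        --cidr <linuxServerPrivateIP>/32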
76 |
77 | #### 4. Connect using SFTP
78 |
79 | With the user added and the security group configured, you can now access the S3 bucket via SFTP from your Linux server. You will connect to the AWS Transfer server using a private IP address allocated when the Transfer server was created. Go to the AWS Transfer page for your server and go to the **Endpoint configuration** section. Copy the **Private IPv4 Address**.
80 |
81 | ![Endpoint configuration](/images/mod3-endpoint.png)
82 |
83 | Return to the CLI and enter the following commands to change to your home directory and then connect to the AWS Transfer server via SFTP. Replace the IP address with the one you just copied.
84 |
85 | $ cd ~
86 | $ sftp userA@10.11.12.40
87 |
88 | If you are prompted to continue connecting, enter "yes" then hit Enter. Once you are connected, you will be presented with a new prompt. Enter the following command:
89 |
90 | sftp> ls
91 |
92 | You should see the file named "file-via-nfs.txt" that was created in the previous module, when you wrote the file via File Gateway. Run the following command to copy the file to the Linux server and then quit out of sftp:
93 |
94 | sftp> get file-via-nfs.txt
95 | sftp> quit
96 |
97 | ## Validation Step
98 |
99 | The **get** command you ran previously made a local copy of the file on the Linux server, in your home directory. Run the following command to see the contents of the file:
100 |
101 | $ cat file-via-nfs.txt
102 |
103 | You should see "Hello World" printed.
104 |
105 | ## Module Summary
106 |
107 | In this module, you created a user in AWS Transfer, configured the VPC endpoint security group to allow access from the Linux server, and then connected to the AWS Transfer server using SFTP. Once connected, you saw that the file created in S3 via File Gateway in the previous module was also visible via SFTP.
108 |
109 | In the next module, you will learn how to coordinate data workflows between AWS Transfer Family and Storage Gateway by writing a file via SFTP and using the RefreshCache API on File Gateway.
110 |
111 | Go to [Module 4](/module4).
112 |
--------------------------------------------------------------------------------
/module4/README.md:
--------------------------------------------------------------------------------
1 | # Access data in Amazon S3 using AWS Transfer Family and AWS Storage Gateway
2 |
3 | © 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved.
4 | This sample code is made available under the MIT-0 license. See the LICENSE file.
5 |
6 | Errors or corrections? Contact [jeffbart@amazon.com](mailto:jeffbart@amazon.com).
7 |
8 | ---
9 |
10 | # Module 4
11 | ## Using RefreshCache to see changes to S3 in File Gateway
12 |
13 | In the previous module, you configured AWS Transfer to allow access via SFTP from the Linux server. You then connected to the AWS Transfer server and copied the file that was in the S3 bucket, which had previously been written there via File Gateway. You now have everything in place to write files via File Gateway and read them via SFTP through AWS Transfer. In this module, you will write files via SFTP and read them via File Gateway, using the RefreshCache API to update the metadata cache on the gateway.
14 |
15 | ## Module Steps
16 |
17 | #### 1. Write to S3 via SFTP
18 |
19 | Return to the CLI and enter the following commands to generate a file on the Linux server and get an MD5 checksum of the new file:
20 |
21 | $ cd ~
22 | $ dd if=/dev/urandom of=file-via-sftp.dat bs=1M count=1
23 | $ md5sum file-via-sftp.dat
24 |
25 | Note the MD5 checksum and then log back into the AWS Transfer server using the following command, replacing the IP address below with the one for your AWS Transfer server from the previous module:
26 |
27 | $ sftp userA@10.11.12.40
28 |
29 | Run the following command to copy the file to S3 via SFTP over the Transfer server and then quit the SFTP session:
30 |
31 | sftp> put file-via-sftp.dat
32 | sftp> quit
33 |
34 | Return to the AWS console, click **Services** and select **S3**. Click on the bucket that was created via CloudFormation in the first module. Inside the bucket you should now see two objects: the first one created via File Gateway and the second one that was just uploaded.
35 |
36 | ![Two objects in the S3 bucket](/images/mod4-s3.png)
37 |
38 | #### 2. Read the new file using File Gateway
39 |
40 | Return to the CLI and enter the following commands to list the contents of the top level folder on the File Gateway:
41 |
42 | $ ls /mnt/nfs
43 |
44 | You should see only the original file. Where is the new file you just wrote via SFTP?
45 |
46 | In this case, the file was written to the S3 bucket via AWS Transfer, instead of through the File Gateway share. File Gateway is not aware that there are new objects in the bucket. In order to see the new file on the Linux server, you need to refresh the metadata cache on the File Gateway.
47 |
48 | Return to the AWS management console and go to the **Storage Gateway** service. On the left side of the page, click on **File shares** and select the NFS share that you created earlier from the list. Click on the **Actions** button and select **Refresh cache** then click **Start**.
49 |
50 | ![Refresh cache](/images/mod4-refresh.png)
51 |
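The refresh can also be triggered programmatically, which is how you would coordinate this step in an automated workflow. A sketch using the AWS CLI, with a placeholder share ARN:

    # Find the ARN of the NFS file share created in Module 2
    $ aws storagegateway list-file-shares

    # Ask the gateway to re-read the bucket contents and update its metadata cache
    $ aws storagegateway refresh-cache --file-share-arn <file-share-arn>
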
52 | Return to the CLI and list the folder again using the following command:
53 |
54 | $ ls /mnt/nfs
55 |
56 | This time you should see both files that are in the S3 bucket. Run the following command to get the MD5 checksum of the file via File Gateway:
57 |
58 | $ md5sum /mnt/nfs/file-via-sftp.dat
59 |
60 | The checksum should match the one you generated earlier, indicating that the file was successfully copied to S3 via SFTP and then accessed intact via File Gateway.
61 |
62 | ## Module Summary
63 |
64 | In this module, you wrote a file to S3 via SFTP and then learned how to use the RefreshCache API to make the file visible on the File Gateway. You've now shown that you can write to and read files from an S3 bucket using both AWS Transfer and File Gateway. You can now create workflows that use both file transfer and file storage protocols in tandem.
65 |
66 | In the next module, you will clean up the resources you created in this workshop.
67 |
68 | Go to [Module 5](/module5).
69 |
--------------------------------------------------------------------------------
/module5/README.md:
--------------------------------------------------------------------------------
1 | # Access data in Amazon S3 using AWS Transfer Family and AWS Storage Gateway
2 |
3 | © 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved.
4 | This sample code is made available under the MIT-0 license. See the LICENSE file.
5 |
6 | Errors or corrections? Contact [jeffbart@amazon.com](mailto:jeffbart@amazon.com).
7 |
8 | ---
9 |
10 | # Module 5
11 | ## Workshop clean-up
12 |
13 | To make sure all resources are deleted after this workshop, execute the steps in the order outlined below (you do not need to wait for each deletion to finish before moving on to the next step):
14 |
15 | 1. Unmount the File Gateway NFS share on the Linux server by running the following command:
16 |
17 | $ sudo umount /mnt/nfs
18 |
19 | 2. Close the browser window running the CLI.
20 | 3. Go to the Storage Gateway page in the AWS console and delete the **NFS file share**.
21 | 4. On the left side of the Storage Gateway console, click on **Gateways** and delete the File Gateway named **WorkshopGateway**. Note that this will not delete the gateway EC2 instance itself. The instance will get deleted when the CloudFormation stack is deleted.
22 | 5. Delete all objects in the S3 bucket that was created for this workshop. The bucket must be empty before it can be deleted by CloudFormation in the next step.
23 | 6. Go to the CloudFormation page and delete the stack named **TransferWorkshop**. It will take a few minutes for the stack to be fully deleted.
24 |
25 | To confirm that the CloudFormation stack was deleted correctly, verify that the EC2 instances created in this workshop are in the **terminated** state.
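
If you prefer the command line, steps 5 and 6 can be scripted; a sketch using the bucket name from the CloudFormation outputs:

    $ aws s3 rm s3://<bucketName> --recursive
    $ aws cloudformation delete-stack --stack-name TransferWorkshop
    $ aws cloudformation wait stack-delete-complete --stack-name TransferWorkshop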
26 |
--------------------------------------------------------------------------------
/templates/transfer-storage-gateway-workshop.yaml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: '2010-09-09'
2 | Description: AWS Transfer and File Gateway Workshop
3 | Metadata:
4 | License:
5 | Description: |
6 | Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
7 |
8 | Permission is hereby granted, free of charge, to any person obtaining a copy of this
9 | software and associated documentation files (the "Software"), to deal in the Software
10 | without restriction, including without limitation the rights to use, copy, modify,
11 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
12 | permit persons to whom the Software is furnished to do so.
13 |
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
15 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
16 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
18 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
19 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
20 |
21 | AWS::CloudFormation::Interface:
22 | ParameterGroups:
23 | - Label:
24 | default: Network
25 | Parameters:
26 | - cidrBlock
27 | - Label:
28 | default: AMI IDs (do not edit)
29 | Parameters:
30 | - linuxAmi
31 | - fgwAmi
32 | ParameterLabels:
33 | cidrBlock:
34 | default: 'VPC CIDR Block'
35 | linuxAmi:
36 | default: 'Linux'
37 | fgwAmi:
38 | default: 'File Gateway'
39 |
40 | Parameters:
41 | cidrBlock:
42 |     Type: 'String'
43 |     Default: '10.11.12.0/24'
44 |   linuxAmi:
45 |     Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
46 |     Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
47 |   fgwAmi:
48 |     Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
49 |     Default: '/aws/service/storagegateway/ami/FILE_S3/latest'
50 |
51 | Resources:
52 |
53 | # Create a dedicated VPC with one public subnet and internet connectivity
54 | dmVPC:
55 | Type: AWS::EC2::VPC
56 | Properties:
57 | CidrBlock: !Ref cidrBlock
58 | EnableDnsSupport: 'true'
59 | EnableDnsHostnames: 'true'
60 | InstanceTenancy: default
61 | Tags:
62 | - Key: Name
63 | Value: WorkshopVPC
64 | dmSubnet1:
65 | Type: AWS::EC2::Subnet
66 | Properties:
67 | VpcId: !Ref 'dmVPC'
68 | CidrBlock: !Ref cidrBlock
69 | MapPublicIpOnLaunch: 'True'
70 | Tags:
71 | - Key: Name
72 | Value: WorkshopSubnet1
73 | dmInternetGateway:
74 | Type: AWS::EC2::InternetGateway
75 | Properties:
76 | Tags:
77 | - Key: Name
78 | Value: WorkshopIGW
79 | dmAttachGateway:
80 | Type: AWS::EC2::VPCGatewayAttachment
81 | Properties:
82 | VpcId: !Ref 'dmVPC'
83 | InternetGatewayId: !Ref 'dmInternetGateway'
84 | dmRouteTable:
85 | Type: AWS::EC2::RouteTable
86 | Properties:
87 | VpcId: !Ref 'dmVPC'
88 | Tags:
89 | - Key: Name
90 | Value: WorkshopRouteTable
91 | dmSubnet1RouteAssociaton:
92 | Type: AWS::EC2::SubnetRouteTableAssociation
93 | Properties:
94 | SubnetId: !Ref 'dmSubnet1'
95 | RouteTableId: !Ref 'dmRouteTable'
96 | dmRoutetoInternet:
97 | Type: AWS::EC2::Route
98 |     DependsOn: dmAttachGateway
99 | Properties:
100 | RouteTableId: !Ref 'dmRouteTable'
101 | DestinationCidrBlock: 0.0.0.0/0
102 | GatewayId: !Ref 'dmInternetGateway'
103 |
104 |   # We use the same security group for all of the workshop resources. Technically port 80
105 |   # is only needed by the File Gateway (for activation), but nothing is listening on that port on the other servers.
106 | dmSecurityGroup:
107 | Type: AWS::EC2::SecurityGroup
108 | Properties:
109 | GroupDescription: Workshop - Security Group for all resources
110 | VpcId: !Ref 'dmVPC'
111 | SecurityGroupIngress:
112 | - IpProtocol: tcp
113 | FromPort: '22'
114 | ToPort: '22'
115 | CidrIp: '0.0.0.0/0'
116 | - IpProtocol: tcp
117 | FromPort: '80'
118 | ToPort: '80'
119 | CidrIp: '0.0.0.0/0'
120 |
121 |   # Added separately so that NFS (port 2049) access is limited to members of this security group
122 | dmSecurityGroupIngress:
123 | Type: AWS::EC2::SecurityGroupIngress
124 | DependsOn: dmSecurityGroup
125 | Properties:
126 | GroupId: !Ref 'dmSecurityGroup'
127 | IpProtocol: tcp
128 | ToPort: '2049'
129 | FromPort: '2049'
130 | SourceSecurityGroupId: !Ref 'dmSecurityGroup'
131 |
132 | transferServer:
133 | Type: AWS::Transfer::Server
134 | Properties:
135 | EndpointDetails:
136 | SubnetIds:
137 | - !Ref 'dmSubnet1'
138 | VpcId: !Ref dmVPC
139 | EndpointType: VPC
140 | IdentityProviderType: SERVICE_MANAGED
141 | Protocols:
142 | - SFTP
143 |
144 | linuxServerInstanceProfile:
145 | Type: AWS::IAM::InstanceProfile
146 | Properties:
147 | Path: /
148 | Roles:
149 | - !Ref 'linuxServerIamRole'
150 | linuxServerIamRole:
151 | Type: AWS::IAM::Role
152 | Properties:
153 | AssumeRolePolicyDocument:
154 | Statement:
155 | - Action:
156 | - sts:AssumeRole
157 | Effect: Allow
158 | Principal:
159 | Service:
160 | - ec2.amazonaws.com
161 | Version: '2012-10-17'
162 | ManagedPolicyArns:
163 | - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
164 | # linuxServerRolePolicy:
165 | # Type: AWS::IAM::Policy
166 | # Properties:
167 | # PolicyDocument:
168 | # Statement:
169 | # - Effect: Allow
170 | # Action:
171 | # - s3:ListBucket
172 | # Resource:
173 | # - arn:aws:s3:::aft-vbi-pds
174 | # - Effect: Allow
175 | # Action:
176 | # - s3:GetObject
177 | # Resource:
178 | # - arn:aws:s3:::aft-vbi-pds/*
179 | # Version: '2012-10-17'
180 | # PolicyName: policy
181 | # Roles:
182 | # - !Ref 'linuxServerIamRole'
183 | linuxServer:
184 | Type: AWS::EC2::Instance
185 | Properties:
186 | ImageId: !Ref linuxAmi
187 | InstanceType: t2.micro
188 | IamInstanceProfile: !Ref 'linuxServerInstanceProfile'
189 | Tags:
190 | - Key: Name
191 | Value: Workshop-LinuxServer
192 | InstanceInitiatedShutdownBehavior: terminate
193 | BlockDeviceMappings:
194 | - DeviceName: /dev/xvda
195 | Ebs:
196 | VolumeSize: '8'
197 | DeleteOnTermination: 'true'
198 | VolumeType: gp2
199 | NetworkInterfaces:
200 | - AssociatePublicIpAddress: 'true'
201 | DeviceIndex: '0'
202 | GroupSet:
203 | - !Ref 'dmSecurityGroup'
204 | SubnetId: !Ref 'dmSubnet1'
205 |
206 | fileGatewayInstanceProfile:
207 | Type: AWS::IAM::InstanceProfile
208 | Properties:
209 | Path: /
210 | Roles:
211 | - !Ref 'fileGatewayIamRole'
212 | fileGatewayIamRole:
213 | Type: AWS::IAM::Role
214 | Properties:
215 | AssumeRolePolicyDocument:
216 | Statement:
217 | - Action:
218 | - sts:AssumeRole
219 | Effect: Allow
220 | Principal:
221 | Service:
222 | - ec2.amazonaws.com
223 | Version: '2012-10-17'
224 | fileGatewayRolePolicy:
225 | Type: AWS::IAM::Policy
226 | Properties:
227 | PolicyDocument:
228 | Statement:
229 | - Effect: Allow
230 | Action:
231 | - storagegateway:*
232 | Resource:
233 | - '*'
234 | - Effect: Allow
235 | Action:
236 | - iam:PassRole
237 | Resource:
238 | - '*'
239 | Version: '2012-10-17'
240 | PolicyName: policy
241 | Roles:
242 | - !Ref 'fileGatewayIamRole'
243 | fileGateway:
244 | Type: AWS::EC2::Instance
245 | Properties:
246 | ImageId: !Ref fgwAmi
247 | InstanceType: c4.2xlarge
248 | IamInstanceProfile: !Ref 'fileGatewayInstanceProfile'
249 | Tags:
250 | - Key: Name
251 | Value: Workshop-FileGateway
252 | InstanceInitiatedShutdownBehavior: stop
253 | BlockDeviceMappings:
254 | - DeviceName: /dev/xvda
255 | Ebs:
256 | VolumeSize: '80'
257 | DeleteOnTermination: 'true'
258 | VolumeType: gp2
259 | - DeviceName: /dev/xvdc
260 | Ebs:
261 | VolumeSize: '300'
262 | DeleteOnTermination: 'true'
263 | VolumeType: gp2
264 | NetworkInterfaces:
265 | - AssociatePublicIpAddress: 'true'
266 | DeviceIndex: '0'
267 | GroupSet:
268 | - !Ref 'dmSecurityGroup'
269 | SubnetId: !Ref 'dmSubnet1'
270 |
271 | # We use the GUID from the ARN of the stack ID to generate
272 | # a unique bucket name
273 | s3Bucket:
274 | Type: AWS::S3::Bucket
275 | Properties:
276 | PublicAccessBlockConfiguration:
277 | BlockPublicAcls: True
278 | BlockPublicPolicy: True
279 | IgnorePublicAcls: True
280 | RestrictPublicBuckets: True
281 | BucketName: !Join
282 | - "-"
283 | - - "workshop"
284 | - !Select
285 | - 2
286 | - !Split
287 | - "/"
288 | - !Ref "AWS::StackId"
289 |
290 | # Both Transfer and Storage Gateway need a role to access the bucket. We'll keep things simple
291 | # and create one role for both, with full access to S3.
292 | s3BucketIamRole:
293 | Type: AWS::IAM::Role
294 | Properties:
295 | AssumeRolePolicyDocument:
296 | Statement:
297 | - Action:
298 | - sts:AssumeRole
299 | Effect: Allow
300 | Principal:
301 | Service:
302 | - storagegateway.amazonaws.com
303 | - transfer.amazonaws.com
304 | Version: '2012-10-17'
305 | s3BucketRolePolicy:
306 | Type: AWS::IAM::Policy
307 | DependsOn: s3Bucket
308 | Properties:
309 | PolicyDocument:
310 | Statement:
311 | - Effect: Allow
312 | Resource:
313 | - !GetAtt s3Bucket.Arn
314 | - !Join [ "/", [ !GetAtt s3Bucket.Arn, "*" ] ]
315 | Action:
316 | - s3:*
317 | Version: '2012-10-17'
318 | PolicyName: policy
319 | Roles:
320 | - !Ref 's3BucketIamRole'
321 |
322 | Outputs:
323 | bucketName:
324 | Description: S3 Bucket Name
325 | Value: !Ref s3Bucket
326 | iamRoleForS3Access:
327 | Description: S3 IAM Role for Transfer and File Gateway
328 | Value: !GetAtt s3BucketIamRole.Arn
329 | linuxServerPrivateIP:
330 | Description: Linux Server Private IP Address
331 | Value: !GetAtt linuxServer.PrivateIp
332 | fileGatewayPublicIP:
333 | Description: File Gateway Public IP Address
334 | Value: !GetAtt fileGateway.PublicIp
335 | transferServerId:
336 | Description: AWS Transfer Server ID
337 | Value: !GetAtt transferServer.ServerId
338 |
--------------------------------------------------------------------------------