├── images
│   └── latency.jpg
├── resources
│   ├── constants.py
│   ├── redis_dbsize.py
│   ├── flush_redis.py
│   ├── delete_S3_objects.py
│   ├── load_data.py
│   ├── query_S3.py
│   └── query_redis.py
├── CODE_OF_CONDUCT.md
├── setup
│   └── s3_redis_project_setup.sh
├── LICENSE.txt
├── examples
│   └── lazyload
│       └── lazy_load.py
├── cfn
│   └── S3RedisCFN.yaml
├── CONTRIBUTING.md
└── README.md

/images/latency.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-S3-cache-with-amazon-elasticache-redis/HEAD/images/latency.jpg
--------------------------------------------------------------------------------

/resources/constants.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

### Required: update with your Redis endpoint (hostname only, no port) ###
redishost = ""
### Required: update with your S3 bucket name ###
S3bucket = ""
--------------------------------------------------------------------------------

/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information, see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.
--------------------------------------------------------------------------------

/setup/s3_redis_project_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash -xe
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

sudo yum -y update
pip install awscli --user
pip install redis --user

echo -e "\n\n** Sample app setup completed, enjoy! **\n\n"
--------------------------------------------------------------------------------

/resources/redis_dbsize.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

import redis
import constants

r = redis.StrictRedis(host=constants.redishost, port=6379)

# Report how many keys are currently cached
print("Redis DB size = " + str(r.dbsize()))

print("Completed Successfully!")
--------------------------------------------------------------------------------

/resources/flush_redis.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

import redis
import constants

r = redis.StrictRedis(host=constants.redishost, port=6379)

# Drop all data in Redis
r.flushdb()
print("Redis DB size now = " + str(r.dbsize()))

print("Completed Successfully!")
--------------------------------------------------------------------------------

/resources/delete_S3_objects.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
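# Cleanup helper: deletes the 100 sample objects (filename0.txt through
# filename99.txt) that load_data.py created in the bucket named in constants.py.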

import boto3
import constants

s3 = boto3.resource('s3')

# Iterate through all created files and delete them
for i in range(100):
    s3.Object(constants.S3bucket, 'filename' + str(i) + '.txt').delete()

print("Completed Successfully!")
--------------------------------------------------------------------------------

/resources/load_data.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

import boto3
import redis
import constants

r = redis.StrictRedis(host=constants.redishost, port=6379)

# Load 100 small objects into S3 and cache each one in Redis
s3 = boto3.resource('s3')
for i in range(100):
    # PUT the object in S3
    obj = s3.Object(constants.S3bucket, 'filename' + str(i) + '.txt')
    obj.put(Body='This is some generated data for filename' + str(i) + '.txt')
    # Cache the same data in Redis [KEY = bucket:filenameX.txt]
    r.set(constants.S3bucket + ':filename' + str(i) + '.txt', 'This is some generated data for filename' + str(i) + '.txt')

print("Data loaded successfully!")
--------------------------------------------------------------------------------

/LICENSE.txt:
--------------------------------------------------------------------------------
MIT No Attribution

Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------

/examples/lazyload/lazy_load.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
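# Lazy-loading pattern: look for the object in Redis first; on a cache miss,
# fetch it from S3 and cache it so subsequent reads are served from Redis.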

import boto3
import datetime
import redis
import sys
sys.path.insert(0, 'amazon-S3-cache-with-amazon-elasticache-redis/resources/')
import constants

r = redis.StrictRedis(host=constants.redishost, port=6379)

s3 = boto3.resource('s3')

### Sample object to test with
S3ObjectKey = "filename0.txt"

### Check whether the value is already in Redis
value = r.get(constants.S3bucket + ':' + S3ObjectKey)

if value is None:
    print("Cache Miss")
    ### Get the data from S3
    obj = s3.Object(constants.S3bucket, S3ObjectKey)
    data = obj.get()['Body'].read().decode('utf-8')
    ### Store the data in Redis for the next request
    r.set(constants.S3bucket + ':' + S3ObjectKey, data)
    print("Data retrieved from S3 = " + data)
else:
    print("Cache Hit")
    print("Data retrieved from redis = " + value.decode('utf-8'))
--------------------------------------------------------------------------------

/resources/query_S3.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

import boto3
import datetime
import constants

# Query and time individual GET requests on the S3 objects
s3 = boto3.resource('s3')

timings = []

for i in range(100):
    # Start timer
    start = datetime.datetime.now()

    obj = s3.Object(constants.S3bucket, 'filename' + str(i) + '.txt')
    data = obj.get()['Body'].read().decode('utf-8')

    # End timer
    end = datetime.datetime.now()

    # Convert each result to microseconds
    delta = end - start
    micros = delta.seconds * 1000000 + delta.microseconds
    timings.append(micros)

# Throw out the first request due to initialization overhead
timings.pop(0)

# Print timing summary
average = sum(timings) / len(timings)

print("=====Timing=====\n")
print("Average Latency in Microseconds: ", average)
print("MAX Latency in Microseconds: ", max(timings))
print("MIN Latency in Microseconds: ", min(timings))
print("\nCompleted Successfully!")
--------------------------------------------------------------------------------

/resources/query_redis.py:
--------------------------------------------------------------------------------
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
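# Timing harness: issues 100 GET requests against the cached keys in Redis and
# reports average/max/min latency in microseconds (compare with query_S3.py).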

import datetime
import redis
import constants

r = redis.StrictRedis(host=constants.redishost, port=6379)

timings = []

for i in range(100):
    # Start timer
    start = datetime.datetime.now()

    value = r.get(constants.S3bucket + ':filename' + str(i) + '.txt')

    # End timer
    end = datetime.datetime.now()

    # Convert each result to microseconds
    delta = end - start
    micros = delta.seconds * 1000000 + delta.microseconds
    timings.append(micros)

# Throw out the first request due to initialization overhead
timings.pop(0)

# Print timing summary
average = sum(timings) / len(timings)

print("=====Timing=====\n")
print("Average Latency in Microseconds: ", average)
print("MAX Latency in Microseconds: ", max(timings))
print("MIN Latency in Microseconds: ", min(timings))
print("\nCompleted Successfully!")
--------------------------------------------------------------------------------

/cfn/S3RedisCFN.yaml:
--------------------------------------------------------------------------------
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon S3 & Amazon ElastiCache for Redis & AWS Cloud9 template'
##########################################################################

Parameters:
  SubnetID:
    Description: 'The subnet to be used with Amazon ElastiCache and AWS Cloud9'
    Type: String

Resources:

  Cloud9:
    Type: "AWS::Cloud9::EnvironmentEC2"
    Properties:
      Description: "Cloud9 environment for workshop"
      Name: "S3RedisCache"
      InstanceType: c4.large
      SubnetId: !Ref SubnetID
      AutomaticStopTimeMinutes: 15

  ElastiCacheSubnetGroup:
    Type: AWS::ElastiCache::SubnetGroup
    Properties:
      Description: Cache Subnet Group
      SubnetIds:
        - !Ref SubnetID

  RedisClusterSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupName: Redis-S3-Test
      GroupDescription: ElastiCache Redis Security Group
      # Open on the Redis port to all addresses for workshop simplicity;
      # restrict the CIDR range for anything beyond this tutorial.
      SecurityGroupIngress:
        - IpProtocol: tcp
          CidrIp: 0.0.0.0/0
          FromPort: 6379
          ToPort: 6379

  RedisCluster:
    Type: AWS::ElastiCache::CacheCluster
    Properties:
      CacheSubnetGroupName:
        Ref: ElastiCacheSubnetGroup
      CacheNodeType: cache.m5.xlarge
      ClusterName: 'Redis-S3-Test'
      VpcSecurityGroupIds:
        - Fn::GetAtt:
            - RedisClusterSecurityGroup
            - GroupId
      Engine: redis
      NumCacheNodes: '1'

  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Description: Amazon S3 Input bucket for AWS ElastiCache Workshop

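# The outputs below surface the values needed in resources/constants.py
# (the Redis endpoint and the generated S3 bucket name), plus the port
# and security group ID for reference.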
Outputs:
  RedisHost:
    Description: ElastiCache Endpoint
    Value:
      Fn::GetAtt:
        - RedisCluster
        - RedisEndpoint.Address
  ElastiCachePort:
    Description: ElastiCache Port
    Value:
      Fn::GetAtt:
        - RedisCluster
        - RedisEndpoint.Port
  ElastiCacheSG:
    Description: ElastiCache SecurityGroup Id
    Value:
      Ref: RedisClusterSecurityGroup
  S3Bucket:
    Description: 'S3 Bucket Name'
    Value:
      Ref: S3Bucket
--------------------------------------------------------------------------------

/CONTRIBUTING.md:
--------------------------------------------------------------------------------
# Contributing Guidelines

Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.


## Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check [existing open](https://github.com/aws-samples/amazon-S3-cache-with-amazon-elasticache-redis/issues) or [recently closed](https://github.com/aws-samples/amazon-S3-cache-with-amazon-elasticache-redis/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20) issues to make sure somebody else hasn't already
reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

* A reproducible test case or series of steps
* The version of our code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment


## Contributing via Pull Requests
Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the *master* branch.
2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).


## Finding contributions to work on
Looking at the existing issues is a great way to find something to contribute. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws-samples/amazon-S3-cache-with-amazon-elasticache-redis/labels/help%20wanted) issues is a great place to start.


## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information, see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.


## Security issue notifications
If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.


## Licensing

See the [LICENSE](https://github.com/aws-samples/amazon-S3-cache-with-amazon-elasticache-redis/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.

We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# Caching Amazon S3 with Amazon ElastiCache for Redis

This sample project demonstrates how you can cache [Amazon S3](https://aws.amazon.com/s3/) objects with [Amazon ElastiCache for Redis](https://aws.amazon.com/elasticache/redis/). It uses [AWS CloudFormation](https://aws.amazon.com/cloudformation/) and [AWS Cloud9](https://aws.amazon.com/cloud9/) to deploy, build, and run the tutorial, although you can run it in your own environment as well.

These examples are also referenced in the following [blog](https://aws.amazon.com/blogs/storage/turbocharge-amazon-s3-with-amazon-elasticache-for-redis/), which provides background and context for this project. Reading the blog first is recommended.

## Deployment

1. Download this repository from GitHub, then launch the following template with AWS CloudFormation (an AWS CLI alternative is shown after step 2): [cfn/S3RedisCFN.yaml](https://raw.githubusercontent.com/aws-samples/amazon-S3-cache-with-amazon-elasticache-redis/master/cfn/S3RedisCFN.yaml)

2. When launching the stack, you will be prompted for the subnet ID in which AWS Cloud9 and Amazon ElastiCache should run. Enter a subnet ID, then click next, next, create. (Note: Using a single subnet ensures both services run in the same Availability Zone for optimal performance. You can find your subnet IDs in the Amazon VPC console. Be sure the subnet is associated with a route table that routes to an internet gateway, which Cloud9 requires.)
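If you prefer the command line to the console, a CLI launch along these lines should also work; the stack name and subnet ID below are placeholders to replace with your own values:

```
aws cloudformation create-stack \
  --stack-name s3-redis-cache \
  --template-body file://cfn/S3RedisCFN.yaml \
  --parameters ParameterKey=SubnetID,ParameterValue=subnet-0123456789abcdef0
```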
## Setup and Build

1. When the stack completes, take note of the generated S3 bucket name and the Redis endpoint in the CloudFormation Outputs tab. Then navigate to AWS Cloud9 and open the **S3RedisCache** IDE environment.

2. Within the AWS Cloud9 environment, open (+) a new terminal and clone this repository:

```
(ssh)
git clone git@github.com:aws-samples/amazon-S3-cache-with-amazon-elasticache-redis.git

(https)
git clone https://github.com/aws-samples/amazon-S3-cache-with-amazon-elasticache-redis.git
```
3. Navigate to the downloaded setup directory (amazon-S3-cache-with-amazon-elasticache-redis/setup) and run the following script to prepare your environment:

```
cd amazon-S3-cache-with-amazon-elasticache-redis/setup
sh s3_redis_project_setup.sh
```

4. Navigate to the resources directory (amazon-S3-cache-with-amazon-elasticache-redis/resources) and update the following properties in **constants.py**, providing the generated resource values you captured in the CloudFormation outputs:

```
redishost="" (leave out the port)
S3bucket= ""
```
5. Right-click and run **load_data.py**. This generates and loads 100 objects into both Amazon S3 and Amazon ElastiCache for Redis.

6. Right-click and run **query_redis.py** and **query_S3.py**, then compare the generated latency output (in microseconds).

You will notice a significant performance improvement when querying Redis versus S3. This performance test is intentionally lightweight and for illustration purposes only; your results may vary slightly based on your environment. An example comparison between the two services, converted to milliseconds, is shown below:

![latency](images/latency.jpg)

## Lazy-load example

A common caching technique is lazy loading: the application assumes data is cached, and on a cache miss retrieves the data from the origin data source and caches it for future requests (a variant with an expiry is sketched at the end of this section). To illustrate this example, we must first flush the Redis cache.

1. Right-click and run **flush_redis.py** (this deletes all your keys).

2. Right-click and run **lazy_load.py**, found in the following directory (amazon-S3-cache-with-amazon-elasticache-redis/examples/lazyload). On the first run, you will see a cache miss because the object was not yet cached in Redis. Run the script again and you will see a cache hit, since the object was set into Redis after the initial cache miss.
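In production, lazily loaded keys are usually written with a time-to-live (TTL) so stale data eventually expires from the cache. The following is a minimal sketch of that variant, assuming the same constants.py settings as lazy_load.py and a hypothetical 300-second TTL (redis-py's `ex` argument sets the expiry in seconds):

```
import boto3
import redis
import sys
sys.path.insert(0, 'amazon-S3-cache-with-amazon-elasticache-redis/resources/')
import constants

r = redis.StrictRedis(host=constants.redishost, port=6379)
s3 = boto3.resource('s3')
S3ObjectKey = "filename0.txt"

# Check the cache first
value = r.get(constants.S3bucket + ':' + S3ObjectKey)

if value is None:
    print("Cache Miss")
    # On a miss, fetch from S3 and cache with a TTL so the key
    # expires automatically after 300 seconds
    data = s3.Object(constants.S3bucket, S3ObjectKey).get()['Body'].read().decode('utf-8')
    r.set(constants.S3bucket + ':' + S3ObjectKey, data, ex=300)
else:
    print("Cache Hit")
```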
## Terminate your environment

When you have finished running these examples, terminate your environment with the following steps:

1. Right-click and run **delete_S3_objects.py** in the resources directory (amazon-S3-cache-with-amazon-elasticache-redis/resources). This deletes all your generated S3 objects. (Note: the stack's S3 bucket must be empty before AWS CloudFormation can delete it.)

2. Next, within the AWS CloudFormation console, delete the stack you launched.
--------------------------------------------------------------------------------