├── control-plane ├── custom-resources │ ├── src_macro │ │ ├── __init__.py │ │ └── index.py │ ├── src_custom_resources │ │ ├── requirements.txt │ │ ├── unique_id.py │ │ ├── cfnresponse.py │ │ └── subnets_checker.py │ └── custom-resources.yaml ├── storage-manager │ └── storage-manager.yaml └── control-plane.yaml ├── docs ├── DataPlane.png └── ControlPlane.png ├── CHANGELOG.md ├── CODE_OF_CONDUCT.md ├── .github ├── ISSUE_TEMPLATE │ ├── ---say-thank-you.md │ ├── ---bug-report.md │ └── ---feature-request.md └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── LICENSE ├── CONTRIBUTING.md ├── data-plane ├── data-plane.yaml └── fargate-sql-service │ └── fargate-sql-service.yaml └── README.md /control-plane/custom-resources/src_macro/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /control-plane/custom-resources/src_custom_resources/requirements.txt: -------------------------------------------------------------------------------- 1 | requests 2 | -------------------------------------------------------------------------------- /docs/DataPlane.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-dev-fabric-for-sql-server/HEAD/docs/DataPlane.png -------------------------------------------------------------------------------- /docs/ControlPlane.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-dev-fabric-for-sql-server/HEAD/docs/ControlPlane.png -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | 3 | ## [0.2.0] 4 | 5 | - Introducing changelog. 6 | - Adding Office hours feature. 7 | - Introducing native support for Capacity Providers. 
8 | - Enabling Fargate Spot. 9 | - Fixing bug with backups not executing. 10 | - Avoid re-creation of services when updating the stack. 11 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/---say-thank-you.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: "Say thank you!" 3 | about: "Tell us how you use Dev fabric for SQL Server \U0001F60D" 4 | title: '' 5 | labels: Thanks 6 | assignees: '' 7 | 8 | --- 9 | 10 | #### ❤️ I'm using the `AWS Dev fabric for SQL Server` solution 11 | 12 | If you (or your company) are using the `AWS Dev fabric for SQL Server` - please let us know. We'd love to hear from you! We are open to feedback and advice. 13 | 14 | Thank you! 
15 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | ### 2 | # Misc 3 | # 4 | 5 | # Standard files 6 | **/.DS_Store 7 | **/.ropeproject 8 | 9 | # Ignore packaged templates 10 | **/packaged-* 11 | 12 | ### 13 | # Python files 14 | # 15 | **/*.pyc 16 | **/.env 17 | **/.venv 18 | **/env/ 19 | **/venv/ 20 | **/ENV/ 21 | **/env.bak/ 22 | **/venv.bak/ 23 | **/.Python 24 | **/build/ 25 | **/develop-eggs/ 26 | **/dist/ 27 | **/downloads/ 28 | **/eggs/ 29 | **/.eggs/ 30 | **/lib/ 31 | **/lib64/ 32 | **/parts/ 33 | **/sdist/ 34 | **/var/ 35 | **/wheels/ 36 | **/share/python-wheels/ 37 | **/*.egg-info/ 38 | **/.installed.cfg 39 | **/*.egg 40 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | ---- 6 | Enter `[N/A]` in the box, if an item is not applicable to your change. 7 | 8 | **Testing** 9 | - [ ] Unit tests passed 10 | - [ ] Integration tests passed 11 | - [ ] Unit tests added for new functionality 12 | - [ ] Listed manual checks and their outputs in the comments 13 | - [ ] Link to issue or PR for the integration tests: 14 | 15 | **Documentation** 16 | - [ ] Contacted our doc writer 17 | - [ ] Updated our README 18 | ---- 19 | 20 | By submitting this pull request, I confirm that my contribution is made under the terms of the [MIT-0 License](https://github.com/aws/mit-0). 
21 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/---bug-report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: "\U0001F41B Bug report" 3 | about: "If something isn't working \U0001F622" 4 | title: '[Bug] ' 5 | labels: Bug 6 | assignees: '' 7 | --- 8 | 9 | 10 | 11 | #### Summary 12 | 13 | 14 | #### Description 15 | 16 | * In which AWS region is the solution deployed? 17 | * What is the exact error reported by CloudFormation? 18 | 19 | #### Additional details 20 | 21 | 22 | #### Expected Behavior 23 | 24 | 25 | #### Observed Behavior 26 | 27 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 | 16 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/---feature-request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: "\U0001F4A1 Feature request" 3 | about: Request a new feature 4 | title: '[Feature Request] ' 5 | labels: feature-request 6 | --- 7 | 8 | 9 | #### Context 10 | 11 | 12 | #### Alternatives 13 | 14 | 15 | 16 | 17 | #### Additional context 18 | 24 | 25 | 26 | * [ ] :wave: I may be able to implement this feature request 27 | * [ ] :warning: This feature might incur a breaking change 28 | 29 | --- 30 | 31 | This is a :rocket: Feature Request 32 | -------------------------------------------------------------------------------- /control-plane/custom-resources/src_custom_resources/unique_id.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import cfnresponse 3 | 4 | def handler(event, context): 5 | 6 | try: 7 | 8 | the_event = event['RequestType'] 9 | response_data = {} 10 | phy_res_id = str(hash(event['StackId'] 11 | + event['LogicalResourceId']))[1:] 12 | 13 | print("Event dump: %s" % str(event)) 14 | print("Request type: ", str(the_event)) 15 | 16 | if the_event in ('Create', 'Update'): 17 | 18 | unique_id = str(hash(event['StackId']))[1:11] 19 | print("Unique ID: %s ." 
% unique_id) 20 | response_data['unique_id'] = unique_id 21 | 22 | print("Execution successful!") 23 | cfnresponse.send(event, 24 | context, 25 | cfnresponse.SUCCESS, 26 | response_data, 27 | physicalResourceId=phy_res_id) 28 | 29 | except Exception as e: 30 | 31 | print("Execution failed...") 32 | e_string = str(e) 33 | print(e_string) 34 | context.log_stream_name = e_string 35 | cfnresponse.send(event, 36 | context, 37 | cfnresponse.FAILED, 38 | response_data, 39 | physicalResourceId=phy_res_id) 40 | -------------------------------------------------------------------------------- /control-plane/custom-resources/src_custom_resources/cfnresponse.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import json 3 | 4 | SUCCESS = "SUCCESS" 5 | FAILED = "FAILED" 6 | 7 | def send(event, context, responseStatus, responseData, reason=None, physicalResourceId=None, noEcho=False): 8 | responseUrl = event['ResponseURL'] 9 | 10 | print(responseUrl) 11 | 12 | responseBody = {} 13 | responseBody['Status'] = responseStatus 14 | responseBody['Reason'] = reason or ('See the details in CloudWatch Log Stream: ' + context.log_stream_name) 15 | responseBody['PhysicalResourceId'] = physicalResourceId or context.log_stream_name 16 | responseBody['StackId'] = event['StackId'] 17 | responseBody['RequestId'] = event['RequestId'] 18 | responseBody['LogicalResourceId'] = event['LogicalResourceId'] 19 | responseBody['NoEcho'] = noEcho 20 | responseBody['Data'] = responseData 21 | 22 | json_responseBody = json.dumps(responseBody) 23 | 24 | print("Response body:\n" + json_responseBody) 25 | 26 | headers = { 27 | 'content-type' : '', 28 | 'content-length' : str(len(json_responseBody)) 29 | } 30 | 31 | try: 32 | response = requests.put(responseUrl, 33 | data=json_responseBody, 34 | headers=headers) 35 | print("Status code: " + response.reason) 36 | except Exception as e: 37 | print("send(..) 
failed executing requests.put(..): " + str(e)) 38 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. 
If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
62 | -------------------------------------------------------------------------------- /control-plane/custom-resources/src_custom_resources/subnets_checker.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import cfnresponse 3 | 4 | def error_and_exit(error): 5 | 6 | print(error) 7 | raise Exception(error) 8 | 9 | def handler(event, context): 10 | 11 | try: 12 | 13 | the_event = event['RequestType'] 14 | response_data = {} 15 | phy_res_id = str(hash(event['StackId'] 16 | + event['LogicalResourceId']))[1:] 17 | 18 | print("Event dump: %s" % str(event)) 19 | print("Request type: ", str(the_event)) 20 | 21 | if the_event in ('Create', 'Update'): 22 | 23 | input_subnet_list = event['ResourceProperties']['Subnets'] 24 | the_vpc = event['ResourceProperties']['Vpc'] 25 | input_count = event['ResourceProperties']['Input'] 26 | print("Processing subnets: %s." % input_subnet_list) 27 | 28 | ec2 = boto3.client('ec2') 29 | 30 | if len(input_subnet_list) != int(input_count): 31 | 32 | error_and_exit("The number of selected subnets differs from the 'SelectedPrivateSubnetsCont' value.") 33 | 34 | print("Validating subnets and VPC ...") 35 | response = ec2.describe_subnets(Filters=[ 36 | { 37 | 'Name': 'vpc-id', 38 | 'Values': [ 39 | the_vpc, 40 | ] 41 | }, 42 | ]) 43 | 44 | vpc_subnets = list(map(lambda subnet: 45 | subnet['SubnetId'], 46 | response['Subnets'])) 47 | 48 | if not all(subnet in vpc_subnets 49 | for subnet in input_subnet_list): 50 | 51 | error_and_exit("All the subnets must belong to the " 52 | + "same VPC [%s]." % the_vpc) 53 | 54 | print("[OK] Passed!") 55 | 56 | print("Validating subnets and AZs ...") 57 | az_s = [] 58 | for sub in response['Subnets']: 59 | 60 | if sub['SubnetId'] in input_subnet_list: 61 | 62 | if sub['AvailabilityZone'] not in az_s: 63 | 64 | az_s.append(sub['AvailabilityZone']) 65 | 66 | else: 67 | 68 | error_and_exit("Only '1' Subnet per Availability Zone is allowed.
You have at least (2) in AZ [%s]." % sub['AvailabilityZone']) 69 | 70 | print("[OK] Passed!") 71 | 72 | print("Validating that subnets are private ...") 73 | response = ec2.describe_route_tables( 74 | Filters=[ 75 | { 76 | 'Name': 'association.subnet-id', 77 | 'Values': input_subnet_list 78 | }, 79 | ]) 80 | 81 | route_table_list = list(map(lambda route: 82 | route['Routes'], 83 | response['RouteTables'])) 84 | 85 | for route in [r for sub_list in route_table_list 86 | for r in sub_list]: 87 | 88 | gw_id = route.get('GatewayId', None) 89 | if gw_id and gw_id.startswith('igw-'): 90 | 91 | error_and_exit("This setup does not allow public subnets. Please select only private subnets.") 92 | 93 | print("[OK] Passed!") 94 | 95 | print("Getting VPC CIDR ...") 96 | response_data['vpc_cidr'] = ec2.describe_vpcs(Filters=[ 97 | { 98 | 'Name': 'vpc-id', 99 | 'Values': [ 100 | the_vpc, 101 | ] 102 | }, 103 | ])['Vpcs'][0]['CidrBlock'] 104 | 105 | print("CIDR is: %s" % response_data['vpc_cidr']) 106 | 107 | print("Execution successful!") 108 | cfnresponse.send(event, 109 | context, 110 | cfnresponse.SUCCESS, 111 | response_data, 112 | physicalResourceId=phy_res_id) 113 | 114 | except Exception as e: 115 | 116 | print("Execution failed...") 117 | e_string = str(e) 118 | print(e_string) 119 | context.log_stream_name = e_string 120 | cfnresponse.send(event, 121 | context, 122 | cfnresponse.FAILED, 123 | response_data, 124 | physicalResourceId=phy_res_id) 125 | -------------------------------------------------------------------------------- /control-plane/custom-resources/src_macro/index.py: -------------------------------------------------------------------------------- 1 | """ 2 | ## Macro transform ## 3 | 4 | Creates N copies of CloudFormation resources, parametrising 5 | them using a generated random ID. 
6 | 7 | Based on this AWS Sample macro: https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/CloudFormation/MacrosExamples/Count 8 | 9 | """ 10 | import copy 11 | import json 12 | from random import choice 13 | from string import ascii_lowercase 14 | 15 | 16 | def process_template(template, parameters): 17 | """Process the CloudFormation template. 18 | 19 | Parameters 20 | ---------- 21 | template : JSON 22 | Input template that is going to be transformed. 23 | parameters : list 24 | Additional parameters for the processing. 25 | 26 | Returns 27 | ------- 28 | type 29 | status (success|failed), processed template 30 | 31 | """ 32 | new_template = copy.deepcopy(template) 33 | status = 'success' 34 | 35 | # Iterate over each of the CloudFormation created resources 36 | for name, resource in template['Resources'].items(): 37 | 38 | # Check whether the current resource has the 'Count' property 39 | if 'Count' in resource: 40 | 41 | try: 42 | ref_value = new_template['Resources'][name]['Count'].pop('Ref') 43 | # Convert referenced parameter to an integer value 44 | count = int(parameters[ref_value]) 45 | # Remove the Count property from this resource 46 | new_template['Resources'][name].pop('Count') 47 | 48 | except AttributeError: 49 | # Use numeric count value 50 | count = new_template['Resources'][name].pop('Count') 51 | 52 | print("Found 'Count' property with value {} in '{}' " 53 | "resource....
multiplying!".format(count, name)) 54 | # Remove the original resource from the template 55 | # but preserve a local copy of it 56 | resourceToMultiply = new_template['Resources'].pop(name) 57 | # Create a new block of the resource multiplied with names 58 | # ending in the iterator and the placeholders substituted 59 | resourcesAfterMultiplication = multiply(name, resourceToMultiply, count) 60 | if not set(resourcesAfterMultiplication.keys()) & set(new_template['Resources'].keys()): 61 | new_template['Resources'].update(resourcesAfterMultiplication) 62 | else: 63 | status = 'failed' 64 | return status, template 65 | else: 66 | print("Did not find 'Count' property in " 67 | "'{}' resource.... " 68 | "Nothing to do!".format(name)) 69 | 70 | return status, new_template 71 | 72 | 73 | def update_placeholder(resource_structure, iteration): 74 | """Update/replace the placeholder in the resource. 75 | 76 | Parameters 77 | ---------- 78 | resource_structure : dict 79 | CloudFormation resource definition. 80 | iteration : integer 81 | Iteration number. 82 | 83 | Returns 84 | ------- 85 | type 86 | Modified CloudFormation resource 87 | 88 | """ 89 | resource_string = json.dumps(resource_structure) 90 | place_holder_count = resource_string.count('%s') 91 | 92 | # If the placeholder is found then replace it 93 | if place_holder_count > 0: 94 | 95 | print("Found {} occurrences of string placeholder in JSON, replacing with iterator value {}".format(place_holder_count, iteration)) 96 | 97 | # Generate a random string for replacement 98 | #replacement_values = ''.join(choice(ascii_lowercase) for i in range(10)) 99 | # For now...
replacing with the iteration count 100 | replacement_values = str(iteration) 101 | 102 | # Replace the placeholders 103 | resource_string = resource_string % (replacement_values) 104 | 105 | # Convert the string back to json and return it 106 | return json.loads(resource_string) 107 | 108 | else: 109 | 110 | print("No occurrences of string placeholder found in JSON, therefore nothing will be replaced") 111 | return resource_structure 112 | 113 | 114 | def multiply(resource_name, resource_structure, count): 115 | """Multiply resources and create copies of them. 116 | 117 | Parameters 118 | ---------- 119 | resource_name : string 120 | The resource name. 121 | resource_structure : JSON 122 | CloudFormation resource definition. 123 | count : Integer 124 | Number of copies that are going to be created. 125 | 126 | Returns 127 | ------- 128 | type 129 | Dictionary with the replicated resources. 130 | 131 | """ 132 | resources = {} 133 | # Loop according to the number of times we want to 134 | # multiply, creating a new resource each time 135 | for iteration in range(0, count): 136 | print("Multiplying '{}', iteration count {}" 137 | .format(resource_name, iteration + 1)) 138 | multipliedResourceStructure = update_placeholder( 139 | resource_structure, 140 | iteration) 141 | resources[resource_name+str(iteration)] = multipliedResourceStructure 142 | return resources 143 | 144 | 145 | def handler(event, context): 146 | result = process_template(event['fragment'], 147 | event['templateParameterValues']) 148 | return { 149 | 'requestId': event['requestId'], 150 | 'status': result[0], 151 | 'fragment': result[1], 152 | } 153 | -------------------------------------------------------------------------------- /control-plane/storage-manager/storage-manager.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Storage Manager service - full admin access to 'EFS' 3 | 4 | 5 | Parameters: 6 |
7 | ClusterName: 8 | Type: String 9 | Default: Testing 10 | Description: Existing ECS cluster 11 | 12 | TheVPCID: 13 | Type: AWS::EC2::VPC::Id 14 | Description: The VPC 15 | 16 | SubnetsPrivate: 17 | Type: List<AWS::EC2::Subnet::Id> 18 | Description: The Subnets 19 | 20 | EFSID: 21 | Type: String 22 | Description: EFS ID 23 | 24 | SRVPrefix: 25 | Type: String 26 | Default: StorageManager 27 | Description: Storage Manager prefix 28 | 29 | ServiceDiscoveryNamespace: 30 | Type: String 31 | Description: Private service discovery namespace 32 | 33 | PlatformVersion: 34 | Type: String 35 | Default: 1.4.0 36 | Description: Fargate platform version 37 | 38 | CloudWatchLogGroup: 39 | Type: String 40 | Default: /ecs/StorageManager 41 | Description: Specify the destination log group in CloudWatch 42 | 43 | ImageUrl: 44 | Type: String 45 | Default: coderaiser/cloudcmd:14.3.10-alpine 46 | Description: The url of a docker image 47 | 48 | ContainerPort: 49 | Type: Number 50 | Default: 80 51 | Description: What port number the application inside the docker container is binding to 52 | 53 | TaskCPU: 54 | Type: Number 55 | Default: 512 56 | Description: CPU shares 57 | 58 | TaskMemory: 59 | Type: Number 60 | Default: 1024 61 | Description: Memory hard limit 62 | 63 | DesiredCount: 64 | Type: Number 65 | Default: 1 66 | Description: How many copies of the service task to run 67 | 68 | DefaultAccessCIDR: 69 | Type: String 70 | Default: '0.0.0.0/0' 71 | Description: Default access rule for the services 72 | 73 | 74 | Resources: 75 | 76 | ### 77 | ## ECS Task definition 78 | # 79 | TaskDefinition: 80 | Type: AWS::ECS::TaskDefinition 81 | Properties: 82 | Family: !Sub ${SRVPrefix}-service 83 | Cpu: !Ref TaskCPU 84 | Memory: !Ref TaskMemory 85 | NetworkMode: awsvpc 86 | ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn 87 | TaskRoleArn: !GetAtt TaskRole.Arn 88 | RequiresCompatibilities: 89 | - FARGATE 90 | Volumes: 91 | - Name: EFS_ROOT 92 | EFSVolumeConfiguration: 93 | AuthorizationConfig: 94 | AccessPointId:
!Ref ROOTAcessPoint 95 | IAM: DISABLED 96 | FilesystemId: !Ref EFSID 97 | TransitEncryption: ENABLED 98 | ContainerDefinitions: 99 | # Storage Manager 100 | - Name: !Sub ${SRVPrefix}-StorageManager 101 | Essential: true 102 | Image: !Ref ImageUrl 103 | Command: 104 | - '--no-keys-panel' 105 | - '--one-file-panel' 106 | - '--port=80' 107 | - '--root=/backup' 108 | PortMappings: 109 | - ContainerPort: 80 110 | Protocol: tcp 111 | Environment: 112 | - Name: CLOUDCMD_AUTH 113 | Value: 'true' 114 | - Name: CLOUDCMD_USERNAME 115 | Value: administrator 116 | - Name: CLOUDCMD_NAME 117 | Value: !Sub ${SRVPrefix} - Admin Storage Manager 118 | Secrets: 119 | - Name: CLOUDCMD_PASSWORD 120 | ValueFrom: !Ref StorageManagerAdministratorPassword 121 | LogConfiguration: 122 | LogDriver: awslogs 123 | Options: 124 | awslogs-group: !Ref CloudWatchLogGroup 125 | awslogs-create-group: 'true' 126 | awslogs-region: !Ref AWS::Region 127 | awslogs-stream-prefix: !Sub ${SRVPrefix}-StorageManager 128 | MountPoints: 129 | - ContainerPath: /backup 130 | ReadOnly: false 131 | SourceVolume: EFS_ROOT 132 | 133 | ### 134 | ## EFS root Access point 135 | # 136 | ROOTAcessPoint: 137 | Type: AWS::EFS::AccessPoint 138 | Properties: 139 | FileSystemId: !Ref EFSID 140 | PosixUser: 141 | Gid: '7000' 142 | Uid: '7000' 143 | RootDirectory: 144 | CreationInfo: 145 | OwnerGid: '7000' 146 | OwnerUid: '7000' 147 | Permissions: '750' 148 | Path: '/' 149 | 150 | ### 151 | ## ECS Service specification 152 | # 153 | Service: 154 | Type: AWS::ECS::Service 155 | Properties: 156 | ServiceName: !Sub ${SRVPrefix}-service 157 | Cluster: !Ref ClusterName 158 | LaunchType: FARGATE 159 | PlatformVersion: !Ref PlatformVersion 160 | DeploymentConfiguration: 161 | MaximumPercent: 200 162 | MinimumHealthyPercent: 100 163 | DesiredCount: !Ref 'DesiredCount' 164 | NetworkConfiguration: 165 | AwsvpcConfiguration: 166 | AssignPublicIp: DISABLED 167 | SecurityGroups: [!Ref SecurityGroup] 168 | Subnets: !Ref SubnetsPrivate 169 | 
TaskDefinition: !Ref TaskDefinition 170 | ServiceRegistries: 171 | - RegistryArn: !GetAtt DiscoveryService.Arn 172 | 173 | ### 174 | ## Security group for controlling access to the service 175 | # 176 | SecurityGroup: 177 | Type: AWS::EC2::SecurityGroup 178 | Properties: 179 | GroupName: !Sub ${SRVPrefix}-SecurityGroup 180 | GroupDescription: !Sub 'Allowed ports for ${SRVPrefix} ECS service.' 181 | VpcId: !Ref TheVPCID 182 | SecurityGroupIngress: 183 | - IpProtocol: tcp 184 | FromPort: !Ref ContainerPort 185 | ToPort: !Ref ContainerPort 186 | CidrIp: !Ref DefaultAccessCIDR 187 | 188 | ### 189 | ## Secrets configuration 190 | # 191 | StorageManagerAdministratorPassword: 192 | Type: AWS::SecretsManager::Secret 193 | Properties: 194 | Name: !Sub ${SRVPrefix}-StorageManager 195 | Description: !Sub 'This is the password for the root Administrator web user in ${SRVPrefix}-service.' 196 | GenerateSecretString: 197 | PasswordLength: 16 198 | ExcludeLowercase: false 199 | ExcludeNumbers: false 200 | ExcludePunctuation: true 201 | ExcludeUppercase: false 202 | IncludeSpace: false 203 | RequireEachIncludedType: true 204 | ExcludeCharacters: '"@/\' 205 | 206 | ### 207 | ## Task Execution role ARN 208 | # 209 | TaskExecutionRole: 210 | Type: AWS::IAM::Role 211 | Properties: 212 | RoleName: !Sub ${SRVPrefix}-TaskExecutionRole 213 | AssumeRolePolicyDocument: 214 | Statement: 215 | - Action: 216 | - sts:AssumeRole 217 | Effect: Allow 218 | Principal: 219 | Service: 220 | - ecs-tasks.amazonaws.com 221 | Version: '2012-10-17' 222 | Path: "/" 223 | Policies: 224 | - PolicyDocument: 225 | Statement: 226 | - Action: 227 | - logs:CreateLogGroup 228 | - logs:CreateLogStream 229 | - logs:PutLogEvents 230 | Effect: Allow 231 | Resource: 232 | - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${CloudWatchLogGroup}:* 233 | - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${CloudWatchLogGroup} 234 | Version: '2012-10-17' 235 | PolicyName: !Sub ${SRVPrefix}-CW-Policy 
236 | - PolicyDocument: 237 | Statement: 238 | - Action: 239 | - kms:Decrypt 240 | - secretsmanager:GetSecretValue 241 | Effect: Allow 242 | Resource: !Sub arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:${SRVPrefix}-* 243 | Version: '2012-10-17' 244 | PolicyName: !Sub ${SRVPrefix}-Secrets-Policy 245 | 246 | ### 247 | ## Task role ARN 248 | # 249 | TaskRole: 250 | Type: AWS::IAM::Role 251 | Properties: 252 | RoleName: !Sub ${SRVPrefix}-TaskRole 253 | AssumeRolePolicyDocument: 254 | Statement: 255 | - Action: 256 | - sts:AssumeRole 257 | Effect: Allow 258 | Principal: 259 | Service: 260 | - ecs-tasks.amazonaws.com 261 | Version: '2012-10-17' 262 | Path: "/" 263 | 264 | ### 265 | ## Service discovery settings 266 | # 267 | DiscoveryService: 268 | Type: AWS::ServiceDiscovery::Service 269 | Properties: 270 | Description: Discovery Service for the SQL container 271 | DnsConfig: 272 | RoutingPolicy: MULTIVALUE 273 | DnsRecords: 274 | - TTL: 60 275 | Type: A 276 | HealthCheckCustomConfig: 277 | FailureThreshold: 1 278 | Name: !Sub ${SRVPrefix} 279 | NamespaceId: !Ref ServiceDiscoveryNamespace 280 | -------------------------------------------------------------------------------- /data-plane/data-plane.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Data plane 3 | 4 | 5 | Transform: 6 | - sqlserverdevfabric-macro 7 | 8 | 9 | Metadata: 10 | AWS::CloudFormation::Interface: 11 | 12 | ParameterLabels: 13 | 14 | ClusterName: 15 | default: "ECS Cluster name" 16 | 17 | DBInstancesCount: 18 | default: "SQL Instance count" 19 | 20 | CloudWatchLogGroup: 21 | default: "Cloud Watch Log Group name" 22 | 23 | ImageUrl: 24 | default: "SQL Server docker image" 25 | 26 | TaskCPU: 27 | default: "Server CPU shares" 28 | 29 | TaskMemory: 30 | default: "Server RAM Memory (MB)" 31 | 32 | UseSpot: 33 | default: "Enable Fargate Spot for this setup" 34 | 35 | MSSQLCOLLATION: 36 | 
default: "Collation and Unicode support" 37 | 38 | OfficeHours: 39 | default: "Enable / Disable" 40 | 41 | OfficeHoursActive: 42 | default: "Select a weekly schedule" 43 | 44 | OfficeHoursStart: 45 | default: "Time to start (UTC)" 46 | 47 | OfficeHoursStop: 48 | default: "Time to stop (UTC)" 49 | 50 | ParameterGroups: 51 | - 52 | Label: 53 | default: "Service settings ..." 54 | Parameters: 55 | - ClusterName 56 | - DBInstancesCount 57 | - CloudWatchLogGroup 58 | - 59 | Label: 60 | default: "SQL Server configuration ..." 61 | Parameters: 62 | - ImageUrl 63 | - TaskCPU 64 | - TaskMemory 65 | - UseSpot 66 | - MSSQLCOLLATION 67 | - 68 | Label: 69 | default: "Office hours ..." 70 | Parameters: 71 | - OfficeHours 72 | - OfficeHoursActive 73 | - OfficeHoursStart 74 | - OfficeHoursStop 75 | 76 | 77 | Conditions: 78 | 79 | EverydayCondition: !Equals [!Ref OfficeHoursActive, 'Everyday'] 80 | 81 | 82 | Parameters: 83 | 84 | ClusterName: 85 | Type: String 86 | Default: sqlserverdevfabric-cluster 87 | Description: > 88 | Must match the ECS Cluster created by the Control Plane, where you 89 | intend to deploy the SQL Instances 90 | AllowedValues: 91 | - sqlserverdevfabric-cluster 92 | 93 | DBInstancesCount: 94 | Type: Number 95 | Description: > 96 | How many DB instances with the exact same configuration do you want to deploy? 97 | Default: 1 98 | 99 | CloudWatchLogGroup: 100 | Type: String 101 | Description: > 102 | Specify the destination Log Group in CloudWatch 103 | Default: /ecs/SQLServerFargate 104 | 105 | ImageUrl: 106 | Type: String 107 | Description: > 108 | SQL Server Docker image to deploy 109 | Default: mcr.microsoft.com/mssql/server:2019-CU4-ubuntu-18.04 110 | 111 | TaskCPU: 112 | Type: Number 113 | Description: > 114 | Allocated CPU shares for the container that will run the database engine.
Please consider the following supported combinations: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-cpu-memory-error.html 115 | Default: 2048 116 | AllowedValues: 117 | - 1024 118 | - 2048 119 | - 4096 120 | 121 | TaskMemory: 122 | Type: Number 123 | Description: > 124 | Allocated RAM memory for the container that will run the database engine. Please consider the following supported combinations: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-cpu-memory-error.html 125 | Default: 16384 126 | AllowedValues: 127 | - 2048 128 | - 3072 129 | - 4096 130 | - 5120 131 | - 6144 132 | - 7168 133 | - 8192 134 | - 9216 135 | - 10240 136 | - 11264 137 | - 12288 138 | - 13312 139 | - 14336 140 | - 15360 141 | - 16384 142 | - 17408 143 | - 18432 144 | - 19456 145 | - 20480 146 | - 21504 147 | - 22528 148 | - 23552 149 | - 24576 150 | - 25600 151 | - 26624 152 | - 27648 153 | - 28672 154 | - 29696 155 | - 30720 156 | 157 | UseSpot: 158 | Type: String 159 | Description: > 160 | Choose if you want to use Fargate Spot technology 161 | Default: No 162 | AllowedValues: 163 | - Yes 164 | - No 165 | 166 | OfficeHours: 167 | Type: String 168 | Description: > 169 | Choose if you want to enable scheduled on / off 170 | Default: No 171 | AllowedValues: 172 | - Yes 173 | - No 174 | 175 | OfficeHoursActive: 176 | Type: String 177 | Description: > 178 | Select whether the schedule will be enabled for the whole week or only on business days 179 | Default: Everyday 180 | AllowedValues: 181 | - Everyday 182 | - Monday-to-Friday 183 | 184 | OfficeHoursStart: 185 | Type: String 186 | Description: > 187 | This value must be in UTC 188 | Default: 9-AM 189 | AllowedValues: 190 | - 12-Midnight 191 | - 1-AM 192 | - 2-AM 193 | - 3-AM 194 | - 4-AM - 5-AM 195 | - 6-AM 196 | - 7-AM 197 | - 8-AM 198 | - 9-AM 199 | - 10-AM 200 | - 11-AM 201 | - 12-Noon 202 | - 1-PM 203 | - 2-PM 204 | - 3-PM 205 | - 4-PM 206 | - 5-PM 207 | - 6-PM 208 | - 7-PM 209 | - 8-PM 210 | - 9-PM 211 | - 10-PM
212 | - 11-PM 213 | 214 | OfficeHoursStop: 215 | Type: String 216 | Description: > 217 | This value must be in UTC 218 | Default: 6-PM 219 | AllowedValues: 220 | - 12-Midnight 221 | - 1-AM 222 | - 2-AM 223 | - 3-AM 224 | - 4-AM - 5-AM 225 | - 6-AM 226 | - 7-AM 227 | - 8-AM 228 | - 9-AM 229 | - 10-AM 230 | - 11-AM 231 | - 12-Noon 232 | - 1-PM 233 | - 2-PM 234 | - 3-PM 235 | - 4-PM 236 | - 5-PM 237 | - 6-PM 238 | - 7-PM 239 | - 8-PM 240 | - 9-PM 241 | - 10-PM 242 | - 11-PM 243 | 244 | MSSQLCOLLATION: 245 | Type: String 246 | Default: SQL_Latin1_General_CP1_CI_AS 247 | Description: SQL Server collation setting 248 | 249 | 250 | Mappings: 251 | 24hTimeMap: 252 | 12-Midnight: 253 | 24h: '0' 254 | 1-AM: 255 | 24h: '1' 256 | 2-AM: 257 | 24h: '2' 258 | 3-AM: 259 | 24h: '3' 260 | 4-AM: 261 | 24h: '4' 262 | 5-AM: 263 | 24h: '5' 264 | 6-AM: 265 | 24h: '6' 266 | 7-AM: 267 | 24h: '7' 268 | 8-AM: 269 | 24h: '8' 270 | 9-AM: 271 | 24h: '9' 272 | 10-AM: 273 | 24h: '10' 274 | 11-AM: 275 | 24h: '11' 276 | 12-Noon: 277 | 24h: '12' 278 | 1-PM: 279 | 24h: '13' 280 | 2-PM: 281 | 24h: '14' 282 | 3-PM: 283 | 24h: '15' 284 | 4-PM: 285 | 24h: '16' 286 | 5-PM: 287 | 24h: '17' 288 | 6-PM: 289 | 24h: '18' 290 | 7-PM: 291 | 24h: '19' 292 | 8-PM: 293 | 24h: '20' 294 | 9-PM: 295 | 24h: '21' 296 | 10-PM: 297 | 24h: '22' 298 | 11-PM: 299 | 24h: '23' 300 | 301 | 302 | Resources: 303 | 304 | ServiceStack: 305 | Type: AWS::CloudFormation::Stack 306 | Count: !Ref DBInstancesCount 307 | Properties: 308 | TemplateURL: fargate-sql-service/fargate-sql-service.yaml 309 | Parameters: 310 | UseSpot: !Ref UseSpot 311 | ClusterName: 312 | Fn::ImportValue: 313 | !Sub '${ClusterName}::TheECSCluster' 314 | TheVPCID: 315 | Fn::ImportValue: 316 | !Sub '${ClusterName}::TheVPCID' 317 | DefaultAccessCIDR: 318 | Fn::ImportValue: 319 | !Sub '${ClusterName}::TheVPCCIDR' 320 | SubnetsPrivate: 321 | Fn::ImportValue: 322 | !Sub '${ClusterName}::SubnetsPrivate' 323 | SRVPrefix: !Sub db-${IdGenerator.unique_id}-%s 324 |
ServiceDiscoveryNamespace: 325 | Fn::ImportValue: 326 | !Sub '${ClusterName}::ServiceDiscoveryNamespace' 327 | EFSID: 328 | Fn::ImportValue: 329 | !Sub '${ClusterName}::EFS' 330 | CloudWatchLogGroup: !Ref CloudWatchLogGroup 331 | ImageUrl: !Ref ImageUrl 332 | ContainerPort: 1433 333 | TaskCPU: !Ref TaskCPU 334 | TaskMemory: !Ref TaskMemory 335 | MSSQLCOLLATION: !Ref MSSQLCOLLATION 336 | OfficeHours: !Ref OfficeHours 337 | OfficeHoursActive: 338 | !If [EverydayCondition, '?', 'MON-FRI'] 339 | OfficeHoursStart: !FindInMap 340 | - 24hTimeMap 341 | - !Ref OfficeHoursStart 342 | - 24h 343 | OfficeHoursStop: !FindInMap 344 | - 24hTimeMap 345 | - !Ref OfficeHoursStop 346 | - '24h' 347 | 348 | IdGenerator: 349 | Type: Custom::IdGenerator 350 | Properties: 351 | ServiceToken: 352 | Fn::ImportValue: 353 | !Sub '${ClusterName}::IdGeneratorServiceToken' 354 | -------------------------------------------------------------------------------- /control-plane/custom-resources/custom-resources.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: 2010-09-09 2 | Transform: AWS::Serverless-2016-10-31 3 | 4 | Description: > 5 | This template creates the required custom resources. 
6 | 7 | 8 | Parameters: 9 | 10 | LambdaRuntime: 11 | Type: String 12 | Default: python3.7 13 | Description: Python macro runtime 14 | 15 | 16 | Resources: 17 | 18 | ### 19 | ## Macro transform 20 | # 21 | Macro: 22 | Type: AWS::CloudFormation::Macro 23 | Properties: 24 | Name: sqlserverdevfabric-macro 25 | Description: Processes the template and creates multiple instances of the Service resource 26 | FunctionName: !Ref AWSMacroFunction 27 | 28 | AWSMacroFunction: 29 | Type: AWS::Serverless::Function 30 | Properties: 31 | FunctionName: sqlserverdevfabric-lambda-macro 32 | CodeUri: src_macro 33 | Handler: index.handler 34 | Runtime: !Ref LambdaRuntime 35 | Timeout: 360 36 | Role: !GetAtt AWSMacroFunctionExecutionRole.Arn 37 | 38 | AWSMacroFunctionExecutionRole: 39 | Type: AWS::IAM::Role 40 | Properties: 41 | AssumeRolePolicyDocument: 42 | Statement: 43 | - Action: 44 | - sts:AssumeRole 45 | Effect: Allow 46 | Principal: 47 | Service: 48 | - lambda.amazonaws.com 49 | Version: '2012-10-17' 50 | Path: "/" 51 | Policies: 52 | - PolicyDocument: 53 | Statement: 54 | - Action: 55 | - logs:CreateLogGroup 56 | - logs:CreateLogStream 57 | - logs:PutLogEvents 58 | Effect: Allow 59 | Resource: arn:aws:logs:*:*:* 60 | Version: '2012-10-17' 61 | PolicyName: !Sub ${AWS::StackName}-AWSMacroFunction-CW 62 | 63 | ### 64 | ## Subnet checker 65 | # 66 | SubnetCheckerFunction: 67 | Type: AWS::Serverless::Function 68 | Properties: 69 | Handler: index.handler 70 | Runtime: !Ref LambdaRuntime 71 | Timeout: 360 72 | Role: !GetAtt SubnetCheckerFunctionExecutionRole.Arn 73 | InlineCode: | 74 | import boto3 75 | import cfnresponse 76 | 77 | def error_and_exit(error): 78 | 79 | print(error) 80 | raise Exception(error) 81 | 82 | def handler(event, context): 83 | 84 | try: 85 | 86 | the_event = event['RequestType'] 87 | response_data = {} 88 | phy_res_id = str(hash(event['StackId'] 89 | + event['LogicalResourceId']))[1:] 90 | 91 | print("Event dump: %s" % str(event)) 92 | print("Request type: ", 
str(the_event)) 93 | 94 | if the_event in ('Create', 'Update'): 95 | 96 | input_subnet_list = event['ResourceProperties']['Subnets'] 97 | the_vpc = event['ResourceProperties']['Vpc'] 98 | input_count = event['ResourceProperties']['Input'] 99 | print("Processing subnets: %s." % input_subnet_list) 100 | 101 | ec2 = boto3.client('ec2') 102 | 103 | if len(input_subnet_list) != int(input_count): 104 | 105 | error_and_exit("The number of selected subnets does not match the 'SelectedPrivateSubnetsCont' value.") 106 | 107 | print("Validating subnets and VPC ...") 108 | response = ec2.describe_subnets(Filters=[ 109 | { 110 | 'Name': 'vpc-id', 111 | 'Values': [ 112 | the_vpc, 113 | ] 114 | }, 115 | ]) 116 | 117 | vpc_subnets = list(map(lambda subnet: 118 | subnet['SubnetId'], 119 | response['Subnets'])) 120 | 121 | if not all(subnet in vpc_subnets 122 | for subnet in input_subnet_list): 123 | 124 | error_and_exit("All the subnets must belong to the " 125 | + "same VPC [%s]." % the_vpc) 126 | 127 | print("[OK] Passed!") 128 | 129 | print("Validating subnets and AZs ...") 130 | az_s = [] 131 | for sub in response['Subnets']: 132 | 133 | if sub['SubnetId'] in input_subnet_list: 134 | 135 | if sub['AvailabilityZone'] not in az_s: 136 | 137 | az_s.append(sub['AvailabilityZone']) 138 | 139 | else: 140 | 141 | error_and_exit("Only one Subnet per Availability Zone is allowed. At least two were found in AZ [%s]."
% sub['AvailabilityZone']) 142 | 143 | print("[OK] Passed!") 144 | 145 | print("Validating that subnets are private ...") 146 | response = ec2.describe_route_tables( 147 | Filters=[ 148 | { 149 | 'Name': 'association.subnet-id', 150 | 'Values': input_subnet_list 151 | }, 152 | ]) 153 | 154 | route_table_list = list(map(lambda route: 155 | route['Routes'], 156 | response['RouteTables'])) 157 | 158 | for route in [r for sub_list in route_table_list 159 | for r in sub_list]: 160 | 161 | gw_id = route.get('GatewayId', None) 162 | if gw_id and gw_id.startswith('igw-'): 163 | 164 | error_and_exit("This setup does not allow public subnets. Please select only private subnets.") 165 | 166 | print("[OK] Passed!") 167 | 168 | print("Getting VPC CIDR ...") 169 | response_data['vpc_cidr'] = ec2.describe_vpcs(Filters=[ 170 | { 171 | 'Name': 'vpc-id', 172 | 'Values': [ 173 | the_vpc, 174 | ] 175 | }, 176 | ])['Vpcs'][0]['CidrBlock'] 177 | 178 | print("CIDR is: %s" % response_data['vpc_cidr']) 179 | 180 | print("Execution successful!") 181 | cfnresponse.send(event, 182 | context, 183 | cfnresponse.SUCCESS, 184 | response_data, 185 | physicalResourceId=phy_res_id) 186 | 187 | except Exception as e: 188 | 189 | print("Execution failed...") 190 | e_string = str(e) 191 | print(e_string) 192 | context.log_stream_name = e_string 193 | cfnresponse.send(event, 194 | context, 195 | cfnresponse.FAILED, 196 | response_data, 197 | physicalResourceId=phy_res_id) 198 | 199 | 200 | SubnetCheckerFunctionExecutionRole: 201 | Type: AWS::IAM::Role 202 | Properties: 203 | AssumeRolePolicyDocument: 204 | Statement: 205 | - Action: 206 | - sts:AssumeRole 207 | Effect: Allow 208 | Principal: 209 | Service: 210 | - lambda.amazonaws.com 211 | Version: '2012-10-17' 212 | Path: "/" 213 | Policies: 214 | - PolicyDocument: 215 | Statement: 216 | - Action: 217 | - logs:CreateLogGroup 218 | - logs:CreateLogStream 219 | - logs:PutLogEvents 220 | Effect: Allow 221 | Resource: arn:aws:logs:*:*:* 222 | Version: 
'2012-10-17' 223 | PolicyName: !Sub ${AWS::StackName}-SubnetChecker-sub-CW 224 | - PolicyDocument: 225 | Statement: 226 | - Action: 227 | - ec2:DescribeVpcs 228 | - ec2:DescribeSubnets 229 | - ec2:DescribeRouteTables 230 | Effect: Allow 231 | Resource: 232 | - '*' 233 | Version: '2012-10-17' 234 | PolicyName: !Sub ${AWS::StackName}-SubnetChecker-sub 235 | 236 | ### 237 | ## Id Generator Function 238 | # 239 | IdGeneratorFunction: 240 | Type: AWS::Serverless::Function 241 | Properties: 242 | Handler: index.handler 243 | Runtime: !Ref LambdaRuntime 244 | Timeout: 360 245 | Role: !GetAtt IdGeneratorFunctionExecutionRole.Arn 246 | InlineCode: | 247 | import boto3 248 | import cfnresponse 249 | 250 | def handler(event, context): 251 | 252 | try: 253 | 254 | the_event = event['RequestType'] 255 | response_data = {} 256 | phy_res_id = str(hash(event['StackId'] 257 | + event['LogicalResourceId']))[1:] 258 | 259 | print("Event dump: %s" % str(event)) 260 | print("Request type: ", str(the_event)) 261 | 262 | if the_event in ('Create', 'Update'): 263 | 264 | unique_id = str(hash(event['StackId']))[1:11] 265 | print("Unique ID: %s."
% unique_id) 266 | response_data['unique_id'] = unique_id 267 | 268 | print("Execution successful!") 269 | cfnresponse.send(event, 270 | context, 271 | cfnresponse.SUCCESS, 272 | response_data, 273 | physicalResourceId=phy_res_id) 274 | 275 | except Exception as e: 276 | 277 | print("Execution failed...") 278 | e_string = str(e) 279 | print(e_string) 280 | context.log_stream_name = e_string 281 | cfnresponse.send(event, 282 | context, 283 | cfnresponse.FAILED, 284 | response_data, 285 | physicalResourceId=phy_res_id) 286 | 287 | 288 | IdGeneratorFunctionExecutionRole: 289 | Type: AWS::IAM::Role 290 | Properties: 291 | AssumeRolePolicyDocument: 292 | Statement: 293 | - Action: 294 | - sts:AssumeRole 295 | Effect: Allow 296 | Principal: 297 | Service: 298 | - lambda.amazonaws.com 299 | Version: '2012-10-17' 300 | Path: "/" 301 | Policies: 302 | - PolicyDocument: 303 | Statement: 304 | - Action: 305 | - logs:CreateLogGroup 306 | - logs:CreateLogStream 307 | - logs:PutLogEvents 308 | Effect: Allow 309 | Resource: arn:aws:logs:*:*:* 310 | Version: '2012-10-17' 311 | PolicyName: !Sub ${AWS::StackName}-IdGenerator-sub-CW 312 | 313 | 314 | Outputs: 315 | 316 | SubnetChecker: 317 | Description: 'Subnet Checker Service Token.' 318 | Value: !GetAtt SubnetCheckerFunction.Arn 319 | 320 | IdGenerator: 321 | Description: 'ID generator Service Token.' 322 | Value: !GetAtt IdGeneratorFunction.Arn 323 | -------------------------------------------------------------------------------- /control-plane/control-plane.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: 2010-09-09 2 | Description: > 3 | Control plane infrastructure for the Dev Fabric for SQL Server solution. 
4 | 5 | 6 | Metadata: 7 | AWS::CloudFormation::Interface: 8 | 9 | ParameterLabels: 10 | 11 | TheVPCID: 12 | default: "The VPC" 13 | 14 | TheSubnets: 15 | default: "The subnets" 16 | 17 | SelectedPrivateSubnetsCont: 18 | default: "Subnets count" 19 | 20 | ClusterName: 21 | default: "ECS Cluster name" 22 | 23 | RootDNSName: 24 | default: "Root DNS name" 25 | 26 | LambdaRuntime: 27 | default: "Lambda Runtime" 28 | 29 | StorageManagerDockerImage: 30 | default: "Storage Manager image" 31 | 32 | EnableBackups: 33 | default: "Enable / Disable EFS Backups" 34 | 35 | ParameterGroups: 36 | - 37 | Label: 38 | default: "Deployment settings ..." 39 | Parameters: 40 | - ClusterName 41 | - RootDNSName 42 | - LambdaRuntime 43 | - EnableBackups 44 | - 45 | Label: 46 | default: "Networking settings ..." 47 | Parameters: 48 | - TheVPCID 49 | - TheSubnets 50 | - SelectedPrivateSubnetsCont 51 | 52 | - 53 | Label: 54 | default: "Storage Manager settings ..." 55 | Parameters: 56 | - StorageManagerDockerImage 57 | 58 | 59 | Parameters: 60 | 61 | ClusterName: 62 | Type: String 63 | Description: > 64 | Main ECS cluster for hosting all the database containers and instances 65 | Default: sqlserverdevfabric-cluster 66 | AllowedValues: 67 | - sqlserverdevfabric-cluster 68 | 69 | LambdaRuntime: 70 | Type: String 71 | Description: > 72 | Execution runtime for custom and backend resources 73 | Default: python3.7 74 | 75 | RootDNSName: 76 | Type: String 77 | Description: > 78 | Private DNS root domain name for the service 79 | Default: sqlserverdev.fabric 80 | 81 | TheVPCID: 82 | Type: AWS::EC2::VPC::Id 83 | Description: > 84 | VPC where the solution is going to be deployed 85 | 86 | TheSubnets: 87 | Type: List<AWS::EC2::Subnet::Id> 88 | Description: > 89 | The Subnets where the solution will be hosted (must be private and located in different Availability Zones) 90 | 91 | SelectedPrivateSubnetsCont: 92 | Type: Number 93 | Description: > 94 | How many subnets you would like to utilise.
This parameter must 95 | match the number of subnets selected above 96 | AllowedValues: 97 | - 1 98 | - 2 99 | - 3 100 | - 4 101 | 102 | StorageManagerDockerImage: 103 | Type: String 104 | Default: coderaiser/cloudcmd:14.3.10-alpine 105 | Description: The url of a docker image 106 | 107 | EnableBackups: 108 | Type: String 109 | Description: > 110 | Choose if you want to enable EFS backups 111 | Default: No 112 | AllowedValues: 113 | - Yes 114 | - No 115 | 116 | 117 | Conditions: 118 | 119 | ConditionBackups: !Equals [!Ref EnableBackups, 'true'] 120 | 121 | CreateMountTarget0: !Or 122 | - !Equals [!Ref SelectedPrivateSubnetsCont, 1] 123 | - !Equals [!Ref SelectedPrivateSubnetsCont, 2] 124 | - !Equals [!Ref SelectedPrivateSubnetsCont, 3] 125 | - !Equals [!Ref SelectedPrivateSubnetsCont, 4] 126 | 127 | CreateMountTarget1: !Or 128 | - !Equals [!Ref SelectedPrivateSubnetsCont, 2] 129 | - !Equals [!Ref SelectedPrivateSubnetsCont, 3] 130 | - !Equals [!Ref SelectedPrivateSubnetsCont, 4] 131 | 132 | CreateMountTarget2: !Or 133 | - !Equals [!Ref SelectedPrivateSubnetsCont, 3] 134 | - !Equals [!Ref SelectedPrivateSubnetsCont, 4] 135 | 136 | CreateMountTarget3: !Equals [!Ref SelectedPrivateSubnetsCont, 4] 137 | 138 | 139 | Resources: 140 | 141 | SubnetChecker: 142 | Type: Custom::SubnetChecker 143 | Properties: 144 | ServiceToken: !GetAtt CustomResourcesStack.Outputs.SubnetChecker 145 | Subnets: !Ref TheSubnets 146 | Vpc: !Ref TheVPCID 147 | Input: !Ref SelectedPrivateSubnetsCont 148 | 149 | ServiceDiscoveryNamespace: 150 | Type: AWS::ServiceDiscovery::PrivateDnsNamespace 151 | DependsOn: [ SubnetChecker ] 152 | Properties: 153 | Name: !Ref RootDNSName 154 | Vpc: !Ref TheVPCID 155 | 156 | ECSCluster: 157 | Type: AWS::ECS::Cluster 158 | DependsOn: [ SubnetChecker ] 159 | Properties: 160 | ClusterName: !Sub ${ClusterName} 161 | ClusterSettings: 162 | - Name: containerInsights 163 | Value: enabled 164 | CapacityProviders: 165 | - FARGATE 166 | - FARGATE_SPOT 167 | 
DefaultCapacityProviderStrategy: 168 | - Base: 0 169 | CapacityProvider: FARGATE 170 | Weight: 1 171 | 172 | StorageManagerStack: 173 | Type: AWS::CloudFormation::Stack 174 | Properties: 175 | TemplateURL: storage-manager/storage-manager.yaml 176 | Parameters: 177 | ClusterName: !Ref ECSCluster 178 | TheVPCID: !Ref TheVPCID 179 | DefaultAccessCIDR: !GetAtt SubnetChecker.vpc_cidr 180 | SubnetsPrivate: !Join 181 | - ',' 182 | - !Ref TheSubnets 183 | SRVPrefix: StorageManager 184 | ServiceDiscoveryNamespace: !Ref ServiceDiscoveryNamespace 185 | EFSID: !Ref EFS 186 | ImageUrl: !Ref StorageManagerDockerImage 187 | 188 | CustomResourcesStack: 189 | Type: AWS::CloudFormation::Stack 190 | Properties: 191 | TemplateURL: custom-resources/custom-resources.yaml 192 | Parameters: 193 | LambdaRuntime: !Ref LambdaRuntime 194 | 195 | EFS: 196 | Type: AWS::EFS::FileSystem 197 | DependsOn: [ SubnetChecker ] 198 | Properties: 199 | Encrypted: True 200 | PerformanceMode: generalPurpose 201 | ThroughputMode: bursting 202 | 203 | EFSMountTargetAZ0: 204 | Type: AWS::EFS::MountTarget 205 | Condition: CreateMountTarget0 206 | Properties: 207 | FileSystemId: !Ref EFS 208 | SecurityGroups: 209 | - !Ref SecurityGroupEFS 210 | SubnetId: !Select [ 0, !Ref TheSubnets ] 211 | 212 | EFSMountTargetAZ1: 213 | Type: AWS::EFS::MountTarget 214 | Condition: CreateMountTarget1 215 | Properties: 216 | FileSystemId: !Ref EFS 217 | SecurityGroups: 218 | - !Ref SecurityGroupEFS 219 | SubnetId: !Select [ 1, !Ref TheSubnets ] 220 | 221 | EFSMountTargetAZ2: 222 | Type: AWS::EFS::MountTarget 223 | Condition: CreateMountTarget2 224 | Properties: 225 | FileSystemId: !Ref EFS 226 | SecurityGroups: 227 | - !Ref SecurityGroupEFS 228 | SubnetId: !Select [ 2, !Ref TheSubnets ] 229 | 230 | EFSMountTargetAZ3: 231 | Type: AWS::EFS::MountTarget 232 | Condition: CreateMountTarget3 233 | Properties: 234 | FileSystemId: !Ref EFS 235 | SecurityGroups: 236 | - !Ref SecurityGroupEFS 237 | SubnetId: !Select [ 3, !Ref TheSubnets ] 
238 | 239 | SecurityGroupEFS: 240 | Type: AWS::EC2::SecurityGroup 241 | Properties: 242 | GroupName: !Sub ${ClusterName}-EFS-SecurityGroup 243 | GroupDescription: !Sub 'Allowed ports for ${ClusterName} EFS volume.' 244 | VpcId: !Ref TheVPCID 245 | SecurityGroupIngress: 246 | - IpProtocol: tcp 247 | FromPort: 2049 248 | ToPort: 2049 249 | CidrIp: !GetAtt SubnetChecker.vpc_cidr 250 | 251 | FileSystemBackupVault: 252 | Type: AWS::Backup::BackupVault 253 | Condition: ConditionBackups 254 | DeletionPolicy: Retain 255 | Properties: 256 | BackupVaultName: !Sub efs-${EFS}-backup 257 | AccessPolicy: | 258 | { 259 | "Version": "2012-10-17", 260 | "Statement": [ 261 | { 262 | "Effect": "Deny", 263 | "Principal": "*", 264 | "Action": "backup:DeleteBackupVault", 265 | "Resource": "*" 266 | } 267 | ] 268 | } 269 | 270 | FileSystemBackupPlan: 271 | Type: AWS::Backup::BackupPlan 272 | Condition: ConditionBackups 273 | Properties: 274 | BackupPlan: 275 | BackupPlanName: !Sub ${EFS}-Daily-Weekly-Monthly 276 | BackupPlanRule: 277 | - RuleName: DailyBackups 278 | TargetBackupVault: !Ref FileSystemBackupVault 279 | ScheduleExpression: "cron(0 5 ? * * *)" 280 | StartWindowMinutes: 480 281 | CompletionWindowMinutes: 10080 282 | Lifecycle: 283 | DeleteAfterDays: 35 284 | - RuleName: WeeklyBackups 285 | TargetBackupVault: !Ref FileSystemBackupVault 286 | ScheduleExpression: "cron(0 5 ? * 7 *)" 287 | StartWindowMinutes: 480 288 | CompletionWindowMinutes: 10080 289 | Lifecycle: 290 | DeleteAfterDays: 90 291 | - RuleName: MonthlyBackups 292 | TargetBackupVault: !Ref FileSystemBackupVault 293 | ScheduleExpression: "cron(0 5 1 * ? 
*)" 294 | StartWindowMinutes: 480 295 | CompletionWindowMinutes: 10080 296 | Lifecycle: 297 | MoveToColdStorageAfterDays: 90 298 | DeleteAfterDays: 365 299 | 300 | FileSystemBackupSelection: 301 | Type: AWS::Backup::BackupSelection 302 | Condition: ConditionBackups 303 | Properties: 304 | BackupPlanId: !Ref FileSystemBackupPlan 305 | BackupSelection: 306 | IamRoleArn: !GetAtt AWSBackupRole.Arn 307 | Resources: 308 | - !Sub arn:aws:elasticfilesystem:${AWS::Region}:${AWS::AccountId}:file-system/${EFS} 309 | SelectionName: !Sub efs-${EFS}-backup 310 | 311 | AWSBackupRole: 312 | Type: AWS::IAM::Role 313 | Condition: ConditionBackups 314 | Properties: 315 | AssumeRolePolicyDocument: 316 | Statement: 317 | - Action: 318 | - sts:AssumeRole 319 | Effect: Allow 320 | Principal: 321 | Service: 322 | - backup.amazonaws.com 323 | Version: '2012-10-17' 324 | Path: "/" 325 | ManagedPolicyArns: 326 | - arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup 327 | 328 | 329 | Outputs: 330 | 331 | ServiceDiscoveryNamespace: 332 | Description: 'DNS Namespace.' 333 | Value: !Ref ServiceDiscoveryNamespace 334 | Export: 335 | Name: !Sub ${ClusterName}::ServiceDiscoveryNamespace 336 | 337 | EFS: 338 | Description: 'EFS ID.' 339 | Value: !Ref EFS 340 | Export: 341 | Name: !Sub ${ClusterName}::EFS 342 | 343 | TheVPCID: 344 | Description: 'VPC used for control plane.' 345 | Value: !Ref TheVPCID 346 | Export: 347 | Name: !Sub ${ClusterName}::TheVPCID 348 | 349 | TheVPCCIDR: 350 | Description: 'VPC CIDR.' 351 | Value: !GetAtt SubnetChecker.vpc_cidr 352 | Export: 353 | Name: !Sub ${ClusterName}::TheVPCCIDR 354 | 355 | SubnetsPrivate: 356 | Description: 'Private Subnets used for control plane and the services.' 357 | Value: !Join 358 | - ',' 359 | - !Ref TheSubnets 360 | Export: 361 | Name: !Sub ${ClusterName}::SubnetsPrivate 362 | 363 | TheECSCluster: 364 | Description: 'ECS Cluster name.' 
365 | Value: !Ref ECSCluster 366 | Export: 367 | Name: !Sub ${ClusterName}::TheECSCluster 368 | 369 | IdGeneratorServiceToken: 370 | Description: 'Id Generator Service Token.' 371 | Value: !GetAtt CustomResourcesStack.Outputs.IdGenerator 372 | Export: 373 | Name: !Sub ${ClusterName}::IdGeneratorServiceToken 374 | -------------------------------------------------------------------------------- /data-plane/fargate-sql-service/fargate-sql-service.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Deploy SQL SERVER DB services 3 | 4 | 5 | Conditions: 6 | 7 | EnableSpot: !Equals [!Ref UseSpot, 'true'] 8 | EnableOfficeHours: !Equals [!Ref OfficeHours, 'true'] 9 | 10 | 11 | Parameters: 12 | 13 | UseSpot: 14 | Type: String 15 | Description: > 16 | Choose if you want to use Fargate Spot technology 17 | Default: No 18 | AllowedValues: 19 | - Yes 20 | - No 21 | 22 | ClusterName: 23 | Type: String 24 | Default: Testing 25 | Description: Existing ECS cluster 26 | 27 | TheVPCID: 28 | Type: AWS::EC2::VPC::Id 29 | Description: The VPC 30 | 31 | SubnetsPrivate: 32 | Type: List<AWS::EC2::Subnet::Id> 33 | Description: The Subnets 34 | 35 | EFSID: 36 | Type: String 37 | Description: EFS ID 38 | 39 | SRVPrefix: 40 | Type: String 41 | Default: testing 42 | Description: Random prefix for the service 43 | 44 | ServiceDiscoveryNamespace: 45 | Type: String 46 | Description: Private service discovery namespace 47 | 48 | PlatformVersion: 49 | Type: String 50 | Default: 1.4.0 51 | Description: Fargate platform version 52 | 53 | CloudWatchLogGroup: 54 | Type: String 55 | Default: /ecs/SQLServerFargate 56 | Description: Specify the destination log group in CloudWatch 57 | 58 | ImageUrl: 59 | Type: String 60 | Default: nginx 61 | Description: The URL of a Docker image 62 | 63 | ContainerPort: 64 | Type: Number 65 | Default: 80 66 | Description: The port number the application inside the Docker container binds to 67 |
68 | TaskCPU: 69 | Type: Number 70 | Default: 256 71 | Description: CPU shares 72 | 73 | TaskMemory: 74 | Type: Number 75 | Default: 512 76 | Description: Memory hard limit 77 | 78 | MSSQLCOLLATION: 79 | Type: String 80 | Default: SQL_Latin1_General_CP1_CI_AS 81 | Description: MS SQL Collation setting 82 | 83 | DesiredCount: 84 | Type: Number 85 | Default: 1 86 | Description: How many copies of the service task to run 87 | 88 | DefaultAccessCIDR: 89 | Type: String 90 | Default: '0.0.0.0/0' 91 | Description: Default access rule for the services 92 | 93 | OfficeHours: 94 | Type: String 95 | Description: > 96 | Choose if you want to enable scheduled on / off 97 | Default: No 98 | AllowedValues: 99 | - Yes 100 | - No 101 | 102 | OfficeHoursActive: 103 | Type: String 104 | Description: > 105 | Active schedule 106 | Default: '?' 107 | AllowedValues: 108 | - '?' 109 | - MON-FRI 110 | 111 | OfficeHoursStart: 112 | Type: String 113 | Description: > 114 | Time to start (UTC) 115 | Default: 9 116 | AllowedPattern: ^\d{1}$|^[1]{1}\d{1}$|^[2]{1}[0-3]{1}$ 117 | 118 | OfficeHoursStop: 119 | Type: String 120 | Description: > 121 | Time to stop (UTC) 122 | Default: 18 123 | AllowedPattern: ^\d{1}$|^[1]{1}\d{1}$|^[2]{1}[0-3]{1}$ 124 | 125 | 126 | Resources: 127 | 128 | ### 129 | ## ECS Service Task definition 130 | # 131 | TaskDefinition: 132 | Type: AWS::ECS::TaskDefinition 133 | Properties: 134 | Family: !Sub ${SRVPrefix}-service 135 | Cpu: !Ref TaskCPU 136 | Memory: !Ref TaskMemory 137 | NetworkMode: awsvpc 138 | ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn 139 | TaskRoleArn: !GetAtt TaskRole.Arn 140 | RequiresCompatibilities: 141 | - FARGATE 142 | Volumes: 143 | - Name: EFS_DATA 144 | EFSVolumeConfiguration: 145 | AuthorizationConfig: 146 | AccessPointId: !Ref DataAcessPoint 147 | IAM: DISABLED 148 | FilesystemId: !Ref EFSID 149 | TransitEncryption: ENABLED 150 | - Name: EFS_BKP 151 | EFSVolumeConfiguration: 152 | AuthorizationConfig: 153 | AccessPointId: !Ref BKPAcessPoint
154 | IAM: DISABLED 155 | FilesystemId: !Ref EFSID 156 | TransitEncryption: ENABLED 157 | ContainerDefinitions: 158 | # Main SQL server container 159 | - Name: !Sub ${SRVPrefix}-sqlserver 160 | Essential: true 161 | Image: !Ref ImageUrl 162 | PortMappings: 163 | - ContainerPort: !Ref ContainerPort 164 | Protocol: tcp 165 | Environment: 166 | - Name: ACCEPT_EULA 167 | Value: Y 168 | - Name: MSSQL_AGENT_ENABLED 169 | Value: 'True' 170 | - Name: MSSQL_COLLATION 171 | Value: !Ref MSSQLCOLLATION 172 | Secrets: 173 | - Name: SA_PASSWORD 174 | ValueFrom: !Ref DBpasswordSA 175 | LogConfiguration: 176 | LogDriver: awslogs 177 | Options: 178 | awslogs-group: !Ref CloudWatchLogGroup 179 | awslogs-create-group: 'true' 180 | awslogs-region: !Ref AWS::Region 181 | awslogs-stream-prefix: !Sub ${SRVPrefix}-sqlserver 182 | MountPoints: 183 | - ContainerPath: /var/opt/mssql 184 | ReadOnly: false 185 | SourceVolume: EFS_DATA 186 | - ContainerPath: /backup 187 | ReadOnly: false 188 | SourceVolume: EFS_BKP 189 | # Sidecar Storage Manager 190 | - Name: !Sub ${SRVPrefix}-StorageManager 191 | Essential: false 192 | Image: coderaiser/cloudcmd:14.3.10-alpine 193 | Command: 194 | - '--no-keys-panel' 195 | - '--one-file-panel' 196 | - '--port=80' 197 | - '--root=/backup' 198 | PortMappings: 199 | - ContainerPort: 80 200 | Protocol: tcp 201 | Environment: 202 | - Name: CLOUDCMD_AUTH 203 | Value: 'true' 204 | - Name: CLOUDCMD_USERNAME 205 | Value: administrator 206 | - Name: CLOUDCMD_NAME 207 | Value: !Sub ${SRVPrefix} - SQL Fabric Storage Manager 208 | Secrets: 209 | - Name: CLOUDCMD_PASSWORD 210 | ValueFrom: !Ref StorageManagerAdministratorPassword 211 | LogConfiguration: 212 | LogDriver: awslogs 213 | Options: 214 | awslogs-group: !Ref CloudWatchLogGroup 215 | awslogs-create-group: 'true' 216 | awslogs-region: !Ref AWS::Region 217 | awslogs-stream-prefix: !Sub ${SRVPrefix}-StorageManager 218 | MountPoints: 219 | - ContainerPath: /backup 220 | ReadOnly: false 221 | SourceVolume: EFS_BKP 222 | 
223 | ### 224 | ## EFS Backup Access point 225 | # 226 | BKPAcessPoint: 227 | Type: AWS::EFS::AccessPoint 228 | Properties: 229 | FileSystemId: !Ref EFSID 230 | PosixUser: 231 | Gid: '7000' 232 | Uid: '7000' 233 | RootDirectory: 234 | CreationInfo: 235 | OwnerGid: '7000' 236 | OwnerUid: '7000' 237 | Permissions: '750' 238 | Path: !Sub '/${SRVPrefix}/backup' 239 | 240 | ### 241 | ## EFS Data Access point 242 | # 243 | DataAcessPoint: 244 | Type: AWS::EFS::AccessPoint 245 | Properties: 246 | FileSystemId: !Ref EFSID 247 | PosixUser: 248 | Gid: '7000' 249 | Uid: '7000' 250 | RootDirectory: 251 | CreationInfo: 252 | OwnerGid: '7000' 253 | OwnerUid: '7000' 254 | Permissions: '750' 255 | Path: !Sub '/${SRVPrefix}/data' 256 | 257 | ### 258 | ## ECS Service specification 259 | # 260 | Service: 261 | Type: AWS::ECS::Service 262 | Properties: 263 | CapacityProviderStrategy: !If 264 | - EnableSpot 265 | - - CapacityProvider: FARGATE_SPOT 266 | Weight: 1 267 | - - CapacityProvider: FARGATE 268 | Weight: 1 269 | ServiceName: !Sub ${SRVPrefix}-service 270 | Cluster: !Ref ClusterName 271 | PlatformVersion: !Ref PlatformVersion 272 | DeploymentController: 273 | Type: ECS 274 | DeploymentConfiguration: 275 | MaximumPercent: 100 276 | MinimumHealthyPercent: 0 277 | DesiredCount: !Ref 'DesiredCount' 278 | NetworkConfiguration: 279 | AwsvpcConfiguration: 280 | AssignPublicIp: DISABLED 281 | SecurityGroups: [!Ref SecurityGroup] 282 | Subnets: !Ref SubnetsPrivate 283 | TaskDefinition: !Ref TaskDefinition 284 | ServiceRegistries: 285 | - RegistryArn: !GetAtt DiscoveryService.Arn 286 | 287 | ### 288 | ## Security group for controlling access to the service 289 | # 290 | SecurityGroup: 291 | Type: AWS::EC2::SecurityGroup 292 | Properties: 293 | GroupName: !Sub ${SRVPrefix}-SecurityGroup 294 | GroupDescription: !Sub 'Allowed ports for ${SRVPrefix} ECS service.' 
295 | VpcId: !Ref TheVPCID 296 | SecurityGroupIngress: 297 | - IpProtocol: tcp 298 | FromPort: !Ref ContainerPort 299 | ToPort: !Ref ContainerPort 300 | CidrIp: !Ref DefaultAccessCIDR 301 | - IpProtocol: tcp 302 | FromPort: 80 303 | ToPort: 80 304 | CidrIp: !Ref DefaultAccessCIDR 305 | 306 | ### 307 | ## Secrets configuration 308 | # 309 | DBpasswordSA: 310 | Type: AWS::SecretsManager::Secret 311 | Properties: 312 | Name: !Sub ${SRVPrefix}-SApassword 313 | Description: !Sub 'This is the password for the SA SQL user in ${SRVPrefix}-service.' 314 | GenerateSecretString: 315 | PasswordLength: 16 316 | ExcludeLowercase: false 317 | ExcludeNumbers: false 318 | ExcludePunctuation: true 319 | ExcludeUppercase: false 320 | IncludeSpace: false 321 | RequireEachIncludedType: true 322 | ExcludeCharacters: '"@/\' 323 | 324 | StorageManagerAdministratorPassword: 325 | Type: AWS::SecretsManager::Secret 326 | Properties: 327 | Name: !Sub ${SRVPrefix}-StorageManager 328 | Description: !Sub 'This is the password for the Administrator web user in ${SRVPrefix}-service.' 
329 | GenerateSecretString: 330 | PasswordLength: 16 331 | ExcludeLowercase: false 332 | ExcludeNumbers: false 333 | ExcludePunctuation: true 334 | ExcludeUppercase: false 335 | IncludeSpace: false 336 | RequireEachIncludedType: true 337 | ExcludeCharacters: '"@/\' 338 | 339 | ### 340 | ## Task Execution role ARN 341 | # 342 | TaskExecutionRole: 343 | Type: AWS::IAM::Role 344 | Properties: 345 | RoleName: !Sub ${SRVPrefix}-TaskExecutionRole 346 | AssumeRolePolicyDocument: 347 | Statement: 348 | - Action: 349 | - sts:AssumeRole 350 | Effect: Allow 351 | Principal: 352 | Service: 353 | - ecs-tasks.amazonaws.com 354 | Version: '2012-10-17' 355 | Path: "/" 356 | Policies: 357 | - PolicyDocument: 358 | Statement: 359 | - Action: 360 | - logs:CreateLogGroup 361 | - logs:CreateLogStream 362 | - logs:PutLogEvents 363 | Effect: Allow 364 | Resource: 365 | - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${CloudWatchLogGroup}:* 366 | - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${CloudWatchLogGroup} 367 | Version: '2012-10-17' 368 | PolicyName: !Sub ${SRVPrefix}-CW-Policy 369 | - PolicyDocument: 370 | Statement: 371 | - Action: 372 | - kms:Decrypt 373 | - secretsmanager:GetSecretValue 374 | Effect: Allow 375 | Resource: !Sub arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:${SRVPrefix}-* 376 | Version: '2012-10-17' 377 | PolicyName: !Sub ${SRVPrefix}-Secrets-Policy 378 | 379 | ### 380 | ## Task role ARN 381 | # 382 | TaskRole: 383 | Type: AWS::IAM::Role 384 | Properties: 385 | RoleName: !Sub ${SRVPrefix}-TaskRole 386 | AssumeRolePolicyDocument: 387 | Statement: 388 | - Action: 389 | - sts:AssumeRole 390 | Effect: Allow 391 | Principal: 392 | Service: 393 | - ecs-tasks.amazonaws.com 394 | Version: '2012-10-17' 395 | Path: "/" 396 | 397 | ### 398 | ## Service discovery settings 399 | # 400 | DiscoveryService: 401 | Type: AWS::ServiceDiscovery::Service 402 | Properties: 403 | Description: Discovery Service for the SQL container 404 
| DnsConfig: 405 | RoutingPolicy: MULTIVALUE 406 | DnsRecords: 407 | - TTL: 60 408 | Type: A 409 | HealthCheckCustomConfig: 410 | FailureThreshold: 1 411 | Name: !Sub ${SRVPrefix} 412 | NamespaceId: !Ref ServiceDiscoveryNamespace 413 | 414 | ### 415 | ## Scheduled scaling 416 | # 417 | OfficeHoursAutoScalingTarget: 418 | Type: AWS::ApplicationAutoScaling::ScalableTarget 419 | DependsOn: Service 420 | Condition: EnableOfficeHours 421 | Properties: 422 | MaxCapacity: 1 423 | MinCapacity: 0 424 | ResourceId: !Sub service/${ClusterName}/${Service.Name} 425 | RoleARN: !Sub arn:aws:iam::${AWS::AccountId}:role/aws-service-role/ecs.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ECSService 426 | ScalableDimension: ecs:service:DesiredCount 427 | ServiceNamespace: ecs 428 | ScheduledActions: 429 | - ScheduledActionName: Turn On 430 | ScalableTargetAction: 431 | MaxCapacity: 1 432 | MinCapacity: 1 433 | Schedule: !Sub cron(0 ${OfficeHoursStart} * * ${OfficeHoursActive} *) 434 | - ScheduledActionName: Turn Off 435 | ScalableTargetAction: 436 | MaxCapacity: 0 437 | MinCapacity: 0 438 | Schedule: !Sub cron(0 ${OfficeHoursStop} * * ${OfficeHoursActive} *) 439 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # AWS Dev Fabric for SQL Server 2 | 3 | Microsoft made changes to the MSDN license terms that prohibit BYOL of MSDN subscriptions effective after 10/2019. The good news is that SQL Server Developer Edition is now a free download independent of MSDN, which can be deployed on shared and dedicated tenant AWS environments. While many customers may use MSDN to cover Windows Server licensing on premises, they must have a Windows Server License to run the database on AWS due to the inability to bring MSDN licenses. 
4 | 5 | At AWS, we offer customers the option to run SQL Server Developer Edition on Amazon EC2 instances based on Linux OS, which helps customers move away from MSDN for Windows Server licenses. However, this still requires DBAs to have the necessary skills to operate a Linux OS. The AWS Dev Fabric for SQL Server allows customers to keep their SQL Server within a development environment running on AWS, while removing the need for customers to subscribe to MSDN and license Windows Server on Amazon EC2 to run SQL Server Developer Edition. 6 | 7 | The AWS Dev Fabric for SQL Server orchestrates AWS services like Fargate, ECS, CloudMap, CloudWatch, EFS, Lambda, and AWS Backup, offering a serverless solution for running SQL Server Developer Edition in containers while automatically taking care of data persistence, monitoring, log management, backups and auto-recovery. 8 | 9 | - [What does the solution offer ?](#what-does-the-solution-offer-) 10 | - [How to deploy the solution](#how-to-deploy-the-solution) 11 | - [Control plane](#control-plane) 12 | - [Data plane](#data-plane) 13 | - [Architecture](#architecture) 14 | - [Project structure and template anatomy](#project-structure-and-template-anatomy) 15 | - [Considerations](#considerations) 16 | - [Contributing to the project](#contributing-to-the-project) 17 | - [Changelog](#changelog) 18 | - [License](#license) 19 | 20 | #### Security disclosures 21 | 22 | If you think you’ve found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions [here](https://aws.amazon.com/security/vulnerability-reporting/) or email AWS security directly at [aws-security@amazon.com](mailto:aws-security@amazon.com). 23 | 24 | ## What does the solution offer ? 
25 | 26 | It allows you to deploy and create [ECS Fargate containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) using [Microsoft SQL Server Images](https://hub.docker.com/_/microsoft-mssql-server). This will easily and seamlessly provision serverless Microsoft SQL instances for development environments. You can finely control the Network settings (VPC and subnets), the SQL Instance configuration (RAM, CPU cores, etc.) and the specific Image version you would like to use. 27 | 28 | You can launch any number of SQL Instances you require in minutes. Each Database will have its own unique endpoint, password and storage allocation. 29 | 30 | In order to improve security, the Security Groups are configured to only allow the minimum required traffic. Also, the deployment explicitly validates that the selected subnets are [private](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html). As a best practice, Database servers should not be accessible publicly over the internet. In any case, please always double check and ensure that private subnets are selected. 31 | 32 | As the solution is designed for development workloads, it leverages options for minimising costs: 33 | - You can opt in to running the SQL Instances in [Fargate Spot](https://aws.amazon.com/blogs/aws/aws-fargate-spot-now-generally-available/). This can significantly reduce the operational costs. 34 | - You can define a weekly schedule for starting and stopping your SQL Server Instances during office hours. 35 | 36 | As the engine configuration and the data are [securely stored in EFS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/efs-volumes.html), containers can be started, stopped and replaced (in case of malfunction) without affecting or losing data. 
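The private-subnet validation mentioned above is implemented by the custom resource in `control-plane/custom-resources/src_custom_resources/subnets_checker.py`. As a rough illustration only (not the actual implementation), the core of such a check treats a subnet as public when its route table contains a route through an internet gateway:

```python
# Illustrative sketch of a private-subnet check; the real validation lives
# in src_custom_resources/subnets_checker.py. The `routes` argument mimics
# the "Routes" entries returned by the EC2 DescribeRouteTables API.

def subnet_is_private(routes):
    """Return True when no route points at an internet gateway (igw-*)."""
    return not any(
        route.get("GatewayId", "").startswith("igw-") for route in routes
    )

# Hypothetical route tables (shapes only; the IDs are made up):
private_routes = [{"GatewayId": "local"}, {"NatGatewayId": "nat-0abc1234"}]
public_routes = [{"GatewayId": "local"}, {"GatewayId": "igw-0def5678"}]

print(subnet_is_private(private_routes))  # True
print(subnet_is_private(public_routes))   # False
```

In the actual solution this kind of check runs as a CloudFormation custom resource, so a stack that points at a public subnet fails early instead of deploying an exposed database.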
37 | 38 | Finally, each SQL Instance will be accompanied by a simple web-server file manager (running as a side-car container within the same ECS Task) for easily uploading and downloading database files and configurations. This allows you to quickly test datasets and load database backups. 39 | 40 | ## How to deploy the solution 41 | 42 | The solution deployment is fully automated in [CloudFormation](https://aws.amazon.com/cloudformation/). Before deploying, you will need to **package** the templates, a process that will [upload local artifacts to an S3 bucket](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-cli-package.html). This will consolidate the project templates for seamlessly deploying the solution. 43 | 44 | Regarding the parameters, you will find self-explanatory comments and parameter names while deploying the solution through the AWS Web Console. 45 | 46 | ### Step by step 47 | 48 | 1. Locate an S3 bucket, where CloudFormation templates will be stored. This Bucket **must** be in the same region where you will deploy the solution. If using Linux or macOS, you can export the variables for smooth usage: 49 | 50 | ``` 51 | export the_region= 52 | export the_bucket= 53 | ``` 54 | 55 | 2. Clone the repository 56 | 57 | ``` 58 | git clone https://github.com/aws-samples/aws-dev-fabric-for-sql-server.git 59 | ``` 60 | 61 | ... or [download](https://github.com/aws-samples/aws-dev-fabric-for-sql-server/archive/master.zip) it directly as a zip. 62 | 63 | 3. 
Step into the repository folder 64 | 65 | ``` 66 | cd aws-dev-fabric-for-sql-server 67 | ``` 68 | 69 | #### Control plane 70 | 71 | - **Package ...** 72 | ``` 73 | aws --region=$the_region cloudformation package --template-file ./control-plane/control-plane.yaml --s3-bucket $the_bucket --output-template-file ./packaged-control-plane.yaml 74 | ``` 75 | 76 | - **Deploy ...** 77 | 78 | You can use the [AWS Web console to deploy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html)! Upload the **packaged-control-plane.yaml** file. Alternatively, you can deploy via CLI as well: 79 | ``` 80 | aws --region=$the_region cloudformation create-stack \ 81 | --template-body file://packaged-control-plane.yaml \ 82 | --stack-name \ 83 | --disable-rollback \ 84 | --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM \ 85 | --parameters \ 86 | ParameterKey=TheVPCID,ParameterValue="" \ 87 | ParameterKey=SubnetPrivateAZ1,ParameterValue="" \ 88 | ParameterKey=SubnetPrivateAZ2,ParameterValue="" 89 | ``` 90 | 91 | #### Data plane 92 | 93 | - Package ... 94 | ``` 95 | aws --region=$the_region cloudformation package --template-file ./data-plane/data-plane.yaml --s3-bucket $the_bucket --output-template-file packaged-data-plane.yaml 96 | ``` 97 | 98 | - Deploy ... 99 | 100 | You can use the [AWS Web console to deploy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html)! Upload the **packaged-data-plane.yaml** file. 
Alternatively, you can deploy via CLI as well: 101 | 102 | ``` 103 | aws --region=$the_region cloudformation create-stack \ 104 | --template-body file://packaged-data-plane.yaml \ 105 | --stack-name \ 106 | --disable-rollback \ 107 | --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM 108 | ``` 109 | 110 | 111 | ## Architecture 112 | 113 | The SQL Server Dev Fabric allows you to create a fleet of SQL Server 2017/2019 Developer Edition instances running on **Ubuntu 16.04/18.04** on top of Amazon Elastic Container Service (ECS Fargate). The solution helps customers reduce the heavy lifting imposed by using EC2 and/or ECS + EC2 deployments (OS patching, management, hardening, snapshots and AMI lifecycle), allowing customers to quickly scale up to thousands of SQL Server Instances in minutes, with SQL Server auto-recovery and data persistence. The solution is split into two modules. 114 | 115 | #### Control plane 116 | 117 | Ideally, you would have one **Control Plane** deployed per AWS region. This setup takes care of creating and provisioning all the common and underlying resources for the solution to work: 118 | 119 | - Checks and validates that input subnets are private. 120 | - Creates the [ECS Cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html). 121 | - Creates the [EFS shared filesystem](https://aws.amazon.com/efs/) and the [backup policies](https://docs.aws.amazon.com/efs/latest/ug/efs-backup-solutions.html). 122 | - Provisions other CloudFormation resources required for the solution setup, such as [Custom Resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html) and [Macros](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html). 123 | 124 | ![control plane](docs/ControlPlane.png) 125 | 126 | #### Data plane 127 | 128 | You can deploy as many data planes as you require. This deployment allows you to provision **N** unique databases. 
Each of them will have a dedicated set of resources, unique and isolated: 129 | 130 | - A Docker container for running the database engine. 131 | - A portion/directory within the EFS volume for storing the data. 132 | - Dedicated web access for retrieving and adding files to the engine. 133 | - Its own set of unique and randomly generated passwords. 134 | - Its own IAM roles and permissions. 135 | 136 | This is useful for creating multiple and isolated environments with similar setups, in minutes. 137 | 138 | You can later deploy new and different instances of the Data Plane, specifying different engine versions, capabilities, permissions, etc. 139 | 140 | ![data plane](docs/DataPlane.png) 141 | 142 | ## Project structure and template anatomy 143 | 144 | The project structure uses [nested Stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) for deploying the resources in a structured and tidy manner. The CloudFormation templates are subdivided into [Control plane](#control-plane) and [Data plane](#data-plane) directories, which are [deployed separately](#how-to-deploy-the-solution). 145 | 146 | In the **control-plane** section, you will find the CloudFormation definitions for common resources and custom resources that create the base infrastructure for the solution. You will generally not need to check or explore these files. Typically you will deploy this template once per region. 147 | 148 | In the **data-plane** section, you will find the *data-plane.yaml* template. You will deploy this template each time you would like to create a new set of *N* Database engine instances. 
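As noted above, every instance gets its own randomly generated password. These come from Secrets Manager's `GenerateSecretString` (configured in *fargate-sql-service.yaml* for 16 characters, mixed case and digits, no punctuation or spaces, with at least one character of each included type). Purely for illustration, and not part of the solution, a local sketch of an equivalent policy might look like this:

```python
# Illustration only: real credentials are created by AWS Secrets Manager's
# GenerateSecretString (see fargate-sql-service.yaml). This sketch mirrors
# the configured policy: 16 characters, upper/lower/digits, no punctuation
# or spaces, at least one character of each included type.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # RequireEachIncludedType: retry until all three classes appear.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)):
            return candidate

print(generate_password())
```

Because the generation happens server-side in Secrets Manager, the passwords never appear in the templates or in CloudFormation parameters.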
This template will be pre-processed by the [sqlserverdevfabric-macro macro](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html) and will finally use the *fargate-sql-service.yaml* template for deploying all the required resources for the Database Engine (ECS Service, Task Definition, Secrets, EFS Endpoints, etc). Note that each of these deployments is unique and will be part of a single isolated Stack. 149 | 150 | As a sample use case, suppose there is a new project where an internal team requested 5 different SQL Databases for testing and developing features in parallel. You can deploy a new **data-plane** Stack, selecting to deploy 5 Instances. This will create a new set of 5 SQL Instances, each with its own unique credentials and DNS endpoints. You can distribute the access details to the team members, creating such environments in minutes. Each of the created SQL Instances will have similar configurations (RAM, CPU allocation, etc) but unique credentials and endpoints. 151 | 152 | The parameters are generally self-explanatory and you will also find a handy description in the AWS Web Console while deploying the templates. The most important ones are: 153 | 154 | **control-plane.yaml** 155 | - *ECS Cluster name* -> Defines the name for the main ECS Cluster that will hold all the database engines. This cluster will be created as part of the solution. 156 | - *Root DNS name* -> Domain name you would like to use for the solution. This will be used as the suffix for each of the created SQL Instances. Each of the SQL Instances will be accessed via *.sqlserverdev.fabric*. 157 | - *Networking settings* -> Defines the VPC and Subnets where the SQL instances will be placed. The subnets must be private. 158 | 159 | **data-plane.yaml** 160 | - *ECS Cluster name* -> ECS Cluster for deploying the set of instances. This cluster must match the name defined in the **control-plane** section. 
161 | - *SQL Instance count* -> How many SQL Instances you would like to deploy. 162 | - *SQL Server configuration* -> Specific configurations for the SQL Instances (vCPUs, RAM, SQL Docker Image and version, etc). All the instances will have similar configurations. 163 | 164 | The project structure looks as follows: 165 | 166 | ``` 167 | . 168 | ├── control-plane 169 | │   ├── control-plane.yaml 170 | │   ├── custom-resources 171 | │   │   ├── custom-resources.yaml 172 | │   │   ├── src_custom_resources 173 | │   │   │   └── ** .py files for the custom resources code ** 174 | │   │   └── src_macro 175 | │   │   ├── __init__.py 176 | │   │   └── index.py 177 | │   └── storage-manager 178 | │   └── storage-manager.yaml 179 | └── data-plane 180 |    ├── data-plane.yaml 181 |    └── fargate-sql-service 182 |    └── fargate-sql-service.yaml 183 | ``` 184 | 185 | 186 | ## Considerations 187 | 188 | We encourage everyone to report issues and feature requests in [this section](https://github.com/aws-samples/aws-dev-fabric-for-sql-server/issues). This will help to improve the solution and expand it to different use cases. 189 | 190 | - Currently, it is not possible to add customisations or settings to the SQL engine. Selecting the **SQL Server collation** is the only available option for now. This will be hopefully expanded in subsequent releases. 191 | - As the data is stored in EFS, the solution offers reasonable performance for development environments but it will not be suitable for production workloads. 192 | - The solution is currently available for ECS only. EKS clusters are not supported. 193 | - Even though the solution is fully automated, the concept and pattern for running such workloads on Linux containers is relatively new. This may be something to consider if you do not have basic expertise with containers. 194 | 195 | 196 | ## Contributing to the project 197 | 198 | Contributions and feedback are welcome! 
Proposals and pull requests will be considered and responded to. For more information, see the [CONTRIBUTING](https://github.com/aws-samples/aws-dev-fabric-for-sql-server/blob/master/CONTRIBUTING.md) file. 199 | 200 | Amazon Web Services does not currently provide support for modified copies of this software. 201 | 202 | 203 | ## Changelog 204 | 205 | Refer to the [Changelog section](./CHANGELOG.md). 206 | 207 | 208 | ## License 209 | 210 | The AWS Dev Fabric for SQL Server solution is distributed under the [MIT-0 License](https://github.com/aws/mit-0). See [LICENSE](https://github.com/aws-samples/aws-dev-fabric-for-sql-server/blob/master/LICENSE) for more information. 211 | --------------------------------------------------------------------------------