├── dags
│   ├── airflowlib
│   │   ├── __init__.py
│   │   └── emr_lib.py
│   ├── transform
│   │   ├── tags.scala
│   │   ├── links.scala
│   │   ├── movies.scala
│   │   ├── ratings.scala
│   │   ├── genome_tags.scala
│   │   └── genome_scores.scala
│   └── movielens_dag.py
├── NOTICE
├── .github
│   └── PULL_REQUEST_TEMPLATE.md
├── CODE_OF_CONDUCT.md
├── README.md
├── CONTRIBUTING.md
├── cloudformation
│   └── airflow.yaml
└── LICENSE

/dags/airflowlib/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/NOTICE:
--------------------------------------------------------------------------------
AWS Concurrent Data Orchestration Pipeline EMR Livy
Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
*Issue #, if available:*

*Description of changes:*


By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
--------------------------------------------------------------------------------
/dags/transform/tags.scala:
--------------------------------------------------------------------------------
// <s3-bucket-name> is a placeholder: the CloudFormation user data (see
// cloudformation/airflow.yaml) rewrites it to the bucket created by the stack.
val tags_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("s3://<s3-bucket-name>/tags.csv")

tags_df.write.mode("overwrite").parquet("s3://<s3-bucket-name>/movielens-parquet/tags/")
--------------------------------------------------------------------------------
/dags/transform/links.scala:
--------------------------------------------------------------------------------
val links_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("s3://<s3-bucket-name>/links.csv")

links_df.write.mode("overwrite").parquet("s3://<s3-bucket-name>/movielens-parquet/links/")
--------------------------------------------------------------------------------
/dags/transform/movies.scala:
--------------------------------------------------------------------------------
val movies_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("s3://<s3-bucket-name>/movies.csv")

movies_df.write.mode("overwrite").parquet("s3://<s3-bucket-name>/movielens-parquet/movies/")
--------------------------------------------------------------------------------
/dags/transform/ratings.scala:
--------------------------------------------------------------------------------
val ratings_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("s3://<s3-bucket-name>/ratings.csv")

ratings_df.write.mode("overwrite").parquet("s3://<s3-bucket-name>/movielens-parquet/ratings/")
--------------------------------------------------------------------------------
/dags/transform/genome_tags.scala:
--------------------------------------------------------------------------------
val genome_tags_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("s3://<s3-bucket-name>/genome-tags.csv")

genome_tags_df.write.mode("overwrite").parquet("s3://<s3-bucket-name>/movielens-parquet/genome-tags/")
--------------------------------------------------------------------------------
/dags/transform/genome_scores.scala:
--------------------------------------------------------------------------------
val genome_scores_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("s3://<s3-bucket-name>/genome-scores.csv")

genome_scores_df.write.mode("overwrite").parquet("s3://<s3-bucket-name>/movielens-parquet/genome-scores/")
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# AWS Concurrent Data Orchestration Pipeline EMR Livy
This code demonstrates the architecture featured on the AWS Big Data blog (https://aws.amazon.com/blogs/big-data/), which builds a concurrent data pipeline using Amazon EMR and Apache Livy, orchestrated by Apache Airflow.

### Description of the project folders
##### cloudformation
This folder contains the CloudFormation template that spins up the Airflow infrastructure.

##### dags/airflowlib
This folder contains reusable code for working with Amazon EMR and Apache Livy.

##### dags/transform
This folder contains sample Scala transformation scripts that convert the MovieLens data files from CSV to Parquet.

##### dags/movielens_dag.py
This script contains the DAG definition, i.e. the Airflow pipeline itself.

### License
This library is licensed under the Apache 2.0 License.
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
# Contributing Guidelines

Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.


## Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check [existing open](https://github.com/aws-samples/aws-concurrent-data-orchestration-pipeline-emr-livy/issues) or [recently closed](https://github.com/aws-samples/aws-concurrent-data-orchestration-pipeline-emr-livy/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20) issues to make sure somebody else hasn't already
reported it. Please try to include as much information as you can. Details like these are incredibly useful:

* A reproducible test case or series of steps
* The version of our code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment


## Contributing via Pull Requests
Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the *master* branch.
2. You check existing open and recently merged pull requests to make sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).


## Finding contributions to work on
Looking at the existing issues is a great way to find something to work on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws-samples/aws-concurrent-data-orchestration-pipeline-emr-livy/labels/help%20wanted) issues is a great place to start.


## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.


## Security issue notifications
If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.


## Licensing

See the [LICENSE](https://github.com/aws-samples/aws-concurrent-data-orchestration-pipeline-emr-livy/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.

We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
--------------------------------------------------------------------------------
/dags/airflowlib/emr_lib.py:
--------------------------------------------------------------------------------
import boto3, json, logging, requests, time
from datetime import datetime

def get_region():
    # Query the EC2 instance metadata service for the region this instance runs in.
    r = requests.get("http://169.254.169.254/latest/dynamic/instance-identity/document")
    response_json = r.json()
    return response_json.get('region')

def client(region_name):
    # Create a module-level EMR client that the functions below share.
    global emr
    emr = boto3.client('emr', region_name=region_name)

def get_security_group_id(group_name, region_name):
    ec2 = boto3.client('ec2', region_name=region_name)
    response = ec2.describe_security_groups(GroupNames=[group_name])
    return response['SecurityGroups'][0]['GroupId']

def create_cluster(region_name, cluster_name='Airflow-' + str(datetime.now()), release_label='emr-5.9.0',
                   master_instance_type='m3.xlarge', num_core_nodes=2, core_node_instance_type='m3.2xlarge'):
    emr_master_security_group_id = get_security_group_id('AirflowEMRMasterSG', region_name=region_name)
    emr_slave_security_group_id = get_security_group_id('AirflowEMRSlaveSG', region_name=region_name)
    cluster_response = emr.run_job_flow(
        Name=cluster_name,
        ReleaseLabel=release_label,
        Instances={
            'InstanceGroups': [
                {
                    'Name': "Master nodes",
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': master_instance_type,
                    'InstanceCount': 1
                },
                {
                    'Name': "Slave nodes",
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': core_node_instance_type,
                    'InstanceCount': num_core_nodes
                }
            ],
            'KeepJobFlowAliveWhenNoSteps': True,
            'Ec2KeyName': 'airflow_key_pair',
            'EmrManagedMasterSecurityGroup': emr_master_security_group_id,
            'EmrManagedSlaveSecurityGroup': emr_slave_security_group_id
        },
        VisibleToAllUsers=True,
        JobFlowRole='EmrEc2InstanceProfile',
        ServiceRole='EmrRole',
        Applications=[
            { 'Name': 'hadoop' },
            { 'Name': 'spark' },
            { 'Name': 'hive' },
            { 'Name': 'livy' },
            { 'Name': 'zeppelin' }
        ]
    )
    return cluster_response['JobFlowId']


def get_cluster_dns(cluster_id):
    response = emr.describe_cluster(ClusterId=cluster_id)
    return response['Cluster']['MasterPublicDnsName']


def wait_for_cluster_creation(cluster_id):
    emr.get_waiter('cluster_running').wait(ClusterId=cluster_id)


def terminate_cluster(cluster_id):
    emr.terminate_job_flows(JobFlowIds=[cluster_id])


# Creates an interactive Scala Spark session.
# Python (kind=pyspark), R (kind=sparkr) and SQL (kind=sql) Spark sessions can also be
# created by changing the value of kind.
def create_spark_session(master_dns, kind='spark'):
    # 8998 is the port on which the Livy server runs
    host = 'http://' + master_dns + ':8998'
    data = {'kind': kind}
    headers = {'Content-Type': 'application/json'}
    response = requests.post(host + '/sessions', data=json.dumps(data), headers=headers)
    logging.info(response.json())
    return response.headers


def wait_for_idle_session(master_dns, response_headers):
    # Wait for the session to reach the 'idle' state, i.e. ready for job submission.
    status = ''
    host = 'http://' + master_dns + ':8998'
    session_url = host + response_headers['location']
    while status != 'idle':
        time.sleep(3)
        status_response = requests.get(session_url, headers=response_headers)
        status = status_response.json()['state']
        logging.info('Session status: ' + status)
    return session_url


def kill_spark_session(session_url):
    requests.delete(session_url, headers={'Content-Type': 'application/json'})


# Submits the Scala code as a simple JSON command to the Livy server
def submit_statement(session_url, statement_path):
    statements_url = session_url + '/statements'
    with open(statement_path, 'r') as f:
        code = f.read()
    data = {'code': code}
    response = requests.post(statements_url, data=json.dumps(data), headers={'Content-Type': 'application/json'})
    logging.info(response.json())
    return response


# Helps track the progress of the Scala code submitted to Apache Livy
def track_statement_progress(master_dns, response_headers):
    statement_status = ''
    host = 'http://' + master_dns + ':8998'
    session_url = host + response_headers['location'].split('/statements', 1)[0]
    # Poll the status of the submitted Scala code
    while statement_status != 'available':
        # If a statement takes longer than a few milliseconds to execute, Livy returns early
        # and provides a statement URL that can be polled until it is complete:
        statement_url = host + response_headers['location']
        statement_response = requests.get(statement_url, headers={'Content-Type': 'application/json'})
        statement_status = statement_response.json()['state']
        logging.info('Statement status: ' + statement_status)

        # Forward the session's log lines to the task log
        lines = requests.get(session_url + '/log', headers={'Content-Type': 'application/json'}).json()['log']
        for line in lines:
            logging.info(line)

        if 'progress' in statement_response.json():
            logging.info('Progress: ' + str(statement_response.json()['progress']))
        time.sleep(10)
    final_statement_status = statement_response.json()['output']['status']
    if final_statement_status == 'error':
        logging.info('Statement exception: ' + statement_response.json()['output']['evalue'])
        for trace in statement_response.json()['output']['traceback']:
            logging.info(trace)
        raise ValueError('Final Statement Status: ' + final_statement_status)
    logging.info('Final Statement Status: ' + final_statement_status)


def get_public_ip(cluster_id):
    instances = emr.list_instances(ClusterId=cluster_id, InstanceGroupTypes=['MASTER'])
    return instances['Instances'][0]['PublicIpAddress']
--------------------------------------------------------------------------------
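
Note: taken together, the helpers above cover the whole round trip: provision a cluster, open a Livy session, submit a script, poll it, and tear everything down. A minimal usage sketch outside of Airflow, composed only from this module's own functions (an illustration, not a file in this repository):

# Hedged usage sketch of airflowlib.emr_lib -- not part of the repository.
import airflowlib.emr_lib as emr

region = emr.get_region()
emr.client(region_name=region)

cluster_id = emr.create_cluster(region_name=region, cluster_name='demo_cluster', num_core_nodes=2)
emr.wait_for_cluster_creation(cluster_id)
cluster_dns = emr.get_cluster_dns(cluster_id)

headers = emr.create_spark_session(cluster_dns, kind='spark')
session_url = emr.wait_for_idle_session(cluster_dns, headers)
response = emr.submit_statement(session_url, '/root/airflow/dags/transform/tags.scala')
emr.track_statement_progress(cluster_dns, response.headers)
emr.kill_spark_session(session_url)
emr.terminate_cluster(cluster_id)

As the session-kind comment in emr_lib.py notes, a kind='pyspark' session accepts Python instead of Scala. A hedged PySpark equivalent of tags.scala (again an illustration; 'spark' is the SparkSession Livy provides inside the session, and <s3-bucket-name> is the same placeholder the Scala scripts use):

# Hedged PySpark equivalent of tags.scala, for a Livy session of kind='pyspark'.
tags_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("s3://<s3-bucket-name>/tags.csv")
tags_df.write.mode("overwrite").parquet("s3://<s3-bucket-name>/movielens-parquet/tags/")
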
/dags/movielens_dag.py:
--------------------------------------------------------------------------------
import airflowlib.emr_lib as emr

from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2017, 11, 1),
    'retries': 0,
    'retry_delay': timedelta(minutes=2),
    'provide_context': True
}

# Initialize the DAG
# concurrency --> number of tasks allowed to run concurrently
dag = DAG('transform_movielens', concurrency=3, schedule_interval=None, default_args=default_args)
region = emr.get_region()
emr.client(region_name=region)

# Creates an EMR cluster
def create_emr(**kwargs):
    cluster_id = emr.create_cluster(region_name=region, cluster_name='movielens_cluster', num_core_nodes=2)
    return cluster_id

# Waits for the EMR cluster to be ready to accept jobs
def wait_for_completion(**kwargs):
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    emr.wait_for_cluster_creation(cluster_id)

# Terminates the EMR cluster
def terminate_emr(**kwargs):
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    emr.terminate_cluster(cluster_id)
# Each transform below converts one of the MovieLens data files to Parquet by
# submitting the matching Scala script to the cluster through Livy.
def transform_movies_to_parquet(**kwargs):
    # ti is the Task Instance
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    cluster_dns = emr.get_cluster_dns(cluster_id)
    headers = emr.create_spark_session(cluster_dns, 'spark')
    session_url = emr.wait_for_idle_session(cluster_dns, headers)
    statement_response = emr.submit_statement(session_url, '/root/airflow/dags/transform/movies.scala')
    emr.track_statement_progress(cluster_dns, statement_response.headers)
    emr.kill_spark_session(session_url)

def transform_tags_to_parquet(**kwargs):
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    cluster_dns = emr.get_cluster_dns(cluster_id)
    headers = emr.create_spark_session(cluster_dns, 'spark')
    session_url = emr.wait_for_idle_session(cluster_dns, headers)
    statement_response = emr.submit_statement(session_url, '/root/airflow/dags/transform/tags.scala')
    emr.track_statement_progress(cluster_dns, statement_response.headers)
    emr.kill_spark_session(session_url)

def transform_ratings_to_parquet(**kwargs):
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    cluster_dns = emr.get_cluster_dns(cluster_id)
    headers = emr.create_spark_session(cluster_dns, 'spark')
    session_url = emr.wait_for_idle_session(cluster_dns, headers)
    statement_response = emr.submit_statement(session_url, '/root/airflow/dags/transform/ratings.scala')
    emr.track_statement_progress(cluster_dns, statement_response.headers)
    emr.kill_spark_session(session_url)

def transform_links_to_parquet(**kwargs):
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    cluster_dns = emr.get_cluster_dns(cluster_id)
    headers = emr.create_spark_session(cluster_dns, 'spark')
    session_url = emr.wait_for_idle_session(cluster_dns, headers)
    statement_response = emr.submit_statement(session_url, '/root/airflow/dags/transform/links.scala')
    emr.track_statement_progress(cluster_dns, statement_response.headers)
    emr.kill_spark_session(session_url)

def transform_genome_scores_to_parquet(**kwargs):
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    cluster_dns = emr.get_cluster_dns(cluster_id)
    headers = emr.create_spark_session(cluster_dns, 'spark')
    session_url = emr.wait_for_idle_session(cluster_dns, headers)
    statement_response = emr.submit_statement(session_url, '/root/airflow/dags/transform/genome_scores.scala')
    emr.track_statement_progress(cluster_dns, statement_response.headers)
    emr.kill_spark_session(session_url)

def transform_genome_tags_to_parquet(**kwargs):
    ti = kwargs['ti']
    cluster_id = ti.xcom_pull(task_ids='create_cluster')
    cluster_dns = emr.get_cluster_dns(cluster_id)
    headers = emr.create_spark_session(cluster_dns, 'spark')
    session_url = emr.wait_for_idle_session(cluster_dns, headers)
    statement_response = emr.submit_statement(session_url, '/root/airflow/dags/transform/genome_tags.scala')
    emr.track_statement_progress(cluster_dns, statement_response.headers)
    emr.kill_spark_session(session_url)
# Define the individual tasks using Python operators
create_cluster = PythonOperator(
    task_id='create_cluster',
    python_callable=create_emr,
    dag=dag)

wait_for_cluster_completion = PythonOperator(
    task_id='wait_for_cluster_completion',
    python_callable=wait_for_completion,
    dag=dag)

transform_movies = PythonOperator(
    task_id='transform_movies',
    python_callable=transform_movies_to_parquet,
    dag=dag)

transform_ratings = PythonOperator(
    task_id='transform_ratings',
    python_callable=transform_ratings_to_parquet,
    dag=dag)

transform_tags = PythonOperator(
    task_id='transform_tags',
    python_callable=transform_tags_to_parquet,
    dag=dag)

transform_links = PythonOperator(
    task_id='transform_links',
    python_callable=transform_links_to_parquet,
    dag=dag)

transform_genome_scores = PythonOperator(
    task_id='transform_genome_scores',
    python_callable=transform_genome_scores_to_parquet,
    dag=dag)

transform_genome_tags = PythonOperator(
    task_id='transform_genome_tags',
    python_callable=transform_genome_tags_to_parquet,
    dag=dag)

terminate_cluster = PythonOperator(
    task_id='terminate_cluster',
    python_callable=terminate_emr,
    trigger_rule='all_done',
    dag=dag)

# Construct the DAG by setting the dependencies
create_cluster >> wait_for_cluster_completion
wait_for_cluster_completion >> transform_movies >> terminate_cluster
wait_for_cluster_completion >> transform_ratings >> terminate_cluster
wait_for_cluster_completion >> transform_links >> terminate_cluster
wait_for_cluster_completion >> transform_tags >> terminate_cluster
wait_for_cluster_completion >> transform_genome_scores >> terminate_cluster
wait_for_cluster_completion >> transform_genome_tags >> terminate_cluster
--------------------------------------------------------------------------------
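
Note: the six transform_* callables and their PythonOperators differ only in the Scala script they submit. A hedged refactoring sketch (hypothetical, not part of the repository; assumes the same module context as movielens_dag.py) that derives all six tasks and their dependencies from a list:

# Hedged sketch: generate the transform tasks from one factory function.
def make_transform(script_name):
    def _transform(**kwargs):
        cluster_id = kwargs['ti'].xcom_pull(task_ids='create_cluster')
        cluster_dns = emr.get_cluster_dns(cluster_id)
        headers = emr.create_spark_session(cluster_dns, 'spark')
        session_url = emr.wait_for_idle_session(cluster_dns, headers)
        response = emr.submit_statement(session_url, '/root/airflow/dags/transform/%s.scala' % script_name)
        emr.track_statement_progress(cluster_dns, response.headers)
        emr.kill_spark_session(session_url)
    return _transform

for name in ['movies', 'ratings', 'tags', 'links', 'genome_scores', 'genome_tags']:
    task = PythonOperator(task_id='transform_' + name, python_callable=make_transform(name), dag=dag)
    wait_for_cluster_completion >> task >> terminate_cluster
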
/cloudformation/airflow.yaml:
--------------------------------------------------------------------------------
AWSTemplateFormatVersion: '2010-09-09'

Description: Airflow server backed by Postgres RDS

Parameters:
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access into the Airflow web server
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: Must be the name of an existing EC2 KeyPair
  S3BucketName:
    Description: REQUIRED - A new S3 bucket name. This bucket will be used to read and write the MovieLens dataset.
    Type: String
    AllowedPattern: '.+'
  DBPassword:
    Default: airflowpassword
    NoEcho: 'true'
    Description: Airflow database admin account password
    Type: String
    MinLength: '8'
    MaxLength: '41'
    AllowedPattern: '[a-zA-Z0-9]*'
    ConstraintDescription: Must contain only alphanumeric characters

# Mapping to find the Amazon Linux AMI in each region.
Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-97785bed
    us-east-2:
      AMI: ami-f63b1193
    us-west-1:
      AMI: ami-824c4ee2
    us-west-2:
      AMI: ami-f2d3638a
    ca-central-1:
      AMI: ami-a954d1cd
    eu-west-1:
      AMI: ami-d834aba1
    eu-west-2:
      AMI: ami-403e2524
    eu-west-3:
      AMI: ami-8ee056f3
    eu-central-1:
      AMI: ami-5652ce39
    sa-east-1:
      AMI: ami-84175ae8
    ap-south-1:
      AMI: ami-531a4c3c
    ap-southeast-1:
      AMI: ami-68097514
    ap-southeast-2:
      AMI: ami-942dd1f6
    ap-northeast-1:
      AMI: ami-ceafcba8
    ap-northeast-2:
      AMI: ami-863090e8
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      KeyName: !Ref 'KeyName'
      SecurityGroups: [!Ref 'AirflowEC2SecurityGroup']
      InstanceType: 'm4.xlarge'
      IamInstanceProfile:
        Ref: EC2InstanceProfile
      Tags:
        -
          Key: Name
          Value: Airflow
      ImageId: !FindInMap
        - RegionMap
        - !Ref 'AWS::Region'
        - AMI
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          set -x
          exec > >(tee /var/log/user-data.log|logger -t user-data ) 2>&1
          # Get the latest CloudFormation package
          echo "Installing aws-cfn"
          yum install -y aws-cfn-bootstrap
          # Start cfn-init
          /opt/aws/bin/cfn-init -v -c install --stack ${AWS::StackId} --resource EC2Instance --region ${AWS::Region}
          # Download and unzip the MovieLens dataset
          wget http://files.grouplens.org/datasets/movielens/ml-latest.zip && unzip ml-latest.zip
          # Upload the MovieLens dataset files to the S3 bucket
          aws s3 cp ml-latest s3://${S3BucketName} --recursive
          # Install git
          sudo yum install -y git
          # Clone the git repository
          git clone https://github.com/aws-samples/aws-concurrent-data-orchestration-pipeline-emr-livy.git
          sudo pip install boto3
          # Install Airflow using pip
          echo "Install Apache Airflow"
          sudo SLUGIFY_USES_TEXT_UNIDECODE=yes pip install -U apache-airflow
          # Encrypt connection passwords in the metadata db
          sudo pip install apache-airflow[crypto]
          # Postgres operators and hook, support as an Airflow backend
          sudo pip install apache-airflow[postgres]
          sudo -H pip install six==1.10.0
          sudo pip install --upgrade six
          sudo pip install markupsafe
          sudo pip install --upgrade MarkupSafe
          echo 'export PATH=/usr/local/bin:$PATH' >> /root/.bash_profile
          source /root/.bash_profile
          # Initialize Airflow
          airflow initdb
          # Update the RDS connection in the Airflow config file
          sed -i '/sql_alchemy_conn/s/^/#/g' ~/airflow/airflow.cfg
          sed -i '/sql_alchemy_conn/ a sql_alchemy_conn = postgresql://airflow:${DBPassword}@${DBInstance.Endpoint.Address}:${DBInstance.Endpoint.Port}/airflowdb' ~/airflow/airflow.cfg
          # Update the type of executor in the Airflow config file
          sed -i '/executor = SequentialExecutor/s/^/#/g' ~/airflow/airflow.cfg
          sed -i '/executor = SequentialExecutor/ a executor = LocalExecutor' ~/airflow/airflow.cfg
          airflow initdb
          # Move all the files to the ~/airflow directory. The Airflow config file is set up to hold all the DAG related files in the ~/airflow/ folder.
          mv aws-concurrent-data-orchestration-pipeline-emr-livy/* ~/airflow/
          # Delete the higher-level git repository directory
          rm -rf aws-concurrent-data-orchestration-pipeline-emr-livy
          # Replace the <s3-bucket-name> placeholder in each of the .scala files with the name of the S3 bucket created by this stack.
          sed -i 's/<s3-bucket-name>/${S3BucketName}/g' /root/airflow/dags/transform/*
          # Run the Airflow webserver
          airflow webserver
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          install:
            - gcc
        gcc:
          packages:
            yum:
              gcc: []
    DependsOn:
      - DBInstance
      - AirflowEC2SecurityGroup
  DBInstance:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Delete
    Properties:
      DBName: airflowdb
      Engine: postgres
      MasterUsername: airflow
      MasterUserPassword: !Ref 'DBPassword'
      DBInstanceClass: db.t2.small
      AllocatedStorage: 5
      DBSecurityGroups:
        - Ref: DBSecurityGroup
  AirflowEC2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: AirflowEC2SG
      GroupDescription: Enable HTTP access via port 80 + SSH access
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 8080
          ToPort: 8080
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
  AirflowEMRMasterEC2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: AirflowEMRMasterSG
      GroupDescription: Airflow EMR Master SG
    DependsOn:
      - AirflowEC2SecurityGroup
  AirflowEMRMasterInboundRule:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      IpProtocol: tcp
      FromPort: '8998'
      ToPort: '8998'
      SourceSecurityGroupName: !Ref 'AirflowEC2SecurityGroup'
      GroupName: !Ref 'AirflowEMRMasterEC2SecurityGroup'
  AirflowEMRSlaveEC2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: AirflowEMRSlaveSG
      GroupDescription: Airflow EMR Slave SG
  DBSecurityGroup:
    Type: AWS::RDS::DBSecurityGroup
    Properties:
      GroupDescription: Frontend Access
      DBSecurityGroupIngress:
        EC2SecurityGroupName:
          Ref: AirflowEC2SecurityGroup
  EC2Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: AirflowInstanceRole
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "ec2.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AmazonElasticMapReduceFullAccess
  EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: AirflowInstanceProfile
      Roles:
        -
          Ref: EC2Role
  EmrRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: EmrRole
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "elasticmapreduce.amazonaws.com"
                - "s3.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AmazonElasticMapReduceFullAccess
  EmrEc2Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: EmrEc2Role
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "ec2.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
  EmrEc2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: EmrEc2InstanceProfile
      Roles:
        -
          Ref: EmrEc2Role
  S3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      AccessControl: BucketOwnerFullControl
      BucketName: !Ref 'S3BucketName'
Outputs:
  AirflowEC2PublicDNSName:
    Description: Public DNS Name of the Airflow EC2 instance
    Value: !Join ["", ["http://", !GetAtt EC2Instance.PublicDnsName, ":8080"]]
--------------------------------------------------------------------------------
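
Note: the Outputs section above publishes the Airflow web UI address once the stack is up. A hedged boto3 sketch for reading it programmatically; the stack name 'airflow' is an assumption, not something the template fixes:

# Hedged sketch: fetch the AirflowEC2PublicDNSName stack output with boto3.
import boto3

cfn = boto3.client('cloudformation')
stack = cfn.describe_stacks(StackName='airflow')['Stacks'][0]  # 'airflow' is an assumed stack name
outputs = {o['OutputKey']: o['OutputValue'] for o in stack['Outputs']}
print(outputs['AirflowEC2PublicDNSName'])  # e.g. http://ec2-...amazonaws.com:8080
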
/LICENSE:
--------------------------------------------------------------------------------

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
--------------------------------------------------------------------------------