├── .env ├── .gitignore ├── 02_getting_started ├── README.md └── app.py ├── 03_create_function_for_spark_session ├── README.md ├── app.py └── util.py ├── 04_read_data_from_files ├── README.md ├── app.py ├── read.py └── util.py ├── 05_process_data_using_spark_apis ├── README.md ├── app.py ├── process.py ├── read.py └── util.py ├── 06_write_data_to_files ├── README.md ├── app.py ├── process.py ├── read.py ├── util.py └── write.py ├── 07_productionize_code ├── README.md ├── app.py ├── process.py ├── read.py ├── util.py └── write.py ├── 08_run_using_cluster_mode ├── README.md ├── emr.sh ├── emr_step.sh ├── emr_step_ai.sh └── exports.sh ├── 11_manage_emr_using_aws_boto3 ├── 06 Overview of AWS boto3 to Manage AWS EMR Clusters.ipynb ├── 07 Create AWS EMR Cluster using AWS boto3.ipynb ├── 09 Add Spark Application as Step to AWS EMR Cluster using Boto3.ipynb ├── Manage EMR using AWS CLI.md ├── add_step.sh ├── aws_cli_commands_to_manage_emr.sh ├── emr-step.json ├── emr_create_cluster.sh ├── emr_create_cluster_step.sh └── manage_emr.sh ├── 12_emr_pipeline_using_step_functions └── Data Pipeline using Step Function with EMR.md ├── 13_overview_of_emr_serverless └── Submit Serverless Spark Job.ipynb ├── 13_spark_sql_using_aws_emr_cluster ├── 07_compute_daily_product_revenue.sql ├── 08_compute_daily_product_revenue_args.sql └── data │ └── retail_db_json │ ├── categories │ └── part-r-00000-ce1d8208-178d-48d3-bfb2-1a97d9c05094 │ ├── create_db_tables_pg.sql │ ├── customers │ └── part-r-00000-70554560-527b-44f6-9e80-4e2031af5994 │ ├── departments │ └── part-r-00000-3db7cfae-3ad2-4fc7-88ff-afe0ec709f49 │ ├── order_items │ └── part-r-00000-6b83977e-3f20-404b-9b5f-29376ab1419e │ ├── orders │ └── part-r-00000-990f5773-9005-49ba-b670-631286032674 │ └── products │ └── part-r-00000-158b7037-4a23-47e6-8cb3-8cbf878beff7 ├── README.md └── associate_elastic_ip_with_emr_master.ipynb /.env: -------------------------------------------------------------------------------- 1 | PYTHONPATH=/usr/lib/spark/python:/usr/lib/spark/python/lib/py4j-src.zip -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .idea 2 | itvg-venv 3 | me-venv 4 | itv-ghactivity 5 | data/itv-github 6 | data.zip 7 | *.zip 8 | .DS_Store 9 | archive 10 | airflow 11 | -------------------------------------------------------------------------------- /02_getting_started/README.md: -------------------------------------------------------------------------------- 1 | # Getting Started 2 | 3 | Here is how we can get started with development of data engineering pipelines using Spark on EMR Cluster. 4 | * Open the project as Visual Studio Workspace using Remote Connection. 5 | * Make sure to install required extensions such as Python/Pylance. 6 | * Connect to Master node of the EMR Cluster and create a folder by name **itv-ghactivity** under workspace **mastering-emr**. 7 | * Create a program called as **app.py** and enter this code. 8 | 9 | ```python 10 | from pyspark.sql import SparkSession 11 | 12 | spark = SparkSession. \ 13 | builder. \ 14 | master('local'). \ 15 | appName('GitHub Activity - Getting Started'). \ 16 | getOrCreate() 17 | 18 | spark.sql('SELECT current_date').show() 19 | ``` 20 | 21 | * Make sure to update **.env** file with PYTHONPATH pointing to Spark related modules already available on the master node of the cluster. 
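The exact py4j zip file name depends on the Spark build installed on the master node, so check `/usr/lib/spark/python/lib/` and use whatever version is present there (the repository's root **.env** and the entry below differ only in that version suffix). As a quick sanity check (a sketch, assuming the PYTHONPATH from **.env** is also exported in your shell on the master node), `pyspark` should then import outside of `spark-submit`:

```python
# Sanity check for the PYTHONPATH entries (run with plain python3 on the master node).
import pyspark

print(pyspark.__version__)
```

The **.env** entry itself looks like this (the py4j version shown is an example; match it to your cluster):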
22 | 23 | ``` 24 | PYTHONPATH=/usr/lib/spark/python:/usr/lib/spark/python/lib/py4j-0.10.9.2-src.zip 25 | ``` 26 | 27 | * Try running the application using `spark-submit`. Make sure to run it from **itv-ghactivity** folder. 28 | 29 | ``` 30 | spark-submit \ 31 | --master local \ 32 | app.py 33 | ``` -------------------------------------------------------------------------------- /02_getting_started/app.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql import SparkSession 2 | 3 | spark = SparkSession. \ 4 | builder. \ 5 | master('local'). \ 6 | appName('GitHub Activity - Getting Started'). \ 7 | getOrCreate() 8 | 9 | spark.sql('SELECT current_date').show() 10 | -------------------------------------------------------------------------------- /03_create_function_for_spark_session/README.md: -------------------------------------------------------------------------------- 1 | # Create Function for SparkSession 2 | 3 | Let us create a program and define function which returns spark session object. Make sure to update the code in **itv-ghactivity** folder. 4 | * Define function by name **get_spark_session** in a program called as **util.py** 5 | 6 | ```python 7 | from pyspark.sql import SparkSession 8 | 9 | 10 | def get_spark_session(env, app_name): 11 | if env == 'DEV': 12 | spark = SparkSession. \ 13 | builder. \ 14 | master('local'). \ 15 | appName(app_name). \ 16 | getOrCreate() 17 | return spark 18 | return 19 | ``` 20 | 21 | * Refactor code in **app.py** to use the function to get Spark Session. 22 | 23 | ```python 24 | import os 25 | from util import get_spark_session 26 | 27 | 28 | def main(): 29 | env = os.environ.get('ENVIRON') 30 | spark = get_spark_session(env, 'GitHub Activity - Getting Started') 31 | spark.sql('SELECT current_date').show() 32 | 33 | 34 | if __name__ == '__main__': 35 | main() 36 | ``` 37 | * Configure run time environment variable by name **ENVIRON** with value **DEV** to validate locally. 38 | * Run the program to confirm that the changes are working as expected. 39 | 40 | ``` 41 | export ENVIRON=DEV 42 | spark-submit \ 43 | --master local \ 44 | app.py 45 | ``` -------------------------------------------------------------------------------- /03_create_function_for_spark_session/app.py: -------------------------------------------------------------------------------- 1 | import os 2 | from util import get_spark_session 3 | 4 | 5 | def main(): 6 | env = os.environ.get('ENVIRON') 7 | spark = get_spark_session(env, 'GitHub Activity - Getting Started') 8 | spark.sql('SELECT current_date').show() 9 | 10 | 11 | if __name__ == '__main__': 12 | main() 13 | -------------------------------------------------------------------------------- /03_create_function_for_spark_session/util.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql import SparkSession 2 | 3 | 4 | def get_spark_session(env, app_name): 5 | if env == 'DEV': 6 | spark = SparkSession. \ 7 | builder. \ 8 | master('local'). \ 9 | appName(app_name). \ 10 | getOrCreate() 11 | return spark 12 | return 13 | -------------------------------------------------------------------------------- /04_read_data_from_files/README.md: -------------------------------------------------------------------------------- 1 | # Read data from files 2 | 3 | Let us develop the code to read the data from files into Spark Dataframes. 4 | * Create directory for data under **mastering-emr** and copy some files into it. 
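Each GH Archive file holds one hour of GitHub events as gzipped, newline-delimited JSON, and every event carries the fields used later in this course (`created_at`, `repo`, and so on). After running the download commands below, you can peek at a single event to see that structure (a sketch, assuming the hour-0 file name used below):

```python
import gzip
import json

# Inspect the first event of one hourly GH Archive file (downloaded below).
with gzip.open('2021-01-13-0.json.gz', 'rt') as fp:
    event = json.loads(fp.readline())

print(event['created_at'], event['type'], event['repo']['name'])
```

Here are the commands to create the data folder and download a few hourly files: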
5 | 6 | ```shell script 7 | mkdir -p data/itv-github/landing/ghactivity 8 | cd data/itv-github/landing/ghactivity 9 | wget https://data.gharchive.org/2021-01-13-0.json.gz 10 | wget https://data.gharchive.org/2021-01-14-0.json.gz 11 | wget https://data.gharchive.org/2021-01-15-0.json.gz 12 | ``` 13 | * Make sure to upload all the files to s3. 14 | ``` 15 | aws s3 rm s3://aigithub/landing/ghactivity/ \ 16 | --recursive 17 | 18 | aws s3 cp ~/mastering-emr/data/itv-github/landing/ghactivity \ 19 | s3://aigithub/landing/ghactivity \ 20 | --recursive 21 | ``` 22 | * Create a Python program by name **read.py**. We will create a function by name **from_files**. It reads the data from files into Dataframe and returns it. 23 | 24 | ```python 25 | def from_files(spark, data_dir, file_pattern, file_format): 26 | df = spark. \ 27 | read. \ 28 | format(file_format). \ 29 | load(f'{data_dir}/{file_pattern}') 30 | return df 31 | ``` 32 | 33 | * Call the program from **app.py**. For now review schema and data. 34 | 35 | ```python 36 | import os 37 | from util import get_spark_session 38 | from read import from_files 39 | 40 | 41 | def main(): 42 | env = os.environ.get('ENVIRON') 43 | src_dir = os.environ.get('SRC_DIR') 44 | file_pattern = f"{os.environ.get('SRC_FILE_PATTERN')}-*" 45 | src_file_format = os.environ.get('SRC_FILE_FORMAT') 46 | spark = get_spark_session(env, 'GitHub Activity - Reading Data') 47 | df = from_files(spark, src_dir, file_pattern, src_file_format) 48 | df.printSchema() 49 | df.select('repo.*').show() 50 | 51 | 52 | if __name__ == '__main__': 53 | main() 54 | ``` 55 | * Run the program to confirm that the changes are working as expected. 56 | 57 | ``` 58 | export ENVIRON=DEV 59 | export SRC_DIR=s3://aigithub/landing/ghactivity 60 | export SRC_FILE_PATTERN=2021-01-15 61 | export SRC_FILE_FORMAT=json 62 | 63 | spark-submit \ 64 | --master local \ 65 | app.py 66 | ``` -------------------------------------------------------------------------------- /04_read_data_from_files/app.py: -------------------------------------------------------------------------------- 1 | import os 2 | from util import get_spark_session 3 | from read import from_files 4 | 5 | 6 | def main(): 7 | env = os.environ.get('ENVIRON') 8 | src_dir = os.environ.get('SRC_DIR') 9 | file_pattern = f"{os.environ.get('SRC_FILE_PATTERN')}-*" 10 | src_file_format = os.environ.get('SRC_FILE_FORMAT') 11 | spark = get_spark_session(env, 'GitHub Activity - Reading Data') 12 | df = from_files(spark, src_dir, file_pattern, src_file_format) 13 | df.printSchema() 14 | df.select('repo.*').show() 15 | 16 | 17 | if __name__ == '__main__': 18 | main() 19 | -------------------------------------------------------------------------------- /04_read_data_from_files/read.py: -------------------------------------------------------------------------------- 1 | def from_files(spark, src_dir, file_pattern, file_format): 2 | df = spark. \ 3 | read. \ 4 | format(file_format). \ 5 | load(f'{src_dir}/{file_pattern}') 6 | return df 7 | -------------------------------------------------------------------------------- /04_read_data_from_files/util.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql import SparkSession 2 | 3 | 4 | def get_spark_session(env, app_name): 5 | if env == 'DEV': 6 | spark = SparkSession. \ 7 | builder. \ 8 | master('local'). \ 9 | appName(app_name). 
\ 10 | getOrCreate() 11 | return spark 12 | return 13 | -------------------------------------------------------------------------------- /05_process_data_using_spark_apis/README.md: -------------------------------------------------------------------------------- 1 | # Process data using Spark APIs 2 | 3 | We will eventually partition the data by year, month and day of month while writing to target directory. However, to partition the data we need to add new columns. 4 | * Create a Python program by name **process.py**. We will create a function by name **df_transform**. It partitions the Dataframe using specified field. 5 | 6 | ```python 7 | from pyspark.sql.functions import year, \ 8 | month, dayofmonth 9 | 10 | 11 | def transform(df): 12 | return df.withColumn('year', year('created_at')). \ 13 | withColumn('month', month('created_at')). \ 14 | withColumn('dayofmonth', dayofmonth('created_at')) 15 | ``` 16 | 17 | * Call the program from **app.py**. For now review schema and data. 18 | 19 | ```python 20 | import os 21 | from util import get_spark_session 22 | from read import from_files 23 | from process import transform 24 | 25 | 26 | def main(): 27 | env = os.environ.get('ENVIRON') 28 | src_dir = os.environ.get('SRC_DIR') 29 | file_pattern = f"{os.environ.get('SRC_FILE_PATTERN')}-*" 30 | src_file_format = os.environ.get('SRC_FILE_FORMAT') 31 | spark = get_spark_session(env, 'GitHub Activity - Partitioning Data') 32 | df = from_files(spark, src_dir, file_pattern, src_file_format) 33 | df_transformed = transform(df) 34 | df_transformed.printSchema() 35 | df_transformed.select('repo.*', 'year', 'month', 'dayofmonth').show() 36 | 37 | 38 | if __name__ == '__main__': 39 | main() 40 | ``` 41 | * Run the program to confirm that the changes are working as expected. 42 | 43 | ``` 44 | export ENVIRON=DEV 45 | export SRC_DIR=s3://aigithub/landing/ghactivity 46 | export SRC_FILE_PATTERN=2021-01-15 47 | export SRC_FILE_FORMAT=json 48 | 49 | spark-submit \ 50 | --master local \ 51 | app.py 52 | ``` 53 | -------------------------------------------------------------------------------- /05_process_data_using_spark_apis/app.py: -------------------------------------------------------------------------------- 1 | import os 2 | from util import get_spark_session 3 | from read import from_files 4 | from process import transform 5 | 6 | 7 | def main(): 8 | env = os.environ.get('ENVIRON') 9 | src_dir = os.environ.get('SRC_DIR') 10 | file_pattern = f"{os.environ.get('SRC_FILE_PATTERN')}-*" 11 | src_file_format = os.environ.get('SRC_FILE_FORMAT') 12 | spark = get_spark_session(env, 'GitHub Activity - Partitioning Data') 13 | df = from_files(spark, src_dir, file_pattern, src_file_format) 14 | df_transformed = transform(df) 15 | df_transformed.printSchema() 16 | df_transformed.select('repo.*', 'year', 'month', 'dayofmonth').show() 17 | 18 | 19 | if __name__ == '__main__': 20 | main() 21 | -------------------------------------------------------------------------------- /05_process_data_using_spark_apis/process.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql.functions import year, \ 2 | month, dayofmonth 3 | 4 | 5 | def transform(df): 6 | return df.withColumn('year', year('created_at')). \ 7 | withColumn('month', month('created_at')). 
\ 8 | withColumn('dayofmonth', dayofmonth('created_at')) 9 | -------------------------------------------------------------------------------- /05_process_data_using_spark_apis/read.py: -------------------------------------------------------------------------------- 1 | def from_files(spark, src_dir, file_pattern, file_format): 2 | df = spark. \ 3 | read. \ 4 | format(file_format). \ 5 | load(f'{src_dir}/{file_pattern}') 6 | return df 7 | -------------------------------------------------------------------------------- /05_process_data_using_spark_apis/util.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql import SparkSession 2 | 3 | 4 | def get_spark_session(env, app_name): 5 | if env == 'DEV': 6 | spark = SparkSession. \ 7 | builder. \ 8 | master('local'). \ 9 | appName(app_name). \ 10 | getOrCreate() 11 | return spark 12 | return 13 | -------------------------------------------------------------------------------- /06_write_data_to_files/README.md: -------------------------------------------------------------------------------- 1 | # Write data to files 2 | 3 | Let us develop the code to write Spark Dataframe to the files using Spark Dataframe APIs. 4 | * Create a Python program by name **write.py**. We will create a function by name **to_files**. It writes the data from Dataframe into files. 5 | 6 | ```python 7 | def to_files(df, tgt_dir, file_format): 8 | df.coalesce(16). \ 9 | write. \ 10 | partitionBy('year', 'month', 'dayofmonth'). \ 11 | mode('append'). \ 12 | format(file_format). \ 13 | save(tgt_dir) 14 | ``` 15 | 16 | * Call the program from **app.py** to write Dataframe to files. 17 | 18 | ```python 19 | import os 20 | from util import get_spark_session 21 | from read import from_files 22 | from write import write 23 | from process import transform 24 | 25 | 26 | def main(): 27 | env = os.environ.get('ENVIRON') 28 | src_dir = os.environ.get('SRC_DIR') 29 | file_pattern = f"{os.environ.get('SRC_FILE_PATTERN')}-*" 30 | src_file_format = os.environ.get('SRC_FILE_FORMAT') 31 | tgt_dir = os.environ.get('TGT_DIR') 32 | tgt_file_format = os.environ.get('TGT_FILE_FORMAT') 33 | spark = get_spark_session(env, 'GitHub Activity - Reading and Writing Data') 34 | df = from_files(spark, src_dir, file_pattern, src_file_format) 35 | df_transformed = transform(df) 36 | write(df_transformed, tgt_dir, tgt_file_format) 37 | 38 | 39 | if __name__ == '__main__': 40 | main() 41 | ``` 42 | * First cleanup all the files in the target location. 43 | ``` 44 | aws s3 rm s3://aigithub/emrraw/ghactivity \ 45 | --recursive 46 | ``` 47 | * Run the application after adding environment variables. Validate for multiple days. 48 | * 2021-01-13 49 | * 2021-01-14 50 | * 2021-01-15 51 | * You can use below command for the reference. 52 | 53 | ``` 54 | export ENVIRON=DEV 55 | export SRC_DIR=s3://aigithub/landing/ghactivity 56 | export SRC_FILE_PATTERN=2021-01-13 57 | export SRC_FILE_FORMAT=json 58 | export TGT_DIR=s3://aigithub/emrraw/ghactivity 59 | export TGT_FILE_FORMAT=parquet 60 | 61 | spark-submit \ 62 | --master local \ 63 | app.py 64 | ``` 65 | * Check for files in the target location. 66 | 67 | ```shell script 68 | aws s3 ls s3://aigithub/emrraw/ghactivity \ 69 | --recursive 70 | ``` 71 | 72 | * Run below code using pyspark CLI. 73 | 74 | ```python 75 | file_path = 's3://aigithub/raw/ghactivity' 76 | df = spark.read.parquet(file_path) 77 | df.printSchema() 78 | df.show() 79 | df. \ 80 | groupBy('year', 'month', 'dayofmonth'). \ 81 | count(). 
\ 82 | show() 83 | df.count() 84 | ``` -------------------------------------------------------------------------------- /06_write_data_to_files/app.py: -------------------------------------------------------------------------------- 1 | import os 2 | from util import get_spark_session 3 | from read import from_files 4 | from write import to_files 5 | from process import transform 6 | 7 | 8 | def main(): 9 | env = os.environ.get('ENVIRON') 10 | src_dir = os.environ.get('SRC_DIR') 11 | file_pattern = f"{os.environ.get('SRC_FILE_PATTERN')}-*" 12 | src_file_format = os.environ.get('SRC_FILE_FORMAT') 13 | tgt_dir = os.environ.get('TGT_DIR') 14 | tgt_file_format = os.environ.get('TGT_FILE_FORMAT') 15 | spark = get_spark_session(env, 'GitHub Activity - Reading and Writing Data') 16 | df = from_files(spark, src_dir, file_pattern, src_file_format) 17 | df_transformed = transform(df) 18 | to_files(df_transformed, tgt_dir, tgt_file_format) 19 | 20 | 21 | if __name__ == '__main__': 22 | main() 23 | -------------------------------------------------------------------------------- /06_write_data_to_files/process.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql.functions import year, \ 2 | month, dayofmonth 3 | 4 | 5 | def transform(df): 6 | return df.withColumn('year', year('created_at')). \ 7 | withColumn('month', month('created_at')). \ 8 | withColumn('dayofmonth', dayofmonth('created_at')) 9 | -------------------------------------------------------------------------------- /06_write_data_to_files/read.py: -------------------------------------------------------------------------------- 1 | def from_files(spark, src_dir, file_pattern, file_format): 2 | df = spark. \ 3 | read. \ 4 | format(file_format). \ 5 | load(f'{src_dir}/{file_pattern}') 6 | return df 7 | -------------------------------------------------------------------------------- /06_write_data_to_files/util.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql import SparkSession 2 | 3 | 4 | def get_spark_session(env, app_name): 5 | if env == 'DEV': 6 | spark = SparkSession. \ 7 | builder. \ 8 | master('local'). \ 9 | appName(app_name). \ 10 | getOrCreate() 11 | return spark 12 | return 13 | -------------------------------------------------------------------------------- /06_write_data_to_files/write.py: -------------------------------------------------------------------------------- 1 | def to_files(df, tgt_dir, file_format): 2 | df.coalesce(16). \ 3 | write. \ 4 | partitionBy('year', 'month', 'dayofmonth'). \ 5 | mode('append'). \ 6 | format(file_format). \ 7 | save(tgt_dir) 8 | -------------------------------------------------------------------------------- /07_productionize_code/README.md: -------------------------------------------------------------------------------- 1 | # Productionize Code 2 | 3 | Let us make necessary changes to the code so that we can run on a multinode cluster. 4 | * Update **util.py** to use multi node cluster. 5 | 6 | ```python 7 | from pyspark.sql import SparkSession 8 | 9 | 10 | def get_spark_session(env, app_name): 11 | if env == 'DEV': 12 | spark = SparkSession. \ 13 | builder. \ 14 | master('local'). \ 15 | appName(app_name). \ 16 | getOrCreate() 17 | return spark 18 | elif env == 'PROD': 19 | spark = SparkSession. \ 20 | builder. \ 21 | master('yarn'). \ 22 | appName(app_name). \ 23 | getOrCreate() 24 | return spark 25 | return 26 | ``` 27 | 28 | * Here are the commands to download the files. 
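GH Archive publishes 24 hourly files per day, which is why the commands below use the `{0..23}` brace expansion (and why the Step Function pipeline in a later module checks for 24 objects per date before processing). A small Python sketch of the same URL pattern, shown only to make the naming explicit (hypothetical helper, not part of the course code):

```python
# Build the list of hourly GH Archive URLs for one date (hypothetical helper).
def ghactivity_urls(date: str) -> list:
    return [f'https://data.gharchive.org/{date}-{hour}.json.gz' for hour in range(24)]

print(ghactivity_urls('2021-01-13')[:2])
```

The wget-based commands used in this section: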
29 | 30 | ```shell script 31 | rm -rf data/itv-github/landing/ghactivity 32 | mkdir -p data/itv-github/landing/ghactivity 33 | cd data/itv-github/landing/ghactivity 34 | 35 | wget https://data.gharchive.org/2021-01-13-{0..23}.json.gz 36 | wget https://data.gharchive.org/2021-01-14-{0..23}.json.gz 37 | wget https://data.gharchive.org/2021-01-15-{0..23}.json.gz 38 | ``` 39 | 40 | * Clean up the s3 landing and target folders, then copy the downloaded files to the s3 landing folder. 41 | 42 | ```shell script 43 | aws s3 rm s3://aigithub/landing/ghactivity \ 44 | --recursive 45 | aws s3 rm s3://aigithub/emrraw/ghactivity \ 46 | --recursive 47 | aws s3 cp ~/mastering-emr/data/itv-github/landing/ghactivity \ 48 | s3://aigithub/landing/ghactivity \ 49 | --recursive 50 | 51 | # Validate the files in s3 52 | aws s3 ls s3://aigithub/landing/ghactivity \ 53 | --recursive|grep json.gz|wc -l 54 | ``` 55 | 56 | * Run the application after adding environment variables. Validate for multiple days. 57 | * 2021-01-13 58 | * 2021-01-14 59 | * 2021-01-15 60 | * Here are the export statements to set the environment variables. 61 | 62 | ```shell script 63 | export ENVIRON=PROD 64 | export SRC_DIR=s3://aigithub/landing/ghactivity 65 | export SRC_FILE_FORMAT=json 66 | export TGT_DIR=s3://aigithub/emrraw/ghactivity 67 | export TGT_FILE_FORMAT=parquet 68 | 69 | export PYSPARK_PYTHON=python3 70 | ``` 71 | 72 | * Here are the spark-submit commands to run the application for 3 dates. 73 | 74 | ```shell script 75 | export SRC_FILE_PATTERN=2021-01-13 76 | 77 | spark-submit --master yarn \ 78 | app.py 79 | 80 | export SRC_FILE_PATTERN=2021-01-14 81 | 82 | spark-submit --master yarn \ 83 | app.py 84 | 85 | export SRC_FILE_PATTERN=2021-01-15 86 | 87 | spark-submit --master yarn \ 88 | app.py 89 | ``` 90 | * Check for files in the target location. 91 | 92 | ```shell script 93 | aws s3 ls s3://aigithub/emrraw/ghactivity \ 94 | --recursive 95 | ``` 96 | 97 | * We can use `pyspark --master yarn` to launch Pyspark and run the below code to validate.
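The validation below compares source counts per `created_at` date against target counts per (`year`, `month`, `dayofmonth`) partition; the numbers should line up day by day. If you prefer an automated check over eyeballing the two outputs, a sketch along these lines (same s3 paths as below, assuming the `spark` session from the Pyspark shell and the same set of dates on both sides) can be used:

```python
from pyspark.sql.functions import to_date, col

# Per-day counts from the JSON source and the parquet target (sketch).
src_counts = spark.read.json('s3://aigithub/landing/ghactivity'). \
    groupBy(to_date('created_at').alias('dt')).count(). \
    withColumnRenamed('count', 'src_count')
tgt_counts = spark.read.parquet('s3://aigithub/emrraw/ghactivity'). \
    groupBy(to_date('created_at').alias('dt')).count(). \
    withColumnRenamed('count', 'tgt_count')

# Any mismatching day indicates missing or duplicated data.
assert src_counts.join(tgt_counts, 'dt').filter(col('src_count') != col('tgt_count')).count() == 0
```

The step-by-step validation used in the course: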
98 | 99 | ```python 100 | src_file_path = 's3://aigithub/landing/ghactivity' 101 | src_df = spark.read.json(src_file_path) 102 | src_df.printSchema() 103 | src_df.show() 104 | src_df.count() 105 | from pyspark.sql.functions import to_date 106 | src_df.groupBy(to_date('created_at').alias('created_at')).count().show() 107 | 108 | tgt_file_path = f's3://aigithub/emrraw/ghactivity' 109 | tgt_df = spark.read.parquet(tgt_file_path) 110 | tgt_df.printSchema() 111 | tgt_df.show() 112 | tgt_df.count() 113 | tgt_df.groupBy('year', 'month', 'dayofmonth').count().show() 114 | ``` 115 | 116 | -------------------------------------------------------------------------------- /07_productionize_code/app.py: -------------------------------------------------------------------------------- 1 | import os 2 | from util import get_spark_session 3 | from read import from_files 4 | from write import to_files 5 | from process import transform 6 | 7 | 8 | def main(): 9 | env = os.environ.get('ENVIRON') 10 | src_dir = os.environ.get('SRC_DIR') 11 | file_pattern = f"{os.environ.get('SRC_FILE_PATTERN')}-*" 12 | src_file_format = os.environ.get('SRC_FILE_FORMAT') 13 | tgt_dir = os.environ.get('TGT_DIR') 14 | tgt_file_format = os.environ.get('TGT_FILE_FORMAT') 15 | spark = get_spark_session(env, 'GitHub Activity - Reading and Writing Data') 16 | df = from_files(spark, src_dir, file_pattern, src_file_format) 17 | df_transformed = transform(df) 18 | to_files(df_transformed, tgt_dir, tgt_file_format) 19 | 20 | 21 | if __name__ == '__main__': 22 | main() 23 | -------------------------------------------------------------------------------- /07_productionize_code/process.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql.functions import year, \ 2 | month, dayofmonth 3 | 4 | 5 | def transform(df): 6 | return df.withColumn('year', year('created_at')). \ 7 | withColumn('month', month('created_at')). \ 8 | withColumn('day', dayofmonth('created_at')) 9 | -------------------------------------------------------------------------------- /07_productionize_code/read.py: -------------------------------------------------------------------------------- 1 | def from_files(spark, src_dir, file_pattern, file_format): 2 | df = spark. \ 3 | read. \ 4 | format(file_format). \ 5 | load(f'{src_dir}/{file_pattern}') 6 | return df 7 | -------------------------------------------------------------------------------- /07_productionize_code/util.py: -------------------------------------------------------------------------------- 1 | from pyspark.sql import SparkSession 2 | 3 | 4 | def get_spark_session(env, app_name): 5 | if env == 'DEV': 6 | spark = SparkSession. \ 7 | builder. \ 8 | master('local'). \ 9 | appName(app_name). \ 10 | getOrCreate() 11 | return spark 12 | elif env == 'PROD': 13 | spark = SparkSession. \ 14 | builder. \ 15 | master('yarn'). \ 16 | appName(app_name). \ 17 | getOrCreate() 18 | return spark 19 | return 20 | -------------------------------------------------------------------------------- /07_productionize_code/write.py: -------------------------------------------------------------------------------- 1 | def to_files(df, tgt_dir, file_format): 2 | df.coalesce(16). \ 3 | write. \ 4 | partitionBy('year', 'month', 'dayofmonth'). \ 5 | mode('append'). \ 6 | format(file_format). 
\ 7 | save(tgt_dir) 8 | -------------------------------------------------------------------------------- /08_run_using_cluster_mode/README.md: -------------------------------------------------------------------------------- 1 | # Run using Cluster Mode 2 | 3 | Let us go ahead and build a zip file so that we can run the application in cluster mode. 4 | * Here are the commands to build the zip file. We need to run these commands from the **itv-ghactivity** folder in the workspace. 5 | ``` 6 | rm -f itv-ghactivity.zip 7 | zip -r itv-ghactivity.zip *.py 8 | ``` 9 | * Clean up the s3 target folder and validate the files in the s3 landing folder. 10 | 11 | ```shell script 12 | aws s3 rm s3://aigithub/emrraw/ghactivity \ 13 | --recursive 14 | 15 | # Validate the files in s3 16 | aws s3 ls s3://aigithub/landing/ghactivity \ 17 | --recursive|grep json.gz|wc -l 18 | ``` 19 | 20 | * Run the application using Cluster Mode. Here, the zip file and the driver program file are in the local file system on the Master Node of the cluster. 21 | ``` 22 | spark-submit \ 23 | --master yarn \ 24 | --deploy-mode cluster \ 25 | --conf "spark.yarn.appMasterEnv.ENVIRON=PROD" \ 26 | --conf "spark.yarn.appMasterEnv.SRC_DIR=s3://aigithub/landing/ghactivity" \ 27 | --conf "spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json" \ 28 | --conf "spark.yarn.appMasterEnv.TGT_DIR=s3://aigithub/emrraw/ghactivity" \ 29 | --conf "spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet" \ 30 | --conf "spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2021-01-13" \ 31 | --py-files itv-ghactivity.zip \ 32 | app.py 33 | ``` 34 | * Here are the instructions to run the application by using the zip file and the driver program from s3. 35 | ``` 36 | aws s3 cp itv-ghactivity.zip s3://aigithub/app/itv-ghactivity.zip 37 | aws s3 cp app.py s3://aigithub/app/app.py 38 | 39 | spark-submit \ 40 | --master yarn \ 41 | --deploy-mode cluster \ 42 | --conf "spark.yarn.appMasterEnv.ENVIRON=PROD" \ 43 | --conf "spark.yarn.appMasterEnv.SRC_DIR=s3://aigithub/landing/ghactivity" \ 44 | --conf "spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json" \ 45 | --conf "spark.yarn.appMasterEnv.TGT_DIR=s3://aigithub/emrraw/ghactivity" \ 46 | --conf "spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet" \ 47 | --conf "spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2021-01-14" \ 48 | --py-files s3://aigithub/app/itv-ghactivity.zip \ 49 | s3://aigithub/app/app.py 50 | ``` 51 | * We can also run the application as a step on an existing EMR Cluster. We need to ensure that both the zip file and the driver program file are in s3. You can use the previous command as a reference. The command should be on a single line, leaving out `spark-submit`, `--master`, and `--deploy-mode`. 52 | * Check for files in the target location. 53 | 54 | ```shell script 55 | aws s3 ls s3://aigithub/emrraw/ghactivity \ 56 | --recursive 57 | ``` 58 | 59 | * We can use `pyspark --master yarn` to launch Pyspark and run the below code to validate.
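When the application is submitted as an EMR step, you can also confirm that the step itself completed before validating the data. A sketch using boto3 from your workstation (replace the cluster id; assumes AWS credentials are configured):

```python
import boto3

emr_client = boto3.client('emr')

# List the most recent steps and their states for the cluster (replace the id).
for step in emr_client.list_steps(ClusterId='j-XXXXXXXXXXXXX')['Steps']:
    print(f"{step['Name']}: {step['Status']['State']}")
```

Once the step shows `COMPLETED`, validate the output from the Pyspark shell: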
60 | 61 | ```python 62 | src_file_path = 's3://aigithub/landing/ghactivity' 63 | src_df = spark.read.json(src_file_path) 64 | src_df.printSchema() 65 | src_df.show() 66 | src_df.count() 67 | from pyspark.sql.functions import to_date 68 | src_df.groupBy(to_date('created_at').alias('created_at')).count().show() 69 | 70 | tgt_file_path = f's3://aigithub/emrraw/ghactivity' 71 | tgt_df = spark.read.parquet(tgt_file_path) 72 | tgt_df.printSchema() 73 | tgt_df.show() 74 | tgt_df.count() 75 | tgt_df.groupBy('year', 'month', 'dayofmonth').count().show() 76 | ``` 77 | 78 | -------------------------------------------------------------------------------- /08_run_using_cluster_mode/emr.sh: -------------------------------------------------------------------------------- 1 | spark-submit \ 2 | --master yarn \ 3 | --deploy-mode cluster \ 4 | --conf "spark.yarn.appMasterEnv.ENVIRON=PROD" \ 5 | --conf "spark.yarn.appMasterEnv.SRC_DIR=/user/hadoop/itv-github/landing/ghactivity" \ 6 | --conf "spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json" \ 7 | --conf "spark.yarn.appMasterEnv.TGT_DIR=/user/hadoop/itv-github/raw/ghactivity" \ 8 | --conf "spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet" \ 9 | --conf "spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2021-01-15" \ 10 | --py-files s3://itv-github/itv-ghactivity.zip \ 11 | s3://itv-github/app.py 12 | 13 | -------------------------------------------------------------------------------- /08_run_using_cluster_mode/emr_step.sh: -------------------------------------------------------------------------------- 1 | --master yarn --deploy-mode cluster --conf "spark.yarn.appMasterEnv.ENVIRON=PROD" --conf "spark.yarn.appMasterEnv.SRC_DIR=s3://itv-github/landing/ghactivity" --conf "spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json" --conf "spark.yarn.appMasterEnv.TGT_DIR=s3://itv-github/raw/ghactivity" --conf "spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet" --conf "spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2021-01-15" --py-files s3://itv-github/itv-ghactivity.zip 2 | 3 | s3://itv-github/app.py 4 | 5 | -------------------------------------------------------------------------------- /08_run_using_cluster_mode/emr_step_ai.sh: -------------------------------------------------------------------------------- 1 | --master yarn --deploy-mode cluster --conf "spark.yarn.appMasterEnv.ENVIRON=PROD" --conf "spark.yarn.appMasterEnv.SRC_DIR=s3://aigithub/landing/ghactivity" --conf "spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json" --conf "spark.yarn.appMasterEnv.TGT_DIR=s3://aigithub/emrraw/ghactivity" --conf "spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet" --conf "spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2022-06-15" --py-files s3://aigithub/app/itv-ghactivity.zip 2 | 3 | s3://aigithub/app/app.py 4 | -------------------------------------------------------------------------------- /08_run_using_cluster_mode/exports.sh: -------------------------------------------------------------------------------- 1 | export ENVIRON=PROD 2 | export SRC_DIR=/user/${USER}/itv-github/landing/ghactivity 3 | export SRC_FILE_FORMAT=json 4 | export TGT_DIR=/user/${USER}/itv-github/raw/ghactivity 5 | export TGT_FILE_FORMAT=parquet 6 | export SRC_FILE_PATTERN=2021-01-15 7 | -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/06 Overview of AWS boto3 to Manage AWS EMR Clusters.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | 
"outputs": [], 8 | "source": [ 9 | "import boto3" 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": null, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "emr_client = boto3.client('emr')" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "type(emr_client)" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "emr_client.list_clusters?" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "metadata": {}, 43 | "outputs": [], 44 | "source": [ 45 | "emr_client.list_clusters()" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": null, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "emr_client.list_clusters()['Clusters']" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": null, 60 | "metadata": {}, 61 | "outputs": [], 62 | "source": [ 63 | "len(emr_client.list_clusters()['Clusters'])" 64 | ] 65 | }, 66 | { 67 | "cell_type": "code", 68 | "execution_count": null, 69 | "metadata": {}, 70 | "outputs": [], 71 | "source": [ 72 | "for cluster in emr_client.list_clusters()['Clusters']:\n", 73 | " print(f\"{cluster['Id']}:{cluster['Status']['State']}\") " 74 | ] 75 | }, 76 | { 77 | "cell_type": "code", 78 | "execution_count": null, 79 | "metadata": {}, 80 | "outputs": [], 81 | "source": [ 82 | "emr_client.list_clusters(ClusterStates=['WAITING', 'RUNNING', 'STARTING'])" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [ 91 | "# Make sure to replace the cluster id\n", 92 | "emr_client.describe_cluster(ClusterId='j-128Z411TY6VKK')" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "emr_client.terminate_job_flows?" 
102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": null, 107 | "metadata": {}, 108 | "outputs": [], 109 | "source": [ 110 | "emr_client.terminate_job_flows(JobFlowIds=['j-128Z411TY6VKK'])" 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 118 | "source": [] 119 | } 120 | ], 121 | "metadata": { 122 | "kernelspec": { 123 | "display_name": "Python 3.9.12 ('me-venv': venv)", 124 | "language": "python", 125 | "name": "python3" 126 | }, 127 | "language_info": { 128 | "codemirror_mode": { 129 | "name": "ipython", 130 | "version": 3 131 | }, 132 | "file_extension": ".py", 133 | "mimetype": "text/x-python", 134 | "name": "python", 135 | "nbconvert_exporter": "python", 136 | "pygments_lexer": "ipython3", 137 | "version": "3.9.12" 138 | }, 139 | "orig_nbformat": 4, 140 | "vscode": { 141 | "interpreter": { 142 | "hash": "486bfb5c5bc2d80b6bdfdd884fe8070e0b9ef1b20f462b79720291b7c17c500c" 143 | } 144 | } 145 | }, 146 | "nbformat": 4, 147 | "nbformat_minor": 2 148 | } 149 | -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/07 Create AWS EMR Cluster using AWS boto3.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import boto3" 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": null, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "emr_client = boto3.client('emr')" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "emr_client.run_job_flow?" 
28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "emr_client.run_job_flow(\n", 37 | " Name='AI Cluster',\n", 38 | " LogUri='s3n://aws-logs-269066542444-us-east-1/elasticmapreduce/',\n", 39 | " ReleaseLabel='emr-6.6.0',\n", 40 | " Instances={\n", 41 | " 'KeepJobFlowAliveWhenNoSteps': True,\n", 42 | " 'InstanceGroups': [\n", 43 | " {\n", 44 | " \"InstanceCount\": 1,\n", 45 | " \"EbsConfiguration\": {\n", 46 | " \"EbsBlockDeviceConfigs\": [\n", 47 | " {\n", 48 | " \"VolumeSpecification\": {\n", 49 | " \"SizeInGB\": 32,\n", 50 | " \"VolumeType\": \"gp2\"\n", 51 | " },\n", 52 | " \"VolumesPerInstance\": 2\n", 53 | " }\n", 54 | " ]\n", 55 | " },\n", 56 | " \"InstanceRole\": \"MASTER\",\n", 57 | " \"InstanceType\": \"m5.xlarge\",\n", 58 | " \"Name\": \"Master - 1\"\n", 59 | " },\n", 60 | " {\n", 61 | " \"InstanceCount\": 2,\n", 62 | " \"EbsConfiguration\": {\n", 63 | " \"EbsBlockDeviceConfigs\": [\n", 64 | " {\n", 65 | " \"VolumeSpecification\": {\n", 66 | " \"SizeInGB\": 32,\n", 67 | " \"VolumeType\": \"gp2\"\n", 68 | " },\n", 69 | " \"VolumesPerInstance\": 2\n", 70 | " }\n", 71 | " ]\n", 72 | " },\n", 73 | " \"InstanceRole\": \"CORE\",\n", 74 | " \"InstanceType\": \"m5.xlarge\",\n", 75 | " \"Name\": \"Core - 2\"\n", 76 | " }\n", 77 | " ]\n", 78 | " },\n", 79 | " Applications=[\n", 80 | " {\n", 81 | " 'Name': 'Hadoop'\n", 82 | " },\n", 83 | " {\n", 84 | " 'Name': 'Spark'\n", 85 | " }\n", 86 | " ],\n", 87 | " Configurations=[\n", 88 | " {\n", 89 | " \"Classification\": \"spark-hive-site\",\n", 90 | " \"Properties\": {\n", 91 | " \"hive.metastore.client.factory.class\": \"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory\"\n", 92 | " }\n", 93 | " }\n", 94 | " ],\n", 95 | " ServiceRole='EMR_DefaultRole',\n", 96 | " JobFlowRole='EMR_EC2_DefaultRole',\n", 97 | " AutoScalingRole='EMR_AutoScaling_DefaultRole',\n", 98 | " ScaleDownBehavior='TERMINATE_AT_TASK_COMPLETION',\n", 99 | " EbsRootVolumeSize=10,\n", 100 | " AutoTerminationPolicy={\n", 101 | " \"IdleTimeout\": 3600\n", 102 | " }\n", 103 | ")\n" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": null, 109 | "metadata": {}, 110 | "outputs": [], 111 | "source": [] 112 | } 113 | ], 114 | "metadata": { 115 | "kernelspec": { 116 | "display_name": "Python 3.9.12 ('me-venv': venv)", 117 | "language": "python", 118 | "name": "python3" 119 | }, 120 | "language_info": { 121 | "codemirror_mode": { 122 | "name": "ipython", 123 | "version": 3 124 | }, 125 | "file_extension": ".py", 126 | "mimetype": "text/x-python", 127 | "name": "python", 128 | "nbconvert_exporter": "python", 129 | "pygments_lexer": "ipython3", 130 | "version": "3.9.12" 131 | }, 132 | "orig_nbformat": 4, 133 | "vscode": { 134 | "interpreter": { 135 | "hash": "486bfb5c5bc2d80b6bdfdd884fe8070e0b9ef1b20f462b79720291b7c17c500c" 136 | } 137 | } 138 | }, 139 | "nbformat": 4, 140 | "nbformat_minor": 2 141 | } 142 | -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/09 Add Spark Application as Step to AWS EMR Cluster using Boto3.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import boto3" 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": null, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 
18 | "emr_client = boto3.client('emr')" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "emr_client.add_job_flow_steps?" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "emr_client.list_clusters(ClusterStates=['RUNNING', 'WAITING'])" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "metadata": {}, 43 | "outputs": [], 44 | "source": [ 45 | "emr_client.add_job_flow_steps(\n", 46 | " JobFlowId='j-3J9Z0ZS4D7JU7',\n", 47 | " Steps=[\n", 48 | " {\n", 49 | " 'Name': 'GHActivity Processor',\n", 50 | " 'HadoopJarStep': {\n", 51 | " 'Jar': 'command-runner.jar',\n", 52 | " 'Args': ['spark-submit',\n", 53 | " '--deploy-mode',\n", 54 | " 'cluster',\n", 55 | " '--conf',\n", 56 | " 'spark.yarn.appMasterEnv.ENVIRON=PROD',\n", 57 | " '--conf',\n", 58 | " 'spark.yarn.appMasterEnv.SRC_DIR=s3://aigithub/landing/ghactivity',\n", 59 | " '--conf',\n", 60 | " 'spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json',\n", 61 | " '--conf',\n", 62 | " 'spark.yarn.appMasterEnv.TGT_DIR=s3://aigithub/emrraw/ghactivity',\n", 63 | " '--conf',\n", 64 | " 'spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet',\n", 65 | " '--conf',\n", 66 | " 'spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2021-01-13',\n", 67 | " '--py-files',\n", 68 | " 's3://aigithub/app/itv-ghactivity.zip',\n", 69 | " 's3://aigithub/app/app.py']\n", 70 | " }\n", 71 | " }\n", 72 | " ]\n", 73 | ")\n" 74 | ] 75 | }, 76 | { 77 | "cell_type": "code", 78 | "execution_count": null, 79 | "metadata": {}, 80 | "outputs": [], 81 | "source": [] 82 | } 83 | ], 84 | "metadata": { 85 | "kernelspec": { 86 | "display_name": "Python 3.9.12 ('me-venv': venv)", 87 | "language": "python", 88 | "name": "python3" 89 | }, 90 | "language_info": { 91 | "codemirror_mode": { 92 | "name": "ipython", 93 | "version": 3 94 | }, 95 | "file_extension": ".py", 96 | "mimetype": "text/x-python", 97 | "name": "python", 98 | "nbconvert_exporter": "python", 99 | "pygments_lexer": "ipython3", 100 | "version": "3.9.12" 101 | }, 102 | "orig_nbformat": 4, 103 | "vscode": { 104 | "interpreter": { 105 | "hash": "486bfb5c5bc2d80b6bdfdd884fe8070e0b9ef1b20f462b79720291b7c17c500c" 106 | } 107 | } 108 | }, 109 | "nbformat": 4, 110 | "nbformat_minor": 2 111 | } 112 | -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/Manage EMR using AWS CLI.md: -------------------------------------------------------------------------------- 1 | # Manage AWS EMR Cluster using AWS CLI 2 | 3 | Here are the steps we are going to follow to understand how to manage EMR using AWS CLI. It includes how to deploy Spark Application as a step to running EMR Cluster. 4 | * Create AWS EMR Cluster using AWS Web Console 5 | * Export AWS CLI and create a script with the command 6 | * Login into EMR Cluster and upload files to s3 7 | * Confirm that the application zip file as well as driver program file are in s3 8 | * Generate JSON File to deploy Spark Application as AWS EMR Step 9 | * Run AWS CLI Command to add Spark Applciation as AWS EMR Step to running cluster 10 | * Validate if AWS EMR Step is executed successfully or not. 
11 | * Make sure to terminate the AWS EMR Cluster -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/add_step.sh: -------------------------------------------------------------------------------- 1 | aws emr add-steps \ 2 | --cluster-id j-2T15V046M0KDO \ 3 | --steps file://./emr-step.json -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/aws_cli_commands_to_manage_emr.sh: -------------------------------------------------------------------------------- 1 | aws emr create-cluster \ 2 | --os-release-label 2.0.20220426.0 \ 3 | --termination-protected \ 4 | --applications Name=Hadoop Name=Hive Name=Spark \ 5 | --ec2-attributes '{"InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-0452ade8510f43c16","EmrManagedSlaveSecurityGroup":"sg-0846a8d248c6654f5","EmrManagedMasterSecurityGroup":"sg-0ce008d667fae0c7f"}' \ 6 | --release-label emr-6.6.0 \ 7 | --log-uri 's3n://aws-logs-269066542444-us-east-1/elasticmapreduce/' \ 8 | --instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"MASTER","InstanceType":"m5.xlarge","Name":"Master - 1"},{"InstanceCount":2,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"CORE","InstanceType":"m5.xlarge","Name":"Core - 2"}]' \ 9 | --configurations '[{"Classification":"hive-site","Properties":{"hive.metastore.client.factory.class":"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"}},{"Classification":"spark-hive-site","Properties":{"hive.metastore.client.factory.class":"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"}}]' \ 10 | --auto-scaling-role EMR_AutoScaling_DefaultRole \ 11 | --ebs-root-volume-size 32 \ 12 | --service-role EMR_DefaultRole \ 13 | --enable-debugging \ 14 | --auto-termination-policy '{"IdleTimeout":3600}' \ 15 | --name 'AI Cluster' \ 16 | --scale-down-behavior TERMINATE_AT_TASK_COMPLETION \ 17 | --region us-east-1 18 | 19 | aws emr create-cluster \ 20 | --applications Name=Hadoop Name=Spark \ 21 | --ec2-attributes '{"InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-0452ade8510f43c16","EmrManagedSlaveSecurityGroup":"sg-0846a8d248c6654f5","EmrManagedMasterSecurityGroup":"sg-0ce008d667fae0c7f"}' \ 22 | --release-label emr-6.6.0 \ 23 | --log-uri 's3n://aws-logs-269066542444-us-east-1/elasticmapreduce/' \ 24 | --instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"MASTER","InstanceType":"m5.xlarge","Name":"Master - 1"},{"InstanceCount":2,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"CORE","InstanceType":"m5.xlarge","Name":"Core - 2"}]' \ 25 | --configurations '[{"Classification":"spark-hive-site","Properties":{"hive.metastore.client.factory.class":"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"}}]' \ 26 | --auto-scaling-role EMR_AutoScaling_DefaultRole \ 27 | --ebs-root-volume-size 10 \ 28 | --service-role EMR_DefaultRole \ 29 | --enable-debugging \ 30 | --auto-termination-policy '{"IdleTimeout":3600}' \ 31 | --name 'AI Cluster' \ 32 | --scale-down-behavior TERMINATE_AT_TASK_COMPLETION 
\ 33 | --region us-east-1 34 | 35 | aws emr list-clusters 36 | aws emr list-clusters --active # Get cluster id 37 | 38 | aws emr list-instances --cluster-id # Replace with relevant cluster id 39 | aws emr describe-cluster 40 | 41 | aws emr terminate-clusters --cluster-ids -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/emr-step.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "Name":"Spark application", 4 | "Type": "SPARK", 5 | "Args":[ 6 | "--deploy-mode", 7 | "cluster", 8 | "--conf", 9 | "spark.yarn.appMasterEnv.ENVIRON=PROD", 10 | "--conf", 11 | "spark.yarn.appMasterEnv.SRC_DIR=s3://aigithub/landing/ghactivity", 12 | "--conf", 13 | "spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json", 14 | "--conf", 15 | "spark.yarn.appMasterEnv.TGT_DIR=s3://aigithub/emrraw/ghactivity", 16 | "--conf", 17 | "spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet", 18 | "--conf", 19 | "spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2022-06-19", 20 | "--py-files", 21 | "s3://aigithub/app/itv-ghactivity.zip", 22 | "s3://aigithub/app/app.py" 23 | ] 24 | } 25 | ] 26 | -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/emr_create_cluster.sh: -------------------------------------------------------------------------------- 1 | aws emr create-cluster \ 2 | --applications Name=Hadoop Name=Hive Name=Hue Name=Spark \ 3 | --ec2-attributes '{"KeyName":"aiawsdg","InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-0452ade8510f43c16","EmrManagedSlaveSecurityGroup":"sg-0846a8d248c6654f5","EmrManagedMasterSecurityGroup":"sg-0ce008d667fae0c7f"}' \ 4 | --release-label emr-6.6.0 \ 5 | --log-uri 's3n://aws-logs-269066542444-us-east-1/elasticmapreduce/' \ 6 | --instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"CORE","InstanceType":"m5.xlarge","Name":"Core - 2"},{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"MASTER","InstanceType":"m5.xlarge","Name":"Master - 1"}]' \ 7 | --configurations '[{"Classification":"hive-site","Properties":{"hive.metastore.client.factory.class":"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"}},{"Classification":"spark-hive-site","Properties":{"hive.metastore.client.factory.class":"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"}}]' \ 8 | --auto-scaling-role EMR_AutoScaling_DefaultRole \ 9 | --ebs-root-volume-size 30 \ 10 | --service-role EMR_DefaultRole \ 11 | --enable-debugging \ 12 | --auto-termination-policy '{"IdleTimeout":900}' \ 13 | --name 'GitHub Development Cluster' \ 14 | --scale-down-behavior TERMINATE_AT_TASK_COMPLETION \ 15 | --region us-east-1 -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/emr_create_cluster_step.sh: -------------------------------------------------------------------------------- 1 | aws emr create-cluster \ 2 | --applications Name=Hadoop Name=Spark Name=JupyterEnterpriseGateway \ 3 | --ec2-attributes '{"KeyName":"aiawsdg","InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-3340f619","EmrManagedSlaveSecurityGroup":"sg-a5a51ed4","EmrManagedMasterSecurityGroup":"sg-4da61d3c"}' \ 4 | --release-label emr-6.4.0 \ 5 | --log-uri 
's3n://aigithub/elasticmapreduce/' \ 6 | --steps '[{"Args":["spark-submit","--deploy-mode","cluster","--conf","spark.yarn.appMasterEnv.ENVIRON=PROD","\\","--conf","spark.yarn.appMasterEnv.SRC_DIR=s3://itv-github-emr/prod/landing/ghactivity/","\\","--conf","spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json","\\","--conf","spark.yarn.appMasterEnv.TGT_DIR=s3://itv-github-emr/prod/raw/ghactivity/","\\","--conf","spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet","\\","--conf","spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2021-01-14","\\","--py-files","s3://itv-github-emr/app/itv-ghactivity.zip","\\","s3://itv-github-emr/app/app.py"],"Type":"CUSTOM_JAR","ActionOnFailure":"CONTINUE","Jar":"command-runner.jar","Properties":"","Name":"ITV GHActivity"},{"Args":["spark-submit","--deploy-mode","cluster","--conf","spark.yarn.appMasterEnv.ENVIRON=PROD","--conf","spark.yarn.appMasterEnv.SRC_DIR=s3://itv-github-emr/prod/landing/ghactivity/","--conf","spark.yarn.appMasterEnv.SRC_FILE_FORMAT=json","--conf","spark.yarn.appMasterEnv.TGT_DIR=s3://itv-github-emr/prod/raw/ghactivity/","--conf","spark.yarn.appMasterEnv.TGT_FILE_FORMAT=parquet","--conf","spark.yarn.appMasterEnv.SRC_FILE_PATTERN=2021-01-14","--py-files","s3://itv-github-emr/app/itv-ghactivity.zip","s3://itv-github-emr/app/app.py"],"Type":"CUSTOM_JAR","ActionOnFailure":"CONTINUE","Jar":"command-runner.jar","Properties":"","Name":"Spark application"}]' \ 7 | --instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"CORE","InstanceType":"m5.xlarge","Name":"Core - 2"},{"InstanceCount":0,"BidPrice":"OnDemandPrice","EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"TASK","InstanceType":"m5.xlarge","Name":"Task_m5.xlarge_SPOT_By_Managed_Scaling"},{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"MASTER","InstanceType":"m5.xlarge","Name":"Master - 1"}]' \ 8 | --configurations '[{"Classification":"spark-hive-site","Properties":{"hive.metastore.client.factory.class":"com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"}}]' \ 9 | --auto-scaling-role EMR_AutoScaling_DefaultRole \ 10 | --ebs-root-volume-size 10 \ 11 | --service-role EMR_DefaultRole \ 12 | --enable-debugging \ 13 | --auto-termination-policy '{"IdleTimeout":1800}' \ 14 | --managed-scaling-policy '{"ComputeLimits":{"MaximumOnDemandCapacityUnits":1,"UnitType":"Instances","MaximumCapacityUnits":2,"MinimumCapacityUnits":1,"MaximumCoreCapacityUnits":1}}' \ 15 | --name 'GitHub Cluster' \ 16 | --scale-down-behavior TERMINATE_AT_TASK_COMPLETION \ 17 | --region us-east-1 18 | -------------------------------------------------------------------------------- /11_manage_emr_using_aws_boto3/manage_emr.sh: -------------------------------------------------------------------------------- 1 | # Listing active clusters 2 | aws emr list-clusters --active 3 | 4 | # Describe Cluster 5 | aws emr describe-cluster \ 6 | --cluster-id j-100ZC8QX517UU 7 | 8 | # Terminate Cluster 9 | aws emr terminate-cluster \ 10 | --cluster-id j-100ZC8QX517UU -------------------------------------------------------------------------------- /12_emr_pipeline_using_step_functions/Data Pipeline using Step Function with EMR.md: 
-------------------------------------------------------------------------------- 1 | # Data Pipeline using AWS Step Function with EMR 2 | 3 | Here are the details related to the data pipeline that is going to be bult using AWS Step Function with AWS EMR. 4 | * Make sure AWS Lambda Function is scheduled every 15 minutes to ingest GitHub Activity Data to s3. 5 | * Create a lambda function which take bucket, prefix with date as arguments and return number of objects from s3. The function will return dict. The dict contain `ready_to_process` as key and value will be either True or False. If the number of objects is 24 then, it will be True else False. 6 | * Add a task to the step function to invoke lambda function. 7 | * Add conditional to the step function to confirm if data is supposed to be processed or not based on `ready_to_process`. Job should be forced to fail if `ready_to_process` is False. 8 | * If `ready_to_process` is True, then we need to add steps to run Spark application to process the data. 9 | * Create EMR Cluster 10 | * Add Step to EMR Cluster 11 | * Terminate the EMR Cluster 12 | * Finally the step functions should be marked as complete. 13 | -------------------------------------------------------------------------------- /13_overview_of_emr_serverless/Submit Serverless Spark Job.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import boto3" 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": {}, 15 | "source": [ 16 | "* Submitting Spark Application on EMR Serverless.\n", 17 | "```shell\n", 18 | "aws emr-serverless start-job-run \\\n", 19 | " --application-id 00f1uur2ps283a09 \\\n", 20 | " --execution-role-arn arn:aws:iam::269066542444:role/EMRServerlessS3RuntimeRole \\\n", 21 | " --job-driver '{\n", 22 | " \"sparkSubmit\": {\n", 23 | " \"entryPoint\": \"/usr/lib/spark/examples/jars/spark-examples.jar\",\n", 24 | " \"entryPointArguments\": [\"1\"],\n", 25 | " \"sparkSubmitParameters\": \"--class org.apache.spark.examples.SparkPi --conf spark.executor.cores=4 --conf spark.executor.memory=20g --conf spark.driver.cores=4 --conf spark.driver.memory=8g --conf spark.executor.instances=1\"\n", 26 | " }\n", 27 | " }'\n", 28 | "```\n", 29 | "\n", 30 | "* Submitting Spark SQL Script as step on AWS EMR.\n", 31 | "```shell\n", 32 | "aws emr-serverless start-job-run \\\n", 33 | " --application-id 00f1uur2ps283a09 \\\n", 34 | " --execution-role-arn arn:aws:iam::269066542444:role/EMRServerlessS3RuntimeRole \\\n", 35 | " --job-driver '{\n", 36 | " \"sparkSubmit\": {\n", 37 | " \"entryPoint\": \"s3://airetail/jars/command-runn.jar\",\n", 38 | " \"sparkSubmitParameters\": \"spark-sql -d s3.bucket=airetail -f s3://airetail/scripts/compute_order_revenue.sql\"\n", 39 | " }\n", 40 | " }'\n", 41 | "```" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": 3, 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "es_client = boto3.client('emr-serverless')" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": 4, 56 | "metadata": {}, 57 | "outputs": [ 58 | { 59 | "data": { 60 | "text/plain": [ 61 | "{'ResponseMetadata': {'RequestId': '0326ade9-bdb3-4fad-8334-4a4f930e6824',\n", 62 | " 'HTTPStatusCode': 200,\n", 63 | " 'HTTPHeaders': {'date': 'Wed, 20 Jul 2022 01:07:30 GMT',\n", 64 | " 'content-type': 'application/json',\n", 65 | " 'content-length': '306',\n", 66 | " 
'connection': 'keep-alive',\n", 67 | " 'x-amzn-requestid': '0326ade9-bdb3-4fad-8334-4a4f930e6824',\n", 68 | " 'x-amzn-remapped-x-amzn-requestid': 'VipE6Ej7oAMFSCg=',\n", 69 | " 'x-amzn-remapped-content-length': '306',\n", 70 | " 'x-amz-apigw-id': 'VipE6Ej7oAMFSCg=',\n", 71 | " 'x-amzn-trace-id': 'Root=1-62d75552-77b2ebe81de50b845f1f3adb',\n", 72 | " 'x-amzn-remapped-date': 'Wed, 20 Jul 2022 01:07:30 GMT'},\n", 73 | " 'RetryAttempts': 0},\n", 74 | " 'applications': [{'id': '00f1uur2ps283a09',\n", 75 | " 'name': 'GettingStarted',\n", 76 | " 'arn': 'arn:aws:emr-serverless:us-east-1:269066542444:/applications/00f1uur2ps283a09',\n", 77 | " 'releaseLabel': 'emr-6.6.0',\n", 78 | " 'type': 'Spark',\n", 79 | " 'state': 'STOPPED',\n", 80 | " 'stateDetails': '',\n", 81 | " 'createdAt': datetime.datetime(2022, 6, 24, 5, 19, 4, 627000, tzinfo=tzlocal()),\n", 82 | " 'updatedAt': datetime.datetime(2022, 7, 18, 21, 36, 26, 791000, tzinfo=tzlocal())}]}" 83 | ] 84 | }, 85 | "execution_count": 4, 86 | "metadata": {}, 87 | "output_type": "execute_result" 88 | } 89 | ], 90 | "source": [ 91 | "es_client.list_applications()" 92 | ] 93 | }, 94 | { 95 | "cell_type": "code", 96 | "execution_count": 5, 97 | "metadata": {}, 98 | "outputs": [ 99 | { 100 | "name": "stdout", 101 | "output_type": "stream", 102 | "text": [ 103 | "\u001b[0;31mSignature:\u001b[0m \u001b[0mes_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstart_job_run\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 104 | "\u001b[0;31mDocstring:\u001b[0m\n", 105 | "Starts a job run.\n", 106 | "\n", 107 | "\n", 108 | "\n", 109 | "See also: `AWS API Documentation `_\n", 110 | "\n", 111 | "\n", 112 | "**Request Syntax** \n", 113 | "::\n", 114 | "\n", 115 | " response = client.start_job_run(\n", 116 | " applicationId='string',\n", 117 | " clientToken='string',\n", 118 | " executionRoleArn='string',\n", 119 | " jobDriver={\n", 120 | " 'sparkSubmit': {\n", 121 | " 'entryPoint': 'string',\n", 122 | " 'entryPointArguments': [\n", 123 | " 'string',\n", 124 | " ],\n", 125 | " 'sparkSubmitParameters': 'string'\n", 126 | " },\n", 127 | " 'hive': {\n", 128 | " 'query': 'string',\n", 129 | " 'initQueryFile': 'string',\n", 130 | " 'parameters': 'string'\n", 131 | " }\n", 132 | " },\n", 133 | " configurationOverrides={\n", 134 | " 'applicationConfiguration': [\n", 135 | " {\n", 136 | " 'classification': 'string',\n", 137 | " 'properties': {\n", 138 | " 'string': 'string'\n", 139 | " },\n", 140 | " 'configurations': {'... 
recursive ...'}\n", 141 | " },\n", 142 | " ],\n", 143 | " 'monitoringConfiguration': {\n", 144 | " 's3MonitoringConfiguration': {\n", 145 | " 'logUri': 'string',\n", 146 | " 'encryptionKeyArn': 'string'\n", 147 | " },\n", 148 | " 'managedPersistenceMonitoringConfiguration': {\n", 149 | " 'enabled': True|False,\n", 150 | " 'encryptionKeyArn': 'string'\n", 151 | " }\n", 152 | " }\n", 153 | " },\n", 154 | " tags={\n", 155 | " 'string': 'string'\n", 156 | " },\n", 157 | " executionTimeoutMinutes=123,\n", 158 | " name='string'\n", 159 | " )\n", 160 | ":type applicationId: string\n", 161 | ":param applicationId: **[REQUIRED]** \n", 162 | "\n", 163 | " The ID of the application on which to run the job.\n", 164 | "\n", 165 | " \n", 166 | "\n", 167 | "\n", 168 | ":type clientToken: string\n", 169 | ":param clientToken: **[REQUIRED]** \n", 170 | "\n", 171 | " The client idempotency token of the job run to start. Its value must be unique for each request.\n", 172 | "\n", 173 | " This field is autopopulated if not provided.\n", 174 | "\n", 175 | "\n", 176 | ":type executionRoleArn: string\n", 177 | ":param executionRoleArn: **[REQUIRED]** \n", 178 | "\n", 179 | " The execution role ARN for the job run.\n", 180 | "\n", 181 | " \n", 182 | "\n", 183 | "\n", 184 | ":type jobDriver: dict\n", 185 | ":param jobDriver: \n", 186 | "\n", 187 | " The job driver for the job run.\n", 188 | "\n", 189 | " .. note:: This is a Tagged Union structure. Only one of the following top level keys can be set: ``sparkSubmit``, ``hive``. \n", 190 | "\n", 191 | "\n", 192 | " - **sparkSubmit** *(dict) --* \n", 193 | "\n", 194 | " The job driver parameters specified for Spark.\n", 195 | "\n", 196 | " \n", 197 | "\n", 198 | " \n", 199 | " - **entryPoint** *(string) --* **[REQUIRED]** \n", 200 | "\n", 201 | " The entry point for the Spark submit job run.\n", 202 | "\n", 203 | " \n", 204 | "\n", 205 | " \n", 206 | " - **entryPointArguments** *(list) --* \n", 207 | "\n", 208 | " The arguments for the Spark submit job run.\n", 209 | "\n", 210 | " \n", 211 | "\n", 212 | " \n", 213 | " - *(string) --* \n", 214 | "\n", 215 | " \n", 216 | " \n", 217 | " - **sparkSubmitParameters** *(string) --* \n", 218 | "\n", 219 | " The parameters for the Spark submit job run.\n", 220 | "\n", 221 | " \n", 222 | "\n", 223 | " \n", 224 | " \n", 225 | " - **hive** *(dict) --* \n", 226 | "\n", 227 | " The job driver parameters specified for Hive.\n", 228 | "\n", 229 | " \n", 230 | "\n", 231 | " \n", 232 | " - **query** *(string) --* **[REQUIRED]** \n", 233 | "\n", 234 | " The query for the Hive job run.\n", 235 | "\n", 236 | " \n", 237 | "\n", 238 | " \n", 239 | " - **initQueryFile** *(string) --* \n", 240 | "\n", 241 | " The query file for the Hive job run.\n", 242 | "\n", 243 | " \n", 244 | "\n", 245 | " \n", 246 | " - **parameters** *(string) --* \n", 247 | "\n", 248 | " The parameters for the Hive job run.\n", 249 | "\n", 250 | " \n", 251 | "\n", 252 | " \n", 253 | " \n", 254 | "\n", 255 | ":type configurationOverrides: dict\n", 256 | ":param configurationOverrides: \n", 257 | "\n", 258 | " The configuration overrides for the job run.\n", 259 | "\n", 260 | " \n", 261 | "\n", 262 | "\n", 263 | " - **applicationConfiguration** *(list) --* \n", 264 | "\n", 265 | " The override configurations for the application.\n", 266 | "\n", 267 | " \n", 268 | "\n", 269 | " \n", 270 | " - *(dict) --* \n", 271 | "\n", 272 | " A configuration specification to be used when provisioning an application. 
A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file.\n", 273 | "\n", 274 | " \n", 275 | "\n", 276 | " \n", 277 | " - **classification** *(string) --* **[REQUIRED]** \n", 278 | "\n", 279 | " The classification within a configuration.\n", 280 | "\n", 281 | " \n", 282 | "\n", 283 | " \n", 284 | " - **properties** *(dict) --* \n", 285 | "\n", 286 | " A set of properties specified within a configuration classification.\n", 287 | "\n", 288 | " \n", 289 | "\n", 290 | " \n", 291 | " - *(string) --* \n", 292 | "\n", 293 | " \n", 294 | " - *(string) --* \n", 295 | "\n", 296 | " \n", 297 | " \n", 298 | " \n", 299 | " - **configurations** *(list) --* \n", 300 | "\n", 301 | " A list of additional configurations to apply within a configuration object.\n", 302 | "\n", 303 | " \n", 304 | "\n", 305 | " \n", 306 | " \n", 307 | "\n", 308 | " - **monitoringConfiguration** *(dict) --* \n", 309 | "\n", 310 | " The override configurations for monitoring.\n", 311 | "\n", 312 | " \n", 313 | "\n", 314 | " \n", 315 | " - **s3MonitoringConfiguration** *(dict) --* \n", 316 | "\n", 317 | " The Amazon S3 configuration for monitoring log publishing.\n", 318 | "\n", 319 | " \n", 320 | "\n", 321 | " \n", 322 | " - **logUri** *(string) --* \n", 323 | "\n", 324 | " The Amazon S3 destination URI for log publishing.\n", 325 | "\n", 326 | " \n", 327 | "\n", 328 | " \n", 329 | " - **encryptionKeyArn** *(string) --* \n", 330 | "\n", 331 | " The KMS key ARN to encrypt the logs published to the given Amazon S3 destination.\n", 332 | "\n", 333 | " \n", 334 | "\n", 335 | " \n", 336 | " \n", 337 | " - **managedPersistenceMonitoringConfiguration** *(dict) --* \n", 338 | "\n", 339 | " The managed log persistence configuration for a job run.\n", 340 | "\n", 341 | " \n", 342 | "\n", 343 | " \n", 344 | " - **enabled** *(boolean) --* \n", 345 | "\n", 346 | " Enables managed logging and defaults to true. If set to false, managed logging will be turned off.\n", 347 | "\n", 348 | " \n", 349 | "\n", 350 | " \n", 351 | " - **encryptionKeyArn** *(string) --* \n", 352 | "\n", 353 | " The KMS key ARN to encrypt the logs stored in managed log persistence.\n", 354 | "\n", 355 | " \n", 356 | "\n", 357 | " \n", 358 | " \n", 359 | " \n", 360 | "\n", 361 | ":type tags: dict\n", 362 | ":param tags: \n", 363 | "\n", 364 | " The tags assigned to the job run.\n", 365 | "\n", 366 | " \n", 367 | "\n", 368 | "\n", 369 | " - *(string) --* \n", 370 | "\n", 371 | " \n", 372 | " - *(string) --* \n", 373 | "\n", 374 | " \n", 375 | "\n", 376 | "\n", 377 | ":type executionTimeoutMinutes: integer\n", 378 | ":param executionTimeoutMinutes: \n", 379 | "\n", 380 | " The maximum duration for the job run to run. If the job run runs beyond this duration, it will be automatically cancelled.\n", 381 | "\n", 382 | " \n", 383 | "\n", 384 | "\n", 385 | ":type name: string\n", 386 | ":param name: \n", 387 | "\n", 388 | " The optional job run name. 
This doesn't have to be unique.\n", 389 | "\n", 390 | " \n", 391 | "\n", 392 | "\n", 393 | "\n", 394 | ":rtype: dict\n", 395 | ":returns: \n", 396 | " \n", 397 | " **Response Syntax** \n", 398 | "\n", 399 | " \n", 400 | " ::\n", 401 | "\n", 402 | " {\n", 403 | " 'applicationId': 'string',\n", 404 | " 'jobRunId': 'string',\n", 405 | " 'arn': 'string'\n", 406 | " }\n", 407 | " **Response Structure** \n", 408 | "\n", 409 | " \n", 410 | "\n", 411 | " - *(dict) --* \n", 412 | " \n", 413 | "\n", 414 | " - **applicationId** *(string) --* \n", 415 | "\n", 416 | " This output displays the application ID on which the job run was submitted.\n", 417 | "\n", 418 | " \n", 419 | " \n", 420 | "\n", 421 | " - **jobRunId** *(string) --* \n", 422 | "\n", 423 | " The output contains the ID of the started job run.\n", 424 | "\n", 425 | " \n", 426 | " \n", 427 | "\n", 428 | " - **arn** *(string) --* \n", 429 | "\n", 430 | " The output lists the execution role ARN of the job run.\n", 431 | "\n", 432 | " \n", 433 | "\u001b[0;31mFile:\u001b[0m ~/Projects/Internal/bootcamp/itversity-material/mastering-emr/me-venv/lib/python3.9/site-packages/botocore/client.py\n", 434 | "\u001b[0;31mType:\u001b[0m method\n" 435 | ] 436 | } 437 | ], 438 | "source": [ 439 | "es_client.start_job_run?" 440 | ] 441 | }, 442 | { 443 | "cell_type": "code", 444 | "execution_count": 13, 445 | "metadata": {}, 446 | "outputs": [ 447 | { 448 | "data": { 449 | "text/plain": [ 450 | "{'ResponseMetadata': {'RequestId': '0ca9b9c8-4b2c-40ea-8e2e-40520e843380',\n", 451 | " 'HTTPStatusCode': 200,\n", 452 | " 'HTTPHeaders': {'date': 'Wed, 20 Jul 2022 02:38:19 GMT',\n", 453 | " 'content-type': 'application/json',\n", 454 | " 'content-length': '176',\n", 455 | " 'connection': 'keep-alive',\n", 456 | " 'x-amzn-requestid': '0ca9b9c8-4b2c-40ea-8e2e-40520e843380',\n", 457 | " 'x-amzn-remapped-x-amzn-requestid': 'Vi2YSFRioAMFjVA=',\n", 458 | " 'x-amzn-remapped-content-length': '176',\n", 459 | " 'x-amz-apigw-id': 'Vi2YSFRioAMFjVA=',\n", 460 | " 'x-amzn-trace-id': 'Root=1-62d76a9b-02e053dc76b9f8c725544e59',\n", 461 | " 'x-amzn-remapped-date': 'Wed, 20 Jul 2022 02:38:19 GMT'},\n", 462 | " 'RetryAttempts': 0},\n", 463 | " 'applicationId': '00f1uur2ps283a09',\n", 464 | " 'jobRunId': '00f2jvb8le2hlk01',\n", 465 | " 'arn': 'arn:aws:emr-serverless:us-east-1:269066542444:/applications/00f1uur2ps283a09/jobruns/00f2jvb8le2hlk01'}" 466 | ] 467 | }, 468 | "execution_count": 13, 469 | "metadata": {}, 470 | "output_type": "execute_result" 471 | } 472 | ], 473 | "source": [ 474 | "es_client.start_job_run(\n", 475 | " applicationId='00f1uur2ps283a09',\n", 476 | " executionRoleArn='arn:aws:iam::269066542444:role/EMRServerlessS3RuntimeRole',\n", 477 | " jobDriver={\n", 478 | " \"sparkSubmit\": {\n", 479 | " \"entryPoint\": \"/usr/lib/spark/examples/jars/spark-examples.jar\",\n", 480 | " \"entryPointArguments\": [\"1\"],\n", 481 | " \"sparkSubmitParameters\": \"--class org.apache.spark.examples.SparkPi --conf spark.executor.cores=4 --conf spark.executor.memory=20g --conf spark.driver.cores=4 --conf spark.driver.memory=8g --conf spark.executor.instances=1\"\n", 482 | " }\n", 483 | " }\n", 484 | ")" 485 | ] 486 | }, 487 | { 488 | "cell_type": "code", 489 | "execution_count": 9, 490 | "metadata": {}, 491 | "outputs": [ 492 | { 493 | "data": { 494 | "text/plain": [ 495 | "{'ResponseMetadata': {'RequestId': 'ff09c89d-5e90-4323-90f5-f52ece456653',\n", 496 | " 'HTTPStatusCode': 200,\n", 497 | " 'HTTPHeaders': {'date': 'Wed, 20 Jul 2022 01:20:43 GMT',\n", 498 | " 'content-type': 
'application/json',\n", 499 | " 'content-length': '1122',\n", 500 | " 'connection': 'keep-alive',\n", 501 | " 'x-amzn-requestid': 'ff09c89d-5e90-4323-90f5-f52ece456653',\n", 502 | " 'x-amzn-remapped-x-amzn-requestid': 'VirA1Ec-oAMFXHw=',\n", 503 | " 'x-amzn-remapped-content-length': '1054',\n", 504 | " 'x-amz-apigw-id': 'VirA1Ec-oAMFXHw=',\n", 505 | " 'x-amzn-trace-id': 'Root=1-62d7586b-6be15b716e2876895223f07b',\n", 506 | " 'x-amzn-remapped-date': 'Wed, 20 Jul 2022 01:20:43 GMT'},\n", 507 | " 'RetryAttempts': 0},\n", 508 | " 'jobRun': {'applicationId': '00f1uur2ps283a09',\n", 509 | " 'jobRunId': '00f2jttd4ijc1t01',\n", 510 | " 'arn': 'arn:aws:emr-serverless:us-east-1:269066542444:/applications/00f1uur2ps283a09/jobruns/00f2jttd4ijc1t01',\n", 511 | " 'createdBy': 'arn:aws:iam::269066542444:user/durga.gadiraju',\n", 512 | " 'createdAt': datetime.datetime(2022, 7, 20, 6, 48, 10, 593000, tzinfo=tzlocal()),\n", 513 | " 'updatedAt': datetime.datetime(2022, 7, 20, 6, 49, 58, 456000, tzinfo=tzlocal()),\n", 514 | " 'executionRole': 'arn:aws:iam::269066542444:role/EMRServerlessS3RuntimeRole',\n", 515 | " 'state': 'RUNNING',\n", 516 | " 'stateDetails': '',\n", 517 | " 'releaseLabel': 'emr-6.6.0',\n", 518 | " 'jobDriver': {'sparkSubmit': {'entryPoint': '/usr/lib/spark/examples/jars/spark-examples.jar',\n", 519 | " 'entryPointArguments': ['1'],\n", 520 | " 'sparkSubmitParameters': '--class org.apache.spark.examples.SparkPi --conf spark.executor.cores=4 --conf spark.executor.memory=20g --conf spark.driver.cores=4 --conf spark.driver.memory=8g --conf spark.executor.instances=1'}},\n", 521 | " 'tags': {}}}" 522 | ] 523 | }, 524 | "execution_count": 9, 525 | "metadata": {}, 526 | "output_type": "execute_result" 527 | } 528 | ], 529 | "source": [ 530 | "es_client.get_job_run(\n", 531 | " applicationId='00f1uur2ps283a09',\n", 532 | " jobRunId='00f2jttd4ijc1t01'\n", 533 | ")" 534 | ] 535 | }, 536 | { 537 | "cell_type": "code", 538 | "execution_count": null, 539 | "metadata": {}, 540 | "outputs": [], 541 | "source": [] 542 | } 543 | ], 544 | "metadata": { 545 | "kernelspec": { 546 | "display_name": "Python 3.9.12 ('me-venv': venv)", 547 | "language": "python", 548 | "name": "python3" 549 | }, 550 | "language_info": { 551 | "codemirror_mode": { 552 | "name": "ipython", 553 | "version": 3 554 | }, 555 | "file_extension": ".py", 556 | "mimetype": "text/x-python", 557 | "name": "python", 558 | "nbconvert_exporter": "python", 559 | "pygments_lexer": "ipython3", 560 | "version": "3.9.12" 561 | }, 562 | "orig_nbformat": 4, 563 | "vscode": { 564 | "interpreter": { 565 | "hash": "486bfb5c5bc2d80b6bdfdd884fe8070e0b9ef1b20f462b79720291b7c17c500c" 566 | } 567 | } 568 | }, 569 | "nbformat": 4, 570 | "nbformat_minor": 2 571 | } 572 | -------------------------------------------------------------------------------- /13_spark_sql_using_aws_emr_cluster/07_compute_daily_product_revenue.sql: -------------------------------------------------------------------------------- 1 | CREATE DATABASE IF NOT EXISTS retail_ods; 2 | USE retail_ods; 3 | 4 | CREATE TEMPORARY VIEW orders 5 | USING JSON 6 | OPTIONS (path='s3://airetail/retail_db_json/orders'); 7 | 8 | CREATE TEMPORARY VIEW order_items 9 | USING JSON 10 | OPTIONS (path='s3://airetail/retail_db_json/order_items'); 11 | 12 | INSERT OVERWRITE DIRECTORY 's3://airetail/retail_db_json/daily_product_revenue' 13 | USING JSON 14 | SELECT o.order_date, 15 | o.order_status, 16 | oi.order_item_product_id, 17 | round(sum(oi.order_item_subtotal), 2) AS revenue 18 | FROM orders AS o JOIN 
order_items AS oi 19 | ON o.order_id = oi.order_item_order_id 20 | GROUP BY o.order_date, 21 | o.order_status, 22 | oi.order_item_product_id 23 | ORDER BY o.order_date, 24 | revenue DESC; -------------------------------------------------------------------------------- /13_spark_sql_using_aws_emr_cluster/08_compute_daily_product_revenue_args.sql: -------------------------------------------------------------------------------- 1 | CREATE DATABASE IF NOT EXISTS retail_ods; 2 | USE retail_ods; 3 | 4 | CREATE TEMPORARY VIEW orders 5 | USING JSON 6 | OPTIONS (path='s3://${s3.bucket}/retail_db_json/orders'); 7 | 8 | CREATE TEMPORARY VIEW order_items 9 | USING JSON 10 | OPTIONS (path='s3://${s3.bucket}/retail_db_json/order_items'); 11 | 12 | INSERT OVERWRITE DIRECTORY 's3://${s3.bucket}/retail_db_json/daily_product_revenue' 13 | USING JSON 14 | SELECT o.order_date, 15 | o.order_status, 16 | oi.order_item_product_id, 17 | round(sum(oi.order_item_subtotal), 2) AS revenue 18 | FROM orders AS o JOIN order_items AS oi 19 | ON o.order_id = oi.order_item_order_id 20 | GROUP BY o.order_date, 21 | o.order_status, 22 | oi.order_item_product_id 23 | ORDER BY o.order_date, 24 | revenue DESC; -------------------------------------------------------------------------------- /13_spark_sql_using_aws_emr_cluster/data/retail_db_json/categories/part-r-00000-ce1d8208-178d-48d3-bfb2-1a97d9c05094: -------------------------------------------------------------------------------- 1 | {"category_id":1,"category_department_id":2,"category_name":"Football"} 2 | {"category_id":2,"category_department_id":2,"category_name":"Soccer"} 3 | {"category_id":3,"category_department_id":2,"category_name":"Baseball & Softball"} 4 | {"category_id":4,"category_department_id":2,"category_name":"Basketball"} 5 | {"category_id":5,"category_department_id":2,"category_name":"Lacrosse"} 6 | {"category_id":6,"category_department_id":2,"category_name":"Tennis & Racquet"} 7 | {"category_id":7,"category_department_id":2,"category_name":"Hockey"} 8 | {"category_id":8,"category_department_id":2,"category_name":"More Sports"} 9 | {"category_id":9,"category_department_id":3,"category_name":"Cardio Equipment"} 10 | {"category_id":10,"category_department_id":3,"category_name":"Strength Training"} 11 | {"category_id":11,"category_department_id":3,"category_name":"Fitness Accessories"} 12 | {"category_id":12,"category_department_id":3,"category_name":"Boxing & MMA"} 13 | {"category_id":13,"category_department_id":3,"category_name":"Electronics"} 14 | {"category_id":14,"category_department_id":3,"category_name":"Yoga & Pilates"} 15 | {"category_id":15,"category_department_id":3,"category_name":"Training by Sport"} 16 | {"category_id":16,"category_department_id":3,"category_name":"As Seen on TV!"} 17 | {"category_id":17,"category_department_id":4,"category_name":"Cleats"} 18 | {"category_id":18,"category_department_id":4,"category_name":"Men's Footwear"} 19 | {"category_id":19,"category_department_id":4,"category_name":"Women's Footwear"} 20 | {"category_id":20,"category_department_id":4,"category_name":"Kids' Footwear"} 21 | {"category_id":21,"category_department_id":4,"category_name":"Featured Shops"} 22 | {"category_id":22,"category_department_id":4,"category_name":"Accessories"} 23 | {"category_id":23,"category_department_id":5,"category_name":"Men's Apparel"} 24 | {"category_id":24,"category_department_id":5,"category_name":"Women's Apparel"} 25 | {"category_id":25,"category_department_id":5,"category_name":"Boys' Apparel"} 26 | 
{"category_id":26,"category_department_id":5,"category_name":"Girls' Apparel"} 27 | {"category_id":27,"category_department_id":5,"category_name":"Accessories"} 28 | {"category_id":28,"category_department_id":5,"category_name":"Top Brands"} 29 | {"category_id":29,"category_department_id":5,"category_name":"Shop By Sport"} 30 | {"category_id":30,"category_department_id":6,"category_name":"Men's Golf Clubs"} 31 | {"category_id":31,"category_department_id":6,"category_name":"Women's Golf Clubs"} 32 | {"category_id":32,"category_department_id":6,"category_name":"Golf Apparel"} 33 | {"category_id":33,"category_department_id":6,"category_name":"Golf Shoes"} 34 | {"category_id":34,"category_department_id":6,"category_name":"Golf Bags & Carts"} 35 | {"category_id":35,"category_department_id":6,"category_name":"Golf Gloves"} 36 | {"category_id":36,"category_department_id":6,"category_name":"Golf Balls"} 37 | {"category_id":37,"category_department_id":6,"category_name":"Electronics"} 38 | {"category_id":38,"category_department_id":6,"category_name":"Kids' Golf Clubs"} 39 | {"category_id":39,"category_department_id":6,"category_name":"Team Shop"} 40 | {"category_id":40,"category_department_id":6,"category_name":"Accessories"} 41 | {"category_id":41,"category_department_id":6,"category_name":"Trade-In"} 42 | {"category_id":42,"category_department_id":7,"category_name":"Bike & Skate Shop"} 43 | {"category_id":43,"category_department_id":7,"category_name":"Camping & Hiking"} 44 | {"category_id":44,"category_department_id":7,"category_name":"Hunting & Shooting"} 45 | {"category_id":45,"category_department_id":7,"category_name":"Fishing"} 46 | {"category_id":46,"category_department_id":7,"category_name":"Indoor/Outdoor Games"} 47 | {"category_id":47,"category_department_id":7,"category_name":"Boating"} 48 | {"category_id":48,"category_department_id":7,"category_name":"Water Sports"} 49 | {"category_id":49,"category_department_id":8,"category_name":"MLB"} 50 | {"category_id":50,"category_department_id":8,"category_name":"NFL"} 51 | {"category_id":51,"category_department_id":8,"category_name":"NHL"} 52 | {"category_id":52,"category_department_id":8,"category_name":"NBA"} 53 | {"category_id":53,"category_department_id":8,"category_name":"NCAA"} 54 | {"category_id":54,"category_department_id":8,"category_name":"MLS"} 55 | {"category_id":55,"category_department_id":8,"category_name":"International Soccer"} 56 | {"category_id":56,"category_department_id":8,"category_name":"World Cup Shop"} 57 | {"category_id":57,"category_department_id":8,"category_name":"MLB Players"} 58 | {"category_id":58,"category_department_id":8,"category_name":"NFL Players"} 59 | -------------------------------------------------------------------------------- /13_spark_sql_using_aws_emr_cluster/data/retail_db_json/create_db_tables_pg.sql: -------------------------------------------------------------------------------- 1 | -- Postgres Table Creation Script 2 | -- 3 | 4 | -- 5 | -- Table structure for table departments 6 | -- 7 | 8 | CREATE TABLE departments ( 9 | department_id INT NOT NULL, 10 | department_name VARCHAR(45) NOT NULL, 11 | PRIMARY KEY (department_id) 12 | ); 13 | 14 | -- 15 | -- Table structure for table categories 16 | -- 17 | 18 | CREATE TABLE categories ( 19 | category_id INT NOT NULL, 20 | category_department_id INT NOT NULL, 21 | category_name VARCHAR(45) NOT NULL, 22 | PRIMARY KEY (category_id) 23 | ); 24 | 25 | -- 26 | -- Table structure for table products 27 | -- 28 | 29 | CREATE TABLE products ( 30 | product_id INT 
NOT NULL, 31 | product_category_id INT NOT NULL, 32 | product_name VARCHAR(45) NOT NULL, 33 | product_description VARCHAR(255) NOT NULL, 34 | product_price FLOAT NOT NULL, 35 | product_image VARCHAR(255) NOT NULL, 36 | PRIMARY KEY (product_id) 37 | ); 38 | 39 | -- 40 | -- Table structure for table customers 41 | -- 42 | 43 | CREATE TABLE customers ( 44 | customer_id INT NOT NULL, 45 | customer_fname VARCHAR(45) NOT NULL, 46 | customer_lname VARCHAR(45) NOT NULL, 47 | customer_email VARCHAR(45) NOT NULL, 48 | customer_password VARCHAR(45) NOT NULL, 49 | customer_street VARCHAR(255) NOT NULL, 50 | customer_city VARCHAR(45) NOT NULL, 51 | customer_state VARCHAR(45) NOT NULL, 52 | customer_zipcode VARCHAR(45) NOT NULL, 53 | PRIMARY KEY (customer_id) 54 | ); 55 | 56 | -- 57 | -- Table structure for table orders 58 | -- 59 | 60 | CREATE TABLE orders ( 61 | order_id INT NOT NULL, 62 | order_date TIMESTAMP NOT NULL, 63 | order_customer_id INT NOT NULL, 64 | order_status VARCHAR(45) NOT NULL, 65 | PRIMARY KEY (order_id) 66 | ); 67 | 68 | -- 69 | -- Table structure for table order_items 70 | -- 71 | 72 | CREATE TABLE order_items ( 73 | order_item_id INT NOT NULL, 74 | order_item_order_id INT NOT NULL, 75 | order_item_product_id INT NOT NULL, 76 | order_item_quantity INT NOT NULL, 77 | order_item_subtotal FLOAT NOT NULL, 78 | order_item_product_price FLOAT NOT NULL, 79 | PRIMARY KEY (order_item_id) 80 | ); 81 | 82 | 83 | -------------------------------------------------------------------------------- /13_spark_sql_using_aws_emr_cluster/data/retail_db_json/departments/part-r-00000-3db7cfae-3ad2-4fc7-88ff-afe0ec709f49: -------------------------------------------------------------------------------- 1 | {"department_id":2,"department_name":"Fitness"} 2 | {"department_id":3,"department_name":"Footwear"} 3 | {"department_id":4,"department_name":"Apparel"} 4 | {"department_id":5,"department_name":"Golf"} 5 | {"department_id":6,"department_name":"Outdoors"} 6 | {"department_id":7,"department_name":"Fan Shop"} 7 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Mastering EMR for Data Engineers 2 | GitHub repository related to the course [Mastering AWS Elastic Map Reduce](https://itversity.com/course/mastering-aws-elastic-map-reduce-for-data-engineers) for the [Data Engineers using AWS Data Analytics](https://itversity.com/bundle/data-engineering-using-aws-analytics) program. 3 | 4 | As part of this GitHub Repository, we have all the material that is relevant to Mastering AWS Elastic Map Reduce. The comprehensive course on our LMS Platform covers all important aspects related to building solutions using technologies such as Spark on AWS EMR. 5 | 6 | You can find the list of our comprehensive courses on Data Engineering using AWS Data Analytics on Udemy as well as on our LMS in the table below. We also offer a [comprehensive bundle or program](https://itversity.com/bundle/data-engineering-using-aws-analytics) which includes all relevant courses related to Data Engineering using AWS Data Analytics. 
7 | 8 | |Course Title|Udemy Link|Guided Program Link| 9 | |---|---|---| 10 | |AWS Essentials for Data Engineers|Self Support(TBA)|[Guided Program on AWS Data Analytics Services](https://itversity.com/course?courseid=aws-essentials-for-data-engineers)| 11 | |Mastering AWS Lambda Functions for Data Engineers|[Self Support](https://www.udemy.com/course/mastering-aws-lambda-functions/?referralCode=3F0E4F315A5CABE89702)|[Guided Program on AWS Data Analytics Services](https://itversity.com/course?courseid=mastering-aws-lambda-functions-for-data-engineers)| 12 | |Mastering AWS Elastic Map Reduce for Data Engineers|[Self Support](https://www.udemy.com/course/mastering-aws-elastic-map-reduce-for-data-engineers/?referralCode=7B1DD34B3999E0A4BFF4)|[Guided Program on AWS Data Analytics Services](https://itversity.com/course?courseid=mastering-aws-elastic-map-reduce-for-data-engineers)| 13 | |Mastering Amazon Redshift and Serverless for Data Engineers|[Self Support](https://www.udemy.com/course/mastering-amazon-redshift-and-serverless-for-data-engineers/?referralCode=B217ECEFED78F7CF9734)|[Guided Program on AWS Data Analytics Services](https://itversity.com/course?courseid=mastering-amazon-redshift-for-data-engineers)| 14 | 15 | -------------------------------------------------------------------------------- /associate_elastic_ip_with_emr_master.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stdout", 10 | "output_type": "stream", 11 | "text": [ 12 | "aigithub\n", 13 | "aiodatabricks\n", 14 | "airetail\n", 15 | "aispark\n", 16 | "analytiqspractice\n", 17 | "aws-emr-resources-269066542444-us-east-1\n", 18 | "aws-logs-269066542444-us-east-1\n", 19 | "databricks-workspace-stack-lambdazipsbucket-4syuukj0qwxk\n", 20 | "db-bb6cc2ec5502ac309a2f318b635e55b8-s3-root-bucket\n", 21 | "dgghactivity\n", 22 | "sagemaker-studio-z19d0ssl0vk\n", 23 | "stepfunctions-athena-sample-project-fqqxxln1v8\n", 24 | "stepfunctions-athena-sample-project-sjqlvah38p\n" 25 | ] 26 | } 27 | ], 28 | "source": [ 29 | "import boto3\n", 30 | "s3_client = boto3.client('s3')\n", 31 | "buckets = s3_client.list_buckets()['Buckets']\n", 32 | "for bucket in buckets:\n", 33 | " print(bucket['Name'])" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": 2, 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | "import boto3" 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": 3, 48 | "metadata": {}, 49 | "outputs": [], 50 | "source": [ 51 | "emr_client = boto3.client('emr')" 52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": 4, 57 | "metadata": {}, 58 | "outputs": [ 59 | { 60 | "name": "stdout", 61 | "output_type": "stream", 62 | "text": [ 63 | "\u001b[0;31mSignature:\u001b[0m \u001b[0memr_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlist_clusters\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 64 | "\u001b[0;31mDocstring:\u001b[0m\n", 65 | "Provides the status of all clusters visible to this Amazon Web Services account. Allows you to filter the list of clusters based on certain criteria; for example, filtering by cluster creation date and time or by status. 
This call returns a maximum of 50 clusters in unsorted order per call, but returns a marker to track the paging of the cluster list across multiple ListClusters calls.\n", 66 | "\n", 67 | "\n", 68 | "\n", 69 | "See also: `AWS API Documentation `_\n", 70 | "\n", 71 | "\n", 72 | "**Request Syntax** \n", 73 | "::\n", 74 | "\n", 75 | " response = client.list_clusters(\n", 76 | " CreatedAfter=datetime(2015, 1, 1),\n", 77 | " CreatedBefore=datetime(2015, 1, 1),\n", 78 | " ClusterStates=[\n", 79 | " 'STARTING'|'BOOTSTRAPPING'|'RUNNING'|'WAITING'|'TERMINATING'|'TERMINATED'|'TERMINATED_WITH_ERRORS',\n", 80 | " ],\n", 81 | " Marker='string'\n", 82 | " )\n", 83 | ":type CreatedAfter: datetime\n", 84 | ":param CreatedAfter: \n", 85 | "\n", 86 | " The creation date and time beginning value filter for listing clusters.\n", 87 | "\n", 88 | " \n", 89 | "\n", 90 | "\n", 91 | ":type CreatedBefore: datetime\n", 92 | ":param CreatedBefore: \n", 93 | "\n", 94 | " The creation date and time end value filter for listing clusters.\n", 95 | "\n", 96 | " \n", 97 | "\n", 98 | "\n", 99 | ":type ClusterStates: list\n", 100 | ":param ClusterStates: \n", 101 | "\n", 102 | " The cluster state filters to apply when listing clusters. Clusters that change state while this action runs may be not be returned as expected in the list of clusters.\n", 103 | "\n", 104 | " \n", 105 | "\n", 106 | "\n", 107 | " - *(string) --* \n", 108 | "\n", 109 | " \n", 110 | "\n", 111 | ":type Marker: string\n", 112 | ":param Marker: \n", 113 | "\n", 114 | " The pagination token that indicates the next set of results to retrieve.\n", 115 | "\n", 116 | " \n", 117 | "\n", 118 | "\n", 119 | "\n", 120 | ":rtype: dict\n", 121 | ":returns: \n", 122 | " \n", 123 | " **Response Syntax** \n", 124 | "\n", 125 | " \n", 126 | " ::\n", 127 | "\n", 128 | " {\n", 129 | " 'Clusters': [\n", 130 | " {\n", 131 | " 'Id': 'string',\n", 132 | " 'Name': 'string',\n", 133 | " 'Status': {\n", 134 | " 'State': 'STARTING'|'BOOTSTRAPPING'|'RUNNING'|'WAITING'|'TERMINATING'|'TERMINATED'|'TERMINATED_WITH_ERRORS',\n", 135 | " 'StateChangeReason': {\n", 136 | " 'Code': 'INTERNAL_ERROR'|'VALIDATION_ERROR'|'INSTANCE_FAILURE'|'INSTANCE_FLEET_TIMEOUT'|'BOOTSTRAP_FAILURE'|'USER_REQUEST'|'STEP_FAILURE'|'ALL_STEPS_COMPLETED',\n", 137 | " 'Message': 'string'\n", 138 | " },\n", 139 | " 'Timeline': {\n", 140 | " 'CreationDateTime': datetime(2015, 1, 1),\n", 141 | " 'ReadyDateTime': datetime(2015, 1, 1),\n", 142 | " 'EndDateTime': datetime(2015, 1, 1)\n", 143 | " }\n", 144 | " },\n", 145 | " 'NormalizedInstanceHours': 123,\n", 146 | " 'ClusterArn': 'string',\n", 147 | " 'OutpostArn': 'string'\n", 148 | " },\n", 149 | " ],\n", 150 | " 'Marker': 'string'\n", 151 | " }\n", 152 | " **Response Structure** \n", 153 | "\n", 154 | " \n", 155 | "\n", 156 | " - *(dict) --* \n", 157 | "\n", 158 | " This contains a ClusterSummaryList with the cluster details; for example, the cluster IDs, names, and status.\n", 159 | "\n", 160 | " \n", 161 | " \n", 162 | "\n", 163 | " - **Clusters** *(list) --* \n", 164 | "\n", 165 | " The list of clusters for the account based on the given filters.\n", 166 | "\n", 167 | " \n", 168 | " \n", 169 | "\n", 170 | " - *(dict) --* \n", 171 | "\n", 172 | " The summary description of the cluster.\n", 173 | "\n", 174 | " \n", 175 | " \n", 176 | "\n", 177 | " - **Id** *(string) --* \n", 178 | "\n", 179 | " The unique identifier for the cluster.\n", 180 | "\n", 181 | " \n", 182 | " \n", 183 | "\n", 184 | " - **Name** *(string) --* \n", 185 | "\n", 186 | " The name of the 
cluster.\n", 187 | "\n", 188 | " \n", 189 | " \n", 190 | "\n", 191 | " - **Status** *(dict) --* \n", 192 | "\n", 193 | " The details about the current status of the cluster.\n", 194 | "\n", 195 | " \n", 196 | " \n", 197 | "\n", 198 | " - **State** *(string) --* \n", 199 | "\n", 200 | " The current state of the cluster.\n", 201 | "\n", 202 | " \n", 203 | " \n", 204 | "\n", 205 | " - **StateChangeReason** *(dict) --* \n", 206 | "\n", 207 | " The reason for the cluster status change.\n", 208 | "\n", 209 | " \n", 210 | " \n", 211 | "\n", 212 | " - **Code** *(string) --* \n", 213 | "\n", 214 | " The programmatic code for the state change reason.\n", 215 | "\n", 216 | " \n", 217 | " \n", 218 | "\n", 219 | " - **Message** *(string) --* \n", 220 | "\n", 221 | " The descriptive message for the state change reason.\n", 222 | "\n", 223 | " \n", 224 | " \n", 225 | " \n", 226 | "\n", 227 | " - **Timeline** *(dict) --* \n", 228 | "\n", 229 | " A timeline that represents the status of a cluster over the lifetime of the cluster.\n", 230 | "\n", 231 | " \n", 232 | " \n", 233 | "\n", 234 | " - **CreationDateTime** *(datetime) --* \n", 235 | "\n", 236 | " The creation date and time of the cluster.\n", 237 | "\n", 238 | " \n", 239 | " \n", 240 | "\n", 241 | " - **ReadyDateTime** *(datetime) --* \n", 242 | "\n", 243 | " The date and time when the cluster was ready to run steps.\n", 244 | "\n", 245 | " \n", 246 | " \n", 247 | "\n", 248 | " - **EndDateTime** *(datetime) --* \n", 249 | "\n", 250 | " The date and time when the cluster was terminated.\n", 251 | "\n", 252 | " \n", 253 | " \n", 254 | " \n", 255 | " \n", 256 | "\n", 257 | " - **NormalizedInstanceHours** *(integer) --* \n", 258 | "\n", 259 | " An approximation of the cost of the cluster, represented in m1.small/hours. This value is incremented one time for every hour an m1.small instance runs. Larger instances are weighted more, so an EC2 instance that is roughly four times more expensive would result in the normalized instance hours being incremented by four. This result is only an approximation and does not reflect the actual billing rate.\n", 260 | "\n", 261 | " \n", 262 | " \n", 263 | "\n", 264 | " - **ClusterArn** *(string) --* \n", 265 | "\n", 266 | " The Amazon Resource Name of the cluster.\n", 267 | "\n", 268 | " \n", 269 | " \n", 270 | "\n", 271 | " - **OutpostArn** *(string) --* \n", 272 | "\n", 273 | " The Amazon Resource Name (ARN) of the Outpost where the cluster is launched. \n", 274 | "\n", 275 | " \n", 276 | " \n", 277 | " \n", 278 | " \n", 279 | "\n", 280 | " - **Marker** *(string) --* \n", 281 | "\n", 282 | " The pagination token that indicates the next set of results to retrieve.\n", 283 | "\n", 284 | " \n", 285 | "\u001b[0;31mFile:\u001b[0m ~/Projects/Internal/bootcamp/itversity-material/mastering-emr/me-venv/lib/python3.9/site-packages/botocore/client.py\n", 286 | "\u001b[0;31mType:\u001b[0m method\n" 287 | ] 288 | } 289 | ], 290 | "source": [ 291 | "emr_client.list_clusters?" 
292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": 6, 297 | "metadata": {}, 298 | "outputs": [ 299 | { 300 | "data": { 301 | "text/plain": [ 302 | "[{'Id': 'j-3VMHTW6P6KKL6',\n", 303 | " 'Name': 'AI Dev Cluster',\n", 304 | " 'Status': {'State': 'WAITING',\n", 305 | " 'StateChangeReason': {'Message': 'Cluster ready after last step completed.'},\n", 306 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 7, 6, 24, 614000, tzinfo=tzlocal()),\n", 307 | " 'ReadyDateTime': datetime.datetime(2022, 6, 26, 7, 12, 43, 368000, tzinfo=tzlocal())}},\n", 308 | " 'NormalizedInstanceHours': 192,\n", 309 | " 'ClusterArn': 'arn:aws:elasticmapreduce:us-east-1:269066542444:cluster/j-3VMHTW6P6KKL6'}]" 310 | ] 311 | }, 312 | "execution_count": 6, 313 | "metadata": {}, 314 | "output_type": "execute_result" 315 | } 316 | ], 317 | "source": [ 318 | "emr_client.list_clusters(ClusterStates=['RUNNING', 'WAITING'])['Clusters']" 319 | ] 320 | }, 321 | { 322 | "cell_type": "code", 323 | "execution_count": 8, 324 | "metadata": {}, 325 | "outputs": [], 326 | "source": [ 327 | "clusters = emr_client.list_clusters(ClusterStates=['RUNNING', 'WAITING'])['Clusters']" 328 | ] 329 | }, 330 | { 331 | "cell_type": "code", 332 | "execution_count": 9, 333 | "metadata": {}, 334 | "outputs": [ 335 | { 336 | "name": "stdout", 337 | "output_type": "stream", 338 | "text": [ 339 | "j-3VMHTW6P6KKL6\n" 340 | ] 341 | } 342 | ], 343 | "source": [ 344 | "for cluster in clusters:\n", 345 | " if cluster['Name'] == 'AI Dev Cluster':\n", 346 | " print(cluster['Id'])" 347 | ] 348 | }, 349 | { 350 | "cell_type": "code", 351 | "execution_count": 10, 352 | "metadata": {}, 353 | "outputs": [ 354 | { 355 | "name": "stdout", 356 | "output_type": "stream", 357 | "text": [ 358 | "\u001b[0;31mSignature:\u001b[0m \u001b[0memr_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdescribe_cluster\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 359 | "\u001b[0;31mDocstring:\u001b[0m\n", 360 | "Provides cluster-level details including status, hardware and software configuration, VPC settings, and so on.\n", 361 | "\n", 362 | "\n", 363 | "\n", 364 | "See also: `AWS API Documentation `_\n", 365 | "\n", 366 | "\n", 367 | "**Request Syntax** \n", 368 | "::\n", 369 | "\n", 370 | " response = client.describe_cluster(\n", 371 | " ClusterId='string'\n", 372 | " )\n", 373 | ":type ClusterId: string\n", 374 | ":param ClusterId: **[REQUIRED]** \n", 375 | "\n", 376 | " The identifier of the cluster to describe.\n", 377 | "\n", 378 | " \n", 379 | "\n", 380 | "\n", 381 | "\n", 382 | ":rtype: dict\n", 383 | ":returns: \n", 384 | " \n", 385 | " **Response Syntax** \n", 386 | "\n", 387 | " \n", 388 | " ::\n", 389 | "\n", 390 | " {\n", 391 | " 'Cluster': {\n", 392 | " 'Id': 'string',\n", 393 | " 'Name': 'string',\n", 394 | " 'Status': {\n", 395 | " 'State': 'STARTING'|'BOOTSTRAPPING'|'RUNNING'|'WAITING'|'TERMINATING'|'TERMINATED'|'TERMINATED_WITH_ERRORS',\n", 396 | " 'StateChangeReason': {\n", 397 | " 'Code': 'INTERNAL_ERROR'|'VALIDATION_ERROR'|'INSTANCE_FAILURE'|'INSTANCE_FLEET_TIMEOUT'|'BOOTSTRAP_FAILURE'|'USER_REQUEST'|'STEP_FAILURE'|'ALL_STEPS_COMPLETED',\n", 398 | " 'Message': 'string'\n", 399 | " },\n", 400 | " 'Timeline': {\n", 401 | " 'CreationDateTime': datetime(2015, 1, 1),\n", 402 | " 'ReadyDateTime': datetime(2015, 1, 1),\n", 403 | " 'EndDateTime': datetime(2015, 
1, 1)\n", 404 | " }\n", 405 | " },\n", 406 | " 'Ec2InstanceAttributes': {\n", 407 | " 'Ec2KeyName': 'string',\n", 408 | " 'Ec2SubnetId': 'string',\n", 409 | " 'RequestedEc2SubnetIds': [\n", 410 | " 'string',\n", 411 | " ],\n", 412 | " 'Ec2AvailabilityZone': 'string',\n", 413 | " 'RequestedEc2AvailabilityZones': [\n", 414 | " 'string',\n", 415 | " ],\n", 416 | " 'IamInstanceProfile': 'string',\n", 417 | " 'EmrManagedMasterSecurityGroup': 'string',\n", 418 | " 'EmrManagedSlaveSecurityGroup': 'string',\n", 419 | " 'ServiceAccessSecurityGroup': 'string',\n", 420 | " 'AdditionalMasterSecurityGroups': [\n", 421 | " 'string',\n", 422 | " ],\n", 423 | " 'AdditionalSlaveSecurityGroups': [\n", 424 | " 'string',\n", 425 | " ]\n", 426 | " },\n", 427 | " 'InstanceCollectionType': 'INSTANCE_FLEET'|'INSTANCE_GROUP',\n", 428 | " 'LogUri': 'string',\n", 429 | " 'LogEncryptionKmsKeyId': 'string',\n", 430 | " 'RequestedAmiVersion': 'string',\n", 431 | " 'RunningAmiVersion': 'string',\n", 432 | " 'ReleaseLabel': 'string',\n", 433 | " 'AutoTerminate': True|False,\n", 434 | " 'TerminationProtected': True|False,\n", 435 | " 'VisibleToAllUsers': True|False,\n", 436 | " 'Applications': [\n", 437 | " {\n", 438 | " 'Name': 'string',\n", 439 | " 'Version': 'string',\n", 440 | " 'Args': [\n", 441 | " 'string',\n", 442 | " ],\n", 443 | " 'AdditionalInfo': {\n", 444 | " 'string': 'string'\n", 445 | " }\n", 446 | " },\n", 447 | " ],\n", 448 | " 'Tags': [\n", 449 | " {\n", 450 | " 'Key': 'string',\n", 451 | " 'Value': 'string'\n", 452 | " },\n", 453 | " ],\n", 454 | " 'ServiceRole': 'string',\n", 455 | " 'NormalizedInstanceHours': 123,\n", 456 | " 'MasterPublicDnsName': 'string',\n", 457 | " 'Configurations': [\n", 458 | " {\n", 459 | " 'Classification': 'string',\n", 460 | " 'Configurations': {'... 
recursive ...'},\n", 461 | " 'Properties': {\n", 462 | " 'string': 'string'\n", 463 | " }\n", 464 | " },\n", 465 | " ],\n", 466 | " 'SecurityConfiguration': 'string',\n", 467 | " 'AutoScalingRole': 'string',\n", 468 | " 'ScaleDownBehavior': 'TERMINATE_AT_INSTANCE_HOUR'|'TERMINATE_AT_TASK_COMPLETION',\n", 469 | " 'CustomAmiId': 'string',\n", 470 | " 'EbsRootVolumeSize': 123,\n", 471 | " 'RepoUpgradeOnBoot': 'SECURITY'|'NONE',\n", 472 | " 'KerberosAttributes': {\n", 473 | " 'Realm': 'string',\n", 474 | " 'KdcAdminPassword': 'string',\n", 475 | " 'CrossRealmTrustPrincipalPassword': 'string',\n", 476 | " 'ADDomainJoinUser': 'string',\n", 477 | " 'ADDomainJoinPassword': 'string'\n", 478 | " },\n", 479 | " 'ClusterArn': 'string',\n", 480 | " 'OutpostArn': 'string',\n", 481 | " 'StepConcurrencyLevel': 123,\n", 482 | " 'PlacementGroups': [\n", 483 | " {\n", 484 | " 'InstanceRole': 'MASTER'|'CORE'|'TASK',\n", 485 | " 'PlacementStrategy': 'SPREAD'|'PARTITION'|'CLUSTER'|'NONE'\n", 486 | " },\n", 487 | " ],\n", 488 | " 'OSReleaseLabel': 'string'\n", 489 | " }\n", 490 | " }\n", 491 | " **Response Structure** \n", 492 | "\n", 493 | " \n", 494 | "\n", 495 | " - *(dict) --* \n", 496 | "\n", 497 | " This output contains the description of the cluster.\n", 498 | "\n", 499 | " \n", 500 | " \n", 501 | "\n", 502 | " - **Cluster** *(dict) --* \n", 503 | "\n", 504 | " This output contains the details for the requested cluster.\n", 505 | "\n", 506 | " \n", 507 | " \n", 508 | "\n", 509 | " - **Id** *(string) --* \n", 510 | "\n", 511 | " The unique identifier for the cluster.\n", 512 | "\n", 513 | " \n", 514 | " \n", 515 | "\n", 516 | " - **Name** *(string) --* \n", 517 | "\n", 518 | " The name of the cluster.\n", 519 | "\n", 520 | " \n", 521 | " \n", 522 | "\n", 523 | " - **Status** *(dict) --* \n", 524 | "\n", 525 | " The current status details about the cluster.\n", 526 | "\n", 527 | " \n", 528 | " \n", 529 | "\n", 530 | " - **State** *(string) --* \n", 531 | "\n", 532 | " The current state of the cluster.\n", 533 | "\n", 534 | " \n", 535 | " \n", 536 | "\n", 537 | " - **StateChangeReason** *(dict) --* \n", 538 | "\n", 539 | " The reason for the cluster status change.\n", 540 | "\n", 541 | " \n", 542 | " \n", 543 | "\n", 544 | " - **Code** *(string) --* \n", 545 | "\n", 546 | " The programmatic code for the state change reason.\n", 547 | "\n", 548 | " \n", 549 | " \n", 550 | "\n", 551 | " - **Message** *(string) --* \n", 552 | "\n", 553 | " The descriptive message for the state change reason.\n", 554 | "\n", 555 | " \n", 556 | " \n", 557 | " \n", 558 | "\n", 559 | " - **Timeline** *(dict) --* \n", 560 | "\n", 561 | " A timeline that represents the status of a cluster over the lifetime of the cluster.\n", 562 | "\n", 563 | " \n", 564 | " \n", 565 | "\n", 566 | " - **CreationDateTime** *(datetime) --* \n", 567 | "\n", 568 | " The creation date and time of the cluster.\n", 569 | "\n", 570 | " \n", 571 | " \n", 572 | "\n", 573 | " - **ReadyDateTime** *(datetime) --* \n", 574 | "\n", 575 | " The date and time when the cluster was ready to run steps.\n", 576 | "\n", 577 | " \n", 578 | " \n", 579 | "\n", 580 | " - **EndDateTime** *(datetime) --* \n", 581 | "\n", 582 | " The date and time when the cluster was terminated.\n", 583 | "\n", 584 | " \n", 585 | " \n", 586 | " \n", 587 | " \n", 588 | "\n", 589 | " - **Ec2InstanceAttributes** *(dict) --* \n", 590 | "\n", 591 | " Provides information about the EC2 instances in a cluster grouped by category. 
For example, key name, subnet ID, IAM instance profile, and so on.\n", 592 | "\n", 593 | " \n", 594 | " \n", 595 | "\n", 596 | " - **Ec2KeyName** *(string) --* \n", 597 | "\n", 598 | " The name of the Amazon EC2 key pair to use when connecting with SSH into the master node as a user named \"hadoop\".\n", 599 | "\n", 600 | " \n", 601 | " \n", 602 | "\n", 603 | " - **Ec2SubnetId** *(string) --* \n", 604 | "\n", 605 | " Set this parameter to the identifier of the Amazon VPC subnet where you want the cluster to launch. If you do not specify this value, and your account supports EC2-Classic, the cluster launches in EC2-Classic.\n", 606 | "\n", 607 | " \n", 608 | " \n", 609 | "\n", 610 | " - **RequestedEc2SubnetIds** *(list) --* \n", 611 | "\n", 612 | " Applies to clusters configured with the instance fleets option. Specifies the unique identifier of one or more Amazon EC2 subnets in which to launch EC2 cluster instances. Subnets must exist within the same VPC. Amazon EMR chooses the EC2 subnet with the best fit from among the list of ``RequestedEc2SubnetIds`` , and then launches all cluster instances within that Subnet. If this value is not specified, and the account and Region support EC2-Classic networks, the cluster launches instances in the EC2-Classic network and uses ``RequestedEc2AvailabilityZones`` instead of this setting. If EC2-Classic is not supported, and no Subnet is specified, Amazon EMR chooses the subnet for you. ``RequestedEc2SubnetIDs`` and ``RequestedEc2AvailabilityZones`` cannot be specified together.\n", 613 | "\n", 614 | " \n", 615 | " \n", 616 | "\n", 617 | " - *(string) --* \n", 618 | " \n", 619 | " \n", 620 | "\n", 621 | " - **Ec2AvailabilityZone** *(string) --* \n", 622 | "\n", 623 | " The Availability Zone in which the cluster will run. \n", 624 | "\n", 625 | " \n", 626 | " \n", 627 | "\n", 628 | " - **RequestedEc2AvailabilityZones** *(list) --* \n", 629 | "\n", 630 | " Applies to clusters configured with the instance fleets option. Specifies one or more Availability Zones in which to launch EC2 cluster instances when the EC2-Classic network configuration is supported. Amazon EMR chooses the Availability Zone with the best fit from among the list of ``RequestedEc2AvailabilityZones`` , and then launches all cluster instances within that Availability Zone. If you do not specify this value, Amazon EMR chooses the Availability Zone for you. ``RequestedEc2SubnetIDs`` and ``RequestedEc2AvailabilityZones`` cannot be specified together.\n", 631 | "\n", 632 | " \n", 633 | " \n", 634 | "\n", 635 | " - *(string) --* \n", 636 | " \n", 637 | " \n", 638 | "\n", 639 | " - **IamInstanceProfile** *(string) --* \n", 640 | "\n", 641 | " The IAM role that was specified when the cluster was launched. 
The EC2 instances of the cluster assume this role.\n", 642 | "\n", 643 | " \n", 644 | " \n", 645 | "\n", 646 | " - **EmrManagedMasterSecurityGroup** *(string) --* \n", 647 | "\n", 648 | " The identifier of the Amazon EC2 security group for the master node.\n", 649 | "\n", 650 | " \n", 651 | " \n", 652 | "\n", 653 | " - **EmrManagedSlaveSecurityGroup** *(string) --* \n", 654 | "\n", 655 | " The identifier of the Amazon EC2 security group for the core and task nodes.\n", 656 | "\n", 657 | " \n", 658 | " \n", 659 | "\n", 660 | " - **ServiceAccessSecurityGroup** *(string) --* \n", 661 | "\n", 662 | " The identifier of the Amazon EC2 security group for the Amazon EMR service to access clusters in VPC private subnets.\n", 663 | "\n", 664 | " \n", 665 | " \n", 666 | "\n", 667 | " - **AdditionalMasterSecurityGroups** *(list) --* \n", 668 | "\n", 669 | " A list of additional Amazon EC2 security group IDs for the master node.\n", 670 | "\n", 671 | " \n", 672 | " \n", 673 | "\n", 674 | " - *(string) --* \n", 675 | " \n", 676 | " \n", 677 | "\n", 678 | " - **AdditionalSlaveSecurityGroups** *(list) --* \n", 679 | "\n", 680 | " A list of additional Amazon EC2 security group IDs for the core and task nodes.\n", 681 | "\n", 682 | " \n", 683 | " \n", 684 | "\n", 685 | " - *(string) --* \n", 686 | " \n", 687 | " \n", 688 | " \n", 689 | "\n", 690 | " - **InstanceCollectionType** *(string) --* \n", 691 | "\n", 692 | " .. note::\n", 693 | "\n", 694 | " \n", 695 | "\n", 696 | " The instance fleet configuration is available only in Amazon EMR versions 4.8.0 and later, excluding 5.0.x versions.\n", 697 | "\n", 698 | " \n", 699 | "\n", 700 | " \n", 701 | "\n", 702 | " The instance group configuration of the cluster. A value of ``INSTANCE_GROUP`` indicates a uniform instance group configuration. A value of ``INSTANCE_FLEET`` indicates an instance fleets configuration.\n", 703 | "\n", 704 | " \n", 705 | " \n", 706 | "\n", 707 | " - **LogUri** *(string) --* \n", 708 | "\n", 709 | " The path to the Amazon S3 location where logs for this cluster are stored.\n", 710 | "\n", 711 | " \n", 712 | " \n", 713 | "\n", 714 | " - **LogEncryptionKmsKeyId** *(string) --* \n", 715 | "\n", 716 | " The KMS key used for encrypting log files. This attribute is only available with EMR version 5.30.0 and later, excluding EMR 6.0.0. \n", 717 | "\n", 718 | " \n", 719 | " \n", 720 | "\n", 721 | " - **RequestedAmiVersion** *(string) --* \n", 722 | "\n", 723 | " The AMI version requested for this cluster.\n", 724 | "\n", 725 | " \n", 726 | " \n", 727 | "\n", 728 | " - **RunningAmiVersion** *(string) --* \n", 729 | "\n", 730 | " The AMI version running on this cluster.\n", 731 | "\n", 732 | " \n", 733 | " \n", 734 | "\n", 735 | " - **ReleaseLabel** *(string) --* \n", 736 | "\n", 737 | " The Amazon EMR release label, which determines the version of open-source application packages installed on the cluster. Release labels are in the form ``emr-x.x.x`` , where x.x.x is an Amazon EMR release version such as ``emr-5.14.0`` . For more information about Amazon EMR release versions and included application versions and features, see `https\\://docs.aws.amazon.com/emr/latest/ReleaseGuide/ `__ . The release label applies only to Amazon EMR releases version 4.0 and later. 
Earlier versions use ``AmiVersion`` .\n", 738 | "\n", 739 | " \n", 740 | " \n", 741 | "\n", 742 | " - **AutoTerminate** *(boolean) --* \n", 743 | "\n", 744 | " Specifies whether the cluster should terminate after completing all steps.\n", 745 | "\n", 746 | " \n", 747 | " \n", 748 | "\n", 749 | " - **TerminationProtected** *(boolean) --* \n", 750 | "\n", 751 | " Indicates whether Amazon EMR will lock the cluster to prevent the EC2 instances from being terminated by an API call or user intervention, or in the event of a cluster error.\n", 752 | "\n", 753 | " \n", 754 | " \n", 755 | "\n", 756 | " - **VisibleToAllUsers** *(boolean) --* \n", 757 | "\n", 758 | " Indicates whether the cluster is visible to IAM principals in the Amazon Web Services account associated with the cluster. When ``true`` , IAM principals in the Amazon Web Services account can perform EMR cluster actions on the cluster that their IAM policies allow. When ``false`` , only the IAM principal that created the cluster and the Amazon Web Services account root user can perform EMR actions, regardless of IAM permissions policies attached to other IAM principals.\n", 759 | "\n", 760 | " \n", 761 | "\n", 762 | " The default value is ``true`` if a value is not provided when creating a cluster using the EMR API RunJobFlow command, the CLI `create-cluster `__ command, or the Amazon Web Services Management Console.\n", 763 | "\n", 764 | " \n", 765 | " \n", 766 | "\n", 767 | " - **Applications** *(list) --* \n", 768 | "\n", 769 | " The applications installed on this cluster.\n", 770 | "\n", 771 | " \n", 772 | " \n", 773 | "\n", 774 | " - *(dict) --* \n", 775 | "\n", 776 | " With Amazon EMR release version 4.0 and later, the only accepted parameter is the application name. To pass arguments to applications, you use configuration classifications specified using configuration JSON objects. For more information, see `Configuring Applications `__ .\n", 777 | "\n", 778 | " \n", 779 | "\n", 780 | " With earlier Amazon EMR releases, the application is any Amazon or third-party software that you can add to the cluster. This structure contains a list of strings that indicates the software to use with the cluster and accepts a user argument list. Amazon EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action argument.\n", 781 | "\n", 782 | " \n", 783 | " \n", 784 | "\n", 785 | " - **Name** *(string) --* \n", 786 | "\n", 787 | " The name of the application.\n", 788 | "\n", 789 | " \n", 790 | " \n", 791 | "\n", 792 | " - **Version** *(string) --* \n", 793 | "\n", 794 | " The version of the application.\n", 795 | "\n", 796 | " \n", 797 | " \n", 798 | "\n", 799 | " - **Args** *(list) --* \n", 800 | "\n", 801 | " Arguments for Amazon EMR to pass to the application.\n", 802 | "\n", 803 | " \n", 804 | " \n", 805 | "\n", 806 | " - *(string) --* \n", 807 | " \n", 808 | " \n", 809 | "\n", 810 | " - **AdditionalInfo** *(dict) --* \n", 811 | "\n", 812 | " This option is for advanced users only. 
This is meta information about third-party applications that third-party vendors use for testing purposes.\n", 813 | "\n", 814 | " \n", 815 | " \n", 816 | "\n", 817 | " - *(string) --* \n", 818 | " \n", 819 | "\n", 820 | " - *(string) --* \n", 821 | " \n", 822 | " \n", 823 | " \n", 824 | " \n", 825 | " \n", 826 | "\n", 827 | " - **Tags** *(list) --* \n", 828 | "\n", 829 | " A list of tags associated with a cluster.\n", 830 | "\n", 831 | " \n", 832 | " \n", 833 | "\n", 834 | " - *(dict) --* \n", 835 | "\n", 836 | " A key-value pair containing user-defined metadata that you can associate with an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see `Tag Clusters `__ . \n", 837 | "\n", 838 | " \n", 839 | " \n", 840 | "\n", 841 | " - **Key** *(string) --* \n", 842 | "\n", 843 | " A user-defined key, which is the minimum required information for a valid tag. For more information, see `Tag `__ . \n", 844 | "\n", 845 | " \n", 846 | " \n", 847 | "\n", 848 | " - **Value** *(string) --* \n", 849 | "\n", 850 | " A user-defined value, which is optional in a tag. For more information, see `Tag Clusters `__ . \n", 851 | "\n", 852 | " \n", 853 | " \n", 854 | " \n", 855 | " \n", 856 | "\n", 857 | " - **ServiceRole** *(string) --* \n", 858 | "\n", 859 | " The IAM role that Amazon EMR assumes in order to access Amazon Web Services resources on your behalf.\n", 860 | "\n", 861 | " \n", 862 | " \n", 863 | "\n", 864 | " - **NormalizedInstanceHours** *(integer) --* \n", 865 | "\n", 866 | " An approximation of the cost of the cluster, represented in m1.small/hours. This value is incremented one time for every hour an m1.small instance runs. Larger instances are weighted more, so an EC2 instance that is roughly four times more expensive would result in the normalized instance hours being incremented by four. This result is only an approximation and does not reflect the actual billing rate.\n", 867 | "\n", 868 | " \n", 869 | " \n", 870 | "\n", 871 | " - **MasterPublicDnsName** *(string) --* \n", 872 | "\n", 873 | " The DNS name of the master node. If the cluster is on a private subnet, this is the private DNS name. On a public subnet, this is the public DNS name.\n", 874 | "\n", 875 | " \n", 876 | " \n", 877 | "\n", 878 | " - **Configurations** *(list) --* \n", 879 | "\n", 880 | " Applies only to Amazon EMR releases 4.x and later. The list of Configurations supplied to the EMR cluster.\n", 881 | "\n", 882 | " \n", 883 | " \n", 884 | "\n", 885 | " - *(dict) --* \n", 886 | "\n", 887 | " .. note::\n", 888 | "\n", 889 | " \n", 890 | "\n", 891 | " Amazon EMR releases 4.x or later.\n", 892 | "\n", 893 | " \n", 894 | "\n", 895 | " \n", 896 | "\n", 897 | " An optional configuration specification to be used when provisioning cluster instances, which can include configurations for applications and software bundled with Amazon EMR. A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file. 
For more information, see `Configuring Applications `__ .\n", 898 | "\n", 899 | " \n", 900 | " \n", 901 | "\n", 902 | " - **Classification** *(string) --* \n", 903 | "\n", 904 | " The classification within a configuration.\n", 905 | "\n", 906 | " \n", 907 | " \n", 908 | "\n", 909 | " - **Configurations** *(list) --* \n", 910 | "\n", 911 | " A list of additional configurations to apply within a configuration object.\n", 912 | "\n", 913 | " \n", 914 | " \n", 915 | "\n", 916 | " - **Properties** *(dict) --* \n", 917 | "\n", 918 | " A set of properties specified within a configuration classification.\n", 919 | "\n", 920 | " \n", 921 | " \n", 922 | "\n", 923 | " - *(string) --* \n", 924 | " \n", 925 | "\n", 926 | " - *(string) --* \n", 927 | " \n", 928 | " \n", 929 | " \n", 930 | " \n", 931 | " \n", 932 | "\n", 933 | " - **SecurityConfiguration** *(string) --* \n", 934 | "\n", 935 | " The name of the security configuration applied to the cluster.\n", 936 | "\n", 937 | " \n", 938 | " \n", 939 | "\n", 940 | " - **AutoScalingRole** *(string) --* \n", 941 | "\n", 942 | " An IAM role for automatic scaling policies. The default role is ``EMR_AutoScaling_DefaultRole`` . The IAM role provides permissions that the automatic scaling feature requires to launch and terminate EC2 instances in an instance group.\n", 943 | "\n", 944 | " \n", 945 | " \n", 946 | "\n", 947 | " - **ScaleDownBehavior** *(string) --* \n", 948 | "\n", 949 | " The way that individual Amazon EC2 instances terminate when an automatic scale-in activity occurs or an instance group is resized. ``TERMINATE_AT_INSTANCE_HOUR`` indicates that Amazon EMR terminates nodes at the instance-hour boundary, regardless of when the request to terminate the instance was submitted. This option is only available with Amazon EMR 5.1.0 and later and is the default for clusters created using that version. ``TERMINATE_AT_TASK_COMPLETION`` indicates that Amazon EMR adds nodes to a deny list and drains tasks from nodes before terminating the Amazon EC2 instances, regardless of the instance-hour boundary. With either behavior, Amazon EMR removes the least active nodes first and blocks instance termination if it could lead to HDFS corruption. ``TERMINATE_AT_TASK_COMPLETION`` is available only in Amazon EMR version 4.1.0 and later, and is the default for versions of Amazon EMR earlier than 5.1.0.\n", 950 | "\n", 951 | " \n", 952 | " \n", 953 | "\n", 954 | " - **CustomAmiId** *(string) --* \n", 955 | "\n", 956 | " Available only in Amazon EMR version 5.7.0 and later. The ID of a custom Amazon EBS-backed Linux AMI if the cluster uses a custom AMI.\n", 957 | "\n", 958 | " \n", 959 | " \n", 960 | "\n", 961 | " - **EbsRootVolumeSize** *(integer) --* \n", 962 | "\n", 963 | " The size, in GiB, of the Amazon EBS root device volume of the Linux AMI that is used for each EC2 instance. Available in Amazon EMR version 4.x and later.\n", 964 | "\n", 965 | " \n", 966 | " \n", 967 | "\n", 968 | " - **RepoUpgradeOnBoot** *(string) --* \n", 969 | "\n", 970 | " Applies only when ``CustomAmiID`` is used. Specifies the type of updates that are applied from the Amazon Linux AMI package repositories when an instance boots using the AMI.\n", 971 | "\n", 972 | " \n", 973 | " \n", 974 | "\n", 975 | " - **KerberosAttributes** *(dict) --* \n", 976 | "\n", 977 | " Attributes for Kerberos configuration when Kerberos authentication is enabled using a security configuration. 
For more information see `Use Kerberos Authentication `__ in the *Amazon EMR Management Guide* .\n", 978 | "\n", 979 | " \n", 980 | " \n", 981 | "\n", 982 | " - **Realm** *(string) --* \n", 983 | "\n", 984 | " The name of the Kerberos realm to which all nodes in a cluster belong. For example, ``EC2.INTERNAL`` . \n", 985 | "\n", 986 | " \n", 987 | " \n", 988 | "\n", 989 | " - **KdcAdminPassword** *(string) --* \n", 990 | "\n", 991 | " The password used within the cluster for the kadmin service on the cluster-dedicated KDC, which maintains Kerberos principals, password policies, and keytabs for the cluster.\n", 992 | "\n", 993 | " \n", 994 | " \n", 995 | "\n", 996 | " - **CrossRealmTrustPrincipalPassword** *(string) --* \n", 997 | "\n", 998 | " Required only when establishing a cross-realm trust with a KDC in a different realm. The cross-realm principal password, which must be identical across realms.\n", 999 | "\n", 1000 | " \n", 1001 | " \n", 1002 | "\n", 1003 | " - **ADDomainJoinUser** *(string) --* \n", 1004 | "\n", 1005 | " Required only when establishing a cross-realm trust with an Active Directory domain. A user with sufficient privileges to join resources to the domain.\n", 1006 | "\n", 1007 | " \n", 1008 | " \n", 1009 | "\n", 1010 | " - **ADDomainJoinPassword** *(string) --* \n", 1011 | "\n", 1012 | " The Active Directory password for ``ADDomainJoinUser`` .\n", 1013 | "\n", 1014 | " \n", 1015 | " \n", 1016 | " \n", 1017 | "\n", 1018 | " - **ClusterArn** *(string) --* \n", 1019 | "\n", 1020 | " The Amazon Resource Name of the cluster.\n", 1021 | "\n", 1022 | " \n", 1023 | " \n", 1024 | "\n", 1025 | " - **OutpostArn** *(string) --* \n", 1026 | "\n", 1027 | " The Amazon Resource Name (ARN) of the Outpost where the cluster is launched. \n", 1028 | "\n", 1029 | " \n", 1030 | " \n", 1031 | "\n", 1032 | " - **StepConcurrencyLevel** *(integer) --* \n", 1033 | "\n", 1034 | " Specifies the number of steps that can be executed concurrently.\n", 1035 | "\n", 1036 | " \n", 1037 | " \n", 1038 | "\n", 1039 | " - **PlacementGroups** *(list) --* \n", 1040 | "\n", 1041 | " Placement group configured for an Amazon EMR cluster.\n", 1042 | "\n", 1043 | " \n", 1044 | " \n", 1045 | "\n", 1046 | " - *(dict) --* \n", 1047 | "\n", 1048 | " Placement group configuration for an Amazon EMR cluster. The configuration specifies the placement strategy that can be applied to instance roles during cluster creation.\n", 1049 | "\n", 1050 | " \n", 1051 | "\n", 1052 | " To use this configuration, consider attaching managed policy AmazonElasticMapReducePlacementGroupPolicy to the EMR role.\n", 1053 | "\n", 1054 | " \n", 1055 | " \n", 1056 | "\n", 1057 | " - **InstanceRole** *(string) --* \n", 1058 | "\n", 1059 | " Role of the instance in the cluster.\n", 1060 | "\n", 1061 | " \n", 1062 | "\n", 1063 | " Starting with Amazon EMR version 5.23.0, the only supported instance role is ``MASTER`` .\n", 1064 | "\n", 1065 | " \n", 1066 | " \n", 1067 | "\n", 1068 | " - **PlacementStrategy** *(string) --* \n", 1069 | "\n", 1070 | " EC2 Placement Group strategy associated with instance role.\n", 1071 | "\n", 1072 | " \n", 1073 | "\n", 1074 | " Starting with Amazon EMR version 5.23.0, the only supported placement strategy is ``SPREAD`` for the ``MASTER`` instance role.\n", 1075 | "\n", 1076 | " \n", 1077 | " \n", 1078 | " \n", 1079 | " \n", 1080 | "\n", 1081 | " - **OSReleaseLabel** *(string) --* \n", 1082 | "\n", 1083 | " The Amazon Linux release specified in a cluster launch RunJobFlow request. 
If no Amazon Linux release was specified, the default Amazon Linux release is shown in the response.\n", 1084 | "\n", 1085 | " \n", 1086 | " \n", 1087 | "\u001b[0;31mFile:\u001b[0m ~/Projects/Internal/bootcamp/itversity-material/mastering-emr/me-venv/lib/python3.9/site-packages/botocore/client.py\n", 1088 | "\u001b[0;31mType:\u001b[0m method\n" 1089 | ] 1090 | } 1091 | ], 1092 | "source": [ 1093 | "emr_client.describe_cluster?" 1094 | ] 1095 | }, 1096 | { 1097 | "cell_type": "code", 1098 | "execution_count": 11, 1099 | "metadata": {}, 1100 | "outputs": [], 1101 | "source": [ 1102 | "cluster_details = emr_client.describe_cluster(ClusterId='j-3VMHTW6P6KKL6')" 1103 | ] 1104 | }, 1105 | { 1106 | "cell_type": "code", 1107 | "execution_count": 12, 1108 | "metadata": {}, 1109 | "outputs": [ 1110 | { 1111 | "data": { 1112 | "text/plain": [ 1113 | "{'Cluster': {'Id': 'j-3VMHTW6P6KKL6',\n", 1114 | " 'Name': 'AI Dev Cluster',\n", 1115 | " 'Status': {'State': 'WAITING',\n", 1116 | " 'StateChangeReason': {'Message': 'Cluster ready after last step completed.'},\n", 1117 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 7, 6, 24, 614000, tzinfo=tzlocal()),\n", 1118 | " 'ReadyDateTime': datetime.datetime(2022, 6, 26, 7, 12, 43, 368000, tzinfo=tzlocal())}},\n", 1119 | " 'Ec2InstanceAttributes': {'Ec2KeyName': 'aiawsmini',\n", 1120 | " 'Ec2SubnetId': 'subnet-0452ade8510f43c16',\n", 1121 | " 'RequestedEc2SubnetIds': ['subnet-0452ade8510f43c16'],\n", 1122 | " 'Ec2AvailabilityZone': 'us-east-1c',\n", 1123 | " 'RequestedEc2AvailabilityZones': [],\n", 1124 | " 'IamInstanceProfile': 'EMR_EC2_DefaultRole',\n", 1125 | " 'EmrManagedMasterSecurityGroup': 'sg-0ce008d667fae0c7f',\n", 1126 | " 'EmrManagedSlaveSecurityGroup': 'sg-0846a8d248c6654f5',\n", 1127 | " 'AdditionalMasterSecurityGroups': [],\n", 1128 | " 'AdditionalSlaveSecurityGroups': []},\n", 1129 | " 'InstanceCollectionType': 'INSTANCE_GROUP',\n", 1130 | " 'LogUri': 's3n://aws-logs-269066542444-us-east-1/elasticmapreduce/',\n", 1131 | " 'ReleaseLabel': 'emr-6.6.0',\n", 1132 | " 'AutoTerminate': False,\n", 1133 | " 'TerminationProtected': True,\n", 1134 | " 'VisibleToAllUsers': True,\n", 1135 | " 'Applications': [{'Name': 'Hadoop', 'Version': '3.2.1'},\n", 1136 | " {'Name': 'Hive', 'Version': '3.1.2'},\n", 1137 | " {'Name': 'JupyterEnterpriseGateway', 'Version': '2.1.0'},\n", 1138 | " {'Name': 'Spark', 'Version': '3.2.0'}],\n", 1139 | " 'Tags': [],\n", 1140 | " 'ServiceRole': 'EMR_DefaultRole',\n", 1141 | " 'NormalizedInstanceHours': 192,\n", 1142 | " 'MasterPublicDnsName': 'ec2-100-25-68-152.compute-1.amazonaws.com',\n", 1143 | " 'Configurations': [{'Classification': 'hive-site',\n", 1144 | " 'Properties': {'hive.metastore.client.factory.class': 'com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory'}},\n", 1145 | " {'Classification': 'spark-hive-site',\n", 1146 | " 'Properties': {'hive.metastore.client.factory.class': 'com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory'}}],\n", 1147 | " 'AutoScalingRole': 'EMR_AutoScaling_DefaultRole',\n", 1148 | " 'ScaleDownBehavior': 'TERMINATE_AT_TASK_COMPLETION',\n", 1149 | " 'EbsRootVolumeSize': 32,\n", 1150 | " 'KerberosAttributes': {},\n", 1151 | " 'ClusterArn': 'arn:aws:elasticmapreduce:us-east-1:269066542444:cluster/j-3VMHTW6P6KKL6',\n", 1152 | " 'StepConcurrencyLevel': 1,\n", 1153 | " 'PlacementGroups': []},\n", 1154 | " 'ResponseMetadata': {'RequestId': '1a28b820-7872-471b-a94c-f335fd258086',\n", 1155 | " 'HTTPStatusCode': 200,\n", 1156 | " 'HTTPHeaders': 
{'x-amzn-requestid': '1a28b820-7872-471b-a94c-f335fd258086',\n", 1157 | " 'content-type': 'application/x-amz-json-1.1',\n", 1158 | " 'content-length': '1863',\n", 1159 | " 'date': 'Sun, 26 Jun 2022 12:13:47 GMT'},\n", 1160 | " 'RetryAttempts': 0}}" 1161 | ] 1162 | }, 1163 | "execution_count": 12, 1164 | "metadata": {}, 1165 | "output_type": "execute_result" 1166 | } 1167 | ], 1168 | "source": [ 1169 | "cluster_details" 1170 | ] 1171 | }, 1172 | { 1173 | "cell_type": "code", 1174 | "execution_count": 13, 1175 | "metadata": {}, 1176 | "outputs": [], 1177 | "source": [ 1178 | "master_public_dns = cluster_details['Cluster']['MasterPublicDnsName']" 1179 | ] 1180 | }, 1181 | { 1182 | "cell_type": "code", 1183 | "execution_count": 14, 1184 | "metadata": {}, 1185 | "outputs": [ 1186 | { 1187 | "data": { 1188 | "text/plain": [ 1189 | "'ec2-100-25-68-152.compute-1.amazonaws.com'" 1190 | ] 1191 | }, 1192 | "execution_count": 14, 1193 | "metadata": {}, 1194 | "output_type": "execute_result" 1195 | } 1196 | ], 1197 | "source": [ 1198 | "master_public_dns" 1199 | ] 1200 | }, 1201 | { 1202 | "cell_type": "code", 1203 | "execution_count": 15, 1204 | "metadata": {}, 1205 | "outputs": [], 1206 | "source": [ 1207 | "instances = emr_client.list_instances(ClusterId='j-3VMHTW6P6KKL6')['Instances']" 1208 | ] 1209 | }, 1210 | { 1211 | "cell_type": "code", 1212 | "execution_count": 16, 1213 | "metadata": {}, 1214 | "outputs": [ 1215 | { 1216 | "data": { 1217 | "text/plain": [ 1218 | "[{'Id': 'ci-IB9E8KBNP6ST',\n", 1219 | " 'Ec2InstanceId': 'i-075733388ebad4fdd',\n", 1220 | " 'PublicDnsName': 'ec2-100-25-68-152.compute-1.amazonaws.com',\n", 1221 | " 'PublicIpAddress': '100.25.68.152',\n", 1222 | " 'PrivateDnsName': 'ip-172-31-39-44.ec2.internal',\n", 1223 | " 'PrivateIpAddress': '172.31.39.44',\n", 1224 | " 'Status': {'State': 'RUNNING',\n", 1225 | " 'StateChangeReason': {},\n", 1226 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 7, 6, 46, 475000, tzinfo=tzlocal()),\n", 1227 | " 'ReadyDateTime': datetime.datetime(2022, 6, 26, 7, 12, 3, 530000, tzinfo=tzlocal())}},\n", 1228 | " 'InstanceGroupId': 'ig-1OA03AF6294SS',\n", 1229 | " 'Market': 'ON_DEMAND',\n", 1230 | " 'InstanceType': 'm5.xlarge',\n", 1231 | " 'EbsVolumes': [{'Device': '/dev/sdb', 'VolumeId': 'vol-0c31f72a8fbb0f518'},\n", 1232 | " {'Device': '/dev/sdc', 'VolumeId': 'vol-0eaf6a6900f6e2d11'}]},\n", 1233 | " {'Id': 'ci-1H3YBJ5FVSGS',\n", 1234 | " 'Ec2InstanceId': 'i-04955d7070ed1ff4d',\n", 1235 | " 'PublicDnsName': 'ec2-52-200-115-138.compute-1.amazonaws.com',\n", 1236 | " 'PublicIpAddress': '52.200.115.138',\n", 1237 | " 'PrivateDnsName': 'ip-172-31-33-98.ec2.internal',\n", 1238 | " 'PrivateIpAddress': '172.31.33.98',\n", 1239 | " 'Status': {'State': 'RUNNING',\n", 1240 | " 'StateChangeReason': {},\n", 1241 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 7, 6, 46, 475000, tzinfo=tzlocal()),\n", 1242 | " 'ReadyDateTime': datetime.datetime(2022, 6, 26, 7, 12, 25, 135000, tzinfo=tzlocal())}},\n", 1243 | " 'InstanceGroupId': 'ig-3O0K0T7OU9RZZ',\n", 1244 | " 'Market': 'ON_DEMAND',\n", 1245 | " 'InstanceType': 'm5.xlarge',\n", 1246 | " 'EbsVolumes': [{'Device': '/dev/sdb', 'VolumeId': 'vol-0b03b3d2f4696a188'},\n", 1247 | " {'Device': '/dev/sdc', 'VolumeId': 'vol-085ce65f68e6428a8'}]},\n", 1248 | " {'Id': 'ci-24PK1BO2AKSKO',\n", 1249 | " 'Ec2InstanceId': 'i-0e47684f45c8ebb4f',\n", 1250 | " 'PublicDnsName': 'ec2-100-26-197-2.compute-1.amazonaws.com',\n", 1251 | " 'PublicIpAddress': '100.26.197.2',\n", 1252 | " 
'PrivateDnsName': 'ip-172-31-39-75.ec2.internal',\n", 1253 | " 'PrivateIpAddress': '172.31.39.75',\n", 1254 | " 'Status': {'State': 'TERMINATED',\n", 1255 | " 'StateChangeReason': {'Message': 'Instance group was resized.'},\n", 1256 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 15, 11, 36, 244000, tzinfo=tzlocal()),\n", 1257 | " 'ReadyDateTime': datetime.datetime(2022, 6, 26, 15, 13, 40, 493000, tzinfo=tzlocal()),\n", 1258 | " 'EndDateTime': datetime.datetime(2022, 6, 26, 15, 24, 0, 532000, tzinfo=tzlocal())}},\n", 1259 | " 'InstanceGroupId': 'ig-3BWRW68MGVRK4',\n", 1260 | " 'Market': 'SPOT',\n", 1261 | " 'InstanceType': 'm5.xlarge',\n", 1262 | " 'EbsVolumes': [{'Device': '/dev/sdb', 'VolumeId': 'vol-0bb757e9b905e17ce'},\n", 1263 | " {'Device': '/dev/sdc', 'VolumeId': 'vol-0328150162f527dff'}]},\n", 1264 | " {'Id': 'ci-3HNCO37ZV4Y1Q',\n", 1265 | " 'Ec2InstanceId': 'i-03fc2925133b26bb4',\n", 1266 | " 'PublicDnsName': 'ec2-3-81-171-183.compute-1.amazonaws.com',\n", 1267 | " 'PublicIpAddress': '3.81.171.183',\n", 1268 | " 'PrivateDnsName': 'ip-172-31-42-134.ec2.internal',\n", 1269 | " 'PrivateIpAddress': '172.31.42.134',\n", 1270 | " 'Status': {'State': 'TERMINATED',\n", 1271 | " 'StateChangeReason': {'Message': 'Instance group was resized.'},\n", 1272 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 16, 11, 57, 128000, tzinfo=tzlocal()),\n", 1273 | " 'ReadyDateTime': datetime.datetime(2022, 6, 26, 16, 14, 33, 545000, tzinfo=tzlocal()),\n", 1274 | " 'EndDateTime': datetime.datetime(2022, 6, 26, 16, 24, 24, 31000, tzinfo=tzlocal())}},\n", 1275 | " 'InstanceGroupId': 'ig-3BWRW68MGVRK4',\n", 1276 | " 'Market': 'SPOT',\n", 1277 | " 'InstanceType': 'm5.xlarge',\n", 1278 | " 'EbsVolumes': [{'Device': '/dev/sdb', 'VolumeId': 'vol-053d612d31db47f55'},\n", 1279 | " {'Device': '/dev/sdc', 'VolumeId': 'vol-026669fbdd18184d0'}]},\n", 1280 | " {'Id': 'ci-1J4E1DMUVJUFZ',\n", 1281 | " 'Ec2InstanceId': 'i-015a210a4ead1c033',\n", 1282 | " 'PublicDnsName': 'ec2-3-90-254-132.compute-1.amazonaws.com',\n", 1283 | " 'PublicIpAddress': '3.90.254.132',\n", 1284 | " 'PrivateDnsName': 'ip-172-31-47-92.ec2.internal',\n", 1285 | " 'PrivateIpAddress': '172.31.47.92',\n", 1286 | " 'Status': {'State': 'TERMINATED',\n", 1287 | " 'StateChangeReason': {'Message': 'Instance group was resized.'},\n", 1288 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 16, 11, 57, 128000, tzinfo=tzlocal()),\n", 1289 | " 'ReadyDateTime': datetime.datetime(2022, 6, 26, 16, 14, 33, 544000, tzinfo=tzlocal()),\n", 1290 | " 'EndDateTime': datetime.datetime(2022, 6, 26, 16, 24, 24, 31000, tzinfo=tzlocal())}},\n", 1291 | " 'InstanceGroupId': 'ig-3BWRW68MGVRK4',\n", 1292 | " 'Market': 'SPOT',\n", 1293 | " 'InstanceType': 'm5.xlarge',\n", 1294 | " 'EbsVolumes': [{'Device': '/dev/sdb', 'VolumeId': 'vol-0d1b9fe3b0e2ef5dd'},\n", 1295 | " {'Device': '/dev/sdc', 'VolumeId': 'vol-00657bf6ac23c27ae'}]},\n", 1296 | " {'Id': 'ci-UN03QNZ73WKU',\n", 1297 | " 'Ec2InstanceId': 'i-0a7b8f64bf58306e1',\n", 1298 | " 'PublicDnsName': 'ec2-54-90-176-170.compute-1.amazonaws.com',\n", 1299 | " 'PublicIpAddress': '54.90.176.170',\n", 1300 | " 'PrivateDnsName': 'ip-172-31-36-115.ec2.internal',\n", 1301 | " 'PrivateIpAddress': '172.31.36.115',\n", 1302 | " 'Status': {'State': 'TERMINATED',\n", 1303 | " 'StateChangeReason': {'Message': 'Instance group was resized.'},\n", 1304 | " 'Timeline': {'CreationDateTime': datetime.datetime(2022, 6, 26, 16, 11, 57, 128000, tzinfo=tzlocal()),\n", 1305 | " 'ReadyDateTime': 
datetime.datetime(2022, 6, 26, 16, 14, 33, 546000, tzinfo=tzlocal()),\n", 1306 | " 'EndDateTime': datetime.datetime(2022, 6, 26, 16, 24, 24, 31000, tzinfo=tzlocal())}},\n", 1307 | " 'InstanceGroupId': 'ig-3BWRW68MGVRK4',\n", 1308 | " 'Market': 'SPOT',\n", 1309 | " 'InstanceType': 'm5.xlarge',\n", 1310 | " 'EbsVolumes': [{'Device': '/dev/sdb', 'VolumeId': 'vol-057f8a997eb8d606e'},\n", 1311 | " {'Device': '/dev/sdc', 'VolumeId': 'vol-0230696125fd48ac2'}]}]" 1312 | ] 1313 | }, 1314 | "execution_count": 16, 1315 | "metadata": {}, 1316 | "output_type": "execute_result" 1317 | } 1318 | ], 1319 | "source": [ 1320 | "instances" 1321 | ] 1322 | }, 1323 | { 1324 | "cell_type": "code", 1325 | "execution_count": 17, 1326 | "metadata": {}, 1327 | "outputs": [ 1328 | { 1329 | "name": "stdout", 1330 | "output_type": "stream", 1331 | "text": [ 1332 | "i-075733388ebad4fdd\n" 1333 | ] 1334 | } 1335 | ], 1336 | "source": [ 1337 | "for instance in instances:\n", 1338 | " if instance['PublicDnsName'] == master_public_dns:\n", 1339 | " print(instance['Ec2InstanceId'])" 1340 | ] 1341 | }, 1342 | { 1343 | "cell_type": "code", 1344 | "execution_count": 18, 1345 | "metadata": {}, 1346 | "outputs": [], 1347 | "source": [ 1348 | "ec2_client = boto3.client('ec2')" 1349 | ] 1350 | }, 1351 | { 1352 | "cell_type": "code", 1353 | "execution_count": 19, 1354 | "metadata": {}, 1355 | "outputs": [ 1356 | { 1357 | "name": "stdout", 1358 | "output_type": "stream", 1359 | "text": [ 1360 | "\u001b[0;31mSignature:\u001b[0m \u001b[0mec2_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdescribe_addresses\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 1361 | "\u001b[0;31mDocstring:\u001b[0m\n", 1362 | "Describes the specified Elastic IP addresses or all of your Elastic IP addresses.\n", 1363 | "\n", 1364 | " \n", 1365 | "\n", 1366 | "An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see `Elastic IP Addresses `__ in the *Amazon Elastic Compute Cloud User Guide* .\n", 1367 | "\n", 1368 | "\n", 1369 | "\n", 1370 | "See also: `AWS API Documentation `_\n", 1371 | "\n", 1372 | "\n", 1373 | "**Request Syntax** \n", 1374 | "::\n", 1375 | "\n", 1376 | " response = client.describe_addresses(\n", 1377 | " Filters=[\n", 1378 | " {\n", 1379 | " 'Name': 'string',\n", 1380 | " 'Values': [\n", 1381 | " 'string',\n", 1382 | " ]\n", 1383 | " },\n", 1384 | " ],\n", 1385 | " PublicIps=[\n", 1386 | " 'string',\n", 1387 | " ],\n", 1388 | " AllocationIds=[\n", 1389 | " 'string',\n", 1390 | " ],\n", 1391 | " DryRun=True|False\n", 1392 | " )\n", 1393 | ":type Filters: list\n", 1394 | ":param Filters: \n", 1395 | "\n", 1396 | " One or more filters. Filter names and values are case-sensitive.\n", 1397 | "\n", 1398 | " \n", 1399 | "\n", 1400 | " \n", 1401 | " * ``allocation-id`` - [EC2-VPC] The allocation ID for the address. \n", 1402 | " \n", 1403 | " * ``association-id`` - [EC2-VPC] The association ID for the address. \n", 1404 | " \n", 1405 | " * ``domain`` - Indicates whether the address is for use in EC2-Classic (``standard`` ) or in a VPC (``vpc`` ). \n", 1406 | " \n", 1407 | " * ``instance-id`` - The ID of the instance the address is associated with, if any. 
\n", 1408 | " \n", 1409 | " * ``network-border-group`` - A unique set of Availability Zones, Local Zones, or Wavelength Zones from where Amazon Web Services advertises IP addresses. \n", 1410 | " \n", 1411 | " * ``network-interface-id`` - [EC2-VPC] The ID of the network interface that the address is associated with, if any. \n", 1412 | " \n", 1413 | " * ``network-interface-owner-id`` - The Amazon Web Services account ID of the owner. \n", 1414 | " \n", 1415 | " * ``private-ip-address`` - [EC2-VPC] The private IP address associated with the Elastic IP address. \n", 1416 | " \n", 1417 | " * ``public-ip`` - The Elastic IP address, or the carrier IP address. \n", 1418 | " \n", 1419 | " * ``tag`` : - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key ``Owner`` and the value ``TeamA`` , specify ``tag:Owner`` for the filter name and ``TeamA`` for the filter value. \n", 1420 | " \n", 1421 | " * ``tag-key`` - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value. \n", 1422 | " \n", 1423 | "\n", 1424 | " \n", 1425 | "\n", 1426 | "\n", 1427 | " - *(dict) --* \n", 1428 | "\n", 1429 | " A filter name and value pair that is used to return a more specific list of results from a describe operation. Filters can be used to match a set of resources by specific criteria, such as tags, attributes, or IDs.\n", 1430 | "\n", 1431 | " \n", 1432 | "\n", 1433 | " If you specify multiple filters, the filters are joined with an ``AND`` , and the request returns only results that match all of the specified filters.\n", 1434 | "\n", 1435 | " \n", 1436 | "\n", 1437 | " \n", 1438 | " - **Name** *(string) --* \n", 1439 | "\n", 1440 | " The name of the filter. Filter names are case-sensitive.\n", 1441 | "\n", 1442 | " \n", 1443 | "\n", 1444 | " \n", 1445 | " - **Values** *(list) --* \n", 1446 | "\n", 1447 | " The filter values. Filter values are case-sensitive. If you specify multiple values for a filter, the values are joined with an ``OR`` , and the request returns all results that match any of the specified values.\n", 1448 | "\n", 1449 | " \n", 1450 | "\n", 1451 | " \n", 1452 | " - *(string) --* \n", 1453 | "\n", 1454 | " \n", 1455 | " \n", 1456 | " \n", 1457 | "\n", 1458 | ":type PublicIps: list\n", 1459 | ":param PublicIps: \n", 1460 | "\n", 1461 | " One or more Elastic IP addresses.\n", 1462 | "\n", 1463 | " \n", 1464 | "\n", 1465 | " Default: Describes all your Elastic IP addresses.\n", 1466 | "\n", 1467 | " \n", 1468 | "\n", 1469 | "\n", 1470 | " - *(string) --* \n", 1471 | "\n", 1472 | " \n", 1473 | "\n", 1474 | ":type AllocationIds: list\n", 1475 | ":param AllocationIds: \n", 1476 | "\n", 1477 | " [EC2-VPC] Information about the allocation IDs.\n", 1478 | "\n", 1479 | " \n", 1480 | "\n", 1481 | "\n", 1482 | " - *(string) --* \n", 1483 | "\n", 1484 | " \n", 1485 | "\n", 1486 | ":type DryRun: boolean\n", 1487 | ":param DryRun: \n", 1488 | "\n", 1489 | " Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is ``DryRunOperation`` . 
Otherwise, it is ``UnauthorizedOperation`` .\n", 1490 | "\n", 1491 | " \n", 1492 | "\n", 1493 | "\n", 1494 | "\n", 1495 | ":rtype: dict\n", 1496 | ":returns: \n", 1497 | " \n", 1498 | " **Response Syntax** \n", 1499 | "\n", 1500 | " \n", 1501 | " ::\n", 1502 | "\n", 1503 | " {\n", 1504 | " 'Addresses': [\n", 1505 | " {\n", 1506 | " 'InstanceId': 'string',\n", 1507 | " 'PublicIp': 'string',\n", 1508 | " 'AllocationId': 'string',\n", 1509 | " 'AssociationId': 'string',\n", 1510 | " 'Domain': 'vpc'|'standard',\n", 1511 | " 'NetworkInterfaceId': 'string',\n", 1512 | " 'NetworkInterfaceOwnerId': 'string',\n", 1513 | " 'PrivateIpAddress': 'string',\n", 1514 | " 'Tags': [\n", 1515 | " {\n", 1516 | " 'Key': 'string',\n", 1517 | " 'Value': 'string'\n", 1518 | " },\n", 1519 | " ],\n", 1520 | " 'PublicIpv4Pool': 'string',\n", 1521 | " 'NetworkBorderGroup': 'string',\n", 1522 | " 'CustomerOwnedIp': 'string',\n", 1523 | " 'CustomerOwnedIpv4Pool': 'string',\n", 1524 | " 'CarrierIp': 'string'\n", 1525 | " },\n", 1526 | " ]\n", 1527 | " }\n", 1528 | " **Response Structure** \n", 1529 | "\n", 1530 | " \n", 1531 | "\n", 1532 | " - *(dict) --* \n", 1533 | " \n", 1534 | "\n", 1535 | " - **Addresses** *(list) --* \n", 1536 | "\n", 1537 | " Information about the Elastic IP addresses.\n", 1538 | "\n", 1539 | " \n", 1540 | " \n", 1541 | "\n", 1542 | " - *(dict) --* \n", 1543 | "\n", 1544 | " Describes an Elastic IP address, or a carrier IP address.\n", 1545 | "\n", 1546 | " \n", 1547 | " \n", 1548 | "\n", 1549 | " - **InstanceId** *(string) --* \n", 1550 | "\n", 1551 | " The ID of the instance that the address is associated with (if any).\n", 1552 | "\n", 1553 | " \n", 1554 | " \n", 1555 | "\n", 1556 | " - **PublicIp** *(string) --* \n", 1557 | "\n", 1558 | " The Elastic IP address.\n", 1559 | "\n", 1560 | " \n", 1561 | " \n", 1562 | "\n", 1563 | " - **AllocationId** *(string) --* \n", 1564 | "\n", 1565 | " The ID representing the allocation of the address for use with EC2-VPC.\n", 1566 | "\n", 1567 | " \n", 1568 | " \n", 1569 | "\n", 1570 | " - **AssociationId** *(string) --* \n", 1571 | "\n", 1572 | " The ID representing the association of the address with an instance in a VPC.\n", 1573 | "\n", 1574 | " \n", 1575 | " \n", 1576 | "\n", 1577 | " - **Domain** *(string) --* \n", 1578 | "\n", 1579 | " Indicates whether this Elastic IP address is for use with instances in EC2-Classic (``standard`` ) or instances in a VPC (``vpc`` ).\n", 1580 | "\n", 1581 | " \n", 1582 | " \n", 1583 | "\n", 1584 | " - **NetworkInterfaceId** *(string) --* \n", 1585 | "\n", 1586 | " The ID of the network interface.\n", 1587 | "\n", 1588 | " \n", 1589 | " \n", 1590 | "\n", 1591 | " - **NetworkInterfaceOwnerId** *(string) --* \n", 1592 | "\n", 1593 | " The ID of the Amazon Web Services account that owns the network interface.\n", 1594 | "\n", 1595 | " \n", 1596 | " \n", 1597 | "\n", 1598 | " - **PrivateIpAddress** *(string) --* \n", 1599 | "\n", 1600 | " The private IP address associated with the Elastic IP address.\n", 1601 | "\n", 1602 | " \n", 1603 | " \n", 1604 | "\n", 1605 | " - **Tags** *(list) --* \n", 1606 | "\n", 1607 | " Any tags assigned to the Elastic IP address.\n", 1608 | "\n", 1609 | " \n", 1610 | " \n", 1611 | "\n", 1612 | " - *(dict) --* \n", 1613 | "\n", 1614 | " Describes a tag.\n", 1615 | "\n", 1616 | " \n", 1617 | " \n", 1618 | "\n", 1619 | " - **Key** *(string) --* \n", 1620 | "\n", 1621 | " The key of the tag.\n", 1622 | "\n", 1623 | " \n", 1624 | "\n", 1625 | " Constraints: Tag keys are case-sensitive and accept a 
maximum of 127 Unicode characters. May not begin with ``aws:`` .\n", 1626 | "\n", 1627 | " \n", 1628 | " \n", 1629 | "\n", 1630 | " - **Value** *(string) --* \n", 1631 | "\n", 1632 | " The value of the tag.\n", 1633 | "\n", 1634 | " \n", 1635 | "\n", 1636 | " Constraints: Tag values are case-sensitive and accept a maximum of 256 Unicode characters.\n", 1637 | "\n", 1638 | " \n", 1639 | " \n", 1640 | " \n", 1641 | " \n", 1642 | "\n", 1643 | " - **PublicIpv4Pool** *(string) --* \n", 1644 | "\n", 1645 | " The ID of an address pool.\n", 1646 | "\n", 1647 | " \n", 1648 | " \n", 1649 | "\n", 1650 | " - **NetworkBorderGroup** *(string) --* \n", 1651 | "\n", 1652 | " The name of the unique set of Availability Zones, Local Zones, or Wavelength Zones from which Amazon Web Services advertises IP addresses.\n", 1653 | "\n", 1654 | " \n", 1655 | " \n", 1656 | "\n", 1657 | " - **CustomerOwnedIp** *(string) --* \n", 1658 | "\n", 1659 | " The customer-owned IP address.\n", 1660 | "\n", 1661 | " \n", 1662 | " \n", 1663 | "\n", 1664 | " - **CustomerOwnedIpv4Pool** *(string) --* \n", 1665 | "\n", 1666 | " The ID of the customer-owned address pool.\n", 1667 | "\n", 1668 | " \n", 1669 | " \n", 1670 | "\n", 1671 | " - **CarrierIp** *(string) --* \n", 1672 | "\n", 1673 | " The carrier IP address associated. This option is only available for network interfaces which reside in a subnet in a Wavelength Zone (for example an EC2 instance). \n", 1674 | "\n", 1675 | " \n", 1676 | " \n", 1677 | " \n", 1678 | "\u001b[0;31mFile:\u001b[0m ~/Projects/Internal/bootcamp/itversity-material/mastering-emr/me-venv/lib/python3.9/site-packages/botocore/client.py\n", 1679 | "\u001b[0;31mType:\u001b[0m method\n" 1680 | ] 1681 | } 1682 | ], 1683 | "source": [ 1684 | "ec2_client.describe_addresses?" 
1685 | ] 1686 | }, 1687 | { 1688 | "cell_type": "code", 1689 | "execution_count": 20, 1690 | "metadata": {}, 1691 | "outputs": [], 1692 | "source": [ 1693 | "elastic_ips = ec2_client.describe_addresses()['Addresses']" 1694 | ] 1695 | }, 1696 | { 1697 | "cell_type": "code", 1698 | "execution_count": 21, 1699 | "metadata": {}, 1700 | "outputs": [ 1701 | { 1702 | "data": { 1703 | "text/plain": [ 1704 | "[{'InstanceId': 'i-075733388ebad4fdd',\n", 1705 | " 'PublicIp': '100.25.68.152',\n", 1706 | " 'AllocationId': 'eipalloc-0f1de6fddfa65cf31',\n", 1707 | " 'AssociationId': 'eipassoc-03526cc1730a604af',\n", 1708 | " 'Domain': 'vpc',\n", 1709 | " 'NetworkInterfaceId': 'eni-04db38825a03e7896',\n", 1710 | " 'NetworkInterfaceOwnerId': '269066542444',\n", 1711 | " 'PrivateIpAddress': '172.31.39.44',\n", 1712 | " 'Tags': [{'Key': 'Name', 'Value': 'AI Dev EMR Cluster Master'}],\n", 1713 | " 'PublicIpv4Pool': 'amazon',\n", 1714 | " 'NetworkBorderGroup': 'us-east-1'},\n", 1715 | " {'PublicIp': '3.221.172.23',\n", 1716 | " 'AllocationId': 'eipalloc-0ab53ddaa8a6cc928',\n", 1717 | " 'AssociationId': 'eipassoc-03bdfb19c00fbc774',\n", 1718 | " 'Domain': 'vpc',\n", 1719 | " 'NetworkInterfaceId': 'eni-0eaf78c635403cfce',\n", 1720 | " 'NetworkInterfaceOwnerId': '269066542444',\n", 1721 | " 'PrivateIpAddress': '172.31.69.213',\n", 1722 | " 'Tags': [{'Key': 'Name', 'Value': 'CDP Sandbox - Spotk'}],\n", 1723 | " 'PublicIpv4Pool': 'amazon',\n", 1724 | " 'NetworkBorderGroup': 'us-east-1'},\n", 1725 | " {'PublicIp': '34.237.33.214',\n", 1726 | " 'AllocationId': 'eipalloc-09efcb0d9cf78cf75',\n", 1727 | " 'AssociationId': 'eipassoc-028061619e4bfabbf',\n", 1728 | " 'Domain': 'vpc',\n", 1729 | " 'NetworkInterfaceId': 'eni-028de67d3b8230130',\n", 1730 | " 'NetworkInterfaceOwnerId': '269066542444',\n", 1731 | " 'PrivateIpAddress': '10.146.0.64',\n", 1732 | " 'PublicIpv4Pool': 'amazon',\n", 1733 | " 'NetworkBorderGroup': 'us-east-1'},\n", 1734 | " {'PublicIp': '54.234.206.233',\n", 1735 | " 'AllocationId': 'eipalloc-0959d3e18070461d6',\n", 1736 | " 'Domain': 'vpc',\n", 1737 | " 'Tags': [{'Key': 'Name', 'Value': 'CDP Sandbox - Spot'}],\n", 1738 | " 'PublicIpv4Pool': 'amazon',\n", 1739 | " 'NetworkBorderGroup': 'us-east-1'}]" 1740 | ] 1741 | }, 1742 | "execution_count": 21, 1743 | "metadata": {}, 1744 | "output_type": "execute_result" 1745 | } 1746 | ], 1747 | "source": [ 1748 | "elastic_ips" 1749 | ] 1750 | }, 1751 | { 1752 | "cell_type": "code", 1753 | "execution_count": 22, 1754 | "metadata": {}, 1755 | "outputs": [ 1756 | { 1757 | "name": "stdout", 1758 | "output_type": "stream", 1759 | "text": [ 1760 | "{'InstanceId': 'i-075733388ebad4fdd', 'PublicIp': '100.25.68.152', 'AllocationId': 'eipalloc-0f1de6fddfa65cf31', 'AssociationId': 'eipassoc-03526cc1730a604af', 'Domain': 'vpc', 'NetworkInterfaceId': 'eni-04db38825a03e7896', 'NetworkInterfaceOwnerId': '269066542444', 'PrivateIpAddress': '172.31.39.44', 'Tags': [{'Key': 'Name', 'Value': 'AI Dev EMR Cluster Master'}], 'PublicIpv4Pool': 'amazon', 'NetworkBorderGroup': 'us-east-1'}\n" 1761 | ] 1762 | } 1763 | ], 1764 | "source": [ 1765 | "for elastic_ip in elastic_ips:\n", 1766 | " if elastic_ip.get('Tags'):\n", 1767 | " for tag in elastic_ip['Tags']:\n", 1768 | " if tag['Key'] == 'Name' and tag['Value'] == 'AI Dev EMR Cluster Master':\n", 1769 | " print(elastic_ip)" 1770 | ] 1771 | }, 1772 | { 1773 | "cell_type": "code", 1774 | "execution_count": 23, 1775 | "metadata": {}, 1776 | "outputs": [], 1777 | "source": [ 1778 | "def get_emr_master_instance_id(ClusterName):\n", 1779 | " 
instance_id = None\n", 1780 | " try:\n", 1781 | " emr_client = boto3.client('emr')\n", 1782 | " clusters = emr_client.list_clusters(\n", 1783 | " ClusterStates=['RUNNING', 'WAITING']\n", 1784 | " )['Clusters']\n", 1785 | " for cluster in clusters:\n", 1786 | " if cluster['Name'] == ClusterName:\n", 1787 | " cluster_id = cluster['Id']\n", 1788 | " cluster_details = emr_client.describe_cluster(ClusterId=cluster_id)\n", 1789 | " master_public_dns = cluster_details['Cluster']['MasterPublicDnsName']\n", 1790 | " instances = emr_client.list_instances(ClusterId=cluster_id)['Instances']\n", 1791 | " for instance in instances:\n", 1792 | " if instance['PublicDnsName'] == master_public_dns:\n", 1793 | " instance_id = instance['Ec2InstanceId']\n", 1794 | " except:\n", 1795 | " raise\n", 1796 | " return instance_id" 1797 | ] 1798 | }, 1799 | { 1800 | "cell_type": "code", 1801 | "execution_count": 24, 1802 | "metadata": {}, 1803 | "outputs": [], 1804 | "source": [ 1805 | "def get_emr_master_allocation_id(ElasticIpName):\n", 1806 | " elastic_ip_allocation_id = None\n", 1807 | " try:\n", 1808 | " ec2_client = boto3.client('ec2')\n", 1809 | " elastic_ips = ec2_client.describe_addresses()['Addresses']\n", 1810 | " for elastic_ip in elastic_ips:\n", 1811 | " if elastic_ip.get('Tags'):\n", 1812 | " for tag in elastic_ip['Tags']:\n", 1813 | " if tag['Key'] == 'Name' and tag['Value'] == ElasticIpName:\n", 1814 | " elastic_ip_allocation_id = elastic_ip['AllocationId']\n", 1815 | " except:\n", 1816 | " raise\n", 1817 | " return elastic_ip_allocation_id" 1818 | ] 1819 | }, 1820 | { 1821 | "cell_type": "code", 1822 | "execution_count": 25, 1823 | "metadata": {}, 1824 | "outputs": [], 1825 | "source": [ 1826 | "instance_id = get_emr_master_instance_id('AI Dev Cluster')\n", 1827 | "elastic_ip_allocation_id = get_emr_master_allocation_id('AI Dev EMR Cluster Master')" 1828 | ] 1829 | }, 1830 | { 1831 | "cell_type": "code", 1832 | "execution_count": 26, 1833 | "metadata": {}, 1834 | "outputs": [ 1835 | { 1836 | "data": { 1837 | "text/plain": [ 1838 | "'i-075733388ebad4fdd'" 1839 | ] 1840 | }, 1841 | "execution_count": 26, 1842 | "metadata": {}, 1843 | "output_type": "execute_result" 1844 | } 1845 | ], 1846 | "source": [ 1847 | "instance_id" 1848 | ] 1849 | }, 1850 | { 1851 | "cell_type": "code", 1852 | "execution_count": 27, 1853 | "metadata": {}, 1854 | "outputs": [ 1855 | { 1856 | "data": { 1857 | "text/plain": [ 1858 | "'eipalloc-0f1de6fddfa65cf31'" 1859 | ] 1860 | }, 1861 | "execution_count": 27, 1862 | "metadata": {}, 1863 | "output_type": "execute_result" 1864 | } 1865 | ], 1866 | "source": [ 1867 | "elastic_ip_allocation_id" 1868 | ] 1869 | }, 1870 | { 1871 | "cell_type": "code", 1872 | "execution_count": 28, 1873 | "metadata": {}, 1874 | "outputs": [], 1875 | "source": [ 1876 | "ec2_client = boto3.client('ec2')\n" 1877 | ] 1878 | }, 1879 | { 1880 | "cell_type": "code", 1881 | "execution_count": 29, 1882 | "metadata": {}, 1883 | "outputs": [ 1884 | { 1885 | "name": "stdout", 1886 | "output_type": "stream", 1887 | "text": [ 1888 | "\u001b[0;31mSignature:\u001b[0m \u001b[0mec2_client\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0massociate_address\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 1889 | "\u001b[0;31mDocstring:\u001b[0m\n", 1890 | "Associates an Elastic IP address, or carrier IP address (for instances that are in subnets in Wavelength 
Zones) with an instance or a network interface. Before you can use an Elastic IP address, you must allocate it to your account.\n", 1891 | "\n", 1892 | " \n", 1893 | "\n", 1894 | "An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see `Elastic IP Addresses `__ in the *Amazon Elastic Compute Cloud User Guide* .\n", 1895 | "\n", 1896 | " \n", 1897 | "\n", 1898 | "[EC2-Classic, VPC in an EC2-VPC-only account] If the Elastic IP address is already associated with a different instance, it is disassociated from that instance and associated with the specified instance. If you associate an Elastic IP address with an instance that has an existing Elastic IP address, the existing address is disassociated from the instance, but remains allocated to your account.\n", 1899 | "\n", 1900 | " \n", 1901 | "\n", 1902 | "[VPC in an EC2-Classic account] If you don't specify a private IP address, the Elastic IP address is associated with the primary IP address. If the Elastic IP address is already associated with a different instance or a network interface, you get an error unless you allow reassociation. You cannot associate an Elastic IP address with an instance or network interface that has an existing Elastic IP address.\n", 1903 | "\n", 1904 | " \n", 1905 | "\n", 1906 | "[Subnets in Wavelength Zones] You can associate an IP address from the telecommunication carrier to the instance or network interface. \n", 1907 | "\n", 1908 | " \n", 1909 | "\n", 1910 | "You cannot associate an Elastic IP address with an interface in a different network border group.\n", 1911 | "\n", 1912 | " \n", 1913 | "\n", 1914 | ".. warning::\n", 1915 | "\n", 1916 | " \n", 1917 | "\n", 1918 | " This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error, and you may be charged for each time the Elastic IP address is remapped to the same instance. For more information, see the *Elastic IP Addresses* section of `Amazon EC2 Pricing `__ .\n", 1919 | "\n", 1920 | " \n", 1921 | "\n", 1922 | "\n", 1923 | "\n", 1924 | "See also: `AWS API Documentation `_\n", 1925 | "\n", 1926 | "\n", 1927 | "**Request Syntax** \n", 1928 | "::\n", 1929 | "\n", 1930 | " response = client.associate_address(\n", 1931 | " AllocationId='string',\n", 1932 | " InstanceId='string',\n", 1933 | " PublicIp='string',\n", 1934 | " AllowReassociation=True|False,\n", 1935 | " DryRun=True|False,\n", 1936 | " NetworkInterfaceId='string',\n", 1937 | " PrivateIpAddress='string'\n", 1938 | " )\n", 1939 | ":type AllocationId: string\n", 1940 | ":param AllocationId: \n", 1941 | "\n", 1942 | " [EC2-VPC] The allocation ID. This is required for EC2-VPC.\n", 1943 | "\n", 1944 | " \n", 1945 | "\n", 1946 | "\n", 1947 | ":type InstanceId: string\n", 1948 | ":param InstanceId: \n", 1949 | "\n", 1950 | " The ID of the instance. The instance must have exactly one attached network interface. For EC2-VPC, you can specify either the instance ID or the network interface ID, but not both. For EC2-Classic, you must specify an instance ID and the instance must be in the running state.\n", 1951 | "\n", 1952 | " \n", 1953 | "\n", 1954 | "\n", 1955 | ":type PublicIp: string\n", 1956 | ":param PublicIp: \n", 1957 | "\n", 1958 | " [EC2-Classic] The Elastic IP address to associate with the instance. 
This is required for EC2-Classic.\n", 1959 | "\n", 1960 | " \n", 1961 | "\n", 1962 | "\n", 1963 | ":type AllowReassociation: boolean\n", 1964 | ":param AllowReassociation: \n", 1965 | "\n", 1966 | " [EC2-VPC] For a VPC in an EC2-Classic account, specify true to allow an Elastic IP address that is already associated with an instance or network interface to be reassociated with the specified instance or network interface. Otherwise, the operation fails. In a VPC in an EC2-VPC-only account, reassociation is automatic, therefore you can specify false to ensure the operation fails if the Elastic IP address is already associated with another resource.\n", 1967 | "\n", 1968 | " \n", 1969 | "\n", 1970 | "\n", 1971 | ":type DryRun: boolean\n", 1972 | ":param DryRun: \n", 1973 | "\n", 1974 | " Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is ``DryRunOperation`` . Otherwise, it is ``UnauthorizedOperation`` .\n", 1975 | "\n", 1976 | " \n", 1977 | "\n", 1978 | "\n", 1979 | ":type NetworkInterfaceId: string\n", 1980 | ":param NetworkInterfaceId: \n", 1981 | "\n", 1982 | " [EC2-VPC] The ID of the network interface. If the instance has more than one network interface, you must specify a network interface ID.\n", 1983 | "\n", 1984 | " \n", 1985 | "\n", 1986 | " For EC2-VPC, you can specify either the instance ID or the network interface ID, but not both. \n", 1987 | "\n", 1988 | " \n", 1989 | "\n", 1990 | "\n", 1991 | ":type PrivateIpAddress: string\n", 1992 | ":param PrivateIpAddress: \n", 1993 | "\n", 1994 | " [EC2-VPC] The primary or secondary private IP address to associate with the Elastic IP address. If no private IP address is specified, the Elastic IP address is associated with the primary private IP address.\n", 1995 | "\n", 1996 | " \n", 1997 | "\n", 1998 | "\n", 1999 | "\n", 2000 | ":rtype: dict\n", 2001 | ":returns: \n", 2002 | " \n", 2003 | " **Response Syntax** \n", 2004 | "\n", 2005 | " \n", 2006 | " ::\n", 2007 | "\n", 2008 | " {\n", 2009 | " 'AssociationId': 'string'\n", 2010 | " }\n", 2011 | " **Response Structure** \n", 2012 | "\n", 2013 | " \n", 2014 | "\n", 2015 | " - *(dict) --* \n", 2016 | " \n", 2017 | "\n", 2018 | " - **AssociationId** *(string) --* \n", 2019 | "\n", 2020 | " [EC2-VPC] The ID that represents the association of the Elastic IP address with an instance.\n", 2021 | "\n", 2022 | " \n", 2023 | "\u001b[0;31mFile:\u001b[0m ~/Projects/Internal/bootcamp/itversity-material/mastering-emr/me-venv/lib/python3.9/site-packages/botocore/client.py\n", 2024 | "\u001b[0;31mType:\u001b[0m method\n" 2025 | ] 2026 | } 2027 | ], 2028 | "source": [ 2029 | "ec2_client.associate_address?" 
2030 | ] 2031 | }, 2032 | { 2033 | "cell_type": "code", 2034 | "execution_count": 30, 2035 | "metadata": {}, 2036 | "outputs": [ 2037 | { 2038 | "data": { 2039 | "text/plain": [ 2040 | "{'AssociationId': 'eipassoc-03526cc1730a604af',\n", 2041 | " 'ResponseMetadata': {'RequestId': '32de5037-fa4e-4e2c-89b4-b2b4982a3171',\n", 2042 | " 'HTTPStatusCode': 200,\n", 2043 | " 'HTTPHeaders': {'x-amzn-requestid': '32de5037-fa4e-4e2c-89b4-b2b4982a3171',\n", 2044 | " 'cache-control': 'no-cache, no-store',\n", 2045 | " 'strict-transport-security': 'max-age=31536000; includeSubDomains',\n", 2046 | " 'content-type': 'text/xml;charset=UTF-8',\n", 2047 | " 'content-length': '295',\n", 2048 | " 'date': 'Sun, 26 Jun 2022 12:44:43 GMT',\n", 2049 | " 'server': 'AmazonEC2'},\n", 2050 | " 'RetryAttempts': 0}}" 2051 | ] 2052 | }, 2053 | "execution_count": 30, 2054 | "metadata": {}, 2055 | "output_type": "execute_result" 2056 | } 2057 | ], 2058 | "source": [ 2059 | "ec2_client.associate_address(\n", 2060 | " AllocationId=elastic_ip_allocation_id,\n", 2061 | " InstanceId=instance_id\n", 2062 | ")" 2063 | ] 2064 | }, 2065 | { 2066 | "cell_type": "code", 2067 | "execution_count": null, 2068 | "metadata": {}, 2069 | "outputs": [], 2070 | "source": [] 2071 | } 2072 | ], 2073 | "metadata": { 2074 | "kernelspec": { 2075 | "display_name": "Python 3.9.12 ('me-venv': venv)", 2076 | "language": "python", 2077 | "name": "python3" 2078 | }, 2079 | "language_info": { 2080 | "codemirror_mode": { 2081 | "name": "ipython", 2082 | "version": 3 2083 | }, 2084 | "file_extension": ".py", 2085 | "mimetype": "text/x-python", 2086 | "name": "python", 2087 | "nbconvert_exporter": "python", 2088 | "pygments_lexer": "ipython3", 2089 | "version": "3.9.12" 2090 | }, 2091 | "orig_nbformat": 4, 2092 | "vscode": { 2093 | "interpreter": { 2094 | "hash": "486bfb5c5bc2d80b6bdfdd884fe8070e0b9ef1b20f462b79720291b7c17c500c" 2095 | } 2096 | } 2097 | }, 2098 | "nbformat": 4, 2099 | "nbformat_minor": 2 2100 | } 2101 | --------------------------------------------------------------------------------
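The notebook above looks up the EMR master node's EC2 instance id (by matching `MasterPublicDnsName` against `list_instances`), finds an Elastic IP by its `Name` tag, and then calls `associate_address`. Below is a minimal, consolidated sketch of that same flow as a standalone script, assuming the cluster name (**AI Dev Cluster**) and Elastic IP tag value (**AI Dev EMR Cluster Master**) used in the notebook, and that AWS credentials and region come from your usual AWS CLI/boto3 configuration. Treat it as a convenience wrapper around the notebook cells, not a packaged utility from this repo.

```python
# Consolidated sketch of the notebook flow: find the EMR master instance,
# find the named Elastic IP, and associate the two. Names below are the
# ones used in the notebook; change them for your own environment.
import boto3


def get_emr_master_instance_id(cluster_name):
    """Return the EC2 instance id of the master node of a RUNNING/WAITING cluster."""
    emr_client = boto3.client('emr')
    clusters = emr_client.list_clusters(
        ClusterStates=['RUNNING', 'WAITING']
    )['Clusters']
    for cluster in clusters:
        if cluster['Name'] == cluster_name:
            cluster_id = cluster['Id']
            master_public_dns = emr_client.describe_cluster(
                ClusterId=cluster_id
            )['Cluster']['MasterPublicDnsName']
            # The master node is the instance whose public DNS matches the cluster's.
            for instance in emr_client.list_instances(ClusterId=cluster_id)['Instances']:
                if instance['PublicDnsName'] == master_public_dns:
                    return instance['Ec2InstanceId']
    return None


def get_emr_master_allocation_id(elastic_ip_name):
    """Return the allocation id of the Elastic IP whose Name tag matches."""
    ec2_client = boto3.client('ec2')
    for elastic_ip in ec2_client.describe_addresses()['Addresses']:
        for tag in elastic_ip.get('Tags', []):
            if tag['Key'] == 'Name' and tag['Value'] == elastic_ip_name:
                return elastic_ip['AllocationId']
    return None


if __name__ == '__main__':
    instance_id = get_emr_master_instance_id('AI Dev Cluster')
    allocation_id = get_emr_master_allocation_id('AI Dev EMR Cluster Master')
    if instance_id and allocation_id:
        ec2_client = boto3.client('ec2')
        response = ec2_client.associate_address(
            AllocationId=allocation_id,
            InstanceId=instance_id
        )
        print(response['AssociationId'])
    else:
        print('Cluster or Elastic IP not found')
```

As the `associate_address` docstring shown earlier notes, the call is idempotent, so re-running it after the cluster is recreated simply remaps the same Elastic IP to the new master node, which keeps the master DNS/IP stable across cluster restarts.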