├── Chapter16 ├── DataEngGlueCWS3CuratedZoneRole-trust-policy.json ├── CFN-glue_job-streams_by_category.cfn ├── Glue-streaming_views_by_category.py └── README.md ├── Chapter03 ├── test.csv ├── DataEngLambdaS3CWGluePolicy.json ├── CSVtoParquetLambda.py └── README.md ├── Chapter10 ├── dataeng-random-failure-generator.py ├── dataeng-check-file-ext.py ├── README.md └── ProcessFileStateMachine.json ├── Chapter08 └── README.md ├── LICENSE ├── Chapter13 ├── website-reviews-analysis-role.py └── README.md ├── Chapter02 └── README.md ├── Chapter01 └── README.md ├── Chapter07 └── README.md ├── Chapter17 └── README.md ├── Chapter05 ├── Data-Engineering-Whiteboard-Template.drawio ├── README.md ├── Data-Engineering-Whiteboard-Completed-Notes.drawio └── Data-Engineering-Completed-Whiteboard.drawio ├── Chapter12 └── README.md ├── Chapter06 ├── mysql-ec2loader.cfn └── README.md ├── Chapter04 ├── README.md └── AthenaAccessCleanZoneDB ├── Chapter15 └── README.md ├── Chapter09 └── README.md ├── Chapter11 └── README.md ├── Chapter14 └── README.md └── README.md /Chapter16/DataEngGlueCWS3CuratedZoneRole-trust-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Principal": { 7 | "Service": [ 8 | "glue.amazonaws.com", 9 | "cloudformation.amazonaws.com" 10 | ] 11 | }, 12 | "Action": "sts:AssumeRole" 13 | } 14 | ] 15 | } 16 | -------------------------------------------------------------------------------- /Chapter03/test.csv: -------------------------------------------------------------------------------- 1 | Name,favorite_num 2 | Vrinda,22 3 | Tracy,28 4 | Gareth,23 5 | Chris,16 6 | Emma,14 7 | Carlos,7 8 | Cooper,11 9 | Praful,4 10 | David,33 11 | Shilpa,2 12 | Gary,18 13 | Sean,20 14 | Ha-yoon,9 15 | Elizabeth,8 16 | Mary,1 17 | Chen,15 18 | Janet,22 19 | Mariusz,25 20 | Romain,11 21 | Matt,25 22 | Brendan,19 23 | Roger,2 24 | Jack,7 25 | Sachin,17 26 | Francisco,5 27 | -------------------------------------------------------------------------------- /Chapter10/dataeng-random-failure-generator.py: -------------------------------------------------------------------------------- 1 | from random import randint 2 | 3 | def lambda_handler(event, context): 4 | print('Processing') 5 | #Our ETL code would go here 6 | value = randint(0, 2) 7 | # We now divide 10 by our random number. 
8 |     # If the random number is 0, our function will fail
9 |     newval = 10 / value
10 |     print(f'New Value is: {newval}')
11 |     return(newval)
12 | 
--------------------------------------------------------------------------------
/Chapter10/dataeng-check-file-ext.py:
--------------------------------------------------------------------------------
1 | import urllib.parse
2 | import json
3 | import os
4 | print('Loading function')
5 | def lambda_handler(event, context):
6 |     print("Received event: " + json.dumps(event, indent=2))
7 |     # Get the bucket name and object key from the EventBridge event
8 |     bucket = event['detail']['bucket']['name']
9 |     key = urllib.parse.unquote_plus(event['detail']['object']['key'], encoding='utf-8')
10 |     filename, file_extension = os.path.splitext(key)
11 |     print(f'File extension is: {file_extension}')
12 |     payload = {
13 |         "file_extension": file_extension,
14 |         "bucket": bucket,
15 |         "key": key
16 |     }
17 |     return payload
18 | 
--------------------------------------------------------------------------------
/Chapter08/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 8 - Identifying and Enabling Data Consumers
2 | 
3 | In this chapter, we explored a variety of data consumers that you are likely to find in most
4 | organizations, including business users, data analysts, and data scientists. We briefly
5 | examined their roles, and then looked at the types of AWS services that each of them is
6 | likely to use to work with data.
7 | 
8 | ## Hands-on Activity
9 | In the hands-on section of this chapter, we took on the role of a data analyst, tasked
10 | with creating a mailing list for the marketing department. We used data that had been
11 | imported from a MySQL database into S3 in a previous chapter, joined two of the tables
12 | from that database, and transformed the data in some of the columns. Then, we wrote the
13 | newly transformed dataset out to Amazon S3 as a CSV file.
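
The hands-on activity registers datasets through the DataBrew console. If you would rather script that step, the sketch below shows one possible way to register an S3-based dataset with boto3; this is our own illustration rather than a step from the book, and the dataset name, bucket, and key are hypothetical placeholders.

```
# A minimal sketch (not a book step) of registering an S3-based dataset
# with AWS Glue DataBrew using boto3. The dataset name, bucket, and key
# below are hypothetical placeholders -- point them at the data you
# ingested in earlier chapters.
import boto3

databrew = boto3.client('databrew')

response = databrew.create_dataset(
    Name='mailing-list-source-data',  # hypothetical dataset name
    Format='PARQUET',
    Input={
        'S3InputDefinition': {
            'Bucket': 'dataeng-clean-zone-INITIALS',  # replace INITIALS
            'Key': 'sakila/customer/'                 # hypothetical key
        }
    }
)
print(f"Created DataBrew dataset: {response['Name']}")
```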
14 | 15 | #### Configuring new datasets for AWS Glue DataBrew 16 | - AWS Management Console - Glue DataBrew: https://console.aws.amazon.com/databrew 17 | 18 | 19 | -------------------------------------------------------------------------------- /Chapter03/DataEngLambdaS3CWGluePolicy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "logs:PutLogEvents", 8 | "logs:CreateLogGroup", 9 | "logs:CreateLogStream" 10 | ], 11 | "Resource": "arn:aws:logs:*:*:*" 12 | }, 13 | { 14 | "Effect": "Allow", 15 | "Action": [ 16 | "s3:*" 17 | ], 18 | "Resource": [ 19 | "arn:aws:s3:::dataeng-landing-zone-INITIALS/*", 20 | "arn:aws:s3:::dataeng-landing-zone-INITIALS", 21 | "arn:aws:s3:::dataeng-clean-zone-INITIALS/*", 22 | "arn:aws:s3:::dataeng-clean-zone-INITIALS" 23 | ] 24 | }, 25 | { 26 | "Effect": "Allow", 27 | "Action": [ 28 | "glue:*" 29 | ], 30 | "Resource": "*" 31 | } 32 | ] 33 | } 34 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Packt 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | 
--------------------------------------------------------------------------------
/Chapter13/website-reviews-analysis-role.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | comprehend = boto3.client(service_name='comprehend',
4 |                           region_name='us-east-2')
5 | 
6 | def lambda_handler(event, context):
7 |     for record in event['Records']:
8 |         payload = record["body"]
9 |         print(str(payload))
10 | 
11 |         print('Calling DetectSentiment')
12 |         response = comprehend.detect_sentiment(Text=payload,
13 |                                                LanguageCode='en')
14 |         sentiment = response['Sentiment']
15 |         sentiment_score = response['SentimentScore']
16 |         print(f'SENTIMENT: {sentiment}')
17 |         print(f'SENTIMENT SCORE: {sentiment_score}')
18 | 
19 |         print('Calling DetectEntities')
20 |         response = comprehend.detect_entities(Text=payload,
21 |                                               LanguageCode='en')
22 |         #print(response['Entities'])
23 |         for entity in response['Entities']:
24 |             entity_text = entity['Text']
25 |             entity_type = entity['Type']
26 |             print(
27 |                 f'ENTITY: {entity_text}, '
28 |                 f'ENTITY TYPE: {entity_type}'
29 |             )
30 |     return
31 | 
--------------------------------------------------------------------------------
/Chapter02/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 2 - Data Management Architectures for Analytics
2 | In this chapter, we learned about the foundational architectural concepts that are typically applied when designing real-life analytics data management and processing solutions.
3 | We also did a deep-dive into three analytics data management architectures that are popular today: data warehouses, data lakes, and data lakehouses.
4 | 
5 | ## Hands-on Activity
6 | In the ***hands-on activity*** section, you learned how to access the AWS Command Line Interface (CLI) via AWS CloudShell, and then used the CLI to create Amazon S3 buckets (storage containers in the Amazon S3 service) which we will use in later chapters.
7 | 
8 | ### Links
9 | - **Learn more about S3 bucket naming rules:** https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
10 | - **AWS CloudShell service:** https://us-east-2.console.aws.amazon.com/cloudshell/home
11 | 
12 | ### Commands
13 | #### Create a new Amazon S3 bucket
14 | The following command, when run via the AWS CLI, creates a new bucket called *dataeng-test-bucket-123*. If a bucket with this name already exists the command will fail, so you need to ensure you provide a globally unique name.
15 | 
16 | ```
17 | aws s3 mb s3://dataeng-test-bucket-123
18 | ```
--------------------------------------------------------------------------------
/Chapter01/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 1 - An Introduction to Data Engineering
2 | In this chapter we reviewed how data is becoming an increasingly important asset for organizations, and looked at some of the core challenges of working with big data. We then reviewed some of the data-related roles that are commonly seen in organizations today.
3 | 
4 | ## Links
5 | - **AWS Data Lake Definition:** https://aws.amazon.com/big-data/datalakes-and-analytics/what-is-a-data-lake/
6 | 
7 | ## Hands-on Activity
8 | 
9 | In the ***hands-on activity*** section of this chapter you were given step-by-step instructions to guide you in creating a new AWS account. No coding or policy configuration was required in this chapter.
10 | 
11 | ### Links
12 | - Creating a billing alarm to monitor your estimated AWS charges: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html
13 | - Google Voice for creating a virtual phone number for use with your account: https://voice.google.com/
14 | - What to do if you don’t receive a confirmation email within 24 hours: https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/
15 | - Best practices for securing your account - root user: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html
16 | - Best practices for securing your account - Multi Factor Authentication (MFA): https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html
17 | 
--------------------------------------------------------------------------------
/Chapter07/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 7 - Transforming Data to Optimize for Analytics
2 | 
3 | In this chapter, we reviewed a number of common transformations that can be applied
4 | to raw datasets, covering both generic transformations used to optimize data for analytics,
5 | and business transforms used to enrich and denormalize datasets.
6 | 
7 | ## Hands-on Activity
8 | In the ***hands-on activity*** section of this chapter, we joined various datasets that we had previously ingested in order to denormalize the underlying datasets. We then joined data we had ingested from a database with data that had been streamed into the data lake.
9 | 
10 | #### Creating a new IAM role for the Glue job
11 | - AWS Management Console - IAM Policies: https://console.aws.amazon.com/iamv2/home?#/policies
12 | 
13 | - AWS IAM policy for `DataEngGlueCWS3CuratedZoneWrite`. Change *INITIALS* in the policy below to match the name of the relevant bucket that you previously created.
14 | 
15 | ```
16 | {
17 |     "Version": "2012-10-17",
18 |     "Statement": [
19 |         {
20 |             "Effect": "Allow",
21 |             "Action": [
22 |                 "s3:GetObject"
23 |             ],
24 |             "Resource": [
25 |                 "arn:aws:s3:::dataeng-landing-zone-INITIALS/*",
26 |                 "arn:aws:s3:::dataeng-clean-zone-INITIALS/*"
27 |             ]
28 |         },
29 |         {
30 |             "Effect": "Allow",
31 |             "Action": [
32 |                 "s3:*"
33 |             ],
34 |             "Resource": "arn:aws:s3:::dataeng-curated-zone-INITIALS/*"
35 |         }
36 |     ]
37 | }
38 | ```
39 | 
40 | 
41 | 
--------------------------------------------------------------------------------
/Chapter17/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 17 - Wrapping Up the First Part of Your Learning Journey
2 | 
3 | In this chapter we looked at some of the complexities of data engineering in the real world, and
4 | examined some examples of real-world data pipelines. We then looked at some emerging trends to get
5 | an idea of what the future holds for data engineering, such as the increased adoption of a data
6 | mesh approach, the reality of multi-cloud environments, the work that will need to be done to
7 | migrate to open-table formats, how Generative AI may impact the field, and more.
8 | 
9 | ## Hands-on Activity
10 | In the hands-on activity we looked at how you can review your AWS spend in the billing console, and then how you could optionally close your AWS account.
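
The same spend information shown in the billing console can also be pulled programmatically through the Cost Explorer API. The snippet below is a minimal boto3 sketch (our addition, not one of the chapter's steps); adjust the dates to the period you want to review, and note that each Cost Explorer API request incurs a small charge.

```
# A minimal sketch (not a book step) of reviewing AWS spend by service
# with the Cost Explorer API via boto3. The date range is an example.
import boto3

ce = boto3.client('ce')

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2023-09-01', 'End': '2023-09-30'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}]
)

# Print the cost incurred by each service in the period
for group in response['ResultsByTime'][0]['Groups']:
    service = group['Keys'][0]
    amount = float(group['Metrics']['UnblendedCost']['Amount'])
    print(f'{service}: ${amount:.2f}')
```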
11 | 12 | #### Reviewing AWS Billing to identify the resources being charged for 13 | 14 | - AWS Management Console - Billing Console: https://console.aws.amazon.com/billing/home 15 | 16 | - AWS Documentation on how to create a billing alarm to monitor estimated charges: [Documentation Link](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) 17 | 18 | #### Closing your AWS account 19 | 20 | - AWS Documentation on considerations before closing your AWS account: [Documentation Link](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/close-account.html) 21 | 22 | - AWS Management Console - Login: https://console.aws.amazon.com 23 | [Make sure to log in with the email address you used when creating the account] 24 | 25 | - AWS Billing Console: https://console.aws.amazon.com/billing/home 26 | -------------------------------------------------------------------------------- /Chapter05/Data-Engineering-Whiteboard-Template.drawio: -------------------------------------------------------------------------------- 1 | 7VrbbuI6FP0aHoviOBd4LJeZqdQjHYmRRpoXZBKTeBrsyHEKnK8fh9iFxIFmdICGoSDReNtx7LX2rdvpwfFq85WjNP6HhTjp2Va46cFJz7ZtMPDkn0KyLSXAcv1SEnESKtleMCP/YT1QSXMS4qwyUDCWCJJWhQGjFAeiIkOcs3V12JIl1aemKMKGYBagxJT+IKGI9eosa9/xDZMoVo92dMcK6cFKkMUoZOsDEZz24JgzJsqr1WaMkwI9jUt535cjvW8L45iKNjd4JJ+n9nQS/kz5dvZCk8XTtwe7nOUVJbnasFqs2GoEOMtpiItJrB4crWMi8CxFQdG7lqRLWSxWiWwBeWkuSq3zFXOBNwcitcivmK2w4Fs5RPU+eK5CTOnMA3SHfbcUrfccOEM1LK7ArxQOKd6jt/n30MgLhc4fIAUNpCZIICn5zhHNloxLwgmjBnxy16KKUSY4e8FjljAuJZRROXK0JElSE6GERFQ2A4kllvJRgSGRqvmoOlYkDIvHNJJSpW3JqFDGZTvn4smu8wShwZLtNbCkbzw7Sc776sxykRAq4df+QsOj0e/ZUH6/FE8dRRyFBO/7Gsgqhvs+AN4xZkOUxW88aAqf0QIn/7KM7HQGThZMCLZq4FiwtEkVDtTpgFpg67bab/FIlKXlRpdkU6xjlDJSzDJ9lZNlWidjlBY3rDZR4cj7aJ05/UUevGAxXxMRz9nil5wlu5TqABv0wfDwY2iS65qK5Pp9+0Kq5L6vSjiUoUI1GRcxixhFyXQvrRnhfswzK2jdIf8LC7FV/KFcsOPkeqfAz1jOA3xiP8ovCsQjLE6MA8qGi82d5JLjRHq812qcPDsNnkHDM6IhoZEU/iwMrPP+9jRvf2I0wNZhUJuNMzDD4jX9rf8h/tZ9hNbI/fv8bSRRnIcyq5gH8idh0ZkUx3adWqBu0BvX7cOhqTrepVRncCyfGqu9349hQ2nYHqxRZH+waQ8/U6mbSKUcr/OplP5f/q/JpYCuRLybTPmdSqb0ug+IGCcY0fvLpRy3c7kUaFGLuS0rgW2tBHTLSsxSj0GEjmYTwqUvLwMPLdAuQP8/YbFIby13INMBIzaCqe3ZU8MIZc9y9+ligGwMhhyXuvMUFOsZyWZ5VcuGMcUcJWcyeNceVs29Kb9qyH+17Pxa9lmruo0EC1rdT7DMYtU450hICO8ustvD7kV2s4h145HdbxvZ3W5F9hb1qs/IfmORHfo1//zRkd1pkT9e/VAVOF4VJehZDYeqoMkxOpZ/KahaJEHXhmpQDx/Nx89HkPL00PNjZQZ5VTF9DAKcZd2P8Wc7eQYOqFHUcPIM4FVPns0Yr+vZjGb5CvM7Isg1baiBoOueVLWoQV7dKQ/roUu6mtaO5lJAmTXCjwdqADoIlFmvUxY/2+VAd2TvD8M2rwJd197NJOyJRjgr39GyvjOW3BNBfv2U6IIEyeb+rcZd38HLoXD6Gw== -------------------------------------------------------------------------------- /Chapter16/CFN-glue_job-streams_by_category.cfn: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | # CloudFormation template to deploy the streaming view by category 3 | # Glue job. 4 | # In the Parameters section we define parameters that can be passed to 5 | # CloudFormation at deployment time. If no parameters are passed in, then the 6 | # specified default is used. 7 | Parameters: 8 | # JobName: The name of the job to be created 9 | JobName: 10 | Type: String 11 | Default: streaming_views_by_category 12 | # The name of the IAM role that the job assumes. It must have access to data, 13 | # script, and temporary directory. We created this IAM role via the AWS 14 | # console in Chapter 7. 
15 | IAMRoleName: 16 | Type: String 17 | Default: DataEngGlueCWS3CuratedZoneRole 18 | # The S3 path where the script for this job is located. Modify the default 19 | # below to reference the specific path for your S3 bucket 20 | ScriptLocation: 21 | Type: String 22 | Default: "s3://data-product-film-gse23/glueETL_code/Glue-streaming_views_by_category.py" 23 | # In the Resources section, we define the AWS resources we want to deploy 24 | # with this CloudFormation template. In our case, it is just a single Glue 25 | # job, but a single template can deploy multiple different AWS resources 26 | Resources: 27 | # Below we define our Glue job, and we substitute parameters in from the 28 | # above section. 29 | GlueJob: 30 | Type: AWS::Glue::Job 31 | Properties: 32 | Role: !Ref IAMRoleName 33 | Description: Glue job to calculate number of streams by category 34 | Command: 35 | Name: glueetl 36 | ScriptLocation: !Ref ScriptLocation 37 | WorkerType: G.1X 38 | NumberOfWorkers: 2 39 | GlueVersion: "3.0" 40 | Name: Streaming Views by Category 41 | -------------------------------------------------------------------------------- /Chapter12/README.md: -------------------------------------------------------------------------------- 1 | # Chapter 12 - Visualizing Data with Amazon QuickSight 2 | 3 | In this chapter we discussed the power of visually representing data, and then explored core Amazon 4 | QuickSight concepts. We looked at how various data sources can be used with QuickSight, 5 | how data can optionally be imported into the SPICE storage engine, and how you can 6 | perform some data preparation tasks using QuickSight. 7 | 8 | We then did a deeper dive into the concepts of analyses (where new visuals are authored) 9 | and dashboards (published analyses that can be shared with data consumers). As part 10 | of this, we also examined some of the common types of visualizations available in 11 | QuickSight. 12 | 13 | We then looked at some of the advanced features available in QuickSight, including ML 14 | Insights (which uses machine learning to detect outliers in data and forecast future data 15 | trends), as well as embedded dashboards (which enable you to embed either the full 16 | QuickSight console or dashboards directly into your websites and applications). We also 17 | examined QuickSight Q (for Natural Language Queries), and the ability to generate paginated reports. 18 | 19 | ## Hands-on Activity 20 | 21 | #### Setting up a new QuickSight account and loading a dataset 22 | In this section, we create a visualization using data from [SimpleMaps.com](https://simplemaps.com). The basic map data is distributed under the [Creative Commons Attribution 4.0 International (CC BY 4.0) license](https://creativecommons.org/licenses/by/4.0/). 
23 | 
24 | - Link to download the Simple Maps basic data: https://simplemaps.com/static/data/world-cities/basic/simplemaps_worldcities_basicv1.76.zip
25 | [Extract the ZIP file to access the underlying CSV file]
26 | 
27 | - Link to the Amazon QuickSight service: https://quicksight.aws.amazon.com/
28 | 
29 | 
30 | 
31 | 
32 | 
--------------------------------------------------------------------------------
/Chapter03/CSVtoParquetLambda.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import awswrangler as wr
3 | from urllib.parse import unquote_plus
4 | 
5 | def lambda_handler(event, context):
6 |     # Get the source bucket and object name as passed to the Lambda function
7 |     for record in event['Records']:
8 |         bucket = record['s3']['bucket']['name']
9 |         key = unquote_plus(record['s3']['object']['key'])
10 | 
11 |     # We will set the DB and table name based on the last two elements of
12 |     # the path prior to the file name. If key = 'dms/sakila/film/LOAD01.csv',
13 |     # then the following lines will set db to sakila and table_name to 'film'
14 |     key_list = key.split("/")
15 |     print(f'key_list: {key_list}')
16 |     db_name = key_list[len(key_list)-3]
17 |     table_name = key_list[len(key_list)-2]
18 | 
19 |     print(f'Bucket: {bucket}')
20 |     print(f'Key: {key}')
21 |     print(f'DB Name: {db_name}')
22 |     print(f'Table Name: {table_name}')
23 | 
24 |     input_path = f"s3://{bucket}/{key}"
25 |     print(f'Input_Path: {input_path}')
26 |     output_path = f"s3://dataeng-clean-zone-INITIALS/{db_name}/{table_name}"
27 |     print(f'Output_Path: {output_path}')
28 | 
29 |     input_df = wr.s3.read_csv([input_path])
30 | 
31 |     current_databases = wr.catalog.databases()
32 |     # Create the Glue catalog database if it does not already exist
33 |     if db_name not in current_databases.values:
34 |         print(f'- Database {db_name} does not exist ... creating')
35 |         wr.catalog.create_database(db_name)
36 |     else:
37 |         print(f'- Database {db_name} already exists')
38 | 
39 |     result = wr.s3.to_parquet(
40 |         df=input_df,
41 |         path=output_path,
42 |         dataset=True,
43 |         database=db_name,
44 |         table=table_name,
45 |         mode="append")
46 | 
47 |     print("RESULT: ")
48 |     print(f'{result}')
49 | 
50 |     return result
51 | 
--------------------------------------------------------------------------------
/Chapter05/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 5 - Architecting Data Engineering Pipelines
2 | 
3 | In this chapter, we reviewed an approach to developing data engineering pipelines by
4 | identifying a limited-scope project, and then whiteboarding a high-level architecture
5 | diagram. We looked at how we could have a workshop, in conjunction with relevant
6 | stakeholders in the organization, to discuss requirements and plan the initial architecture.
7 | 
8 | ## Links from this chapter
9 | 
10 | - Spotify blog providing an example of a data engineering pipeline: https://engineering.atspotify.com/2020/02/18/spotify-unwrapped-how-we-brought-you-a-decade-of-data/
11 | 
12 | ## Hands-on Activity
13 | In the ***hands-on activity*** section of this chapter, we read through some fictional notes of a meeting to discuss a new project that had specific data requirements. As we read through the notes, we sketched out a high-level whiteboard architecture showing data consumers, data ingestion sources, and transformations.
14 | 
15 | - Link to diagrams.net - an online architecture design tool: https://www.diagrams.net/.
16 | 
17 | **NOTE:** The files linked to below can be downloaded from here (in .drawio format) and then opened in diagrams.net and modified. To download the source files, click on the link, and then right-click the **Raw** button, and select **Save link as**. This will let you download the XML draw.io file which you can then open with diagrams.net.
18 | 
19 | - Generic Data Architecture Whiteboard Template (drawio format): [Data-Engineering-Whiteboard-Template.drawio](Data-Engineering-Whiteboard-Template.drawio)
20 | 
21 | - Completed Data Architecture Whiteboard Diagram (drawio format): [Data-Engineering-Completed-Whiteboard.drawio](Data-Engineering-Completed-Whiteboard.drawio)
22 | 
23 | - Completed Data Architecture Whiteboard Notes (drawio format): [Data-Engineering-Whiteboard-Completed-Notes.drawio](Data-Engineering-Whiteboard-Completed-Notes.drawio)
24 | 
--------------------------------------------------------------------------------
/Chapter06/mysql-ec2loader.cfn:
--------------------------------------------------------------------------------
1 | ---
2 | AWSTemplateFormatVersion: 2010-09-09
3 | Description: Chapter 6 - Data Engineering with AWS
4 | Parameters:
5 |   DBPassword:
6 |     Type: String
7 |     NoEcho: true
8 |     Description: The database admin account password
9 |     MinLength: 8
10 |     AllowedPattern: ^[a-zA-Z0-9]*$
11 |     ConstraintDescription: Password must contain only alphanumeric characters.
12 |   LatestAmiId:
13 |     Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
14 |     Default: '/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64'
15 | Resources:
16 |   MySQLInstance:
17 |     Type: AWS::RDS::DBInstance
18 |     Properties:
19 |       AllocatedStorage: 20
20 |       DBInstanceClass: db.t3.micro
21 |       Engine: MySQL
22 |       MasterUsername: admin
23 |       MasterUserPassword: !Ref DBPassword
24 |   EC2Instance:
25 |     Type: AWS::EC2::Instance
26 |     Properties:
27 |       ImageId: !Ref LatestAmiId
28 |       InstanceType: t3.micro
29 |       UserData:
30 |         Fn::Base64:
31 |           Fn::Sub:
32 |             - |
33 |               #!/bin/bash
34 | 
35 |               yum install -y mariadb105
36 | 
37 |               curl https://downloads.mysql.com/docs/sakila-db.zip -o sakila.zip
38 | 
39 |               unzip sakila.zip
40 | 
41 |               cd sakila-db
42 | 
43 |               echo "mysql --host=${EndPointAddress} --user=admin --password=${DBPassword} -f < sakila-schema.sql" | tee -a /var/tmp/userdata.log
44 | 
45 |               mysql --host=${EndPointAddress} --user=admin --password=${DBPassword} -f < sakila-schema.sql | tee -a /var/tmp/userdata.log1
46 | 
47 |               mysql --host=${EndPointAddress} --user=admin --password=${DBPassword} -f < sakila-data.sql | tee -a /var/tmp/userdata.log2
48 | 
49 |             - EndPointAddress: !GetAtt MySQLInstance.Endpoint.Address
50 |       Tags:
51 |         - Key: Name
52 |           Value: dataeng-book-ec2-3
53 |     DependsOn: MySQLInstance
54 | 
--------------------------------------------------------------------------------
/Chapter04/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 4 - Data Governance, Security and Cataloging
2 | 
3 | In this chapter, we did a deeper dive into best practices for handling data responsibly and securely, and for making sure that the value of data can be maximized for an organization.
4 | 
5 | ## Hands-on Activity
6 | In the ***hands-on activity*** section of this chapter, we created a new data lake user and assigned them permissions using AWS Identity and Access Management (IAM). Once we verified their permissions, we then transitioned data authorization over to use Lake Formation for fine-grained access control (including the ability to control permissions at the column level).
7 | 
8 | #### Creating a new user with IAM permissions
9 | - AWS Management Console - IAM Policies: https://console.aws.amazon.com/iamv2/home?#/policies
10 | 
11 | - Resource section of policy that is updated to limit access to just the Glue `cleanzonedb` database and tables in that database
12 | ```
13 |     "Resource": [
14 |         "arn:aws:glue:*:*:catalog",
15 |         "arn:aws:glue:*:*:database/cleanzonedb",
16 |         "arn:aws:glue:*:*:database/cleanzonedb*",
17 |         "arn:aws:glue:*:*:table/cleanzonedb/*"
18 |     ]
19 | ```
20 | 
21 | - New section of policy that enables access to the underlying S3 storage for the `cleanzonedb` database. **Ensure that you modify INITIALS below to reflect the correct name for your CleanZoneDB bucket.**
22 | ```
23 | {
24 |     "Effect": "Allow",
25 |     "Action": [
26 |         "s3:GetBucketLocation",
27 |         "s3:GetObject",
28 |         "s3:ListBucket",
29 |         "s3:ListBucketMultipartUploads",
30 |         "s3:ListMultipartUploadParts",
31 |         "s3:AbortMultipartUpload",
32 |         "s3:PutObject"
33 |     ],
34 |     "Resource": [
35 |         "arn:aws:s3:::dataeng-clean-zone-INITIALS/*"
36 |     ]
37 | },
38 | ```
39 | 
40 | - Athena query to validate that IAM permissions are correct for the datalake-user:
41 | `select * from cleanzonedb.csvtoparquet`
42 | 
43 | #### Transitioning to managing fine-grained permissions with AWS Lake Formation
44 | 
45 | - AWS Management Console - Lake Formation: https://console.aws.amazon.com/lakeformation/home
46 | 
--------------------------------------------------------------------------------
/Chapter16/Glue-streaming_views_by_category.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from awsglue.transforms import *
3 | from awsglue.utils import getResolvedOptions
4 | from pyspark.context import SparkContext
5 | from awsglue.context import GlueContext
6 | from awsglue.job import Job
7 | from awsglue.dynamicframe import DynamicFrame
8 | 
9 | args = getResolvedOptions(sys.argv, ["JOB_NAME"])
10 | sc = SparkContext()
11 | glueContext = GlueContext(sc)
12 | spark = glueContext.spark_session
13 | job = Job(glueContext)
14 | job.init(args["JOB_NAME"], args)
15 | 
16 | # Load the streaming_films table into a Glue DynamicFrame from the Glue catalog
17 | StreamingFilms = glueContext.create_dynamic_frame.from_catalog(
18 |     database="curatedzonedb",
19 |     table_name="streaming_films",
20 |     transformation_ctx="StreamingFilms",
21 | )
22 | 
23 | # Convert the DynamicFrame to a Spark DataFrame
24 | spark_dataframe = StreamingFilms.toDF()
25 | 
26 | # Create a SparkSQL table based on the streaming_films table
27 | spark_dataframe.createOrReplaceTempView("streaming_films")
28 | 
29 | # Create a new DataFrame that records number of streams for each
30 | # category of film
31 | CategoryStreamsDF = glueContext.sql("""
32 | SELECT category_name,
33 |        count(category_name) streams
34 | FROM streaming_films
35 | GROUP BY category_name
36 | """)
37 | 
38 | # Convert the DataFrame back to a Glue DynamicFrame
39 | CategoryStreamsDyf = DynamicFrame.fromDF(CategoryStreamsDF, glueContext, "CategoryStreamsDyf")
40 | 
41 | # Prepare to write the dataframe to Amazon S3
42 | ############# NOTE #############
43 | #### Change the path below to
44 | #### reference your bucket name
45 | ################################
46 | s3output = glueContext.getSink(
path="s3://dataeng-curated-zone-gse23/streaming/top_categories", 48 | connection_type="s3", 49 | updateBehavior="UPDATE_IN_DATABASE", 50 | partitionKeys=[], 51 | compression="snappy", 52 | enableUpdateCatalog=True, 53 | transformation_ctx="s3output", 54 | ) 55 | # Set the database and table name for where you want this table 56 | # to be registered in the Glue catalog 57 | s3output.setCatalogInfo( 58 | catalogDatabase="curatedzonedb", catalogTableName="category_streams" 59 | ) 60 | # Set the output format to Glue Parquet 61 | s3output.setFormat("glueparquet") 62 | # Write the output to S3 and update the Glue catalog 63 | s3output.writeFrame(CategoryStreamsDyf) 64 | job.commit() 65 | -------------------------------------------------------------------------------- /Chapter03/README.md: -------------------------------------------------------------------------------- 1 | # Chapter 3 - The AWS Data Engineers Toolkit 2 | In this chapter we reviewed a range of AWS services at a high level, including services for ingesting data from a variety of sources, services for transforming data, services for orchestrating pipelines, and services for consuming and working with data. 3 | 4 | ## Hands-on Activity 5 | In the ***hands-on activity*** section of this chapter, we configured an S3 bucket to automatically trigger a Lambda function whenever a new CSV file was written to the bucket. In the Lambda function, we used an open-source library to convert the CSV file into Parquet format, and wrote the file out to a new zone of our data lake. 6 | 7 | #### Creating a Lambda layer containing the AWS SDK for Pandas (awswrangler) library 8 | - AWS SDK for Pandas site: https://github.com/aws/aws-sdk-pandas 9 | - AWS Data Wrangler v2.19 ZIP file for Python 3.9: https://github.com/aws/aws-sdk-pandas/releases/download/2.19.0/awswrangler-layer-2.19.0-py3.9.zip 10 | - AWS Management Console - Lambda Layers: https://console.aws.amazon.com/lambda/home#/layers 11 | 12 | #### Creating an IAM policy and role for your Lambda function 13 | - AWS Management Console - IAM Policies: https://console.aws.amazon.com/iamv2/home?#/policies 14 | - Policy JSON for `DataEngLambdaS3CWGluePolicy`: [DataEngLambdaS3CWGluePolicy](DataEngLambdaS3CWGluePolicy.json) 15 | [Ensure you replace INITIALS in the policy statements to reflect the name of the S3 buckets you created] 16 | 17 | #### Creating a Lambda function 18 | - AWS Management Console - Lambda Functions: https://console.aws.amazon.com/lambda/home#/functions 19 | - `CSVtoParquetLambda` function code: [CSVtoParquetLambda.py](CSVtoParquetLambda.py) 20 | - **Note 1:** Make sure that on Line 26 you replace INITIALS with the unique identifier you used when creating your clean-zone bucket 21 | - **Note 2:** Make sure you don't miss the step about increasing the Lambda function timeout to 1 minute. If using a larger CSV file than the file provided here as a sample (test.csv) then consider also increasing the memory allocation. 
22 | 
23 | #### Configuring our Lambda function to be triggered by an S3 upload
24 | - Sample CSV file: [test.csv](test.csv)
25 | 
26 | #### Command to list the newly created Parquet files in the clean-zone bucket:
27 | ###### Ensure you replace INITIALS below to reflect the name of the bucket you previously created
28 | 
29 | ```
30 | aws s3 ls s3://dataeng-clean-zone-INITIALS/cleanzonedb/csvtoparquet/
31 | ```
32 | 
33 | 
--------------------------------------------------------------------------------
/Chapter10/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 10 - Orchestrating the Data Pipeline
2 | 
3 | In this chapter, we looked at a critical part of a data engineer's job: designing and
4 | orchestrating data pipelines. First, we examined some of the core concepts around data
5 | pipelines, such as the definition of a DAG (directed acyclic graph),
6 | and common pipeline triggers (scheduled and event-based pipelines). We also
7 | looked at how to handle failures and retries.
8 | 
9 | We then looked at four different AWS services that can be used for creating and
10 | orchestrating data pipelines. This included AWS Data Pipeline (now in maintenance mode),
11 | AWS Glue Workflows, Amazon Managed Workflows for Apache Airflow (MWAA), and AWS Step Functions.
12 | We discussed some of the use cases for each of these services, as well as the pros and cons
13 | of each.
14 | 
15 | ## Hands-on Activity
16 | In the hands-on section of this chapter, we built an event-driven pipeline. We used
17 | two AWS Lambda functions for processing files, and an Amazon SNS topic for sending out
18 | notifications about failure. Then, we put these pieces of our data pipeline together into
19 | a state machine orchestrated by AWS Step Functions.
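
Before wiring everything into the state machine, each Lambda function can be exercised on its own. The sketch below (our addition, not a step from the book) builds a minimal EventBridge-style S3 event of the shape that the file-extension checker reads, and invokes the handler directly; it assumes you have saved the function code locally as `dataeng_check_file_ext.py`, and the bucket and key are placeholders.

```
# A minimal local test harness (not a book step) for the file-extension
# checker. Assumes the handler code is saved as dataeng_check_file_ext.py
# in the current directory.
import dataeng_check_file_ext

# EventBridge-style S3 event with only the fields the handler reads
test_event = {
    'detail': {
        'bucket': {'name': 'dataeng-landing-zone-INITIALS'},  # placeholder
        'object': {'key': 'incoming/testfile.csv'}            # placeholder
    }
}

result = dataeng_check_file_ext.lambda_handler(test_event, None)
print(result)  # expect: {'file_extension': '.csv', 'bucket': ..., 'key': ...}
```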
20 | 
21 | #### Lambda function to determine the file extension
22 | 
23 | - AWS Management Console - Lambda Functions: https://console.aws.amazon.com/lambda/home
24 | 
25 | - Code for Lambda function to check file extension: [dataeng-check-file-ext.py](dataeng-check-file-ext.py)
26 | 
27 | #### Lambda function to randomly generate failures
28 | 
29 | - Code for Lambda function to generate random failures: [dataeng-random-failure-generator.py](dataeng-random-failure-generator.py)
30 | 
31 | #### Creating an SNS topic and subscribing to an email address
32 | 
33 | - AWS Management Console - SNS: https://us-east-2.console.aws.amazon.com/sns/v3/home
34 | 
35 | #### Creating a new Step Functions state machine
36 | 
37 | - AWS Management Console - Step Functions: https://console.aws.amazon.com/states/home
38 | 
39 | - Example of Step Functions JSON for completed state machine: [ProcessFileStateMachine.json](ProcessFileStateMachine.json)
40 | [Note that the ARN references in this state machine are not valid, and would need to be updated to reflect your AWS account number]
41 | 
42 | #### Configuring our S3 bucket to send events to EventBridge
43 | 
44 | - AWS Management Console - Amazon S3: https://s3.console.aws.amazon.com/s3
45 | 
46 | #### Creating an EventBridge rule for triggering our Step Functions state machine
47 | 
48 | - AWS Management Console - EventBridge: https://console.aws.amazon.com/events/home
49 | 
50 | #### Testing our event-driven data orchestration pipeline
51 | 
52 | - AWS Management Console - Amazon S3: https://s3.console.aws.amazon.com/s3
53 | 
54 | 
--------------------------------------------------------------------------------
/Chapter15/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 15 - Implementing a Data Mesh Strategy
2 | 
3 | In this chapter we explored the concept of a data mesh approach to organizing data responsibilities within an organization. We started off by
4 | examining the **four core principles** of a data mesh:
5 | 
6 | - Domain-orientated, decentralized data ownership
7 | - Data as a product
8 | - Self-service data infrastructure as a platform
9 | - Federated computational governance
10 | 
11 | We then looked at how a data mesh approach can solve a number of challenges that exist with traditional data lake approaches. This included
12 | understanding how a centralized data team can be a bottleneck, how traditionally product teams did not consider data analytics to be their
13 | "problem", and how there was a lack of organization-wide visibility into which datasets were available.
14 | 
15 | We then reviewed the organizational and technical challenges of building a data mesh. This included a discussion of the difficulties of
16 | changing the way that an organization traditionally approached data analytics, and how a data mesh approach changed the way that a
17 | centralized data and analytics team worked. We then looked at the changes needed for line of business teams, and finally at how there
18 | were a number of technical challenges to building a data mesh.
19 | 
20 | After that we examined the AWS services that can help to build a data mesh, including a look at the **Amazon DataZone** service.
21 | We also reviewed a sample architecture for building a data mesh using both AWS services and 3rd party services.
22 | 
23 | ## Hands-On Activity
24 | In the hands-on section of this chapter we looked at how to set up Amazon DataZone, and how to publish data to the DataZone business catalog.
25 | 
26 | #### Setting up AWS IAM Identity Center
27 | 
28 | - AWS Management Console - AWS IAM Identity Center: https://us-east-2.console.aws.amazon.com/singlesignon/home
29 | 
30 | - Group to create: `DataZone Users`
31 | - User to create: `film-catalog-team-admin`
32 | - User to create: `marketing-team-admin`
33 | 
34 | #### Enabling and configuring Amazon DataZone
35 | 
36 | - AWS Management Console - Amazon DataZone Console: https://us-east-2.console.aws.amazon.com/datazone/home
37 | 
38 | #### Adding business metadata
39 | 
40 | - Business name for the film_category table: `Movie listings with category`
41 | - Description for the film_category table: `This table contains a complete listing of all films in our streaming movie catalog, including category / genre information for each film, as well as a list of special features.`
42 | - Description for the Category Name field: `This field contains the movie category / genre. Sample categories include Animation, Comedy, Sports, Children, Drama, Action, and more.`
43 | 
44 | 
45 | 
46 | 
47 | 
48 | 
49 | 
50 | 
51 | 
52 | 
53 | 
--------------------------------------------------------------------------------
/Chapter10/ProcessFileStateMachine.json:
--------------------------------------------------------------------------------
1 | {
2 |     "Comment": "A description of my state machine",
3 |     "StartAt": "Check file extension",
4 |     "States": {
5 |         "Check file extension": {
6 |             "Type": "Task",
7 |             "Resource": "arn:aws:states:::lambda:invoke",
8 |             "OutputPath": "$.Payload",
9 |             "Parameters": {
10 |                 "Payload.$": "$",
11 |                 "FunctionName": "arn:aws:lambda:us-east-2:123456789:function:dataeng-check-file-ext:$LATEST"
12 |             },
13 |             "Retry": [
14 |                 {
15 |                     "ErrorEquals": [
16 |                         "Lambda.ServiceException",
17 |                         "Lambda.AWSLambdaException",
18 |                         "Lambda.SdkClientException",
19 |                         "Lambda.TooManyRequestsException"
20 |                     ],
21 |                     "IntervalSeconds": 2,
22 |                     "MaxAttempts": 6,
23 |                     "BackoffRate": 2
24 |                 }
25 |             ],
26 |             "Next": "Choice"
27 |         },
28 |         "Choice": {
29 |             "Type": "Choice",
30 |             "Choices": [
31 |                 {
32 |                     "Variable": "$.file_extension",
33 |                     "StringMatches": ".csv",
34 |                     "Next": "Process CSV"
35 |                 }
36 |             ],
37 |             "Default": "Pass - Invalid File Ext"
38 |         },
39 |         "Process CSV": {
40 |             "Type": "Task",
41 |             "Resource": "arn:aws:states:::lambda:invoke",
42 |             "OutputPath": "$.Payload",
43 |             "Parameters": {
44 |                 "Payload.$": "$",
45 |                 "FunctionName": "arn:aws:lambda:us-east-2:123456789:function:dataeng-random-failure-generator:$LATEST"
46 |             },
47 |             "Retry": [
48 |                 {
49 |                     "ErrorEquals": [
50 |                         "Lambda.ServiceException",
51 |                         "Lambda.AWSLambdaException",
52 |                         "Lambda.SdkClientException",
53 |                         "Lambda.TooManyRequestsException"
54 |                     ],
55 |                     "IntervalSeconds": 2,
56 |                     "MaxAttempts": 6,
57 |                     "BackoffRate": 2
58 |                 }
59 |             ],
60 |             "Catch": [
61 |                 {
62 |                     "ErrorEquals": [
63 |                         "States.ALL"
64 |                     ],
65 |                     "Next": "SNS Publish",
66 |                     "ResultPath": "$.Payload"
67 |                 }
68 |             ],
69 |             "Next": "Success"
70 |         },
71 |         "Success": {
72 |             "Type": "Succeed"
73 |         },
74 |         "Pass - Invalid File Ext": {
75 |             "Type": "Pass",
76 |             "Result": {
77 |                 "Error": "InvalidFileFormat"
78 |             },
79 |             "ResultPath": "$.Payload",
80 |             "Next": "SNS Publish"
81 |         },
82 |         "SNS Publish": {
83 |             "Type": "Task",
84 |             "Resource": "arn:aws:states:::sns:publish",
85 |             "Parameters": {
86 |                 "Message.$": "$",
87 |                 "TopicArn": "arn:aws:sns:us-east-2:123456789:dataeng-failure-notification"
88 |             },
89 |             "Next": "Fail"
90 |         },
91 |         "Fail": {
92 |             "Type": "Fail"
93 |         }
94 |     }
95 | }
96 | 
-------------------------------------------------------------------------------- /Chapter05/Data-Engineering-Whiteboard-Completed-Notes.drawio: -------------------------------------------------------------------------------- 1 | 7VrfU9s4EP5rMgMPyThxEuAxBGh7VwpHmGGmLzeKrcQaHMuV5ITw19/uSv6R2CmBQoe2x0NwZGm12v1291tNWv548fBBsTS6lCGPWz0vfGj5Z61er9c9HsI/HFnbka43OLIjcyVCN1YOTMQjzye60UyEXG9MNFLGRqSbg4FMEh6YjTGmlFxtTpvJeHPXlM15bWASsLg+eidCE+XaeV754iMX88ht3c9fLFg+2Q3oiIVyVRnyz1v+WElp7NPiYcxjtF5uF7vuYsfbQjHFE7PPgqHI/k1752fh11StJ/dJPP30sd2zUpYsztyBnbJmnVtAySwJOQrxWv7pKhKGT1IW4NsVOB3GIrOI4VsXHutKOT2XXBn+UBlySn7gcsGNWsMU97Y99I7tGoeZtj887gzs0Kr0QT+HSFQ1/9Ewn8qc5+fFDqVx4MHZ5xm28mu2OmOGwcitYomeSQUuFzKpGRDObTatpI2S93wsY6lgJJEJzDydiTjeGmKxmCfwNQBrchg/RSsKAOfIvViIMMRtGt2y6biZTIwLr17/lTw18AfbnurV/NQbNvgpD4pXd1K//w4R3T3eC9F+/ycjuj/cBemxTHS24Er/QWjuev4eaPYKd/wUPB957xDPJ7lWFTzX0TzoNaL5rQxVL2UOyxOZqYD/SUg+6df804DkJv+8GY5PThpwPIzJAXCa4dzQ2e3IVG2PwJ71aemGP4ffMqRSZNC2JouOYEJ3kD6UL7dF4OSXS2m30F5O53xwBN6OOLy541NEH1fgOXj4LOfweSFipLOjysGmFaXYAhGSTDX+u1ByAUugonmF0BUJ1SRUd5r2n6y14bjuapUgOEfbYvfZ+guETixwwxs2FYlciiBq3M3FWGWvJ2VfMnXPjUjmnV2aFUs/JXOuiVTtJ3osM+D3PS/TqPrfIuFaaDTfnJOfjcSPnK7lA39Nrr4g5pOQTEv/6EUp4EIoHknNO5VnbGsk5hUEtQgd+fMOMg1nQ4ezxTRE48yyJMCXh8UmhQo63+qaqW8ZN9SiIJHsUBG2p1kpSCK4G2wcFMcIYs5wv0fMSb2xTRpG4EYcV00xI4QMPw/W8IdhC0e+CMPDzncDzcZEbTj9NcIPATlleu8gG2faSMt00IbXSoZZYNy3G24ylbgvVyqkaR65efLP52p0g45Q2bxzLAipEoSO85C8sXeUPqnr1Yrj1pfBOGZL/lrxOGExofiWs0UtJK+lgdARLI7XRVyNFuyRbHB2OcnRqHgaQwUknNJAMWni42u2KqDqiaQZ8DvT1K+NyDvOTEQgIefs55TzBwBSwug2xbpUE43BFLJkImbTGJ+Xgl5lUx0oke5E2w9m6brA5ybmUUXnmS1ro7tJbhIIlYcgYiCzhRcnuHEaA2fqNGVRTH2U+GLJwtI8InkKdquIoqcwX+dZCHs18qihlGOB8M8G5bdbCVY6a/d2EXgJWWYW0/VRBNwSDvIdHgmZ6JV45KC/xSNPvDrP7w0beH7e8b46kex6TR3Rb8kkixBEOKc8gCwstNm3sN2xkvLYvD1FZsK1ttECthZAiyilHCyFzkD6I8t5Cl6y5quXgmMYZWnbyHZoczzwUG1ZCYMstYbuB8UesLAdgBYM2yAsCjOuFItdAQ1klko7HvJFaoN5TMhhZsHICZFc2QMzCPalMOucyQCA5njHLAIK/cBq2qEiPVNcR3lhZnhq4EYa/0eQNKlyASUQxJdHFXUNVDwSDWmxrHBUCzFBsMyqN81MkZescm6ho3+JpK2gJOPOArpCEIJ7XaWUcciI0tWA00/Ouvp5+ec9A9UlcTLt/gC94RrgoIWrChKtEyiAgrUxMAqpkIg5zixgKuQXTU5yQUEcDGiEEtbgOfkrFq2K2uvKROA49ZRbfCJyuC0jAgFQ+hvWISBWbkHCuRVY9l8oDtx9D5nf4iGmFm8mLJ+qSm8A5W/n/kkgkCruD4Av1qSQMyIZFO5kQWBzFIVNWNB5ck3pz9y/G+6oOADhcGtdGGQKq2CFwUJ1VfeXnylltFPFF6LYwTpfViI3oewXxDILbSLhYR7CBBNms8OaPu9RvE7tyTL0V8hjAdqtC50DGQP/kCpvGC3y4VhCkYggEsRXAC4qsUBPXVfSRJKbO+iXNXZ/IsnpHm1fzh/XL8t8v+lq/uStbsu6XvdNSM4LU8Fw71RwQ4T7K4FgrxzwdMYuOktbBGy3WfB6DPM7S+zzvCuoN7FBqsR8bpv2pj6CZKUAOnsZ5Db8hkyIiEcQccjuhWrG7oN9hy51Gbt7mK+2y2i4hNm8fXm1y5c3d+bGyfYknTvzsTOcrbV0kUX52Blx8zLLYzND6+sXcK2CorEfcurWRV4gk5mYZ4oXd3/5pVuTevv4+HtcpGYQ1xJvtKu5DXaiclxILvR5AnHvDV+ZcmH9soSRmxEqBV4hQfuCwtAsPFEisMzrYCVM1JRX0vJm69Cuyil6pU8CAFPDwaAckVev667nohBMl63NlAWbKir0yP5Ir52hcvh/zX7qhwcne9Ts4qc4GzXbf37Nhq/lz3ToXeXXTv75fw== -------------------------------------------------------------------------------- /Chapter09/README.md: -------------------------------------------------------------------------------- 1 | # Chapter 9 - A deeper-dive into Data Marts and Amazon Redshift 2 | 3 | In this chapter, we learned how a cloud data warehouse can be used to 4 | optimize performance for hot data. We reviewed some common "anti-patterns" 5 | for data warehouse usage before diving deep into Redshift architecture to learn more 6 | about how Redshift optimizes data storage across nodes. 
7 | 
8 | We then reviewed some of the important design decisions that need to be made when
9 | creating a Redshift cluster in order to optimize performance while balancing costs, and also
10 | reviewed some of the advanced Redshift features.
11 | 
12 | ## Hands-on Activity
13 | In the hands-on section of this chapter we created a new Redshift Serverless cluster,
14 | configured Redshift Spectrum to query data from Amazon S3, and then loaded a
15 | subset of data from S3 into Redshift.
16 | 
17 | ### Uploading our sample data to Amazon S3
18 | In this exercise, we use some fake data that was created using a tool called [Mockaroo](https://www.mockaroo.com/).
19 | 
20 | Download our fake user data with the following link: [user_details.csv](user_details.csv). When you open the link, click on the three dots, and then click on `Download`.
21 | 
22 | Open the Amazon S3 console with the following link: https://s3.console.aws.amazon.com/s3
23 | 
24 | #### IAM Roles for Redshift
25 | 
26 | - AWS Management Console - IAM Roles: https://console.aws.amazon.com/iamv2/home?#/roles
27 | 
28 | #### Creating a Redshift cluster
29 | 
30 | - AWS Management Console - Redshift: https://console.aws.amazon.com/redshiftv2/
31 | 
32 | #### Using Redshift Spectrum to directly query data in the data lake
33 | 
34 | - The following query can be run in the Redshift Query Editor to create an external schema. Make sure to specify the ARN for the new role you created in place of the ***iam_role*** listed below.
35 | 
36 | ```
37 | create external schema spectrum_schema
38 | from data catalog
39 | database 'users'
40 | iam_role 'arn:aws:iam::1234567890:role/AmazonRedshiftSpectrumRole'
41 | create external database if not exists;
42 | ```
43 | 
44 | - The following query can be run to create a new external table. Make sure to replace ***INITIALS*** in the query below with the correct identifier for your Landing Zone bucket.
45 | 
46 | ```
47 | CREATE EXTERNAL TABLE spectrum_schema.user_details(
48 |     id INTEGER,
49 |     first_name VARCHAR(40),
50 |     last_name VARCHAR(40),
51 |     email VARCHAR(60),
52 |     gender VARCHAR(15),
53 |     address_1 VARCHAR(80),
54 |     address_2 VARCHAR(80),
55 |     city VARCHAR(40),
56 |     state VARCHAR(25),
57 |     zip VARCHAR(5),
58 |     phone VARCHAR(12)
59 | )
60 | row format delimited
61 | fields terminated by ','
62 | stored as textfile
63 | location 's3://dataeng-landing-zone-initials/users/'
64 | table properties ('skip.header.line.count'='1');
65 | ```
66 | 
67 | **Query 1:**
68 | ```
69 | select * from spectrum_schema.user_details limit 10;
70 | ```
71 | 
72 | 
73 | 
74 | 
75 | 
76 | 
77 | 
78 | 
79 | 
80 | 
--------------------------------------------------------------------------------
/Chapter11/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 11 - Ad-hoc queries with Amazon Athena
2 | 
3 | Amazon Athena is a serverless, fully managed service that lets you use SQL to directly query data in the data lake, as well as query various other databases. It requires no setup, and by default the cost is based on the amount of data that is scanned to complete the query, or based on the amount of provisioned capacity that you specify.
4 | 
5 | In this chapter, we did a deep dive into Athena, examining how Athena can be used to query data directly in the data lake, looking at advanced Athena functionality (such as the ability to query data from other data sources with Query Federation), and how Athena provides workgroup functionality to help with governance and cost management.
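
Everything in this chapter's hands-on activity is done through the Athena console, but the same queries can be submitted programmatically. The sketch below (our addition, not a chapter step) runs one of the queries used later in the hands-on activity with boto3; the database name, workgroup, and results bucket are placeholders to replace with your own values.

```
# A minimal sketch (not a book step) of running an Athena query with boto3
# against a specific workgroup. Database, workgroup, and output bucket are
# placeholders.
import time
import boto3

athena = boto3.client('athena')

query = """
SELECT category_name, count(category_name) streams
FROM streaming_films
GROUP BY category_name
ORDER BY streams DESC
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'curatedzonedb'},
    WorkGroup='primary',
    ResultConfiguration={'OutputLocation': 's3://athena-query-results-INITIALS/'}
)
query_id = execution['QueryExecutionId']

# Poll until the query reaches a terminal state
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(1)

if state == 'SUCCEEDED':
    rows = athena.get_query_results(QueryExecutionId=query_id)['ResultSet']['Rows']
    for row in rows:
        print([col.get('VarCharValue') for col in row['Data']])
```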
6 | 7 | ## Links 8 | [Considerations and Limitations for CTAS Queries](https://docs.aws.amazon.com/athena/latest/ug/considerations-ctas.html) 9 | 10 | [Airbnb blog post about the problem of small files](https://medium.com/airbnb-engineering/on-spark-hive-and-small-files-an-in-depth-look-at-spark-partitioning-strategies-a9a364f908) 11 | 12 | [Partitioning and bucketing in Athena](https://docs.aws.amazon.com/athena/latest/ug/ctas-partitioning-and-bucketing.html) 13 | 14 | [Partition Projection with Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/partition-projection.html) 15 | 16 | [Using EXPLAIN and EXPLAIN ANALYSE in Athena](https://docs.aws.amazon.com/athena/latest/ug/athena-explain-statement.html) 17 | 18 | [Functions in Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/functions.html) 19 | 20 | [Performance Tuning in Athena](https://docs.aws.amazon.com/athena/latest/ug/performance-tuning.html) 21 | 22 | [Using Amazon Athena Federated Query](https://docs.aws.amazon.com/athena/latest/ug/connect-to-a-data-source.html) 23 | 24 | [Athena Query Federation SDK](https://docs.aws.amazon.com/athena/latest/ug/connect-data-source-federation-sdk.html) 25 | 26 | [Explore your data lake using Amazon Athena for Apache Spark](https://aws.amazon.com/blogs/big-data/explore-your-data-lake-using-amazon-athena-for-apache-spark/) 27 | 28 | [Using Athena ACID transactions](https://docs.aws.amazon.com/athena/latest/ug/acid-transactions.html) 29 | 30 | [Tag-Based IAM Access Control Policies](https://docs.aws.amazon.com/athena/latest/ug/tags-access-control.html) 31 | 32 | ## Hands-on Activity 33 | 34 | In the hands-on activity section of this chapter, we create and configure a new Athena Workgroup and learn more about how Workgroups can help separate groups of users. 
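
Workgroups can also be created and configured programmatically rather than through the console steps that follow. Here is a minimal boto3 sketch (our addition, not a book step); the workgroup name and results bucket are hypothetical placeholders, and the scan limit is just an example of a per-query cost control.

```
# A minimal sketch (not a book step) of creating an Athena workgroup with
# its own result location and a per-query data-scan limit via boto3.
import boto3

athena = boto3.client('athena')

athena.create_work_group(
    Name='dataeng-analyst-workgroup',  # hypothetical name
    Description='Workgroup for the data analyst team',
    Configuration={
        'ResultConfiguration': {
            'OutputLocation': 's3://athena-query-results-INITIALS/analyst/'  # placeholder
        },
        'EnforceWorkGroupConfiguration': True,    # users cannot override these settings
        'PublishCloudWatchMetricsEnabled': True,  # emit per-workgroup usage metrics
        'BytesScannedCutoffPerQuery': 1073741824  # cancel any query scanning over 1 GB
    }
)
```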
35 | 36 | #### Creating an Amazon Athena workgroup and configuring Athena settings 37 | 38 | - AWS Management Console - Amazon Athena: https://console.aws.amazon.com/athena 39 | 40 | #### Switching Workgroups and running queries 41 | 42 | - AWS Documentation on IAM Policies for Accessing Workgroups: https://docs.aws.amazon.com/athena/latest/ug/workgroups-iam-policy.html 43 | 44 | - Query to determine most popular category of films (Step 3) 45 | ``` 46 | SELECT category_name, count(category_name) streams 47 | FROM streaming_films 48 | GROUP BY category_name 49 | ORDER BY streams DESC 50 | ``` 51 | 52 | - Query to determine which State streamed the most films (Step 7) 53 | ``` 54 | SELECT state, count(state) count 55 | FROM streaming_films 56 | GROUP BY state 57 | ORDER BY count desc 58 | ``` 59 | 60 | 61 | 62 | 63 | 64 | 65 | -------------------------------------------------------------------------------- /Chapter04/AthenaAccessCleanZoneDB: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "athena:*" 8 | ], 9 | "Resource": [ 10 | "*" 11 | ] 12 | }, 13 | { 14 | "Effect": "Allow", 15 | "Action": [ 16 | "glue:CreateDatabase", 17 | "glue:DeleteDatabase", 18 | "glue:GetDatabase", 19 | "glue:GetDatabases", 20 | "glue:UpdateDatabase", 21 | "glue:CreateTable", 22 | "glue:DeleteTable", 23 | "glue:BatchDeleteTable", 24 | "glue:UpdateTable", 25 | "glue:GetTable", 26 | "glue:GetTables", 27 | "glue:BatchCreatePartition", 28 | "glue:CreatePartition", 29 | "glue:DeletePartition", 30 | "glue:BatchDeletePartition", 31 | "glue:UpdatePartition", 32 | "glue:GetPartition", 33 | "glue:GetPartitions", 34 | "glue:BatchGetPartition" 35 | ], 36 | "Resource": [ 37 | "arn:aws:glue:*:*:catalog", 38 | "arn:aws:glue:*:*:database/cleanzonedb", 39 | "arn:aws:glue:*:*:database/cleanzonedb*", 40 | "arn:aws:glue:*:*:table/cleanzonedb/*" 41 | ] 42 | }, 43 | { 44 | "Effect": "Allow", 45 | "Action": [ 46 | "s3:GetBucketLocation", 47 | "s3:GetObject", 48 | "s3:ListBucket", 49 | "s3:ListBucketMultipartUploads", 50 | "s3:ListMultipartUploadParts", 51 | "s3:AbortMultipartUpload", 52 | "s3:PutObject" 53 | ], 54 | "Resource": [ 55 | "arn:aws:s3:::dataeng-clean-zone-gse23/*" 56 | ] 57 | }, 58 | { 59 | "Effect": "Allow", 60 | "Action": [ 61 | "s3:GetBucketLocation", 62 | "s3:GetObject", 63 | "s3:ListBucket", 64 | "s3:ListBucketMultipartUploads", 65 | "s3:ListMultipartUploadParts", 66 | "s3:AbortMultipartUpload", 67 | "s3:CreateBucket", 68 | "s3:PutObject", 69 | "s3:PutBucketPublicAccessBlock" 70 | ], 71 | "Resource": [ 72 | "arn:aws:s3:::aws-athena-query-results-*" 73 | ] 74 | }, 75 | { 76 | "Effect": "Allow", 77 | "Action": [ 78 | "s3:GetObject", 79 | "s3:ListBucket" 80 | ], 81 | "Resource": [ 82 | "arn:aws:s3:::athena-examples*" 83 | ] 84 | }, 85 | { 86 | "Effect": "Allow", 87 | "Action": [ 88 | "s3:ListBucket", 89 | "s3:GetBucketLocation", 90 | "s3:ListAllMyBuckets" 91 | ], 92 | "Resource": [ 93 | "*" 94 | ] 95 | }, 96 | { 97 | "Effect": "Allow", 98 | "Action": [ 99 | "sns:ListTopics", 100 | "sns:GetTopicAttributes" 101 | ], 102 | "Resource": [ 103 | "*" 104 | ] 105 | }, 106 | { 107 | "Effect": "Allow", 108 | "Action": [ 109 | "cloudwatch:PutMetricAlarm", 110 | "cloudwatch:DescribeAlarms", 111 | "cloudwatch:DeleteAlarms", 112 | "cloudwatch:GetMetricData" 113 | ], 114 | "Resource": [ 115 | "*" 116 | ] 117 | }, 118 | { 119 | "Effect": "Allow", 120 | "Action": [ 121 | "lakeformation:GetDataAccess" 122 | 
            ],
123 |             "Resource": [
124 |                 "*"
125 |             ]
126 |         },
127 |         {
128 |             "Effect": "Allow",
129 |             "Action": [
130 |                 "pricing:GetProducts"
131 |             ],
132 |             "Resource": [
133 |                 "*"
134 |             ]
135 |         }
136 |     ]
137 | }
138 | 
--------------------------------------------------------------------------------
/Chapter13/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 13 - Enabling Artificial Intelligence and Machine Learning
2 | 
3 | In this chapter, you learned more about the broad range of AWS ML and AI services,
4 | and had the opportunity to get hands-on with Amazon Comprehend, an AI service for
5 | extracting insights from written text.
6 | 
7 | We discussed how ML and AI services can apply to a broad range of use cases, both
8 | specialized (such as detecting cancer early) and general (business forecasting or
9 | personalization).
10 | 
11 | We examined different AWS services related to ML and AI. We looked at how different
12 | Amazon SageMaker capabilities can be used to prepare data for ML, build models, train
13 | and fine-tune models, and deploy and manage models. SageMaker makes building custom
14 | ML models much more accessible to developers without existing expertise in ML.
15 | 
16 | We then looked at a range of AWS AI services that provide prebuilt and trained models for
17 | common use cases. We looked at services for transcribing text from audio files (Amazon
18 | Transcribe), for extracting text from forms and handwritten documents (Amazon
19 | Textract), for recognizing images (Amazon Rekognition), and for extracting insights from
20 | text (Amazon Comprehend). We also briefly discussed other business-focused AI services,
21 | such as Amazon Forecast and Amazon Personalize.
22 | 
23 | Finally, we did a quick overview of what Generative AI, Foundation Models, and Large Language
24 | Models are, and examined how Generative AI solutions can be built on AWS.
25 | 
26 | ## Useful links
27 | The links below cover some of the services that were overviewed in this chapter.
28 | 
29 | ### SageMaker Components
30 | - [Amazon SageMaker](https://aws.amazon.com/sagemaker/)
31 | - [Amazon SageMaker Ground Truth](https://aws.amazon.com/sagemaker/data-labeling/)
32 | - [Amazon SageMaker Data Wrangler](https://aws.amazon.com/sagemaker/data-wrangler)
33 | - [Amazon SageMaker Clarify](https://aws.amazon.com/sagemaker/clarify)
34 | - [Amazon SageMaker Notebooks](https://aws.amazon.com/sagemaker/notebooks/)
35 | - [Amazon SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot)
36 | - [Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart)
37 | - [Amazon SageMaker Experiments](https://aws.amazon.com/sagemaker/experiments/)
38 | - [Amazon SageMaker Model Monitor](https://aws.amazon.com/sagemaker/model-monitor/)
39 | 
40 | ### AWS Services for AI
41 | - [Amazon Transcribe](https://aws.amazon.com/transcribe/)
42 | - [Amazon Textract](https://aws.amazon.com/textract/)
43 | - [Amazon Comprehend](https://aws.amazon.com/comprehend/)
44 | - [Amazon Rekognition](https://aws.amazon.com/rekognition/)
45 | - [Amazon Forecast](https://aws.amazon.com/forecast/)
46 | - [Amazon Fraud Detector](https://aws.amazon.com/fraud-detector/)
47 | - [Amazon Personalize](https://aws.amazon.com/personalize/)
48 | 
49 | ### AWS Services for Generative AI
50 | - [Blog post - Get started with generative AI on AWS using Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/get-started-with-generative-ai-on-aws-using-amazon-sagemaker-jumpstart/)
51 | - [Amazon Bedrock](https://aws.amazon.com/bedrock/)
52 | - [Amazon Titan](https://aws.amazon.com/bedrock/titan/)
53 | 
54 | ## Hands-on Activity
55 | In the hands-on activity section of this chapter we looked at how we can use the
56 | [Amazon Comprehend](https://aws.amazon.com/comprehend/) service to gain insight into the sentiment
57 | of reviews posted to a website. We configured an SQS queue to receive the details
58 | of newly posted reviews, and had a Lambda function configured to pass the review text to Amazon
59 | Comprehend to gain insight into the review sentiment (positive, negative, or neutral).
60 | 
61 | #### Setting up a new Amazon SQS message queue
62 | 
63 | - AWS Management Console - SQS: https://console.aws.amazon.com/sqs/v2/.
64 | 
65 | #### Creating a Lambda function for calling Amazon Comprehend
66 | 
67 | - AWS Management Console - Lambda: https://console.aws.amazon.com/lambda/
68 | 
69 | - Lambda function code for calling Amazon Comprehend for sentiment analysis: [website-reviews-analysis-role.py](website-reviews-analysis-role.py)
70 | 
71 | **[ Make sure to set region on Line 4 to the correct region that you are using for the hands-on activities in this book ]**
72 | 
73 | #### Testing the solution with Amazon Comprehend
74 | 
75 | - Example of a positive review
76 | 
77 | ```
78 | I recently stayed at the Kensington Hotel in downtown Cape Town and was very impressed.
79 | The hotel is beautiful, the service from the staff is amazing, and the sea views cannot be beaten.
80 | If you have the time, stop by Elizabeth's Kitchen, a coffee shop not far from the hotel,
81 | to get a coffee and try some of their delicious cakes and baked goods.
82 | ``` 83 | 84 | 85 | 86 | 87 | 88 | 89 | -------------------------------------------------------------------------------- /Chapter06/README.md: -------------------------------------------------------------------------------- 1 | # Chapter 6 - Ingesting Batch and Streaming Data 2 | 3 | In this chapter, we discussed the five Vs of data (volume, velocity, variety, veracity, and value). 4 | We then reviewed a few different approaches for ingesting data from databases and from streaming data sources. 5 | 6 | ## Hands-on Activity 7 | In the ***hands-on activity*** section of this chapter, we deployed an **AWS CloudFormation** template that provisioned an **Amazon RDS MySQL** database instance, as well as an **Amazon EC2** instance, which was used to load a demo database into the MySQL database instance. We configured **AWS Database Migration Service (DMS)** to ingest data from the MySQL database, and then we configured **Amazon Kinesis Data Firehose** to ingest streaming data that we generated using the **Amazon Kinesis Data Generator (KDG)**. 8 | 9 | ### Deploying MySQL and an EC2 data loader via AWS CloudFormation 10 | - Download the CloudFormation template from [here](./mysql-ec2loader.cfn) 11 | 12 | - AWS Management Console - CloudFormation: https://us-east-2.console.aws.amazon.com/cloudformation 13 | 14 | - CloudFormation Stack Name: `dataeng-aws-chapter6-mysql-ec2` 15 | 16 | #### Creating an IAM policy and role for DMS 17 | 18 | - AWS Management Console - IAM Policies: https://console.aws.amazon.com/iamv2/home?#/policies 19 | 20 | - AWS IAM policy for `DataEngDMSLandingS3BucketPolicy`. **Change INITIALS in the policy below to match the name of the landing zone bucket that you previously created**. 21 | ``` 22 | { 23 | "Version": "2012-10-17", 24 | "Statement": [ 25 | { 26 | "Effect": "Allow", 27 | "Action": [ 28 | "s3:*" 29 | ], 30 | "Resource": [ 31 | "arn:aws:s3:::dataeng-landing-zone-INITIALS", 32 | "arn:aws:s3:::dataeng-landing-zone-INITIALS/*" 33 | ] 34 | } 35 | ] 36 | } 37 | ``` 38 | - IAM Policy Name: `DataEngDMSLandingS3BucketPolicy` 39 | - IAM Role Name: `DataEngDMSLandingS3BucketRole` 40 | 41 | #### Configuring DMS settings and performing a full load from MySQL to S3 42 | - Replication instance name: `mysql-s3-replication` 43 | - Target endpoint identifier: `s3-landing-zone-sakilia-csv` 44 | - Migration task identifier: `dataeng-mysql-s3-sakila-task` 45 | - Schema Source Name: `%sakila%` 46 | 47 | #### Querying data with Amazon Athena 48 | - Athena S3 Query Bucket Name: `athena-query-results-<initials>`. **Change `<initials>` to the unique identifier you have been using for bucket names**.
49 | 50 | - Athena query to validate that data has been successfully ingested using DMS 51 | `select * from film limit 20;` 52 | 53 | - Further reading: [Using Amazon S3 as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html) 54 | 55 | - Further reading: [Using a MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html) 56 | 57 | ### Ingesting Streaming Data 58 | 59 | #### Configuring Kinesis Data Firehose for streaming delivery to Amazon S3 60 | - AWS Management Console - Kinesis Firehose: https://us-east-2.console.aws.amazon.com/firehose/home 61 | - Kinesis Firehose Delivery Stream Name: `dataeng-firehose-streaming-s3` 62 | - *S3 Bucket Prefix* for Kinesis Data Firehose: `streaming/!{timestamp:yyyy/MM/}` 63 | - *S3 Bucket Error Output Prefix* for Kinesis Data Firehose: `!{firehose:error-output-type}/!{timestamp:yyyy/MM/}` 64 | 65 | #### Configuring Amazon Kinesis Data Generator (KDG) 66 | - Kinesis Data Generator Help Page: https://awslabs.github.io/amazon-kinesis-data-generator/web/help.html 67 | - Documentation for mapping region names to region IDs: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html 68 | - Record template for Kinesis Data Generator 69 | ``` 70 | { 71 | "timestamp":"{{date.now}}", 72 | "eventType":"{{random.weightedArrayElement( 73 | { 74 | "weights": [0.3,0.1,0.6], 75 | "data": ["rent","buy","trailer"] 76 | } 77 | )}}", 78 | "film_id":{{random.number( 79 | { 80 | "min":1, 81 | "max":1000 82 | } 83 | )}}, 84 | "distributor":"{{random.arrayElement( 85 | ["amazon prime", "google play", "apple itunes","vudo", "fandango now", "microsoft", "youtube"] 86 | )}}", 87 | "platform":"{{random.arrayElement( 88 | ["ios", "android", "xbox", "playstation", "smarttv", "other"] 89 | )}}", 90 | "state":"{{address.state}}" 91 | } 92 | ``` 93 | 94 | #### Querying data with Amazon Athena 95 | - AWS Management Console - Glue: https://us-east-2.console.aws.amazon.com/glue/home 96 | - Crawler name: `dataeng-streaming-crawler` 97 | - Crawler S3 data path: `s3://dataeng-landing-zone-<initials>/streaming/` (**Change `<initials>` to the unique identifier used for your landing zone bucket**) 98 | - Athena query to validate that data has been successfully ingested using Kinesis Data Firehose: 99 | `select * from streaming limit 20;` 100 | 101 | 102 | 103 | -------------------------------------------------------------------------------- /Chapter16/README.md: -------------------------------------------------------------------------------- 1 | # Chapter 16 - Building a Modern Data Platform on AWS 2 | 3 | In this chapter, we reviewed high-level concepts around building a modern data platform on AWS. 4 | We covered some of the puzzle pieces that need to be in place in order to build a data platform that is flexible and agile, scalable, 5 | well governed, and secure, while also being easy to use and enabling data producers and consumers to self-serve. 6 | 7 | We examined the pros and cons of building or buying a data platform, and then looked at how a DataOps approach to the development 8 | of a platform helps enable automation and observability. We examined how tools such as CloudFormation can be used to automate and manage 9 | the process of deploying data infrastructure, and how we can use Source Control Systems to manage our code for data transformation (such 10 | as PySpark jobs that we run in AWS Glue).
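To make the idea of keeping data transformation code under source control more concrete, below is a minimal sketch of a Glue PySpark job skeleton of the kind that would live in such a repository. This is not the book's `Glue-streaming_views_by_category.py` script (referenced later in this README); the bucket paths and the `category_name` column are illustrative assumptions, with `<initials>` as a placeholder.

```python
# Minimal sketch of a Glue PySpark job skeleton kept under source control.
# Bucket paths and column names are illustrative assumptions only.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session

# Read source data, aggregate views per category, and write to the curated zone
df = spark.read.parquet('s3://dataeng-clean-zone-<initials>/streaming/')
views_by_category = df.groupBy('category_name').count()
views_by_category.write.mode('overwrite').parquet(
    's3://dataeng-curated-zone-<initials>/views_by_category/')
```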
11 | 12 | We then examined a number of tools that enable a DataOps approach, including CloudFormation, CodeCommit, CodeBuild, and CodePipeline. 13 | 14 | ## Hands-on Activity 15 | In the hands-on activity for this chapter, we set up a CodeCommit repository, and then added Glue ETL code and a CloudFormation template 16 | to the repository using Git. We then built two pipelines using CodePipeline - one to write out Glue ETL code to S3 whenever the file is updated in 17 | CodeCommit, and a second one to deploy our CloudFormation template whenever the template is updated. 18 | 19 | #### Setting up a Cloud9 IDE environment 20 | 21 | - AWS Management Console - Cloud9: https://us-east-2.console.aws.amazon.com/cloud9control/home 22 | 23 | - Commands to configure a Git username and email address. Change the example below to reflect your name and email address. 24 | ``` 25 | git config --global user.name "Gareth Eagar" 26 | git config --global user.email gareth.eagar@example.com 27 | ``` 28 | 29 | - Configuring the AWS CLI credential helper to automate provisioning of credentials needed to authenticate to CodeCommit 30 | ``` 31 | git config --global credential.helper '!aws codecommit credential-helper $@' 32 | git config --global credential.UseHttpPath true 33 | ``` 34 | 35 | - Creating a directory for your Git repository 36 | ``` 37 | cd /home/ec2-user/environment 38 | mkdir git 39 | cd git 40 | ``` 41 | 42 | #### Setting up our AWS CodeCommit repository 43 | 44 | - AWS Management Console - CodeCommit: https://us-east-2.console.aws.amazon.com/codesuite/codecommit/repositories 45 | 46 | - Use the HTTPS link under Clone URL to copy the link to clone your repository. Then run `git clone <clone URL>`. For example: 47 | ``` 48 | git clone https://git-codecommit.us-east-2.amazonaws.com/v1/repos/data-product-film 49 | ``` 50 | 51 | - Create new directories in the repository, using the Cloud9 terminal window 52 | ``` 53 | cd data-product-film 54 | mkdir glueETL_code 55 | mkdir cfn_templates 56 | ``` 57 | 58 | - Create a new Amazon S3 bucket to store the resources for our film data product. 59 | ``` 60 | aws s3 mb s3://data-product-film-initials 61 | ``` 62 | 63 | #### Adding a Glue ETL script and CloudFormation template into our repository 64 | 65 | - Access the [Glue-streaming_views_by_category.py](/Chapter16/Glue-streaming_views_by_category.py) file in this repository, and paste it into a new file in your Cloud9 IDE environment. 66 | 67 | **Make sure to change the S3 bucket path on Line 47 of the code to match the name of your curated zone bucket** 68 | 69 | - Access the [CFN-glue_job-streams_by_category.cfn](/Chapter16/CFN-glue_job-streams_by_category.cfn) file in this repository, and paste it into a new file in your Cloud9 IDE environment. 70 | 71 | **Make sure to change the S3 bucket path on Line 22 of the template to match the name of the bucket you created earlier in this exercise** 72 | 73 | - Commit the newly created files into your repository by running the following commands in the Cloud9 terminal 74 | ``` 75 | git add .
76 | git commit -m "Initial commit of CloudFormation template and Glue code for our streaming views by category data product" 77 | git push 78 | ``` 79 | 80 | - Access the [CodeCommit](https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories) service in the AWS Management Console, and ensure that the two files have been written to your repository 81 | 82 | #### Automating deployment of our Glue code 83 | 84 | - AWS Management Console - CodePipeline: https://us-east-1.console.aws.amazon.com/codesuite/codepipeline/pipelines 85 | 86 | #### Automating deployment of our Glue job 87 | 88 | - AWS Management Console - IAM: https://us-east-1.console.aws.amazon.com/iamv2/home 89 | 90 | - Edit the `DataEngGlueCWS3CuratedZoneRole`, and replace the current **Trust relationship** JSON with the JSON in [DataEngGlueCWS3CuratedZoneRole-trust-policy.json](/Chapter16/DataEngGlueCWS3CuratedZoneRole-trust-policy.json) in this repository 91 | 92 | - In the permissions tab for the role, edit the `DataEngGlueCWS3CuratedZoneWrite` policy to add in S3 permissions for the new bucket you created earlier. The new S3 permissions should look as follows (but reflect the name of your buckets): 93 | ``` 94 | { 95 | "Effect": "Allow", 96 | "Action": [ 97 | "s3:GetObject" 98 | ], 99 | "Resource": [ 100 | "arn:aws:s3:::dataeng-landing-zone-gse23/*", 101 | "arn:aws:s3:::dataeng-clean-zone-gse23/*", 102 | "arn:aws:s3:::data-product-film-gse23/*" 103 | ] 104 | } 105 | ``` 106 | 107 | 108 | 109 | 110 | 111 | 112 | 113 | 114 | -------------------------------------------------------------------------------- /Chapter14/README.md: -------------------------------------------------------------------------------- 1 | # Chapter 14 - Building Transactional Data Lakes 2 | 3 | In this chapter, we did a deeper dive into three common transactional table formats (also sometimes referred to as open table formats), namely Delta 4 | Lake, Apache Hudi, and Apache Iceberg. 5 | 6 | We looked into the limitations of the traditional Hive format, and discussed how these new transactional formats provide a number of benefits for 7 | building modern data lakes that are able to behave more like a traditional data warehouse. We discussed benefits such as being able to use 8 | ACID transactions, perform record-level updates, run time travel queries, handle schema evolution, and more. 9 | 10 | We then 'looked under the covers' at how these open table formats work by tracking metadata at the table level, and at some of the different 11 | approaches for performing updates: Copy-on-Write (COW), which provides better read performance, and Merge-on-Read (MOR), which provides better 12 | write performance. 13 | 14 | Each of the three popular table formats we discussed in this section has its own pros and cons, and while there was not space to go into great 15 | depth on how each of them works, we did do a bit of a deeper dive into Delta Lake, Apache Hudi, and Apache Iceberg. Finally, before getting hands-on, 16 | we reviewed the support for each of the table formats across different AWS services (such as Glue, EMR, Redshift, and Athena). 17 | 18 | ## Hands-on Activity 19 | In the hands-on activity section of this chapter, we used Amazon Athena to create an Apache Iceberg formatted table, added data to the table, 20 | and then looked at how the underlying metadata was changed as we deleted data and performed table maintenance tasks. We also used the time travel 21 | feature of Iceberg to see how we could query data as it was at a previous point in time.
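The steps below use the Athena query editor, but the same SQL can also be run programmatically. Here is a minimal sketch using the AWS SDK for pandas (awswrangler), which is an alternative the chapter does not use itself; the database and table names match the hands-on steps that follow.

```python
# Optional sketch: run the Athena queries from this section programmatically
# with the AWS SDK for pandas (awswrangler), instead of the Athena console.
import awswrangler as wr

# Query the Iceberg table created in the steps below
df = wr.athena.read_sql_query(
    sql='SELECT * FROM streaming_films_ib LIMIT 50',
    database='curatedzonedb_iceberg',
)
print(df.head())
```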
22 | 23 | #### Creating an Apache Iceberg table using Amazon Athena 24 | 25 | - AWS Management Console - Athena: https://console.aws.amazon.com/athena/home 26 | 27 | - Use the following statement to create a new Athena database 28 | ``` 29 | create database curatedzonedb_iceberg; 30 | ``` 31 | 32 | - Use the following statement to create a new table in Apache Iceberg format. **Make sure to change the S3 bucket location below to 33 | match the location of your curated-zone bucket**. 34 | ``` 35 | CREATE TABLE curatedzonedb_iceberg.streaming_films_ib( 36 | timestamp string, 37 | eventtype string, 38 | film_id_streaming int, 39 | distributor string, 40 | platform string, 41 | state string, 42 | ingest_year string, 43 | ingest_month string, 44 | category_id bigint, 45 | category_name string, 46 | film_id bigint, 47 | title string, 48 | description string, 49 | release_year bigint, 50 | language_id bigint, 51 | original_language_id double, 52 | length bigint, 53 | rating string, 54 | special_features string 55 | ) 56 | PARTITIONED BY (category_name) 57 | LOCATION 's3://dataeng-curated-zone-gse23/iceberg/streaming_films/' 58 | TBLPROPERTIES ('table_type' = 'ICEBERG', 'format' = 'parquet') 59 | ``` 60 | 61 | - Access the Amazon S3 console to review metadata files: https://s3.console.aws.amazon.com/s3 62 | 63 | #### Adding data to our Iceberg table and running queries 64 | 65 | - Run the following statement to insert data from the previously created streaming_films table (created in Chapter 7) into our new 66 | Apache Iceberg formatted table created in the previous step. 67 | ``` 68 | insert into curatedzonedb_iceberg.streaming_films_ib 69 | select * 70 | from curatedzonedb.streaming_films 71 | ``` 72 | 73 | - Open the AWS Glue console to view table properties: https://console.aws.amazon.com/glue 74 | 75 | - Run the following statement to query the newly created table 76 | ``` 77 | select * from curatedzonedb_iceberg.streaming_films_ib limit 50; 78 | ``` 79 | 80 | - Run the following three statements (one at a time) to view table metadata. 81 | ``` 82 | select * from "curatedzonedb_iceberg"."streaming_films_ib$manifests" 83 | ``` 84 | 85 | ``` 86 | select * from "curatedzonedb_iceberg"."streaming_films_ib$files" 87 | ``` 88 | 89 | ``` 90 | select * from "curatedzonedb_iceberg"."streaming_films_ib$partitions" 91 | ``` 92 | 93 | #### Modifying data in our Iceberg table and running queries 94 | 95 | - Delete data from the table for all movies in the *Documentary* category, using the following statement 96 | ``` 97 | delete from curatedzonedb_iceberg.streaming_films_ib where category_name='Documentary' 98 | ``` 99 | 100 | - Use the following statement to list out the current partitions, and note how there are only 15 partitions now 101 | ``` 102 | select * from "curatedzonedb_iceberg"."streaming_films_ib$partitions" 103 | ``` 104 | 105 | - Use the following statement to list out details about the current manifest: 106 | ``` 107 | select * from "curatedzonedb_iceberg"."streaming_films_ib$manifests" 108 | ``` 109 | - Use the following statement to query the snapshot history: 110 | 111 | ``` 112 | select * from "curatedzonedb_iceberg"."streaming_films_ib$history" 113 | ``` 114 | 115 | - Use the following statement to do a *time travel* query, where you query the table as it was at a point in the past. **Ensure you change the 116 | TIMESTAMP in the statement below to be a UTC time between when your first and second snapshots were created**.
117 | ``` 118 | SELECT * FROM "curatedzonedb_iceberg"."streaming_films_ib" FOR TIMESTAMP AS OF TIMESTAMP '2023-07-23 19:00:00 UTC' where category_name = 'Documentary' 119 | ``` 120 | 121 | #### Iceberg table maintenance tasks 122 | 123 | - Use the following statement to display the list of files that make up the current snapshot: 124 | ``` 125 | select * from "curatedzonedb_iceberg"."streaming_films_ib$files" 126 | ``` 127 | 128 | - Execute the OPTIMIZE command by running the following statement: 129 | ``` 130 | OPTIMIZE curatedzonedb_iceberg.streaming_films_ib REWRITE DATA USING BIN_PACK 131 | ``` 132 | 133 | - Use the following statement to display the list of files that make up the current, optimized snapshot: 134 | ``` 135 | select * from "curatedzonedb_iceberg"."streaming_films_ib$files" 136 | ``` 137 | 138 | - Use the following statement to list the table's snapshot history: 139 | ``` 140 | select * from "curatedzonedb_iceberg"."streaming_films_ib$history" 141 | ``` 142 | 143 | - Change the table property for `vacuum_max_snapshot_age_seconds` to be just 60 seconds 144 | 145 | ``` 146 | ALTER TABLE curatedzonedb_iceberg.streaming_films_ib SET TBLPROPERTIES ( 147 | 'vacuum_max_snapshot_age_seconds'='60' 148 | ) 149 | ``` 150 | 151 | - View the properties of our Iceberg table to ensure that the setting was applied: 152 | 153 | ``` 154 | SHOW TBLPROPERTIES curatedzonedb_iceberg.streaming_films_ib 155 | ``` 156 | 157 | - Run the following statement to VACUUM the Iceberg table 158 | 159 | ``` 160 | VACUUM curatedzonedb_iceberg.streaming_films_ib 161 | ``` 162 | 163 | 164 | 165 | 166 | 167 | 168 | 169 | 170 | 171 | -------------------------------------------------------------------------------- /Chapter05/Data-Engineering-Completed-Whiteboard.drawio: -------------------------------------------------------------------------------- 1 | 
(Compressed draw.io diagram data omitted - this file stores the completed whiteboard diagram in draw.io's compressed XML format, which is not human-readable; open the .drawio file in the draw.io / diagrams.net editor to view it) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Data Engineering with AWS, Second Edition 2 | This is the code repository for [Data Engineering with AWS, Second Edition](https://www.packtpub.com/product/data-engineering-with-aws-second-edition/9781804614426), published by Packt. 3 | **Acquire the skills to design and build AWS-based data transformation pipelines like a pro** 4 | 5 | The author of this book is [Gareth Eagar](https://www.linkedin.com/in/garetheagar/) 6 | ## About the book 7 | 8 | This book, authored by a seasoned Senior Data Architect with 25 years of experience, aims to help you achieve proficiency in using the AWS ecosystem for data engineering. This revised edition provides updates in every chapter to cover the latest AWS services and features, takes a refreshed look at data governance, and includes a brand-new section on building modern data platforms, which covers implementing a data mesh approach, open-table formats (such as Apache Iceberg), and using DataOps for automation and observability. 9 | 10 | You'll begin by reviewing the key concepts and essential AWS tools in a data engineer's toolkit and by getting acquainted with modern data management approaches. You'll then architect a data pipeline, review raw data sources, transform the data, and learn how that transformed data is used by various data consumers. You'll learn how to ensure strong data governance, and about populating data marts and data warehouses along with how a data lakehouse fits into the picture. After that, you'll be introduced to AWS tools for analyzing data, including those for ad-hoc SQL queries and creating visualizations. Then, you'll explore how the power of machine learning and artificial intelligence can be used to draw new insights from data. In the final chapters, you'll discover transactional data lakes, data meshes, and how to build a cutting-edge data platform on AWS. 11 | 12 | By the end of this AWS book, you'll be able to execute data engineering tasks and implement a data pipeline on AWS like a pro! 13 | 14 | ## Key Takeaways 15 | - Delve into robust AWS tools for ingesting, transforming, and consuming data, and for orchestrating pipelines 16 | - Stay up to date with a comprehensive revised chapter on Data Governance 17 | - Build modern data platforms with a new section covering transactional data lakes and data mesh 18 | 19 | 20 | 21 | ## New Edition vs. Previous Edition: 22 | - New and refreshed content. 23 | - Fully revised with the most up-to-date information on AWS.
24 | 25 | 26 | 27 | 28 | ## What's New 29 | - Updates to every single chapter to cover all the newest AWS services and features. 30 | - Three brand-new chapters, including content on building transactional data lakes, implementing a data mesh approach on AWS, and using a DataOps approach to data platform building. 31 | - Modernized coverage of AI and ML functionality within AWS. 32 | - A new and refreshed look at data governance strategies. 33 | 34 | 35 | 36 | 37 | 38 | ## Outline and Chapter Summary 39 | This new edition builds on the work of the original to bring you a fully refreshed, definitive guide to building data pipelines in AWS. You'll move from the basics of learning what data engineering is and what goes into designing strong architectures, to exploring the tools and services available to help you and how to get the most out of them, before covering recent developments in the data engineering space, like transactional data lakes, AI, and ML. 40 | Throughout the book you'll cement your newfound knowledge with a series of easy-to-follow practical exercises. 41 | 42 | 1. [An Introduction to Data Engineering](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter01) 43 | 2. [Data Management Architectures for Analytics](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter02) 44 | 3. [The AWS Data Engineers Toolkit](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter03) 45 | 4. [Data Cataloging, Security, and Governance](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter04) 46 | 5. [Architecting Data Engineering Pipelines](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter05) 47 | 6. [Ingesting Batch and Streaming Data](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter06) 48 | 7. [Transforming Data to Optimize for Analytics](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter07) 49 | 8. [Identifying and Enabling Data Consumers](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter08) 50 | 9. [Loading Data into a Data Mart](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter09) 51 | 10. [Orchestrating the Data Pipeline](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter10) 52 | 11. [Ad Hoc Queries with Amazon Athena](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter11) 53 | 12. [Visualizing Data with Amazon QuickSight](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter12) 54 | 13. [Enabling Artificial Intelligence and Machine Learning](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter13) 55 | 14. [Transactional Data Lakes](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter14) 56 | 15. [Implementing a Data Mesh strategy](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter15) 57 | 16. [Building a modern data platform on AWS](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter16) 58 | 17. 
[Wrapping Up the First Part of Your Learning Journey](https://github.com/PacktPublishing/Data-Engineering-with-AWS-2nd-edition/tree/main/Chapter17) 59 | 60 | ### Chapter 01, An Introduction to Data Engineering 61 | This chapter goes into the burgeoning field of data engineering, shedding light on its rapid growth and the soaring demand for professionals in this domain. With data assuming an increasingly pivotal role in organizations of all scales, the chapter underscores the gratification derived by individuals who relish assembling intricate data pipelines that not only ingest raw data, but also undertake its transformation and optimization for diverse consumer needs. It stresses the significance of data as a corporate asset, highlighting its escalating value, while concurrently examining the hurdles faced by organizations coping with surges in data volume. The chapter elucidates how data engineers can leverage cloud-based services to surmount these challenges, setting the stage for the practical activities presented later in the book. It also provides a comprehensive guide for creating a new Amazon Web Services (AWS) account. 62 | 63 | 64 | **Key Insights**: 65 | - Data engineering is a rapidly growing career path, with high demand being driven by the increasing importance of data in organizations. 66 | - Data engineers play a critical role in building complex data pipelines that ingest, transform, and optimize raw data for various consumers. 67 | - The chapter emphasizes the value of data as a corporate asset and discusses the challenges organizations face with growing data volumes. 68 | - Cloud-based services are highlighted as a means for data engineers to address these challenges effectively. 69 | - The chapter also introduces the foundational step of creating an Amazon Web Services (AWS) account, setting the stage for hands-on activities in later chapters. 70 | 71 | 72 | 73 | ### Chapter 02, Data Management Architectures for Analytics 74 | This chapter underlines the diversity of options available in terms of cloud services, open-source frameworks, file formats, and architectural approaches for analytical projects, depending on specific business requirements. It serves as a foundational resource for understanding modern analytical architectures, regardless of whether one chooses to implement them on AWS or other platforms. The chapter provides essential introductory concepts that lay the groundwork for subsequent chapters, covering key topics such as the evolution of data management for analytics, in-depth exploration of data warehouse concepts and architecture, an overview of data lake architecture, and the synergy between data warehouses and data lakes. It also includes a practical component, guiding readers on using the AWS Command Line Interface (CLI) to create Simple Storage Service (S3) buckets. 75 | 76 | Furthermore, the chapter emphasizes the importance of foundational architectural concepts in designing real-world analytics data management and processing solutions. It introduces three prevalent analytics data management architectures used in organizations today: data warehouses, data lakes, and data lakehouses. 77 | 78 | **Key Insights**: 79 | 80 | - Various cloud services, open-source frameworks, file formats, and architectural approaches can be employed in analytical projects, with choices dependent on specific business requirements. 
81 | - Foundational concepts and technologies essential for modern analytical architectures are discussed, irrespective of the cloud platform used, laying the groundwork for subsequent chapters. 82 | - The chapter covers the evolution of data management for analytics, data warehouses, data lakes, and data lakehouses. 83 | - Practical guidance is provided on using the AWS CLI to create S3 buckets. 84 | 85 | 86 | ### Chapter 03, The AWS Data Engineers Toolkit 87 | This chapter highlights how cloud computing has transformed the traditional approach to big data processing. In the past, organizations grappled with the complexity of building and maintaining their own data processing systems, often leading to significant expenditures and delays. With AWS, many of these challenges have been eliminated, enabling the quick deployment of fully configured software solutions, automatic updates, and scalability without the need for large upfront capital investment. The chapter introduces a range of AWS-managed services used for building big data solutions, and emphasizes the importance of selecting the most suitable service for specific requirements. 88 | 89 | The chapter provides a comprehensive overview of AWS services for data ingestion, data transformation, orchestrating big data pipelines, and data consumption. It also includes a hands-on component, guiding readers on how to trigger an AWS Lambda function when a new file arrives in an S3 bucket, offering a practical understanding of these services. The text culminates with an introduction to data governance, emphasizing its crucial role in data engineering projects. 90 | 91 | **Key Insights**: 92 | - AWS, launched in 2006, has been instrumental in shaping the cloud computing industry by continuously innovating and expanding its service offerings. 93 | - Traditional data processing systems in organizations were complex, expensive, and required significant maintenance and scaling efforts, whereas AWS has simplified these challenges by providing on-demand and scalable solutions. 94 | - AWS offers over 200 services, including analytics services, which data engineers can use to build complex data analytic pipelines. 95 | - The chapter introduces key AWS-managed services for data ingestion, transformation, orchestration of big data pipelines, and data consumption, emphasizing the importance of selecting the right service for specific requirements. 96 | - A hands-on component guides readers in triggering an AWS Lambda function when a new file arrives in an S3 bucket. 97 | 98 | 99 | ### Chapter 04, Data Cataloging, Security, and Governance 100 | In this chapter, the text explores the best practices for responsible data handling, aiming to ensure that data's value is fully realized by organizations. It covers various aspects of data governance, including data security, access, and privacy, data quality, data profiling, and data lineage. It introduces the significance of data catalogs and AWS services that aid in data governance. A hands-on component guides readers in configuring Lake Formation permissions, offering practical insights into implementing data governance. The chapter underscores the necessity of a robust data governance program to ensure correct data usage and its optimal value to the organization. 101 | 102 | **Key Insights**: 103 | - Data governance and security are crucial components of data management, ensuring that data remains secure, compliant with local laws, and discoverable for organizational use. 
### Chapter 04, Data Cataloging, Security, and Governance
In this chapter, the text explores best practices for responsible data handling, aiming to ensure that the value of data is fully realized by organizations. It covers various aspects of data governance, including data security, access, and privacy; data quality and data profiling; and data lineage. It introduces the significance of data catalogs and the AWS services that aid in data governance. A hands-on component guides readers through configuring Lake Formation permissions, offering practical insights into implementing data governance. The chapter underscores the necessity of a robust data governance program to ensure correct data usage and optimal value to the organization.

**Key Insights**:
- Data governance and security are crucial components of data management, ensuring that data remains secure, compliant with local laws, and discoverable for organizational use.
- Data breaches and mishandling can lead to reputational damage and government-imposed penalties, making it essential for organizations to prioritize responsible data handling.
- Siloed data, poor data quality, and a lack of user trust can hinder organizations from maximizing the value of their data.
- The hands-on component provides practical insights into configuring data governance in an AWS environment.

### Chapter 05, Architecting Data Engineering Pipelines
The chapter provides a structured approach to designing data pipelines, beginning with the importance of understanding data consumers and their specific requirements. It also covers the identification of data sources, the process of data ingestion, and the essential steps of data transformation and optimization. Loading data into data marts is another critical component discussed, along with a practical hands-on session that guides readers through architecting a sample data pipeline. This chapter offers a comprehensive guide for translating data engineering concepts into real-world applications, enabling readers to bridge the gap between theory and practice.

**Key Insights**:
- Data pipelines are pivotal in data engineering, serving as the means to ingest data from multiple sources, optimize and transform it, and make it available to data consumers.
- The chapter focuses on the practical application of data engineering principles, enabling readers to translate theoretical knowledge into real-world data pipeline architecture.
- It emphasizes the importance of understanding the needs and requirements of data consumers as a fundamental step in the data pipeline design process.
- The chapter concludes with a hands-on exercise that allows readers to architect a sample data pipeline, providing practical experience in implementing the concepts covered.

### Chapter 06, Ingesting Batch and Streaming Data
The chapter covers several fundamental topics, including understanding data sources and ingesting data from both database and streaming sources. It underscores the importance of using the right ingestion tools to ensure efficient data movement. The hands-on portion of the chapter allows readers to practice data ingestion from both database and streaming sources using AWS DMS, Amazon Kinesis, and AWS Glue (a minimal Kinesis producer is sketched after this section).

**Key Insights**:
- The chapter emphasizes the critical role of data ingestion within the data pipeline architecture, providing a detailed exploration of this foundational process.
- Data engineers often face the challenges of the five Vs of data: variety, volume, velocity, veracity, and value; the chapter underscores the importance of addressing these challenges in data engineering.
- The chapter covers various data sources and the multitude of tools available within AWS for effective data ingestion.
- It provides guidance on making informed decisions when choosing the appropriate data ingestion tools for specific tasks.
- Readers gain practical experience through hands-on activities, ingesting data from both database and streaming sources using AWS DMS, Amazon Kinesis, and AWS Glue, reinforcing the theoretical concepts with practical application.
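For the streaming side of the Chapter 6 hands-on, a minimal Kinesis producer might look like the sketch below; the stream name and record fields are illustrative assumptions, not the book's actual resources.

```python
import json
import boto3

# Hypothetical stream name -- the book's hands-on creates its own resources.
STREAM_NAME = "dataeng-sample-stream"

kinesis = boto3.client("kinesis")

def send_event(event: dict) -> None:
    # Records sharing a partition key land on the same shard,
    # which preserves ordering for that key.
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),
    )

if __name__ == "__main__":
    send_event({"user_id": 42, "action": "page_view", "page": "/home"})
```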
### Chapter 07, Transforming Data to Optimize for Analytics
This chapter looks at the vital task of data transformation, a key responsibility of data engineers in optimizing data for analytics and creating value for organizations. The chapter underscores the diversity of data transformations, distinguishing between common, generic transformations applicable to most datasets, such as converting raw files to Parquet format and partitioning, and transformations that incorporate business logic tailored to specific data content and business requirements. The chapter aims to equip data engineers with an understanding of the value of transformations, the types of data transformation tools, common data preparation transformations, and business use case transformations. A hands-on section illustrates the practical application of these concepts, using AWS Glue Studio and Apache Spark to build transformations (a small PySpark example follows this section).

**Key Insights**:
- Data transformation is a key task for data engineers, involving various types of transformations, both generic and business-specific.
- The chapter highlights the value of data transformations in optimizing data for analytics and creating value for organizations.
- It provides an overview of data transformation tools, common data preparation transformations, and business use case transformations.
- Hands-on activities with AWS Glue Studio and Apache Spark offer practical experience in building data transformations.
- The chapter serves as a bridge between data ingestion and data consumption, emphasizing the significance of data optimization for analytics in the data engineering process.
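To give a flavor of the generic transformations described above, the following sketch converts CSV to partitioned Parquet using plain PySpark. The S3 paths and the `review_year` partition column are hypothetical, and the chapter's hands-on uses AWS Glue Studio rather than a standalone script like this one.

```python
from pyspark.sql import SparkSession

# Illustrative S3 paths -- not the book's actual buckets.
SOURCE = "s3://my-landing-zone/reviews/"
TARGET = "s3://my-clean-zone/reviews_parquet/"

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

df = spark.read.option("header", "true").option("inferSchema", "true").csv(SOURCE)

# Writing Parquet partitioned by a low-cardinality column (here a
# hypothetical 'review_year') lets query engines prune irrelevant files.
(df.write
   .mode("overwrite")
   .partitionBy("review_year")
   .parquet(TARGET))
```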
### Chapter 08, Identifying and Enabling Data Consumers
This chapter examines data consumers in detail. Data consumers span a wide spectrum, from operational staff needing real-time stock levels to a CEO requiring data for strategic decision-making; they can also be systems consuming data from other systems. The central objective of data engineers is to make datasets useful and accessible to these consumers, ultimately empowering the business to gain valuable insights from its data. To achieve this, data engineers must deliver the right data, through appropriate tools, to the right people or applications at the right time, facilitating informed decision-making. The chapter underscores the pivotal role of understanding business objectives, recognizing data consumers, and comprehending their requirements when designing a data engineering pipeline. This approach enables data engineers to select the right data ingestion tools, ingestion frequency, and transformation processes to meet the specific needs of data consumers. The chapter also considers data democratization, emphasizing its impact and significance. A hands-on section asks readers to take on the role of a data analyst tasked with creating a mailing list for the marketing department.

**Key Insights**:
- Data consumers, including individuals, applications, and systems within an organization, require access to data to fulfill a range of needs, from operational tasks to strategic decision-making.
- Data engineers play a crucial role in making datasets useful and accessible to data consumers, facilitating valuable insights for the business.
- Understanding the business objectives, identifying data consumers, and comprehending their requirements are essential steps in designing a data engineering pipeline.
- Data democratization has a significant impact on data access and usage, influencing various data consumer groups, including business users, data analysts, and data scientists.
- The hands-on section of the chapter provides practical experience in transforming data using AWS Glue DataBrew.

### Chapter 09, Loading Data into a Data Mart
This chapter explores the significance of data warehouses and data marts in scenarios where data engineers need to move data from a data lake into an external data warehouse or data mart to serve specific data consumers. The chapter clarifies the distinction between a data lake, which serves as a comprehensive source of truth across various business lines, and a data mart, which typically contains a subset of data tailored to a specific user group.

It begins by examining common anti-patterns to avoid when using data warehouses. It then delves into the architecture of Redshift, explaining how it optimizes data storage across nodes. The chapter emphasizes the design decisions involved in creating a Redshift cluster optimized for performance, as well as the processes for ingesting data into, and unloading it from, Redshift. Advanced Redshift features, including data sharing, Dynamic Data Masking (DDM), and cluster resizing, are also reviewed. The hands-on exercises provide practical experience by guiding readers through creating a new Redshift Serverless workgroup, exploring sample data, and configuring Redshift Spectrum to query data directly from Amazon S3 (a Data API sketch follows this section).

**Key Insights**:
- Data lakes provide a central source of data, while data marts serve as subsets optimized for specific user groups and query types.
- Data marts play a dual role, offering a subset of data for targeted queries and providing high-performance query engines for analytic use cases.
- The chapter highlights common anti-patterns to avoid in data warehouse usage and explores the architecture of Redshift, shedding light on its data storage optimization.
- Readers gain insights into the design considerations for high-performance data warehouses, data movement between data lakes and Redshift, and advanced Redshift features.
- The hands-on exercises involve creating a Redshift Serverless workgroup and using Redshift Spectrum to query data from Amazon S3.
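One convenient way to run SQL against Redshift Serverless from code is the Redshift Data API, which avoids managing database connections. The sketch below submits a query and polls for completion; the workgroup, database, and table names are placeholders, not the book's resources.

```python
import time
import boto3

client = boto3.client("redshift-data")

# Hypothetical names -- substitute your own workgroup, database, and table.
resp = client.execute_statement(
    WorkgroupName="dataeng-workgroup",
    Database="dev",
    Sql="SELECT COUNT(*) FROM spectrum_schema.reviews;",
)
statement_id = resp["Id"]

# The Data API is asynchronous, so poll until the statement finishes.
while True:
    status = client.describe_statement(Id=statement_id)["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    result = client.get_statement_result(Id=statement_id)
    print(result["Records"])
```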
### Chapter 10, Orchestrating the Data Pipeline
This chapter explores the pivotal role of data pipeline orchestration tools in automating data engineering tasks, emphasizing their importance in real production environments. It begins by recalling the services and techniques discussed in previous chapters, including data ingestion via Amazon Kinesis Data Firehose and AWS Database Migration Service, and data transformation with AWS Lambda functions and AWS Glue jobs. It underlines the importance of updating the data catalog and loading subsets of data into data marts or data warehouses for specific use cases. In real-world applications, however, manually triggering these tasks is not acceptable, so they must be automated. Data pipeline orchestration engines fill this role: they manage task sequences and dependencies, handle task success and failure, and execute tasks in parallel or sequentially. The chapter covers the core concepts of pipeline orchestration, reviews the options for orchestrating data pipelines within AWS, and provides a hands-on activity demonstrating orchestration with AWS Step Functions.

Readers gain insights into concepts such as scheduled and event-based pipelines, failure handling, and retries. The chapter introduces four AWS services for creating and orchestrating data pipelines: AWS Data Pipeline, AWS Glue workflows, Amazon MWAA, and AWS Step Functions. The hands-on section guides readers through building an event-driven pipeline orchestrated by AWS Step Functions, using AWS Lambda functions and an Amazon SNS topic for failure notifications (a sketch of starting an execution appears after this section).

**Key Insights**:
- Data pipeline orchestration is crucial for automating data engineering tasks, enabling scheduled and event-driven pipelines, handling task failures, and orchestrating parallel and sequential tasks.
- Core concepts of pipeline orchestration, including scheduling and dependency management, are fundamental to designing effective data pipelines.
- AWS offers multiple orchestration options, including AWS Data Pipeline, AWS Glue workflows, Amazon MWAA, and AWS Step Functions, each with its own strengths.
- The hands-on activity in this chapter involves orchestrating a data pipeline using AWS Step Functions, providing practical experience in implementing pipeline orchestration.
- This chapter serves as a bridge between the architectural design and the practical implementation of data pipelines, setting the stage for exploring data consumption services and deeper dives into data exploration and analysis in subsequent chapters.
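The repository's `Chapter10/ProcessFileStateMachine.json` holds the state machine definition for this hands-on. Assuming the state machine is already deployed, the sketch below starts an execution with boto3 and checks its status; the ARN and input payload are hypothetical (the payload mirrors the EventBridge-style event the chapter's Lambda functions consume).

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical ARN -- use the ARN of your deployed state machine.
STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-2:123456789012:stateMachine:ProcessFileStateMachine"
)

# The hands-on pipeline is event-driven; here we pass a file event manually.
start = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({"detail": {"bucket": {"name": "my-bucket"},
                                 "object": {"key": "landing/test.csv"}}}),
)

status = sfn.describe_execution(executionArn=start["executionArn"])["status"]
print(f"Execution status: {status}")  # e.g. RUNNING, SUCCEEDED, FAILED
```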
### Chapter 11, Ad Hoc Queries with Amazon Athena
This chapter investigates Amazon Athena, a serverless, fully managed service that allows users to query data directly in a data lake using SQL and Spark. The chapter begins by introducing Athena's features, emphasizing its setup-free nature, flexible pricing based on either data scanned or provisioned capacity, and its ability to query not only data lakes but also various other data sources through Query Federation. The chapter explores Athena's governance and cost management functionality through workgroups and offers recommended best practices for optimizing SQL queries for both cost-efficiency and performance.

The chapter also covers advanced functionality, illustrating how Athena can serve as a SQL query engine for data in Amazon S3 data lakes as well as external sources such as databases, data warehouses, and CloudWatch logs. It concludes with a hands-on exercise in which users create an Athena workgroup, configure its settings, and execute SQL queries within that workgroup (a boto3 sketch follows this section).

**Key Insights**:
- Amazon Athena is a serverless, fully managed service that enables SQL and Spark-based querying of data in data lakes and various database sources.
- The chapter provides insights into optimizing Athena SQL queries for cost-efficiency and performance, covering tips and best practices.
- Athena's advanced functionality includes Query Federation, allowing users to query data from external sources like databases and data warehouses.
- Athena workgroups offer governance and cost management capabilities, helping manage settings for different teams or projects and controlling data scan limits in queries.
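Queries can also be submitted to Athena programmatically. The sketch below starts a query in a workgroup and checks its state; the workgroup, database, and table names are assumptions for illustration, and results land in the workgroup's configured S3 output location.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical workgroup and database -- the hands-on creates its own.
resp = athena.start_query_execution(
    QueryString="SELECT review_id, rating FROM reviews LIMIT 10;",
    QueryExecutionContext={"Database": "curatedzonedb"},
    WorkGroup="dataeng-workgroup",
)
query_id = resp["QueryExecutionId"]

# Athena queries run asynchronously; fetch results with get_query_results
# once the state reaches SUCCEEDED.
state = athena.get_query_execution(QueryExecutionId=query_id)[
    "QueryExecution"]["Status"]["State"]
print(f"Query {query_id} is {state}")
```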
### Chapter 12, Visualizing Data with Amazon QuickSight
This chapter focuses on Amazon QuickSight, a powerful business intelligence (BI) tool that empowers users to create rich visualizations and reports while exploring data interactively. QuickSight offers features such as data filtering, drill-down capabilities, natural language querying, and the ability to summarize complex datasets visually. The chapter emphasizes the purpose of BI tools in helping users comprehend complex data through visual exploration. While it primarily covers Amazon QuickSight, the concepts discussed apply equally to other popular BI applications such as Tableau, Microsoft Power BI, and Qlik.

The chapter also looks at advanced QuickSight features, such as ML Insights for detecting data trends and identifying outliers, QuickSight Q for natural language querying, embedded dashboards for integration with websites and applications, and paginated reports. It concludes with a hands-on section guiding users through configuring QuickSight in their AWS account and creating customized visualizations.

**Key Insights**:
- This chapter provides an in-depth understanding of Amazon QuickSight, highlighting its role as a serverless, fully managed BI tool for creating visualizations, analyzing data, and reporting.
- The chapter emphasizes the core concepts of business intelligence, explaining the significance of visual data representation, data exploration, and the role of BI tools in enabling users to make data-driven decisions.
- QuickSight's cost-effective, subscription-based pricing model is discussed, making it a practical choice for organizations looking to avoid infrastructure and licensing costs.
- The chapter offers hands-on experience, guiding users through the setup and customization of QuickSight in their AWS accounts, empowering them to create their own visualizations and reports.

### Chapter 13, Enabling Artificial Intelligence and Machine Learning
This chapter delves into the significance of AI and ML for organizations, emphasizing how advancements in these fields have made it possible to automate tasks, personalize recommendations, and derive insights from data. While AI focuses on replicating human intelligence for tasks such as problem-solving, decision-making, and language understanding, ML more specifically develops algorithms and models that recognize patterns in data in order to make predictions. The chapter highlights the growing impact of AI and ML in various domains, such as healthcare, self-driving vehicles, and the capabilities of Large Language Models (LLMs) like ChatGPT. It introduces AWS's array of AI and ML services and explores their utility with different data types (a Comprehend sketch follows this section).

**Key Insights**:
- The chapter underscores the transformative role of AI and ML in enabling organizations to automate tasks, personalize recommendations, and draw insights from data.
- It clarifies the distinction between AI and ML.
- The chapter introduces a range of AWS AI and ML services, including Amazon SageMaker for custom model development and services like Amazon Transcribe, Textract, Rekognition, and Comprehend for specific tasks.
- It explores various real-world applications of AI and ML, from early detection of diseases to self-driving vehicles and the capabilities of LLMs.
- The hands-on exercise in the chapter demonstrates the use of Amazon Comprehend for extracting insights from written text, providing practical exposure to AWS AI and ML services.
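As a taste of the Chapter 13 hands-on, the sketch below runs Amazon Comprehend sentiment and key phrase detection on a sample review; the input text is made up for illustration.

```python
import boto3

comprehend = boto3.client("comprehend")

# A sample review -- the hands-on analyzes website review text.
text = "The checkout process was slow, but the support team was fantastic."

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"])        # e.g. MIXED
print(sentiment["SentimentScore"])   # confidence per sentiment class

# Key phrase detection surfaces the topics driving that sentiment.
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
print([p["Text"] for p in phrases["KeyPhrases"]])
```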
### Chapter 14, Transactional Data Lakes
This chapter focuses on the evolution of data lakes, which have advanced significantly in recent years into transactional data lakes. These newer technologies retain the benefits of traditional data lakes, such as cost-effectiveness and serverless data processing, while also offering much-improved mechanisms for updating data.

The key topics covered in this chapter include defining the concept of a transactional data lake, an in-depth examination of table formats such as Delta Lake, Apache Hudi, and Apache Iceberg, and how AWS integrates these table formats to create transactional data lakes. The hands-on section lets readers work with Apache Iceberg tables in AWS, gaining practical experience in managing and querying tables in this modern data lake format.

**Key Insights**:
- Traditional data lakes built on Apache Hive had limitations in handling data updates, query consistency, and schema changes.
- New table formats like Delta Lake, Apache Hudi, and Apache Iceberg address these limitations, offering transactional capabilities and advanced features.
- AWS integrates these table formats to support the creation of transactional data lakes, making them more akin to traditional data warehouses.
- The transformation of data lakes into transactional data lakes is a significant shift, driven by innovations in table formats and metadata management, shaping the future of data lake management.

### Chapter 15, Implementing a Data Mesh Strategy
This chapter introduces a shift from traditional data lake approaches to a newer concept known as a data mesh. Historically, organizations established central data engineering teams responsible for collecting, processing, and transforming raw data into a data lake, but this centralized approach has its limitations. The data mesh approach presents an alternative: it decentralizes data analytics and pushes the responsibility for creating analytic data products closer to the teams that generate or own the operational data. The chapter explores the various aspects of a data mesh, including organizational changes, architectural approaches, and technology implementation. It also delves into the core concepts and components of Amazon DataZone, a data governance tool offering a business data catalog. In a hands-on exercise, readers set up DataZone, import a dataset from the AWS Glue Data Catalog, add business metadata, and explore how to search the catalog and subscribe to data products.

**Key Insights**:
- The data mesh approach offers a decentralized model, pushing responsibility for data analytics closer to operational data owners.
- Data mesh implementation involves organizational changes, architectural approaches, and technology utilization.
- Amazon DataZone, a data governance tool with a business data catalog, is introduced as a key element in data mesh initiatives.
- The chapter provides a hands-on exercise for setting up DataZone, importing datasets, adding metadata, and exploring the catalog, preparing readers for modern data platform building on AWS.

### Chapter 16, Building a Modern Data Platform on AWS
This chapter focuses on the essential aspects of constructing a modern data platform on AWS. It aims to provide a foundational understanding of building such platforms and emphasizes the importance of combining the various concepts presented throughout the book. It offers insights into the goals of modern data platforms, weighs the decision to build or buy such a platform, and introduces the concept of DataOps for the automated development of data products. The hands-on section illustrates how AWS services like CloudFormation, CodeCommit, and CodePipeline can be employed to automate a data platform, streamlining infrastructure deployment and ETL code management (a CloudFormation deployment sketch follows this section).

**Key Insights**:
- Offers a high-level overview of constructing a modern data platform on AWS, serving as a foundational guide.
- Explores key objectives for modern data platforms, including flexibility, scalability, governance, security, and self-service capabilities.
- The chapter discusses the decision-making process of whether to build or purchase a data platform, weighing the pros and cons.
- Introduces DataOps as an approach to automating the development of data products and managing infrastructure, making the platform more efficient and agile.
- Demonstrates the practical use of AWS services like CloudFormation, CodeCommit, and CodePipeline to automate component and code deployment in a modern data platform.
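To illustrate the infrastructure-as-code side of the Chapter 16 hands-on, the sketch below deploys a CloudFormation stack with boto3. The stack and template names are hypothetical; in the book's pipeline, CodePipeline applies templates from a CodeCommit repository rather than a local script like this.

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical local template -- e.g. one defining a Glue job and its role.
with open("glue_job_stack.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="dataeng-glue-job-stack",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required when the stack creates IAM roles
)

# Block until the stack finishes creating (or raise if it fails).
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="dataeng-glue-job-stack")
print("Stack deployed")
```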
### Chapter 17, Wrapping Up the First Part of Your Learning Journey
In this concluding chapter, the reader is reminded of the wide spectrum of topics explored in the book, ranging from common architectural patterns to hands-on experience with the AWS services essential for data engineering. The chapter reflects on the significance of data engineering in optimizing the value of data within organizations. It emphasizes the dynamic nature of AWS services, with continuous innovation to meet evolving customer needs, offering a fast-paced journey through the cloud for data engineering platforms.

The chapter serves as an acknowledgment of the ever-expanding horizons of data engineering, indicating that the book is just the beginning of a comprehensive learning journey, and it encourages the reader to delve deeper into the field. The conclusion paints an exciting picture of the journey ahead, implying that with the constant evolution of AWS services and the vast wealth of available resources, the opportunities for growth and innovation are boundless for those venturing into data engineering on AWS.

**Key Insights**:
- The book provides a comprehensive exploration of data engineering, covering architectural patterns, hands-on AWS service usage, data security, governance, and the importance of data catalogs.
- It introduces the concept of a data lakehouse and discusses data marts, data warehouses, and data consumers, along with tools like Amazon Athena and QuickSight for data querying and visualization.
- The book touches on ML and AI and highlights AWS services in these domains.
- Emerging trends such as the data mesh approach and open table formats like Apache Iceberg are discussed, with a focus on decentralizing data engineering responsibilities.
- The chapter concludes by emphasizing the rapid evolution of AWS services and the constant learning opportunities in data engineering, encouraging readers to embark on a continuous learning journey in this field.

> If you feel this book is for you, get your [copy](https://www.amazon.in/Data-Engineering-AWS-cloud-based-transformation-ebook/dp/B0C61KXWQ5) today!

## Learn more on the Discord server
You can join the Discord server for all the latest updates and discussions in the community: [Discord](https://discord.gg/9s5mHNyECd)

## Download a free PDF

_If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost. Simply click on the link to claim your free PDF._
[Free-Ebook](https://packt.link/free-ebook/9781804614426)

We also provide a PDF file that has color images of the screenshots/diagrams used in this book at [GraphicBundle](https://packt.link/gbp/9781804614426)

## Get to know the Author
_Gareth Eagar_ has over 25 years of experience in the IT industry, starting in South Africa, working in the United Kingdom for a while, and now based in the USA. Having worked at AWS since 2017, Gareth has broad experience with a variety of AWS services and deep expertise in building data platforms on AWS. While Gareth currently works as a Solutions Architect, he has also worked in AWS Professional Services, helping architect and implement data platforms for global customers. Gareth also frequently speaks on data-related topics.

## Other Related Books
- [Data Engineering with dbt](https://www.packtpub.com/product/data-engineering-with-dbt/9781803246284)
- [Data Engineering with Python](https://www.packtpub.com/product/data-engineering-with-python/9781839214189)
--------------------------------------------------------------------------------