├── Figures
│   ├── Compute.png
│   ├── Data_Lifecycle_Solutions.png
│   ├── DocumentDatabase.png
│   ├── Solutions-_Proces_Analyze.png
│   ├── Solutions-_Storage.png
│   ├── beam_pipeline.png
│   ├── bigquery.svg
│   ├── bigtable-example.png
│   ├── column.png
│   ├── flowlogistic_sol.png
│   └── pubsub.png
├── README.md
├── Screenshots
│   └── screenshot1.png
├── data_engineering_on_GCP.pdf
└── data_engineering_on_GCP.tex
/Figures/Compute.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/Compute.png -------------------------------------------------------------------------------- /Figures/Data_Lifecycle_Solutions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/Data_Lifecycle_Solutions.png -------------------------------------------------------------------------------- /Figures/DocumentDatabase.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/DocumentDatabase.png -------------------------------------------------------------------------------- /Figures/Solutions-_Proces_Analyze.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/Solutions-_Proces_Analyze.png -------------------------------------------------------------------------------- /Figures/Solutions-_Storage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/Solutions-_Storage.png -------------------------------------------------------------------------------- /Figures/beam_pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/beam_pipeline.png -------------------------------------------------------------------------------- /Figures/bigquery.svg: -------------------------------------------------------------------------------- (Vector figure of the BigQuery logo, exported from OmniGraffle 7.4.1 on 2017-08-03 16:49:24 +0000; layers "Figure 1 - Outlines" and "Layer 1". SVG/XML markup omitted.)
-------------------------------------------------------------------------------- /Figures/bigtable-example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/bigtable-example.png -------------------------------------------------------------------------------- /Figures/column.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/column.png -------------------------------------------------------------------------------- /Figures/flowlogistic_sol.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/flowlogistic_sol.png -------------------------------------------------------------------------------- /Figures/pubsub.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Figures/pubsub.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Data Engineering on GCP Cheatsheet 2 | 3 | This cheatsheet is currently a 9-page reference for Data Engineering on the Google Cloud Platform. It covers the data engineering lifecycle, machine learning, Google case studies, and GCP's storage, compute, and big data products. 4 | 5 | I compiled this sheet while studying for Google's Data Engineering Exam; it is not guaranteed to help you pass. 6 | 7 | ## Screenshots 8 | ![](Screenshots/screenshot1.png?raw=true) 9 | 10 | ## License 11 | This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. 12 | Creative Commons License
13 | 14 | ## Changelog 15 | **2018-08-10**: Added Google Data Engineering Cheatsheet 16 | 17 | ## Contact 18 | Feel free to suggest comments, updates, and potential improvements! 19 | 20 | **Maverick Lin**: Reach out to me via [Quora](https://www.quora.com/profile/Maverick-Lin) or through my [website](http://mavericklin.com/). Cheers. 21 | -------------------------------------------------------------------------------- /Screenshots/screenshot1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/Screenshots/screenshot1.png -------------------------------------------------------------------------------- /data_engineering_on_GCP.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ml874/Data-Engineering-on-GCP-Cheatsheet/abb3ecc58f0724461150fd8f7d526e2f45241583/data_engineering_on_GCP.pdf -------------------------------------------------------------------------------- /data_engineering_on_GCP.tex: -------------------------------------------------------------------------------- 1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 | % MatPlotLib and Random Cheat Sheet 3 | % 4 | % Edited by Maverick Lin 5 | % 6 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 7 | 8 | % \documentclass{article} 9 | \documentclass[9pt]{extarticle} 10 | \usepackage[landscape]{geometry} 11 | \usepackage{url} 12 | \usepackage{multicol} 13 | \usepackage{amsmath} 14 | \usepackage{amsfonts} 15 | \usepackage{tikz} 16 | \usetikzlibrary{decorations.pathmorphing} 17 | \usepackage{amsmath,amssymb} 18 | \usepackage{tabularx} 19 | 20 | \usepackage{colortbl} 21 | \usepackage{xcolor} 22 | \usepackage{mathtools} 23 | \usepackage{amsmath,amssymb} 24 | \usepackage{enumitem} 25 | \usepackage{tabto} 26 | \usepackage{enumitem} 27 | \usepackage{graphicx} 28 | \usepackage{tabu} 29 | 30 | \title{Google Data Engineer Notes} 31 | \usepackage[brazilian]{babel} 32 | \usepackage[utf8]{inputenc} 33 | 34 | \advance\topmargin-.8in 35 | \advance\textheight3in 36 | \advance\textwidth3in 37 | \advance\oddsidemargin-1.5in 38 | \advance\evensidemargin-1.5in 39 | \parindent0pt 40 | \parskip2pt 41 | \newcommand{\hr}{\centerline{\rule{3.5in}{1pt}}} 42 | %\colorbox[HTML]{e4e4e4}{\makebox[\textwidth-2\fboxsep][l]{texto} 43 | \begin{document} 44 | 45 | \begin{center}{\huge{\textbf{Google Data Engineering Cheatsheet}}}\\ 46 | {\large Compiled by Maverick Lin (\url{http://mavericklin.com})}\\ 47 | {\normalsize Last Updated August 4, 2018} 48 | \end{center} 49 | \begin{multicols*}{3} 50 | 51 | \tikzstyle{mybox} = [draw=black, fill=white, very thick, rectangle, rounded corners, inner sep=10pt, inner ysep=10pt] 52 | \tikzstyle{fancytitle} =[fill=black, text=white, font=\bfseries] 53 | 54 | 55 | %------------ What is Data Engineering? --------------- 56 | \begin{tikzpicture} 57 | \node [mybox] (box){% 58 | \begin{minipage}{0.3\textwidth} 59 | Data engineering enables data-driven decision making by collecting, transforming, and visualizing data. 
A data engineer designs, builds, maintains, and troubleshoots data processing systems with a particular emphasis on the security, reliability, fault-tolerance, scalability, fidelity, and efficiency of such systems.\\ 60 | 61 | A data engineer also analyzes data to gain insight into business outcomes, builds statistical models to support decision-making, and creates machine learning models to automate and simplify key business processes. \\ 62 | 63 | Key Points 64 | \setlist{nolistsep} 65 | \begin{itemize} 66 | \item Build/maintain data structures and databases 67 | \item Design data processing systems 68 | \item Analyze data and enable machine learning 69 | \item Design for reliability 70 | \item Visualize data and advocate policy 71 | \item Model business processes for analysis 72 | \item Design for security and compliance 73 | \end{itemize} 74 | 75 | \end{minipage} 76 | }; 77 | \node[fancytitle, right=10pt] at (box.north west) {What is Data Engineering?}; 78 | \end{tikzpicture} 79 | 80 | 81 | %------------ Google Cloud Platform (GCP) --------------- 82 | \begin{tikzpicture} 83 | \node [mybox] (box){% 84 | \begin{minipage}{0.3\textwidth} 85 | GCP is a collection of Google computing resources, which are offered via \textit{services}. Data engineering services include Compute, Storage, Big Data, and Machine Learning.\\ 86 | 87 | The 4 ways to interact with GCP are the console, the command-line interface (CLI), the API, and the mobile app.\\ 88 | 89 | The GCP resource hierarchy is organized as follows. All resources (VMs, storage buckets, etc.) are organized into \textbf{projects}. These projects \textit{may} be organized into \textbf{folders}, which can contain other folders. All folders and projects can be brought together under an organization node. Project folders and organization nodes are where policies can be defined. Policies are inherited downstream and dictate who can access what resources. Every resource must belong to a project, and every project must have a billing account associated with it.\\ 90 | 91 | \textbf{Advantages}: Performance (fast solutions), Pricing (sub-hour billing, sustained use discounts, custom machine types), PaaS Solutions, Robust Infrastructure 92 | \end{minipage}; 93 | 94 | }; 95 | \node[fancytitle, right=10pt] at (box.north west) {Google Cloud Platform (GCP)}; 96 | \end{tikzpicture} 97 | 98 | 99 | %------------ Hadoop Overview --------------- 100 | \begin{tikzpicture} 101 | \node [mybox] (box){% 102 | \begin{minipage}{0.3\textwidth} 103 | \setlist{nolistsep} 104 | 105 | {\color{blue} \textbf{Hadoop}}\\ 106 | Data can no longer fit in memory on one machine (monolithic), so a new way of computing was devised using many computers to process the data (distributed). Such a group of machines is called a cluster; clusters make up server farms. All of these servers have to be coordinated in the following ways: partition data, coordinate computing tasks, handle fault tolerance/recovery, and allocate capacity to process.\\ 107 | 108 | Hadoop is an open source \textit{distributed} processing framework that manages data processing and storage for big data applications running in clustered systems. It is composed of 3 main components: 109 | \begin{itemize} 110 | \item \textbf{Hadoop Distributed File System (HDFS)}: a distributed file system that provides high-throughput access to application data by partitioning data across many machines 111 | \item \textbf{YARN}: framework for job scheduling and cluster resource management (task coordination) 112 | \item \textbf{MapReduce}: YARN-based system for parallel processing of large data sets on multiple machines\\ 113 | \end{itemize} 114 | 115 | {\color{blue} \textbf{HDFS}}\\ 116 | An HDFS cluster consists of 1 master node; the rest of the machines are data nodes. The \textbf{master node} manages the overall file system by storing the directory structure and metadata of the files. The \textbf{data nodes} physically store the data. Large files are broken up and distributed across multiple machines, and each piece is replicated across 3 machines to provide fault tolerance.\\ 117 | 118 | {\color{blue} \textbf{MapReduce}}\\ 119 | Parallel programming paradigm which allows for processing of huge amounts of data by running processes on multiple machines. Defining a MapReduce job requires two stages: map and reduce. 120 | 121 | \begin{itemize} 122 | \item \textbf{Map}: operation to be performed in parallel on small portions of the dataset. The output is a key-value pair $\langle key, value \rangle$ 123 | \item \textbf{Reduce}: operation to combine the results of Map\\ 124 | \end{itemize} 125 | 126 | {\color{blue} \textbf{YARN- Yet Another Resource Negotiator}}\\ 127 | Coordinates tasks running on the cluster and assigns new nodes in case of failure. Comprised of 2 subcomponents: the resource manager and the node manager. The \textbf{resource manager} runs on a single master node and schedules tasks across nodes. The \textbf{node manager} runs on all other nodes and manages tasks on the individual node. 128 | 129 | % A typical job process: the user defines the map and reduce tasks using the MapReduce API, the job is triggered on the Hadoop cluster, and YARN figures out where and how to run the job and stores the result in HDFS. 130 | 131 | \end{minipage}; 132 | 133 | }; 134 | \node[fancytitle, right=10pt] at (box.north west) {Hadoop Overview}; 135 | \end{tikzpicture} 136 | 137 | 138 | 139 | 140 | 141 | %------------ Hadoop Ecosystem --------------- 142 | \begin{tikzpicture} 143 | \node [mybox] (box){% 144 | \begin{minipage}{0.3\textwidth} 145 | An entire ecosystem of tools has emerged around Hadoop, most of them based on interacting with HDFS.\\ 146 | 147 | {\color{cyan} \textbf{Hive}}: data warehouse software built on top of Hadoop that facilitates reading, writing, and managing large datasets residing in distributed storage using SQL-like queries (HiveQL). Hive abstracts away the underlying MapReduce jobs and returns results in the form of tables (rather than raw HDFS files). \\ 148 | {\color{cyan} \textbf{Pig}}: high level scripting language (Pig Latin) that enables writing complex data transformations. It pulls unstructured/incomplete data from sources, cleans it, and places it in a database/data warehouse. Pig performs ETL into the data warehouse, while Hive queries the data warehouse to perform analysis (GCP: DataFlow).\\ 149 | {\color{cyan} \textbf{Spark}}: framework for writing fast, distributed programs for data processing and analysis. Spark solves similar problems as Hadoop MapReduce but with a fast in-memory approach. It is a unified engine that supports SQL queries, streaming data, machine learning, and graph processing (a minimal word-count sketch follows).
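A minimal PySpark word-count sketch of this map/reduce style (illustrative only; the paths and the local master setting are hypothetical):\\
\texttt{from pyspark import SparkContext}\\
\texttt{sc = SparkContext("local", "wordcount")}\\
\texttt{counts = (sc.textFile("hdfs:///data/input.txt")}\\
\texttt{~~~~.flatMap(lambda line: line.split())}\\
\texttt{~~~~.map(lambda w: (w, 1))~~~~~~~~~~~~~\# map: emit (word, 1) pairs}\\
\texttt{~~~~.reduceByKey(lambda a, b: a + b))~~\# reduce: sum counts per key}\\
\texttt{counts.saveAsTextFile("hdfs:///data/output")}\\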
Can operate separately from Hadoop but integrates well with Hadoop. Data is processed using Resilient Distributed Datasets (RDDs), which are immutable, lazily evaluated, and tracks lineage. \\ 150 | {\color{cyan} \textbf{Hbase}}: non-relational, NoSQL, column-oriented database management system that runs on top of HDFS. Well suited for sparse data sets (GCP: BigTable) \\ 151 | {\color{cyan} \textbf{Flink/Kafka}}: stream processing framework. Batch streaming is for bounded, finite datasets, with periodic updates, and delayed processing. Stream processing is for unbounded datasets, with continuous updates, and immediate processing. Stream data and stream processing must be decoupled via a message queue. Can group streaming data (windows) using tumbling (non-overlapping time), sliding (overlapping time), or session (session gap) windows. \\ 152 | {\color{cyan} \textbf{Beam}}: programming model to define and execute data processing pipelines, including ETL, batch and stream (continuous) processing. After building the pipeline, it is executed by one of Beam’s distributed processing back-ends (Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow). Modeled as a Directed Acyclic Graph (DAG).\\ 153 | {\color{cyan} \textbf{Oozie}}: workflow scheduler system to manage Hadoop jobs\\ 154 | {\color{cyan} \textbf{Sqoop}}: transferring framework to transfer large amounts of data into HDFS from relational databases (MySQL) 155 | \end{minipage}; 156 | }; 157 | \node[fancytitle, right=10pt] at (box.north west) {Hadoop Ecosystem}; 158 | \end{tikzpicture} 159 | 160 | 161 | % ------------ IAM ----------------- 162 | \begin{tikzpicture} 163 | \node [mybox] (box){% 164 | \begin{minipage}{0.3\textwidth} 165 | \setlist{nolistsep} 166 | 167 | {\color{blue} \textbf{Identity Access Management (IAM)}}\\ 168 | Access management service to manage different members of the platform- who has what access for which resource.\\ 169 | 170 | Each member has roles and permissions to allow them access to perform their duties on the platform. 3 member types: Google account (single person, gmail account), service account (non-person, application), and Google Group (multiple people). Roles are a set of specific permissions for members. Cannot assign permissions to user directly, must grant roles.\\ 171 | 172 | If you grant a member access on a higher hierarchy level, that member will have access to all levels below that hierarchy level as well. You cannot be restricted a lower level. The policy is a union of assigned and inherited policies.\\ 173 | 174 | \textbf{Primitive Roles}: Owner (full access to resources, manage roles), Editor (edit access to resources, change or add), Viewer (read access to resources)\\ 175 | \textbf{Predefined Roles}: finer-grained access control than primitive roles, predefined by Google Cloud\\ 176 | \textbf{Custom Roles}\\ 177 | 178 | \textbf{Best Practice}: use predefined roles when they exist (over primitive). Follow the principle of least privileged favors. 179 | 180 | 181 | \end{minipage} 182 | }; 183 | \node[fancytitle, right=10pt] at (box.north west) {IAM}; 184 | \end{tikzpicture} 185 | 186 | 187 | % ------------ Stackdriver ----------------- 188 | \begin{tikzpicture} 189 | \node [mybox] (box){% 190 | \begin{minipage}{0.3\textwidth} 191 | \setlist{nolistsep} 192 | GCP's monitoring, logging, and diagnostics solution. 
Provides insights to health, performance, and availability of applications.\\ 193 | Main Functions 194 | \begin{itemize} 195 | \item \textbf{Debugger}: inspect state of app in real time without stopping/slowing down e.g. code behavior 196 | \item \textbf{Error Reporting}: counts, analyzes, aggregates crashes in cloud services 197 | \item \textbf{Monitoring}: overview of performance, uptime and heath of cloud services (metrics, events, metadata) 198 | \item \textbf{Alerting}: create policies to notify you when health and uptime check results exceed a certain limit 199 | \item \textbf{Tracing}: tracks how requests propagate through applications/receive near real-time performance results, latency reports of VMs 200 | \item \textbf{Logging}: store, search, monitor and analyze log data and events from GCP 201 | 202 | \end{itemize} 203 | \end{minipage} 204 | }; 205 | \node[fancytitle, right=10pt] at (box.north west) {Stackdriver}; 206 | \end{tikzpicture} 207 | 208 | 209 | % ------------ Key Concepts ----------------- 210 | \begin{tikzpicture} 211 | \node [mybox] (box){% 212 | \begin{minipage}{0.3\textwidth} 213 | \setlist{nolistsep} 214 | 215 | 216 | {\color{blue} \textbf{OLAP vs. OLTP}}\\ 217 | \textbf{Online Analytical Processing (OLAP)}: primary objective is data analysis. It is an online analysis and data retrieving process, characterized by a large volume of data and complex queries, uses data warehouses.\\ 218 | \textbf{Online Transaction Processing (OLTP)}: primary objective is data processing, manages database modification, characterized by large numbers of short online transactions, simple queries, and traditional DBMS.\\ 219 | 220 | {\color{blue} \textbf{Row vs. Columnar Database}}\\ 221 | \textbf{Row Format}: stores data by row\\ 222 | \textbf{Column Format}: stores data tables by column rather than by row, which is suitable for analytical query processing and data warehouses\\ 223 | \includegraphics[width=\textwidth, height=4cm]{Figures/column.png} 224 | 225 | {\color{blue} \textbf{IaaS, Paas, SaaS}}\\ 226 | \textbf{IaaS}: gives you the infrastructure pieces (VMs) but you have to maintain/join together the different infrastructure pieces for your application to work. Most flexible option.\\ 227 | \textbf{PaaS}: gives you all the infrastructure pieces already joined so you just have to deploy source code on the platform for your application to work. PaaS solutions are managed services/no-ops (highly available/reliable) and serverless/autoscaling (elastic). Less flexible than IaaS\\ 228 | 229 | 230 | {\color{blue} \textbf{Fully Managed, Hotspotting}} 231 | 232 | \end{minipage} 233 | }; 234 | \node[fancytitle, right=10pt] at (box.north west) {Key Concepts}; 235 | \end{tikzpicture} 236 | 237 | 238 | % ------------ Compute Choices ----------------- 239 | \begin{tikzpicture} 240 | \node [mybox] (box){% 241 | \begin{minipage}{0.3\textwidth} 242 | 243 | {\color{blue} \textbf{Google App Engine}}\\ 244 | Flexible, serverless platform for building highly available applications. Ideal when you want to focus on writing and developing code and do not want to manage servers, clusters, or infrastructures. \\ 245 | \textbf{Use Cases:} web sites, mobile app and gaming backends, RESTful APIs, IoT apps.\\ 246 | 247 | {\color{blue} \textbf{Google Kubernetes (Container) Engine}}\\ 248 | Logical infrastructure powered by Kubernetes, an open-source container orchestration system. 
Ideal for managing containers in production, increase velocity and operatability, and don't have OS dependencies.\\ 249 | \textbf{Use Cases:} containerized workloads, cloud-native distributed systems, hybrid applications.\\ 250 | 251 | {\color{blue} \textbf{Google Compute Engine} (IaaS)}\\ 252 | Virtual Machines (VMs) running in Google's global data center. Ideal for when you need complete control over your infrastructure and direct access to high-performance hardward or need OS-level changes.\\ 253 | \textbf{Use Cases:} any workload requiring a specific OS or OS configuration, currently deployed and on-premises software that you want to run in the cloud.\\ 254 | 255 | \textbf{Summary}: AppEngine is the PaaS option- serverless and ops free. ComputeEngine is the IaaS option- fully controllable down to OS level. Kubernetes Engine is in the middle- clusters of machines running Kuberenetes and hosting containers. 256 | 257 | \includegraphics[width=\textwidth]{Figures/Compute.png} 258 | 259 | {\color{blue} \textbf{Additional Notes}}\\ 260 | You can also mix and match multiple compute options.\\ 261 | \textbf{Preemptible Instances}: instances that run at a much lower price but may be terminated at any time, self-terminate after 24 hours. ideal for interruptible workloads\\ 262 | \textbf{Snapshots}: used for backups of disks\\ 263 | \textbf{Images}: VM OS (Ubuntu, CentOS) 264 | \setlist{nolistsep} 265 | \end{minipage} 266 | }; 267 | \node[fancytitle, right=10pt] at (box.north west) {Compute Choices}; 268 | \end{tikzpicture} 269 | 270 | 271 | 272 | % ------------ Storage Options ----------------- 273 | \begin{tikzpicture} 274 | \node [mybox] (box){% 275 | \begin{minipage}{0.3\textwidth} 276 | \setlist{nolistsep} 277 | 278 | {\color{blue} \textbf{Persistent Disk}} Fully-managed block storage (SSDs) that is suitable for VMs/containers. Good for snapshots of data backups/sharing read-only data across VMs.\\ 279 | 280 | {\color{blue} \textbf{Cloud Storage}} Infinitely scalable, fully-managed and highly reliable object/blob storage. Good for data blobs: images, pictures, videos. Cannot query by content.\\ 281 | 282 | To use Cloud Storage, you create buckets to store data and the location can be specified. Bucket names are globally unique. There a 4 storage classes: 283 | \begin{itemize} 284 | \item \textbf{Multi-Regional}: frequent access from anywhere in the world. Use for "hot data" 285 | \item \textbf{Regional}: high local performance for region 286 | \item \textbf{Nearline}: storage for data accessed less than once a month (archival) 287 | \item \textbf{Coldline}: less than once a year (archival) 288 | \end{itemize} 289 | 290 | \end{minipage} 291 | }; 292 | \node[fancytitle, right=10pt] at (box.north west) {Storage}; 293 | \end{tikzpicture} 294 | 295 | 296 | % % ------------ Storage- Database Options ----------------- 297 | % \begin{tikzpicture} 298 | % \node [mybox] (box){% 299 | % \begin{minipage}{0.3\textwidth} 300 | % \setlist{nolistsep} 301 | 302 | % Non-relational databases holds non-relational data, or data that doesn't have a fixed structure . Think documents, hierarchical data, key-value attributes, graph data, etc...\\ 303 | 304 | % {\color{blue} \textbf{BigTable}} Scalable, fully-managed NoSQL wide-column database. Good for real-time access and analytics workloads, low-latency read/write access, high-throughput analytics, and native time series support. 
Sensitive to hot-spotting.\\ 305 | 306 | % {\color{blue} \textbf{Datastore}} Scalable, fully-managed NoSQL document database for web and mobile applications. Good for semi-structured application data, hierarchical data, and durable key-value data. 307 | 308 | % \end{minipage} 309 | % }; 310 | % \node[fancytitle, right=10pt] at (box.north west) {Storage- Non-Relational Databases}; 311 | % \end{tikzpicture} 312 | 313 | 314 | 315 | % ------------ CloudSQL and Cloud Spanner ----------------- 316 | \begin{tikzpicture} 317 | \node [mybox] (box){% 318 | \begin{minipage}{0.3\textwidth} 319 | \setlist{nolistsep} 320 | 321 | 322 | {\color{blue} \textbf{Cloud SQL}} \\ 323 | Fully-managed relational database service (supports MySQL/PostgreSQL). Use for relational data: tables with rows and columns, and highly structured data. SQL compatible and can update fields. Not scalable (small storage- GBs). Good for web frameworks and OLTP workloads (not OLAP). Can use \textbf{Cloud Storage Transfer Service} or \textbf{Transfer Appliance} to move data into Cloud Storage (from AWS, local storage, or another bucket). Use gsutil if copying files over from on-premise.\\ 324 | 325 | {\color{blue} \textbf{Cloud Spanner}} \\ 326 | Google-proprietary offering, more advanced than Cloud SQL. Mission-critical, relational database. Supports horizontal scaling. Combines the benefits of relational and non-relational databases. 327 | 328 | \begin{itemize} 329 | \item \textbf{Ideal}: relational, structured, and semi-structured data that requires high availability, strong consistency, and transactional reads and writes 330 | \item \textbf{Avoid}: data is not relational or structured, want an open source RDBMS, strong consistency and high availability are unnecessary\\ 331 | \end{itemize} 332 | 333 | {\color{cyan} \textbf{Cloud Spanner Data Model}} \\ 334 | A database can contain 1+ tables. Tables look like relational database tables. Data is strongly typed: must define a schema for each database and that schema must specify the data types of each column of each table. \\ 335 | \textbf{Parent-Child Relationships}: Can optionally define relationships between tables to physically co-locate their rows for efficient retrieval (data locality: physically storing 1+ rows of a table with a row from another table). 336 | 337 | \end{minipage} 338 | }; 339 | \node[fancytitle, right=10pt] at (box.north west) {CloudSQL and Cloud Spanner- Relational DBs}; 340 | \end{tikzpicture} 341 | 342 | % ------------ BigTable ----------------- 343 | \begin{tikzpicture} 344 | \node [mybox] (box){% 345 | \begin{minipage}{0.3\textwidth} 346 | \setlist{nolistsep} 347 | 348 | Columnar database ideal for applications that need high throughput, low latency, and scalability (IoT, user analytics, time-series data, graph data) over non-structured key/value data (each value is $<$ 10 MB). A single value in each row is indexed and this value is known as the row key. Does not support SQL queries. Zonal service. Not good for datasets smaller than 1 TB or for items larger than 10 MB. Ideal for handling large amounts of data (TB/PB) for long periods of time.\\ 349 | 350 | {\color{blue} \textbf{Data Model}}\\ 351 | 4-Dimensional: row key, column family (table name), column name, timestamp. 352 | \begin{itemize} 353 | \item Row key uniquely identifies an entity and columns contain individual values for each row. 354 | \item Similar columns are grouped into column families.
355 | \item Each column is identified by a combination of the column family and a column qualifier, which is a unique name within the column family. 356 | \end{itemize} 357 | \includegraphics[width=\textwidth]{Figures/bigtable-example.png} 358 | 359 | {\color{blue} \textbf{Load Balancing}} 360 | Automatically manages splitting, merging, and rebalancing. The master process balances workload/data volume within clusters. The master splits busier/larger tablets in half and merges less-accessed/smaller tablets together, redistributing them between nodes. \\ 361 | Best write performance can be achieved by using row keys that do not follow a predictable order and grouping related rows so they are adjacent to one another, which results in more efficient multiple row reads at the same time.\\ 362 | 363 | {\color{blue} \textbf{Security}} 364 | Security can be managed at the project and instance level. Does not support table-level, row-level, column-level, or cell-level security restrictions.\\ 365 | 366 | 367 | 368 | {\color{blue} \textbf{Other Storage Options}}\\ 369 | - Need SQL for OLTP: CloudSQL/Cloud Spanner.\\ 370 | - Need interactive querying for OLAP: BigQuery.\\ 371 | - Need to store blobs larger than 10 MB: Cloud Storage.\\ 372 | - Need to store structured objects in a document database, with ACID and SQL-like queries: Cloud Datastore. 373 | \end{minipage} 374 | }; 375 | \node[fancytitle, right=10pt] at (box.north west) {BigTable}; 376 | \end{tikzpicture} 377 | 378 | 379 | % ------------ BigTable Part II ----------------- 380 | \begin{tikzpicture} 381 | \node [mybox] (box){% 382 | \begin{minipage}{0.3\textwidth} 383 | \setlist{nolistsep} 384 | 385 | {\color{blue} \textbf{Designing Your Schema}}\\ 386 | Designing a Bigtable schema is different than designing a schema for a RDMS. Important considerations: 387 | \begin{itemize} 388 | \item Each table has only one index, the row key (4 KB) 389 | \item Rows are \textbf{sorted lexicographically by row key}. 390 | \item All operations are \textbf{atomic} (ACID) at row level. 391 | \item Both reads and writes should be distributed evenly 392 | \item Try to keep all info for an entity in a single row 393 | \item Related entities should be stored in adjacent rows 394 | \item Try to store \textbf{10 MB in a single cell} (max 100 MB) and \textbf{100 MB in a single row} (256 MB) 395 | \item Supports max of 1,000 tables in each instance. 396 | \item Choose row keys that don't follow predictable order 397 | \item Can use up to around 100 column families 398 | \item Column Qualifiers: can create as many as you need in each row, but should avoid splitting data across more column qualifiers than necessary (16 KB) 399 | \item \textbf{Tables are sparse}. Empty columns don't take up any space. Can create large number of columns, even if most columns are empty in most rows. 
400 | \item Field promotion (shifting a column into the row key) and salting (prepending the hash of the timestamp modulo some value to the row key) are ways to help design row keys.\\ 401 | 402 | \end{itemize} 403 | 404 | 405 | For time-series data, use tall/narrow tables.\\ 406 | Denormalize- prefer multiple tall and narrow tables 407 | 408 | 409 | \end{minipage} 410 | }; 411 | \node[fancytitle, right=10pt] at (box.north west) {BigTable Part II}; 412 | \end{tikzpicture} 413 | 414 | 415 | 416 | 417 | % ------------ BigQuery ----------------- 418 | \begin{tikzpicture} 419 | \node [mybox] (box){% 420 | \begin{minipage}{0.3\textwidth} 421 | \setlist{nolistsep} 422 | Scalable, fully-managed Data Warehouse with extremely fast SQL queries. Allows querying massive volumes of data at fast speeds. Good for OLAP workloads (petabyte-scale), Big Data exploration and processing, and reporting via Business Intelligence (BI) tools. Supports SQL querying for non-relational data. Relatively cheap to store, but costly for querying/processing. Good for analyzing historical data.\\ 423 | 424 | {\color{blue} \textbf{Data Model}} \\ 425 | Data tables are organized into units called datasets, which are sets of tables and views. A table must belong to a dataset, and a dataset must belong to a project. Tables contain records with rows and columns (fields). 426 | 427 | You can load data into BigQuery via two options: batch loading (free) and streaming (costly).\\ 428 | 429 | {\color{blue} \textbf{Security}} \\ 430 | BigQuery uses IAM to manage access to resources. The three types of resources in BigQuery are organizations, projects, and datasets. Security can be applied at the project and dataset level, but not table or view level. \\ 431 | % Project-level access controls determine the users, groups, and service accounts allowed to access all datasets, tables, views, and table data within a project. Dataset-level access controls determine the users, groups, and service accounts allowed to access the tables, views, and table data in a specific dataset.\\ 432 | 433 | {\color{blue} \textbf{Views}} \\ 434 | A view is a virtual table defined by a SQL query. When you create a view, you query it in the same way you query a table. \textbf{Authorized views allow you to share query results with particular users/groups without giving them access to underlying data.} When a user queries the view, the query results contain data only from the tables and fields specified in the query that defines the view.\\ 435 | 436 | {\color{blue} \textbf{Billing}} \\ 437 | Billing is based on \textbf{storage} (amount of data stored), \textbf{querying} (amount of data/number of bytes processed by query), and \textbf{streaming inserts}. Storage options are active and long-term (based on whether the table was modified in the past 90 days). Query options are on-demand and flat-rate. \\ 438 | Query costs are based on how much data you read/process, so if you only read a section of a table (partition), your costs will be reduced. Any charges incurred are billed to the attached billing account. Exporting/importing/copying data is free. A dry run (sketched below) reports how many bytes a query would process before you pay to run it.
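A minimal dry-run sketch with the google-cloud-bigquery Python client (the query and table name are hypothetical):\\
\texttt{from google.cloud import bigquery}\\
\texttt{client = bigquery.Client()}\\
\texttt{cfg = bigquery.QueryJobConfig()}\\
\texttt{cfg.dry\_run = True~~~~~~~~~~~~\# validate and estimate, do not execute}\\
\texttt{cfg.use\_query\_cache = False}\\
\texttt{job = client.query("SELECT field FROM dataset.table", job\_config=cfg)}\\
\texttt{print(job.total\_bytes\_processed)~~\# bytes the query would scan}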
439 | 440 | 441 | 442 | \end{minipage} 443 | }; 444 | \node[fancytitle, right=10pt] at (box.north west) {BigQuery}; 445 | \end{tikzpicture} 446 | 447 | 448 | % ------------ BigQuery Part II ----------------- 449 | \begin{tikzpicture} 450 | \node [mybox] (box){% 451 | \begin{minipage}{0.3\textwidth} 452 | \setlist{nolistsep} 453 | 454 | {\color{blue} \textbf{Partitioned tables}}\\ 455 | Special tables that are divided into partitions based on a column or partition key. Data is stored on different directories and specific queries will only run on slices of data, which improves query performance and reduces costs. Note that the partitions will not be of the same size. BigQuery automatically does this.\\ 456 | Each partitioned table can have up to 2,500 partitions (2500 days or a few years). The daily limit is 2,000 partition updates per table, per day. The rate limit: 50 partition updates every 10 seconds.\\ 457 | 458 | Two types of partitioned tables: 459 | \begin{itemize} 460 | \item \textbf{Ingestion Time}: Tables partitioned based on the data's ingestion (load) date or arrival date. Each partitioned table will have pseudocolumn \_PARTITIONTIME, or time data was loaded into table. Pseudocolumns are reserved for the table and cannot be used by the user. 461 | \item \textbf{Partitioned Tables}: Tables that are partitioned based on a TIMESTAMP or DATE column.\\ 462 | \end{itemize} 463 | 464 | \textbf{Windowing}: window functions increase the efficiency and reduce the complexity of queries that analyze partitions (windows) of a dataset by providing complex operations without the need for many intermediate calculations. They reduce the need for intermediate tables to store temporary data\\ 465 | 466 | {\color{blue} \textbf{Bucketing}}\\ 467 | Like partitioning, but each split/partition should be the same size and is based on the hash function of a column. Each bucket is a separate file, which makes for more efficient sampling and joining data.\\ 468 | 469 | 470 | 471 | {\color{blue} \textbf{Querying}}\\ 472 | After loading data into BigQuery, you can query using Standard SQL (preferred) or Legacy SQL (old). Query jobs are actions executed asynchronously to load, export, query, or copy data. Results can be saved to permanent (store) or temporary (cache) tables. 2 types of queries: 473 | \begin{itemize} 474 | \item \textbf{Interactive}: query is executed immediately, counts toward daily/concurrent usage (default) 475 | \item \textbf{Batch}: batches of queries are queued and the query starts when idle resources are available, only counts for daily and switches to interactive if idle for 24 hours\\ 476 | \end{itemize} 477 | 478 | {\color{blue} \textbf{Wildcard Tables}}\\ 479 | Used if you want to union all similar tables with similar names. '*' (e.g. 
project.dataset.Table*) 480 | 481 | 482 | 483 | 484 | 485 | \end{minipage} 486 | }; 487 | \node[fancytitle, right=10pt] at (box.north west) {BigQuery Part II}; 488 | \end{tikzpicture} 489 | 490 | % ------------ Optimizing BigQuery ----------------- 491 | \begin{tikzpicture} 492 | \node [mybox] (box){% 493 | \begin{minipage}{0.3\textwidth} 494 | \setlist{nolistsep} 495 | 496 | 497 | {\color{blue} \textbf{Controlling Costs}}\\ 498 | - Avoid SELECT * (full scan), select only columns needed (SELECT * EXCEPT)\\ 499 | - Sample data using preview options for free\\ 500 | - Preview queries to estimate costs (dryrun) \\ 501 | - Use max bytes billed to limit query costs\\ 502 | - Don't use LIMIT clause to limit costs (still full scan)\\ 503 | - Monitor costs using dashboards and audit logs\\ 504 | - Partition data by date\\ 505 | - Break query results into stages\\ 506 | - Use default table expiration to delete unneeded data\\ 507 | - Use streaming inserts wisely\\ 508 | - Set hard limit on bytes (members) processed per day\\ 509 | 510 | {\color{blue} \textbf{Query Performance}}\\ 511 | Generally, queries that do less work perform better.\\ 512 | 513 | \textbf{Input Data/Data Sources}\\ 514 | - Avoid SELECT * \\ 515 | - Prune partitioned queries (for time-partitioned table, use PARTITIONTIME pseudo column to filter partitions)\\ 516 | - Denormalize data (use nested and repeated fields)\\ 517 | - Use external data sources appropriately\\ 518 | - Avoid excessive wildcard tables\\ 519 | 520 | \textbf{SQL Anti-Patterns}\\ 521 | - Avoid self-joins., use window functions (perform calculations across many table rows related to current row)\\ 522 | - Partition/Skew: avoid unequally sized partitions, or when a value occurs more often than any other value\\ 523 | - Cross-Join: avoid joins that generate more outputs than inputs (pre-aggregate data or use window function)\\ 524 | Update/Insert Single Row/Column: avoid point-specific DML, instead batch updates and inserts\\ 525 | 526 | \textbf{Managing Query Outputs}\\ 527 | - Avoid repeated joins and using the same subqueries\\ 528 | - Writing large sets has performance/cost impacts. Use filters or LIMIT clause. 128MB limit for cached results\\ 529 | - Use LIMIT clause for large sorts (Resources Exceeded)\\ 530 | 531 | \textbf{Optimizing Query Computation}\\ 532 | - Avoid repeatedly transforming data via SQL queries\\ 533 | - Avoid JavaScript user-defined functions\\ 534 | - Use approximate aggregation functions (approx count)\\ 535 | - Order query operations to maximize performance. Use ORDER BY only in outermost query, push complex operations to end of the query.\\ 536 | - For queries that join data from multiple tables, optimize join patterns. Start with the largest table. 537 | \end{minipage} 538 | }; 539 | \node[fancytitle, right=10pt] at (box.north west) {Optimizing BigQuery}; 540 | \end{tikzpicture} 541 | 542 | 543 | 544 | 545 | % ------------ DataStore ----------------- 546 | \begin{tikzpicture} 547 | \node [mybox] (box){% 548 | \begin{minipage}{0.3\textwidth} 549 | \setlist{nolistsep} 550 | NoSQL document database that automatically handles sharding and replication (highly available, scalable and durable). Supports ACID transactions, SQL-like queries. Query execution depends on size of returned result, not size of dataset. Ideal for "needle in a haystack" operation and applications that rely on highly available structured data at scale\\ 551 | 552 | {\color{blue} \textbf{Data Model}}\\ 553 | Data objects in Datastore are known as entities. 
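For example, creating and fetching an entity with the google-cloud-datastore Python client (the kind, key, and properties here are hypothetical):\\
\texttt{from google.cloud import datastore}\\
\texttt{client = datastore.Client()}\\
\texttt{key = client.key("Product", "sku-123")~~\# kind + name}\\
\texttt{entity = datastore.Entity(key=key)}\\
\texttt{entity.update(\{"name": "Widget", "price": 9.99\})}\\
\texttt{client.put(entity)~~~~~~~~~~\# upsert}\\
\texttt{print(client.get(key))~~~~~~\# fetch by key}\\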
An entity has one or more named properties, each of which can have one or more values. Each entity in has a key that uniquely identifies it. You can fetch an individual entity using the entity's key, or query one or more entities based on the entities' keys or property values.\\ 554 | 555 | Ideal for highly structured data at scale: product catalogs, customer experience based on users past activities/preferences, game states. Don't use if you need extremely low latency or analytics (complex joins, etc).\\ 556 | 557 | \includegraphics[width=\textwidth]{Figures/DocumentDatabase.png} 558 | 559 | \end{minipage} 560 | }; 561 | \node[fancytitle, right=10pt] at (box.north west) {DataStore}; 562 | \end{tikzpicture} 563 | 564 | % ------------ DataProc ----------------- 565 | \begin{tikzpicture} 566 | \node [mybox] (box){% 567 | \begin{minipage}{0.3\textwidth} 568 | \setlist{nolistsep} 569 | 570 | Fully-managed cloud service for running Spark and Hadoop clusters. Provides access to Hadoop cluster on GCP and Hadoop-ecosystem tools (Pig, Hive, and Spark). Can be used to implement ETL warehouse solution.\\ 571 | 572 | Preferred if migrating existing on-premise Hadoop or Spark infrastructure to GCP without redevelopment effort. Dataflow is preferred for a new development. 573 | \end{minipage} 574 | }; 575 | \node[fancytitle, right=10pt] at (box.north west) {DataProc}; 576 | \end{tikzpicture} 577 | 578 | % ------------ DataFlow ----------------- 579 | \begin{tikzpicture} 580 | \node [mybox] (box){% 581 | \begin{minipage}{0.3\textwidth} 582 | \setlist{nolistsep} 583 | 584 | Managed service for developing and executing data processing patterns (ETL) (based on Apache Beam) for \textbf{streaming} and \textbf{batch} data. Preferred for new Hadoop or Spark infrastructure development. Usually site between front-end and back-end storage solutions. \\ 585 | 586 | {\color{blue} \textbf{Concepts}}\\ 587 | \textbf{Pipeline}: encapsulates series of computations that accepts input data from external sources, transforms data to provide some useful intelligence, and produce output\\ 588 | \textbf{PCollections}: abstraction that represents a potentially distributed, multi-element data set, that acts as the pipeline's data. PCollection objects represent input, intermediate, and output data. The edges of the pipeline.\\ 589 | \textbf{Transforms}: operations in pipeline. A transform takes a PCollection(s) as input, performs an operation that you specify on each element in that collection, and produces a new output PCollection. Uses the "what/where/when/how" model. Nodes in the pipeline. Composite transforms are multiple transforms: combining, mapping, shuffling, reducing, or statistical analysis.\\ 590 | \textbf{Pipeline I/O}: the source/sink, where the data flows in and out. Supports read and write transforms for a number of common data storage types, as well as custom.\\ 591 | 592 | \includegraphics[width=\textwidth]{Figures/beam_pipeline.png} 593 | 594 | {\color{blue} \textbf{Windowing}}\\ 595 | Windowing a PCollection divides the elements into windows based on the associated event time for each element. Especially useful for PCollections with unbounded size, since it allows operating on a sub-group (mini-batches).\\ 596 | 597 | {\color{blue} \textbf{Triggers}}\\ 598 | Allows specifying a trigger to control when (in processing time) results for the given window can be produced. 
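A sketch in the Beam Python SDK, assuming \texttt{events} is an upstream PCollection of key-value pairs (the window size and trigger are illustrative, not a prescribed pattern):\\
\texttt{import apache\_beam as beam}\\
\texttt{from apache\_beam.transforms import window, trigger}\\
\texttt{windowed = (events}\\
\texttt{~~| beam.WindowInto(window.FixedWindows(60),~~\# 1-minute event-time windows}\\
\texttt{~~~~~~trigger=trigger.AfterWatermark(late=trigger.AfterCount(1)),}\\
\texttt{~~~~~~accumulation\_mode=trigger.AccumulationMode.ACCUMULATING)}\\
\texttt{~~| beam.CombinePerKey(sum))~~\# e.g. sum values per key per window}\\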
If unspecified, the default behavior is to trigger first when the watermark passes the end of the window, and then trigger again every time there is late arriving data. 599 | 600 | \end{minipage} 601 | }; 602 | \node[fancytitle, right=10pt] at (box.north west) {DataFlow}; 603 | \end{tikzpicture} 604 | 605 | % ------------ Pub/Sub ----------------- 606 | \begin{tikzpicture} 607 | \node [mybox] (box){% 608 | \begin{minipage}{0.3\textwidth} 609 | \setlist{nolistsep} 610 | 611 | Asynchronous messaging service that decouples senders and receivers. Allows for secure and highly available communication between independently written applications. \\ 612 | A publisher app creates and sends messages to a \textit{topic}. Subscriber applications create a subscription to a topic to receive messages from it. Communication can be one-to-many (fan-out), many-to-one (fan-in), and many-to-many. Gaurantees at least once delivery before deletion from queue.\\ 613 | 614 | {\color{blue} \textbf{Scenarios}} 615 | \begin{itemize} 616 | \item Balancing workloads in network clusters- queue can efficiently distribute tasks 617 | \item Implementing asynchronous workflows 618 | \item Data streaming from various processes or devices 619 | \item Reliability improvement- in case zone failure 620 | \item Distributing event notifications 621 | \item Refreshing distributed caches 622 | \item Logging to multiple systems\\ 623 | \end{itemize} 624 | 625 | 626 | {\color{blue} \textbf{Benefits/Features}}\\ 627 | Unified messaging, global presence, push- and pull-style subscriptions, replicated storage and guaranteed at-least-once message delivery, encryption of data at rest/transit, easy-to-use REST/JSON API\\ 628 | 629 | {\color{blue} \textbf{Data Model}}\\ 630 | \textbf{Topic}, \textbf{Subscription}, \textbf{Message} (combination of data and attributes that a publisher sends to a topic and is eventually delivered to subscribers), \textbf{Message Attribute} (key-value pair that a publisher can define for a message)\\ 631 | 632 | {\color{blue} \textbf{Message Flow}} 633 | \begin{itemize} 634 | \item Publisher creates a topic in the Cloud Pub/Sub service and sends messages to the topic. 635 | \item Messages are persisted in a message store until they are delivered and acknowledged by subscribers. 636 | \item The Pub/Sub service forwards messages from a topic to all of its subscriptions, individually. Each subscription receives messages either by pushing/pulling. 637 | \item The subscriber receives pending messages from its subscription and acknowledges message. 638 | \item When a message is acknowledged by the subscriber, it is removed from the subscription's message queue. 639 | \end{itemize} 640 | % \includegraphics[width=\textwidth]{pubsub.png} 641 | \end{minipage} 642 | }; 643 | \node[fancytitle, right=10pt] at (box.north west) {Pub/Sub}; 644 | \end{tikzpicture} 645 | 646 | 647 | 648 | 649 | 650 | % ------------ ML Engine ----------------- 651 | \begin{tikzpicture} 652 | \node [mybox] (box){% 653 | \begin{minipage}{0.3\textwidth} 654 | \setlist{nolistsep} 655 | 656 | Managed infrastructure of GCP with the power and flexibility of TensorFlow. Can use it to train ML models at scale and host trained models to make predictions about new data in the cloud. Supported frameworks include Tensorflow, scikit-learn and XGBoost.\\ 657 | 658 | {\color{blue} \textbf{ML Workflow}}\\ 659 | Evaluate Problem: What is the problem and is ML the best approach? 
How will you measure the model's success?\\ 660 | \textbf{Choosing Development Environment}: Supports Python 2 and 3, with TF, scikit-learn, and XGBoost as frameworks.\\ 661 | \textbf{Data Preparation and Exploration}: Involves gathering, cleaning, transforming, exploring, splitting, and preprocessing data. Also includes feature engineering. \\ 662 | \textbf{Model Training/Testing}: Provide access to the train/test data and train the model in batches. Evaluate progress/results and adjust the model as needed. Export/save the trained model (250 MB or smaller to deploy in ML Engine).\\ 663 | \textbf{Hyperparameter Tuning}: hyperparameters are variables that govern the training process itself, not the training data. Usually constant during training.\\ 664 | \textbf{Prediction}: host your trained ML models in the cloud and use the Cloud ML prediction service to infer target values for new data\\ 665 | 666 | {\color{blue} \textbf{ML APIs}}\\ 667 | \textbf{Speech-to-Text}: speech-to-text conversion\\ 668 | \textbf{Text-to-Speech}: text-to-speech conversion \\ 669 | \textbf{Translation}: dynamically translate between languages\\ 670 | \textbf{Vision}: derive insight (objects/text) from images\\ 671 | \textbf{Natural Language}: extract information (sentiment, intent, entity, and syntax) about text: people, places, etc.\\ 672 | \textbf{Video Intelligence}: extract metadata from videos\\ 673 | 674 | {\color{blue} \textbf{Cloud Datalab}}\\ 675 | Interactive tool (run on an instance) to explore, analyze, transform and visualize data and build machine learning models. Built on Jupyter. Datalab is free but may incur costs based on usage of other services.\\ 676 | 677 | {\color{blue} \textbf{Data Studio}}\\ 678 | Turns your data into informative dashboards and reports. Updates to the data automatically update the dashboard. \textbf{Query cache} remembers the queries (requests for data) issued by the components in a report (lightning bolt); turn it off for data that changes frequently, when you want to prioritize freshness over performance, or when using a data source that incurs usage costs (e.g. BigQuery). \textbf{Prefetch cache} predicts data that could be requested by analyzing the dimensions, metrics, filters, and date range properties and controls on the report. 679 | 680 | \end{minipage} 681 | }; 682 | \node[fancytitle, right=10pt] at (box.north west) {ML Engine}; 683 | \end{tikzpicture} 684 | 685 | 686 | % ------------ ML Concepts/Terminology ----------------- 687 | \begin{tikzpicture} 688 | \node [mybox] (box){% 689 | \begin{minipage}{0.3\textwidth} 690 | \setlist{nolistsep} 691 | {\color{cyan} \textbf{Features}}: input data used by the ML model\\ 692 | {\color{cyan}\textbf{Feature Engineering}}: transforming input features to be more useful for the models, e.g. mapping categories to buckets, normalizing between -1 and 1, removing null\\ 693 | {\color{cyan}\textbf{Train/Eval/Test}}: training is data used to optimize the model, evaluation is used to assess the model on new data during training, test is used to provide the final result\\ 694 | {\color{cyan}\textbf{Classification/Regression}}: regression is predicting a number (e.g. housing price), classification is predicting from a set of categories (e.g.
predicting red/blue/green)\\ 695 | {\color{cyan}\textbf{Linear Regression}}: predicts an output by multiplying and summing input features with weights and biases\\ 696 | {\color{cyan}\textbf{Logistic Regression}}: similar to linear regression but predicts a probability\\ 697 | {\color{cyan}\textbf{Neural Network}}: composed of neurons (simple building blocks that actually “learn”), contains activation functions that makes it possible to predict non-linear outputs\\ 698 | {\color{cyan}\textbf{Activation Functions}}: mathematical functions that introduce non-linearity to a network e.g. RELU, tanh\\ 699 | {\color{cyan}\textbf{Sigmoid Function}}: function that maps very negative numbers to a number very close to 0, huge numbers close to 1, and 0 to .5. Useful for predicting probabilities\\ 700 | {\color{cyan}\textbf{Gradient Descent/Backpropagation}}: fundamental loss optimizer algorithms, of which the other optimizers are usually based. Backpropagation is similar to gradient descent but for neural nets \\ 701 | {\color{cyan}\textbf{Optimizer}}: operation that changes the weights and biases to reduce loss e.g. Adagrad or Adam\\ 702 | {\color{cyan}\textbf{Weights / Biases}}: weights are values that the input features are multiplied by to predict an output value. Biases are the value of the output given a weight of 0. \\ 703 | {\color{cyan}\textbf{Converge}}: algorithm that converges will eventually reach an optimal answer, even if very slowly. An algorithm that doesn’t converge may never reach an optimal answer. \\ 704 | {\color{cyan}\textbf{Learning Rate}}: rate at which optimizers change weights and biases. High learning rate generally trains faster but risks not converging, whereas a lower rate trains slower \\ 705 | {\color{cyan}\textbf{Overfitting}}: model performs great on the input data but poorly on the test data (combat by dropout, early stopping, or reduce \# of nodes or layers)\\ 706 | {\color{cyan}\textbf{Bias/Variance}}: how much output is determined by the features. more variance often can mean overfitting, more bias can mean a bad model \\ 707 | {\color{cyan}\textbf{Regularization}}: variety of approaches to reduce overfitting, including adding the weights to the loss function, randomly dropping layers (dropout)\\ 708 | {\color{cyan}\textbf{Ensemble Learning}}: training multiple models with different parameters to solve the same problem \\ 709 | % {\color{cyan}\textbf{Numerical Instability}}: issues with very large/small values due to the limits of floating point numbers in computers\\ 710 | {\color{cyan}\textbf{Embeddings}}: mapping from discrete objects, such as words, to vectors of real numbers. useful because classifiers/neural networks work well on vectors of real numbers 711 | 712 | \end{minipage} 713 | }; 714 | 715 | \node[fancytitle, right=10pt] at (box.north west) {ML Concepts/Terminology}; 716 | \end{tikzpicture} 717 | 718 | 719 | 720 | % ------------ TensorFlow ----------------- 721 | \begin{tikzpicture} 722 | \node [mybox] (box){% 723 | \begin{minipage}{0.3\textwidth} 724 | \setlist{nolistsep} 725 | 726 | 727 | Tensorflow is an open source software library for numerical computation using data flow graphs. Everything in TF is a graph, where nodes represent operations on data and edges represent the data. Phase 1 of TF is building up a computation graph and phase 2 is executing it. It is also distributed, meaning it can run on either a cluster of machines or just a single machine. 
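A minimal sketch of the two phases in the TF 1.x API (the values here are arbitrary):\\
\texttt{import tensorflow as tf}\\
\texttt{x = tf.placeholder(tf.float32)~~~\# input node, fed at runtime}\\
\texttt{w = tf.Variable(2.0)~~~~~~~~~~~~~\# trainable parameter}\\
\texttt{y = x * w~~~~~~~~~~~~~~~~~~~~~~~~\# phase 1: build the graph}\\
\texttt{with tf.Session() as sess:}\\
\texttt{~~~~sess.run(tf.global\_variables\_initializer())}\\
\texttt{~~~~print(sess.run(y, feed\_dict=\{x: 3.0\}))~~\# phase 2: execute (6.0)}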
\\ 728 | 729 | 730 | {\color{blue} \textbf{Tensors}}\\ 731 | In a graph, tensors are the edges and are multidimensional data arrays that flow through the graph. Central unit of data in TF and consists of a set of primitive values shaped into an array of any number of dimensions.\\ 732 | 733 | A tensor is characterized by its rank (\# dimensions in tensor), shape (\# of dimensions and size of each dimension), data type (data type of each element in tensor).\\ 734 | 735 | {\color{blue} \textbf{Placeholders and Variables}}\\ 736 | \textbf{Variables}: best way to represent shared, persistent state manipulated by your program. These are the parameters of the ML model are altered/trained during the training process. Training variables. \\ 737 | \textbf{Placeholders}: way to specify inputs into a graph that hold the place for a Tensor that will be fed at runtime. They are assigned once, do not change after. Input nodes\\ 738 | 739 | {\color{blue} \textbf{Popular Architectures}}\\ 740 | \textbf{Linear Classifier}: takes input features and combines them with weights and biases to predict output value\\ 741 | \textbf{DNNClassifier}: deep neural net, contains intermediate layers of nodes that represent “hidden features” and activation functions to represent non-linearity\\ 742 | \textbf{ConvNets}: convolutional neural nets. popular for image classification. \\ 743 | \textbf{Transfer Learning}: use existing trained models as starting points and add additional layers for the specific use case. idea is that highly trained existing models know general features that serve as a good starting point for training a small network on specific examples \\ 744 | \textbf{RNN}: recurrent neural nets, designed for handling a sequence of inputs that have “memory” of the sequence. LSTMs are a fancy version of RNNs, popular for NLP 745 | \textbf{GAN}: general adversarial neural net, one model creates fake examples, and another model is served both fake example and real examples and is asked to distinguish\\ 746 | \textbf{Wide and Deep}: combines linear classifiers with deep neural net classifiers, "wide" linear parts represent memorizing specific examples and “deep” parts represent understanding high level features\\ 747 | 748 | 749 | 750 | % In ML, feature vectors are inputs/attributes which the ML algorithm focuses on. Each data point is a list (vector) of such vectors: aka feature vector. The output is a label or number.\\ 751 | % Deep Learning is a representation ML based system that figures out by themselves what features to focus on. Neural networks are the most common class of deep learning algorithms, which are composed of neurons. 752 | \end{minipage} 753 | }; 754 | \node[fancytitle, right=10pt] at (box.north west) {TensorFlow}; 755 | \end{tikzpicture} 756 | 757 | 758 | % ------------ Case Study: Flowlogistic I----------------- 759 | \begin{tikzpicture} 760 | \node [mybox] (box){% 761 | \begin{minipage}{0.3\textwidth} 762 | \setlist{nolistsep} 763 | 764 | {\color{blue} \textbf{Company Overview}}\\ 765 | Flowlogistic is a top logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.\\ 766 | 767 | {\color{blue} \textbf{Company Background}}\\ 768 | The company started as a regional trucking company, and then expanded into other logistics markets. 
Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources.\\ 769 | 770 | {\color{blue} \textbf{Solution Concept}}\\ 771 | Flowlogistic wants to implement two concepts in the cloud: 772 | \begin{itemize} 773 | \item Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads. 774 | \item Perform analytics on all their orders and shipment logs (structured and unstructured data) to determine how best to deploy resources, which customers to target, and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.\\ 775 | \end{itemize} 776 | 777 | {\color{blue} \textbf{Business Requirements}}\\ 778 | -Build a reliable and reproducible environment with scaled parity of production\\ 779 | -Aggregate data in a centralized Data Lake for analysis\\ 780 | -Perform predictive analytics on future shipments\\ 781 | -Accurately track every shipment worldwide\\ 782 | -Improve business agility and speed of innovation through rapid provisioning of new resources\\ 783 | -Analyze and optimize architecture for performance in the cloud\\ 784 | -Migrate fully to the cloud if all other requirements are met \\ 785 | 786 | {\color{blue} \textbf{Technical Requirements}}\\ 787 | -Handle both streaming and batch data\\ 788 | -Migrate existing Hadoop workloads\\ 789 | -Ensure architecture is scalable and elastic to meet the changing demands of the company\\ 790 | -Use managed services whenever possible\\ 791 | -Encrypt data in flight and at rest\\ 792 | -Connect a VPN between the production data center and cloud environment 793 | \end{minipage} 794 | }; 795 | \node[fancytitle, right=10pt] at (box.north west) {Case Study: Flowlogistic I}; 796 | \end{tikzpicture} 797 | 798 | % ------------ Case Study: Flowlogistic II ----------------- 799 | \begin{tikzpicture} 800 | \node [mybox] (box){% 801 | \begin{minipage}{0.3\textwidth} 802 | \setlist{nolistsep} 803 | 804 | {\color{blue} \textbf{Existing Technical Environment}}\\ 805 | Flowlogistic's architecture resides in a single data center: 806 | \begin{itemize} 807 | 808 | \item Databases: 809 | \begin{itemize} 810 | \item 8 physical servers in 2 clusters 811 | \begin{itemize} 812 | \item SQL Server: inventory, user/static data 813 | \end{itemize} 814 | \item 3 physical servers 815 | \begin{itemize} 816 | \item Cassandra: metadata, tracking messages 817 | \end{itemize} 818 | \item 10 Kafka servers: tracking message aggregation and batch insert 819 | \end{itemize} 820 | 821 | \item Application servers: customer front end, middleware for order/customs 822 | \begin{itemize} 823 | \item 60 virtual machines across 20 physical servers 824 | \begin{itemize} 825 | \item Tomcat: Java services 826 | \item Nginx: static content 827 | \item Batch servers 828 | \end{itemize} 829 | \end{itemize} 830 | 831 | \item Storage appliances 832 | \begin{itemize} 833 | \item iSCSI for virtual machine (VM) hosts 834 | \item Fibre Channel storage area network (FC SAN): SQL server storage 835 | 836 | \item Network-attached storage (NAS): image storage, logs, backups 837
| 838 | \end{itemize} 839 | \item 10 Apache Hadoop / Spark Servers 840 | \begin{itemize} 841 | \item Core Data Lake 842 | \item Data analysis workloads 843 | \end{itemize} 844 | 845 | \item 20 miscellaneous servers 846 | \begin{itemize} 847 | \item Jenkins, monitoring, bastion hosts, security scanners, billing software\\ 848 | \end{itemize} 849 | \end{itemize} 850 | 851 | {\color{blue} \textbf{CEO Statement}}\\ 852 | We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth/efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around. We need to organize our information so we can more easily understand where our customers are and what they are shipping.\\ 853 | 854 | {\color{blue} \textbf{CTO Statement}}\\ 855 | IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO’s tracking technology.\\ 856 | 857 | {\color{blue} \textbf{CFO Statement}}\\ 858 | Part of our competitive advantage is that we penalize ourselves for late shipments/deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don’t want to commit capital to building out a server environment. 859 | \end{minipage} 860 | }; 861 | \node[fancytitle, right=10pt] at (box.north west) {Case Study: Flowlogistic II}; 862 | \end{tikzpicture} 863 | 864 | 865 | % ------------ Flowlogistic Potential Solution ----------------- 866 | \begin{tikzpicture} 867 | \node [mybox] (box){% 868 | \begin{minipage}{0.3\textwidth} 869 | \setlist{nolistsep} 870 | 871 | \begin{enumerate} 872 | \item Cloud Dataproc handles the existing workloads and produces results as before (using a VPN). 873 | \item At the same time, data is received through the data center, via either stream or batch, and sent to Pub/Sub. 874 | \item Pub/Sub encrypts the data in transit and at rest. 875 | \item Data is fed into Dataflow as either stream or batch data. 876 | \item Dataflow processes the data and sends the cleaned data to BigQuery (again either as stream or batch). 877 | \item Data can then be queried from BigQuery and predictive analysis can begin (using ML Engine, etc.). 878 | \end{enumerate} 879 | \includegraphics[width=\textwidth]{Figures/flowlogistic_sol.png} 880 | \textbf{\textit{Note}}: All services are fully managed, easily scalable, and can handle streaming/batch data. All technical requirements are met. 881 | 882 | \end{minipage} 883 | }; 884 | \node[fancytitle, right=10pt] at (box.north west) {Flowlogistic Potential Solution}; 885 | \end{tikzpicture} 886 | 887 | % ------------ MJTelco Case Study I ----------------- 888 | \begin{tikzpicture} 889 | \node [mybox] (box){% 890 | \begin{minipage}{0.3\textwidth} 891 | \setlist{nolistsep} 892 | 893 | {\color{blue} \textbf{Company Overview}}\\ 894 | MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for innovative optical communications hardware.
Based on these patents, they can create many reliable, high-speed backbone links with inexpensive hardware.\\ 895 | 896 | {\color{blue} \textbf{Company Background}}\\ 897 | Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space. Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network, allowing them to account for the impact of dynamic regional politics on location availability and cost.\\ 898 | 899 | Their management and operations teams are situated all around the globe, creating many-to-many relationships between data consumers and providers in their system. After careful consideration, they decided public cloud is the perfect environment to support their needs.\\ 900 | 901 | {\color{blue} \textbf{Solution Concept}}\\ 902 | MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs: 903 | \begin{itemize} 904 | \item Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations. 905 | \item Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definitions. 906 | MJTelco will also use three separate operating environments -- development/test, staging, and production -- to meet the needs of running experiments, deploying new features, and serving production customers.\\ 907 | \end{itemize} 908 | 909 | {\color{blue} \textbf{Business Requirements}}\\ 910 | -Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed telecom user community.\\ 911 | -Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.\\ 912 | -Provide reliable and timely access to data for analysis from distributed research workers.\\ 913 | -Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers. 914 | 915 | \end{minipage} 916 | }; 917 | \node[fancytitle, right=10pt] at (box.north west) {Case Study: MJTelco I}; 918 | \end{tikzpicture} 919 | 920 | % ------------ Case Study: MJTelco II ----------------- 921 | \begin{tikzpicture} 922 | \node [mybox] (box){% 923 | \begin{minipage}{0.3\textwidth} 924 | \setlist{nolistsep} 925 | 926 | {\color{blue} \textbf{Technical Requirements}}\\ 927 | -Ensure secure and efficient transport and storage of telemetry data.\\ 928 | -Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.\\ 929 | -Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day.\\ 930 | -Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production learning cycles.\\ 931 | 932 | {\color{blue} \textbf{CEO Statement}}\\ 933 | Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable, which gives us cost advantages.
We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity commitments.\\ 934 | 935 | {\color{blue} \textbf{CTO Statement}}\\ 936 | Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our development and test environments to work as we iterate.\\ 937 | 938 | {\color{blue} \textbf{CFO Statement}}\\ 939 | This project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud’s machine learning will allow our quantitative researchers to work on our high-value problems instead of problems with our data pipelines. 940 | \end{minipage} 941 | }; 942 | \node[fancytitle, right=10pt] at (box.north west) {Case Study: MJTelco II}; 943 | \end{tikzpicture} 944 | 945 | 946 | % ------------ Choosing a Storage Option ----------------- 947 | \begin{tikzpicture} 948 | \node [mybox] (box){% 949 | \begin{minipage}{0.3\textwidth} 950 | \setlist{nolistsep} 951 | 952 | \begin{tabularx}{\textwidth}{X|p{2.4cm}|p{2.4cm}} 953 | \textbf{Need} & \textbf{Open Source} & \textbf{GCP Solution} \\ 954 | \hline 955 | Compute, Block Storage & Persistent Disks, SSD & Persistent Disks, SSD\\ 956 | \hline 957 | Media, Blob Storage & Filesystem, HDFS & Cloud Storage\\ 958 | \hline 959 | SQL Interface on File Data & Hive & BigQuery\\ 960 | \hline 961 | Document DB, NoSQL & CouchDB, MongoDB & DataStore\\ 962 | \hline 963 | Fast Scanning NoSQL & HBase, Cassandra & BigTable\\ 964 | \hline 965 | OLTP & RDBMS (e.g. MySQL) & CloudSQL, Cloud Spanner\\ 966 | \hline 967 | OLAP & Hive & BigQuery\\ 968 | \hline 969 | \end{tabularx} 970 | 971 | \textbf{Cloud Storage}: unstructured data (blob)\\ 972 | % Cloud Storage for Firebase: unstructured data (blob), need for mobile SDKs\\ 973 | \textbf{CloudSQL}: OLTP, SQL, structured and relational data, no need for horizontal scaling\\ 974 | \textbf{Cloud Spanner}: OLTP, SQL, structured and relational data, need for horizontal scaling, between RDBMS/big data\\ 975 | \textbf{Cloud Datastore}: NoSQL, document data, key-value structured but non-relational data (XML, HTML), query time depends on size of result (not dataset), fast to read/slow to write\\ 976 | \textbf{BigTable}: NoSQL, key-value data, columnar, good for sparse data, sensitive to hot spotting, high throughput and scalability for non-structured key/value data, where each value is typically no larger than 10 MB 977 | 978 | \end{minipage} 979 | }; 980 | \node[fancytitle, right=10pt] at (box.north west) {Choosing a Storage Option}; 981 | \end{tikzpicture} 982 | 983 | 984 | % ------------ Solutions ----------------- 985 | \begin{tikzpicture} 986 | \node [mybox] (box){% 987 | \begin{minipage}{0.3\textwidth} 988 | \setlist{nolistsep} 989 | 990 | {\color{blue} \textbf{Data Lifecycle}}\\ 991 | At each stage, GCP offers multiple services to manage your data.
992 | \begin{enumerate} 993 | \item \textbf{Ingest}: the first stage is to pull in the raw data, such as streaming data from devices, on-premises batch data, application logs, or mobile-app user events and analytics 994 | \item \textbf{Store}: after the data has been retrieved, it needs to be stored in a format that is durable and can be easily accessed 995 | \item \textbf{Process and Analyze}: the data is transformed from raw form into actionable information 996 | \item \textbf{Explore and Visualize}: convert the results of the analysis into a format that is easy to draw insights from and to share with colleagues and peers 997 | \end{enumerate} 998 | 999 | \includegraphics[width=\textwidth]{Figures/Data_Lifecycle_Solutions.png} 1000 | 1001 | 1002 | {\color{blue} \textbf{Ingest}}\\ 1003 | There are a number of approaches to collect raw data, based on the data’s size, source, and latency. 1004 | \begin{itemize} 1005 | \item \textbf{Application}: data from application events, log files or user events, typically collected in a push model, where the application calls an API to send the data to storage (Stackdriver Logging, Pub/Sub, CloudSQL, Datastore, Bigtable, Spanner) 1006 | \item \textbf{Streaming}: data consists of a continuous stream of small, asynchronous messages. Common uses include telemetry, or collecting data from geographically dispersed devices (IoT), and user events and analytics (Pub/Sub) 1007 | \item \textbf{Batch}: large amounts of data are stored in a set of files that are transferred to storage in bulk. Common use cases include scientific workloads, backups, and migration (Storage, Transfer Service, Appliance) 1008 | \end{itemize} 1009 | 1010 | \end{minipage} 1011 | }; 1012 | \node[fancytitle, right=10pt] at (box.north west) {Solutions}; 1013 | \end{tikzpicture} 1014 | 1015 | % ------------ Solutions Part II ----------------- 1016 | \begin{tikzpicture} 1017 | \node [mybox] (box){% 1018 | \begin{minipage}{0.3\textwidth} 1019 | \setlist{nolistsep} 1020 | 1021 | {\color{blue} \textbf{Storage}}\\ 1022 | \textbf{Cloud Storage}: durable and highly-available object storage for structured and unstructured data\\ 1023 | \textbf{Cloud SQL}: fully managed, cloud RDBMS that offers both MySQL and PostgreSQL engines with built-in support for replication, for low-latency, transactional, relational database workloads. Supports RDBMS workloads up to 10 TB (storing financial transactions, user credentials, customer orders)\\ 1024 | \textbf{BigTable}: managed, high-performance NoSQL database service designed for terabyte- to petabyte-scale workloads. Suitable for large-scale, high-throughput workloads such as advertising technology or IoT data infrastructure. Does not support multi-row transactions, SQL queries, or joins; consider Cloud SQL or Cloud Datastore instead\\ 1025 | \textbf{Cloud Spanner}: fully managed relational database service for mission-critical OLTP applications. Horizontally scalable and built for strong consistency, high availability, and global scale. Supports schemas, ACID transactions, and SQL queries (use for retail and global supply chain, ad tech, financial services)\\ 1026 | \textbf{BigQuery}: stores large quantities of data for query and analysis instead of transactional processing 1027 | 1028 | \includegraphics[width=\textwidth]{Figures/Solutions-_Storage.png}\\ 1029 | 1030 | {\color{blue} \textbf{Exploration and Visualization}}\\ 1031 | Data exploration and visualization help you better understand the results of the processing and analysis.
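For example, a minimal notebook-style exploration sketch (assuming the \texttt{google-cloud-bigquery} Python client plus pandas; the table name is a hypothetical placeholder):\\
\texttt{from google.cloud import bigquery}\\
\texttt{client = bigquery.Client()\ \ \# uses the default project/credentials}\\
\texttt{sql = 'SELECT state, COUNT(*) AS n FROM my\_dataset.shipments GROUP BY state'}\\
\texttt{df = client.query(sql).to\_dataframe()\ \ \# results as a pandas DataFrame}\\
\texttt{df.plot(x='state', y='n', kind='bar')\ \ \# quick visualization}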
1032 | \begin{itemize} 1033 | \item \textbf{Cloud Datalab}: interactive web-based tool that you can use to explore, analyze, and visualize data, built on top of Jupyter notebooks. Notebooks run on a VM, are automatically saved to persistent disks, and can be stored in a Cloud Source Repository (git repo) 1034 | \item \textbf{Data Studio}: drag-and-drop report builder that you can use to visualize data in reports and dashboards that are backed by live data and can be shared and updated easily. Data sources can be data files, Google Sheets, Cloud SQL, and BigQuery. Supports a \textbf{query cache} and a \textbf{prefetch cache}: the query cache remembers previous queries; if the data is not found there, Data Studio goes to the prefetch cache, which predicts data that could be requested (disable caching if data changes frequently or if using the data source incurs charges) 1035 | \end{itemize} 1036 | 1037 | 1038 | 1039 | 1040 | \end{minipage} 1041 | }; 1042 | \node[fancytitle, right=10pt] at (box.north west) {Solutions- Part II}; 1043 | \end{tikzpicture} 1044 | 1045 | % ------------ Solutions Part III ----------------- 1046 | \begin{tikzpicture} 1047 | \node [mybox] (box){% 1048 | \begin{minipage}{0.3\textwidth} 1049 | \setlist{nolistsep} 1050 | 1051 | {\color{blue} \textbf{Process and Analyze}}\\ 1052 | In order to derive business value and insights from data, you must transform and analyze it. This requires a processing framework that can either analyze the data directly or prepare the data for downstream analysis, as well as tools to analyze and understand processing results. 1053 | \begin{itemize} 1054 | \item \textbf{Processing}: data from source systems is cleansed, normalized, and processed across multiple machines, and stored in analytical systems 1055 | \item \textbf{Analysis}: processed data is stored in systems that allow for ad-hoc querying and exploration 1056 | \item \textbf{Understanding}: based on analytical results, data is used to train and test automated machine-learning models 1057 | \end{itemize} 1058 | \includegraphics[width=\textwidth]{Figures/Solutions-_Proces_Analyze.png} 1059 | 1060 | {\color{cyan} \textbf{Processing}} 1061 | \begin{itemize} 1062 | \item \textbf{Cloud Dataproc}: migrate your existing Hadoop or Spark deployments to a fully-managed service that automates cluster creation, simplifies configuration and management of your cluster, has built-in monitoring and utilization reports, and can be shut down when not in use 1063 | \item \textbf{Cloud Dataflow}: designed to simplify big data processing for both streaming and batch workloads, with a focus on filtering, aggregating, and transforming your data 1064 | \item \textbf{Cloud Dataprep}: service for visually exploring, cleaning, and preparing data for analysis.
Can transform data of any size stored in CSV, JSON, or relational-table formats\\ 1065 | \end{itemize} 1066 | 1067 | {\color{cyan} \textbf{Analyzing and Querying}} 1068 | \begin{itemize} 1069 | \item \textbf{BigQuery}: query using SQL, all data encrypted, user analysis, device and operational metrics, business intelligence 1070 | \item \textbf{Task-Specific ML}: Vision, Speech, Natural Language, Translation, Video Intelligence 1071 | \item \textbf{ML Engine}: managed platform you can use to run custom machine learning models at scale 1072 | \end{itemize} 1073 | 1074 | 1075 | \end{minipage} 1076 | }; 1077 | \node[fancytitle, right=10pt] at (box.north west) {Solutions- Part III}; 1078 | \end{tikzpicture} 1079 | 1080 | 1081 | Streaming data: use Pub/Sub and Dataflow in combination 1082 | 1083 | 1084 | 1085 | 1086 | % % ------------------------------------------------------------------------ 1087 | % % ------------ TEMPLATE ----------------- 1088 | % \begin{tikzpicture} 1089 | % \node [mybox] (box){% 1090 | % \begin{minipage}{0.3\textwidth} 1091 | % \setlist{nolistsep} 1092 | 1093 | % {\color{blue} \textbf{TEMPLATE}}\\ 1094 | 1095 | % \begin{itemize} 1096 | % \item 1097 | % \end{itemize} 1098 | % \end{minipage} 1099 | % }; 1100 | % \node[fancytitle, right=10pt] at (box.north west) {TEMPLATE}; 1101 | % \end{tikzpicture} 1102 | % % ------------------------------------------------------------------------ 1103 | 1104 | \end{multicols*} 1105 | \end{document} --------------------------------------------------------------------------------