├── sample_driver.py ├── README.md └── zeppelin_notebook.json /sample_driver.py: -------------------------------------------------------------------------------- 1 | ''' 2 | A self-contained PySpark script. 3 | 4 | Run me with 5 | 6 | $SPARK_HOME/bin/spark-submit --master local sample_driver.py 7 | ''' 8 | from pprint import pprint 9 | 10 | from pyspark.sql import SparkSession 11 | from pyspark.sql.types import StructType, StructField, StringType, IntegerType 12 | 13 | 14 | def main(spark): 15 | CLICKSTREAM_FILE = '/path/to/clickstream_data/2015_01_en_clickstream.tsv.gz' 16 | clickstream = (spark.sparkContext.textFile(CLICKSTREAM_FILE) 17 | .map(parse_line)) 18 | clickstream_schema = StructType([ 19 | StructField('prev_id', StringType()), 20 | StructField('curr_id', StringType()), 21 | StructField('views', IntegerType()), 22 | StructField('prev_title', StringType()), 23 | StructField('curr_title', StringType()), 24 | ]) 25 | 26 | clickstream_df = spark.createDataFrame(clickstream, clickstream_schema) 27 | clickstream_df.createOrReplaceTempView('clickstream') 28 | pprint(spark.sql('SELECT * FROM clickstream LIMIT 10').collect()) 29 | 30 | def parse_line(line): 31 | '''Parse a line in the log file, but ensure that the 'n' or 'views' column is 32 | converted to an integer.''' 33 | parts = line.split('\t') 34 | parts[2] = int(parts[2]) 35 | return parts 36 | 37 | if __name__ == '__main__': 38 | spark = (SparkSession.builder 39 | .appName('sample_driver') 40 | .getOrCreate()) 41 | with spark: 42 | main(spark) 43 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # From gigabytes to petabytes and beyond with PySpark 2 | 3 | > Imagine writing a Python program that could just as easily process a few 4 | gigabytes of data locally or hundreds of petabytes in a distributed cluster 5 | without changing a single line of code? Too good to be true? It isn't, it's 6 | PySpark! In this tutorial we'll learn how to write PySpark programs that perform basic 7 | analysis and fancy machine learning and can run on your computer or thousands 8 | of servers. 9 | 10 | Mike Sukmanowsky ([@msukmanowsky](https://twitter.com/msukmanowsky)) 11 | 12 | Sunday November 13th, 11:50am - 1pm 13 | 14 | # Slides 15 | 16 | https://docs.google.com/presentation/d/1Dagrb3Xi7myOU14CjfBfG-q5DmxbQWdLyobfMmseFt8 17 | 18 | # Getting Set Up 19 | 20 | We only have an hour for the tutorial, so to ensure that we don't waste any 21 | time, please go through all the steps below before coming to the session. 22 | 23 | If you have trouble installing any of the required software, reach out to Mike 24 | at mike.sukmanowsky@gmail.com or on [twitter](https://twitter.com/msukmanowsky). 25 | 26 | - Python (duh), 2 or 3 is fine 27 | - Java >= 1.7 28 | - Git (to clone this repository and get access to examples) 29 | - [Apache Spark 2.0.1 (pre-built for Hadoop 2.7)](http://d3kbcqa49mib13.cloudfront.net/spark-2.0.1-bin-hadoop2.7.tgz) 30 | - [Apache Zeppelin 0.6.2 (binary release)](http://www-us.apache.org/dist/zeppelin/zeppelin-0.6.2/zeppelin-0.6.2-bin-all.tgz) 31 | 32 | ## Spark 33 | 34 | After unpacking Spark to a directory of your choice, try running the following 35 | command: 36 | 37 | **Mac/Linux** 38 | ``` 39 | > cd 40 | > bin/pyspark 41 | ``` 42 | **Windows** 43 | ``` 44 | $ cd 45 | $ bin\pyspark.cmd 46 | ``` 47 | 48 | You should see some output and then be dropped into a Python interpreter.
49 | 50 | ``` 51 | Python 2.7.11 (default, Jul 19 2016, 10:14:23) 52 | [GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin 53 | Type "help", "copyright", "credits" or "license" for more information. 54 | Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 55 | Setting default log level to "WARN". 56 | To adjust logging level use sc.setLogLevel(newLevel). 57 | 16/11/05 21:03:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 58 | 16/11/05 21:04:00 WARN Utils: Your hostname, MacBook-Pro.local resolves to a loopback address: 127.0.0.1; using 192.168.0.17 instead (on interface en0) 59 | 16/11/05 21:04:00 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address 60 | Welcome to 61 | ____ __ 62 | / __/__ ___ _____/ /__ 63 | _\ \/ _ \/ _ `/ __/ '_/ 64 | /__ / .__/\_,_/_/ /_/\_\ version 2.0.1 65 | /_/ 66 | 67 | Using Python version 2.7.11 (default, Jul 19 2016 10:14:23) 68 | SparkSession available as 'spark'. 69 | >>> 70 | ``` 71 | 72 | Hit CTRL/CMD-D to exit the Python interpreter. 73 | 74 | ## Zeppelin 75 | 76 | Unzip the Zeppelin tarball and create a file `conf/zeppelin-env.sh` with a 77 | single line: 78 | 79 | ``` 80 | export SPARK_HOME="/path/to/spark/spark-2.0.1" 81 | ``` 82 | 83 | Next, confirm that Zeppelin works. 84 | 85 | **Mac/Linux** 86 | ``` 87 | > cd 88 | > bin/zeppelin-daemon.sh start 89 | ``` 90 | 91 | (Use `bin/zeppelin-daemon.sh stop` to stop when done.) 92 | 93 | **Windows** 94 | ``` 95 | $ cd 96 | $ bin\zeppelin.cmd 97 | ``` 98 | 99 | You'll see output similar to: 100 | ``` 101 | Log dir doesn't exist, create /Users/mikesukmanowsky/.opt/zeppelin-0.6.2-bin-all/logs 102 | Pid dir doesn't exist, create /Users/mikesukmanowsky/.opt/zeppelin-0.6.2-bin-all/run 103 | Zeppelin start [ OK ] 104 | ``` 105 | 106 | Try navigating to http://localhost:8080/ where you should see this: 107 | 108 | ![Zeppelin Screen](https://www.evernote.com/l/AAFlPH2qBzNLobKHEuvNgDt4hNLM7ZQb0ZIB/image.png) 109 | 110 | ## Download the dataset 111 | 112 | We'll be analyzing the [February 2015 English Wikipedia Clickstream](https://datahub.io/dataset/wikipedia-clickstream/resource/be85cc68-d1e6-4134-804a-fd36b94dbb82) 113 | dataset. The entire data set is 5.7GB so please download it before the tutorial 114 | to save on precious WiFi bandwidth. 115 | 116 | You can download the dataset from this page https://figshare.com/articles/Wikipedia_Clickstream/1305770. 117 | Click on the red "Download all (5.74GB)" link. 118 | 119 | ## Clone this repo 120 | 121 | ``` 122 | git clone https://github.com/msukmanowsky/pyconca-2016-spark-tutorial.git 123 | ``` 124 | 125 | If you've already cloned, make sure you `git pull` for the latest. 126 | 127 | ## Import and open the Zeppelin Notebook 128 | 129 | With Zeppelin running, head to http://localhost:8080/ and click on the 130 | "Import note" link. 131 | 132 | Click "Choose a JSON here", and select the 133 | [zeppelin_notebook.json](zeppelin_notebook.json) file in this repo. 134 | 135 | You should now see a "PyCon Canada 2016 - PySpark Tutorial" notebook on your 136 | Zeppelin home screen. 
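## Optional: sanity-check your setup

If you'd like to confirm that Spark and the dataset play nicely before the session, you can paste a couple of lines into the `bin/pyspark` shell you started earlier. This is only a rough sketch: the file path below is a placeholder and should point at wherever you saved the downloaded clickstream file.

```
# Inside bin/pyspark the SparkContext is already available as `sc`.
# Replace the path with the actual location of your downloaded file.
clickstream = sc.textFile('/path/to/2015_01_en_clickstream.tsv.gz')
print(clickstream.first())             # should print one tab-separated line
print(clickstream.getNumPartitions())  # gzip files load as a single partition
```

If the first line prints without errors, you're ready for the tutorial. The included [sample_driver.py](sample_driver.py) does the same kind of check as a standalone `spark-submit` job if you prefer to test things that way.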
137 | -------------------------------------------------------------------------------- /zeppelin_notebook.json: -------------------------------------------------------------------------------- 1 | { 2 | "paragraphs": [ 3 | { 4 | "text": "%md\n![PHP CEO](https://www.evernote.com/l/AAHWzj78DlFKfYJJ0YPrxoYvb2zSe1w9h7kB/image.jpg)\n\nOur boss, [PHP CEO](https://twitter.com/php_ceo) has just found out about a website called Wikipedia and wants us to help explain it to the board of directors. To quote our illustrious CEO, \"I found out that those Wikimedia idiots publish their traffic data _publically_, can you believe that? I\u0027m sure there\u0027s a way to make $50M doing data science on it so figure out a way to Hadoop it and make it happen! The company is counting on you!\"\n\nNever wanting to disappoint our CEO (or miss a paycheque), you decide to oblige.\n\nTurns out, Wikimedia does in fact publish clickstream data for users of Wikipedia.org and you find it available for download here https://datahub.io/dataset/wikipedia-clickstream/resource/be85cc68-d1e6-4134-804a-fd36b94dbb82. The entire data set is 5.7GB which is starting to feel like \"big data\" so you decide to analyze it using that thing you heard about on Hacker News - Apache Spark.\n\n[Quoting Wikipedia directly](https://meta.wikimedia.org/wiki/Research:Wikipedia_clickstream), we tell the boss that the dataset contains:\n\n\u003e counts of (referer, resource) pairs extracted from the request logs of Wikipedia. A referer is an HTTP header field that identifies the address of the webpage that linked to the resource being requested. The data shows how people get to a Wikipedia article and what links they click on. In other words, it gives a weighted network of articles, where each edge weight corresponds to how often people navigate from one page to another. To give an example, consider the figure below, which shows incoming and outgoing traffic to the \"London\" article on English Wikipedia during January 2015.\n\nHis eyes gloss over after the first sentence, but he fires back with a few questions for us:\n\n1. Is wikipedia.org even big? How many hits does it get?\n2. What do people even read on that site?\n\nAnd with that, we\u0027re off. Let\u0027s load in the data into a Spark [Resilient Distributed Dataset (RDD)](http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) figure out what we\u0027re dealing with and try go get this monkey off our back.\n", 5 | "dateUpdated": "Nov 12, 2016 9:31:49 PM", 6 | "config": { 7 | "colWidth": 12.0, 8 | "graph": { 9 | "mode": "table", 10 | "height": 300.0, 11 | "optionOpen": false, 12 | "keys": [], 13 | "values": [], 14 | "groups": [], 15 | "scatter": {} 16 | }, 17 | "enabled": true, 18 | "editorMode": "ace/mode/markdown", 19 | "editorHide": true 20 | }, 21 | "settings": { 22 | "params": {}, 23 | "forms": {} 24 | }, 25 | "jobName": "paragraph_1478288491564_-439546265", 26 | "id": "20161104-154131_1723437402", 27 | "result": { 28 | "code": "SUCCESS", 29 | "type": "HTML", 30 | "msg": "\u003cp\u003e\u003cimg src\u003d\"https://www.evernote.com/l/AAHWzj78DlFKfYJJ0YPrxoYvb2zSe1w9h7kB/image.jpg\" alt\u003d\"PHP CEO\" /\u003e\u003c/p\u003e\n\u003cp\u003eOur boss, \u003ca href\u003d\"https://twitter.com/php_ceo\"\u003ePHP CEO\u003c/a\u003e has just found out about a website called Wikipedia and wants us to help explain it to the board of directors. 
To quote our illustrious CEO, \u0026ldquo;I found out that those Wikimedia idiots publish their traffic data \u003cem\u003epublically\u003c/em\u003e, can you believe that? I\u0027m sure there\u0027s a way to make $50M doing data science on it so figure out a way to Hadoop it and make it happen! The company is counting on you!\u0026rdquo;\u003c/p\u003e\n\u003cp\u003eNever wanting to disappoint our CEO (or miss a paycheque), you decide to oblige.\u003c/p\u003e\n\u003cp\u003eTurns out, Wikimedia does in fact publish clickstream data for users of Wikipedia.org and you find it available for download here https://datahub.io/dataset/wikipedia-clickstream/resource/be85cc68-d1e6-4134-804a-fd36b94dbb82. The entire data set is 5.7GB which is starting to feel like \u0026ldquo;big data\u0026rdquo; so you decide to analyze it using that thing you heard about on Hacker News - Apache Spark.\u003c/p\u003e\n\u003cp\u003e\u003ca href\u003d\"https://meta.wikimedia.org/wiki/Research:Wikipedia_clickstream\"\u003eQuoting Wikipedia directly\u003c/a\u003e, we tell the boss that the dataset contains:\u003c/p\u003e\n\u003cblockquote\u003e\u003cp\u003ecounts of (referer, resource) pairs extracted from the request logs of Wikipedia. A referer is an HTTP header field that identifies the address of the webpage that linked to the resource being requested. The data shows how people get to a Wikipedia article and what links they click on. In other words, it gives a weighted network of articles, where each edge weight corresponds to how often people navigate from one page to another. To give an example, consider the figure below, which shows incoming and outgoing traffic to the \u0026ldquo;London\u0026rdquo; article on English Wikipedia during January 2015.\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eHis eyes gloss over after the first sentence, but he fires back with a few questions for us:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eIs wikipedia.org even big? How many hits does it get?\u003c/li\u003e\n\u003cli\u003eWhat do people even read on that site?\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eAnd with that, we\u0027re off. 
Let\u0027s load in the data into a Spark \u003ca href\u003d\"http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds\"\u003eResilient Distributed Dataset (RDD)\u003c/a\u003e figure out what we\u0027re dealing with and try go get this monkey off our back.\u003c/p\u003e\n" 31 | }, 32 | "dateCreated": "Nov 4, 2016 3:41:31 AM", 33 | "dateStarted": "Nov 12, 2016 9:31:49 PM", 34 | "dateFinished": "Nov 12, 2016 9:31:49 PM", 35 | "status": "FINISHED", 36 | "progressUpdateIntervalMs": 500 37 | }, 38 | { 39 | "text": "%pyspark\nfrom __future__ import print_function\nfrom pprint import pprint\n\nCLICKSTREAM_FILE \u003d \u0027/Users/mikesukmanowsky/Downloads/1305770/2015_01_en_clickstream.tsv.gz\u0027\nclickstream \u003d sc.textFile(CLICKSTREAM_FILE) # sc \u003d SparkContext which is available for us automatically since we\u0027re using a \"%pyspark\" paragraph in Zeppelin \nclickstream", 40 | "dateUpdated": "Nov 12, 2016 9:01:31 PM", 41 | "config": { 42 | "colWidth": 12.0, 43 | "graph": { 44 | "mode": "table", 45 | "height": 300.0, 46 | "optionOpen": false, 47 | "keys": [], 48 | "values": [], 49 | "groups": [], 50 | "scatter": {} 51 | }, 52 | "enabled": true, 53 | "editorMode": "ace/mode/scala" 54 | }, 55 | "settings": { 56 | "params": {}, 57 | "forms": {} 58 | }, 59 | "jobName": "paragraph_1478288525992_873647704", 60 | "id": "20161104-154205_1326198179", 61 | "dateCreated": "Nov 4, 2016 3:42:05 AM", 62 | "dateStarted": "Nov 10, 2016 10:18:11 PM", 63 | "dateFinished": "Nov 10, 2016 10:18:39 PM", 64 | "status": "FINISHED", 65 | "errorMessage": "", 66 | "progressUpdateIntervalMs": 500 67 | }, 68 | { 69 | "text": "%md\n\nSo what the heck is this `clickstream` thing that we just loaded anyway? Spark says `MapPartitionsRDD[40]`. Let\u0027s ignore the `MapPartitions` part and focus on what\u0027s important: **RDD**.\n\n**Resilient Distributed Dataset or RDD** is Spark\u0027s core unit of abstraction. If you _really_ understand RDDs, then you\u0027ll understand a lot of what Spark is about. An RDD is just a collection of things which is partitioned so that Spark can perform computation on it in parallel. This is a core concept of \"big data\" processing. Take big data, and split into smaller parts so they can be operated on independently and in parallel.\n\nIn this case, we created an RDD out of a gzip file which begs the question, if an RDD is a collection of things, what is in our RDD?", 70 | "dateUpdated": "Nov 12, 2016 9:31:46 PM", 71 | "config": { 72 | "colWidth": 12.0, 73 | "graph": { 74 | "mode": "table", 75 | "height": 300.0, 76 | "optionOpen": false, 77 | "keys": [], 78 | "values": [], 79 | "groups": [], 80 | "scatter": {} 81 | }, 82 | "enabled": true, 83 | "editorHide": true 84 | }, 85 | "settings": { 86 | "params": {}, 87 | "forms": {} 88 | }, 89 | "jobName": "paragraph_1478567658068_-1920890636", 90 | "id": "20161107-201418_1676567479", 91 | "result": { 92 | "code": "SUCCESS", 93 | "type": "HTML", 94 | "msg": "\u003cp\u003eSo what the heck is this \u003ccode\u003eclickstream\u003c/code\u003e thing that we just loaded anyway? Spark says \u003ccode\u003eMapPartitionsRDD[40]\u003c/code\u003e. Let\u0027s ignore the \u003ccode\u003eMapPartitions\u003c/code\u003e part and focus on what\u0027s important: \u003cstrong\u003eRDD\u003c/strong\u003e.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eResilient Distributed Dataset or RDD\u003c/strong\u003e is Spark\u0027s core unit of abstraction. 
If you \u003cem\u003ereally\u003c/em\u003e understand RDDs, then you\u0027ll understand a lot of what Spark is about. An RDD is just a collection of things which is partitioned so that Spark can perform computation on it in parallel. This is a core concept of \u0026ldquo;big data\u0026rdquo; processing. Take big data, and split into smaller parts so they can be operated on independently and in parallel.\u003c/p\u003e\n\u003cp\u003eIn this case, we created an RDD out of a gzip file which begs the question, if an RDD is a collection of things, what is in our RDD?\u003c/p\u003e\n" 95 | }, 96 | "dateCreated": "Nov 7, 2016 8:14:18 AM", 97 | "dateStarted": "Nov 12, 2016 9:31:46 PM", 98 | "dateFinished": "Nov 12, 2016 9:31:46 PM", 99 | "status": "FINISHED", 100 | "progressUpdateIntervalMs": 500 101 | }, 102 | { 103 | "text": "%pyspark\n\nclickstream.first()", 104 | "dateUpdated": "Nov 10, 2016 9:32:52 AM", 105 | "config": { 106 | "colWidth": 12.0, 107 | "graph": { 108 | "mode": "table", 109 | "height": 300.0, 110 | "optionOpen": false, 111 | "keys": [], 112 | "values": [], 113 | "groups": [], 114 | "scatter": {} 115 | }, 116 | "enabled": true, 117 | "editorMode": "ace/mode/python" 118 | }, 119 | "settings": { 120 | "params": {}, 121 | "forms": {} 122 | }, 123 | "jobName": "paragraph_1478567933184_-1355711471", 124 | "id": "20161107-201853_1840213788", 125 | "dateCreated": "Nov 7, 2016 8:18:53 AM", 126 | "dateStarted": "Nov 10, 2016 9:32:54 AM", 127 | "dateFinished": "Nov 10, 2016 9:33:20 AM", 128 | "status": "FINISHED", 129 | "errorMessage": "", 130 | "progressUpdateIntervalMs": 500 131 | }, 132 | { 133 | "text": "%md\n\nTurns out RDDs have a handy `first()` method which returns the first element in the collection which turns out to be the first line in the gzip file we loaded up. So `textFile` returns an RDD which is a collection of the lines in the file.\n\nLet\u0027s see how many lines are in that file by using the `count()` method on the RDD.", 134 | "dateUpdated": "Nov 12, 2016 9:31:53 PM", 135 | "config": { 136 | "colWidth": 12.0, 137 | "graph": { 138 | "mode": "table", 139 | "height": 300.0, 140 | "optionOpen": false, 141 | "keys": [], 142 | "values": [], 143 | "groups": [], 144 | "scatter": {} 145 | }, 146 | "enabled": true, 147 | "editorMode": "ace/mode/markdown", 148 | "editorHide": true 149 | }, 150 | "settings": { 151 | "params": {}, 152 | "forms": {} 153 | }, 154 | "jobName": "paragraph_1478567943934_294491886", 155 | "id": "20161107-201903_1247149665", 156 | "result": { 157 | "code": "SUCCESS", 158 | "type": "HTML", 159 | "msg": "\u003cp\u003eTurns out RDDs have a handy \u003ccode\u003efirst()\u003c/code\u003e method which returns the first element in the collection which turns out to be the first line in the gzip file we loaded up. 
So \u003ccode\u003etextFile\u003c/code\u003e returns an RDD which is a collection of the lines in the file.\u003c/p\u003e\n\u003cp\u003eLet\u0027s see how many lines are in that file by using the \u003ccode\u003ecount()\u003c/code\u003e method on the RDD.\u003c/p\u003e\n" 160 | }, 161 | "dateCreated": "Nov 7, 2016 8:19:03 AM", 162 | "dateStarted": "Nov 12, 2016 9:31:53 PM", 163 | "dateFinished": "Nov 12, 2016 9:31:53 PM", 164 | "status": "FINISHED", 165 | "progressUpdateIntervalMs": 500 166 | }, 167 | { 168 | "text": "%pyspark\n\u0027{:,}\u0027.format(clickstream.count())", 169 | "dateUpdated": "Nov 10, 2016 9:32:52 AM", 170 | "config": { 171 | "colWidth": 12.0, 172 | "graph": { 173 | "mode": "table", 174 | "height": 300.0, 175 | "optionOpen": false, 176 | "keys": [], 177 | "values": [], 178 | "groups": [], 179 | "scatter": {} 180 | }, 181 | "enabled": true, 182 | "editorMode": "ace/mode/python" 183 | }, 184 | "settings": { 185 | "params": {}, 186 | "forms": {} 187 | }, 188 | "jobName": "paragraph_1478568174925_-2062309665", 189 | "id": "20161107-202254_709908107", 190 | "dateCreated": "Nov 7, 2016 8:22:54 AM", 191 | "dateStarted": "Nov 10, 2016 9:33:20 AM", 192 | "dateFinished": "Nov 10, 2016 9:34:17 AM", 193 | "status": "FINISHED", 194 | "errorMessage": "", 195 | "progressUpdateIntervalMs": 500 196 | }, 197 | { 198 | "text": "%md\nAlmost 22 million lines. Not exactly big data, more like a little large but that\u0027s ok since we\u0027re just working locally.\n\nOne problem we have immediately is that the `clickstream` RDD isn\u0027t very useful since the data isn\u0027t parsed at all.\n\nThe clickstream data source in question actually has 5 fields delimited by a tab:\n\n* `prev_id`: if the referer doesn\u0027t correspond to an article in English Wikipedia, this is empty. Otherwise, it\u0027ll contain the unique MediaWiki page ID of the article corresponding to the referer.\n* `curr_id` the MediaWiki page ID of the article requested, this is always present\n* `n` the number of occurrences (page views) of the `(referer, resource)` pair\n* `prev_title`: if referer was a Wikipedia article, this is the title of that article. Otherwise, this gets renamed to something like `other-\u003ewikipedia` (outside the English Wikipedia namespace), `other-empty` (empty referrer), `other-internal` (any other Wikimedia project), `other-google` (any Google site), `other-yahoo` (any Yahoo! site), `other-bing` (any Bing site), `other-facebook`, `other-twitter` and `other-other` (any other site).\n* `curr_title` the title of the Wikipedia article (more helpful than looking at the ID).\n\n\nOk, so we need to do a pretty simple Python string split on a tab delimiter. How do we do that? We can use one of Spark\u0027s **transformation** functions, `map()`, which applies a function to all elements in a collection and returns a new collection.", 199 | "dateUpdated": "Nov 12, 2016 9:31:55 PM", 200 | "config": { 201 | "colWidth": 12.0, 202 | "graph": { 203 | "mode": "table", 204 | "height": 300.0, 205 | "optionOpen": false, 206 | "keys": [], 207 | "values": [], 208 | "groups": [], 209 | "scatter": {} 210 | }, 211 | "enabled": true, 212 | "editorMode": "ace/mode/markdown", 213 | "editorHide": true 214 | }, 215 | "settings": { 216 | "params": {}, 217 | "forms": {} 218 | }, 219 | "jobName": "paragraph_1478289568712_1470082354", 220 | "id": "20161104-155928_1642833557", 221 | "result": { 222 | "code": "SUCCESS", 223 | "type": "HTML", 224 | "msg": "\u003cp\u003eAlmost 22 million lines.
Not exactly big data, more like a little large but that\u0027s ok since we\u0027re just working locally.\u003c/p\u003e\n\u003cp\u003eOne problem we have immediately is that the \u003ccode\u003eclickstream\u003c/code\u003e RDD isn\u0027t very useful since the data isn\u0027t parsed at all.\u003c/p\u003e\n\u003cp\u003eThe clickstream data source in question actually has 5 fields delimited by a tab:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ccode\u003eprev_id\u003c/code\u003e: if the referer doesn\u0027t correspond to an article in English Wikipedia, this is empty. Otherwise, it\u0027ll contain the unique MediaWiki page ID of the article correspodning to the referer.\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003ecurr_id\u003c/code\u003e the MediaWiki page ID of the article requested, this is always present\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003en\u003c/code\u003e the number of occurances (page views) of the \u003ccode\u003e(referer, resource)\u003c/code\u003e pair\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003eprev_title\u003c/code\u003e: if referer was a Wikipedia article, this is the title of that article. Otherwise, this gets renamed to something like \u003ccode\u003eother-\u0026gt;wikipedia\u003c/code\u003e (outside the English Wikipedia namespace), \u003ccode\u003eother-empty\u003c/code\u003e (empty referrer), \u003ccode\u003eother-internal\u003c/code\u003e (any other Wikimedia project, \u003ccode\u003eother-google\u003c/code\u003e (any Google site), \u003ccode\u003eother-yahoo\u003c/code\u003e (any Yahoo! site), \u003ccode\u003eother-bing\u003c/code\u003e (any Bing site), \u003ccode\u003eother-facebook\u003c/code\u003e, \u003ccode\u003eother-twitter\u003c/code\u003e and \u003ccode\u003eother-other\u003c/code\u003e (any other site).\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003ecurr_title\u003c/code\u003e the title of the Wikipedia article (more helpful than looking at the ID.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eOk, so we need to do a pretty simple Python string split on a tab delimiter. How do we do that? We can use one of Spark\u0027s \u003cstrong\u003etransformation\u003c/strong\u003e functions, \u003ccode\u003emap()\u003c/code\u003e which applies a function to all elements in a collection and returns a new collection.\u003c/p\u003e\n" 225 | }, 226 | "dateCreated": "Nov 4, 2016 3:59:28 AM", 227 | "dateStarted": "Nov 12, 2016 9:31:55 PM", 228 | "dateFinished": "Nov 12, 2016 9:31:55 PM", 229 | "status": "FINISHED", 230 | "progressUpdateIntervalMs": 500 231 | }, 232 | { 233 | "text": "%pyspark\nparsed \u003d clickstream.map(lambda line: line.split(\u0027\\t\u0027))\nparsed", 234 | "dateUpdated": "Nov 10, 2016 9:32:53 AM", 235 | "config": { 236 | "colWidth": 12.0, 237 | "graph": { 238 | "mode": "table", 239 | "height": 300.0, 240 | "optionOpen": false, 241 | "keys": [], 242 | "values": [], 243 | "groups": [], 244 | "scatter": {} 245 | }, 246 | "enabled": true, 247 | "editorMode": "ace/mode/python" 248 | }, 249 | "settings": { 250 | "params": {}, 251 | "forms": {} 252 | }, 253 | "jobName": "paragraph_1478396818605_712890923", 254 | "id": "20161105-214658_1458525638", 255 | "dateCreated": "Nov 5, 2016 9:46:58 AM", 256 | "dateStarted": "Nov 10, 2016 9:33:20 AM", 257 | "dateFinished": "Nov 10, 2016 9:34:17 AM", 258 | "status": "FINISHED", 259 | "errorMessage": "", 260 | "progressUpdateIntervalMs": 500 261 | }, 262 | { 263 | "text": "%md\nUh wait, why didn\u0027t that give us some output that we could see? 
That\u0027s because all transformations in Spark are lazy.\n\nWhen we called `map()` on our `clickstream` RDD, Spark returned a new RDD. Under the hood, RDDs are basically just directed acyclic graphs (DAGs) that outline the transformations required to get to a final result. This concept is powerful and I want to drive it home a bit. It\u0027s important to know that **all transformations return new RDDs**. Or, put another way, **RDDs are immutable**. You never have to worry about modifying an RDD \"in-place\", because any time you modify an RDD, you\u0027ll get a new RDD.\n\nLet\u0027s say that we needed a version of the RDD with strings uppercased and in their original form.\n\n```\nclickstream_parsed \u003d clickstream.map(lambda line: line.split(\u0027\\t\u0027))\nclickstream_upper \u003d clickstream_parsed.map(lambda parts: [x.upper() for x in parts])\n```\n\nWe now have a parsed uppercased RDD and the parsed RDD in the original case. Since, under the hood, RDDs are just DAGs, assigning these two variables is not expensive at all. It\u0027s just storing instructions on how to produce data:\n\n* `clickstream_parsed`: `Load textFile() -\u003e apply line.split()`\n* `clickstream_upper`: `Load textFile() -\u003e apply line.split() -\u003e apply [x.upper() for x in parts]`\n\nSpark will also try to be clever when two children in a DAG have a common ancestor and will not compute the same transformations twice.\n\nAny other transformations we do later just add more steps to the DAG. If parts of your program refer to `clickstream_upper` later on, you don\u0027t have to worry about what another part did to this RDD; `clickstream_upper` is immutable and will always refer to the original transformation.\n\nSo how do we get to see the result of our `map()`? We can use `first()` again, but for fun, let\u0027s use another RDD method, `take()`, which takes the first _n_ elements from an RDD.", 264 | "dateUpdated": "Nov 12, 2016 9:31:58 PM", 265 | "config": { 266 | "colWidth": 12.0, 267 | "graph": { 268 | "mode": "table", 269 | "height": 300.0, 270 | "optionOpen": false, 271 | "keys": [], 272 | "values": [], 273 | "groups": [], 274 | "scatter": {} 275 | }, 276 | "enabled": true, 277 | "editorMode": "ace/mode/markdown", 278 | "editorHide": true 279 | }, 280 | "settings": { 281 | "params": {}, 282 | "forms": {} 283 | }, 284 | "jobName": "paragraph_1478398272610_-964710637", 285 | "id": "20161105-221112_1697982578", 286 | "result": { 287 | "code": "SUCCESS", 288 | "type": "HTML", 289 | "msg": "\u003cp\u003eUh wait, why didn\u0027t that give us some output that we could see? That\u0027s because all transformations in Spark are lazy.\u003c/p\u003e\n\u003cp\u003eWhen we called \u003ccode\u003emap()\u003c/code\u003e on our \u003ccode\u003eclickstream\u003c/code\u003e RDD, Spark returned a new RDD. Under the hood, RDDs are basically just directed acyclic graphs (DAGs) that outline the transformations required to get to a final result. This concept is powerful and I want to drive it home a bit. It\u0027s important to know that \u003cstrong\u003eall transformations return new RDDs\u003c/strong\u003e. Or, put another way, \u003cstrong\u003eRDDs are immutable\u003c/strong\u003e.
You never have to be worried about modifying an RDD \u0026ldquo;in-place\u0026rdquo;, because any time you modify an RDD, you\u0027ll get a new RDD.\u003c/p\u003e\n\u003cp\u003eLet\u0027s say that we needed a version of the RDD with strings uppercased and in their original form.\u003c/p\u003e\n\u003cpre\u003e\u003ccode\u003eclickstream_parsed \u003d clickstream.map(lambda line: line.split(\u0027\\t\u0027)\nclickstream_upper \u003d clickstream_parsed.map(lambda parts: [x.upper() for x in parts])\n\u003c/code\u003e\u003c/pre\u003e\n\u003cp\u003eWe now have a parsed uppercased RDD and the parsed RDD in the original case. Since under the hood, RDDs are just DAGs assigning these two variables is not expensive at all. It\u0027s just storing instructions on how to produce data:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ccode\u003eclickstream_parsed\u003c/code\u003e: \u003ccode\u003eLoad textFile() -\u0026gt; apply line.split()\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003eclickstream_upper\u003c/code\u003e: \u003ccode\u003eLoad textFile() -\u0026gt; apply line.split() -\u0026gt; apply [x.upper() for x in parts]\u003c/code\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eSpark will also try to be clever when a part two children in a DAG have a common ancestor and will not compute the same transformations twice.\u003c/p\u003e\n\u003cp\u003eAnything other transformations we do later just adds more steps to the DAG. If parts of your program refer to \u003ccode\u003eclickstream_upper\u003c/code\u003e, later on, you don\u0027t have to be worried about what another part did to this RDD, \u003ccode\u003eclickstream_upper\u003c/code\u003e is immutable and will always refer to the original transformation.\u003c/p\u003e\n\u003cp\u003eSo how do we get to see the result of our \u003ccode\u003emap()\u003c/code\u003e? We can use \u003ccode\u003efirst()\u003c/code\u003e again, but for fun, let\u0027s use another RDD method \u003ccode\u003etake()\u003c/code\u003e which takes the first \u003cem\u003en\u003c/em\u003e elements from an RDD.\u003c/p\u003e\n" 290 | }, 291 | "dateCreated": "Nov 5, 2016 10:11:12 AM", 292 | "dateStarted": "Nov 12, 2016 9:31:58 PM", 293 | "dateFinished": "Nov 12, 2016 9:31:58 PM", 294 | "status": "FINISHED", 295 | "progressUpdateIntervalMs": 500 296 | }, 297 | { 298 | "text": "%pyspark\npprint(parsed.take(5))", 299 | "dateUpdated": "Nov 10, 2016 9:32:53 AM", 300 | "config": { 301 | "colWidth": 12.0, 302 | "graph": { 303 | "mode": "table", 304 | "height": 300.0, 305 | "optionOpen": false, 306 | "keys": [], 307 | "values": [], 308 | "groups": [], 309 | "scatter": {} 310 | }, 311 | "enabled": true, 312 | "editorMode": "ace/mode/python" 313 | }, 314 | "settings": { 315 | "params": {}, 316 | "forms": {} 317 | }, 318 | "jobName": "paragraph_1478569438111_-1629077470", 319 | "id": "20161107-204358_2060716368", 320 | "dateCreated": "Nov 7, 2016 8:43:58 AM", 321 | "dateStarted": "Nov 10, 2016 9:34:17 AM", 322 | "dateFinished": "Nov 10, 2016 9:34:17 AM", 323 | "status": "FINISHED", 324 | "errorMessage": "", 325 | "progressUpdateIntervalMs": 500 326 | }, 327 | { 328 | "text": "%md\nThat\u0027s better!\n\nBoth `first()` and `take()` are examples of a second class of functions on RDDs: **actions**. **Actions** trigger Spark to actually execute the steps in the DAG you\u0027ve been building up and perform some action. 
In this case, `first()` and `take()` load a sample of the data, evaluate the `split()` function, and return results to the driver script, this notebook, for us to print out.\n\nRDDs have quite a few of [**transformations**](http://spark.apache.org/docs/latest/programming-guide.html#transformations) and [**actions**](http://spark.apache.org/docs/latest/programming-guide.html#actions) but here are a few of the more popular ones you\u0027ll use:\n\n**Transformations**:\n\n* `map(func)`: return a new data set with `func()` applied to all elements\n* `filter(func)`: return a new dataset formed by selecting those elements of the source on which `func()` returns `true`\n* `sortByKey([ascending], [numTasks])`: sort an RDD by key instead of value (we\u0027ll get to the key vs. value distinction in a second)\n* `reduceByKey(func, [numTasks])`: similar to the Python `reduce()` function\n* `aggregateByKey(zeroValue)(seqOp, combOp, [numTasks])`: also similar to Python\u0027s `reduce()`, but with customization options (we\u0027ll come back to this)\n* `repartition()`: reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them\n\n**Actions**:\n\n* `first()`, `take()`\n* `collect()`: return all the elements of the dataset as an array at the driver program.\n* `count()`: count the number of elements in the RDD\n* `saveAsTextFile(path)`: write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system\n\nBefore we keep going, something is bugging me. That `count()` that we did seemed to take a _really_ long time, wouldn\u0027t pure Python be faster at that?", 329 | "dateUpdated": "Nov 12, 2016 9:32:02 PM", 330 | "config": { 331 | "colWidth": 12.0, 332 | "graph": { 333 | "mode": "table", 334 | "height": 300.0, 335 | "optionOpen": false, 336 | "keys": [], 337 | "values": [], 338 | "groups": [], 339 | "scatter": {} 340 | }, 341 | "enabled": true, 342 | "editorMode": "ace/mode/markdown", 343 | "editorHide": true 344 | }, 345 | "settings": { 346 | "params": {}, 347 | "forms": {} 348 | }, 349 | "jobName": "paragraph_1478569458083_-1878764182", 350 | "id": "20161107-204418_803807675", 351 | "result": { 352 | "code": "SUCCESS", 353 | "type": "HTML", 354 | "msg": "\u003cp\u003eThat\u0027s better!\u003c/p\u003e\n\u003cp\u003eBoth \u003ccode\u003efirst()\u003c/code\u003e and \u003ccode\u003etake()\u003c/code\u003e are examples of a second class of functions on RDDs: \u003cstrong\u003eactions\u003c/strong\u003e. \u003cstrong\u003eActions\u003c/strong\u003e trigger Spark to actually execute the steps in the DAG you\u0027ve been building up and perform some action. 
In this case, \u003ccode\u003efirst()\u003c/code\u003e and \u003ccode\u003etake()\u003c/code\u003e load a sample of the data, evaluate the \u003ccode\u003esplit()\u003c/code\u003e function, and return results to the driver script, this notebook, for us to print out.\u003c/p\u003e\n\u003cp\u003eRDDs have quite a few of \u003ca href\u003d\"http://spark.apache.org/docs/latest/programming-guide.html#transformations\"\u003e\u003cstrong\u003etransformations\u003c/strong\u003e\u003c/a\u003e and \u003ca href\u003d\"http://spark.apache.org/docs/latest/programming-guide.html#actions\"\u003e\u003cstrong\u003eactions\u003c/strong\u003e\u003c/a\u003e but here are a few of the more popular ones you\u0027ll use:\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eTransformations\u003c/strong\u003e:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ccode\u003emap(func)\u003c/code\u003e: return a new data set with \u003ccode\u003efunc()\u003c/code\u003e applied to all elements\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003efilter(func)\u003c/code\u003e: return a new dataset formed by selecting those elements of the source on which \u003ccode\u003efunc()\u003c/code\u003e returns \u003ccode\u003etrue\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003esortByKey([ascending], [numTasks])\u003c/code\u003e: sort an RDD by key instead of value (we\u0027ll get to the key vs. value distinction in a second)\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003ereduceByKey(func, [numTasks])\u003c/code\u003e: similar to the Python \u003ccode\u003ereduce()\u003c/code\u003e function\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003eaggregateByKey(zeroValue)(seqOp, combOp, [numTasks])\u003c/code\u003e: also similar to Python\u0027s \u003ccode\u003ereduce()\u003c/code\u003e, but with customization options (we\u0027ll come back to this)\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003erepartition()\u003c/code\u003e: reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003cstrong\u003eActions\u003c/strong\u003e:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ccode\u003efirst()\u003c/code\u003e, \u003ccode\u003etake()\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003ecollect()\u003c/code\u003e: return all the elements of the dataset as an array at the driver program.\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003ecount()\u003c/code\u003e: count the number of elements in the RDD\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003esaveAsTextFile(path)\u003c/code\u003e: write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eBefore we keep going, something is bugging me. 
That \u003ccode\u003ecount()\u003c/code\u003e that we did seemed to take a \u003cem\u003ereally\u003c/em\u003e long time, wouldn\u0027t pure Python be faster at that?\u003c/p\u003e\n" 355 | }, 356 | "dateCreated": "Nov 7, 2016 8:44:18 AM", 357 | "dateStarted": "Nov 12, 2016 9:32:02 PM", 358 | "dateFinished": "Nov 12, 2016 9:32:02 PM", 359 | "status": "FINISHED", 360 | "progressUpdateIntervalMs": 500 361 | }, 362 | { 363 | "text": "%pyspark\nimport gzip\n\nwith gzip.open(CLICKSTREAM_FILE) as fp:\n for count, _ in enumerate(fp):\n pass\n count +\u003d 1\n\n\u0027{:,}\u0027.format(count)", 364 | "dateUpdated": "Nov 10, 2016 9:32:53 AM", 365 | "config": { 366 | "colWidth": 12.0, 367 | "graph": { 368 | "mode": "table", 369 | "height": 300.0, 370 | "optionOpen": false, 371 | "keys": [], 372 | "values": [], 373 | "groups": [], 374 | "scatter": {} 375 | }, 376 | "enabled": true 377 | }, 378 | "settings": { 379 | "params": {}, 380 | "forms": {} 381 | }, 382 | "jobName": "paragraph_1478570540787_-1443862773", 383 | "id": "20161107-210220_1849083685", 384 | "dateCreated": "Nov 7, 2016 9:02:20 AM", 385 | "dateStarted": "Nov 10, 2016 9:34:17 AM", 386 | "dateFinished": "Nov 10, 2016 9:35:01 AM", 387 | "status": "FINISHED", 388 | "errorMessage": "", 389 | "progressUpdateIntervalMs": 500 390 | }, 391 | { 392 | "text": "%md\n\n54s versus 42s...Alright guys, tutorial is over. Python clearly rocks and Spark sucks.\n\n...or does it?\n\nI mentioned before that Spark gets most of its magic from the fact that it can operate on chunks of data in parallel. This is something that we can do in Python (think `threading` or `multiprocessing`), but it\u0027s non-trivial. Let\u0027s ask Spark how many partitions or chunks it\u0027s currently using for our dataset.", 393 | "dateUpdated": "Nov 12, 2016 9:32:06 PM", 394 | "config": { 395 | "colWidth": 12.0, 396 | "graph": { 397 | "mode": "table", 398 | "height": 300.0, 399 | "optionOpen": false, 400 | "keys": [], 401 | "values": [], 402 | "groups": [], 403 | "scatter": {} 404 | }, 405 | "enabled": true, 406 | "editorMode": "ace/mode/markdown", 407 | "editorHide": true 408 | }, 409 | "settings": { 410 | "params": {}, 411 | "forms": {} 412 | }, 413 | "jobName": "paragraph_1478570722555_-1982451581", 414 | "id": "20161107-210522_35023188", 415 | "result": { 416 | "code": "SUCCESS", 417 | "type": "HTML", 418 | "msg": "\u003cp\u003e54s versus 42s\u0026hellip;Alright guys, tutorial is over. Python clearly rocks and Spark sucks.\u003c/p\u003e\n\u003cp\u003e\u0026hellip;or does it?\u003c/p\u003e\n\u003cp\u003eI mentioned before that Spark gets most of its magic from the fact that it can operate on chunks of data in parallel. This is something that we can do in Python (think \u003ccode\u003ethreading\u003c/code\u003e or \u003ccode\u003emultiprocessing\u003c/code\u003e), but it\u0027s non-trivial. 
Let\u0027s ask Spark how many partitions or chunks it\u0027s currently using for our dataset.\u003c/p\u003e\n" 419 | }, 420 | "dateCreated": "Nov 7, 2016 9:05:22 AM", 421 | "dateStarted": "Nov 12, 2016 9:32:06 PM", 422 | "dateFinished": "Nov 12, 2016 9:32:06 PM", 423 | "status": "FINISHED", 424 | "progressUpdateIntervalMs": 500 425 | }, 426 | { 427 | "text": "%pyspark\n\nclickstream.getNumPartitions()", 428 | "dateUpdated": "Nov 10, 2016 9:32:53 AM", 429 | "config": { 430 | "colWidth": 12.0, 431 | "graph": { 432 | "mode": "table", 433 | "height": 300.0, 434 | "optionOpen": false, 435 | "keys": [], 436 | "values": [], 437 | "groups": [], 438 | "scatter": {} 439 | }, 440 | "enabled": true, 441 | "editorMode": "ace/mode/python" 442 | }, 443 | "settings": { 444 | "params": {}, 445 | "forms": {} 446 | }, 447 | "jobName": "paragraph_1478570757658_2059213021", 448 | "id": "20161107-210557_1180159027", 449 | "dateCreated": "Nov 7, 2016 9:05:57 AM", 450 | "dateStarted": "Nov 10, 2016 9:34:18 AM", 451 | "dateFinished": "Nov 10, 2016 9:35:01 AM", 452 | "status": "FINISHED", 453 | "errorMessage": "", 454 | "progressUpdateIntervalMs": 500 455 | }, 456 | { 457 | "text": "%md\n\nWhat the hell Spark!? You have all this computing power at your disposal and you\u0027re treating my data as if it\u0027s one contiguous blob?\n\nIf we ran this on a 200-node cluster, we wouldn\u0027t even keep a single node busy!\n\nWhat happened behind the scenes was dependent on two things:\n\n1. We\u0027re only reading in one file\n2. That one file happens to be gzipped which is an encoding that cannot be \"split\" - only one process can decompress the file completely (there are compression algos that allow multiple processes to decompress a chunk like [LZO](http://www.oberhumer.com/opensource/lzo/))\n\nSince we\u0027re reading in exactly one file that can only be processed by a single core, Spark did the only thing it could and left it all in a single partition.\n\nIf we want to take advantage of the multiple cores on our laptops or, eventually, the multiple nodes in our cluster, we need to **repartition** our data and redistribute it. Let\u0027s do that now and see if we can speed things up.", 458 | "dateUpdated": "Nov 12, 2016 9:32:09 PM", 459 | "config": { 460 | "colWidth": 12.0, 461 | "graph": { 462 | "mode": "table", 463 | "height": 300.0, 464 | "optionOpen": false, 465 | "keys": [], 466 | "values": [], 467 | "groups": [], 468 | "scatter": {} 469 | }, 470 | "enabled": true, 471 | "editorMode": "ace/mode/markdown", 472 | "editorHide": true 473 | }, 474 | "settings": { 475 | "params": {}, 476 | "forms": {} 477 | }, 478 | "jobName": "paragraph_1478570975923_-1258189398", 479 | "id": "20161107-210935_8134385", 480 | "result": { 481 | "code": "SUCCESS", 482 | "type": "HTML", 483 | "msg": "\u003cp\u003eWhat the hell Spark!?
You have all this computing power at your disposal and you\u0027re treating my data as if it\u0027s one contigious blob?\u003c/p\u003e\n\u003cp\u003eIf we ran this on a 200 cluster node, we wouldn\u0027t even keep a single node busy!\u003c/p\u003e\n\u003cp\u003eWhat happened behind the scenes was dependent on two things:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eWe\u0027re only reading in one file\u003c/li\u003e\n\u003cli\u003eThat one file happens to be gzipped which is an encoding that cannot be \u0026ldquo;split\u0026rdquo; - only one process can decompresses the file completely (there are compression algos that allow multiple processes to decompress a chunk like \u003ca href\u003d\"http://www.oberhumer.com/opensource/lzo/\"\u003eLZO\u003c/a\u003e)\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eSince we\u0027re reading in exactly one file that can only be processed by a single core, Spark did the only thing it could and left it all in a single partition.\u003c/p\u003e\n\u003cp\u003eIf we want to take advantage of the multiple cores on our laptops or, eventually, the multiple nodes in our cluster, we need to \u003cstrong\u003erepartition\u003c/strong\u003e our data and redistribute it. Let\u0027s do that now and see if we can speed things up.\u003c/p\u003e\n" 484 | }, 485 | "dateCreated": "Nov 7, 2016 9:09:35 AM", 486 | "dateStarted": "Nov 12, 2016 9:32:09 PM", 487 | "dateFinished": "Nov 12, 2016 9:32:09 PM", 488 | "status": "FINISHED", 489 | "progressUpdateIntervalMs": 500 490 | }, 491 | { 492 | "text": "%pyspark\n\nclickstream \u003d clickstream.repartition(sc.defaultParallelism)\n\u0027{:,}\u0027.format(clickstream.count())", 493 | "dateUpdated": "Nov 10, 2016 10:18:55 PM", 494 | "config": { 495 | "colWidth": 12.0, 496 | "graph": { 497 | "mode": "table", 498 | "height": 300.0, 499 | "optionOpen": false, 500 | "keys": [], 501 | "values": [], 502 | "groups": [], 503 | "scatter": {} 504 | }, 505 | "enabled": true, 506 | "editorMode": "ace/mode/python" 507 | }, 508 | "settings": { 509 | "params": {}, 510 | "forms": {} 511 | }, 512 | "jobName": "paragraph_1478571389447_-481616165", 513 | "id": "20161107-211629_85220792", 514 | "dateCreated": "Nov 7, 2016 9:16:29 AM", 515 | "dateStarted": "Nov 10, 2016 10:18:41 PM", 516 | "dateFinished": "Nov 10, 2016 10:18:41 PM", 517 | "status": "FINISHED", 518 | "errorMessage": "", 519 | "progressUpdateIntervalMs": 500 520 | }, 521 | { 522 | "text": "%md\n\n`sc.defaultParallelism` usually corresponds to the number of cores your CPU has when running Spark locally.\n\nI have 8 cores so I effectively asked Spark to create 8 partitions with equal amounts of data. Spark then counted up the elements in each partition in parallel threads and then summed the result.\n\n44s isn\u0027t amazing, but at least we\u0027re on par with pure Python now. Repartitioning isn\u0027t free and Spark programs do have some overhead so in this trivial example, it\u0027s pretty rough to beat a pure Python line count. But now that the data is partitioned, all subsequent operations we do on it are that much faster. 
Put another way, with one call to `repartition()` we just achieved a theoretical 8x increase in performance.\n\n**Data partitioning is very important in Spark**, it\u0027s usually the key reason why your Spark job either runs fast and utilizes all resources or runs slow as your cores sit idle.\n\nTo get a better idea of how partitioning helped us, let\u0027s look at a slightly harder example: summing up the total number of pageviews in our dataset.", 523 | "dateUpdated": "Nov 12, 2016 9:32:12 PM", 524 | "config": { 525 | "colWidth": 12.0, 526 | "graph": { 527 | "mode": "table", 528 | "height": 300.0, 529 | "optionOpen": false, 530 | "keys": [], 531 | "values": [], 532 | "groups": [], 533 | "scatter": {} 534 | }, 535 | "enabled": true, 536 | "editorMode": "ace/mode/markdown", 537 | "editorHide": true 538 | }, 539 | "settings": { 540 | "params": {}, 541 | "forms": {} 542 | }, 543 | "jobName": "paragraph_1478571483128_1792019972", 544 | "id": "20161107-211803_1176928010", 545 | "result": { 546 | "code": "SUCCESS", 547 | "type": "HTML", 548 | "msg": "\u003cp\u003e\u003ccode\u003esc.defaultParallelism\u003c/code\u003e usually corresponds to the number of cores your CPU has when running Spark locally.\u003c/p\u003e\n\u003cp\u003eI have 8 cores so I effectively asked Spark to create 8 partitions with equal amounts of data. Spark then counted up the elements in each partition in parallel threads and then summed the result.\u003c/p\u003e\n\u003cp\u003e44s isn\u0027t amazing, but at least we\u0027re on par with pure Python now. Repartitioning isn\u0027t free and Spark programs do have some overhead so in this trivial example, it\u0027s pretty rough to beat a pure Python line count. But now that the data is partitioned, all subsequent operations we do on it are that much faster. 
Put another way, with one call to \u003ccode\u003erepartition()\u003c/code\u003e we just achieved a theoretical 8x increase in performance.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eData partitioning is very important in Spark\u003c/strong\u003e, it\u0027s usually the key reason why your Spark job either runs fast and utilizes all resources or runs slow as your cores sit idle.\u003c/p\u003e\n\u003cp\u003eTo get a better idea of how partitioning helped us, let\u0027s look at a slightly harder example: summing up the total number of pageviews in our dataset.\u003c/p\u003e\n" 549 | }, 550 | "dateCreated": "Nov 7, 2016 9:18:03 AM", 551 | "dateStarted": "Nov 12, 2016 9:32:12 PM", 552 | "dateFinished": "Nov 12, 2016 9:32:12 PM", 553 | "status": "FINISHED", 554 | "progressUpdateIntervalMs": 500 555 | }, 556 | { 557 | "text": "%pyspark\ndef parse_line(line):\n \u0027\u0027\u0027Parse a line in the log file, but ensure that the \u0027n\u0027 or \u0027views\u0027 column is\n convereted to an integer.\u0027\u0027\u0027\n parts \u003d line.split(\u0027\\t\u0027)\n parts[2] \u003d int(parts[2])\n return parts", 558 | "dateUpdated": "Nov 10, 2016 10:19:30 PM", 559 | "config": { 560 | "colWidth": 12.0, 561 | "graph": { 562 | "mode": "table", 563 | "height": 300.0, 564 | "optionOpen": false, 565 | "keys": [], 566 | "values": [], 567 | "groups": [], 568 | "scatter": {} 569 | }, 570 | "enabled": true, 571 | "editorMode": "ace/mode/scala" 572 | }, 573 | "settings": { 574 | "params": {}, 575 | "forms": {} 576 | }, 577 | "jobName": "paragraph_1478572100247_994143222", 578 | "id": "20161107-212820_1637898955", 579 | "dateCreated": "Nov 7, 2016 9:28:20 AM", 580 | "dateStarted": "Nov 10, 2016 10:19:30 PM", 581 | "dateFinished": "Nov 10, 2016 10:19:30 PM", 582 | "status": "FINISHED", 583 | "errorMessage": "", 584 | "progressUpdateIntervalMs": 500 585 | }, 586 | { 587 | "text": "%pyspark\nwith gzip.open(CLICKSTREAM_FILE) as fp:\n views \u003d 0\n for line in fp:\n parsed \u003d parse_line(line.strip(\u0027\\n\u0027))\n views +\u003d parsed[2]\n\n\u0027{:,}\u0027.format(views)", 588 | "dateUpdated": "Nov 10, 2016 10:19:39 PM", 589 | "config": { 590 | "colWidth": 12.0, 591 | "graph": { 592 | "mode": "table", 593 | "height": 300.0, 594 | "optionOpen": false, 595 | "keys": [], 596 | "values": [], 597 | "groups": [], 598 | "scatter": {} 599 | }, 600 | "enabled": true 601 | }, 602 | "settings": { 603 | "params": {}, 604 | "forms": {} 605 | }, 606 | "jobName": "paragraph_1478834357179_435356742", 607 | "id": "20161110-221917_2093806364", 608 | "dateCreated": "Nov 10, 2016 10:19:17 PM", 609 | "status": "READY", 610 | "errorMessage": "", 611 | "progressUpdateIntervalMs": 500 612 | }, 613 | { 614 | "text": "%pyspark\nfrom operator import add\n\n(clickstream\n .map(parse_line)\n .map(lambda parts: parts[2]) # just get views\n .reduce(add)) # identical to Python\u0027s reduce(add, [1, 2, 3, 4])", 615 | "dateUpdated": "Nov 12, 2016 9:32:31 PM", 616 | "config": { 617 | "colWidth": 12.0, 618 | "graph": { 619 | "mode": "table", 620 | "height": 300.0, 621 | "optionOpen": false, 622 | "keys": [], 623 | "values": [], 624 | "groups": [], 625 | "scatter": {} 626 | }, 627 | "enabled": true, 628 | "editorMode": "ace/mode/python" 629 | }, 630 | "settings": { 631 | "params": {}, 632 | "forms": {} 633 | }, 634 | "jobName": "paragraph_1478572303140_-140791057", 635 | "id": "20161107-213143_497530321", 636 | "dateCreated": "Nov 7, 2016 9:31:43 AM", 637 | "dateStarted": "Nov 10, 2016 9:35:51 AM", 638 | "dateFinished": "Nov 10, 2016 9:37:55 
AM", 639 | "status": "FINISHED", 640 | "errorMessage": "", 641 | "progressUpdateIntervalMs": 500 642 | }, 643 | { 644 | "text": "%md\n\nWe went from 1m18s in pure Python to 32s in Spark - a 59% reduction. Maybe this Spark thing isn\u0027t so bad after all!\n\nAlthough this is only running on our local laptops, what you\u0027ve just demonstrated is the key to what allows Spark to process petabyte datasets in minutes (or even seconds). It gives you a horizontal scaling escape hatch for your data. No matter how much data you have, Spark can handle it so long as you can provide it with the hardware. If you can\u0027t, you may have to wait a bit longer, but you still get the benefits of massively parallel processing.\n\nThere\u0027s one last trick I want to show before we review what we\u0027ve learned which ultimately made people pay a lot of attention to Spark: **caching**.\n\nOur clickstream dataset would probably be a lot faster if we could hold most of it in-memory. Turns out, Spark has an easy way to do that:", 645 | "dateUpdated": "Nov 12, 2016 9:32:44 PM", 646 | "config": { 647 | "colWidth": 12.0, 648 | "graph": { 649 | "mode": "table", 650 | "height": 300.0, 651 | "optionOpen": false, 652 | "keys": [], 653 | "values": [], 654 | "groups": [], 655 | "scatter": {} 656 | }, 657 | "enabled": true, 658 | "editorMode": "ace/mode/markdown", 659 | "editorHide": true 660 | }, 661 | "settings": { 662 | "params": {}, 663 | "forms": {} 664 | }, 665 | "jobName": "paragraph_1478572422149_-2100196256", 666 | "id": "20161107-213342_125686747", 667 | "result": { 668 | "code": "SUCCESS", 669 | "type": "HTML", 670 | "msg": "\u003cp\u003eWe went from 1m18s in pure Python to 32s in Spark - a 59% reduction. Maybe this Spark thing isn\u0027t so bad after all!\u003c/p\u003e\n\u003cp\u003eAlthough this is only running on our local laptops, what you\u0027ve just demonstrated is the key to what allows Spark to process petabyte datasets in minutes (or even seconds). It gives you a horizontal scaling escape hatch for your data. No matter how much data you have, Spark can handle it so long as you can provide it with the hardware. If you can\u0027t, you may have to wait a bit longer, but you still get the benefits of massively parallel processing.\u003c/p\u003e\n\u003cp\u003eThere\u0027s one last trick I want to show before we review what we\u0027ve learned which ultimately made people pay a lot of attention to Spark: \u003cstrong\u003ecaching\u003c/strong\u003e.\u003c/p\u003e\n\u003cp\u003eOur clickstream dataset would probably be a lot faster if we could hold most of it in-memory. 
Turns out, Spark has an easy way to do that:\u003c/p\u003e\n" 671 | }, 672 | "dateCreated": "Nov 7, 2016 9:33:42 AM", 673 | "dateStarted": "Nov 12, 2016 9:32:43 PM", 674 | "dateFinished": "Nov 12, 2016 9:32:43 PM", 675 | "status": "FINISHED", 676 | "progressUpdateIntervalMs": 500 677 | }, 678 | { 679 | "text": "%pyspark\nfrom pyspark import StorageLevel\n\nclickstream_parsed \u003d clickstream.map(parse_line)\n# Store whatever you can in memory, but spill anything that doesn\u0027t fit onto disk\nclickstream_parsed.persist(StorageLevel.MEMORY_AND_DISK)", 680 | "dateUpdated": "Nov 12, 2016 5:03:00 PM", 681 | "config": { 682 | "colWidth": 12.0, 683 | "graph": { 684 | "mode": "table", 685 | "height": 300.0, 686 | "optionOpen": false, 687 | "keys": [], 688 | "values": [], 689 | "groups": [], 690 | "scatter": {} 691 | }, 692 | "enabled": true, 693 | "editorMode": "ace/mode/python" 694 | }, 695 | "settings": { 696 | "params": {}, 697 | "forms": {} 698 | }, 699 | "jobName": "paragraph_1478573792808_-1289556038", 700 | "id": "20161107-215632_61079791", 701 | "dateCreated": "Nov 7, 2016 9:56:32 AM", 702 | "dateStarted": "Nov 10, 2016 10:19:38 PM", 703 | "dateFinished": "Nov 10, 2016 10:19:38 PM", 704 | "status": "FINISHED", 705 | "errorMessage": "", 706 | "progressUpdateIntervalMs": 500 707 | }, 708 | { 709 | "text": "%pyspark\n\n(clickstream_parsed\n .map(lambda parts: parts[2])\n .reduce(add))", 710 | "dateUpdated": "Nov 10, 2016 10:19:39 PM", 711 | "config": { 712 | "colWidth": 12.0, 713 | "graph": { 714 | "mode": "table", 715 | "height": 300.0, 716 | "optionOpen": false, 717 | "keys": [], 718 | "values": [], 719 | "groups": [], 720 | "scatter": {} 721 | }, 722 | "enabled": true 723 | }, 724 | "settings": { 725 | "params": {}, 726 | "forms": {} 727 | }, 728 | "jobName": "paragraph_1478834343787_-1536896768", 729 | "id": "20161110-221903_458344037", 730 | "dateCreated": "Nov 10, 2016 10:19:03 PM", 731 | "status": "READY", 732 | "errorMessage": "", 733 | "progressUpdateIntervalMs": 500 734 | }, 735 | { 736 | "text": "%md\n\nThis took a bit longer than our last run, and the reason is that Spark loaded up the data in each partition into JVM heap. When it hit memory limits, it spilled data to disk.\n\nNow that our cache is warmed though, let\u0027s see how fast we can get a sum.", 737 | "dateUpdated": "Nov 12, 2016 9:32:27 PM", 738 | "config": { 739 | "colWidth": 12.0, 740 | "graph": { 741 | "mode": "table", 742 | "height": 300.0, 743 | "optionOpen": false, 744 | "keys": [], 745 | "values": [], 746 | "groups": [], 747 | "scatter": {} 748 | }, 749 | "enabled": true, 750 | "editorHide": true 751 | }, 752 | "settings": { 753 | "params": {}, 754 | "forms": {} 755 | }, 756 | "jobName": "paragraph_1478574023768_199562291", 757 | "id": "20161107-220023_9554624", 758 | "result": { 759 | "code": "SUCCESS", 760 | "type": "HTML", 761 | "msg": "\u003cp\u003eThis took a bit longer than our last run, and the reason is that Spark loaded up the data in each partition into JVM heap. 
When it hit memory limits, it spilled data to disk.\u003c/p\u003e\n\u003cp\u003eNow that our cache is warmed though, let\u0027s see how fast we can get a sum.\u003c/p\u003e\n" 762 | }, 763 | "dateCreated": "Nov 7, 2016 10:00:23 AM", 764 | "dateStarted": "Nov 12, 2016 9:32:27 PM", 765 | "dateFinished": "Nov 12, 2016 9:32:27 PM", 766 | "status": "FINISHED", 767 | "progressUpdateIntervalMs": 500 768 | }, 769 | { 770 | "text": "%pyspark\n(clickstream_parsed\n .map(lambda parts: parts[2])\n .reduce(add))", 771 | "dateUpdated": "Nov 10, 2016 9:32:54 AM", 772 | "config": { 773 | "colWidth": 12.0, 774 | "graph": { 775 | "mode": "table", 776 | "height": 300.0, 777 | "optionOpen": false, 778 | "keys": [], 779 | "values": [], 780 | "groups": [], 781 | "scatter": {} 782 | }, 783 | "enabled": true, 784 | "editorMode": "ace/mode/scala" 785 | }, 786 | "settings": { 787 | "params": {}, 788 | "forms": {} 789 | }, 790 | "jobName": "paragraph_1478574092696_1786944308", 791 | "id": "20161107-220132_1816285552", 792 | "dateCreated": "Nov 7, 2016 10:01:32 AM", 793 | "dateStarted": "Nov 10, 2016 9:37:56 AM", 794 | "dateFinished": "Nov 10, 2016 9:39:38 AM", 795 | "status": "FINISHED", 796 | "errorMessage": "", 797 | "progressUpdateIntervalMs": 500 798 | }, 799 | { 800 | "text": "%md\n\nHECK YEAH. From 1m18s to 9s! To be clear, this is a good speed up, but isn\u0027t a fair comparison to the original Python version.\n\nTo compare apples-to-apples there, we would need to load the entire data file into something like a list and do a sum which would also be ridiculously fast.\n\nHopefully the main point is coming across though. Although we\u0027re working with only a single gzip file locally, everything you\u0027ve done so far could instead process millions of gzip files across hundreds of servers **without changing a single line of code**. That\u0027s the power of Spark.\n\nOk, let\u0027s rehash what we\u0027ve learned so far.", 801 | "dateUpdated": "Nov 12, 2016 9:33:02 PM", 802 | "config": { 803 | "colWidth": 12.0, 804 | "graph": { 805 | "mode": "table", 806 | "height": 300.0, 807 | "optionOpen": false, 808 | "keys": [], 809 | "values": [], 810 | "groups": [], 811 | "scatter": {} 812 | }, 813 | "enabled": true, 814 | "editorHide": true, 815 | "editorMode": "ace/mode/markdown" 816 | }, 817 | "settings": { 818 | "params": {}, 819 | "forms": {} 820 | }, 821 | "jobName": "paragraph_1478574147569_-2112208315", 822 | "id": "20161107-220227_1043819962", 823 | "result": { 824 | "code": "SUCCESS", 825 | "type": "HTML", 826 | "msg": "\u003cp\u003eHECK YEAH. From 1m18s to 9s! To be clear, this is a good speed up, but isn\u0027t a fair comparison to the original Python version.\u003c/p\u003e\n\u003cp\u003eTo compare apples-to-apples there, we would need to load the entire data file into something like a list and do a sum which would also be ridiculously fast.\u003c/p\u003e\n\u003cp\u003eHopefully the main point is coming across though. Although we\u0027re working with only a single gzip file locally, everything you\u0027ve done so far could instead process millions of gzip files across hundreds of servers \u003cstrong\u003ewithout changing a single line of code\u003c/strong\u003e. 
That\u0027s the power of Spark.\u003c/p\u003e\n\u003cp\u003eOk, let\u0027s rehash what we\u0027ve learned so far.\u003c/p\u003e\n" 827 | }, 828 | "dateCreated": "Nov 7, 2016 10:02:27 AM", 829 | "dateStarted": "Nov 12, 2016 9:33:01 PM", 830 | "dateFinished": "Nov 12, 2016 9:33:01 PM", 831 | "status": "FINISHED", 832 | "progressUpdateIntervalMs": 500 833 | }, 834 | { 835 | "text": "%md\n\nFiguring out the boss\u0027 second question \"What do people read on the site?\" can be translated as \"What are the top en.wikipedia.org articles?\".\n\nTo answer that, we\u0027ll need to explore another core Spark concept which is the key-value or pair RDD.\n\nPairRDDs differ from regular RDDs in that all elements are assumed to be two-element tuples or lists of `(key, value)`.\n\nLet\u0027s create a PairRDD that maps `curr_title -\u003e views` for our dataset.", 836 | "dateUpdated": "Nov 12, 2016 9:32:51 PM", 837 | "config": { 838 | "colWidth": 12.0, 839 | "graph": { 840 | "mode": "table", 841 | "height": 300.0, 842 | "optionOpen": false, 843 | "keys": [], 844 | "values": [], 845 | "groups": [], 846 | "scatter": {} 847 | }, 848 | "enabled": true, 849 | "editorMode": "ace/mode/markdown", 850 | "editorHide": true 851 | }, 852 | "settings": { 853 | "params": {}, 854 | "forms": {} 855 | }, 856 | "jobName": "paragraph_1478742542334_1099773527", 857 | "id": "20161109-204902_599151766", 858 | "result": { 859 | "code": "SUCCESS", 860 | "type": "HTML", 861 | "msg": "\u003cp\u003eFiguring out the boss\u0027 second question \u0026ldquo;What do people read on the site?\u0026rdquo; can be translated as \u0026ldquo;What are the top en.wikipedia.org articles?\u0026ldquo;.\u003c/p\u003e\n\u003cp\u003eTo answer that, we\u0027ll need to explore another core Spark concept which is the key-value or pair RDD.\u003c/p\u003e\n\u003cp\u003ePairRDDs differ from regular RDDs in that all elements are assumed to be two-element tuples or lists of \u003ccode\u003e(key, value)\u003c/code\u003e.\u003c/p\u003e\n\u003cp\u003eLet\u0027s create a PairRDD that maps \u003ccode\u003ecurr_title -\u0026gt; views\u003c/code\u003e for our dataset.\u003c/p\u003e\n" 862 | }, 863 | "dateCreated": "Nov 9, 2016 8:49:02 AM", 864 | "dateStarted": "Nov 12, 2016 9:32:51 PM", 865 | "dateFinished": "Nov 12, 2016 9:32:51 PM", 866 | "status": "FINISHED", 867 | "progressUpdateIntervalMs": 500 868 | }, 869 | { 870 | "text": "%pyspark\n\n(clickstream_parsed\n .map(lambda parsed: (parsed[4], parsed[2]))\n .first())", 871 | "dateUpdated": "Nov 10, 2016 9:32:54 AM", 872 | "config": { 873 | "colWidth": 12.0, 874 | "graph": { 875 | "mode": "table", 876 | "height": 300.0, 877 | "optionOpen": false, 878 | "keys": [], 879 | "values": [], 880 | "groups": [], 881 | "scatter": {} 882 | }, 883 | "enabled": true, 884 | "editorMode": "ace/mode/python" 885 | }, 886 | "settings": { 887 | "params": {}, 888 | "forms": {} 889 | }, 890 | "jobName": "paragraph_1478743064843_1240739409", 891 | "id": "20161109-205744_1312605228", 892 | "dateCreated": "Nov 9, 2016 8:57:44 AM", 893 | "dateStarted": "Nov 10, 2016 9:39:24 AM", 894 | "dateFinished": "Nov 10, 2016 9:39:38 AM", 895 | "status": "FINISHED", 896 | "errorMessage": "", 897 | "progressUpdateIntervalMs": 500 898 | }, 899 | { 900 | "text": "%md\n\nCool, so we have a mapping of `curr_title -\u003e views`, now we know we want to do the equivalent of:\n\n```\nSELECT curr_title, SUM(views)\nFROM clickstream_parsed\nGROUP BY curr_title\nORDER BY SUM(views) DESC\nLIMIT 25\n```\n\nHow can we do that in Spark?", 901 | "dateUpdated": "Nov 
12, 2016 9:33:05 PM", 902 |         "config": { 903 |           "colWidth": 12.0, 904 |           "graph": { 905 |             "mode": "table", 906 |             "height": 300.0, 907 |             "optionOpen": false, 908 |             "keys": [], 909 |             "values": [], 910 |             "groups": [], 911 |             "scatter": {} 912 |           }, 913 |           "enabled": true, 914 |           "editorMode": "ace/mode/markdown", 915 |           "editorHide": true 916 |         }, 917 |         "settings": { 918 |           "params": {}, 919 |           "forms": {} 920 |         }, 921 |         "jobName": "paragraph_1478812703235_1794295241", 922 |         "id": "20161110-161823_651379", 923 |         "result": { 924 |           "code": "SUCCESS", 925 |           "type": "HTML", 926 |           "msg": "\u003cp\u003eCool, so we have a mapping of \u003ccode\u003ecurr_title -\u0026gt; views\u003c/code\u003e, now we know we want to do the equivalent of:\u003c/p\u003e\n\u003cpre\u003e\u003ccode\u003eSELECT curr_title, SUM(views)\nFROM clickstream_parsed\nGROUP BY curr_title\nORDER BY SUM(views) DESC\nLIMIT 25\n\u003c/code\u003e\u003c/pre\u003e\n\u003cp\u003eHow can we do that in Spark?\u003c/p\u003e\n" 927 |         }, 928 |         "dateCreated": "Nov 10, 2016 4:18:23 AM", 929 |         "dateStarted": "Nov 12, 2016 9:33:05 PM", 930 |         "dateFinished": "Nov 12, 2016 9:33:05 PM", 931 |         "status": "FINISHED", 932 |         "progressUpdateIntervalMs": 500 933 |       }, 934 |       { 935 |         "text": "%pyspark \nfrom operator import add\n\npprint(clickstream_parsed\n    .map(lambda parsed: (parsed[4], parsed[2]))\n    .reduceByKey(add)\n    .top(25, key\u003dlambda (k, v): v))", 936 |         "dateUpdated": "Nov 12, 2016 5:11:31 PM", 937 |         "config": { 938 |           "colWidth": 12.0, 939 |           "graph": { 940 |             "mode": "table", 941 |             "height": 300.0, 942 |             "optionOpen": false, 943 |             "keys": [], 944 |             "values": [], 945 |             "groups": [], 946 |             "scatter": {} 947 |           }, 948 |           "enabled": true, 949 |           "editorMode": "ace/mode/python" 950 |         }, 951 |         "settings": { 952 |           "params": {}, 953 |           "forms": {} 954 |         }, 955 |         "jobName": "paragraph_1478812782348_1793415185", 956 |         "id": "20161110-161942_227148540", 957 |         "dateCreated": "Nov 10, 2016 4:19:42 AM", 958 |         "dateStarted": "Nov 10, 2016 9:39:38 AM", 959 |         "dateFinished": "Nov 10, 2016 9:40:16 AM", 960 |         "status": "FINISHED", 961 |         "errorMessage": "", 962 |         "progressUpdateIntervalMs": 500 963 |       }, 964 |       { 965 |         "text": "%md\n\nSweet! So no surprise that the top article was the `Main_Page`, but `Chris_Kyle` is a little surprising if you\u0027re not aware of who he is. \n\nChris Kyle was a US Navy SEAL who died in Feb 2013 but was immortalized in the film American Sniper which premiered worldwide in mid-late January.\n\nThe interest in Charlie Hebdo correlates with the terrorist attacks against the satirical French magazine of the same name in January of 2015.\n\nIn fact, most of the top pages, minus `Main_Page` seem to be correlated with news stories. People hear about a news event, maybe do a Google search for it and land at Wikipedia to learn more.\n\nOf course we haven\u0027t really validated the theory so let\u0027s try to do that for the `Chris_Kyle` article.\n\nIn SQL, we\u0027d want something like this:\n\n```\nSELECT curr_title, traffic_source, SUM(views)\nFROM clickstream\nWHERE curr_title \u003d \u0027Chris_Kyle\u0027\nGROUP BY curr_title, traffic_source\nORDER BY SUM(views) DESC\nLIMIT 100\n```\n\nReally all we\u0027re doing is adding another column to our group key. 
How would that translate to Spark?", 966 | "dateUpdated": "Nov 12, 2016 9:33:10 PM", 967 | "config": { 968 | "colWidth": 12.0, 969 | "graph": { 970 | "mode": "table", 971 | "height": 300.0, 972 | "optionOpen": false, 973 | "keys": [], 974 | "values": [], 975 | "groups": [], 976 | "scatter": {} 977 | }, 978 | "enabled": true, 979 | "editorMode": "ace/mode/markdown", 980 | "editorHide": true 981 | }, 982 | "settings": { 983 | "params": {}, 984 | "forms": {} 985 | }, 986 | "jobName": "paragraph_1478813057260_-1996960245", 987 | "id": "20161110-162417_1179182725", 988 | "result": { 989 | "code": "SUCCESS", 990 | "type": "HTML", 991 | "msg": "\u003cp\u003eSweet! So no surprise that the top article was the \u003ccode\u003eMain_Page\u003c/code\u003e, but \u003ccode\u003eChris_Kyle\u003c/code\u003e is a little surprising if you\u0027re not aware of who he is.\u003c/p\u003e\n\u003cp\u003eChris Kyle was a US Navy Seal who died in Feb 2013 but was immortalized in the film American Sniper which premiered worldwide in mid-late January.\u003c/p\u003e\n\u003cp\u003eThe interest in Charlie Hebdo correlates with the terrorist attacks against the satirical french magazine of the same name in January of 2015\u003c/p\u003e\n\u003cp\u003eIn fact, most of the top pages, minus \u003ccode\u003eMain_Page\u003c/code\u003e seem to be correlated with news stories. People hear about a news event, maybe do a Google search for it and land at Wikipedia to learn more.\u003c/p\u003e\n\u003cp\u003eOf course we haven\u0027t really validated the theory so let\u0027s try to do that for the \u003ccode\u003eChris_Kyle\u003c/code\u003e article.\u003c/p\u003e\n\u003cp\u003eIn SQL, we\u0027d want something like this:\u003c/p\u003e\n\u003cpre\u003e\u003ccode\u003eSELECT curr_title, trafic_source, SUM(views)\nFROM clickstream\nWHERE curr_title \u003d \u0027Chris_Kyle\u0027\nGROUP BY curr_title, traffic_source\nORDER BY SUM(views) DESC\nLIMIT 100\n\u003c/code\u003e\u003c/pre\u003e\n\u003cp\u003eReally all we\u0027re doing is adding another column to our group key. 
How would that translate to Spark?\u003c/p\u003e\n" 992 |         }, 993 |         "dateCreated": "Nov 10, 2016 4:24:17 AM", 994 |         "dateStarted": "Nov 12, 2016 9:33:10 PM", 995 |         "dateFinished": "Nov 12, 2016 9:33:10 PM", 996 |         "status": "FINISHED", 997 |         "progressUpdateIntervalMs": 500 998 |       }, 999 |       { 1000 |         "text": "%pyspark\n\npprint(clickstream_parsed\n    .filter(lambda parsed: parsed[4] \u003d\u003d \u0027Chris_Kyle\u0027)\n    .map(lambda parsed: ((parsed[4], parsed[3]), parsed[2]))\n    .reduceByKey(add)\n    .top(10, key\u003dlambda (k, v): v))", 1001 |         "dateUpdated": "Nov 10, 2016 9:32:54 AM", 1002 |         "config": { 1003 |           "colWidth": 12.0, 1004 |           "graph": { 1005 |             "mode": "table", 1006 |             "height": 300.0, 1007 |             "optionOpen": false, 1008 |             "keys": [], 1009 |             "values": [], 1010 |             "groups": [], 1011 |             "scatter": {} 1012 |           }, 1013 |           "enabled": true, 1014 |           "editorMode": "ace/mode/scala" 1015 |         }, 1016 |         "settings": { 1017 |           "params": {}, 1018 |           "forms": {} 1019 |         }, 1020 |         "jobName": "paragraph_1478813830909_-1716575175", 1021 |         "id": "20161110-163710_732153347", 1022 |         "dateCreated": "Nov 10, 2016 4:37:10 AM", 1023 |         "dateStarted": "Nov 10, 2016 9:39:39 AM", 1024 |         "dateFinished": "Nov 10, 2016 9:40:29 AM", 1025 |         "status": "FINISHED", 1026 |         "errorMessage": "", 1027 |         "progressUpdateIntervalMs": 500 1028 |       }, 1029 |       { 1030 |         "text": "%md\n\nAh cool, so Google is the primary referral source, but, as expected, interest in the film American Sniper also led to many folks reading up on Chris Kyle.\n\nAt this point, you may be wondering what exactly is going on under the hood with Spark when you finally press enter on the cell above. We\u0027ll demystify that a lot more in time, but to start, we\u0027re going to talk about a core concept in Spark programs, **the shuffle**.\n\nTo understand the shuffle, let\u0027s see what the DAG looks like for that final RDD. I\u0027ll make a slight modification to things and call `sortBy` instead of `top` since `sortBy` is a transformation that won\u0027t trigger execution.", 1031 |         "dateUpdated": "Nov 12, 2016 9:33:15 PM", 1032 |         "config": { 1033 |           "colWidth": 12.0, 1034 |           "graph": { 1035 |             "mode": "table", 1036 |             "height": 300.0, 1037 |             "optionOpen": false, 1038 |             "keys": [], 1039 |             "values": [], 1040 |             "groups": [], 1041 |             "scatter": {} 1042 |           }, 1043 |           "enabled": true, 1044 |           "editorMode": "ace/mode/markdown", 1045 |           "editorHide": true 1046 |         }, 1047 |         "settings": { 1048 |           "params": {}, 1049 |           "forms": {} 1050 |         }, 1051 |         "jobName": "paragraph_1478814607249_-1720691921", 1052 |         "id": "20161110-165007_400840257", 1053 |         "result": { 1054 |           "code": "SUCCESS", 1055 |           "type": "HTML", 1056 |           "msg": "\u003cp\u003eAh cool, so Google is the primary referral source, but, as expected, interest in the film American Sniper also led to many folks reading up on Chris Kyle.\u003c/p\u003e\n\u003cp\u003eAt this point, you may be wondering what exactly is going on under the hood with Spark when you finally press enter on the cell above. We\u0027ll demystify that a lot more in time, but to start, we\u0027re going to talk about a core concept in Spark programs, \u003cstrong\u003ethe shuffle\u003c/strong\u003e.\u003c/p\u003e\n\u003cp\u003eTo understand the shuffle, let\u0027s see what the DAG looks like for that final RDD. 
I\u0027ll make a slight modification to things and call \u003ccode\u003esortBy\u003c/code\u003e instead of \u003ccode\u003etop\u003c/code\u003e since \u003ccode\u003esortBy\u003c/code\u003e is a transformation that won\u0027t trigger execution.\u003c/p\u003e\n" 1057 | }, 1058 | "dateCreated": "Nov 10, 2016 4:50:07 AM", 1059 | "dateStarted": "Nov 12, 2016 9:33:15 PM", 1060 | "dateFinished": "Nov 12, 2016 9:33:15 PM", 1061 | "status": "FINISHED", 1062 | "progressUpdateIntervalMs": 500 1063 | }, 1064 | { 1065 | "text": "%pyspark \n\nprint(clickstream_parsed\n .filter(lambda parsed: parsed[4] \u003d\u003d \u0027Chris_Kyle\u0027)\n .map(lambda parsed: ((parsed[4], parsed[3]), parsed[2]))\n .reduceByKey(add)\n .sortBy(lambda (k, v): v, ascending\u003dFalse)\n .toDebugString())", 1066 | "dateUpdated": "Nov 10, 2016 9:32:54 AM", 1067 | "config": { 1068 | "colWidth": 12.0, 1069 | "graph": { 1070 | "mode": "table", 1071 | "height": 300.0, 1072 | "optionOpen": false, 1073 | "keys": [], 1074 | "values": [], 1075 | "groups": [], 1076 | "scatter": {} 1077 | }, 1078 | "enabled": true, 1079 | "editorMode": "ace/mode/scala" 1080 | }, 1081 | "settings": { 1082 | "params": {}, 1083 | "forms": {} 1084 | }, 1085 | "jobName": "paragraph_1478815058790_-1219672233", 1086 | "id": "20161110-165738_624660842", 1087 | "dateCreated": "Nov 10, 2016 4:57:38 AM", 1088 | "dateStarted": "Nov 10, 2016 9:40:17 AM", 1089 | "dateFinished": "Nov 10, 2016 9:40:42 AM", 1090 | "status": "FINISHED", 1091 | "errorMessage": "", 1092 | "progressUpdateIntervalMs": 500 1093 | }, 1094 | { 1095 | "text": "%md\n\nWe just found out that PySpark pays an additional tax on performance due to object serialization and that there may be a solution out there. Truth is, there is in the form of DataFrames and Spark SQL. In fact, you\u0027ll often use an RDD only for some very basic data manipulation. In many cases, if your data is stored in the appropriate format, you can avoid RDDs entirely and load data directly into a DataFrame. We\u0027ll see this in a bit.\n\nIf DataFrames sound familiar to Pandas it\u0027s because they are. The API isn\u0027t identical, but there are tons of similarities. Just remember that under the hood, you\u0027re still dealing with RDDs in Spark. DataFrames are just higher level abstractions.\n\nThe great news is, we can very easily go from a RDD to a DataFrame and vice versa.\n\nSo, how do we create a DataFrame from our RDD?", 1096 | "dateUpdated": "Nov 12, 2016 9:33:18 PM", 1097 | "config": { 1098 | "colWidth": 12.0, 1099 | "graph": { 1100 | "mode": "table", 1101 | "height": 300.0, 1102 | "optionOpen": false, 1103 | "keys": [], 1104 | "values": [], 1105 | "groups": [], 1106 | "scatter": {} 1107 | }, 1108 | "enabled": true, 1109 | "editorMode": "ace/mode/markdown", 1110 | "editorHide": true 1111 | }, 1112 | "settings": { 1113 | "params": {}, 1114 | "forms": {} 1115 | }, 1116 | "jobName": "paragraph_1478815138959_232062512", 1117 | "id": "20161110-165858_592239418", 1118 | "result": { 1119 | "code": "SUCCESS", 1120 | "type": "HTML", 1121 | "msg": "\u003cp\u003eWe just found out that PySpark pays an additional tax on performance due to object serialization and that there may be a solution out there. Truth is, there is in the form of DataFrames and Spark SQL. In fact, you\u0027ll often use an RDD only for some very basic data manipulation. In many cases, if your data is stored in the appropriate format, you can avoid RDDs entirely and load data directly into a DataFrame. 
We\u0027ll see this in a bit.\u003c/p\u003e\n\u003cp\u003eIf DataFrames sound familiar to Pandas it\u0027s because they are. The API isn\u0027t identical, but there are tons of similarities. Just remember that under the hood, you\u0027re still dealing with RDDs in Spark. DataFrames are just higher level abstractions.\u003c/p\u003e\n\u003cp\u003eThe great news is, we can very easily go from a RDD to a DataFrame and vice versa.\u003c/p\u003e\n\u003cp\u003eSo, how do we create a DataFrame from our RDD?\u003c/p\u003e\n" 1122 | }, 1123 | "dateCreated": "Nov 10, 2016 4:58:58 AM", 1124 | "dateStarted": "Nov 12, 2016 9:33:18 PM", 1125 | "dateFinished": "Nov 12, 2016 9:33:18 PM", 1126 | "status": "FINISHED", 1127 | "progressUpdateIntervalMs": 500 1128 | }, 1129 | { 1130 | "text": "%pyspark\n# Import some schema definition helpers. You can ask Spark to implicitly determine schema, but it\u0027s slower and more error prone\nfrom pyspark.sql.types import StructType, StructField, StringType, IntegerType\n\nclickstream_schema \u003d StructType([\n StructField(\u0027prev_id\u0027, StringType()),\n StructField(\u0027curr_id\u0027, StringType()),\n StructField(\u0027views\u0027, IntegerType()),\n StructField(\u0027prev_title\u0027, StringType()),\n StructField(\u0027curr_title\u0027, StringType()),\n])\n\nclickstream_df \u003d sqlContext.createDataFrame(clickstream_parsed, clickstream_schema)", 1131 | "dateUpdated": "Nov 12, 2016 5:17:29 PM", 1132 | "config": { 1133 | "colWidth": 12.0, 1134 | "graph": { 1135 | "mode": "table", 1136 | "height": 300.0, 1137 | "optionOpen": false, 1138 | "keys": [], 1139 | "values": [], 1140 | "groups": [], 1141 | "scatter": {} 1142 | }, 1143 | "enabled": true, 1144 | "editorMode": "ace/mode/scala", 1145 | "editorHide": false 1146 | }, 1147 | "settings": { 1148 | "params": {}, 1149 | "forms": {} 1150 | }, 1151 | "jobName": "paragraph_1478400198710_1622450405", 1152 | "id": "20161105-224318_930617102", 1153 | "dateCreated": "Nov 5, 2016 10:43:18 AM", 1154 | "dateStarted": "Nov 10, 2016 10:19:54 PM", 1155 | "dateFinished": "Nov 10, 2016 10:20:04 PM", 1156 | "status": "FINISHED", 1157 | "errorMessage": "", 1158 | "progressUpdateIntervalMs": 500 1159 | }, 1160 | { 1161 | "text": "%pyspark\n\npprint(clickstream_df.take(5))", 1162 | "dateUpdated": "Nov 10, 2016 10:22:44 PM", 1163 | "config": { 1164 | "colWidth": 12.0, 1165 | "graph": { 1166 | "mode": "table", 1167 | "height": 300.0, 1168 | "optionOpen": false, 1169 | "keys": [], 1170 | "values": [], 1171 | "groups": [], 1172 | "scatter": {} 1173 | }, 1174 | "enabled": true, 1175 | "editorMode": "ace/mode/scala" 1176 | }, 1177 | "settings": { 1178 | "params": {}, 1179 | "forms": {} 1180 | }, 1181 | "jobName": "paragraph_1478831361409_768888663", 1182 | "id": "20161110-212921_309910808", 1183 | "dateCreated": "Nov 10, 2016 9:29:21 AM", 1184 | "dateStarted": "Nov 10, 2016 10:22:45 PM", 1185 | "dateFinished": "Nov 10, 2016 10:22:45 PM", 1186 | "status": "FINISHED", 1187 | "errorMessage": "", 1188 | "progressUpdateIntervalMs": 500 1189 | }, 1190 | { 1191 | "text": "%md\n\nAh cool. 
So a DataFrame is kind of like a collection of `namedtuple`s that are also strongly typed.\n\nJust to get familiar, let\u0027s do some of this stuff we\u0027ve already done with RDDs to get comfy with DataFrames.", 1192 |         "dateUpdated": "Nov 12, 2016 9:33:22 PM", 1193 |         "config": { 1194 |           "colWidth": 12.0, 1195 |           "graph": { 1196 |             "mode": "table", 1197 |             "height": 300.0, 1198 |             "optionOpen": false, 1199 |             "keys": [], 1200 |             "values": [], 1201 |             "groups": [], 1202 |             "scatter": {} 1203 |           }, 1204 |           "enabled": true, 1205 |           "editorMode": "ace/mode/markdown", 1206 |           "editorHide": true 1207 |         }, 1208 |         "settings": { 1209 |           "params": {}, 1210 |           "forms": {} 1211 |         }, 1212 |         "jobName": "paragraph_1478832334833_30336576", 1213 |         "id": "20161110-214534_2099202807", 1214 |         "result": { 1215 |           "code": "SUCCESS", 1216 |           "type": "HTML", 1217 |           "msg": "\u003cp\u003eAh cool. So a DataFrame is kind of like a collection of \u003ccode\u003enamedtuple\u003c/code\u003es that are also strongly typed.\u003c/p\u003e\n\u003cp\u003eJust to get familiar, let\u0027s do some of this stuff we\u0027ve already done with RDDs to get comfy with DataFrames.\u003c/p\u003e\n" 1218 |         }, 1219 |         "dateCreated": "Nov 10, 2016 9:45:34 AM", 1220 |         "dateStarted": "Nov 12, 2016 9:33:22 PM", 1221 |         "dateFinished": "Nov 12, 2016 9:33:22 PM", 1222 |         "status": "FINISHED", 1223 |         "progressUpdateIntervalMs": 500 1224 |       }, 1225 |       { 1226 |         "text": "%pyspark\nfrom pyspark.sql import functions as F\n\nnum_rows \u003d clickstream_df.count()\nnum_views \u003d clickstream_df.groupby().sum(\u0027views\u0027).collect()[0][\u0027sum(views)\u0027]\n\nprint(\u0027Num rows: {:,}\u0027.format(num_rows))\nprint(\u0027Num views: {:,}\u0027.format(num_views))", 1227 |         "dateUpdated": "Nov 12, 2016 5:18:48 PM", 1228 |         "config": { 1229 |           "colWidth": 12.0, 1230 |           "graph": { 1231 |             "mode": "table", 1232 |             "height": 300.0, 1233 |             "optionOpen": false, 1234 |             "keys": [], 1235 |             "values": [], 1236 |             "groups": [], 1237 |             "scatter": {} 1238 |           }, 1239 |           "enabled": true, 1240 |           "editorMode": "ace/mode/scala" 1241 |         }, 1242 |         "settings": { 1243 |           "params": {}, 1244 |           "forms": {} 1245 |         }, 1246 |         "jobName": "paragraph_1478832410048_1610038939", 1247 |         "id": "20161110-214650_1610773823", 1248 |         "dateCreated": "Nov 10, 2016 9:46:50 AM", 1249 |         "dateStarted": "Nov 10, 2016 10:37:15 PM", 1250 |         "dateFinished": "Nov 10, 2016 10:41:45 PM", 1251 |         "status": "FINISHED", 1252 |         "errorMessage": "", 1253 |         "progressUpdateIntervalMs": 500 1254 |       }, 1255 |       { 1256 |         "text": "%md\n\nOk cool, we can count rows and sum things. Why is this better than what we had?\n\nIn the first version, we had parsing and reducing (summing) being done in Python. This is precisely what leads to the serialization overhead I mentioned before. When we write `clickstream_df.groupby().sum(\u0027views\u0027)`, we\u0027re doing the same thing functionally, but the difference is the summing itself is now done entirely within the JVM instead of making roundtrips to Python and then back to the JVM. This can often result in massive performance improvements (especially when datasets are already cached).\n\nDataFrames are cool and have an [API](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame) that I encourage you to study. However, we\u0027ve kind of had an overload of \"new stuff\" in this tutorial so far. 
What\u0027s great about DataFrames is that they simultaneously enable another feature in Spark that I\u0027d bet you\u0027re already familiar with - SQL.", 1257 | "dateUpdated": "Nov 12, 2016 9:33:25 PM", 1258 | "config": { 1259 | "colWidth": 12.0, 1260 | "graph": { 1261 | "mode": "table", 1262 | "height": 300.0, 1263 | "optionOpen": false, 1264 | "keys": [], 1265 | "values": [], 1266 | "groups": [], 1267 | "scatter": {} 1268 | }, 1269 | "enabled": true, 1270 | "editorMode": "ace/mode/markdown", 1271 | "editorHide": true 1272 | }, 1273 | "settings": { 1274 | "params": {}, 1275 | "forms": {} 1276 | }, 1277 | "jobName": "paragraph_1478835716370_-311097163", 1278 | "id": "20161110-224156_98990544", 1279 | "result": { 1280 | "code": "SUCCESS", 1281 | "type": "HTML", 1282 | "msg": "\u003cp\u003eOk cool, we can count rows and sum things. Why is this better than what we had?\u003c/p\u003e\n\u003cp\u003eIn the first version, we had parsing and reducing (summing) being done in Python. This is precisely what leads to the serialization overhead I mentioned before. When we write \u003ccode\u003eclickstream_df.groupby().sum(\u0027views\u0027)\u003c/code\u003e, we\u0027re doing the same thing functionally, but the difference is the summing itself is now done entirely within the JVM instead of making roundtrips to Python and then back to the JVM. This can often result in massive performance improvements (especially when datasets are already cached).\u003c/p\u003e\n\u003cp\u003eDataFrames are cool and have an \u003ca href\u003d\"http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame\"\u003eAPI\u003c/a\u003e that I encourage you to study. However, we\u0027ve kind of had an overload of \u0026ldquo;new stuff\u0026rdquo; in this tutorial so far. 
What\u0027s great about DataFrames is that they simultaneously enable another feature in Spark that I\u0027d bet you\u0027re already familiar with - SQL.\u003c/p\u003e\n" 1283 | }, 1284 | "dateCreated": "Nov 10, 2016 10:41:56 PM", 1285 | "dateStarted": "Nov 12, 2016 9:33:25 PM", 1286 | "dateFinished": "Nov 12, 2016 9:33:25 PM", 1287 | "status": "FINISHED", 1288 | "progressUpdateIntervalMs": 500 1289 | }, 1290 | { 1291 | "text": "%pyspark\n# create a SQL table called \"clickstream\", Spark is now acting as a quasi-relational database for you\nclickstream_df.createOrReplaceTempView(\u0027clickstream\u0027)", 1292 | "dateUpdated": "Nov 12, 2016 5:21:08 PM", 1293 | "config": { 1294 | "colWidth": 12.0, 1295 | "graph": { 1296 | "mode": "table", 1297 | "height": 300.0, 1298 | "optionOpen": false, 1299 | "keys": [], 1300 | "values": [], 1301 | "groups": [], 1302 | "scatter": {} 1303 | }, 1304 | "enabled": true, 1305 | "editorMode": "ace/mode/python" 1306 | }, 1307 | "settings": { 1308 | "params": {}, 1309 | "forms": {} 1310 | }, 1311 | "jobName": "paragraph_1478836055444_-1357338336", 1312 | "id": "20161110-224735_252506842", 1313 | "dateCreated": "Nov 10, 2016 10:47:35 PM", 1314 | "dateStarted": "Nov 10, 2016 10:48:06 PM", 1315 | "dateFinished": "Nov 10, 2016 10:48:07 PM", 1316 | "status": "FINISHED", 1317 | "errorMessage": "", 1318 | "progressUpdateIntervalMs": 500 1319 | }, 1320 | { 1321 | "text": "%sql\n-- Now we can query \"clickstream\" as if it were just a regular SQL table\n\nSELECT * FROM clickstream LIMIT 5", 1322 | "dateUpdated": "Nov 10, 2016 10:49:51 PM", 1323 | "config": { 1324 | "colWidth": 8.0, 1325 | "graph": { 1326 | "mode": "table", 1327 | "height": 300.0, 1328 | "optionOpen": false, 1329 | "keys": [], 1330 | "values": [], 1331 | "groups": [], 1332 | "scatter": {} 1333 | }, 1334 | "enabled": true, 1335 | "editorMode": "ace/mode/sql" 1336 | }, 1337 | "settings": { 1338 | "params": {}, 1339 | "forms": {} 1340 | }, 1341 | "jobName": "paragraph_1478836090162_-351173990", 1342 | "id": "20161110-224810_2048402765", 1343 | "dateCreated": "Nov 10, 2016 10:48:10 PM", 1344 | "dateStarted": "Nov 10, 2016 10:48:33 PM", 1345 | "dateFinished": "Nov 10, 2016 10:48:34 PM", 1346 | "status": "FINISHED", 1347 | "errorMessage": "", 1348 | "progressUpdateIntervalMs": 500 1349 | }, 1350 | { 1351 | "text": "%md\n\n\nBecause SQL returns structured data (thanks to the schema you specified above), Zeppelin can \"autoformat\" the result set.", 1352 | "dateUpdated": "Nov 12, 2016 9:33:37 PM", 1353 | "config": { 1354 | "colWidth": 4.0, 1355 | "graph": { 1356 | "mode": "table", 1357 | "height": 300.0, 1358 | "optionOpen": false, 1359 | "keys": [], 1360 | "values": [], 1361 | "groups": [], 1362 | "scatter": {} 1363 | }, 1364 | "enabled": true, 1365 | "editorMode": "ace/mode/markdown", 1366 | "editorHide": true 1367 | }, 1368 | "settings": { 1369 | "params": {}, 1370 | "forms": {} 1371 | }, 1372 | "jobName": "paragraph_1478836119888_-869430235", 1373 | "id": "20161110-224839_1285388846", 1374 | "result": { 1375 | "code": "SUCCESS", 1376 | "type": "HTML", 1377 | "msg": "\u003cp\u003eBecause SQL returns structured data (thanks to the schema you specified above), Zeppelin can \u0026ldquo;autoformat\u0026rdquo; the result set.\u003c/p\u003e\n" 1378 | }, 1379 | "dateCreated": "Nov 10, 2016 10:48:39 PM", 1380 | "dateStarted": "Nov 12, 2016 9:33:33 PM", 1381 | "dateFinished": "Nov 12, 2016 9:33:33 PM", 1382 | "status": "FINISHED", 1383 | "progressUpdateIntervalMs": 500 1384 | }, 1385 | { 1386 | "text": "%md\n\nThat 
is SO much better. Just a reminder, we did file reading in Java, our parsing in Python but the aggregation and filtering specified in our SQL query ran entirely in Java.\n\nThis is a marriage made in heaven. Manipulate data in Python, then have APIs that let us analyze data in Python or in SQL and not have to sacrifice performance.\n\nWe\u0027ve answered all the questions our boss wanted but let\u0027s explore a few others and have things consolidated in one place:\n\n* How many total views were there?\n* How many unique articles were viewed?\n* What are the top 10 articles?\n* What are the top 10 internal traffic sources?\n* What are the top 10 external sources?\n* How much traffic does a \"normal\" article get?", 1387 | "dateUpdated": "Nov 12, 2016 9:33:39 PM", 1388 | "config": { 1389 | "colWidth": 12.0, 1390 | "graph": { 1391 | "mode": "table", 1392 | "height": 300.0, 1393 | "optionOpen": false, 1394 | "keys": [], 1395 | "values": [], 1396 | "groups": [], 1397 | "scatter": {} 1398 | }, 1399 | "enabled": true, 1400 | "editorMode": "ace/mode/markdown", 1401 | "editorHide": true, 1402 | "title": false 1403 | }, 1404 | "settings": { 1405 | "params": {}, 1406 | "forms": {} 1407 | }, 1408 | "jobName": "paragraph_1478836439775_-598306498", 1409 | "id": "20161110-225359_2037453172", 1410 | "result": { 1411 | "code": "SUCCESS", 1412 | "type": "HTML", 1413 | "msg": "\u003cp\u003eThat is SO much better. Just a reminder, we did file reading in Java, our parsing in Python but the aggregation and filtering specified in our SQL query ran entirely in Java.\u003c/p\u003e\n\u003cp\u003eThis is a marriage made in heaven. Manipulate data in Python, then have APIs that let us analyze data in Python or in SQL and not have to sacrifice performance.\u003c/p\u003e\n\u003cp\u003eWe\u0027ve answered all the questions our boss wanted but let\u0027s explore a few others and have things consolidated in one place:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eHow many total views were there?\u003c/li\u003e\n\u003cli\u003eHow many unique articles were viewed?\u003c/li\u003e\n\u003cli\u003eWhat are the top 10 articles?\u003c/li\u003e\n\u003cli\u003eWhat are the top 10 internal traffic sources?\u003c/li\u003e\n\u003cli\u003eWhat are the top 10 external sources?\u003c/li\u003e\n\u003cli\u003eHow much traffic does a \u0026ldquo;normal\u0026rdquo; article get?\u003c/li\u003e\n\u003c/ul\u003e\n" 1414 | }, 1415 | "dateCreated": "Nov 10, 2016 10:53:59 PM", 1416 | "dateStarted": "Nov 12, 2016 9:33:39 PM", 1417 | "dateFinished": "Nov 12, 2016 9:33:39 PM", 1418 | "status": "FINISHED", 1419 | "progressUpdateIntervalMs": 500 1420 | }, 1421 | { 1422 | "title": "How many total views?", 1423 | "text": "%sql\nSELECT SUM(views) FROM clickstream", 1424 | "dateUpdated": "Nov 12, 2016 1:16:19 PM", 1425 | "config": { 1426 | "colWidth": 3.0, 1427 | "graph": { 1428 | "mode": "table", 1429 | "height": 324.0, 1430 | "optionOpen": false, 1431 | "keys": [], 1432 | "values": [], 1433 | "groups": [], 1434 | "scatter": {} 1435 | }, 1436 | "enabled": true, 1437 | "editorMode": "ace/mode/sql", 1438 | "title": true 1439 | }, 1440 | "settings": { 1441 | "params": {}, 1442 | "forms": {} 1443 | }, 1444 | "jobName": "paragraph_1478884127522_-1765316878", 1445 | "id": "20161111-120847_2010603219", 1446 | "dateCreated": "Nov 11, 2016 12:08:47 PM", 1447 | "dateStarted": "Nov 11, 2016 12:09:07 PM", 1448 | "dateFinished": "Nov 11, 2016 12:10:53 PM", 1449 | "status": "FINISHED", 1450 | "errorMessage": "", 1451 | "progressUpdateIntervalMs": 500 1452 | }, 1453 | { 
1454 | "title": "How many unique articles were viewed?", 1455 | "text": "%pyspark\nfrom pyspark.sql import functions as F\n\nprint(\u0027%table\u0027)\nprint(\u0027unique_articles\u0027)\nprint(clickstream_df\n .agg(F.approxCountDistinct(clickstream_df.curr_id)\n .alias(\u0027unique_articles\u0027))\n .first()[\u0027unique_articles\u0027])", 1456 | "dateUpdated": "Nov 11, 2016 12:55:52 PM", 1457 | "config": { 1458 | "colWidth": 4.0, 1459 | "graph": { 1460 | "mode": "table", 1461 | "height": 233.0, 1462 | "optionOpen": false, 1463 | "keys": [ 1464 | { 1465 | "name": "unique_articles", 1466 | "index": 0.0, 1467 | "aggr": "sum" 1468 | } 1469 | ], 1470 | "values": [], 1471 | "groups": [], 1472 | "scatter": { 1473 | "xAxis": { 1474 | "name": "unique_articles", 1475 | "index": 0.0, 1476 | "aggr": "sum" 1477 | } 1478 | } 1479 | }, 1480 | "enabled": true, 1481 | "editorMode": "ace/mode/sql", 1482 | "title": true 1483 | }, 1484 | "settings": { 1485 | "params": {}, 1486 | "forms": {} 1487 | }, 1488 | "jobName": "paragraph_1478884164948_1059017528", 1489 | "id": "20161111-120924_1352555215", 1490 | "dateCreated": "Nov 11, 2016 12:09:24 PM", 1491 | "dateStarted": "Nov 11, 2016 12:16:21 PM", 1492 | "dateFinished": "Nov 11, 2016 12:18:02 PM", 1493 | "status": "FINISHED", 1494 | "errorMessage": "", 1495 | "progressUpdateIntervalMs": 500 1496 | }, 1497 | { 1498 | "title": "What are the top 10 articles?", 1499 | "text": "%sql\nSELECT curr_title, sum(views) AS views\nFROM clickstream\nWHERE curr_title \u003c\u003e \u0027Main_Page\u0027\nGROUP BY curr_title\nORDER BY 2 DESC\nLIMIT 10", 1500 | "dateUpdated": "Nov 12, 2016 1:16:19 PM", 1501 | "config": { 1502 | "colWidth": 5.0, 1503 | "graph": { 1504 | "mode": "multiBarChart", 1505 | "height": 283.0, 1506 | "optionOpen": false, 1507 | "keys": [ 1508 | { 1509 | "name": "curr_title", 1510 | "index": 0.0, 1511 | "aggr": "sum" 1512 | } 1513 | ], 1514 | "values": [ 1515 | { 1516 | "name": "views", 1517 | "index": 1.0, 1518 | "aggr": "sum" 1519 | } 1520 | ], 1521 | "groups": [], 1522 | "scatter": { 1523 | "xAxis": { 1524 | "name": "curr_title", 1525 | "index": 0.0, 1526 | "aggr": "sum" 1527 | }, 1528 | "yAxis": { 1529 | "name": "views", 1530 | "index": 1.0, 1531 | "aggr": "sum" 1532 | } 1533 | } 1534 | }, 1535 | "enabled": true, 1536 | "editorMode": "ace/mode/sql", 1537 | "title": true 1538 | }, 1539 | "settings": { 1540 | "params": {}, 1541 | "forms": {} 1542 | }, 1543 | "jobName": "paragraph_1478884744644_1346056055", 1544 | "id": "20161111-121904_1358541289", 1545 | "dateCreated": "Nov 11, 2016 12:19:04 PM", 1546 | "dateStarted": "Nov 11, 2016 12:58:01 PM", 1547 | "dateFinished": "Nov 11, 2016 12:59:57 PM", 1548 | "status": "FINISHED", 1549 | "errorMessage": "", 1550 | "progressUpdateIntervalMs": 500 1551 | }, 1552 | { 1553 | "title": "What are the top 10 internal sources?", 1554 | "text": "%sql\nSELECT prev_title, SUM(views)\nFROM clickstream\nWHERE prev_title NOT LIKE \u0027other%\u0027\nGROUP BY prev_title\nORDER BY 2 DESC\nLIMIT 10", 1555 | "dateUpdated": "Nov 12, 2016 1:16:19 PM", 1556 | "config": { 1557 | "colWidth": 6.0, 1558 | "graph": { 1559 | "mode": "table", 1560 | "height": 294.0, 1561 | "optionOpen": false, 1562 | "keys": [ 1563 | { 1564 | "name": "prev_title", 1565 | "index": 0.0, 1566 | "aggr": "sum" 1567 | } 1568 | ], 1569 | "values": [ 1570 | { 1571 | "name": "sum(views)", 1572 | "index": 1.0, 1573 | "aggr": "sum" 1574 | } 1575 | ], 1576 | "groups": [], 1577 | "scatter": { 1578 | "xAxis": { 1579 | "name": "prev_title", 1580 | "index": 0.0, 1581 | 
"aggr": "sum" 1582 | }, 1583 | "yAxis": { 1584 | "name": "sum(views)", 1585 | "index": 1.0, 1586 | "aggr": "sum" 1587 | } 1588 | } 1589 | }, 1590 | "enabled": true, 1591 | "editorMode": "ace/mode/sql", 1592 | "title": true 1593 | }, 1594 | "settings": { 1595 | "params": {}, 1596 | "forms": {} 1597 | }, 1598 | "jobName": "paragraph_1478884959208_1074144208", 1599 | "id": "20161111-122239_1445627085", 1600 | "dateCreated": "Nov 11, 2016 12:22:39 PM", 1601 | "dateStarted": "Nov 11, 2016 12:26:26 PM", 1602 | "dateFinished": "Nov 11, 2016 12:28:06 PM", 1603 | "status": "FINISHED", 1604 | "errorMessage": "", 1605 | "progressUpdateIntervalMs": 500 1606 | }, 1607 | { 1608 | "title": "What\u0027re the top 10 external sources?", 1609 | "text": "%sql\nSELECT prev_title, SUM(views)\nFROM clickstream\nWHERE prev_title LIKE \u0027other%\u0027\nGROUP BY prev_title\nORDER BY 2 DESC\nLIMIT 10", 1610 | "dateUpdated": "Nov 12, 2016 1:16:19 PM", 1611 | "config": { 1612 | "colWidth": 6.0, 1613 | "graph": { 1614 | "mode": "multiBarChart", 1615 | "height": 299.0, 1616 | "optionOpen": false, 1617 | "keys": [ 1618 | { 1619 | "name": "prev_title", 1620 | "index": 0.0, 1621 | "aggr": "sum" 1622 | } 1623 | ], 1624 | "values": [ 1625 | { 1626 | "name": "sum(views)", 1627 | "index": 1.0, 1628 | "aggr": "sum" 1629 | } 1630 | ], 1631 | "groups": [], 1632 | "scatter": { 1633 | "xAxis": { 1634 | "name": "prev_title", 1635 | "index": 0.0, 1636 | "aggr": "sum" 1637 | }, 1638 | "yAxis": { 1639 | "name": "sum(views)", 1640 | "index": 1.0, 1641 | "aggr": "sum" 1642 | } 1643 | } 1644 | }, 1645 | "enabled": true, 1646 | "editorMode": "ace/mode/sql", 1647 | "title": true 1648 | }, 1649 | "settings": { 1650 | "params": {}, 1651 | "forms": {} 1652 | }, 1653 | "jobName": "paragraph_1478885294133_-331837187", 1654 | "id": "20161111-122814_1976094813", 1655 | "dateCreated": "Nov 11, 2016 12:28:14 PM", 1656 | "dateStarted": "Nov 11, 2016 12:28:50 PM", 1657 | "dateFinished": "Nov 11, 2016 12:30:19 PM", 1658 | "status": "FINISHED", 1659 | "errorMessage": "", 1660 | "progressUpdateIntervalMs": 500 1661 | }, 1662 | { 1663 | "text": "%pyspark\n\n# What does a \"normal\" article look like on English wikipedia in terms of total traffic?\nquantiles \u003d [0.0, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 1.0]\nrelative_error \u003d 0.05\n\nres \u003d (clickstream_df\n .groupby(\u0027curr_id\u0027)\n .sum(\u0027views\u0027)\n .approxQuantile(\u0027sum(views)\u0027, quantiles, relative_error))\n\nres \u003d dict(zip([str(q) for q in quantiles], res))\n\nprint(\u0027%table\u0027)\nprint(\u0027pct\\tcount\u0027)\nfor k, v in sorted(res.items(), key\u003dlambda (k, v): float(k)):\n print(\u0027\\t\u0027.join([k, str(v)]))", 1664 | "dateUpdated": "Nov 11, 2016 1:04:13 PM", 1665 | "config": { 1666 | "colWidth": 12.0, 1667 | "graph": { 1668 | "mode": "table", 1669 | "height": 300.0, 1670 | "optionOpen": false, 1671 | "keys": [ 1672 | { 1673 | "name": "pct", 1674 | "index": 0.0, 1675 | "aggr": "sum" 1676 | } 1677 | ], 1678 | "values": [ 1679 | { 1680 | "name": "count", 1681 | "index": 1.0, 1682 | "aggr": "sum" 1683 | } 1684 | ], 1685 | "groups": [], 1686 | "scatter": { 1687 | "xAxis": { 1688 | "name": "pct", 1689 | "index": 0.0, 1690 | "aggr": "sum" 1691 | }, 1692 | "yAxis": { 1693 | "name": "count", 1694 | "index": 1.0, 1695 | "aggr": "sum" 1696 | } 1697 | } 1698 | }, 1699 | "enabled": true, 1700 | "editorMode": "ace/mode/scala" 1701 | }, 1702 | "settings": { 1703 | "params": {}, 1704 | "forms": {} 1705 | }, 1706 | "jobName": 
"paragraph_1478401946302_1316317807", 1707 | "id": "20161105-231226_1710122048", 1708 | "dateCreated": "Nov 5, 2016 11:12:26 AM", 1709 | "dateStarted": "Nov 11, 2016 11:20:35 AM", 1710 | "dateFinished": "Nov 11, 2016 11:20:35 AM", 1711 | "status": "FINISHED", 1712 | "errorMessage": "", 1713 | "progressUpdateIntervalMs": 500 1714 | }, 1715 | { 1716 | "title": "For the top 10 articles, what were the top 5 referrers?", 1717 | "text": "%sql\nSELECT curr_title, sum(views) AS views\nFROM clickstream\nWHERE curr_title \u003c\u003e \u0027Main_Page\u0027\nGROUP BY curr_title\nORDER BY 2 DESC\nLIMIT 10", 1718 | "dateUpdated": "Nov 12, 2016 5:21:53 PM", 1719 | "config": { 1720 | "colWidth": 12.0, 1721 | "graph": { 1722 | "mode": "table", 1723 | "height": 300.0, 1724 | "optionOpen": false, 1725 | "keys": [], 1726 | "values": [], 1727 | "groups": [], 1728 | "scatter": {} 1729 | }, 1730 | "enabled": true, 1731 | "editorMode": "ace/mode/sql", 1732 | "title": true 1733 | }, 1734 | "settings": { 1735 | "params": {}, 1736 | "forms": {} 1737 | }, 1738 | "jobName": "paragraph_1478887479246_1546212495", 1739 | "id": "20161111-130439_376310339", 1740 | "dateCreated": "Nov 11, 2016 1:04:39 PM", 1741 | "dateStarted": "Nov 12, 2016 5:21:53 PM", 1742 | "dateFinished": "Nov 12, 2016 5:21:56 PM", 1743 | "status": "ERROR", 1744 | "errorMessage": "", 1745 | "progressUpdateIntervalMs": 500 1746 | }, 1747 | { 1748 | "text": "%md\n\nUp until this point, we\u0027ve only covered loading something into Spark, transforming it a little and then analyzing it.\n\nBut we can also save Spark data to plenty of destinations (e.g. databases, file systems) and in virtually any format.\n\nIf you plan to work with data outside of Spark, saving in a format like JSON makes sense for interoperatbility. If you\u0027re looking to save a DataFrame for future use in Spark, you\u0027re far better to use the [Parquet](https://parquet.apache.org/) format. Parquet has benefits like: binary + column-stride storage (meaning super efficient to load just a few columns of data) and metadata stored along side (things like column data types and even summary stats like number of rows).\n\nLet\u0027s say we were only interested in entries in this dataset where `prev_title\u003d\u0027other-empty\u0027`.", 1749 | "dateUpdated": "Nov 12, 2016 9:33:53 PM", 1750 | "config": { 1751 | "colWidth": 12.0, 1752 | "graph": { 1753 | "mode": "table", 1754 | "height": 159.0, 1755 | "optionOpen": false, 1756 | "keys": [], 1757 | "values": [], 1758 | "groups": [], 1759 | "scatter": {} 1760 | }, 1761 | "enabled": true, 1762 | "editorMode": "ace/mode/markdown", 1763 | "editorHide": true 1764 | }, 1765 | "settings": { 1766 | "params": {}, 1767 | "forms": {} 1768 | }, 1769 | "jobName": "paragraph_1478400327508_439917821", 1770 | "id": "20161105-224527_1440402325", 1771 | "result": { 1772 | "code": "SUCCESS", 1773 | "type": "HTML", 1774 | "msg": "\u003cp\u003eUp until this point, we\u0027ve only covered loading something into Spark, transforming it a little and then analyzing it.\u003c/p\u003e\n\u003cp\u003eBut we can also save Spark data to plenty of destinations (e.g. databases, file systems) and in virtually any format.\u003c/p\u003e\n\u003cp\u003eIf you plan to work with data outside of Spark, saving in a format like JSON makes sense for interoperatbility. If you\u0027re looking to save a DataFrame for future use in Spark, you\u0027re far better to use the \u003ca href\u003d\"https://parquet.apache.org/\"\u003eParquet\u003c/a\u003e format. 
Parquet has benefits like: binary + column-stride storage (meaning super efficient to load just a few columns of data) and metadata stored along side (things like column data types and even summary stats like number of rows).\u003c/p\u003e\n\u003cp\u003eLet\u0027s say we were only interested in entries in this dataset where \u003ccode\u003eprev_title\u003d\u0027other-empty\u0027\u003c/code\u003e.\u003c/p\u003e\n" 1775 | }, 1776 | "dateCreated": "Nov 5, 2016 10:45:27 AM", 1777 | "dateStarted": "Nov 12, 2016 9:33:53 PM", 1778 | "dateFinished": "Nov 12, 2016 9:33:53 PM", 1779 | "status": "FINISHED", 1780 | "progressUpdateIntervalMs": 500 1781 | }, 1782 | { 1783 | "text": "%pyspark\nimport os\nimport tempfile\n\ntemp_file \u003d os.path.join(tempfile.mkdtemp(), \u0027wikipedia_other_empty.json\u0027)\nclickstream_df[clickstream_df[\u0027prev_title\u0027] \u003d\u003d \u0027other-empty\u0027].write.save(temp_file, format\u003d\u0027json\u0027)\nprint(temp_file)", 1784 | "dateUpdated": "Nov 12, 2016 2:42:39 PM", 1785 | "config": { 1786 | "colWidth": 12.0, 1787 | "graph": { 1788 | "mode": "table", 1789 | "height": 300.0, 1790 | "optionOpen": false, 1791 | "keys": [], 1792 | "values": [], 1793 | "groups": [], 1794 | "scatter": {} 1795 | }, 1796 | "enabled": true, 1797 | "editorMode": "ace/mode/python" 1798 | }, 1799 | "settings": { 1800 | "params": {}, 1801 | "forms": {} 1802 | }, 1803 | "jobName": "paragraph_1478880626899_651147588", 1804 | "id": "20161111-111026_819843125", 1805 | "dateCreated": "Nov 11, 2016 11:10:26 AM", 1806 | "dateStarted": "Nov 12, 2016 2:42:39 PM", 1807 | "dateFinished": "Nov 12, 2016 2:44:02 PM", 1808 | "status": "FINISHED", 1809 | "errorMessage": "", 1810 | "progressUpdateIntervalMs": 500 1811 | }, 1812 | { 1813 | "text": "%sh\nls -lah /path/to/tmpfile/wikipedia_other_empty.json", 1814 | "dateUpdated": "Nov 12, 2016 9:34:20 PM", 1815 | "config": { 1816 | "colWidth": 12.0, 1817 | "graph": { 1818 | "mode": "table", 1819 | "height": 300.0, 1820 | "optionOpen": false, 1821 | "keys": [], 1822 | "values": [], 1823 | "groups": [], 1824 | "scatter": {} 1825 | }, 1826 | "enabled": true, 1827 | "editorMode": "ace/mode/sh" 1828 | }, 1829 | "settings": { 1830 | "params": {}, 1831 | "forms": {} 1832 | }, 1833 | "jobName": "paragraph_1478979759836_1423195420", 1834 | "id": "20161112-144239_1936657561", 1835 | "dateCreated": "Nov 12, 2016 2:42:39 PM", 1836 | "dateStarted": "Nov 12, 2016 2:44:57 PM", 1837 | "dateFinished": "Nov 12, 2016 2:44:57 PM", 1838 | "status": "FINISHED", 1839 | "errorMessage": "", 1840 | "progressUpdateIntervalMs": 500 1841 | }, 1842 | { 1843 | "text": "%md\nWait, why is our output file a folder and not a file?\n\nIf you remember, the DataFrame had 8 partitions. Spark wrote these partitions out in parallel which, in our case, is kind of annoying because it\u0027ll be harder for a downstream program that isn\u0027t Spark to read that data. 
So, after we filter, let\u0027s reduce the number of partitions to 1 and then write it out so we get a single file instead of multiple.", 1844 | "dateUpdated": "Nov 12, 2016 9:34:14 PM", 1845 | "config": { 1846 | "colWidth": 12.0, 1847 | "graph": { 1848 | "mode": "table", 1849 | "height": 300.0, 1850 | "optionOpen": false, 1851 | "keys": [], 1852 | "values": [], 1853 | "groups": [], 1854 | "scatter": {} 1855 | }, 1856 | "enabled": true, 1857 | "editorMode": "ace/mode/markdown", 1858 | "editorHide": true 1859 | }, 1860 | "settings": { 1861 | "params": {}, 1862 | "forms": {} 1863 | }, 1864 | "jobName": "paragraph_1478979878199_974932303", 1865 | "id": "20161112-144438_1880153658", 1866 | "result": { 1867 | "code": "SUCCESS", 1868 | "type": "HTML", 1869 | "msg": "\u003cp\u003eWait, why is our output file a folder and not a file?\u003c/p\u003e\n\u003cp\u003eIf you remember, the DataFrame had 8 partitions. Spark wrote these partitions out in parallel which, in our case, is kind of annoying because it\u0027ll be harder for a downstream program that isn\u0027t Spark to read that data. So, after we filter, let\u0027s reduce the number of partitions to 1 and then write it out so we get a single file instead of multiple.\u003c/p\u003e\n" 1870 | }, 1871 | "dateCreated": "Nov 12, 2016 2:44:38 PM", 1872 | "dateStarted": "Nov 12, 2016 9:34:14 PM", 1873 | "dateFinished": "Nov 12, 2016 9:34:14 PM", 1874 | "status": "FINISHED", 1875 | "progressUpdateIntervalMs": 500 1876 | }, 1877 | { 1878 | "text": "%pyspark\ntemp_file \u003d os.path.join(tempfile.mkdtemp(), \u0027wikipedia_other_empty.json\u0027)\n(clickstream_df[clickstream_df[\u0027prev_title\u0027] \u003d\u003d \u0027other-empty\u0027]\n .coalesce(1)\n .write.save(temp_file, format\u003d\u0027json\u0027))\nprint(temp_file)", 1879 | "dateUpdated": "Nov 12, 2016 2:51:50 PM", 1880 | "config": { 1881 | "colWidth": 12.0, 1882 | "graph": { 1883 | "mode": "table", 1884 | "height": 300.0, 1885 | "optionOpen": false, 1886 | "keys": [], 1887 | "values": [], 1888 | "groups": [], 1889 | "scatter": {} 1890 | }, 1891 | "enabled": true, 1892 | "editorMode": "ace/mode/python" 1893 | }, 1894 | "settings": { 1895 | "params": {}, 1896 | "forms": {} 1897 | }, 1898 | "jobName": "paragraph_1478980201177_1196823505", 1899 | "id": "20161112-145001_42119916", 1900 | "dateCreated": "Nov 12, 2016 2:50:01 PM", 1901 | "dateStarted": "Nov 12, 2016 2:51:50 PM", 1902 | "dateFinished": "Nov 12, 2016 2:55:45 PM", 1903 | "status": "FINISHED", 1904 | "errorMessage": "", 1905 | "progressUpdateIntervalMs": 500 1906 | }, 1907 | { 1908 | "text": "%sh\nls -lah /path/to/tmpfile/wikipedia_other_empty.json", 1909 | "dateUpdated": "Nov 12, 2016 9:35:56 PM", 1910 | "config": { 1911 | "colWidth": 12.0, 1912 | "graph": { 1913 | "mode": "table", 1914 | "height": 300.0, 1915 | "optionOpen": false, 1916 | "keys": [], 1917 | "values": [], 1918 | "groups": [], 1919 | "scatter": {} 1920 | }, 1921 | "enabled": true, 1922 | "editorMode": "ace/mode/sh" 1923 | }, 1924 | "settings": { 1925 | "params": {}, 1926 | "forms": {} 1927 | }, 1928 | "jobName": "paragraph_1478980310487_1788688262", 1929 | "id": "20161112-145150_1126443405", 1930 | "dateCreated": "Nov 12, 2016 2:51:50 PM", 1931 | "dateStarted": "Nov 12, 2016 2:55:55 PM", 1932 | "dateFinished": "Nov 12, 2016 2:55:55 PM", 1933 | "status": "FINISHED", 1934 | "errorMessage": "", 1935 | "progressUpdateIntervalMs": 500 1936 | }, 1937 | { 1938 | "text": "%sh\nhead 
/path/to/tmpfile/wikipedia_other_empty.json/part-r-00000-e2e3d0b8-5b0a-46cf-899e-4cbd4bd2e563.json", 1939 | "dateUpdated": "Nov 12, 2016 9:35:56 PM", 1940 | "config": { 1941 | "colWidth": 12.0, 1942 | "graph": { 1943 | "mode": "table", 1944 | "height": 300.0, 1945 | "optionOpen": false, 1946 | "keys": [], 1947 | "values": [], 1948 | "groups": [], 1949 | "scatter": {} 1950 | }, 1951 | "enabled": true, 1952 | "editorMode": "ace/mode/sh" 1953 | }, 1954 | "settings": { 1955 | "params": {}, 1956 | "forms": {} 1957 | }, 1958 | "jobName": "paragraph_1478980555056_2080280311", 1959 | "id": "20161112-145555_105429055", 1960 | "dateCreated": "Nov 12, 2016 2:55:55 PM", 1961 | "dateStarted": "Nov 12, 2016 2:56:19 PM", 1962 | "dateFinished": "Nov 12, 2016 2:56:19 PM", 1963 | "status": "FINISHED", 1964 | "errorMessage": "", 1965 | "progressUpdateIntervalMs": 500 1966 | }, 1967 | { 1968 | "text": "%md\nCongrats! You\u0027ve answered all the boss\u0027 questions.\n\nNow let\u0027s understand a bit more about Spark internals and how we\u0027d turn something like this into a production-ready program.", 1969 | "dateUpdated": "Nov 12, 2016 9:35:49 PM", 1970 | "config": { 1971 | "colWidth": 12.0, 1972 | "graph": { 1973 | "mode": "table", 1974 | "height": 300.0, 1975 | "optionOpen": false, 1976 | "keys": [], 1977 | "values": [], 1978 | "groups": [], 1979 | "scatter": {} 1980 | }, 1981 | "enabled": true, 1982 | "editorMode": "ace/mode/markdown", 1983 | "editorHide": true 1984 | }, 1985 | "settings": { 1986 | "params": {}, 1987 | "forms": {} 1988 | }, 1989 | "jobName": "paragraph_1478980649594_-95520691", 1990 | "id": "20161112-145729_363701543", 1991 | "result": { 1992 | "code": "SUCCESS", 1993 | "type": "HTML", 1994 | "msg": "\u003cp\u003eCongrats! You\u0027ve answered all the boss\u0027 questions.\u003c/p\u003e\n\u003cp\u003eNow let\u0027s understand a bit more about Spark internals and how we\u0027d turn something like this into a production-ready program.\u003c/p\u003e\n" 1995 | }, 1996 | "dateCreated": "Nov 12, 2016 2:57:29 PM", 1997 | "dateStarted": "Nov 12, 2016 9:35:46 PM", 1998 | "dateFinished": "Nov 12, 2016 9:35:46 PM", 1999 | "status": "FINISHED", 2000 | "progressUpdateIntervalMs": 500 2001 | }, 2002 | { 2003 | "text": "", 2004 | "dateUpdated": "Nov 12, 2016 9:35:56 PM", 2005 | "config": { 2006 | "colWidth": 12.0, 2007 | "graph": { 2008 | "mode": "table", 2009 | "height": 300.0, 2010 | "optionOpen": false, 2011 | "keys": [], 2012 | "values": [], 2013 | "groups": [], 2014 | "scatter": {} 2015 | }, 2016 | "enabled": true, 2017 | "editorMode": "ace/mode/scala" 2018 | }, 2019 | "settings": { 2020 | "params": {}, 2021 | "forms": {} 2022 | }, 2023 | "jobName": "paragraph_1479004546181_154780706", 2024 | "id": "20161112-213546_947790463", 2025 | "dateCreated": "Nov 12, 2016 9:35:46 PM", 2026 | "status": "READY", 2027 | "progressUpdateIntervalMs": 500 2028 | } 2029 | ], 2030 | "name": "PyCon Canada 2016 - PySpark Tutorial", 2031 | "id": "2C1MWGRFG", 2032 | "angularObjects": { 2033 | "2C3CRPXD1:shared_process": [], 2034 | "2C2CAWWQ4:shared_process": [], 2035 | "2BZ4H6AVG:shared_process": [], 2036 | "2BZ7HS5Q6:shared_process": [], 2037 | "2C29XPEMP:shared_process": [], 2038 | "2BZ15F581:shared_process": [], 2039 | "2C178ED5G:shared_process": [], 2040 | "2C3JZBD9G:shared_process": [], 2041 | "2C2TBUT46:shared_process": [], 2042 | "2C2WYJQM9:shared_process": [], 2043 | "2C23HGFNG:shared_process": [], 2044 | "2BZMDMPSG:shared_process": [], 2045 | "2BZ4DB16C:shared_process": [], 2046 | "2BYXCMTRK:shared_process": 
[], 2047 | "2C1424BYW:shared_process": [], 2048 | "2BZJGUYJ2:shared_process": [], 2049 | "2C1T3ZP52:shared_process": [], 2050 | "2BZ6QG3XB:shared_process": [] 2051 | }, 2052 | "config": { 2053 | "looknfeel": "default" 2054 | }, 2055 | "info": {} 2056 | } --------------------------------------------------------------------------------