├── .gitignore ├── LICENSE ├── README.md ├── doc ├── dataframe-introduction.md ├── setup-cluster.md └── setup-local.md └── src └── bash ├── download-for-cluster.md └── download-for-local.md /.gitignore: -------------------------------------------------------------------------------- 1 | *.class 2 | *.log 3 | 4 | # sbt specific 5 | .cache/ 6 | .history/ 7 | .lib/ 8 | dist/* 9 | target/ 10 | lib_managed/ 11 | src_managed/ 12 | project/boot/ 13 | project/plugins/project/ 14 | 15 | # Scala-IDE specific 16 | .scala_dependencies 17 | .worksheet 18 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 
29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 
61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | 203 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Spark DataFrames Introduction 2 | 3 | This is an introduction to Apache Spark DataFrames. 4 | You can learn the Apache Spark DataFrame APIs hands-on with real data sets from Github Archive. 5 | 6 | Apache Spark DataFrames are a very useful data analytics tool for data scientists. 7 | They allow us to analyze data more easily and more quickly. 8 | 9 | Reynold Xin, Michael Armbrust and Davies Liu wrote 10 | [a great blog article](https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html) 11 | introducing them. 12 | And Reynold Xin gave a nice presentation, 13 | [Spark DataFrames for Large-Scale Data Science](https://www.youtube.com/watch?v=Hvke1f10dL0), 14 | at the Bay Area Spark Meetup on February 17, 2015. 15 | 16 | ## Set up Environments for This Introduction 17 | 18 | You can select an environment to run Spark on your local machine or on Amazon EC2.
19 | 20 | - [Set up Local Machine](./doc/setup-local.md) 21 | - [Set up Spark Cluster on EC2](./doc/setup-cluster.md) 22 | 23 | ## Spark DataFrames Introduction 24 | 25 | [Spark DataFrame Introduction](./doc/dataframe-introduction.md) 26 | 27 | ## TODO 28 | 29 | I will try to add an introduction for Python and R. 30 | Because Spark provides APIs for Python and R (via SparkR), the explanations should cover not only Scala but also Python and R. 31 | -------------------------------------------------------------------------------- /doc/dataframe-introduction.md: -------------------------------------------------------------------------------- 1 | # DataFrames Introduction 2 | 3 | ## Create DataFrames 4 | 5 | Before using Spark DataFrames, you should arrange the downloaded JSON data. 6 | You can also create DataFrames from Hive tables, Parquet files and RDDs. 7 | 8 | You can run a Spark shell by executing `$SPARK_HOME/bin/spark-shell`. 9 | 10 | To load the JSON files, use `SQLContext.load`. 11 | I also recommend setting an alias for each DataFrame at the same time. 12 | 13 | ### On Local Machine 14 | 15 | ``` 16 | // Create a SQLContext (sc is an existing SparkContext) 17 | val context = new org.apache.spark.sql.SQLContext(sc) 18 | 19 | // Create a DataFrame for Github events 20 | var path = "file:///tmp/github-archive-data/*.json.gz" 21 | val event = context.load(path, "json").as('event) 22 | 23 | // Create a DataFrame for Github users 24 | path = "file:///tmp/github-archive-data/github-users.json" 25 | val user = context.load(path, "json").as('user) 26 | ``` 27 | 28 | You can show a schema with `printSchema`.
29 | 30 | ``` 31 | event.printSchema 32 | user.printSchema 33 | ``` 34 | 35 | ### On Spark Cluster 36 | 37 | ``` 38 | // Create a SQLContext (sc is an existing SparkContext) 39 | val context = new org.apache.spark.sql.SQLContext(sc) 40 | 41 | // Create a DataFrame for Github events 42 | var path = "file:///mnt/github-archive-data/*.json.gz" 43 | val event = context.load(path, "json").as('event) 44 | 45 | // Create a DataFrame for Github users 46 | path = "file:///mnt/github-archive-data/github-users.json" 47 | val user = context.load(path, "json").as('user) 48 | ``` 49 | 50 | ## Project (i.e. selecting some fields of a DataFrame) 51 | 52 | You can refer to a column with `dataframe("key")` and select columns with `dataframe.select("key")`. 53 | If you want to select multiple columns, use `dataframe.select("key1", "key2")`. 54 | 55 | ``` 56 | // Select a column 57 | event.select("public").limit(2).show() 58 | 59 | // Use an alias for a column 60 | event.select('public as 'PUBLIC).limit(2).show() 61 | 62 | // Select multiple columns with aliases 63 | event.select('public as 'PUBLIC, 'id as 'ID).limit(2).show() 64 | 65 | // Select nested columns with aliases 66 | event.select($"payload.size" as 'size, $"actor.id" as 'actor_id).limit(10).show() 67 | ``` 68 | 69 | ## Filter 70 | 71 | You can filter the data with `filter()`.
72 | 73 | ``` 74 | // Filter by a condition 75 | user.filter("name is null").select('id, 'name).limit(5).show() 76 | user.filter("name is not null").select('id, 'name).limit(5).show() 77 | 78 | // Filter by a combination of two conditions 79 | // These two expressions are equivalent 80 | event.filter("public = true and type = 'ForkEvent'").count 81 | event.filter("public = true").filter("type = 'ForkEvent'").count 82 | ``` 83 | 84 | ## Aggregation 85 | 86 | ``` 87 | val countByType = event.groupBy("type").count() 88 | countByType.limit(5).show() 89 | 90 | val countByIdAndType = event.groupBy("id", "type").count() 91 | countByIdAndType.limit(10).foreach(println) 92 | ``` 93 | 94 | ## Join 95 | 96 | You can join two data sets with `join`. 97 | `where` allows you to specify the join condition. 98 | 99 | ``` 100 | // Self join 101 | val x = user.as('x) 102 | val y = user.as('y) 103 | val selfJoin = x.join(y).where($"x.id" === $"y.id") 104 | selfJoin.select($"x.id", $"y.id").limit(10).show 105 | 106 | // Join Pull Request event data with user data 107 | val pr = event.filter('type === "PullRequestEvent").as('pr) 108 | val prJoin = pr.join(user).where($"pr.payload.pull_request.user.id" === $"user.id") 109 | prJoin.select($"pr.type", $"user.name", $"pr.created_at").limit(5).show 110 | ``` 111 | 112 | ## UDFs 113 | 114 | You can define UDFs (User Defined Functions) with `udf()`. 115 | 116 | ``` 117 | // Define User Defined Functions 118 | val toDate = udf((createdAt: String) => createdAt.substring(0, 10)) 119 | val toTime = udf((createdAt: String) => createdAt.substring(11, 19)) 120 | 121 | // Use the UDFs in select() 122 | event.select(toDate('created_at) as 'date, toTime('created_at) as 'time).limit(5).show 123 | ``` 124 | 125 | ## Execute Spark SQL 126 | 127 | You can manipulate data not only with the DataFrame API but also with Spark SQL.
128 | 129 | ``` 130 | // Register a temporary table for the schema 131 | event.registerTempTable("event") 132 | 133 | // Execute a Spark SQL query 134 | context.sql("SELECT created_at, repo.name AS `repo.name`, actor.id, type FROM event WHERE type = 'PullRequestEvent'").limit(5).show() 135 | ``` 136 | -------------------------------------------------------------------------------- /doc/setup-cluster.md: -------------------------------------------------------------------------------- 1 | # Set Up Spark Cluster on EC2 and Data Sets 2 | 3 | ## Check Out Apache Spark from Github 4 | 5 | First, check out Spark from Github on your local machine. 6 | 7 | ``` 8 | git clone https://github.com/apache/spark.git && cd spark 9 | git checkout -b v1.3.0-rc1 origin/v1.3.0-rc1 10 | ``` 11 | 12 | ## Launch a Spark 1.3 Cluster on EC2 13 | 14 | The `$SPARK_HOME/ec2/spark-ec2` command allows you to launch Spark clusters on Amazon EC2. 15 | Here, the checked-out directory is referred to as `$SPARK_HOME`. 16 | 17 | The commands below launch a Spark cluster in the Tokyo region on Amazon EC2, 18 | so you should change the region and zone to match your location. 19 | 20 | ``` 21 | cd $SPARK_HOME 22 | 23 | REGION='ap-northeast-1' 24 | ZONE='ap-northeast-1b' 25 | VERSION='1.3.0' 26 | MASTER_INSTANCE_TYPE='r3.large' 27 | SLAVE_INSTANCE_TYPE='r3.8xlarge' 28 | NUM_SLAVES=5 29 | SPOT_PRICE=1.0 30 | SPARK_CLUSTER_NAME="spark-cluster-v${VERSION}-${SLAVE_INSTANCE_TYPE}x${NUM_SLAVES}" 31 | 32 | ./ec2/spark-ec2 -k ${YOUR_KEY_NAME} -i ${YOUR_KEY} -s $NUM_SLAVES --master-instance-type="$MASTER_INSTANCE_TYPE" --instance-type="$SLAVE_INSTANCE_TYPE" --region="$REGION" --zone="$ZONE" --spot-price=$SPOT_PRICE --spark-version="${VERSION}" --hadoop-major-version=2 launch "$SPARK_CLUSTER_NAME" 33 | ``` 34 | 35 | ## Download Data Sets on EC2 36 | 37 | Log in via ssh to the master instance created by `$SPARK_HOME/ec2/spark-ec2`.
38 | Then execute the shell script to download the data sets for this introduction. 39 | Of course, you should check out this introduction from Github on the instance first. 40 | 41 | ``` 42 | bash ./src/bash/download-for-cluster.md 43 | ``` 44 | -------------------------------------------------------------------------------- /doc/setup-local.md: -------------------------------------------------------------------------------- 1 | # Set up Spark and Data Sets on Your Local Machine 2 | 3 | ## Download the Data Sets 4 | 5 | You can prepare the data sets with the bash script below. 6 | In order to execute the script, you should install mongoDB, because the script uses `bsondump` to convert BSON format to JSON format. 7 | After executing the script, the data sets have been downloaded to `/tmp/github-archive-data`. 8 | 9 | ``` 10 | bash src/bash/download-for-local.md 11 | 12 | cd /tmp/github-archive-data 13 | ls -l 14 | 2015-01-01-0.json.gz 2015-01-01-17.json.gz 2015-01-01-4.json.gz 15 | 2015-01-01-1.json.gz 2015-01-01-18.json.gz 2015-01-01-5.json.gz 16 | 2015-01-01-10.json.gz 2015-01-01-19.json.gz 2015-01-01-6.json.gz 17 | 2015-01-01-11.json.gz 2015-01-01-2.json.gz 2015-01-01-7.json.gz 18 | 2015-01-01-12.json.gz 2015-01-01-20.json.gz 2015-01-01-8.json.gz 19 | 2015-01-01-13.json.gz 2015-01-01-21.json.gz 2015-01-01-9.json.gz 20 | 2015-01-01-14.json.gz 2015-01-01-22.json.gz dump 21 | 2015-01-01-15.json.gz 2015-01-01-23.json.gz github-users.json 22 | 2015-01-01-16.json.gz 2015-01-01-3.json.gz 23 | ``` 24 | 25 | ## Build Spark on Your Local Machine 26 | 27 | You should build Apache Spark on your local machine to use it. 28 | It takes a long time to compile Spark with `sbt` for the first time, 29 | so I recommend taking a break while it compiles. 30 | 31 | This documentation is based on Spark 1.3 RC1, because Spark 1.3 had not been released yet as of February 25, 2015.
32 | 33 | ``` 34 | git clone https://github.com/apache/spark.git && cd spark 35 | git checkout -b v1.3.0-rc1 origin/v1.3.0-rc1 36 | ./sbt/sbt clean assembly 37 | ``` 38 | -------------------------------------------------------------------------------- /src/bash/download-for-cluster.md: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Check whether the bsondump command is available 4 | if ! command -v bsondump > /dev/null; then 5 | echo "WARN: You don't have the bsondump command. You should install mongodb." 6 | exit 1 7 | fi 8 | 9 | # Make a directory to store downloaded data 10 | mkdir -p /mnt/github-archive-data/ 11 | cd /mnt/github-archive-data/ 12 | 13 | # Download Github Archive data for January 2015 14 | # https://www.githubarchive.org/ 15 | wget http://data.githubarchive.org/2015-01-{01..30}-{0..23}.json.gz 16 | 17 | # Download Github user data as of 2015-01-29 18 | # and arrange the data as 'github-users.json' 19 | wget http://ghtorrent.org/downloads/users-dump.2015-01-29.tar.gz 20 | tar zxvf users-dump.2015-01-29.tar.gz 21 | # Replace ObjectId with null. ObjectId is used by mongoDB and is not valid JSON. 22 | bsondump dump/github/users.bson | sed -e "s/ObjectId([^)]*)/null/" > github-users.json 23 | -------------------------------------------------------------------------------- /src/bash/download-for-local.md: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Check whether the bsondump command is available 4 | if ! command -v bsondump > /dev/null; then 5 | echo "WARN: You don't have the bsondump command. You should install mongodb."
6 | exit 1 7 | fi 8 | 9 | # Make a directory to store downloaded data 10 | mkdir -p /tmp/github-archive-data/ 11 | cd /tmp/github-archive-data/ 12 | 13 | # Download Github Archive data for 2015-01-01 14 | # https://www.githubarchive.org/ 15 | wget http://data.githubarchive.org/2015-01-01-{0..23}.json.gz 16 | 17 | # Download Github user data as of 2015-01-29 18 | # and arrange the data as 'github-users.json' 19 | wget http://ghtorrent.org/downloads/users-dump.2015-01-29.tar.gz 20 | tar zxvf users-dump.2015-01-29.tar.gz 21 | # Replace ObjectId with null. ObjectId is used by mongoDB and is not valid JSON. 22 | bsondump dump/github/users.bson | sed -e "s/ObjectId([^)]*)/null/" > github-users.json 23 | --------------------------------------------------------------------------------
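As a quick sanity check, the `sed` expression used in both download scripts can be tried on a single hypothetical line of `bsondump` output (the `_id` value below is invented for illustration):

```shell
# A hypothetical line of bsondump output; the ObjectId value is made up
line='{ "_id" : ObjectId("54c9"), "login" : "octocat" }'

# The same sed expression as in the download scripts: replace the
# mongoDB-specific ObjectId(...) wrapper, which is not valid JSON, with null
echo "$line" | sed -e "s/ObjectId([^)]*)/null/"
# { "_id" : null, "login" : "octocat" }
```

The resulting line is plain JSON, so it can be read by Spark's JSON data source.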