├── .gitignore ├── 01_pyspark-basics-1.ipynb ├── 02_pyspark-basics-2.ipynb ├── 03_merging.ipynb ├── 04_missing-data.ipynb ├── 05_moving-average-imputation.ipynb ├── 06_pivoting.ipynb ├── 07_resampling.ipynb ├── 08_subsetting.ipynb ├── 09_summary-statistics.ipynb ├── 10_graphing.ipynb ├── 11_installing-python-modules.ipynb ├── 12_GLM.ipynb ├── License.d ├── README.md ├── UrbanSparkUtilities └── utilities.py ├── indep_vars ├── README.md ├── build_indep_vars.py └── setup.py ├── stata_list.PNG └── stata_reg.PNG /.gitignore: -------------------------------------------------------------------------------- 1 | .ipynb_checkpoints/*.* -------------------------------------------------------------------------------- /01_pyspark-basics-1.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Loading, Exploring and Saving Data_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_ & Alex Engler (aengler@urban.org)\n", 15 | "\n", 16 | "_Last Updated: 31 Jul 2017, Spark v2.1_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: This guide will go over loading a CSV file into a dataframe, exploring it with basic commands, and finally writing it out to file in S3 and reading it back in._\n", 24 | "\n", 25 | "_Main operations used:_ `sc`, `spark`, `write.csv`, `read.csv`, `select`, `count`, `dtypes`, `schema`/`inferSchema`, `take`, `show`, `withColumnRenamed`, `columns`, `describe`, `coalesce`" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "# The Spark Context\n", 40 | "\n", 41 | "An initial note about how pySpark interacts with Jupyter notebooks: **two global variables are created in the startup process.** The first is the *spark context*:" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": 1, 47 | "metadata": {}, 48 | "outputs": [ 49 | { 50 | "data": { 51 | "text/plain": [ 52 | "" 53 | ] 54 | }, 55 | "execution_count": 1, 56 | "metadata": {}, 57 | "output_type": "execute_result" 58 | } 59 | ], 60 | "source": [ 61 | "sc" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "And the second is the *spark session*:" 69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "execution_count": 2, 74 | "metadata": {}, 75 | "outputs": [ 76 | { 77 | "data": { 78 | "text/plain": [ 79 | "" 80 | ] 81 | }, 82 | "execution_count": 2, 83 | "metadata": {}, 84 | "output_type": "execute_result" 85 | } 86 | ], 87 | "source": [ 88 | "spark" 89 | ] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "metadata": {}, 94 | "source": [ 95 | "They provide access to many of the underlying structures used by pySpark, and you may see them referred to in code throughout the tutorials alongside functions imported from pyspark.sql." 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "# Loading From CSV\n", 103 | "\n", 104 | "We can load our data from a CSV file in an S3 bucket. There are three ways to handle data types (dtypes) for each column: The easiest, but the most computationally-expensive, is to pass `inferSchema=True` to the load method. 
The second way entails specifiying the dtypes manually for every column by passing `schema=StructType(...)`, which is computationally-efficient but may be difficult and prone to coder error for especially wide datasets. The final option is to not specify a schema option at all, in which case Spark will assign all the columns string dtypes. Note that dtypes can be changed later, as we will demonstrate, though it is more costly than doing it correctly in the loading process.\n", 105 | "\n", 106 | "Loading the data with the schema inferred:" 107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": 3, 112 | "metadata": { 113 | "collapsed": true 114 | }, 115 | "outputs": [], 116 | "source": [ 117 | "df = spark.read.csv('s3://ui-spark-social-science-public/data/Performance_2015Q1.txt', header=False, inferSchema=True, sep='|')" 118 | ] 119 | }, 120 | { 121 | "cell_type": "markdown", 122 | "metadata": {}, 123 | "source": [ 124 | "Example loading of the same data by passing a custom schema:" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": 4, 130 | "metadata": {}, 131 | "outputs": [ 132 | { 133 | "data": { 134 | "text/plain": [ 135 | "\n", 136 | "from pyspark.sql.types import DateType, TimestampType, IntegerType, FloatType, LongType, DoubleType\n", 137 | "from pyspark.sql.types import StructType, StructField\n", 138 | "custom_schema = StructType([StructField('_c0', DateType(), True),\n", 139 | " StructField('_c1', StringType(), True),\n", 140 | " StructField('_c2', DoubleType(), True),\n", 141 | " StructField('_c3', DoubleType(), True),\n", 142 | " StructField('_c4', DoubleType(), True),\n", 143 | " StructField('_c5', IntegerType(), True),\n", 144 | " ...\n", 145 | " StructField('_c27', StringType(), True)])\n", 146 | "df = spark.read.csv('s3://ui-spark-social-science-public/data/Performance_2015Q1.txt', header=False, schema=custom_schema, sep='|')\n" 147 | ] 148 | }, 149 | "execution_count": 4, 150 | "metadata": {}, 151 | "output_type": "execute_result" 152 | } 153 | ], 154 | "source": [ 155 | "\"\"\"\n", 156 | "from pyspark.sql.types import DateType, TimestampType, IntegerType, FloatType, LongType, DoubleType\n", 157 | "from pyspark.sql.types import StructType, StructField\n", 158 | "\n", 159 | "custom_schema = StructType([StructField('_c0', DateType(), True),\n", 160 | " StructField('_c1', StringType(), True),\n", 161 | " StructField('_c2', DoubleType(), True),\n", 162 | " StructField('_c3', DoubleType(), True),\n", 163 | " StructField('_c4', DoubleType(), True),\n", 164 | " StructField('_c5', IntegerType(), True),\n", 165 | " ...\n", 166 | " StructField('_c27', StringType(), True)])\n", 167 | " \n", 168 | "df = spark.read.csv('s3://ui-spark-social-science-public/data/Performance_2015Q1.txt', header=False, schema=custom_schema, sep='|')\n", 169 | "\"\"\";" 170 | ] 171 | }, 172 | { 173 | "cell_type": "markdown", 174 | "metadata": {}, 175 | "source": [ 176 | "One example of using infering and specifying a schema together might be with a large, unfamiliar dataset that you know you will need to load up and work with repeatedly. The first time you load it use `inferSchema`, then make note of the dtypes it assigns. Use that information to build the custom schema, so that when you load the data in the future you avoid the extra processing time necessary for infering.\n", 177 | "\n", 178 | "# Exploring the Data\n", 179 | "\n", 180 | "Our data is now loaded into a dataframe that we named `df`, with all the dtypes inferred. 
First we'll count the number of rows it found:" 181 | ] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": 5, 186 | "metadata": {}, 187 | "outputs": [ 188 | { 189 | "data": { 190 | "text/plain": [ 191 | "3526154" 192 | ] 193 | }, 194 | "execution_count": 5, 195 | "metadata": {}, 196 | "output_type": "execute_result" 197 | } 198 | ], 199 | "source": [ 200 | "df.count()" 201 | ] 202 | }, 203 | { 204 | "cell_type": "markdown", 205 | "metadata": {}, 206 | "source": [ 207 | "Then we look at the column-by-column dtypes the system estimated:" 208 | ] 209 | }, 210 | { 211 | "cell_type": "code", 212 | "execution_count": 6, 213 | "metadata": {}, 214 | "outputs": [ 215 | { 216 | "data": { 217 | "text/plain": [ 218 | "[('_c0', 'bigint'), ('_c1', 'string'), ('_c2', 'string'), ('_c3', 'double'), ('_c4', 'double'), ('_c5', 'int'), ('_c6', 'int'), ('_c7', 'int'), ('_c8', 'string'), ('_c9', 'int'), ('_c10', 'string'), ('_c11', 'string'), ('_c12', 'int'), ('_c13', 'string'), ('_c14', 'string'), ('_c15', 'string'), ('_c16', 'string'), ('_c17', 'string'), ('_c18', 'string'), ('_c19', 'string'), ('_c20', 'string'), ('_c21', 'string'), ('_c22', 'string'), ('_c23', 'string'), ('_c24', 'string'), ('_c25', 'string'), ('_c26', 'int'), ('_c27', 'string')]" 219 | ] 220 | }, 221 | "execution_count": 6, 222 | "metadata": {}, 223 | "output_type": "execute_result" 224 | } 225 | ], 226 | "source": [ 227 | "df.dtypes" 228 | ] 229 | }, 230 | { 231 | "cell_type": "markdown", 232 | "metadata": {}, 233 | "source": [ 234 | "For each pairing (a `tuple` object in Python, denoted by the parentheses), the first entry is the column name and the second is the dtype. Notice that this data has no headers with it (we specified `headers=False` when we loaded it), so Spark used its default naming convention of `_c0, _c1, ... _cn`. 
We'll makes some changes to that in a minute.\n", 235 | "\n", 236 | "Take a peak at five rows:" 237 | ] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": 7, 242 | "metadata": {}, 243 | "outputs": [ 244 | { 245 | "data": { 246 | "text/plain": [ 247 | "[Row(_c0=100002091588, _c1=u'01/01/2015', _c2=u'OTHER', _c3=4.125, _c4=None, _c5=0, _c6=360, _c7=360, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None), Row(_c0=100002091588, _c1=u'02/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=1, _c6=359, _c7=359, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None), Row(_c0=100002091588, _c1=u'03/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=2, _c6=358, _c7=358, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None), Row(_c0=100002091588, _c1=u'04/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=3, _c6=357, _c7=357, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None), Row(_c0=100002091588, _c1=u'05/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=4, _c6=356, _c7=356, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None)]" 248 | ] 249 | }, 250 | "execution_count": 7, 251 | "metadata": {}, 252 | "output_type": "execute_result" 253 | } 254 | ], 255 | "source": [ 256 | "df.take(5)" 257 | ] 258 | }, 259 | { 260 | "cell_type": "markdown", 261 | "metadata": {}, 262 | "source": [ 263 | "In the format `column_name=value` for each row. Note that the formatting above is ugly because `take` doesn't try to make it pretty, it just returns the row object itself. We can use `show` instead and that attempts to format the data better, but because there are so many columns in this case the formatting of `show` doesn't fit, and each line wraps down to the next. We'll use `show` on a subset below.\n", 264 | "\n", 265 | "# Selecting & Renaming Columns" 266 | ] 267 | }, 268 | { 269 | "cell_type": "markdown", 270 | "metadata": {}, 271 | "source": [ 272 | "The select operation performs a by column subset of an existing DF. The columns to be returned in the new DF are specified as a list of column name strings in the select operation. Here, we create a new DF called perf_lim that includes only the first 14 columns in the perf DF, i.e. 
the DF perf_lim is a subset of perf:" 273 | ] 274 | }, 275 | { 276 | "cell_type": "code", 277 | "execution_count": 8, 278 | "metadata": {}, 279 | "outputs": [ 280 | { 281 | "data": { 282 | "text/plain": [ 283 | "[Row(_c0=100002091588, _c1=u'01/01/2015', _c2=u'OTHER', _c3=4.125, _c4=None, _c5=0, _c6=360, _c7=360, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None)]" 284 | ] 285 | }, 286 | "execution_count": 8, 287 | "metadata": {}, 288 | "output_type": "execute_result" 289 | } 290 | ], 291 | "source": [ 292 | "df_lim = df.select('_c0','_c1','_c2', '_c3', '_c4', '_c5', '_c6', '_c7', '_c8', '_c9', '_c10', '_c11', '_c12', '_c13')\n", 293 | "df_lim.take(1)" 294 | ] 295 | }, 296 | { 297 | "cell_type": "markdown", 298 | "metadata": {}, 299 | "source": [ 300 | "We can rename columns one at a time, or several at a time:" 301 | ] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": 9, 306 | "metadata": {}, 307 | "outputs": [ 308 | { 309 | "data": { 310 | "text/plain": [ 311 | "DataFrame[loan_id: bigint, period: string, _c2: string, _c3: double, _c4: double, _c5: int, _c6: int, _c7: int, _c8: string, _c9: int, _c10: string, _c11: string, _c12: int, _c13: string]" 312 | ] 313 | }, 314 | "execution_count": 9, 315 | "metadata": {}, 316 | "output_type": "execute_result" 317 | } 318 | ], 319 | "source": [ 320 | "df_lim = df_lim.withColumnRenamed('_c0','loan_id').withColumnRenamed('_c1','period')\n", 321 | "df_lim" 322 | ] 323 | }, 324 | { 325 | "cell_type": "markdown", 326 | "metadata": { 327 | "collapsed": true 328 | }, 329 | "source": [ 330 | "You can see that column `C0` has been renamed to `loan_id`, and `C1` to `period`.\n", 331 | "\n", 332 | "We can also rename many of them in a loop using two lists or a dictionary:" 333 | ] 334 | }, 335 | { 336 | "cell_type": "code", 337 | "execution_count": 10, 338 | "metadata": { 339 | "collapsed": true 340 | }, 341 | "outputs": [], 342 | "source": [ 343 | "old_names = ['_c2', '_c3', '_c4', '_c5', '_c6', '_c7', '_c8', '_c9', '_c10', '_c11', '_c12', '_c13']\n", 344 | "new_names = ['servicer_name', 'new_int_rt', 'act_endg_upb', 'loan_age', 'mths_remng', 'aj_mths_remng', 'dt_matr', 'cd_msa', 'delq_sts', 'flag_mod', 'cd_zero_bal', 'dt_zero_bal']\n", 345 | "for old, new in zip(old_names, new_names):\n", 346 | " df_lim = df_lim.withColumnRenamed(old, new)" 347 | ] 348 | }, 349 | { 350 | "cell_type": "code", 351 | "execution_count": 11, 352 | "metadata": {}, 353 | "outputs": [ 354 | { 355 | "data": { 356 | "text/plain": [ 357 | "[Row(loan_id=100002091588, period=u'01/01/2015', servicer_name=u'OTHER', new_int_rt=4.125, act_endg_upb=None, loan_age=0, mths_remng=360, aj_mths_remng=360, dt_matr=u'01/2045', cd_msa=16740, delq_sts=u'0', flag_mod=u'N', cd_zero_bal=None, dt_zero_bal=None)]" 358 | ] 359 | }, 360 | "execution_count": 11, 361 | "metadata": {}, 362 | "output_type": "execute_result" 363 | } 364 | ], 365 | "source": [ 366 | "df_lim.take(1)" 367 | ] 368 | }, 369 | { 370 | "cell_type": "code", 371 | "execution_count": 12, 372 | "metadata": {}, 373 | "outputs": [ 374 | { 375 | "data": { 376 | "text/plain": [ 377 | "['loan_id', 'period', 'servicer_name', 'new_int_rt', 'act_endg_upb', 'loan_age', 'mths_remng', 'aj_mths_remng', 'dt_matr', 'cd_msa', 'delq_sts', 'flag_mod', 'cd_zero_bal', 'dt_zero_bal']" 378 | ] 379 | }, 380 | "execution_count": 12, 381 | "metadata": {}, 382 | "output_type": "execute_result" 383 | } 384 | ], 385 | "source": [ 386 | "df_lim.columns" 387 | ] 388 | }, 389 | { 390 | "cell_type": "markdown", 391 | "metadata": 
{}, 392 | "source": [ 393 | "# Describe\n", 394 | "\n", 395 | "Now we'll describe the data. Note that `describe` returns a new dataframe with the information, and so must have `show` called after it if our goal is to view it (note the nice formatting in this case). This can be called on one or more specific columns, as we do here, or the entire dataframe by passing no columns to describe:" 396 | ] 397 | }, 398 | { 399 | "cell_type": "code", 400 | "execution_count": 13, 401 | "metadata": { 402 | "scrolled": true 403 | }, 404 | "outputs": [ 405 | { 406 | "name": "stdout", 407 | "output_type": "stream", 408 | "text": [ 409 | "+-------+--------------------+-------------------+------------------+\n", 410 | "|summary| servicer_name| new_int_rt| loan_age|\n", 411 | "+-------+--------------------+-------------------+------------------+\n", 412 | "| count| 382039| 3526154| 3526154|\n", 413 | "| mean| null| 4.178168090219519| 5.134865351881966|\n", 414 | "| stddev| null|0.34382335723646673|3.3833930336063465|\n", 415 | "| min| CITIMORTGAGE, INC.| 2.75| -1|\n", 416 | "| max|WELLS FARGO BANK,...| 6.125| 34|\n", 417 | "+-------+--------------------+-------------------+------------------+\n", 418 | "\n" 419 | ] 420 | } 421 | ], 422 | "source": [ 423 | "df_described = df_lim.describe('servicer_name', 'new_int_rt', 'loan_age')\n", 424 | "df_described.show()" 425 | ] 426 | }, 427 | { 428 | "cell_type": "markdown", 429 | "metadata": { 430 | "collapsed": true 431 | }, 432 | "source": [ 433 | "# Writing to S3\n", 434 | "\n", 435 | "And finally, we can write data out to our S3 bucket. Note that if your data is small enough to be collected onto one computer, writing it is easy. We'll use the dataframe we just created using `describe` above as an example:" 436 | ] 437 | }, 438 | { 439 | "cell_type": "code", 440 | "execution_count": 14, 441 | "metadata": { 442 | "collapsed": true 443 | }, 444 | "outputs": [], 445 | "source": [ 446 | "df_described.write.format('com.databricks.spark.csv').option(\"header\", \"true\").save('s3://ui-spark-social-science-public/data/mycsv')" 447 | ] 448 | }, 449 | { 450 | "cell_type": "markdown", 451 | "metadata": { 452 | "collapsed": true 453 | }, 454 | "source": [ 455 | "The above line will turn *each partition* of this dataframe into a .csv file. This is an important note; if your data is very big it may be on a lot of partitions. This may be required if your data too large to fit in one csv file, but if your data should fit you can include the `coalesce` command, like this:\n", 456 | " \n", 457 | " df_described.coalesce(1).write.format(\"com.databricks.spark.csv\").option(\"header\", \"true\").save(\"s3://ui-spark-social-science-public/data/mycsv\")\n", 458 | "\n", 459 | "To tell it to combine all the data into 1 partition (or however many you pass in as the value). Again, only do this if your data isn't very large. See the pySpark tutorial on subsetting for more." 
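The abstract also mentioned reading the data back in. A hedged sketch of that step, assuming the same `mycsv` folder used in the `save` above and that the files were written with headers, would be:

    df_readback = spark.read.csv('s3://ui-spark-social-science-public/data/mycsv', header=True, inferSchema=True)

Pointing `read.csv` at the folder tells Spark to read every part-file it finds there back into a single dataframe, however many partitions were written.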
460 | ] 461 | }, 462 | { 463 | "cell_type": "code", 464 | "execution_count": null, 465 | "metadata": { 466 | "collapsed": true 467 | }, 468 | "outputs": [], 469 | "source": [] 470 | } 471 | ], 472 | "metadata": { 473 | "kernelspec": { 474 | "display_name": "Python 2", 475 | "language": "python", 476 | "name": "python2" 477 | }, 478 | "language_info": { 479 | "codemirror_mode": { 480 | "name": "ipython", 481 | "version": 2 482 | }, 483 | "file_extension": ".py", 484 | "mimetype": "text/x-python", 485 | "name": "python", 486 | "nbconvert_exporter": "python", 487 | "pygments_lexer": "ipython2", 488 | "version": "2.7.12" 489 | } 490 | }, 491 | "nbformat": 4, 492 | "nbformat_minor": 1 493 | } 494 | -------------------------------------------------------------------------------- /02_pyspark-basics-2.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Dataframe Concepts_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## _by Jeff Levy (jlevy@urban.org) & Alex Engler (aengler@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 31 Jul, Spark v2.1_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: This guide will explore some basic concepts necessary for working with many dataframe operations, in particular `groupBy` and `persist`._\n", 24 | "\n", 25 | "_Main operations used: read.load, withColumn, groupBy, persist, cache, unpersist_" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "Spark does its computations in what is called a _lazy_ fashion. That is, when you tell it to do things to your data, it _doesn't do them right away._ Instead it checks that they're valid commands, then stacks them up until you actually ask it to return a value or a dataframe to you. This is stack of commands is called a _lineage_ in Spark, and means we can think of Spark dataframe objects as a list of instructions built on top of your original data.\n", 40 | "\n", 41 | "Let's see it in action. 
First we'll load up the same dataframe we did in basics 1:" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": 1, 47 | "metadata": { 48 | "collapsed": true 49 | }, 50 | "outputs": [], 51 | "source": [ 52 | "df = spark.read.format('com.databricks.spark.csv').options(header='False', inferschema='true', sep='|').load('s3://ui-spark-social-science-public/data/Performance_2015Q1.txt')" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "We'll take a subset of the columns and rename them, like we did in the first tutorial: " 60 | ] 61 | }, 62 | { 63 | "cell_type": "code", 64 | "execution_count": 2, 65 | "metadata": { 66 | "collapsed": true 67 | }, 68 | "outputs": [], 69 | "source": [ 70 | "df_lim = df.select('_c0','_c1','_c2', '_c3', '_c4', '_c5', '_c6', '_c7', '_c8', '_c9', '_c10', '_c11', '_c12', '_c13')\n", 71 | "\n", 72 | "old_names = ['_c0','_c1','_c2', '_c3', '_c4', '_c5', '_c6', '_c7', '_c8', '_c9', '_c10', '_c11', '_c12', '_c13']\n", 73 | "new_names = ['loan_id','period','servicer_name', 'new_int_rt', 'act_endg_upb', 'loan_age', 'mths_remng', 'aj_mths_remng', 'dt_matr', 'cd_msa', 'delq_sts', 'flag_mod', 'cd_zero_bal', 'dt_zero_bal']\n", 74 | "for old, new in zip(old_names, new_names):\n", 75 | " df_lim = df_lim.withColumnRenamed(old, new)" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "But now let's try some numerical operations on a column. We can use the .withColumn method to create a new dataframe that also had an additional calculated variable, in this case the difference between loan_age and months remaining." 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": 3, 88 | "metadata": { 89 | "collapsed": true 90 | }, 91 | "outputs": [], 92 | "source": [ 93 | "## Add a column named 'loan_length' to the existing dataframe:\n", 94 | "df_lim = df_lim.withColumn('loan_length', df_lim['loan_age'] + df_lim['mths_remng'])\n", 95 | "\n", 96 | "## Group the new dataframe by servicer name:\n", 97 | "df_grp = df_lim.groupBy('servicer_name')\n", 98 | "\n", 99 | "## Compute average loan age, months remaining, and loan length by servicer:\n", 100 | "df_avg = df_grp.avg('loan_age', 'mths_remng', 'loan_length')" 101 | ] 102 | }, 103 | { 104 | "cell_type": "markdown", 105 | "metadata": {}, 106 | "source": [ 107 | "Here we performed a simple math operation (adding `loan_age` to `mnths_remng`) then perform a `groupBy` operation over the entries in `servicer_name` (more on groupBy in a minute) while asking it to calculate averages for three numeric columns across each servicer. \n", 108 | "\n", 109 | "However, if you actually ran the code, you probably noticed that the the code block finished nearly instantly - despite there being over 3.5 million rows of data. This is an example of _lazy_ computing - **nothing was actually computed here. ** At the moment, we're just creating a list of instructions. All pySpark did was make sure they were valid instructions. 
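If you want to peek at the instructions Spark has queued up without triggering any computation, the `explain` method prints the plan it intends to execute (just an illustrative aside; the exact plan output depends on your cluster and Spark version):

    df_avg.explain()
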
Now let's see what happens if we tell it to `show` us the results:" 110 | ] 111 | }, 112 | { 113 | "cell_type": "code", 114 | "execution_count": 4, 115 | "metadata": {}, 116 | "outputs": [ 117 | { 118 | "name": "stdout", 119 | "output_type": "stream", 120 | "text": [ 121 | "+--------------------+--------------------+------------------+------------------+\n", 122 | "| servicer_name| avg(loan_age)| avg(mths_remng)| avg(loan_length)|\n", 123 | "+--------------------+--------------------+------------------+------------------+\n", 124 | "| QUICKEN LOANS INC.|-0.08899247348614438| 358.5689787889155|358.47998631542936|\n", 125 | "|NATIONSTAR MORTGA...| 0.39047125841532887| 359.5821853961678| 359.9726566545831|\n", 126 | "| null| 5.6264681794400015|354.21486809483747| 359.8413362742775|\n", 127 | "|WELLS FARGO BANK,...| 0.6704475572258285|359.25937820293814|359.92982576016396|\n", 128 | "|FANNIE MAE/SETERU...| 9.333333333333334| 350.6666666666667| 360.0|\n", 129 | "|DITECH FINANCIAL LLC| 5.147629653197582| 354.7811008590519|359.92873051224944|\n", 130 | "|SENECA MORTGAGE S...| -0.2048814025438295|360.20075627363354| 359.9958748710897|\n", 131 | "|SUNTRUST MORTGAGE...| 0.8241234756097561| 359.1453887195122| 359.969512195122|\n", 132 | "|ROUNDPOINT MORTGA...| 5.153408024034549| 354.8269387244163|359.98034674845087|\n", 133 | "| PENNYMAC CORP.| 0.14966740576496673| 359.8470066518847|359.99667405764967|\n", 134 | "|PHH MORTGAGE CORP...| 0.9780420860018298|359.02195791399816| 360.0|\n", 135 | "|MATRIX FINANCIAL ...| 6.566794707639778| 353.4229620145113|359.98975672215107|\n", 136 | "| OTHER| 0.11480465916297489| 359.8345750772193|359.94937973638224|\n", 137 | "| CITIMORTGAGE, INC.| 0.338498789346247|359.41670702179175| 359.755205811138|\n", 138 | "|PINGORA LOAN SERV...| 7.573573382530696|352.40886824861633| 359.982441631147|\n", 139 | "|JP MORGAN CHASE B...| 1.6553418987669224| 358.3384495990342|359.99379149780117|\n", 140 | "| PNC BANK, N.A.| 1.1707779886148009|358.78747628083494| 359.9582542694497|\n", 141 | "|FREEDOM MORTGAGE ...| 8.56265812109968|351.29583403609377|359.85849215719344|\n", 142 | "+--------------------+--------------------+------------------+------------------+\n", 143 | "\n" 144 | ] 145 | } 146 | ], 147 | "source": [ 148 | "df_avg.show()" 149 | ] 150 | }, 151 | { 152 | "cell_type": "markdown", 153 | "metadata": {}, 154 | "source": [ 155 | "That takes a bit longer to run, because when you executed `show` you asked for a dataframe to be returned to you, which meant **Spark went back and caclulated the three previous operations.** You could have done any number of intermediate steps similar to those before calling `show` and they all would have been lazy operations that finished nearly instantly, until `show` ran them all." 156 | ] 157 | }, 158 | { 159 | "cell_type": "markdown", 160 | "metadata": {}, 161 | "source": [ 162 | "Now this would just be a background peculiarity, except that we have some control over the process. 
If you imagine your _lineage_ as a straight line of instructions leading from your source data to your ouput, **we can use the `persist()` method to create a point for branching.** Essentially it tells Spark \"follow the instructions to this point, then _hold these results_ because I'm going to come back to them again.\"\n", 163 | "\n", 164 | "Let's redo the previous code block with a `persist()`:" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": 5, 170 | "metadata": { 171 | "collapsed": true 172 | }, 173 | "outputs": [], 174 | "source": [ 175 | "df_keep = df_lim.withColumn('loan_length', df_lim['loan_age'] + df_lim['mths_remng'])\n", 176 | "\n", 177 | "df_keep.persist()\n", 178 | "\n", 179 | "df_grp = df_keep.groupBy('servicer_name')\n", 180 | "df_avg = df_grp.avg('loan_age', 'mths_remng', 'loan_length')" 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": {}, 186 | "source": [ 187 | "The `persist` command adds very little overhead in this case, finishing in in well under a second. Now we call `show` again to force it to calculate the averages by group:" 188 | ] 189 | }, 190 | { 191 | "cell_type": "code", 192 | "execution_count": 6, 193 | "metadata": {}, 194 | "outputs": [ 195 | { 196 | "name": "stdout", 197 | "output_type": "stream", 198 | "text": [ 199 | "+--------------------+--------------------+------------------+------------------+\n", 200 | "| servicer_name| avg(loan_age)| avg(mths_remng)| avg(loan_length)|\n", 201 | "+--------------------+--------------------+------------------+------------------+\n", 202 | "| QUICKEN LOANS INC.|-0.08899247348614438| 358.5689787889155|358.47998631542936|\n", 203 | "|NATIONSTAR MORTGA...| 0.39047125841532887| 359.5821853961678| 359.9726566545831|\n", 204 | "| null| 5.6264681794400015|354.21486809483747| 359.8413362742775|\n", 205 | "|WELLS FARGO BANK,...| 0.6704475572258285|359.25937820293814|359.92982576016396|\n", 206 | "|FANNIE MAE/SETERU...| 9.333333333333334| 350.6666666666667| 360.0|\n", 207 | "|DITECH FINANCIAL LLC| 5.147629653197582| 354.7811008590519|359.92873051224944|\n", 208 | "|SENECA MORTGAGE S...| -0.2048814025438295|360.20075627363354| 359.9958748710897|\n", 209 | "|SUNTRUST MORTGAGE...| 0.8241234756097561| 359.1453887195122| 359.969512195122|\n", 210 | "|ROUNDPOINT MORTGA...| 5.153408024034549| 354.8269387244163|359.98034674845087|\n", 211 | "| PENNYMAC CORP.| 0.14966740576496673| 359.8470066518847|359.99667405764967|\n", 212 | "|PHH MORTGAGE CORP...| 0.9780420860018298|359.02195791399816| 360.0|\n", 213 | "|MATRIX FINANCIAL ...| 6.566794707639778| 353.4229620145113|359.98975672215107|\n", 214 | "| OTHER| 0.11480465916297489| 359.8345750772193|359.94937973638224|\n", 215 | "| CITIMORTGAGE, INC.| 0.338498789346247|359.41670702179175| 359.755205811138|\n", 216 | "|PINGORA LOAN SERV...| 7.573573382530696|352.40886824861633| 359.982441631147|\n", 217 | "|JP MORGAN CHASE B...| 1.6553418987669224| 358.3384495990342|359.99379149780117|\n", 218 | "| PNC BANK, N.A.| 1.1707779886148009|358.78747628083494| 359.9582542694497|\n", 219 | "|FREEDOM MORTGAGE ...| 8.56265812109968|351.29583403609377|359.85849215719344|\n", 220 | "+--------------------+--------------------+------------------+------------------+\n", 221 | "\n" 222 | ] 223 | } 224 | ], 225 | "source": [ 226 | "df_avg.show()" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "Showing the groupBy averages this way took a bit longer because of the `persist` overhead. 
But now let's back up and, in addition to the mean, lets also get the sums of our groupBy object:" 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": 7, 239 | "metadata": { 240 | "collapsed": true 241 | }, 242 | "outputs": [], 243 | "source": [ 244 | "df_sum = df_grp.sum('new_int_rt', 'loan_age', 'mths_remng', 'cd_zero_bal', 'loan_length')" 245 | ] 246 | }, 247 | { 248 | "cell_type": "markdown", 249 | "metadata": {}, 250 | "source": [ 251 | "That was the *lazy* portion, now we make it execute:" 252 | ] 253 | }, 254 | { 255 | "cell_type": "code", 256 | "execution_count": 8, 257 | "metadata": {}, 258 | "outputs": [ 259 | { 260 | "name": "stdout", 261 | "output_type": "stream", 262 | "text": [ 263 | "+--------------------+--------------------+-------------+---------------+----------------+----------------+\n", 264 | "| servicer_name| sum(new_int_rt)|sum(loan_age)|sum(mths_remng)|sum(cd_zero_bal)|sum(loan_length)|\n", 265 | "+--------------------+--------------------+-------------+---------------+----------------+----------------+\n", 266 | "| QUICKEN LOANS INC.| 101801.76500000055| -2081| 8384777| null| 8382696|\n", 267 | "|NATIONSTAR MORTGA...| 40287.497999999934| 3770| 3471766| 2| 3475536|\n", 268 | "| null|1.3139130895007337E7| 17690263| 1113692280| 16932| 1131382543|\n", 269 | "|WELLS FARGO BANK,...| 187326.36499999996| 29436| 15773283| null| 15802719|\n", 270 | "|FANNIE MAE/SETERU...| 26.6| 56| 2104| null| 2160|\n", 271 | "|DITECH FINANCIAL LLC| 39531.70999999991| 48537| 3345231| 41| 3393768|\n", 272 | "|SENECA MORTGAGE S...| 24093.55999999997| -1192| 2095648| null| 2094456|\n", 273 | "|SUNTRUST MORTGAGE...| 21530.767999999884| 4325| 1884795| null| 1889120|\n", 274 | "|ROUNDPOINT MORTGA...| 67708.25999999994| 82336| 5669070| 74| 5751406|\n", 275 | "| PENNYMAC CORP.| 15209.139999999992| 540| 1298328| null| 1298868|\n", 276 | "|PHH MORTGAGE CORP...| 9086.066000000006| 2138| 784822| null| 786960|\n", 277 | "|MATRIX FINANCIAL ...| 19212.93299999999| 30772| 1656140| 16| 1686912|\n", 278 | "| OTHER| 904855.043999986| 25163| 78868902| 21| 78894065|\n", 279 | "| CITIMORTGAGE, INC.| 16939.329999999998| 1398| 1484391| null| 1485789|\n", 280 | "|PINGORA LOAN SERV...| 64224.70499999985| 119049| 5539515| 111| 5658564|\n", 281 | "|JP MORGAN CHASE B...| 50187.154999999984| 19197| 4155651| null| 4174848|\n", 282 | "| PNC BANK, N.A.| 6911.725| 1851| 567243| 1| 569094|\n", 283 | "|FREEDOM MORTGAGE ...| 24800.60499999998| 50768| 2082833| 60| 2133601|\n", 284 | "+--------------------+--------------------+-------------+---------------+----------------+----------------+\n", 285 | "\n" 286 | ] 287 | } 288 | ], 289 | "source": [ 290 | "df_sum.show()" 291 | ] 292 | }, 293 | { 294 | "cell_type": "markdown", 295 | "metadata": {}, 296 | "source": [ 297 | "That was dramatically faster than the calculation showing the averages - (we benchmarked it at 1.49 seconds versus over 18 seconds). This is because Spark kept the intermediate results up to `persist()`, from when we calculated the averages, and thus only had to run the code that came after that. We can now do as many different branches of operations as we want stemming from `df_new` and since we persisted it, all the code before the `persist()` won't be executed again.\n", 298 | "\n", 299 | "There is no need for persisting if there is no branching. In fact, as we saw, `persist` adds a bit of overhead to the process, and so is actually a hinderance if you're not going to be utilizing the branch point. 
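If you lose track of whether a dataframe is currently persisted, the `is_cached` attribute offers a quick check (a small aside; it simply returns `True` or `False`):

    df_keep.is_cached  #True here, because of the persist() call above
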
As a matter of good practice, and to free up more resources, you can call `.unpersist()` on a persisted object to drop it from storage when done with it:" 300 | ] 301 | }, 302 | { 303 | "cell_type": "code", 304 | "execution_count": 9, 305 | "metadata": {}, 306 | "outputs": [ 307 | { 308 | "data": { 309 | "text/plain": [ 310 | "DataFrame[loan_id: bigint, period: string, servicer_name: string, new_int_rt: double, act_endg_upb: double, loan_age: int, mths_remng: int, aj_mths_remng: int, dt_matr: string, cd_msa: int, delq_sts: string, flag_mod: string, cd_zero_bal: int, dt_zero_bal: string, loan_length: int]" 311 | ] 312 | }, 313 | "execution_count": 9, 314 | "metadata": {}, 315 | "output_type": "execute_result" 316 | } 317 | ], 318 | "source": [ 319 | "df_keep.unpersist();" 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "metadata": {}, 325 | "source": [ 326 | "(The trailing `;` simply gags the output from the command; we don't need to see the summary of what we just unpersisted.)\n", 327 | "\n", 328 | "Also note that `cache()` is essentially a synonym for `persist()`, except it specifies storing the checkpoint in memory for the fastest recall, while persisting allows Spark to swap some of the checkpoint to disk if necessary. Obviously `cache()` only works if the dataframe you are forcing it to hold is small enough that it can fit in the memory of each node, so use it with care." 329 | ] 330 | }, 331 | { 332 | "cell_type": "markdown", 333 | "metadata": {}, 334 | "source": [ 335 | "And finally, a bit more on `groupBy`. Hopefully the usage above has given you some insight into how it works. In short, `groupBy` is the vehicle for aggregation in a dataframe. A `groupBy` object is, in itself, incomplete. So, the line in the code block where we introduced a `persist()` above looks like this:\n", 336 | "\n", 337 | "`df_grp = df_keep.groupBy('servicer_name')`\n", 338 | "\n", 339 | "This generates a `groupBy` object where the data is grouped around the unique values found in `servicer_name`, but it is just a foundation. It is like the sentence _\"We are going to group our data up by the unique values found in `servicer_name`, and then...\"_ The next line of code contains the rest:\n", 340 | "\n", 341 | "`df_avg = df_grp.avg('loan_age', 'mths_remng', 'loan_length')`\n", 342 | "\n", 343 | "Or to finish the sentence, _\"... 
calculate the averages for these three columns within each group.\"_\n" 344 | ] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "execution_count": null, 349 | "metadata": { 350 | "collapsed": true 351 | }, 352 | "outputs": [], 353 | "source": [] 354 | } 355 | ], 356 | "metadata": { 357 | "kernelspec": { 358 | "display_name": "Python 2", 359 | "language": "python", 360 | "name": "python2" 361 | }, 362 | "language_info": { 363 | "codemirror_mode": "text/x-ipython", 364 | "file_extension": ".py", 365 | "mimetype": "text/x-ipython", 366 | "name": "python", 367 | "pygments_lexer": "python", 368 | "version": "2.7.12" 369 | } 370 | }, 371 | "nbformat": 4, 372 | "nbformat_minor": 1 373 | } 374 | -------------------------------------------------------------------------------- /03_merging.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Merging and Joining Data_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 31 Jul 2017, Spark v2.1_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: This guide will go over the various ways to concatenate two or more dataframes._\n", 24 | "\n", 25 | "_Main operations used: unionAll, join_" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "We begin with some basic setup to import the SQL structure that supports the dataframes we'll be using:" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": {}, 45 | "source": [ 46 | "# Stacking Rows with Matching Columns\n", 47 | "\n", 48 | "You may have the same columns in each dataframe and just want to stack one on top of the other, row-wise. 
We can make this happen with a helper function, after we first build three simple toy dataframes:" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": 1, 54 | "metadata": { 55 | "collapsed": true 56 | }, 57 | "outputs": [], 58 | "source": [ 59 | "from pyspark.sql import Row\n", 60 | "\n", 61 | "row = Row(\"name\", \"pet\", \"count\")\n", 62 | "\n", 63 | "df1 = sc.parallelize([\n", 64 | " row(\"Sue\", \"cat\", 16),\n", 65 | " row(\"Kim\", \"dog\", 1), \n", 66 | " row(\"Bob\", \"fish\", 5)\n", 67 | " ]).toDF()\n", 68 | "\n", 69 | "df2 = sc.parallelize([\n", 70 | " row(\"Fred\", \"cat\", 2),\n", 71 | " row(\"Kate\", \"ant\", 179), \n", 72 | " row(\"Marc\", \"lizard\", 5)\n", 73 | " ]).toDF()\n", 74 | "\n", 75 | "df3 = sc.parallelize([\n", 76 | " row(\"Sarah\", \"shark\", 3),\n", 77 | " row(\"Jason\", \"kids\", 2), \n", 78 | " row(\"Scott\", \"squirrel\", 1)\n", 79 | " ]).toDF()" 80 | ] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "metadata": {}, 85 | "source": [ 86 | "If we just want to stack two of them, we can use `unionAll`:" 87 | ] 88 | }, 89 | { 90 | "cell_type": "code", 91 | "execution_count": 2, 92 | "metadata": { 93 | "collapsed": true 94 | }, 95 | "outputs": [], 96 | "source": [ 97 | "df_union = df1.unionAll(df2)" 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": 3, 103 | "metadata": {}, 104 | "outputs": [ 105 | { 106 | "name": "stdout", 107 | "output_type": "stream", 108 | "text": [ 109 | "+----+------+-----+\n", 110 | "|name| pet|count|\n", 111 | "+----+------+-----+\n", 112 | "| Sue| cat| 16|\n", 113 | "| Kim| dog| 1|\n", 114 | "| Bob| fish| 5|\n", 115 | "|Fred| cat| 2|\n", 116 | "|Kate| ant| 179|\n", 117 | "|Marc|lizard| 5|\n", 118 | "+----+------+-----+\n", 119 | "\n" 120 | ] 121 | } 122 | ], 123 | "source": [ 124 | "df_union.show()" 125 | ] 126 | }, 127 | { 128 | "cell_type": "markdown", 129 | "metadata": {}, 130 | "source": [ 131 | "The `unionAll` method only allows us to stack two dataframes at a time. We could do that repeatedly if there were more than one to stack in this way, but we can also use a helper function to make it easier. \n", 132 | "\n", 133 | "The standard Python command `reduce` applies a function to a list of items in order to \"reduce\" it down to one output. 
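For instance, in plain Python (a tiny illustrative example, unrelated to Spark itself):

    from functools import reduce

    reduce(lambda a, b: a + b, [1, 2, 3, 4])  #((1 + 2) + 3) + 4 = 10
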
With this you can pass as many dataframes as you like into our helper function and they will come out stacked in one:" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": 4, 139 | "metadata": { 140 | "collapsed": true 141 | }, 142 | "outputs": [], 143 | "source": [ 144 | "from pyspark.sql import DataFrame\n", 145 | "from functools import reduce\n", 146 | "\n", 147 | "def union_many(*dfs):\n", 148 | " #this function can have as many dataframes as you want passed into it\n", 149 | " #the asterics before the name `dfs` tells Python that `dfs` will be a list\n", 150 | " #containing all of the arguments we pass into union_many when it is called\n", 151 | " \n", 152 | " return reduce(DataFrame.unionAll, dfs)\n", 153 | "\n", 154 | "df_union = union_many(df1, df2, df3)" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": 5, 160 | "metadata": {}, 161 | "outputs": [ 162 | { 163 | "name": "stdout", 164 | "output_type": "stream", 165 | "text": [ 166 | "+-----+--------+-----+\n", 167 | "| name| pet|count|\n", 168 | "+-----+--------+-----+\n", 169 | "| Sue| cat| 16|\n", 170 | "| Kim| dog| 1|\n", 171 | "| Bob| fish| 5|\n", 172 | "| Fred| cat| 2|\n", 173 | "| Kate| ant| 179|\n", 174 | "| Marc| lizard| 5|\n", 175 | "|Sarah| shark| 3|\n", 176 | "|Jason| kids| 2|\n", 177 | "|Scott|squirrel| 1|\n", 178 | "+-----+--------+-----+\n", 179 | "\n" 180 | ] 181 | } 182 | ], 183 | "source": [ 184 | "df_union.show()" 185 | ] 186 | }, 187 | { 188 | "cell_type": "markdown", 189 | "metadata": {}, 190 | "source": [ 191 | "# Merging Columns by Matching Rows\n", 192 | "\n", 193 | "The other way to merge is by combining columns on certain keys across rows. If you are familiar with SQL, pySpark )and Pandas for non-distributed data) draws its merging terminology from that. If you are coming from Stata, this is a generally more intuitive way to think about many-to-one, one-to-one and many-to-many merges. \n", 194 | "\n", 195 | "After we build our data there are four ways to specify the logic of the operation:" 196 | ] 197 | }, 198 | { 199 | "cell_type": "code", 200 | "execution_count": 6, 201 | "metadata": { 202 | "collapsed": true 203 | }, 204 | "outputs": [], 205 | "source": [ 206 | "row1 = Row(\"name\", \"pet\", \"count\")\n", 207 | "row2 = Row(\"name\", \"pet2\", \"count2\")\n", 208 | "\n", 209 | "df1 = sc.parallelize([\n", 210 | " row1(\"Sue\", \"cat\", 16),\n", 211 | " row1(\"Kim\", \"dog\", 1), \n", 212 | " row1(\"Bob\", \"fish\", 5),\n", 213 | " row1(\"Libuse\", \"horse\", 1)\n", 214 | " ]).toDF()\n", 215 | "\n", 216 | "df2 = sc.parallelize([\n", 217 | " row2(\"Sue\", \"eagle\", 2),\n", 218 | " row2(\"Kim\", \"ant\", 179), \n", 219 | " row2(\"Bob\", \"lizard\", 5),\n", 220 | " row2(\"Ferdinand\", \"bees\", 23)\n", 221 | " ]).toDF()" 222 | ] 223 | }, 224 | { 225 | "cell_type": "markdown", 226 | "metadata": {}, 227 | "source": [ 228 | "First we'll do an `inner join`, which *merges rows that have a match in both dataframes* and **drops** all others. This is the default type of join, so the `how` argument could be omitted here if you didn't wish to be explicit (being explicit is almost always better, however). 
We will merge on the entries in the `name` column, which you can see is the second argument in the method; this can also be a `list` if the merge should happen on more than one matching value:" 229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": 7, 234 | "metadata": {}, 235 | "outputs": [ 236 | { 237 | "name": "stdout", 238 | "output_type": "stream", 239 | "text": [ 240 | "+----+----+-----+------+------+\n", 241 | "|name| pet|count| pet2|count2|\n", 242 | "+----+----+-----+------+------+\n", 243 | "| Sue| cat| 16| eagle| 2|\n", 244 | "| Bob|fish| 5|lizard| 5|\n", 245 | "| Kim| dog| 1| ant| 179|\n", 246 | "+----+----+-----+------+------+\n", 247 | "\n" 248 | ] 249 | } 250 | ], 251 | "source": [ 252 | "df1.join(df2, 'name', how='inner').show()" 253 | ] 254 | }, 255 | { 256 | "cell_type": "markdown", 257 | "metadata": {}, 258 | "source": [ 259 | "The \"left\" dataframe here is `df1`, the \"right\" dataframe is `df2` - the names simply desribe their relative locations in the line of code. Notice that the entries for Libuse and Ferdinand are dropped, because they do not appear in *both* dataframes.\n", 260 | "\n", 261 | "An **outer join**, which *uses all rows from both dataframes regardless of matches*, fills in `null` for missing observations. Using the same two dataframes:" 262 | ] 263 | }, 264 | { 265 | "cell_type": "code", 266 | "execution_count": 8, 267 | "metadata": {}, 268 | "outputs": [ 269 | { 270 | "name": "stdout", 271 | "output_type": "stream", 272 | "text": [ 273 | "+---------+-----+-----+------+------+\n", 274 | "| name| pet|count| pet2|count2|\n", 275 | "+---------+-----+-----+------+------+\n", 276 | "| Sue| cat| 16| eagle| 2|\n", 277 | "|Ferdinand| null| null| bees| 23|\n", 278 | "| Bob| fish| 5|lizard| 5|\n", 279 | "| Kim| dog| 1| ant| 179|\n", 280 | "| Libuse|horse| 1| null| null|\n", 281 | "+---------+-----+-----+------+------+\n", 282 | "\n" 283 | ] 284 | } 285 | ], 286 | "source": [ 287 | "df1.join(df2, 'name', how='outer').show()" 288 | ] 289 | }, 290 | { 291 | "cell_type": "markdown", 292 | "metadata": {}, 293 | "source": [ 294 | "Libuse and Ferdinand both made it into the output now, but each has `null` values filled in where necessary.\n", 295 | "\n", 296 | "And finally a **left join** uses *all keys from the left dataframe* (in this case `df1`). Data from the right dataframe only shows up if it matches something in the left:" 297 | ] 298 | }, 299 | { 300 | "cell_type": "code", 301 | "execution_count": 9, 302 | "metadata": {}, 303 | "outputs": [ 304 | { 305 | "name": "stdout", 306 | "output_type": "stream", 307 | "text": [ 308 | "+------+-----+-----+------+------+\n", 309 | "| name| pet|count| pet2|count2|\n", 310 | "+------+-----+-----+------+------+\n", 311 | "| Sue| cat| 16| eagle| 2|\n", 312 | "| Bob| fish| 5|lizard| 5|\n", 313 | "| Kim| dog| 1| ant| 179|\n", 314 | "|Libuse|horse| 1| null| null|\n", 315 | "+------+-----+-----+------+------+\n", 316 | "\n" 317 | ] 318 | } 319 | ], 320 | "source": [ 321 | "df1.join(df2, 'name', how='left').show()" 322 | ] 323 | }, 324 | { 325 | "cell_type": "markdown", 326 | "metadata": {}, 327 | "source": [ 328 | "So the entry for Ferdinand was dropped because it has no match in the left dataframe. \n", 329 | "\n", 330 | "A **right join** would just be the opposte of that, and would drop Libuse but keep Ferdinand with `null` entries where necessary. 
A `right` join is equivalent to performing a `left` join but switching the places of `df1` and `df2` in the code block, that is: \n", 331 | "\n", 332 | " df2.join(df1, 'name', how='left')\n", 333 | "\n", 334 | "is logically the same as:\n", 335 | "\n", 336 | " df1.join(df2, 'name', how='right')" 337 | ] 338 | }, 339 | { 340 | "cell_type": "code", 341 | "execution_count": null, 342 | "metadata": { 343 | "collapsed": true 344 | }, 345 | "outputs": [], 346 | "source": [] 347 | } 348 | ], 349 | "metadata": { 350 | "kernelspec": { 351 | "display_name": "Python 2", 352 | "language": "python", 353 | "name": "python2" 354 | }, 355 | "language_info": { 356 | "codemirror_mode": { 357 | "name": "ipython", 358 | "version": 2 359 | }, 360 | "file_extension": ".py", 361 | "mimetype": "text/x-python", 362 | "name": "python", 363 | "nbconvert_exporter": "python", 364 | "pygments_lexer": "ipython2", 365 | "version": "2.7.12" 366 | } 367 | }, 368 | "nbformat": 4, 369 | "nbformat_minor": 1 370 | } 371 | -------------------------------------------------------------------------------- /04_missing-data.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "**_pySpark Basics: Missing Data_**" 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": {}, 15 | "source": [ 16 | "_by Jeff Levy (jlevy@urban.org)_\n", 17 | "\n", 18 | "_Last Updated: 31 Jul 2017, Spark v2.1_" 19 | ] 20 | }, 21 | { 22 | "cell_type": "markdown", 23 | "metadata": {}, 24 | "source": [ 25 | "_Abstract: In this guide we'll look at how to handle null and missing values in pySpark, with a brief discussion of imputation_\n", 26 | "\n", 27 | "_Main operations used: where, isNull, dropna, fillna_" 28 | ] 29 | }, 30 | { 31 | "cell_type": "markdown", 32 | "metadata": {}, 33 | "source": [ 34 | "***" 35 | ] 36 | }, 37 | { 38 | "cell_type": "markdown", 39 | "metadata": {}, 40 | "source": [ 41 | "We'll load some real data from CSV to work with. It helps to know in advance how the dataset handles missing values - are they an empty string, or something else? Most CSVs will use empty strings, but we can't compute anything on a column that is mixed strings and numbers. The `null` object in pySpark is what we want, and we can tell it when we import the data to replace the value our data uses to denote missing data with it." 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": 1, 47 | "metadata": { 48 | "collapsed": true 49 | }, 50 | "outputs": [], 51 | "source": [ 52 | "df = spark.read.csv('s3://ui-spark-social-science-public/data/Performance_2015Q1.txt', \n", 53 | " header=False, inferSchema=True, sep='|', nullValue='')" 54 | ] 55 | }, 56 | { 57 | "cell_type": "markdown", 58 | "metadata": {}, 59 | "source": [ 60 | "Note that on the `nullValue=''` line, the empty string can be replaced by whatever your dataset uses - this is telling Spark which values in the dataframe it should convert to `null`." 
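For example, if a hypothetical file used the string 'NA' to mark missing values, the load might look like this (a sketch only; the path below is a placeholder, not a real dataset):

    df_na = spark.read.csv('s3://your-bucket/your-data.csv', header=True, inferSchema=True, sep=',', nullValue='NA')
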
61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "# Exploring Null Values\n", 68 | "\n", 69 | "First let's see how many rows the entire dataframe has:" 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": 2, 75 | "metadata": {}, 76 | "outputs": [ 77 | { 78 | "data": { 79 | "text/plain": [ 80 | "3526154" 81 | ] 82 | }, 83 | "execution_count": 2, 84 | "metadata": {}, 85 | "output_type": "execute_result" 86 | } 87 | ], 88 | "source": [ 89 | "df.count()" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "To explore missing data in pySpark, we need to make sure we're looking in a numerical column - **the system does not insert `null` values into a column that has a string datatype.** The general point of `null` is so the system knows to skip those rows when doing calculations down a column. \n", 97 | "\n", 98 | "For example, the mean of the series [3, 4, 2, null, 5] is: \n", 99 | "\n", 100 | "14 / 4 = 3.5 \n", 101 | "\n", 102 | "not: \n", 103 | "\n", 104 | "14 / 5 = 2.8\n", 105 | "\n", 106 | "In other words, with proper `null` handling, [3, 4, 2, null, 5] is not the same as [3, 4, 2, 0, 5]. This distinction is not relevant in a column of strings." 107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": 3, 112 | "metadata": {}, 113 | "outputs": [ 114 | { 115 | "data": { 116 | "text/plain": [ 117 | "[('_c0', 'bigint'),\n", 118 | " ('_c1', 'string'),\n", 119 | " ('_c2', 'string'),\n", 120 | " ('_c3', 'double'),\n", 121 | " ('_c4', 'double'),\n", 122 | " ('_c5', 'int'),\n", 123 | " ('_c6', 'int'),\n", 124 | " ('_c7', 'int'),\n", 125 | " ('_c8', 'string'),\n", 126 | " ('_c9', 'int'),\n", 127 | " ('_c10', 'string'),\n", 128 | " ('_c11', 'string'),\n", 129 | " ('_c12', 'int'),\n", 130 | " ('_c13', 'string'),\n", 131 | " ('_c14', 'string'),\n", 132 | " ('_c15', 'string'),\n", 133 | " ('_c16', 'string'),\n", 134 | " ('_c17', 'string'),\n", 135 | " ('_c18', 'string'),\n", 136 | " ('_c19', 'string'),\n", 137 | " ('_c20', 'string'),\n", 138 | " ('_c21', 'string'),\n", 139 | " ('_c22', 'string'),\n", 140 | " ('_c23', 'string'),\n", 141 | " ('_c24', 'string'),\n", 142 | " ('_c25', 'string'),\n", 143 | " ('_c26', 'int'),\n", 144 | " ('_c27', 'string')]" 145 | ] 146 | }, 147 | "execution_count": 3, 148 | "metadata": {}, 149 | "output_type": "execute_result" 150 | } 151 | ], 152 | "source": [ 153 | "df.dtypes" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "For our practice purposes it doesn't matter what this data actually is, so we'll arbitrarily select a numerical column:" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": 4, 166 | "metadata": {}, 167 | "outputs": [ 168 | { 169 | "data": { 170 | "text/plain": [ 171 | "3510294" 172 | ] 173 | }, 174 | "execution_count": 4, 175 | "metadata": {}, 176 | "output_type": "execute_result" 177 | } 178 | ], 179 | "source": [ 180 | "df.where( df['_c12'].isNull() ).count()" 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": {}, 186 | "source": [ 187 | "This command takes the form `df.where(___).count()` where the blank is replaced with the desired condition - or in a sentance, *\"Count the dataframe where `___` is True\".* In the code I used some extra spaces in between the brackets just to make this stand out - Python ignores extra horizonal space nested in commands like that. 
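The complementary check, counting the rows that are *not* null, just swaps in `isNotNull` (a quick aside):

    df.where( df['_c12'].isNotNull() ).count()  #returns the 15,860 non-null rows discussed below
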
So here we're counting how many rows have `null` values in column `_c12`.\n", 188 | "\n", 189 | "Note that if we left the `count()` method off the end then **it would return an actual dataframe of all rows where column `C12` is `null`.** So if you wanted more than just the count you could explore that subset.\n", 190 | "\n", 191 | "When we compare the null count to our earlier command, `df.count()`, we can see that column `C12` is mostly null values - there are 15,860 actual values in here, out of 3,526,154 rows. A common need when exploring a dataset might be to check _all_ our numeric rows for null values. However, the `isNull()` method can only be called on a column, not an entire dataframe, so I'll write a convenient Python function to do this for us with some comments to explain each step:" 192 | ] 193 | }, 194 | { 195 | "cell_type": "code", 196 | "execution_count": 5, 197 | "metadata": { 198 | "collapsed": true 199 | }, 200 | "outputs": [], 201 | "source": [ 202 | "def count_nulls(df):\n", 203 | " null_counts = [] #make an empty list to hold our results\n", 204 | " for col in df.dtypes: #iterate through the column data types we saw above, e.g. ('C0', 'bigint')\n", 205 | " cname = col[0] #splits out the column name, e.g. 'C0' \n", 206 | " ctype = col[1] #splits out the column type, e.g. 'bigint'\n", 207 | " if ctype != 'string': #skip processing string columns for efficiency (can't have nulls)\n", 208 | " nulls = df.where( df[cname].isNull() ).count()\n", 209 | " result = tuple([cname, nulls]) #new tuple, (column name, null count)\n", 210 | " null_counts.append(result) #put the new tuple in our result list\n", 211 | " return null_counts\n", 212 | "\n", 213 | "null_counts = count_nulls(df)" 214 | ] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "execution_count": 6, 219 | "metadata": {}, 220 | "outputs": [ 221 | { 222 | "data": { 223 | "text/plain": [ 224 | "[('_c0', 0),\n", 225 | " ('_c3', 0),\n", 226 | " ('_c4', 1945752),\n", 227 | " ('_c5', 0),\n", 228 | " ('_c6', 0),\n", 229 | " ('_c7', 1),\n", 230 | " ('_c9', 0),\n", 231 | " ('_c12', 3510294),\n", 232 | " ('_c26', 3526153)]" 233 | ] 234 | }, 235 | "execution_count": 6, 236 | "metadata": {}, 237 | "output_type": "execute_result" 238 | } 239 | ], 240 | "source": [ 241 | "null_counts" 242 | ] 243 | }, 244 | { 245 | "cell_type": "markdown", 246 | "metadata": {}, 247 | "source": [ 248 | "A quick note about Python programming in general, for those who may be new(er) to the language: **one of the core precepts of Python is that most code needs to be _read_ even more often than it needs to be _run_.** For the purpose of clairty I spread the code in that last function out vertically far more than was strictly necessary. This bit of code would do the exact same thing:" 249 | ] 250 | }, 251 | { 252 | "cell_type": "code", 253 | "execution_count": 7, 254 | "metadata": {}, 255 | "outputs": [], 256 | "source": [ 257 | "\"\"\"\n", 258 | "null_counts = []\n", 259 | "for col in df.dtypes:\n", 260 | " if col[1] != 'string':\n", 261 | " null_counts.append(tuple([col[0], df.where(df[col[0]].isNull()).count()])))\n", 262 | "\"\"\";" 263 | ] 264 | }, 265 | { 266 | "cell_type": "markdown", 267 | "metadata": {}, 268 | "source": [ 269 | "But despite accomplishing the same thing in 4 lines instead of 8, it could be argued that it violates the rules of Python style by looking like an unreadable jumble. 
Much more on this can be found in the official Python PEP8 style guide, located at:\n", 270 | "\n", 271 | "https://www.python.org/dev/peps/pep-0008/\n", 272 | "\n", 273 | "If you'll be writing much Python code it's definitely worth looking over. Note, however, that pySpark frequently violates its guidelines." 274 | ] 275 | }, 276 | { 277 | "cell_type": "markdown", 278 | "metadata": { 279 | "collapsed": true 280 | }, 281 | "source": [ 282 | "# Dropping Null Values\n", 283 | "\n", 284 | "There are three things we can do with our `null` values now that we know what's in our dataframe. We can **ignore them**, we can **drop them**, or we can **replace them**. Remember, pySpark dataframes are immutable, so we can't actually change the original dataset. All operations return an entirely new dataframe, though we can tell it to overwrite the existing one with `df = df.some_operation()` which ends up functionaly equivalent." 285 | ] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": 8, 290 | "metadata": { 291 | "collapsed": true 292 | }, 293 | "outputs": [], 294 | "source": [ 295 | "df_drops = df.dropna(how='all', subset=['_c4', '_c12', '_c26'])" 296 | ] 297 | }, 298 | { 299 | "cell_type": "code", 300 | "execution_count": 9, 301 | "metadata": {}, 302 | "outputs": [ 303 | { 304 | "data": { 305 | "text/plain": [ 306 | "1580403" 307 | ] 308 | }, 309 | "execution_count": 9, 310 | "metadata": {}, 311 | "output_type": "execute_result" 312 | } 313 | ], 314 | "source": [ 315 | "df_drops.count()" 316 | ] 317 | }, 318 | { 319 | "cell_type": "markdown", 320 | "metadata": {}, 321 | "source": [ 322 | "The `df.dropna()` method has two arguments here: `how` can equal `'any'` or `'all'`; the first drops a row if _any_ value in it is `null`, the second drops a row only if _all_ values are. \n", 323 | "\n", 324 | "The `subset` argument takes a list of columns that you want to look in for `null` values. It does not actually subset the dataframe; it just checks in those three columns, then drops the row for the entire dataframe if that subset meets the criteria. This can be left off if it should check all columns for `null`.\n", 325 | "\n", 326 | "So we can see above that once we drop all rows that have `null` values in columns `_c4`, `_c12` and `_c26`, we're left with 1,580,403 rows out of the original 3,526,154 we saw when we called `count` on the whole dataframe." 327 | ] 328 | }, 329 | { 330 | "cell_type": "markdown", 331 | "metadata": {}, 332 | "source": [ 333 | "Note: There is a third argument that `dropna()` can take; the `thresh` argument sets a threshold for the number of `null` entries in a row before it drops it. 
It is set to an integer that **specifies how many non-null values the row must have; if it has fewer than that figure it drops the row.** If you specify this argument as we do below, it returns a dataframe where any row with fewer than 2 non-null values in the specified subset is dropped:" 334 | ] 335 | }, 336 | { 337 | "cell_type": "code", 338 | "execution_count": 10, 339 | "metadata": { 340 | "collapsed": true 341 | }, 342 | "outputs": [], 343 | "source": [ 344 | "df_drops2 = df.dropna(thresh=2, subset=['_c4', '_c12', '_c26'])" 345 | ] 346 | }, 347 | { 348 | "cell_type": "code", 349 | "execution_count": 11, 350 | "metadata": {}, 351 | "outputs": [ 352 | { 353 | "data": { 354 | "text/plain": [ 355 | "15860" 356 | ] 357 | }, 358 | "execution_count": 11, 359 | "metadata": {}, 360 | "output_type": "execute_result" 361 | } 362 | ], 363 | "source": [ 364 | "df_drops2.count()" 365 | ] 366 | }, 367 | { 368 | "cell_type": "markdown", 369 | "metadata": {}, 370 | "source": [ 371 | "This leaves us with far fewer rows than the `how='all'` version, which is what we would expect. In the first a row must have *all three columns* as `null` to be dropped; in the second a row is dropped if *two or more of the three* are null, since that leaves it with fewer than two non-null values.\n", 372 | "\n", 373 | "# Replacing Null Values" 374 | ] 375 | }, 376 | { 377 | "cell_type": "code", 378 | "execution_count": 12, 379 | "metadata": { 380 | "collapsed": true 381 | }, 382 | "outputs": [], 383 | "source": [ 384 | "df_fill = df.fillna(0, subset=['_c12'])" 385 | ] 386 | }, 387 | { 388 | "cell_type": "markdown", 389 | "metadata": {}, 390 | "source": [ 391 | "The above line goes through all of column `_c12` and replaces `null` values with the value we specified, in this case a zero. To verify, we re-run the command on our new dataframe to count nulls that we used above:" 392 | ] 393 | }, 394 | { 395 | "cell_type": "code", 396 | "execution_count": 13, 397 | "metadata": {}, 398 | "outputs": [ 399 | { 400 | "data": { 401 | "text/plain": [ 402 | "0" 403 | ] 404 | }, 405 | "execution_count": 13, 406 | "metadata": {}, 407 | "output_type": "execute_result" 408 | } 409 | ], 410 | "source": [ 411 | "df_fill.where( df_fill['_c12'].isNull() ).count()" 412 | ] 413 | }, 414 | { 415 | "cell_type": "markdown", 416 | "metadata": {}, 417 | "source": [ 418 | "We see it replaced all 3,510,294 nulls we found earlier. The first term in `fillna()` can be most any type and any value, and the subset list can be left off if the fill should be applied to all columns (though be sure the dtype is consistent with what is already in that column). Note that `df.replace(a, b)` does this same thing, only you specify `a` as the value to be replaced and `b` as the replacement. It also accepts the optional subset list, but does not take advantage of optimized null handling." 419 | ] 420 | }, 421 | { 422 | "cell_type": "markdown", 423 | "metadata": { 424 | "collapsed": true 425 | }, 426 | "source": [ 427 | "# Imputation\n", 428 | "\n", 429 | "There are many methods for imputing missing data based upon the values around those missing. This includes, for example, moving average windows and fitting local linear models. In pySpark, most of these methods will be handled by _window functions_, which you can read more about here:\n", 430 | "\n", 431 | " https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html\n", 432 | " \n", 433 | "These methods go beyond what we'll cover in this tutorial, though they may be covered in a future tutorial."
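Before moving on, a short sketch tying together the `fillna` and `replace` behavior described above (assuming the same `df`; the fill and replacement values here are arbitrary):

```python
# fillna can also take a dict of {column: value}, filling several columns in one pass
df_fill_multi = df.fillna({'_c4': 0.0, '_c12': 0})

# replace swaps one existing value for another; here every 0 in _c12 becomes -1
df_replaced = df_fill_multi.replace(0, -1, subset=['_c12'])

df_replaced.where(df_replaced['_c12'] == -1).count()
```
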
434 | ] 435 | }, 436 | { 437 | "cell_type": "code", 438 | "execution_count": null, 439 | "metadata": { 440 | "collapsed": true 441 | }, 442 | "outputs": [], 443 | "source": [] 444 | } 445 | ], 446 | "metadata": { 447 | "kernelspec": { 448 | "display_name": "Python 2", 449 | "language": "python", 450 | "name": "python2" 451 | }, 452 | "language_info": { 453 | "codemirror_mode": { 454 | "name": "ipython", 455 | "version": 2 456 | }, 457 | "file_extension": ".py", 458 | "mimetype": "text/x-python", 459 | "name": "python", 460 | "nbconvert_exporter": "python", 461 | "pygments_lexer": "ipython2", 462 | "version": "2.7.12" 463 | } 464 | }, 465 | "nbformat": 4, 466 | "nbformat_minor": 1 467 | } 468 | -------------------------------------------------------------------------------- /05_moving-average-imputation.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Moving Average Imputation_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 31 Jul 2017, Spark v2.1_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_**Abstract:** In this guide we will demonstrate pySpark's windowing function by imputing missing values using a moving average._\n", 24 | "\n", 25 | "_**Main operations used:** `window`, `partitionBy`, `orderBy`, `over`, `when`, `otherwise`_" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "Think of windowing as **defining a range around any given row that counts as the window**, then **performing an operation just within that window.** So if we define our window range as -1 to +1, then it will go over the data and examine every row along with its preceeding and following rows. If we define it as 0 to +3, it will look at every row and the three following rows. Obviously for this to make sense **the rows must be in a meaningful order**, and we have to **deliniate any groups within the data.** That is to say, if we have panel data with 12 monthly observations for all 50 states, and we don't group before we define a window, then a -1 to +1 window might include November and December for one state, then January for the next state.\n", 40 | "\n", 41 | "There are many ways to handle imputing missing data, and many uses for pySpark's windowing function. 
We will demonstrate the intersection of these two concepts using a modified version of the dimaonds dataset, where several values in the `'price'` column have been deleted:" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": 1, 47 | "metadata": { 48 | "collapsed": true 49 | }, 50 | "outputs": [], 51 | "source": [ 52 | "df = spark.read.csv('s3://ui-spark-social-science-public/data/diamonds_nulls.csv', \n", 53 | " inferSchema=True, header=True, sep=',', nullValue='')" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 2, 59 | "metadata": {}, 60 | "outputs": [ 61 | { 62 | "name": "stdout", 63 | "output_type": "stream", 64 | "text": [ 65 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+\n", 66 | "|carat| cut|color|clarity|depth|table|price| x| y| z|\n", 67 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+\n", 68 | "| 0.23| Ideal| E| SI2| 61.5| 55.0| 326|3.95|3.98|2.43|\n", 69 | "| 0.21| Premium| E| SI1| 59.8| 61.0| 326|3.89|3.84|2.31|\n", 70 | "| 0.23| Good| E| VS1| 56.9| 65.0| 327|4.05|4.07|2.31|\n", 71 | "| 0.29| Premium| I| VS2| 62.4| 58.0| 334| 4.2|4.23|2.63|\n", 72 | "| 0.31| Good| J| SI2| 63.3| 58.0| 335|4.34|4.35|2.75|\n", 73 | "| 0.24|Very Good| J| VVS2| 62.8| 57.0| 336|3.94|3.96|2.48|\n", 74 | "| 0.24|Very Good| I| VVS1| 62.3| 57.0| 336|3.95|3.98|2.47|\n", 75 | "| 0.26|Very Good| H| SI1| 61.9| 55.0| 337|4.07|4.11|2.53|\n", 76 | "| 0.22| Fair| E| VS2| 65.1| 61.0| 337|3.87|3.78|2.49|\n", 77 | "| 0.23|Very Good| H| VS1| 59.4| 61.0| 338| 4.0|4.05|2.39|\n", 78 | "| 0.3| Good| J| SI1| 64.0| 55.0| 339|4.25|4.28|2.73|\n", 79 | "| 0.23| Ideal| J| VS1| 62.8| 56.0| 340|3.93| 3.9|2.46|\n", 80 | "| 0.22| Premium| F| SI1| 60.4| 61.0| 342|3.88|3.84|2.33|\n", 81 | "| 0.31| Ideal| J| SI2| 62.2| 54.0| 344|4.35|4.37|2.71|\n", 82 | "| 0.2| Premium| E| SI2| 60.2| 62.0| 345|3.79|3.75|2.27|\n", 83 | "| 0.32| Premium| E| I1| 60.9| 58.0| 345|4.38|4.42|2.68|\n", 84 | "| 0.3| Ideal| I| SI2| 62.0| 54.0| 348|4.31|4.34|2.68|\n", 85 | "| 0.3| Good| J| SI1| 63.4| 54.0| 351|4.23|4.29| 2.7|\n", 86 | "| 0.3| Good| J| SI1| 63.8| 56.0| 351|4.23|4.26|2.71|\n", 87 | "| 0.3|Very Good| J| SI1| 62.7| 59.0| 351|4.21|4.27|2.66|\n", 88 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+\n", 89 | "only showing top 20 rows\n", 90 | "\n" 91 | ] 92 | } 93 | ], 94 | "source": [ 95 | "df.show()" 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "We can also show the subset where the value for `'price'` has been replaced with `null`:" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": 3, 108 | "metadata": {}, 109 | "outputs": [ 110 | { 111 | "name": "stdout", 112 | "output_type": "stream", 113 | "text": [ 114 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+\n", 115 | "|carat| cut|color|clarity|depth|table|price| x| y| z|\n", 116 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+\n", 117 | "| 0.24| Premium| I| VS1| 62.5| 57.0| null|3.97|3.94|2.47|\n", 118 | "| 0.35| Ideal| I| VS1| 60.9| 57.0| null|4.54|4.59|2.78|\n", 119 | "| 0.7| Good| F| VS1| 59.4| 62.0| null|5.71|5.76| 3.4|\n", 120 | "| 0.7| Fair| F| VS2| 64.5| 57.0| null|5.57|5.53|3.58|\n", 121 | "| 0.7| Premium| E| SI1| 61.2| 57.0| null|5.73|5.68|3.49|\n", 122 | "| 0.73| Premium| F| VS2| 62.5| 57.0| null|5.75| 5.7|3.58|\n", 123 | "| 1.01| Ideal| F| SI1| 62.7| 55.0| null|6.45| 6.4|4.03|\n", 124 | "| 1.03| Ideal| H| SI1| 61.1| 56.0| null| 6.5|6.53|3.98|\n", 125 | "| 1.28| Ideal| I| 
SI2| 61.7| 59.0| null|6.96|6.92|4.28|\n", 126 | "| 0.37| Premium| D| SI1| 60.4| 59.0| null|4.68|4.62|2.81|\n", 127 | "| 0.5| Ideal| J| VS2| 61.7| 57.0| null|5.09|5.12|3.15|\n", 128 | "| 0.34| Ideal| E| VS1| 61.2| 55.0| null|4.52|4.56|2.77|\n", 129 | "| 0.52| Ideal| D| VS2| 61.8| 55.0| null|5.19|5.23|3.22|\n", 130 | "| 0.71|Very Good| J| VVS2| 61.1| 58.0| null| 5.7|5.75| 3.5|\n", 131 | "| 0.76| Premium| H| SI1| 59.8| 57.0| null|5.93|5.91|3.54|\n", 132 | "| 0.58| Ideal| F| VS1| 60.3| 57.0| null|5.47|5.44|3.29|\n", 133 | "| 0.7|Very Good| E| VS1| 63.4| 62.0| null|5.64|5.56|3.55|\n", 134 | "| 0.92| Premium| D| I1| 63.0| 58.0| null|6.18|6.13|3.88|\n", 135 | "| 0.88|Very Good| I| SI1| 62.5| 56.0| null|6.06|6.19|3.83|\n", 136 | "| 0.7| Good| H| VVS2| 58.9| 61.5| null|5.77|5.84|3.42|\n", 137 | "| 0.7|Very Good| D| SI1| 62.8| 60.0| null|5.66|5.68|3.56|\n", 138 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+\n", 139 | "\n" 140 | ] 141 | } 142 | ], 143 | "source": [ 144 | "df.where(df['price'].isNull()).show(50)" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "# Defining a Window\n", 152 | "\n", 153 | "The first step to windowing is to **define the window parameters.** We do this by combining three elements: the grouping (`partitionBy`), the ordering (`orderBy`) and the range (`rowsBetween`). The window we define is then assigned to a variable, which we will use to perform computations." 154 | ] 155 | }, 156 | { 157 | "cell_type": "code", 158 | "execution_count": 4, 159 | "metadata": { 160 | "collapsed": true 161 | }, 162 | "outputs": [], 163 | "source": [ 164 | "from pyspark.sql import Window" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": 5, 170 | "metadata": { 171 | "collapsed": true 172 | }, 173 | "outputs": [], 174 | "source": [ 175 | "window = Window.partitionBy('cut', 'clarity').orderBy('price').rowsBetween(-3, 3)" 176 | ] 177 | }, 178 | { 179 | "cell_type": "code", 180 | "execution_count": 6, 181 | "metadata": {}, 182 | "outputs": [ 183 | { 184 | "data": { 185 | "text/plain": [ 186 | "" 187 | ] 188 | }, 189 | "execution_count": 6, 190 | "metadata": {}, 191 | "output_type": "execute_result" 192 | } 193 | ], 194 | "source": [ 195 | "window" 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": {}, 201 | "source": [ 202 | "**We now have an object of type `WindowSpec` that knows what the window should look like. **\n", 203 | "\n", 204 | "The first portion, `partionBy('cut', 'clarity')` is somewhat misleadningly named, as it is not related to *partitions* in Spark, which are segments of the distributed data, roughly analogous to individual computers within the cluster. It is much more closely related to `groupBy`, as discussed for example in the *basics 2.ipynb* tutorial. It tells pySpark that the windows should only be computed within each grouping of the columns `'cut'` and `'clarity'`. Like `groupBy`, `partitionBy` can take one or more criteria.\n", 205 | "\n", 206 | "The second portion, `orderBy('price')` simply sorts the data by price *within each partitionBy column*.\n", 207 | "\n", 208 | "And finally, `rowsBetween(-3, 3)` specifies the size of the window. In this case it includes seven rows in each window - the current row plus the three before and the three after.\n", 209 | "\n", 210 | "# Operations Over a Window\n", 211 | "\n", 212 | "The next step is to apply this window to an operation, which we can do using the `over` method. 
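As a side note, any aggregate function can be combined with `over`; a sketch with two alternatives that we will not use below (assuming the `window` just defined):

```python
# Rename the import so it does not shadow Python's built-in max
from pyspark.sql.functions import count, max as sql_max

# Each call builds a column expression that evaluates its aggregate over the window
window_max = sql_max(df['price']).over(window)
window_count = count(df['price']).over(window)
```
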
Here we will use `mean` as our aggregator, but you can do this with any valid aggregator function." 213 | ] 214 | }, 215 | { 216 | "cell_type": "code", 217 | "execution_count": 7, 218 | "metadata": { 219 | "collapsed": true 220 | }, 221 | "outputs": [], 222 | "source": [ 223 | "from pyspark.sql.functions import mean" 224 | ] 225 | }, 226 | { 227 | "cell_type": "code", 228 | "execution_count": 8, 229 | "metadata": {}, 230 | "outputs": [ 231 | { 232 | "data": { 233 | "text/plain": [ 234 | "Column" 235 | ] 236 | }, 237 | "execution_count": 8, 238 | "metadata": {}, 239 | "output_type": "execute_result" 240 | } 241 | ], 242 | "source": [ 243 | "moving_avg = mean(df['price']).over(window)\n", 244 | "moving_avg" 245 | ] 246 | }, 247 | { 248 | "cell_type": "markdown", 249 | "metadata": {}, 250 | "source": [ 251 | "What this creates is a *column object* that contains the set of SQL instructions needed to create the data. It hasn't been discussed in these tutorials, but pySpark is capable of taking SQL formatted instructions for most operations, and in the case of windowing, SQL is what underlies the Python code.\n", 252 | "\n", 253 | "**Remember that pySpark dataframes are immutable, so we cannot just fill in missing values.** Instead we have to create a new column, then recast the dataframe to include it:" 254 | ] 255 | }, 256 | { 257 | "cell_type": "code", 258 | "execution_count": 9, 259 | "metadata": {}, 260 | "outputs": [ 261 | { 262 | "name": "stdout", 263 | "output_type": "stream", 264 | "text": [ 265 | "+-----+-------+-----+-------+-----+-----+-----+----+----+----+------------------+\n", 266 | "|carat| cut|color|clarity|depth|table|price| x| y| z| moving_avg|\n", 267 | "+-----+-------+-----+-------+-----+-----+-----+----+----+----+------------------+\n", 268 | "| 0.73|Premium| F| VS2| 62.5| 57.0| null|5.75| 5.7|3.58| 356.0|\n", 269 | "| 0.29|Premium| I| VS2| 62.4| 58.0| 334| 4.2|4.23|2.63| 358.75|\n", 270 | "| 0.2|Premium| E| VS2| 59.8| 62.0| 367|3.79|3.77|2.26| 360.4|\n", 271 | "| 0.2|Premium| E| VS2| 59.0| 60.0| 367|3.81|3.78|2.24| 361.5|\n", 272 | "| 0.2|Premium| E| VS2| 61.1| 59.0| 367|3.81|3.78|2.32| 362.2857142857143|\n", 273 | "| 0.2|Premium| E| VS2| 59.7| 62.0| 367|3.84| 3.8|2.28| 367.0|\n", 274 | "| 0.2|Premium| F| VS2| 62.6| 59.0| 367|3.73|3.71|2.33|367.14285714285717|\n", 275 | "| 0.2|Premium| D| VS2| 62.3| 60.0| 367|3.73|3.68|2.31| 367.2857142857143|\n", 276 | "| 0.2|Premium| D| VS2| 61.7| 60.0| 367|3.77|3.72|2.31|369.14285714285717|\n", 277 | "| 0.3|Premium| J| VS2| 62.2| 58.0| 368|4.28| 4.3|2.67| 371.0|\n", 278 | "| 0.3|Premium| J| VS2| 60.6| 59.0| 368|4.34|4.38|2.64| 373.7142857142857|\n", 279 | "| 0.31|Premium| J| VS2| 62.5| 60.0| 380|4.31|4.36|2.71|376.42857142857144|\n", 280 | "| 0.31|Premium| J| VS2| 62.4| 60.0| 380|4.29|4.33|2.69|379.14285714285717|\n", 281 | "| 0.21|Premium| E| VS2| 60.5| 59.0| 386|3.87|3.83|2.33| 381.7142857142857|\n", 282 | "| 0.21|Premium| E| VS2| 59.6| 56.0| 386|3.93|3.89|2.33| 384.2857142857143|\n", 283 | "| 0.21|Premium| D| VS2| 61.6| 59.0| 386|3.82|3.78|2.34|385.14285714285717|\n", 284 | "| 0.21|Premium| D| VS2| 60.6| 60.0| 386|3.85|3.81|2.32| 387.0|\n", 285 | "| 0.21|Premium| D| VS2| 59.1| 62.0| 386|3.89|3.86|2.29| 388.0|\n", 286 | "| 0.21|Premium| D| VS2| 58.3| 59.0| 386|3.96|3.93| 2.3|389.57142857142856|\n", 287 | "| 0.32|Premium| J| VS2| 61.9| 58.0| 393|4.35|4.38| 2.7|392.14285714285717|\n", 288 | "+-----+-------+-----+-------+-----+-----+-----+----+----+----+------------------+\n", 289 | "only showing top 20 rows\n", 290 | "\n" 291 | ] 292 | 
} 293 | ], 294 | "source": [ 295 | "df = df.withColumn('moving_avg', moving_avg)\n", 296 | "df.show()" 297 | ] 298 | }, 299 | { 300 | "cell_type": "markdown", 301 | "metadata": {}, 302 | "source": [ 303 | "And it returns a dataframe sorted by the specifications from our window function with the new column fully calculated. Note that the first entry computes a window of 0, +3, the second entry a window of -1, +3, the third -2, +3 and the fourth finally -3, +3. It would be reasonable to expect it to compute `null` values where the full window range can't be operated over; neither way is necessarily wrong, but make sure you note how pySpark handles it.\n", 304 | "\n", 305 | "# Imputation\n", 306 | "\n", 307 | "Due to immutability, we will recast the dataframe with yet another column that takes the value from the `'price'` column if it **is not `null`**, and fills in the value from the `'moving_avg'` column if it **is `null`**. We will do this using pySpark's built in *`when... otherwise`* conditionals. It is an intuitive, if not very Pythonic, formulation. Then we cast the condition into a new column named `'imputed'`. Note that the `withColumn` code is split onto multiple lines just for readability; the open brackets tell Python automatically that the command is meant to continue seamlessly to the next line, ignoring leading whitespace." 308 | ] 309 | }, 310 | { 311 | "cell_type": "code", 312 | "execution_count": 10, 313 | "metadata": { 314 | "collapsed": true 315 | }, 316 | "outputs": [], 317 | "source": [ 318 | "from pyspark.sql.functions import when, col\n", 319 | "\n", 320 | "def replace_null(orig, ma):\n", 321 | " return when(orig.isNull(), ma).otherwise(orig)" 322 | ] 323 | }, 324 | { 325 | "cell_type": "code", 326 | "execution_count": 11, 327 | "metadata": { 328 | "collapsed": true 329 | }, 330 | "outputs": [], 331 | "source": [ 332 | "df_new = df.withColumn('imputed', \n", 333 | " replace_null(col('price'), col('moving_avg')) \n", 334 | " )" 335 | ] 336 | }, 337 | { 338 | "cell_type": "code", 339 | "execution_count": 12, 340 | "metadata": {}, 341 | "outputs": [ 342 | { 343 | "name": "stdout", 344 | "output_type": "stream", 345 | "text": [ 346 | "+-----+-------+-----+-------+-----+-----+-----+----+----+----+------------------+-------+\n", 347 | "|carat| cut|color|clarity|depth|table|price| x| y| z| moving_avg|imputed|\n", 348 | "+-----+-------+-----+-------+-----+-----+-----+----+----+----+------------------+-------+\n", 349 | "| 0.73|Premium| F| VS2| 62.5| 57.0| null|5.75| 5.7|3.58| 356.0| 356.0|\n", 350 | "| 0.29|Premium| I| VS2| 62.4| 58.0| 334| 4.2|4.23|2.63| 358.75| 334.0|\n", 351 | "| 0.2|Premium| E| VS2| 59.8| 62.0| 367|3.79|3.77|2.26| 360.4| 367.0|\n", 352 | "| 0.2|Premium| E| VS2| 59.0| 60.0| 367|3.81|3.78|2.24| 361.5| 367.0|\n", 353 | "| 0.2|Premium| E| VS2| 61.1| 59.0| 367|3.81|3.78|2.32| 362.2857142857143| 367.0|\n", 354 | "| 0.2|Premium| E| VS2| 59.7| 62.0| 367|3.84| 3.8|2.28| 367.0| 367.0|\n", 355 | "| 0.2|Premium| F| VS2| 62.6| 59.0| 367|3.73|3.71|2.33|367.14285714285717| 367.0|\n", 356 | "| 0.2|Premium| D| VS2| 62.3| 60.0| 367|3.73|3.68|2.31| 367.2857142857143| 367.0|\n", 357 | "| 0.2|Premium| D| VS2| 61.7| 60.0| 367|3.77|3.72|2.31|369.14285714285717| 367.0|\n", 358 | "| 0.3|Premium| J| VS2| 62.2| 58.0| 368|4.28| 4.3|2.67| 371.0| 368.0|\n", 359 | "| 0.3|Premium| J| VS2| 60.6| 59.0| 368|4.34|4.38|2.64| 373.7142857142857| 368.0|\n", 360 | "| 0.31|Premium| J| VS2| 62.5| 60.0| 380|4.31|4.36|2.71|376.42857142857144| 380.0|\n", 361 | "| 0.31|Premium| J| VS2| 62.4| 60.0| 
380|4.29|4.33|2.69|379.14285714285717| 380.0|\n", 362 | "| 0.21|Premium| E| VS2| 60.5| 59.0| 386|3.87|3.83|2.33| 381.7142857142857| 386.0|\n", 363 | "| 0.21|Premium| E| VS2| 59.6| 56.0| 386|3.93|3.89|2.33| 384.2857142857143| 386.0|\n", 364 | "| 0.21|Premium| D| VS2| 61.6| 59.0| 386|3.82|3.78|2.34|385.14285714285717| 386.0|\n", 365 | "| 0.21|Premium| D| VS2| 60.6| 60.0| 386|3.85|3.81|2.32| 387.0| 386.0|\n", 366 | "| 0.21|Premium| D| VS2| 59.1| 62.0| 386|3.89|3.86|2.29| 388.0| 386.0|\n", 367 | "| 0.21|Premium| D| VS2| 58.3| 59.0| 386|3.96|3.93| 2.3|389.57142857142856| 386.0|\n", 368 | "| 0.32|Premium| J| VS2| 61.9| 58.0| 393|4.35|4.38| 2.7|392.14285714285717| 393.0|\n", 369 | "+-----+-------+-----+-------+-----+-----+-----+----+----+----+------------------+-------+\n", 370 | "only showing top 20 rows\n", 371 | "\n" 372 | ] 373 | } 374 | ], 375 | "source": [ 376 | "df_new.show()" 377 | ] 378 | }, 379 | { 380 | "cell_type": "markdown", 381 | "metadata": {}, 382 | "source": [ 383 | "We can see in the above, on the first row the price is `null`, and the imputed column shows the moving average value. On all the other rows the price has an actual value, and the imputed column uses those values. Below we can look again at all the rows where price is `null`:" 384 | ] 385 | }, 386 | { 387 | "cell_type": "code", 388 | "execution_count": 13, 389 | "metadata": {}, 390 | "outputs": [ 391 | { 392 | "name": "stdout", 393 | "output_type": "stream", 394 | "text": [ 395 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+-----------------+-----------------+\n", 396 | "|carat| cut|color|clarity|depth|table|price| x| y| z| moving_avg| imputed|\n", 397 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+-----------------+-----------------+\n", 398 | "| 0.73| Premium| F| VS2| 62.5| 57.0| null|5.75| 5.7|3.58| 356.0| 356.0|\n", 399 | "| 0.92| Premium| D| I1| 63.0| 58.0| null|6.18|6.13|3.88|394.3333333333333|394.3333333333333|\n", 400 | "| 0.71|Very Good| J| VVS2| 61.1| 58.0| null| 5.7|5.75| 3.5| 356.0| 356.0|\n", 401 | "| 0.35| Ideal| I| VS1| 60.9| 57.0| null|4.54|4.59|2.78| 340.0| 340.0|\n", 402 | "| 0.34| Ideal| E| VS1| 61.2| 55.0| null|4.52|4.56|2.77| 349.0| 349.0|\n", 403 | "| 0.58| Ideal| F| VS1| 60.3| 57.0| null|5.47|5.44|3.29| 357.0| 357.0|\n", 404 | "| 0.7| Good| F| VS1| 59.4| 62.0| null|5.71|5.76| 3.4|353.6666666666667|353.6666666666667|\n", 405 | "| 0.7| Premium| E| SI1| 61.2| 57.0| null|5.73|5.68|3.49| 326.0| 326.0|\n", 406 | "| 0.37| Premium| D| SI1| 60.4| 59.0| null|4.68|4.62|2.81| 334.0| 334.0|\n", 407 | "| 0.76| Premium| H| SI1| 59.8| 57.0| null|5.93|5.91|3.54|343.6666666666667|343.6666666666667|\n", 408 | "| 0.24| Premium| I| VS1| 62.5| 57.0| null|3.97|3.94|2.47| 386.0| 386.0|\n", 409 | "| 0.88|Very Good| I| SI1| 62.5| 56.0| null|6.06|6.19|3.83| 344.0| 344.0|\n", 410 | "| 0.7|Very Good| D| SI1| 62.8| 60.0| null|5.66|5.68|3.56| 347.0| 347.0|\n", 411 | "| 0.7| Good| H| VVS2| 58.9| 61.5| null|5.77|5.84|3.42|408.3333333333333|408.3333333333333|\n", 412 | "| 0.7| Fair| F| VS2| 64.5| 57.0| null|5.57|5.53|3.58|444.6666666666667|444.6666666666667|\n", 413 | "| 0.7|Very Good| E| VS1| 63.4| 62.0| null|5.64|5.56|3.55|349.3333333333333|349.3333333333333|\n", 414 | "| 1.28| Ideal| I| SI2| 61.7| 59.0| null|6.96|6.92|4.28|339.3333333333333|339.3333333333333|\n", 415 | "| 0.5| Ideal| J| VS2| 61.7| 57.0| null|5.09|5.12|3.15| 367.0| 367.0|\n", 416 | "| 0.52| Ideal| D| VS2| 61.8| 55.0| null|5.19|5.23|3.22| 367.0| 367.0|\n", 417 | "| 1.01| Ideal| F| SI1| 62.7| 55.0| null|6.45| 6.4|4.03| 
360.0| 360.0|\n", 418 | "| 1.03| Ideal| H| SI1| 61.1| 56.0| null| 6.5|6.53|3.98|361.3333333333333|361.3333333333333|\n", 419 | "+-----+---------+-----+-------+-----+-----+-----+----+----+----+-----------------+-----------------+\n", 420 | "\n" 421 | ] 422 | } 423 | ], 424 | "source": [ 425 | "df_new.where(df['price'].isNull()).show(50)" 426 | ] 427 | }, 428 | { 429 | "cell_type": "markdown", 430 | "metadata": { 431 | "collapsed": true 432 | }, 433 | "source": [ 434 | "And we see that in all cases, the `'imputed'` column has the moving average value listed." 435 | ] 436 | }, 437 | { 438 | "cell_type": "code", 439 | "execution_count": null, 440 | "metadata": { 441 | "collapsed": true 442 | }, 443 | "outputs": [], 444 | "source": [] 445 | } 446 | ], 447 | "metadata": { 448 | "kernelspec": { 449 | "display_name": "Python 2", 450 | "language": "python", 451 | "name": "python2" 452 | }, 453 | "language_info": { 454 | "codemirror_mode": { 455 | "name": "ipython", 456 | "version": 2 457 | }, 458 | "file_extension": ".py", 459 | "mimetype": "text/x-python", 460 | "name": "python", 461 | "nbconvert_exporter": "python", 462 | "pygments_lexer": "ipython2", 463 | "version": "2.7.12" 464 | } 465 | }, 466 | "nbformat": 4, 467 | "nbformat_minor": 1 468 | } 469 | -------------------------------------------------------------------------------- /06_pivoting.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Pivoting Data_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 5 Aug 2016, Spark v2.0_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: This guide will discuss the differences between pivoting and reshaping, and illustrate what capabilities exist within Spark._\n", 24 | "\n", 25 | "_Main operations used: `groupBy`, `pivot`, `sum`_" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "# Reshaping\n", 40 | "\n", 41 | "In pySpark pivoting involves an aggregation. If what you're looking for is reshaping, where a dataset is turned from wide to long or vice versa without the loss of any information, then that is not currently *explicitly* implemented in Spark. As we will show at the end however, **it is possible to make the `pivot` function work like a reshape from long to wide**, but not in reverse. It is possible to \"melt\" a dataset from wide to long, but it would require the writing of a loop to do it manually, which we do not demonstrate here.\n", 42 | "\n", 43 | "The likely reason for this lack of functionality is that it's an incredibly costly operation; if you've performed it in Stata or SAS, for example, it probably took a while to compute even on a small dataset. Doing it on very large data that is distributed across many nodes would involve a lot of shuffling and duplicating. \n", 44 | "\n", 45 | "**It is also important to note that many of the results you might be looking for out of a reshape can be accomplished via other, more efficient means,** such as `groupby`, particularly if your goal is various types of summary statistics. 
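For instance, here is a minimal sketch of pulling group-level summary statistics straight from long data with `groupBy`/`agg`, with no reshape at all (assuming a dataframe `df` with `'state'`, `'hq'` and `'jobs'` columns like the example built below):

```python
from pyspark.sql import functions as F

# One row per (state, hq) group, with the summaries as ordinary columns
summary = (df.groupBy('state', 'hq')
             .agg(F.sum('jobs').alias('total_jobs'),
                  F.avg('jobs').alias('avg_jobs')))
summary.show()
```
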
So while we can accomplish a reshape in some cases, if your data is very large it's best to think about ways to avoid having to completely restructure it." 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "# Pivoting\n", 53 | "\n", 54 | "To illustrate how pivoting works, we create a simple dataset to experiment with:" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 1, 60 | "metadata": { 61 | "collapsed": true 62 | }, 63 | "outputs": [], 64 | "source": [ 65 | "from pyspark.sql import Row\n", 66 | "\n", 67 | "row = Row('state', 'industry', 'hq', 'jobs')\n", 68 | "\n", 69 | "df = sc.parallelize([\n", 70 | " row('MI', 'auto', 'domestic', 716),\n", 71 | " row('MI', 'auto', 'foreign', 123),\n", 72 | " row('MI', 'auto', 'domestic', 1340),\n", 73 | " row('MI', 'retail', 'foreign', 12),\n", 74 | " row('MI', 'retail', 'foreign', 33),\n", 75 | " row('OH', 'auto', 'domestic', 349),\n", 76 | " row('OH', 'auto', 'foreign', 101),\n", 77 | " row('OH', 'auto', 'foreign', 77),\n", 78 | " row('OH', 'retail', 'domestic', 45),\n", 79 | " row('OH', 'retail', 'foreign', 12)\n", 80 | " ]).toDF()" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 2, 86 | "metadata": {}, 87 | "outputs": [ 88 | { 89 | "name": "stdout", 90 | "output_type": "stream", 91 | "text": [ 92 | "+-----+--------+--------+----+\n", 93 | "|state|industry| hq|jobs|\n", 94 | "+-----+--------+--------+----+\n", 95 | "| MI| auto|domestic| 716|\n", 96 | "| MI| auto| foreign| 123|\n", 97 | "| MI| auto|domestic|1340|\n", 98 | "| MI| retail| foreign| 12|\n", 99 | "| MI| retail| foreign| 33|\n", 100 | "| OH| auto|domestic| 349|\n", 101 | "| OH| auto| foreign| 101|\n", 102 | "| OH| auto| foreign| 77|\n", 103 | "| OH| retail|domestic| 45|\n", 104 | "| OH| retail| foreign| 12|\n", 105 | "+-----+--------+--------+----+\n", 106 | "\n" 107 | ] 108 | } 109 | ], 110 | "source": [ 111 | "df.show()" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "Pivot operations must always be preceeded by a groupBy operation. In our first case we will simply pivot to show total domestic versus foreign jobs in each of our two states:" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": 3, 124 | "metadata": { 125 | "collapsed": true 126 | }, 127 | "outputs": [], 128 | "source": [ 129 | "df_pivot1 = df.groupby('state').pivot('hq', values=['domestic', 'foreign']).sum('jobs')" 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "execution_count": 4, 135 | "metadata": {}, 136 | "outputs": [ 137 | { 138 | "name": "stdout", 139 | "output_type": "stream", 140 | "text": [ 141 | "+-----+--------+-------+\n", 142 | "|state|domestic|foreign|\n", 143 | "+-----+--------+-------+\n", 144 | "| MI| 2056| 168|\n", 145 | "| OH| 394| 190|\n", 146 | "+-----+--------+-------+\n", 147 | "\n" 148 | ] 149 | } 150 | ], 151 | "source": [ 152 | "df_pivot1.show()" 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "metadata": {}, 158 | "source": [ 159 | "**Note that the `values=['domestic', 'foreign']` part of the pivot method is optional.** If we do not supply a list then pySpark will attempt to infer the values by looking through the pivot column we specified, but naturally that requires more processing than if we specify its contents up front. 
It's important `values` is specified correctly however; **pySpark will not look to see if you've left any possible values out, and will just discard any observations that don't match what you told it to look for.** If in doubt, it may be better to accept the inefficiency and let pySpark automatically determine them.\n", 160 | "\n", 161 | "As your datasets get larger this sort of optimization becomes more important. Also note that Spark has a hard limit of 10,000 columns created as a result of a pivot command.\n", 162 | "\n", 163 | "Here's another example, this time pivoting by both `state` and by `industry` by simply changing the groupby criteria:" 164 | ] 165 | }, 166 | { 167 | "cell_type": "code", 168 | "execution_count": 5, 169 | "metadata": { 170 | "collapsed": true 171 | }, 172 | "outputs": [], 173 | "source": [ 174 | "df_pivot = df.groupBy('state', 'industry').pivot('hq', values=['domestic', 'foreign']).sum('jobs')" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": 6, 180 | "metadata": {}, 181 | "outputs": [ 182 | { 183 | "name": "stdout", 184 | "output_type": "stream", 185 | "text": [ 186 | "+-----+--------+--------+-------+\n", 187 | "|state|industry|domestic|foreign|\n", 188 | "+-----+--------+--------+-------+\n", 189 | "| OH| retail| 45| 12|\n", 190 | "| MI| auto| 2056| 123|\n", 191 | "| OH| auto| 349| 178|\n", 192 | "| MI| retail| null| 45|\n", 193 | "+-----+--------+--------+-------+\n", 194 | "\n" 195 | ] 196 | } 197 | ], 198 | "source": [ 199 | "df_pivot.show()" 200 | ] 201 | }, 202 | { 203 | "cell_type": "markdown", 204 | "metadata": {}, 205 | "source": [ 206 | "The `sum` method at the end **can be replaced with other aggregators as necessary**, for example with `avg`.\n", 207 | "\n", 208 | "# Using Pivot to Reshape Long to Wide\n", 209 | "\n", 210 | "Pivot requires an aggregation argument at the end, as we have been using. However, what if *each row is uniquely defined* by the `groupby` and `pivot` columns?" 
211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": 7, 216 | "metadata": { 217 | "collapsed": true 218 | }, 219 | "outputs": [], 220 | "source": [ 221 | "row = Row('state', 'industry', 'hq', 'jobs', 'firm')\n", 222 | "\n", 223 | "df = sc.parallelize([\n", 224 | " row('MI', 'auto', 'domestic', 716, 'A'),\n", 225 | " row('MI', 'auto', 'foreign', 123, 'B'),\n", 226 | " row('MI', 'auto', 'domestic', 1340, 'C'),\n", 227 | " row('MI', 'retail', 'foreign', 12, 'D'),\n", 228 | " row('MI', 'retail', 'foreign', 33, 'E'),\n", 229 | " row('OH', 'retail', 'mixed', 978, 'F'),\n", 230 | " row('OH', 'auto', 'domestic', 349, 'G'),\n", 231 | " row('OH', 'auto', 'foreign', 101, 'H'),\n", 232 | " row('OH', 'auto', 'foreign', 77, 'I'),\n", 233 | " row('OH', 'retail', 'domestic', 45, 'J'),\n", 234 | " row('OH', 'retail', 'foreign', 12, 'K'),\n", 235 | " row('OH', 'retail', 'mixed', 1, 'L'),\n", 236 | " row('OH', 'auto', 'other', 120, 'M'),\n", 237 | " row('OH', 'auto', 'domestic', 96, 'A'),\n", 238 | " row('MI', 'auto', 'foreign', 1117, 'A'),\n", 239 | " row('MI', 'retail', 'mixed', 9, 'F'),\n", 240 | " row('MI', 'auto', 'foreign', 11, 'B')\n", 241 | " ]).toDF()" 242 | ] 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": 8, 247 | "metadata": {}, 248 | "outputs": [ 249 | { 250 | "name": "stdout", 251 | "output_type": "stream", 252 | "text": [ 253 | "+-----+--------+--------+----+----+\n", 254 | "|state|industry| hq|jobs|firm|\n", 255 | "+-----+--------+--------+----+----+\n", 256 | "| MI| auto|domestic| 716| A|\n", 257 | "| MI| auto| foreign| 123| B|\n", 258 | "| MI| auto|domestic|1340| C|\n", 259 | "| MI| retail| foreign| 12| D|\n", 260 | "| MI| retail| foreign| 33| E|\n", 261 | "| OH| retail| mixed| 978| F|\n", 262 | "| OH| auto|domestic| 349| G|\n", 263 | "| OH| auto| foreign| 101| H|\n", 264 | "| OH| auto| foreign| 77| I|\n", 265 | "| OH| retail|domestic| 45| J|\n", 266 | "| OH| retail| foreign| 12| K|\n", 267 | "| OH| retail| mixed| 1| L|\n", 268 | "| OH| auto| other| 120| M|\n", 269 | "| OH| auto|domestic| 96| A|\n", 270 | "| MI| auto| foreign|1117| A|\n", 271 | "| MI| retail| mixed| 9| F|\n", 272 | "| MI| auto| foreign| 11| B|\n", 273 | "+-----+--------+--------+----+----+\n", 274 | "\n" 275 | ] 276 | } 277 | ], 278 | "source": [ 279 | "df.show()" 280 | ] 281 | }, 282 | { 283 | "cell_type": "markdown", 284 | "metadata": {}, 285 | "source": [ 286 | "We've now added a unique identifier for each firm, which we will use instead of state and industry as our groupby criteria. We also expanded the number of values in the `hq` column:" 287 | ] 288 | }, 289 | { 290 | "cell_type": "markdown", 291 | "metadata": {}, 292 | "source": [ 293 | "We can now uniquely identify each row observation by the combination of its `firm`, `state` and `industry` entries, then show the different entries for `hq` for that observation as their own columns. 
It drops any columns that we don't use anywhere, but if we wanted to keep them we could just include them in the groupby criteria without changing the logic of the operation:" 294 | ] 295 | }, 296 | { 297 | "cell_type": "code", 298 | "execution_count": 11, 299 | "metadata": {}, 300 | "outputs": [], 301 | "source": [ 302 | "df_pivot = df.groupBy('firm', 'state', 'industry').pivot('hq', values=['domestic', 'foreign', 'mixed', 'other']).sum('jobs')" 303 | ] 304 | }, 305 | { 306 | "cell_type": "code", 307 | "execution_count": 12, 308 | "metadata": {}, 309 | "outputs": [ 310 | { 311 | "name": "stdout", 312 | "output_type": "stream", 313 | "text": [ 314 | "+----+-----+--------+--------+-------+-----+-----+\n", 315 | "|firm|state|industry|domestic|foreign|mixed|other|\n", 316 | "+----+-----+--------+--------+-------+-----+-----+\n", 317 | "| D| MI| retail| null| 12| null| null|\n", 318 | "| I| OH| auto| null| 77| null| null|\n", 319 | "| G| OH| auto| 349| null| null| null|\n", 320 | "| J| OH| retail| 45| null| null| null|\n", 321 | "| C| MI| auto| 1340| null| null| null|\n", 322 | "| A| MI| auto| 716| 1117| null| null|\n", 323 | "| K| OH| retail| null| 12| null| null|\n", 324 | "| B| MI| auto| null| 134| null| null|\n", 325 | "| F| MI| retail| null| null| 9| null|\n", 326 | "| E| MI| retail| null| 33| null| null|\n", 327 | "| M| OH| auto| null| null| null| 120|\n", 328 | "| H| OH| auto| null| 101| null| null|\n", 329 | "| F| OH| retail| null| null| 978| null|\n", 330 | "| L| OH| retail| null| null| 1| null|\n", 331 | "| A| OH| auto| 96| null| null| null|\n", 332 | "+----+-----+--------+--------+-------+-----+-----+\n", 333 | "\n" 334 | ] 335 | } 336 | ], 337 | "source": [ 338 | "df_pivot.show()" 339 | ] 340 | }, 341 | { 342 | "cell_type": "markdown", 343 | "metadata": {}, 344 | "source": [ 345 | "All we're doing is telling it to `sum` each grouping of values, but **each grouping only has a single entry.** Our data is now reshaped from long to wide. If we replaced the `sum` operator with `mean` or `max` or `min`, it wouldn't change anything.\n", 346 | "\n", 347 | "**Note one potential problem here:** performing this operation in software like Stata will raise an error if your specified reshape criteria isn't unique. When using `pivot` like this in pySpark there wouldn't be an error; it would just perform the aggregation. This may be problematic if it happens and you were not expecting it to, so it may be necessary to write in some validity tests after the operation completes. 
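One way to write such a check is sketched below, assuming the same `df` and grouping columns as above: confirm that no (firm, state, industry) combination appears more than once before trusting the pivot as a pure reshape.

```python
from pyspark.sql import functions as F

# Count how many groups contain more than one row; for a true reshape this must be zero
dupes = (df.groupBy('firm', 'state', 'industry')
           .count()
           .where(F.col('count') > 1)
           .count())

if dupes > 0:
    raise ValueError('%d groups have multiple rows; pivot would aggregate them' % dupes)
```
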
" 348 | ] 349 | }, 350 | { 351 | "cell_type": "code", 352 | "execution_count": null, 353 | "metadata": { 354 | "collapsed": true 355 | }, 356 | "outputs": [], 357 | "source": [] 358 | } 359 | ], 360 | "metadata": { 361 | "kernelspec": { 362 | "display_name": "Python 2", 363 | "language": "python", 364 | "name": "python2" 365 | } 366 | }, 367 | "nbformat": 4, 368 | "nbformat_minor": 1 369 | } 370 | -------------------------------------------------------------------------------- /07_resampling.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Resampling_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 31 Jul 2017, Spark v2.1_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: This guide will demonstrate changing the frequency of observations by aggregating daily data into monthly._\n", 24 | "\n", 25 | "_Main operations used: `dtypes`, `udf`, `drop`, `groupBy`, `agg`, `withColumn`, `dateFormat`, `select`_" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "We begin by creating a simple dataset, where we first define a row as having three fields (columns) and then define each individual row by specifying its three entries:" 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": 1, 45 | "metadata": { 46 | "collapsed": true 47 | }, 48 | "outputs": [], 49 | "source": [ 50 | "import datetime\n", 51 | "from pyspark.sql import Row\n", 52 | "from pyspark.sql.functions import col\n", 53 | "\n", 54 | "row = Row(\"date\", \"name\", \"production\")\n", 55 | "\n", 56 | "df = sc.parallelize([\n", 57 | " row(\"08/01/2014\", \"Kim\", 5),\n", 58 | " row(\"08/02/2014\", \"Kim\", 14),\n", 59 | " row(\"08/01/2014\", \"Bob\", 6),\n", 60 | " row(\"08/02/2014\", \"Bob\", 3),\n", 61 | " row(\"08/01/2014\", \"Sue\", 0),\n", 62 | " row(\"08/02/2014\", \"Sue\", 22),\n", 63 | " row(\"08/01/2014\", \"Dan\", 4),\n", 64 | " row(\"08/02/2014\", \"Dan\", 4),\n", 65 | " row(\"08/01/2014\", \"Joe\", 37),\n", 66 | " row(\"09/01/2014\", \"Kim\", 6),\n", 67 | " row(\"09/02/2014\", \"Kim\", 6),\n", 68 | " row(\"09/01/2014\", \"Bob\", 4),\n", 69 | " row(\"09/02/2014\", \"Bob\", 20),\n", 70 | " row(\"09/01/2014\", \"Sue\", 11),\n", 71 | " row(\"09/02/2014\", \"Sue\", 2),\n", 72 | " row(\"09/01/2014\", \"Dan\", 1),\n", 73 | " row(\"09/02/2014\", \"Dan\", 3),\n", 74 | " row(\"09/02/2014\", \"Joe\", 29)\n", 75 | " ]).toDF()" 76 | ] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "execution_count": 2, 81 | "metadata": {}, 82 | "outputs": [ 83 | { 84 | "name": "stdout", 85 | "output_type": "stream", 86 | "text": [ 87 | "+----------+----+----------+\n", 88 | "| date|name|production|\n", 89 | "+----------+----+----------+\n", 90 | "|08/01/2014| Kim| 5|\n", 91 | "|08/02/2014| Kim| 14|\n", 92 | "|08/01/2014| Bob| 6|\n", 93 | "|08/02/2014| Bob| 3|\n", 94 | "|08/01/2014| Sue| 0|\n", 95 | "|08/02/2014| Sue| 22|\n", 96 | "|08/01/2014| Dan| 4|\n", 97 | "|08/02/2014| Dan| 4|\n", 98 | "|08/01/2014| Joe| 37|\n", 99 | "|09/01/2014| Kim| 6|\n", 100 | "|09/02/2014| Kim| 6|\n", 101 | "|09/01/2014| Bob| 4|\n", 102 | "|09/02/2014| Bob| 20|\n", 103 | "|09/01/2014| Sue| 
11|\n", 104 | "|09/02/2014| Sue| 2|\n", 105 | "|09/01/2014| Dan| 1|\n", 106 | "|09/02/2014| Dan| 3|\n", 107 | "|09/02/2014| Joe| 29|\n", 108 | "+----------+----+----------+\n", 109 | "\n" 110 | ] 111 | } 112 | ], 113 | "source": [ 114 | "df.show()" 115 | ] 116 | }, 117 | { 118 | "cell_type": "code", 119 | "execution_count": 3, 120 | "metadata": {}, 121 | "outputs": [ 122 | { 123 | "data": { 124 | "text/plain": [ 125 | "[('date', 'string'), ('name', 'string'), ('production', 'bigint')]" 126 | ] 127 | }, 128 | "execution_count": 3, 129 | "metadata": {}, 130 | "output_type": "execute_result" 131 | } 132 | ], 133 | "source": [ 134 | "df.dtypes" 135 | ] 136 | }, 137 | { 138 | "cell_type": "markdown", 139 | "metadata": {}, 140 | "source": [ 141 | "While we have dates for each observation, you can see they are just string objects. Defaulting to strings is quite common in pySpark dataframes, and while we can convert them to date objects using the standard Python datetime module (demonstrated below), it is often not necessary. Whether it is worth the conversion likely depends on what other timeseries functions you plan on working with. As an example, let's resample this data to find monthly production for each individual.\n", 142 | "\n", 143 | "First we create a new column that contains just the month and year. This isn't quite as elegant in pySpark as it is for smaller, non-distributed data done in Pandas, but I'll comment each step carefully as we go:" 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": 4, 149 | "metadata": { 150 | "collapsed": true 151 | }, 152 | "outputs": [], 153 | "source": [ 154 | "#'udf' stands for 'user defined function', and is simply a wrapper for functions you write and \n", 155 | "#want to apply to a column that knows how to iterate through pySpark dataframe columns. it should\n", 156 | "#be more clear after we use it below\n", 157 | "from pyspark.sql.functions import udf\n", 158 | "\n", 159 | "#we define our own function that knows how to split apart a MM/DD/YYYY string and return a \n", 160 | "#MM/YYYY string. everything in here is standard Python, and not specific to pySpark\n", 161 | "def split_date(whole_date):\n", 162 | " \n", 163 | " #this try-except handler provides some minimal fault tolerance in case one of our date \n", 164 | " #strings is malformed, as we might find with real-world data. if it fails to split the\n", 165 | " #date into three parts it just returns 'error', which we could later subset the data on\n", 166 | " #to see what went wrong\n", 167 | " try:\n", 168 | " mo, day, yr = whole_date.split('/')\n", 169 | " except ValueError:\n", 170 | " return 'error'\n", 171 | " \n", 172 | " #lastly we return the month and year strings joined together\n", 173 | " return mo + '/' + yr\n", 174 | "\n", 175 | "#this is where we wrap the function we wrote above in the udf wrapper\n", 176 | "udf_split_date = udf(split_date)\n", 177 | "\n", 178 | "#here we create a new dataframe by calling the original dataframe and specifying the new\n", 179 | "#column. unlike with Pandas or R, pySpark dataframes are immutable, so we cannot simply assign\n", 180 | "#to a new column on the original dataframe\n", 181 | "df_new = df.withColumn('month_year', udf_split_date('date'))" 182 | ] 183 | }, 184 | { 185 | "cell_type": "markdown", 186 | "metadata": {}, 187 | "source": [ 188 | "Note that we could easily use our `split_date` function above to use datetime objects. 
This could be useful if we wanted to resample our data to, say, quarterly or weekly, both of which datetime objects (https://docs.python.org/2/library/datetime.html) can easily keep track of for us. In the case of a monthly split, we would gain nothing from the extra operation.\n", 189 | "\n", 190 | "Below we see the results in our new dataframe, then we drop the original date column:" 191 | ] 192 | }, 193 | { 194 | "cell_type": "code", 195 | "execution_count": 5, 196 | "metadata": {}, 197 | "outputs": [ 198 | { 199 | "name": "stdout", 200 | "output_type": "stream", 201 | "text": [ 202 | "+----------+----+----------+----------+\n", 203 | "| date|name|production|month_year|\n", 204 | "+----------+----+----------+----------+\n", 205 | "|08/01/2014| Kim| 5| 08/2014|\n", 206 | "|08/02/2014| Kim| 14| 08/2014|\n", 207 | "|08/01/2014| Bob| 6| 08/2014|\n", 208 | "|08/02/2014| Bob| 3| 08/2014|\n", 209 | "|08/01/2014| Sue| 0| 08/2014|\n", 210 | "|08/02/2014| Sue| 22| 08/2014|\n", 211 | "|08/01/2014| Dan| 4| 08/2014|\n", 212 | "|08/02/2014| Dan| 4| 08/2014|\n", 213 | "|08/01/2014| Joe| 37| 08/2014|\n", 214 | "|09/01/2014| Kim| 6| 09/2014|\n", 215 | "|09/02/2014| Kim| 6| 09/2014|\n", 216 | "|09/01/2014| Bob| 4| 09/2014|\n", 217 | "|09/02/2014| Bob| 20| 09/2014|\n", 218 | "|09/01/2014| Sue| 11| 09/2014|\n", 219 | "|09/02/2014| Sue| 2| 09/2014|\n", 220 | "|09/01/2014| Dan| 1| 09/2014|\n", 221 | "|09/02/2014| Dan| 3| 09/2014|\n", 222 | "|09/02/2014| Joe| 29| 09/2014|\n", 223 | "+----------+----+----------+----------+\n", 224 | "\n" 225 | ] 226 | } 227 | ], 228 | "source": [ 229 | "df_new.show()" 230 | ] 231 | }, 232 | { 233 | "cell_type": "code", 234 | "execution_count": 6, 235 | "metadata": { 236 | "collapsed": true 237 | }, 238 | "outputs": [], 239 | "source": [ 240 | "df_new = df_new.drop('date')" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "metadata": {}, 246 | "source": [ 247 | "Now we perform two steps on one line. First we group the data - this can be done along multiple categories if desired. So if we want to aggregate every employee's data together, leaving us with just values for August and September, we would group by `monthYear` alone. In this case let's say we want totals for each employee within each month, so we group by `monthYear` and by `name` together.\n", 248 | "\n", 249 | "After that we aggregate the resulting grouped dataframe; pySpark automatically knows the operations should be performed within groups only. We just pass a dictionary into the `.agg` method, with the key being the column name of interest and the value being the operation used to aggregate. We'll use `sum`, but we can also use, for example, `avg`, `min` or `max`. Note that this is done by passing the operation as a string." 250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": 7, 255 | "metadata": { 256 | "collapsed": true 257 | }, 258 | "outputs": [], 259 | "source": [ 260 | "df_agg = df_new.groupBy('month_year', 'name').agg({'production' : 'sum'})" 261 | ] 262 | }, 263 | { 264 | "cell_type": "markdown", 265 | "metadata": {}, 266 | "source": [ 267 | "The aggregation can be done on more than one field using different types, just by adding the appropriate entry to the dictionary. 
For example, if there was an \"hours worked\" column, we might pass a dictionary that looked like this: `{'production' : 'sum', 'hours' : 'avg'}`" 268 | ] 269 | }, 270 | { 271 | "cell_type": "code", 272 | "execution_count": 8, 273 | "metadata": {}, 274 | "outputs": [ 275 | { 276 | "name": "stdout", 277 | "output_type": "stream", 278 | "text": [ 279 | "+----------+----+---------------+\n", 280 | "|month_year|name|sum(production)|\n", 281 | "+----------+----+---------------+\n", 282 | "| 09/2014| Sue| 13|\n", 283 | "| 09/2014| Kim| 12|\n", 284 | "| 09/2014| Bob| 24|\n", 285 | "| 09/2014| Joe| 29|\n", 286 | "| 09/2014| Dan| 4|\n", 287 | "| 08/2014| Kim| 19|\n", 288 | "| 08/2014| Joe| 37|\n", 289 | "| 08/2014| Dan| 8|\n", 290 | "| 08/2014| Sue| 22|\n", 291 | "| 08/2014| Bob| 9|\n", 292 | "+----------+----+---------------+\n", 293 | "\n" 294 | ] 295 | } 296 | ], 297 | "source": [ 298 | "df_agg.show()" 299 | ] 300 | }, 301 | { 302 | "cell_type": "markdown", 303 | "metadata": { 304 | "collapsed": true 305 | }, 306 | "source": [ 307 | "If you definitely want datetime objects in your dataframe (Spark currently has very limited timeseries functionality), you can accomplish it with another `udf`:" 308 | ] 309 | }, 310 | { 311 | "cell_type": "code", 312 | "execution_count": 9, 313 | "metadata": { 314 | "collapsed": true 315 | }, 316 | "outputs": [], 317 | "source": [ 318 | "from pyspark.sql.functions import udf\n", 319 | "from pyspark.sql.types import DateType\n", 320 | "from datetime import datetime\n", 321 | "\n", 322 | "dateFormat = udf(lambda x: datetime.strptime(x, '%M/%d/%Y'), DateType())\n", 323 | " \n", 324 | "df_d = df.withColumn('new_date', dateFormat(col('date')))" 325 | ] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": 10, 330 | "metadata": {}, 331 | "outputs": [ 332 | { 333 | "data": { 334 | "text/plain": [ 335 | "[('date', 'string'),\n", 336 | " ('name', 'string'),\n", 337 | " ('production', 'bigint'),\n", 338 | " ('new_date', 'date')]" 339 | ] 340 | }, 341 | "execution_count": 10, 342 | "metadata": {}, 343 | "output_type": "execute_result" 344 | } 345 | ], 346 | "source": [ 347 | "df_d.dtypes" 348 | ] 349 | }, 350 | { 351 | "cell_type": "code", 352 | "execution_count": 11, 353 | "metadata": {}, 354 | "outputs": [ 355 | { 356 | "data": { 357 | "text/plain": [ 358 | "[Row(new_date=datetime.date(2014, 1, 1))]" 359 | ] 360 | }, 361 | "execution_count": 11, 362 | "metadata": {}, 363 | "output_type": "execute_result" 364 | } 365 | ], 366 | "source": [ 367 | "df_d.select('new_date').take(1)" 368 | ] 369 | }, 370 | { 371 | "cell_type": "markdown", 372 | "metadata": { 373 | "collapsed": true 374 | }, 375 | "source": [ 376 | "In this case we take advantage of the `strptime` feature of the standard Python datetime module, which takes a string and a format string and returns a datetime object. Datetime objects can be far more useful than a date as a string if you plan a lot of other timeseries operations; they allow things like subtracting two dates to get elapsed time, or separating by quarters or weeks, or accounting for time zones or leap years.\n", 377 | "\n", 378 | "Better time series functionality appears to be a priority in Spark development, and multiple options have already been proposed that would make their use far more effecient. Expect to see future versions taking better advantage of this." 
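As one sketch of where real datetime objects pay off, here is a quarterly version of the earlier monthly aggregation, built on the same `df` (note that `%m` is the month directive in `strptime`, while `%M` parses minutes):

```python
from datetime import datetime

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

# Parse the MM/DD/YYYY string and label it with its year and quarter, e.g. '2014-Q3'
def to_quarter(whole_date):
    d = datetime.strptime(whole_date, '%m/%d/%Y')
    return '{0}-Q{1}'.format(d.year, (d.month - 1) // 3 + 1)

udf_to_quarter = udf(to_quarter, StringType())

df_q = df.withColumn('quarter', udf_to_quarter(col('date')))

# Same groupBy/agg pattern as the monthly resample above, just at a coarser frequency
df_q.groupBy('quarter', 'name').agg({'production': 'sum'}).show()
```
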
379 | ] 380 | }, 381 | { 382 | "cell_type": "code", 383 | "execution_count": null, 384 | "metadata": { 385 | "collapsed": true 386 | }, 387 | "outputs": [], 388 | "source": [] 389 | } 390 | ], 391 | "metadata": { 392 | "kernelspec": { 393 | "display_name": "Python 2", 394 | "language": "python", 395 | "name": "python2" 396 | }, 397 | "language_info": { 398 | "codemirror_mode": { 399 | "name": "ipython", 400 | "version": 2 401 | }, 402 | "file_extension": ".py", 403 | "mimetype": "text/x-python", 404 | "name": "python", 405 | "nbconvert_exporter": "python", 406 | "pygments_lexer": "ipython2", 407 | "version": "2.7.12" 408 | } 409 | }, 410 | "nbformat": 4, 411 | "nbformat_minor": 1 412 | } 413 | -------------------------------------------------------------------------------- /08_subsetting.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Subsetting Data_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 31 Jul 2017, Spark v2.1_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: This guide will go over filtering your data based on a specified criteria in order to get a subset_\n", 24 | "\n", 25 | "_Main operations used: `dtypes`, `take`, `show`, `select`, `drop`, `filter/where`, `sample`_" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "We begin by loading some real data from an S3 bucket (the same data used in the *basics* tutorial), allowing pySpark to auto determine the schema, then take a look at its contents:" 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": 1, 45 | "metadata": {}, 46 | "outputs": [], 47 | "source": [ 48 | "df = spark.read.csv('s3://ui-spark-social-science-public/data/Performance_2015Q1.txt', header=False, inferSchema=True, sep='|')" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": 2, 54 | "metadata": {}, 55 | "outputs": [ 56 | { 57 | "data": { 58 | "text/plain": [ 59 | "[('_c0', 'bigint'),\n", 60 | " ('_c1', 'string'),\n", 61 | " ('_c2', 'string'),\n", 62 | " ('_c3', 'double'),\n", 63 | " ('_c4', 'double'),\n", 64 | " ('_c5', 'int'),\n", 65 | " ('_c6', 'int'),\n", 66 | " ('_c7', 'int'),\n", 67 | " ('_c8', 'string'),\n", 68 | " ('_c9', 'int'),\n", 69 | " ('_c10', 'string'),\n", 70 | " ('_c11', 'string'),\n", 71 | " ('_c12', 'int'),\n", 72 | " ('_c13', 'string'),\n", 73 | " ('_c14', 'string'),\n", 74 | " ('_c15', 'string'),\n", 75 | " ('_c16', 'string'),\n", 76 | " ('_c17', 'string'),\n", 77 | " ('_c18', 'string'),\n", 78 | " ('_c19', 'string'),\n", 79 | " ('_c20', 'string'),\n", 80 | " ('_c21', 'string'),\n", 81 | " ('_c22', 'string'),\n", 82 | " ('_c23', 'string'),\n", 83 | " ('_c24', 'string'),\n", 84 | " ('_c25', 'string'),\n", 85 | " ('_c26', 'int'),\n", 86 | " ('_c27', 'string')]" 87 | ] 88 | }, 89 | "execution_count": 2, 90 | "metadata": {}, 91 | "output_type": "execute_result" 92 | } 93 | ], 94 | "source": [ 95 | "df.dtypes" 96 | ] 97 | }, 98 | { 99 | "cell_type": "code", 100 | "execution_count": 3, 101 | "metadata": {}, 102 | "outputs": [ 103 | { 104 | "data": { 105 | "text/plain": [ 106 | "[Row(_c0=100002091588, _c1=u'01/01/2015', _c2=u'OTHER', 
_c3=4.125, _c4=None, _c5=0, _c6=360, _c7=360, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None),\n", 107 | " Row(_c0=100002091588, _c1=u'02/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=1, _c6=359, _c7=359, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None),\n", 108 | " Row(_c0=100002091588, _c1=u'03/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=2, _c6=358, _c7=358, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None),\n", 109 | " Row(_c0=100002091588, _c1=u'04/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=3, _c6=357, _c7=357, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None),\n", 110 | " Row(_c0=100002091588, _c1=u'05/01/2015', _c2=None, _c3=4.125, _c4=None, _c5=4, _c6=356, _c7=356, _c8=u'01/2045', _c9=16740, _c10=u'0', _c11=u'N', _c12=None, _c13=None, _c14=None, _c15=None, _c16=None, _c17=None, _c18=None, _c19=None, _c20=None, _c21=None, _c22=None, _c23=None, _c24=None, _c25=None, _c26=None, _c27=None)]" 111 | ] 112 | }, 113 | "execution_count": 3, 114 | "metadata": {}, 115 | "output_type": "execute_result" 116 | } 117 | ], 118 | "source": [ 119 | "df.take(5)" 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": {}, 125 | "source": [ 126 | "Note that this output looks messy because `take` doesn't format the results, it just shows you the row object with the format *column=value* for each column across a row. It can be formatted nicely with `show()`, but due to the width of this data it will still look messy. See below for `show()` in action.\n", 127 | "\n", 128 | "# Subsetting by Columns\n", 129 | "\n", 130 | "One of the simplest subsettings is done by selecting just a few of the columns:" 131 | ] 132 | }, 133 | { 134 | "cell_type": "code", 135 | "execution_count": 4, 136 | "metadata": {}, 137 | "outputs": [ 138 | { 139 | "name": "stdout", 140 | "output_type": "stream", 141 | "text": [ 142 | "+------------+----------+-----+-----+\n", 143 | "| _c0| _c1| _c3| _c9|\n", 144 | "+------------+----------+-----+-----+\n", 145 | "|100002091588|01/01/2015|4.125|16740|\n", 146 | "|100002091588|02/01/2015|4.125|16740|\n", 147 | "|100002091588|03/01/2015|4.125|16740|\n", 148 | "|100002091588|04/01/2015|4.125|16740|\n", 149 | "|100002091588|05/01/2015|4.125|16740|\n", 150 | "+------------+----------+-----+-----+\n", 151 | "only showing top 5 rows\n", 152 | "\n" 153 | ] 154 | } 155 | ], 156 | "source": [ 157 | "from pyspark.sql.functions import col\n", 158 | "\n", 159 | "df_select = df.select(col('_c0'), col('_c1'), col('_c3'), col('_c9'))\n", 160 | "df_select.show(5)" 161 | ] 162 | }, 163 | { 164 | "cell_type": "markdown", 165 | "metadata": {}, 166 | "source": [ 167 | "Note that `show` defaults to showing the first 20 rows, but here we've specified only 5. There is also a shortcut for this notation that does the same thing but is a little easier to read. 
We show both because they both show up frequently in Spark resources:" 168 | ] 169 | }, 170 | { 171 | "cell_type": "code", 172 | "execution_count": 5, 173 | "metadata": {}, 174 | "outputs": [ 175 | { 176 | "name": "stdout", 177 | "output_type": "stream", 178 | "text": [ 179 | "+------------+----------+-----+-----+\n", 180 | "| _c0| _c1| _c3| _c9|\n", 181 | "+------------+----------+-----+-----+\n", 182 | "|100002091588|01/01/2015|4.125|16740|\n", 183 | "|100002091588|02/01/2015|4.125|16740|\n", 184 | "|100002091588|03/01/2015|4.125|16740|\n", 185 | "|100002091588|04/01/2015|4.125|16740|\n", 186 | "|100002091588|05/01/2015|4.125|16740|\n", 187 | "+------------+----------+-----+-----+\n", 188 | "only showing top 5 rows\n", 189 | "\n" 190 | ] 191 | } 192 | ], 193 | "source": [ 194 | "df_select = df[['_c0', '_c1', '_c3', '_c9']]\n", 195 | "df_select.show(5)" 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": {}, 201 | "source": [ 202 | "Or we can do the same thing by dropping, which is convenient if we want to keep more columns than we want to drop:" 203 | ] 204 | }, 205 | { 206 | "cell_type": "code", 207 | "execution_count": 6, 208 | "metadata": { 209 | "collapsed": true 210 | }, 211 | "outputs": [], 212 | "source": [ 213 | "df_drop = df_select.drop(col('_c3'))" 214 | ] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "execution_count": 7, 219 | "metadata": {}, 220 | "outputs": [ 221 | { 222 | "name": "stdout", 223 | "output_type": "stream", 224 | "text": [ 225 | "+------------+----------+-----+\n", 226 | "| _c0| _c1| _c9|\n", 227 | "+------------+----------+-----+\n", 228 | "|100002091588|01/01/2015|16740|\n", 229 | "|100002091588|02/01/2015|16740|\n", 230 | "|100002091588|03/01/2015|16740|\n", 231 | "|100002091588|04/01/2015|16740|\n", 232 | "|100002091588|05/01/2015|16740|\n", 233 | "+------------+----------+-----+\n", 234 | "only showing top 5 rows\n", 235 | "\n" 236 | ] 237 | } 238 | ], 239 | "source": [ 240 | "df_drop.show(5)" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "metadata": {}, 246 | "source": [ 247 | "# Subsetting by Rows\n", 248 | "\n", 249 | "We often want to subset by rows also, for example by specifying a conditional. Note that we have to use `.show()` at the end of `.describe()`, because **.describe() returns a new dataframe** with the information. In many other programs, such as Stata, `describe` returns a formatted table; here, both `summary` and `C6` are actually column names." 
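Because `describe` hands back a DataFrame rather than printing a table, its output can itself be filtered and collected like any other DataFrame. A minimal sketch, assuming the `df` loaded above:

```python
# describe('_c6') returns a small DataFrame with the columns 'summary' and '_c6'
stats = df.describe('_c6')

# pull one statistic out as a plain Python value (describe stores its values as strings)
max_row = stats.where(stats['summary'] == 'max').collect()[0]
print(max_row['_c6'])
```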
250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": 8, 255 | "metadata": {}, 256 | "outputs": [ 257 | { 258 | "name": "stdout", 259 | "output_type": "stream", 260 | "text": [ 261 | "+-------+-----------------+\n", 262 | "|summary| _c6|\n", 263 | "+-------+-----------------+\n", 264 | "| count| 3526154|\n", 265 | "| mean|354.7084951479714|\n", 266 | "| stddev| 4.01181251079202|\n", 267 | "| min| 292|\n", 268 | "| max| 480|\n", 269 | "+-------+-----------------+\n", 270 | "\n" 271 | ] 272 | } 273 | ], 274 | "source": [ 275 | "df.describe('_c6').show()" 276 | ] 277 | }, 278 | { 279 | "cell_type": "code", 280 | "execution_count": 9, 281 | "metadata": { 282 | "collapsed": true 283 | }, 284 | "outputs": [], 285 | "source": [ 286 | "df_sub = df.where(df['_c6'] < 358)" 287 | ] 288 | }, 289 | { 290 | "cell_type": "code", 291 | "execution_count": 10, 292 | "metadata": {}, 293 | "outputs": [ 294 | { 295 | "name": "stdout", 296 | "output_type": "stream", 297 | "text": [ 298 | "+-------+------------------+\n", 299 | "|summary| _c6|\n", 300 | "+-------+------------------+\n", 301 | "| count| 2598037|\n", 302 | "| mean|353.15604897081914|\n", 303 | "| stddev|3.5170213056883983|\n", 304 | "| min| 292|\n", 305 | "| max| 357|\n", 306 | "+-------+------------------+\n", 307 | "\n" 308 | ] 309 | } 310 | ], 311 | "source": [ 312 | "df_sub.describe('_c6').show()" 313 | ] 314 | }, 315 | { 316 | "cell_type": "markdown", 317 | "metadata": {}, 318 | "source": [ 319 | "You can see from the `max` entry for `_c6` that we've cut it off at below 358 now. Also note that **`where` is an alias for `filter`**; you can use them interchangeably in pySpark.\n", 320 | "\n", 321 | "We can repeat the same proceedure for multiple conditions and columns using standard logical operators:" 322 | ] 323 | }, 324 | { 325 | "cell_type": "code", 326 | "execution_count": 11, 327 | "metadata": { 328 | "collapsed": true 329 | }, 330 | "outputs": [], 331 | "source": [ 332 | "df_filter = df.where((df['_c6'] > 340) & (df['_c5'] < 4))" 333 | ] 334 | }, 335 | { 336 | "cell_type": "code", 337 | "execution_count": 12, 338 | "metadata": {}, 339 | "outputs": [ 340 | { 341 | "name": "stdout", 342 | "output_type": "stream", 343 | "text": [ 344 | "+-------+------------------+------------------+\n", 345 | "|summary| _c6| _c5|\n", 346 | "+-------+------------------+------------------+\n", 347 | "| count| 1254131| 1254131|\n", 348 | "| mean|358.48713810598736| 1.474693632483369|\n", 349 | "| stddev| 1.378961910349754|1.2067831502138422|\n", 350 | "| min| 341| -1|\n", 351 | "| max| 361| 3|\n", 352 | "+-------+------------------+------------------+\n", 353 | "\n" 354 | ] 355 | } 356 | ], 357 | "source": [ 358 | "df_filter.describe('_c6', '_c5').show()" 359 | ] 360 | }, 361 | { 362 | "cell_type": "markdown", 363 | "metadata": {}, 364 | "source": [ 365 | "# Random Sampling\n", 366 | "\n", 367 | "And finally, you might want to take a random sample of rows. This can be particularlly useful, for example, if your data is large enough to require more expensive clusters to be spun up to work with it all, and you want to use a smaller, less expensive cluster to work on a sample. 
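A hedged sketch of that workflow; the output path is a placeholder of our own, not one of the tutorial buckets:

```python
# draw a 5% sample without replacement, with a fixed seed so the draw is repeatable
df_dev = df.sample(False, 0.05, seed=42)

# persist the sample so a smaller development cluster can read it directly
# (the S3 path below is purely illustrative)
df_dev.write.csv('s3://your-bucket/dev/Performance_2015Q1_5pct', header=True, mode='overwrite')
```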
Once your code is completed, you can then spin up the more expensive cluster and simply apply your code to the full sample\n", 368 | "\n", 369 | "You can pass three arguments into sample: **the first is a boolean, which is True to sample with replacement, False without**.\n", 370 | "The second is the **fraction of the dataset to take**, in this case 5%, and the third is an **optional random seed**. If\n", 371 | "you specify any integer here then someone else performing the same random operation that specifies the same seed\n", 372 | "will get the same result. If no seed is passed then the exact random sampling can't be duplicated." 373 | ] 374 | }, 375 | { 376 | "cell_type": "code", 377 | "execution_count": 13, 378 | "metadata": { 379 | "collapsed": true 380 | }, 381 | "outputs": [], 382 | "source": [ 383 | "df_sample = df.sample(False, 0.05, 99)" 384 | ] 385 | }, 386 | { 387 | "cell_type": "code", 388 | "execution_count": 14, 389 | "metadata": {}, 390 | "outputs": [ 391 | { 392 | "name": "stdout", 393 | "output_type": "stream", 394 | "text": [ 395 | "+-------+------------------+\n", 396 | "|summary| _c6|\n", 397 | "+-------+------------------+\n", 398 | "| count| 176015|\n", 399 | "| mean|354.69058318893275|\n", 400 | "| stddev| 4.028614501676224|\n", 401 | "| min| 293|\n", 402 | "| max| 361|\n", 403 | "+-------+------------------+\n", 404 | "\n" 405 | ] 406 | } 407 | ], 408 | "source": [ 409 | "df_sample.describe('_c6').show()" 410 | ] 411 | }, 412 | { 413 | "cell_type": "markdown", 414 | "metadata": {}, 415 | "source": [ 416 | "If you compare this to our original summary stats on unfiltered column C6 from above, you'll see it does a pretty good job maintaining the mean and stddev in a sample of only 5% of the data. You can then write this to a new file in an S3 bucket and work with it instead of the whole data." 417 | ] 418 | } 419 | ], 420 | "metadata": { 421 | "kernelspec": { 422 | "display_name": "Python 2", 423 | "language": "python", 424 | "name": "python2" 425 | }, 426 | "language_info": { 427 | "codemirror_mode": { 428 | "name": "ipython", 429 | "version": 2 430 | }, 431 | "file_extension": ".py", 432 | "mimetype": "text/x-python", 433 | "name": "python", 434 | "nbconvert_exporter": "python", 435 | "pygments_lexer": "ipython2", 436 | "version": "2.7.12" 437 | } 438 | }, 439 | "nbformat": 4, 440 | "nbformat_minor": 1 441 | } 442 | -------------------------------------------------------------------------------- /09_summary-statistics.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Summary Statistics_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 8 Aug 2016, Spark v2.0_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: Here we will cover several common ways to summarize data. 
Many of these methods have been dicussed in other tutorials in different contexts._\n", 24 | "\n", 25 | "_Main operations used: `describe`, `skewness`, `kurtosis`, `collect`, `select`_" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "First we will load the same csv data we've been using in many other tutorials, then pare it down to a manageable subset for ease of use:" 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": 1, 45 | "metadata": { 46 | "collapsed": true 47 | }, 48 | "outputs": [], 49 | "source": [ 50 | "df = spark.read.csv('s3://ui-spark-social-science-public/data/Performance_2015Q1.txt', header=False, inferSchema=True, sep='|')" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": 2, 56 | "metadata": { 57 | "collapsed": true 58 | }, 59 | "outputs": [], 60 | "source": [ 61 | "df = df[['_c0', '_c2', '_c3', '_c4', '_c5', '_c6']]" 62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": 3, 67 | "metadata": {}, 68 | "outputs": [ 69 | { 70 | "name": "stdout", 71 | "output_type": "stream", 72 | "text": [ 73 | "+------------+-----+-----+----+---+---+\n", 74 | "| _c0| _c2| _c3| _c4|_c5|_c6|\n", 75 | "+------------+-----+-----+----+---+---+\n", 76 | "|100002091588|OTHER|4.125|null| 0|360|\n", 77 | "|100002091588| null|4.125|null| 1|359|\n", 78 | "|100002091588| null|4.125|null| 2|358|\n", 79 | "|100002091588| null|4.125|null| 3|357|\n", 80 | "|100002091588| null|4.125|null| 4|356|\n", 81 | "+------------+-----+-----+----+---+---+\n", 82 | "only showing top 5 rows\n", 83 | "\n" 84 | ] 85 | } 86 | ], 87 | "source": [ 88 | "df.show(5)" 89 | ] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "metadata": {}, 94 | "source": [ 95 | "Note that the format `_c0`, `_c1`, `...`, `_cN` is the default column names Spark uses if your data doesn't come with headers. For more on this, and renaming them, see the pySpark tutorial named *basics 1.ipynb*" 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "# Describe\n", 103 | "\n", 104 | "The first thing we'll do is use the `describe` method to get some basics. 
Note that **describe will return a new dataframe** with the parameters, so we'll assign the results to a new variable and then call `show` on it:" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": 4, 110 | "metadata": {}, 111 | "outputs": [ 112 | { 113 | "name": "stdout", 114 | "output_type": "stream", 115 | "text": [ 116 | "+-------+--------------------+--------------------+-------------------+------------------+------------------+-----------------+\n", 117 | "|summary| _c0| _c2| _c3| _c4| _c5| _c6|\n", 118 | "+-------+--------------------+--------------------+-------------------+------------------+------------------+-----------------+\n", 119 | "| count| 3526154| 382039| 3526154| 1580402| 3526154| 3526154|\n", 120 | "| mean|5.503885995001908E11| null| 4.178168090219519|234846.78065481762| 5.134865351881966|354.7084951479714|\n", 121 | "| stddev|2.596112361975214...| null|0.34382335723646673|118170.68592261661|3.3833930336063465| 4.01181251079202|\n", 122 | "| min| 100002091588| CITIMORTGAGE, INC.| 2.75| 0.85| -1| 292|\n", 123 | "| max| 999995696635|WELLS FARGO BANK,...| 6.125| 1193544.39| 34| 480|\n", 124 | "+-------+--------------------+--------------------+-------------------+------------------+------------------+-----------------+\n", 125 | "\n" 126 | ] 127 | } 128 | ], 129 | "source": [ 130 | "df_described = df.describe()\n", 131 | "df_described.show()" 132 | ] 133 | }, 134 | { 135 | "cell_type": "markdown", 136 | "metadata": {}, 137 | "source": [ 138 | "Aside from the five included in `describe`, there are a handful of other built-in aggregators that can be applied to a column. Here we'll apply the `skewness` function to column `_c3`:" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": 5, 144 | "metadata": {}, 145 | "outputs": [ 146 | { 147 | "name": "stdout", 148 | "output_type": "stream", 149 | "text": [ 150 | "+------------------+\n", 151 | "| skewness(_c3)|\n", 152 | "+------------------+\n", 153 | "|0.5197993394959904|\n", 154 | "+------------------+\n", 155 | "\n" 156 | ] 157 | } 158 | ], 159 | "source": [ 160 | "from pyspark.sql.functions import skewness, kurtosis\n", 161 | "from pyspark.sql.functions import var_pop, var_samp, stddev, stddev_pop, sumDistinct, ntile\n", 162 | "df.select(skewness('_c3')).show()" 163 | ] 164 | }, 165 | { 166 | "cell_type": "markdown", 167 | "metadata": {}, 168 | "source": [ 169 | "# Expanding the Describe Output\n", 170 | "\n", 171 | "One convenient thing we might want to do is put all our summary statistics together in one spot - in essence, expand the output from `describe`. 
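Before building that expanded table, it is worth noting that several aggregators can be applied in one pass with `agg`; a minimal sketch, assuming the trimmed `df` from above:

```python
from pyspark.sql.functions import skewness, kurtosis

# compute skew and kurtosis for one column in a single Spark job
df.agg(skewness('_c3').alias('skew_c3'),
       kurtosis('_c3').alias('kurt_c3')).show()
```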
Below I'll go into a short example:" 172 | ] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "execution_count": 6, 177 | "metadata": {}, 178 | "outputs": [ 179 | { 180 | "name": "stdout", 181 | "output_type": "stream", 182 | "text": [ 183 | "[Row(_c0='-0.00183847089866', _c2='None', _c3='0.519799339496', _c4='0.758411576756', _c5='0.286480156084', _c6='-2.69765201567', summary='skew'), Row(_c0='-1.19900726351', _c2='None', _c3='0.126057726847', _c4='0.576085602656', _c5='0.195187780089', _c6='24.7237858944', summary='kurtosis')]\n" 184 | ] 185 | } 186 | ], 187 | "source": [ 188 | "from pyspark.sql import Row\n", 189 | "\n", 190 | "columns = df_described.columns #list of column names: ['summary', '_c0', '_c3', '_c4', '_c5', '_c6']\n", 191 | "funcs = [skewness, kurtosis] #list of functions we want to include (imported earlier)\n", 192 | "fnames = ['skew', 'kurtosis'] #a list of strings describing the functions in the same order\n", 193 | "\n", 194 | "def new_item(func, column):\n", 195 | " \"\"\"\n", 196 | " This function takes in an aggregation function and a column name, then applies the aggregation to the\n", 197 | " column, collects it and returns a value. The value is in string format despite being a number, \n", 198 | " because that matches the output of describe.\n", 199 | " \"\"\"\n", 200 | " return str(df.select(func(column)).collect()[0][0])\n", 201 | "\n", 202 | "new_data = []\n", 203 | "for func, fname in zip(funcs, fnames):\n", 204 | " row_dict = {'summary':fname} #each row object begins with an entry for \"summary\"\n", 205 | " for column in columns[1:]:\n", 206 | " row_dict[column] = new_item(func, column)\n", 207 | " new_data.append(Row(**row_dict)) #using ** tells Python to unpack the entries of the dictionary\n", 208 | " \n", 209 | "print(new_data)" 210 | ] 211 | }, 212 | { 213 | "cell_type": "markdown", 214 | "metadata": {}, 215 | "source": [ 216 | "This code iterates through the entries in `funcs` and `fnames` together, then builds a new row object following the format of the standard `describe` output. You can see from the output that it looks nearly identical to the output of `collect` when applied to a dataframe:" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": 7, 222 | "metadata": {}, 223 | "outputs": [ 224 | { 225 | "data": { 226 | "text/plain": [ 227 | "[Row(summary=u'count', _c0=u'3526154', _c2=u'382039', _c3=u'3526154', _c4=u'1580402', _c5=u'3526154', _c6=u'3526154'),\n", 228 | " Row(summary=u'mean', _c0=u'5.503885995001908E11', _c2=None, _c3=u'4.178168090219519', _c4=u'234846.78065481762', _c5=u'5.134865351881966', _c6=u'354.7084951479714'),\n", 229 | " Row(summary=u'stddev', _c0=u'2.5961123619752148E11', _c2=None, _c3=u'0.34382335723646673', _c4=u'118170.68592261661', _c5=u'3.3833930336063465', _c6=u'4.01181251079202'),\n", 230 | " Row(summary=u'min', _c0=u'100002091588', _c2=u'CITIMORTGAGE, INC.', _c3=u'2.75', _c4=u'0.85', _c5=u'-1', _c6=u'292'),\n", 231 | " Row(summary=u'max', _c0=u'999995696635', _c2=u'WELLS FARGO BANK, N.A.', _c3=u'6.125', _c4=u'1193544.39', _c5=u'34', _c6=u'480')]" 232 | ] 233 | }, 234 | "execution_count": 7, 235 | "metadata": {}, 236 | "output_type": "execute_result" 237 | } 238 | ], 239 | "source": [ 240 | "df_described.collect()" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "metadata": {}, 246 | "source": [ 247 | "Although the columns are out of order within the rows; this is because we built them from a dictionary, and dictionary entries in Python are inherently unordered. 
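A quick illustration of that reordering with made-up values; in the Spark 2.x releases these notebooks target, a `Row` built from keyword arguments sorts its fields by name, which is why `summary` ends up last above:

```python
from pyspark.sql import Row

r = Row(summary='skew', _c0='-0.0018', _c3='0.52')
print(r)
# Row(_c0='-0.0018', _c3='0.52', summary='skew')  <- 'summary' is no longer first
```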
We will fix that below.\n", 248 | "\n", 249 | "The next step is to join the two sets of data into one, in order to make a modified `describe` output that includes skew and kurtosis. The same method could be used to include any other aggregations desired." 250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": 8, 255 | "metadata": {}, 256 | "outputs": [ 257 | { 258 | "name": "stdout", 259 | "output_type": "stream", 260 | "text": [ 261 | "+--------+--------------------+--------------------+-------------------+------------------+------------------+-----------------+\n", 262 | "| summary| _c0| _c2| _c3| _c4| _c5| _c6|\n", 263 | "+--------+--------------------+--------------------+-------------------+------------------+------------------+-----------------+\n", 264 | "| count| 3526154| 382039| 3526154| 1580402| 3526154| 3526154|\n", 265 | "| mean|5.503885995001908E11| null| 4.178168090219519|234846.78065481762| 5.134865351881966|354.7084951479714|\n", 266 | "| stddev|2.596112361975214...| null|0.34382335723646673|118170.68592261661|3.3833930336063465| 4.01181251079202|\n", 267 | "| min| 100002091588| CITIMORTGAGE, INC.| 2.75| 0.85| -1| 292|\n", 268 | "| max| 999995696635|WELLS FARGO BANK,...| 6.125| 1193544.39| 34| 480|\n", 269 | "| skew| -0.00183847089866| None| 0.519799339496| 0.758411576756| 0.286480156084| -2.69765201567|\n", 270 | "|kurtosis| -1.19900726351| None| 0.126057726847| 0.576085602656| 0.195187780089| 24.7237858944|\n", 271 | "+--------+--------------------+--------------------+-------------------+------------------+------------------+-----------------+\n", 272 | "\n" 273 | ] 274 | } 275 | ], 276 | "source": [ 277 | "new_describe = sc.parallelize(new_data).toDF() #turns the results from our loop into a dataframe\n", 278 | "new_describe = new_describe.select(df_described.columns) #forces the columns into the same order\n", 279 | "\n", 280 | "expanded_describe = df_described.unionAll(new_describe) #merges the new stats with the original describe\n", 281 | "expanded_describe.show()" 282 | ] 283 | }, 284 | { 285 | "cell_type": "markdown", 286 | "metadata": { 287 | "collapsed": true 288 | }, 289 | "source": [ 290 | "And now we have our expanded `describe` output." 
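One small caveat worth adding: `unionAll` has been deprecated in favour of `union` since Spark 2.0, and the list of Rows can be handed straight to the session without going through the RDD API. A sketch of the same final step in that newer spelling:

```python
# the same merge, written against the Spark 2.x DataFrame API
new_describe = spark.createDataFrame(new_data)             # list of Rows -> DataFrame
new_describe = new_describe.select(df_described.columns)   # force the column order to match

expanded_describe = df_described.union(new_describe)       # union replaces the deprecated unionAll
expanded_describe.show()
```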
291 | ] 292 | }, 293 | { 294 | "cell_type": "code", 295 | "execution_count": null, 296 | "metadata": { 297 | "collapsed": true 298 | }, 299 | "outputs": [], 300 | "source": [] 301 | } 302 | ], 303 | "metadata": { 304 | "kernelspec": { 305 | "display_name": "Python 2", 306 | "language": "python", 307 | "name": "python2" 308 | }, 309 | "language_info": { 310 | "codemirror_mode": { 311 | "name": "ipython", 312 | "version": 2 313 | }, 314 | "file_extension": ".py", 315 | "mimetype": "text/x-python", 316 | "name": "python", 317 | "nbconvert_exporter": "python", 318 | "pygments_lexer": "ipython2", 319 | "version": "2.7.12" 320 | } 321 | }, 322 | "nbformat": 4, 323 | "nbformat_minor": 1 324 | } 325 | -------------------------------------------------------------------------------- /11_installing-python-modules.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "**_pySpark Basics: Installing Python Modules_**" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "_by Jeff Levy (jlevy@urban.org)_\n", 15 | "\n", 16 | "_Last Updated: 2 Aug 2016, Spark v1.6.1_" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "_Abstract: When a new cluster is spun up on AWS, it comes with only a few Python modules installed. This guide will go over adding more as necessary._\n", 24 | "\n", 25 | "_Main operations used:_ `pip`" 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "***" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "Those with experience in Python are probably accustomed to using a distribution with many modules pre-packaged, for example from Anaconda or Enthought. However, every time a cluster is spun up on AWS the installation of Python must be ported over from permanent storage, and the more it has to move and configure the longer the spinup process becomes. Based on the assumption that most users will not need very many modules not already provided by pySpark, we have opted to minimize startup time by not pre-configuring Python with lots of modules.\n", 40 | "\n", 41 | "# Installing Modules\n", 42 | "\n", 43 | "The current boostrap script installs `pip`, (module installation manager), `numpy` (many math operations), `requests` (tools for accessing the web) and `matplotlib` (graphing). It also comes with the [Python standard library](https://docs.python.org/2/library/) that all installs have access to, such as `datetime`, `random`, `collections` and so on. 
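If you are unsure whether a module is already present on the cluster, here is a quick check that needs nothing beyond the standard library (the module names below are just examples):

```python
import importlib

def have(module_name):
    """Return True if the module can be imported on this cluster."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False

print(have('numpy'))   # installed by the bootstrap script
print(have('pandas'))  # not present until we install it below
```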
Any other modules you may need can be installed as follows:" 44 | ] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "execution_count": 1, 49 | "metadata": { 50 | "collapsed": true 51 | }, 52 | "outputs": [], 53 | "source": [ 54 | "import pip" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 4, 60 | "metadata": {}, 61 | "outputs": [ 62 | { 63 | "name": "stdout", 64 | "output_type": "stream", 65 | "text": [ 66 | "Collecting pandas\n", 67 | " Downloading pandas-0.18.1.tar.gz (7.3MB)\n", 68 | "Requirement already satisfied (use --upgrade to upgrade): python-dateutil in ./venv/lib/python2.7/site-packages (from pandas)\n", 69 | "Requirement already satisfied (use --upgrade to upgrade): pytz>=2011k in ./venv/lib/python2.7/site-packages (from pandas)\n", 70 | "Requirement already satisfied (use --upgrade to upgrade): numpy>=1.7.0 in ./venv/lib64/python2.7/site-packages (from pandas)\n", 71 | "Requirement already satisfied (use --upgrade to upgrade): six>=1.5 in ./venv/lib/python2.7/site-packages (from python-dateutil->pandas)\n", 72 | "Installing collected packages: pandas\n", 73 | " Running setup.py install for pandas: started\n", 74 | " Running setup.py install for pandas: still running...\n", 75 | " Running setup.py install for pandas: still running...\n", 76 | " Running setup.py install for pandas: finished with status 'done'\n", 77 | "Successfully installed pandas-0.18.1\n" 78 | ] 79 | }, 80 | { 81 | "data": { 82 | "text/plain": [ 83 | "0" 84 | ] 85 | }, 86 | "execution_count": 4, 87 | "metadata": {}, 88 | "output_type": "execute_result" 89 | } 90 | ], 91 | "source": [ 92 | "pip.main(['install', 'pandas'])" 93 | ] 94 | }, 95 | { 96 | "cell_type": "markdown", 97 | "metadata": {}, 98 | "source": [ 99 | "Note in the output that this command also checked on all the dependencies for `pandas`, and would have installed them if they had been lacking. 
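One caveat: `pip.main` is an internal entry point that was removed in pip 10, so on a cluster with a newer pip the same install is more safely run through the interpreter itself. A sketch of that alternative:

```python
import subprocess
import sys

# equivalent to running `pip install pandas` with the interpreter's own pip
subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'pandas'])
```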
You can do this for any package listed in the PyPi index, the official repository for Pthon modules.\n", 100 | "\n", 101 | "# Installing Specific Versions\n", 102 | "\n", 103 | "We can also use this to get specific versions of modules; by default it installs the newest:" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": 5, 109 | "metadata": {}, 110 | "outputs": [ 111 | { 112 | "name": "stdout", 113 | "output_type": "stream", 114 | "text": [ 115 | "Collecting statsmodels==0.6.0\n", 116 | " Downloading statsmodels-0.6.0.zip (7.3MB)\n", 117 | "Collecting scipy (from statsmodels==0.6.0)\n", 118 | " Downloading scipy-0.17.1-cp27-cp27mu-manylinux1_x86_64.whl (39.5MB)\n", 119 | "Requirement already satisfied (use --upgrade to upgrade): pandas in ./venv/lib64/python2.7/site-packages (from statsmodels==0.6.0)\n", 120 | "Collecting patsy (from statsmodels==0.6.0)\n", 121 | " Downloading patsy-0.4.1-py2.py3-none-any.whl (233kB)\n", 122 | "Requirement already satisfied (use --upgrade to upgrade): python-dateutil in ./venv/lib/python2.7/site-packages (from pandas->statsmodels==0.6.0)\n", 123 | "Requirement already satisfied (use --upgrade to upgrade): pytz>=2011k in ./venv/lib/python2.7/site-packages (from pandas->statsmodels==0.6.0)\n", 124 | "Requirement already satisfied (use --upgrade to upgrade): numpy>=1.7.0 in ./venv/lib64/python2.7/site-packages (from pandas->statsmodels==0.6.0)\n", 125 | "Requirement already satisfied (use --upgrade to upgrade): six in ./venv/lib/python2.7/site-packages (from patsy->statsmodels==0.6.0)\n", 126 | "Installing collected packages: scipy, patsy, statsmodels\n", 127 | " Running setup.py install for statsmodels: started\n", 128 | " Running setup.py install for statsmodels: finished with status 'done'\n", 129 | "Successfully installed patsy-0.4.1 scipy-0.17.1 statsmodels-0.6.0\n" 130 | ] 131 | }, 132 | { 133 | "data": { 134 | "text/plain": [ 135 | "0" 136 | ] 137 | }, 138 | "execution_count": 5, 139 | "metadata": {}, 140 | "output_type": "execute_result" 141 | } 142 | ], 143 | "source": [ 144 | "pip.main(['install', 'statsmodels==0.6.0'])" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "Note that this ended up installing two dependencies, `patsy` and `scipy`. 
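If a project needs several pinned modules, the same call can simply be looped over a small requirements list; the versions here are only illustrative:

```python
import pip

# install a short list of pinned requirements in one go (versions are examples)
requirements = ['pandas==0.18.1', 'statsmodels==0.6.0', 'patsy==0.4.1']
for requirement in requirements:
    pip.main(['install', requirement])
```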
\n", 152 | "\n", 153 | "# Upgrading Modules\n", 154 | "\n", 155 | "And finally, we can use `pip` to upgrade modules if necessary:" 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": 6, 161 | "metadata": {}, 162 | "outputs": [ 163 | { 164 | "name": "stdout", 165 | "output_type": "stream", 166 | "text": [ 167 | "Collecting statsmodels\n", 168 | " Downloading statsmodels-0.6.1.tar.gz (7.0MB)\n", 169 | "Requirement already up-to-date: pandas in ./venv/lib64/python2.7/site-packages (from statsmodels)\n", 170 | "Requirement already up-to-date: python-dateutil in ./venv/lib/python2.7/site-packages (from pandas->statsmodels)\n", 171 | "Requirement already up-to-date: pytz>=2011k in ./venv/lib/python2.7/site-packages (from pandas->statsmodels)\n", 172 | "Requirement already up-to-date: numpy>=1.7.0 in ./venv/lib64/python2.7/site-packages (from pandas->statsmodels)\n", 173 | "Requirement already up-to-date: six>=1.5 in ./venv/lib/python2.7/site-packages (from python-dateutil->pandas->statsmodels)\n", 174 | "Installing collected packages: statsmodels\n", 175 | " Found existing installation: statsmodels 0.6.0\n", 176 | " Uninstalling statsmodels-0.6.0:\n", 177 | " Successfully uninstalled statsmodels-0.6.0\n", 178 | " Running setup.py install for statsmodels: started\n", 179 | " Running setup.py install for statsmodels: finished with status 'done'\n", 180 | "Successfully installed statsmodels-0.6.1\n" 181 | ] 182 | }, 183 | { 184 | "data": { 185 | "text/plain": [ 186 | "0" 187 | ] 188 | }, 189 | "execution_count": 6, 190 | "metadata": {}, 191 | "output_type": "execute_result" 192 | } 193 | ], 194 | "source": [ 195 | "pip.main(['install', '--upgrade', 'statsmodels'])" 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": {}, 201 | "source": [ 202 | "All of the packages you install this way will remain installed as long as this cluster is spun up. You can open multiple notebooks, or close all of them and open a new one, and they will all have access to the modules you've installed. Only when the cluster is spun down through the AWS Console will you need to start over.\n", 203 | "\n", 204 | "And finally, it's important to keep in mind that some packages may not be compatible with distributed data. The `pandas` module, for example, is for working with dataframes in a standard desktop environment - if you try to load a very large distributed dataset into a Pandas dataframe, *it will attempt to put all the data in one location*, and it will fail. If you have reduced your data down to a reasonable size, however, you can load it into a Pandas dataframe. Whether a module will work for you or not depends entirely on the module and the situation, so feel free to consult with Research Programming if in doubt." 
205 | ] 206 | }, 207 | { 208 | "cell_type": "code", 209 | "execution_count": null, 210 | "metadata": { 211 | "collapsed": true 212 | }, 213 | "outputs": [], 214 | "source": [] 215 | } 216 | ], 217 | "metadata": { 218 | "kernelspec": { 219 | "display_name": "Python 3", 220 | "language": "python", 221 | "name": "python3" 222 | }, 223 | "language_info": { 224 | "codemirror_mode": { 225 | "name": "ipython", 226 | "version": 3 227 | }, 228 | "file_extension": ".py", 229 | "mimetype": "text/x-python", 230 | "name": "python", 231 | "nbconvert_exporter": "python", 232 | "pygments_lexer": "ipython3", 233 | "version": "3.6.2" 234 | } 235 | }, 236 | "nbformat": 4, 237 | "nbformat_minor": 1 238 | } 239 | -------------------------------------------------------------------------------- /License.d: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | ========================== 3 | 4 | Version 3, 29 June 2007 5 | 6 | Copyright © 2007 Free Software Foundation, Inc. <> 7 | 8 | Everyone is permitted to copy and distribute verbatim copies of this license 9 | document, but changing it is not allowed. 10 | 11 | ## Preamble 12 | 13 | The GNU General Public License is a free, copyleft license for software and other 14 | kinds of works. 15 | 16 | The licenses for most software and other practical works are designed to take away 17 | your freedom to share and change the works. By contrast, the GNU General Public 18 | License is intended to guarantee your freedom to share and change all versions of a 19 | program--to make sure it remains free software for all its users. We, the Free 20 | Software Foundation, use the GNU General Public License for most of our software; it 21 | applies also to any other work released this way by its authors. You can apply it to 22 | your programs, too. 23 | 24 | When we speak of free software, we are referring to freedom, not price. Our General 25 | Public Licenses are designed to make sure that you have the freedom to distribute 26 | copies of free software (and charge for them if you wish), that you receive source 27 | code or can get it if you want it, that you can change the software or use pieces of 28 | it in new free programs, and that you know you can do these things. 29 | 30 | To protect your rights, we need to prevent others from denying you these rights or 31 | asking you to surrender the rights. Therefore, you have certain responsibilities if 32 | you distribute copies of the software, or if you modify it: responsibilities to 33 | respect the freedom of others. 34 | 35 | For example, if you distribute copies of such a program, whether gratis or for a fee, 36 | you must pass on to the recipients the same freedoms that you received. You must make 37 | sure that they, too, receive or can get the source code. And you must show them these 38 | terms so they know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: (1) assert 41 | copyright on the software, and (2) offer you this License giving you legal permission 42 | to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains that there is 45 | no warranty for this free software. For both users' and authors' sake, the GPL 46 | requires that modified versions be marked as changed, so that their problems will not 47 | be attributed erroneously to authors of previous versions. 
48 | 49 | Some devices are designed to deny users access to install or run modified versions of 50 | the software inside them, although the manufacturer can do so. This is fundamentally 51 | incompatible with the aim of protecting users' freedom to change the software. The 52 | systematic pattern of such abuse occurs in the area of products for individuals to 53 | use, which is precisely where it is most unacceptable. Therefore, we have designed 54 | this version of the GPL to prohibit the practice for those products. If such problems 55 | arise substantially in other domains, we stand ready to extend this provision to 56 | those domains in future versions of the GPL, as needed to protect the freedom of 57 | users. 58 | 59 | Finally, every program is threatened constantly by software patents. States should 60 | not allow patents to restrict development and use of software on general-purpose 61 | computers, but in those that do, we wish to avoid the special danger that patents 62 | applied to a free program could make it effectively proprietary. To prevent this, the 63 | GPL assures that patents cannot be used to render the program non-free. 64 | 65 | The precise terms and conditions for copying, distribution and modification follow. 66 | 67 | ## TERMS AND CONDITIONS 68 | 69 | ### 0. Definitions. 70 | 71 | “This License” refers to version 3 of the GNU General Public License. 72 | 73 | “Copyright” also means copyright-like laws that apply to other kinds of 74 | works, such as semiconductor masks. 75 | 76 | “The Program” refers to any copyrightable work licensed under this 77 | License. Each licensee is addressed as “you”. “Licensees” and 78 | “recipients” may be individuals or organizations. 79 | 80 | To “modify” a work means to copy from or adapt all or part of the work in 81 | a fashion requiring copyright permission, other than the making of an exact copy. The 82 | resulting work is called a “modified version” of the earlier work or a 83 | work “based on” the earlier work. 84 | 85 | A “covered work” means either the unmodified Program or a work based on 86 | the Program. 87 | 88 | To “propagate” a work means to do anything with it that, without 89 | permission, would make you directly or secondarily liable for infringement under 90 | applicable copyright law, except executing it on a computer or modifying a private 91 | copy. Propagation includes copying, distribution (with or without modification), 92 | making available to the public, and in some countries other activities as well. 93 | 94 | To “convey” a work means any kind of propagation that enables other 95 | parties to make or receive copies. Mere interaction with a user through a computer 96 | network, with no transfer of a copy, is not conveying. 97 | 98 | An interactive user interface displays “Appropriate Legal Notices” to the 99 | extent that it includes a convenient and prominently visible feature that (1) 100 | displays an appropriate copyright notice, and (2) tells the user that there is no 101 | warranty for the work (except to the extent that warranties are provided), that 102 | licensees may convey the work under this License, and how to view a copy of this 103 | License. If the interface presents a list of user commands or options, such as a 104 | menu, a prominent item in the list meets this criterion. 105 | 106 | ### 1. Source Code. 107 | 108 | The “source code” for a work means the preferred form of the work for 109 | making modifications to it. “Object code” means any non-source form of a 110 | work. 
111 | 112 | A “Standard Interface” means an interface that either is an official 113 | standard defined by a recognized standards body, or, in the case of interfaces 114 | specified for a particular programming language, one that is widely used among 115 | developers working in that language. 116 | 117 | The “System Libraries” of an executable work include anything, other than 118 | the work as a whole, that (a) is included in the normal form of packaging a Major 119 | Component, but which is not part of that Major Component, and (b) serves only to 120 | enable use of the work with that Major Component, or to implement a Standard 121 | Interface for which an implementation is available to the public in source code form. 122 | A “Major Component”, in this context, means a major essential component 123 | (kernel, window system, and so on) of the specific operating system (if any) on which 124 | the executable work runs, or a compiler used to produce the work, or an object code 125 | interpreter used to run it. 126 | 127 | The “Corresponding Source” for a work in object code form means all the 128 | source code needed to generate, install, and (for an executable work) run the object 129 | code and to modify the work, including scripts to control those activities. However, 130 | it does not include the work's System Libraries, or general-purpose tools or 131 | generally available free programs which are used unmodified in performing those 132 | activities but which are not part of the work. For example, Corresponding Source 133 | includes interface definition files associated with source files for the work, and 134 | the source code for shared libraries and dynamically linked subprograms that the work 135 | is specifically designed to require, such as by intimate data communication or 136 | control flow between those subprograms and other parts of the work. 137 | 138 | The Corresponding Source need not include anything that users can regenerate 139 | automatically from other parts of the Corresponding Source. 140 | 141 | The Corresponding Source for a work in source code form is that same work. 142 | 143 | ### 2. Basic Permissions. 144 | 145 | All rights granted under this License are granted for the term of copyright on the 146 | Program, and are irrevocable provided the stated conditions are met. This License 147 | explicitly affirms your unlimited permission to run the unmodified Program. The 148 | output from running a covered work is covered by this License only if the output, 149 | given its content, constitutes a covered work. This License acknowledges your rights 150 | of fair use or other equivalent, as provided by copyright law. 151 | 152 | You may make, run and propagate covered works that you do not convey, without 153 | conditions so long as your license otherwise remains in force. You may convey covered 154 | works to others for the sole purpose of having them make modifications exclusively 155 | for you, or provide you with facilities for running those works, provided that you 156 | comply with the terms of this License in conveying all material for which you do not 157 | control copyright. Those thus making or running the covered works for you must do so 158 | exclusively on your behalf, under your direction and control, on terms that prohibit 159 | them from making any copies of your copyrighted material outside their relationship 160 | with you. 161 | 162 | Conveying under any other circumstances is permitted solely under the conditions 163 | stated below. 
Sublicensing is not allowed; section 10 makes it unnecessary. 164 | 165 | ### 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 166 | 167 | No covered work shall be deemed part of an effective technological measure under any 168 | applicable law fulfilling obligations under article 11 of the WIPO copyright treaty 169 | adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention 170 | of such measures. 171 | 172 | When you convey a covered work, you waive any legal power to forbid circumvention of 173 | technological measures to the extent such circumvention is effected by exercising 174 | rights under this License with respect to the covered work, and you disclaim any 175 | intention to limit operation or modification of the work as a means of enforcing, 176 | against the work's users, your or third parties' legal rights to forbid circumvention 177 | of technological measures. 178 | 179 | ### 4. Conveying Verbatim Copies. 180 | 181 | You may convey verbatim copies of the Program's source code as you receive it, in any 182 | medium, provided that you conspicuously and appropriately publish on each copy an 183 | appropriate copyright notice; keep intact all notices stating that this License and 184 | any non-permissive terms added in accord with section 7 apply to the code; keep 185 | intact all notices of the absence of any warranty; and give all recipients a copy of 186 | this License along with the Program. 187 | 188 | You may charge any price or no price for each copy that you convey, and you may offer 189 | support or warranty protection for a fee. 190 | 191 | ### 5. Conveying Modified Source Versions. 192 | 193 | You may convey a work based on the Program, or the modifications to produce it from 194 | the Program, in the form of source code under the terms of section 4, provided that 195 | you also meet all of these conditions: 196 | 197 | * **a)** The work must carry prominent notices stating that you modified it, and giving a 198 | relevant date. 199 | * **b)** The work must carry prominent notices stating that it is released under this 200 | License and any conditions added under section 7. This requirement modifies the 201 | requirement in section 4 to “keep intact all notices”. 202 | * **c)** You must license the entire work, as a whole, under this License to anyone who 203 | comes into possession of a copy. This License will therefore apply, along with any 204 | applicable section 7 additional terms, to the whole of the work, and all its parts, 205 | regardless of how they are packaged. This License gives no permission to license the 206 | work in any other way, but it does not invalidate such permission if you have 207 | separately received it. 208 | * **d)** If the work has interactive user interfaces, each must display Appropriate Legal 209 | Notices; however, if the Program has interactive interfaces that do not display 210 | Appropriate Legal Notices, your work need not make them do so. 211 | 212 | A compilation of a covered work with other separate and independent works, which are 213 | not by their nature extensions of the covered work, and which are not combined with 214 | it such as to form a larger program, in or on a volume of a storage or distribution 215 | medium, is called an “aggregate” if the compilation and its resulting 216 | copyright are not used to limit the access or legal rights of the compilation's users 217 | beyond what the individual works permit. 
Inclusion of a covered work in an aggregate 218 | does not cause this License to apply to the other parts of the aggregate. 219 | 220 | ### 6. Conveying Non-Source Forms. 221 | 222 | You may convey a covered work in object code form under the terms of sections 4 and 223 | 5, provided that you also convey the machine-readable Corresponding Source under the 224 | terms of this License, in one of these ways: 225 | 226 | * **a)** Convey the object code in, or embodied in, a physical product (including a 227 | physical distribution medium), accompanied by the Corresponding Source fixed on a 228 | durable physical medium customarily used for software interchange. 229 | * **b)** Convey the object code in, or embodied in, a physical product (including a 230 | physical distribution medium), accompanied by a written offer, valid for at least 231 | three years and valid for as long as you offer spare parts or customer support for 232 | that product model, to give anyone who possesses the object code either (1) a copy of 233 | the Corresponding Source for all the software in the product that is covered by this 234 | License, on a durable physical medium customarily used for software interchange, for 235 | a price no more than your reasonable cost of physically performing this conveying of 236 | source, or (2) access to copy the Corresponding Source from a network server at no 237 | charge. 238 | * **c)** Convey individual copies of the object code with a copy of the written offer to 239 | provide the Corresponding Source. This alternative is allowed only occasionally and 240 | noncommercially, and only if you received the object code with such an offer, in 241 | accord with subsection 6b. 242 | * **d)** Convey the object code by offering access from a designated place (gratis or for 243 | a charge), and offer equivalent access to the Corresponding Source in the same way 244 | through the same place at no further charge. You need not require recipients to copy 245 | the Corresponding Source along with the object code. If the place to copy the object 246 | code is a network server, the Corresponding Source may be on a different server 247 | (operated by you or a third party) that supports equivalent copying facilities, 248 | provided you maintain clear directions next to the object code saying where to find 249 | the Corresponding Source. Regardless of what server hosts the Corresponding Source, 250 | you remain obligated to ensure that it is available for as long as needed to satisfy 251 | these requirements. 252 | * **e)** Convey the object code using peer-to-peer transmission, provided you inform 253 | other peers where the object code and Corresponding Source of the work are being 254 | offered to the general public at no charge under subsection 6d. 255 | 256 | A separable portion of the object code, whose source code is excluded from the 257 | Corresponding Source as a System Library, need not be included in conveying the 258 | object code work. 259 | 260 | A “User Product” is either (1) a “consumer product”, which 261 | means any tangible personal property which is normally used for personal, family, or 262 | household purposes, or (2) anything designed or sold for incorporation into a 263 | dwelling. In determining whether a product is a consumer product, doubtful cases 264 | shall be resolved in favor of coverage. 
For a particular product received by a 265 | particular user, “normally used” refers to a typical or common use of 266 | that class of product, regardless of the status of the particular user or of the way 267 | in which the particular user actually uses, or expects or is expected to use, the 268 | product. A product is a consumer product regardless of whether the product has 269 | substantial commercial, industrial or non-consumer uses, unless such uses represent 270 | the only significant mode of use of the product. 271 | 272 | “Installation Information” for a User Product means any methods, 273 | procedures, authorization keys, or other information required to install and execute 274 | modified versions of a covered work in that User Product from a modified version of 275 | its Corresponding Source. The information must suffice to ensure that the continued 276 | functioning of the modified object code is in no case prevented or interfered with 277 | solely because modification has been made. 278 | 279 | If you convey an object code work under this section in, or with, or specifically for 280 | use in, a User Product, and the conveying occurs as part of a transaction in which 281 | the right of possession and use of the User Product is transferred to the recipient 282 | in perpetuity or for a fixed term (regardless of how the transaction is 283 | characterized), the Corresponding Source conveyed under this section must be 284 | accompanied by the Installation Information. But this requirement does not apply if 285 | neither you nor any third party retains the ability to install modified object code 286 | on the User Product (for example, the work has been installed in ROM). 287 | 288 | The requirement to provide Installation Information does not include a requirement to 289 | continue to provide support service, warranty, or updates for a work that has been 290 | modified or installed by the recipient, or for the User Product in which it has been 291 | modified or installed. Access to a network may be denied when the modification itself 292 | materially and adversely affects the operation of the network or violates the rules 293 | and protocols for communication across the network. 294 | 295 | Corresponding Source conveyed, and Installation Information provided, in accord with 296 | this section must be in a format that is publicly documented (and with an 297 | implementation available to the public in source code form), and must require no 298 | special password or key for unpacking, reading or copying. 299 | 300 | ### 7. Additional Terms. 301 | 302 | “Additional permissions” are terms that supplement the terms of this 303 | License by making exceptions from one or more of its conditions. Additional 304 | permissions that are applicable to the entire Program shall be treated as though they 305 | were included in this License, to the extent that they are valid under applicable 306 | law. If additional permissions apply only to part of the Program, that part may be 307 | used separately under those permissions, but the entire Program remains governed by 308 | this License without regard to the additional permissions. 309 | 310 | When you convey a copy of a covered work, you may at your option remove any 311 | additional permissions from that copy, or from any part of it. (Additional 312 | permissions may be written to require their own removal in certain cases when you 313 | modify the work.) 
You may place additional permissions on material, added by you to a 314 | covered work, for which you have or can give appropriate copyright permission. 315 | 316 | Notwithstanding any other provision of this License, for material you add to a 317 | covered work, you may (if authorized by the copyright holders of that material) 318 | supplement the terms of this License with terms: 319 | 320 | * **a)** Disclaiming warranty or limiting liability differently from the terms of 321 | sections 15 and 16 of this License; or 322 | * **b)** Requiring preservation of specified reasonable legal notices or author 323 | attributions in that material or in the Appropriate Legal Notices displayed by works 324 | containing it; or 325 | * **c)** Prohibiting misrepresentation of the origin of that material, or requiring that 326 | modified versions of such material be marked in reasonable ways as different from the 327 | original version; or 328 | * **d)** Limiting the use for publicity purposes of names of licensors or authors of the 329 | material; or 330 | * **e)** Declining to grant rights under trademark law for use of some trade names, 331 | trademarks, or service marks; or 332 | * **f)** Requiring indemnification of licensors and authors of that material by anyone 333 | who conveys the material (or modified versions of it) with contractual assumptions of 334 | liability to the recipient, for any liability that these contractual assumptions 335 | directly impose on those licensors and authors. 336 | 337 | All other non-permissive additional terms are considered “further 338 | restrictions” within the meaning of section 10. If the Program as you received 339 | it, or any part of it, contains a notice stating that it is governed by this License 340 | along with a term that is a further restriction, you may remove that term. If a 341 | license document contains a further restriction but permits relicensing or conveying 342 | under this License, you may add to a covered work material governed by the terms of 343 | that license document, provided that the further restriction does not survive such 344 | relicensing or conveying. 345 | 346 | If you add terms to a covered work in accord with this section, you must place, in 347 | the relevant source files, a statement of the additional terms that apply to those 348 | files, or a notice indicating where to find the applicable terms. 349 | 350 | Additional terms, permissive or non-permissive, may be stated in the form of a 351 | separately written license, or stated as exceptions; the above requirements apply 352 | either way. 353 | 354 | ### 8. Termination. 355 | 356 | You may not propagate or modify a covered work except as expressly provided under 357 | this License. Any attempt otherwise to propagate or modify it is void, and will 358 | automatically terminate your rights under this License (including any patent licenses 359 | granted under the third paragraph of section 11). 360 | 361 | However, if you cease all violation of this License, then your license from a 362 | particular copyright holder is reinstated (a) provisionally, unless and until the 363 | copyright holder explicitly and finally terminates your license, and (b) permanently, 364 | if the copyright holder fails to notify you of the violation by some reasonable means 365 | prior to 60 days after the cessation. 
366 | 367 | Moreover, your license from a particular copyright holder is reinstated permanently 368 | if the copyright holder notifies you of the violation by some reasonable means, this 369 | is the first time you have received notice of violation of this License (for any 370 | work) from that copyright holder, and you cure the violation prior to 30 days after 371 | your receipt of the notice. 372 | 373 | Termination of your rights under this section does not terminate the licenses of 374 | parties who have received copies or rights from you under this License. If your 375 | rights have been terminated and not permanently reinstated, you do not qualify to 376 | receive new licenses for the same material under section 10. 377 | 378 | ### 9. Acceptance Not Required for Having Copies. 379 | 380 | You are not required to accept this License in order to receive or run a copy of the 381 | Program. Ancillary propagation of a covered work occurring solely as a consequence of 382 | using peer-to-peer transmission to receive a copy likewise does not require 383 | acceptance. However, nothing other than this License grants you permission to 384 | propagate or modify any covered work. These actions infringe copyright if you do not 385 | accept this License. Therefore, by modifying or propagating a covered work, you 386 | indicate your acceptance of this License to do so. 387 | 388 | ### 10. Automatic Licensing of Downstream Recipients. 389 | 390 | Each time you convey a covered work, the recipient automatically receives a license 391 | from the original licensors, to run, modify and propagate that work, subject to this 392 | License. You are not responsible for enforcing compliance by third parties with this 393 | License. 394 | 395 | An “entity transaction” is a transaction transferring control of an 396 | organization, or substantially all assets of one, or subdividing an organization, or 397 | merging organizations. If propagation of a covered work results from an entity 398 | transaction, each party to that transaction who receives a copy of the work also 399 | receives whatever licenses to the work the party's predecessor in interest had or 400 | could give under the previous paragraph, plus a right to possession of the 401 | Corresponding Source of the work from the predecessor in interest, if the predecessor 402 | has it or can get it with reasonable efforts. 403 | 404 | You may not impose any further restrictions on the exercise of the rights granted or 405 | affirmed under this License. For example, you may not impose a license fee, royalty, 406 | or other charge for exercise of rights granted under this License, and you may not 407 | initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging 408 | that any patent claim is infringed by making, using, selling, offering for sale, or 409 | importing the Program or any portion of it. 410 | 411 | ### 11. Patents. 412 | 413 | A “contributor” is a copyright holder who authorizes use under this 414 | License of the Program or a work on which the Program is based. The work thus 415 | licensed is called the contributor's “contributor version”. 
416 | 417 | A contributor's “essential patent claims” are all patent claims owned or 418 | controlled by the contributor, whether already acquired or hereafter acquired, that 419 | would be infringed by some manner, permitted by this License, of making, using, or 420 | selling its contributor version, but do not include claims that would be infringed 421 | only as a consequence of further modification of the contributor version. For 422 | purposes of this definition, “control” includes the right to grant patent 423 | sublicenses in a manner consistent with the requirements of this License. 424 | 425 | Each contributor grants you a non-exclusive, worldwide, royalty-free patent license 426 | under the contributor's essential patent claims, to make, use, sell, offer for sale, 427 | import and otherwise run, modify and propagate the contents of its contributor 428 | version. 429 | 430 | In the following three paragraphs, a “patent license” is any express 431 | agreement or commitment, however denominated, not to enforce a patent (such as an 432 | express permission to practice a patent or covenant not to sue for patent 433 | infringement). To “grant” such a patent license to a party means to make 434 | such an agreement or commitment not to enforce a patent against the party. 435 | 436 | If you convey a covered work, knowingly relying on a patent license, and the 437 | Corresponding Source of the work is not available for anyone to copy, free of charge 438 | and under the terms of this License, through a publicly available network server or 439 | other readily accessible means, then you must either (1) cause the Corresponding 440 | Source to be so available, or (2) arrange to deprive yourself of the benefit of the 441 | patent license for this particular work, or (3) arrange, in a manner consistent with 442 | the requirements of this License, to extend the patent license to downstream 443 | recipients. “Knowingly relying” means you have actual knowledge that, but 444 | for the patent license, your conveying the covered work in a country, or your 445 | recipient's use of the covered work in a country, would infringe one or more 446 | identifiable patents in that country that you have reason to believe are valid. 447 | 448 | If, pursuant to or in connection with a single transaction or arrangement, you 449 | convey, or propagate by procuring conveyance of, a covered work, and grant a patent 450 | license to some of the parties receiving the covered work authorizing them to use, 451 | propagate, modify or convey a specific copy of the covered work, then the patent 452 | license you grant is automatically extended to all recipients of the covered work and 453 | works based on it. 454 | 455 | A patent license is “discriminatory” if it does not include within the 456 | scope of its coverage, prohibits the exercise of, or is conditioned on the 457 | non-exercise of one or more of the rights that are specifically granted under this 458 | License. 
You may not convey a covered work if you are a party to an arrangement with 459 | a third party that is in the business of distributing software, under which you make 460 | payment to the third party based on the extent of your activity of conveying the 461 | work, and under which the third party grants, to any of the parties who would receive 462 | the covered work from you, a discriminatory patent license (a) in connection with 463 | copies of the covered work conveyed by you (or copies made from those copies), or (b) 464 | primarily for and in connection with specific products or compilations that contain 465 | the covered work, unless you entered into that arrangement, or that patent license 466 | was granted, prior to 28 March 2007. 467 | 468 | Nothing in this License shall be construed as excluding or limiting any implied 469 | license or other defenses to infringement that may otherwise be available to you 470 | under applicable patent law. 471 | 472 | ### 12. No Surrender of Others' Freedom. 473 | 474 | If conditions are imposed on you (whether by court order, agreement or otherwise) 475 | that contradict the conditions of this License, they do not excuse you from the 476 | conditions of this License. If you cannot convey a covered work so as to satisfy 477 | simultaneously your obligations under this License and any other pertinent 478 | obligations, then as a consequence you may not convey it at all. For example, if you 479 | agree to terms that obligate you to collect a royalty for further conveying from 480 | those to whom you convey the Program, the only way you could satisfy both those terms 481 | and this License would be to refrain entirely from conveying the Program. 482 | 483 | ### 13. Use with the GNU Affero General Public License. 484 | 485 | Notwithstanding any other provision of this License, you have permission to link or 486 | combine any covered work with a work licensed under version 3 of the GNU Affero 487 | General Public License into a single combined work, and to convey the resulting work. 488 | The terms of this License will continue to apply to the part which is the covered 489 | work, but the special requirements of the GNU Affero General Public License, section 490 | 13, concerning interaction through a network will apply to the combination as such. 491 | 492 | ### 14. Revised Versions of this License. 493 | 494 | The Free Software Foundation may publish revised and/or new versions of the GNU 495 | General Public License from time to time. Such new versions will be similar in spirit 496 | to the present version, but may differ in detail to address new problems or concerns. 497 | 498 | Each version is given a distinguishing version number. If the Program specifies that 499 | a certain numbered version of the GNU General Public License “or any later 500 | version” applies to it, you have the option of following the terms and 501 | conditions either of that numbered version or of any later version published by the 502 | Free Software Foundation. If the Program does not specify a version number of the GNU 503 | General Public License, you may choose any version ever published by the Free 504 | Software Foundation. 505 | 506 | If the Program specifies that a proxy can decide which future versions of the GNU 507 | General Public License can be used, that proxy's public statement of acceptance of a 508 | version permanently authorizes you to choose that version for the Program. 509 | 510 | Later license versions may give you additional or different permissions. 
However, no 511 | additional obligations are imposed on any author or copyright holder as a result of 512 | your choosing to follow a later version. 513 | 514 | ### 15. Disclaimer of Warranty. 515 | 516 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 517 | EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 518 | PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER 519 | EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 520 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE 521 | QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE 522 | DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 523 | 524 | ### 16. Limitation of Liability. 525 | 526 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY 527 | COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS 528 | PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, 529 | INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE 530 | PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE 531 | OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE 532 | WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 533 | POSSIBILITY OF SUCH DAMAGES. 534 | 535 | ### 17. Interpretation of Sections 15 and 16. 536 | 537 | If the disclaimer of warranty and limitation of liability provided above cannot be 538 | given local legal effect according to their terms, reviewing courts shall apply local 539 | law that most closely approximates an absolute waiver of all civil liability in 540 | connection with the Program, unless a warranty or assumption of liability accompanies 541 | a copy of the Program in return for a fee. 542 | 543 | END OF TERMS AND CONDITIONS 544 | 545 | ## How to Apply These Terms to Your New Programs 546 | 547 | If you develop a new program, and you want it to be of the greatest possible use to 548 | the public, the best way to achieve this is to make it free software which everyone 549 | can redistribute and change under these terms. 550 | 551 | To do so, attach the following notices to the program. It is safest to attach them 552 | to the start of each source file to most effectively state the exclusion of warranty; 553 | and each file should have at least the “copyright” line and a pointer to 554 | where the full notice is found. 555 | 556 | 557 | Copyright (C) 558 | 559 | This program is free software: you can redistribute it and/or modify 560 | it under the terms of the GNU General Public License as published by 561 | the Free Software Foundation, either version 3 of the License, or 562 | (at your option) any later version. 563 | 564 | This program is distributed in the hope that it will be useful, 565 | but WITHOUT ANY WARRANTY; without even the implied warranty of 566 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 567 | GNU General Public License for more details. 568 | 569 | You should have received a copy of the GNU General Public License 570 | along with this program. If not, see . 571 | 572 | Also add information on how to contact you by electronic and paper mail. 
573 | 574 | If the program does terminal interaction, make it output a short notice like this 575 | when it starts in an interactive mode: 576 | 577 | Copyright (C) 578 | This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'. 579 | This is free software, and you are welcome to redistribute it 580 | under certain conditions; type 'show c' for details. 581 | 582 | The hypothetical commands 'show w' and 'show c' should show the appropriate parts of 583 | the General Public License. Of course, your program's commands might be different; 584 | for a GUI interface, you would use an “about box”. 585 | 586 | You should also get your employer (if you work as a programmer) or school, if any, to 587 | sign a “copyright disclaimer” for the program, if necessary. For more 588 | information on this, and how to apply and follow the GNU GPL, see 589 | <>. 590 | 591 | The GNU General Public License does not permit incorporating your program into 592 | proprietary programs. If your program is a subroutine library, you may consider it 593 | more useful to permit linking proprietary applications with the library. If this is 594 | what you want to do, use the GNU Lesser General Public License instead of this 595 | License. But first, please read 596 | <>. 597 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # pyspark-tutorials 2 | Code snippets and tutorials for working with social science data in PySpark. 3 | Note that each .ipynb file can be downloaded and the code blocks executed or 4 | experimented with directly using a Jupyter (formerly IPython) notebook, or 5 | each one can be displayed in your browser as markdown text just by clicking on 6 | it. 7 | 8 | ## Spark Social Science Manual 9 | 10 | The tutorials included in this repository are geared towards social scientists and policy researchers that want to undertake research using "big data" sets. A manual to accompany these tutorials is linked below. The objective of the manual is to provide social scientists with a brief overview of the distributed computing solution developed by The Urban Institute's Research Programming Team, and of the changes in how researchers manage and analyze data required by this computing environment. 11 | 12 | [Spark Social Science Manual](https://urbaninstitute.github.io/spark-social-science-manual/) 13 | 14 | 1. If you're new to Python entirely, consider trying an intro tutorial first. 15 | Python is a language that stresses readability of code, so it won't be too 16 | difficult to dive right in. [This](http://www.learnpython.org/en/Hello%2C_World%21 "Interactive Python Tutorial") is one good interactive tutorial. 17 | 18 | 19 | 2. After that, or if you're already comfortable with Python basics, get started 20 | with pySpark with these two lessons. They will assume you are comfortable with 21 | what Python code looks like and in general how it works, and lay out some things 22 | you will need to know to understand the other lessons. 23 | 24 | [Basics 1](/01_pyspark-basics-1.ipynb) 25 | * Reading and writing data on S3 26 | * Handling column data types 27 | * Basic data exploration and describing 28 | * Renaming columns 29 | 30 | [Basics 2](/02_pyspark-basics-2.ipynb) 31 | * How pySpark processes commands - lazy computing 32 | * Persisting and unpersisting 33 | * Timing operations 34 | 35 | 3. Basic data tasks are covered in the following guides. 
Note that these are not 36 | intended to be comprehensive! They cover many of the things that are most 37 | common, but others may require you to look them up or experiment. Hopefully this 38 | framework gives you enough to get started. 39 | 40 | [Merging Data](/03_merging.ipynb) 41 | * Using unionAll to stack rows by matching columns 42 | * Using join to merge columns by matching specific row values 43 | 44 | [Missing Values](/04_missing-data.ipynb) 45 | * Handling null values on loading 46 | * Counting null values 47 | * Dropping null values 48 | * Replacing null values 49 | 50 | [Moving Average Imputation](/05_moving-average-imputation.ipynb) 51 | * Using pySpark window functions 52 | * Calculating a moving average 53 | * Imputing missing values 54 | 55 | [Pivoting/Reshaping](/06_pivoting.ipynb) 56 | * Using groupBy to organize data 57 | * Pivoting data with an aggregation 58 | * Reshaping from long to wide without aggregation 59 | 60 | [Resampling](/07_resampling.ipynb) 61 | * Upsampling data based on a date column 62 | * Using datetime objects 63 | 64 | [Subsetting](/08_subsetting.ipynb) 65 | * Filtering data based on criteria 66 | * Taking a randomized sample 67 | 68 | [Summary Statistics](/09_summary-statistics.ipynb) 69 | * Using describe 70 | * Adding additional aggregations to describe output 71 | 72 | [Graphing](/10_graphing.ipynb) 73 | * Aggregating to use Matplotlib and Pandas 74 | 75 | 4. The pySpark bootstrap used by the Urban Institute to start a cluster on Amazon 76 | Web Services only installs a handful of Python modules. If you need others for your 77 | work, or specific versions, this tutorial explains how to get them. It uses only 78 | standard Python libraries, and is therefore not specific to the pySpark environment: 79 | 80 | [Installing Python Modules](/11_installing-python-modules.ipynb) 81 | * Using the pip module for Python packages 82 | 83 | 5. And finally, now that Spark 2.0 is deployed to Amazon Web Services, development has 84 | begun on OLS and GLM tutorials, which are being uploaded as they are completed. 85 | [Introduction to GLM](/12_GLM.ipynb) 86 | -------------------------------------------------------------------------------- /UrbanSparkUtilities/utilities.py: -------------------------------------------------------------------------------- 1 | def prettify(raw_spark): 2 | width = 120 3 | 4 | if isinstance(raw_spark, list) and all([isinstance(item, (list, tuple)) for item in raw_spark]): 5 | lengths = sorted([sum([len(str(i)) for i in entry]) for entry in raw_spark], reverse=True) 6 | 7 | cumsum = 0 8 | for i, l in enumerate(lengths): 9 | cumsum += l 10 | if cumsum >= width: 11 | num_cols = len(lengths[:i]) 12 | break 13 | 14 | #in progress 15 | 16 | 17 | 18 | 19 | 20 | def schema(sdict, not_nullable=[]): 21 | """ 22 | sdict : A dictionary with key=column name and val=dtype string (e.g. 'str', 'int64', 'float', 'date') 23 | not_nullable : A list of column names that will be set to not allow nulls 24 | """ 25 | 26 | assert(isinstance(sdict, dict) and len(sdict) > 0), 'UrbanSparkUtilities schema function requires a non-empty dict as its first argument.'
27 | nullable = {c: True for c in sdict.keys()} 28 | for c in not_nullable: 29 | if c in nullable: 30 | nullable[c] = False 31 | from pyspark.sql.types import DateType, IntegerType, FloatType, LongType, DoubleType, StringType, StructType, StructField #imported before the dtypes mapping below references the type classes 32 | dtypes = {'str': StringType, 33 | 'int': IntegerType, 34 | 'int32': IntegerType, 35 | 'int64': LongType, 36 | 'float': FloatType, 37 | 'float32': FloatType, 38 | 'float64': DoubleType, 39 | 'date': DateType, 40 | 'timestamp': DateType 41 | } 42 | assert(all([dt in dtypes for dt in sdict.values()])), 'UrbanSparkUtilities schema function recognizes str, int/int32/int64, float/float32/float64 and date/timestamp dtype strings.' 43 | 44 | 45 | 46 | 47 | return StructType([StructField(c, dtypes[d](), nullable[c]) for c, d in sdict.items()]) 48 | import pyspark.sql.dataframe #needed for the dataframe type check in build_indep_vars 49 | def build_indep_vars(df, independent_vars, categorical_vars=None, keep_intermediate=False, summarizer=True): 50 | 51 | """ 52 | Data verification 53 | df : DataFrame 54 | independent_vars : List of column names 55 | categorical_vars : None or list of column names, e.g. ['col1', 'col2'] 56 | """ 57 | assert(type(df) is pyspark.sql.dataframe.DataFrame), 'pyspark_glm: A pySpark dataframe is required as the first argument.' 58 | assert(type(independent_vars) is list), 'pyspark_glm: List of independent variable column names must be the second argument.' 59 | for iv in independent_vars: 60 | assert(type(iv) is str), 'pyspark_glm: Independent variables must be column name strings.' 61 | assert(iv in df.columns), 'pyspark_glm: Independent variable name is not a dataframe column.' 62 | if categorical_vars: 63 | for cv in categorical_vars: 64 | assert(type(cv) is str), 'pyspark_glm: Categorical variables must be column name strings.' 65 | assert(cv in df.columns), 'pyspark_glm: Categorical variable name is not a dataframe column.' 66 | assert(cv in independent_vars), 'pyspark_glm: Categorical variables must be independent variables.'
67 | 68 | """ 69 | Code 70 | """ 71 | from pyspark.ml import Pipeline 72 | from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler 73 | from pyspark.ml.regression import GeneralizedLinearRegression 74 | 75 | if categorical_vars: 76 | string_indexer = [StringIndexer(inputCol=x, 77 | outputCol='{}_index'.format(x)) 78 | for x in categorical_vars] 79 | 80 | encoder = [OneHotEncoder(dropLast=True, 81 | inputCol ='{}_index' .format(x), 82 | outputCol='{}_vector'.format(x)) 83 | for x in categorical_vars] 84 | 85 | independent_vars = ['{}_vector'.format(x) if x in categorical_vars else x for x in independent_vars] 86 | else: 87 | string_indexer, encoder = [], [] 88 | 89 | assembler = VectorAssembler(inputCols=independent_vars, 90 | outputCol='indep_vars') 91 | pipeline = Pipeline(stages=string_indexer+encoder+[assembler]) 92 | model = pipeline.fit(df) 93 | df = model.transform(df) 94 | 95 | #for building the crosswalk between indicies and column names 96 | if summarizer: 97 | param_crosswalk = {} 98 | 99 | i = 0 100 | for x in independent_vars: 101 | if '_vector' in x[-7:]: 102 | xrs = x.rstrip('_vector') 103 | dst = df[[xrs, '{}_index'.format(xrs)]].distinct().collect() 104 | 105 | for row in dst: 106 | param_crosswalk[int(row['{}_index'.format(xrs)]+i)] = row[xrs] 107 | maxind = max(param_crosswalk.keys()) 108 | del param_crosswalk[maxind] #for droplast 109 | i += len(dst) 110 | elif '_index' in x[:-6]: 111 | pass 112 | else: 113 | param_crosswalk[i] = x 114 | i += 1 115 | """ 116 | {0: 'carat', 117 | 1: u'SI1', 118 | 2: u'VS2', 119 | 3: u'SI2', 120 | 4: u'VS1', 121 | 5: u'VVS2', 122 | 6: u'VVS1', 123 | 7: u'IF'} 124 | """ 125 | make_summary = Summarizer(param_crosswalk) 126 | 127 | 128 | if not keep_intermediate: 129 | fcols = [c for c in df.columns if '_index' not in c[-6:] and '_vector' not in c[-7:]] 130 | df = df[fcols] 131 | 132 | if summarizer: 133 | return df, make_summary 134 | else: 135 | return df 136 | 137 | class Summarizer(object): 138 | def __init__(self, param_crosswalk): 139 | self.param_crosswalk = param_crosswalk 140 | self.precision = 4 141 | self.screen_width = 57 142 | self.hsep = '-' 143 | self.vsep = '|' 144 | 145 | def summarize(self, model, show=True, return_str=False): 146 | coefs = list(model.coefficients) 147 | inter = model.intercept 148 | tstat = model.summary.tValues 149 | stder = model.summary.coefficientStandardErrors 150 | pvals = model.summary.pValues 151 | 152 | #if model includes an intercept: 153 | if len(coefs) == len(tstat)-1: 154 | coefs.insert(0, inter) 155 | x = {0:'intercept'} 156 | for k, v in self.param_crosswalk.items(): 157 | x[k+1] = v 158 | else: 159 | x = self.param_crosswalk 160 | 161 | assert(len(coefs) == len(tstat) == len(stder) == len(pvals)) 162 | 163 | p = self.precision 164 | h = self.hsep 165 | v = self.vsep 166 | w = self.screen_width 167 | 168 | coefs = [str(round(e, p)).center(10) for e in coefs] 169 | tstat = [str(round(e, p)).center(10) for e in tstat] 170 | stder = [str(round(e, p)).center(10) for e in stder] 171 | pvals = [str(round(e, p)).center(10) for e in pvals] 172 | 173 | lines = '' 174 | for i in range(len(coefs)): 175 | lines += str(x[i]).rjust(15) + v + coefs[i] + stder[i] + tstat[i] + pvals[i] + '\n' 176 | 177 | labels = ''.rjust(15) + v + 'Coef'.center(10) + 'Std Err'.center(10) + 'T Stat'.center(10) + 'P Val'.center(10) 178 | pad = ''.rjust(15) + v 179 | 180 | output = """{hline}\n{labels}\n{hline}\n{lines}{hline}""".format( 181 | hline=h*w, 182 | labels=labels, 183 | lines=lines) 184 | if 
show: 185 | print(output) 186 | if return_str: 187 | return output 188 | -------------------------------------------------------------------------------- /indep_vars/README.md: -------------------------------------------------------------------------------- 1 | # Configure Independent Variables 2 | pySpark ML analysis tools, such as GLM or OLS, require a particular formatting of the independent variables. The tools they offer for this are powerful and flexible, but require the use of an excessive amount of obscure-looking code to accomplish what most social scientists will be used to achieving in one or two simple lines of Stata or SAS code. 3 | 4 | The standard formatting requires two columns from a dataframe: a **dependent variable column** usually referred to as 'label', and an **independent variable column** usually referred to as 'features'. The dependent variable is simply a column of numerical data; the column for the independent variables must be a *vector*. 5 | 6 | This function, `build_indep_vars` (defined in `build_indep_vars.py`), is meant to help automate most of this task. It takes in as arguments a pySpark dataframe, a list of column names for the independent variables, and an optional list of any independent variables that are categorical. It then handles in the background getting the data in the proper format and expanding the categorical variable columns into multiple columns of dummy variables. When completed it returns the original dataframe with a new column added to it named 'indep_vars' that contains the properly formatted vector, along with (by default) a summarizer object that can print formatted regression results. 7 | 8 | An example: 9 | 10 | `df = spark.read.csv('s3://ui-spark-social-science-public/data/diamonds.csv', inferSchema=True, header=True, sep=',')` 11 | `df, summarizer = build_indep_vars(df, ['carat', 'clarity'], categorical_vars=['clarity'])` 12 | 13 | `glr = GeneralizedLinearRegression(family='gaussian', link='identity', labelCol='price', featuresCol='indep_vars')` 14 | 15 | `model = glr.fit(df)` 16 | `transformed = model.transform(df)` -------------------------------------------------------------------------------- /indep_vars/build_indep_vars.py: -------------------------------------------------------------------------------- 1 | """ 2 | Program written by Jeff Levy (jlevy@urban.org) for the Urban Institute, last revised 8/24/2016. 3 | Note that this is intended as a temporary work-around until pySpark improves its ML package. 4 | 5 | Tested in pySpark 2.0. 6 | 7 | """ 8 | import pyspark.sql.dataframe #needed for the dataframe type check below 9 | def build_indep_vars(df, independent_vars, categorical_vars=None, keep_intermediate=False, summarizer=True): 10 | 11 | """ 12 | Data verification 13 | df : DataFrame 14 | independent_vars : List of column names 15 | categorical_vars : None or list of column names, e.g. ['col1', 'col2'] 16 | """ 17 | assert(type(df) is pyspark.sql.dataframe.DataFrame), 'pyspark_glm: A pySpark dataframe is required as the first argument.' 18 | assert(type(independent_vars) is list), 'pyspark_glm: List of independent variable column names must be the second argument.' 19 | for iv in independent_vars: 20 | assert(type(iv) is str), 'pyspark_glm: Independent variables must be column name strings.' 21 | assert(iv in df.columns), 'pyspark_glm: Independent variable name is not a dataframe column.' 22 | if categorical_vars: 23 | for cv in categorical_vars: 24 | assert(type(cv) is str), 'pyspark_glm: Categorical variables must be column name strings.' 25 | assert(cv in df.columns), 'pyspark_glm: Categorical variable name is not a dataframe column.'
26 | assert(cv in independent_vars), 'pyspark_glm: Categorical variables must be independent variables.' 27 | 28 | """ 29 | Code 30 | """ 31 | from pyspark.ml import Pipeline 32 | from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler 33 | from pyspark.ml.regression import GeneralizedLinearRegression 34 | 35 | if categorical_vars: 36 | string_indexer = [StringIndexer(inputCol=x, 37 | outputCol='{}_index'.format(x)) 38 | for x in categorical_vars] 39 | 40 | encoder = [OneHotEncoder(dropLast=True, 41 | inputCol ='{}_index' .format(x), 42 | outputCol='{}_vector'.format(x)) 43 | for x in categorical_vars] 44 | 45 | independent_vars = ['{}_vector'.format(x) if x in categorical_vars else x for x in independent_vars] 46 | else: 47 | string_indexer, encoder = [], [] 48 | 49 | assembler = VectorAssembler(inputCols=independent_vars, 50 | outputCol='indep_vars') 51 | pipeline = Pipeline(stages=string_indexer+encoder+[assembler]) 52 | model = pipeline.fit(df) 53 | df = model.transform(df) 54 | 55 | #for building the crosswalk between indicies and column names 56 | if summarizer: 57 | param_crosswalk = {} 58 | 59 | i = 0 60 | for x in independent_vars: 61 | if '_vector' in x[-7:]: 62 | xrs = x.rstrip('_vector') 63 | dst = df[[xrs, '{}_index'.format(xrs)]].distinct().collect() 64 | 65 | for row in dst: 66 | param_crosswalk[int(row['{}_index'.format(xrs)]+i)] = row[xrs] 67 | maxind = max(param_crosswalk.keys()) 68 | del param_crosswalk[maxind] #for droplast 69 | i += len(dst) 70 | elif '_index' in x[:-6]: 71 | pass 72 | else: 73 | param_crosswalk[i] = x 74 | i += 1 75 | """ 76 | {0: 'carat', 77 | 1: u'SI1', 78 | 2: u'VS2', 79 | 3: u'SI2', 80 | 4: u'VS1', 81 | 5: u'VVS2', 82 | 6: u'VVS1', 83 | 7: u'IF'} 84 | """ 85 | make_summary = Summarizer(param_crosswalk) 86 | 87 | 88 | if not keep_intermediate: 89 | fcols = [c for c in df.columns if '_index' not in c[-6:] and '_vector' not in c[-7:]] 90 | df = df[fcols] 91 | 92 | if summarizer: 93 | return df, make_summary 94 | else: 95 | return df 96 | 97 | class Summarizer(object): 98 | def __init__(self, param_crosswalk): 99 | self.param_crosswalk = param_crosswalk 100 | self.precision = 4 101 | self.screen_width = 57 102 | self.hsep = '-' 103 | self.vsep = '|' 104 | 105 | def summarize(self, model, show=True, return_str=False): 106 | coefs = list(model.coefficients) 107 | inter = model.intercept 108 | tstat = model.summary.tValues 109 | stder = model.summary.coefficientStandardErrors 110 | pvals = model.summary.pValues 111 | 112 | #if model includes an intercept: 113 | if len(coefs) == len(tstat)-1: 114 | coefs.insert(0, inter) 115 | x = {0:'intercept'} 116 | for k, v in self.param_crosswalk.items(): 117 | x[k+1] = v 118 | else: 119 | x = self.param_crosswalk 120 | 121 | assert(len(coefs) == len(tstat) == len(stder) == len(pvals)) 122 | 123 | p = self.precision 124 | h = self.hsep 125 | v = self.vsep 126 | w = self.screen_width 127 | 128 | coefs = [str(round(e, p)).center(10) for e in coefs] 129 | tstat = [str(round(e, p)).center(10) for e in tstat] 130 | stder = [str(round(e, p)).center(10) for e in stder] 131 | pvals = [str(round(e, p)).center(10) for e in pvals] 132 | 133 | lines = '' 134 | for i in range(len(coefs)): 135 | lines += str(x[i]).rjust(15) + v + coefs[i] + stder[i] + tstat[i] + pvals[i] + '\n' 136 | 137 | labels = ''.rjust(15) + v + 'Coef'.center(10) + 'Std Err'.center(10) + 'T Stat'.center(10) + 'P Val'.center(10) 138 | pad = ''.rjust(15) + v 139 | 140 | output = """{hline}\n{labels}\n{hline}\n{lines}{hline}""".format( 141 | 
hline=h*w, 142 | labels=labels, 143 | lines=lines) 144 | if show: 145 | print(output) 146 | if return_str: 147 | return output -------------------------------------------------------------------------------- /indep_vars/setup.py: -------------------------------------------------------------------------------- 1 | from distutils.core import setup 2 | setup(name='build_indep_vars.py', 3 | version='1.0', 4 | url='https://github.com/UrbanInstitute/pyspark-tutorials/tree/master/indep_vars', 5 | author='Jeff Levy', 6 | author_email='jlevy@urban.org', 7 | py_modules=['build_indep_vars']) 8 | 9 | #pip install -e git+https://github.com/UrbanInstitute/pyspark-tutorials/tree/master/indep_vars -------------------------------------------------------------------------------- /stata_list.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/UrbanInstitute/pyspark-tutorials/db970f3bc3dec59147f327b066e07f43d61d09da/stata_list.PNG -------------------------------------------------------------------------------- /stata_reg.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/UrbanInstitute/pyspark-tutorials/db970f3bc3dec59147f327b066e07f43d61d09da/stata_reg.PNG --------------------------------------------------------------------------------