├── 2017-01-02 - Learn to Code - Intro to Python for Data Science I - slide deck.pdf ├── README.md └── intro-python-ds-1.ipynb /2017-01-02 - Learn to Code - Intro to Python for Data Science I - slide deck.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/GalvanizeOpenSource/Learn-To-Code-Intro-Python-DS-1/30f93dac4059dc4dfb6dcc2d445329e7c45f6573/2017-01-02 - Learn to Code - Intro to Python for Data Science I - slide deck.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Learn to Code Workshop: Intro to Python for Data Science I 2 | 3 | Please download/clone this repo, especially the slide deck and Jupyter notebook, and follow the directions within them. 4 | 5 | (This page is always under construction. Check back often as we create IPython Notebooks and other supplemental material.) 6 | 7 | -The Galvanize Community 8 | 9 | ## About the Author 10 | 11 | Lee Ngo is an evangelist for Galvanize based in Seattle. Previously he worked for UP Global (now Techstars) and founded his own ed-tech company in Pittsburgh, PA. Lee believes in learning by doing, engaging and sharing, and he teaches code through a combination of visual communication, teamwork, and project-oriented learning. 12 | 13 | You can email him at lee.ngo@galvanize.com for any further questions. 14 | -------------------------------------------------------------------------------- /intro-python-ds-1.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# Welcome to Introduction to Python for Data Science\n", 10 | "\n", 11 | "A quick overview of some basic concepts in Python, applied to the field of data science. 
This course is designed by the Galvanize community. Feel free to download, fork, and work with it as you please!\n", 12 | "\n", 13 | "\n", 14 | "## In this course you will learn how to:\n", 15 | "\n", 16 | "* Set up your computer\n", 17 | "* Set up IPython/Jupyter\n", 18 | "* Run basic Python commands\n", 19 | "* Explore NumPy, Pandas, and Matplotlib\n", 20 | "* Sandbox time!\n", 21 | "\n", 22 | "## Gut check, Galvanize-style!\n", 23 | "\n", 24 | "* This course is for beginners\n", 25 | "* Feel free to move ahead\n", 26 | "* Help others when you can\n", 27 | "* Be patient and nice\n", 28 | "* We’ll all get through it!\n", 29 | "\n", 30 | "## What IS Python?\n", 31 | "\n", 32 | "Python was created by Guido van Rossum in the 1990s, and yes, it is totally named after Monty Python.\n", 33 | "Today, Python is an enormous open-source project with a huge worldwide community.\n", 34 | "\n", 35 | "## Why do we learn Python for data science?\n", 36 | "\n", 37 | "Python is an easy-to-learn, general-purpose, scalable language with popular libraries such as:\n", 38 | "* SciPy (Math, Science, Engineering)\n", 39 | "* StatsModels (Statistics)\n", 40 | "* Pandas (Data Analysis)\n", 41 | "* scikit-learn (Machine Learning)\n", 42 | "* ggplot, Matplotlib, Plotly (Graphics)\n", 43 | "\n", 44 | "## Let's set up your computer!\n", 45 | "\n", 46 | "### Step 1: Install Anaconda.\n", 47 | "\n", 48 | "Anaconda from Continuum Analytics provides virtually everything you need to get started in data science. Go to [continuum.io/downloads](https://continuum.io/downloads) and follow the instructions on the website - they vary by platform. \n", 49 | "\n", 50 | "### Step 2: Install Git (optional)\n", 51 | "\n", 52 | "You'll need a command-line prompt to launch Anaconda's Jupyter Notebooks for this lesson. We recommend Git in case you're interested in version control or cloud deployment in the future. 
Go to [git-scm.com/downloads](https://git-scm.com/downloads) and follow the directions there.\n", 53 | "\n", 54 | "### Step 3: Activate your Jupyter Notebook\n", 55 | "\n", 56 | "We're going to use Jupyter, formerly known as IPython Notebook - IPython is short for “Interactive Python.” It lets you write and run code interactively in your browser.\n", 57 | "\n", 58 | "1. Open up your Git terminal\n", 59 | "2. Navigate to your working directory\n", 60 | "3. Type “jupyter notebook” into the prompt\n", 61 | "4. Some computation should happen...\n", 62 | "5. Go to your browser and type in this URL: http://localhost:8888/ (this may open automatically)\n", 63 | "\n", 64 | "If you see a new browser tab pop up, create a \"New\" notebook in the top right corner. Let's get started!\n", 65 | "\n", 66 | "## Basic Python Commands\n", 67 | "\n", 68 | "We're going to use this notebook to engage more actively with Python! For ease, I've created all the commands for you, but I recommend that you practice typing your own as well to get the \"feel\" for how a data scientist operates. *Learn by doing!*" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "### Data Types\n", 76 | "\n", 77 | "Like all programming languages, Python lets us manipulate different types of data in different ways. The first step to understanding the language is to know what kinds of data it handles.\n", 78 | "\n", 79 | "There are generally **five** basic types of data:\n", 80 | "\n", 81 | "* int - integer value\n", 82 | "* float - decimal value\n", 83 | "* bool - True/False\n", 84 | "* complex - complex (imaginary) number\n", 85 | "* NoneType - null value\n", 86 | "\n", 87 | "## LET'S CODE!\n", 88 | "\n", 89 | "Run the commands in the notebook below by clicking on them and pressing Shift+Enter. 
*What are the outputs?*" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": null, 95 | "metadata": { 96 | "collapsed": true 97 | }, 98 | "outputs": [], 99 | "source": [ 100 | "type(3.1415)" 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": null, 106 | "metadata": { 107 | "collapsed": true 108 | }, 109 | "outputs": [], 110 | "source": [ 111 | "type(10)" 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "metadata": { 118 | "collapsed": true 119 | }, 120 | "outputs": [], 121 | "source": [ 122 | "type(4+8j)" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": null, 128 | "metadata": { 129 | "collapsed": true 130 | }, 131 | "outputs": [], 132 | "source": [ 133 | "type(False)" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": null, 139 | "metadata": { 140 | "collapsed": false, 141 | "scrolled": true 142 | }, 143 | "outputs": [], 144 | "source": [ 145 | "type('')" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "### 'Arrays' in Python\n", 153 | "\n", 154 | "Storing and organizing data in Python is immensely helpful, especially when you're attempting data science. Below are the five types of 'iterable' data available in Python (I use that term loosely for good reason).\n", 155 | "\n", 156 | "* str - immutable string of characters, defined with quotes = 'abc'\n", 157 | "* list - mutable collection of elements, defined with brackets = ['a', 'b']\n", 158 | "* tuple - immutable list, defined with parentheses = ('a', 'b')\n", 159 | "* dict - unordered key-value pairs; keys are unique and immutable; defined with braces = {'a': 1, 'b': 2} \n", 160 | "* set - unordered collection of unique elements, defined with braces = {'a', 'b'}\n", 161 | "\n", 162 | "## LET'S CODE\n", 163 | "\n", 164 | "Using the cell below, create a list called `doc` that contains four elements of varying data types." 
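Before you build `doc`, here is a minimal side-by-side sketch of the five iterable types listed above; the variable names are just for illustration:

```python
# One literal of each 'iterable' type described above
s = 'abc'             # str: immutable text
lst = ['a', 'b']      # list: mutable collection
tup = ('a', 'b')      # tuple: immutable list
d = {'a': 1, 'b': 2}  # dict: key-value pairs
st = {'a', 'b'}       # set: unique elements only

lst.append('c')       # lists can grow after creation
print(lst)            # -> ['a', 'b', 'c']
print(d['b'])         # look up a value by its key -> 2
print(type(s), type(tup), type(st))
```

Try mutating `tup` or `s` the same way and note the error: that is what "immutable" means in practice.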
165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": null, 170 | "metadata": { 171 | "collapsed": true 172 | }, 173 | "outputs": [], 174 | "source": [ 175 | "doc = ['Gigawatts', 88, 'miles per hour', 1.21] # What is the result of 'doc[2]'?" 176 | ] 177 | }, 178 | { 179 | "cell_type": "markdown", 180 | "metadata": {}, 181 | "source": [ 182 | "What is the value of `doc[2]`? Run the cell below." 183 | ] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": null, 188 | "metadata": { 189 | "collapsed": true 190 | }, 191 | "outputs": [], 192 | "source": [ 193 | "doc[2]" 194 | ] 195 | }, 196 | { 197 | "cell_type": "markdown", 198 | "metadata": {}, 199 | "source": [ 200 | "### Control Flows in Python\n", 201 | "\n", 202 | "Conditional statements are also very useful in coding, and in Python the syntax is slightly different.\n", 203 | "\n", 204 | "##### If, else statements" 205 | ] 206 | }, 207 | { 208 | "cell_type": "code", 209 | "execution_count": null, 210 | "metadata": { 211 | "collapsed": true 212 | }, 213 | "outputs": [], 214 | "source": [ 215 | "x, y = False, False\n", 216 | "if x:\n", 217 | " print('Apple')\n", 218 | "elif y:\n", 219 | " print('Orange')\n", 220 | "else:\n", 221 | " print('sandwich')" 222 | ] 223 | }, 224 | { 225 | "cell_type": "markdown", 226 | "metadata": {}, 227 | "source": [ 228 | "*What do you think the output will be?*" 229 | ] 230 | }, 231 | { 232 | "cell_type": "markdown", 233 | "metadata": {}, 234 | "source": [ 235 | "##### While loops\n", 236 | "\n", 237 | "Note the indents - a critical part of Python's syntax. As always, make sure your `while` loops are written so that they eventually terminate, or they could end up running forever.\n", 238 | "\n", 239 | "Try the code below." 
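As a small aside (a sketch, not part of the original lesson): the same three greetings can be produced without `break` by putting the stopping condition directly in the `while` line, which is often easier to reason about:

```python
# Equivalent loop: the condition lives in the while line itself
x = 0
while x < 3:          # runs only while the condition is True
    print('Hello!')
    x += 1            # without this increment, the loop would never end
```

Both styles are common; `while True` plus `break` is handy when the exit test belongs in the middle of the loop body.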
240 | ] 241 | }, 242 | { 243 | "cell_type": "code", 244 | "execution_count": null, 245 | "metadata": { 246 | "collapsed": true 247 | }, 248 | "outputs": [], 249 | "source": [ 250 | "x = 0\n", 251 | "while True:\n", 252 | " print('Hello!')\n", 253 | " x += 1\n", 254 | " if x >= 3:\n", 255 | " break " 256 | ] 257 | }, 258 | { 259 | "cell_type": "markdown", 260 | "metadata": {}, 261 | "source": [ 262 | "What do you think will be printed out from this loop? \n", 263 | "\n", 264 | "##### For Loops" 265 | ] 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": null, 270 | "metadata": { 271 | "collapsed": true 272 | }, 273 | "outputs": [], 274 | "source": [ 275 | "for k in range(4):\n", 276 | " print(k ** 3)" 277 | ] 278 | }, 279 | { 280 | "cell_type": "markdown", 281 | "metadata": {}, 282 | "source": [ 283 | "### Functions - creating ways to use and interact with objects\n", 284 | "\n", 285 | "A statement beginning with `def` defines a function and its parameters. You can call that function afterwards by passing arguments through its parentheses `()`. Check out the basic math functions we have below." 286 | ] 287 | }, 288 | { 289 | "cell_type": "code", 290 | "execution_count": null, 291 | "metadata": { 292 | "collapsed": true 293 | }, 294 | "outputs": [], 295 | "source": [ 296 | "def x_plus_4(x):\n", 297 | " return x + 4\n", 298 | "\n", 299 | "x_plus_4(5)" 300 | ] 301 | }, 302 | { 303 | "cell_type": "code", 304 | "execution_count": null, 305 | "metadata": { 306 | "collapsed": true 307 | }, 308 | "outputs": [], 309 | "source": [ 310 | "def subtract(x,y):\n", 311 | " return x - y\n", 312 | "\n", 313 | "subtract(7,3)" 314 | ] 315 | }, 316 | { 317 | "cell_type": "markdown", 318 | "metadata": {}, 319 | "source": [ 320 | "### Import - bringing in libraries and frameworks to assist!\n", 321 | "\n", 322 | "`import` makes our lives much easier by bringing in functionality from Python's standard library and from third-party packages (many of which ship with Anaconda). 
For example, we can import pi instead of calculating it ourselves." 323 | ] 324 | }, 325 | { 326 | "cell_type": "code", 327 | "execution_count": null, 328 | "metadata": { 329 | "collapsed": true 330 | }, 331 | "outputs": [], 332 | "source": [ 333 | "import math # Typically, we like to do imports at the top of a Python file\n", 334 | "math.pi" 335 | ] 336 | }, 337 | { 338 | "cell_type": "code", 339 | "execution_count": null, 340 | "metadata": { 341 | "collapsed": true 342 | }, 343 | "outputs": [], 344 | "source": [ 345 | "from math import sin\n", 346 | "sin(math.pi/2)" 347 | ] 348 | }, 349 | { 350 | "cell_type": "markdown", 351 | "metadata": {}, 352 | "source": [ 353 | "## LET'S CODE WITH NUMPY\n", 354 | "\n", 355 | "##### What is NumPy?\n", 356 | "\n", 357 | "NumPy is a Python library of mathematical functions that allows us to operate on huge arrays and matrices of data. It's the first step to giving us the ability to do some interesting things with larger data sets.\n", 358 | "\n", 359 | "For now, let's start small and build a basic array." 
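As a quick preview before the cells that follow: beyond picking out a single element, NumPy arrays support slicing, element-wise arithmetic, and built-in reductions. A minimal sketch:

```python
import numpy as np

a = np.array([0, 1, 5, 7, 6])
print(a[3])       # single element -> 7
print(a[1:4])     # slice -> [1 5 7]
print(a * 2)      # element-wise arithmetic -> [ 0  2 10 14 12]
print(a.mean())   # reduction over the whole array -> 3.8
```

Note that `a * 2` doubles every element at once - no loop required. That vectorized style is most of what makes NumPy fast.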
380 | ] 381 | }, 382 | { 383 | "cell_type": "code", 384 | "execution_count": null, 385 | "metadata": { 386 | "collapsed": true 387 | }, 388 | "outputs": [], 389 | "source": [ 390 | "a = np.array([[1,2,3],[4,5,6]])\n", 391 | "a.shape" 392 | ] 393 | }, 394 | { 395 | "cell_type": "code", 396 | "execution_count": null, 397 | "metadata": { 398 | "collapsed": true 399 | }, 400 | "outputs": [], 401 | "source": [ 402 | "a.T # transposing the array\n", 403 | "a.T.shape" 404 | ] 405 | }, 406 | { 407 | "cell_type": "code", 408 | "execution_count": null, 409 | "metadata": { 410 | "collapsed": true 411 | }, 412 | "outputs": [], 413 | "source": [ 414 | "b = np.array([6,7])\n", 415 | "np.dot(a.T,b) # matrix multiplication" 416 | ] 417 | }, 418 | { 419 | "cell_type": "markdown", 420 | "metadata": {}, 421 | "source": [ 422 | "Let's kick it up a notch. Potentially, you can work in the nth dimension!" 423 | ] 424 | }, 425 | { 426 | "cell_type": "code", 427 | "execution_count": null, 428 | "metadata": { 429 | "collapsed": true 430 | }, 431 | "outputs": [], 432 | "source": [ 433 | "aa = np.array(\n", 434 | " [[1,2,3],[4,5,6],[1,2,3],[4,5,6]]\n", 435 | " )\n", 436 | "bb = np.array(\n", 437 | " [[[3],[4],[6]],[[6],[5],[7]]]\n", 438 | " )" 439 | ] 440 | }, 441 | { 442 | "cell_type": "code", 443 | "execution_count": null, 444 | "metadata": { 445 | "collapsed": true 446 | }, 447 | "outputs": [], 448 | "source": [ 449 | "aa.shape, bb.shape # why do we check?" 450 | ] 451 | }, 452 | { 453 | "cell_type": "code", 454 | "execution_count": null, 455 | "metadata": { 456 | "collapsed": true 457 | }, 458 | "outputs": [], 459 | "source": [ 460 | "np.dot(aa,bb)" 461 | ] 462 | }, 463 | { 464 | "cell_type": "markdown", 465 | "metadata": {}, 466 | "source": [ 467 | "## LET'S CODE WITH PANDAS\n", 468 | "\n", 469 | "Pandas is an open-source Python library providing powerful data structures and analysis tools. 
It's commonly used by data scientists and will be helpful for some cool things we'll do here.\n", 470 | "\n", 471 | "First, we'll `import` pandas and give it a short name for ease of use. \n", 472 | "\n", 473 | "Let's create a new dataframe! Here, we're passing a dictionary of Series into Pandas." 474 | ] 475 | }, 476 | { 477 | "cell_type": "code", 478 | "execution_count": null, 479 | "metadata": { 480 | "collapsed": false 481 | }, 482 | "outputs": [], 483 | "source": [ 484 | "import pandas as pd\n", 485 | "import numpy as np # Probably don't need this, but just in case\n", 486 | "dd = {\n", 487 | " '0': pd.Series([1,2], index=['a','b']), \n", 488 | " '1': pd.Series([15,25,35], index=['a','b','c'])\n", 489 | " }\n", 490 | "pd.DataFrame(dd)" 491 | ] 492 | }, 493 | { 494 | "cell_type": "markdown", 495 | "metadata": {}, 496 | "source": [ 497 | "We can also write dataframes directly with the function `DataFrame({})`. See below!" 498 | ] 499 | }, 500 | { 501 | "cell_type": "code", 502 | "execution_count": null, 503 | "metadata": { 504 | "collapsed": true 505 | }, 506 | "outputs": [], 507 | "source": [ 508 | "df = pd.DataFrame({\n", 509 | " 'int_col': [1,2,6,8,-1],\n", 510 | " 'float_col': [0.1,0.2,0.2,10.1,None],\n", 511 | " 'str_col': ['a','b',None,'c','a']})\n", 512 | "df" 513 | ] 514 | }, 515 | { 516 | "cell_type": "markdown", 517 | "metadata": {}, 518 | "source": [ 519 | "Now that you've created a quantitative dataframe, we can do a few statistical analyses. Run the following." 
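Besides the whole-frame summaries in the next cells, you can also compute statistics on a single column. A small self-contained sketch (it re-creates the same `df` so it runs on its own):

```python
import pandas as pd

df = pd.DataFrame({
    'int_col': [1, 2, 6, 8, -1],
    'float_col': [0.1, 0.2, 0.2, 10.1, None],
    'str_col': ['a', 'b', None, 'c', 'a']})

print(df['int_col'].mean())            # mean of one column -> 3.2
print(df['float_col'].isnull().sum())  # count the missing values -> 1
print(df['str_col'].value_counts())    # frequency of each string value
```

Selecting with `df['column_name']` gives you a Series, which has its own statistical methods like `.mean()`, `.max()`, and `.std()`.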
520 | ] 521 | }, 522 | { 523 | "cell_type": "code", 524 | "execution_count": null, 525 | "metadata": { 526 | "collapsed": true 527 | }, 528 | "outputs": [], 529 | "source": [ 530 | "df.describe() # basic stats" 531 | ] 532 | }, 533 | { 534 | "cell_type": "code", 535 | "execution_count": null, 536 | "metadata": { 537 | "collapsed": true 538 | }, 539 | "outputs": [], 540 | "source": [ 541 | "df.corr() # correlation" 542 | ] 543 | }, 544 | { 545 | "cell_type": "code", 546 | "execution_count": null, 547 | "metadata": { 548 | "collapsed": true 549 | }, 550 | "outputs": [], 551 | "source": [ 552 | "df.cov() # covariance" 553 | ] 554 | }, 555 | { 556 | "cell_type": "markdown", 557 | "metadata": {}, 558 | "source": [ 559 | "## LET'S MAKE VISUALIZATIONS\n", 560 | "\n", 561 | "The best way to convey your data is by showing it in meaningful ways. To do so, we're going to use **Matplotlib**, another popular Python library for creating visualizations.\n", 562 | "\n", 563 | "First, let's import the library and the command for showing these visualizations inline." 564 | ] 565 | }, 566 | { 567 | "cell_type": "code", 568 | "execution_count": null, 569 | "metadata": { 570 | "collapsed": true 571 | }, 572 | "outputs": [], 573 | "source": [ 574 | "%matplotlib inline \n", 575 | "import matplotlib.pyplot as plt" 576 | ] 577 | }, 578 | { 579 | "cell_type": "markdown", 580 | "metadata": {}, 581 | "source": [ 582 | "What might this information look like? Let's use our previous libraries to create some random data." 
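One aside before generating the data: `np.random.randn` draws fresh values on every run, so your plots will differ from your neighbor's. If you want reproducible "random" data, you can set a seed first. A minimal sketch:

```python
import numpy as np

np.random.seed(0)            # fix the random state
a = np.random.randn(100, 2)  # 100 rows, 2 columns of standard-normal draws
np.random.seed(0)            # reset to the same state...
b = np.random.randn(100, 2)  # ...and the draws repeat exactly
print((a == b).all())        # same seed -> identical arrays: True
```

Seeding is optional for this lesson, but it is a common habit when you need results others can re-run.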
583 | ] 584 | }, 585 | { 586 | "cell_type": "code", 587 | "execution_count": null, 588 | "metadata": { 589 | "collapsed": true 590 | }, 591 | "outputs": [], 592 | "source": [ 593 | "plot_df = pd.DataFrame(\n", 594 | " np.random.randn(100,2), columns=['x','y']\n", 595 | " )\n", 596 | "plot_df['y'] += plot_df['x']\n", 597 | "plot_df.head()" 598 | ] 599 | }, 600 | { 601 | "cell_type": "markdown", 602 | "metadata": {}, 603 | "source": [ 604 | "We can now create a simple plot of the data. Run the code below. Did it work?" 605 | ] 606 | }, 607 | { 608 | "cell_type": "code", 609 | "execution_count": null, 610 | "metadata": { 611 | "collapsed": true 612 | }, 613 | "outputs": [], 614 | "source": [ 615 | "plot_df.plot()" 616 | ] 617 | }, 618 | { 619 | "cell_type": "markdown", 620 | "metadata": {}, 621 | "source": [ 622 | "Let's create a scatterplot! Run the code below. Do you see any major differences?" 623 | ] 624 | }, 625 | { 626 | "cell_type": "code", 627 | "execution_count": null, 628 | "metadata": { 629 | "collapsed": true 630 | }, 631 | "outputs": [], 632 | "source": [ 633 | "plot_df.plot('x','y',kind='scatter')" 634 | ] 635 | }, 636 | { 637 | "cell_type": "markdown", 638 | "metadata": {}, 639 | "source": [ 640 | "Let's create a histogram! Run the code below. How does this visualization compare with the others?" 641 | ] 642 | }, 643 | { 644 | "cell_type": "code", 645 | "execution_count": null, 646 | "metadata": { 647 | "collapsed": true 648 | }, 649 | "outputs": [], 650 | "source": [ 651 | "plot_df.plot(kind='hist',alpha=0.3)" 652 | ] 653 | }, 654 | { 655 | "cell_type": "markdown", 656 | "metadata": { 657 | "collapsed": true 658 | }, 659 | "source": [ 660 | "## SANDBOX TIME!\n", 661 | "\n", 662 | "You're almost done! 
Let's see how you do on your own.\n", 663 | "\n", 664 | "**Try one of the following:**\n", 665 | "* Merge and join your data frames\n", 666 | "* Remove and replace some missing values\n", 667 | "* Rename your data columns\n", 668 | "* Download a dataset and conduct some analysis\n" 669 | ] 670 | }, 671 | { 672 | "cell_type": "markdown", 673 | "metadata": {}, 674 | "source": [ 675 | "# YOU ARE NOW A DATA SCIENTIST! (KINDA)\n", 676 | "\n", 677 | "Don't stop learning! Visit the [Galvanize Open Source](https://github.com/galvanizeopensource) project to learn more through our other coding courses. We update it often with our latest work, so check back for more details.\n", 678 | "\n", 679 | "## Interested in learning with Galvanize? \n", 680 | "\n", 681 | "This course was created by the geniuses who work at Galvanize. Here are some options:\n", 682 | "\n", 683 | "**Data Science Fundamentals: Intro to Python**\n", 684 | "* 6-week part-time workshop\n", 685 | "\n", 686 | "**Data Science Immersive Program**\n", 687 | "* 12-week full-time program\n", 688 | "\n", 689 | "**GalvanizeU**\n", 690 | "* 12-month program in San Francisco\n", 691 | "* Fully accredited by the University of New Haven\n", 692 | "\n", 693 | "Learn more at our website: [http://www.galvanize.com/courses/data-science](http://www.galvanize.com/courses/data-science)\n", 694 | "\n", 695 | "Got feedback for us? Feel free to email us at [info@galvanize.com](mailto:info@galvanize.com)." 696 | ] 697 | }, 698 | { 699 | "cell_type": "markdown", 700 | "metadata": {}, 701 | "source": [ 702 | "## About this Course's Author\n", 703 | "\n", 704 | "Lee Ngo is an evangelist for Galvanize based in Seattle. Previously he worked for UP Global (now Techstars) and founded his own ed-tech company in Pittsburgh, PA. 
Lee believes in learning by doing, engaging and sharing, and he teaches code through a combination of visual communication, teamwork, and project-oriented learning.\n", 705 | "\n", 706 | "You can email him at lee.ngo@galvanize.com for any further questions." 707 | ] 708 | } 709 | ], 710 | "metadata": { 711 | "anaconda-cloud": {}, 712 | "kernelspec": { 713 | "display_name": "Python [Root]", 714 | "language": "python", 715 | "name": "Python [Root]" 716 | }, 717 | "language_info": { 718 | "codemirror_mode": { 719 | "name": "ipython", 720 | "version": 2 721 | }, 722 | "file_extension": ".py", 723 | "mimetype": "text/x-python", 724 | "name": "python", 725 | "nbconvert_exporter": "python", 726 | "pygments_lexer": "ipython2", 727 | "version": "2.7.11" 728 | } 729 | }, 730 | "nbformat": 4, 731 | "nbformat_minor": 0 732 | } 733 | --------------------------------------------------------------------------------