├── .gitignore ├── 100-pandas-puzzles-with-solutions.ipynb ├── 100-pandas-puzzles.ipynb ├── CONTRIBUTORS.md ├── LICENSE ├── README.md ├── img └── candle.jpg └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | .ipynb_checkpoints/* 2 | -------------------------------------------------------------------------------- /100-pandas-puzzles-with-solutions.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 100 pandas puzzles\n", 8 | "\n", 9 | "Inspired by [100 NumPy exercises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power.\n", 10 | "\n", 11 | "Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects. \n", 12 | "\n", 13 | "Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.\n", 14 | "\n", 15 | "The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.\n", 16 | "\n", 17 | "If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...\n", 18 | "\n", 19 | "- [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)\n", 20 | "- [pandas basics](http://pandas.pydata.org/pandas-docs/stable/basics.html)\n", 21 | "- [tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)\n", 22 | "- [cookbook and idioms](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#cookbook)\n", 23 | "\n", 24 | "Enjoy the puzzles!\n", 25 | "\n", 26 | "\\* *the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.*" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": {}, 32 | "source": [ 33 | "## Importing pandas\n", 34 | "\n", 35 | "### Getting started and checking your pandas setup\n", 36 | "\n", 37 | "Difficulty: *easy* \n", 38 | "\n", 39 | "**1.** Import pandas under the alias `pd`." 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": null, 45 | "metadata": {}, 46 | "outputs": [], 47 | "source": [ 48 | "import pandas as pd" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | "metadata": {}, 54 | "source": [ 55 | "**2.** Print the version of pandas that has been imported." 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "metadata": {}, 62 | "outputs": [], 63 | "source": [ 64 | "pd.__version__" 65 | ] 66 | }, 67 | { 68 | "cell_type": "markdown", 69 | "metadata": {}, 70 | "source": [ 71 | "**3.** Print out all the version information of the libraries that are required by the pandas library."
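,
"\n",
"*(A small aside: if you want the same information in machine-readable form, `show_versions` also takes an `as_json` flag — a sketch, check your pandas version's docs:)*\n",
"\n",
"```python\n",
"pd.show_versions(as_json=True)  # JSON output instead of plain text\n",
"```"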
72 | ] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "execution_count": null, 77 | "metadata": {}, 78 | "outputs": [], 79 | "source": [ 80 | "pd.show_versions()" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "## DataFrame basics\n", 88 | "\n", 89 | "### A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames\n", 90 | "\n", 91 | "Difficulty: *easy*\n", 92 | "\n", 93 | "Note: remember to import numpy using:\n", 94 | "```python\n", 95 | "import numpy as np\n", 96 | "```\n", 97 | "\n", 98 | "Consider the following Python dictionary `data` and Python list `labels`:\n", 99 | "\n", 100 | "``` python\n", 101 | "data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],\n", 102 | " 'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],\n", 103 | " 'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],\n", 104 | " 'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}\n", 105 | "\n", 106 | "labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']\n", 107 | "```\n", 108 | "(This is just some meaningless data I made up with the theme of animals and trips to a vet.)\n", 109 | "\n", 110 | "**4.** Create a DataFrame `df` from this dictionary `data` which has the index `labels`." 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 118 | "source": [ 119 | "import numpy as np\n", 120 | "\n", 121 | "data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],\n", 122 | " 'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],\n", 123 | " 'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],\n", 124 | " 'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}\n", 125 | "\n", 126 | "labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']\n", 127 | "\n", 128 | "df = pd.DataFrame(data, index=labels)" 129 | ] 130 | }, 131 | { 132 | "cell_type": "markdown", 133 | "metadata": {}, 134 | "source": [ 135 | "**5.** Display a summary of the basic information about this DataFrame and its data (*hint: there is a single method that can be called on the DataFrame*)." 136 | ] 137 | }, 138 | { 139 | "cell_type": "code", 140 | "execution_count": null, 141 | "metadata": {}, 142 | "outputs": [], 143 | "source": [ 144 | "df.info()\n", 145 | "\n", 146 | "# ...or...\n", 147 | "\n", 148 | "df.describe()" 149 | ] 150 | }, 151 | { 152 | "cell_type": "markdown", 153 | "metadata": {}, 154 | "source": [ 155 | "**6.** Return the first 3 rows of the DataFrame `df`." 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": null, 161 | "metadata": {}, 162 | "outputs": [], 163 | "source": [ 164 | "df.iloc[:3]\n", 165 | "\n", 166 | "# or equivalently\n", 167 | "\n", 168 | "df.head(3)" 169 | ] 170 | }, 171 | { 172 | "cell_type": "markdown", 173 | "metadata": {}, 174 | "source": [ 175 | "**7.** Select just the 'animal' and 'age' columns from the DataFrame `df`." 176 | ] 177 | }, 178 | { 179 | "cell_type": "code", 180 | "execution_count": null, 181 | "metadata": {}, 182 | "outputs": [], 183 | "source": [ 184 | "df.loc[:, ['animal', 'age']]\n", 185 | "\n", 186 | "# or\n", 187 | "\n", 188 | "df[['animal', 'age']]" 189 | ] 190 | }, 191 | { 192 | "cell_type": "markdown", 193 | "metadata": {}, 194 | "source": [ 195 | "**8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`." 
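,
"\n",
"*(The solution below translates the integer positions into labels via `df.index[[3, 4, 8]]`; a purely positional sketch — assuming 'animal' and 'age' are still the first two columns — would be:)*\n",
"\n",
"```python\n",
"df.iloc[[3, 4, 8], [0, 1]]\n",
"```"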
196 | ] 197 | }, 198 | { 199 | "cell_type": "code", 200 | "execution_count": null, 201 | "metadata": {}, 202 | "outputs": [], 203 | "source": [ 204 | "df.loc[df.index[[3, 4, 8]], ['animal', 'age']]" 205 | ] 206 | }, 207 | { 208 | "cell_type": "markdown", 209 | "metadata": {}, 210 | "source": [ 211 | "**9.** Select only the rows where the number of visits is greater than 3." 212 | ] 213 | }, 214 | { 215 | "cell_type": "code", 216 | "execution_count": null, 217 | "metadata": {}, 218 | "outputs": [], 219 | "source": [ 220 | "df[df['visits'] > 3]" 221 | ] 222 | }, 223 | { 224 | "cell_type": "markdown", 225 | "metadata": {}, 226 | "source": [ 227 | "**10.** Select the rows where the age is missing, i.e. it is `NaN`." 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": null, 233 | "metadata": {}, 234 | "outputs": [], 235 | "source": [ 236 | "df[df['age'].isnull()]" 237 | ] 238 | }, 239 | { 240 | "cell_type": "markdown", 241 | "metadata": {}, 242 | "source": [ 243 | "**11.** Select the rows where the animal is a cat *and* the age is less than 3." 244 | ] 245 | }, 246 | { 247 | "cell_type": "code", 248 | "execution_count": null, 249 | "metadata": {}, 250 | "outputs": [], 251 | "source": [ 252 | "df[(df['animal'] == 'cat') & (df['age'] < 3)]" 253 | ] 254 | }, 255 | { 256 | "cell_type": "markdown", 257 | "metadata": {}, 258 | "source": [ 259 | "**12.** Select the rows where the age is between 2 and 4 (inclusive)." 260 | ] 261 | }, 262 | { 263 | "cell_type": "code", 264 | "execution_count": null, 265 | "metadata": {}, 266 | "outputs": [], 267 | "source": [ 268 | "df[df['age'].between(2, 4)]" 269 | ] 270 | }, 271 | { 272 | "cell_type": "markdown", 273 | "metadata": {}, 274 | "source": [ 275 | "**13.** Change the age in row 'f' to 1.5." 276 | ] 277 | }, 278 | { 279 | "cell_type": "code", 280 | "execution_count": null, 281 | "metadata": {}, 282 | "outputs": [], 283 | "source": [ 284 | "df.loc['f', 'age'] = 1.5" 285 | ] 286 | }, 287 | { 288 | "cell_type": "markdown", 289 | "metadata": {}, 290 | "source": [ 291 | "**14.** Calculate the sum of all visits in `df` (i.e. the total number of visits)." 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": null, 297 | "metadata": {}, 298 | "outputs": [], 299 | "source": [ 300 | "df['visits'].sum()" 301 | ] 302 | }, 303 | { 304 | "cell_type": "markdown", 305 | "metadata": {}, 306 | "source": [ 307 | "**15.** Calculate the mean age for each different animal in `df`." 308 | ] 309 | }, 310 | { 311 | "cell_type": "code", 312 | "execution_count": null, 313 | "metadata": {}, 314 | "outputs": [], 315 | "source": [ 316 | "df.groupby('animal')['age'].mean()" 317 | ] 318 | }, 319 | { 320 | "cell_type": "markdown", 321 | "metadata": {}, 322 | "source": [ 323 | "**16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame." 324 | ] 325 | }, 326 | { 327 | "cell_type": "code", 328 | "execution_count": null, 329 | "metadata": {}, 330 | "outputs": [], 331 | "source": [ 332 | "df.loc['k'] = ['dog', 5.5, 2, 'no']  # values in column order: animal, age, visits, priority\n", 333 | "\n", 334 | "# and then deleting the new row...\n", 335 | "\n", 336 | "df = df.drop('k')" 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "metadata": {}, 342 | "source": [ 343 | "**17.** Count the number of each type of animal in `df`."
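,
"\n",
"*(Besides the `value_counts` solution below, a groupby-based sketch gives the same counts, just ordered by animal name rather than by frequency:)*\n",
"\n",
"```python\n",
"df.groupby('animal').size()\n",
"```"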
344 | ] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "execution_count": null, 349 | "metadata": {}, 350 | "outputs": [], 351 | "source": [ 352 | "df['animal'].value_counts()" 353 | ] 354 | }, 355 | { 356 | "cell_type": "markdown", 357 | "metadata": {}, 358 | "source": [ 359 | "**18.** Sort `df` first by the values in the 'age' column in *descending* order, then by the values in the 'visits' column in *ascending* order (so row `i` should be first, and row `d` should be last)." 360 | ] 361 | }, 362 | { 363 | "cell_type": "code", 364 | "execution_count": null, 365 | "metadata": {}, 366 | "outputs": [], 367 | "source": [ 368 | "df.sort_values(by=['age', 'visits'], ascending=[False, True])" 369 | ] 370 | }, 371 | { 372 | "cell_type": "markdown", 373 | "metadata": {}, 374 | "source": [ 375 | "**19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`." 376 | ] 377 | }, 378 | { 379 | "cell_type": "code", 380 | "execution_count": null, 381 | "metadata": {}, 382 | "outputs": [], 383 | "source": [ 384 | "df['priority'] = df['priority'].map({'yes': True, 'no': False})" 385 | ] 386 | }, 387 | { 388 | "cell_type": "markdown", 389 | "metadata": {}, 390 | "source": [ 391 | "**20.** In the 'animal' column, change the 'snake' entries to 'python'." 392 | ] 393 | }, 394 | { 395 | "cell_type": "code", 396 | "execution_count": null, 397 | "metadata": {}, 398 | "outputs": [], 399 | "source": [ 400 | "df['animal'] = df['animal'].replace('snake', 'python')" 401 | ] 402 | }, 403 | { 404 | "cell_type": "markdown", 405 | "metadata": {}, 406 | "source": [ 407 | "**21.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (*hint: use a pivot table*)." 408 | ] 409 | }, 410 | { 411 | "cell_type": "code", 412 | "execution_count": null, 413 | "metadata": {}, 414 | "outputs": [], 415 | "source": [ 416 | "df.pivot_table(index='animal', columns='visits', values='age', aggfunc='mean')" 417 | ] 418 | }, 419 | { 420 | "cell_type": "markdown", 421 | "metadata": {}, 422 | "source": [ 423 | "## DataFrames: beyond the basics\n", 424 | "\n", 425 | "### Slightly trickier: you may need to combine two or more methods to get the right answer\n", 426 | "\n", 427 | "Difficulty: *medium*\n", 428 | "\n", 429 | "The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single \"out of the box\" method." 430 | ] 431 | }, 432 | { 433 | "cell_type": "markdown", 434 | "metadata": {}, 435 | "source": [ 436 | "**22.** You have a DataFrame `df` with a column 'A' of integers. For example:\n", 437 | "```python\n", 438 | "df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})\n", 439 | "```\n", 440 | "\n", 441 | "How do you filter out rows which contain the same integer as the row immediately above?\n", 442 | "\n", 443 | "You should be left with a column containing the following values:\n", 444 | "\n", 445 | "```python\n", 446 | "1, 2, 3, 4, 5, 6, 7\n", 447 | "```" 448 | ] 449 | }, 450 | { 451 | "cell_type": "code", 452 | "execution_count": null, 453 | "metadata": {}, 454 | "outputs": [], 455 | "source": [ 456 | "df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})\n", 457 | "\n", 458 | "df.loc[df['A'].shift() != df['A']]\n", 459 | "\n", 460 | "# Alternatively, we could use drop_duplicates() here. 
Note\n", 461 | "# that this removes *all* duplicates though, so it won't\n", 462 | "# work as desired if A is [1, 1, 2, 2, 1, 1] for example.\n", 463 | "\n", 464 | "df.drop_duplicates(subset='A')" 465 | ] 466 | }, 467 | { 468 | "cell_type": "markdown", 469 | "metadata": {}, 470 | "source": [ 471 | "**23.** Given a DataFrame of random numeric values:\n", 472 | "```python\n", 473 | "df = pd.DataFrame(np.random.random(size=(5, 3))) # this is a 5x3 DataFrame of float values\n", 474 | "```\n", 475 | "\n", 476 | "how do you subtract the row mean from each element in the row?" 477 | ] 478 | }, 479 | { 480 | "cell_type": "code", 481 | "execution_count": null, 482 | "metadata": {}, 483 | "outputs": [], 484 | "source": [ 485 | "df = pd.DataFrame(np.random.random(size=(5, 3)))\n", 486 | "\n", 487 | "df.sub(df.mean(axis=1), axis=0)" 488 | ] 489 | }, 490 | { 491 | "cell_type": "markdown", 492 | "metadata": {}, 493 | "source": [ 494 | "**24.** Suppose you have DataFrame with 10 columns of real numbers, for example:\n", 495 | "\n", 496 | "```python\n", 497 | "df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))\n", 498 | "```\n", 499 | "Which column of numbers has the smallest sum? Return that column's label." 500 | ] 501 | }, 502 | { 503 | "cell_type": "code", 504 | "execution_count": null, 505 | "metadata": {}, 506 | "outputs": [], 507 | "source": [ 508 | "df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))\n", 509 | "\n", 510 | "df.sum().idxmin()" 511 | ] 512 | }, 513 | { 514 | "cell_type": "markdown", 515 | "metadata": {}, 516 | "source": [ 517 | "**25.** How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?" 518 | ] 519 | }, 520 | { 521 | "cell_type": "code", 522 | "execution_count": null, 523 | "metadata": {}, 524 | "outputs": [], 525 | "source": [ 526 | "df = pd.DataFrame(np.random.randint(0, 2, size=(10, 3)))\n", 527 | "\n", 528 | "len(df) - df.duplicated(keep=False).sum()\n", 529 | "\n", 530 | "# or perhaps more simply...\n", 531 | "\n", 532 | "len(df.drop_duplicates(keep=False))" 533 | ] 534 | }, 535 | { 536 | "cell_type": "markdown", 537 | "metadata": {}, 538 | "source": [ 539 | "The next three puzzles are slightly harder.\n", 540 | "\n", 541 | "**26.** In the cell below, you have a DataFrame `df` that consists of 10 columns of floating-point numbers. Exactly 5 entries in each row are NaN values. 
\n", 542 | "\n", 543 | "For each row of the DataFrame, find the *column* which contains the *third* NaN value.\n", 544 | "\n", 545 | "You should return a Series of column labels: `e, c, d, h, d`" 546 | ] 547 | }, 548 | { 549 | "cell_type": "code", 550 | "execution_count": null, 551 | "metadata": {}, 552 | "outputs": [], 553 | "source": [ 554 | "nan = np.nan\n", 555 | "\n", 556 | "data = [[0.04, nan, nan, 0.25, nan, 0.43, 0.71, 0.51, nan, nan],\n", 557 | " [ nan, nan, nan, 0.04, 0.76, nan, nan, 0.67, 0.76, 0.16],\n", 558 | " [ nan, nan, 0.5 , nan, 0.31, 0.4 , nan, nan, 0.24, 0.01],\n", 559 | " [0.49, nan, nan, 0.62, 0.73, 0.26, 0.85, nan, nan, nan],\n", 560 | " [ nan, nan, 0.41, nan, 0.05, nan, 0.61, nan, 0.48, 0.68]]\n", 561 | "\n", 562 | "columns = list('abcdefghij')\n", 563 | "\n", 564 | "df = pd.DataFrame(data, columns=columns)\n", 565 | "\n", 566 | "\n", 567 | "(df.isnull().cumsum(axis=1) == 3).idxmax(axis=1)" 568 | ] 569 | }, 570 | { 571 | "cell_type": "markdown", 572 | "metadata": {}, 573 | "source": [ 574 | "**27.** A DataFrame has a column of groups 'grps' and and column of integer values 'vals': \n", 575 | "\n", 576 | "```python\n", 577 | "df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'), \n", 578 | " 'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})\n", 579 | "```\n", 580 | "For each *group*, find the sum of the three greatest values. You should end up with the answer as follows:\n", 581 | "```\n", 582 | "grps\n", 583 | "a 409\n", 584 | "b 156\n", 585 | "c 345\n", 586 | "```" 587 | ] 588 | }, 589 | { 590 | "cell_type": "code", 591 | "execution_count": null, 592 | "metadata": {}, 593 | "outputs": [], 594 | "source": [ 595 | "df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'), \n", 596 | " 'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})\n", 597 | "\n", 598 | "df.groupby('grps')['vals'].nlargest(3).sum(level=0)" 599 | ] 600 | }, 601 | { 602 | "cell_type": "markdown", 603 | "metadata": {}, 604 | "source": [ 605 | "**28.** The DataFrame `df` constructed below has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive). \n", 606 | "\n", 607 | "For each group of 10 consecutive integers in 'A' (i.e. 
`(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.\n", 608 | "\n", 609 | "The answer should be a Series as follows:\n", 610 | "\n", 611 | "```\n", 612 | "A\n", 613 | "(0, 10] 635\n", 614 | "(10, 20] 360\n", 615 | "(20, 30] 315\n", 616 | "(30, 40] 306\n", 617 | "(40, 50] 750\n", 618 | "(50, 60] 284\n", 619 | "(60, 70] 424\n", 620 | "(70, 80] 526\n", 621 | "(80, 90] 835\n", 622 | "(90, 100] 852\n", 623 | "```" 624 | ] 625 | }, 626 | { 627 | "cell_type": "code", 628 | "execution_count": null, 629 | "metadata": {}, 630 | "outputs": [], 631 | "source": [ 632 | "df = pd.DataFrame(np.random.RandomState(8765).randint(1, 101, size=(100, 2)), columns = [\"A\", \"B\"])\n", 633 | "\n", 634 | "df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()" 635 | ] 636 | }, 637 | { 638 | "cell_type": "markdown", 639 | "metadata": {}, 640 | "source": [ 641 | "## DataFrames: harder problems \n", 642 | "\n", 643 | "### These might require a bit of thinking outside the box...\n", 644 | "\n", 645 | "...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit `for` loops).\n", 646 | "\n", 647 | "Difficulty: *hard*" 648 | ] 649 | }, 650 | { 651 | "cell_type": "markdown", 652 | "metadata": {}, 653 | "source": [ 654 | "**29.** Consider a DataFrame `df` where there is an integer column 'X':\n", 655 | "```python\n", 656 | "df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})\n", 657 | "```\n", 658 | "For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer). These values should therefore be \n", 659 | "\n", 660 | "```\n", 661 | "[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]\n", 662 | "```\n", 663 | "\n", 664 | "Make this a new column 'Y'." 665 | ] 666 | }, 667 | { 668 | "cell_type": "code", 669 | "execution_count": null, 670 | "metadata": {}, 671 | "outputs": [], 672 | "source": [ 673 | "df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})\n", 674 | "\n", 675 | "izero = np.r_[-1, (df == 0).values.nonzero()[0]] # indices of zeros\n", 676 | "idx = np.arange(len(df))\n", 678 | "df['Y'] = idx - izero[np.searchsorted(izero - 1, idx) - 1]\n", 679 | "\n", 680 | "# http://stackoverflow.com/questions/30730981/how-to-count-distance-to-the-previous-zero-in-pandas-series/\n", 681 | "# credit: Behzad Nouri" 682 | ] 683 | }, 684 | { 685 | "cell_type": "markdown", 686 | "metadata": {}, 687 | "source": [ 688 | "Here's an alternative approach based on a [cookbook recipe](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#grouping):" 689 | ] 690 | }, 691 | { 692 | "cell_type": "code", 693 | "execution_count": null, 694 | "metadata": {}, 695 | "outputs": [], 696 | "source": [ 697 | "df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})\n", 698 | "\n", 699 | "x = (df['X'] != 0).cumsum()\n", 700 | "y = x != x.shift()\n", 701 | "df['Y'] = y.groupby((y != y.shift()).cumsum()).cumsum()" 702 | ] 703 | }, 704 | { 705 | "cell_type": "markdown", 706 | "metadata": {}, 707 | "source": [ 708 | "And another approach using a groupby operation:" 709 | ] 710 | }, 711 | { 712 | "cell_type": "code", 713 | "execution_count": null, 714 | "metadata": {}, 715 | "outputs": [], 716 | "source": [ 717 | "df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})\n", 718 | "\n", 719 | "df['Y'] = df.groupby((df['X'] == 0).cumsum()).cumcount()\n", 720 | "\n", 721 | "# We're off by one before we reach the first zero.\n", 722 | "first_zero_idx = (df['X'] == 0).idxmax()\n", 723 | 
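"# (note: the positional .iloc below works with first_zero_idx because df has the\n",
"# default RangeIndex, so the label returned by idxmax equals its position)\n",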
"df['Y'].iloc[0:first_zero_idx] += 1" 724 | ] 725 | }, 726 | { 727 | "cell_type": "markdown", 728 | "metadata": {}, 729 | "source": [ 730 | "**30.** Consider the DataFrame constructed below which contains rows and columns of numerical data. \n", 731 | "\n", 732 | "Create a list of the column-row index locations of the 3 largest values in this DataFrame. In this case, the answer should be:\n", 733 | "```\n", 734 | "[(5, 7), (6, 4), (2, 5)]\n", 735 | "```" 736 | ] 737 | }, 738 | { 739 | "cell_type": "code", 740 | "execution_count": null, 741 | "metadata": {}, 742 | "outputs": [], 743 | "source": [ 744 | "df = pd.DataFrame(np.random.RandomState(30).randint(1, 101, size=(8, 8)))\n", 745 | "\n", 746 | "df.unstack().sort_values()[-3:].index.tolist()\n", 747 | "\n", 748 | "# http://stackoverflow.com/questions/14941261/index-and-column-for-the-max-value-in-pandas-dataframe/\n", 749 | "# credit: DSM" 750 | ] 751 | }, 752 | { 753 | "cell_type": "markdown", 754 | "metadata": {}, 755 | "source": [ 756 | "**31.** You are given the DataFrame below with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals'.\n", 757 | "\n", 758 | "```python\n", 759 | "df = pd.DataFrame({\"vals\": np.random.RandomState(31).randint(-30, 30, size=15), \n", 760 | " \"grps\": np.random.RandomState(31).choice([\"A\", \"B\"], 15)})\n", 761 | "```\n", 762 | "\n", 763 | "Create a new column 'patched_values' which contains the same values as the 'vals' any negative values in 'vals' with the group mean:\n", 764 | "\n", 765 | "```\n", 766 | " vals grps patched_vals\n", 767 | "0 -12 A 13.6\n", 768 | "1 -7 B 28.0\n", 769 | "2 -14 A 13.6\n", 770 | "3 4 A 4.0\n", 771 | "4 -7 A 13.6\n", 772 | "5 28 B 28.0\n", 773 | "6 -2 A 13.6\n", 774 | "7 -1 A 13.6\n", 775 | "8 8 A 8.0\n", 776 | "9 -2 B 28.0\n", 777 | "10 28 A 28.0\n", 778 | "11 12 A 12.0\n", 779 | "12 16 A 16.0\n", 780 | "13 -24 A 13.6\n", 781 | "14 -12 A 13.6\n", 782 | "```" 783 | ] 784 | }, 785 | { 786 | "cell_type": "code", 787 | "execution_count": null, 788 | "metadata": {}, 789 | "outputs": [], 790 | "source": [ 791 | "df = pd.DataFrame({\"vals\": np.random.RandomState(31).randint(-30, 30, size=15), \n", 792 | " \"grps\": np.random.RandomState(31).choice([\"A\", \"B\"], 15)})\n", 793 | "\n", 794 | "def replace(group):\n", 795 | " mask = group<0\n", 796 | " group[mask] = group[~mask].mean()\n", 797 | " return group\n", 798 | "\n", 799 | "df.groupby(['grps'])['vals'].transform(replace)\n", 800 | "\n", 801 | "# http://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means/\n", 802 | "# credit: unutbu" 803 | ] 804 | }, 805 | { 806 | "cell_type": "markdown", 807 | "metadata": {}, 808 | "source": [ 809 | "**32.** Implement a rolling mean over groups with window size 3, which ignores NaN value. 
For example, consider the following DataFrame:\n", 810 | "\n", 811 | "```python\n", 812 | ">>> df = pd.DataFrame({'group': list('aabbabbbabab'),\n", 813 | " 'value': [1, 2, 3, np.nan, 2, 3, np.nan, 1, 7, 3, np.nan, 8]})\n", 814 | ">>> df\n", 815 | " group value\n", 816 | "0 a 1.0\n", 817 | "1 a 2.0\n", 818 | "2 b 3.0\n", 819 | "3 b NaN\n", 820 | "4 a 2.0\n", 821 | "5 b 3.0\n", 822 | "6 b NaN\n", 823 | "7 b 1.0\n", 824 | "8 a 7.0\n", 825 | "9 b 3.0\n", 826 | "10 a NaN\n", 827 | "11 b 8.0\n", 828 | "```\n", 829 | "The goal is to compute the Series:\n", 830 | "\n", 831 | "```\n", 832 | "0 1.000000\n", 833 | "1 1.500000\n", 834 | "2 3.000000\n", 835 | "3 3.000000\n", 836 | "4 1.666667\n", 837 | "5 3.000000\n", 838 | "6 3.000000\n", 839 | "7 2.000000\n", 840 | "8 3.666667\n", 841 | "9 2.000000\n", 842 | "10 4.500000\n", 843 | "11 4.000000\n", 844 | "```\n", 845 | "E.g. the first window of size three for group 'b' has values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN, the value in the new column at this row index should be 3.0 (just the two non-NaN values are used to compute the mean: (3+3)/2)." 846 | ] 847 | }, 848 | { 849 | "cell_type": "code", 850 | "execution_count": null, 851 | "metadata": {}, 852 | "outputs": [], 853 | "source": [ 854 | "df = pd.DataFrame({'group': list('aabbabbbabab'),\n", 855 | " 'value': [1, 2, 3, np.nan, 2, 3, np.nan, 1, 7, 3, np.nan, 8]})\n", 856 | "\n", 857 | "g1 = df.groupby(['group'])['value'] # group values \n", 858 | "g2 = df.fillna(0).groupby(['group'])['value'] # fillna, then group values\n", 859 | "\n", 860 | "s = g2.rolling(3, min_periods=1).sum() / g1.rolling(3, min_periods=1).count() # compute means\n", 861 | "\n", 862 | "s.reset_index(level=0, drop=True).sort_index() # drop/sort index\n", 863 | "\n", 864 | "# http://stackoverflow.com/questions/36988123/pandas-groupby-and-rolling-apply-ignoring-nans/" 865 | ] 866 | }, 867 | { 868 | "cell_type": "markdown", 869 | "metadata": {}, 870 | "source": [ 871 | "## Series and DatetimeIndex\n", 872 | "\n", 873 | "### Exercises for creating and manipulating Series with datetime data\n", 874 | "\n", 875 | "Difficulty: *easy/medium*\n", 876 | "\n", 877 | "pandas is fantastic for working with dates and times. These puzzles explore some of this functionality.\n" 878 | ] 879 | }, 880 | { 881 | "cell_type": "markdown", 882 | "metadata": {}, 883 | "source": [ 884 | "**33.** Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers. Let's call this Series `s`." 885 | ] 886 | }, 887 | { 888 | "cell_type": "code", 889 | "execution_count": null, 890 | "metadata": {}, 891 | "outputs": [], 892 | "source": [ 893 | "dti = pd.date_range(start='2015-01-01', end='2015-12-31', freq='B') \n", 894 | "s = pd.Series(np.random.rand(len(dti)), index=dti)\n", 895 | "s" 896 | ] 897 | }, 898 | { 899 | "cell_type": "markdown", 900 | "metadata": {}, 901 | "source": [ 902 | "**34.** Find the sum of the values in `s` for every Wednesday." 903 | ] 904 | }, 905 | { 906 | "cell_type": "code", 907 | "execution_count": null, 908 | "metadata": {}, 909 | "outputs": [], 910 | "source": [ 911 | "s[s.index.weekday == 2].sum() " 912 | ] 913 | }, 914 | { 915 | "cell_type": "markdown", 916 | "metadata": {}, 917 | "source": [ 918 | "**35.** For each calendar month in `s`, find the mean of values."
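,
"\n",
"*(One hedge on the solution below: very recent pandas versions rename the month-end frequency alias from `'M'` to `'ME'`. If you see a deprecation warning, this sketch is the equivalent:)*\n",
"\n",
"```python\n",
"s.resample('ME').mean()\n",
"```"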
919 | ] 920 | }, 921 | { 922 | "cell_type": "code", 923 | "execution_count": null, 924 | "metadata": {}, 925 | "outputs": [], 926 | "source": [ 927 | "s.resample('M').mean()" 928 | ] 929 | }, 930 | { 931 | "cell_type": "markdown", 932 | "metadata": {}, 933 | "source": [ 934 | "**36.** For each group of four consecutive calendar months in `s`, find the date on which the highest value occurred." 935 | ] 936 | }, 937 | { 938 | "cell_type": "code", 939 | "execution_count": null, 940 | "metadata": {}, 941 | "outputs": [], 942 | "source": [ 943 | "s.groupby(pd.Grouper(freq='4M')).idxmax()" 944 | ] 945 | }, 946 | { 947 | "cell_type": "markdown", 948 | "metadata": {}, 949 | "source": [ 950 | "**37.** Create a DatetimeIndex consisting of the third Thursday in each month for the years 2015 and 2016." 951 | ] 952 | }, 953 | { 954 | "cell_type": "code", 955 | "execution_count": null, 956 | "metadata": {}, 957 | "outputs": [], 958 | "source": [ 959 | "pd.date_range('2015-01-01', '2016-12-31', freq='WOM-3THU')" 960 | ] 961 | }, 962 | { 963 | "cell_type": "markdown", 964 | "metadata": {}, 965 | "source": [ 966 | "## Cleaning Data\n", 967 | "\n", 968 | "### Making a DataFrame easier to work with\n", 969 | "\n", 970 | "Difficulty: *easy/medium*\n", 971 | "\n", 972 | "It happens all the time: someone gives you data containing malformed strings, Python lists and missing data. How do you tidy it up so you can get on with the analysis?\n", 973 | "\n", 974 | "Take this monstrosity as the DataFrame to use in the following puzzles:\n", 975 | "\n", 976 | "```python\n", 977 | "df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm', \n", 978 | " 'Budapest_PaRis', 'Brussels_londOn'],\n", 979 | " 'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],\n", 980 | " 'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],\n", 981 | " 'Airline': ['KLM(!)', ' (12)', '(British Airways. )', \n", 982 | " '12. Air France', '\"Swiss Air\"']})\n", 983 | "```\n", 984 | "\n", 985 | "Formatted, it looks like this:\n", 986 | "\n", 987 | "```\n", 988 | " From_To FlightNumber RecentDelays Airline\n", 989 | "0 LoNDon_paris 10045.0 [23, 47] KLM(!)\n", 990 | "1 MAdrid_miLAN NaN [] (12)\n", 991 | "2 londON_StockhOlm 10065.0 [24, 43, 87] (British Airways. )\n", 992 | "3 Budapest_PaRis NaN [13] 12. Air France\n", 993 | "4 Brussels_londOn 10085.0 [67, 32] \"Swiss Air\"\n", 994 | "```\n", 995 | "\n", 996 | "(It's some flight data I made up; it's not meant to be accurate in any way.)\n" 997 | ] 998 | }, 999 | { 1000 | "cell_type": "markdown", 1001 | "metadata": {}, 1002 | "source": [ 1003 | "**38.** Some values in the **FlightNumber** column are missing (they are `NaN`). These numbers are meant to increase by 10 with each row so 10055 and 10075 need to be put in place. Modify `df` to fill in these missing numbers and make the column an integer column (instead of a float column)." 1004 | ] 1005 | }, 1006 | { 1007 | "cell_type": "code", 1008 | "execution_count": null, 1009 | "metadata": {}, 1010 | "outputs": [], 1011 | "source": [ 1012 | "df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm', \n", 1013 | " 'Budapest_PaRis', 'Brussels_londOn'],\n", 1014 | " 'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],\n", 1015 | " 'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],\n", 1016 | " 'Airline': ['KLM(!)', ' (12)', '(British Airways. )', \n", 1017 | " '12. 
Air France', '\"Swiss Air\"']})\n", 1018 | "\n", 1019 | "df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)\n", 1020 | "df" 1021 | ] 1022 | }, 1023 | { 1024 | "cell_type": "markdown", 1025 | "metadata": {}, 1026 | "source": [ 1027 | "**39.** The **From\\_To** column would be better as two separate columns! Split each string on the underscore delimiter `_` to give a new temporary DataFrame called 'temp' with the correct values. Assign the correct column names 'From' and 'To' to this temporary DataFrame. " 1028 | ] 1029 | }, 1030 | { 1031 | "cell_type": "code", 1032 | "execution_count": null, 1033 | "metadata": {}, 1034 | "outputs": [], 1035 | "source": [ 1036 | "temp = df.From_To.str.split('_', expand=True)\n", 1037 | "temp.columns = ['From', 'To']\n", 1038 | "temp" 1039 | ] 1040 | }, 1041 | { 1042 | "cell_type": "markdown", 1043 | "metadata": {}, 1044 | "source": [ 1045 | "**40.** Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame 'temp'. Standardise the strings so that only the first letter is uppercase (e.g. \"londON\" should become \"London\".)" 1046 | ] 1047 | }, 1048 | { 1049 | "cell_type": "code", 1050 | "execution_count": null, 1051 | "metadata": {}, 1052 | "outputs": [], 1053 | "source": [ 1054 | "temp['From'] = temp['From'].str.capitalize()\n", 1055 | "temp['To'] = temp['To'].str.capitalize()\n", 1056 | "temp" 1057 | ] 1058 | }, 1059 | { 1060 | "cell_type": "markdown", 1061 | "metadata": {}, 1062 | "source": [ 1063 | "**41.** Delete the From_To column from **41.** Delete the **From_To** column from `df` and attach the temporary DataFrame 'temp' from the previous questions.`df` and attach the temporary DataFrame from the previous questions." 1064 | ] 1065 | }, 1066 | { 1067 | "cell_type": "code", 1068 | "execution_count": null, 1069 | "metadata": {}, 1070 | "outputs": [], 1071 | "source": [ 1072 | "df = df.drop('From_To', axis=1)\n", 1073 | "df = df.join(temp)\n", 1074 | "df" 1075 | ] 1076 | }, 1077 | { 1078 | "cell_type": "markdown", 1079 | "metadata": {}, 1080 | "source": [ 1081 | "**42**. In the **Airline** column, you can see some extra puctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. `'(British Airways. )'` should become `'British Airways'`." 1082 | ] 1083 | }, 1084 | { 1085 | "cell_type": "code", 1086 | "execution_count": null, 1087 | "metadata": {}, 1088 | "outputs": [], 1089 | "source": [ 1090 | "df['Airline'] = df['Airline'].str.extract('([a-zA-Z\\s]+)', expand=False).str.strip()\n", 1091 | "# note: using .strip() gets rid of any leading/trailing spaces\n", 1092 | "df" 1093 | ] 1094 | }, 1095 | { 1096 | "cell_type": "markdown", 1097 | "metadata": {}, 1098 | "source": [ 1099 | "**43**. In the **RecentDelays** column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.\n", 1100 | "\n", 1101 | "Expand the Series of lists into a new DataFrame named 'delays', rename the columns 'delay_1', 'delay_2', etc. and replace the unwanted RecentDelays column in `df` with 'delays'." 
1102 | ] 1103 | }, 1104 | { 1105 | "cell_type": "code", 1106 | "execution_count": null, 1107 | "metadata": {}, 1108 | "outputs": [], 1109 | "source": [ 1110 | "# there are several ways to do this, but the following approach is possibly the simplest\n", 1111 | "\n", 1112 | "delays = df['RecentDelays'].apply(pd.Series)\n", 1113 | "\n", 1114 | "delays.columns = ['delay_{}'.format(n) for n in range(1, len(delays.columns)+1)]\n", 1115 | "\n", 1116 | "df = df.drop('RecentDelays', axis=1).join(delays)\n", 1117 | "\n", 1118 | "df" 1119 | ] 1120 | }, 1121 | { 1122 | "cell_type": "markdown", 1123 | "metadata": {}, 1124 | "source": [ 1125 | "The DataFrame should look much better now:\n", 1126 | "\n", 1127 | "```\n", 1128 | " FlightNumber Airline From To delay_1 delay_2 delay_3\n", 1129 | "0 10045 KLM London Paris 23.0 47.0 NaN\n", 1130 | "1 10055 Air France Madrid Milan NaN NaN NaN\n", 1131 | "2 10065 British Airways London Stockholm 24.0 43.0 87.0\n", 1132 | "3 10075 Air France Budapest Paris 13.0 NaN NaN\n", 1133 | "4 10085 Swiss Air Brussels London 67.0 32.0 NaN\n", 1134 | "```" 1135 | ] 1136 | }, 1137 | { 1138 | "cell_type": "markdown", 1139 | "metadata": { 1140 | "collapsed": true 1141 | }, 1142 | "source": [ 1143 | "## Using MultiIndexes\n", 1144 | "\n", 1145 | "### Go beyond flat DataFrames with additional index levels\n", 1146 | "\n", 1147 | "Difficulty: *medium*\n", 1148 | "\n", 1149 | "Previous exercises have seen us analysing data from DataFrames equipped with a single index level. However, pandas also gives you the possibility of indexing your data using *multiple* levels. This is very much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by using a MultiIndex with 2 levels we gain much of the same functionality as a 2D DataFrame.\n", 1150 | "\n", 1151 | "The set of puzzles below explores how you might use multiple index levels to enhance data analysis.\n", 1152 | "\n", 1153 | "To warm up, we'll make a Series with two index levels. " 1154 | ] 1155 | }, 1156 | { 1157 | "cell_type": "markdown", 1158 | "metadata": {}, 1159 | "source": [ 1160 | "**44**. Given the lists `letters = ['A', 'B', 'C']` and `numbers = list(range(10))`, construct a MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call this Series `s`." 1161 | ] 1162 | }, 1163 | { 1164 | "cell_type": "code", 1165 | "execution_count": null, 1166 | "metadata": {}, 1167 | "outputs": [], 1168 | "source": [ 1169 | "letters = ['A', 'B', 'C']\n", 1170 | "numbers = list(range(10))\n", 1171 | "\n", 1172 | "mi = pd.MultiIndex.from_product([letters, numbers])\n", 1173 | "s = pd.Series(np.random.rand(30), index=mi)\n", 1174 | "s" 1175 | ] 1176 | }, 1177 | { 1178 | "cell_type": "markdown", 1179 | "metadata": {}, 1180 | "source": [ 1181 | "**45.** Check the index of `s` is lexicographically sorted (this is a necessary property for indexing to work correctly with a MultiIndex)." 1182 | ] 1183 | }, 1184 | { 1185 | "cell_type": "code", 1186 | "execution_count": null, 1187 | "metadata": {}, 1188 | "outputs": [], 1189 | "source": [ 1190 | "s.index.is_lexsorted()\n", 1191 | "\n", 1192 | "# or more verbosely...\n", 1193 | "s.index.lexsort_depth == s.index.nlevels\n", "\n", "# (note: is_lexsorted and lexsort_depth are deprecated in newer pandas versions;\n", "# s.index.is_monotonic_increasing is the usual modern check)" 1194 | ] 1195 | }, 1196 | { 1197 | "cell_type": "markdown", 1198 | "metadata": {}, 1199 | "source": [ 1200 | "**46**. Select the labels `1`, `3` and `6` from the second level of the MultiIndexed Series."
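,
"\n",
"*(Equivalently, and more explicit about which level is which, the solution below can be spelled with an `IndexSlice`:)*\n",
"\n",
"```python\n",
"s.loc[pd.IndexSlice[:, [1, 3, 6]]]\n",
"```"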
1201 | ] 1202 | }, 1203 | { 1204 | "cell_type": "code", 1205 | "execution_count": null, 1206 | "metadata": {}, 1207 | "outputs": [], 1208 | "source": [ 1209 | "s.loc[:, [1, 3, 6]]" 1210 | ] 1211 | }, 1212 | { 1213 | "cell_type": "markdown", 1214 | "metadata": {}, 1215 | "source": [ 1216 | "**47**. Slice the Series `s`; slice up to label 'B' for the first level and from label 5 onwards for the second level." 1217 | ] 1218 | }, 1219 | { 1220 | "cell_type": "code", 1221 | "execution_count": null, 1222 | "metadata": {}, 1223 | "outputs": [], 1224 | "source": [ 1225 | "s.loc[pd.IndexSlice[:'B', 5:]]\n", 1226 | "\n", 1227 | "# or equivalently without IndexSlice...\n", 1228 | "s.loc[slice(None, 'B'), slice(5, None)]" 1229 | ] 1230 | }, 1231 | { 1232 | "cell_type": "markdown", 1233 | "metadata": {}, 1234 | "source": [ 1235 | "**48**. Sum the values in `s` for each label in the first level (you should have a Series giving you a total for labels A, B and C)." 1236 | ] 1237 | }, 1238 | { 1239 | "cell_type": "code", 1240 | "execution_count": null, 1241 | "metadata": {}, 1242 | "outputs": [], 1243 | "source": [ 1244 | "s.sum(level=0)\n", "\n", "# (note: the level keyword is deprecated in newer pandas versions;\n", "# s.groupby(level=0).sum() is the equivalent)" 1245 | ] 1246 | }, 1247 | { 1248 | "cell_type": "markdown", 1249 | "metadata": {}, 1250 | "source": [ 1251 | "**49**. Suppose that `sum()` (and other methods) did not accept a `level` keyword argument. How else could you perform the equivalent of `s.sum(level=1)`?" 1252 | ] 1253 | }, 1254 | { 1255 | "cell_type": "code", 1256 | "execution_count": null, 1257 | "metadata": {}, 1258 | "outputs": [], 1259 | "source": [ 1260 | "# One way is to use .unstack()... \n", 1261 | "# This method should convince you that s is essentially just a regular DataFrame in disguise!\n", 1262 | "s.unstack().sum(axis=0)" 1263 | ] 1264 | }, 1265 | { 1266 | "cell_type": "markdown", 1267 | "metadata": {}, 1268 | "source": [ 1269 | "**50**. Exchange the levels of the MultiIndex so we have an index of the form (numbers, letters). Is this new Series properly lexsorted? If not, sort it." 1270 | ] 1271 | }, 1272 | { 1273 | "cell_type": "code", 1274 | "execution_count": null, 1275 | "metadata": {}, 1276 | "outputs": [], 1277 | "source": [ 1278 | "new_s = s.swaplevel(0, 1)\n", 1279 | "\n", 1280 | "if not new_s.index.is_lexsorted():\n", 1281 | " new_s = new_s.sort_index()\n", 1282 | "\n", 1283 | "new_s" 1284 | ] 1285 | }, 1286 | { 1287 | "cell_type": "markdown", 1288 | "metadata": { 1289 | "collapsed": true 1290 | }, 1291 | "source": [ 1292 | "## Minesweeper\n", 1293 | "\n", 1294 | "### Generate the numbers for safe squares in a Minesweeper grid\n", 1295 | "\n", 1296 | "Difficulty: *medium* to *hard*\n", 1297 | "\n", 1298 | "If you've ever used an older version of Windows, there's a good chance you've played with Minesweeper:\n", 1299 | "- https://en.wikipedia.org/wiki/Minesweeper_(video_game)\n", 1300 | "\n", 1301 | "\n", 1302 | "If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a mine. If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number telling you how many mines are found in the squares that are immediately adjacent. The aim of the game is to uncover all squares in the grid that do not contain a mine.\n", 1303 | "\n", 1304 | "In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper: coordinates of the squares, whether the square contains a mine and the number of mines found on adjacent squares."
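,
"\n",
"*(A caveat before we start: the sample solution to the first puzzle below reaches into `pd.core.reshape.util`, which is private pandas API and may move between versions. A public-API sketch with NumPy that builds the same coordinate grid, with `X` and `Y` as defined in that puzzle, is `pd.DataFrame(np.indices((X, Y)).reshape(2, -1).T, columns=['x', 'y'])`.)*"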
1305 | ] 1306 | }, 1307 | { 1308 | "cell_type": "markdown", 1309 | "metadata": {}, 1310 | "source": [ 1311 | "**51**. Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.\n", 1312 | "```\n", 1313 | "X = 5\n", 1314 | "Y = 4\n", 1315 | "```\n", 1316 | "To begin, generate a DataFrame `df` with two columns, `'x'` and `'y'` containing every coordinate for this grid. That is, the DataFrame should start:\n", 1317 | "```\n", 1318 | " x y\n", 1319 | "0 0 0\n", 1320 | "1 0 1\n", 1321 | "2 0 2\n", 1322 | "...\n", 1323 | "```" 1324 | ] 1325 | }, 1326 | { 1327 | "cell_type": "code", 1328 | "execution_count": null, 1329 | "metadata": {}, 1330 | "outputs": [], 1331 | "source": [ 1332 | "X = 5\n", 1333 | "Y = 4\n", 1334 | "\n", 1335 | "p = pd.core.reshape.util.cartesian_product([np.arange(X), np.arange(Y)])  # (private pandas helper; see the note above for a public alternative)\n", 1336 | "df = pd.DataFrame(np.asarray(p).T, columns=['x', 'y'])\n", 1337 | "df" 1338 | ] 1339 | }, 1340 | { 1341 | "cell_type": "markdown", 1342 | "metadata": {}, 1343 | "source": [ 1344 | "**52**. For this DataFrame `df`, create a new column of zeros (safe) and ones (mine). The probability of a mine occurring at each location should be 0.4." 1345 | ] 1346 | }, 1347 | { 1348 | "cell_type": "code", 1349 | "execution_count": null, 1350 | "metadata": {}, 1351 | "outputs": [], 1352 | "source": [ 1353 | "# One way is to draw samples from a binomial distribution.\n", 1354 | "\n", 1355 | "df['mine'] = np.random.binomial(1, 0.4, X*Y)\n", 1356 | "df" 1357 | ] 1358 | }, 1359 | { 1360 | "cell_type": "markdown", 1361 | "metadata": {}, 1362 | "source": [ 1363 | "**53**. Now create a new column for this DataFrame called `'adjacent'`. This column should contain the number of mines found on adjacent squares in the grid. \n", 1364 | "\n", 1365 | "(E.g. for the first row, which is the entry for the coordinate `(0, 0)`, count how many mines are found on the coordinates `(0, 1)`, `(1, 0)` and `(1, 1)`.)" 1366 | ] 1367 | }, 1368 | { 1369 | "cell_type": "code", 1370 | "execution_count": null, 1371 | "metadata": {}, 1372 | "outputs": [], 1373 | "source": [ 1374 | "# Here is one way to solve using merges.\n", 1375 | "# It's not necessarily the optimal way, just \n", 1376 | "# the solution I thought of first...\n", 1377 | "\n", 1378 | "df['adjacent'] = \\\n", 1379 | " df.merge(df + [ 1, 1, 0], on=['x', 'y'], how='left')\\\n", 1380 | " .merge(df + [ 1, -1, 0], on=['x', 'y'], how='left')\\\n", 1381 | " .merge(df + [-1, 1, 0], on=['x', 'y'], how='left')\\\n", 1382 | " .merge(df + [-1, -1, 0], on=['x', 'y'], how='left')\\\n", 1383 | " .merge(df + [ 1, 0, 0], on=['x', 'y'], how='left')\\\n", 1384 | " .merge(df + [-1, 0, 0], on=['x', 'y'], how='left')\\\n", 1385 | " .merge(df + [ 0, 1, 0], on=['x', 'y'], how='left')\\\n", 1386 | " .merge(df + [ 0, -1, 0], on=['x', 'y'], how='left')\\\n", 1387 | " .iloc[:, 3:]\\\n", 1388 | " .sum(axis=1)\n", 1389 | " \n", 1390 | "# An alternative solution is to pivot the DataFrame \n", 1391 | "# to form the \"actual\" grid of mines and use convolution.\n", 1392 | "# See https://github.com/jakevdp/matplotlib_pydata2013/blob/master/examples/minesweeper.py\n", 1393 | "\n", 1394 | "from scipy.signal import convolve2d\n", 1395 | "\n", 1396 | "mine_grid = df.pivot_table(columns='x', index='y', values='mine')\n", 1397 | "counts = convolve2d(mine_grid.astype(complex), np.ones((3, 3)), mode='same').real.astype(int)\n", 1398 | "df['adjacent'] = (counts - mine_grid).ravel('F')" 1399 | ] 1400 | }, 1401 | { 1402 | "cell_type": "markdown", 1403 | "metadata": {}, 1404 | "source": [ 1405 | "**54**. 
For rows of the DataFrame that contain a mine, set the value in the `'adjacent'` column to NaN." 1406 | ] 1407 | }, 1408 | { 1409 | "cell_type": "code", 1410 | "execution_count": null, 1411 | "metadata": {}, 1412 | "outputs": [], 1413 | "source": [ 1414 | "df.loc[df['mine'] == 1, 'adjacent'] = np.nan" 1415 | ] 1416 | }, 1417 | { 1418 | "cell_type": "markdown", 1419 | "metadata": {}, 1420 | "source": [ 1421 | "**55**. Finally, convert the DataFrame to a grid of the adjacent mine counts: columns are the `x` coordinate, rows are the `y` coordinate." 1422 | ] 1423 | }, 1424 | { 1425 | "cell_type": "code", 1426 | "execution_count": null, 1427 | "metadata": {}, 1428 | "outputs": [], 1429 | "source": [ 1430 | "df.drop('mine', axis=1).set_index(['y', 'x']).unstack()" 1431 | ] 1432 | }, 1433 | { 1434 | "cell_type": "markdown", 1435 | "metadata": {}, 1436 | "source": [ 1437 | "## Plotting\n", 1438 | "\n", 1439 | "### Visualize trends and patterns in data\n", 1440 | "\n", 1441 | "Difficulty: *medium*\n", 1442 | "\n", 1443 | "To really get a good understanding of the data contained in your DataFrame, it is often essential to create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is baked into pandas and the puzzles below explore some of what's possible with the library.\n", 1444 | "\n", 1445 | "**56.** Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames very user-friendly! Plotting in a notebook environment usually makes use of the following boilerplate:\n", 1446 | "\n", 1447 | "```python\n", 1448 | "import matplotlib.pyplot as plt\n", 1449 | "%matplotlib inline\n", 1450 | "plt.style.use('ggplot')\n", 1451 | "```\n", 1452 | "\n", 1453 | "matplotlib is the plotting library which pandas' plotting functionality is built upon, and it is usually aliased to ```plt```.\n", 1454 | "\n", 1455 | "```%matplotlib inline``` tells the notebook to show plots inline, instead of creating them in a separate window. \n", 1456 | "\n", 1457 | "```plt.style.use('ggplot')``` is a style theme that most people find agreeable, based upon the styling of R's ggplot package.\n", 1458 | "\n", 1459 | "For starters, make a scatter plot of this random data, but use black X's instead of the default markers. \n", 1460 | "\n", 1461 | "```df = pd.DataFrame({\"xs\":[1,5,2,8,1], \"ys\":[4,2,1,9,6]})```\n", 1462 | "\n", 1463 | "Consult the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) if you get stuck!" 1464 | ] 1465 | }, 1466 | { 1467 | "cell_type": "code", 1468 | "execution_count": null, 1469 | "metadata": { 1470 | "scrolled": false 1471 | }, 1472 | "outputs": [], 1473 | "source": [ 1474 | "import matplotlib.pyplot as plt\n", 1475 | "%matplotlib inline\n", 1476 | "plt.style.use('ggplot')\n", 1477 | "\n", 1478 | "df = pd.DataFrame({\"xs\":[1,5,2,8,1], \"ys\":[4,2,1,9,6]})\n", 1479 | "\n", 1480 | "df.plot.scatter(\"xs\", \"ys\", color = \"black\", marker = \"x\")" 1481 | ] 1482 | }, 1483 | { 1484 | "cell_type": "markdown", 1485 | "metadata": {}, 1486 | "source": [ 1487 | "**57.** Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. 
Make a plot which incorporates all four features of this DataFrame.\n", 1488 | "\n", 1489 | "(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to represent size by 10 or more)\n", 1490 | "\n", 1491 | "*The chart doesn't have to be pretty: this isn't a course in data viz!*\n", 1492 | "\n", 1493 | "```\n", 1494 | "df = pd.DataFrame({\"productivity\":[5,2,3,1,4,5,6,7,8,3,4,8,9],\n", 1495 | " \"hours_in\" :[1,9,6,5,3,9,2,9,1,7,4,2,2],\n", 1496 | " \"happiness\" :[2,1,3,2,3,1,2,3,1,2,2,1,3],\n", 1497 | " \"caffienated\" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})\n", 1498 | "```" 1499 | ] 1500 | }, 1501 | { 1502 | "cell_type": "code", 1503 | "execution_count": null, 1504 | "metadata": {}, 1505 | "outputs": [], 1506 | "source": [ 1507 | "df = pd.DataFrame({\"productivity\":[5,2,3,1,4,5,6,7,8,3,4,8,9],\n", 1508 | " \"hours_in\" :[1,9,6,5,3,9,2,9,1,7,4,2,2],\n", 1509 | " \"happiness\" :[2,1,3,2,3,1,2,3,1,2,2,1,3],\n", 1510 | " \"caffienated\" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})\n", 1511 | "\n", 1512 | "df.plot.scatter(\"hours_in\", \"productivity\", s = df.happiness * 30, c = df.caffienated)" 1513 | ] 1514 | }, 1515 | { 1516 | "cell_type": "markdown", 1517 | "metadata": {}, 1518 | "source": [ 1519 | "**58.** What if we want to plot multiple things? Pandas allows you to pass in a matplotlib *Axes* object for plots, and plots will also return an Axes object.\n", 1520 | "\n", 1521 | "Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions).\n", 1522 | "\n", 1523 | "```\n", 1524 | "df = pd.DataFrame({\"revenue\":[57,68,63,71,72,90,80,62,59,51,47,52],\n", 1525 | " \"advertising\":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],\n", 1526 | " \"month\":range(12)\n", 1527 | " })\n", 1528 | "```" 1529 | ] 1530 | }, 1531 | { 1532 | "cell_type": "code", 1533 | "execution_count": null, 1534 | "metadata": {}, 1535 | "outputs": [], 1536 | "source": [ 1537 | "df = pd.DataFrame({\"revenue\":[57,68,63,71,72,90,80,62,59,51,47,52],\n", 1538 | " \"advertising\":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],\n", 1539 | " \"month\":range(12)\n", 1540 | " })\n", 1541 | "\n", 1542 | "ax = df.plot.bar(\"month\", \"revenue\", color = \"green\")\n", 1543 | "df.plot.line(\"month\", \"advertising\", secondary_y = True, ax = ax)\n", 1544 | "ax.set_xlim((-1,12))" 1545 | ] 1546 | }, 1547 | { 1548 | "cell_type": "markdown", 1549 | "metadata": {}, 1550 | "source": [ 1551 | "Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the \"candle\" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.\n", 1552 | "\n", 1553 | "![Candlestick Example](img/candle.jpg)\n", 1554 | "\n", 1555 | "This was initially designed to be a pandas plotting challenge, but it turns out that this type of plot is not feasible using pandas' methods alone. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.\n", 1556 | "\n", 1557 | "Your first step should be to get the data in the correct format using pandas' time-series grouping function. We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this."
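,
"\n",
"*(The built-in alluded to here is the `.ohlc()` aggregation on a resampler — e.g. `df.resample('H').ohlc()` — as used in the solution below. One hedge: very recent pandas versions rename the hourly alias from `'H'` to `'h'`.)*"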
1558 | ] 1559 | }, 1560 | { 1561 | "cell_type": "markdown", 1562 | "metadata": {}, 1563 | "source": [ 1564 | "The below cell contains helper functions. Call ```day_stock_data()``` to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call ```plot_candlestick(df)``` on your properly aggregated and formatted stock data to print the candlestick chart." 1565 | ] 1566 | }, 1567 | { 1568 | "cell_type": "code", 1569 | "execution_count": null, 1570 | "metadata": {}, 1571 | "outputs": [], 1572 | "source": [ 1573 | "# This function is designed to create semi-interesting random stock price data\n", 1574 | "\n", 1575 | "import numpy as np\n", 1576 | "def float_to_time(x):\n", 1577 | " return str(int(x)) + \":\" + str(int(x%1 * 60)).zfill(2) + \":\" + str(int(x*60 % 1 * 60)).zfill(2)\n", 1578 | "\n", 1579 | "def day_stock_data():\n", 1580 | " # NYSE is open from 9:30 to 4:00\n", 1581 | " time = 9.5\n", 1582 | " price = 100\n", 1583 | " results = [(float_to_time(time), price)]\n", 1584 | " while time < 16:\n", 1585 | " elapsed = np.random.exponential(.001)\n", 1586 | " time += elapsed\n", 1587 | " if time > 16:\n", 1588 | " break\n", 1589 | " price_diff = np.random.uniform(.999, 1.001)\n", 1590 | " price *= price_diff\n", 1591 | " results.append((float_to_time(time), price))\n", 1592 | " \n", 1593 | " \n", 1594 | " df = pd.DataFrame(results, columns = ['time','price'])\n", 1595 | " df.time = pd.to_datetime(df.time)\n", 1596 | " return df\n", 1597 | "\n", 1598 | "def plot_candlestick(agg):\n", 1599 | " fig, ax = plt.subplots()\n", 1600 | " for time in agg.index:\n", 1601 | " ax.plot([time.hour] * 2, agg.loc[time, [\"high\",\"low\"]].values, color = \"black\")\n", 1602 | " ax.plot([time.hour] * 2, agg.loc[time, [\"open\",\"close\"]].values, color = agg.loc[time, \"color\"], linewidth = 10)\n", 1603 | "\n", 1604 | " ax.set_xlim((8,16))\n", 1605 | " ax.set_ylabel(\"Price\")\n", 1606 | " ax.set_xlabel(\"Hour\")\n", 1607 | " ax.set_title(\"OHLC of Stock Value During Trading Day\")\n", 1608 | " plt.show()" 1609 | ] 1610 | }, 1611 | { 1612 | "cell_type": "markdown", 1613 | "metadata": {}, 1614 | "source": [ 1615 | "**59.** Generate a day's worth of random stock data, and aggregate / reformat it so that it has hourly summaries of the opening, highest, lowest, and closing prices." 1616 | ] 1617 | }, 1618 | { 1619 | "cell_type": "code", 1620 | "execution_count": null, 1621 | "metadata": {}, 1622 | "outputs": [], 1623 | "source": [ 1624 | "df = day_stock_data()\n", 1625 | "df.head()" 1626 | ] 1627 | }, 1628 | { 1629 | "cell_type": "code", 1630 | "execution_count": null, 1631 | "metadata": {}, 1632 | "outputs": [], 1633 | "source": [ 1634 | "df.set_index(\"time\", inplace = True)\n", 1635 | "agg = df.resample(\"H\").ohlc()\n", 1636 | "agg.columns = agg.columns.droplevel()\n", 1637 | "agg[\"color\"] = (agg.close > agg.open).map({True:\"green\",False:\"red\"})\n", 1638 | "agg.head()" 1639 | ] 1640 | }, 1641 | { 1642 | "cell_type": "markdown", 1643 | "metadata": {}, 1644 | "source": [ 1645 | "**60.** Now that you have your properly-formatted data, try to plot it yourself as a candlestick chart. Use the ```plot_candlestick(df)``` function above, or matplotlib's [```plot``` documentation](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.plot.html) if you get stuck."
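,
"\n",
"*(If you want to try drawing it by hand first, the core of each candle is just two `ax.plot` calls — a sketch, where `ax`, the hour `h` and the row `row` of `agg` are assumed names:)*\n",
"\n",
"```python\n",
"ax.plot([h, h], [row['low'], row['high']], color='black')                        # the thin wick\n",
"ax.plot([h, h], [row['open'], row['close']], color=row['color'], linewidth=10)  # the thick body\n",
"```"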
1646 | ] 1647 | }, 1648 | { 1649 | "cell_type": "code", 1650 | "execution_count": null, 1651 | "metadata": { 1652 | "scrolled": false 1653 | }, 1654 | "outputs": [], 1655 | "source": [ 1656 | "plot_candlestick(agg)" 1657 | ] 1658 | }, 1659 | { 1660 | "cell_type": "markdown", 1661 | "metadata": {}, 1662 | "source": [ 1663 | "*More exercises to follow soon...*" 1664 | ] 1665 | } 1666 | ], 1667 | "metadata": { 1668 | "kernelspec": { 1669 | "display_name": "Python 3", 1670 | "language": "python", 1671 | "name": "python3" 1672 | }, 1673 | "language_info": { 1674 | "codemirror_mode": { 1675 | "name": "ipython", 1676 | "version": 3 1677 | }, 1678 | "file_extension": ".py", 1679 | "mimetype": "text/x-python", 1680 | "name": "python", 1681 | "nbconvert_exporter": "python", 1682 | "pygments_lexer": "ipython3", 1683 | "version": "3.7.4" 1684 | } 1685 | }, 1686 | "nbformat": 4, 1687 | "nbformat_minor": 1 1688 | } 1689 | -------------------------------------------------------------------------------- /CONTRIBUTORS.md: -------------------------------------------------------------------------------- 1 | # Contributors 2 | 3 | Many thanks to the following contributors for their puzzles and fixes. 4 | 5 | ## Puzzles 6 | 7 | - [@johink](https://github.com/johink) - the plotting puzzles 56-60 8 | 9 | ## Solutions 10 | 11 | - [@madrury](https://github.com/madrury) - third solution to puzzle 29 12 | - [@xonoma](https://github.com/xonoma) - non-`ix` solution to puzzle 8 13 | 14 | ## Fixes 15 | 16 | - [pleydier](https://github.com/pleydier) - fix for puzzle 29 17 | - [g-morishita](https://github.com/g-morishita) - fix typo in puzzle 27 18 | - [499244188](https://github.com/499244188) - requirements.txt 19 | - [@guiem](https://github.com/guiem) - typo in README 20 | - [@johnny5550822](https://github.com/johnny5550822) - fix to puzzle 24 21 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Alex Riley 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 100 pandas puzzles 2 | 3 | ### [Puzzles notebook](https://github.com/ajcr/100-pandas-puzzles/blob/master/100-pandas-puzzles.ipynb) 4 | ### [Solutions notebook](https://github.com/ajcr/100-pandas-puzzles/blob/master/100-pandas-puzzles-with-solutions.ipynb) 5 | 6 | Inspired by [100 NumPy exercises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power. 7 | 8 | Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects. Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy - don't go using pure Python!). Choosing the right methods and following best practices is the underlying goal. 9 | 10 | The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how elaborate the required solution needs to be. 11 | 12 | Good luck solving the puzzles! 13 | 14 | *\* the list of puzzles is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.* 15 | 16 | ## Overview of puzzles 17 | 18 | | Section Name | Description | Difficulty | 19 | | ------------- | ------------- | ------------- | 20 | | Importing pandas | Getting started and checking your pandas setup | Easy | 21 | | DataFrame basics | A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames | Easy | 22 | | DataFrames: beyond the basics | Slightly trickier: you may need to combine two or more methods to get the right answer | Medium | 23 | | DataFrames: harder problems | These might require a bit of thinking outside the box... | Hard | 24 | | Series and DatetimeIndex | Exercises for creating and manipulating Series with datetime data | Easy/Medium | 25 | | Cleaning Data | Making a DataFrame easier to work with | Easy/Medium | 26 | | Using MultiIndexes | Go beyond flat DataFrames with additional index levels | Medium | 27 | | Minesweeper | Generate the numbers for safe squares in a Minesweeper grid | Hard | 28 | | Plotting | Explore pandas' plotting functionality to see trends in data | Medium | 29 | 30 | ## Setting up 31 | 32 | To tackle the puzzles on your own computer, you'll need a Python 3 environment with the dependencies (namely pandas) installed. 33 | 34 | One way to do this is as follows. I'm using a bash shell; the procedure on macOS should be essentially the same. I'm not sure about Windows. 35 | 36 | 1. Check you have Python 3 installed by printing the version of Python: 37 | ``` 38 | python -V 39 | ``` 40 | 41 | 2. Clone the puzzle repository using Git: 42 | 43 | ``` 44 | git clone https://github.com/ajcr/100-pandas-puzzles.git 45 | ``` 46 | 47 | 3. Install the dependencies (**caution**: if you don't want to modify any Python modules in your active environment, consider using a virtual environment instead): 48 | 49 | ``` 50 | python -m pip install -r requirements.txt 51 | ``` 52 | 53 | 4. 
Launch a Jupyter notebook server: 54 | 55 | ``` 56 | jupyter notebook --notebook-dir=100-pandas-puzzles 57 | ``` 58 | 59 | You should be able to see the notebooks and launch them in your web browser. 60 | 61 | ## Contributors 62 | 63 | This repository has benefitted from numerous contributors, with those who have sent puzzles and fixes listed in [CONTRIBUTORS](https://github.com/ajcr/100-pandas-puzzles/blob/master/CONTRIBUTORS.md). 64 | 65 | Thanks to everyone who has raised an issue too. 66 | 67 | ## Other links 68 | 69 | If you feel like reading up on pandas before starting, the official documentation is useful and very extensive. Good places to get a broader overview of pandas are: 70 | 71 | - [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/version/0.17.0/10min.html) 72 | - [pandas basics](http://pandas.pydata.org/pandas-docs/version/0.17.0/basics.html) 73 | - [tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html) 74 | - [cookbook and idioms](http://pandas.pydata.org/pandas-docs/version/0.17.0/cookbook.html#cookbook) 75 | - [Guilherme Samora's pandas exercises](https://github.com/guipsamora/pandas_exercises) 76 | 77 | There are many other excellent resources and books that are easily searchable and purchasable. 78 | -------------------------------------------------------------------------------- /img/candle.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/KeithGalli/100-pandas-puzzles/8639a1488c15acc84316a41e9375225583cf19c8/img/candle.jpg -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | pandas>=0.25.0 2 | matplotlib>=2.1.1 3 | numpy>=1.17.0 4 | jupyter 5 | --------------------------------------------------------------------------------