\n",
39 | "\n",
40 | "## **Python PySpark Live Training Template**\n",
41 | "\n",
42 | "_Enter a brief description of your session, here's an example below:_\n",
43 | "\n",
44 | "Welcome to this hands-on training where we will immerse yourself in data visualization in Python. Using both `matplotlib` and `seaborn`, we'll learn how to create visualizations that are presentation-ready.\n",
45 | "\n",
46 | "The ability to present and discuss\n",
47 | "\n",
48 | "* Create various types of plots, including bar-plots, distribution plots, box-plots and more using Seaborn and Matplotlib.\n",
49 | "* Format and stylize your visualizations to make them report-ready.\n",
50 | "* Create sub-plots to create clearer visualizations and supercharge your workflow.\n",
51 | "\n",
52 | "## **The Dataset**\n",
53 | "\n",
54 | "_Enter a brief description of your dataset and its columns, here's an example below:_\n",
55 | "\n",
56 | "\n",
57 | "The dataset to be used in this webinar is a CSV file named `airbnb.csv`, which contains data on airbnb listings in the state of New York. It contains the following columns:\n",
58 | "\n",
59 | "- `listing_id`: The unique identifier for a listing\n",
60 | "- `description`: The description used on the listing\n",
61 | "- `host_id`: Unique identifier for a host\n",
62 | "- `host_name`: Name of host\n",
63 | "- `neighbourhood_full`: Name of boroughs and neighbourhoods\n",
64 | "- `coordinates`: Coordinates of listing _(latitude, longitude)_\n",
65 | "- `Listing added`: Date of added listing\n",
66 | "- `room_type`: Type of room \n",
67 | "- `rating`: Rating from 0 to 5.\n",
68 | "- `price`: Price per night for listing\n",
69 | "- `number_of_reviews`: Amount of reviews received \n",
70 | "- `last_review`: Date of last review\n",
71 | "- `reviews_per_month`: Number of reviews per month\n",
72 | "- `availability_365`: Number of days available per year\n",
73 | "- `Number of stays`: Total number of stays thus far\n"
74 | ]
75 | },
76 | {
77 | "cell_type": "markdown",
78 | "metadata": {
79 | "id": "NgaAMtZKXppQ",
80 | "colab_type": "text"
81 | },
82 | "source": [
83 | "## **Setting up a PySpark session**\n",
84 | "\n",
85 | "This set of code lets you enable a PySpark session using google colabs, make sure to run the code snippets to enable PySpark."
86 | ]
87 | },
88 | {
89 | "cell_type": "code",
90 | "metadata": {
91 | "id": "LfE-MdXOXppm",
92 | "colab_type": "code",
93 | "colab": {}
94 | },
95 | "source": [
96 | "# Just run this code\n",
97 | "!apt-get install openjdk-8-jdk-headless -qq > /dev/null\n",
98 | "!wget -q https://downloads.apache.org/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz\n",
99 | "!tar xf spark-2.4.5-bin-hadoop2.7.tgz\n",
100 | "!pip install -q findspark"
101 | ],
102 | "execution_count": 0,
103 | "outputs": []
104 | },
105 | {
106 | "cell_type": "code",
107 | "metadata": {
108 | "id": "tM2OWT4VXpp0",
109 | "colab_type": "code",
110 | "colab": {}
111 | },
112 | "source": [
113 | "# Just run this code too!\n",
114 | "import os\n",
115 | "os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\"\n",
116 | "os.environ[\"SPARK_HOME\"] = \"/content/spark-2.4.5-bin-hadoop2.7\""
117 | ],
118 | "execution_count": 0,
119 | "outputs": []
120 | },
121 | {
122 | "cell_type": "code",
123 | "metadata": {
124 | "id": "ORtvjWUFXpp7",
125 | "colab_type": "code",
126 | "colab": {}
127 | },
128 | "source": [
129 | "# Set up a Spark session\n",
130 | "import findspark\n",
131 | "findspark.init()\n",
132 | "from pyspark.sql import SparkSession\n",
133 | "spark = SparkSession.builder.master(\"local[*]\").getOrCreate()"
134 | ],
135 | "execution_count": 0,
136 | "outputs": []
137 | },
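{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, you can print the Spark version and show a tiny DataFrame to confirm the session is up and running:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Confirm the Spark session works\n",
"print(spark.version)\n",
"\n",
"# Build a small test DataFrame and display it\n",
"spark.range(5).show()"
],
"execution_count": 0,
"outputs": []
},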
138 | {
139 | "cell_type": "markdown",
140 | "metadata": {
141 | "colab_type": "text",
142 | "id": "BMYfcKeDY85K"
143 | },
144 | "source": [
145 | "## **Getting started**"
146 | ]
147 | },
148 | {
149 | "cell_type": "code",
150 | "metadata": {
151 | "colab_type": "code",
152 | "id": "EMQfyC7GUNhT",
153 | "colab": {}
154 | },
155 | "source": [
156 | "# Import other relevant libraries\n",
157 | "from pyspark.ml.feature import VectorAssembler\n",
158 | "from pyspark.ml.regression import LinearRegression"
159 | ],
160 | "execution_count": 0,
161 | "outputs": []
162 | },
163 | {
164 | "cell_type": "code",
165 | "metadata": {
166 | "colab_type": "code",
167 | "id": "IAfz_jiu0NjN",
168 | "colab": {}
169 | },
170 | "source": [
171 | "# Get dataset into local environment\n",
172 | "!wget -O /tmp/airbnb.csv 'https://github.com/datacamp/python-live-training-template/blob/master/data/airbnb.csv?raw=True'\n",
173 | "airbnb = spark.read.csv('/tmp/airbnb.csv', inferSchema=True, header =True)"
174 | ],
175 | "execution_count": 0,
176 | "outputs": []
177 | },
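{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `VectorAssembler` and `LinearRegression` imports above are not used yet; the cell below is a minimal sketch of how they could fit together on this dataset. It assumes `number_of_reviews`, `reviews_per_month`, `availability_365`, and `rating` were inferred as numeric columns when reading the CSV, so check `printSchema()` first and adjust the column list to your own session."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Inspect how Spark parsed the CSV\n",
"airbnb.printSchema()\n",
"\n",
"# Minimal sketch: predict rating from a few (assumed numeric) columns\n",
"feature_cols = ['number_of_reviews', 'reviews_per_month', 'availability_365']\n",
"\n",
"# VectorAssembler cannot handle missing values, so drop incomplete rows first\n",
"clean = airbnb.na.drop(subset=feature_cols + ['rating'])\n",
"\n",
"# Combine the feature columns into a single vector column\n",
"assembler = VectorAssembler(inputCols=feature_cols, outputCol='features')\n",
"train_df = assembler.transform(clean).select('features', 'rating')\n",
"\n",
"# Fit a linear regression and look at the coefficients\n",
"lr = LinearRegression(featuresCol='features', labelCol='rating')\n",
"model = lr.fit(train_df)\n",
"print(model.coefficients, model.intercept)"
],
"execution_count": 0,
"outputs": []
},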
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {
181 | "id": "6L1aMkQBXsdM",
182 | "colab_type": "text"
183 | },
184 | "source": [
185 | "### **Examples on use of markdown**\n",
186 | "\n",
187 | "#### **Images**\n",
188 | "\n",
189 | "To add images, gifs, or other assets of that kind, make sure to use the HTML `` function as in the following \n",
190 | "```\n",
191 | "
\n",
192 | "\n",
193 | "
\n",
194 | "
\n",
195 | "```\n",
196 | "\n",
197 | "- The `align` argument takes in `\"center\"`, `\"left\"`, `\"right\"`.\n",
198 | "- The `src` argument takes in the raw link of your image.\n",
199 | "- The `width` argument takes in a percentage, where `100%` is the original size of the image. \n",
200 | "\n",
201 | "\n",
202 | "#### **Formulas**\n",
203 | "\n",
204 | "To use formulas, feel free to use Latex Notation as such:\n",
205 | "\n",
206 | "$y = ax + b$\n",
207 | "\n",
208 | "You can even use color schemes like in this example, where coefficients are colored in red\n",
209 | "\n",
210 | "$y = \\color{red}a x + \\color{red}b$\n",
211 | "\n",
212 | "#### **Changing font color and size**\n",
213 | "\n",
214 | "To change or highlight specific texts in a color, you can use the following\n",
215 | "\n",
216 | "```\n",
217 | "**Example text**\n",
218 | "```\n",
219 | "\n",
220 | "Where the results will look like **Example text**.\n",
221 | "\n",
222 | "- The `color` argument takes in a HEX code for your color. "
223 | ]
224 | }
225 | ]
226 | }
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # **Big Data for Pythonistas** by **Hugo Bowne-Anderson**
2 |
3 | Live training sessions are designed to mimic the flow of how a real data scientist would address a problem or a task. As such, a session needs to have some “narrative” where learners achieve stated learning objectives in the form of a real-life data science task or project. For example, a data visualization live session could revolve around analyzing a dataset and creating a report with a specific business objective in mind _(ex: analyzing and visualizing churn)_, while a data cleaning live session could be about preparing a dataset for analysis, and so on.
4 |
5 | As part of the 'Live training Spec' process, you will need to complete the following tasks:
6 |
7 | Edit this README by filling in the information for steps 1 - 4.
8 |
9 | ## Step 1: Foundations
10 |
11 | This part of the 'Live training Spec' process is designed to help guide you through session design by having you think through several key questions. Please make sure to delete the examples provided here for you.
12 |
13 | ### A. What problem(s) will students learn how to solve? (minimum of 5 problems)
14 |
15 | > _Here's an example from the Python for Spreadsheets Users live session_
16 | >
17 | > - Key considerations to take in when transitioning from spreadsheets to Python.
18 | > - The Data Scientist mindset and keys to success in transitioning to Python.
19 | > - How to import `.xlsx` and `.csv` files into Python using `pandas`.
20 | > - How to filter a DataFrame using `pandas`.
21 | > - How to create new columns out of your DataFrame for more interesting features.
22 | > - Perform exploratory analysis of a DataFrame in `pandas`.
23 | > - How to clean a DataFrame using `pandas` to make it ready for analysis.
24 | > - Apply common spreadsheets operations such as pivot tables and vlookups in Python using `pandas`.
25 | > - Create simple, interesting visualizations using `matplotlib`.
26 |
27 |
28 | ### B. What technologies, packages, or functions will students use? Please be exhaustive.
29 |
30 | > - pandas
31 | > - matplotlib
32 | > - seaborn
33 |
34 | ### C. What terms or jargon will you define?
35 |
36 | _Whether during your opening and closing talk or your live training, you might have to define some terms and jargon to walk students through a problem you’re solving. Intuitive explanations using analogies are encouraged._
37 |
38 | > _Here's an example from the [Python for Spreadsheets Users live session](https://www.datacamp.com/resources/webinars/live-training-python-for-spreadsheet-users)._
39 | >
40 | > - Packages: Packages are pieces of software we can import into Python. Similar to how we download and install Excel on macOS, we import pandas in Python. (You can find it at minute 6:30)
41 |
42 | ### D. What mistakes or misconceptions do you expect?
43 |
44 | _To help minimize the amount of Q&As and make your live training re-usable, list out some mistakes and misconceptions you think students might encounter along the way._
45 |
46 | > _Here's an example from the [Data Visualization in Python live session](https://www.datacamp.com/resources/webinars/data-visualization-in-python)_
47 | >
48 | > - Anatomy of a matplotlib figure: When calling a matplotlib plot, a figure, axes, and plot are created behind the scenes. (You can find it at minute 11)
49 | > - As long as you understand how plots work behind the scenes, you don't need to memorize syntax to customize your plots.
50 |
51 | ### E. What datasets will you use?
52 |
53 | Live training sessions are designed to walk students through something closer to a real-life data science workflow. Accordingly, the dataset needs to accommodate that user experience.
54 | As a rule of thumb, your dataset should always answer yes to the following question:
55 | > Is the dataset/problem I’m working on, something an industry data scientist/analyst could work on?
56 |
57 | Check our [datasets to avoid](https://instructor-support.datacamp.com/en/articles/2360699-datasets-to-avoid) list.
58 |
59 | ## Step 2: Who is this session for?
60 |
61 | Terms like "beginner" and "expert" mean different things to different people, so we use personas to help instructors clarify a live training's audience. When designing a specific live training, instructors should explain how it will or won't help these people, and what extra skills or prerequisite knowledge they are assuming their students have above and beyond what's included in the persona.
62 |
63 | - [ ] Please select the roles and industries that align with your live training.
64 | - [ ] Include an explanation describing your reasoning and any other relevant information.
65 |
66 | ### What roles would this live training be suitable for?
67 |
68 | *Check all that apply.*
69 |
70 | - [ ] Data Consumer
71 | - [ ] Leader
72 | - [ ] Data Analyst
73 | - [ ] Citizen Data Scientist
74 | - [ ] Data Scientist
75 | - [ ] Data Engineer
76 | - [ ] Database Administrator
77 | - [ ] Statistician
78 | - [ ] Machine Learning Scientist
79 | - [ ] Programmer
80 | - [ ] Other (please describe)
81 |
82 | ### What industries would this apply to?
83 |
84 | *List one or more industries that the content would be appropriate for.*
85 |
86 |
87 | ### What level of expertise should learners have before beginning the live training?
88 |
89 | *List three or more examples of skills that you expect learners to have before beginning the live training*
90 |
91 | > - Can draw common plot types (scatter, bar, histogram) using matplotlib and interpret them
92 | > - Can run a linear regression, use it to make predictions, and interpret the coefficients.
93 | > - Can calculate grouped summary statistics using SELECT queries with GROUP BY clauses.
94 |
95 |
96 | ## Step 3: Prerequisites
97 |
98 | List any prerequisite courses you think your live training could use from. This could be the live session’s companion course or a course you think students should take before the session. Prerequisites act as a guiding principle for your session and will set the topic framework, but you do not have to limit yourself in the live session to the syntax used in the prerequisite courses.
99 |
100 |
101 | ## Step 4: Session Outline
102 |
103 | A live training session usually begins with an introductory presentation, followed by the live training itself, and an ending presentation. Your live session is expected to be around 2h30m-3h long (including Q&A) with a hard-limit at 3h30m. You can check out our live training content guidelines [here](_LINK_).
104 |
105 |
106 | > _Example from [Python for Spreadsheet Users](https://www.datacamp.com/resources/webinars/live-training-python-for-spreadsheet-users)_
107 | >
108 | > ### Introduction Slides
109 | > - Introduction to the webinar and instructor (led by DataCamp TA)
110 | > - Introduction to the topics
111 | > - Discuss need to become data fluent
112 | > - Define data fluency
113 | > - Discuss how learning Python fits into that and go over session outline
114 | > - Set expectations about Q&A
115 | >
116 | > ### Live Training
117 | > #### Exploratory Data Analysis
118 | > - Import data and print header of DataFrame `pd.read_excel()`, `.head()`
119 | > - Glimpse at the data to:
120 | > - Get column types using `.dtypes`
121 | > - Use `.describe()`, `.info()`
122 | > - **Q&A**
123 | > #### Data Cleaning and making it ready for analysis
124 | > - Convert date columns to datetime `pd.to_datetime()`
125 | > - Change column names
126 | > - Extract year, month from datetime `.strftime()`
127 | > - Drop an irrelevant column `.drop()`
128 | > - Fill missing values with `.fillna()`
129 | > #### Creating a report
130 | > - First report question: What is our overall sales performance this year? `.groupby()`, `.plt.plot()`
131 | > - Second report question: What is our overall sales performance this year? `.merge()`, `.groupby()`, `plt.plot()`
132 | > - Third report question: What is our overall sales performance this year? `.merge()`, `.groupby()`, `plt.plot()`
133 | > - **Q&A**
134 | >
135 | > ### Ending slides
136 | > - Recap of what we learned
137 | > - The data science mindset
138 | > - Call to action and course recommendations
139 |
140 | ## Authoring your session
141 |
142 | To get yourself started with setting up your live session, follow the steps below:
143 |
144 | 1. Download and install the "Open in Colabs" extension from [here](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo?hl=en). This will let you take any Jupyter notebook you see in a GitHub repository and open it as a **temporary** Colab link.
145 | 2. Upload your dataset(s) to the `data` folder.
146 | 3. Upload your images, gifs, or any other assets you want to use in the notebook in the `assets` folder.
147 | 4. Check out the notebook templates in the `notebooks` folder, and keep the template you want for your session while deleting all the remaining ones.
148 |
149 | You can author and save your progress on your notebook using **either** of these methods.
150 |
151 | _**How to author your notebook: By directly saving into GitHub**_
152 |
153 | 1. Preview your desired notebook, press the "Open in Colabs" extension, and start developing your content in Colab _(which will act as the solution code for the session)_. :warning: **Important** :warning: Your progress will **not** be saved on Google Colab since it's a temporary link. To save your progress, make sure to press `File`, then `Save a copy in GitHub`, and follow the remaining prompts.
154 | 2. Once your notebook is ready to go, name it `session_name_solution.ipynb`, then create an empty version of the notebook to be filled out by you and learners during the session and name it `session_name.ipynb`.
155 | 3. Create Colab links for both notebooks and save them in the `notebooks` folder :tada:
156 |
157 | _**How to author your notebook: By uploading notebook into GitHub**_
158 |
159 | 1. Preview your desired notebook, press the "Open in Colabs" extension, and start developing your content in Colab _(which will act as the solution code for the session)_. Once you're done, press `File` - `Download .ipynb` - and overwrite the notebook by uploading it to GitHub.
160 | 2. Once your notebook is ready to go, name it `session_name_solution.ipynb`, then create an empty version of the notebook to be filled out by you and learners during the session and name it `session_name.ipynb`.
161 | 3. Create Colab links for both notebooks and save them in the `notebooks` folder :tada:
162 |
163 |
164 | You can check out either of those methods in action using this [recording](https://www.loom.com/share/1eeb148129244edd93fbc34bf5dc7f0d).
165 |
--------------------------------------------------------------------------------
/notebooks/python_live_session_template.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "python_live_session_template.ipynb",
7 | "provenance": []
8 | },
9 | "kernelspec": {
10 | "display_name": "Python 3",
11 | "language": "python",
12 | "name": "python3"
13 | },
14 | "language_info": {
15 | "codemirror_mode": {
16 | "name": "ipython",
17 | "version": 3
18 | },
19 | "file_extension": ".py",
20 | "mimetype": "text/x-python",
21 | "name": "python",
22 | "nbconvert_exporter": "python",
23 | "pygments_lexer": "ipython3",
24 | "version": "3.7.1"
25 | }
26 | },
27 | "cells": [
28 | {
29 | "cell_type": "markdown",
30 | "metadata": {
31 | "colab_type": "text",
32 | "id": "6Ijg5wUCTQYG"
33 | },
34 | "source": [
35 | "
\n",
36 | "\n",
37 | "
\n",
38 | "
\n",
39 | "\n",
40 | "## **Python Live Training Template**\n",
41 | "\n",
42 | "_Enter a brief description of your session, here's an example below:_\n",
43 | "\n",
44 | "Welcome to this hands-on training where we will immerse yourself in data visualization in Python. Using both `matplotlib` and `seaborn`, we'll learn how to create visualizations that are presentation-ready.\n",
45 | "\n",
46 | "The ability to present and discuss\n",
47 | "\n",
48 | "* Create various types of plots, including bar-plots, distribution plots, box-plots and more using Seaborn and Matplotlib.\n",
49 | "* Format and stylize your visualizations to make them report-ready.\n",
50 | "* Create sub-plots to create clearer visualizations and supercharge your workflow.\n",
51 | "\n",
52 | "## **The Dataset**\n",
53 | "\n",
54 | "_Enter a brief description of your dataset and its columns, here's an example below:_\n",
55 | "\n",
56 | "\n",
57 | "The dataset to be used in this webinar is a CSV file named `airbnb.csv`, which contains data on airbnb listings in the state of New York. It contains the following columns:\n",
58 | "\n",
59 | "- `listing_id`: The unique identifier for a listing\n",
60 | "- `description`: The description used on the listing\n",
61 | "- `host_id`: Unique identifier for a host\n",
62 | "- `host_name`: Name of host\n",
63 | "- `neighbourhood_full`: Name of boroughs and neighbourhoods\n",
64 | "- `coordinates`: Coordinates of listing _(latitude, longitude)_\n",
65 | "- `Listing added`: Date of added listing\n",
66 | "- `room_type`: Type of room \n",
67 | "- `rating`: Rating from 0 to 5.\n",
68 | "- `price`: Price per night for listing\n",
69 | "- `number_of_reviews`: Amount of reviews received \n",
70 | "- `last_review`: Date of last review\n",
71 | "- `reviews_per_month`: Number of reviews per month\n",
72 | "- `availability_365`: Number of days available per year\n",
73 | "- `Number of stays`: Total number of stays thus far\n"
74 | ]
75 | },
76 | {
77 | "cell_type": "markdown",
78 | "metadata": {
79 | "colab_type": "text",
80 | "id": "BMYfcKeDY85K"
81 | },
82 | "source": [
83 | "## **Getting started**"
84 | ]
85 | },
86 | {
87 | "cell_type": "code",
88 | "metadata": {
89 | "colab_type": "code",
90 | "id": "EMQfyC7GUNhT",
91 | "colab": {}
92 | },
93 | "source": [
94 | "# Import libraries\n",
95 | "import pandas as pd\n",
96 | "import matplotlib.pyplot as plt\n",
97 | "import numpy as np\n",
98 | "import seaborn as sns\n",
99 | "import missingno as msno\n",
100 | "import datetime as dt"
101 | ],
102 | "execution_count": 0,
103 | "outputs": []
104 | },
105 | {
106 | "cell_type": "code",
107 | "metadata": {
108 | "colab_type": "code",
109 | "id": "l8t_EwRNZPLB",
110 | "outputId": "36a85c6f-f2ae-44e0-ac01-fc55462bc616",
111 | "colab": {
112 | "base_uri": "https://localhost:8080/",
113 | "height": 479
114 | }
115 | },
116 | "source": [
117 | "# Read in the dataset\n",
118 | "airbnb = pd.read_csv('https://github.com/datacamp/Big-Data-For-Pythonistas/blob/master/datasets/airbnb.csv?raw=true', \n",
119 | " index_col = 'Unnamed: 0')\n",
120 | "\n",
121 | "# Print header\n",
122 | "airbnb.head()"
123 | ],
124 | "execution_count": 0,
125 | "outputs": [
126 | {
127 | "output_type": "execute_result",
128 | "data": {
129 | "text/html": [
130 | "
\n",
131 | "\n",
144 | "
\n",
145 | " \n",
146 | "
\n",
147 | "
\n",
148 | "
listing_id
\n",
149 | "
name
\n",
150 | "
host_id
\n",
151 | "
host_name
\n",
152 | "
neighbourhood_full
\n",
153 | "
coordinates
\n",
154 | "
room_type
\n",
155 | "
price
\n",
156 | "
number_of_reviews
\n",
157 | "
last_review
\n",
158 | "
reviews_per_month
\n",
159 | "
availability_365
\n",
160 | "
rating
\n",
161 | "
number_of_stays
\n",
162 | "
5_stars
\n",
163 | "
listing_added
\n",
164 | "
\n",
165 | " \n",
166 | " \n",
167 | "
\n",
168 | "
0
\n",
169 | "
13740704
\n",
170 | "
Cozy,budget friendly, cable inc, private entra...
\n",
171 | "
20583125
\n",
172 | "
Michel
\n",
173 | "
Brooklyn, Flatlands
\n",
174 | "
(40.63222, -73.93398)
\n",
175 | "
Private room
\n",
176 | "
45$
\n",
177 | "
10
\n",
178 | "
2018-12-12
\n",
179 | "
0.70
\n",
180 | "
85
\n",
181 | "
4.100954
\n",
182 | "
12.0
\n",
183 | "
0.609432
\n",
184 | "
2018-06-08
\n",
185 | "
\n",
186 | "
\n",
187 | "
1
\n",
188 | "
22005115
\n",
189 | "
Two floor apartment near Central Park
\n",
190 | "
82746113
\n",
191 | "
Cecilia
\n",
192 | "
Manhattan, Upper West Side
\n",
193 | "
(40.78761, -73.96862)
\n",
194 | "
Entire home/apt
\n",
195 | "
135$
\n",
196 | "
1
\n",
197 | "
2019-06-30
\n",
198 | "
1.00
\n",
199 | "
145
\n",
200 | "
3.367600
\n",
201 | "
1.2
\n",
202 | "
0.746135
\n",
203 | "
2018-12-25
\n",
204 | "
\n",
205 | "
\n",
206 | "
2
\n",
207 | "
21667615
\n",
208 | "
Beautiful 1BR in Brooklyn Heights
\n",
209 | "
78251
\n",
210 | "
Leslie
\n",
211 | "
Brooklyn, Brooklyn Heights
\n",
212 | "
(40.7007, -73.99517)
\n",
213 | "
Entire home/apt
\n",
214 | "
150$
\n",
215 | "
0
\n",
216 | "
NaN
\n",
217 | "
NaN
\n",
218 | "
65
\n",
219 | "
NaN
\n",
220 | "
NaN
\n",
221 | "
NaN
\n",
222 | "
2018-08-15
\n",
223 | "
\n",
224 | "
\n",
225 | "
3
\n",
226 | "
6425850
\n",
227 | "
Spacious, charming studio
\n",
228 | "
32715865
\n",
229 | "
Yelena
\n",
230 | "
Manhattan, Upper West Side
\n",
231 | "
(40.79169, -73.97498)
\n",
232 | "
Entire home/apt
\n",
233 | "
86$
\n",
234 | "
5
\n",
235 | "
2017-09-23
\n",
236 | "
0.13
\n",
237 | "
0
\n",
238 | "
4.763203
\n",
239 | "
6.0
\n",
240 | "
0.769947
\n",
241 | "
2017-03-20
\n",
242 | "
\n",
243 | "
\n",
244 | "
4
\n",
245 | "
22986519
\n",
246 | "
Bedroom on the lively Lower East Side
\n",
247 | "
154262349
\n",
248 | "
Brooke
\n",
249 | "
Manhattan, Lower East Side
\n",
250 | "
(40.71884, -73.98354)
\n",
251 | "
Private room
\n",
252 | "
160$
\n",
253 | "
23
\n",
254 | "
2019-06-12
\n",
255 | "
2.29
\n",
256 | "
102
\n",
257 | "
3.822591
\n",
258 | "
27.6
\n",
259 | "
0.649383
\n",
260 | "
2020-10-23
\n",
261 | "
\n",
262 | " \n",
263 | "
\n",
264 | "
"
265 | ],
266 | "text/plain": [
267 | " listing_id ... listing_added\n",
268 | "0 13740704 ... 2018-06-08\n",
269 | "1 22005115 ... 2018-12-25\n",
270 | "2 21667615 ... 2018-08-15\n",
271 | "3 6425850 ... 2017-03-20\n",
272 | "4 22986519 ... 2020-10-23\n",
273 | "\n",
274 | "[5 rows x 16 columns]"
275 | ]
276 | },
277 | "metadata": {
278 | "tags": []
279 | },
280 | "execution_count": 4
281 | }
282 | ]
283 | },
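{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the data loaded, here is a minimal sketch of the kind of plot this session builds towards. It assumes the `price` column holds strings such as `45$`, as in the header above; `price_clean` is just an illustrative column name."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Minimal sketch: tidy the price column and draw a quick seaborn plot\n",
"# (assumes price values look like '45$', as in the header above)\n",
"airbnb['price_clean'] = airbnb['price'].str.strip('$').astype(float)\n",
"\n",
"# Distribution of nightly price by room type\n",
"sns.boxplot(x='room_type', y='price_clean', data=airbnb)\n",
"plt.title('Price per night by room type')\n",
"plt.show()"
],
"execution_count": 0,
"outputs": []
},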
284 | {
285 | "cell_type": "markdown",
286 | "metadata": {
287 | "id": "e_VZpFt7XfzO",
288 | "colab_type": "text"
289 | },
290 | "source": [
291 | "### **Examples on use of markdown**\n",
292 | "\n",
293 | "#### **Images**\n",
294 | "\n",
295 | "To add images, gifs, or other assets of that kind, make sure to use the HTML `` function as in the following \n",
296 | "```\n",
297 | "
\n",
298 | "\n",
299 | "
\n",
300 | "
\n",
301 | "```\n",
302 | "\n",
303 | "- The `align` argument takes in `\"center\"`, `\"left\"`, `\"right\"`.\n",
304 | "- The `src` argument takes in the raw link of your image.\n",
305 | "- The `width` argument takes in a percentage, where `100%` is the original size of the image. \n",
306 | "\n",
307 | "\n",
308 | "#### **Formulas**\n",
309 | "\n",
310 | "To use formulas, feel free to use Latex Notation as such:\n",
311 | "\n",
312 | "$y = ax + b$\n",
313 | "\n",
314 | "You can even use color schemes like in this example, where coefficients are colored in red\n",
315 | "\n",
316 | "$y = \\color{red}a x + \\color{red}b$\n",
317 | "\n",
318 | "#### **Changing font color and size**\n",
319 | "\n",
320 | "To change or highlight specific texts in a color, you can use the following\n",
321 | "\n",
322 | "```\n",
323 | "**Example text**\n",
324 | "```\n",
325 | "\n",
326 | "Where the results will look like **Example text**.\n",
327 | "\n",
328 | "- The `color` argument takes in a HEX code for your color. "
329 | ]
330 | }
331 | ]
332 | }
333 |
--------------------------------------------------------------------------------