\n",
204 | ""
205 | ]
206 | },
207 | {
208 | "cell_type": "markdown",
209 | "metadata": {},
210 | "source": [
211 | "At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).\n",
212 | "\n",
213 | "In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem."
214 | ]
215 | },
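{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a sense of scale: flattening one of these images gives 3 * 224 * 224 = 150,528 input features, so even a single 256-unit hidden layer needs roughly 38.5 million weights. If you want to try anyway, a minimal fully-connected sketch (assuming `from torch import nn`, which isn't imported above) could start like this:\n",
"\n",
"```python\n",
"model = nn.Sequential(nn.Linear(3*224*224, 256),\n",
"                      nn.ReLU(),\n",
"                      nn.Linear(256, 2),\n",
"                      nn.LogSoftmax(dim=1))\n",
"# Remember to flatten each batch first: images.view(images.shape[0], -1)\n",
"```"
]
},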
216 | {
217 | "cell_type": "code",
218 | "execution_count": null,
219 | "metadata": {},
220 | "outputs": [],
221 | "source": [
222 | "# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset"
223 | ]
224 | }
225 | ],
226 | "metadata": {
227 | "kernelspec": {
228 | "display_name": "Python 3",
229 | "language": "python",
230 | "name": "python3"
231 | },
232 | "language_info": {
233 | "codemirror_mode": {
234 | "name": "ipython",
235 | "version": 3
236 | },
237 | "file_extension": ".py",
238 | "mimetype": "text/x-python",
239 | "name": "python",
240 | "nbconvert_exporter": "python",
241 | "pygments_lexer": "ipython3",
242 | "version": "3.6.6"
243 | }
244 | },
245 | "nbformat": 4,
246 | "nbformat_minor": 2
247 | }
248 |
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/Part 7 - Loading Image Data (Solution).ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Loading Image Data\n",
8 | "\n",
9 | "So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.\n",
10 | "\n",
11 | "We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:\n",
12 | "\n",
13 | "\n",
14 | "\n",
15 | "We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems."
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": null,
21 | "metadata": {},
22 | "outputs": [],
23 | "source": [
24 | "%matplotlib inline\n",
25 | "%config InlineBackend.figure_format = 'retina'\n",
26 | "\n",
27 | "import matplotlib.pyplot as plt\n",
28 | "\n",
29 | "import torch\n",
30 | "from torchvision import datasets, transforms\n",
31 | "\n",
32 | "import helper"
33 | ]
34 | },
35 | {
36 | "cell_type": "markdown",
37 | "metadata": {},
38 | "source": [
39 | "The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:\n",
40 | "\n",
41 | "```python\n",
42 | "dataset = datasets.ImageFolder('path/to/data', transform=transform)\n",
43 | "```\n",
44 | "\n",
45 | "where `'path/to/data'` is the file path to the data directory and `transform` is a sequence of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:\n",
46 | "```\n",
47 | "root/dog/xxx.png\n",
48 | "root/dog/xxy.png\n",
49 | "root/dog/xxz.png\n",
50 | "\n",
51 | "root/cat/123.png\n",
52 | "root/cat/nsdf3.png\n",
53 | "root/cat/asd932_.png\n",
54 | "```\n",
55 | "\n",
56 | "where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set.\n",
57 | "\n",
58 | "### Transforms\n",
59 | "\n",
60 | "When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:\n",
61 | "\n",
62 | "```python\n",
63 | "transform = transforms.Compose([transforms.Resize(255),\n",
64 | " transforms.CenterCrop(224),\n",
65 | " transforms.ToTensor()])\n",
66 | "\n",
67 | "```\n",
68 | "\n",
69 | "There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). \n",
70 | "\n",
71 | "### Data Loaders\n",
72 | "\n",
73 | "With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.\n",
74 | "\n",
75 | "```python\n",
76 | "dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)\n",
77 | "```\n",
78 | "\n",
79 | "Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.\n",
80 | "\n",
81 | "```python\n",
82 | "# Looping through it, get a batch on each loop \n",
83 | "for images, labels in dataloader:\n",
84 | " pass\n",
85 | "\n",
86 | "# Get one batch\n",
87 | "images, labels = next(iter(dataloader))\n",
88 | "```\n",
89 | " \n",
90 | ">**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader."
91 | ]
92 | },
93 | {
94 | "cell_type": "code",
95 | "execution_count": null,
96 | "metadata": {},
97 | "outputs": [],
98 | "source": [
99 | "data_dir = 'Cat_Dog_data/train'\n",
100 | "\n",
101 | "transform = transforms.Compose([transforms.Resize(255),\n",
102 | " transforms.CenterCrop(224),\n",
103 | " transforms.ToTensor()])\n",
104 | "dataset = datasets.ImageFolder(data_dir, transform=transform)\n",
105 | "dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)"
106 | ]
107 | },
108 | {
109 | "cell_type": "code",
110 | "execution_count": null,
111 | "metadata": {},
112 | "outputs": [],
113 | "source": [
114 | "# Run this to test your data loader\n",
115 | "images, labels = next(iter(dataloader))\n",
116 | "helper.imshow(images[0], normalize=False)"
117 | ]
118 | },
119 | {
120 | "cell_type": "markdown",
121 | "metadata": {},
122 | "source": [
123 | "If you loaded the data correctly, you should see something like this (your image will be different):\n",
124 | "\n",
125 | ""
126 | ]
127 | },
128 | {
129 | "cell_type": "markdown",
130 | "metadata": {},
131 | "source": [
132 | "## Data Augmentation\n",
133 | "\n",
134 | "A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.\n",
135 | "\n",
136 | "To randomly rotate, scale and crop, then flip your images you would define your transforms like this:\n",
137 | "\n",
138 | "```python\n",
139 | "train_transforms = transforms.Compose([transforms.RandomRotation(30),\n",
140 | " transforms.RandomResizedCrop(224),\n",
141 | " transforms.RandomHorizontalFlip(),\n",
142 | " transforms.ToTensor(),\n",
143 | " transforms.Normalize([0.5, 0.5, 0.5], \n",
144 | " [0.5, 0.5, 0.5])])\n",
145 | "```\n",
146 | "\n",
147 | "You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so\n",
148 | "\n",
149 | "```input[channel] = (input[channel] - mean[channel]) / std[channel]```\n",
150 | "\n",
151 | "Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.\n",
152 | "\n",
153 | "You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered other than normalizing. So, for validation/test images, you'll typically just resize and crop.\n",
154 | "\n",
155 | ">**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now."
156 | ]
157 | },
158 | {
159 | "cell_type": "code",
160 | "execution_count": null,
161 | "metadata": {},
162 | "outputs": [],
163 | "source": [
164 | "data_dir = 'Cat_Dog_data'\n",
165 | "\n",
166 | "# TODO: Define transforms for the training data and testing data\n",
167 | "train_transforms = transforms.Compose([transforms.RandomRotation(30),\n",
168 | " transforms.RandomResizedCrop(224),\n",
169 | " transforms.RandomHorizontalFlip(),\n",
170 | " transforms.ToTensor()]) \n",
171 | "\n",
172 | "test_transforms = transforms.Compose([transforms.Resize(255),\n",
173 | " transforms.CenterCrop(224),\n",
174 | " transforms.ToTensor()])\n",
175 | "\n",
176 | "\n",
177 | "# Pass transforms in here, then run the next cell to see how the transforms look\n",
178 | "train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)\n",
179 | "test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)\n",
180 | "\n",
181 | "trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)\n",
182 | "testloader = torch.utils.data.DataLoader(test_data, batch_size=32)"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": null,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "# change this to the trainloader or testloader \n",
192 | "data_iter = iter(testloader)\n",
193 | "\n",
194 | "images, labels = next(data_iter)\n",
195 | "fig, axes = plt.subplots(figsize=(10,4), ncols=4)\n",
196 | "for ii in range(4):\n",
197 | " ax = axes[ii]\n",
198 | " helper.imshow(images[ii], ax=ax, normalize=False)"
199 | ]
200 | },
201 | {
202 | "cell_type": "markdown",
203 | "metadata": {},
204 | "source": [
205 | "Your transformed images should look something like this.\n",
206 | "\n",
207 | "
Training examples:
\n",
208 | "\n",
209 | "\n",
210 | "
Testing examples:
\n",
211 | ""
212 | ]
213 | },
214 | {
215 | "cell_type": "markdown",
216 | "metadata": {},
217 | "source": [
218 | "At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).\n",
219 | "\n",
220 | "In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem."
221 | ]
222 | },
223 | {
224 | "cell_type": "code",
225 | "execution_count": null,
226 | "metadata": {},
227 | "outputs": [],
228 | "source": [
229 | "# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset"
230 | ]
231 | }
232 | ],
233 | "metadata": {
234 | "kernelspec": {
235 | "display_name": "Python 3",
236 | "language": "python",
237 | "name": "python3"
238 | },
239 | "language_info": {
240 | "codemirror_mode": {
241 | "name": "ipython",
242 | "version": 3
243 | },
244 | "file_extension": ".py",
245 | "mimetype": "text/x-python",
246 | "name": "python",
247 | "nbconvert_exporter": "python",
248 | "pygments_lexer": "ipython3",
249 | "version": "3.6.6"
250 | }
251 | },
252 | "nbformat": 4,
253 | "nbformat_minor": 2
254 | }
255 |
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/Part 8 - Transfer Learning (Exercises).ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Transfer Learning\n",
8 | "\n",
9 | "In this notebook, you'll learn how to use pre-trained networks to solved challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html). \n",
10 | "\n",
11 | "ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture called convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).\n",
12 | "\n",
13 | "Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.\n",
14 | "\n",
15 | "With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now."
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": null,
21 | "metadata": {},
22 | "outputs": [],
23 | "source": [
24 | "%matplotlib inline\n",
25 | "%config InlineBackend.figure_format = 'retina'\n",
26 | "\n",
27 | "import matplotlib.pyplot as plt\n",
28 | "\n",
29 | "import torch\n",
30 | "from torch import nn\n",
31 | "from torch import optim\n",
32 | "import torch.nn.functional as F\n",
33 | "from torchvision import datasets, transforms, models"
34 | ]
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "metadata": {},
39 | "source": [
40 | "Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`."
41 | ]
42 | },
43 | {
44 | "cell_type": "code",
45 | "execution_count": null,
46 | "metadata": {},
47 | "outputs": [],
48 | "source": [
49 | "data_dir = 'assets/Cat_Dog_data'\n",
50 | "\n",
51 | "# TODO: Define transforms for the training data and testing data\n",
52 | "train_transforms =\n",
53 | "\n",
54 | "test_transforms =\n",
55 | "\n",
56 | "# Pass transforms in here, then run the next cell to see how the transforms look\n",
57 | "train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)\n",
58 | "test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)\n",
59 | "\n",
60 | "trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)\n",
61 | "testloader = torch.utils.data.DataLoader(test_data, batch_size=64)"
62 | ]
63 | },
64 | {
65 | "cell_type": "markdown",
66 | "metadata": {},
67 | "source": [
68 | "We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on."
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": null,
74 | "metadata": {
75 | "scrolled": true
76 | },
77 | "outputs": [],
78 | "source": [
79 | "model = models.densenet121(pretrained=True)\n",
80 | "model"
81 | ]
82 | },
83 | {
84 | "cell_type": "markdown",
85 | "metadata": {},
86 | "source": [
87 | "This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers."
88 | ]
89 | },
90 | {
91 | "cell_type": "code",
92 | "execution_count": null,
93 | "metadata": {},
94 | "outputs": [],
95 | "source": [
96 | "# Freeze parameters so we don't backprop through them\n",
97 | "for param in model.parameters():\n",
98 | " param.requires_grad = False\n",
99 | "\n",
100 | "from collections import OrderedDict\n",
101 | "classifier = nn.Sequential(OrderedDict([\n",
102 | " ('fc1', nn.Linear(1024, 500)),\n",
103 | " ('relu', nn.ReLU()),\n",
104 | " ('fc2', nn.Linear(500, 2)),\n",
105 | " ('output', nn.LogSoftmax(dim=1))\n",
106 | " ]))\n",
107 | " \n",
108 | "model.classifier = classifier"
109 | ]
110 | },
111 | {
112 | "cell_type": "markdown",
113 | "metadata": {},
114 | "source": [
115 | "With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.\n",
116 | "\n",
117 | "PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU."
118 | ]
119 | },
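{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, moving a result back to the CPU for NumPy operations might look like this (a sketch; `inputs` stands in for a batch of images from the loader):\n",
"\n",
"```python\n",
"preds = model(inputs.to('cuda'))          # forward pass on the GPU\n",
"preds = preds.to('cpu').detach().numpy()  # back to CPU, then to a NumPy array\n",
"```"
]
},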
120 | {
121 | "cell_type": "code",
122 | "execution_count": null,
123 | "metadata": {},
124 | "outputs": [],
125 | "source": [
126 | "import time"
127 | ]
128 | },
129 | {
130 | "cell_type": "code",
131 | "execution_count": null,
132 | "metadata": {},
133 | "outputs": [],
134 | "source": [
135 | "for device in ['cpu', 'cuda']:\n",
136 | "\n",
137 | " criterion = nn.NLLLoss()\n",
138 | " # Only train the classifier parameters, feature parameters are frozen\n",
139 | " optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)\n",
140 | "\n",
141 | " model.to(device)\n",
142 | "\n",
143 | " for ii, (inputs, labels) in enumerate(trainloader):\n",
144 | "\n",
145 | " # Move input and label tensors to the GPU\n",
146 | " inputs, labels = inputs.to(device), labels.to(device)\n",
147 | "\n",
148 | " start = time.time()\n",
149 | "\n",
150 | " outputs = model.forward(inputs)\n",
151 | " loss = criterion(outputs, labels)\n",
152 | "\n",
153 | " optimizer.zero_grad()\n",
154 | " loss.backward()\n",
155 | " optimizer.step()\n",
156 | "\n",
157 | " if ii==3:\n",
158 | " break\n",
159 | " \n",
160 | " print(f\"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds\")"
161 | ]
162 | },
163 | {
164 | "cell_type": "markdown",
165 | "metadata": {},
166 | "source": [
167 | "You can write device agnostic code which will automatically use CUDA if it's enabled like so:\n",
168 | "```python\n",
169 | "# at beginning of the script\n",
170 | "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
171 | "\n",
172 | "...\n",
173 | "\n",
174 | "# then whenever you get a new Tensor or Module\n",
175 | "# this won't copy if they are already on the desired device\n",
176 | "input = data.to(device)\n",
177 | "model = MyModule(...).to(device)\n",
178 | "```\n",
179 | "\n",
180 | "From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.\n",
181 | "\n",
182 | ">**Exercise:** Train a pretrained models to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, it's also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen."
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": null,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": [
191 | "## TODO: Use a pretrained model to classify the cat and dog images"
192 | ]
193 | }
194 | ],
195 | "metadata": {
196 | "kernelspec": {
197 | "display_name": "Python 3",
198 | "language": "python",
199 | "name": "python3"
200 | },
201 | "language_info": {
202 | "codemirror_mode": {
203 | "name": "ipython",
204 | "version": 3
205 | },
206 | "file_extension": ".py",
207 | "mimetype": "text/x-python",
208 | "name": "python",
209 | "nbconvert_exporter": "python",
210 | "pygments_lexer": "ipython3",
211 | "version": "3.6.6"
212 | }
213 | },
214 | "nbformat": 4,
215 | "nbformat_minor": 2
216 | }
217 |
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/Part 8 - Transfer Learning (Solution).ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Transfer Learning\n",
8 | "\n",
9 | "In this notebook, you'll learn how to use pre-trained networks to solved challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html). \n",
10 | "\n",
11 | "ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture called convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).\n",
12 | "\n",
13 | "Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.\n",
14 | "\n",
15 | "With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now."
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": null,
21 | "metadata": {},
22 | "outputs": [],
23 | "source": [
24 | "%matplotlib inline\n",
25 | "%config InlineBackend.figure_format = 'retina'\n",
26 | "\n",
27 | "import matplotlib.pyplot as plt\n",
28 | "\n",
29 | "import torch\n",
30 | "from torch import nn\n",
31 | "from torch import optim\n",
32 | "import torch.nn.functional as F\n",
33 | "from torchvision import datasets, transforms, models"
34 | ]
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "metadata": {},
39 | "source": [
40 | "Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`."
41 | ]
42 | },
43 | {
44 | "cell_type": "code",
45 | "execution_count": null,
46 | "metadata": {},
47 | "outputs": [],
48 | "source": [
49 | "data_dir = 'Cat_Dog_data'\n",
50 | "\n",
51 | "# TODO: Define transforms for the training data and testing data\n",
52 | "train_transforms = transforms.Compose([transforms.RandomRotation(30),\n",
53 | " transforms.RandomResizedCrop(224),\n",
54 | " transforms.RandomHorizontalFlip(),\n",
55 | " transforms.ToTensor(),\n",
56 | " transforms.Normalize([0.485, 0.456, 0.406],\n",
57 | " [0.229, 0.224, 0.225])])\n",
58 | "\n",
59 | "test_transforms = transforms.Compose([transforms.Resize(255),\n",
60 | " transforms.CenterCrop(224),\n",
61 | " transforms.ToTensor(),\n",
62 | " transforms.Normalize([0.485, 0.456, 0.406],\n",
63 | " [0.229, 0.224, 0.225])])\n",
64 | "\n",
65 | "# Pass transforms in here, then run the next cell to see how the transforms look\n",
66 | "train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)\n",
67 | "test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)\n",
68 | "\n",
69 | "trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)\n",
70 | "testloader = torch.utils.data.DataLoader(test_data, batch_size=64)"
71 | ]
72 | },
73 | {
74 | "cell_type": "markdown",
75 | "metadata": {},
76 | "source": [
77 | "We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on."
78 | ]
79 | },
80 | {
81 | "cell_type": "code",
82 | "execution_count": null,
83 | "metadata": {
84 | "scrolled": true
85 | },
86 | "outputs": [],
87 | "source": [
88 | "model = models.densenet121(pretrained=True)\n",
89 | "model"
90 | ]
91 | },
92 | {
93 | "cell_type": "markdown",
94 | "metadata": {},
95 | "source": [
96 | "This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers."
97 | ]
98 | },
99 | {
100 | "cell_type": "code",
101 | "execution_count": null,
102 | "metadata": {},
103 | "outputs": [],
104 | "source": [
105 | "# Freeze parameters so we don't backprop through them\n",
106 | "for param in model.parameters():\n",
107 | " param.requires_grad = False\n",
108 | "\n",
109 | "from collections import OrderedDict\n",
110 | "classifier = nn.Sequential(OrderedDict([\n",
111 | " ('fc1', nn.Linear(1024, 500)),\n",
112 | " ('relu', nn.ReLU()),\n",
113 | " ('fc2', nn.Linear(500, 2)),\n",
114 | " ('output', nn.LogSoftmax(dim=1))\n",
115 | " ]))\n",
116 | " \n",
117 | "model.classifier = classifier"
118 | ]
119 | },
120 | {
121 | "cell_type": "markdown",
122 | "metadata": {},
123 | "source": [
124 | "With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.\n",
125 | "\n",
126 | "PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU."
127 | ]
128 | },
129 | {
130 | "cell_type": "code",
131 | "execution_count": null,
132 | "metadata": {},
133 | "outputs": [],
134 | "source": [
135 | "import time"
136 | ]
137 | },
138 | {
139 | "cell_type": "code",
140 | "execution_count": null,
141 | "metadata": {},
142 | "outputs": [],
143 | "source": [
144 | "for device in ['cpu', 'cuda']:\n",
145 | "\n",
146 | " criterion = nn.NLLLoss()\n",
147 | " # Only train the classifier parameters, feature parameters are frozen\n",
148 | " optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)\n",
149 | "\n",
150 | " model.to(device)\n",
151 | "\n",
152 | " for ii, (inputs, labels) in enumerate(trainloader):\n",
153 | "\n",
154 | " # Move input and label tensors to the GPU\n",
155 | " inputs, labels = inputs.to(device), labels.to(device)\n",
156 | "\n",
157 | " start = time.time()\n",
158 | "\n",
159 | " outputs = model.forward(inputs)\n",
160 | " loss = criterion(outputs, labels)\n",
161 | "\n",
162 | " optimizer.zero_grad()\n",
163 | " loss.backward()\n",
164 | " optimizer.step()\n",
165 | "\n",
166 | " if ii==3:\n",
167 | " break\n",
168 | " \n",
169 | " print(f\"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds\")"
170 | ]
171 | },
172 | {
173 | "cell_type": "markdown",
174 | "metadata": {},
175 | "source": [
176 | "You can write device agnostic code which will automatically use CUDA if it's enabled like so:\n",
177 | "```python\n",
178 | "# at beginning of the script\n",
179 | "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
180 | "\n",
181 | "...\n",
182 | "\n",
183 | "# then whenever you get a new Tensor or Module\n",
184 | "# this won't copy if they are already on the desired device\n",
185 | "input = data.to(device)\n",
186 | "model = MyModule(...).to(device)\n",
187 | "```\n",
188 | "\n",
189 | "From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.\n",
190 | "\n",
191 | ">**Exercise:** Train a pretrained models to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, it's also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen."
192 | ]
193 | },
194 | {
195 | "cell_type": "code",
196 | "execution_count": null,
197 | "metadata": {},
198 | "outputs": [],
199 | "source": [
200 | "# Use GPU if it's available\n",
201 | "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
202 | "\n",
203 | "model = models.densenet121(pretrained=True)\n",
204 | "\n",
205 | "# Freeze parameters so we don't backprop through them\n",
206 | "for param in model.parameters():\n",
207 | " param.requires_grad = False\n",
208 | " \n",
209 | "model.classifier = nn.Sequential(nn.Linear(1024, 256),\n",
210 | " nn.ReLU(),\n",
211 | " nn.Dropout(0.2),\n",
212 | " nn.Linear(256, 2),\n",
213 | " nn.LogSoftmax(dim=1))\n",
214 | "\n",
215 | "criterion = nn.NLLLoss()\n",
216 | "\n",
217 | "# Only train the classifier parameters, feature parameters are frozen\n",
218 | "optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)\n",
219 | "\n",
220 | "model.to(device);"
221 | ]
222 | },
223 | {
224 | "cell_type": "code",
225 | "execution_count": null,
226 | "metadata": {},
227 | "outputs": [],
228 | "source": [
229 | "epochs = 1\n",
230 | "steps = 0\n",
231 | "running_loss = 0\n",
232 | "print_every = 5\n",
233 | "for epoch in range(epochs):\n",
234 | " for inputs, labels in trainloader:\n",
235 | " steps += 1\n",
236 | " # Move input and label tensors to the default device\n",
237 | " inputs, labels = inputs.to(device), labels.to(device)\n",
238 | " \n",
239 | " logps = model.forward(inputs)\n",
240 | " loss = criterion(logps, labels)\n",
241 | " \n",
242 | " optimizer.zero_grad()\n",
243 | " loss.backward()\n",
244 | " optimizer.step()\n",
245 | "\n",
246 | " running_loss += loss.item()\n",
247 | " \n",
248 | " if steps % print_every == 0:\n",
249 | " test_loss = 0\n",
250 | " accuracy = 0\n",
251 | " model.eval()\n",
252 | " with torch.no_grad():\n",
253 | " for inputs, labels in testloader:\n",
254 | " inputs, labels = inputs.to(device), labels.to(device)\n",
255 | " logps = model.forward(inputs)\n",
256 | " batch_loss = criterion(logps, labels)\n",
257 | " \n",
258 | " test_loss += batch_loss.item()\n",
259 | " \n",
260 | " # Calculate accuracy\n",
261 | " ps = torch.exp(logps)\n",
262 | " top_p, top_class = ps.topk(1, dim=1)\n",
263 | " equals = top_class == labels.view(*top_class.shape)\n",
264 | " accuracy += torch.mean(equals.type(torch.FloatTensor)).item()\n",
265 | " \n",
266 | " print(f\"Epoch {epoch+1}/{epochs}.. \"\n",
267 | " f\"Train loss: {running_loss/print_every:.3f}.. \"\n",
268 | " f\"Test loss: {test_loss/len(testloader):.3f}.. \"\n",
269 | " f\"Test accuracy: {accuracy/len(testloader):.3f}\")\n",
270 | " running_loss = 0\n",
271 | " model.train()"
272 | ]
273 | }
274 | ],
275 | "metadata": {
276 | "kernelspec": {
277 | "display_name": "Python 3",
278 | "language": "python",
279 | "name": "python3"
280 | },
281 | "language_info": {
282 | "codemirror_mode": {
283 | "name": "ipython",
284 | "version": 3
285 | },
286 | "file_extension": ".py",
287 | "mimetype": "text/x-python",
288 | "name": "python",
289 | "nbconvert_exporter": "python",
290 | "pygments_lexer": "ipython3",
291 | "version": "3.6.6"
292 | }
293 | },
294 | "nbformat": 4,
295 | "nbformat_minor": 2
296 | }
297 |
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/README.md:
--------------------------------------------------------------------------------
1 | # Deep Learning with PyTorch
2 |
3 | This repo contains notebooks and related code for Udacity's Deep Learning with PyTorch lesson. This lesson appears in our [AI Programming with Python Nanodegree program](https://www.udacity.com/course/ai-programming-python-nanodegree--nd089).
4 |
5 | * **Part 1:** Introduction to PyTorch and using tensors
6 | * **Part 2:** Building fully-connected neural networks with PyTorch
7 | * **Part 3:** How to train a fully-connected network with backpropagation on MNIST
8 | * **Part 4:** Exercise - train a neural network on Fashion-MNIST
9 | * **Part 5:** Using a trained network for making predictions and validating networks
10 | * **Part 6:** How to save and load trained models
11 | * **Part 7:** Load image data with torchvision, also data augmentation
12 | * **Part 8:** Use transfer learning to train a state-of-the-art image classifier for dogs and cats
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/__pycache__/helper.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/__pycache__/helper.cpython-37.pyc
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/ImageNet_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/ImageNet_example.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/Pooling_Simple_max.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/Pooling_Simple_max.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/activation.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/activation.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/autoencoder_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/autoencoder_1.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/backprop_diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/backprop_diagram.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/cat.70.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/cat.70.jpg
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/cat_cropped.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/cat_cropped.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/conv_net.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/conv_net.jpg
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/dog.128.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/dog.128.jpg
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/dog_cat.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/dog_cat.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/examples_new.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/examples_new.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/fashion-mnist-sprite.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/fashion-mnist-sprite.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/full_padding_no_strides_transposed.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/full_padding_no_strides_transposed.gif
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/function_approx.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/function_approx.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/gradient_descent.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/gradient_descent.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/image_distribution.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/image_distribution.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/infographic.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/infographic.pdf
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/lenet.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/lenet.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/mlp_mnist.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/mlp_mnist.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/mnist.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/mnist.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/multilayer_diagram_weights.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/multilayer_diagram_weights.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/network_diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/network_diagram.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/overfitting.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/overfitting.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/padding_strides.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/padding_strides.gif
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/simple_neuron.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/simple_neuron.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/test_examples.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/test_examples.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/train_examples.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/train_examples.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/assets/w1_backprop_graph.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/deep-learning-with-pytorch/assets/w1_backprop_graph.png
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/fc_model.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import nn
3 | import torch.nn.functional as F
4 |
5 |
6 | class Network(nn.Module):
7 | def __init__(self, input_size, output_size, hidden_layers, drop_p=0.5):
8 | ''' Builds a feedforward network with arbitrary hidden layers.
9 |
10 | Arguments
11 | ---------
12 | input_size: integer, size of the input layer
13 | output_size: integer, size of the output layer
14 |             hidden_layers: list of integers, the sizes of the hidden layers
15 |             drop_p: float between 0 and 1, dropout probability
16 | '''
17 | super().__init__()
18 | # Input to a hidden layer
19 | self.hidden_layers = nn.ModuleList([nn.Linear(input_size, hidden_layers[0])])
20 |
21 | # Add a variable number of more hidden layers
22 | layer_sizes = zip(hidden_layers[:-1], hidden_layers[1:])
23 | self.hidden_layers.extend([nn.Linear(h1, h2) for h1, h2 in layer_sizes])
24 |
25 | self.output = nn.Linear(hidden_layers[-1], output_size)
26 |
27 | self.dropout = nn.Dropout(p=drop_p)
28 |
29 | def forward(self, x):
30 | ''' Forward pass through the network, returns the output logits '''
31 |
32 | for each in self.hidden_layers:
33 | x = F.relu(each(x))
34 | x = self.dropout(x)
35 | x = self.output(x)
36 |
37 | return F.log_softmax(x, dim=1)
38 |
39 |
40 | def validation(model, testloader, criterion):
41 | accuracy = 0
42 | test_loss = 0
43 | for images, labels in testloader:
44 |
45 | images = images.resize_(images.size()[0], 784)
46 |
47 | output = model.forward(images)
48 | test_loss += criterion(output, labels).item()
49 |
50 | ## Calculating the accuracy
51 | # Model's output is log-softmax, take exponential to get the probabilities
52 | ps = torch.exp(output)
53 | # Class with highest probability is our predicted class, compare with true label
54 | equality = (labels.data == ps.max(1)[1])
55 | # Accuracy is number of correct predictions divided by all predictions, just take the mean
56 | accuracy += equality.type_as(torch.FloatTensor()).mean()
57 |
58 | return test_loss, accuracy
59 |
60 |
61 | def train(model, trainloader, testloader, criterion, optimizer, epochs=5, print_every=40):
62 |
63 | steps = 0
64 | running_loss = 0
65 | for e in range(epochs):
66 | # Model in training mode, dropout is on
67 | model.train()
68 | for images, labels in trainloader:
69 | steps += 1
70 |
71 | # Flatten images into a 784 long vector
72 | images.resize_(images.size()[0], 784)
73 |
74 | optimizer.zero_grad()
75 |
76 | output = model.forward(images)
77 | loss = criterion(output, labels)
78 | loss.backward()
79 | optimizer.step()
80 |
81 | running_loss += loss.item()
82 |
83 | if steps % print_every == 0:
84 | # Model in inference mode, dropout is off
85 | model.eval()
86 |
87 | # Turn off gradients for validation, will speed up inference
88 | with torch.no_grad():
89 | test_loss, accuracy = validation(model, testloader, criterion)
90 |
91 | print("Epoch: {}/{}.. ".format(e+1, epochs),
92 | "Training Loss: {:.3f}.. ".format(running_loss/print_every),
93 | "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
94 | "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
95 |
96 | running_loss = 0
97 |
98 | # Make sure dropout and grads are on for training
99 | model.train()
100 |
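# A minimal usage sketch (illustrative; assumes `from torch import optim` and
# MNIST-style loaders whose images can be flattened to 784 pixels):
#
#   model = Network(784, 10, [512, 256, 128], drop_p=0.5)
#   criterion = nn.NLLLoss()
#   optimizer = optim.Adam(model.parameters(), lr=0.001)
#   train(model, trainloader, testloader, criterion, optimizer, epochs=2)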
--------------------------------------------------------------------------------
/deep-learning-with-pytorch/helper.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 | from torch import nn, optim
4 | from torch.autograd import Variable
5 |
6 |
7 | def test_network(net, trainloader):
8 |
9 | criterion = nn.MSELoss()
10 | optimizer = optim.Adam(net.parameters(), lr=0.001)
11 |
12 | dataiter = iter(trainloader)
13 |     images, labels = next(dataiter)
14 |
15 | # Create Variables for the inputs and targets
16 | inputs = Variable(images)
17 | targets = Variable(images)
18 |
19 | # Clear the gradients from all Variables
20 | optimizer.zero_grad()
21 |
22 | # Forward pass, then backward pass, then update weights
23 | output = net.forward(inputs)
24 | loss = criterion(output, targets)
25 | loss.backward()
26 | optimizer.step()
27 |
28 | return True
29 |
30 |
31 | def imshow(image, ax=None, title=None, normalize=True):
32 | """Imshow for Tensor."""
33 | if ax is None:
34 | fig, ax = plt.subplots()
35 | image = image.numpy().transpose((1, 2, 0))
36 |
37 | if normalize:
38 | mean = np.array([0.485, 0.456, 0.406])
39 | std = np.array([0.229, 0.224, 0.225])
40 | image = std * image + mean
41 | image = np.clip(image, 0, 1)
42 |
43 | ax.imshow(image)
44 | ax.spines['top'].set_visible(False)
45 | ax.spines['right'].set_visible(False)
46 | ax.spines['left'].set_visible(False)
47 | ax.spines['bottom'].set_visible(False)
48 | ax.tick_params(axis='both', length=0)
49 | ax.set_xticklabels('')
50 | ax.set_yticklabels('')
51 |
52 | return ax
53 |
54 |
55 | def view_recon(img, recon):
56 | ''' Function for displaying an image (as a PyTorch Tensor) and its
57 | reconstruction also a PyTorch Tensor
58 | '''
59 |
60 | fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True)
61 | axes[0].imshow(img.numpy().squeeze())
62 | axes[1].imshow(recon.data.numpy().squeeze())
63 | for ax in axes:
64 | ax.axis('off')
65 |         ax.set_adjustable('box')
66 |
67 | def view_classify(img, ps, version="MNIST"):
68 |     ''' Function for viewing an image and its predicted classes.
69 | '''
70 | ps = ps.data.numpy().squeeze()
71 |
72 | fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2)
73 | ax1.imshow(img.resize_(1, 28, 28).numpy().squeeze())
74 | ax1.axis('off')
75 | ax2.barh(np.arange(10), ps)
76 | ax2.set_aspect(0.1)
77 | ax2.set_yticks(np.arange(10))
78 | if version == "MNIST":
79 | ax2.set_yticklabels(np.arange(10))
80 | elif version == "Fashion":
81 | ax2.set_yticklabels(['T-shirt/top',
82 | 'Trouser',
83 | 'Pullover',
84 | 'Dress',
85 | 'Coat',
86 | 'Sandal',
87 | 'Shirt',
88 | 'Sneaker',
89 | 'Bag',
90 | 'Ankle Boot'], size='small');
91 | ax2.set_title('Class Probability')
92 | ax2.set_xlim(0, 1.1)
93 |
94 | plt.tight_layout()
95 |
--------------------------------------------------------------------------------
/gradient-descent/GradientDescent.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Implementing the Gradient Descent Algorithm\n",
8 | "\n",
9 | "In this notebook, you'll be implementing the functions that build the gradient descent algorithm, namely:\n",
10 | "\n",
11 | "* `sigmoid`: The sigmoid activation function.\n",
12 | "* `output_formula`: The formula for the prediction.\n",
13 | "* `error_formula`: The formula for the error at a point.\n",
14 | "* `update_weights`: The function that updates the parameters with one gradient descent step.\n",
15 | "\n",
16 | "Your goal is to find the boundary on a small dataset that has two classes:\n",
17 | "\n",
18 | "\n",
19 | "\n",
20 | "After you implement the gradient descent functions, be sure to run the `train` function. This will graph several of the lines that are drawn in successive gradient descent steps. It will also graph the error function, and you'll be able to see it decreasing as the number of epochs grows.\n",
21 | "\n",
22 | "First, we'll start with some functions that will help us plot and visualize the data."
23 | ]
24 | },
25 | {
26 | "cell_type": "code",
27 | "execution_count": null,
28 | "metadata": {},
29 | "outputs": [],
30 | "source": [
31 | "import matplotlib.pyplot as plt\n",
32 | "import numpy as np\n",
33 | "import pandas as pd\n",
34 | "\n",
35 | "#Some helper functions for plotting and drawing lines\n",
36 | "\n",
37 | "def plot_points(X, y):\n",
38 | " admitted = X[np.argwhere(y==1)]\n",
39 | " rejected = X[np.argwhere(y==0)]\n",
40 | " plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')\n",
41 | " plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')\n",
42 | "\n",
43 | "def display(m, b, color='g--'):\n",
44 | " plt.xlim(-0.05,1.05)\n",
45 | " plt.ylim(-0.05,1.05)\n",
46 | " x = np.arange(-10, 10, 0.1)\n",
47 | " plt.plot(x, m*x+b, color)"
48 | ]
49 | },
50 | {
51 | "cell_type": "markdown",
52 | "metadata": {},
53 | "source": [
54 | "## Reading and plotting the data"
55 | ]
56 | },
57 | {
58 | "cell_type": "code",
59 | "execution_count": null,
60 | "metadata": {},
61 | "outputs": [],
62 | "source": [
63 | "data = pd.read_csv('data.csv', header=None)\n",
64 | "X = np.array(data[[0,1]])\n",
65 | "y = np.array(data[2])\n",
66 | "plot_points(X,y)\n",
67 | "plt.show()"
68 | ]
69 | },
70 | {
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "## TODO: Implementing the basic functions\n",
75 | "Here is your turn to shine. Implement the following formulas, as explained in the text.\n",
76 | "- Sigmoid activation function\n",
77 | "\n",
78 | "$$\\sigma(x) = \\frac{1}{1+e^{-x}}$$\n",
79 | "\n",
80 | "- Output (prediction) formula\n",
81 | "\n",
82 | "$$\\hat{y} = \\sigma(w_1 x_1 + w_2 x_2 + b)$$\n",
83 | "\n",
84 | "- Error function\n",
85 | "\n",
86 | "$$Error(y, \\hat{y}) = - y \\log(\\hat{y}) - (1-y) \\log(1-\\hat{y})$$\n",
87 | "\n",
88 | "- The function that updates the weights\n",
89 | "\n",
90 | "$$ w_i \\longrightarrow w_i + \\alpha (y - \\hat{y}) x_i$$\n",
91 | "\n",
92 | "$$ b \\longrightarrow b + \\alpha (y - \\hat{y})$$"
93 | ]
94 | },
95 | {
96 | "cell_type": "code",
97 | "execution_count": null,
98 | "metadata": {},
99 | "outputs": [],
100 | "source": [
101 | "# Activation (sigmoid) function\n",
102 | "def sigmoid(x):\n",
103 | " pass\n",
104 | "\n",
105 | "# Output (prediction) formula\n",
106 | "def output_formula(features, weights, bias):\n",
107 | " pass\n",
108 | "\n",
109 | "# Error (log-loss) formula\n",
110 | "def error_formula(y, output):\n",
111 | " pass\n",
112 | "\n",
113 | "# Gradient descent step\n",
114 | "def update_weights(x, y, weights, bias, learnrate):\n",
115 | " pass"
116 | ]
117 | },
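118 | {
119 | "cell_type": "code",
120 | "execution_count": null,
121 | "metadata": {},
122 | "outputs": [],
123 | "source": [
124 | "# Optional sanity checks: once the four functions above are implemented,\n",
125 | "# these asserts should pass.\n",
126 | "assert np.isclose(sigmoid(0), 0.5)\n",
127 | "assert np.isclose(sigmoid(10) + sigmoid(-10), 1.0)  # sigma(-x) = 1 - sigma(x)\n",
128 | "assert np.isclose(error_formula(1, 0.5), np.log(2))  # -log(0.5)\n",
129 | "# h = 1*0.5 + (-1)*0.5 + 0 = 0, so the prediction should be sigmoid(0) = 0.5\n",
130 | "assert np.isclose(output_formula(np.array([0.5, 0.5]), np.array([1.0, -1.0]), 0), 0.5)"
131 | ]
132 | },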
118 | {
119 | "cell_type": "markdown",
120 | "metadata": {},
121 | "source": [
122 | "## Training function\n",
123 | "This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm."
124 | ]
125 | },
126 | {
127 | "cell_type": "code",
128 | "execution_count": null,
129 | "metadata": {},
130 | "outputs": [],
131 | "source": [
132 | "np.random.seed(44)\n",
133 | "\n",
134 | "epochs = 100\n",
135 | "learnrate = 0.01\n",
136 | "\n",
137 | "def train(features, targets, epochs, learnrate, graph_lines=False):\n",
138 | " \n",
139 | " errors = []\n",
140 | " n_records, n_features = features.shape\n",
141 | " last_loss = None\n",
142 | " weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n",
143 | " bias = 0\n",
144 | " for e in range(epochs):\n",
145 | " del_w = np.zeros(weights.shape)\n",
146 | " for x, y in zip(features, targets):\n",
147 | " output = output_formula(x, weights, bias)\n",
148 | " error = error_formula(y, output)\n",
149 | " weights, bias = update_weights(x, y, weights, bias, learnrate)\n",
150 | " \n",
151 | " # Printing out the log-loss error on the training set\n",
152 | " out = output_formula(features, weights, bias)\n",
153 | " loss = np.mean(error_formula(targets, out))\n",
154 | " errors.append(loss)\n",
155 | " if e % (epochs / 10) == 0:\n",
156 | " print(\"\\n========== Epoch\", e,\"==========\")\n",
157 | " if last_loss and last_loss < loss:\n",
158 | " print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n",
159 | " else:\n",
160 | " print(\"Train loss: \", loss)\n",
161 | " last_loss = loss\n",
162 | " predictions = out > 0.5\n",
163 | " accuracy = np.mean(predictions == targets)\n",
164 | " print(\"Accuracy: \", accuracy)\n",
165 | " if graph_lines and e % (epochs / 100) == 0:\n",
166 | " display(-weights[0]/weights[1], -bias/weights[1])\n",
167 | " \n",
168 | "\n",
169 | " # Plotting the solution boundary\n",
170 | " plt.title(\"Solution boundary\")\n",
171 | " display(-weights[0]/weights[1], -bias/weights[1], 'black')\n",
172 | "\n",
173 | " # Plotting the data\n",
174 | " plot_points(features, targets)\n",
175 | " plt.show()\n",
176 | "\n",
177 | " # Plotting the error\n",
178 | " plt.title(\"Error Plot\")\n",
179 | " plt.xlabel('Number of epochs')\n",
180 | " plt.ylabel('Error')\n",
181 | " plt.plot(errors)\n",
182 | " plt.show()"
183 | ]
184 | },
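185 | {
186 | "cell_type": "markdown",
187 | "metadata": {},
188 | "source": [
189 | "Aside: the calls to `display(-weights[0]/weights[1], -bias/weights[1])` above convert the learned parameters into slope-intercept form. The boundary is where the model is undecided, $\\hat{y} = 0.5$, which happens exactly when $w_1 x_1 + w_2 x_2 + b = 0$. Solving for $x_2$ gives\n",
190 | "\n",
191 | "$$x_2 = -\\frac{w_1}{w_2} x_1 - \\frac{b}{w_2}$$\n",
192 | "\n",
193 | "so the slope passed to `display` is $-w_1/w_2$ and the intercept is $-b/w_2$."
194 | ]
195 | },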
185 | {
186 | "cell_type": "markdown",
187 | "metadata": {},
188 | "source": [
189 | "## Time to train the algorithm!\n",
190 | "When we run the function, we'll obtain the following:\n",
191 | "- 10 updates with the current training loss and accuracy\n",
192 | "- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.\n",
193 | "- A plot of the error function. Notice how it decreases as we go through more epochs."
194 | ]
195 | },
196 | {
197 | "cell_type": "code",
198 | "execution_count": null,
199 | "metadata": {},
200 | "outputs": [],
201 | "source": [
202 | "train(X, y, epochs, learnrate, True)"
203 | ]
204 | }
205 | ],
206 | "metadata": {
207 | "kernelspec": {
208 | "display_name": "Python 3 (ipykernel)",
209 | "language": "python",
210 | "name": "python3"
211 | },
212 | "language_info": {
213 | "codemirror_mode": {
214 | "name": "ipython",
215 | "version": 3
216 | },
217 | "file_extension": ".py",
218 | "mimetype": "text/x-python",
219 | "name": "python",
220 | "nbconvert_exporter": "python",
221 | "pygments_lexer": "ipython3",
222 | "version": "3.8.2"
223 | }
224 | },
225 | "nbformat": 4,
226 | "nbformat_minor": 2
227 | }
228 |
--------------------------------------------------------------------------------
/gradient-descent/GradientDescentSolutions.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Solution"
8 | ]
9 | },
10 | {
11 | "cell_type": "code",
12 | "execution_count": null,
13 | "metadata": {
14 | "collapsed": true
15 | },
16 | "outputs": [],
17 | "source": [
18 | "# Activation (sigmoid) function\n",
19 | "def sigmoid(x):\n",
20 | " return 1 / (1 + np.exp(-x))\n",
21 | "\n",
22 | "# Output (prediction) formula\n",
23 | "def output_formula(features, weights, bias):\n",
24 | " return sigmoid(np.dot(features, weights) + bias)\n",
25 | "\n",
26 | "# Error (log-loss) formula\n",
27 | "def error_formula(y, output):\n",
28 | " return - y*np.log(output) - (1 - y) * np.log(1-output)\n",
29 | "\n",
30 | "# Gradient descent step\n",
31 | "def update_weights(x, y, weights, bias, learnrate):\n",
32 | " output = output_formula(x, weights, bias)\n",
33 | " d_error = y - output\n",
34 | " weights += learnrate * d_error * x\n",
35 | " bias += learnrate * d_error\n",
36 | " return weights, bias"
37 | ]
38 | },
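39 | {
40 | "cell_type": "code",
41 | "execution_count": null,
42 | "metadata": {},
43 | "outputs": [],
44 | "source": [
45 | "# Quick check of a single gradient descent step on a toy point\n",
46 | "# (made-up numbers, not from the dataset).\n",
47 | "w, b = np.array([0.0, 0.0]), 0.0\n",
48 | "x, y = np.array([0.5, 0.5]), 1\n",
49 | "w, b = update_weights(x, y, w, b, 0.1)\n",
50 | "# output was sigmoid(0) = 0.5, so y - output = 0.5; we expect\n",
51 | "# w = [0.025, 0.025] and b = 0.05\n",
52 | "print(w, b)"
53 | ]
54 | }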
39 | ],
40 | "metadata": {
41 | "kernelspec": {
42 | "display_name": "Python 3 (ipykernel)",
43 | "language": "python",
44 | "name": "python3"
45 | },
46 | "language_info": {
47 | "codemirror_mode": {
48 | "name": "ipython",
49 | "version": 3
50 | },
51 | "file_extension": ".py",
52 | "mimetype": "text/x-python",
53 | "name": "python",
54 | "nbconvert_exporter": "python",
55 | "pygments_lexer": "ipython3",
56 | "version": "3.8.2"
57 | }
58 | },
59 | "nbformat": 4,
60 | "nbformat_minor": 2
61 | }
62 |
--------------------------------------------------------------------------------
/gradient-descent/data.csv:
--------------------------------------------------------------------------------
1 | 0.78051,-0.063669,1
2 | 0.28774,0.29139,1
3 | 0.40714,0.17878,1
4 | 0.2923,0.4217,1
5 | 0.50922,0.35256,1
6 | 0.27785,0.10802,1
7 | 0.27527,0.33223,1
8 | 0.43999,0.31245,1
9 | 0.33557,0.42984,1
10 | 0.23448,0.24986,1
11 | 0.0084492,0.13658,1
12 | 0.12419,0.33595,1
13 | 0.25644,0.42624,1
14 | 0.4591,0.40426,1
15 | 0.44547,0.45117,1
16 | 0.42218,0.20118,1
17 | 0.49563,0.21445,1
18 | 0.30848,0.24306,1
19 | 0.39707,0.44438,1
20 | 0.32945,0.39217,1
21 | 0.40739,0.40271,1
22 | 0.3106,0.50702,1
23 | 0.49638,0.45384,1
24 | 0.10073,0.32053,1
25 | 0.69907,0.37307,1
26 | 0.29767,0.69648,1
27 | 0.15099,0.57341,1
28 | 0.16427,0.27759,1
29 | 0.33259,0.055964,1
30 | 0.53741,0.28637,1
31 | 0.19503,0.36879,1
32 | 0.40278,0.035148,1
33 | 0.21296,0.55169,1
34 | 0.48447,0.56991,1
35 | 0.25476,0.34596,1
36 | 0.21726,0.28641,1
37 | 0.67078,0.46538,1
38 | 0.3815,0.4622,1
39 | 0.53838,0.32774,1
40 | 0.4849,0.26071,1
41 | 0.37095,0.38809,1
42 | 0.54527,0.63911,1
43 | 0.32149,0.12007,1
44 | 0.42216,0.61666,1
45 | 0.10194,0.060408,1
46 | 0.15254,0.2168,1
47 | 0.45558,0.43769,1
48 | 0.28488,0.52142,1
49 | 0.27633,0.21264,1
50 | 0.39748,0.31902,1
51 | 0.5533,1,0
52 | 0.44274,0.59205,0
53 | 0.85176,0.6612,0
54 | 0.60436,0.86605,0
55 | 0.68243,0.48301,0
56 | 1,0.76815,0
57 | 0.72989,0.8107,0
58 | 0.67377,0.77975,0
59 | 0.78761,0.58177,0
60 | 0.71442,0.7668,0
61 | 0.49379,0.54226,0
62 | 0.78974,0.74233,0
63 | 0.67905,0.60921,0
64 | 0.6642,0.72519,0
65 | 0.79396,0.56789,0
66 | 0.70758,0.76022,0
67 | 0.59421,0.61857,0
68 | 0.49364,0.56224,0
69 | 0.77707,0.35025,0
70 | 0.79785,0.76921,0
71 | 0.70876,0.96764,0
72 | 0.69176,0.60865,0
73 | 0.66408,0.92075,0
74 | 0.65973,0.66666,0
75 | 0.64574,0.56845,0
76 | 0.89639,0.7085,0
77 | 0.85476,0.63167,0
78 | 0.62091,0.80424,0
79 | 0.79057,0.56108,0
80 | 0.58935,0.71582,0
81 | 0.56846,0.7406,0
82 | 0.65912,0.71548,0
83 | 0.70938,0.74041,0
84 | 0.59154,0.62927,0
85 | 0.45829,0.4641,0
86 | 0.79982,0.74847,0
87 | 0.60974,0.54757,0
88 | 0.68127,0.86985,0
89 | 0.76694,0.64736,0
90 | 0.69048,0.83058,0
91 | 0.68122,0.96541,0
92 | 0.73229,0.64245,0
93 | 0.76145,0.60138,0
94 | 0.58985,0.86955,0
95 | 0.73145,0.74516,0
96 | 0.77029,0.7014,0
97 | 0.73156,0.71782,0
98 | 0.44556,0.57991,0
99 | 0.85275,0.85987,0
100 | 0.51912,0.62359,0
101 |
--------------------------------------------------------------------------------
/gradient-descent/points.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/udacity/cd0281-Introduction-to-Neural-Networks-with-PyTorch/b9077645c089fd3865c0e5d3b992e6f1bfd8a98a/gradient-descent/points.png
--------------------------------------------------------------------------------
/student-admissions/StudentAdmissions.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Predicting Student Admissions with Neural Networks\n",
8 | "In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:\n",
9 | "- GRE Scores (Test)\n",
10 | "- GPA Scores (Grades)\n",
11 | "- Class rank (1-4)\n",
12 | "\n",
13 | "The dataset originally came from here: http://www.ats.ucla.edu/\n",
14 | "\n",
15 | "## Loading the data\n",
16 | "To load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:\n",
17 | "- https://pandas.pydata.org/pandas-docs/stable/\n",
18 | "- https://docs.scipy.org/"
19 | ]
20 | },
21 | {
22 | "cell_type": "code",
23 | "execution_count": null,
24 | "metadata": {
25 | "collapsed": true
26 | },
27 | "outputs": [],
28 | "source": [
29 | "# Importing pandas and numpy\n",
30 | "import pandas as pd\n",
31 | "import numpy as np\n",
32 | "\n",
33 | "# Reading the csv file into a pandas DataFrame\n",
34 | "data = pd.read_csv('student_data.csv')\n",
35 | "\n",
36 | "# Printing out the first 10 rows of our data\n",
37 | "data[:10]"
38 | ]
39 | },
40 | {
41 | "cell_type": "markdown",
42 | "metadata": {},
43 | "source": [
44 | "## Plotting the data\n",
45 | "\n",
46 | "First let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ingore the rank."
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": null,
52 | "metadata": {
53 | "collapsed": true
54 | },
55 | "outputs": [],
56 | "source": [
57 | "# Importing matplotlib\n",
58 | "import matplotlib.pyplot as plt\n",
59 | "\n",
60 | "# Function to help us plot\n",
61 | "def plot_points(data):\n",
62 | " X = np.array(data[[\"gre\",\"gpa\"]])\n",
63 | " y = np.array(data[\"admit\"])\n",
64 | " admitted = X[np.argwhere(y==1)]\n",
65 | " rejected = X[np.argwhere(y==0)]\n",
66 | " plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')\n",
67 | " plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')\n",
68 | " plt.xlabel('Test (GRE)')\n",
69 | " plt.ylabel('Grades (GPA)')\n",
70 | " \n",
71 | "# Plotting the points\n",
72 | "plot_points(data)\n",
73 | "plt.show()"
74 | ]
75 | },
76 | {
77 | "cell_type": "markdown",
78 | "metadata": {},
79 | "source": [
80 | "Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank."
81 | ]
82 | },
83 | {
84 | "cell_type": "code",
85 | "execution_count": null,
86 | "metadata": {
87 | "collapsed": true
88 | },
89 | "outputs": [],
90 | "source": [
91 | "# Separating the ranks\n",
92 | "data_rank1 = data[data[\"rank\"]==1]\n",
93 | "data_rank2 = data[data[\"rank\"]==2]\n",
94 | "data_rank3 = data[data[\"rank\"]==3]\n",
95 | "data_rank4 = data[data[\"rank\"]==4]\n",
96 | "\n",
97 | "# Plotting the graphs\n",
98 | "plot_points(data_rank1)\n",
99 | "plt.title(\"Rank 1\")\n",
100 | "plt.show()\n",
101 | "plot_points(data_rank2)\n",
102 | "plt.title(\"Rank 2\")\n",
103 | "plt.show()\n",
104 | "plot_points(data_rank3)\n",
105 | "plt.title(\"Rank 3\")\n",
106 | "plt.show()\n",
107 | "plot_points(data_rank4)\n",
108 | "plt.title(\"Rank 4\")\n",
109 | "plt.show()"
110 | ]
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "metadata": {},
115 | "source": [
116 | "This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.\n",
117 | "\n",
118 | "## TODO: One-hot encoding the rank\n",
119 | "Use the `get_dummies` function in Pandas in order to one-hot encode the data."
120 | ]
121 | },
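122 | {
123 | "cell_type": "code",
124 | "execution_count": null,
125 | "metadata": {},
126 | "outputs": [],
127 | "source": [
128 | "# A quick illustration of pd.get_dummies on a toy column (made-up\n",
129 | "# numbers, not the admissions data): each category in the column\n",
130 | "# becomes its own indicator column named prefix_category.\n",
131 | "toy = pd.DataFrame({'rank': [1, 2, 3, 1]})\n",
132 | "pd.get_dummies(toy['rank'], prefix='rank')"
133 | ]
134 | },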
122 | {
123 | "cell_type": "code",
124 | "execution_count": null,
125 | "metadata": {
126 | "collapsed": true
127 | },
128 | "outputs": [],
129 | "source": [
130 | "# TODO: Make dummy variables for rank\n",
131 | "one_hot_data = None\n",
132 | "\n",
133 | "# TODO: Drop the previous rank column\n",
134 | "one_hot_data = None\n",
135 | "\n",
136 | "# Print the first 10 rows of our data\n",
137 | "one_hot_data[:10]"
138 | ]
139 | },
140 | {
141 | "cell_type": "markdown",
142 | "metadata": {},
143 | "source": [
144 | "## TODO: Scaling the data\n",
145 | "The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800."
146 | ]
147 | },
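148 | {
149 | "cell_type": "code",
150 | "execution_count": null,
151 | "metadata": {},
152 | "outputs": [],
153 | "source": [
154 | "# Toy illustration (made-up numbers, not the dataset): dividing a\n",
155 | "# column by its maximum possible value maps it into the 0-1 range.\n",
156 | "pd.Series([200, 500, 800]) / 800  # -> 0.25, 0.625, 1.0"
157 | ]
158 | },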
148 | {
149 | "cell_type": "code",
150 | "execution_count": null,
151 | "metadata": {
152 | "collapsed": true
153 | },
154 | "outputs": [],
155 | "source": [
156 | "# Making a copy of our data\n",
157 | "processed_data = one_hot_data[:]\n",
158 | "\n",
159 | "# TODO: Scale the columns\n",
160 | "\n",
161 | "# Printing the first 10 rows of our procesed data\n",
162 | "processed_data[:10]"
163 | ]
164 | },
165 | {
166 | "cell_type": "markdown",
167 | "metadata": {},
168 | "source": [
169 | "## Splitting the data into Training and Testing"
170 | ]
171 | },
172 | {
173 | "cell_type": "markdown",
174 | "metadata": {},
175 | "source": [
176 | "In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data."
177 | ]
178 | },
179 | {
180 | "cell_type": "code",
181 | "execution_count": null,
182 | "metadata": {
183 | "collapsed": true
184 | },
185 | "outputs": [],
186 | "source": [
187 | "sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)\n",
188 | "train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)\n",
189 | "\n",
190 | "print(\"Number of training samples is\", len(train_data))\n",
191 | "print(\"Number of testing samples is\", len(test_data))\n",
192 | "print(train_data[:10])\n",
193 | "print(test_data[:10])"
194 | ]
195 | },
196 | {
197 | "cell_type": "markdown",
198 | "metadata": {},
199 | "source": [
200 | "## Splitting the data into features and targets (labels)\n",
201 | "Now, as a final step before the training, we'll split the data into features (X) and targets (y)."
202 | ]
203 | },
204 | {
205 | "cell_type": "code",
206 | "execution_count": null,
207 | "metadata": {
208 | "collapsed": true
209 | },
210 | "outputs": [],
211 | "source": [
212 | "features = train_data.drop('admit', axis=1)\n",
213 | "targets = train_data['admit']\n",
214 | "features_test = test_data.drop('admit', axis=1)\n",
215 | "targets_test = test_data['admit']\n",
216 | "\n",
217 | "print(features[:10])\n",
218 | "print(targets[:10])"
219 | ]
220 | },
221 | {
222 | "cell_type": "markdown",
223 | "metadata": {},
224 | "source": [
225 | "## Training the 2-layer Neural Network\n",
226 | "The following function trains the 2-layer neural network. First, we'll write some helper functions."
227 | ]
228 | },
229 | {
230 | "cell_type": "code",
231 | "execution_count": null,
232 | "metadata": {
233 | "collapsed": true
234 | },
235 | "outputs": [],
236 | "source": [
237 | "# Activation (sigmoid) function\n",
238 | "def sigmoid(x):\n",
239 | " return 1 / (1 + np.exp(-x))\n",
240 | "def sigmoid_prime(x):\n",
241 | " return sigmoid(x) * (1-sigmoid(x))\n",
242 | "def error_formula(y, output):\n",
243 | " return - y*np.log(output) - (1 - y) * np.log(1-output)"
244 | ]
245 | },
246 | {
247 | "cell_type": "markdown",
248 | "metadata": {},
249 | "source": [
250 | "# TODO: Backpropagate the error\n",
251 | "Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ (y-\\hat{y}) \\sigma'(x) $$"
252 | ]
253 | },
254 | {
255 | "cell_type": "code",
256 | "execution_count": null,
257 | "metadata": {
258 | "collapsed": true
259 | },
260 | "outputs": [],
261 | "source": [
262 | "# TODO: Write the error term formula\n",
263 | "def error_term_formula(x, y, output):\n",
264 | " pass"
265 | ]
266 | },
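267 | {
268 | "cell_type": "markdown",
269 | "metadata": {},
270 | "source": [
271 | "A useful identity for implementing the error term: since $\\sigma(h) = \\frac{1}{1+e^{-h}}$,\n",
272 | "\n",
273 | "$$\\sigma'(h) = \\frac{e^{-h}}{(1+e^{-h})^2} = \\sigma(h)\\,(1-\\sigma(h)) = \\hat{y}\\,(1-\\hat{y})$$\n",
274 | "\n",
275 | "so the error term can be computed directly from the network output, without re-evaluating the sigmoid."
276 | ]
277 | },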
267 | {
268 | "cell_type": "code",
269 | "execution_count": null,
270 | "metadata": {
271 | "collapsed": true
272 | },
273 | "outputs": [],
274 | "source": [
275 | "# Neural Network hyperparameters\n",
276 | "epochs = 1000\n",
277 | "learnrate = 0.5\n",
278 | "\n",
279 | "# Training function\n",
280 | "def train_nn(features, targets, epochs, learnrate):\n",
281 | " \n",
282 | " # Use to same seed to make debugging easier\n",
283 | " np.random.seed(42)\n",
284 | "\n",
285 | " n_records, n_features = features.shape\n",
286 | " last_loss = None\n",
287 | "\n",
288 | " # Initialize weights\n",
289 | " weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n",
290 | "\n",
291 | " for e in range(epochs):\n",
292 | " del_w = np.zeros(weights.shape)\n",
293 | " for x, y in zip(features.values, targets):\n",
294 | " # Loop through all records, x is the input, y is the target\n",
295 | "\n",
296 | " # Activation of the output unit\n",
297 | " # Notice we multiply the inputs and the weights here \n",
298 | " # rather than storing h as a separate variable \n",
299 | " output = sigmoid(np.dot(x, weights))\n",
300 | "\n",
301 | " # The error, the target minus the network output\n",
302 | " error = error_formula(y, output)\n",
303 | "\n",
304 | " # The error term\n",
305 | " error_term = error_term_formula(x, y, output)\n",
306 | "\n",
307 | " # The gradient descent step, the error times the gradient times the inputs\n",
308 | " del_w += error_term * x\n",
309 | "\n",
310 | " # Update the weights here. The learning rate times the \n",
311 | " # change in weights, divided by the number of records to average\n",
312 | " weights += learnrate * del_w / n_records\n",
313 | "\n",
314 | " # Printing out the mean square error on the training set\n",
315 | " if e % (epochs / 10) == 0:\n",
316 | " out = sigmoid(np.dot(features, weights))\n",
317 | " loss = np.mean((out - targets) ** 2)\n",
318 | " print(\"Epoch:\", e)\n",
319 | " if last_loss and last_loss < loss:\n",
320 | " print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n",
321 | " else:\n",
322 | " print(\"Train loss: \", loss)\n",
323 | " last_loss = loss\n",
324 | " print(\"=========\")\n",
325 | " print(\"Finished training!\")\n",
326 | " return weights\n",
327 | " \n",
328 | "weights = train_nn(features, targets, epochs, learnrate)"
329 | ]
330 | },
331 | {
332 | "cell_type": "markdown",
333 | "metadata": {},
334 | "source": [
335 | "## Calculating the Accuracy on the Test Data"
336 | ]
337 | },
338 | {
339 | "cell_type": "code",
340 | "execution_count": null,
341 | "metadata": {
342 | "collapsed": true
343 | },
344 | "outputs": [],
345 | "source": [
346 | "# Calculate accuracy on test data\n",
347 | "test_out = sigmoid(np.dot(features_test, weights))\n",
348 | "predictions = test_out > 0.5\n",
349 | "accuracy = np.mean(predictions == targets_test)\n",
350 | "print(\"Prediction accuracy: {:.3f}\".format(accuracy))"
351 | ]
352 | }
362 | ],
363 | "metadata": {
364 | "kernelspec": {
365 | "display_name": "Python 3",
366 | "language": "python",
367 | "name": "python3"
368 | },
369 | "language_info": {
370 | "codemirror_mode": {
371 | "name": "ipython",
372 | "version": 3
373 | },
374 | "file_extension": ".py",
375 | "mimetype": "text/x-python",
376 | "name": "python",
377 | "nbconvert_exporter": "python",
378 | "pygments_lexer": "ipython3",
379 | "version": "3.6.3"
380 | }
381 | },
382 | "nbformat": 4,
383 | "nbformat_minor": 2
384 | }
385 |
--------------------------------------------------------------------------------
/student-admissions/StudentAdmissionsSolutions.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Solutions"
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 | "### One-hot encoding the rank"
15 | ]
16 | },
17 | {
18 | "cell_type": "code",
19 | "execution_count": null,
20 | "metadata": {
21 | "collapsed": true
22 | },
23 | "outputs": [],
24 | "source": [
25 | "# Make dummy variables for rank\n",
26 | "one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)\n",
27 | "\n",
28 | "# Drop the previous rank column\n",
29 | "one_hot_data = one_hot_data.drop('rank', axis=1)\n",
30 | "\n",
31 | "# Print the first 10 rows of our data\n",
32 | "one_hot_data[:10]"
33 | ]
34 | },
35 | {
36 | "cell_type": "markdown",
37 | "metadata": {},
38 | "source": [
39 | "### Scaling the data"
40 | ]
41 | },
42 | {
43 | "cell_type": "code",
44 | "execution_count": null,
45 | "metadata": {
46 | "collapsed": true
47 | },
48 | "outputs": [],
49 | "source": [
50 | "# Copying our data\n",
51 | "processed_data = one_hot_data[:]\n",
52 | "\n",
53 | "# Scaling the columns\n",
54 | "processed_data['gre'] = processed_data['gre']/800\n",
55 | "processed_data['gpa'] = processed_data['gpa']/4.0\n",
56 | "processed_data[:10]"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {},
62 | "source": [
63 | "### Backpropagating the data"
64 | ]
65 | },
66 | {
67 | "cell_type": "code",
68 | "execution_count": null,
69 | "metadata": {
70 | "collapsed": true
71 | },
72 | "outputs": [],
73 | "source": [
74 | "def error_term_formula(x, y, output):\n",
75 | " return (y - output)*sigmoid_prime(x)"
76 | ]
77 | },
78 | {
79 | "cell_type": "code",
80 | "execution_count": null,
81 | "metadata": {},
82 | "outputs": [],
83 | "source": [
84 | "## alternative solution ##\n",
85 | "# you could also *only* use y and the output \n",
86 | "# and calculate sigmoid_prime directly from the activated output!\n",
87 | "\n",
88 | "# below is an equally valid solution (it doesn't utilize x)\n",
89 | "def error_term_formula(x, y, output):\n",
90 | " return (y-output) * output * (1 - output)"
91 | ]
92 | }
93 | ],
94 | "metadata": {
95 | "kernelspec": {
96 | "display_name": "Python 3",
97 | "language": "python",
98 | "name": "python3"
99 | },
100 | "language_info": {
101 | "codemirror_mode": {
102 | "name": "ipython",
103 | "version": 3
104 | },
105 | "file_extension": ".py",
106 | "mimetype": "text/x-python",
107 | "name": "python",
108 | "nbconvert_exporter": "python",
109 | "pygments_lexer": "ipython3",
110 | "version": "3.6.3"
111 | }
112 | },
113 | "nbformat": 4,
114 | "nbformat_minor": 2
115 | }
116 |
--------------------------------------------------------------------------------
/student-admissions/student_data.csv:
--------------------------------------------------------------------------------
1 | admit,gre,gpa,rank
2 | 0,380,3.61,3
3 | 1,660,3.67,3
4 | 1,800,4,1
5 | 1,640,3.19,4
6 | 0,520,2.93,4
7 | 1,760,3,2
8 | 1,560,2.98,1
9 | 0,400,3.08,2
10 | 1,540,3.39,3
11 | 0,700,3.92,2
12 | 0,800,4,4
13 | 0,440,3.22,1
14 | 1,760,4,1
15 | 0,700,3.08,2
16 | 1,700,4,1
17 | 0,480,3.44,3
18 | 0,780,3.87,4
19 | 0,360,2.56,3
20 | 0,800,3.75,2
21 | 1,540,3.81,1
22 | 0,500,3.17,3
23 | 1,660,3.63,2
24 | 0,600,2.82,4
25 | 0,680,3.19,4
26 | 1,760,3.35,2
27 | 1,800,3.66,1
28 | 1,620,3.61,1
29 | 1,520,3.74,4
30 | 1,780,3.22,2
31 | 0,520,3.29,1
32 | 0,540,3.78,4
33 | 0,760,3.35,3
34 | 0,600,3.4,3
35 | 1,800,4,3
36 | 0,360,3.14,1
37 | 0,400,3.05,2
38 | 0,580,3.25,1
39 | 0,520,2.9,3
40 | 1,500,3.13,2
41 | 1,520,2.68,3
42 | 0,560,2.42,2
43 | 1,580,3.32,2
44 | 1,600,3.15,2
45 | 0,500,3.31,3
46 | 0,700,2.94,2
47 | 1,460,3.45,3
48 | 1,580,3.46,2
49 | 0,500,2.97,4
50 | 0,440,2.48,4
51 | 0,400,3.35,3
52 | 0,640,3.86,3
53 | 0,440,3.13,4
54 | 0,740,3.37,4
55 | 1,680,3.27,2
56 | 0,660,3.34,3
57 | 1,740,4,3
58 | 0,560,3.19,3
59 | 0,380,2.94,3
60 | 0,400,3.65,2
61 | 0,600,2.82,4
62 | 1,620,3.18,2
63 | 0,560,3.32,4
64 | 0,640,3.67,3
65 | 1,680,3.85,3
66 | 0,580,4,3
67 | 0,600,3.59,2
68 | 0,740,3.62,4
69 | 0,620,3.3,1
70 | 0,580,3.69,1
71 | 0,800,3.73,1
72 | 0,640,4,3
73 | 0,300,2.92,4
74 | 0,480,3.39,4
75 | 0,580,4,2
76 | 0,720,3.45,4
77 | 0,720,4,3
78 | 0,560,3.36,3
79 | 1,800,4,3
80 | 0,540,3.12,1
81 | 1,620,4,1
82 | 0,700,2.9,4
83 | 0,620,3.07,2
84 | 0,500,2.71,2
85 | 0,380,2.91,4
86 | 1,500,3.6,3
87 | 0,520,2.98,2
88 | 0,600,3.32,2
89 | 0,600,3.48,2
90 | 0,700,3.28,1
91 | 1,660,4,2
92 | 0,700,3.83,2
93 | 1,720,3.64,1
94 | 0,800,3.9,2
95 | 0,580,2.93,2
96 | 1,660,3.44,2
97 | 0,660,3.33,2
98 | 0,640,3.52,4
99 | 0,480,3.57,2
100 | 0,700,2.88,2
101 | 0,400,3.31,3
102 | 0,340,3.15,3
103 | 0,580,3.57,3
104 | 0,380,3.33,4
105 | 0,540,3.94,3
106 | 1,660,3.95,2
107 | 1,740,2.97,2
108 | 1,700,3.56,1
109 | 0,480,3.13,2
110 | 0,400,2.93,3
111 | 0,480,3.45,2
112 | 0,680,3.08,4
113 | 0,420,3.41,4
114 | 0,360,3,3
115 | 0,600,3.22,1
116 | 0,720,3.84,3
117 | 0,620,3.99,3
118 | 1,440,3.45,2
119 | 0,700,3.72,2
120 | 1,800,3.7,1
121 | 0,340,2.92,3
122 | 1,520,3.74,2
123 | 1,480,2.67,2
124 | 0,520,2.85,3
125 | 0,500,2.98,3
126 | 0,720,3.88,3
127 | 0,540,3.38,4
128 | 1,600,3.54,1
129 | 0,740,3.74,4
130 | 0,540,3.19,2
131 | 0,460,3.15,4
132 | 1,620,3.17,2
133 | 0,640,2.79,2
134 | 0,580,3.4,2
135 | 0,500,3.08,3
136 | 0,560,2.95,2
137 | 0,500,3.57,3
138 | 0,560,3.33,4
139 | 0,700,4,3
140 | 0,620,3.4,2
141 | 1,600,3.58,1
142 | 0,640,3.93,2
143 | 1,700,3.52,4
144 | 0,620,3.94,4
145 | 0,580,3.4,3
146 | 0,580,3.4,4
147 | 0,380,3.43,3
148 | 0,480,3.4,2
149 | 0,560,2.71,3
150 | 1,480,2.91,1
151 | 0,740,3.31,1
152 | 1,800,3.74,1
153 | 0,400,3.38,2
154 | 1,640,3.94,2
155 | 0,580,3.46,3
156 | 0,620,3.69,3
157 | 1,580,2.86,4
158 | 0,560,2.52,2
159 | 1,480,3.58,1
160 | 0,660,3.49,2
161 | 0,700,3.82,3
162 | 0,600,3.13,2
163 | 0,640,3.5,2
164 | 1,700,3.56,2
165 | 0,520,2.73,2
166 | 0,580,3.3,2
167 | 0,700,4,1
168 | 0,440,3.24,4
169 | 0,720,3.77,3
170 | 0,500,4,3
171 | 0,600,3.62,3
172 | 0,400,3.51,3
173 | 0,540,2.81,3
174 | 0,680,3.48,3
175 | 1,800,3.43,2
176 | 0,500,3.53,4
177 | 1,620,3.37,2
178 | 0,520,2.62,2
179 | 1,620,3.23,3
180 | 0,620,3.33,3
181 | 0,300,3.01,3
182 | 0,620,3.78,3
183 | 0,500,3.88,4
184 | 0,700,4,2
185 | 1,540,3.84,2
186 | 0,500,2.79,4
187 | 0,800,3.6,2
188 | 0,560,3.61,3
189 | 0,580,2.88,2
190 | 0,560,3.07,2
191 | 0,500,3.35,2
192 | 1,640,2.94,2
193 | 0,800,3.54,3
194 | 0,640,3.76,3
195 | 0,380,3.59,4
196 | 1,600,3.47,2
197 | 0,560,3.59,2
198 | 0,660,3.07,3
199 | 1,400,3.23,4
200 | 0,600,3.63,3
201 | 0,580,3.77,4
202 | 0,800,3.31,3
203 | 1,580,3.2,2
204 | 1,700,4,1
205 | 0,420,3.92,4
206 | 1,600,3.89,1
207 | 1,780,3.8,3
208 | 0,740,3.54,1
209 | 1,640,3.63,1
210 | 0,540,3.16,3
211 | 0,580,3.5,2
212 | 0,740,3.34,4
213 | 0,580,3.02,2
214 | 0,460,2.87,2
215 | 0,640,3.38,3
216 | 1,600,3.56,2
217 | 1,660,2.91,3
218 | 0,340,2.9,1
219 | 1,460,3.64,1
220 | 0,460,2.98,1
221 | 1,560,3.59,2
222 | 0,540,3.28,3
223 | 0,680,3.99,3
224 | 1,480,3.02,1
225 | 0,800,3.47,3
226 | 0,800,2.9,2
227 | 1,720,3.5,3
228 | 0,620,3.58,2
229 | 0,540,3.02,4
230 | 0,480,3.43,2
231 | 1,720,3.42,2
232 | 0,580,3.29,4
233 | 0,600,3.28,3
234 | 0,380,3.38,2
235 | 0,420,2.67,3
236 | 1,800,3.53,1
237 | 0,620,3.05,2
238 | 1,660,3.49,2
239 | 0,480,4,2
240 | 0,500,2.86,4
241 | 0,700,3.45,3
242 | 0,440,2.76,2
243 | 1,520,3.81,1
244 | 1,680,2.96,3
245 | 0,620,3.22,2
246 | 0,540,3.04,1
247 | 0,800,3.91,3
248 | 0,680,3.34,2
249 | 0,440,3.17,2
250 | 0,680,3.64,3
251 | 0,640,3.73,3
252 | 0,660,3.31,4
253 | 0,620,3.21,4
254 | 1,520,4,2
255 | 1,540,3.55,4
256 | 1,740,3.52,4
257 | 0,640,3.35,3
258 | 1,520,3.3,2
259 | 1,620,3.95,3
260 | 0,520,3.51,2
261 | 0,640,3.81,2
262 | 0,680,3.11,2
263 | 0,440,3.15,2
264 | 1,520,3.19,3
265 | 1,620,3.95,3
266 | 1,520,3.9,3
267 | 0,380,3.34,3
268 | 0,560,3.24,4
269 | 1,600,3.64,3
270 | 1,680,3.46,2
271 | 0,500,2.81,3
272 | 1,640,3.95,2
273 | 0,540,3.33,3
274 | 1,680,3.67,2
275 | 0,660,3.32,1
276 | 0,520,3.12,2
277 | 1,600,2.98,2
278 | 0,460,3.77,3
279 | 1,580,3.58,1
280 | 1,680,3,4
281 | 1,660,3.14,2
282 | 0,660,3.94,2
283 | 0,360,3.27,3
284 | 0,660,3.45,4
285 | 0,520,3.1,4
286 | 1,440,3.39,2
287 | 0,600,3.31,4
288 | 1,800,3.22,1
289 | 1,660,3.7,4
290 | 0,800,3.15,4
291 | 0,420,2.26,4
292 | 1,620,3.45,2
293 | 0,800,2.78,2
294 | 0,680,3.7,2
295 | 0,800,3.97,1
296 | 0,480,2.55,1
297 | 0,520,3.25,3
298 | 0,560,3.16,1
299 | 0,460,3.07,2
300 | 0,540,3.5,2
301 | 0,720,3.4,3
302 | 0,640,3.3,2
303 | 1,660,3.6,3
304 | 1,400,3.15,2
305 | 1,680,3.98,2
306 | 0,220,2.83,3
307 | 0,580,3.46,4
308 | 1,540,3.17,1
309 | 0,580,3.51,2
310 | 0,540,3.13,2
311 | 0,440,2.98,3
312 | 0,560,4,3
313 | 0,660,3.67,2
314 | 0,660,3.77,3
315 | 1,520,3.65,4
316 | 0,540,3.46,4
317 | 1,300,2.84,2
318 | 1,340,3,2
319 | 1,780,3.63,4
320 | 1,480,3.71,4
321 | 0,540,3.28,1
322 | 0,460,3.14,3
323 | 0,460,3.58,2
324 | 0,500,3.01,4
325 | 0,420,2.69,2
326 | 0,520,2.7,3
327 | 0,680,3.9,1
328 | 0,680,3.31,2
329 | 1,560,3.48,2
330 | 0,580,3.34,2
331 | 0,500,2.93,4
332 | 0,740,4,3
333 | 0,660,3.59,3
334 | 0,420,2.96,1
335 | 0,560,3.43,3
336 | 1,460,3.64,3
337 | 1,620,3.71,1
338 | 0,520,3.15,3
339 | 0,620,3.09,4
340 | 0,540,3.2,1
341 | 1,660,3.47,3
342 | 0,500,3.23,4
343 | 1,560,2.65,3
344 | 0,500,3.95,4
345 | 0,580,3.06,2
346 | 0,520,3.35,3
347 | 0,500,3.03,3
348 | 0,600,3.35,2
349 | 0,580,3.8,2
350 | 0,400,3.36,2
351 | 0,620,2.85,2
352 | 1,780,4,2
353 | 0,620,3.43,3
354 | 1,580,3.12,3
355 | 0,700,3.52,2
356 | 1,540,3.78,2
357 | 1,760,2.81,1
358 | 0,700,3.27,2
359 | 0,720,3.31,1
360 | 1,560,3.69,3
361 | 0,720,3.94,3
362 | 1,520,4,1
363 | 1,540,3.49,1
364 | 0,680,3.14,2
365 | 0,460,3.44,2
366 | 1,560,3.36,1
367 | 0,480,2.78,3
368 | 0,460,2.93,3
369 | 0,620,3.63,3
370 | 0,580,4,1
371 | 0,800,3.89,2
372 | 1,540,3.77,2
373 | 1,680,3.76,3
374 | 1,680,2.42,1
375 | 1,620,3.37,1
376 | 0,560,3.78,2
377 | 0,560,3.49,4
378 | 0,620,3.63,2
379 | 1,800,4,2
380 | 0,640,3.12,3
381 | 0,540,2.7,2
382 | 0,700,3.65,2
383 | 1,540,3.49,2
384 | 0,540,3.51,2
385 | 0,660,4,1
386 | 1,480,2.62,2
387 | 0,420,3.02,1
388 | 1,740,3.86,2
389 | 0,580,3.36,2
390 | 0,640,3.17,2
391 | 0,640,3.51,2
392 | 1,800,3.05,2
393 | 1,660,3.88,2
394 | 1,600,3.38,3
395 | 1,620,3.75,2
396 | 1,460,3.99,3
397 | 0,620,4,2
398 | 0,560,3.04,3
399 | 0,460,2.63,2
400 | 0,700,3.65,2
401 | 0,600,3.89,3
402 |
--------------------------------------------------------------------------------