├── README.md
├── data.csv
└── demo.ipynb
/README.md:
--------------------------------------------------------------------------------
1 | # How_to_use_Tensorflow_for_classification-LIVE
2 | This is the code for the "How to Use Tensorflow for Classification" live session by Siraj Raval on YouTube.
3 |
4 | ## Overview
5 |
6 | This is the code for [this](https://www.youtube.com/watch?v=4urPuRoT1sE) live session by Siraj Raval on YouTube. We'll build
7 | a classifier for houses. The housing data contains features for each house, like number of bathrooms, price, and area. We'll manually add labels to our data (good buy or bad buy); then, given a new house, we'll predict whether it's a good buy or a bad buy. We use gradient descent as our optimization strategy.
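To make the idea concrete, here is a rough sketch in plain NumPy (not the notebook's exact code; the dataset values, learning rate, and loss below are illustrative) of a softmax classifier trained with gradient descent:

```python
import numpy as np

# Tiny illustrative dataset: [area, bathrooms] per house (values are made up)
X = np.array([[2104., 3.], [1600., 3.], [1416., 2.], [3000., 4.]])
# One-hot labels: [1, 0] = good buy, [0, 1] = bad buy
Y = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])

def softmax(z):
    # Subtract the row max before exponentiating, for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((2, 2))   # weights: 2 features -> 2 classes
b = np.zeros(2)        # one bias per class
learning_rate = 1e-7   # tiny, because the features are unscaled

for _ in range(1000):
    probs = softmax(X @ W + b)                 # forward pass: class probabilities
    grad = probs - Y                           # gradient of cross-entropy w.r.t. logits
    W -= learning_rate * (X.T @ grad) / len(X) # gradient descent step on weights
    b -= learning_rate * grad.mean(axis=0)     # and on biases

probs = softmax(X @ W + b)  # each row is a probability distribution over (good, bad)
```

The notebook itself expresses the same shape of computation with TensorFlow placeholders and an optimizer instead of hand-written gradients.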
8 |
9 | ## Dependencies
10 |
11 | * matplotlib
12 | * [tensorflow](https://www.tensorflow.org/get_started/os_setup)
13 | * pandas
14 | * numpy
15 |
16 | Install dependencies using [pip](https://pip.pypa.io/en/stable/).
17 | Install Jupyter Notebook by following [these instructions](http://jupyter.readthedocs.io/en/latest/install.html).
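For example (assuming `pip` is on your PATH; package names as on PyPI):

```shell
pip install matplotlib pandas numpy
pip install tensorflow
pip install jupyter
```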
18 |
19 | ## Usage
20 |
21 | Type `jupyter notebook` into a terminal and a browser window will pop up. Click on demo.ipynb. You can run
22 | each cell of code in order to see its output.
23 |
24 | ## Credits
25 | Credits for the code go to [jalammar](https://github.com/jalammar). I've merely created a wrapper to get people started.
26 |
--------------------------------------------------------------------------------
/data.csv:
--------------------------------------------------------------------------------
1 | index,area,bathrooms,price,sq_price
2 | 0,2104.0,3.0,399900.0,190.066539924
3 | 1,1600.0,3.0,329900.0,206.1875
4 | 2,2400.0,3.0,369000.0,153.75
5 | 3,1416.0,2.0,232000.0,163.84180791
6 | 4,3000.0,4.0,539900.0,179.966666667
7 | 5,1985.0,4.0,299900.0,151.083123426
8 | 6,1534.0,3.0,314900.0,205.280312907
9 | 7,1427.0,3.0,198999.0,139.452697968
10 | 8,1380.0,3.0,212000.0,153.623188406
11 | 9,1494.0,3.0,242500.0,162.315930388
12 | 10,1940.0,4.0,239999.0,123.710824742
13 | 11,2000.0,3.0,347000.0,173.5
14 | 12,1890.0,3.0,329999.0,174.602645503
15 | 13,4478.0,5.0,699900.0,156.297454221
16 | 14,1268.0,3.0,259900.0,204.968454259
17 | 15,2300.0,4.0,449900.0,195.608695652
18 | 16,1320.0,2.0,299900.0,227.196969697
19 | 17,1236.0,3.0,199900.0,161.731391586
20 | 18,2609.0,4.0,499998.0,191.643541587
21 | 19,3031.0,4.0,599000.0,197.624546354
22 | 20,1767.0,3.0,252900.0,143.123938879
23 | 21,1888.0,2.0,255000.0,135.063559322
24 | 22,1604.0,3.0,242900.0,151.433915212
25 | 23,1962.0,4.0,259900.0,132.46687054
26 | 24,3890.0,3.0,573900.0,147.532133676
27 | 25,1100.0,3.0,249900.0,227.181818182
28 | 26,1458.0,3.0,464500.0,318.587105624
29 | 27,2526.0,3.0,469000.0,185.669041964
30 | 28,2200.0,3.0,475000.0,215.909090909
31 | 29,2637.0,3.0,299900.0,113.727720895
32 | 30,1839.0,2.0,349900.0,190.266449157
33 | 31,1000.0,1.0,169900.0,169.9
34 | 32,2040.0,4.0,314900.0,154.362745098
35 | 33,3137.0,3.0,579900.0,184.858144724
36 | 34,1811.0,4.0,285900.0,157.868580895
37 | 35,1437.0,3.0,249900.0,173.903966597
38 | 36,1239.0,3.0,229900.0,185.552865214
39 | 37,2132.0,4.0,345000.0,161.81988743
40 | 38,4215.0,4.0,549000.0,130.24911032
41 | 39,2162.0,4.0,287000.0,132.747456059
42 | 40,1664.0,2.0,368500.0,221.454326923
43 | 41,2238.0,3.0,329900.0,147.408400357
44 | 42,2567.0,4.0,314000.0,122.321776393
45 | 43,1200.0,3.0,299000.0,249.166666667
46 | 44,852.0,2.0,179900.0,211.150234742
47 | 45,1852.0,4.0,299900.0,161.933045356
48 | 46,1203.0,3.0,239500.0,199.085619285
49 |
--------------------------------------------------------------------------------
/demo.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Basic Classification Example with TensorFlow\n",
8 | "\n",
9 | "This notebook is a companion of [A Visual and Interactive Guide to the Basics of Neural Networks](https://jalammar.github.io/visual-interactive-guide-basics-neural-networks/).\n",
10 | "\n",
11 | "This is an example of how to do classification on a simple dataset in TensorFlow. Basically, we're building a model to help a friend choose a house to buy. She has given us the table below of houses and whether she likes them or not. We're to build a model that takes a house area and number of bathrooms as input, and outputs a prediction of whether she would like the house or not.\n",
12 | "\n",
13 | "| Area (sq ft) (x1) | Bathrooms (x2) | Label (y) |\n",
14 | "| --- | --- | --- |\n",
15 | "| 2,104 | 3 | Good |\n",
16 | "| 1,600 | 3 | Good |\n",
17 | "| 2,400 | 3 | Good |\n",
18 | "| 1,416 | 2 | Bad |\n",
19 | "| 3,000 | 4 | Bad |\n",
20 | "| 1,985 | 4 | Good |\n",
21 | "| 1,534 | 3 | Bad |\n",
22 | "| 1,427 | 3 | Good |\n",
23 | "| 1,380 | 3 | Good |\n",
24 | "| 1,494 | 3 | Good |\n",
25 | " \n",
26 | " \n",
27 | " \n",
28 | "We'll start by loading our favorite libraries."
29 | ]
30 | },
31 | {
32 | "cell_type": "code",
33 | "execution_count": 168,
34 | "metadata": {
35 | "collapsed": false
36 | },
37 | "outputs": [],
38 | "source": [
39 | "%matplotlib inline \n",
40 | "import pandas as pd # A beautiful library to help us work with data as tables\n",
41 | "import numpy as np # So we can use number matrices. Both pandas and TensorFlow need it. \n",
42 | "import matplotlib.pyplot as plt # Visualize the things\n",
43 | "import tensorflow as tf # Fire from the gods\n"
44 | ]
45 | },
46 | {
47 | "cell_type": "markdown",
48 | "metadata": {},
49 | "source": [
50 | "We'll then load the house data CSV. Pandas is an incredible library that gives us great flexibility in dealing with table-like data. We load tables (or CSV files, or Excel sheets) into a \"data frame\" and process them however we like. You can think of it as a programmatic way to do a lot of the things you previously did with Excel."
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": 165,
56 | "metadata": {
57 | "collapsed": false,
58 | "scrolled": true
59 | },
60 | "outputs": [
61 | {
62 | "data": {
128 | "text/plain": [
129 | " area bathrooms\n",
130 | "0 2104.0 3.0\n",
131 | "1 1600.0 3.0\n",
132 | "2 2400.0 3.0\n",
133 | "3 1416.0 2.0\n",
134 | "4 3000.0 4.0\n",
135 | "5 1985.0 4.0\n",
136 | "6 1534.0 3.0\n",
137 | "7 1427.0 3.0\n",
138 | "8 1380.0 3.0\n",
139 | "9 1494.0 3.0"
140 | ]
141 | },
142 | "execution_count": 165,
143 | "metadata": {},
144 | "output_type": "execute_result"
145 | }
146 | ],
147 | "source": [
148 | "dataframe = pd.read_csv(\"data.csv\") # Let's have Pandas load our dataset as a dataframe\n",
149 | "dataframe = dataframe.drop([\"index\", \"price\", \"sq_price\"], axis=1) # Remove columns we don't care about\n",
150 | "dataframe = dataframe[0:10] # We'll only use the first 10 rows of the dataset in this example\n",
151 | "dataframe # Let's have the notebook show us how the dataframe looks now"
152 | ]
153 | },
154 | {
155 | "cell_type": "markdown",
156 | "metadata": {
157 | "collapsed": false
158 | },
159 | "source": [
160 | "The dataframe now only has the features. Let's introduce the labels."
161 | ]
162 | },
163 | {
164 | "cell_type": "code",
165 | "execution_count": 120,
166 | "metadata": {
167 | "collapsed": false
168 | },
169 | "outputs": [
170 | {
171 | "data": {
259 | "text/plain": [
260 | " area bathrooms y1 y2\n",
261 | "0 2104.0 3.0 1 0\n",
262 | "1 1600.0 3.0 1 0\n",
263 | "2 2400.0 3.0 1 0\n",
264 | "3 1416.0 2.0 0 1\n",
265 | "4 3000.0 4.0 0 1\n",
266 | "5 1985.0 4.0 1 0\n",
267 | "6 1534.0 3.0 0 1\n",
268 | "7 1427.0 3.0 1 0\n",
269 | "8 1380.0 3.0 1 0\n",
270 | "9 1494.0 3.0 1 0"
271 | ]
272 | },
273 | "execution_count": 120,
274 | "metadata": {},
275 | "output_type": "execute_result"
276 | }
277 | ],
278 | "source": [
279 | "dataframe.loc[:, (\"y1\")] = [1, 1, 1, 0, 0, 1, 0, 1, 1, 1] # This is our friend's list of which houses she liked\n",
280 | " # 1 = good, 0 = bad\n",
281 | "dataframe.loc[:, (\"y2\")] = dataframe[\"y1\"] == 0 # y2 is the negation of y1\n",
282 | "dataframe.loc[:, (\"y2\")] = dataframe[\"y2\"].astype(int) # Turn TRUE/FALSE values into 1/0\n",
283 | "# y2 means we don't like a house\n",
284 | "# (Yes, it's redundant. But learning to do it this way opens the door to Multiclass classification)\n",
285 | "dataframe # How is our dataframe looking now?"
286 | ]
287 | },
288 | {
289 | "cell_type": "markdown",
290 | "metadata": {
291 | "collapsed": false
292 | },
293 | "source": [
294 | "Now that we have all our data in the dataframe, we need to shape it into matrices to feed to TensorFlow."
295 | ]
296 | },
297 | {
298 | "cell_type": "code",
299 | "execution_count": 126,
300 | "metadata": {
301 | "collapsed": false,
302 | "scrolled": false
303 | },
304 | "outputs": [],
305 | "source": [
306 | "inputX = dataframe.loc[:, ['area', 'bathrooms']].as_matrix()\n",
307 | "inputY = dataframe.loc[:, [\"y1\", \"y2\"]].as_matrix()"
308 | ]
309 | },
310 | {
311 | "cell_type": "markdown",
312 | "metadata": {},
313 | "source": [
314 | "So now our input matrix looks like this:"
315 | ]
316 | },
317 | {
318 | "cell_type": "code",
319 | "execution_count": 127,
320 | "metadata": {
321 | "collapsed": false
322 | },
323 | "outputs": [
324 | {
325 | "data": {
326 | "text/plain": [
327 | "array([[ 2.10400000e+03, 3.00000000e+00],\n",
328 | " [ 1.60000000e+03, 3.00000000e+00],\n",
329 | " [ 2.40000000e+03, 3.00000000e+00],\n",
330 | " [ 1.41600000e+03, 2.00000000e+00],\n",
331 | " [ 3.00000000e+03, 4.00000000e+00],\n",
332 | " [ 1.98500000e+03, 4.00000000e+00],\n",
333 | " [ 1.53400000e+03, 3.00000000e+00],\n",
334 | " [ 1.42700000e+03, 3.00000000e+00],\n",
335 | " [ 1.38000000e+03, 3.00000000e+00],\n",
336 | " [ 1.49400000e+03, 3.00000000e+00]])"
337 | ]
338 | },
339 | "execution_count": 127,
340 | "metadata": {},
341 | "output_type": "execute_result"
342 | }
343 | ],
344 | "source": [
345 | "inputX"
346 | ]
347 | },
348 | {
349 | "cell_type": "markdown",
350 | "metadata": {},
351 | "source": [
352 | "And our labels matrix looks like this:"
353 | ]
354 | },
355 | {
356 | "cell_type": "code",
357 | "execution_count": 128,
358 | "metadata": {
359 | "collapsed": false
360 | },
361 | "outputs": [
362 | {
363 | "data": {
364 | "text/plain": [
365 | "array([[1, 0],\n",
366 | " [1, 0],\n",
367 | " [1, 0],\n",
368 | " [0, 1],\n",
369 | " [0, 1],\n",
370 | " [1, 0],\n",
371 | " [0, 1],\n",
372 | " [1, 0],\n",
373 | " [1, 0],\n",
374 | " [1, 0]])"
375 | ]
376 | },
377 | "execution_count": 128,
378 | "metadata": {},
379 | "output_type": "execute_result"
380 | }
381 | ],
382 | "source": [
383 | "inputY"
384 | ]
385 | },
386 | {
387 | "cell_type": "markdown",
388 | "metadata": {},
389 | "source": [
390 | "Let's prepare some parameters for the training process"
391 | ]
392 | },
393 | {
394 | "cell_type": "code",
395 | "execution_count": 164,
396 | "metadata": {
397 | "collapsed": false
398 | },
399 | "outputs": [],
400 | "source": [
401 | "# Parameters\n",
402 | "learning_rate = 0.000001\n",
403 | "training_epochs = 2000\n",
404 | "display_step = 50\n",
405 | "n_samples = inputY.size"
406 | ]
407 | },
408 | {
409 | "cell_type": "markdown",
410 | "metadata": {},
411 | "source": [
412 | "And now to define the TensorFlow operations. Notice that this is a declaration step where we tell TensorFlow how the prediction is calculated. Executing it performs no calculation; TensorFlow just acknowledges that it now knows how to do the operation."
413 | ]
414 | },
415 | {
416 | "cell_type": "code",
417 | "execution_count": 155,
418 | "metadata": {
419 | "collapsed": true
420 | },
421 | "outputs": [],
422 | "source": [
423 | "x = tf.placeholder(tf.float32, [None, 2]) # Okay TensorFlow, we'll feed you an array of examples. Each example will\n",
424 | " # be an array of two float values (area, and number of bathrooms).\n",
425 | " # \"None\" means we can feed you any number of examples\n",
426 | " # Notice we haven't fed it the values yet\n",
427 | " \n",
428 | "W = tf.Variable(tf.zeros([2, 2])) # Maintain a 2 x 2 float matrix for the weights that we'll keep updating \n",
429 | " # through the training process (make them all zero to begin with)\n",
430 | " \n",
431 | "b = tf.Variable(tf.zeros([2])) # Also maintain two bias values\n",
432 | "\n",
433 | "y_values = tf.add(tf.matmul(x, W), b) # The first step in calculating the prediction would be to multiply\n",
434 | " # the inputs matrix by the weights matrix then add the biases\n",
435 | " \n",
436 | "y = tf.nn.softmax(y_values) # Then we use softmax as an \"activation function\" that translates the\n",
437 | " # numbers outputted by the previous layer into probability form\n",
438 | " \n",
439 | "y_ = tf.placeholder(tf.float32, [None,2]) # For training purposes, we'll also feed you a matrix of labels"
440 | ]
441 | },
442 | {
443 | "cell_type": "markdown",
444 | "metadata": {},
445 | "source": [
446 | "Let's specify our cost function and use Gradient Descent"
447 | ]
448 | },
449 | {
450 | "cell_type": "code",
451 | "execution_count": 156,
452 | "metadata": {
453 | "collapsed": true
454 | },
455 | "outputs": [],
456 | "source": [
457 | "\n",
458 | "# Cost function: Mean squared error\n",
459 | "cost = tf.reduce_sum(tf.pow(y_ - y, 2))/(2*n_samples)\n",
460 | "# Gradient descent\n",
461 | "optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n"
462 | ]
463 | },
464 | {
465 | "cell_type": "code",
466 | "execution_count": 161,
467 | "metadata": {
468 | "collapsed": true
469 | },
470 | "outputs": [],
471 | "source": [
472 | "# Initialize variables and the TensorFlow session\n",
473 | "init = tf.initialize_all_variables()\n",
474 | "sess = tf.Session()\n",
475 | "sess.run(init)"
476 | ]
477 | },
478 | {
479 | "cell_type": "markdown",
480 | "metadata": {},
481 | "source": [
482 | "*Drum roll*\n",
483 | "\n",
484 | "And now for the actual training"
485 | ]
486 | },
487 | {
488 | "cell_type": "code",
489 | "execution_count": 162,
490 | "metadata": {
491 | "collapsed": false,
492 | "scrolled": false
493 | },
494 | "outputs": [
495 | {
496 | "name": "stdout",
497 | "output_type": "stream",
498 | "text": [
499 | "Training step: 0000 cost= 0.114958666\n",
500 | "Training step: 0050 cost= 0.109539941\n",
501 | "Training step: 0100 cost= 0.109539881\n",
502 | "Training step: 0150 cost= 0.109539807\n",
503 | "Training step: 0200 cost= 0.109539732\n",
504 | "Training step: 0250 cost= 0.109539673\n",
505 | "Training step: 0300 cost= 0.109539606\n",
506 | "Training step: 0350 cost= 0.109539531\n",
507 | "Training step: 0400 cost= 0.109539464\n",
508 | "Training step: 0450 cost= 0.109539405\n",
509 | "Training step: 0500 cost= 0.109539330\n",
510 | "Training step: 0550 cost= 0.109539248\n",
511 | "Training step: 0600 cost= 0.109539196\n",
512 | "Training step: 0650 cost= 0.109539129\n",
513 | "Training step: 0700 cost= 0.109539054\n",
514 | "Training step: 0750 cost= 0.109538987\n",
515 | "Training step: 0800 cost= 0.109538913\n",
516 | "Training step: 0850 cost= 0.109538853\n",
517 | "Training step: 0900 cost= 0.109538779\n",
518 | "Training step: 0950 cost= 0.109538712\n",
519 | "Training step: 1000 cost= 0.109538652\n",
520 | "Training step: 1050 cost= 0.109538577\n",
521 | "Training step: 1100 cost= 0.109538510\n",
522 | "Training step: 1150 cost= 0.109538436\n",
523 | "Training step: 1200 cost= 0.109538376\n",
524 | "Training step: 1250 cost= 0.109538302\n",
525 | "Training step: 1300 cost= 0.109538242\n",
526 | "Training step: 1350 cost= 0.109538175\n",
527 | "Training step: 1400 cost= 0.109538093\n",
528 | "Training step: 1450 cost= 0.109538034\n",
529 | "Training step: 1500 cost= 0.109537959\n",
530 | "Training step: 1550 cost= 0.109537885\n",
531 | "Training step: 1600 cost= 0.109537825\n",
532 | "Training step: 1650 cost= 0.109537765\n",
533 | "Training step: 1700 cost= 0.109537683\n",
534 | "Training step: 1750 cost= 0.109537624\n",
535 | "Training step: 1800 cost= 0.109537557\n",
536 | "Training step: 1850 cost= 0.109537482\n",
537 | "Training step: 1900 cost= 0.109537423\n",
538 | "Training step: 1950 cost= 0.109537341\n",
539 | "Optimization Finished!\n",
540 | "Training cost= 0.109537 W= [[ 2.14149564e-04 -2.14149914e-04]\n",
541 | " [ 5.12748193e-05 -5.12747974e-05]] b= [ 1.19155184e-05 -1.19155284e-05] \n",
542 | "\n"
543 | ]
544 | }
545 | ],
546 | "source": [
547 | "for i in range(training_epochs): \n",
548 | " sess.run(optimizer, feed_dict={x: inputX, y_: inputY}) # Take a gradient descent step using our inputs and labels\n",
549 | "\n",
550 | " # That's all! The rest of the cell just outputs debug messages. \n",
551 | " # Display logs per epoch step\n",
552 | " if (i) % display_step == 0:\n",
553 | " cc = sess.run(cost, feed_dict={x: inputX, y_:inputY})\n",
554 | " print \"Training step:\", '%04d' % (i), \"cost=\", \"{:.9f}\".format(cc) #, \\\"W=\", sess.run(W), \"b=\", sess.run(b)\n",
555 | "\n",
556 | "print \"Optimization Finished!\"\n",
557 | "training_cost = sess.run(cost, feed_dict={x: inputX, y_: inputY})\n",
558 | "print \"Training cost=\", training_cost, \"W=\", sess.run(W), \"b=\", sess.run(b), '\\n'\n"
559 | ]
560 | },
561 | {
562 | "cell_type": "markdown",
563 | "metadata": {
564 | "collapsed": true
565 | },
566 | "source": [
567 | "Now the training is done. TensorFlow is now holding on to our trained model (Which is basically just the defined operations, plus the variables W and b that resulted from the training process).\n",
568 | "\n",
569 | "Is a cost value of 0.109537 good or bad? I have no idea. At least it's better than the first cost value of 0.114958666. Let's use the model on our dataset to see how it does, though:"
570 | ]
571 | },
572 | {
573 | "cell_type": "code",
574 | "execution_count": 159,
575 | "metadata": {
576 | "collapsed": false
577 | },
578 | "outputs": [
579 | {
580 | "data": {
581 | "text/plain": [
582 | "array([[ 0.71125221, 0.28874779],\n",
583 | " [ 0.66498977, 0.33501023],\n",
584 | " [ 0.73657656, 0.26342347],\n",
585 | " [ 0.64718789, 0.35281211],\n",
586 | " [ 0.78335613, 0.2166439 ],\n",
587 | " [ 0.70069474, 0.29930523],\n",
588 | " [ 0.65866327, 0.34133676],\n",
589 | " [ 0.64828628, 0.35171372],\n",
590 | " [ 0.64368278, 0.35631716],\n",
591 | " [ 0.65480113, 0.3451989 ]], dtype=float32)"
592 | ]
593 | },
594 | "execution_count": 159,
595 | "metadata": {},
596 | "output_type": "execute_result"
597 | }
598 | ],
599 | "source": [
600 | "sess.run(y, feed_dict={x: inputX })"
601 | ]
602 | },
603 | {
604 | "cell_type": "markdown",
605 | "metadata": {},
606 | "source": [
607 | "So it's guessing they're all good houses. That gets 7/10 correct. Not terribly impressive. A model with a hidden layer should do better, I guess."
608 | ]
609 | },
610 | {
611 | "cell_type": "markdown",
612 | "metadata": {
613 | "collapsed": false
614 | },
615 | "source": [
616 | "By the way, this is how I calculated the softmax values in the post:"
617 | ]
618 | },
619 | {
620 | "cell_type": "code",
621 | "execution_count": 163,
622 | "metadata": {
623 | "collapsed": false
624 | },
625 | "outputs": [
626 | {
627 | "data": {
628 | "text/plain": [
629 | "array([ 0.26894143, 0.7310586 ], dtype=float32)"
630 | ]
631 | },
632 | "execution_count": 163,
633 | "metadata": {},
634 | "output_type": "execute_result"
635 | }
636 | ],
637 | "source": [
638 | "sess.run(tf.nn.softmax([1., 2.]))"
639 | ]
640 | }
641 | ],
642 | "metadata": {
643 | "anaconda-cloud": {},
644 | "kernelspec": {
645 | "display_name": "Python [Root]",
646 | "language": "python",
647 | "name": "Python [Root]"
648 | },
649 | "language_info": {
650 | "codemirror_mode": {
651 | "name": "ipython",
652 | "version": 2
653 | },
654 | "file_extension": ".py",
655 | "mimetype": "text/x-python",
656 | "name": "python",
657 | "nbconvert_exporter": "python",
658 | "pygments_lexer": "ipython2",
659 | "version": "2.7.12"
660 | }
661 | },
662 | "nbformat": 4,
663 | "nbformat_minor": 0
664 | }
665 |
--------------------------------------------------------------------------------