├── Chapter02
├── Python 2.7
│ ├── computation_model.py
│ ├── data_model.py
│ ├── feeding_parameters.py
│ ├── fetching_parameters_1.py
│ ├── programming_model.py
│ ├── single_neuron_model_1.py
│ ├── tensor_flow_counter_1.py
│ └── tensor_with_numpy_1.py
├── Python 3.5
│ ├── computation_model.py
│ ├── data_model.py
│ ├── feeding_parameters.py
│ ├── fetching_parameters_1.py
│ ├── programming_model.py
│ ├── single_neuron_model_1.py
│ ├── tensor_flow_counter_1.py
│ └── tensor_with_numpy_1.py
└── Screenshots
│ ├── computations_model.png
│ ├── data_model.png
│ ├── feeding_parameters.png
│ ├── fetching_paramter.png
│ ├── programming_model.png
│ ├── single_input_neuron.png
│ ├── tensor_flow_counter.png
│ └── tensorflow_with_numpy.png
├── Chapter03
├── Python 2.7
│ ├── five_layers_relu_1.py
│ ├── five_layers_relu_dropout_1.py
│ ├── five_layers_sigmoid_1.py
│ ├── softmax_classifier_1.py
│ ├── softmax_model_loader_1.py
│ └── softmax_model_saver_1.py
├── Python 3.5
│ ├── five_layers_relu_1.py
│ ├── five_layers_relu_dropout_1.py
│ ├── five_layers_sigmoid_1.py
│ ├── softmax_classifier_1.py
│ ├── softmax_model_loader_1.py
│ └── softmax_model_saver_1.py
└── Screenshots
│ ├── five_layers_relu_dropout.png
│ ├── five_layers_relu_png.png
│ ├── five_layers_sigmod.png
│ ├── softmax_classiifer_1.png
│ └── softmax_cmodel_saver_1.png
├── Chapter04
├── EMOTION_CNN
│ ├── EmotionDetector
│ │ ├── test.csv
│ │ └── train.csv
│ ├── Python 2.7
│ │ ├── EmotionDetectorUtils.py
│ │ ├── EmotionDetector_1.py
│ │ └── test_your_image.py
│ ├── Python 3.5
│ │ ├── EmotionDetectorUtils.py
│ │ ├── EmotionDetector_1.py
│ │ ├── __init__.py
│ │ └── test_your_image.py
│ ├── Screenshots
│ │ ├── Emotion Detector.png
│ │ ├── Test your image _1.png
│ │ └── Test your image _2.png
│ └── author_img.jpg
└── MNIST_CNN
│ ├── Python 2.7
│ └── mnist_cnn_1.py
│ ├── Python 3.5
│ └── mnist_cnn_1.py
│ └── Screenshots
│ └── mnist_cnn.png
├── Chapter05
├── Python 2.7
│ ├── Convlutional_AutoEncoder.py
│ ├── autoencoder_1.py
│ ├── deconvolutional_autoencoder_1.py
│ └── denoising_autoencoder_1.py
├── Python 3.5
│ ├── Convlutional_AutoEncoder.py
│ ├── __init__.py
│ ├── autoencoder_1.py
│ ├── deconvolutional_autoencoder_1.py
│ └── denoising_autoencoder_1.py
└── Screenshots
│ ├── autoencoder.png
│ ├── deconvolutional_autoencoder.png
│ └── denoising_autoencoder.png
├── Chapter06
├── Python 2.7
│ ├── LSTM_model_1.py
│ ├── __init__.py
│ └── bidirectional_RNN_1.py
├── Python 3.5
│ ├── LSTM_model_1.py
│ ├── __init__.py
│ └── bidirectional_RNN_1.py
└── Screenshots
│ ├── Bidirectional_RNN_shot1.png
│ ├── Bidirectional_RNN_shot2.png
│ ├── LSTM_shot1.png
│ └── LSTM_shot2.png
├── Chapter07
├── Python 2.7
│ ├── gpu_computing_with_multiple_GPU.py
│ ├── gpu_example.py
│ └── gpu_soft_placemnet_1.py
├── Python 3.5
│ ├── gpu_computing_with_multiple_GPU.py
│ ├── gpu_example.py
│ └── gpu_soft_placemnet_1.py
└── Screenshots
│ ├── gpu_computing_with_multiple_GPU.png
│ ├── gpu_example.png
│ └── gpu_soft_placemnet_1.png
├── Chapter08
├── Python 2.7
│ ├── digit_classifier.py
│ ├── keras_movie_classifier_1.py
│ ├── keras_movie_classifier_using_convLayer_1.py
│ ├── pretty_tensor_digit_1.py
│ └── tflearn_titanic_classifier.py
├── Python 3.5
│ ├── __init__.py
│ ├── digit_classifier.py
│ ├── keras_movie_classifier_1.py
│ ├── keras_movie_classifier_using_convLayer_1.py
│ ├── pretty_tensor_digit_1.py
│ └── tflearn_titanic_classifier.py
├── Screenshots
│ ├── Digit classiifcation.png
│ ├── Digit classiifcation2.png
│ ├── Keras_movie_classifier.png
│ ├── pretty_tensor_snap1.png
│ ├── pretty_tensor_snap2.png
│ └── tflearn_titanic_clasiifer.png
└── data
│ └── titanic_dataset.csv
├── Chapter09
├── Python 2.7
│ └── classify_image.py
├── Python 3.5
│ └── classify_image.py
└── Screenshots
│ ├── classification_daisy_flower.png
│ ├── flowers_model_training.png
│ └── gpu_computing_with_multiple_GPU.png
├── Chapter10
├── Python 2.7
│ ├── FrozenLake_1.py
│ └── Q_Learning_1.py
├── Python 3.5
│ ├── FrozenLake_1.py
│ └── Q_Learning_1.py
└── Screeshots
│ ├── FrozenLake.png
│ └── Q_Learning.png
├── LICENSE
└── README.md
/Chapter02/Python 2.7/computation_model.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | with tf.Session() as session:
3 |     x = tf.placeholder(tf.float32,[1],name="x")
4 |     y = tf.placeholder(tf.float32,[1],name="y")
5 |     z = tf.constant(2.0)
6 |     y = x * z
7 |     x_in = [100]
8 |     y_output = session.run(y,{x:x_in})
9 |     print(y_output)
10 |
--------------------------------------------------------------------------------
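
Note on computation_model.py (and the identical programming_model.py): the placeholder y created on line 4 is immediately rebound by y = x * z, so it never has to be fed; only x does. A minimal sketch of the same graph without the redundant placeholder, assuming the TensorFlow 1.x API used throughout this repository:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [1], name="x")   # value supplied at run time
    z = tf.constant(2.0)                            # multiplier baked into the graph
    y = x * z                                       # an ordinary tensor, not a placeholder

    with tf.Session() as session:
        print(session.run(y, {x: [100]}))           # [ 200.]
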
/Chapter02/Python 2.7/data_model.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | scalar = tf.constant(100)
4 | vector = tf.constant([1,2,3,4,5])
5 | matrix = tf.constant([[1,2,3],[4,5,6]])
6 | cube_matrix = tf.constant([[[1],[2],[3]],[[4],[5],[6]],[[7],[8],[9]]])
7 |
8 |
9 | print(scalar.get_shape())
10 | print(vector.get_shape())
11 | print(matrix.get_shape())
12 | print(cube_matrix.get_shape())
13 |
14 | """
15 | >>>
16 | ()
17 | (5,)
18 | (2, 3)
19 | (3, 3, 1)
20 | >>>
21 | """
22 |
23 |
--------------------------------------------------------------------------------
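
The four constants in data_model.py are tensors of rank 0 through 3, and get_shape() returns the static shape recorded in the graph. As a small illustration (a sketch, not part of the original scripts), the rank can also be evaluated at run time with tf.rank:

    import tensorflow as tf

    matrix = tf.constant([[1, 2, 3], [4, 5, 6]])
    with tf.Session() as sess:
        print(matrix.get_shape())         # (2, 3) -- static shape, no session needed
        print(sess.run(tf.rank(matrix)))  # 2      -- rank computed when the graph runs
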
/Chapter02/Python 2.7/feeding_parameters.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 |
4 | a = 3
5 | b = 2
6 |
7 |
8 | x = tf.placeholder(tf.float32,shape=(a,b))
9 | y = tf.add(x,x)
10 |
11 | data = np.random.rand(a,b)
12 |
13 | sess = tf.Session()
14 |
15 | print sess.run(y,feed_dict={x:data})
16 |
17 |
--------------------------------------------------------------------------------
/Chapter02/Python 2.7/fetching_parameters_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | constant_A = tf.constant([100.0])
4 | constant_B = tf.constant([300.0])
5 | constant_C = tf.constant([3.0])
6 |
7 | sum_ = tf.add(constant_A,constant_B)
8 | mul_ = tf.multiply(constant_A,constant_C)
9 |
10 | with tf.Session() as sess:
11 |     result = sess.run([sum_,mul_])
12 |     print(result)
13 |
14 |
15 | """
16 | >>>
17 | [array([ 400.], dtype=float32), array([ 300.], dtype=float32)]
18 | >>>
19 | """
20 |
--------------------------------------------------------------------------------
/Chapter02/Python 2.7/programming_model.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | with tf.Session() as session:
3 |     x = tf.placeholder(tf.float32,[1],name="x")
4 |     y = tf.placeholder(tf.float32,[1],name="y")
5 |     z = tf.constant(2.0)
6 |     y = x * z
7 |     x_in = [100]
8 |     y_output = session.run(y,{x:x_in})
9 |     print(y_output)
10 |
--------------------------------------------------------------------------------
/Chapter02/Python 2.7/single_neuron_model_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | weight = tf.Variable(1.0,name="weight")
4 | input_value = tf.constant(0.5,name="input_value")
5 | expected_output = tf.constant(0.0,name="expected_output")
6 | model = tf.multiply(input_value,weight,"model")
7 | loss_function = tf.pow(expected_output - model,2,name="loss_function")
8 |
9 | optimizer = tf.train.GradientDescentOptimizer(0.025).minimize(loss_function)
10 |
11 | for value in [input_value,weight,expected_output,model,loss_function]:
12 |     tf.summary.scalar(value.op.name,value)
13 |
14 | summaries = tf.summary.merge_all()
15 | sess = tf.Session()
16 |
17 | summary_writer = tf.summary.FileWriter('log_simple_stats',sess.graph)
18 |
19 | sess.run(tf.global_variables_initializer())
20 | for i in range(100):
21 |     summary_writer.add_summary(sess.run(summaries),i)
22 |     sess.run(optimizer)
23 |
--------------------------------------------------------------------------------
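
In single_neuron_model_1.py the loss is (expected_output - weight * input_value)^2 with input_value = 0.5 and expected_output = 0.0, so each gradient-descent step with learning rate 0.025 multiplies the weight by (1 - 2 * 0.025 * 0.5^2) = 0.9875. A plain-Python sketch of the same update, handy for checking the curve TensorBoard should display after the 100 iterations above:

    learning_rate = 0.025
    input_value = 0.5
    expected_output = 0.0
    weight = 1.0
    for step in range(100):
        grad = 2.0 * input_value * (weight * input_value - expected_output)  # d/dw of the squared error
        weight -= learning_rate * grad
    print(weight)  # about 0.28, i.e. 0.9875 ** 100
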
/Chapter02/Python 2.7/tensor_flow_counter_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | value = tf.Variable(0,name="value")
4 | one = tf.constant(1)
5 | new_value = tf.add(value,one)
6 | update_value=tf.assign(value,new_value)
7 |
8 | initialize_var = tf.global_variables_initializer()
9 |
10 | with tf.Session() as sess:
11 |     sess.run(initialize_var)
12 |     print(sess.run(value))
13 |     for _ in range(10):
14 |         sess.run(update_value)
15 |         print(sess.run(value))
16 |
17 | """
18 | >>>
19 | 0
20 | 1
21 | 2
22 | 3
23 | 4
24 | 5
25 | 6
26 | 7
27 | 8
28 | 9
29 | 10
30 | >>>
31 | """
32 |
--------------------------------------------------------------------------------
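
The counter builds its update out of tf.add plus tf.assign; tf.assign_add expresses the same increment as a single op. A minimal equivalent sketch under the same TensorFlow 1.x API:

    import tensorflow as tf

    value = tf.Variable(0, name="value")
    update_value = tf.assign_add(value, 1)   # value <- value + 1

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(value))               # 0
        for _ in range(10):
            print(sess.run(update_value))    # 1 ... 10
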
/Chapter02/Python 2.7/tensor_with_numpy_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 |
4 | # 1D tensor with constant values
5 | tensor_1d = np.array([1,2,3,4,5,6,7,8,9,10])
6 | tensor_1d = tf.constant(tensor_1d)
7 | with tf.Session() as sess:
8 |     print (tensor_1d.get_shape())
9 |     print sess.run(tensor_1d)
10 |
11 | # 2D tensor with variable values
12 | tensor_2d = np.array([(1,2,3),(4,5,6),(7,8,9)])
13 | tensor_2d = tf.Variable(tensor_2d)
14 | with tf.Session() as sess:
15 |     sess.run(tf.global_variables_initializer())
16 |     print (tensor_2d.get_shape())
17 |     print sess.run(tensor_2d)
18 |
19 |
20 | tensor_3d = np.array([[[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8]],
21 | [[ 9, 10, 11],[12, 13, 14],[15, 16, 17]],
22 | [[18, 19, 20],[21, 22, 23],[24, 25, 26]]])
23 |
24 | tensor_3d = tf.convert_to_tensor(tensor_3d, dtype=tf.float64)
25 | with tf.Session() as sess:
26 |     print (tensor_3d.get_shape())
27 |     print sess.run(tensor_3d)
28 |
29 |
30 | interactive_session = tf.InteractiveSession()
31 | tensor = np.array([1,2,3,4,5])
32 | tensor = tf.constant(tensor)
33 | print(tensor.eval())
34 | interactive_session.close()
35 |
36 | """
37 | Python 2.7.10 (default, Oct 14 2015, 16:09:02)
38 | [GCC 5.2.1 20151010] on linux2
39 | Type "copyright", "credits" or "license()" for more information.
40 | >>> ================================ RESTART ================================
41 | >>>
42 | (10,)
43 | [ 1 2 3 4 5 6 7 8 9 10]
44 | (3, 3)
45 | [[1 2 3]
46 | [4 5 6]
47 | [7 8 9]]
48 | (3, 3, 3)
49 | [[[ 0. 1. 2.]
50 | [ 3. 4. 5.]
51 | [ 6. 7. 8.]]
52 |
53 | [[ 9. 10. 11.]
54 | [ 12. 13. 14.]
55 | [ 15. 16. 17.]]
56 |
57 | [[ 18. 19. 20.]
58 | [ 21. 22. 23.]
59 | [ 24. 25. 26.]]]
60 | [1 2 3 4 5]
61 | >>>
62 | """
63 |
64 |
65 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/computation_model.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | with tf.Session() as session:
3 |     x = tf.placeholder(tf.float32, [1], name="x")
4 |     y = tf.placeholder(tf.float32, [1], name="y")
5 |     z = tf.constant(2.0)
6 |     y = x * z
7 |     x_in = [100]
8 |     y_output = session.run(y, {x: x_in})
9 |     print(y_output)
10 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/data_model.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | scalar = tf.constant(100)
4 | vector = tf.constant([1, 2, 3, 4, 5])
5 | matrix = tf.constant([[1, 2, 3], [4, 5, 6]])
6 | cube_matrix = tf.constant([[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]])
7 |
8 | print(scalar.get_shape())
9 | print(vector.get_shape())
10 | print(matrix.get_shape())
11 | print(cube_matrix.get_shape())
12 |
13 | """
14 | >>>
15 | ()
16 | (5,)
17 | (2, 3)
18 | (3, 3, 1)
19 | >>>
20 | """
21 |
22 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/feeding_parameters.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 |
4 | a = 3
5 | b = 2
6 |
7 | x = tf.placeholder(tf.float32, shape=(a, b))
8 | y = tf.add(x, x)
9 |
10 | data = np.random.rand(a, b)
11 |
12 | sess = tf.Session()
13 |
14 | print(sess.run(y,feed_dict={x: data}))
15 |
16 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/fetching_parameters_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | constant_A = tf.constant([100.0])
4 | constant_B = tf.constant([300.0])
5 | constant_C = tf.constant([3.0])
6 |
7 | sum_ = tf.add(constant_A, constant_B)
8 | mul_ = tf.multiply(constant_A, constant_C)
9 |
10 | with tf.Session() as sess:
11 |     result = sess.run([sum_, mul_])
12 |     print(result)
13 |
14 |
15 | """
16 | >>>
17 | [array([ 400.], dtype=float32), array([ 300.], dtype=float32)]
18 | >>>
19 | """
20 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/programming_model.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | with tf.Session() as session:
4 |     x = tf.placeholder(tf.float32, [1], name="x")
5 |     y = tf.placeholder(tf.float32, [1], name="y")
6 |     z = tf.constant(2.0)
7 |     y = x * z
8 |
9 |     x_in = [100]
10 |     y_output = session.run(y, {x: x_in})
11 |     print(y_output)
12 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/single_neuron_model_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | weight = tf.Variable(1.0, name="weight")
4 | input_value = tf.constant(0.5, name="input_value")
5 | expected_output = tf.constant(0.0, name="expected_output")
6 | model = tf.multiply(input_value,weight, "model")
7 | loss_function = tf.pow(expected_output - model, 2, name="loss_function")
8 |
9 | optimizer = tf.train.GradientDescentOptimizer(0.025).minimize(loss_function)
10 |
11 | for value in [input_value, weight, expected_output, model, loss_function]:
12 |     tf.summary.scalar(value.op.name, value)
13 |
14 | summaries = tf.summary.merge_all()
15 | sess = tf.Session()
16 |
17 | summary_writer = tf.summary.FileWriter('log_simple_stats', sess.graph)
18 |
19 | sess.run(tf.global_variables_initializer())
20 |
21 | for i in range(100):
22 |     summary_writer.add_summary(sess.run(summaries), i)
23 |     sess.run(optimizer)
24 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/tensor_flow_counter_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 |
3 | value = tf.Variable(0, name="value")
4 | one = tf.constant(1)
5 | new_value = tf.add(value, one)
6 | update_value = tf.assign(value, new_value)
7 |
8 | initialize_var = tf.global_variables_initializer()
9 |
10 | with tf.Session() as sess:
11 |     sess.run(initialize_var)
12 |     print(sess.run(value))
13 |     for _ in range(10):
14 |         sess.run(update_value)
15 |         print(sess.run(value))
16 |
17 | """
18 | >>>
19 | 0
20 | 1
21 | 2
22 | 3
23 | 4
24 | 5
25 | 6
26 | 7
27 | 8
28 | 9
29 | 10
30 | >>>
31 | """
32 |
--------------------------------------------------------------------------------
/Chapter02/Python 3.5/tensor_with_numpy_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 |
4 | # 1D tensor with constant values
5 | tensor_1d = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
6 | tensor_1d = tf.constant(tensor_1d)
7 | with tf.Session() as sess:
8 |     print(tensor_1d.get_shape())
9 |     print(sess.run(tensor_1d))
10 |
11 | # 2D tensor with variable values
12 | tensor_2d = np.array([(1, 2, 3), (4, 5, 6), (7, 8, 9)])
13 | tensor_2d = tf.Variable(tensor_2d)
14 | with tf.Session() as sess:
15 |     sess.run(tf.global_variables_initializer())
16 |     print(tensor_2d.get_shape())
17 |     print(sess.run(tensor_2d))
18 |
19 |
20 | tensor_3d = np.array([[[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8]],
21 | [[ 9, 10, 11],[12, 13, 14],[15, 16, 17]],
22 | [[18, 19, 20],[21, 22, 23],[24, 25, 26]]])
23 |
24 | tensor_3d = tf.convert_to_tensor(tensor_3d, dtype=tf.float64)
25 | with tf.Session() as sess:
26 |     print(tensor_3d.get_shape())
27 |     print(sess.run(tensor_3d))
28 |
29 |
30 | interactive_session = tf.InteractiveSession()
31 | tensor = np.array([1, 2, 3, 4, 5])
32 | tensor = tf.constant(tensor)
33 | print(tensor.eval())
34 | interactive_session.close()
35 |
36 | """
37 | Python 2.7.10 (default, Oct 14 2015, 16:09:02)
38 | [GCC 5.2.1 20151010] on linux2
39 | Type "copyright", "credits" or "license()" for more information.
40 | >>> ================================ RESTART ================================
41 | >>>
42 | (10,)
43 | [ 1 2 3 4 5 6 7 8 9 10]
44 | (3, 3)
45 | [[1 2 3]
46 | [4 5 6]
47 | [7 8 9]]
48 | (3, 3, 3)
49 | [[[ 0. 1. 2.]
50 | [ 3. 4. 5.]
51 | [ 6. 7. 8.]]
52 |
53 | [[ 9. 10. 11.]
54 | [ 12. 13. 14.]
55 | [ 15. 16. 17.]]
56 |
57 | [[ 18. 19. 20.]
58 | [ 21. 22. 23.]
59 | [ 24. 25. 26.]]]
60 | [1 2 3 4 5]
61 | >>>
62 | """
63 |
64 |
65 |
--------------------------------------------------------------------------------
/Chapter02/Screenshots/computations_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/computations_model.png
--------------------------------------------------------------------------------
/Chapter02/Screenshots/data_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/data_model.png
--------------------------------------------------------------------------------
/Chapter02/Screenshots/feeding_parameters.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/feeding_parameters.png
--------------------------------------------------------------------------------
/Chapter02/Screenshots/fetching_paramter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/fetching_paramter.png
--------------------------------------------------------------------------------
/Chapter02/Screenshots/programming_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/programming_model.png
--------------------------------------------------------------------------------
/Chapter02/Screenshots/single_input_neuron.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/single_input_neuron.png
--------------------------------------------------------------------------------
/Chapter02/Screenshots/tensor_flow_counter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/tensor_flow_counter.png
--------------------------------------------------------------------------------
/Chapter02/Screenshots/tensorflow_with_numpy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter02/Screenshots/tensorflow_with_numpy.png
--------------------------------------------------------------------------------
/Chapter03/Python 2.7/five_layers_relu_1.py:
--------------------------------------------------------------------------------
1 | from tensorflow.examples.tutorials.mnist import input_data
2 | import tensorflow as tf
3 | import math
4 |
5 | logs_path = 'log_simple_stats_5_layers_relu_softmax'
6 | batch_size = 100
7 | learning_rate = 0.5
8 | training_epochs = 10
9 |
10 | mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
11 |
12 | X = tf.placeholder(tf.float32, [None, 784])
13 | Y_ = tf.placeholder(tf.float32, [None, 10])
14 | lr = tf.placeholder(tf.float32)
15 |
16 |
17 | L = 200
18 | M = 100
19 | N = 60
20 | O = 30
21 |
22 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))
23 | B1 = tf.Variable(tf.ones([L])/10)
24 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
25 | B2 = tf.Variable(tf.ones([M])/10)
26 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
27 | B3 = tf.Variable(tf.ones([N])/10)
28 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
29 | B4 = tf.Variable(tf.ones([O])/10)
30 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
31 | B5 = tf.Variable(tf.zeros([10]))
32 |
33 |
34 | XX = tf.reshape(X, [-1, 784])
35 | Y1 = tf.nn.relu(tf.matmul(XX, W1) + B1)
36 | Y2 = tf.nn.relu(tf.matmul(Y1, W2) + B2)
37 | Y3 = tf.nn.relu(tf.matmul(Y2, W3) + B3)
38 | Y4 = tf.nn.relu(tf.matmul(Y3, W4) + B4)
39 | Ylogits = tf.matmul(Y4, W5) + B5
40 | Y = tf.nn.softmax(Ylogits)
41 |
42 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
43 | cross_entropy = tf.reduce_mean(cross_entropy)*100
44 |
45 |
46 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
47 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
48 |
49 |
50 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
51 |
52 | tf.summary.scalar("cost", cross_entropy)
53 | tf.summary.scalar("accuracy", accuracy)
54 | summary_op = tf.summary.merge_all()
55 |
56 | init = tf.global_variables_initializer()
57 | sess = tf.Session()
58 | sess.run(init)
59 |
60 | with tf.Session() as sess:
61 |     sess.run(tf.global_variables_initializer())
62 |     writer = tf.summary.FileWriter(logs_path, \
63 |                                    graph=tf.get_default_graph())
64 |     for epoch in range(training_epochs):
65 |         batch_count = int(mnist.train.num_examples/batch_size)
66 |         for i in range(batch_count):
67 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
68 |             max_learning_rate = 0.003
69 |             min_learning_rate = 0.0001
70 |             decay_speed = 2000.0  # float, so the exponent is not truncated by Python 2 integer division
71 |             learning_rate = min_learning_rate+\
72 |                             (max_learning_rate - min_learning_rate)\
73 |                             * math.exp(-i/decay_speed)
74 |             _, summary = sess.run([train_step, summary_op],\
75 |                                   {X: batch_x, Y_: batch_y,\
76 |                                   lr: learning_rate})
77 |             writer.add_summary(summary,\
78 |                                epoch * batch_count + i)
79 |         #if epoch % 2 == 0:
80 |         print("Epoch: ", epoch)
81 |
82 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
83 |     print("done")
84 |
85 |
--------------------------------------------------------------------------------
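
The learning rate fed to AdamOptimizer in five_layers_relu_1.py decays exponentially from max_learning_rate towards min_learning_rate, lr(i) = min + (max - min) * exp(-i / decay_speed), recomputed for every mini-batch i. A stand-alone sketch of the schedule with the same constants, useful for printing a few sample values:

    import math

    max_learning_rate = 0.003
    min_learning_rate = 0.0001
    decay_speed = 2000.0

    def decayed_lr(i):
        # i is the mini-batch index within the current epoch
        return min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i / decay_speed)

    for i in (0, 100, 500):
        print(i, decayed_lr(i))   # decays from 0.003 towards 0.0001
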
/Chapter03/Python 2.7/five_layers_relu_dropout_1.py:
--------------------------------------------------------------------------------
1 | from tensorflow.examples.tutorials.mnist import input_data
2 | import tensorflow as tf
3 | import math
4 |
5 | logs_path = 'log_simple_stats_5_lyers_dropout'
6 | batch_size = 100
7 | learning_rate = 0.5
8 | training_epochs = 10
9 |
10 | mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
11 |
12 | X = tf.placeholder(tf.float32, [None, 784])
13 | Y_ = tf.placeholder(tf.float32, [None, 10])
14 | lr = tf.placeholder(tf.float32)
15 | pkeep = tf.placeholder(tf.float32)
16 |
17 | L = 200
18 | M = 100
19 | N = 60
20 | O = 30
21 |
22 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))
23 | B1 = tf.Variable(tf.ones([L])/10)
24 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
25 | B2 = tf.Variable(tf.ones([M])/10)
26 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
27 | B3 = tf.Variable(tf.ones([N])/10)
28 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
29 | B4 = tf.Variable(tf.ones([O])/10)
30 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
31 | B5 = tf.Variable(tf.zeros([10]))
32 |
33 | XX = tf.reshape(X, [-1, 28*28])
34 |
35 | Y1 = tf.nn.relu(tf.matmul(XX, W1) + B1)
36 | Y1d = tf.nn.dropout(Y1, pkeep)
37 |
38 | Y2 = tf.nn.relu(tf.matmul(Y1d, W2) + B2)
39 | Y2d = tf.nn.dropout(Y2, pkeep)
40 |
41 | Y3 = tf.nn.relu(tf.matmul(Y2d, W3) + B3)
42 | Y3d = tf.nn.dropout(Y3, pkeep)
43 |
44 | Y4 = tf.nn.relu(tf.matmul(Y3d, W4) + B4)
45 | Y4d = tf.nn.dropout(Y4, pkeep)
46 |
47 | Ylogits = tf.matmul(Y4d, W5) + B5
48 | Y = tf.nn.softmax(Ylogits)
49 |
50 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
51 | cross_entropy = tf.reduce_mean(cross_entropy)*100
52 |
53 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
54 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
55 |
56 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
57 |
58 | tf.summary.scalar("cost", cross_entropy)
59 | tf.summary.scalar("accuracy", accuracy)
60 | summary_op = tf.summary.merge_all()
61 |
62 | init = tf.global_variables_initializer()
63 | sess = tf.Session()
64 | sess.run(init)
65 |
66 |
67 | with tf.Session() as sess:
68 |     sess.run(tf.global_variables_initializer())
69 |     writer = tf.summary.FileWriter(logs_path, \
70 |                                    graph=tf.get_default_graph())
71 |     for epoch in range(training_epochs):
72 |         batch_count = int(mnist.train.num_examples/batch_size)
73 |         for i in range(batch_count):
74 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
75 |             max_learning_rate = 0.003
76 |             min_learning_rate = 0.0001
77 |             decay_speed = 2000.0  # float, so the exponent is not truncated by Python 2 integer division
78 |             learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
79 |             _, summary = sess.run([train_step, summary_op], {X: batch_x, Y_: batch_y, pkeep: 0.75, lr: learning_rate})
80 |             writer.add_summary(summary,\
81 |                                epoch * batch_count + i)
82 |         print "Epoch: ", epoch
83 |
84 |     print "Accuracy: ", accuracy.eval\
85 |         (feed_dict={X: mnist.test.images, Y_: mnist.test.labels, pkeep: 1.0})  # dropout disabled at evaluation time
86 |     print "done"
87 |
88 |
--------------------------------------------------------------------------------
/Chapter03/Python 2.7/five_layers_sigmoid_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.examples.tutorials.mnist import input_data
3 | import math
4 |
5 | logs_path = 'log_simple_stats_5_layers_sigmoid'
6 | batch_size = 100
7 | learning_rate = 0.5
8 | training_epochs = 10
9 |
10 | mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
11 | X = tf.placeholder(tf.float32, [None, 784])
12 | Y_ = tf.placeholder(tf.float32, [None, 10])
13 |
14 | L = 200
15 | M = 100
16 | N = 60
17 | O = 30
18 |
19 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))
20 | B1 = tf.Variable(tf.zeros([L]))
21 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
22 | B2 = tf.Variable(tf.zeros([M]))
23 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
24 | B3 = tf.Variable(tf.zeros([N]))
25 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
26 | B4 = tf.Variable(tf.zeros([O]))
27 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
28 | B5 = tf.Variable(tf.zeros([10]))
29 |
30 |
31 | XX = tf.reshape(X, [-1, 784])
32 | Y1 = tf.nn.sigmoid(tf.matmul(XX, W1) + B1)
33 | Y2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)
34 | Y3 = tf.nn.sigmoid(tf.matmul(Y2, W3) + B3)
35 | Y4 = tf.nn.sigmoid(tf.matmul(Y3, W4) + B4)
36 | Ylogits = tf.matmul(Y4, W5) + B5
37 | Y = tf.nn.softmax(Ylogits)
38 |
39 |
40 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
41 | cross_entropy = tf.reduce_mean(cross_entropy)*100
42 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
43 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
44 | learning_rate = 0.003
45 | train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
46 | tf.summary.scalar("cost", cross_entropy)
47 | tf.summary.scalar("accuracy", accuracy)
48 | summary_op = tf.summary.merge_all()
49 |
50 |
51 |
52 | init = tf.global_variables_initializer()
53 | sess = tf.Session()
54 | sess.run(init)
55 |
56 | with tf.Session() as sess:
57 |     sess.run(tf.global_variables_initializer())
58 |     writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
59 |     for epoch in range(training_epochs):
60 |         batch_count = int(mnist.train.num_examples/batch_size)
61 |         for i in range(batch_count):
62 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
63 |             _, summary = sess.run([train_step, summary_op],\
64 |                                   feed_dict={X: batch_x,\
65 |                                              Y_: batch_y})
66 |             writer.add_summary(summary,\
67 |                                epoch * batch_count + i)
68 |         print("Epoch: ", epoch)
69 |
70 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
71 |     print("done")
72 |
73 |
--------------------------------------------------------------------------------
/Chapter03/Python 2.7/softmax_classifier_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.examples.tutorials.mnist import input_data
3 | import matplotlib.pyplot as plt
4 | from random import randint
5 | import numpy as np
6 |
7 | logs_path = 'log_mnist_softmax'
8 | batch_size = 100
9 | learning_rate = 0.5
10 | training_epochs = 10
11 | mnist = input_data.read_data_sets("data", one_hot=True)
12 |
13 | X = tf.placeholder(tf.float32, [None, 784], name="input")
14 | Y_ = tf.placeholder(tf.float32, [None, 10])
15 | W = tf.Variable(tf.zeros([784, 10]))
16 | b = tf.Variable(tf.zeros([10]))
17 | XX = tf.reshape(X, [-1, 784])
18 |
19 |
20 | Y = tf.nn.softmax(tf.matmul(XX, W) + b, name="output")
21 | cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))
22 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
23 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
24 |
25 | train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
26 |
27 | tf.summary.scalar("cost", cross_entropy)
28 | tf.summary.scalar("accuracy", accuracy)
29 | summary_op = tf.summary.merge_all()
30 |
31 | with tf.Session() as sess:
32 |     sess.run(tf.global_variables_initializer())
33 |     writer = tf.summary.FileWriter(logs_path, \
34 |                                    graph=tf.get_default_graph())
35 |     for epoch in range(training_epochs):
36 |         batch_count = int(mnist.train.num_examples/batch_size)
37 |         for i in range(batch_count):
38 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
39 |             _, summary = sess.run([train_step, summary_op],\
40 |                                   feed_dict={X: batch_x,\
41 |                                              Y_: batch_y})
42 |             writer.add_summary(summary, epoch * batch_count + i)
43 |         print("Epoch: ", epoch)
44 |
45 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
46 |     print("done")
47 |
48 |     num = randint(0, mnist.test.images.shape[0])
49 |     img = mnist.test.images[num]
50 |
51 |     classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]})
52 |     print('Neural Network predicted', classification[0])
53 |     print('Real label is:', np.argmax(mnist.test.labels[num]))
54 |
55 |     saver = tf.train.Saver()
56 |     save_path = saver.save(sess, "data/saved_mnist_cnn.ckpt")
57 |     print("Model saved to %s" % save_path)
58 |
59 |
60 |
--------------------------------------------------------------------------------
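
A caveat for softmax_classifier_1.py (and softmax_model_saver_1.py below): tf.nn.softmax_cross_entropy_with_logits expects raw, unnormalized logits, but here Y, which has already been passed through tf.nn.softmax, is handed in as logits. The model still trains, yet the usual, numerically more stable formulation keeps the two tensors separate. A sketch of that variant with the same shapes and names (the extra Ylogits tensor is introduced here only for illustration):

    import tensorflow as tf

    X = tf.placeholder(tf.float32, [None, 784], name="input")
    Y_ = tf.placeholder(tf.float32, [None, 10])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))

    Ylogits = tf.matmul(X, W) + b              # raw scores
    Y = tf.nn.softmax(Ylogits, name="output")  # probabilities, used for prediction
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Ylogits))
    train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
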
/Chapter03/Python 2.7/softmax_model_loader_1.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import tensorflow as tf
3 | import numpy as np
4 | from random import randint
5 | from tensorflow.examples.tutorials.mnist import input_data
6 |
7 | mnist = input_data.read_data_sets('data', one_hot=True)
8 | sess = tf.InteractiveSession()
9 | new_saver = tf.train.import_meta_graph('data/saved_mnist_cnn.ckpt.meta')
10 | new_saver.restore(sess, 'data/saved_mnist_cnn.ckpt')
11 | tf.get_default_graph().as_graph_def()
12 |
13 | x = sess.graph.get_tensor_by_name("input:0")
14 | y_conv = sess.graph.get_tensor_by_name("output:0")
15 |
16 | num = randint(0, mnist.test.images.shape[0])
17 | img = mnist.test.images[num]
18 |
19 | result = sess.run(y_conv, feed_dict={x: [img]})  # feed a batch of one image
20 | print(result)
21 | print(sess.run(tf.argmax(result, 1)))
22 |
23 | plt.imshow(img.reshape([28, 28]), cmap='Greys')
24 | plt.show()
25 |
26 |
27 |
28 |
29 |
30 |
--------------------------------------------------------------------------------
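
softmax_model_loader_1.py assumes the checkpoint written by the saver script: saver.save(sess, "data/saved_mnist_cnn.ckpt") produces saved_mnist_cnn.ckpt.meta (the graph definition), the .index/.data variable files, and a checkpoint index file in data/. A small sketch that resolves the prefix from that index instead of hard-coding it (assuming the checkpoint really is in data/):

    import tensorflow as tf

    ckpt_prefix = tf.train.latest_checkpoint('data')   # e.g. 'data/saved_mnist_cnn.ckpt'
    sess = tf.InteractiveSession()
    new_saver = tf.train.import_meta_graph(ckpt_prefix + '.meta')
    new_saver.restore(sess, ckpt_prefix)
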
/Chapter03/Python 2.7/softmax_model_saver_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.examples.tutorials.mnist import input_data
3 | import matplotlib.pyplot as plt
4 | from random import randint
5 | import numpy as np
6 |
7 | logs_path = 'log_mnist_softmax'
8 | batch_size = 100
9 | learning_rate = 0.5
10 | training_epochs = 10
11 | mnist = input_data.read_data_sets("data", one_hot=True)
12 |
13 | X = tf.placeholder(tf.float32, [None, 784], name="input")
14 | Y_ = tf.placeholder(tf.float32, [None, 10])
15 | W = tf.Variable(tf.zeros([784, 10]))
16 | b = tf.Variable(tf.zeros([10]))
17 | XX = tf.reshape(X, [-1, 784])
18 |
19 | Y = tf.nn.softmax(tf.matmul(X, W) + b, name="output")
20 | cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))
21 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
22 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
23 |
24 | train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
25 |
26 | tf.summary.scalar("cost", cross_entropy)
27 | tf.summary.scalar("accuracy", accuracy)
28 | summary_op = tf.summary.merge_all()
29 |
30 | with tf.Session() as sess:
31 |     sess.run(tf.global_variables_initializer())
32 |     writer = tf.summary.FileWriter(logs_path, \
33 |                                    graph=tf.get_default_graph())
34 |     for epoch in range(training_epochs):
35 |         batch_count = int(mnist.train.num_examples / batch_size)
36 |         for i in range(batch_count):
37 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
38 |             _, summary = sess.run([train_step, summary_op], \
39 |                                   feed_dict={X: batch_x, \
40 |                                              Y_: batch_y})
41 |             writer.add_summary(summary, epoch * batch_count + i)
42 |         print("Epoch: ", epoch)
43 |
44 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
45 |     print("done")
46 |
47 |     num = randint(0, mnist.test.images.shape[0])
48 |     img = mnist.test.images[num]
49 |
50 |     classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]})
51 |     print('Neural Network predicted', classification[0])
52 |     print('Real label is:', np.argmax(mnist.test.labels[num]))
53 |
54 |     saver = tf.train.Saver()
55 |     save_path = saver.save(sess, "data/saved_mnist_cnn.ckpt")
56 |     print("Model saved to %s" % save_path)
57 |
58 |
59 |
--------------------------------------------------------------------------------
/Chapter03/Python 3.5/five_layers_relu_1.py:
--------------------------------------------------------------------------------
1 | from tensorflow.examples.tutorials.mnist import input_data
2 | import tensorflow as tf
3 | import math
4 |
5 | logs_path = 'log_simple_stats_5_layers_relu_softmax'
6 | batch_size = 100
7 | learning_rate = 0.5
8 | training_epochs = 10
9 |
10 | mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
11 |
12 | X = tf.placeholder(tf.float32, [None, 784])
13 | Y_ = tf.placeholder(tf.float32, [None, 10])
14 | lr = tf.placeholder(tf.float32)
15 |
16 |
17 | L = 200
18 | M = 100
19 | N = 60
20 | O = 30
21 |
22 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))
23 | B1 = tf.Variable(tf.ones([L])/10)
24 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
25 | B2 = tf.Variable(tf.ones([M])/10)
26 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
27 | B3 = tf.Variable(tf.ones([N])/10)
28 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
29 | B4 = tf.Variable(tf.ones([O])/10)
30 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
31 | B5 = tf.Variable(tf.zeros([10]))
32 |
33 |
34 | XX = tf.reshape(X, [-1, 784])
35 | Y1 = tf.nn.relu(tf.matmul(XX, W1) + B1)
36 | Y2 = tf.nn.relu(tf.matmul(Y1, W2) + B2)
37 | Y3 = tf.nn.relu(tf.matmul(Y2, W3) + B3)
38 | Y4 = tf.nn.relu(tf.matmul(Y3, W4) + B4)
39 | Ylogits = tf.matmul(Y4, W5) + B5
40 | Y = tf.nn.softmax(Ylogits)
41 |
42 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
43 | cross_entropy = tf.reduce_mean(cross_entropy)*100
44 |
45 |
46 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
47 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
48 |
49 |
50 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
51 |
52 | tf.summary.scalar("cost", cross_entropy)
53 | tf.summary.scalar("accuracy", accuracy)
54 | summary_op = tf.summary.merge_all()
55 |
56 | init = tf.global_variables_initializer()
57 | sess = tf.Session()
58 | sess.run(init)
59 |
60 | with tf.Session() as sess:
61 |     sess.run(tf.global_variables_initializer())
62 |     writer = tf.summary.FileWriter(logs_path, \
63 |                                    graph=tf.get_default_graph())
64 |     for epoch in range(training_epochs):
65 |         batch_count = int(mnist.train.num_examples/batch_size)
66 |         for i in range(batch_count):
67 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
68 |             max_learning_rate = 0.003
69 |             min_learning_rate = 0.0001
70 |             decay_speed = 2000
71 |             learning_rate = min_learning_rate+\
72 |                             (max_learning_rate - min_learning_rate)\
73 |                             * math.exp(-i/decay_speed)
74 |             _, summary = sess.run([train_step, summary_op],\
75 |                                   {X: batch_x, Y_: batch_y,\
76 |                                   lr: learning_rate})
77 |             writer.add_summary(summary,\
78 |                                epoch * batch_count + i)
79 |         #if epoch % 2 == 0:
80 |         print("Epoch: ", epoch)
81 |
82 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
83 |     print("done")
84 |
85 |
--------------------------------------------------------------------------------
/Chapter03/Python 3.5/five_layers_relu_dropout_1.py:
--------------------------------------------------------------------------------
1 | from tensorflow.examples.tutorials.mnist import input_data
2 | import tensorflow as tf
3 | import math
4 |
5 | logs_path = 'log_simple_stats_5_lyers_dropout'
6 | batch_size = 100
7 | learning_rate = 0.5
8 | training_epochs = 10
9 |
10 | mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
11 |
12 | X = tf.placeholder(tf.float32, [None, 784])
13 | Y_ = tf.placeholder(tf.float32, [None, 10])
14 | lr = tf.placeholder(tf.float32)
15 | pkeep = tf.placeholder(tf.float32)
16 |
17 | L = 200
18 | M = 100
19 | N = 60
20 | O = 30
21 |
22 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))
23 | B1 = tf.Variable(tf.ones([L])/10)
24 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
25 | B2 = tf.Variable(tf.ones([M])/10)
26 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
27 | B3 = tf.Variable(tf.ones([N])/10)
28 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
29 | B4 = tf.Variable(tf.ones([O])/10)
30 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
31 | B5 = tf.Variable(tf.zeros([10]))
32 |
33 | XX = tf.reshape(X, [-1, 28*28])
34 |
35 | Y1 = tf.nn.relu(tf.matmul(XX, W1) + B1)
36 | Y1d = tf.nn.dropout(Y1, pkeep)
37 |
38 | Y2 = tf.nn.relu(tf.matmul(Y1d, W2) + B2)
39 | Y2d = tf.nn.dropout(Y2, pkeep)
40 |
41 | Y3 = tf.nn.relu(tf.matmul(Y2d, W3) + B3)
42 | Y3d = tf.nn.dropout(Y3, pkeep)
43 |
44 | Y4 = tf.nn.relu(tf.matmul(Y3d, W4) + B4)
45 | Y4d = tf.nn.dropout(Y4, pkeep)
46 |
47 | Ylogits = tf.matmul(Y4d, W5) + B5
48 | Y = tf.nn.softmax(Ylogits)
49 |
50 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
51 | cross_entropy = tf.reduce_mean(cross_entropy)*100
52 |
53 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
54 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
55 |
56 | train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)
57 |
58 | tf.summary.scalar("cost", cross_entropy)
59 | tf.summary.scalar("accuracy", accuracy)
60 | summary_op = tf.summary.merge_all()
61 |
62 | init = tf.global_variables_initializer()
63 | sess = tf.Session()
64 | sess.run(init)
65 |
66 |
67 | with tf.Session() as sess:
68 |     sess.run(tf.global_variables_initializer())
69 |     writer = tf.summary.FileWriter(logs_path, \
70 |                                    graph=tf.get_default_graph())
71 |     for epoch in range(training_epochs):
72 |         batch_count = int(mnist.train.num_examples/batch_size)
73 |         for i in range(batch_count):
74 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
75 |             max_learning_rate = 0.003
76 |             min_learning_rate = 0.0001
77 |             decay_speed = 2000
78 |             learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)
79 |             _, summary = sess.run([train_step, summary_op], {X: batch_x, Y_: batch_y, pkeep: 0.75, lr: learning_rate})
80 |             writer.add_summary(summary,\
81 |                                epoch * batch_count + i)
82 |         print ("Epoch: ", epoch)
83 |
84 |     print ("Accuracy: ", accuracy.eval\
85 |         (feed_dict={X: mnist.test.images, Y_: mnist.test.labels, pkeep: 1.0}))  # dropout disabled at evaluation time
86 |     print ("done")
87 |
88 |
--------------------------------------------------------------------------------
/Chapter03/Python 3.5/five_layers_sigmoid_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.examples.tutorials.mnist import input_data
3 | import math
4 |
5 | logs_path = 'log_simple_stats_5_layers_sigmoid'
6 | batch_size = 100
7 | learning_rate = 0.5
8 | training_epochs = 10
9 |
10 | mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
11 | X = tf.placeholder(tf.float32, [None, 784])
12 | Y_ = tf.placeholder(tf.float32, [None, 10])
13 |
14 | L = 200
15 | M = 100
16 | N = 60
17 | O = 30
18 |
19 | W1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))
20 | B1 = tf.Variable(tf.zeros([L]))
21 | W2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
22 | B2 = tf.Variable(tf.zeros([M]))
23 | W3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
24 | B3 = tf.Variable(tf.zeros([N]))
25 | W4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))
26 | B4 = tf.Variable(tf.zeros([O]))
27 | W5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))
28 | B5 = tf.Variable(tf.zeros([10]))
29 |
30 |
31 | XX = tf.reshape(X, [-1, 784])
32 | Y1 = tf.nn.sigmoid(tf.matmul(XX, W1) + B1)
33 | Y2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)
34 | Y3 = tf.nn.sigmoid(tf.matmul(Y2, W3) + B3)
35 | Y4 = tf.nn.sigmoid(tf.matmul(Y3, W4) + B4)
36 | Ylogits = tf.matmul(Y4, W5) + B5
37 | Y = tf.nn.softmax(Ylogits)
38 |
39 |
40 | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
41 | cross_entropy = tf.reduce_mean(cross_entropy)*100
42 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
43 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
44 | learning_rate = 0.003
45 | train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
46 | tf.summary.scalar("cost", cross_entropy)
47 | tf.summary.scalar("accuracy", accuracy)
48 | summary_op = tf.summary.merge_all()
49 |
50 |
51 |
52 | init = tf.global_variables_initializer()
53 | sess = tf.Session()
54 | sess.run(init)
55 |
56 | with tf.Session() as sess:
57 |     sess.run(tf.global_variables_initializer())
58 |     writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
59 |     for epoch in range(training_epochs):
60 |         batch_count = int(mnist.train.num_examples/batch_size)
61 |         for i in range(batch_count):
62 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
63 |             _, summary = sess.run([train_step, summary_op],\
64 |                                   feed_dict={X: batch_x,\
65 |                                              Y_: batch_y})
66 |             writer.add_summary(summary,\
67 |                                epoch * batch_count + i)
68 |         print("Epoch: ", epoch)
69 |
70 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
71 |     print("done")
72 |
73 |
--------------------------------------------------------------------------------
/Chapter03/Python 3.5/softmax_classifier_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.examples.tutorials.mnist import input_data
3 | import matplotlib.pyplot as plt
4 | from random import randint
5 | import numpy as np
6 |
7 | logs_path = 'log_mnist_softmax'
8 | batch_size = 100
9 | learning_rate = 0.5
10 | training_epochs = 10
11 | mnist = input_data.read_data_sets("data", one_hot=True)
12 |
13 | X = tf.placeholder(tf.float32, [None, 784], name="input")
14 | Y_ = tf.placeholder(tf.float32, [None, 10])
15 | W = tf.Variable(tf.zeros([784, 10]))
16 | b = tf.Variable(tf.zeros([10]))
17 | XX = tf.reshape(X, [-1, 784])
18 |
19 |
20 | Y = tf.nn.softmax(tf.matmul(XX, W) + b, name="output")
21 | cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))
22 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
23 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
24 |
25 | train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
26 |
27 | tf.summary.scalar("cost", cross_entropy)
28 | tf.summary.scalar("accuracy", accuracy)
29 | summary_op = tf.summary.merge_all()
30 |
31 | with tf.Session() as sess:
32 |     sess.run(tf.global_variables_initializer())
33 |     writer = tf.summary.FileWriter(logs_path, \
34 |                                    graph=tf.get_default_graph())
35 |     for epoch in range(training_epochs):
36 |         batch_count = int(mnist.train.num_examples/batch_size)
37 |         for i in range(batch_count):
38 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
39 |             _, summary = sess.run([train_step, summary_op],\
40 |                                   feed_dict={X: batch_x,\
41 |                                              Y_: batch_y})
42 |             writer.add_summary(summary, epoch * batch_count + i)
43 |         print("Epoch: ", epoch)
44 |
45 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
46 |     print("done")
47 |
48 |     num = randint(0, mnist.test.images.shape[0])
49 |     img = mnist.test.images[num]
50 |
51 |     classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]})
52 |     print('Neural Network predicted', classification[0])
53 |     print('Real label is:', np.argmax(mnist.test.labels[num]))
54 |
55 |     saver = tf.train.Saver()
56 |     save_path = saver.save(sess, "data/saved_mnist_cnn.ckpt")
57 |     print("Model saved to %s" % save_path)
58 |
59 |
60 |
--------------------------------------------------------------------------------
/Chapter03/Python 3.5/softmax_model_loader_1.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import tensorflow as tf
3 | import numpy as np
4 | from random import randint
5 | from tensorflow.examples.tutorials.mnist import input_data
6 |
7 | mnist = input_data.read_data_sets('data', one_hot=True)
8 | sess = tf.InteractiveSession()
9 | new_saver = tf.train.import_meta_graph('data/saved_mnist_cnn.ckpt.meta')
10 | new_saver.restore(sess, 'data/saved_mnist_cnn.ckpt')
11 | tf.get_default_graph().as_graph_def()
12 |
13 | x = sess.graph.get_tensor_by_name("input:0")
14 | y_conv = sess.graph.get_tensor_by_name("output:0")
15 |
16 | num = randint(0, mnist.test.images.shape[0])
17 | img = mnist.test.images[num]
18 |
19 | result = sess.run(y_conv, feed_dict={x: [img]})  # feed a batch of one image
20 | print(result)
21 | print(sess.run(tf.argmax(result, 1)))
22 |
23 | plt.imshow(img.reshape([28, 28]), cmap='Greys')
24 | plt.show()
25 |
26 |
27 |
28 |
29 |
30 |
--------------------------------------------------------------------------------
/Chapter03/Python 3.5/softmax_model_saver_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.examples.tutorials.mnist import input_data
3 | import matplotlib.pyplot as plt
4 | from random import randint
5 | import numpy as np
6 |
7 | logs_path = 'log_mnist_softmax'
8 | batch_size = 100
9 | learning_rate = 0.5
10 | training_epochs = 10
11 | mnist = input_data.read_data_sets("data", one_hot=True)
12 |
13 | X = tf.placeholder(tf.float32, [None, 784], name="input")
14 | Y_ = tf.placeholder(tf.float32, [None, 10])
15 | W = tf.Variable(tf.zeros([784, 10]))
16 | b = tf.Variable(tf.zeros([10]))
17 | XX = tf.reshape(X, [-1, 784])
18 |
19 | Y = tf.nn.softmax(tf.matmul(X, W) + b, name="output")
20 | cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))
21 | correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
22 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
23 |
24 | train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
25 |
26 | tf.summary.scalar("cost", cross_entropy)
27 | tf.summary.scalar("accuracy", accuracy)
28 | summary_op = tf.summary.merge_all()
29 |
30 | with tf.Session() as sess:
31 |     sess.run(tf.global_variables_initializer())
32 |     writer = tf.summary.FileWriter(logs_path, \
33 |                                    graph=tf.get_default_graph())
34 |     for epoch in range(training_epochs):
35 |         batch_count = int(mnist.train.num_examples / batch_size)
36 |         for i in range(batch_count):
37 |             batch_x, batch_y = mnist.train.next_batch(batch_size)
38 |             _, summary = sess.run([train_step, summary_op], \
39 |                                   feed_dict={X: batch_x, \
40 |                                              Y_: batch_y})
41 |             writer.add_summary(summary, epoch * batch_count + i)
42 |         print("Epoch: ", epoch)
43 |
44 |     print("Accuracy: ", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))
45 |     print("done")
46 |
47 |     num = randint(0, mnist.test.images.shape[0])
48 |     img = mnist.test.images[num]
49 |
50 |     classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]})
51 |     print('Neural Network predicted', classification[0])
52 |     print('Real label is:', np.argmax(mnist.test.labels[num]))
53 |
54 |     saver = tf.train.Saver()
55 |     save_path = saver.save(sess, "data/saved_mnist_cnn.ckpt")
56 |     print("Model saved to %s" % save_path)
57 |
58 |
59 |
--------------------------------------------------------------------------------
/Chapter03/Screenshots/five_layers_relu_dropout.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter03/Screenshots/five_layers_relu_dropout.png
--------------------------------------------------------------------------------
/Chapter03/Screenshots/five_layers_relu_png.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter03/Screenshots/five_layers_relu_png.png
--------------------------------------------------------------------------------
/Chapter03/Screenshots/five_layers_sigmod.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter03/Screenshots/five_layers_sigmod.png
--------------------------------------------------------------------------------
/Chapter03/Screenshots/softmax_classiifer_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter03/Screenshots/softmax_classiifer_1.png
--------------------------------------------------------------------------------
/Chapter03/Screenshots/softmax_cmodel_saver_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter03/Screenshots/softmax_cmodel_saver_1.png
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Python 2.7/EmotionDetectorUtils.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import numpy as np
3 | import os, sys, inspect
4 | from six.moves import cPickle as pickle
5 | import scipy.misc as misc
6 |
7 | IMAGE_SIZE = 48
8 | NUM_LABELS = 7
9 | VALIDATION_PERCENT = 0.1 # use 10 percent of training images for validation
10 |
11 | IMAGE_LOCATION_NORM = IMAGE_SIZE / 2
12 |
13 | np.random.seed(0)
14 |
15 | emotion = {0:'anger', 1:'disgust',\
16 | 2:'fear',3:'happy',\
17 | 4:'sad',5:'surprise',6:'neutral'}
18 |
19 | class testResult:
20 |
21 |     def __init__(self):
22 |         self.anger = 0
23 |         self.disgust = 0
24 |         self.fear = 0
25 |         self.happy = 0
26 |         self.sad = 0
27 |         self.surprise = 0
28 |         self.neutral = 0
29 |
30 |     def evaluate(self,label):
31 |
32 |         if (0 == label):
33 |             self.anger = self.anger+1
34 |         if (1 == label):
35 |             self.disgust = self.disgust+1
36 |         if (2 == label):
37 |             self.fear = self.fear+1
38 |         if (3 == label):
39 |             self.happy = self.happy+1
40 |         if (4 == label):
41 |             self.sad = self.sad+1
42 |         if (5 == label):
43 |             self.surprise = self.surprise+1
44 |         if (6 == label):
45 |             self.neutral = self.neutral+1
46 |
47 |     def display_result(self,evaluations):
48 |         print("anger = " + str((self.anger/float(evaluations))*100) + "%")
49 |         print("disgust = " + str((self.disgust/float(evaluations))*100) + "%")
50 |         print("fear = " + str((self.fear/float(evaluations))*100) + "%")
51 |         print("happy = " + str((self.happy/float(evaluations))*100) + "%")
52 |         print("sad = " + str((self.sad/float(evaluations))*100) + "%")
53 |         print("surprise = " + str((self.surprise/float(evaluations))*100) + "%")
54 |         print("neutral = " + str((self.neutral/float(evaluations))*100) + "%")
55 |
56 |
57 | def read_data(data_dir, force=False):
58 |     def create_onehot_label(x):
59 |         label = np.zeros((1, NUM_LABELS), dtype=np.float32)
60 |         label[:, int(x)] = 1
61 |         return label
62 |
63 |     pickle_file = os.path.join(data_dir, "EmotionDetectorData.pickle")
64 |     if force or not os.path.exists(pickle_file):
65 |         train_filename = os.path.join(data_dir, "train.csv")
66 |         data_frame = pd.read_csv(train_filename)
67 |         data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=" ") / 255.0)
68 |         data_frame = data_frame.dropna()
69 |         print("Reading train.csv ...")
70 |
71 |         train_images = np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
72 |         print(train_images.shape)
73 |         train_labels = np.array([map(create_onehot_label, data_frame['Emotion'].values)]).reshape(-1, NUM_LABELS)
74 |         print(train_labels.shape)
75 |
76 |         permutations = np.random.permutation(train_images.shape[0])
77 |         train_images = train_images[permutations]
78 |         train_labels = train_labels[permutations]
79 |         validation_percent = int(train_images.shape[0] * VALIDATION_PERCENT)
80 |         validation_images = train_images[:validation_percent]
81 |         validation_labels = train_labels[:validation_percent]
82 |         train_images = train_images[validation_percent:]
83 |         train_labels = train_labels[validation_percent:]
84 |
85 |         print("Reading test.csv ...")
86 |         test_filename = os.path.join(data_dir, "test.csv")
87 |         data_frame = pd.read_csv(test_filename)
88 |         data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=" ") / 255.0)
89 |         data_frame = data_frame.dropna()
90 |         test_images = np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
91 |
92 |         with open(pickle_file, "wb") as file:
93 |             try:
94 |                 print('Pickling ...')
95 |                 save = {
96 |                     "train_images": train_images,
97 |                     "train_labels": train_labels,
98 |                     "validation_images": validation_images,
99 |                     "validation_labels": validation_labels,
100 |                     "test_images": test_images,
101 |                 }
102 |                 pickle.dump(save, file, pickle.HIGHEST_PROTOCOL)
103 |
104 |             except:
105 |                 print("Unable to pickle file :/")
106 |
107 |     with open(pickle_file, "rb") as file:
108 |         save = pickle.load(file)
109 |         train_images = save["train_images"]
110 |         train_labels = save["train_labels"]
111 |         validation_images = save["validation_images"]
112 |         validation_labels = save["validation_labels"]
113 |         test_images = save["test_images"]
114 |
115 |     return train_images, train_labels, validation_images, validation_labels, test_images
116 |
--------------------------------------------------------------------------------
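
read_data parses train.csv and test.csv on the first call, caches the arrays in EmotionDetectorData.pickle, and reloads from that pickle afterwards. A minimal usage sketch, assuming the CSV files sit in the EmotionDetector/ folder shown in the tree above:

    from EmotionDetectorUtils import read_data

    train_images, train_labels, validation_images, validation_labels, test_images = \
        read_data("EmotionDetector/")

    print(train_images.shape)        # (num_train, 48, 48, 1) -- IMAGE_SIZE = 48
    print(train_labels.shape)        # (num_train, 7)         -- NUM_LABELS = 7
    print(validation_images.shape)   # about 10% of the original training rows (VALIDATION_PERCENT = 0.1)
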
/Chapter04/EMOTION_CNN/Python 2.7/EmotionDetector_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | #import os, sys, inspect
4 | from datetime import datetime
5 | import EmotionDetectorUtils
6 |
7 | """
8 | lib_path = os.path.realpath(
9 | os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], "..")))
10 | if lib_path not in sys.path:
11 | sys.path.insert(0, lib_path)
12 | """
13 |
14 |
15 | FLAGS = tf.flags.FLAGS
16 | tf.flags.DEFINE_string("data_dir", "EmotionDetector/", "Path to data files")
17 | tf.flags.DEFINE_string("logs_dir", "logs/EmotionDetector_logs/", "Path to where log files are to be saved")
18 | tf.flags.DEFINE_string("mode", "train", "mode: train (Default)/ test")
19 |
20 | BATCH_SIZE = 128
21 | LEARNING_RATE = 1e-3
22 | MAX_ITERATIONS = 1001
23 | REGULARIZATION = 1e-2
24 | IMAGE_SIZE = 48
25 | NUM_LABELS = 7
26 | VALIDATION_PERCENT = 0.1
27 |
28 |
29 | def add_to_regularization_loss(W, b):
30 | tf.add_to_collection("losses", tf.nn.l2_loss(W))
31 | tf.add_to_collection("losses", tf.nn.l2_loss(b))
32 |
33 | def weight_variable(shape, stddev=0.02, name=None):
34 | initial = tf.truncated_normal(shape, stddev=stddev)
35 | if name is None:
36 | return tf.Variable(initial)
37 | else:
38 | return tf.get_variable(name, initializer=initial)
39 |
40 |
41 | def bias_variable(shape, name=None):
42 | initial = tf.constant(0.0, shape=shape)
43 | if name is None:
44 | return tf.Variable(initial)
45 | else:
46 | return tf.get_variable(name, initializer=initial)
47 |
48 | def conv2d_basic(x, W, bias):
49 | conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
50 | return tf.nn.bias_add(conv, bias)
51 |
52 | def max_pool_2x2(x):
53 | return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], \
54 | strides=[1, 2, 2, 1], padding="SAME")
55 |
56 |
57 | def emotion_cnn(dataset):
58 | with tf.name_scope("conv1") as scope:
59 | #W_conv1 = weight_variable([5, 5, 1, 32])
60 | #b_conv1 = bias_variable([32])
61 | tf.summary.histogram("W_conv1", weights['wc1'])
62 | tf.summary.histogram("b_conv1", biases['bc1'])
63 | conv_1 = tf.nn.conv2d(dataset, weights['wc1'],\
64 | strides=[1, 1, 1, 1], padding="SAME")
65 | h_conv1 = tf.nn.bias_add(conv_1, biases['bc1'])
66 | #h_conv1 = conv2d_basic(dataset, W_conv1, b_conv1)
67 | h_1 = tf.nn.relu(h_conv1)
68 | h_pool1 = max_pool_2x2(h_1)
69 | add_to_regularization_loss(weights['wc1'], biases['bc1'])
70 |
71 | with tf.name_scope("conv2") as scope:
72 | #W_conv2 = weight_variable([3, 3, 32, 64])
73 | #b_conv2 = bias_variable([64])
74 | tf.summary.histogram("W_conv2", weights['wc2'])
75 | tf.summary.histogram("b_conv2", biases['bc2'])
76 | conv_2 = tf.nn.conv2d(h_pool1, weights['wc2'], strides=[1, 1, 1, 1], padding="SAME")
77 | h_conv2 = tf.nn.bias_add(conv_2, biases['bc2'])
78 | #h_conv2 = conv2d_basic(h_pool1, weights['wc2'], biases['bc2'])
79 | h_2 = tf.nn.relu(h_conv2)
80 | h_pool2 = max_pool_2x2(h_2)
81 | add_to_regularization_loss(weights['wc2'], biases['bc2'])
82 |
83 | with tf.name_scope("fc_1") as scope:
84 | prob = 0.5
85 | image_size = IMAGE_SIZE / 4
86 | h_flat = tf.reshape(h_pool2, [-1, image_size * image_size * 64])
87 | #W_fc1 = weight_variable([image_size * image_size * 64, 256])
88 | #b_fc1 = bias_variable([256])
89 | tf.summary.histogram("W_fc1", weights['wf1'])
90 | tf.summary.histogram("b_fc1", biases['bf1'])
91 | h_fc1 = tf.nn.relu(tf.matmul(h_flat, weights['wf1']) + biases['bf1'])
92 | h_fc1_dropout = tf.nn.dropout(h_fc1, prob)
93 |
94 | with tf.name_scope("fc_2") as scope:
95 | #W_fc2 = weight_variable([256, NUM_LABELS])
96 | #b_fc2 = bias_variable([NUM_LABELS])
97 | tf.summary.histogram("W_fc2", weights['wf2'])
98 | tf.summary.histogram("b_fc2", biases['bf2'])
99 | #pred = tf.matmul(h_fc1, weights['wf2']) + biases['bf2']
100 | pred = tf.matmul(h_fc1_dropout, weights['wf2']) + biases['bf2']
101 |
102 | return pred
103 |
104 | weights = {
105 | 'wc1': weight_variable([5, 5, 1, 32], name="W_conv1"),
106 | 'wc2': weight_variable([3, 3, 32, 64],name="W_conv2"),
107 | 'wf1': weight_variable([(IMAGE_SIZE / 4) * (IMAGE_SIZE / 4) * 64, 256],name="W_fc1"),
108 | 'wf2': weight_variable([256, NUM_LABELS], name="W_fc2")
109 | }
110 |
111 | biases = {
112 | 'bc1': bias_variable([32], name="b_conv1"),
113 | 'bc2': bias_variable([64], name="b_conv2"),
114 | 'bf1': bias_variable([256], name="b_fc1"),
115 | 'bf2': bias_variable([NUM_LABELS], name="b_fc2")
116 | }
117 |
118 | def loss(pred, label):
119 | cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=label))
120 | tf.summary.scalar('Entropy', cross_entropy_loss)
121 | reg_losses = tf.add_n(tf.get_collection("losses"))
122 | tf.summary.scalar('Reg_loss', reg_losses)
123 | return cross_entropy_loss + REGULARIZATION * reg_losses
124 |
125 |
126 | def train(loss, step):
127 | return tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss, global_step=step)
128 |
129 |
130 | def get_next_batch(images, labels, step):
131 | offset = (step * BATCH_SIZE) % (images.shape[0] - BATCH_SIZE)
132 | batch_images = images[offset: offset + BATCH_SIZE]
133 | batch_labels = labels[offset:offset + BATCH_SIZE]
134 | return batch_images, batch_labels
135 |
136 |
137 | def main(argv=None):
138 | train_images, train_labels, valid_images, valid_labels, test_images = EmotionDetectorUtils.read_data(FLAGS.data_dir)
139 | print("Train size: %s" % train_images.shape[0])
140 | print('Validation size: %s' % valid_images.shape[0])
141 | print("Test size: %s" % test_images.shape[0])
142 |
143 | global_step = tf.Variable(0, trainable=False)
144 | dropout_prob = tf.placeholder(tf.float32)
145 | input_dataset = tf.placeholder(tf.float32, [None, IMAGE_SIZE, IMAGE_SIZE, 1],name="input")
146 | input_labels = tf.placeholder(tf.float32, [None, NUM_LABELS])
147 |
148 | pred = emotion_cnn(input_dataset)
149 | output_pred = tf.nn.softmax(pred,name="output")
150 | loss_val = loss(pred, input_labels)
151 | train_op = train(loss_val, global_step)
152 |
153 | summary_op = tf.summary.merge_all()
154 | with tf.Session() as sess:
155 | sess.run(tf.global_variables_initializer())
156 | summary_writer = tf.summary.FileWriter(FLAGS.logs_dir, sess.graph_def)
157 | saver = tf.train.Saver()
158 | ckpt = tf.train.get_checkpoint_state(FLAGS.logs_dir)
159 | if ckpt and ckpt.model_checkpoint_path:
160 | saver.restore(sess, ckpt.model_checkpoint_path)
161 | print("Model Restored!")
162 |
163 | for step in range(MAX_ITERATIONS):
164 | batch_image, batch_label = get_next_batch(train_images, train_labels, step)
165 | feed_dict = {input_dataset: batch_image, input_labels: batch_label}
166 |
167 | sess.run(train_op, feed_dict=feed_dict)
168 | if step % 10 == 0:
169 | train_loss, summary_str = sess.run([loss_val, summary_op], feed_dict=feed_dict)
170 | summary_writer.add_summary(summary_str, global_step=step)
171 | print("Training Loss: %f" % train_loss)
172 |
173 | if step % 100 == 0:
174 | valid_loss = sess.run(loss_val, feed_dict={input_dataset: valid_images, input_labels: valid_labels})
175 | print("%s Validation Loss: %f" % (datetime.now(), valid_loss))
176 | saver.save(sess, FLAGS.logs_dir + 'model.ckpt', global_step=step)
177 |
178 |
179 | if __name__ == "__main__":
180 | tf.app.run()
181 |
182 |
183 |
184 | """
185 | >>>
186 | Train size: 3761
187 | Validation size: 417
188 | Test size: 1312
189 | WARNING:tensorflow:Passing a `GraphDef` to the SummaryWriter is deprecated. Pass a `Graph` object instead, such as `sess.graph`.
190 | Training Loss: 1.962236
191 | 2016-11-05 22:39:36.645682 Validation Loss: 1.962719
192 | Training Loss: 1.907290
193 | Training Loss: 1.849100
194 | Training Loss: 1.871116
195 | Training Loss: 1.798998
196 | Training Loss: 1.885601
197 | Training Loss: 1.849380
198 | Training Loss: 1.843139
199 | Training Loss: 1.933691
200 | Training Loss: 1.829839
201 | Training Loss: 1.839772
202 | 2016-11-05 22:42:58.951699 Validation Loss: 1.822431
203 | Training Loss: 1.772197
204 | Training Loss: 1.666473
205 | Training Loss: 1.620869
206 | Training Loss: 1.592660
207 | Training Loss: 1.422701
208 | Training Loss: 1.436721
209 | Training Loss: 1.348217
210 | Training Loss: 1.432023
211 | Training Loss: 1.347753
212 | Training Loss: 1.299889
213 | 2016-11-05 22:46:55.144483 Validation Loss: 1.335237
214 | Training Loss: 1.108747
215 | Training Loss: 1.197601
216 | Training Loss: 1.245860
217 | Training Loss: 1.164120
218 | Training Loss: 0.994351
219 | Training Loss: 1.072356
220 | Training Loss: 1.193485
221 | Training Loss: 1.118093
222 | Training Loss: 1.021220
223 | Training Loss: 1.069752
224 | 2016-11-05 22:50:17.677074 Validation Loss: 1.111559
225 | Training Loss: 1.099430
226 | Training Loss: 0.966327
227 | Training Loss: 0.960916
228 | Training Loss: 0.844742
229 | Training Loss: 0.979741
230 | Training Loss: 0.891897
231 | Training Loss: 1.013132
232 | Training Loss: 0.936738
233 | Training Loss: 0.911577
234 | Training Loss: 0.862605
235 | 2016-11-05 22:53:30.999141 Validation Loss: 0.999061
236 | Training Loss: 0.800337
237 | Training Loss: 0.776097
238 | Training Loss: 0.799260
239 | Training Loss: 0.919926
240 | Training Loss: 0.758807
241 | Training Loss: 0.807968
242 | Training Loss: 0.856378
243 | Training Loss: 0.867762
244 | Training Loss: 0.656170
245 | Training Loss: 0.688761
246 | 2016-11-05 22:56:53.256991 Validation Loss: 0.931223
247 | Training Loss: 0.696454
248 | Training Loss: 0.725157
249 | Training Loss: 0.674037
250 | Training Loss: 0.719200
251 | Training Loss: 0.749460
252 | Training Loss: 0.741768
253 | Training Loss: 0.702719
254 | Training Loss: 0.734194
255 | Training Loss: 0.669155
256 | Training Loss: 0.641528
257 | 2016-11-05 23:00:06.530139 Validation Loss: 0.911489
258 | Training Loss: 0.764550
259 | Training Loss: 0.646964
260 | Training Loss: 0.724712
261 | Training Loss: 0.726692
262 | Training Loss: 0.656019
263 | Training Loss: 0.690552
264 | Training Loss: 0.537638
265 | Training Loss: 0.680097
266 | Training Loss: 0.554115
267 | Training Loss: 0.590837
268 | 2016-11-05 23:03:15.351156 Validation Loss: 0.818303
269 | Training Loss: 0.656608
270 | Training Loss: 0.567394
271 | Training Loss: 0.545324
272 | Training Loss: 0.611726
273 | Training Loss: 0.600910
274 | Training Loss: 0.526467
275 | Training Loss: 0.584986
276 | Training Loss: 0.567015
277 | Training Loss: 0.555465
278 | Training Loss: 0.630097
279 | 2016-11-05 23:06:26.575298 Validation Loss: 0.824178
280 | Training Loss: 0.662920
281 | Training Loss: 0.512493
282 | Training Loss: 0.475912
283 | Training Loss: 0.455112
284 | Training Loss: 0.567875
285 | Training Loss: 0.582927
286 | Training Loss: 0.509225
287 | Training Loss: 0.602916
288 | Training Loss: 0.521976
289 | Training Loss: 0.445122
290 | 2016-11-05 23:09:40.136353 Validation Loss: 0.803449
291 | Training Loss: 0.435535
292 | Training Loss: 0.459343
293 | Training Loss: 0.481706
294 | Training Loss: 0.460640
295 | Training Loss: 0.554570
296 | Training Loss: 0.427962
297 | Training Loss: 0.512764
298 | Training Loss: 0.531128
299 | Training Loss: 0.364465
300 | Training Loss: 0.432366
301 | 2016-11-05 23:12:50.769527 Validation Loss: 0.851074
302 | >>>
303 | """
304 |
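Note: the input placeholder is named "input" and the softmax prediction "output", and a checkpoint (plus the .meta graph) is written to logs/EmotionDetector_logs/ every 100 steps; test_your_image.py relies on exactly these names and paths when it restores the graph. The keep probability is hard-coded to 0.5 inside emotion_cnn() (the dropout_prob placeholder is never fed), so predictions from the restored graph remain stochastic. The scalar and histogram summaries written by the FileWriter can be inspected by pointing TensorBoard at the same directory (tensorboard --logdir=logs/EmotionDetector_logs/).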
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Python 2.7/test_your_image.py:
--------------------------------------------------------------------------------
1 | from scipy import misc
2 | import numpy as np
3 | import matplotlib.cm as cm
4 | import tensorflow as tf
5 | import os, sys, inspect
6 | from datetime import datetime
7 | from matplotlib import pyplot as plt
8 | import matplotlib.image as mpimg
9 | from scipy import misc
10 | import EmotionDetectorUtils
11 | from EmotionDetectorUtils import testResult
12 |
13 | emotion = {0:'anger', 1:'disgust',\
14 | 2:'fear',3:'happy',\
15 | 4:'sad',5:'surprise',6:'neutral'}
16 |
17 |
18 | def rgb2gray(rgb):
19 | return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
20 |
21 | img = mpimg.imread('author_img.jpg')
22 | gray = rgb2gray(img)
23 | plt.imshow(gray, cmap = plt.get_cmap('gray'))
24 | plt.show()
25 |
26 |
27 | """
28 | lib_path = os.path.realpath(
29 | os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], "..")))
30 | if lib_path not in sys.path:
31 | sys.path.insert(0, lib_path)
32 | """
33 |
34 |
35 |
36 | FLAGS = tf.flags.FLAGS
37 | tf.flags.DEFINE_string("data_dir", "EmotionDetector/", "Path to data files")
38 | tf.flags.DEFINE_string("logs_dir", "logs/EmotionDetector_logs/", "Path to where log files are to be saved")
39 | tf.flags.DEFINE_string("mode", "train", "mode: train (Default)/ test")
40 |
41 |
42 |
43 |
44 | train_images, train_labels, valid_images, valid_labels, test_images = \
45 | EmotionDetectorUtils.read_data(FLAGS.data_dir)
46 |
47 |
48 | sess = tf.InteractiveSession()
49 |
50 | new_saver = tf.train.import_meta_graph('logs/EmotionDetector_logs/model.ckpt-1000.meta')
51 | new_saver.restore(sess, 'logs/EmotionDetector_logs/model.ckpt-1000')
52 | tf.get_default_graph().as_graph_def()
53 |
54 | x = sess.graph.get_tensor_by_name("input:0")
55 | y_conv = sess.graph.get_tensor_by_name("output:0")
56 |
57 | image_0 = np.resize(gray,(1,48,48,1))
58 | tResult = testResult()
59 | num_evaluations = 1000
60 |
61 | for i in range(0,num_evaluations):
62 | result = sess.run(y_conv, feed_dict={x:image_0})
63 | label = sess.run(tf.argmax(result, 1))
64 | label = label[0]
65 | label = int(label)
66 | tResult.evaluate(label)
67 | tResult.display_result(num_evaluations)
68 |
69 |
70 |
71 |
72 |
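Note: because dropout stays active in the restored graph, each of the 1,000 forward passes can produce a different argmax, and testResult simply tallies how often each emotion wins. A minimal sketch of the same tally using collections.Counter and np.argmax (which also avoids adding a fresh tf.argmax op to the graph on every loop iteration, as the code above does):

    from collections import Counter
    votes = Counter()
    for _ in range(num_evaluations):
        probs = sess.run(y_conv, feed_dict={x: image_0})
        votes[emotion[int(np.argmax(probs, axis=1)[0])]] += 1
    print(votes.most_common())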
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Python 3.5/EmotionDetectorUtils.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import numpy as np
3 | import os, sys, inspect
4 | from six.moves import cPickle as pickle
5 | import scipy.misc as misc
6 |
7 | IMAGE_SIZE = 48
8 | NUM_LABELS = 7
9 | VALIDATION_PERCENT = 0.1 # use 10 percent of training images for validation
10 |
11 | IMAGE_LOCATION_NORM = IMAGE_SIZE // 2
12 |
13 | np.random.seed(0)
14 |
15 | emotion = {0:'anger', 1:'disgust',\
16 | 2:'fear',3:'happy',\
17 | 4:'sad',5:'surprise',6:'neutral'}
18 |
19 | class testResult:
20 |
21 | def __init__(self):
22 | self.anger = 0
23 | self.disgust = 0
24 | self.fear = 0
25 | self.happy = 0
26 | self.sad = 0
27 | self.surprise = 0
28 | self.neutral = 0
29 |
30 | def evaluate(self,label):
31 |
32 | if (0 == label):
33 | self.anger = self.anger+1
34 | if (1 == label):
35 | self.disgust = self.disgust+1
36 | if (2 == label):
37 | self.fear = self.fear+1
38 | if (3 == label):
39 | self.happy = self.happy+1
40 | if (4 == label):
41 | self.sad = self.sad+1
42 | if (5 == label):
43 | self.surprise = self.surprise+1
44 | if (6 == label):
45 | self.neutral = self.neutral+1
46 |
47 | def display_result(self,evaluations):
48 | print("anger = " + str((self.anger/float(evaluations))*100) + "%")
49 | print("disgust = " + str((self.disgust/float(evaluations))*100) + "%")
50 | print("fear = " + str((self.fear/float(evaluations))*100) + "%")
51 | print("happy = " + str((self.happy/float(evaluations))*100) + "%")
52 | print("sad = " + str((self.sad/float(evaluations))*100) + "%")
53 | print("surprise = " + str((self.surprise/float(evaluations))*100) + "%")
54 | print("neutral = " + str((self.neutral/float(evaluations))*100) + "%")
55 |
56 |
57 | def read_data(data_dir, force=False):
58 | def create_onehot_label(x):
59 | label = np.zeros((1, NUM_LABELS), dtype=np.float32)
60 | label[:, int(x)] = 1
61 | return label
62 |
63 | pickle_file = os.path.join(data_dir, "EmotionDetectorData.pickle")
64 | if force or not os.path.exists(pickle_file):
65 | train_filename = os.path.join(data_dir, "train.csv")
66 | data_frame = pd.read_csv(train_filename)
67 | data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=" ") / 255.0)
68 | data_frame = data_frame.dropna()
69 | print("Reading train.csv ...")
70 |
71 | train_images = np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
72 | print(train_images.shape)
73 | train_labels = np.array(list(map(create_onehot_label, data_frame['Emotion'].values))).reshape(-1, NUM_LABELS)
74 | print(train_labels.shape)
75 |
76 | permutations = np.random.permutation(train_images.shape[0])
77 | train_images = train_images[permutations]
78 | train_labels = train_labels[permutations]
79 | validation_percent = int(train_images.shape[0] * VALIDATION_PERCENT)
80 | validation_images = train_images[:validation_percent]
81 | validation_labels = train_labels[:validation_percent]
82 | train_images = train_images[validation_percent:]
83 | train_labels = train_labels[validation_percent:]
84 |
85 | print("Reading test.csv ...")
86 | test_filename = os.path.join(data_dir, "test.csv")
87 | data_frame = pd.read_csv(test_filename)
88 | data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=" ") / 255.0)
89 | data_frame = data_frame.dropna()
90 | test_images = np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
91 |
92 | with open(pickle_file, "wb") as file:
93 | try:
 94 |                 print('Pickling ...')
95 | save = {
96 | "train_images": train_images,
97 | "train_labels": train_labels,
98 | "validation_images": validation_images,
99 | "validation_labels": validation_labels,
100 | "test_images": test_images,
101 | }
102 | pickle.dump(save, file, pickle.HIGHEST_PROTOCOL)
103 |
104 |             except Exception:
105 | print("Unable to pickle file :/")
106 |
107 | with open(pickle_file, "rb") as file:
108 | save = pickle.load(file)
109 | train_images = save["train_images"]
110 | train_labels = save["train_labels"]
111 | validation_images = save["validation_images"]
112 | validation_labels = save["validation_labels"]
113 | test_images = save["test_images"]
114 |
115 | return train_images, train_labels, validation_images, validation_labels, test_images
116 |
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Python 3.5/EmotionDetector_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | #import os, sys, inspect
4 | from datetime import datetime
5 | import EmotionDetectorUtils
6 |
7 | """
8 | lib_path = os.path.realpath(
9 | os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], "..")))
10 | if lib_path not in sys.path:
11 | sys.path.insert(0, lib_path)
12 | """
13 |
14 |
15 | FLAGS = tf.flags.FLAGS
16 | tf.flags.DEFINE_string("data_dir", "EmotionDetector/", "Path to data files")
17 | tf.flags.DEFINE_string("logs_dir", "logs/EmotionDetector_logs/", "Path to where log files are to be saved")
18 | tf.flags.DEFINE_string("mode", "train", "mode: train (Default)/ test")
19 |
20 | BATCH_SIZE = 128
21 | LEARNING_RATE = 1e-3
22 | MAX_ITERATIONS = 1001
23 | REGULARIZATION = 1e-2
24 | IMAGE_SIZE = 48
25 | NUM_LABELS = 7
26 | VALIDATION_PERCENT = 0.1
27 |
28 |
29 | def add_to_regularization_loss(W, b):
30 | tf.add_to_collection("losses", tf.nn.l2_loss(W))
31 | tf.add_to_collection("losses", tf.nn.l2_loss(b))
32 |
33 | def weight_variable(shape, stddev=0.02, name=None):
34 | initial = tf.truncated_normal(shape, stddev=stddev)
35 | if name is None:
36 | return tf.Variable(initial)
37 | else:
38 | return tf.get_variable(name, initializer=initial)
39 |
40 |
41 | def bias_variable(shape, name=None):
42 | initial = tf.constant(0.0, shape=shape)
43 | if name is None:
44 | return tf.Variable(initial)
45 | else:
46 | return tf.get_variable(name, initializer=initial)
47 |
48 | def conv2d_basic(x, W, bias):
49 | conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
50 | return tf.nn.bias_add(conv, bias)
51 |
52 | def max_pool_2x2(x):
53 | return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], \
54 | strides=[1, 2, 2, 1], padding="SAME")
55 |
56 |
57 | def emotion_cnn(dataset):
58 | with tf.name_scope("conv1") as scope:
59 | #W_conv1 = weight_variable([5, 5, 1, 32])
60 | #b_conv1 = bias_variable([32])
61 | tf.summary.histogram("W_conv1", weights['wc1'])
62 | tf.summary.histogram("b_conv1", biases['bc1'])
63 | conv_1 = tf.nn.conv2d(dataset, weights['wc1'],\
64 | strides=[1, 1, 1, 1], padding="SAME")
65 | h_conv1 = tf.nn.bias_add(conv_1, biases['bc1'])
66 | #h_conv1 = conv2d_basic(dataset, W_conv1, b_conv1)
67 | h_1 = tf.nn.relu(h_conv1)
68 | h_pool1 = max_pool_2x2(h_1)
69 | add_to_regularization_loss(weights['wc1'], biases['bc1'])
70 |
71 | with tf.name_scope("conv2") as scope:
72 | #W_conv2 = weight_variable([3, 3, 32, 64])
73 | #b_conv2 = bias_variable([64])
74 | tf.summary.histogram("W_conv2", weights['wc2'])
75 | tf.summary.histogram("b_conv2", biases['bc2'])
76 | conv_2 = tf.nn.conv2d(h_pool1, weights['wc2'], strides=[1, 1, 1, 1], padding="SAME")
77 | h_conv2 = tf.nn.bias_add(conv_2, biases['bc2'])
78 | #h_conv2 = conv2d_basic(h_pool1, weights['wc2'], biases['bc2'])
79 | h_2 = tf.nn.relu(h_conv2)
80 | h_pool2 = max_pool_2x2(h_2)
81 | add_to_regularization_loss(weights['wc2'], biases['bc2'])
82 |
83 | with tf.name_scope("fc_1") as scope:
84 | prob = 0.5
85 | image_size = IMAGE_SIZE // 4
86 | h_flat = tf.reshape(h_pool2, [-1, image_size * image_size * 64])
87 | #W_fc1 = weight_variable([image_size * image_size * 64, 256])
88 | #b_fc1 = bias_variable([256])
89 | tf.summary.histogram("W_fc1", weights['wf1'])
90 | tf.summary.histogram("b_fc1", biases['bf1'])
91 | h_fc1 = tf.nn.relu(tf.matmul(h_flat, weights['wf1']) + biases['bf1'])
92 | h_fc1_dropout = tf.nn.dropout(h_fc1, prob)
93 |
94 | with tf.name_scope("fc_2") as scope:
95 | #W_fc2 = weight_variable([256, NUM_LABELS])
96 | #b_fc2 = bias_variable([NUM_LABELS])
97 | tf.summary.histogram("W_fc2", weights['wf2'])
98 | tf.summary.histogram("b_fc2", biases['bf2'])
99 | #pred = tf.matmul(h_fc1, weights['wf2']) + biases['bf2']
100 | pred = tf.matmul(h_fc1_dropout, weights['wf2']) + biases['bf2']
101 |
102 | return pred
103 |
104 | weights = {
105 | 'wc1': weight_variable([5, 5, 1, 32], name="W_conv1"),
106 | 'wc2': weight_variable([3, 3, 32, 64],name="W_conv2"),
107 | 'wf1': weight_variable([(IMAGE_SIZE // 4) * (IMAGE_SIZE // 4) * 64, 256],name="W_fc1"),
108 | 'wf2': weight_variable([256, NUM_LABELS], name="W_fc2")
109 | }
110 |
111 | biases = {
112 | 'bc1': bias_variable([32], name="b_conv1"),
113 | 'bc2': bias_variable([64], name="b_conv2"),
114 | 'bf1': bias_variable([256], name="b_fc1"),
115 | 'bf2': bias_variable([NUM_LABELS], name="b_fc2")
116 | }
117 |
118 | def loss(pred, label):
119 | cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=label))
120 | tf.summary.scalar('Entropy', cross_entropy_loss)
121 | reg_losses = tf.add_n(tf.get_collection("losses"))
122 | tf.summary.scalar('Reg_loss', reg_losses)
123 | return cross_entropy_loss + REGULARIZATION * reg_losses
124 |
125 |
126 | def train(loss, step):
127 | return tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss, global_step=step)
128 |
129 |
130 | def get_next_batch(images, labels, step):
131 | offset = (step * BATCH_SIZE) % (images.shape[0] - BATCH_SIZE)
132 | batch_images = images[offset: offset + BATCH_SIZE]
133 | batch_labels = labels[offset:offset + BATCH_SIZE]
134 | return batch_images, batch_labels
135 |
136 |
137 | def main(argv=None):
138 | train_images, train_labels, valid_images, valid_labels, test_images = EmotionDetectorUtils.read_data(FLAGS.data_dir)
139 | print("Train size: %s" % train_images.shape[0])
140 | print('Validation size: %s' % valid_images.shape[0])
141 | print("Test size: %s" % test_images.shape[0])
142 |
143 | global_step = tf.Variable(0, trainable=False)
144 | dropout_prob = tf.placeholder(tf.float32)
145 | input_dataset = tf.placeholder(tf.float32, [None, IMAGE_SIZE, IMAGE_SIZE, 1],name="input")
146 | input_labels = tf.placeholder(tf.float32, [None, NUM_LABELS])
147 |
148 | pred = emotion_cnn(input_dataset)
149 | output_pred = tf.nn.softmax(pred,name="output")
150 | loss_val = loss(pred, input_labels)
151 | train_op = train(loss_val, global_step)
152 |
153 | summary_op = tf.summary.merge_all()
154 | with tf.Session() as sess:
155 | sess.run(tf.global_variables_initializer())
156 | summary_writer = tf.summary.FileWriter(FLAGS.logs_dir, sess.graph_def)
157 | saver = tf.train.Saver()
158 | ckpt = tf.train.get_checkpoint_state(FLAGS.logs_dir)
159 | if ckpt and ckpt.model_checkpoint_path:
160 | saver.restore(sess, ckpt.model_checkpoint_path)
161 | print("Model Restored!")
162 |
163 | for step in range(MAX_ITERATIONS):
164 | batch_image, batch_label = get_next_batch(train_images, train_labels, step)
165 | feed_dict = {input_dataset: batch_image, input_labels: batch_label}
166 |
167 | sess.run(train_op, feed_dict=feed_dict)
168 | if step % 10 == 0:
169 | train_loss, summary_str = sess.run([loss_val, summary_op], feed_dict=feed_dict)
170 | summary_writer.add_summary(summary_str, global_step=step)
171 | print("Training Loss: %f" % train_loss)
172 |
173 | if step % 100 == 0:
174 | valid_loss = sess.run(loss_val, feed_dict={input_dataset: valid_images, input_labels: valid_labels})
175 | print("%s Validation Loss: %f" % (datetime.now(), valid_loss))
176 | saver.save(sess, FLAGS.logs_dir + 'model.ckpt', global_step=step)
177 |
178 |
179 | if __name__ == "__main__":
180 | tf.app.run()
181 |
182 |
183 |
184 | """
185 | >>>
186 | Train size: 3761
187 | Validation size: 417
188 | Test size: 1312
189 | Training Loss: 1.951450
190 | 2017-07-27 14:26:41.689096 Validation Loss: 1.958948
191 | Training Loss: 1.899691
192 | Training Loss: 1.873583
193 | Training Loss: 1.883454
194 | Training Loss: 1.794849
195 | Training Loss: 1.884183
196 | Training Loss: 1.848423
197 | Training Loss: 1.838916
198 | Training Loss: 1.918565
199 | Training Loss: 1.829074
200 | Training Loss: 1.864008
201 | 2017-07-27 14:27:00.305351 Validation Loss: 1.790150
202 | Training Loss: 1.753058
203 | Training Loss: 1.615597
204 | Training Loss: 1.571414
205 | Training Loss: 1.623350
206 | Training Loss: 1.494578
207 | Training Loss: 1.502531
208 | Training Loss: 1.349338
209 | Training Loss: 1.537164
210 | Training Loss: 1.364067
211 | Training Loss: 1.387331
212 | 2017-07-27 14:27:20.328279 Validation Loss: 1.375231
213 | Training Loss: 1.186529
214 | Training Loss: 1.386529
215 | Training Loss: 1.270537
216 | Training Loss: 1.211034
217 | Training Loss: 1.096524
218 | Training Loss: 1.192567
219 | Training Loss: 1.279141
220 | Training Loss: 1.199098
221 | Training Loss: 1.017902
222 | Training Loss: 1.249009
223 | 2017-07-27 14:27:38.844167 Validation Loss: 1.178693
224 | Training Loss: 1.222699
225 | Training Loss: 0.970940
226 | Training Loss: 1.012443
227 | Training Loss: 0.931900
228 | Training Loss: 1.016142
229 | Training Loss: 0.943123
230 | Training Loss: 1.099365
231 | Training Loss: 1.000534
232 | Training Loss: 0.925840
233 | Training Loss: 0.895967
234 | 2017-07-27 14:27:57.399234 Validation Loss: 1.103102
235 | Training Loss: 0.863209
236 | Training Loss: 0.833549
237 | Training Loss: 0.812724
238 | Training Loss: 1.009514
239 | Training Loss: 1.024465
240 | Training Loss: 0.961753
241 | Training Loss: 0.986352
242 | Training Loss: 0.959654
243 | Training Loss: 0.774006
244 | Training Loss: 0.858462
245 | 2017-07-27 14:28:15.782431 Validation Loss: 1.000128
246 | Training Loss: 0.663166
247 | Training Loss: 0.785379
248 | Training Loss: 0.821995
249 | Training Loss: 0.945040
250 | Training Loss: 0.909402
251 | Training Loss: 0.797702
252 | Training Loss: 0.769628
253 | Training Loss: 0.750213
254 | Training Loss: 0.722645
255 | Training Loss: 0.800091
256 | 2017-07-27 14:28:34.632889 Validation Loss: 0.924810
257 | Training Loss: 0.878261
258 | Training Loss: 0.817574
259 | Training Loss: 0.856897
260 | Training Loss: 0.752512
261 | Training Loss: 0.881165
262 | Training Loss: 0.710394
263 | Training Loss: 0.721797
264 | Training Loss: 0.726897
265 | Training Loss: 0.624348
266 | Training Loss: 0.730256
267 | 2017-07-27 14:28:53.171239 Validation Loss: 0.901341
268 | Training Loss: 0.685925
269 | Training Loss: 0.630337
270 | Training Loss: 0.656826
271 | Training Loss: 0.666020
272 | Training Loss: 0.627277
273 | Training Loss: 0.698149
274 | Training Loss: 0.722851
275 | Training Loss: 0.722231
276 | Training Loss: 0.701155
277 | Training Loss: 0.684319
278 | 2017-07-27 14:29:11.596521 Validation Loss: 0.894154
279 | Training Loss: 0.738686
280 | Training Loss: 0.580629
281 | Training Loss: 0.545667
282 | Training Loss: 0.614124
283 | Training Loss: 0.640999
284 | Training Loss: 0.762669
285 | Training Loss: 0.628534
286 | Training Loss: 0.690788
287 | Training Loss: 0.628837
288 | Training Loss: 0.565587
289 | 2017-07-27 14:29:30.075707 Validation Loss: 0.825970
290 | Training Loss: 0.551373
291 | Training Loss: 0.466755
292 | Training Loss: 0.583116
293 | Training Loss: 0.644869
294 | Training Loss: 0.626141
295 | Training Loss: 0.609953
296 | Training Loss: 0.622723
297 | Training Loss: 0.696944
298 | Training Loss: 0.543604
299 | Training Loss: 0.436234
300 | 2017-07-27 14:29:48.517299 Validation Loss: 0.873586
301 |
302 | >>>
303 | """
304 |
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Python 3.5/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter04/EMOTION_CNN/Python 3.5/__init__.py
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Python 3.5/test_your_image.py:
--------------------------------------------------------------------------------
1 | from scipy import misc
2 | import numpy as np
3 | import matplotlib.cm as cm
4 | import tensorflow as tf
5 | import os, sys, inspect
6 | from datetime import datetime
7 | from matplotlib import pyplot as plt
8 | import matplotlib.image as mpimg
9 | from scipy import misc
10 | import EmotionDetectorUtils
11 | from EmotionDetectorUtils import testResult
12 |
13 | emotion = {0:'anger', 1:'disgust',\
14 | 2:'fear',3:'happy',\
15 | 4:'sad',5:'surprise',6:'neutral'}
16 |
17 |
18 | def rgb2gray(rgb):
19 | return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
20 |
21 | img = mpimg.imread('author_img.jpg')
22 | gray = rgb2gray(img)
23 | plt.imshow(gray, cmap = plt.get_cmap('gray'))
24 | plt.show()
25 |
26 |
27 | """
28 | lib_path = os.path.realpath(
29 | os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], "..")))
30 | if lib_path not in sys.path:
31 | sys.path.insert(0, lib_path)
32 | """
33 |
34 |
35 |
36 | FLAGS = tf.flags.FLAGS
37 | tf.flags.DEFINE_string("data_dir", "EmotionDetector/", "Path to data files")
38 | tf.flags.DEFINE_string("logs_dir", "logs/EmotionDetector_logs/", "Path to where log files are to be saved")
39 | tf.flags.DEFINE_string("mode", "train", "mode: train (Default)/ test")
40 |
41 |
42 |
43 |
44 | train_images, train_labels, valid_images, valid_labels, test_images = \
45 | EmotionDetectorUtils.read_data(FLAGS.data_dir)
46 |
47 |
48 | sess = tf.InteractiveSession()
49 |
50 | new_saver = tf.train.import_meta_graph('logs/EmotionDetector_logs/model.ckpt-1000.meta')
51 | new_saver.restore(sess, 'logs/EmotionDetector_logs/model.ckpt-1000')
52 | tf.get_default_graph().as_graph_def()
53 |
54 | x = sess.graph.get_tensor_by_name("input:0")
55 | y_conv = sess.graph.get_tensor_by_name("output:0")
56 |
57 | image_0 = np.resize(gray,(1,48,48,1))
58 | tResult = testResult()
59 | num_evaluations = 1000
60 |
61 | for i in range(0,num_evaluations):
62 | result = sess.run(y_conv, feed_dict={x:image_0})
63 | label = sess.run(tf.argmax(result, 1))
64 | label = label[0]
65 | label = int(label)
66 | tResult.evaluate(label)
67 | tResult.display_result(num_evaluations)
68 |
69 |
70 |
71 |
72 |
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Screenshots/Emotion Detector.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter04/EMOTION_CNN/Screenshots/Emotion Detector.png
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Screenshots/Test your image _1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter04/EMOTION_CNN/Screenshots/Test your image _1.png
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/Screenshots/Test your image _2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter04/EMOTION_CNN/Screenshots/Test your image _2.png
--------------------------------------------------------------------------------
/Chapter04/EMOTION_CNN/author_img.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter04/EMOTION_CNN/author_img.jpg
--------------------------------------------------------------------------------
/Chapter04/MNIST_CNN/Python 2.7/mnist_cnn_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | #import mnist_data
4 |
5 | batch_size = 128
6 | test_size = 256
7 | img_size = 28
8 | num_classes = 10
9 |
10 | def init_weights(shape):
11 | return tf.Variable(tf.random_normal(shape, stddev=0.01))
12 |
13 |
14 | def model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):
15 |
16 | conv1 = tf.nn.conv2d(X, w,\
17 | strides=[1, 1, 1, 1],\
18 | padding='SAME')
19 |
20 | conv1_a = tf.nn.relu(conv1)
21 | conv1 = tf.nn.max_pool(conv1_a, ksize=[1, 2, 2, 1]\
22 | ,strides=[1, 2, 2, 1],\
23 | padding='SAME')
24 | conv1 = tf.nn.dropout(conv1, p_keep_conv)
25 |
26 | conv2 = tf.nn.conv2d(conv1, w2,\
27 | strides=[1, 1, 1, 1],\
28 | padding='SAME')
29 | conv2_a = tf.nn.relu(conv2)
30 | conv2 = tf.nn.max_pool(conv2_a, ksize=[1, 2, 2, 1],\
31 | strides=[1, 2, 2, 1],\
32 | padding='SAME')
33 | conv2 = tf.nn.dropout(conv2, p_keep_conv)
34 |
35 | conv3=tf.nn.conv2d(conv2, w3,\
36 | strides=[1, 1, 1, 1]\
37 | ,padding='SAME')
38 |
39 | conv3 = tf.nn.relu(conv3)
40 |
41 |
42 | FC_layer = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1],\
43 | strides=[1, 2, 2, 1],\
44 | padding='SAME')
45 |
46 | FC_layer = tf.reshape(FC_layer, [-1, w4.get_shape().as_list()[0]])
47 | FC_layer = tf.nn.dropout(FC_layer, p_keep_conv)
48 |
49 |
50 | output_layer = tf.nn.relu(tf.matmul(FC_layer, w4))
51 | output_layer = tf.nn.dropout(output_layer, p_keep_hidden)
52 |
53 | result = tf.matmul(output_layer, w_o)
54 | return result
55 |
56 |
57 | #mnist = mnist_data.read_data_sets("ata/")
58 | from tensorflow.examples.tutorials.mnist import input_data
59 | mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
60 |
61 | trX, trY, teX, teY = mnist.train.images,\
62 | mnist.train.labels, \
63 | mnist.test.images, \
64 | mnist.test.labels
65 |
66 | trX = trX.reshape(-1, img_size, img_size, 1) # 28x28x1 input img
67 | teX = teX.reshape(-1, img_size, img_size, 1) # 28x28x1 input img
68 |
69 | X = tf.placeholder("float", [None, img_size, img_size, 1])
70 | Y = tf.placeholder("float", [None, num_classes])
71 |
72 | w = init_weights([3, 3, 1, 32]) # 3x3x1 conv, 32 outputs
73 | w2 = init_weights([3, 3, 32, 64]) # 3x3x32 conv, 64 outputs
74 | w3 = init_weights([3, 3, 64, 128])    # 3x3x64 conv, 128 outputs
75 | w4 = init_weights([128 * 4 * 4, 625]) # FC 128 * 4 * 4 inputs, 625 outputs
76 | w_o = init_weights([625, num_classes]) # FC 625 inputs, 10 outputs (labels)
77 |
78 | p_keep_conv = tf.placeholder("float")
79 | p_keep_hidden = tf.placeholder("float")
80 | py_x = model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden)
81 |
82 | Y_ = tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y)
83 | cost = tf.reduce_mean(Y_)
84 | optimizer = tf.train.\
85 | RMSPropOptimizer(0.001, 0.9).minimize(cost)
86 | predict_op = tf.argmax(py_x, 1)
87 |
88 | with tf.Session() as sess:
89 | #tf.initialize_all_variables().run()
90 | tf.global_variables_initializer().run()
91 | for i in range(100):
92 | training_batch = \
93 | zip(range(0, len(trX), \
94 | batch_size),
95 | range(batch_size, \
96 | len(trX)+1, \
97 | batch_size))
98 | for start, end in training_batch:
99 | sess.run(optimizer , feed_dict={X: trX[start:end],\
100 | Y: trY[start:end],\
101 | p_keep_conv: 0.8,\
102 | p_keep_hidden: 0.5})
103 |
104 | test_indices = np.arange(len(teX)) # Get A Test Batch
105 | np.random.shuffle(test_indices)
106 | test_indices = test_indices[0:test_size]
107 |
108 | print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==\
109 | sess.run\
110 | (predict_op,\
111 | feed_dict={X: teX[test_indices],\
112 | Y: teY[test_indices], \
113 | p_keep_conv: 1.0,\
114 | p_keep_hidden: 1.0})))
115 |
116 | """
117 | Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
118 | Successfully extracted to train-images-idx3-ubyte.mnist 9912422 bytes.
119 | Loading ata/train-images-idx3-ubyte.mnist
120 | Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
121 | Successfully extracted to train-labels-idx1-ubyte.mnist 28881 bytes.
122 | Loading ata/train-labels-idx1-ubyte.mnist
123 | Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
124 | Successfully extracted to t10k-images-idx3-ubyte.mnist 1648877 bytes.
125 | Loading ata/t10k-images-idx3-ubyte.mnist
126 | Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
127 | Successfully extracted to t10k-labels-idx1-ubyte.mnist 4542 bytes.
128 | Loading ata/t10k-labels-idx1-ubyte.mnist
129 | (0, 0.95703125)
130 | (1, 0.98046875)
131 | (2, 0.9921875)
132 | (3, 0.99609375)
133 | (4, 0.99609375)
134 | (5, 0.98828125)
135 | (6, 0.99609375)
136 | (7, 0.99609375)
137 | (8, 0.98828125)
138 | (9, 0.98046875)
139 | (10, 0.99609375)
140 | (11, 1.0)
141 | (12, 0.9921875)
142 | (13, 0.98046875)
143 | (14, 0.98828125)
144 | (15, 0.9921875)
145 | (16, 0.9921875)
146 | (17, 0.9921875)
147 | (18, 0.9921875)
148 | (19, 1.0)
149 | (20, 0.98828125)
150 | (21, 0.99609375)
151 | (22, 0.98828125)
152 | (23, 1.0)
153 | (24, 0.9921875)
154 | (25, 0.99609375)
155 | (26, 0.99609375)
156 | (27, 0.98828125)
157 | (28, 0.98828125)
158 | (29, 0.9921875)
159 | (30, 0.99609375)
160 | (31, 0.9921875)
161 | (32, 0.99609375)
162 | (33, 1.0)
163 | (34, 0.99609375)
164 | (35, 1.0)
165 | (36, 0.9921875)
166 | (37, 1.0)
167 | (38, 0.99609375)
168 | (39, 0.99609375)
169 | (40, 0.99609375)
170 | (41, 0.9921875)
171 | (42, 0.98828125)
172 | (43, 0.9921875)
173 | (44, 0.9921875)
174 | (45, 0.9921875)
175 | (46, 0.9921875)
176 | (47, 0.98828125)
177 | (48, 0.99609375)
178 | (49, 0.99609375)
179 | (50, 1.0)
180 | (51, 0.98046875)
181 | (52, 0.99609375)
182 | (53, 0.98828125)
183 | (54, 0.99609375)
184 | (55, 0.9921875)
185 | (56, 0.99609375)
186 | (57, 0.9921875)
187 | (58, 0.98828125)
188 | (59, 0.99609375)
189 | (60, 0.99609375)
190 | (61, 0.98828125)
191 | (62, 1.0)
192 | (63, 0.98828125)
193 | (64, 0.98828125)
194 | (65, 0.98828125)
195 | (66, 1.0)
196 | (67, 0.99609375)
197 | (68, 1.0)
198 | (69, 1.0)
199 | (70, 0.9921875)
200 | (71, 0.99609375)
201 | (72, 0.984375)
202 | (73, 0.9921875)
203 | (74, 0.98828125)
204 | (75, 0.99609375)
205 | (76, 1.0)
206 | (77, 0.9921875)
207 | (78, 0.984375)
208 | (79, 1.0)
209 | (80, 0.9921875)
210 | (81, 0.9921875)
211 | (82, 0.99609375)
212 | (83, 1.0)
213 | (84, 0.98828125)
214 | (85, 0.98828125)
215 | (86, 0.99609375)
216 | (87, 1.0)
217 | (88, 0.99609375)
218 | """
219 |
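Note: the shape of w4 follows from the pooling arithmetic. With SAME padding, each 2x2 max-pool of stride 2 roughly halves the spatial size (rounding up), so three pools take 28 -> 14 -> 7 -> 4, and the last convolution has 128 channels; the flattened FC_layer therefore has 128 * 4 * 4 = 2048 features, matching w4 = init_weights([128 * 4 * 4, 625]). A quick check:

    size = 28
    for _ in range(3):              # three stride-2 max-pools with SAME padding
        size = (size + 1) // 2      # 28 -> 14 -> 7 -> 4
    print(size, 128 * size * size)  # 4 2048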
--------------------------------------------------------------------------------
/Chapter04/MNIST_CNN/Python 3.5/mnist_cnn_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | #import mnist_data
4 |
5 | batch_size = 128
6 | test_size = 256
7 | img_size = 28
8 | num_classes = 10
9 |
10 | def init_weights(shape):
11 | return tf.Variable(tf.random_normal(shape, stddev=0.01))
12 |
13 |
14 | def model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):
15 |
16 | conv1 = tf.nn.conv2d(X, w,\
17 | strides=[1, 1, 1, 1],\
18 | padding='SAME')
19 |
20 | conv1_a = tf.nn.relu(conv1)
21 | conv1 = tf.nn.max_pool(conv1_a, ksize=[1, 2, 2, 1]\
22 | ,strides=[1, 2, 2, 1],\
23 | padding='SAME')
24 | conv1 = tf.nn.dropout(conv1, p_keep_conv)
25 |
26 | conv2 = tf.nn.conv2d(conv1, w2,\
27 | strides=[1, 1, 1, 1],\
28 | padding='SAME')
29 | conv2_a = tf.nn.relu(conv2)
30 | conv2 = tf.nn.max_pool(conv2_a, ksize=[1, 2, 2, 1],\
31 | strides=[1, 2, 2, 1],\
32 | padding='SAME')
33 | conv2 = tf.nn.dropout(conv2, p_keep_conv)
34 |
35 | conv3=tf.nn.conv2d(conv2, w3,\
36 | strides=[1, 1, 1, 1]\
37 | ,padding='SAME')
38 |
39 | conv3 = tf.nn.relu(conv3)
40 |
41 |
42 | FC_layer = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1],\
43 | strides=[1, 2, 2, 1],\
44 | padding='SAME')
45 |
46 | FC_layer = tf.reshape(FC_layer, [-1, w4.get_shape().as_list()[0]])
47 | FC_layer = tf.nn.dropout(FC_layer, p_keep_conv)
48 |
49 |
50 | output_layer = tf.nn.relu(tf.matmul(FC_layer, w4))
51 | output_layer = tf.nn.dropout(output_layer, p_keep_hidden)
52 |
53 | result = tf.matmul(output_layer, w_o)
54 | return result
55 |
56 |
57 | #mnist = mnist_data.read_data_sets("ata/")
58 | from tensorflow.examples.tutorials.mnist import input_data
59 | mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
60 |
61 | trX, trY, teX, teY = mnist.train.images,\
62 | mnist.train.labels, \
63 | mnist.test.images, \
64 | mnist.test.labels
65 |
66 | trX = trX.reshape(-1, img_size, img_size, 1) # 28x28x1 input img
67 | teX = teX.reshape(-1, img_size, img_size, 1) # 28x28x1 input img
68 |
69 | X = tf.placeholder("float", [None, img_size, img_size, 1])
70 | Y = tf.placeholder("float", [None, num_classes])
71 |
72 | w = init_weights([3, 3, 1, 32]) # 3x3x1 conv, 32 outputs
73 | w2 = init_weights([3, 3, 32, 64]) # 3x3x32 conv, 64 outputs
74 | w3 = init_weights([3, 3, 64, 128])    # 3x3x64 conv, 128 outputs
75 | w4 = init_weights([128 * 4 * 4, 625]) # FC 128 * 4 * 4 inputs, 625 outputs
76 | w_o = init_weights([625, num_classes]) # FC 625 inputs, 10 outputs (labels)
77 |
78 | p_keep_conv = tf.placeholder("float")
79 | p_keep_hidden = tf.placeholder("float")
80 | py_x = model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden)
81 |
82 | Y_ = tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y)
83 | cost = tf.reduce_mean(Y_)
84 | optimizer = tf.train.\
85 | RMSPropOptimizer(0.001, 0.9).minimize(cost)
86 | predict_op = tf.argmax(py_x, 1)
87 |
88 | with tf.Session() as sess:
89 | #tf.initialize_all_variables().run()
90 | tf.global_variables_initializer().run()
91 | for i in range(100):
92 | training_batch = \
93 | zip(range(0, len(trX), \
94 | batch_size),
95 | range(batch_size, \
96 | len(trX)+1, \
97 | batch_size))
98 | for start, end in training_batch:
99 | sess.run(optimizer, feed_dict={X: trX[start:end],\
100 | Y: trY[start:end],\
101 | p_keep_conv: 0.8,\
102 | p_keep_hidden: 0.5})
103 |
104 | test_indices = np.arange(len(teX))# Get A Test Batch
105 | np.random.shuffle(test_indices)
106 | test_indices = test_indices[0:test_size]
107 |
108 | print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==\
109 | sess.run\
110 | (predict_op,\
111 | feed_dict={X: teX[test_indices],\
112 | Y: teY[test_indices], \
113 | p_keep_conv: 1.0,\
114 | p_keep_hidden: 1.0})))
115 |
116 | """
117 | Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
118 | Successfully extracted to train-images-idx3-ubyte.mnist 9912422 bytes.
119 | Loading ata/train-images-idx3-ubyte.mnist
120 | Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
121 | Successfully extracted to train-labels-idx1-ubyte.mnist 28881 bytes.
122 | Loading ata/train-labels-idx1-ubyte.mnist
123 | Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
124 | Successfully extracted to t10k-images-idx3-ubyte.mnist 1648877 bytes.
125 | Loading ata/t10k-images-idx3-ubyte.mnist
126 | Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
127 | Successfully extracted to t10k-labels-idx1-ubyte.mnist 4542 bytes.
128 | Loading ata/t10k-labels-idx1-ubyte.mnist
129 | (0, 0.95703125)
130 | (1, 0.98046875)
131 | (2, 0.9921875)
132 | (3, 0.99609375)
133 | (4, 0.99609375)
134 | (5, 0.98828125)
135 | (6, 0.99609375)
136 | (7, 0.99609375)
137 | (8, 0.98828125)
138 | (9, 0.98046875)
139 | (10, 0.99609375)
140 | (11, 1.0)
141 | (12, 0.9921875)
142 | (13, 0.98046875)
143 | (14, 0.98828125)
144 | (15, 0.9921875)
145 | (16, 0.9921875)
146 | (17, 0.9921875)
147 | (18, 0.9921875)
148 | (19, 1.0)
149 | (20, 0.98828125)
150 | (21, 0.99609375)
151 | (22, 0.98828125)
152 | (23, 1.0)
153 | (24, 0.9921875)
154 | (25, 0.99609375)
155 | (26, 0.99609375)
156 | (27, 0.98828125)
157 | (28, 0.98828125)
158 | (29, 0.9921875)
159 | (30, 0.99609375)
160 | (31, 0.9921875)
161 | (32, 0.99609375)
162 | (33, 1.0)
163 | (34, 0.99609375)
164 | (35, 1.0)
165 | (36, 0.9921875)
166 | (37, 1.0)
167 | (38, 0.99609375)
168 | (39, 0.99609375)
169 | (40, 0.99609375)
170 | (41, 0.9921875)
171 | (42, 0.98828125)
172 | (43, 0.9921875)
173 | (44, 0.9921875)
174 | (45, 0.9921875)
175 | (46, 0.9921875)
176 | (47, 0.98828125)
177 | (48, 0.99609375)
178 | (49, 0.99609375)
179 | (50, 1.0)
180 | (51, 0.98046875)
181 | (52, 0.99609375)
182 | (53, 0.98828125)
183 | (54, 0.99609375)
184 | (55, 0.9921875)
185 | (56, 0.99609375)
186 | (57, 0.9921875)
187 | (58, 0.98828125)
188 | (59, 0.99609375)
189 | (60, 0.99609375)
190 | (61, 0.98828125)
191 | (62, 1.0)
192 | (63, 0.98828125)
193 | (64, 0.98828125)
194 | (65, 0.98828125)
195 | (66, 1.0)
196 | (67, 0.99609375)
197 | (68, 1.0)
198 | (69, 1.0)
199 | (70, 0.9921875)
200 | (71, 0.99609375)
201 | (72, 0.984375)
202 | (73, 0.9921875)
203 | (74, 0.98828125)
204 | (75, 0.99609375)
205 | (76, 1.0)
206 | (77, 0.9921875)
207 | (78, 0.984375)
208 | (79, 1.0)
209 | (80, 0.9921875)
210 | (81, 0.9921875)
211 | (82, 0.99609375)
212 | (83, 1.0)
213 | (84, 0.98828125)
214 | (85, 0.98828125)
215 | (86, 0.99609375)
216 | (87, 1.0)
217 | (88, 0.99609375)
218 | """
219 |
--------------------------------------------------------------------------------
/Chapter04/MNIST_CNN/Screenshots/mnist_cnn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter04/MNIST_CNN/Screenshots/mnist_cnn.png
--------------------------------------------------------------------------------
/Chapter05/Python 2.7/Convlutional_AutoEncoder.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 | import math
4 | import tensorflow as tf
5 | import tensorflow.examples.tutorials.mnist.input_data as input_data
6 |
7 | from tensorflow.python.framework import ops
8 | import warnings
9 | import random
10 | import os
11 |
12 | warnings.filterwarnings("ignore")
13 | os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
14 | ops.reset_default_graph()
15 |
16 | # LOAD PACKAGES
17 | mnist = input_data.read_data_sets("data/", one_hot=True)
18 | trainimgs = mnist.train.images
19 | trainlabels = mnist.train.labels
20 | testimgs = mnist.test.images
21 | testlabels = mnist.test.labels
22 | ntrain = trainimgs.shape[0]
23 | ntest = testimgs.shape[0]
24 | dim = trainimgs.shape[1]
25 | nout = trainlabels.shape[1]
26 |
27 | print("Packages loaded")
28 | # WEIGHT AND BIASES
29 | n1 = 16
30 | n2 = 32
31 | n3 = 64
32 | ksize = 5
33 |
34 | weights = {
35 | 'ce1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1)),
36 | 'ce2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),
37 | 'ce3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),
38 | 'cd3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),
39 | 'cd2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),
40 | 'cd1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1))
41 | }
42 | biases = {
43 | 'be1': tf.Variable(tf.random_normal([n1], stddev=0.1)),
44 | 'be2': tf.Variable(tf.random_normal([n2], stddev=0.1)),
45 | 'be3': tf.Variable(tf.random_normal([n3], stddev=0.1)),
46 | 'bd3': tf.Variable(tf.random_normal([n2], stddev=0.1)),
47 | 'bd2': tf.Variable(tf.random_normal([n1], stddev=0.1)),
48 | 'bd1': tf.Variable(tf.random_normal([1], stddev=0.1))
49 | }
50 |
51 |
52 | def cae(_X, _W, _b, _keepprob):
53 | _input_r = tf.reshape(_X, shape=[-1, 28, 28, 1])
54 | # Encoder
55 | _ce1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_input_r, _W['ce1'], strides=[1, 2, 2, 1], padding='SAME'), _b['be1']))
56 | _ce1 = tf.nn.dropout(_ce1, _keepprob)
57 | _ce2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce1, _W['ce2'], strides=[1, 2, 2, 1], padding='SAME'), _b['be2']))
58 | _ce2 = tf.nn.dropout(_ce2, _keepprob)
59 | _ce3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce2, _W['ce3'], strides=[1, 2, 2, 1], padding='SAME'), _b['be3']))
60 | _ce3 = tf.nn.dropout(_ce3, _keepprob)
61 | # Decoder
62 | _cd3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_ce3, _W['cd3'], tf.stack([tf.shape(_X)[0], 7, 7, n2]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd3']))
63 | _cd3 = tf.nn.dropout(_cd3, _keepprob)
64 | _cd2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd3, _W['cd2'], tf.stack([tf.shape(_X)[0], 14, 14, n1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd2']))
65 | _cd2 = tf.nn.dropout(_cd2, _keepprob)
66 | _cd1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd2, _W['cd1'], tf.stack([tf.shape(_X)[0], 28, 28, 1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd1']))
67 | _cd1 = tf.nn.dropout(_cd1, _keepprob)
68 | _out = _cd1
69 | return _out
70 |
71 | print("Network ready")
72 | x = tf.placeholder(tf.float32, [None, dim])
73 | y = tf.placeholder(tf.float32, [None, dim])
74 | keepprob = tf.placeholder(tf.float32)
75 | pred = cae(x, weights, biases, keepprob) # ['out']
76 | cost = tf.reduce_sum(tf.square(pred - tf.reshape(y, shape=[-1, 28, 28, 1])))  # reuse pred instead of building a second decoder pass
77 |
78 | learning_rate = 0.001
79 | optm = tf.train.AdamOptimizer(learning_rate).minimize(cost)
80 | init = tf.global_variables_initializer()
81 |
82 | print("Functions ready")
83 | sess = tf.Session()
84 | sess.run(init)
85 |
86 | # mean_img = np.mean(mnist.train.images, axis=0)
87 | mean_img = np.zeros((784))
88 | # Fit all training data
89 | batch_size = 128
90 | n_epochs = 50
91 | print("Start training..")
92 |
93 | for epoch_i in range(n_epochs):
94 | for batch_i in range(mnist.train.num_examples // batch_size):
95 | batch_xs, _ = mnist.train.next_batch(batch_size)
96 | trainbatch = np.array([img - mean_img for img in batch_xs])
97 | trainbatch_noisy = trainbatch + 0.3 * np.random.randn(
98 | trainbatch.shape[0], 784)
99 | sess.run(optm, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 0.7})
100 | print("[%02d/%02d] cost: %.4f" % (epoch_i, n_epochs, sess.run(cost, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 1.})))
101 |
102 | if (epoch_i % 10) == 0:
103 | n_examples = 5
104 | test_xs, _ = mnist.test.next_batch(n_examples)
105 | test_xs_noisy = test_xs + 0.3 * np.random.randn(
106 | test_xs.shape[0], 784)
107 | recon = sess.run(pred, feed_dict={x: test_xs_noisy,keepprob: 1.})
108 | fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))
109 |
110 | for example_i in range(n_examples):
111 | axs[0][example_i].matshow(np.reshape(test_xs_noisy[example_i, :], (28, 28)), cmap=plt.get_cmap('gray'))
112 | axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (784,))+ mean_img, (28, 28)), cmap=plt.get_cmap('gray'))
113 | plt.show()
114 |
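Note: stride-2 SAME convolutions are not uniquely invertible (a 7x7 map and an 8x8 map both downsample to 4x4), which is why every tf.nn.conv2d_transpose call in cae() is handed its output shape explicitly via tf.stack. The resulting shape chain, given n1=16, n2=32, n3=64:

    encoder (stride 2, SAME):          28x28x1 -> 14x14x16 -> 7x7x32 -> 4x4x64
    decoder (conv2d_transpose, given): 4x4x64  -> 7x7x32   -> 14x14x16 -> 28x28x1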
--------------------------------------------------------------------------------
/Chapter05/Python 2.7/autoencoder_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 |
5 |
6 | # Import MNIST data
7 | from tensorflow.examples.tutorials.mnist import input_data
8 | mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
9 |
10 | #mnist = mnist_data.read_data_sets("data/")
11 |
12 | # Parameters
13 | learning_rate = 0.01
14 | training_epochs = 10
15 | batch_size = 256
16 | display_step = 1
17 | examples_to_show = 10
18 |
19 | # Network Parameters
20 | n_hidden_1 = 256 # 1st layer num features
21 | n_hidden_2 = 128 # 2nd layer num features
22 | n_input = 784 # MNIST data input (img shape: 28*28)
23 |
24 | # tf Graph input (only pictures)
25 | X = tf.placeholder("float", [None, n_input])
26 |
27 | weights = {
28 | 'encoder_h1': tf.Variable\
29 | (tf.random_normal([n_input, n_hidden_1])),
30 | 'encoder_h2': tf.Variable\
31 | (tf.random_normal([n_hidden_1, n_hidden_2])),
32 | 'decoder_h1': tf.Variable\
33 | (tf.random_normal([n_hidden_2, n_hidden_1])),
34 | 'decoder_h2': tf.Variable\
35 | (tf.random_normal([n_hidden_1, n_input])),
36 | }
37 | biases = {
38 | 'encoder_b1': tf.Variable\
39 | (tf.random_normal([n_hidden_1])),
40 | 'encoder_b2': tf.Variable\
41 | (tf.random_normal([n_hidden_2])),
42 | 'decoder_b1': tf.Variable\
43 | (tf.random_normal([n_hidden_1])),
44 | 'decoder_b2': tf.Variable\
45 | (tf.random_normal([n_input])),
46 | }
47 |
48 |
49 |
50 | # Encoder Hidden layer with sigmoid activation #1
51 | encoder_in = tf.nn.sigmoid(tf.add\
52 | (tf.matmul(X, \
53 | weights['encoder_h1']),\
54 | biases['encoder_b1']))
55 |
56 | # Encoder Hidden layer with sigmoid activation #2
57 | encoder_out = tf.nn.sigmoid(tf.add\
58 | (tf.matmul(encoder_in,\
59 | weights['encoder_h2']),\
60 | biases['encoder_b2']))
61 |
62 |
63 | # Decoder Hidden layer with sigmoid activation #1
64 | decoder_in = tf.nn.sigmoid(tf.add\
65 | (tf.matmul(encoder_out,\
66 | weights['decoder_h1']),\
67 | biases['decoder_b1']))
68 |
69 | # Decoder Hidden layer with sigmoid activation #2
70 | decoder_out = tf.nn.sigmoid(tf.add\
71 | (tf.matmul(decoder_in,\
72 | weights['decoder_h2']),\
73 | biases['decoder_b2']))
74 |
75 |
76 | # Prediction
77 | y_pred = decoder_out
78 | # Targets (Labels) are the input data.
79 | y_true = X
80 |
81 | # Define loss and optimizer, minimize the squared error
82 | cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
83 | optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
84 |
85 | # Initializing the variables
86 | init = tf.global_variables_initializer()
87 |
88 | # Launch the graph
89 | with tf.Session() as sess:
90 | sess.run(init)
91 | total_batch = int(mnist.train.num_examples/batch_size)
92 | # Training cycle
93 | for epoch in range(training_epochs):
94 | # Loop over all batches
95 | for i in range(total_batch):
96 | batch_xs, batch_ys =\
97 | mnist.train.next_batch(batch_size)
98 | # Run optimization op (backprop) and cost op (to get loss value)
99 | _, c = sess.run([optimizer, cost],\
100 | feed_dict={X: batch_xs})
101 | # Display logs per epoch step
102 | if epoch % display_step == 0:
103 | print("Epoch:", '%04d' % (epoch+1),
104 | "cost=", "{:.9f}".format(c))
105 |
106 | print("Optimization Finished!")
107 |
108 | # Applying encode and decode over test set
109 | encode_decode = sess.run(
110 | y_pred, feed_dict=\
111 | {X: mnist.test.images[:examples_to_show]})
112 | # Compare original images with their reconstructions
113 |     f, a = plt.subplots(2, examples_to_show, figsize=(10, 5))
114 | for i in range(examples_to_show):
115 | a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))
116 | a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))
117 | f.show()
118 | plt.draw()
119 | plt.show()
120 |
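Note: the encoder compresses each 784-pixel image to a 128-dimensional code (784 -> 256 -> 128) and the decoder mirrors it back (128 -> 256 -> 784). The code itself is just the encoder_out tensor, so it can be read out directly; a minimal sketch, placed inside the same `with tf.Session() as sess:` block after training:

    codes = sess.run(encoder_out, feed_dict={X: mnist.test.images[:examples_to_show]})
    print(codes.shape)   # (10, 128)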
--------------------------------------------------------------------------------
/Chapter05/Python 2.7/deconvolutional_autoencoder_1.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import matplotlib.pyplot as plt
4 | from tensorflow.examples.tutorials.mnist import input_data
5 |
6 | #Plot function
7 | def plotresult(org_vec,noisy_vec,out_vec):
8 | plt.matshow(np.reshape(org_vec, (28, 28)),\
9 | cmap=plt.get_cmap('gray'))
10 | plt.title("Original Image")
11 | plt.colorbar()
12 |
13 | plt.matshow(np.reshape(noisy_vec, (28, 28)),\
14 | cmap=plt.get_cmap('gray'))
15 | plt.title("Input Image")
16 | plt.colorbar()
17 |
18 | outimg = np.reshape(out_vec, (28, 28))
19 | plt.matshow(outimg, cmap=plt.get_cmap('gray'))
20 | plt.title("Reconstructed Image")
21 | plt.colorbar()
22 | plt.show()
23 |
24 | # NETWORK PARAMETERS
25 | n_input = 784
26 | n_hidden_1 = 256
27 | n_hidden_2 = 256
28 | n_output = 784
29 |
30 | epochs = 110
31 | batch_size = 100
32 | disp_step = 10
33 |
34 | print ("PACKAGES LOADED")
35 |
36 | mnist = input_data.read_data_sets('data/', one_hot=True)
37 | trainimg = mnist.train.images
38 | trainlabel = mnist.train.labels
39 | testimg = mnist.test.images
40 | testlabel = mnist.test.labels
41 | print ("MNIST LOADED")
42 |
43 |
44 | # PLACEHOLDERS
45 | x = tf.placeholder("float", [None, n_input])
46 | y = tf.placeholder("float", [None, n_output])
47 | dropout_keep_prob = tf.placeholder("float")
48 |
49 | # WEIGHTS
50 | weights = {
51 | 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
52 | 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
53 | 'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
54 | }
55 | biases = {
56 | 'b1': tf.Variable(tf.random_normal([n_hidden_1])),
57 | 'b2': tf.Variable(tf.random_normal([n_hidden_2])),
58 | 'out': tf.Variable(tf.random_normal([n_output]))
59 | }
60 |
61 |
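# Fully connected autoencoder: 784 -> 256 -> 256 -> 784 with sigmoid activations;
# dropout after each hidden layer adds extra corruption/regularization during training.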
62 | encode_in = tf.nn.sigmoid\
63 | (tf.add(tf.matmul\
64 | (x, weights['h1']),\
65 | biases['b1']))
66 |
67 | encode_out = tf.nn.dropout\
68 | (encode_in, dropout_keep_prob)
69 |
70 | decode_in = tf.nn.sigmoid\
71 | (tf.add(tf.matmul\
72 | (encode_out, weights['h2']),\
73 | biases['b2']))
74 |
75 | decode_out = tf.nn.dropout(decode_in,\
76 | dropout_keep_prob)
77 |
78 |
79 | y_pred = tf.nn.sigmoid\
80 | (tf.matmul(decode_out,\
81 | weights['out']) +\
82 | biases['out'])
83 |
84 | # COST
85 | cost = tf.reduce_mean(tf.pow(y_pred - y, 2))
86 |
87 | # OPTIMIZER
88 | optimizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)
89 |
90 | # INITIALIZER
91 | init = tf.global_variables_initializer()
92 |
93 |
94 |
95 | # Launch the graph
96 | with tf.Session() as sess:
97 | sess.run(init)
98 | print ("Start Training")
99 | for epoch in range(epochs):
100 | num_batch = int(mnist.train.num_examples/batch_size)
101 | total_cost = 0.
102 | for i in range(num_batch):
103 | batch_xs, batch_ys = mnist.train.next_batch(batch_size)
104 | batch_xs_noisy = batch_xs\
105 | + 0.3*np.random.randn(batch_size, 784)
106 | feeds = {x: batch_xs_noisy,\
107 | y: batch_xs, \
108 | dropout_keep_prob: 0.8}
109 | sess.run(optimizer, feed_dict=feeds)
110 | total_cost += sess.run(cost, feed_dict=feeds)
111 | # DISPLAY
112 | if epoch % disp_step == 0:
113 | print ("Epoch %02d/%02d average cost: %.6f"
114 | % (epoch, epochs, total_cost/num_batch))
115 |
116 | # Test one
117 | print ("Start Test")
118 | randidx = np.random.randint\
119 | (testimg.shape[0], size=1)
120 | orgvec = testimg[randidx, :]
121 | testvec = testimg[randidx, :]
122 | label = np.argmax(testlabel[randidx, :], 1)
123 |
124 | print ("Test label is %d" % (label))
125 | noisyvec = testvec + 0.3*np.random.randn(1, 784)
126 | outvec = sess.run(y_pred,\
127 | feed_dict={x: noisyvec,\
128 | dropout_keep_prob: 1})
129 |
130 | plotresult(orgvec,noisyvec,outvec)
131 | print ("restart Training")
132 |
--------------------------------------------------------------------------------
/Chapter05/Python 2.7/denoising_autoencoder_1.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import matplotlib.pyplot as plt
4 | from tensorflow.examples.tutorials.mnist import input_data
5 |
6 | #Plot function
7 | def plotresult(org_vec,noisy_vec,out_vec):
8 | plt.matshow(np.reshape(org_vec, (28, 28)),\
9 | cmap=plt.get_cmap('gray'))
10 | plt.title("Original Image")
11 | plt.colorbar()
12 |
13 | plt.matshow(np.reshape(noisy_vec, (28, 28)),\
14 | cmap=plt.get_cmap('gray'))
15 | plt.title("Input Image")
16 | plt.colorbar()
17 |
18 | outimg = np.reshape(out_vec, (28, 28))
19 | plt.matshow(outimg, cmap=plt.get_cmap('gray'))
20 | plt.title("Reconstructed Image")
21 | plt.colorbar()
22 | plt.show()
23 |
24 | # NETWORK PARAMETERS
25 | n_input = 784
26 | n_hidden_1 = 256
27 | n_hidden_2 = 256
28 | n_output = 784
29 |
30 | epochs = 100
31 | batch_size = 100
32 | disp_step = 10
33 |
34 | print ("PACKAGES LOADED")
35 |
36 | mnist = input_data.read_data_sets('data/', one_hot=True)
37 | trainimg = mnist.train.images
38 | trainlabel = mnist.train.labels
39 | testimg = mnist.test.images
40 | testlabel = mnist.test.labels
41 | print ("MNIST LOADED")
42 |
43 |
44 | # PLACEHOLDERS
45 | x = tf.placeholder("float", [None, n_input])
46 | y = tf.placeholder("float", [None, n_output])
47 | dropout_keep_prob = tf.placeholder("float")
48 |
49 | # WEIGHTS
50 | weights = {
51 | 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
52 | 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
53 | 'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
54 | }
55 | biases = {
56 | 'b1': tf.Variable(tf.random_normal([n_hidden_1])),
57 | 'b2': tf.Variable(tf.random_normal([n_hidden_2])),
58 | 'out': tf.Variable(tf.random_normal([n_output]))
59 | }
60 |
61 |
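# Denoising setup: a noise-corrupted image is fed as x while the clean image is the
# target y, so the network learns to remove the added Gaussian noise.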
62 | encode_in = tf.nn.sigmoid\
63 | (tf.add(tf.matmul\
64 | (x, weights['h1']),\
65 | biases['b1']))
66 |
67 | encode_out = tf.nn.dropout\
68 | (encode_in, dropout_keep_prob)
69 |
70 | decode_in = tf.nn.sigmoid\
71 | (tf.add(tf.matmul\
72 | (encode_out, weights['h2']),\
73 | biases['b2']))
74 |
75 | decode_out = tf.nn.dropout(decode_in,\
76 | dropout_keep_prob)
77 |
78 |
79 | y_pred = tf.nn.sigmoid\
80 | (tf.matmul(decode_out,\
81 | weights['out']) +\
82 | biases['out'])
83 |
84 | # COST
85 | cost = tf.reduce_mean(tf.pow(y_pred - y, 2))
86 |
87 | # OPTIMIZER
88 | optimizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)
89 |
90 | # INITIALIZER
91 | init = tf.global_variables_initializer()
92 |
93 |
94 |
95 | # Launch the graph
96 | with tf.Session() as sess:
97 | sess.run(init)
98 | print ("Start Training")
99 | for epoch in range(epochs):
100 | num_batch = int(mnist.train.num_examples/batch_size)
101 | total_cost = 0.
102 | for i in range(num_batch):
103 | batch_xs, batch_ys = mnist.train.next_batch(batch_size)
104 | batch_xs_noisy = batch_xs + 0.3*np.random.randn(batch_size, 784)
105 | feeds = {x: batch_xs_noisy, y: batch_xs, dropout_keep_prob: 0.8}
106 | sess.run(optimizer, feed_dict=feeds)
107 | total_cost += sess.run(cost, feed_dict=feeds)
108 | # DISPLAY
109 | if epoch % disp_step == 0:
110 | print ("Epoch %02d/%02d average cost: %.6f"
111 | % (epoch, epochs, total_cost/num_batch))
112 |
113 | # Test one
114 | print ("Start Test")
115 | randidx = np.random.randint\
116 | (testimg.shape[0], size=1)
117 | orgvec = testimg[randidx, :]
118 | testvec = testimg[randidx, :]
119 | label = np.argmax(testlabel[randidx, :], 1)
120 |
121 | print ("Test label is %d" % (label))
122 | noisyvec = testvec + 0.3*np.random.randn(1, 784)
123 | outvec = sess.run(y_pred,\
124 | feed_dict={x: noisyvec,\
125 | dropout_keep_prob: 1})
126 |
127 | plotresult(orgvec,noisyvec,outvec)
128 | print ("restart Training")
129 |
130 |
131 |
132 | """"
133 | PACKAGES LOADED
134 | Extracting data/train-images-idx3-ubyte.gz
135 | Extracting data/train-labels-idx1-ubyte.gz
136 | Extracting data/t10k-images-idx3-ubyte.gz
137 | Extracting data/t10k-labels-idx1-ubyte.gz
138 | MNIST LOADED
139 | Start Training
140 | Epoch 00/100 average cost: 0.212313
141 | Start Test
142 | Test label is 6
143 | restart Training
144 | Epoch 10/100 average cost: 0.033660
145 | Start Test
146 | Test label is 2
147 | restart Training
148 | Epoch 20/100 average cost: 0.026888
149 | Start Test
150 | Test label is 6
151 | restart Training
152 | Epoch 30/100 average cost: 0.023660
153 | Start Test
154 | Test label is 1
155 | restart Training
156 | Epoch 40/100 average cost: 0.021740
157 | Start Test
158 | Test label is 9
159 | restart Training
160 | Epoch 50/100 average cost: 0.020399
161 | Start Test
162 | Test label is 0
163 | restart Training
164 | Epoch 60/100 average cost: 0.019593
165 | Start Test
166 | Test label is 9
167 | restart Training
168 | Epoch 70/100 average cost: 0.019026
169 | Start Test
170 | Test label is 1
171 | restart Training
172 | Epoch 80/100 average cost: 0.018537
173 | Start Test
174 | Test label is 4
175 | restart Training
176 | Epoch 90/100 average cost: 0.018224
177 | Start Test
178 | Test label is 9
179 | restart Training
180 | """
181 |
--------------------------------------------------------------------------------
/Chapter05/Python 3.5/Convlutional_AutoEncoder.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 | import math
4 | import tensorflow as tf
5 | import tensorflow.examples.tutorials.mnist.input_data as input_data
6 |
7 | from tensorflow.python.framework import ops
8 | import warnings
9 | import random
10 | import os
11 |
12 | warnings.filterwarnings("ignore")
13 | os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
14 | ops.reset_default_graph()
15 |
16 | # LOAD PACKAGES
17 | mnist = input_data.read_data_sets("data/", one_hot=True)
18 | trainimgs = mnist.train.images
19 | trainlabels = mnist.train.labels
20 | testimgs = mnist.test.images
21 | testlabels = mnist.test.labels
22 | ntrain = trainimgs.shape[0]
23 | ntest = testimgs.shape[0]
24 | dim = trainimgs.shape[1]
25 | nout = trainlabels.shape[1]
26 |
27 | print("Packages loaded")
28 | # WEIGHT AND BIASES
29 | n1 = 16
30 | n2 = 32
31 | n3 = 64
32 | ksize = 5
33 |
34 | weights = {
35 | 'ce1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1)),
36 | 'ce2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),
37 | 'ce3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),
38 | 'cd3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),
39 | 'cd2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),
40 | 'cd1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1))
41 | }
42 | biases = {
43 | 'be1': tf.Variable(tf.random_normal([n1], stddev=0.1)),
44 | 'be2': tf.Variable(tf.random_normal([n2], stddev=0.1)),
45 | 'be3': tf.Variable(tf.random_normal([n3], stddev=0.1)),
46 | 'bd3': tf.Variable(tf.random_normal([n2], stddev=0.1)),
47 | 'bd2': tf.Variable(tf.random_normal([n1], stddev=0.1)),
48 | 'bd1': tf.Variable(tf.random_normal([1], stddev=0.1))
49 | }
50 |
51 |
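# cae() builds the convolutional autoencoder: three stride-2 5x5 convolutions
# (28x28x1 -> 14x14x16 -> 7x7x32 -> 4x4x64) form the encoder, and three
# conv2d_transpose layers mirror them back to a 28x28x1 reconstruction;
# dropout with the given keep probability follows every layer.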
52 | def cae(_X, _W, _b, _keepprob):
53 | _input_r = tf.reshape(_X, shape=[-1, 28, 28, 1])
54 | # Encoder
55 | _ce1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_input_r, _W['ce1'], strides=[1, 2, 2, 1], padding='SAME'), _b['be1']))
56 | _ce1 = tf.nn.dropout(_ce1, _keepprob)
57 | _ce2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce1, _W['ce2'], strides=[1, 2, 2, 1], padding='SAME'), _b['be2']))
58 | _ce2 = tf.nn.dropout(_ce2, _keepprob)
59 | _ce3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce2, _W['ce3'], strides=[1, 2, 2, 1], padding='SAME'), _b['be3']))
60 | _ce3 = tf.nn.dropout(_ce3, _keepprob)
61 | # Decoder
62 | _cd3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_ce3, _W['cd3'], tf.stack([tf.shape(_X)[0], 7, 7, n2]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd3']))
63 | _cd3 = tf.nn.dropout(_cd3, _keepprob)
64 | _cd2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd3, _W['cd2'], tf.stack([tf.shape(_X)[0], 14, 14, n1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd2']))
65 | _cd2 = tf.nn.dropout(_cd2, _keepprob)
66 | _cd1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd2, _W['cd1'], tf.stack([tf.shape(_X)[0], 28, 28, 1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd1']))
67 | _cd1 = tf.nn.dropout(_cd1, _keepprob)
68 | _out = _cd1
69 | return _out
70 |
71 | print("Network ready")
72 | x = tf.placeholder(tf.float32, [None, dim])
73 | y = tf.placeholder(tf.float32, [None, dim])
74 | keepprob = tf.placeholder(tf.float32)
75 | pred = cae(x, weights, biases, keepprob) # ['out']
76 | cost = tf.reduce_sum(tf.square(pred - tf.reshape(y, shape=[-1, 28, 28, 1])))
77 |
78 | learning_rate = 0.001
79 | optm = tf.train.AdamOptimizer(learning_rate).minimize(cost)
80 | init = tf.global_variables_initializer()
81 |
82 | print("Functions ready")
83 | sess = tf.Session()
84 | sess.run(init)
85 |
86 | # mean_img = np.mean(mnist.train.images, axis=0)
87 | mean_img = np.zeros((784))
88 | # Fit all training data
89 | batch_size = 128
90 | n_epochs = 50
91 | print("Strart training..")
92 |
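# Each training batch is corrupted with additive Gaussian noise (std 0.3) and fed as x,
# while the clean batch is the target y, so the network learns to denoise its input.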
93 | for epoch_i in range(n_epochs):
94 | for batch_i in range(mnist.train.num_examples // batch_size):
95 | batch_xs, _ = mnist.train.next_batch(batch_size)
96 | trainbatch = np.array([img - mean_img for img in batch_xs])
97 | trainbatch_noisy = trainbatch + 0.3 * np.random.randn(
98 | trainbatch.shape[0], 784)
99 | sess.run(optm, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 0.7})
100 | print("[%02d/%02d] cost: %.4f" % (epoch_i, n_epochs, sess.run(cost, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 1.})))
101 |
102 | if (epoch_i % 10) == 0:
103 | n_examples = 5
104 | test_xs, _ = mnist.test.next_batch(n_examples)
105 | test_xs_noisy = test_xs + 0.3 * np.random.randn(
106 | test_xs.shape[0], 784)
107 | recon = sess.run(pred, feed_dict={x: test_xs_noisy,keepprob: 1.})
108 | fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))
109 |
110 | for example_i in range(n_examples):
111 | axs[0][example_i].matshow(np.reshape(test_xs_noisy[example_i, :], (28, 28)), cmap=plt.get_cmap('gray'))
112 | axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (784,))+ mean_img, (28, 28)), cmap=plt.get_cmap('gray'))
113 | plt.show()
114 |
--------------------------------------------------------------------------------
/Chapter05/Python 3.5/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter05/Python 3.5/__init__.py
--------------------------------------------------------------------------------
/Chapter05/Python 3.5/autoencoder_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 |
5 |
6 | # Import MNIST data
7 | from tensorflow.examples.tutorials.mnist import input_data
8 | mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
9 |
10 | #mnist = mnist_data.read_data_sets("data/")
11 |
12 | # Parameters
13 | learning_rate = 0.01
14 | training_epochs = 20
15 | batch_size = 256
16 | display_step = 1
17 | examples_to_show = 10
18 |
19 | # Network Parameters
20 | n_hidden_1 = 256 # 1st layer num features
21 | n_hidden_2 = 128 # 2nd layer num features
22 | n_input = 784 # MNIST data input (img shape: 28*28)
23 |
24 | # tf Graph input (only pictures)
25 | X = tf.placeholder("float", [None, n_input])
26 |
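# Symmetric architecture: the encoder compresses the 784-pixel image to 256 and then
# 128 units, and the decoder expands it back through 256 to 784 for reconstruction.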
27 | weights = {
28 | 'encoder_h1': tf.Variable\
29 | (tf.random_normal([n_input, n_hidden_1])),
30 | 'encoder_h2': tf.Variable\
31 | (tf.random_normal([n_hidden_1, n_hidden_2])),
32 | 'decoder_h1': tf.Variable\
33 | (tf.random_normal([n_hidden_2, n_hidden_1])),
34 | 'decoder_h2': tf.Variable\
35 | (tf.random_normal([n_hidden_1, n_input])),
36 | }
37 | biases = {
38 | 'encoder_b1': tf.Variable\
39 | (tf.random_normal([n_hidden_1])),
40 | 'encoder_b2': tf.Variable\
41 | (tf.random_normal([n_hidden_2])),
42 | 'decoder_b1': tf.Variable\
43 | (tf.random_normal([n_hidden_1])),
44 | 'decoder_b2': tf.Variable\
45 | (tf.random_normal([n_input])),
46 | }
47 |
48 |
49 |
50 | # Encoder Hidden layer with sigmoid activation #1
51 | encoder_in = tf.nn.sigmoid(tf.add\
52 | (tf.matmul(X, \
53 | weights['encoder_h1']),\
54 | biases['encoder_b1']))
55 |
56 | # Encoder Hidden layer with sigmoid activation #2
57 | encoder_out = tf.nn.sigmoid(tf.add\
58 | (tf.matmul(encoder_in,\
59 | weights['encoder_h2']),\
60 | biases['encoder_b2']))
61 |
62 |
63 | # Decoder Hidden layer with sigmoid activation #1
64 | decoder_in = tf.nn.sigmoid(tf.add\
65 | (tf.matmul(encoder_out,\
66 | weights['decoder_h1']),\
67 | biases['decoder_b1']))
68 |
69 | # Decoder Hidden layer with sigmoid activation #2
70 | decoder_out = tf.nn.sigmoid(tf.add\
71 | (tf.matmul(decoder_in,\
72 | weights['decoder_h2']),\
73 | biases['decoder_b2']))
74 |
75 |
76 | # Prediction
77 | y_pred = decoder_out
78 | # Targets (Labels) are the input data.
79 | y_true = X
80 |
81 | # Define loss and optimizer, minimize the squared error
82 | cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
83 | optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
84 |
85 | # Initializing the variables
86 | init = tf.global_variables_initializer()
87 |
88 | # Launch the graph
89 | with tf.Session() as sess:
90 | sess.run(init)
91 | total_batch = int(mnist.train.num_examples/batch_size)
92 | # Training cycle
93 | for epoch in range(training_epochs):
94 | # Loop over all batches
95 | for i in range(total_batch):
96 | batch_xs, batch_ys =\
97 | mnist.train.next_batch(batch_size)
98 | # Run optimization op (backprop) and cost op (to get loss value)
99 | _, c = sess.run([optimizer, cost],\
100 | feed_dict={X: batch_xs})
101 | # Display logs per epoch step
102 | if epoch % display_step == 0:
103 | print("Epoch:", '%04d' % (epoch+1),
104 | "cost=", "{:.9f}".format(c))
105 |
106 | print("Optimization Finished!")
107 |
108 | # Applying encode and decode over test set
109 | encode_decode = sess.run(
110 | y_pred, feed_dict=\
111 | {X: mnist.test.images[:examples_to_show]})
112 | # Compare original images with their reconstructions
113 | f, a = plt.subplots(2, 10, figsize=(10, 2))
114 | for i in range(examples_to_show):
115 | a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))
116 | a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))
117 | f.show()
118 | plt.draw()
119 | plt.show()
120 |
--------------------------------------------------------------------------------
/Chapter05/Python 3.5/deconvolutional_autoencoder_1.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import matplotlib.pyplot as plt
4 | from tensorflow.examples.tutorials.mnist import input_data
5 |
6 | #Plot function
7 | def plotresult(org_vec,noisy_vec,out_vec):
8 | plt.matshow(np.reshape(org_vec, (28, 28)), cmap=plt.get_cmap('gray'))
9 | plt.title("Original Image")
10 | plt.colorbar()
11 |
12 | plt.matshow(np.reshape(noisy_vec, (28, 28)), cmap=plt.get_cmap('gray'))
13 | plt.title("Input Image")
14 | plt.colorbar()
15 |
16 | outimg = np.reshape(out_vec, (28, 28))
17 | plt.matshow(outimg, cmap=plt.get_cmap('gray'))
18 | plt.title("Reconstructed Image")
19 | plt.colorbar()
20 | plt.show()
21 |
22 | # NETWORK PARAMETERS
23 | n_input = 784
24 | n_hidden_1 = 256
25 | n_hidden_2 = 256
26 | n_output = 784
27 |
28 | epochs = 110
29 | batch_size = 100
30 | disp_step = 10
31 |
32 | print("PACKAGES LOADED")
33 |
34 | mnist = input_data.read_data_sets('data/', one_hot=True)
35 | trainimg = mnist.train.images
36 | trainlabel = mnist.train.labels
37 | testimg = mnist.test.images
38 | testlabel = mnist.test.labels
39 | print("MNIST LOADED")
40 |
41 |
42 | # PLACEHOLDERS
43 | x = tf.placeholder("float", [None, n_input])
44 | y = tf.placeholder("float", [None, n_output])
45 | dropout_keep_prob = tf.placeholder("float")
46 |
47 | # WEIGHTS
48 | weights = {
49 | 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
50 | 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
51 | 'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
52 | }
53 | biases = {
54 | 'b1': tf.Variable(tf.random_normal([n_hidden_1])),
55 | 'b2': tf.Variable(tf.random_normal([n_hidden_2])),
56 | 'out': tf.Variable(tf.random_normal([n_output]))
57 | }
58 |
59 | encode_in = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
60 | encode_out = tf.nn.dropout(encode_in, dropout_keep_prob)
61 | decode_in = tf.nn.sigmoid(tf.add(tf.matmul(encode_out, weights['h2']), biases['b2']))
62 | decode_out = tf.nn.dropout(decode_in, dropout_keep_prob)
63 |
64 | y_pred = tf.nn.sigmoid(tf.matmul(decode_out, weights['out']) + biases['out'])
65 |
66 | # COST
67 | cost = tf.reduce_mean(tf.pow(y_pred - y, 2))
68 |
69 | # OPTIMIZER
70 | optimizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)
71 |
72 | # INITIALIZER
73 | init = tf.global_variables_initializer()
74 |
75 |
76 |
77 | # Launch the graph
78 | with tf.Session() as sess:
79 | sess.run(init)
80 | print("Start Training")
81 | for epoch in range(epochs):
82 | num_batch = int(mnist.train.num_examples/batch_size)
83 | total_cost = 0.
84 | for i in range(num_batch):
85 | batch_xs, batch_ys = mnist.train.next_batch(batch_size)
86 | batch_xs_noisy = batch_xs + 0.3*np.random.randn(batch_size, 784)
87 | feeds = {x: batch_xs_noisy, y: batch_xs, dropout_keep_prob: 0.8}
88 | sess.run(optimizer, feed_dict=feeds)
89 | total_cost += sess.run(cost, feed_dict=feeds)
90 | # DISPLAY
91 | if epoch % disp_step == 0:
92 | print("Epoch %02d/%02d average cost: %.6f" % (epoch, epochs, total_cost/num_batch))
93 |
94 | # Test one
95 | print ("Start Test")
96 | randidx = np.random.randint(testimg.shape[0], size=1)
97 | orgvec = testimg[randidx, :]
98 | testvec = testimg[randidx, :]
99 | label = np.argmax(testlabel[randidx, :], 1)
100 |
101 | print ("Test label is %d" % label)
102 | noisyvec = testvec + 0.3*np.random.randn(1, 784)
103 | outvec = sess.run(y_pred, feed_dict={x: noisyvec, dropout_keep_prob: 1})
104 |
105 | plotresult(orgvec,noisyvec,outvec)
106 | print("restart Training")
--------------------------------------------------------------------------------
/Chapter05/Python 3.5/denoising_autoencoder_1.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import matplotlib.pyplot as plt
4 | from tensorflow.examples.tutorials.mnist import input_data
5 |
6 | #Plot function
7 | def plotresult(org_vec,noisy_vec,out_vec):
8 | plt.matshow(np.reshape(org_vec, (28, 28)), cmap=plt.get_cmap('gray'))
9 | plt.title("Original Image")
10 | plt.colorbar()
11 |
12 | plt.matshow(np.reshape(noisy_vec, (28, 28)), cmap=plt.get_cmap('gray'))
13 | plt.title("Input Image")
14 | plt.colorbar()
15 |
16 | outimg = np.reshape(out_vec, (28, 28))
17 | plt.matshow(outimg, cmap=plt.get_cmap('gray'))
18 | plt.title("Reconstructed Image")
19 | plt.colorbar()
20 | plt.show()
21 |
22 | # NETWORK PARAMETERS
23 | n_input = 784
24 | n_hidden_1 = 256
25 | n_hidden_2 = 256
26 | n_output = 784
27 |
28 | epochs = 100
29 | batch_size = 100
30 | disp_step = 10
31 |
32 | print("PACKAGES LOADED")
33 |
34 | mnist = input_data.read_data_sets('data/', one_hot=True)
35 | trainimg = mnist.train.images
36 | trainlabel = mnist.train.labels
37 | testimg = mnist.test.images
38 | testlabel = mnist.test.labels
39 | print("MNIST LOADED")
40 |
41 |
42 | # PLACEHOLDERS
43 | x = tf.placeholder("float", [None, n_input])
44 | y = tf.placeholder("float", [None, n_output])
45 | dropout_keep_prob = tf.placeholder("float")
46 |
47 | # WEIGHTS
48 | weights = {
49 | 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
50 | 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
51 | 'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
52 | }
53 | biases = {
54 | 'b1': tf.Variable(tf.random_normal([n_hidden_1])),
55 | 'b2': tf.Variable(tf.random_normal([n_hidden_2])),
56 | 'out': tf.Variable(tf.random_normal([n_output]))
57 | }
58 |
59 |
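# The noisy image is fed as x and the clean image as the target y, so the autoencoder
# learns to remove the added Gaussian noise rather than just copy its input.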
60 | encode_in = tf.nn.sigmoid\
61 | (tf.add(tf.matmul\
62 | (x, weights['h1']),\
63 | biases['b1']))
64 |
65 | encode_out = tf.nn.dropout\
66 | (encode_in, dropout_keep_prob)
67 |
68 | decode_in = tf.nn.sigmoid\
69 | (tf.add(tf.matmul\
70 | (encode_out, weights['h2']),\
71 | biases['b2']))
72 |
73 | decode_out = tf.nn.dropout(decode_in,\
74 | dropout_keep_prob)
75 |
76 |
77 | y_pred = tf.nn.sigmoid\
78 | (tf.matmul(decode_out,\
79 | weights['out']) +\
80 | biases['out'])
81 |
82 | # COST
83 | cost = tf.reduce_mean(tf.pow(y_pred - y, 2))
84 |
85 | # OPTIMIZER
86 | optimizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)
87 |
88 | # INITIALIZER
89 | init = tf.global_variables_initializer()
90 |
91 |
92 |
93 | # Launch the graph
94 | with tf.Session() as sess:
95 | sess.run(init)
96 | print("Start Training")
97 | for epoch in range(epochs):
98 | num_batch = int(mnist.train.num_examples/batch_size)
99 | total_cost = 0.
100 | for i in range(num_batch):
101 | batch_xs, batch_ys = mnist.train.next_batch(batch_size)
102 | batch_xs_noisy = batch_xs + 0.3*np.random.randn(batch_size, 784)
103 | feeds = {x: batch_xs_noisy, y: batch_xs, dropout_keep_prob: 0.8}
104 | sess.run(optimizer, feed_dict=feeds)
105 | total_cost += sess.run(cost, feed_dict=feeds)
106 | # DISPLAY
107 | if epoch % disp_step == 0:
108 | print("Epoch %02d/%02d average cost: %.6f"
109 | % (epoch, epochs, total_cost/num_batch))
110 |
111 | # Test one
112 | print("Start Test")
113 | randidx = np.random.randint\
114 | (testimg.shape[0], size=1)
115 | orgvec = testimg[randidx, :]
116 | testvec = testimg[randidx, :]
117 | label = np.argmax(testlabel[randidx, :], 1)
118 |
119 | print("Test label is %d" % (label))
120 | noisyvec = testvec + 0.3*np.random.randn(1, 784)
121 | outvec = sess.run(y_pred,\
122 | feed_dict={x: noisyvec,\
123 | dropout_keep_prob: 1})
124 |
125 | plotresult(orgvec,noisyvec,outvec)
126 | print("restart Training")
127 |
128 |
129 |
130 | """"
131 | PACKAGES LOADED
132 | Extracting data/train-images-idx3-ubyte.gz
133 | Extracting data/train-labels-idx1-ubyte.gz
134 | Extracting data/t10k-images-idx3-ubyte.gz
135 | Extracting data/t10k-labels-idx1-ubyte.gz
136 | MNIST LOADED
137 | Start Training
138 | Epoch 00/100 average cost: 0.212313
139 | Start Test
140 | Test label is 6
141 | restart Training
142 | Epoch 10/100 average cost: 0.033660
143 | Start Test
144 | Test label is 2
145 | restart Training
146 | Epoch 20/100 average cost: 0.026888
147 | Start Test
148 | Test label is 6
149 | restart Training
150 | Epoch 30/100 average cost: 0.023660
151 | Start Test
152 | Test label is 1
153 | restart Training
154 | Epoch 40/100 average cost: 0.021740
155 | Start Test
156 | Test label is 9
157 | restart Training
158 | Epoch 50/100 average cost: 0.020399
159 | Start Test
160 | Test label is 0
161 | restart Training
162 | Epoch 60/100 average cost: 0.019593
163 | Start Test
164 | Test label is 9
165 | restart Training
166 | Epoch 70/100 average cost: 0.019026
167 | Start Test
168 | Test label is 1
169 | restart Training
170 | Epoch 80/100 average cost: 0.018537
171 | Start Test
172 | Test label is 4
173 | restart Training
174 | Epoch 90/100 average cost: 0.018224
175 | Start Test
176 | Test label is 9
177 | restart Training
178 | """
179 |
--------------------------------------------------------------------------------
/Chapter05/Screenshots/autoencoder.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter05/Screenshots/autoencoder.png
--------------------------------------------------------------------------------
/Chapter05/Screenshots/deconvolutional_autoencoder.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter05/Screenshots/deconvolutional_autoencoder.png
--------------------------------------------------------------------------------
/Chapter05/Screenshots/denoising_autoencoder.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter05/Screenshots/denoising_autoencoder.png
--------------------------------------------------------------------------------
/Chapter06/Python 2.7/LSTM_model_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.contrib import rnn
3 |
4 | from tensorflow.examples.tutorials.mnist import input_data
5 | mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
6 |
7 | learning_rate = 0.001
8 | training_iters = 100000
9 | batch_size = 128
10 | display_step = 10
11 |
12 | n_input = 28
13 | n_steps = 28
14 | n_hidden = 128
15 | n_classes = 10
16 |
17 | x = tf.placeholder("float", [None, n_steps, n_input])
18 | y = tf.placeholder("float", [None, n_classes])
19 |
20 | weights = {
21 | 'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
22 | }
23 | biases = {
24 | 'out': tf.Variable(tf.random_normal([n_classes]))
25 | }
26 |
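# Each 28x28 MNIST image is treated as a sequence of n_steps=28 rows with n_input=28
# pixels per row; RNN() reorders the batch into a list of 28 [batch, 28] tensors as
# required by rnn.static_rnn and classifies from the LSTM output at the last step.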
27 | def RNN(x, weights, biases):
28 | x = tf.transpose(x, [1, 0, 2])
29 | x = tf.reshape(x, [-1, n_input])
30 | x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)
31 | lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
32 | outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
33 | return tf.matmul(outputs[-1], weights['out']) + biases['out']
34 |
35 | pred = RNN(x, weights, biases)
36 | cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
37 | optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
38 |
39 | correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
40 | accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
41 |
42 | init = tf.global_variables_initializer()
43 |
44 | with tf.Session() as sess:
45 | sess.run(init)
46 | step = 1
47 | while step * batch_size < training_iters:
48 | batch_x, batch_y = mnist.train.next_batch(batch_size)
49 | batch_x = batch_x.reshape((batch_size, n_steps, n_input))
50 | sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
51 | if step % display_step == 0:
52 | acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
53 | loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
54 | print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
55 | "{:.6f}".format(loss) + ", Training Accuracy= " + \
56 | "{:.5f}".format(acc))
57 | step += 1
58 | print("Optimization Finished!")
59 |
60 | test_len = 128
61 | test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
62 | test_label = mnist.test.labels[:test_len]
63 | print("Testing Accuracy:", \
64 | sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
65 |
--------------------------------------------------------------------------------
/Chapter06/Python 2.7/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter06/Python 2.7/__init__.py
--------------------------------------------------------------------------------
/Chapter06/Python 2.7/bidirectional_RNN_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | from tensorflow.contrib import rnn
4 |
5 | from tensorflow.examples.tutorials.mnist import input_data
6 | mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
7 |
8 | learning_rate = 0.001
9 | training_iters = 100000
10 | batch_size = 128
11 | display_step = 10
12 |
13 | n_input = 28
14 | n_steps = 28
15 | n_hidden = 128
16 | n_classes = 10
17 |
18 | x = tf.placeholder("float", [None, n_steps, n_input])
19 | y = tf.placeholder("float", [None, n_classes])
20 |
21 | weights = {
22 | 'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))
23 | }
24 | biases = {
25 | 'out': tf.Variable(tf.random_normal([n_classes]))
26 | }
27 |
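# The forward and backward LSTM outputs are concatenated at every time step, which is
# why the output weight matrix has 2*n_hidden rows; the last step feeds the classifier.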
28 | def BiRNN(x, weights, biases):
29 | x = tf.transpose(x, [1, 0, 2])
30 | x = tf.reshape(x, [-1, n_input])
31 | x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)
32 | lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
33 | lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
34 | try:
35 | outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
36 | dtype=tf.float32)
37 | except Exception: # Old TensorFlow version only returns outputs not states
38 | outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
39 | dtype=tf.float32)
40 | return tf.matmul(outputs[-1], weights['out']) + biases['out']
41 |
42 | pred = BiRNN(x, weights, biases)
43 | cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
44 | optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
45 | correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
46 | accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
47 | init = tf.global_variables_initializer()
48 |
49 | with tf.Session() as sess:
50 | sess.run(init)
51 | step = 1
52 | while step * batch_size < training_iters:
53 | batch_x, batch_y = mnist.train.next_batch(batch_size)
54 | batch_x = batch_x.reshape((batch_size, n_steps, n_input))
55 | sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
56 | if step % display_step == 0:
57 | acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
58 | loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
59 | print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
60 | "{:.6f}".format(loss) + ", Training Accuracy= " + \
61 | "{:.5f}".format(acc))
62 | step += 1
63 | print("Optimization Finished!")
64 |
65 | test_len = 128
66 | test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
67 | test_label = mnist.test.labels[:test_len]
68 | print("Testing Accuracy:", \
69 | sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
70 |
--------------------------------------------------------------------------------
/Chapter06/Python 3.5/LSTM_model_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.contrib import rnn
3 |
4 | from tensorflow.examples.tutorials.mnist import input_data
5 | mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
6 |
7 | learning_rate = 0.001
8 | training_iters = 100000
9 | batch_size = 128
10 | display_step = 10
11 |
12 | n_input = 28
13 | n_steps = 28
14 | n_hidden = 128
15 | n_classes = 10
16 |
17 | x = tf.placeholder("float", [None, n_steps, n_input])
18 | y = tf.placeholder("float", [None, n_classes])
19 |
20 | weights = {
21 | 'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
22 | }
23 | biases = {
24 | 'out': tf.Variable(tf.random_normal([n_classes]))
25 | }
26 |
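# Each 28x28 MNIST image is treated as a sequence of n_steps=28 rows with n_input=28
# pixels per row; RNN() reorders the batch into a list of 28 [batch, 28] tensors as
# required by rnn.static_rnn and classifies from the LSTM output at the last step.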
27 | def RNN(x, weights, biases):
28 | x = tf.transpose(x, [1, 0, 2])
29 | x = tf.reshape(x, [-1, n_input])
30 | x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)
31 | lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
32 | outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
33 | return tf.matmul(outputs[-1], weights['out']) + biases['out']
34 |
35 | pred = RNN(x, weights, biases)
36 | cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
37 | optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
38 |
39 | correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
40 | accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
41 |
42 | init = tf.global_variables_initializer()
43 |
44 | with tf.Session() as sess:
45 | sess.run(init)
46 | step = 1
47 | while step * batch_size < training_iters:
48 | batch_x, batch_y = mnist.train.next_batch(batch_size)
49 | batch_x = batch_x.reshape((batch_size, n_steps, n_input))
50 | sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
51 | if step % display_step == 0:
52 | acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
53 | loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
54 | print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
55 | "{:.6f}".format(loss) + ", Training Accuracy= " + \
56 | "{:.5f}".format(acc))
57 | step += 1
58 | print("Optimization Finished!")
59 |
60 | test_len = 128
61 | test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
62 | test_label = mnist.test.labels[:test_len]
63 | print("Testing Accuracy:", \
64 | sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
65 |
--------------------------------------------------------------------------------
/Chapter06/Python 3.5/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter06/Python 3.5/__init__.py
--------------------------------------------------------------------------------
/Chapter06/Python 3.5/bidirectional_RNN_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | from tensorflow.contrib import rnn
4 |
5 | from tensorflow.examples.tutorials.mnist import input_data
6 | mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
7 |
8 | learning_rate = 0.001
9 | training_iters = 100000
10 | batch_size = 128
11 | display_step = 10
12 |
13 | n_input = 28
14 | n_steps = 28
15 | n_hidden = 128
16 | n_classes = 10
17 |
18 | x = tf.placeholder("float", [None, n_steps, n_input])
19 | y = tf.placeholder("float", [None, n_classes])
20 |
21 | weights = {
22 | 'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))
23 | }
24 | biases = {
25 | 'out': tf.Variable(tf.random_normal([n_classes]))
26 | }
27 |
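# The forward and backward LSTM outputs are concatenated at every time step, which is
# why the output weight matrix has 2*n_hidden rows; the last step feeds the classifier.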
28 | def BiRNN(x, weights, biases):
29 | x = tf.transpose(x, [1, 0, 2])
30 | x = tf.reshape(x, [-1, n_input])
31 | x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)
32 | lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
33 | lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
34 | try:
35 | outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
36 | dtype=tf.float32)
37 | except Exception: # Old TensorFlow version only returns outputs not states
38 | outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
39 | dtype=tf.float32)
40 | return tf.matmul(outputs[-1], weights['out']) + biases['out']
41 |
42 | pred = BiRNN(x, weights, biases)
43 | cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
44 | optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
45 | correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
46 | accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
47 | init = tf.global_variables_initializer()
48 |
49 | with tf.Session() as sess:
50 | sess.run(init)
51 | step = 1
52 | while step * batch_size < training_iters:
53 | batch_x, batch_y = mnist.train.next_batch(batch_size)
54 | batch_x = batch_x.reshape((batch_size, n_steps, n_input))
55 | sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
56 | if step % display_step == 0:
57 | acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
58 | loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
59 | print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
60 | "{:.6f}".format(loss) + ", Training Accuracy= " + \
61 | "{:.5f}".format(acc))
62 | step += 1
63 | print("Optimization Finished!")
64 |
65 | test_len = 128
66 | test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
67 | test_label = mnist.test.labels[:test_len]
68 | print("Testing Accuracy:", \
69 | sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
70 |
--------------------------------------------------------------------------------
/Chapter06/Screenshots/Bidirectional_RNN_shot1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter06/Screenshots/Bidirectional_RNN_shot1.png
--------------------------------------------------------------------------------
/Chapter06/Screenshots/Bidirectional_RNN_shot2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter06/Screenshots/Bidirectional_RNN_shot2.png
--------------------------------------------------------------------------------
/Chapter06/Screenshots/LSTM_shot1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter06/Screenshots/LSTM_shot1.png
--------------------------------------------------------------------------------
/Chapter06/Screenshots/LSTM_shot2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter06/Screenshots/LSTM_shot2.png
--------------------------------------------------------------------------------
/Chapter07/Python 2.7/gpu_computing_with_multiple_GPU.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import datetime
4 |
5 | log_device_placement = True
6 | n = 10
7 |
8 | A = np.random.rand(10000, 10000).astype('float32')
9 | B = np.random.rand(10000, 10000).astype('float32')
10 |
11 | c1 = []
12 |
13 | def matpow(M, n):
14 | if n < 1: # base case: return M once n < 1
15 | return M
16 | else:
17 | return tf.matmul(M, matpow(M, n-1))
18 |
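# matpow() computes M^n by repeated tf.matmul; A^10 is placed on the first GPU and
# B^10 on the second, so the two matrix powers run in parallel before the CPU adds them.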
19 | #FIRST GPU
20 | with tf.device('/gpu:0'):
21 | a = tf.placeholder(tf.float32, [10000, 10000])
22 | c1.append(matpow(a, n))
23 |
24 | #SECOND GPU
25 | with tf.device('/gpu:1'):
26 | b = tf.placeholder(tf.float32, [10000, 10000])
27 | c1.append(matpow(b, n))
28 |
29 |
30 | with tf.device('/cpu:0'):
31 | sum = tf.add_n(c1)
32 | print(sum)
33 |
34 | t1_1 = datetime.datetime.now()
35 | with tf.Session(config=tf.ConfigProto\
36 | (allow_soft_placement=True,\
37 | log_device_placement=log_device_placement))\
38 | as sess:
39 | sess.run(sum, {a:A, b:B})
40 | t2_1 = datetime.datetime.now()
41 |
--------------------------------------------------------------------------------
/Chapter07/Python 2.7/gpu_example.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import datetime
4 |
5 | log_device_placement = True
6 |
7 | n = 10
8 |
9 | A = np.random.rand(10000, 10000).astype('float32')
10 | B = np.random.rand(10000, 10000).astype('float32')
11 |
12 |
13 | c1 = []
14 | c2 = []
15 |
16 | def matpow(M, n):
17 | if n < 1: # base case: return M once n < 1
18 | return M
19 | else:
20 | return tf.matmul(M, matpow(M, n-1))
21 |
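# Single-device variant: both A^10 and B^10 are placed on gpu:0 and only the final
# addition runs on the CPU; t2_1 - t1_1 gives the elapsed wall-clock time.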
22 | with tf.device('/gpu:0'):
23 | a = tf.placeholder(tf.float32, [10000, 10000])
24 | b = tf.placeholder(tf.float32, [10000, 10000])
25 | c1.append(matpow(a, n))
26 | c1.append(matpow(b, n))
27 | # If '/cpu:0' below is not recognized, use the full device name '/job:localhost/replica:0/task:0/cpu:0' instead
28 | with tf.device('/cpu:0'):
29 | sum = tf.add_n(c1) #Addition of all elements in c1, i.e. A^n + B^n
30 |
31 | t1_1 = datetime.datetime.now()
32 | with tf.Session(config=tf.ConfigProto\
33 | (log_device_placement=log_device_placement)) as sess:
34 | sess.run(sum, {a:A, b:B})
35 | t2_1 = datetime.datetime.now()
36 |
--------------------------------------------------------------------------------
/Chapter07/Python 2.7/gpu_soft_placemnet_1.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import datetime
4 |
5 | log_device_placement = True
6 | n = 10
7 |
8 | A = np.random.rand(10000, 10000).astype('float32')
9 | B = np.random.rand(10000, 10000).astype('float32')
10 |
11 | c1 = []
12 |
13 | def matpow(M, n):
14 | if n < 1: # base case: return M once n < 1
15 | return M
16 | else:
17 | return tf.matmul(M, matpow(M, n-1))
18 |
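# Both matrix powers are pinned to explicit CPU devices; allow_soft_placement=True in
# the session config lets TensorFlow fall back to an available device if cpu:1 does not exist.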
19 | with tf.device('/job:localhost/replica:0/task:0/cpu:0'):
20 | a = tf.placeholder(tf.float32, [10000, 10000])
21 | b = tf.placeholder(tf.float32, [10000, 10000])
22 | c1.append(matpow(a, n))
23 | c1.append(matpow(b, n))
24 |
25 | with tf.device('/job:localhost/replica:0/task:0/cpu:1'):
26 | sum = tf.add_n(c1)
27 | print(sum)
28 |
29 | t1_1 = datetime.datetime.now()
30 | with tf.Session(config=tf.ConfigProto\
31 | (allow_soft_placement=True,\
32 | log_device_placement=log_device_placement))\
33 | as sess:
34 | sess.run(sum, {a:A, b:B})
35 | t2_1 = datetime.datetime.now()
36 |
--------------------------------------------------------------------------------
/Chapter07/Python 3.5/gpu_computing_with_multiple_GPU.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import datetime
4 |
5 | log_device_placement = True
6 | n = 10
7 |
8 | A = np.random.rand(10000, 10000).astype('float32')
9 | B = np.random.rand(10000, 10000).astype('float32')
10 |
11 | c1 = []
12 |
13 | def matpow(M, n):
14 | if n < 1: # base case: return M once n < 1
15 | return M
16 | else:
17 | return tf.matmul(M, matpow(M, n-1))
18 |
19 | #FIRST GPU
20 | with tf.device('/gpu:0'):
21 | a = tf.placeholder(tf.float32, [10000, 10000])
22 | c1.append(matpow(a, n))
23 |
24 | #SECOND GPU
25 | with tf.device('/gpu:1'):
26 | b = tf.placeholder(tf.float32, [10000, 10000])
27 | c1.append(matpow(b, n))
28 |
29 |
30 | with tf.device('/cpu:0'):
31 | sum = tf.add_n(c1)
32 | print(sum)
33 |
34 | t1_1 = datetime.datetime.now()
35 | with tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=log_device_placement)) as sess:
36 | sess.run(sum, {a:A, b:B})
37 |
38 | t2_1 = datetime.datetime.now()
39 |
--------------------------------------------------------------------------------
/Chapter07/Python 3.5/gpu_example.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import datetime
4 |
5 | log_device_placement = True
6 | n = 10
7 | A = np.random.rand(10000, 10000).astype('float32')
8 | B = np.random.rand(10000, 10000).astype('float32')
9 | c1 = []
10 | c2 = []
11 |
12 | def matpow(M, n):
13 | if n < 1: # base case: return M once n < 1
14 | return M
15 | else:
16 | return tf.matmul(M, matpow(M, n-1))
17 |
18 | with tf.device('/gpu:0'): # For CPU use /cpu:0
19 | a = tf.placeholder(tf.float32, [10000, 10000])
20 | b = tf.placeholder(tf.float32, [10000, 10000])
21 | c1.append(matpow(a, n))
22 | c1.append(matpow(b, n))
23 |
24 | # If '/cpu:0' below is not recognized, use the full device name '/job:localhost/replica:0/task:0/cpu:0' instead
25 | with tf.device('/cpu:0'):
26 | sum = tf.add_n(c1) #Addition of all elements in c1, i.e. A^n + B^n
27 |
28 | t1_1 = datetime.datetime.now()
29 | with tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:
30 | sess.run(sum, {a:A, b:B})
31 |
32 | t2_1 = datetime.datetime.now()
33 |
--------------------------------------------------------------------------------
/Chapter07/Python 3.5/gpu_soft_placemnet_1.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import datetime
4 |
5 | log_device_placement = True
6 | n = 10
7 |
8 | A = np.random.rand(10000, 10000).astype('float32')
9 | B = np.random.rand(10000, 10000).astype('float32')
10 |
11 | c1 = []
12 |
13 | def matpow(M, n):
14 | if n < 1: # base case: return M once n < 1
15 | return M
16 | else:
17 | return tf.matmul(M, matpow(M, n-1))
18 |
19 | with tf.device('/gpu:0'): # for CPU only, use /cpu:0
20 | a = tf.placeholder(tf.float32, [10000, 10000])
21 | b = tf.placeholder(tf.float32, [10000, 10000])
22 | c1.append(matpow(a, n))
23 | c1.append(matpow(b, n))
24 |
25 | with tf.device('/gpu:1'): # for CPU only, use /cpu:0
26 | sum = tf.add_n(c1)
27 | print(sum)
28 |
29 | t1_1 = datetime.datetime.now()
30 | with tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=log_device_placement)) as sess:
31 | sess.run(sum, {a:A, b:B})
32 |
33 | t2_1 = datetime.datetime.now()
34 |
--------------------------------------------------------------------------------
/Chapter07/Screenshots/gpu_computing_with_multiple_GPU.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter07/Screenshots/gpu_computing_with_multiple_GPU.png
--------------------------------------------------------------------------------
/Chapter07/Screenshots/gpu_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter07/Screenshots/gpu_example.png
--------------------------------------------------------------------------------
/Chapter07/Screenshots/gpu_soft_placemnet_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter07/Screenshots/gpu_soft_placemnet_1.png
--------------------------------------------------------------------------------
/Chapter08/Python 2.7/digit_classifier.py:
--------------------------------------------------------------------------------
1 | from six.moves import xrange
2 | import tensorflow as tf
3 | import prettytensor as pt
4 | from prettytensor.tutorial import data_utils
5 |
6 | tf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')
7 | FLAGS = tf.app.flags.FLAGS
8 |
9 | BATCH_SIZE = 50
10 | EPOCH_SIZE = 60000 // BATCH_SIZE
11 | TEST_SIZE = 10000 // BATCH_SIZE
12 |
13 | tf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')
14 | FLAGS = tf.app.flags.FLAGS
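# Two model choices, selected with the --model flag: 'full' is a two-layer (100+100)
# fully connected softmax classifier and 'conv' is a LeNet-5 style convolutional network.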
15 | def multilayer_fully_connected(images, labels):
16 | images = pt.wrap(images)
17 | with pt.defaults_scope(activation_fn=tf.nn.relu,l2loss=0.00001):
18 | return (images.flatten().\
19 | fully_connected(100).\
20 | fully_connected(100).\
21 | softmax_classifier(10, labels))
22 |
23 | def lenet5(images, labels):
24 | images = pt.wrap(images)
25 | with pt.defaults_scope\
26 | (activation_fn=tf.nn.relu, l2loss=0.00001):
27 | return (images.conv2d(5, 20).\
28 | max_pool(2, 2).\
29 | conv2d(5, 50).\
30 | max_pool(2, 2).\
31 | flatten().\
32 | fully_connected(500).\
33 | softmax_classifier(10, labels))
34 |
35 | def main(_=None):
36 | image_placeholder = tf.placeholder\
37 | (tf.float32, [BATCH_SIZE, 28, 28, 1])
38 | labels_placeholder = tf.placeholder\
39 | (tf.float32, [BATCH_SIZE, 10])
40 |
41 | if FLAGS.model == 'full':
42 | result = multilayer_fully_connected\
43 | (image_placeholder,\
44 | labels_placeholder)
45 | elif FLAGS.model == 'conv':
46 | result = lenet5(image_placeholder,\
47 | labels_placeholder)
48 | else:
49 | raise ValueError\
50 | ('model must be full or conv: %s' % FLAGS.model)
51 |
52 | accuracy = result.softmax.\
53 | evaluate_classifier\
54 | (labels_placeholder,phase=pt.Phase.test)
55 |
56 | train_images, train_labels = data_utils.mnist(training=True)
57 | test_images, test_labels = data_utils.mnist(training=False)
58 | optimizer = tf.train.GradientDescentOptimizer(0.01)
59 | train_op = pt.apply_optimizer(optimizer,losses=[result.loss])
60 | runner = pt.train.Runner(save_path=FLAGS.save_path)
61 |
62 |
63 | with tf.Session():
64 | for epoch in xrange(10):
65 | train_images, train_labels = \
66 | data_utils.permute_data\
67 | ((train_images, train_labels))
68 |
69 | runner.train_model(train_op,result.\
70 | loss,EPOCH_SIZE,\
71 | feed_vars=(image_placeholder,\
72 | labels_placeholder),\
73 | feed_data=pt.train.\
74 | feed_numpy(BATCH_SIZE,\
75 | train_images,\
76 | train_labels),\
77 | print_every=100)
78 | classification_accuracy = runner.evaluate_model\
79 | (accuracy,\
80 | TEST_SIZE,\
81 | feed_vars=(image_placeholder,\
82 | labels_placeholder),\
83 | feed_data=pt.train.\
84 | feed_numpy(BATCH_SIZE,\
85 | test_images,\
86 | test_labels))
87 | print('epoch', epoch + 1)
88 | print('accuracy', classification_accuracy)
89 |
90 | if __name__ == '__main__':
91 | tf.app.run()
92 |
93 |
--------------------------------------------------------------------------------
/Chapter08/Python 2.7/keras_movie_classifier_1.py:
--------------------------------------------------------------------------------
1 | import numpy
2 | from keras.datasets import imdb
3 | from keras.models import Sequential
4 | from keras.layers import Dense
5 | from keras.layers import LSTM
6 | from keras.layers.embeddings import Embedding
7 | from keras.preprocessing import sequence
8 |
9 | # fix random seed for reproducibility
10 | numpy.random.seed(7)
11 |
12 | # load the dataset but only keep the top n words, zero the rest
13 | top_words = 5000
14 | (X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)
15 | # truncate and pad input sequences
16 | max_review_length = 500
17 | X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
18 | X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
19 |
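# Reviews arrive as sequences of word indices (only the 5,000 most frequent words are
# kept), padded/truncated to 500 tokens; the model embeds them into 32-dimensional
# vectors, runs a 100-unit LSTM and predicts the sentiment with a sigmoid output.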
20 | # create the model
21 | embedding_vector_length = 32
22 | model = Sequential()
23 | model.add(Embedding(top_words, embedding_vector_length,\
24 | input_length=max_review_length))
25 | model.add(LSTM(100))
26 | model.add(Dense(1, activation='sigmoid'))
27 | model.compile(loss='binary_crossentropy',\
28 | optimizer='adam',\
29 | metrics=['accuracy'])
30 | print(model.summary())
31 |
32 | model.fit(X_train, y_train,\
33 | validation_data=(X_test, y_test),\
34 | nb_epoch=3, batch_size=64)
35 |
36 | # Final evaluation of the model
37 | scores = model.evaluate(X_test, y_test, verbose=0)
38 |
39 | print("Accuracy: %.2f%%" % (scores[1]*100))
40 |
--------------------------------------------------------------------------------
/Chapter08/Python 2.7/keras_movie_classifier_using_convLayer_1.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 |
3 | import numpy
4 | from keras.datasets import imdb
5 | from keras.models import Sequential
6 | from keras.layers import Dense
7 | from keras.layers import LSTM
8 | from keras.layers.embeddings import Embedding
9 | from keras.preprocessing import sequence
10 | from keras.layers import Conv1D, MaxPooling1D
11 |
12 | # fix random seed for the reproducibility
13 | numpy.random.seed(7)
14 |
15 | # load the dataset but only keep the top n words, zero the rest
16 | top_words = 5000
17 | (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
18 | # truncate and pad input sequences
19 | max_review_length = 500
20 | X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
21 | X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
22 |
23 | # create the model
24 | embedding_vector_length = 32
25 | model = Sequential()
26 | model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
27 | model.add(Conv1D(filters=32, kernel_size=3, padding="same", activation="relu"))
28 | model.add(MaxPooling1D(pool_size=2))
29 | model.add(LSTM(32, return_sequences=True))
30 | model.add(LSTM(24, return_sequences=True))
31 | model.add(LSTM(1, return_sequences=False))
32 |
33 | model.add(Dense(1, activation='sigmoid'))
34 | model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
35 | print(model.summary())
36 |
37 | model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64)
38 |
39 | # Final evaluation of the model
40 | scores = model.evaluate(X_test, y_test, verbose=0)
41 |
42 | print("Accuracy: %.2f%%" % (scores[1]*100))
43 |
--------------------------------------------------------------------------------
/Chapter08/Python 2.7/pretty_tensor_digit_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import prettytensor as pt
3 | from prettytensor.tutorial import data_utils
4 |
5 | tf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')
6 | FLAGS = tf.app.flags.FLAGS
7 |
8 | BATCH_SIZE = 50
9 | EPOCH_SIZE = 60000 // BATCH_SIZE
10 | TEST_SIZE = 10000 // BATCH_SIZE
11 |
12 | tf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')
13 | FLAGS = tf.app.flags.FLAGS
14 | def multilayer_fully_connected(images, labels):
15 | images = pt.wrap(images)
16 | with pt.defaults_scope(activation_fn=tf.nn.relu,l2loss=0.00001):
17 | return (images.flatten().fully_connected(100).fully_connected(100).softmax_classifier(10, labels))
18 |
19 |
20 | def lenet5(images, labels):
21 | images = pt.wrap(images)
22 | with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
23 | return (images.conv2d(5, 20).max_pool(2, 2).conv2d(5, 50).max_pool(2, 2).flatten().fully_connected(500).softmax_classifier(10, labels))
24 |
25 |
26 | def main(_=None):
27 | image_placeholder = tf.placeholder\
28 | (tf.float32, [BATCH_SIZE, 28, 28, 1])
29 | labels_placeholder = tf.placeholder\
30 | (tf.float32, [BATCH_SIZE, 10])
31 |
32 | if FLAGS.model == 'full':
33 | result = multilayer_fully_connected\
34 | (image_placeholder,\
35 | labels_placeholder)
36 | elif FLAGS.model == 'conv':
37 | result = lenet5(image_placeholder, labels_placeholder)
38 | else:
39 | raise ValueError\
40 | ('model must be full or conv: %s' % FLAGS.model)
41 |
42 | accuracy = result.softmax.\
43 | evaluate_classifier\
44 | (labels_placeholder,phase=pt.Phase.test)
45 |
46 | train_images, train_labels = data_utils.mnist(training=True)
47 | test_images, test_labels = data_utils.mnist(training=False)
48 | optimizer = tf.train.GradientDescentOptimizer(0.01)
49 | train_op = pt.apply_optimizer(optimizer,losses=[result.loss])
50 | runner = pt.train.Runner(save_path=FLAGS.save_path)
51 |
52 |
53 | with tf.Session():
54 | for epoch in xrange(10):
55 | train_images, train_labels = \
56 | data_utils.permute_data\
57 | ((train_images, train_labels))
58 |
59 | runner.train_model(train_op,result.\
60 | loss,EPOCH_SIZE,\
61 | feed_vars=(image_placeholder,\
62 | labels_placeholder),\
63 | feed_data=pt.train.\
64 | feed_numpy(BATCH_SIZE,\
65 | train_images,\
66 | train_labels),\
67 | print_every=100)
68 | classification_accuracy = runner.evaluate_model\
69 | (accuracy,\
70 | TEST_SIZE,\
71 | feed_vars=(image_placeholder,\
72 | labels_placeholder),\
73 | feed_data=pt.train.\
74 | feed_numpy(BATCH_SIZE,\
75 | test_images,\
76 | test_labels))
77 | print('epoch' , epoch + 1)
78 | print('accuracy', classification_accuracy )
79 |
80 | if __name__ == '__main__':
81 | tf.app.run()
82 |
--------------------------------------------------------------------------------
/Chapter08/Python 2.7/tflearn_titanic_classifier.py:
--------------------------------------------------------------------------------
1 | import tflearn
2 | from tflearn.datasets import titanic
3 | import numpy as np
4 | titanic.download_dataset('titanic_dataset.csv')
5 | from tflearn.data_utils import load_csv
6 | data, labels = load_csv('titanic_dataset.csv', target_column=0,
7 | categorical_labels=True, n_classes=2)
8 | 
9 | def preprocess(data, columns_to_ignore):
10 | for id in sorted(columns_to_ignore, reverse=True):
11 | [r.pop(id) for r in data]
12 | for i in range(len(data)):
13 | data[i][1] = 1. if data[i][1] == 'female' else 0.
14 | return np.array(data, dtype=np.float32)
15 | 
16 | to_ignore=[1, 6]
17 | data = preprocess(data, to_ignore)
18 | net = tflearn.input_data(shape=[None, 6])
19 | 
20 | net = tflearn.fully_connected(net, 32)
21 | net = tflearn.fully_connected(net, 32)
22 | net = tflearn.fully_connected(net, 2, activation='softmax')
23 | net = tflearn.regression(net)
24 | model = tflearn.DNN(net)
25 | model.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)
26 | 
--------------------------------------------------------------------------------
/Chapter08/Python 3.5/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter08/Python 3.5/__init__.py
--------------------------------------------------------------------------------
/Chapter08/Python 3.5/digit_classifier.py:
--------------------------------------------------------------------------------
1 | from six.moves import range
2 | import tensorflow as tf
3 | import prettytensor as pt
4 | from prettytensor.tutorial import data_utils
5 |
6 | tf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')
7 | FLAGS = tf.app.flags.FLAGS
8 |
9 | BATCH_SIZE = 50
10 | EPOCH_SIZE = 60000 // BATCH_SIZE
11 | TEST_SIZE = 10000 // BATCH_SIZE
12 |
13 | image_placeholder = tf.placeholder\
14 | (tf.float32, [BATCH_SIZE, 28, 28, 1])
15 | labels_placeholder = tf.placeholder\
16 | (tf.float32, [BATCH_SIZE, 10])
17 |
18 | tf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')
19 | FLAGS = tf.app.flags.FLAGS
20 | def multilayer_fully_connected(images, labels):
21 | images = pt.wrap(images)
22 | with pt.defaults_scope(activation_fn=tf.nn.relu,l2loss=0.00001):
23 | return (images.flatten().\
24 | fully_connected(100).\
25 | fully_connected(100).\
26 | softmax_classifier(10, labels))
27 |
28 | def lenet5(images, labels):
29 | images = pt.wrap(images)
30 | with pt.defaults_scope\
31 | (activation_fn=tf.nn.relu, l2loss=0.00001):
32 | return (images.conv2d(5, 20).\
33 | max_pool(2, 2).\
34 | conv2d(5, 50).\
35 | max_pool(2, 2).\
36 | flatten().\
37 | fully_connected(500).\
38 | softmax_classifier(10, labels))
39 |
40 | def main(_=None):
41 | image_placeholder = tf.placeholder\
42 | (tf.float32, [BATCH_SIZE, 28, 28, 1])
43 | labels_placeholder = tf.placeholder\
44 | (tf.float32, [BATCH_SIZE, 10])
45 |
46 | if FLAGS.model == 'full':
47 | result = multilayer_fully_connected(image_placeholder, labels_placeholder)
48 | elif FLAGS.model == 'conv':
49 | result = lenet5(image_placeholder, labels_placeholder)
50 | else:
51 | raise ValueError('model must be full or conv: %s' % FLAGS.model)
52 |
53 | accuracy = result.softmax.evaluate_classifier(labels_placeholder,phase=pt.Phase.test)
54 |
55 | train_images, train_labels = data_utils.mnist(training=True)
56 | test_images, test_labels = data_utils.mnist(training=False)
57 | optimizer = tf.train.GradientDescentOptimizer(0.01)
58 | train_op = pt.apply_optimizer(optimizer,losses=[result.loss])
59 | runner = pt.train.Runner(save_path=FLAGS.save_path)
60 |
61 |
62 | with tf.Session():
63 | for epoch in range(10):
64 | train_images, train_labels = \
65 | data_utils.permute_data\
66 | ((train_images, train_labels))
67 |
68 | runner.train_model(train_op,result.\
69 | loss,EPOCH_SIZE,\
70 | feed_vars=(image_placeholder,\
71 | labels_placeholder),\
72 | feed_data=pt.train.\
73 | feed_numpy(BATCH_SIZE,\
74 | train_images,\
75 | train_labels),\
76 | print_every=100)
77 | classification_accuracy = runner.evaluate_model\
78 | (accuracy,\
79 | TEST_SIZE,\
80 | feed_vars=(image_placeholder,\
81 | labels_placeholder),\
82 | feed_data=pt.train.\
83 | feed_numpy(BATCH_SIZE,\
84 | test_images,\
85 | test_labels))
86 |
87 | print('epoch' , epoch + 1)
88 | print('accuracy', classification_accuracy)
89 |
90 | if __name__ == '__main__':
91 | tf.app.run()
92 |
93 |
--------------------------------------------------------------------------------
/Chapter08/Python 3.5/keras_movie_classifier_1.py:
--------------------------------------------------------------------------------
1 | import numpy
2 | from keras.datasets import imdb
3 | from keras.models import Sequential
4 | from keras.layers import Dense
5 | from keras.layers import LSTM
6 | from keras.layers.embeddings import Embedding
7 | from keras.preprocessing import sequence
8 |
9 | # fix random seed for reproducibility
10 | numpy.random.seed(7)
11 |
12 | # load the dataset but only keep the top n words, zero the rest
13 | top_words = 5000
14 | (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
15 | # truncate and pad input sequences
16 | max_review_length = 500
17 | X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
18 | X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
19 |
20 | # create the model
21 | embedding_vector_length = 32
22 | model = Sequential()
23 | model.add(Embedding(top_words, embedding_vector_length,\
24 | input_length=max_review_length))
25 | model.add(LSTM(100))
26 | model.add(Dense(1, activation='sigmoid'))
27 | model.compile(loss='binary_crossentropy',\
28 | optimizer='adam',\
29 | metrics=['accuracy'])
30 | print(model.summary())
31 |
32 | model.fit(X_train, y_train,\
33 | validation_data=(X_test, y_test),\
34 | epochs=3, batch_size=64)
35 |
36 | # Final evaluation of the model
37 | scores = model.evaluate(X_test, y_test, verbose=0)
38 |
39 | print("Accuracy: %.2f%%" % (scores[1]*100))
40 |
--------------------------------------------------------------------------------
/Chapter08/Python 3.5/keras_movie_classifier_using_convLayer_1.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 |
3 | import numpy
4 | from keras.datasets import imdb
5 | from keras.models import Sequential
6 | from keras.layers import Dense
7 | from keras.layers import LSTM
8 | from keras.layers.embeddings import Embedding
9 | from keras.preprocessing import sequence
10 | from keras.layers import Conv1D, MaxPooling1D
11 |
12 | # fix random seed for reproducibility
13 | numpy.random.seed(7)
14 |
15 | # load the dataset but only keep the top n words, zero the rest
16 | top_words = 5000
17 | (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
18 | # truncate and pad input sequences
19 | max_review_length = 500
20 | X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
21 | X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
22 |
23 | # create the model
24 | embedding_vector_length = 32
25 | model = Sequential()
26 | model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
27 | model.add(Conv1D(filters=32, kernel_size=3, padding="same", activation="relu"))
28 | model.add(MaxPooling1D(pool_size=2))  # keep the time dimension so the LSTM stack below still receives sequences
29 | model.add(LSTM(32, return_sequences=True))
30 | model.add(LSTM(24, return_sequences=True))
31 | model.add(LSTM(1, return_sequences=False))
32 | 
33 | model.add(Dense(1, activation='sigmoid'))  # one sigmoid unit matches the binary labels and the binary_crossentropy loss
34 | model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
35 | print(model.summary())
36 |
37 | model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64)
38 |
39 | # Final evaluation of the model
40 | scores = model.evaluate(X_test, y_test, verbose=0)
41 |
42 | print("Accuracy: %.2f%%" % (scores[1]*100))
43 |
--------------------------------------------------------------------------------
/Chapter08/Python 3.5/pretty_tensor_digit_1.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import prettytensor as pt
3 | from prettytensor.tutorial import data_utils
4 |
5 | tf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')
6 | FLAGS = tf.app.flags.FLAGS
7 |
8 | BATCH_SIZE = 50
9 | EPOCH_SIZE = 60000 // BATCH_SIZE
10 | TEST_SIZE = 10000 // BATCH_SIZE
11 |
12 | image_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 28, 28, 1])
13 | labels_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 10])
14 |
15 | tf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')
16 | FLAGS = tf.app.flags.FLAGS
17 | def multilayer_fully_connected(images, labels):
18 | images = pt.wrap(images)
19 | with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
20 | return (images.flatten().fully_connected(100).fully_connected(100).softmax_classifier(10, labels))
21 |
22 |
23 | def lenet5(images, labels):
24 | images = pt.wrap(images)
25 | with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
26 | return (images.conv2d(5, 20).max_pool(2, 2).conv2d(5, 50).max_pool(2, 2).flatten().fully_connected(500).softmax_classifier(10, labels))
27 |
28 |
29 | def main(_=None):
30 | image_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 28, 28, 1])
31 | labels_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 10])
32 |
33 | if FLAGS.model == 'full':
34 | result = multilayer_fully_connected(image_placeholder, labels_placeholder)
35 | elif FLAGS.model == 'conv':
36 | result = lenet5(image_placeholder, labels_placeholder)
37 | else:
38 | raise ValueError\
39 | ('model must be full or conv: %s' % FLAGS.model)
40 |
41 | accuracy = result.softmax.evaluate_classifier(labels_placeholder,phase=pt.Phase.test)
42 |
43 | train_images, train_labels = data_utils.mnist(training=True)
44 | test_images, test_labels = data_utils.mnist(training=False)
45 | optimizer = tf.train.GradientDescentOptimizer(0.01)
46 | train_op = pt.apply_optimizer(optimizer,losses=[result.loss])
47 | runner = pt.train.Runner(save_path=FLAGS.save_path)
48 |
49 |
50 | with tf.Session():
51 | for epoch in range(10):
52 | train_images, train_labels = \
53 | data_utils.permute_data\
54 | ((train_images, train_labels))
55 |
56 | runner.train_model(train_op,result.\
57 | loss,EPOCH_SIZE,\
58 | feed_vars=(image_placeholder,\
59 | labels_placeholder),\
60 | feed_data=pt.train.\
61 | feed_numpy(BATCH_SIZE,\
62 | train_images,\
63 | train_labels),\
64 | print_every=100)
65 | classification_accuracy = runner.evaluate_model\
66 | (accuracy,\
67 | TEST_SIZE,\
68 | feed_vars=(image_placeholder,\
69 | labels_placeholder),\
70 | feed_data=pt.train.\
71 | feed_numpy(BATCH_SIZE,\
72 | test_images,\
73 | test_labels))
74 |
75 | print('epoch' , epoch + 1)
76 | print('accuracy', classification_accuracy )
77 |
78 | if __name__ == '__main__':
79 | tf.app.run()
80 |
--------------------------------------------------------------------------------
/Chapter08/Python 3.5/tflearn_titanic_classifier.py:
--------------------------------------------------------------------------------
1 | import tflearn
2 | from tflearn.datasets import titanic
3 | import numpy as np
4 | titanic.download_dataset('titanic_dataset.csv')
5 | from tflearn.data_utils import load_csv
6 | data, labels = load_csv('titanic_dataset.csv', target_column=0,
7 | categorical_labels=True, n_classes=2)
8 |
9 | def preprocess(data, columns_to_ignore):
10 | for id in sorted(columns_to_ignore, reverse=True):
11 | [r.pop(id) for r in data]
12 | for i in range(len(data)):
13 | data[i][1] = 1. if data[i][1] == 'female' else 0.
14 | return np.array(data, dtype=np.float32)
15 |
16 | to_ignore=[1, 6]
17 | data = preprocess(data, to_ignore)
18 | net = tflearn.input_data(shape=[None, 6])
19 |
20 | net = tflearn.fully_connected(net, 32)
21 | net = tflearn.fully_connected(net, 32)
22 | net = tflearn.fully_connected(net, 2, activation='softmax')
23 | net = tflearn.regression(net)
24 | model = tflearn.DNN(net)
25 | model.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)
26 |
27 | # Evaluate the model
28 | accuracy = model.evaluate(data, labels, batch_size=16)
29 | print('Accuracy: ', accuracy)
30 |
--------------------------------------------------------------------------------
/Chapter08/Screenshots/Digit classiifcation.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter08/Screenshots/Digit classiifcation.png
--------------------------------------------------------------------------------
/Chapter08/Screenshots/Digit classiifcation2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter08/Screenshots/Digit classiifcation2.png
--------------------------------------------------------------------------------
/Chapter08/Screenshots/Keras_movie_classifier.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter08/Screenshots/Keras_movie_classifier.png
--------------------------------------------------------------------------------
/Chapter08/Screenshots/pretty_tensor_snap1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter08/Screenshots/pretty_tensor_snap1.png
--------------------------------------------------------------------------------
/Chapter08/Screenshots/pretty_tensor_snap2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter08/Screenshots/pretty_tensor_snap2.png
--------------------------------------------------------------------------------
/Chapter08/Screenshots/tflearn_titanic_clasiifer.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter08/Screenshots/tflearn_titanic_clasiifer.png
--------------------------------------------------------------------------------
/Chapter09/Python 2.7/classify_image.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf, sys
2 |
3 | # The image to be classified is passed as a command-line argument (e.g. python classify_image.py <image_path>)
4 | provided_image_path = sys.argv[1]
5 |
6 | # then we will read the image data
7 | provided_image_data = tf.gfile.FastGFile(provided_image_path, 'rb').read()
8 |
9 | # Loads label file
10 | label_lines = [line.rstrip() for line
11 | in tf.gfile.GFile("tensorflow_files/retrained_labels.txt")]
12 |
13 | # Unpersists graph from file
14 | with tf.gfile.FastGFile("tensorflow_files/retrained_graph.pb", 'rb') as f:
15 | graph_def = tf.GraphDef()
16 | graph_def.ParseFromString(f.read())
17 | _ = tf.import_graph_def(graph_def, name='')
18 |
19 | with tf.Session() as sess:
20 | # pass the provided_image_data as input to the graph
21 | softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
22 |
23 | network_predictions = sess.run(softmax_tensor, \
24 | {'DecodeJpeg/contents:0': provided_image_data})
25 | 
26 | # Sort the result by confidence to show the flower labels accordingly
27 | top_predictions = network_predictions[0].argsort()[-len(network_predictions[0]):][::-1]
28 | 
29 | for prediction in top_predictions:
30 | flower_type = label_lines[prediction]
31 | score = network_predictions[0][prediction]
32 | print('%s (score = %.5f)' % (flower_type, score))
33 |
--------------------------------------------------------------------------------
/Chapter09/Python 3.5/classify_image.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf, sys
2 |
3 | # The image to be classified is passed as a command-line argument (e.g. python classify_image.py <image_path>)
4 | provided_image_path = sys.argv[1]
5 |
6 | # then we will read the image data
7 | provided_image_data = tf.gfile.FastGFile(provided_image_path, 'rb').read()
8 |
9 | # Loads label file
10 | label_lines = [line.rstrip() for line
11 | in tf.gfile.GFile("tensorflow_files/retrained_labels.txt")]
12 |
13 | # Unpersists graph from file
14 | with tf.gfile.FastGFile("tensorflow_files/retrained_graph.pb", 'rb') as f:
15 | graph_def = tf.GraphDef()
16 | graph_def.ParseFromString(f.read())
17 | _ = tf.import_graph_def(graph_def, name='')
18 |
19 | with tf.Session() as sess:
20 | # pass the provided_image_data as input to the graph
21 | softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
22 |
23 | network_predictions = sess.run(softmax_tensor, \
24 | {'DecodeJpeg/contents:0': provided_image_data})
25 | 
26 | # Sort the result by confidence to show the flower labels accordingly
27 | top_predictions = network_predictions[0].argsort()[-len(network_predictions[0]):][::-1]
28 | 
29 | for prediction in top_predictions:
30 | flower_type = label_lines[prediction]
31 | score = network_predictions[0][prediction]
32 | print('%s (score = %.5f)' % (flower_type, score))
33 |
--------------------------------------------------------------------------------
/Chapter09/Screenshots/classification_daisy_flower.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter09/Screenshots/classification_daisy_flower.png
--------------------------------------------------------------------------------
/Chapter09/Screenshots/flowers_model_training.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter09/Screenshots/flowers_model_training.png
--------------------------------------------------------------------------------
/Chapter09/Screenshots/gpu_computing_with_multiple_GPU.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter09/Screenshots/gpu_computing_with_multiple_GPU.png
--------------------------------------------------------------------------------
/Chapter10/Python 2.7/FrozenLake_1.py:
--------------------------------------------------------------------------------
1 | import gym
2 | import numpy as np
3 |
4 | env = gym.make('FrozenLake-v0')
5 |
6 | #Initialize table with all zeros
7 | Q = np.zeros([env.observation_space.n,env.action_space.n])
8 | # Set learning parameters
9 | lr = .85
10 | gamma = .99
11 | num_episodes = 2000
12 |
13 | #create lists to contain total rewards and steps per episode
14 | rList = []
15 | for i in range(num_episodes):
16 | #Reset environment and get first new observation
17 | s = env.reset()
18 | rAll = 0
19 | d = False
20 | j = 0
21 |
22 | #The Q-Table learning algorithm
23 | while j < 99:
24 | j+=1
25 |
26 | #Choose an action by greedily (with noise) picking from Q table
27 | a=np.argmax(Q[s,:]+ \
28 | np.random.randn(1,env.action_space.n)*(1./(i+1)))
29 |
30 | #Get new state and reward from environment
31 | s1,r,d,_ = env.step(a)
32 |
33 | #Update Q-Table with new knowledge
34 | Q[s,a] = Q[s,a] + lr*(r + gamma *np.max(Q[s1,:]) - Q[s,a])
35 | rAll += r
36 | s = s1
37 | if d == True:
38 | break
39 |
40 | rList.append(rAll)
41 |
42 | print("Score over time: " + str(sum(rList)/num_episodes))
43 | print("Final Q-Table Values")
44 | print(Q)
45 |
--------------------------------------------------------------------------------
/Chapter10/Python 2.7/Q_Learning_1.py:
--------------------------------------------------------------------------------
1 | import gym
2 | import numpy as np
3 | import random
4 | import tensorflow as tf
5 | import matplotlib.pyplot as plt
6 |
7 | #Define the FrozenLake environment
8 | env = gym.make('FrozenLake-v0')
9 |
10 | #Set up the TensorFlow placeholders and variables
11 | tf.reset_default_graph()
12 | inputs1 = tf.placeholder(shape=[1,16],dtype=tf.float32)
13 | W = tf.Variable(tf.random_uniform([16,4],0,0.01))
14 | Qout = tf.matmul(inputs1,W)
15 | predict = tf.argmax(Qout,1)
16 | nextQ = tf.placeholder(shape=[1,4],dtype=tf.float32)
17 |
18 | #define the loss and optimization functions
19 | loss = tf.reduce_sum(tf.square(nextQ - Qout))
20 | trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
21 | updateModel = trainer.minimize(loss)
22 |
23 | #initialize the variables
24 | init = tf.global_variables_initializer()
25 |
26 | #prepare the q-learning parameters
27 | gamma = .99
28 | e = 0.1
29 | num_episodes = 6000
30 | jList = []
31 | rList = []
32 |
33 | #Run the session
34 | with tf.Session() as sess:
35 | sess.run(init)
36 | #Start the Q-learning procedure
37 | for i in range(num_episodes):
38 | s = env.reset()
39 | rAll = 0
40 | d = False
41 | j = 0
42 | while j < 99:
43 | j+=1
44 | a,allQ = sess.run([predict,Qout],\
45 | feed_dict=\
46 | {inputs1:np.identity(16)[s:s+1]})
47 |
48 | if np.random.rand(1) < e:
49 | a[0] = env.action_space.sample()
50 | s1,r,d,_ = env.step(a[0])
51 | Q1 = sess.run(Qout,feed_dict=\
52 | {inputs1:np.identity(16)[s1:s1+1]})
53 | maxQ1 = np.max(Q1)
54 | targetQ = allQ
55 | targetQ[0,a[0]] = r + gamma *maxQ1
56 | _,W1 = sess.run([updateModel,W],\
57 | feed_dict=\
58 | {inputs1:np.identity(16)[s:s+1],nextQ:targetQ})
59 | #accumulate the total reward
60 | rAll += r
61 | s = s1
62 | if d == True:
63 | e = 1./((i/50) + 10)
64 | break
65 | jList.append(j)
66 | rList.append(rAll)
67 | #print the results
68 | print("Percent of successful episodes: " + str(100.0 * sum(rList) / num_episodes) + "%")
69 |
--------------------------------------------------------------------------------
/Chapter10/Python 3.5/FrozenLake_1.py:
--------------------------------------------------------------------------------
1 | import gym
2 | import numpy as np
3 |
4 | env = gym.make('FrozenLake-v0')
5 |
6 | #Initialize table with all zeros
7 | Q = np.zeros([env.observation_space.n,env.action_space.n])
8 | # Set learning parameters
9 | lr = .85
10 | gamma = .99
11 | num_episodes = 2000
12 |
13 | #create lists to contain total rewards and steps per episode
14 | rList = []
15 | for i in range(num_episodes):
16 | #Reset environment and get first new observation
17 | s = env.reset()
18 | rAll = 0
19 | d = False
20 | j = 0
21 |
22 | #The Q-Table learning algorithm
23 | while j < 99:
24 | j+=1
25 |
26 | #Choose an action by greedily (with noise) picking from Q table
27 | a=np.argmax(Q[s,:]+ \
28 | np.random.randn(1,env.action_space.n)*(1./(i+1)))
29 |
30 | #Get new state and reward from environment
31 | s1,r,d,_ = env.step(a)
32 |
33 | #Update Q-Table with new knowledge
34 | Q[s,a] = Q[s,a] + lr*(r + gamma *np.max(Q[s1,:]) - Q[s,a])
35 | rAll += r
36 | s = s1
37 | if d == True:
38 | break
39 |
40 | rList.append(rAll)
41 |
42 | print("Score over time: " + str(sum(rList)/num_episodes))
43 | print("Final Q-Table Values")
44 | print(Q)
45 |
--------------------------------------------------------------------------------
/Chapter10/Python 3.5/Q_Learning_1.py:
--------------------------------------------------------------------------------
1 | import gym
2 | import numpy as np
3 | import random
4 | import tensorflow as tf
5 | import matplotlib.pyplot as plt
6 |
7 | #Define the FrozenLake environment
8 | env = gym.make('FrozenLake-v0')
9 |
10 | #Set up the TensorFlow placeholders and variables
11 | tf.reset_default_graph()
12 | inputs1 = tf.placeholder(shape=[1,16],dtype=tf.float32)
13 | W = tf.Variable(tf.random_uniform([16,4],0,0.01))
14 | Qout = tf.matmul(inputs1,W)
15 | predict = tf.argmax(Qout,1)
16 | nextQ = tf.placeholder(shape=[1,4],dtype=tf.float32)
17 |
18 | #define the loss and optimization functions
19 | loss = tf.reduce_sum(tf.square(nextQ - Qout))
20 | trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
21 | updateModel = trainer.minimize(loss)
22 |
23 | #initialize the variables
24 | init = tf.global_variables_initializer()
25 |
26 | #prepare the q-learning parameters
27 | gamma = .99
28 | e = 0.1
29 | num_episodes = 6000
30 | jList = []
31 | rList = []
32 |
33 | #Run the session
34 | with tf.Session() as sess:
35 | sess.run(init)
36 | #Start the Q-learning procedure
37 | for i in range(num_episodes):
38 | s = env.reset()
39 | rAll = 0
40 | d = False
41 | j = 0
42 | while j < 99:
43 | j+=1
44 | a,allQ = sess.run([predict,Qout],\
45 | feed_dict=\
46 | {inputs1:np.identity(16)[s:s+1]})
47 |
48 | if np.random.rand(1) < e:
49 | a[0] = env.action_space.sample()
50 | s1,r,d,_ = env.step(a[0])
51 | Q1 = sess.run(Qout,feed_dict=\
52 | {inputs1:np.identity(16)[s1:s1+1]})
53 | maxQ1 = np.max(Q1)
54 | targetQ = allQ
55 | targetQ[0,a[0]] = r + gamma *maxQ1
56 | _,W1 = sess.run([updateModel,W],\
57 | feed_dict=\
58 | {inputs1:np.identity(16)[s:s+1],nextQ:targetQ})
59 | #accumulate the total reward
60 | rAll += r
61 | s = s1
62 | if d == True:
63 | e = 1./((i/50) + 10)
64 | break
65 | jList.append(j)
66 | rList.append(rAll)
67 | #print the results
68 | print("Percent of successful episodes: " + str(100.0 * sum(rList) / num_episodes) + "%")
69 |
--------------------------------------------------------------------------------
/Chapter10/Screeshots/FrozenLake.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter10/Screeshots/FrozenLake.png
--------------------------------------------------------------------------------
/Chapter10/Screeshots/Q_Learning.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-TensorFlow/be3cd0c08d2d8b3cafe958580d09aae41acd373e/Chapter10/Screeshots/Q_Learning.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Deeptituscano
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | # Deep Learning with TensorFlow
5 | Deep Learning with TensorFlow by Packt
6 |
7 | This is the code repository for [Deep Learning with TensorFlow](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-tensorflow?utm_source=github&utm_medium=repository&utm_campaign=9781786469786), published by [Packt](https://www.packtpub.com/?utm_source=github). It contains all the supporting project files necessary to work through the book from start to finish.
8 | ## About the Book
9 | Deep learning is the next step beyond machine learning, relying on deeper and more expressive models. Machine learning is no longer just for academics: it has become mainstream practice through wide adoption, and deep learning has taken the front seat. As a data scientist, if you want to explore data abstraction layers, this book will be your guide. It shows how they can be exploited on complex, raw, real-world data using TensorFlow 1.x.
10 |
11 | Throughout the book, you’ll learn how to implement deep learning algorithms for machine learning systems and integrate them into your product offerings, including search, image recognition, and language processing. Additionally, you’ll learn how to analyze and improve the performance of deep learning models, by comparing them against benchmarks and by letting them learn from experience to determine the ideal behavior within a specific context.
12 |
13 | After finishing the book, you will be familiar with machine learning techniques, in particular the use of TensorFlow for deep learning, and will be ready to apply your knowledge to research or commercial projects.
14 | ## Instructions and Navigation
15 | All of the code is organized into folders, one per chapter. For example, Chapter02.
16 |
17 |
18 |
19 | The code will look like the following:
20 | ```
21 | >>> import tensorflow as tf
22 | >>> hello = tf.constant("hello TensorFlow!")
23 | >>> sess=tf.Session()
24 | ```
25 |
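26 | To actually evaluate the constant, you can then run it in the session, for example:
27 | ```
28 | >>> print(sess.run(hello))
29 | ```
30 | 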
31 | All the examples have been implemented using Python version 2.7 on Ubuntu Linux 64-bit,
32 | together with the TensorFlow library version 1.0.1.
33 | You will also need the following Python modules (preferably the latest versions):
34 | * Pip
35 | * Bazel
36 | * Matplotlib
37 | * NumPy
38 | * Pandas
39 | 
40 | Only for Chapter 8, Advanced TensorFlow Programming, and Chapter 10, Reinforcement
41 | Learning, will you need the following frameworks:
42 | * Keras
43 | * Pretty Tensor
44 | * TFLearn
45 | * OpenAI gym
46 | 
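47 | Assuming TensorFlow is already installed, you can quickly confirm that you have the release the examples were tested against:
48 | ```
49 | >>> import tensorflow as tf
50 | >>> print(tf.__version__)   # the examples target version 1.0.1
51 | ```
52 | 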
53 | ## Related Products
54 | * [Deep Learning with TensorFlow [Video]](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-tensorflow-video)
55 | 
56 | * [Machine Learning with TensorFlow](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-tensorflow)
57 | 
58 | * [Building Machine Learning Projects with TensorFlow](https://www.packtpub.com/big-data-and-business-intelligence/building-machine-learning-projects-tensorflow)
59 | 
60 | ### Suggestions and Feedback
61 | [Click here](https://docs.google.com/forms/d/e/1FAIpQLSe5qwunkGf6PUvzPirPDtuy1Du5Rlzew23UBp2S-P3wB-GcwQ/viewform) if you have any feedback or suggestions.
62 | ### Download a free PDF
63 | 
64 | If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost. Simply click on the link to claim your free PDF.
65 | 