├── Chapter05
│   ├── pelican.jpg
│   ├── flamingo.JPG
│   ├── mobilenet_keras.py
│   └── mobilenetV2_keras.py
├── Chapter06
│   ├── gru_rnn
│   │   ├── .DS_Store
│   │   └── stock_prediction.py
│   ├── lstm_rnn
│   │   ├── .DS_Store
│   │   └── text_gen.py
│   ├── vanilla_rnn
│   │   ├── .DS_Store
│   │   └── text_gen.py
│   └── bidirectional_rnn
│       ├── .DS_Store
│       └── sentiment_cls.py
├── LICENSE
├── Chapter02
│   ├── first_dfn_keras.py
│   └── first_dfn.py
├── Chapter03
│   ├── deep_ae.py
│   ├── sparse_ae.py
│   ├── contractive_ae.py
│   ├── vanilla_ae.py
│   ├── rbm.py
│   ├── rbm_movielens.py
│   ├── dbn.py
│   └── rbm_movielens_simulation.py
├── Chapter04
│   ├── object_detection.py
│   └── cifar_cnn.py
├── Chapter08
│   ├── siamese_nn.py
│   └── Bayesian_nn_tf.py
├── Chapter07
│   ├── vanilla_tf.py
│   ├── cgan_tf.py
│   ├── dcgan_tf.py
│   └── infogan_tf.py
└── README.md
/Chapter05/pelican.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Hands-On-Deep-Learning-Architectures-with-Python/HEAD/Chapter05/pelican.jpg
--------------------------------------------------------------------------------
/Chapter05/flamingo.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Hands-On-Deep-Learning-Architectures-with-Python/HEAD/Chapter05/flamingo.JPG
--------------------------------------------------------------------------------
/Chapter06/gru_rnn/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Hands-On-Deep-Learning-Architectures-with-Python/HEAD/Chapter06/gru_rnn/.DS_Store
--------------------------------------------------------------------------------
/Chapter06/lstm_rnn/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Hands-On-Deep-Learning-Architectures-with-Python/HEAD/Chapter06/lstm_rnn/.DS_Store
--------------------------------------------------------------------------------
/Chapter06/vanilla_rnn/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Hands-On-Deep-Learning-Architectures-with-Python/HEAD/Chapter06/vanilla_rnn/.DS_Store
--------------------------------------------------------------------------------
/Chapter06/bidirectional_rnn/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Hands-On-Deep-Learning-Architectures-with-Python/HEAD/Chapter06/bidirectional_rnn/.DS_Store
--------------------------------------------------------------------------------
/Chapter05/mobilenet_keras.py:
--------------------------------------------------------------------------------
1 | # MobileNet
2 |
3 | import keras
4 | from keras.preprocessing import image
5 | from keras.applications import imagenet_utils
6 | from keras.applications.mobilenet import preprocess_input
7 | from keras.models import Model
8 |
9 |
10 | import numpy as np
11 | import argparse
12 | import matplotlib.pyplot as plt
13 |
14 |
15 | model = keras.applications.mobilenet.MobileNet(weights = 'imagenet')
16 |
17 | parser = argparse.ArgumentParser()
18 | parser.add_argument('--im_path', type = str, help = 'path to the image')
19 | args = parser.parse_args()
20 |
21 | # path to the input image
22 | IM_PATH = args.im_path
23 |
24 | img = image.load_img(IM_PATH, target_size = (224, 224))
25 | img = image.img_to_array(img)
26 |
27 | img = np.expand_dims(img, axis = 0)
28 | img = preprocess_input(img)
29 | prediction = model.predict(img)
30 |
31 | output = imagenet_utils.decode_predictions(prediction)
32 |
33 | print(output)
--------------------------------------------------------------------------------
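
Usage note for the script above: it is invoked from the command line, for example as python mobilenet_keras.py --im_path pelican.jpg, and decode_predictions returns, for each input image, a list of (WordNet ID, class name, probability) tuples sorted by descending probability (top 5 by default). A minimal sketch of printing them, assuming the script above has just been run:

# print the top-5 ImageNet labels predicted for the single input image
for wordnet_id, class_name, prob in output[0]:
    print('{}: {:.4f}'.format(class_name, prob))
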
/Chapter05/mobilenetV2_keras.py:
--------------------------------------------------------------------------------
1 | # MobileNetV2
2 |
3 | import keras
4 | from keras.preprocessing import image
5 | from keras.applications import imagenet_utils
6 | from keras.applications.mobilenet_v2 import preprocess_input
7 | from keras.models import Model
8 |
9 |
10 | import numpy as np
11 | import argparse
12 | import matplotlib.pyplot as plt
13 |
14 |
15 | model = keras.applications.mobilenet_v2.MobileNetV2(weights = 'imagenet')
16 |
17 | parser = argparse.ArgumentParser()
18 | parser.add_argument('--im_path', type = str, help = 'path to the image')
19 | args = parser.parse_args()
20 |
21 | # path to the input image
22 | IM_PATH = args.im_path
23 |
24 | img = image.load_img(IM_PATH, target_size = (224, 224))
25 | img = image.img_to_array(img)
26 |
27 | img = np.expand_dims(img, axis = 0)
28 | img = preprocess_input(img)
29 | prediction = model.predict(img)
30 |
31 | output = imagenet_utils.decode_predictions(prediction)
32 |
33 | print(output)
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 Packt
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Chapter06/bidirectional_rnn/sentiment_cls.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 6 Recurrent Neural Networks
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | from keras.datasets import imdb
8 |
9 | word_to_id = imdb.get_word_index()
10 |
11 |
12 | max_words = 5000
13 | (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_words, skip_top=10, seed=42)
14 |
15 | print(len(y_train), 'training samples')
16 | print(len(y_test), 'testing samples')
17 |
18 |
19 | from keras.preprocessing import sequence
20 | from keras.models import Sequential
21 | from keras.layers import Dense, Embedding, LSTM, Bidirectional
22 | from keras import optimizers
23 |
24 | maxlen = 500
25 |
26 | x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
27 | x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
28 | print('x_train shape:', x_train.shape)
29 | print('x_test shape:', x_test.shape)
30 |
31 |
32 |
33 | model = Sequential()
34 | model.add(Embedding(max_words, 128, input_length=maxlen))
35 | model.add(Bidirectional(LSTM(128, dropout=0.2, recurrent_dropout=0.2)))
36 |
37 | model.add(Dense(1, activation='sigmoid'))
38 |
39 | optimizer = optimizers.RMSprop(0.001)
40 | model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
41 |
42 | from keras.callbacks import EarlyStopping
43 | early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1, mode='min')
44 |
45 | hist = model.fit(x_train, y_train,
46 | batch_size=32,
47 | epochs=100,
48 | validation_data=[x_test, y_test], callbacks=[early_stop])
49 |
--------------------------------------------------------------------------------
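
word_to_id is loaded at the top of the script above but not used afterwards; it can be used to encode a new review for prediction with the trained model. A minimal sketch, assuming the script above has been run; encode_review is a hypothetical helper, and it relies on the fact that imdb.load_data offsets the word ranks by 3 and reserves indices 0-2 (padding, start, out-of-vocabulary) by default, while ignoring the skip_top filtering for brevity:

def encode_review(review, word_to_id, num_words=5000, maxlen=500, index_from=3):
    # map each word to the index used by imdb.load_data (rank + 3);
    # unknown or out-of-range words fall back to the out-of-vocabulary index 2
    ids = [word_to_id.get(w, -1) + index_from for w in review.lower().split()]
    ids = [i if index_from <= i < num_words else 2 for i in ids]
    return sequence.pad_sequences([ids], maxlen=maxlen)

x_new = encode_review('a wonderful and touching film', word_to_id)
print(model.predict(x_new))  # predicted probability of a positive review
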
/Chapter02/first_dfn_keras.py:
--------------------------------------------------------------------------------
1 | # importing the Sequential method in Keras
2 | import keras
3 | from keras.models import Sequential
4 |
5 | # importing the Dense layer, which creates a fully connected layer of the deep feedforward network
6 | from keras.layers import Dense, Activation, Flatten, Dropout
7 |
8 | # getting the data as we did earlier
9 | fashionObj = keras.datasets.fashion_mnist
10 |
11 | (trainX, trainY), (testX, testY) = fashionObj.load_data()
12 | print('train data x shape: ', trainX.shape)
13 | print('test data x shape:', testX.shape)
14 |
15 | print('train data y shape: ', trainY.shape)
16 | print('test data y shape: ', testY.shape)
17 |
18 |
19 | # Now we can jump directly to building the model; we build it in the Sequential manner discussed in Chapter 1
20 | model = Sequential()
21 |
22 | # the first layer flattens the 2-D image input from (28, 28) to a 784-dimensional vector
23 | model.add(Flatten(input_shape = (28, 28)))
24 |
25 | # adding first hidden layer with 512 units
26 | model.add(Dense(512))
27 |
28 | #adding activation to the output
29 | model.add(Activation('relu'))
30 |
31 | #using Dropout for Regularization
32 | model.add(Dropout(0.2))
33 |
34 | # adding our final output layer
35 | model.add(Dense(10))
36 |
37 | #softmax activation at the end
38 | model.add(Activation('softmax'))
39 |
40 | # normalising input data before feeding
41 | trainX = trainX / 255
42 | testX = testX / 255
43 |
44 | # compiling model with optimizer and loss
45 | model.compile(optimizer= 'Adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
46 |
47 | # training the model
48 | model.fit(trainX, trainY, epochs = 5, batch_size = 64)
49 |
50 | # evaluating the model on test data
51 | evalu = model.evaluate(testX, testY)
52 | print('Test Set average Accuracy: ', evalu[1])
53 |
54 |
--------------------------------------------------------------------------------
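
A short follow-up sketch for inspecting individual predictions from the trained network, assuming the script above has been run; the class names follow the standard Fashion-MNIST label order:

import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# predict the class of the first test image
probs = model.predict(testX[:1])
pred_label = np.argmax(probs, axis=1)[0]
print('Predicted: {}, actual: {}'.format(class_names[pred_label], class_names[testY[0]]))
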
/Chapter03/deep_ae.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 3 Restricted Boltzmann Machines and Autoencoders
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import pandas as pd
8 | import numpy as np
9 | from sklearn.preprocessing import StandardScaler
10 | from sklearn.model_selection import train_test_split
11 | from keras.models import Model
12 | from keras.layers import Input, Dense
13 | from keras.callbacks import ModelCheckpoint, TensorBoard
14 | from keras import optimizers
15 |
16 |
17 | data = pd.read_csv("creditcard.csv").drop(['Time'], axis=1)
18 |
19 | scaler = StandardScaler()
20 | data['Amount'] = scaler.fit_transform(data['Amount'].values.reshape(-1, 1))
21 |
22 |
23 | np.random.seed(1)
24 | data_train, data_test = train_test_split(data, test_size=0.2)
25 |
26 |
27 | data_test = data_test.append(data_train[data_train.Class == 1], ignore_index=True)
28 | data_train = data_train[data_train.Class == 0]
29 |
30 | X_train = data_train.drop(['Class'], axis=1).values
31 |
32 | X_test = data_test.drop(['Class'], axis=1).values
33 | Y_test = data_test['Class']
34 |
35 | input_size = 29
36 | hidden_sizes = [80, 40, 80]
37 |
38 | input_layer = Input(shape=(input_size,))
39 | encoder = Dense(hidden_sizes[0], activation="relu")(input_layer)
40 | encoder = Dense(hidden_sizes[1], activation="relu")(encoder)
41 | decoder = Dense(hidden_sizes[2], activation='relu')(encoder)
42 | decoder = Dense(input_size)(decoder)
43 | deep_ae = Model(inputs=input_layer, outputs=decoder)
44 | print(deep_ae.summary())
45 |
46 | optimizer = optimizers.Adam(lr=0.00005)
47 | deep_ae.compile(optimizer=optimizer, loss='mean_squared_error')
48 |
49 | tensorboard = TensorBoard(log_dir='./logs/run2/', write_graph=True, write_images=False)
50 |
51 | model_file = "model_deep_ae.h5"
52 | checkpoint = ModelCheckpoint(model_file, monitor='loss', verbose=1, save_best_only=True, mode='min')
53 |
54 | num_epoch = 50
55 | batch_size = 64
56 | deep_ae.fit(X_train, X_train, epochs=num_epoch, batch_size=batch_size, shuffle=True, validation_data=(X_test, X_test),
57 | verbose=1, callbacks=[checkpoint, tensorboard])
58 |
59 | recon = deep_ae.predict(X_test)
60 |
61 | recon_error = np.mean(np.power(X_test - recon, 2), axis=1)
62 |
63 |
64 | from sklearn.metrics import (precision_recall_curve, auc)
65 |
66 | precision, recall, th = precision_recall_curve(Y_test, recon_error)
67 | area = auc(recall, precision)
68 | print('Area under precision-recall curve:', area)
69 |
70 |
71 |
--------------------------------------------------------------------------------
/Chapter03/sparse_ae.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 3 Restricted Boltzmann Machines and Autoencoders
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import pandas as pd
8 | import numpy as np
9 | from sklearn.preprocessing import StandardScaler
10 | from sklearn.model_selection import train_test_split
11 | from keras.models import Model
12 | from keras.layers import Input, Dense
13 | from keras.callbacks import ModelCheckpoint, TensorBoard
14 | from keras import optimizers
15 |
16 |
17 | data = pd.read_csv("creditcard.csv").drop(['Time'], axis=1)
18 | print(data.shape)
19 |
20 | print('Number of fraud samples: ', sum(data.Class == 1))
21 | print('Number of normal samples: ', sum(data.Class == 0))
22 |
23 |
24 | scaler = StandardScaler()
25 | data['Amount'] = scaler.fit_transform(data['Amount'].values.reshape(-1, 1))
26 |
27 |
28 |
29 | np.random.seed(1)
30 | data_train, data_test = train_test_split(data, test_size=0.95)
31 |
32 |
33 | data_test = data_test.append(data_train[data_train.Class == 1], ignore_index=True)
34 | data_train = data_train[data_train.Class == 0]
35 |
36 | X_train = data_train.drop(['Class'], axis=1).values
37 |
38 | X_test = data_test.drop(['Class'], axis=1).values
39 | Y_test = data_test['Class']
40 |
41 | input_size = 29
42 | hidden_size = 40
43 |
44 |
45 | from keras import regularizers
46 |
47 | hidden_sizes = [80, 40, 80]
48 |
49 | input_layer = Input(shape=(input_size,))
50 | encoder = Dense(hidden_sizes[0], activation="relu", activity_regularizer=regularizers.l1(3e-5))(input_layer)
51 | encoder = Dense(hidden_sizes[1], activation="relu")(encoder)
52 | decoder = Dense(hidden_sizes[2], activation='relu')(encoder)
53 | decoder = Dense(input_size)(decoder)
54 | sparse_ae = Model(inputs=input_layer, outputs=decoder)
55 | print(sparse_ae.summary())
56 |
57 |
58 |
59 | optimizer = optimizers.Adam(lr=0.0008)
60 | sparse_ae.compile(optimizer=optimizer, loss='mean_squared_error')
61 |
62 | tensorboard = TensorBoard(log_dir='./logs/run3/', write_graph=True, write_images=False)
63 |
64 | model_file = "model_sparse_ae.h5"
65 | checkpoint = ModelCheckpoint(model_file, monitor='loss', verbose=1, save_best_only=True, mode='min')
66 |
67 | num_epoch = 30
68 | batch_size = 64
69 | sparse_ae.fit(X_train, X_train, epochs=num_epoch, batch_size=batch_size, shuffle=True, validation_data=(X_test, X_test),
70 | verbose=1, callbacks=[checkpoint, tensorboard])
71 |
72 | recon = sparse_ae.predict(X_test)
73 |
74 | recon_error = np.mean(np.power(X_test - recon, 2), axis=1)
75 |
76 |
77 | from sklearn.metrics import (precision_recall_curve, auc)
78 |
79 | precision, recall, th = precision_recall_curve(Y_test, recon_error)
80 | area = auc(recall, precision)
81 | print('Area under precision-recall curve:', area)
82 |
83 |
84 |
--------------------------------------------------------------------------------
/Chapter03/contractive_ae.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 3 Restricted Boltzmann Machines and Autoencoders
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import pandas as pd
8 | import numpy as np
9 | from sklearn.preprocessing import StandardScaler
10 | from sklearn.model_selection import train_test_split
11 | from keras.models import Model
12 | from keras.layers import Input, Dense
13 | from keras.callbacks import ModelCheckpoint, TensorBoard
14 | from keras import optimizers
15 | import keras.backend as K
16 |
17 |
18 | data = pd.read_csv("creditcard.csv").drop(['Time'], axis=1)
19 | print(data.shape)
20 |
21 | print('Number of fraud samples: ', sum(data.Class == 1))
22 | print('Number of normal samples: ', sum(data.Class == 0))
23 |
24 |
25 | scaler = StandardScaler()
26 | data['Amount'] = scaler.fit_transform(data['Amount'].values.reshape(-1, 1))
27 |
28 |
29 |
30 | np.random.seed(1)
31 | data_train, data_test = train_test_split(data, test_size=0.2)
32 |
33 |
34 | data_test = data_test.append(data_train[data_train.Class == 1], ignore_index=True)
35 | data_train = data_train[data_train.Class == 0]
36 |
37 | X_train = data_train.drop(['Class'], axis=1).values
38 |
39 | X_test = data_test.drop(['Class'], axis=1).values
40 | Y_test = data_test['Class']
41 |
42 | input_size = 29
43 | hidden_size = 40
44 |
45 | input_layer = Input(shape=(input_size,))
46 | encoder = Dense(hidden_size, activation="relu")(input_layer)
47 | decoder = Dense(input_size)(encoder)
48 | contractive_ae = Model(inputs=input_layer, outputs=decoder)
49 | print(contractive_ae.summary())
50 |
51 | optimizer = optimizers.Adam(lr=0.0003)
52 |
53 |
54 | factor = 1e-5
55 | def contractive_loss(y_pred, y_true):
56 | mse = K.mean(K.square(y_true - y_pred), axis=1)
57 | W = K.variable(value=contractive_ae.layers[1].get_weights()[0])
58 | W_T = K.transpose(W)
59 | W_T_sq_sum = K.sum(W_T ** 2, axis=1)
60 | h = contractive_ae.layers[1].output
61 | contractive = factor * K.sum((h * (1 - h)) ** 2 * W_T_sq_sum, axis=1)
62 | return mse + contractive
63 |
64 |
65 | contractive_ae.compile(optimizer=optimizer, loss=contractive_loss)
66 |
67 | tensorboard = TensorBoard(log_dir='./logs/run4/', write_graph=True, write_images=False)
68 |
69 | model_file = "model_contractive_ae.h5"
70 | checkpoint = ModelCheckpoint(model_file, monitor='loss', verbose=1, save_best_only=True, mode='min')
71 |
72 | num_epoch = 30
73 | batch_size = 64
74 | contractive_ae.fit(X_train, X_train, epochs=num_epoch, batch_size=batch_size, shuffle=True, validation_data=(X_test, X_test),
75 | verbose=1, callbacks=[checkpoint, tensorboard])
76 |
77 | recon = contractive_ae.predict(X_test)
78 |
79 | recon_error = np.mean(np.power(X_test - recon, 2), axis=1)
80 |
81 |
82 |
83 |
84 | from sklearn.metrics import (precision_recall_curve, auc)
85 |
86 |
87 | precision, recall, th = precision_recall_curve(Y_test, recon_error)
88 |
89 | area = auc(recall, precision)
90 | print('Area under precision-recall curve:', area)
91 |
92 |
--------------------------------------------------------------------------------
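
The custom loss above adds an approximation of the contractive penalty, the squared Frobenius norm of the encoder's Jacobian: for a sigmoid encoder h = sigmoid(Wx + b), dh_j/dx_i = h_j(1 - h_j)W_ij, so the norm reduces to sum_j h_j^2(1 - h_j)^2 * sum_i W_ij^2, which is exactly the term computed from h * (1 - h) and W_T_sq_sum (the encoder here uses ReLU, so the h * (1 - h) factor is carried over from the sigmoid case). A small standalone numpy check of that identity on a toy sigmoid encoder:

import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(5, 3)          # 5 inputs, 3 hidden units
b = rng.randn(3)
x = rng.randn(5)

h = 1. / (1. + np.exp(-(x.dot(W) + b)))       # sigmoid encoder output
jacobian = (h * (1 - h)) * W                  # dh_j/dx_i = h_j(1-h_j) W_ij, shape (5, 3)
penalty_explicit = np.sum(jacobian ** 2)
penalty_closed_form = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=0))
print(penalty_explicit, penalty_closed_form)  # the two values match
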
/Chapter03/vanilla_ae.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 3 Restricted Boltzmann Machines and Autoencoders
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import pandas as pd
8 | import numpy as np
9 | from sklearn.preprocessing import StandardScaler
10 | from sklearn.model_selection import train_test_split
11 | from keras.models import Model
12 | from keras.layers import Input, Dense
13 | from keras.callbacks import ModelCheckpoint, TensorBoard
14 | from keras import optimizers
15 | import matplotlib.pyplot as plt
16 |
17 |
18 | data = pd.read_csv("creditcard.csv").drop(['Time'], axis=1)
19 | print(data.shape)
20 |
21 | print('Number of fraud samples: ', sum(data.Class == 1))
22 | print('Number of normal samples: ', sum(data.Class == 0))
23 |
24 |
25 | scaler = StandardScaler()
26 | data['Amount'] = scaler.fit_transform(data['Amount'].values.reshape(-1, 1))
27 |
28 |
29 |
30 | np.random.seed(1)
31 | data_train, data_test = train_test_split(data, test_size=0.2)
32 |
33 |
34 | data_test = data_test.append(data_train[data_train.Class == 1], ignore_index=True)
35 | data_train = data_train[data_train.Class == 0]
36 |
37 | X_train = data_train.drop(['Class'], axis=1).values
38 |
39 | X_test = data_test.drop(['Class'], axis=1).values
40 | Y_test = data_test['Class']
41 |
42 | input_size = 29
43 | hidden_size = 40
44 |
45 | input_layer = Input(shape=(input_size,))
46 | encoder = Dense(hidden_size, activation="relu")(input_layer)
47 | decoder = Dense(input_size)(encoder)
48 | ae = Model(inputs=input_layer, outputs=decoder)
49 | print(ae.summary())
50 |
51 | optimizer = optimizers.Adam(lr=0.0001)
52 | ae.compile(optimizer=optimizer, loss='mean_squared_error')
53 |
54 | tensorboard = TensorBoard(log_dir='./logs/run1/', write_graph=True, write_images=False)
55 |
56 | model_file = "model_ae.h5"
57 | checkpoint = ModelCheckpoint(model_file, monitor='loss', verbose=1, save_best_only=True, mode='min')
58 |
59 | num_epoch = 30
60 | batch_size = 64
61 | ae.fit(X_train, X_train, epochs=num_epoch, batch_size=batch_size, shuffle=True, validation_data=(X_test, X_test),
62 | verbose=1, callbacks=[checkpoint, tensorboard])
63 |
64 | recon = ae.predict(X_test)
65 |
66 | recon_error = np.mean(np.power(X_test - recon, 2), axis=1)
67 |
68 |
69 |
70 |
71 | from sklearn.metrics import (roc_auc_score, precision_recall_curve, auc, confusion_matrix)
72 |
73 | roc_auc = roc_auc_score(Y_test, recon_error)
74 | print('Area under ROC curve:', roc_auc)
75 |
76 | precision, recall, th = precision_recall_curve(Y_test, recon_error)
77 |
78 | plt.plot(recall, precision, 'b')
79 | plt.title('Precision-Recall Curve')
80 | plt.xlabel('Recall')
81 | plt.ylabel('Precision')
82 | plt.show()
83 |
84 | area = auc(recall, precision)
85 | print('Area under precision-recall curve:', area)
86 |
87 |
88 | plt.plot(th, precision[1:], 'k')
89 | plt.plot(th, recall[1:], 'b', label='Threshold-Recall curve')
90 | plt.title('Precision (black) and recall (blue) for different threshold values')
91 | plt.xlabel('Threshold of reconstruction error')
92 | plt.ylabel('Precision or recall')
93 | plt.show()
94 |
95 |
96 | threshold = .000001
97 | Y_pred = [1 if e > threshold else 0 for e in recon_error]
98 | conf_matrix = confusion_matrix(Y_test, Y_pred)
99 | print(conf_matrix)
100 |
--------------------------------------------------------------------------------
/Chapter04/object_detection.py:
--------------------------------------------------------------------------------
1 | '''
2 | NOTE:
3 | put all the boxes in tuple --> cv2 only takes tuples
4 | '''
5 | import tensorflow as tf
6 | import numpy as np
7 | import matplotlib.pyplot as plt
8 | import cv2
9 | import os
10 | import argparse
11 |
12 | parser = argparse.ArgumentParser()
13 |
14 | parser.add_argument('--im_path', type=str, help='path to input image')
15 | #parser.add_argument('--save_path', type=str, help='save path to output image')
16 | args = parser.parse_args()
17 | IM_PATH = args.im_path
18 |
19 | def read_image(imPath):
20 | img = cv2.imread(imPath)
21 | return img
22 |
23 |
24 |
25 | def save_bounding_boxes(img, bb, save_path, im_name):
26 | roi = img[bb[1]:bb[3], bb[0]:bb[2]] # rows y1:y2, columns x1:x2
27 | #print(roi.shape)
28 | if not os.path.exists(save_path):
29 | os.mkdir(save_path)
30 |
31 | cv2.imwrite(os.path.join(save_path, im_name), roi)
32 |
33 |
34 | # the path to checkpoint file
35 | FROZEN_GRAPH_FILE = 'frozen_inference_graph.pb' #path to frozen graph
36 |
37 | # load the model
38 |
39 | # making an empty graph
40 | graph = tf.Graph()
41 | with graph.as_default():
42 |
43 | serialGraph = tf.GraphDef()
44 | # the frozen Protobuf file (hence the .pb extension) has to be parsed
45 | # into a serialized GraphDef first,
46 | # which we then import into the empty graph created above
47 |
48 | with tf.gfile.GFile(FROZEN_GRAPH_FILE, 'rb') as f:
49 | serialRead = f.read()
50 | serialGraph.ParseFromString(serialRead)
51 | tf.import_graph_def(serialGraph, name = '')
52 |
53 | sess = tf.Session(graph = graph)
54 |
55 | # scores and num_detections are not used here
56 |
57 | for dirs in os.listdir(IM_PATH):
58 | if not dirs.startswith('.'):
59 | for im in os.listdir(os.path.join(IM_PATH, dirs)):
60 | if im.endswith('.jpeg'):
61 |
62 | image = read_image(os.path.join(IM_PATH, dirs, im))
63 | if image is None:
64 | print('image read as None')
65 | print('image name: ', im)
66 |
67 | # here we will bring in the tensors from the frozen graph we loaded,
68 | # which will take the input through feed_dict and output the bounding boxes
69 |
70 | imageTensor = graph.get_tensor_by_name('image_tensor:0')
71 |
72 | bboxs = graph.get_tensor_by_name('detection_boxes:0')
73 |
74 |
75 | classes = graph.get_tensor_by_name('detection_classes:0')
76 |
77 | (outBoxes, classes) = sess.run([bboxs, classes],feed_dict={imageTensor: np.expand_dims(image, axis=0)})
78 |
79 |
80 | # visualise
81 | cnt = 0
82 | imageHeight, imageWidth = image.shape[:2]
83 | boxes = np.squeeze(outBoxes)
84 | classes = np.squeeze(classes)
85 | boxes = np.stack((boxes[:,1] * imageWidth, boxes[:,0] * imageHeight,
86 | boxes[:,3] * imageWidth, boxes[:,2] * imageHeight),axis=1).astype(np.int)
87 |
88 | for i, bb in enumerate(boxes):
89 | # bb = (x1, y1, x2, y2)
90 | #print(bb)
91 | '''
92 | save_bounding_boxes(image, bb,
93 | save_path = os.path.join(IM_PATH, dirs, 'detected' ),
94 | im_name = '_'.join([str(cnt), im]))
95 | cnt += 1
96 | '''
97 | print(classes[i])
98 | cv2.rectangle(image, (bb[0], bb[1]), (bb[2], bb[3]), (255,255,0), thickness = 1)
99 |
100 | plt.figure(figsize = (10, 10))
101 | plt.imshow(image)
102 | plt.show()
103 |
104 | cv2.imwrite(os.path.join(IM_PATH, dirs, 'a_' + im), image)
105 | #cv2.waitKey()
106 |
--------------------------------------------------------------------------------
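
One caveat about the script above: cv2.imread returns images in BGR channel order, while matplotlib expects RGB, so the plt.imshow call displays swapped colours (the cv2.imwrite call and the drawn rectangles are unaffected, since OpenCV writes BGR). A minimal adjustment to the display block inside the loop, converting only for plotting (cv2 and plt are already imported at the top of the script):

# matplotlib expects RGB, OpenCV delivers BGR: convert just for plotting
plt.figure(figsize=(10, 10))
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()
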
/Chapter06/gru_rnn/stock_prediction.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 6 Recurrent Neural Networks
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import numpy as np
8 | import matplotlib.pyplot as plt
9 | import pandas as pd
10 |
11 |
12 | raw_data = pd.read_csv('^DJI.csv')
13 | raw_data.head()
14 | data = raw_data.Close.values
15 | len(data)
16 |
17 | plt.plot(data)
18 | plt.xlabel('Time period')
19 | plt.ylabel('Price')
20 | plt.show()
21 |
22 |
23 | def generate_seq(data, window_size):
24 | """
25 | Transform input series into input sequences and outputs based on a specified window size
26 | @param data: input series
27 | @param window_size: int
28 | @return: numpy array of input sequences, numpy array of outputs
29 | """
30 | X, Y = [], []
31 | for i in range(window_size, len(data)):
32 | X.append(data[i - window_size:i])
33 | Y.append(data[i])
34 | return np.array(X),np.array(Y)
35 |
36 |
37 | window_size = 10
38 | X, Y = generate_seq(data, window_size)
39 | X.shape
40 | Y.shape
41 |
42 |
43 | train_ratio = 0.7
44 | val_ratio = 0.1
45 | train_n = int(len(Y) * train_ratio)
46 |
47 | X_train = X[:train_n]
48 | Y_train = Y[:train_n]
49 |
50 | X_test = X[train_n:]
51 | Y_test = Y[train_n:]
52 |
53 |
54 | def scale(X, Y):
55 | """
56 | Scaling the prices within each window
57 | @param X: input series
58 | @param Y: outputs
59 | @return: scaled input series and outputs
60 | """
61 | X_processed, Y_processed = np.copy(X), np.copy(Y)
62 | for i in range(len(X)):
63 | x = X[i, -1]
64 | X_processed[i] /= x
65 | Y_processed[i] /= x
66 | return X_processed, Y_processed
67 |
68 |
69 | def reverse_scale(X, Y_scaled):
70 | """
71 | Convert the scaled outputs to the original scale
72 | @param X: original input series
73 | @param Y_scaled: scaled outputs
74 | @return: outputs in original scale
75 | """
76 | Y_original = np.copy(Y_scaled)
77 | for i in range(len(X)):
78 | x = X[i, -1]
79 | Y_original[i] *= x
80 | return Y_original
81 |
82 |
83 | X_train_scaled, Y_train_scaled = scale(X_train, Y_train)
84 | X_test_scaled, Y_test_scaled = scale(X_test, Y_test)
85 |
86 |
87 |
88 | from keras.models import Sequential
89 | from keras.layers import Dense, GRU
90 | from keras.callbacks import TensorBoard, EarlyStopping, ModelCheckpoint
91 | from keras import optimizers
92 |
93 | model = Sequential()
94 | model.add(GRU(256, input_shape=(window_size, 1)))
95 | model.add(Dense(1))
96 |
97 | optimizer = optimizers.RMSprop(lr=0.0006, rho=0.9, epsilon=1e-08, decay=0.0)
98 | model.compile(loss='mean_squared_error', optimizer=optimizer)
99 |
100 |
101 | tensorboard = TensorBoard(log_dir='./logs/run1/', write_graph=True, write_images=False)
102 | early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=100, verbose=1, mode='min')
103 | model_file = "weights/best_model.hdf5"
104 | checkpoint = ModelCheckpoint(model_file, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
105 |
106 | X_train_reshaped = X_train_scaled.reshape((X_train_scaled.shape[0], X_train_scaled.shape[1], 1))
107 | X_test_reshaped = X_test_scaled.reshape((X_test_scaled.shape[0], X_test_scaled.shape[1], 1))
108 |
109 | model.fit(X_train_reshaped, Y_train_scaled, validation_data=(X_test_reshaped, Y_test_scaled),
110 | epochs=1, batch_size=100, verbose=1, callbacks=[tensorboard, early_stop, checkpoint])
111 |
112 |
113 | from keras.models import load_model
114 | model = load_model(model_file)
115 |
116 | pred_train_scaled = model.predict(X_train_reshaped)
117 | pred_test_scaled = model.predict(X_test_reshaped)
118 |
119 | pred_train = reverse_scale(X_train, pred_train_scaled)
120 | pred_test = reverse_scale(X_test, pred_test_scaled)
121 |
122 |
123 | plt.plot(Y)
124 | plt.plot(np.concatenate([pred_train, pred_test]))
125 | plt.xlabel('Time period')
126 | plt.ylabel('Price')
127 | plt.legend(['original series','prediction'],loc='center left')
128 | plt.show()
129 |
130 |
131 |
--------------------------------------------------------------------------------
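
The scale/reverse_scale pair above normalises each window by its most recent price, so the network learns relative moves rather than absolute index levels. A tiny worked example with a 3-step window, assuming the two functions defined above are in scope (the prices are made up for illustration):

import numpy as np

X = np.array([[100., 102., 104.]])   # one window of closing prices
Y = np.array([106.])                 # the next closing price

X_scaled, Y_scaled = scale(X, Y)
print(X_scaled)   # [[0.9615..., 0.9807..., 1.]] -- divided by the last price in the window, 104
print(Y_scaled)   # [1.0192...]

print(reverse_scale(X, Y_scaled))    # [106.] -- back to the original scale
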
/Chapter06/lstm_rnn/text_gen.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 6 Recurrent Neural Networks
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import numpy as np
8 | from keras.models import Sequential
9 | from keras.layers.core import Dense, Activation, Dropout
10 | from keras.layers.recurrent import LSTM
11 | from keras.layers.wrappers import TimeDistributed
12 | from keras.callbacks import Callback, ModelCheckpoint, EarlyStopping
13 | from keras import optimizers
14 |
15 |
16 | training_file = 'warpeace_input.txt'
17 |
18 | raw_text = open(training_file, 'r').read()
19 | raw_text = raw_text.lower()
20 | raw_text[:100]
21 |
22 | n_chars = len(raw_text)
23 | print('Total characters: {}'.format(n_chars))
24 | chars = sorted(list(set(raw_text)))
25 | n_vocab = len(chars)
26 | print('Total vocabulary (unique characters): {}'.format(n_vocab))
27 | print(chars)
28 |
29 | index_to_char = dict((i, c) for i, c in enumerate(chars))
30 | char_to_index = dict((c, i) for i, c in enumerate(chars))
31 | print(char_to_index)
32 |
33 |
34 | seq_length = 160
35 | n_seq = int(n_chars / seq_length)
36 |
37 | X = np.zeros((n_seq, seq_length, n_vocab))
38 | Y = np.zeros((n_seq, seq_length, n_vocab))
39 |
40 | for i in range(n_seq):
41 | x_sequence = raw_text[i * seq_length : (i + 1) * seq_length]
42 | x_sequence_ohe = np.zeros((seq_length, n_vocab))
43 | for j in range(seq_length):
44 | char = x_sequence[j]
45 | index = char_to_index[char]
46 | x_sequence_ohe[j][index] = 1.
47 | X[i] = x_sequence_ohe
48 | y_sequence = raw_text[i * seq_length + 1 : (i + 1) * seq_length + 1]
49 | y_sequence_ohe = np.zeros((seq_length, n_vocab))
50 | for j in range(seq_length):
51 | char = y_sequence[j]
52 | index = char_to_index[char]
53 | y_sequence_ohe[j][index] = 1.
54 | Y[i] = y_sequence_ohe
55 |
56 |
57 | batch_size = 100
58 | n_layer = 2
59 | hidden_units = 700
60 | n_epoch= 300
61 | dropout = 0.4
62 |
63 |
64 | model = Sequential()
65 | model.add(LSTM(hidden_units, input_shape=(None, n_vocab), return_sequences=True))
66 | model.add(Dropout(dropout))
67 | for i in range(n_layer - 1):
68 | model.add(LSTM(hidden_units, return_sequences=True))
69 | model.add(Dropout(dropout))
70 | model.add(TimeDistributed(Dense(n_vocab)))
71 | model.add(Activation('softmax'))
72 |
73 | optimizer = optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
74 | model.compile(loss="categorical_crossentropy", optimizer=optimizer)
75 |
76 | print(model.summary())
77 |
78 |
79 |
80 |
81 | def generate_text(model, gen_length, n_vocab, index_to_char):
82 | """
83 | Generating text using the RNN model
84 | @param model: current RNN model
85 | @param gen_length: number of characters we want to generate
86 | @param n_vocab: number of unique characters
87 | @param index_to_char: index to character mapping
88 | @return:
89 | """
90 | # Start with a randomly picked character
91 | index = np.random.randint(n_vocab)
92 | y_char = [index_to_char[index]]
93 | X = np.zeros((1, gen_length, n_vocab))
94 | for i in range(gen_length):
95 | X[0, i, index] = 1.
96 | indices = np.argmax(model.predict(X[:, max(0, i - 99):i + 1, :])[0], 1)
97 | index = indices[-1]
98 | y_char.append(index_to_char[index])
99 | return ('').join(y_char)
100 |
101 |
102 | class ResultChecker(Callback):
103 | def __init__(self, model, N, gen_length):
104 | self.model = model
105 | self.N = N
106 | self.gen_length = gen_length
107 |
108 | def on_epoch_end(self, epoch, logs={}):
109 | if epoch % self.N == 0:
110 | result = generate_text(self.model, self.gen_length, n_vocab, index_to_char)
111 | print('\nMy War and Peace:\n' + result)
112 |
113 |
114 | filepath="weights/weights_epoch_{epoch:03d}_loss_{loss:.4f}.hdf5"
115 | checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
116 |
117 | early_stop = EarlyStopping(monitor='loss', min_delta=0, patience=50, verbose=1, mode='min')
118 |
119 | model.fit(X, Y, batch_size=batch_size, verbose=1, epochs=n_epoch,
120 | callbacks=[ResultChecker(model, 10, 500), checkpoint, early_stop])
121 |
122 |
123 |
--------------------------------------------------------------------------------
/Chapter06/vanilla_rnn/text_gen.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 6 Recurrent Neural Networks
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import numpy as np
8 | from keras.models import Sequential
9 | from keras.layers.core import Dense, Activation, Dropout
10 | from keras.layers.recurrent import SimpleRNN
11 | from keras.layers.wrappers import TimeDistributed
12 | from keras.callbacks import Callback, ModelCheckpoint, EarlyStopping
13 | from keras import optimizers
14 |
15 |
16 | training_file = 'warpeace_input.txt'
17 |
18 | raw_text = open(training_file, 'r').read()
19 | raw_text = raw_text.lower()
20 | raw_text[:100]
21 |
22 | n_chars = len(raw_text)
23 | print('Total characters: {}'.format(n_chars))
24 | chars = sorted(list(set(raw_text)))
25 | n_vocab = len(chars)
26 | print('Total vocabulary (unique characters): {}'.format(n_vocab))
27 | print(chars)
28 |
29 | index_to_char = dict((i, c) for i, c in enumerate(chars))
30 | char_to_index = dict((c, i) for i, c in enumerate(chars))
31 | print(char_to_index)
32 |
33 |
34 | seq_length = 100
35 | n_seq = int(n_chars / seq_length)
36 |
37 | X = np.zeros((n_seq, seq_length, n_vocab))
38 | Y = np.zeros((n_seq, seq_length, n_vocab))
39 |
40 | for i in range(n_seq):
41 | x_sequence = raw_text[i * seq_length : (i + 1) * seq_length]
42 | x_sequence_ohe = np.zeros((seq_length, n_vocab))
43 | for j in range(seq_length):
44 | char = x_sequence[j]
45 | index = char_to_index[char]
46 | x_sequence_ohe[j][index] = 1.
47 | X[i] = x_sequence_ohe
48 | y_sequence = raw_text[i * seq_length + 1 : (i + 1) * seq_length + 1]
49 | y_sequence_ohe = np.zeros((seq_length, n_vocab))
50 | for j in range(seq_length):
51 | char = y_sequence[j]
52 | index = char_to_index[char]
53 | y_sequence_ohe[j][index] = 1.
54 | Y[i] = y_sequence_ohe
55 |
56 | batch_size = 100
57 | n_layer = 2
58 | hidden_units = 800
59 | n_epoch = 300
60 | dropout = 0.3
61 |
62 | model = Sequential()
63 | model.add(SimpleRNN(hidden_units, activation='relu', input_shape=(None, n_vocab), return_sequences=True))
64 | model.add(Dropout(dropout))
65 | for i in range(n_layer - 1):
66 | model.add(SimpleRNN(hidden_units, activation='relu', return_sequences=True))
67 | model.add(Dropout(dropout))
68 | model.add(TimeDistributed(Dense(n_vocab)))
69 | model.add(Activation('softmax'))
70 |
71 |
72 | optimizer = optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
73 | model.compile(loss="categorical_crossentropy", optimizer=optimizer)
74 |
75 | print(model.summary())
76 |
77 |
78 | def generate_text(model, gen_length, n_vocab, index_to_char):
79 | """
80 | Generating text using the RNN model
81 | @param model: current RNN model
82 | @param gen_length: number of characters we want to generate
83 | @param n_vocab: number of unique characters
84 | @param index_to_char: index to character mapping
85 | @return:
86 | """
87 | # Start with a randomly picked character
88 | index = np.random.randint(n_vocab)
89 | y_char = [index_to_char[index]]
90 | X = np.zeros((1, gen_length, n_vocab))
91 | for i in range(gen_length):
92 | X[0, i, index] = 1.
93 | indices = np.argmax(model.predict(X[:, max(0, i - 99):i + 1, :])[0], 1)
94 | index = indices[-1]
95 | y_char.append(index_to_char[index])
96 | return ('').join(y_char)
97 |
98 |
99 | class ResultChecker(Callback):
100 | def __init__(self, model, N, gen_length):
101 | self.model = model
102 | self.N = N
103 | self.gen_length = gen_length
104 |
105 | def on_epoch_end(self, epoch, logs={}):
106 | if epoch % self.N == 0:
107 | result = generate_text(self.model, self.gen_length, n_vocab, index_to_char)
108 | print('\nMy War and Peace:\n' + result)
109 |
110 |
111 | filepath = "weights/weights_layer_%d_hidden_%d_epoch_{epoch:03d}_loss_{loss:.4f}.hdf5" % (n_layer, hidden_units)
112 | checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
113 |
114 | early_stop = EarlyStopping(monitor='loss', min_delta=0, patience=50, verbose=1, mode='min')
115 |
116 | model.fit(X, Y, batch_size=batch_size, verbose=1, epochs=n_epoch,
117 | callbacks=[ResultChecker(model, 10, 200), checkpoint, early_stop])
118 |
119 |
120 |
--------------------------------------------------------------------------------
/Chapter08/siamese_nn.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 8 New Trends of Deep Learning
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import numpy as np
8 | from PIL import Image
9 |
10 | from keras import backend as K
11 | from keras.layers import Activation
12 | from keras.layers import Input, Lambda, Dense, Dropout, Convolution2D, MaxPooling2D, Flatten
13 | from keras.models import Sequential, Model
14 |
15 |
16 |
17 |
18 | img = Image.open('./orl_faces/s1/1.pgm')
19 | print(img.size)
20 | # img.show()
21 |
22 | image_size = [92, 112, 1]
23 |
24 | def load_images_ids(path='./orl_faces'):
25 | id_image = {}
26 | for id in range(1, 41):
27 | id_image[id] = []
28 | for image_id in range(1, 11):
29 | img = Image.open('{}/s{}/{}.pgm'.format(path, id, image_id))
30 | img = np.array(img).reshape(image_size)
31 | id_image[id].append(img)
32 | return id_image
33 |
34 |
35 | id_image = load_images_ids()
36 |
37 |
38 |
39 |
40 | def siamese_network():
41 | seq = Sequential()
42 | nb_filter = 16
43 | kernel_size = 6
44 | # Convolution layer
45 | seq.add(Convolution2D(nb_filter, (kernel_size, kernel_size), input_shape=image_size, padding='valid'))
46 | seq.add(Activation('relu'))
47 | seq.add(MaxPooling2D(pool_size=(2, 2)))
48 | seq.add(Dropout(.25))
49 | # flatten
50 | seq.add(Flatten())
51 | seq.add(Dense(50, activation='relu'))
52 | seq.add(Dropout(0.1))
53 | return seq
54 |
55 |
56 | img_1 = Input(shape=image_size)
57 | img_2 = Input(shape=image_size)
58 |
59 | base_network = siamese_network()
60 | feature_1 = base_network(img_1)
61 | feature_2 = base_network(img_2)
62 |
63 |
64 | distance_function = lambda x: K.abs(x[0] - x[1])
65 | distance = Lambda(distance_function, output_shape=lambda x: x[0])([feature_1, feature_2])
66 | prediction = Dense(1, activation='sigmoid')(distance)
67 |
68 | model = Model(inputs=[img_1, img_2], outputs=prediction)
69 |
70 |
71 |
72 |
73 | from keras.losses import binary_crossentropy
74 | from keras.optimizers import Adam
75 | optimizer = Adam(lr=0.001)
76 |
77 | model.compile(loss=binary_crossentropy, optimizer=optimizer)
78 |
79 |
80 | np.random.seed(42)
81 |
82 | def gen_train_data(n, id_image):
83 | X_1, X_2 = [], []
84 | Y = [1] * (n // 2) + [0] * (n // 2)
85 | # generate positive samples
86 | ids = np.random.choice(range(1, 41), n // 2)
87 | for id in ids:
88 | two_image_ids = np.random.choice(range(10), 2, False)
89 | X_1.append(id_image[id][two_image_ids[0]])
90 | X_2.append(id_image[id][two_image_ids[1]])
91 | # generate negative samples, by randomly selecting two images from two ids
92 | for _ in range(n // 2):
93 | two_ids = np.random.choice(range(1, 41), 2, False)
94 | two_image_ids = np.random.randint(0, 10, 2)
95 | X_1.append(id_image[two_ids[0]][two_image_ids[0]])
96 | X_2.append(id_image[two_ids[1]][two_image_ids[1]])
97 | X_1 = np.array(X_1).reshape([n] + image_size) / 255
98 | X_2 = np.array(X_2).reshape([n] + image_size) / 255
99 | Y = np.array(Y)
100 | return [X_1, X_2], Y
101 |
102 |
103 |
104 | def gen_test_case(n_way):
105 | ids = np.random.choice(range(1, 41), n_way)
106 | id_1 = ids[0]
107 | image_1 = np.random.randint(0, 10, 1)[0]
108 | image_2 = np.random.randint(image_1 + 1, 9 + image_1, 1)[0] % 10
109 | X_1 = [id_image[id_1][image_1]]
110 | X_2 = [id_image[id_1][image_2]]
111 | for id_2 in ids[1:]:
112 | image_2 = np.random.randint(0, 10, 1)[0]
113 | X_1.append(id_image[id_1][image_1])
114 | X_2.append(id_image[id_2][image_2])
115 | X_1 = np.array(X_1).reshape([n_way] + image_size) / 255
116 | X_2 = np.array(X_2).reshape([n_way] + image_size) / 255
117 | return [X_1, X_2]
118 |
119 |
120 |
121 |
122 | X_train, Y_train = gen_train_data(8000, id_image)
123 |
124 | epochs = 10
125 | model.fit(X_train, Y_train, validation_split=0.1, batch_size=64, verbose=1, epochs=epochs)
126 |
127 |
128 | def knn(X):
129 | distances = [np.linalg.norm(x_1 - x_2) for x_1, x_2 in zip(X[0], X[1])]
130 | pred = np.argmin(distances)
131 | return pred
132 |
133 |
134 | n_experiment = 1000
135 |
136 | for n_way in [4, 9, 16, 25, 36, 40]:
137 | n_correct_snn = 0
138 | n_correct_knn = 0
139 | for _ in range(n_experiment):
140 | X_test = gen_test_case(n_way)
141 | pred = model.predict(X_test)
142 | pred_id = np.argmax(pred)
143 | if pred_id == 0:
144 | n_correct_snn += 1
145 | if knn(X_test) == 0:
146 | n_correct_knn += 1
147 | print('{}-way few shot learning accuracy: {}'.format(n_way, n_correct_snn / n_experiment))
148 | print('Baseline accuracy with knn: {}\n'.format(n_correct_knn / n_experiment))
149 |
--------------------------------------------------------------------------------
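
Beyond the n-way test above, the trained network can also be used for plain pairwise verification: it outputs the probability that two face images show the same person. A minimal sketch, assuming the script above has been run (the 0.5 decision threshold is an arbitrary choice):

import numpy as np

# two images of subject 1 (same person), shaped to match image_size = [92, 112, 1]
img_a = np.array(id_image[1][0]).reshape([1] + image_size) / 255
img_b = np.array(id_image[1][1]).reshape([1] + image_size) / 255

prob_same = model.predict([img_a, img_b])[0][0]
print('Probability of same person: {:.3f} -> {}'.format(
    prob_same, 'match' if prob_same > 0.5 else 'no match'))
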
/Chapter08/Bayesian_nn_tf.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 8 New Trends of Deep Learning
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import numpy as np
8 | import tensorflow as tf
9 |
10 | from edward.models import Categorical, Normal
11 | import edward as ed
12 |
13 |
14 |
15 | def load_dataset():
16 | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data('./mnist_data')
17 | x_train = x_train / 255.
18 | x_train = x_train.reshape([-1, 28 * 28])
19 | x_test = x_test / 255.
20 | x_test = x_test.reshape([-1, 28 * 28])
21 | return (x_train, y_train), (x_test, y_test)
22 |
23 |
24 | (x_train, y_train), (x_test, y_test) = load_dataset()
25 |
26 |
27 | def gen_batches_label(x_data, y_data, batch_size, shuffle=True):
28 | """
29 | Generate batches including label for training
30 | @param x_data: training data
31 | @param y_data: training label
32 | @param batch_size: batch size
33 | @param shuffle: shuffle the data or not
34 | @return: batches generator
35 | """
36 | n_data = x_data.shape[0]
37 | if shuffle:
38 | idx = np.arange(n_data)
39 | np.random.shuffle(idx)
40 | x_data = x_data[idx]
41 | y_data = y_data[idx]
42 | for i in range(0, n_data - batch_size, batch_size):
43 | x_batch = x_data[i:i + batch_size]
44 | y_batch = y_data[i:i + batch_size]
45 | yield x_batch, y_batch
46 |
47 |
48 |
49 | batch_size = 100
50 | n_features = 28 * 28
51 | n_classes = 10
52 |
53 | x = tf.placeholder(tf.float32, [None, n_features])
54 |
55 | w = Normal(loc=tf.zeros([n_features, n_classes]), scale=tf.ones([n_features, n_classes]))
56 |
57 | b = Normal(loc=tf.zeros(n_classes), scale=tf.ones(n_classes))
58 |
59 | y = Categorical(tf.matmul(x, w) + b)
60 |
61 | qw = Normal(loc=tf.Variable(tf.random_normal([n_features, n_classes])),
62 | scale=tf.nn.softplus(tf.Variable(tf.random_normal([n_features, n_classes]))))
63 | qb = Normal(loc=tf.Variable(tf.random_normal([n_classes])),
64 | scale=tf.nn.softplus(tf.Variable(tf.random_normal([n_classes]))))
65 |
66 |
67 | y_ph = tf.placeholder(tf.int32, [batch_size])
68 |
69 | inference = ed.KLqp({w: qw, b: qb}, data={y: y_ph})
70 |
71 |
72 | inference.initialize(n_iter=100, scale={y: float(x_train.shape[0]) / batch_size})
73 |
74 |
75 |
76 | sess = tf.InteractiveSession()
77 |
78 | tf.global_variables_initializer().run()
79 |
80 |
81 | for _ in range(inference.n_iter):
82 | for X_batch, Y_batch in gen_batches_label(x_train, y_train, batch_size):
83 | inference.update(feed_dict={x: X_batch, y_ph: Y_batch})
84 |
85 |
86 |
87 | # Generate samples from the posterior and store them.
88 | n_samples = 30
89 | pred_samples = []
90 |
91 | for _ in range(n_samples):
92 | w_sample = qw.sample()
93 | b_sample = qb.sample()
94 | prob = tf.nn.softmax(tf.matmul(x_test.astype(np.float32), w_sample) + b_sample)
95 | pred = np.argmax(prob.eval(), axis=1).astype(np.float32)
96 | pred_samples.append(pred)
97 |
98 |
99 |
100 | acc_samples = []
101 | for pred in pred_samples:
102 | acc = (pred == y_test).mean() * 100
103 | acc_samples.append(acc)
104 |
105 | print('The classification accuracy for each sample of w and b:', acc_samples)
106 |
107 | image_test_ind = 0
108 | image_test = x_test[image_test_ind]
109 | label_test = y_test[image_test_ind]
110 | print('The label of the image is:', label_test)
111 |
112 | import matplotlib.pyplot as plt
113 | plt.imshow(image_test.reshape((28, 28)), cmap='Blues')
114 | plt.show()
115 |
116 |
117 | pred_samples_test = [pred[image_test_ind] for pred in pred_samples]
118 | print('The predictions for the example are:', pred_samples_test)
119 |
120 | plt.hist(pred_samples_test, bins=range(10))
121 | plt.xticks(np.arange(0,10))
122 | plt.xlim(0, 10)
123 | plt.xlabel("Predictions for the example")
124 | plt.ylabel("Frequency")
125 | plt.show()
126 |
127 |
128 | from scipy import ndimage
129 | image_file = 'notMNIST_small/A/MDRiXzA4LnR0Zg==.png'
130 | image_not = ndimage.imread(image_file).astype(float)
131 |
132 | plt.imshow(image_not, cmap='Blues')
133 | plt.show()
134 |
135 |
136 | image_not = image_not / 255.
137 | image_not = image_not.reshape([-1, 28 * 28])
138 |
139 |
140 | pred_samples_not = []
141 |
142 | for _ in range(n_samples):
143 | w_sample = qw.sample()
144 | b_sample = qb.sample()
145 | prob = tf.nn.softmax(tf.matmul(image_not.astype(np.float32), w_sample) + b_sample)
146 | pred = np.argmax(prob.eval(), axis=1).astype(np.float32)
147 | pred_samples_not.append(pred[0])
148 |
149 |
150 | print('The predictions for the notMNIST example are:', pred_samples_not)
151 |
152 | plt.hist(pred_samples_not, bins=range(10))
153 | plt.xticks(np.arange(0,10))
154 | plt.xlim(0,10)
155 | plt.xlabel("Predictions for the notMNIST example")
156 | plt.ylabel("Frequency")
157 | plt.show()
--------------------------------------------------------------------------------
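
The histograms above visualise the spread of the 30 sampled classifiers; the same samples can also be summarised numerically as a crude confidence score, namely the share of posterior samples that agree on the most common prediction. A small sketch, assuming the script above has been run:

import numpy as np

def sample_agreement(preds):
    # fraction of posterior samples that vote for the modal class
    preds = np.asarray(preds, dtype=int)
    counts = np.bincount(preds, minlength=10)
    return counts.argmax(), counts.max() / float(len(preds))

print('MNIST example:', sample_agreement(pred_samples_test))
print('notMNIST example:', sample_agreement(pred_samples_not))
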
/Chapter03/rbm.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 3 Restricted Boltzmann Machines and Autoencoders
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 |
7 | import numpy as np
8 | import tensorflow as tf
9 |
10 |
11 | class RBM(object):
12 | def __init__(self, num_v, num_h, batch_size, learning_rate, num_epoch, k=2):
13 | self.num_v = num_v
14 | self.num_h = num_h
15 | self.batch_size = batch_size
16 | self.learning_rate = learning_rate
17 | self.num_epoch = num_epoch
18 | self.k = k
19 | self.W, self.a, self.b = self._init_parameter()
20 |
21 | def _init_parameter(self):
22 | """ Initializing the model parameters including weights and bias """
23 | abs_val = np.sqrt(2.0 / (self.num_h + self.num_v))
24 | W = tf.get_variable('weights', shape=(self.num_v, self.num_h),
25 | initializer=tf.random_uniform_initializer(minval=-abs_val, maxval=abs_val))
26 | a = tf.get_variable('visible_bias', shape=(self.num_v), initializer=tf.zeros_initializer())
27 | b = tf.get_variable('hidden_bias', shape=(self.num_h), initializer=tf.zeros_initializer())
28 | return W, a, b
29 |
30 | def _prob_v_given_h(self, h):
31 | """
32 | Computing conditional probability P(v|h)
33 | @param h: hidden layer
34 | @return: P(v|h)
35 | """
36 | return tf.sigmoid(tf.add(self.a, tf.matmul(h, tf.transpose(self.W))))
37 |
38 | def _prob_h_given_v(self, v):
39 | """
40 | Computing conditional probability P(h|v)
41 | @param v: visible layer
42 | @return: P(h|v)
43 | """
44 | return tf.sigmoid(tf.add(self.b, tf.matmul(v, self.W)))
45 |
46 | def _bernoulli_sampling(self, prob):
47 | """ Bernoulli sampling based on input probability """
48 | distribution = tf.distributions.Bernoulli(probs=prob, dtype=tf.float32)
49 | return tf.cast(distribution.sample(), tf.float32)
50 |
51 | def _compute_gradients(self, v0, prob_h_v0, vk, prob_h_vk):
52 | """
53 | Computing gradients of weights and bias
54 | @param v0: visible vector before Gibbs sampling
55 | @param prob_h_v0: conditional probability P(h|v) before Gibbs sampling
56 | @param vk: visible vector after Gibbs sampling
57 | @param prob_h_vk: conditional probability P(h|v) after Gibbs sampling
58 | @return: gradients of weights, gradients of visible bias, gradients of hidden bias
59 | """
60 | outer_product0 = tf.matmul(tf.transpose(v0), prob_h_v0)
61 | outer_productk = tf.matmul(tf.transpose(vk), prob_h_vk)
62 | W_grad = tf.reduce_mean(outer_product0 - outer_productk, axis=0)
63 | a_grad = tf.reduce_mean(v0 - vk, axis=0)
64 | b_grad = tf.reduce_mean(prob_h_v0 - prob_h_vk, axis=0)
65 | return W_grad, a_grad, b_grad
66 |
67 | def _gibbs_sampling(self, v):
68 | """
69 | Gibbs sampling
70 | @param v: visible layer
71 | @return: visible vector before Gibbs sampling, conditional probability P(h|v) before Gibbs sampling,
72 | visible vector after Gibbs sampling, conditional probability P(h|v) after Gibbs sampling
73 | """
74 | v0 = v
75 | prob_h_v0 = self._prob_h_given_v(v0)
76 | vk = v
77 | prob_h_vk = prob_h_v0
78 |
79 | for _ in range(self.k):
80 | hk = self._bernoulli_sampling(prob_h_vk)
81 | prob_v_hk = self._prob_v_given_h(hk)
82 | vk = self._bernoulli_sampling(prob_v_hk)
83 | prob_h_vk = self._prob_h_given_v(vk)
84 |
85 | return v0, prob_h_v0, vk, prob_h_vk
86 |
87 | def _optimize(self, v):
88 | """
89 | Optimizing RBM model parameters
90 | @param v: input visible layer
91 | @return: updated parameters, mean squared error of reconstructing v
92 | """
93 | v0, prob_h_v0, vk, prob_h_vk = self._gibbs_sampling(v)
94 | W_grad, a_grad, b_grad = self._compute_gradients(v0, prob_h_v0, vk, prob_h_vk)
95 | para_update = [tf.assign(self.W, tf.add(self.W, self.learning_rate*W_grad)),
96 | tf.assign(self.a, tf.add(self.a, self.learning_rate*a_grad)),
97 | tf.assign(self.b, tf.add(self.b, self.learning_rate*b_grad))]
98 | error = tf.metrics.mean_squared_error(v0, vk)[1]
99 | return para_update, error
100 |
101 |
102 | def train(self, X_train):
103 | """
104 | Model training
105 | @param X_train: input data for training
106 | """
107 | X_train_plac = tf.placeholder(tf.float32, [None, self.num_v])
108 |
109 | para_update, error = self._optimize(X_train_plac)
110 | init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
111 |
112 | with tf.Session() as sess:
113 | sess.run(init)
114 | epochs_err = []
115 | n_batch = int(X_train.shape[0] / self.batch_size)
116 |
117 | for epoch in range(1, self.num_epoch + 1):
118 | epoch_err_sum = 0
119 |
120 | for batch_number in range(n_batch):
121 |
122 | batch = X_train[batch_number * self.batch_size: (batch_number + 1) * self.batch_size]
123 |
124 | _, batch_err = sess.run((para_update, error), feed_dict={X_train_plac: batch})
125 |
126 | epoch_err_sum += batch_err
127 |
128 | epochs_err.append(epoch_err_sum / n_batch)
129 |
130 | if epoch % 10 == 0:
131 | print("Training error at epoch %s: %s" % (epoch, epochs_err[-1]))
132 |
133 |
--------------------------------------------------------------------------------
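
rbm.py only defines the RBM class; the MovieLens scripts in this chapter construct and train it. A minimal standalone usage sketch on random binary data, assuming the class above is in scope (the sizes and hyperparameters are arbitrary):

import numpy as np

# 1000 samples of a 784-dimensional binary visible layer (e.g. binarised 28x28 images)
X_train = (np.random.rand(1000, 784) > 0.5).astype(np.float32)

rbm = RBM(num_v=784, num_h=256, batch_size=64, learning_rate=0.01, num_epoch=20, k=2)
rbm.train(X_train)
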
/Chapter07/vanilla_tf.py:
--------------------------------------------------------------------------------
1 | '''
2 | Source codes for Hands-On Deep Learning Architectures with Python (Packt Publishing)
3 | Chapter 7 Generative Adversarial Networks
4 | Author: Yuxi (Hayden) Liu
5 | '''
6 | import numpy as np
7 | import tensorflow as tf
8 |
9 |
10 | def load_dataset():
11 | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data('./mnist_data')
12 | train_data = np.concatenate((x_train, x_test), axis=0)
13 | train_data = train_data / 255.
14 | train_data = train_data * 2. - 1
15 | train_data = train_data.reshape([-1, 28 * 28])
16 | return train_data
17 |
18 |
19 | def gen_batches(data, batch_size, shuffle=True):
20 | """
21 | Generate batches for training
22 | @param data: training data
23 | @param batch_size: batch size
24 | @param shuffle: shuffle the data or not
25 | @return: batches generator
26 | """
27 | n_data = data.shape[0]
28 | if shuffle:
29 | idx = np.arange(n_data)
30 | np.random.shuffle(idx)
31 | data = data[idx]
32 |
33 | for i in range(0, n_data, batch_size):
34 | batch = data[i:i + batch_size]
35 | yield batch
36 |
37 |
38 | data = load_dataset()
39 | print("Training dataset shape:", data.shape)
40 |
41 | import matplotlib.pyplot as plt
42 |
43 | def display_images(data, image_size=28):
44 | fig, axes = plt.subplots(4, 10, figsize=(10, 4))
45 | for i, ax in enumerate(axes.flatten()):
46 | img = data[i, :]
47 | img = (img - img.min()) / (img.max() - img.min())
48 | ax.imshow(img.reshape(image_size, image_size), cmap='gray')
49 | ax.xaxis.set_visible(False)
50 | ax.yaxis.set_visible(False)
51 | plt.subplots_adjust(wspace=0, hspace=0)
52 | plt.show()
53 |
54 |
55 | display_images(data)
56 |
57 |
58 | def dense(x, n_outputs, activation=None):
59 | return tf.layers.dense(x, n_outputs, activation=activation,
60 | kernel_initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))
61 |
62 |
63 | def generator(z, alpha=0.2):
64 | """
65 | Generator network
66 | @param z: input of random samples
67 | @param alpha: leaky relu factor
68 | @return: output of the generator network
69 | """
70 | with tf.variable_scope('generator', reuse=tf.AUTO_REUSE):
71 | fc1 = dense(z, 256)
72 | fc1 = tf.nn.leaky_relu(fc1, alpha)
73 |
74 | fc2 = dense(fc1, 512)
75 | fc2 = tf.nn.leaky_relu(fc2, alpha)
76 |
77 | fc3 = dense(fc2, 1024)
78 | fc3 = tf.nn.leaky_relu(fc3, alpha)
79 |
80 | out = dense(fc3, 28 * 28)
81 | out = tf.tanh(out)
82 | return out
83 |
84 |
85 | def discriminator(x, alpha=0.2):
86 | """
87 | Discriminator network
88 | @param x: input samples, can be real or generated samples
89 | @param alpha: leaky relu factor
90 | @return: output logits
91 | """
92 | with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE):
93 | fc1 = dense(x, 1024)
94 | fc1 = tf.nn.leaky_relu(fc1, alpha)
95 |
96 | fc2 = dense(fc1, 512)
97 | fc2 = tf.nn.leaky_relu(fc2, alpha)
98 |
99 | fc3 = dense(fc2, 256)
100 | fc3 = tf.nn.leaky_relu(fc3, alpha)
101 |
102 | out = dense(fc3, 1)
103 | return out
104 |
105 |
106 | noise_size = 100
107 | learning_rate = 0.0002
108 | batch_size = 128
109 | epochs = 100
110 | beta1 = 0.5
111 |
112 | tf.reset_default_graph()
113 |
114 | X_real = tf.placeholder(tf.float32, (None, 28 * 28), name='input_real')
115 |
116 | z = tf.placeholder(tf.float32, (None, noise_size), name='input_noise')
117 |
118 | g_sample = generator(z)
119 |
120 | d_real_out = discriminator(X_real)
121 | d_fake_out = discriminator(g_sample)
122 |
123 |
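# generator loss: non-saturating GAN loss, i.e. cross-entropy of the discriminator's output on generated samples against labels of 1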
124 | g_loss = tf.reduce_mean(
125 | tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake_out, labels=tf.ones_like(d_fake_out)))
126 |
127 | tf.summary.scalar('generator_loss', g_loss)
128 |
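# discriminator loss: real samples should be classified as 1, generated samples as 0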
129 | d_real_loss = tf.reduce_mean(
130 | tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real_out, labels=tf.ones_like(d_real_out)))
131 |
132 | d_fake_loss = tf.reduce_mean(
133 | tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake_out, labels=tf.zeros_like(d_fake_out)))
134 |
135 | d_loss = d_real_loss + d_fake_loss
136 |
137 | tf.summary.scalar('discriminator_loss', d_loss)
138 |
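# split the trainable variables by scope so that each optimizer updates only its own network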
139 | train_vars = tf.trainable_variables()
140 | d_vars = [var for var in train_vars if var.name.startswith('discriminator')]
141 | g_vars = [var for var in train_vars if var.name.startswith('generator')]
142 |
143 |
144 | with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
145 | d_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
146 | g_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
147 |
148 |
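# fixed noise vectors used to generate sample images after training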
149 | n_sample_display = 40
150 | sample_z = np.random.uniform(-1, 1, size=(n_sample_display, noise_size))
151 |
152 |
153 | steps = 0
154 | with tf.Session() as sess:
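# merge all summaries and write TensorBoard logs to ./logdir/vanilla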
155 | merged = tf.summary.merge_all()
156 | train_writer = tf.summary.FileWriter('./logdir/vanilla', sess.graph)
157 |
158 | sess.run(tf.global_variables_initializer())
159 | for epoch in range(epochs):
160 | for batch_x in gen_batches(data, batch_size):
161 |
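# sample a batch of noise from a uniform distribution over [-1, 1], then update the discriminator on real and generated images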
162 | batch_z = np.random.uniform(-1, 1, size=(batch_size, noise_size))
163 |
164 | _, summary, d_loss_batch = sess.run([d_opt, merged, d_loss], feed_dict={z: batch_z, X_real: batch_x})
165 |
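# the generator is updated twice for each discriminator update, a common heuristic to keep the two networks balanced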
166 | sess.run(g_opt, feed_dict={z: batch_z})
167 | _, g_loss_batch = sess.run([g_opt, g_loss], feed_dict={z: batch_z})
168 |
169 | if steps % 100 == 0:
170 | train_writer.add_summary(summary, steps)
171 | print("Epoch {}/{} - discriminator loss: {:.4f}, generator loss: {:.4f}".format(
172 | epoch + 1, epochs, d_loss_batch, g_loss_batch))
173 |
174 |
175 | steps += 1
176 |
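# generate and display images from the trained generator using the fixed noise vectors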
177 | gen_samples = sess.run(generator(z), feed_dict={z: sample_z})
178 |
179 | display_images(gen_samples)
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | # Hands-On Deep Learning Architectures with Python
5 |
6 |
7 |
8 | This is the code repository for [Hands-On Deep Learning Architectures with Python](https://www2.packtpub.com/big-data-and-business-intelligence/hands-deep-learning-architectures-python?utm_source=github&utm_medium=repository&utm_campaign=9781788998086), published by Packt.
9 |
10 | **Create deep neural networks to solve computational problems using TensorFlow and Keras**
11 |
12 | ## What is this book about?
13 | Deep learning architectures are composed of multilevel nonlinear operations that represent high-level abstractions; this allows you to learn useful feature representations from the data. This book will help you learn and implement deep learning architectures to solve various deep learning research problems.
14 |
15 | This book covers the following exciting features:
16 | * Implement CNNs, RNNs, and other commonly used architectures with Python
17 | * Explore architectures such as VGGNet, AlexNet, and GoogLeNet
18 | * Build deep learning architectures for AI applications such as face and image recognition, fraud detection, and many more
19 | * Understand the architectures and applications of Boltzmann machines and autoencoders with concrete examples
20 | * Master artificial intelligence and neural network concepts and apply them to your architecture
21 |
22 | If you feel this book is for you, get your [copy](https://www.amazon.com/dp/1788998081) today!
23 |
24 |
26 |
27 |
28 | ## Instructions and Navigations
29 | All of the code is organized into folders. For example, Chapter02.
30 |
31 | The code will look like the following:
32 | ```
33 | import tensorflow as tf
34 | import numpy as np
35 | import matplotlib.pyplot as plt
36 | from sklearn.model_selection import train_test_split
37 | ```
38 |
39 | **Following is what you need for this book:**
40 | If you're a data scientist, machine learning developer/engineer, or deep learning practitioner, or are curious about AI and want to upgrade your knowledge of various deep learning architectures, this book will appeal to you. You are expected to have some knowledge of statistics and machine learning algorithms to get the best out of this book.
41 |
42 | With the following software and hardware list, you can run all of the code files present in the book.
43 |
44 | ### Software and Hardware List
45 |
46 | | Chapter | Software required | OS required |
47 | | -------- | -------------------| -----------------------------------|
48 | | 1 | TensorFlow, Keras | Windows, Mac OS X, and Linux (Any) |
49 | | 2 - 8 | Python 3.x | Windows, Mac OS X, and Linux (Any) |
50 |
51 |
52 | We also provide a PDF file that has color images of the screenshots/diagrams used in this book. [Click here to download it](https://www.packtpub.com/sites/default/files/downloads/9781788998086_ColorImages.pdf).
53 |
54 |
55 | ### Related products
56 | * Python Deep Learning Projects [[Packt]](https://prod.packtpub.com/in/big-data-and-business-intelligence/python-deep-learning-projects?utm_source=github&utm_medium=repository&utm_campaign=9781788997096) [[Amazon]](https://www.amazon.com/dp/B07FNY2BZR)
57 |
58 | * Python Deep Learning - Second Edition [[Packt]](https://prod.packtpub.com/in/big-data-and-business-intelligence/python-deep-learning-second-edition?utm_source=github&utm_medium=repository&utm_campaign=9781789348460) [[Amazon]](https://www.amazon.com/dp/B07KQ29CQ3)
59 |
60 | ## Get to Know the Authors
61 | **Yuxi (Hayden) Liu**
62 | is an author of a series of machine learning books and an education enthusiast. His first book, the first edition of Python Machine Learning By Example, was a #1 bestseller on Amazon India in 2017 and 2018; it and his other book, R Deep Learning Projects, were both published by Packt Publishing.
63 | He is an experienced data scientist who is focused on developing machine learning and deep learning models and systems. He has worked in a variety of data-driven domains and has applied his machine learning expertise to computational advertising, recommendations, and network anomaly detection. He published five first-authored IEEE transaction and conference papers during his master's research at the University of Toronto.
64 |
65 | **Saransh Mehta**
66 | has cross-domain experience of working with texts, images, and audio using deep learning. He has been building artificial intelligence-based solutions, including a generative chatbot, an attendee-matching recommendation system, and audio keyword recognition systems for multiple start-ups. He is very familiar with the Python language and has extensive knowledge of deep learning libraries such as TensorFlow and Keras. He has been in the top 10% of entrants to deep learning challenges hosted by Microsoft and Kaggle.
67 |
68 |
69 | ## Other books by the authors
70 | * [R Deep Learning Projects](https://prod.packtpub.com/in/big-data-and-business-intelligence/r-deep-learning-projects?utm_source=github&utm_medium=repository&utm_campaign=9781788478403)
71 | * [Python Machine Learning By Example](https://prod.packtpub.com/in/big-data-and-business-intelligence/python-machine-learning-example?utm_source=github&utm_medium=repository&utm_campaign=9781783553112)
72 |
73 |
74 | ### Suggestions and Feedback
75 | [Click here](https://docs.google.com/forms/d/e/1FAIpQLSdy7dATC6QmEL81FIUuymZ0Wy9vH1jHkvpY57OiMeKGqib_Ow/viewform) if you have any feedback or suggestions.
76 | ### Download a free PDF
77 |
78 | If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost. Simply click on the link to claim your free PDF.
79 |