├── README.md
├── all-scripts
│   ├── bitcoin.py
│   ├── painters.py
│   └── transfer-learning.py
├── bitcoin.py
├── bitcoin
│   ├── README.md
│   ├── data
│   │   └── 123.csv
│   ├── images
│   │   ├── 123.jpg
│   │   ├── LSTM.png
│   │   ├── all_coins.png
│   │   ├── bitcoin.png
│   │   ├── btc-orange1.jpg
│   │   ├── components.png
│   │   ├── daily_BTC.png
│   │   ├── df.png
│   │   ├── krakenUSD.png
│   │   ├── lstm_summary.png
│   │   ├── model_running.png
│   │   ├── pred-vs-true.png
│   │   ├── prediction.png
│   │   ├── price_variations.png
│   │   ├── rnn2.png
│   │   └── trainingloss.png
│   ├── img.png
│   ├── notebooks
│   │   └── deep-learning-LSTM-bitcoins.ipynb
│   └── scripts
│       └── 123.py
├── image-recognition-tutorial
│   ├── README.md
│   └── notebooks
│       └── 123
├── images
│   ├── keras.jpg
│   ├── matplotlib.png
│   ├── pandas.png
│   ├── scikitlearn.png
│   └── tf.png
├── keras-tf-tutorial
│   ├── README.md
│   ├── data
│   │   └── 123.csv
│   ├── images
│   │   ├── Deep learning neural network.jpg
│   │   ├── MNIST_3.png
│   │   ├── cross_entropy.png
│   │   ├── loss_large_model.png
│   │   └── loss_small_model.png
│   └── notebooks
│       └── neural-nets-digits-mnist.ipynb
├── painters-identification
│   ├── README.md
│   ├── data
│   │   ├── _config.yml
│   │   └── all_data_info.csv
│   ├── images
│   │   ├── 1200px-Leonardo_da_Vinci_-_Virgin_and_Child_with_St_Anne_C2RMF_retouched.jpg
│   │   ├── 36617.jpg
│   │   ├── 91485.jpg
│   │   ├── Anghiari.jpg
│   │   ├── DT45.jpg
│   │   ├── Max_pooling.png
│   │   ├── Peter_Paul_Ruben's_copy_of_the_lost_Battle_of_Anghiari.jpg
│   │   ├── TLng.jpg
│   │   ├── VGG16.jpg
│   │   ├── acc.png
│   │   ├── acc_curve.png
│   │   ├── andrew_ng_drivers_ml_success-1.png
│   │   ├── binary.png
│   │   ├── binarycrossentropy.png
│   │   ├── capstone-slides.pdf
│   │   ├── cnnpic.png
│   │   ├── cnnpic2.jpg
│   │   ├── convnetsmall.png
│   │   ├── dataframemarkdown.png
│   │   ├── emoji.jpeg
│   │   ├── emojismiling.jpg
│   │   ├── f13.png
│   │   ├── folders.png
│   │   ├── house_identify.png
│   │   ├── inceptionV3.png
│   │   ├── inceptionV3tds.png
│   │   ├── inceptionv3summary.png
│   │   ├── line.png
│   │   ├── loss.png
│   │   ├── loss_curve.png
│   │   ├── mouse_again.png
│   │   ├── multiclass.png
│   │   ├── paintingcontrast.jpg
│   │   ├── paintings_readme.jpg
│   │   ├── pooling.png
│   │   ├── rat.png
│   │   ├── relu.png
│   │   ├── rembrandtpic.jpg
│   │   ├── stanne.jpg
│   │   ├── test.png
│   │   ├── test_loss_picasso_size_32.png
│   │   ├── train_and_test_loss_picasso_size_32.png
│   │   └── transferlearning.jpg
│   ├── notebooks
│   │   ├── capstone-models-final-model-building.ipynb
│   │   └── capstone-project-painter-identifier-marco-data-preprocessing.ipynb
│   └── scripts
│       ├── aux_func.py
│       └── capstone-models-final-model-building.py
├── painters.py
├── transfer-learning.py
└── transfer-learning
    ├── 123
    ├── README.md
    ├── images
    │   ├── 123
    │   ├── lionNN.jpg
    │   └── treeNN.jpg
    └── notebooks
        ├── 123
        └── transfer-learning.ipynb
/README.md:
--------------------------------------------------------------------------------
1 | ## Deep Learning Projects
2 |
3 | [MIT License](https://opensource.org/licenses/MIT)
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 | Notebooks and descriptions •
19 | Contact Information
20 |
21 |
22 |
23 | ### Notebooks and descriptions
24 | | Project | Brief Description |
25 | |--------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
26 | | [painter-identifier](https://github.com/marcotav/deep-learning/blob/master/painters-identification/README.md) | I built a Convolutional Neural Net to identify the artist of a painting via transfer learning, instantiating the convolutional part of the Inception V3 model, and training a fully-connected network on top.|
27 | | [bitcoin-price-analysis](https://github.com/marcotav/deep-learning/blob/master/bitcoin/README.md) | I built predictive models for Bitcoin price data using recurrent neural networks (LSTMs). Correlations between altcoins are also considered.|
28 | | [keras-tf-tutorial](https://github.com/marcotav/deep-learning/blob/master/keras-tf-tutorial/README.md) | Neural networks tutorial where I build fully-connected networks and convolutional neural networks using Keras and TensorFlow, respectively (in progress). |
29 | | [transfer-learning-mini-tutorial](https://github.com/marcotav/deep-learning/blob/master/transfer-learning/README.md) | I illustrate the use of transfer learning using the Inception V3 deep neural network model.|
30 |
31 |
32 | ## Contact Information
33 |
34 | Feel free to contact me:
35 |
36 | * Email: [marcotav65@gmail.com](mailto:marcotav65@gmail.com)
37 | * GitHub: [marcotav](https://github.com/marcotav)
38 | * LinkedIn: [marco-tavora](https://www.linkedin.com/in/marco-tavora)
39 | * Website: [marcotavora.me](http://www.marcotavora.me)
40 |
--------------------------------------------------------------------------------
/all-scripts/bitcoin.py:
--------------------------------------------------------------------------------
1 | # Recurrent Neural Networks and Bitcoins
2 |
3 | ### Marco Tavora
4 |
5 | ## Introduction
6 |
7 | '''
8 | Nowadays most articles about bitcoin are speculative. Analyses based on strong technical foundations are rare.
9 |
10 | My goal in this project is to build predictive models for the price of Bitcoin and other cryptocurrencies. To accomplish that, I will:
11 | - First use Long Short-Term Memory recurrent neural networks (LSTMs) for predictions;
12 | - Then study correlations between altcoins;
13 | - Finally, repeat the first analysis using traditional time-series methods.
14 |
15 | For a thorough introduction to Bitcoin you can check this [book](https://github.com/bitcoinbook/bitcoinbook). For LSTMs, [this](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) is a great source.
16 | '''
17 |
18 | ## Importing libraries
19 |
20 | import numpy as np
21 | import pandas as pd
22 | import statsmodels.api as sm
23 | import aux_functions as af
24 | from scipy import stats
25 | import keras
26 | import pickle
27 | import quandl
28 | from keras.models import Sequential
29 | from keras.layers import Activation, Dense,LSTM,Dropout
30 | from sklearn.metrics import mean_squared_error
31 | from math import sqrt
32 | from random import randint
33 | from keras import initializers
34 | import datetime
35 | from datetime import datetime
36 | from matplotlib import pyplot as plt
37 | import json
38 | import requests
39 | import plotly.offline as py
40 | import plotly.graph_objs as go
41 | py.init_notebook_mode(connected=True)
42 | %matplotlib inline
43 | from IPython.core.interactiveshell import InteractiveShell
44 | InteractiveShell.ast_node_interactivity = "all" # see the value of multiple statements at once.
45 | pd.set_option('display.max_columns', None)
46 |
47 | ## Data
48 | ### There are several ways to fetch historical data for Bitcoins (or any other cryptocurrency). Here are a few examples:
49 |
50 | ### Flat files
51 | '''
52 | If a flat file is already available, we can just read it using `pandas`. For example, one of the datasets that will be used in this notebook can be found [here](https://www.kaggle.com/mczielinski/bitcoin-historical-data/data). This data is from *bitFlyer*, a Bitcoin exchange and marketplace.
53 | '''
54 | df = pd.read_csv('bitcoin_data.csv')
55 | '''
56 | This dataset from [Kaggle](https://www.kaggle.com/mczielinski/bitcoin-historical-data/kernels) contains, for the time period of 01/2012-03/2018, minute-to-minute updates of the open, high, low and close prices (OHLC), the volume in BTC and indicated currency, and the [weighted bitcoin price](https://github.com/Bitcoin-Foundation-Italia/bitcoin-wp).
57 | '''
58 | lst_col = df.shape[1]-1
59 | df.iloc[:,lst_col].plot(lw=2, figsize=(15,5));
60 | plt.legend(['Bitcoin'], loc='upper left',fontsize = 'x-large');
61 |
62 | ### Retrieve Data from Quandl's API
63 | '''
64 | Another possibility is to retrieve Bitcoin pricing data using Quandl's free [Bitcoin API](https://blog.quandl.com/api-for-bitcoin-data). For example, to obtain the daily bitcoin exchange rate (BTC vs. USD) on Kraken we use the code snippet below. The function `quandl_data` is inside the library `af`.
65 | '''
66 | quandl_id = 'BCHARTS/KRAKENUSD'
67 | df_qdl = af.quandl_data(quandl_id)
68 | df_qdl.columns = [c.lower().replace(' ', '_').replace('(', '').replace(')', '') for c in df_qdl.columns.values]
69 |
70 | lst_col_2 = df_qdl.shape[1]-1
71 | df_qdl.iloc[:,lst_col_2].plot(lw=3, figsize=(15,5));
72 | plt.legend(['krakenUSD'], loc='upper left',fontsize = 'x-large');
73 |
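  | # A hypothetical sketch of what `af.quandl_data` might look like (the real helper
  | # lives in `aux_functions`): fetch a Quandl dataset and cache it locally with
  | # `pickle` (imported above) so repeated runs avoid hitting the API.
  | def quandl_data_sketch(quandl_id):
  |     cache_path = quandl_id.replace('/', '-') + '.pkl'
  |     try:
  |         with open(cache_path, 'rb') as f:
  |             return pickle.load(f)  # reuse the cached DataFrame if present
  |     except (OSError, IOError):
  |         data = quandl.get(quandl_id, returns='pandas')
  |         with open(cache_path, 'wb') as f:
  |             pickle.dump(data, f)
  |         return data
  |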
74 | ### Retrieve Data from cryptocompare.com
75 | '''
76 | Another possibility is to retrieve data from [cryptocompare](https://www.cryptocompare.com/). In this case, we use the `requests` package to make a `.get` request (the object `res` is a `Response` object) such as:
77 |
78 | res = requests.get(URL)
79 | '''
80 |
81 | res = requests.get('https://min-api.cryptocompare.com/data/histoday?fsym=BTC&tsym=USD&limit=2000')
82 | df_cc = pd.DataFrame(json.loads(res.content)['Data']).set_index('time')
83 | df_cc.index = pd.to_datetime(df_cc.index, unit='s')
84 |
85 | lst_col = df_cc.shape[1]-1
86 | df_cc.iloc[:,lst_col].plot(lw=2, figsize=(15,5));
87 | plt.legend(['daily BTC'], loc='upper left',fontsize = 'x-large');
88 |
89 | ## Data Handling
90 | '''
91 | Let us start with `df` (from *bitFlyer*) from Kaggle.
92 | '''
93 |
94 | ### Checking for `NaNs` and making column titles cleaner
95 |
96 | '''
97 | Using `df.isnull().any()` we quickly see that the data does not contain null values. We also get rid of upper-case letters, spaces, parentheses, and so on in the column titles.
98 | '''
99 |
100 |
101 | df.columns = [c.lower().replace(' ', '_').replace('(', '').replace(')', '') for c in df.columns.values]
102 |
103 |
104 | ### Group data by day and take the mean value
105 |
106 | '''
107 | Though Timestamps are in Unix time, this is easily taken care of using Python's `datetime` library:
108 | - `pd.to_datetime` converts the argument to `datetime`
109 | - `Series.dt.date` gives a numpy array of Python `datetime.date` objects
110 |
111 | We then drop the `Timestamp` column. The repeated dates occur simply because the data is collected minute-to-minute. Taking the daily mean using the method `daily_weighted_prices` from `af` we obtain:
112 | '''
113 | daily = af.daily_weighted_prices(df,'date','timestamp','s')
114 |
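  | # A hypothetical sketch of what `af.daily_weighted_prices` might do (the real
  | # helper lives in `aux_functions`), following the steps described above:
  | def daily_weighted_prices_sketch(df, date_col, ts_col, unit):
  |     out = df.copy()
  |     # convert Unix timestamps to calendar dates
  |     out[date_col] = pd.to_datetime(out[ts_col], unit=unit).dt.date
  |     # drop the raw timestamps and average the minute-level rows per day
  |     return out.drop(ts_col, axis=1).groupby(date_col).mean()['weighted_price']
  |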
115 |
116 | ## Exploratory Data Analysis
117 | '''
118 | We can look for trends and seasonality in the data using the functions `trend`, `seasonal`, `residue` and `plot_comp` from `af`:
119 | '''
120 | ### Trend, seasonality and residue
121 |
122 | dataset = daily.reset_index()
123 | dataset['date'] = pd.to_datetime(dataset['date'])
124 | dataset = dataset.set_index('date')
125 |
126 | af.plot_comp(af.trend(dataset,'weighted_price'),
127 | af.seasonal(dataset,'weighted_price'),
128 | af.residue(dataset,'weighted_price'),
129 | af.actual(dataset,'weighted_price'))
130 |
131 | ### Partial Auto-correlation (PACF)
132 | '''
133 | The PACF is the correlation (of the variable with itself) at a given lag, **controlling for the effect of previous (shorter) lags**. We see below that according to the PACF, prices with **one day** difference are highly correlated. After that the partial auto-correlation essentially drops to zero.
134 | '''
135 | plt.figure(figsize=(15,7))
136 | ax = plt.subplot(211)
137 | sm.graphics.tsa.plot_pacf(dataset['weighted_price'].values.squeeze(), lags=48, ax=ax)
138 | plt.show();
139 |
140 | ## Train/Test split
141 | '''
142 | We now split the dataset: the model is fit on the training data and evaluated on the test data. First we choose the proportion of rows that will constitute the training set.
143 | '''
144 | train_size = 0.75
145 | training_rows = train_size*len(daily)
146 | int(training_rows)
147 | '''
148 | Then we slice `daily` into training and test portions using:
149 |
150 | [:int(training_rows)]
151 | [int(training_rows):]
152 |
153 | We then have:
154 | '''
155 | train = daily[0:int(training_rows)]
156 | test = daily[int(training_rows):]
157 | print('Shapes of training and testing sets:',train.shape[0],'and',test.shape[0])
158 | '''
159 | We can automate the split using a simple function from the library `aux_functions`:
160 | '''
161 | test_size = 1 - train_size
162 | train = af.train_test_split(daily, test_size=test_size)[0]
163 | test = af.train_test_split(daily, test_size=test_size)[1]
164 | print('Shapes of training and testing sets:',train.shape[0],'and',test.shape[0])
165 | af.vc_to_df(train).head()
166 | af.vc_to_df(train).tail()
167 | af.vc_to_df(test).head()
168 | af.vc_to_df(test).tail()
169 | train.shape,test.shape
170 |
171 | ### Checking
172 |
173 | af.vc_to_df(daily[0:199]).head()
174 | af.vc_to_df(daily[0:199]).tail()
175 | af.vc_to_df(daily[199:]).head()
176 | af.vc_to_df(daily[199:]).tail()
177 |
178 | ### Reshaping
179 | '''
180 | We must reshape `train` and `test` for `Keras`.
181 | '''
182 |
183 | train = np.reshape(train, (len(train), 1));
184 | test = np.reshape(test, (len(test), 1));
185 |
186 | train.shape
187 | test.shape
188 |
189 | ### `MinMaxScaler `
190 | '''
191 | We must now use `MinMaxScaler` which scales and translates features to a given range (between zero and one).
192 | '''
193 |
194 | from sklearn.preprocessing import MinMaxScaler
195 | scaler = MinMaxScaler()
196 | train = scaler.fit_transform(train)
197 | test = scaler.transform(test)
198 |
199 | # Reshaping once more:
200 |
201 | X_train, Y_train = af.lb(train, 1)
202 | X_test, Y_test = af.lb(test, 1)
203 | X_train = np.reshape(X_train, (len(X_train), 1, X_train.shape[1]))
204 | X_test = np.reshape(X_test, (len(X_test), 1, X_test.shape[1]))
205 |
206 | '''
207 | Now the shape is (number of examples, time steps, features per step).
208 | '''
209 |
210 | X_train.shape
211 | X_test.shape
212 |
213 | model = Sequential()
214 | model.add(LSTM(256, return_sequences=True, input_shape=(X_train.shape[1], X_train.shape[2])))
215 | model.add(LSTM(256))
216 | model.add(Dense(1))
217 |
218 | # compile and fit the model
219 | model.compile(loss='mean_squared_error', optimizer='adam')
220 | history = model.fit(X_train, Y_train, epochs=100, batch_size=32,
221 | validation_data=(X_test, Y_test))
222 |
223 | model.summary()
224 |
225 | ## Train and test loss
226 |
227 | ### While the model is being trained, the train and test losses vary as shown in the figure below. The package `plotly.graph_objs` is extremely useful here. The function `t()` inside the argument is defined in `aux_functions`:
228 |
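  | # A hypothetical sketch of what `af.t` might look like (the real helper lives in
  | # `aux_functions`): it wraps one metric of the Keras History object into a plotly trace.
  | def t_sketch(metric, label, history):
  |     return go.Scatter(y=history.history[metric], mode='lines', name=label)
  |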
229 | py.iplot(dict(data=[af.t('loss','training_loss',history), af.t('val_loss','val_loss',history)],
230 | layout=dict(title = 'history of training loss', xaxis = dict(title = 'epochs'),
231 | yaxis = dict(title = 'loss'))), filename='training_process')
232 |
233 | ### Root-mean-square deviation (RMSE)
234 |
235 | ### The RMSE measures the differences between the values predicted by a model and the actual values.
236 |
237 | X_test_new = X_test.copy()
238 | X_test_new = np.append(X_test_new, scaler.transform([[dataset.iloc[-1][0]]]))  # transform expects a 2D array
239 | X_test_new = np.reshape(X_test_new, (len(X_test_new), 1, 1))
240 | prediction = model.predict(X_test_new)
241 |
242 | # Inverting the original scaling:
243 |
244 | pred_inv = scaler.inverse_transform(prediction.reshape(-1, 1))
245 | Y_test_inv = scaler.inverse_transform(Y_test.reshape(-1, 1))
246 | pred_inv_new = np.array(pred_inv[:,0][1:])
247 | Y_test_new_inv = np.array(Y_test_inv[:,0])
248 |
249 | ### Renaming arrays for clarity
250 |
251 | y_testing = Y_test_new_inv
252 | y_predict = pred_inv_new
253 |
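  | # With `mean_squared_error` and `sqrt` already imported above, a minimal RMSE
  | # computation (assuming `y_predict` and `y_testing` are aligned one-to-one):
  | rmse = sqrt(mean_squared_error(y_testing, y_predict))
  | print('Test RMSE:', round(rmse, 2), 'USD')
  |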
254 | ## Prediction versus True Values
255 |
256 | layout = dict(title = 'True prices vs predicted prices',
257 | xaxis = dict(title = 'Day'), yaxis = dict(title = 'USD'))
258 | fig = dict(data=[af.prediction_vs_true(y_testing,'True'),
259 |                  af.prediction_vs_true(y_predict,'Prediction')],
260 | layout=layout)
261 | py.iplot(fig, filename='results')
262 |
263 | print('Prediction:\n')
264 | print(list(y_predict[0:10]))
265 | print('')
266 | print('Test set:\n')
267 | y_testing = [round(i,1) for i in list(Y_test_new_inv)]
268 | print(y_testing[0:10])
269 | print('')
270 | print('Difference:\n')
271 | diff = [round(abs((y_testing[i+1]-list(y_predict)[i])/list(y_predict)[i]),2) for i in range(len(y_predict)-1)]
272 | print(diff[0:30])
273 | print('')
274 | print('Mean difference:\n')
275 | print(100*round(np.mean(diff[0:30]),3),'%')
276 |
277 | ### The average difference is ~5%. There is something wrong here!
278 |
279 | df = pd.DataFrame(data={'prediction': y_predict.tolist(), 'testing': y_testing})
280 |
281 | pct_variation = df.pct_change()[1:]
282 | pct_variation = pct_variation[1:]
283 |
284 | pct_variation.head()
285 |
286 | layout = dict(title = 'True prices vs predicted prices variation (%)',
287 |               xaxis = dict(title = 'Day'), yaxis = dict(title = '%'))
288 | fig = dict(data=[af.prediction_vs_true(pct_variation['prediction'],'Prediction'),af.prediction_vs_true(pct_variation['testing'],'True')],
289 | layout=layout)
290 | py.iplot(fig, filename='results')
291 |
292 | ## Altcoins
293 |
294 | ### Using the Poloniex API and two auxiliary functions ([Ref. 1](https://blog.patricktriest.com/analyzing-cryptocurrencies-python/)). Choosing the end date to be today, we have:
295 |
296 | poloniex = 'https://poloniex.com/public?command=returnChartData&currencyPair={}&start={}&end={}&period={}'
297 | start = datetime.strptime('2015-01-01', '%Y-%m-%d') # get data from the start of 2015
298 | end = datetime.now()
299 | period = 86400 # day in seconds
300 |
301 | def get_crypto_data(poloniex_pair):
302 | data_df = af.get_json_data(poloniex.format(poloniex_pair,
303 | start.timestamp(),
304 | end.timestamp(),
305 | period),
306 | poloniex_pair)
307 | data_df = data_df.set_index('date')
308 | return data_df
309 |
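  | # A hypothetical sketch of what `af.get_json_data` might do (the real helper lives
  | # in `aux_functions`): download a JSON time series and return it as a DataFrame.
  | def get_json_data_sketch(json_url, name):
  |     print('Downloading {}...'.format(name))
  |     data_df = pd.DataFrame(json.loads(requests.get(json_url).content))
  |     data_df['date'] = pd.to_datetime(data_df['date'], unit='s')
  |     return data_df
  |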
310 | lst_ac = ['ETH','LTC','XRP','ETC','STR','DASH','SC','XMR','XEM']
311 | len(lst_ac)
312 | ac_data = {}
313 | for a in lst_ac:
314 | ac_data[a] = get_crypto_data('BTC_{}'.format(a))
315 |
316 | lst_df = []
317 | for el in lst_ac:
318 | print('Altcoin:',el)
319 | ac_data[el].head()
320 | lst_df.append(ac_data[el])
321 |
322 | lst_df[0].head()
323 |
--------------------------------------------------------------------------------
/all-scripts/painters.py:
--------------------------------------------------------------------------------
1 | import os
  | import warnings
  | import numpy as np
  | import matplotlib.pyplot as plt
  | import tensorflow as tf
2 | from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
3 | from keras.models import Sequential
4 | from keras.preprocessing import image
5 | from keras.layers import Dropout, Flatten, Dense
6 | from keras import applications
7 | from keras.utils.np_utils import to_categorical
8 | from keras import applications
9 | from keras.applications.imagenet_utils import preprocess_input
10 | from keras.applications.imagenet_utils import decode_predictions
11 | import math, cv2
12 |
13 | folder_train = './train_toy_3/'
14 | folder_test = './test_toy_3/'
15 |
16 | datagen = ImageDataGenerator(
17 | featurewise_center=True,
18 | featurewise_std_normalization=True,
19 | rotation_range=0.15,
20 | width_shift_range=0.2,
21 | height_shift_range=0.2,
22 | rescale = 1./255,
23 | shear_range=0.2,
24 | zoom_range=0.2,
25 | horizontal_flip=True,
26 | fill_mode='nearest')
27 |
28 | from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
29 | from keras.models import Sequential
30 | from keras.layers import Conv2D, MaxPooling2D
31 | from keras.layers import Activation, Dropout, Flatten, Dense
32 | from keras import backend as K
33 | from keras.callbacks import EarlyStopping, Callback
34 | K.image_data_format()  # returns 'channels_last' for the TensorFlow backend
35 | from keras import applications
36 | from keras.utils.np_utils import to_categorical
37 | import math, cv2
38 |
39 | ## Defining the new size of the image
40 |
41 | img_width, img_height = 120,120
42 |
43 | if K.image_data_format() == 'channels_first':
44 | input_shape = (3, img_width, img_height)
45 | print('Theano Backend')
46 | else:
47 | input_shape = (img_width, img_height, 3)
48 | print('TensorFlow Backend')
49 |
50 | input_shape
51 |
52 | nb_train_samples = 0
53 | for p in range(len(os.listdir(os.path.abspath(folder_train)))):
54 | nb_train_samples += len(os.listdir(os.path.abspath(folder_train) +'/'+ os.listdir(
55 | os.path.abspath(folder_train))[p]))
56 | nb_train_samples
57 |
58 | nb_test_samples = 0
59 | for p in range(len(os.listdir(os.path.abspath(folder_test)))):
60 | nb_test_samples += len(os.listdir(os.path.abspath(folder_test) +'/'+ os.listdir(
61 | os.path.abspath(folder_test))[p]))
62 |
63 | train_data_dir = os.path.abspath(folder_train) # folder containing training set already subdivided
64 | validation_data_dir = os.path.abspath(folder_test) # folder containing test set already subdivided
65 | nb_train_samples = nb_train_samples
66 | nb_validation_samples = nb_test_samples
67 | epochs = 100
68 | batch_size = 16 # batch_size = 16
69 | num_classes = len(os.listdir(os.path.abspath(folder_train)))
70 | print('The painters are',os.listdir(os.path.abspath(folder_train)))
71 |
72 | ### Class for early stopping
73 |
74 | # rdcolema
75 | class EarlyStoppingByLossVal(Callback):
76 | """Custom class to set a val loss target for early stopping"""
77 | def __init__(self, monitor='val_loss', value=0.45, verbose=0):
78 | super(EarlyStoppingByLossVal, self).__init__()
79 | self.monitor = monitor
80 | self.value = value
81 | self.verbose = verbose
82 |
83 | def on_epoch_end(self, epoch, logs={}):
84 | current = logs.get(self.monitor)
85 | if current is None:
86 |     warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
  |     return  # avoid comparing None against the threshold below
87 |
88 | if current < self.value:
89 | if self.verbose > 0:
90 | print("Epoch %05d: early stopping THR" % epoch)
91 | self.model.stop_training = True
92 | early_stopping = EarlyStopping(monitor='val_loss', patience=10, mode='auto')
93 |
94 | top_model_weights_path = 'bottleneck_fc_model.h5'
95 |
96 | ### Creating InceptionV3 model
97 |
98 | from keras.applications.inception_v3 import InceptionV3
99 | model = applications.InceptionV3(include_top=False, weights='imagenet')
100 |
101 | model.summary()  # prints the InceptionV3 architecture (summary() itself returns None)
104 |
105 | ### Training and running images on InceptionV3
106 |
107 | datagen = ImageDataGenerator(rescale=1. / 255)
108 |
109 | generator = datagen.flow_from_directory(
110 | train_data_dir,
111 | target_size=(img_width, img_height),
112 | batch_size=batch_size,
113 | class_mode=None,
114 | shuffle=False)
115 |
116 | nb_train_samples = len(generator.filenames)
117 | num_classes = len(generator.class_indices)
118 | predict_size_train = int(math.ceil(nb_train_samples / batch_size))
119 | print('Number of training samples:',nb_train_samples)
120 | print('Number of classes:',num_classes)
121 |
122 |
123 | bottleneck_features_train = model.predict_generator(generator, predict_size_train) # these are numpy arrays
124 |
125 | bottleneck_features_train[0].shape
126 | bottleneck_features_train.shape
127 |
128 |
129 | np.save('bottleneck_features_train.npy', bottleneck_features_train)
130 |
131 |
132 | generator = datagen.flow_from_directory(
133 | validation_data_dir,
134 | target_size=(img_width, img_height),
135 | batch_size=batch_size,
136 | class_mode=None,
137 | shuffle=False)
138 |
139 | nb_validation_samples = len(generator.filenames)
140 |
141 | predict_size_validation = int(math.ceil(nb_validation_samples / batch_size))
142 | print('Number of testing samples:',nb_validation_samples)
143 |
144 | bottleneck_features_validation = model.predict_generator(
145 | generator, predict_size_validation)
146 |
147 | np.save('bottleneck_features_validation.npy', bottleneck_features_validation)
148 |
149 | ### Training the fully-connected network (the top-model)
150 |
151 |
152 | datagen_top = ImageDataGenerator(rescale=1./255)
153 | generator_top = datagen_top.flow_from_directory(
154 | train_data_dir,
155 | target_size=(img_width, img_height),
156 | batch_size=batch_size,
157 | class_mode='categorical',
158 | shuffle=False)
159 |
160 | nb_train_samples = len(generator_top.filenames)
161 | num_classes = len(generator_top.class_indices)
162 |
163 |
164 | train_data = np.load('bottleneck_features_train.npy')
165 |
166 | ## Converting training data into vectors of categories:
167 |
168 | train_labels = generator_top.classes
169 | print('Classes before dummification:',train_labels)
170 | train_labels = to_categorical(train_labels, num_classes=num_classes)
171 | print('Classes after dummification:\n\n',train_labels)
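  | # As a toy illustration (not from this dataset), to_categorical one-hot encodes:
  | # to_categorical([0, 2, 1], num_classes=3) -> [[1,0,0], [0,0,1], [0,1,0]]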
172 |
173 | ## Again repeating the process with the validation data:
174 |
175 | generator_top = datagen_top.flow_from_directory(
176 | validation_data_dir,
177 | target_size=(img_width, img_height),
178 | batch_size=batch_size,
179 | class_mode=None,
180 | shuffle=False)
181 |
182 | nb_validation_samples = len(generator_top.filenames)
183 |
184 | validation_data = np.load('bottleneck_features_validation.npy')
185 |
186 | validation_labels = generator_top.classes
187 | validation_labels = to_categorical(validation_labels, num_classes=num_classes)
188 |
189 | ### Building the small FL model using bottleneck features as input
190 |
191 | model = Sequential()
192 | model.add(Flatten(input_shape=train_data.shape[1:]))
193 | # model.add(Dense(1024, activation='relu'))
194 | # model.add(Dropout(0.5))
195 | model.add(Dense(512, activation='relu'))
196 | model.add(Dropout(0.5))
197 | model.add(Dense(256, activation='relu'))
198 | model.add(Dropout(0.5))
199 | model.add(Dense(128, activation='relu'))
200 | model.add(Dropout(0.5))
201 | model.add(Dense(64, activation='relu'))
202 | model.add(Dropout(0.5))
203 | model.add(Dense(32, activation='relu'))
204 | model.add(Dropout(0.5))
205 | model.add(Dense(16, activation='relu'))
206 | model.add(Dropout(0.5))
207 | model.add(Dense(8, activation='relu')) # Not valid for minimum = 500
208 | model.add(Dropout(0.5))
209 | # model.add(Dense(4, activation='relu')) # Not valid for minimum = 500
210 | # model.add(Dropout(0.5))
211 | model.add(Dense(num_classes, activation='sigmoid'))
212 |
213 | model.compile(optimizer='Adam',
214 | loss='binary_crossentropy', metrics=['accuracy'])
215 |
216 | history = model.fit(train_data, train_labels,
217 | epochs=epochs,
218 | batch_size=batch_size,
219 | validation_data=(validation_data, validation_labels))
220 |
221 | model.save_weights(top_model_weights_path)
222 |
223 | (eval_loss, eval_accuracy) = model.evaluate(
224 | validation_data, validation_labels,
225 | batch_size=batch_size, verbose=1)
226 |
227 | print("[INFO] accuracy: {:.2f}%".format(eval_accuracy * 100))
228 | print("[INFO] Loss: {}".format(eval_loss))
229 |
230 | train_data.shape[1:]
231 |
232 | plt.figure(1)
233 |
234 | # summarize history for accuracy
235 |
236 | plt.subplot(211)
237 | plt.plot(history.history['acc'])
238 | plt.plot(history.history['val_acc'])
239 | plt.title('model accuracy')
240 | plt.ylabel('accuracy')
241 | plt.xlabel('epoch')
242 | #pylab.ylim([0.4,0.68])
243 | plt.legend(['train', 'test'], loc='upper left')
244 |
245 | ### Plotting the loss history
246 |
247 | import pylab
248 | plt.subplot(212)
249 | plt.plot(history.history['loss'])
250 | plt.plot(history.history['val_loss'])
251 | plt.title('model loss')
252 | plt.ylabel('loss')
253 | plt.xlabel('epoch')
254 | plt.legend(['train', 'test'], loc='upper left')
255 | pylab.xlim([0,60])
256 | # pylab.ylim([0,1000])
257 | plt.show()
258 |
259 | import matplotlib.pyplot as plt
260 | import pylab
261 | %matplotlib inline
262 | %config InlineBackend.figure_format = 'retina'
263 | fig = plt.figure()
264 | plt.plot(history.history['loss'])
265 | plt.plot(history.history['val_loss'])
266 | plt.title('Classification Model Loss')
267 | plt.xlabel('Epoch')
268 | plt.ylabel('Loss')
269 | pylab.xlim([0,60])
270 | plt.legend(['Train', 'Validation'], loc='upper right')
271 | fig.savefig('loss.png')
272 | plt.show();
273 |
274 | import matplotlib.pyplot as plt
275 | %matplotlib inline
276 | %config InlineBackend.figure_format = 'retina'
277 |
278 | fig = plt.figure(figsize=(15,15))
279 | plt.plot(history.history['acc'])
280 | plt.plot(history.history['val_acc'])
282 | plt.title('Classification Model Accuracy')
283 | plt.xlabel('Epoch')
284 | plt.ylabel('Accuracy')
285 | pylab.xlim([0,100])
286 | plt.legend(['Train', 'Validation'], loc='lower right')
287 | fig.savefig('acc.png')
288 | plt.show();
289 |
290 |
291 | ### Predictions
292 |
293 | os.listdir(os.path.abspath('train_toy_3/Pierre-Auguste_Renoir'))
294 |
295 | image_path = os.path.abspath('test_toy_3/Pierre-Auguste_Renoir/91485.jpg')
296 | orig = cv2.imread(image_path)
297 | image = load_img(image_path, target_size=(120,120))
298 | image
299 | image = img_to_array(image)
300 | image
301 |
302 | image = image / 255.
303 | image = np.expand_dims(image, axis=0)
304 | image
305 |
306 | # build the InceptionV3 network (the commented line shows the VGG16 alternative)
307 | #model = applications.VGG16(include_top=False, weights='imagenet')
308 | model = applications.InceptionV3(include_top=False, weights='imagenet')
309 | # get the bottleneck prediction from the pre-trained InceptionV3 model
310 | bottleneck_prediction = model.predict(image)
311 |
312 | # build top model
313 | model = Sequential()
314 | model.add(Flatten(input_shape=train_data.shape[1:]))
315 | # model.add(Dense(1024, activation='relu'))
316 | # model.add(Dropout(0.5))
317 | model.add(Dense(512, activation='relu'))
318 | model.add(Dropout(0.5))
319 | model.add(Dense(256, activation='relu'))
320 | model.add(Dropout(0.5))
321 | model.add(Dense(128, activation='relu'))
322 | model.add(Dropout(0.5))
323 | model.add(Dense(64, activation='relu'))
324 | model.add(Dropout(0.5))
325 | model.add(Dense(32, activation='relu'))
326 | model.add(Dropout(0.5))
327 | model.add(Dense(16, activation='relu'))
328 | model.add(Dropout(0.5))
329 | model.add(Dense(8, activation='relu')) # Not valid for minimum = 500
330 | model.add(Dropout(0.5))
331 | # model.add(Dense(4, activation='relu')) # Not valid for minimum = 500
332 | # model.add(Dropout(0.5))
333 | model.add(Dense(num_classes, activation='sigmoid'))
334 |
335 | model.load_weights(top_model_weights_path)
336 |
337 | # use the bottleneck prediction on the top model to get the final classification
338 | class_predicted = model.predict_classes(bottleneck_prediction)
339 |
340 | inID = class_predicted[0]
341 |
342 | class_dictionary = generator_top.class_indices
343 |
344 | inv_map = {v: k for k, v in class_dictionary.items()}
345 |
346 | label = inv_map[inID]
347 |
348 | # get the prediction label
349 | print("Image ID: {}, Label: {}".format(inID, label))
350 |
351 | # display the predictions with the image
352 | cv2.putText(orig, "Predicted: {}".format(label), (10, 30), cv2.FONT_HERSHEY_PLAIN, 1.5, (43, 99, 255), 2)
353 |
354 | cv2.imshow("Classification", orig)
355 | cv2.waitKey(0)
356 | cv2.destroyAllWindows()
--------------------------------------------------------------------------------
/all-scripts/transfer-learning.py:
--------------------------------------------------------------------------------
1 | # Transfer Learning with pre-trained models
2 |
3 | import numpy as np
4 | import keras
5 | from keras.preprocessing import image
6 | from keras.applications.inception_v3 import InceptionV3
7 | from keras.applications.inception_v3 import preprocess_input, decode_predictions
8 |
9 | model = keras.applications.inception_v3.InceptionV3()
10 |
11 | img = image.load_img("lionNN.jpg", target_size=(299, 299))
12 |
13 |
14 | ### Converting to an array and changing the dimension:
15 |
16 | x = image.img_to_array(img)
17 | print(x.shape)
18 | x = np.expand_dims(x, axis=0)
19 | print(x.shape)
20 |
21 | ### Preprocessing and making predictions
22 |
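  | # For InceptionV3, preprocess_input rescales raw pixel values from [0, 255] to the
  | # [-1, 1] range that the pre-trained ImageNet weights expect.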
23 | x = preprocess_input(x)
24 |
25 | predictions = model.predict(x)
26 |
27 |
28 | predicted_classes = decode_predictions(predictions, top=9)
29 |
30 | for imagenet_id, name, likelihood in predicted_classes[0]:
31 | print(name,':', likelihood)
32 |
33 |
--------------------------------------------------------------------------------
/bitcoin/README.md:
--------------------------------------------------------------------------------
1 | ## Deep Learning and Bitcoins [[view code]](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/bitcoin/notebooks/deep-learning-LSTM-bitcoins.ipynb)
2 |   
3 |
4 | **For the best viewing experience use [nbviewer](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/bitcoin/notebooks/deep-learning-LSTM-bitcoins.ipynb).**
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 | Preamble •
14 | Goal •
15 | Importing libraries •
16 | Data •
17 | Data Wrangling •
18 | Train/Test split •
19 | Building LSTM •
20 | Train and test loss •
21 | Predicted and True Values •
22 | Altcoins •
23 | Bird's eye view of the underlying mathematics •
24 | To Dos
25 |
26 |
27 |
28 | ## Preamble
29 |
30 | There are several important questions about cryptocurrency speculation that haven't been completely understood yet. A few examples are:
31 | - What are the causes of the gigantic spikes and dips?
32 | - Are the dynamics of different altcoins related?
33 | - Is it possible to predict future prices?
34 |
35 |
36 | ## Goal
37 |
38 | My goal here is to build predictive models for the price of Bitcoins and other cryptocurrencies. To accomplish that, I will use Long Short-Term Memory recurrent neural networks (LSTMs). For a thorough introduction to Bitcoins you can check this [book](https://github.com/bitcoinbook/bitcoinbook). For LSTM [this](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) is a good source.
39 |
40 |
41 | ## Importing libraries
42 |
43 | Let us now import the necessary libraries.
44 |
45 | ```
46 | import numpy as np
47 | import pandas as pd
48 | import statsmodels.api as sm
49 | import aux_func as af
50 | from scipy import stats
51 | import keras
52 | import pickle
53 | import quandl
54 | from keras.models import Sequential
55 | from keras.layers import Activation, Dense,LSTM,Dropout
56 | from sklearn.metrics import mean_squared_error
57 | from math import sqrt
58 | from random import randint
59 | from keras import initializers
60 | import datetime
61 | from datetime import datetime
62 | from matplotlib import pyplot as plt
63 | %matplotlib inline
64 |
65 | from IPython.core.interactiveshell import InteractiveShell
66 | InteractiveShell.ast_node_interactivity = "all" # see the value of multiple statements at once.
67 | pd.set_option('display.max_columns', None)
68 | ```
69 |
70 |
71 | ## Data
72 |
73 | There are several ways to fetch historical data for Bitcoins (or any other cryptocurrency). Here are a few examples:
74 |
75 | #### Flat files
76 | If a flat file is already available, we can just read it using `pandas`. For example, one of the datasets that will be used in this notebook can be found [here](https://www.kaggle.com/mczielinski/bitcoin-historical-data/data). In this case we simply use:
77 | ```
78 | df = pd.read_csv('bitcoin_data.csv')
79 | ```
80 |
81 |
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 | #### Retrieve Data from Quandl's API
90 | Another possibility is to retrieve Bitcoin pricing data using [Quandl's free Bitcoin API](https://blog.quandl.com/api-for-bitcoin-data). For example, to obtain the daily bitcoin exchange rate (BTC vs. USD) on Bitstamp we use the code snippet below. The function `quandl_data` is inside the library `af`. Here we use:
91 | ```
92 | df_qdl = af.quandl_data(quandl_id)
93 | ```
94 | where e.g.
95 | ```
96 | quandl_id = 'BCHARTS/BITSTAMPUSD'
97 | ```
98 |
99 |
100 |
101 |
102 | #### Retrieve Data from cryptocompare.com
103 | Another possibility is to retrieve data from [cryptocompare](https://www.cryptocompare.com/). In this case, we use the `requests` packages to make a `.get` request (the object `res` is a `Response` object) such as:
104 |
105 | res = requests.get(URL)
106 |
107 |
108 |
109 |
110 |
111 |
112 |
113 |
114 |
115 |
116 |
117 |
118 | ## Data Wrangling
119 |
120 | ### Converting dates to daily means:
121 | Though Timestamps are in Unix time, this is easily taken care of using Python's `datetime` library:
122 | - `pd.to_datetime` converts the argument to `datetime`
123 | - `Series.dt.date` gives a numpy array of Python `datetime.date` objects
124 |
125 | We then drop the `Timestamp` column. The repeated dates occur simply because the data is collected minute-to-minute. Taking the daily mean:
126 | ```
127 | af.daily_weighted_prices(df,'date','timestamp','s').head()
128 | ```
129 |
130 | ## Train/Test split
131 | We now need to split our dataset. We need to fit the model using the training data and test the model with the test data. We must choose the proportion of rows that will constitute the training set.
132 | ```
133 | train_size = 0.80
134 | training_rows = train_size*len(daily)
135 | train = daily[:int(training_rows)]
136 | test = daily[int(training_rows):]
137 | ```
138 | We can automatize the split using a simple function for the library `aux_func`:
139 | ```
140 | train = af.train_test_split(daily, test_size=0.2)[0]
141 | test = af.train_test_split(daily, test_size=0.2)[1]
142 | ```
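  |
  | Since this is a time series, the split must preserve temporal order (no shuffling). A minimal sketch of what such a helper might look like (the actual implementation lives in `aux_func`):
  | ```
  | def train_test_split(series, test_size=0.2):
  |     split_row = int(len(series) * (1 - test_size))
  |     return series[:split_row], series[split_row:]
  | ```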
143 | We can check for trends, seasonality and residue in the data using `Scipy`. Joining train and test sets and using the functions `trend`, `seasonal`, `residue` and `plot_comp` from `af`:
144 | ```
145 | dataset = pd.concat([train, test]).reset_index()
146 | dataset['date'] = pd.to_datetime(dataset['date'])
147 | dataset = dataset.set_index('date')
148 |
149 | af.plot_comp(af.trend(dataset,'weighted_price'),
150 | af.seasonal(dataset,'weighted_price'),
151 | af.residue(dataset,'weighted_price'),
152 | af.actual(dataset,'weighted_price'))
153 | ```
154 |
155 |
156 |
157 |
158 |
159 | To run the LSTM in Keras we must reshape our datasets:
160 | ```
161 | train = np.reshape(train, (len(train), 1))
162 | test = np.reshape(test, (len(test), 1))
163 | ```
164 | We also use `MinMaxScaler` to scale and translate features to a given range (between zero and one).
165 | ```
166 | from sklearn.preprocessing import MinMaxScaler
167 | scaler = MinMaxScaler()
168 | train = scaler.fit_transform(train)
169 | test = scaler.transform(test)
170 | ```
171 | After that we reshape once more:
172 | ```
173 | X_train, Y_train = af.lb(train, 1)
174 | X_test, Y_test = af.lb(test, 1)
175 | X_train = np.reshape(X_train, (len(X_train), 1, X_train.shape[1]))
176 | X_test = np.reshape(X_test, (len(X_test), 1, X_test.shape[1]))
177 | ```
178 | The function `lb` "looks back," defining a window of past values that will be used as predictors for
179 | future values.
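  |
  | A minimal sketch of what `lb` might look like with a look-back window of size `n` (the actual helper lives in `aux_func`):
  | ```
  | def lb(data, n=1):
  |     X, Y = [], []
  |     for i in range(len(data) - n):
  |         X.append(data[i:i + n, 0])   # window of past values
  |         Y.append(data[i + n, 0])     # next value to predict
  |     return np.array(X), np.array(Y)
  | ```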
180 |
181 |
182 | ## Building LSTMs
183 |
184 | ### Overview of Recurrent Neural Networks (RNN)
185 |
186 | Recurrent Neural Networks (RNNs) are designed to handle sequential inputs (in contrast to multilayer perceptrons or convolutional neural nets). When an input is fed into the network, the hidden state computed from the *previous* input is supplied along with it. Since the hidden state has essentially "captured" information about the past, the RNN can be said to have "memory": the output is influenced not just by the current input, but by the full history of past inputs. The figure below, borrowed from [here](http://colah.github.io/posts/2015-08-Understanding-LSTMs/), makes this clearer:
187 |
188 |
189 |
190 |
191 |
192 | Following [Karpathy](http://karpathy.github.io/2015/05/21/rnn-effectiveness/), we can write a RNN as the class below:
193 | ```
194 | class RNN:
195 | def step(self, x):
196 | self.h = np.tanh(np.dot(self.W_hh, self.h) + np.dot(self.W_xh, x))
197 | y = np.dot(self.W_hy, self.h)
198 | return y
199 | ```
200 | and
201 | ```
202 | rnn = RNN()
203 | y = rnn.step(x)
204 | ```
205 | Inside the class, the first line of `step` updates the hidden state and the following line computes the output. In the instantiation above, `x` is the input vector and `y` is the output. The three matrices `W_hh`, `W_xh`, `W_hy` are the RNN parameters. For more details see [Karpathy](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).
206 |
207 |
208 | ### LSTM networks
209 |
210 | LSTM networks are a type of RNN that is especially handy because it avoids the vanishing- and exploding-gradient problems during backpropagation, allowing large networks to be trained successfully.
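  |
  | For reference, the standard LSTM update equations are (σ is the logistic sigmoid and ⊙ denotes elementwise multiplication):
  | ```
  | f_t = σ(W_f · [h_{t-1}, x_t] + b_f)      # forget gate
  | i_t = σ(W_i · [h_{t-1}, x_t] + b_i)      # input gate
  | c~_t = tanh(W_c · [h_{t-1}, x_t] + b_c)  # candidate cell state
  | c_t = f_t ⊙ c_{t-1} + i_t ⊙ c~_t         # cell-state update
  | o_t = σ(W_o · [h_{t-1}, x_t] + b_o)      # output gate
  | h_t = o_t ⊙ tanh(c_t)                    # new hidden state
  | ```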
211 |
212 |
213 |
214 |
215 |
216 |
217 | ### LSTMs in action
218 |
219 | Building, compiling and fitting the network:
220 | ```
221 | model = Sequential()
222 | model.add(LSTM(256, return_sequences=True, input_shape=(X_train.shape[1], X_train.shape[2])))
223 | model.add(LSTM(256))
224 | model.add(Dense(1))
225 | model.compile(loss='mean_squared_error', optimizer='adam')
226 | history = model.fit(X_train, Y_train, epochs=200, batch_size=32,
227 | validation_data=(X_test, Y_test))
228 | ```
229 |
230 |
231 |
232 |
233 |
234 | The topology of the network is quite simple (by choice):
235 |
236 |
237 |
238 |
239 |
240 |
241 |
242 | ## Train and test loss
243 |
244 | While the model is being trained, the train and test losses vary as shown in the figure below. The package `plotly.graph_objs` is extremely useful. The function `t( )` inside the argument is defined in `aux_func`:
245 |
246 |
247 |
248 |
249 |
250 |
251 | ## Predicted and True Values
252 |
253 | The predicted and true values are shown below. The RMSE is too high, which can be seen just by glancing at the curves. The function `prediction_vs_true( )` inside the argument is defined in the `aux_func` library.
254 |
255 | ```
256 | layout = dict(title = 'True prices vs predicted prices',
257 | xaxis = dict(title = 'Day'), yaxis = dict(title = 'USD'))
258 | fig = dict(data=[af.prediction_vs_true(Y_test_new_inv,'True'),af.prediction_vs_true(pred_inv_new,'Prediction')],
259 | layout=layout)
260 | py.iplot(fig, filename='results')
261 | ```
262 |
263 |
264 |
265 |
266 |
267 |
268 | Repeating the analysis for price variations:
269 |
270 | ```
271 | df = pd.DataFrame(data={'prediction': y_predict.tolist(), 'testing': y_testing})
272 |
273 | pct_variation = df.pct_change()[1:]
274 | pct_variation = pct_variation[1:]
275 |
276 | layout = dict(title = 'True prices vs predicted prices variation (%)',
277 |               xaxis = dict(title = 'Day'), yaxis = dict(title = '%'))
278 | fig = dict(data=[af.prediction_vs_true(pct_variation['prediction'],'Prediction')
279 | ,af.prediction_vs_true(pct_variation['testing'],'True')],
280 | layout=layout)
281 | py.iplot(fig, filename='results')
282 | ```
283 |
284 |
285 |
286 |
287 |
288 |
289 | ## Altcoins
290 |
291 | Using the Poloniex API and two auxiliary functions ([Ref. 1](https://blog.patricktriest.com/analyzing-cryptocurrencies-python/)). The Poloniex URL template, which I call `poloniex`, is `https://poloniex.com/public?command=returnChartData&currencyPair={}&start={}&end={}&period={}`. Choosing the end date to be today, we have:
292 |
293 | ```
294 | start = datetime.strptime('2015-01-01', '%Y-%m-%d') # get data from the start of 2015
295 | end = datetime.now()
296 | period = 86400 # day in seconds
297 | ```
298 | ```
299 | def get_crypto_data(poloniex_pair):
300 | data_df = af.get_json_data(poloniex.format(poloniex_pair,
301 | start.timestamp(),
302 | end.timestamp(),
303 | period),
304 | poloniex_pair)
305 | data_df = data_df.set_index('date')
306 | return data_df
307 |
308 | lst_ac = ['ETH','LTC','XRP','ETC','STR','DASH','SC','XMR','XEM']
309 | len(lst_ac)
310 | ac_data = {}
311 | for a in lst_ac:
312 | ac_data[a] = get_crypto_data('BTC_{}'.format(a))
313 | ```
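For reference, the auxiliary `af.get_json_data` can be implemented along these lines (a sketch following Ref. 1; the pickle-based caching is an assumption, not necessarily the repo's actual code):
```
import os
import pandas as pd
import requests

def get_json_data(json_url, cache_name):
    # reuse a cached copy of the series if we already downloaded it
    cache_path = '{}.pkl'.format(cache_name)
    if os.path.exists(cache_path):
        return pd.read_pickle(cache_path)
    df = pd.DataFrame(requests.get(json_url).json())
    df.to_pickle(cache_path)  # cache for subsequent runs
    return df
```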
314 |
315 |
316 | ## Bird's eye view of the underlying mathematics
317 | In preparation.
318 |
319 |
320 | ## To Dos
321 |
322 | - [ ] Build portfolio of cryptocurrencies
323 | - [ ] Tweak layers of network to improve RMSE
324 | - [ ] Use other machine learning algorithms (to make inferences and not just predictions)
325 | - [ ] Include other predictors
326 | - [ ] Decrease RMSE
327 | - [ ] Use GRU
328 | - [ ] Explain LSTM in more detail
329 | - [ ] Include pdf with mathematics in the repo
330 | - [ ] Eliminate stationarity and repeat analysis
331 | - [ ] Use `statsmodels` to obtain results of tests such as the (augmented) Dickey-Fuller test
332 |
--------------------------------------------------------------------------------
/bitcoin/data/123.csv:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/bitcoin/images/123.jpg:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/bitcoin/images/LSTM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/LSTM.png
--------------------------------------------------------------------------------
/bitcoin/images/all_coins.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/all_coins.png
--------------------------------------------------------------------------------
/bitcoin/images/bitcoin.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/bitcoin.png
--------------------------------------------------------------------------------
/bitcoin/images/btc-orange1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/btc-orange1.jpg
--------------------------------------------------------------------------------
/bitcoin/images/components.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/components.png
--------------------------------------------------------------------------------
/bitcoin/images/daily_BTC.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/daily_BTC.png
--------------------------------------------------------------------------------
/bitcoin/images/df.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/df.png
--------------------------------------------------------------------------------
/bitcoin/images/krakenUSD.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/krakenUSD.png
--------------------------------------------------------------------------------
/bitcoin/images/lstm_summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/lstm_summary.png
--------------------------------------------------------------------------------
/bitcoin/images/model_running.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/model_running.png
--------------------------------------------------------------------------------
/bitcoin/images/pred-vs-true.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/pred-vs-true.png
--------------------------------------------------------------------------------
/bitcoin/images/prediction.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/prediction.png
--------------------------------------------------------------------------------
/bitcoin/images/price_variations.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/price_variations.png
--------------------------------------------------------------------------------
/bitcoin/images/rnn2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/rnn2.png
--------------------------------------------------------------------------------
/bitcoin/images/trainingloss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/bitcoin/images/trainingloss.png
--------------------------------------------------------------------------------
/bitcoin/img.png:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/bitcoin/scripts/123.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/image-recognition-tutorial/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/image-recognition-tutorial/notebooks/123:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/images/keras.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/images/keras.jpg
--------------------------------------------------------------------------------
/images/matplotlib.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/images/matplotlib.png
--------------------------------------------------------------------------------
/images/pandas.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/images/pandas.png
--------------------------------------------------------------------------------
/images/scikitlearn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/images/scikitlearn.png
--------------------------------------------------------------------------------
/images/tf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/images/tf.png
--------------------------------------------------------------------------------
/keras-tf-tutorial/README.md:
--------------------------------------------------------------------------------
1 | ## Digit Recognition with Tensorflow and Keras [[view code]](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/keras-tf-tutorial/notebooks/neural-nets-digits-mnist.ipynb)
2 |       
3 |
4 | **The code is available [here](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/keras-tf-tutorial/notebooks/neural-nets-digits-mnist.ipynb) or by clicking on the [view code] link above.**
5 |
6 |
7 |
8 |
9 |
10 |
11 |
13 |
14 |
15 |
16 |
17 | Introduction •
18 | Steps •
19 | Loading MNIST dataset •
20 | Keras Convolutional Network •
21 | TensorFlow fully-connected network
22 |
23 |
24 |
25 |
26 | ## Introduction
27 |
28 | The MNIST dataset contains 70,000 images of digits taken from several scanned documents (normalized in size and centered). Each image is seen by the computer as a 28 $\times$ 28 array of pixels, each holding a value in [0,255] that describes the pixel intensity at that point. Given this array of numbers, the computer outputs the probability of the image belonging to each class.
29 |
30 | In this notebook I will build and train two types of neural network using the MNIST set, namely:
31 | - A fully-connected neural network using `TensorFlow`
32 | - A `Keras` convolutional network
33 |
34 |
35 |
36 |
37 |
38 | ## Steps
39 |
40 | - We first load the data
41 | - Then we split the feature matrix `X` and the target vector `y` into `train` and `test` subsets.
42 | - This is followed by data preprocessing where:
43 | - `X` is normalized, dividing each pixel value by the maximum value of a pixel (255)
44 | - Since this is a multi-class classification problem, `Keras` needs `y` to be a one-hot encoded matrix
45 | - Create the neural network:
46 | - A `Keras` convolutional network
47 | - A `TensorFlow` fully-connected network
48 | - Since we have multi-class classification we will use a `softmax` activation function on the output layer.
49 | - Regularization and dropout can be used to improve performance.
50 | - We then train the network.
51 | - After training, we make predictions (numbers in the range 0-9).
52 |
53 |
54 | ## Imports
55 |
56 | ```
57 | import numpy as np
58 | import pandas as pd
59 | from keras.datasets import mnist
60 | from keras.models import Sequential
61 | from keras.layers import Dense
62 | from keras.layers import Dropout
63 | from keras.layers import Flatten
64 | from keras.layers.convolutional import Conv2D
65 | from keras.layers.convolutional import MaxPooling2D
66 | from keras.utils import np_utils
67 | from keras import backend as K
68 | K.set_image_dim_ordering('th')
69 | import matplotlib.pyplot as plt
70 | %matplotlib inline
71 | seed = 42
72 | np.random.seed(seed)
73 | from IPython.core.interactiveshell import InteractiveShell
74 | InteractiveShell.ast_node_interactivity = "all" # see the value of multiple statements at once.
75 | ```
76 |
77 | ## Loading MNIST dataset
78 |
79 | Loading the MNIST dataset and looking at an image:
80 | ```
81 | (X_train, y_train), (X_test, y_test) = mnist.load_data()
82 | plt.imshow(X_train[10],cmap = plt.get_cmap('gray'))
83 | plt.show()
84 | ```
85 |
86 |
88 |
89 |
90 |
91 | Now we reshape and normalize `X` and transform `y` into a one-hot encoded matrix:
92 |
93 | ```
94 | X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
95 | X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
96 |
97 | X_train.shape
98 |
99 | X_train = X_train/255
100 | X_test = X_test/255
101 | y_train = np_utils.to_categorical(y_train)
102 | y_test = np_utils.to_categorical(y_test)
103 | ```
104 |
105 | ## `Keras` Convolutional Network
106 |
107 | Roughly speaking, convolutional neural networks (CNNs) start from pixels, then detect edges, then corners, until a digit is recognized. Let us first build a small CNN and then a larger one:
108 |
109 | ### Building a small CNN:
110 |
111 |
112 | The function `cnn_model` below builds the following CNN:
113 |
114 | - We have a convolutional layer with 32 feature maps of size 5 $\times$ 5:
115 |
116 |       model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation ='relu'))
117 |
118 | - Then we have a pooling layer which takes the maximum over 2 $\times$ 2 patches.
119 | - A `Dropout` layer with a probability of 20%.
120 | - Then a `Flatten` layer is included.
121 | - Then we have a fully-connected layer containing 128 neurons and a `ReLU` activation.
122 | - Output layer.
123 |
124 | ```
125 | num_classes = 10
126 | def cnn_model():
127 | model = Sequential()
128 | model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation = 'relu'))
129 | model.add(MaxPooling2D(pool_size=(2, 2)))
130 | model.add(Dropout(0.2))
131 | model.add(Flatten())
132 | model.add(Dense(128, activation = 'relu'))
133 | model.add(Dense(num_classes, activation = 'softmax'))
134 | model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
135 | return model
136 | ```
137 | ```
138 | small_model = cnn_model()
139 | history = small_model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=200,verbose=2)
140 | scores = small_model.evaluate(X_test, y_test, verbose=2)
141 | print("CNN Error: %.2f%%" % (100-scores[1]*100))
142 | ```
143 | ```
144 | train_loss_small = history.history['loss']
145 | test_loss_small = history.history['val_loss']
146 | ```
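These loss histories can then be plotted, e.g. with `matplotlib` (a minimal sketch):
```
import matplotlib.pyplot as plt

# plot the train and test loss histories recorded above
plt.plot(train_loss_small, label='train')
plt.plot(test_loss_small, label='test')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
```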
147 |
148 |
150 |
151 |
152 |
153 | ### Building a larger CNN:
154 |
155 | Let us build another function, `cnn_model_large`. The topology now is:
156 |
157 | - A convolutional layer with 30 feature maps of size 5 $\times$ 5:
158 |
159 |       model.add(Conv2D(30, (5, 5), input_shape=(1, 28, 28), activation='relu'))
160 |
161 | - Then we have a pooling layer which takes the maximum over 2 $\times$ 2 patches.
162 | - Then a convolutional layer with 15 feature maps of size 3 $\times$ 3 is included.
163 | - The next step is to include a pooling layer that takes the maximum over 2 $\times$ 2 patches.
164 | - A `Dropout` layer with a probability of 20%.
165 | - Then a `Flatten` layer is included.
166 | - Then we have a fully-connected layer containing 128 neurons and a `ReLU` activation.
167 | - Finally we have a fully-connected layer with 50 neurons and `ReLU` activation.
168 | - Output layer.
169 |
170 | The following function builds the CNN:
171 | ```
172 | def cnn_model_large():
173 | model = Sequential()
174 | model.add(Conv2D(30, (5, 5), input_shape=(1, 28, 28), activation='relu'))
175 | model.add(MaxPooling2D(pool_size=(2, 2)))
176 | model.add(Conv2D(15, (3, 3), activation='relu'))
177 | model.add(MaxPooling2D(pool_size=(2, 2)))
178 | model.add(Dropout(0.2))
179 | model.add(Flatten())
180 | model.add(Dense(128, activation='relu'))
181 | model.add(Dense(50, activation='relu'))
182 | model.add(Dense(num_classes, activation='softmax'))
183 | model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
184 | return model
185 | ```
186 | ```
187 | large_model = cnn_model_large()
188 | large_history = large_model.fit(X_train, y_train, validation_data=(X_test, y_test),
189 | epochs=20, batch_size=200,verbose=2)
190 | large_scores = large_model.evaluate(X_test, y_test, verbose=2)
191 | print("CNN Error: %.2f%%" % (100-large_scores[1]*100))
192 | ```
193 | ```
194 | train_loss_large = large_history.history['loss']
195 | test_loss_large = large_history.history['val_loss']
196 | ```
197 |
198 |
200 |
201 |
202 |
203 |
204 | ## `TensorFlow` fully-connected network
205 |
206 | ### A few other packages are needed
207 | ```
208 | import tensorflow as tf
209 | from tensorflow.examples.tutorials.mnist import input_data
210 | mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
211 | import random
212 | ```
213 |
214 | ### Defining our session and initializing variables
215 |
216 | A session is essentially a computation graph waiting to be executed; the values of the variables will be fed in later on.
217 |
218 | ```
219 | sess = tf.Session()
220 | init = tf.initialize_all_variables()  # deprecated in later TF 1.x in favor of tf.global_variables_initializer()
221 | sess.run(init)
222 | ```
223 |
224 | ### Placeholders
225 |
226 | - `TensorFlow` uses placeholders which are variables into which data is fed.
227 | - Using the optional argument `None` allows for feeding any number of inputs of size 784 (for `x`) or 10 (for `y_`).
228 |
229 | ```
230 | x = tf.placeholder(tf.float32, shape=[None, 784])
231 | y_ = tf.placeholder(tf.float32, shape=[None, 10])
232 | ```
233 |
234 | ### Weights and bias
235 |
236 | The weight matrix and bias vector are introduced below: `W` is a 784 $\times$ 10 matrix and `b` is a vector of 10 components.
237 | ```
238 | W = tf.Variable(tf.zeros([784,10]))
239 | b = tf.Variable(tf.zeros([10]))
240 | ```
241 | ### Activation function and loss function
242 |
243 | - We use the `softmax` as our activation
244 | - The loss function is the cross-entropy:
245 |
246 |
247 | $H(y,\hat{y}) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j} y_j^{(i)} \log \hat{y}_j^{(i)}$
250 |
251 |
252 | where $m$ is the number of training examples and the predictions are the "hatted" variables.
253 |
254 | ```
255 | y = tf.nn.softmax(tf.matmul(x,W) + b)
256 | cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
257 | ```
258 |
259 | ### Training using gradient descent
260 |
261 | - We need to define the training method (GD) and variables for determining the accuracy.
262 |
263 | ```
264 | learning_rate = 0.05
265 | training = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
266 | prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
267 | acc = tf.reduce_mean(tf.cast(prediction, tf.float32))
268 | ```
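Note that the placeholder `x` expects flat 784-component vectors, so the `(N, 1, 28, 28)`-shaped arrays from the `Keras` section must be flattened before being fed (a small fix, assuming those arrays are reused here):
```
X_train = X_train.reshape(-1, 784)
X_test = X_test.reshape(-1, 784)
```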
269 | ```
270 | training_steps = 200
271 | for i in range(training_steps+1):
272 | sess.run(training, feed_dict={x: X_train, y_: y_train})
273 | if i%100 == 0:
274 | print('Training Step:' + str(i) + ' Accuracy = '
275 | + str(sess.run(acc, feed_dict={x: X_test, y_: y_test}))
276 | + ' Loss = ' + str(sess.run(cross_entropy, {x: X_train, y_: y_train})))
277 | sess.run(y, feed_dict={x: X_train})
278 | ```
279 | ```
280 | def display_compare(num):
281 | X_train, y_train = mnist.train.images[num,:].reshape(1,784),mnist.train.labels[num,:]
282 | label = y_train.argmax()
283 | plt.title('Prediction: %d Label: %d' % (sess.run(y, feed_dict={x: X_train}).argmax() , label))
284 | plt.imshow(X_train.reshape([28,28]))
285 | plt.show()
286 | ```
287 | ```
288 | for i in range(0,10):
289 | display_compare(i)
290 | ```
291 |
292 | ## To be continued
293 |
--------------------------------------------------------------------------------
/keras-tf-tutorial/data/123.csv:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/keras-tf-tutorial/images/Deep learning neural network.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/keras-tf-tutorial/images/Deep learning neural network.jpg
--------------------------------------------------------------------------------
/keras-tf-tutorial/images/MNIST_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/keras-tf-tutorial/images/MNIST_3.png
--------------------------------------------------------------------------------
/keras-tf-tutorial/images/cross_entropy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/keras-tf-tutorial/images/cross_entropy.png
--------------------------------------------------------------------------------
/keras-tf-tutorial/images/loss_large_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/keras-tf-tutorial/images/loss_large_model.png
--------------------------------------------------------------------------------
/keras-tf-tutorial/images/loss_small_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/keras-tf-tutorial/images/loss_small_model.png
--------------------------------------------------------------------------------
/painters-identification/README.md:
--------------------------------------------------------------------------------
1 | ## Painter Identification with Convolutional Neural Nets [[view code]](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/painters-identification/notebooks/capstone-models-final-model-building.ipynb)
2 |   
3 |
4 | **The code is available [here](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/painters-identification/notebooks/capstone-models-final-model-building.ipynb) or by clicking on the [view code] link above.**
5 |
6 |
7 |
8 |
9 |
11 |
12 |
13 |
14 |
15 | Goal •
16 | Challenges and Applications •
17 | Overview and Data •
18 | Preprocessing the data •
19 | Problems with small datasets •
20 | Using bottleneck features of Inception V3 •
21 | Creating and training using the InceptionV3 model •
22 | Training the fully-connected network •
23 | Plotting the accuracy and loss histories •
24 | Conclusions •
25 | To Dos
26 |
27 |
28 |
29 | ## Goal
30 |
31 | The goal of this project is to use Transfer Learning to build a Convolutional Neural Network (or ConvNet) to identify the artist of a given painting.
32 |
33 | ## Challenges and Applications
34 | There are several challenges inherent to this problem. Examples are:
35 | - Painters change styles over their careers (often drastically). Both paintings below are by Pablo Picasso but from different phases of his career.
36 |
37 |
38 |
39 |
40 |
41 |
42 | - The task involves more than just object recognition or face recognition, since painters can paint many different types of objects.
43 |
44 | However, using high-performance image recognition techniques for this task can have some distinctive advantages:
45 | - It is a non-invasive procedure, in contrast to several methods currently used by art specialists
46 | - It can be used for forgery identification possibly with better accuracy compared with current techniques
47 |
48 |
49 | ## Overview and Data
50 | The data was compiled by [Kaggle](https://www.kaggle.com/c/painter-by-numbers) (based on WikiArt.org datasets) and consisted of approximately 100,000 paintings by more than 2,000 painters. To prevent sampling-size problems with the training set, I restricted the dataset by setting a threshold for the minimum number of paintings per artist. I varied this threshold and evaluated the appropriate metrics for each case. As a warm-up, I started with only three painters and increased the number of painters up to 37 (the number of painters with 400 or more paintings in the data).
51 |
52 | I used `Keras` to build a fully-connected neural network on top of Google's deep learning model `Inception V3` using Transfer Learning. In Transfer Learning, knowledge obtained by training a model on one dataset is reused on a different task or dataset.
53 |
54 | First, a few comments about the dataset:
55 | - The file `all_data_info.csv` (downloaded from [Kaggle](https://www.kaggle.com/c/painter-by-numbers)) contains information about the paintings (such as artist, genre, filename of the associated image and so on). From this `csv` file, I built a Pandas `DataFrame` where each row corresponds to one image. Three columns of the `DataFrame` were useful in the analysis, namely, `artist` (the name of the artist), `in_train` (a Boolean variable identifying the presence or absence of that painting in the training set) and `new_filename` (the name of the file corresponding to each painting).
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 | - To keep only artists with at least `minimum` paintings, I wrote a simple function `threshold( )`, contained in the file `aux_func.py` (also in this repo). The function takes three arguments: the `DataFrame`, the column name and the threshold value. The list of artists can be built from the indexes of the `.value_counts()` Pandas `Series`. These steps are shown in the code snippet below.
65 | - In addition to the usual Python libraries (`Pandas`, `Numpy`, etc) the following packages were used:
66 | - `cv2` which is used for working with images e.g. for resizing
67 | - `os` which is used to work with directories
68 | - `tqdm` which shows a percentage bar for tasks
69 | - `PIL` (Python Imaging Library) which is used for opening/manipulating/saving image files
70 |
71 | ```
72 | # selecting only 3 columns
73 | df = pd.read_csv('all_data_info.csv').iloc[:, [0,10,11]]
74 | df = af.threshold(df,'artist',minimum)
75 | # list of the artists included in the analysis
76 | artists = list(df['artist'].value_counts().index)
77 | ```
78 |
79 |
80 | ### Preprocessing the data
81 | Some of the steps used to preprocess the data were:
82 | - The images from WikiArt.org have a large number of pixels (the full dataset had several GB), so I wrote a simple function `preprocess( )` to resize them. Downsizing can be justified if we assume that the style of the painter is present everywhere in the painting. The function is given by:
83 | ```
84 | # imports needed by the function
from PIL import Image
from tqdm import tqdm
import os

def preprocess(size,PATH,PATHDUMP):
85 | Image.MAX_IMAGE_PIXELS = None
86 | # looping over the files in the original folder (containing large images)
87 | for image_filename in tqdm(os.listdir(PATH)):
88 | # os.path.join constructs a pathname out of partial pathnames
89 | # Image.open( ) converts a string (a path in this case) into a PIL image
90 | img = Image.open(os.path.join(PATH,
91 | image_filename))
92 | # resizing to equal dimensions and saving to a new folder
93 | try:
94 | img.resize((size, size))\
95 | .convert('RGB')\
96 | .save(os.path.join(PATHDUMP,
97 | image_filename))
98 | except Exception as e:
99 | print("Unable to process {}".format(image_filename))
100 | print(e)
101 | ```
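For example, to resize every image in one folder into another (the folder names below are hypothetical):
```
preprocess(300, './paintings_full/', './paintings_resized/')  # size, source, destination (hypothetical)
```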
102 |
103 | - The training and testing files were in two separate folders. I wrote a simple function `subfolder_maker_multi( )` to create, within each of these two folders, sub-folders corresponding to each of the painters (i.e. each of the classes). This was done to format the data in the shape required by the ConvNet model, which uses the `ImageDataGenerator` and `flow_from_directory()` functionality of `Keras` (more on this later).
104 |
105 |
106 |
107 |
108 |
109 |
110 |
111 |
112 |
113 |
114 |
115 | The function reads:
116 | ```
117 | def subfolder_maker_multi(PATH,PATHDUMP,lst):
118 |     # lst is a list of sublists, each containing paintings by one painter only;
    # `painters` (defined earlier in the notebook) is the list of artist names
119 | for sublist in tqdm(lst):
120 | for el in sublist:
121 | img = Image.open(os.path.join(PATH,el))
122 | img.save(os.path.join(PATHDUMP,
123 | painters[lst.index(sublist)].replace(' ','_'),el))
124 | ```
125 |
126 |
127 | ## Problems with small datasets
128 | The number of training examples in our dataset is rather small by image-recognition standards (state-of-the-art models are usually trained on millions of images). Therefore, making predictions with high accuracy while avoiding overfitting is not feasible from scratch. To build a classification model with the level of capability of current state-of-the-art models I used, as mentioned before, Google's `Inception V3`, applying Transfer Learning.
129 |
130 |
131 |
132 |
134 |
135 |
136 |
137 |
138 | In addition to using Transfer Learning, another way to circumvent the problem with small datasets is to use image augmentation. The `Keras` class `keras.preprocessing.image.ImageDataGenerator` generates batches of image data with real-time data augmentation and defines the configuration for both image data preparation and image data augmentation. Data augmentation is particularly useful in cases like the present one, where the number of images in the training set is not large and overfitting can become an issue.
139 |
140 | To create an augmented image generator we can follow these steps:
141 |
142 | - We must first create an instance i.e. an augmented image generator (using the command below) where several arguments can be chosen. These arguments will determine the alterations to be performed on the images during training:
143 |
144 | datagen = ImageDataGenerator(arguments)
145 |
146 | - To use `datagen` to create new images we call the function `fit_generator( )` with the desired arguments.
147 |
148 | I will quickly explain some possible arguments of `ImageDataGenerator` (a concrete example follows this list):
149 | - `rotation_range` defines the amplitude by which the images will be rotated randomly during training. Rotations aren't always useful. For example, in the MNIST dataset all images have normalized orientation, so random rotations during training are not needed. In our present case it is not clear how useful rotations are, so I chose a small value (instead of just setting it to zero).
150 | - `width_shift_range`, `height_shift_range` and `shear_range`: the ranges of random shifts and random shears can be the same in our case, since the images were resized to have the same dimensions.
151 | - I set `fill_mode` to `nearest`, which means that missing pixels will be filled in by the nearest ones.
152 | - `horizontal_flip`: horizontal (and vertical) flips can be useful here since for many paintings in our dataset there is no clear definition of orientation (again, the MNIST dataset is an example where flipping is not useful)
153 | - We can also standardize pixel values using the `featurewise_center` and `featurewise_std_normalization` arguments.
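Putting these choices together, the generator configured in the script in this repo reads:
```
datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=0.15,          # small random rotations only
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
```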
154 |
155 |
156 | ## Batches and Epochs:
157 | Two important concepts are batches and epochs:
158 | - Batch: a set of N samples. The samples in a batch are processed independently, in parallel (from the docs)
159 | - Epoch: an arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation. When using `validation_data` or `validation_split` with the `fit` method of Keras models, evaluation will be run at the end of every epoch (extracted from the docs).
160 | - Larger batch sizes: faster progress in training, but don't always converge as fast.
161 | - Smaller batch sizes: train slower, but can converge faster. It's definitely problem dependent.
162 |
163 | The number of training samples is obtained using the snippet below. The same code is used for the test data and is therefore omitted.
164 | ```
165 | nb_train_samples = 0
166 | for p in range(len(os.listdir(os.path.abspath(folder_train)))):
167 | nb_train_samples += len(os.listdir(os.path.abspath(folder_train) +'/'+ os.listdir(
168 | os.path.abspath(folder_train))[p]))
169 | ```
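The same count can also be written more compactly (same logic, just restructured):
```
import os

train_path = os.path.abspath(folder_train)
nb_train_samples = sum(
    len(os.listdir(os.path.join(train_path, d))) for d in os.listdir(train_path))
```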
170 |
171 | ## Using features of Inception V3
172 |
173 | The Inception V3 network, pre-trained on the ImageNet dataset, already knows many useful features. The strategy is straightforward:
174 | - Instantiate the convolutional part of the Inception V3 (exclude fully-connected block)
175 | - Run the model on our training and test dataset and record the output in numpy arrays
176 | - Train a small fully-connected network on top of the features we have just stored
177 |
178 |
179 | ### Creating and training using the Inception V3 model
180 | We will now create the Inception V3 model without the final fully-connected layers (setting `include_top=False`) and load the ImageNet weights (setting `weights='imagenet'`). Before that, a few parameters must be defined:
181 | ```
182 | # folder containing training set already subdivided
183 | train_data_dir = os.path.abspath(folder_train)
184 | # folder containing test set already subdivided
185 | validation_data_dir = os.path.abspath(folder_test)
186 | epochs = 200
187 | batch_size = 16
188 | num_classes = len(os.listdir(os.path.abspath(folder_train)))
189 | ```
190 | Building the network:
191 | ```
192 | from keras.applications.inception_v3 import InceptionV3
193 | model = applications.InceptionV3(include_top=False, weights='imagenet')
194 | ```
195 | We then create a generator and use `predict_generator( )` to generate predictions for the input samples; `predict_size_train` is taken here to be the number of batches needed to cover the training set:
196 | ```
import math
predict_size_train = int(math.ceil(nb_train_samples / batch_size))  # batches covering the training set
197 | datagen = ImageDataGenerator(rescale=1. / 255)
198 | generator = datagen.flow_from_directory(
199 | train_data_dir,
200 | target_size=(img_width, img_height),
201 | batch_size=batch_size,
202 | class_mode=None,
203 | shuffle=False)
204 | features_train = model.predict_generator(generator, predict_size_train) # these are numpy arrays
205 | ```
206 | Using `predict( )` we see that, indeed, Inception V3 is able to identify some objects in the painting. The function `decode_predictions` decodes the results into a list of tuples of the form (class, description, probability). We see below that the model identifies the house in the image as a castle or a mosque, and correctly assigns a non-zero probability to a seashore being present in the painting.
207 |
208 | ```
209 | preds = base_model.predict(x)
210 | print('Predicted:', decode_predictions(preds))
211 | ```
212 |
213 |
214 |
215 |
216 |
217 |
218 | The output is:
219 |
220 | ```
221 | Predicted:
222 | [[('n02980441', 'castle', 0.14958656), ('n03788195', 'mosque', 0.10087459), ('n04347754', 'submarine', 0.086444855), ('n03388043', 'fountain', 0.07997718), ('n09428293', 'seashore', 0.07918877)]]
223 | ```
224 |
225 | ### Building and training the fully-connected network on top of Inception V3
226 |
227 | The small fully-connected network is built and trained below. I used:
228 | - Binary cross-entropy as loss function
229 | - The `Adam` optimizer
230 |
231 | ```
232 | model = Sequential()
233 | model.add(Flatten(input_shape=train_data.shape[1:]))
234 | model.add(Dense(512, activation='relu'))
235 | model.add(Dropout(0.5))
236 | model.add(Dense(256, activation='relu'))
237 | model.add(Dropout(0.5))
238 | model.add(Dense(128, activation='relu'))
239 | model.add(Dropout(0.5))
240 | model.add(Dense(64, activation='relu'))
241 | model.add(Dropout(0.5))
242 | model.add(Dense(32, activation='relu'))
243 | model.add(Dropout(0.5))
244 | model.add(Dense(16, activation='relu'))
245 | model.add(Dropout(0.5))
246 | model.add(Dense(num_classes, activation='sigmoid'))
247 | model.compile(optimizer='Adam',
248 | loss='binary_crossentropy', metrics=['accuracy'])
249 | history = model.fit(train_data, train_labels,
250 | epochs=epochs,
251 | batch_size=batch_size,
252 | validation_data=(validation_data, validation_labels))
253 | ```
254 | The bottleneck features and class labels used in the fit above are loaded as follows (the labels are converted into categorical vectors):
255 | ```
256 | train_data = np.load('bottleneck_features_train.npy')
257 | train_labels = to_categorical(generator_top.classes, num_classes=num_classes)
258 | ```
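The `.npy` file above is just the `features_train` array computed earlier, which can be persisted with `np.save`:
```
import numpy as np

# save the bottleneck features so the top model can be trained separately
np.save('bottleneck_features_train.npy', features_train)
```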
259 |
260 | ### Plotting the accuracy and loss histories
261 | The plots below show the training history: the first shows the accuracy and the second shows the loss. The train and validation curves are very close, indicating little overfitting.
262 |
263 |
264 |
265 |
266 |
267 |
268 |
269 |
270 |
271 |
272 |
273 |
274 |
275 |
276 |
277 |
278 |
279 |
280 |
281 | ### Conclusions
282 |
283 | I built a ConvNet to identify the artist of a given painting. I used transfer learning with the Inception V3 model and obtained an accuracy of the order of 76% with little overfitting.
284 |
285 |
286 | ## To Dos
287 |
288 | - [ ] Confusion matrix analysis
289 | - [ ] Use the `KerasClassifier( )` wrapper to access all `scikit-learn` functionalities
290 | - [ ] Include discussion about predictions in the README
291 |
--------------------------------------------------------------------------------
/painters-identification/data/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-cayman
--------------------------------------------------------------------------------
/painters-identification/images/1200px-Leonardo_da_Vinci_-_Virgin_and_Child_with_St_Anne_C2RMF_retouched.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/1200px-Leonardo_da_Vinci_-_Virgin_and_Child_with_St_Anne_C2RMF_retouched.jpg
--------------------------------------------------------------------------------
/painters-identification/images/36617.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/36617.jpg
--------------------------------------------------------------------------------
/painters-identification/images/91485.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/91485.jpg
--------------------------------------------------------------------------------
/painters-identification/images/Anghiari.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/Anghiari.jpg
--------------------------------------------------------------------------------
/painters-identification/images/DT45.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/DT45.jpg
--------------------------------------------------------------------------------
/painters-identification/images/Max_pooling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/Max_pooling.png
--------------------------------------------------------------------------------
/painters-identification/images/Peter_Paul_Ruben's_copy_of_the_lost_Battle_of_Anghiari.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/Peter_Paul_Ruben's_copy_of_the_lost_Battle_of_Anghiari.jpg
--------------------------------------------------------------------------------
/painters-identification/images/TLng.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/TLng.jpg
--------------------------------------------------------------------------------
/painters-identification/images/VGG16.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/VGG16.jpg
--------------------------------------------------------------------------------
/painters-identification/images/acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/acc.png
--------------------------------------------------------------------------------
/painters-identification/images/acc_curve.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/acc_curve.png
--------------------------------------------------------------------------------
/painters-identification/images/andrew_ng_drivers_ml_success-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/andrew_ng_drivers_ml_success-1.png
--------------------------------------------------------------------------------
/painters-identification/images/binary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/binary.png
--------------------------------------------------------------------------------
/painters-identification/images/binarycrossentropy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/binarycrossentropy.png
--------------------------------------------------------------------------------
/painters-identification/images/capstone-slides.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/capstone-slides.pdf
--------------------------------------------------------------------------------
/painters-identification/images/cnnpic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/cnnpic.png
--------------------------------------------------------------------------------
/painters-identification/images/cnnpic2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/cnnpic2.jpg
--------------------------------------------------------------------------------
/painters-identification/images/convnetsmall.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/convnetsmall.png
--------------------------------------------------------------------------------
/painters-identification/images/dataframemarkdown.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/dataframemarkdown.png
--------------------------------------------------------------------------------
/painters-identification/images/emoji.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/emoji.jpeg
--------------------------------------------------------------------------------
/painters-identification/images/emojismiling.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/emojismiling.jpg
--------------------------------------------------------------------------------
/painters-identification/images/f13.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/f13.png
--------------------------------------------------------------------------------
/painters-identification/images/folders.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/folders.png
--------------------------------------------------------------------------------
/painters-identification/images/house_identify.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/house_identify.png
--------------------------------------------------------------------------------
/painters-identification/images/inceptionV3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/inceptionV3.png
--------------------------------------------------------------------------------
/painters-identification/images/inceptionV3tds.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/inceptionV3tds.png
--------------------------------------------------------------------------------
/painters-identification/images/inceptionv3summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/inceptionv3summary.png
--------------------------------------------------------------------------------
/painters-identification/images/line.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/line.png
--------------------------------------------------------------------------------
/painters-identification/images/loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/loss.png
--------------------------------------------------------------------------------
/painters-identification/images/loss_curve.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/loss_curve.png
--------------------------------------------------------------------------------
/painters-identification/images/mouse_again.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/mouse_again.png
--------------------------------------------------------------------------------
/painters-identification/images/multiclass.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/multiclass.png
--------------------------------------------------------------------------------
/painters-identification/images/paintingcontrast.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/paintingcontrast.jpg
--------------------------------------------------------------------------------
/painters-identification/images/paintings_readme.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/paintings_readme.jpg
--------------------------------------------------------------------------------
/painters-identification/images/pooling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/pooling.png
--------------------------------------------------------------------------------
/painters-identification/images/rat.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/rat.png
--------------------------------------------------------------------------------
/painters-identification/images/relu.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/relu.png
--------------------------------------------------------------------------------
/painters-identification/images/rembrandtpic.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/rembrandtpic.jpg
--------------------------------------------------------------------------------
/painters-identification/images/stanne.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/stanne.jpg
--------------------------------------------------------------------------------
/painters-identification/images/test.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/test.png
--------------------------------------------------------------------------------
/painters-identification/images/test_loss_picasso_size_32.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/test_loss_picasso_size_32.png
--------------------------------------------------------------------------------
/painters-identification/images/train_and_test_loss_picasso_size_32.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/train_and_test_loss_picasso_size_32.png
--------------------------------------------------------------------------------
/painters-identification/images/transferlearning.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/painters-identification/images/transferlearning.jpg
--------------------------------------------------------------------------------
/painters-identification/scripts/aux_func.py:
--------------------------------------------------------------------------------
1 | def vc_to_df(s):
2 | s = s.to_frame().reset_index()
3 | s.columns = ['col_name','value_counts']
4 | s.set_index('col_name', inplace=True)
5 | return s
6 |
7 | def threshold(df,col,minimum):
8 | s = df[col].value_counts()
9 | lst = list(s[s >= minimum].index)
10 | return df[df[col].isin(lst)]
11 |
12 |
13 | if __name__ == '__main__':
    pass
--------------------------------------------------------------------------------
/painters-identification/scripts/capstone-models-final-model-building.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[ ]:
5 |
6 |
7 |
8 | # Painters Identification using ConvNets
9 | ### Marco Tavora
10 |
11 |
12 |
13 |
14 |
15 | ## Index
16 |
17 | # - [Building Convolutional Neural Networks](#convnets)
18 | # - [Small ConvNets](#smallconvnets)
19 | # - [Imports for Convnets](#importconvnets)
20 | # - [Preprocessing](#keraspreprocessing)
21 | # - [Training the model](#traincnn)
22 | # - [Plotting the results](#plotting)
23 | # - [Transfer learning: Using InceptionV3](#VGG16)
24 | # - [Comments](#comments)
25 | # - [References](#ref)
26 |
27 | #
28 | #
29 |
30 |
31 |
32 |
33 | import platform
print('Created using Python', platform.python_version())
34 |
35 | ## Introduction
36 |
37 | The challenge of recognizing artists given their paintings has been, for a long time, far beyond the capability of algorithms. Recent advances in deep learning, specifically the development of convolutional neural networks, have made that task possible. One of the advantages of these methods is that, in contrast to several methods employed by art specialists, they are not invasive and do not interfere with the painting.
38 |
39 |
40 | ## Overview
41 |
42 | # I used Convolutional Neural Networks (ConvNets) to identify the artist of a given painting. The dataset contains a minimum of 400 paintings per artist from a set of 37 famous artists.
43 | #
44 | # I trained a small ConvNet built from scratch, and also used transfer learning, fine-tuning the top layers of a deep pre-trained network (VGG16).
45 |
46 | # ## Problems with small datasets
47 | # The number of training examples in our dataset is small (by image recognition standards). Therefore, making predictions with high accuracy while avoiding overfitting becomes a difficult task. Building classification systems with the level of capability of current state-of-the-art models would require millions of training examples. Examples of such models (all trained on ImageNet) include:
48 |
49 | # - VGG16
50 | # - VGG19
51 | # - ResNet50
52 | # - Inception V3
53 | # - Xception
54 |
55 |
56 |
57 | ## Preprocessing
58 |
59 | # The `Keras` class `keras.preprocessing.image.ImageDataGenerator` generates batches of image data with real-time data augmentation and defines the configuration for both image data preparation and image data augmentation. Data augmentation is particularly useful in cases like the present one, where the number of images in the training set is not large, and overfitting can become an issue.
60 |
61 | # To create an augmented image generator we can follow these steps:
62 |
63 | # - We must first create an instance i.e. an augmented image generator (using the command below) where several arguments can be chosen. These arguments will determine the alterations to be performed on the images during training:
64 |
65 | # datagen = ImageDataGenerator(arguments)
66 |
67 | # - To use `datagen` to create new images we call the function `fit_generator( )` with the desired arguments.
68 |
69 | # I will quickly explain some possible arguments of `ImageDataGenerator`:
70 | # - `rotation_range` defines the amplitude by which the images will be rotated randomly during training. Rotations aren't always useful. For example, in the MNIST dataset all images have normalized orientation, so random rotations during training are not needed. In our present case it is not clear how useful rotations are, so I chose a small value (instead of just setting it to zero).
71 | # - `width_shift_range`, `height_shift_range` and `shear_range`: the ranges of random shifts and random shears can be the same in our case, since the images were resized to have the same dimensions.
72 | # - I set `fill_mode` to `nearest`, which means that missing pixels will be filled in by the nearest ones.
73 | # - `horizontal_flip`: horizontal (and vertical) flips can be useful here since for many paintings in our dataset there is no clear definition of orientation (again, the MNIST dataset is an example where flipping is not useful)
74 | # - We can also standardize pixel values using the `featurewise_center` and `featurewise_std_normalization` arguments.
75 | # ***
76 |
77 | # ## Transfer Learning
78 | # One way to circumvent this issue is to use transfer learning: we take a pre-trained model, modify its final layers and apply it to our dataset. When the dataset is too small, these pre-trained models act as feature generators only (see the discussion below). As will be illustrated later on, when the dataset in question has some reasonable size, one can drop some layers from the original model, stack a new model on top of the network and fine-tune some of the parameters; a minimal sketch of the freezing step behind this follows below.
79 |
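# A minimal sketch of the freezing step behind fine-tuning (an illustration
# only; the code below extracts bottleneck features instead):
#
# base_model = applications.InceptionV3(include_top=False, weights='imagenet')
# for layer in base_model.layers:
#     layer.trainable = False  # keep the pre-trained ImageNet weights fixed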
80 | # Before following this approach, I will, in the next section, build a small ConvNet "from scratch".
81 |
82 | import os, math, warnings
83 | import cv2
84 | import numpy as np
85 | import matplotlib.pyplot as plt
86 | import tensorflow as tf
87 | from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
88 | from keras.models import Sequential
89 | from keras.layers import Dropout, Flatten, Dense
90 | from keras import applications
91 | from keras.utils.np_utils import to_categorical
92 | from keras.applications.imagenet_utils import preprocess_input, decode_predictions
93 |
94 | folder_train = './train_toy_3/'
95 | folder_test = './test_toy_3/'
96 |
97 | datagen = ImageDataGenerator(
98 | featurewise_center=True,
99 | featurewise_std_normalization=True,
100 | rotation_range=0.15,
101 | width_shift_range=0.2,
102 | height_shift_range=0.2,
103 | rescale = 1./255,
104 | shear_range=0.2,
105 | zoom_range=0.2,
106 | horizontal_flip=True,
107 | fill_mode='nearest')
108 |
109 | from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
110 | from keras.models import Sequential
111 | from keras.layers import Conv2D, MaxPooling2D
112 | from keras.layers import Activation, Dropout, Flatten, Dense
113 | from keras import backend as K
114 | from keras.callbacks import EarlyStopping, Callback
115 | K.image_data_format()  # returns 'channels_last' when the TensorFlow backend is used (channels are RGB)
116 | from keras import applications
117 | from keras.utils.np_utils import to_categorical
118 | import math, cv2
119 |
120 | ## Defining the new size of the image
121 |
122 | # - The images from Wikiart.org were extremely large, so I wrote a simple function `preprocess( )` (see the data-analysis notebook in this repo) to resize them; a sketch of such a function follows below. In the next cell I resize the images again and experiment with the size to see how it impacts accuracy.
123 | # - Cropping the images is partly justified because, I believe, the style of an artist is present everywhere in the painting, so cropping shouldn't cause major problems.
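# A minimal sketch of such a resizing helper (an assumption about the original
# `preprocess( )`, which lives in the data-analysis notebook):
#
# def preprocess(path, size=(120, 120)):
#     """Read an image from disk and resize it to `size` with OpenCV."""
#     return cv2.resize(cv2.imread(path), size)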
124 |
125 | img_width, img_height = 120,120
126 |
127 | if K.image_data_format() == 'channels_first':
128 | input_shape = (3, img_width, img_height)
129 | print('Theano Backend')
130 | else:
131 | input_shape = (img_width, img_height, 3)
132 | print('TensorFlow Backend')
133 |
134 | input_shape
135 |
136 | nb_train_samples = 0
137 | for artist in os.listdir(os.path.abspath(folder_train)):
138 |     nb_train_samples += len(os.listdir(os.path.join(os.path.abspath(folder_train), artist)))
139 | nb_train_samples
140 |
141 | nb_test_samples = 0
142 | for artist in os.listdir(os.path.abspath(folder_test)):
143 |     nb_test_samples += len(os.listdir(os.path.join(os.path.abspath(folder_test), artist)))
144 | nb_test_samples
147 |
148 | ## Batches and Epochs:
149 |
150 | # - Batch: a set of $N$ samples. The samples in a batch are processed independently, in parallel. During training, a batch results in only one update to the model (extracted from the docs).
151 | # - Epoch: an arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation. When using `validation_data` or `validation_split` with the `fit` method of Keras models, evaluation will be run at the end of every epoch (extracted from the docs).
152 | # - Larger batch sizes: faster progress through training, but they don't always converge as fast.
153 | # - Smaller batch sizes: training is slower, but it can converge faster. Which is better is problem dependent. (A worked example of the batch/epoch arithmetic follows below.)
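# For instance, with 800 training samples and batch_size = 16 (hypothetical
# numbers), one epoch consists of ceil(800 / 16) = 50 batches, i.e. 50 weight
# updates, so 100 epochs perform 5,000 updates in total.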
154 |
155 | train_data_dir = os.path.abspath(folder_train) # folder containing training set already subdivided
156 | validation_data_dir = os.path.abspath(folder_test) # folder containing test set already subdivided
157 | nb_train_samples = nb_train_samples
158 | nb_validation_samples = nb_test_samples
159 | epochs = 100
160 | batch_size = 16
161 | num_classes = len(os.listdir(os.path.abspath(folder_train)))
162 | print('The painters are',os.listdir(os.path.abspath(folder_train)))
163 |
164 | ### Class for early stopping
165 |
166 | # Training stops when the validation loss shows no improvement for 10 consecutive epochs.
167 |
168 | # rdcolema
169 | class EarlyStoppingByLossVal(Callback):
170 |     """Custom callback that stops training once the val loss drops below a target value"""
171 |     def __init__(self, monitor='val_loss', value=0.45, verbose=0):
172 |         super(EarlyStoppingByLossVal, self).__init__()
173 |         self.monitor = monitor
174 |         self.value = value
175 |         self.verbose = verbose
176 |
177 |     def on_epoch_end(self, epoch, logs={}):
178 |         current = logs.get(self.monitor)
179 |         if current is None:
180 |             warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
181 |             return
182 |         if current < self.value:
183 |             if self.verbose > 0:
184 |                 print("Epoch %05d: early stopping THR" % epoch)
185 |             self.model.stop_training = True
186 | early_stopping = EarlyStopping(monitor='val_loss', patience=10, mode='auto')
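# Usage sketch (an assumption, since the `fit` call further below does not
# currently pass callbacks); either callback would be supplied as, e.g.:
# model.fit(..., callbacks=[early_stopping])
# model.fit(..., callbacks=[EarlyStoppingByLossVal(value=0.45)])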
187 |
188 | top_model_weights_path = 'bottleneck_fc_model.h5'
189 |
190 | ### Creating InceptionV3 model
191 |
192 | # We now create the InceptionV3 model without its final fully-connected layers (setting `include_top=False`) and load the ImageNet weights (setting `weights='imagenet'`).
193 |
194 | from keras.applications.inception_v3 import InceptionV3
195 | model = applications.InceptionV3(include_top=False, weights='imagenet')
196 |
197 | model.summary()
200 |
201 | ### Training and running images on InceptionV3
202 |
203 | # We first create the generator: an iterator that yields batches of images on request, via e.g. `flow_from_directory( )`.
204 |
205 | datagen = ImageDataGenerator(rescale=1. / 255)
206 |
207 | generator = datagen.flow_from_directory(
208 | train_data_dir,
209 | target_size=(img_width, img_height),
210 | batch_size=batch_size,
211 | class_mode=None,
212 | shuffle=False)
213 |
214 | nb_train_samples = len(generator.filenames)
215 | num_classes = len(generator.class_indices)
216 | predict_size_train = int(math.ceil(nb_train_samples / batch_size))
217 | print('Number of training samples:',nb_train_samples)
218 | print('Number of classes:',num_classes)
219 |
220 | ### Bottleneck features
221 |
222 | # The extracted features, which are the last activation maps before the fully-connected layers in the pre-trained model, are called "bottleneck features". The function `predict_generator( )` generates predictions for the input samples from a data generator.
223 |
224 | bottleneck_features_train = model.predict_generator(generator, predict_size_train) # these are numpy arrays
225 |
226 | bottleneck_features_train[0].shape
227 | bottleneck_features_train.shape
228 |
229 | # In the next cell we save the bottleneck features to disk, so they can be reused when training the top model:
230 |
231 | np.save('bottleneck_features_train.npy', bottleneck_features_train)
232 |
233 | # As an aside, using `predict( )` with a full pre-trained model such as `ResNet50`, we see that it can indeed identify objects in a painting. The function `decode_predictions` decodes the results into a list of tuples of the form (class, description, probability). The model identifies the house in the image as a castle or mosque, and correctly assigns a non-zero probability to finding a seashore in the painting. In this case, `ResNet50` acts as a feature generator.
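# A minimal sketch of that check (an assumption, since the corresponding cell
# is not reproduced here; `decode_predictions` needs the full 1000-way ImageNet
# output, hence a model with the top layers included):
#
# full_model = applications.ResNet50(weights='imagenet')
# img = load_img('some_painting.jpg', target_size=(224, 224))  # hypothetical path
# batch = preprocess_input(np.expand_dims(img_to_array(img), axis=0))
# print(decode_predictions(full_model.predict(batch), top=3))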
234 |
235 | # Repeating the steps for the validation data:
236 |
237 | generator = datagen.flow_from_directory(
238 | validation_data_dir,
239 | target_size=(img_width, img_height),
240 | batch_size=batch_size,
241 | class_mode=None,
242 | shuffle=False)
243 |
244 | nb_validation_samples = len(generator.filenames)
245 |
246 | predict_size_validation = int(math.ceil(nb_validation_samples / batch_size))
247 | print('Number of testing samples:',nb_validation_samples)
248 |
249 | bottleneck_features_validation = model.predict_generator(
250 | generator, predict_size_validation)
251 |
252 | np.save('bottleneck_features_validation.npy', bottleneck_features_validation)
253 |
254 | ### Training the fully-connected network (the top-model)
255 |
256 | # We now load the features just obtained, get the class labels for the training set and convert the latter into categorical vectors:
257 |
258 | datagen_top = ImageDataGenerator(rescale=1./255)
259 | generator_top = datagen_top.flow_from_directory(
260 | train_data_dir,
261 | target_size=(img_width, img_height),
262 | batch_size=batch_size,
263 | class_mode='categorical',
264 | shuffle=False)
265 |
266 | nb_train_samples = len(generator_top.filenames)
267 | num_classes = len(generator_top.class_indices)
268 |
269 | # Loading the features:
270 |
271 | train_data = np.load('bottleneck_features_train.npy')
272 |
273 | # Converting training data into vectors of categories:
274 |
275 | train_labels = generator_top.classes
276 | print('Classes before dummification:',train_labels)
277 | train_labels = to_categorical(train_labels, num_classes=num_classes)
278 | print('Classes after dummification:\n\n',train_labels)
279 |
280 | # Again repeating the process with the validation data:
281 |
282 | generator_top = datagen_top.flow_from_directory(
283 | validation_data_dir,
284 | target_size=(img_width, img_height),
285 | batch_size=batch_size,
286 | class_mode=None,
287 | shuffle=False)
288 |
289 | nb_validation_samples = len(generator_top.filenames)
290 |
291 | validation_data = np.load('bottleneck_features_validation.npy')
292 |
293 | validation_labels = generator_top.classes
294 | validation_labels = to_categorical(validation_labels, num_classes=num_classes)
295 |
296 | ### Building the small fully-connected model using bottleneck features as input
297 |
298 | model = Sequential()
299 | model.add(Flatten(input_shape=train_data.shape[1:]))
300 | # model.add(Dense(1024, activation='relu'))
301 | # model.add(Dropout(0.5))
302 | model.add(Dense(512, activation='relu'))
303 | model.add(Dropout(0.5))
304 | model.add(Dense(256, activation='relu'))
305 | model.add(Dropout(0.5))
306 | model.add(Dense(128, activation='relu'))
307 | model.add(Dropout(0.5))
308 | model.add(Dense(64, activation='relu'))
309 | model.add(Dropout(0.5))
310 | model.add(Dense(32, activation='relu'))
311 | model.add(Dropout(0.5))
312 | model.add(Dense(16, activation='relu'))
313 | model.add(Dropout(0.5))
314 | model.add(Dense(8, activation='relu')) # Not valid for minimum = 500
315 | model.add(Dropout(0.5))
316 | # model.add(Dense(4, activation='relu')) # Not valid for minimum = 500
317 | # model.add(Dropout(0.5))
318 | model.add(Dense(num_classes, activation='sigmoid'))
319 |
320 | model.compile(optimizer='Adam',
321 | loss='binary_crossentropy', metrics=['accuracy'])
322 |
323 | history = model.fit(train_data, train_labels,
324 | epochs=epochs,
325 | batch_size=batch_size,
326 | validation_data=(validation_data, validation_labels))
327 |
328 | model.save_weights(top_model_weights_path)
329 |
330 | (eval_loss, eval_accuracy) = model.evaluate(
331 | validation_data, validation_labels,
332 | batch_size=batch_size, verbose=1)
333 |
334 | print("[INFO] accuracy: {:.2f}%".format(eval_accuracy * 100))
335 | print("[INFO] Loss: {}".format(eval_loss))
336 |
337 | train_data.shape[1:]
338 |
339 | # model.evaluate(
340 | # validation_data, validation_labels, batch_size=batch_size, verbose=1)
341 |
342 | # model.predict_classes(validation_data)
343 |
344 | # model.metrics_names
345 |
346 | #top_k_categorical_accuracy(y_true, y_pred, k=5)
347 |
348 | ### Plotting the accuracy history
349 |
350 | plt.figure(1)
351 |
352 | # summarize history for accuracy
353 |
354 | plt.subplot(211)
355 | plt.plot(history.history['acc'])
356 | plt.plot(history.history['val_acc'])
357 | plt.title('model accuracy')
358 | plt.ylabel('accuracy')
359 | plt.xlabel('epoch')
360 | #pylab.ylim([0.4,0.68])
361 | plt.legend(['train', 'test'], loc='upper left')
362 |
363 | ### Plotting the loss history
364 |
365 | import pylab
366 | plt.subplot(212)
367 | plt.plot(history.history['loss'])
368 | plt.plot(history.history['val_loss'])
369 | plt.title('model loss')
370 | plt.ylabel('loss')
371 | plt.xlabel('epoch')
372 | plt.legend(['train', 'test'], loc='upper left')
373 | pylab.xlim([0,60])
374 | # pylab.ylim([0,1000])
375 | plt.show()
376 |
377 | import matplotlib.pyplot as plt
378 | import pylab
379 | get_ipython().run_line_magic('matplotlib', 'inline')
380 | get_ipython().run_line_magic('config', "InlineBackend.figure_format = 'retina'")
381 | fig = plt.figure()
382 | plt.plot(history.history['loss'])
383 | plt.plot(history.history['val_loss'])
384 | plt.title('Classification Model Loss')
385 | plt.xlabel('Epoch')
386 | plt.ylabel('Loss')
387 | pylab.xlim([0,60])
388 | plt.legend(['Train', 'Validation'], loc='upper right')
389 | fig.savefig('loss.png')
390 | plt.show();
391 |
392 | import matplotlib.pyplot as plt
393 | get_ipython().run_line_magic('matplotlib', 'inline')
394 | get_ipython().run_line_magic('config', "InlineBackend.figure_format = 'retina'")
395 |
396 | fig = plt.figure(figsize=(15, 15))
397 | plt.plot(history.history['acc'])
398 | plt.plot(history.history['val_acc'])
400 | plt.title('Classification Model Accuracy')
401 | plt.xlabel('Epoch')
402 | plt.ylabel('Accuracy')
403 | pylab.xlim([0,100])
404 | plt.legend(['Train', 'Validation'], loc='lower right')
405 | fig.savefig('acc.png')
406 | plt.show();
407 |
408 |
409 |
410 | ### Predictions
411 |
412 | os.listdir(os.path.abspath('train_toy_3/Pierre-Auguste_Renoir'))
413 |
414 | image_path = os.path.abspath('test_toy_3/Pierre-Auguste_Renoir/91485.jpg')
415 | orig = cv2.imread(image_path)
416 | image = load_img(image_path, target_size=(120,120))
417 | image
418 | image = img_to_array(image)
419 | image
420 |
421 | image = image / 255.
422 | image = np.expand_dims(image, axis=0)
423 | image
424 |
425 | # build the InceptionV3 network (a VGG16 base, left commented out, would work analogously)
426 | #model = applications.VGG16(include_top=False, weights='imagenet')
427 | model = applications.InceptionV3(include_top=False, weights='imagenet')
428 | # get the bottleneck prediction from the pre-trained InceptionV3 model
429 | bottleneck_prediction = model.predict(image)
430 |
431 | # build top model
432 | model = Sequential()
433 | model.add(Flatten(input_shape=train_data.shape[1:]))
434 | # model.add(Dense(1024, activation='relu'))
435 | # model.add(Dropout(0.5))
436 | model.add(Dense(512, activation='relu'))
437 | model.add(Dropout(0.5))
438 | model.add(Dense(256, activation='relu'))
439 | model.add(Dropout(0.5))
440 | model.add(Dense(128, activation='relu'))
441 | model.add(Dropout(0.5))
442 | model.add(Dense(64, activation='relu'))
443 | model.add(Dropout(0.5))
444 | model.add(Dense(32, activation='relu'))
445 | model.add(Dropout(0.5))
446 | model.add(Dense(16, activation='relu'))
447 | model.add(Dropout(0.5))
448 | model.add(Dense(8, activation='relu')) # Not valid for minimum = 500
449 | model.add(Dropout(0.5))
450 | # model.add(Dense(4, activation='relu')) # Not valid for minimum = 500
451 | # model.add(Dropout(0.5))
452 | model.add(Dense(num_classes, activation='sigmoid'))
453 |
454 | model.load_weights(top_model_weights_path)
455 |
456 | # use the bottleneck prediction on the top model to get the final classification
457 | class_predicted = model.predict_classes(bottleneck_prediction)
458 |
459 | inID = class_predicted[0]
460 |
461 | class_dictionary = generator_top.class_indices
462 |
463 | inv_map = {v: k for k, v in class_dictionary.items()}
464 |
465 | label = inv_map[inID]
466 |
467 | # get the prediction label
468 | print("Image ID: {}, Label: {}".format(inID, label))
469 |
470 | # display the predictions with the image
471 | cv2.putText(orig, "Predicted: {}".format(label), (10, 30), cv2.FONT_HERSHEY_PLAIN, 1.5, (43, 99, 255), 2)
472 |
473 | cv2.imshow("Classification", orig)
474 | cv2.waitKey(0)
475 | cv2.destroyAllWindows()
476 |
477 |
--------------------------------------------------------------------------------
/painters.py:
--------------------------------------------------------------------------------
1 | import os, math, warnings
2 | import cv2
3 | import numpy as np
4 | import matplotlib.pyplot as plt
5 | import tensorflow as tf
6 | from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
7 | from keras.models import Sequential
8 | from keras.layers import Dropout, Flatten, Dense
9 | from keras import applications
10 | from keras.utils.np_utils import to_categorical
11 | from keras.applications.imagenet_utils import preprocess_input, decode_predictions
12 |
13 | folder_train = './train_toy_3/'
14 | folder_test = './test_toy_3/'
15 |
16 | datagen = ImageDataGenerator(
17 | featurewise_center=True,
18 | featurewise_std_normalization=True,
19 | rotation_range=0.15,
20 | width_shift_range=0.2,
21 | height_shift_range=0.2,
22 | rescale = 1./255,
23 | shear_range=0.2,
24 | zoom_range=0.2,
25 | horizontal_flip=True,
26 | fill_mode='nearest')
27 |
28 | from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
29 | from keras.models import Sequential
30 | from keras.layers import Conv2D, MaxPooling2D
31 | from keras.layers import Activation, Dropout, Flatten, Dense
32 | from keras import backend as K
33 | from keras.callbacks import EarlyStopping, Callback
34 | K.image_data_format()  # returns 'channels_last' when the TensorFlow backend is used (channels are RGB)
35 | from keras import applications
36 | from keras.utils.np_utils import to_categorical
37 | import math, cv2
38 |
39 | ## Defining the new size of the image
40 |
41 | img_width, img_height = 120,120
42 |
43 | if K.image_data_format() == 'channels_first':
44 | input_shape = (3, img_width, img_height)
45 | print('Theano Backend')
46 | else:
47 | input_shape = (img_width, img_height, 3)
48 | print('TensorFlow Backend')
49 |
50 | input_shape
51 |
52 | nb_train_samples = 0
53 | for artist in os.listdir(os.path.abspath(folder_train)):
54 |     nb_train_samples += len(os.listdir(os.path.join(os.path.abspath(folder_train), artist)))
55 | nb_train_samples
56 |
57 | nb_test_samples = 0
58 | for artist in os.listdir(os.path.abspath(folder_test)):
59 |     nb_test_samples += len(os.listdir(os.path.join(os.path.abspath(folder_test), artist)))
62 |
63 | train_data_dir = os.path.abspath(folder_train) # folder containing training set already subdivided
64 | validation_data_dir = os.path.abspath(folder_test) # folder containing test set already subdivided
65 | nb_train_samples = nb_train_samples
66 | nb_validation_samples = nb_test_samples
67 | epochs = 100
68 | batch_size = 16
69 | num_classes = len(os.listdir(os.path.abspath(folder_train)))
70 | print('The painters are',os.listdir(os.path.abspath(folder_train)))
71 |
72 | ### Class for early stopping
73 |
74 | # rdcolema
75 | class EarlyStoppingByLossVal(Callback):
76 |     """Custom callback that stops training once the val loss drops below a target value"""
77 |     def __init__(self, monitor='val_loss', value=0.45, verbose=0):
78 |         super(EarlyStoppingByLossVal, self).__init__()
79 |         self.monitor = monitor
80 |         self.value = value
81 |         self.verbose = verbose
82 |
83 |     def on_epoch_end(self, epoch, logs={}):
84 |         current = logs.get(self.monitor)
85 |         if current is None:
86 |             warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
87 |             return
88 |         if current < self.value:
89 |             if self.verbose > 0:
90 |                 print("Epoch %05d: early stopping THR" % epoch)
91 |             self.model.stop_training = True
92 | early_stopping = EarlyStopping(monitor='val_loss', patience=10, mode='auto')
93 |
94 | top_model_weights_path = 'bottleneck_fc_model.h5'
95 |
96 | ### Creating InceptionV3 model
97 |
98 | from keras.applications.inception_v3 import InceptionV3
99 | model = applications.InceptionV3(include_top=False, weights='imagenet')
100 |
101 | model.summary()
104 |
105 | ### Training and running images on InceptionV3
106 |
107 | datagen = ImageDataGenerator(rescale=1. / 255)
108 |
109 | generator = datagen.flow_from_directory(
110 | train_data_dir,
111 | target_size=(img_width, img_height),
112 | batch_size=batch_size,
113 | class_mode=None,
114 | shuffle=False)
115 |
116 | nb_train_samples = len(generator.filenames)
117 | num_classes = len(generator.class_indices)
118 | predict_size_train = int(math.ceil(nb_train_samples / batch_size))
119 | print('Number of training samples:',nb_train_samples)
120 | print('Number of classes:',num_classes)
121 |
122 |
123 | bottleneck_features_train = model.predict_generator(generator, predict_size_train) # these are numpy arrays
124 |
125 | bottleneck_features_train[0].shape
126 | bottleneck_features_train.shape
127 |
128 |
129 | np.save('bottleneck_features_train.npy', bottleneck_features_train)
130 |
131 |
132 | generator = datagen.flow_from_directory(
133 | validation_data_dir,
134 | target_size=(img_width, img_height),
135 | batch_size=batch_size,
136 | class_mode=None,
137 | shuffle=False)
138 |
139 | nb_validation_samples = len(generator.filenames)
140 |
141 | predict_size_validation = int(math.ceil(nb_validation_samples / batch_size))
142 | print('Number of testing samples:',nb_validation_samples)
143 |
144 | bottleneck_features_validation = model.predict_generator(
145 | generator, predict_size_validation)
146 |
147 | np.save('bottleneck_features_validation.npy', bottleneck_features_validation)
148 |
149 | ### Training the fully-connected network (the top-model)
150 |
151 |
152 | datagen_top = ImageDataGenerator(rescale=1./255)
153 | generator_top = datagen_top.flow_from_directory(
154 | train_data_dir,
155 | target_size=(img_width, img_height),
156 | batch_size=batch_size,
157 | class_mode='categorical',
158 | shuffle=False)
159 |
160 | nb_train_samples = len(generator_top.filenames)
161 | num_classes = len(generator_top.class_indices)
162 |
163 |
164 | train_data = np.load('bottleneck_features_train.npy')
165 |
166 | ## Converting training data into vectors of categories:
167 |
168 | train_labels = generator_top.classes
169 | print('Classes before dummification:',train_labels)
170 | train_labels = to_categorical(train_labels, num_classes=num_classes)
171 | print('Classes after dummification:\n\n',train_labels)
172 |
173 | ## Again repeating the process with the validation data:
174 |
175 | generator_top = datagen_top.flow_from_directory(
176 | validation_data_dir,
177 | target_size=(img_width, img_height),
178 | batch_size=batch_size,
179 | class_mode=None,
180 | shuffle=False)
181 |
182 | nb_validation_samples = len(generator_top.filenames)
183 |
184 | validation_data = np.load('bottleneck_features_validation.npy')
185 |
186 | validation_labels = generator_top.classes
187 | validation_labels = to_categorical(validation_labels, num_classes=num_classes)
188 |
189 | ### Building the small FL model using bottleneck features as input
190 |
191 | model = Sequential()
192 | model.add(Flatten(input_shape=train_data.shape[1:]))
193 | # model.add(Dense(1024, activation='relu'))
194 | # model.add(Dropout(0.5))
195 | model.add(Dense(512, activation='relu'))
196 | model.add(Dropout(0.5))
197 | model.add(Dense(256, activation='relu'))
198 | model.add(Dropout(0.5))
199 | model.add(Dense(128, activation='relu'))
200 | model.add(Dropout(0.5))
201 | model.add(Dense(64, activation='relu'))
202 | model.add(Dropout(0.5))
203 | model.add(Dense(32, activation='relu'))
204 | model.add(Dropout(0.5))
205 | model.add(Dense(16, activation='relu'))
206 | model.add(Dropout(0.5))
207 | model.add(Dense(8, activation='relu')) # Not valid for minimum = 500
208 | model.add(Dropout(0.5))
209 | # model.add(Dense(4, activation='relu')) # Not valid for minimum = 500
210 | # model.add(Dropout(0.5))
211 | model.add(Dense(num_classes, activation='sigmoid'))
212 |
213 | model.compile(optimizer='Adam',
214 | loss='binary_crossentropy', metrics=['accuracy'])
215 |
216 | history = model.fit(train_data, train_labels,
217 | epochs=epochs,
218 | batch_size=batch_size,
219 | validation_data=(validation_data, validation_labels))
220 |
221 | model.save_weights(top_model_weights_path)
222 |
223 | (eval_loss, eval_accuracy) = model.evaluate(
224 | validation_data, validation_labels,
225 | batch_size=batch_size, verbose=1)
226 |
227 | print("[INFO] accuracy: {:.2f}%".format(eval_accuracy * 100))
228 | print("[INFO] Loss: {}".format(eval_loss))
229 |
230 | train_data.shape[1:]
231 |
232 | plt.figure(1)
233 |
234 | # summarize history for accuracy
235 |
236 | plt.subplot(211)
237 | plt.plot(history.history['acc'])
238 | plt.plot(history.history['val_acc'])
239 | plt.title('model accuracy')
240 | plt.ylabel('accuracy')
241 | plt.xlabel('epoch')
242 | #pylab.ylim([0.4,0.68])
243 | plt.legend(['train', 'test'], loc='upper left')
244 |
245 | ### Plotting the loss history
246 |
247 | import pylab
248 | plt.subplot(212)
249 | plt.plot(history.history['loss'])
250 | plt.plot(history.history['val_loss'])
251 | plt.title('model loss')
252 | plt.ylabel('loss')
253 | plt.xlabel('epoch')
254 | plt.legend(['train', 'test'], loc='upper left')
255 | pylab.xlim([0,60])
256 | # pylab.ylim([0,1000])
257 | plt.show()
258 |
259 | import matplotlib.pyplot as plt
260 | import pylab
261 | # %matplotlib inline  (IPython magic; only valid inside a notebook)
262 | # %config InlineBackend.figure_format = 'retina'
263 | fig = plt.figure()
264 | plt.plot(history.history['loss'])
265 | plt.plot(history.history['val_loss'])
266 | plt.title('Classification Model Loss')
267 | plt.xlabel('Epoch')
268 | plt.ylabel('Loss')
269 | pylab.xlim([0,60])
270 | plt.legend(['Train', 'Validation'], loc='upper right')
271 | fig.savefig('loss.png')
272 | plt.show();
273 |
274 | import matplotlib.pyplot as plt
275 | # %matplotlib inline  (IPython magic; only valid inside a notebook)
276 | # %config InlineBackend.figure_format = 'retina'
277 |
278 | fig = plt.figure(figsize=(15, 15))
279 | plt.plot(history.history['acc'])
280 | plt.plot(history.history['val_acc'])
282 | plt.title('Classification Model Accuracy')
283 | plt.xlabel('Epoch')
284 | plt.ylabel('Accuracy')
285 | pylab.xlim([0,100])
286 | plt.legend(['Train', 'Validation'], loc='lower right')
287 | fig.savefig('acc.png')
288 | plt.show();
289 |
290 |
291 | ### Predictions
292 |
293 | os.listdir(os.path.abspath('train_toy_3/Pierre-Auguste_Renoir'))
294 |
295 | image_path = os.path.abspath('test_toy_3/Pierre-Auguste_Renoir/91485.jpg')
296 | orig = cv2.imread(image_path)
297 | image = load_img(image_path, target_size=(120,120))
298 | image
299 | image = img_to_array(image)
300 | image
301 |
302 | image = image / 255.
303 | image = np.expand_dims(image, axis=0)
304 | image
305 |
306 | # build the InceptionV3 network (a VGG16 base, left commented out, would work analogously)
307 | #model = applications.VGG16(include_top=False, weights='imagenet')
308 | model = applications.InceptionV3(include_top=False, weights='imagenet')
309 | # get the bottleneck prediction from the pre-trained InceptionV3 model
310 | bottleneck_prediction = model.predict(image)
311 |
312 | # build top model
313 | model = Sequential()
314 | model.add(Flatten(input_shape=train_data.shape[1:]))
315 | # model.add(Dense(1024, activation='relu'))
316 | # model.add(Dropout(0.5))
317 | model.add(Dense(512, activation='relu'))
318 | model.add(Dropout(0.5))
319 | model.add(Dense(256, activation='relu'))
320 | model.add(Dropout(0.5))
321 | model.add(Dense(128, activation='relu'))
322 | model.add(Dropout(0.5))
323 | model.add(Dense(64, activation='relu'))
324 | model.add(Dropout(0.5))
325 | model.add(Dense(32, activation='relu'))
326 | model.add(Dropout(0.5))
327 | model.add(Dense(16, activation='relu'))
328 | model.add(Dropout(0.5))
329 | model.add(Dense(8, activation='relu')) # Not valid for minimum = 500
330 | model.add(Dropout(0.5))
331 | # model.add(Dense(4, activation='relu')) # Not valid for minimum = 500
332 | # model.add(Dropout(0.5))
333 | model.add(Dense(num_classes, activation='sigmoid'))
334 |
335 | model.load_weights(top_model_weights_path)
336 |
337 | # use the bottleneck prediction on the top model to get the final classification
338 | class_predicted = model.predict_classes(bottleneck_prediction)
339 |
340 | inID = class_predicted[0]
341 |
342 | class_dictionary = generator_top.class_indices
343 |
344 | inv_map = {v: k for k, v in class_dictionary.items()}
345 |
346 | label = inv_map[inID]
347 |
348 | # get the prediction label
349 | print("Image ID: {}, Label: {}".format(inID, label))
350 |
351 | # display the predictions with the image
352 | cv2.putText(orig, "Predicted: {}".format(label), (10, 30), cv2.FONT_HERSHEY_PLAIN, 1.5, (43, 99, 255), 2)
353 |
354 | cv2.imshow("Classification", orig)
355 | cv2.waitKey(0)
356 | cv2.destroyAllWindows()
--------------------------------------------------------------------------------
/transfer-learning.py:
--------------------------------------------------------------------------------
1 | # Transfer Learning with pre-trained models
2 |
3 | import numpy as np
4 | import keras
5 | from keras.preprocessing import image
6 | from keras.applications.inception_v3 import InceptionV3
7 | from keras.applications.inception_v3 import preprocess_input, decode_predictions
8 |
9 | model = keras.applications.inception_v3.InceptionV3()
10 |
11 | img = image.load_img("lionNN.jpg", target_size=(299, 299))
12 |
13 |
14 | ### Converting to an array and changing the dimension:
15 |
16 | x = image.img_to_array(img)
17 | print(x.shape)
18 | x = np.expand_dims(x, axis=0)
19 | print(x.shape)
20 |
21 | ### Preprocessing and making predictions
22 |
23 | x = preprocess_input(x)
24 |
25 | predictions = model.predict(x)
26 |
27 |
28 | predicted_classes = decode_predictions(predictions, top=9)
29 |
30 | for imagenet_id, name, likelihood in predicted_classes[0]:
31 | print(name,':', likelihood)
32 |
33 |
--------------------------------------------------------------------------------
/transfer-learning/123:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/transfer-learning/README.md:
--------------------------------------------------------------------------------
1 | ## Transfer Learning [[view code]](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/transfer-learning/notebooks/transfer-learning.ipynb)
2 |   
3 |
4 | **The code is available [here](http://nbviewer.jupyter.org/github/marcotav/deep-learning/blob/master/transfer-learning/notebooks/transfer-learning.ipynb) or by clicking on the [view code] link above.**
5 |
6 |
7 |
8 |
9 |
10 |
11 | Keras has many well-known pre-trained image recognition models already built in. These models are trained on images from the ImageNet dataset, a collection of millions of pictures of labeled objects. Transfer learning means reusing these pre-trained models in concert with our own models. If you need to recognize an object not included in the ImageNet set, you can start with a pre-trained model and fine-tune it if need be, which makes things much easier and faster than starting from scratch.
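A few of the built-in models can each be loaded in one line (a minimal sketch; each call downloads the corresponding ImageNet weights on first use):
```
from keras import applications

vgg16 = applications.VGG16(weights='imagenet')
resnet50 = applications.ResNet50(weights='imagenet')
inception_v3 = applications.InceptionV3(weights='imagenet')
```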
12 |
13 | I will illustrate this using the Inception V3 deep neural network model. We first import what we need:
14 | ```
15 | import numpy as np
16 | from keras.preprocessing import image
17 | from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
18 | ```
19 | We then create the model object with `model = InceptionV3()`. Note that the size of the input image and the number of input nodes must match: Inception V3 expects images of 299 by 299 pixels, so depending on the size of your input images you may need to resize them.
20 | ```
21 | img = image.load_img("lionNN.jpg", target_size=(299, 299))
22 | ```
23 | The next step is to convert `img` into a `numpy` array; and since Keras expects a batch (a list of images), we must add a fourth dimension:
24 |
25 | ```
26 | x = image.img_to_array(img)
27 | x = np.expand_dims(x, axis=0)
28 | ```
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 | Pixel values range from zero to 255, but neural nets work best with small numbers, so we must rescale the data before feeding it in. More specifically, we must normalize it to the range used when the network was trained. For Inception V3 this is done with `preprocess_input`, which scales the pixel values to the range [-1, 1]:
37 | ```
38 | x = preprocess_input(x)
39 | ```
40 | Now we run the scaled data through the network. To make a prediction we call `model.predict`, passing in the preprocessed image `x`:
41 | ```
42 | predictions = model.predict(x)
43 | ```
44 | This cell returns a `predictions` object, which is an array of 1,000 floats. These floats represent the likelihood that our image contains each of the 1,000 objects the pre-trained model recognizes. The last step is to look up the names of the predicted classes:
45 | ```
46 | predicted_classes = decode_predictions(predictions, top=9)
47 | ```
48 | The output of the predictions is:
49 | ```
50 | lion : 0.9088954
51 | collie : 0.0037420776
52 | chow : 0.0013897745
53 | leopard : 0.0013692076
54 | stopwatch : 0.00096159696
55 | cheetah : 0.0008766686
56 | Arabian_camel : 0.0006716717
57 | tiger : 0.00063297455
58 | hyena : 0.00061666104
59 | ```
60 |
61 |
--------------------------------------------------------------------------------
/transfer-learning/images/123:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/transfer-learning/images/lionNN.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/transfer-learning/images/lionNN.jpg
--------------------------------------------------------------------------------
/transfer-learning/images/treeNN.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/marcotav/deep-learning/604054d8f728b56bb9dedb1743dbf9e83b5664cf/transfer-learning/images/treeNN.jpg
--------------------------------------------------------------------------------
/transfer-learning/notebooks/123:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------