├── .gitattributes ├── 9781484241486.jpg ├── Contributing.md ├── LICENSE.txt ├── README.md ├── Untitled (1).ipynb ├── Untitled.ipynb ├── Untitled1.ipynb ├── Untitled2.ipynb ├── Untitled3.ipynb └── errata.md /.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto 3 | -------------------------------------------------------------------------------- /9781484241486.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Apress/practical-ml-image-processing/0fff7cba1e7195d6cbb0d1f3f5575709fa9ae1ba/9781484241486.jpg -------------------------------------------------------------------------------- /Contributing.md: -------------------------------------------------------------------------------- 1 | # Contributing to Apress Source Code 2 | 3 | Copyright for Apress source code belongs to the author(s). However, under fair use you are encouraged to fork and contribute minor corrections and updates for the benefit of the author(s) and other readers. 4 | 5 | ## How to Contribute 6 | 7 | 1. Make sure you have a GitHub account. 8 | 2. Fork the repository for the relevant book. 9 | 3. Create a new branch on which to make your change, e.g. 10 | `git checkout -b my_code_contribution` 11 | 4. Commit your change. Include a commit message describing the correction. Please note that if your commit message is not clear, the correction will not be accepted. 12 | 5. Submit a pull request. 13 | 14 | Thank you for your contribution! -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | Freeware License, some rights reserved 2 | 3 | Copyright (c) 2019 Himanshu Singh 4 | 5 | Permission is hereby granted, free of charge, to anyone obtaining a copy 6 | of this software and associated documentation files (the "Software"), 7 | to work with the Software within the limits of freeware distribution and fair use. 8 | This includes the rights to use, copy, and modify the Software for personal use. 9 | Users are also allowed and encouraged to submit corrections and modifications 10 | to the Software for the benefit of other users. 11 | 12 | It is not allowed to reuse, modify, or redistribute the Software for 13 | commercial use in any way, or for a user’s educational materials such as books 14 | or blog articles without prior permission from the copyright holder. 15 | 16 | The above copyright notice and this permission notice need to be included 17 | in all copies or substantial portions of the software. 18 | 19 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 20 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 21 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 22 | AUTHORS OR COPYRIGHT HOLDERS OR APRESS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 23 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 24 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 25 | SOFTWARE. 
26 | 27 | 28 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Apress Source Code 2 | 3 | This repository accompanies [*Practical Machine Learning and Image Processing*](https://www.apress.com/9781484241486) by Himanshu Singh (Apress, 2019). 4 | 5 | [comment]: #cover 6 | ![Cover image](9781484241486.jpg) 7 | 8 | Download the files as a zip using the green button, or clone the repository to your machine using Git. 9 | 10 | ## Releases 11 | 12 | Release v1.0 corresponds to the code in the published book, without corrections or updates. 13 | 14 | ## Contributions 15 | 16 | See the file Contributing.md for more information on how you can contribute to this repository. -------------------------------------------------------------------------------- /Untitled (1).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 32, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "test_img_path = \"camera_cal/calibration1.jpg\"" 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": 34, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import cv2\n", 19 | "import matplotlib.image as mpimg\n", 20 | "img = mpimg.imread(test_img_path)" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 36, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "img_size = (img.shape[1], img.shape[0])" 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "execution_count": 45, 35 | "metadata": {}, 36 | "outputs": [], 37 | "source": [ 38 | "def convert3D_to_2D(path, x, y):\n", 39 | " rwp = np.zeros((y*x, 3), np.float32) #creates a zeros matrix with y*x rows (54 for the 9x6 board) and 3 columns\n", 40 | " tmp = np.mgrid[0:x, 0:y].T.reshape(-1, 2) #creates a grid of board coordinates, with values running from 0 to x-1 and 0 to y-1, then transposes and reshapes it into 54 rows and 2 columns\n",
41 | " rwp[:,:2] = tmp #fill the first two columns of rwp with the grid coordinates; the z column stays 0\n", 42 | " \n", 43 | " rwpoints = [] # 3d points in real world space\n", 44 | " imgpoints = [] # 2d points in image plane.\n", 45 | " \n", 46 | " images = glob.glob(path) #read calibration images\n", 47 | "\n", 48 | " for fname in images: #finding chessboard corners\n", 49 | " img = cv2.imread(fname)\n", 50 | " gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n", 51 | "\n", 52 | " corner_found, corners = cv2.findChessboardCorners(gray, (x,y), None) #positions of internal corners of the chessboard\n", 53 | "\n", 54 | " if corner_found:\n", 55 | " rwpoints.append(rwp)\n", 56 | " imgpoints.append(corners)\n", 57 | "\n", 58 | " cv2.drawChessboardCorners(img, (x,y), corners, corner_found)\n", 59 | "\n", 60 | " return (rwpoints, imgpoints)" 61 | ] 62 | }, 63 | { 64 | "cell_type": "code", 65 | "execution_count": 46, 66 | "metadata": {}, 67 | "outputs": [], 68 | "source": [ 69 | "import numpy as np\n", 70 | "import glob\n", 71 | "import matplotlib.pyplot as plt\n", 72 | "import cv2\n", 73 | "import matplotlib.image as mpimg\n", 74 | "import pickle\n", 75 | "import random\n", 76 | "%matplotlib qt" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": 47, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "(objpoints, imgpoints) = convert3D_to_2D(\"camera_cal/calibration*.jpg\", 9,6)" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": 50, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "_, cm_mtx, dist_cf, rot_vecs, trns_vecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)" 95 | ] 96 | }, 97 | { 98 | "cell_type": "code", 99 | "execution_count": 52, 100 | "metadata": {}, 101 | "outputs": [], 102 | "source": [ 103 | "undst_img = cv2.undistort(img, cm_mtx, dist_cf, None, cm_mtx) #use the camera matrix and distortion coefficients returned by cv2.calibrateCamera" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": 55, 109 | "metadata": {}, 110 | "outputs": [ 111 | { 112 | "data": { 113 | "text/plain": [ 114 | "" 115 | ] 116 | }, 117 | "execution_count": 55, 118 | "metadata": {}, 119 | "output_type": "execute_result" 120 | } 121 | ], 122 | "source": [ 123 | "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))\n", 124 | "ax1.imshow(img)\n", 125 | "ax2.imshow(undst_img)" 126 | ] 127 | }, 128 | { 129 | "cell_type": "code", 130 | "execution_count": null, 131 | "metadata": {}, 132 | "outputs": [], 133 | "source": [] 134 | } 135 | ], 136 | "metadata": { 137 | "kernelspec": { 138 | "display_name": "Python 3", 139 | "language": "python", 140 | "name": "python3" 141 | }, 142 | "language_info": { 143 | "codemirror_mode": { 144 | "name": "ipython", 145 | "version": 3 146 | }, 147 | "file_extension": ".py", 148 | "mimetype": "text/x-python", 149 | "name": "python", 150 | "nbconvert_exporter": "python", 151 | "pygments_lexer": "ipython3", 152 | "version": "3.6.5" 153 | } 154 | }, 155 | "nbformat": 4, 156 | "nbformat_minor": 2 157 | } 158 | -------------------------------------------------------------------------------- /Untitled2.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "SIFT" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 1, 13 | "metadata": {}, 14 | "outputs": [ 15 | { 16 | "name": "stdout", 17 | "output_type": "stream", 18 | "text": [ 19 | "Make Sure that both the images are in the same 
folder\n", 20 | "Displaying SIFT Features\n" 21 | ] 22 | } 23 | ], 24 | "source": [ 25 | "import cv2\n", 26 | "import numpy as np\n", 27 | "import matplotlib.pyplot as plt\n", 28 | "from Sift_Operations import *\n", 29 | "\n", 30 | "print('''Make Sure that both the images are in the same folder''')\n", 31 | "\n", 32 | "x = input(\"Enter First Image Name: \")\n", 33 | "Image1 = cv2.imread(x)\n", 34 | "y = input(\"Enter Second Image Name: \")\n", 35 | "Image2 = cv2.imread(y)\n", 36 | "\n", 37 | "Image1_gray = cv2.cvtColor(Image1, cv2.COLOR_BGR2GRAY)\n", 38 | "Image2_gray = cv2.cvtColor(Image2, cv2.COLOR_BGR2GRAY)\n", 39 | "\n", 40 | "Image1_key_points, Image1_descriptors = extract_sift_features(Image1_gray)\n", 41 | "Image2_key_points, Image2_descriptors = extract_sift_features(Image2_gray)\n", 42 | "\n", 43 | "print( 'Displaying SIFT Features')\n", 44 | "showing_sift_features(Image1_gray, Image1, Image1_key_points);\n", 45 | "\n", 46 | "norm = cv2.NORM_L2\n", 47 | "bruteForce = cv2.BFMatcher(norm)\n", 48 | "\n", 49 | "matches = bruteForce.match(Image1_descriptors, Image2_descriptors)\n", 50 | "\n", 51 | "matches = sorted(matches, key = lambda match:match.distance)\n", 52 | "\n", 53 | "matched_img = cv2.drawMatches(\n", 54 | " Image1, Image1_key_points,\n", 55 | " Image2, Image2_key_points,\n", 56 | " matches[:100], Image2.copy())\n", 57 | "\n", 58 | "#plt.figure(figsize=(100,300))\n", 59 | "cv2.imwrite(\"kp.jpg\",matched_img);" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "metadata": {}, 65 | "source": [ 66 | "RANSAC" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 2, 72 | "metadata": {}, 73 | "outputs": [ 74 | { 75 | "ename": "TypeError", 76 | "evalue": "'int' object is not callable", 77 | "output_type": "error", 78 | "traceback": [ 79 | "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", 80 | "\u001b[1;31mTypeError\u001b[0m Traceback (most recent call last)", 81 | "\u001b[1;32m\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[0;32m 6\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 7\u001b[0m \u001b[0mimage\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mcv2\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mimread\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"shelf.jpg\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 8\u001b[1;33m \u001b[0mio\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mimsave\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"1.jpg\"\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mcv2\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mRANSAC\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mimage\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m", 82 | "\u001b[1;31mTypeError\u001b[0m: 'int' object is not callable" 83 | ] 84 | } 85 | ], 86 | "source": [ 87 | "from skimage import feature, color, transform, io\n", 88 | "import numpy as np\n", 89 | "import logging\n", 90 | "import cv2\n", 91 | "from Ransac_Operations import *\n", 92 | "\n", 93 | "image = cv2.imread(\"shelf.jpg\")\n", 94 | "io.imsave(\"1.jpg\", rectify_image(image))" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "Image Registration" 102 | ] 103 | }, 104 | { 105 | "cell_type": "raw", 106 | "metadata": {}, 107 | "source": [] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": null, 112 | "metadata": {}, 113 | "outputs": [ 114 | { 115 | "name": "stderr", 116 | "output_type": "stream", 117 | "text": [ 118 | 
"C:\\Users\\Sohails\\Himanshu\\Image Processing\\Affine.py:24: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.\n", 119 | "To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.\n", 120 | " theta = np.linalg.lstsq(M, b)[0]\n" 121 | ] 122 | } 123 | ], 124 | "source": [ 125 | "import numpy as np\n", 126 | "import cv2\n", 127 | "from Ransac import *\n", 128 | "from Affine import *\n", 129 | "from Align import *\n", 130 | "\n", 131 | "img_source = cv2.imread(\"2.jpg\")\n", 132 | "img_target = cv2.imread(\"target.jpg\")\n", 133 | "keypoint_source, descriptor_source = extract_SIFT(img_source)\n", 134 | "keypoint_target, descriptor_target = extract_SIFT(img_target)\n", 135 | "pos = match_SIFT(descriptor_source, descriptor_target)\n", 136 | "H = affine_matrix(keypoint_source, keypoint_target, pos)\n", 137 | "\n", 138 | "rows, cols, _ = img_target.shape\n", 139 | "warp = cv2.warpAffine(img_source, H, (cols, rows))\n", 140 | "merge = np.uint8(img_target * 0.5 + warp * 0.5)\n", 141 | "cv2.imshow('img', merge)\n", 142 | "cv2.waitKey(0)\n", 143 | "cv2.destroyAllWindows()" 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": 7, 149 | "metadata": {}, 150 | "outputs": [ 151 | { 152 | "data": { 153 | "text/plain": [ 154 | "(2, 2239)" 155 | ] 156 | }, 157 | "execution_count": 7, 158 | "metadata": {}, 159 | "output_type": "execute_result" 160 | } 161 | ], 162 | "source": [ 163 | "keypoint_target.shape" 164 | ] 165 | }, 166 | { 167 | "cell_type": "markdown", 168 | "metadata": {}, 169 | "source": [ 170 | "Stitching" 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "MNIST - CNN" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "execution_count": 1, 183 | "metadata": {}, 184 | "outputs": [ 185 | { 186 | "name": "stderr", 187 | "output_type": "stream", 188 | "text": [ 189 | "C:\\Users\\Sohails\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\n", 190 | " from ._conv import register_converters as _register_converters\n", 191 | "Using TensorFlow backend.\n" 192 | ] 193 | }, 194 | { 195 | "name": "stdout", 196 | "output_type": "stream", 197 | "text": [ 198 | "Train on 60000 samples, validate on 10000 samples\n", 199 | "Epoch 1/12\n", 200 | "60000/60000 [==============================] - 182s 3ms/step - loss: 0.2589 - acc: 0.9206 - val_loss: 0.0558 - val_acc: 0.9820\n", 201 | "Epoch 2/12\n", 202 | "60000/60000 [==============================] - 180s 3ms/step - loss: 0.0868 - acc: 0.9748 - val_loss: 0.0425 - val_acc: 0.9853\n", 203 | "Epoch 3/12\n", 204 | "60000/60000 [==============================] - 182s 3ms/step - loss: 0.0664 - acc: 0.9804 - val_loss: 0.0327 - val_acc: 0.9883\n", 205 | "Epoch 4/12\n", 206 | "60000/60000 [==============================] - 187s 3ms/step - loss: 0.0537 - acc: 0.9839 - val_loss: 0.0311 - val_acc: 0.9889\n", 207 | "Epoch 5/12\n", 208 | "60000/60000 [==============================] - 197s 3ms/step - loss: 0.0459 - acc: 0.9862 - val_loss: 0.0310 - val_acc: 0.9893\n", 209 | "Epoch 6/12\n", 210 | "60000/60000 [==============================] - 187s 3ms/step - loss: 0.0407 - acc: 0.9875 - val_loss: 0.0304 - val_acc: 0.9897\n", 211 | "Epoch 7/12\n", 212 | "60000/60000 [==============================] - 187s 3ms/step - loss: 0.0364 - acc: 0.9895 - val_loss: 0.0294 - val_acc: 0.9901\n", 213 | "Epoch 8/12\n", 214 | "60000/60000 [==============================] - 183s 3ms/step - loss: 0.0340 - acc: 0.9898 - val_loss: 0.0290 - val_acc: 0.9907\n", 215 | "Epoch 9/12\n", 216 | "60000/60000 [==============================] - 183s 3ms/step - loss: 0.0311 - acc: 0.9907 - val_loss: 0.0265 - val_acc: 0.9917\n", 217 | "Epoch 10/12\n", 218 | "60000/60000 [==============================] - 183s 3ms/step - loss: 0.0285 - acc: 0.9915 - val_loss: 0.0252 - val_acc: 0.9920\n", 219 | "Epoch 11/12\n", 220 | "60000/60000 [==============================] - 184s 3ms/step - loss: 0.0275 - acc: 0.9914 - val_loss: 0.0264 - val_acc: 0.9920\n", 221 | "Epoch 12/12\n", 222 | "60000/60000 [==============================] - 184s 3ms/step - loss: 0.0264 - acc: 0.9921 - val_loss: 0.0264 - val_acc: 0.9918\n", 223 | "Test loss: 0.026368951098990512\n", 224 | "Test accuracy: 0.9918\n" 225 | ] 226 | } 227 | ], 228 | "source": [ 229 | "import keras\n", 230 | "from keras.models import Sequential\n", 231 | "from keras.layers import Dense, Dropout, Flatten\n", 232 | "from keras.layers import Conv2D, MaxPooling2D\n", 233 | "from Load_and_Preprocess import *\n", 234 | "\n", 235 | "x_train,x_test,y_train,y_test, input_shape = load_and_preprocess()\n", 236 | "num_classes=10\n", 237 | "\n", 238 | "model = Sequential()\n", 239 | "model.add(Conv2D(32, kernel_size=(3, 3),\n", 240 | " activation='relu',\n", 241 | " input_shape=input_shape))\n", 242 | "\n", 243 | "model.add(Conv2D(64, (3, 3), activation='relu'))\n", 244 | "\n", 245 | "model.add(MaxPooling2D(pool_size=(2, 2)))\n", 246 | "\n", 247 | "model.add(Dropout(0.25))\n", 248 | "\n", 249 | "model.add(Flatten())\n", 250 | "\n", 251 | "model.add(Dense(128, activation='relu'))\n", 252 | "\n", 253 | "model.add(Dropout(0.5))\n", 254 | "\n", 255 | "model.add(Dense(num_classes, activation='softmax'))\n", 256 | "\n", 257 | "model.compile(loss=keras.losses.categorical_crossentropy,\n", 258 | " optimizer=keras.optimizers.Adadelta(),\n", 259 | " metrics=['accuracy'])\n", 260 | "\n", 261 | "model.fit(x_train, y_train,\n", 262 | " 
batch_size=128,\n", 263 | " epochs=12,\n", 264 | " validation_data=(x_test, y_test))\n", 265 | "\n", 266 | "score = model.evaluate(x_test, y_test, verbose=0)\n", 267 | "print('Test loss:', score[0])\n", 268 | "print('Test accuracy:', score[1])" 269 | ] 270 | }, 271 | { 272 | "cell_type": "markdown", 273 | "metadata": {}, 274 | "source": [ 275 | "MNIST - ANN" 276 | ] 277 | }, 278 | { 279 | "cell_type": "code", 280 | "execution_count": 5, 281 | "metadata": {}, 282 | "outputs": [ 283 | { 284 | "name": "stdout", 285 | "output_type": "stream", 286 | "text": [ 287 | "Epoch 1/10\n", 288 | "42000/42000 [==============================] - 55s 1ms/step - loss: 0.0207 - acc: 0.8769\n", 289 | "Epoch 2/10\n", 290 | "42000/42000 [==============================] - 58s 1ms/step - loss: 0.0091 - acc: 0.9505\n", 291 | "Epoch 3/10\n", 292 | "42000/42000 [==============================] - 57s 1ms/step - loss: 0.0067 - acc: 0.9643\n", 293 | "Epoch 4/10\n", 294 | "42000/42000 [==============================] - 51s 1ms/step - loss: 0.0053 - acc: 0.9713\n", 295 | "Epoch 5/10\n", 296 | "42000/42000 [==============================] - 52s 1ms/step - loss: 0.0043 - acc: 0.9771\n", 297 | "Epoch 6/10\n", 298 | "42000/42000 [==============================] - 59s 1ms/step - loss: 0.0036 - acc: 0.9806\n", 299 | "Epoch 7/10\n", 300 | "42000/42000 [==============================] - 59s 1ms/step - loss: 0.0031 - acc: 0.9837\n", 301 | "Epoch 8/10\n", 302 | "42000/42000 [==============================] - 59s 1ms/step - loss: 0.0026 - acc: 0.9857\n", 303 | "Epoch 9/10\n", 304 | "42000/42000 [==============================] - 59s 1ms/step - loss: 0.0023 - acc: 0.9876\n", 305 | "Epoch 10/10\n", 306 | "42000/42000 [==============================] - 60s 1ms/step - loss: 0.0020 - acc: 0.9895\n" 307 | ] 308 | } 309 | ], 310 | "source": [ 311 | "import pandas as pd\n", 312 | "import keras\n", 313 | "from keras.models import Sequential\n", 314 | "from keras.layers import Dense\n", 315 | "\n", 316 | "input_data = pd.read_csv(\"train.csv\")\n", 317 | "\n", 318 | "y = input_data['label']\n", 319 | "input_data.drop('label',axis=1,inplace = True)\n", 320 | "X = input_data\n", 321 | "y = pd.get_dummies(y)\n", 322 | "\n", 323 | "classifier = Sequential()\n", 324 | "classifier.add(Dense(units = 600, kernel_initializer = 'uniform', activation = 'relu', input_dim = 784))\n", 325 | "classifier.add(Dense(units = 400, kernel_initializer = 'uniform', activation = 'relu'))\n", 326 | "classifier.add(Dense(units = 200, kernel_initializer = 'uniform', activation = 'relu'))\n", 327 | "classifier.add(Dense(units = 10, kernel_initializer = 'uniform', activation = 'sigmoid'))\n", 328 | "classifier.compile(optimizer = 'sgd', loss = 'mean_squared_error', metrics = ['accuracy'])\n", 329 | "\n", 330 | "classifier.fit(X, y, batch_size = 10, epochs = 10)\n", 331 | "\n", 332 | "test_data = pd.read_csv(\"test.csv\")\n", 333 | "y_pred = classifier.predict(test_data)\n" 334 | ] 335 | }, 336 | { 337 | "cell_type": "code", 338 | "execution_count": 1, 339 | "metadata": {}, 340 | "outputs": [], 341 | "source": [ 342 | "import pandas as pd\n", 343 | "data = pd.read_csv(\"train.csv\")" 344 | ] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "execution_count": 2, 349 | "metadata": {}, 350 | "outputs": [], 351 | "source": [ 352 | "y = data['label']\n", 353 | "data.drop('label',axis=1,inplace = True)\n", 354 | "X = data\n", 355 | "y = pd.Categorical(y)" 356 | ] 357 | }, 358 | { 359 | "cell_type": "code", 360 | "execution_count": 3, 361 | "metadata": {}, 362 | "outputs": [], 
363 | "source": [ 364 | "from sklearn.linear_model import LogisticRegression\n", 365 | "from sklearn.tree import DecisionTreeClassifier\n", 366 | "from sklearn.svm import LinearSVC" 367 | ] 368 | }, 369 | { 370 | "cell_type": "code", 371 | "execution_count": 4, 372 | "metadata": {}, 373 | "outputs": [], 374 | "source": [ 375 | "logreg = LogisticRegression()\n", 376 | "dt = DecisionTreeClassifier()\n", 377 | "svc = LinearSVC()" 378 | ] 379 | }, 380 | { 381 | "cell_type": "code", 382 | "execution_count": 5, 383 | "metadata": {}, 384 | "outputs": [], 385 | "source": [ 386 | "model_logreg = logreg.fit(X,y)\n", 387 | "model_dt = dt.fit(X,y)\n", 388 | "model_svc = svc.fit(X,y)" 389 | ] 390 | }, 391 | { 392 | "cell_type": "code", 393 | "execution_count": 6, 394 | "metadata": {}, 395 | "outputs": [], 396 | "source": [ 397 | "X_test = pd.read_csv(\"test.csv\")\n", 398 | "pred_logreg = model_logreg.predict(X_test)\n", 399 | "pred_dt = model_dt.predict(X_test)\n", 400 | "pred_svc = model_svc.predict(X_test)" 401 | ] 402 | }, 403 | { 404 | "cell_type": "code", 405 | "execution_count": 12, 406 | "metadata": {}, 407 | "outputs": [], 408 | "source": [ 409 | "pred_logreg = model_logreg.predict(X)\n", 410 | "pred_dt = model_dt.predict(X)\n", 411 | "pred_svc = model_svc.predict(X)" 412 | ] 413 | }, 414 | { 415 | "cell_type": "code", 416 | "execution_count": 13, 417 | "metadata": {}, 418 | "outputs": [ 419 | { 420 | "name": "stdout", 421 | "output_type": "stream", 422 | "text": [ 423 | "Decision Tree Accuracy is: 100.0\n", 424 | "Logistic Regression Accuracy is: 93.8547619047619\n", 425 | "Support Vector Machine Accuracy is: 88.26190476190476\n" 426 | ] 427 | } 428 | ], 429 | "source": [ 430 | "from sklearn.metrics import accuracy_score\n", 431 | "print(\"Decision Tree Accuracy is: \", accuracy_score(pred_dt, y)*100)\n", 432 | "print(\"Logistic Regression Accuracy is: \", accuracy_score(pred_logreg, y)*100)\n", 433 | "print(\"Support Vector Machine Accuracy is: \", accuracy_score(pred_svc, y)*100)" 434 | ] 435 | }, 436 | { 437 | "cell_type": "code", 438 | "execution_count": null, 439 | "metadata": {}, 440 | "outputs": [], 441 | "source": [] 442 | } 443 | ], 444 | "metadata": { 445 | "kernelspec": { 446 | "display_name": "Python 3", 447 | "language": "python", 448 | "name": "python3" 449 | }, 450 | "language_info": { 451 | "codemirror_mode": { 452 | "name": "ipython", 453 | "version": 3 454 | }, 455 | "file_extension": ".py", 456 | "mimetype": "text/x-python", 457 | "name": "python", 458 | "nbconvert_exporter": "python", 459 | "pygments_lexer": "ipython3", 460 | "version": "3.6.5" 461 | } 462 | }, 463 | "nbformat": 4, 464 | "nbformat_minor": 2 465 | } 466 | -------------------------------------------------------------------------------- /errata.md: -------------------------------------------------------------------------------- 1 | # Errata for *Practical Machine Learning and Image Processing* 2 | 3 | On **page xx** [Summary of error]: 4 | 5 | Details of error here. Highlight key pieces in **bold**. 6 | 7 | *** 8 | 9 | On **page xx** [Summary of error]: 10 | 11 | Details of error here. Highlight key pieces in **bold**. 12 | 13 | *** --------------------------------------------------------------------------------
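
A note on the calibration notebook (Untitled (1).ipynb) above: `cv2.calibrateCamera` returns the camera matrix and distortion coefficients as `cm_mtx` and `dist_cf`, and the `cv2.undistort` call must use those same names. The following consolidates the notebook's cells into one runnable script; it is a minimal sketch that assumes the book's `camera_cal/calibration*.jpg` chessboard images with 9x6 inner corners.

```python
# Consolidated camera-calibration sketch based on the cells in Untitled (1).ipynb.
# Assumes chessboard images matching camera_cal/calibration*.jpg with 9x6 inner corners.
import glob
import cv2
import numpy as np

nx, ny = 9, 6  # inner corners per row and per column
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)  # (x, y, 0) board coordinates

objpoints, imgpoints = [], []  # 3-D board points and matching 2-D image points
for fname in glob.glob("camera_cal/calibration*.jpg"):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

img = cv2.imread("camera_cal/calibration1.jpg")
img_size = (img.shape[1], img.shape[0])  # (width, height)
ret, cam_mtx, dist_cf, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, img_size, None, None)
undistorted = cv2.undistort(img, cam_mtx, dist_cf, None, cam_mtx)
cv2.imwrite("undistorted.jpg", undistorted)
```

Passing `cam_mtx` again as the new camera matrix keeps the undistorted output at the original image scale.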
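
The SIFT cell in Untitled2.ipynb does `from Sift_Operations import *`, but no `Sift_Operations.py` ships with this repository. The helpers below are hypothetical reconstructions of what `extract_sift_features` and `showing_sift_features` plausibly do, not the author's originals; they assume an OpenCV build with SIFT available (`cv2.xfeatures2d.SIFT_create()` on the 3.x contrib builds contemporary with the book, `cv2.SIFT_create()` on 4.4 and later).

```python
# Hypothetical stand-ins for the missing Sift_Operations module used by Untitled2.ipynb.
import cv2
import matplotlib.pyplot as plt

def extract_sift_features(gray_img):
    # cv2.SIFT_create() needs OpenCV >= 4.4; on older contrib builds use
    # cv2.xfeatures2d.SIFT_create() instead.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_img, None)
    return keypoints, descriptors

def showing_sift_features(gray_img, color_img, keypoints):
    # Draw the detected keypoints (with size and orientation) over the colour image.
    annotated = cv2.drawKeypoints(color_img, keypoints, None,
                                  flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    plt.imshow(cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB))
    plt.show()
```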
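
The image-registration cell likewise imports `extract_SIFT`, `match_SIFT`, and `affine_matrix` from `Ransac`, `Affine`, and `Align` modules that are not in the repository (the `Affine.py` path in the saved `np.linalg.lstsq` warning shows they lived beside the notebooks, with `affine_matrix` solving for the transform by least squares inside a RANSAC loop). A hedged equivalent that swaps the missing helpers for OpenCV's built-in RANSAC affine estimator:

```python
# Registration sketch replacing the missing helpers with cv2.estimateAffine2D,
# which runs RANSAC internally and returns a 2x3 affine matrix plus an inlier mask.
import cv2
import numpy as np

img_source = cv2.imread("2.jpg")
img_target = cv2.imread("target.jpg")

sift = cv2.SIFT_create()
gray_s = cv2.cvtColor(img_source, cv2.COLOR_BGR2GRAY)
gray_t = cv2.cvtColor(img_target, cv2.COLOR_BGR2GRAY)
kp_s, des_s = sift.detectAndCompute(gray_s, None)
kp_t, des_t = sift.detectAndCompute(gray_t, None)

# Brute-force L2 matching, best matches first (as in the SIFT cell above)
matches = sorted(cv2.BFMatcher(cv2.NORM_L2).match(des_s, des_t),
                 key=lambda m: m.distance)
src_pts = np.float32([kp_s[m.queryIdx].pt for m in matches])
dst_pts = np.float32([kp_t[m.trainIdx].pt for m in matches])

# RANSAC-estimated affine transform, analogous to the notebook's affine_matrix(...)
H, inliers = cv2.estimateAffine2D(src_pts, dst_pts, method=cv2.RANSAC)

rows, cols, _ = img_target.shape
warp = cv2.warpAffine(img_source, H, (cols, rows))
merge = np.uint8(img_target * 0.5 + warp * 0.5)  # 50/50 blend to eyeball alignment
cv2.imwrite("registered.jpg", merge)
```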
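
The MNIST CNN cell depends on `load_and_preprocess()` from a missing `Load_and_Preprocess` module. Judging from the saved output ("Train on 60000 samples, validate on 10000 samples") it wraps `keras.datasets.mnist`; the following is a plausible reconstruction mirroring the standard Keras MNIST preprocessing, not the author's exact code.

```python
# A guess at the missing Load_and_Preprocess module used by the MNIST CNN cell:
# load MNIST, reshape for the backend's channel ordering, scale pixels to [0, 1],
# and one-hot encode the labels.
import keras
from keras import backend as K
from keras.datasets import mnist

def load_and_preprocess(num_classes=10):
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    img_rows, img_cols = 28, 28
    if K.image_data_format() == 'channels_first':
        x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
        x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
        x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)
    x_train = x_train.astype('float32') / 255
    x_test = x_test.astype('float32') / 255
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)
    return x_train, x_test, y_train, y_test, input_shape
```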
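
One caveat on the classifier comparison at the end of Untitled2.ipynb: the final accuracy cell scores predictions made on the same `train.csv` rows the models were fit on, which is why the decision tree reports 100.0. That is training accuracy, not generalization. A minimal sketch of a held-out evaluation, assuming the same Kaggle-style `train.csv` with a `label` column:

```python
# The 100% decision-tree score above is training accuracy (the model predicts the
# very rows it was fit on). Scoring on a held-out split gives an honest estimate.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("train.csv")
y = data.pop('label')  # remove the label column and keep it as the target
X = data

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
dt = DecisionTreeClassifier().fit(X_tr, y_tr)
print("Held-out accuracy:", accuracy_score(y_val, dt.predict(X_val)) * 100)
```

The same caution applies to the ANN cell: sigmoid outputs with mean-squared-error loss will train, but softmax with categorical cross-entropy is the conventional pairing for 10-class digit classification.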