├── Artificial_Neural_Networks_Forward_Propagation.ipynb ├── Artificial_Neural_Networks_Forward_Propagation_1.ipynb ├── Classification_Models_with_Keras.ipynb ├── Classification_Models_with_Keras_(0.1).ipynb ├── Convolutional_Neural_Networks_with_Keras.ipynb ├── Modeling_Concrete_Compressive_Strength_Using_Deep_Learning_Regression_with_Keras.ipynb ├── Modeling_Concrete_Compressive_Strength_Using_Deep_Learning_Regression_with_Keras_1.ipynb ├── Peer-graded Assignment_ Build a Regression Model in Keras (A).ipynb ├── Peer-graded Assignment_ Build a Regression Model in Keras (B).ipynb ├── Peer-graded Assignment_ Build a Regression Model in Keras (C).ipynb ├── README.md ├── Regression_model_with_Keras.ipynb └── courseproject.ipynb /Classification_Models_with_Keras.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "provenance": [] 7 | }, 8 | "kernelspec": { 9 | "name": "python3", 10 | "display_name": "Python 3" 11 | }, 12 | "language_info": { 13 | "name": "python" 14 | } 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "markdown", 19 | "source": [ 20 | "* To use the Keras library to build models for classification problems. We will use the popular MNIST dataset, a dataset of images, for a change.\n", 21 | "\n", 22 | "* The MNIST database, short for Modified National Institute of Standards and Technology database, is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning.\n", 23 | "\n", 24 | "* The MNIST database contains 60,000 training images and 10,000 testing images of digits written by high school students and employees of the United States Census Bureau.\n", 25 | "\n", 26 | "* Also, to compare how conventional neural networks perform against convolutional neural networks." 27 | ], 28 | "metadata": { 29 | "id": "vOPVD65ULwSk" 30 | } 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "source": [ 35 | "# Classification Models with Keras" 36 | ], 37 | "metadata": { 38 | "id": "8FBjeIo5MAZs" 39 | } 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "source": [ 44 | "**Import Keras and Packages**" 45 | ], 46 | "metadata": { 47 | "id": "RQyC6dLHMEUy" 48 | } 49 | }, 50 | { 51 | "cell_type": "code", 52 | "source": [ 53 | "# All libraries required for this lab are listed below. The libraries pre-installed on Skills Network Labs are commented.\n", 54 | "# If you run this notebook on a different environment, e.g. your desktop, you may need to uncomment and install certain libraries.\n", 55 | "\n", 56 | "#!pip install numpy==1.21.4\n", 57 | "#!pip install pandas==1.3.4\n", 58 | "#!pip install keras==2.1.6\n", 59 | "#!pip install matplotlib==3.5.0" 60 | ], 61 | "metadata": { 62 | "id": "crbisaoDMHdo" 63 | }, 64 | "execution_count": 1, 65 | "outputs": [] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "source": [ 70 | "import keras\n", 71 | "\n", 72 | "from keras.models import Sequential\n", 73 | "from keras.layers import Dense\n", 74 | "from keras.utils import to_categorical" 75 | ], 76 | "metadata": { 77 | "id": "zvhXEIbrMJm9" 78 | }, 79 | "execution_count": 2, 80 | "outputs": [] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "source": [ 85 | "Load the MNIST dataset from the Keras library. The dataset is readily divided into a training set and a test set." 
86 | ], 87 | "metadata": { 88 | "id": "P3BvYg7aMjFF" 89 | } 90 | }, 91 | { 92 | "cell_type": "code", 93 | "source": [ 94 | "import matplotlib.pyplot as plt" 95 | ], 96 | "metadata": { 97 | "id": "o7s2oDUYM6pK" 98 | }, 99 | "execution_count": 6, 100 | "outputs": [] 101 | }, 102 | { 103 | "cell_type": "code", 104 | "source": [ 105 | "# import the data\n", 106 | "from keras.datasets import mnist\n", 107 | "\n", 108 | "# read the data\n", 109 | "(X_train, y_train), (X_test, y_test) = mnist.load_data()" 110 | ], 111 | "metadata": { 112 | "colab": { 113 | "base_uri": "https://localhost:8080/" 114 | }, 115 | "id": "E6EX6OzxMj0p", 116 | "outputId": "62963309-1829-4418-aa74-e8d21460d973" 117 | }, 118 | "execution_count": 3, 119 | "outputs": [ 120 | { 121 | "output_type": "stream", 122 | "name": "stdout", 123 | "text": [ 124 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n", 125 | "\u001b[1m11490434/11490434\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 0us/step\n" 126 | ] 127 | } 128 | ] 129 | }, 130 | { 131 | "cell_type": "markdown", 132 | "source": [ 133 | "Let's confirm the number of images in each set. According to the dataset's documentation, we should have 60000 images in X_train and 10000 images in X_test." 134 | ], 135 | "metadata": { 136 | "id": "R3agenxKMnnP" 137 | } 138 | }, 139 | { 140 | "cell_type": "code", 141 | "source": [ 142 | "X_train.shape" 143 | ], 144 | "metadata": { 145 | "colab": { 146 | "base_uri": "https://localhost:8080/" 147 | }, 148 | "id": "B_gVP1zzMoQh", 149 | "outputId": "b501536e-29a4-4772-9799-2860c7d71d4b" 150 | }, 151 | "execution_count": 4, 152 | "outputs": [ 153 | { 154 | "output_type": "execute_result", 155 | "data": { 156 | "text/plain": [ 157 | "(60000, 28, 28)" 158 | ] 159 | }, 160 | "metadata": {}, 161 | "execution_count": 4 162 | } 163 | ] 164 | }, 165 | { 166 | "cell_type": "markdown", 167 | "source": [ 168 | "The first number in the output tuple is the number of images, and the other two numbers are the size of the images in the dataset. So, each image is 28 pixels by 28 pixels." 169 | ], 170 | "metadata": { 171 | "id": "T_QJtK3BMrrU" 172 | } 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "source": [ 177 | "Let's visualize the first image in the training set using Matplotlib's scripting layer." 178 | ], 179 | "metadata": { 180 | "id": "dRnJKVpSMziD" 181 | } 182 | }, 183 | { 184 | "cell_type": "code", 185 | "source": [ 186 | "plt.imshow(X_train[0])" 187 | ], 188 | "metadata": { 189 | "colab": { 190 | "base_uri": "https://localhost:8080/", 191 | "height": 447 192 | }, 193 | "id": "J69UER-XM0Jf", 194 | "outputId": "951c822f-888b-456f-c35f-b928beb253b6" 195 | }, 196 | "execution_count": 7, 197 | "outputs": [ 198 | { 199 | "output_type": "execute_result", 200 | "data": { 201 | "text/plain": [ 202 | "" 203 | ] 204 | }, 205 | "metadata": {}, 206 | "execution_count": 7 207 | }, 208 | { 209 | "output_type": "display_data", 210 | "data": { 211 | "text/plain": [ 212 | "
" 213 | ], 214 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAaAAAAGdCAYAAABU0qcqAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAcTUlEQVR4nO3df3DU9b3v8dcCyQqaLI0hv0rAgD+wAvEWJWZAxJJLSOc4gIwHf3QGvF4cMXiKaPXGUZHWM2nxjrV6qd7TqURnxB+cEaiO5Y4GE441oQNKGW7blNBY4iEJFSe7IUgIyef+wXXrQgJ+1l3eSXg+Zr4zZPf75vvx69Znv9nNNwHnnBMAAOfYMOsFAADOTwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYGGG9gFP19vbq4MGDSktLUyAQsF4OAMCTc04dHR3Ky8vTsGH9X+cMuAAdPHhQ+fn51ssAAHxDzc3NGjt2bL/PD7gApaWlSZJm6vsaoRTj1QAAfJ1Qtz7QO9H/nvcnaQFat26dnnrqKbW2tqqwsFDPPfecpk+ffta5L7/tNkIpGhEgQAAw6Pz/O4ye7W2UpHwI4fXXX9eqVau0evVqffTRRyosLFRpaakOHTqUjMMBAAahpATo6aef1rJly3TnnXfqO9/5jl544QWNGjVKL774YjIOBwAYhBIeoOPHj2vXrl0qKSn5x0GGDVNJSYnq6upO27+rq0uRSCRmAwAMfQkP0Geffaaenh5lZ2fHPJ6dna3W1tbT9q+srFQoFIpufAIOAM4P5j+IWlFRoXA4HN2am5utlwQAOAcS/im4zMxMDR8+XG1tbTGPt7W1KScn57T9g8GggsFgopcBABjgEn4FlJqaqmnTpqm6ujr6WG9vr6qrq1VcXJzowwEABqmk/BzQqlWrtGTJEl1zzTWaPn26nnnmGXV2durOO+9MxuEAAINQUgK0ePFi/f3vf9fjjz+u1tZWXX311dq6detpH0wAAJy/As45Z72Ir4pEIgqFQpqt+dwJAQAGoROuWzXaonA4rPT09H73M/8UHADg/ESAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYGGG9AGAgCYzw/5/E8DGZSVhJYjQ8eElccz2jer1nxk885D0z6t6A90zr06neMx9d87r3jCR91tPpPVO08QHvmUtX1XvPDAVcAQEATBAgAICJhAfoiSeeUCAQiNkmTZqU6MMAAAa5pLwHdNVVV+m99977x0Hi+L46AGBoS0oZRowYoZycnGT81QCAISIp7wHt27dPeXl5mjBhgu644w4dOHCg3327uroUiURiNgDA0JfwABUVFamqqkpbt27V888/r6amJl1//fXq6Ojoc//KykqFQqHolp+fn+glAQAGoIQHqKysTLfccoumTp2q0tJSvfPOO2pvb9cbb7zR5/4VFRUKh8PRrbm5OdFLAgAMQEn/dMDo0aN1+eWXq7Gxsc/ng8GggsFgspcBABhgkv5zQEeOHNH+/fuVm5ub7EMBAAaRhAfowQcfVG1trT755BN9+OGHWrhwoYYPH67bbrst0YcCAAxiCf8W3KeffqrbbrtNhw8f1pgxYzRz5kzV19drzJgxiT4UAGAQS3iAXnvttUT/lRighl95mfeMC6Z4zxy8YbT3zBfX+d9EUpIyQv5z/1EY340uh5rfHk3znvnZ/5rnPbNjygbvmabuL7xnJOmnbf/VeybvP1xcxzofcS84AIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMBE0n8hHQa+ntnfjWvu6ap13jOXp6TGdSycW92ux3vm8eeWes+M6PS/cWfxxhXeM2n/ecJ7RpKCn/nfxHTUzh1xHet8xBUQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATHA3bCjYcDCuuV3H8r1nLk9pi+tYQ80DLdd5z/z1SKb3TNXEf/eekaRwr/9dqrOf/TCuYw1k/mcBPrgCAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMcDNS6ERLa1xzz/3sFu+Zf53X6T0zfM9F3jN/uPc575l4PfnZVO+ZxpJR3jM97S3eM7cX3+s9I0mf/Iv/TIH+ENexcP7iCggAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMMHNSBG3jPV13jNj3rrYe6bn8OfeM1dN/m/eM5L0f2e96D3zm3+7wXsmq/1D75l4BOriu0Fogf+/WsAbV0AAABMECABgwjtA27dv10033aS8vDwFAgFt3rw55nnnnB5//HHl5uZq5MiRKikp0b59+xK1XgDAEOEdoM7OThUWFmrdunV9Pr927Vo9++yzeuGFF7Rjxw5deOGFKi0t1bFjx77xYgEAQ4f3hxDKyspUVlbW53POOT3zzDN69NFHNX/+fEnSyy+/rOzsbG3evFm33nrrN1stAGDISOh7QE1NTWptbVVJSUn0sVAopKKiItXV9f2xmq6uLkUikZgNADD0JTRAra2tkqTs7OyYx7Ozs6PPnaqyslKhUCi65efnJ3JJAIAByvxTcBUVFQqHw9GtubnZekkAgHMgoQHKycmRJLW1tcU83tbWFn3uVMFgUOnp6TEbAGDoS2iACgoKlJOTo+rq6uhjkUhEO3bsUHFxcSIPBQAY5Lw/BXfkyBE1NjZGv25qatLu3buVkZGhcePGaeXKlXryySd12WWXqaCgQI899pjy8vK0YMGCRK4bADDIeQdo586duvHGG6Nfr1q1SpK0ZMkSVVVV6aGHHlJnZ6fuvvtutbe3a+bMmdq6dasuuOCCxK0aADDoBZxzznoRXxWJRBQKhTRb8zUikGK9HAxSf/nf18Y3908veM/c+bc53jN/n9nhPaPeHv8ZwMAJ160abVE4HD7j+/rmn4IDAJyfCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYML71zEAg8GVD/8lrrk7p/jf2Xr9+Oqz73SKG24p955Je73eewYYyLgCAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBM
cDNSDEk97eG45g4vv9J75sBvvvCe+R9Pvuw9U/HPC71n3Mch7xlJyv/XOv8h5+I6Fs5fXAEBAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACa4GSnwFb1/+JP3zK1rfuQ988rq/+k9s/s6/xuY6jr/EUm66sIV3jOX/arFe+bEXz/xnsHQwRUQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGAi4Jxz1ov4qkgkolAopNmarxGBFOvlAEnhZlztPZP+00+9Z16d8H+8Z+I16f3/7j1zxZqw90zPvr96z+DcOuG6VaMtCofDSk9P73c/roAAACYIEADAhHeAtm/frptuukl5eXkKBALavHlzzPNLly5VIBCI2ebNm5eo9QIAhgjvAHV2dqqwsFDr1q3rd5958+appaUlur366qvfaJEAgKHH+zeilpWVqays7Iz7BINB5eTkxL0oAMDQl5T3gGpqapSVlaUrrrhCy5cv1+HDh/vdt6urS5FIJGYDAAx9CQ/QvHnz9PLLL6u6ulo/+9nPVFtbq7KyMvX09PS5f2VlpUKhUHTLz89P9JIAAAOQ97fgzubWW2+N/nnKlCmaOnWqJk6cqJqaGs2ZM+e0/SsqKrRq1aro15FIhAgBwHkg6R/DnjBhgjIzM9XY2Njn88FgUOnp6TEbAGDoS3qAPv30Ux0+fFi5ubnJPhQAYBDx/hbckSNHYq5mmpqatHv3bmVkZCgjI0Nr1qzRokWLlJOTo/379+uhhx7SpZdeqtLS0oQuHAAwuHkHaOfOnbrxxhujX3/5/s2SJUv0/PPPa8+ePXrppZfU3t6uvLw8zZ07Vz/5yU8UDAYTt2oAwKDHzUiBQWJ4dpb3zMHFl8Z1rB0P/8J7Zlgc39G/o2mu90x4Zv8/1oGBgZuRAgAGNAIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJhI+K/kBpAcPW2HvGeyn/WfkaRjD53wnhkVSPWe+dUlb3vP/NPCld4zozbt8J5B8nEFBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCY4GakgIHemVd7z+y/5QLvmclXf+I9I8V3Y9F4PPf5f/GeGbVlZxJWAgtcAQEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJrgZKfAVgWsme8/85V/8b9z5qxkvec/MuuC498y51OW6vWfqPy/wP1Bvi/8MBiSugAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAE9yMFAPeiILx3jP778yL61hPLH7Ne2bRRZ/FdayB7JG2a7xnan9xnffMt16q857B0MEVEADABAECAJjwClBlZaWuvfZapaWlKSsrSwsWLFBDQ0PMPseOHVN5ebkuvvhiXXTRRVq0aJHa2toSumgAwODnFaDa2lqVl5ervr5e7777rrq7uzV37lx1dnZG97n//vv11ltvaePGjaqtrdXBgwd18803J3zhAIDBzetDCFu3bo35uqqqSllZWdq1a5dmzZqlcDisX//619qwYYO+973vSZLWr1+vK6+8UvX19bruOv83KQEAQ9M3eg8oHA5LkjIyMiRJu3btUnd3t0pKSqL7TJo0SePGjVNdXd+fdunq6lIkEonZAABDX9wB6u3t1cqVKzVjxgxNnjxZktTa2qrU1FSNHj06Zt/s7Gy1trb2+fdUVlYqFApFt/z8/HiXBAAYROIOUHl5ufbu3avXXvP/uYmvqqioUDgcjm7Nzc3f6O8DAAwOcf0g6ooVK/T2229r+/btGjt2bPTxnJwcHT9+XO3t7TFXQW1tbcrJyenz7woGgwoGg/EsAwAwiHldATnntGLFCm3atEnbtm1TQUFBzPPTpk1TSkqKqquro481NDTowIEDKi4uTsyKAQBDgtcVUHl5uTZs2KAtW7YoLS0t+r5OKBTSyJEjFQqFdNddd2nVqlXKyMhQenq67rvvPhUXF/MJOABADK8APf/885Kk2bNnxzy+fv16LV26VJL085//XMOGDdOiRYvU1dWl0tJS/fKXv0zIYgEAQ0fAOeesF/FVkUhEoVBIszVfIwIp1svBGYy4ZJz3THharvfM4h9vPftOp7hn9F+9Zwa6B1r8v4tQ90v/m4pKUkbV7/2HenviOhaGnhOuWzXaonA4rPT09H73415wAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMBHXb0TFwDUit+/fPHsmn794YVzHWl5Q6z1zW1pbXMcayFb850zvmY+ev9p7JvPf93rPZHTUec8A5wpXQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACW5Geo4cL73Gf+b+z71nHrn0He+ZuSM7vWcGuraeL+Kam/WbB7xnJj36Z++ZjHb/m4T2ek8AAxtXQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACW5Geo58ssC/9X+ZsjEJK0mcde0TvWd+UTvXeybQE/CemfRkk/eMJF3WtsN7pieuIwHgCggAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMBFwzjnrRXxVJBJRKBTSbM3XiECK9XIAAJ5OuG7VaIvC4bDS09P73Y8rIACACQIEADDhFaDKykpde+21SktLU1ZWlhYsWKCGhoaYfWbPnq1AIBCz3XPPPQldNABg8PMKUG1trcrLy1VfX693331X3d3dmjt3rjo7O2P2W7ZsmVpaWqLb2rVrE7poAMDg5/UbUbdu3RrzdVVVlbKysrRr1y7NmjUr+vioUaOUk5OTmBUCAIakb/QeUDgcliRlZGTEPP7KK68oMzNTkydPVkVFhY4ePdrv39HV1aVIJBKzAQCGPq8roK/q7e3VypUrNWPGDE2ePDn6+O23367x48crLy9Pe/bs0cMPP6yGhga9+eabff49lZWVWrNmTbzLAAAMUnH/HNDy5cv129/+Vh988IHGjh3b737btm3TnDlz1NjYqIkTJ572fFdXl7q6uqJfRyIR5efn83NAADBIfd2fA4rrCmjFihV6++23tX379jPGR5KKiookqd8ABYNBBYPBeJYBABjEvALknNN9992nTZs2qaamRgUFBWed2b17tyQpNzc3rgUCAIYmrwCVl5drw4YN2rJli9LS0tTa2ipJCoVCGjlypPbv368NGzbo+9//vi6++GLt2bNH999/v2bNmqWpU6cm5R8AADA4eb0HFAgE+nx8/fr1Wrp
0qZqbm/WDH/xAe/fuVWdnp/Lz87Vw4UI9+uijZ/w+4FdxLzgAGNyS8h7Q2VqVn5+v2tpan78SAHCe4l5wAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATI6wXcCrnnCTphLolZ7wYAIC3E+qW9I//nvdnwAWoo6NDkvSB3jFeCQDgm+jo6FAoFOr3+YA7W6LOsd7eXh08eFBpaWkKBAIxz0UiEeXn56u5uVnp6elGK7THeTiJ83AS5+EkzsNJA+E8OOfU0dGhvLw8DRvW/zs9A+4KaNiwYRo7duwZ90lPTz+vX2Bf4jycxHk4ifNwEufhJOvzcKYrny/xIQQAgAkCBAAwMagCFAwGtXr1agWDQeulmOI8nMR5OInzcBLn4aTBdB4G3IcQAADnh0F1BQQAGDoIEADABAECAJggQAAAE4MmQOvWrdMll1yiCy64QEVFRfr9739vvaRz7oknnlAgEIjZJk2aZL2spNu+fbtuuukm5eXlKRAIaPPmzTHPO+f0+OOPKzc3VyNHjlRJSYn27dtns9gkOtt5WLp06Wmvj3nz5tksNkkqKyt17bXXKi0tTVlZWVqwYIEaGhpi9jl27JjKy8t18cUX66KLLtKiRYvU1tZmtOLk+DrnYfbs2ae9Hu655x6jFfdtUATo9ddf16pVq7R69Wp99NFHKiwsVGlpqQ4dOmS9tHPuqquuUktLS3T74IMPrJeUdJ2dnSosLNS6dev6fH7t2rV69tln9cILL2jHjh268MILVVpaqmPHjp3jlSbX2c6DJM2bNy/m9fHqq6+ewxUmX21trcrLy1VfX693331X3d3dmjt3rjo7O6P73H///Xrrrbe0ceNG1dbW6uDBg7r55psNV514X+c8SNKyZctiXg9r1641WnE/3CAwffp0V15eHv26p6fH5eXlucrKSsNVnXurV692hYWF1sswJclt2rQp+nVvb6/LyclxTz31VPSx9vZ2FwwG3auvvmqwwnPj1PPgnHNLlixx8+fPN1mPlUOHDjlJrra21jl38t99SkqK27hxY3SfP/3pT06Sq6urs1pm0p16Hpxz7oYbbnA//OEP7Rb1NQz4K6Djx49r165dKikpiT42bNgwlZSUqK6uznBlNvbt26e8vDxNmDBBd9xxhw4cOGC9JFNNTU1qbW2NeX2EQiEVFRWdl6+PmpoaZWVl6YorrtDy5ct1+PBh6yUlVTgcliRlZGRIknbt2qXu7u6Y18OkSZM0bty4If16OPU8fOmVV15RZmamJk+erIqKCh09etRief0acDcjPdVnn32mnp4eZWdnxzyenZ2tP//5z0arslFUVKSqqipdccUVamlp0Zo1a3T99ddr7969SktLs16eidbWVknq8/Xx5XPni3nz5unmm29WQUGB9u/fr0ceeURlZWWqq6vT8OHDrZeXcL29vVq5cqVmzJihyZMnSzr5ekhNTdXo0aNj9h3Kr4e+zoMk3X777Ro/frzy8vK0Z88ePfzww2poaNCbb75puNpYAz5A+IeysrLon6dOnaqioiKNHz9eb7zxhu666y7DlWEguPXWW6N/njJliqZOnaqJEyeqpqZGc+bMMVxZcpSXl2vv3r3nxfugZ9Lfebj77rujf54yZYpyc3M1Z84c7d+/XxMnTjzXy+zTgP8WXGZmpoYPH37ap1ja2tqUk5NjtKqBYfTo0br88svV2NhovRQzX74GeH2cbsKECcrMzBySr48VK1bo7bff1vvvvx/z61tycnJ0/Phxtbe3x+w/VF8P/Z2HvhQVFUnSgHo9DPgApaamatq0aaquro4+1tvbq+rqahUXFxuuzN6RI0e0f/9+5ebmWi/FTEFBgXJycmJeH5FIRDt27DjvXx+ffvqpDh8+PKReH845rVixQps2bdK2bdtUUFAQ8/y0adOUkpIS83poaGjQgQMHhtTr4WznoS+7d++WpIH1erD+FMTX8dprr7lgMOiqqqrcH//4R3f33Xe70aNHu9bWVuulnVMPPPCAq6mpcU1NTe53v/udKykpcZmZme7QoUPWS0uqjo4O9/HHH7uPP/7YSXJPP/20+/jjj93f/vY355xzP/3pT93o0aPdli1b3J49e9z8+fNdQUGB++KLL4xXnlhnOg8dHR3uwQcfdHV1da6pqcm999577rvf/a677LLL3LFjx6yXnjDLly93oVDI1dTUuJaWluh29OjR6D733HOPGzdunNu2bZvbuXOnKy4udsXFxYarTryznYfGxkb34x//2O3cudM1NTW5LVu2uAkTJrhZs2YZrzzWoAiQc84999xzbty4cS41NdVNnz7d1dfXWy/pnFu8eLHLzc11qamp7tvf/rZbvHixa2xstF5W0r3//vtO0mnbkiVLnHMnP4r92GOPuezsbBcMBt2cOXNcQ0OD7aKT4Ezn4ejRo27u3LluzJgxLiUlxY0fP94tW7ZsyP2ftL7++SW59evXR/f54osv3L333uu+9a1vuVGjRrmFCxe6lpYWu0UnwdnOw4EDB9ysWbNcRkaGCwaD7tJLL3U/+tGPXDgctl34Kfh1DAAAEwP+PSAAwNBEgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJj4f4W4/AnknuSPAAAAAElFTkSuQmCC\n" 215 | }, 216 | "metadata": {} 217 | } 218 | ] 219 | }, 220 | { 221 | "cell_type": "markdown", 222 | "source": [ 223 | "With conventional neural networks, we cannot feed in the image as input as is. So we need to flatten the images into one-dimensional vectors, each of size 1 x (28 x 28) = 1 x 784." 
224 | ], 225 | "metadata": { 226 | "id": "-PaL5m44M_d1" 227 | } 228 | }, 229 | { 230 | "cell_type": "code", 231 | "source": [ 232 | "# flatten images into one-dimensional vector\n", 233 | "\n", 234 | "num_pixels = X_train.shape[1] * X_train.shape[2] # find size of one-dimensional vector\n", 235 | "\n", 236 | "X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32') # flatten training images\n", 237 | "X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32') # flatten test images" 238 | ], 239 | "metadata": { 240 | "id": "Ck10DsnvNCHt" 241 | }, 242 | "execution_count": 8, 243 | "outputs": [] 244 | }, 245 | { 246 | "cell_type": "markdown", 247 | "source": [ 248 | "Since pixel values can range from 0 to 255, let's normalize the vectors to be between 0 and 1." 249 | ], 250 | "metadata": { 251 | "id": "N7tPvpunNGIt" 252 | } 253 | }, 254 | { 255 | "cell_type": "code", 256 | "source": [ 257 | "# normalize inputs from 0-255 to 0-1\n", 258 | "X_train = X_train / 255\n", 259 | "X_test = X_test / 255" 260 | ], 261 | "metadata": { 262 | "id": "H5cgB8iQNGoX" 263 | }, 264 | "execution_count": 9, 265 | "outputs": [] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "source": [ 270 | "Finally, before we start building our model, remember that for classification we need to divide our target variable into categories. We use the to_categorical function from the Keras Utilities package." 271 | ], 272 | "metadata": { 273 | "id": "5zLEFn9aNJlH" 274 | } 275 | }, 276 | { 277 | "cell_type": "code", 278 | "source": [ 279 | "# one hot encode outputs\n", 280 | "y_train = to_categorical(y_train)\n", 281 | "y_test = to_categorical(y_test)\n", 282 | "\n", 283 | "num_classes = y_test.shape[1]\n", 284 | "print(num_classes)" 285 | ], 286 | "metadata": { 287 | "colab": { 288 | "base_uri": "https://localhost:8080/" 289 | }, 290 | "id": "EYRllOuaNMBb", 291 | "outputId": "db5dd2a5-282e-465e-9fd7-ebf516bbc4da" 292 | }, 293 | "execution_count": 10, 294 | "outputs": [ 295 | { 296 | "output_type": "stream", 297 | "name": "stdout", 298 | "text": [ 299 | "10\n" 300 | ] 301 | } 302 | ] 303 | }, 304 | { 305 | "cell_type": "markdown", 306 | "source": [ 307 | "# Build a Neural Network" 308 | ], 309 | "metadata": { 310 | "id": "QMupWgTQNOwa" 311 | } 312 | }, 313 | { 314 | "cell_type": "code", 315 | "source": [ 316 | "# define classification model\n", 317 | "def classification_model():\n", 318 | " # create model\n", 319 | " model = Sequential()\n", 320 | " model.add(Dense(num_pixels, activation='relu', input_shape=(num_pixels,)))\n", 321 | " model.add(Dense(100, activation='relu'))\n", 322 | " model.add(Dense(num_classes, activation='softmax'))\n", 323 | "\n", 324 | "\n", 325 | " # compile model\n", 326 | " model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n", 327 | " return model" 328 | ], 329 | "metadata": { 330 | "id": "ixOXBPlpNRO9" 331 | }, 332 | "execution_count": 11, 333 | "outputs": [] 334 | }, 335 | { 336 | "cell_type": "markdown", 337 | "source": [ 338 | "# Train and Test the Network" 339 | ], 340 | "metadata": { 341 | "id": "-J0oW3XINUA5" 342 | } 343 | }, 344 | { 345 | "cell_type": "code", 346 | "source": [ 347 | "# build the model\n", 348 | "model = classification_model()\n", 349 | "\n", 350 | "# fit the model\n", 351 | "model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, verbose=2)\n", 352 | "\n", 353 | "# evaluate the model\n", 354 | "scores = model.evaluate(X_test, y_test, verbose=0)" 355 | ], 356 | "metadata": { 357 | "colab": { 
358 | "base_uri": "https://localhost:8080/" 359 | }, 360 | "id": "qitndvvFNW84", 361 | "outputId": "b006b608-7d16-4442-881f-7f017162f250" 362 | }, 363 | "execution_count": 12, 364 | "outputs": [ 365 | { 366 | "output_type": "stream", 367 | "name": "stderr", 368 | "text": [ 369 | "/usr/local/lib/python3.10/dist-packages/keras/src/layers/core/dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.\n", 370 | " super().__init__(activity_regularizer=activity_regularizer, **kwargs)\n" 371 | ] 372 | }, 373 | { 374 | "output_type": "stream", 375 | "name": "stdout", 376 | "text": [ 377 | "Epoch 1/10\n", 378 | "1875/1875 - 21s - 11ms/step - accuracy: 0.9444 - loss: 0.1845 - val_accuracy: 0.9725 - val_loss: 0.0847\n", 379 | "Epoch 2/10\n", 380 | "1875/1875 - 18s - 10ms/step - accuracy: 0.9752 - loss: 0.0788 - val_accuracy: 0.9772 - val_loss: 0.0742\n", 381 | "Epoch 3/10\n", 382 | "1875/1875 - 24s - 13ms/step - accuracy: 0.9824 - loss: 0.0536 - val_accuracy: 0.9779 - val_loss: 0.0765\n", 383 | "Epoch 4/10\n", 384 | "1875/1875 - 40s - 21ms/step - accuracy: 0.9871 - loss: 0.0396 - val_accuracy: 0.9797 - val_loss: 0.0711\n", 385 | "Epoch 5/10\n", 386 | "1875/1875 - 19s - 10ms/step - accuracy: 0.9901 - loss: 0.0307 - val_accuracy: 0.9822 - val_loss: 0.0695\n", 387 | "Epoch 6/10\n", 388 | "1875/1875 - 22s - 12ms/step - accuracy: 0.9912 - loss: 0.0257 - val_accuracy: 0.9801 - val_loss: 0.0882\n", 389 | "Epoch 7/10\n", 390 | "1875/1875 - 20s - 11ms/step - accuracy: 0.9931 - loss: 0.0215 - val_accuracy: 0.9775 - val_loss: 0.0841\n", 391 | "Epoch 8/10\n", 392 | "1875/1875 - 21s - 11ms/step - accuracy: 0.9937 - loss: 0.0197 - val_accuracy: 0.9787 - val_loss: 0.0883\n", 393 | "Epoch 9/10\n", 394 | "1875/1875 - 19s - 10ms/step - accuracy: 0.9949 - loss: 0.0161 - val_accuracy: 0.9803 - val_loss: 0.0825\n", 395 | "Epoch 10/10\n", 396 | "1875/1875 - 22s - 12ms/step - accuracy: 0.9947 - loss: 0.0165 - val_accuracy: 0.9809 - val_loss: 0.0896\n" 397 | ] 398 | } 399 | ] 400 | }, 401 | { 402 | "cell_type": "markdown", 403 | "source": [ 404 | "Let's print the accuracy and the corresponding error." 405 | ], 406 | "metadata": { 407 | "id": "fYKVt6HYOcMB" 408 | } 409 | }, 410 | { 411 | "cell_type": "code", 412 | "source": [ 413 | "print('Accuracy: {}% \\n Error: {}'.format(scores[1], 1 - scores[1]))" 414 | ], 415 | "metadata": { 416 | "colab": { 417 | "base_uri": "https://localhost:8080/" 418 | }, 419 | "id": "Q2QLCNoXOeBV", 420 | "outputId": "96fe8cfe-824a-454d-dde3-a0c58b3a6a56" 421 | }, 422 | "execution_count": 13, 423 | "outputs": [ 424 | { 425 | "output_type": "stream", 426 | "name": "stdout", 427 | "text": [ 428 | "Accuracy: 0.98089998960495% \n", 429 | " Error: 0.01910001039505005\n" 430 | ] 431 | } 432 | ] 433 | }, 434 | { 435 | "cell_type": "markdown", 436 | "source": [ 437 | "Just running 10 epochs could actually take over 20 minutes. But enjoy the results as they are getting generated." 438 | ], 439 | "metadata": { 440 | "id": "hDlN3TsBOiuQ" 441 | } 442 | }, 443 | { 444 | "cell_type": "markdown", 445 | "source": [ 446 | "Sometimes, you cannot afford to retrain your model everytime you want to use it, especially if you are limited on computational resources and training your model can take a long time. Therefore, with the Keras library, you can save your model after training. To do that, we use the save method." 
447 | ], 448 | "metadata": { 449 | "id": "zLEuA0bUOjTT" 450 | } 451 | }, 452 | { 453 | "cell_type": "code", 454 | "source": [ 455 | "model.save('classification_model.h5')" 456 | ], 457 | "metadata": { 458 | "colab": { 459 | "base_uri": "https://localhost:8080/" 460 | }, 461 | "id": "v8uNtWWQOlfk", 462 | "outputId": "da809084-a104-48aa-b429-075a42cb78a1" 463 | }, 464 | "execution_count": 14, 465 | "outputs": [ 466 | { 467 | "output_type": "stream", 468 | "name": "stderr", 469 | "text": [ 470 | "WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`. \n" 471 | ] 472 | } 473 | ] 474 | }, 475 | { 476 | "cell_type": "markdown", 477 | "source": [ 478 | "Since our model contains multidimensional arrays of data, then models are usually saved as .h5 files.\n", 479 | "\n", 480 | "\n", 481 | "\n", 482 | "\n", 483 | "\n", 484 | "\n", 485 | "\n", 486 | "When you are ready to use your model again, you use the load_model function from keras.models." 487 | ], 488 | "metadata": { 489 | "id": "h-csISDVOqbB" 490 | } 491 | }, 492 | { 493 | "cell_type": "code", 494 | "source": [ 495 | "from keras.models import load_model" 496 | ], 497 | "metadata": { 498 | "id": "RP55kuPHOtZI" 499 | }, 500 | "execution_count": 15, 501 | "outputs": [] 502 | }, 503 | { 504 | "cell_type": "code", 505 | "source": [ 506 | "pretrained_model = load_model('classification_model.h5')" 507 | ], 508 | "metadata": { 509 | "id": "6y6WViYzOu_q", 510 | "outputId": "2e2b2626-589d-4985-b33c-c6e6d8946eee", 511 | "colab": { 512 | "base_uri": "https://localhost:8080/" 513 | } 514 | }, 515 | "execution_count": 16, 516 | "outputs": [ 517 | { 518 | "output_type": "stream", 519 | "name": "stderr", 520 | "text": [ 521 | "WARNING:absl:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.\n" 522 | ] 523 | } 524 | ] 525 | } 526 | ] 527 | } -------------------------------------------------------------------------------- /Classification_Models_with_Keras_(0.1).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "provenance": [] 7 | }, 8 | "kernelspec": { 9 | "name": "python3", 10 | "display_name": "Python 3" 11 | }, 12 | "language_info": { 13 | "name": "python" 14 | } 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "markdown", 19 | "source": [ 20 | "* To use the Keras library to build models for classificaiton problems. We will use the popular MNIST dataset, a dataset of images, for a change.\n", 21 | "\n", 22 | "* The MNIST database, short for Modified National Institute of Standards and Technology database, is a large database of handwritten digits that is commonly used for training various image processing systems. 
The database is also widely used for training and testing in the field of machine learning.\n", 23 | "\n", 24 | "* The MNIST database contains 60,000 training images and 10,000 testing images of digits written by high school students and employees of the United States Census Bureau.\n", 25 | "\n", 26 | "* Also, to compare how conventional neural networks compare to convolutional neural network" 27 | ], 28 | "metadata": { 29 | "id": "vOPVD65ULwSk" 30 | } 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "source": [ 35 | "# Classification Models with Keras" 36 | ], 37 | "metadata": { 38 | "id": "8FBjeIo5MAZs" 39 | } 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "source": [ 44 | "**Import Keras and Package**" 45 | ], 46 | "metadata": { 47 | "id": "RQyC6dLHMEUy" 48 | } 49 | }, 50 | { 51 | "cell_type": "code", 52 | "source": [ 53 | "# All Libraries required for this lab are listed below. The libraries pre-installed on Skills Network Labs are commented.\n", 54 | "# If you run this notebook on a different environment, e.g. your desktop, you may need to uncomment and install certain libraries.\n", 55 | "\n", 56 | "#!pip install numpy==1.21.4\n", 57 | "#!pip install pandas==1.3.4\n", 58 | "#!pip install keras==2.1.6\n", 59 | "#!pip install matplotlib==3.5.0" 60 | ], 61 | "metadata": { 62 | "id": "crbisaoDMHdo" 63 | }, 64 | "execution_count": 1, 65 | "outputs": [] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "source": [ 70 | "import keras\n", 71 | "\n", 72 | "from keras.models import Sequential\n", 73 | "from keras.layers import Dense\n", 74 | "from keras.utils import to_categorical" 75 | ], 76 | "metadata": { 77 | "id": "zvhXEIbrMJm9" 78 | }, 79 | "execution_count": 2, 80 | "outputs": [] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "source": [ 85 | "load the MNIST dataset from the Keras library. The dataset is readily divided into a training set and a test set." 86 | ], 87 | "metadata": { 88 | "id": "P3BvYg7aMjFF" 89 | } 90 | }, 91 | { 92 | "cell_type": "code", 93 | "source": [ 94 | "import matplotlib.pyplot as plt" 95 | ], 96 | "metadata": { 97 | "id": "o7s2oDUYM6pK" 98 | }, 99 | "execution_count": 6, 100 | "outputs": [] 101 | }, 102 | { 103 | "cell_type": "code", 104 | "source": [ 105 | "# import the data\n", 106 | "from keras.datasets import mnist\n", 107 | "\n", 108 | "# read the data\n", 109 | "(X_train, y_train), (X_test, y_test) = mnist.load_data()" 110 | ], 111 | "metadata": { 112 | "colab": { 113 | "base_uri": "https://localhost:8080/" 114 | }, 115 | "id": "E6EX6OzxMj0p", 116 | "outputId": "62963309-1829-4418-aa74-e8d21460d973" 117 | }, 118 | "execution_count": 3, 119 | "outputs": [ 120 | { 121 | "output_type": "stream", 122 | "name": "stdout", 123 | "text": [ 124 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n", 125 | "\u001b[1m11490434/11490434\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 0us/step\n" 126 | ] 127 | } 128 | ] 129 | }, 130 | { 131 | "cell_type": "markdown", 132 | "source": [ 133 | "Let's confirm the number of images in each set. According to the dataset's documentation, we should have 60000 images in X_train and 10000 images in the X_test." 
134 | ], 135 | "metadata": { 136 | "id": "R3agenxKMnnP" 137 | } 138 | }, 139 | { 140 | "cell_type": "code", 141 | "source": [ 142 | "X_train.shape" 143 | ], 144 | "metadata": { 145 | "colab": { 146 | "base_uri": "https://localhost:8080/" 147 | }, 148 | "id": "B_gVP1zzMoQh", 149 | "outputId": "b501536e-29a4-4772-9799-2860c7d71d4b" 150 | }, 151 | "execution_count": 4, 152 | "outputs": [ 153 | { 154 | "output_type": "execute_result", 155 | "data": { 156 | "text/plain": [ 157 | "(60000, 28, 28)" 158 | ] 159 | }, 160 | "metadata": {}, 161 | "execution_count": 4 162 | } 163 | ] 164 | }, 165 | { 166 | "cell_type": "markdown", 167 | "source": [ 168 | "The first number in the output tuple is the number of images, and the other two numbers are the size of the images in datset. So, each image is 28 pixels by 28 pixels." 169 | ], 170 | "metadata": { 171 | "id": "T_QJtK3BMrrU" 172 | } 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "source": [ 177 | "Let's visualize the first image in the training set using Matplotlib's scripting layer." 178 | ], 179 | "metadata": { 180 | "id": "dRnJKVpSMziD" 181 | } 182 | }, 183 | { 184 | "cell_type": "code", 185 | "source": [ 186 | "plt.imshow(X_train[0])" 187 | ], 188 | "metadata": { 189 | "colab": { 190 | "base_uri": "https://localhost:8080/", 191 | "height": 447 192 | }, 193 | "id": "J69UER-XM0Jf", 194 | "outputId": "951c822f-888b-456f-c35f-b928beb253b6" 195 | }, 196 | "execution_count": 7, 197 | "outputs": [ 198 | { 199 | "output_type": "execute_result", 200 | "data": { 201 | "text/plain": [ 202 | "" 203 | ] 204 | }, 205 | "metadata": {}, 206 | "execution_count": 7 207 | }, 208 | { 209 | "output_type": "display_data", 210 | "data": { 211 | "text/plain": [ 212 | "
" 213 | ], 214 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAaAAAAGdCAYAAABU0qcqAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAcTUlEQVR4nO3df3DU9b3v8dcCyQqaLI0hv0rAgD+wAvEWJWZAxJJLSOc4gIwHf3QGvF4cMXiKaPXGUZHWM2nxjrV6qd7TqURnxB+cEaiO5Y4GE441oQNKGW7blNBY4iEJFSe7IUgIyef+wXXrQgJ+1l3eSXg+Zr4zZPf75vvx69Znv9nNNwHnnBMAAOfYMOsFAADOTwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYGGG9gFP19vbq4MGDSktLUyAQsF4OAMCTc04dHR3Ky8vTsGH9X+cMuAAdPHhQ+fn51ssAAHxDzc3NGjt2bL/PD7gApaWlSZJm6vsaoRTj1QAAfJ1Qtz7QO9H/nvcnaQFat26dnnrqKbW2tqqwsFDPPfecpk+ffta5L7/tNkIpGhEgQAAw6Pz/O4ye7W2UpHwI4fXXX9eqVau0evVqffTRRyosLFRpaakOHTqUjMMBAAahpATo6aef1rJly3TnnXfqO9/5jl544QWNGjVKL774YjIOBwAYhBIeoOPHj2vXrl0qKSn5x0GGDVNJSYnq6upO27+rq0uRSCRmAwAMfQkP0Geffaaenh5lZ2fHPJ6dna3W1tbT9q+srFQoFIpufAIOAM4P5j+IWlFRoXA4HN2am5utlwQAOAcS/im4zMxMDR8+XG1tbTGPt7W1KScn57T9g8GggsFgopcBABjgEn4FlJqaqmnTpqm6ujr6WG9vr6qrq1VcXJzowwEABqmk/BzQqlWrtGTJEl1zzTWaPn26nnnmGXV2durOO+9MxuEAAINQUgK0ePFi/f3vf9fjjz+u1tZWXX311dq6detpH0wAAJy/As45Z72Ir4pEIgqFQpqt+dwJAQAGoROuWzXaonA4rPT09H73M/8UHADg/ESAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYGGG9AGAgCYzw/5/E8DGZSVhJYjQ8eElccz2jer1nxk885D0z6t6A90zr06neMx9d87r3jCR91tPpPVO08QHvmUtX1XvPDAVcAQEATBAgAICJhAfoiSeeUCAQiNkmTZqU6MMAAAa5pLwHdNVVV+m99977x0Hi+L46AGBoS0oZRowYoZycnGT81QCAISIp7wHt27dPeXl5mjBhgu644w4dOHCg3327uroUiURiNgDA0JfwABUVFamqqkpbt27V888/r6amJl1//fXq6Ojoc//KykqFQqHolp+fn+glAQAGoIQHqKysTLfccoumTp2q0tJSvfPOO2pvb9cbb7zR5/4VFRUKh8PRrbm5OdFLAgAMQEn/dMDo0aN1+eWXq7Gxsc/ng8GggsFgspcBABhgkv5zQEeOHNH+/fuVm5ub7EMBAAaRhAfowQcfVG1trT755BN9+OGHWrhwoYYPH67bbrst0YcCAAxiCf8W3KeffqrbbrtNhw8f1pgxYzRz5kzV19drzJgxiT4UAGAQS3iAXnvttUT/lRighl95mfeMC6Z4zxy8YbT3zBfX+d9EUpIyQv5z/1EY340uh5rfHk3znvnZ/5rnPbNjygbvmabuL7xnJOmnbf/VeybvP1xcxzofcS84AIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMBE0n8hHQa+ntnfjWvu6ap13jOXp6TGdSycW92ux3vm8eeWes+M6PS/cWfxxhXeM2n/ecJ7RpKCn/nfxHTUzh1xHet8xBUQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATHA3bCjYcDCuuV3H8r1nLk9pi+tYQ80DLdd5z/z1SKb3TNXEf/eekaRwr/9dqrOf/TCuYw1k/mcBPrgCAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMcDNS6ERLa1xzz/3sFu+Zf53X6T0zfM9F3jN/uPc575l4PfnZVO+ZxpJR3jM97S3eM7cX3+s9I0mf/Iv/TIH+ENexcP7iCggAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMMHNSBG3jPV13jNj3rrYe6bn8OfeM1dN/m/eM5L0f2e96D3zm3+7wXsmq/1D75l4BOriu0Fogf+/WsAbV0AAABMECABgwjtA27dv10033aS8vDwFAgFt3rw55nnnnB5//HHl5uZq5MiRKikp0b59+xK1XgDAEOEdoM7OThUWFmrdunV9Pr927Vo9++yzeuGFF7Rjxw5deOGFKi0t1bFjx77xYgEAQ4f3hxDKyspUVlbW53POOT3zzDN69NFHNX/+fEnSyy+/rOzsbG3evFm33nrrN1stAGDISOh7QE1NTWptbVVJSUn0sVAopKKiItXV9f2xmq6uLkUikZgNADD0JTRAra2tkqTs7OyYx7Ozs6PPnaqyslKhUCi65efnJ3JJAIAByvxTcBUVFQqHw9GtubnZekkAgHMgoQHKycmRJLW1tcU83tbWFn3uVMFgUOnp6TEbAGDoS2iACgoKlJOTo+rq6uhjkUhEO3bsUHFxcSIPBQAY5Lw/BXfkyBE1NjZGv25qatLu3buVkZGhcePGaeXKlXryySd12WWXqaCgQI899pjy8vK0YMGCRK4bADDIeQdo586duvHGG6Nfr1q1SpK0ZMkSVVVV6aGHHlJnZ6fuvvtutbe3a+bMmdq6dasuuOCCxK0aADDoBZxzznoRXxWJRBQKhTRb8zUikGK9HAxSf/nf18Y3908veM/c+bc53jN/n9nhPaPeHv8ZwMAJ160abVE4HD7j+/rmn4IDAJyfCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYML71zEAg8GVD/8lrrk7p/jf2Xr9+Oqz73SKG24p955Je73eewYYyLgCAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBM
cDNSDEk97eG45g4vv9J75sBvvvCe+R9Pvuw9U/HPC71n3Mch7xlJyv/XOv8h5+I6Fs5fXAEBAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACa4GSnwFb1/+JP3zK1rfuQ988rq/+k9s/s6/xuY6jr/EUm66sIV3jOX/arFe+bEXz/xnsHQwRUQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGAi4Jxz1ov4qkgkolAopNmarxGBFOvlAEnhZlztPZP+00+9Z16d8H+8Z+I16f3/7j1zxZqw90zPvr96z+DcOuG6VaMtCofDSk9P73c/roAAACYIEADAhHeAtm/frptuukl5eXkKBALavHlzzPNLly5VIBCI2ebNm5eo9QIAhgjvAHV2dqqwsFDr1q3rd5958+appaUlur366qvfaJEAgKHH+zeilpWVqays7Iz7BINB5eTkxL0oAMDQl5T3gGpqapSVlaUrrrhCy5cv1+HDh/vdt6urS5FIJGYDAAx9CQ/QvHnz9PLLL6u6ulo/+9nPVFtbq7KyMvX09PS5f2VlpUKhUHTLz89P9JIAAAOQ97fgzubWW2+N/nnKlCmaOnWqJk6cqJqaGs2ZM+e0/SsqKrRq1aro15FIhAgBwHkg6R/DnjBhgjIzM9XY2Njn88FgUOnp6TEbAGDoS3qAPv30Ux0+fFi5ubnJPhQAYBDx/hbckSNHYq5mmpqatHv3bmVkZCgjI0Nr1qzRokWLlJOTo/379+uhhx7SpZdeqtLS0oQuHAAwuHkHaOfOnbrxxhujX3/5/s2SJUv0/PPPa8+ePXrppZfU3t6uvLw8zZ07Vz/5yU8UDAYTt2oAwKDHzUiBQWJ4dpb3zMHFl8Z1rB0P/8J7Zlgc39G/o2mu90x4Zv8/1oGBgZuRAgAGNAIEADBBgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJhI+K/kBpAcPW2HvGeyn/WfkaRjD53wnhkVSPWe+dUlb3vP/NPCld4zozbt8J5B8nEFBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCY4GakgIHemVd7z+y/5QLvmclXf+I9I8V3Y9F4PPf5f/GeGbVlZxJWAgtcAQEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJrgZKfAVgWsme8/85V/8b9z5qxkvec/MuuC498y51OW6vWfqPy/wP1Bvi/8MBiSugAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAE9yMFAPeiILx3jP778yL61hPLH7Ne2bRRZ/FdayB7JG2a7xnan9xnffMt16q857B0MEVEADABAECAJjwClBlZaWuvfZapaWlKSsrSwsWLFBDQ0PMPseOHVN5ebkuvvhiXXTRRVq0aJHa2toSumgAwODnFaDa2lqVl5ervr5e7777rrq7uzV37lx1dnZG97n//vv11ltvaePGjaqtrdXBgwd18803J3zhAIDBzetDCFu3bo35uqqqSllZWdq1a5dmzZqlcDisX//619qwYYO+973vSZLWr1+vK6+8UvX19bruOv83KQEAQ9M3eg8oHA5LkjIyMiRJu3btUnd3t0pKSqL7TJo0SePGjVNdXd+fdunq6lIkEonZAABDX9wB6u3t1cqVKzVjxgxNnjxZktTa2qrU1FSNHj06Zt/s7Gy1trb2+fdUVlYqFApFt/z8/HiXBAAYROIOUHl5ufbu3avXXvP/uYmvqqioUDgcjm7Nzc3f6O8DAAwOcf0g6ooVK/T2229r+/btGjt2bPTxnJwcHT9+XO3t7TFXQW1tbcrJyenz7woGgwoGg/EsAwAwiHldATnntGLFCm3atEnbtm1TQUFBzPPTpk1TSkqKqquro481NDTowIEDKi4uTsyKAQBDgtcVUHl5uTZs2KAtW7YoLS0t+r5OKBTSyJEjFQqFdNddd2nVqlXKyMhQenq67rvvPhUXF/MJOABADK8APf/885Kk2bNnxzy+fv16LV26VJL085//XMOGDdOiRYvU1dWl0tJS/fKXv0zIYgEAQ0fAOeesF/FVkUhEoVBIszVfIwIp1svBGYy4ZJz3THharvfM4h9vPftOp7hn9F+9Zwa6B1r8v4tQ90v/m4pKUkbV7/2HenviOhaGnhOuWzXaonA4rPT09H73415wAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMBHXb0TFwDUit+/fPHsmn794YVzHWl5Q6z1zW1pbXMcayFb850zvmY+ev9p7JvPf93rPZHTUec8A5wpXQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACW5Geo4cL73Gf+b+z71nHrn0He+ZuSM7vWcGuraeL+Kam/WbB7xnJj36Z++ZjHb/m4T2ek8AAxtXQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACW5Geo58ssC/9X+ZsjEJK0mcde0TvWd+UTvXeybQE/CemfRkk/eMJF3WtsN7pieuIwHgCggAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMBFwzjnrRXxVJBJRKBTSbM3XiECK9XIAAJ5OuG7VaIvC4bDS09P73Y8rIACACQIEADDhFaDKykpde+21SktLU1ZWlhYsWKCGhoaYfWbPnq1AIBCz3XPPPQldNABg8PMKUG1trcrLy1VfX693331X3d3dmjt3rjo7O2P2W7ZsmVpaWqLb2rVrE7poAMDg5/UbUbdu3RrzdVVVlbKysrRr1y7NmjUr+vioUaOUk5OTmBUCAIakb/QeUDgcliRlZGTEPP7KK68oMzNTkydPVkVFhY4ePdrv39HV1aVIJBKzAQCGPq8roK/q7e3VypUrNWPGDE2ePDn6+O23367x48crLy9Pe/bs0cMPP6yGhga9+eabff49lZWVWrNmTbzLAAAMUnH/HNDy5cv129/+Vh988IHGjh3b737btm3TnDlz1NjYqIkTJ572fFdXl7q6uqJfRyIR5efn83NAADBIfd2fA4rrCmjFihV6++23tX379jPGR5KKiookqd8ABYNBBYPBeJYBABjEvALknNN9992nTZs2qaamRgUFBWed2b17tyQpNzc3rgUCAIYmrwCVl5drw4YN2rJli9LS0tTa2ipJCoVCGjlypPbv368NGzbo+9//vi6++GLt2bNH999/v2bNmqWpU6cm5R8AADA4eb0HFAgE+nx8/fr1Wrp
0qZqbm/WDH/xAe/fuVWdnp/Lz87Vw4UI9+uijZ/w+4FdxLzgAGNyS8h7Q2VqVn5+v2tpan78SAHCe4l5wAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJggQAAAEwQIAGCCAAEATBAgAIAJAgQAMEGAAAAmCBAAwAQBAgCYIEAAABMECABgggABAEwQIACACQIEADBBgAAAJggQAMAEAQIAmCBAAAATI6wXcCrnnCTphLolZ7wYAIC3E+qW9I//nvdnwAWoo6NDkvSB3jFeCQDgm+jo6FAoFOr3+YA7W6LOsd7eXh08eFBpaWkKBAIxz0UiEeXn56u5uVnp6elGK7THeTiJ83AS5+EkzsNJA+E8OOfU0dGhvLw8DRvW/zs9A+4KaNiwYRo7duwZ90lPTz+vX2Bf4jycxHk4ifNwEufhJOvzcKYrny/xIQQAgAkCBAAwMagCFAwGtXr1agWDQeulmOI8nMR5OInzcBLn4aTBdB4G3IcQAADnh0F1BQQAGDoIEADABAECAJggQAAAE4MmQOvWrdMll1yiCy64QEVFRfr9739vvaRz7oknnlAgEIjZJk2aZL2spNu+fbtuuukm5eXlKRAIaPPmzTHPO+f0+OOPKzc3VyNHjlRJSYn27dtns9gkOtt5WLp06Wmvj3nz5tksNkkqKyt17bXXKi0tTVlZWVqwYIEaGhpi9jl27JjKy8t18cUX66KLLtKiRYvU1tZmtOLk+DrnYfbs2ae9Hu655x6jFfdtUATo9ddf16pVq7R69Wp99NFHKiwsVGlpqQ4dOmS9tHPuqquuUktLS3T74IMPrJeUdJ2dnSosLNS6dev6fH7t2rV69tln9cILL2jHjh268MILVVpaqmPHjp3jlSbX2c6DJM2bNy/m9fHqq6+ewxUmX21trcrLy1VfX693331X3d3dmjt3rjo7O6P73H///Xrrrbe0ceNG1dbW6uDBg7r55psNV514X+c8SNKyZctiXg9r1641WnE/3CAwffp0V15eHv26p6fH5eXlucrKSsNVnXurV692hYWF1sswJclt2rQp+nVvb6/LyclxTz31VPSx9vZ2FwwG3auvvmqwwnPj1PPgnHNLlixx8+fPN1mPlUOHDjlJrra21jl38t99SkqK27hxY3SfP/3pT06Sq6urs1pm0p16Hpxz7oYbbnA//OEP7Rb1NQz4K6Djx49r165dKikpiT42bNgwlZSUqK6uznBlNvbt26e8vDxNmDBBd9xxhw4cOGC9JFNNTU1qbW2NeX2EQiEVFRWdl6+PmpoaZWVl6YorrtDy5ct1+PBh6yUlVTgcliRlZGRIknbt2qXu7u6Y18OkSZM0bty4If16OPU8fOmVV15RZmamJk+erIqKCh09etRief0acDcjPdVnn32mnp4eZWdnxzyenZ2tP//5z0arslFUVKSqqipdccUVamlp0Zo1a3T99ddr7969SktLs16eidbWVknq8/Xx5XPni3nz5unmm29WQUGB9u/fr0ceeURlZWWqq6vT8OHDrZeXcL29vVq5cqVmzJihyZMnSzr5ekhNTdXo0aNj9h3Kr4e+zoMk3X777Ro/frzy8vK0Z88ePfzww2poaNCbb75puNpYAz5A+IeysrLon6dOnaqioiKNHz9eb7zxhu666y7DlWEguPXWW6N/njJliqZOnaqJEyeqpqZGc+bMMVxZcpSXl2vv3r3nxfugZ9Lfebj77rujf54yZYpyc3M1Z84c7d+/XxMnTjzXy+zTgP8WXGZmpoYPH37ap1ja2tqUk5NjtKqBYfTo0br88svV2NhovRQzX74GeH2cbsKECcrMzBySr48VK1bo7bff1vvvvx/z61tycnJ0/Phxtbe3x+w/VF8P/Z2HvhQVFUnSgHo9DPgApaamatq0aaquro4+1tvbq+rqahUXFxuuzN6RI0e0f/9+5ebmWi/FTEFBgXJycmJeH5FIRDt27DjvXx+ffvqpDh8+PKReH845rVixQps2bdK2bdtUUFAQ8/y0adOUkpIS83poaGjQgQMHhtTr4WznoS+7d++WpIH1erD+FMTX8dprr7lgMOiqqqrcH//4R3f33Xe70aNHu9bWVuulnVMPPPCAq6mpcU1NTe53v/udKykpcZmZme7QoUPWS0uqjo4O9/HHH7uPP/7YSXJPP/20+/jjj93f/vY355xzP/3pT93o0aPdli1b3J49e9z8+fNdQUGB++KLL4xXnlhnOg8dHR3uwQcfdHV1da6pqcm999577rvf/a677LLL3LFjx6yXnjDLly93oVDI1dTUuJaWluh29OjR6D733HOPGzdunNu2bZvbuXOnKy4udsXFxYarTryznYfGxkb34x//2O3cudM1NTW5LVu2uAkTJrhZs2YZrzzWoAiQc84999xzbty4cS41NdVNnz7d1dfXWy/pnFu8eLHLzc11qamp7tvf/rZbvHixa2xstF5W0r3//vtO0mnbkiVLnHMnP4r92GOPuezsbBcMBt2cOXNcQ0OD7aKT4Ezn4ejRo27u3LluzJgxLiUlxY0fP94tW7ZsyP2ftL7++SW59evXR/f54osv3L333uu+9a1vuVGjRrmFCxe6lpYWu0UnwdnOw4EDB9ysWbNcRkaGCwaD7tJLL3U/+tGPXDgctl34Kfh1DAAAEwP+PSAAwNBEgAAAJggQAMAEAQIAmCBAAAATBAgAYIIAAQBMECAAgAkCBAAwQYAAACYIEADABAECAJj4f4W4/AnknuSPAAAAAElFTkSuQmCC\n" 215 | }, 216 | "metadata": {} 217 | } 218 | ] 219 | }, 220 | { 221 | "cell_type": "markdown", 222 | "source": [ 223 | "With conventional neural networks, we cannot feed in the image as input as is. So we need to flatten the images into one-dimensional vectors, each of size 1 x (28 x 28) = 1 x 784." 
224 | ], 225 | "metadata": { 226 | "id": "-PaL5m44M_d1" 227 | } 228 | }, 229 | { 230 | "cell_type": "code", 231 | "source": [ 232 | "# flatten images into one-dimensional vector\n", 233 | "\n", 234 | "num_pixels = X_train.shape[1] * X_train.shape[2] # find size of one-dimensional vector\n", 235 | "\n", 236 | "X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32') # flatten training images\n", 237 | "X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32') # flatten test images" 238 | ], 239 | "metadata": { 240 | "id": "Ck10DsnvNCHt" 241 | }, 242 | "execution_count": 8, 243 | "outputs": [] 244 | }, 245 | { 246 | "cell_type": "markdown", 247 | "source": [ 248 | "Since pixel values can range from 0 to 255, let's normalize the vectors to be between 0 and 1." 249 | ], 250 | "metadata": { 251 | "id": "N7tPvpunNGIt" 252 | } 253 | }, 254 | { 255 | "cell_type": "code", 256 | "source": [ 257 | "# normalize inputs from 0-255 to 0-1\n", 258 | "X_train = X_train / 255\n", 259 | "X_test = X_test / 255" 260 | ], 261 | "metadata": { 262 | "id": "H5cgB8iQNGoX" 263 | }, 264 | "execution_count": 9, 265 | "outputs": [] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "source": [ 270 | "Finally, before we start building our model, remember that for classification we need to divide our target variable into categories. We use the to_categorical function from the Keras Utilities package." 271 | ], 272 | "metadata": { 273 | "id": "5zLEFn9aNJlH" 274 | } 275 | }, 276 | { 277 | "cell_type": "code", 278 | "source": [ 279 | "# one hot encode outputs\n", 280 | "y_train = to_categorical(y_train)\n", 281 | "y_test = to_categorical(y_test)\n", 282 | "\n", 283 | "num_classes = y_test.shape[1]\n", 284 | "print(num_classes)" 285 | ], 286 | "metadata": { 287 | "colab": { 288 | "base_uri": "https://localhost:8080/" 289 | }, 290 | "id": "EYRllOuaNMBb", 291 | "outputId": "db5dd2a5-282e-465e-9fd7-ebf516bbc4da" 292 | }, 293 | "execution_count": 10, 294 | "outputs": [ 295 | { 296 | "output_type": "stream", 297 | "name": "stdout", 298 | "text": [ 299 | "10\n" 300 | ] 301 | } 302 | ] 303 | }, 304 | { 305 | "cell_type": "markdown", 306 | "source": [ 307 | "# Build a Neural Network" 308 | ], 309 | "metadata": { 310 | "id": "QMupWgTQNOwa" 311 | } 312 | }, 313 | { 314 | "cell_type": "code", 315 | "source": [ 316 | "# define classification model\n", 317 | "def classification_model():\n", 318 | " # create model\n", 319 | " model = Sequential()\n", 320 | " model.add(Dense(num_pixels, activation='relu', input_shape=(num_pixels,)))\n", 321 | " model.add(Dense(100, activation='relu'))\n", 322 | " model.add(Dense(num_classes, activation='softmax'))\n", 323 | "\n", 324 | "\n", 325 | " # compile model\n", 326 | " model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n", 327 | " return model" 328 | ], 329 | "metadata": { 330 | "id": "ixOXBPlpNRO9" 331 | }, 332 | "execution_count": 11, 333 | "outputs": [] 334 | }, 335 | { 336 | "cell_type": "markdown", 337 | "source": [ 338 | "# Train and Test the Network" 339 | ], 340 | "metadata": { 341 | "id": "-J0oW3XINUA5" 342 | } 343 | }, 344 | { 345 | "cell_type": "code", 346 | "source": [ 347 | "# build the model\n", 348 | "model = classification_model()\n", 349 | "\n", 350 | "# fit the model\n", 351 | "model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, verbose=2)\n", 352 | "\n", 353 | "# evaluate the model\n", 354 | "scores = model.evaluate(X_test, y_test, verbose=0)" 355 | ], 356 | "metadata": { 357 | "colab": { 
358 | "base_uri": "https://localhost:8080/" 359 | }, 360 | "id": "qitndvvFNW84", 361 | "outputId": "b006b608-7d16-4442-881f-7f017162f250" 362 | }, 363 | "execution_count": 12, 364 | "outputs": [ 365 | { 366 | "output_type": "stream", 367 | "name": "stderr", 368 | "text": [ 369 | "/usr/local/lib/python3.10/dist-packages/keras/src/layers/core/dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.\n", 370 | " super().__init__(activity_regularizer=activity_regularizer, **kwargs)\n" 371 | ] 372 | }, 373 | { 374 | "output_type": "stream", 375 | "name": "stdout", 376 | "text": [ 377 | "Epoch 1/10\n", 378 | "1875/1875 - 21s - 11ms/step - accuracy: 0.9444 - loss: 0.1845 - val_accuracy: 0.9725 - val_loss: 0.0847\n", 379 | "Epoch 2/10\n", 380 | "1875/1875 - 18s - 10ms/step - accuracy: 0.9752 - loss: 0.0788 - val_accuracy: 0.9772 - val_loss: 0.0742\n", 381 | "Epoch 3/10\n", 382 | "1875/1875 - 24s - 13ms/step - accuracy: 0.9824 - loss: 0.0536 - val_accuracy: 0.9779 - val_loss: 0.0765\n", 383 | "Epoch 4/10\n", 384 | "1875/1875 - 40s - 21ms/step - accuracy: 0.9871 - loss: 0.0396 - val_accuracy: 0.9797 - val_loss: 0.0711\n", 385 | "Epoch 5/10\n", 386 | "1875/1875 - 19s - 10ms/step - accuracy: 0.9901 - loss: 0.0307 - val_accuracy: 0.9822 - val_loss: 0.0695\n", 387 | "Epoch 6/10\n", 388 | "1875/1875 - 22s - 12ms/step - accuracy: 0.9912 - loss: 0.0257 - val_accuracy: 0.9801 - val_loss: 0.0882\n", 389 | "Epoch 7/10\n", 390 | "1875/1875 - 20s - 11ms/step - accuracy: 0.9931 - loss: 0.0215 - val_accuracy: 0.9775 - val_loss: 0.0841\n", 391 | "Epoch 8/10\n", 392 | "1875/1875 - 21s - 11ms/step - accuracy: 0.9937 - loss: 0.0197 - val_accuracy: 0.9787 - val_loss: 0.0883\n", 393 | "Epoch 9/10\n", 394 | "1875/1875 - 19s - 10ms/step - accuracy: 0.9949 - loss: 0.0161 - val_accuracy: 0.9803 - val_loss: 0.0825\n", 395 | "Epoch 10/10\n", 396 | "1875/1875 - 22s - 12ms/step - accuracy: 0.9947 - loss: 0.0165 - val_accuracy: 0.9809 - val_loss: 0.0896\n" 397 | ] 398 | } 399 | ] 400 | }, 401 | { 402 | "cell_type": "markdown", 403 | "source": [ 404 | "Let's print the accuracy and the corresponding error." 405 | ], 406 | "metadata": { 407 | "id": "fYKVt6HYOcMB" 408 | } 409 | }, 410 | { 411 | "cell_type": "code", 412 | "source": [ 413 | "print('Accuracy: {}% \\n Error: {}'.format(scores[1], 1 - scores[1]))" 414 | ], 415 | "metadata": { 416 | "colab": { 417 | "base_uri": "https://localhost:8080/" 418 | }, 419 | "id": "Q2QLCNoXOeBV", 420 | "outputId": "96fe8cfe-824a-454d-dde3-a0c58b3a6a56" 421 | }, 422 | "execution_count": 13, 423 | "outputs": [ 424 | { 425 | "output_type": "stream", 426 | "name": "stdout", 427 | "text": [ 428 | "Accuracy: 0.98089998960495% \n", 429 | " Error: 0.01910001039505005\n" 430 | ] 431 | } 432 | ] 433 | }, 434 | { 435 | "cell_type": "markdown", 436 | "source": [ 437 | "Just running 10 epochs could actually take over 20 minutes. But enjoy the results as they are getting generated." 438 | ], 439 | "metadata": { 440 | "id": "hDlN3TsBOiuQ" 441 | } 442 | }, 443 | { 444 | "cell_type": "markdown", 445 | "source": [ 446 | "Sometimes, you cannot afford to retrain your model everytime you want to use it, especially if you are limited on computational resources and training your model can take a long time. Therefore, with the Keras library, you can save your model after training. To do that, we use the save method." 
447 | ], 448 | "metadata": { 449 | "id": "zLEuA0bUOjTT" 450 | } 451 | }, 452 | { 453 | "cell_type": "code", 454 | "source": [ 455 | "model.save('classification_model.h5')" 456 | ], 457 | "metadata": { 458 | "colab": { 459 | "base_uri": "https://localhost:8080/" 460 | }, 461 | "id": "v8uNtWWQOlfk", 462 | "outputId": "da809084-a104-48aa-b429-075a42cb78a1" 463 | }, 464 | "execution_count": 14, 465 | "outputs": [ 466 | { 467 | "output_type": "stream", 468 | "name": "stderr", 469 | "text": [ 470 | "WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`. \n" 471 | ] 472 | } 473 | ] 474 | }, 475 | { 476 | "cell_type": "markdown", 477 | "source": [ 478 | "Since our model contains multidimensional arrays of data, then models are usually saved as .h5 files.\n", 479 | "\n", 480 | "\n", 481 | "\n", 482 | "\n", 483 | "\n", 484 | "\n", 485 | "\n", 486 | "When you are ready to use your model again, you use the load_model function from keras.models." 487 | ], 488 | "metadata": { 489 | "id": "h-csISDVOqbB" 490 | } 491 | }, 492 | { 493 | "cell_type": "code", 494 | "source": [ 495 | "from keras.models import load_model" 496 | ], 497 | "metadata": { 498 | "id": "RP55kuPHOtZI" 499 | }, 500 | "execution_count": 15, 501 | "outputs": [] 502 | }, 503 | { 504 | "cell_type": "code", 505 | "source": [ 506 | "pretrained_model = load_model('classification_model.h5')" 507 | ], 508 | "metadata": { 509 | "id": "6y6WViYzOu_q", 510 | "outputId": "2e2b2626-589d-4985-b33c-c6e6d8946eee", 511 | "colab": { 512 | "base_uri": "https://localhost:8080/" 513 | } 514 | }, 515 | "execution_count": 16, 516 | "outputs": [ 517 | { 518 | "output_type": "stream", 519 | "name": "stderr", 520 | "text": [ 521 | "WARNING:absl:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.\n" 522 | ] 523 | } 524 | ] 525 | } 526 | ] 527 | } -------------------------------------------------------------------------------- /Convolutional_Neural_Networks_with_Keras.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "provenance": [] 7 | }, 8 | "kernelspec": { 9 | "name": "python3", 10 | "display_name": "Python 3" 11 | }, 12 | "language_info": { 13 | "name": "python" 14 | } 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "markdown", 19 | "source": [ 20 | "**Import Keras and Packages**" 21 | ], 22 | "metadata": { 23 | "id": "it233c3y2J3J" 24 | } 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 8, 29 | "metadata": { 30 | "id": "oytJPOiC18DX" 31 | }, 32 | "outputs": [], 33 | "source": [ 34 | "# All Libraries required for this lab are listed below. The libraries pre-installed on Skills Network Labs are commented.\n", 35 | "# If you run this notebook on a different environment, e.g. 
your desktop, you may need to uncomment and install certain libraries.\n", 36 | "\n", 37 | "# !pip install numpy==1.21.4\n", 38 | "# !pip install pandas==1.3.4\n", 39 | "# !pip install keras==2.1.6" 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "source": [ 45 | "import keras\n", 46 | "from keras.models import Sequential\n", 47 | "from keras.layers import Dense\n", 48 | "from keras.utils import to_categorical" 49 | ], 50 | "metadata": { 51 | "id": "sA2LKNDQ2Qab" 52 | }, 53 | "execution_count": 4, 54 | "outputs": [] 55 | }, 56 | { 57 | "cell_type": "markdown", 58 | "source": [ 59 | "* When working with convolutional neural networks in particular, we will need additional packages." 60 | ], 61 | "metadata": { 62 | "id": "seLnBmgk2wWp" 63 | } 64 | }, 65 | { 66 | "cell_type": "code", 67 | "source": [ 68 | "# from keras.layers.convolutional import Conv2D # to add convolutional layers\n", 69 | "# from keras.layers.convolutional import MaxPooling2D # to add pooling layers\n", 70 | "# from keras.layers import Flatten # to flatten data for fully connected layers" 71 | ], 72 | "metadata": { 73 | "id": "9Wv8Case2xYZ" 74 | }, 75 | "execution_count": 15, 76 | "outputs": [] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "source": [ 81 | "from keras.layers import Conv2D # to add convolutional layers\n", 82 | "from keras.layers import MaxPooling2D # to add pooling layers\n", 83 | "from keras.layers import Flatten # to flatten data for fully connected layers\n" 84 | ], 85 | "metadata": { 86 | "id": "StCDYzYA4L-o" 87 | }, 88 | "execution_count": 16, 89 | "outputs": [] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "source": [ 94 | "# Convolutional Layer with One set of convolutional and pooling layers" 95 | ], 96 | "metadata": { 97 | "id": "UVnWoLTB2_2n" 98 | } 99 | }, 100 | { 101 | "cell_type": "code", 102 | "source": [ 103 | "# # import data\n", 104 | "# from keras.datasets import mnist\n", 105 | "\n", 106 | "# # load data\n", 107 | "# (X_train, y_train), (X_test, y_test) = mnist.load_data()\n", 108 | "\n", 109 | "# # reshape to be [samples][pixels][width][height]\n", 110 | "# X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')\n", 111 | "# X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')" 112 | ], 113 | "metadata": { 114 | "id": "fIeq6R2d4QNK" 115 | }, 116 | "execution_count": 18, 117 | "outputs": [] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "source": [ 122 | "!pip install tensorflow --upgrade\n" 123 | ], 124 | "metadata": { 125 | "colab": { 126 | "base_uri": "https://localhost:8080/", 127 | "height": 1000 128 | }, 129 | "id": "pfMZenZ84hh2", 130 | "outputId": "936d668e-3a03-4057-9608-d55dc5046873" 131 | }, 132 | "execution_count": 19, 133 | "outputs": [ 134 | { 135 | "output_type": "stream", 136 | "name": "stdout", 137 | "text": [ 138 | "Requirement already satisfied: tensorflow in /usr/local/lib/python3.10/dist-packages (2.17.0)\n", 139 | "Requirement already satisfied: absl-py>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (1.4.0)\n", 140 | "Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (1.6.3)\n", 141 | "Requirement already satisfied: flatbuffers>=24.3.25 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (24.3.25)\n", 142 | "Requirement already satisfied: gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (0.6.0)\n", 143 | "Requirement already satisfied: google-pasta>=0.1.1 in 
/usr/local/lib/python3.10/dist-packages (from tensorflow) (0.2.0)\n", 144 | "Requirement already satisfied: h5py>=3.10.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (3.11.0)\n", 145 | "Requirement already satisfied: libclang>=13.0.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (18.1.1)\n", 146 | "Requirement already satisfied: ml-dtypes<0.5.0,>=0.3.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (0.4.1)\n", 147 | "Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (3.4.0)\n", 148 | "Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from tensorflow) (24.1)\n", 149 | "Requirement already satisfied: protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (3.20.3)\n", 150 | "Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (2.32.3)\n", 151 | "Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from tensorflow) (71.0.4)\n", 152 | "Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (1.16.0)\n", 153 | "Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (2.4.0)\n", 154 | "Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (4.12.2)\n", 155 | "Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (1.16.0)\n", 156 | "Requirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (1.64.1)\n", 157 | "Requirement already satisfied: tensorboard<2.18,>=2.17 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (2.17.0)\n", 158 | "Collecting keras>=3.2.0 (from tensorflow)\n", 159 | " Downloading keras-3.5.0-py3-none-any.whl.metadata (5.8 kB)\n", 160 | "Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow) (0.37.1)\n", 161 | "Collecting numpy<2.0.0,>=1.23.5 (from tensorflow)\n", 162 | " Using cached numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)\n", 163 | "Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from astunparse>=1.6.0->tensorflow) (0.44.0)\n", 164 | "Requirement already satisfied: rich in /usr/local/lib/python3.10/dist-packages (from keras>=3.2.0->tensorflow) (13.8.1)\n", 165 | "Requirement already satisfied: namex in /usr/local/lib/python3.10/dist-packages (from keras>=3.2.0->tensorflow) (0.0.8)\n", 166 | "Requirement already satisfied: optree in /usr/local/lib/python3.10/dist-packages (from keras>=3.2.0->tensorflow) (0.12.1)\n", 167 | "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorflow) (3.3.2)\n", 168 | "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorflow) (3.10)\n", 169 | "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorflow) (2.2.3)\n", 170 | "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorflow) (2024.8.30)\n", 171 | 
"Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.18,>=2.17->tensorflow) (3.7)\n", 172 | "Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.18,>=2.17->tensorflow) (0.7.2)\n", 173 | "Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.18,>=2.17->tensorflow) (3.0.4)\n", 174 | "Requirement already satisfied: MarkupSafe>=2.1.1 in /usr/local/lib/python3.10/dist-packages (from werkzeug>=1.0.1->tensorboard<2.18,>=2.17->tensorflow) (2.1.5)\n", 175 | "Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.10/dist-packages (from rich->keras>=3.2.0->tensorflow) (3.0.0)\n", 176 | "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.10/dist-packages (from rich->keras>=3.2.0->tensorflow) (2.18.0)\n", 177 | "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.10/dist-packages (from markdown-it-py>=2.2.0->rich->keras>=3.2.0->tensorflow) (0.1.2)\n", 178 | "Downloading keras-3.5.0-py3-none-any.whl (1.1 MB)\n", 179 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.1/1.1 MB\u001b[0m \u001b[31m18.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 180 | "\u001b[?25hUsing cached numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)\n", 181 | "Installing collected packages: numpy, keras\n", 182 | " Attempting uninstall: numpy\n", 183 | " Found existing installation: numpy 2.1.1\n", 184 | " Uninstalling numpy-2.1.1:\n", 185 | " Successfully uninstalled numpy-2.1.1\n", 186 | " Attempting uninstall: keras\n", 187 | " Found existing installation: Keras 2.1.6\n", 188 | " Uninstalling Keras-2.1.6:\n", 189 | " Successfully uninstalled Keras-2.1.6\n", 190 | "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\n", 191 | "arviz 0.19.0 requires pandas>=1.5.0, but you have pandas 1.3.4 which is incompatible.\n", 192 | "bigframes 1.19.0 requires pandas>=1.5.3, but you have pandas 1.3.4 which is incompatible.\n", 193 | "cudf-cu12 24.6.1 requires pandas<2.2.3dev0,>=2.0, but you have pandas 1.3.4 which is incompatible.\n", 194 | "geopandas 1.0.1 requires pandas>=1.4.0, but you have pandas 1.3.4 which is incompatible.\n", 195 | "ibis-framework 9.2.0 requires pandas<3,>=1.5.3, but you have pandas 1.3.4 which is incompatible.\n", 196 | "mizani 0.11.4 requires pandas>=2.1.0, but you have pandas 1.3.4 which is incompatible.\n", 197 | "plotnine 0.13.6 requires pandas<3.0.0,>=2.1.0, but you have pandas 1.3.4 which is incompatible.\n", 198 | "statsmodels 0.14.3 requires pandas!=2.1.0,>=1.4, but you have pandas 1.3.4 which is incompatible.\n", 199 | "xarray 2024.9.0 requires pandas>=2.1, but you have pandas 1.3.4 which is incompatible.\u001b[0m\u001b[31m\n", 200 | "\u001b[0mSuccessfully installed keras-3.5.0 numpy-1.26.4\n" 201 | ] 202 | }, 203 | { 204 | "output_type": "display_data", 205 | "data": { 206 | "application/vnd.colab-display-data+json": { 207 | "pip_warning": { 208 | "packages": [ 209 | "keras" 210 | ] 211 | }, 212 | "id": "f5a05f3a50dc4545a419017c6d6040c0" 213 | } 214 | }, 215 | "metadata": {} 216 | } 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "source": [ 222 | "from tensorflow.keras.datasets import mnist\n", 223 | "\n", 224 | "# load data\n", 225 | "(X_train, y_train), (X_test, y_test) = mnist.load_data()\n", 226 | "\n", 227 | "# reshape to be [samples][pixels][width][height]\n", 228 | "X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')\n", 229 | "X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')\n" 230 | ], 231 | "metadata": { 232 | "colab": { 233 | "base_uri": "https://localhost:8080/" 234 | }, 235 | "id": "RwxgmdqV4j1z", 236 | "outputId": "4c2f99cb-6488-439c-bfd8-faac458a9406" 237 | }, 238 | "execution_count": 20, 239 | "outputs": [ 240 | { 241 | "output_type": "stream", 242 | "name": "stdout", 243 | "text": [ 244 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n", 245 | "\u001b[1m11490434/11490434\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 0us/step\n" 246 | ] 247 | } 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "source": [ 253 | "Let's normalize the pixel values to be between 0 and 1" 254 | ], 255 | "metadata": { 256 | "id": "4uIayqBH4s4l" 257 | } 258 | }, 259 | { 260 | "cell_type": "code", 261 | "source": [ 262 | "X_train = X_train / 255 # normalize training data\n", 263 | "X_test = X_test / 255 # normalize test data" 264 | ], 265 | "metadata": { 266 | "id": "NQvQz09B4tZ4" 267 | }, 268 | "execution_count": 21, 269 | "outputs": [] 270 | }, 271 | { 272 | "cell_type": "markdown", 273 | "source": [ 274 | "Next, let's convert the target variable into binary categories" 275 | ], 276 | "metadata": { 277 | "id": "qWERMAnp4xkm" 278 | } 279 | }, 280 | { 281 | "cell_type": "code", 282 | "source": [ 283 | "y_train = to_categorical(y_train)\n", 284 | "y_test = to_categorical(y_test)\n", 285 | "\n", 286 | "num_classes = y_test.shape[1] # number of categories" 287 | ], 288 | "metadata": { 289 | "id": "69Iwnj9340FO" 290 | }, 291 | "execution_count": 22, 292 | "outputs": [] 293 | }, 294 | { 295 | "cell_type": "markdown", 296 | "source": [ 297 | "Next, let's define a function 
that creates our model. Let's start with one set of convolutional and pooling layers." 298 | ], 299 | "metadata": { 300 | "id": "f1vRw7DA45uK" 301 | } 302 | }, 303 | { 304 | "cell_type": "code", 305 | "source": [ 306 | "def convolutional_model():\n", 307 | "\n", 308 | " # create model\n", 309 | " model = Sequential()\n", 310 | " model.add(Conv2D(16, (5, 5), strides=(1, 1), activation='relu', input_shape=(28, 28, 1)))\n", 311 | " model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n", 312 | "\n", 313 | " model.add(Flatten())\n", 314 | " model.add(Dense(100, activation='relu'))\n", 315 | " model.add(Dense(num_classes, activation='softmax'))\n", 316 | "\n", 317 | " # compile model\n", 318 | " model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n", 319 | " return model" 320 | ], 321 | "metadata": { 322 | "id": "MR91hsHc48WT" 323 | }, 324 | "execution_count": 24, 325 | "outputs": [] 326 | }, 327 | { 328 | "cell_type": "markdown", 329 | "source": [ 330 | "Finally, let's call the function to create the model, and then let's train it and evaluate it." 331 | ], 332 | "metadata": { 333 | "id": "0w35Mv7t5Fje" 334 | } 335 | }, 336 | { 337 | "cell_type": "code", 338 | "source": [ 339 | "# build the model\n", 340 | "model = convolutional_model()\n", 341 | "\n", 342 | "# fit the model\n", 343 | "model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)\n", 344 | "\n", 345 | "# evaluate the model\n", 346 | "scores = model.evaluate(X_test, y_test, verbose=0)\n", 347 | "print(\"Accuracy: {} \\n Error: {}\".format(scores[1], 100-scores[1]*100))" 348 | ], 349 | "metadata": { 350 | "colab": { 351 | "base_uri": "https://localhost:8080/" 352 | }, 353 | "id": "JfoBjz0b5IS5", 354 | "outputId": "a79a6ab8-c3c7-4c7f-8e3d-48531d46a2d0" 355 | }, 356 | "execution_count": 25, 357 | "outputs": [ 358 | { 359 | "output_type": "stream", 360 | "name": "stderr", 361 | "text": [ 362 | "/usr/local/lib/python3.10/dist-packages/keras/src/layers/convolutional/base_conv.py:107: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. 
When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.\n", 363 | " super().__init__(activity_regularizer=activity_regularizer, **kwargs)\n" 364 | ] 365 | }, 366 | { 367 | "output_type": "stream", 368 | "name": "stdout", 369 | "text": [ 370 | "Epoch 1/10\n", 371 | "300/300 - 25s - 83ms/step - accuracy: 0.9222 - loss: 0.2879 - val_accuracy: 0.9694 - val_loss: 0.0989\n", 372 | "Epoch 2/10\n", 373 | "300/300 - 41s - 136ms/step - accuracy: 0.9750 - loss: 0.0855 - val_accuracy: 0.9779 - val_loss: 0.0658\n", 374 | "Epoch 3/10\n", 375 | "300/300 - 39s - 132ms/step - accuracy: 0.9831 - loss: 0.0560 - val_accuracy: 0.9823 - val_loss: 0.0506\n", 376 | "Epoch 4/10\n", 377 | "300/300 - 40s - 134ms/step - accuracy: 0.9872 - loss: 0.0436 - val_accuracy: 0.9864 - val_loss: 0.0444\n", 378 | "Epoch 5/10\n", 379 | "300/300 - 42s - 142ms/step - accuracy: 0.9893 - loss: 0.0357 - val_accuracy: 0.9874 - val_loss: 0.0405\n", 380 | "Epoch 6/10\n", 381 | "300/300 - 41s - 137ms/step - accuracy: 0.9912 - loss: 0.0291 - val_accuracy: 0.9847 - val_loss: 0.0445\n", 382 | "Epoch 7/10\n", 383 | "300/300 - 41s - 136ms/step - accuracy: 0.9931 - loss: 0.0232 - val_accuracy: 0.9882 - val_loss: 0.0362\n", 384 | "Epoch 8/10\n", 385 | "300/300 - 41s - 136ms/step - accuracy: 0.9941 - loss: 0.0198 - val_accuracy: 0.9878 - val_loss: 0.0378\n", 386 | "Epoch 9/10\n", 387 | "300/300 - 40s - 132ms/step - accuracy: 0.9952 - loss: 0.0161 - val_accuracy: 0.9883 - val_loss: 0.0359\n", 388 | "Epoch 10/10\n", 389 | "300/300 - 25s - 85ms/step - accuracy: 0.9957 - loss: 0.0142 - val_accuracy: 0.9872 - val_loss: 0.0408\n", 390 | "Accuracy: 0.9872000217437744 \n", 391 | " Error: 1.2799978256225586\n" 392 | ] 393 | } 394 | ] 395 | }, 396 | { 397 | "cell_type": "markdown", 398 | "source": [ 399 | "# Convolutional Layer with two sets of convolutional and pooling layers" 400 | ], 401 | "metadata": { 402 | "id": "dZi6H0F35N8U" 403 | } 404 | }, 405 | { 406 | "cell_type": "markdown", 407 | "source": [ 408 | "Let's redefine our convolutional model so that it has two convolutional and pooling layers instead of just one layer of each." 409 | ], 410 | "metadata": { 411 | "id": "t8mkamhQ5PbF" 412 | } 413 | }, 414 | { 415 | "cell_type": "code", 416 | "source": [ 417 | "def convolutional_model():\n", 418 | "\n", 419 | " # create model\n", 420 | " model = Sequential()\n", 421 | " model.add(Conv2D(16, (5, 5), activation='relu', input_shape=(28, 28, 1)))\n", 422 | " model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n", 423 | "\n", 424 | " model.add(Conv2D(8, (2, 2), activation='relu'))\n", 425 | " model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n", 426 | "\n", 427 | " model.add(Flatten())\n", 428 | " model.add(Dense(100, activation='relu'))\n", 429 | " model.add(Dense(num_classes, activation='softmax'))\n", 430 | "\n", 431 | " # Compile model\n", 432 | " model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n", 433 | " return model" 434 | ], 435 | "metadata": { 436 | "id": "oaC9mJ8S5TW_" 437 | }, 438 | "execution_count": 26, 439 | "outputs": [] 440 | }, 441 | { 442 | "cell_type": "markdown", 443 | "source": [ 444 | "* Now, let's call the function to create our new convolutional neural network, and then let's train it and evaluate it." 
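(Aside: the training log above shows a Keras UserWarning recommending an explicit `Input(shape)` object instead of passing `input_shape` to the first `Conv2D` layer. Below is a minimal sketch, not part of the original notebook, of the same two-conv-layer architecture written that way; the function name `convolutional_model_v2` and the explicit `num_classes` argument are illustrative assumptions.)

# Sketch only: same layers as convolutional_model() above, but with an explicit
# Input layer, which avoids the UserWarning shown in the training log.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

def convolutional_model_v2(num_classes):
    model = Sequential()
    model.add(Input(shape=(28, 28, 1)))            # explicit input spec instead of input_shape=
    model.add(Conv2D(16, (5, 5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Conv2D(8, (2, 2), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

It would be used the same way as the original, e.g. model = convolutional_model_v2(num_classes) before calling fit.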
445 | ], 446 | "metadata": { 447 | "id": "ctvkZL4i7E2h" 448 | } 449 | }, 450 | { 451 | "cell_type": "code", 452 | "source": [ 453 | "# build the model\n", 454 | "model = convolutional_model()\n", 455 | "\n", 456 | "# fit the model\n", 457 | "model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)\n", 458 | "\n", 459 | "# evaluate the model\n", 460 | "scores = model.evaluate(X_test, y_test, verbose=0)\n", 461 | "print(\"Accuracy: {} \\n Error: {}\".format(scores[1], 100-scores[1]*100))" 462 | ], 463 | "metadata": { 464 | "colab": { 465 | "base_uri": "https://localhost:8080/" 466 | }, 467 | "id": "j5QUpkRX7F8t", 468 | "outputId": "42fd611a-34a3-4798-a39b-7152dbec8dec" 469 | }, 470 | "execution_count": 27, 471 | "outputs": [ 472 | { 473 | "output_type": "stream", 474 | "name": "stdout", 475 | "text": [ 476 | "Epoch 1/10\n", 477 | "300/300 - 25s - 84ms/step - accuracy: 0.8748 - loss: 0.4637 - val_accuracy: 0.9625 - val_loss: 0.1316\n", 478 | "Epoch 2/10\n", 479 | "300/300 - 41s - 136ms/step - accuracy: 0.9658 - loss: 0.1148 - val_accuracy: 0.9723 - val_loss: 0.0883\n", 480 | "Epoch 3/10\n", 481 | "300/300 - 42s - 139ms/step - accuracy: 0.9752 - loss: 0.0834 - val_accuracy: 0.9803 - val_loss: 0.0619\n", 482 | "Epoch 4/10\n", 483 | "300/300 - 24s - 79ms/step - accuracy: 0.9799 - loss: 0.0673 - val_accuracy: 0.9807 - val_loss: 0.0583\n", 484 | "Epoch 5/10\n", 485 | "300/300 - 41s - 135ms/step - accuracy: 0.9825 - loss: 0.0583 - val_accuracy: 0.9832 - val_loss: 0.0531\n", 486 | "Epoch 6/10\n", 487 | "300/300 - 41s - 137ms/step - accuracy: 0.9847 - loss: 0.0500 - val_accuracy: 0.9839 - val_loss: 0.0486\n", 488 | "Epoch 7/10\n", 489 | "300/300 - 41s - 137ms/step - accuracy: 0.9865 - loss: 0.0438 - val_accuracy: 0.9862 - val_loss: 0.0441\n", 490 | "Epoch 8/10\n", 491 | "300/300 - 40s - 134ms/step - accuracy: 0.9879 - loss: 0.0399 - val_accuracy: 0.9856 - val_loss: 0.0428\n", 492 | "Epoch 9/10\n", 493 | "300/300 - 40s - 132ms/step - accuracy: 0.9886 - loss: 0.0363 - val_accuracy: 0.9865 - val_loss: 0.0406\n", 494 | "Epoch 10/10\n", 495 | "300/300 - 24s - 79ms/step - accuracy: 0.9897 - loss: 0.0330 - val_accuracy: 0.9881 - val_loss: 0.0361\n", 496 | "Accuracy: 0.988099992275238 \n", 497 | " Error: 1.1900007724761963\n" 498 | ] 499 | } 500 | ] 501 | } 502 | ] 503 | } -------------------------------------------------------------------------------- /Peer-graded Assignment_ Build a Regression Model in Keras (A).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## Download and Clean Dataset" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "Let's start by importing the pandas and the Numpy libraries." 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 2, 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "import pandas as pd\n", 24 | "import numpy as np" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "We will be using the dataset provided in the assignment\n", 32 | "\n", 33 | "The dataset is about the compressive strength of different samples of concrete based on the volumes of the different ingredients that were used to make them. Ingredients include:\n", 34 | "\n", 35 | "1. Cement\n", 36 | "\n", 37 | "2. Blast Furnace Slag\n", 38 | "\n", 39 | "3. Fly Ash\n", 40 | "\n", 41 | "4. Water\n", 42 | "\n", 43 | "5. 
Superplasticizer\n", 44 | "\n", 45 | "6. Coarse Aggregate\n", 46 | "\n", 47 | "7. Fine Aggregate" 48 | ] 49 | }, 50 | { 51 | "cell_type": "markdown", 52 | "metadata": {}, 53 | "source": [ 54 | "Let's read the dataset into a pandas dataframe." 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 3, 60 | "metadata": {}, 61 | "outputs": [ 62 | { 63 | "data": { 64 | "text/html": [ 65 | "
\n", 66 | "\n", 79 | "\n", 80 | " \n", 81 | " \n", 82 | " \n", 83 | " \n", 84 | " \n", 85 | " \n", 86 | " \n", 87 | " \n", 88 | " \n", 89 | " \n", 90 | " \n", 91 | " \n", 92 | " \n", 93 | " \n", 94 | " \n", 95 | " \n", 96 | " \n", 97 | " \n", 98 | " \n", 99 | " \n", 100 | " \n", 101 | " \n", 102 | " \n", 103 | " \n", 104 | " \n", 105 | " \n", 106 | " \n", 107 | " \n", 108 | " \n", 109 | " \n", 110 | " \n", 111 | " \n", 112 | " \n", 113 | " \n", 114 | " \n", 115 | " \n", 116 | " \n", 117 | " \n", 118 | " \n", 119 | " \n", 120 | " \n", 121 | " \n", 122 | " \n", 123 | " \n", 124 | " \n", 125 | " \n", 126 | " \n", 127 | " \n", 128 | " \n", 129 | " \n", 130 | " \n", 131 | " \n", 132 | " \n", 133 | " \n", 134 | " \n", 135 | " \n", 136 | " \n", 137 | " \n", 138 | " \n", 139 | " \n", 140 | " \n", 141 | " \n", 142 | " \n", 143 | " \n", 144 | " \n", 145 | " \n", 146 | " \n", 147 | " \n", 148 | " \n", 149 | " \n", 150 | " \n", 151 | " \n", 152 | " \n", 153 | " \n", 154 | " \n", 155 | " \n", 156 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAgeStrength
0540.00.00.0162.02.51040.0676.02879.99
1540.00.00.0162.02.51055.0676.02861.89
2332.5142.50.0228.00.0932.0594.027040.27
3332.5142.50.0228.00.0932.0594.036541.05
4198.6132.40.0192.00.0978.4825.536044.30
\n", 157 | "
" 158 | ], 159 | "text/plain": [ 160 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 161 | "0 540.0 0.0 0.0 162.0 2.5 \n", 162 | "1 540.0 0.0 0.0 162.0 2.5 \n", 163 | "2 332.5 142.5 0.0 228.0 0.0 \n", 164 | "3 332.5 142.5 0.0 228.0 0.0 \n", 165 | "4 198.6 132.4 0.0 192.0 0.0 \n", 166 | "\n", 167 | " Coarse Aggregate Fine Aggregate Age Strength \n", 168 | "0 1040.0 676.0 28 79.99 \n", 169 | "1 1055.0 676.0 28 61.89 \n", 170 | "2 932.0 594.0 270 40.27 \n", 171 | "3 932.0 594.0 365 41.05 \n", 172 | "4 978.4 825.5 360 44.30 " 173 | ] 174 | }, 175 | "execution_count": 3, 176 | "metadata": {}, 177 | "output_type": "execute_result" 178 | } 179 | ], 180 | "source": [ 181 | "concrete_data = pd.read_csv('concrete_data.csv')\n", 182 | "concrete_data.head()" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "So the first concrete sample has 540 cubic meter of cement, 0 cubic meter of blast furnace slag, 0 cubic meter of fly ash, 162 cubic meter of water, 2.5 cubic meter of superplaticizer, 1040 cubic meter of coarse aggregate, 676 cubic meter of fine aggregate. Such a concrete mix which is 28 days old, has a compressive strength of 79.99 MPa. " 190 | ] 191 | }, 192 | { 193 | "cell_type": "markdown", 194 | "metadata": {}, 195 | "source": [ 196 | "#### Let's check how many data points we have." 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": 4, 202 | "metadata": {}, 203 | "outputs": [ 204 | { 205 | "data": { 206 | "text/plain": [ 207 | "(1030, 9)" 208 | ] 209 | }, 210 | "execution_count": 4, 211 | "metadata": {}, 212 | "output_type": "execute_result" 213 | } 214 | ], 215 | "source": [ 216 | "concrete_data.shape" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "So, there are approximately 1000 samples to train our model on. Because of the few samples, we have to be careful not to overfit the training data." 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": {}, 229 | "source": [ 230 | "Let's check the dataset for any missing values." 231 | ] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "execution_count": 5, 236 | "metadata": {}, 237 | "outputs": [ 238 | { 239 | "data": { 240 | "text/html": [ 241 | "
\n", 242 | "\n", 255 | "\n", 256 | " \n", 257 | " \n", 258 | " \n", 259 | " \n", 260 | " \n", 261 | " \n", 262 | " \n", 263 | " \n", 264 | " \n", 265 | " \n", 266 | " \n", 267 | " \n", 268 | " \n", 269 | " \n", 270 | " \n", 271 | " \n", 272 | " \n", 273 | " \n", 274 | " \n", 275 | " \n", 276 | " \n", 277 | " \n", 278 | " \n", 279 | " \n", 280 | " \n", 281 | " \n", 282 | " \n", 283 | " \n", 284 | " \n", 285 | " \n", 286 | " \n", 287 | " \n", 288 | " \n", 289 | " \n", 290 | " \n", 291 | " \n", 292 | " \n", 293 | " \n", 294 | " \n", 295 | " \n", 296 | " \n", 297 | " \n", 298 | " \n", 299 | " \n", 300 | " \n", 301 | " \n", 302 | " \n", 303 | " \n", 304 | " \n", 305 | " \n", 306 | " \n", 307 | " \n", 308 | " \n", 309 | " \n", 310 | " \n", 311 | " \n", 312 | " \n", 313 | " \n", 314 | " \n", 315 | " \n", 316 | " \n", 317 | " \n", 318 | " \n", 319 | " \n", 320 | " \n", 321 | " \n", 322 | " \n", 323 | " \n", 324 | " \n", 325 | " \n", 326 | " \n", 327 | " \n", 328 | " \n", 329 | " \n", 330 | " \n", 331 | " \n", 332 | " \n", 333 | " \n", 334 | " \n", 335 | " \n", 336 | " \n", 337 | " \n", 338 | " \n", 339 | " \n", 340 | " \n", 341 | " \n", 342 | " \n", 343 | " \n", 344 | " \n", 345 | " \n", 346 | " \n", 347 | " \n", 348 | " \n", 349 | " \n", 350 | " \n", 351 | " \n", 352 | " \n", 353 | " \n", 354 | " \n", 355 | " \n", 356 | " \n", 357 | " \n", 358 | " \n", 359 | " \n", 360 | " \n", 361 | " \n", 362 | " \n", 363 | " \n", 364 | " \n", 365 | " \n", 366 | " \n", 367 | " \n", 368 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAgeStrength
count1030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.000000
mean281.16786473.89582554.188350181.5672826.204660972.918932773.58048545.66213635.817961
std104.50636486.27934263.99700421.3542195.97384177.75395480.17598063.16991216.705742
min102.0000000.0000000.000000121.8000000.000000801.000000594.0000001.0000002.330000
25%192.3750000.0000000.000000164.9000000.000000932.000000730.9500007.00000023.710000
50%272.90000022.0000000.000000185.0000006.400000968.000000779.50000028.00000034.445000
75%350.000000142.950000118.300000192.00000010.2000001029.400000824.00000056.00000046.135000
max540.000000359.400000200.100000247.00000032.2000001145.000000992.600000365.00000082.600000
\n", 369 | "
" 370 | ], 371 | "text/plain": [ 372 | " Cement Blast Furnace Slag Fly Ash Water \\\n", 373 | "count 1030.000000 1030.000000 1030.000000 1030.000000 \n", 374 | "mean 281.167864 73.895825 54.188350 181.567282 \n", 375 | "std 104.506364 86.279342 63.997004 21.354219 \n", 376 | "min 102.000000 0.000000 0.000000 121.800000 \n", 377 | "25% 192.375000 0.000000 0.000000 164.900000 \n", 378 | "50% 272.900000 22.000000 0.000000 185.000000 \n", 379 | "75% 350.000000 142.950000 118.300000 192.000000 \n", 380 | "max 540.000000 359.400000 200.100000 247.000000 \n", 381 | "\n", 382 | " Superplasticizer Coarse Aggregate Fine Aggregate Age \\\n", 383 | "count 1030.000000 1030.000000 1030.000000 1030.000000 \n", 384 | "mean 6.204660 972.918932 773.580485 45.662136 \n", 385 | "std 5.973841 77.753954 80.175980 63.169912 \n", 386 | "min 0.000000 801.000000 594.000000 1.000000 \n", 387 | "25% 0.000000 932.000000 730.950000 7.000000 \n", 388 | "50% 6.400000 968.000000 779.500000 28.000000 \n", 389 | "75% 10.200000 1029.400000 824.000000 56.000000 \n", 390 | "max 32.200000 1145.000000 992.600000 365.000000 \n", 391 | "\n", 392 | " Strength \n", 393 | "count 1030.000000 \n", 394 | "mean 35.817961 \n", 395 | "std 16.705742 \n", 396 | "min 2.330000 \n", 397 | "25% 23.710000 \n", 398 | "50% 34.445000 \n", 399 | "75% 46.135000 \n", 400 | "max 82.600000 " 401 | ] 402 | }, 403 | "execution_count": 5, 404 | "metadata": {}, 405 | "output_type": "execute_result" 406 | } 407 | ], 408 | "source": [ 409 | "concrete_data.describe()" 410 | ] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": 6, 415 | "metadata": {}, 416 | "outputs": [ 417 | { 418 | "data": { 419 | "text/plain": [ 420 | "Cement 0\n", 421 | "Blast Furnace Slag 0\n", 422 | "Fly Ash 0\n", 423 | "Water 0\n", 424 | "Superplasticizer 0\n", 425 | "Coarse Aggregate 0\n", 426 | "Fine Aggregate 0\n", 427 | "Age 0\n", 428 | "Strength 0\n", 429 | "dtype: int64" 430 | ] 431 | }, 432 | "execution_count": 6, 433 | "metadata": {}, 434 | "output_type": "execute_result" 435 | } 436 | ], 437 | "source": [ 438 | "concrete_data.isnull().sum()" 439 | ] 440 | }, 441 | { 442 | "cell_type": "markdown", 443 | "metadata": {}, 444 | "source": [ 445 | "The data looks very clean and is ready to be used to build our model." 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "#### Split data into predictors and target" 453 | ] 454 | }, 455 | { 456 | "cell_type": "markdown", 457 | "metadata": {}, 458 | "source": [ 459 | "The target variable in this problem is the concrete sample strength. Therefore, our predictors will be all the other columns." 460 | ] 461 | }, 462 | { 463 | "cell_type": "code", 464 | "execution_count": 7, 465 | "metadata": {}, 466 | "outputs": [], 467 | "source": [ 468 | "concrete_data_columns = concrete_data.columns\n", 469 | "predictors = concrete_data[concrete_data_columns[concrete_data_columns != 'Strength']] # all columns except Strength\n", 470 | "target = concrete_data['Strength'] # Strength column" 471 | ] 472 | }, 473 | { 474 | "cell_type": "markdown", 475 | "metadata": {}, 476 | "source": [ 477 | "Let's do a quick sanity check of the predictors and the target dataframes." 478 | ] 479 | }, 480 | { 481 | "cell_type": "code", 482 | "execution_count": 8, 483 | "metadata": {}, 484 | "outputs": [ 485 | { 486 | "data": { 487 | "text/html": [ 488 | "
\n", 489 | "\n", 502 | "\n", 503 | " \n", 504 | " \n", 505 | " \n", 506 | " \n", 507 | " \n", 508 | " \n", 509 | " \n", 510 | " \n", 511 | " \n", 512 | " \n", 513 | " \n", 514 | " \n", 515 | " \n", 516 | " \n", 517 | " \n", 518 | " \n", 519 | " \n", 520 | " \n", 521 | " \n", 522 | " \n", 523 | " \n", 524 | " \n", 525 | " \n", 526 | " \n", 527 | " \n", 528 | " \n", 529 | " \n", 530 | " \n", 531 | " \n", 532 | " \n", 533 | " \n", 534 | " \n", 535 | " \n", 536 | " \n", 537 | " \n", 538 | " \n", 539 | " \n", 540 | " \n", 541 | " \n", 542 | " \n", 543 | " \n", 544 | " \n", 545 | " \n", 546 | " \n", 547 | " \n", 548 | " \n", 549 | " \n", 550 | " \n", 551 | " \n", 552 | " \n", 553 | " \n", 554 | " \n", 555 | " \n", 556 | " \n", 557 | " \n", 558 | " \n", 559 | " \n", 560 | " \n", 561 | " \n", 562 | " \n", 563 | " \n", 564 | " \n", 565 | " \n", 566 | " \n", 567 | " \n", 568 | " \n", 569 | " \n", 570 | " \n", 571 | " \n", 572 | " \n", 573 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAge
0540.00.00.0162.02.51040.0676.028
1540.00.00.0162.02.51055.0676.028
2332.5142.50.0228.00.0932.0594.0270
3332.5142.50.0228.00.0932.0594.0365
4198.6132.40.0192.00.0978.4825.5360
\n", 574 | "
" 575 | ], 576 | "text/plain": [ 577 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 578 | "0 540.0 0.0 0.0 162.0 2.5 \n", 579 | "1 540.0 0.0 0.0 162.0 2.5 \n", 580 | "2 332.5 142.5 0.0 228.0 0.0 \n", 581 | "3 332.5 142.5 0.0 228.0 0.0 \n", 582 | "4 198.6 132.4 0.0 192.0 0.0 \n", 583 | "\n", 584 | " Coarse Aggregate Fine Aggregate Age \n", 585 | "0 1040.0 676.0 28 \n", 586 | "1 1055.0 676.0 28 \n", 587 | "2 932.0 594.0 270 \n", 588 | "3 932.0 594.0 365 \n", 589 | "4 978.4 825.5 360 " 590 | ] 591 | }, 592 | "execution_count": 8, 593 | "metadata": {}, 594 | "output_type": "execute_result" 595 | } 596 | ], 597 | "source": [ 598 | "predictors.head()" 599 | ] 600 | }, 601 | { 602 | "cell_type": "code", 603 | "execution_count": 9, 604 | "metadata": {}, 605 | "outputs": [ 606 | { 607 | "data": { 608 | "text/plain": [ 609 | "0 79.99\n", 610 | "1 61.89\n", 611 | "2 40.27\n", 612 | "3 41.05\n", 613 | "4 44.30\n", 614 | "Name: Strength, dtype: float64" 615 | ] 616 | }, 617 | "execution_count": 9, 618 | "metadata": {}, 619 | "output_type": "execute_result" 620 | } 621 | ], 622 | "source": [ 623 | "target.head()" 624 | ] 625 | }, 626 | { 627 | "cell_type": "code", 628 | "execution_count": 10, 629 | "metadata": {}, 630 | "outputs": [ 631 | { 632 | "data": { 633 | "text/plain": [ 634 | "8" 635 | ] 636 | }, 637 | "execution_count": 10, 638 | "metadata": {}, 639 | "output_type": "execute_result" 640 | } 641 | ], 642 | "source": [ 643 | "n_cols = predictors.shape[1] # number of predictors\n", 644 | "n_cols" 645 | ] 646 | }, 647 | { 648 | "cell_type": "markdown", 649 | "metadata": {}, 650 | "source": [ 651 | "" 652 | ] 653 | }, 654 | { 655 | "cell_type": "markdown", 656 | "metadata": {}, 657 | "source": [ 658 | "" 659 | ] 660 | }, 661 | { 662 | "cell_type": "markdown", 663 | "metadata": {}, 664 | "source": [ 665 | "## Import Keras" 666 | ] 667 | }, 668 | { 669 | "cell_type": "markdown", 670 | "metadata": {}, 671 | "source": [ 672 | "#### Let's go ahead and import the Keras library" 673 | ] 674 | }, 675 | { 676 | "cell_type": "code", 677 | "execution_count": 11, 678 | "metadata": {}, 679 | "outputs": [ 680 | { 681 | "name": "stderr", 682 | "output_type": "stream", 683 | "text": [ 684 | "Using TensorFlow backend.\n" 685 | ] 686 | } 687 | ], 688 | "source": [ 689 | "import keras" 690 | ] 691 | }, 692 | { 693 | "cell_type": "markdown", 694 | "metadata": {}, 695 | "source": [ 696 | "As you can see, the TensorFlow backend was used to install the Keras library." 697 | ] 698 | }, 699 | { 700 | "cell_type": "markdown", 701 | "metadata": {}, 702 | "source": [ 703 | "Let's import the rest of the packages from the Keras library that we will need to build our regressoin model." 
704 | ] 705 | }, 706 | { 707 | "cell_type": "code", 708 | "execution_count": 12, 709 | "metadata": {}, 710 | "outputs": [], 711 | "source": [ 712 | "from keras.models import Sequential\n", 713 | "from keras.layers import Dense" 714 | ] 715 | }, 716 | { 717 | "cell_type": "code", 718 | "execution_count": 13, 719 | "metadata": {}, 720 | "outputs": [], 721 | "source": [ 722 | "# define regression model\n", 723 | "def regression_model():\n", 724 | " # create model\n", 725 | " model = Sequential()\n", 726 | " model.add(Dense(10, activation='relu', input_shape=(n_cols,)))\n", 727 | " model.add(Dense(1))\n", 728 | " \n", 729 | " # compile model\n", 730 | " model.compile(optimizer='adam', loss='mean_squared_error')\n", 731 | " return model" 732 | ] 733 | }, 734 | { 735 | "cell_type": "markdown", 736 | "metadata": {}, 737 | "source": [ 738 | "The above function creates a model that has one hidden layer with 10 neurons and a ReLU activation function. It uses the adam optimizer and the mean squared error as the loss function." 739 | ] 740 | }, 741 | { 742 | "cell_type": "markdown", 743 | "metadata": {}, 744 | "source": [ 745 | "Let's import scikit-learn in order to randomly split the data into a training and test sets" 746 | ] 747 | }, 748 | { 749 | "cell_type": "code", 750 | "execution_count": 14, 751 | "metadata": {}, 752 | "outputs": [], 753 | "source": [ 754 | "from sklearn.model_selection import train_test_split" 755 | ] 756 | }, 757 | { 758 | "cell_type": "markdown", 759 | "metadata": {}, 760 | "source": [ 761 | "Splitting the data into a training and test sets by holding 30% of the data for testing" 762 | ] 763 | }, 764 | { 765 | "cell_type": "code", 766 | "execution_count": 15, 767 | "metadata": {}, 768 | "outputs": [], 769 | "source": [ 770 | "X_train, X_test, y_train, y_test = train_test_split(predictors, target, test_size=0.3, random_state=42)" 771 | ] 772 | }, 773 | { 774 | "cell_type": "markdown", 775 | "metadata": {}, 776 | "source": [ 777 | "## Train and Test the Network" 778 | ] 779 | }, 780 | { 781 | "cell_type": "markdown", 782 | "metadata": {}, 783 | "source": [ 784 | "Let's call the function now to create our model." 
785 | ] 786 | }, 787 | { 788 | "cell_type": "code", 789 | "execution_count": 16, 790 | "metadata": {}, 791 | "outputs": [], 792 | "source": [ 793 | "# build the model\n", 794 | "model = regression_model()" 795 | ] 796 | }, 797 | { 798 | "cell_type": "markdown", 799 | "metadata": {}, 800 | "source": [ 801 | "Next, we will train the model for 50 epochs.\n" 802 | ] 803 | }, 804 | { 805 | "cell_type": "code", 806 | "execution_count": 23, 807 | "metadata": {}, 808 | "outputs": [ 809 | { 810 | "name": "stdout", 811 | "output_type": "stream", 812 | "text": [ 813 | "Epoch 1/50\n", 814 | "721/721 [==============================] - 0s 49us/step - loss: 228.8328\n", 815 | "Epoch 2/50\n", 816 | "721/721 [==============================] - 0s 51us/step - loss: 215.9244\n", 817 | "Epoch 3/50\n", 818 | "721/721 [==============================] - 0s 52us/step - loss: 208.6913\n", 819 | "Epoch 4/50\n", 820 | "721/721 [==============================] - 0s 51us/step - loss: 203.2783\n", 821 | "Epoch 5/50\n", 822 | "721/721 [==============================] - 0s 53us/step - loss: 198.6944\n", 823 | "Epoch 6/50\n", 824 | "721/721 [==============================] - 0s 44us/step - loss: 192.0748\n", 825 | "Epoch 7/50\n", 826 | "721/721 [==============================] - 0s 48us/step - loss: 187.0641\n", 827 | "Epoch 8/50\n", 828 | "721/721 [==============================] - 0s 49us/step - loss: 182.6149\n", 829 | "Epoch 9/50\n", 830 | "721/721 [==============================] - 0s 49us/step - loss: 180.3111\n", 831 | "Epoch 10/50\n", 832 | "721/721 [==============================] - 0s 43us/step - loss: 171.8613\n", 833 | "Epoch 11/50\n", 834 | "721/721 [==============================] - 0s 50us/step - loss: 169.2272\n", 835 | "Epoch 12/50\n", 836 | "721/721 [==============================] - 0s 49us/step - loss: 163.2405\n", 837 | "Epoch 13/50\n", 838 | "721/721 [==============================] - 0s 48us/step - loss: 163.1215\n", 839 | "Epoch 14/50\n", 840 | "721/721 [==============================] - 0s 50us/step - loss: 157.3549\n", 841 | "Epoch 15/50\n", 842 | "721/721 [==============================] - 0s 48us/step - loss: 154.1529\n", 843 | "Epoch 16/50\n", 844 | "721/721 [==============================] - 0s 50us/step - loss: 151.6540\n", 845 | "Epoch 17/50\n", 846 | "721/721 [==============================] - 0s 49us/step - loss: 149.4561\n", 847 | "Epoch 18/50\n", 848 | "721/721 [==============================] - 0s 50us/step - loss: 146.1440\n", 849 | "Epoch 19/50\n", 850 | "721/721 [==============================] - 0s 53us/step - loss: 143.4548\n", 851 | "Epoch 20/50\n", 852 | "721/721 [==============================] - 0s 48us/step - loss: 142.1613\n", 853 | "Epoch 21/50\n", 854 | "721/721 [==============================] - 0s 45us/step - loss: 145.6452\n", 855 | "Epoch 22/50\n", 856 | "721/721 [==============================] - 0s 46us/step - loss: 136.2488\n", 857 | "Epoch 23/50\n", 858 | "721/721 [==============================] - 0s 47us/step - loss: 134.2804\n", 859 | "Epoch 24/50\n", 860 | "721/721 [==============================] - 0s 50us/step - loss: 132.4984\n", 861 | "Epoch 25/50\n", 862 | "721/721 [==============================] - 0s 51us/step - loss: 130.1131\n", 863 | "Epoch 26/50\n", 864 | "721/721 [==============================] - 0s 46us/step - loss: 129.2872\n", 865 | "Epoch 27/50\n", 866 | "721/721 [==============================] - 0s 50us/step - loss: 129.2868\n", 867 | "Epoch 28/50\n", 868 | "721/721 [==============================] - 0s 46us/step - loss: 126.3845\n", 869 | 
"Epoch 29/50\n", 870 | "721/721 [==============================] - 0s 55us/step - loss: 125.4513\n", 871 | "Epoch 30/50\n", 872 | "721/721 [==============================] - 0s 50us/step - loss: 123.3704\n", 873 | "Epoch 31/50\n", 874 | "721/721 [==============================] - 0s 45us/step - loss: 125.7223\n", 875 | "Epoch 32/50\n", 876 | "721/721 [==============================] - 0s 57us/step - loss: 123.0903\n", 877 | "Epoch 33/50\n", 878 | "721/721 [==============================] - 0s 52us/step - loss: 124.5342\n", 879 | "Epoch 34/50\n", 880 | "721/721 [==============================] - 0s 49us/step - loss: 121.9392\n", 881 | "Epoch 35/50\n", 882 | "721/721 [==============================] - 0s 63us/step - loss: 119.7415\n", 883 | "Epoch 36/50\n", 884 | "721/721 [==============================] - 0s 56us/step - loss: 119.3327\n", 885 | "Epoch 37/50\n", 886 | "721/721 [==============================] - 0s 58us/step - loss: 119.2475\n", 887 | "Epoch 38/50\n", 888 | "721/721 [==============================] - 0s 50us/step - loss: 117.9988\n", 889 | "Epoch 39/50\n", 890 | "721/721 [==============================] - 0s 60us/step - loss: 116.8755\n", 891 | "Epoch 40/50\n", 892 | "721/721 [==============================] - 0s 52us/step - loss: 118.0329\n", 893 | "Epoch 41/50\n", 894 | "721/721 [==============================] - 0s 59us/step - loss: 116.3102\n", 895 | "Epoch 42/50\n", 896 | "721/721 [==============================] - 0s 43us/step - loss: 120.6214\n", 897 | "Epoch 43/50\n", 898 | "721/721 [==============================] - 0s 48us/step - loss: 116.5501\n", 899 | "Epoch 44/50\n", 900 | "721/721 [==============================] - 0s 53us/step - loss: 115.1019\n", 901 | "Epoch 45/50\n", 902 | "721/721 [==============================] - 0s 56us/step - loss: 115.6051\n", 903 | "Epoch 46/50\n", 904 | "721/721 [==============================] - 0s 47us/step - loss: 112.9754\n", 905 | "Epoch 47/50\n", 906 | "721/721 [==============================] - 0s 47us/step - loss: 112.9954\n", 907 | "Epoch 48/50\n", 908 | "721/721 [==============================] - 0s 46us/step - loss: 114.7252\n", 909 | "Epoch 49/50\n", 910 | "721/721 [==============================] - 0s 48us/step - loss: 113.8326\n", 911 | "Epoch 50/50\n", 912 | "721/721 [==============================] - 0s 47us/step - loss: 111.1589\n" 913 | ] 914 | }, 915 | { 916 | "data": { 917 | "text/plain": [ 918 | "" 919 | ] 920 | }, 921 | "execution_count": 23, 922 | "metadata": {}, 923 | "output_type": "execute_result" 924 | } 925 | ], 926 | "source": [ 927 | "# fit the model\n", 928 | "epochs = 50\n", 929 | "model.fit(X_train, y_train, epochs=epochs, verbose=1)" 930 | ] 931 | }, 932 | { 933 | "cell_type": "markdown", 934 | "metadata": {}, 935 | "source": [ 936 | "Next we need to evaluate the model on the test data." 
937 | ] 938 | }, 939 | { 940 | "cell_type": "code", 941 | "execution_count": 34, 942 | "metadata": {}, 943 | "outputs": [ 944 | { 945 | "name": "stdout", 946 | "output_type": "stream", 947 | "text": [ 948 | "309/309 [==============================] - 0s 45us/step\n" 949 | ] 950 | }, 951 | { 952 | "data": { 953 | "text/plain": [ 954 | "50.11536543268988" 955 | ] 956 | }, 957 | "execution_count": 34, 958 | "metadata": {}, 959 | "output_type": "execute_result" 960 | } 961 | ], 962 | "source": [ 963 | "loss_val = model.evaluate(X_test, y_test)\n", 964 | "y_pred = model.predict(X_test)\n", 965 | "loss_val" 966 | ] 967 | }, 968 | { 969 | "cell_type": "markdown", 970 | "metadata": {}, 971 | "source": [ 972 | "Now we need to compute the mean squared error between the predicted concrete strength and the actual concrete strength." 973 | ] 974 | }, 975 | { 976 | "cell_type": "markdown", 977 | "metadata": {}, 978 | "source": [ 979 | "Let's import the mean_squared_error function from Scikit-learn." 980 | ] 981 | }, 982 | { 983 | "cell_type": "code", 984 | "execution_count": 35, 985 | "metadata": {}, 986 | "outputs": [], 987 | "source": [ 988 | "from sklearn.metrics import mean_squared_error" 989 | ] 990 | }, 991 | { 992 | "cell_type": "code", 993 | "execution_count": 36, 994 | "metadata": {}, 995 | "outputs": [ 996 | { 997 | "name": "stdout", 998 | "output_type": "stream", 999 | "text": [ 1000 | "50.115365393280605 0.0\n" 1001 | ] 1002 | } 1003 | ], 1004 | "source": [ 1005 | "mean_square_error = mean_squared_error(y_test, y_pred)\n", 1006 | "mean = np.mean(mean_square_error)\n", 1007 | "standard_deviation = np.std(mean_square_error)\n", 1008 | "print(mean, standard_deviation)" 1009 | ] 1010 | }, 1011 | { 1012 | "cell_type": "markdown", 1013 | "metadata": {}, 1014 | "source": [ 1015 | "Create a list of 50 mean squared errors and report mean and the standard deviation of the mean squared errors." 
1016 | ] 1017 | }, 1018 | { 1019 | "cell_type": "code", 1020 | "execution_count": 32, 1021 | "metadata": {}, 1022 | "outputs": [ 1023 | { 1024 | "name": "stdout", 1025 | "output_type": "stream", 1026 | "text": [ 1027 | "MSE 1: 45.5088312216947\n", 1028 | "MSE 2: 55.35291563114302\n", 1029 | "MSE 3: 44.05973736290793\n", 1030 | "MSE 4: 47.97384547261358\n", 1031 | "MSE 5: 46.27975555222397\n", 1032 | "MSE 6: 55.298517245690796\n", 1033 | "MSE 7: 53.225258663634264\n", 1034 | "MSE 8: 42.05248688879908\n", 1035 | "MSE 9: 44.19757292417261\n", 1036 | "MSE 10: 49.286330979233036\n", 1037 | "MSE 11: 44.415615118823006\n", 1038 | "MSE 12: 42.217887643857296\n", 1039 | "MSE 13: 52.90644020401544\n", 1040 | "MSE 14: 49.27302535220643\n", 1041 | "MSE 15: 48.32965868113496\n", 1042 | "MSE 16: 41.30980674348603\n", 1043 | "MSE 17: 44.32437073297099\n", 1044 | "MSE 18: 43.46031706232855\n", 1045 | "MSE 19: 41.79449335734049\n", 1046 | "MSE 20: 45.04111388123151\n", 1047 | "MSE 21: 46.147853863663656\n", 1048 | "MSE 22: 44.4407159749744\n", 1049 | "MSE 23: 41.86130490349334\n", 1050 | "MSE 24: 44.014878973605946\n", 1051 | "MSE 25: 46.43221985406474\n", 1052 | "MSE 26: 52.132735267811995\n", 1053 | "MSE 27: 46.03804981515631\n", 1054 | "MSE 28: 44.34027326762869\n", 1055 | "MSE 29: 50.555759145989775\n", 1056 | "MSE 30: 46.53499229208937\n", 1057 | "MSE 31: 55.29556269475943\n", 1058 | "MSE 32: 40.970819893006755\n", 1059 | "MSE 33: 46.93645506923639\n", 1060 | "MSE 34: 41.52137350187333\n", 1061 | "MSE 35: 48.03412892363218\n", 1062 | "MSE 36: 49.22415666055525\n", 1063 | "MSE 37: 47.94709300994873\n", 1064 | "MSE 38: 48.0688581744444\n", 1065 | "MSE 39: 44.629797222544845\n", 1066 | "MSE 40: 41.93882681096642\n", 1067 | "MSE 41: 48.76071383034913\n", 1068 | "MSE 42: 42.53869528909331\n", 1069 | "MSE 43: 44.61198106167\n", 1070 | "MSE 44: 56.762810247229915\n", 1071 | "MSE 45: 49.92081000812617\n", 1072 | "MSE 46: 51.572559159164676\n", 1073 | "MSE 47: 46.578073125055305\n", 1074 | "MSE 48: 44.42433949195837\n", 1075 | "MSE 49: 53.60878757365699\n", 1076 | "MSE 50: 50.11536543268988\n", 1077 | "\n", 1078 | "\n", 1079 | "Below is the mean and standard deviation of 50 mean squared errors without normalized data. Total number of epochs for each training is: 50\n", 1080 | "\n", 1081 | "Mean: 47.04535900368696\n", 1082 | "Standard Deviation: 4.146803067606264\n" 1083 | ] 1084 | } 1085 | ], 1086 | "source": [ 1087 | "total_mean_squared_errors = 50\n", 1088 | "epochs = 50\n", 1089 | "mean_squared_errors = []\n", 1090 | "for i in range(0, total_mean_squared_errors):\n", 1091 | " X_train, X_test, y_train, y_test = train_test_split(predictors, target, test_size=0.3, random_state=i)\n", 1092 | " model.fit(X_train, y_train, epochs=epochs, verbose=0)\n", 1093 | " MSE = model.evaluate(X_test, y_test, verbose=0)\n", 1094 | " print(\"MSE \"+str(i+1)+\": \"+str(MSE))\n", 1095 | " y_pred = model.predict(X_test)\n", 1096 | " mean_square_error = mean_squared_error(y_test, y_pred)\n", 1097 | " mean_squared_errors.append(mean_square_error)\n", 1098 | "\n", 1099 | "mean_squared_errors = np.array(mean_squared_errors)\n", 1100 | "mean = np.mean(mean_squared_errors)\n", 1101 | "standard_deviation = np.std(mean_squared_errors)\n", 1102 | "\n", 1103 | "print('\\n')\n", 1104 | "print(\"Below is the mean and standard deviation of \" +str(total_mean_squared_errors) + \" mean squared errors without normalized data. 
Total number of epochs for each training is: \" +str(epochs) + \"\\n\")\n", 1105 | "print(\"Mean: \"+str(mean))\n", 1106 | "print(\"Standard Deviation: \"+str(standard_deviation))" 1107 | ] 1108 | }, 1109 | { 1110 | "cell_type": "code", 1111 | "execution_count": null, 1112 | "metadata": {}, 1113 | "outputs": [], 1114 | "source": [] 1115 | } 1116 | ], 1117 | "metadata": { 1118 | "kernelspec": { 1119 | "display_name": "Python 3", 1120 | "language": "python", 1121 | "name": "python3" 1122 | }, 1123 | "language_info": { 1124 | "codemirror_mode": { 1125 | "name": "ipython", 1126 | "version": 3 1127 | }, 1128 | "file_extension": ".py", 1129 | "mimetype": "text/x-python", 1130 | "name": "python", 1131 | "nbconvert_exporter": "python", 1132 | "pygments_lexer": "ipython3", 1133 | "version": "3.6.9" 1134 | } 1135 | }, 1136 | "nbformat": 4, 1137 | "nbformat_minor": 2 1138 | } 1139 | -------------------------------------------------------------------------------- /Peer-graded Assignment_ Build a Regression Model in Keras (B).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## Download and Clean Dataset" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "Let's start by importing the pandas and the Numpy libraries." 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 1, 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "import pandas as pd\n", 24 | "import numpy as np" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "We will be using the dataset provided in the assignment\n", 32 | "\n", 33 | "The dataset is about the compressive strength of different samples of concrete based on the volumes of the different ingredients that were used to make them. Ingredients include:\n", 34 | "\n", 35 | "1. Cement\n", 36 | "\n", 37 | "2. Blast Furnace Slag\n", 38 | "\n", 39 | "3. Fly Ash\n", 40 | "\n", 41 | "4. Water\n", 42 | "\n", 43 | "5. Superplasticizer\n", 44 | "\n", 45 | "6. Coarse Aggregate\n", 46 | "\n", 47 | "7. Fine Aggregate" 48 | ] 49 | }, 50 | { 51 | "cell_type": "markdown", 52 | "metadata": {}, 53 | "source": [ 54 | "Let's read the dataset into a pandas dataframe." 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 2, 60 | "metadata": {}, 61 | "outputs": [ 62 | { 63 | "data": { 64 | "text/html": [ 65 | "
\n", 66 | "\n", 79 | "\n", 80 | " \n", 81 | " \n", 82 | " \n", 83 | " \n", 84 | " \n", 85 | " \n", 86 | " \n", 87 | " \n", 88 | " \n", 89 | " \n", 90 | " \n", 91 | " \n", 92 | " \n", 93 | " \n", 94 | " \n", 95 | " \n", 96 | " \n", 97 | " \n", 98 | " \n", 99 | " \n", 100 | " \n", 101 | " \n", 102 | " \n", 103 | " \n", 104 | " \n", 105 | " \n", 106 | " \n", 107 | " \n", 108 | " \n", 109 | " \n", 110 | " \n", 111 | " \n", 112 | " \n", 113 | " \n", 114 | " \n", 115 | " \n", 116 | " \n", 117 | " \n", 118 | " \n", 119 | " \n", 120 | " \n", 121 | " \n", 122 | " \n", 123 | " \n", 124 | " \n", 125 | " \n", 126 | " \n", 127 | " \n", 128 | " \n", 129 | " \n", 130 | " \n", 131 | " \n", 132 | " \n", 133 | " \n", 134 | " \n", 135 | " \n", 136 | " \n", 137 | " \n", 138 | " \n", 139 | " \n", 140 | " \n", 141 | " \n", 142 | " \n", 143 | " \n", 144 | " \n", 145 | " \n", 146 | " \n", 147 | " \n", 148 | " \n", 149 | " \n", 150 | " \n", 151 | " \n", 152 | " \n", 153 | " \n", 154 | " \n", 155 | " \n", 156 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAgeStrength
0540.00.00.0162.02.51040.0676.02879.99
1540.00.00.0162.02.51055.0676.02861.89
2332.5142.50.0228.00.0932.0594.027040.27
3332.5142.50.0228.00.0932.0594.036541.05
4198.6132.40.0192.00.0978.4825.536044.30
\n", 157 | "
" 158 | ], 159 | "text/plain": [ 160 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 161 | "0 540.0 0.0 0.0 162.0 2.5 \n", 162 | "1 540.0 0.0 0.0 162.0 2.5 \n", 163 | "2 332.5 142.5 0.0 228.0 0.0 \n", 164 | "3 332.5 142.5 0.0 228.0 0.0 \n", 165 | "4 198.6 132.4 0.0 192.0 0.0 \n", 166 | "\n", 167 | " Coarse Aggregate Fine Aggregate Age Strength \n", 168 | "0 1040.0 676.0 28 79.99 \n", 169 | "1 1055.0 676.0 28 61.89 \n", 170 | "2 932.0 594.0 270 40.27 \n", 171 | "3 932.0 594.0 365 41.05 \n", 172 | "4 978.4 825.5 360 44.30 " 173 | ] 174 | }, 175 | "execution_count": 2, 176 | "metadata": {}, 177 | "output_type": "execute_result" 178 | } 179 | ], 180 | "source": [ 181 | "concrete_data = pd.read_csv('concrete_data.csv')\n", 182 | "concrete_data.head()" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "So the first concrete sample has 540 cubic meter of cement, 0 cubic meter of blast furnace slag, 0 cubic meter of fly ash, 162 cubic meter of water, 2.5 cubic meter of superplaticizer, 1040 cubic meter of coarse aggregate, 676 cubic meter of fine aggregate. Such a concrete mix which is 28 days old, has a compressive strength of 79.99 MPa. " 190 | ] 191 | }, 192 | { 193 | "cell_type": "markdown", 194 | "metadata": {}, 195 | "source": [ 196 | "#### Let's check how many data points we have." 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": 3, 202 | "metadata": {}, 203 | "outputs": [ 204 | { 205 | "data": { 206 | "text/plain": [ 207 | "(1030, 9)" 208 | ] 209 | }, 210 | "execution_count": 3, 211 | "metadata": {}, 212 | "output_type": "execute_result" 213 | } 214 | ], 215 | "source": [ 216 | "concrete_data.shape" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "So, there are approximately 1000 samples to train our model on. Because of the few samples, we have to be careful not to overfit the training data." 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": {}, 229 | "source": [ 230 | "Let's check the dataset for any missing values." 231 | ] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "execution_count": 4, 236 | "metadata": {}, 237 | "outputs": [ 238 | { 239 | "data": { 240 | "text/html": [ 241 | "
\n", 242 | "\n", 255 | "\n", 256 | " \n", 257 | " \n", 258 | " \n", 259 | " \n", 260 | " \n", 261 | " \n", 262 | " \n", 263 | " \n", 264 | " \n", 265 | " \n", 266 | " \n", 267 | " \n", 268 | " \n", 269 | " \n", 270 | " \n", 271 | " \n", 272 | " \n", 273 | " \n", 274 | " \n", 275 | " \n", 276 | " \n", 277 | " \n", 278 | " \n", 279 | " \n", 280 | " \n", 281 | " \n", 282 | " \n", 283 | " \n", 284 | " \n", 285 | " \n", 286 | " \n", 287 | " \n", 288 | " \n", 289 | " \n", 290 | " \n", 291 | " \n", 292 | " \n", 293 | " \n", 294 | " \n", 295 | " \n", 296 | " \n", 297 | " \n", 298 | " \n", 299 | " \n", 300 | " \n", 301 | " \n", 302 | " \n", 303 | " \n", 304 | " \n", 305 | " \n", 306 | " \n", 307 | " \n", 308 | " \n", 309 | " \n", 310 | " \n", 311 | " \n", 312 | " \n", 313 | " \n", 314 | " \n", 315 | " \n", 316 | " \n", 317 | " \n", 318 | " \n", 319 | " \n", 320 | " \n", 321 | " \n", 322 | " \n", 323 | " \n", 324 | " \n", 325 | " \n", 326 | " \n", 327 | " \n", 328 | " \n", 329 | " \n", 330 | " \n", 331 | " \n", 332 | " \n", 333 | " \n", 334 | " \n", 335 | " \n", 336 | " \n", 337 | " \n", 338 | " \n", 339 | " \n", 340 | " \n", 341 | " \n", 342 | " \n", 343 | " \n", 344 | " \n", 345 | " \n", 346 | " \n", 347 | " \n", 348 | " \n", 349 | " \n", 350 | " \n", 351 | " \n", 352 | " \n", 353 | " \n", 354 | " \n", 355 | " \n", 356 | " \n", 357 | " \n", 358 | " \n", 359 | " \n", 360 | " \n", 361 | " \n", 362 | " \n", 363 | " \n", 364 | " \n", 365 | " \n", 366 | " \n", 367 | " \n", 368 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAgeStrength
count1030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.000000
mean281.16786473.89582554.188350181.5672826.204660972.918932773.58048545.66213635.817961
std104.50636486.27934263.99700421.3542195.97384177.75395480.17598063.16991216.705742
min102.0000000.0000000.000000121.8000000.000000801.000000594.0000001.0000002.330000
25%192.3750000.0000000.000000164.9000000.000000932.000000730.9500007.00000023.710000
50%272.90000022.0000000.000000185.0000006.400000968.000000779.50000028.00000034.445000
75%350.000000142.950000118.300000192.00000010.2000001029.400000824.00000056.00000046.135000
max540.000000359.400000200.100000247.00000032.2000001145.000000992.600000365.00000082.600000
\n", 369 | "
" 370 | ], 371 | "text/plain": [ 372 | " Cement Blast Furnace Slag Fly Ash Water \\\n", 373 | "count 1030.000000 1030.000000 1030.000000 1030.000000 \n", 374 | "mean 281.167864 73.895825 54.188350 181.567282 \n", 375 | "std 104.506364 86.279342 63.997004 21.354219 \n", 376 | "min 102.000000 0.000000 0.000000 121.800000 \n", 377 | "25% 192.375000 0.000000 0.000000 164.900000 \n", 378 | "50% 272.900000 22.000000 0.000000 185.000000 \n", 379 | "75% 350.000000 142.950000 118.300000 192.000000 \n", 380 | "max 540.000000 359.400000 200.100000 247.000000 \n", 381 | "\n", 382 | " Superplasticizer Coarse Aggregate Fine Aggregate Age \\\n", 383 | "count 1030.000000 1030.000000 1030.000000 1030.000000 \n", 384 | "mean 6.204660 972.918932 773.580485 45.662136 \n", 385 | "std 5.973841 77.753954 80.175980 63.169912 \n", 386 | "min 0.000000 801.000000 594.000000 1.000000 \n", 387 | "25% 0.000000 932.000000 730.950000 7.000000 \n", 388 | "50% 6.400000 968.000000 779.500000 28.000000 \n", 389 | "75% 10.200000 1029.400000 824.000000 56.000000 \n", 390 | "max 32.200000 1145.000000 992.600000 365.000000 \n", 391 | "\n", 392 | " Strength \n", 393 | "count 1030.000000 \n", 394 | "mean 35.817961 \n", 395 | "std 16.705742 \n", 396 | "min 2.330000 \n", 397 | "25% 23.710000 \n", 398 | "50% 34.445000 \n", 399 | "75% 46.135000 \n", 400 | "max 82.600000 " 401 | ] 402 | }, 403 | "execution_count": 4, 404 | "metadata": {}, 405 | "output_type": "execute_result" 406 | } 407 | ], 408 | "source": [ 409 | "concrete_data.describe()" 410 | ] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": 5, 415 | "metadata": {}, 416 | "outputs": [ 417 | { 418 | "data": { 419 | "text/plain": [ 420 | "Cement 0\n", 421 | "Blast Furnace Slag 0\n", 422 | "Fly Ash 0\n", 423 | "Water 0\n", 424 | "Superplasticizer 0\n", 425 | "Coarse Aggregate 0\n", 426 | "Fine Aggregate 0\n", 427 | "Age 0\n", 428 | "Strength 0\n", 429 | "dtype: int64" 430 | ] 431 | }, 432 | "execution_count": 5, 433 | "metadata": {}, 434 | "output_type": "execute_result" 435 | } 436 | ], 437 | "source": [ 438 | "concrete_data.isnull().sum()" 439 | ] 440 | }, 441 | { 442 | "cell_type": "markdown", 443 | "metadata": {}, 444 | "source": [ 445 | "The data looks very clean and is ready to be used to build our model." 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "#### Split data into predictors and target" 453 | ] 454 | }, 455 | { 456 | "cell_type": "markdown", 457 | "metadata": {}, 458 | "source": [ 459 | "The target variable in this problem is the concrete sample strength. Therefore, our predictors will be all the other columns." 460 | ] 461 | }, 462 | { 463 | "cell_type": "code", 464 | "execution_count": 6, 465 | "metadata": {}, 466 | "outputs": [], 467 | "source": [ 468 | "concrete_data_columns = concrete_data.columns\n", 469 | "predictors = concrete_data[concrete_data_columns[concrete_data_columns != 'Strength']] # all columns except Strength\n", 470 | "target = concrete_data['Strength'] # Strength column" 471 | ] 472 | }, 473 | { 474 | "cell_type": "markdown", 475 | "metadata": {}, 476 | "source": [ 477 | "Let's do a quick sanity check of the predictors and the target dataframes." 478 | ] 479 | }, 480 | { 481 | "cell_type": "code", 482 | "execution_count": 7, 483 | "metadata": {}, 484 | "outputs": [ 485 | { 486 | "data": { 487 | "text/html": [ 488 | "
\n", 489 | "\n", 502 | "\n", 503 | " \n", 504 | " \n", 505 | " \n", 506 | " \n", 507 | " \n", 508 | " \n", 509 | " \n", 510 | " \n", 511 | " \n", 512 | " \n", 513 | " \n", 514 | " \n", 515 | " \n", 516 | " \n", 517 | " \n", 518 | " \n", 519 | " \n", 520 | " \n", 521 | " \n", 522 | " \n", 523 | " \n", 524 | " \n", 525 | " \n", 526 | " \n", 527 | " \n", 528 | " \n", 529 | " \n", 530 | " \n", 531 | " \n", 532 | " \n", 533 | " \n", 534 | " \n", 535 | " \n", 536 | " \n", 537 | " \n", 538 | " \n", 539 | " \n", 540 | " \n", 541 | " \n", 542 | " \n", 543 | " \n", 544 | " \n", 545 | " \n", 546 | " \n", 547 | " \n", 548 | " \n", 549 | " \n", 550 | " \n", 551 | " \n", 552 | " \n", 553 | " \n", 554 | " \n", 555 | " \n", 556 | " \n", 557 | " \n", 558 | " \n", 559 | " \n", 560 | " \n", 561 | " \n", 562 | " \n", 563 | " \n", 564 | " \n", 565 | " \n", 566 | " \n", 567 | " \n", 568 | " \n", 569 | " \n", 570 | " \n", 571 | " \n", 572 | " \n", 573 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAge
0540.00.00.0162.02.51040.0676.028
1540.00.00.0162.02.51055.0676.028
2332.5142.50.0228.00.0932.0594.0270
3332.5142.50.0228.00.0932.0594.0365
4198.6132.40.0192.00.0978.4825.5360
\n", 574 | "
" 575 | ], 576 | "text/plain": [ 577 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 578 | "0 540.0 0.0 0.0 162.0 2.5 \n", 579 | "1 540.0 0.0 0.0 162.0 2.5 \n", 580 | "2 332.5 142.5 0.0 228.0 0.0 \n", 581 | "3 332.5 142.5 0.0 228.0 0.0 \n", 582 | "4 198.6 132.4 0.0 192.0 0.0 \n", 583 | "\n", 584 | " Coarse Aggregate Fine Aggregate Age \n", 585 | "0 1040.0 676.0 28 \n", 586 | "1 1055.0 676.0 28 \n", 587 | "2 932.0 594.0 270 \n", 588 | "3 932.0 594.0 365 \n", 589 | "4 978.4 825.5 360 " 590 | ] 591 | }, 592 | "execution_count": 7, 593 | "metadata": {}, 594 | "output_type": "execute_result" 595 | } 596 | ], 597 | "source": [ 598 | "predictors.head()" 599 | ] 600 | }, 601 | { 602 | "cell_type": "code", 603 | "execution_count": 8, 604 | "metadata": {}, 605 | "outputs": [ 606 | { 607 | "data": { 608 | "text/plain": [ 609 | "0 79.99\n", 610 | "1 61.89\n", 611 | "2 40.27\n", 612 | "3 41.05\n", 613 | "4 44.30\n", 614 | "Name: Strength, dtype: float64" 615 | ] 616 | }, 617 | "execution_count": 8, 618 | "metadata": {}, 619 | "output_type": "execute_result" 620 | } 621 | ], 622 | "source": [ 623 | "target.head()" 624 | ] 625 | }, 626 | { 627 | "cell_type": "markdown", 628 | "metadata": {}, 629 | "source": [ 630 | "Finally, the last step is to normalize the data by substracting the mean and dividing by the standard deviation." 631 | ] 632 | }, 633 | { 634 | "cell_type": "code", 635 | "execution_count": 9, 636 | "metadata": {}, 637 | "outputs": [ 638 | { 639 | "data": { 640 | "text/html": [ 641 | "
\n", 642 | "\n", 655 | "\n", 656 | " \n", 657 | " \n", 658 | " \n", 659 | " \n", 660 | " \n", 661 | " \n", 662 | " \n", 663 | " \n", 664 | " \n", 665 | " \n", 666 | " \n", 667 | " \n", 668 | " \n", 669 | " \n", 670 | " \n", 671 | " \n", 672 | " \n", 673 | " \n", 674 | " \n", 675 | " \n", 676 | " \n", 677 | " \n", 678 | " \n", 679 | " \n", 680 | " \n", 681 | " \n", 682 | " \n", 683 | " \n", 684 | " \n", 685 | " \n", 686 | " \n", 687 | " \n", 688 | " \n", 689 | " \n", 690 | " \n", 691 | " \n", 692 | " \n", 693 | " \n", 694 | " \n", 695 | " \n", 696 | " \n", 697 | " \n", 698 | " \n", 699 | " \n", 700 | " \n", 701 | " \n", 702 | " \n", 703 | " \n", 704 | " \n", 705 | " \n", 706 | " \n", 707 | " \n", 708 | " \n", 709 | " \n", 710 | " \n", 711 | " \n", 712 | " \n", 713 | " \n", 714 | " \n", 715 | " \n", 716 | " \n", 717 | " \n", 718 | " \n", 719 | " \n", 720 | " \n", 721 | " \n", 722 | " \n", 723 | " \n", 724 | " \n", 725 | " \n", 726 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAge
02.476712-0.856472-0.846733-0.916319-0.6201470.862735-1.217079-0.279597
12.476712-0.856472-0.846733-0.916319-0.6201471.055651-1.217079-0.279597
20.4911870.795140-0.8467332.174405-1.038638-0.526262-2.2398293.551340
30.4911870.795140-0.8467332.174405-1.038638-0.526262-2.2398295.055221
4-0.7900750.678079-0.8467330.488555-1.0386380.0704920.6475694.976069
\n", 727 | "
" 728 | ], 729 | "text/plain": [ 730 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 731 | "0 2.476712 -0.856472 -0.846733 -0.916319 -0.620147 \n", 732 | "1 2.476712 -0.856472 -0.846733 -0.916319 -0.620147 \n", 733 | "2 0.491187 0.795140 -0.846733 2.174405 -1.038638 \n", 734 | "3 0.491187 0.795140 -0.846733 2.174405 -1.038638 \n", 735 | "4 -0.790075 0.678079 -0.846733 0.488555 -1.038638 \n", 736 | "\n", 737 | " Coarse Aggregate Fine Aggregate Age \n", 738 | "0 0.862735 -1.217079 -0.279597 \n", 739 | "1 1.055651 -1.217079 -0.279597 \n", 740 | "2 -0.526262 -2.239829 3.551340 \n", 741 | "3 -0.526262 -2.239829 5.055221 \n", 742 | "4 0.070492 0.647569 4.976069 " 743 | ] 744 | }, 745 | "execution_count": 9, 746 | "metadata": {}, 747 | "output_type": "execute_result" 748 | } 749 | ], 750 | "source": [ 751 | "predictors_norm = (predictors - predictors.mean()) / predictors.std()\n", 752 | "predictors_norm.head()" 753 | ] 754 | }, 755 | { 756 | "cell_type": "code", 757 | "execution_count": 10, 758 | "metadata": {}, 759 | "outputs": [ 760 | { 761 | "data": { 762 | "text/plain": [ 763 | "8" 764 | ] 765 | }, 766 | "execution_count": 10, 767 | "metadata": {}, 768 | "output_type": "execute_result" 769 | } 770 | ], 771 | "source": [ 772 | "n_cols = predictors_norm.shape[1] # number of predictors\n", 773 | "n_cols" 774 | ] 775 | }, 776 | { 777 | "cell_type": "markdown", 778 | "metadata": {}, 779 | "source": [ 780 | "" 781 | ] 782 | }, 783 | { 784 | "cell_type": "markdown", 785 | "metadata": {}, 786 | "source": [ 787 | "" 788 | ] 789 | }, 790 | { 791 | "cell_type": "markdown", 792 | "metadata": {}, 793 | "source": [ 794 | "## Import Keras" 795 | ] 796 | }, 797 | { 798 | "cell_type": "markdown", 799 | "metadata": {}, 800 | "source": [ 801 | "#### Let's go ahead and import the Keras library" 802 | ] 803 | }, 804 | { 805 | "cell_type": "code", 806 | "execution_count": 11, 807 | "metadata": {}, 808 | "outputs": [ 809 | { 810 | "name": "stderr", 811 | "output_type": "stream", 812 | "text": [ 813 | "Using TensorFlow backend.\n" 814 | ] 815 | } 816 | ], 817 | "source": [ 818 | "import keras" 819 | ] 820 | }, 821 | { 822 | "cell_type": "markdown", 823 | "metadata": {}, 824 | "source": [ 825 | "As you can see, the TensorFlow backend was used to install the Keras library." 826 | ] 827 | }, 828 | { 829 | "cell_type": "markdown", 830 | "metadata": {}, 831 | "source": [ 832 | "Let's import the rest of the packages from the Keras library that we will need to build our regressoin model." 833 | ] 834 | }, 835 | { 836 | "cell_type": "code", 837 | "execution_count": 12, 838 | "metadata": {}, 839 | "outputs": [], 840 | "source": [ 841 | "from keras.models import Sequential\n", 842 | "from keras.layers import Dense" 843 | ] 844 | }, 845 | { 846 | "cell_type": "code", 847 | "execution_count": 13, 848 | "metadata": {}, 849 | "outputs": [], 850 | "source": [ 851 | "# define regression model\n", 852 | "def regression_model():\n", 853 | " # create model\n", 854 | " model = Sequential()\n", 855 | " model.add(Dense(10, activation='relu', input_shape=(n_cols,)))\n", 856 | " model.add(Dense(1))\n", 857 | " \n", 858 | " # compile model\n", 859 | " model.compile(optimizer='adam', loss='mean_squared_error')\n", 860 | " return model" 861 | ] 862 | }, 863 | { 864 | "cell_type": "markdown", 865 | "metadata": {}, 866 | "source": [ 867 | "The above function creates a model that has one hidden layer with 10 neurons and a ReLU activation function. It uses the adam optimizer and the mean squared error as the loss function." 
868 | ] 869 | }, 870 | { 871 | "cell_type": "markdown", 872 | "metadata": {}, 873 | "source": [ 874 | "Let's import scikit-learn in order to randomly split the data into a training and test sets" 875 | ] 876 | }, 877 | { 878 | "cell_type": "code", 879 | "execution_count": 14, 880 | "metadata": {}, 881 | "outputs": [], 882 | "source": [ 883 | "from sklearn.model_selection import train_test_split" 884 | ] 885 | }, 886 | { 887 | "cell_type": "markdown", 888 | "metadata": {}, 889 | "source": [ 890 | "Splitting the data into a training and test sets by holding 30% of the data for testing" 891 | ] 892 | }, 893 | { 894 | "cell_type": "code", 895 | "execution_count": 15, 896 | "metadata": {}, 897 | "outputs": [], 898 | "source": [ 899 | "X_train, X_test, y_train, y_test = train_test_split(predictors_norm, target, test_size=0.3, random_state=42)" 900 | ] 901 | }, 902 | { 903 | "cell_type": "markdown", 904 | "metadata": {}, 905 | "source": [ 906 | "## Train and Test the Network" 907 | ] 908 | }, 909 | { 910 | "cell_type": "markdown", 911 | "metadata": {}, 912 | "source": [ 913 | "Let's call the function now to create our model." 914 | ] 915 | }, 916 | { 917 | "cell_type": "code", 918 | "execution_count": 16, 919 | "metadata": {}, 920 | "outputs": [], 921 | "source": [ 922 | "# build the model\n", 923 | "model = regression_model()" 924 | ] 925 | }, 926 | { 927 | "cell_type": "markdown", 928 | "metadata": {}, 929 | "source": [ 930 | "Next, we will train the model for 50 epochs.\n" 931 | ] 932 | }, 933 | { 934 | "cell_type": "code", 935 | "execution_count": 17, 936 | "metadata": {}, 937 | "outputs": [ 938 | { 939 | "name": "stdout", 940 | "output_type": "stream", 941 | "text": [ 942 | "Epoch 1/50\n", 943 | " - 0s - loss: 1651.1822\n", 944 | "Epoch 2/50\n", 945 | " - 0s - loss: 1631.2427\n", 946 | "Epoch 3/50\n", 947 | " - 0s - loss: 1611.8665\n", 948 | "Epoch 4/50\n", 949 | " - 0s - loss: 1593.0129\n", 950 | "Epoch 5/50\n", 951 | " - 0s - loss: 1574.5028\n", 952 | "Epoch 6/50\n", 953 | " - 0s - loss: 1556.1718\n", 954 | "Epoch 7/50\n", 955 | " - 0s - loss: 1538.1068\n", 956 | "Epoch 8/50\n", 957 | " - 0s - loss: 1520.4636\n", 958 | "Epoch 9/50\n", 959 | " - 0s - loss: 1502.4612\n", 960 | "Epoch 10/50\n", 961 | " - 0s - loss: 1484.6265\n", 962 | "Epoch 11/50\n", 963 | " - 0s - loss: 1466.5279\n", 964 | "Epoch 12/50\n", 965 | " - 0s - loss: 1448.3623\n", 966 | "Epoch 13/50\n", 967 | " - 0s - loss: 1429.3800\n", 968 | "Epoch 14/50\n", 969 | " - 0s - loss: 1410.6920\n", 970 | "Epoch 15/50\n", 971 | " - 0s - loss: 1391.1524\n", 972 | "Epoch 16/50\n", 973 | " - 0s - loss: 1371.3942\n", 974 | "Epoch 17/50\n", 975 | " - 0s - loss: 1351.2343\n", 976 | "Epoch 18/50\n", 977 | " - 0s - loss: 1330.5209\n", 978 | "Epoch 19/50\n", 979 | " - 0s - loss: 1309.4222\n", 980 | "Epoch 20/50\n", 981 | " - 0s - loss: 1287.7469\n", 982 | "Epoch 21/50\n", 983 | " - 0s - loss: 1265.5437\n", 984 | "Epoch 22/50\n", 985 | " - 0s - loss: 1242.3534\n", 986 | "Epoch 23/50\n", 987 | " - 0s - loss: 1219.0512\n", 988 | "Epoch 24/50\n", 989 | " - 0s - loss: 1194.1156\n", 990 | "Epoch 25/50\n", 991 | " - 0s - loss: 1169.3435\n", 992 | "Epoch 26/50\n", 993 | " - 0s - loss: 1143.0799\n", 994 | "Epoch 27/50\n", 995 | " - 0s - loss: 1116.2312\n", 996 | "Epoch 28/50\n", 997 | " - 0s - loss: 1088.7420\n", 998 | "Epoch 29/50\n", 999 | " - 0s - loss: 1060.6200\n", 1000 | "Epoch 30/50\n", 1001 | " - 0s - loss: 1032.4794\n", 1002 | "Epoch 31/50\n", 1003 | " - 0s - loss: 1003.6558\n", 1004 | "Epoch 32/50\n", 1005 | " - 0s - loss: 974.9272\n", 1006 | 
"Epoch 33/50\n", 1007 | " - 0s - loss: 946.0666\n", 1008 | "Epoch 34/50\n", 1009 | " - 0s - loss: 917.1510\n", 1010 | "Epoch 35/50\n", 1011 | " - 0s - loss: 888.5678\n", 1012 | "Epoch 36/50\n", 1013 | " - 0s - loss: 859.9041\n", 1014 | "Epoch 37/50\n", 1015 | " - 0s - loss: 831.3088\n", 1016 | "Epoch 38/50\n", 1017 | " - 0s - loss: 803.0652\n", 1018 | "Epoch 39/50\n", 1019 | " - 0s - loss: 775.1644\n", 1020 | "Epoch 40/50\n", 1021 | " - 0s - loss: 747.1952\n", 1022 | "Epoch 41/50\n", 1023 | " - 0s - loss: 720.1013\n", 1024 | "Epoch 42/50\n", 1025 | " - 0s - loss: 693.5036\n", 1026 | "Epoch 43/50\n", 1027 | " - 0s - loss: 667.2098\n", 1028 | "Epoch 44/50\n", 1029 | " - 0s - loss: 641.2060\n", 1030 | "Epoch 45/50\n", 1031 | " - 0s - loss: 616.5792\n", 1032 | "Epoch 46/50\n", 1033 | " - 0s - loss: 592.0672\n", 1034 | "Epoch 47/50\n", 1035 | " - 0s - loss: 568.0495\n", 1036 | "Epoch 48/50\n", 1037 | " - 0s - loss: 545.4204\n", 1038 | "Epoch 49/50\n", 1039 | " - 0s - loss: 522.8632\n", 1040 | "Epoch 50/50\n", 1041 | " - 0s - loss: 501.0779\n" 1042 | ] 1043 | }, 1044 | { 1045 | "data": { 1046 | "text/plain": [ 1047 | "" 1048 | ] 1049 | }, 1050 | "execution_count": 17, 1051 | "metadata": {}, 1052 | "output_type": "execute_result" 1053 | } 1054 | ], 1055 | "source": [ 1056 | "# fit the model\n", 1057 | "epochs = 50\n", 1058 | "model.fit(X_train, y_train, epochs=epochs, verbose=2)" 1059 | ] 1060 | }, 1061 | { 1062 | "cell_type": "markdown", 1063 | "metadata": {}, 1064 | "source": [ 1065 | "Next we need to evaluate the model on the test data." 1066 | ] 1067 | }, 1068 | { 1069 | "cell_type": "code", 1070 | "execution_count": 18, 1071 | "metadata": {}, 1072 | "outputs": [ 1073 | { 1074 | "name": "stdout", 1075 | "output_type": "stream", 1076 | "text": [ 1077 | "309/309 [==============================] - 0s 73us/step\n" 1078 | ] 1079 | }, 1080 | { 1081 | "data": { 1082 | "text/plain": [ 1083 | "493.70862813906376" 1084 | ] 1085 | }, 1086 | "execution_count": 18, 1087 | "metadata": {}, 1088 | "output_type": "execute_result" 1089 | } 1090 | ], 1091 | "source": [ 1092 | "loss_val = model.evaluate(X_test, y_test)\n", 1093 | "y_pred = model.predict(X_test)\n", 1094 | "loss_val" 1095 | ] 1096 | }, 1097 | { 1098 | "cell_type": "markdown", 1099 | "metadata": {}, 1100 | "source": [ 1101 | "Now we need to compute the mean squared error between the predicted concrete strength and the actual concrete strength." 1102 | ] 1103 | }, 1104 | { 1105 | "cell_type": "markdown", 1106 | "metadata": {}, 1107 | "source": [ 1108 | "Let's import the mean_squared_error function from Scikit-learn." 1109 | ] 1110 | }, 1111 | { 1112 | "cell_type": "code", 1113 | "execution_count": 19, 1114 | "metadata": {}, 1115 | "outputs": [], 1116 | "source": [ 1117 | "from sklearn.metrics import mean_squared_error" 1118 | ] 1119 | }, 1120 | { 1121 | "cell_type": "code", 1122 | "execution_count": 20, 1123 | "metadata": {}, 1124 | "outputs": [ 1125 | { 1126 | "name": "stdout", 1127 | "output_type": "stream", 1128 | "text": [ 1129 | "493.70863791330663 0.0\n" 1130 | ] 1131 | } 1132 | ], 1133 | "source": [ 1134 | "mean_square_error = mean_squared_error(y_test, y_pred)\n", 1135 | "mean = np.mean(mean_square_error)\n", 1136 | "standard_deviation = np.std(mean_square_error)\n", 1137 | "print(mean, standard_deviation)" 1138 | ] 1139 | }, 1140 | { 1141 | "cell_type": "markdown", 1142 | "metadata": {}, 1143 | "source": [ 1144 | "Create a list of 50 mean squared errors and report mean and the standard deviation of the mean squared errors." 
1145 | ] 1146 | }, 1147 | { 1148 | "cell_type": "code", 1149 | "execution_count": 22, 1150 | "metadata": {}, 1151 | "outputs": [ 1152 | { 1153 | "name": "stdout", 1154 | "output_type": "stream", 1155 | "text": [ 1156 | "MSE 1: 96.23834398880746\n", 1157 | "MSE 2: 82.52934939190023\n", 1158 | "MSE 3: 50.423970811961155\n", 1159 | "MSE 4: 48.99604903532849\n", 1160 | "MSE 5: 44.70407389668585\n", 1161 | "MSE 6: 45.84412700072847\n", 1162 | "MSE 7: 47.875012147773816\n", 1163 | "MSE 8: 35.477522495109284\n", 1164 | "MSE 9: 35.563788170181816\n", 1165 | "MSE 10: 36.80308609255695\n", 1166 | "MSE 11: 35.19219320105889\n", 1167 | "MSE 12: 34.34737917211835\n", 1168 | "MSE 13: 40.38878035313875\n", 1169 | "MSE 14: 38.32519055956004\n", 1170 | "MSE 15: 32.740622634640786\n", 1171 | "MSE 16: 27.634737857337136\n", 1172 | "MSE 17: 31.34054788879592\n", 1173 | "MSE 18: 32.40728301755047\n", 1174 | "MSE 19: 31.37120697490606\n", 1175 | "MSE 20: 31.427853618239123\n", 1176 | "MSE 21: 29.94852909223933\n", 1177 | "MSE 22: 30.7034236080824\n", 1178 | "MSE 23: 25.380024011852672\n", 1179 | "MSE 24: 28.135550983515373\n", 1180 | "MSE 25: 32.137249283034436\n", 1181 | "MSE 26: 32.515969162234214\n", 1182 | "MSE 27: 26.966679705771043\n", 1183 | "MSE 28: 28.428698246533045\n", 1184 | "MSE 29: 31.233512199426546\n", 1185 | "MSE 30: 29.140160242716473\n", 1186 | "MSE 31: 27.48099442515944\n", 1187 | "MSE 32: 27.894654844956875\n", 1188 | "MSE 33: 25.82127880355687\n", 1189 | "MSE 34: 28.76602467904199\n", 1190 | "MSE 35: 32.21809555103092\n", 1191 | "MSE 36: 32.908837167190505\n", 1192 | "MSE 37: 24.138178942658755\n", 1193 | "MSE 38: 30.938592262638426\n", 1194 | "MSE 39: 29.289363231473757\n", 1195 | "MSE 40: 24.388125768371385\n", 1196 | "MSE 41: 30.36188562633922\n", 1197 | "MSE 42: 25.28865729643689\n", 1198 | "MSE 43: 27.72400673924912\n", 1199 | "MSE 44: 33.43268985192753\n", 1200 | "MSE 45: 30.220784554589528\n", 1201 | "MSE 46: 30.706770295078314\n", 1202 | "MSE 47: 28.18811243791796\n", 1203 | "MSE 48: 30.8071376961026\n", 1204 | "MSE 49: 31.410146176236346\n", 1205 | "MSE 50: 30.94078703451311\n", 1206 | "\n", 1207 | "\n", 1208 | "Below is the mean and standard deviation of 50 mean squared errors with normalized data. Total number of epochs for each training is: 50\n", 1209 | "\n", 1210 | "Mean: 34.742920380277766\n", 1211 | "Standard Deviation: 12.76633575833323\n" 1212 | ] 1213 | } 1214 | ], 1215 | "source": [ 1216 | "total_mean_squared_errors = 50\n", 1217 | "epochs = 50\n", 1218 | "mean_squared_errors = []\n", 1219 | "for i in range(0, total_mean_squared_errors):\n", 1220 | " X_train, X_test, y_train, y_test = train_test_split(predictors_norm, target, test_size=0.3, random_state=i)\n", 1221 | " model.fit(X_train, y_train, epochs=epochs, verbose=0)\n", 1222 | " MSE = model.evaluate(X_test, y_test, verbose=0)\n", 1223 | " print(\"MSE \"+str(i+1)+\": \"+str(MSE))\n", 1224 | " y_pred = model.predict(X_test)\n", 1225 | " mean_square_error = mean_squared_error(y_test, y_pred)\n", 1226 | " mean_squared_errors.append(mean_square_error)\n", 1227 | "\n", 1228 | "mean_squared_errors = np.array(mean_squared_errors)\n", 1229 | "mean = np.mean(mean_squared_errors)\n", 1230 | "standard_deviation = np.std(mean_squared_errors)\n", 1231 | "\n", 1232 | "print('\\n')\n", 1233 | "print(\"Below is the mean and standard deviation of \" +str(total_mean_squared_errors) + \" mean squared errors with normalized data. 
Total number of epochs for each training is: \" +str(epochs) + \"\\n\")\n", 1234 | "print(\"Mean: \"+str(mean))\n", 1235 | "print(\"Standard Deviation: \"+str(standard_deviation))" 1236 | ] 1237 | }, 1238 | { 1239 | "cell_type": "code", 1240 | "execution_count": null, 1241 | "metadata": {}, 1242 | "outputs": [], 1243 | "source": [] 1244 | } 1245 | ], 1246 | "metadata": { 1247 | "kernelspec": { 1248 | "display_name": "Python 3", 1249 | "language": "python", 1250 | "name": "python3" 1251 | }, 1252 | "language_info": { 1253 | "codemirror_mode": { 1254 | "name": "ipython", 1255 | "version": 3 1256 | }, 1257 | "file_extension": ".py", 1258 | "mimetype": "text/x-python", 1259 | "name": "python", 1260 | "nbconvert_exporter": "python", 1261 | "pygments_lexer": "ipython3", 1262 | "version": "3.6.9" 1263 | } 1264 | }, 1265 | "nbformat": 4, 1266 | "nbformat_minor": 2 1267 | } 1268 | -------------------------------------------------------------------------------- /Peer-graded Assignment_ Build a Regression Model in Keras (C).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## Download and Clean Dataset" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "Let's start by importing the pandas and the Numpy libraries." 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 1, 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "import pandas as pd\n", 24 | "import numpy as np" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "We will be using the dataset provided in the assignment\n", 32 | "\n", 33 | "The dataset is about the compressive strength of different samples of concrete based on the volumes of the different ingredients that were used to make them. Ingredients include:\n", 34 | "\n", 35 | "1. Cement\n", 36 | "\n", 37 | "2. Blast Furnace Slag\n", 38 | "\n", 39 | "3. Fly Ash\n", 40 | "\n", 41 | "4. Water\n", 42 | "\n", 43 | "5. Superplasticizer\n", 44 | "\n", 45 | "6. Coarse Aggregate\n", 46 | "\n", 47 | "7. Fine Aggregate" 48 | ] 49 | }, 50 | { 51 | "cell_type": "markdown", 52 | "metadata": {}, 53 | "source": [ 54 | "Let's read the dataset into a pandas dataframe." 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 2, 60 | "metadata": {}, 61 | "outputs": [ 62 | { 63 | "data": { 64 | "text/html": [ 65 | "
\n", 66 | "\n", 79 | "\n", 80 | " \n", 81 | " \n", 82 | " \n", 83 | " \n", 84 | " \n", 85 | " \n", 86 | " \n", 87 | " \n", 88 | " \n", 89 | " \n", 90 | " \n", 91 | " \n", 92 | " \n", 93 | " \n", 94 | " \n", 95 | " \n", 96 | " \n", 97 | " \n", 98 | " \n", 99 | " \n", 100 | " \n", 101 | " \n", 102 | " \n", 103 | " \n", 104 | " \n", 105 | " \n", 106 | " \n", 107 | " \n", 108 | " \n", 109 | " \n", 110 | " \n", 111 | " \n", 112 | " \n", 113 | " \n", 114 | " \n", 115 | " \n", 116 | " \n", 117 | " \n", 118 | " \n", 119 | " \n", 120 | " \n", 121 | " \n", 122 | " \n", 123 | " \n", 124 | " \n", 125 | " \n", 126 | " \n", 127 | " \n", 128 | " \n", 129 | " \n", 130 | " \n", 131 | " \n", 132 | " \n", 133 | " \n", 134 | " \n", 135 | " \n", 136 | " \n", 137 | " \n", 138 | " \n", 139 | " \n", 140 | " \n", 141 | " \n", 142 | " \n", 143 | " \n", 144 | " \n", 145 | " \n", 146 | " \n", 147 | " \n", 148 | " \n", 149 | " \n", 150 | " \n", 151 | " \n", 152 | " \n", 153 | " \n", 154 | " \n", 155 | " \n", 156 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAgeStrength
0540.00.00.0162.02.51040.0676.02879.99
1540.00.00.0162.02.51055.0676.02861.89
2332.5142.50.0228.00.0932.0594.027040.27
3332.5142.50.0228.00.0932.0594.036541.05
4198.6132.40.0192.00.0978.4825.536044.30
\n", 157 | "
" 158 | ], 159 | "text/plain": [ 160 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 161 | "0 540.0 0.0 0.0 162.0 2.5 \n", 162 | "1 540.0 0.0 0.0 162.0 2.5 \n", 163 | "2 332.5 142.5 0.0 228.0 0.0 \n", 164 | "3 332.5 142.5 0.0 228.0 0.0 \n", 165 | "4 198.6 132.4 0.0 192.0 0.0 \n", 166 | "\n", 167 | " Coarse Aggregate Fine Aggregate Age Strength \n", 168 | "0 1040.0 676.0 28 79.99 \n", 169 | "1 1055.0 676.0 28 61.89 \n", 170 | "2 932.0 594.0 270 40.27 \n", 171 | "3 932.0 594.0 365 41.05 \n", 172 | "4 978.4 825.5 360 44.30 " 173 | ] 174 | }, 175 | "execution_count": 2, 176 | "metadata": {}, 177 | "output_type": "execute_result" 178 | } 179 | ], 180 | "source": [ 181 | "concrete_data = pd.read_csv('concrete_data.csv')\n", 182 | "concrete_data.head()" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "So the first concrete sample has 540 cubic meter of cement, 0 cubic meter of blast furnace slag, 0 cubic meter of fly ash, 162 cubic meter of water, 2.5 cubic meter of superplaticizer, 1040 cubic meter of coarse aggregate, 676 cubic meter of fine aggregate. Such a concrete mix which is 28 days old, has a compressive strength of 79.99 MPa. " 190 | ] 191 | }, 192 | { 193 | "cell_type": "markdown", 194 | "metadata": {}, 195 | "source": [ 196 | "#### Let's check how many data points we have." 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": 3, 202 | "metadata": {}, 203 | "outputs": [ 204 | { 205 | "data": { 206 | "text/plain": [ 207 | "(1030, 9)" 208 | ] 209 | }, 210 | "execution_count": 3, 211 | "metadata": {}, 212 | "output_type": "execute_result" 213 | } 214 | ], 215 | "source": [ 216 | "concrete_data.shape" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "So, there are approximately 1000 samples to train our model on. Because of the few samples, we have to be careful not to overfit the training data." 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": {}, 229 | "source": [ 230 | "Let's check the dataset for any missing values." 231 | ] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "execution_count": 4, 236 | "metadata": {}, 237 | "outputs": [ 238 | { 239 | "data": { 240 | "text/html": [ 241 | "
\n", 242 | "\n", 255 | "\n", 256 | " \n", 257 | " \n", 258 | " \n", 259 | " \n", 260 | " \n", 261 | " \n", 262 | " \n", 263 | " \n", 264 | " \n", 265 | " \n", 266 | " \n", 267 | " \n", 268 | " \n", 269 | " \n", 270 | " \n", 271 | " \n", 272 | " \n", 273 | " \n", 274 | " \n", 275 | " \n", 276 | " \n", 277 | " \n", 278 | " \n", 279 | " \n", 280 | " \n", 281 | " \n", 282 | " \n", 283 | " \n", 284 | " \n", 285 | " \n", 286 | " \n", 287 | " \n", 288 | " \n", 289 | " \n", 290 | " \n", 291 | " \n", 292 | " \n", 293 | " \n", 294 | " \n", 295 | " \n", 296 | " \n", 297 | " \n", 298 | " \n", 299 | " \n", 300 | " \n", 301 | " \n", 302 | " \n", 303 | " \n", 304 | " \n", 305 | " \n", 306 | " \n", 307 | " \n", 308 | " \n", 309 | " \n", 310 | " \n", 311 | " \n", 312 | " \n", 313 | " \n", 314 | " \n", 315 | " \n", 316 | " \n", 317 | " \n", 318 | " \n", 319 | " \n", 320 | " \n", 321 | " \n", 322 | " \n", 323 | " \n", 324 | " \n", 325 | " \n", 326 | " \n", 327 | " \n", 328 | " \n", 329 | " \n", 330 | " \n", 331 | " \n", 332 | " \n", 333 | " \n", 334 | " \n", 335 | " \n", 336 | " \n", 337 | " \n", 338 | " \n", 339 | " \n", 340 | " \n", 341 | " \n", 342 | " \n", 343 | " \n", 344 | " \n", 345 | " \n", 346 | " \n", 347 | " \n", 348 | " \n", 349 | " \n", 350 | " \n", 351 | " \n", 352 | " \n", 353 | " \n", 354 | " \n", 355 | " \n", 356 | " \n", 357 | " \n", 358 | " \n", 359 | " \n", 360 | " \n", 361 | " \n", 362 | " \n", 363 | " \n", 364 | " \n", 365 | " \n", 366 | " \n", 367 | " \n", 368 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAgeStrength
count1030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.0000001030.000000
mean281.16786473.89582554.188350181.5672826.204660972.918932773.58048545.66213635.817961
std104.50636486.27934263.99700421.3542195.97384177.75395480.17598063.16991216.705742
min102.0000000.0000000.000000121.8000000.000000801.000000594.0000001.0000002.330000
25%192.3750000.0000000.000000164.9000000.000000932.000000730.9500007.00000023.710000
50%272.90000022.0000000.000000185.0000006.400000968.000000779.50000028.00000034.445000
75%350.000000142.950000118.300000192.00000010.2000001029.400000824.00000056.00000046.135000
max540.000000359.400000200.100000247.00000032.2000001145.000000992.600000365.00000082.600000
\n", 369 | "
" 370 | ], 371 | "text/plain": [ 372 | " Cement Blast Furnace Slag Fly Ash Water \\\n", 373 | "count 1030.000000 1030.000000 1030.000000 1030.000000 \n", 374 | "mean 281.167864 73.895825 54.188350 181.567282 \n", 375 | "std 104.506364 86.279342 63.997004 21.354219 \n", 376 | "min 102.000000 0.000000 0.000000 121.800000 \n", 377 | "25% 192.375000 0.000000 0.000000 164.900000 \n", 378 | "50% 272.900000 22.000000 0.000000 185.000000 \n", 379 | "75% 350.000000 142.950000 118.300000 192.000000 \n", 380 | "max 540.000000 359.400000 200.100000 247.000000 \n", 381 | "\n", 382 | " Superplasticizer Coarse Aggregate Fine Aggregate Age \\\n", 383 | "count 1030.000000 1030.000000 1030.000000 1030.000000 \n", 384 | "mean 6.204660 972.918932 773.580485 45.662136 \n", 385 | "std 5.973841 77.753954 80.175980 63.169912 \n", 386 | "min 0.000000 801.000000 594.000000 1.000000 \n", 387 | "25% 0.000000 932.000000 730.950000 7.000000 \n", 388 | "50% 6.400000 968.000000 779.500000 28.000000 \n", 389 | "75% 10.200000 1029.400000 824.000000 56.000000 \n", 390 | "max 32.200000 1145.000000 992.600000 365.000000 \n", 391 | "\n", 392 | " Strength \n", 393 | "count 1030.000000 \n", 394 | "mean 35.817961 \n", 395 | "std 16.705742 \n", 396 | "min 2.330000 \n", 397 | "25% 23.710000 \n", 398 | "50% 34.445000 \n", 399 | "75% 46.135000 \n", 400 | "max 82.600000 " 401 | ] 402 | }, 403 | "execution_count": 4, 404 | "metadata": {}, 405 | "output_type": "execute_result" 406 | } 407 | ], 408 | "source": [ 409 | "concrete_data.describe()" 410 | ] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": 5, 415 | "metadata": {}, 416 | "outputs": [ 417 | { 418 | "data": { 419 | "text/plain": [ 420 | "Cement 0\n", 421 | "Blast Furnace Slag 0\n", 422 | "Fly Ash 0\n", 423 | "Water 0\n", 424 | "Superplasticizer 0\n", 425 | "Coarse Aggregate 0\n", 426 | "Fine Aggregate 0\n", 427 | "Age 0\n", 428 | "Strength 0\n", 429 | "dtype: int64" 430 | ] 431 | }, 432 | "execution_count": 5, 433 | "metadata": {}, 434 | "output_type": "execute_result" 435 | } 436 | ], 437 | "source": [ 438 | "concrete_data.isnull().sum()" 439 | ] 440 | }, 441 | { 442 | "cell_type": "markdown", 443 | "metadata": {}, 444 | "source": [ 445 | "The data looks very clean and is ready to be used to build our model." 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "#### Split data into predictors and target" 453 | ] 454 | }, 455 | { 456 | "cell_type": "markdown", 457 | "metadata": {}, 458 | "source": [ 459 | "The target variable in this problem is the concrete sample strength. Therefore, our predictors will be all the other columns." 460 | ] 461 | }, 462 | { 463 | "cell_type": "code", 464 | "execution_count": 6, 465 | "metadata": {}, 466 | "outputs": [], 467 | "source": [ 468 | "concrete_data_columns = concrete_data.columns\n", 469 | "predictors = concrete_data[concrete_data_columns[concrete_data_columns != 'Strength']] # all columns except Strength\n", 470 | "target = concrete_data['Strength'] # Strength column" 471 | ] 472 | }, 473 | { 474 | "cell_type": "markdown", 475 | "metadata": {}, 476 | "source": [ 477 | "Let's do a quick sanity check of the predictors and the target dataframes." 478 | ] 479 | }, 480 | { 481 | "cell_type": "code", 482 | "execution_count": 7, 483 | "metadata": {}, 484 | "outputs": [ 485 | { 486 | "data": { 487 | "text/html": [ 488 | "
\n", 489 | "\n", 502 | "\n", 503 | " \n", 504 | " \n", 505 | " \n", 506 | " \n", 507 | " \n", 508 | " \n", 509 | " \n", 510 | " \n", 511 | " \n", 512 | " \n", 513 | " \n", 514 | " \n", 515 | " \n", 516 | " \n", 517 | " \n", 518 | " \n", 519 | " \n", 520 | " \n", 521 | " \n", 522 | " \n", 523 | " \n", 524 | " \n", 525 | " \n", 526 | " \n", 527 | " \n", 528 | " \n", 529 | " \n", 530 | " \n", 531 | " \n", 532 | " \n", 533 | " \n", 534 | " \n", 535 | " \n", 536 | " \n", 537 | " \n", 538 | " \n", 539 | " \n", 540 | " \n", 541 | " \n", 542 | " \n", 543 | " \n", 544 | " \n", 545 | " \n", 546 | " \n", 547 | " \n", 548 | " \n", 549 | " \n", 550 | " \n", 551 | " \n", 552 | " \n", 553 | " \n", 554 | " \n", 555 | " \n", 556 | " \n", 557 | " \n", 558 | " \n", 559 | " \n", 560 | " \n", 561 | " \n", 562 | " \n", 563 | " \n", 564 | " \n", 565 | " \n", 566 | " \n", 567 | " \n", 568 | " \n", 569 | " \n", 570 | " \n", 571 | " \n", 572 | " \n", 573 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAge
0540.00.00.0162.02.51040.0676.028
1540.00.00.0162.02.51055.0676.028
2332.5142.50.0228.00.0932.0594.0270
3332.5142.50.0228.00.0932.0594.0365
4198.6132.40.0192.00.0978.4825.5360
\n", 574 | "
" 575 | ], 576 | "text/plain": [ 577 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 578 | "0 540.0 0.0 0.0 162.0 2.5 \n", 579 | "1 540.0 0.0 0.0 162.0 2.5 \n", 580 | "2 332.5 142.5 0.0 228.0 0.0 \n", 581 | "3 332.5 142.5 0.0 228.0 0.0 \n", 582 | "4 198.6 132.4 0.0 192.0 0.0 \n", 583 | "\n", 584 | " Coarse Aggregate Fine Aggregate Age \n", 585 | "0 1040.0 676.0 28 \n", 586 | "1 1055.0 676.0 28 \n", 587 | "2 932.0 594.0 270 \n", 588 | "3 932.0 594.0 365 \n", 589 | "4 978.4 825.5 360 " 590 | ] 591 | }, 592 | "execution_count": 7, 593 | "metadata": {}, 594 | "output_type": "execute_result" 595 | } 596 | ], 597 | "source": [ 598 | "predictors.head()" 599 | ] 600 | }, 601 | { 602 | "cell_type": "code", 603 | "execution_count": 8, 604 | "metadata": {}, 605 | "outputs": [ 606 | { 607 | "data": { 608 | "text/plain": [ 609 | "0 79.99\n", 610 | "1 61.89\n", 611 | "2 40.27\n", 612 | "3 41.05\n", 613 | "4 44.30\n", 614 | "Name: Strength, dtype: float64" 615 | ] 616 | }, 617 | "execution_count": 8, 618 | "metadata": {}, 619 | "output_type": "execute_result" 620 | } 621 | ], 622 | "source": [ 623 | "target.head()" 624 | ] 625 | }, 626 | { 627 | "cell_type": "markdown", 628 | "metadata": {}, 629 | "source": [ 630 | "Finally, the last step is to normalize the data by substracting the mean and dividing by the standard deviation." 631 | ] 632 | }, 633 | { 634 | "cell_type": "code", 635 | "execution_count": 9, 636 | "metadata": {}, 637 | "outputs": [ 638 | { 639 | "data": { 640 | "text/html": [ 641 | "
\n", 642 | "\n", 655 | "\n", 656 | " \n", 657 | " \n", 658 | " \n", 659 | " \n", 660 | " \n", 661 | " \n", 662 | " \n", 663 | " \n", 664 | " \n", 665 | " \n", 666 | " \n", 667 | " \n", 668 | " \n", 669 | " \n", 670 | " \n", 671 | " \n", 672 | " \n", 673 | " \n", 674 | " \n", 675 | " \n", 676 | " \n", 677 | " \n", 678 | " \n", 679 | " \n", 680 | " \n", 681 | " \n", 682 | " \n", 683 | " \n", 684 | " \n", 685 | " \n", 686 | " \n", 687 | " \n", 688 | " \n", 689 | " \n", 690 | " \n", 691 | " \n", 692 | " \n", 693 | " \n", 694 | " \n", 695 | " \n", 696 | " \n", 697 | " \n", 698 | " \n", 699 | " \n", 700 | " \n", 701 | " \n", 702 | " \n", 703 | " \n", 704 | " \n", 705 | " \n", 706 | " \n", 707 | " \n", 708 | " \n", 709 | " \n", 710 | " \n", 711 | " \n", 712 | " \n", 713 | " \n", 714 | " \n", 715 | " \n", 716 | " \n", 717 | " \n", 718 | " \n", 719 | " \n", 720 | " \n", 721 | " \n", 722 | " \n", 723 | " \n", 724 | " \n", 725 | " \n", 726 | "
CementBlast Furnace SlagFly AshWaterSuperplasticizerCoarse AggregateFine AggregateAge
02.476712-0.856472-0.846733-0.916319-0.6201470.862735-1.217079-0.279597
12.476712-0.856472-0.846733-0.916319-0.6201471.055651-1.217079-0.279597
20.4911870.795140-0.8467332.174405-1.038638-0.526262-2.2398293.551340
30.4911870.795140-0.8467332.174405-1.038638-0.526262-2.2398295.055221
4-0.7900750.678079-0.8467330.488555-1.0386380.0704920.6475694.976069
\n", 727 | "
" 728 | ], 729 | "text/plain": [ 730 | " Cement Blast Furnace Slag Fly Ash Water Superplasticizer \\\n", 731 | "0 2.476712 -0.856472 -0.846733 -0.916319 -0.620147 \n", 732 | "1 2.476712 -0.856472 -0.846733 -0.916319 -0.620147 \n", 733 | "2 0.491187 0.795140 -0.846733 2.174405 -1.038638 \n", 734 | "3 0.491187 0.795140 -0.846733 2.174405 -1.038638 \n", 735 | "4 -0.790075 0.678079 -0.846733 0.488555 -1.038638 \n", 736 | "\n", 737 | " Coarse Aggregate Fine Aggregate Age \n", 738 | "0 0.862735 -1.217079 -0.279597 \n", 739 | "1 1.055651 -1.217079 -0.279597 \n", 740 | "2 -0.526262 -2.239829 3.551340 \n", 741 | "3 -0.526262 -2.239829 5.055221 \n", 742 | "4 0.070492 0.647569 4.976069 " 743 | ] 744 | }, 745 | "execution_count": 9, 746 | "metadata": {}, 747 | "output_type": "execute_result" 748 | } 749 | ], 750 | "source": [ 751 | "predictors_norm = (predictors - predictors.mean()) / predictors.std()\n", 752 | "predictors_norm.head()" 753 | ] 754 | }, 755 | { 756 | "cell_type": "code", 757 | "execution_count": 10, 758 | "metadata": {}, 759 | "outputs": [], 760 | "source": [ 761 | "n_cols = predictors_norm.shape[1] # number of predictors" 762 | ] 763 | }, 764 | { 765 | "cell_type": "markdown", 766 | "metadata": {}, 767 | "source": [ 768 | "" 769 | ] 770 | }, 771 | { 772 | "cell_type": "markdown", 773 | "metadata": {}, 774 | "source": [ 775 | "" 776 | ] 777 | }, 778 | { 779 | "cell_type": "markdown", 780 | "metadata": {}, 781 | "source": [ 782 | "## Import Keras" 783 | ] 784 | }, 785 | { 786 | "cell_type": "markdown", 787 | "metadata": {}, 788 | "source": [ 789 | "#### Let's go ahead and import the Keras library" 790 | ] 791 | }, 792 | { 793 | "cell_type": "code", 794 | "execution_count": 11, 795 | "metadata": {}, 796 | "outputs": [ 797 | { 798 | "name": "stderr", 799 | "output_type": "stream", 800 | "text": [ 801 | "Using TensorFlow backend.\n" 802 | ] 803 | } 804 | ], 805 | "source": [ 806 | "import keras" 807 | ] 808 | }, 809 | { 810 | "cell_type": "markdown", 811 | "metadata": {}, 812 | "source": [ 813 | "As you can see, the TensorFlow backend was used to install the Keras library." 814 | ] 815 | }, 816 | { 817 | "cell_type": "markdown", 818 | "metadata": {}, 819 | "source": [ 820 | "Let's import the rest of the packages from the Keras library that we will need to build our regressoin model." 821 | ] 822 | }, 823 | { 824 | "cell_type": "code", 825 | "execution_count": 12, 826 | "metadata": {}, 827 | "outputs": [], 828 | "source": [ 829 | "from keras.models import Sequential\n", 830 | "from keras.layers import Dense" 831 | ] 832 | }, 833 | { 834 | "cell_type": "code", 835 | "execution_count": 13, 836 | "metadata": {}, 837 | "outputs": [], 838 | "source": [ 839 | "# define regression model\n", 840 | "def regression_model():\n", 841 | " # create model\n", 842 | " model = Sequential()\n", 843 | " model.add(Dense(10, activation='relu', input_shape=(n_cols,)))\n", 844 | " model.add(Dense(1))\n", 845 | " \n", 846 | " # compile model\n", 847 | " model.compile(optimizer='adam', loss='mean_squared_error')\n", 848 | " return model" 849 | ] 850 | }, 851 | { 852 | "cell_type": "markdown", 853 | "metadata": {}, 854 | "source": [ 855 | "The above function creates a model that has one hidden layer with 10 neurons and a ReLU activation function. It uses the adam optimizer and the mean squared error as the loss function." 
856 | ] 857 | }, 858 | { 859 | "cell_type": "markdown", 860 | "metadata": {}, 861 | "source": [ 862 | "Let's import scikit-learn in order to randomly split the data into a training and test sets" 863 | ] 864 | }, 865 | { 866 | "cell_type": "code", 867 | "execution_count": 14, 868 | "metadata": {}, 869 | "outputs": [], 870 | "source": [ 871 | "from sklearn.model_selection import train_test_split" 872 | ] 873 | }, 874 | { 875 | "cell_type": "markdown", 876 | "metadata": {}, 877 | "source": [ 878 | "Splitting the data into a training and test sets by holding 30% of the data for testing" 879 | ] 880 | }, 881 | { 882 | "cell_type": "code", 883 | "execution_count": 15, 884 | "metadata": {}, 885 | "outputs": [], 886 | "source": [ 887 | "X_train, X_test, y_train, y_test = train_test_split(predictors_norm, target, test_size=0.3, random_state=42)" 888 | ] 889 | }, 890 | { 891 | "cell_type": "markdown", 892 | "metadata": {}, 893 | "source": [ 894 | "## Train and Test the Network" 895 | ] 896 | }, 897 | { 898 | "cell_type": "markdown", 899 | "metadata": {}, 900 | "source": [ 901 | "Let's call the function now to create our model." 902 | ] 903 | }, 904 | { 905 | "cell_type": "code", 906 | "execution_count": 16, 907 | "metadata": {}, 908 | "outputs": [], 909 | "source": [ 910 | "# build the model\n", 911 | "model = regression_model()" 912 | ] 913 | }, 914 | { 915 | "cell_type": "markdown", 916 | "metadata": {}, 917 | "source": [ 918 | "Next, we will train the model for 50 epochs.\n" 919 | ] 920 | }, 921 | { 922 | "cell_type": "code", 923 | "execution_count": 17, 924 | "metadata": {}, 925 | "outputs": [ 926 | { 927 | "name": "stdout", 928 | "output_type": "stream", 929 | "text": [ 930 | "Epoch 1/50\n", 931 | " - 0s - loss: 1619.9290\n", 932 | "Epoch 2/50\n", 933 | " - 0s - loss: 1605.9002\n", 934 | "Epoch 3/50\n", 935 | " - 0s - loss: 1592.7049\n", 936 | "Epoch 4/50\n", 937 | " - 0s - loss: 1580.0384\n", 938 | "Epoch 5/50\n", 939 | " - 0s - loss: 1567.8959\n", 940 | "Epoch 6/50\n", 941 | " - 0s - loss: 1555.8902\n", 942 | "Epoch 7/50\n", 943 | " - 0s - loss: 1544.0718\n", 944 | "Epoch 8/50\n", 945 | " - 0s - loss: 1532.1884\n", 946 | "Epoch 9/50\n", 947 | " - 0s - loss: 1520.3562\n", 948 | "Epoch 10/50\n", 949 | " - 0s - loss: 1508.3005\n", 950 | "Epoch 11/50\n", 951 | " - 0s - loss: 1495.9915\n", 952 | "Epoch 12/50\n", 953 | " - 0s - loss: 1483.3504\n", 954 | "Epoch 13/50\n", 955 | " - 0s - loss: 1470.3092\n", 956 | "Epoch 14/50\n", 957 | " - 0s - loss: 1456.5277\n", 958 | "Epoch 15/50\n", 959 | " - 0s - loss: 1441.9769\n", 960 | "Epoch 16/50\n", 961 | " - 0s - loss: 1426.6419\n", 962 | "Epoch 17/50\n", 963 | " - 0s - loss: 1409.9818\n", 964 | "Epoch 18/50\n", 965 | " - 0s - loss: 1392.7723\n", 966 | "Epoch 19/50\n", 967 | " - 0s - loss: 1374.1715\n", 968 | "Epoch 20/50\n", 969 | " - 0s - loss: 1355.0938\n", 970 | "Epoch 21/50\n", 971 | " - 0s - loss: 1335.0666\n", 972 | "Epoch 22/50\n", 973 | " - 0s - loss: 1314.0019\n", 974 | "Epoch 23/50\n", 975 | " - 0s - loss: 1292.1961\n", 976 | "Epoch 24/50\n", 977 | " - 0s - loss: 1269.8421\n", 978 | "Epoch 25/50\n", 979 | " - 0s - loss: 1246.7773\n", 980 | "Epoch 26/50\n", 981 | " - 0s - loss: 1222.8060\n", 982 | "Epoch 27/50\n", 983 | " - 0s - loss: 1198.2740\n", 984 | "Epoch 28/50\n", 985 | " - 0s - loss: 1173.3091\n", 986 | "Epoch 29/50\n", 987 | " - 0s - loss: 1147.9970\n", 988 | "Epoch 30/50\n", 989 | " - 0s - loss: 1121.8537\n", 990 | "Epoch 31/50\n", 991 | " - 0s - loss: 1096.4113\n", 992 | "Epoch 32/50\n", 993 | " - 0s - loss: 1069.9583\n", 994 | "Epoch 
33/50\n", 995 | " - 0s - loss: 1043.5579\n", 996 | "Epoch 34/50\n", 997 | " - 0s - loss: 1017.1510\n", 998 | "Epoch 35/50\n", 999 | " - 0s - loss: 990.6406\n", 1000 | "Epoch 36/50\n", 1001 | " - 0s - loss: 964.0386\n", 1002 | "Epoch 37/50\n", 1003 | " - 0s - loss: 937.8134\n", 1004 | "Epoch 38/50\n", 1005 | " - 0s - loss: 911.5759\n", 1006 | "Epoch 39/50\n", 1007 | " - 0s - loss: 885.1590\n", 1008 | "Epoch 40/50\n", 1009 | " - 0s - loss: 859.3973\n", 1010 | "Epoch 41/50\n", 1011 | " - 0s - loss: 833.7985\n", 1012 | "Epoch 42/50\n", 1013 | " - 0s - loss: 808.4688\n", 1014 | "Epoch 43/50\n", 1015 | " - 0s - loss: 783.6086\n", 1016 | "Epoch 44/50\n", 1017 | " - 0s - loss: 758.5277\n", 1018 | "Epoch 45/50\n", 1019 | " - 0s - loss: 734.1365\n", 1020 | "Epoch 46/50\n", 1021 | " - 0s - loss: 709.8988\n", 1022 | "Epoch 47/50\n", 1023 | " - 0s - loss: 685.8417\n", 1024 | "Epoch 48/50\n", 1025 | " - 0s - loss: 662.0089\n", 1026 | "Epoch 49/50\n", 1027 | " - 0s - loss: 638.6001\n", 1028 | "Epoch 50/50\n", 1029 | " - 0s - loss: 615.3103\n" 1030 | ] 1031 | }, 1032 | { 1033 | "data": { 1034 | "text/plain": [ 1035 | "" 1036 | ] 1037 | }, 1038 | "execution_count": 17, 1039 | "metadata": {}, 1040 | "output_type": "execute_result" 1041 | } 1042 | ], 1043 | "source": [ 1044 | "# fit the model\n", 1045 | "epochs = 50\n", 1046 | "model.fit(X_train, y_train, epochs=epochs, verbose=2)" 1047 | ] 1048 | }, 1049 | { 1050 | "cell_type": "markdown", 1051 | "metadata": {}, 1052 | "source": [ 1053 | "Next we need to evaluate the model on the test data." 1054 | ] 1055 | }, 1056 | { 1057 | "cell_type": "code", 1058 | "execution_count": 18, 1059 | "metadata": {}, 1060 | "outputs": [ 1061 | { 1062 | "name": "stdout", 1063 | "output_type": "stream", 1064 | "text": [ 1065 | "309/309 [==============================] - 0s 99us/step\n" 1066 | ] 1067 | }, 1068 | { 1069 | "data": { 1070 | "text/plain": [ 1071 | "568.6731778734325" 1072 | ] 1073 | }, 1074 | "execution_count": 18, 1075 | "metadata": {}, 1076 | "output_type": "execute_result" 1077 | } 1078 | ], 1079 | "source": [ 1080 | "loss_val = model.evaluate(X_test, y_test)\n", 1081 | "y_pred = model.predict(X_test)\n", 1082 | "loss_val" 1083 | ] 1084 | }, 1085 | { 1086 | "cell_type": "markdown", 1087 | "metadata": {}, 1088 | "source": [ 1089 | "Now we need to compute the mean squared error between the predicted concrete strength and the actual concrete strength." 1090 | ] 1091 | }, 1092 | { 1093 | "cell_type": "markdown", 1094 | "metadata": {}, 1095 | "source": [ 1096 | "Let's import the mean_squared_error function from Scikit-learn." 1097 | ] 1098 | }, 1099 | { 1100 | "cell_type": "code", 1101 | "execution_count": 19, 1102 | "metadata": {}, 1103 | "outputs": [], 1104 | "source": [ 1105 | "from sklearn.metrics import mean_squared_error" 1106 | ] 1107 | }, 1108 | { 1109 | "cell_type": "code", 1110 | "execution_count": 20, 1111 | "metadata": {}, 1112 | "outputs": [ 1113 | { 1114 | "name": "stdout", 1115 | "output_type": "stream", 1116 | "text": [ 1117 | "568.6731721576705 0.0\n" 1118 | ] 1119 | } 1120 | ], 1121 | "source": [ 1122 | "mean_square_error = mean_squared_error(y_test, y_pred)\n", 1123 | "mean = np.mean(mean_square_error)\n", 1124 | "standard_deviation = np.std(mean_square_error)\n", 1125 | "print(mean, standard_deviation)" 1126 | ] 1127 | }, 1128 | { 1129 | "cell_type": "markdown", 1130 | "metadata": {}, 1131 | "source": [ 1132 | "Create a list of 50 mean squared errors and report mean and the standard deviation of the mean squared errors." 
1133 | ] 1134 | }, 1135 | { 1136 | "cell_type": "code", 1137 | "execution_count": 23, 1138 | "metadata": {}, 1139 | "outputs": [ 1140 | { 1141 | "name": "stdout", 1142 | "output_type": "stream", 1143 | "text": [ 1144 | "MSE 1: 91.54214665187601\n", 1145 | "MSE 2: 96.45934940078884\n", 1146 | "MSE 3: 60.99977624146298\n", 1147 | "MSE 4: 52.6089736173068\n", 1148 | "MSE 5: 48.47822881902306\n", 1149 | "MSE 6: 50.00089337987807\n", 1150 | "MSE 7: 50.98678406156768\n", 1151 | "MSE 8: 35.016470418393034\n", 1152 | "MSE 9: 37.91907855530773\n", 1153 | "MSE 10: 40.17911149305819\n", 1154 | "MSE 11: 35.02482645719954\n", 1155 | "MSE 12: 35.78384830264984\n", 1156 | "MSE 13: 39.988689984318505\n", 1157 | "MSE 14: 39.42342661885382\n", 1158 | "MSE 15: 33.71654069770887\n", 1159 | "MSE 16: 30.216898433598885\n", 1160 | "MSE 17: 33.6267665443297\n", 1161 | "MSE 18: 32.813331338579985\n", 1162 | "MSE 19: 31.611103156623717\n", 1163 | "MSE 20: 33.135259801130076\n", 1164 | "MSE 21: 30.4002184945017\n", 1165 | "MSE 22: 31.295121856491928\n", 1166 | "MSE 23: 29.93721978641251\n", 1167 | "MSE 24: 29.820429372941792\n", 1168 | "MSE 25: 32.12473531062549\n", 1169 | "MSE 26: 30.56014739348279\n", 1170 | "MSE 27: 27.039147015914175\n", 1171 | "MSE 28: 26.98878112965803\n", 1172 | "MSE 29: 34.63950805910969\n", 1173 | "MSE 30: 32.09487095928501\n", 1174 | "MSE 31: 28.615838251453386\n", 1175 | "MSE 32: 27.302777200840822\n", 1176 | "MSE 33: 25.54295540473222\n", 1177 | "MSE 34: 30.38003274615143\n", 1178 | "MSE 35: 30.532176292444124\n", 1179 | "MSE 36: 34.315960683483134\n", 1180 | "MSE 37: 26.434616348118457\n", 1181 | "MSE 38: 32.59470261262073\n", 1182 | "MSE 39: 29.724276249462733\n", 1183 | "MSE 40: 26.920120171358672\n", 1184 | "MSE 41: 31.091062045791773\n", 1185 | "MSE 42: 24.654989995616926\n", 1186 | "MSE 43: 27.452373313286543\n", 1187 | "MSE 44: 32.488291595360224\n", 1188 | "MSE 45: 30.40325473427387\n", 1189 | "MSE 46: 30.421345238546724\n", 1190 | "MSE 47: 29.867179105196957\n", 1191 | "MSE 48: 30.329298581120266\n", 1192 | "MSE 49: 31.90319937795497\n", 1193 | "MSE 50: 30.236021875177773\n", 1194 | "\n", 1195 | "\n", 1196 | "Below is the mean and standard deviation of 50 mean squared errors with normalized data. Total number of epochs for each training is: 100\n", 1197 | "\n", 1198 | "Mean: 36.11284302918107\n", 1199 | "Standard Deviation: 13.892652368037062\n" 1200 | ] 1201 | } 1202 | ], 1203 | "source": [ 1204 | "total_mean_squared_errors = 50\n", 1205 | "epochs = 100\n", 1206 | "mean_squared_errors = []\n", 1207 | "for i in range(0, total_mean_squared_errors):\n", 1208 | " X_train, X_test, y_train, y_test = train_test_split(predictors_norm, target, test_size=0.3, random_state=i)\n", 1209 | " model.fit(X_train, y_train, epochs=epochs, verbose=0)\n", 1210 | " MSE = model.evaluate(X_test, y_test, verbose=0)\n", 1211 | " print(\"MSE \"+str(i+1)+\": \"+str(MSE))\n", 1212 | " y_pred = model.predict(X_test)\n", 1213 | " mean_square_error = mean_squared_error(y_test, y_pred)\n", 1214 | " mean_squared_errors.append(mean_square_error)\n", 1215 | "\n", 1216 | "mean_squared_errors = np.array(mean_squared_errors)\n", 1217 | "mean = np.mean(mean_squared_errors)\n", 1218 | "standard_deviation = np.std(mean_squared_errors)\n", 1219 | "\n", 1220 | "print('\\n')\n", 1221 | "print(\"Below is the mean and standard deviation of \" +str(total_mean_squared_errors) + \" mean squared errors with normalized data. 
Total number of epochs for each training is: \" +str(epochs) + \"\\n\")\n", 1222 | "print(\"Mean: \"+str(mean))\n", 1223 | "print(\"Standard Deviation: \"+str(standard_deviation))" 1224 | ] 1225 | }, 1226 | { 1227 | "cell_type": "code", 1228 | "execution_count": null, 1229 | "metadata": {}, 1230 | "outputs": [], 1231 | "source": [] 1232 | } 1233 | ], 1234 | "metadata": { 1235 | "kernelspec": { 1236 | "display_name": "Python 3", 1237 | "language": "python", 1238 | "name": "python3" 1239 | }, 1240 | "language_info": { 1241 | "codemirror_mode": { 1242 | "name": "ipython", 1243 | "version": 3 1244 | }, 1245 | "file_extension": ".py", 1246 | "mimetype": "text/x-python", 1247 | "name": "python", 1248 | "nbconvert_exporter": "python", 1249 | "pygments_lexer": "ipython3", 1250 | "version": "3.6.9" 1251 | } 1252 | }, 1253 | "nbformat": 4, 1254 | "nbformat_minor": 2 1255 | } 1256 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 |

# Deep Insight: A Deep Learning Odyssey

Embark on an epic journey through the boundless realm of Deep Learning, where algorithms learn and machines dream.

View Demo · Report Bug · Request Feature

## Table of Contents

- [About the Project](#about-the-project)
- [Why Deep Learning Matters](#why-deep-learning-matters)
- [Key Features](#key-features)
- [Getting Started](#getting-started)
- [Installation](#installation)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)

## About The Project

### Why Deep Learning Matters

Deep Learning is not just another buzzword; it's a technological revolution that's reshaping industries, enabling autonomous vehicles, diagnosing diseases, and even composing music. Understanding its power and potential is crucial in today's world.

Deep Insight takes you by the hand and guides you through the labyrinth of neural networks, convolutional layers, and recurrent dreams. Whether you're an aspiring AI researcher, a data wizard, or a curious soul, this repository opens doors to:

#### Key Features

- 🧠 **Model Repository**: Dive into a treasure trove of pre-trained models, ready to unleash their magic on your data.
- 📊 **Interactive Visualizations**: Witness the minds of AI come to life through captivating visualizations of model training and prediction.
- 🌐 **Extensive Documentation**: An encyclopedia of deep learning wisdom, with in-depth tutorials, guides, and research papers.
- 💬 **Community Collaboration**: Join a vibrant community of learners and builders, sharing insights and advancing the field together.

## Getting Started

To begin your journey into the depths of Deep Insight, explore the notebooks in this repository.

## Usage

Now that you have Deep Insight at your fingertips, the cosmos of possibilities awaits:

- **Exploration**: Navigate the exploration.ipynb notebook to explore pre-trained models and datasets.
- **Training**: Take control in the training.ipynb notebook, where you can fine-tune models for your unique challenges.
- **Learning**: Immerse yourself in the documentation (docs/) to deepen your understanding of the dark arts of deep learning.

## Contributing

The quest for knowledge never ends, and we welcome fellow adventurers to join us. If you possess insights, ideas, or magic spells to improve Deep Insight, don't hesitate to:

> Open an issue.
> Submit a pull request.

## 🚀 Launch Your Deep Learning Journey

Congratulations, intrepid explorer! 🌟 You've reached the end of this README, but your adventure into the world of Deep Learning is just beginning.

As you embark on this odyssey, remember that the horizons of AI are boundless. What you learn here, the models you craft, and the insights you uncover will shape the future. In the words of Alan Turing, "We can only see a short distance ahead, but we can see plenty there that needs to be done."

So, are you ready to reshape the future? To teach machines to think, dream, and innovate? The path ahead is challenging, but with curiosity as your compass and code as your wand, you're bound for greatness.

**🔮 Embrace the unknown. Delve into the depths. Unleash the power of Deep Learning.**

If you ever need guidance, have questions, or want to share your discoveries, our community of fellow explorers is here to support you.

--------------------------------------------------------------------------------