├── README.md
└── logistic_regression_numpy.ipynb

/README.md:
--------------------------------------------------------------------------------

# Logistic regression using numpy
### Learning method: mini-batch gradient descent
----------

## Quick intro to logistic regression

Logistic regression can be used for categorical classification. In our case, we will focus on binary classification.

To do that, a logistic / sigmoid function will be fitted to the data. This function tends to 1 as z -> +infinity and to 0 as z -> -infinity, and is as follows:

![](https://latex.codecogs.com/gif.latex?h(z)&space;=&space;\frac{1}{1+\exp(-z)})

To obtain a 0 or 1 output, one simply sets a threshold at 0.5: everything below it maps to 0 and everything at or above it maps to 1.

Having a dataset with a series of samples, each with its own features, we can express z in matrix form as follows:

![](https://latex.codecogs.com/gif.latex?z&space;=&space;X\theta)

![](https://latex.codecogs.com/gif.latex?h(X\theta)&space;=&space;\frac{1}{1&space;+&space;\exp(-X\theta)})

where X is the matrix with [samples] rows and [features] columns, and theta is the vector of model parameters (the weights of our regressor).

The objective is to find the theta vector that best fits the data, given X and the labels y in {0, 1}. How? By minimizing the cost function of the problem.

The cost function J of a logistic regression problem is the following:

![](https://latex.codecogs.com/gif.latex?J(\theta)&space;=&space;-\frac{1}{n}&space;\sum_{i&space;=&space;1}^{n}&space;\left(y_i\log(h(X\theta)_i)&space;+&space;(1-y_i)\log(1-h(X\theta)_i)\right))

To find the theta that minimizes the error between the predictions and the real data, an iterative process is applied: the gradient of the cost function is computed, and theta is moved a small step (the learning rate alpha) in the opposite direction. This is called gradient descent:

![](https://latex.codecogs.com/gif.latex?\theta&space;=&space;\theta&space;-&space;\alpha\frac{\partial&space;J(\theta)}{\partial\theta})

with

![](https://latex.codecogs.com/gif.latex?\frac{\partial&space;J(\theta)}{\partial\theta}&space;=&space;\frac{1}{n}&space;X^{T}[h(X\theta)&space;-&space;y])

Different update schedules can be applied as a function of the gradient (once for every data entry, once for the whole dataset...). In this case, the updates are done in small batches, so we will be applying mini-batch gradient descent. A minimal numpy sketch of these formulas closes this README.

--------------------

### Tested with the *Breast Cancer Wisconsin (Diagnostic) Data Set*, included in *sklearn*
- For each cell nucleus, 10 different features are recorded (radius, texture, area, compactness...)
- For each feature, three values appear: mean, error and worst
- This leads to a 30-dimensional classification problem with two categories: benign or malignant cell
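--------------------

### A numpy sketch of the formulas

A minimal sketch of the formulas above, assuming a feature matrix `X` that already includes an intercept column and labels `y` in {0, 1}. The names below are illustrative and independent of the notebook:

```python
import numpy as np

def sigmoid(z):
    # h(z) = 1 / (1 + exp(-z))
    return 1 / (1 + np.exp(-z))

def gradient(theta, X, y):
    # dJ/dtheta = (1/n) * X^T (h(X theta) - y)
    return np.matmul(X.T, sigmoid(np.matmul(X, theta)) - y) / X.shape[0]

# one mini-batch gradient descent step on a toy batch
np.random.seed(0)
X = np.hstack((np.ones((8, 1)), np.random.randn(8, 2)))  # 8 samples: intercept + 2 features
y = (np.random.rand(8, 1) > 0.5).astype(float)           # toy 0/1 labels
theta = np.zeros((3, 1))
alpha = 0.1                                              # learning rate
theta = theta - alpha * gradient(theta, X, y)
predictions = sigmoid(np.matmul(X, theta)) >= 0.5        # threshold at 0.5
```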
--------------------------------------------------------------------------------
/logistic_regression_numpy.ipynb:
--------------------------------------------------------------------------------

{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Logistic regression using numpy\n",
    "\n",
    "### Learning method: mini-batch gradient descent\n",
    "----------\n",
    "\n",
    "## Quick intro to logistic regression\n",
    "\n",
    "Logistic regression can be used for categorical classification. In our case, we will focus on binary classification.\n",
    "\n",
    "A logistic/sigmoid function will be fitted to the data. This function tends to 1 as z -> +infinity and to 0 as z -> -infinity, and is as follows:\n",
    "\n",
    "$$h(z) = \\frac{1}{1 + \\exp(-z)}$$\n",
    "\n",
    "To obtain a 0 or 1 output, one simply sets a threshold at 0.5: everything below it maps to 0 and everything at or above it maps to 1.\n",
    "\n",
    "Having a dataset with a series of samples, each with its own features, we can express z in matrix form as follows:\n",
    "\n",
    "$$z = X\\theta$$\n",
    "\n",
    "$$h(X\\theta) = \\frac{1}{1 + \\exp(-X\\theta)}$$\n",
    "\n",
    "where X is the matrix with [samples] rows and [features] columns, and theta is the vector of model parameters (the weights of our regressor).\n",
    "\n",
    "The objective is to find the theta vector that best fits the data, given X and the labels y in {0, 1}. How? By minimizing the cost function of the problem.\n",
    "\n",
    "The cost function J of a logistic regression problem is the following:\n",
    "\n",
    "$$J(\\theta) = -\\frac{1}{n} \\sum_{i = 1}^{n} \\left( y_i \\log(h(X\\theta)_i) + (1 - y_i) \\log(1 - h(X\\theta)_i) \\right)$$\n",
    "\n",
    "To find the theta that minimizes the error between the predictions and the real data, an iterative process is applied: the gradient of the cost function is computed, and theta is moved a small step (the learning rate $\\alpha$) in the opposite direction. This is called gradient descent:\n",
    "\n",
    "$$\\theta = \\theta - \\alpha \\frac{\\partial J(\\theta)}{\\partial \\theta}$$\n",
    "\n",
    "with\n",
    "\n",
    "$$\\frac{\\partial J(\\theta)}{\\partial \\theta} = \\frac{1}{n} X^{T}[h(X\\theta) - y]$$\n",
    "\n",
    "Different update schedules can be applied as a function of the gradient (once for every data entry, once for the whole dataset...). In this case, the updates are done in small batches, so we will be applying mini-batch gradient descent.\n",
    "\n",
    "--------------------\n",
    "### Tested with the *Breast Cancer Wisconsin (Diagnostic) Data Set*, included in *sklearn* (569 samples)\n",
    "- For each cell nucleus, 10 different features are recorded (radius, texture, area, compactness...)\n",
    "- For each feature, three values appear: mean, error and worst\n",
    "- This leads to a 30-dimensional classification problem with two categories: benign or malignant cell\n",
    "\n",
    "------------------------------------------\n",
    "\n",
    "#### Import libraries "
   ]
  },
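  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the derivative above, the analytic gradient can be compared against a finite-difference approximation of the cost on a tiny random problem. This is a hedged sketch with illustrative names only, independent of the regressor class defined below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def demo_cost(theta, X, y):\n",
    "    # J(theta), averaged over the samples\n",
    "    h = 1 / (1 + np.exp(-np.matmul(X, theta)))\n",
    "    return np.mean(-y * np.log(h) - (1 - y) * np.log(1 - h))\n",
    "\n",
    "def demo_gradient(theta, X, y):\n",
    "    # (1/n) * X^T (h(X theta) - y)\n",
    "    h = 1 / (1 + np.exp(-np.matmul(X, theta)))\n",
    "    return np.matmul(X.T, h - y) / X.shape[0]\n",
    "\n",
    "np.random.seed(0)\n",
    "X_check = np.random.randn(20, 3)\n",
    "y_check = (np.random.rand(20, 1) > 0.5).astype(float)\n",
    "theta_check = np.random.randn(3, 1)\n",
    "eps = 1e-6\n",
    "for j in range(3):\n",
    "    delta = np.zeros((3, 1))\n",
    "    delta[j] = eps\n",
    "    numeric = (demo_cost(theta_check + delta, X_check, y_check)\n",
    "               - demo_cost(theta_check - delta, X_check, y_check)) / (2 * eps)\n",
    "    analytic = demo_gradient(theta_check, X_check, y_check)[j, 0]\n",
    "    print(numeric, analytic)  # the two columns should agree to several decimals"
   ]
  },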
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# For the confusion matrix\n",
    "from sklearn.metrics import confusion_matrix\n",
    "import itertools"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before going into the data:\n",
    " 1. A class will be created containing the regressor and all of its methods."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1 The logistic regressor class and its methods\n",
    "class LogisticReg:\n",
    "    '''\n",
    "    Logistic Regression, by Wladi Arce.\n",
    "    \n",
    "    This class creates an instance of a logistic regressor, which classifies data\n",
    "    into 2 categories, 0 and 1.\n",
    "    '''\n",
    "    def __init__(self, n_features):\n",
    "        # Randomly initialize the weights, one per input feature (intercept included)\n",
    "        self.theta = 0.5 * np.random.randn(n_features, 1)\n",
    "    \n",
    "    def sigmoid(self, x):\n",
    "        '''\n",
    "        Applies the logistic function to x @ theta, returning probabilities in (0, 1)\n",
    "        '''\n",
    "        probability = 1 / (1 + np.exp(-np.matmul(x, self.theta)))\n",
    "        return probability\n",
    "    \n",
    "    def compute_cost(self, x, y):\n",
    "        '''\n",
    "        Cross-entropy cost function, averaged over the samples\n",
    "        '''\n",
    "        cost = np.mean(-y * np.log(self.sigmoid(x)) - (1 - y) * np.log(1 - self.sigmoid(x)))\n",
    "        return cost\n",
    "    \n",
    "    def compute_gradient(self, x, y):\n",
    "        '''\n",
    "        Calculates the gradient of the cost with respect to theta:\n",
    "        (1/n) x^T [h(x theta) - y], one entry per parameter\n",
    "        '''\n",
    "        gradient = np.matmul(x.T, self.sigmoid(x) - y) / x.shape[0]\n",
    "        return gradient\n",
    "\n",
    "    def fit(self, x, y, learning_rate=0.01, batch_size=10, epochs=50):\n",
    "        '''\n",
    "        Fits the regressor to the data using mini-batch gradient descent as follows:\n",
    "        \n",
    "        x\n",
    "        y\n",
    "        learning_rate (default 0.01)\n",
    "        batch_size (default 10)\n",
    "        epochs (default 50)\n",
    "        \n",
    "        compute number of batches\n",
    "        for each epoch:\n",
    "            shuffle dataset\n",
    "            for i in number_of_batches:\n",
    "                x_batch = select [batch_size] of the features dataset\n",
    "                y_batch = select [batch_size] of the output dataset\n",
    "                compute gradient with x_batch, y_batch\n",
    "                apply gradient descent to update the parameters\n",
    "        '''\n",
    "        num_samples = x.shape[0]\n",
    "        batches_per_epoch = num_samples // batch_size\n",
    "        N_iterations = batches_per_epoch * epochs\n",
    "        start = 0\n",
    "        end = 0\n",
    "        \n",
    "        for step in range(N_iterations):\n",
    "            # if new epoch, shuffle the data\n",
    "            if step % batches_per_epoch == 0:\n",
    "                indexes = np.random.permutation(x.shape[0])\n",
    "                y = y[indexes]\n",
    "                x = x[indexes]\n",
    "            \n",
    "            # create a mini-batch of data to train on\n",
    "            end = start + batch_size\n",
    "            if end >= num_samples:\n",
    "                end = num_samples\n",
    "            x_batch = x[start:end, :]\n",
    "            y_batch = y[start:end]\n",
    "            start = 0 if end >= num_samples else end\n",
    "\n",
    "            # update the parameters with a step against the mini-batch gradient\n",
    "            self.theta = self.theta - learning_rate * self.compute_gradient(x_batch, y_batch)\n",
    "        training_cost = self.compute_cost(x, y)\n",
    "        print('training cost: %f' % training_cost)\n",
    "    \n",
    "    def predict(self, x):\n",
    "        '''\n",
    "        Applies the sigmoid and converts the probabilities into binary values,\n",
    "        with the threshold at 0.5\n",
    "        '''\n",
    "        return self.sigmoid(x) >= 0.5"
   ]
  },
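  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on to the real dataset, a quick smoke test of the class on synthetic, linearly separable data. This is illustrative only; the names below are made up for this check and are not used later:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build a toy problem whose labels are generated by a known theta\n",
    "np.random.seed(1)\n",
    "x_demo = np.hstack((np.ones((200, 1)), np.random.randn(200, 2)))  # intercept + 2 features\n",
    "theta_true = np.array([[0.5], [2.0], [-3.0]])\n",
    "y_demo = (np.matmul(x_demo, theta_true) > 0).astype(float)\n",
    "\n",
    "# The regressor should reach a low cost and a high accuracy on this data\n",
    "demo_classifier = LogisticReg(x_demo.shape[1])\n",
    "demo_classifier.fit(x_demo, y_demo, learning_rate=0.1, epochs=20)\n",
    "print('accuracy:', np.mean(demo_classifier.predict(x_demo) == y_demo))"
   ]
  },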
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " 2. Some auxiliary functions to analyze its performance will be included: accuracy and plotting of the confusion matrix (sklearn is used for the latter)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2 The confusion matrix and the accuracy calculator\n",
    "\n",
    "def compute_accuracy(y_real, y_pred):\n",
    "    '''\n",
    "    Returns the fraction of predicted values that are equal to the real ones\n",
    "    '''\n",
    "    correct = y_real == y_pred\n",
    "    return np.sum(correct) / correct.shape[0]\n",
    "\n",
    "# This function has been taken from the sklearn documentation\n",
    "def plot_confusion_matrix(cm, classes,\n",
    "                          normalize=False,\n",
    "                          title='Confusion matrix',\n",
    "                          cmap=plt.cm.Blues):\n",
    "    \"\"\"\n",
    "    This function prints and plots the confusion matrix.\n",
    "    Normalization can be applied by setting `normalize=True`.\n",
    "    \"\"\"\n",
    "    if normalize:\n",
    "        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n",
    "        print(\"Normalized confusion matrix\")\n",
    "    else:\n",
    "        print('Confusion matrix, without normalization')\n",
    "\n",
    "    print(cm)\n",
    "\n",
    "    plt.imshow(cm, interpolation='nearest', cmap=cmap)\n",
    "    plt.title(title)\n",
    "    plt.colorbar()\n",
    "    tick_marks = np.arange(len(classes))\n",
    "    plt.xticks(tick_marks, classes, rotation=45)\n",
    "    plt.yticks(tick_marks, classes)\n",
    "\n",
    "    fmt = '.2f' if normalize else 'd'\n",
    "    thresh = cm.max() / 2.\n",
    "    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n",
    "        plt.text(j, i, format(cm[i, j], fmt),\n",
    "                 horizontalalignment=\"center\",\n",
    "                 color=\"white\" if cm[i, j] > thresh else \"black\")\n",
    "\n",
    "    plt.tight_layout()\n",
    "    plt.ylabel('True label')\n",
    "    plt.xlabel('Predicted label')"
   ]
  },
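  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A tiny, hedged illustration of `compute_accuracy` on made-up labels (the real evaluation happens in Step 3 below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 3 of these 4 made-up predictions match, so the expected result is 0.75\n",
    "y_real_demo = np.array([[1], [0], [1], [1]])\n",
    "y_pred_demo = np.array([[1], [0], [0], [1]])\n",
    "print(compute_accuracy(y_real_demo, y_pred_demo))"
   ]
  },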
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Everything is ready, let's try to model something!\n",
    "\n",
    "## Step 1: prepare the data\n",
    "\n",
    "The dataset is imported from sklearn. Data and targets are assigned and the set is split into training and test sets.\n",
    "\n",
    "The features are also scaled by removing their mean and dividing by their standard deviation.\n",
    "\n",
    "Finally, since the model includes a bias weight, an intercept term (a constant 1) is added at the beginning of each data sample."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_breast_cancer\n",
    "\n",
    "# load the dataset\n",
    "data = load_breast_cancer()\n",
    "x = data.data\n",
    "y = data.target\n",
    "\n",
    "# split into training and test sets\n",
    "N_train = int(0.8 * x.shape[0])\n",
    "\n",
    "x_train = x[:N_train, :]\n",
    "y_train = np.reshape(y[:N_train], (-1, 1))\n",
    "x_test = x[N_train:, :]\n",
    "y_test = np.reshape(y[N_train:], (-1, 1))\n",
    "\n",
    "# scale each feature by removing its mean and dividing by its standard deviation\n",
    "# (the training-set statistics are also applied to the test set, so no test\n",
    "# information leaks into the preprocessing)\n",
    "train_mean = np.average(x_train, 0)\n",
    "train_std = np.std(x_train, 0)\n",
    "x_train_scaled = (x_train - train_mean) / train_std\n",
    "x_test_scaled = (x_test - train_mean) / train_std\n",
    "\n",
    "# Add intercept terms\n",
    "x_train_scaled = np.hstack((np.ones((x_train_scaled.shape[0], 1)), x_train_scaled))\n",
    "x_test_scaled = np.hstack((np.ones((x_test_scaled.shape[0], 1)), x_test_scaled))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2: fit the regression model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "training cost: 0.225676\n"
     ]
    }
   ],
   "source": [
    "# An instance of the regression model is created, and the fit method is run with the train set\n",
    "\n",
    "classifier = LogisticReg(x_train_scaled.shape[1])\n",
    "classifier.fit(x_train_scaled, y_train)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 3: check its accuracy against the test data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test cost: 0.326889553023996\n",
      "Accuracy on test set: 0.87\n",
      "Confusion matrix, without normalization\n",
      "[[26 0]\n",
      " [15 73]]\n"
     ]
    },
    {
     "data": {
      "image/png": "(base64-encoded PNG of the confusion-matrix plot omitted)",
      "text/plain": [
       ""
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# With the model fit, values for the test set will be predicted and\n",
    "# compared against the real output. The accuracy will be computed,\n",
    "# and the confusion matrix will show the performance in a very visual way\n",
    "\n",
    "# PREDICTION\n",
    "y_pred = classifier.predict(x_test_scaled)\n",
    "\n",
    "# TEST SET COST\n",
    "test_cost = classifier.compute_cost(x_test_scaled, y_test)\n",
    "print('Test cost: ', test_cost)\n",
    "\n",
    "# ACCURACY\n",
    "print(\"Accuracy on test set: {:.2f}\".format(compute_accuracy(y_test, y_pred)))\n",
    "\n",
    "# CONFUSION MATRIX\n",
    "# In load_breast_cancer, target 0 = malignant and 1 = benign, so the class\n",
    "# names must be listed in that order\n",
    "labels = ['Malignant', 'Benign']\n",
    "cnf_matrix = confusion_matrix(y_test, y_pred)\n",
    "np.set_printoptions(precision=2)\n",
    "plt.figure()\n",
    "plot_confusion_matrix(cnf_matrix, classes=labels, title='Confusion matrix')\n",
    "plt.show()\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
--------------------------------------------------------------------------------