├── Car Brand Classifier And Deployment
│   ├── Procfile
│   ├── README.md
│   ├── Transfer Learning Resnet 50.ipynb
│   ├── app.py
│   ├── model-Resnet-50-h5 (Download Link).txt
│   ├── requirements.txt
│   ├── static
│   │   ├── css
│   │   │   └── main.css
│   │   └── js
│   │       └── main.js
│   └── templates
│       ├── base.html
│       └── index.html
├── Compare 2 Images using OpenCV and PIL
│   ├── Post.jpg
│   ├── Pre.jpg
│   ├── Readme.md
│   ├── Screenshot1.PNG
│   ├── Screenshot2.PNG
│   ├── Screenshot3.PNG
│   ├── Screenshot4.PNG
│   └── compare-2-images.ipynb
├── Covid19 FaceMask Detector (CNN & OpenCV)
│   ├── Readme.md
│   ├── face_mask_detection.ipynb
│   ├── haarcascade_frontalface_default.xml
│   ├── man-mask-protective.jpg
│   ├── mask.py
│   ├── video.mp4
│   ├── video1.mp4
│   ├── video2.mp4
│   └── women with mask.jpg
├── Image Background Remover App
│   ├── InputImg.jpg
│   ├── OutputImg.png
│   ├── README.md
│   ├── image_background_remover.py
│   └── requirements.txt
├── Image Classifier Using Resnet50
│   ├── README.md
│   ├── Screenshot1.PNG
│   ├── Screenshot2.PNG
│   ├── image-classifier-using-resnet50.ipynb
│   └── images
│       ├── Image1.jpg
│       ├── Image3.jpg
│       ├── Scooter.jpg
│       ├── banana.jpg
│       ├── car.jpg
│       ├── image10.jpg
│       ├── image11.jpg
│       ├── image2.jpg
│       ├── image4.jpg
│       ├── image6.jpg
│       ├── image8.jpg
│       └── image9.jpg
├── OpenCV Face Detection
│   ├── Face+Eyes_detection_App.py
│   ├── FaceDetection_App.py
│   ├── Face_detection_using_webcam.py
│   ├── Output.PNG
│   └── README.md
├── Readme.md
├── Scraping Text Data from Image
│   ├── InvoiceToText Recording.gif
│   ├── OCR_Invoice_to_Text.py
│   ├── Readme.md
│   └── invoice4.PNG
└── Text Recognizer Android App (FireBase + AutoML)
    ├── App Demo Video.gif
    ├── Readme.md
    ├── ScreenShot.jpg
    ├── TextRecognizer Full Project.zip
    ├── TextRecognizer.apk
    └── output-metadata.json
/Car Brand Classifier And Deployment/Procfile:
--------------------------------------------------------------------------------
1 | web: gunicorn app:main
--------------------------------------------------------------------------------
/Car Brand Classifier And Deployment/README.md:
--------------------------------------------------------------------------------
1 | # (Deep Learning) Car-Brand-Classifier
2 | The "Car Brand Classifier App" classifies cars of different brands using transfer learning with ResNet-50. The model recognizes three car brands (Audi, Lamborghini, and Mercedes). The training set contains 80 images and the validation set 52 images.
3 | 
4 | 
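As a rough sanity check of the setup described above (assuming the notebook's configuration: ResNet-50 with `include_top=False` on 224×224 inputs, followed by `Flatten()` and a 3-class softmax `Dense` layer), the number of newly trained parameters in the classification head can be computed by hand:

```python
# ResNet-50 without its top layer maps a 224x224x3 image to a
# 7x7x2048 feature map; Flatten() turns that into one feature vector.
features = 7 * 7 * 2048        # 100352 inputs to the new Dense head
num_classes = 3                # Audi / Lamborghini / Mercedes
# Dense(3, activation='softmax') adds one weight per (input, class)
# pair plus one bias per class.
head_params = features * num_classes + num_classes
print(head_params)  # 301059 newly trained parameters
```

Everything else in the network keeps its frozen ImageNet weights, which is why training is feasible with so few images.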
5 | 
6 | ### Screen Recording of the Live App
7 | [![Demo Doccou alpha](https://github.com/amark720/Amar-kumar/blob/master/ScreenShots/Car%20Brand%20Classifier%20GIF.gif)](http://ec2-18-220-203-245.us-east-2.compute.amazonaws.com:8080)
8 | 
9 | ### Web App and Deployment
10 | 
11 | This project uses Flask for the web app, and deployment is done on an AWS EC2 instance.
12 | 
13 | 
14 | ### ScreenShots:
15 | 
16 | #### Landing Page-
17 | 
18 | 
19 | 
20 | #### Result-
21 | 
22 | 
23 | 
24 | #### Improvements
25 | * We used a very small number of images to train the model, so it can be improved further by adding more images to the training set.
26 | * More classes of images can be added to help predict cars of many more brands.
27 | * Adding more layers and training epochs to the neural network will further improve the accuracy.
28 | 
29 | 
30 | #### Feel free to contact me at➛ amark720@gmail.com for any help related to this project!
31 | 
37 | 
--------------------------------------------------------------------------------
/Car Brand Classifier And Deployment/Transfer Learning Resnet 50.ipynb:
--------------------------------------------------------------------------------
1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## Transfer Learning with ResNet 50 using Keras" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "Please download the dataset from the below url" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 67, 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "# import the libraries as shown below\n", 24 | "\n", 25 | "from tensorflow.keras.layers import Input, Lambda, Dense, Flatten\n", 26 | "from tensorflow.keras.models import Model\n", 27 | "from tensorflow.keras.applications.resnet50 import ResNet50\n", 28 | "#from keras.applications.vgg16 import VGG16\n", 29 | "from tensorflow.keras.applications.resnet50 import preprocess_input\n", 30 | "from 
tensorflow.keras.preprocessing import image\n", 31 | "from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img\n", 32 | "from tensorflow.keras.models import Sequential\n", 33 | "import numpy as np\n", 34 | "from glob import glob\n", 35 | "import matplotlib.pyplot as plt" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": 68, 41 | "metadata": {}, 42 | "outputs": [], 43 | "source": [ 44 | "# re-size all the images to this\n", 45 | "IMAGE_SIZE = [224, 224]\n", 46 | "\n", 47 | "train_path = 'Datasets/train'\n", 48 | "valid_path = 'Datasets/test'\n" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": 69, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "# Import the ResNet50 model as shown below and add a preprocessing layer to the front of ResNet\n", 58 | "# Here we will be using imagenet weights\n", 59 | "\n", 60 | "resnet = ResNet50(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)\n", 61 | "\n", 62 | "\n" 63 | ] 64 | }, 65 | { 66 | "cell_type": "code", 67 | "execution_count": 70, 68 | "metadata": {}, 69 | "outputs": [], 70 | "source": [ 71 | "# don't train existing weights\n", 72 | "for layer in resnet.layers:\n", 73 | " layer.trainable = False" 74 | ] 75 | }, 76 | { 77 | "cell_type": "code", 78 | "execution_count": 71, 79 | "metadata": {}, 80 | "outputs": [ 81 | { 82 | "data": { 83 | "text/plain": [ 84 | "['Datasets/train\\\\audi',\n", 85 | " 'Datasets/train\\\\lamborghini',\n", 86 | " 'Datasets/train\\\\mercedes']" 87 | ] 88 | }, 89 | "execution_count": 71, 90 | "metadata": {}, 91 | "output_type": "execute_result" 92 | } 93 | ], 94 | "source": [ 95 | " # useful for getting number of output classes\n", 96 | "folders = glob('Datasets/train/*')\n", 97 | "folders" 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": 72, 103 | "metadata": {}, 104 | "outputs": [], 105 | "source": [ 106 | "# our layers - you can add more if you want\n", 107 | "x = 
Flatten()(resnet.output)" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": 73, 113 | "metadata": {}, 114 | "outputs": [], 115 | "source": [ 116 | "prediction = Dense(len(folders), activation='softmax')(x)\n", 117 | "\n", 118 | "# create a model object\n", 119 | "model = Model(inputs=resnet.input, outputs=prediction)" 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "execution_count": 74, 125 | "metadata": {}, 126 | "outputs": [ 127 | { 128 | "name": "stdout", 129 | "output_type": "stream", 130 | "text": [ 131 | "Model: \"model_2\"\n", 132 | "__________________________________________________________________________________________________\n", 133 | "Layer (type) Output Shape Param # Connected to \n", 134 | "==================================================================================================\n", 135 | "input_3 (InputLayer) [(None, 224, 224, 3) 0 \n", 136 | "__________________________________________________________________________________________________\n", 137 | "conv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_3[0][0] \n", 138 | "__________________________________________________________________________________________________\n", 139 | "conv1_conv (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0] \n", 140 | "__________________________________________________________________________________________________\n", 141 | "conv1_bn (BatchNormalization) (None, 112, 112, 64) 256 conv1_conv[0][0] \n", 142 | "__________________________________________________________________________________________________\n", 143 | "conv1_relu (Activation) (None, 112, 112, 64) 0 conv1_bn[0][0] \n", 144 | "__________________________________________________________________________________________________\n", 145 | "pool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 conv1_relu[0][0] \n", 146 | "__________________________________________________________________________________________________\n", 147 | "pool1_pool (MaxPooling2D) (None, 
56, 56, 64) 0 pool1_pad[0][0] \n", 148 | "__________________________________________________________________________________________________\n", 149 | "conv2_block1_1_conv (Conv2D) (None, 56, 56, 64) 4160 pool1_pool[0][0] \n", 150 | "__________________________________________________________________________________________________\n", 151 | "conv2_block1_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block1_1_conv[0][0] \n", 152 | "__________________________________________________________________________________________________\n", 153 | "conv2_block1_1_relu (Activation (None, 56, 56, 64) 0 conv2_block1_1_bn[0][0] \n", 154 | "__________________________________________________________________________________________________\n", 155 | "conv2_block1_2_conv (Conv2D) (None, 56, 56, 64) 36928 conv2_block1_1_relu[0][0] \n", 156 | "__________________________________________________________________________________________________\n", 157 | "conv2_block1_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block1_2_conv[0][0] \n", 158 | "__________________________________________________________________________________________________\n", 159 | "conv2_block1_2_relu (Activation (None, 56, 56, 64) 0 conv2_block1_2_bn[0][0] \n", 160 | "__________________________________________________________________________________________________\n", 161 | "conv2_block1_0_conv (Conv2D) (None, 56, 56, 256) 16640 pool1_pool[0][0] \n", 162 | "__________________________________________________________________________________________________\n", 163 | "conv2_block1_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block1_2_relu[0][0] \n", 164 | "__________________________________________________________________________________________________\n", 165 | "conv2_block1_0_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block1_0_conv[0][0] \n", 166 | "__________________________________________________________________________________________________\n", 167 | "conv2_block1_3_bn (BatchNormali (None, 56, 56, 
256) 1024 conv2_block1_3_conv[0][0] \n", 168 | "__________________________________________________________________________________________________\n", 169 | "conv2_block1_add (Add) (None, 56, 56, 256) 0 conv2_block1_0_bn[0][0] \n", 170 | " conv2_block1_3_bn[0][0] \n", 171 | "__________________________________________________________________________________________________\n", 172 | "conv2_block1_out (Activation) (None, 56, 56, 256) 0 conv2_block1_add[0][0] \n", 173 | "__________________________________________________________________________________________________\n", 174 | "conv2_block2_1_conv (Conv2D) (None, 56, 56, 64) 16448 conv2_block1_out[0][0] \n", 175 | "__________________________________________________________________________________________________\n", 176 | "conv2_block2_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block2_1_conv[0][0] \n", 177 | "__________________________________________________________________________________________________\n", 178 | "conv2_block2_1_relu (Activation (None, 56, 56, 64) 0 conv2_block2_1_bn[0][0] \n", 179 | "__________________________________________________________________________________________________\n", 180 | "conv2_block2_2_conv (Conv2D) (None, 56, 56, 64) 36928 conv2_block2_1_relu[0][0] \n", 181 | "__________________________________________________________________________________________________\n", 182 | "conv2_block2_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block2_2_conv[0][0] \n", 183 | "__________________________________________________________________________________________________\n", 184 | "conv2_block2_2_relu (Activation (None, 56, 56, 64) 0 conv2_block2_2_bn[0][0] \n", 185 | "__________________________________________________________________________________________________\n", 186 | "conv2_block2_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block2_2_relu[0][0] \n", 187 | "__________________________________________________________________________________________________\n", 188 | 
"conv2_block2_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block2_3_conv[0][0] \n", 189 | "__________________________________________________________________________________________________\n", 190 | "conv2_block2_add (Add) (None, 56, 56, 256) 0 conv2_block1_out[0][0] \n", 191 | " conv2_block2_3_bn[0][0] \n", 192 | "__________________________________________________________________________________________________\n", 193 | "conv2_block2_out (Activation) (None, 56, 56, 256) 0 conv2_block2_add[0][0] \n", 194 | "__________________________________________________________________________________________________\n", 195 | "conv2_block3_1_conv (Conv2D) (None, 56, 56, 64) 16448 conv2_block2_out[0][0] \n", 196 | "__________________________________________________________________________________________________\n", 197 | "conv2_block3_1_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block3_1_conv[0][0] \n", 198 | "__________________________________________________________________________________________________\n", 199 | "conv2_block3_1_relu (Activation (None, 56, 56, 64) 0 conv2_block3_1_bn[0][0] \n", 200 | "__________________________________________________________________________________________________\n", 201 | "conv2_block3_2_conv (Conv2D) (None, 56, 56, 64) 36928 conv2_block3_1_relu[0][0] \n", 202 | "__________________________________________________________________________________________________\n", 203 | "conv2_block3_2_bn (BatchNormali (None, 56, 56, 64) 256 conv2_block3_2_conv[0][0] \n", 204 | "__________________________________________________________________________________________________\n", 205 | "conv2_block3_2_relu (Activation (None, 56, 56, 64) 0 conv2_block3_2_bn[0][0] \n", 206 | "__________________________________________________________________________________________________\n", 207 | "conv2_block3_3_conv (Conv2D) (None, 56, 56, 256) 16640 conv2_block3_2_relu[0][0] \n", 208 | 
"__________________________________________________________________________________________________\n", 209 | "conv2_block3_3_bn (BatchNormali (None, 56, 56, 256) 1024 conv2_block3_3_conv[0][0] \n", 210 | "__________________________________________________________________________________________________\n", 211 | "conv2_block3_add (Add) (None, 56, 56, 256) 0 conv2_block2_out[0][0] \n", 212 | " conv2_block3_3_bn[0][0] \n", 213 | "__________________________________________________________________________________________________\n", 214 | "conv2_block3_out (Activation) (None, 56, 56, 256) 0 conv2_block3_add[0][0] \n", 215 | "__________________________________________________________________________________________________\n", 216 | "conv3_block1_1_conv (Conv2D) (None, 28, 28, 128) 32896 conv2_block3_out[0][0] \n", 217 | "__________________________________________________________________________________________________\n", 218 | "conv3_block1_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block1_1_conv[0][0] \n", 219 | "__________________________________________________________________________________________________\n", 220 | "conv3_block1_1_relu (Activation (None, 28, 28, 128) 0 conv3_block1_1_bn[0][0] \n", 221 | "__________________________________________________________________________________________________\n", 222 | "conv3_block1_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block1_1_relu[0][0] \n", 223 | "__________________________________________________________________________________________________\n", 224 | "conv3_block1_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block1_2_conv[0][0] \n", 225 | "__________________________________________________________________________________________________\n", 226 | "conv3_block1_2_relu (Activation (None, 28, 28, 128) 0 conv3_block1_2_bn[0][0] \n", 227 | "__________________________________________________________________________________________________\n", 228 | "conv3_block1_0_conv (Conv2D) (None, 28, 28, 
512) 131584 conv2_block3_out[0][0] \n", 229 | "__________________________________________________________________________________________________\n", 230 | "conv3_block1_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block1_2_relu[0][0] \n", 231 | "__________________________________________________________________________________________________\n", 232 | "conv3_block1_0_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block1_0_conv[0][0] \n", 233 | "__________________________________________________________________________________________________\n", 234 | "conv3_block1_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block1_3_conv[0][0] \n", 235 | "__________________________________________________________________________________________________\n", 236 | "conv3_block1_add (Add) (None, 28, 28, 512) 0 conv3_block1_0_bn[0][0] \n", 237 | " conv3_block1_3_bn[0][0] \n", 238 | "__________________________________________________________________________________________________\n", 239 | "conv3_block1_out (Activation) (None, 28, 28, 512) 0 conv3_block1_add[0][0] \n", 240 | "__________________________________________________________________________________________________\n", 241 | "conv3_block2_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block1_out[0][0] \n", 242 | "__________________________________________________________________________________________________\n", 243 | "conv3_block2_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block2_1_conv[0][0] \n", 244 | "__________________________________________________________________________________________________\n", 245 | "conv3_block2_1_relu (Activation (None, 28, 28, 128) 0 conv3_block2_1_bn[0][0] \n", 246 | "__________________________________________________________________________________________________\n", 247 | "conv3_block2_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block2_1_relu[0][0] \n", 248 | "__________________________________________________________________________________________________\n", 249 
| "conv3_block2_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block2_2_conv[0][0] \n", 250 | "__________________________________________________________________________________________________\n", 251 | "conv3_block2_2_relu (Activation (None, 28, 28, 128) 0 conv3_block2_2_bn[0][0] \n", 252 | "__________________________________________________________________________________________________\n", 253 | "conv3_block2_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block2_2_relu[0][0] \n", 254 | "__________________________________________________________________________________________________\n", 255 | "conv3_block2_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block2_3_conv[0][0] \n", 256 | "__________________________________________________________________________________________________\n", 257 | "conv3_block2_add (Add) (None, 28, 28, 512) 0 conv3_block1_out[0][0] \n", 258 | " conv3_block2_3_bn[0][0] \n", 259 | "__________________________________________________________________________________________________\n", 260 | "conv3_block2_out (Activation) (None, 28, 28, 512) 0 conv3_block2_add[0][0] \n", 261 | "__________________________________________________________________________________________________\n", 262 | "conv3_block3_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block2_out[0][0] \n", 263 | "__________________________________________________________________________________________________\n", 264 | "conv3_block3_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block3_1_conv[0][0] \n", 265 | "__________________________________________________________________________________________________\n", 266 | "conv3_block3_1_relu (Activation (None, 28, 28, 128) 0 conv3_block3_1_bn[0][0] \n", 267 | "__________________________________________________________________________________________________\n", 268 | "conv3_block3_2_conv (Conv2D) (None, 28, 28, 128) 147584 conv3_block3_1_relu[0][0] \n", 269 | 
"__________________________________________________________________________________________________\n", 270 | "conv3_block3_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block3_2_conv[0][0] \n", 271 | "__________________________________________________________________________________________________\n", 272 | "conv3_block3_2_relu (Activation (None, 28, 28, 128) 0 conv3_block3_2_bn[0][0] \n", 273 | "__________________________________________________________________________________________________\n", 274 | "conv3_block3_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block3_2_relu[0][0] \n", 275 | "__________________________________________________________________________________________________\n", 276 | "conv3_block3_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block3_3_conv[0][0] \n", 277 | "__________________________________________________________________________________________________\n", 278 | "conv3_block3_add (Add) (None, 28, 28, 512) 0 conv3_block2_out[0][0] \n", 279 | " conv3_block3_3_bn[0][0] \n", 280 | "__________________________________________________________________________________________________\n", 281 | "conv3_block3_out (Activation) (None, 28, 28, 512) 0 conv3_block3_add[0][0] \n", 282 | "__________________________________________________________________________________________________\n", 283 | "conv3_block4_1_conv (Conv2D) (None, 28, 28, 128) 65664 conv3_block3_out[0][0] \n", 284 | "__________________________________________________________________________________________________\n", 285 | "conv3_block4_1_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block4_1_conv[0][0] \n", 286 | "__________________________________________________________________________________________________\n", 287 | "conv3_block4_1_relu (Activation (None, 28, 28, 128) 0 conv3_block4_1_bn[0][0] \n", 288 | "__________________________________________________________________________________________________\n", 289 | "conv3_block4_2_conv (Conv2D) (None, 28, 28, 128) 
147584 conv3_block4_1_relu[0][0] \n", 290 | "__________________________________________________________________________________________________\n", 291 | "conv3_block4_2_bn (BatchNormali (None, 28, 28, 128) 512 conv3_block4_2_conv[0][0] \n", 292 | "__________________________________________________________________________________________________\n", 293 | "conv3_block4_2_relu (Activation (None, 28, 28, 128) 0 conv3_block4_2_bn[0][0] \n", 294 | "__________________________________________________________________________________________________\n", 295 | "conv3_block4_3_conv (Conv2D) (None, 28, 28, 512) 66048 conv3_block4_2_relu[0][0] \n", 296 | "__________________________________________________________________________________________________\n", 297 | "conv3_block4_3_bn (BatchNormali (None, 28, 28, 512) 2048 conv3_block4_3_conv[0][0] \n", 298 | "__________________________________________________________________________________________________\n", 299 | "conv3_block4_add (Add) (None, 28, 28, 512) 0 conv3_block3_out[0][0] \n", 300 | " conv3_block4_3_bn[0][0] \n", 301 | "__________________________________________________________________________________________________\n", 302 | "conv3_block4_out (Activation) (None, 28, 28, 512) 0 conv3_block4_add[0][0] \n", 303 | "__________________________________________________________________________________________________\n", 304 | "conv4_block1_1_conv (Conv2D) (None, 14, 14, 256) 131328 conv3_block4_out[0][0] \n", 305 | "__________________________________________________________________________________________________\n", 306 | "conv4_block1_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block1_1_conv[0][0] \n", 307 | "__________________________________________________________________________________________________\n", 308 | "conv4_block1_1_relu (Activation (None, 14, 14, 256) 0 conv4_block1_1_bn[0][0] \n", 309 | "__________________________________________________________________________________________________\n", 310 | 
"conv4_block1_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block1_1_relu[0][0] \n", 311 | "__________________________________________________________________________________________________\n", 312 | "conv4_block1_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block1_2_conv[0][0] \n", 313 | "__________________________________________________________________________________________________\n", 314 | "conv4_block1_2_relu (Activation (None, 14, 14, 256) 0 conv4_block1_2_bn[0][0] \n", 315 | "__________________________________________________________________________________________________\n", 316 | "conv4_block1_0_conv (Conv2D) (None, 14, 14, 1024) 525312 conv3_block4_out[0][0] \n", 317 | "__________________________________________________________________________________________________\n", 318 | "conv4_block1_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block1_2_relu[0][0] \n", 319 | "__________________________________________________________________________________________________\n", 320 | "conv4_block1_0_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block1_0_conv[0][0] \n", 321 | "__________________________________________________________________________________________________\n", 322 | "conv4_block1_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block1_3_conv[0][0] \n", 323 | "__________________________________________________________________________________________________\n", 324 | "conv4_block1_add (Add) (None, 14, 14, 1024) 0 conv4_block1_0_bn[0][0] \n", 325 | " conv4_block1_3_bn[0][0] \n", 326 | "__________________________________________________________________________________________________\n", 327 | "conv4_block1_out (Activation) (None, 14, 14, 1024) 0 conv4_block1_add[0][0] \n", 328 | "__________________________________________________________________________________________________\n", 329 | "conv4_block2_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block1_out[0][0] \n", 330 | 
"__________________________________________________________________________________________________\n", 331 | "conv4_block2_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block2_1_conv[0][0] \n", 332 | "__________________________________________________________________________________________________\n", 333 | "conv4_block2_1_relu (Activation (None, 14, 14, 256) 0 conv4_block2_1_bn[0][0] \n", 334 | "__________________________________________________________________________________________________\n", 335 | "conv4_block2_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block2_1_relu[0][0] \n", 336 | "__________________________________________________________________________________________________\n", 337 | "conv4_block2_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block2_2_conv[0][0] \n", 338 | "__________________________________________________________________________________________________\n", 339 | "conv4_block2_2_relu (Activation (None, 14, 14, 256) 0 conv4_block2_2_bn[0][0] \n", 340 | "__________________________________________________________________________________________________\n", 341 | "conv4_block2_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block2_2_relu[0][0] \n", 342 | "__________________________________________________________________________________________________\n", 343 | "conv4_block2_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block2_3_conv[0][0] \n", 344 | "__________________________________________________________________________________________________\n", 345 | "conv4_block2_add (Add) (None, 14, 14, 1024) 0 conv4_block1_out[0][0] \n", 346 | " conv4_block2_3_bn[0][0] \n", 347 | "__________________________________________________________________________________________________\n", 348 | "conv4_block2_out (Activation) (None, 14, 14, 1024) 0 conv4_block2_add[0][0] \n", 349 | "__________________________________________________________________________________________________\n", 350 | "conv4_block3_1_conv (Conv2D) (None, 
14, 14, 256) 262400 conv4_block2_out[0][0] \n", 351 | "__________________________________________________________________________________________________\n", 352 | "conv4_block3_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block3_1_conv[0][0] \n", 353 | "__________________________________________________________________________________________________\n", 354 | "conv4_block3_1_relu (Activation (None, 14, 14, 256) 0 conv4_block3_1_bn[0][0] \n", 355 | "__________________________________________________________________________________________________\n", 356 | "conv4_block3_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block3_1_relu[0][0] \n", 357 | "__________________________________________________________________________________________________\n", 358 | "conv4_block3_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block3_2_conv[0][0] \n", 359 | "__________________________________________________________________________________________________\n", 360 | "conv4_block3_2_relu (Activation (None, 14, 14, 256) 0 conv4_block3_2_bn[0][0] \n", 361 | "__________________________________________________________________________________________________\n", 362 | "conv4_block3_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block3_2_relu[0][0] \n", 363 | "__________________________________________________________________________________________________\n", 364 | "conv4_block3_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block3_3_conv[0][0] \n", 365 | "__________________________________________________________________________________________________\n", 366 | "conv4_block3_add (Add) (None, 14, 14, 1024) 0 conv4_block2_out[0][0] \n", 367 | " conv4_block3_3_bn[0][0] \n", 368 | "__________________________________________________________________________________________________\n", 369 | "conv4_block3_out (Activation) (None, 14, 14, 1024) 0 conv4_block3_add[0][0] \n", 370 | 
"__________________________________________________________________________________________________\n", 371 | "conv4_block4_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block3_out[0][0] \n", 372 | "__________________________________________________________________________________________________\n", 373 | "conv4_block4_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block4_1_conv[0][0] \n", 374 | "__________________________________________________________________________________________________\n", 375 | "conv4_block4_1_relu (Activation (None, 14, 14, 256) 0 conv4_block4_1_bn[0][0] \n", 376 | "__________________________________________________________________________________________________\n", 377 | "conv4_block4_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block4_1_relu[0][0] \n", 378 | "__________________________________________________________________________________________________\n", 379 | "conv4_block4_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block4_2_conv[0][0] \n", 380 | "__________________________________________________________________________________________________\n", 381 | "conv4_block4_2_relu (Activation (None, 14, 14, 256) 0 conv4_block4_2_bn[0][0] \n", 382 | "__________________________________________________________________________________________________\n", 383 | "conv4_block4_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block4_2_relu[0][0] \n", 384 | "__________________________________________________________________________________________________\n", 385 | "conv4_block4_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block4_3_conv[0][0] \n", 386 | "__________________________________________________________________________________________________\n", 387 | "conv4_block4_add (Add) (None, 14, 14, 1024) 0 conv4_block3_out[0][0] \n", 388 | " conv4_block4_3_bn[0][0] \n", 389 | "__________________________________________________________________________________________________\n", 390 | "conv4_block4_out (Activation) 
(None, 14, 14, 1024) 0 conv4_block4_add[0][0] \n", 391 | "__________________________________________________________________________________________________\n", 392 | "conv4_block5_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block4_out[0][0] \n", 393 | "__________________________________________________________________________________________________\n", 394 | "conv4_block5_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block5_1_conv[0][0] \n", 395 | "__________________________________________________________________________________________________\n", 396 | "conv4_block5_1_relu (Activation (None, 14, 14, 256) 0 conv4_block5_1_bn[0][0] \n", 397 | "__________________________________________________________________________________________________\n", 398 | "conv4_block5_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block5_1_relu[0][0] \n", 399 | "__________________________________________________________________________________________________\n", 400 | "conv4_block5_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block5_2_conv[0][0] \n", 401 | "__________________________________________________________________________________________________\n", 402 | "conv4_block5_2_relu (Activation (None, 14, 14, 256) 0 conv4_block5_2_bn[0][0] \n", 403 | "__________________________________________________________________________________________________\n", 404 | "conv4_block5_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block5_2_relu[0][0] \n", 405 | "__________________________________________________________________________________________________\n", 406 | "conv4_block5_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block5_3_conv[0][0] \n", 407 | "__________________________________________________________________________________________________\n", 408 | "conv4_block5_add (Add) (None, 14, 14, 1024) 0 conv4_block4_out[0][0] \n", 409 | " conv4_block5_3_bn[0][0] \n", 410 | 
"__________________________________________________________________________________________________\n", 411 | "conv4_block5_out (Activation) (None, 14, 14, 1024) 0 conv4_block5_add[0][0] \n", 412 | "__________________________________________________________________________________________________\n", 413 | "conv4_block6_1_conv (Conv2D) (None, 14, 14, 256) 262400 conv4_block5_out[0][0] \n", 414 | "__________________________________________________________________________________________________\n", 415 | "conv4_block6_1_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block6_1_conv[0][0] \n", 416 | "__________________________________________________________________________________________________\n", 417 | "conv4_block6_1_relu (Activation (None, 14, 14, 256) 0 conv4_block6_1_bn[0][0] \n", 418 | "__________________________________________________________________________________________________\n", 419 | "conv4_block6_2_conv (Conv2D) (None, 14, 14, 256) 590080 conv4_block6_1_relu[0][0] \n", 420 | "__________________________________________________________________________________________________\n", 421 | "conv4_block6_2_bn (BatchNormali (None, 14, 14, 256) 1024 conv4_block6_2_conv[0][0] \n", 422 | "__________________________________________________________________________________________________\n", 423 | "conv4_block6_2_relu (Activation (None, 14, 14, 256) 0 conv4_block6_2_bn[0][0] \n", 424 | "__________________________________________________________________________________________________\n", 425 | "conv4_block6_3_conv (Conv2D) (None, 14, 14, 1024) 263168 conv4_block6_2_relu[0][0] \n", 426 | "__________________________________________________________________________________________________\n", 427 | "conv4_block6_3_bn (BatchNormali (None, 14, 14, 1024) 4096 conv4_block6_3_conv[0][0] \n", 428 | "__________________________________________________________________________________________________\n", 429 | "conv4_block6_add (Add) (None, 14, 14, 1024) 0 
conv4_block5_out[0][0] \n", 430 | " conv4_block6_3_bn[0][0] \n", 431 | "__________________________________________________________________________________________________\n", 432 | "conv4_block6_out (Activation) (None, 14, 14, 1024) 0 conv4_block6_add[0][0] \n", 433 | "__________________________________________________________________________________________________\n", 434 | "conv5_block1_1_conv (Conv2D) (None, 7, 7, 512) 524800 conv4_block6_out[0][0] \n", 435 | "__________________________________________________________________________________________________\n", 436 | "conv5_block1_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block1_1_conv[0][0] \n", 437 | "__________________________________________________________________________________________________\n", 438 | "conv5_block1_1_relu (Activation (None, 7, 7, 512) 0 conv5_block1_1_bn[0][0] \n", 439 | "__________________________________________________________________________________________________\n", 440 | "conv5_block1_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block1_1_relu[0][0] \n", 441 | "__________________________________________________________________________________________________\n", 442 | "conv5_block1_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block1_2_conv[0][0] \n", 443 | "__________________________________________________________________________________________________\n", 444 | "conv5_block1_2_relu (Activation (None, 7, 7, 512) 0 conv5_block1_2_bn[0][0] \n", 445 | "__________________________________________________________________________________________________\n", 446 | "conv5_block1_0_conv (Conv2D) (None, 7, 7, 2048) 2099200 conv4_block6_out[0][0] \n", 447 | "__________________________________________________________________________________________________\n", 448 | "conv5_block1_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block1_2_relu[0][0] \n", 449 | "__________________________________________________________________________________________________\n", 450 | 
"conv5_block1_0_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block1_0_conv[0][0] \n", 451 | "__________________________________________________________________________________________________\n", 452 | "conv5_block1_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block1_3_conv[0][0] \n", 453 | "__________________________________________________________________________________________________\n", 454 | "conv5_block1_add (Add) (None, 7, 7, 2048) 0 conv5_block1_0_bn[0][0] \n", 455 | " conv5_block1_3_bn[0][0] \n", 456 | "__________________________________________________________________________________________________\n", 457 | "conv5_block1_out (Activation) (None, 7, 7, 2048) 0 conv5_block1_add[0][0] \n", 458 | "__________________________________________________________________________________________________\n", 459 | "conv5_block2_1_conv (Conv2D) (None, 7, 7, 512) 1049088 conv5_block1_out[0][0] \n", 460 | "__________________________________________________________________________________________________\n", 461 | "conv5_block2_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_1_conv[0][0] \n", 462 | "__________________________________________________________________________________________________\n", 463 | "conv5_block2_1_relu (Activation (None, 7, 7, 512) 0 conv5_block2_1_bn[0][0] \n", 464 | "__________________________________________________________________________________________________\n", 465 | "conv5_block2_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block2_1_relu[0][0] \n", 466 | "__________________________________________________________________________________________________\n", 467 | "conv5_block2_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_2_conv[0][0] \n", 468 | "__________________________________________________________________________________________________\n", 469 | "conv5_block2_2_relu (Activation (None, 7, 7, 512) 0 conv5_block2_2_bn[0][0] \n", 470 | 
"__________________________________________________________________________________________________\n", 471 | "conv5_block2_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block2_2_relu[0][0] \n", 472 | "__________________________________________________________________________________________________\n", 473 | "conv5_block2_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block2_3_conv[0][0] \n", 474 | "__________________________________________________________________________________________________\n", 475 | "conv5_block2_add (Add) (None, 7, 7, 2048) 0 conv5_block1_out[0][0] \n", 476 | " conv5_block2_3_bn[0][0] \n", 477 | "__________________________________________________________________________________________________\n", 478 | "conv5_block2_out (Activation) (None, 7, 7, 2048) 0 conv5_block2_add[0][0] \n", 479 | "__________________________________________________________________________________________________\n", 480 | "conv5_block3_1_conv (Conv2D) (None, 7, 7, 512) 1049088 conv5_block2_out[0][0] \n", 481 | "__________________________________________________________________________________________________\n", 482 | "conv5_block3_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_1_conv[0][0] \n", 483 | "__________________________________________________________________________________________________\n", 484 | "conv5_block3_1_relu (Activation (None, 7, 7, 512) 0 conv5_block3_1_bn[0][0] \n", 485 | "__________________________________________________________________________________________________\n", 486 | "conv5_block3_2_conv (Conv2D) (None, 7, 7, 512) 2359808 conv5_block3_1_relu[0][0] \n", 487 | "__________________________________________________________________________________________________\n", 488 | "conv5_block3_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_2_conv[0][0] \n", 489 | "__________________________________________________________________________________________________\n", 490 | "conv5_block3_2_relu (Activation (None, 7, 7, 512) 0 
conv5_block3_2_bn[0][0] \n", 491 | "__________________________________________________________________________________________________\n", 492 | "conv5_block3_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block3_2_relu[0][0] \n", 493 | "__________________________________________________________________________________________________\n", 494 | "conv5_block3_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block3_3_conv[0][0] \n", 495 | "__________________________________________________________________________________________________\n", 496 | "conv5_block3_add (Add) (None, 7, 7, 2048) 0 conv5_block2_out[0][0] \n", 497 | " conv5_block3_3_bn[0][0] \n", 498 | "__________________________________________________________________________________________________\n", 499 | "conv5_block3_out (Activation) (None, 7, 7, 2048) 0 conv5_block3_add[0][0] \n", 500 | "__________________________________________________________________________________________________\n", 501 | "flatten_2 (Flatten) (None, 100352) 0 conv5_block3_out[0][0] \n", 502 | "__________________________________________________________________________________________________\n", 503 | "dense_2 (Dense) (None, 3) 301059 flatten_2[0][0] \n", 504 | "==================================================================================================\n", 505 | "Total params: 23,888,771\n", 506 | "Trainable params: 301,059\n", 507 | "Non-trainable params: 23,587,712\n", 508 | "__________________________________________________________________________________________________\n" 509 | ] 510 | } 511 | ], 512 | "source": [ 513 | "\n", 514 | "# view the structure of the model\n", 515 | "model.summary()\n" 516 | ] 517 | }, 518 | { 519 | "cell_type": "code", 520 | "execution_count": 75, 521 | "metadata": {}, 522 | "outputs": [], 523 | "source": [ 524 | "# tell the model what cost and optimization method to use\n", 525 | "model.compile(\n", 526 | " loss='categorical_crossentropy',\n", 527 | " optimizer='adam',\n", 528 | " 
metrics=['accuracy']\n", 529 | ")\n" 530 | ] 531 | }, 532 | { 533 | "cell_type": "code", 534 | "execution_count": 76, 535 | "metadata": {}, 536 | "outputs": [], 537 | "source": [ 538 | "# Use the Image Data Generator to import the images from the dataset\n", 539 | "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n", 540 | "\n", 541 | "train_datagen = ImageDataGenerator(rescale = 1./255,\n", 542 | "                                   shear_range = 0.2,\n", 543 | "                                   zoom_range = 0.2,\n", 544 | "                                   horizontal_flip = True)\n", 545 | "\n", 546 | "test_datagen = ImageDataGenerator(rescale = 1./255)" 547 | ] 548 | }, 549 | { 550 | "cell_type": "code", 551 | "execution_count": 77, 552 | "metadata": {}, 553 | "outputs": [ 554 | { 555 | "name": "stdout", 556 | "output_type": "stream", 557 | "text": [ 558 | "Found 64 images belonging to 3 classes.\n" 559 | ] 560 | } 561 | ], 562 | "source": [ 563 | "# Make sure you provide the same target size as initialized for the image size\n", 564 | "training_set = train_datagen.flow_from_directory('Datasets/Train',\n", 565 | "                                                 target_size = (224, 224),\n", 566 | "                                                 batch_size = 32,\n", 567 | "                                                 class_mode = 'categorical')" 568 | ] 569 | }, 570 | { 571 | "cell_type": "code", 572 | "execution_count": 78, 573 | "metadata": {}, 574 | "outputs": [ 575 | { 576 | "name": "stdout", 577 | "output_type": "stream", 578 | "text": [ 579 | "Found 58 images belonging to 3 classes.\n" 580 | ] 581 | } 582 | ], 583 | "source": [ 584 | "test_set = test_datagen.flow_from_directory('Datasets/Test',\n", 585 | "                                            target_size = (224, 224),\n", 586 | "                                            batch_size = 32,\n", 587 | "                                            class_mode = 'categorical')" 588 | ] 589 | }, 590 | { 591 | "cell_type": "code", 592 | "execution_count": 79, 593 | "metadata": {}, 594 | "outputs": [ 595 | { 596 | "name": "stdout", 597 | "output_type": "stream", 598 | "text": [ 599 | "WARNING:tensorflow:sample_weight modes were coerced from\n", 600 | "  ...\n", 601 | "    to  \n", 602 | "  ['...']\n", 603 | "WARNING:tensorflow:sample_weight modes were coerced
from\n", 604 | " ...\n", 605 | " to \n", 606 | " ['...']\n", 607 | "Train for 2 steps, validate for 2 steps\n", 608 | "Epoch 1/5\n", 609 | "2/2 [==============================] - 27s 14s/step - loss: 4.8157 - accuracy: 0.4062 - val_loss: 8.9823 - val_accuracy: 0.3276\n", 610 | "Epoch 2/5\n", 611 | "2/2 [==============================] - 22s 11s/step - loss: 6.7594 - accuracy: 0.5938 - val_loss: 12.2410 - val_accuracy: 0.3276\n", 612 | "Epoch 3/5\n", 613 | "2/2 [==============================] - 22s 11s/step - loss: 0.9376 - accuracy: 0.8906 - val_loss: 14.1595 - val_accuracy: 0.3276\n", 614 | "Epoch 4/5\n", 615 | "2/2 [==============================] - 21s 10s/step - loss: 1.1360 - accuracy: 0.9219 - val_loss: 15.6251 - val_accuracy: 0.3276\n", 616 | "Epoch 5/5\n", 617 | "2/2 [==============================] - 22s 11s/step - loss: 1.7595 - accuracy: 0.9062 - val_loss: 18.2626 - val_accuracy: 0.3276\n" 618 | ] 619 | } 620 | ], 621 | "source": [ 622 | "# fit the model\n", 623 | "# Run the cell. 
It will take some time to execute\n", 624 | "r = model.fit(\n", 625 | "  training_set,\n", 626 | "  validation_data=test_set,\n", 627 | "  epochs=100,\n", 628 | "  steps_per_epoch=len(training_set),\n", 629 | "  validation_steps=len(test_set)\n", 630 | ")" 631 | ] 632 | }, 633 | { 634 | "cell_type": "code", 635 | "execution_count": 81, 636 | "metadata": {}, 637 | "outputs": [ 638 | { 639 | "data": { 640 | "image/png": "(base64-encoded PNG data omitted: matplotlib training-curve figure)\n", 641 | "text/plain": [ 642 | "
" 643 | ] 644 | }, 645 | "metadata": { 646 | "needs_background": "light" 647 | }, 648 | "output_type": "display_data" 649 | }, 650 | { 651 | "data": { 652 | "image/png": "<base64-encoded PNG omitted: inline matplotlib plot of the training curves>\n", 653 | "text/plain": [ 654 | "
" 655 | ] 656 | }, 657 | "metadata": { 658 | "needs_background": "light" 659 | }, 660 | "output_type": "display_data" 661 | }, 662 | { 663 | "data": { 664 | "text/plain": [ 665 | "
" 666 | ] 667 | }, 668 | "metadata": {}, 669 | "output_type": "display_data" 670 | } 671 | ], 672 | "source": [ 673 | "# plot the loss (savefig must come before show, otherwise a blank figure is saved)\n", 674 | "plt.plot(r.history['loss'], label='train loss')\n", 675 | "plt.plot(r.history['val_loss'], label='val loss')\n", 676 | "plt.legend()\n", 677 | "plt.savefig('LossVal_loss')\n", 678 | "plt.show()\n", 679 | "\n", 680 | "# plot the accuracy\n", 681 | "plt.plot(r.history['accuracy'], label='train acc')\n", 682 | "plt.plot(r.history['val_accuracy'], label='val acc')\n", 683 | "plt.legend()\n", 684 | "plt.savefig('AccVal_acc')\n", 685 | "plt.show()" 686 | ] 687 | }, 688 | { 689 | "cell_type": "code", 690 | "execution_count": 82, 691 | "metadata": {}, 692 | "outputs": [], 693 | "source": [ 694 | "# save the model as an h5 file\n", 695 | "\n", 696 | "\n", 697 | "from tensorflow.keras.models import load_model\n", 698 | "\n", 699 | "model.save('model_resnet50.h5')" 700 | ] 701 | }, 702 | { 703 | "cell_type": "code", 704 | "execution_count": null, 705 | "metadata": {}, 706 | "outputs": [], 707 | "source": [] 708 | }, 709 | { 710 | "cell_type": "code", 711 | "execution_count": 83, 712 | "metadata": {}, 713 | "outputs": [], 714 | "source": [ 715 | "\n", 716 | "y_pred = model.predict(test_set)\n" 717 | ] 718 | }, 719 | { 720 | "cell_type": "code", 721 | "execution_count": 84, 722 | "metadata": {}, 723 | "outputs": [ 724 | { 725 | "data": { 726 | "text/plain": [ 727 | "array([[1.72658485e-17, 3.15305914e-14, 1.00000000e+00],\n", 728 | " [1.44285156e-18, 1.04114977e-14, 1.00000000e+00],\n", 729 | " [1.72185482e-18, 2.45480757e-15, 1.00000000e+00],\n", 730 | " [1.69613023e-18, 5.67724129e-15, 1.00000000e+00],\n", 731 | " [1.58277143e-18, 3.13576086e-15, 1.00000000e+00],\n", 732 | " [4.46850329e-18, 1.18827149e-13, 1.00000000e+00],\n", 733 | " [3.54525992e-19, 4.03665452e-15, 1.00000000e+00],\n", 734 | " [6.27264810e-19, 4.79972067e-15, 1.00000000e+00],\n", 735 | " [1.86832241e-17, 2.78132331e-14, 1.00000000e+00],\n", 736 | " [2.50938722e-18, 
1.34208999e-14, 1.00000000e+00],\n", 737 | " [1.44070657e-18, 6.03744714e-15, 1.00000000e+00],\n", 738 | " [1.29441017e-18, 4.11096641e-15, 1.00000000e+00],\n", 739 | " [4.26931158e-18, 6.29958190e-14, 1.00000000e+00],\n", 740 | " [3.45607537e-18, 8.87510704e-15, 1.00000000e+00],\n", 741 | " [2.06131713e-18, 1.81756974e-14, 1.00000000e+00],\n", 742 | " [1.03641626e-18, 5.46802458e-15, 1.00000000e+00],\n", 743 | " [1.00217139e-18, 5.56380283e-15, 1.00000000e+00],\n", 744 | " [1.82711066e-18, 1.86024292e-14, 1.00000000e+00],\n", 745 | " [3.87583438e-19, 4.07064299e-15, 1.00000000e+00],\n", 746 | " [8.47338694e-19, 2.99262055e-15, 1.00000000e+00],\n", 747 | " [5.06119888e-18, 2.16092199e-14, 1.00000000e+00],\n", 748 | " [1.59416336e-18, 9.65978142e-15, 1.00000000e+00],\n", 749 | " [5.72713180e-18, 1.50026831e-14, 1.00000000e+00],\n", 750 | " [2.28775348e-18, 8.07533006e-15, 1.00000000e+00],\n", 751 | " [4.63019435e-18, 1.41081325e-14, 1.00000000e+00],\n", 752 | " [7.96198894e-18, 2.15624502e-14, 1.00000000e+00],\n", 753 | " [1.04141781e-18, 1.15448947e-14, 1.00000000e+00],\n", 754 | " [1.15247621e-18, 4.19886599e-15, 1.00000000e+00],\n", 755 | " [4.73869439e-18, 1.06193571e-14, 1.00000000e+00],\n", 756 | " [1.18833418e-18, 6.53833270e-14, 1.00000000e+00],\n", 757 | " [2.84519008e-18, 7.93761859e-15, 1.00000000e+00],\n", 758 | " [1.64842321e-18, 7.50876751e-15, 1.00000000e+00],\n", 759 | " [6.47598998e-18, 1.88450939e-14, 1.00000000e+00],\n", 760 | " [8.19181879e-19, 2.95362887e-15, 1.00000000e+00],\n", 761 | " [6.70528073e-18, 2.20390383e-14, 1.00000000e+00],\n", 762 | " [1.66077529e-18, 6.74443759e-15, 1.00000000e+00],\n", 763 | " [1.44466898e-18, 7.34333774e-15, 1.00000000e+00],\n", 764 | " [1.98024826e-18, 2.13992921e-15, 1.00000000e+00],\n", 765 | " [5.60118631e-19, 3.58009952e-15, 1.00000000e+00],\n", 766 | " [2.06639519e-18, 1.56281983e-14, 1.00000000e+00],\n", 767 | " [2.80224286e-19, 1.51491605e-14, 1.00000000e+00],\n", 768 | " [1.56633794e-18, 8.40833005e-15, 
1.00000000e+00],\n", 769 | " [1.92835653e-18, 7.12104580e-15, 1.00000000e+00],\n", 770 | " [1.61547329e-18, 5.66715821e-15, 1.00000000e+00],\n", 771 | " [1.38129215e-18, 1.36457237e-14, 1.00000000e+00],\n", 772 | " [1.06465987e-17, 2.40037770e-14, 1.00000000e+00],\n", 773 | " [1.57657078e-18, 1.01950597e-14, 1.00000000e+00],\n", 774 | " [2.60430806e-18, 1.80379342e-14, 1.00000000e+00],\n", 775 | " [8.51574334e-18, 3.26405569e-14, 1.00000000e+00],\n", 776 | " [7.40295009e-19, 3.48502113e-15, 1.00000000e+00],\n", 777 | " [2.37361172e-18, 6.39510976e-15, 1.00000000e+00],\n", 778 | " [1.30336425e-17, 2.92999674e-14, 1.00000000e+00],\n", 779 | " [4.08764250e-19, 4.92643679e-15, 1.00000000e+00],\n", 780 | " [1.11583428e-18, 5.83212804e-15, 1.00000000e+00],\n", 781 | " [1.51674087e-17, 2.18138750e-14, 1.00000000e+00],\n", 782 | " [9.69172575e-19, 2.22222228e-15, 1.00000000e+00],\n", 783 | " [7.40182471e-18, 3.08916135e-14, 1.00000000e+00],\n", 784 | " [4.56019915e-19, 9.16357766e-15, 1.00000000e+00]], dtype=float32)" 785 | ] 786 | }, 787 | "execution_count": 84, 788 | "metadata": {}, 789 | "output_type": "execute_result" 790 | } 791 | ], 792 | "source": [ 793 | "y_pred" 794 | ] 795 | }, 796 | { 797 | "cell_type": "code", 798 | "execution_count": 85, 799 | "metadata": {}, 800 | "outputs": [], 801 | "source": [ 802 | "import numpy as np\n", 803 | "y_pred = np.argmax(y_pred, axis=1)" 804 | ] 805 | }, 806 | { 807 | "cell_type": "code", 808 | "execution_count": 86, 809 | "metadata": {}, 810 | "outputs": [ 811 | { 812 | "data": { 813 | "text/plain": [ 814 | "array([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n", 815 | " 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n", 816 | " 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], dtype=int64)" 817 | ] 818 | }, 819 | "execution_count": 86, 820 | "metadata": {}, 821 | "output_type": "execute_result" 822 | } 823 | ], 824 | "source": [ 825 | "y_pred" 826 | ] 827 | }, 828 | { 829 | "cell_type": "code", 
830 | "execution_count": null, 831 | "metadata": {}, 832 | "outputs": [], 833 | "source": [] 834 | }, 835 | { 836 | "cell_type": "code", 837 | "execution_count": 87, 838 | "metadata": {}, 839 | "outputs": [], 840 | "source": [ 841 | "from tensorflow.keras.models import load_model\n", 842 | "from tensorflow.keras.preprocessing import image" 843 | ] 844 | }, 845 | { 846 | "cell_type": "code", 847 | "execution_count": 88, 848 | "metadata": {}, 849 | "outputs": [], 850 | "source": [ 851 | "model=load_model('model_resnet50.h5')" 852 | ] 853 | }, 854 | { 855 | "cell_type": "code", 856 | "execution_count": 89, 857 | "metadata": {}, 858 | "outputs": [], 859 | "source": [ 860 | "#img_data" 861 | ] 862 | }, 863 | { 864 | "cell_type": "code", 865 | "execution_count": 90, 866 | "metadata": {}, 867 | "outputs": [], 868 | "source": [ 869 | "img=image.load_img('Datasets/Test/lamborghini/11.jpg',target_size=(224,224))\n", 870 | "\n" 871 | ] 872 | }, 873 | { 874 | "cell_type": "code", 875 | "execution_count": 91, 876 | "metadata": {}, 877 | "outputs": [ 878 | { 879 | "data": { 880 | "text/plain": [ 881 | "array([[[252., 252., 252.],\n", 882 | " [252., 252., 252.],\n", 883 | " [252., 252., 252.],\n", 884 | " ...,\n", 885 | " [196., 187., 172.],\n", 886 | " [217., 208., 193.],\n", 887 | " [243., 234., 219.]],\n", 888 | "\n", 889 | " [[252., 252., 252.],\n", 890 | " [252., 252., 252.],\n", 891 | " [252., 252., 252.],\n", 892 | " ...,\n", 893 | " [245., 245., 237.],\n", 894 | " [243., 243., 235.],\n", 895 | " [242., 242., 234.]],\n", 896 | "\n", 897 | " [[252., 252., 252.],\n", 898 | " [252., 252., 252.],\n", 899 | " [252., 252., 252.],\n", 900 | " ...,\n", 901 | " [240., 249., 248.],\n", 902 | " [242., 251., 250.],\n", 903 | " [242., 251., 250.]],\n", 904 | "\n", 905 | " ...,\n", 906 | "\n", 907 | " [[189., 207., 229.],\n", 908 | " [190., 206., 229.],\n", 909 | " [190., 206., 229.],\n", 910 | " ...,\n", 911 | " [171., 180., 187.],\n", 912 | " [171., 180., 187.],\n", 913 | " [171., 
180., 187.]],\n", 914 | "\n", 915 | " [[185., 206., 227.],\n", 916 | " [185., 206., 227.],\n", 917 | " [185., 206., 227.],\n", 918 | " ...,\n", 919 | " [171., 180., 187.],\n", 920 | " [171., 180., 187.],\n", 921 | " [171., 180., 187.]],\n", 922 | "\n", 923 | " [[185., 206., 227.],\n", 924 | " [185., 206., 227.],\n", 925 | " [185., 206., 227.],\n", 926 | " ...,\n", 927 | " [171., 180., 187.],\n", 928 | " [171., 180., 187.],\n", 929 | " [171., 180., 187.]]], dtype=float32)" 930 | ] 931 | }, 932 | "execution_count": 91, 933 | "metadata": {}, 934 | "output_type": "execute_result" 935 | } 936 | ], 937 | "source": [ 938 | "x=image.img_to_array(img)\n", 939 | "x" 940 | ] 941 | }, 942 | { 943 | "cell_type": "code", 944 | "execution_count": 92, 945 | "metadata": {}, 946 | "outputs": [ 947 | { 948 | "data": { 949 | "text/plain": [ 950 | "(224, 224, 3)" 951 | ] 952 | }, 953 | "execution_count": 92, 954 | "metadata": {}, 955 | "output_type": "execute_result" 956 | } 957 | ], 958 | "source": [ 959 | "x.shape" 960 | ] 961 | }, 962 | { 963 | "cell_type": "code", 964 | "execution_count": 93, 965 | "metadata": {}, 966 | "outputs": [], 967 | "source": [ 968 | "x=x/255" 969 | ] 970 | }, 971 | { 972 | "cell_type": "code", 973 | "execution_count": 94, 974 | "metadata": {}, 975 | "outputs": [ 976 | { 977 | "data": { 978 | "text/plain": [ 979 | "(1, 224, 224, 3)" 980 | ] 981 | }, 982 | "execution_count": 94, 983 | "metadata": {}, 984 | "output_type": "execute_result" 985 | } 986 | ], 987 | "source": [ 988 | "x=np.expand_dims(x,axis=0)\n", 989 | "img_data=preprocess_input(x)\n", 990 | "img_data.shape" 991 | ] 992 | }, 993 | { 994 | "cell_type": "code", 995 | "execution_count": 95, 996 | "metadata": {}, 997 | "outputs": [ 998 | { 999 | "data": { 1000 | "text/plain": [ 1001 | "array([[2.2220233e-10, 4.6077218e-07, 9.9999952e-01]], dtype=float32)" 1002 | ] 1003 | }, 1004 | "execution_count": 95, 1005 | "metadata": {}, 1006 | "output_type": "execute_result" 1007 | } 1008 | ], 1009 | "source": 
[ 1010 | "model.predict(img_data)" 1011 | ] 1012 | }, 1013 | { 1014 | "cell_type": "code", 1015 | "execution_count": 96, 1016 | "metadata": {}, 1017 | "outputs": [], 1018 | "source": [ 1019 | "a=np.argmax(model.predict(img_data), axis=1)" 1020 | ] 1021 | }, 1022 | { 1023 | "cell_type": "code", 1024 | "execution_count": 97, 1025 | "metadata": {}, 1026 | "outputs": [ 1027 | { 1028 | "data": { 1029 | "text/plain": [ 1030 | "array([False])" 1031 | ] 1032 | }, 1033 | "execution_count": 97, 1034 | "metadata": {}, 1035 | "output_type": "execute_result" 1036 | } 1037 | ], 1038 | "source": [ 1039 | "a==1" 1040 | ] 1041 | }, 1042 | { 1043 | "cell_type": "code", 1044 | "execution_count": 98, 1045 | "metadata": {}, 1046 | "outputs": [ 1047 | { 1048 | "data": { 1049 | "text/plain": [ 1050 | "'2.1.0'" 1051 | ] 1052 | }, 1053 | "execution_count": 98, 1054 | "metadata": {}, 1055 | "output_type": "execute_result" 1056 | } 1057 | ], 1058 | "source": [ 1059 | "import tensorflow as tf\n", 1060 | "tf.version.VERSION" 1061 | ] 1062 | }, 1063 | { 1064 | "cell_type": "code", 1065 | "execution_count": null, 1066 | "metadata": {}, 1067 | "outputs": [], 1068 | "source": [] 1069 | } 1070 | ], 1071 | "metadata": { 1072 | "kernelspec": { 1073 | "display_name": "Python 3", 1074 | "language": "python", 1075 | "name": "python3" 1076 | }, 1077 | "language_info": { 1078 | "codemirror_mode": { 1079 | "name": "ipython", 1080 | "version": 3 1081 | }, 1082 | "file_extension": ".py", 1083 | "mimetype": "text/x-python", 1084 | "name": "python", 1085 | "nbconvert_exporter": "python", 1086 | "pygments_lexer": "ipython3", 1087 | "version": "3.7.6" 1088 | } 1089 | }, 1090 | "nbformat": 4, 1091 | "nbformat_minor": 2 1092 | } 1093 | -------------------------------------------------------------------------------- /Car Brand Classifier And Deployment/app.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Thu Jun 11 22:34:20 2020 
4 | 5 | @author: Krish Naik 6 | """ 7 | 8 | from __future__ import division, print_function 9 | # coding=utf-8 10 | import sys 11 | import os 12 | import glob 13 | import re 14 | import numpy as np 15 | import keras 16 | 17 | # Keras 18 | from tensorflow.keras.applications.imagenet_utils import preprocess_input, decode_predictions 19 | from tensorflow.keras.models import load_model 20 | from tensorflow.keras.preprocessing import image 21 | 22 | # Flask utils 23 | from flask import Flask, redirect, url_for, request, render_template 24 | from werkzeug.utils import secure_filename 25 | #from gevent.pywsgi import WSGIServer 26 | 27 | # Define a flask app 28 | app = Flask(__name__) 29 | 30 | # Model saved with Keras model.save() 31 | MODEL_PATH = 'model_resnet50.h5' 32 | 33 | # Load your trained model 34 | model = load_model(MODEL_PATH) 35 | 36 | 37 | 38 | 39 | def model_predict(img_path, model): 40 | img = image.load_img(img_path, target_size=(224, 224)) 41 | 42 | # Preprocessing the image 43 | x = image.img_to_array(img) 44 | # x = np.true_divide(x, 255) 45 | ## Scaling 46 | x = x/255 47 | x = np.expand_dims(x, axis=0) 48 | 49 | 50 | 51 | 52 | preds = model.predict(x) 53 | preds = np.argmax(preds, axis=1) 54 | if preds==0: 55 | preds="The Car is Audi" 56 | elif preds==1: 57 | preds="The Car is Lamborghini" 58 | elif preds == 2: 59 | preds = "The Car is Mercedes" 60 | else: 61 | preds="Other Than Audi/Lamborghini/Mercedes" 62 | 63 | 64 | return preds 65 | 66 | 67 | @app.route('/', methods=['GET']) 68 | def index(): 69 | # Main page 70 | return render_template('index.html') 71 | 72 | 73 | @app.route('/predict', methods=['GET', 'POST']) 74 | def upload(): 75 | if request.method == 'POST': 76 | # Get the file from post request 77 | f = request.files['file'] 78 | 79 | # Save the file to ./uploads 80 | basepath = os.path.dirname(__file__) 81 | file_path = os.path.join( 82 | basepath, 'uploads', secure_filename(f.filename)) 83 | ''' 84 | try: 
os.mkdir(os.path.join(basepath, 'uploads')) 86 | except: 87 | pass 88 | ''' 89 | f.save(file_path) 90 | 91 | # Make prediction 92 | preds = model_predict(file_path, model) 93 | result = preds 94 | return result 95 | return None 96 | 97 | if __name__ == '__main__': 98 | app.run(host='0.0.0.0', port=8080, debug=True) 99 | 100 | ''' 101 | if __name__ == '__main__': 102 | # app.run(debug=True) 103 | ''' -------------------------------------------------------------------------------- /Car Brand Classifier And Deployment/model-Resnet-50-h5 (Download Link).txt: -------------------------------------------------------------------------------- 1 | Download the model.h5 file using the link below: 2 | 3 | https://github.com/amark720/Car-Brand-Classifier-And-Deployment/blob/main/model_resnet50.h5 4 | 5 | 6 | P.S. - GitHub doesn't allow uploading files larger than 25 MB to subfolders, which is why I've linked the main deployed model above, from where you can download the model.h5 file. -------------------------------------------------------------------------------- /Car Brand Classifier And Deployment/requirements.txt: -------------------------------------------------------------------------------- 1 | 2 | 3 | jsonify 4 | requests 5 | gunicorn 6 | 7 | 8 | absl-py==0.9.0 9 | astunparse==1.6.3 10 | attrs==19.3.0 11 | backcall==0.1.0 12 | bleach==3.1.5 13 | cachetools==4.1.0 14 | certifi==2020.4.5.1 15 | chardet==3.0.4 16 | click==7.1.2 17 | colorama==0.4.3 18 | cycler==0.10.0 19 | decorator==4.4.2 20 | defusedxml==0.6.0 21 | entrypoints==0.3 22 | Flask==1.1.2 23 | Flask-Cors==3.0.8 24 | gast==0.3.3 25 | geojson==2.5.0 26 | google-auth==1.15.0 27 | google-auth-oauthlib==0.4.1 28 | google-pasta==0.2.0 29 | grpcio==1.29.0 30 | h5py==2.10.0 31 | idna==2.9 32 | importlib-metadata==1.6.0 33 | ipykernel==5.3.0 34 | ipython==7.14.0 35 | ipython-genutils==0.2.0 36 | ipywidgets==7.5.1 37 | itsdangerous==1.1.0 38 | jedi==0.17.0 39 | Jinja2==2.11.2 40 | joblib==0.15.1 41 | jsonschema==3.2.0 
42 | jupyter==1.0.0 43 | jupyter-client==6.1.3 44 | jupyter-console==6.1.0 45 | jupyter-core==4.6.3 46 | Keras-Preprocessing==1.1.2 47 | kiwisolver==1.2.0 48 | lxml==4.5.1 49 | Markdown==3.2.2 50 | MarkupSafe==1.1.1 51 | matplotlib==3.2.1 52 | mistune==0.8.4 53 | nbconvert==5.6.1 54 | nbformat==5.0.6 55 | notebook==6.0.3 56 | numpy==1.18.4 57 | oauthlib==3.1.0 58 | opencv-python==4.2.0.34 59 | opt-einsum==3.2.1 60 | packaging==20.4 61 | pandas==1.0.3 62 | pandas-datareader==0.8.1 63 | pandocfilters==1.4.2 64 | parso==0.7.0 65 | pexpect==4.8.0 66 | pickleshare==0.7.5 67 | Pillow==7.1.2 68 | prometheus-client==0.7.1 69 | prompt-toolkit==3.0.5 70 | protobuf==3.8.0 71 | ptyprocess==0.6.0 72 | pyasn1==0.4.8 73 | pyasn1-modules==0.2.8 74 | Pygments==2.6.1 75 | pyparsing==2.4.7 76 | pyrsistent==0.16.0 77 | PySocks==1.7.1 78 | python-dateutil==2.8.1 79 | pytz==2020.1 80 | pywinpty==0.5.7 81 | pyzmq==19.0.1 82 | qtconsole==4.7.4 83 | QtPy==1.9.0 84 | requests-oauthlib==1.3.0 85 | rsa==4.0 86 | scikit-learn==0.23.1 87 | scipy==1.4.1 88 | seaborn==0.10.1 89 | Send2Trash==1.5.0 90 | six==1.15.0 91 | sklearn==0.0 92 | tensorboard==2.2.1 93 | tensorboard-plugin-wit==1.6.0.post3 94 | tensorflow-cpu==2.3.0 95 | tensorflow-estimator==2.2.0 96 | termcolor==1.1.0 97 | terminado==0.8.3 98 | testpath==0.4.4 99 | threadpoolctl==2.0.0 100 | tornado==6.0.4 101 | traitlets==4.3.3 102 | urllib3==1.25.9 103 | wcwidth==0.1.9 104 | webencodings==0.5.1 105 | Werkzeug==1.0.1 106 | widgetsnbextension==3.5.1 107 | wincertstore==0.2 108 | wrapt==1.12.1 109 | zipp==3.1.0 110 | -------------------------------------------------------------------------------- /Car Brand Classifier And Deployment/static/css/main.css: -------------------------------------------------------------------------------- 1 | .img-preview { 2 | width: 256px; 3 | height: 256px; 4 | position: relative; 5 | border: 5px solid #F8F8F8; 6 | box-shadow: 0px 2px 4px 0px rgba(0, 0, 0, 0.1); 7 | margin-top: 1em; 8 | margin-bottom: 1em; 9 
| } 10 | 11 | 12 | .img-preview>div { 13 | width: 100%; 14 | height: 100%; 15 | background-size: 256px 256px; 16 | background-repeat: no-repeat; 17 | background-position: center; 18 | } 19 | 20 | input[type="file"] { 21 | display: none; 22 | } 23 | 24 | .upload-label{ 25 | display: inline-block; 26 | padding: 12px 30px; 27 | background: #39D2B4; 28 | color: #fff; 29 | font-size: 1em; 30 | transition: all .4s; 31 | cursor: pointer; 32 | } 33 | 34 | .upload-label:hover{ 35 | background: #34495E; 36 | color: #39D2B4; 37 | } 38 | 39 | .loader { 40 | border: 8px solid #f3f3f3; /* Light grey */ 41 | border-top: 8px solid #3498db; /* Blue */ 42 | border-radius: 50%; 43 | width: 50px; 44 | height: 50px; 45 | animation: spin 1s linear infinite; 46 | } 47 | 48 | @keyframes spin { 49 | 0% { transform: rotate(0deg); } 50 | 100% { transform: rotate(360deg); } 51 | } -------------------------------------------------------------------------------- /Car Brand Classifier And Deployment/static/js/main.js: -------------------------------------------------------------------------------- 1 | $(document).ready(function () { 2 | // Init 3 | $('.image-section').hide(); 4 | $('.loader').hide(); 5 | $('#result').hide(); 6 | 7 | // Upload Preview 8 | function readURL(input) { 9 | if (input.files && input.files[0]) { 10 | var reader = new FileReader(); 11 | reader.onload = function (e) { 12 | $('#imagePreview').css('background-image', 'url(' + e.target.result + ')'); 13 | $('#imagePreview').hide(); 14 | $('#imagePreview').fadeIn(650); 15 | } 16 | reader.readAsDataURL(input.files[0]); 17 | } 18 | } 19 | $("#imageUpload").change(function () { 20 | $('.image-section').show(); 21 | $('#btn-predict').show(); 22 | $('#result').text(''); 23 | $('#result').hide(); 24 | readURL(this); 25 | }); 26 | 27 | // Predict 28 | $('#btn-predict').click(function () { 29 | var form_data = new FormData($('#upload-file')[0]); 30 | 31 | // Show loading animation 32 | $(this).hide(); 33 | $('.loader').show(); 34 | 35 
| // Make prediction by calling api /predict 36 | $.ajax({ 37 | type: 'POST', 38 | url: '/predict', 39 | data: form_data, 40 | contentType: false, 41 | cache: false, 42 | processData: false, 43 | async: true, 44 | success: function (data) { 45 | // Get and display the result 46 | $('.loader').hide(); 47 | $('#result').fadeIn(600); 48 | $('#result').text(' Result: ' + data); 49 | console.log('Success!'); 50 | }, 51 | }); 52 | }); 53 | 54 | }); 55 | -------------------------------------------------------------------------------- /Car Brand Classifier And Deployment/templates/base.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Car Brand Classifier! 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 45 |
46 |
{% block content %}{% endblock %}
47 |
48 | 49 | 50 |
51 | 52 | 53 |













54 | Deployed by: Amar Kumar 55 |
56 |
57 | 58 | -------------------------------------------------------------------------------- /Car Brand Classifier And Deployment/templates/index.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} {% block content %} 2 | 3 |

Find Your Car Brand by Simply uploading a Car Photo!

4 | 5 |
6 |
7 | 10 | 11 |
12 | 13 | 22 | 23 | 24 | 25 |

26 | 27 |

28 | 29 |
30 | 31 | {% endblock %} -------------------------------------------------------------------------------- /Compare 2 Images using OpenCV and PIL/Post.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Compare 2 Images using OpenCV and PIL/Post.jpg -------------------------------------------------------------------------------- /Compare 2 Images using OpenCV and PIL/Pre.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Compare 2 Images using OpenCV and PIL/Pre.jpg -------------------------------------------------------------------------------- /Compare 2 Images using OpenCV and PIL/Readme.md: -------------------------------------------------------------------------------- 1 | # Compare the Difference Between 2 Images. 2 | 3 | This is a Python program that detects the differences between two images (Pre.jpg and Post.jpg) of the same map location. Both images show the same map location but were taken on different dates. 4 | We need to find the spots where the images differ from one another using OpenCV and Computer Vision. 5 | 6 | #### My Kaggle Notebook Link -> https://www.kaggle.com/datawarriors/compare-2-images 7 | 8 | ## Screenshots: 9 | 10 | ### Dataset: 11 | 12 | ####           Previous Image                     New Image 13 | 14 | !--------- We Need to Find the Difference Between both the Above TWO Images. ---------! 15 | 16 | 17 | ### Output Screenshot: 18 | 19 | #### Screenshot 1 20 | 21 | 22 | 23 | #### Screenshot 2 24 | 25 | 26 | #### Screenshot 3 27 | 28 | 29 | 30 | #### Screenshot 4 31 | 32 | 33 | 34 | ### Conclusion! 35 | One can examine maps at different scales and make observations about the amount of detail one can see.
We can compare satellite images with maps and use satellite images to measure and map changing land use. With the use of Computer Vision, this task can be done in less time and with great accuracy. 36 | 37 | Thank You! 38 | 39 | #### Feel Free to contact me at➛ amark720@gmail.com for any help related to this Project! 40 | -------------------------------------------------------------------------------- /Compare 2 Images using OpenCV and PIL/Screenshot1.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Compare 2 Images using OpenCV and PIL/Screenshot1.PNG -------------------------------------------------------------------------------- /Compare 2 Images using OpenCV and PIL/Screenshot2.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Compare 2 Images using OpenCV and PIL/Screenshot2.PNG -------------------------------------------------------------------------------- /Compare 2 Images using OpenCV and PIL/Screenshot3.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Compare 2 Images using OpenCV and PIL/Screenshot3.PNG -------------------------------------------------------------------------------- /Compare 2 Images using OpenCV and PIL/Screenshot4.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Compare 2 Images using OpenCV and PIL/Screenshot4.PNG -------------------------------------------------------------------------------- /Covid19 FaceMask Detector
(CNN & OpenCV)/Readme.md: -------------------------------------------------------------------------------- 1 | # Covid19 FaceMask Detector using CNN & OpenCV. 2 | 3 | 4 |
5 |

Face Mask Detection system built with OpenCV, Keras/TensorFlow using Deep Learning and Computer Vision concepts in order to detect face masks in static images as well as in video streams.

6 |
7 | 8 |                                     9 | 10 | ## Live Demo: 11 |

12 | 13 | 14 | 15 | ## :innocent: Motivation 16 | In the present scenario due to Covid-19, there are no efficient face mask detection applications available, even though they are now in high demand for transportation, densely populated areas, residential districts, large-scale manufacturers and other enterprises to ensure safety. Also, the absence of large datasets of __‘with_mask’__ images has made this task more cumbersome and challenging. 17 | 18 | 19 | ## :star: Features 20 | 21 | This system can be used in real-time applications which require face-mask detection for safety purposes due to the outbreak of Covid-19. This project can be integrated with embedded systems for application in airports, railway stations, offices, schools, and public places to ensure that public safety guidelines are followed. 22 | 23 | ## :file_folder: Dataset 24 | The dataset used can be downloaded here - [Click to Download](https://www.kaggle.com/prithwirajmitra/covid-face-mask-detection-dataset) 25 | 26 | This dataset consists of __1106 images__ belonging to two classes: 27 | * __with_mask: 500 images__ 28 | * __without_mask: 606 images__ 29 | 30 | 31 | 32 | ## 🚀 Installation 33 | 1. Download the files in this repository and extract them. 2. Run the Face_Mask_Detection.ipynb file first using Google Colab:
35 | * Colab File link - https://colab.research.google.com/drive/1rX32L-EHFvdtulPbVlwllBve8bdKwC_m#scrollTo=pO9U0q_KNDsF 36 | 37 | 3. Running the above .ipynb file will generate a Model.h5 file. 38 | 4. Download that Model.h5 file from Colab to your local machine. 39 | 5. Now run the Mask.py file. 40 | 6. Done. 41 | 42 | Note: Make sure that you're using the same TensorFlow and Keras versions on your local machine that you're using on Google Colab, otherwise you'll get an error. 43 | 44 | ## :key: Results 45 | 46 | #### Our model gave 92% accuracy for Face Mask Detection after training via tensorflow==2.3.0
47 | The model can be further improved by parameter tuning. 48 | 49 | ![](https://github.com/chandrikadeb7/Face-Mask-Detection/blob/master/Readme_images/Screenshot%202020-06-01%20at%209.48.27%20PM.png) 50 | 51 | #### We got the following accuracy/loss training curve plot 52 | ![](https://github.com/chandrikadeb7/Face-Mask-Detection/blob/master/plot.png) 53 | 54 | ## :clap: And it's done! 55 | Feel free to mail me for any doubts/queries 56 | :email: amark720@gmail.com 57 | 58 | ## :heart: Owner 59 | Made with :heart:  by [Amar Kumar](https://github.com/amark720) 60 | 61 | 62 | -------------------------------------------------------------------------------- /Covid19 FaceMask Detector (CNN & OpenCV)/man-mask-protective.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Covid19 FaceMask Detector (CNN & OpenCV)/man-mask-protective.jpg -------------------------------------------------------------------------------- /Covid19 FaceMask Detector (CNN & OpenCV)/mask.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | from tensorflow.keras.models import load_model 3 | from tensorflow.keras.preprocessing.image import load_img, img_to_array 4 | import numpy as np 5 | import tensorflow as tf 6 | print(tf.version.VERSION) 7 | 8 | model = load_model('model.h5') 9 | 10 | img_width, img_height = 150, 150 11 | 12 | face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') 13 | 14 | cap = cv2.VideoCapture('video.mp4') 15 | 16 | img_count_full = 0 17 | 18 | font = cv2.FONT_HERSHEY_SIMPLEX 19 | org = (1,1) 20 | class_label = '' 21 | fontScale = 1 22 | color = (0,0,255) 23 | thickness = 2 24 | 25 | while True: 26 | img_count_full += 1 27 | response, color_img = cap.read() 28 | 29 | if not response: 30 | break 31 | 32 | 33 | scale = 50 34 | width =
int(color_img.shape[1]*scale /100) 35 | height = int(color_img.shape[0]*scale/100) 36 | dim = (width,height) 37 | 38 | color_img = cv2.resize(color_img, dim ,interpolation= cv2.INTER_AREA) 39 | 40 | gray_img = cv2.cvtColor(color_img,cv2.COLOR_BGR2GRAY) 41 | 42 | faces = face_cascade.detectMultiScale(gray_img, 1.1, 6) 43 | 44 | img_count = 0 45 | for (x,y,w,h) in faces: 46 | org = (x+20,y+85) 47 | img_count += 1 48 | color_face = color_img[y:y+h,x:x+w] 49 | cv2.imwrite('input/%d%dface.jpg'%(img_count_full,img_count),color_face) 50 | img = load_img('input/%d%dface.jpg'%(img_count_full,img_count),target_size=(img_width,img_height)) 51 | img = img_to_array(img) 52 | img = np.expand_dims(img,axis=0) 53 | prediction = model.predict(img) 54 | 55 | 56 | if prediction==0: 57 | class_label = "Mask" 58 | color = (0,255,0) 59 | 60 | else: 61 | class_label = "No Mask" 62 | color = (0,0,255) 63 | 64 | 65 | cv2.rectangle(color_img,(x,y),(x+w,y+h),(255,0,0),3) 66 | cv2.putText(color_img, class_label, org, font, fontScale, color, thickness,cv2.LINE_AA) 67 | 68 | cv2.imshow('Face mask detection', color_img) 69 | if cv2.waitKey(1) & 0xFF == ord('q'): 70 | break 71 | 72 | cap.release() 73 | cv2.destroyAllWindows() 74 | 75 | 76 | 77 | -------------------------------------------------------------------------------- /Covid19 FaceMask Detector (CNN & OpenCV)/video.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Covid19 FaceMask Detector (CNN & OpenCV)/video.mp4 -------------------------------------------------------------------------------- /Covid19 FaceMask Detector (CNN & OpenCV)/video1.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Covid19 FaceMask Detector (CNN & 
OpenCV)/video1.mp4 -------------------------------------------------------------------------------- /Covid19 FaceMask Detector (CNN & OpenCV)/video2.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Covid19 FaceMask Detector (CNN & OpenCV)/video2.mp4 -------------------------------------------------------------------------------- /Covid19 FaceMask Detector (CNN & OpenCV)/women with mask.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Covid19 FaceMask Detector (CNN & OpenCV)/women with mask.jpg -------------------------------------------------------------------------------- /Image Background Remover App/InputImg.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Background Remover App/InputImg.jpg -------------------------------------------------------------------------------- /Image Background Remover App/OutputImg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Background Remover App/OutputImg.png -------------------------------------------------------------------------------- /Image Background Remover App/README.md: -------------------------------------------------------------------------------- 1 | # 🖼️ Image Background Remover 2 | 3 | ## **Description** 4 | The **Image Background Remover** is a Python-based desktop application that allows users to remove the background from an image easily. 
With an intuitive graphical user interface (GUI) built using **Tkinter**, the app enables users to upload an image, process it using the **rembg** library, and download the result with the background removed. The tool is especially useful for designers, e-commerce professionals, and anyone needing quick and clean image background removal. 5 | 6 | --- 7 | 8 | ## **Features** 9 | - User-friendly GUI for uploading and processing images. 10 | - Automatic background removal using AI-powered **rembg** library. 11 | - Preview of the processed image in the application. 12 | - Option to save the processed image in high-quality PNG format. 13 | 14 | --- 15 | 16 | ### Screenshots: 17 | 18 | |

Input Image

|

Output Image

| 19 | | ------------ | ------------ | 20 | 21 | --- 22 | 23 | ## **Technologies Used** 24 | - **Programming Language:** Python 25 | - **GUI Library:** Tkinter 26 | - **Background Removal:** rembg 27 | - **Image Processing:** Pillow (PIL) 28 | - **File Handling:** io 29 | 30 | --- 31 | 32 | ## **Installation Instructions** 33 | To run this application on your local machine, follow these steps: 34 | 35 | 1. **Clone the Repository:** 36 | ```bash 37 | git clone https://github.com/amark720/Computer-Vision-and-OpenCV-Projects.git 38 | cd "Image Background Remover App" 39 | ``` 40 | 2. **Create and Activate Virtual Environment (Optional):** 41 | ```bash 42 | python -m venv venv 43 | source venv/bin/activate # On Linux/Mac 44 | venv\Scripts\activate # On Windows 45 | ``` 46 | 3. **Install Required Packages:** 47 | 48 | Run the following command to install the dependencies: 49 | ```bash 50 | pip install -r requirements.txt 51 | ``` 52 | 53 | --- 54 | 55 | ## How to Use: 56 | 57 | 1. **Run the Application:** 58 | ```bash 59 | python image_background_remover.py 60 | ``` 61 | 2. Click on the "**Upload Image**" button to select an image from your system. 62 | 3. The application will process the image to remove its background. 63 | 4. A new window will display the processed image. 64 | 5. Use the "**Save Image**" button to download the result to your system in PNG format. 65 | 66 | --- 67 | 68 | ### Areas of Further Improvement 69 | 1. **Support for Batch Processing:** Add the ability to process multiple images simultaneously. 70 | 2. **Cloud Integration:** Integrate cloud storage (e.g., AWS S3 or Google Drive) for uploading and saving images. 71 | 3. **Quality Enhancement:** Improve the resolution and clarity of the output by fine-tuning the background removal model. 72 | 4. **Custom Backgrounds:** Allow users to replace the background with custom images or colors. 73 | 5. 
**Cross-Platform Support:** Package the application into standalone executables for Windows, macOS, and Linux. 74 | 75 | --- 76 | 77 | ### Conclusion 78 | The Image Background Remover simplifies the tedious task of removing image backgrounds, making it quick and accessible for everyone. 79 | This project serves as a foundation for more advanced image editing tools and demonstrates the power of Python and Data Science in building practical applications. 80 | 81 | ### Contributions 82 | Contributions are welcome! Feel free to fork this repository, submit issues, or create pull requests to improve the project. 83 | 84 | ### Acknowledgments 85 | - The rembg library for its excellent background removal capabilities. 86 | - The Python and open-source community for providing robust libraries and tools. 87 | 88 | #### 📧 Feel Free to contact me at➛ amark720@gmail.com for any help related to this Project! 89 | -------------------------------------------------------------------------------- /Image Background Remover App/image_background_remover.py: -------------------------------------------------------------------------------- 1 | import tkinter as tk 2 | from tkinter import filedialog 3 | from PIL import Image, ImageTk 4 | from PIL import ImageFilter 5 | from rembg import remove 6 | import io 7 | 8 | 9 | def remove_background(): 10 | # Open file dialog to select image 11 | file_path = filedialog.askopenfilename( 12 | title="Select an Image", 13 | filetypes=[("Image Files", "*.png *.jpg *.jpeg *.bmp")] 14 | ) 15 | 16 | if not file_path: 17 | return # Exit if no file is selected 18 | 19 | try: 20 | # Read and process the image 21 | with open(file_path, "rb") as f: 22 | input_image = f.read() 23 | output_image = remove(input_image, alpha_matting=True) 24 | 25 | # Load the image with background removed 26 | output_image = Image.open(io.BytesIO(output_image)).convert("RGBA") 27 | output_image = output_image.filter(ImageFilter.DETAIL) 28 | 29 | # Show the processed image 30 | 
show_image(output_image) 31 | except Exception as e: 32 | print(f"Error: {e}") 33 | 34 | 35 | def show_image(image): 36 | # Display the image in a new window 37 | window = tk.Toplevel() 38 | window.title("Background Removed") 39 | window.geometry("1000x1000") 40 | 41 | # Resize image to fit within the window 42 | original_image = image 43 | image.thumbnail((800, 800)) 44 | img_tk = ImageTk.PhotoImage(image) 45 | 46 | label = tk.Label(window, image=img_tk) 47 | label.image = img_tk 48 | label.pack() 49 | 50 | # Save button to download the processed image 51 | save_button = tk.Button( 52 | window, text="Save Image", command=lambda: save_image(original_image) 53 | ) 54 | save_button.pack() 55 | 56 | 57 | def save_image(image): 58 | # Save the processed image 59 | file_path = filedialog.asksaveasfilename( 60 | defaultextension=".png", filetypes=[("PNG files", "*.png")] 61 | ) 62 | if file_path: 63 | image.save(file_path) 64 | print(f"Image saved at: {file_path}") 65 | 66 | 67 | # Main Application Window 68 | root = tk.Tk() 69 | root.title("Image Background Remover") 70 | root.geometry("300x200") 71 | 72 | # Add Buttons 73 | upload_button = tk.Button( 74 | root, text="Upload Image", command=remove_background, width=20, height=2 75 | ) 76 | upload_button.pack(pady=50) 77 | 78 | root.mainloop() 79 | -------------------------------------------------------------------------------- /Image Background Remover App/requirements.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Background Remover App/requirements.txt -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/README.md: -------------------------------------------------------------------------------- 1 | # Image classification with ResNet50 2 | Doing cool things with data doesn't always need to be 
difficult. By using ResNet-50 you don't have to start from scratch when building a classifier model and making predictions with it. This project is a beginner's guide to ResNet-50. In the following you will get a short introduction to ResNet-50 and a simple project on how to use it for image classification in Python. 3 | 4 | Here I've created a program that uses ResNet-50 to predict which category an image belongs to. The model builds on the existing pretrained ResNet-50 deep learning network. I've also uploaded the code on Kaggle. 5 | 6 | ### What is ResNet-50 and why use it for image classification? 7 | ResNet-50 is a pretrained deep learning model for image classification, a Convolutional Neural Network (CNN, or ConvNet), which is a class of deep neural networks most commonly applied to analyzing visual imagery. ResNet-50 is 50 layers deep and is trained on a million images of 1000 categories from the ImageNet database. Furthermore, the model has over 23 million trainable parameters, which indicates a deep architecture that makes it better for image recognition. Using a pretrained model is a highly effective approach compared with building one from scratch, where you would need to collect great amounts of data and train the model yourself. Of course, there are other pretrained deep models to use, such as AlexNet, GoogLeNet or VGG19, but ResNet-50 is noted for excellent generalization performance with lower error rates on recognition tasks and is therefore a useful tool to know. 8 | 9 | ## ScreenShots: 10 | 11 | ### Single Image Classification- 12 | 13 | 14 | 15 | ### Multiple Image Classification- 16 | 17 | 18 | 19 | 20 | ### Kaggle Notebook Link -> https://www.kaggle.com/datawarriors/image-classifier-using-resnet50 21 | 22 | #### Feel Free to contact me at➛ amark720@gmail.com for any help related to this Project!
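The single-image workflow described above can be sketched with the ResNet-50 bundled in Keras Applications. This is a minimal sketch assuming TensorFlow 2.x is installed; the random array stands in for a real photo, which you would normally load with `tensorflow.keras.preprocessing.image.load_img('images/banana.jpg', target_size=(224, 224))` and convert via `img_to_array`:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)

# Downloads the ImageNet weights (~100 MB) on the first run.
model = ResNet50(weights='imagenet')

# ResNet-50 expects a batch of 224x224 RGB images; a random image
# stands in here for one loaded from disk.
x = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype('float32')
x = preprocess_input(x)  # channel-wise mean subtraction, as used in training

preds = model.predict(x)           # one probability per ImageNet class
top3 = decode_predictions(preds, top=3)[0]
for _, label, prob in top3:
    print(f'{label}: {prob:.3f}')
```

The same pattern loops over a folder of images for the multi-image case shown in the screenshots.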
23 | -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/Screenshot1.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/Screenshot1.PNG -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/Screenshot2.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/Screenshot2.PNG -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/Image1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/Image1.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/Image3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/Image3.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/Scooter.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/Scooter.jpg -------------------------------------------------------------------------------- 
/Image Classifier Using Resnet50/images/banana.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/banana.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/car.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/car.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/image10.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/image10.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/image11.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/image11.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/image2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/image2.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/image4.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/image4.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/image6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/image6.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/image8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/image8.jpg -------------------------------------------------------------------------------- /Image Classifier Using Resnet50/images/image9.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Image Classifier Using Resnet50/images/image9.jpg -------------------------------------------------------------------------------- /OpenCV Face Detection/Face+Eyes_detection_App.py: -------------------------------------------------------------------------------- 1 | ''' 2 | step1. 
GoTo Command Prompt and install the opencv package using the command 'pip install opencv-python', 3 | 4 | then run the code below. 5 | ''' 6 | 7 | import numpy as np 8 | import cv2, time 9 | 10 | # We point OpenCV's CascadeClassifier function to where our 11 | # classifier (XML file format) is stored 12 | #face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') 13 | 14 | face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml') 15 | eye_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml') 16 | 17 | # Update your image path Here! 18 | img = cv2.imread('C:/Users/gsc-30431/PycharmProjects/test1.py/Python_Projects/FaceDetection_App/img3.jpg') 19 | 20 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) 21 | 22 | faces = face_classifier.detectMultiScale(gray, 1.3, 5) 23 | 24 | # When no faces are detected, face_classifier returns an empty tuple 25 | if len(faces) == 0: 26 | print("No Face Found") 27 | 28 | for (x, y, w, h) in faces: 29 | cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2) 30 | cv2.imshow('img', img) 31 | cv2.waitKey(100) 32 | roi_gray = gray[y:y+h, x:x+w] 33 | roi_color = img[y:y+h, x:x+w] 34 | eyes = eye_classifier.detectMultiScale(roi_gray) 35 | for (ex, ey, ew, eh) in eyes: 36 | cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2) 37 | cv2.imshow('img', img) 38 | cv2.waitKey(500) 39 | 40 | cv2.waitKey(500) 41 | 42 | 43 | cv2.waitKey(0) 44 | cv2.destroyAllWindows() -------------------------------------------------------------------------------- /OpenCV Face Detection/FaceDetection_App.py: -------------------------------------------------------------------------------- 1 | ''' 2 | step1.
GoTo Command Prompt and install the opencv package using the command 'pip install opencv-python', 3 | 4 | then run the code below. 5 | ''' 6 | 7 | import numpy as np 8 | import cv2, time 9 | 10 | # We point OpenCV's CascadeClassifier function to where our 11 | # classifier (XML file format) is stored 12 | #face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') 13 | 14 | face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml') 15 | eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml') 16 | 17 | # Load our image then convert it to grayscale 18 | # Update your image path Here! 19 | image = cv2.imread('C:/Users/gsc-30431/PycharmProjects/test1.py/Python_Projects/FaceDetection_App/img2.jpg') 20 | gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) 21 | 22 | # Our classifier returns the ROI of the detected face as a tuple 23 | # It stores the top left coordinate and the bottom right coordinates 24 | faces = face_classifier.detectMultiScale(gray, 1.3, 5) 25 | 26 | # When no faces are detected, face_classifier returns an empty tuple 27 | if len(faces) == 0: 28 | print("No faces found") 29 | 30 | # We iterate through our faces array and draw a rectangle 31 | # over each face in faces 32 | for (x, y, w, h) in faces: 33 | cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2) 34 | cv2.imshow('Face Detection', image) 35 | cv2.waitKey(1200) 36 | cv2.waitKey(0) 37 | cv2.destroyAllWindows() -------------------------------------------------------------------------------- /OpenCV Face Detection/Face_detection_using_webcam.py: -------------------------------------------------------------------------------- 1 | ''' 2 | step1.
GoTo Command Prompt and install the opencv package using the command 'pip install opencv-python', 3 | 4 | then run the code below. 5 | ''' 6 | 7 | 8 | import numpy as np 9 | import cv2, time 10 | 11 | # We point OpenCV's CascadeClassifier function to where our 12 | # classifier (XML file format) is stored 13 | #face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') 14 | 15 | face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml') 16 | eye_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml') 17 | 18 | def face_detector(img, size=0.5): 19 | # Convert image to grayscale 20 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) 21 | faces = face_classifier.detectMultiScale(gray, 1.3, 5) 22 | if len(faces) == 0: 23 | return img 24 | 25 | for (x, y, w, h) in faces: 26 | x = x - 50 27 | w = w + 50 28 | y = y - 50 29 | h = h + 50 30 | cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2) 31 | roi_gray = gray[y:y+h, x:x+w] 32 | roi_color = img[y:y+h, x:x+w] 33 | eyes = eye_classifier.detectMultiScale(roi_gray) 34 | 35 | for (ex, ey, ew, eh) in eyes: 36 | cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 0, 255), 2) 37 | 38 | roi_color = cv2.flip(roi_color, 1) 39 | return roi_color 40 | 41 | cap = cv2.VideoCapture(0) 42 | 43 | while True: 44 | 45 | ret, frame = cap.read() 46 | cv2.imshow('Our Face Extractor', face_detector(frame)) 47 | if cv2.waitKey(1) == 13: # 13 is the Enter Key 48 | break 49 | 50 | cap.release() 51 | cv2.destroyAllWindows() -------------------------------------------------------------------------------- /OpenCV Face Detection/Output.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/OpenCV Face Detection/Output.PNG -------------------------------------------------------------------------------- /OpenCV Face
Detection/README.md: -------------------------------------------------------------------------------- 1 | # OpenCV-Face-Detection 2 | Implements OpenCV for detecting faces and eyes, both from an input image and from a live webcam feed, using Haar cascades. It was fun working with OpenCV and implementing new things. 3 | 4 | ## Screenshot 5 | 6 | ![alt text](https://github.com/amark720/OpenCV-Face-Detetection/blob/master/Output.PNG?raw=true) 7 | -------------------------------------------------------------------------------- /Readme.md: -------------------------------------------------------------------------------- 1 | # Computer Vision & OpenCV Projects! 2 | Python TensorFlow Flask Firebase Keras AWS 3 | 4 | 5 | ## Overview 6 | • This Repository consists of Computer Vision Projects made by me.
7 | • Datasets are provided in each of the folders above, along with the solutions to the problem statements.
8 | • Visit each folder to access the Projects in detail. 9 | 10 | Landing Page 11 | 12 | 13 | ### Don't forget to ⭐ the repository if it helped you in any way.
14 | 15 | ### Repo Stats: 16 | [![GitHub](https://img.shields.io/github/followers/amark720?style=social)](https://github.com/amark720)   [![GitHub](https://img.shields.io/github/stars/amark720/Computer-Vision-and-OpenCV-Projects?style=social)](https://github.com/amark720/Computer-Vision-and-OpenCV-Projects)   [![GitHub](https://img.shields.io/github/forks/amark720/Computer-Vision-and-OpenCV-Projects?style=social)](https://github.com/amark720/Computer-Vision-and-OpenCV-Projects) 17 | 18 | #### Feel Free to contact me at➛ databoyamar@gmail.com for any help related to Projects in this Repository! 19 | -------------------------------------------------------------------------------- /Scraping Text Data from Image/InvoiceToText Recording.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Scraping Text Data from Image/InvoiceToText Recording.gif -------------------------------------------------------------------------------- /Scraping Text Data from Image/OCR_Invoice_to_Text.py: -------------------------------------------------------------------------------- 1 | ''' 2 | step1. Go to Command Prompt and install pytesseract package using command 'pip install pytesseract' 3 | step2. Go to this link - https://github.com/ub-mannheim/tesseract/wiki and download 4 | the 'tesseract-ocr-w64-setup-v5.0.0-alpha.20200328.exe (64 bit)' setup and install it on your machine. 5 | 6 | After that run the below code 7 | ''' 8 | 9 | import pytesseract # Importing the package installed in step1.
10 | from PIL import Image 11 | 12 | pytesseract.pytesseract.tesseract_cmd = r"C:/Users/gsc-30431/AppData/Local/Tesseract-OCR/tesseract.exe" 13 | # Provide the path of tesseract.exe file which was installed in step2 14 | 15 | 16 | def convert(): 17 | img = Image.open('C:/Users/gsc-30431/PycharmProjects/test1.py/Python_Projects/OCR_Img_to_Text/invoice4.png') 18 | # Provide the path of the image which is to be converted into text 19 | text = pytesseract.image_to_string(img) 20 | print(text) 21 | return text 22 | 23 | 24 | text = convert() 25 | lines = text.split('\n') # Splitting the text line by line 26 | lines = [x for x in lines if x] # Removing blank items from the list 27 | print(lines) 28 | 29 | item_start_index = lines.index('Item') 30 | item_end_index = lines.index('Date') 31 | date_end_index = lines.index('‘Amount (E)') # OCR garbles this header, so match it as-is 32 | amount_end_index = lines.index('Reason') 33 | 34 | dictionary = dict() 35 | dictionary['Items'] = [] 36 | for i in range(item_start_index + 1, item_end_index): # skip the 'Item' header itself 37 | dictionary['Items'].append(lines[i]) 38 | print("Items are: ", dictionary['Items']) 39 | 40 | dictionary['Date'] = [] 41 | for i in range(item_end_index + 1, date_end_index): 42 | dictionary['Date'].append(lines[i]) 43 | print("Dates are: ", dictionary['Date']) 44 | 45 | dictionary['Amount'] = [] 46 | for i in range(date_end_index + 1, date_end_index + 6): 47 | dictionary['Amount'].append(lines[i]) 48 | print("Amounts are: ", dictionary['Amount']) 49 | 50 | dictionary['Reason'] = [] 51 | for i in range(amount_end_index + 1, amount_end_index + 6): 52 | dictionary['Reason'].append(lines[i]) 53 | print("Reasons are: ", dictionary['Reason']) 54 | 55 | print(dictionary) -------------------------------------------------------------------------------- /Scraping Text Data from Image/Readme.md: -------------------------------------------------------------------------------- 1 | # Scraping Text Data from Invoice 2 | ![Python
3.9](https://img.shields.io/badge/Python-3.9-brightgreen.svg) ![PIL](https://img.shields.io/badge/PIL-1.1.7-blueviolet) ![pytesseract](https://img.shields.io/badge/pytesseract-0.3.4-yellow) 3 | 4 | 5 | ## **Introduction** 6 | This project demonstrates how to extract and process text from invoice images using **Python**, **pytesseract**, and **Pillow**. It automates the process of reading invoices and converting the information into a structured dictionary format for easy data handling. The extracted data can be further used for financial record-keeping, auditing, or integrating into databases. 7 | 8 | 9 | ## **Features** 10 | - Extracts text data from images using **Tesseract OCR**. 11 | - Splits and cleans the extracted text for structured processing. 12 | - Stores extracted details in a **dictionary** with key-value pairs for: 13 | - Items 14 | - Dates 15 | - Amounts 16 | - Reasons 17 | - Customizable preprocessing for different invoice formats. 18 | 19 | 20 | ### View ScreenRecording for Live Demo: 21 | 22 | 23 | 24 | ## **Installation Instructions** 25 | #### Step 1: Clone the Repository 26 | ```bash 27 | git clone https://github.com/amark720/Computer-Vision-and-OpenCV-Projects.git 28 | cd "Scraping Text Data from Image" 29 | ``` 30 | 31 | #### Step 2: Install Required Libraries 32 | Install the necessary Python libraries: 33 | ```bash 34 | pip install pytesseract pillow 35 | ``` 36 | 37 | #### Step 3: Install Tesseract OCR 38 | Download and install **Tesseract OCR** from [HERE](https://github.com/tesseract-ocr/tesseract): 39 | - Use the installer: `tesseract-ocr-w64-setup-v5.0.0-alpha.20200328.exe (64-bit)` 40 | - During installation, note the installation directory for later use. 41 | 42 | #### Step 4: Update Paths in Code 43 | - Update the `tesseract_cmd` variable in the code with the path to your `tesseract.exe` file. 44 | - Provide the path to the invoice image you wish to process. 
45 | 46 | 47 | ## **Technologies Used** 48 | - **Programming Language:** Python 3.9 49 | - **OCR Tool:** Tesseract OCR 50 | - **Libraries:** 51 | - `pytesseract`: For extracting text from images. 52 | - `Pillow`: For image loading and manipulation. 53 | 54 | 55 | ## **How to Use** 56 | 1. **Prepare the Invoice Image:** 57 | - Place your invoice image in a known directory. 58 | - Ensure the text in the image is clear and legible. 59 | 60 | 2. **Run the Script:** 61 | ```bash 62 | python OCR_Invoice_to_Text.py 63 | ``` 64 | 65 | 3. **Customize Text Preprocessing:** 66 | - Modify the text cleaning logic to suit your invoice format. 67 | - Update indices in the code if the invoice structure changes. 68 | 69 | 4. **Output:** 70 | - Extracted data is displayed in the console as a structured dictionary: 71 | ```json 72 | { 73 | "Items": [...], 74 | "Date": [...], 75 | "Amount": [...], 76 | "Reason": [...] 77 | } 78 | ``` 79 | 80 | 81 | ## **Areas of Further Improvement** 82 | - **Invoice Format Detection:** 83 | - Add automatic detection and customization for different invoice templates. 84 | - **GUI Integration:** 85 | - Build a user-friendly interface for uploading images and displaying results. 86 | - **Database Storage:** 87 | - Save the extracted data directly to a database for long-term use. 88 | - **Batch Processing:** 89 | - Enable processing of multiple invoices simultaneously. 90 | 91 | 92 | ## **Conclusion** 93 | This project provides a foundation for extracting and organizing data from invoice images using OCR. It is customizable and can be scaled for various business needs, such as automating financial data entry. 94 | 95 | 96 | ## **Acknowledgments** 97 | - Thanks to the **Tesseract OCR** team for providing a powerful open-source OCR tool. 98 | - Special thanks to the developers of **Pillow** and **pytesseract** libraries for seamless Python integrations. 
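The extraction in `OCR_Invoice_to_Text.py` relies on hard-coded `index()` lookups and slice ranges, which is the main thing to adapt for a new template. As a rough sketch of a more reusable approach, the hypothetical helper below groups lines under whichever known header appeared last; the header names are assumptions, and real OCR output may garble them (as the `'‘Amount (E)'` lookup in the script shows), so adjust the list to match your scanned output:

```python
# Hypothetical helper: group OCR output lines under known section headers
# instead of relying on hard-coded line indices.
ASSUMED_HEADERS = ["Item", "Date", "Amount", "Reason"]  # adjust per invoice template

def group_by_headers(text, headers=ASSUMED_HEADERS):
    # Drop blank lines, then walk the text once, remembering the last header seen.
    lines = [line.strip() for line in text.split("\n") if line.strip()]
    sections = {}
    current = None
    for line in lines:
        if line in headers:          # a header line starts a new section
            current = line
            sections[current] = []
        elif current is not None:    # other lines belong to the last header seen
            sections[current].append(line)
    return sections
```

With this shape, supporting a different invoice layout only means editing the header list rather than recounting slice indices.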
99 | 100 | 101 | #### 📧 Feel Free to contact me at➛ **amark720@gmail.com** for any assistance or questions related to this project! 102 | -------------------------------------------------------------------------------- /Scraping Text Data from Image/invoice4.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Scraping Text Data from Image/invoice4.PNG -------------------------------------------------------------------------------- /Text Recognizer Android App (FireBase + AutoML)/App Demo Video.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Text Recognizer Android App (FireBase + AutoML)/App Demo Video.gif -------------------------------------------------------------------------------- /Text Recognizer Android App (FireBase + AutoML)/Readme.md: -------------------------------------------------------------------------------- 1 | # Text Recognizer Android App (FireBase + ML Kit) 2 | Firebase Android Java 3 | 4 | Optical Character Recognition (OCR) is the ability of a mobile device to read text that appears in an image. We will create an Android app that uses FireBase & ML Kit to recognize text from images. It runs on an Android device: the user uploads an image from their gallery into the app, and the app extracts the text from it. Go through the video below, which explains the working of the application in detail. 
5 | 6 | ## ScreenRecording: 7 | [![Demo Doccou alpha](https://github.com/amark720/Computer-Vision-and-OpenCV-Projects/blob/main/Text%20Recognizer%20Android%20App%20(FireBase%20%2B%20AutoML)/App%20Demo%20Video.gif)](https://github.com/amark720/Computer-Vision-and-OpenCV-Projects/blob/main/Text%20Recognizer%20Android%20App%20(FireBase%20%2B%20AutoML)/App%20Demo%20Video.gif) 8 | 9 | **Note:** 10 | * Anyone can try this app on their Android device. Just download TextRecognizer.apk from the files uploaded above, install it on your device, and try the App. 11 | * If you want to modify and further improve the App, download "TextRecognizer Full Project.zip", extract it, and import the project into Android Studio. 12 | 13 | 14 | ## Screenshot: 15 | 16 | ####           Landing Page!                    Output Page! 17 | Landing Page 18 | 19 | 20 | #### Feel Free to contact me at➛ amark720@gmail.com for any help related to this Project! 21 | 22 | 26 | -------------------------------------------------------------------------------- /Text Recognizer Android App (FireBase + AutoML)/ScreenShot.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Text Recognizer Android App (FireBase + AutoML)/ScreenShot.jpg -------------------------------------------------------------------------------- /Text Recognizer Android App (FireBase + AutoML)/TextRecognizer Full Project.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Text Recognizer Android App (FireBase + AutoML)/TextRecognizer Full Project.zip -------------------------------------------------------------------------------- /Text Recognizer Android App (FireBase + AutoML)/TextRecognizer.apk: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/amark720/Computer-Vision-and-OpenCV-Projects/7a144a3b2ba6f089b085f274ac4e505a8bee8dce/Text Recognizer Android App (FireBase + AutoML)/TextRecognizer.apk -------------------------------------------------------------------------------- /Text Recognizer Android App (FireBase + AutoML)/output-metadata.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": 1, 3 | "artifactType": { 4 | "type": "APK", 5 | "kind": "Directory" 6 | }, 7 | "applicationId": "com.example.textrecognizer", 8 | "variantName": "debug", 9 | "elements": [ 10 | { 11 | "type": "SINGLE", 12 | "filters": [], 13 | "properties": [], 14 | "versionCode": 1, 15 | "versionName": "1.0", 16 | "enabled": true, 17 | "outputFile": "app-debug.apk" 18 | } 19 | ] 20 | } --------------------------------------------------------------------------------