Find Your Car Brand by Simply Uploading a Car Photo!
4 |
5 |
6 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 | {% endblock %}
--------------------------------------------------------------------------------
/Cat Dog Images Classifier (CNN + Keras)/Cat_Dog_Classifier_Using_CNN.ipynb:
--------------------------------------------------------------------------------
1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Convolutional Neural Network","provenance":[],"collapsed_sections":[],"toc_visible":true,"authorship_tag":"ABX9TyN4RwM22jdD+NwpsDagcktL"},"kernelspec":{"name":"python3","display_name":"Python 3"}},"cells":[{"cell_type":"markdown","metadata":{"id":"3DR-eO17geWu","colab_type":"text"},"source":["# Convolutional Neural Network"]},{"cell_type":"markdown","metadata":{"id":"EMefrVPCg-60","colab_type":"text"},"source":["### Importing the libraries"]},{"cell_type":"code","metadata":{"id":"sCV30xyVhFbE","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":34},"outputId":"41e39496-ad7b-45be-8cb5-5ae492405174","executionInfo":{"status":"ok","timestamp":1586435320041,"user_tz":-240,"elapsed":2561,"user":{"displayName":"Hadelin de Ponteves","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64","userId":"15047218817161520419"}}},"source":["import tensorflow as tf\n","from keras.preprocessing.image import ImageDataGenerator"],"execution_count":1,"outputs":[{"output_type":"stream","text":["Using TensorFlow backend.\n"],"name":"stderr"}]},{"cell_type":"code","metadata":{"id":"FIleuCAjoFD8","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":34},"outputId":"9f4bbca7-a8c6-4a14-8354-82c989248f45","executionInfo":{"status":"ok","timestamp":1586435320042,"user_tz":-240,"elapsed":2558,"user":{"displayName":"Hadelin de Ponteves","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64","userId":"15047218817161520419"}}},"source":["tf.__version__"],"execution_count":2,"outputs":[{"output_type":"execute_result","data":{"text/plain":["'2.2.0-rc2'"]},"metadata":{"tags":[]},"execution_count":2}]},{"cell_type":"markdown","metadata":{"id":"oxQxCBWyoGPE","colab_type":"text"},"source":["## Part 1 - Data 
Preprocessing"]},{"cell_type":"markdown","metadata":{"id":"y8K74-1foOic","colab_type":"text"},"source":["### Generating images for the Training set"]},{"cell_type":"code","metadata":{"id":"OlH2WYQ5ocVO","colab_type":"code","colab":{}},"source":["train_datagen = ImageDataGenerator(rescale = 1./255,\n"," shear_range = 0.2,\n"," zoom_range = 0.2,\n"," horizontal_flip = True)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"LXXei7qHornJ","colab_type":"text"},"source":["### Generating images for the Test set"]},{"cell_type":"code","metadata":{"id":"T9It49laowGX","colab_type":"code","colab":{}},"source":["test_datagen = ImageDataGenerator(rescale = 1./255)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"MvE-heJNo3GG","colab_type":"text"},"source":["### Creating the Training set"]},{"cell_type":"code","metadata":{"id":"0koUcJMJpEBD","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":34},"outputId":"9d177ed1-04eb-4b72-a6f6-d9da3528fb20","executionInfo":{"status":"ok","timestamp":1586435320043,"user_tz":-240,"elapsed":2544,"user":{"displayName":"Hadelin de Ponteves","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64","userId":"15047218817161520419"}}},"source":["training_set = train_datagen.flow_from_directory('dataset/training_set',\n"," target_size = (64, 64),\n"," batch_size = 32,\n"," class_mode = 'binary')"],"execution_count":5,"outputs":[{"output_type":"stream","text":["Found 334 images belonging to 3 classes.\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"mrCMmGw9pHys","colab_type":"text"},"source":["### Creating the Test 
set"]},{"cell_type":"code","metadata":{"id":"SH4WzfOhpKc3","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":34},"outputId":"ea549ecd-e7b7-408c-df58-cbbf77fb58db","executionInfo":{"status":"ok","timestamp":1586435320044,"user_tz":-240,"elapsed":2543,"user":{"displayName":"Hadelin de Ponteves","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64","userId":"15047218817161520419"}}},"source":["test_set = test_datagen.flow_from_directory('dataset/test_set',\n"," target_size = (64, 64),\n"," batch_size = 32,\n"," class_mode = 'binary')"],"execution_count":6,"outputs":[{"output_type":"stream","text":["Found 334 images belonging to 3 classes.\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"af8O4l90gk7B","colab_type":"text"},"source":["## Part 2 - Building the CNN"]},{"cell_type":"markdown","metadata":{"id":"ces1gXY2lmoX","colab_type":"text"},"source":["### Initialising the CNN"]},{"cell_type":"code","metadata":{"id":"SAUt4UMPlhLS","colab_type":"code","colab":{}},"source":["cnn = tf.keras.models.Sequential()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"u5YJj_XMl5LF","colab_type":"text"},"source":["### Step 1 - Convolution"]},{"cell_type":"code","metadata":{"id":"XPzPrMckl-hV","colab_type":"code","colab":{}},"source":["cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding=\"same\", activation=\"relu\", input_shape=[64, 64, 3]))"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"tf87FpvxmNOJ","colab_type":"text"},"source":["### Step 2 - Pooling"]},{"cell_type":"code","metadata":{"id":"ncpqPl69mOac","colab_type":"code","colab":{}},"source":["cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"xaTOgD8rm4mU","colab_type":"text"},"source":["### Adding a second convolutional 
layer"]},{"cell_type":"code","metadata":{"id":"i_-FZjn_m8gk","colab_type":"code","colab":{}},"source":["cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding=\"same\", activation=\"relu\"))\n","cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"tmiEuvTunKfk","colab_type":"text"},"source":["### Step 3 - Flattening"]},{"cell_type":"code","metadata":{"id":"6AZeOGCvnNZn","colab_type":"code","colab":{}},"source":["cnn.add(tf.keras.layers.Flatten())"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"dAoSECOm203v","colab_type":"text"},"source":["### Step 4 - Full Connection"]},{"cell_type":"code","metadata":{"id":"8GtmUlLd26Nq","colab_type":"code","colab":{}},"source":["cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yTldFvbX28Na","colab_type":"text"},"source":["### Step 5 - Output Layer"]},{"cell_type":"code","metadata":{"id":"1p_Zj1Mc3Ko_","colab_type":"code","colab":{}},"source":["cnn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"D6XkI90snSDl","colab_type":"text"},"source":["## Part 3 - Training the CNN"]},{"cell_type":"markdown","metadata":{"id":"vfrFQACEnc6i","colab_type":"text"},"source":["### Compiling the CNN"]},{"cell_type":"code","metadata":{"id":"NALksrNQpUlJ","colab_type":"code","colab":{}},"source":["cnn.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"ehS-v3MIpX2h","colab_type":"text"},"source":["### Training the CNN on the Training set and evaluating it on the Test 
set"]},{"cell_type":"code","metadata":{"id":"XUj1W4PJptta","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":924},"outputId":"3de830db-5a3c-41ea-c318-3dd55b8099aa","executionInfo":{"status":"ok","timestamp":1586438210865,"user_tz":-240,"elapsed":2893328,"user":{"displayName":"Hadelin de Ponteves","photoUrl":"https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64","userId":"15047218817161520419"}}},"source":["cnn.fit_generator(training_set,\n"," steps_per_epoch = 334,\n"," epochs = 25,\n"," validation_data = test_set,\n"," validation_steps = 334)"],"execution_count":15,"outputs":[{"output_type":"stream","text":["WARNING:tensorflow:From :5: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\n","Instructions for updating:\n","Please use Model.fit, which supports generators.\n","Epoch 1/25\n","334/334 [==============================] - 116s 346ms/step - loss: -37484704.0000 - accuracy: 0.4987 - val_loss: -218133216.0000 - val_accuracy: 0.4999\n","Epoch 2/25\n","334/334 [==============================] - 115s 343ms/step - loss: -1530877952.0000 - accuracy: 0.5002 - val_loss: -4208331776.0000 - val_accuracy: 0.5004\n","Epoch 3/25\n","334/334 [==============================] - 114s 342ms/step - loss: -10730311680.0000 - accuracy: 0.4992 - val_loss: -21080999936.0000 - val_accuracy: 0.5000\n","Epoch 4/25\n","334/334 [==============================] - 114s 343ms/step - loss: -38008516608.0000 - accuracy: 0.4999 - val_loss: -62359158784.0000 - val_accuracy: 0.4993\n","Epoch 5/25\n","334/334 [==============================] - 114s 342ms/step - loss: -95429894144.0000 - accuracy: 0.5003 - val_loss: -141409304576.0000 - val_accuracy: 0.5004\n","Epoch 6/25\n","334/334 [==============================] - 114s 342ms/step - loss: -195819356160.0000 - accuracy: 0.5002 - val_loss: -270405451776.0000 - val_accuracy: 0.5005\n","Epoch 7/25\n","334/334 
[==============================] - 114s 343ms/step - loss: -350490820608.0000 - accuracy: 0.4992 - val_loss: -460973342720.0000 - val_accuracy: 0.4995\n","Epoch 8/25\n","334/334 [==============================] - 115s 343ms/step - loss: -567733321728.0000 - accuracy: 0.5008 - val_loss: -721733025792.0000 - val_accuracy: 0.5005\n","Epoch 9/25\n","334/334 [==============================] - 119s 357ms/step - loss: -871657046016.0000 - accuracy: 0.4992 - val_loss: -1073665343488.0000 - val_accuracy: 0.4997\n","Epoch 10/25\n","334/334 [==============================] - 115s 344ms/step - loss: -1252980817920.0000 - accuracy: 0.5005 - val_loss: -1517686751232.0000 - val_accuracy: 0.5002\n","Epoch 11/25\n","334/334 [==============================] - 115s 345ms/step - loss: -1743530622976.0000 - accuracy: 0.5000 - val_loss: -2079168659456.0000 - val_accuracy: 0.4998\n","Epoch 12/25\n","334/334 [==============================] - 115s 345ms/step - loss: -2341885968384.0000 - accuracy: 0.5000 - val_loss: -2746375471104.0000 - val_accuracy: 0.5003\n","Epoch 13/25\n","334/334 [==============================] - 115s 344ms/step - loss: -3043618455552.0000 - accuracy: 0.5000 - val_loss: -3542708387840.0000 - val_accuracy: 0.4995\n","Epoch 14/25\n","334/334 [==============================] - 115s 343ms/step - loss: -3882742448128.0000 - accuracy: 0.5001 - val_loss: -4493544521728.0000 - val_accuracy: 0.4996\n","Epoch 15/25\n","334/334 [==============================] - 121s 361ms/step - loss: -4904350908416.0000 - accuracy: 0.4998 - val_loss: -5572023812096.0000 - val_accuracy: 0.4997\n","Epoch 16/25\n","334/334 [==============================] - 115s 344ms/step - loss: -6028249268224.0000 - accuracy: 0.4999 - val_loss: -6827078057984.0000 - val_accuracy: 0.5000\n","Epoch 17/25\n","334/334 [==============================] - 115s 345ms/step - loss: -7348382859264.0000 - accuracy: 0.5007 - val_loss: -8296905310208.0000 - val_accuracy: 0.4998\n","Epoch 18/25\n","334/334 
[==============================] - 115s 345ms/step - loss: -8830923571200.0000 - accuracy: 0.4989 - val_loss: -9900669272064.0000 - val_accuracy: 0.5004\n","Epoch 19/25\n","334/334 [==============================] - 115s 345ms/step - loss: -10479195914240.0000 - accuracy: 0.5007 - val_loss: -11772465512448.0000 - val_accuracy: 0.5003\n","Epoch 20/25\n","334/334 [==============================] - 116s 346ms/step - loss: -12438010331136.0000 - accuracy: 0.5002 - val_loss: -13772319096832.0000 - val_accuracy: 0.4996\n","Epoch 21/25\n","334/334 [==============================] - 115s 345ms/step - loss: -14471482310656.0000 - accuracy: 0.4999 - val_loss: -15963112079360.0000 - val_accuracy: 0.4998\n","Epoch 22/25\n","334/334 [==============================] - 115s 345ms/step - loss: -16802979512320.0000 - accuracy: 0.4998 - val_loss: -18570987700224.0000 - val_accuracy: 0.5004\n","Epoch 23/25\n","334/334 [==============================] - 115s 345ms/step - loss: -19371355275264.0000 - accuracy: 0.4995 - val_loss: -21191286849536.0000 - val_accuracy: 0.4998\n","Epoch 24/25\n","334/334 [==============================] - 116s 346ms/step - loss: -22239346950144.0000 - accuracy: 0.5005 - val_loss: -24250689781760.0000 - val_accuracy: 0.5002\n","Epoch 25/25\n","334/334 [==============================] - 115s 345ms/step - loss: -25315573235712.0000 - accuracy: 0.5000 - val_loss: -27645225992192.0000 - val_accuracy: 0.4999\n"],"name":"stdout"},{"output_type":"execute_result","data":{"text/plain":[""]},"metadata":{"tags":[]},"execution_count":15}]}]}
--------------------------------------------------------------------------------
/Cat Dog Images Classifier (CNN + Keras)/Cat_Dog_Classifier_Using_CNN.py:
--------------------------------------------------------------------------------
1 | # Convolutional Neural Network
2 |
3 | # Installing Theano
4 | # pip install --upgrade --no-deps git+https://github.com/Theano/Theano.git
5 |
6 | # Installing Tensorflow
7 | # Install Tensorflow from the website: https://www.tensorflow.org/versions/r0.12/get_started/os_setup.html
8 |
9 | # Installing Keras
10 | # pip install --upgrade keras
11 |
12 | # Part 1 - Building the CNN
13 |
14 | # Importing the Keras libraries and packages
15 | from keras.models import Sequential
16 | from keras.layers import Conv2D
17 | from keras.layers import MaxPooling2D
18 | from keras.layers import Flatten
19 | from keras.layers import Dense
20 |
21 | # Initialising the CNN
22 | classifier = Sequential()
23 |
24 | # Step 1 - Convolution
25 | classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
26 |
27 | # Step 2 - Pooling
28 | classifier.add(MaxPooling2D(pool_size = (2, 2)))
29 |
30 | # Adding a second convolutional layer
31 | classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
32 | classifier.add(MaxPooling2D(pool_size = (2, 2)))
33 |
34 | # Step 3 - Flattening
35 | classifier.add(Flatten())
36 |
37 | # Step 4 - Full connection
38 | classifier.add(Dense(units = 128, activation = 'relu'))
39 | classifier.add(Dense(units = 1, activation = 'sigmoid'))
40 |
41 | # Compiling the CNN
42 | classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
43 |
44 | # Part 2 - Fitting the CNN to the images
45 |
46 | from keras.preprocessing.image import ImageDataGenerator
47 |
48 | train_datagen = ImageDataGenerator(rescale = 1./255,
49 | shear_range = 0.2,
50 | zoom_range = 0.2,
51 | horizontal_flip = True)
52 |
53 | test_datagen = ImageDataGenerator(rescale = 1./255)
54 |
55 | training_set = train_datagen.flow_from_directory('dataset/training_set',
56 | target_size = (64, 64),
57 | batch_size = 32,
58 | class_mode = 'binary')
59 |
60 | test_set = test_datagen.flow_from_directory('dataset/test_set',
61 | target_size = (64, 64),
62 | batch_size = 32,
63 | class_mode = 'binary')
64 |
65 | classifier.fit(training_set,
66 |                steps_per_epoch = 8000 // 32,
67 |                epochs = 25,
68 |                validation_data = test_set,
69 |                validation_steps = 2000 // 32)
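
# After training, a single image can be classified with the same preprocessing the
# generators apply. Below is a minimal, self-contained sketch: an untrained copy of
# the same architecture and a random array stand in for the trained model and a real
# photo (both are stand-ins for illustration; no test image ships with this repo,
# and which class maps to 1 should be read from training_set.class_indices).

```python
import numpy as np
import tensorflow as tf

# Same architecture as the script above, untrained here purely for illustration
classifier = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# Stand-in for load_img(...) + img_to_array(...) on a real 64x64 photo,
# rescaled to [0, 1] exactly as the training generator does
x = np.random.rand(1, 64, 64, 3).astype('float32')

score = float(classifier.predict(x)[0][0])   # sigmoid output in (0, 1)
# 'dog' == class 1 is an assumption; check training_set.class_indices
label = 'dog' if score > 0.5 else 'cat'
print(label, score)
```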
--------------------------------------------------------------------------------
/Cat Dog Images Classifier (CNN + Keras)/DataSet/How to Download Dataset....txt:
--------------------------------------------------------------------------------
1 | The dataset is too large to upload here. I could push it with Git Bash, but I don't want to overload this repo with large files. You can download the entire code and dataset from this repository -> https://github.com/MonicaGS/Machine-Learning-A-Z
2 |
3 | Just download the whole code from the repository link given above.
4 | Then navigate to the folder "Part 8- Deep Learning" -> " Section 40 - Convolutional Neural Networks (CNN)/Convolutional_Neural_Networks " and take the dataset folder from there, which includes a total of 10,000 images of dogs and cats.
--------------------------------------------------------------------------------
/Cat Dog Images Classifier (CNN + Keras)/Image.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Cat Dog Images Classifier (CNN + Keras)/Image.gif
--------------------------------------------------------------------------------
/Cat Dog Images Classifier (CNN + Keras)/Readme.md:
--------------------------------------------------------------------------------
1 | ## Cat & Dog Classifier using Convolutional Neural Network.
2 |   
3 |
4 | ### Problem statement :
5 |
6 | In this project we implement a Convolutional Neural Network (CNN) classifier for distinguishing images of dogs and cats. A total of 8,000 images is available for training, and final testing is done on 2,000 images.
7 |
8 | ### Dependencies
9 | * Jupyter notebook
10 | * Tensorflow 2.10
11 | * Python 3.7+
12 | * Matplotlib
13 | * Scikit-Learn
14 | * Pandas
15 |
16 |
17 |
18 | ### Don't forget to ⭐ the repository, if it helped you in any way.
19 |
20 | #### Feel Free to contact me at➛ databoyamar@gmail.com for any help related to Projects in this Repository!
21 |
--------------------------------------------------------------------------------
/Covid19 FaceMask Detector (CNN & OpenCV)/Readme.md:
--------------------------------------------------------------------------------
1 | # Covid19 FaceMask Detector using CNN & OpenCV.
2 |
3 |
4 |
5 | Face Mask Detection system built with OpenCV and Keras/TensorFlow, using Deep Learning and Computer Vision concepts to detect face masks in static images as well as in video streams.
6 |
7 |
8 |
9 |
10 | ## Live Demo:
11 |
12 |
13 |
14 |
15 | ## :innocent: Motivation
16 | In the present scenario due to Covid-19, there are no efficient face mask detection applications, which are now in high demand for transportation, densely populated areas, residential districts, large-scale manufacturers and other enterprises to ensure safety. Also, the absence of large datasets of __‘with_mask’__ images has made this task more cumbersome and challenging.
17 |
18 |
19 | ## :star: Features
20 |
21 | This system can be used in real-time applications which require face-mask detection for safety purposes due to the outbreak of Covid-19. This project can be integrated with embedded systems for application in airports, railway stations, offices, schools, and public places to ensure that public safety guidelines are followed.
22 |
23 | ## :file_folder: Dataset
24 | The dataset used can be downloaded here - [Click to Download](https://www.kaggle.com/prithwirajmitra/covid-face-mask-detection-dataset)
25 |
26 | This dataset consists of __1006 images__ belonging to two classes:
27 | * __with_mask: 500 images__
28 | * __without_mask: 606 images__
29 |
30 |
31 |
32 | ## 🚀 Installation
33 | 1. Download the files in this repository and extract them.
34 | 2. Run the Face_Mask_Detection.ipynb file first using Google Colab:
35 | * Colab File link - https://colab.research.google.com/drive/1rX32L-EHFvdtulPbVlwllBve8bdKwC_m#scrollTo=pO9U0q_KNDsF
36 |
37 | 3. Running the above .ipynb file will generate a Model.h5 file.
38 | 4. Download that Model.h5 file from Colab to your local machine.
39 | 5. Now run the Mask.py file.
40 | 6. Done.
41 |
42 | Note: Make sure that you're using the same TensorFlow and Keras versions on your local machine that you're using on Google Colab, otherwise you'll get errors.
43 |
44 | ## :key: Results
45 |
46 | #### Our model gave 92% accuracy for Face Mask Detection after training with tensorflow==2.3.0
47 | The model can be further improved by parameter tuning.
48 |
49 | 
50 |
51 | #### We got the following accuracy/loss training curve plot
52 | 
53 |
54 | ## :clap: And it's done!
55 | Feel free to mail me for any doubts/query
56 | :email: amark720@gmail.com
57 |
58 | ## :heart: Owner
59 | Made with :heart: by [Amar Kumar](https://github.com/amark720)
60 |
61 |
62 |
--------------------------------------------------------------------------------
/Covid19 FaceMask Detector (CNN & OpenCV)/man-mask-protective.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Covid19 FaceMask Detector (CNN & OpenCV)/man-mask-protective.jpg
--------------------------------------------------------------------------------
/Covid19 FaceMask Detector (CNN & OpenCV)/mask.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | from tensorflow.keras.models import load_model
3 | from tensorflow.keras.preprocessing.image import load_img, img_to_array
4 | import numpy as np
5 | import tensorflow as tf
6 | print(tf.version.VERSION)
7 |
8 | model =load_model('model.h5')
9 |
10 | img_width , img_height = 150,150
11 |
12 | face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
13 |
14 | cap = cv2.VideoCapture('video.mp4')
15 |
16 | img_count_full = 0
17 |
18 | font = cv2.FONT_HERSHEY_SIMPLEX
19 | org = (1,1)
20 | class_label = ''
21 | fontScale = 1
22 | color = (0,0,255)
23 | thickness = 2
24 |
25 | while True:
26 |     img_count_full += 1
27 |     response, color_img = cap.read()
28 |
29 |     if not response:
30 |         break
31 |
32 |
33 |     scale = 50
34 |     width = int(color_img.shape[1]*scale/100)
35 |     height = int(color_img.shape[0]*scale/100)
36 |     dim = (width, height)
37 |
38 |     color_img = cv2.resize(color_img, dim, interpolation=cv2.INTER_AREA)
39 |
40 |     gray_img = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
41 |
42 |     faces = face_cascade.detectMultiScale(gray_img, 1.1, 6)
43 |
44 |     img_count = 0
45 |     for (x, y, w, h) in faces:
46 |         org = (x+20, y+85)
47 |         img_count += 1
48 |         color_face = color_img[y:y+h, x:x+w]
49 |         cv2.imwrite('input/%d%dface.jpg' % (img_count_full, img_count), color_face)
50 |         img = load_img('input/%d%dface.jpg' % (img_count_full, img_count), target_size=(img_width, img_height))
51 |         img = img_to_array(img)
52 |         img = np.expand_dims(img, axis=0)
53 |         prediction = model.predict(img)
54 |
55 |
56 |         if prediction[0][0] < 0.5:  # sigmoid score; class 0 == "Mask" assumed
57 |             class_label = "Mask"
58 |             color = (0, 255, 0)
59 |
60 |         else:
61 |             class_label = "No Mask"
62 |             color = (0, 0, 255)
63 |
64 |
65 |         cv2.rectangle(color_img, (x, y), (x+w, y+h), (255, 0, 0), 3)
66 |         cv2.putText(color_img, class_label, org, font, fontScale, color, thickness, cv2.LINE_AA)
67 |
68 |     cv2.imshow('Face mask detection', color_img)
69 |     if cv2.waitKey(1) & 0xFF == ord('q'):
70 |         break
71 |
72 | cap.release()
73 | cv2.destroyAllWindows()
74 |
75 |
76 |
77 |
--------------------------------------------------------------------------------
/Covid19 FaceMask Detector (CNN & OpenCV)/video.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Covid19 FaceMask Detector (CNN & OpenCV)/video.mp4
--------------------------------------------------------------------------------
/Covid19 FaceMask Detector (CNN & OpenCV)/video1.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Covid19 FaceMask Detector (CNN & OpenCV)/video1.mp4
--------------------------------------------------------------------------------
/Covid19 FaceMask Detector (CNN & OpenCV)/video2.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Covid19 FaceMask Detector (CNN & OpenCV)/video2.mp4
--------------------------------------------------------------------------------
/Covid19 FaceMask Detector (CNN & OpenCV)/women with mask.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Covid19 FaceMask Detector (CNN & OpenCV)/women with mask.jpg
--------------------------------------------------------------------------------
/Digit Recognizer Kaggle/Readme.md:
--------------------------------------------------------------------------------
1 | # MNIST- Digit Recognizer
2 | This is a neural network designed to train on the MNIST data set for recognizing handwritten digits. It has an input layer of 784 inputs (28x28 images) and 2 hidden layers with 15 neurons each. The output layer has 10 neurons, each representing the probability that the image is the corresponding digit 0-9. The sigmoid activation function is used.
3 |
4 | The data set used to train this is the MNIST data set, found here: https://www.kaggle.com/c/digit-recognizer/data. The training data set (60,000 images) is used for everything; it is split 75%/25% for training and testing, respectively.
5 |
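
The architecture described above can be sketched as a plain NumPy forward pass. The layer sizes follow the description; the random weights below are hypothetical stand-ins for trained ones, purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Layer sizes from the description: 784 inputs -> 15 -> 15 -> 10 outputs
sizes = [784, 15, 15, 10]
# Random stand-in weights; real values would come from training
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass with sigmoid activations throughout, as in the description."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a                       # 10 values, one per digit 0-9

x = rng.random(784)                # stand-in for a flattened 28x28 image in [0, 1]
out = forward(x)
digit = int(np.argmax(out))        # predicted digit = index of the largest output
print(out.shape, digit)
```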
6 | ### Python Implementation
7 | Dataset- MNIST dataset
8 | Images of size 28 X 28
9 | Classify digits from 0 to 9
10 |
11 | ## Kaggle NoteBook Link:
12 | https://www.kaggle.com/datawarriors/digit-recognizer-detailed-step-wise
13 |
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/README.md:
--------------------------------------------------------------------------------
1 | # Image classification with ResNet50
2 | Doing cool things with data doesn't always need to be difficult. By using ResNet-50 you don't have to start from scratch when building a classifier model and making predictions with it. This project is a beginner's guide to ResNet-50. In the following you will get a short overall introduction to ResNet-50 and a simple project showing how to use it for image classification in Python.
3 |
4 | Here I've created a program that uses ResNet-50 to predict which category an image belongs to. The model builds on the existing pretrained deep learning model ResNet-50. I've also uploaded the code on Kaggle.
5 |
6 | ### What is ResNet-50 and why use it for image classification?
7 | ResNet-50 is a pretrained deep learning model for image classification, a convolutional neural network (CNN, or ConvNet), which is a class of deep neural networks most commonly applied to analyzing visual imagery. ResNet-50 is 50 layers deep and is trained on over a million images from 1000 categories in the ImageNet database. Furthermore, the model has over 23 million trainable parameters, which indicates a deep architecture well suited to image recognition. Using a pretrained model is a highly effective approach compared to building one from scratch, where you would need to collect large amounts of data and train the model yourself. Of course, there are other pretrained deep models to use, such as AlexNet, GoogLeNet or VGG19, but ResNet-50 is noted for excellent generalization performance with lower error rates on recognition tasks and is therefore a useful tool to know.
8 |
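Using the pretrained model from Keras takes only a few lines. A minimal sketch (here `weights=None` keeps the sketch offline and a random array stands in for a real 224x224 photo; in practice you would pass `weights='imagenet'` and run `decode_predictions` on the output to get human-readable labels):

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# weights=None builds the architecture without downloading the ImageNet weights;
# use weights='imagenet' for real predictions
model = ResNet50(weights=None)

# Stand-in for load_img(path, target_size=(224, 224)) + img_to_array(...)
x = np.random.uniform(0, 255, (1, 224, 224, 3)).astype('float32')
x = preprocess_input(x)            # ResNet-50's expected input preprocessing

preds = model.predict(x)
print(preds.shape)                 # (1, 1000): one probability per ImageNet class
```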
9 | ## ScreenShots:
10 |
11 | ### Single Image Classification-
12 |
13 |
14 |
15 | ### Multiple Image Classification-
16 |
17 |
18 |
19 |
20 | ### Kaggle Notebook Link -> https://www.kaggle.com/datawarriors/image-classifier-using-resnet50
21 |
22 | #### Feel Free to contact me at➛ amark720@gmail.com for any help related to this Project!
23 |
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/Screenshot1.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/Screenshot1.PNG
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/Screenshot2.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/Screenshot2.PNG
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/Image1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/Image1.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/Image3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/Image3.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/Scooter.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/Scooter.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/banana.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/banana.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/car.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/car.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/image10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/image10.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/image11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/image11.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/image2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/image2.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/image4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/image4.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/image6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/image6.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/image8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/image8.jpg
--------------------------------------------------------------------------------
/Image Classifier Using Resnet50/images/image9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Image Classifier Using Resnet50/images/image9.jpg
--------------------------------------------------------------------------------
/Keras Introduction Exploration/A Gentle Introduction to Keras.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Welcome to A Gentle Introduction to Keras "
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 | "This course focuses on a specific sub-field of machine learning called **predictive modeling.**\n",
15 | "\n",
16 | "Within predicitve modeling is a speciality or another sub-field called **deep learning.**\n",
17 | "\n",
18 | "We will be crafting deep learning models with a library called Keras. \n",
19 | "\n",
20 | ">**Predictive modeling** is focused on developing models that make accurate predictions at the expense of explaining why predictions are made. \n",
21 | "\n",
22 | "You and I don't need to be able to write a binary classification model. We need to know how to use and interpret the results of the model. "
23 | ]
24 | },
25 | {
26 | "cell_type": "markdown",
27 | "metadata": {},
28 | "source": [
29 | "**Where does machine learning fit into data science?**\n",
30 | "\n",
31 | "Data science is a much broader discipline. \n",
32 | "\n",
33 | "> Data Scientists take the raw data, analyse it, connect the dots and tell a story often via several visualizations. They usually have a broader range of skill-set and may not have too much depth into more than one or two. They are more on the creative side. Like an Artist. An Engineer, on the other hand, is someone who looks at the data as something they have to take in and churn out an output in some appropriate form in the most efficient way possible. The implementation details and other efficiency hacks are usually on the tip of their fingers. There can be a lot of overlap between the two but it is more like A Data Scientist is a Machine Learning Engineer but not the other way round. -- Ria Chakraborty, Data Scientist\n",
34 | "\n",
35 | "\n",
36 | "\n"
37 | ]
38 | },
39 | {
40 | "cell_type": "markdown",
41 | "metadata": {},
42 | "source": [
43 | "# Step 1. Import our modules\n",
44 | "\n",
45 | "Two important points here. Firstly, the **from** means we aren't importing the entire library, only a specific module. Secondly, notice we **are** imporing the entire numpy library. \n",
46 | "\n",
47 | "> If you get a message that states: WARNING (theano.configdefaults): g++ not detected, blah... blah. Run this in your Anaconda prompt. \n",
48 | "\n",
49 | "conda install mingw libpython\n",
50 | "\n"
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": null,
56 | "metadata": {
57 | "collapsed": false
58 | },
59 | "outputs": [],
60 | "source": [
61 | "from keras.models import Sequential\n",
62 | "from keras.layers import Dense\n",
63 | "import numpy"
64 | ]
65 | },
66 | {
67 | "cell_type": "markdown",
68 | "metadata": {},
69 | "source": [
70 | "# Step 2. Set our random seed\n"
71 | ]
72 | },
73 | {
74 | "cell_type": "markdown",
75 | "metadata": {},
76 | "source": [
77 | "Run an algorithm on a dataset and you've built a great model. Can you produce the same model again given the same data?\n",
78 | "You should be able to. It should be a requirement that is high on the list for your modeling project.\n",
79 | "\n",
80 | "> We achieve reproducibility in applied machine learning by using the exact same code, data and sequence of random numbers.\n",
81 | "\n",
82 | "Random numbers are created using a random number generator. It’s a simple program that generates a sequence of numbers that are random enough for most applications.\n",
83 | "\n",
84 | "This math function is deterministic. If it uses the same starting point called a seed number, it will give the same sequence of random numbers.\n",
85 | "\n",
86 | "Hold on... what's **deterministic** mean? \n",
87 | "\n",
88 | "> \"a deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states\"\n",
89 | "\n",
90 | "Let's apply an English translator to this: \n",
91 | "\n",
92 | "> The **only purpose of seeding** is to make sure that you get the **exact same result** when you run this code many times on the exact same data."
93 | ]
94 | },
95 | {
96 | "cell_type": "code",
97 | "execution_count": null,
98 | "metadata": {
99 | "collapsed": true
100 | },
101 | "outputs": [],
102 | "source": [
103 | "seed = 9\n",
104 | "numpy.random.seed(seed)"
105 | ]
106 | },
107 | {
108 | "cell_type": "markdown",
109 | "metadata": {},
110 | "source": [
111 | "# Step 3. Import our data set\n"
112 | ]
113 | },
114 | {
115 | "cell_type": "markdown",
116 | "metadata": {},
117 | "source": [
118 | "Let's import the object called read_csv. \n",
119 | "\n",
120 | "We define a variable called filename and put our data set in it. \n",
121 | "\n",
122 | "The last line does the work. It using the function called **read_csv** to put the contents of our data set into a variable called dataframe. "
123 | ]
124 | },
125 | {
126 | "cell_type": "code",
127 | "execution_count": null,
128 | "metadata": {
129 | "collapsed": false
130 | },
131 | "outputs": [],
132 | "source": [
133 | "from pandas import read_csv\n",
134 | "filename = 'BBCN.csv'\n",
135 | "dataframe = read_csv(filename)"
136 | ]
137 | },
138 | {
139 | "cell_type": "markdown",
140 | "metadata": {},
141 | "source": [
142 | "# Step 4. Split the Output Variables\n"
143 | ]
144 | },
145 | {
146 | "cell_type": "markdown",
147 | "metadata": {},
148 | "source": [
149 | "The first thing we need to do is put our data in an array. \n",
150 | "\n",
151 | "> An array is a data structure that stores values of **same data type**. \n",
152 | "\n",
153 | "In Python, this is the main difference between arrays and lists. While python lists can contain values corresponding to different data types, arrays in python can only contain values corresponding to same data type."
154 | ]
155 | },
156 | {
157 | "cell_type": "code",
158 | "execution_count": null,
159 | "metadata": {
160 | "collapsed": true
161 | },
162 | "outputs": [],
163 | "source": [
164 | "array = dataframe.values"
165 | ]
166 | },
167 | {
168 | "cell_type": "markdown",
169 | "metadata": {},
170 | "source": [
171 | "The code below is the trickest part of the exercise. Now, we are assinging X and y as output variables.\n",
172 | "\n",
173 | "> That looks pretty easy but keep in mind that an array starts at 0. \n",
174 | "\n",
175 | "If you take a look at the shape of our dataframe (shape means the number of columns and rows) you can see we have 12 rows. \n",
176 | "\n",
177 | "On the X array below we saying... include all items in the array from 0 to 11. \n",
178 | "\n",
179 | "On the y array below we are saying... just use the column in the array mapped to the **11th row**. The **BikeBuyer** column. \n",
180 | "\n",
181 | "> Before we split X and Y out we are going to put them in an array. \n",
182 | "\n",
183 | "\n"
184 | ]
185 | },
186 | {
187 | "cell_type": "code",
188 | "execution_count": null,
189 | "metadata": {
190 | "collapsed": false
191 | },
192 | "outputs": [],
193 | "source": [
194 | "X = array[:,0:11] \n",
195 | "Y = array[:,11]"
196 | ]
197 | },
198 | {
199 | "cell_type": "code",
200 | "execution_count": null,
201 | "metadata": {
202 | "collapsed": false
203 | },
204 | "outputs": [],
205 | "source": [
206 | "dataframe.head()"
207 | ]
208 | },
209 | {
210 | "cell_type": "markdown",
211 | "metadata": {},
212 | "source": [
213 | "# Step 4. Build the Model\n"
214 | ]
215 | },
216 | {
217 | "cell_type": "markdown",
218 | "metadata": {},
219 | "source": [
220 | "We can piece it all together by adding each layer. \n",
221 | "\n",
222 | "> The first layer has 11 neurons and expects 11 input variables. \n",
223 | "\n",
224 | "The second hidden layer has 8 neurons.\n",
225 | "\n",
226 | "The third hidden layer has 8 neurons. \n",
227 | "\n",
228 | "The output layer has 1 neuron to predict the class. \n",
229 | "\n",
230 | "How many hidden layers are in our model? "
231 | ]
232 | },
233 | {
234 | "cell_type": "code",
235 | "execution_count": null,
236 | "metadata": {
237 | "collapsed": true
238 | },
239 | "outputs": [],
240 | "source": [
241 | "model = Sequential()\n",
242 | "model.add(Dense(12, input_dim=11, init='uniform', activation='relu'))\n",
243 | "model.add(Dense(8, init='uniform', activation='relu'))\n",
244 | "model.add(Dense(1, init='uniform', activation='sigmoid'))"
245 | ]
246 | },
247 | {
248 | "cell_type": "markdown",
249 | "metadata": {},
250 | "source": [
251 | "# Step 5. Compile the Model"
252 | ]
253 | },
254 | {
255 | "cell_type": "markdown",
256 | "metadata": {},
257 | "source": [
258 | "A metric is a function that is used to judge the performance of your model. Metric functions are to be supplied in the metrics parameter when a model is compiled.\n",
259 | "\n",
260 | "> Lastly, we set the cost (or loss) function to categorical_crossentropy. The (binary) cross-entropy is just the technical term for the **cost function** in logistic regression, and the categorical cross-entropy is its generalization for multi-class predictions via softmax"
261 | ]
262 | },
263 | {
264 | "cell_type": "markdown",
265 | "metadata": {},
266 | "source": [
267 | "Binary learning models are models which just predict one of two outcomes: positive or negative. These models are very well suited to drive decisions, such as whether to administer a patient a certain drug or to include a lead in a targeted marketing campaign.\n",
268 | "\n",
269 | "> Accuracy is perhaps the most intuitive performance measure. **It is simply the ratio of correctly predicted observations.**\n",
270 | "\n",
271 | "Using accuracy is only good for symmetric data sets where the class distribution is 50/50 and the cost of false positives and false negatives are roughly the same. In our case our classes are balanced. "
272 | ]
273 | },
274 | {
275 | "cell_type": "markdown",
276 | "metadata": {},
277 | "source": [
278 | "Whenever you train a model with your data, you are actually producing some new values (predicted) for a specific feature. However, that specific feature already has some values which are real values in the dataset. \n",
279 | "\n",
280 | "> We know the the closer the predicted values to their corresponding real values, the better the model.\n",
281 | "\n",
282 | "We are using cost function to measure **how close the predicted values are to their corresponding real values.**\n",
283 | "\n",
284 | "So, for our model we choose binary_crossentropy. "
285 | ]
286 | },
287 | {
288 | "cell_type": "code",
289 | "execution_count": null,
290 | "metadata": {
291 | "collapsed": true
292 | },
293 | "outputs": [],
294 | "source": [
295 | "model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])"
296 | ]
297 | },
298 | {
299 | "cell_type": "markdown",
300 | "metadata": {},
301 | "source": [
302 | "# Step 5. Fit the Model"
303 | ]
304 | },
305 | {
306 | "cell_type": "markdown",
307 | "metadata": {},
308 | "source": [
309 | "**Epoch:** A full pass over all of your training data.\n",
310 | "\n",
311 | "For example, let's say you have 1213 observations. So an epoch concludes when it has finished a training pass over all 1213 of your observations.\n",
312 | "\n",
313 | "> What you'd expect to see from running fit on your Keras model, is a decrease in loss over n number of epochs."
314 | ]
315 | },
316 | {
317 | "cell_type": "markdown",
318 | "metadata": {},
319 | "source": [
320 | "batch_size denotes the subset size of your training sample (e.g. 100 out of 1000) which is going to be used in order to train the network during its learning process. \n",
321 | "\n",
322 | "Each batch trains network in a successive order, taking into account the updated weights coming from the appliance of the previous batch. \n",
323 | "\n",
324 | ">Example: if you have 1000 training examples, and your batch size is 500, then it will take 2 iterations to complete 1 epoch.\n",
325 | "\n"
326 | ]
327 | },
328 | {
329 | "cell_type": "code",
330 | "execution_count": null,
331 | "metadata": {
332 | "collapsed": false
333 | },
334 | "outputs": [],
335 | "source": [
336 | "model.fit(X, Y, nb_epoch=200, batch_size=30)"
337 | ]
338 | },
339 | {
340 | "cell_type": "markdown",
341 | "metadata": {},
342 | "source": [
343 | "# Step 6. Score the Model"
344 | ]
345 | },
346 | {
347 | "cell_type": "code",
348 | "execution_count": null,
349 | "metadata": {
350 | "collapsed": false
351 | },
352 | "outputs": [],
353 | "source": [
354 | "scores = model.evaluate(X, Y)\n",
355 | "print(\"%s: %.2f%%\" % (model.metrics_names[1], scores[1]*100))\n"
356 | ]
357 | }
358 | ],
359 | "metadata": {
360 | "kernelspec": {
361 | "display_name": "Python 2",
362 | "language": "python",
363 | "name": "python2"
364 | },
365 | "language_info": {
366 | "codemirror_mode": {
367 | "name": "ipython",
368 | "version": 2
369 | },
370 | "file_extension": ".py",
371 | "mimetype": "text/x-python",
372 | "name": "python",
373 | "nbconvert_exporter": "python",
374 | "pygments_lexer": "ipython2",
375 | "version": "2.7.13"
376 | }
377 | },
378 | "nbformat": 4,
379 | "nbformat_minor": 2
380 | }
381 |
--------------------------------------------------------------------------------
/Keras Introduction Exploration/BBCN.csv:
--------------------------------------------------------------------------------
1 | MaritalStatus,Gender,YearlyIncome,TotalChildren,NumberChildrenAtHome,EnglishEducation,HouseOwnerFlag,NumberCarsOwned,CommuteDistance,Region,Age,BikeBuyer
2 | 5,1,9,2,0,5,1,0,2,2,5,1
3 | 5,1,6,3,3,5,0,1,1,2,4,1
4 | 5,1,6,3,3,5,1,1,5,2,4,1
5 | 5,2,7,0,0,5,0,1,10,2,5,1
6 | 5,2,8,5,5,5,1,4,2,2,5,1
7 | 5,1,7,0,0,5,1,1,10,2,4,1
8 | 5,2,7,0,0,5,1,1,10,2,4,1
9 | 5,1,6,3,3,5,1,2,1,2,4,1
10 | 5,2,6,4,4,5,1,3,20,2,4,1
11 | 5,1,7,0,0,5,0,1,10,2,4,1
12 | 5,2,7,0,0,5,0,1,10,2,4,1
13 | 5,1,6,4,4,5,1,4,20,2,4,1
14 | 5,2,9.5,2,0,5,1,2,2,1,5,0
15 | 5,1,9.5,2,0,5,1,3,1,1,5,0
16 | 5,2,9.5,3,0,5,0,3,2,1,5,0
17 | 3,2,3,0,0,3,0,1,10,1,3,1
18 | 3,1,3,0,0,3,1,1,10,1,3,1
19 | 1,2,2,4,0,1,1,2,10,2,8,1
20 | 3,1,3,2,0,3,1,2,10,2,8,1
21 | 1,1,4,0,0,1,0,2,10,1,3,0
22 | 1,1,4,0,0,1,0,2,2,1,3,1
23 | 3,2,4,0,0,3,0,1,2,1,3,1
24 | 3,1,4,0,0,3,1,1,10,1,3,1
25 | 3,1,4,0,0,3,1,1,2,1,3,0
26 | 3,1,6,0,0,3,1,2,10,1,3,0
27 | 2,1,1,2,1,2,1,2,2,2,8,1
28 | 3,1,3,2,0,3,0,2,2,2,7,1
29 | 3,1,3,2,0,3,1,2,10,2,7,1
30 | 3,2,3,2,0,3,1,2,2,2,7,1
31 | 3,1,3,2,0,3,1,2,2,2,7,1
32 | 2,2,1,2,1,2,1,2,2,2,7,1
33 | 1,2,2,4,0,1,1,2,2,2,7,1
34 | 1,2,2,4,0,1,1,2,2,2,7,1
35 | 1,1,2,4,0,1,1,2,10,2,7,1
36 | 1,2,2,4,0,1,1,2,10,2,7,1
37 | 2,2,1,2,1,2,1,2,2,2,7,1
38 | 3,2,6,0,0,3,1,2,2,1,3,1
39 | 2,2,4,0,0,2,0,2,10,1,3,1
40 | 2,2,1,2,1,2,1,2,10,2,7,1
41 | 3,1,3,3,0,3,1,2,10,2,7,1
42 | 3,1,3,0,0,3,1,2,2,1,3,1
43 | 3,2,6,0,0,3,1,2,2,1,3,1
44 | 3,2,7,0,0,3,1,2,1,1,3,1
45 | 3,1,6,0,0,3,1,2,10,1,4,0
46 | 2,1,2,2,1,2,1,2,2,2,7,1
47 | 1,1,3,3,0,1,0,2,2,2,7,0
48 | 1,2,3,3,0,1,1,2,10,2,7,1
49 | 1,2,3,3,0,1,0,2,2,2,7,1
50 | 1,1,3,3,0,1,1,2,10,2,7,1
51 | 2,2,4,0,0,2,1,2,10,1,3,0
52 | 1,1,3,3,0,1,1,2,10,2,7,1
53 | 1,1,3,3,0,1,0,2,2,2,7,0
54 | 3,2,4,2,0,3,0,2,10,2,7,1
55 | 3,2,6,0,0,3,0,2,2,1,3,1
56 | 3,2,4,2,0,3,1,2,10,2,7,1
57 | 3,1,4,2,0,3,1,2,10,2,7,1
58 | 3,2,4,3,0,3,1,2,10,2,7,1
59 | 4,1,7,2,0,4,1,2,2,2,7,1
60 | 3,1,8,2,0,3,1,2,2,2,7,1
61 | 3,2,8,2,0,3,1,2,10,2,7,0
62 | 3,1,8,2,0,3,1,2,10,2,7,1
63 | 3,1,8,2,0,3,1,2,10,2,7,1
64 | 1,1,4,0,0,1,1,2,10,1,4,1
65 | 1,2,4,0,0,1,1,2,10,1,4,1
66 | 1,1,4,0,0,1,1,2,10,1,4,1
67 | 3,2,6,0,0,3,0,2,2,1,4,0
68 | 3,2,7,0,0,3,1,2,10,1,4,0
69 | 3,1,6,0,0,3,1,2,10,1,4,0
70 | 1,2,8,2,0,1,1,2,10,2,7,0
71 | 1,2,8,2,0,1,0,2,2,2,7,1
72 | 1,1,8,2,0,1,1,2,10,2,7,1
73 | 1,2,8,2,0,1,1,2,10,2,7,0
74 | 1,2,8,2,0,1,0,2,2,2,7,1
75 | 1,2,7,2,0,1,1,2,10,2,6,0
76 | 1,1,7,2,0,1,1,2,10,2,6,0
77 | 1,2,8,2,0,1,1,2,10,2,6,1
78 | 1,1,8,2,0,1,1,2,10,2,6,1
79 | 1,2,8,2,0,1,0,2,2,2,6,1
80 | 1,2,4,0,0,1,0,2,2,1,4,0
81 | 4,1,9.5,0,0,4,1,2,1,2,6,0
82 | 4,1,9.5,0,0,4,1,2,1,2,6,1
83 | 5,2,9.5,2,2,5,1,3,5,1,5,0
84 | 4,2,9.5,0,1,4,0,3,2,1,5,1
85 | 4,2,9.5,0,1,4,1,3,2,1,5,1
86 | 2,1,8,2,0,2,0,2,2,1,6,1
87 | 1,2,6,2,0,1,0,2,10,1,6,0
88 | 3,1,7,2,1,3,1,0,1,1,6,0
89 | 3,2,7,3,2,3,0,0,1,1,6,0
90 | 5,1,8,2,1,5,1,0,5,1,6,1
91 | 5,2,8,2,1,5,1,1,5,1,6,1
92 | 3,1,9,2,0,3,1,1,5,1,6,1
93 | 3,1,9,2,0,3,1,1,10,1,6,0
94 | 5,2,9,2,2,5,1,0,2,2,5,1
95 | 4,2,9.5,0,0,4,1,0,1,2,5,1
96 | 3,1,7,1,0,3,0,1,1,2,4,0
97 | 3,1,7,1,0,3,1,1,10,2,4,1
98 | 5,1,6,1,0,5,1,1,10,2,4,1
99 | 3,1,6,1,0,3,1,1,10,2,4,1
100 | 3,2,6,1,0,3,0,1,1,2,4,0
101 | 5,1,6,1,0,5,1,1,1,2,6,1
102 | 3,2,6,1,0,3,1,1,10,2,6,1
103 | 5,2,7,0,0,5,0,1,1,2,4,1
104 | 5,2,8,5,5,5,1,4,2,2,4,0
105 | 5,2,7,0,0,5,1,1,10,2,4,1
106 | 5,1,7,0,0,5,0,1,1,2,4,1
107 | 5,2,7,0,0,5,0,1,10,2,4,1
108 | 5,1,7,0,0,5,0,1,10,2,4,1
109 | 5,2,9,1,0,5,1,1,10,2,6,1
110 | 5,2,7,0,0,5,0,2,1,2,4,1
111 | 5,1,7,0,0,5,0,2,1,2,4,1
112 | 3,1,6,1,0,3,1,1,10,2,4,1
113 | 3,2,6,1,0,3,1,1,1,2,4,1
114 | 3,2,6,1,0,3,1,1,1,2,4,1
115 | 5,1,7,5,4,5,1,2,10,2,4,0
116 | 3,2,7,5,4,3,1,2,10,2,4,0
117 | 3,1,7,5,4,3,1,2,1,2,4,0
118 | 3,1,7,5,4,3,1,2,10,2,4,0
119 | 3,2,7,5,4,3,1,3,20,2,6,1
120 | 3,1,8,1,0,3,0,1,1,2,6,0
121 | 3,1,3,2,0,3,0,2,1,2,81,0
122 | 5,2,4,2,0,5,1,1,10,2,8,1
123 | 3,1,7,5,4,3,1,3,20,2,6,0
124 | 3,1,7,5,4,3,1,3,10,2,6,0
125 | 3,1,7,5,4,3,1,3,20,2,6,0
126 | 3,1,8,1,0,3,1,1,10,2,6,1
127 | 3,2,7,1,0,3,1,1,10,2,6,0
128 | 1,1,1,2,1,1,0,2,1,2,7,1
129 | 1,2,4,0,0,1,1,1,10,1,4,0
130 | 1,2,4,0,0,1,1,1,10,1,4,0
131 | 1,2,4,0,0,1,1,1,10,1,4,1
132 | 1,2,3,0,0,1,1,2,10,1,3,0
133 | 1,2,3,0,0,1,0,2,1,1,3,0
134 | 1,2,3,0,0,1,0,2,10,1,3,1
135 | 2,2,3,0,0,2,0,2,1,1,3,0
136 | 1,1,1,5,0,1,1,2,10,2,8,1
137 | 2,1,3,0,0,2,0,2,10,1,3,1
138 | 1,2,4,0,0,1,0,2,1,1,3,1
139 | 1,2,4,0,0,1,1,2,10,1,3,0
140 | 1,2,4,0,0,1,0,2,1,1,3,0
141 | 3,2,3,2,0,3,0,2,10,2,8,0
142 | 2,1,3,0,0,2,0,2,10,1,4,0
143 | 1,2,4,0,0,1,0,2,1,1,3,0
144 | 1,1,4,0,0,1,0,2,1,1,3,0
145 | 1,1,4,0,0,1,1,2,10,1,4,0
146 | 1,1,4,0,0,1,0,2,1,1,3,1
147 | 1,2,4,0,0,1,0,2,1,1,3,1
148 | 5,2,4,2,0,5,1,2,10,2,8,0
149 | 4,1,6,2,0,4,1,1,1,2,8,1
150 | 5,1,4,2,0,5,1,2,10,2,8,0
151 | 5,1,4,2,0,5,1,2,1,2,8,0
152 | 5,1,4,2,0,5,1,2,10,2,8,0
153 | 4,2,6,2,0,4,0,1,1,2,8,1
154 | 1,1,4,0,0,1,1,2,10,1,4,1
155 | 1,2,4,0,0,1,1,2,10,1,4,0
156 | 3,2,4,0,0,3,1,1,10,1,4,1
157 | 3,1,4,0,0,3,0,1,2,1,4,1
158 | 3,2,4,0,0,3,0,1,2,1,4,1
159 | 1,2,4,0,0,1,1,2,10,1,4,0
160 | 1,1,4,0,0,1,1,2,10,1,4,1
161 | 1,2,4,0,0,1,0,2,2,1,4,0
162 | 3,1,4,0,0,3,0,1,10,1,4,1
163 | 3,2,4,0,0,3,0,1,2,1,4,0
164 | 3,1,4,0,0,3,1,1,10,1,4,1
165 | 3,1,4,0,0,3,1,1,10,1,4,0
166 | 3,1,4,0,0,3,0,1,2,1,4,0
167 | 3,2,6,0,0,3,0,1,2,1,4,0
168 | 3,2,7,0,0,3,0,2,1,1,4,0
169 | 5,2,8,0,0,5,0,1,1,1,4,1
170 | 5,1,8,0,0,5,0,1,1,1,5,1
171 | 5,1,9,4,4,5,1,1,1,1,5,0
172 | 5,2,9,4,4,5,1,1,1,1,5,0
173 | 5,1,9.5,1,0,5,1,2,2,1,5,1
174 | 5,2,9.5,1,0,5,1,2,2,1,5,0
175 | 4,2,9.5,0,0,4,0,4,1,1,5,0
176 | 4,1,9.5,0,1,4,1,0,5,1,5,0
177 | 3,1,7,0,0,3,1,2,10,1,4,1
178 | 5,1,9,4,4,5,1,1,2,1,5,0
179 | 4,1,9.5,2,4,4,1,2,10,1,8,0
180 | 4,1,9.5,2,3,4,1,3,2,1,8,0
181 | 4,2,9.5,2,4,4,1,3,10,1,8,0
182 | 5,2,9.5,1,3,5,1,4,1,1,8,0
183 | 4,1,9.5,2,3,4,1,2,1,1,8,0
184 | 5,2,7,4,0,5,1,2,20,1,8,0
185 | 5,2,7,4,0,5,1,2,20,1,8,0
186 | 5,2,7,4,0,5,1,2,20,1,8,0
187 | 5,2,7,4,0,5,1,2,20,1,8,0
188 | 5,2,7,4,0,5,1,2,20,1,8,0
189 | 4,2,6,4,0,4,1,2,20,1,8,0
190 | 4,2,6,4,0,4,0,2,2,1,8,0
191 | 4,1,6,4,0,4,1,2,20,1,8,1
192 | 5,1,7,5,0,5,1,2,20,1,8,0
193 | 5,2,7,5,0,5,1,3,20,1,8,1
194 | 5,1,6,4,0,5,1,2,5,1,8,0
195 | 5,1,6,4,0,5,1,2,5,1,8,0
196 | 5,2,6,4,0,5,1,2,5,1,8,0
197 | 5,2,6,5,0,5,1,3,20,1,8,0
198 | 5,1,6,5,0,5,1,3,20,1,7,0
199 | 5,2,7,3,1,5,0,1,2,1,7,0
200 | 5,2,7,4,1,5,1,1,20,1,7,0
201 | 5,2,7,4,1,5,1,1,20,1,7,0
202 | 5,1,7,4,1,5,1,1,2,1,7,0
203 | 5,2,7,4,1,5,1,1,20,1,7,0
204 | 5,2,8,5,0,5,0,2,2,1,7,0
205 | 1,1,4,2,1,1,1,2,20,1,7,0
206 | 1,2,4,2,1,1,1,2,5,1,7,0
207 | 1,2,4,2,1,1,1,2,20,1,7,0
208 | 3,1,6,2,1,3,1,1,20,1,7,0
209 | 3,2,6,2,1,3,1,1,20,1,7,0
210 | 5,2,6,2,1,5,1,0,20,1,7,0
211 | 5,2,6,2,1,5,1,0,20,1,7,0
212 | 3,1,7,4,1,3,1,2,20,1,7,0
213 | 1,2,4,2,1,1,1,2,5,1,7,0
214 | 1,2,4,2,1,1,1,2,5,1,7,0
215 | 3,2,6,2,1,3,1,2,20,1,7,0
216 | 3,1,6,2,1,3,1,2,20,1,7,0
217 | 2,2,4,2,1,2,1,2,20,1,7,0
218 | 3,2,6,2,1,3,1,2,20,1,7,1
219 | 3,2,6,2,1,3,1,2,20,1,7,1
220 | 3,2,6,2,1,3,0,2,5,1,7,0
221 | 1,1,6,2,1,1,1,2,20,1,7,0
222 | 3,2,6,2,1,3,1,1,20,1,7,0
223 | 3,1,7,4,2,3,1,1,2,1,7,0
224 | 5,2,8,3,1,5,1,1,2,1,7,0
225 | 1,2,7,2,1,1,1,2,5,1,7,0
226 | 1,2,7,2,1,1,1,2,20,1,7,1
227 | 3,2,6,2,1,3,1,1,20,1,7,0
228 | 3,2,6,2,1,3,1,1,5,1,7,0
229 | 3,1,6,2,1,3,1,2,20,1,7,1
230 | 3,2,6,2,1,3,1,2,20,1,7,0
231 | 3,1,7,4,2,3,1,1,20,1,7,0
232 | 3,2,7,4,2,3,0,2,5,1,7,0
233 | 1,1,6,2,1,1,1,2,20,1,7,0
234 | 1,2,6,3,1,1,0,2,5,1,7,0
235 | 3,1,7,4,2,3,1,2,20,1,7,0
236 | 3,2,7,4,2,3,1,2,20,1,7,0
237 | 3,1,7,4,2,3,1,2,20,1,7,0
238 | 3,1,7,5,2,3,1,2,20,1,7,0
239 | 5,1,9.5,2,3,5,0,4,1,3,6,1
240 | 5,2,9.5,2,3,5,0,4,1,3,6,1
241 | 3,2,9.5,2,4,3,0,2,10,3,6,1
242 | 3,2,9.5,2,3,3,1,4,10,3,6,1
243 | 3,2,9.5,2,3,3,1,4,10,3,6,1
244 | 3,1,9.5,2,4,3,1,3,10,3,6,1
245 | 3,2,9.5,2,3,3,1,3,1,3,6,1
246 | 5,2,9.5,2,3,5,0,3,1,3,6,1
247 | 1,1,9.5,3,4,1,0,4,10,3,6,1
248 | 1,2,9.5,3,4,1,0,4,10,3,6,1
249 | 3,2,9.5,3,4,3,0,3,1,3,6,1
250 | 3,1,9.5,3,4,3,1,4,20,3,7,0
251 | 3,2,9.5,3,4,3,1,3,1,3,7,1
252 | 1,2,9.5,3,4,1,1,2,10,3,7,1
253 | 1,1,3,4,0,1,0,2,10,1,84,1
254 | 4,1,9,3,0,4,1,1,5,1,84,1
255 | 4,1,6,2,0,4,1,2,2,1,84,0
256 | 4,1,7,4,0,4,1,2,10,1,84,0
257 | 4,1,7,4,0,4,1,2,10,1,83,0
258 | 4,2,7,4,0,4,1,2,10,1,83,0
259 | 4,2,9.5,1,2,4,1,4,5,1,84,1
260 | 4,1,9.5,1,2,4,1,4,5,1,83,0
261 | 3,2,9.5,4,1,3,1,4,5,1,4,1
262 | 5,2,9.5,2,2,5,1,3,5,1,4,0
263 | 4,2,9.5,1,2,4,1,1,1,1,4,1
264 | 5,2,8,4,3,5,1,0,1,1,4,0
265 | 3,2,9,5,5,3,1,3,5,1,4,1
266 | 3,1,9,5,5,3,0,3,1,1,4,1
267 | 3,2,9,5,5,3,1,3,1,1,4,1
268 | 3,2,9,0,0,3,0,1,10,1,4,1
269 | 3,1,9.5,4,2,3,1,4,5,1,4,1
270 | 4,2,9.5,2,2,4,1,2,2,1,4,0
271 | 4,1,9.5,2,2,4,1,2,1,1,4,0
272 | 4,1,9.5,2,2,4,1,3,1,1,4,1
273 | 5,2,9.5,2,0,5,0,4,2,1,4,1
274 | 3,2,7,4,3,3,1,0,10,1,6,1
275 | 3,1,9,2,1,3,1,0,20,1,6,0
276 | 4,1,5,2,0,4,1,2,10,1,82,1
277 | 4,2,8,4,0,4,1,2,1,1,83,1
278 | 3,2,8,4,3,3,1,2,5,1,4,0
279 | 3,1,8,4,3,3,1,2,1,1,4,0
280 | 3,1,9,0,0,3,0,1,20,1,4,1
281 | 3,2,9,0,0,3,1,1,20,1,4,0
282 | 3,1,9,0,0,3,1,1,20,1,4,0
283 | 5,2,9.5,4,2,5,1,4,10,1,4,1
284 | 3,1,9,5,4,3,1,4,2,1,4,1
285 | 5,1,9.5,1,3,5,1,2,2,1,4,0
286 | 5,2,9.5,1,3,5,1,2,2,1,4,0
287 | 3,1,9.5,1,3,3,0,4,5,1,4,0
288 | 4,1,9.5,0,0,4,1,4,5,1,4,1
289 | 3,1,7,5,4,3,0,3,10,1,6,0
290 | 5,2,9.5,1,3,5,1,3,5,1,4,0
291 | 5,2,9.5,1,3,5,1,1,1,1,6,1
292 | 5,2,9.5,1,3,5,1,3,5,1,4,1
293 | 5,2,9.5,1,3,5,1,3,5,1,4,1
294 | 3,1,9.5,1,2,3,0,3,1,1,4,1
295 | 1,1,6,2,0,1,0,2,2,1,7,0
296 | 1,1,6,2,0,1,0,2,2,1,7,0
297 | 2,2,8,2,0,2,1,2,10,1,7,1
298 | 4,2,5,2,0,4,1,2,10,1,81,1
299 | 4,1,7,4,0,4,1,2,10,1,82,1
300 | 4,1,7,4,0,4,1,2,2,1,82,1
301 | 4,1,7,4,0,4,1,2,2,1,8,1
302 | 3,1,8,5,4,3,0,3,2,1,6,0
303 | 3,1,8,5,4,3,1,4,10,1,6,0
304 | 1,1,9,4,3,1,0,3,2,1,6,1
305 | 1,2,9,4,3,1,0,3,2,1,6,1
306 | 3,2,9.5,0,3,3,1,0,10,1,6,0
307 | 3,1,9.5,3,3,3,0,4,5,1,6,0
308 | 3,1,9.5,3,3,3,0,4,5,1,6,0
309 | 2,1,8,2,0,2,1,2,10,1,7,1
310 | 1,1,6,2,0,1,0,2,2,1,7,0
311 | 1,2,6,2,0,1,1,2,10,1,7,0
312 | 4,2,7,3,0,4,1,2,10,1,7,0
313 | 4,2,7,3,0,4,0,2,2,1,7,0
314 | 4,2,7,3,0,4,1,2,10,1,7,0
315 | 4,1,7,3,0,4,1,2,10,1,7,0
316 | 4,2,7,3,0,4,0,2,2,1,7,0
317 | 4,2,7,3,0,4,0,2,2,1,7,0
318 | 4,1,6,3,0,4,1,2,10,1,7,1
319 | 3,2,4,3,0,3,0,2,10,1,7,1
320 | 3,2,4,3,0,3,0,2,2,1,7,0
321 | 4,2,7,4,0,4,0,2,2,1,8,1
322 | 4,2,8,4,0,4,1,2,10,1,8,0
323 | 5,1,9,5,0,5,1,2,10,1,8,0
324 | 5,2,9,5,0,5,1,2,10,1,8,0
325 | 4,1,9.5,2,4,4,1,3,10,1,8,1
326 | 5,1,9,5,0,5,1,2,2,1,8,1
327 | 5,1,9,5,0,5,1,2,10,1,8,1
328 | 5,1,9.5,2,3,5,1,4,1,1,8,1
329 | 5,1,9,5,0,5,1,2,10,1,8,1
330 | 5,1,9,5,0,5,1,2,10,1,8,1
331 | 5,1,9,5,0,5,1,2,10,1,8,1
332 | 4,1,9.5,2,4,4,1,1,5,1,8,0
333 | 3,2,9,4,4,3,1,2,2,1,5,0
334 | 4,2,1,1,0,4,1,0,1,3,4,1
335 | 4,2,2,1,0,4,1,0,1,3,4,1
336 | 5,2,1,1,0,5,1,0,1,3,4,1
337 | 5,2,1,1,0,5,1,0,1,3,4,1
338 | 4,1,2,1,0,4,1,0,1,3,4,1
339 | 5,1,1,1,0,5,1,0,1,3,4,1
340 | 4,1,2,1,0,4,1,0,1,3,4,1
341 | 4,1,2,1,0,4,1,0,1,3,4,1
342 | 4,2,1,1,0,4,1,0,1,3,8,1
343 | 4,1,2,1,0,4,1,0,1,3,8,1
344 | 4,1,3,5,0,4,1,0,1,3,4,0
345 | 5,1,1,1,0,5,1,0,1,3,6,1
346 | 3,2,1,1,0,3,1,0,1,3,6,1
347 | 3,2,1,1,0,3,1,0,1,3,6,1
348 | 3,2,1,2,0,3,1,1,1,3,7,1
349 | 3,1,1,2,0,3,1,1,1,3,7,0
350 | 3,1,1,2,0,3,1,1,1,3,7,0
351 | 3,2,1,2,0,3,1,1,5,3,7,0
352 | 4,2,3,1,0,4,1,0,1,3,8,0
353 | 5,2,2,1,0,5,1,0,1,3,8,1
354 | 3,1,1,2,0,3,1,1,1,3,8,0
355 | 5,2,3,1,0,5,1,0,1,3,8,1
356 | 3,2,2,1,0,3,1,0,1,3,8,0
357 | 4,1,4,1,0,4,1,0,1,3,8,0
358 | 3,1,1,0,0,3,0,1,5,2,3,1
359 | 3,1,1,0,0,3,0,1,5,2,3,1
360 | 3,2,1,0,0,3,0,1,1,2,3,1
361 | 3,1,1,0,0,3,0,1,1,2,3,1
362 | 3,1,1,0,0,3,1,1,5,2,3,1
363 | 1,2,1,0,0,1,1,2,1,2,3,0
364 | 1,1,1,0,0,1,0,2,1,2,3,0
365 | 1,1,1,0,0,1,0,2,1,2,3,1
366 | 1,2,1,1,1,1,0,0,5,2,3,1
367 | 1,2,1,1,1,1,0,0,2,2,3,1
368 | 5,2,2,0,0,5,1,0,1,2,3,0
369 | 5,1,2,0,0,5,1,0,1,2,3,0
370 | 1,1,1,1,1,1,0,0,2,2,3,1
371 | 5,2,2,0,0,5,0,0,1,2,3,0
372 | 5,2,2,0,0,5,1,0,1,2,3,0
373 | 5,2,2,0,0,5,1,0,1,2,3,0
374 | 5,2,2,0,0,5,1,0,1,2,3,1
375 | 5,2,3,1,0,5,1,0,1,3,8,0
376 | 5,1,3,1,0,5,1,0,1,3,8,0
377 | 5,1,3,1,0,5,1,0,1,3,8,0
378 | 4,1,4,1,0,4,1,0,1,3,8,0
379 | 4,1,3,4,0,4,1,0,1,3,6,0
380 | 1,2,1,1,1,1,0,1,5,3,6,1
381 | 1,1,1,1,1,1,1,1,5,3,6,1
382 | 3,1,2,2,0,3,1,1,2,3,6,1
383 | 3,2,2,2,0,3,1,1,5,3,6,1
384 | 3,1,2,2,0,3,1,1,5,3,6,1
385 | 4,2,3,3,0,4,1,0,1,3,6,0
386 | 5,2,3,3,0,5,1,0,1,3,6,0
387 | 5,1,3,3,0,5,1,0,1,3,6,0
388 | 5,2,3,3,0,5,1,0,1,3,6,0
389 | 4,2,4,3,0,4,1,0,1,3,6,1
390 | 4,1,4,3,0,4,1,0,1,3,6,1
391 | 1,1,1,1,1,1,0,1,5,3,5,0
392 | 1,2,1,1,1,1,0,1,5,3,5,0
393 | 1,2,1,2,2,1,1,0,1,3,5,0
394 | 1,1,1,2,2,1,1,0,1,3,5,0
395 | 1,1,1,2,2,1,1,0,1,3,5,0
396 | 1,1,1,2,2,1,1,0,1,3,5,1
397 | 5,2,3,0,0,5,1,0,1,3,5,1
398 | 1,1,1,2,2,1,0,1,1,3,5,0
399 | 3,2,2,0,0,3,0,1,1,3,5,0
400 | 5,1,3,0,0,5,1,0,1,3,5,1
401 | 5,2,3,1,0,5,1,0,1,3,5,0
402 | 4,1,4,0,0,4,1,0,1,3,5,1
403 | 4,2,4,0,0,4,1,0,1,3,5,1
404 | 4,2,4,0,0,4,1,0,1,3,5,1
405 | 4,2,4,0,0,4,1,0,1,3,5,1
406 | 3,2,1,3,0,3,0,2,1,3,7,0
407 | 3,2,1,3,0,3,0,2,1,3,7,0
408 | 3,2,2,2,0,3,1,1,5,3,7,1
409 | 3,1,2,2,0,3,1,1,1,3,7,0
410 | 1,1,1,2,1,1,0,2,1,3,7,0
411 | 1,2,1,2,2,1,1,1,1,3,5,1
412 | 3,1,2,1,1,3,1,0,5,3,5,1
413 | 3,1,8,3,2,3,0,1,5,3,7,0
414 | 3,2,8,4,2,3,1,1,20,3,7,1
415 | 3,2,8,4,2,3,0,1,5,3,7,1
416 | 3,1,8,4,2,3,1,1,20,3,7,1
417 | 3,1,9,5,0,3,0,2,20,3,8,0
418 | 2,2,6,3,1,2,1,4,5,3,8,1
419 | 1,2,7,5,1,1,1,2,20,3,8,1
420 | 3,1,9,5,0,3,0,2,20,3,8,0
421 | 4,1,9.5,3,4,4,1,4,1,3,8,0
422 | 5,1,9.5,2,3,5,1,4,20,3,7,1
423 | 5,2,9.5,3,4,5,1,4,10,3,7,1
424 | 5,1,9.5,4,4,5,1,3,10,3,7,1
425 | 3,2,8,4,2,3,0,1,5,3,7,1
426 | 3,2,8,4,2,3,1,2,20,3,7,0
427 | 1,2,9,4,2,1,1,2,20,3,7,1
428 | 5,1,9.5,3,4,5,1,4,20,3,7,0
429 | 3,2,9.5,2,5,3,1,2,20,3,7,1
430 | 5,2,9.5,2,5,5,1,2,20,3,7,1
431 | 3,1,8,4,1,3,1,2,20,3,7,1
432 | 1,2,9,4,1,1,1,2,20,3,7,0
433 | 3,1,9.5,4,5,3,1,2,20,3,7,1
434 | 1,2,8,4,2,1,1,2,20,3,7,1
435 | 1,1,8,5,2,1,0,2,5,3,7,1
436 | 3,1,9.5,5,5,3,1,4,1,3,7,0
437 | 3,2,9.5,3,4,3,1,4,20,3,7,0
438 | 3,2,9.5,3,5,3,0,4,20,3,7,0
439 | 1,1,9,4,1,1,1,3,20,3,7,0
440 | 1,2,9.5,4,5,1,1,4,1,3,7,0
441 | 1,2,9,4,1,1,0,3,10,3,7,1
442 | 2,1,8,5,0,2,0,2,10,3,7,0
443 | 2,2,9,5,0,2,0,2,10,3,7,0
444 | 2,1,9,5,0,2,1,2,20,3,7,0
445 | 5,2,6,0,0,5,0,4,5,2,4,1
446 | 5,2,7,0,0,5,1,3,20,2,4,1
447 | 5,2,7,0,0,5,0,3,20,2,4,1
448 | 5,2,7,0,0,5,0,3,20,2,4,1
449 | 5,2,7,0,0,5,1,4,20,2,4,1
450 | 5,1,8,0,0,5,1,3,20,2,4,1
451 | 5,1,8,0,0,5,1,3,20,2,4,1
452 | 1,1,9.5,4,5,1,1,4,1,2,4,1
453 | 5,1,7,0,0,5,0,4,20,2,4,1
454 | 5,2,8,0,0,5,0,3,20,2,4,1
455 | 5,1,8,0,0,5,0,3,20,2,4,1
456 | 5,2,8,0,0,5,0,3,20,2,4,1
457 | 1,1,9.5,0,5,1,1,2,20,2,4,1
458 | 2,1,9.5,5,5,2,1,4,20,2,4,1
459 | 5,2,9,0,0,5,1,3,20,2,4,1
460 | 1,2,9.5,0,5,1,1,2,20,2,4,0
461 | 5,2,8,0,0,5,0,3,20,2,4,1
462 | 5,1,9,0,0,5,0,3,20,2,4,1
463 | 5,1,9,0,0,5,0,3,20,2,4,1
464 | 1,2,9.5,0,5,1,1,4,20,2,5,1
465 | 1,2,9.5,0,5,1,0,4,20,2,5,0
466 | 5,1,9,0,0,5,1,4,20,2,5,1
467 | 5,1,9,0,0,5,0,4,20,2,5,1
468 | 2,1,9.5,0,5,2,1,4,20,2,5,0
469 | 4,1,9.5,1,5,4,1,4,1,2,5,0
470 | 5,2,4,1,1,5,1,0,2,3,4,1
471 | 2,1,1,0,0,2,0,2,2,3,4,0
472 | 2,1,2,0,0,2,0,2,1,3,4,1
473 | 3,2,3,0,0,3,1,1,5,3,4,0
474 | 5,1,6,3,2,5,1,2,10,2,4,1
475 | 2,2,2,0,0,2,0,2,2,3,3,0
476 | 2,1,2,0,0,2,1,2,2,3,3,0
477 | 1,1,3,0,0,1,0,1,2,3,3,0
478 | 5,2,4,1,1,5,1,0,2,3,4,0
479 | 1,1,3,0,0,1,0,1,2,3,4,0
480 | 1,1,3,0,0,1,0,1,5,3,4,0
481 | 5,1,4,1,1,5,1,0,2,3,4,1
482 | 5,2,4,1,1,5,1,0,2,3,4,1
483 | 5,2,4,1,1,5,1,0,2,3,4,1
484 | 1,2,3,0,0,1,0,1,2,3,4,1
485 | 5,1,4,1,1,5,1,1,1,3,4,1
486 | 3,1,4,2,2,3,1,0,1,3,5,1
487 | 3,1,4,2,2,3,0,0,1,3,4,1
488 | 2,2,1,0,0,2,1,2,2,3,5,0
489 | 2,2,1,0,0,2,0,2,2,3,5,0
490 | 2,1,2,0,0,2,1,2,2,3,5,1
491 | 2,2,2,0,0,2,0,2,2,3,5,1
492 | 2,1,2,0,0,2,0,2,1,3,5,0
493 | 2,1,2,0,0,2,0,2,1,3,5,1
494 | 1,1,3,0,0,1,0,1,5,3,5,1
495 | 3,2,4,2,2,3,1,0,2,3,5,1
496 | 3,1,4,2,2,3,1,0,2,3,5,1
497 | 3,1,4,2,2,3,1,1,2,3,5,0
498 | 3,1,4,2,2,3,1,2,1,3,5,1
499 | 3,2,4,2,2,3,1,2,2,3,5,0
500 | 3,1,4,3,3,3,1,2,2,1,4,0
501 | 3,1,4,3,3,3,1,2,2,1,4,0
502 |
--------------------------------------------------------------------------------
/Keras Introduction Exploration/Readme.md:
--------------------------------------------------------------------------------
1 | # Introduction to Keras for Building Deep Learning Models
2 |
3 | This Python Notebook focuses on a specific sub-field of machine learning called **predictive modeling.**
4 |
5 |
6 |
7 | Within predictive modeling is a speciality or another sub-field called **deep learning.**
8 |
9 | This notebook includes deep learning models with a library called Keras.
10 |
11 | >**Predictive modeling** is focused on developing models that make accurate predictions at the expense of explaining why predictions are made.
12 |
13 | You and I don't need to be able to write a binary classification model. We need to know how to use and interpret the results of the model.
14 |
15 | Just download and open the .ipynb file; everything is explained there in detail.
16 | Alternatively, you can visit the link below to access the notebook on Kaggle.
17 |
18 | #### My Kaggle Notebook Link -> https://www.kaggle.com/datawarriors/a-gentle-introduction-to-keras
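The seeding and feature/label split steps from the notebook can be sketched with NumPy alone. This is a minimal sketch: the random array below is a stand-in for BBCN.csv, which has 11 input columns plus a final BikeBuyer label column.

```python
import numpy as np

# Seeding the generator makes a run reproducible: the same seed
# always yields the same sequence of "random" numbers.
np.random.seed(9)
first = np.random.rand(3)
np.random.seed(9)
second = np.random.rand(3)
assert np.allclose(first, second)  # identical sequences

# A toy stand-in for BBCN.csv: 6 rows, 11 features + 1 label column.
data = np.random.rand(6, 12)
X = data[:, 0:11]  # columns 0..10 -> input features
Y = data[:, 11]    # column 11    -> the label (BikeBuyer)
print(X.shape, Y.shape)  # (6, 11) (6,)
```

The slice `0:11` stops just before index 11, which is why column 11 is free to serve as the label.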
19 |
--------------------------------------------------------------------------------
/Keras Introduction Exploration/Screenshot.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amark720/Deep-Learning-Projects/0ae283356592451eb834af43494545324eead24c/Keras Introduction Exploration/Screenshot.jpeg
--------------------------------------------------------------------------------
/Readme.md:
--------------------------------------------------------------------------------
1 | # Deep-Learning-Projects
2 |
3 |
4 |
5 |
6 | ## Overview
7 | • This repository consists of Deep Learning projects made by me.
8 | • Each folder above contains the dataset along with the solution to its problem statement.
9 | • Visit each folder to access the projects in detail.
10 |
11 |
12 |
13 | ### Don't forget to ⭐ the repository, if it helped you in any way.
14 |
15 | ### Repo Stats:
16 | [](https://github.com/amark720) [](https://github.com/amark720/Deep-Learning-Projects) [](https://github.com/amark720/Deep-Learning-Projects)
17 |
18 | #### Feel free to contact me at➛ databoyamar@gmail.com for any help related to the projects in this repository!
19 |
--------------------------------------------------------------------------------