├── Cataract Detection
│   ├── Cataract_Detection.ipynb
│   └── readme.md
├── Driver_Drowsiness_Detection
│   ├── Drowsy_Driver_Detection.ipynb
│   └── readme.md
├── Emotion Detection
│   ├── Emotion_Detection.ipynb
│   ├── WebCam.py
│   └── readme.md
├── Eye_Disease_Classification
│   ├── Eye_Diseases.ipynb
│   └── readme.md
├── Face_and_Eye_Detection
│   ├── Face_&_Eye_Detection.ipynb
│   ├── readme.md
│   └── results
│       ├── eyes.png
│       ├── face&eyes.png
│       ├── face.png
│       ├── original.png
│       └── readme.md
├── Lane_Detection
│   ├── Lane_Detection_for_Autonomous_Vehicles.ipynb
│   ├── readme.md
│   └── results
│       ├── detected_edges.png
│       ├── detected_lanes.png
│       ├── original.png
│       ├── readme.md
│       └── roi_image.png
├── MNIST Classification
│   ├── MNIST_Classification.ipynb
│   └── readme.md
├── Object_Detection
│   ├── Object_Detection_&_More_Using_YOLOV3.ipynb
│   ├── readme.md
│   └── results
│       ├── image1.png
│       ├── image2.png
│       ├── image3.png
│       ├── image4.png
│       └── readme.md
├── Pneumonia Detection
│   ├── Pneumonia_Detection.ipynb
│   └── readme.md
├── README.md
└── Traffic Signs Recognition
    ├── Traffic_Signs_Detection.ipynb
    └── readme.md

--------------------------------------------------------------------------------
/Cataract Detection/readme.md:
--------------------------------------------------------------------------------
# Cataract Detection

This project focuses on detecting cataracts in eye images using deep learning models and visualization techniques. The process involves data splitting, model development, training, and visualization of activation maps and attention maps.

## Data Preprocessing

- Data is divided into training, validation, and test sets using a specified ratio.
- Image data augmentation is performed on the training set to improve model performance.

## Model Development

### InceptionV3 Transfer Learning

- A pre-trained InceptionV3 model with added custom layers is used for cataract detection.
- The model is compiled with categorical cross-entropy loss and the Adamax optimizer.

#### Model Training

- The model is trained on the training set, and its performance is evaluated on the validation set.
- Training and validation loss and accuracy are visualized using graphs.

#### Activation Map Visualization

- Activation maps are generated to visualize the feature maps produced by a specific layer of the model.
- Activation maps help in understanding which areas of the image contribute to the model's decision.

#### Grad-CAM (Gradient-weighted Class Activation Map)

- Grad-CAM is used to generate an attention map for the input image.
- The attention map highlights the areas in the image that are crucial for the model's prediction.
- Attention maps provide interpretability and insight into the model's decision-making process (a minimal sketch appears after the Evaluation section below).

### Custom CNN Model

- A custom CNN model is developed for cataract detection using a procedure similar to that for the InceptionV3 model.

## Evaluation

- The models are evaluated on the training, validation, and test sets.
- Evaluation metrics include loss and accuracy.
- Sample images are loaded, and predictions are made to determine whether the images show signs of cataracts.
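As a concrete illustration of the Grad-CAM step, here is a minimal sketch. It assumes a trained `tf.keras` model named `model`, a preprocessed input batch `img` of shape `(1, H, W, 3)`, and the name of the last convolutional layer (`"mixed10"` is InceptionV3's final mixed block); all three names are assumptions, not values taken from the notebook.

```python
# Hedged Grad-CAM sketch for a Keras classifier. `model`, `img`, and the
# layer name are assumptions; adapt them to the actual notebook.
import numpy as np
import tensorflow as tf

def grad_cam(model, img, last_conv="mixed10"):
    # Model mapping the input to (last conv feature maps, predictions)
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(last_conv).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img)
        class_channel = preds[:, tf.argmax(preds[0])]  # score of the top class
    # Gradient of the class score w.r.t. the feature maps
    grads = tape.gradient(class_channel, conv_out)
    # Channel importance weights: global-average-pooled gradients
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()  # resize and blend over the input image to inspect
```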
## Results

- The InceptionV3 model achieves an accuracy of 97.36%.
- The custom CNN model achieves an accuracy of 90.31%.

--------------------------------------------------------------------------------
/Driver_Drowsiness_Detection/readme.md:
--------------------------------------------------------------------------------
# Driver Drowsiness Detection

## Overview
The project aims to identify and classify driver drowsiness using deep learning models, specifically transfer learning and a convolutional neural network with a parallel convolution architecture.

## Dataset
The dataset used for training and evaluation is the "Driver Drowsiness Dataset (DDD)." It consists of images categorized as drowsy and non-drowsy.

## Code Structure
- **Data Preparation:**
  - The dataset is organized into two categories: drowsy and non-drowsy.
  - Dataframes are created to manage file paths and labels.
  - Data distribution is visualized using a count plot.

- **Data Augmentation and Splitting:**
  - ImageDataGenerator is used for data augmentation.
  - The dataset is split into training, validation, and test sets using `train_test_split` (a sketch appears at the end of this readme).

- **Model Architecture:**
  - Transfer learning is implemented using InceptionResNetV2.
  - A custom CNN architecture with parallel convolution layers is also implemented.
  - The models are compiled with the Adamax optimizer and categorical cross-entropy loss.

- **Training:**
  - The models are trained for a specified number of epochs.
  - Training and validation accuracy/loss are visualized using plots.

- **Evaluation:**
  - Model performance is evaluated on the training, validation, and test sets.
  - A classification report and confusion matrix are generated.

- **Prediction and Visualization:**
  - Sample images are provided to demonstrate model predictions.
  - Activation maps and Grad-CAM visualizations are implemented for interpretability.

## Requirements
- Python
- TensorFlow
- Pandas
- Matplotlib
- Seaborn
- OpenCV
- PIL

## Results
The project achieved promising results in terms of accuracy and provides visualizations to aid in model interpretation.

| Model             | Accuracy |
|-------------------|----------|
| Transfer Learning | 99.73%   |
| Custom CNN        | 99.90%   |
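A minimal sketch of the augmentation-and-splitting step described above, assuming a dataframe `df` with `filepath` and `label` columns; the column names, split ratios, image size, and augmentation settings are illustrative assumptions, not values taken from the notebook.

```python
# Hedged sketch: stratified train/validation/test split plus Keras augmentation.
# Column names, ratios, and augmentation parameters are assumptions.
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def make_generators(df, img_size=(224, 224), batch_size=32):
    # Stratified 80/10/10 split into train/validation/test
    train_df, temp_df = train_test_split(df, test_size=0.2, stratify=df["label"], random_state=42)
    valid_df, test_df = train_test_split(temp_df, test_size=0.5, stratify=temp_df["label"], random_state=42)

    # Augment only the training images; validation/test are only rescaled
    train_aug = ImageDataGenerator(rescale=1 / 255.0, rotation_range=15,
                                   zoom_range=0.1, horizontal_flip=True)
    plain = ImageDataGenerator(rescale=1 / 255.0)

    def flow(gen, frame, shuffle):
        return gen.flow_from_dataframe(frame, x_col="filepath", y_col="label",
                                       target_size=img_size, class_mode="categorical",
                                       batch_size=batch_size, shuffle=shuffle)

    return flow(train_aug, train_df, True), flow(plain, valid_df, False), flow(plain, test_df, False)
```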
--------------------------------------------------------------------------------
/Emotion Detection/WebCam.py:
--------------------------------------------------------------------------------
from tensorflow.keras.models import load_model
import cv2
import numpy as np

# Haar cascade for face detection and the trained emotion classifier
face_classifier = cv2.CascadeClassifier(r'C:/Users/Snigdho/Music/haarcascade_frontalface_default.xml')
classifier = load_model(r'C:/Users/Snigdho/Music/Emotion_Detection.h5')

emotion_labels = ['Angry', 'Disgust', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise']

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
        # Crop the face, resize to the model's 224x224 input, and restore 3 channels
        roi_gray = gray[y:y + h, x:x + w]
        roi_gray = cv2.resize(roi_gray, (224, 224), interpolation=cv2.INTER_AREA)
        roi = cv2.cvtColor(roi_gray, cv2.COLOR_GRAY2RGB)
        roi = roi.astype('float') / 255.0
        roi = np.expand_dims(roi, axis=0)

        # Predict the emotion and draw the label above the face box
        prediction = classifier.predict(roi)[0]
        label = emotion_labels[prediction.argmax()]
        cv2.putText(frame, label, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

    cv2.imshow('Emotion Detector', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

--------------------------------------------------------------------------------
/Emotion Detection/readme.md:
--------------------------------------------------------------------------------
# Facial Emotion Detection

This repository contains code for real-time facial emotion detection using a pre-trained MobileNet model and a webcam feed. The model is trained on a facial expression dataset consisting of 224x224 grayscale images of faces labeled with seven emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. The model achieved a validation accuracy of 67.65%.

*Sample detections: happy, neutral.*

# Requirements:

* Python 3
* OpenCV
* TensorFlow
* Keras
* NumPy

# Results:

* The system achieved a validation accuracy of 67.65% in recognizing 7 different emotions.
* The detected emotions are displayed on the screen in real time.
* In real-world scenarios, accuracy may vary with factors such as lighting conditions, facial expressions, and camera angles.

--------------------------------------------------------------------------------
/Eye_Disease_Classification/readme.md:
--------------------------------------------------------------------------------
# Eye Disease Classification

## Overview
This repository contains code for a deep learning project focused on the classification of eye diseases using Convolutional Neural Networks (CNNs). The project uses transfer learning with the InceptionResNetV2 architecture and includes custom CNN implementations, one of which has a parallel convolution architecture (a minimal sketch of such a block follows below).
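As a rough illustration of the parallel convolution idea (not the notebook's exact architecture), a block applies several kernel sizes to the same input and concatenates the resulting feature maps. The filter counts, kernel sizes, input shape, and four-class head below are all assumptions.

```python
# Hedged sketch of a "parallel convolution" block in Keras: several kernel
# sizes over the same input, concatenated Inception-style. All hyperparameters
# here are illustrative assumptions.
from tensorflow.keras import Input, Model, layers

def parallel_conv_block(x, filters=32):
    # Three branches with different receptive fields over the same input
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    return layers.Concatenate()([b1, b3, b5])  # stack channels from all branches

inputs = Input(shape=(224, 224, 3))
x = parallel_conv_block(inputs)
x = layers.MaxPooling2D()(x)
x = parallel_conv_block(x, filters=64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(4, activation="softmax")(x)  # number of classes assumed
model = Model(inputs, outputs)
```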
## Code Structure

- **Data Preparation**: The dataset is loaded, and dataframes are created to organize file paths and labels. Data distribution is visualized using a count plot.

- **Data Augmentation**: ImageDataGenerator is used for data augmentation during training to improve model generalization.

- **Model Architecture**: The InceptionResNetV2 model is used for transfer learning. Custom CNNs, including one with a parallel convolution architecture, are also implemented and trained.

- **Model Training**: The models are compiled using the Adamax optimizer and categorical cross-entropy loss. Training and validation results are visualized using accuracy and loss plots.

- **Model Evaluation**: The trained models are evaluated on the test set, and metrics such as accuracy, loss, a classification report, and a confusion matrix are displayed.

- **Activation Maps**: Activation maps and Grad-CAM (Gradient-weighted Class Activation Mapping) are visualized to understand which parts of the image the model focuses on during predictions.

## Requirements
- Python
- TensorFlow
- Pandas
- Matplotlib
- Seaborn
- Scikit-learn
- OpenCV

## Results
The project achieved promising results in terms of accuracy and provides visualizations to aid in model interpretation.

| Model                         | Accuracy |
|-------------------------------|----------|
| Transfer Learning             | 94.31%   |
| Custom CNN                    | 89.81%   |
| CNN with Parallel Convolution | 93.36%   |

--------------------------------------------------------------------------------
/Face_and_Eye_Detection/readme.md:
--------------------------------------------------------------------------------
# Face and Eye Detection

## Overview
This project focuses on detecting faces and eyes in images using OpenCV and Haar cascades. It includes functions to detect faces, eyes, or both in an image (a minimal sketch follows the examples below).

## Prerequisites
- Python
- OpenCV
- Matplotlib

## Example

1. **Original**

   ![Original Image](results/original.png)

2. **Face Detection**

   ![Face Detection](results/face.png)

3. **Eyes Detection**

   ![Eyes Detection](results/eyes.png)

4. **Face and Eyes Detection**

   ![Face and Eyes Detection](results/face&eyes.png)
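A minimal sketch of the detection approach, using the Haar cascade files that ship with OpenCV; the input and output paths are illustrative assumptions.

```python
# Hedged sketch of face-and-eye detection with OpenCV Haar cascades.
# The cascades ship with opencv-python; file paths are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)  # face box
    roi = gray[y:y + h, x:x + w]  # search for eyes only inside the face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

cv2.imwrite("face_and_eyes.png", img)
```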
--------------------------------------------------------------------------------
/Face_and_Eye_Detection/results/eyes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Face_and_Eye_Detection/results/eyes.png

--------------------------------------------------------------------------------
/Face_and_Eye_Detection/results/face&eyes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Face_and_Eye_Detection/results/face&eyes.png

--------------------------------------------------------------------------------
/Face_and_Eye_Detection/results/face.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Face_and_Eye_Detection/results/face.png

--------------------------------------------------------------------------------
/Face_and_Eye_Detection/results/original.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Face_and_Eye_Detection/results/original.png

--------------------------------------------------------------------------------
/Face_and_Eye_Detection/results/readme.md:
--------------------------------------------------------------------------------
images

--------------------------------------------------------------------------------
/Lane_Detection/readme.md:
--------------------------------------------------------------------------------
# Lane Detection

This project implements a lane detection algorithm using computer vision techniques. It detects lanes in images and highlights them, providing a visual representation of the detected lanes on the road.

## Overview

The project performs the following steps (a minimal end-to-end sketch appears in the Parameters section below):

1. **Image Loading:** The input image is loaded using OpenCV.

2. **Edge Detection:** Edge detection is applied to the grayscale image using the Canny edge detector.

3. **Region of Interest (ROI) Selection:** A specific region of interest is defined in the image, focusing on the area where lanes are expected.

4. **Hough Transform:** The Hough transform is applied to detect lines in the region of interest.

5. **Visualization:** Detected lines are overlaid on the original image to highlight the lanes.

## Prerequisites

- OpenCV
- Matplotlib
- NumPy

## Results

1. **Original Image**

   ![Original Image](results/original.png)

2. **Detected Edges Image**

   ![Detected Edges Image](results/detected_edges.png)

3. **ROI Image**

   ![ROI Image](results/roi_image.png)

4. **Detected Lanes Image**

   ![Detected Lanes Image](results/detected_lanes.png)

## Parameters

Adjustable parameters in the code include:

- Canny edge detection thresholds.
- Region of interest vertices and dimensions.
- Hough transform parameters.
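A minimal end-to-end sketch of the pipeline with these parameters exposed; the thresholds, ROI polygon, Hough settings, and file paths are illustrative assumptions to tune, not the project's exact values.

```python
# Hedged sketch of the Canny + ROI + Hough lane-detection pipeline.
# All thresholds and paths are assumptions meant to be tuned.
import cv2
import numpy as np

img = cv2.imread("road.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # low/high hysteresis thresholds

# Keep only a triangular region where lanes are expected
h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
roi_edges = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(roi_edges, rho=2, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)
for line in lines if lines is not None else []:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 5)

cv2.imwrite("detected_lanes.png", img)
```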
Feel free to experiment with these parameters for different images and scenarios.

--------------------------------------------------------------------------------
/Lane_Detection/results/detected_edges.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Lane_Detection/results/detected_edges.png

--------------------------------------------------------------------------------
/Lane_Detection/results/detected_lanes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Lane_Detection/results/detected_lanes.png

--------------------------------------------------------------------------------
/Lane_Detection/results/original.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Lane_Detection/results/original.png

--------------------------------------------------------------------------------
/Lane_Detection/results/readme.md:
--------------------------------------------------------------------------------
images

--------------------------------------------------------------------------------
/Lane_Detection/results/roi_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Lane_Detection/results/roi_image.png

--------------------------------------------------------------------------------
/MNIST Classification/readme.md:
--------------------------------------------------------------------------------
# MNIST Image Classification

This repository contains code for MNIST image classification using a convolutional neural network (CNN) and an autoencoder model with an encoder component. The code preprocesses the MNIST dataset, visualizes the data, trains a CNN model for image classification, and evaluates its performance. Additionally, it demonstrates the use of an autoencoder and its encoder for feature extraction.

### Data Preprocessing

- The MNIST dataset is loaded and normalized to a range between 0 and 1.

- The images are resized to 32x32 pixels to match the model's input shape.

- Data augmentation is applied to increase model robustness.

### Image Classification Models

- A CNN model is built for image classification. The model architecture includes convolutional layers, max-pooling layers, and dense layers.

- An autoencoder is created to compress and reconstruct MNIST images; the encoder part is then reused to classify images.

## Results

- The CNN model achieves an accuracy of 99.40%.
- The autoencoder-based encoder classifier achieves an accuracy of 96.22%.

--------------------------------------------------------------------------------
/Object_Detection/readme.md:
--------------------------------------------------------------------------------
# Object Detection using YOLOv3
This repository contains code for object detection using the YOLOv3 algorithm. YOLOv3 is a popular real-time object detection algorithm known for its speed and accuracy.
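A minimal inference sketch using OpenCV's DNN module with the weights, config, and class-name files from the download step below; the confidence threshold, input size, and image path are assumptions, and NMS is omitted for brevity (see the Customization section).

```python
# Hedged sketch of YOLOv3 inference via cv2.dnn. Paths, threshold, and input
# size are assumptions; NMS is omitted for brevity.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().split("\n")

img = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

h, w = img.shape[:2]
for output in outputs:
    for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            cv2.rectangle(img, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)
            cv2.putText(img, classes[class_id], (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("detections.png", img)
```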
## Prerequisites

- Python 3
- OpenCV
- NumPy
- Matplotlib

## Download YOLOv3 Weights and Configuration
To run the object detection, you need to download the YOLOv3 pre-trained weights and configuration files. Run the following commands:

    download_url = "https://pjreddie.com/media/files/yolov3.weights"
    desired_path = "/content/yolov3.weights"
    !wget {download_url} -O {desired_path}

    download_url = "https://github.com/pjreddie/darknet/raw/master/cfg/yolov3.cfg"
    desired_path = "/content/yolov3.cfg"
    !wget {download_url} -O {desired_path}

    download_url = "https://github.com/pjreddie/darknet/raw/master/data/coco.names"
    desired_path = "/content/coco.names"
    !wget {download_url} -O {desired_path}

## Results

![Image 1](results/image1.png)

![Image 2](results/image2.png)

![Image 3](results/image3.png)

![Image 4](results/image4.png)

## Customization
You can adjust the confidence threshold and Non-Maximum Suppression (NMS) parameters in the code for different detection sensitivity. To detect a specific object class, change the `class_id` according to the COCO labels.

--------------------------------------------------------------------------------
/Object_Detection/results/image1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Object_Detection/results/image1.png

--------------------------------------------------------------------------------
/Object_Detection/results/image2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Object_Detection/results/image2.png

--------------------------------------------------------------------------------
/Object_Detection/results/image3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Object_Detection/results/image3.png

--------------------------------------------------------------------------------
/Object_Detection/results/image4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Snigdho8869/Computer-Vision-Projects/372b9d32051d6cf5850f16a0eb9936312cc952e5/Object_Detection/results/image4.png

--------------------------------------------------------------------------------
/Object_Detection/results/readme.md:
--------------------------------------------------------------------------------
images

--------------------------------------------------------------------------------
/Pneumonia Detection/readme.md:
--------------------------------------------------------------------------------
# Pneumonia Detection and Classification

This folder contains code for pneumonia detection and classification using several approaches: machine learning with HOG features, and deep learning with CNN models and transfer learning. The code loads and preprocesses the Chest X-ray dataset and demonstrates different methods to classify pneumonia and normal images.
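A minimal sketch of the HOG + Random Forest approach described below, assuming `images` is a sequence of 128x128 grayscale arrays with matching `labels`; the HOG parameters, split ratio, and estimator count are illustrative assumptions.

```python
# Hedged sketch: HOG features + Random Forest for pneumonia detection.
# `images`/`labels` and all hyperparameters are assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def train_hog_rf(images, labels):
    # One HOG descriptor per image: histograms of gradient orientations per cell
    X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for img in images])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=42)
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))
    return clf
```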
## Data Preprocessing

- The Chest X-ray dataset is loaded from the specified directory.

- Images are preprocessed, resized to 128x128 pixels, and augmented to enhance model robustness.

## Pneumonia Detection with HOG Features

- The code computes Histogram of Oriented Gradients (HOG) features for each image.

- A Random Forest classifier is trained on these features to detect pneumonia.

- Model performance is evaluated with accuracy, a classification report, and a confusion matrix.

## Pneumonia Classification with Transfer Learning

- A pre-trained MobileNetV2 model is used as a feature extractor.

- A custom classification head is added to the MobileNetV2 model.

- The model is trained and evaluated for pneumonia classification.

## Pneumonia Classification with Custom CNN Model

- A custom CNN model is created for pneumonia classification.

- The model is trained and evaluated on the same dataset.

## Pneumonia Classification with Autoencoder's Encoder

- An autoencoder is created, and its encoder part is used for feature extraction.

- The extracted features are used for pneumonia classification with a custom classifier.

## Results

- The custom CNN model achieved an accuracy of 95.31%.
- The MobileNetV2-based model achieved an accuracy of 96.06%.
- The encoder classifier model achieved an accuracy of 92.82%.

## Conclusion

This folder provides a comprehensive codebase for pneumonia detection and classification using several approaches. By following this code, users can explore different methods for medical image analysis, including traditional machine learning and deep learning techniques.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Computer Vision Projects
This repository hosts a collection of computer vision projects using deep learning techniques, focusing on various real-world applications. Each project is designed to demonstrate the power of transfer learning and convolutional neural networks (CNNs) in solving practical problems. Here's what you'll find in this repository:

## Project Highlights:

- **Cataract Detection:**
  Detect cataracts in eye images using pre-trained models and fine-tuning.

- **Traffic Sign Detection:**
  Identify and classify traffic signs from images for improved road safety.

- **Pneumonia Detection:**
  Utilize deep learning to diagnose pneumonia from chest X-ray images.

- **Emotion Detection:**
  Build models to recognize and classify emotions from facial expressions.

- **MNIST Digit Classification:**
  Develop a model to classify handwritten digits from the MNIST dataset, a fundamental task for beginners in computer vision.

- **Driver Drowsiness Detection:**
  Enhance road safety by identifying and classifying driver drowsiness with transfer learning and convolutional neural networks that use a parallel convolution architecture.

- **Eye Disease Classification:**
  Contribute to healthcare with a deep learning project focused on classifying eye diseases, employing Convolutional Neural Networks (CNNs), including one with a parallel convolution architecture, for accurate disease classification.
- **Lane Detection for Autonomous Vehicles:**
  Contribute to the development of autonomous vehicles with a lane detection algorithm built on computer vision techniques. Detected lanes are highlighted on the road, providing a visual representation crucial for vehicle navigation and safety.

- **Object Detection Using YOLOv3:**
  Experience the speed and accuracy of the YOLOv3 algorithm for real-time object detection. This project provides code for implementing object detection and showcases the versatility of YOLOv3 in identifying and tracking various objects.

--------------------------------------------------------------------------------
/Traffic Signs Recognition/readme.md:
--------------------------------------------------------------------------------
# Traffic Sign Detection and Classification

This repository contains code for traffic sign detection and classification. It includes data preprocessing, feature extraction using Histogram of Oriented Gradients (HOG), training a Random Forest classifier for traffic sign detection, and a deep learning approach using transfer learning and Convolutional Neural Networks (CNNs) for traffic sign classification.

## Data Preprocessing

- Class labels are loaded from CSV files.

- Image data is collected from the various classes, and labels are assigned to each image.

- Images are resized and preprocessed for feature extraction and classification.

## Traffic Sign Detection with HOG Features

- Histogram of Oriented Gradients (HOG) features are extracted from each image.

- A Random Forest classifier is trained to detect traffic signs.

- Model performance is evaluated using accuracy and a classification report with class names.

## Data Splitting

- The data is split into training and validation sets for deep learning.

- Data augmentation is applied to the training set for improved model generalization.

## Traffic Sign Classification with Transfer Learning

- Transfer learning is used with the InceptionV3 architecture pre-trained on ImageNet.

- Additional layers are added for traffic sign classification.

- The model is trained and evaluated using the training and validation sets.

## Results

- The Inception-based model achieves an accuracy of 97.62%.

- The Random Forest classifier achieves an accuracy of 92.07%.

## Conclusion

This repository demonstrates a comprehensive workflow for traffic sign detection and classification. It includes traditional machine learning techniques with HOG features and deep learning with transfer learning and CNN models.

--------------------------------------------------------------------------------