├── .gitattributes ├── .gitignore ├── LICENSE ├── README.md ├── STechSubmission ├── Certificate.pdf ├── Handout.pdf ├── ProjectEvaluationSheet.pdf └── Report.pdf ├── images ├── op.png ├── seq.png ├── signal.png ├── yolo net.png └── yolo.jpg ├── multithreading.py ├── out.txt ├── program.py ├── requirements-python-3.7.4.txt ├── requirements.txt ├── run.sh ├── tracking ├── __init__.py ├── centroidtracker.py └── trackableobject.py └── videos ├── 1.mp4 ├── 2.mp4 ├── 3.mp4 ├── 4.mp4 └── test.mp4 /.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto 3 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | MANIFEST 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .nox/ 42 | .coverage 43 | .coverage.* 44 | .cache 45 | nosetests.xml 46 | coverage.xml 47 | *.cover 48 | .hypothesis/ 49 | .pytest_cache/ 50 | 51 | # Translations 52 | *.mo 53 | *.pot 54 | 55 | # Django stuff: 56 | *.log 57 | local_settings.py 58 | db.sqlite3 59 | 60 | # Flask stuff: 61 | instance/ 62 | .webassets-cache 63 | 64 | # Scrapy stuff: 65 | .scrapy 66 | 67 | # Sphinx documentation 68 | docs/_build/ 69 | 70 | # PyBuilder 71 | target/ 72 | 73 | # Jupyter Notebook 74 | .ipynb_checkpoints 75 | 76 | # IPython 77 | profile_default/ 78 | ipython_config.py 79 | 80 | # pyenv 81 | .python-version 82 | 83 | # celery beat schedule file 84 | celerybeat-schedule 85 | 86 | # SageMath parsed files 87 | *.sage.py 88 | 89 | # Environments 90 | .env 91 | .venv 92 | env/ 93 | venv/ 94 | ENV/ 95 | env.bak/ 96 | venv.bak/ 97 | 98 | # Spyder project settings 99 | .spyderproject 100 | .spyproject 101 | 102 | # Rope project settings 103 | .ropeproject 104 | 105 | # mkdocs documentation 106 | /site 107 | 108 | # mypy 109 | .mypy_cache/ 110 | .dmypy.json 111 | dmypy.json 112 | 113 | # Pyre type checker 114 | .pyre/ 115 | .vscode/settings.json 116 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 4Tron 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the 
Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
-------------------------------------------------------------------------------- /README.md: --------------------------------------------------------------------------------
1 | # Adaptive Traffic Signal Control System
2 | 
3 | ## 1. Abstract :
4 | 
5 | Traffic congestion is becoming a serious problem with the growing number of cars on the roads. The queue of vehicles waiting to be processed at an intersection grows sharply as traffic flow increases, and traditional fixed-time traffic lights cannot schedule it efficiently.
6 | 
7 | We use computer vision and machine learning to capture the characteristics of the competing traffic flows at a signalized road intersection. This is done with a state-of-the-art, real-time object detector based on a deep convolutional neural network called You Only Look Once (YOLO). Traffic signal phases are then optimized according to the collected data, mainly queue density and waiting time per vehicle, so that as many vehicles as possible can pass safely with minimum waiting time. YOLO can be implemented on embedded controllers using the transfer learning
8 | technique.
9 | 
10 | ## 2. Problem Statement :
11 | 
12 | To build a self-adaptive traffic light control system based on YOLO. Disproportionate and
13 | diverse traffic in different lanes leads to inefficient utilization of the same time slot for each
14 | of them, characterized by slower speeds, longer trip times, and increased vehicular
15 | queuing. The goal is to create a system that enables the traffic management system to make time
16 | allocation decisions for a particular lane according to the traffic density on the other
17 | lanes, with the help of cameras and image processing modules.
18 | 
19 | ## 3. Introduction :
20 | 
21 | Traffic congestion is a major problem in many cities, and fixed-cycle signal controllers do not resolve the long waiting times at intersections. We often see a police officer managing the traffic instead of the traffic light: they look at the state of the road and decide the allowed duration for each direction. This human approach encourages us to create a smart traffic light control that takes the real-time traffic condition into account and manages the intersection intelligently. To implement such a system, we need two main parts: eyes to watch the real-time road condition and a brain to process it. A traffic signal system at its core has two major tasks: move as many users through the intersection as possible, and do so with as little conflict between these users as possible.
22 | 
23 | ## 4. Execution
24 | ### Demo
25 | 
26 | You can see a demo of the project in the GIF below.
27 | 
28 | ![Demo GIF of the project](https://github.com/nikola1011/yolov3-car-counter/blob/master/demo-yolov3-dlib-window-rec.gif)
29 | 
30 | ### Literature
31 | The project is based on [Tensor nets](https://github.com/taehoonlee/tensornets) and the [keras-yolov3 repository](https://github.com/experiencor/keras-yolo3) - a more detailed read can be found on this [blog post](https://towardsdatascience.com/object-detection-using-yolov3-using-keras-80bf35e61ce1).
32 | ### Dependencies
33 | Install the dependencies specified in the requirements.txt file via pip.
34 | The code was tested with Python 3.7.4 and Python 3.5.6 on Ubuntu 18.04.3 LTS.
35 | (Windows 10 platforms should also be able to run the project.)
36 | 
37 | 
38 | ## 5. Technology :
39 | 
40 | ### 5.1 YOLO
41 | 
42 | You Only Look Once (YOLO) is a state-of-the-art, real-time object detection
43 | system and a new approach to object detection. Prior work on object detection
44 | repurposes classifiers to perform detection. Instead, YOLO frames object detection as a
45 | regression problem to spatially separated bounding boxes and associated class
46 | probabilities. A single neural network predicts bounding boxes and class probabilities
47 | directly from full images in one evaluation. Since the whole detection pipeline is a
48 | single network, it can be optimized end-to-end directly on detection performance.
49 | 
50 | ![yolo](https://github.com/4Tron/Adaptive-Traffic-Signal-Control-System/blob/master/images/yolo.jpg)
51 | 
52 | The object detection task consists of determining where on the image certain
53 | objects are present, as well as classifying those objects. Previous methods for
54 | this, like R-CNN and its variations, used a pipeline to perform this task in multiple
55 | steps. This can be slow to run and also hard to optimize, because each individual
56 | component must be trained separately. YOLO does it all with a single neural network.
57 | 
58 | ![yolo_net](https://github.com/4Tron/Adaptive-Traffic-Signal-Control-System/blob/master/images/yolo%20net.png)
59 | 
60 | ### YoloV3 Car Counter
61 | 
62 | This is a demo project that uses the YoloV3 neural network to count vehicles in a given video. Detection happens every x frames, where x can be specified; in between, the dlib library tracks the previously detected vehicles. You can also adjust the detection confidence level, the number of frames an object may go undetected before it is removed from the trackable list and the maximum distance from its centroid (see the CentroidTracker class), the number of frames to skip detection (and only use tracking), and whether to output the annotations at the original video size or at the YoloV3 416x416 size.
63 | 
64 | The YoloV3 model is pretrained and downloaded automatically (an Internet connection is required for the download).
65 | 
66 | ## 6. Working :
67 | 
68 | ![signals](https://github.com/4Tron/Adaptive-Traffic-Signal-Control-System/blob/master/images/signal.png)
69 | 
70 | The solution can be explained in four simple steps:
71 | 
72 | 1. Get a real-time image of each lane.
73 | 2. Scan and determine the traffic density.
74 | 3. Feed this data to the Time Allocation module.
75 | 4. The output is the time slot for each lane, accordingly.
76 | 
77 | ![flow](https://github.com/4Tron/Adaptive-Traffic-Signal-Control-System/blob/master/images/seq.png)
78 | 
79 | ### 6.1 Sequence of operations performed:
80 | 
81 | 1. The camera sends images to our system at regular short intervals.
82 | 2. The system determines the number of cars in each lane and hence computes its
83 | relative density with respect to the other lanes.
84 | 3. The time allotment module takes this traffic density as input and
85 | determines an optimized and efficient time slot.
86 | 4. This value is then sent by the microprocessor to the respective traffic lights.
87 | 
88 | 
89 | ## 7. Code :
90 | ### 7.1 Synchronization logic:
91 | 
92 | f = open("out.txt", "r")
93 | no_of_vehicles=[]
94 | no_of_vehicles.append(int(f.readline()))
95 | no_of_vehicles.append(int(f.readline()))
96 | no_of_vehicles.append(int(f.readline()))
97 | no_of_vehicles.append(int(f.readline()))
98 | baseTimer = 120 # baseTimer = int(input("Enter the base timer value"))
99 | timeLimits = [5, 30] # timeLimits = list(map(int,input("Enter the time limits ").split()))
100 | print("Input no of vehicles : ", *no_of_vehicles)
101 | 
102 | t = [(i / sum(no_of_vehicles)) * baseTimer if timeLimits[0] < (i / sum(no_of_vehicles)) * baseTimer < timeLimits[1] else min(timeLimits, key=lambda x: abs(x - (i / sum(no_of_vehicles)) * baseTimer)) for i in no_of_vehicles]
103 | print(t, sum(t))
104 | 
105 | 
106 | ## 8. Result :
107 | ![output](https://github.com/4Tron/Adaptive-Traffic-Signal-Control-System/blob/master/images/op.png)
108 | 
109 | ## 9. Conclusion :
110 | 
111 | The goal of this work is to improve intelligent transport systems by developing a self-adaptive
112 | algorithm to control road traffic based on deep learning. This new system facilitates the
113 | movement of cars through intersections, resulting in reduced congestion, lower CO2 emissions, etc.
114 | The richness that video data provides highlights the importance of advancing the state of the art
115 | in object detection, classification and tracking for real-time applications. YOLO provides
116 | extremely fast inference speed with a slight compromise in accuracy, especially at lower
117 | resolutions and with smaller objects. While real-time inference is possible, applications that
118 | utilize edge devices still require improvements in either the architecture’s design or the edge
119 | device’s hardware.
120 | Finally, we have proposed a new algorithm that takes this real-time data from YOLO and
121 | optimizes signal phases in order to reduce vehicle waiting time.
122 | 
123 | 
124 | ## 10. Extensibility :
125 | You can easily extend this project by changing the classes you are interested in detecting and tracking (see which classes YoloV3 supports) and/or by changing the neural network used by tensornets for a better speed/accuracy trade-off.
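
As noted in section 10, the classes of interest are defined in `multithreading.py` as a dictionary of COCO class indexes taken from its `all_classes` list. A hypothetical variation that also counts pedestrians (index 0 in that list) might look like this:

```python
# Illustrative only -- index/name pairs come from the all_classes list in multithreading.py.
# Adding "person" makes the counter include pedestrians as well as the vehicle classes.
classes = {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorbike', 5: 'bus', 7: 'truck'}
```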
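
The synchronization logic in section 7.1 (also shipped as `program.py`) gives each lane a green-time share proportional to its vehicle count and clamps any share that falls outside `timeLimits` to the nearest bound. A minimal sketch of the same rule, written as a plain function for readability (the function name and defaults are illustrative, not part of the repository), is:

```python
def allocate_green_times(counts, base_timer=120, time_limits=(5, 30)):
    """Proportional green-time split with clamping, mirroring program.py."""
    total = sum(counts)
    times = []
    for count in counts:
        share = (count / total) * base_timer
        if time_limits[0] < share < time_limits[1]:
            times.append(share)
        else:
            # Clamp to whichever limit is closer, exactly as the one-line comprehension does.
            times.append(min(time_limits, key=lambda limit: abs(limit - share)))
    return times

# Sample counts from the bundled out.txt: every lane ends up with a 30-second slot.
print(allocate_green_times([14, 14, 14, 14]))  # -> [30, 30, 30, 30]
```

For the vehicle counts in the sample out.txt this yields an equal 30-second slot per lane, and the four slots together fill the 120-second cycle.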
126 | -------------------------------------------------------------------------------- /STechSubmission/Certificate.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/STechSubmission/Certificate.pdf -------------------------------------------------------------------------------- /STechSubmission/Handout.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/STechSubmission/Handout.pdf -------------------------------------------------------------------------------- /STechSubmission/ProjectEvaluationSheet.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/STechSubmission/ProjectEvaluationSheet.pdf -------------------------------------------------------------------------------- /STechSubmission/Report.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/STechSubmission/Report.pdf -------------------------------------------------------------------------------- /images/op.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/images/op.png -------------------------------------------------------------------------------- /images/seq.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/images/seq.png -------------------------------------------------------------------------------- /images/signal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/images/signal.png -------------------------------------------------------------------------------- /images/yolo net.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/images/yolo net.png -------------------------------------------------------------------------------- /images/yolo.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/images/yolo.jpg -------------------------------------------------------------------------------- /multithreading.py: -------------------------------------------------------------------------------- 1 | from tracking.centroidtracker import CentroidTracker 2 | from tracking.trackableobject import TrackableObject 3 | import tensornets as nets 4 | import cv2 5 | import numpy as np 6 | import time 7 | import dlib 8 | import tensorflow.compat.v1 as tf 9 | import os 10 | import threading 11 | 12 | def countVehicles(param): 13 | # param -> path of the video 14 | # list -> number of 
vehicles will be written in the list 15 | # index ->Index at which data has to be written 16 | 17 | tf.disable_v2_behavior() 18 | 19 | # Image size must be '416x416' as YoloV3 network expects that specific image size as input 20 | img_size = 416 21 | inputs = tf.placeholder(tf.float32, [None, img_size, img_size, 3]) 22 | model = nets.YOLOv3COCO(inputs, nets.Darknet19) 23 | 24 | ct = CentroidTracker(maxDisappeared=5, maxDistance=50) # Look into 'CentroidTracker' for further info about parameters 25 | trackers = [] # List of all dlib trackers 26 | trackableObjects = {} # Dictionary of trackable objects containing object's ID and its' corresponding centroid/s 27 | skip_frames = 10 # Numbers of frames to skip from detecting 28 | confidence_level = 0.40 # The confidence level of a detection 29 | total = 0 # Total number of detected objects from classes of interest 30 | use_original_video_size_as_output_size = True # Shows original video as output and not the 416x416 image that is used as yolov3 input (NOTE: Detection still happens with 416x416 img size but the output is displayed in original video size if this parameter is True) 31 | 32 | video_path = os.getcwd() + param # "/videos/4.mp4" 33 | video_name = os.path.basename(video_path) 34 | 35 | # print("Loading video {video_path}...".format(video_path=video_path)) 36 | if not os.path.exists(video_path): 37 | print("File does not exist. Exited.") 38 | exit() 39 | 40 | # YoloV3 detects 80 classes represented below 41 | all_classes = ["person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", \ 42 | "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", \ 43 | "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", \ 44 | "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", \ 45 | "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", \ 46 | "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", \ 47 | "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", \ 48 | "chair", "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", \ 49 | "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", \ 50 | "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"] 51 | 52 | # Classes of interest (with their corresponding indexes for easier looping) 53 | classes = { 1 : 'bicycle', 2 : 'car', 3 : 'motorbike', 5 : 'bus', 7 : 'truck' } 54 | 55 | with tf.Session() as sess: 56 | sess.run(model.pretrained()) 57 | cap = cv2.VideoCapture(video_path) 58 | 59 | # Get video size (just for log purposes) 60 | width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) 61 | height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) 62 | 63 | # Scale used for output window size and net size 64 | width_scale = 1 65 | height_scale = 1 66 | 67 | if use_original_video_size_as_output_size: 68 | width_scale = width / img_size 69 | height_scale = height / img_size 70 | 71 | def drawRectangleCV2(img, pt1, pt2, color, thickness, width_scale=width_scale, height_scale=height_scale): 72 | point1 = (int(pt1[0] * width_scale), int(pt1[1] * height_scale)) 73 | point2 = (int(pt2[0] * width_scale), int(pt2[1] * height_scale)) 74 | return cv2.rectangle(img, point1, point2, color, thickness) 75 | 76 | def drawTextCV2(img, text, pt, font, font_scale, color, lineType, width_scale=width_scale, height_scale=height_scale): 77 | 
pt = (int(pt[0] * width_scale), int(pt[1] * height_scale)) 78 | cv2.putText(img, text, pt, font, font_scale, color, lineType) 79 | 80 | def drawCircleCV2(img, center, radius, color, thickness, width_scale=width_scale, height_scale=height_scale): 81 | center = (int(center[0] * width_scale), int(center[1] * height_scale)) 82 | cv2.circle(img, center, radius, color, thickness) 83 | 84 | # Python 3.5.6 does not support f-strings (next line will generate syntax error) 85 | #print(f"Loaded {video_path}. Width: {width}, Height: {height}") 86 | # print("Loaded {video_path}. Width: {width}, Height: {height}".format(video_path=video_path, width=width, height=height)) 87 | 88 | skipped_frames_counter = 0 89 | 90 | while(cap.isOpened()): 91 | try : 92 | ret, frame = cap.read() 93 | img = cv2.resize(frame, (img_size, img_size)) 94 | except: 95 | print(total_str) 96 | 97 | 98 | output_img = frame if use_original_video_size_as_output_size else img 99 | 100 | tracker_rects = [] 101 | 102 | if skipped_frames_counter == skip_frames: 103 | 104 | # Detecting happens after number of frames have passes specified by 'skip_frames' variable value 105 | # print("[DETECTING]") 106 | 107 | trackers = [] 108 | skipped_frames_counter = 0 # reset counter 109 | 110 | np_img = np.array(img).reshape(-1, img_size, img_size, 3) 111 | 112 | start_time=time.time() 113 | predictions = sess.run(model.preds, {inputs: model.preprocess(np_img)}) 114 | # print("Detection took %s seconds" % (time.time() - start_time)) 115 | 116 | # model.get_boxes returns a 80 element array containing information about detected classes 117 | # each element contains a list of detected boxes, confidence level ... 118 | detections = model.get_boxes(predictions, np_img.shape[1:3]) 119 | np_detections = np.array(detections) 120 | 121 | # Loop only through classes we are interested in 122 | for class_index in classes.keys(): 123 | local_count = 0 124 | class_name = classes[class_index] 125 | 126 | # Loop through detected infos of a class we are interested in 127 | for i in range(len(np_detections[class_index])): 128 | box = np_detections[class_index][i] 129 | 130 | if np_detections[class_index][i][4] >= confidence_level: 131 | # print("Detected ", class_name, " with confidence of ", np_detections[class_index][i][4]) 132 | 133 | local_count += 1 134 | startX, startY, endX, endY = box[0], box[1], box[2], box[3] 135 | 136 | drawRectangleCV2(output_img, (startX, startY), (endX, endY), (0, 255, 0), 1) 137 | drawTextCV2(output_img, class_name, (startX, startY), cv2.FONT_HERSHEY_SIMPLEX, .5, (0, 0, 255), 1) 138 | 139 | # Construct a dlib rectangle object from the bounding box coordinates and then start the dlib correlation 140 | tracker = dlib.correlation_tracker() 141 | rect = dlib.rectangle(int(startX), int(startY), int(endX), int(endY)) 142 | tracker.start_track(img, rect) 143 | 144 | # Add the tracker to our list of trackers so we can utilize it during skip frames 145 | trackers.append(tracker) 146 | 147 | # Write the total number of detected objects for a given class on this frame 148 | # print(class_name," : ", local_count) 149 | else: 150 | 151 | # If detection is not happening then track previously detected objects (if any) 152 | # print("[TRACKING]") 153 | 154 | skipped_frames_counter += 1 # Increase the number frames for which we did not use detection 155 | 156 | # Loop through tracker, update each of them and display their rectangle 157 | for tracker in trackers: 158 | tracker.update(img) 159 | pos = tracker.get_position() 160 | 161 | # Unpack the 
position object 162 | startX = int(pos.left()) 163 | startY = int(pos.top()) 164 | endX = int(pos.right()) 165 | endY = int(pos.bottom()) 166 | 167 | # Add the bounding box coordinates to the tracking rectangles list 168 | tracker_rects.append((startX, startY, endX, endY)) 169 | 170 | # Draw tracking rectangles 171 | drawRectangleCV2(output_img, (startX, startY), (endX, endY), (255, 0, 0), 1) 172 | 173 | 174 | 175 | # Use the centroid tracker to associate the (1) old object centroids with (2) the newly computed object centroids 176 | objects = ct.update(tracker_rects) 177 | 178 | # Loop over the tracked objects 179 | for (objectID, centroid) in objects.items(): 180 | # Check to see if a trackable object exists for the current object ID 181 | to = trackableObjects.get(objectID, None) 182 | 183 | if to is None: 184 | # If there is no existing trackable object, create one 185 | to = TrackableObject(objectID, centroid) 186 | else: 187 | to.centroids.append(centroid) 188 | 189 | # If the object has not been counted, count it and mark it as counted 190 | if not to.counted: 191 | total += 1 192 | to.counted = True 193 | 194 | # Store the trackable object in our dictionary 195 | trackableObjects[objectID] = to 196 | 197 | # Draw both the ID of the object and the centroid of the object on the output frame 198 | object_id = "ID {}".format(objectID) 199 | drawTextCV2(output_img, object_id, (centroid[0] - 10, centroid[1] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1) 200 | drawCircleCV2(output_img, (centroid[0], centroid[1]), 2, (0, 255, 0), -1) 201 | 202 | # Display the total count so far 203 | total_str = str(total) 204 | drawTextCV2(output_img, total_str, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2) 205 | 206 | # Display the current frame (with all annotations drawn up to this point) 207 | cv2.imshow(video_name, output_img) 208 | 209 | key = cv2.waitKey(1) & 0xFF 210 | if key == ord('q'): # QUIT (exits) 211 | break 212 | elif key == ord('p'): 213 | cv2.waitKey(0) # PAUSE (Enter any key to continue) 214 | 215 | cap.release() 216 | cv2.destroyAllWindows() 217 | print("Exited") 218 | 219 | """ 220 | function which will run our code 221 | 222 | will write the number of veicles in the list provided 223 | """ 224 | 225 | if __name__ == "__main__": 226 | 227 | 228 | 229 | countVehicles("/videos/test.mp4") 230 | 231 | # Logic for setting the time for each signal 232 | -------------------------------------------------------------------------------- /out.txt: -------------------------------------------------------------------------------- 1 | 14 2 | 14 3 | 14 4 | 14 5 | -------------------------------------------------------------------------------- /program.py: -------------------------------------------------------------------------------- 1 | f = open("out.txt", "r") 2 | no_of_vehicles=[] 3 | no_of_vehicles.append(int(f.readline())) 4 | no_of_vehicles.append(int(f.readline())) 5 | no_of_vehicles.append(int(f.readline())) 6 | no_of_vehicles.append(int(f.readline())) 7 | 8 | baseTimer = 120 # baseTimer = int(input("Enter the base timer value")) 9 | timeLimits = [5, 30] # timeLimits = list(map(int,input("Enter the time limits ").split())) 10 | 11 | print("Input no of vehicles : ", *no_of_vehicles) 12 | t = [(i / sum(no_of_vehicles)) * baseTimer if timeLimits[0] < (i / sum(no_of_vehicles)) * baseTimer < timeLimits[1] 13 | else min(timeLimits, key=lambda x: abs(x - (i / sum(no_of_vehicles)) * baseTimer)) for i in no_of_vehicles] 14 | 15 | print(t, sum(t)) 
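16 | 
17 | # Example with the sample out.txt in this repository (14, 14, 14, 14): each raw share is
18 | # (14 / 56) * 120 = 30.0 s, which is not strictly below timeLimits[1] = 30, so it is clamped
19 | # to 30 and the script prints [30, 30, 30, 30] 120.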
-------------------------------------------------------------------------------- /requirements-python-3.7.4.txt: -------------------------------------------------------------------------------- 1 | absl-py==0.9.0 2 | asn1crypto==1.2.0 3 | astor==0.8.1 4 | cachetools==4.0.0 5 | certifi==2018.8.24 6 | cffi==1.13.0 7 | chardet==3.0.4 8 | conda==4.7.12 9 | conda-package-handling==1.6.0 10 | cryptography==2.8 11 | Cython==0.29.15 12 | dlib==19.19.0 13 | gast==0.2.2 14 | google-auth==1.11.2 15 | google-auth-oauthlib==0.4.1 16 | google-pasta==0.1.8 17 | grpcio==1.27.2 18 | h5py==2.10.0 19 | idna==2.9 20 | imutils==0.5.3 21 | Keras-Applications==1.0.8 22 | Keras-Preprocessing==1.1.0 23 | Markdown==3.2.1 24 | numpy==1.18.1 25 | oauthlib==3.1.0 26 | opencv-contrib-python==4.2.0.32 27 | opt-einsum==3.1.0 28 | protobuf==3.11.3 29 | pyasn1==0.4.8 30 | pyasn1-modules==0.2.8 31 | pycosat==0.6.3 32 | pycparser==2.19 33 | pyOpenSSL==19.0.0 34 | PySocks==1.7.1 35 | requests==2.23.0 36 | requests-oauthlib==1.3.0 37 | rsa==4.0 38 | ruamel-yaml==0.15.46 39 | scipy==1.4.1 40 | six==1.14.0 41 | tensorboard==2.1.0 42 | tensorflow==2.1.2 43 | tensorflow-estimator==2.1.0 44 | tensornets==0.4.3 45 | termcolor==1.1.0 46 | tqdm==4.36.1 47 | urllib3==1.25.8 48 | Werkzeug==1.0.0 49 | wrapt==1.12.0 50 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | absl-py==0.9.0 2 | astor==0.8.1 3 | cachetools==4.0.0 4 | certifi==2018.8.24 5 | chardet==3.0.4 6 | Cython==0.29.15 7 | dlib==19.19.0 8 | gast==0.2.2 9 | google-auth==1.11.2 10 | google-auth-oauthlib==0.4.1 11 | google-pasta==0.1.8 12 | grpcio==1.27.2 13 | h5py==2.10.0 14 | idna==2.9 15 | imutils==0.5.3 16 | Keras-Applications==1.0.8 17 | Keras-Preprocessing==1.1.0 18 | Markdown==3.2.1 19 | numpy==1.18.1 20 | oauthlib==3.1.0 21 | opencv-contrib-python==4.2.0.32 22 | opt-einsum==3.1.0 23 | protobuf==3.11.3 24 | pyasn1==0.4.8 25 | pyasn1-modules==0.2.8 26 | requests==2.23.0 27 | requests-oauthlib==1.3.0 28 | rsa==4.0 29 | scipy==1.4.1 30 | six==1.14.0 31 | tensorboard==2.1.0 32 | tensorflow==2.1.2 33 | tensorflow-estimator==2.1.0 34 | tensornets==0.4.3 35 | termcolor==1.1.0 36 | urllib3==1.25.8 37 | Werkzeug==1.0.0 38 | wrapt==1.12.0 39 | -------------------------------------------------------------------------------- /run.sh: -------------------------------------------------------------------------------- 1 | fname=out.txt 2 | if test -e $fname 3 | then 4 | rm $fname 5 | fi 6 | python multithreading.py "/videos/1.mp4" >> out.txt 7 | python multithreading.py "/videos/2.mp4" >> out.txt 8 | python multithreading.py "/videos/3.mp4" >> out.txt 9 | python multithreading.py "/videos/4.mp4" >> out.txt 10 | python program.py 11 | -------------------------------------------------------------------------------- /tracking/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/tracking/__init__.py -------------------------------------------------------------------------------- /tracking/centroidtracker.py: -------------------------------------------------------------------------------- 1 | # import the necessary packages 2 | from scipy.spatial import distance as dist 3 | from collections import OrderedDict 4 | import numpy as np 5 | 6 | class CentroidTracker: 7 | def __init__(self, 
maxDisappeared=50, maxDistance=50): 8 | # initialize the next unique object ID along with two ordered 9 | # dictionaries used to keep track of mapping a given object 10 | # ID to its centroid and number of consecutive frames it has 11 | # been marked as "disappeared", respectively 12 | self.nextObjectID = 0 13 | self.objects = OrderedDict() 14 | self.disappeared = OrderedDict() 15 | 16 | # store the number of maximum consecutive frames a given 17 | # object is allowed to be marked as "disappeared" until we 18 | # need to deregister the object from tracking 19 | self.maxDisappeared = maxDisappeared 20 | 21 | # store the maximum distance between centroids to associate 22 | # an object -- if the distance is larger than this maximum 23 | # distance we'll start to mark the object as "disappeared" 24 | self.maxDistance = maxDistance 25 | 26 | def register(self, centroid): 27 | # when registering an object we use the next available object 28 | # ID to store the centroid 29 | self.objects[self.nextObjectID] = centroid 30 | self.disappeared[self.nextObjectID] = 0 31 | self.nextObjectID += 1 32 | 33 | def deregister(self, objectID): 34 | # to deregister an object ID we delete the object ID from 35 | # both of our respective dictionaries 36 | del self.objects[objectID] 37 | del self.disappeared[objectID] 38 | 39 | def update(self, rects): 40 | # check to see if the list of input bounding box rectangles 41 | # is empty 42 | if len(rects) == 0: 43 | # loop over any existing tracked objects and mark them 44 | # as disappeared 45 | for objectID in list(self.disappeared.keys()): 46 | self.disappeared[objectID] += 1 47 | 48 | # if we have reached a maximum number of consecutive 49 | # frames where a given object has been marked as 50 | # missing, deregister it 51 | if self.disappeared[objectID] > self.maxDisappeared: 52 | self.deregister(objectID) 53 | 54 | # return early as there are no centroids or tracking info 55 | # to update 56 | return self.objects 57 | 58 | # initialize an array of input centroids for the current frame 59 | inputCentroids = np.zeros((len(rects), 2), dtype="int") 60 | 61 | # loop over the bounding box rectangles 62 | for (i, (startX, startY, endX, endY)) in enumerate(rects): 63 | # use the bounding box coordinates to derive the centroid 64 | cX = int((startX + endX) / 2.0) 65 | cY = int((startY + endY) / 2.0) 66 | inputCentroids[i] = (cX, cY) 67 | 68 | # if we are currently not tracking any objects take the input 69 | # centroids and register each of them 70 | if len(self.objects) == 0: 71 | for i in range(0, len(inputCentroids)): 72 | self.register(inputCentroids[i]) 73 | 74 | # otherwise, are are currently tracking objects so we need to 75 | # try to match the input centroids to existing object 76 | # centroids 77 | else: 78 | # grab the set of object IDs and corresponding centroids 79 | objectIDs = list(self.objects.keys()) 80 | objectCentroids = list(self.objects.values()) 81 | 82 | # compute the distance between each pair of object 83 | # centroids and input centroids, respectively -- our 84 | # goal will be to match an input centroid to an existing 85 | # object centroid 86 | D = dist.cdist(np.array(objectCentroids), inputCentroids) 87 | 88 | # in order to perform this matching we must (1) find the 89 | # smallest value in each row and then (2) sort the row 90 | # indexes based on their minimum values so that the row 91 | # with the smallest value as at the *front* of the index 92 | # list 93 | rows = D.min(axis=1).argsort() 94 | 95 | # next, we perform a similar process on 
the columns by 96 | # finding the smallest value in each column and then 97 | # sorting using the previously computed row index list 98 | cols = D.argmin(axis=1)[rows] 99 | 100 | # in order to determine if we need to update, register, 101 | # or deregister an object we need to keep track of which 102 | # of the rows and column indexes we have already examined 103 | usedRows = set() 104 | usedCols = set() 105 | 106 | # loop over the combination of the (row, column) index 107 | # tuples 108 | for (row, col) in zip(rows, cols): 109 | # if we have already examined either the row or 110 | # column value before, ignore it 111 | if row in usedRows or col in usedCols: 112 | continue 113 | 114 | # if the distance between centroids is greater than 115 | # the maximum distance, do not associate the two 116 | # centroids to the same object 117 | if D[row, col] > self.maxDistance: 118 | continue 119 | 120 | # otherwise, grab the object ID for the current row, 121 | # set its new centroid, and reset the disappeared 122 | # counter 123 | objectID = objectIDs[row] 124 | self.objects[objectID] = inputCentroids[col] 125 | self.disappeared[objectID] = 0 126 | 127 | # indicate that we have examined each of the row and 128 | # column indexes, respectively 129 | usedRows.add(row) 130 | usedCols.add(col) 131 | 132 | # compute both the row and column index we have NOT yet 133 | # examined 134 | unusedRows = set(range(0, D.shape[0])).difference(usedRows) 135 | unusedCols = set(range(0, D.shape[1])).difference(usedCols) 136 | 137 | # in the event that the number of object centroids is 138 | # equal or greater than the number of input centroids 139 | # we need to check and see if some of these objects have 140 | # potentially disappeared 141 | if D.shape[0] >= D.shape[1]: 142 | # loop over the unused row indexes 143 | for row in unusedRows: 144 | # grab the object ID for the corresponding row 145 | # index and increment the disappeared counter 146 | objectID = objectIDs[row] 147 | self.disappeared[objectID] += 1 148 | 149 | # check to see if the number of consecutive 150 | # frames the object has been marked "disappeared" 151 | # for warrants deregistering the object 152 | if self.disappeared[objectID] > self.maxDisappeared: 153 | self.deregister(objectID) 154 | 155 | # otherwise, if the number of input centroids is greater 156 | # than the number of existing object centroids we need to 157 | # register each new input centroid as a trackable object 158 | else: 159 | for col in unusedCols: 160 | self.register(inputCentroids[col]) 161 | 162 | # return the set of trackable objects 163 | return self.objects -------------------------------------------------------------------------------- /tracking/trackableobject.py: -------------------------------------------------------------------------------- 1 | class TrackableObject: 2 | def __init__(self, objectID, centroid): 3 | # store the object ID, then initialize a list of centroids 4 | # using the current centroid 5 | self.objectID = objectID 6 | self.centroids = [centroid] 7 | 8 | # initialize a boolean used to indicate if the object has 9 | # already been counted or not 10 | self.counted = False -------------------------------------------------------------------------------- /videos/1.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/videos/1.mp4 
-------------------------------------------------------------------------------- /videos/2.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/videos/2.mp4 -------------------------------------------------------------------------------- /videos/3.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/videos/3.mp4 -------------------------------------------------------------------------------- /videos/4.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/videos/4.mp4 -------------------------------------------------------------------------------- /videos/test.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/4Tron/Adaptive-Traffic-Signal-Control-System/075808f73e185b3478d1f7619c865f2aca7b28c9/videos/test.mp4 --------------------------------------------------------------------------------