├── README.md
├── TrafficLightDetection-Inference.html
├── TrafficLightDetection-Inference.ipynb
├── config
│   ├── faster_rcnn-traffic-udacity_sim.config
│   ├── faster_rcnn-traffic_udacity_real.config
│   ├── ssd_inception-traffic-udacity_sim.config
│   ├── ssd_inception-traffic_udacity_real.config
│   ├── ssd_mobilenet-traffic-udacity_sim.config
│   └── ssd_mobilenet-traffic_udacity_real.config
├── data_conversion_udacity_real.py
├── data_conversion_udacity_sim.py
├── examples
│   ├── left0000.jpg
│   ├── left0003.jpg
│   ├── left0011.jpg
│   ├── left0027.jpg
│   ├── left0140.jpg
│   ├── left0701.jpg
│   ├── real0000.png
│   ├── real0140.png
│   ├── real0701.png
│   ├── sim0003.png
│   ├── sim0011.png
│   └── sim0027.png
├── label_map.pbtxt
├── test_images_sim
│   ├── left0003.jpg
│   ├── left0011.jpg
│   ├── left0027.jpg
│   ├── left0034.jpg
│   ├── left0036.jpg
│   ├── left0040.jpg
│   ├── left0048.jpg
│   ├── left0545.jpg
│   ├── left0560.jpg
│   ├── left0588.jpg
│   ├── left0606.jpg
│   └── left0607.jpg
└── test_images_udacity
    ├── left0000.jpg
    ├── left0140.jpg
    ├── left0183.jpg
    ├── left0282.jpg
    ├── left0358.jpg
    ├── left0528.jpg
    ├── left0561.jpg
    ├── left0681.jpg
    └── left0701.jpg
/README.md:
--------------------------------------------------------------------------------
1 | [//]: # (Image References)
2 | [left0000]: ./examples/left0000.jpg
3 | [left0003]: ./examples/left0003.jpg
4 | [left0011]: ./examples/left0011.jpg
5 | [left0027]: ./examples/left0027.jpg
6 | [left0140]: ./examples/left0140.jpg
7 | [left0701]: ./examples/left0701.jpg
8 | 
9 | [real0000]: ./examples/real0000.png
10 | [real0140]: ./examples/real0140.png
11 | [real0701]: ./examples/real0701.png
12 | [sim0003]: ./examples/sim0003.png
13 | [sim0011]: ./examples/sim0011.png
14 | [sim0027]: ./examples/sim0027.png
15 | 
16 | # Traffic Light Detection and Classification with TensorFlow Object Detection API
17 | ---
18 | 
19 | #### A brief introduction to the project is available [here](https://medium.com/@Vatsal410/traffic-light-detection-tensorflow-api-c75fdbadac62)
20 | 
21 | ---
22 | 
23 | An AWS AMI with all the software dependencies (e.g. TensorFlow and Anaconda) is available among the community AMIs as `udacity-carnd-advanced-deep-learning`.
24 | 
25 | ### Get the dataset
26 | 
27 | [Drive location](https://drive.google.com/file/d/0B-Eiyn-CUQtxdUZWMkFfQzdObUE/view?usp=sharing)
28 | 
29 | ### Get the models
30 | 
31 | Run `git clone https://github.com/tensorflow/models.git` inside the `tensorflow` directory.
32 | 
33 | Follow the instructions at [this page](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) to install the required dependencies.
34 | 
35 | **All the files (data/, config/, the data conversion Python files, the .record files, utilities/, etc.) have to be kept inside the `tensorflow/models/research/` directory.**
36 | 
37 | 
38 | ### Location of pre-trained models
39 | [pre-trained models zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md)
40 | 
41 | Download the required model tar.gz files and untar them into the `tensorflow/models/research/` directory with `tar -xvzf name_of_tar_file`.
42 | 
43 | ### Creating TFRecord files
44 | 
45 | `python data_conversion_udacity_sim.py --output_path sim_data.record`
46 | 
47 | `python data_conversion_udacity_real.py --output_path real_data.record`
48 | 
49 | ---
50 | 
51 | ## Commands for training the models and saving the weights for inference
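All the runs below follow the same two-step pattern: train with `object_detection/train.py`, then freeze the resulting checkpoint with `object_detection/export_inference_graph.py`. Before launching a long run it can help to sanity-check the `.record` files generated above. A minimal sketch (assuming the TF 1.x API used throughout this repo, with `sim_data.record` in the current directory):

```python
import tensorflow as tf  # TF 1.x, matching the rest of this repo

# Count the records and spot-check the fields of the first example.
count = 0
for record in tf.python_io.tf_record_iterator('sim_data.record'):
    if count == 0:
        example = tf.train.Example()
        example.ParseFromString(record)
        feature = example.features.feature
        print('first file:', feature['image/filename'].bytes_list.value)
        print('labels:', feature['image/object/class/label'].int64_list.value)
    count += 1
print('total examples:', count)
```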
52 | 
53 | ## Using the Faster R-CNN model
54 | 
55 | ### For Simulator Data
56 | 
57 | #### Training
58 | 
59 | `python object_detection/train.py --pipeline_config_path=config/faster_rcnn-traffic-udacity_sim.config --train_dir=data/sim_training_data/sim_data_capture`
60 | 
61 | #### Saving for Inference
62 | 
63 | `python object_detection/export_inference_graph.py --pipeline_config_path=config/faster_rcnn-traffic-udacity_sim.config --trained_checkpoint_prefix=data/sim_training_data/sim_data_capture/model.ckpt-5000 --output_directory=frozen_sim/`
64 | 
65 | 
66 | ### For Real Data
67 | 
68 | #### Training
69 | 
70 | `python object_detection/train.py --pipeline_config_path=config/faster_rcnn-traffic_udacity_real.config --train_dir=data/real_training_data`
71 | 
72 | #### Saving for Inference
73 | 
74 | `python object_detection/export_inference_graph.py --pipeline_config_path=config/faster_rcnn-traffic_udacity_real.config --trained_checkpoint_prefix=data/real_training_data/model.ckpt-10000 --output_directory=frozen_real/`
75 | 
76 | ---
77 | 
78 | ## Using the SSD Inception v2 model
79 | 
80 | ### For Simulator Data
81 | 
82 | #### Training
83 | 
84 | `python object_detection/train.py --pipeline_config_path=config/ssd_inception-traffic-udacity_sim.config --train_dir=data/sim_training_data/sim_data_capture`
85 | 
86 | #### Saving for Inference
87 | 
88 | `python object_detection/export_inference_graph.py --pipeline_config_path=config/ssd_inception-traffic-udacity_sim.config --trained_checkpoint_prefix=data/sim_training_data/sim_data_capture/model.ckpt-5000 --output_directory=frozen_models/frozen_sim_inception/`
89 | 
90 | 
91 | ### For Real Data
92 | 
93 | #### Training
94 | 
95 | `python object_detection/train.py --pipeline_config_path=config/ssd_inception-traffic_udacity_real.config --train_dir=data/real_training_data`
96 | 
97 | #### Saving for Inference
98 | 
99 | `python object_detection/export_inference_graph.py --pipeline_config_path=config/ssd_inception-traffic_udacity_real.config --trained_checkpoint_prefix=data/real_training_data/model.ckpt-10000 --output_directory=frozen_models/frozen_real_inception/`
100 | 
101 | ---
102 | 
103 | ## Using the SSD MobileNet v1 model
104 | (For reasons we have not yet diagnosed, this model trains successfully but fails to export for inference. Ignoring this for now.)
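If you hit the export problem mentioned above, one thing worth checking (a hedged suggestion, not a confirmed fix) is whether the checkpoint prefix passed to `--trained_checkpoint_prefix` actually exists in the training directory:

```python
import tensorflow as tf  # TF 1.x

# List the checkpoints that train.py actually wrote, so that
# --trained_checkpoint_prefix can point at a real model.ckpt-* prefix.
state = tf.train.get_checkpoint_state('data/sim_training_data/sim_data_capture')
if state is None:
    print('no checkpoints found')
else:
    print('latest:', state.model_checkpoint_path)
    print('all:', list(state.all_model_checkpoint_paths))
```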
105 | 106 | ### For Simulator Data 107 | 108 | #### Training 109 | 110 | `python object_detection/train.py --pipeline_config_path=config/ssd_mobilenet-traffic-udacity_sim.config --train_dir=data/sim_training_data/sim_data_capture` 111 | 112 | #### Saving for Inference 113 | 114 | `python object_detection/export_inference_graph.py --pipeline_config_path=config/ssd_mobilenet-traffic-udacity_sim.config --trained_checkpoint_prefix=data/sim_training_data/sim_data_capture/model.ckpt-5000 --output_directory=frozen_models/frozen_sim_mobile/` 115 | 116 | 117 | ### For Real Data 118 | 119 | #### Training 120 | 121 | `python object_detection/train.py --pipeline_config_path=config/ssd_mobilenet-traffic_udacity_real.config --train_dir=data/real_training_data` 122 | 123 | #### Saving for Inference 124 | 125 | `python object_detection/export_inference_graph.py --pipeline_config_path=config/ssd_mobilenet-traffic_udacity_real.config --trained_checkpoint_prefix=data/real_training_data/model.ckpt-10000 --output_directory=frozen_models/frozen_real_mobile/` 126 | 127 | --- 128 | 129 | **Inference results can be viewed using the TrafficLightDetection-Inference.ipynb or .html files.** 130 | 131 | ### Camera Image and Model's Detections 132 | ![alt-text][left0000] 133 | ![alt-text][real0000] 134 | 135 | ![alt-text][left0140] 136 | ![alt-text][real0140] 137 | 138 | ![alt-text][left0701] 139 | ![alt-text][real0701] 140 | 141 | ![alt-text][left0003] 142 | ![alt-text][sim0003] 143 | 144 | ![alt-text][left0011] 145 | ![alt-text][sim0011] 146 | 147 | ![alt-text][left0027] 148 | ![alt-text][sim0027] 149 | 150 | --- 151 | 152 | #### Some useful links 153 | 154 | - [Uploading/Downloading files between AWS and GoogleDrive](http://olivermarshall.net/how-to-upload-a-file-to-google-drive-from-the-command-line/) 155 | 156 | - [Using Jupyter notebooks with AWS](https://medium.com/towards-data-science/setting-up-and-using-jupyter-notebooks-on-aws-61a9648db6c5) 157 | -------------------------------------------------------------------------------- /config/faster_rcnn-traffic-udacity_sim.config: -------------------------------------------------------------------------------- 1 | # Faster R-CNN with Resnet-101 (v1) configured for udacity Sim 2 | 3 | model { 4 | faster_rcnn { 5 | num_classes: 4 6 | image_resizer { 7 | keep_aspect_ratio_resizer { 8 | min_dimension: 600 9 | max_dimension: 800 10 | } 11 | } 12 | feature_extractor { 13 | type: 'faster_rcnn_resnet101' 14 | first_stage_features_stride: 16 15 | } 16 | first_stage_anchor_generator { 17 | grid_anchor_generator { 18 | scales: [0.25, 0.5, 1.0, 2.0] 19 | aspect_ratios: [0.5, 1.0, 2.0] 20 | height_stride: 16 21 | width_stride: 16 22 | } 23 | } 24 | first_stage_box_predictor_conv_hyperparams { 25 | op: CONV 26 | regularizer { 27 | l2_regularizer { 28 | weight: 0.0 29 | } 30 | } 31 | initializer { 32 | truncated_normal_initializer { 33 | stddev: 0.01 34 | } 35 | } 36 | } 37 | first_stage_nms_score_threshold: 0.0 38 | first_stage_nms_iou_threshold: 0.7 39 | first_stage_max_proposals: 10 40 | first_stage_localization_loss_weight: 2.0 41 | first_stage_objectness_loss_weight: 1.0 42 | initial_crop_size: 14 43 | maxpool_kernel_size: 2 44 | maxpool_stride: 2 45 | second_stage_box_predictor { 46 | mask_rcnn_box_predictor { 47 | use_dropout: false 48 | dropout_keep_probability: 1.0 49 | fc_hyperparams { 50 | op: FC 51 | regularizer { 52 | l2_regularizer { 53 | weight: 0.0 54 | } 55 | } 56 | initializer { 57 | variance_scaling_initializer { 58 | factor: 1.0 59 | uniform: true 60 | mode: 
FAN_AVG 61 | } 62 | } 63 | } 64 | } 65 | } 66 | second_stage_post_processing { 67 | batch_non_max_suppression { 68 | score_threshold: 0.0 69 | iou_threshold: 0.6 70 | max_detections_per_class: 10 71 | max_total_detections: 10 72 | } 73 | score_converter: SOFTMAX 74 | } 75 | second_stage_localization_loss_weight: 2.0 76 | second_stage_classification_loss_weight: 1.0 77 | second_stage_batch_size: 10 78 | } 79 | } 80 | 81 | train_config: { 82 | batch_size: 1 83 | optimizer { 84 | momentum_optimizer: { 85 | learning_rate: { 86 | manual_step_learning_rate { 87 | initial_learning_rate: 0.0003 88 | schedule { 89 | step: 0 90 | learning_rate: .0003 91 | } 92 | schedule { 93 | step: 900000 94 | learning_rate: .00003 95 | } 96 | schedule { 97 | step: 1200000 98 | learning_rate: .000003 99 | } 100 | } 101 | } 102 | momentum_optimizer_value: 0.9 103 | } 104 | use_moving_average: false 105 | } 106 | gradient_clipping_by_norm: 10.0 107 | fine_tune_checkpoint: "faster_rcnn_resnet101_coco_11_06_2017/model.ckpt" 108 | #fine_tune_checkpoint: "frozen_out/sim-resnet-10-proposals/model.ckpt" 109 | from_detection_checkpoint: true 110 | # Note: The below line limits the training process to 200K steps, which we 111 | # empirically found to be sufficient enough to train the pets dataset. This 112 | # effectively bypasses the learning rate schedule (the learning rate will 113 | # never decay). Remove the below line to train indefinitely. 114 | 115 | num_steps: 10000 116 | data_augmentation_options { 117 | random_horizontal_flip { 118 | } 119 | } 120 | } 121 | 122 | train_input_reader: { 123 | tf_record_input_reader { 124 | input_path: "sim_data.record" 125 | } 126 | label_map_path: "label_map.pbtxt" 127 | } 128 | 129 | #eval_config: { 130 | #num_examples: 2000 131 | # Note: The below line limits the evaluation process to 10 evaluations. 132 | # Remove the below line to evaluate indefinitely. 
133 | #max_evals: 10 134 | #} 135 | 136 | #eval_input_reader: { 137 | #tf_record_input_reader { 138 | #input_path: "sim_data.record" 139 | #} 140 | #label_map_path: "label_map.pbtxt" 141 | #shuffle: false 142 | #num_readers: 1 143 | #} 144 | 145 | -------------------------------------------------------------------------------- /config/faster_rcnn-traffic_udacity_real.config: -------------------------------------------------------------------------------- 1 | # Faster R-CNN with Resnet-101 (v1) configured for udacity integration project 2 | 3 | model { 4 | faster_rcnn { 5 | num_classes: 4 6 | image_resizer { 7 | keep_aspect_ratio_resizer { 8 | min_dimension: 1096 9 | max_dimension: 1368 10 | } 11 | } 12 | feature_extractor { 13 | type: 'faster_rcnn_resnet101' 14 | first_stage_features_stride: 16 15 | } 16 | first_stage_anchor_generator { 17 | grid_anchor_generator { 18 | scales: [0.25, 0.5, 1.0, 2.0] 19 | aspect_ratios: [0.5, 1.0, 2.0] 20 | height_stride: 16 21 | width_stride: 16 22 | } 23 | } 24 | first_stage_box_predictor_conv_hyperparams { 25 | op: CONV 26 | regularizer { 27 | l2_regularizer { 28 | weight: 0.0 29 | } 30 | } 31 | initializer { 32 | truncated_normal_initializer { 33 | stddev: 0.01 34 | } 35 | } 36 | } 37 | first_stage_nms_score_threshold: 0.0 38 | first_stage_nms_iou_threshold: 0.7 39 | first_stage_max_proposals: 50 40 | first_stage_localization_loss_weight: 2.0 41 | first_stage_objectness_loss_weight: 1.0 42 | initial_crop_size: 14 43 | maxpool_kernel_size: 2 44 | maxpool_stride: 2 45 | second_stage_box_predictor { 46 | mask_rcnn_box_predictor { 47 | use_dropout: false 48 | dropout_keep_probability: 1.0 49 | fc_hyperparams { 50 | op: FC 51 | regularizer { 52 | l2_regularizer { 53 | weight: 0.0 54 | } 55 | } 56 | initializer { 57 | variance_scaling_initializer { 58 | factor: 1.0 59 | uniform: true 60 | mode: FAN_AVG 61 | } 62 | } 63 | } 64 | } 65 | } 66 | second_stage_post_processing { 67 | batch_non_max_suppression { 68 | score_threshold: 0.0 69 | iou_threshold: 0.6 70 | max_detections_per_class: 10 71 | max_total_detections: 10 72 | } 73 | score_converter: SOFTMAX 74 | } 75 | second_stage_localization_loss_weight: 2.0 76 | second_stage_classification_loss_weight: 1.0 77 | second_stage_batch_size: 10 78 | } 79 | } 80 | 81 | train_config: { 82 | batch_size: 1 83 | optimizer { 84 | momentum_optimizer: { 85 | learning_rate: { 86 | manual_step_learning_rate { 87 | initial_learning_rate: 0.0003 88 | schedule { 89 | step: 0 90 | learning_rate: .0003 91 | } 92 | schedule { 93 | step: 900000 94 | learning_rate: .00003 95 | } 96 | schedule { 97 | step: 1200000 98 | learning_rate: .000003 99 | } 100 | } 101 | } 102 | momentum_optimizer_value: 0.9 103 | } 104 | use_moving_average: false 105 | } 106 | gradient_clipping_by_norm: 10.0 107 | #fine_tune_checkpoint: "faster_rcnn_resnet101_coco_11_06_2017/model.ckpt" 108 | fine_tune_checkpoint: "frozen_out/resnet-20707-50-proposals/model.ckpt" 109 | from_detection_checkpoint: true 110 | # Note: The below line limits the training process to 200K steps, which we 111 | # empirically found to be sufficient enough to train the pets dataset. This 112 | # effectively bypasses the learning rate schedule (the learning rate will 113 | # never decay). Remove the below line to train indefinitely. 
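# (The 200K figure above is inherited from the upstream pets template;
# this config stops at the num_steps value just below.)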
114 | 115 | num_steps: 10000 116 | data_augmentation_options { 117 | random_horizontal_flip { 118 | } 119 | } 120 | } 121 | 122 | train_input_reader: { 123 | tf_record_input_reader { 124 | input_path: "real_data.record" 125 | } 126 | label_map_path: "label_map.pbtxt" 127 | } 128 | -------------------------------------------------------------------------------- /config/ssd_inception-traffic-udacity_sim.config: -------------------------------------------------------------------------------- 1 | # SSD with Inception v2 configuration for MSCOCO Dataset. 2 | # Users should configure the fine_tune_checkpoint field in the train config as 3 | # well as the label_map_path and input_path fields in the train_input_reader and 4 | # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that 5 | # should be configured. 6 | 7 | model { 8 | ssd { 9 | num_classes: 4 10 | box_coder { 11 | faster_rcnn_box_coder { 12 | y_scale: 10.0 13 | x_scale: 10.0 14 | height_scale: 5.0 15 | width_scale: 5.0 16 | } 17 | } 18 | matcher { 19 | argmax_matcher { 20 | matched_threshold: 0.5 21 | unmatched_threshold: 0.5 22 | ignore_thresholds: false 23 | negatives_lower_than_unmatched: true 24 | force_match_for_each_row: true 25 | } 26 | } 27 | similarity_calculator { 28 | iou_similarity { 29 | } 30 | } 31 | anchor_generator { 32 | ssd_anchor_generator { 33 | num_layers: 6 34 | min_scale: 0.2 35 | max_scale: 0.95 36 | aspect_ratios: 1.0 37 | aspect_ratios: 2.0 38 | aspect_ratios: 0.5 39 | aspect_ratios: 3.0 40 | aspect_ratios: 0.3333 41 | reduce_boxes_in_lowest_layer: true 42 | } 43 | } 44 | image_resizer { 45 | fixed_shape_resizer { 46 | height: 300 47 | width: 300 48 | } 49 | } 50 | box_predictor { 51 | convolutional_box_predictor { 52 | min_depth: 0 53 | max_depth: 0 54 | num_layers_before_predictor: 0 55 | use_dropout: false 56 | dropout_keep_probability: 0.8 57 | kernel_size: 3 58 | box_code_size: 4 59 | apply_sigmoid_to_scores: false 60 | conv_hyperparams { 61 | activation: RELU_6, 62 | regularizer { 63 | l2_regularizer { 64 | weight: 0.00004 65 | } 66 | } 67 | initializer { 68 | truncated_normal_initializer { 69 | stddev: 0.03 70 | mean: 0.0 71 | } 72 | } 73 | } 74 | } 75 | } 76 | feature_extractor { 77 | type: 'ssd_inception_v2' 78 | min_depth: 16 79 | depth_multiplier: 1.0 80 | conv_hyperparams { 81 | activation: RELU_6, 82 | regularizer { 83 | l2_regularizer { 84 | weight: 0.00004 85 | } 86 | } 87 | initializer { 88 | truncated_normal_initializer { 89 | stddev: 0.03 90 | mean: 0.0 91 | } 92 | } 93 | batch_norm { 94 | train: true, 95 | scale: true, 96 | center: true, 97 | decay: 0.9997, 98 | epsilon: 0.001, 99 | } 100 | } 101 | } 102 | loss { 103 | classification_loss { 104 | weighted_sigmoid { 105 | anchorwise_output: true 106 | } 107 | } 108 | localization_loss { 109 | weighted_smooth_l1 { 110 | anchorwise_output: true 111 | } 112 | } 113 | hard_example_miner { 114 | num_hard_examples: 3000 115 | iou_threshold: 0.99 116 | loss_type: CLASSIFICATION 117 | max_negatives_per_positive: 3 118 | min_negatives_per_image: 0 119 | } 120 | classification_weight: 1.0 121 | localization_weight: 1.0 122 | } 123 | normalize_loss_by_num_matches: true 124 | post_processing { 125 | batch_non_max_suppression { 126 | score_threshold: 1e-8 127 | iou_threshold: 0.6 128 | max_detections_per_class: 50 129 | max_total_detections: 50 130 | } 131 | score_converter: SIGMOID 132 | } 133 | } 134 | } 135 | 136 | train_config: { 137 | batch_size: 24 138 | optimizer { 139 | rms_prop_optimizer: { 140 | learning_rate: { 141 | 
exponential_decay_learning_rate { 142 | initial_learning_rate: 0.004 143 | decay_steps: 800720 144 | decay_factor: 0.95 145 | } 146 | } 147 | momentum_optimizer_value: 0.9 148 | decay: 0.9 149 | epsilon: 1.0 150 | } 151 | } 152 | fine_tune_checkpoint: "ssd_inception_v2_coco_11_06_2017/model.ckpt" 153 | from_detection_checkpoint: true 154 | # Note: The below line limits the training process to 200K steps, which we 155 | # empirically found to be sufficient enough to train the pets dataset. This 156 | # effectively bypasses the learning rate schedule (the learning rate will 157 | # never decay). Remove the below line to train indefinitely. 158 | num_steps: 5000 159 | data_augmentation_options { 160 | random_horizontal_flip { 161 | } 162 | } 163 | data_augmentation_options { 164 | ssd_random_crop { 165 | } 166 | } 167 | } 168 | 169 | train_input_reader: { 170 | tf_record_input_reader { 171 | input_path: "sim_data.record" 172 | } 173 | label_map_path: "label_map.pbtxt" 174 | } 175 | -------------------------------------------------------------------------------- /config/ssd_inception-traffic_udacity_real.config: -------------------------------------------------------------------------------- 1 | # SSD with Inception v2 configuration for MSCOCO Dataset. 2 | # Users should configure the fine_tune_checkpoint field in the train config as 3 | # well as the label_map_path and input_path fields in the train_input_reader and 4 | # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that 5 | # should be configured. 6 | 7 | model { 8 | ssd { 9 | num_classes: 4 10 | box_coder { 11 | faster_rcnn_box_coder { 12 | y_scale: 10.0 13 | x_scale: 10.0 14 | height_scale: 5.0 15 | width_scale: 5.0 16 | } 17 | } 18 | matcher { 19 | argmax_matcher { 20 | matched_threshold: 0.5 21 | unmatched_threshold: 0.5 22 | ignore_thresholds: false 23 | negatives_lower_than_unmatched: true 24 | force_match_for_each_row: true 25 | } 26 | } 27 | similarity_calculator { 28 | iou_similarity { 29 | } 30 | } 31 | anchor_generator { 32 | ssd_anchor_generator { 33 | num_layers: 6 34 | min_scale: 0.2 35 | max_scale: 0.95 36 | aspect_ratios: 1.0 37 | aspect_ratios: 2.0 38 | aspect_ratios: 0.5 39 | aspect_ratios: 3.0 40 | aspect_ratios: 0.3333 41 | reduce_boxes_in_lowest_layer: true 42 | } 43 | } 44 | image_resizer { 45 | fixed_shape_resizer { 46 | height: 300 47 | width: 300 48 | } 49 | } 50 | box_predictor { 51 | convolutional_box_predictor { 52 | min_depth: 0 53 | max_depth: 0 54 | num_layers_before_predictor: 0 55 | use_dropout: false 56 | dropout_keep_probability: 0.8 57 | kernel_size: 3 58 | box_code_size: 4 59 | apply_sigmoid_to_scores: false 60 | conv_hyperparams { 61 | activation: RELU_6, 62 | regularizer { 63 | l2_regularizer { 64 | weight: 0.00004 65 | } 66 | } 67 | initializer { 68 | truncated_normal_initializer { 69 | stddev: 0.03 70 | mean: 0.0 71 | } 72 | } 73 | } 74 | } 75 | } 76 | feature_extractor { 77 | type: 'ssd_inception_v2' 78 | min_depth: 16 79 | depth_multiplier: 1.0 80 | conv_hyperparams { 81 | activation: RELU_6, 82 | regularizer { 83 | l2_regularizer { 84 | weight: 0.00004 85 | } 86 | } 87 | initializer { 88 | truncated_normal_initializer { 89 | stddev: 0.03 90 | mean: 0.0 91 | } 92 | } 93 | batch_norm { 94 | train: true, 95 | scale: true, 96 | center: true, 97 | decay: 0.9997, 98 | epsilon: 0.001, 99 | } 100 | } 101 | } 102 | loss { 103 | classification_loss { 104 | weighted_sigmoid { 105 | anchorwise_output: true 106 | } 107 | } 108 | localization_loss { 109 | weighted_smooth_l1 { 110 | 
anchorwise_output: true 111 | } 112 | } 113 | hard_example_miner { 114 | num_hard_examples: 3000 115 | iou_threshold: 0.99 116 | loss_type: CLASSIFICATION 117 | max_negatives_per_positive: 3 118 | min_negatives_per_image: 0 119 | } 120 | classification_weight: 1.0 121 | localization_weight: 1.0 122 | } 123 | normalize_loss_by_num_matches: true 124 | post_processing { 125 | batch_non_max_suppression { 126 | score_threshold: 1e-8 127 | iou_threshold: 0.6 128 | max_detections_per_class: 50 129 | max_total_detections: 50 130 | } 131 | score_converter: SIGMOID 132 | } 133 | } 134 | } 135 | 136 | train_config: { 137 | batch_size: 24 138 | optimizer { 139 | rms_prop_optimizer: { 140 | learning_rate: { 141 | exponential_decay_learning_rate { 142 | initial_learning_rate: 0.004 143 | decay_steps: 800720 144 | decay_factor: 0.95 145 | } 146 | } 147 | momentum_optimizer_value: 0.9 148 | decay: 0.9 149 | epsilon: 1.0 150 | } 151 | } 152 | fine_tune_checkpoint: "ssd_inception_v2_coco_11_06_2017/model.ckpt" 153 | from_detection_checkpoint: true 154 | # Note: The below line limits the training process to 200K steps, which we 155 | # empirically found to be sufficient enough to train the pets dataset. This 156 | # effectively bypasses the learning rate schedule (the learning rate will 157 | # never decay). Remove the below line to train indefinitely. 158 | num_steps: 10000 159 | data_augmentation_options { 160 | random_horizontal_flip { 161 | } 162 | } 163 | data_augmentation_options { 164 | ssd_random_crop { 165 | } 166 | } 167 | } 168 | 169 | train_input_reader: { 170 | tf_record_input_reader { 171 | input_path: "real_data.record" 172 | } 173 | label_map_path: "label_map.pbtxt" 174 | } 175 | -------------------------------------------------------------------------------- /config/ssd_mobilenet-traffic-udacity_sim.config: -------------------------------------------------------------------------------- 1 | # SSD with Mobilenet v1 configuration for MSCOCO Dataset. 2 | # Users should configure the fine_tune_checkpoint field in the train config as 3 | # well as the label_map_path and input_path fields in the train_input_reader and 4 | # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that 5 | # should be configured. 
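# In this repo num_classes is set to 4, matching label_map.pbtxt
# (Green, Red, Yellow, off).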
6 | 7 | model { 8 | ssd { 9 | num_classes: 4 10 | box_coder { 11 | faster_rcnn_box_coder { 12 | y_scale: 10.0 13 | x_scale: 10.0 14 | height_scale: 5.0 15 | width_scale: 5.0 16 | } 17 | } 18 | matcher { 19 | argmax_matcher { 20 | matched_threshold: 0.5 21 | unmatched_threshold: 0.5 22 | ignore_thresholds: false 23 | negatives_lower_than_unmatched: true 24 | force_match_for_each_row: true 25 | } 26 | } 27 | similarity_calculator { 28 | iou_similarity { 29 | } 30 | } 31 | anchor_generator { 32 | ssd_anchor_generator { 33 | num_layers: 6 34 | min_scale: 0.2 35 | max_scale: 0.95 36 | aspect_ratios: 1.0 37 | aspect_ratios: 2.0 38 | aspect_ratios: 0.5 39 | aspect_ratios: 3.0 40 | aspect_ratios: 0.3333 41 | } 42 | } 43 | image_resizer { 44 | fixed_shape_resizer { 45 | height: 300 46 | width: 300 47 | } 48 | } 49 | box_predictor { 50 | convolutional_box_predictor { 51 | min_depth: 0 52 | max_depth: 0 53 | num_layers_before_predictor: 0 54 | use_dropout: false 55 | dropout_keep_probability: 0.8 56 | kernel_size: 1 57 | box_code_size: 4 58 | apply_sigmoid_to_scores: false 59 | conv_hyperparams { 60 | activation: RELU_6, 61 | regularizer { 62 | l2_regularizer { 63 | weight: 0.00004 64 | } 65 | } 66 | initializer { 67 | truncated_normal_initializer { 68 | stddev: 0.03 69 | mean: 0.0 70 | } 71 | } 72 | batch_norm { 73 | train: true, 74 | scale: true, 75 | center: true, 76 | decay: 0.9997, 77 | epsilon: 0.001, 78 | } 79 | } 80 | } 81 | } 82 | feature_extractor { 83 | type: 'ssd_mobilenet_v1' 84 | min_depth: 16 85 | depth_multiplier: 1.0 86 | conv_hyperparams { 87 | activation: RELU_6, 88 | regularizer { 89 | l2_regularizer { 90 | weight: 0.00004 91 | } 92 | } 93 | initializer { 94 | truncated_normal_initializer { 95 | stddev: 0.03 96 | mean: 0.0 97 | } 98 | } 99 | batch_norm { 100 | train: true, 101 | scale: true, 102 | center: true, 103 | decay: 0.9997, 104 | epsilon: 0.001, 105 | } 106 | } 107 | } 108 | loss { 109 | classification_loss { 110 | weighted_sigmoid { 111 | anchorwise_output: true 112 | } 113 | } 114 | localization_loss { 115 | weighted_smooth_l1 { 116 | anchorwise_output: true 117 | } 118 | } 119 | hard_example_miner { 120 | num_hard_examples: 3000 121 | iou_threshold: 0.99 122 | loss_type: CLASSIFICATION 123 | max_negatives_per_positive: 3 124 | min_negatives_per_image: 0 125 | } 126 | classification_weight: 1.0 127 | localization_weight: 1.0 128 | } 129 | normalize_loss_by_num_matches: true 130 | post_processing { 131 | batch_non_max_suppression { 132 | score_threshold: 1e-8 133 | iou_threshold: 0.6 134 | max_detections_per_class: 50 135 | max_total_detections: 50 136 | } 137 | score_converter: SIGMOID 138 | } 139 | } 140 | } 141 | 142 | 143 | train_config: { 144 | batch_size: 24 145 | optimizer { 146 | rms_prop_optimizer: { 147 | learning_rate: { 148 | exponential_decay_learning_rate { 149 | initial_learning_rate: 0.004 150 | decay_steps: 800720 151 | decay_factor: 0.95 152 | } 153 | } 154 | momentum_optimizer_value: 0.9 155 | decay: 0.9 156 | epsilon: 1.0 157 | } 158 | } 159 | fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt" 160 | from_detection_checkpoint: true 161 | # Note: The below line limits the training process to 200K steps, which we 162 | # empirically found to be sufficient enough to train the pets dataset. This 163 | # effectively bypasses the learning rate schedule (the learning rate will 164 | # never decay). Remove the below line to train indefinitely. 
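# (As in the other configs, the 200K note above comes from the upstream
# template; the actual limit here is the num_steps value just below.)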
165 | num_steps: 5000 166 | data_augmentation_options { 167 | random_horizontal_flip { 168 | } 169 | } 170 | data_augmentation_options { 171 | ssd_random_crop { 172 | } 173 | } 174 | } 175 | 176 | train_input_reader: { 177 | tf_record_input_reader { 178 | input_path: "sim_data.record" 179 | } 180 | label_map_path: "label_map.pbtxt" 181 | } 182 | -------------------------------------------------------------------------------- /config/ssd_mobilenet-traffic_udacity_real.config: -------------------------------------------------------------------------------- 1 | # SSD with Mobilenet v1 configuration for MSCOCO Dataset. 2 | # Users should configure the fine_tune_checkpoint field in the train config as 3 | # well as the label_map_path and input_path fields in the train_input_reader and 4 | # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that 5 | # should be configured. 6 | 7 | model { 8 | ssd { 9 | num_classes: 4 10 | box_coder { 11 | faster_rcnn_box_coder { 12 | y_scale: 10.0 13 | x_scale: 10.0 14 | height_scale: 5.0 15 | width_scale: 5.0 16 | } 17 | } 18 | matcher { 19 | argmax_matcher { 20 | matched_threshold: 0.5 21 | unmatched_threshold: 0.5 22 | ignore_thresholds: false 23 | negatives_lower_than_unmatched: true 24 | force_match_for_each_row: true 25 | } 26 | } 27 | similarity_calculator { 28 | iou_similarity { 29 | } 30 | } 31 | anchor_generator { 32 | ssd_anchor_generator { 33 | num_layers: 6 34 | min_scale: 0.2 35 | max_scale: 0.95 36 | aspect_ratios: 1.0 37 | aspect_ratios: 2.0 38 | aspect_ratios: 0.5 39 | aspect_ratios: 3.0 40 | aspect_ratios: 0.3333 41 | } 42 | } 43 | image_resizer { 44 | fixed_shape_resizer { 45 | height: 300 46 | width: 300 47 | } 48 | } 49 | box_predictor { 50 | convolutional_box_predictor { 51 | min_depth: 0 52 | max_depth: 0 53 | num_layers_before_predictor: 0 54 | use_dropout: false 55 | dropout_keep_probability: 0.8 56 | kernel_size: 1 57 | box_code_size: 4 58 | apply_sigmoid_to_scores: false 59 | conv_hyperparams { 60 | activation: RELU_6, 61 | regularizer { 62 | l2_regularizer { 63 | weight: 0.00004 64 | } 65 | } 66 | initializer { 67 | truncated_normal_initializer { 68 | stddev: 0.03 69 | mean: 0.0 70 | } 71 | } 72 | batch_norm { 73 | train: true, 74 | scale: true, 75 | center: true, 76 | decay: 0.9997, 77 | epsilon: 0.001, 78 | } 79 | } 80 | } 81 | } 82 | feature_extractor { 83 | type: 'ssd_mobilenet_v1' 84 | min_depth: 16 85 | depth_multiplier: 1.0 86 | conv_hyperparams { 87 | activation: RELU_6, 88 | regularizer { 89 | l2_regularizer { 90 | weight: 0.00004 91 | } 92 | } 93 | initializer { 94 | truncated_normal_initializer { 95 | stddev: 0.03 96 | mean: 0.0 97 | } 98 | } 99 | batch_norm { 100 | train: true, 101 | scale: true, 102 | center: true, 103 | decay: 0.9997, 104 | epsilon: 0.001, 105 | } 106 | } 107 | } 108 | loss { 109 | classification_loss { 110 | weighted_sigmoid { 111 | anchorwise_output: true 112 | } 113 | } 114 | localization_loss { 115 | weighted_smooth_l1 { 116 | anchorwise_output: true 117 | } 118 | } 119 | hard_example_miner { 120 | num_hard_examples: 3000 121 | iou_threshold: 0.99 122 | loss_type: CLASSIFICATION 123 | max_negatives_per_positive: 3 124 | min_negatives_per_image: 0 125 | } 126 | classification_weight: 1.0 127 | localization_weight: 1.0 128 | } 129 | normalize_loss_by_num_matches: true 130 | post_processing { 131 | batch_non_max_suppression { 132 | score_threshold: 1e-8 133 | iou_threshold: 0.6 134 | max_detections_per_class: 50 135 | max_total_detections: 50 136 | } 137 | score_converter: SIGMOID 
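# SIGMOID scores each class independently, matching the weighted_sigmoid
# classification loss configured above.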
138 | } 139 | } 140 | } 141 | 142 | 143 | train_config: { 144 | batch_size: 24 145 | optimizer { 146 | rms_prop_optimizer: { 147 | learning_rate: { 148 | exponential_decay_learning_rate { 149 | initial_learning_rate: 0.004 150 | decay_steps: 800720 151 | decay_factor: 0.95 152 | } 153 | } 154 | momentum_optimizer_value: 0.9 155 | decay: 0.9 156 | epsilon: 1.0 157 | } 158 | } 159 | fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt" 160 | from_detection_checkpoint: true 161 | # Note: The below line limits the training process to 200K steps, which we 162 | # empirically found to be sufficient enough to train the pets dataset. This 163 | # effectively bypasses the learning rate schedule (the learning rate will 164 | # never decay). Remove the below line to train indefinitely. 165 | num_steps: 10000 166 | data_augmentation_options { 167 | random_horizontal_flip { 168 | } 169 | } 170 | data_augmentation_options { 171 | ssd_random_crop { 172 | } 173 | } 174 | } 175 | 176 | train_input_reader: { 177 | tf_record_input_reader { 178 | input_path: "real_data.record" 179 | } 180 | label_map_path: "label_map.pbtxt" 181 | } 182 | -------------------------------------------------------------------------------- /data_conversion_udacity_real.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Usage: python data_conversion_udacity_real.py --output_path output_file_name.record 3 | ''' 4 | 5 | import tensorflow as tf 6 | import yaml 7 | import os 8 | from utils import dataset_util 9 | 10 | 11 | flags = tf.app.flags 12 | flags.DEFINE_string('output_path', '', 'Path to output TFRecord') 13 | FLAGS = flags.FLAGS 14 | 15 | LABEL_DICT = { 16 | "Green" : 1, 17 | "Red" : 2, 18 | "Yellow" : 3, 19 | "off" : 4, 20 | } 21 | 22 | def create_tf_example(example): 23 | 24 | # Udacity real data set 25 | height = 1096 # Image height 26 | width = 1368 # Image width 27 | 28 | filename = example['filename'] # Filename of the image. 
Empty if image is not from file
29 |     filename = filename.encode()
30 | 
31 |     with tf.gfile.GFile(example['filename'], 'rb') as fid:
32 |         encoded_image = fid.read()
33 | 
34 |     image_format = 'jpg'.encode()
35 | 
36 |     xmins = [] # List of normalized left x coordinates in bounding box (1 per box)
37 |     xmaxs = [] # List of normalized right x coordinates in bounding box
38 |                # (1 per box)
39 |     ymins = [] # List of normalized top y coordinates in bounding box (1 per box)
40 |     ymaxs = [] # List of normalized bottom y coordinates in bounding box
41 |                # (1 per box)
42 |     classes_text = [] # List of string class name of bounding box (1 per box)
43 |     classes = [] # List of integer class id of bounding box (1 per box)
44 | 
45 |     for box in example['annotations']:
46 |         #print("adding box")
47 |         xmins.append(float(box['xmin']) / width) # float() before '/' keeps coords normalized on Python 2 as well
48 |         xmaxs.append(float(box['xmin'] + box['x_width']) / width)
49 |         ymins.append(float(box['ymin']) / height)
50 |         ymaxs.append(float(box['ymin'] + box['y_height']) / height)
51 |         classes_text.append(box['class'].encode())
52 |         classes.append(int(LABEL_DICT[box['class']]))
53 | 
54 | 
55 |     tf_example = tf.train.Example(features=tf.train.Features(feature={
56 |         'image/height': dataset_util.int64_feature(height),
57 |         'image/width': dataset_util.int64_feature(width),
58 |         'image/filename': dataset_util.bytes_feature(filename),
59 |         'image/source_id': dataset_util.bytes_feature(filename),
60 |         'image/encoded': dataset_util.bytes_feature(encoded_image),
61 |         'image/format': dataset_util.bytes_feature(image_format),
62 |         'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
63 |         'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
64 |         'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
65 |         'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
66 |         'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
67 |         'image/object/class/label': dataset_util.int64_list_feature(classes),
68 |     }))
69 | 
70 |     return tf_example
71 | 
72 | 
73 | def main(_):
74 | 
75 |     writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
76 | 
77 |     INPUT_YAML = "data/real_training_data/real_data_annotations.yaml"
78 |     examples = yaml.safe_load(open(INPUT_YAML, 'rb').read()) # safe_load avoids executing arbitrary YAML tags
79 | 
80 |     #examples = examples[:10] # for testing
81 |     len_examples = len(examples)
82 |     print("Loaded ", len(examples), "examples")
83 | 
84 |     for i in range(len(examples)):
85 |         examples[i]['filename'] = os.path.abspath(os.path.join(os.path.dirname(INPUT_YAML), examples[i]['filename']))
86 | 
87 |     counter = 0
88 |     for example in examples:
89 |         tf_example = create_tf_example(example)
90 |         writer.write(tf_example.SerializeToString())
91 | 
92 |         if counter % 10 == 0:
93 |             print("Percent done", (100.0 * counter / len_examples)) # float math so the progress percentage is meaningful on Python 2
94 |         counter += 1
95 | 
96 |     writer.close()
97 | 
98 | 
99 | 
100 | if __name__ == '__main__':
101 |     tf.app.run()
102 | 
--------------------------------------------------------------------------------
/data_conversion_udacity_sim.py:
--------------------------------------------------------------------------------
1 | '''
2 | Usage: python data_conversion_udacity_sim.py --output_path output_file_name.record
3 | '''
4 | 
5 | import tensorflow as tf
6 | import yaml
7 | import os
8 | from utils import dataset_util
9 | 
10 | 
11 | flags = tf.app.flags
12 | flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
13 | FLAGS = flags.FLAGS
14 | 
15 | LABEL_DICT = {
16 |     "Green" : 1,
17 |     "Red" : 2,
18 |     "Yellow" : 3,
19 |     "off" : 4,
20 | }
21 | 
22 | def create_tf_example(example):
23 | 
24 |     # Udacity sim data set
25 |     height = 600 # Image height
26 |     width = 800 # Image width
27 | 
28 |     filename = example['filename'] # Filename of the image. Empty if image is not from file
29 |     filename = filename.encode()
30 | 
31 |     with tf.gfile.GFile(example['filename'], 'rb') as fid:
32 |         encoded_image = fid.read()
33 | 
34 |     image_format = 'jpg'.encode()
35 | 
36 |     xmins = [] # List of normalized left x coordinates in bounding box (1 per box)
37 |     xmaxs = [] # List of normalized right x coordinates in bounding box
38 |                # (1 per box)
39 |     ymins = [] # List of normalized top y coordinates in bounding box (1 per box)
40 |     ymaxs = [] # List of normalized bottom y coordinates in bounding box
41 |                # (1 per box)
42 |     classes_text = [] # List of string class name of bounding box (1 per box)
43 |     classes = [] # List of integer class id of bounding box (1 per box)
44 | 
45 |     for box in example['annotations']:
46 |         #if box['occluded'] is False:
47 |         #print("adding box")
48 |         xmins.append(float(box['xmin']) / width) # float() before '/' keeps coords normalized on Python 2 as well
49 |         xmaxs.append(float(box['xmin'] + box['x_width']) / width)
50 |         ymins.append(float(box['ymin']) / height)
51 |         ymaxs.append(float(box['ymin'] + box['y_height']) / height)
52 |         classes_text.append(box['class'].encode())
53 |         classes.append(int(LABEL_DICT[box['class']]))
54 | 
55 | 
56 |     tf_example = tf.train.Example(features=tf.train.Features(feature={
57 |         'image/height': dataset_util.int64_feature(height),
58 |         'image/width': dataset_util.int64_feature(width),
59 |         'image/filename': dataset_util.bytes_feature(filename),
60 |         'image/source_id': dataset_util.bytes_feature(filename),
61 |         'image/encoded': dataset_util.bytes_feature(encoded_image),
62 |         'image/format': dataset_util.bytes_feature(image_format),
63 |         'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
64 |         'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
65 |         'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
66 |         'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
67 |         'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
68 |         'image/object/class/label': dataset_util.int64_list_feature(classes),
69 |     }))
70 | 
71 |     return tf_example
72 | 
73 | 
74 | def main(_):
75 | 
76 |     writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
77 | 
78 |     # Udacity
79 |     INPUT_YAML = "data/sim_training_data/sim_data_annotations.yaml"
80 |     examples = yaml.safe_load(open(INPUT_YAML, 'rb').read()) # safe_load avoids executing arbitrary YAML tags
81 | 
82 |     #examples = examples[:10] # for testing
83 |     len_examples = len(examples)
84 |     print("Loaded ", len(examples), "examples")
85 | 
86 |     for i in range(len(examples)):
87 |         examples[i]['filename'] = os.path.abspath(os.path.join(os.path.dirname(INPUT_YAML), examples[i]['filename']))
88 | 
89 |     counter = 0
90 |     for example in examples:
91 |         #print(example)
92 |         tf_example = create_tf_example(example)
93 |         writer.write(tf_example.SerializeToString())
94 | 
95 |         if counter % 10 == 0:
96 |             print("Percent done", (100.0 * counter / len_examples)) # float math so the progress percentage is meaningful on Python 2
97 |         counter += 1
98 | 
99 |     writer.close()
100 | 
101 | 
102 | 
103 | if __name__ == '__main__':
104 |     tf.app.run()
105 | 
--------------------------------------------------------------------------------
/examples/left0000.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/left0000.jpg
-------------------------------------------------------------------------------- /examples/left0003.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/left0003.jpg -------------------------------------------------------------------------------- /examples/left0011.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/left0011.jpg -------------------------------------------------------------------------------- /examples/left0027.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/left0027.jpg -------------------------------------------------------------------------------- /examples/left0140.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/left0140.jpg -------------------------------------------------------------------------------- /examples/left0701.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/left0701.jpg -------------------------------------------------------------------------------- /examples/real0000.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/real0000.png -------------------------------------------------------------------------------- /examples/real0140.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/real0140.png -------------------------------------------------------------------------------- /examples/real0701.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/real0701.png -------------------------------------------------------------------------------- /examples/sim0003.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/sim0003.png -------------------------------------------------------------------------------- /examples/sim0011.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/sim0011.png -------------------------------------------------------------------------------- /examples/sim0027.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/examples/sim0027.png -------------------------------------------------------------------------------- /label_map.pbtxt: -------------------------------------------------------------------------------- 1 | item { 2 | id: 1 3 | name: 'Green' 4 | } 5 | 6 | item { 7 | id: 2 8 | name: 'Red' 9 | } 10 | 11 | item { 12 | id: 3 13 | name: 'Yellow' 14 | } 15 | 16 | item { 17 | id: 4 18 | name: 'off' 19 | } 20 | -------------------------------------------------------------------------------- /test_images_sim/left0003.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0003.jpg -------------------------------------------------------------------------------- /test_images_sim/left0011.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0011.jpg -------------------------------------------------------------------------------- /test_images_sim/left0027.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0027.jpg -------------------------------------------------------------------------------- /test_images_sim/left0034.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0034.jpg -------------------------------------------------------------------------------- /test_images_sim/left0036.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0036.jpg -------------------------------------------------------------------------------- /test_images_sim/left0040.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0040.jpg -------------------------------------------------------------------------------- /test_images_sim/left0048.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0048.jpg -------------------------------------------------------------------------------- /test_images_sim/left0545.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0545.jpg -------------------------------------------------------------------------------- /test_images_sim/left0560.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0560.jpg -------------------------------------------------------------------------------- /test_images_sim/left0588.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0588.jpg -------------------------------------------------------------------------------- /test_images_sim/left0606.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0606.jpg -------------------------------------------------------------------------------- /test_images_sim/left0607.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_sim/left0607.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0000.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0000.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0140.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0140.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0183.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0183.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0282.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0282.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0358.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0358.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0528.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0528.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0561.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0561.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0681.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0681.jpg -------------------------------------------------------------------------------- /test_images_udacity/left0701.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vatsl/TrafficLight_Detection-TensorFlowAPI/ae9d9e2118a9c4a5fa31984ad051b51381d45a7e/test_images_udacity/left0701.jpg --------------------------------------------------------------------------------