├── LICENSE ├── README.md ├── images.zip ├── label_convention.md ├── label_example.png └── mirrors.zip /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 Florian Kraus 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Radar Ghost Dataset 2 | This is the accompanying github repository and landing page for the **Radar Ghost Dataset**. 
3 | 4 | - Paper: https://ieeexplore.ieee.org/document/9636338 5 | - Paper (open access): https://arxiv.org/abs/2404.01437 6 | - Dataset: 7 | - Version 1.0: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6474851.svg)](https://doi.org/10.5281/zenodo.6474851) (deprecated for lidar usage) 8 | - Version 1.1: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6676246.svg)](https://doi.org/10.5281/zenodo.6676246) 9 | 10 | ## Overview 11 | The **Radar Ghost Dataset** is a dataset with detailed manual annotations for different kinds of radar ghost detections. We hope that it encourages more researchers to engage in the field of radar-based multi-path reflections. 12 | 13 | ## Dataset 14 | 15 | ### Changelog 16 | 17 | #### Version 1.0 18 | - Uploaded initial dataset to Zenodo 19 | 20 | #### Version 1.1 21 | - Uploaded new version of the dataset to Zenodo 22 | - Fixed timesync issue between lidar and radar 23 | - Added data in car coordinates for lidar. Added columns `x_cc`, `y_cc`, and `z_cc`. 24 | - Removed cartesian coordinates for the sensor coordinate system in lidar. Removed columns `x_sc`, `y_sc`, and `z_sc`. This will break existing code which accesses those fields. Use the car coordinate variants (`_cc`) instead; this removes the need to manually take the mounting position into account. 25 | - Added spherical coordinates for the sensor coordinate system in lidar. Added columns `r_sc`, `theta_sc` (elevation), and `phi_sc` (azimuth). This unifies things between lidar and radar: car coordinates are always cartesian and sensor coordinates always spherical/polar. 26 | - Removed lidar noise. Previously the lidar data contained points marked as noise; those are now removed. 27 | - Added `uuid` column for lidar data. 28 | 29 | ### Downloads 30 | - `original.zip` contains the 111 hand-labeled sequences as described in the paper 31 | - `virtual.zip` contains an additional 460 sequences with multiple objects. Those were created by overlaying original sequences from the same scenario. 
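After unpacking the archives, each split folder holds one `h5` file per sequence. The sequences of a split can then be loaded in one go; this is a minimal sketch, where `load_split` is a hypothetical helper (not part of the dataset tooling) and the folder path assumes the layout produced by unzipping:

```python
from pathlib import Path

import h5py
import numpy as np


def load_split(split_dir):
    """Load every sequence file of one split as (name, radar, lidar) tuples."""
    sequences = []
    for path in sorted(Path(split_dir).glob('*.h5')):
        with h5py.File(path, 'r') as data:
            # copy while the file is still open, otherwise the arrays are empty
            radar = np.copy(data['radar'])
            lidar = np.copy(data['lidar'])
        sequences.append((path.stem, radar, lidar))
    return sequences


# e.g. after unzipping original.zip:
# train_sequences = load_split('original/train')
```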
32 | 33 | ### Images 34 | Check the `images.zip` for example images of each scenario. 35 | 36 | ### Mirrors 37 | Check `mirrors.zip` for short descriptions of the reflective surfaces. It contains a `.json` file for each sequence and provides additional information on the `mirror` column in the radar data. 38 | 39 | ### H5 Files 40 | The dataset is provided as `h5` files (after unpacking the zip files). Each h5 file contains a `radar` and a `lidar` dataset entry (see the code example below). 41 | 42 | ```python 43 | import h5py 44 | import numpy as np 45 | 46 | path = 'radar_ghost_example.h5' 47 | with h5py.File(path, 'r') as data: 48 | # use np.copy when using context managers (with statement), otherwise data will be empty. 49 | radar_data = np.copy(data['radar']) # numpy struct array 50 | lidar_data = np.copy(data['lidar']) # numpy struct array 51 | 52 | # work with data ... 53 | ``` 54 | 55 | ### Annotations/Labels 56 | Check the [label conventions](label_convention.md) for more information. 57 | 58 | ### Coordinate Systems 59 | We use two types of coordinate systems: sensor coordinate systems for each sensor (two radars, one lidar) and a "car coordinate" system which is the same for all sensors ([see exception for v1.0 of dataset](#issue:-missing-car-coordinate-system-for-lidar-in-v1.0)). 60 | 61 | Each positional entry has an indicator attached for the coordinate system it belongs to. The car coordinate system is indicated with `_cc` and the sensor coordinate systems with `_sc`. 62 | 63 | All coordinate systems use the standard convention for car coordinate systems: the x-axis points to the front, the y-axis points to the left, and the z-axis points up. A positive azimuth angle (phi) in the radar coordinate frames measures to the left of the sensor's x-axis (same sign convention as yaw). 64 | 65 | All distances are given in meters [m] and all angles in radians [rad]. 66 | 67 | #### Sensor Mounting Positions 68 | The mounting positions of all sensors are given in the car coordinate system. 
Note that the two radar sensors are mounted at an angle (indicated by the yaw angle). 69 | ```python 70 | left_radar_mounting_pos = { 71 | 'x': 3.739, # [m] 72 | 'y': 0.658, # [m] 73 | 'z': 0.0305, # [m] 74 | 'yaw': 0.523599 # [rad] 75 | } 76 | 77 | right_radar_mounting_pos = { 78 | 'x': 3.739, # [m] 79 | 'y': -0.658, # [m] 80 | 'z': 0.0305, # [m] 81 | 'yaw': -0.523599 # [rad] 82 | } 83 | 84 | lidar_mounting_pos = { 85 | 'x': 3.739, # [m] 86 | 'y': -0.194, # [m] 87 | 'z': 0.2806, # [m] 88 | } 89 | ``` 90 | 91 | 92 | ### Radar Data 93 | Description of each entry for radar data in the h5 file. Note: the radar has no elevation information. 94 | 95 | | Column | Description | 96 | | -------------------- | ----------- | 97 | | frame | Each frame contains one measurement cycle of both radar sensors (ignoring rare frame drops) | 98 | | frame_timestamp | Timestamp for each frame, relative to start of recording. | 99 | | timestamp | Timestamp of the actual measurement; the left and right sensors trigger with a slight offset from each other. Preferably use "frame" or "frame_timestamp" instead of this. Only use this if you want to differentiate between the two sensors. | 100 | | sensor | Which sensor took the measurement ("left" or "right") | 101 | | x_cc | x-coordinate [m] in car coordinate system | 102 | | y_cc | y-coordinate [m] in car coordinate system | 103 | | r_sc | range [m] (distance to sensor) in sensor coordinate system (cf. mounting positions of sensors) | 104 | | phi_sc | Azimuth angle [rad] in sensor coordinate system (cf. mounting positions of sensors) | 105 | | vr_sc | Doppler value or "radial velocity" [m/s] in sensor coordinates | 106 | | amp | Amplitude of radar echo | 107 | | uuid | Universal unique identifier; each radar detection is assigned a unique id. This helps to track single detections across the whole dataset. Useful for debugging. | 108 | | original_uuid | Only present for virtual sequences (sequences created by overlaying multiple sequences). 
When creating virtual sequences, each detection is assigned a new uuid. The original uuid is preserved in this field. Note that the actual coordinates will differ from the original detection, since random noise is applied. | 109 | | label_id | Integer encoding the label (e.g. real object, type 1 second bounce, ...). Check `label_convention.md` for more information. | 110 | | instance_id | Instance or cluster id. May span multiple frames. A single object might have multiple instance_ids if it is not visible for a certain number of frames. | 111 | | human_readable_label | The `label_id` field decoded to a human-friendly format. | 112 | | group | Whether a group was labeled. Only present in one scenario where a group of pedestrians was labeled. | 113 | | mirror | Only set for certain multi-bounce reflections; a short description of the reflective surface is given. A slightly more verbose description can be found inside `mirrors.zip`. | 114 | 115 | ### Lidar Data v1.1 116 | Description of each entry for lidar data in the h5 file. This applies only to data from dataset version 1.1. 117 | 118 | | Column | Description | 119 | | -------------------- | ----------- | 120 | | timestamp | Timestamp, relative to start of recording. | 121 | | x_cc | x-coordinate [m] in car coordinate system | 122 | | y_cc | y-coordinate [m] in car coordinate system | 123 | | z_cc | z-coordinate [m] in car coordinate system | 124 | | r_sc | range [m] (distance to sensor) in sensor coordinate system (cf. mounting positions of sensors) | 125 | | theta_sc | Elevation/polar angle [rad] in sensor coordinate system (cf. mounting positions of sensors) | 126 | | phi_sc | Azimuth angle [rad] in sensor coordinate system (cf. mounting positions of sensors) | 127 | | uuid | Universal unique identifier; each lidar point is assigned a unique id. Useful for debugging. | 128 | 129 | ### Lidar Data v1.0 (Deprecated!) 130 | Description of each entry for lidar data in the h5 file. 
This applies only to data from dataset version 1.0. 131 | 132 | | Column | Description | 133 | | -------------------- | ----------- | 134 | | timestamp | Timestamp, starting at zero. Not properly synced to radar in v1.0! | 135 | | x_sc | x-coordinate [m] in sensor coordinate system | 136 | | y_sc | y-coordinate [m] in sensor coordinate system | 137 | | z_sc | z-coordinate [m] in sensor coordinate system | 138 | 139 | ### Issues with v1.0 Lidar Data 140 | Version 1.0 of the dataset has three issues with the lidar data. If you want to work with lidar data, it is advised to use a newer version (v1.1). 141 | 142 | #### Issue: Sensor Noise 143 | The data contains lidar points which are marked as noise. Fixed in v1.1. 144 | 145 | #### Issue: Timestamp Sync Issue between Radar and Lidar in v1.0 146 | The timestamps provided for the lidar in v1.0 do not necessarily sync correctly with the radar; they might be off by some tenths of a second. Both the lidar and radar timestamps start at 0, which is incorrect, since the lidar and radar measurements do not necessarily start at the same time. Fixed in v1.1. 147 | 148 | #### Issue: Missing Car Coordinate System for Lidar in v1.0 149 | The lidar data currently only contains "sensor coordinates" (`_sc`) and not "car coordinates" (`_cc`). 150 | In v1.1 the data is provided in car coordinates. Quickfix for v1.0: account for the mounting position relative to the car coordinate system. 151 | 152 | ```python 153 | lidar_mounting_pos = { 154 | 'x': 3.739, # [m] 155 | 'y': -0.194, # [m] 156 | 'z': 0.2806, # [m] 157 | } 158 | 159 | # sensor coordinates to car coordinates (numpy struct arrays cannot gain new fields via assignment, so store the converted coordinates separately) 160 | x_cc = lidar_data['x_sc'] + lidar_mounting_pos['x'] 161 | y_cc = lidar_data['y_sc'] + lidar_mounting_pos['y'] 162 | z_cc = lidar_data['z_sc'] + lidar_mounting_pos['z'] 163 | ``` 164 | 165 | 166 | ### Folder Structure and File Name Conventions 167 | The dataset is split into train/val/test. 
Those are the same splits as used in our paper. Feel free to split the dataset whichever way you like; the presented splits are a suggestion and serve reproducibility. If you choose to split the dataset differently (e.g. for cross-validation), make sure that train and test do not contain sequences from the same scenario. There are also some scenarios where only the car position differs slightly; those should also be placed in the same split to avoid overfitting on the test data. Check the provided images to identify those similar scenarios. Note that in our split selection val and train use sequences from the same scenarios. 168 | 169 | After unzipping the data you are presented with the following file structure. The folder `original/` contains the original 111 sequences whereas the `virtual/` folder contains the overlaid sequences with multiple objects. Check the paper for more information. 170 | 171 | ``` 172 | |- original/ 173 | |- train/ 174 | |- scenario-xx_sequence-01_xxx_train.h5 175 | |- scenario-xx_sequence-02_xxx_train.h5 176 | |- ... 177 | |- val/ 178 | |- scenario-xx_sequence-01_xxx_val.h5 179 | |- scenario-xx_sequence-02_xxx_val.h5 180 | |- ... 181 | |- test/ 182 | |- scenario-xx_sequence-01_xxx_test.h5 183 | |- scenario-xx_sequence-02_xxx_test.h5 184 | |- ... 185 | 186 | |- virtual/ 187 | |- train/ 188 | |- scenario-xx_sequences-x-x_xxx_train.h5 189 | |- scenario-xx_sequences-x-x_xxx_train.h5 190 | |- ... 191 | |- val/ 192 | |- scenario-xx_sequences-x-x_xxx_val.h5 193 | |- scenario-xx_sequences-x-x_xxx_val.h5 194 | |- ... 195 | |- test/ 196 | |- scenario-xx_sequences-x-x_xxx_test.h5 197 | |- scenario-xx_sequences-x-x_xxx_test.h5 198 | |- ... 199 | ``` 200 | 201 | 202 | For the sequences in the `original/` folder the naming conventions are as follows. 203 | ``` 204 | scenario-xx_sequence-0x_class_split.h5 205 | ``` 206 | Scenarios range from 01 to 21; sequences, depending on the scenario, range from 01 to 08. 
207 | The `class` entry is either `ped` or `cycl` depending on the class of the main object in the sequence (cf. the paper for more information on the recording process). The `split` is one of `train`, `val`, or `test`. 208 | 209 | The sequences in the `virtual/` folder have a slightly more involved naming scheme: 210 | 211 | ``` 212 | scenario-xx_sequences-x-...-x_start-frames-x-...-x_class-...-class_split.h5 213 | ``` 214 | Each virtual sequence is composed of multiple real sequences from a single scenario. The sequences used are denoted after the `sequences-` part (between two and five sequences). When overlaying the sequences we do not start each sequence from frame zero; the start frame used for each sequence is indicated after the `start-frames-` part of the name. This also makes it possible to overlay the same sequence with itself, started at different times. After that, the class of the main object of each sequence is listed (ped or cycl). Finally, the split is indicated. 215 | -------------------------------------------------------------------------------- /images.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/flkraus/ghosts/83bf5d8f93f1b20ee1fa3102c63d3ed2ded9a829/images.zip -------------------------------------------------------------------------------- /label_convention.md: -------------------------------------------------------------------------------- 1 | # Label Convention 2 | ![Example of different multi-path detections](label_example.png) 3 | 4 | Description of the labels. The main focus of the labeling convention is to encode different kinds of multi-path reflections (e.g. type 1 second order, ...). Check the paper for more details. 5 | 6 | Make sure to check the [Examples](#examples) and [Work with Labels](#work-with-labels) sections to get a better feel for the labeling scheme. 7 | 8 | 9 | The labels are provided in the `label_id` field of the radar data. The `label_id` is encoded as an integer. 
Special values are reserved for "background", "ignore" and "noise". 10 | 11 | 12 | ## Special Values 13 | | Integer Representation | Human Readable | Description | 14 | | ---------------------- | ----------- | ----------- | 15 | | 0 | background | All detections which don't have a specific label assigned during the annotation process are considered background. | 16 | | -1 | ignore | Ignore regions or detections impossible to correctly annotate. E.g. a moving truck in the background. | 17 | | -2 | noise | Radar artifacts | 18 | 19 | ## Groups 20 | There is also the `group` column, a boolean value indicating whether an annotation belongs to a group of objects. This is only used for pedestrians in one scenario. 21 | 22 | ## 4 Digit Labels 23 | All other annotations are represented by a 4-digit integer, with an optional minus sign (marking "sketchy" annotations, see below). 24 | 25 | The 4-digit integer splits into four parts C, M, T, and O: 26 | ``` 27 | CMTO - (class, is_main, type, order) 28 | ``` 29 | 30 | Overview table for CMTO: 31 | | Digit | Possible Value | Meaning | 32 | | ------ | ------- | ------- | 33 | | C | 1 - 5 | **class** - The first digit C encodes the class. (E.g. pedestrian, cyclist) | 34 | | M | 1 or 0 | **is_main** - Whether it is the main object (1) or another object (0). | 35 | | T | 3-bit binary number xxx -> 0 - 7 | **type** - Bounce type (type 1, type 2, or undecided) | 36 | | O | 3-bit binary number xxx -> 0 - 7 | **order** - Bounce order (1st/real, 2nd, 3rd, or undecided) | 37 | 38 | ### Class Mapping (C) 39 | The class (C) part indicates the class of the annotation. 
40 | 41 | Class ID (C) | Class Name 42 | ------------ | ---------- 43 | 1 | pedestrian (ped) 44 | 2 | cyclist (cycl) 45 | 3 | car 46 | 4 | large_vehicle 47 | 5 | motorcycle 48 | 49 | Python mapping: 50 | ```python 51 | class2str = { 52 | 1: 'pedestrian', 53 | 2: 'cyclist', 54 | 3: 'car', 55 | 4: 'large_vehicle', 56 | 5: 'motorcycle', 57 | } 58 | ``` 59 | 60 | ### Main Object (M) 61 | The is_main (M) part indicates whether it is the main or another object. 62 | 63 | For more information about the main object, check the original paper. In short, each sequence has a main object which moves in front of a reflective surface. For this object, detailed multi-path annotations are present. For all other objects, only rudimentary multi-path annotations are present. 64 | 65 | 66 | Main (M) | Description 67 | -------- | ---------- 68 | 1 | Annotation refers to the main object 69 | 0 | Annotation refers to another object 70 | 71 | 72 | ### Bounce Type (T) 73 | For the bounce type we differentiate between type 1 and type 2 bounces. Type 1 bounces are those where the signal's last bounce occurs on the real object. Type 2 bounces are those where the last bounce occurs on a reflective surface. We encode this as a binary number. 74 | 75 | Bounce Type | Binary | Decimal (T) | Description 76 | ----------- | ------ | ----------- | ----------- 77 | type 1 | 001 | 1 | type 1 multi-path detection 78 | type 2 | 010 | 2 | type 2 multi-path detection 79 | type 1 or 2 | 011 | 3 | type 1 and/or type 2 multi-path detection; used whenever both types are possible. Happens mostly for second order bounces when the object is on the orthogonal line between mirror and radar sensor. 80 | undecided | 000 | 0 | Used if it is unclear where the last bounce occurred, or for all "random" multi-path reflections. This is also used for multi-path reflections caused by other objects (not the main object). 81 | 82 | ### Bounce Order (O) 83 | 84 | For the bounce order we differentiate between first (or real object), second, and third order. 
The order represents the number of bounces the signal took before returning to the sensor. Everything above 3 bounces is no longer easy to label since the geometry becomes too complex if multiple reflective surfaces are involved. 85 | 86 | 87 | Bounce Order | Binary | Decimal (O) | Description 88 | ------------ | ------ | ----------- | ----------- 89 | 1st/real | 001 | 1 | Real or normal detection. Signal returned directly from the object of interest to the sensor. 90 | 2nd | 010 | 2 | Signal took an additional bounce (last bounce either on the real object or on a reflective surface) 91 | 3rd | 100 | 4 | Signal took two additional bounces. Only labeled as such if the last bounce happened on a reflective surface. 3rd order bounces with the last bounce on the real object are improbable and hard/impossible to label; if they are present, they are labeled as "undecided" 92 | 1st or 2nd | 011 | 3 | If both 1st and 2nd order are possible, i.e. the second bounce is close to the real object. Mostly for type 1 bounces. Happens if the object is very close to the reflective surface. 93 | 2nd or 3rd | 110 | 6 | If both 2nd and 3rd order are possible. Only for type 2 bounces. Happens when the second and third bounce are close together. Reason: the object is close to the reflective surface. 94 | undecided | 000 | 0 | Higher order bounces or "random" multi-path reflections. This is also used for multi-path reflections caused by other objects (not the main object). 95 | 96 | 97 | ### Leading Minus Sign 98 | If the label has a leading minus sign (negative integer), the annotation has the "sketchy/unsure" tag. Basically an intermediate state between a full annotation and "ignore". 99 | 100 | ### Notes 101 | - With this convention, all label_ids ending with `11` are real objects (type 1 first order). Everything else is some kind of multi-path reflection. 102 | - Specific bounce types and bounce orders are only labeled for the main object. 
This means for all other objects (`is_main` (M) = 0) the bounce type and order are both set to `0` (undecided). I.e. all multi-path reflections for other objects end with `00`. 103 | - The main object is always of class "pedestrian" or "cyclist". Only other objects can be of class "car", "large_vehicle", or "motorcycle". 104 | 105 | ## Examples 106 | Here are a few examples of the `CMTO` label convention. 107 | 108 | CMTO | Description 109 | ----- | ---------- 110 | 1111 | Pedestrian, main object, type 1 first order bounce (real). This is the real main object (pedestrian). No multi-path. 111 | 1011 | Pedestrian, other object, type 1 first order bounce (real). This is another pedestrian in the background. No multi-path. 112 | 2111 | Cyclist, main object, type 1 first order bounce (real). This is the real main object (cyclist). No multi-path. 113 | 1112 | Pedestrian, main object, type 1 second order. Multi-path reflection. 114 | 1124 | Pedestrian, main object, type 2 third order. Multi-path reflection. 115 | 2100 | Cyclist, main object, unspecific multi-path reflection. 116 | 2126 | Cyclist, main object, type 2 2nd or 3rd order multi-path reflection. 117 | 2132 | Cyclist, main object, type 1 or type 2 2nd order multi-path reflection. 118 | 2000 | Cyclist, other object, unspecific multi-path reflection. (All multi-path reflections for other objects are of this nature.) 119 | -1112 | Pedestrian, main object, type 1 second bounce, with sketchy/unsure tag. 120 | -3011 | Car, other object, real/1st, with sketchy/unsure tag. 121 | 122 | ## Work with Labels 123 | Example Python code to extract and work with the label_ids. 124 | 125 | Other options exist to work with the labels; no guarantee that this is the best one, nor that the examples are 100% correct. Use them as inspiration only and think about what you want to do with the data. 126 | 127 | 128 | Example to extract the CMTO digits and the special labels (background, ignore, noise). 
129 | ```python 130 | import numpy as np 131 | 132 | label_id = radar_data['label_id'] # radar_data loaded as shown in the README 133 | 134 | 135 | background = label_id == 0 136 | ignore = label_id == -1 137 | noise = label_id == -2 138 | ignore = np.logical_or(ignore, noise) # treat noise as ignore 139 | 140 | # differentiate between special labels (one digit) and 4 digit labels 141 | bg_or_ign = np.logical_or(background, ignore) 142 | non_bg_or_ign = np.logical_not(bg_or_ign) 143 | 144 | sketchy = label_id < -2 145 | 146 | label_id = np.abs(label_id) 147 | 148 | c = (label_id // 1000) % 10 149 | m = (label_id // 100) % 10 150 | t = (label_id // 10) % 10 151 | o = label_id % 10 152 | 153 | # c, m, t, o are only valid for non background/ignore/noise detections 154 | 155 | # set to -1 for background/ignore/noise -> maybe useful for later 156 | c[bg_or_ign] = -1 157 | m[bg_or_ign] = -1 158 | t[bg_or_ign] = -1 159 | o[bg_or_ign] = -1 160 | ``` 161 | 162 | Do stuff with this information, for example: 163 | 164 | ```python 165 | # get all real/1st bounce objects and split into pedestrians and cyclists 166 | real_obj = np.logical_and(t == 1, o == 1) 167 | # technically o == 1 alone would suffice due to the labeling convention 168 | 169 | real_ped = np.logical_and(c == 1, real_obj) 170 | real_cycl = np.logical_and(c == 2, real_obj) 171 | 172 | # get all multi-path objects (order != 1) 173 | multi_path = np.logical_and(o != 1, o != -1) # make sure to ignore special labels 174 | multi_path = np.logical_and(o != 1, non_bg_or_ign) # make sure to ignore special labels (alternative) 175 | 176 | # get specific multi-path objects (both type and order have to match) 177 | type1_2nd = np.logical_and(t == 1, o == 2) 178 | type2_2nd = np.logical_and(t == 2, o == 2) 179 | type2_3rd = np.logical_and(t == 2, o == 4) 180 | 181 | # add annotations which could be both 2nd or 3rd (if that is desired) 182 | type2_2nd_or_3rd = np.logical_and(t == 2, o == 6) 183 | 184 | type2_2nd = np.logical_or(type2_2nd, type2_2nd_or_3rd) 185 | type2_3rd = np.logical_or(type2_3rd, type2_2nd_or_3rd) 186 | 
187 | # ignore undecided multi-path bounces 188 | undecided = np.logical_or(t == 0, o == 0) # either type or order is undecided 189 | ignore = np.logical_or(ignore, undecided) 190 | 191 | # ignore non-vru classes (car, large_vehicle, motorcycle) 192 | ignore = np.logical_or(ignore, c > 2) 193 | 194 | # ignore sketchy 195 | ignore = np.logical_or(ignore, sketchy) 196 | ``` 197 | 198 | Example to map the information to class labels for model/neural-network training. 199 | 200 | Note: you might want to ignore groups or generic multi-path detections during training, but that is ultimately up to the task at hand. 201 | 202 | ```python 203 | # training objective: distinguish between real and multi-path objects. No class prediction (ped/cycl) 204 | 205 | label = np.zeros_like(label_id) 206 | label[ignore] = -1 207 | label[real_obj] = 1 208 | 209 | # only use the detailed/specific labels 210 | label[type1_2nd] = 2 211 | label[type2_2nd] = 2 212 | label[type2_3rd] = 2 213 | 214 | # or use all multi-path labels instead 215 | label[multi_path] = 2 216 | 217 | # training objective: distinguish between ped and cycl plus type1_2nd, type2_2nd and type2_3rd 218 | label[real_ped] = 1 219 | label[real_cycl] = 2 220 | 221 | type1_2nd_ped = np.logical_and(type1_2nd, c == 1) 222 | type2_2nd_ped = np.logical_and(type2_2nd, c == 1) 223 | type2_3rd_ped = np.logical_and(type2_3rd, c == 1) 224 | type1_2nd_cycl = np.logical_and(type1_2nd, c == 2) 225 | type2_2nd_cycl = np.logical_and(type2_2nd, c == 2) 226 | type2_3rd_cycl = np.logical_and(type2_3rd, c == 2) 227 | 228 | label[type1_2nd_ped] = 3 229 | label[type2_2nd_ped] = 4 230 | label[type2_3rd_ped] = 5 231 | label[type1_2nd_cycl] = 6 232 | label[type2_2nd_cycl] = 7 233 | label[type2_3rd_cycl] = 8 234 | 235 | # ignore generic multi-path detections and groups 236 | label[undecided] = -1 237 | label[radar_data['group']] = -1 # boolean group column used as mask 238 | ``` 239 | 240 | 241 | 242 | 243 | 244 | 245 | 
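To close, the CMTO decoding described above can be wrapped into a small self-contained helper and checked against ids from the examples table. This is a sketch, not part of the dataset tooling; `decode_cmto` is a hypothetical name, and the `-1` sentinel for special values follows the extraction example above.

```python
import numpy as np


def decode_cmto(label_id):
    """Split label ids into (c, m, t, o, sketchy); special values yield -1 digits."""
    label_id = np.asarray(label_id)
    special = np.isin(label_id, (0, -1, -2))  # background, ignore, noise
    sketchy = label_id < -2  # leading minus sign on a 4-digit label
    abs_id = np.abs(label_id)
    c = (abs_id // 1000) % 10
    m = (abs_id // 100) % 10
    t = (abs_id // 10) % 10
    o = abs_id % 10
    for digit in (c, m, t, o):
        digit[special] = -1  # digits are meaningless for special values
    return c, m, t, o, sketchy


# ids from the examples table: real main ped, ped type 1 2nd order,
# cyclist type 2 2nd-or-3rd order, sketchy car, and the three special values
c, m, t, o, sketchy = decode_cmto([1111, 1112, 2126, -3011, 0, -1, -2])
```

Real detections are then `(t == 1) & (o == 1)`, matching the `11` suffix rule from the notes above.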
-------------------------------------------------------------------------------- /label_example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/flkraus/ghosts/83bf5d8f93f1b20ee1fa3102c63d3ed2ded9a829/label_example.png -------------------------------------------------------------------------------- /mirrors.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/flkraus/ghosts/83bf5d8f93f1b20ee1fa3102c63d3ed2ded9a829/mirrors.zip --------------------------------------------------------------------------------