├── .gitignore
├── .travis.yml
├── README.md
├── dataset
│   ├── annotations
│   │   ├── README.md
│   │   ├── dev.csv
│   │   └── dev_constrained_ytbb_train.csv.gz
│   └── tasks
│       ├── dev.csv
│       └── test.csv
├── docs
│   ├── css
│   │   ├── bootstrap-social.css
│   │   ├── bootstrap.min.css
│   │   ├── home_style.css
│   │   ├── ie10-viewport-bug-workaround.css
│   │   ├── one-page-wonder.css
│   │   ├── project_style.css
│   │   ├── starter-template.css
│   │   ├── video_grid.css
│   │   └── video_grid_boh.css
│   ├── evaluation.html
│   ├── img
│   │   ├── oxford_logo.png
│   │   ├── oxuva_logo.png
│   │   └── teaser.jpg
│   ├── index.html
│   ├── js
│   │   ├── bootstrap.min.js
│   │   ├── ie-emulation-modes-warning.js
│   │   ├── ie10-viewport-bug-workaround.js
│   │   └── jquery.min.js
│   └── video_grid.html
├── examples
│   └── opencv
│       ├── opencv.sh
│       ├── requirements.txt
│       └── track.py
├── python
│   └── oxuva
│       ├── __init__.py
│       ├── annot.py
│       ├── assess.py
│       ├── dataset.py
│       ├── io_annot.py
│       ├── io_pred.py
│       ├── io_task.py
│       ├── pred.py
│       ├── task.py
│       ├── test_assess.py
│       ├── tools
│       │   ├── __init__.py
│       │   ├── analyze.py
│       │   └── visualize.py
│       └── util.py
├── pythonpath.sh
├── requirements.txt
└── setup.cfg

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
*.pyc
docs/email_form.html
*.tar
*.tgz
dataset/images
dataset/annotations/test.csv
workspace/cache
workspace/analysis
*.swp
*.swo
.DS_Store

--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
language: python
python:
  - "2.7"
  - "3.4"
cache: pip
install:
  - pip install -r requirements.txt
  - pip install nose
script:
  - nosetests python/oxuva

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# OxUvA long-term tracking benchmark [ECCV'18]

**Note:** *If, while reading this tutorial, you are stuck somewhere or you are unsure you are interpreting the instructions correctly, do not hesitate to open an issue here on GitHub.*

This repository provides Python code to measure the quality of a tracker's predictions and generate all figures of the paper.
The following sections provide instructions for each stage.

1. [Obtain the data](#1-obtain-the-data)
2. [Set up the environment](#2-set-up-the-environment)
3. [Run your tracker](#3-run-your-tracker)
4. [Submit to the evaluation server](#4-submit-to-the-evaluation-server)
5. [Generate the plots for a paper](#5-generate-the-plots-for-a-paper)
6. [Add your tracker to the results page](#6-add-your-tracker-to-the-results-page)

The challenge is split into two tracks: "constrained" and "open".
To be eligible for the "constrained" challenge, a tracker must use *only* the data in `annotations/dev_constrained_ytbb_train.csv` and `annotations/dev.csv` for development.
All other trackers must enter the "open" challenge.
By *development* we mean not only training, but also pre-training, validation and hyper-parameter search.
For example, SINT uses pre-trained weights and SiamFC is trained from scratch on ImageNet VID.
Hence they are both in the "open" challenge.

The results of all *citeable* trackers are maintained in a [results repository](https://github.com/oxuva/long-term-tracking-results/).
This repo should be used for comparison against the state of the art.
It is updated periodically according to a [schedule](https://docs.google.com/document/d/1BtoMzxMGfKMM7DtYOm44dXNr18HrG5CqN9cyxDAem-M/edit).


## 1. Obtain the data

The ground-truth labels for the dev set can be found *in this repository* in [`dataset/annotations`](dataset/annotations).
The tracker initialization for the dev *and* test sets can be found in [`dataset/tasks`](dataset/tasks).
**Note:** Only the annotations for the *dev* set are public.
These can be useful for diagnosing failures and hyper-parameter search.
For the *test* set, the annotations are secret and trackers can only be assessed via the evaluation server (explained later).

To obtain the images, fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSepA_sLCMrqnZXBPnZFNmggf-MdEGa2Um-Q7pRGQt4SxvGNeg/viewform) and then download `images_dev.tar` and `images_test.tar`.
Extract these archives in `dataset/`.

The structure of `dataset/` should be:
```
dataset/images/{subset}/{video}/{frame:06d}.jpeg
dataset/tasks/{subset}.csv
dataset/annotations/{subset}.csv
```
where `{subset}` is either `dev` or `test`, `{video}` is the video ID e.g. `vid0042`, and `{frame:06d}` is the frame number e.g. `002934`.


### Task format

A tracking "task" consists of the initial and final frame numbers and the rectangle that defines the target in the initial frame.
A collection of tracking tasks is specified in a single CSV file (e.g. [`dataset/tasks/dev.csv`](dataset/tasks/dev.csv)) with the following fields.

* `video_id`: (string) Name of video.
* `object_id`: (string) Name of object within the video.
* `init_frame`: (integer) Frame in which the initial rectangle is specified.
* `last_frame`: (integer) Last frame in which the tracker is required to make a prediction (inclusive).
* `xmin`, `xmax`, `ymin`, `ymax`: (float) Rectangle in the initial frame. Co-ordinates are relative to the image: zero means top and left, one means bottom and right.

A tracker should output predictions for frames `init_frame` + 1, `init_frame` + 2, ..., `last_frame`.

The function `oxuva.load_dataset_tasks_csv` will return a `VideoObjectDict` of `Task`s from such a file.

### Annotation format

A track "annotation" gives the ground-truth path of an object.
This can be used for training and evaluating trackers.
The annotation includes the class, but this information is not provided for a "task", and thus will not be available at testing time.

A collection of track annotations is specified in a single CSV file (e.g. [`dataset/annotations/dev.csv`](dataset/annotations/dev.csv)) with the following fields.

* `video_id`: (string) Name of video.
* `object_id`: (string) Name of object within the video.
* `class_id`: (integer) Index of object class. Matches YTBB.
* `class_name`: (string) Name of object class. Matches YTBB.
* `contains_cuts`: (string) Either `true`, `false` or `unknown`.
* `always_visible`: (string) Either `true`, `false` or `unknown`.
* `frame_num`: (integer) Frame of current annotation.
* `object_presence`: (string) Either `present` or `absent`.
* `xmin`, `xmax`, `ymin`, `ymax`: (float) Rectangle in the current frame if present, otherwise it should be ignored.

The function `oxuva.load_dataset_annotations_csv` will return a `VideoObjectDict` of track annotation dicts from such a file.
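
For illustration, here is a minimal sketch of loading the task and annotation files.
The function names are the ones documented above, but the exact signatures (e.g. whether they take an open file or a filename) and the attribute names of `Task` are assumptions here; see the code in `python/oxuva` for the definitive API.
```python
import oxuva

# Tasks: initial rectangle and frame range for each (video, object) pair.
with open('dataset/tasks/dev.csv', 'r') as fp:
    tasks = oxuva.load_dataset_tasks_csv(fp)

# Annotations: full ground-truth tracks (public for the dev set only).
with open('dataset/annotations/dev.csv', 'r') as fp:
    annots = oxuva.load_dataset_annotations_csv(fp)

# Assumption: a VideoObjectDict behaves like a dict keyed by (video, object).
for (vid, obj), task in tasks.items():
    # Attribute names assumed to match the CSV fields.
    print(vid, obj, task.init_frame, task.last_frame)
```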
The functions `oxuva.make_track_label` and `oxuva.make_frame_label` are used to construct track annotation dicts.
The function `oxuva.make_task_from_track` converts a track annotation into a tracking task with ground-truth labels.


## 2. Set up the environment

To run the code in this repository, it is necessary to install the Python libraries listed in [`requirements.txt`](requirements.txt).
To install these dependencies using `pip` (perhaps in a virtual environment):
```bash
pip install -r requirements.txt
```

You must also add the parent directory of `oxuva/` to `PYTHONPATH` to be able to import the `oxuva` package.
```bash
export PYTHONPATH="path/to/long-term-tracking-benchmark/python:$PYTHONPATH"
```
Alternatively, for convenience, you can `source` the script `pythonpath.sh` in `bash`:
```bash
source path/to/long-term-tracking-benchmark/pythonpath.sh
```


## 3. Run your tracker

**Note:** Unlike the VOT or OTB toolkits, ours does not execute your tracker.
Your tracker should output all predictions in the format described below.
For Python trackers, we provide the utility functions `oxuva.load_dataset_tasks_csv` and `oxuva.dump_predictions_csv` to make this easy.
See [`examples/opencv/track.py`](examples/opencv/track.py) for an example.

All rectangle co-ordinates are relative to the image: zero means top and left, one means bottom and right.
If the object extends beyond the image boundary, ground-truth rectangles are clipped to \[0, 1\].

### Prediction format

The predictions for a tracker are specified with one CSV file per track.
The names of these files must be `{video}_{object}.csv`.
The fields of these CSV files are:

* `video_id`: (string) Name of video.
* `object_id`: (string) Name of object within the video.
* `frame_num`: (integer) Frame of current annotation.
* `present`: (string) Either `present` or `absent` (can use `true`/`false` or `0`/`1` too).
* `score`: (float) Number that represents confidence of object presence.
* `xmin`, `xmax`, `ymin`, `ymax`: (float) Rectangle in the current frame if present, otherwise it is ignored.

The score is only used for diagnostics; it does not affect the main evaluation of the tracker.
If the object is predicted `absent`, then the score and the rectangle will not be used.
Since the ground-truth annotations do not extend beyond the edge of the image, the evaluation toolkit will truncate the predicted rectangles to the image frame before computing the IOU.
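
To make the expected workflow concrete, the sketch below runs a tracker over every task and writes one prediction file per track.
It is only a sketch: `my_tracker` is a placeholder for your own code, and the exact signature of `oxuva.dump_predictions_csv` is an assumption; [`examples/opencv/track.py`](examples/opencv/track.py) is the authoritative example.
```python
import os
import oxuva

with open('dataset/tasks/test.csv', 'r') as fp:
    tasks = oxuva.load_dataset_tasks_csv(fp)

out_dir = 'predictions'  # arbitrary output directory
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

for (vid, obj), task in tasks.items():
    # Placeholder: should return one prediction (presence, score, rectangle)
    # for each of the frames init_frame + 1, ..., last_frame.
    predictions = my_tracker(task)
    # The file name must be {video}_{object}.csv, as described above.
    with open(os.path.join(out_dir, '{}_{}.csv'.format(vid, obj)), 'w') as fp:
        oxuva.dump_predictions_csv(vid, obj, predictions, fp)  # signature assumed
```
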
## 4. Submit to the evaluation server

Since the annotations for the test set are secret, in order to evaluate your tracker and produce plots similar to the ones in our paper, you need to submit the raw prediction CSV files to the [evaluation server](https://competitions.codalab.org/competitions/19529#participate), hosted on CodaLab.

First, create a CodaLab account (if you do not already have one) and request to join the OxUvA competition.
Note that the CodaLab account is per person, not per tracker.
Do *not* create a username for your tracker.
The name of your tracker will appear when you add it to the results repository (point 6 of this tutorial).
Please choose a username that enables us to identify you, such as your real name or your GitHub account.

To submit the results, create a zip archive containing all predictions in CSV format (as described above).
There should be one file per object with the filename `{video}_{object}.csv`.
It does not matter whether the CSV files are contained at the root level of the zip archive or below a single subdirectory of any name.
If a submission encounters an error (for example, a missing prediction file), you will be able to view the verbose error log, and the submission will not count towards your quota.
(If you want, you can first upload your predictions for the dev set to confirm that your predictions are in the correct format.)
Please note that for the dev set your quota is 500 submissions in total (max 50 per day), while for the test set the limit is *10 submissions in total (max 1 per day)*.

Once the submission has been successful, you can download the generated output files.
These will be used to generate the plots and submit to the results repo.

**Note:** You will notice that the CodaLab challenge shows a leaderboard with usernames and scores.
For the purpose of writing a paper, you do not need to compare against the most recent methods: what matters are the state-of-the-art results for citeable trackers in the results repository (point 6).


## 5. Generate the plots for a paper

First, clone the results repo.
```bash
git clone https://github.com/oxuva/long-term-tracking-results.git
cd long-term-tracking-results
```
This repo contains several snapshots of the past state of the art as git tags (*TODO* generate tags).
The tag `eccv18` indicates the set of methods in our original paper, and successive tags are of the form `{year}-{month:02d}`, for example:
```bash
git checkout 2018-07
```

You can state in your paper which tag you are comparing against.
When writing a paper, you are not required to compare against the most recent state of the art... but clearly the more recent, the better, as your results will be more convincing.

Add an entry for your tracker to `trackers.json`.
You must specify a human-readable name for your tracker, and whether your tracker is eligible for the constrained-data challenge.
```json
"tracker_id": {"name": "Tracker Name", "constrained": false},
```
Use `python -m json.tool --sort-keys` to standardize the formatting and order of this file.

For the test set, copy the `iou_0dx.json` files returned by the evaluation server to the directory
```results/assess/test/{tracker_id}/```
in the results repo.
The script `oxuva.tools.analyze` will try to load this summary of the tracker assessment from these files before attempting to read the complete predictions of the tracker and the ground-truth annotations.

For the dev set, you may follow the same procedure as above.
However, it is possible to evaluate your tracker's predictions locally, without using the evaluation server.
To do this, put the CSV files of your tracker's predictions (that is, the input to the evaluation server) in the directory
```predictions/dev/{tracker_id}/```
in the results repo.
The script will generate the corresponding files in the `assess/` directory.
Note that if you update the predictions, you should erase the corresponding files in the `assess/` directory, or specify `--no_use_summary`.
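
As a concrete sketch of this local workflow (`my_tracker` stands in for your `tracker_id`, and the paths are assumptions based on the description above):
```bash
# Run from the root of the results repo.
mkdir -p predictions/dev/my_tracker
cp /path/to/your/output/*.csv predictions/dev/my_tracker/

# Computes the assessment, writing summary files under assess/,
# and prints the table of results (see below).
python -m oxuva.tools.analyze table --data=dev --challenge=open
```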
If desired, the predictions of other trackers on the *dev* set are available from Google Drive (TODO).
Please do not publish your predictions on the *test* set, as it may enable someone to construct an approximate ground-truth using a consensus method.

To generate all plots and tables, use `analyze_paper.sh` or `analyze_web.sh`:
```bash
bash analyze_paper.sh --data=test --challenge=open --loglevel=warning
```

To just generate the table of results:
```bash
python -m oxuva.tools.analyze table --data=dev --challenge=open
```
The results table will be written to `analysis/dev/open/table.csv`.
Use `--help` to discover the optional flags.
For example, you can use `--iou_thresholds 0.1 0.3 0.5 0.7 0.9` to generate the results with a wider range of thresholds.

Similarly, to just generate the main figure, use:
```bash
python -m oxuva.tools.analyze plot_tpr_tnr --data=dev --challenge=open
```
**Note:** Please do *not* put the dev set plots in the paper without the test set.
In general, comparison statements of the type *A is better than B* should be made using the test set.


## 6. Add your tracker to the results page

Separately from the evaluation server, we are maintaining a [results page/repository](https://github.com/oxuva/long-term-tracking-results) that reflects the state of the art on our dataset.

In order to have your tracker added to the plots, you need to:

1) Have completed all the previous points and produced the test set plots of your tracker.
2) Have a document that describes your tracker. It does not need to be a peer-reviewed publication (arXiv is fine); we just need a *citeable* method.
3) Open a pull request to the results repository, containing everything we need to update the plots (i.e. `assess/test/{tracker_name}/iou_0d{3,5,7}.json`). Remember to specify the name of your tracker and whether it qualifies for the constrained challenge in `trackers.json`. In the comment section, please include a) your CodaLab username, b) the paper that describes your method and optionally c) a short description of your method. Do not include the generated plots; we will update these after merging the pull request.
4) The organizers will manually review your request according to [this schedule](https://docs.google.com/document/d/1BtoMzxMGfKMM7DtYOm44dXNr18HrG5CqN9cyxDAem-M/edit).

Remember that even if your method is not in first place, submitting your tracker to the results repository is valuable to the community and increases the chance of having your paper read and cited.

--------------------------------------------------------------------------------
/dataset/annotations/README.md:
--------------------------------------------------------------------------------
The file `dev.csv` follows the OxUvA format and contains annotations for the OxUvA long-term tracking _dev_ set (which is constructed using videos from the YTBB _validation_ set).
The file `dev_constrained_ytbb_train.csv` follows the YTBB format and contains annotations for the subset of the YTBB _train_ set which can be used for development in the "constrained" track.
(For the "open" track, one may use the entire YTBB train set, which includes the test classes.)
Both files only contain annotations for the dev classes.
5 | Whereas the tracks in `dev.csv` are comprised of multiple 20-second annotation segments, the tracks in `dev_constrained_ytbb_train.csv` are the original 20-second segments from YTBB. 6 | -------------------------------------------------------------------------------- /dataset/annotations/dev_constrained_ytbb_train.csv.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/oxuva/long-term-tracking-benchmark/fd49bed27af85bb78120598ce65397470a387520/dataset/annotations/dev_constrained_ytbb_train.csv.gz -------------------------------------------------------------------------------- /dataset/tasks/dev.csv: -------------------------------------------------------------------------------- 1 | vid0029,obj0000,0,2790,0.344,0.441,0.4,0.535 2 | vid0145,obj0000,0,1350,0.278,0.445,0.475,0.665 3 | vid0144,obj0000,0,2340,0.005,0.231,0.31333333,0.61833334 4 | vid0143,obj0000,0,2730,0.23,0.871,0.42333335,0.56 5 | vid0142,obj0000,0,2430,0.28,0.458,0.34666666,0.5883333 6 | vid0141,obj0000,0,2370,0.238,0.603,0.32666665,0.5416667 7 | vid0140,obj0000,0,3180,0.283,0.512,0.025,0.8433333 8 | vid0020,obj0000,0,1380,0.32,0.659,0.30333334,0.85833335 9 | vid0021,obj0000,0,1110,0.346,0.677,0.39,0.99 10 | vid0022,obj0000,0,3900,0.305,0.567,0.35166666,0.995 11 | vid0023,obj0000,0,3180,0.12,0.951,0.32,0.735 12 | vid0025,obj0000,0,37440,0.232,0.446,0.58166665,0.825 13 | vid0025,obj0001,2700,27990,0.622,0.787,0.165,0.5883333 14 | vid0027,obj0000,0,1440,0.497,0.703,0.47166666,0.74666667 15 | vid0115,obj0000,0,5850,0.359,0.537,0.30666667,0.86 16 | vid0058,obj0000,0,1380,0.444,0.911,0.17833333,0.8383333 17 | vid0235,obj0000,0,9480,0.155,0.488,0.175,0.735 18 | vid0235,obj0001,840,8130,0.68,1.0,0.535,0.7583333 19 | vid0236,obj0000,0,12600,0.306,0.802,0.36333334,0.6666667 20 | vid0231,obj0000,0,1470,0.365,0.714,0.11333334,0.8516667 21 | vid0230,obj0000,0,4080,0.47,0.631,0.45833334,0.58666664 22 | vid0233,obj0000,0,3630,0.627,0.943,0.425,0.72 23 | vid0133,obj0000,0,2340,0.567,0.81,0.485,0.8283333 24 | vid0131,obj0000,0,1380,0.449,0.577,0.37166667,0.57 25 | vid0137,obj0000,0,3060,0.272,0.605,0.42666668,0.7133333 26 | vid0135,obj0000,0,1380,0.013,0.333,0.41,0.835 27 | vid0334,obj0000,0,2730,0.349,0.609,0.27666667,0.87166667 28 | vid0335,obj0000,0,1380,0.042,0.606,0.315,0.865 29 | vid0336,obj0000,0,1380,0.233,0.541,0.22666667,0.855 30 | vid0139,obj0000,0,1440,0.523,0.705,0.29333332,0.885 31 | vid0330,obj0000,0,1350,0.313,0.403,0.56666666,0.785 32 | vid0331,obj0000,0,1380,0.433,0.76,0.38833332,0.97833335 33 | vid0333,obj0000,0,1440,0.421,0.5,0.27,0.49166667 34 | vid0155,obj0000,0,1890,0.156,0.453,0.32833335,1.0 35 | vid0151,obj0000,0,5040,0.18,0.609,0.35333332,0.59833336 36 | vid0153,obj0000,0,13530,0.533,0.603,0.29666665,0.44166666 37 | vid0258,obj0000,0,5940,0.379,0.725,0.38333333,1.0 38 | vid0158,obj0000,0,1440,0.428,0.713,0.13,0.8 39 | vid0159,obj0000,0,4140,0.487,0.687,0.0,0.91 40 | vid0228,obj0000,0,1440,0.324,0.612,0.385,0.6716667 41 | vid0223,obj0000,0,2280,0.558,0.68,0.405,0.57166666 42 | vid0226,obj0000,0,3120,0.0,0.984,0.0,1.0 43 | vid0252,obj0000,0,1380,0.093,0.431,0.29333332,0.70666665 44 | vid0309,obj0000,0,3330,0.553,0.598,0.35166666,0.47333333 45 | vid0308,obj0000,0,1380,0.303,0.592,0.41666666,0.7083333 46 | vid0301,obj0000,0,1290,0.0,0.388,0.14,0.69166666 47 | vid0300,obj0000,0,1320,0.676,0.992,0.51166666,1.0 48 | vid0303,obj0000,0,2790,0.342,0.657,0.30666667,0.89666665 49 | vid0302,obj0000,0,2340,0.001,0.857,0.39333335,1.0 50 | 
vid0306,obj0000,0,4140,0.428,0.852,0.35166666,0.70666665 51 | vid0163,obj0000,0,2280,0.321,1.0,0.09166667,1.0 52 | vid0162,obj0000,0,4140,0.277,0.651,0.495,0.7216667 53 | vid0167,obj0000,0,1380,0.379,0.479,0.39,0.48666668 54 | vid0166,obj0000,0,1320,0.315,0.461,0.74666667,1.0 55 | vid0169,obj0000,0,1350,0.75,0.877,0.0,0.17833333 56 | vid0168,obj0000,0,2250,0.331,0.505,0.17833333,0.585 57 | vid0225,obj0000,0,1440,0.554,1.0,0.23333333,0.6166667 58 | vid0316,obj0000,0,8130,0.138,0.476,0.11833333,0.655 59 | vid0316,obj0001,2700,10830,0.488,0.887,0.48833334,0.9316667 60 | vid0315,obj0000,0,1440,0.201,0.678,0.05,0.68 61 | vid0315,obj0001,11700,13110,0.233,0.431,0.0,0.595 62 | vid0315,obj0002,13950,15390,0.235,0.651,0.11833333,0.61 63 | vid0313,obj0000,0,5430,0.339,0.893,0.165,0.8283333 64 | vid0311,obj0000,0,2760,0.298,0.509,0.43,0.47833332 65 | vid0095,obj0000,0,5490,0.201,0.824,0.39666668,0.51166666 66 | vid0094,obj0000,0,1290,0.526,0.993,0.0,0.635 67 | vid0097,obj0000,0,5880,0.569,0.99,0.0,0.775 68 | vid0093,obj0000,0,2670,0.217,0.767,0.16333333,0.78833336 69 | vid0092,obj0000,0,1440,0.189,0.925,0.22166666,0.50666666 70 | vid0014,obj0000,0,1470,0.255,0.664,0.335,0.61333334 71 | vid0017,obj0000,0,4080,0.536,0.628,0.41666666,0.50666666 72 | vid0011,obj0000,0,1410,0.297,0.993,0.05666667,0.82 73 | vid0012,obj0000,0,8130,0.304,0.493,0.35,0.90833336 74 | vid0255,obj0000,0,2280,0.358,0.68,0.28333333,0.835 75 | vid0254,obj0000,0,3240,0.0,0.291,0.0,0.655 76 | vid0253,obj0000,0,1380,0.403,0.539,0.22,0.55333334 77 | vid0018,obj0000,0,2340,0.413,0.57,0.026666667,0.255 78 | vid0077,obj0000,0,3240,0.072,0.424,0.39833334,1.0 79 | vid0076,obj0000,0,15810,0.203,1.0,0.43333334,1.0 80 | vid0074,obj0000,0,1440,0.43,0.797,0.45,0.6716667 81 | vid0073,obj0000,0,6390,0.311,0.535,0.57,1.0 82 | vid0072,obj0000,0,5970,0.187,0.517,0.37833333,1.0 83 | vid0178,obj0000,0,4440,0.377,0.601,0.35333332,0.83166665 84 | vid0070,obj0000,0,3180,0.001,1.0,0.47,1.0 85 | vid0176,obj0000,0,1800,0.857,0.999,0.35833332,0.43 86 | vid0177,obj0000,0,1440,0.42,0.487,0.44,0.62666667 87 | vid0174,obj0000,0,1950,0.011,0.67,0.22333333,0.44666666 88 | vid0175,obj0000,0,1440,0.203,0.665,0.115,0.87666667 89 | vid0175,obj0001,27000,29340,0.0,1.0,0.18166667,0.87666667 90 | vid0079,obj0000,0,5370,0.105,0.749,0.0,0.7916667 91 | vid0146,obj0000,0,1380,0.394,0.68,0.4,0.9433333 92 | vid0089,obj0000,0,5490,0.28,0.659,0.6483333,0.87166667 93 | vid0082,obj0000,0,14490,0.412,0.553,0.32833335,0.49 94 | vid0080,obj0000,0,1410,0.519,0.895,0.115,0.41333333 95 | vid0081,obj0000,0,1230,0.181,0.461,0.37,0.6333333 96 | vid0084,obj0000,0,2700,0.266,0.906,0.096666664,0.24 97 | vid0085,obj0000,0,3150,0.024,0.26,0.19,0.885 98 | vid0003,obj0000,0,10860,0.387,0.797,0.425,0.92 99 | vid0000,obj0000,0,4170,0.471,0.662,0.27333334,0.62833333 100 | vid0001,obj0000,0,2550,0.36,0.482,0.475,0.62 101 | vid0249,obj0000,0,18540,0.288,0.463,0.535,0.82666665 102 | vid0004,obj0000,0,3180,0.225,0.829,0.0016666667,0.66 103 | vid0005,obj0000,0,1440,0.0,1.0,0.20833333,0.5933333 104 | vid0195,obj0000,0,3240,0.225,0.598,0.33166668,0.71 105 | vid0008,obj0000,0,3630,0.375,0.89,0.27666667,0.9633333 106 | vid0241,obj0000,0,1440,0.076,0.239,0.26833335,0.36 107 | vid0109,obj0000,0,2280,0.507,0.614,0.42166665,0.65166664 108 | vid0067,obj0000,0,7740,0.387,0.542,0.018333333,0.80333334 109 | vid0062,obj0000,0,16590,0.206,0.748,0.385,1.0 110 | vid0062,obj0001,13410,15240,0.262,0.511,0.38833332,0.8616667 111 | vid0063,obj0000,0,8910,0.0,0.473,0.6766667,0.93833333 112 | 
vid0103,obj0000,0,1440,0.258,0.763,0.0,0.64 113 | vid0102,obj0000,0,2580,0.397,0.649,0.12833333,0.73 114 | vid0101,obj0000,0,3240,0.491,0.801,0.20833333,0.515 115 | vid0100,obj0000,0,1230,0.432,0.565,0.14166667,0.50333333 116 | vid0107,obj0000,0,1350,0.344,0.556,0.29833335,0.575 117 | vid0069,obj0000,0,4980,0.292,0.64,0.07666667,1.0 118 | vid0281,obj0000,0,5490,0.526,0.82,0.37833333,0.64 119 | vid0271,obj0000,0,3240,0.0,0.762,0.395,0.51166666 120 | vid0272,obj0000,0,5040,0.375,0.855,0.0,0.49833333 121 | vid0274,obj0000,0,1170,0.0,0.727,0.12833333,1.0 122 | vid0276,obj0000,0,10950,0.33,0.663,0.56166667,0.88 123 | vid0051,obj0000,0,1260,0.442,0.545,0.52166665,0.59166664 124 | vid0299,obj0000,0,1500,0.646,0.752,0.27333334,0.57166666 125 | vid0298,obj0000,0,1440,0.283,0.52,0.26833335,0.60833335 126 | vid0298,obj0001,5850,7290,0.433,0.597,0.47666666,0.79333335 127 | vid0054,obj0000,0,1980,0.055,0.808,0.10333333,0.85333335 128 | vid0056,obj0000,0,3240,0.355,0.628,0.56666666,0.7816667 129 | vid0284,obj0000,0,8130,0.0,1.0,0.0,0.89166665 130 | vid0291,obj0000,0,1440,0.283,0.559,0.20833333,0.7583333 131 | vid0290,obj0000,0,1440,0.327,0.625,0.38833332,1.0 132 | vid0296,obj0000,0,1440,0.456,0.788,0.21333334,0.69166666 133 | vid0295,obj0000,0,4530,0.565,0.989,0.0,0.7733333 134 | vid0043,obj0000,0,1050,0.219,0.748,0.08833333,1.0 135 | vid0040,obj0000,0,1440,0.399,0.799,0.37333333,0.6983333 136 | vid0205,obj0000,0,7740,0.454,0.967,0.0,0.68333334 137 | vid0266,obj0000,0,1440,0.574,0.705,0.20666666,0.29 138 | vid0180,obj0000,0,13980,0.38,0.657,0.61333334,0.95666665 139 | vid0186,obj0000,0,1440,0.596,0.809,0.28,0.6066667 140 | vid0260,obj0000,0,4110,0.675,0.917,0.475,0.77166665 141 | vid0184,obj0000,0,5010,0.42,0.726,0.33,0.9816667 142 | vid0189,obj0000,0,5940,0.399,0.693,0.40333334,0.73 143 | vid0269,obj0000,0,1380,0.0,0.592,0.43,0.7966667 144 | vid0288,obj0000,0,1440,0.442,0.733,0.17,0.7683333 145 | vid0289,obj0000,0,4980,0.437,0.679,0.425,0.71 146 | vid0048,obj0000,0,990,0.109,0.934,0.52166665,0.83166665 147 | vid0046,obj0000,0,9480,0.182,0.724,0.685,0.78833336 148 | vid0047,obj0000,0,4140,0.333,0.725,0.035,0.705 149 | vid0044,obj0000,0,9030,0.239,0.643,0.48333332,1.0 150 | vid0042,obj0000,0,10890,0.427,0.768,0.55833334,0.83166665 151 | vid0053,obj0000,0,8640,0.226,0.68,0.17166667,0.885 152 | vid0286,obj0000,0,4140,0.432,0.52,0.295,0.7083333 153 | vid0287,obj0000,0,4140,0.293,0.676,0.5283333,0.71166664 154 | vid0190,obj0000,0,1380,0.874,1.0,0.093333334,0.5133333 155 | vid0191,obj0000,0,1440,0.672,0.838,0.07333333,0.855 156 | vid0211,obj0000,0,5430,0.565,0.988,0.0,0.77666664 157 | vid0210,obj0000,0,1440,0.0,0.632,0.0,0.17166667 158 | vid0216,obj0000,0,3180,0.101,0.617,0.123333335,0.86333334 159 | vid0196,obj0000,0,2370,0.224,0.982,0.29,0.625 160 | vid0196,obj0001,8550,9990,0.251,0.742,0.20666666,0.5 161 | vid0197,obj0000,0,4980,0.305,0.483,0.31666666,0.665 162 | vid0197,obj0001,870,4080,0.382,0.614,0.37333333,0.63166666 163 | vid0292,obj0000,0,2790,0.503,0.801,0.23666666,0.79333335 164 | vid0111,obj0000,0,4590,0.486,1.0,0.41166666,0.94 165 | vid0066,obj0000,0,12210,0.021,0.931,0.20666666,0.54333335 166 | vid0213,obj0000,0,1230,0.121,0.375,0.6483333,1.0 167 | vid0038,obj0000,0,5220,0.455,0.666,0.325,0.58666664 168 | vid0329,obj0000,0,1380,0.245,0.444,0.315,0.60333335 169 | vid0212,obj0000,0,2130,0.521,0.767,0.24833333,0.49166667 170 | vid0033,obj0000,0,13590,0.346,0.879,0.16,0.555 171 | vid0033,obj0001,4500,10890,0.577,0.904,0.19833334,0.89 172 | vid0032,obj0000,0,6840,0.404,0.785,0.60833335,0.7916667 173 
| vid0032,obj0001,4080,6510,0.2,0.393,0.4,1.0 174 | vid0031,obj0000,0,3690,0.507,0.768,0.5233333,0.69666666 175 | vid0030,obj0000,0,1440,0.416,0.718,0.41666666,0.72833335 176 | vid0037,obj0000,0,12510,0.228,0.532,0.35833332,0.65833336 177 | vid0192,obj0000,0,1500,0.547,1.0,0.18833333,0.945 178 | vid0034,obj0000,0,2190,0.559,0.947,0.24,1.0 179 | vid0193,obj0000,0,11280,0.416,0.791,0.22,0.83166665 180 | vid0245,obj0000,0,4620,0.0,0.976,0.345,0.6666667 181 | vid0246,obj0000,0,1290,0.228,0.473,0.42333335,0.7083333 182 | vid0098,obj0000,0,1380,0.356,0.556,0.245,0.7966667 183 | vid0202,obj0000,0,1410,0.0,0.407,0.27833334,0.665 184 | vid0203,obj0000,0,7170,0.317,0.742,0.0016666667,1.0 185 | vid0204,obj0000,0,4590,0.373,0.877,0.08166666,0.73833334 186 | vid0215,obj0000,0,2310,0.381,0.449,0.45833334,0.60333335 187 | vid0215,obj0001,5370,10860,0.0,0.444,0.0,1.0 188 | vid0215,obj0002,11670,13110,0.215,0.695,0.0,0.8466667 189 | vid0208,obj0000,0,3180,0.441,0.951,0.28,0.7583333 190 | vid0214,obj0000,0,2790,0.168,0.616,0.23166667,0.6016667 191 | vid0121,obj0000,0,2340,0.322,0.444,0.6116667,0.675 192 | vid0120,obj0000,0,19050,0.259,0.904,0.44,0.635 193 | vid0123,obj0000,0,1170,0.379,0.481,0.11333334,0.20833333 194 | vid0122,obj0000,0,1440,0.406,0.71,0.35666665,0.6333333 195 | vid0327,obj0000,0,11700,0.712,1.0,0.48333332,0.8383333 196 | vid0327,obj0001,7230,10890,0.272,0.935,0.13166666,0.885 197 | vid0326,obj0000,0,1890,0.63,0.766,0.38166666,0.56666666 198 | vid0324,obj0000,0,3240,0.494,0.933,0.28833333,1.0 199 | vid0129,obj0000,0,1380,0.067,0.249,0.41,0.515 200 | vid0128,obj0000,0,4080,0.31,0.595,0.31166667,0.54 201 | -------------------------------------------------------------------------------- /dataset/tasks/test.csv: -------------------------------------------------------------------------------- 1 | vid0147,obj0000,0,1380,0.588,0.798,0.44333333,0.5833333 2 | vid0147,obj0001,4080,5490,0.545,0.827,0.34666666,0.55333334 3 | vid0028,obj0000,0,4140,0.662,0.902,0.33166668,1.0 4 | vid0024,obj0000,0,1440,0.124,0.731,0.0,0.35333332 5 | vid0026,obj0000,0,1890,0.456,0.837,0.67,1.0 6 | vid0026,obj0001,2700,4950,0.776,1.0,0.0,0.42333335 7 | vid0148,obj0000,0,1410,0.438,0.687,0.43333334,1.0 8 | vid0239,obj0000,0,1470,0.473,0.776,0.21666667,0.61333334 9 | vid0238,obj0000,0,1350,0.035,0.345,0.18833333,1.0 10 | vid0234,obj0000,0,1320,0.485,0.851,0.07666667,0.41833332 11 | vid0237,obj0000,0,4140,0.146,0.676,0.17166667,0.7366667 12 | vid0232,obj0000,0,1530,0.368,0.984,0.38666666,0.93833333 13 | vid0132,obj0000,0,1380,0.857,1.0,0.145,1.0 14 | vid0130,obj0000,0,1440,0.354,0.495,0.685,0.76 15 | vid0136,obj0000,0,9030,0.37,0.462,0.395,0.48833334 16 | vid0134,obj0000,0,1440,0.0,0.946,0.108333334,1.0 17 | vid0138,obj0000,0,1380,0.244,0.456,0.14166667,0.79833335 18 | vid0332,obj0000,0,2760,0.289,0.456,0.62833333,0.735 19 | vid0154,obj0000,0,3120,0.365,0.909,0.0,1.0 20 | vid0156,obj0000,0,1440,0.45,0.934,0.28166667,0.66 21 | vid0157,obj0000,0,2790,0.513,0.708,0.43666667,0.5733333 22 | vid0150,obj0000,0,5430,0.191,0.472,0.7033333,0.9766667 23 | vid0150,obj0001,2190,3630,0.597,0.951,0.56333333,0.8016667 24 | vid0259,obj0000,0,1230,0.377,0.589,0.135,0.69666666 25 | vid0259,obj0001,8460,9900,0.378,0.736,0.26166666,0.88166666 26 | vid0152,obj0000,0,3630,0.27,0.733,0.085,0.47666666 27 | vid0267,obj0000,0,1770,0.491,0.659,0.43166667,0.94666666 28 | vid0264,obj0000,0,1320,0.255,0.346,0.46,0.7083333 29 | vid0264,obj0001,2730,6390,0.692,0.996,0.06666667,0.5133333 30 | vid0149,obj0000,0,1440,0.222,1.0,0.5,0.93833333 31 | 
vid0262,obj0000,0,1620,0.407,0.506,0.24833333,0.62666667 32 | vid0229,obj0000,0,5310,0.31,0.6,0.37166667,0.69166666 33 | vid0222,obj0000,0,2760,0.031,0.99,0.555,0.795 34 | vid0220,obj0000,0,1440,0.319,0.349,0.40333334,0.505 35 | vid0221,obj0000,0,5430,0.263,0.61,0.31166667,0.815 36 | vid0221,obj0001,1050,9030,0.534,1.0,0.31833333,1.0 37 | vid0227,obj0000,0,3690,0.226,0.463,0.49,0.8433333 38 | vid0224,obj0000,0,3900,0.336,0.689,0.42,0.53333336 39 | vid0305,obj0000,0,1350,0.222,0.882,0.24666667,0.515 40 | vid0304,obj0000,0,8190,0.344,0.692,0.34666666,0.7183333 41 | vid0307,obj0000,0,1380,0.384,0.507,0.40333334,0.805 42 | vid0161,obj0000,0,1290,0.297,0.483,0.21833333,1.0 43 | vid0160,obj0000,0,1380,0.551,0.618,0.27833334,0.68333334 44 | vid0165,obj0000,0,1440,0.103,0.442,0.40166667,0.82666665 45 | vid0164,obj0000,0,1440,0.481,0.525,0.22333333,0.27666667 46 | vid0317,obj0000,0,2220,0.058,0.624,0.083333336,0.76666665 47 | vid0314,obj0000,0,1890,0.808,1.0,0.42333335,0.97833335 48 | vid0312,obj0000,0,1860,0.584,0.675,0.525,0.8016667 49 | vid0310,obj0000,0,5940,0.562,0.669,0.29666665,0.865 50 | vid0096,obj0000,0,15840,0.341,0.507,0.42166665,0.5683333 51 | vid0096,obj0001,12180,14940,0.183,0.263,0.45833334,0.5366667 52 | vid0091,obj0000,0,1740,0.25,0.546,0.24666667,0.855 53 | vid0090,obj0000,0,2700,0.148,0.729,0.5466667,0.785 54 | vid0318,obj0000,0,2760,0.185,0.729,0.21,0.7966667 55 | vid0319,obj0000,0,1440,0.033,0.82,0.18,1.0 56 | vid0015,obj0000,0,1890,0.868,1.0,0.0,0.605 57 | vid0016,obj0000,0,1440,0.374,0.547,0.33166668,0.6716667 58 | vid0010,obj0000,0,1380,0.471,0.737,0.41833332,0.6933333 59 | vid0013,obj0000,0,1890,0.529,0.757,0.16,0.7183333 60 | vid0257,obj0000,0,2610,0.109,0.326,0.28333333,1.0 61 | vid0256,obj0000,0,2340,0.207,0.709,0.53333336,0.74666667 62 | vid0019,obj0000,0,1440,0.777,0.841,0.37,0.44166666 63 | vid0019,obj0001,1800,3690,0.55,0.58,0.42333335,0.45833334 64 | vid0019,obj0002,8610,9990,0.429,0.64,0.34333333,0.5516667 65 | vid0251,obj0000,0,1380,0.675,1.0,0.0,0.7133333 66 | vid0250,obj0000,0,1350,0.178,0.83,0.415,1.0 67 | vid0075,obj0000,0,2730,0.311,0.384,0.20833333,0.5183333 68 | vid0071,obj0000,0,3690,0.011,0.061,0.325,0.45333335 69 | vid0179,obj0000,0,1440,0.194,0.372,0.31666666,1.0 70 | vid0172,obj0000,0,1380,0.467,0.715,0.43166667,0.73833334 71 | vid0173,obj0000,0,5040,0.37,0.508,0.515,0.8883333 72 | vid0170,obj0000,0,1890,0.52,0.609,0.575,0.94 73 | vid0171,obj0000,0,1080,0.503,0.802,0.21333334,0.8616667 74 | vid0194,obj0000,0,2340,0.515,0.692,0.7683333,0.83666664 75 | vid0078,obj0000,0,7290,0.293,0.516,0.25166667,0.845 76 | vid0088,obj0000,0,10890,0.355,0.972,0.40833333,0.71 77 | vid0083,obj0000,0,1380,0.371,0.822,0.07666667,1.0 78 | vid0086,obj0000,0,1530,0.56,0.766,0.25,0.42333335 79 | vid0087,obj0000,0,3180,0.471,1.0,0.32333332,1.0 80 | vid0002,obj0000,0,1440,0.634,0.848,0.24666667,0.67333335 81 | vid0248,obj0000,0,2040,0.309,0.881,0.21333334,0.8466667 82 | vid0248,obj0001,3390,4740,0.158,0.337,0.26833335,0.87166667 83 | vid0007,obj0000,0,4140,0.0,0.921,0.31166667,0.9716667 84 | vid0244,obj0000,0,3240,0.241,0.286,0.17666666,0.31666666 85 | vid0247,obj0000,0,1440,0.138,0.308,0.855,1.0 86 | vid0240,obj0000,0,1440,0.305,0.452,0.585,0.62166667 87 | vid0242,obj0000,0,1530,0.435,0.492,0.86833334,0.96666664 88 | vid0243,obj0000,0,2790,0.828,0.921,0.165,0.6433333 89 | vid0064,obj0000,0,1890,0.0,0.15,0.16,0.83 90 | vid0065,obj0000,0,6840,0.19,0.738,0.405,0.7083333 91 | vid0108,obj0000,0,1290,0.211,0.468,0.53,0.74333334 92 | 
vid0060,obj0000,0,1050,0.396,0.956,0.11666667,0.75166667 93 | vid0061,obj0000,0,11790,0.331,0.564,0.165,0.65833336 94 | vid0068,obj0000,0,6840,0.173,0.833,0.34,0.72833335 95 | vid0106,obj0000,0,4140,0.27,0.607,0.16666667,0.7133333 96 | vid0105,obj0000,0,2550,0.645,0.71,0.30666667,0.49166667 97 | vid0104,obj0000,0,4080,0.255,0.895,0.3,1.0 98 | vid0050,obj0000,0,1320,0.496,0.602,0.085,0.87166667 99 | vid0279,obj0000,0,2280,0.123,0.401,0.28,0.745 100 | vid0278,obj0000,0,2280,0.019,0.475,0.305,0.5366667 101 | vid0270,obj0000,0,1440,0.278,0.344,0.395,0.66333336 102 | vid0273,obj0000,0,2340,0.576,0.7,0.45166665,0.815 103 | vid0275,obj0000,0,2850,0.0,0.862,0.051666666,1.0 104 | vid0277,obj0000,0,2280,0.368,0.603,0.29666665,0.87 105 | vid0118,obj0000,0,8640,0.491,0.816,0.27833334,0.67333335 106 | vid0118,obj0001,4950,18090,0.408,0.694,0.20833333,0.71166664 107 | vid0119,obj0000,0,1440,0.138,1.0,0.44833332,0.905 108 | vid0045,obj0000,0,12690,0.0,0.739,0.19666667,0.48166665 109 | vid0055,obj0000,0,6390,0.47,0.612,0.08166666,0.25333333 110 | vid0057,obj0000,0,2340,0.177,0.589,0.04,1.0 111 | vid0293,obj0000,0,9030,0.057,0.419,0.15666667,0.70166665 112 | vid0112,obj0000,0,1440,0.033,0.776,0.18166667,0.87666667 113 | vid0113,obj0000,0,3900,0.417,0.522,0.08833333,0.44666666 114 | vid0297,obj0000,0,1200,0.156,0.255,0.52166665,0.7 115 | vid0116,obj0000,0,10080,0.324,0.575,0.37666667,1.0 116 | vid0294,obj0000,0,1290,0.0,0.571,0.17833333,1.0 117 | vid0183,obj0000,0,13140,0.0,0.612,0.36166668,0.76166666 118 | vid0182,obj0000,0,2730,0.278,0.778,0.34666666,0.645 119 | vid0181,obj0000,0,5490,0.612,0.72,0.12666667,0.45 120 | vid0265,obj0000,0,1440,0.505,0.686,0.11333334,0.91833335 121 | vid0187,obj0000,0,1440,0.518,0.725,0.72833335,0.80833334 122 | vid0263,obj0000,0,3690,0.388,0.558,0.35,0.75666666 123 | vid0185,obj0000,0,5940,0.374,0.444,0.24333334,0.65 124 | vid0261,obj0000,0,1260,0.309,0.394,0.7216667,0.7733333 125 | vid0188,obj0000,0,1380,0.526,0.864,0.35333332,1.0 126 | vid0268,obj0000,0,1440,0.486,0.713,0.44166666,0.7633333 127 | vid0110,obj0000,0,1830,0.504,0.579,0.40666667,0.6333333 128 | vid0049,obj0000,0,12180,0.632,0.961,0.505,0.67 129 | vid0049,obj0001,1380,13080,0.382,0.574,0.65833336,0.715 130 | vid0280,obj0000,0,9030,0.398,0.598,0.44833332,0.6383333 131 | vid0282,obj0000,0,12180,0.259,0.598,0.26833335,0.6116667 132 | vid0283,obj0000,0,2340,0.487,0.594,0.53,0.575 133 | vid0285,obj0000,0,1440,0.333,0.669,0.89666665,1.0 134 | vid0041,obj0000,0,1440,0.382,0.583,0.19,1.0 135 | vid0052,obj0000,0,1050,0.0,0.197,0.17333333,0.415 136 | vid0217,obj0000,0,3030,0.539,0.565,0.47833332,0.495 137 | vid0198,obj0000,0,1440,0.216,0.5,0.40666667,1.0 138 | vid0199,obj0000,0,1740,0.349,1.0,0.0,1.0 139 | vid0219,obj0000,0,9540,0.562,0.827,0.45,0.69666666 140 | vid0218,obj0000,0,4140,0.38,0.598,0.26166666,0.62833333 141 | vid0218,obj0001,4650,9540,0.344,1.0,0.21,0.7733333 142 | vid0059,obj0000,0,1890,0.606,0.983,0.015,0.42333335 143 | vid0006,obj0000,0,2340,0.11,0.476,0.375,0.725 144 | vid0039,obj0000,0,1380,0.19,0.827,0.08,0.53333336 145 | vid0114,obj0000,0,1380,0.146,0.447,0.0,0.9916667 146 | vid0036,obj0000,0,1440,0.223,0.359,0.445,0.685 147 | vid0035,obj0000,0,1050,0.167,0.847,0.33333334,0.62833333 148 | vid0035,obj0001,1410,8700,0.0,0.275,0.29666665,0.6016667 149 | vid0099,obj0000,0,1380,0.164,0.854,0.415,0.6483333 150 | vid0117,obj0000,0,2790,0.313,0.627,0.5466667,0.8283333 151 | vid0200,obj0000,0,1440,0.518,0.584,0.535,0.8283333 152 | vid0201,obj0000,0,2340,0.793,1.0,0.0016666667,0.425 153 | 
vid0009,obj0000,0,1350,0.43,0.505,0.415,0.65 154 | vid0206,obj0000,0,1440,0.296,0.517,0.255,0.67333335 155 | vid0207,obj0000,0,4140,0.24,0.335,0.59,0.6383333 156 | vid0209,obj0000,0,4140,0.284,0.563,0.52,1.0 157 | vid0125,obj0000,0,3240,0.247,0.861,0.6533333,0.86833334 158 | vid0124,obj0000,0,7320,0.432,0.694,0.20333333,0.715 159 | vid0127,obj0000,0,2190,0.3,0.488,0.28666666,0.485 160 | vid0126,obj0000,0,2340,0.112,1.0,0.275,0.93666667 161 | vid0328,obj0000,0,4590,0.544,0.702,0.61333334,0.81333333 162 | vid0325,obj0000,0,1590,0.549,0.611,0.335,0.42833334 163 | vid0323,obj0000,0,1410,0.832,1.0,0.47166666,0.61 164 | vid0322,obj0000,0,900,0.0,1.0,0.16333333,0.80833334 165 | vid0321,obj0000,0,1350,0.111,0.928,0.26333332,1.0 166 | vid0320,obj0000,0,3240,0.375,0.631,0.28,0.43166667 167 | -------------------------------------------------------------------------------- /docs/css/bootstrap-social.css: -------------------------------------------------------------------------------- 1 | /* 2 | * Social Buttons for Bootstrap 3 | * 4 | * Copyright 2013-2016 Panayiotis Lipiridis 5 | * Licensed under the MIT License 6 | * 7 | * https://github.com/lipis/bootstrap-social 8 | */ 9 | 10 | .btn-social{position:relative;padding-left:44px;text-align:left;white-space:nowrap;overflow:hidden;text-overflow:ellipsis}.btn-social>:first-child{position:absolute;left:0;top:0;bottom:0;width:32px;line-height:34px;font-size:1.6em;text-align:center;border-right:1px solid rgba(0,0,0,0.2)} 11 | .btn-social.btn-lg{padding-left:61px}.btn-social.btn-lg>:first-child{line-height:45px;width:45px;font-size:1.8em} 12 | .btn-social.btn-sm{padding-left:38px}.btn-social.btn-sm>:first-child{line-height:28px;width:28px;font-size:1.4em} 13 | .btn-social.btn-xs{padding-left:30px}.btn-social.btn-xs>:first-child{line-height:20px;width:20px;font-size:1.2em} 14 | .btn-social-icon{position:relative;padding-left:44px;text-align:left;white-space:nowrap;overflow:hidden;text-overflow:ellipsis;height:34px;width:34px;padding:0}.btn-social-icon>:first-child{position:absolute;left:0;top:0;bottom:0;width:32px;line-height:34px;font-size:1.6em;text-align:center;border-right:1px solid rgba(0,0,0,0.2)} 15 | .btn-social-icon.btn-lg{padding-left:61px}.btn-social-icon.btn-lg>:first-child{line-height:45px;width:45px;font-size:1.8em} 16 | .btn-social-icon.btn-sm{padding-left:38px}.btn-social-icon.btn-sm>:first-child{line-height:28px;width:28px;font-size:1.4em} 17 | .btn-social-icon.btn-xs{padding-left:30px}.btn-social-icon.btn-xs>:first-child{line-height:20px;width:20px;font-size:1.2em} 18 | .btn-social-icon>:first-child{border:none;text-align:center;width:100% !important} 19 | .btn-social-icon.btn-lg{height:45px;width:45px;padding-left:0;padding-right:0} 20 | .btn-social-icon.btn-sm{height:30px;width:30px;padding-left:0;padding-right:0} 21 | .btn-social-icon.btn-xs{height:22px;width:22px;padding-left:0;padding-right:0} 22 | .btn-adn{color:#fff;background-color:#d87a68;border-color:rgba(0,0,0,0.2)}.btn-adn:focus,.btn-adn.focus{color:#fff;background-color:#ce563f;border-color:rgba(0,0,0,0.2)} 23 | .btn-adn:hover{color:#fff;background-color:#ce563f;border-color:rgba(0,0,0,0.2)} 24 | 
.btn-adn:active,.btn-adn.active,.open>.dropdown-toggle.btn-adn{color:#fff;background-color:#ce563f;border-color:rgba(0,0,0,0.2)}.btn-adn:active:hover,.btn-adn.active:hover,.open>.dropdown-toggle.btn-adn:hover,.btn-adn:active:focus,.btn-adn.active:focus,.open>.dropdown-toggle.btn-adn:focus,.btn-adn:active.focus,.btn-adn.active.focus,.open>.dropdown-toggle.btn-adn.focus{color:#fff;background-color:#b94630;border-color:rgba(0,0,0,0.2)} 25 | .btn-adn:active,.btn-adn.active,.open>.dropdown-toggle.btn-adn{background-image:none} 26 | .btn-adn.disabled:hover,.btn-adn[disabled]:hover,fieldset[disabled] .btn-adn:hover,.btn-adn.disabled:focus,.btn-adn[disabled]:focus,fieldset[disabled] .btn-adn:focus,.btn-adn.disabled.focus,.btn-adn[disabled].focus,fieldset[disabled] .btn-adn.focus{background-color:#d87a68;border-color:rgba(0,0,0,0.2)} 27 | .btn-adn .badge{color:#d87a68;background-color:#fff} 28 | .btn-bitbucket{color:#fff;background-color:#205081;border-color:rgba(0,0,0,0.2)}.btn-bitbucket:focus,.btn-bitbucket.focus{color:#fff;background-color:#163758;border-color:rgba(0,0,0,0.2)} 29 | .btn-bitbucket:hover{color:#fff;background-color:#163758;border-color:rgba(0,0,0,0.2)} 30 | .btn-bitbucket:active,.btn-bitbucket.active,.open>.dropdown-toggle.btn-bitbucket{color:#fff;background-color:#163758;border-color:rgba(0,0,0,0.2)}.btn-bitbucket:active:hover,.btn-bitbucket.active:hover,.open>.dropdown-toggle.btn-bitbucket:hover,.btn-bitbucket:active:focus,.btn-bitbucket.active:focus,.open>.dropdown-toggle.btn-bitbucket:focus,.btn-bitbucket:active.focus,.btn-bitbucket.active.focus,.open>.dropdown-toggle.btn-bitbucket.focus{color:#fff;background-color:#0f253c;border-color:rgba(0,0,0,0.2)} 31 | .btn-bitbucket:active,.btn-bitbucket.active,.open>.dropdown-toggle.btn-bitbucket{background-image:none} 32 | .btn-bitbucket.disabled:hover,.btn-bitbucket[disabled]:hover,fieldset[disabled] .btn-bitbucket:hover,.btn-bitbucket.disabled:focus,.btn-bitbucket[disabled]:focus,fieldset[disabled] .btn-bitbucket:focus,.btn-bitbucket.disabled.focus,.btn-bitbucket[disabled].focus,fieldset[disabled] .btn-bitbucket.focus{background-color:#205081;border-color:rgba(0,0,0,0.2)} 33 | .btn-bitbucket .badge{color:#205081;background-color:#fff} 34 | .btn-dropbox{color:#fff;background-color:#1087dd;border-color:rgba(0,0,0,0.2)}.btn-dropbox:focus,.btn-dropbox.focus{color:#fff;background-color:#0d6aad;border-color:rgba(0,0,0,0.2)} 35 | .btn-dropbox:hover{color:#fff;background-color:#0d6aad;border-color:rgba(0,0,0,0.2)} 36 | .btn-dropbox:active,.btn-dropbox.active,.open>.dropdown-toggle.btn-dropbox{color:#fff;background-color:#0d6aad;border-color:rgba(0,0,0,0.2)}.btn-dropbox:active:hover,.btn-dropbox.active:hover,.open>.dropdown-toggle.btn-dropbox:hover,.btn-dropbox:active:focus,.btn-dropbox.active:focus,.open>.dropdown-toggle.btn-dropbox:focus,.btn-dropbox:active.focus,.btn-dropbox.active.focus,.open>.dropdown-toggle.btn-dropbox.focus{color:#fff;background-color:#0a568c;border-color:rgba(0,0,0,0.2)} 37 | .btn-dropbox:active,.btn-dropbox.active,.open>.dropdown-toggle.btn-dropbox{background-image:none} 38 | .btn-dropbox.disabled:hover,.btn-dropbox[disabled]:hover,fieldset[disabled] .btn-dropbox:hover,.btn-dropbox.disabled:focus,.btn-dropbox[disabled]:focus,fieldset[disabled] .btn-dropbox:focus,.btn-dropbox.disabled.focus,.btn-dropbox[disabled].focus,fieldset[disabled] .btn-dropbox.focus{background-color:#1087dd;border-color:rgba(0,0,0,0.2)} 39 | .btn-dropbox .badge{color:#1087dd;background-color:#fff} 40 | 
.btn-facebook{color:#fff;background-color:#3b5998;border-color:rgba(0,0,0,0.2)}.btn-facebook:focus,.btn-facebook.focus{color:#fff;background-color:#2d4373;border-color:rgba(0,0,0,0.2)} 41 | .btn-facebook:hover{color:#fff;background-color:#2d4373;border-color:rgba(0,0,0,0.2)} 42 | .btn-facebook:active,.btn-facebook.active,.open>.dropdown-toggle.btn-facebook{color:#fff;background-color:#2d4373;border-color:rgba(0,0,0,0.2)}.btn-facebook:active:hover,.btn-facebook.active:hover,.open>.dropdown-toggle.btn-facebook:hover,.btn-facebook:active:focus,.btn-facebook.active:focus,.open>.dropdown-toggle.btn-facebook:focus,.btn-facebook:active.focus,.btn-facebook.active.focus,.open>.dropdown-toggle.btn-facebook.focus{color:#fff;background-color:#23345a;border-color:rgba(0,0,0,0.2)} 43 | .btn-facebook:active,.btn-facebook.active,.open>.dropdown-toggle.btn-facebook{background-image:none} 44 | .btn-facebook.disabled:hover,.btn-facebook[disabled]:hover,fieldset[disabled] .btn-facebook:hover,.btn-facebook.disabled:focus,.btn-facebook[disabled]:focus,fieldset[disabled] .btn-facebook:focus,.btn-facebook.disabled.focus,.btn-facebook[disabled].focus,fieldset[disabled] .btn-facebook.focus{background-color:#3b5998;border-color:rgba(0,0,0,0.2)} 45 | .btn-facebook .badge{color:#3b5998;background-color:#fff} 46 | .btn-flickr{color:#fff;background-color:#ff0084;border-color:rgba(0,0,0,0.2)}.btn-flickr:focus,.btn-flickr.focus{color:#fff;background-color:#cc006a;border-color:rgba(0,0,0,0.2)} 47 | .btn-flickr:hover{color:#fff;background-color:#cc006a;border-color:rgba(0,0,0,0.2)} 48 | .btn-flickr:active,.btn-flickr.active,.open>.dropdown-toggle.btn-flickr{color:#fff;background-color:#cc006a;border-color:rgba(0,0,0,0.2)}.btn-flickr:active:hover,.btn-flickr.active:hover,.open>.dropdown-toggle.btn-flickr:hover,.btn-flickr:active:focus,.btn-flickr.active:focus,.open>.dropdown-toggle.btn-flickr:focus,.btn-flickr:active.focus,.btn-flickr.active.focus,.open>.dropdown-toggle.btn-flickr.focus{color:#fff;background-color:#a80057;border-color:rgba(0,0,0,0.2)} 49 | .btn-flickr:active,.btn-flickr.active,.open>.dropdown-toggle.btn-flickr{background-image:none} 50 | .btn-flickr.disabled:hover,.btn-flickr[disabled]:hover,fieldset[disabled] .btn-flickr:hover,.btn-flickr.disabled:focus,.btn-flickr[disabled]:focus,fieldset[disabled] .btn-flickr:focus,.btn-flickr.disabled.focus,.btn-flickr[disabled].focus,fieldset[disabled] .btn-flickr.focus{background-color:#ff0084;border-color:rgba(0,0,0,0.2)} 51 | .btn-flickr .badge{color:#ff0084;background-color:#fff} 52 | .btn-foursquare{color:#fff;background-color:#f94877;border-color:rgba(0,0,0,0.2)}.btn-foursquare:focus,.btn-foursquare.focus{color:#fff;background-color:#f71752;border-color:rgba(0,0,0,0.2)} 53 | .btn-foursquare:hover{color:#fff;background-color:#f71752;border-color:rgba(0,0,0,0.2)} 54 | .btn-foursquare:active,.btn-foursquare.active,.open>.dropdown-toggle.btn-foursquare{color:#fff;background-color:#f71752;border-color:rgba(0,0,0,0.2)}.btn-foursquare:active:hover,.btn-foursquare.active:hover,.open>.dropdown-toggle.btn-foursquare:hover,.btn-foursquare:active:focus,.btn-foursquare.active:focus,.open>.dropdown-toggle.btn-foursquare:focus,.btn-foursquare:active.focus,.btn-foursquare.active.focus,.open>.dropdown-toggle.btn-foursquare.focus{color:#fff;background-color:#e30742;border-color:rgba(0,0,0,0.2)} 55 | .btn-foursquare:active,.btn-foursquare.active,.open>.dropdown-toggle.btn-foursquare{background-image:none} 56 | 
.btn-foursquare.disabled:hover,.btn-foursquare[disabled]:hover,fieldset[disabled] .btn-foursquare:hover,.btn-foursquare.disabled:focus,.btn-foursquare[disabled]:focus,fieldset[disabled] .btn-foursquare:focus,.btn-foursquare.disabled.focus,.btn-foursquare[disabled].focus,fieldset[disabled] .btn-foursquare.focus{background-color:#f94877;border-color:rgba(0,0,0,0.2)} 57 | .btn-foursquare .badge{color:#f94877;background-color:#fff} 58 | .btn-github{color:#fff;background-color:#444;border-color:rgba(0,0,0,0.2)}.btn-github:focus,.btn-github.focus{color:#fff;background-color:#2b2b2b;border-color:rgba(0,0,0,0.2)} 59 | .btn-github:hover{color:#fff;background-color:#2b2b2b;border-color:rgba(0,0,0,0.2)} 60 | .btn-github:active,.btn-github.active,.open>.dropdown-toggle.btn-github{color:#fff;background-color:#2b2b2b;border-color:rgba(0,0,0,0.2)}.btn-github:active:hover,.btn-github.active:hover,.open>.dropdown-toggle.btn-github:hover,.btn-github:active:focus,.btn-github.active:focus,.open>.dropdown-toggle.btn-github:focus,.btn-github:active.focus,.btn-github.active.focus,.open>.dropdown-toggle.btn-github.focus{color:#fff;background-color:#191919;border-color:rgba(0,0,0,0.2)} 61 | .btn-github:active,.btn-github.active,.open>.dropdown-toggle.btn-github{background-image:none} 62 | .btn-github.disabled:hover,.btn-github[disabled]:hover,fieldset[disabled] .btn-github:hover,.btn-github.disabled:focus,.btn-github[disabled]:focus,fieldset[disabled] .btn-github:focus,.btn-github.disabled.focus,.btn-github[disabled].focus,fieldset[disabled] .btn-github.focus{background-color:#444;border-color:rgba(0,0,0,0.2)} 63 | .btn-github .badge{color:#444;background-color:#fff} 64 | .btn-google{color:#fff;background-color:#dd4b39;border-color:rgba(0,0,0,0.2)}.btn-google:focus,.btn-google.focus{color:#fff;background-color:#c23321;border-color:rgba(0,0,0,0.2)} 65 | .btn-google:hover{color:#fff;background-color:#c23321;border-color:rgba(0,0,0,0.2)} 66 | .btn-google:active,.btn-google.active,.open>.dropdown-toggle.btn-google{color:#fff;background-color:#c23321;border-color:rgba(0,0,0,0.2)}.btn-google:active:hover,.btn-google.active:hover,.open>.dropdown-toggle.btn-google:hover,.btn-google:active:focus,.btn-google.active:focus,.open>.dropdown-toggle.btn-google:focus,.btn-google:active.focus,.btn-google.active.focus,.open>.dropdown-toggle.btn-google.focus{color:#fff;background-color:#a32b1c;border-color:rgba(0,0,0,0.2)} 67 | .btn-google:active,.btn-google.active,.open>.dropdown-toggle.btn-google{background-image:none} 68 | .btn-google.disabled:hover,.btn-google[disabled]:hover,fieldset[disabled] .btn-google:hover,.btn-google.disabled:focus,.btn-google[disabled]:focus,fieldset[disabled] .btn-google:focus,.btn-google.disabled.focus,.btn-google[disabled].focus,fieldset[disabled] .btn-google.focus{background-color:#dd4b39;border-color:rgba(0,0,0,0.2)} 69 | .btn-google .badge{color:#dd4b39;background-color:#fff} 70 | .btn-instagram{color:#fff;background-color:#3f729b;border-color:rgba(0,0,0,0.2)}.btn-instagram:focus,.btn-instagram.focus{color:#fff;background-color:#305777;border-color:rgba(0,0,0,0.2)} 71 | .btn-instagram:hover{color:#fff;background-color:#305777;border-color:rgba(0,0,0,0.2)} 72 | 
.btn-instagram:active,.btn-instagram.active,.open>.dropdown-toggle.btn-instagram{color:#fff;background-color:#305777;border-color:rgba(0,0,0,0.2)}.btn-instagram:active:hover,.btn-instagram.active:hover,.open>.dropdown-toggle.btn-instagram:hover,.btn-instagram:active:focus,.btn-instagram.active:focus,.open>.dropdown-toggle.btn-instagram:focus,.btn-instagram:active.focus,.btn-instagram.active.focus,.open>.dropdown-toggle.btn-instagram.focus{color:#fff;background-color:#26455d;border-color:rgba(0,0,0,0.2)} 73 | .btn-instagram:active,.btn-instagram.active,.open>.dropdown-toggle.btn-instagram{background-image:none} 74 | .btn-instagram.disabled:hover,.btn-instagram[disabled]:hover,fieldset[disabled] .btn-instagram:hover,.btn-instagram.disabled:focus,.btn-instagram[disabled]:focus,fieldset[disabled] .btn-instagram:focus,.btn-instagram.disabled.focus,.btn-instagram[disabled].focus,fieldset[disabled] .btn-instagram.focus{background-color:#3f729b;border-color:rgba(0,0,0,0.2)} 75 | .btn-instagram .badge{color:#3f729b;background-color:#fff} 76 | .btn-linkedin{color:#fff;background-color:#007bb6;border-color:rgba(0,0,0,0.2)}.btn-linkedin:focus,.btn-linkedin.focus{color:#fff;background-color:#005983;border-color:rgba(0,0,0,0.2)} 77 | .btn-linkedin:hover{color:#fff;background-color:#005983;border-color:rgba(0,0,0,0.2)} 78 | .btn-linkedin:active,.btn-linkedin.active,.open>.dropdown-toggle.btn-linkedin{color:#fff;background-color:#005983;border-color:rgba(0,0,0,0.2)}.btn-linkedin:active:hover,.btn-linkedin.active:hover,.open>.dropdown-toggle.btn-linkedin:hover,.btn-linkedin:active:focus,.btn-linkedin.active:focus,.open>.dropdown-toggle.btn-linkedin:focus,.btn-linkedin:active.focus,.btn-linkedin.active.focus,.open>.dropdown-toggle.btn-linkedin.focus{color:#fff;background-color:#00405f;border-color:rgba(0,0,0,0.2)} 79 | .btn-linkedin:active,.btn-linkedin.active,.open>.dropdown-toggle.btn-linkedin{background-image:none} 80 | .btn-linkedin.disabled:hover,.btn-linkedin[disabled]:hover,fieldset[disabled] .btn-linkedin:hover,.btn-linkedin.disabled:focus,.btn-linkedin[disabled]:focus,fieldset[disabled] .btn-linkedin:focus,.btn-linkedin.disabled.focus,.btn-linkedin[disabled].focus,fieldset[disabled] .btn-linkedin.focus{background-color:#007bb6;border-color:rgba(0,0,0,0.2)} 81 | .btn-linkedin .badge{color:#007bb6;background-color:#fff} 82 | .btn-microsoft{color:#fff;background-color:#2672ec;border-color:rgba(0,0,0,0.2)}.btn-microsoft:focus,.btn-microsoft.focus{color:#fff;background-color:#125acd;border-color:rgba(0,0,0,0.2)} 83 | .btn-microsoft:hover{color:#fff;background-color:#125acd;border-color:rgba(0,0,0,0.2)} 84 | .btn-microsoft:active,.btn-microsoft.active,.open>.dropdown-toggle.btn-microsoft{color:#fff;background-color:#125acd;border-color:rgba(0,0,0,0.2)}.btn-microsoft:active:hover,.btn-microsoft.active:hover,.open>.dropdown-toggle.btn-microsoft:hover,.btn-microsoft:active:focus,.btn-microsoft.active:focus,.open>.dropdown-toggle.btn-microsoft:focus,.btn-microsoft:active.focus,.btn-microsoft.active.focus,.open>.dropdown-toggle.btn-microsoft.focus{color:#fff;background-color:#0f4bac;border-color:rgba(0,0,0,0.2)} 85 | .btn-microsoft:active,.btn-microsoft.active,.open>.dropdown-toggle.btn-microsoft{background-image:none} 86 | .btn-microsoft.disabled:hover,.btn-microsoft[disabled]:hover,fieldset[disabled] .btn-microsoft:hover,.btn-microsoft.disabled:focus,.btn-microsoft[disabled]:focus,fieldset[disabled] .btn-microsoft:focus,.btn-microsoft.disabled.focus,.btn-microsoft[disabled].focus,fieldset[disabled] 
.btn-microsoft.focus{background-color:#2672ec;border-color:rgba(0,0,0,0.2)} 87 | .btn-microsoft .badge{color:#2672ec;background-color:#fff} 88 | .btn-odnoklassniki{color:#fff;background-color:#f4731c;border-color:rgba(0,0,0,0.2)}.btn-odnoklassniki:focus,.btn-odnoklassniki.focus{color:#fff;background-color:#d35b0a;border-color:rgba(0,0,0,0.2)} 89 | .btn-odnoklassniki:hover{color:#fff;background-color:#d35b0a;border-color:rgba(0,0,0,0.2)} 90 | .btn-odnoklassniki:active,.btn-odnoklassniki.active,.open>.dropdown-toggle.btn-odnoklassniki{color:#fff;background-color:#d35b0a;border-color:rgba(0,0,0,0.2)}.btn-odnoklassniki:active:hover,.btn-odnoklassniki.active:hover,.open>.dropdown-toggle.btn-odnoklassniki:hover,.btn-odnoklassniki:active:focus,.btn-odnoklassniki.active:focus,.open>.dropdown-toggle.btn-odnoklassniki:focus,.btn-odnoklassniki:active.focus,.btn-odnoklassniki.active.focus,.open>.dropdown-toggle.btn-odnoklassniki.focus{color:#fff;background-color:#b14c09;border-color:rgba(0,0,0,0.2)} 91 | .btn-odnoklassniki:active,.btn-odnoklassniki.active,.open>.dropdown-toggle.btn-odnoklassniki{background-image:none} 92 | .btn-odnoklassniki.disabled:hover,.btn-odnoklassniki[disabled]:hover,fieldset[disabled] .btn-odnoklassniki:hover,.btn-odnoklassniki.disabled:focus,.btn-odnoklassniki[disabled]:focus,fieldset[disabled] .btn-odnoklassniki:focus,.btn-odnoklassniki.disabled.focus,.btn-odnoklassniki[disabled].focus,fieldset[disabled] .btn-odnoklassniki.focus{background-color:#f4731c;border-color:rgba(0,0,0,0.2)} 93 | .btn-odnoklassniki .badge{color:#f4731c;background-color:#fff} 94 | .btn-openid{color:#fff;background-color:#f7931e;border-color:rgba(0,0,0,0.2)}.btn-openid:focus,.btn-openid.focus{color:#fff;background-color:#da7908;border-color:rgba(0,0,0,0.2)} 95 | .btn-openid:hover{color:#fff;background-color:#da7908;border-color:rgba(0,0,0,0.2)} 96 | .btn-openid:active,.btn-openid.active,.open>.dropdown-toggle.btn-openid{color:#fff;background-color:#da7908;border-color:rgba(0,0,0,0.2)}.btn-openid:active:hover,.btn-openid.active:hover,.open>.dropdown-toggle.btn-openid:hover,.btn-openid:active:focus,.btn-openid.active:focus,.open>.dropdown-toggle.btn-openid:focus,.btn-openid:active.focus,.btn-openid.active.focus,.open>.dropdown-toggle.btn-openid.focus{color:#fff;background-color:#b86607;border-color:rgba(0,0,0,0.2)} 97 | .btn-openid:active,.btn-openid.active,.open>.dropdown-toggle.btn-openid{background-image:none} 98 | .btn-openid.disabled:hover,.btn-openid[disabled]:hover,fieldset[disabled] .btn-openid:hover,.btn-openid.disabled:focus,.btn-openid[disabled]:focus,fieldset[disabled] .btn-openid:focus,.btn-openid.disabled.focus,.btn-openid[disabled].focus,fieldset[disabled] .btn-openid.focus{background-color:#f7931e;border-color:rgba(0,0,0,0.2)} 99 | .btn-openid .badge{color:#f7931e;background-color:#fff} 100 | .btn-pinterest{color:#fff;background-color:#cb2027;border-color:rgba(0,0,0,0.2)}.btn-pinterest:focus,.btn-pinterest.focus{color:#fff;background-color:#9f191f;border-color:rgba(0,0,0,0.2)} 101 | .btn-pinterest:hover{color:#fff;background-color:#9f191f;border-color:rgba(0,0,0,0.2)} 102 | 
.btn-pinterest:active,.btn-pinterest.active,.open>.dropdown-toggle.btn-pinterest{color:#fff;background-color:#9f191f;border-color:rgba(0,0,0,0.2)}.btn-pinterest:active:hover,.btn-pinterest.active:hover,.open>.dropdown-toggle.btn-pinterest:hover,.btn-pinterest:active:focus,.btn-pinterest.active:focus,.open>.dropdown-toggle.btn-pinterest:focus,.btn-pinterest:active.focus,.btn-pinterest.active.focus,.open>.dropdown-toggle.btn-pinterest.focus{color:#fff;background-color:#801419;border-color:rgba(0,0,0,0.2)} 103 | .btn-pinterest:active,.btn-pinterest.active,.open>.dropdown-toggle.btn-pinterest{background-image:none} 104 | .btn-pinterest.disabled:hover,.btn-pinterest[disabled]:hover,fieldset[disabled] .btn-pinterest:hover,.btn-pinterest.disabled:focus,.btn-pinterest[disabled]:focus,fieldset[disabled] .btn-pinterest:focus,.btn-pinterest.disabled.focus,.btn-pinterest[disabled].focus,fieldset[disabled] .btn-pinterest.focus{background-color:#cb2027;border-color:rgba(0,0,0,0.2)} 105 | .btn-pinterest .badge{color:#cb2027;background-color:#fff} 106 | .btn-reddit{color:#000;background-color:#eff7ff;border-color:rgba(0,0,0,0.2)}.btn-reddit:focus,.btn-reddit.focus{color:#000;background-color:#bcddff;border-color:rgba(0,0,0,0.2)} 107 | .btn-reddit:hover{color:#000;background-color:#bcddff;border-color:rgba(0,0,0,0.2)} 108 | .btn-reddit:active,.btn-reddit.active,.open>.dropdown-toggle.btn-reddit{color:#000;background-color:#bcddff;border-color:rgba(0,0,0,0.2)}.btn-reddit:active:hover,.btn-reddit.active:hover,.open>.dropdown-toggle.btn-reddit:hover,.btn-reddit:active:focus,.btn-reddit.active:focus,.open>.dropdown-toggle.btn-reddit:focus,.btn-reddit:active.focus,.btn-reddit.active.focus,.open>.dropdown-toggle.btn-reddit.focus{color:#000;background-color:#98ccff;border-color:rgba(0,0,0,0.2)} 109 | .btn-reddit:active,.btn-reddit.active,.open>.dropdown-toggle.btn-reddit{background-image:none} 110 | .btn-reddit.disabled:hover,.btn-reddit[disabled]:hover,fieldset[disabled] .btn-reddit:hover,.btn-reddit.disabled:focus,.btn-reddit[disabled]:focus,fieldset[disabled] .btn-reddit:focus,.btn-reddit.disabled.focus,.btn-reddit[disabled].focus,fieldset[disabled] .btn-reddit.focus{background-color:#eff7ff;border-color:rgba(0,0,0,0.2)} 111 | .btn-reddit .badge{color:#eff7ff;background-color:#000} 112 | .btn-soundcloud{color:#fff;background-color:#f50;border-color:rgba(0,0,0,0.2)}.btn-soundcloud:focus,.btn-soundcloud.focus{color:#fff;background-color:#c40;border-color:rgba(0,0,0,0.2)} 113 | .btn-soundcloud:hover{color:#fff;background-color:#c40;border-color:rgba(0,0,0,0.2)} 114 | .btn-soundcloud:active,.btn-soundcloud.active,.open>.dropdown-toggle.btn-soundcloud{color:#fff;background-color:#c40;border-color:rgba(0,0,0,0.2)}.btn-soundcloud:active:hover,.btn-soundcloud.active:hover,.open>.dropdown-toggle.btn-soundcloud:hover,.btn-soundcloud:active:focus,.btn-soundcloud.active:focus,.open>.dropdown-toggle.btn-soundcloud:focus,.btn-soundcloud:active.focus,.btn-soundcloud.active.focus,.open>.dropdown-toggle.btn-soundcloud.focus{color:#fff;background-color:#a83800;border-color:rgba(0,0,0,0.2)} 115 | .btn-soundcloud:active,.btn-soundcloud.active,.open>.dropdown-toggle.btn-soundcloud{background-image:none} 116 | .btn-soundcloud.disabled:hover,.btn-soundcloud[disabled]:hover,fieldset[disabled] .btn-soundcloud:hover,.btn-soundcloud.disabled:focus,.btn-soundcloud[disabled]:focus,fieldset[disabled] .btn-soundcloud:focus,.btn-soundcloud.disabled.focus,.btn-soundcloud[disabled].focus,fieldset[disabled] 
.btn-soundcloud.focus{background-color:#f50;border-color:rgba(0,0,0,0.2)} 117 | .btn-soundcloud .badge{color:#f50;background-color:#fff} 118 | .btn-tumblr{color:#fff;background-color:#2c4762;border-color:rgba(0,0,0,0.2)}.btn-tumblr:focus,.btn-tumblr.focus{color:#fff;background-color:#1c2d3f;border-color:rgba(0,0,0,0.2)} 119 | .btn-tumblr:hover{color:#fff;background-color:#1c2d3f;border-color:rgba(0,0,0,0.2)} 120 | .btn-tumblr:active,.btn-tumblr.active,.open>.dropdown-toggle.btn-tumblr{color:#fff;background-color:#1c2d3f;border-color:rgba(0,0,0,0.2)}.btn-tumblr:active:hover,.btn-tumblr.active:hover,.open>.dropdown-toggle.btn-tumblr:hover,.btn-tumblr:active:focus,.btn-tumblr.active:focus,.open>.dropdown-toggle.btn-tumblr:focus,.btn-tumblr:active.focus,.btn-tumblr.active.focus,.open>.dropdown-toggle.btn-tumblr.focus{color:#fff;background-color:#111c26;border-color:rgba(0,0,0,0.2)} 121 | .btn-tumblr:active,.btn-tumblr.active,.open>.dropdown-toggle.btn-tumblr{background-image:none} 122 | .btn-tumblr.disabled:hover,.btn-tumblr[disabled]:hover,fieldset[disabled] .btn-tumblr:hover,.btn-tumblr.disabled:focus,.btn-tumblr[disabled]:focus,fieldset[disabled] .btn-tumblr:focus,.btn-tumblr.disabled.focus,.btn-tumblr[disabled].focus,fieldset[disabled] .btn-tumblr.focus{background-color:#2c4762;border-color:rgba(0,0,0,0.2)} 123 | .btn-tumblr .badge{color:#2c4762;background-color:#fff} 124 | .btn-twitter{color:#fff;background-color:#55acee;border-color:rgba(0,0,0,0.2)}.btn-twitter:focus,.btn-twitter.focus{color:#fff;background-color:#2795e9;border-color:rgba(0,0,0,0.2)} 125 | .btn-twitter:hover{color:#fff;background-color:#2795e9;border-color:rgba(0,0,0,0.2)} 126 | .btn-twitter:active,.btn-twitter.active,.open>.dropdown-toggle.btn-twitter{color:#fff;background-color:#2795e9;border-color:rgba(0,0,0,0.2)}.btn-twitter:active:hover,.btn-twitter.active:hover,.open>.dropdown-toggle.btn-twitter:hover,.btn-twitter:active:focus,.btn-twitter.active:focus,.open>.dropdown-toggle.btn-twitter:focus,.btn-twitter:active.focus,.btn-twitter.active.focus,.open>.dropdown-toggle.btn-twitter.focus{color:#fff;background-color:#1583d7;border-color:rgba(0,0,0,0.2)} 127 | .btn-twitter:active,.btn-twitter.active,.open>.dropdown-toggle.btn-twitter{background-image:none} 128 | .btn-twitter.disabled:hover,.btn-twitter[disabled]:hover,fieldset[disabled] .btn-twitter:hover,.btn-twitter.disabled:focus,.btn-twitter[disabled]:focus,fieldset[disabled] .btn-twitter:focus,.btn-twitter.disabled.focus,.btn-twitter[disabled].focus,fieldset[disabled] .btn-twitter.focus{background-color:#55acee;border-color:rgba(0,0,0,0.2)} 129 | .btn-twitter .badge{color:#55acee;background-color:#fff} 130 | .btn-vimeo{color:#fff;background-color:#1ab7ea;border-color:rgba(0,0,0,0.2)}.btn-vimeo:focus,.btn-vimeo.focus{color:#fff;background-color:#1295bf;border-color:rgba(0,0,0,0.2)} 131 | .btn-vimeo:hover{color:#fff;background-color:#1295bf;border-color:rgba(0,0,0,0.2)} 132 | .btn-vimeo:active,.btn-vimeo.active,.open>.dropdown-toggle.btn-vimeo{color:#fff;background-color:#1295bf;border-color:rgba(0,0,0,0.2)}.btn-vimeo:active:hover,.btn-vimeo.active:hover,.open>.dropdown-toggle.btn-vimeo:hover,.btn-vimeo:active:focus,.btn-vimeo.active:focus,.open>.dropdown-toggle.btn-vimeo:focus,.btn-vimeo:active.focus,.btn-vimeo.active.focus,.open>.dropdown-toggle.btn-vimeo.focus{color:#fff;background-color:#0f7b9f;border-color:rgba(0,0,0,0.2)} 133 | .btn-vimeo:active,.btn-vimeo.active,.open>.dropdown-toggle.btn-vimeo{background-image:none} 134 | 
.btn-vimeo.disabled:hover,.btn-vimeo[disabled]:hover,fieldset[disabled] .btn-vimeo:hover,.btn-vimeo.disabled:focus,.btn-vimeo[disabled]:focus,fieldset[disabled] .btn-vimeo:focus,.btn-vimeo.disabled.focus,.btn-vimeo[disabled].focus,fieldset[disabled] .btn-vimeo.focus{background-color:#1ab7ea;border-color:rgba(0,0,0,0.2)} 135 | .btn-vimeo .badge{color:#1ab7ea;background-color:#fff} 136 | .btn-vk{color:#fff;background-color:#587ea3;border-color:rgba(0,0,0,0.2)}.btn-vk:focus,.btn-vk.focus{color:#fff;background-color:#466482;border-color:rgba(0,0,0,0.2)} 137 | .btn-vk:hover{color:#fff;background-color:#466482;border-color:rgba(0,0,0,0.2)} 138 | .btn-vk:active,.btn-vk.active,.open>.dropdown-toggle.btn-vk{color:#fff;background-color:#466482;border-color:rgba(0,0,0,0.2)}.btn-vk:active:hover,.btn-vk.active:hover,.open>.dropdown-toggle.btn-vk:hover,.btn-vk:active:focus,.btn-vk.active:focus,.open>.dropdown-toggle.btn-vk:focus,.btn-vk:active.focus,.btn-vk.active.focus,.open>.dropdown-toggle.btn-vk.focus{color:#fff;background-color:#3a526b;border-color:rgba(0,0,0,0.2)} 139 | .btn-vk:active,.btn-vk.active,.open>.dropdown-toggle.btn-vk{background-image:none} 140 | .btn-vk.disabled:hover,.btn-vk[disabled]:hover,fieldset[disabled] .btn-vk:hover,.btn-vk.disabled:focus,.btn-vk[disabled]:focus,fieldset[disabled] .btn-vk:focus,.btn-vk.disabled.focus,.btn-vk[disabled].focus,fieldset[disabled] .btn-vk.focus{background-color:#587ea3;border-color:rgba(0,0,0,0.2)} 141 | .btn-vk .badge{color:#587ea3;background-color:#fff} 142 | .btn-yahoo{color:#fff;background-color:#720e9e;border-color:rgba(0,0,0,0.2)}.btn-yahoo:focus,.btn-yahoo.focus{color:#fff;background-color:#500a6f;border-color:rgba(0,0,0,0.2)} 143 | .btn-yahoo:hover{color:#fff;background-color:#500a6f;border-color:rgba(0,0,0,0.2)} 144 | .btn-yahoo:active,.btn-yahoo.active,.open>.dropdown-toggle.btn-yahoo{color:#fff;background-color:#500a6f;border-color:rgba(0,0,0,0.2)}.btn-yahoo:active:hover,.btn-yahoo.active:hover,.open>.dropdown-toggle.btn-yahoo:hover,.btn-yahoo:active:focus,.btn-yahoo.active:focus,.open>.dropdown-toggle.btn-yahoo:focus,.btn-yahoo:active.focus,.btn-yahoo.active.focus,.open>.dropdown-toggle.btn-yahoo.focus{color:#fff;background-color:#39074e;border-color:rgba(0,0,0,0.2)} 145 | .btn-yahoo:active,.btn-yahoo.active,.open>.dropdown-toggle.btn-yahoo{background-image:none} 146 | .btn-yahoo.disabled:hover,.btn-yahoo[disabled]:hover,fieldset[disabled] .btn-yahoo:hover,.btn-yahoo.disabled:focus,.btn-yahoo[disabled]:focus,fieldset[disabled] .btn-yahoo:focus,.btn-yahoo.disabled.focus,.btn-yahoo[disabled].focus,fieldset[disabled] .btn-yahoo.focus{background-color:#720e9e;border-color:rgba(0,0,0,0.2)} 147 | .btn-yahoo .badge{color:#720e9e;background-color:#fff} -------------------------------------------------------------------------------- /docs/css/home_style.css: -------------------------------------------------------------------------------- 1 | body { 2 | /*font-family: 'Cousine', ;*/ 3 | /* font-family: 'Cutive Mono', ;*/ 4 | font-family: 'Open Sans', sans-serif; 5 | font-size: 20px; 6 | } 7 | 8 | a:link { 9 | color: #2980b9; 10 | text-decoration:none; 11 | } 12 | 13 | a:visited { 14 | color: #2980b9; 15 | text-decoration:none; 16 | } 17 | 18 | a:hover { 19 | color: #2980b9; 20 | text-decoration:underline; 21 | } -------------------------------------------------------------------------------- /docs/css/ie10-viewport-bug-workaround.css: -------------------------------------------------------------------------------- 1 | /*! 
2 | * IE10 viewport hack for Surface/desktop Windows 8 bug 3 | * Copyright 2014-2015 Twitter, Inc. 4 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) 5 | */ 6 | 7 | /* 8 | * See the Getting Started docs for more information: 9 | * http://getbootstrap.com/getting-started/#support-ie10-width 10 | */ 11 | @-webkit-viewport { width: device-width; } 12 | @-moz-viewport { width: device-width; } 13 | @-ms-viewport { width: device-width; } 14 | @-o-viewport { width: device-width; } 15 | @viewport { width: device-width; } 16 | -------------------------------------------------------------------------------- /docs/css/one-page-wonder.css: -------------------------------------------------------------------------------- 1 | /* 2 | * Start Bootstrap - One Page Wonder (http://startbootstrap.com/) 3 | * Copyright 2013-2016 Start Bootstrap 4 | * Licensed under MIT (https://github.com/BlackrockDigital/startbootstrap/blob/gh-pages/LICENSE) 5 | */ 6 | 7 | body { 8 | margin-top: 50px; /* Required padding for .navbar-fixed-top. Remove if using .navbar-static-top. Change if height of navigation changes. */ 9 | font-family: 'Open Sans', sans-serif; 10 | font-size: 20px; 11 | } 12 | 13 | .header-image { 14 | display: block; 15 | width: 100%; 16 | text-align: center; 17 | background: url('http://placehold.it/1900x500') no-repeat center center scroll; 18 | -webkit-background-size: cover; 19 | -moz-background-size: cover; 20 | background-size: cover; 21 | -o-background-size: cover; 22 | } 23 | 24 | .headline { 25 | padding: 120px 0; 26 | } 27 | 28 | .headline h1 { 29 | font-size: 90px; 30 | background: #fff; 31 | background: rgba(255,255,255,0.9); 32 | } 33 | 34 | .headline h2 { 35 | font-size: 60px; 36 | background: #fff; 37 | background: rgba(255,255,255,0.9); 38 | } 39 | 40 | .featurette-divider { 41 | margin: 80px 0; 42 | } 43 | 44 | .featurette { 45 | overflow: hidden; 46 | } 47 | 48 | .featurette-image.pull-left { 49 | margin-right: 40px; 50 | } 51 | 52 | .featurette-image.pull-right { 53 | margin-left: 40px; 54 | } 55 | 56 | .featurette-heading { 57 | font-size: 50px; 58 | } 59 | 60 | footer { 61 | margin: 50px 0; 62 | } 63 | 64 | @media(max-width:1200px) { 65 | .headline h1 { 66 | font-size: 140px; 67 | } 68 | 69 | .headline h2 { 70 | font-size: 63px; 71 | } 72 | 73 | .featurette-divider { 74 | margin: 50px 0; 75 | } 76 | 77 | .featurette-image.pull-left { 78 | margin-right: 20px; 79 | } 80 | 81 | .featurette-image.pull-right { 82 | margin-left: 20px; 83 | } 84 | 85 | .featurette-heading { 86 | font-size: 35px; 87 | } 88 | } 89 | 90 | @media(max-width:991px) { 91 | .headline h1 { 92 | font-size: 105px; 93 | } 94 | 95 | .headline h2 { 96 | font-size: 50px; 97 | } 98 | 99 | .featurette-divider { 100 | margin: 40px 0; 101 | } 102 | 103 | .featurette-image { 104 | max-width: 50%; 105 | } 106 | 107 | .featurette-image.pull-left { 108 | margin-right: 10px; 109 | } 110 | 111 | .featurette-image.pull-right { 112 | margin-left: 10px; 113 | } 114 | 115 | .featurette-heading { 116 | font-size: 30px; 117 | } 118 | } 119 | 120 | @media(max-width:768px) { 121 | .container { 122 | margin: 0 15px; 123 | } 124 | 125 | .featurette-divider { 126 | margin: 40px 0; 127 | } 128 | 129 | .featurette-heading { 130 | font-size: 25px; 131 | } 132 | } 133 | 134 | @media(max-width:668px) { 135 | .headline h1 { 136 | font-size: 70px; 137 | } 138 | 139 | .headline h2 { 140 | font-size: 32px; 141 | } 142 | 143 | .featurette-divider { 144 | margin: 30px 0; 145 | } 146 | } 147 | 148 | @media(max-width:640px) { 
149 | .headline { 150 | padding: 75px 0 25px 0; 151 | } 152 | 153 | .headline h1 { 154 | font-size: 60px; 155 | } 156 | 157 | .headline h2 { 158 | font-size: 30px; 159 | } 160 | } 161 | 162 | @media(max-width:375px) { 163 | .featurette-divider { 164 | margin: 10px 0; 165 | } 166 | 167 | .featurette-image { 168 | max-width: 100%; 169 | } 170 | 171 | .featurette-image.pull-left { 172 | margin-right: 0; 173 | margin-bottom: 10px; 174 | } 175 | 176 | .featurette-image.pull-right { 177 | margin-bottom: 10px; 178 | margin-left: 0; 179 | } 180 | } -------------------------------------------------------------------------------- /docs/css/project_style.css: -------------------------------------------------------------------------------- 1 | /*.navbar { 2 | background-color: #002147; 3 | } 4 | .active { 5 | background-color: #3277ae; 6 | }*/ 7 | 8 | .header-image { 9 | display: block; 10 | width: 100%; 11 | text-align: center; 12 | background: url('wheatfield.jpg') no-repeat center center scroll; 13 | opacity: 0.8; 14 | -webkit-background-size: cover; 15 | -moz-background-size: cover; 16 | background-size: cover; 17 | -o-background-size: cover; 18 | } 19 | 20 | .headline { 21 | padding: 120px 0; 22 | } 23 | 24 | .headline h1 { 25 | font-size: 80px; 26 | background: #fff; 27 | background: rgba(255,255,255,0); 28 | color: #fff; 29 | } 30 | 31 | .headline h2 { 32 | font-size: 30px; 33 | background: #fff; 34 | background: rgba(255,255,255,0); 35 | color: #fff; 36 | } 37 | 38 | .text-muted { 39 | font-size: 24px; 40 | } 41 | 42 | .featurette-divider { 43 | margin: 40px 0; 44 | } 45 | 46 | 47 | .lead { 48 | font-size: 19px; 49 | } 50 | 51 | .featurette { 52 | overflow: hidden; 53 | } 54 | 55 | .featurette-image.pull-left { 56 | margin-right: 40px; 57 | } 58 | 59 | .featurette-image.pull-right { 60 | margin-left: 40px; 61 | } 62 | 63 | .featurette-heading { 64 | font-size: 32px; 65 | } 66 | 67 | @media(max-width:1200px) { 68 | .headline h1 { 69 | font-size: 140px; 70 | } 71 | 72 | .headline h2 { 73 | font-size: 63px; 74 | } 75 | 76 | .featurette-divider { 77 | margin: 50px 0; 78 | } 79 | 80 | .featurette-image.pull-left { 81 | margin-right: 20px; 82 | } 83 | 84 | .featurette-image.pull-right { 85 | margin-left: 20px; 86 | } 87 | 88 | .featurette-heading { 89 | font-size: 35px; 90 | } 91 | } 92 | 93 | @media(max-width:991px) { 94 | .headline h1 { 95 | font-size: 105px; 96 | } 97 | 98 | .headline h2 { 99 | font-size: 50px; 100 | } 101 | 102 | .featurette-divider { 103 | margin: 40px 0; 104 | } 105 | 106 | .featurette-image { 107 | max-width: 50%; 108 | } 109 | 110 | .featurette-image.pull-left { 111 | margin-right: 10px; 112 | } 113 | 114 | .featurette-image.pull-right { 115 | margin-left: 10px; 116 | } 117 | 118 | .featurette-heading { 119 | font-size: 30px; 120 | } 121 | } 122 | 123 | @media(max-width:768px) { 124 | .container { 125 | margin: 0 15px; 126 | } 127 | 128 | .featurette-divider { 129 | margin: 40px 0; 130 | } 131 | 132 | .featurette-heading { 133 | font-size: 25px; 134 | } 135 | } 136 | 137 | @media(max-width:668px) { 138 | .headline h1 { 139 | font-size: 70px; 140 | } 141 | 142 | .headline h2 { 143 | font-size: 32px; 144 | } 145 | 146 | .featurette-divider { 147 | margin: 30px 0; 148 | } 149 | } 150 | 151 | @media(max-width:640px) { 152 | .headline { 153 | padding: 75px 0 25px 0; 154 | } 155 | 156 | .headline h1 { 157 | font-size: 60px; 158 | } 159 | 160 | .headline h2 { 161 | font-size: 30px; 162 | } 163 | } 164 | 165 | @media(max-width:375px) { 166 | .featurette-divider { 167 | margin: 10px 
0; 168 | } 169 | 170 | .featurette-image { 171 | max-width: 100%; 172 | } 173 | 174 | .featurette-image.pull-left { 175 | margin-right: 0; 176 | margin-bottom: 10px; 177 | } 178 | 179 | .featurette-image.pull-right { 180 | margin-bottom: 10px; 181 | margin-left: 0; 182 | } 183 | } 184 | 185 | /****************************************************/ 186 | 187 | h1 { 188 | font-family: 'Open Sans', sans-serif; 189 | font-size: 36px; 190 | } 191 | 192 | h2 { 193 | font-family: 'Open Sans', sans-serif; 194 | font-size: 28px; 195 | } 196 | 197 | p { 198 | font-family: 'Open Sans', sans-serif; 199 | } 200 | 201 | p.abstract { 202 | font-size: 20px; 203 | text-align: justify; 204 | } 205 | 206 | a:hover { 207 | color: #2980b9; 208 | text-decoration:underline; 209 | } 210 | 211 | p.university{ 212 | font-size: 22px; 213 | /*font-weight: bold;*/ 214 | } 215 | 216 | p.contacts{ 217 | font-size: 16px; 218 | } 219 | 220 | .pointers { 221 | font-family: 'Open Sans', sans-serif; 222 | font-size: 24px; 223 | /*font-weight: bold;*/ 224 | } 225 | 226 | .pointers_sub { 227 | font-family: 'Open Sans', sans-serif; 228 | font-size: 16px; 229 | } 230 | 231 | .row { 232 | text-align: center; 233 | } 234 | 235 | 236 | /* Styling of navbar*/ 237 | .navbar { 238 | background-color: #53526C; 239 | border-color: #53526C; 240 | font-family: 'Open Sans', sans-serif; 241 | font-size: 20px; 242 | } 243 | 244 | .navbar-brand { 245 | font-size: 28px; 246 | } 247 | .navbar .navbar-brand { 248 | color: #ecf0f1; 249 | } 250 | .navbar .navbar-brand:hover, .navbar .navbar-brand:focus { 251 | color: #a1c0d4; 252 | } 253 | .navbar .navbar-text { 254 | color: #ecf0f1; 255 | } 256 | .navbar .navbar-nav > li > a { 257 | color: #ecf0f1; 258 | } 259 | .navbar .navbar-nav > li > a:hover, .navbar .navbar-nav > li > a:focus { 260 | color: #a1c0d4; 261 | } 262 | .navbar .navbar-nav > .active > a, .navbar .navbar-nav > .active > a:hover, .navbar .navbar-nav > .active > a:focus { 263 | color: #a1c0d4; 264 | background-color: #002147; 265 | } 266 | .navbar .navbar-nav > .open > a, .navbar .navbar-nav > .open > a:hover, .navbar .navbar-nav > .open > a:focus { 267 | color: #a1c0d4; 268 | background-color: #3277ae; 269 | } 270 | .navbar .navbar-toggle { 271 | border-color: #3277ae; 272 | } 273 | .navbar .navbar-toggle:hover, .navbar .navbar-toggle:focus { 274 | background-color: #3277ae; 275 | } 276 | .navbar .navbar-toggle .icon-bar { 277 | background-color: #ecf0f1; 278 | } 279 | .navbar .navbar-collapse, 280 | .navbar .navbar-form { 281 | border-color: #ecf0f1; 282 | } 283 | .navbar .navbar-link { 284 | color: #ecf0f1; 285 | } 286 | .navbar .navbar-link:hover { 287 | color: #a1c0d4; 288 | } 289 | 290 | @media (max-width: 767px) { 291 | .navbar .navbar-nav .open .dropdown-menu > li > a { 292 | color: #ecf0f1; 293 | } 294 | .navbar .navbar-nav .open .dropdown-menu > li > a:hover, .navbar .navbar-nav .open .dropdown-menu > li > a:focus { 295 | color: #a1c0d4; 296 | } 297 | .navbar .navbar-nav .open .dropdown-menu > .active > a, .navbar .navbar-nav .open .dropdown-menu > .active > a:hover, .navbar .navbar-nav .open .dropdown-menu > .active > a:focus { 298 | color: #a1c0d4; 299 | background-color: #3277ae; 300 | } 301 | } 302 | -------------------------------------------------------------------------------- /docs/css/starter-template.css: -------------------------------------------------------------------------------- 1 | body { 2 | padding-top: 50px; 3 | } 4 | .starter-template { 5 | padding: 40px 15px; 6 | text-align: center; 7 | } 8 | 
-------------------------------------------------------------------------------- /docs/css/video_grid.css: -------------------------------------------------------------------------------- 1 | 2 | body { 3 | width: 80%; 4 | margin: 30px auto; 5 | font-family: sans-serif; 6 | } 7 | 8 | h1{ 9 | text-align: center; 10 | } 11 | 12 | h4{ 13 | text-align: center; 14 | } 15 | 16 | div { 17 | display: flex; 18 | flex-wrap: wrap; 19 | } 20 | 21 | .youtube { 22 | display: inline-block; 23 | margin-bottom: 8px; 24 | width: calc(50% - 4px); 25 | margin-right: 0px; 26 | text-decoration: none; 27 | color: black; 28 | } 29 | 30 | @media screen and (min-width: 50em) { 31 | .youtube { 32 | width: calc(25% - 6px); 33 | } 34 | } 35 | 36 | .videoWrapper { 37 | position: relative; 38 | padding-bottom: 54%; 39 | padding-top: 20%; 40 | height: 0; 41 | } 42 | 43 | .videoWrapper iframe { 44 | position: absolute; 45 | top: 0; 46 | left: 0; 47 | width: 100%; 48 | height: 100%; 49 | border: none; 50 | } 51 | 52 | .p a { 53 | display: inline; 54 | font-size: 13px; 55 | margin: 0; 56 | text-decoration: underline; 57 | color: blue; 58 | } 59 | 60 | .p { 61 | text-align: center; 62 | font-size: 13px; 63 | padding-top: 100px; 64 | } 65 | -------------------------------------------------------------------------------- /docs/css/video_grid_boh.css: -------------------------------------------------------------------------------- 1 | body { 2 | width: 80%; 3 | margin: 30px auto; 4 | font-family: sans-serif; 5 | } 6 | 7 | h1 { 8 | text-align: center;} 9 | 10 | a { 11 | display: inline-block; 12 | margin-bottom: 8px; 13 | width: calc(50% - 4px); 14 | margin-right: 8px; 15 | text-decoration: none; 16 | color: black; 17 | } 18 | 19 | a:nth-of-type(2n) { 20 | margin-right: 0; 21 | } 22 | 23 | @media screen and (min-width: 50em) { 24 | a { 25 | width: calc(25% - 6px); 26 | } 27 | 28 | a:nth-of-type(2n) { 29 | margin-right: 8px; 30 | } 31 | 32 | a:nth-of-type(4n) { 33 | margin-right: 0; 34 | } 35 | } 36 | 37 | .videoWrapper { 38 | position: relative; 39 | padding-bottom: 66.66%; 40 | padding-top: 20px; 41 | height: 0; 42 | display: flex; 43 | flex-wrap: wrap; 44 | } 45 | 46 | .videoWrapper iframe { 47 | position: absolute; 48 | top: 0; 49 | left: 0; 50 | width: 100%; 51 | height: 100%; 52 | border: none; 53 | } 54 | 55 | .p a { 56 | display: inline; 57 | font-size: 13px; 58 | margin: 0; 59 | text-decoration: underline; 60 | color: blue; 61 | } 62 | 63 | .p { 64 | text-align: center; 65 | font-size: 13px; 66 | padding-top: 100px; 67 | } 68 | -------------------------------------------------------------------------------- /docs/evaluation.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | Evaluation 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 41 | 42 |

Evaluation server

We are setting up a server where you will be able to submit your raw results on the test set.

There will also be a leaderboard updated on a rolling basis.

Work in progress...
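In the meantime, predictions can already be written in the per-track CSV form that the toolkit in this repository reads and writes. Below is a minimal sketch using only `oxuva` functions that also appear in `examples/opencv/track.py` later in this repository; `my_tracker_predict` is a hypothetical stand-in for your own tracker, the output layout mirrors that example, and the exact packaging expected by the server may differ:

```python
import os
import oxuva

out_dir = 'predictions/test/my_tracker'  # hypothetical tracker name; layout mirrors examples/opencv/track.py
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

# Load the test tasks and write one CSV of predictions per track.
with open('dataset/tasks/test.csv', 'r') as fp:
    tasks = oxuva.load_dataset_tasks_csv(fp)

for (vid, obj), task in tasks.items():
    preds = oxuva.SparseTimeSeries()  # maps frame number -> prediction
    for t in range(task.init_time + 1, task.last_time + 1):
        preds[t] = my_tracker_predict(vid, t)  # hypothetical: your tracker's prediction for frame t
    with open(os.path.join(out_dir, '{}_{}.csv'.format(vid, obj)), 'w') as fp:
        oxuva.dump_predictions_csv(vid, obj, preds, fp)
```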
--------------------------------------------------------------------------------
/docs/img/oxford_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/oxuva/long-term-tracking-benchmark/fd49bed27af85bb78120598ce65397470a387520/docs/img/oxford_logo.png
--------------------------------------------------------------------------------
/docs/img/oxuva_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/oxuva/long-term-tracking-benchmark/fd49bed27af85bb78120598ce65397470a387520/docs/img/oxuva_logo.png
--------------------------------------------------------------------------------
/docs/img/teaser.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/oxuva/long-term-tracking-benchmark/fd49bed27af85bb78120598ce65397470a387520/docs/img/teaser.jpg
--------------------------------------------------------------------------------
/docs/index.html:
--------------------------------------------------------------------------------
Long-term Tracking

Long-term Tracking in the Wild: A Benchmark

Jack Valmadre *, Luca Bertinetto *, João F. Henriques, Ran Tao,
Andrea Vedaldi, Arnold Smeulders, Philip Torr, Efstratios Gavves *

* means equal contribution

To appear at ECCV'18

[pipeline picture]

We introduce a new video dataset and benchmark to assess single-object tracking algorithms. Benchmarks have enabled great strides in the field of object tracking by defining standardized evaluations on large sets of diverse videos. However, these works have focused exclusively on sequences that are only a few tens of seconds long, and in which the target object is always present. Consequently, most researchers have designed methods tailored to this "short-term" scenario, which is poorly representative of practitioners' needs. Aiming to address this disparity, we compile a long-term, large-scale tracking dataset of sequences with average length greater than two minutes and with frequent target object disappearance. This dataset is the largest ever for single object tracking: it comprises 366 sequences for a total of 14 hours of video, 26 times more than the popular OTB-100. We assess the performance of several algorithms, considering both the ability to locate the target and to determine whether it is present or absent. Our goal is to offer the community a large and diverse benchmark to enable the design and evaluation of tracking methods ready to be used "in the wild".

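The two abilities mentioned in the abstract, locating the target and deciding whether it is present, are easy to picture in code. Below is a minimal sketch of how a single frame might be judged on both axes; the relative `xmin`/`xmax`/`ymin`/`ymax` rectangle convention matches the dataset CSV files, while the 0.5 overlap threshold and the TP/FP/TN/FN labels are illustrative only (see `oxuva/assess.py` for the benchmark's actual criteria):

```python
def iou(a, b):
    """Intersection-over-union of two rects given as dicts with keys xmin, xmax, ymin, ymax."""
    iw = max(0.0, min(a['xmax'], b['xmax']) - max(a['xmin'], b['xmin']))
    ih = max(0.0, min(a['ymax'], b['ymax']) - max(a['ymin'], b['ymin']))
    inter = iw * ih
    area_a = (a['xmax'] - a['xmin']) * (a['ymax'] - a['ymin'])
    area_b = (b['xmax'] - b['xmin']) * (b['ymax'] - b['ymin'])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def judge_frame(gt_present, gt_rect, pred_present, pred_rect, min_iou=0.5):
    """Classify one frame: a true positive requires both predicting "present"
    and overlapping the ground truth sufficiently; when the object is absent,
    any "present" prediction counts against the tracker."""
    if gt_present:
        if pred_present and iou(gt_rect, pred_rect) >= min_iou:
            return 'TP'
        return 'FN'
    return 'TN' if not pred_present else 'FP'
```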
99 |   100 |   101 | 102 | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 | -------------------------------------------------------------------------------- /docs/js/bootstrap.min.js: -------------------------------------------------------------------------------- 1 | /*! 2 | * Bootstrap v3.3.6 (http://getbootstrap.com) 3 | * Copyright 2011-2015 Twitter, Inc. 4 | * Licensed under the MIT license 5 | */ 6 | if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){"use strict";var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1||b[0]>2)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher, but lower than version 3")}(jQuery),+function(a){"use strict";function b(){var a=document.createElement("bootstrap"),b={WebkitTransition:"webkitTransitionEnd",MozTransition:"transitionend",OTransition:"oTransitionEnd otransitionend",transition:"transitionend"};for(var c in b)if(void 0!==a.style[c])return{end:b[c]};return!1}a.fn.emulateTransitionEnd=function(b){var c=!1,d=this;a(this).one("bsTransitionEnd",function(){c=!0});var e=function(){c||a(d).trigger(a.support.transition.end)};return setTimeout(e,b),this},a(function(){a.support.transition=b(),a.support.transition&&(a.event.special.bsTransitionEnd={bindType:a.support.transition.end,delegateType:a.support.transition.end,handle:function(b){return a(b.target).is(this)?b.handleObj.handler.apply(this,arguments):void 0}})})}(jQuery),+function(a){"use strict";function b(b){return this.each(function(){var c=a(this),e=c.data("bs.alert");e||c.data("bs.alert",e=new d(this)),"string"==typeof b&&e[b].call(c)})}var c='[data-dismiss="alert"]',d=function(b){a(b).on("click",c,this.close)};d.VERSION="3.3.6",d.TRANSITION_DURATION=150,d.prototype.close=function(b){function c(){g.detach().trigger("closed.bs.alert").remove()}var e=a(this),f=e.attr("data-target");f||(f=e.attr("href"),f=f&&f.replace(/.*(?=#[^\s]*$)/,""));var g=a(f);b&&b.preventDefault(),g.length||(g=e.closest(".alert")),g.trigger(b=a.Event("close.bs.alert")),b.isDefaultPrevented()||(g.removeClass("in"),a.support.transition&&g.hasClass("fade")?g.one("bsTransitionEnd",c).emulateTransitionEnd(d.TRANSITION_DURATION):c())};var e=a.fn.alert;a.fn.alert=b,a.fn.alert.Constructor=d,a.fn.alert.noConflict=function(){return a.fn.alert=e,this},a(document).on("click.bs.alert.data-api",c,d.prototype.close)}(jQuery),+function(a){"use strict";function b(b){return this.each(function(){var d=a(this),e=d.data("bs.button"),f="object"==typeof b&&b;e||d.data("bs.button",e=new c(this,f)),"toggle"==b?e.toggle():b&&e.setState(b)})}var c=function(b,d){this.$element=a(b),this.options=a.extend({},c.DEFAULTS,d),this.isLoading=!1};c.VERSION="3.3.6",c.DEFAULTS={loadingText:"loading..."},c.prototype.setState=function(b){var c="disabled",d=this.$element,e=d.is("input")?"val":"html",f=d.data();b+="Text",null==f.resetText&&d.data("resetText",d[e]()),setTimeout(a.proxy(function(){d[e](null==f[b]?this.options[b]:f[b]),"loadingText"==b?(this.isLoading=!0,d.addClass(c).attr(c,c)):this.isLoading&&(this.isLoading=!1,d.removeClass(c).removeAttr(c))},this),0)},c.prototype.toggle=function(){var a=!0,b=this.$element.closest('[data-toggle="buttons"]');if(b.length){var 
c=this.$element.find("input");"radio"==c.prop("type")?(c.prop("checked")&&(a=!1),b.find(".active").removeClass("active"),this.$element.addClass("active")):"checkbox"==c.prop("type")&&(c.prop("checked")!==this.$element.hasClass("active")&&(a=!1),this.$element.toggleClass("active")),c.prop("checked",this.$element.hasClass("active")),a&&c.trigger("change")}else this.$element.attr("aria-pressed",!this.$element.hasClass("active")),this.$element.toggleClass("active")};var d=a.fn.button;a.fn.button=b,a.fn.button.Constructor=c,a.fn.button.noConflict=function(){return a.fn.button=d,this},a(document).on("click.bs.button.data-api",'[data-toggle^="button"]',function(c){var d=a(c.target);d.hasClass("btn")||(d=d.closest(".btn")),b.call(d,"toggle"),a(c.target).is('input[type="radio"]')||a(c.target).is('input[type="checkbox"]')||c.preventDefault()}).on("focus.bs.button.data-api blur.bs.button.data-api",'[data-toggle^="button"]',function(b){a(b.target).closest(".btn").toggleClass("focus",/^focus(in)?$/.test(b.type))})}(jQuery),+function(a){"use strict";function b(b){return this.each(function(){var d=a(this),e=d.data("bs.carousel"),f=a.extend({},c.DEFAULTS,d.data(),"object"==typeof b&&b),g="string"==typeof b?b:f.slide;e||d.data("bs.carousel",e=new c(this,f)),"number"==typeof b?e.to(b):g?e[g]():f.interval&&e.pause().cycle()})}var c=function(b,c){this.$element=a(b),this.$indicators=this.$element.find(".carousel-indicators"),this.options=c,this.paused=null,this.sliding=null,this.interval=null,this.$active=null,this.$items=null,this.options.keyboard&&this.$element.on("keydown.bs.carousel",a.proxy(this.keydown,this)),"hover"==this.options.pause&&!("ontouchstart"in document.documentElement)&&this.$element.on("mouseenter.bs.carousel",a.proxy(this.pause,this)).on("mouseleave.bs.carousel",a.proxy(this.cycle,this))};c.VERSION="3.3.6",c.TRANSITION_DURATION=600,c.DEFAULTS={interval:5e3,pause:"hover",wrap:!0,keyboard:!0},c.prototype.keydown=function(a){if(!/input|textarea/i.test(a.target.tagName)){switch(a.which){case 37:this.prev();break;case 39:this.next();break;default:return}a.preventDefault()}},c.prototype.cycle=function(b){return b||(this.paused=!1),this.interval&&clearInterval(this.interval),this.options.interval&&!this.paused&&(this.interval=setInterval(a.proxy(this.next,this),this.options.interval)),this},c.prototype.getItemIndex=function(a){return this.$items=a.parent().children(".item"),this.$items.index(a||this.$active)},c.prototype.getItemForDirection=function(a,b){var c=this.getItemIndex(b),d="prev"==a&&0===c||"next"==a&&c==this.$items.length-1;if(d&&!this.options.wrap)return b;var e="prev"==a?-1:1,f=(c+e)%this.$items.length;return this.$items.eq(f)},c.prototype.to=function(a){var b=this,c=this.getItemIndex(this.$active=this.$element.find(".item.active"));return a>this.$items.length-1||0>a?void 0:this.sliding?this.$element.one("slid.bs.carousel",function(){b.to(a)}):c==a?this.pause().cycle():this.slide(a>c?"next":"prev",this.$items.eq(a))},c.prototype.pause=function(b){return b||(this.paused=!0),this.$element.find(".next, .prev").length&&a.support.transition&&(this.$element.trigger(a.support.transition.end),this.cycle(!0)),this.interval=clearInterval(this.interval),this},c.prototype.next=function(){return this.sliding?void 0:this.slide("next")},c.prototype.prev=function(){return this.sliding?void 0:this.slide("prev")},c.prototype.slide=function(b,d){var e=this.$element.find(".item.active"),f=d||this.getItemForDirection(b,e),g=this.interval,h="next"==b?"left":"right",i=this;if(f.hasClass("active"))return 
this.sliding=!1;var j=f[0],k=a.Event("slide.bs.carousel",{relatedTarget:j,direction:h});if(this.$element.trigger(k),!k.isDefaultPrevented()){if(this.sliding=!0,g&&this.pause(),this.$indicators.length){this.$indicators.find(".active").removeClass("active");var l=a(this.$indicators.children()[this.getItemIndex(f)]);l&&l.addClass("active")}var m=a.Event("slid.bs.carousel",{relatedTarget:j,direction:h});return a.support.transition&&this.$element.hasClass("slide")?(f.addClass(b),f[0].offsetWidth,e.addClass(h),f.addClass(h),e.one("bsTransitionEnd",function(){f.removeClass([b,h].join(" ")).addClass("active"),e.removeClass(["active",h].join(" ")),i.sliding=!1,setTimeout(function(){i.$element.trigger(m)},0)}).emulateTransitionEnd(c.TRANSITION_DURATION)):(e.removeClass("active"),f.addClass("active"),this.sliding=!1,this.$element.trigger(m)),g&&this.cycle(),this}};var d=a.fn.carousel;a.fn.carousel=b,a.fn.carousel.Constructor=c,a.fn.carousel.noConflict=function(){return a.fn.carousel=d,this};var e=function(c){var d,e=a(this),f=a(e.attr("data-target")||(d=e.attr("href"))&&d.replace(/.*(?=#[^\s]+$)/,""));if(f.hasClass("carousel")){var g=a.extend({},f.data(),e.data()),h=e.attr("data-slide-to");h&&(g.interval=!1),b.call(f,g),h&&f.data("bs.carousel").to(h),c.preventDefault()}};a(document).on("click.bs.carousel.data-api","[data-slide]",e).on("click.bs.carousel.data-api","[data-slide-to]",e),a(window).on("load",function(){a('[data-ride="carousel"]').each(function(){var c=a(this);b.call(c,c.data())})})}(jQuery),+function(a){"use strict";function b(b){var c,d=b.attr("data-target")||(c=b.attr("href"))&&c.replace(/.*(?=#[^\s]+$)/,"");return a(d)}function c(b){return this.each(function(){var c=a(this),e=c.data("bs.collapse"),f=a.extend({},d.DEFAULTS,c.data(),"object"==typeof b&&b);!e&&f.toggle&&/show|hide/.test(b)&&(f.toggle=!1),e||c.data("bs.collapse",e=new d(this,f)),"string"==typeof b&&e[b]()})}var d=function(b,c){this.$element=a(b),this.options=a.extend({},d.DEFAULTS,c),this.$trigger=a('[data-toggle="collapse"][href="#'+b.id+'"],[data-toggle="collapse"][data-target="#'+b.id+'"]'),this.transitioning=null,this.options.parent?this.$parent=this.getParent():this.addAriaAndCollapsedClass(this.$element,this.$trigger),this.options.toggle&&this.toggle()};d.VERSION="3.3.6",d.TRANSITION_DURATION=350,d.DEFAULTS={toggle:!0},d.prototype.dimension=function(){var a=this.$element.hasClass("width");return a?"width":"height"},d.prototype.show=function(){if(!this.transitioning&&!this.$element.hasClass("in")){var b,e=this.$parent&&this.$parent.children(".panel").children(".in, .collapsing");if(!(e&&e.length&&(b=e.data("bs.collapse"),b&&b.transitioning))){var f=a.Event("show.bs.collapse");if(this.$element.trigger(f),!f.isDefaultPrevented()){e&&e.length&&(c.call(e,"hide"),b||e.data("bs.collapse",null));var g=this.dimension();this.$element.removeClass("collapse").addClass("collapsing")[g](0).attr("aria-expanded",!0),this.$trigger.removeClass("collapsed").attr("aria-expanded",!0),this.transitioning=1;var h=function(){this.$element.removeClass("collapsing").addClass("collapse in")[g](""),this.transitioning=0,this.$element.trigger("shown.bs.collapse")};if(!a.support.transition)return h.call(this);var i=a.camelCase(["scroll",g].join("-"));this.$element.one("bsTransitionEnd",a.proxy(h,this)).emulateTransitionEnd(d.TRANSITION_DURATION)[g](this.$element[0][i])}}}},d.prototype.hide=function(){if(!this.transitioning&&this.$element.hasClass("in")){var b=a.Event("hide.bs.collapse");if(this.$element.trigger(b),!b.isDefaultPrevented()){var 
c=this.dimension();this.$element[c](this.$element[c]())[0].offsetHeight,this.$element.addClass("collapsing").removeClass("collapse in").attr("aria-expanded",!1),this.$trigger.addClass("collapsed").attr("aria-expanded",!1),this.transitioning=1;var e=function(){this.transitioning=0,this.$element.removeClass("collapsing").addClass("collapse").trigger("hidden.bs.collapse")};return a.support.transition?void this.$element[c](0).one("bsTransitionEnd",a.proxy(e,this)).emulateTransitionEnd(d.TRANSITION_DURATION):e.call(this)}}},d.prototype.toggle=function(){this[this.$element.hasClass("in")?"hide":"show"]()},d.prototype.getParent=function(){return a(this.options.parent).find('[data-toggle="collapse"][data-parent="'+this.options.parent+'"]').each(a.proxy(function(c,d){var e=a(d);this.addAriaAndCollapsedClass(b(e),e)},this)).end()},d.prototype.addAriaAndCollapsedClass=function(a,b){var c=a.hasClass("in");a.attr("aria-expanded",c),b.toggleClass("collapsed",!c).attr("aria-expanded",c)};var e=a.fn.collapse;a.fn.collapse=c,a.fn.collapse.Constructor=d,a.fn.collapse.noConflict=function(){return a.fn.collapse=e,this},a(document).on("click.bs.collapse.data-api",'[data-toggle="collapse"]',function(d){var e=a(this);e.attr("data-target")||d.preventDefault();var f=b(e),g=f.data("bs.collapse"),h=g?"toggle":e.data();c.call(f,h)})}(jQuery),+function(a){"use strict";function b(b){var c=b.attr("data-target");c||(c=b.attr("href"),c=c&&/#[A-Za-z]/.test(c)&&c.replace(/.*(?=#[^\s]*$)/,""));var d=c&&a(c);return d&&d.length?d:b.parent()}function c(c){c&&3===c.which||(a(e).remove(),a(f).each(function(){var d=a(this),e=b(d),f={relatedTarget:this};e.hasClass("open")&&(c&&"click"==c.type&&/input|textarea/i.test(c.target.tagName)&&a.contains(e[0],c.target)||(e.trigger(c=a.Event("hide.bs.dropdown",f)),c.isDefaultPrevented()||(d.attr("aria-expanded","false"),e.removeClass("open").trigger(a.Event("hidden.bs.dropdown",f)))))}))}function d(b){return this.each(function(){var c=a(this),d=c.data("bs.dropdown");d||c.data("bs.dropdown",d=new g(this)),"string"==typeof b&&d[b].call(c)})}var e=".dropdown-backdrop",f='[data-toggle="dropdown"]',g=function(b){a(b).on("click.bs.dropdown",this.toggle)};g.VERSION="3.3.6",g.prototype.toggle=function(d){var e=a(this);if(!e.is(".disabled, :disabled")){var f=b(e),g=f.hasClass("open");if(c(),!g){"ontouchstart"in document.documentElement&&!f.closest(".navbar-nav").length&&a(document.createElement("div")).addClass("dropdown-backdrop").insertAfter(a(this)).on("click",c);var h={relatedTarget:this};if(f.trigger(d=a.Event("show.bs.dropdown",h)),d.isDefaultPrevented())return;e.trigger("focus").attr("aria-expanded","true"),f.toggleClass("open").trigger(a.Event("shown.bs.dropdown",h))}return!1}},g.prototype.keydown=function(c){if(/(38|40|27|32)/.test(c.which)&&!/input|textarea/i.test(c.target.tagName)){var d=a(this);if(c.preventDefault(),c.stopPropagation(),!d.is(".disabled, :disabled")){var e=b(d),g=e.hasClass("open");if(!g&&27!=c.which||g&&27==c.which)return 27==c.which&&e.find(f).trigger("focus"),d.trigger("click");var h=" li:not(.disabled):visible a",i=e.find(".dropdown-menu"+h);if(i.length){var j=i.index(c.target);38==c.which&&j>0&&j--,40==c.which&&jdocument.documentElement.clientHeight;this.$element.css({paddingLeft:!this.bodyIsOverflowing&&a?this.scrollbarWidth:"",paddingRight:this.bodyIsOverflowing&&!a?this.scrollbarWidth:""})},c.prototype.resetAdjustments=function(){this.$element.css({paddingLeft:"",paddingRight:""})},c.prototype.checkScrollbar=function(){var a=window.innerWidth;if(!a){var 
b=document.documentElement.getBoundingClientRect();a=b.right-Math.abs(b.left)}this.bodyIsOverflowing=document.body.clientWidth
',trigger:"hover focus",title:"",delay:0,html:!1,container:!1,viewport:{selector:"body",padding:0}},c.prototype.init=function(b,c,d){if(this.enabled=!0,this.type=b,this.$element=a(c),this.options=this.getOptions(d),this.$viewport=this.options.viewport&&a(a.isFunction(this.options.viewport)?this.options.viewport.call(this,this.$element):this.options.viewport.selector||this.options.viewport),this.inState={click:!1,hover:!1,focus:!1},this.$element[0]instanceof document.constructor&&!this.options.selector)throw new Error("`selector` option must be specified when initializing "+this.type+" on the window.document object!");for(var e=this.options.trigger.split(" "),f=e.length;f--;){var g=e[f];if("click"==g)this.$element.on("click."+this.type,this.options.selector,a.proxy(this.toggle,this));else if("manual"!=g){var h="hover"==g?"mouseenter":"focusin",i="hover"==g?"mouseleave":"focusout";this.$element.on(h+"."+this.type,this.options.selector,a.proxy(this.enter,this)),this.$element.on(i+"."+this.type,this.options.selector,a.proxy(this.leave,this))}}this.options.selector?this._options=a.extend({},this.options,{trigger:"manual",selector:""}):this.fixTitle()},c.prototype.getDefaults=function(){return c.DEFAULTS},c.prototype.getOptions=function(b){return b=a.extend({},this.getDefaults(),this.$element.data(),b),b.delay&&"number"==typeof b.delay&&(b.delay={show:b.delay,hide:b.delay}),b},c.prototype.getDelegateOptions=function(){var b={},c=this.getDefaults();return this._options&&a.each(this._options,function(a,d){c[a]!=d&&(b[a]=d)}),b},c.prototype.enter=function(b){var c=b instanceof this.constructor?b:a(b.currentTarget).data("bs."+this.type);return c||(c=new this.constructor(b.currentTarget,this.getDelegateOptions()),a(b.currentTarget).data("bs."+this.type,c)),b instanceof a.Event&&(c.inState["focusin"==b.type?"focus":"hover"]=!0),c.tip().hasClass("in")||"in"==c.hoverState?void(c.hoverState="in"):(clearTimeout(c.timeout),c.hoverState="in",c.options.delay&&c.options.delay.show?void(c.timeout=setTimeout(function(){"in"==c.hoverState&&c.show()},c.options.delay.show)):c.show())},c.prototype.isInStateTrue=function(){for(var a in this.inState)if(this.inState[a])return!0;return!1},c.prototype.leave=function(b){var c=b instanceof this.constructor?b:a(b.currentTarget).data("bs."+this.type);return c||(c=new this.constructor(b.currentTarget,this.getDelegateOptions()),a(b.currentTarget).data("bs."+this.type,c)),b instanceof a.Event&&(c.inState["focusout"==b.type?"focus":"hover"]=!1),c.isInStateTrue()?void 0:(clearTimeout(c.timeout),c.hoverState="out",c.options.delay&&c.options.delay.hide?void(c.timeout=setTimeout(function(){"out"==c.hoverState&&c.hide()},c.options.delay.hide)):c.hide())},c.prototype.show=function(){var b=a.Event("show.bs."+this.type);if(this.hasContent()&&this.enabled){this.$element.trigger(b);var d=a.contains(this.$element[0].ownerDocument.documentElement,this.$element[0]);if(b.isDefaultPrevented()||!d)return;var e=this,f=this.tip(),g=this.getUID(this.type);this.setContent(),f.attr("id",g),this.$element.attr("aria-describedby",g),this.options.animation&&f.addClass("fade");var h="function"==typeof this.options.placement?this.options.placement.call(this,f[0],this.$element[0]):this.options.placement,i=/\s?auto?\s?/i,j=i.test(h);j&&(h=h.replace(i,"")||"top"),f.detach().css({top:0,left:0,display:"block"}).addClass(h).data("bs."+this.type,this),this.options.container?f.appendTo(this.options.container):f.insertAfter(this.$element),this.$element.trigger("inserted.bs."+this.type);var 
k=this.getPosition(),l=f[0].offsetWidth,m=f[0].offsetHeight;if(j){var n=h,o=this.getPosition(this.$viewport);h="bottom"==h&&k.bottom+m>o.bottom?"top":"top"==h&&k.top-mo.width?"left":"left"==h&&k.left-lg.top+g.height&&(e.top=g.top+g.height-i)}else{var j=b.left-f,k=b.left+f+c;jg.right&&(e.left=g.left+g.width-k)}return e},c.prototype.getTitle=function(){var a,b=this.$element,c=this.options;return a=b.attr("data-original-title")||("function"==typeof c.title?c.title.call(b[0]):c.title)},c.prototype.getUID=function(a){do a+=~~(1e6*Math.random());while(document.getElementById(a));return a},c.prototype.tip=function(){if(!this.$tip&&(this.$tip=a(this.options.template),1!=this.$tip.length))throw new Error(this.type+" `template` option must consist of exactly 1 top-level element!");return this.$tip},c.prototype.arrow=function(){return this.$arrow=this.$arrow||this.tip().find(".tooltip-arrow")},c.prototype.enable=function(){this.enabled=!0},c.prototype.disable=function(){this.enabled=!1},c.prototype.toggleEnabled=function(){this.enabled=!this.enabled},c.prototype.toggle=function(b){var c=this;b&&(c=a(b.currentTarget).data("bs."+this.type),c||(c=new this.constructor(b.currentTarget,this.getDelegateOptions()),a(b.currentTarget).data("bs."+this.type,c))),b?(c.inState.click=!c.inState.click,c.isInStateTrue()?c.enter(c):c.leave(c)):c.tip().hasClass("in")?c.leave(c):c.enter(c)},c.prototype.destroy=function(){var a=this;clearTimeout(this.timeout),this.hide(function(){a.$element.off("."+a.type).removeData("bs."+a.type),a.$tip&&a.$tip.detach(),a.$tip=null,a.$arrow=null,a.$viewport=null})};var d=a.fn.tooltip;a.fn.tooltip=b,a.fn.tooltip.Constructor=c,a.fn.tooltip.noConflict=function(){return a.fn.tooltip=d,this}}(jQuery),+function(a){"use strict";function b(b){return this.each(function(){var d=a(this),e=d.data("bs.popover"),f="object"==typeof b&&b;(e||!/destroy|hide/.test(b))&&(e||d.data("bs.popover",e=new c(this,f)),"string"==typeof b&&e[b]())})}var c=function(a,b){this.init("popover",a,b)};if(!a.fn.tooltip)throw new Error("Popover requires tooltip.js");c.VERSION="3.3.6",c.DEFAULTS=a.extend({},a.fn.tooltip.Constructor.DEFAULTS,{placement:"right",trigger:"click",content:"",template:''}),c.prototype=a.extend({},a.fn.tooltip.Constructor.prototype),c.prototype.constructor=c,c.prototype.getDefaults=function(){return c.DEFAULTS},c.prototype.setContent=function(){var a=this.tip(),b=this.getTitle(),c=this.getContent();a.find(".popover-title")[this.options.html?"html":"text"](b),a.find(".popover-content").children().detach().end()[this.options.html?"string"==typeof c?"html":"append":"text"](c),a.removeClass("fade top bottom left right in"),a.find(".popover-title").html()||a.find(".popover-title").hide()},c.prototype.hasContent=function(){return this.getTitle()||this.getContent()},c.prototype.getContent=function(){var a=this.$element,b=this.options;return a.attr("data-content")||("function"==typeof b.content?b.content.call(a[0]):b.content)},c.prototype.arrow=function(){return this.$arrow=this.$arrow||this.tip().find(".arrow")};var d=a.fn.popover;a.fn.popover=b,a.fn.popover.Constructor=c,a.fn.popover.noConflict=function(){return a.fn.popover=d,this}}(jQuery),+function(a){"use strict";function b(c,d){this.$body=a(document.body),this.$scrollElement=a(a(c).is(document.body)?window:c),this.options=a.extend({},b.DEFAULTS,d),this.selector=(this.options.target||"")+" .nav li > 
a",this.offsets=[],this.targets=[],this.activeTarget=null,this.scrollHeight=0,this.$scrollElement.on("scroll.bs.scrollspy",a.proxy(this.process,this)),this.refresh(),this.process()}function c(c){return this.each(function(){var d=a(this),e=d.data("bs.scrollspy"),f="object"==typeof c&&c;e||d.data("bs.scrollspy",e=new b(this,f)),"string"==typeof c&&e[c]()})}b.VERSION="3.3.6",b.DEFAULTS={offset:10},b.prototype.getScrollHeight=function(){return this.$scrollElement[0].scrollHeight||Math.max(this.$body[0].scrollHeight,document.documentElement.scrollHeight)},b.prototype.refresh=function(){var b=this,c="offset",d=0;this.offsets=[],this.targets=[],this.scrollHeight=this.getScrollHeight(),a.isWindow(this.$scrollElement[0])||(c="position",d=this.$scrollElement.scrollTop()),this.$body.find(this.selector).map(function(){var b=a(this),e=b.data("target")||b.attr("href"),f=/^#./.test(e)&&a(e);return f&&f.length&&f.is(":visible")&&[[f[c]().top+d,e]]||null}).sort(function(a,b){return a[0]-b[0]}).each(function(){b.offsets.push(this[0]),b.targets.push(this[1])})},b.prototype.process=function(){var a,b=this.$scrollElement.scrollTop()+this.options.offset,c=this.getScrollHeight(),d=this.options.offset+c-this.$scrollElement.height(),e=this.offsets,f=this.targets,g=this.activeTarget;if(this.scrollHeight!=c&&this.refresh(),b>=d)return g!=(a=f[f.length-1])&&this.activate(a);if(g&&b=e[a]&&(void 0===e[a+1]||b .dropdown-menu > .active").removeClass("active").end().find('[data-toggle="tab"]').attr("aria-expanded",!1),b.addClass("active").find('[data-toggle="tab"]').attr("aria-expanded",!0),h?(b[0].offsetWidth,b.addClass("in")):b.removeClass("fade"),b.parent(".dropdown-menu").length&&b.closest("li.dropdown").addClass("active").end().find('[data-toggle="tab"]').attr("aria-expanded",!0),e&&e()}var g=d.find("> .active"),h=e&&a.support.transition&&(g.length&&g.hasClass("fade")||!!d.find("> .fade").length);g.length&&h?g.one("bsTransitionEnd",f).emulateTransitionEnd(c.TRANSITION_DURATION):f(),g.removeClass("in")};var d=a.fn.tab;a.fn.tab=b,a.fn.tab.Constructor=c,a.fn.tab.noConflict=function(){return a.fn.tab=d,this};var e=function(c){c.preventDefault(),b.call(a(this),"show")};a(document).on("click.bs.tab.data-api",'[data-toggle="tab"]',e).on("click.bs.tab.data-api",'[data-toggle="pill"]',e)}(jQuery),+function(a){"use strict";function b(b){return this.each(function(){var d=a(this),e=d.data("bs.affix"),f="object"==typeof b&&b;e||d.data("bs.affix",e=new c(this,f)),"string"==typeof b&&e[b]()})}var c=function(b,d){this.options=a.extend({},c.DEFAULTS,d),this.$target=a(this.options.target).on("scroll.bs.affix.data-api",a.proxy(this.checkPosition,this)).on("click.bs.affix.data-api",a.proxy(this.checkPositionWithEventLoop,this)),this.$element=a(b),this.affixed=null,this.unpin=null,this.pinnedOffset=null,this.checkPosition()};c.VERSION="3.3.6",c.RESET="affix affix-top affix-bottom",c.DEFAULTS={offset:0,target:window},c.prototype.getState=function(a,b,c,d){var e=this.$target.scrollTop(),f=this.$element.offset(),g=this.$target.height();if(null!=c&&"top"==this.affixed)return c>e?"top":!1;if("bottom"==this.affixed)return null!=c?e+this.unpin<=f.top?!1:"bottom":a-d>=e+g?!1:"bottom";var h=null==this.affixed,i=h?e:f.top,j=h?g:b;return null!=c&&c>=e?"top":null!=d&&i+j>=a-d?"bottom":!1},c.prototype.getPinnedOffset=function(){if(this.pinnedOffset)return this.pinnedOffset;this.$element.removeClass(c.RESET).addClass("affix");var a=this.$target.scrollTop(),b=this.$element.offset();return 
this.pinnedOffset=b.top-a},c.prototype.checkPositionWithEventLoop=function(){setTimeout(a.proxy(this.checkPosition,this),1)},c.prototype.checkPosition=function(){if(this.$element.is(":visible")){var b=this.$element.height(),d=this.options.offset,e=d.top,f=d.bottom,g=Math.max(a(document).height(),a(document.body).height());"object"!=typeof d&&(f=e=d),"function"==typeof e&&(e=d.top(this.$element)),"function"==typeof f&&(f=d.bottom(this.$element));var h=this.getState(g,b,e,f);if(this.affixed!=h){null!=this.unpin&&this.$element.css("top","");var i="affix"+(h?"-"+h:""),j=a.Event(i+".bs.affix");if(this.$element.trigger(j),j.isDefaultPrevented())return;this.affixed=h,this.unpin="bottom"==h?this.getPinnedOffset():null,this.$element.removeClass(c.RESET).addClass(i).trigger(i.replace("affix","affixed")+".bs.affix")}"bottom"==h&&this.$element.offset({top:g-b-f})}};var d=a.fn.affix;a.fn.affix=b,a.fn.affix.Constructor=c,a.fn.affix.noConflict=function(){return a.fn.affix=d,this},a(window).on("load",function(){a('[data-spy="affix"]').each(function(){var c=a(this),d=c.data();d.offset=d.offset||{},null!=d.offsetBottom&&(d.offset.bottom=d.offsetBottom),null!=d.offsetTop&&(d.offset.top=d.offsetTop),b.call(c,d)})})}(jQuery); -------------------------------------------------------------------------------- /docs/js/ie-emulation-modes-warning.js: -------------------------------------------------------------------------------- 1 | // NOTICE!! DO NOT USE ANY OF THIS JAVASCRIPT 2 | // IT'S JUST JUNK FOR OUR DOCS! 3 | // ++++++++++++++++++++++++++++++++++++++++++ 4 | /*! 5 | * Copyright 2014-2015 Twitter, Inc. 6 | * 7 | * Licensed under the Creative Commons Attribution 3.0 Unported License. For 8 | * details, see https://creativecommons.org/licenses/by/3.0/. 9 | */ 10 | // Intended to prevent false-positive bug reports about Bootstrap not working properly in old versions of IE due to folks testing using IE's unreliable emulation modes. 11 | (function () { 12 | 'use strict'; 13 | 14 | function emulatedIEMajorVersion() { 15 | var groups = /MSIE ([0-9.]+)/.exec(window.navigator.userAgent) 16 | if (groups === null) { 17 | return null 18 | } 19 | var ieVersionNum = parseInt(groups[1], 10) 20 | var ieMajorVersion = Math.floor(ieVersionNum) 21 | return ieMajorVersion 22 | } 23 | 24 | function actualNonEmulatedIEMajorVersion() { 25 | // Detects the actual version of IE in use, even if it's in an older-IE emulation mode. 
26 | // IE JavaScript conditional compilation docs: https://msdn.microsoft.com/library/121hztk3%28v=vs.94%29.aspx 27 | // @cc_on docs: https://msdn.microsoft.com/library/8ka90k2e%28v=vs.94%29.aspx 28 | var jscriptVersion = new Function('/*@cc_on return @_jscript_version; @*/')() // jshint ignore:line 29 | if (jscriptVersion === undefined) { 30 | return 11 // IE11+ not in emulation mode 31 | } 32 | if (jscriptVersion < 9) { 33 | return 8 // IE8 (or lower; haven't tested on IE<8) 34 | } 35 | return jscriptVersion // IE9 or IE10 in any mode, or IE11 in non-IE11 mode 36 | } 37 | 38 | var ua = window.navigator.userAgent 39 | if (ua.indexOf('Opera') > -1 || ua.indexOf('Presto') > -1) { 40 | return // Opera, which might pretend to be IE 41 | } 42 | var emulated = emulatedIEMajorVersion() 43 | if (emulated === null) { 44 | return // Not IE 45 | } 46 | var nonEmulated = actualNonEmulatedIEMajorVersion() 47 | 48 | if (emulated !== nonEmulated) { 49 | window.alert('WARNING: You appear to be using IE' + nonEmulated + ' in IE' + emulated + ' emulation mode.\nIE emulation modes can behave significantly differently from ACTUAL older versions of IE.\nPLEASE DON\'T FILE BOOTSTRAP BUGS based on testing in IE emulation modes!') 50 | } 51 | })(); 52 | -------------------------------------------------------------------------------- /docs/js/ie10-viewport-bug-workaround.js: -------------------------------------------------------------------------------- 1 | /*! 2 | * IE10 viewport hack for Surface/desktop Windows 8 bug 3 | * Copyright 2014-2015 Twitter, Inc. 4 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) 5 | */ 6 | 7 | // See the Getting Started docs for more information: 8 | // http://getbootstrap.com/getting-started/#support-ie10-width 9 | 10 | (function () { 11 | 'use strict'; 12 | 13 | if (navigator.userAgent.match(/IEMobile\/10\.0/)) { 14 | var msViewportStyle = document.createElement('style') 15 | msViewportStyle.appendChild( 16 | document.createTextNode( 17 | '@-ms-viewport{width:auto!important}' 18 | ) 19 | ) 20 | document.querySelector('head').appendChild(msViewportStyle) 21 | } 22 | 23 | })(); 24 | -------------------------------------------------------------------------------- /docs/video_grid.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | Video samples 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 41 | 42 | 43 |

Example videos

Below are a few short excerpts. To watch the full video, hover on the frame and click on the YouTube icon.

Request to download the dev and test sets (366 videos, 14 hours in total) here.
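The `examples/opencv` files that follow show a complete tracker loop. As a minimal sketch of the same workflow (assuming the dataset layout described earlier; the `predictions/dev/dummy` output directory and the "dummy" tracker name are hypothetical), the core of any submission is: load the tasks, predict a rectangle for every frame after the initial one, and dump one CSV per track:

```python
import os
import oxuva

# Load the task definitions (initial rectangle and frame range per object).
with open('dataset/tasks/dev.csv', 'r') as fp:
    tasks = oxuva.load_dataset_tasks_csv(fp)

out_dir = 'predictions/dev/dummy'  # hypothetical tracker name
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

for (vid, obj), task in tasks.items():
    preds = oxuva.SparseTimeSeries()
    for t in range(task.init_time + 1, task.last_time + 1):
        # A real tracker would update its estimate from the frame image here;
        # this dummy repeats the initial rectangle with full confidence.
        preds[t] = oxuva.make_prediction(present=True, score=1.0, **task.init_rect)
    with open(os.path.join(out_dir, '{}_{}.csv'.format(vid, obj)), 'w') as fp:
        oxuva.dump_predictions_csv(vid, obj, preds, fp)
```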
--------------------------------------------------------------------------------
/examples/opencv/opencv.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | set -x
4 | 
5 | # Change into directory of script.
6 | cd "$( dirname "${BASH_SOURCE[0]}" )"
7 | 
8 | python track.py -v ../../dataset/ ../../predictions/ --data=dev --tracker=MEDIANFLOW
--------------------------------------------------------------------------------
/examples/opencv/requirements.txt:
--------------------------------------------------------------------------------
1 | opencv-python
2 | opencv-contrib-python
--------------------------------------------------------------------------------
/examples/opencv/track.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | from __future__ import print_function
4 | 
5 | import argparse
6 | import cv2
7 | import os
8 | import time
9 | 
10 | import oxuva
11 | 
12 | 
13 | def main():
14 |     parser = argparse.ArgumentParser()
15 |     parser.add_argument('data_dir')
16 |     parser.add_argument('predictions_dir')
17 |     parser.add_argument('--data', default='dev')
18 |     parser.add_argument('--verbose', '-v', action='store_true')
19 |     parser.add_argument('--tracker', default='TLD')
20 |     global args
21 |     args = parser.parse_args()
22 | 
23 |     tracker_id = 'cv' + args.tracker
24 |     tracker_preds_dir = os.path.join(args.predictions_dir, args.data, tracker_id)
25 |     if not os.path.exists(tracker_preds_dir):
26 |         os.makedirs(tracker_preds_dir, 0o755)
27 | 
28 |     tasks_file = os.path.join(args.data_dir, 'tasks', args.data + '.csv')
29 |     with open(tasks_file, 'r') as fp:
30 |         tasks = oxuva.load_dataset_tasks_csv(fp)
31 |     # tracks_file = os.path.join(args.data_dir, 'annotations', args.data + '.csv')
32 |     # with open(tracks_file, 'r') as fp:
33 |     #     tracks = oxuva.load_annotations_csv(fp)
34 |     # tasks = {key: oxuva.make_task_from_track(track) for key, track in tracks.items()}
35 | 
36 |     imfile = lambda vid, t: os.path.join(
37 |         args.data_dir, 'images', args.data, vid, '{:06d}.jpeg'.format(t))
38 | 
39 |     for key, task in tasks.items():
40 |         vid, obj = key
41 |         if args.verbose:
42 |             print(vid, obj)
43 |         preds_file = os.path.join(tracker_preds_dir, '{}_{}.csv'.format(vid, obj))
44 |         if os.path.exists(preds_file):
45 |             continue
46 | 
47 |         tracker = Tracker(tracker_type=args.tracker)
48 |         tracker.init(imfile(vid, task.init_time), task.init_rect)
49 |         preds = oxuva.SparseTimeSeries()
50 |         start = time.time()
51 |         for t in range(task.init_time + 1, task.last_time + 1):
52 |             preds[t] = tracker.next(imfile(vid, t))
53 |         dur_sec = time.time() - start
54 |         if args.verbose:
55 |             fps = (task.last_time - task.init_time + 1) / dur_sec
56 |             print('fps {:.3g}'.format(fps))
57 | 
58 |         tmp_preds_file = os.path.join(tracker_preds_dir, '{}_{}.csv.tmp'.format(vid, obj))
59 |         with open(tmp_preds_file, 'w') as fp:
60 |             oxuva.dump_predictions_csv(vid, obj, preds, fp)
61 |         os.rename(tmp_preds_file, preds_file)
62 | 
63 | 
64 | class Tracker:
65 | 
66 |     def __init__(self, tracker_type):
67 |         self._tracker = create_tracker(tracker_type)
68 | 
69 |     def init(self, imfile, rect):
70 |         im = cv2.imread(imfile, cv2.IMREAD_COLOR)
71 |         imheight, imwidth, _ = im.shape
72 |         if args.verbose:
73 |             print('image size', '{}x{}'.format(imwidth, imheight))
74 |         cvrect = rect_to_opencv(rect, imsize_hw=(imheight, imwidth))
75 |         ok = self._tracker.init(im, cvrect)
76 |         assert ok
77 | 
78 |     def next(self, imfile):
79 |         im = cv2.imread(imfile, cv2.IMREAD_COLOR)
80 |         imheight, imwidth, _ = im.shape
81 |         ok, cvrect = self._tracker.update(im)
82 |         if not ok:
83 |             return oxuva.make_prediction(present=False, score=0.0)
84 |         else:
85 |             rect = rect_from_opencv(cvrect, imsize_hw=(imheight, imwidth))
86 |             return oxuva.make_prediction(present=True, score=1.0, **rect)
87 | 
88 | 
89 | def create_tracker(tracker_type):
90 |     # https://www.learnopencv.com/object-tracking-using-opencv-cpp-python/
91 |     major_ver, minor_ver, subminor_ver = cv2.__version__.split('.')
92 |     if int(major_ver) < 3 or (int(major_ver) == 3 and int(minor_ver) < 3):
93 |         # The per-algorithm factory functions only exist from OpenCV 3.3.
94 |         tracker = cv2.Tracker_create(tracker_type)
95 |     else:
96 |         if tracker_type == 'BOOSTING':
97 |             tracker = cv2.TrackerBoosting_create()
98 |         elif tracker_type == 'MIL':
99 |             tracker = cv2.TrackerMIL_create()
100 |         elif tracker_type == 'KCF':
101 |             tracker = cv2.TrackerKCF_create()
102 |         elif tracker_type == 'TLD':
103 |             tracker = cv2.TrackerTLD_create()
104 |         elif tracker_type == 'MEDIANFLOW':
105 |             tracker = cv2.TrackerMedianFlow_create()
106 |         elif tracker_type == 'GOTURN':
107 |             tracker = cv2.TrackerGOTURN_create()
108 |         else:
109 |             raise ValueError('unknown tracker type: {}'.format(tracker_type))
110 |     return tracker
111 | 
112 | 
113 | def rect_to_opencv(rect, imsize_hw):
114 |     imheight, imwidth = imsize_hw
115 |     xmin_abs = rect['xmin'] * imwidth
116 |     ymin_abs = rect['ymin'] * imheight
117 |     xmax_abs = rect['xmax'] * imwidth
118 |     ymax_abs = rect['ymax'] * imheight
119 |     return (xmin_abs, ymin_abs, xmax_abs - xmin_abs, ymax_abs - ymin_abs)
120 | 
121 | 
122 | def rect_from_opencv(rect, imsize_hw):
123 |     imheight, imwidth = imsize_hw
124 |     xmin_abs, ymin_abs, width_abs, height_abs = rect
125 |     xmax_abs = xmin_abs + width_abs
126 |     ymax_abs = ymin_abs + height_abs
127 |     return {
128 |         'xmin': xmin_abs / imwidth,
129 |         'ymin': ymin_abs / imheight,
130 |         'xmax': xmax_abs / imwidth,
131 |         'ymax': ymax_abs / imheight,
132 |     }
133 | 
134 | 
135 | if __name__ == '__main__':
136 |     main()
137 | 
--------------------------------------------------------------------------------
/python/oxuva/__init__.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | from __future__ import print_function
4 | 
5 | from oxuva.annot import *
6 | from oxuva.assess import *
7 | from oxuva.dataset import *
8 | from oxuva.io_annot import *
9 | from oxuva.io_pred import *
10 | from oxuva.io_task import *
11 | from oxuva.pred import *
12 | from oxuva.task import *
13 | from oxuva.util import *
14 | 
--------------------------------------------------------------------------------
/python/oxuva/annot.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | from __future__ import print_function
4 | 
5 | from oxuva import util
6 | 
7 | 
8 | def make_track_label(category, frames, contains_cuts=None, always_visible=None):
9 |     '''Creates a track annotation dictionary.'''
10 |     return {
11 |         'frames': frames,
12 |         'category': category,
13 |         'contains_cuts': contains_cuts,
14 |         'always_visible': always_visible,
15 |     }
16 | 
17 | 
18 | def make_frame_label(present=None, exemplar=None, xmin=None, ymin=None, xmax=None, ymax=None):
19 |     '''Creates a frame annotation dictionary (for ground-truth).'''
20 |     return {
21 |         'present': util.default_if_none(present, False),
22 |         'exemplar': util.default_if_none(exemplar, False),
23 |         'xmin': util.default_if_none(xmin, 0.0),
24 | 'xmax': util.default_if_none(xmax, 0.0), 25 | 'ymin': util.default_if_none(ymin, 0.0), 26 | 'ymax': util.default_if_none(ymax, 0.0), 27 | } 28 | -------------------------------------------------------------------------------- /python/oxuva/assess.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Examples: 3 | 4 | To evaluate the prediction of a tracker for one tracking task: 5 | 6 | assessment = assess.assess_sequence(task.labels, prediction, iou_threshold=0.5) 7 | 8 | The function assess_sequence() calls subset_using_previous_if_missing() internally. 9 | This function may alternatively be called before assess_sequence(). 10 | The result will be the same because subset_using_previous_if_missing() is idempotent. 11 | 12 | prediction_subset = assess.subset_using_previous_if_missing( 13 | prediction, task.labels.sorted_keys()) 14 | assessment = assess.assess_sequence( 15 | task.labels, prediction_subset, iou_threshold=0.5) 16 | 17 | Since assessment is a SparseTimeSeries of frame assessments, 18 | we can consider a subset of frames: 19 | 20 | assessment_subset = util.select_interval( 21 | assessment, min_time, max_time, init_time=task.init_time) 22 | 23 | To accumulate per-frame assessments into a summary for the sequence: 24 | 25 | sequence_assessment = assess.assessment_sum(frame_assessments) 26 | 27 | This can also be used to accumulate sequence summaries for a dataset: 28 | 29 | dataset_assessment = assess.assessment_sum(sequence_assessments) 30 | 31 | To obtain the performance metrics from the summary: 32 | 33 | stats = assess.quality_metrics(dataset_assessment) 34 | 35 | Full example: 36 | 37 | assessments = {} 38 | for key in tasks: 39 | assessments[key] = assess.assess_sequence( 40 | tasks[key].labels, predictions[key], iou_threshold=0.5) 41 | 42 | sequence_assessments = { 43 | vid_obj: assess.assessment_sum(assessments[vid_obj].values()) 44 | for vid_obj in assessments} 45 | dataset_assessment = assess.assessment_sum(sequence_assessments.values()) 46 | return assess.quality_metrics(dataset_assessment) 47 | ''' 48 | 49 | from __future__ import absolute_import 50 | from __future__ import division 51 | from __future__ import print_function 52 | 53 | import functools 54 | import itertools 55 | import json 56 | import math 57 | import numpy as np 58 | import os 59 | 60 | import logging 61 | logger = logging.getLogger(__name__) 62 | 63 | from oxuva import dataset 64 | from oxuva import io_pred 65 | from oxuva import util 66 | 67 | 68 | FRAME_RATE = 30 69 | 70 | 71 | def quality_metrics(assessment): 72 | '''Computes the TPR, TNR from TP, FP, etc. 73 | 74 | Args: 75 | assessment -- Dictionary with TP, FP, TN, FN. 76 | ''' 77 | metrics = {} 78 | num_pos = assessment['TP'] + assessment['FN'] 79 | num_neg = assessment['TN'] + assessment['FP'] 80 | with np.errstate(invalid='ignore'): 81 | # Allow nan values in cases of 0 / 0. 82 | metrics['TPR'] = np.asfarray(assessment['TP']) / num_pos 83 | metrics['TNR'] = np.asfarray(assessment['TN']) / num_neg 84 | # TODO: Add some errorbars? 85 | metrics['GM'] = util.geometric_mean(metrics['TPR'], metrics['TNR']) 86 | metrics['MaxGM'] = max_geometric_mean_line(metrics['TNR'], metrics['TPR'], 1, 0) 87 | # Include the raw totals. 88 | metrics.update(assessment) 89 | return metrics 90 | 91 | 92 | def subset_using_previous_if_missing(series, times): 93 | '''Extracts a subset of values at the given times. 94 | If there is no data for a particular time, then the last value is used. 
95 | 
96 |     Args:
97 |         series: SparseTimeSeries of data.
98 |         times: List of times.
99 | 
100 |     Returns:
101 |         Time series sampled at specified times.
102 | 
103 |     Examples:
104 |         >> subset_using_previous_if_missing(SparseTimeSeries({2: 'hi', 4: 'bye'}), [2, 3, 4, 5])
105 |         # gives the values ['hi', 'hi', 'bye', 'bye'] at times [2, 3, 4, 5]
106 | 
107 |     Raises an exception if asked for a time before the first element in series.
108 |     '''
109 |     assert isinstance(series, util.SparseTimeSeries)
110 |     series = list(series.sorted_items())
111 |     subset = [None for _ in times]
112 |     t_curr, x_curr = None, None
113 |     for i, t in enumerate(times):
114 |         # Read from series until we have read all elements <= t.
115 |         read_all = False
116 |         while not read_all:
117 |             if len(series) == 0:
118 |                 read_all = True
119 |             else:
120 |                 # Peek at next element.
121 |                 t_next, x_next = series[0]
122 |                 if t_next > t:
123 |                     # We have gone past t.
124 |                     read_all = True
125 |                 else:
126 |                     # Keep going.
127 |                     t_curr, x_curr = t_next, x_next
128 |                     series = series[1:]
129 |         if t_curr is None:
130 |             raise ValueError('no value for time: {}'.format(t))
131 |         if t_curr != t:
132 |             logger.warning('no prediction for time %d: use prediction for time %s', t, t_curr)
133 |         subset[i] = x_curr
134 |     return util.SparseTimeSeries(zip(times, subset))
135 | 
136 | 
137 | def load_predictions_and_select_frames(tasks, tracker_pred_dir, permissive=False, log_prefix=''):
138 |     '''Loads all predictions of a tracker and takes the subset of frames with ground truth.
139 | 
140 |     Args:
141 |         tasks: VideoObjectDict of Tasks.
142 |         tracker_pred_dir: Directory that contains files video_object.csv
143 | 
144 |     Returns:
145 |         VideoObjectDict of SparseTimeSeries of predictions.
146 |     '''
147 |     logger.info('load predictions from "%s"', tracker_pred_dir)
148 |     preds = dataset.VideoObjectDict()
149 |     for track_num, vid_obj in enumerate(tasks.keys()):
150 |         vid, obj = vid_obj
151 |         track_name = vid + '_' + obj
152 |         logger.debug(log_prefix + 'object {}/{} {}'.format(track_num + 1, len(tasks), track_name))
153 |         pred_file = os.path.join(tracker_pred_dir, '{}.csv'.format(track_name))
154 |         try:
155 |             with open(pred_file, 'r') as fp:
156 |                 pred = io_pred.load_predictions_csv(fp)
157 |         except IOError as exc:
158 |             if not permissive:
159 |                 raise
160 |             logger.warning('exclude track %s: %s', track_name, str(exc))
161 |             continue
162 |         pred = subset_using_previous_if_missing(pred, tasks[vid_obj].labels.sorted_keys())
163 |         preds[vid_obj] = pred
164 |     return preds
165 | 
166 | 
167 | def assess_sequence(gt, pred, iou_threshold):
168 |     '''Evaluate predicted track against ground-truth annotations.
169 | 
170 |     Args:
171 |         gt: SparseTimeSeries of annotation dicts.
172 |         pred: SparseTimeSeries of prediction dicts.
173 |         iou_threshold: Threshold for determining true positive.
174 | 
175 |     Returns:
176 |         An assessment of each frame with ground-truth.
177 |         This is a SparseTimeSeries of per-frame assessment dicts.
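        Each frame assessment is a dict of integer counts (TP, FP, TN, FN,
        num_frames, num_present, num_absent) plus the prediction score when
        available; see assess_frame().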
178 |     '''
179 |     times = gt.sorted_keys()
180 |     # if pred.sorted_keys() != times:
181 |     pred = subset_using_previous_if_missing(pred, times)
182 |     return util.SparseTimeSeries({t: assess_frame(gt[t], pred[t], iou_threshold) for t in times})
183 | 
184 | 
185 | def make_assessment(num_frames=0, tp=0, fp=0, tn=0, fn=0, num_present=0, num_absent=0):
186 |     return {'num_frames': num_frames, 'TP': tp, 'TN': tn, 'FP': fp, 'FN': fn,
187 |             'num_present': num_present, 'num_absent': num_absent}
188 | 
189 | 
190 | def assessment_sum(assessments):
191 |     return util.dict_sum_strict(assessments, make_assessment())
192 | 
193 | 
194 | def assess_frame(gt, pred, iou_threshold):
195 |     '''Turn prediction into TP, FP, TN, FN.
196 | 
197 |     Args:
198 |         gt, pred: Dictionaries with fields: present, xmin, xmax, ymin, ymax.
199 | 
200 |     Returns:
201 |         Frame assessment dict with TP, FP, etc.
202 |     '''
203 |     # TODO: Some other metrics may require knowledge of aspect ratio.
204 |     # (This is not an issue for now since IOU is invariant).
205 | 
206 |     # TP: gt "present" and pred "present" and box correct
207 |     # FN: gt "present" and not (pred "present" and box correct)
208 |     # TN: gt "absent" and pred "absent"
209 |     # FP: gt "absent" and pred "present"
210 |     result = make_assessment(num_frames=1,
211 |                              num_present=(1 if gt['present'] else 0),
212 |                              num_absent=(0 if gt['present'] else 1))
213 |     if gt['present']:
214 |         if pred['present'] and iou_clip(gt, pred) >= iou_threshold:
215 |             result['TP'] += 1
216 |         else:
217 |             result['FN'] += 1
218 |     else:
219 |         if pred['present']:
220 |             result['FP'] += 1
221 |         else:
222 |             result['TN'] += 1
223 |     if 'score' in pred:
224 |         result['score'] = pred['score']
225 |     return result
226 | 
227 | 
228 | def iou_clip(a, b):
229 |     bounds = unit_rect()
230 |     a = intersect(a, bounds)
231 |     b = intersect(b, bounds)
232 |     return iou(a, b)
233 | 
234 | 
235 | def iou(a, b):
236 |     i = vol(intersect(a, b))
237 |     u = vol(a) + vol(b) - i
238 |     return float(i) / float(u)
239 | 
240 | 
241 | def vol(r):
242 |     # Any inverted rectangle is silently considered empty.
243 |     # (Allows for empty intersection.)
244 |     xsize = max(0, r['xmax'] - r['xmin'])
245 |     ysize = max(0, r['ymax'] - r['ymin'])
246 |     return xsize * ysize
247 | 
248 | 
249 | def intersect(a, b):
250 |     return {
251 |         'xmin': max(a['xmin'], b['xmin']),
252 |         'ymin': max(a['ymin'], b['ymin']),
253 |         'xmax': min(a['xmax'], b['xmax']),
254 |         'ymax': min(a['ymax'], b['ymax']),
255 |     }
256 | 
257 | 
258 | def unit_rect():
259 |     return {'xmin': 0.0, 'ymin': 0.0, 'xmax': 1.0, 'ymax': 1.0}
260 | 
261 | 
262 | def posthoc_threshold(assessments):
263 |     '''Trace curve of operating points by varying score threshold.
264 | 
265 |     Args:
266 |         assessments: List of per-frame assessment dicts (as returned by assess_frame).
267 |     '''
268 |     # Group all "present" predictions (TP, FP) by score.
269 |     by_score = {}
270 |     for ass in assessments:
271 |         if ass['TP'] or ass['FP']:
272 |             by_score.setdefault(float(ass['score']), []).append(ass)
273 | 
274 |     # Trace threshold from min to max.
275 |     # Initially everything is labelled absent (negative).
276 |     num_present = sum(ass['TP'] + ass['FN'] for ass in assessments)
277 |     num_absent = sum(ass['TN'] + ass['FP'] for ass in assessments)
278 |     total = {'TP': 0, 'FP': 0, 'TN': num_absent, 'FN': num_present}
279 | 
280 |     # Start switching the highest scores from "absent" to "present".
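    # Each block of equal-scored predictions moves its TP count from FN to TP
    # and its FP count from TN to FP, appending one operating point per
    # distinct score threshold.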
281 |     points = []
282 |     points.append(dict(total))
283 |     if any(math.isnan(score) for score in by_score):
284 |         raise ValueError('score is nan but prediction is "present"')
285 |     for score in sorted(by_score.keys(), reverse=True):
286 |         # Iterate through the next block of points with the same score.
287 |         for assessment in by_score[score]:
288 |             total['TP'] += assessment['TP']
289 |             total['FN'] -= assessment['TP']
290 |             total['FP'] += assessment['FP']
291 |             total['TN'] -= assessment['FP']
292 |         points.append(dict(total))
293 |     return points
294 | 
295 | 
296 | def max_geometric_mean_line(x1, y1, x2, y2):
297 |     # Obtained using Matlab symbolic toolbox.
298 |     # >> syms x1 x2 y1 y2 th
299 |     # >> x = (1-th)*x1 + th*x2
300 |     # >> y = (1-th)*y1 + th*y2
301 |     # >> f = x * y
302 |     # >> coeffs(f, th)
303 |     # [ x1*y1, - y1*(x1 - x2) - x1*(y1 - y2), (x1 - x2)*(y1 - y2)]
304 |     a = (x1 - x2) * (y1 - y2)
305 |     b = - y1 * (x1 - x2) - x1 * (y1 - y2)
306 |     # Maximize the quadratic on [0, 1].
307 |     # Check endpoints.
308 |     candidates = [0.0, 1.0]
309 |     if a < 0:
310 |         # Concave quadratic. Check if peak is in bounds.
311 |         th_star = -b / (2 * a)
312 |         if 0 <= th_star <= 1:
313 |             candidates.append(th_star)
314 |     g = lambda x, y: math.sqrt(x * y)
315 |     h = lambda th: g((1 - th) * x1 + th * x2, (1 - th) * y1 + th * y2)
316 |     return max([h(th) for th in candidates])
317 | 
318 | 
319 | def make_dataset_assessment(totals, quantized_totals, frame_assessments=None):
320 |     '''Sufficient to produce all plots.
321 | 
322 |     This is what will be returned to the user by the evaluation server.
323 |     '''
324 |     return {
325 |         'totals': totals,
326 |         'quantized_totals': quantized_totals,
327 |         'frame_assessments': frame_assessments,  # Ignored by dump_xxx functions!
328 |     }
329 | 
330 | 
331 | def union_dataset_assessment(x, y):
332 |     '''Combines the tracks of two datasets.'''
333 |     if y is None:
334 |         return x
335 |     if x is None:
336 |         return y
337 |     return {
338 |         'totals': dataset.VideoObjectDict(dict(itertools.chain(
339 |             x['totals'].items(),
340 |             y['totals'].items()))),
341 |         'quantized_totals': dataset.VideoObjectDict(dict(itertools.chain(
342 |             x['quantized_totals'].items(),
343 |             y['quantized_totals'].items()))),
344 |     }
345 | 
346 | 
347 | def dump_dataset_assessment_json(x, f):
348 |     data = {
349 |         # Convert to list of items because JSON does not support tuple as keys.
350 |         'totals': sorted(x['totals'].items()),
351 |         # Convert to list of items because JSON does not support tuple as keys.
352 |         # Extract elements of each QuantizedAssessment.
353 |         'quantized_totals': [(vid_obj, value.elems)
354 |                              for vid_obj, value in sorted(x['quantized_totals'].items())],
355 |     }
356 |     json.dump(data, f, sort_keys=True)
357 | 
358 | 
359 | def load_dataset_assessment_json(f):
360 |     data = json.load(f)
361 |     return make_dataset_assessment(
362 |         totals=dataset.VideoObjectDict({
363 |             tuple(vid_obj): total for vid_obj, total in data['totals']}),
364 |         quantized_totals=dataset.VideoObjectDict({
365 |             tuple(vid_obj): QuantizedAssessment({
366 |                 tuple(interval): total for interval, total in quantized_totals})
367 |             for vid_obj, quantized_totals in data['quantized_totals']}))
368 | 
369 | 
370 | def assess_dataset(tasks, predictions, iou_threshold, resolution_seconds=30):
371 |     '''
372 |     Args:
373 |         tasks: VideoObjectDict of tasks. Each task must include annotations.
374 |         predictions: VideoObjectDict of predictions.
375 | 
376 |     Returns:
377 |         Enough information to produce the plots.
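        Concretely, a dict with keys 'totals', 'quantized_totals' and
        'frame_assessments', as constructed by make_dataset_assessment().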
378 |     '''
379 |     frame_assessments = dataset.VideoObjectDict({
380 |         key: assess_sequence(tasks[key].labels, predictions[key], iou_threshold)
381 |         for key in tasks.keys()})
382 |     return make_dataset_assessment(
383 |         totals=dataset.VideoObjectDict({
384 |             key: assessment_sum(frame_assessments[key].values())
385 |             for key in frame_assessments.keys()}),
386 |         quantized_totals=dataset.VideoObjectDict({
387 |             key: quantize_sequence_assessment(frame_assessments[key],
388 |                                               init_time=tasks[key].init_time,
389 |                                               resolution=(FRAME_RATE * resolution_seconds))
390 |             for key in frame_assessments.keys()}),
391 |         frame_assessments=frame_assessments)
392 | 
393 | 
394 | class QuantizedAssessment(object):
395 |     '''Describes the assessment of intervals of a sequence.
396 | 
397 |     This is sufficient to construct the temporal plots
398 |     without revealing whether each individual prediction is correct or not.
399 |     '''
400 | 
401 |     def __init__(self, elems):
402 |         '''
403 |         Args:
404 |             elems: Map from (a, b) to assessment dict.
405 |         '''
406 |         if isinstance(elems, dict):
407 |             elems = list(elems.items())
408 |         elems = sorted(elems)
409 |         self.elems = elems
410 | 
411 |     def get(self, min_time=None, max_time=None):
412 |         '''Get cumulative assessment of interval [min_time, max_time].'''
413 |         # Include all bins within [min_time, max_time].
414 |         subset = []
415 |         for interval, value in self.elems:
416 |             u, v = interval
417 |             # if min_time <= u <= v <= max_time:
418 |             if (min_time is None or min_time <= u) and (max_time is None or v <= max_time):
419 |                 subset.append(value)
420 |             elif (min_time is None or min_time < v) and (max_time is None or u < max_time):
421 |                 # If interval is not within [min_time, max_time],
422 |                 # then require that it is entirely outside [min_time, max_time].
423 |                 raise ValueError('interval {} straddles requested {}'.format(
424 |                     str((u, v)), str((min_time, max_time))))
425 |         return assessment_sum(subset)
426 | 
427 |     # def get_vector(self, intervals):
428 |     #     return _to_vector_dict([self.get(min_time, max_time)
429 |     #                             for min_time, max_time in intervals])
430 | 
431 | 
432 | def quantize_sequence_assessment(assessment, resolution, init_time):
433 |     '''
434 |     Args:
435 |         assessment: SparseTimeSeries of assessment dicts.
436 |         resolution: Integer specifying temporal resolution.
437 |         init_time: Absolute time at which tracker was started.
438 | 
439 |     Returns:
440 |         QuantizedAssessment over elements ((a, b), value) where a, b are integers.
441 |     '''
442 |     if int(resolution) != resolution:
443 |         logger.warning('resolution is not integer: %g', resolution)
444 |     resolution = int(resolution)
445 | 
446 |     subsets = {}
447 |     for abs_time, frame in assessment.items():
448 |         t = abs_time - init_time
449 |         i = int(math.ceil(t / float(resolution)))
450 |         interval = resolution * (i - 1), resolution * i
451 |         subsets.setdefault(interval, []).append(frame)
452 |     sums = {interval: assessment_sum(subsets[interval]) for interval in subsets.keys()}
453 |     return QuantizedAssessment(sorted(sums.items()))
454 | 
455 | 
456 | def _to_vector_dict(list_of_dicts):
457 |     keys = _get_keys_and_assert_equal(list_of_dicts)
458 |     vector_dict = {}
459 |     for key in keys:
460 |         vector_dict[key] = np.array([x[key] for x in list_of_dicts])
461 |     return vector_dict
462 | 
463 | 
464 | def dataset_quality(totals, enable_bootstrap=True, num_trials=None, base_seed=0):
465 |     '''
466 |     Args:
467 |         totals: VideoObjectDict of per-sequence assessment dicts.
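        enable_bootstrap: Whether to add bootstrap mean/variance estimates.
        num_trials: Number of bootstrap trials; required if enable_bootstrap is True.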
468 |     '''
469 |     quality = summarize(totals.values())
470 |     if enable_bootstrap:
471 |         if num_trials is None:
472 |             raise ValueError('must specify number of trials for bootstrap sampling')
473 |         quality.update(bootstrap(summarize, totals, num_trials, base_seed=base_seed))
474 |     quality = {k: np.asarray(v).tolist() for k, v in quality.items()}
475 |     return quality
476 | 
477 | 
478 | def dataset_quality_interval(quantized_assessments, min_time=None, max_time=None,
479 |                              enable_bootstrap=True, num_trials=None, base_seed=0):
480 |     '''
481 |     Args:
482 |         quantized_assessments: VideoObjectDict of per-sequence QuantizedAssessments.
483 |     '''
484 |     interval_totals = dataset.VideoObjectDict({
485 |         track: quantized_assessments[track].get(min_time, max_time)
486 |         for track in quantized_assessments.keys()})
487 |     quality = summarize(interval_totals.values())
488 |     if enable_bootstrap:
489 |         if num_trials is None:
490 |             raise ValueError('must specify number of trials for bootstrap sampling')
491 |         quality.update(bootstrap(summarize, interval_totals, num_trials, base_seed=base_seed))
492 |     quality = {k: np.asarray(v).tolist() for k, v in quality.items()}
493 |     return quality
494 | 
495 | 
496 | def dataset_quality_filter(totals, require_none_absent=False, require_some_absent=False,
497 |                            enable_bootstrap=True, num_trials=None, base_seed=0):
498 |     # Apply filter after bootstrap sampling dataset.
499 |     summarize_func = functools.partial(summarize_filter,
500 |                                        require_none_absent=require_none_absent,
501 |                                        require_some_absent=require_some_absent)
502 |     quality = summarize_func(totals.values())
503 |     if enable_bootstrap:
504 |         if num_trials is None:
505 |             raise ValueError('must specify number of trials for bootstrap sampling')
506 |         quality.update(bootstrap(summarize_func, totals, num_trials, base_seed=base_seed))
507 |     quality = {k: np.asarray(v).tolist() for k, v in quality.items()}
508 |     return quality
509 | 
510 | 
511 | def summarize(totals):
512 |     '''Obtain dataset quality from per-sequence assessments.
513 | 
514 |     Args:
515 |         totals: List of assessment dicts.
516 |     '''
517 |     return quality_metrics(assessment_sum(totals))
518 | 
519 | 
520 | def summarize_filter(totals, require_none_absent=False, require_some_absent=False):
521 |     totals = [x for x in totals if
522 |               (not require_none_absent or x['num_absent'] == 0) and
523 |               (not require_some_absent or x['num_absent'] > 0)]
524 |     return summarize(totals)
525 | 
526 | 
527 | def bootstrap(func, data, num_trials, base_seed=0):
528 |     '''
529 |     Args:
530 |         func: Maps list of per-track elements to a dictionary of metrics.
531 |             This will be called num_trials times.
532 |         data: VideoObjectDict of elements.
533 | 
534 |     The function will be called func(x) where x is a list of the values in data.
535 |     It would normally be called func(data.values()).
536 | 
537 |     VideoObjectDict is required because sampling is performed on videos not tracks.
538 |     '''
539 |     metrics = []
540 |     for i in range(num_trials):
541 |         sample = _bootstrap_sample_by_video(data, seed=(base_seed + i))
542 |         logger.debug('bootstrap trial %d: num sequences %d', i + 1, len(sample))
543 |         metrics.append(func(sample))
544 |     return _stats_from_repetitions(metrics)
545 | 
546 | 
547 | def _bootstrap_sample_by_video(tracks, seed):
548 |     '''Samples videos with replacement and returns a list of all tracks.
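    Sampling is per video: if a video is drawn k times, every track in that
    video appears k times in the returned sample.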
549 | 550 | Args: 551 | tracks: VideoObjectDict 552 | ''' 553 | assert isinstance(tracks, dataset.VideoObjectDict) 554 | by_video = tracks.to_nested_dict() 555 | rand = np.random.RandomState(seed) 556 | names = list(by_video.keys()) 557 | names_sample = rand.choice(names, len(by_video), replace=True) 558 | return list(itertools.chain.from_iterable(by_video[name].values() for name in names_sample)) 559 | 560 | 561 | def _stats_from_repetitions(xs): 562 | '''Maps a list of dictionaries to the mean and variance of the values. 563 | 564 | Appends '_mean' and '_var' to the original keys. 565 | ''' 566 | # Check that all dictionaries have the same keys. 567 | fields = _get_keys_and_assert_equal(xs) 568 | stats = {} 569 | stats.update({field + '_mean': np.mean([x[field] for x in xs], axis=0) for field in fields}) 570 | stats.update({field + '_var': np.var([x[field] for x in xs], axis=0) for field in fields}) 571 | return stats 572 | 573 | 574 | def _get_keys_and_assert_equal(xs): 575 | '''Asserts that all dictionaries have the same keys and returns the set of keys.''' 576 | assert len(xs) > 0 577 | fields = None 578 | for x in xs: 579 | curr_fields = set(x.keys()) 580 | if fields is None: 581 | fields = curr_fields 582 | else: 583 | if curr_fields != fields: 584 | raise ValueError('fields differ: {} and {}'.format(fields, curr_fields)) 585 | return fields 586 | -------------------------------------------------------------------------------- /python/oxuva/dataset.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | import logging 6 | logger = logging.getLogger(__name__) 7 | 8 | from oxuva import util 9 | 10 | 11 | class VideoObjectDict(object): 12 | '''Represents map video -> object -> element. 13 | Behaves as a dictionary with keys of (video, object) tuples. 14 | 15 | Example: 16 | for key in tracks.keys(): 17 | print(tracks[key]) 18 | 19 | tracks = VideoObjectDict() 20 | ... 21 | for vid in tracks.videos(): 22 | for obj in tracks.objects(vid): 23 | print(tracks[(vid, obj)]) 24 | ''' 25 | 26 | def __init__(self, elems=None): 27 | if elems is None: 28 | self._elems = dict() 29 | elif isinstance(elems, VideoObjectDict): 30 | self._elems = dict(elems._elems) 31 | else: 32 | self._elems = dict(elems) 33 | 34 | def videos(self): 35 | return set([vid for vid, obj in self._elems.keys()]) 36 | 37 | def objects(self, vid): 38 | # TODO: This is somewhat inefficient if called for all videos. 
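        # (This scans every (video, object) key; to_nested_dict() groups all
        # videos in a single pass if many lookups are needed.)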
39 | return [obj_i for vid_i, obj_i in self._elems.keys() if vid_i == vid] 40 | 41 | def __len__(self): 42 | return len(self._elems) 43 | 44 | def __getitem__(self, key): 45 | return self._elems[key] 46 | 47 | def __setitem__(self, key, value): 48 | self._elems[key] = value 49 | 50 | def __delitem__(self, key): 51 | del self._elems[key] 52 | 53 | def keys(self): 54 | return self._elems.keys() 55 | 56 | def values(self): 57 | return self._elems.values() 58 | 59 | def items(self): 60 | return self._elems.items() 61 | 62 | def __iter__(self): 63 | for k in self._elems.keys(): 64 | yield k 65 | 66 | def to_nested_dict(self): 67 | elems = {} 68 | for (vid, obj), elem in self._elems.items(): 69 | elems.setdefault(vid, {})[obj] = elem 70 | return elems 71 | 72 | def update_from_nested_dict(self, elems): 73 | for vid, vid_elems in elems.items(): 74 | for obj, elem in vid_elems.items(): 75 | self._elems[(vid, obj)] = elem 76 | -------------------------------------------------------------------------------- /python/oxuva/io_annot.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | import csv 6 | 7 | from oxuva.annot import make_frame_label 8 | from oxuva.annot import make_track_label 9 | from oxuva.dataset import VideoObjectDict 10 | from oxuva import util 11 | 12 | 13 | TRACK_FIELDS = [ 14 | 'video_id', 'object_id', 15 | 'class_id', 'class_name', 'contains_cuts', 'always_visible', 16 | 'frame_num', 'object_presence', 'xmin', 'xmax', 'ymin', 'ymax', 17 | ] 18 | 19 | 20 | def load_dataset_annotations_csv(fp): 21 | '''Loads the annotations for an entire dataset from one CSV file.''' 22 | reader = csv.DictReader(fp, fieldnames=TRACK_FIELDS) 23 | rows = [row for row in reader] 24 | 25 | # Group rows by object. 26 | rows_by_track = {} 27 | for row in rows: 28 | vid_id = row['video_id'] 29 | obj_id = row['object_id'] 30 | rows_by_track.setdefault((vid_id, obj_id), []).append(row) 31 | 32 | tracks = VideoObjectDict() 33 | for vid_obj in rows_by_track.keys(): 34 | # vid_id, obj_id = vid_obj 35 | frames = util.SparseTimeSeries() 36 | for row in rows_by_track[vid_obj]: 37 | present = _parse_is_present(row['object_presence']) 38 | # TODO: Support 'exemplar' field in CSV format? 39 | t = int(row['frame_num']) 40 | frames[t] = make_frame_label( 41 | present=present, 42 | xmin=float(row['xmin']) if present else None, 43 | xmax=float(row['xmax']) if present else None, 44 | ymin=float(row['ymin']) if present else None, 45 | ymax=float(row['ymax']) if present else None) 46 | assert len(frames) >= 2 47 | first_row = rows_by_track[vid_obj][0] 48 | tracks[vid_obj] = make_track_label( 49 | category=first_row['class_name'], 50 | frames=frames, 51 | contains_cuts=first_row['contains_cuts'], 52 | always_visible=first_row['always_visible']) 53 | 54 | return tracks 55 | 56 | 57 | def dump_dataset_annotations_csv(tracks, fp): 58 | '''Writes the annotations for an entire dataset to one CSV file.''' 59 | writer = csv.DictWriter(fp, fieldnames=TRACK_FIELDS) 60 | for vid in sorted(tracks.videos()): 61 | # Sort objects by their first frame. 
62 |         sort_key = lambda obj: (_start_time(tracks[(vid, obj)]), obj)
63 |         for obj in sorted(tracks.objects(vid), key=sort_key):
64 |             track = tracks[(vid, obj)]
65 |             for frame_num, frame in track['frames'].sorted_items():
66 |                 assert frame_num == int(frame_num)
67 |                 frame_num = int(frame_num)
68 |                 assert frame_num % 30 == 0
69 |                 # timestamp_sec = frame_num // 30
70 |                 class_name = track.get('category', '')
71 |                 # NOTE: CLASS_ID_LOOKUP (class_name -> YTBB class_id) is not defined in this module.
72 |                 class_id = CLASS_ID_LOOKUP[class_name] if class_name else ''
73 |                 row = {
74 |                     'video_id': vid,
75 |                     'object_id': obj,
76 |                     'class_id': class_id,
77 |                     'class_name': class_name,
78 |                     'contains_cuts': _str_contains_cuts(track.get('contains_cuts', None)),
79 |                     'always_visible': _str_always_visible(track.get('always_visible', None)),
80 |                     'frame_num': frame_num,
81 |                     'object_presence': 'present' if frame['present'] else 'absent',
82 |                     'xmin': frame['xmin'],
83 |                     'xmax': frame['xmax'],
84 |                     'ymin': frame['ymin'],
85 |                     'ymax': frame['ymax'],
86 |                 }
87 |                 writer.writerow(row)
88 | 
89 | 
90 | def _start_time(track):
91 |     frames = track['frames']
92 |     return frames.sorted_keys()[0]
93 | 
94 | 
95 | def _parse_is_present(s):
96 |     if s == 'present':
97 |         return True
98 |     elif s == 'absent':
99 |         return False
100 |     else:
101 |         raise ValueError('unknown value for presence: {}'.format(s))
102 | 
103 | 
104 | def _str_is_present(present):
105 |     if present:
106 |         return 'present'
107 |     else:
108 |         return 'absent'
109 | 
110 | 
111 | def _str_contains_cuts(contains_cuts):
112 |     if contains_cuts is True:
113 |         return 'true'
114 |     elif contains_cuts is False:
115 |         return 'false'
116 |     else:
117 |         return 'unknown'
118 | 
119 | 
120 | def _parse_contains_cuts(s):
121 |     return util.str2bool_or_none(s)
122 | 
123 | 
124 | def _str_always_visible(always_visible):
125 |     if always_visible is True:
126 |         return 'true'
127 |     elif always_visible is False:
128 |         return 'false'
129 |     else:
130 |         return 'unknown'
131 | 
132 | 
133 | def _parse_always_visible(s):
134 |     return util.str2bool_or_none(s)
135 | 
--------------------------------------------------------------------------------
/python/oxuva/io_pred.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | from __future__ import print_function
4 | 
5 | import csv
6 | 
7 | from oxuva.pred import make_prediction
8 | from oxuva import util
9 | 
10 | 
11 | PREDICTION_FIELD_NAMES_V1 = [
12 |     'video', 'object', 'imwidth', 'imheight',
13 |     'frame', 'score', 'present', 'xmin_pix', 'ymin_pix', 'xmax_pix', 'ymax_pix',
14 | ]
15 | PREDICTION_FIELD_NAMES_V2 = [
16 |     'video', 'object', 'frame_num', 'present', 'score', 'xmin', 'xmax', 'ymin', 'ymax',
17 | ]
18 | PREDICTION_FIELD_NAMES = PREDICTION_FIELD_NAMES_V2
19 | 
20 | 
21 | def load_predictions_csv(fp):
22 |     '''Loads output of tracker from CSV file.
23 | 
24 |     Args:
25 |         fp: File-like object with read() and seek().
26 | 
27 |     Returns:
28 |         SparseTimeSeries of prediction dicts, keyed by frame number.
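        The file may be written with or without a header row;
        see _dict_reader_optional_fieldnames().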
29 | ''' 30 | # has_header = csv.Sniffer().has_header(fp.read(4<<10)) # 4 kB 31 | # fp.seek(0) 32 | reader = _dict_reader_optional_fieldnames(fp, PREDICTION_FIELD_NAMES) 33 | 34 | predictions = util.SparseTimeSeries() 35 | for row in reader: 36 | present = util.str2bool(row['present']) 37 | t = int(row['frame_num']) 38 | predictions[t] = make_prediction( 39 | present=present, 40 | score=float(row['score']), 41 | xmin=float(row['xmin']) if present else None, 42 | xmax=float(row['xmax']) if present else None, 43 | ymin=float(row['ymin']) if present else None, 44 | ymax=float(row['ymax']) if present else None) 45 | return predictions 46 | 47 | 48 | def _dict_reader_optional_fieldnames(fp, fieldnames): 49 | '''Creates a csv.DictReader with the given fieldnames. 50 | 51 | The file may or may not contain a header row. 52 | If it contains a header row, it must match fieldnames. 53 | 54 | This function exists because csv.Sniffer() is unreliable. 55 | For example, it may fail if one row uses int and one row uses float. 56 | ''' 57 | # Try to read headers from file. 58 | reader = csv.DictReader(fp, fieldnames=None) 59 | if set(reader.fieldnames) != set(fieldnames): 60 | fp.seek(0) 61 | reader = csv.DictReader(fp, fieldnames=fieldnames) 62 | return reader 63 | 64 | 65 | def dump_predictions_csv(vid_id, obj_id, predictions, fp): 66 | '''Writes output of tracker for a single track to a CSV file. 67 | 68 | Args: 69 | vid_id: String. 70 | obj_id: String. 71 | predictions: SparseTimeSeries of prediction dicts. 72 | fp: File-like object with write(). 73 | ''' 74 | writer = csv.DictWriter(fp, fieldnames=PREDICTION_FIELD_NAMES) 75 | for t, prediction in predictions.items(): 76 | row = { 77 | 'video': vid_id, 78 | 'object': obj_id, 79 | 'frame_num': t, 80 | 'present': util.bool2str(prediction['present']), 81 | 'score': prediction['score'], 82 | 'xmin': prediction['xmin'], 83 | 'xmax': prediction['xmax'], 84 | 'ymin': prediction['ymin'], 85 | 'ymax': prediction['ymax'], 86 | } 87 | writer.writerow(row) 88 | -------------------------------------------------------------------------------- /python/oxuva/io_task.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | import csv 6 | 7 | from oxuva.dataset import VideoObjectDict 8 | from oxuva.task import Task 9 | 10 | 11 | TASK_FIELDS = [ 12 | 'video_id', 'object_id', 13 | 'init_frame', 'last_frame', 'xmin', 'xmax', 'ymin', 'ymax', 14 | ] 15 | 16 | 17 | def load_dataset_tasks_csv(fp): 18 | '''Loads the problem definitions for an entire dataset from one CSV file.''' 19 | reader = csv.DictReader(fp, fieldnames=TASK_FIELDS) 20 | rows = [row for row in reader] 21 | 22 | tasks = VideoObjectDict() 23 | for row in rows: 24 | key = (row['video_id'], row['object_id']) 25 | tasks[key] = Task( 26 | init_time=int(row['init_frame']), 27 | last_time=int(row['last_frame']), 28 | init_rect={ 29 | 'xmin': float(row['xmin']), 30 | 'xmax': float(row['xmax']), 31 | 'ymin': float(row['ymin']), 32 | 'ymax': float(row['ymax'])}) 33 | return tasks 34 | -------------------------------------------------------------------------------- /python/oxuva/pred.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | from oxuva import util 6 | 7 | 8 | def make_prediction(present=None, 
score=None, xmin=None, ymin=None, xmax=None, ymax=None):
9 |     '''Describes the output of a tracker in one frame.'''
10 |     return {
11 |         'present': util.default_if_none(present, True),
12 |         'score': util.default_if_none(score, 0.0),
13 |         'xmin': util.default_if_none(xmin, 0.0),
14 |         'xmax': util.default_if_none(xmax, 0.0),
15 |         'ymin': util.default_if_none(ymin, 0.0),
16 |         'ymax': util.default_if_none(ymax, 0.0),
17 |     }
18 | 
--------------------------------------------------------------------------------
/python/oxuva/task.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | from __future__ import print_function
4 | 
5 | import logging
6 | logger = logging.getLogger(__name__)
7 | 
8 | from oxuva import util
9 | 
10 | 
11 | class Task(object):
12 |     '''Describes a tracking task with optional ground-truth annotations.'''
13 | 
14 |     def __init__(self, init_time, init_rect, labels=None, last_time=None, attributes=None):
15 |         '''Create a tracking task.
16 | 
17 |         Args:
18 |             init_time -- Time of supervision (in frames).
19 |             init_rect -- Rectangle dict.
20 |             labels -- SparseTimeSeries of frame annotation dicts.
21 |                 Does not include first frame.
22 |             last_time -- Time of last frame of interest, inclusive (optional).
23 |                 Consider frames init_time <= t <= last_time.
24 |             attributes -- Dictionary with extra attributes.
25 | 
26 |         If last_time is None, then the last frame of labels will be used.
27 |         '''
28 |         self.init_time = init_time
29 |         self.init_rect = init_rect
30 |         if labels:
31 |             if init_time in labels:
32 |                 raise ValueError('labels should not contain init time')
33 |         self.labels = labels
34 |         if last_time is None and labels is not None:
35 |             self.last_time = labels.sorted_keys()[-1]
36 |         else:
37 |             self.last_time = last_time
38 |         self.attributes = attributes or {}
39 | 
40 |     def len(self):
41 |         return self.last_time - self.init_time + 1
42 | 
43 | 
44 | def make_task_from_track(track):
45 |     '''Creates a tracking task from a track annotation dict (oxuva.annot.make_track_label).
46 | 
47 |     The first frame is adopted as initialization.
48 |     The remaining frames become the ground-truth rectangles.
49 |     '''
50 |     frames = list(track['frames'].sorted_items())
51 |     init_time, init_annot = frames[0]
52 |     labels = util.SparseTimeSeries(frames[1:])
53 |     # TODO: Check that init_annot['exemplar'] is True.
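    # (The 'exemplar' flag presumably marks annotations suitable for use as
    # initialization; it is not yet read from the CSV format.)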
54 | init_rect = {k: init_annot[k] for k in ['xmin', 'xmax', 'ymin', 'ymax']} 55 | attributes = {k: v for k, v in track.items() if k not in {'frames'}} 56 | return Task(init_time, init_rect, labels=labels, attributes=attributes) 57 | -------------------------------------------------------------------------------- /python/oxuva/test_assess.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | import unittest 6 | 7 | import math 8 | import numpy as np 9 | 10 | from oxuva.assess import subset_using_previous_if_missing 11 | from oxuva.assess import iou 12 | from oxuva.assess import max_geometric_mean_line 13 | from oxuva.assess import posthoc_threshold 14 | from oxuva import util 15 | 16 | 17 | class TestSubsetUsingPrevious(unittest.TestCase): 18 | 19 | def test_missing(self): 20 | source = util.SparseTimeSeries({1: 'one', 3: 'three'}) 21 | times = [1, 2, 3] 22 | got = subset_using_previous_if_missing(source, times) 23 | want = [(1, 'one'), (2, 'one'), (3, 'three')] 24 | self.assertEqual(list(got.sorted_items()), want) 25 | 26 | def test_beyond_end(self): 27 | source = util.SparseTimeSeries({1: 'one', 3: 'three'}) 28 | times = [2, 4, 6] 29 | got = subset_using_previous_if_missing(source, times) 30 | want = [(2, 'one'), (4, 'three'), (6, 'three')] 31 | self.assertEqual(list(got.sorted_items()), want) 32 | 33 | def test_before_start(self): 34 | source = util.SparseTimeSeries({3: 'three'}) 35 | times = [1, 2, 3] 36 | self.assertRaises( 37 | Exception, lambda: subset_using_previous_if_missing(source, times)) 38 | 39 | def test_idempotent(self): 40 | source = util.SparseTimeSeries({1: 'one', 3: 'three'}) 41 | times = [1, 2, 3, 4] 42 | once = subset_using_previous_if_missing(source, times) 43 | twice = subset_using_previous_if_missing(once, times) 44 | self.assertEqual(list(once.sorted_items()), list(twice.sorted_items())) 45 | 46 | 47 | class TestIOU(unittest.TestCase): 48 | 49 | def test_simple(self): 50 | p = {'xmin': 0.1, 'xmax': 0.4, 'ymin': 0.5, 'ymax': 0.9} 51 | q = {'xmin': 0.3, 'xmax': 0.5, 'ymin': 0.5, 'ymax': 1.0} 52 | # vol(p) is 0.3 * 0.4 = 0.12 53 | # vol(q) is 0.2 * 0.5 = 0.1 54 | # intersection is 0.1 * 0.4 = 0.04 55 | # union is (0.12 + 0.1) - 0.04 = 0.22 - 0.04 = 0.18 56 | want = 0.04 / 0.18 57 | np.testing.assert_almost_equal(iou(p, q), want) 58 | np.testing.assert_almost_equal(iou(q, p), want) 59 | 60 | def test_same(self): 61 | p = {'xmin': 0.1, 'xmax': 0.4, 'ymin': 0.6, 'ymax': 0.8} 62 | np.testing.assert_almost_equal(iou(p, p), 1.0) 63 | 64 | def test_empty_intersection(self): 65 | p = {'xmin': 0.1, 'xmax': 0.4, 'ymin': 0.6, 'ymax': 0.8} 66 | q = {'xmin': 0.4, 'xmax': 0.8, 'ymin': 0.6, 'ymax': 0.8} 67 | np.testing.assert_almost_equal(iou(p, q), 0.0) 68 | 69 | 70 | class TestMaxGeometricMeanLineSeg(unittest.TestCase): 71 | 72 | def test_simple(self): 73 | x1, y1 = 0, 1 74 | x2, y2 = 1, 0 75 | got = max_geometric_mean_line(x1, y1, x2, y2) 76 | want = 0.5 77 | np.testing.assert_almost_equal(got, want) 78 | 79 | def test_bound(self): 80 | x1, y1 = 0.3, 0.9 81 | x2, y2 = 0.7, 0.4 82 | max_val = max_geometric_mean_line(x1, y1, x2, y2) 83 | self.assertLessEqual(util.geometric_mean(x1, y1), max_val) 84 | self.assertLessEqual(util.geometric_mean(x2, y2), max_val) 85 | xm, ym = 0.5 * (x1 + x2), 0.5 * (y1 + y2) 86 | self.assertLessEqual(util.geometric_mean(xm, ym), max_val) 87 | 88 | 89 | class 
TestPosthocThreshold(unittest.TestCase):
90 | 
91 |     def test_main(self):
92 |         assessments = [
93 |             {'TP': 1, 'FP': 0, 'TN': 0, 'FN': 0, 'score': 40},  # true positive
94 |             {'TP': 1, 'FP': 0, 'TN': 0, 'FN': 0, 'score': 30},  # true positive
95 |             {'TP': 0, 'FP': 1, 'TN': 0, 'FN': 0, 'score': 20},  # false positive
96 |             {'TP': 1, 'FP': 0, 'TN': 0, 'FN': 0, 'score': 10},  # true positive
97 |             # Treat score zero as threshold for this imaginary classifier.
98 |             {'TP': 0, 'FP': 0, 'TN': 1, 'FN': 0, 'score': -10},  # true negative
99 |             {'TP': 0, 'FP': 0, 'TN': 0, 'FN': 1, 'score': -20},  # false negative
100 |             {'TP': 0, 'FP': 0, 'TN': 1, 'FN': 0, 'score': -30},  # true negative
101 |         ]
102 |         got = posthoc_threshold(assessments)
103 |         want = [
104 |             {'TP': 0, 'FP': 0, 'TN': 3, 'FN': 4},
105 |             {'TP': 1, 'FP': 0, 'TN': 3, 'FN': 3},
106 |             {'TP': 2, 'FP': 0, 'TN': 3, 'FN': 2},
107 |             {'TP': 2, 'FP': 1, 'TN': 2, 'FN': 2},
108 |             {'TP': 3, 'FP': 1, 'TN': 2, 'FN': 1},
109 |         ]
110 |         self.assertEqual(len(got), len(want))
111 |         for point_got, point_want in zip(got, want):
112 |             for k in ['TP', 'FP', 'TN', 'FN']:
113 |                 np.testing.assert_almost_equal(point_got[k], point_want[k])
114 | 
115 | 
116 | if __name__ == '__main__':
117 |     unittest.main()
118 | 
--------------------------------------------------------------------------------
/python/oxuva/tools/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/oxuva/long-term-tracking-benchmark/fd49bed27af85bb78120598ce65397470a387520/python/oxuva/tools/__init__.py
--------------------------------------------------------------------------------
/python/oxuva/tools/analyze.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | from __future__ import print_function
4 | 
5 | import argparse
6 | import itertools
7 | import functools
8 | import json
9 | import numpy as np
10 | import math
11 | import os
12 | import subprocess
13 | import sys
14 | 
15 | import logging
16 | logger = logging.getLogger(__name__)
17 | 
18 | import matplotlib
19 | matplotlib.use('Agg')
20 | import matplotlib.pyplot as plt
21 | 
22 | import oxuva
23 | 
24 | # /python/oxuva/tools/analyze.py
25 | TOOLS_DIR = os.path.dirname(os.path.realpath(__file__))
26 | REPO_DIR = os.path.realpath(os.path.join(TOOLS_DIR, '..', '..', '..'))
27 | 
28 | FRAME_RATE = 30
29 | MARKERS = ['o', 'v', '^', '<', '>', 's', 'd']  # '*'
30 | CMAP_PREFERENCE = ['tab10', 'tab20', 'hsv']
31 | GRID_COLOR = '0.85'  # plt.rcParams['grid.color']
32 | CLEARANCE = 1.1  # Axis range is CLEARANCE * max_value, rounded up.
33 | ARGS_FORMATTER = argparse.ArgumentDefaultsHelpFormatter  # Show default values
34 | INTERVAL_TYPES = ['before', 'after', 'between']
35 | INTERVAL_AXIS_LABEL = {
36 |     'before': 'Frames before time (min)',
37 |     'after': 'Frames after time (min)',
38 |     'between': 'Frames in interval (min)',
39 | }
40 | 
41 | 
42 | def _add_arguments(parser):
43 |     common = argparse.ArgumentParser(add_help=False)
44 |     common.add_argument('--data', default='dev', help='{dev,test,devtest}')
45 |     common.add_argument('--challenge', default='open',
46 |                         help='Assess trackers from which challenge?',
47 |                         choices=['constrained', 'open', 'open_minus_constrained'])
48 |     # common.add_argument('--verbose', '-v', action='store_true')
49 |     common.add_argument('--loglevel', default='info', choices=['info', 'debug', 'warning'])
50 |     common.add_argument('--permissive', action='store_true',
51 |                         help='Silently exclude tracks which caused an error')
52 |     # common.add_argument('--ignore_cache', action='store_true')
53 |     common.add_argument('--no_use_summary', dest='use_summary', action='store_false',
54 |                         help='Do not load/dump assessment summaries')
55 |     common.add_argument('--iou_thresholds', nargs='+', type=float, default=[0.5],
56 |                         help='List of IOU thresholds to use', metavar='IOU')
57 |     common.add_argument('--top', type=int, default=10,
58 |                         help='Only show top n trackers (zero to show all)')
59 |     common.add_argument('--no_bootstrap', dest='bootstrap', action='store_false',
60 |                         help='Disable results that require bootstrap sampling')
61 |     common.add_argument('--bootstrap_trials', type=int, default=100,
62 |                         help='Number of trials for bootstrap sampling')
63 |     common.add_argument('--errorbar_size', type=float,
64 |                         default=1.64485,  # scipy.stats.norm.ppf(0.5 + 0.9 / 2)
65 |                         help='Number of standard deviations')
66 |     common.add_argument('--convert_to_png', action='store_true',
67 |                         help='Convert PDF figures to PNG for web')
68 |     common.add_argument('--png_resolution', type=int, default=150,
69 |                         help='Dots-per-inch for PNG conversion')
70 | 
71 |     plot_args = argparse.ArgumentParser(add_help=False)
72 |     plot_args.add_argument('--width_inches', type=float, default=4.2)
73 |     plot_args.add_argument('--height_inches', type=float, default=4.0)
74 |     plot_args.add_argument('--legend_inches', type=float, default=1.3)
75 | 
76 |     tpr_tnr_args = argparse.ArgumentParser(add_help=False)
77 |     tpr_tnr_args.add_argument('--no_level_sets', dest='level_sets', action='store_false')
78 |     tpr_tnr_args.add_argument('--no_lower_bounds', dest='lower_bounds', action='store_false')
79 |     tpr_tnr_args.add_argument('--asterisk', action='store_true')
80 | 
81 |     subparsers = parser.add_subparsers(dest='subcommand', help='Analysis mode')
82 |     subparsers.required = True  # https://bugs.python.org/issue9253#msg186387
83 |     # table: Produce a table (one column per IOU threshold)
84 |     subparser = subparsers.add_parser('table', formatter_class=ARGS_FORMATTER, parents=[common])
85 |     # plot_tpr_tnr: Produce a figure (one figure per IOU threshold)
86 |     subparser = subparsers.add_parser('plot_tpr_tnr', formatter_class=ARGS_FORMATTER,
87 |                                       parents=[common, plot_args, tpr_tnr_args])
88 |     # plot_tpr_tnr_intervals: Produce a figure (one figure per IOU threshold)
89 |     subparser = subparsers.add_parser('plot_tpr_tnr_intervals', formatter_class=ARGS_FORMATTER,
90 |                                       parents=[common, plot_args, tpr_tnr_args])
91 |     subparser.add_argument('--times', nargs='+', type=int, default=[0, 60, 120, 240], help='seconds')
92 |     # plot_tpr_time: Produce a figure for interval ranges (0, t) and (t, inf).
93 | subparser = subparsers.add_parser('plot_tpr_time', formatter_class=ARGS_FORMATTER, 94 | parents=[common, plot_args]) 95 | subparser.add_argument('--max_time', type=int, default=600, help='seconds') 96 | subparser.add_argument('--time_step', type=int, default=60, help='seconds') 97 | subparser.add_argument('--no_same_axes', dest='same_axes', action='store_false') 98 | # plot_present_absent: Produce a figure (one figure per IOU threshold) 99 | subparser = subparsers.add_parser('plot_present_absent', formatter_class=ARGS_FORMATTER, 100 | parents=[common, plot_args]) 101 | 102 | 103 | def main(): 104 | parser = argparse.ArgumentParser(formatter_class=ARGS_FORMATTER) 105 | _add_arguments(parser) 106 | global args 107 | args = parser.parse_args() 108 | logging.basicConfig(level=getattr(logging, args.loglevel.upper())) 109 | 110 | dataset_names = _get_datasets(args.data) 111 | # Load tasks without annotations. 112 | dataset_tasks = { 113 | dataset: _load_tasks(os.path.join(REPO_DIR, 'dataset', 'tasks', dataset + '.csv')) 114 | for dataset in dataset_names} 115 | # Take union of all datasets. 116 | tasks = {key: task for dataset in dataset_names 117 | for key, task in dataset_tasks[dataset].items()} 118 | 119 | tracker_names = _load_tracker_names() 120 | trackers = set(tracker_names.keys()) 121 | dataset_assessments = {} 122 | for dataset in dataset_names: 123 | dataset_assessments[dataset] = _get_assessments(dataset, trackers) 124 | # Take subset of trackers for which it was possible to load results. 125 | trackers = set(dataset_assessments[dataset].keys()) 126 | if len(trackers) < 1: 127 | raise RuntimeError('could not obtain assessment of any trackers') 128 | 129 | # Assign colors and markers alphabetically to achieve invariance across plots. 130 | trackers = sorted(trackers, key=lambda s: s.lower()) 131 | color_list = _generate_colors(len(trackers)) 132 | tracker_colors = dict(zip(trackers, color_list)) 133 | tracker_markers = dict(zip(trackers, itertools.cycle(MARKERS))) 134 | 135 | # Merge tracks from all datasets. 136 | # TODO: Ensure that none have same key? 137 | assessments = {} 138 | for tracker in trackers: 139 | assessments[tracker] = {} 140 | for iou in args.iou_thresholds: 141 | assessments[tracker][iou] = functools.reduce( 142 | oxuva.union_dataset_assessment, 143 | (dataset_assessments[dataset][tracker][iou] for dataset in dataset_names), 144 | None) 145 | 146 | # Use simple metrics to get ranking. 
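    # NOTE: Assumes 0.5 is among --iou_thresholds (true by default); the ranking
    # used to select the top trackers is computed once, without bootstrap.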
147 | rank_quality = { 148 | tracker: oxuva.dataset_quality(assessments[tracker][0.5]['totals'], enable_bootstrap=False) 149 | for tracker in trackers} 150 | trackers = _sort_quality(rank_quality) 151 | top_trackers = trackers[:args.top] if args.top else trackers 152 | 153 | if args.subcommand == 'table': 154 | _print_statistics(assessments, trackers, tracker_names) 155 | elif args.subcommand == 'plot_tpr_tnr': 156 | _plot_tpr_tnr_overall(assessments, top_trackers, 157 | tracker_names, tracker_colors, tracker_markers) 158 | elif args.subcommand == 'plot_tpr_tnr_intervals': 159 | _plot_tpr_tnr_intervals(assessments, top_trackers, 160 | tracker_names, tracker_colors, tracker_markers) 161 | elif args.subcommand == 'plot_tpr_time': 162 | for iou in args.iou_thresholds: 163 | for bootstrap in ([False, True] if args.bootstrap else [False]): 164 | _plot_intervals(assessments, top_trackers, iou, bootstrap, 165 | tracker_names, tracker_colors, tracker_markers) 166 | elif args.subcommand == 'plot_present_absent': 167 | for iou in args.iou_thresholds: 168 | for bootstrap in ([False, True] if args.bootstrap else [False]): 169 | _plot_present_absent(assessments, top_trackers, iou, bootstrap, 170 | tracker_names, tracker_colors, tracker_markers) 171 | 172 | 173 | def _get_assessments(dataset, trackers): 174 | ''' 175 | Args: 176 | dataset: String that identifies dataset ("dev" or "test"). 177 | trackers: List of tracker names. 178 | 179 | Returns: 180 | Dictionary that maps [tracker][iou] to dataset assessment. 181 | Only returns assessments for subset of trackers that were successful. 182 | ''' 183 | # Create functions to load tasks with annotations on demand. 184 | # (Do not need annotations if using cached assessments.) 185 | # TODO: Code would be easier to read using a class with lazy-cached elements as members? 186 | get_annotations = oxuva.LazyCacheCaller(functools.partial( 187 | _load_tasks_with_annotations, 188 | os.path.join(REPO_DIR, 'dataset', 'annotations', dataset + '.csv'))) 189 | 190 | assessments = {} 191 | for tracker_ind, tracker in enumerate(trackers): 192 | try: 193 | log_context = 'tracker {}/{} {}'.format(tracker_ind + 1, len(trackers), tracker) 194 | tracker_assessments = {} 195 | # Load predictions at most once for all IOU thresholds (can be slow). 196 | get_predictions = oxuva.LazyCacheCaller( 197 | lambda: oxuva.load_predictions_and_select_frames( 198 | get_annotations(), 199 | os.path.join('predictions', dataset, tracker), 200 | permissive=args.permissive, 201 | log_prefix=log_context + ': ')) 202 | # Obtain results at all IOU thresholds in order to make axes equal in all graphs. 203 | # TODO: Is it unsafe to use float (iou) as dictionary key? 204 | for iou in args.iou_thresholds: 205 | logger.info('assess tracker "%s" with iou %g', tracker, iou) 206 | assess_func = lambda: oxuva.assess_dataset(get_annotations(), get_predictions(), 207 | iou, resolution_seconds=30) 208 | if args.use_summary: 209 | tracker_assessments[iou] = oxuva.cache( 210 | oxuva.Protocol( 211 | dump=oxuva.dump_dataset_assessment_json, 212 | load=oxuva.load_dataset_assessment_json, binary=False), 213 | os.path.join( 214 | 'assess', dataset, tracker, 'iou_{}.json'.format(oxuva.float2str(iou))), 215 | assess_func) 216 | else: 217 | # When it is not cached, it will include frame_assessments. 218 | # TODO: Could cache (selected frames of) predictions to file if this is slow. 
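                    # (Reached when --no_use_summary is given: the assessment is
                    # recomputed on every run and nothing is cached under assess/.)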
219 | tracker_assessments[iou] = assess_func() 220 | except IOError as ex: 221 | logger.warning('could not obtain assessment of tracker "%s" on dataset "%s": %s', 222 | tracker, dataset, ex) 223 | else: 224 | assessments[tracker] = tracker_assessments 225 | return assessments 226 | 227 | 228 | def _load_tracker_names(): 229 | with open('trackers.json', 'r') as f: 230 | trackers = json.load(f) 231 | trackers = {key: tracker for key, tracker in trackers.items() 232 | if ((args.challenge == 'open') or 233 | (args.challenge == 'constrained' and tracker['constrained']) or 234 | (args.challenge == 'open_minus_constrained' and not tracker['constrained']))} 235 | tracker_names = {key: tracker['name'] for key, tracker in trackers.items()} 236 | return tracker_names 237 | 238 | 239 | def _get_datasets(name): 240 | if name == 'devtest': 241 | return ['dev', 'test'] 242 | else: 243 | return [name] 244 | 245 | 246 | def _load_tasks(fname): 247 | logger.debug('load tasks without annotations from "%s"', fname) 248 | with open(fname, 'r') as fp: 249 | return oxuva.load_dataset_tasks_csv(fp) 250 | 251 | 252 | def _load_tasks_with_annotations(fname): 253 | logger.debug('load tasks with annotations from "%s"', fname) 254 | with open(fname, 'r') as fp: 255 | # if fname.endswith('.json'): 256 | # tracks = json.load(fp) 257 | if fname.endswith('.csv'): 258 | tracks = oxuva.load_dataset_annotations_csv(fp) 259 | else: 260 | raise ValueError('unknown extension: {}'.format(fname)) 261 | return oxuva.map_dict(oxuva.make_task_from_track, tracks) 262 | 263 | 264 | def _print_statistics(assessments, trackers, names=None): 265 | fields = ['TPR', 'TNR', 'GM', 'MaxGM'] 266 | if args.bootstrap: 267 | # Include xxx_var keys too. 268 | fields = list(itertools.chain.from_iterable([key, key + '_var'] for key in fields)) 269 | names = names or {} 270 | stats = {tracker: {iou: ( 271 | oxuva.dataset_quality(assessments[tracker][iou]['totals'], 272 | enable_bootstrap=args.bootstrap, num_trials=args.bootstrap_trials)) 273 | for iou in args.iou_thresholds} for tracker in trackers} 274 | table_dir = os.path.join('analysis', args.data, args.challenge) 275 | _ensure_dir_exists(table_dir) 276 | table_file = os.path.join(table_dir, 'table.csv') 277 | logger.info('write table to %s', table_file) 278 | with open(table_file, 'w') as f: 279 | fieldnames = ['tracker'] + [ 280 | metric + '_' + str(iou) for iou in args.iou_thresholds for metric in fields] 281 | print(','.join(fieldnames), file=f) 282 | for tracker in trackers: 283 | row = [names.get(tracker, tracker)] + [ 284 | '{:.6g}'.format(stats[tracker][iou][metric]) 285 | for iou in args.iou_thresholds for metric in fields] 286 | print(','.join(row), file=f) 287 | 288 | 289 | def _plot_tpr_tnr_overall(assessments, trackers, 290 | names=None, colors=None, markers=None): 291 | bootstrap_modes = [False, True] if args.bootstrap else [False] 292 | 293 | for iou in args.iou_thresholds: 294 | for bootstrap in bootstrap_modes: 295 | for posthoc in ([False] if bootstrap else [False, True]): 296 | _plot_tpr_tnr(('tpr_tnr_iou_' + oxuva.float2str(iou) + 297 | ('_posthoc' if posthoc else '') + 298 | ('_bootstrap' if bootstrap else '')), 299 | assessments, trackers, iou, bootstrap, posthoc, 300 | names=names, colors=colors, markers=markers, 301 | min_time_sec=None, max_time_sec=None, include_score=True) 302 | 303 | 304 | def _plot_tpr_tnr_intervals(assessments, trackers, 305 | names=None, colors=None, markers=None): 306 | modes = ['before', 'after'] 307 | intervals_sec = {} 308 | for mode in modes: 
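        # 'before' yields growing intervals [0, t]; 'after' yields suffixes [t, inf). See _make_intervals.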
309 |         intervals_sec[mode], _ = _make_intervals(args.times, mode)
310 | 
311 |     bootstrap_modes = [False, True] if args.bootstrap else [False]
312 |     for bootstrap in bootstrap_modes:
313 |         for iou in args.iou_thresholds:
314 |             # Order by performance on all frames.
315 |             stats = {tracker: oxuva.dataset_quality(assessments[tracker][iou]['totals'],
316 |                                                     enable_bootstrap=bootstrap,
317 |                                                     num_trials=args.bootstrap_trials)
318 |                      for tracker in trackers}
319 |             order = _sort_quality(stats, use_bootstrap_mean=False)
320 | 
321 |             tpr_key = 'TPR_mean' if bootstrap else 'TPR'
322 |             # Get stats for all plots to establish axis range.
323 |             # Note: This means that dataset_quality_interval() is called twice.
324 |             max_tpr = max([max([max([
325 |                 oxuva.dataset_quality_interval(
326 |                     assessments[tracker][iou]['quantized_totals'],
327 |                     min_time=None if min_time is None else FRAME_RATE * min_time,
328 |                     max_time=None if max_time is None else FRAME_RATE * max_time,
329 |                     enable_bootstrap=bootstrap, num_trials=args.bootstrap_trials)[tpr_key]
330 |                 for tracker in trackers])
331 |                 for min_time, max_time in intervals_sec[mode]])
332 |                 for mode in modes])
333 | 
334 |             for mode in modes:
335 |                 for min_time_sec, max_time_sec in intervals_sec[mode]:
336 |                     base_name = '_'.join(
337 |                         ['tpr_tnr', 'iou_' + oxuva.float2str(iou),
338 |                          'interval_{}_{}'.format(oxuva.float2str(min_time_sec),
339 |                                                  oxuva.float2str(max_time_sec))] +
340 |                         (['bootstrap'] if bootstrap else []))
341 |                     _plot_tpr_tnr(base_name, assessments, trackers, iou, bootstrap, posthoc=False,
342 |                                   min_time_sec=min_time_sec, max_time_sec=max_time_sec,
343 |                                   max_tpr=max_tpr, order=order,
344 |                                   names=names, colors=colors, markers=markers)
345 | 
346 | 
347 | def _plot_tpr_tnr(base_name, assessments, trackers, iou_threshold, bootstrap, posthoc,
348 |                   min_time_sec=None, max_time_sec=None, include_score=False,
349 |                   max_tpr=None, order=None,
350 |                   names=None, colors=None, markers=None, legend_kwargs=None):
351 |     names = names or {}
352 |     colors = colors or {}
353 |     markers = markers or {}
354 |     legend_kwargs = legend_kwargs or {}
355 | 
356 |     tpr_key = 'TPR_mean' if bootstrap else 'TPR'
357 |     tnr_key = 'TNR_mean' if bootstrap else 'TNR'
358 | 
359 |     # Compute the operating point of each tracker at the given IOU threshold.
360 |     stats = {
361 |         tracker: oxuva.dataset_quality_interval(
362 |             assessments[tracker][iou_threshold]['quantized_totals'],
363 |             min_time=None if min_time_sec is None else FRAME_RATE * min_time_sec,
364 |             max_time=None if max_time_sec is None else FRAME_RATE * max_time_sec,
365 |             enable_bootstrap=bootstrap, num_trials=args.bootstrap_trials)
366 |         for tracker in trackers}
367 |     if order is None:
368 |         order = _sort_quality(stats, use_bootstrap_mean=False)
369 | 
370 |     plt.figure(figsize=(args.width_inches, args.height_inches))
371 |     plt.xlabel('True Negative Rate (Absent)')
372 |     plt.ylabel('True Positive Rate (Present)')
373 |     if args.level_sets:
374 |         _plot_level_sets()
375 | 
376 |     for tracker in order:
377 |         if bootstrap:
378 |             plot_func = functools.partial(
379 |                 _errorbar,
380 |                 xerr=args.errorbar_size * np.sqrt([stats[tracker]['TNR_var']]),
381 |                 yerr=args.errorbar_size * np.sqrt([stats[tracker]['TPR_var']]),
382 |                 capsize=3)
383 |         else:
384 |             plot_func = plt.plot
385 |         plot_func([stats[tracker][tnr_key]], [stats[tracker][tpr_key]],
386 |                   label=_tracker_label(names.get(tracker, tracker), include_score,
387 |                                        stats[tracker], use_bootstrap_mean=False),
388 |                   color=colors.get(tracker, None),
389 |                   marker=markers.get(tracker, None),
390 |                   markerfacecolor='none', markeredgewidth=2, clip_on=False)
391 |         if args.lower_bounds:
392 |             plt.plot(
393 |                 [stats[tracker][tnr_key], 1], [stats[tracker][tpr_key], 0],
394 |                 color=colors.get(tracker, None),
395 |                 linestyle='dashed', marker='')
396 | 
397 |     if posthoc:
398 |         num_posthoc = 0
399 |         for tracker in order:
400 |             # Add posthoc-threshold curves to figure.
401 |             # TODO: Plot distribution of post-hoc curves when using bootstrap sampling?
402 |             if assessments[tracker][iou_threshold].get('frame_assessments', None) is None:
403 |                 logger.warning('cannot do posthoc curve for tracker "%s" at iou %g',
404 |                                tracker, iou_threshold)
405 |             else:
406 |                 _plot_posthoc_curve(assessments[tracker][iou_threshold]['frame_assessments'],
407 |                                     marker='', color=colors.get(tracker, None))
408 |                 num_posthoc += 1
409 |         if num_posthoc == 0:
410 |             logger.warning('skip posthoc plot: zero trackers')
411 |             return
412 | 
413 |     if max_tpr is None:
414 |         max_tpr = max([stats[tracker][tpr_key] for tracker in trackers])
415 |     plt.xlim(xmin=0, xmax=1)
416 |     plt.ylim(ymin=0, ymax=_ceil_nearest(CLEARANCE * max_tpr, 0.1))
417 |     plt.grid(color=GRID_COLOR, clip_on=False)
418 |     _hide_spines()
419 |     plot_dir = os.path.join('analysis', args.data, args.challenge)
420 |     _ensure_dir_exists(plot_dir)
421 |     _save_fig(os.path.join(plot_dir, base_name + '_no_legend.pdf'))
422 |     _legend_outside(**legend_kwargs)
423 |     _save_fig(os.path.join(plot_dir, base_name + '.pdf'))
424 | 
425 | 
426 | def _plot_posthoc_curve(assessments, **kwargs):
427 |     frames = list(itertools.chain.from_iterable(
428 |         series.values() for series in assessments.values()))
429 |     operating_points = oxuva.posthoc_threshold(frames)
430 |     metrics = list(map(oxuva.quality_metrics, operating_points))
431 |     plt.plot([point['TNR'] for point in metrics],
432 |              [point['TPR'] for point in metrics], **kwargs)
433 | 
434 | 
435 | def _plot_intervals(assessments, trackers, iou_threshold, bootstrap,
436 |                     names=None, colors=None, markers=None):
437 |     # TODO: Add errorbars using bootstrap sampling?
438 |     names = names or {}
439 |     colors = colors or {}
440 |     markers = markers or {}
441 |     times_sec = range(0, args.max_time + 1, args.time_step)
442 | 
443 |     # Get overall stats for order in legend.
444 |     overall_stats = {tracker: oxuva.dataset_quality(assessments[tracker][iou_threshold]['totals'],
445 |                                                     enable_bootstrap=bootstrap,
446 |                                                     num_trials=args.bootstrap_trials)
447 |                      for tracker in trackers}
448 |     order = _sort_quality(overall_stats, use_bootstrap_mean=False)
449 | 
450 |     intervals_sec = {}
451 |     points = {}
452 |     for mode in INTERVAL_TYPES:
453 |         intervals_sec[mode], points[mode] = _make_intervals(times_sec, mode)
454 | 
455 |     stats = {mode: {tracker: [
456 |         oxuva.dataset_quality_interval(
457 |             assessments[tracker][iou_threshold]['quantized_totals'],
458 |             min_time=None if min_time is None else FRAME_RATE * min_time,
459 |             max_time=None if max_time is None else FRAME_RATE * max_time,
460 |             enable_bootstrap=bootstrap, num_trials=args.bootstrap_trials)
461 |         for min_time, max_time in intervals_sec[mode]]
462 |         for tracker in trackers} for mode in INTERVAL_TYPES}
463 |     tpr_key = 'TPR_mean' if bootstrap else 'TPR'
464 |     # Find maximum TPR value over all plots (to have same axes).
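    # (With --no_same_axes, each interval type instead keeps its own axis maximum; see ymax below.)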
465 | max_tpr = {mode: max(s[tpr_key] for tracker in trackers for s in stats[mode][tracker]) 466 | for mode in INTERVAL_TYPES} 467 | 468 | for mode in INTERVAL_TYPES: 469 | plt.figure(figsize=(args.width_inches, args.height_inches)) 470 | plt.xlabel(INTERVAL_AXIS_LABEL[mode]) 471 | plt.ylabel('True Positive Rate') 472 | for tracker in order: 473 | tpr = [s.get(tpr_key, None) for s in stats[mode][tracker]] 474 | if bootstrap: 475 | tpr_var = [s.get('TPR_var', None) for s in stats[mode][tracker]] 476 | plot_func = functools.partial( 477 | _errorbar, 478 | yerr=args.errorbar_size * np.sqrt(tpr_var), 479 | capsize=3) 480 | else: 481 | plot_func = plt.plot 482 | plot_func(1 / 60.0 * np.asarray(points[mode]), tpr, 483 | label=names.get(tracker, tracker), 484 | marker=markers.get(tracker, None), 485 | color=colors.get(tracker, None), 486 | markerfacecolor='none', markeredgewidth=2, clip_on=False) 487 | plt.xlim(xmin=0, xmax=args.max_time / 60.0) 488 | ymax = max(max_tpr.values()) if args.same_axes else max_tpr[mode] 489 | plt.ylim(ymin=0, ymax=_ceil_nearest(CLEARANCE * ymax, 0.1)) 490 | plt.grid(color=GRID_COLOR, clip_on=False) 491 | _hide_spines() 492 | plot_dir = os.path.join('analysis', args.data, args.challenge) 493 | _ensure_dir_exists(plot_dir) 494 | base_name = ('tpr_time_iou_{}_interval_{}'.format(oxuva.float2str(iou_threshold), mode) + 495 | ('_bootstrap' if bootstrap else '')) 496 | _save_fig(os.path.join(plot_dir, base_name + '_no_legend.pdf')) 497 | _legend_outside() 498 | _save_fig(os.path.join(plot_dir, base_name + '.pdf')) 499 | 500 | 501 | def _plot_present_absent( 502 | assessments, trackers, iou_threshold, bootstrap, 503 | names=None, colors=None, markers=None): 504 | names = names or {} 505 | colors = colors or {} 506 | markers = markers or {} 507 | 508 | stats_whole = { 509 | tracker: oxuva.dataset_quality( 510 | assessments[tracker][iou_threshold]['totals'], 511 | enable_bootstrap=bootstrap, num_trials=args.bootstrap_trials) 512 | for tracker in trackers} 513 | stats_all_present = { 514 | tracker: oxuva.dataset_quality_filter( 515 | assessments[tracker][iou_threshold]['totals'], require_none_absent=True, 516 | enable_bootstrap=bootstrap, num_trials=args.bootstrap_trials) 517 | for tracker in trackers} 518 | stats_any_absent = { 519 | tracker: oxuva.dataset_quality_filter( 520 | assessments[tracker][iou_threshold]['totals'], require_some_absent=True, 521 | enable_bootstrap=bootstrap, num_trials=args.bootstrap_trials) 522 | for tracker in trackers} 523 | 524 | order = _sort_quality(stats_whole) 525 | tpr_key = 'TPR_mean' if bootstrap else 'TPR' 526 | max_tpr = max(max([stats_all_present[tracker][tpr_key] for tracker in trackers]), 527 | max([stats_any_absent[tracker][tpr_key] for tracker in trackers])) 528 | 529 | plt.figure(figsize=(args.width_inches, args.height_inches)) 530 | plt.xlabel('TPR (tracks without absent labels)') 531 | plt.ylabel('TPR (tracks with some absent labels)') 532 | for tracker in order: 533 | if bootstrap: 534 | plot_func = functools.partial( 535 | _errorbar, 536 | xerr=args.errorbar_size * np.sqrt([stats_all_present[tracker]['TPR_var']]), 537 | yerr=args.errorbar_size * np.sqrt([stats_any_absent[tracker]['TPR_var']]), 538 | capsize=3) 539 | else: 540 | plot_func = plt.plot 541 | plot_func( 542 | [stats_all_present[tracker][tpr_key]], [stats_any_absent[tracker][tpr_key]], 543 | label=names.get(tracker, tracker), 544 | color=colors.get(tracker, None), 545 | marker=markers.get(tracker, None), 546 | markerfacecolor='none', markeredgewidth=2, clip_on=False) 
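    # Use the same limit on both axes so that the dotted diagonal marks equal performance.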
547 |     plt.xlim(xmin=0, xmax=_ceil_nearest(CLEARANCE * max_tpr, 0.1))
548 |     plt.ylim(ymin=0, ymax=_ceil_nearest(CLEARANCE * max_tpr, 0.1))
549 |     plt.grid(color=GRID_COLOR, clip_on=False)
550 |     _hide_spines()
551 |     # Draw a diagonal line.
552 |     plt.plot([0, 1], [0, 1], color=GRID_COLOR, linewidth=1, linestyle='dotted')
553 |     plot_dir = os.path.join('analysis', args.data, args.challenge)
554 |     _ensure_dir_exists(plot_dir)
555 |     base_name = ('present_absent_iou_{}'.format(oxuva.float2str(iou_threshold)) +
556 |                  ('_bootstrap' if bootstrap else ''))
557 |     _save_fig(os.path.join(plot_dir, base_name + '_no_legend.pdf'))
558 |     _legend_outside()
559 |     _save_fig(os.path.join(plot_dir, base_name + '.pdf'))
560 | 
561 | 
562 | def _make_intervals(values, interval_type):
563 |     '''Produces intervals and points at which to plot them.
564 | 
565 |     Returns:
566 |         intervals, points
567 | 
568 |     Example:
569 |         >> _make_intervals([0, 1, 2, 3], 'before')
570 |         [(0, 1), (0, 2), (0, 3)], [1, 2, 3]
571 | 
572 |         >> _make_intervals([0, 1, 2, 3], 'after')
573 |         [(0, inf), (1, inf), (2, inf), (3, inf)], [0, 1, 2, 3]
574 | 
575 |         >> _make_intervals([0, 1, 2, 3], 'between')
576 |         [(0, 1), (1, 2), (2, 3)], [0.5, 1.5, 2.5]
577 |     '''
578 |     if interval_type == 'before':
579 |         intervals = [(0, x) for x in values if x > 0]
580 |         points = [x for x in values if x > 0]
581 |     elif interval_type == 'after':
582 |         intervals = [(x, float('inf')) for x in values]
583 |         points = list(values)
584 |     elif interval_type == 'between':
585 |         intervals = list(zip(values, values[1:]))
586 |         points = [0.5 * (a + b) for a, b in intervals]
587 |     return intervals, points
588 | 
589 | 
590 | def _ensure_dir_exists(path):
591 |     if not os.path.exists(path):
592 |         os.makedirs(path)
593 | 
594 | 
595 | def _generate_colors(n):
596 |     # return [colorsys.hsv_to_rgb(i / n, s, v) for i in range(n)]
597 |     for cmap_name in CMAP_PREFERENCE:
598 |         cmap = matplotlib.cm.get_cmap(cmap_name)
599 |         if n <= cmap.N:
600 |             break
601 |     return [cmap(float(i) / n) for i in range(n)]
602 | 
603 | 
604 | def _save_fig(plot_file):
605 |     logger.info('write plot to %s', plot_file)
606 |     plt.savefig(plot_file)
607 | 
608 |     if args.convert_to_png:
609 |         name, ext = os.path.splitext(plot_file)
610 |         if ext.lower() != '.pdf':
611 |             raise ValueError('plot file does not have pdf extension: {:s}'.format(plot_file))
612 |         logger.debug('convert to png: %s', plot_file)
613 |         png_file = name + '.png'
614 |         try:
615 |             subprocess.check_call(['convert', '-density', str(args.png_resolution), plot_file,
616 |                                    '-quality', '90', png_file])
617 |         except subprocess.CalledProcessError as ex:
618 |             logger.warning('could not convert to png: %s', ex)
619 | 
620 | 
621 | def _plot_level_sets(n=10, num_points=100):
622 |     x = np.linspace(0, 1, num_points + 1)[1:]
623 |     for gm in np.asfarray(range(1, n)) / n:
624 |         # gm = sqrt(x*y); y = gm^2 / x
625 |         y = gm**2 / x
626 |         plt.plot(x, y, color=GRID_COLOR, linewidth=1, linestyle='dotted')
627 | 
628 | 
629 | def _ceil_nearest(x, step):
630 |     '''Rounds up to nearest multiple of step.'''
631 |     return math.ceil(x / step) * step
632 | 
633 | 
634 | def _sort_quality(quality, use_bootstrap_mean=False):
635 |     '''
636 |     Args:
637 |         quality: Dict that maps tracker name to quality dict.
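
    Returns:
        List of tracker names, sorted from best to worst.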
638 |     '''
639 |     def sort_key(tracker):
640 |         return _quality_sort_key(quality[tracker],
641 |                                  use_bootstrap_mean=use_bootstrap_mean)
642 | 
643 |     return sorted(quality.keys(), key=sort_key, reverse=True)
644 | 
645 | 
646 | def _quality_sort_key(stats, use_bootstrap_mean=False):
647 |     if use_bootstrap_mean:
648 |         return (stats['MaxGM_mean'], stats['TPR_mean'], stats['TNR_mean'])
649 |     else:
650 |         return (stats['MaxGM'], stats['TPR'], stats['TNR'])
651 | 
652 | 
653 | def _tracker_label(name, include_score, stats, use_bootstrap_mean):
654 |     if not include_score:
655 |         return name
656 |     gm_key = 'GM_mean' if use_bootstrap_mean else 'GM'
657 |     max_gm_key = 'MaxGM_mean' if use_bootstrap_mean else 'MaxGM'
658 |     max_at_point = abs(stats[gm_key] - stats[max_gm_key]) <= 1e-3
659 |     asterisk = '*' if args.asterisk and max_at_point else ''
660 |     return '{} ({:.2f}{})'.format(name, stats[max_gm_key], asterisk)
661 | 
662 | 
663 | def _errorbar(*args, **kwargs):
664 |     container = plt.errorbar(*args, **kwargs)
665 |     # Disable clipping for caps of errorbars.
666 |     _, caplines, barlinecols = container
667 |     for capline in caplines:
668 |         capline.set_clip_on(False)
669 |     for barlinecol in barlinecols:
670 |         barlinecol.set_clip_on(False)
671 |     # return container
672 | 
673 | 
674 | def _hide_spines():
675 |     ax = plt.gca()
676 |     ax.spines['left'].set_visible(False)
677 |     ax.spines['right'].set_visible(False)
678 |     ax.spines['top'].set_visible(False)
679 |     ax.spines['bottom'].set_visible(False)
680 |     plt.tick_params(axis='both', which='both', top=False, bottom=False, left=False, right=False)
681 | 
682 | 
683 | def _legend_no_errorbars(**kwargs):
684 |     '''Replaces plt.legend(). Excludes errorbars from legend.'''
685 |     # https://swdg.io/2015/errorbar-legends/
686 |     ax = plt.gca()
687 |     handles, labels = ax.get_legend_handles_labels()
688 |     handles = [h[0] if isinstance(h, matplotlib.container.ErrorbarContainer) else h
689 |                for h in handles]
690 |     ax.legend(handles, labels, **kwargs)
691 | 
692 | 
693 | def _legend_outside(**kwargs):
694 |     fig = plt.gcf()
695 |     fig.set_size_inches(args.width_inches + args.legend_inches, args.height_inches)
696 |     frac = float(args.width_inches) / (args.width_inches + args.legend_inches)
697 |     ax = plt.gca()
698 |     box = ax.get_position()
699 |     ax.set_position([box.x0, box.y0, box.width * frac, box.height])
700 |     _legend_no_errorbars(loc='center left', bbox_to_anchor=(1.02, 0.5), **kwargs)
701 | 
702 | 
703 | if __name__ == '__main__':
704 |     main()
705 | 
--------------------------------------------------------------------------------
/python/oxuva/tools/visualize.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | from __future__ import print_function
4 | 
5 | import argparse
6 | import contextlib
7 | import os
8 | import shutil
9 | import subprocess
10 | import tempfile
11 | from PIL import Image, ImageDraw, ImageColor, ImageFont
12 | 
13 | import logging
14 | logger = logging.getLogger(__name__)
15 | 
16 | import oxuva
17 | 
18 | 
19 | # /python/oxuva/tools/visualize.py
20 | TOOLS_DIR = os.path.dirname(os.path.realpath(__file__))
21 | REPO_DIR = os.path.realpath(os.path.join(TOOLS_DIR, '..', '..', '..'))
22 | DATA_DIR = os.path.join(REPO_DIR, 'dataset')
23 | 
24 | 
25 | def _add_arguments(parser):
26 |     parser.add_argument('tracker', help='Name of tracker to visualize')
27 |     parser.add_argument('--loglevel', default='info', choices=['info', 'debug', 'warning'])
28 |     parser.add_argument('--data', default='dev', choices=['dev', 'test'], help='{dev,test}')
29 |     # TODO: Allow user to specify single video?
30 |     # TODO: Plot multiple trackers together?
31 | 
32 | 
33 | def main():
34 |     parser = argparse.ArgumentParser()
35 |     _add_arguments(parser)
36 |     args = parser.parse_args()
37 |     logging.basicConfig(level=getattr(logging, args.loglevel.upper()))
38 | 
39 |     tasks_file = os.path.join(DATA_DIR, 'tasks', args.data + '.csv')
40 |     images_dir = os.path.join(DATA_DIR, 'images', args.data)
41 |     predictions_dir = os.path.join('predictions', args.data, args.tracker)
42 |     output_dir = os.path.join('visualize', args.data, args.tracker)
43 |     if not os.path.exists(output_dir):
44 |         os.makedirs(output_dir, 0o755)
45 | 
46 |     with open(tasks_file, 'r') as fp:
47 |         tasks = oxuva.load_dataset_tasks_csv(fp)
48 | 
49 |     for i, (key, task) in enumerate(sorted(tasks.items())):
50 |         vid, obj = key
51 |         logger.info('task %d/%d: (%s, %s)', i + 1, len(tasks), vid, obj)
52 |         video_file = os.path.join(output_dir, '{}_{}.mp4'.format(vid, obj))
53 |         if os.path.exists(video_file):
54 |             logger.debug('skip %s %s: already exists', vid, obj)
55 |             continue
56 |         try:
57 |             predictions_file = os.path.join(predictions_dir, '{}_{}.csv'.format(vid, obj))
58 |             with open(predictions_file, 'r') as fp:
59 |                 predictions = oxuva.load_predictions_csv(fp)
60 |             image_file_func = lambda t: os.path.join(images_dir, vid, '{:06d}.jpeg'.format(t))
61 |             output_file = video_file  # Same path whose existence was checked above.
62 |             _visualize_video(task, predictions, image_file_func, output_file)
63 |         except (IOError, subprocess.CalledProcessError) as ex:
64 |             logger.warning('could not visualize %s %s: %s', vid, obj, ex)
65 | 
66 | 
67 | def _visualize_video(task, predictions, image_file_func, output_file):
68 |     times = list(range(task.init_time, task.last_time + 1))
69 |     predictions = oxuva.subset_using_previous_if_missing(predictions, times[1:])
70 | 
71 |     with _make_temp_dir(prefix='tmp-visualize-') as tmp_dir:
72 |         logger.debug('write image files to %s', tmp_dir)
73 |         # (The temporary directory is removed by _make_temp_dir when this block exits.)
74 |         pattern = os.path.join(tmp_dir, '%06d.jpeg')  # printf-style pattern required by ffmpeg.
75 | 
76 |         for i, t in enumerate(times):
77 |             im = Image.open(image_file_func(t))
78 |             draw = ImageDraw.Draw(im)
79 |             if t == task.init_time:
80 |                 # Draw initial rectangle.
81 |                 draw.rectangle(_pil_rect(task.init_rect, im.size), outline=_get_color('green'))
82 |             else:
83 |                 # Draw predicted rectangle.
84 |                 draw.rectangle(_pil_rect(predictions[t], im.size), outline=_get_color('yellow'))
85 |             del draw
86 |             im.save(pattern % i)
87 | 
88 |         # Make the video from the frame images with ffmpeg.
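        # Write to a temporary name and rename on success, so that an interrupted run
        # cannot leave a partial video that main() would skip as already complete.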
89 | tmp_output_file = _tmp_name(output_file) 90 | command = _ffmpeg_command(['-i', pattern, 91 | '-r', '30', # fps 92 | '-vf', 'scale=trunc(iw/2)*2:trunc(ih/2)*2', 93 | tmp_output_file]) 94 | subprocess.check_call(command) 95 | os.rename(tmp_output_file, output_file) 96 | 97 | 98 | def _pil_rect(rect, size_xy): 99 | width, height = size_xy 100 | xmin = int(round(rect['xmin'] * width)) 101 | xmax = int(round(rect['xmax'] * width)) 102 | ymin = int(round(rect['ymin'] * height)) 103 | ymax = int(round(rect['ymax'] * height)) 104 | return [(xmin, ymin), (xmax, ymax)] 105 | 106 | 107 | # https://github.com/mrmrs/colors/blob/master/js/colors.js 108 | NICE_COLORS = { 109 | 'aqua': '#7fdbff', 110 | 'blue': '#0074d9', 111 | 'lime': '#01ff70', 112 | 'navy': '#001f3f', 113 | 'teal': '#39cccc', 114 | 'olive': '#3d9970', 115 | 'green': '#2ecc40', 116 | 'red': '#ff4136', 117 | 'maroon': '#85144b', 118 | 'orange': '#ff851b', 119 | 'purple': '#b10dc9', 120 | 'yellow': '#ffdc00', 121 | 'fuchsia': '#f012be', 122 | 'gray': '#aaaaaa', 123 | 'white': '#ffffff', 124 | 'black': '#111111', 125 | 'silver': '#dddddd', 126 | } 127 | 128 | 129 | def _get_color(name): 130 | return ImageColor.getrgb(NICE_COLORS[name]) 131 | 132 | 133 | def _ffmpeg_command(args): 134 | return ( 135 | ['ffmpeg', 136 | '-loglevel', 'error', # Quiet. 137 | '-y', # Overwrite output without asking. 138 | '-nostdin', # No interaction with user (q to quit). 139 | ] + args) 140 | 141 | 142 | def _tmp_name(fname): 143 | head, tail = os.path.split(fname) 144 | return os.path.join(head, 'tmp_' + tail) 145 | 146 | 147 | @contextlib.contextmanager 148 | def _make_temp_dir(*args, **kwargs): 149 | temp_dir = tempfile.mkdtemp(*args, **kwargs) 150 | try: 151 | yield temp_dir 152 | finally: 153 | shutil.rmtree(temp_dir) 154 | 155 | 156 | if __name__ == '__main__': 157 | main() 158 | -------------------------------------------------------------------------------- /python/oxuva/util.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | import collections 6 | import functools 7 | import json 8 | import numpy as np 9 | import os 10 | import pickle 11 | import sys 12 | 13 | import logging 14 | logger = logging.getLogger(__name__) 15 | 16 | 17 | def str2bool(x): 18 | x = x.strip().lower() 19 | if x in ['t', 'true', 'y', 'yes', '1']: 20 | return True 21 | if x in ['f', 'false', 'n', 'no', '0']: 22 | return False 23 | raise ValueError('warning: unclear value: {}'.format(x)) 24 | 25 | 26 | def str2bool_or_none(x): 27 | try: 28 | return str2bool(x) 29 | except ValueError: 30 | return None 31 | 32 | 33 | def bool2str(x): 34 | return str(x).lower() 35 | 36 | 37 | def default_if_none(x, value): 38 | return value if x is None else x 39 | 40 | 41 | def harmonic_mean(*args): 42 | assert all([x >= 0 for x in args]) 43 | if any([x == 0 for x in args]): 44 | return 0. 45 | return np.asscalar(1. / np.mean(1. / np.asfarray(args))) 46 | 47 | 48 | def geometric_mean(*args): 49 | with np.errstate(divide='ignore'): 50 | # log(zero) leads to -inf 51 | # log(negative) leads to nan 52 | # log(nan) leads to nan 53 | # nan + anything is nan 54 | # -inf + (any finite value) is -inf 55 | # exp(-inf) is 0 56 | return np.exp(np.mean(np.log(args))).tolist() 57 | 58 | 59 | def cache(protocol, filename, func, makedir=True, ignore_existing=False): 60 | '''Caches the result of a function in a file. 
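
    The result is serialized with the given protocol (e.g. protocol_json below).
    The cache file is written under a temporary name and then renamed, so that an
    interrupted run cannot leave a partial cache file behind.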
61 | 62 | Args: 63 | func -- Function with no arguments. 64 | makedir -- Create parent directory if it does not exist. 65 | ignore_existing -- Ignore existing cache file and call function. 66 | If it existed, the old cache file will be over-written. 67 | ''' 68 | if (not ignore_existing) and os.path.exists(filename): 69 | logger.info('load from cache: %s', filename) 70 | with open(filename, 'rb' if protocol.binary else 'r') as r: 71 | result = protocol.load(r) 72 | else: 73 | logger.debug('cache file not found: %s', filename) 74 | dir = os.path.dirname(filename) 75 | if makedir and (not os.path.exists(dir)): 76 | os.makedirs(dir) 77 | result = func() 78 | # Write to a temporary file and then perform atomic rename. 79 | # This guards against partial cache files. 80 | tmp = filename + '.tmp' 81 | with open(tmp, 'wb' if protocol.binary else 'w') as w: 82 | protocol.dump(result, w) 83 | os.rename(tmp, filename) 84 | return result 85 | 86 | 87 | Protocol = collections.namedtuple('Protocol', ['dump', 'load', 'binary']) 88 | protocol_json = Protocol(dump=functools.partial(json.dump, sort_keys=True), 89 | load=json.load, binary=False) 90 | protocol_pickle = Protocol(dump=pickle.dump, load=pickle.load, binary=True) 91 | 92 | cache_json = functools.partial(cache, protocol_json) 93 | cache_pickle = functools.partial(cache, protocol_pickle) 94 | 95 | 96 | def dict_sum(xs, initializer=None): 97 | if initializer is None: 98 | total = {} 99 | else: 100 | total = dict(initializer) 101 | for x in xs: 102 | for k, v in x.items(): 103 | if k in total: 104 | total[k] += v 105 | else: 106 | total[k] = v 107 | return total 108 | 109 | 110 | def dict_sum_strict(xs, initializer): 111 | total = dict(initializer) 112 | for x in xs: 113 | for k in initializer.keys(): 114 | total[k] += x[k] 115 | return total 116 | 117 | 118 | def map_dict(f, x): 119 | return {k: f(v) for k, v in x.items()} 120 | 121 | 122 | class SparseTimeSeries(object): 123 | '''Dictionary with integer keys in sorted order.''' 124 | 125 | def __init__(self, frames=None): 126 | self._frames = {} if frames is None else dict(frames) 127 | 128 | def __len__(self): 129 | return len(self._frames) 130 | 131 | def __getitem__(self, t): 132 | return self._frames[t] 133 | 134 | def __setitem__(self, t, value): 135 | self._frames[t] = value 136 | 137 | def __delitem__(self, t): 138 | del self._frames[t] 139 | 140 | def get(self, t, default): 141 | return self._frames.get(t, default) 142 | 143 | def setdefault(self, t, default): 144 | return self._frames.setdefault(t, default) 145 | 146 | def keys(self): 147 | return self._frames.keys() 148 | 149 | def sorted_keys(self): 150 | '''Returns times in sequential order.''' 151 | return sorted(self._frames.keys()) 152 | 153 | def values(self): 154 | return self._frames.values() 155 | 156 | def sorted_items(self): 157 | '''Returns (time, value) pairs in sequential order.''' 158 | times = sorted(self._frames.keys()) 159 | return zip(times, [self._frames[t] for t in times]) 160 | 161 | def items(self): 162 | return self._frames.items() 163 | 164 | def __iter__(self): 165 | for t in sorted(self._frames.keys()): 166 | yield t 167 | 168 | def __contains__(self, t): 169 | return t in self._frames 170 | 171 | 172 | def select_interval(series, min_time=None, max_time=None, init_time=0): 173 | return SparseTimeSeries({ 174 | t: x for t, x in series.items() 175 | if ((min_time is None or min_time <= t - init_time) and 176 | (max_time is None or t - init_time <= max_time))}) 177 | 178 | 179 | class LazyCacheCaller(object): 180 
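    '''Wraps a function of no arguments so that it is evaluated at most once.

    The result of the first call is cached and returned by every later call.

    Example (expensive_load is a hypothetical function):
        >> load = LazyCacheCaller(expensive_load)
        >> a = load()  # Calls expensive_load.
        >> b = load()  # Returns the cached result.
    '''
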
181 |     def __init__(self, func):
182 |         self.func = func
183 |         self.evaluated = False
184 |         self.result = None
185 | 
186 |     def __call__(self):
187 |         if not self.evaluated:
188 |             self.result = self.func()
189 |             self.evaluated = True
190 |         return self.result
191 | 
192 | 
193 | def float2str(x):
194 |     return str(x).replace('.', 'd')  # e.g. 0.5 -> '0d5', safe for use in filenames.
195 | 
--------------------------------------------------------------------------------
/pythonpath.sh:
--------------------------------------------------------------------------------
1 | dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
2 | export PYTHONPATH="$dir/python:$PYTHONPATH"
3 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | matplotlib
3 | 
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | [pycodestyle]
2 | ignore = E501,E402,E731
3 | 
--------------------------------------------------------------------------------