# Long-term Visual Tracking:

This page focuses on monitoring the state-of-the-art performance for the long-term tracking task (if you are interested in the short-term tracking task, please visit **[here](https://github.com/wangdongdut/Online-Visual-Tracking-SOTA)**). If you are also interested in resources on paper writing (computer vision), please visit **[here](https://github.com/wangdongdut/PaperWriting)**.

### Survey

* **Chang Liu, Xiao-Fan Chen, Chun-Juan Bo, Dong Wang. "[Long-term Visual Tracking: Review and Experimental Comparison](https://link.springer.com/content/pdf/10.1007/s11633-022-1344-1)." Machine Intelligence Research, 2022. [[PDF](https://link.springer.com/content/pdf/10.1007/s11633-022-1344-1.pdf)] :star2:**
* Chang Liu, Xiao-Fan Chen, Chun-Juan Bo, Dong Wang. "[长时视觉目标跟踪前沿简介](https://github.com/wangdongdut/Long-term-Visual-Tracking/blob/master/%5B202105%5D%E9%95%BF%E6%97%B6%E8%A7%86%E8%A7%89%E7%9B%AE%E6%A0%87%E8%B7%9F%E8%B8%AA%E5%89%8D%E6%B2%BF%E7%AE%80%E4%BB%8B.pdf)" (A Brief Introduction to the Frontiers of Long-term Visual Object Tracking). CAA-PRMI Technical Committee Newsletter, 2021, Issue 1 (Issue 13 overall).
[[PDF](https://github.com/wangdongdut/Long-term-Visual-Tracking/blob/master/%5B202105%5D%E9%95%BF%E6%97%B6%E8%A7%86%E8%A7%89%E7%9B%AE%E6%A0%87%E8%B7%9F%E8%B8%AA%E5%89%8D%E6%B2%BF%E7%AE%80%E4%BB%8B.pdf)]

## Benchmark Results:

* **OxUvA:star2:**

| Tracker | MaxGM | Speed (fps) | Paper/Code |
|:----------- |:----------------:|:----------------:|:----------------:|
| KeepTrack (ICCV21) | 0.809 | 18 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2103.16556)/[Code](https://github.com/visionml/pytracking) |
| **LTMU (CVPR20)** | 0.751 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
| Siam R-CNN (CVPR20) | 0.723 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
| DMTrack (CVPR21) | 0.688 | 31 (Titan XP) | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Distractor-Aware_Fast_Tracking_via_Dynamic_Convolutions_and_MOT_Philosophy_CVPR_2021_paper.pdf)/[Project](https://github.com/hqucv/dmtrack) |
| **SPLT (ICCV19)** | 0.622 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
| GlobalTrack (AAAI20) | 0.603 | 6 (GTX TitanX) | [Paper](https://arxiv.org/abs/1912.08531)/[Code](https://github.com/huanglianghua/GlobalTrack) |
| **MBMD (Arxiv)** | 0.544 | 4 (GTX 1080Ti) | [Paper](https://arxiv.org/abs/1809.04320)/[Code](https://github.com/xiaobai1217/MBMD) |
| SiamFC+R (ECCV18) | 0.454 | 52 (Unknown GPU) | [Paper](https://arxiv.org/pdf/1803.09502.pdf)/[Code](https://github.com/oxuva/long-term-tracking-benchmark) |

* OxUvA Leaderboard: https://competitions.codalab.org/competitions/19529#results
* SiamFC+R is the best tracker in the original [OxUvA](https://arxiv.org/pdf/1803.09502.pdf) paper.
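OxUvA ranks trackers by MaxGM: the geometric mean of the true-positive rate TPR (present frames localized with sufficient overlap) and the true-negative rate TNR (absent frames correctly reported as absent), maximized over random mixing with an "always report absent" strategy, so trackers that never predict absence still receive a meaningful score. A minimal sketch of that definition (the function and grid size are illustrative; see the OxUvA paper for the authoritative protocol):

```python
import numpy as np

def max_gm(tpr: float, tnr: float, grid: int = 1001) -> float:
    """MaxGM: mix the tracker's reports with an 'always absent' strategy
    at rate p, then take the best geometric mean of the mixed
    true-positive and true-negative rates over p in [0, 1]."""
    p = np.linspace(0.0, 1.0, grid)  # probability of overriding with 'absent'
    gm = np.sqrt(((1.0 - p) * tpr) * ((1.0 - p) * tnr + p))
    return float(gm.max())
```

Even with `tnr = 0` (a tracker that never reports absence), MaxGM stays positive because some mixing rate p > 0 trades a little TPR for nonzero TNR.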

* **TLP:star2:**

| Tracker | Success Score | Speed (fps) | Paper/Code |
|:----------- |:----------------:|:----------------:|:----------------:|
| **LTMU (CVPR20)** | 0.571 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
| DMTrack (CVPR21) | 0.541 | 31 (Titan XP) | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Distractor-Aware_Fast_Tracking_via_Dynamic_Convolutions_and_MOT_Philosophy_CVPR_2021_paper.pdf)/[Project](https://github.com/hqucv/dmtrack) |
| GlobalTrack (AAAI20) | 0.520 | 6 (GTX TitanX) | [Paper](https://arxiv.org/abs/1912.08531)/[Code](https://github.com/huanglianghua/GlobalTrack) |
| **SPLT (ICCV19)** | 0.416 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
| MDNet (CVPR16) | 0.372 | 5 (GTX 1080Ti) | [Paper](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Nam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.pdf)/[Code](https://github.com/hyeonseobnam/py-MDNet) |

* MDNet is the best tracker in the original [TLP](https://amoudgl.github.io/tlp/) paper.
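The Success Score used by TLP and LaSOT is the area under the success plot: for each overlap threshold, the fraction of frames whose predicted box has IoU with the ground truth above that threshold, averaged over thresholds. A minimal sketch, assuming axis-aligned boxes given as (x, y, w, h); toolkits differ in details (strict vs. non-strict inequality, handling of absent frames), so treat this as an approximation of each benchmark's official toolkit:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_score(pred, gt, thresholds=np.linspace(0.0, 1.0, 21)):
    """Mean over IoU thresholds of the fraction of frames exceeding each
    threshold, i.e. the usual AUC-of-success-plot number."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred, gt)])
    return float(np.mean([(overlaps > t).mean() for t in thresholds]))
```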

* **LaSOT:star2:**

| Tracker | Success Score | Speed (fps) | Paper/Code |
|:----------- |:----------------:|:----------------:|:----------------:|
| **STARK (ICCV21)** | 0.671 | 32 (Tesla V100) | [Paper](https://arxiv.org/abs/2103.17154)/[Code](https://github.com/researchmm/Stark) |
| KeepTrack (ICCV21) | 0.671 | 18 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2103.16556)/[Code](https://github.com/visionml/pytracking) |
| **ARDiMPsuper (CVPR21)** | 0.653 | 33 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2012.06815)/[Code](https://github.com/MasterBin-IIAU/AlphaRefine) |
| **TransT (CVPR21)** | 0.649 | 50 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2103.15436)/[Code](https://github.com/chenxin-dlut/TransT) |
| Siam R-CNN (CVPR20) | 0.648 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
| TrDimp (CVPR21) | 0.639 | 26 (GTX 1080Ti) | [Paper](https://arxiv.org/abs/2103.11681)/[Code](https://github.com/594422814/TransformerTrack) |
| PrDiMP50 (CVPR20) | 0.598 | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
| DMTrack (CVPR21) | 0.580 | 31 (Titan XP) | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Distractor-Aware_Fast_Tracking_via_Dynamic_Convolutions_and_MOT_Philosophy_CVPR_2021_paper.pdf)/[Project](https://github.com/hqucv/dmtrack) |
| **LTMU (CVPR20)** | 0.572 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
| DiMP50 (ICCV19) | 0.568 | 43 (GTX 1080) | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
| Ocean (ECCV20) | 0.560 | 25 (Tesla V100) | [Paper](https://arxiv.org/abs/2006.10721)/[Code](https://github.com/researchmm/TracKit) |
| GlobalTrack (AAAI20) | 0.521 | 6 (GTX TitanX) | [Paper](https://arxiv.org/abs/1912.08531)/[Code](https://github.com/huanglianghua/GlobalTrack) |
| **SPLT (ICCV19)** | 0.426 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
| MDNet (CVPR16) | 0.397 | 5 (GTX 1080Ti) | [Paper](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Nam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.pdf)/[Code](https://github.com/hyeonseobnam/py-MDNet) |

* **Baseline (short-term): SiamRPN++, Ocean, DiMP, PrDiMP, Siam R-CNN, MDNet**
* **Baseline (long-term): SPLT, GlobalTrack, LTMU, Siam R-CNN**
* MDNet is the best tracker in the original [LaSOT](https://cis.temple.edu/lasot/) paper.
* **[paperswithcode-SOTA](https://paperswithcode.com/sota/visual-object-tracking-on-lasot): https://paperswithcode.com/sota/visual-object-tracking-on-lasot**

* **Previous Benchmark Results** (see below)

## Benchmark

* **List:**

| Datasets | #videos | #total/min/max/average frames | Absent Label |
|:----------- |:----------------:|:----------------:|:----------------:|
| [VOT2019-LT/VOT2020-LT/VOT2021-LT](https://www.votchallenge.net/) | **50** | XXXX/XXXX/XXXX/XXXX | Yes |
| [TLP](https://amoudgl.github.io/tlp/) | **50** | XXXX/XXXX/XXXX/XXXX | No |
| [OxUvA](https://oxuva.github.io/long-term-tracking-benchmark/) | 366 (dev-200/test-**166**) | XXXX/XXXX/XXXX/XXXX | Yes |
| [LaSOT](https://cis.temple.edu/lasot/) | 1,400 (I-all-1,400/II-test-**280**) | 3.52M/1,000/11,397/2,506 | Yes |

* [UAV-20L](https://uav123.org/) has been included in VOT2018-LT/VOT2019-LT/VOT2020-LT.
* [VOT2018-LT](http://www.votchallenge.net/vot2018/) is a subset of VOT2019-LT/VOT2020-LT/VOT2021-LT. VOT2019-LT, VOT2020-LT and VOT2021-LT are the same.

* **VOT:**
[[Visual Object Tracking Challenge](http://www.votchallenge.net/)]
  * [[VOT2021LT](http://www.votchallenge.net/vot2021/)][[Report](http://prints.vicos.si/publications/384/)]
  * [[VOT2020LT](http://www.votchallenge.net/vot2020/)][[Report](http://prints.vicos.si/publications/384/)]
  * [[VOT2019LT](http://www.votchallenge.net/vot2019/)][[Report](http://prints.vicos.si/publications/375/)]
  * [[VOT2018LT](http://www.votchallenge.net/vot2018/)][[Report](http://prints.vicos.si/publications/365/)]

* **OxUvA:** Jack Valmadre, Luca Bertinetto, João F. Henriques, Ran Tao, Andrea Vedaldi, Arnold Smeulders, Philip Torr, Efstratios Gavves.
"Long-term Tracking in the Wild: A Benchmark." ECCV (2018).
[[paper](https://arxiv.org/pdf/1803.09502.pdf)]
[[project](https://oxuva.github.io/long-term-tracking-benchmark/)]

* **TLP:** Abhinav Moudgil, Vineet Gandhi.
"Long-term Visual Object Tracking Benchmark." ACCV (2018).
[[paper](https://arxiv.org/abs/1712.01358)]
[[project](https://amoudgl.github.io/tlp/)]

* **CDTB:** Alan Lukežič, Ugur Kart, Jani Käpylä, Ahmed Durmush, Joni-Kristian Kämäräinen, Jiří Matas, Matej Kristan.
"CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark." ICCV (2019).
[[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Lukezic_CDTB_A_Color_and_Depth_Visual_Object_Tracking_Dataset_and_ICCV_2019_paper.pdf)]
**`RGB-D Long-term`**

* **LaSOT:** Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, Haibin Ling.
"LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking." CVPR (2019).
[[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_LaSOT_A_High-Quality_Benchmark_for_Large-Scale_Single_Object_Tracking_CVPR_2019_paper.pdf)]
[[project](https://cis.temple.edu/lasot/)]
`LaSOT is not a typical long-term dataset, but it is a good choice for connecting long-term and short-term trackers. Short-term trackers usually drift very easily on long-term datasets since they have no re-detection module. Long-term trackers also achieve unsatisfactory performance on short-term datasets, since the tested sequences are often very short and the evaluation criteria pay little attention to the re-detection capability (especially VOT's EAO). LaSOT is a large-scale dataset with long sequences and precision/success criteria. Thus, it is a good choice if you want to fairly compare the performance of long-term and short-term trackers in one figure/table.`

* **UAV20L:** Matthias Mueller, Neil Smith and Bernard Ghanem.
"A Benchmark and Simulator for UAV Tracking." ECCV (2016).
[[paper](https://ivul.kaust.edu.sa/Documents/Publications/2016/A%20Benchmark%20and%20Simulator%20for%20UAV%20Tracking.pdf)]
[[project](https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx)]
[[dataset](https://ivul.kaust.edu.sa/Pages/Dataset-UAV123.aspx)]
`All 20 videos of UAV20L have been included in the VOT2018LT dataset.`

### Recent Long-term Trackers

* **STARK: Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, Huchuan Lu.**
"Learning Spatio-Temporal Transformer for Visual Tracking." ICCV (2021).
[[Paper](https://arxiv.org/abs/2103.17154)]
[[Code](https://github.com/researchmm/Stark)]

* **KeepTrack: Christoph Mayer, Martin Danelljan, Danda Pani Paudel, Luc Van Gool.**
"Learning Target Candidate Association to Keep Track of What Not to Track." ICCV (2021).
[[Paper](https://arxiv.org/abs/2103.16556)]
[[Project](https://github.com/visionml/pytracking)]

* **DMTrack: Zikai Zhang, Bineng Zhong, Shengping Zhang, Zhenjun Tang, Xin Liu, Zhaoxiang Zhang.**
"Distractor-Aware Fast Tracking via Dynamic Convolutions and MOT Philosophy." CVPR (2021).
[[Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Distractor-Aware_Fast_Tracking_via_Dynamic_Convolutions_and_MOT_Philosophy_CVPR_2021_paper.pdf)]
[[Project](https://github.com/hqucv/dmtrack)]

* **LTMU: Kenan Dai, Yunhua Zhang, Dong Wang, Jianhua Li, Huchuan Lu, Xiaoyun Yang.**
**"High-Performance Long-Term Tracking with Meta-Updater." CVPR (2020).**
[[Paper](https://arxiv.org/abs/2004.00305)]
[[Code](https://github.com/Daikenan/LTMU)]
**VOT2019-LT Winner**:star2:, **VOT2020-LT Winner**:star2:
`1. This work is an improved version of the VOT2019-LT winner, `**[[LT_DSE](https://github.com/Daikenan/LT_DSE)]**.
`2. The baseline version is the VOT2020-LT winner, `**[[LTMU_B](https://github.com/Daikenan/LTMU)]**.

* **Siam R-CNN:** Paul Voigtlaender, Jonathon Luiten, Philip H.S. Torr, Bastian Leibe.
"Siam R-CNN: Visual Tracking by Re-Detection." CVPR (2020).
[[Paper](https://arxiv.org/pdf/1911.12836.pdf)]
[[Code](https://github.com/VisualComputingInstitute/SiamR-CNN)]
[[Project](https://www.vision.rwth-aachen.de/page/siamrcnn)]

* **TACT:** Janghoon Choi, Junseok Kwon, Kyoung Mu Lee.
"Visual Tracking by TridentAlign and Context Embedding." ACCV (2020).
[[Paper](https://arxiv.org/pdf/2007.06887.pdf)]
[[Code](https://github.com/JanghoonChoi/TACT)]

* **DAL:** Yanlin Qian, Alan Lukežič, Matej Kristan, Joni-Kristian Kämäräinen, Jiri Matas.
"DAL - A Deep Depth-aware Long-term Tracker." ICPR (2020). [[paper](https://arxiv.org/pdf/1912.00660.pdf)] **`RGB-D Long-term`**

* **GlobalTrack:** Lianghua Huang, Xin Zhao, Kaiqi Huang.
"GlobalTrack: A Simple and Strong Baseline for Long-term Tracking." AAAI (2020).
[[Paper](https://arxiv.org/abs/1912.08531)]
[[Code](https://github.com/huanglianghua/GlobalTrack)]

* **SPLT: Bin Yan, Haojie Zhao, Dong Wang, Huchuan Lu, Xiaoyun Yang.**
**"'Skimming-Perusal' Tracking: A Framework for Real-Time and Robust Long-Term Tracking." ICCV (2019).**
[[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)]
[[Code](https://github.com/iiau-tracker/SPLT)]
**An improved (much faster) version of the VOT2018-LT Winner**:star2:

* **flow_MDNet_RPN:** Han Wu, Xueyuan Yang, Yong Yang, Guizhong Liu.
"Flow Guided Short-term Trackers with Cascade Detection for Long-term Tracking." ICCVW (2019).
[[Paper](http://openaccess.thecvf.com/content_ICCVW_2019/papers/VISDrone/Wu_Flow_Guided_Short-Term_Trackers_with_Cascade_Detection_for_Long-Term_Tracking_ICCVW_2019_paper.pdf)]

* **OTR:** Ugur Kart, Alan Lukezic, Matej Kristan, Joni-Kristian Kamarainen, Jiri Matas.
"Object Tracking by Reconstruction with View-Specific Discriminative Correlation Filters." CVPR (2019).
[[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kart_Object_Tracking_by_Reconstruction_With_View-Specific_Discriminative_Correlation_Filters_CVPR_2019_paper.pdf)]
[[Code](https://github.com/ugurkart/OTR)] **`RGB-D Long-term`**

* **SiamRPN++:** Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, Junjie Yan.
"SiamRPN++: Evolution of Siamese Visual Tracking with Very Deep Networks." CVPR (2019).
[[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)]
[[Code](https://github.com/STVIR/pysot)]

* **MBMD: Yunhua Zhang, Dong Wang, Lijun Wang, Jinqing Qi, Huchuan Lu.**
**"Learning Regression and Verification Networks for Long-term Visual Tracking." ArXiv (2018).**
[[Paper](https://arxiv.org/abs/1809.04320)]
[[Code](https://github.com/xiaobai1217/MBMD)]
[[Journal Version](https://link.springer.com/article/10.1007/s11263-021-01487-3)]
**VOT2018-LT Winner**:star2:

* **MMLT:** Hankyeol Lee, Seokeon Choi, Changick Kim.
"A Memory Model based on the Siamese Network for Long-term Tracking." ECCVW (2018).
[[Paper](http://openaccess.thecvf.com/content_ECCVW_2018/papers/11129/Lee_A_Memory_Model_based_on_the_Siamese_Network_for_Long-term_ECCVW_2018_paper.pdf)]
[[Code](https://github.com/bismex/MMLT)]

* **FuCoLoT:** Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas and Matej Kristan.
"FuCoLoT - A Fully-Correlational Long-Term Tracker." ACCV (2018).
[[Paper](http://prints.vicos.si/publications/366)]
[[Code](https://github.com/alanlukezic/fucolot)]

### Long-term Trackers modified from Short-term Ones

* **SiamDW:** Zhipeng Zhang, Houwen Peng.
"Deeper and Wider Siamese Networks for Real-Time Visual Tracking." CVPR (2019).
[[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Deeper_and_Wider_Siamese_Networks_for_Real-Time_Visual_Tracking_CVPR_2019_paper.pdf)]
[[Code](https://github.com/researchmm/SiamDW)] **VOT2019 RGB-D Winner**:star2:
Denoted as "SiamDW_D" and "SiamDW_LT"; see the VOT2019 official report.
[[vot2019code](https://github.com/researchmm/VOT2019)]

* **DaSiam_LT:** Zheng Zhu, Qiang Wang, Bo Li, Wei Wu, Junjie Yan, Weiming Hu.
"Distractor-Aware Siamese Networks for Visual Object Tracking." ECCV (2018).
[[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.pdf)]
[[code](https://github.com/foolwood/DaSiamRPN)] **VOT2018-LT Runner-up**:star2:

### Early Long-term Trackers (before 2018)

* **PTAV:** Heng Fan, Haibin Ling.
"Parallel Tracking and Verifying: A Framework for Real-Time and High Accuracy Visual Tracking." ICCV (2017).
[[paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Fan_Parallel_Tracking_and_ICCV_2017_paper.pdf)]
[[supp](http://openaccess.thecvf.com/content_ICCV_2017/supplemental/Fan_Parallel_Tracking_and_ICCV_2017_supplemental.pdf)]
[[project](http://www.dabi.temple.edu/~hbling/code/PTAV/ptav.htm)]
[[code](http://www.dabi.temple.edu/~hbling/code/PTAV/serial_ptav_v1.zip)]

* **EBT:** Gao Zhu, Fatih Porikli, Hongdong Li.
"Beyond Local Search: Tracking Objects Everywhere with Instance-Specific Proposals." CVPR (2016).
[[paper](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhu_Beyond_Local_Search_CVPR_2016_paper.pdf)]
[[exe](http://www.votchallenge.net/vot2016/download/02_EBT.zip)]

* **LCT:** Chao Ma, Xiaokang Yang, Chongyang Zhang, Ming-Hsuan Yang.
"Long-term Correlation Tracking." CVPR (2015).
[[paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Ma_Long-Term_Correlation_Tracking_2015_CVPR_paper.pdf)]
[[project](https://sites.google.com/site/chaoma99/cvpr15_tracking)]
[[github](https://github.com/chaoma99/lct-tracker)]

* **MUSTer:** Zhibin Hong, Zhe Chen, Chaohui Wang, Xue Mei, Danil Prokhorov, Dacheng Tao.
"MUlti-Store Tracker (MUSTer): a Cognitive Psychology Inspired Approach to Object Tracking." CVPR (2015).
[[paper](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Hong_MUlti-Store_Tracker_MUSTer_2015_CVPR_paper.pdf)]
[[project](https://sites.google.com/site/zhibinhong4131/Projects/muster)]

* **CMT:** Georg Nebehay, Roman Pflugfelder.
"Clustering of Static-Adaptive Correspondences for Deformable Object Tracking." CVPR (2015).
[[paper](https://zpascal.net/cvpr2015/Nebehay_Clustering_of_Static-Adaptive_2015_CVPR_paper.pdf)]
[[project](http://www.gnebehay.com/cmt)] [[github](https://github.com/gnebehay/CMT)]

* **SPL:** James Steven Supančič III, Deva Ramanan.
"Self-paced Learning for Long-term Tracking." CVPR (2013).
[[paper](https://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Supancic_III_Self-Paced_Learning_for_2013_CVPR_paper.pdf)]
[[github](https://github.com/jsupancic/SPLTT-Release)]

* **TLD:** Zdenek Kalal, Krystian Mikolajczyk, Jiri Matas.
"Tracking-Learning-Detection." TPAMI (2012).
[[paper](https://ieeexplore.ieee.org/document/6104061)]
[[project](https://github.com/zk00006/OpenTLD)]

## Measurement & Discussion:

* Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas, Matej Kristan.
"Performance Evaluation Methodology for Long-Term Visual Object Tracking." ArXiv (2019).
[[paper](https://arxiv.org/abs/1906.08675)]

* Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas, Matej Kristan.
"Now You See Me: Evaluating Performance in Long-term Visual Tracking." ArXiv (2018).
[[paper](https://arxiv.org/abs/1804.07056)]

* Shyamgopal Karthik, Abhinav Moudgil, Vineet Gandhi.
"Exploring 3 R's of Long-term Tracking: Re-detection, Recovery and Reliability." WACV (2020).
[[paper](http://openaccess.thecvf.com/content_WACV_2020/papers/Karthik_Exploring_3_Rs_of_Long-term_Tracking_Redetection_Recovery_and_Reliability_WACV_2020_paper.pdf)]

## Previous Benchmark Results:

* **~~VOT2019-LT/VOT2020-LT/VOT2021-LT~~**

| Tracker | F-Score | Speed (fps) | Paper/Code |
|:----------- |:----------------:|:----------------:|:----------------:|
| KeepTrack (ICCV21) | 0.709 | 18 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2103.16556)/[Code](https://github.com/visionml/pytracking) |
| **STARK (ICCV21)** | 0.701 | 32 (Tesla V100) | [Paper](https://arxiv.org/abs/2103.17154)/[Code](https://github.com/researchmm/Stark) |
| **LTMU (CVPR20)** | 0.697 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
| **LT_DSE** | 0.695 | N/A | N/A |
| **LTMU_B** | 0.691 | N/A | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
| DMTrack (CVPR21) | 0.687 | 31 (Titan XP) | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Distractor-Aware_Fast_Tracking_via_Dynamic_Convolutions_and_MOT_Philosophy_CVPR_2021_paper.pdf)/[Project](https://github.com/hqucv/dmtrack) |
| Megtrack | 0.687 | N/A | N/A |
| CLGS | 0.674 | N/A | N/A |
| SiamDW_LT | 0.665 | N/A | N/A |
| **SPLT (ICCV19)** | 0.587 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
| mbdet | 0.567 | N/A | N/A |
| SiamRPNsLT | 0.556 | N/A | N/A |
| Siamfcos-LT | 0.520 | N/A | N/A |
| CooSiam | 0.508 | N/A | N/A |
| ASINT | 0.505 | N/A | N/A |
| FuCoLoT | 0.411 | N/A | N/A |

* Most results are obtained from the original [VOT2019](http://prints.vicos.si/publications/375/) and [VOT2020](http://prints.vicos.si/publications/384/) reports.
* All sequences and settings are the same in the VOT2019-LT, VOT2020-LT and VOT2021-LT challenges.
* We will not update these results [marked 2022-11]. Please focus on VOT2022-LT.
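The F-Score in the VOT-LT tables is the tracking F-measure: the harmonic mean of tracking precision (localization quality over frames where the tracker reports the target) and tracking recall (over frames where the target is actually visible), taken at the prediction-confidence threshold that maximizes it. A minimal sketch of the combining step (the helper names are illustrative; per-threshold precision/recall follows the VOT-LT protocol):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of tracking precision and recall at one threshold."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

def f_score(pr_re_pairs):
    """VOT-LT reports the highest F-measure over confidence thresholds;
    pr_re_pairs holds one (precision, recall) pair per threshold."""
    return max(f_measure(pr, re) for pr, re in pr_re_pairs)
```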

* **~~VOT2018-LT:~~**

| Tracker | F-Score | Speed (fps) | Paper/Code |
|:----------- |:----------------:|:----------------:|:----------------:|
| **LTMU (CVPR20)** | 0.690 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
| Siam R-CNN (CVPR20) | 0.668 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
| PG-Net (ECCV20) | 0.642 | N/A | [Paper]()/[Code]() |
| SiamRPN++ | 0.629 | 35 (Titan XP) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
| **SPLT (ICCV19)** | 0.622 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
| **MBMD (Arxiv)** | 0.610 | 4 (GTX 1080Ti) | [Paper](https://arxiv.org/abs/1809.04320)/[Code](https://github.com/xiaobai1217/MBMD) |
| DaSiam_LT (ECCV18) | 0.607 | 110 (TITAN X) | [Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.pdf)/[Code](https://github.com/foolwood/DaSiamRPN) |

* MBMD and DaSiam_LT are the winner and runner-up, respectively, in the original [VOT2018_LT](http://prints.vicos.si/publications/365/) report.
* VOT2018-LT is a subset of VOT2019-LT; thus, we will not update these results [marked 2021-08].