├── README.md
├── img
│   ├── Trackers .pdf
│   ├── Trackers .png
│   ├── dino.gif
│   ├── test.py
│   └── tracking_words.png
└── notes
    ├── CV&AI_Journals.md
    ├── DRLTrackers.md
    ├── Long-term-Visual-Tracking.md
    ├── MOT-Papers.md
    ├── Online-Visual-Tracking-SOTA.md
    ├── SiamTrackers.md
    ├── Transformer Tracking.md
    ├── UAV-Vision.md
    ├── Visual Trackers for Single Object.md
    └── all_about_sot.md
/img/Trackers .pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DavidZhangdw/Visual-Tracking-Development/04d7489e982aaae953c667093745447f9230c7d1/img/Trackers .pdf
--------------------------------------------------------------------------------
/img/Trackers .png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DavidZhangdw/Visual-Tracking-Development/04d7489e982aaae953c667093745447f9230c7d1/img/Trackers .png
--------------------------------------------------------------------------------
/img/dino.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DavidZhangdw/Visual-Tracking-Development/04d7489e982aaae953c667093745447f9230c7d1/img/dino.gif
--------------------------------------------------------------------------------
/img/test.py:
--------------------------------------------------------------------------------
1 | import torch  # quick sanity check that PyTorch is importable
2 | print('Hello, World')
3 |
--------------------------------------------------------------------------------
/img/tracking_words.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DavidZhangdw/Visual-Tracking-Development/04d7489e982aaae953c667093745447f9230c7d1/img/tracking_words.png
--------------------------------------------------------------------------------
/notes/CV&AI_Journals.md:
--------------------------------------------------------------------------------
1 | ## Journals of CV & AI
2 |
3 | | Name | Full Name | CCF Rank | Publisher | Chinese Academy of Sciences Division |
4 | | ---------- | ------------------------------------------------------------ | -------- | ----------------- | ------------------------------------ |
5 | | | Nature Biomedical Engineering | | | 2019 IF = 18.952 |
6 | | | Nature Machine Intelligence | | | 2018 IF = 15-17.5 |
7 | | | Nature Communications | | | SCI Div. 1 |
8 | | | Nature Human Behaviour | | | SCI Div. 1 |
9 | | | | | | |
10 | | TPAMI | IEEE Transactions on Pattern Analysis and Machine Intelligence | CCF-A | IEEE | SCI Div. 1 (Top) |
11 | | TIP | IEEE Transactions on Image Processing | CCF-A | IEEE | SCI Div. 1 (Top) |
12 | | Proc. IEEE | Proceedings of the IEEE | CCF-A | IEEE | SCI Div. 1 (Top) |
13 | | JACM | Journal of the ACM | CCF-A | ACM | SCI Div. 2 (Top) |
14 | | TKDE | IEEE Transactions on Knowledge and Data Engineering | CCF-A | IEEE | SCI Div. 2 |
15 | | TVCG | IEEE Transactions on Visualization and Computer Graphics | CCF-A | IEEE | SCI Div. 1 (Top) |
16 | | IJCV | International Journal of Computer Vision | CCF-A | Springer | SCI Div. 2 |
17 | | AI | Artificial Intelligence | CCF-A | Elsevier | SCI Div. 2 |
18 | | JMLR | Journal of Machine Learning Research | CCF-A | MIT Press | SCI Div. 3 |
19 | | | | | | |
20 | | TMM | IEEE Transactions on Multimedia | CCF-B | IEEE | SCI Div. 1 (Top) |
21 | | CVIU | Computer Vision and Image Understanding | CCF-B | Elsevier | SCI Div. 3 |
22 | | TCYB | IEEE Transactions on Cybernetics | CCF-B | IEEE | SCI Div. 1 (Top) |
23 | | TNNLS | IEEE Transactions on Neural Networks and Learning Systems | CCF-B | IEEE | SCI Div. 1 (Top) |
24 | | TMI | IEEE Transactions on Medical Imaging | CCF-B | IEEE | SCI Div. 1 (Top) |
25 | | TKDD | ACM Transactions on Knowledge Discovery from Data | CCF-B | ACM | SCI Div. 3 |
26 | | TCSVT | IEEE Transactions on Circuits and Systems for Video Technology | CCF-B | IEEE | SCI Div. 2 |
27 | | TITS | IEEE Transactions on Intelligent Transportation Systems | CCF-B | IEEE | SCI Div. 1 (Top) |
28 | | TGRS | IEEE Transactions on Geoscience and Remote Sensing | CCF-B | IEEE | SCI Div. 2 (Top) |
29 | | PR | Pattern Recognition | CCF-B | Elsevier | SCI Div. 1 (Top) |
30 | | ML | Machine Learning | CCF-B | Springer | SCI Div. 3 |
31 | | | Neural Computation | CCF-B | MIT Press | SCI Div. 3 |
32 | | | Neural Networks | CCF-B | Elsevier | SCI Div. 2 |
33 | | | Information Sciences | CCF-B | Elsevier | SCI Div. 1 (Top) |
34 | | TCJ | The Computer Journal | CCF-B | Oxford University Press | SCI Div. 4 |
35 | | WWWJ | World Wide Web Journal | CCF-B | Springer | SCI Div. 3 |
36 | | | | | | |
37 | | TBD | IEEE Transactions on Big Data | CCF-C | IEEE | None |
38 | | APIN | Applied Intelligence | CCF-C | Springer | SCI Div. 3 |
39 | | EAAI | Engineering Applications of Artificial Intelligence | CCF-C | Elsevier | SCI Div. 2 |
40 | | ESWA | Expert Systems with Applications | CCF-C | Elsevier | SCI Div. 2 |
41 | | IVC | Image and Vision Computing | CCF-C | Elsevier | SCI Div. 3 |
42 | | IJIS | International Journal of Intelligent Systems | CCF-C | Wiley | SCI Div. 2 |
43 | | KBS | Knowledge-Based Systems | CCF-C | Elsevier | SCI Div. 1 (Top) |
44 | | NCA | Neural Computing & Applications | CCF-C | Springer | SCI Div. 2 |
45 | | | Neurocomputing | CCF-C | Elsevier | SCI Div. 2 |
46 | | | Signal Processing | CCF-C | Elsevier | SCI Div. 2 |
47 | | PRL | Pattern Recognition Letters | CCF-C | Elsevier | SCI Div. 3 |
48 | | SPL | IEEE Signal Processing Letters | CCF-C | IEEE | SCI Div. 2 |
49 | | GRSL | IEEE Geoscience and Remote Sensing Letters | CCF-C | IEEE | SCI Div. 2 |
50 | | IET-IPR | IET Image Processing | CCF-C | IET | SCI Div. 3 |
51 | | IET-CVI | IET Computer Vision | CCF-C | IET | SCI Div. 4 |
52 | | MTA | Multimedia Tools and Applications | CCF-C | Springer | SCI Div. 4 |
53 | | TVC | The Visual Computer | CCF-C | Springer | SCI Div. 4 |
54 | | | | | | |
55 | | IOTJ | IEEE Internet of Things Journal | | IEEE | SCI Div. 1 |
56 | | JMLC | International Journal of Machine Learning and Cybernetics | | Springer | SCI Div. 2 |
57 |
58 |
59 |
60 |
61 |
62 | ## Chinese Journals of CV & AI
63 |
64 | | Name | Full Name | CCF Rank | Publisher | Chinese Academy of Sciences Division |
65 | | ---- | ------------------------------------------ | -------- | ------------------------------- | ------------------------------------ |
66 | | JAS | IEEE/CAA Journal of Automatica Sinica | | IEEE/CAA | SCI Div. 1 (Top) |
67 | | SCIS | Science China Information Sciences | CCF-B | Science in China Press/Springer | SCI Div. 2 |
68 | | JCST | Journal of Computer Science and Technology | CCF-B | Science Press/Springer | SCI Div. 2 |
69 | | | | | | |
70 | | CVM | Computational Visual Media | None | Springer | SCI Div. 2 |
71 | | FCS | Frontiers of Computer Science | CCF-C | Springer | SCI Div. 2 |
72 | | | CAAI Transactions on Intelligence Technology | CCF-B | CAAI | SCI |
73 | | MIR | Machine Intelligence Research | | Springer | SCI |
74 | | | | | | |
75 | | | Chinese Journal of Computers (计算机学报) | CCF-A | China Computer Federation | |
76 | | | Journal of Software (软件学报) | CCF-A | China Computer Federation | |
77 | | | Scientia Sinica Informationis (中国科学:信息科学) | CCF-A | Chinese Academy of Sciences | |
78 | | | Journal of Computer Research and Development (计算机研究与发展) | CCF-A | China Computer Federation | |
79 | | | Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报) | CCF-A | China Computer Federation | |
80 | | | Acta Electronica Sinica (电子学报) | CCF-A | Chinese Institute of Electronics | |
81 | | | Acta Automatica Sinica (自动化学报) | CCF-A | Chinese Association of Automation | |
82 | | | | | | |
83 | | | Pattern Recognition and Artificial Intelligence (模式识别与人工智能) | CCF-B | Chinese Association of Automation | |
84 | | | Journal of Image and Graphics (中国图象图形学报) | CCF-B | China Society of Image and Graphics | |
85 | | | | | | |
86 | | | Journal of Computer Applications (计算机应用) | CCF-C | Chengdu Branch of the Chinese Academy of Sciences | |
87 | | | | | | |
87 | | | | | | |
88 |
89 |
90 |
91 | *Copyright © 2020 Dawei Zhang. All rights reserved.*
92 |
--------------------------------------------------------------------------------
/notes/DRLTrackers.md:
--------------------------------------------------------------------------------
1 | ## The Paper Collection of Tracking Algorithms based on Deep Reinforcement Learning
2 |
3 | 1. Siamese Attentive Graph Tracking, ACM MM2020.
4 |
5 | 2. TSAS: Three-step action search networks with deep Q-learning for real-time object tracking, Pattern Recognition, 2020.
6 |
7 | 3. Maximum Entropy Reinforced Single Object Visual Tracking, ECAI 2020.
8 |
9 | 4. High Performance Visual Tracking with Siamese Actor-Critic Network, ICIP 2020.
10 |
11 | 5. POST: POlicy-Based Switch Tracking, AAAI 2020.
12 |
13 | 6. A3CTD: Visual Tracking by means of Deep Reinforcement Learning and an Expert Demonstrator, ICCVW 2020.
14 |
15 | 7. Dong, Xingping, Jianbing Shen, Wenguan Wang, Yu Liu, Ling Shao, and Fatih Porikli. "Hyperparameter optimization for tracking with continuous deep q-learning."
16 | In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 518-527. 2018.
17 |
18 | 8. Ren, Liangliang, Jiwen Lu, Zifeng Wang, Qi Tian, and Jie Zhou. "Collaborative deep reinforcement learning for multi-object tracking."
19 | In Proceedings of the European Conference on Computer Vision (ECCV), pp. 586-602. 2018.
20 |
21 | 9. DRL-IS: Ren, Liangliang, Xin Yuan, Jiwen Lu, Ming Yang, and Jie Zhou. "Deep reinforcement learning with iterative shift for visual tracking."
22 | In Proceedings of the European Conference on Computer Vision (ECCV), pp. 684-700. 2018.
23 |
24 | 10. ACT: Boyu Chen, Dong Wang, Peixia Li, Huchuan Lu. "Real-time 'Actor-Critic' Tracking." ECCV (2018).
25 |
26 | 11. SINT++: Xiao Wang, Chenglong Li, Bin Luo, Jin Tang. "SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation." CVPR (2018).
27 |
28 | 12. EAST: Huang, Chen, Simon Lucey, and Deva Ramanan. "Learning policies for adaptive tracking with deep feature cascades."
29 | In Proceedings of the IEEE International Conference on Computer Vision, pp. 105-114. 2017.
30 |
31 | 13. Liu, Xiaobai, Qian Xu, Thuan Chau, Yadong Mu, Lei Zhu, and Shuicheng Yan. "Revisiting jump-diffusion process for visual tracking: a reinforcement learning approach."
32 | IEEE Transactions on Circuits and Systems for Video Technology (2018).
33 |
34 | 14. Yun, Sangdoo, Jongwon Choi, Youngjoon Yoo, Kimin Yun, and Jin Young Choi. "Action-driven visual object tracking with deep reinforcement learning."
35 | IEEE Transactions on Neural Networks and Learning Systems 29, no. 6 (2018): 2239-2252.
36 |
37 | 15. Jiang, Ming-xin, Chao Deng, Zhi-geng Pan, Lan-fang Wang, and Xing Sun. "Multiobject Tracking in Videos Based on LSTM and Deep Reinforcement Learning."
38 | Complexity 2018 (2018).
39 |
40 | 16. ADNet: Sangdoo Yun, Jongwon Choi, Youngjoon Yoo, Kimin Yun, Jin Young Choi. "Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning."
41 | CVPR (2017 Spotlight).
42 |
43 | 17. p-tracker: James Supančič, III; Deva Ramanan. "Tracking as Online Decision-Making: Learning a Policy From Streaming Videos With Reinforcement Learning."
44 | ICCV (2017).
45 |
46 | 18. Bae, Seung-Hwan, and Kuk-Jin Yoon. "Confidence-based data association and discriminative deep appearance learning for robust online multi-object tracking."
47 | IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 3 (2017): 595-610.
48 |
49 | 19. RDT: Janghoon Choi, Junseok Kwon, Kyoung Mu Lee. "Visual Tracking by Reinforced Decision Making." arXiv (2017).
50 |
51 | 20. RLT: Da Zhang, Hamid Maei, Xin Wang, Yuan-Fang Wang. "Deep Reinforcement Learning for Visual Object Tracking in Videos." arXiv preprint arXiv:1701.08936 (2017).
52 |
53 | 21. Luo, Wenhan, Peng Sun, Fangwei Zhong, Wei Liu, Tong Zhang, and Yizhou Wang. "End-to-end active object tracking via reinforcement learning."
54 | arXiv preprint arXiv:1705.10561 (2017).
55 |
56 | 22. Kamalapurkar, Rushikesh, Lindsey Andrews, Patrick Walters, and Warren E. Dixon. "Model-based reinforcement learning for infinite-horizon approximate optimal tracking."
57 | IEEE Transactions on Neural Networks and Learning Systems 28, no. 3 (2016): 753-758.
58 |
59 | 23. Luo, Biao, Derong Liu, Tingwen Huang, and Ding Wang. "Model-free optimal tracking control via critic-only Q-learning."
60 | IEEE Transactions on Neural Networks and Learning Systems 27, no. 10 (2016): 2134-2144.
61 |
62 | 24. Xiang, Yu, Alexandre Alahi, and Silvio Savarese. "Learning to track: Online multi-object tracking by decision making."
63 | In Proceedings of the IEEE International Conference on Computer Vision, pp. 4705-4713. 2015.
64 |
65 |
--------------------------------------------------------------------------------
/notes/Long-term-Visual-Tracking.md:
--------------------------------------------------------------------------------
1 | # Long-term Visual Tracking:
2 |
3 | This page tracks state-of-the-art performance on the long-term tracking task (if you are interested in the short-term tracking task, please visit [here](https://github.com/wangdongdut/Online-Visual-Tracking-SOTA)).
4 |
5 | ### Recent Long-term Trackers
6 |
7 | * **LTMU: Kenan Dai, Yunhua Zhang, Dong Wang, Jianhua Li, Huchuan Lu, Xiaoyun Yang.**
8 | **"High-Performance Long-Term Tracking with Meta-Updater." CVPR (2020).**
9 | [[paper](https://arxiv.org/abs/2004.00305)]
10 | [[code](https://github.com/Daikenan/LTMU)]
11 | **VOT2019-LT Winner**:star2:
12 | `This work is an improved version of the VOT2019-LT winner, `[[LT_DSE](https://github.com/Daikenan/LT_DSE)].
13 |
14 | * **Siam R-CNN:** Paul Voigtlaender, Jonathon Luiten, Philip H.S. Torr, Bastian Leibe.
15 | "Siam R-CNN: Visual Tracking by Re-Detection." CVPR (2020).
16 | [[paper](https://arxiv.org/pdf/1911.12836.pdf)]
17 | [[code](https://github.com/VisualComputingInstitute/SiamR-CNN)]
18 | [[project](https://www.vision.rwth-aachen.de/page/siamrcnn)]
19 |
20 | * **DAL:** Yanlin Qian, Alan Lukežič, Matej Kristan, Joni-Kristian Kämäräinen, Jiri Matas.
21 | "DAL - A Deep Depth-aware Long-term Tracker." ArXiv (2019).
22 | [[paper](https://arxiv.org/pdf/1912.00660.pdf)] **`RGB-D Long-term`**
23 |
24 | * **GlobalTrack:** Lianghua Huang, Xin Zhao, Kaiqi Huang.
25 | "GlobalTrack: A Simple and Strong Baseline for Long-term Tracking." AAAI (2020).
26 | [[paper](https://arxiv.org/abs/1912.08531)]
27 | [[code](https://github.com/huanglianghua/GlobalTrack)]
28 |
29 | * **SPLT: Bin Yan, Haojie Zhao, Dong Wang, Huchuan Lu, Xiaoyun Yang.**
30 | **"'Skimming-Perusal' Tracking: A Framework for Real-Time and Robust Long-Term Tracking." ICCV (2019).**
31 | [[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)]
32 | [[code](https://github.com/iiau-tracker/SPLT)]
33 |
34 | * **flow_MDNet_RPN:** Han Wu, Xueyuan Yang, Yong Yang, Guizhong Liu.
35 | "Flow Guided Short-term Trackers with Cascade Detection for Long-term Tracking." ICCVW (2019).
36 | [[paper](http://openaccess.thecvf.com/content_ICCVW_2019/papers/VISDrone/Wu_Flow_Guided_Short-Term_Trackers_with_Cascade_Detection_for_Long-Term_Tracking_ICCVW_2019_paper.pdf)]
37 |
38 | * **OTR:** Ugur Kart, Alan Lukezic, Matej Kristan, Joni-Kristian Kamarainen, Jiri Matas.
39 | "Object Tracking by Reconstruction with View-Specific Discriminative Correlation Filters." CVPR (2019).
40 | [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kart_Object_Tracking_by_Reconstruction_With_View-Specific_Discriminative_Correlation_Filters_CVPR_2019_paper.pdf)]
41 | [[code](https://github.com/ugurkart/OTR)] **`RGB-D Long-term`**
42 |
43 | * **SiamRPN++:** Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, Junjie Yan.
44 | "SiamRPN++: Evolution of Siamese Visual Tracking with Very Deep Networks." CVPR (2019).
45 | [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)]
46 | [[code](https://github.com/STVIR/pysot)]
47 |
48 | * **MBMD: Yunhua Zhang, Dong Wang, Lijun Wang, Jinqing Qi, Huchuan Lu.**
49 | **"Learning regression and verification networks for long-term visual tracking." ArXiv (2018).**
50 | [[paper](https://arxiv.org/abs/1809.04320)]
51 | [[code](https://github.com/xiaobai1217/MBMD)]
52 | **VOT2018-LT Winner**:star2:
53 |
54 | * **MMLT:** Hankyeol Lee, Seokeon choi, Changick Kim.
55 | "A Memory Model based on the Siamese Network for Long-term Tracking." ECCVW (2018).
56 | [[paper](http://openaccess.thecvf.com/content_ECCVW_2018/papers/11129/Lee_A_Memory_Model_based_on_the_Siamese_Network_for_Long-term_ECCVW_2018_paper.pdf)]
57 | [[code](https://github.com/bismex/MMLT)]
58 |
59 | * **FuCoLoT:** Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas and Matej Kristan.
60 | "FuCoLoT - A Fully-Correlational Long-Term Tracker." ACCV (2018).
61 | [[paper](http://prints.vicos.si/publications/366)]
62 | [[code](https://github.com/alanlukezic/fucolot)]
63 |
64 | ### Long-term Trackers modified from Short-term Ones
65 |
66 | * **SiamDW:** Zhipeng Zhang, Houwen Peng.
67 | "Deeper and Wider Siamese Networks for Real-Time Visual Tracking." CVPR (2019).
68 | [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Deeper_and_Wider_Siamese_Networks_for_Real-Time_Visual_Tracking_CVPR_2019_paper.pdf)]
69 | [[code](https://github.com/researchmm/SiamDW)] **VOT2019 RGB-D Winner**:star2:
70 | Denoted as "SiamDW_D" and "SiamDW_LT"; see the VOT2019 official report.
71 | [[vot2019code](https://github.com/researchmm/VOT2019)]
72 |
73 | * **DaSiam_LT:** Zheng Zhu, Qiang Wang, Bo Li, Wei Wu, Junjie Yan, Weiming Hu.
74 | "Distractor-Aware Siamese Networks for Visual Object Tracking." ECCV (2018).
75 | [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.pdf)]
76 | [[code](https://github.com/foolwood/DaSiamRPN)] **VOT2018-LT Runner-up**:star2:
77 |
78 |
79 | ### Early Long-term Trackers (before 2018)
80 |
81 | * **PTAV:** Heng Fan, Haibin Ling.
82 | "Parallel Tracking and Verifying: A Framework for Real-Time and High Accuracy Visual Tracking." ICCV (2017).
83 | [[paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Fan_Parallel_Tracking_and_ICCV_2017_paper.pdf)]
84 | [[supp](http://openaccess.thecvf.com/content_ICCV_2017/supplemental/Fan_Parallel_Tracking_and_ICCV_2017_supplemental.pdf)]
85 | [[project](http://www.dabi.temple.edu/~hbling/code/PTAV/ptav.htm)]
86 | [[code](http://www.dabi.temple.edu/~hbling/code/PTAV/serial_ptav_v1.zip)]
87 |
88 | * **EBT:** Gao Zhu, Fatih Porikli, Hongdong Li.
89 | "Beyond Local Search: Tracking Objects Everywhere with Instance-Specific Proposals." CVPR (2016).
90 | [[paper](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhu_Beyond_Local_Search_CVPR_2016_paper.pdf)]
91 | [[exe](http://www.votchallenge.net/vot2016/download/02_EBT.zip)]
92 |
93 | * **LCT:** Chao Ma, Xiaokang Yang, Chongyang Zhang, Ming-Hsuan Yang.
94 | "Long-term Correlation Tracking." CVPR (2015).
95 | [[paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Ma_Long-Term_Correlation_Tracking_2015_CVPR_paper.pdf)]
96 | [[project](https://sites.google.com/site/chaoma99/cvpr15_tracking)]
97 | [[github](https://github.com/chaoma99/lct-tracker)]
98 |
99 | * **MUSTer:** Zhibin Hong, Zhe Chen, Chaohui Wang, Xue Mei, Danil Prokhorov, Dacheng Tao.
100 | "MUlti-Store Tracker (MUSTer): a Cognitive Psychology Inspired Approach to Object Tracking." CVPR (2015).
101 | [[paper](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Hong_MUlti-Store_Tracker_MUSTer_2015_CVPR_paper.pdf)]
102 | [[project](https://sites.google.com/site/zhibinhong4131/Projects/muster)]
103 |
104 | * **CMT:** Georg Nebehay, Roman Pflugfelder.
105 | "Clustering of Static-Adaptive Correspondences for Deformable Object Tracking." CVPR (2015).
106 | [[paper](https://zpascal.net/cvpr2015/Nebehay_Clustering_of_Static-Adaptive_2015_CVPR_paper.pdf)]
107 | [[project](http://www.gnebehay.com/cmt)]
108 | [[github](https://github.com/gnebehay/CMT)]
109 |
110 | * **SPL:** James Steven Supančič III, Deva Ramanan.
111 | "Self-paced Learning for Long-term Tracking." CVPR (2013).
112 | [[paper](https://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Supancic_III_Self-Paced_Learning_for_2013_CVPR_paper.pdf)]
113 | [[github](https://github.com/jsupancic/SPLTT-Release)]
114 |
115 | * **TLD:** Zdenek Kalal, Krystian Mikolajczyk, Jiri Matas.
116 | "Tracking-Learning-Detection." TPAMI (2012).
117 | [[paper](https://ieeexplore.ieee.org/document/6104061)]
118 | [[project](https://github.com/zk00006/OpenTLD)]
119 |
120 |
121 | ## Benchmark
122 |
123 | * **VOT:** [[Visual Object Tracking Challenge](http://www.votchallenge.net/)]
124 | * [[VOT2020LT](http://www.votchallenge.net/vot2020/)][[Coming Soon](http://www.votchallenge.net/vot2020/)]
125 | * [[VOT2019LT](http://www.votchallenge.net/vot2019/)][[Report](http://prints.vicos.si/publications/375/)]
126 | * [[VOT2018LT](http://www.votchallenge.net/vot2018/)][[Report](http://prints.vicos.si/publications/365/)]
127 |
128 | * **OxUvA:** Jack Valmadre, Luca Bertinetto, João F. Henriques, Ran Tao, Andrea Vedaldi, Arnold Smeulders, Philip Torr, Efstratios Gavves.
129 | "Long-term Tracking in the Wild: a Benchmark." ECCV (2018).
130 | [[paper](https://arxiv.org/pdf/1803.09502.pdf)]
131 | [[project](https://oxuva.github.io/long-term-tracking-benchmark/)]
132 |
133 | * **TLP:** Abhinav Moudgil, Vineet Gandhi.
134 | "Long-term Visual Object Tracking Benchmark." ACCV (2018).
135 | [[paper](https://arxiv.org/abs/1712.01358)]
136 | [[project](https://amoudgl.github.io/tlp/)]
137 |
138 | * **CDTB:** Alan Lukežič, Ugur Kart, Jani Käpylä, Ahmed Durmush, Joni-Kristian Kämäräinen, Jiří Matas, Matej Kristan.
139 | "CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark." ICCV (2019).
140 | [[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Lukezic_CDTB_A_Color_and_Depth_Visual_Object_Tracking_Dataset_and_ICCV_2019_paper.pdf)]
141 | [[project](https://oxuva.github.io/long-term-tracking-benchmark/)] **`RGB-D Long-term`**
142 |
143 | * **LaSOT:** Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, Haibin Ling.
144 | "LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking." CVPR (2019).
145 | [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_LaSOT_A_High-Quality_Benchmark_for_Large-Scale_Single_Object_Tracking_CVPR_2019_paper.pdf)]
146 | [[project](https://cis.temple.edu/lasot/)]
147 | `The LaSOT dataset is not a typical long-term dataset, but it is a good choice for connecting long-term and short-term trackers. Short-term trackers usually drift easily on long-term datasets since they have no re-detection module, while long-term trackers achieve unsatisfactory performance on short-term datasets, since the tested sequences are often very short and the evaluation criteria pay less attention to re-detection capability (especially VOT's EAO). LaSOT is a large-scale dataset with long sequences and precision/success criteria. Thus, it is a good choice if you want to fairly compare long-term and short-term trackers in one figure/table.`
148 |
149 | * **UAV20L:** Matthias Mueller, Neil Smith and Bernard Ghanem.
150 | "A Benchmark and Simulator for UAV Tracking." ECCV (2016).
151 | [[paper](https://ivul.kaust.edu.sa/Documents/Publications/2016/A%20Benchmark%20and%20Simulator%20for%20UAV%20Tracking.pdf)]
152 | [[project](https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx)]
153 | [[dataset](https://ivul.kaust.edu.sa/Pages/Dataset-UAV123.aspx)]
154 | `All 20 videos of UAV20L have been included in the VOT2018LT dataset.`
155 |
156 |
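As a rough illustration of the success criterion mentioned in the LaSOT note above: the reported success score is the area under the curve of the fraction of frames whose predicted-vs-ground-truth IoU exceeds each overlap threshold. The sketch below is an unofficial, simplified implementation (the names `iou` and `success_auc` are mine, not from any toolkit); real comparisons should use the official benchmark toolkits.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes, num_thresholds=21):
    """Area under the success plot: mean, over overlap thresholds in
    [0, 1], of the fraction of frames whose IoU exceeds the threshold."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    thresholds = [i / (num_thresholds - 1) for i in range(num_thresholds)]
    rates = [sum(v > t for v in ious) / len(ious) for t in thresholds]
    return sum(rates) / len(rates)
```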
157 | ## Measurement & Discussion:
158 |
159 | * Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas, Matej Kristan.
160 | "Performance Evaluation Methodology for Long-Term Visual Object Tracking." ArXiv (2019).
161 | [[paper](https://arxiv.org/abs/1906.08675)]
162 |
163 | * Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas, Matej Kristan.
164 | "Now You See Me: Evaluating Performance in Long-term Visual Tracking." ArXiv (2018).
165 | [[paper](https://arxiv.org/abs/1804.07056)]
166 |
167 | * Shyamgopal Karthik, Abhinav Moudgil, Vineet Gandhi.
168 | "Exploring 3 R's of Long-term Tracking: Re-detection, Recovery and Reliability." WACV (2020).
169 | [[paper](http://openaccess.thecvf.com/content_WACV_2020/papers/Karthik_Exploring_3_Rs_of_Long-term_Tracking_Redetection_Recovery_and_Reliability_WACV_2020_paper.pdf)]
170 |
171 | ## Resources:
172 |
173 | * **"Paper, Benchmark, Researchers, Teams" maintained by Qiang Wang:**
174 | https://github.com/foolwood/benchmark_results
175 |
176 | * **"pysot [SiamRPN++, SiamMask, DaSiamRPN, SiamRPN]":**
177 | https://github.com/STVIR/pysot
178 |
179 | * **"pytracking [PrDIMP, SuperDIMP, DIMP, ATOM]":**
180 | https://github.com/visionml/pytracking
181 |
182 |
183 | ## Benchmark Results:
184 |
185 | * **VOT2019-LT/VOT2020-LT:**
186 |
187 | | Tracker | F-Score | Speed (fps) | Paper/Code |
188 | |:----------- |:----------------:|:----------------:|:----------------:|
189 | | **LTMU (CVPR20)** | 0.697 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
190 | | LT_DSE | 0.695 | N/A | N/A |
191 | | CLGS | 0.674 | N/A | N/A |
192 | | SiamDW_LT | 0.665 | N/A | N/A |
193 | | **SPLT (ICCV19)** | 0.587 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
194 | | mbdet | 0.567 | N/A | N/A |
195 | | SiamRPNsLT | 0.556 | N/A | N/A |
196 | | Siamfcos-LT | 0.520 | N/A | N/A |
197 | | CooSiam | 0.508 | N/A | N/A |
198 | | ASINT | 0.505 | N/A | N/A |
199 | | FuCoLoT | 0.411 | N/A | N/A |
200 |
201 | * Most results are obtained from the original [VOT2019_LT](http://prints.vicos.si/publications/375/) report.
202 |
203 | * **VOT2018-LT:**
204 |
205 | | Tracker | F-Score | Speed (fps) | Paper/Code |
206 | |:----------- |:----------------:|:----------------:|:----------------:|
207 | | **LTMU (CVPR20)** | 0.690 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
208 | | Siam R-CNN (CVPR20) | 0.668 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
209 | | SiamRPN++ | 0.629 | 35 (Titan XP) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
210 | | **SPLT (ICCV19)** | 0.622 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
211 | | **MBMD (ArXiv)** | 0.610 | 4 (GTX 1080Ti) | [Paper](https://arxiv.org/abs/1809.04320)/[Code](https://github.com/xiaobai1217/MBMD) |
212 | | DaSiam_LT (ECCV18) | 0.607 | 110 (TITAN X) | [Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.pdf)/[Code](https://github.com/foolwood/DaSiamRPN) |
213 |
214 | * MBMD and DaSiam_LT are the winner and runner-up in the original [VOT2018_LT](http://prints.vicos.si/publications/365/) report.
215 |
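The F-scores in the VOT-LT tables above combine per-frame tracking precision and recall, maximized over the tracker's reported confidence threshold. A minimal, unofficial sketch of that idea follows (`longterm_f_score` is an illustrative name and a simplification of the protocol; use the official VOT toolkit for real numbers):

```python
def longterm_f_score(overlaps, confidences, visible):
    """Rough long-term tracking F-score, maximized over confidence thresholds.

    overlaps    -- per-frame IoU with ground truth (0 if target absent/missed)
    confidences -- tracker's per-frame certainty that the target is present
    visible     -- ground-truth visibility flag per frame
    """
    best = 0.0
    n_visible = max(sum(visible), 1)
    for tau in set(confidences):
        # frames where the tracker reports the target at this threshold
        reported = [c >= tau for c in confidences]
        n_reported = sum(reported)
        if n_reported == 0:
            continue
        pr = sum(o for o, r in zip(overlaps, reported) if r) / n_reported
        re = sum(o for o, r, v in zip(overlaps, reported, visible)
                 if r and v) / n_visible
        if pr + re > 0:
            best = max(best, 2 * pr * re / (pr + re))
    return best
```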
216 | * **OxUvA:**
217 | | Tracker | MaxGM | Speed (fps) | Paper/Code |
218 | |:----------- |:----------------:|:----------------:|:----------------:|
219 | | **LTMU (CVPR20)** | 0.751 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
220 | | Siam R-CNN (CVPR20) | 0.723 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
221 | | **SPLT (ICCV19)** | 0.622 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
222 | | GlobalTrack (AAAI20) | 0.603 | 6 (GTX TitanX) | [Paper](https://arxiv.org/abs/1912.08531)/[Code](https://github.com/huanglianghua/GlobalTrack) |
223 | | **MBMD (ArXiv)** | 0.544 | 4 (GTX 1080Ti) | [Paper](https://arxiv.org/abs/1809.04320)/[Code](https://github.com/xiaobai1217/MBMD) |
224 | | SiamFC+R (ECCV18) | 0.454 | 52 (Unknown GPU) | [Paper](https://arxiv.org/pdf/1803.09502.pdf)/[Code](https://github.com/oxuva/long-term-tracking-benchmark) |
225 |
226 | * OxUvA Leaderboard: https://competitions.codalab.org/competitions/19529#results
227 | * SiamFC+R is the best tracker in the original [OxUvA](https://arxiv.org/pdf/1803.09502.pdf) paper.
228 |
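The MaxGM measure in the OxUvA table balances true-positive rate (present frames tracked with sufficient overlap) against true-negative rate (absent frames correctly reported absent), maximized over flipping a fraction p of reports to "absent". A small sketch, assuming that definition from the OxUvA paper (`max_gm` is an illustrative name, and the grid search over p is my own simplification):

```python
def max_gm(tpr, tnr, steps=1000):
    """OxUvA-style MaxGM: best geometric mean of TPR and TNR when a
    fraction p of the tracker's 'present' reports is flipped to 'absent'."""
    best = 0.0
    for i in range(steps + 1):
        p = i / steps
        gm = (((1 - p) * tpr) * ((1 - p) * tnr + p)) ** 0.5
        best = max(best, gm)
    return best
```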
229 | * **TLP:**
230 |
231 | | Tracker | Success Score | Speed (fps) | Paper/Code |
232 | |:----------- |:----------------:|:----------------:|:----------------:|
233 | | **LTMU (CVPR20)** | 0.571 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
234 | | GlobalTrack (AAAI20) | 0.520 | 6 (GTX TitanX) | [Paper](https://arxiv.org/abs/1912.08531)/[Code](https://github.com/huanglianghua/GlobalTrack) |
235 | | **SPLT (ICCV19)** | 0.416 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
236 | | MDNet (CVPR16) | 0.372 | 5 (GTX 1080Ti) | [Paper](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Nam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.pdf)/[Code](https://github.com/hyeonseobnam/py-MDNet) |
237 |
238 | * MDNet is the best tracker in the original [TLP](https://amoudgl.github.io/tlp/) paper.
239 |
240 | * **LaSOT:**
241 |
242 | | Tracker | Success Score | Speed (fps) | Paper/Code |
243 | |:----------- |:----------------:|:----------------:|:----------------:|
244 | | Siam R-CNN (CVPR20) | 0.648 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
245 | | PrDiMP50 (CVPR20) | 0.598 | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
246 | | **LTMU (CVPR20)** | 0.572 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
247 | | DiMP50 (ICCV19) | 0.568 | 43 (GTX 1080) | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
248 | | SiamAttn (CVPR20) | 0.560 | 45 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/2004.06711.pdf)/[Code]() |
249 | | SiamFC++GoogLeNet (AAAI20)| 0.544 | 90 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/1911.06188.pdf)/[Code](https://github.com/MegviiDetection/video_analyst) |
250 | | MAML-FCOS (CVPR20) | 0.523 | 42 (NVIDIA P100) | [Paper](https://arxiv.org/pdf/2004.00830.pdf)/[Code]() |
251 | | GlobalTrack (AAAI20) | 0.521 | 6 (GTX TitanX) | [Paper](https://arxiv.org/abs/1912.08531)/[Code](https://github.com/huanglianghua/GlobalTrack) |
252 | | ATOM (CVPR19) | 0.515 | 30 (GTX 1080) | [Paper](https://arxiv.org/pdf/1811.07628.pdf)/[Code](https://github.com/visionml/pytracking) |
253 | | SiamBAN (CVPR20) | 0.514 | 40 (GTX 1080Ti) | [Paper](https://arxiv.org/pdf/2003.06761.pdf)/[Code](https://github.com/hqucv/siamban) |
254 | | SiamCar (CVPR20) | 0.507 | 52 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/1911.07241.pdf)/[Code](https://github.com/ohhhyeahhh/SiamCAR) |
255 | | SiamRPN++ (CVPR19) | 0.496 | 35 (Titan XP) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
256 | | ROAM++ (CVPR20) | 0.447 | 20 (RTX 2080)| [Paper](https://arxiv.org/pdf/1907.12006.pdf)/[Code](https://github.com/skyoung/ROAM) |
257 | | **SPLT (ICCV19)** | 0.426 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
258 | | MDNet (CVPR16) | 0.397 | 5 (GTX 1080Ti) | [Paper](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Nam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.pdf)/[Code](https://github.com/hyeonseobnam/py-MDNet) |
259 |
260 | * MDNet is the best tracker in the original [LaSOT](https://cis.temple.edu/lasot/) paper.
261 |
262 | ## All Tracking Datasets:
263 | * **List:**
264 |
265 | | Datasets | #videos | #total/min/max/average frames | Absent Label |
266 | |:----------- |:----------------:|:----------------:|:----------------:|
267 | | [OTB-2015](http://cvlab.hanyang.ac.kr/tracker_benchmark/) | 100 | 59K/71/3,872/590 | No |
268 | | [TC-128](http://www.dabi.temple.edu/~hbling/data/TColor-128/TColor-128.html) | 128 | 55K/71/3,872/429 | No |
269 | | [NUS-PRO](https://www.ece.nus.edu.sg/lv/pro/nus_pro.html) | 365 | 135K/146/5,040/371 | No |
270 | | [UAV123](https://uav123.org/) | 123 | 113K/109/3,085/915 | No |
271 | | [TB70]() | 70 | XXXX | No |
272 | | [ALOV300++](http://alov300pp.joomlafree.it/) | 315 | 8.9K/XXXX/XXXX/284 | No |
273 | | [NfS](http://ci2cv.net/nfs/index.html) | 100 | 383K/169/20,665/3,830 | No |
274 | | [GOT-10k](http://got-10k.aitestunion.com/) | train-10k, val-180, test-180 | 1.5M | No |
275 | | | | | |
276 | | [LaSOT](https://cis.temple.edu/lasot/) | 1,400 (I-all-1,400/II-test-280) | 3.52M/1,000/11,397/2,506 | Yes |
277 | | | | | |
278 | | [VOT2019-LT/VOT2020-LT](https://www.votchallenge.net/) | 50 | XXXX/XXXX/XXXX/XXXX | Yes |
279 | | [TLP](https://amoudgl.github.io/tlp/) | 50 | XXXX/XXXX/XXXX/XXXX | No |
280 | | [OxUvA](https://oxuva.github.io/long-term-tracking-benchmark/) | 366 (dev-200/test-166) | XXXX/XXXX/XXXX/XXXX | Yes |
281 |
282 | * [OTB-2013](http://cvlab.hanyang.ac.kr/tracker_benchmark/benchmark_v10.html) is a subset of OTB-2015.
283 | * [UAV-20L](https://uav123.org/) has been included in VOT2018-LT/VOT2019-LT/VOT2020-LT.
285 | * [VOT2018-LT](http://www.votchallenge.net/vot2018/) is a subset of VOT2019-LT/VOT2020-LT. VOT2019-LT and VOT2020-LT are the same.
285 |
--------------------------------------------------------------------------------
/notes/MOT-Papers.md:
--------------------------------------------------------------------------------
1 | # Multi-Object-Tracking-Paper-List
2 |
3 | Multi-object tracking is a deeply explored but not yet solved computer vision task. The field needs more open-source code and more standardized evaluation metrics; releasing source code is still not the norm here (perhaps because of the large number of parameters that have to be tuned). This is a paper list for multi-object tracking.
4 |
5 | Demo GIFs from [FairMOT](https://github.com/ifzhang/FairMOT): [MOT15](https://github.com/ifzhang/FairMOT/blob/master/assets/MOT15.gif) · [MOT16](https://github.com/ifzhang/FairMOT/blob/master/assets/MOT16.gif) · [MOT17](https://github.com/ifzhang/FairMOT/blob/master/assets/MOT17.gif) · [MOT20](https://github.com/ifzhang/FairMOT/blob/master/assets/MOT20.gif)
6 |
7 |
8 |
9 |
10 |
11 | | Conf. | Trackers | MOT15<br>MOTA/IDF1 | MOT16<br>MOTA/IDF1 | MOT17<br>MOTA/IDF1 | MOT20<br>MOTA/IDF1 | KITTI<br>MOTA/MOTP | UA-DETRAC<br>PR-MOTA | BDD100K<br>MOTA/MOTP | Waymo<br>MOTA |
12 | |:--------|:-------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
13 | | CVPR2021 | DAM-LSTM | - | 49.9/52.5 | 53.6/55.8 | - | - | - | - | - |
14 | | | LPC_MOT | - | - | 59.0/66.8 | 56.3/62.5 | - | - | - | - |
15 | | | CorrTracker | 62.3/65.7(*) | 76.6/74.3(*) | 76.5/73.6(*) | 65.2/69.1(*) | - | - | - | - |
16 | | | TADAM | - | 59.1/59.5 | 59.7/58.7 | 56.6/51.6 | - | - | - | - |
17 | | | ArTIST | - | 63.0/61.9 | 62.3/59.7 | 53.6/51.0 | - | - | - | - |
18 | | | Qdtrack | - | - | 64.6/65.1 | - | - | - | 64.3/- | 51.18 |
19 | | | SiamMOT | - | - | 65.9/63.3 | - | - | - | - | - |
20 | | | TraDeS | - | 70.1/64.7(*) | 69.1/63.9(*) | - | - | - | - | - |
21 | | | GMTracker | - | 68.7/65.0 | 70.6/66.2 | - | - | - | - | - |
22 | | ECCV20 | CTracker | - | 67.6/57.2(*) | 66.6/57.4(*) | - | - | - | - | - |
23 | | | JDE | - | 64.4/55.8(*) | - | - | - | - | - | - |
24 | | | CenterTrack | - | - | 61.5/59.6<br>67.8/64.7(*) | - | 89.44/85.0(*) | - | - | - |
25 | | | DMM-NET | - | - | - | - | - | 12.2 | - | - |
26 | | CVPR20 | DeepMOT | - | 54.8/53.4 | 53.7/53.8 | - | - | - | - | - |
27 | | | MPNTrack | 51.5/58.6 | 58.6/61.7 | 58.8/61.7 | - | - | - | - | - |
28 | | | GNN3DMOT | - | - | - | - | 82.24/84.05 | - | - | - |
29 | | | UMA | - | 50.5/52.8 | 53.1/54.4 | - | - | - | - | - |
30 | | | MOTSNet | - | - | - | - | - | - | 58.2/84.0(*) | - |
31 | | | SQE | - | -/68.3 | - | - | - | - | - | - |
32 | | | RetinaTrack | - | - | 39.19/-(*) | - | - | - | - | 44.92(*) |
33 | | | TubeTK | - | 64.0/59.4(*) | 63.0/58.6(*) | - | - | - | - | - |
34 | | AAAI20 | DASOT | - | 46.1/49.4 | 48.0/51.3 | - | - | - | - | - |
35 | | ICML20 | Lif_T | 52.5/60.0 | 61.3/64.7 | 60.5/65.6 | - | - | - | - | - |
36 | | IJCAI20 | GSM_Trackor | - | 57.0/58.2 | 56.4/57.8 | - | - | - | - | - |
37 | | ICCV19 | Tracktor++ | 44.1/46.7(*) | 54.4/52.5(*) | 53.5/52.3(*) | - | - | - | - | - |
38 | | | STRN | 38.1/46.6 | 48.5/53.9 | 50.9/56.5 | - | - | - | - | - |
39 | | | FAMNet | 40.6/41.4 | - | 52.0/48.7 | - | 77.1/78.8 | 19.8(*) | - | - |
40 | | | mmMOT | - | - | - | - | 84.77/85.21 | - | - | - |
41 | | | 3DT | - | - | - | - | 84.52/85.64 | - | - | - |
42 | | CVPR19 | SAS | 22.2/27.1 | - | 44.2/57.2 | - | - | - | - | - |
43 | | CVPRW19 | JBNOT | - | - | 52.6/50.8 | - | - | - | - | - |
44 | | AAAI19 | LNUH | - | 47.5/43.6 | - | - | - | - | - | - |
45 | | IV19 | FANTrack | - | - | - | - | 77.72/82.32 | - | - | - |
46 | | ECCV18 | DMAN | - | 46.1/54.8 | 48.2/55.7 | - | - | - | - | - |
47 | | | C-DRL | 37.1/- | 47.3/- | - | - | - | - | - | - |
48 | | | MHT-bLSTM | - | 42.1/47.8 | 47.5/51.9 | - | - | - | - | - |
49 | | CVPRW18 | FWT | - | 47.8/47.8 | 51.3/47.6 | - | - | - | - | - |
50 | | | MOT_LSTM | - | 62.6/- | - | - | - | - | - | - |
51 | | TPAMI18 | CCC | 35.6/45.1 | 47.1/52.3 | 51.2/54.5 | - | - | - | - | - |
52 | | | DAN | 38.3/45.6 | - | 52.4/49.5 | - | - | - | - | - |
53 | | ICPR18 | HOGM | - | 64.8/73.5(*) | - | - | - | - | - | - |
54 | | | BeyondPixels | - | - | - | - | 84.24/85.73 | - | - | - |
55 | | ACM18 | TNT | - | 49.2/56.1 | 51.9/58.0 | - | - | - | - | - |
56 | | ICME18 | MOTDT | - | 47.6/50.9 | - | - | - | - | - | - |
57 | | ICCV17 | AMIR | 37.6/- | 47.2/- | - | - | - | - | - | - |
58 | | | STAM | 34.3/- | 46.0/- | - | - | - | - | - | - |
59 | | CVPR17 | LMP | - | 48.8/- | - | - | - | - | - | - |
60 | | | Quad-CNN | 33.8/- | 44.1/- | - | - | - | - | - | - |
61 | | | DNF | - | - | - | - | 67.36/78.79 | - | - | - |
62 | | TPAMI17 | CDA_DDALpb | 32.8/-<br>51.3/-(*) | 43.9/- | - | - | - | - | - | - |
63 | | AAAI17 | RNN_LSTM | 19.0/17.1 | - | - | - | - | - | - | - |
64 | | ICIPI17 | AP_RCNN | 38.5/-<br>53.0/-(*) | - | - | - | - | - | - | - |
65 | | Others | UnsupTrack | - | 62.4/58.5 | 61.7/58.1 | - | - | - | - | - |
66 | | | CSTrack | - | 75.6/73.3(*) | 74.9/72.6(*) | - | - | - | - | - |
67 | | | FairMOT | 60.6/64.7(*) | 74.9/72.8(*) | 73.7/72.3(*) | 61.8/67.3(*) | - | - | - | - |
68 | | | FMA | - | - | 47.4/- | - | - | - | - | - |
69 | | | SAC | - | 49.2/56.5<br>69.6/68.6(*)<br>71.2/73.1(*) | 52.7/57.9<br>54.7/62.3 | - | - | - | - | - |
70 | | | TAT | - | 49.0/- | 51.5/- | - | - | - | - | - |
71 | | | DeepSORT | - | 61.4/62.2(*) | - | - | - | - | - | - |
72 | | | SORT | - | 59.8/53.8(*) | - | - | - | - | - | - |
73 |
725 | ### Recent MOT Papers
726 |
727 | **StrongSORT** Yunhao Du, Yang Song, Bo Yang, Yanyun Zhao. StrongSORT: Make DeepSORT Great Again[[paper]](https://arxiv.org/pdf/2202.13514.pdf)[[code]](https://github.com/xxxxx)
728 |
729 |
730 | ## Datasets
731 |
732 | ### Surveillance Scenarios
733 |
734 | [PETS2009](http://www.cvg.reading.ac.uk/PETS2009/a.html) : An old dataset.
735 | [MOT dataset](https://motchallenge.net/) : The most widely used dataset for multi-person detection and tracking.
736 | [UA-DETRAC](http://detrac-db.rit.albany.edu/) : A dataset for multi-car detection and tracking.
737 | [AVSS2018 Challenge](https://iwt4s2018.wordpress.com/challenge/) : The AVSS2018 challenge, based on UA-DETRAC, is open!
738 | [DukeMTMC](http://vision.cs.duke.edu/DukeMTMC/) : A dataset for multi-camera multi-person tracking.
739 | [PoseTrack](https://posetrack.net/): A dataset for multi-person pose tracking.
740 | [NVIDIA AI CITY Challenge](https://www.aicitychallenge.org/): Challenges including "Traffic Flow Analysis", "Anomaly Detection" and "Multi-sensor Vehicle Detection and Reidentification"; you may find some interesting code on their [Github repos](https://github.com/NVIDIAAICITYCHALLENGE)
741 | [Vis Drone](http://www.aiskyeye.com/views/index): Tracking videos captured by drone-mounted cameras.
742 | [JTA Dataset](http://imagelab.ing.unimore.it/imagelab/page.asp?IdPage=25): A huge dataset for pedestrian pose estimation and tracking in urban scenarios created by exploiting the highly photorealistic video game Grand Theft Auto V developed by Rockstar North.
743 | [Path Track](http://people.ee.ethz.ch/~daid/pathtrack/) A new dataset with many scenes.
744 | [MOTS](https://www.vision.rwth-aachen.de/page/mots) MOTS: Multi-Object Tracking and Segmentation. In CVPR 2019
745 |
746 | ### Driving Scenarios
747 |
748 | [KITTI-Tracking](http://www.cvlibs.net/datasets/kitti/eval_tracking.php) : Multi-person or multi-car tracking dataset.
749 | [MOTS](https://www.vision.rwth-aachen.de/page/mots) Multi-Object Tracking and Segmentation. In CVPR 2019
750 | [Apollo-Tracking](http://apolloscape.auto/tracking.html) 3D Lidar multi-object tracking.
751 | [Baidu Trajectory](http://apolloscape.auto/trajectory.html) An interesting dataset for trajectory prediction in autonomous driving; not yet released.
752 |
753 | ## Review Papers
754 |
755 | Mk Bashar, Samia Islam, Kashifa Kawaakib Hussain, Md. Bakhtiar Hasan, A.B.M. Ashikur Rahman, and Md. Hasanul Kabir, "Multiple Object Tracking in Recent Times: A Literature Review" [[paper]](https://arxiv.org/abs/2209.04796)
756 | Wenhan Luo, Junliang Xing, Anton Milan, Xiaoqin Zhang, Wei Liu, and Tae-Kyun Kim, "Multiple Object Tracking: A Literature Review" [[paper]](http://pdfs.semanticscholar.org/3dff/acda086689c1bcb01a8dad4557a4e92b8205.pdf)
757 | P Emami,PM Pardalos,L Elefteriadou,S Ranka "Machine Learning Methods for Solving Assignment Problems in Multi-Target Tracking" [[paper]](http://xueshu.baidu.com/s?wd=paperuri%3A%28dcfbdc0f8f79fe44d9166fd2481e37aa%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Farxiv.org%2Fpdf%2F1802.06897&ie=utf-8&sc_us=15766836095004964816)
758 | A 101-slide deck: Globally-Optimal Greedy Algorithms for Tracking a Variable Number of Objects [[paper]](http://vision.stanford.edu/teaching/cs231b_spring1415/slides/greedy_fahim_albert.pdf)
759 | Francisco Luque Sánchez, Siham Tabik, Luigi Troiano, Roberto Tagliaferri, Francisco Herrera, "Deep Learning in Video Multi-Object Tracking: A Survey" [[paper]](https://arxiv.org/pdf/1907.12740.pdf)
760 |
761 | ## Evaluation Metric
762 |
763 | **CLEAR MOT** : Bernardin, K. & Stiefelhagen, R. "Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metric" [[paper]](https://cvhci.anthropomatik.kit.edu/images/stories/msmmi/papers/eurasip2008.pdf)
764 | **IDF1** : Ristani, E., Solera, F., Zou, R., Cucchiara, R. & Tomasi, C. "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking" [[paper]](https://users.cs.duke.edu/~ristani/bmtt2016/ristani2016MTMC.pdf)
765 | **TrackEval** : [[GitHub]](https://github.com/JonathonLuiten/TrackEval)
766 | **Evaluation Code**: [[Python]](https://github.com/cheind/py-motmetrics) [[Matlab]](https://bitbucket.org/amilan/motchallenge-devkit/src/default/)
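At its core, the CLEAR MOT accuracy score above is just a ratio of error counts to ground-truth objects. A minimal sketch (not the official devkit; use py-motmetrics or TrackEval for real evaluation) of computing MOTA from aggregated per-sequence counts:

```python
def mota(num_gt, false_negatives, false_positives, id_switches):
    """MOTA = 1 - (FN + FP + IDSW) / GT, with counts summed over all frames."""
    if num_gt == 0:
        raise ValueError("MOTA is undefined when there are no ground-truth objects")
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

# Toy example: 1000 ground-truth boxes, 120 misses, 80 false alarms, 5 ID switches.
print(round(mota(1000, 120, 80, 5), 3))  # 1 - 205/1000 = 0.795
```

Note that MOTA can be negative when the tracker produces more errors than there are ground-truth objects, which is why it is usually paired with IDF1 to capture identity preservation.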
767 |
768 | ## Researcher
769 |
770 | **Computer Vision Group at RWTH Aachen University** [[webpage and source code]](https://www.vision.rwth-aachen.de/publications/0000/)
771 | **Anton Milan** [[webpage and his source code]](http://www.milanton.de/)
772 | **Laura Leal-Taixé** [[webpage and her source code]](https://lealtaixe.github.io/publications/)
773 | **Dynamic Vision and Learning Group** [[webpage and their source code]](https://dvl.in.tum.de/research/mot/)
774 | **Longyin Wen** [[webpage and his source code]](http://www.cbsr.ia.ac.cn/users/lywen/)
775 | **UCF** [[webpage]](http://crcv.ucf.edu/projects/tracking)
776 |
777 |
778 | ## Open Source
779 |
780 | ### Online
781 |
782 | **Tracking-Objects-Points** Zhou, Xingyi, Koltun, Vladlen and Krähenbühl, Philipp. Tracking Objects as Points [[paper]](https://arxiv.org/pdf/2004.01177v1.pdf)[[code]](https://github.com/xingyizhou/CenterTrack)
783 | **Towards-Realtime-MOT** Zhongdao Wang, Liang Zheng, Yixuan Liu, Shengjin Wang. Towards Real-Time Multi-Object Tracking. [[paper]](https://arxiv.org/pdf/1909.12605v1.pdf) [[code]](https://github.com/Zhongdao/Towards-Realtime-MOT)
784 | **Track-no-bnw** Bergmann P, Meinhardt T, Leal-Taixé L, et al. Tracking without bells and whistles (ICCV2019). [[paper]](https://arxiv.org/pdf/1903.05625.pdf)[[code]](https://github.com/phil-bergmann/tracking_wo_bnw)
785 | **DeepMot** Yihong Xu, Yutong Ban, Xavier Alameda-Pineda, Radu Horaud, "DeepMOT: A Differentiable Framework for Training Multiple Object Trackers" [[paper]](https://arxiv.org/pdf/1906.06618.pdf)[[code]](https://gitlab.inria.fr/yixu/deepmot)
786 | **TrackR-CNN,MOTS** Paul Voigtlaender and Michael Krause, "MOTS: Multi-Object Tracking and Segmentation" In CVPR2019 [[paper]](https://www.vision.rwth-aachen.de/media/papers/mots-multi-object-tracking-and-segmentation/MOTS.pdf)[[code]](https://github.com/VisualComputingInstitute/TrackR-CNN/tree/master)
787 | **Tracktor** Philipp Bergmann, Tim Meinhardt, Laura Leal-Taixe "Tracking without bells and whistles" In Arxiv [[paper]](https://arxiv.org/pdf/1903.05625.pdf)[[code]](https://github.com/phil-bergmann/tracking_wo_bnw)
788 | **DMAN** Zhu, Ji and Yang, Hua and Liu, Nian and Kim, Minyoung and Zhang, Wenjun and Yang, Ming-Hsuan "Online Multi-Object Tracking with Dual Matching Attention Networks" [[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Ji_Zhu_Online_Multi-Object_Tracking_ECCV_2018_paper.pdf) [[code]](https://github.com/jizhu1023/DMAN_MOT) In ECCV2018
789 | **LSST** Weitao Feng, Zhihao Hu, Wei Wu, Junjie Yan, Wanli Ouyang "Multi-Object Tracking with Multiple Cues and Switcher-Aware Classification" [[paper]](https://arxiv.org/abs/1901.06129) A SOTA tracker on the MOT leaderboard; no code yet.
790 | **SST** Sun, S., Akhtar, N., Song, H., Mian, A., & Shah, M. (2018). Deep Affinity Network for Multiple Object Tracking [[paper]](https://arxiv.org/abs/1810.11780)[[code]](https://github.com/shijieS/SST): Interesting work; the authors are expected to update their DPM tracking results on the MOT17 benchmark.
791 | **MOTDT** Long Chen, Haizhou Ai "Real-time Multiple People Tracking with Deeply Learned Candidate Selection and Person Re-identification" in ICME 2018 [[code]](https://github.com/longcw/MOTDT) [[paper]](https://www.researchgate.net/publication/326224594_Real-time_Multiple_People_Tracking_with_Deeply_Learned_Candidate_Selection_and_Person_Re-identification)
792 | **TMPORT** : E. Ristani and C. Tomasi. Tracking Multiple People Online and in Real Time. in ACCV 2014 [[paper]](https://users.cs.duke.edu/~tomasi/papers/ristani/ristaniAccv14.pdf) [[code]](http://vision.cs.duke.edu/DukeMTMC/)
793 | **MOT-RNN** : Anton Milan, Seyed Hamid Rezatofighi, Anthony Dick, Konrad Schindler, Ian Reid "Online Multi-target Tracking using Recurrent Neural Networks"[[paper]](http://www.milanton.de/files/aaai2017/aaai2017-anton-rnntracking.pdf) [[code]](https://bitbucket.org/amilan/rnntracking) In AAAI 2017.
794 | **DeepSort** : Wojke, Nicolai and Bewley, Alex and Paulus, Dietrich "Simple Online and Realtime Tracking with a Deep Association Metric" [[paper]](https://arxiv.org/abs/1703.07402) [[code]](https://github.com/nwojke/deep_sort) In ICIP 2017
795 | **Sort** : Bewley, Alex and Ge, Zongyuan and Ott, Lionel and Ramos, Fabio and Upcroft, Ben "Simple Online and Realtime Tracking"[[paper]](https://arxiv.org/abs/1602.00763) [[code]](https://github.com/abewley/sort) In ICIP 2016.
796 | **MDP** : Yu Xiang, Alexandre Alahi, and Silvio Savarese "Learning to Track: Online Multi-Object Tracking by Decision Making" [[paper]](http://openaccess.thecvf.com/content_iccv_2015/papers/Xiang_Learning_to_Track_ICCV_2015_paper.pdf) [[code]](http://cvgl.stanford.edu/projects/MDP_tracking/) In International Conference on Computer Vision (ICCV), 2015
798 | **CMOT** : S. H. Bae and K. Yoon. "Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning" [[paper]](https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Bae_Robust_Online_Multi-Object_2014_CVPR_paper.pdf) [[code]](https://cvl.gist.ac.kr/project/cmot.html) In CVPR 2014
799 | **RCMSS** : Mohamed A. Naiel, M. Omair Ahmad, M.N.S. Swamy, Jongwoo Lim, and Ming-Hsuan Yang "Online Multi-Object Tracking Via Robust Collaborative Model and Sample Selection" [[paper]](https://users.encs.concordia.ca/~rcmss/include/Papers/CVIU2016.pdf) [[code]](https://users.encs.concordia.ca/~rcmss/) Computer Vision and Image Understanding 2016
801 | **MHT-DAM** : Chanho Kim, Fuxin Li, Arridhana Ciptadi, James M. Rehg "Multiple Hypothesis Tracking Revisited"[[paper]](https://www.cc.gatech.edu/~ckim314/papers/MHTR_ICCV2015.pdf) [[code]](http://rehg.org/mht/) In ICCV 2015
802 | **OMPTTH** : Jianming Zhang, Liliana Lo Presti and Stan Sclaroff, "Online Multi-Person Tracking by Tracker Hierarchy," [[paper]]() [[code]](http://cs-people.bu.edu/jmzhang/tracker_hierarchy/Tracker_Hierarchy.htm) Proc. Int. Conf. on Advanced Video and Signal Based Surveillance (AVSS), 2012.
803 | **SMOT** : C. Dicle, O. Camps, M. Sznaier. "The Way They Move: Tracking Targets with Similar Appearance" [[paper]](https://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Dicle_The_Way_They_2013_ICCV_paper.pdf) [[code]](https://bitbucket.org/cdicle/smot) In ICCV, 2013.
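Many of the online trackers above (SORT, DeepSORT, MOTDT, IOU tracker) share one primitive: associating this frame's detections to existing tracks by bounding-box overlap. A minimal sketch, assuming a simple greedy matcher in place of the Hungarian assignment the papers typically use (function names and the 0.3 threshold are illustrative, and there is no Kalman prediction here):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_match(tracks, detections, iou_threshold=0.3):
    """Greedily pair each track box with its best unmatched detection by IoU."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to be the same object
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches  # unmatched tracks are candidates for termination, unmatched detections for new tracks
```

SORT refines this by matching against Kalman-predicted boxes with optimal (Hungarian) assignment; DeepSORT additionally gates matches with an appearance (re-ID) distance.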
804 |
805 | ### Batch
806 |
807 | **muSSP** Wang C, Wang Y, Wang Y, et al. muSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking. In Neural Information Processing Systems (NeurIPS), 2019: 423-432. [[paper]](http://papers.nips.cc/paper/8334-mussp-efficient-min-cost-flow-algorithm-for-multi-object-tracking)[[code]](https://github.com/yu-lab-vt/muSSP)
808 | **TNT** Gaoang Wang, Yizhou Wang, Haotian Zhang, Renshu Gu, Jenq-Neng Hwang "Exploit the Connectivity: Multi-Object Tracking with TrackletNet" [[paper]](https://arxiv.org/pdf/1811.07258.pdf) [[code]](https://github.com/GaoangW/TNT)
809 | **NT** Longyin Wen*, Dawei Du*, Shengkun Li, Xiao Bian, Siwei Lyu. Learning Non-Uniform Hypergraph for Multi-Object Tracking, In AAAI 2019 [[paper]](http://www.cs.albany.edu/~lsw/papers/aaai19a.pdf)[[code]](https://github.com/longyin880815) Code not yet released.
810 | **headTracking**: Shun Zhang, Jinjun Wang, Zelun Wang, Yihong Gong, Yuehu Liu: "Multi-Target Tracking by Learning Local-to-Global Trajectory Models" in PR 2015 [[paper]](https://www.researchgate.net/publication/265295656_Multi-Target_Tracking_by_Learning_Local-to-Global_Trajectory_Models) [[code]](https://github.com/gengshan-y/headTracking) appears to be an unofficial implementation.
811 | **IOU** : E. Bochinski, V. Eiselein, T. Sikora. "High-Speed Tracking-by-Detection Without Using Image Information" [[paper]](http://elvera.nue.tu-berlin.de/files/1517Bochinski2017.pdf) [[code]](https://github.com/bochinski/iou-tracker/) In International Workshop on Traffic and Street Surveillance for Safety and Security at IEEE AVSS 2017, 2017.
812 | **NMGC-MOT** Andrii Maksai, Xinchao Wang, François Fleuret, and Pascal Fua "Non-Markovian Globally Consistent Multi-Object Tracking" [[paper]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf)[[code]](https://github.com/maksay/ptrack_cpp) In ICCV 2017
814 | **D2T** Christoph Feichtenhofer, Axel Pinz, Andrew Zisserman, "Detect to Track and Track to Detect" [[paper]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Feichtenhofer_Detect_to_Track_ICCV_2017_paper.pdf) [[code]](https://github.com/feichtenhofer/Detect-Track) In ICCV 2017
815 | **H2T** : Longyin Wen, Wenbo Li, Junjie Yan, Zhen Lei, Dong Yi, Stan Z. Li. "Multiple Target Tracking Based on Undirected Hierarchical Relation Hypergraph," [[paper]](http://www.cbsr.ia.ac.cn/users/lywen/papers/CVPR2014_HyperGraphMultiTargetsTracker.pdf) [[code]](http://www.cbsr.ia.ac.cn/users/lywen/) IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
816 | **LDCT** : F. Solera, S. Calderara, R. Cucchiara "Learning to Divide and Conquer for Online Multi-Target Tracking" [[paper]](http://ieeexplore.ieee.org/document/7410854/) [[code page 1]](https://github.com/francescosolera/LDCT) [[code page 2]](http://imagelab.ing.unimore.it/imagelab/researchActivity.asp?idActivity=09) In Proceedings of the International Conference on Computer Vision (ICCV), Santiago, Chile, Dec 12-18, 2015
817 | **CEM** : Anton Milan, Stefan Roth, Konrad Schindler "Continuous Energy Minimization for Multi-Target Tracking" [[paper]](http://www.milanton.de/files/pami2014/pami2014-anton.pdf) [[code]](http://www.milanton.de/contracking/) In TPAMI 2014
818 | **OPCNF** : Chari, Visesh and Lacoste-Julien, Simon and Laptev, Ivan and Sivic, Josef "On Pairwise Costs for Network Flow Multi-Object Tracking" [[paper]](https://arxiv.org/abs/1408.3304) [[code]](http://www.di.ens.fr/willow/research/flowtrack/) In CVPR 2015
819 | **KSP** : J. Berclaz, F. Fleuret, E. Türetken and P. Fua "Multiple Object Tracking using K-Shortest Paths Optimization" [[paper]](https://cvlab.epfl.ch/files/content/sites/cvlab2/files/publications/publications/2011/BerclazFTF11.pdf) [[code]](https://cvlab.epfl.ch/software/ksp) IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011.
820 | **GMCP** : Amir Roshan Zamir, Afshin Dehghan, and Mubarak Shah "GMCP-Tracker: Global Multi-object Tracking Using Generalized Minimum Clique Graphs" [[paper]](http://crcv.ucf.edu/papers/eccv2012/GMCP-Tracker_ECCV12.pdf) [[code]](http://crcv.ucf.edu/projects/GMCP-Tracker/) European Conference on Computer Vision (ECCV), 2012.
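The batch methods above (KSP, muSSP, OPCNF and other network-flow trackers) cast data association as a min-cost path problem over a graph of detections across the whole sequence. A toy single-trajectory sketch of that idea using Viterbi-style dynamic programming (names and costs are illustrative; the real methods handle many objects, births, and deaths via min-cost flow rather than this single-path DP):

```python
def best_trajectory(frames, link_cost):
    """Pick one detection per frame minimizing the summed link costs (Viterbi DP).

    `frames` is a list of per-frame detection lists; `link_cost(prev, cur)`
    scores linking two detections in consecutive frames.
    """
    prev = frames[0]
    cost = [0.0] * len(prev)   # cheapest path cost ending at each detection
    back = []                  # back-pointers, one list per frame transition
    for frame in frames[1:]:
        step_cost, step_back = [], []
        for det in frame:
            c, j = min((cost[j] + link_cost(p, det), j) for j, p in enumerate(prev))
            step_cost.append(c)
            step_back.append(j)
        cost, prev = step_cost, frame
        back.append(step_back)
    # backtrack from the cheapest endpoint to recover detection indices per frame
    end = min(range(len(cost)), key=cost.__getitem__)
    total, path = cost[end], [end]
    for step_back in reversed(back):
        path.append(step_back[path[-1]])
    return path[::-1], total
```

With a squared-distance link cost on detection centers, the DP picks the spatially smoothest chain of detections; flow-based batch trackers generalize this by solving for K disjoint min-cost paths at once.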
821 |
822 | ### Driving Scenarios
823 |
824 | **MOTBeyondPixels** Sarthak Sharma*, Junaid Ahmed Ansari*, J. Krishna Murthy, and K. Madhava Krishna Beyond Pixels: Leveraging Geometry and Shape Cues for Online Multi-Object Tracking In ICRA 2018 [[paper]](https://arxiv.org/abs/1802.09298)[[code]](https://github.com/JunaidCS032/MOTBeyondPixels)
825 | **CIWT** Aljoša Ošep, Alexander Hermans. Combined Image and World-Space Tracking in Traffic Scenes. In ICRA 2017 [[paper]](https://www.vision.rwth-aachen.de/media/papers/paper_final_compressed.pdf) [[code]](https://github.com/aljosaosep/ciwt)
826 |
827 | ### MCMT
828 |
829 | **DeepCC** Ristani and C. Tomasi "Features for Multi-Target Multi-Camera Tracking and Re-Identification" In CVPR 2018 [[paper]](https://arxiv.org/pdf/1803.10859.pdf) [[code]](https://github.com/ergysr/DeepCC)
830 | **towards-reid-tracking** Lucas Beyer∗, Stefan Breuers∗ "Towards a Principled Integration of Multi-Camera Re-Identification and Tracking through Optimal Bayes Filters" [[paper]](https://arxiv.org/pdf/1705.04608.pdf)[[code]](https://github.com/VisualComputingInstitute/towards-reid-tracking)
831 |
832 | ### RGBD Tracking
833 |
834 | **DetTA** Stefan Breuers, Lucas Beyer "Detection-Tracking for Efficient Person Analysis: The DetTA Pipeline" [[paper]](https://arxiv.org/abs/1804.10134)[[code]](https://github.com/sbreuers/detta)
835 |
836 | ## Private Detection
837 |
838 | **POI** : F. Yu, W. Li, Q. Li, Y. Liu, X. Shi, J. Yan. "POI: Multiple Object Tracking with High Performance Detection and Appearance Feature" [[paper]](https://arxiv.org/pdf/1610.06136.pdf) [[detection]](https://drive.google.com/open?id=0B5ACiy41McAHMjczS2p0dFg3emM) In BMTT, SenseTime Group Limited, 2016
839 |
840 | ## New papers
841 |
842 | ### CVPR
843 |
844 | #### 2017
845 |
846 | Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres, Bernt Schiele "ArtTrack: Articulated Multi-Person Tracking in the Wild" [[paper]](https://arxiv.org/abs/1612.01465)
847 | Manmohan Chandraker, Paul Vernaza, Wongun Choi, Samuel Schulter "Deep Network Flow for Multi-Object Tracking" [[paper]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Schulter_Deep_Network_Flow_CVPR_2017_paper.pdf)
848 | Jeany Son, Mooyeol Baek, Minsu Cho, and Bohyung Han, "Multi-Object Tracking with Quadruplet Convolutional Neural Networks" [[paper]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Son_Multi-Object_Tracking_With_CVPR_2017_paper.pdf)
849 |
850 | #### 2018
851 |
852 | Girdhar R, Gkioxari G, Torresani L, et al. Detect-and-Track: Efficient Pose Estimation in Videos[C].(CVPR2018)[[paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Girdhar_Detect-and-Track_Efficient_Pose_CVPR_2018_paper.pdf)[[code]](https://rohitgirdhar.github.io/DetectAndTrack/)
853 | Rolling Shutter and Radial Distortion Are Features for High Frame Rate Multi-Camera Tracking[[paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Bapat_Rolling_Shutter_and_CVPR_2018_paper.pdf)
854 | Features for Multi-Target Multi-Camera Tracking and Re-Identification [[paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ristani_Features_for_Multi-Target_CVPR_2018_paper.pdf)
855 |
856 | #### 2019
857 |
858 | Efficient Online Multi-Person 2D Pose Tracking With Recurrent Spatio-Temporal Affinity Fields [[paper]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Raaj_Efficient_Online_Multi-Person_2D_Pose_Tracking_With_Recurrent_Spatio-Temporal_Affinity_CVPR_2019_paper.pdf)
859 | Eliminating Exposure Bias and Metric Mismatch in Multiple Object Tracking [[paper]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Maksai_Eliminating_Exposure_Bias_and_Metric_Mismatch_in_Multiple_Object_Tracking_CVPR_2019_paper.pdf)
860 |
861 | ### ICCV
862 |
863 | A. Sadeghian, A. Alahi, S. Savarese, Tracking The Untrackable: Learning To Track Multiple Cues with Long-Term Dependencies [[paper]](https://arxiv.org/abs/1701.01909)
864 | Andrii Maksai, Xinchao Wang, François Fleuret, and Pascal Fua "Non-Markovian Globally Consistent Multi-Object Tracking" [[paper]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Maksai_Non-Markovian_Globally_Consistent_ICCV_2017_paper.pdf)[[code]](https://github.com/maksay/ptrack_cpp)
866 | Christoph Feichtenhofer, Axel Pinz, Andrew Zisserman, "Detect to Track and Track to Detect" [[paper]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Feichtenhofer_Detect_to_Track_ICCV_2017_paper.pdf) [[code]](https://github.com/feichtenhofer/Detect-Track)
867 | Qi Chu, Wanli Ouyang, Xiaogang Wang, Bin Liu, Nenghai Yu "Online Multi-Object Tracking Using CNN-Based Single Object Tracker With Spatial-Temporal Attention Mechanism" [[paper]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Chu_Online_Multi-Object_Tracking_ICCV_2017_paper.pdf)
868 | **Track-no-bnw** Bergmann P, Meinhardt T, Leal-Taixé L, et al. Tracking without bells and whistles (ICCV2019). [[paper]](https://arxiv.org/pdf/1903.05625.pdf)[[code]](https://github.com/phil-bergmann/tracking_wo_bnw)
869 |
870 | ### ECCV2018
871 |
872 | Ren, Liangliang and Lu, Jiwen and Wang, Zifeng and Tian, Qi and Zhou, Jie "Collaborative Deep Reinforcement Learning for Multi-Object Tracking" [[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Liangliang_Ren_Collaborative_Deep_Reinforcement_ECCV_2018_paper.pdf)
873 | Kim, Chanho and Li, Fuxin and Rehg, James M "Multi-object Tracking with Neural Gating Using Bilinear LSTM" [[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Chanho_Kim_Multi-object_Tracking_with_ECCV_2018_paper.pdf)
874 |
875 | ### New paper
876 |
877 | M Fabbri, F Lanzi, S Calderara, A Palazzi "Learning to Detect and Track Visible and Occluded Body Joints in a Virtual World" [[paper]](https://www.researchgate.net/publication/323957071_Learning_to_Detect_and_Track_Visible_and_Occluded_Body_Joints_in_a_Virtual_World) Code not yet released.
878 | Cong Ma, Changshui Yang, Fan Yang, Yueqing Zhuang, Ziwei Zhang, Huizhu Jia, Xiaodong Xie "Trajectory Factory: Tracklet Cleaving and Re-connection by Deep Siamese Bi-GRU for Multiple Object Tracking" In ICME 2018 [[paper]](https://arxiv.org/abs/1804.04555)
879 | Kuan Fang, Yu Xiang, Xiaocheng Li and Silvio Savarese "Recurrent Autoregressive Networks for Online Multi-Object Tracking" In IEEE Winter Conference on Applications of Computer Vision (WACV), 2018. [[webpage]](http://yuxng.github.io/)
880 | Tharindu Fernando, Simon Denman, Sridha Sridharan, Clinton Fookes "Tracking by Prediction: A Deep Generative Model for Mutli-Person localisation and Tracking" In WACV 2018 [[paper]](https://arxiv.org/pdf/1803.03347.pdf)
881 |
882 | ### Multi-person Face Tracking
883 |
884 | Shun Zhang, Yihong Gong, Jia-Bin Huang, Jongwoo Lim, Jinjun Wang, Narendra Ahuja and Ming-Hsuan Yang "Tracking Persons-of-Interest via Adaptive Discriminative Features" In ECCV 2016 [[paper]](https://link.springer.com/content/pdf/10.1007%2F978-3-319-46454-1_26.pdf) [[code]](https://github.com/shunzhang876/AdaptiveFeatureLearning)
885 | Chung-Ching Lin, Ying Hung "A Prior-Less Method for Multi-Face Tracking in Unconstrained Videos" In CVPR 2018 [[paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Lin_A_Prior-Less_Method_CVPR_2018_paper.pdf)
886 |
887 | ### Multi-person Pose Tracking
888 |
889 | Yuliang Xiu, Jiefeng Li, Haoyu Wang, Yinghong Fang, Cewu Lu "Pose Flow: Efficient Online Pose Tracking" [[paper]](https://arxiv.org/abs/1802.00977) The idea is interesting, but the original source code has not been released.
890 | Bin Xiao, Haiping Wu, and Yichen Wei "Simple Baselines for Human Pose Estimation and Tracking" [[paper]](https://arxiv.org/pdf/1804.06208.pdf)[[code]](https://github.com/Microsoft/human-pose-estimation.pytorch)
891 | Girdhar R, Gkioxari G, Torresani L, et al. Detect-and-Track: Efficient Pose Estimation in Videos[C].(CVPR2018)[[paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Girdhar_Detect-and-Track_Efficient_Pose_CVPR_2018_paper.pdf)[[code]](https://rohitgirdhar.github.io/DetectAndTrack/)
892 | Multi-Person Articulated Tracking With Spatial and Temporal Embeddings [[paper]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Jin_Multi-Person_Articulated_Tracking_With_Spatial_and_Temporal_Embeddings_CVPR_2019_paper.pdf)
893 |
894 | ### Multi-camera Tracking
895 |
896 | Rolling Shutter and Radial Distortion Are Features for High Frame Rate Multi-Camera Tracking[[paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Bapat_Rolling_Shutter_and_CVPR_2018_paper.pdf)
897 | CityFlow: A City-Scale Benchmark for Multi-Target Multi-Camera Vehicle Tracking and Re-Identification[[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Tang_CityFlow_A_City-Scale_Benchmark_for_Multi-Target_Multi-Camera_Vehicle_Tracking_and_CVPR_2019_paper.pdf)]
898 |
--------------------------------------------------------------------------------
/notes/Online-Visual-Tracking-SOTA.md:
--------------------------------------------------------------------------------
1 | # Online-Visual-Tracking-SOTA
2 |
3 | This page monitors state-of-the-art performance on the short-term tracking task (if you are interested in the long-term tracking task, please visit [here](https://github.com/wangdongdut/Long-term-Visual-Tracking)). The evaluation datasets include:
4 | LaSOT, VOT2019, VOT2018, TrackingNet, GOT-10k, NFS, UAV123, TC-128, OTB-100.
5 |
6 | * **TOP-One Performance on All Datasets:**
7 |
8 | | LaSOT | VOT2019 | VOT2018 | TrackingNet | GOT-10k | NFS | UAV123 | TC-128 | OTB-100 |
9 | |:--------:|:-------:|:-------:|:-----------:|:-----------:|:--------:|:-------:|:--------:|:-----------:|
10 | | Success | EAO | EAO | Success | Success | Success | Success | Success | Success |
11 | | 0.648 | 0.395 | 0.489 | 0.812 | 0.649 | 0.639 | 0.680 | 0.649 | 0.712 |
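
The Success scores in these tables are area-under-curve (AUC) values of the success plot: the fraction of frames whose predicted-box IoU with the ground truth exceeds a threshold, averaged over thresholds from 0 to 1. A minimal sketch, assuming `(x, y, w, h)` boxes and 21 evenly spaced thresholds as in the OTB toolkit:

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x, y, w, h)."""
    xa = max(box_a[0], box_b[0])
    ya = max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_score(pred_boxes, gt_boxes, thresholds=None):
    """AUC of the success plot: mean over thresholds of the fraction
    of frames whose IoU exceeds that threshold."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 21)
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(ious > t).mean() for t in thresholds]))
```

Because the success rate is integrated over the full threshold range, the AUC is close to the mean per-frame IoU over the sequence.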
12 |
13 | * **LaSOT:**
14 |
15 | | Tracker | Success Score | Speed (fps) | Paper/Code |
16 | |:----------- |:----------------:|:----------------:|:----------------:|
17 | | Siam R-CNN (CVPR20) | 0.648 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
18 | | PrDiMP50 (CVPR20) | 0.598 | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
19 | | LTMU (CVPR20) | 0.572 | 13 (RTX 2080Ti) | [Paper](https://arxiv.org/abs/2004.00305)/[Code](https://github.com/Daikenan/LTMU) |
20 | | DiMP50 (ICCV19) | 0.568 | 43 (GTX 1080) | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
21 | | SiamAttn (CVPR20) | 0.560 | 45 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/2004.06711.pdf)/[Code]() |
22 | | SiamFC++GoogLeNet (AAAI20)| 0.544 | 90 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/1911.06188.pdf)/[Code](https://github.com/MegviiDetection/video_analyst) |
23 | | MAML-FCOS (CVPR20) | 0.523 | 42 (NVIDIA P100) | [Paper](https://arxiv.org/pdf/2004.00830.pdf)/[Code]() |
24 | | GlobalTrack (AAAI20) | 0.521 | 6 (GTX TitanX) | [Paper](https://arxiv.org/abs/1912.08531)/[Code](https://github.com/huanglianghua/GlobalTrack) |
25 | | ATOM (CVPR19) | 0.515 | 30 (GTX 1080) | [Paper](https://arxiv.org/pdf/1811.07628.pdf)/[Code](https://github.com/visionml/pytracking) |
26 | | SiamBAN (CVPR20) | 0.514 | 40 (GTX 1080Ti) | [Paper](https://arxiv.org/pdf/2003.06761.pdf)/[Code](https://github.com/hqucv/siamban) |
27 | | SiamCAR (CVPR20) | 0.507 | 52 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/1911.07241.pdf)/[Code](https://github.com/ohhhyeahhh/SiamCAR) |
28 | | SiamRPN++ (CVPR19) | 0.496 | 35 (Titan XP) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
29 | | ROAM++ (CVPR20) | 0.447 | 20 (RTX 2080)| [Paper](https://arxiv.org/pdf/1907.12006.pdf)/[Code](https://github.com/skyoung/ROAM) |
30 | | SPLT (ICCV19) | 0.426 | 26 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)/[Code](https://github.com/iiau-tracker/SPLT) |
31 | | MDNet (CVPR16) | 0.397 | 5 (GTX 1080Ti) | [Paper](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Nam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.pdf)/[Code](https://github.com/hyeonseobnam/py-MDNet) |
32 |
33 | * MDNet is the best tracker in the original [LaSOT](https://cis.temple.edu/lasot/) paper.
34 |
35 | * **VOT2019:**
36 |
37 | | Tracker | EAO | Accuracy (A) | Robustness (R) | Paper/Code |
38 | |:----------- |:----------------:|:----------------:|:----------------:|:----------------:|
39 | | DRNet (VOT2019) | 0.395 | 0.605 | 0.261 | [Code](https://github.com/ShuaiBai623/DRNet)|
40 |
41 | * DRNet is the best tracker in the original [VOT2019](http://prints.vicos.si/publications/375) report.
42 |
43 | * **VOT2018:**
44 |
45 | | Tracker | EAO | Accuracy (A) | Robustness (R) | Paper/Code |
46 | |:----------- |:----------------:|:----------------:|:----------------:|:----------------:|
47 | | D3S (CVPR20) | 0.489 | 0.640 | 0.150 | [Paper](https://arxiv.org/pdf/1911.08862.pdf)/[Code](https://github.com/alanlukezic/d3s) |
48 | | SiamAttn (CVPR20) | 0.470 | 0.63 | 0.16 | [Paper](https://arxiv.org/pdf/2004.06711.pdf)/[Code]() |
49 | | MAML-Retina (CVPR20) | 0.452 | 0.604 | 0.159 | [Paper](https://arxiv.org/pdf/2004.00830.pdf)/[Code]() |
50 | | SiamBAN (CVPR20) | 0.452 | 0.597 | 0.178 | [Paper](https://arxiv.org/pdf/2003.06761.pdf)/[Code](https://github.com/hqucv/siamban) |
51 | | PrDiMP50 (CVPR20) | 0.442 | 0.618 | 0.165 | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
52 | | DiMP50 (ICCV19) | 0.440 | 0.587 | 0.153 | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
53 | | SiamFC++GoogLeNet (AAAI20)| 0.426 | 0.587 | 0.183 | [Paper](https://arxiv.org/pdf/1911.06188.pdf)/[Code](https://github.com/MegviiDetection/video_analyst) |
54 | | SiamRPN++ (CVPR19) | 0.414 | 0.600 | 0.234 | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
55 | | Siam R-CNN (CVPR20) | 0.408 | 0.597 | 0.220 | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
56 | | ATOM (CVPR19) | 0.401 | 0.590 | 0.204 | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Danelljan_ATOM_Accurate_Tracking_by_Overlap_Maximization_CVPR_2019_paper.pdf)/[Code](https://github.com/visionml/pytracking) |
57 | | LADCF (VOT2018) | 0.389 | 0.503 | 0.159 | [Code](https://github.com/XU-TIANYANG/LADCF) |
58 |
59 | * VOT2018 and VOT2017 share the same sequences. VOT2013 to VOT2016 are small-scale and out-of-date.
60 | * LADCF is the best tracker in the original [VOT2018](http://prints.vicos.si/publications/365) report.
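
The EAO (expected average overlap) reported for VOT is more involved than a per-frame IoU average: the toolkit runs a reset-based experiment and averages per-sequence overlap curves over an interval of typical sequence lengths. A heavily simplified sketch of the averaging step, ignoring the reset protocol (the `length_range` values here are illustrative, not the official interval):

```python
import numpy as np

def expected_average_overlap(per_sequence_overlaps, length_range=(100, 356)):
    """Simplified EAO: for each length Ns in a range of typical sequence
    lengths, average the mean overlap of the first Ns frames across
    sequences (sequences shorter than Ns contribute zeros, as frames
    after a failure would), then average over the length range."""
    lo, hi = length_range
    eao_per_length = []
    for ns in range(lo, hi + 1):
        phis = []
        for overlaps in per_sequence_overlaps:
            padded = np.zeros(ns)
            n = min(len(overlaps), ns)
            padded[:n] = overlaps[:n]
            phis.append(padded.mean())
        eao_per_length.append(np.mean(phis))
    return float(np.mean(eao_per_length))
```

The official measure additionally restarts the tracker five frames after each failure and bootstraps the length interval from the dataset, so the numbers in the table come from the VOT toolkit, not from a direct average like this.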
61 |
62 | * **TrackingNet:**
63 |
64 | | Tracker | Success Score | Norm Precision Score | Speed (fps) | Paper/Code |
65 | |:----------- |:----------------:|:----------------:|:----------------:|:----------------:|
66 | | Siam R-CNN (CVPR20) | 0.812 | 0.854 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
67 | | PrDiMP50 (CVPR20) | 0.758 | 0.816 | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
68 | | MAML-FCOS (CVPR20) | 0.757 | 0.822 | 42 (NVIDIA P100) | [Paper](https://arxiv.org/pdf/2004.00830.pdf)/[Code]() |
69 | | SiamAttn (CVPR20) | 0.752 | 0.817 | 45 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/2004.06711.pdf)/[Code]() |
70 | | DiMP50 (ICCV19) | 0.740 | 0.801 | 43 (GTX 1080) | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
71 | | SiamRPN++ (CVPR19) | 0.733 | 0.800 | 35 (Titan XP) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
72 |
73 | * Performance on TrackingNet has improved rapidly; here we merely list the trackers that perform better than **SiamRPN++**.
74 | * TrackingNet leaderboard: http://eval.tracking-net.org/web/challenges/challenge-page/39/leaderboard
75 |
76 | * **GOT-10k:**
77 |
78 | | Tracker | Success Score (AO) | Speed (fps) | Paper/Code |
79 | |:----------- |:----------------:|:----------------:|:----------------:|
80 | | Siam R-CNN (CVPR20) | 0.649 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
81 | | PrDiMP50 (CVPR20) | 0.634 | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
82 | | DiMP50 (ICCV19) | 0.611 | 43 (GTX 1080) | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
83 | | D3S (CVPR20) | 0.597 | 25 (GTX 1080) | [Paper](https://arxiv.org/pdf/1911.08862.pdf)/[Code](https://github.com/alanlukezic/d3s) |
84 | | ATOM (CVPR19) | 0.556 | 30 (GTX 1080) | [Paper](https://arxiv.org/pdf/1811.07628.pdf)/[Code](https://github.com/visionml/pytracking) |
85 |
86 | * Performance on GOT-10k has improved significantly since ATOM; here we merely list the trackers that perform better than **ATOM**.
87 | * GOT-10k leaderboard: http://got-10k.aitestunion.com/leaderboard
88 |
89 | * **NFS:**
90 |
91 | | Tracker | Success Score | Speed (fps) | Paper/Code |
92 | |:----------- |:----------------:|:----------------:|:----------------:|
93 | | Siam R-CNN (CVPR20) | 0.639 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
94 | | PrDiMP50 (CVPR20) | 0.635 | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
95 | | DiMP50 (ICCV19) | 0.620 | 43 (GTX 1080) | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
96 |
97 | * **UAV123:**
98 |
99 | | Tracker | Success Score | Speed (fps) | Paper/Code |
100 | |:----------- |:----------------:|:----------------:|:----------------:|
101 | | PrDiMP50 (CVPR20) | 0.680 | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
102 | | DiMP50 (ICCV19) | 0.654 | 43 (GTX 1080) | [Paper](https://arxiv.org/pdf/1904.07220.pdf)/[Code](https://github.com/visionml/pytracking) |
103 | | Siam R-CNN (CVPR20) | 0.649 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
104 | | ATOM (CVPR19) | 0.643 | 30 (GTX 1080) | [Paper](https://arxiv.org/pdf/1811.07628.pdf)/[Code](https://github.com/visionml/pytracking) |
105 | | SiamRPN++ (CVPR19) | 0.642 | 35 (Titan XP) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
106 |
107 | * **TC-128:**
108 |
109 | | Tracker | Success Score | Speed (fps) | Paper/Code |
110 | |:----------- |:----------------:|:----------------:|:----------------:|
111 | | Siam R-CNN (CVPR20) | 0.649 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
112 | | UPDT (ECCV2018) | 0.622 | N/A | [Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Goutam_Bhat_Unveiling_the_Power_ECCV_2018_paper.pdf) |
113 |
114 | * **OTB-100/OTB-2015:**

115 | | Tracker | Success Score | Precision Score | Speed (fps) | Paper/Code |
116 | |:----------- |:----------------:|:----------------:|:----------------:|:----------------:|
117 | | SiamAttn (CVPR20) | 0.712 | 0.926 | 45 (RTX 2080Ti) | [Paper](https://arxiv.org/pdf/2004.06711.pdf)/[Code]() |
118 | | UPDT (ECCV2018) | 0.702 | 0.931 | N/A | [Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Goutam_Bhat_Unveiling_the_Power_ECCV_2018_paper.pdf) |
119 | | Siam R-CNN (CVPR20) | 0.701 | 0.891 | 5 (Tesla V100) | [Paper](https://arxiv.org/pdf/1911.12836.pdf)/[Code](https://github.com/VisualComputingInstitute/SiamR-CNN) |
120 | | DRT (CVPR18) | 0.699 | 0.923 | N/A | [Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Correlation_Tracking_via_CVPR_2018_paper.pdf)/[Code](https://github.com/cswaynecool/DRT) |
121 | | PrDiMP50 (CVPR20) | 0.696 | N/A | 30 (Unknown GPU) | [Paper](https://arxiv.org/pdf/2003.12565.pdf)/[Code](https://github.com/visionml/pytracking) |
122 | | SiamRPN++ (CVPR19) | 0.696 | 0.914 | 35 (Titan XP) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)/[Code](https://github.com/STVIR/pysot) |
123 | | MCCT (CVPR18) | 0.696 | 0.914 | 8 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Multi-Cue_Correlation_Filters_CVPR_2018_paper.pdf)/[Code](https://github.com/594422814/MCCT) |
124 | | SiamBAN (CVPR20) | 0.696 | 0.910 | 40 (GTX 1080Ti) | [Paper](https://arxiv.org/pdf/2003.06761.pdf)/[Code](https://github.com/hqucv/siamban) |
125 | | GFS-DCF (ICCV19) | 0.693 | 0.932 | 8 (Titan X) | [Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Xu_Joint_Group_Feature_Selection_and_Discriminative_Filter_Learning_for_Robust_ICCV_2019_paper.pdf)/[Code](https://github.com/XU-TIANYANG/GFS-DCF) |
126 | | SACF (ECCV18) | 0.693 | 0.917 | 23 (GTX Titan) | [Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/mengdan_zhang_Visual_Tracking_via_ECCV_2018_paper.pdf)|
127 | | ASRCF(CVPR19) | 0.692 | 0.922 | 28 (GTX 1080Ti) | [Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Dai_Visual_Tracking_via_Adaptive_Spatially-Regularized_Correlation_Filters_CVPR_2019_paper.pdf)/[Code](https://github.com/Daikenan/ASRCF) |
128 | | LSART (CVPR18) | 0.691 | 0.923 | 1 (Titan X) | [Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Learning_Spatial-Aware_Regressions_CVPR_2018_paper.pdf)/[Code](https://github.com/cswaynecool/LSART) |
129 | | ECO (CVPR17) | 0.691 | N/A | 8 (Unknown GPU) | [Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Danelljan_ECO_Efficient_Convolution_CVPR_2017_paper.pdf)/[M-Code](https://github.com/martin-danelljan/ECO)/[P-Code](https://github.com/visionml/pytracking)|
130 |
131 | * Too many trackers have been tested on OTB-100; here we merely list the SOTA trackers whose success scores exceed **0.690**.
132 | * OTB-50 and OTB-2013 are similar subsets of OTB-100.
133 | * It seems that trackers have already overfitted to the OTB-100 dataset.
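
The Precision scores in the OTB table are the standard center-location-error metric: the fraction of frames whose predicted box center lies within 20 pixels of the ground-truth center. A minimal sketch, assuming `(x, y, w, h)` boxes:

```python
import numpy as np

def precision_score(pred_boxes, gt_boxes, threshold=20.0):
    """Fraction of frames whose center location error is within
    `threshold` pixels. Boxes are (x, y, w, h)."""
    def center(b):
        return np.array([b[0] + b[2] / 2.0, b[1] + b[3] / 2.0])
    errors = np.array([np.linalg.norm(center(p) - center(g))
                       for p, g in zip(pred_boxes, gt_boxes)])
    return float((errors <= threshold).mean())
```

Unlike the success score, precision ignores scale, which is why the ranking by Precision can differ from the ranking by Success in the table above.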
134 |
135 | ## All Short-Term Tracking Datasets:
136 | * **List:**
137 |
138 | | Datasets | #videos | #total/min/max/average frames|Absent Label|
139 | |:----------- |:----------------:|:----------------:|:----------------:|
140 | | [LaSOT](https://cis.temple.edu/lasot/) | 1,400 (I-all-1,400/II-test-280) | 3.52M/1,000/11,397/2,506 | Yes |
141 | | [VOT2018]() | | | No |
142 | | [TrackingNet]() | | | No |
143 | | [GOT-10k](http://got-10k.aitestunion.com/) | train-10k, val-180, test-180 | 1.5M | No |
144 | | [NfS](http://ci2cv.net/nfs/index.html) | 100 | 383K/169/20,665/3,830 | No |
145 | | [UAV123](https://uav123.org/) | 123 | 113K/109/3,085/915 | No |
146 | | | | | |
147 | | [OTB-2015](http://cvlab.hanyang.ac.kr/tracker_benchmark/) | 100 | 59K/71/3,872/590 | No |
148 | | [TC-128](http://www.dabi.temple.edu/~hbling/data/TColor-128/TColor-128.html) | 128 | 55K/71/3,872/429 | No |
149 | | [ALOV300++](http://alov300pp.joomlafree.it/) | 315 | 8.9K/XXXX/XXXX/284 | No |
150 | | [NUS-PRO](https://www.ece.nus.edu.sg/lv/pro/nus_pro.html) | 365 | 135K/146/5,040/371 | No |
151 |
152 | * [OTB-2013/OTB-50](http://cvlab.hanyang.ac.kr/tracker_benchmark/benchmark_v10.html) is a subset of OTB-2015.
153 |
154 |
155 | ## Conference Tracking Papers:
156 | * **2020:**
157 | * High-Performance Long-Term Tracking with Meta-Updater. CVPR, 2020.
Kenan Dai, Yunhua Zhang, Dong Wang, Jianhua Li, Huchuan Lu, Xiaoyun Yang. [[Paper]()][[Code](https://github.com/Daikenan/LTMU)]
158 |
159 | * Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises. CVPR, 2020.
Bin Yan, Dong Wang, Huchuan Lu, Xiaoyun Yang. [[Paper]()][[Code](https://github.com/MasterBin-IIAU/CSA)]
160 |
161 | * Siam R-CNN: Visual Tracking by Re-Detection. CVPR, 2020.
Paul Voigtlaender, Jonathon Luiten, Philip H. S. Torr, Bastian Leibe. [[Paper]()][[Code](https://github.com/VisualComputingInstitute/SiamR-CNN)]
162 |
163 | * Probabilistic Regression for Visual Tracking. CVPR, 2020.
Martin Danelljan, Luc Van Gool, Radu Timofte. [[Paper]()][[Code](https://github.com/visionml/pytracking)]
164 |
165 | * D3S - A Discriminative Single Shot Segmentation Tracker. CVPR, 2020.
Alan Lukezic, Jiri Matas, Matej Kristan. [[Paper]()][[Code](https://github.com/alanlukezic/d3s)]
166 |
167 | * Tracking by Instance Detection: A Meta-Learning Approach. CVPR, 2020.
Guangting Wang, Chong Luo, Xiaoyan Sun, Zhiwei Xiong, Wenjun Zeng. [[Paper]()][[Code]()]
168 |
169 | * SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking. CVPR, 2020.
Dongyan Guo, Jun Wang, Ying Cui, Zhenhua Wang, Shengyong Chen. [[Paper]()][[Code](https://github.com/ohhhyeahhh/SiamCAR)]
170 |
171 | * Siamese Box Adaptive Network for Visual Tracking. CVPR, 2020.
Zedu Chen, Bineng Zhong, Guorong Li, Shengping Zhang, Rongrong Ji. [[Paper]()][[Code](https://github.com/hqucv/siamban)]
172 |
173 | * Deformable Siamese Attention Networks for Visual Object Tracking. CVPR, 2020.
Yuechen Yu, Yilei Xiong, Weilin Huang, Matthew R. Scott. [[Paper]()][[Code]()]
174 |
175 | * MAST: A Memory-Augmented Self-supervised Tracker. CVPR, 2020.
Zihang Lai, Erika Lu, Weidi Xie. [[Paper]()][[Code](https://github.com/zlai0/MAST)]
176 |
177 | * ROAM: Recurrently Optimizing Tracking Model. CVPR, 2020.
Tianyu Yang, Pengfei Xu, Runbo Hu, Hua Chai, Antoni B. Chan. [[Paper]()][[Code](https://github.com/skyoung/ROAM)]
178 |
179 | * AutoTrack: Towards High-Performance Visual Tracking for UAV with Automatic Spatio-Temporal Regularization. CVPR, 2020.
180 | Yiming Li, Changhong Fu, Fangqiang Ding, Ziyuan Huang, Geng Lu. [[Paper]()][[Code](https://github.com/vision4robotics/AutoTrack)]
181 |
182 | * **2019:**
183 | * Unsupervised Deep Tracking. CVPR, 2019.
Ning Wang, Yibing Song, Chao Ma, Wengang Zhou, Wei Liu, Houqiang Li. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Unsupervised_Deep_Tracking_CVPR_2019_paper.pdf)][[Code](https://github.com/594422814/UDT)]
184 |
185 | * Fast Online Object Tracking and Segmentation: A Unifying Approach. CVPR, 2019.
Qiang Wang, Li Zhang, Luca Bertinetto, Weiming Hu, Philip H.S. Torr. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Fast_Online_Object_Tracking_and_Segmentation_A_Unifying_Approach_CVPR_2019_paper.pdf)]
186 |
187 | * Object Tracking by Reconstruction With View-Specific Discriminative Correlation Filters. CVPR, 2019.
Ugur Kart, Alan Lukezic, Matej Kristan, Joni-Kristian Kamarainen, Jiri Matas. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kart_Object_Tracking_by_Reconstruction_With_View-Specific_Discriminative_Correlation_Filters_CVPR_2019_paper.pdf)]
188 |
189 | * Target-Aware Deep Tracking. CVPR, 2019.
Xin Li, Chao Ma, Baoyuan Wu, Zhenyu He, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Target-Aware_Deep_Tracking_CVPR_2019_paper.pdf)]
190 |
191 | * SPM-Tracker: Series-Parallel Matching for Real-Time Visual Object Tracking. CVPR, 2019.
Guangting Wang, Chong Luo, Zhiwei Xiong, Wenjun Zeng. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_SPM-Tracker_Series-Parallel_Matching_for_Real-Time_Visual_Object_Tracking_CVPR_2019_paper.pdf)]
192 |
193 | * SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks. CVPR, 2019.
Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, Junjie Yan. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)]
194 |
195 | * Deeper and Wider Siamese Networks for Real-Time Visual Tracking. CVPR, 2019.
Zhipeng Zhang, Houwen Peng. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Deeper_and_Wider_Siamese_Networks_for_Real-Time_Visual_Tracking_CVPR_2019_paper.pdf)]
196 |
197 | * Graph Convolutional Tracking. CVPR, 2019.
Junyu Gao, Tianzhu Zhang, Changsheng Xu. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Gao_Graph_Convolutional_Tracking_CVPR_2019_paper.pdf)]
198 |
199 | * ATOM: Accurate Tracking by Overlap Maximization. CVPR, 2019.
Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, Michael Felsberg. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Danelljan_ATOM_Accurate_Tracking_by_Overlap_Maximization_CVPR_2019_paper.pdf)]
200 |
201 | * Visual Tracking via Adaptive Spatially-Regularized Correlation Filters. CVPR, 2019.
Kenan Dai, Dong Wang, Huchuan Lu, Chong Sun, Jianhua Li. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Dai_Visual_Tracking_via_Adaptive_Spatially-Regularized_Correlation_Filters_CVPR_2019_paper.pdf)]
202 |
203 | * LaSOT: A High-Quality Benchmark for Large-Scale Single Object Tracking. CVPR, 2019.
Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, Haibin Ling. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_LaSOT_A_High-Quality_Benchmark_for_Large-Scale_Single_Object_Tracking_CVPR_2019_paper.pdf)]
204 |
205 | * ROI Pooled Correlation Filters for Visual Tracking. CVPR, 2019.
Yuxuan Sun, Chong Sun, Dong Wang, You He, Huchuan Lu. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Sun_ROI_Pooled_Correlation_Filters_for_Visual_Tracking_CVPR_2019_paper.pdf)]
206 |
207 | * Siamese Cascaded Region Proposal Networks for Real-Time Visual Tracking. CVPR, 2019.
Heng Fan, Haibin Ling. [[Paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_Siamese_Cascaded_Region_Proposal_Networks_for_Real-Time_Visual_Tracking_CVPR_2019_paper.pdf)]
208 |
209 | * Deep Meta Learning for Real-Time Target-Aware Visual Tracking. ICCV, 2019.
Janghoon Choi, Junseok Kwon, Kyoung Mu Lee. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Choi_Deep_Meta_Learning_for_Real-Time_Target-Aware_Visual_Tracking_ICCV_2019_paper.pdf)]
210 |
211 | * 'Skimming-Perusal' Tracking: A Framework for Real-Time and Robust Long-Term Tracking. ICCV, 2019.
Bin Yan, Haojie Zhao, Dong Wang, Huchuan Lu, Xiaoyun Yang. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yan_Skimming-Perusal_Tracking_A_Framework_for_Real-Time_and_Robust_Long-Term_Tracking_ICCV_2019_paper.pdf)]
212 |
213 | * Learning Aberrance Repressed Correlation Filters for Real-Time UAV Tracking. ICCV, 2019.
Ziyuan Huang, Changhong Fu, Yiming Li, Fuling Lin, Peng Lu. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_Learning_Aberrance_Repressed_Correlation_Filters_for_Real-Time_UAV_Tracking_ICCV_2019_paper.pdf)]
214 |
215 | * Physical Adversarial Textures That Fool Visual Object Tracking. ICCV, 2019.
Rey Reza Wiyatno, Anqi Xu. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Wiyatno_Physical_Adversarial_Textures_That_Fool_Visual_Object_Tracking_ICCV_2019_paper.pdf)]
216 |
217 | * GradNet: Gradient-Guided Network for Visual Object Tracking. ICCV, 2019.
Peixia Li, Boyu Chen, Wanli Ouyang, Dong Wang, Xiaoyun Yang, Huchuan Lu. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Li_GradNet_Gradient-Guided_Network_for_Visual_Object_Tracking_ICCV_2019_paper.pdf)]
218 |
219 | * Learning Discriminative Model Prediction for Tracking. ICCV, 2019.
Goutam Bhat, Martin Danelljan, Luc Van Gool, Radu Timofte. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Bhat_Learning_Discriminative_Model_Prediction_for_Tracking_ICCV_2019_paper.pdf)]
220 |
221 | * Joint Group Feature Selection and Discriminative Filter Learning for Robust Visual Object Tracking. ICCV, 2019.
Tianyang Xu, Zhen-Hua Feng, Xiao-Jun Wu, Josef Kittler. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Xu_Joint_Group_Feature_Selection_and_Discriminative_Filter_Learning_for_Robust_ICCV_2019_paper.pdf)]
222 |
223 | * CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark. ICCV, 2019.
Alan Lukezic, Ugur Kart, Jani Kapyla, Ahmed Durmush, Joni-Kristian Kamarainen, Jiri Matas, Matej Kristan. [[Paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Lukezic_CDTB_A_Color_and_Depth_Visual_Object_Tracking_Dataset_and_ICCV_2019_paper.pdf)]
224 |
225 | * **2018:**
226 | * Context-Aware Deep Feature Compression for High-Speed Visual Tracking. CVPR, 2018.
Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Choi_Context-Aware_Deep_Feature_CVPR_2018_paper.pdf)]
227 |
228 | * Correlation Tracking via Joint Discrimination and Reliability Learning. CVPR, 2018.
Chong Sun, Dong Wang, Huchuan Lu, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Correlation_Tracking_via_CVPR_2018_paper.pdf)]
229 |
230 | * Hyperparameter Optimization for Tracking With Continuous Deep Q-Learning. CVPR, 2018.
Xingping Dong, Jianbing Shen, Wenguan Wang, Yu Liu, Ling Shao, Fatih Porikli. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Dong_Hyperparameter_Optimization_for_CVPR_2018_paper.pdf)]
231 |
232 | * End-to-End Flow Correlation Tracking With Spatial-Temporal Attention. CVPR, 2018.
Zheng Zhu, Wei Wu, Wei Zou, Junjie Yan. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhu_End-to-End_Flow_Correlation_CVPR_2018_paper.pdf)]
233 |
234 | * Efficient Diverse Ensemble for Discriminative Co-Tracking. CVPR, 2018.
Kourosh Meshgi, Shigeyuki Oba, Shin Ishii. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Meshgi_Efficient_Diverse_Ensemble_CVPR_2018_paper.pdf)]
235 |
236 | * A Twofold Siamese Network for Real-Time Object Tracking. CVPR, 2018.
Anfeng He, Chong Luo, Xinmei Tian, Wenjun Zeng. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/He_A_Twofold_Siamese_CVPR_2018_paper.pdf)]
237 |
238 | * Multi-Cue Correlation Filters for Robust Visual Tracking. CVPR, 2018.
Ning Wang, Wengang Zhou, Qi Tian, Richang Hong, Meng Wang, Houqiang Li. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Multi-Cue_Correlation_Filters_CVPR_2018_paper.pdf)]
239 |
240 | * Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking. CVPR, 2018.
Qiang Wang, Zhu Teng, Junliang Xing, Jin Gao, Weiming Hu, Stephen Maybank. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_Attentions_Residual_CVPR_2018_paper.pdf)]
241 |
242 | * SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation. CVPR, 2018.
Xiao Wang, Chenglong Li, Bin Luo, Jin Tang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_SINT_Robust_Visual_CVPR_2018_paper.pdf)]
243 |
244 | * High-Speed Tracking With Multi-Kernel Correlation Filters. CVPR, 2018.
Ming Tang, Bin Yu, Fan Zhang, Jinqiao Wang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tang_High-Speed_Tracking_With_CVPR_2018_paper.pdf)]
245 |
246 | * Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking. CVPR, 2018.
Feng Li, Cheng Tian, Wangmeng Zuo, Lei Zhang, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Learning_Spatial-Temporal_Regularized_CVPR_2018_paper.pdf)]
247 |
248 | * Learning Spatial-Aware Regressions for Visual Tracking. CVPR, 2018.
Chong Sun, Dong Wang, Huchuan Lu, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Learning_Spatial-Aware_Regressions_CVPR_2018_paper.pdf)]
249 |
250 | * High Performance Visual Tracking With Siamese Region Proposal Network. CVPR, 2018.
Bo Li, Junjie Yan, Wei Wu, Zheng Zhu, Xiaolin Hu. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_High_Performance_Visual_CVPR_2018_paper.pdf)]
251 |
252 | * VITAL: VIsual Tracking via Adversarial Learning. CVPR, 2018.
Yibing Song, Chao Ma, Xiaohe Wu, Lijun Gong, Linchao Bao, Wangmeng Zuo, Chunhua Shen, Rynson W.H. Lau, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Song_VITAL_VIsual_Tracking_CVPR_2018_paper.pdf)]
253 |
254 | * Distractor-aware Siamese Networks for Visual Object Tracking. ECCV, 2018.
Zheng Zhu, Qiang Wang, Bo Li, Wei Wu, Junjie Yan, Weiming Hu. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.pdf)]
255 |
256 | * Learning Dynamic Memory Networks for Object Tracking. ECCV, 2018.
Tianyu Yang, Antoni B. Chan. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Tianyu_Yang_Learning_Dynamic_Memory_ECCV_2018_paper.pdf)]
257 |
258 | * TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild. ECCV, 2018.
Matthias Muller, Adel Bibi, Silvio Giancola, Salman Alsubaihi, Bernard Ghanem. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Matthias_Muller_TrackingNet_A_Large-Scale_ECCV_2018_paper.pdf)]
259 |
260 | * Structured Siamese Network for Real-Time Visual Tracking. ECCV, 2018.
Yunhua Zhang, Lijun Wang, Jinqing Qi, Dong Wang, Mengyang Feng, Huchuan Lu. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yunhua_Zhang_Structured_Siamese_Network_ECCV_2018_paper.pdf)]
261 |
262 | * Triplet Loss in Siamese Network for Object Tracking. ECCV, 2018.
Xingping Dong, Jianbing Shen. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Xingping_Dong_Triplet_Loss_with_ECCV_2018_paper.pdf)]
263 |
264 | * Real-time 'Actor-Critic' Tracking. ECCV, 2018.
Boyu Chen, Dong Wang, Peixia Li, Shuang Wang, Huchuan Lu. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Boyu_Chen_Real-time_Actor-Critic_Tracking_ECCV_2018_paper.pdf)]
265 |
266 | * Joint Representation and Truncated Inference Learning for Correlation Filter based Tracking. ECCV, 2018.
Yingjie Yao, Xiaohe Wu, Lei Zhang, Shiguang Shan, Wangmeng Zuo. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yingjie_Yao_Joint_Representation_and_ECCV_2018_paper.pdf)]
267 |
268 | * Visual Tracking via Spatially Aligned Correlation Filters Network. ECCV, 2018.
Mengdan Zhang, Qiang Wang, Junliang Xing, Jin Gao, Peixi Peng, Weiming Hu, Steve Maybank. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/mengdan_zhang_Visual_Tracking_via_ECCV_2018_paper.pdf)]
269 |
270 | * Deep Reinforcement Learning with Iterative Shift for Visual Tracking. ECCV, 2018.
Liangliang Ren, Xin Yuan, Jiwen Lu, Ming Yang, Jie Zhou. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Liangliang_Ren_Deep_Reinforcement_Learning_ECCV_2018_paper.pdf)]
271 |
272 | * Cross-Modal Ranking with Soft Consistency and Noisy Labels for Robust RGB-T Tracking. ECCV, 2018.
Chenglong Li, Chengli Zhu, Yan Huang, Jin Tang, Liang Wang. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Chenglong_Li_Cross-Modal_Ranking_with_ECCV_2018_paper.pdf)]
273 |
274 | * Long-term Tracking in the Wild: a Benchmark. ECCV, 2018.
Jack Valmadre, Luca Bertinetto, Joao F. Henriques, Ran Tao, Andrea Vedaldi, Arnold W.M. Smeulders, Philip H.S. Torr, Efstratios Gavves. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Efstratios_Gavves_Long-term_Tracking_in_ECCV_2018_paper.pdf)]
275 |
276 | * Deep Regression Tracking with Shrinkage Loss. ECCV, 2018.
Xiankai Lu, Chao Ma, Bingbing Ni, Xiaokang Yang, Ian Reid, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Xiankai_Lu_Deep_Regression_Tracking_ECCV_2018_paper.pdf)]
277 |
278 | * Unveiling the Power of Deep Tracking. ECCV, 2018.
Goutam Bhat, Joakim Johnander, Martin Danelljan, Fahad Shahbaz Khan, Michael Felsberg. [[Paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Goutam_Bhat_Unveiling_the_Power_ECCV_2018_paper.pdf)]
279 |
280 | * **2017:**
281 | * Context-Aware Correlation Filter Tracking. CVPR, 2017.
Matthias Mueller, Neil Smith, Bernard Ghanem. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Mueller_Context-Aware_Correlation_Filter_CVPR_2017_paper.pdf)]
282 |
283 | * Superpixel-Based Tracking-By-Segmentation Using Markov Chains. CVPR, 2017.
Donghun Yeo, Jeany Son, Bohyung Han, Joon Hee Han. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Yeo_Superpixel-Based_Tracking-By-Segmentation_Using_CVPR_2017_paper.pdf)]
284 |
285 | * Action-Decision Networks for Visual Tracking With Deep Reinforcement Learning. CVPR, 2017.
Sangdoo Yun, Jongwon Choi, Youngjoon Yoo, Kimin Yun, Jin Young Choi. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Yun_Action-Decision_Networks_for_CVPR_2017_paper.pdf)]
286 |
287 | * End-To-End Representation Learning for Correlation Filter Based Tracking. CVPR, 2017.
Jack Valmadre, Luca Bertinetto, Joao Henriques, Andrea Vedaldi, Philip H. S. Torr. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Valmadre_End-To-End_Representation_Learning_CVPR_2017_paper.pdf)]
288 |
289 | * Large Margin Object Tracking With Circulant Feature Maps. CVPR, 2017.
Mengmeng Wang, Yong Liu, Zeyi Huang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_Large_Margin_Object_CVPR_2017_paper.pdf)]
290 |
291 | * Multi-Task Correlation Particle Filter for Robust Object Tracking. CVPR, 2017.
Tianzhu Zhang, Changsheng Xu, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhang_Multi-Task_Correlation_Particle_CVPR_2017_paper.pdf)]
292 |
293 | * Attentional Correlation Filter Network for Adaptive Visual Tracking. CVPR, 2017.
Jongwon Choi, Hyung Jin Chang, Sangdoo Yun, Tobias Fischer, Yiannis Demiris, Jin Young Choi. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Choi_Attentional_Correlation_Filter_CVPR_2017_paper.pdf)]
294 |
295 | * Robust Visual Tracking Using Oblique Random Forests. CVPR, 2017.
Le Zhang, Jagannadan Varadarajan, Ponnuthurai Nagaratnam Suganthan, Narendra Ahuja, Pierre Moulin. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhang_Robust_Visual_Tracking_CVPR_2017_paper.pdf)]
296 |
297 | * Tracking by Natural Language Specification. CVPR, 2017.
Zhenyang Li, Ran Tao, Efstratios Gavves, Cees G. M. Snoek, Arnold W.M. Smeulders. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Li_Tracking_by_Natural_CVPR_2017_paper.pdf)]
298 |
299 | * ECO: Efficient Convolution Operators for Tracking. CVPR, 2017.
Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, Michael Felsberg. [[Paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Danelljan_ECO_Efficient_Convolution_CVPR_2017_paper.pdf)]
300 |
301 | * Learning Policies for Adaptive Tracking With Deep Feature Cascades. ICCV, 2017.
Chen Huang, Simon Lucey, Deva Ramanan. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Learning_Policies_for_ICCV_2017_paper.pdf)]
302 |
303 | * Tracking as Online Decision-Making: Learning a Policy From Streaming Videos With Reinforcement Learning. ICCV, 2017.
James Supancic III, Deva Ramanan. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Supancic_Tracking_as_Online_ICCV_2017_paper.pdf)]
304 |
305 | * Need for Speed: A Benchmark for Higher Frame Rate Object Tracking. ICCV, 2017.
Hamed Kiani Galoogahi, Ashton Fagg, Chen Huang, Deva Ramanan, Simon Lucey. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Galoogahi_Need_for_Speed_ICCV_2017_paper.pdf)]
306 |
307 | * Learning Background-Aware Correlation Filters for Visual Tracking. ICCV, 2017.
Hamed Kiani Galoogahi, Ashton Fagg, Simon Lucey. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Galoogahi_Learning_Background-Aware_Correlation_ICCV_2017_paper.pdf)]
308 |
309 | * Robust Object Tracking Based on Temporal and Spatial Deep Networks. ICCV, 2017.
Zhu Teng, Junliang Xing, Qiang Wang, Congyan Lang, Songhe Feng, Yi Jin. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Teng_Robust_Object_Tracking_ICCV_2017_paper.pdf)]
310 |
311 | * Learning Dynamic Siamese Network for Visual Object Tracking. ICCV, 2017.
Qing Guo, Wei Feng, Ce Zhou, Rui Huang, Liang Wan, Song Wang. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Guo_Learning_Dynamic_Siamese_ICCV_2017_paper.pdf)]
312 |
313 | * CREST: Convolutional Residual Learning for Visual Tracking. ICCV, 2017.
Yibing Song, Chao Ma, Lijun Gong, Jiawei Zhang, Rynson W. H. Lau, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Song_CREST_Convolutional_Residual_ICCV_2017_paper.pdf)]
314 |
315 | * Beyond Standard Benchmarks: Parameterizing Performance Evaluation in Visual Object Tracking. ICCV, 2017.
Luka Cehovin Zajc, Alan Lukezic, Ales Leonardis, Matej Kristan. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Zajc_Beyond_Standard_Benchmarks_ICCV_2017_paper.pdf)]
316 |
317 | * Parallel Tracking and Verifying: A Framework for Real-Time and High Accuracy Visual Tracking. ICCV, 2017.
318 | Heng Fan, Haibin Ling. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Fan_Parallel_Tracking_and_ICCV_2017_paper.pdf)]
319 |
320 | * Non-Rigid Object Tracking via Deformable Patches Using Shape-Preserved KCF and Level Sets. ICCV, 2017.
Xin Sun, Ngai-Man Cheung, Hongxun Yao, Yiluan Guo. [[Paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Sun_Non-Rigid_Object_Tracking_ICCV_2017_paper.pdf)]
321 |
324 | * **2016:**
325 | * Beyond Local Search: Tracking Objects Everywhere With Instance-Specific Proposals. CVPR, 2016.
Gao Zhu, Fatih Porikli, Hongdong Li. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhu_Beyond_Local_Search_CVPR_2016_paper.pdf)]
326 |
327 | * STCT: Sequentially Training Convolutional Networks for Visual Tracking. CVPR, 2016.
Lijun Wang, Wanli Ouyang, Xiaogang Wang, Huchuan Lu. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Wang_STCT_Sequentially_Training_CVPR_2016_paper.pdf)]
328 |
329 | * Staple: Complementary Learners for Real-Time Tracking. CVPR, 2016.
Luca Bertinetto, Jack Valmadre, Stuart Golodetz, Ondrej Miksik, Philip H. S. Torr. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Bertinetto_Staple_Complementary_Learners_CVPR_2016_paper.pdf)]
330 |
331 | * Siamese Instance Search for Tracking. CVPR, 2016.
Ran Tao, Efstratios Gavves, Arnold W.M. Smeulders. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Tao_Siamese_Instance_Search_CVPR_2016_paper.pdf)]
332 |
333 | * Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking. CVPR, 2016.
Martin Danelljan, Gustav Hager, Fahad Shahbaz Khan, Michael Felsberg. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Danelljan_Adaptive_Decontamination_of_CVPR_2016_paper.pdf)]
334 |
337 | * Recurrently Target-Attending Tracking. CVPR, 2016.
Zhen Cui, Shengtao Xiao, Jiashi Feng, Shuicheng Yan. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Cui_Recurrently_Target-Attending_Tracking_CVPR_2016_paper.pdf)]
338 |
339 | * In Defense of Sparse Tracking: Circulant Sparse Tracker. CVPR, 2016.
Tianzhu Zhang, Adel Bibi, Bernard Ghanem. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Zhang_In_Defense_of_CVPR_2016_paper.pdf)]
340 |
341 | * Object Tracking via Dual Linear Structured SVM and Explicit Feature Map. CVPR, 2016.
Jifeng Ning, Jimei Yang, Shaojie Jiang, Lei Zhang, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Ning_Object_Tracking_via_CVPR_2016_paper.pdf)]
342 |
343 | * Learning Multi-Domain Convolutional Neural Networks for Visual Tracking. CVPR, 2016.
Hyeonseob Nam, Bohyung Han. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Nam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.pdf)]
344 |
345 | * Hedged Deep Tracking. CVPR, 2016.
Yuankai Qi, Shengping Zhang, Lei Qin, Hongxun Yao, Qingming Huang, Jongwoo Lim, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Qi_Hedged_Deep_Tracking_CVPR_2016_paper.pdf)]
346 |
347 | * Target Response Adaptation for Correlation Filter Tracking. ECCV, 2016.
Adel Bibi, Matthias Mueller, Bernard Ghanem.
348 |
349 | * Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking. ECCV, 2016.
Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg.
350 |
351 | * Structural Correlation Filter for Robust Visual Tracking. CVPR, 2016.
Si Liu, Tianzhu Zhang, Xiaochun Cao, Changsheng Xu. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Liu_Structural_Correlation_Filter_CVPR_2016_paper.pdf)]
352 |
353 | * Visual Tracking Using Attention-Modulated Disintegration and Integration. CVPR, 2016.
Jongwon Choi, Hyung Jin Chang, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi. [[Paper](http://openaccess.thecvf.com/content_cvpr_2016/papers/Choi_Visual_Tracking_Using_CVPR_2016_paper.pdf)]
354 |
355 | * A Benchmark and Simulator for UAV Tracking. ECCV, 2016.
Matthias Mueller, Bernard Ghanem, Neil Smith.
356 |
357 | * Distractor-supported single target tracking in extremely cluttered scenes. ECCV, 2016.
Jingjing Xiao, Linbo Qiao, Rustam Stolkin, Aleš Leonardis.
358 |
359 | * Real-Time Visual Tracking: Promoting the Robustness of Correlation Filter Learning. ECCV, 2016.
Yao Sui, Ziming Zhang, Guanghui Wang, Yafei Tang, Li Zhang.
360 |
361 |
362 | * **2015:**
363 | * Structural Sparse Tracking. CVPR, 2015.
Tianzhu Zhang, Si Liu, Changsheng Xu, Shuicheng Yan, Bernard Ghanem, Narendra Ahuja, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Zhang_Structural_Sparse_Tracking_2015_CVPR_paper.pdf)]
364 |
365 | * Reliable Patch Trackers: Robust Visual Tracking by Exploiting Reliable Patches. CVPR, 2015.
Yang Li, Jianke Zhu, Steven C.H. Hoi. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Li_Reliable_Patch_Trackers_2015_CVPR_paper.pdf)]
366 |
367 | * MUlti-Store Tracker (MUSTer): A Cognitive Psychology Inspired Approach to Object Tracking. CVPR, 2015.
Zhibin Hong, Zhe Chen, Chaohui Wang, Xue Mei, Danil Prokhorov, Dacheng Tao. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Hong_MUlti-Store_Tracker_MUSTer_2015_CVPR_paper.pdf)]
368 |
369 | * In Defense of Color-Based Model-Free Tracking. CVPR, 2015.
Horst Possegger, Thomas Mauthner, Horst Bischof. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Possegger_In_Defense_of_2015_CVPR_paper.pdf)]
370 |
371 | * JOTS: Joint Online Tracking and Segmentation. CVPR, 2015.
Longyin Wen, Dawei Du, Zhen Lei, Stan Z. Li, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Wen_JOTS_Joint_Online_2015_CVPR_paper.pdf)]
372 |
373 | * Clustering of Static-Adaptive Correspondences for Deformable Object Tracking. CVPR, 2015.
Georg Nebehay, Roman Pflugfelder. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Nebehay_Clustering_of_Static-Adaptive_2015_CVPR_paper.pdf)]
374 |
375 | * Real-Time Part-Based Visual Tracking via Adaptive Correlation Filters. CVPR, 2015.
Ting Liu, Gang Wang, Qingxiong Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Liu_Real-Time_Part-Based_Visual_2015_CVPR_paper.pdf)]
376 |
377 | * Multihypothesis Trajectory Analysis for Robust Visual Tracking. CVPR, 2015.
Dae-Youn Lee, Jae-Young Sim, Chang-Su Kim. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Lee_Multihypothesis_Trajectory_Analysis_2015_CVPR_paper.pdf)]
378 |
379 | * Long-Term Correlation Tracking. CVPR, 2015.
Chao Ma, Xiaokang Yang, Chongyang Zhang, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_cvpr_2015/papers/Ma_Long-Term_Correlation_Tracking_2015_CVPR_paper.pdf)]
380 |
381 | * Discriminative Low-Rank Tracking. ICCV, 2015.
Yao Sui, Yafei Tang, Li Zhang. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Sui_Discriminative_Low-Rank_Tracking_ICCV_2015_paper.pdf)]
382 |
383 | * SOWP: Spatially Ordered and Weighted Patch Descriptor for Visual Tracking. ICCV, 2015.
Han-Ul Kim, Dae-Youn Lee, Jae-Young Sim, Chang-Su Kim. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Kim_SOWP_Spatially_Ordered_ICCV_2015_paper.pdf)]
384 |
385 | * Multi-Kernel Correlation Filter for Visual Tracking. ICCV, 2015.
Ming Tang, Jiayi Feng. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Tang_Multi-Kernel_Correlation_Filter_ICCV_2015_paper.pdf)]
386 |
387 | * Tracking-by-Segmentation With Online Gradient Boosting Decision Tree. ICCV, 2015.
Jeany Son, Ilchae Jung, Kayoung Park, Bohyung Han. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Son_Tracking-by-Segmentation_With_Online_ICCV_2015_paper.pdf)]
388 |
389 | * Exploring Causal Relationships in Visual Object Tracking. ICCV, 2015.
Karel Lebeda, Simon Hadfield, Richard Bowden. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Lebeda_Exploring_Causal_Relationships_ICCV_2015_paper.pdf)]
390 |
391 | * Hierarchical Convolutional Features for Visual Tracking. ICCV, 2015.
Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Ma_Hierarchical_Convolutional_Features_ICCV_2015_paper.pdf)]
392 |
393 | * Online Object Tracking With Proposal Selection. ICCV, 2015.
Yang Hua, Karteek Alahari, Cordelia Schmid. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Hua_Online_Object_Tracking_ICCV_2015_paper.pdf)]
394 |
395 | * Understanding and Diagnosing Visual Tracking Systems. ICCV, 2015.
Naiyan Wang, Jianping Shi, Dit-Yan Yeung, Jiaya Jia. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Wang_Understanding_and_Diagnosing_ICCV_2015_paper.pdf)]
396 |
397 | * Visual Tracking With Fully Convolutional Networks. ICCV, 2015.
Lijun Wang, Wanli Ouyang, Xiaogang Wang, Huchuan Lu. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Wang_Visual_Tracking_With_ICCV_2015_paper.pdf)]
398 |
399 | * Multiple Feature Fusion via Weighted Entropy for Visual Tracking. ICCV, 2015.
Lin Ma, Jiwen Lu, Jianjiang Feng, Jie Zhou. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Ma_Multiple_Feature_Fusion_ICCV_2015_paper.pdf)]
400 |
401 | * Local Subspace Collaborative Tracking. ICCV, 2015.
Lin Ma, Xiaoqin Zhang, Weiming Hu, Junliang Xing, Jiwen Lu, Jie Zhou. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Ma_Local_Subspace_Collaborative_ICCV_2015_paper.pdf)]
402 |
403 | * Learning Spatially Regularized Correlation Filters for Visual Tracking. ICCV, 2015.
Martin Danelljan, Gustav Hager, Fahad Shahbaz Khan, Michael Felsberg. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Danelljan_Learning_Spatially_Regularized_ICCV_2015_paper.pdf)]
404 |
405 | * TRIC-track: Tracking by Regression With Incrementally Learned Cascades. ICCV, 2015.
Xiaomeng Wang, Michel Valstar, Brais Martinez, Muhammad Haris Khan, Tony Pridmore. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Wang_TRIC-track_Tracking_by_ICCV_2015_paper.pdf)]
406 |
407 | * Linearization to Nonlinear Learning for Visual Tracking. ICCV, 2015.
Bo Ma, Hongwei Hu, Jianbing Shen, Yuping Zhang, Fatih Porikli. [[Paper](http://openaccess.thecvf.com/content_iccv_2015/papers/Ma_Linearization_to_Nonlinear_ICCV_2015_paper.pdf)]
408 |
409 | * **2014:**
410 | * Adaptive Color Attributes for Real-Time Visual Tracking. CVPR, 2014.
Martin Danelljan, Fahad Shahbaz Khan, Michael Felsberg, Joost van de Weijer. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Danelljan_Adaptive_Color_Attributes_2014_CVPR_paper.pdf)]
411 |
412 | * Multi-Cue Visual Tracking Using Robust Feature-Level Fusion Based on Joint Sparse Representation. CVPR, 2014.
Xiangyuan Lan, Andy J. Ma, Pong C. Yuen. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Lan_Multi-Cue_Visual_Tracking_2014_CVPR_paper.pdf)]
413 |
414 | * Multi-Forest Tracker: A Chameleon in Tracking. CVPR, 2014.
David J. Tan, Slobodan Ilic. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Tan_Multi-Forest_Tracker_A_2014_CVPR_paper.pdf)]
415 |
416 | * Pyramid-based Visual Tracking Using Sparsity Represented Mean Transform. CVPR, 2014.
Zhe Zhang, Kin Hong Wong. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Zhang_Pyramid-based_Visual_Tracking_2014_CVPR_paper.pdf)]
417 |
418 | * Partial Occlusion Handling for Visual Tracking via Robust Part Matching. CVPR, 2014.
Tianzhu Zhang, Kui Jia, Changsheng Xu, Yi Ma, Narendra Ahuja. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Zhang_Partial_Occlusion_Handling_2014_CVPR_paper.pdf)]
419 |
420 | * Speeding Up Tracking by Ignoring Features. CVPR, 2014.
Lu Zhang, Hamdi Dibeklioglu, Laurens van der Maaten. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Zhang_Speeding_Up_Tracking_2014_CVPR_paper.pdf)]
421 |
422 | * Online Object Tracking, Learning and Parsing with And-Or Graphs. CVPR, 2014.
Yang Lu, Tianfu Wu, Song Chun Zhu. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Lu_Online_Object_Tracking_2014_CVPR_paper.pdf)]
423 |
424 | * Visual Tracking via Probability Continuous Outlier Model. CVPR, 2014.
Dong Wang, Huchuan Lu. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Wang_Visual_Tracking_via_2014_CVPR_paper.pdf)]
425 |
426 | * Visual Tracking Using Pertinent Patch Selection and Masking. CVPR, 2014.
Dae-Youn Lee, Jae-Young Sim, Chang-Su Kim. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Lee_Visual_Tracking_Using_2014_CVPR_paper.pdf)]
427 |
428 | * Interval Tracker: Tracking by Interval Analysis. CVPR, 2014.
Junseok Kwon, Kyoung Mu Lee. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Kwon_Interval_Tracker_Tracking_2014_CVPR_paper.pdf)]
429 |
430 | * Unifying Spatial and Attribute Selection for Distracter-Resilient Tracking. CVPR, 2014.
Nan Jiang, Ying Wu. [[Paper](http://openaccess.thecvf.com/content_cvpr_2014/papers/Jiang_Unifying_Spatial_and_2014_CVPR_paper.pdf)]
431 |
432 | * Visual Tracking by Sampling Tree-Structured Graphical Models. ECCV, 2014.
Seunghoon Hong, Bohyung Han.
433 |
434 | * Description-Discrimination Collaborative Tracking. ECCV, 2014.
Dapeng Chen, Zejian Yuan, Gang Hua, Yang Wu, Nanning Zheng.
435 |
436 | * Online, Real-Time Tracking using a Category-to-Individual Detector. ECCV, 2014.
David Hall, Pietro Perona.
437 |
438 | * Robust Visual Tracking with Double Bounding Box Model. ECCV, 2014.
Junseok Kwon, Junha Roh, Kyoung Mu Lee, Luc Van Gool.
439 |
440 | * Transfer Learning Based Visual Tracking with Gaussian Process Regression. ECCV, 2014.
Jin Gao, Haibin Ling, Weiming Hu, Junliang Xing.
441 |
442 | * Online Graph-Based Tracking. ECCV, 2014.
Hyeonseob Nam, Seunghoon Hong, Bohyung Han.
443 |
444 | * Fast Visual Tracking via Dense Spatio-Temporal Context Learning. ECCV, 2014.
Kaihua Zhang, Lei Zhang, Qingshan Liu, David Zhang, Ming-Hsuan Yang.
445 |
446 | * Extended Lucas-Kanade Tracking. ECCV, 2014.
Shaul Oron, Aharon Bar-Hillel, Shai Avidan.
447 |
448 | * Appearances can be deceiving: Learning visual tracking from few trajectory annotations. ECCV, 2014.
Santiago Manen, Junseok Kwon, Matthieu Guillaumin, Luc Van Gool.
449 |
450 | * Tracking using Multilevel Quantizations. ECCV, 2014.
Zhibin Hong, Chaohui Wang, Xue Mei, Danil Prokhorov, Dacheng Tao.
451 |
452 | * Occlusion and Motion Reasoning for Long-term Tracking. ECCV, 2014.
Yang Hua, Karteek Alahari, Cordelia Schmid.
453 |
454 | * MEEM: Robust Tracking via Multiple Experts using Entropy Minimization. ECCV, 2014.
Jianming Zhang, Shugao Ma, Stan Sclaroff.
455 |
456 | * A Superior Tracking Approach: Building a strong Tracker through Fusion. ECCV, 2014.
Christian Bailer, Alain Pagani, Didier Stricker.
457 |
458 | * **2013:**
459 | * Visual Tracking via Locality Sensitive Histograms. CVPR, 2013.
Shengfeng He, Qingxiong Yang, Rynson W.H. Lau, Jiang Wang, Ming-Hsuan Yang.
460 |
461 | * Online Object Tracking: A Benchmark. CVPR, 2013.
Yi Wu, Jongwoo Lim, Ming-Hsuan Yang.
462 |
463 | * Learning Compact Binary Codes for Visual Tracking. CVPR, 2013.
Xi Li, Chunhua Shen, Anthony Dick, Anton van den Hengel.
464 |
465 | * Least Soft-Threshold Squares Tracking. CVPR, 2013.
Dong Wang, Huchuan Lu, Ming-Hsuan Yang.
466 |
467 | * Part-Based Visual Tracking with Online Latent Structural Learning. CVPR, 2013.
Rui Yao, Qinfeng Shi, Chunhua Shen, Yanning Zhang, Anton van den Hengel.
468 |
469 | * Minimum Uncertainty Gap for Robust Visual Tracking. CVPR, 2013.
Junseok Kwon, Kyoung Mu Lee.
470 |
471 | * Structure Preserving Object Tracking. CVPR, 2013.
Lu Zhang, Laurens van der Maaten.
472 |
473 | * Self-Paced Learning for Long-Term Tracking. CVPR, 2013.
James S. Supancic III, Deva Ramanan.
474 |
475 | * Tracking via Robust Multi-task Multi-view Joint Sparse Representation. ICCV, 2013.
Zhibin Hong, Xue Mei, Danil Prokhorov, Dacheng Tao.
476 |
477 | * Online Robust Non-negative Dictionary Learning for Visual Tracking. ICCV, 2013.
Naiyan Wang, Jingdong Wang, Dit-Yan Yeung.
478 |
479 | * Robust Object Tracking with Online Multi-lifespan Dictionary Learning. ICCV, 2013.
Junliang Xing, Jin Gao, Bing Li, Weiming Hu, Shuicheng Yan.
480 |
481 | * Finding the Best from the Second Bests - Inhibiting Subjective Bias in Evaluation of Visual Tracking Algorithms. ICCV, 2013.
Yu Pang, Haibin Ling.
482 |
483 | * PixelTrack: A Fast Adaptive Algorithm for Tracking Non-rigid Objects. ICCV, 2013.
Stefan Duffner, Christophe Garcia.
484 |
485 | * Discriminant Tracking Using Tensor Representation with Semi-supervised Improvement. ICCV, 2013.
Jin Gao, Junliang Xing, Weiming Hu, Steve Maybank.
486 |
487 | * Initialization-Insensitive Visual Tracking through Voting with Salient Local Features. ICCV, 2013.
Kwang Moo Yi, Hawook Jeong, Byeongho Heo, Hyung Jin Chang, Jin Young Choi.
488 |
489 | * Randomized Ensemble Tracking. ICCV, 2013.
Qinxun Bai, Zheng Wu, Stan Sclaroff, Margrit Betke, Camille Monnier.
490 |
491 | * Tracking Revisited Using RGBD Camera: Unified Benchmark and Baselines. ICCV, 2013.
Shuran Song, Jianxiong Xiao.
492 |
493 | * Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking. ICCV, 2013.
Yanchao Yang, Ganesh Sundaramoorthi.
494 |
495 | * Orderless Tracking through Model-Averaged Posterior Estimation. ICCV, 2013.
Seunghoon Hong, Suha Kwak, Bohyung Han.
496 |
497 | * **Before 2013:**
498 |
499 |
500 |
501 |
502 |
503 | * **Survey & Book:**
504 |
505 |
506 | * Handcrafted and Deep Trackers: Recent Visual Object Tracking Approaches and Trends. ACM CS, 2019.
Mustansar Fiaz, Arif Mahmood, Sajid Javed, Soon Ki Jung.
507 |
508 | * Online Visual Tracking. Springer, 2019.
Huchuan Lu, Dong Wang.
509 |
510 | * Deep Visual Tracking: Review and Experimental Comparison. PR, 2018.
Peixia Li, Dong Wang, Lijun Wang, Huchuan Lu.
511 |
512 | * Visual Tracking: An Experimental Survey. IEEE TPAMI, 2014.
Arnold W. M. Smeulders, Dung Manh Chu, Rita Cucchiara, Simone Calderara, Afshin Dehghan, Mubarak Shah.
513 |
514 | * A Survey of Appearance Models in Visual Object Tracking. ACM TIST, 2013.
Xi Li, Weiming Hu, Chunhua Shen, Zhongfei Zhang, Anthony R. Dick, Anton van den Hengel.
515 |
516 | * Object Tracking: A Survey. ACM CS, 2006.
Alper Yilmaz, Omar Javed, Mubarak Shah.
517 |
518 | * **Resources:**
519 |
520 | * [SiamTrackers](https://github.com/HonglinChu/SiamTrackers): https://github.com/HonglinChu/SiamTrackers
521 |
522 | * [CFTrackers](https://github.com/HonglinChu/CFTrackers): https://github.com/HonglinChu/CFTrackers
523 |
524 | * [pysot-toolkit](https://github.com/StrangerZhang/pysot-toolkit): https://github.com/StrangerZhang/pysot-toolkit
525 |
--------------------------------------------------------------------------------
/notes/SiamTrackers.md:
--------------------------------------------------------------------------------
1 | # SiamTrackers
2 |
3 | Code will be released soon! https://www.bilibili.com/video/BV1Y64y1T7qs/
4 |
5 | | Trackers | Debug | Train | Test | Evaluation | Comment | Toolkit | GPU | Version |
6 | | :--------- | :--------: | :------: |:------: |:------: |:------: |:------: | :------: | :------: |
7 | | Siamese | √| √ | √| √ | √| | √|- |
8 | | SiamFC | √ | √ | √| √| √|got10k|√ |unofficial|
9 | | SiamRPN | √ | √ | √| √| √|got10k| √|unofficial|
10 | | DaSiamRPN | √ | | √| √| √|pysot| √ | official|
11 | | UpdateNet | √ | √ | √| √| √|pysot| √ | unofficial|
12 | | SiamRPN++ | √ | √ | √| √| √|pysot| √|official|
13 | | SiamMask | √ | √ | √| √| √|pysot| √ |official|
14 | | SiamFC++ | √ | √ | √| √| √|pysot&got10k| √ |official|
15 |
16 | # Description
17 |
18 | - Siamese
19 |
20 | A simple face classification implementation based on a Siamese network; supports training and testing.
21 |
22 | - 2016-ECCV-SiamFC
23 |
24 | Adds the got10k evaluation toolkit and cleans up the interfaces; supports evaluation, training, and testing. Reproduced results are slightly below the paper's.
25 |
26 | - 2018-CVPR-SiamRPN
27 |
28 | Refactors the API and adds the got10k evaluation toolkit; supports training, testing, and evaluation. Reproduced results are slightly below the paper's.
29 |
30 | - 2018-ECCV-DaSiamRPN
31 |
32 | Refactors the API; supports step-by-step debugging in VS Code and the pysot evaluation toolkit; supports one-click evaluation and testing; training is not supported.
33 |
34 | - 2019-ICCV-UpdateNet
35 |
36 | Reproduces the UpdateNet network; supports testing, training, and evaluating your own models.
37 |
38 | - 2019-CVPR-SiamRPN++
39 |
40 | Refactors the API and supports step-by-step debugging in VS Code; streamlines the training and testing I/O interfaces and adds partial code comments; changes training from multi-machine multi-GPU to single-machine multi-GPU parallelism.
41 |
42 |
43 | - 2019-CVPR-SiamMask
44 | - 2020-AAAI-SiamFC++
45 |
46 |
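The SiamFC-family trackers listed above share one core operation: the exemplar (template) feature map is slid over the search-region feature map as a correlation kernel, and the peak of the resulting response map locates the target. A minimal NumPy sketch of that cross-correlation step (function and variable names are illustrative, not this repository's API):

```python
import numpy as np

def cross_correlation(template_feat, search_feat):
    """Slide the template feature map over the search feature map.

    template_feat: (C, th, tw) embedding of the exemplar image.
    search_feat:   (C, sh, sw) embedding of the search region (sh >= th).
    Returns a (sh - th + 1, sw - tw + 1) response map; its argmax is
    the predicted target location in feature coordinates.
    """
    c, th, tw = template_feat.shape
    _, sh, sw = search_feat.shape
    out_h, out_w = sh - th + 1, sw - tw + 1
    response = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            window = search_feat[:, y:y + th, x:x + tw]
            response[y, x] = np.sum(window * template_feat)
    return response

# Toy check: embed the "target" as a bright patch and find it again.
search = np.zeros((1, 8, 8))
search[0, 3:5, 5:7] = 1.0          # target sits at (3, 5)
template = search[:, 3:5, 5:7]     # 2x2 exemplar cut from the search region
resp = cross_correlation(template, search)
loc = np.unravel_index(resp.argmax(), resp.shape)  # -> (3, 5)
```

In the actual trackers the same operation runs on CNN features and is usually implemented as a (grouped) convolution, e.g. `F.conv2d` in PyTorch, rather than explicit loops.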
47 | # Model
48 |
49 | - SiamRPNVOT.model link: https://pan.baidu.com/s/1V7GMgurufuILhzTSJ4LsYA password: p4ig
50 |
51 | - SiamRPNOTB.model link: https://pan.baidu.com/s/1mpXaIDcf0HXf3vMccaSriw password: 5xm9
52 |
53 | - SiamRPNBIG.model link: https://pan.baidu.com/s/10v3d3G7BYSRBanIgaL73_Q password: b3b6
54 |
55 | # Results
56 |
57 | | Trackers| | SiamFC | DaSiamRPN |DaSiamRPN | SiamRPN++ |SiamRPN|SiamFC++ |
58 | |:---------: |:-----:|:--------:| :------: |:------: |:------: |:------: |:------:|
59 | | | | | | | | | |
60 | | Backbone | - | AlexNet | AlexNet(OTB/VOT) |AlexNet(BIG) | AlexNet(DW) |AlexNet(UP) |AlexNet|
61 | | FPS | >120fps | 120 | 200 | 160 | 180 | 200 | 160 |
62 | | | | | | | | | |
63 | | OTB100 | AUC | 0.570 | 0.655 | 0.646 | 0.648 | 0.637 | 0.656 |
64 | | | DP | 0.767 | 0.880 | 0.859 | 0.853 |0.851 | |
65 | | | | | | | | | |
66 | | UAV123 | AUC | 0.504 | 0.586 | 0.604 | 0.578 |0.527 | |
67 | | | DP | 0.702 | 0.796 | 0.801 | 0.769 |0.748 | |
68 | | | | | | | | | |
69 | | UAV20L | AUC | 0.410 | 0.617 | 0.524 | 0.530 |0.454 | |
70 | | | DP | 0.566 | 0.838 | 0.691 | 0.695 |0.617 | |
71 | | | | | | | | | |
72 | | DTB70 | AUC | 0.487 | | | 0.588 | | |
73 | | | DP | 0.735 | | | 0.797 | | |
74 | | | | | | | | | |
75 | | UAVDT | AUC | | | | 0.566 | | |
76 | | | DP | | | | 0.793 | | |
77 | | | | | | | | | |
78 | | VisDrone | AUC | | | | 0.572 | | |
79 | | | DP | | | | 0.764 | | |
80 | | | | | | | | | |
81 | | VOT2016 | A | 0.538 | 0.61 | 0.625 | 0.618 |0.56 | |
82 | | | R | 0.424 | 0.22 | 0.224 | 0.238 |0.26 | |
83 | | | E | 0.262 | 0.411 | 0.439 | 0.393 | 0.344 | |
84 | | |Lost | 91 | | 48 | 51 | | |
85 | | | | | | | | | |
86 | | VOT2018 | A | 0.501 | 0.56 | 0.586 | 0.576 |0.49 | 0.556 |
87 | | | R | 0.534 | 0.34 | 0.276 | 0.290 |0.46 | 0.183 |
88 | | | E | 0.223 | 0.326 | 0.383 | 0.352 |0.244 | 0.400 |
89 | | | Lost | 114 | | 59 | 62 | | |
90 |
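For reference, the AUC and DP rows above follow the OTB evaluation protocol: DP (distance precision) is the fraction of frames whose predicted center falls within 20 pixels of the ground truth, and AUC is the area under the success plot, i.e. the mean overlap-success rate over IoU thresholds from 0 to 1. A self-contained sketch of both metrics (the `(x, y, w, h)` box format and the 21-threshold grid are assumptions consistent with the OTB toolkit, not this repository's code):

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x, y, w, h) format."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def otb_metrics(pred_boxes, gt_boxes, dp_threshold=20.0):
    """Return (AUC of the success plot, distance precision at 20 px)."""
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    centers_p = np.array([[b[0] + b[2] / 2, b[1] + b[3] / 2] for b in pred_boxes])
    centers_g = np.array([[b[0] + b[2] / 2, b[1] + b[3] / 2] for b in gt_boxes])
    dists = np.linalg.norm(centers_p - centers_g, axis=1)
    thresholds = np.linspace(0, 1, 21)   # OTB uses 21 overlap thresholds
    success = [(ious > t).mean() for t in thresholds]
    auc = float(np.mean(success))        # area under the success curve
    dp = float((dists <= dp_threshold).mean())
    return auc, dp

# Toy run: one perfect frame, one frame shifted 30 px to the right.
gt = [(10, 10, 40, 40), (10, 10, 40, 40)]
pred = [(10, 10, 40, 40), (40, 10, 40, 40)]
auc, dp = otb_metrics(pred, gt)   # the shifted frame fails the 20 px test
```

The VOT rows use different measures (A = accuracy, R = robustness, E = expected average overlap), which are computed by the VOT toolkit with restarts after tracking failures.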
91 | # Dataset
92 |
93 | - UAV123 link: https://pan.baidu.com/s/1AhNnfjF4fZe14sUFefU3iA password: 2iq4
94 |
95 | - VOT2018 link: https://pan.baidu.com/s/1MOWZ5lcxfF0wsgSuj5g4Yw password: e5eh
96 |
97 | - VisDrone2019 link: https://pan.baidu.com/s/1Y6ubKHuYX65mK_iDVSfKPQ password: yxb6
98 |
99 | - OTB2015 link: https://pan.baidu.com/s/1ZjKgRMYSHfR_w3Z7iQEkYA password: t5i1
100 |
101 | - DTB70 link: https://pan.baidu.com/s/1kfHrArw0aVhGPSM91WHomw password: e7qm
102 |
103 | - ILSVRC2015 VID link: https://pan.baidu.com/s/1CXWgpAG4CYpk-WnaUY5mAQ password: uqzj
104 |
105 | - NFS link: https://pan.baidu.com/s/1ei54oKNA05iBkoUwXPOB7g password: vng1
106 |
107 | - GOT10k link: https://pan.baidu.com/s/172oiQPA_Ky2iujcW5Irlow password: uxds
108 |
109 | - UAVDT link: https://pan.baidu.com/s/1K8oo53mPYCxUFVMXIGLhVA password: keva
110 |
111 | - YTB-VOS link: https://pan.baidu.com/s/1WMB0q9GJson75QBFVfeH5A password: sf1m
112 |
113 | - YTB-Crop511 (used in SiamRPN++ and SiamMask) link: https://pan.baidu.com/s/112zLS_02-Z2ouKGbnPlTjw password: ebq1
114 |
115 | - TCColor128 link: https://pan.baidu.com/s/1v4J6zWqZwj8fHi5eo5EJvQ password: 26d4
116 |
117 | - DAVIS2017 link: https://pan.baidu.com/s/1JTsumpnkWotEJQE7KQmh6A password: c9qp
118 |
119 | - ytb&vid (used in siamrpn) link: https://pan.baidu.com/s/1gF8PSZDzw-7EAVrdYHQwsA password: 6vkz
120 |
121 | - TrackingNet link: https://pan.baidu.com/s/1PXSRAqcw-KMfBIJYUtI4Aw password: nkb9 (note that this link is provided by the SiamFC++ author)
122 |
123 | # [Reference]
124 |
125 | [1] SiamFC
126 |
127 | [2] SiamRPN
128 |
129 | [3] DaSiamRPN
130 |
131 | [4] UpdateNet
132 |
133 | [5] SiamRPN++
134 |
135 | [6] SiamMask
136 |
137 | [7] SiamFC++
138 |
--------------------------------------------------------------------------------
/notes/Transformer Tracking.md:
--------------------------------------------------------------------------------
1 | # Transformer Tracking
2 |
3 | This repository is a paper digest of Transformer-related approaches in vision tracking tasks. Currently, tasks in this repository include **Single Object Tracking (SOT)**, **Video Object Segmentation (VOS)**, **Multiple Object Tracking (MOT)**, **Video Instance Segmentation (VIS)**, **Video Object Detection (VOD)**, **3D Object Tracking (3DOT)** and **Object Re-Identification (ReID)**. Note that some trackers involving a non-local attention mechanism are also collected. Papers are listed in alphabetical order of the first character.
4 |
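For context on the entries below, the building block these trackers share is scaled dot-product attention, which lets every feature location attend to every other location (the "non-local" part). A minimal NumPy sketch (array names and the toy shapes are illustrative, not taken from any listed tracker):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V.

    Q: (n_q, d) queries, K: (n_k, d) keys, V: (n_k, d_v) values.
    Each query's output is a weighted average of all values, with
    weights given by query-key similarity, i.e. a global operation.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V

# Toy run: a query identical to key 0 attends almost entirely to value 0.
K = np.array([[10.0, 0.0], [0.0, 10.0]])
V = np.array([[1.0, 0.0], [0.0, 1.0]])
Q = K[:1]                                          # query = first key
out = scaled_dot_product_attention(Q, K, V)
```

In SOT this typically appears as cross-attention between template and search-region features; in MOT/ReID, as self-attention over detections or tracklets.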
5 |
6 |
7 | ## :bookmark:Single Object Tracking (SOT)
8 |
9 | ### CVPR 2020:tada::tada::tada:
10 |
11 | - **SiamAttn** (Deformable Siamese Attention Networks for Visual Object Tracking) [[link](https://arxiv.org/abs/2004.06711)]
12 |
13 | ### ICPR 2020
14 |
15 | - **VTT** (VTT: Long-term Visual Tracking with Transformers) [[link](https://pure.qub.ac.uk/en/publications/vtt-long-term-visual-tracking-with-transformers)]
16 |
17 | ### CVPR 2021:tada::tada::tada:
18 |
19 | - **SiamGAT** (Graph Attention Tracking) [[link](https://arxiv.org/abs/2011.11204)]
20 | - **STMTrack** (STMTrack: Template-free Visual Tracking with Space-time Memory Networks) [[link](https://arxiv.org/abs/2104.00324)]
21 | - **TMT** (Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking) [[link](https://arxiv.org/abs/2103.11681)]
22 | - **TransT** (Transformer Tracking) [[link](https://arxiv.org/abs/2103.15436)]
23 |
24 | ### ICCV 2021:tada::tada::tada:
25 |
26 | - **AutoMatch** (Learn to Match: Automatic Matching Network Design for Visual Tracking) [[link](https://arxiv.org/abs/2108.00803)]
27 | - **DTT** (High-Performance Discriminative Tracking With Transformers) [[link](https://openaccess.thecvf.com/content/ICCV2021/html/Yu_High-Performance_Discriminative_Tracking_With_Transformers_ICCV_2021_paper.html)]
28 | - **DualTFR** (Learning Tracking Representations via Dual-Branch Fully Transformer Networks) [[link](https://arxiv.org/abs/2112.02571)]
29 | - **HiFT** (HiFT: Hierarchical Feature Transformer for Aerial Tracking) [[link](https://arxiv.org/abs/2108.00202)]
30 | - **SAMN** (Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking) [[link](https://arxiv.org/abs/2009.09669)]
31 | - **STARK** (Learning Spatio-Temporal Transformer for Visual Tracking) [[link](https://arxiv.org/abs/2103.17154)]
32 |
33 | ### Preprint 2021
34 |
35 | - **E.T.Track** (Efficient Visual Tracking with Exemplar Transformers) [[link](https://arxiv.org/abs/2112.09686)]
36 | - **MFGNet** (MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking) [[link](https://arxiv.org/abs/2107.10433)]
37 | - **SwinTrack** (SwinTrack: A Simple and Strong Baseline for Transformer Tracking) [[link](https://arxiv.org/abs/2112.00995)]
38 | - **TREG** (Target Transformed Regression for Accurate Tracking) [[link](https://arxiv.org/abs/2104.00403)]
39 | - **TrTr** (TrTr: Visual Tracking with Transformer) [[link](https://arxiv.org/abs/2105.03817)]
40 |
41 | ### CVPR 2022:tada::tada::tada:
42 |
43 | - **CSWinTT** (Transformer Tracking with Cyclic Shifting Window Attention) [[link](https://arxiv.org/abs/2205.03806)]
44 | - **GTELT** (Global Tracking via Ensemble of Local Trackers) [[link](https://arxiv.org/abs/2203.16092)]
45 | - **MixFormer** (MixFormer: End-to-End Tracking with Iterative Mixed Attention) [[link](https://arxiv.org/abs/2203.11082)]
46 | - **RBO** (Ranking-Based Siamese Visual Tracking) [[link](https://arxiv.org/abs/2205.11761)]
47 | - **SBT** (Correlation-Aware Deep Tracking) [[link](https://arxiv.org/abs/2203.01666)]
48 | - **STNet** (Spiking Transformers for Event-based Single Object Tracking) [[link](https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Spiking_Transformers_for_Event-Based_Single_Object_Tracking_CVPR_2022_paper.html)]
49 | - **TCTrack** (TCTrack: Temporal Contexts for Aerial Tracking) [[link](https://arxiv.org/abs/2203.01885)]
50 | - **ToMP** (Transforming Model Prediction for Tracking) [[link](https://arxiv.org/abs/2203.11192)]
51 | - **UDAT** (Unsupervised Domain Adaptation for Nighttime Aerial Tracking) [[link](https://arxiv.org/abs/2203.10541)]
52 | - **UTT** (Unified Transformer Tracker for Object Tracking) [[link](https://arxiv.org/abs/2203.15175)]
53 |
54 | ### ECCV 2022:tada::tada::tada:
55 |
56 | - **AiATrack** (AiATrack: Attention in Attention for Transformer Visual Tracking) [[link](https://arxiv.org/abs/2207.09603)]
57 | - **OSTrack** (Joint Feature Learning and Relation Modeling for Tracking: A One-Stream Framework) [[link](https://arxiv.org/abs/2203.11991)]
58 | - **Unicorn** (Towards Grand Unification of Object Tracking) [[link](https://arxiv.org/abs/2207.07078)]
59 |
60 | ### AAAI 2022
61 |
62 | - **APFNet** (Attribute-based Progressive Fusion Network for RGBT Tracking) [[link](https://aaai-2022.virtualchair.net/poster_aaai7747)]
63 |
64 | ### IJCAI 2022
65 |
66 | - **InBN** (Learning Target-aware Representation for Visual Tracking via Informative Interactions) [[link](https://arxiv.org/abs/2201.02526)]
67 | - **SparseTT** (SparseTT: Visual Tracking with Sparse Transformers) [[link](https://arxiv.org/abs/2205.03776)]
68 |
69 | ### MICCAI 2022
70 |
71 | - **TLT** (Transformer Lesion Tracker) [[link](https://arxiv.org/abs/2206.06252)]
72 |
73 | ### WACV 2022
74 |
75 | - **SiamTPN** (Siamese Transformer Pyramid Networks for Real-Time UAV Tracking) [[link](https://arxiv.org/abs/2110.08822)]
76 |
77 | ### Preprint 2022
78 |
79 | - **HCAT** (Efficient Visual Tracking via Hierarchical Cross-Attention Transformer) [[link](https://arxiv.org/abs/2203.13537)]
80 | - **SiamLA** (Learning Localization-aware Target Confidence for Siamese Visual Tracking) [[link](https://arxiv.org/abs/2204.14093)]
81 | - **SimTrack** (Backbone is All Your Need: A Simplified Architecture for Visual Object Tracking) [[link](https://arxiv.org/abs/2203.05328)]
82 | - **TransT-M** (High-Performance Transformer Tracking) [[link](https://arxiv.org/abs/2203.13533)]
83 |
84 |
85 |
86 | ## :bookmark:Video Object Segmentation (VOS)
87 |
88 | ### ICCV 2019:tada::tada::tada:
89 |
90 | - **STM** (Video Object Segmentation using Space-Time Memory Networks) [[link](https://arxiv.org/abs/1904.00607)]
91 |
92 | ### CVPR 2020:tada::tada::tada:
93 |
94 | - **MAST** (MAST: A Memory-Augmented Self-supervised Tracker) [[link](https://arxiv.org/abs/2002.07793)]
95 | - **TVOS** (A Transductive Approach for Video Object Segmentation) [[link](https://arxiv.org/abs/2004.07193)]
96 |
97 | ### NeurIPS 2020:tada::tada::tada:
98 |
99 | - **AFB-URR** (Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement) [[link](https://arxiv.org/abs/2010.07958)]
100 |
101 | ### ECCV 2020:tada::tada::tada:
102 |
103 | - **GCM** (Fast Video Object Segmentation using the Global Context Module) [[link](https://arxiv.org/abs/2001.11243)]
104 | - **GraphMemVOS** (Video Object Segmentation with Episodic Graph Memory Networks) [[link](https://arxiv.org/abs/2007.07020)]
105 | - **KMN** (Kernelized Memory Network for Video Object Segmentation) [[link](https://arxiv.org/abs/2007.08270)]
106 |
107 | ### CVPR 2021:tada::tada::tada:
108 |
109 | - **GIEL** (Video Object Segmentation Using Global and Instance Embedding Learning) [[link](https://openaccess.thecvf.com/content/CVPR2021/html/Ge_Video_Object_Segmentation_Using_Global_and_Instance_Embedding_Learning_CVPR_2021_paper.html)]
110 | - **LCM** (Learning Position and Target Consistency for Memory-based Video Object Segmentation) [[link](https://arxiv.org/abs/2104.04329)]
111 | - **RMNet** (Efficient Regional Memory Network for Video Object Segmentation) [[link](https://arxiv.org/abs/2103.12934)]
112 | - **SSTVOS** (SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation) [[link](https://arxiv.org/abs/2101.08833)]
113 | - **SwiftNet** (SwiftNet: Real-time Video Object Segmentation) [[link](https://arxiv.org/abs/2102.04604)]
114 |
115 | ### NeurIPS 2021:tada::tada::tada:
116 |
117 | - **AOT** (Associating Objects with Transformers for Video Object Segmentation) [[link](https://arxiv.org/abs/2106.02638)]
118 | - **STCN** (Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation) [[link](https://arxiv.org/abs/2106.05210)]
119 |
120 | ### ICCV 2021:tada::tada::tada:
121 |
122 | - **DINO** (Emerging Properties in Self-Supervised Vision Transformers) [[link](https://arxiv.org/abs/2104.14294)]
123 | - **HMMN** (Hierarchical Memory Matching Network for Video Object Segmentation) [[link](https://arxiv.org/abs/2109.11404)]
124 | - **JOINT** (Joint Inductive and Transductive Learning for Video Object Segmentation) [[link](https://arxiv.org/abs/2108.03679)]
125 | - **MotionGroup** (Self-supervised Video Object Segmentation by Motion Grouping) [[link](https://arxiv.org/abs/2104.07658)]
126 | - **SAMN** (Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking) [[link](https://arxiv.org/abs/2009.09669)]
127 |
128 | ### AAAI 2021
129 |
130 | - **STG-Net** (Spatiotemporal Graph Neural Network based Mask Reconstruction for Video Object Segmentation) [[link](https://arxiv.org/abs/2012.05499)]
131 |
132 | ### Preprint 2021
133 |
134 | - **TransVOS** (TransVOS: Video Object Segmentation with Transformers) [[link](https://arxiv.org/abs/2106.00588)]
135 |
136 | ### CVPR 2022:tada::tada::tada:
137 |
138 | - **LBDT** (Language-Bridged Spatial-Temporal Interaction for Referring Video Object Segmentation) [[link](https://arxiv.org/abs/2206.03789)]
139 | - **MTTR** (End-to-End Referring Video Object Segmentation with Multimodal Transformers) [[link](https://arxiv.org/abs/2111.14821)]
140 | - **RDE-VOS** (Recurrent Dynamic Embedding for Video Object Segmentation) [[link](https://arxiv.org/abs/2205.03761)]
141 | - **ReferFormer** (Language as Queries for Referring Video Object Segmentation) [[link](https://arxiv.org/abs/2201.00487)]
142 |
143 | ### ECCV 2022:tada::tada::tada:
144 |
145 | - **QDMN** (Learning Quality-aware Dynamic Memory for Video Object Segmentation) [[link](https://arxiv.org/abs/2207.07922)]
146 | - **Unicorn** (Towards Grand Unification of Object Tracking) [[link](https://arxiv.org/abs/2207.07078)]
147 | - **XMem** (XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model) [[link](https://arxiv.org/abs/2207.07115)]
148 |
149 | ### AAAI 2022
150 |
151 | - **SITVOS** (Siamese Network with Interactive Transformer for Video Object Segmentation) [[link](https://arxiv.org/abs/2112.13983)]
152 |
153 | ### WACV 2022
154 |
155 | - **BMVOS** (Pixel-Level Bijective Matching for Video Object Segmentation) [[link](https://arxiv.org/abs/2110.01644)]
156 |
157 | ### Preprint 2022
158 |
159 | - **AOST** (Associating Objects with Scalable Transformers for Video Object Segmentation) [[link](https://arxiv.org/abs/2203.11442)]
160 | - **INO** (In-N-Out Generative Learning for Dense Unsupervised Video Segmentation) [[link](https://arxiv.org/abs/2203.15312)]
161 | - **Locater** (Local-Global Context Aware Transformer for Language-Guided Video Segmentation) [[link](https://arxiv.org/abs/2203.09773)]
162 | - **VLGM+LMDF** (Deeply Interleaved Two-Stream Encoder for Referring Video Segmentation) [[link](https://arxiv.org/abs/2203.15969)]
163 |
164 |
165 |
166 | ## :bookmark:Multiple Object Tracking (MOT)
167 |
168 | ### CVPR 2021:tada::tada::tada:
169 |
170 | - **MeNToS** (MeNToS: Tracklets Association with a Space-Time Memory Network) [[link](https://arxiv.org/abs/2107.07067)]
171 |
172 | ### Preprint 2021
173 |
174 | - **MeNToS** (Multi-Object Tracking and Segmentation with a Space-Time Memory Network) [[link](https://arxiv.org/abs/2110.11284)]
175 | - **MO3TR** (Looking Beyond Two Frames: End-to-End Multi-Object Tracking Using Spatial and Temporal Transformers) [[link](https://arxiv.org/abs/2103.14829)]
176 | - **RelationTrack** (RelationTrack: Relation-aware Multiple Object Tracking with Decoupled Representation) [[link](https://arxiv.org/abs/2105.04322)]
177 | - **TransCenter** (TransCenter: Transformers with Dense Queries for Multiple-Object Tracking) [[link](https://arxiv.org/abs/2103.15145)]
178 | - **TransMOT** (TransMOT: Spatial-Temporal Graph Transformer for Multiple Object Tracking) [[link](https://arxiv.org/abs/2104.00194)]
179 | - **TransTrack** (TransTrack: Multiple Object Tracking with Transformer) [[link](https://arxiv.org/abs/2012.15460)]
180 |
181 | ### CVPR 2022:tada::tada::tada:
182 |
183 | - **GTR** (Global Tracking Transformers) [[link](https://arxiv.org/abs/2203.13250)]
184 | - **MeMOT** (MeMOT: Multi-Object Tracking with Memory) [[link](https://arxiv.org/abs/2203.16761)]
185 | - **Time3D** (Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for Autonomous Driving) [[link](https://arxiv.org/abs/2205.14882)]
186 | - **TrackFormer** (TrackFormer: Multi-Object Tracking with Transformers) [[link](https://arxiv.org/abs/2101.02702)]
187 | - **TRL** (Exploiting Temporal Relations on Radar Perception for Autonomous Driving) [[link](https://arxiv.org/abs/2204.01184)]
188 | - **UTT** (Unified Transformer Tracker for Object Tracking) [[link](https://arxiv.org/abs/2203.15175)]
189 |
190 | ### ECCV 2022:tada::tada::tada:
191 |
192 | - **MOTR** (MOTR: End-to-End Multiple-Object Tracking with TRansformer) [[link](https://arxiv.org/abs/2105.03247)]
193 | - **P3AFormer** (Tracking Objects as Pixel-wise Distributions) [[link](https://arxiv.org/abs/2207.05518)]
194 | - **Unicorn** (Towards Grand Unification of Object Tracking) [[link](https://arxiv.org/abs/2207.07078)]
195 |
196 | ### Preprint 2022
197 |
198 | - **PatchTrack** (PatchTrack: Multiple Object Tracking Using Frame Patches) [[link](https://arxiv.org/abs/2201.00080)]
199 |
200 |
201 |
202 | ## :bookmark:Video Instance Segmentation (VIS)
203 |
204 | ### CVPR 2021:tada::tada::tada:
205 |
206 | - **VisTR** (End-to-End Video Instance Segmentation with Transformers) [[link](https://arxiv.org/abs/2011.14503)]
207 |
208 | ### NeurIPS 2021:tada::tada::tada:
209 |
210 | - **IFC** (Video Instance Segmentation using Inter-Frame Communication Transformers) [[link](https://arxiv.org/abs/2106.03299)]
211 |
212 | ### IROS 2021
213 |
214 | - **LMANet** (Local Memory Attention for Fast Video Semantic Segmentation) [[link](https://arxiv.org/abs/2101.01715)]
215 |
216 | ### ICIP 2021
217 |
218 | - **TMANet** (Temporal Memory Attention for Video Semantic Segmentation) [[link](https://arxiv.org/abs/2102.08643)]
219 |
220 | ### Preprint 2021
221 |
222 | - **Mask2Former** (Mask2Former for Video Instance Segmentation) [[link](https://arxiv.org/abs/2112.10764)]
223 | - **QueryTrack** (Tracking Instances as Queries) [[link](https://arxiv.org/abs/2106.11963)]
224 |
225 | ### CVPR 2022:tada::tada::tada:
226 |
227 | - **EfficientVIS** (Efficient Video Instance Segmentation via Tracklet Query and Proposal) [[link](https://arxiv.org/abs/2203.01853)]
228 | - **TeViT** (Temporally Efficient Vision Transformer for Video Instance Segmentation) [[link](https://arxiv.org/abs/2204.08412)]
229 | - **Video K-Net** (Video K-Net: A Simple, Strong, and Unified Baseline for Video Segmentation) [[link](https://arxiv.org/abs/2204.04656)]
230 |
231 | ### ECCV 2022:tada::tada::tada:
232 |
233 | - **Seqformer** (SeqFormer: a Frustratingly Simple Model for Video Instance Segmentation) [[link](https://arxiv.org/abs/2112.08275)]
234 |
235 | ### AAAI 2022
236 |
237 | - **HITF** (Hybrid Instance-aware Temporal Fusion for Online Video Instance Segmentation) [[link](https://arxiv.org/abs/2112.01695)]
238 |
239 | ### ICASSP 2022
240 |
241 | - **DefVIS** (Deformable VisTR: Spatio temporal deformable attention for video instance segmentation) [[link](https://arxiv.org/abs/2203.06318)]
242 |
243 | ### WACV 2022
244 |
245 | - **VPS-Transformer** (Time-Space Transformers for Video Panoptic Segmentation) [[link](https://openaccess.thecvf.com/content/WACV2022/html/Petrovai_Time-Space_Transformers_for_Video_Panoptic_Segmentation_WACV_2022_paper.html)]
246 |
247 | ### Preprint 2022
248 |
249 | - **MS-STS VIS** (Video Instance Segmentation via Multi-scale Spatio-temporal Split Attention Transformer) [[link](https://arxiv.org/abs/2203.13253)]
250 | - **VITA** (VITA: Video Instance Segmentation via Object Token Association) [[link](https://arxiv.org/abs/2206.04403)]
251 |
252 |
253 |
254 | ## :bookmark:Video Object Detection (VOD)
255 |
256 | ### Preprint 2020
257 |
258 | - **TCTR** (Temporal-Channel Transformer for 3D Lidar-Based Video Object Detection in Autonomous Driving) [[link](https://arxiv.org/abs/2011.13628)]
259 |
260 | ### Preprint 2021
261 |
262 | - **TransVOD** (End-to-End Video Object Detection with Spatial-Temporal Transformers) [[link](https://arxiv.org/abs/2105.10920)]
263 |
264 | ### CVPR 2022:tada::tada::tada:
265 |
266 | - **SLT-Net** (Implicit Motion Handling for Video Camouflaged Object Detection) [[link](https://arxiv.org/abs/2203.07363)]
267 |
268 | ### Preprint 2022
269 |
270 | - **TransVOD++** (TransVOD: End-to-end Video Object Detection with Spatial-Temporal Transformers) [[link](https://arxiv.org/abs/2201.05047)]
271 | - **UFO** (A Unified Transformer Framework for Group-based Segmentation: Co-Segmentation, Co-Saliency Detection and Video Salient Object Detection) [[link](https://arxiv.org/abs/2203.04708)]
272 |
273 |
274 |
275 | ## :bookmark:3D Object Tracking (3DOT)
276 |
277 | ### IROS 2021
278 |
279 | - **PTT** (PTT: Point-Track-Transformer Module for 3D Single Object Tracking in Point Clouds) [[link](https://arxiv.org/abs/2108.06455)]
280 |
281 | ### BMVC 2021
282 |
283 | - **LTTR** (3D Object Tracking with Transformer) [[link](https://arxiv.org/abs/2110.14921)]
284 |
285 | ### CVPR 2022:tada::tada::tada:
286 |
287 | - **PTTR** (PTTR: Relational 3D Point Cloud Object Tracking with Transformer) [[link](https://arxiv.org/abs/2112.02857)]
288 | - **TransFusion** (TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers) [[link](https://arxiv.org/abs/2203.11496)]
289 |
290 | ### ECCV 2022:tada::tada::tada:
291 |
292 | - **CMT** (CMT: Context-Matching-Guided Transformer for 3D Tracking in Point Clouds) [[link](https://github.com/jasongzy/CMT)]
293 |
294 |
295 |
296 | ## :bookmark:Object Re-Identification (ReID)
297 |
298 | ### CVPR 2021:tada::tada::tada:
299 |
300 | - **PAT** (Diverse Part Discovery: Occluded Person Re-identification with Part-Aware Transformer) [[paper](https://arxiv.org/abs/2106.04095)]
301 |
302 | ### ICCV 2021:tada::tada::tada:
303 |
304 | - **APD** (Transformer Meets Part Model: Adaptive Part Division for Person Re-Identification) [[paper](https://openaccess.thecvf.com/content/ICCV2021W/HTCV/html/Lai_Transformer_Meets_Part_Model_Adaptive_Part_Division_for_Person_Re-Identification_ICCVW_2021_paper.html)]
305 | - **TransReID** (TransReID: Transformer-based Object Re-Identification) [[paper](https://arxiv.org/abs/2102.04378)]
306 |
307 | ### MM 2021
308 |
309 | - **HAT** (HAT: Hierarchical Aggregation Transformers for Person Re-identification) [[paper](https://arxiv.org/abs/2107.05946)]
310 |
311 | ### Preprint 2021
312 |
313 | - **AAformer** (AAformer: Auto-Aligned Transformer for Person Re-Identification) [[paper](https://arxiv.org/abs/2104.00921)]
314 | - **CMTR** (CMTR: Cross-modality Transformer for Visible-infrared Person Re-identification) [[paper](https://arxiv.org/abs/2110.08994)]
315 | - **STT** (Spatiotemporal Transformer for Video-based Person Re-identification) [[paper](https://arxiv.org/abs/2103.16469)]
316 | - **TMT** (A Video Is Worth Three Views: Trigeminal Transformers for Video-based Person Re-identification) [[paper](https://arxiv.org/abs/2104.01745)]
317 |
--------------------------------------------------------------------------------
/notes/UAV-Vision.md:
--------------------------------------------------------------------------------
1 | # UAV-Vision
2 |
3 |
4 |
5 |
6 | ## Datasets:
7 |
8 | * **VisDrone: Vision Meets Drones: A Challenge.**
9 |
10 | * [[VisDrone2020](http://aiskyeye.com/)]
11 |
12 | * [[VisDrone2019](http://2019.aiskyeye.com/)]
13 |
14 | * [[VisDrone2018](http://2019.aiskyeye.com)]
15 |
16 |
17 |
18 | * **DroneVehicle.** [[official-link](https://github.com/VisDrone/DroneVehicle)]
19 |
20 |
21 |
22 | * **DUT-UAVOSPT [ICASSP 2019]:**
23 | **"Online Single Person Tracking in Unmanned Aerial Vehicles."**
24 |
25 | [[paper](https://ieeexplore.ieee.org/abstract/document/8682449)]
26 | [[official-link](https://github.com/wangdongdut/Online-Single-Person-Tracking-in-UAV)]
27 |
28 | * **AU-AIR [ICRA 2020]: Ilker Bozcan, Erdal Kayacan.**
29 | **"AU-AIR: A Multi-modal Unmanned Aerial Vehicle Dataset for Low Altitude Traffic Surveillance." ICRA (2020).**
30 | [[paper](http://www.lewissoft.com/pdf/ICRA2020/0905.pdf)]
31 | [[official-link](https://bozcani.github.io/auairdataset)]
32 |
33 | * **UAVDT [ECCV 2018]: Dawei Du, Yuankai Qi, Hongyang Yu, Yifan Yang, Kaiwen Duan, Guorong Li, Weigang Zhang, Qingming Huang, Qi Tian.**
34 | **"The Unmanned Aerial Vehicle Benchmark: Object Detection and Tracking." ECCV (2018).**
35 | [[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Dawei_Du_The_Unmanned_Aerial_ECCV_2018_paper.pdf)]
36 | [[official-link](https://sites.google.com/site/daviddo0323/projects/uavdt)]
37 |
38 | * **DTB70 [AAAI 2017]: Siyi Li, Dit-Yan Yeung.**
39 | **"Visual Object Tracking for Unmanned Aerial Vehicles: A Benchmark and New Motion Models." AAAI (2017).**
40 | [[paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/viewFile/14338/14292)]
41 | [[official-link](https://github.com/flyers/drone-tracking)]
42 |
43 | * **UAV123 [ECCV 2016]: Matthias Mueller, Neil Smith, Bernard Ghanem.**
44 | **"A Benchmark and Simulator for UAV Tracking." ECCV (2016).**
45 | [[paper](https://link.springer.com/chapter/10.1007%2F978-3-319-46448-0_27)]
46 | [[official-link](https://cemse.kaust.edu.sa/ivul/uav123)]
47 |
48 | * **VEDAI [JVCIR 2016]: Sebastien Razakarivony, Frederic Jurie.**
49 | **"Vehicle Detection in Aerial Imagery: A Small Target Detection Benchmark." JVCIR (2016).**
50 | [[paper](https://www.sciencedirect.com/science/article/abs/pii/S1047320315002187)]
51 | [[official-link](https://downloads.greyc.fr/vedai/)]
52 |
53 | * **DOTA [CVPR 2018]: Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, Liangpei Zhang.**
54 | **"DOTA: A Large-scale Dataset for Object DeTection in Aerial Images." CVPR (2018).**
55 | [[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Xia_DOTA_A_Large-Scale_CVPR_2018_paper.pdf)]
56 | [[official-link](https://captain-whu.github.io/DOTA/)]
57 |
58 | * **iSAID [CVPRW 2019]: Syed Waqas Zamir, Aditya Arora, Akshita Gupta, Salman Khan, Guolei Sun, Fahad Shahbaz Khan, Fan Zhu, Ling Shao, Gui-Song Xia, Xiang Bai.**
59 | **"iSAID: A Large-scale Dataset for Instance Segmentation in Aerial Images." CVPRW (2019).**
60 | [[paper](https://openaccess.thecvf.com/content_CVPRW_2019/papers/DOAI/Zamir_iSAID_A_Large-scale_Dataset_for_Instance_Segmentation_in_Aerial_Images_CVPRW_2019_paper.pdf)]
61 | [[official-link](https://captain-whu.github.io/iSAID/)]
62 |
63 | * **UAVid [P&RS 2020]: Ye Lyu, George Vosselman, Gui-Song Xia, Alper Yilmaz, Michael Ying Yang.**
64 | **"UAVid: A semantic segmentation dataset for UAV imagery." P&RS (2020).**
65 | [[paper](https://www.sciencedirect.com/science/article/abs/pii/S0924271620301295)]
66 | [[official-link](https://uavid.nl/)]
67 |
68 | * **Anti-UAV [CVPR 2020].** [[official-link](https://github.com/ZhaoJ9014/Anti-UAV)]
69 |
70 | * **DroneFace.** [[official-link](https://homepage.iis.sinica.edu.tw/~swc/pub/drone_face_open_dataset.html)]
71 |
72 | * **UCF Aerial Action Dataset.** [[official-link](https://www.crcv.ucf.edu/data/UCF_Aerial_Action.php)]
73 |
74 | * **Semantic Drone Dataset.** [[official-link](https://www.tugraz.at/institute/icg/research/team-fraundorfer/software-media/dronedataset/)]
75 |
76 | ## Vehicle Re-identification:
77 |
78 | * https://github.com/bismex/Awesome-vehicle-re-identification
79 | * https://github.com/layumi/Vehicle_reID-Collection
80 | * https://github.com/Jakel21/vehicle-ReID-baseline
81 |
82 |
83 | ## Person Re-identification:
84 |
85 | * https://github.com/bismex/Awesome-person-re-identification
86 | * https://github.com/layumi/Person_reID_baseline_pytorch
87 |
--------------------------------------------------------------------------------
/notes/Visual Trackers for Single Object.md:
--------------------------------------------------------------------------------
1 | ## Visual Trackers for Single Object
2 |
3 | -----------------------
4 | ### Dataset
5 |
6 | * **CDTB:** Alan Lukežič, Ugur Kart, Jani Käpylä, Ahmed Durmush, Joni-Kristian Kämäräinen, Jiří Matas, Matej Kristan. CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark. [[paper](https://arxiv.org/pdf/1907.00618.pdf)]
7 |
8 | * **LTB50:** Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas, Matej Kristan. Performance Evaluation Methodology for Long-Term Visual Object Tracking. [[paper](https://arxiv.org/pdf/1906.08675.pdf)]
9 |
10 | * **GOT-10k:** Lianghua Huang, Xin Zhao, Kaiqi Huang. GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild. [[paper](https://arxiv.org/pdf/1810.11981.pdf)][[github](https://github.com/got-10k/toolkit-matlab)][[project](http://got-10k.aitestunion.com/)]
11 |
12 | * **LaSOT:** Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, Haibin Ling. "LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_LaSOT_A_High-Quality_Benchmark_for_Large-Scale_Single_Object_Tracking_CVPR_2019_paper.pdf)][[supp](http://openaccess.thecvf.com/content_CVPR_2019/supplemental/Fan_LaSOT_A_High-Quality_CVPR_2019_supplemental.pdf)][[project](https://cis.temple.edu/lasot/)]
13 |
14 | * **NFS:** H. Kiani Galoogahi, A. Fagg, C. Huang, D. Ramanan, S. Lucey. Need for Speed: A Benchmark for Higher Frame Rate Object Tracking, 2017, arXiv preprint arXiv:1703.05884. [[paper](https://arxiv.org/abs/1703.05884.pdf)][[project](http://ci2cv.net/nfs/index.html)]
15 |
16 | * **UAV123:** A Benchmark and Simulator for UAV Tracking. [[project](https://ivul.kaust.edu.sa/Pages/Dataset-UAV123.aspx)]
17 |
18 | * **TrackNet:** Chenge Li, Gregory Dobler, Xin Feng, Yao Wang. TrackNet: Simultaneous Object Detection and Tracking and Its Application in Traffic Video Analysis. [[paper](https://arxiv.org/pdf/1902.01466.pdf)][[project](https://tracking-net.org/)]
19 |
20 | * **VOT2018:** VOT2018 Challenge. [[project](http://www.votchallenge.net/vot2018/dataset.html)]
21 |
22 | * **OTB2015:** Wu Y, Lim J, Yang M H. Object Tracking Benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834-1848. [[paper](https://www.researchgate.net/profile/Ming-Hsuan_Yang2/publication/273279481_Object_Tracking_Benchmark/links/5556e2d908ae6943a8734e3e/Object-Tracking-Benchmark.pdf)][[project](http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html)]
23 |
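The benchmarks above (OTB, GOT-10k, LaSOT, and others) all report overlap-based success scores: each frame's predicted box is compared against the ground truth by intersection-over-union, and the success rate is the fraction of frames above an overlap threshold. A minimal plain-Python sketch of that computation (not taken from any official toolkit; the function names here are made up for illustration):

```python
# Illustrative sketch of overlap-based tracking evaluation, as used in
# success plots of benchmarks like OTB and GOT-10k. Not from any toolkit;
# `iou` and `success_rate` are names invented for this example.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, threshold=0.5):
    """Fraction of frames whose predicted/ground-truth IoU exceeds threshold."""
    overlaps = [iou(p, g) for p, g in zip(preds, gts)]
    return sum(o > threshold for o in overlaps) / len(overlaps)

# Toy 3-frame sequence: the tracker drifts away on the last frame.
preds = [(10, 10, 50, 50), (12, 12, 50, 50), (100, 100, 40, 40)]
gts = [(10, 10, 50, 50)] * 3
print(success_rate(preds, gts))  # → 0.6666666666666666
```

Sweeping the threshold from 0 to 1 and plotting the success rate at each value gives the familiar success curves whose area under the curve (AUC) is the number most of the papers below report.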
24 | -----------------------
25 | ### Survey
26 |
27 | * Ross Goroshin, Jonathan Tompson, Debidatta Dwibedi. An Analysis of Object Representations in Deep Visual Trackers. [[paper](https://arxiv.org/pdf/2001.02593.pdf)]
28 |
29 | * Shaoze You, Hua Zhu, Menggang Li, Yutan Li. A Review of Visual Trackers and Analysis of its Application to Mobile Robot. [[paper](https://arxiv.org/ftp/arxiv/papers/1910/1910.09761.pdf)]
30 |
31 | * Seyed Mojtaba Marvasti-Zadeh, Li Cheng, Hossein Ghanei-Yakhdan, Shohreh Kasaei. Deep Learning for Visual Tracking: A Comprehensive Survey. [[paper](https://arxiv.org/pdf/1912.00535.pdf)]
32 |
33 | ----------------------------
34 | ### CVPR2020
35 |
36 | * **SiamAttn:** Yuechen Yu, Yilei Xiong, Weilin Huang, Matthew R. Scott. Deformable Siamese Attention Networks for Visual Object Tracking. [[paper](https://arxiv.org/pdf/2004.06711.pdf)]
37 |
38 | * **Siam R-CNN:** Paul Voigtlaender, Jonathon Luiten, Philip H.S. Torr, Bastian Leibe. Siam R-CNN: Visual Tracking by Re-Detection. [[paper](https://www.vision.rwth-aachen.de/media/papers/192/siamrcnn.pdf)][[code](https://github.com/VisualComputingInstitute/SiamR-CNN)][[project](https://www.vision.rwth-aachen.de/page/siamrcnn)]
39 |
40 | * **Retina-MAML:** Guangting Wang, Chong Luo, Xiaoyan Sun, Zhiwei Xiong, Wenjun Zeng. Tracking by Instance Detection: A Meta-Learning Approach. (**oral**) [[paper](https://arxiv.org/pdf/2004.00830.pdf)]
41 |
42 | * **PrDiMP:** Martin Danelljan, Luc Van Gool, Radu Timofte. Probabilistic Regression for Visual Tracking. [[paper](https://arxiv.org/pdf/2003.12565.pdf)][[code](https://github.com/visionml/pytracking)]
43 |
44 | * **CSA:** Bin Yan, Dong Wang, Huchuan Lu, Xiaoyun Yang. Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises. [[paper](https://arxiv.org/pdf/2003.09595.pdf)][[code](https://github.com/MasterBin-IIAU/CSA)]
45 |
46 | * **SiamBAN:** Zedu Chen, Bineng Zhong, Guorong Li, Shengping Zhang, Rongrong Ji. Siamese Box Adaptive Network for Visual Tracking. [[paper](https://arxiv.org/pdf/2003.06761.pdf)][[code](https://github.com/hqucv/siamban)]
47 |
48 |
49 | ----------------------------
50 | ### AAAI2020
51 |
52 | * **GlobalTrack:** Lianghua Huang, Xin Zhao, Kaiqi Huang. GlobalTrack: A Simple and Strong Baseline for Long-term Tracking. [[paper](https://arxiv.org/pdf/1912.08531.pdf)][[code](https://github.com/huanglianghua/GlobalTrack)]
53 |
54 | * **SPSTracker:** Qintao Hu, Lijun Zhou, Xiaoxiao Wang, Yao Mao, Jianlin Zhang, Qixiang Ye. "SPSTracker: Sub-Peak Suppression of Response Map for Robust Object Tracking." AAAI (2020). [[paper](https://arxiv.org/pdf/1912.00597.pdf)][[code](https://github.com/TrackerLB/SPSTracker)]
55 |
56 | * **SiamFC++:** Yinda Xu, Zeyu Wang, Zuoxin Li, Yuan Ye, Gang Yu. "SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines." AAAI (2020). [[paper](https://arxiv.org/pdf/1911.06188.pdf)][[code](https://github.com/MegviiDetection/video_analyst)]
57 |
58 | ----------------------------
59 | ### 2020
60 |
61 | * **TS-RCN:** Ning Zhang, Jingen Liu, Ke Wang, Dan Zeng, Tao Mei. Robust Visual Object Tracking with Two-Stream Residual Convolutional Networks. [[paper](https://arxiv.org/pdf/2005.06536.pdf)]
62 |
63 | * **FCOT:** Yutao Cui, Cheng Jiang, Limin Wang, Gangshan Wu. Fully Convolutional Online Tracking. [[paper](https://arxiv.org/pdf/2004.07109.pdf)][[code](https://github.com/MCG-NJU/FCOT)]
64 |
65 | * **Surroundings:** Goutam Bhat, Martin Danelljan, Luc Van Gool, Radu Timofte. Know Your Surroundings: Exploiting Scene Information for Object Tracking. [[paper](https://arxiv.org/pdf/2003.11014.pdf)]
66 |
67 | * **DMV:** Gunhee Nam, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim. DMV: Visual Object Tracking via Part-level Dense Memory and Voting-based Retrieval. [[paper](https://arxiv.org/pdf/2003.09171.pdf)]
68 |
69 | -------------------
70 | ### ICCV2019
71 |
72 | * **VOT2019:** Kristan, Matej, et al. "The Seventh Visual Object Tracking VOT2019 Challenge Results." ICCV workshops (2019). [[paper](http://openaccess.thecvf.com/content_ICCVW_2019/papers/VOT/Kristan_The_Seventh_Visual_Object_Tracking_VOT2019_Challenge_Results_ICCVW_2019_paper.pdf)]
73 |
74 | * **DiMP:** Goutam Bhat, Martin Danelljan, Luc Van Gool, Radu Timofte. "Learning Discriminative Model Prediction for Tracking." ICCV (2019). [[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Bhat_Learning_Discriminative_Model_Prediction_for_Tracking_ICCV_2019_paper.pdf)][[code](https://github.com/visionml/pytracking)][[supp](http://openaccess.thecvf.com/content_ICCV_2019/supplemental/Bhat_Learning_Discriminative_Model_ICCV_2019_supplemental.pdf)]
75 |
76 | * **UpdateNet:** Lichao Zhang, Abel Gonzalez-Garcia, Joost van de Weijer, Martin Danelljan, Fahad Shahbaz Khan. "Learning the Model Update for Siamese Trackers." ICCV (2019). [[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_Learning_the_Model_Update_for_Siamese_Trackers_ICCV_2019_paper.pdf)][[code](https://github.com/zhanglichao/updatenet)][[supp](http://openaccess.thecvf.com/content_ICCV_2019/supplemental/Zhang_Learning_the_Model_ICCV_2019_supplemental.pdf)]
77 |
78 | * Achal Dave, Pavel Tokmakov, Cordelia Schmid, Deva Ramanan. "Learning to Track Any Object." ICCV workshop (2019). [[paper](https://arxiv.org/pdf/1910.11844.pdf)]
79 |
80 | * **GradNet:** Peixia Li, Boyu Chen, Wanli Ouyang, Dong Wang, Xiaoyun Yang, Huchuan Lu. "GradNet: Gradient-Guided Network for Visual Object Tracking." ICCV (2019 **oral**). [[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Li_GradNet_Gradient-Guided_Network_for_Visual_Object_Tracking_ICCV_2019_paper.pdf)][[code](https://github.com/LPXTT/GradNet-Tensorflow)]
81 |
82 | * **GFS-DCF:** Tianyang Xu, Zhen-Hua Feng, Xiao-Jun Wu, Josef Kittler. "Joint Group Feature Selection and Discriminative Filter Learning for Robust Visual Object Tracking." ICCV (2019). [[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Xu_Joint_Group_Feature_Selection_and_Discriminative_Filter_Learning_for_Robust_ICCV_2019_paper.pdf)][[code](https://github.com/XU-TIANYANG/GFS-DCF)]
83 |
84 | -------------------
85 | ### ICIP2019
86 |
87 | * **Cascaded-Siam:** Peng Gao, Yipeng Ma, Ruyue Yuan, Liyi Xiao, Fei Wang. "Learning Cascaded Siamese Networks for High Performance Visual Tracking." ICIP (2019). [[paper](https://arxiv.org/pdf/1905.02857.pdf)]
88 |
89 | ### CVPR2019
90 |
91 | * **RPCF:** Yuxuan Sun, Chong Sun, Dong Wang, Huchuan Lu, You He. "ROI Pooled Correlation Filters for Visual Tracking." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Sun_ROI_Pooled_Correlation_Filters_for_Visual_Tracking_CVPR_2019_paper.pdf)]
92 |
93 | * **OTR:** Ugur Kart, Alan Lukezic, Matej Kristan, Joni-Kristian Kamarainen, Jiri Matas. "Object Tracking by Reconstruction with View-Specific Discriminative Correlation Filters." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kart_Object_Tracking_by_Reconstruction_With_View-Specific_Discriminative_Correlation_Filters_CVPR_2019_paper.pdf)][[code](https://github.com/ugurkart/OTR)]
94 |
95 | * **GCT:** Junyu Gao, Tianzhu Zhang, Changsheng Xu. "Graph Convolutional Tracking." CVPR (2019 **oral**). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Gao_Graph_Convolutional_Tracking_CVPR_2019_paper.pdf)]
96 |
97 | * **SPM:** Guangting Wang, Chong Luo, Zhiwei Xiong, Wenjun Zeng. "SPM-Tracker: Series-Parallel Matching for Real-Time Visual Object Tracking." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_SPM-Tracker_Series-Parallel_Matching_for_Real-Time_Visual_Object_Tracking_CVPR_2019_paper.pdf)]
98 |
99 | * **ATOM:** Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, Michael Felsberg. "ATOM: Accurate Tracking by Overlap Maximization." CVPR (2019 **oral**). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Danelljan_ATOM_Accurate_Tracking_by_Overlap_Maximization_CVPR_2019_paper.pdf)][[supp](http://openaccess.thecvf.com/content_CVPR_2019/supplemental/Danelljan_ATOM_Accurate_Tracking_CVPR_2019_supplemental.pdf)][[code](https://github.com/visionml/pytracking)]
100 |
101 | * **TADT:** Xin Li, Chao Ma, Baoyuan Wu, Zhenyu He, Ming-Hsuan Yang. "Target-Aware Deep Tracking." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Target-Aware_Deep_Tracking_CVPR_2019_paper.pdf)][[supp](http://openaccess.thecvf.com/content_CVPR_2019/supplemental/Li_Target-Aware_Deep_Tracking_CVPR_2019_supplemental.pdf)][[project](https://xinli-zn.github.io/TADT-project-page/)][[official-code-matlab](https://github.com/XinLi-zn/TADT)]
102 |
103 | * **UDT:** Ning Wang, Yibing Song, Chao Ma, Wengang Zhou, Wei Liu, Houqiang Li. "Unsupervised Deep Tracking." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Unsupervised_Deep_Tracking_CVPR_2019_paper.pdf)][[official-code-matlab](https://github.com/594422814/UDT)][[official-code-pytorch](https://github.com/594422814/UDT_pytorch)]
104 |
105 | * **ASRCF:** Kenan Dai, Dong Wang, Huchuan Lu, Chong Sun, Jianhua Li. "Visual Tracking via Adaptive Spatially-Regularized Correlation Filters." CVPR (2019 **oral**). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Dai_Visual_Tracking_via_Adaptive_Spatially-Regularized_Correlation_Filters_CVPR_2019_paper.pdf)][[code](https://github.com/Daikenan/ASRCF)]
106 |
107 | * **SiamMask:** Qiang Wang, Li Zhang, Luca Bertinetto, Weiming Hu, Philip H.S. Torr. "Fast Online Object Tracking and Segmentation: A Unifying Approach." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Fast_Online_Object_Tracking_and_Segmentation_A_Unifying_Approach_CVPR_2019_paper.pdf)][[supp](http://openaccess.thecvf.com/content_CVPR_2019/supplemental/Wang_Fast_Online_Object_CVPR_2019_supplemental.pdf)][[project](http://www.robots.ox.ac.uk/~qwang/SiamMask/)][[code](https://github.com/foolwood/SiamMask)]
108 |
109 | * **SiamDW:** Zhipeng Zhang, Houwen Peng. "Deeper and Wider Siamese Networks for Real-Time Visual Tracking." CVPR (2019 **oral**). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Deeper_and_Wider_Siamese_Networks_for_Real-Time_Visual_Tracking_CVPR_2019_paper.pdf)][[supp](http://openaccess.thecvf.com/content_CVPR_2019/supplemental/Zhang_Deeper_and_Wider_CVPR_2019_supplemental.pdf)][[code](https://github.com/researchmm/SiamDW)]
110 |
111 | * **SiamRPN++:** Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, Junjie Yan. "SiamRPN++: Evolution of Siamese Visual Tracking with Very Deep Networks." CVPR (2019 **oral**). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_SiamRPN_Evolution_of_Siamese_Visual_Tracking_With_Very_Deep_Networks_CVPR_2019_paper.pdf)][[project](http://bo-li.info/SiamRPN++/)]
112 |
113 | * **C-RPN:** Heng Fan, Haibin Ling. "Siamese Cascaded Region Proposal Networks for Real-Time Visual Tracking." CVPR (2019). [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_Siamese_Cascaded_Region_Proposal_Networks_for_Real-Time_Visual_Tracking_CVPR_2019_paper.pdf)][[supp](http://openaccess.thecvf.com/content_CVPR_2019/supplemental/Fan_Siamese_Cascaded_Region_CVPR_2019_supplemental.pdf)][[code](http://www.dabi.temple.edu/~hbling/code/CRPN/crpn.htm)]
114 |
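Most of the Siamese trackers in this section (SiamFC derivatives such as SiamRPN++, SiamMask, SiamDW) share one core operation: cross-correlating template features against search-region features to produce a response map whose peak localizes the target. A minimal single-channel NumPy sketch of that matching step, for illustration only — the papers use learned multi-channel features and (in SiamRPN++) depthwise correlation:

```python
import numpy as np

def xcorr(search, template):
    """Slide the template over the search features and record the
    inner product at each offset -- the raw similarity map that a
    Siamese tracker's head post-processes into a target location."""
    H, W = search.shape
    h, w = template.shape
    response = np.empty((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            response[i, j] = np.sum(search[i:i + h, j:j + w] * template)
    return response

search = np.zeros((5, 5))
search[2:4, 2:4] = 1.0        # "target" patch sits at offset (2, 2)
template = np.ones((2, 2))    # template matches the target patch
response = xcorr(search, template)
peak = np.unravel_index(response.argmax(), response.shape)
print(peak)  # -> (2, 2): the response peak localizes the target
```

In the real trackers this loop is a single grouped convolution over deep features, and the response map feeds a classification/regression head (RPN, mask branch, etc.) rather than a bare argmax.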
115 | ### 2019
116 | * **Siam-GAN:** Zhaofu Diao, Ying Wei, Yujiang Fu, Shuo Feng. A single target tracking algorithm based on Generative Adversarial Networks. [[paper](https://arxiv.org/pdf/1912.11967.pdf)]
117 |
118 | * **SiamMan:** Wenzhang Zhou, Longyin Wen, Libo Zhang, Dawei Du, Tiejian Luo, Yanjun Wu. SiamMan: Siamese Motion-aware Network for Visual Tracking. [[paper](https://arxiv.org/pdf/1912.05515.pdf)]
119 |
120 | * **D3S:** Alan Lukežič, Jiří Matas, Matej Kristan. D3S -- A Discriminative Single Shot Segmentation Tracker. [[paper](https://arxiv.org/pdf/1911.08862.pdf)]
121 |
122 | * **TracKlinic:** Heng Fan, Fan Yang, Peng Chu, Lin Yuan, Haibin Ling. TracKlinic: Diagnosis of Challenge Factors in Visual Tracking. [[paper](https://arxiv.org/pdf/1911.07959.pdf)]
123 |
124 | * **SiamCAR:** Dongyan Guo, Jun Wang, Ying Cui, Zhenhua Wang, Shengyong Chen. SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking. [[paper](https://arxiv.org/pdf/1911.07241.pdf)]
125 |
126 | * **DROL:** Jinghao Zhou, Peng Wang, Haoyang Sun. Discriminative and Robust Online Learning for Siamese Visual Tracking. [[paper](https://arxiv.org/pdf/1909.02959.pdf)][[code](https://github.com/shallowtoil/DROL)]
127 |
128 | * **RAR:** Peng Gao, Qiquan Zhang, Liyi Xiao, Yan Zhang, Fei Wang. Learning Reinforced Attentional Representation for End-to-End Visual Tracking. [[paper](https://arxiv.org/pdf/1908.10009.pdf)]
129 |
130 | * **BVT:** Qing Guo, Wei Feng, Zhihao Chen, Ruijun Gao, Liang Wan, Song Wang. Effects of Blur and Deblurring to Visual Object Tracking. [[paper](https://arxiv.org/pdf/1908.07904.pdf)]
131 |
132 | * **GFS-DCF:** Tianyang Xu, Zhen-Hua Feng, Xiao-Jun Wu, Josef Kittler. Joint Group Feature Selection and Discriminative Filter Learning for Robust Visual Object Tracking. [[paper](https://arxiv.org/pdf/1907.13242.pdf)]
133 |
134 | * **THOR:** Axel Sauer, Elie Aljalbout, Sami Haddadin. Tracking Holistic Object Representations. BMVC 2019. [[paper](https://arxiv.org/pdf/1907.12920.pdf)][[code](https://github.com/xl-sr/THOR)]
135 |
136 | * **ROAM:** Tianyu Yang, Pengfei Xu, Runbo Hu, Hua Chai, Antoni B. Chan. ROAM: Recurrently Optimizing Tracking Model. [[paper](https://arxiv.org/pdf/1907.12006.pdf)]
138 |
139 | * **fECO_fDeepSTRCF:** Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Houqiang Li. Real-Time Correlation Tracking via Joint Model Compression and Transfer. [[paper](https://arxiv.org/pdf/1907.09831.pdf)]
140 |
141 | * **SiamMask_E:** Bao Xin Chen, John K. Tsotsos. Fast Visual Object Tracking with Rotated Bounding Boxes. [[paper](https://arxiv.org/pdf/1907.03892.pdf)]
142 |
143 | * **DCFST:** Linyu Zheng, Ming Tang, Jinqiao Wang, Hanqing Lu. Learning Features with Differentiable Closed-Form Solver for Tracking. [[paper](https://arxiv.org/pdf/1906.10414.pdf)]
144 |
145 | * **HAT:** Qiangqiang Wu, Zhihui Chen, Lin Cheng, Yan Yan, Bo Li, Hanzi Wang. Hallucinated Adversarial Learning for Robust Visual Tracking. [[paper](https://arxiv.org/pdf/1906.07008.pdf)]
146 |
147 | * **RCG:** Feng Li, Xiaohe Wu, Wangmeng Zuo, David Zhang, Lei Zhang. Remove Cosine Window from Correlation Filter-based Visual Trackers: When and How. [[paper](https://arxiv.org/pdf/1905.06648.pdf)][[code](https://github.com/lifeng9472/Removing_cosine_window_from_CF_trackers)]
148 |
149 | * **BoLTVOS:** Paul Voigtlaender, Jonathon Luiten, Bastian Leibe. BoLTVOS: Box-Level Tracking for Video Object Segmentation. [[paper](https://arxiv.org/pdf/1904.04552.pdf)]
150 |
151 | * **PTS:** Jianren Wang, Yihui He, Xiaobo Wang, Xinjia Yu, Xia Chen. Prediction-Tracking-Segmentation. [[paper](https://arxiv.org/pdf/1904.03280.pdf)]
152 |
153 | * **TCDCaps:** Ding Ma, Xiangqian Wu. TCDCaps: Visual Tracking via Cascaded Dense Capsules. [[paper](https://arxiv.org/pdf/1902.10054.pdf)]
154 | * **SiamVGG:** Yuhong Li, Xiaofan Zhang. SiamVGG: Visual Tracking using Deeper Siamese Networks. [[paper](https://arxiv.org/pdf/1902.02804.pdf)][[code](https://github.com/leeyeehoo/SiamVGG)]
155 |
156 | ### 2018.12
157 |
158 | * **AM-Net:** Xiaolong Jiang, Peizhao Li, Xiantong Zhen, Xianbin Cao. Model-free Tracking with Deep Appearance and Motion Features Integration. WACV (2019). [[paper](https://arxiv.org/pdf/1812.06418.pdf)]
159 |
160 | ### AAAI2019
161 |
162 | * **LDES:** Yang Li, Jianke Zhu, Steven C.H. Hoi, Wenjie Song, Zhefeng Wang, Hantang Liu. "Robust Estimation of Similarity Transformation for Visual Object Tracking." AAAI (2019). [[paper](https://arxiv.org/pdf/1712.05231.pdf)][[code](https://github.com/ihpdep/LDES)]
163 |
164 | ----------------------------
165 |
166 | ### NIPS2018
167 |
168 | * **DAT:** Shi Pu, Yibing Song, Chao Ma, Honggang Zhang, Ming-Hsuan Yang. "Deep Attentive Tracking via Reciprocative Learning." NIPS (2018). [[paper](https://arxiv.org/pdf/1810.03851.pdf)][[project](https://ybsong00.github.io/nips18_tracking/index)][[code](https://github.com/shipubupt/NIPS2018)]
169 |
170 | ### ECCV2018
171 |
172 | * **UPDT:** Goutam Bhat, Joakim Johnander, Martin Danelljan, Fahad Shahbaz Khan, Michael Felsberg. "Unveiling the Power of Deep Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Goutam_Bhat_Unveiling_the_Power_ECCV_2018_paper.pdf)]
173 |
174 | * **DaSiamRPN:** Zheng Zhu, Qiang Wang, Bo Li, Wei Wu, Junjie Yan, Weiming Hu. "Distractor-aware Siamese Networks for Visual Object Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zheng_Zhu_Distractor-aware_Siamese_Networks_ECCV_2018_paper.pdf)][[github](https://github.com/foolwood/DaSiamRPN)]
175 |
176 | * **SACF:** Mengdan Zhang, Qiang Wang, Junliang Xing, Jin Gao, Peixi Peng, Weiming Hu, Steve Maybank. "Visual Tracking via Spatially Aligned Correlation Filters Network." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/mengdan_zhang_Visual_Tracking_via_ECCV_2018_paper.pdf)]
177 |
178 | * **RTINet:** Yingjie Yao, Xiaohe Wu, Lei Zhang, Shiguang Shan, Wangmeng Zuo. "Joint Representation and Truncated Inference Learning for Correlation Filter based Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yingjie_Yao_Joint_Representation_and_ECCV_2018_paper.pdf)]
179 |
180 | * **Meta-Tracker:** Eunbyung Park, Alexander C. Berg. "Meta-Tracker: Fast and Robust Online Adaptation for Visual Object Trackers." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Eunbyung_Park_Meta-Tracker_Fast_and_ECCV_2018_paper.pdf)][[github](https://github.com/silverbottlep/meta_trackers)]
181 |
182 | * **DSLT:** Xiankai Lu, Chao Ma*, Bingbing Ni, Xiaokang Yang, Ian Reid, Ming-Hsuan Yang. "Deep Regression Tracking with Shrinkage Loss." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Xiankai_Lu_Deep_Regression_Tracking_ECCV_2018_paper.pdf)][[github](https://github.com/chaoma99/DSLT)]
183 |
184 | * **DRL-IS:** Liangliang Ren, Xin Yuan, Jiwen Lu, Ming Yang, Jie Zhou. "Deep Reinforcement Learning with Iterative Shift for Visual Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Liangliang_Ren_Deep_Reinforcement_Learning_ECCV_2018_paper.pdf)]
185 |
186 | * **RT-MDNet:** Ilchae Jung, Jeany Son, Mooyeol Baek, Bohyung Han. "Real-Time MDNet." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Ilchae_Jung_Real-Time_MDNet_ECCV_2018_paper.pdf)]
187 |
188 | * **ACT:** Boyu Chen, Dong Wang, Peixia Li, Huchuan Lu. "Real-time 'Actor-Critic' Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Boyu_Chen_Real-time_Actor-Critic_Tracking_ECCV_2018_paper.pdf)][[github](https://github.com/bychen515/ACT)]
189 |
190 | * **StructSiam:** Yunhua Zhang, Lijun Wang, Dong Wang, Mengyang Feng, Huchuan Lu, Jinqing Qi. "Structured Siamese Network for Real-Time Visual Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yunhua_Zhang_Structured_Siamese_Network_ECCV_2018_paper.pdf)]
191 |
192 | * **MemTrack:** Tianyu Yang, Antoni B. Chan. "Learning Dynamic Memory Networks for Object Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Tianyu_Yang_Learning_Dynamic_Memory_ECCV_2018_paper.pdf)]
193 |
194 | * **SiamFC-tri:** Xingping Dong, Jianbing Shen. "Triplet Loss in Siamese Network for Object Tracking." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Xingping_Dong_Triplet_Loss_with_ECCV_2018_paper.pdf)][[github](https://github.com/shenjianbing/TripletTracking)]
195 |
196 | * **OxUvA long-term dataset+benchmark:** Jack Valmadre, Luca Bertinetto, João F. Henriques, Ran Tao, Andrea Vedaldi, Arnold Smeulders, Philip Torr, Efstratios Gavves. "Long-term Tracking in the Wild: a Benchmark." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Efstratios_Gavves_Long-term_Tracking_in_ECCV_2018_paper.pdf)][[project](https://oxuva.github.io/long-term-tracking-benchmark/)]
197 |
198 | * **TrackingNet:** Matthias Müller, Adel Bibi, Silvio Giancola, Salman Al-Subaihi, Bernard Ghanem. "TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild." ECCV (2018). [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Matthias_Muller_TrackingNet_A_Large-Scale_ECCV_2018_paper.pdf)] [[project](http://tracking-net.org/)]
199 |
200 | ### CVPR2018
201 |
202 | * **VITAL:** Yibing Song, Chao Ma, Xiaohe Wu, Lijun Gong, Linchao Bao, Wangmeng Zuo, Chunhua Shen, Rynson Lau, and Ming-Hsuan Yang.
203 | "VITAL: VIsual Tracking via Adversarial Learning." CVPR (2018 **Spotlight**).
204 | [[project](https://ybsong00.github.io/cvpr18_tracking/index)]
205 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Song_VITAL_VIsual_Tracking_CVPR_2018_paper.pdf)]
206 | [[github](https://github.com/ybsong00/Vital_release)]
207 |
208 | * **LSART:** Chong Sun, Dong Wang, Huchuan Lu, Ming-Hsuan Yang.
209 | "Learning Spatial-Aware Regressions for Visual Tracking." CVPR (2018 **Spotlight**).
210 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Learning_Spatial-Aware_Regressions_CVPR_2018_paper.pdf)]
211 |
212 | * **SiamRPN:** Bo Li, Wei Wu, Zheng Zhu, Junjie Yan.
213 | "High Performance Visual Tracking with Siamese Region Proposal Network." CVPR (2018 **Spotlight**).
214 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_High_Performance_Visual_CVPR_2018_paper.pdf)]
215 |
216 | * **TRACA:** Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi.
217 | "Context-aware Deep Feature Compression for High-speed Visual Tracking." CVPR (2018).
218 | [[project](https://sites.google.com/site/jwchoivision/)]
219 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Choi_Context-Aware_Deep_Feature_CVPR_2018_paper.pdf)]
220 |
221 | * **RASNet:** Qiang Wang, Zhu Teng, Junliang Xing, Jin Gao, Weiming Hu, Stephen Maybank.
222 | "Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking." CVPR 2018.
223 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_Attentions_Residual_CVPR_2018_paper.pdf)]
224 |
225 | * **SA-Siam:** Anfeng He, Chong Luo, Xinmei Tian, Wenjun Zeng.
226 | "A Twofold Siamese Network for Real-Time Object Tracking." CVPR (2018).
227 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/He_A_Twofold_Siamese_CVPR_2018_paper.pdf)]
228 |
229 | * **STRCF:** Feng Li, Cheng Tian, Wangmeng Zuo, Lei Zhang, Ming-Hsuan Yang.
230 | "Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking." CVPR (2018).
231 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Learning_Spatial-Temporal_Regularized_CVPR_2018_paper.pdf)]
232 | [[github](https://github.com/lifeng9472/STRCF)]
233 |
234 | * **FlowTrack:** Zheng Zhu, Wei Wu, Wei Zou, Junjie Yan.
235 | "End-to-end Flow Correlation Tracking with Spatial-temporal Attention." CVPR (2018).
236 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhu_End-to-End_Flow_Correlation_CVPR_2018_paper.pdf)]
237 |
238 | * **DEDT:** Kourosh Meshgi, Shigeyuki Oba, Shin Ishii.
239 | "Efficient Diverse Ensemble for Discriminative Co-Tracking." CVPR (2018).
240 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Meshgi_Efficient_Diverse_Ensemble_CVPR_2018_paper.pdf)]
241 |
242 | * **SINT++:** Xiao Wang, Chenglong Li, Bin Luo, Jin Tang.
243 | "SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation." CVPR (2018).
244 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_SINT_Robust_Visual_CVPR_2018_paper.pdf)]
245 |
246 | * **DRT:** Chong Sun, Dong Wang, Huchuan Lu, Ming-Hsuan Yang.
247 | "Correlation Tracking via Joint Discrimination and Reliability Learning." CVPR (2018).
248 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sun_Correlation_Tracking_via_CVPR_2018_paper.pdf)]
249 |
250 | * **MCCT:** Ning Wang, Wengang Zhou, Qi Tian, Richang Hong, Meng Wang, Houqiang Li.
251 | "Multi-Cue Correlation Filters for Robust Visual Tracking." CVPR (2018).
252 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Multi-Cue_Correlation_Filters_CVPR_2018_paper.pdf)]
253 | [[github](https://github.com/594422814/MCCT)]
254 |
255 | * **MKCF:** Ming Tang, Bin Yu, Fan Zhang, Jinqiao Wang.
256 | "High-speed Tracking with Multi-kernel Correlation Filters." CVPR (2018).
257 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Tang_High-Speed_Tracking_With_CVPR_2018_paper.pdf)]
258 |
259 | * **HP:** Xingping Dong, Jianbing Shen, Wenguan Wang, Yu Liu, Ling Shao, and Fatih Porikli.
260 | "Hyperparameter Optimization for Tracking with Continuous Deep Q-Learning." CVPR (2018).
261 | [[paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Dong_Hyperparameter_Optimization_for_CVPR_2018_paper.pdf)]
262 |
--------------------------------------------------------------------------------
/notes/all_about_sot.md:
--------------------------------------------------------------------------------
1 | # All-About-SOT
2 |
3 | ## Paper & Code list
4 | - **[Visual Tracking Paper List](https://github.com/foolwood/benchmark_results)**
5 | - **[fork](https://github.com/bluoluo/Awesome-single-object-tracking)**
6 | - **[Long-term-Visual-Tracking](https://github.com/wangdongdut/Long-term-Visual-Tracking)**
7 |
8 | ### Recommendations
9 | - **[pysot](https://github.com/STVIR/pysot)**
10 | - **[pytracking](https://github.com/visionml/pytracking)**
11 | - **[TracKit](https://github.com/researchmm/TracKit)**
12 | - **[MMTracking](https://github.com/open-mmlab/mmtracking)**
13 |
14 | ### Other Codes
15 | - **[SiamTrackers](https://github.com/HonglinChu/SiamTrackers)**
16 | - **[pyCFTrackers](https://github.com/fengyang95/pyCFTrackers)**
17 |
18 |
19 | ## SOTA
20 | - **[Comparision](https://github.com/JudasDie/Comparision)**
21 | - **[Online-Visual-Tracking-SOTA](https://github.com/wangdongdut/Online-Visual-Tracking-SOTA)**
22 |
23 |
24 | ## Dataset
25 | - **[trackdat](https://github.com/jvlmdr/trackdat)**
26 | - **[SiamTrackers_dataset](https://github.com/HonglinChu/SiamTrackers#dataset)**
27 |
28 |
29 | ## Tools
30 | - **[pysot-toolkit](https://github.com/StrangerZhang/pysot-toolkit)**
31 | - **[got10k-toolkit](https://github.com/got-10k/toolkit)**
32 | - **[ResearchTools](https://github.com/JudasDie/ResearchTools)**
33 | - **[VisualTracking-Toolkit](https://github.com/foolwood/VisualTracking-Toolkit)**
34 | - **[visual_tracker_benchmark](https://github.com/HonglinChu/visual_tracker_benchmark)**
35 |
36 |
--------------------------------------------------------------------------------