├── assets
│   ├── teaser.jpg
│   ├── data_pipeline.jpg
│   ├── matanyone1vs2.jpg
│   └── matanyone2_logo.png
└── README.md
/assets/teaser.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pq-yang/MatAnyone2/HEAD/assets/teaser.jpg
--------------------------------------------------------------------------------
/assets/data_pipeline.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pq-yang/MatAnyone2/HEAD/assets/data_pipeline.jpg
--------------------------------------------------------------------------------
/assets/matanyone1vs2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pq-yang/MatAnyone2/HEAD/assets/matanyone1vs2.jpg
--------------------------------------------------------------------------------
/assets/matanyone2_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pq-yang/MatAnyone2/HEAD/assets/matanyone2_logo.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
![MatAnyone 2](assets/matanyone2_logo.png)

# MatAnyone 2: Scaling Video Matting via a Learned Quality Evaluator

¹S-Lab, Nanyang Technological University
²SenseTime Research, Singapore

†Project lead

MatAnyone 2 is a practical human video matting framework that preserves fine details by avoiding segmentation-like boundaries, while also showing enhanced robustness under challenging real-world conditions.

![Teaser](assets/teaser.jpg)

:movie_camera: For more visual results, check out our project page.

---

## 📮 Update
- [2025.12] This repo was created.

## 🔎 Overview
![MatAnyone 1 vs. MatAnyone 2](assets/matanyone1vs2.jpg)

## 🛠️ Data Pipeline
![Data pipeline](assets/data_pipeline.jpg)

## 📑 Citation

If you find our repo useful for your research, please consider citing our paper:

```bibtex
@InProceedings{yang2025matanyone2,
    title     = {{MatAnyone 2}: Scaling Video Matting via a Learned Quality Evaluator},
    author    = {Yang, Peiqing and Zhou, Shangchen and Hao, Kai and Tao, Qingyi},
    booktitle = {arXiv preprint arXiv:2512.11782},
    year      = {2025}
}
```

## 📧 Contact

If you have any questions, please feel free to reach us at `peiqingyang99@outlook.com`.

--------------------------------------------------------------------------------