1 | ## Awesome Vision-Language Models [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
2 | 
3 | 
4 | 
5 | This is the repository of **Vision-Language Models for Vision Tasks: A Survey**, a systematic survey of VLM studies in various visual recognition tasks, including image classification, object detection, semantic segmentation, etc. For details, please refer to:
6 | 
7 | **Vision-Language Models for Vision Tasks: A Survey** [[Paper](https://arxiv.org/abs/2304.00685)]
8 | 
9 | *IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024*
10 | 
11 | 🤩 Our paper has been selected for the **TPAMI Top 50 Popular Paper List**!
12 | 
13 | [![arXiv](https://img.shields.io/badge/arXiv-2304.00685-b31b1b.svg)](https://arxiv.org/abs/2304.00685)
14 | [![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://GitHub.com/Naereen/StrapDown.js/graphs/commit-activity)
15 | [![PR's Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat)](http://makeapullrequest.com)
16 | 
17 | 
18 | 
19 | *Feel free to open a pull request or contact us if you find any related papers that are not included here.*
20 | 
21 | The process to submit a pull request is as follows:
22 | - a. Fork the project into your own repository.
23 | - b. Add the Title, Paper link, Conference, and Project/Code link in `README.md` using the following format:
24 | ```
25 | |[Title](Paper Link)|Conference|[Code/Project](Code/Project link)|
26 | ```
27 | - c. Submit the pull request to this branch.
28 | 
29 | ## 🔥 News
30 | 
31 | ‼️ **We plan to update the arXiv version of our survey paper soon. If your paper is missing from this repository, feel free to contact us or open an issue!**
32 | 
33 | Last updated on 2025/3/24
34 | 
35 | #### VLMs and Synthetic Data
36 | 
37 | * [OpenReview 2025] A Survey on Bridging VLMs and Synthetic Data [[Paper](https://openreview.net/pdf?id=ThjDCZOljE)][[Code](https://github.com/mghiasvand1/Awesome-VLM-Synthetic-Data)]
38 | 
39 | #### VLM Pre-training Methods
40 | 
41 | * [NeurIPS 2024] PLIP: Language-Image Pre-training for Person Representation Learning [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/510ad3018bbdc5b6e3b10646e2e35771-Paper-Conference.pdf)][[Code](https://github.com/Zplusdragon/PLIP)]
42 | * [NeurIPS 2024] LoTLIP: Improving Language-Image Pre-training for Long Text Understanding [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/77828623211df05497ce3658300dafd9-Paper-Conference.pdf)][[Code](https://wuw2019.github.io/lot-lip)]
43 | * [NeurIPS 2024] Accelerating Pre-training of Multimodal LLMs via Chain-of-Sight [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/8a54a80ffc2834689ffdd0920202018e-Paper-Conference.pdf)][[Code](https://chain-of-sight.github.io/)]
44 | 
45 | #### VLM Transfer Learning Methods
46 | * [NeurIPS 2024] ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/4fd96b997454b5b02698595df70fccaf-Paper-Conference.pdf)][[Code](https://github.com/mrwu-mac/ControlMLLM)]
47 | * [NeurIPS 2024] Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/c1e1ad233411e25b54bb5df3a0576c2c-Paper-Conference.pdf)][[Code](https://lwpyh.github.io/ProMaC/)]
48 | * [NeurIPS 2024] Visual Fourier Prompt Tuning [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/0a0eba34ab2ff40ca2d2843324dcc4ab-Paper-Conference.pdf)][[Code](https://github.com/runtsang/VFPT)]
49 | * [NeurIPS 2024] Improving Visual Prompt Tuning by Gaussian Neighborhood Minimization for Long-Tailed Visual Recognition [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/bc667ac84ef58f2b5022da97a465cbab-Paper-Conference.pdf)][[Code](https://github.com/Keke921/GNM-PT)]
50 | * [NeurIPS 2024] Few-Shot Adversarial Prompt Learning on Vision-Language Models [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/05aedcaf4bc6e78a5e22b4cf9114c5e8-Paper-Conference.pdf)][[Code](https://github.com/lionel-w2/FAP)]
51 | * [NeurIPS 2024] Visual Prompt Tuning in Null Space for Continual Learning [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/0f06be0008bc568c88d76206aa17954f-Paper-Conference.pdf)][[Code](https://github.com/zugexiaodui/VPTinNSforCL)]
52 | * [NeurIPS 2024] IPO: Interpretable Prompt Optimization for Vision-Language Models [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/e52e4de8689a9955b6d3ff421d019387-Paper-Conference.pdf)]
53 | * [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning [[Paper](https://papers.nips.cc/paper_files/paper/2023/file/f0606b882692637835e8ac981089eccd-Paper-Conference.pdf)][[Code](https://github.com/AtsuMiyai/LoCoOp)]
54 | 
55 | 
56 | #### VLM Knowledge Distillation for Detection
57 | 
58 | * [NeurIPS 2023] Scaling Open-Vocabulary Object Detection [[Paper](https://papers.nips.cc/paper_files/paper/2023/file/e6d58fc68c0f3c36ae6e0e64478a69c0-Paper-Conference.pdf)][[Code](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit)]
59 | * [NeurIPS 2022] Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection [[Paper](https://papers.nips.cc/paper_files/paper/2022/file/dabf612543b97ea9c8f46d058d33cf74-Paper-Conference.pdf)][[Code](https://github.com/hanoonaR/object-centric-ovd)]
60 | * [NeurIPS 2023] CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection [[Paper](https://papers.nips.cc/paper_files/paper/2023/file/e10a6a906ef323efaf708f76cf3c1d1e-Paper-Conference.pdf)][[Code](https://github.com/CVMI-Lab/CoDet)]
61 | 
62 | #### VLM Knowledge Distillation for Segmentation
63 | 
64 | * [NeurIPS 2024] Relationship Prompt Learning is Enough for Open-Vocabulary Semantic Segmentation [[Paper](https://papers.nips.cc/paper_files/paper/2024/file/8773cdaf02c5af3528e05f1cee816129-Paper-Conference.pdf)]
65 | 
66 | #### VLM Knowledge Distillation for Other Vision Tasks
67 | 
68 | 
69 | 
70 | ## Abstract
71 | 
72 | Most visual recognition studies rely heavily on crowd-labelled data for training deep neural networks (DNNs), and they usually train a separate DNN for each visual recognition task, leading to a laborious and time-consuming visual recognition paradigm. To address these two challenges, Vision-Language Models (VLMs) have been intensively investigated recently; they learn rich vision-language correlations from web-scale image-text pairs that are almost infinitely available on the Internet, and they enable zero-shot predictions on various visual recognition tasks with a single VLM. This paper provides a systematic review of vision-language models for various visual recognition tasks, including: (1) the background that introduces the development of visual recognition paradigms; (2) the foundations of VLMs that summarize the widely adopted network architectures, pre-training objectives, and downstream tasks; (3) the widely adopted datasets in VLM pre-training and evaluation; (4) the review and categorization of existing VLM pre-training methods, VLM transfer learning methods, and VLM knowledge distillation methods; (5) the benchmarking, analysis, and discussion of the reviewed methods; (6) several research challenges and potential research directions that could be pursued in future VLM studies for visual recognition.
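As a concrete illustration of the zero-shot recognition workflow described in the abstract, the snippet below is a minimal sketch of CLIP-style zero-shot image classification using the open-source `openai/CLIP` package. It is not part of the survey itself; the checkpoint name, image path, and class names are illustrative placeholders.

```python
# Minimal sketch: zero-shot classification with a pre-trained VLM (CLIP).
# Assumes: pip install torch pillow git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # pre-trained vision-language model

# Class names become text prompts; "query.jpg" is a placeholder image path.
class_names = ["cat", "dog", "car"]
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    # Cosine similarity between the image embedding and each class-prompt embedding.
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print({c: float(p) for c, p in zip(class_names, probs[0])})
```

Swapping the class-name list repurposes the same pre-trained VLM for a different recognition task without retraining, which is the property that the transfer learning and knowledge distillation methods surveyed below build on.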
73 | 
74 | ## Citation
75 | If you find our work useful in your research, please consider citing:
76 | ```
77 | @article{zhang2024vision,
78 |   title={Vision-language models for vision tasks: A survey},
79 |   author={Zhang, Jingyi and Huang, Jiaxing and Jin, Sheng and Lu, Shijian},
80 |   journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
81 |   year={2024},
82 |   publisher={IEEE}
83 | }
84 | ```
85 | 
86 | ## Menu
87 | - [Datasets](#datasets)
88 |   - [Datasets for VLM Pre-training](#datasets-for-vlm-pre-training)
89 |   - [Datasets for VLM Evaluation](#datasets-for-vlm-evaluation)
90 | - [Vision-Language Pre-training Methods](#vision-language-pre-training-methods)
91 |   - [Pre-training with Contrastive Objective](#pre-training-with-contrastive-objective)
92 |   - [Pre-training with Generative Objective](#pre-training-with-generative-objective)
93 |   - [Pre-training with Alignment Objective](#pre-training-with-alignment-objective)
94 | - [Vision-Language Model Transfer Learning Methods](#vision-language-model-transfer-learning-methods)
95 |   - [Transfer with Prompt Tuning](#transfer-with-prompt-tuning)
96 |     - [Transfer with Text Prompt Tuning](#transfer-with-text-prompt-tuning)
97 |     - [Transfer with Visual Prompt Tuning](#transfer-with-visual-prompt-tuning)
98 |     - [Transfer with Text and Visual Prompt Tuning](#transfer-with-text-and-visual-prompt-tuning)
99 |   - [Transfer with Feature Adapter](#transfer-with-feature-adapter)
100 |   - [Transfer with Other Methods](#transfer-with-other-methods)
101 | - [Vision-Language Model Knowledge Distillation Methods](#vision-language-model-knowledge-distillation-methods)
102 |   - [Knowledge Distillation for Object Detection](#knowledge-distillation-for-object-detection)
103 |   - [Knowledge Distillation for Semantic Segmentation](#knowledge-distillation-for-semantic-segmentation)
104 | 
105 | ## Datasets
106 | 
107 | ### Datasets for VLM Pre-training
108 | 
109 | 
110 | | Dataset | Year | Num of Image-Text Pairs | Language | Project |
111 | |-----------------------------------------------------|:------:|:-------------------------------:|:----------------:|:------------:|
112 | |[SBU Caption](https://proceedings.neurips.cc/paper_files/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf)|2011|1M|English|[Project](https://www.cs.rice.edu/~vo9/sbucaptions/)|
113 | |[COCO Caption](https://arxiv.org/pdf/1504.00325v2.pdf)|2016|1.5M|English|[Project](https://github.com/tylin/coco-caption)|
114 | |[Yahoo Flickr Creative Commons 100 Million](https://arxiv.org/pdf/1503.01817v2.pdf)|2016|100M|English|[Project](http://projects.dfki.uni-kl.de/yfcc100m/)|
115 | |[Visual Genome](https://arxiv.org/pdf/1602.07332v1.pdf)|2017|5.4M|English|[Project](http://visualgenome.org/)|
116 | |[Conceptual Captions 
3M](https://aclanthology.org/P18-1238.pdf)|2018|3.3M|English|[Project](https://ai.google.com/research/ConceptualCaptions/)| 117 | |[Localized Narratives](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123500630.pdf)|2020|0.87M|English|[Project](https://google.github.io/localized-narratives/)| 118 | |[Conceptual 12M](https://openaccess.thecvf.com/content/CVPR2021/papers/Changpinyo_Conceptual_12M_Pushing_Web-Scale_Image-Text_Pre-Training_To_Recognize_Long-Tail_Visual_CVPR_2021_paper.pdf)|2021|12M|English|[Project](https://github.com/google-research-datasets/conceptual-12m)| 119 | |[Wikipedia-based Image Text](https://arxiv.org/pdf/2103.01913v2.pdf)|2021|37.6M|108 Languages|[Project](https://github.com/google-research-datasets/wit)| 120 | |[Red Caps](https://arxiv.org/pdf/2111.11431v1.pdf)|2021|12M|English|[Project](https://redcaps.xyz/)| 121 | |[LAION400M](https://arxiv.org/pdf/2111.02114v1.pdf)|2021|400M|English|[Project](https://laion.ai/blog/laion-400-open-dataset/)| 122 | |[LAION5B](https://arxiv.org/pdf/2210.08402.pdf)|2022|5B|Over 100 Languages|[Project](https://laion.ai/blog/laion-5b/)| 123 | |[WuKong](https://arxiv.org/pdf/2202.06767.pdf)|2022|100M|Chinese|[Project](https://wukong-dataset.github.io/wukong-dataset/)| 124 | |[CLIP](https://arxiv.org/pdf/2103.00020.pdf)|2021|400M|English|-| 125 | |[ALIGN](https://arxiv.org/pdf/2102.05918.pdf)|2021|1.8B|English|-| 126 | |[FILIP](https://arxiv.org/pdf/2111.07783.pdf)|2021|300M|English|-| 127 | |[WebLI](https://arxiv.org/pdf/2209.06794.pdf)|2022|12B|English|-| 128 | 129 | 130 | 131 | ### Datasets for VLM Evaluation 132 | 133 | #### Image Classification 134 | 135 | | Dataset | Year | Classes | Training | Testing |Evaluation Metric| Project| 136 | |-----------------------------------------------------|:------:|:-------:|:--------:|:-------:|:------:|:-----------:| 137 | |MNIST|1998|10|60,000|10,000|Accuracy|[Project](http://yann.lecun.com/exdb/mnist/)| 138 | |Caltech-101|2004|102|3,060|6,085|Mean Per Class|[Project](https://data.caltech.edu/records/mzrjq-6wc02)| 139 | |PASCAL VOC 2007|2007|20|5,011|4,952|11-point mAP|[Project](http://host.robots.ox.ac.uk/pascal/VOC/voc2007/)| 140 | |Oxford 102 Flowers|2008|102|2,040|6,149|Mean Per Class|[Project](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/)| 141 | |CIFAR-10|2009|10|50,000|10,000|Accuracy|[Project](https://www.cs.toronto.edu/~kriz/cifar.html)| 142 | |CIFAR-100|2009|100|50,000|10,000|Accuracy|[Project](https://www.cs.toronto.edu/~kriz/cifar.html)| 143 | |ImageNet-1k|2009|1000|1,281,167|50,000|Accuracy|[Project](https://www.image-net.org/)| 144 | |SUN397|2010|397|19,850|19,850|Accuracy|[Project](https://vision.princeton.edu/projects/2010/SUN/)| 145 | |SVHN|2011|10|73,257|26,032|Accuracy|[Project](http://ufldl.stanford.edu/housenumbers/)| 146 | |STL-10|2011|10|1,000|8,000|Accuracy|[Project](https://cs.stanford.edu/~acoates/stl10/)| 147 | |GTSRB|2011|43|26,640|12,630|Accuracy|[Project](https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign)| 148 | |KITTI Distance|2012|4|6,770|711|Accuracy|[Project](https://github.com/harshilpatel312/KITTI-distance-estimation)| 149 | |IIIT5k|2012|36|2,000|3,000|Accuracy|[Project](https://cvit.iiit.ac.in/research/projects/cvit-projects/the-iiit-5k-word-dataset)| 150 | |Oxford-IIIT PETS|2012|37|3,680|3,669|Mean Per Class|[Project](https://www.robots.ox.ac.uk/~vgg/data/pets/)| 151 | |Stanford Cars|2013|196|8,144|8,041|Accuracy|[Project](http://ai.stanford.edu/~jkrause/cars/car_dataset.html)| 152 | |FGVC 
Aircraft|2013|100|6,667|3,333|Mean Per Class|[Project](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/)| 153 | |Facial Emotion|2013|8|32,140|3,574|Accuracy|[Project](https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge/data)| 154 | |Rendered SST2|2013|2|7,792|1,821|Accuracy|[Project](https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md)| 155 | |Describable Textures|2014|47|3,760|1,880|Accuracy|[Project](https://www.robots.ox.ac.uk/~vgg/data/dtd/)| 156 | |Food-101|2014|101|75,750|25,250|Accuracy|[Project](https://www.kaggle.com/datasets/dansbecker/food-101)| 157 | |Birdsnap|2014|500|42,283|2,149|Accuracy|[Project](https://thomasberg.org/)| 158 | |RESISC45|2017|45|3,150|25,200|Accuracy|[Project](https://pan.baidu.com/s/1mifR6tU?_at_=1679281159364#list/path=%2F)| 159 | |CLEVR Counts|2017|8|2,000|500|Accuracy|[Project](https://cs.stanford.edu/people/jcjohns/clevr/)| 160 | |PatchCamelyon|2018|2|294,912|32,768|Accuracy|[Project](https://github.com/basveeling/pcam)| 161 | |EuroSAT|2019|10|10,000|5,000|Accuracy|[Project](https://github.com/phelber/eurosat)| 162 | |Hateful Memes|2020|2|8,500|500|ROC AUC|[Project](https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set/)| 163 | |Country211|2021|211|43,200|21,100|Accuracy|[Project](https://github.com/openai/CLIP/blob/main/data/country211.md)| 164 | 165 | #### Image-Text Retrieval 166 | 167 | | Dataset | Year | Classes | Training | Testing |Evaluation Metric| Project| 168 | |-----------------------------------------------------|:------:|:-------:|:--------:|:-------:|:------:|:-----------:| 169 | |Flickr30k|2014|-|31,783|-|Recall|[Project](https://shannon.cs.illinois.edu/DenotationGraph/) 170 | |COCO Caption|2015|-|82,783|5,000|Recall|[Project](https://github.com/tylin/coco-caption) 171 | 172 | 173 | #### Action Recognition 174 | 175 | | Dataset | Year | Classes | Training | Testing |Evaluation Metric| Project| 176 | |-----------------------------------------------------|:------:|:-------:|:--------:|:-------:|:------:|:-----------:| 177 | |UCF101|2012|101|9,537|1,794|Accuracy|[Project](https://www.crcv.ucf.edu/data/UCF101.php)| 178 | |Kinetics700|2019|700|494,801|31,669|Mean (top1, top5)|[Project](https://www.deepmind.com/open-source/kinetics)| 179 | |RareAct|2020|122|7,607|-|mWAP, mSAP|[Project](https://github.com/antoine77340/RareAct)| 180 | 181 | #### Object Detection 182 | 183 | | Dataset | Year | Classes | Training | Testing |Evaluation Metric| Project| 184 | |-----------------------------------------------------|:------:|:-------:|:--------:|:-------:|:------:|:-----------:| 185 | |COCO 2014 Detection|2014|80|83,000|41,000|Box mAP|[Project](https://www.kaggle.com/datasets/jeffaudi/coco-2014-dataset-for-yolov3)| 186 | |COCO 2017 Detection|2017|80|118,000|5,000|Box mAP|[Project](https://www.kaggle.com/datasets/awsaf49/coco-2017-dataset)| 187 | |LVIS|2019|1203|118,000|5,000|Box mAP|[Project](https://www.lvisdataset.org/)| 188 | |ODinW|2022|314|132,413|20,070|Box mAP|[Project](https://eval.ai/web/challenges/challenge-page/1839/overview)| 189 | 190 | #### Semantic Segmentation 191 | 192 | | Dataset | Year | Classes | Training | Testing |Evaluation Metric| Project| 193 | |-----------------------------------------------------|:------:|:-------:|:--------:|:-------:|:------:|:-----------:| 194 | |PASCAL VOC 2012|2012|20|1,464|1,449|mIoU|[Project](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/)| 195 | |PASCAL 
Content|2014|459|4,998|5,105|mIoU|[Project](https://www.cs.stanford.edu/~roozbeh/pascal-context/)| 196 | |Cityscapes|2016|19|2,975|500|mIoU|[Project](https://www.cityscapes-dataset.com/)| 197 | |ADE20k|2017|150|25,574|2,000|mIoU|[Project](https://groups.csail.mit.edu/vision/datasets/ADE20K/)| 198 | 199 | ## Vision-Language Pre-training Methods 200 | 201 | 202 | 203 | 204 | 205 | 206 | ### Pre-training with Contrastive Objective 207 | 208 | | Paper | Published in | Code/Project | 209 | |---------------------------------------------------|:-------------:|:------------:| 210 | |[CLIP: Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)|ICML 2021|[Code](https://github.com/openai/CLIP)| 211 | |[ALIGN: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/pdf/2102.05918.pdf)|ICML 2021|-| 212 | |[OTTER: Data Efficient Language-Supervised Zero-Shot Recognition with Optimal Transport Distillation](https://github.com/facebookresearch/OTTER)|arXiv 2021|[Code](https://github.com/facebookresearch/OTTER)| 213 | |[Florence: A New Foundation Model for Computer Vision](https://arxiv.org/abs/2111.11432)|arXiv 2021|-| 214 | |[RegionClip: Region-based Language-Image Pretraining](https://arxiv.org/abs/2112.09106)|arXiv 2021|[Code](https://github.com/microsoft/RegionCLIP)| 215 | |[DeCLIP: Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm](https://arxiv.org/abs/2110.05208)|ICLR 2022|[Code](https://github.com/Sense-GVT/DeCLIP)| 216 | |[FILIP: Fine-grained Interactive Language-Image Pre-Training](https://arxiv.org/abs/2111.07783)|ICLR 2022|-| 217 | |[KELIP: Large-scale Bilingual Language-Image Contrastive Learning](https://arxiv.org/abs/2203.14463)|ICLRW 2022|[Code](https://github.com/navervision/KELIP)| 218 | |[ZeroVL: Contrastive Vision-Language Pre-training with Limited Resources](https://arxiv.org/abs/2112.09331)|ECCV 2022|[Code](https://github.com/zerovl/ZeroVL)| 219 | |[SLIP: Self-supervision meets Language-Image Pre-training](https://arxiv.org/abs/2112.12750)|ECCV 2022|[Code](https://github.com/facebookresearch/SLIP)| 220 | |[UniCL: Unified Contrastive Learning in Image-Text-Label Space](https://arxiv.org/abs/2204.03610)|CVPR 2022|[Code](https://github.com/microsoft/UniCL)| 221 | |[LiT: Zero-Shot Transfer with Locked-image text Tuning](https://arxiv.org/abs/2111.07991)|CVPR 2022|[Code](https://google-research.github.io/vision_transformer/lit/)| 222 | |[GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094)|CVPR 2022|[Code](https://github.com/NVlabs/GroupViT)| 223 | |[PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining](https://arxiv.org/abs/2204.14095)|NeurIPS 2022|-| 224 | |[UniCLIP: Unified Framework for Contrastive Language-Image Pre-training](https://arxiv.org/abs/2209.13430)|NeurIPS 2022|-| 225 | |[K-LITE: Learning Transferable Visual Models with External Knowledge](https://arxiv.org/abs/2204.09222)|NeurIPS 2022|[Code](https://github.com/microsoft/klite)| 226 | |[FIBER: Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone](https://arxiv.org/abs/2206.07643)|NeurIPS 2022|[Code](https://github.com/microsoft/FIBER)| 227 | |[Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335)|arXiv 2022|[Code](https://github.com/OFA-Sys/Chinese-CLIP)| 228 | |[AltCLIP: Altering the Language Encoder in CLIP for Extended Language 
Capabilities](https://arxiv.org/abs/2211.06679)|arXiv 2022|[Code](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP)| 229 | |[SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation](https://arxiv.org/abs/2211.14813)|arXiv 2022|[Code](https://github.com/ArrowLuo/SegCLIP)| 230 | |[NLIP: Noise-robust Language-Image Pre-training](https://arxiv.org/abs/2212.07086)|AAAI 2023|-| 231 | |[PaLI: A Jointly-Scaled Multilingual Language-Image Model](https://arxiv.org/abs/2209.06794)|ICLR 2023|[Project](https://ai.googleblog.com/2022/09/pali-scaling-language-image-learning-in.html)| 232 | |[HiCLIP: Contrastive Language-Image Pretraining with Hierarchy-aware Attention](https://arxiv.org/abs/2303.02995)|ICLR 2023|[Code](https://github.com/jeykigung/hiclip)| 233 | |[CLIPPO: Image-and-Language Understanding from Pixels Only](https://arxiv.org/abs/2212.08045)|CVPR 2023|[Code](https://github.com/google-research/big_vision)| 234 | |[RA-CLIP: Retrieval Augmented Contrastive Language-Image Pre-training](https://openaccess.thecvf.com/content/CVPR2023/papers/Xie_RA-CLIP_Retrieval_Augmented_Contrastive_Language-Image_Pre-Training_CVPR_2023_paper.pdf)|CVPR 2023|-| 235 | |[DeAR: Debiasing Vision-Language Models with Additive Residuals](https://arxiv.org/abs/2303.10431)|CVPR 2023|-| 236 | |[Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training](https://arxiv.org/abs/2301.02280)|CVPR 2023|[Code](https://github.com/facebookresearch/diht)| 237 | |[LaCLIP: Improving CLIP Training with Language Rewrites](https://arxiv.org/abs/2305.20088)|NeurIPS 2023|[Code](https://github.com/LijieFan/LaCLIP)| 238 | |[ALIP: Adaptive Language-Image Pre-training with Synthetic Caption](https://arxiv.org/pdf/2308.08428.pdf)|ICCV 2023|[Code](https://github.com/deepglint/ALIP)| 239 | |[GrowCLIP: Data-aware Automatic Model Growing for Large-scale Contrastive Language-Image Pre-training](https://arxiv.org/pdf/2308.11331v1.pdf)|ICCV 2023|-| 240 | |[CLIPpy: Perceptual Grouping in Contrastive Vision-Language Models](https://arxiv.org/abs/2210.09996)|ICCV 2023|-| 241 | |[ViTamin: Designing Scalable Vision Models in the Vision-Language Era](https://arxiv.org/abs/2404.02132v1)|CVPR 2024|[Code](https://github.com/Beckschen/ViTamin)| 242 | |[Iterated Learning Improves Compositionality in Large Vision-Language Models](https://arxiv.org/abs/2404.02145v1)|CVPR 2024|-| 243 | |[FairCLIP: Harnessing Fairness in Vision-Language Learning](https://arxiv.org/abs/2403.19949v1)|CVPR 2024|[Code](https://ophai.hms.harvard.edu/datasets/fairvlmed10k)| 244 | |[Retrieval-Enhanced Contrastive Vision-Text Models](https://arxiv.org/abs/2306.07196)|ICLR 2024|-| 245 | |[CLIPS: An Enhanced CLIP Framework for Learning with Synthetic Captions](https://arxiv.org/abs/2411.16828)]|arXiv 2024|[Code](https://github.com/UCSC-VLAA/CLIPS)| 246 | |[Sigmoid Loss for Language Image Pre-Training](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhai_Sigmoid_Loss_for_Language_Image_Pre-Training_ICCV_2023_paper.pdf)|CVPR 2023|[Code](https://github.com/google-research/big_vision)| 247 | 248 | 249 | 250 | 251 | 252 | 253 | 254 | 255 | 256 | 257 | 258 | ### Pre-training with Generative Objective 259 | 260 | | Paper | Published in | Code/Project | 261 | |---------------------------------------------------|:-------------:|:------------:| 262 | |[FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482)|CVPR 
2022|[Code](https://github.com/facebookresearch/multimodal/tree/main/examples/flava)| 263 | |[CoCa: Contrastive Captioners are Image-Text Foundation Models](https://arxiv.org/abs/2205.01917)|arXiv 2022|[Code](https://github.com/lucidrains/CoCa-pytorch)| 264 | |[Too Large; Data Reduction for Vision-Language Pre-Training](https://arxiv.org/abs/2305.20087)|arXiv 2023|[Code](https://github.com/showlab/data-centric.vlp)| 265 | |[SAM: Segment Anything](https://arxiv.org/abs/2304.02643)|arXiv 2023|[Code](https://github.com/facebookresearch/segment-anything)| 266 | |[SEEM: Segment Everything Everywhere All at Once](https://arxiv.org/pdf/2304.06718.pdf)|arXiv 2023|[Code](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)| 267 | |[Semantic-SAM: Segment and Recognize Anything at Any Granularity](https://arxiv.org/pdf/2307.04767.pdf)|arXiv 2023|[Code](https://github.com/UX-Decoder/Semantic-SAM)| 268 | |[Generative Region-Language Pretraining for Open-Ended Object Detection](https://arxiv.org/pdf/2403.10191v1.pdf)|CVPR 2024|[Code](https://github.com/FoundationVision/GenerateU)| 269 | |[InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks](https://arxiv.org/abs/2312.14238)|CVPR 2024|[Code](https://github.com/OpenGVLab/InternVL)| 270 | |[VILA: On Pre-training for Visual Language Models](https://arxiv.org/abs/2312.07533)|CVPR 2024|-| 271 | |[Enhancing Vision-Language Pre-training with Rich Supervisions](https://arxiv.org/pdf/2403.03346v1.pdf)|CVPR 2024|-| 272 | |[Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization](https://arxiv.org/abs/2309.04669)|ICLR 2024|[Code](https://github.com/jy0205/LaVIT)| 273 | |[MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning](https://arxiv.org/abs/2309.07915)|ICLR 2024|[Code](https://github.com/PKUnlp-icler/MIC)| 274 | |[RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness](https://arxiv.org/abs/2405.17220)|arXiv 2024|[Code](https://github.com/RLHF-V/RLAIF-V)| 275 | |[RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback](https://openaccess.thecvf.com/content/CVPR2024/papers/Yu_RLHF-V_Towards_Trustworthy_MLLMs_via_Behavior_Alignment_from_Fine-grained_Correctional_CVPR_2024_paper.pdf)|CVPR 2024|[Code](https://github.com/RLHF-V/RLHF-V)| 276 | |[Efficient Vision-Language Pre-training by Cluster Masking](https://arxiv.org/pdf/2405.08815)|CVPR 2024|[Code](https://github.com/Zi-hao-Wei/Efficient-Vision-Language-Pre-training-by-Cluster-Masking)| 277 | 278 | 279 | 280 | 281 | ### Pre-training with Alignment Objective 282 | 283 | | Paper | Published in | Code/Project | 284 | |---------------------------------------------------|:-------------:|:------------:| 285 | |[GLIP: Grounded Language-Image Pre-training](https://arxiv.org/abs/2112.03857)|CVPR 2022|[Code](https://github.com/microsoft/GLIP)| 286 | |[DetCLIP: Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection](https://arxiv.org/abs/2209.09407)|NeurIPS 2022|-| 287 | |[nCLIP: Non-Contrastive Learning Meets Language-Image Pre-Training](https://arxiv.org/abs/2210.09304)|CVPR 2023|[Code](https://github.com/shallowtoil/xclip)| 288 | |[Do Vision and Language Encoders Represent the World Similarly?](https://openaccess.thecvf.com/content/CVPR2024/papers/Maniparambil_Do_Vision_and_Language_Encoders_Represent_the_World_Similarly_CVPR_2024_paper.pdf)|CVPR 
2024|[Code](https://github.com/mayug/0-shot-llm-vision)| 289 | |[Non-autoregressive Sequence-to-Sequence Vision-Language Models](https://arxiv.org/abs/2403.02249v1)|CVPR 2024|-| 290 | |[MMRL: Multi-Modal Representation Learning for Vision-Language Models](https://arxiv.org/abs/2503.08497)|CVPR 2025|[Code](https://github.com/yunncheng/MMRL)| 291 | 292 | 293 | 294 | 295 | ## Vision-Language Model Transfer Learning Methods 296 | 297 | ### Transfer with Prompt Tuning 298 | 299 | #### Transfer with Text Prompt Tuning 300 | 301 | | Paper | Published in | Code/Project | 302 | |---------------------------------------------------|:-------------:|:------------:| 303 | |[CoOp: Learning to Prompt for Vision-Language Models](https://arxiv.org/abs/2109.01134)|IJCV 2022|[Code](https://github.com/KaiyangZhou/CoOp)| 304 | |[CoCoOp: Conditional Prompt Learning for Vision-Language Models](https://arxiv.org/abs/2203.05557)|CVPR 2022|[Code](https://github.com/KaiyangZhou/CoOp)| 305 | |[ProDA: Prompt Distribution Learning](https://arxiv.org/abs/2205.03340)|CVPR 2022|-| 306 | |[DenseClip: Language-Guided Dense Prediction with Context-Aware Prompting](https://arxiv.org/abs/2112.01518)|CVPR 2022|[Code](https://github.com/raoyongming/DenseCLIP)| 307 | |[TPT: Test-time prompt tuning for zero-shot generalization in vision-language models](https://arxiv.org/abs/2209.07511)|NeurIPS 2022|[Code](https://github.com/azshue/TPT)| 308 | |[DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations](https://arxiv.org/abs/2206.09541)|NeurIPS 2022|[Code](https://github.com/sunxm2357/DualCoOp)| 309 | |[CPL: Counterfactual Prompt Learning for Vision and Language Models](https://arxiv.org/abs/2210.10362)|EMNLP 2022|[Code](https://github.com/eric-ai-lab/CPL)| 310 | |[Bayesian Prompt Learning for Image-Language Model Generalization](https://arxiv.org/abs/2210.02390v2)|arXiv 2022|-| 311 | |[UPL: Unsupervised Prompt Learning for Vision-Language Models](https://arxiv.org/abs/2204.03649)|arXiv 2022|[Code](https://github.com/tonyhuang2022/UPL)| 312 | |[ProGrad: Prompt-aligned Gradient for Prompt Tuning](https://arxiv.org/abs/2205.14865)|arXiv 2022|[Code](https://github.com/BeierZhu/Prompt-align)| 313 | |[SoftCPT: Prompt Tuning with Soft Context Sharing for Vision-Language Models](https://arxiv.org/abs/2208.13474)|arXiv 2022|[Code](https://github.com/kding1225/softcpt)| 314 | |[SubPT: Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models](https://arxiv.org/abs/2211.02219)|TCSVT 2023|[Code](https://github.com/machengcheng2016/Subspace-Prompt-Learning)| 315 | |[LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models](https://arxiv.org/abs/2210.01115)|CVPR 2023|[Code](https://www.adrianbulat.com/lasp)| 316 | |[LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition](https://arxiv.org/abs/2305.04536)|ACLW 2024|[Code](https://github.com/richard-peng-xia/LMPT)| 317 | |[Texts as Images in Prompt Tuning for Multi-Label Image Recognition](https://arxiv.org/abs/2211.12739)|CVPR 2023|[code](https://github.com/guozix/TaI-DPT) 318 | |[Visual-Language Prompt Tuning with Knowledge-guided Context Optimization](https://arxiv.org/abs/2303.13283)|CVPR 2023|[Code](https://github.com/htyao89/KgCoOp)| 319 | |[Learning to Name Classes for Vision and Language Models](https://arxiv.org/abs/2304.01830v1)|CVPR 2023|-| 320 | |[PLOT: Prompt Learning with Optimal Transport for Vision-Language 
Models](https://arxiv.org/abs/2210.01253)|ICLR 2023|[Code](https://github.com/CHENGY12/PLOT)| 321 | |[CuPL: What does a platypus look like? Generating customized prompts for zero-shot image classification](https://arxiv.org/abs/2209.03320)|ICCV 2023|[Code](https://github.com/sarahpratt/CuPL)| 322 | |[ProTeCt: Prompt Tuning for Hierarchical Consistency](https://arxiv.org/abs/2306.02240)|arXiv 2023|-| 323 | |[Enhancing CLIP with CLIP: Exploring Pseudolabeling for Limited-Label Prompt Tuning](https://arxiv.org/abs/2306.01669)|arXiv 2023|[Code](http://github.com/BatsResearch/menghini-enhanceCLIPwithCLIP-code)| 324 | |[Why Is Prompt Tuning for Vision-Language Models Robust to Noisy Labels?](https://arxiv.org/pdf/2307.11978v1.pdf)|ICCV 2023|[Code](https://github.com/CEWu/PTNL)| 325 | |[Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models](https://arxiv.org/pdf/2303.06571.pdf)|ICCV 2023|-| 326 | |[Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models](https://arxiv.org/pdf/2308.11186v1.pdf)|ICCV 2023|-| 327 | |[Read-only Prompt Optimization for Vision-Language Few-shot Learning](https://arxiv.org/pdf/2308.14960.pdf)|ICCV 2023|[Code](https://github.com/mlvlab/RPO)| 328 | |[Bayesian Prompt Learning for Image-Language Model Generalization](https://arxiv.org/pdf/2210.02390.pdf)|ICCV 2023|[Code](https://github.com/saic-fi/Bayesian-Prompt-Learning)| 329 | |[Distribution-Aware Prompt Tuning for Vision-Language Models](https://arxiv.org/pdf/2309.03406.pdf)|ICCV 2023|[Code](https://github.com/mlvlab/DAPT)| 330 | |[LPT: Long-Tailed Prompt Tuning For Image Classification](https://arxiv.org/pdf/2210.01033.pdf)|ICCV 2023|[Code](https://github.com/DongSky/LPT)| 331 | |[Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning](https://openaccess.thecvf.com/content/ICCV2023/papers/Feng_Diverse_Data_Augmentation_with_Diffusions_for_Effective_Test-time_Prompt_Tuning_ICCV_2023_paper.pdf)|ICCV 2023|[Code](https://github.com/chunmeifeng/DiffTPT)| 332 | |[Efficient Test-Time Prompt Tuning for Vision-Language Models](https://arxiv.org/abs/2408.05775)|arXiv 2024|-| 333 | |[Text-driven Prompt Generation for Vision-Language Models in Federated Learning](https://arxiv.org/abs/2310.06123)|ICLR 2024|-| 334 | |[C-TPT: Calibrated Test-Time Prompt Tuning for Vision-Language Models via Text Feature Dispersion](https://openreview.net/pdf?id=jzzEHTBFOT)|ICLR 2024|-| 335 | |[Prompt Gradient Projection for Continual Learning](https://openreview.net/pdf?id=EH2O3h7sBI)|ICLR 2024|-| 336 | |[Nemesis: Normalizing the soft-prompt vectors of vision-language models](https://openreview.net/pdf?id=zmJDzPh1Dm)|ICLR 2024|[Code](https://github.com/ShyFoo/Nemesis)| 337 | |[DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning](https://arxiv.org/abs/2309.05173)|ICLR 2024|[Code](https://github.com/ZhengxiangShi/DePT)| 338 | |[TCP:Textual-based Class-aware Prompt tuning for Visual-Language Model](https://arxiv.org/abs/2311.18231)|CVPR 2024|[Code](https://github.com/htyao89/Textual-based_Class-aware_prompt_tuning/)| 339 | |[One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models](https://arxiv.org/abs/2403.01849v1)|CVPR 2024|[Code](https://github.com/TreeLLi/APT)| 340 | |[Any-Shift Prompting for Generalization over Distributions](https://arxiv.org/abs/2402.10099)|CVPR 2024|-| 341 | |[Towards Better Vision-Inspired Vision-Language Models](https://www.lamda.nju.edu.cn/caoyh/files/VIVL.pdf)|CVPR 2024|-| 342 | |[Mind the Interference: 
Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models](https://arxiv.org/pdf/2407.05342v1)|ECCV 2024|[Code](https://github.com/lloongx/DIKI)| 343 | |[Historical Test-time Prompt Tuning for Vision Foundation Models](https://arxiv.org/pdf/2410.20346)|NeurIPS 2024|–| 344 | 345 | 346 | 347 | 348 | 349 | 350 | 351 | 352 | 353 | 354 | 355 | 356 | 357 | 358 | 359 | 360 | 361 | 362 | #### Transfer with Visual Prompt Tuning 363 | 364 | | Paper | Published in | Code/Project | 365 | |---------------------------------------------------|:-------------:|:------------:| 366 | |[Exploring Visual Prompts for Adapting Large-Scale Models](https://arxiv.org/abs/2203.17274)|arXiv 2022|[Code](https://github.com/hjbahng/visual_prompting)| 367 | |[Retrieval-Enhanced Visual Prompt Learning for Few-shot Classification](https://arxiv.org/abs/2306.02243)|arXiv 2023|-| 368 | |[Fine-Grained Visual Prompting](https://arxiv.org/abs/2306.04356)|arXiv 2023|-| 369 | |[LoGoPrompt: Synthetic Text Images Can Be Good Visual Prompts for Vision-Language Models](https://arxiv.org/pdf/2309.01155v1.pdf)|ICCV 2023|[Code](https://chengshiest.github.io/logo/)| 370 | |[Progressive Visual Prompt Learning with Contrastive Feature Re-formation](https://arxiv.org/abs/2304.08386)|IJCV 2024|[Code](https://github.com/MCG-NJU/ProVP)| 371 | |[Visual In-Context Prompting](https://arxiv.org/abs/2311.13601)|CVPR 2024|[Code](https://github.com/UX-Decoder/DINOv)| 372 | |[FALIP: Visual Prompt as Foveal Attention Boosts CLIP Zero-Shot Performance](https://arxiv.org/abs/2407.05578v1)|ECCV 2024|[Code](https://pumpkin805.github.io/FALIP/)| 373 | 374 | 375 | 376 | 377 | #### Transfer with Text and Visual Prompt Tuning 378 | 379 | | Paper | Published in | Code/Project | 380 | |---------------------------------------------------|:-------------:|:------------:| 381 | |[UPT: Unified Vision and Language Prompt Learning](https://arxiv.org/abs/2210.07225)|arXiv 2022|[Code](https://github.com/yuhangzang/upt)| 382 | |[MVLPT: Multitask Vision-Language Prompt Tuning](https://arxiv.org/abs/2211.11720)|arXiv 2022|[Code](https://github.com/facebookresearch/vilbert-multi-task)| 383 | |[CAVPT: Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model](https://arxiv.org/abs/2208.08340)|arXiv 2022|[Code](https://github.com/fanrena/DPT)| 384 | |[MaPLe: Multi-modal Prompt Learning](https://arxiv.org/abs/2210.03117)|CVPR 2023|[Code](https://github.com/muzairkhattak/multimodal-prompt-learning)| 385 | |[Learning to Prompt Segment Anything Models](https://arxiv.org/pdf/2401.04651.pdf)|arXiv 2024|-| 386 | |[An Image Is Worth 1000 Lies: Transferability of Adversarial Images across Prompts on Vision-Language Models](https://openreview.net/pdf?id=nc5GgFAvtk)|ICLR 2024|-| 387 | |[GalLoP: Learning Global and Local Prompts for Vision-Language Models](https://arxiv.org/pdf/2407.01400)|ECCV 2024|-| 388 | |[CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts](https://arxiv.org/abs/2311.16445)|ECCV 2024|[Code](https://github.com/YichaoCai1/CLAP)| 389 | |[Learning to Prompt Segment Anything Models](https://arxiv.org/pdf/2401.04651.pdf)|arXiv 2024|–| 390 | 391 | 392 | 393 | 394 | 395 | ### Transfer with Feature Adapter 396 | 397 | | Paper | Published in | Code/Project | 398 | |---------------------------------------------------|:-------------:|:------------:| 399 | |[Clip-Adapter: Better Vision-Language Models with Feature Adapters](https://arxiv.org/abs/2110.04544)|arXiv 
2021|[Code](https://github.com/gaopengcuhk/CLIP-Adapter)| 400 | |[Tip-Adapte: Training-free Adaption of CLIP for Few-shot Classification](https://arxiv.org/abs/2207.09519)|ECCV 2022|[Code](https://github.com/gaopengcuhk/Tip-Adapter)| 401 | |[SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models](https://arxiv.org/abs/2210.03794)|BMVC 2022|[Code](https://github.com/omipan/svl_adapter)| 402 | |[CLIPPR: Improving Zero-Shot Models with Label Distribution Priors](https://arxiv.org/abs/2212.00784)|arXiv 2022|[Code](https://github.com/jonkahana/CLIPPR)| 403 | |[SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification](https://arxiv.org/abs/2211.16191)|arXiv 2022|-| 404 | |[SuS-X: Training-Free Name-Only Transfer of Vision-Language Models](https://arxiv.org/abs/2211.16198)|ICCV 2023|[Code](https://github.com/vishaal27/SuS-X)| 405 | |[VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control](https://arxiv.org/abs/2308.09804)|ICCV 2023|[Code](https://github.com/HenryHZY/VL-PET)| 406 | |[SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More](https://arxiv.org/abs/2304.09148)|arXiv 2023|[Code](http://tianrun-chen.github.io/SAM-Adaptor/)| 407 | |[Segment Anything in High Quality](https://arxiv.org/abs/2306.01567)|arXiv 2023|[Code](https://github.com/SysCV/SAM-HQ)| 408 | |[HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding](https://arxiv.org/abs/2311.14064)|COLING 2025|[Code](https://github.com/richard-peng-xia/HGCLIP)| 409 | |[CLAP: Contrastive Learning with Augmented Prompts for Robustness on Pretrained Vision-Language Models](https://arxiv.org/abs/2311.16445)|arXiv 2023|-| 410 | |[AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation](https://arxiv.org/abs/2407.04603)|NeurIPS 2024|[Code](https://github.com/MCG-NJU/AWT)| 411 | |[A Closer Look at the Few-Shot Adaptation of Large Vision-Language Models](https://arxiv.org/abs/2312.12730)|CVPR 2024|[Code](https://github.com/jusiro/CLAP)| 412 | |[Efficient Test-Time Adaptation of Vision-Language Models](https://arxiv.org/abs/2403.18293v1)|CVPR 2024|[Code](https://kdiaaa.github.io/tda/)| 413 | |[Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models](https://arxiv.org/abs/2403.17589v1)|CVPR 2024|[Code](https://github.com/YBZh/DMN)| 414 | 415 | 416 | 417 | ### Transfer with Other Methods 418 | 419 | | Paper | Published in | Code/Project | 420 | |---------------------------------------------------|:-------------:|:------------:| 421 | |[VT-Clip: Enhancing Vision-Language Models with Visual-guided Texts](https://arxiv.org/abs/2112.02399)|arXiv 2021|-| 422 | |[Wise-FT: Robust fine-tuning of zero-shot models](https://arxiv.org/abs/2109.01903)|CVPR 2022|[Code](https://github.com/mlfoundations/wise-ft)| 423 | |[MaskCLIP: Extract Free Dense Labels from CLIP](https://arxiv.org/abs/2112.01071)|ECCV 2022|[Code](https://github.com/chongzhou96/MaskCLIP)| 424 | |[MUST: Masked Unsupervised Self-training for Label-free Image Classification](https://arxiv.org/abs/2206.02967)|ICLR 2023| [Code](https://github.com/salesforce/MUST)| 425 | |[CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention](https://arxiv.org/abs/2209.14169)|AAAI 2023|[Code](https://github.com/ziyuguo99/calip)| 426 | |[Semantic Prompt for Few-Shot Image Recognition](https://arxiv.org/abs/2303.14123v1)|CVPR 2023|-| 427 | |[Prompt, Generate, then 
Cache: Cascade of Foundation Models makes Strong Few-shot Learners](https://arxiv.org/abs/2303.02151)|CVPR 2023|[Code](https://github.com/ZrrSkywalker/CaFo)| 428 | |[Task Residual for Tuning Vision-Language Models](https://arxiv.org/abs/2211.10277)|CVPR 2023|[Code](https://github.com/geekyutao/TaskRes)| 429 | |[Deeply Coupled Cross-Modal Prompt Learning](https://arxiv.org/abs/2305.17903)|ACL 2023|[Code](https://github.com/GingL/CMPA)| 430 | |[Prompt Ensemble Self-training for Open-Vocabulary Domain Adaptation](https://arxiv.org/abs/2306.16658)|arXiv 2023|-| 431 | |[Personalize Segment Anything Model with One Shot](https://arxiv.org/abs/2305.03048)|arXiv 2023|[Code](https://github.com/ZrrSkywalker/Personalize-SAM)| 432 | |[Chils: Zero-shot image classification with hierarchical label sets](https://proceedings.mlr.press/v202/novack23a/novack23a.pdf)|ICML 2023|[Code](https://github.com/acmi-lab/CHILS)| 433 | |[Improving Zero-shot Generalization and Robustness of Multi-modal Models](https://openaccess.thecvf.com/content/CVPR2023/papers/Ge_Improving_Zero-Shot_Generalization_and_Robustness_of_Multi-Modal_Models_CVPR_2023_paper.pdf)|CVPR 2023|[Code](https://github.com/gyhandy/Hierarchy-CLIP)| 434 | |[Exploiting Category Names for Few-Shot Classification with Vision-Language Models](https://openreview.net/pdf?id=w25Q9Ttjrs)|ICLR W 2023|-| 435 | |[Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models](https://arxiv.org/abs/2311.17091)|arXiv 2023|[Code](https://github.com/zhiheLu/Ensemble_VLM)| 436 | |[Regularized Mask Tuning: Uncovering Hidden Knowledge in Pre-trained Vision-Language Models](https://arxiv.org/pdf/2307.15049v1.pdf)|ICCV 2023|[Code](https://wuw2019.github.io/RMT/)| 437 | |[PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization](https://arxiv.org/pdf/2307.15199v1.pdf)|ICCV 2023|[Code](https://promptstyler.github.io/)| 438 | |[PADCLIP: Pseudo-labeling with Adaptive Debiasing in CLIP for Unsupervised Domain Adaptation](https://assets.amazon.science/ff/08/64f27eb54b82a0c59c95dc138af4/padclip-pseudo-labeling-with-adaptive-debiasing-in-clip.pdf)|ICCV 2023|-| 439 | |[Black Box Few-Shot Adaptation for Vision-Language models](https://arxiv.org/pdf/2304.01752.pdf)|ICCV 2023|[Code](https://github.com/saic-fi/LFA)| 440 | |[AD-CLIP: Adapting Domains in Prompt Space Using CLIP](https://arxiv.org/pdf/2308.05659.pdf)|ICCVW 2023|-| 441 | |[Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning](https://arxiv.org/pdf/2306.14565.pdf)|arXiv 2023|[Code](https://fuxiaoliu.github.io/LRV/)| 442 | |[Language Models as Black-Box Optimizers for Vision-Language Models](https://arxiv.org/abs/2309.05950)|arXiv 2023|-| 443 | |[Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching](https://arxiv.org/abs/2305.13310)|ICLR 2024|[Code](https://github.com/aim-uofa/Matcher)| 444 | |[Consistency-guided Prompt Learning for Vision-Language Models](https://arxiv.org/abs/2306.01195)|ICLR 2024|-| 445 | |[Efficient Test-Time Adaptation of Vision-Language Models](https://arxiv.org/abs/2403.18293v1)|CVPR 2024|[Code](https://kdiaaa.github.io/tda/)| 446 | |[Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models](https://arxiv.org/abs/2403.17589v1)|CVPR 2024|[Code](https://github.com/YBZh/DMN)| 447 | |[A Closer Look at the Few-Shot Adaptation of Large Vision-Language Models](https://arxiv.org/abs/2312.12730)|CVPR 2024|[Code](https://github.com/jusiro/CLAP)| 448 | |[Anchor-based Robust Finetuning of 
Vision-Language Models](https://arxiv.org/abs/2404.06244)|CVPR 2024|| 449 | |[Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners](https://arxiv.org/abs/2404.02117v1)|CVPR 2024|[Code](https://github.com/KHU-AGI/PriViLege)| 450 | 451 | 452 | 453 | 454 | 455 | 456 | 457 | 458 | 459 | 460 | 461 | 462 | 463 | ## Vision-Language Model Knowledge Distillation Methods 464 | 465 | ### Knowledge Distillation for Object Detection 466 | | Paper | Published in | Code/Project | 467 | |---------------------------------------------------|:-------------:|:------------:| 468 | |[ViLD: Open-vocabulary Object Detection via Vision and Language Knowledge Distillation](https://arxiv.org/abs/2104.13921)|ICLR 2022|[Code](https://github.com/tensorflow/tpu/tree/master/models/official/detection/projects/vild)| 469 | |[DetPro: Learning to Prompt for Open-Vocabulary Object Detection with Vision-Language Model](https://arxiv.org/abs/2203.14940)|CVPR 2022|[Code](https://github.com/dyabel/detpro)| 470 | |[XPM: Open-Vocabulary Instance Segmentation via Robust Cross-Modal Pseudo-Labeling](https://arxiv.org/abs/2111.12698)|CVPR 2022|[Code](https://github.com/hbdat/cvpr22_cross_modal_pseudo_labeling)| 471 | |[Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection](https://arxiv.org/abs/2207.03482)|NeurIPS 2022|[Code](https://github.com/hanoonaR/object-centric-ovd)| 472 | |[PromptDet: Towards Open-vocabulary Detection using Uncurated Images](https://arxiv.org/abs/2203.16513)|ECCV 2022|[Code](https://github.com/fcjian/PromptDet)| 473 | |[PB-OVD: Open Vocabulary Object Detection with Pseudo Bounding-Box Labels](https://arxiv.org/abs/2111.09452)|ECCV 2022|[Code](https://github.com/salesforce/PB-OVD)| 474 | |[OV-DETR: Open-Vocabulary DETR with Conditional Matching](https://arxiv.org/abs/2203.11876)|ECCV 2022|[Code](https://github.com/yuhangzang/OV-DETR)| 475 | |[Detic: Detecting Twenty-thousand Classes using Image-level Supervision](https://arxiv.org/abs/2201.02605)|ECCV 2022|[Code](https://github.com/facebookresearch/Detic)| 476 | |[OWL-ViT: Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230)|ECCV 2022|[Code](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit)| 477 | |[VL-PLM: Exploiting Unlabeled Data with Vision and Language Models for Object Detection](https://arxiv.org/abs/2207.08954)|ECCV 2022|[Code](https://github.com/xiaofeng94/VL-PLM)| 478 | |[ZSD-YOLO: Zero-shot Object Detection Through Vision-Language Embedding Alignment](https://arxiv.org/abs/2109.12066)|arXiv 2022|[Code](https://github.com/Johnathan-Xie/ZSD-YOLO)| 479 | |[HierKD: Open-Vocabulary One-Stage Detection with Hierarchical Visual-Language Knowledge Distillation](https://arxiv.org/abs/2203.10593)|arXiv 2022|[Code](https://github.com/mengqiDyangge/HierKD)| 480 | |[VLDet: Learning Object-Language Alignments for Open-Vocabulary Object Detection](https://arxiv.org/abs/2211.14843)|ICLR 2023|[Code](https://github.com/clin1223/VLDet)| 481 | |[F-VLM: Open-Vocabulary Object Detection upon Frozen Vision and Language Models](https://arxiv.org/abs/2209.15639)|ICLR 2023|[Code](https://github.com/google-research/google-research/tree/master/fvlm)| 482 | |[CondHead: Learning to Detect and Segment for Open Vocabulary Object Detection](https://arxiv.org/abs/2212.12130)|CVPR 2023|-| 483 | |[Aligning Bag of Regions for Open-Vocabulary Object Detection](https://arxiv.org/abs/2302.13996)|CVPR 2023|[Code](https://github.com/wusize/ovdet)| 484 | 
|[Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2305.07011v1)|CVPR 2023|[Code](https://github.com/mcahny/rovit)| 485 | |[Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection](https://arxiv.org/abs/2303.05892)|CVPR 2023|[Code](https://github.com/LutingWang/OADP)| 486 | |[CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting and Anchor Pre-Matching](https://arxiv.org/abs/2303.13076v1)|CVPR 2023|[Code](https://github.com/tgxs002/CORA)| 487 | |[DetCLIPv2: Scalable Open-Vocabulary Object Detection Pre-training via Word-Region Alignment](https://arxiv.org/abs/2304.04514v1)|CVPR 2023|-| 488 | |[Detecting Everything in the Open World: Towards Universal Object Detection](https://arxiv.org/abs/2303.11749)|CVPR 2023|[Code](https://github.com/zhenyuw16/UniDetector)| 489 | |[CapDet: Unifying Dense Captioning and Open-World Detection Pretraining](https://arxiv.org/abs/2303.02489)|CVPR 2023|-| 490 | |[Contextual Object Detection with Multimodal Large Language Models](https://arxiv.org/abs/2305.18279)|arXiv 2023|[Code](https://github.com/yuhangzang/ContextDET)| 491 | |[Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models](https://arxiv.org/abs/2305.17207)|arXiv 2023|[Code](https://github.com/gyhandy/One-Class-Anything)| 492 | |[EdaDet: Open-Vocabulary Object Detection Using Early Dense Alignment](https://arxiv.org/pdf/2309.01151v1.pdf)|ICCV 2023|[Code](https://chengshiest.github.io/edadet)| 493 | |[Improving Pseudo Labels for Open-Vocabulary Object Detection](https://arxiv.org/pdf/2308.06412.pdf)|arXiv 2023|-| 494 | |[RegionGPT: Towards Region Understanding Vision Language Model](https://arxiv.org/pdf/2403.02330v1.pdf)|CVPR 2024|[Code](https://guoqiushan.github.io/regiongpt.github.io/)| 495 | |[LLMs Meet VLMs: Boost Open Vocabulary Object Detection with Fine-grained Descriptors](https://arxiv.org/pdf/2402.04630.pdf)|ICLR 2024|-| 496 | |[Ins-DetCLIP: Aligning Detection Model to Follow Human-Language Instruction](https://openreview.net/pdf?id=M0MF4t3hE9)|ICLR 2024|-| 497 | |[Open-Vocabulary Object Detection via Language Hierarchy](https://arxiv.org/pdf/2410.20371)|NeurIPS 2024|–| 498 | 499 | 500 | 501 | 502 | 503 | ### Knowledge Distillation for Semantic Segmentation 504 | 505 | | Paper | Published in | Code/Project | 506 | |---------------------------------------------------|:-------------:|:------------:| 507 | |[SSIW: Semantic Segmentation In-the-Wild Without Seeing Any Segmentation Examples](https://arxiv.org/abs/2112.03185)|arXiv 2021|-| 508 | |[ReCo: Retrieve and Co-segment for Zero-shot Transfer](https://arxiv.org/abs/2206.07045)|NeurIPS 2022|[Code](https://github.com/NoelShin/reco)| 509 | |[CLIMS: Cross Language Image Matching for Weakly Supervised Semantic Segmentation](https://arxiv.org/abs/2203.02668)|CVPR 2022|[Code](https://github.com/CVI-SZU/CLIMS)| 510 | |[CLIPSeg: Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003)|CVPR 2022|[Code](https://github.com/timojl/clipseg)| 511 | |[ZegFormer: Decoupling Zero-Shot Semantic Segmentation](https://arxiv.org/abs/2112.07910)|CVPR 2022|[Code](https://github.com/dingjiansw101/ZegFormer)| 512 | |[LSeg: Language-driven Semantic Segmentation](https://arxiv.org/abs/2201.03546)|ICLR 2022|[Code](https://github.com/isl-org/lang-seg)| 513 | |[ZSSeg: A Simple Baseline for Open-Vocabulary Semantic Segmentation with Pre-trained Vision-language 
Model](https://arxiv.org/abs/2112.14757)|ECCV 2022|[Code](https://github.com/MendelXu/zsseg.baseline)| 514 | |[OpenSeg: Scaling Open-Vocabulary Image Segmentation with Image-Level Labels](https://arxiv.org/abs/2112.12143)|ECCV 2022|[Code](https://github.com/tensorflow/tpu/tree/641c1ac6e26ed788327b973582cbfa297d7d31e7/models/official/detection/projects/openseg)| 515 | |[Fusioner: Open-vocabulary Semantic Segmentation with Frozen Vision-Language Models](https://arxiv.org/abs/2210.15138)|BMVC 2022|[Code](https://github.com/chaofanma/Fusioner)| 516 | |[OVSeg: Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP](https://arxiv.org/abs/2210.04150)|CVPR 2023|[Code](https://github.com/facebookresearch/ov-seg)| 517 | |[ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation](https://arxiv.org/abs/2212.03588)|CVPR 2023|[Code](https://github.com/ZiqinZhou66/ZegCLIP)| 518 | |[CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation](https://arxiv.org/abs/2212.09506)|CVPR 2023|[Code](https://github.com/linyq2117/CLIP-ES)| 519 | |[FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation](https://arxiv.org/abs/2303.17225v1)|CVPR 2023|[Code](https://freeseg.github.io/)| 520 | |[Mask-free OVIS: Open-Vocabulary Instance Segmentation without Manual Mask Annotations](https://arxiv.org/abs/2303.16891v1)|CVPR 2023|[Code](https://vibashan.github.io/ovis-web/)| 521 | |[Exploring Open-Vocabulary Semantic Segmentation without Human Labels](https://arxiv.org/abs/2306.00450)|arXiv 2023|-| 522 | |[OpenVIS: Open-vocabulary Video Instance Segmentation](https://arxiv.org/abs/2305.16835)|arXiv 2023|-| 523 | |[Segment Anything is A Good Pseudo-label Generator for Weakly Supervised Semantic Segmentation](https://arxiv.org/abs/2305.01275)|arXiv 2023|-| 524 | |[Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation](https://arxiv.org/abs/2305.05803)|arXiv 2023|[Code](https://github.com/cskyl/SAM_WSSS)| 525 | |[Plug-and-Play, Dense-Label-Free Extraction of Open-Vocabulary Semantic Segmentation from Vision-Language Models](https://arxiv.org/abs/2311.17095)|arXiv 2023|-| 526 | |[SegPrompt: Boosting Open-World Segmentation via Category-level Prompt Learning](https://arxiv.org/pdf/2308.06531v1.pdf)|ICCV 2023|[Code](https://github.com/aim-uofa/SegPrompt)| 527 | |[ICPC: Instance-Conditioned Prompting with Contrastive Learning for Semantic Segmentation](https://arxiv.org/pdf/2308.07078.pdf)|arXiv 2023|-| 528 | |[Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP](https://arxiv.org/pdf/2308.02487.pdf)|arXiv 2023|[Code](https://github.com/bytedance/fc-clip)| 529 | |[Plug-and-Play, Dense-Label-Free Extraction of Open-Vocabulary Semantic Segmentation from Vision-Language Models](https://arxiv.org/abs/2311.17095)|arXiv 2023|-| 530 | |[CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction](https://arxiv.org/abs/2310.01403)|ICLR 2024|-| 531 | 532 | 533 | 534 | ### Knowledge Distillation for Other Tasks 535 | 536 | | Paper | Published in | Code/Project | 537 | |---------------------------------------------------|:-------------:|:------------:| 538 | |[Controlling Vision-Language Models for Universal Image Restoration](https://arxiv.org/abs/2310.01018)|arXiv 2023|[Code](https://github.com/Algolzw/daclip-uir)| 539 | |[FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition](https://arxiv.org/pdf/2402.03241.pdf)|ICLR 
2024|[Project](https://visual-ai.github.io/froster)|
540 | |[AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection](https://arxiv.org/pdf/2310.18961.pdf)|ICLR 2024|[Code](https://github.com/zqhang/AnomalyCLIP)|
541 | |[EXIF as Language: Learning Cross-Modal Associations Between Images and Camera Metadata](https://arxiv.org/abs/2301.04647)|CVPR 2023|[Code](https://hellomuffin.github.io/exif-as-language/)|
542 | 
543 | 
544 | 
545 | 