├── LICENSE
├── README.md
└── imgs
    ├── arch.png
    └── timeline.png

/LICENSE:
--------------------------------------------------------------------------------
Apache License, Version 2.0, January 2004 (full text: http://www.apache.org/licenses/LICENSE-2.0)

/README.md:
--------------------------------------------------------------------------------
# Efficient-Multimodal-LLMs-Survey

> **[Efficient Multimodal Large Language Models: A Survey](https://arxiv.org/pdf/2405.10739v1)** [[arXiv](https://arxiv.org/pdf/2405.10739v1)]

> *Yizhang Jin<sup>1,2</sup>, Jian Li<sup>1</sup>, Yexin Liu<sup>3</sup>, Tianjun Gu<sup>4</sup>, Kai Wu<sup>1</sup>, Zhengkai Jiang<sup>1</sup>, Muyang He<sup>3</sup>, Bo Zhao<sup>3</sup>, Xin Tan<sup>4</sup>, Zhenye Gan<sup>1</sup>, Yabiao Wang<sup>1</sup>, Chengjie Wang<sup>1</sup>, Lizhuang Ma<sup>2</sup>*

> *<sup>1</sup>Tencent YouTu Lab, <sup>2</sup>SJTU, <sup>3</sup>BAAI, <sup>4</sup>ECNU*

**⚡ We will actively maintain this repository and incorporate new research as it emerges. If you have any questions, please contact swordli@tencent.com. We welcome collaboration on academic research and joint paper writing.**

```
@misc{jin2024efficient,
      title={Efficient Multimodal Large Language Models: A Survey},
      author={Yizhang Jin and Jian Li and Yexin Liu and Tianjun Gu and Kai Wu and Zhengkai Jiang and Muyang He and Bo Zhao and Xin Tan and Zhenye Gan and Yabiao Wang and Chengjie Wang and Lizhuang Ma},
      year={2024},
      eprint={2405.10739},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## 📌 What is This Survey About?

<p align="center">
  <img src="./imgs/timeline.png" alt="Timeline of representative efficient MLLMs" width="90%">
</p>

In the past year, Multimodal Large Language Models (MLLMs) have demonstrated remarkable performance in tasks such as visual question answering, visual understanding, and reasoning. However, their large model size and high training and inference costs have hindered the widespread adoption of MLLMs in academia and industry. Studying efficient and lightweight MLLMs therefore has enormous potential, especially in edge-computing scenarios. In this survey, we provide a comprehensive and systematic review of the current state of efficient MLLMs. Specifically, we summarize the timeline of representative efficient MLLMs, the research state of efficient structures and strategies, and their applications. Finally, we discuss the limitations of current efficient MLLM research and promising future directions.

<p align="center">
  <img src="./imgs/arch.png" alt="Architecture overview of efficient MLLMs" width="90%">
</p>

### Summary of Mainstream Efficient MLLMs

| Model | Vision Encoder | Resolution | Vision Encoder Parameter Size | LLM | LLM Parameter Size | Vision-LLM Projector | Timeline |
|-------|----------------|------------|-------------------------------|-----|--------------------|----------------------|----------|
| MobileVLM | CLIP ViT-L/14 | 336 | 0.3B | MobileLLaMA | 2.7B | LDP | 2023-12 |
| LLaVA-Phi | CLIP ViT-L/14 | 336 | 0.3B | Phi-2 | 2.7B | MLP | 2024-01 |
| Imp-v1 | SigLIP | 384 | 0.4B | Phi-2 | 2.7B | - | 2024-02 |
| TinyLLaVA | SigLIP-SO | 384 | 0.4B | Phi-2 | 2.7B | MLP | 2024-02 |
| Bunny | SigLIP-SO | 384 | 0.4B | Phi-2 | 2.7B | MLP | 2024-02 |
| MobileVLM-v2-3B | CLIP ViT-L/14 | 336 | 0.3B | MobileLLaMA | 2.7B | LDPv2 | 2024-02 |
| MoE-LLaVA-3.6B | CLIP-Large | 384 | - | Phi-2 | 2.7B | MLP | 2024-02 |
| Cobra | DINOv2, SigLIP-SO | 384 | 0.3B+0.4B | Mamba-2.8B-Zephyr | 2.8B | MLP | 2024-03 |
| Mini-Gemini | CLIP-Large | 336 | - | Gemma | 2B | MLP | 2024-03 |
| Vary-toy | CLIP | 224 | - | Qwen | 1.8B | - | 2024-01 |
| TinyGPT-V | EVA | 224/448 | - | Phi-2 | 2.7B | Q-Former | 2024-01 |
| SPHINX-Tiny | DINOv2, CLIP-ConvNeXt | 448 | - | TinyLlama | 1.1B | - | 2024-02 |
| ALLaVA-Longer | CLIP ViT-L/14 | 336 | 0.3B | Phi-2 | 2.7B | - | 2024-02 |
| MM1-3B-MoE-Chat | CLIP_DFN-ViT-H | 378 | - | - | 3B | C-Abstractor | 2024-03 |
| LLaVA-Gemma | DINOv2 | - | - | Gemma-2b-it | 2B | - | 2024-03 |
| Mipha-3B | SigLIP | 384 | - | Phi-2 | 2.7B | - | 2024-03 |
| VL-Mamba | SigLIP-SO | 384 | - | Mamba-2.8B-Slimpj | 2.8B | VSS-L2 | 2024-03 |
| MiniCPM-V 2.0 | SigLIP | - | 0.4B | MiniCPM | 2.7B | Perceiver Resampler | 2024-03 |
| DeepSeek-VL | SigLIP-L | 384 | 0.4B | DeepSeek-LLM | 1.3B | MLP | 2024-03 |
| KarmaVLM | SigLIP-SO | 384 | 0.4B | Qwen1.5 | 0.5B | - | 2024-02 |
| moondream2 | SigLIP | - | - | Phi-1.5 | 1.3B | - | 2024-03 |
| Bunny-v1.1-4B | SigLIP | 1152 | - | Phi-3-Mini-4K | 3.8B | - | 2024-02 |

## Efficient MLLMs

### Architecture
- MobileVLM: A fast, reproducible and strong vision language assistant for mobile devices. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.16886)]
- LLaVA-Phi: Efficient multi-modal assistant with small language model. arXiv, 2024 [[Paper](https://arxiv.org/abs/2401.02330)]
- Imp-v1: An empirical study of multimodal small language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2405.12107)]
- TinyLLaVA: A framework of small-scale large multimodal models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.14289)]
- (Bunny) Efficient multimodal learning from data-centric perspective. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.11530)]
- Gemini: A family of highly capable multimodal models. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.11805)]
- MobileVLM V2: Faster and stronger baseline for vision language model. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.03766)]
- MoE-LLaVA: Mixture of experts for large vision-language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2401.15947)]
- Cobra: Extending Mamba to multi-modal large language model for efficient inference. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.14520)]
- Mini-Gemini: Mining the potential of multi-modality vision language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.18814)]
- (Vary-toy) Small language model meets with reinforced vision vocabulary. arXiv, 2024 [[Paper](https://arxiv.org/abs/2401.12503)]
- TinyGPT-V: Efficient multimodal large language model via small backbones. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.16862)]
- SPHINX-X: Scaling data and parameters for a family of multi-modal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.05935)]
- ALLaVA: Harnessing GPT4V-synthesized data for a lite vision-language model. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.11684)]
- MM1: Methods, analysis & insights from multimodal LLM pre-training. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.09611)]
- LLaVA-Gemma: Accelerating multimodal foundation models with a compact language model. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.01331)]
- Mipha: A comprehensive overhaul of multimodal assistant with small language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.06199)]
- VL-Mamba: Exploring state space models for multimodal learning. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.13600)]
- MiniCPM-V 2.0: An efficient end-side MLLM with strong OCR and understanding capabilities. GitHub, 2024 [[Github](https://github.com/OpenBMB/MiniCPM-V)]
- DeepSeek-VL: Towards real-world vision-language understanding. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.05525)]
- KarmaVLM: A family of high efficiency and powerful visual language model. GitHub, 2024 [[Github](https://github.com/thomas-yanxin/KarmaVLM)]
- moondream: Tiny vision language model. GitHub, 2024 [[Github](https://github.com/vikhyat/moondream)]

#### Vision Encoder

##### Multiple Vision Encoders
- Broadening the visual encoding of vision-language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.07204)]
- Cobra: Extending Mamba to multi-modal large language model for efficient inference. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.14520)]
- SPHINX-X: Scaling data and parameters for a family of multi-modal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.05935)]

##### Lightweight Vision Encoder
- ViTamin: Designing scalable vision models in the vision-language era. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.02132)]

#### Vision-Language Projector

##### MLP-based
- Visual instruction tuning. arXiv, 2023 [[Paper](https://arxiv.org/abs/2304.08485)]
- Improved baselines with visual instruction tuning. arXiv, 2023 [[Paper](https://arxiv.org/abs/2310.03744)]
- TokenPacker: Efficient visual projector for multimodal LLM. arXiv, 2024 [[Paper](https://arxiv.org/abs/2407.02392)]
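
The MLP-based projectors above share one recipe: a small multi-layer perceptron maps frozen vision-encoder patch features into the LLM's token-embedding space. The snippet below is a minimal, illustrative PyTorch sketch of a LLaVA-1.5-style two-layer GELU projector; the class name and the dimensions (1024 for a CLIP ViT-L/14 encoder, 2560 for a ~2.7B LLM) are assumptions for illustration, not code from any listed paper.

```python
# Minimal sketch of an MLP vision-language projector (LLaVA-1.5 style).
# Class name and dimensions are illustrative assumptions, not from a specific repo.
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 2560):
        super().__init__()
        # Two linear layers with a GELU in between map each visual patch
        # token into the LLM embedding space.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) from the vision encoder
        # returns:        (batch, num_patches, llm_dim) visual "soft tokens"
        return self.proj(patch_features)

# Example: 576 ViT patch tokens projected into the LLM embedding space.
tokens = MLPProjector()(torch.randn(1, 576, 1024))
print(tokens.shape)  # torch.Size([1, 576, 2560])
```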

##### Attention-based
- Flamingo: A visual language model for few-shot learning. arXiv, 2022 [[Paper](https://arxiv.org/abs/2204.14198)]
- BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv, 2023 [[Paper](https://arxiv.org/abs/2301.12597)]
- Broadening the visual encoding of vision-language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.07204)]

##### CNN-based
- MobileVLM V2: Faster and stronger baseline for vision language model. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.03766)]
- MobileVLM: A fast, reproducible and strong vision language assistant for mobile devices. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.16886)]

##### Mamba-based
- VL-Mamba: Exploring state space models for multimodal learning. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.13600)]

##### Hybrid Structure
- Honeybee: Locality-enhanced projector for multimodal LLM. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.06742)]

#### Small Language Models
- LLaMA: Open and efficient foundation language models. arXiv, 2023 [[Paper](https://arxiv.org/abs/2302.13971)]
- Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. Website, 2023 [[Web](https://vicuna.lmsys.org)]
- Phi-2: The surprising power of small language models. Microsoft Research Blog, 2023
- Gemma: Open models based on Gemini research and technology. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.08295)]
- Phi-3 technical report: A highly capable language model locally on your phone. arXiv, 2024

#### Vision Token Compression

##### Multi-view Input
- LLaVA-UHD: An LMM perceiving any aspect ratio and high-resolution images. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.11703)]
- A pioneering large vision-language model handling resolutions from 336 pixels to 4K HD. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.06512)]

##### Token Processing
- LLaVA-UHD: An LMM perceiving any aspect ratio and high-resolution images. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.11703)]
- TextHawk: Exploring efficient fine-grained perception of multimodal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.09204)]
- TinyChart: Efficient chart understanding with visual token merging and program-of-thoughts learning. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.16635)]
- LLaVA-PruMerge: Adaptive token reduction for efficient large multimodal models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.15388)]
- MADTP: Multimodal alignment-guided dynamic token pruning for accelerating vision-language transformer. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.02991)]
- CrossGET: Cross-guided ensemble of tokens for accelerating vision-language transformers. ICML, 2024 [[Paper](https://arxiv.org/pdf/2305.17455)]
- Matryoshka Query Transformer for large vision-language models. arXiv, 2024 [[Paper](https://arxiv.org/pdf/2405.19315)]
- Dynamic-LLaVA: Efficient multimodal large language models via dynamic vision-language context sparsification. ICLR, 2025 [[Paper](https://arxiv.org/abs/2412.00876)]
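
The token-processing methods above all shrink the number of visual tokens fed to the LLM, since prefill and decoding cost grow with sequence length. Below is a simplified sketch of one common ingredient, importance-based pruning that keeps the top-k visual tokens by an attention score; the scoring rule, function name, and keep ratio are assumptions chosen for illustration, and the real methods (merging, cross-guided ensembles, dynamic sparsification) are more elaborate.

```python
# Simplified sketch of importance-based visual token pruning.
# Scoring by attention received from a summary (e.g. [CLS]) token is an
# assumed, generic criterion, not the exact rule of any paper listed above.
import torch

def prune_visual_tokens(visual_tokens: torch.Tensor,
                        cls_attention: torch.Tensor,
                        keep_ratio: float = 0.25) -> torch.Tensor:
    """
    visual_tokens: (batch, num_tokens, dim)  projected visual tokens
    cls_attention: (batch, num_tokens)       attention each token receives
    Returns the keep_ratio fraction of tokens with the highest scores,
    kept in their original spatial order.
    """
    batch, num_tokens, dim = visual_tokens.shape
    k = max(1, int(num_tokens * keep_ratio))
    topk = cls_attention.topk(k, dim=1).indices           # (batch, k)
    topk, _ = topk.sort(dim=1)                            # preserve token order
    gather_idx = topk.unsqueeze(-1).expand(-1, -1, dim)   # (batch, k, dim)
    return visual_tokens.gather(1, gather_idx)

# Example: keep 144 of 576 visual tokens per image.
kept = prune_visual_tokens(torch.randn(2, 576, 2560), torch.rand(2, 576))
print(kept.shape)  # torch.Size([2, 144, 2560])
```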

##### Multi-Scale Information Fusion
- Mini-Gemini: Mining the potential of multi-modality vision language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.18814)]
- When do we not need larger vision models? arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.13043)]

##### Vision Expert Agents
- Plug-and-play grounding of reasoning in multimodal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.19322)]
- MoVA: Adapting mixture of vision experts to multimodal context. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.13046)]

##### Video-Specific Methods
- Elysium: Exploring object-level perception in videos via MLLM. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.16558)]
- Extending video-language pretraining to N-modality by language-based semantic alignment. arXiv, 2023 [[Paper](https://arxiv.org/abs/2310.01852)]
- Video-LLaVA: Learning united visual representation by alignment before projection. arXiv, 2023 [[Paper](https://arxiv.org/abs/2311.10122)]

#### Efficient Structures

##### Mixture of Experts
- MoE-LLaVA: Mixture of experts for large vision-language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2401.15947)]
- MM1: Methods, analysis & insights from multimodal LLM pre-training. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.09611)]
- Mixtral of experts. arXiv, 2024 [[Paper](https://arxiv.org/abs/2401.04088)]
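
MoE-based MLLMs keep per-token compute low by activating only a few expert FFNs for each token. The sketch below is a generic, minimal top-k routed MoE layer of the kind these models build on; the layer sizes, expert count, top-k value, and class name are illustrative assumptions and do not reproduce any specific model listed above.

```python
# Generic sketch of a sparse top-k Mixture-of-Experts FFN layer.
# Sizes, expert count, k, and naming are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim: int = 2560, hidden: int = 6912,
                 num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim). Route each token to its top-k experts and
        # combine the expert outputs with renormalized router weights.
        gate = F.softmax(self.router(x), dim=-1)        # (tokens, experts)
        weights, idx = gate.topk(self.top_k, dim=-1)    # (tokens, k)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = idx[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

print(SparseMoE()(torch.randn(8, 2560)).shape)  # torch.Size([8, 2560])
```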

##### Mamba
- Cobra: Extending Mamba to multi-modal large language model for efficient inference. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.14520)]
- Mamba: Linear-time sequence modeling with selective state spaces. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.00752)]
- VL-Mamba: Exploring state space models for multimodal learning. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.13600)]

##### Inference Acceleration
- LOOK-M: Look-once optimization in KV cache for efficient multimodal long-context inference. EMNLP Findings, 2024 [[Paper](https://arxiv.org/pdf/2406.18139)] [[Code](https://github.com/SUSTechBruce/LOOK-M)]
- On speculative decoding for multimodal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.08856)]
- An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.06764)]
- Boosting multimodal large language models with visual tokens withdrawal for rapid inference. arXiv, 2024 [[Paper](https://arxiv.org/abs/2405.05803)]
- Dynamic-LLaVA: Efficient multimodal large language models via dynamic vision-language context sparsification. ICLR, 2025 [[Paper](https://arxiv.org/abs/2412.00876)]

### Training

#### Pre-Training

##### Which Part to Unfreeze
- TinyLLaVA: A framework of small-scale large multimodal models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.14289)]
- VILA: On pre-training for visual language models. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.07533)]
- ShareGPT4V: Improving large multi-modal models with better captions. arXiv, 2023 [[Paper](https://arxiv.org/abs/2311.12793)]

##### Multi-stage Pre-training
- What matters when building vision-language models? arXiv, 2024 [[Paper](https://arxiv.org/abs/2405.02246)]

#### Instruction Tuning

##### Efficient IT
- Cheap and quick: Efficient vision-language instruction tuning for large language models. NeurIPS, 2023 [[Paper](https://proceedings.neurips.cc/paper_files/paper/2023/file/5e84e4413268b713f0d4a1b23a9dae57-Paper-Conference.pdf)]
- HyperLLaVA: Dynamic visual and language expert tuning for multimodal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.13447)]

#### Diverse Training Steps
- SPHINX-X: Scaling data and parameters for a family of multi-modal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2402.05935)]
- Cobra: Extending Mamba to multi-modal large language model for efficient inference. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.14520)]
- TinyGPT-V: Efficient multimodal large language model via small backbones. arXiv, 2023 [[Paper](https://arxiv.org/abs/2312.16862)]

#### Parameter-Efficient Transfer Learning
- Not all attention is needed: Parameter and computation efficient transfer learning for multi-modal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2403.15226)]
- Memory-space visual prompting for efficient vision-language fine-tuning. arXiv, 2024 [[Paper](https://arxiv.org/abs/2405.05615)]

### Applications

#### Biomedical Analysis
- Training small multimodal models to bridge biomedical competency gap: A case study in radiology imaging. arXiv, 2024
- MoE-TinyMed: Mixture of experts for tiny medical large vision-language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.10237)]

#### Document Understanding
- TextHawk: Exploring efficient fine-grained perception of multimodal large language models. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.09204)]
- TinyChart: Efficient chart understanding with visual token merging and program-of-thoughts learning. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.16635)]
- Monkey: Image resolution and text label are important things for large multi-modal models. arXiv, 2023 [[Paper](https://arxiv.org/abs/2311.06607)]
- HRVDA: High-resolution visual document assistant. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.06918)]

#### Video Comprehension
- mPLUG-2: A modularized multi-modal foundation model across text, image and video. arXiv, 2023 [[Paper](https://arxiv.org/abs/2302.00402)]
- Video-LLaVA: Learning united visual representation by alignment before projection. arXiv, 2023 [[Paper](https://arxiv.org/abs/2311.10122)]
- MA-LMM: Memory-augmented large multimodal model for long-term video understanding. arXiv, 2024 [[Paper](https://arxiv.org/abs/2404.05726)]
- LLaMA-VID: An image is worth 2 tokens in large language models. arXiv, 2023 [[Paper](https://arxiv.org/abs/2311.17043)]

/imgs/arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swordlidev/Efficient-Multimodal-LLMs-Survey/fd7c4f89ed03bb96dbf8f44959dc8e5b5a03fa92/imgs/arch.png

/imgs/timeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/swordlidev/Efficient-Multimodal-LLMs-Survey/fd7c4f89ed03bb96dbf8f44959dc8e5b5a03fa92/imgs/timeline.png