# Hallucination in Large Foundation Models

This repository collects contemporary papers on hallucination in foundation models and will be updated as new work appears. We broadly categorize the papers into **four** major categories: text, image, video, and audio.

## Text

### LLMs
1. [SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models](https://arxiv.org/pdf/2303.08896.pdf)
2. [Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators](https://arxiv.org/pdf/2310.07289.pdf)
3. [HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models](https://arxiv.org/pdf/2305.11747.pdf)
4. [Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation](https://arxiv.org/pdf/2305.15852.pdf)
5. [PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions](https://arxiv.org/pdf/2305.14908.pdf)
6. [Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment](https://arxiv.org/pdf/2305.13669.pdf)
7. [How Language Model Hallucinations Can Snowball](https://arxiv.org/pdf/2305.13534.pdf)
8. [Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback](https://arxiv.org/pdf/2302.12813.pdf)
9. [The Internal State of an LLM Knows When It's Lying](https://arxiv.org/pdf/2304.13734.pdf)
10. [Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases](https://arxiv.org/pdf/2305.13269.pdf)
11. [HALO: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models](https://arxiv.org/pdf/2308.11764.pdf)
12. [A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation](https://arxiv.org/pdf/2307.03987.pdf)
13. [Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting](https://ieeexplore.ieee.org/abstract/document/10207581)
14. [Sources of Hallucination by Large Language Models on Inference Tasks](https://arxiv.org/pdf/2305.14552.pdf)
15. [Citation: A Key to Building Responsible and Accountable Large Language Models](https://arxiv.org/pdf/2307.02185.pdf)
16. [Zero-Resource Hallucination Prevention for Large Language Models](https://arxiv.org/pdf/2309.02654.pdf)
17. [RARR: Researching and Revising What Language Models Say, Using Language Models](https://aclanthology.org/2023.acl-long.910.pdf)

### Multilingual LLMs
1. [Hallucinations in Large Multilingual Translation Models](https://arxiv.org/pdf/2303.16104.pdf)

### Domain-specific LLMs
1. [Med-HALT: Medical Domain Hallucination Test for Large Language Models](https://arxiv.org/pdf/2307.15343.pdf)
2. [ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases](https://arxiv.org/pdf/2306.16092.pdf)

## Image
1. [Evaluating Object Hallucination in Large Vision-Language Models](https://arxiv.org/pdf/2305.10355.pdf)
2. [Detecting and Preventing Hallucinations in Large Vision Language Models](https://arxiv.org/pdf/2308.06394.pdf)
3. [Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training](https://arxiv.org/pdf/2210.07688.pdf)
4. [Hallucination Improves the Performance of Unsupervised Visual Representation Learning](https://arxiv.org/pdf/2307.12168.pdf)

## Video
1. [Let’s Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction](https://arxiv.org/pdf/2305.13903.pdf)
2. [Putting People in Their Place: Affordance-Aware Human Insertion into Scenes](https://openaccess.thecvf.com/content/CVPR2023/papers/Kulal_Putting_People_in_Their_Place_Affordance-Aware_Human_Insertion_Into_Scenes_CVPR_2023_paper.pdf)
3. [VideoChat: Chat-Centric Video Understanding](https://arxiv.org/pdf/2305.06355.pdf)

## Audio
1. [LP-MusicCaps: LLM-Based Pseudo Music Captioning](https://arxiv.org/pdf/2307.16372.pdf)
2. [Audio-Journey: Efficient Visual+LLM-aided Audio Encodec Diffusion](https://openreview.net/pdf?id=vzMXsTCdFB)