Knowledge Verification to Nip Hallucination in the Bud
------------------------------------------------------

_**Fanqi Wan†, Xinting Huang‡, Leyang Cui‡, Xiaojun Quan†, Wei Bi‡, Shuming Shi‡**_

_† Sun Yat-sen University, ‡ Tencent AI Lab_

## News

- **Jan 19, 2024:** 🔥 We're excited to announce that the KCA datasets for open-book tuning, discard tuning, and refusal tuning are now available on 🤗 [Huggingface Datasets](https://huggingface.co/datasets/Wanfq/KCA_data). The fine-tuned models are now available on 🤗 [Huggingface Models](https://huggingface.co/models?sort=trending&search=KCA). Happy exploring!

## Contents

- [Overview](#overview)
- [Data Release](#data-release)
- [Model Release](#model-release)
- [Knowledge Inconsistency Detection](#knowledge-inconsistency-detection)
- [Knowledge Inconsistency Calibration](#knowledge-inconsistency-calibration)
- [Evaluation](#evaluation)
- [Citation](#citation)

## Overview

In this study, we demonstrate the feasibility of mitigating hallucinations by verifying and minimizing the inconsistency between the external knowledge present in the alignment data and the intrinsic knowledge embedded within foundation LLMs.
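To make the idea concrete, here is a minimal, hypothetical sketch of the pipeline: probe the base model's intrinsic knowledge, flag alignment examples whose external knowledge conflicts with it, and calibrate flagged examples with one of the three strategies named above (open-book, discard, or refusal tuning). The function names, the `probe_model` stand-in, and the example schema are illustrative assumptions, not the actual KCA implementation, which queries a real foundation LLM.

```python
def probe_model(question):
    # Hypothetical stand-in for probing the foundation LLM's
    # intrinsic knowledge; a real implementation would query the model.
    known = {"capital of France": "Paris"}
    return known.get(question, "unknown")

def detect_inconsistency(example):
    """True if the example's external knowledge conflicts with the model."""
    return probe_model(example["probe"]) != example["reference"]

def calibrate(example, strategy="discard"):
    """Route an inconsistent example according to the tuning strategy."""
    if not detect_inconsistency(example):
        return example  # consistent: keep the example unchanged
    if strategy == "open-book":
        # prepend the external knowledge so the model can read it at tuning time
        return {**example,
                "instruction": example["knowledge"] + "\n" + example["instruction"]}
    if strategy == "discard":
        return None  # drop the inconsistent example from the tuning set
    if strategy == "refusal":
        return {**example, "response": "I don't know."}
    raise ValueError(f"unknown strategy: {strategy}")
```

Consistent examples pass through untouched; only examples where the base model's knowledge disagrees with the reference are rewritten or removed, which is what keeps the tuning data aligned with what the model already knows.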