├── info
│   ├── merge_res.png
│   ├── bar_perf_mqa.png
│   ├── bar_perf_sqa.png
│   ├── LV_Performance.png
│   ├── cmrc_mixup_distribution_all.pdf
│   ├── cmrc_mixup_distribution_all.png
│   ├── distribution_evidence.md
│   ├── SOTA_model_performance.md
│   ├── detailed_annotation_information.md
│   └── error_cases_and_metric_cases.md
├── requirements.txt
├── batch_eval_gpt_single.sh
├── batch_eval_multiple.sh
├── batch_eval_single.sh
├── LICENSE
├── evaluation.py
├── prediction_gpt.py
├── config.py
├── utils.py
├── prediction.py
├── metrics.py
├── LICENSE_CC
├── README_ZH.md
└── README.md

/info/merge_res.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/infinigence/LVEval/HEAD/info/merge_res.png
--------------------------------------------------------------------------------
/info/bar_perf_mqa.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/infinigence/LVEval/HEAD/info/bar_perf_mqa.png
--------------------------------------------------------------------------------
/info/bar_perf_sqa.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/infinigence/LVEval/HEAD/info/bar_perf_sqa.png
--------------------------------------------------------------------------------
/info/LV_Performance.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/infinigence/LVEval/HEAD/info/LV_Performance.png
--------------------------------------------------------------------------------
/info/cmrc_mixup_distribution_all.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/infinigence/LVEval/HEAD/info/cmrc_mixup_distribution_all.pdf
--------------------------------------------------------------------------------
/info/cmrc_mixup_distribution_all.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/infinigence/LVEval/HEAD/info/cmrc_mixup_distribution_all.png
--------------------------------------------------------------------------------
/info/distribution_evidence.md:
--------------------------------------------------------------------------------
1 | For several representative datasets, we create histograms of the positional distribution of all samples across the context, using 10% intervals.
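A minimal sketch of how such a histogram can be produced, assuming each sample is a jsonl record whose (hypothetical) `evidence_start` field gives the character offset of the supporting passage inside `context`:

```python
import json
import matplotlib.pyplot as plt

def plot_position_histogram(jsonl_path, out_png):
    # Relative position (0-1) of the supporting passage in each context.
    positions = []
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            # `evidence_start` is a hypothetical field name used for
            # illustration, not part of the released data format.
            positions.append(sample["evidence_start"] / len(sample["context"]))
    # 10% intervals -> 10 equal-width bins over [0, 1].
    plt.hist(positions, bins=10, range=(0.0, 1.0), edgecolor="black")
    plt.xlabel("relative position in context")
    plt.ylabel("number of samples")
    plt.savefig(out_png)

plot_position_histogram("cmrc_mixup_16k.jsonl", "cmrc_mixup_distribution_16k.png")
```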
2 | 
3 | ![](cmrc_mixup_distribution_all.png)
4 | 
5 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | datasets
2 | pandas
3 | tqdm
4 | numpy==1.24.4
5 | openai==1.6.1
6 | zhipuai==2.0.1
7 | icetk==0.0.7
8 | rouge==1.0.1
9 | jieba==0.42.1
10 | accelerate==0.23.0
11 | flash-attn==2.3.2
12 | tokenizers==0.14.1
13 | tiktoken==0.5.1
14 | torch==2.1.1
15 | transformers==4.35.0
16 | 
--------------------------------------------------------------------------------
/batch_eval_gpt_single.sh:
--------------------------------------------------------------------------------
1 | model_name=$1
2 | model_max_len=$2
3 | timestamp=000000000000
4 | output_dir="outputs/$model_name/$model_max_len/$timestamp"
5 | 
6 | echo "output dir $output_dir"
7 | 
8 | cmd="python3 prediction_gpt.py --model-name $model_name --model-max-len $model_max_len --output-dir $output_dir"
9 | echo $cmd
10 | eval $cmd
11 | 
12 | cmd="python3 evaluation.py --input-dir $output_dir"
13 | echo $cmd
14 | eval $cmd
15 | 
--------------------------------------------------------------------------------
/batch_eval_multiple.sh:
--------------------------------------------------------------------------------
1 | model_path=$1
2 | model_name=$2
3 | model_max_len=$3
4 | timestamp=$(date "+%y%m%d%H%M%S")
5 | output_dir="outputs/$model_name/$model_max_len/$timestamp"
6 | 
7 | echo "output dir $output_dir"
8 | 
9 | cmd="python3 prediction.py --model-path $model_path --model-name $model_name --model-max-len $model_max_len --output-dir $output_dir"
10 | echo $cmd
11 | eval $cmd
12 | 
13 | cmd="python3 evaluation.py --input-dir $output_dir"
14 | echo $cmd
15 | eval $cmd
16 | 
--------------------------------------------------------------------------------
/batch_eval_single.sh:
--------------------------------------------------------------------------------
1 | model_path=$1
2 | model_name=$2
3 | model_max_len=$3
4 | timestamp=$(date "+%y%m%d%H%M%S")
5 | output_dir="outputs/$model_name/$model_max_len/$timestamp"
6 | 
7 | echo "output dir $output_dir"
8 | 
9 | cmd="python3 prediction.py --model-path $model_path --model-name $model_name --model-max-len $model_max_len --output-dir $output_dir --single-process"
10 | echo $cmd
11 | eval $cmd
12 | 
13 | cmd="python3 evaluation.py --input-dir $output_dir"
14 | echo $cmd
15 | eval $cmd
16 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2024 Infinigence AI
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/info/SOTA_model_performance.md:
--------------------------------------------------------------------------------
1 | For larger models (70B+), we have evaluated the open-sourced Qwen2-72B-Instruct and Meta-Llama-3.1-70B-Instruct, both with 128k context length.
2 | 
3 | During the testing of Qwen, we configured the YaRN long-text extension according to the official repository. However, the model failed to generate correct outputs during the evaluation, and similar unresolved issues have been reported in the official repository (https://huggingface.co/Qwen/Qwen2-72B-Instruct/discussions/17, https://github.com/QwenLM/Qwen2/issues/717). Therefore, we only present the valid results at the 16k length level.
4 | 
5 | ```json
6 | # results of Qwen2-72B-Instruct at 16k length level
7 | {
8 |     "cmrc_mixup_16k": 37.05,
9 |     "dureader_mixup_16k": 20.86,
10 |     "factrecall_en_16k": 4.95,
11 |     "factrecall_zh_16k": 12.7,
12 |     "hotpotwikiqa_mixup_16k": 31.98,
13 |     "lic_mixup_16k": 29.06,
14 |     "loogle_CR_mixup_16k": 14.97,
15 |     "loogle_MIR_mixup_16k": 16.61,
16 |     "loogle_SD_mixup_16k": 52.04,
17 |     "multifieldqa_en_mixup_16k": 25.4,
18 |     "multifieldqa_zh_mixup_16k": 32.8
19 | }
20 | ```
21 | 
22 | We have completed the evaluation of Meta-Llama-3.1-70B-Instruct at the 16k/32k/64k length levels and plot the results in the line graph below. Due to the significant computational cost of running a 70B long-context model, the evaluations at the 128k and 256k levels are still ongoing; the results will be updated upon completion.
23 | 
24 | For closed-source models with long-context capability, we present the results of Kimi (API moonshot-v1-128k). Due to the high cost of evaluating all five length levels (707.52M tokens, 42,451.2 RMB), we could only afford evaluation at the 16k and 32k length levels. We hope these partial results can provide a performance reference for SOTA long-context LLMs.
25 | 
26 | ![](LV_Performance.png)
27 | 
--------------------------------------------------------------------------------
/info/detailed_annotation_information.md:
--------------------------------------------------------------------------------
1 | - The information of annotators
2 | 
3 | We hired 5 annotators in total: 3 of them are master's students engaged in LLM research, and the other 2 are master's students in linguistics. All of them worked on-site as full-time annotators.
4 | 
5 | - Hours worked on annotation
6 | 
7 | On average, annotators worked 8 hours per day. Six datasets containing a total of 557 instances of CF were examined; the verification required 2 annotators and 3 days, and the task was assigned to the linguistics master's students to ensure semantic consistency. Six datasets containing KPR, comprising a total of 1,924 pairs, were processed: replacing key words and phrases in the Chinese datasets required 5 annotators and 3 days, while the English datasets required 2 annotators and 2 days. Nine datasets containing a total of 955 instances of AK were verified, requiring 2 annotators and 1 day to complete the process.
8 | 
9 | - Detailed annotation process
10 | 
11 | Firstly, we trained the five annotators and had them conduct a trial annotation according to the following annotation guidelines:
12 | - For CFI, there are three types of cases that need manual modification: a) The CF generated by GPT-4 did not achieve the intended interference effect. b) The CF generated by GPT-4 altered or added to the original facts, resulting in conflicts. c) The CF generated by GPT-4 did not replace the main subject, leading to ambiguity in the interpretation of the question.
13 | - For KPR, certain words or phrases in the answer need to be replaced; if they also appear in the question, they require corresponding replacements there. The detailed requirements are as follows: a) Ensure that the replaced words or phrases differ entirely in characters from the original ones. b) Maximize the difference between the new sentence and the original sentence by replacing as many elements as possible. c) Prefer replacing fixed terms, i.e., terms that remain consistent throughout the original text without the presence of synonyms. d) The replacement concept can be fabricated or inconsistent with common knowledge, as long as a relevant answer can be found within the context.
14 | 
--------------------------------------------------------------------------------
/evaluation.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import json
4 | import argparse
5 | import pandas as pd
6 | 
7 | 
8 | from config import DATASET_METRIC
9 | from utils import ensure_dir
10 | 
11 | 
12 | def parse_args(args=None):
13 |     parser = argparse.ArgumentParser()
14 |     parser.add_argument('--input-dir', type=str, default=None)
15 |     return parser.parse_args(args)
16 | 
17 | def custom_sort(s):
18 |     letters = re.findall(r'[a-zA-Z]+', s)
19 |     numbers = re.findall(r'\d+', s)
20 |     return (letters, int(numbers[0])) if numbers else (letters, 0)
21 | 
22 | def scorer(dataset, predictions, answers, gold_anss):
23 |     dataset_name = re.split('_.{1,3}k', dataset)[0]
24 |     total_score = 0.
25 |     total_sample = 0
26 |     scores = {DATASET_METRIC[dataset_name].__name__: []}
27 |     for (prediction, ground_truths, gold_ans) in zip(predictions, answers, gold_anss):
28 |         total_sample += 1
29 |         score = 0.
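        # Score the prediction against the reference answers and keep the
        # best value; note that the `break` below exits after the first
        # iteration, so only the first entry of `answers` is evaluated.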
30 |         for ground_truth in ground_truths:
31 |             score = max(score, DATASET_METRIC[dataset_name](prediction, ground_truth, gold_ans))
32 |             break
33 |         total_score += score
34 |         scores[DATASET_METRIC[dataset_name].__name__].append(score)
35 |     return round(100 * total_score / total_sample, 2), scores
36 | 
37 | if __name__ == '__main__':
38 |     args = parse_args()
39 |     path = args.input_dir.rstrip("/")
40 |     save_dir = f"{path}/eval_result/"
41 |     ensure_dir(save_dir)
42 | 
43 | 
44 |     all_files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
45 |     all_files.sort(key=custom_sort)
46 | 
47 |     all_results = dict()
48 |     all_scores = dict()
49 |     for filename in all_files:
50 |         if not filename.endswith("jsonl"):
51 |             continue
52 |         predictions, answers, gold_anss, datas = [], [], [], []
53 |         dataset = filename.split('.')[0]
54 |         dataset_name = re.split('_.{1,3}k', dataset)[0]
55 |         length = dataset.split('_')[-1]
56 |         with open(f"{path}/{filename}", "r", encoding="utf-8") as f:
57 |             for line in f:
58 |                 data = json.loads(line)
59 |                 datas.append(data)
60 |                 predictions.append(data["pred"])
61 |                 answers.append(data["answers"])
62 |                 gold_ans = data['gold_ans'] if 'gold_ans' in data else None
63 |                 gold_anss.append(gold_ans)
64 |         score_mean, metric_scores = scorer(dataset, predictions, answers, gold_anss)
65 |         all_scores[dataset] = score_mean
66 |         if dataset_name in all_results:
67 |             all_results[dataset_name].append({length: score_mean})
68 |         else:
69 |             all_results[dataset_name] = [{length: score_mean}]
70 | 
71 |     out_path = os.path.join(save_dir, "result.json")
72 |     with open(out_path, "w") as f:
73 |         json.dump(all_scores, f, ensure_ascii=False, indent=4)
74 | 
75 |     panda_list = []
76 |     for dataset_name, length_score_list in all_results.items():
77 |         lengths_scores = dict()
78 |         for item in length_score_list:
79 |             length, score = list(item.items())[0]
80 |             lengths_scores[length] = score
81 |         panda_dict = dict()
82 |         panda_dict["dataset_name"] = dataset_name
83 |         panda_dict.update(**lengths_scores)
84 |         panda_list.append(panda_dict)
85 |     dataframe = pd.DataFrame(panda_list)
86 |     print(dataframe, '\n')
87 |     out_path = os.path.join(save_dir, "result.csv")
88 |     dataframe.to_csv(out_path, index=False)
89 | 
--------------------------------------------------------------------------------
/prediction_gpt.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import tiktoken
4 | from icetk import icetk
5 | import argparse
6 | from tqdm import tqdm
7 | from openai import OpenAI
8 | from zhipuai import ZhipuAI
9 | 
10 | 
11 | from config import (
12 |     DATASET_MAXGEN,
13 |     DATASET_PROMPT,
14 |     DATASET_SELECTED,
15 |     DATASET_LENGTH_LEVEL,
16 | )
17 | from utils import (
18 |     ensure_dir,
19 |     seed_everything,
20 |     get_dataset_names,
21 |     post_process,
22 |     load_jsonl,
23 |     load_LVEval_dataset,
24 |     dump_preds_results_once,
25 | )
26 | 
27 | 
28 | 
29 | def get_pred(
30 |     model,
31 |     tokenizer,
32 |     data,
33 |     max_length,
34 |     max_gen,
35 |     prompt_format,
36 |     model_name,
37 |     save_path,
38 |     model_id,
39 | ):
40 |     preds = []
41 | 
42 |     existed_questions = [record['input'] for record in load_jsonl(save_path)]
43 | 
44 |     for json_obj in tqdm(data):
45 |         if json_obj['input'] in existed_questions:
46 |             print(f'pred already exists in {save_path}, skipping...')
47 |             continue
48 |         prompt = prompt_format.format(**json_obj)
49 |         # following LongBench, we truncate to fit max_length
50 |         tokenized_prompt = tokenizer.encode(prompt)
51 |         if len(tokenized_prompt) > max_length:
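            # Middle truncation: keep the first and last `half` tokens and
            # drop the middle, so the instruction at the head and the
            # question at the tail of the prompt both survive.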
52 |             half = int(max_length / 2)
53 |             prompt = tokenizer.decode(tokenized_prompt[:half]) + tokenizer.decode(tokenized_prompt[-half:])
54 | 
55 |         response = model.chat.completions.create(
56 |             model = model_id,
57 |             max_tokens = max_gen, # cap the generation length at the dataset's max_gen
58 |             messages = [
59 |                 {"role": "system", "content": "You are a helpful assistant."},
60 |                 {"role": "user", "content": prompt},
61 |             ]
62 |         )
63 |         pred = response.choices[0].message.content
64 |         pred = post_process(pred, model_name)
65 |         item = {
66 |             "pred": pred,
67 |             "answers": json_obj["answers"],
68 |             "gold_ans": json_obj["answer_keywords"] if "answer_keywords" in json_obj else None,
69 |             "input": json_obj["input"],
70 |             "all_classes": json_obj["all_classes"] if "all_classes" in json_obj else None,
71 |             "length": json_obj["length"],
72 |         }
73 |         dump_preds_results_once(item, save_path)
74 |         preds.append(item)
75 |     return preds
76 | 
77 | def single_processing(datasets, args):
78 |     model_id = args.model_name
79 |     if 'gpt' in model_id:
80 |         client = OpenAI()
81 |         tokenizer = tiktoken.encoding_for_model(model_id)
82 |     elif 'glm' in model_id:
83 |         client = ZhipuAI(api_key="************************")
84 |         tokenizer = icetk
85 | 
86 | 
87 |     for dataset in tqdm(datasets):
88 |         datas = load_LVEval_dataset(dataset, args.data_path)
89 |         dataset_name = re.split('_.{1,3}k', dataset)[0]
90 |         prompt_format = DATASET_PROMPT[dataset_name]
91 |         max_gen = DATASET_MAXGEN[dataset_name]
92 |         save_path = os.path.join(args.output_dir, dataset + ".jsonl")
93 |         preds = get_pred(
94 |             client,
95 |             tokenizer,
96 |             datas,
97 |             args.model_max_length,
98 |             max_gen,
99 |             prompt_format,
100 |             args.model_name,
101 |             save_path,
102 |             model_id,
103 |         )
104 | 
105 | def parse_args(args=None):
106 |     parser = argparse.ArgumentParser()
107 |     parser.add_argument("--model-name", type=str, default=None, required=True, choices=["gpt-4-0613", "gpt-3.5-turbo-1106", "gpt-4-1106-preview", "glm-4", "glm-3-turbo"])
108 |     parser.add_argument("--model-max-length", type=int, default=15500)
109 |     parser.add_argument("--data-path", type=str, default=None)
110 |     parser.add_argument("--output-dir", type=str, default="outputs")
111 |     return process_args(parser.parse_args(args))
112 | 
113 | def process_args(args):
114 |     return args
115 | 
116 | if __name__ == "__main__":
117 |     seed_everything(42)
118 |     args = parse_args()
119 |     ensure_dir(args.output_dir)
120 |     datasets = get_dataset_names(DATASET_SELECTED, DATASET_LENGTH_LEVEL)
121 |     single_processing(datasets, args)
122 | 
--------------------------------------------------------------------------------
/config.py:
--------------------------------------------------------------------------------
1 | from metrics import (
2 |     qa_f1_score,
3 |     qa_f1_score_with_gold_ans,
4 |     qa_f1_zh_score,
5 |     qa_f1_zh_score_with_gold_ans,
6 |     rouge_zh_score_blacklist,
7 | )
8 | 
9 | DATASET_MAXGEN = {
10 |     "hotpotwikiqa_mixup": 64,
11 |     "loogle_SD_mixup": 64,
12 |     "loogle_CR_mixup": 64,
13 |     "loogle_MIR_mixup": 64,
14 |     "multifieldqa_en_mixup": 64,
15 |     "multifieldqa_zh_mixup": 64,
16 |     "factrecall_en": 16,
17 |     "factrecall_zh": 16,
18 |     "cmrc_mixup": 64,
19 |     "lic_mixup": 64,
20 |     "dureader_mixup": 64,
21 | }
22 | 
23 | DATASET_PROMPT = {
24 |     "hotpotwikiqa_mixup": "Answer the question based on the given passages. Questions and answers are only relevant to some passages. Only give me the answer and do not output any other explanation and evidence.\n\nArticle: {context}\n\nPlease answer the following question based on the above passages.
Questions and answers are only relevant to some passages. Only give me the answer and do not output any other explanation and evidence.\n\nQuestion: {input}\nAnswer:", 25 | "loogle_SD_mixup": "Please answer the following question based on the given passages. Questions and answers are only relevant to one passage. Only give me the answer and do not output any other explanation and evidence.\n\nArticle: {context}\n\nPlease answer the following question based on the above passages. Questions and answers are only relevant to one passage. Only give me the answer and do not output any other explanation and evidence.\n\nQuestion: {input}\nAnswer:", 26 | "loogle_CR_mixup": "Please answer the following question based on the given passages. Questions and answers are only relevant to one passage. Only give me the answer and do not output any other explanation and evidence.\n\nArticle: {context}\n\nPlease answer the following question based on the above passages. Questions and answers are only relevant to one passage. Only give me the answer and do not output any other explanation and evidence.\n\nQuestion: {input}\nAnswer:", 27 | "loogle_MIR_mixup": "Please answer the following question based on the given passages. Questions and answers are only relevant to one passage. Only give me the answer and do not output any other explanation and evidence.\n\nArticle: {context}\n\nPlease answer the following question based on the above passages. Questions and answers are only relevant to one passage. Only give me the answer and do not output any other explanation and evidence.\n\nQuestion: {input}\nAnswer:", 28 | "multifieldqa_en_mixup": "Please answer the following question based on the given passages. Questions and answers are only relevant to one passage. Only give me the answer and do not output any other explanation and evidence.\n\nArticle: {context}\n\nPlease answer the following question based on the above passages. Questions and answers are only relevant to one passage. 
Only give me the answer and do not output any other explanation and evidence.\n\nQuestion: {input}\nAnswer:", 29 | "multifieldqa_zh_mixup": "请阅读以下文章并用中文回答问题,问题和答案只与其中一篇文章有关。只需要直接给出问题的答案,不要输出其他任何解释和证据。\n\n文章:{context}\n\n请基于上面的文章回答下面的问题,问题和答案只与其中一篇文章有关。只需要直接给出问题的答案,不要输出其他任何解释和证据。\n\n问题:{input}\n回答:", 30 | "factrecall_en": "Please answer the following questions based on the given article.\n\nArticle: {context}\n\nPlease answer the following questions based on the above article.\n\nQuestion: {input}\nAnswer:", 31 | "factrecall_zh": "请基于给定的文章回答下述问题。\n\n文章:{context}\n\n现在请基于上述文章回答下面的问题。\n\n问题:{input}\n回答:", 32 | "cmrc_mixup": "请根据下面给定的文章回答问题,问题和答案只与其中一篇文章有关。\n\n文章:{context}\n\n现在请基于上述文章回答下面的问题,问题和答案只与其中一篇文章有关。\n\n问题:{input}\n回答:", 33 | "lic_mixup": "请根据下面给定的文章回答问题,问题和答案只与其中一篇文章有关。\n\n文章:{context}\n\n请现在基于上述文章回答下面的问题,问题和答案只与其中一篇文章有关。\n\n问题:{input}\n回答:", 34 | "dureader_mixup": "请根据下面给定的文章回答问题,问题和答案只与其中一篇文章有关。\n\n文章:{context}\n\n现在请基于上述文章回答下面的问题,问题和答案只与其中一篇文章有关。\n\n问题:{input}\n回答:", 35 | } 36 | 37 | DATASET_METRIC = { 38 | "hotpotwikiqa_mixup": qa_f1_score_with_gold_ans, 39 | "loogle_SD_mixup": qa_f1_score_with_gold_ans, 40 | "loogle_CR_mixup": qa_f1_score_with_gold_ans, 41 | "loogle_MIR_mixup": qa_f1_score_with_gold_ans, 42 | "multifieldqa_en_mixup": qa_f1_score_with_gold_ans, 43 | "multifieldqa_zh_mixup": qa_f1_zh_score_with_gold_ans, 44 | "factrecall_en": qa_f1_score, 45 | "factrecall_zh": qa_f1_zh_score, 46 | "cmrc_mixup": qa_f1_zh_score_with_gold_ans, 47 | "lic_mixup": qa_f1_zh_score_with_gold_ans, 48 | "dureader_mixup": rouge_zh_score_blacklist, 49 | } 50 | 51 | DATASET_SELECTED = [ 52 | "hotpotwikiqa_mixup", 53 | "loogle_SD_mixup", 54 | "loogle_CR_mixup", 55 | "loogle_MIR_mixup", 56 | "multifieldqa_en_mixup", 57 | "multifieldqa_zh_mixup", 58 | "factrecall_en", 59 | "factrecall_zh", 60 | "cmrc_mixup", 61 | "lic_mixup", 62 | "dureader_mixup", 63 | ] 64 | 65 | DATASET_LENGTH_LEVEL = [ 66 | '16k', 67 | '32k', 68 | '64k', 69 | '128k', 70 | '256k', 71 | ] -------------------------------------------------------------------------------- /utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | import torch 4 | import random 5 | import numpy as np 6 | from datasets import load_dataset 7 | from transformers import ( 8 | AutoTokenizer, 9 | AutoModelForCausalLM, 10 | ) 11 | 12 | 13 | 14 | def ensure_dir(directory_path): 15 | if not os.path.exists(directory_path): 16 | os.makedirs(directory_path) 17 | 18 | def seed_everything(seed): 19 | torch.manual_seed(seed) 20 | torch.cuda.manual_seed(seed) 21 | np.random.seed(seed) 22 | random.seed(seed) 23 | torch.backends.cudnn.benchmark = False 24 | torch.backends.cudnn.deterministic = True 25 | torch.cuda.manual_seed_all(seed) 26 | 27 | def get_dataset_names(dataset_names, length_levels): 28 | datasets = [] 29 | for name in dataset_names: 30 | for length in length_levels: 31 | datasets.append(f"{name}_{length}") 32 | 33 | return datasets 34 | 35 | def dump_preds_results_once(pred, save_path): 36 | with open(save_path, "a+", encoding="utf-8") as f: 37 | json.dump(pred, f, ensure_ascii=False) 38 | f.write("\n") 39 | 40 | def dump_preds_results(preds, save_path): 41 | with open(save_path, "w", encoding="utf-8") as f: 42 | for pred in preds: 43 | json.dump(pred, f, ensure_ascii=False) 44 | f.write("\n") 45 | print(f"results saving >>>>>>>>> {save_path}") 46 | 47 | def load_jsonl(data_path): 48 | datas = [] 49 | if os.path.exists(data_path): 50 | f = open(data_path, 'r') 51 | for line in 
f.readlines():
52 |             datas.append(json.loads(line))
53 |     else:
54 |         print(f"not exists: {data_path}")
55 |     return datas
56 | 
57 | def load_LVEval_dataset(dataset_name, data_path=None):
58 |     print(f"loading dataset >>>>>>>>> {dataset_name}")
59 |     if data_path: # load from local path
60 |         datas = []
61 |         data_path = os.path.join(data_path, dataset_name) + ".jsonl"
62 |         datas = load_jsonl(data_path)
63 |         print(f"dataset path >>>>>>>>> {data_path}")
64 |     else: # load from huggingface
65 |         datas = load_dataset("infini-ai/LVEval", dataset_name, split='test', token=True)
66 |     return list(datas)
67 | 
68 | def load_model_and_tokenizer(model_path, device):
69 |     print(device)
70 |     try:
71 |         tokenizer = AutoTokenizer.from_pretrained(
72 |             model_path, trust_remote_code=True
73 |         )
74 |         model = AutoModelForCausalLM.from_pretrained(
75 |             model_path, device_map=device, trust_remote_code=True, torch_dtype=torch.bfloat16, use_flash_attention_2=True,
76 |         )
77 |     except Exception: # fall back to the standard attention implementation if flash-attn is unavailable
78 |         tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
79 |         model = AutoModelForCausalLM.from_pretrained(
80 |             model_path, device_map=device, trust_remote_code=True, torch_dtype=torch.bfloat16, use_flash_attention_2=False,
81 |         )
82 | 
83 |     model = model.eval()
84 |     return model, tokenizer
85 | 
86 | def load_model_and_tokenizer_once(id, model_path, device_dict=None, lock=None):
87 |     device = torch.device(f"cuda:{id}") if id != -1 else "auto"
88 |     print(f"using device {device}")
89 |     model, tokenizer = load_model_and_tokenizer(
90 |         model_path, device
91 |     )
92 |     if device_dict is None:
93 |         return model, tokenizer
94 |     if lock:
95 |         with lock:
96 |             device_dict[id] = (model, tokenizer)
97 |     else:
98 |         device_dict[id] = (model, tokenizer)
99 | 
100 | def model_generate(tokenizer, prompt, max_gen, model):
101 |     input = tokenizer(prompt, truncation=False, return_tensors="pt").to(model.device)
102 |     context_length = input.input_ids.shape[-1]
103 |     output = model.generate(
104 |         **input,
105 |         max_new_tokens=max_gen,
106 |         do_sample=False,
107 |     )[0]
108 |     pred = tokenizer.decode(output[context_length:], skip_special_tokens=True)
109 |     return pred
110 | 
111 | def truncate_prompt(tokenizer, prompt, max_length):
112 |     # following LongBench, we truncate middle content to fit max_length
113 |     tokenized_prompt = tokenizer(
114 |         prompt, truncation=False, return_tensors="pt"
115 |     ).input_ids[0]
116 |     if len(tokenized_prompt) > max_length:
117 |         half = int(max_length / 2)
118 |         prompt = tokenizer.decode(tokenized_prompt[:half], skip_special_tokens=True) + tokenizer.decode(tokenized_prompt[-half:], skip_special_tokens=True)
119 |     return prompt
120 | 
121 | def build_chat(tokenizer, prompt, model_name):
122 |     if "chatglm2" in model_name:
123 |         prompt = tokenizer.build_prompt(prompt)
124 |     elif "BlueLM" in model_name:
125 |         prompt = f"[|Human|]:{prompt}[|AI|]:"
126 |     elif "vicuna" in model_name or "sft" in model_name:
127 |         system_message = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."
128 |         prompt = f"{system_message} USER: {prompt} ASSISTANT:"
129 |     elif "llama2" in model_name or "Llama-2" in model_name or "LLaMA" in model_name:
130 |         prompt = f"[INST]{prompt}[/INST]\n\n"
131 |     elif "Mistral" in model_name:
132 |         prompt = f"[INST] {prompt} [/INST]"
133 |     elif "internlm" in model_name:
134 |         prompt = f"<|User|>:{prompt}\n<|Bot|>:"
135 |     else:
136 |         prompt = f"{prompt}"
137 |     return prompt
138 | 
139 | def post_process(response, model_name):
140 |     if "xgen" in model_name:
141 |         response = response.strip().replace("Assistant:", "")
142 |     elif "internlm" in model_name:
143 |         response = response.split("<eoa>")[0]
144 | 
145 |     return response
--------------------------------------------------------------------------------
/info/error_cases_and_metric_cases.md:
--------------------------------------------------------------------------------
1 | # Error cases
2 | 
3 | In the 65th test sample of the hotpotwikiqa_mixup_16k dataset, the question is "What is the date of death of the director of the film Nallavan Vazhvan?" Answering it requires multi-hop reasoning: the model must first extract the director's name from "### Passage 30", which introduces the film Nallavan Vazhvan, and then retrieve the final answer from "### Passage 15", which details the director's life. Instead, the model draws its wrong answer from another director's introduction in "### Passage 27": that passage vaguely states that its subject was involved in the production of over 60 films, and the model mistakenly interprets this as the information needed to answer the question. Another potential reason for the incorrect response is that the director's first name is abbreviated in the film's introduction, which prevents the model from retrieving the correct answer through exact matching. This indicates that even a powerful model like Llama 3 struggles to accurately understand the relationships between entities in long-context multi-step reasoning and is easily misled by other seemingly straightforward yet ambiguous information.
4 | 
5 | ```json
6 | {"pred": "9 December 1988", "answers": ["4 November 2003"], "gold_ans": "4 November 2003", "input": "What is the date of death of the director of film Nallavan Vazhvan?", "all_classes": null, "length": 21447}
7 | ```
8 | 
9 | Context:
10 | 
11 | ```json
12 | "......### Passage 15\nPalaniyaandi Neelakantan (2 October 1916 \u2013 4 November 2003) was a Tamil film director, who was active for nearly four decades.\n\nLife\nHe was born at Villupuram, Tamil Nadu......### Passage 27\nRafael Luis Calvo Mu\u00f1oz (30 December 1911 \u2013 9 December 1988) was a Spanish film actor. He appeared in more than 60 films including Miracle of Marcelino (1955)....### Passage 30\nNallavan Vazhvan (transl.\u2009The good man will live) is a 1961 Indian Tamil-language crime thriller film produced and directed by P. Neelakantan......"
13 | ```
14 | In the factrecall-zh-16k dataset, all of Llama-3-8b-Instruct's responses were misled by the CF, namely "贝克汉姆" ("David Beckham"), whereas in the factrecall-en-16k dataset, only 32% of Llama-3-8b-Instruct's samples were misled by the CF. This suggests that the model's anti-interference ability may be highly imbalanced across different languages. We provide a list of model responses below as examples for clarification; the short script that follows shows how such a misled rate can be tallied from the prediction files.
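A minimal sketch of that tally, assuming the prediction files follow the jsonl format shown in the examples (the file names here are illustrative):

```python
import json

def misled_rate(jsonl_path, counterfact):
    # Fraction of samples whose prediction repeats the counterfact (CF)
    # instead of the true answer.
    total, misled = 0, 0
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            total += 1
            if counterfact in sample["pred"]:
                misled += 1
    return misled / total

print(misled_rate("factrecall_zh_16k.jsonl", "贝克汉姆"))
print(misled_rate("factrecall_en_16k.jsonl", "Beckham"))
```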
15 | 
16 | factrecall-zh-16k
17 | ```json
18 | # factrecall-zh-16k
19 | {"pred": "贝克汉姆。", "answers": ["贝多芬"], "gold_ans": null, "input": "被世人广泛推崇为现代物理学奠基人的科学家叫什么名字?", "all_classes": null, "length": 13249}
20 | {"pred": "贝克汉姆。", "answers": ["贝多芬"], "gold_ans": null, "input": "被世人广泛推崇为现代物理学奠基人的科学家叫什么名字?", "all_classes": null, "length": 13390}
21 | {"pred": "贝克汉姆", "answers": ["贝多芬"], "gold_ans": null, "input": "被世人广泛推崇为现代物理学奠基人的科学家叫什么名字?", "all_classes": null, "length": 13316}
22 | {"pred": "贝克汉姆。", "answers": ["贝多芬"], "gold_ans": null, "input": "被世人广泛推崇为现代物理学奠基人的科学家叫什么名字?", "all_classes": null, "length": 13334}
23 | {"pred": "贝克汉姆。", "answers": ["贝多芬"], "gold_ans": null, "input": "被世人广泛推崇为现代物理学奠基人的科学家叫什么名字?", "all_classes": null, "length": 13266}
24 | ......
25 | ```
26 | factrecall-en-16k
27 | ```json
28 | # factrecall-en-16k
29 | {"pred": "David Beckham.", "answers": ["Ludwig Beethoven"], "gold_ans": null, "input": "What is the name of the scientist widely acclaimed as the foundational figure of modern physics?", "all_classes": null, "length": 13940}
30 | {"pred": "David Beckham.", "answers": ["Ludwig Beethoven"], "gold_ans": null, "input": "What is the name of the scientist widely acclaimed as the foundational figure of modern physics?", "all_classes": null, "length": 14047}
31 | {"pred": "Ludwig Beethoven.", "answers": ["Ludwig Beethoven"], "gold_ans": null, "input": "What is the name of the scientist widely acclaimed as the foundational figure of modern physics?", "all_classes": null, "length": 13988}
32 | {"pred": "Ludwig Beethoven.", "answers": ["Ludwig Beethoven"], "gold_ans": null, "input": "What is the name of the scientist widely acclaimed as the foundational figure of modern physics?", "all_classes": null, "length": 14126}
33 | {"pred": "Ludwig Beethoven.", "answers": ["Ludwig Beethoven"], "gold_ans": null, "input": "What is the name of the scientist widely acclaimed as the foundational figure of modern physics?", "all_classes": null, "length": 13895}
34 | {"pred": "Ludwig Beethoven.", "answers": ["Ludwig Beethoven"], "gold_ans": null, "input": "What is the name of the scientist widely acclaimed as the foundational figure of modern physics?", "all_classes": null, "length": 14019}
35 | ......
36 | ```
37 | 
38 | # Metric cases
39 | For an AK-related falsely high score under the plain F1 metric, consider the case below. The model fails to locate the specific time information (2020), yet it receives a score of 0.3 because of other matched words such as "independent publishing of digital books". In a human evaluation, this response would clearly be given a score of 0. We manually verified that such examples are a common occurrence across multiple datasets under the plain F1 metric, so we designed the keyword-recall-based metric to reduce this bias.
40 | 
41 | ```json
42 | {"qa_f1_score": 0.30769230769230765,
43 | "pred": "There is no mention of Martin or independent publishing of digital books in the passage. The passage appears to be about a research paper on contour completion using deep structure priors.",
44 | "answers": ["Martin began independent publishing her books as digital books in **2020**."],
45 | "answers_keywords": "2020",
46 | "input": "When did Martin start independent publishing her books as digital books?", "all_classes": null, "length": 18496}
47 | ```
48 | 
49 | More examples:
50 | 
51 | ```json
52 | {"qa_f1_score_with_gold_ans": 0.4,
53 | "pred": "For services to Medicine and to the community in the Cayman Islands.",
54 | "answers": ["For his services to **music**."],
55 | "answers_keyword": "services to music",
56 | "input": "What is Geoffrey Michael Windsor Taylor being recognized for?", "all_classes": null, "length": 32957}
57 | ```
58 | ```json
59 | {"qa_f1_score": 0.6666666666666666,
60 | "pred": "Low mechanical flexibility.",
61 | "answers": ["**Increased** mechanical flexibility."],
62 | "answers_keyword": "Increased mechanical flexibility.",
63 | "input": "What are the benefits of using binary variables in the SLAS formulation?", "all_classes": null, "length": 16690}
64 | ```
65 | ```json
66 | {"qa_f1_zh_score": 0.4827586206896552,
67 | "pred": "根据文章26中的内容,电影《毕业风暴》的导演是提莫·贝克曼贝托夫(TimurBekmambetov)。",
68 | "answers": ["《毕业风暴》的导演是罗马尼亚导演**克里斯汀穆基**。"],
69 | "answers_keyword": "克里斯汀穆基",
70 | "input": "谁是电影《毕业风暴》的导演?", "all_classes": null, "length": 16114}
71 | ```
72 | 
--------------------------------------------------------------------------------
/prediction.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import torch
4 | import argparse
5 | from tqdm import tqdm
6 | 
7 | from config import (
8 |     DATASET_MAXGEN,
9 |     DATASET_PROMPT,
10 |     DATASET_SELECTED,
11 |     DATASET_LENGTH_LEVEL,
12 | )
13 | from utils import (
14 |     ensure_dir,
15 |     seed_everything,
16 |     get_dataset_names,
17 |     build_chat,
18 |     post_process,
19 |     model_generate,
20 |     truncate_prompt,
21 |     dump_preds_results,
22 |     load_LVEval_dataset,
23 |     load_model_and_tokenizer_once,
24 | )
25 | 
26 | 
27 | def get_pred(
28 |     model,
29 |     tokenizer,
30 |     data,
31 |     max_length,
32 |     max_gen,
33 |     prompt_format,
34 |     model_name,
35 | ):
36 |     preds = []
37 |     for json_obj in tqdm(data):
38 |         prompt = prompt_format.format(**json_obj)
39 |         prompt = truncate_prompt(tokenizer, prompt, max_length)
40 | 
41 |         if hasattr(model, "chat"): # use model.chat() to generate the prediction; the open-source model applies its own conversation format.
42 |             pred = model.chat(
43 |                 tokenizer,
44 |                 prompt,
45 |                 max_new_tokens=max_gen,
46 |                 max_length=None,
47 |                 do_sample=False,
48 |                 history=[],
49 |             )[0]
50 |         else: # use model.generate() to generate the prediction; make sure the customized conversation format in build_chat() is correct.
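            # Wrap the raw prompt in the model-specific conversation template
            # (see build_chat in utils.py), then greedily decode up to
            # max_gen new tokens with model_generate.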
51 | prompt = build_chat(tokenizer, prompt, model_name) 52 | pred = model_generate(tokenizer, prompt, max_gen, model) 53 | 54 | pred = post_process(pred, model_name) 55 | 56 | preds.append({ 57 | "pred": pred, 58 | "answers": json_obj["answers"], 59 | "gold_ans": json_obj["answer_keywords"] if "answer_keywords" in json_obj else None, 60 | "input": json_obj["input"], 61 | "all_classes": json_obj["all_classes"] if "all_classes" in json_obj else None, 62 | "length": json_obj["length"], 63 | }) 64 | return preds 65 | 66 | def evaluate(mix): 67 | datas = mix["datas"] 68 | dataset = mix["dataset"] 69 | id = mix["id"] 70 | dataset_name = re.split('_.{1,3}k', dataset)[0] 71 | prompt_format = mix["dataset_prompt"][dataset_name] 72 | max_gen = mix["dataset_maxgen"][dataset_name] 73 | preds = get_pred( 74 | mix["model"], 75 | mix["tokenizer"], 76 | datas, 77 | mix["max_length"], 78 | max_gen, 79 | prompt_format, 80 | mix["model_name"], 81 | ) 82 | mix["shared_dict"][id] = preds 83 | 84 | def split_datasets(input_list, num_parts, dataset, shared_dict, device_dict, args): 85 | avg = len(input_list) // num_parts 86 | remainder = len(input_list) % num_parts 87 | result = [] 88 | start = 0 89 | id = 0 90 | for i in range(num_parts): 91 | end = start + avg 92 | if i < remainder: 93 | end += 1 94 | dic = {} 95 | dic["datas"] = input_list[start:end] 96 | dic["dataset"] = dataset 97 | dic["id"] = id 98 | dic["model_path"] = args.model_path 99 | dic["model_name"] = args.model_name 100 | dic["max_length"] = args.model_max_length 101 | dic["data_path"] = args.data_path 102 | dic["out_dir"] = args.output_dir 103 | dic["model"] = device_dict[id][0] 104 | dic["tokenizer"] = device_dict[id][1] 105 | dic["shared_dict"] = shared_dict 106 | dic["dataset_prompt"] = DATASET_PROMPT 107 | dic["dataset_maxgen"] = DATASET_MAXGEN 108 | dic["args"] = args 109 | result.append(dic) 110 | start = end 111 | id = id + 1 112 | return result 113 | 114 | def multiple_processing_once(num_gpus, dataset, shared_dict, device_dict, args): 115 | datas = load_LVEval_dataset(dataset, args.data_path) 116 | mixs = split_datasets(datas, num_gpus, dataset, shared_dict, device_dict, args) 117 | ctx = torch.multiprocessing.get_context("spawn") 118 | pool = ctx.Pool(processes=num_gpus) 119 | pool.map(evaluate, mixs) 120 | pool.close() 121 | pool.join() 122 | merged_results = [] 123 | for i in range(len(shared_dict)): 124 | merged_results += shared_dict[i] 125 | dump_preds_results(merged_results, os.path.join(args.output_dir, dataset + ".jsonl")) 126 | 127 | def load_model_and_tokenizer_serial(num_gpus, model_path): 128 | device_dict = dict() 129 | for i in range(num_gpus): 130 | load_model_and_tokenizer_once(i, model_path, device_dict) 131 | return device_dict 132 | 133 | def multiple_processing(datasets, args): 134 | num_gpus = torch.cuda.device_count() 135 | device_dict = load_model_and_tokenizer_serial(num_gpus, args.model_path) 136 | manager = torch.multiprocessing.Manager() 137 | shared_dict = manager.dict() 138 | for dataset in tqdm(datasets): 139 | multiple_processing_once(num_gpus, dataset, shared_dict, device_dict, args) 140 | 141 | def single_processing(datasets, args): 142 | model, tokenizer = load_model_and_tokenizer_once(-1, args.model_path) 143 | for dataset in tqdm(datasets): 144 | datas = load_LVEval_dataset(dataset, args.data_path) 145 | dataset_name = re.split('_.{1,3}k', dataset)[0] 146 | prompt_format = DATASET_PROMPT[dataset_name] 147 | max_gen = DATASET_MAXGEN[dataset_name] 148 | preds = get_pred( 149 | model, 150 | tokenizer, 
151 |             datas,
152 |             args.model_max_length,
153 |             max_gen,
154 |             prompt_format,
155 |             args.model_name,
156 |         )
157 |         dump_preds_results(preds, os.path.join(args.output_dir, dataset + ".jsonl"))
158 | 
159 | def parse_args(args=None):
160 |     parser = argparse.ArgumentParser()
161 |     parser.add_argument("--model-path", type=str, default=None, required=True)
162 |     parser.add_argument("--model-name", type=str, default=None)
163 |     parser.add_argument("--model-max-length", type=int, default=15500)
164 |     parser.add_argument("--data-path", type=str, default=None)
165 |     parser.add_argument("--output-dir", type=str, default="outputs")
166 |     parser.add_argument("--single-process", action="store_true")
167 |     return process_args(parser.parse_args(args))
168 | 
169 | def process_args(args):
170 |     model_path = args.model_path.rstrip("/")
171 |     if not args.model_name:
172 |         args.model_name = os.path.basename(model_path)
173 |     return args
174 | 
175 | 
176 | 
177 | if __name__ == "__main__":
178 |     seed_everything(42)
179 |     args = parse_args()
180 |     ensure_dir(args.output_dir)
181 |     datasets = get_dataset_names(DATASET_SELECTED, DATASET_LENGTH_LEVEL)
182 | 
183 |     if args.single_process:
184 |         single_processing(datasets, args)
185 |     else:
186 |         multiple_processing(datasets, args)
187 | 
--------------------------------------------------------------------------------
/metrics.py:
--------------------------------------------------------------------------------
1 | """Functions for computing metrics.
2 | 
3 | Parts of the following code are modified from `https://github.com/THUDM/LongBench/blob/a80fd111d6e5fe1735eb7be53fece976706f8e0c/metrics.py`
4 | """
5 | 
6 | 
7 | import re
8 | import jieba
9 | import string
10 | from rouge import Rouge
11 | from collections import Counter
12 | 
13 | 
14 | 
15 | ABANDON_WORDS_EN = ['and', 'to', 'of', 'in', 'her', 'was', 'with', 'for', 'it', 'from', 'is', 'that', 'his', 'he', 'by', 'she', 'they', 'or', 'at', 'because', 'be', 'on', 'are', 'their', 'what', 'as', 'had', 'were', 'about', 'being', 'this', 'who', 'but', 'have', 'has', 'when', 'which', 'does']
16 | ABANDON_WORDS_ZH = ['的', '和', '是', '等', '在', '年', '可以', '为', '与', '‰', '了', '或', '一种', '月', 'c', '至', '日', '有', '进行', '于', '不', '中', '×', '根据', '小', '由', '亩', '也', '要', '指', '法', '会', '元', '主要', '以及', '通过', '首先', '对', '然后', '号', '以', '所', '后', '丁', '包括', '无', '将', '用', '能', '形', '方面', '因素', '位于', '而', '从', '到', '一定', '用于', '但', '使用', '让', '具有', '并', '亿元', '万元', '上', '类', '基于', '才', '来', '地', '片', '其他', '个', '或者', '变得', '时', '给', '你', '使', '条', '受', '已经', '带', '度']
17 | 
18 | def normalize_answer(s):
19 |     """Lower text and remove punctuation, articles and extra whitespace."""
20 | 
21 |     def remove_articles(text):
22 |         return re.sub(r"\b(a|an|the)\b", " ", text)
23 | 
24 |     def white_space_fix(text):
25 |         return " ".join(text.split())
26 | 
27 |     def remove_punc(text):
28 |         exclude = set(string.punctuation)
29 |         return "".join(ch for ch in text if ch not in exclude)
30 | 
31 |     def lower(text):
32 |         return text.lower()
33 | 
34 |     return white_space_fix(remove_articles(remove_punc(lower(s))))
35 | 
36 | 
37 | def normalize_zh_answer(s):
38 |     """Lower text and remove punctuation, extra whitespace."""
39 | 
40 |     def white_space_fix(text):
41 |         return "".join(text.split())
42 | 
43 |     def remove_punc(text):
44 |         cn_punctuation = "!?。。"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏."
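        # Merge full-width/CJK punctuation with the ASCII set from
        # string.punctuation so that both kinds are stripped.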
45 | all_punctuation = set(string.punctuation + cn_punctuation) 46 | return "".join(ch for ch in text if ch not in all_punctuation) 47 | 48 | def lower(text): 49 | return text.lower() 50 | 51 | return white_space_fix(remove_punc(lower(s))) 52 | 53 | def rouge_score(prediction, ground_truth, gold_ans=None, **kwargs): 54 | rouge = Rouge() 55 | try: 56 | scores = rouge.get_scores([prediction], [ground_truth], avg=True) 57 | except: 58 | return 0.0 59 | return scores["rouge-l"]["f"] 60 | 61 | def rouge_zh_score(prediction, ground_truth, gold_ans=None, **kwargs): 62 | prediction = " ".join(list(jieba.cut(prediction, cut_all=False))) 63 | ground_truth = " ".join(list(jieba.cut(ground_truth, cut_all=False))) 64 | score = rouge_score(prediction, ground_truth) 65 | return score 66 | 67 | def rouge_zh_score_blacklist(prediction, ground_truth, gold_ans=None, **kwargs): 68 | prediction = " ".join(list(jieba.cut(prediction, cut_all=False))) 69 | ground_truth = " ".join(list(jieba.cut(ground_truth, cut_all=False))) 70 | prediction_tokens = list(jieba.cut(prediction, cut_all=False)) 71 | ground_truth_tokens = list(jieba.cut(ground_truth, cut_all=False)) 72 | prediction_tokens = [normalize_zh_answer(token) for token in prediction_tokens] 73 | ground_truth_tokens = [normalize_zh_answer(token) for token in ground_truth_tokens] 74 | filtered_prediction_tokens = [i for i in prediction_tokens if i not in ABANDON_WORDS_ZH] 75 | filtered_ground_truth_tokens = [i for i in ground_truth_tokens if i not in ABANDON_WORDS_ZH] 76 | prediction = " ".join(filtered_prediction_tokens) 77 | ground_truth = " ".join(filtered_ground_truth_tokens) 78 | score = rouge_score(prediction, ground_truth) 79 | return score 80 | 81 | def f1_score(prediction, ground_truth, **kwargs): 82 | common = Counter(prediction) & Counter(ground_truth) 83 | num_same = sum(common.values()) 84 | if num_same == 0: 85 | return 0 86 | precision = 1.0 * num_same / len(prediction) 87 | recall = 1.0 * num_same / len(ground_truth) 88 | f1 = (2 * precision * recall) / (precision + recall) 89 | return f1 90 | 91 | def qa_f1_score(prediction, ground_truth, gold_ans=None, **kwargs): 92 | normalized_prediction = normalize_answer(prediction) 93 | normalized_ground_truth = normalize_answer(ground_truth) 94 | prediction_tokens = normalized_prediction.split() 95 | ground_truth_tokens = normalized_ground_truth.split() 96 | return f1_score(prediction_tokens, ground_truth_tokens) 97 | 98 | def qa_f1_score_factrecall(prediction, ground_truth, gold_ans=None, **kwargs): 99 | normalized_prediction = normalize_answer(prediction) 100 | normalized_ground_truth = normalize_answer(ground_truth) 101 | prediction_tokens = normalized_prediction.split() 102 | ground_truth_tokens = normalized_ground_truth.split() 103 | common = Counter(prediction_tokens) & Counter(ground_truth_tokens) 104 | num_same = sum(common.values()) 105 | recall = 1.0 * num_same / len(ground_truth_tokens) 106 | 107 | return recall 108 | 109 | def qa_f1_score_with_gold_ans(prediction, ground_truth, gold_ans=None, **kwargs): 110 | normalized_prediction = normalize_answer(prediction) 111 | normalized_ground_truth = normalize_answer(ground_truth) 112 | prediction_tokens = normalized_prediction.split() 113 | ground_truth_tokens = normalized_ground_truth.split() 114 | # answer keywords recall 115 | if gold_ans: 116 | gold_ans_tokens = normalize_answer(gold_ans) 117 | gold_ans_tokens = gold_ans_tokens.split() 118 | common = Counter(prediction_tokens) & Counter(gold_ans_tokens) 119 | filtered_common = {key: value for 
key, value in common.items() if key not in ABANDON_WORDS_EN} 120 | num_same = sum(filtered_common.values()) 121 | recall = 1.0 * num_same / len(gold_ans_tokens) 122 | if recall < 0.2: return 0. 123 | 124 | return f1_score(prediction_tokens, ground_truth_tokens) 125 | 126 | def qa_f1_zh_score(prediction, ground_truth, gold_ans=None, **kwargs): 127 | prediction_tokens = list(jieba.cut(prediction, cut_all=False)) 128 | ground_truth_tokens = list(jieba.cut(ground_truth, cut_all=False)) 129 | prediction_tokens = [normalize_zh_answer(token) for token in prediction_tokens] 130 | ground_truth_tokens = [normalize_zh_answer(token) for token in ground_truth_tokens] 131 | prediction_tokens = [token for token in prediction_tokens if len(token) > 0] 132 | ground_truth_tokens = [token for token in ground_truth_tokens if len(token) > 0] 133 | return f1_score(prediction_tokens, ground_truth_tokens) 134 | 135 | def qa_f1_zh_score_factrecall(prediction, ground_truth, gold_ans=None, **kwargs): 136 | prediction_tokens = list(jieba.cut(prediction, cut_all=False)) 137 | ground_truth_tokens = list(jieba.cut(ground_truth, cut_all=False)) 138 | prediction_tokens = [normalize_zh_answer(token) for token in prediction_tokens] 139 | ground_truth_tokens = [normalize_zh_answer(token) for token in ground_truth_tokens] 140 | prediction_tokens = [token for token in prediction_tokens if len(token) > 0] 141 | ground_truth_tokens = [token for token in ground_truth_tokens if len(token) > 0] 142 | common = Counter(prediction_tokens) & Counter(ground_truth_tokens) 143 | num_same = sum(common.values()) 144 | recall = 1.0 * num_same / len(ground_truth_tokens) 145 | return recall 146 | 147 | def qa_f1_zh_score_with_gold_ans(prediction, ground_truth, gold_ans=None, **kwargs): 148 | prediction_tokens = list(jieba.cut(prediction, cut_all=False)) 149 | ground_truth_tokens = list(jieba.cut(ground_truth, cut_all=False)) 150 | prediction_tokens = [normalize_zh_answer(token) for token in prediction_tokens] 151 | ground_truth_tokens = [normalize_zh_answer(token) for token in ground_truth_tokens] 152 | prediction_tokens = [token for token in prediction_tokens if len(token) > 0] 153 | ground_truth_tokens = [token for token in ground_truth_tokens if len(token) > 0] 154 | # answer keywords recall 155 | if not gold_ans: 156 | gold_ans = ground_truth 157 | if gold_ans: 158 | gold_ans_tokens = list(jieba.cut(gold_ans, cut_all=False)) 159 | gold_ans_tokens = [normalize_zh_answer(token) for token in gold_ans_tokens] 160 | gold_ans_tokens = [token for token in gold_ans_tokens if len(token) > 0] 161 | common = Counter(prediction_tokens) & Counter(gold_ans_tokens) 162 | filtered_common = {key: value for key, value in common.items() if key not in ABANDON_WORDS_ZH} 163 | num_same = sum(filtered_common.values()) 164 | recall = 1.0 * num_same / len(gold_ans_tokens) 165 | if recall < 0.4: return 0. 166 | 167 | return f1_score(prediction_tokens, ground_truth_tokens) -------------------------------------------------------------------------------- /LICENSE_CC: -------------------------------------------------------------------------------- 1 | Attribution-ShareAlike 4.0 International 2 | 3 | ======================================================================= 4 | 5 | Creative Commons Corporation ("Creative Commons") is not a law firm and 6 | does not provide legal services or legal advice. Distribution of 7 | Creative Commons public licenses does not create a lawyer-client or 8 | other relationship. 
Creative Commons makes its licenses and related 9 | information available on an "as-is" basis. Creative Commons gives no 10 | warranties regarding its licenses, any material licensed under their 11 | terms and conditions, or any related information. Creative Commons 12 | disclaims all liability for damages resulting from their use to the 13 | fullest extent possible. 14 | 15 | Using Creative Commons Public Licenses 16 | 17 | Creative Commons public licenses provide a standard set of terms and 18 | conditions that creators and other rights holders may use to share 19 | original works of authorship and other material subject to copyright 20 | and certain other rights specified in the public license below. The 21 | following considerations are for informational purposes only, are not 22 | exhaustive, and do not form part of our licenses. 23 | 24 | Considerations for licensors: Our public licenses are 25 | intended for use by those authorized to give the public 26 | permission to use material in ways otherwise restricted by 27 | copyright and certain other rights. Our licenses are 28 | irrevocable. Licensors should read and understand the terms 29 | and conditions of the license they choose before applying it. 30 | Licensors should also secure all rights necessary before 31 | applying our licenses so that the public can reuse the 32 | material as expected. Licensors should clearly mark any 33 | material not subject to the license. This includes other CC- 34 | licensed material, or material used under an exception or 35 | limitation to copyright. More considerations for licensors: 36 | wiki.creativecommons.org/Considerations_for_licensors 37 | 38 | Considerations for the public: By using one of our public 39 | licenses, a licensor grants the public permission to use the 40 | licensed material under specified terms and conditions. If 41 | the licensor's permission is not necessary for any reason--for 42 | example, because of any applicable exception or limitation to 43 | copyright--then that use is not regulated by the license. Our 44 | licenses grant only permissions under copyright and certain 45 | other rights that a licensor has authority to grant. Use of 46 | the licensed material may still be restricted for other 47 | reasons, including because others have copyright or other 48 | rights in the material. A licensor may make special requests, 49 | such as asking that all changes be marked or described. 50 | Although not required by our licenses, you are encouraged to 51 | respect those requests where reasonable. More_considerations 52 | for the public: 53 | wiki.creativecommons.org/Considerations_for_licensees 54 | 55 | ======================================================================= 56 | 57 | Creative Commons Attribution-ShareAlike 4.0 International Public 58 | License 59 | 60 | By exercising the Licensed Rights (defined below), You accept and agree 61 | to be bound by the terms and conditions of this Creative Commons 62 | Attribution-ShareAlike 4.0 International Public License ("Public 63 | License"). To the extent this Public License may be interpreted as a 64 | contract, You are granted the Licensed Rights in consideration of Your 65 | acceptance of these terms and conditions, and the Licensor grants You 66 | such rights in consideration of benefits the Licensor receives from 67 | making the Licensed Material available under these terms and 68 | conditions. 69 | 70 | 71 | Section 1 -- Definitions. 72 | 73 | a. 
Adapted Material means material subject to Copyright and Similar 74 | Rights that is derived from or based upon the Licensed Material 75 | and in which the Licensed Material is translated, altered, 76 | arranged, transformed, or otherwise modified in a manner requiring 77 | permission under the Copyright and Similar Rights held by the 78 | Licensor. For purposes of this Public License, where the Licensed 79 | Material is a musical work, performance, or sound recording, 80 | Adapted Material is always produced where the Licensed Material is 81 | synched in timed relation with a moving image. 82 | 83 | b. Adapter's License means the license You apply to Your Copyright 84 | and Similar Rights in Your contributions to Adapted Material in 85 | accordance with the terms and conditions of this Public License. 86 | 87 | c. BY-SA Compatible License means a license listed at 88 | creativecommons.org/compatiblelicenses, approved by Creative 89 | Commons as essentially the equivalent of this Public License. 90 | 91 | d. Copyright and Similar Rights means copyright and/or similar rights 92 | closely related to copyright including, without limitation, 93 | performance, broadcast, sound recording, and Sui Generis Database 94 | Rights, without regard to how the rights are labeled or 95 | categorized. For purposes of this Public License, the rights 96 | specified in Section 2(b)(1)-(2) are not Copyright and Similar 97 | Rights. 98 | 99 | e. Effective Technological Measures means those measures that, in the 100 | absence of proper authority, may not be circumvented under laws 101 | fulfilling obligations under Article 11 of the WIPO Copyright 102 | Treaty adopted on December 20, 1996, and/or similar international 103 | agreements. 104 | 105 | f. Exceptions and Limitations means fair use, fair dealing, and/or 106 | any other exception or limitation to Copyright and Similar Rights 107 | that applies to Your use of the Licensed Material. 108 | 109 | g. License Elements means the license attributes listed in the name 110 | of a Creative Commons Public License. The License Elements of this 111 | Public License are Attribution and ShareAlike. 112 | 113 | h. Licensed Material means the artistic or literary work, database, 114 | or other material to which the Licensor applied this Public 115 | License. 116 | 117 | i. Licensed Rights means the rights granted to You subject to the 118 | terms and conditions of this Public License, which are limited to 119 | all Copyright and Similar Rights that apply to Your use of the 120 | Licensed Material and that the Licensor has authority to license. 121 | 122 | j. Licensor means the individual(s) or entity(ies) granting rights 123 | under this Public License. 124 | 125 | k. Share means to provide material to the public by any means or 126 | process that requires permission under the Licensed Rights, such 127 | as reproduction, public display, public performance, distribution, 128 | dissemination, communication, or importation, and to make material 129 | available to the public including in ways that members of the 130 | public may access the material from a place and at a time 131 | individually chosen by them. 132 | 133 | l. Sui Generis Database Rights means rights other than copyright 134 | resulting from Directive 96/9/EC of the European Parliament and of 135 | the Council of 11 March 1996 on the legal protection of databases, 136 | as amended and/or succeeded, as well as other essentially 137 | equivalent rights anywhere in the world. 138 | 139 | m. 
You means the individual or entity exercising the Licensed Rights 140 | under this Public License. Your has a corresponding meaning. 141 | 142 | 143 | Section 2 -- Scope. 144 | 145 | a. License grant. 146 | 147 | 1. Subject to the terms and conditions of this Public License, 148 | the Licensor hereby grants You a worldwide, royalty-free, 149 | non-sublicensable, non-exclusive, irrevocable license to 150 | exercise the Licensed Rights in the Licensed Material to: 151 | 152 | a. reproduce and Share the Licensed Material, in whole or 153 | in part; and 154 | 155 | b. produce, reproduce, and Share Adapted Material. 156 | 157 | 2. Exceptions and Limitations. For the avoidance of doubt, where 158 | Exceptions and Limitations apply to Your use, this Public 159 | License does not apply, and You do not need to comply with 160 | its terms and conditions. 161 | 162 | 3. Term. The term of this Public License is specified in Section 163 | 6(a). 164 | 165 | 4. Media and formats; technical modifications allowed. The 166 | Licensor authorizes You to exercise the Licensed Rights in 167 | all media and formats whether now known or hereafter created, 168 | and to make technical modifications necessary to do so. The 169 | Licensor waives and/or agrees not to assert any right or 170 | authority to forbid You from making technical modifications 171 | necessary to exercise the Licensed Rights, including 172 | technical modifications necessary to circumvent Effective 173 | Technological Measures. For purposes of this Public License, 174 | simply making modifications authorized by this Section 2(a) 175 | (4) never produces Adapted Material. 176 | 177 | 5. Downstream recipients. 178 | 179 | a. Offer from the Licensor -- Licensed Material. Every 180 | recipient of the Licensed Material automatically 181 | receives an offer from the Licensor to exercise the 182 | Licensed Rights under the terms and conditions of this 183 | Public License. 184 | 185 | b. Additional offer from the Licensor -- Adapted Material. 186 | Every recipient of Adapted Material from You 187 | automatically receives an offer from the Licensor to 188 | exercise the Licensed Rights in the Adapted Material 189 | under the conditions of the Adapter's License You apply. 190 | 191 | c. No downstream restrictions. You may not offer or impose 192 | any additional or different terms or conditions on, or 193 | apply any Effective Technological Measures to, the 194 | Licensed Material if doing so restricts exercise of the 195 | Licensed Rights by any recipient of the Licensed 196 | Material. 197 | 198 | 6. No endorsement. Nothing in this Public License constitutes or 199 | may be construed as permission to assert or imply that You 200 | are, or that Your use of the Licensed Material is, connected 201 | with, or sponsored, endorsed, or granted official status by, 202 | the Licensor or others designated to receive attribution as 203 | provided in Section 3(a)(1)(A)(i). 204 | 205 | b. Other rights. 206 | 207 | 1. Moral rights, such as the right of integrity, are not 208 | licensed under this Public License, nor are publicity, 209 | privacy, and/or other similar personality rights; however, to 210 | the extent possible, the Licensor waives and/or agrees not to 211 | assert any such rights held by the Licensor to the limited 212 | extent necessary to allow You to exercise the Licensed 213 | Rights, but not otherwise. 214 | 215 | 2. Patent and trademark rights are not licensed under this 216 | Public License. 217 | 218 | 3. 
To the extent possible, the Licensor waives any right to 219 | collect royalties from You for the exercise of the Licensed 220 | Rights, whether directly or through a collecting society 221 | under any voluntary or waivable statutory or compulsory 222 | licensing scheme. In all other cases the Licensor expressly 223 | reserves any right to collect such royalties. 224 | 225 | 226 | Section 3 -- License Conditions. 227 | 228 | Your exercise of the Licensed Rights is expressly made subject to the 229 | following conditions. 230 | 231 | a. Attribution. 232 | 233 | 1. If You Share the Licensed Material (including in modified 234 | form), You must: 235 | 236 | a. retain the following if it is supplied by the Licensor 237 | with the Licensed Material: 238 | 239 | i. identification of the creator(s) of the Licensed 240 | Material and any others designated to receive 241 | attribution, in any reasonable manner requested by 242 | the Licensor (including by pseudonym if 243 | designated); 244 | 245 | ii. a copyright notice; 246 | 247 | iii. a notice that refers to this Public License; 248 | 249 | iv. a notice that refers to the disclaimer of 250 | warranties; 251 | 252 | v. a URI or hyperlink to the Licensed Material to the 253 | extent reasonably practicable; 254 | 255 | b. indicate if You modified the Licensed Material and 256 | retain an indication of any previous modifications; and 257 | 258 | c. indicate the Licensed Material is licensed under this 259 | Public License, and include the text of, or the URI or 260 | hyperlink to, this Public License. 261 | 262 | 2. You may satisfy the conditions in Section 3(a)(1) in any 263 | reasonable manner based on the medium, means, and context in 264 | which You Share the Licensed Material. For example, it may be 265 | reasonable to satisfy the conditions by providing a URI or 266 | hyperlink to a resource that includes the required 267 | information. 268 | 269 | 3. If requested by the Licensor, You must remove any of the 270 | information required by Section 3(a)(1)(A) to the extent 271 | reasonably practicable. 272 | 273 | b. ShareAlike. 274 | 275 | In addition to the conditions in Section 3(a), if You Share 276 | Adapted Material You produce, the following conditions also apply. 277 | 278 | 1. The Adapter's License You apply must be a Creative Commons 279 | license with the same License Elements, this version or 280 | later, or a BY-SA Compatible License. 281 | 282 | 2. You must include the text of, or the URI or hyperlink to, the 283 | Adapter's License You apply. You may satisfy this condition 284 | in any reasonable manner based on the medium, means, and 285 | context in which You Share Adapted Material. 286 | 287 | 3. You may not offer or impose any additional or different terms 288 | or conditions on, or apply any Effective Technological 289 | Measures to, Adapted Material that restrict exercise of the 290 | rights granted under the Adapter's License You apply. 291 | 292 | 293 | Section 4 -- Sui Generis Database Rights. 294 | 295 | Where the Licensed Rights include Sui Generis Database Rights that 296 | apply to Your use of the Licensed Material: 297 | 298 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right 299 | to extract, reuse, reproduce, and Share all or a substantial 300 | portion of the contents of the database; 301 | 302 | b. 
if You include all or a substantial portion of the database 303 | contents in a database in which You have Sui Generis Database 304 | Rights, then the database in which You have Sui Generis Database 305 | Rights (but not its individual contents) is Adapted Material, 306 | 307 | including for purposes of Section 3(b); and 308 | c. You must comply with the conditions in Section 3(a) if You Share 309 | all or a substantial portion of the contents of the database. 310 | 311 | For the avoidance of doubt, this Section 4 supplements and does not 312 | replace Your obligations under this Public License where the Licensed 313 | Rights include other Copyright and Similar Rights. 314 | 315 | 316 | Section 5 -- Disclaimer of Warranties and Limitation of Liability. 317 | 318 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE 319 | EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS 320 | AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF 321 | ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, 322 | IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, 323 | WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR 324 | PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, 325 | ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT 326 | KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT 327 | ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. 328 | 329 | b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE 330 | TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, 331 | NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, 332 | INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, 333 | COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR 334 | USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN 335 | ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR 336 | DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR 337 | IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. 338 | 339 | c. The disclaimer of warranties and limitation of liability provided 340 | above shall be interpreted in a manner that, to the extent 341 | possible, most closely approximates an absolute disclaimer and 342 | waiver of all liability. 343 | 344 | 345 | Section 6 -- Term and Termination. 346 | 347 | a. This Public License applies for the term of the Copyright and 348 | Similar Rights licensed here. However, if You fail to comply with 349 | this Public License, then Your rights under this Public License 350 | terminate automatically. 351 | 352 | b. Where Your right to use the Licensed Material has terminated under 353 | Section 6(a), it reinstates: 354 | 355 | 1. automatically as of the date the violation is cured, provided 356 | it is cured within 30 days of Your discovery of the 357 | violation; or 358 | 359 | 2. upon express reinstatement by the Licensor. 360 | 361 | For the avoidance of doubt, this Section 6(b) does not affect any 362 | right the Licensor may have to seek remedies for Your violations 363 | of this Public License. 364 | 365 | c. For the avoidance of doubt, the Licensor may also offer the 366 | Licensed Material under separate terms or conditions or stop 367 | distributing the Licensed Material at any time; however, doing so 368 | will not terminate this Public License. 369 | 370 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public 371 | License. 372 | 373 | 374 | Section 7 -- Other Terms and Conditions. 
375 | 376 | a. The Licensor shall not be bound by any additional or different 377 | terms or conditions communicated by You unless expressly agreed. 378 | 379 | b. Any arrangements, understandings, or agreements regarding the 380 | Licensed Material not stated herein are separate from and 381 | independent of the terms and conditions of this Public License. 382 | 383 | 384 | Section 8 -- Interpretation. 385 | 386 | a. For the avoidance of doubt, this Public License does not, and 387 | shall not be interpreted to, reduce, limit, restrict, or impose 388 | conditions on any use of the Licensed Material that could lawfully 389 | be made without permission under this Public License. 390 | 391 | b. To the extent possible, if any provision of this Public License is 392 | deemed unenforceable, it shall be automatically reformed to the 393 | minimum extent necessary to make it enforceable. If the provision 394 | cannot be reformed, it shall be severed from this Public License 395 | without affecting the enforceability of the remaining terms and 396 | conditions. 397 | 398 | c. No term or condition of this Public License will be waived and no 399 | failure to comply consented to unless expressly agreed to by the 400 | Licensor. 401 | 402 | d. Nothing in this Public License constitutes or may be interpreted 403 | as a limitation upon, or waiver of, any privileges and immunities 404 | that apply to the Licensor or You, including from the legal 405 | processes of any jurisdiction or authority. 406 | 407 | 408 | ======================================================================= 409 | 410 | Creative Commons is not a party to its public 411 | licenses. Notwithstanding, Creative Commons may elect to apply one of 412 | its public licenses to material it publishes and in those instances 413 | will be considered the “Licensor.” The text of the Creative Commons 414 | public licenses is dedicated to the public domain under the CC0 Public 415 | Domain Dedication. Except for the limited purpose of indicating that 416 | material is shared under a Creative Commons public license or as 417 | otherwise permitted by the Creative Commons policies published at 418 | creativecommons.org/policies, Creative Commons does not authorize the 419 | use of the trademark "Creative Commons" or any other trademark or logo 420 | of Creative Commons without its prior written consent including, 421 | without limitation, in connection with any unauthorized modifications 422 | to any of its public licenses or any other arrangements, 423 | understandings, or agreements concerning use of licensed material. For 424 | the avoidance of doubt, this paragraph does not form part of the 425 | public licenses. 426 | 427 | Creative Commons may be contacted at creativecommons.org. 428 | -------------------------------------------------------------------------------- /README_ZH.md: -------------------------------------------------------------------------------- 1 |

2 | 🤗 HF Repo • 📃 Paper 3 |

4 | 5 | Read this in [English](README.md). 6 | 7 | # _LV_-Eval: 5个长度等级、最长支持256k的长文本评测基准 8 | 9 | **_LV_-Eval**是一个具备5个长度等级(16k、32k、64k、128k和256k)、最大文本测试长度达到256k的长文本评测基准。**_LV_-Eval**的平均文本长度达到102,380字,最小/最大文本长度为11,896/387,406字。**_LV_-Eval**主要有两类评测任务——单跳QA和多跳QA,共包含11个涵盖中英文的评测数据子集。**_LV_-Eval**设计时引入3个关键技术:干扰事实插入(**C**onfusing **F**acts **I**nsertion,CFI)提高挑战性,关键词和短语替换(**K**eyword and **P**hrase **R**eplacement,KPR)减少信息泄漏,以及基于关键词召回的评测指标(**A**nswer **K**eywords,AK,指代结合答案关键词和字词黑名单的评价指标)提高评测数值客观性。我们希望*LV*-Eval为未来长文本大语言模型的研究发展提供有价值的性能参考。 10 | 11 | ## 关键特性 12 | 13 | * **超长文本长度**: **_LV_-Eval**由5个长度等级构成,分别是16k、32k、64k、128k以及256k。同一数据集在不同长度等级下具有相同的问答对集合,只是构成各长度等级的上下文长度不同。我们的目的是保持问答对一致的情况下,充分测试模型在不同长度等级上下文中的性能表现,更可控地评估模型的长文本能力。 14 | * **结合混淆和干扰信息来提升评测难度**: 构建测试数据的过程中,我们将问答相关文档和无关文档混合拼接起来构成测试文档。该构建方式在扩展文本长度的同时,可有效评测模型从冗长混淆文本中提取关键信息的能力。此外,我们还使用GPT-4生成多个干扰信息,并在人工检查后随机插入到测试文档中,以评测模型在有相似事实描述的干扰下保持准确推理的能力。 15 | * **替换数据中的关键信息以减少信息泄漏**: 为了解决长文本能力评测中由于信息泄漏而引起的指标虚高问题,我们采用关键词和短语替换的方式处理数据的上下文以及问答对,替换后的信息不再是公共知识,也在很大程度上与数据源的原始信息不同。所有的替换词和短语标注都由人类标注员完成。这样一来,**_LV_-Eval**能够严格要求被测模型根据数据中实际提供的上下文信息来回答问题,而非通过“背题”或者预训练阶段的常识记忆的方式来回答问题。 16 | * **基于关键词召回的指标可更客观公正地评测模型性能**: 目前已有的评测指标(如F1分、ROUGH等)存在受回答格式和无关字词干扰的问题,容易导致评测结果虚高。为解决这个问题,我们人工标注了答案关键词和字词黑名单。答案关键词是从原始答案中提取的最具回答信息量的词汇或短语,而字词黑名单主要包含一些无信息量的代词、助词,比如“的”、“和”、“了”等。评测指标的计算被设计为两阶段过程,以F1分数为例:第一阶段先计算模型回答对答案关键词的召回分数,如果分数低于预设阈值,则直接计0分;如果召回分数高于阈值,则进一步计算模型回答与完整答案的F1分数——首先将字词黑名单中的词从回答和答案中过滤掉,再正常进行F1分数计算。这样一来,评测指标可使得模型得分更加客观公正。 17 | 18 | ## **_LV_-Eval**总览 19 | 在下面的表格中,CFI是**C**onfusing **F**acts **I**nsertion的缩写,表示该数据集插入了干扰事实,KPR是**K**eyword and **P**hrase **R**eplacement的缩写,表示该数据集进行了关键词和短语替换,AK是**A**nswer **K**eywords的缩写,表示该数据集标注了答案中的关键词,用于基于关键词召回的指标计算。 20 | 21 | #### 单跳QA 22 | 单跳QA任务中,支持回答问题的证据或事实仅出现在上下文中的某一个位置。 23 | 24 | | 数据集 | CFI | KPR数据量 | AK | 语言 | QA对数量 | 文本数量 | 25 | |:---------------------:|:---:|-------|:--:|:--------:|:----------:|:----------:| 26 | | loogle-SD-mixup | | | ✔ | 英 | 160 | 800 | 27 | | cmrc-mixup | | 786 | | 中 | 200 | 1,000 | 28 | | multifieldqa-en-mixup | ✔ | 476 | ✔ | 英 | 101 | 505 | 29 | | multifieldqa-zh-mixup | ✔ | 424 | ✔ | 中 | 133 | 665 | 30 | | factrecall-en | ✔ | 3 | ✔ | 英 | 1 | 200 * 5 | 31 | | factrecall-zh | ✔ | 3 | ✔ | 中 | 1 | 200 * 5 | 32 | 33 | **factrecall-en**和**factrecall-zh**被设计用于“大海捞针”压力测试,因此其所有数据的QA对保持一致。 34 | 35 | #### 多跳QA 36 | 多跳QA任务中,支持回答问题的证据或事实会出现在上下文中多个不同的位置,需要汇总多处关键信息才能得到正确答案。 37 | 38 | | 数据集 | CFI | KPR数据量 | AK | 语言 | QA对数量 | 文本数量 | 39 | |:---------------------:|:---:|-------|:--:|:--------:|:----------:|:----------:| 40 | | dureader-mixup | | | | 中 | 176 | 880 | 41 | | loogle-CR-mixup | | | ✔ | 英 | 99 | 495 | 42 | | loogle-MR-mixup | | | ✔ | 英 | 139 | 695 | 43 | | hotpotwikiqa-mixup | ✔ | 232 | ✔ | 英 | 124 | 620 | 44 | | lic-mixup | ✔ | | ✔ | 中 | 197 | 985 | 45 | 46 | ## 目录 47 | - [排行榜](#排行榜) 48 | - [在***LV*-Eval**上评测你的模型](#在***LV*-Eval**上评测你的模型) 49 | - [各数据集上的详细评测结果](#各数据集上的详细评测结果) 50 | - [许可](#许可) 51 | - [引用](#引用) 52 | 53 | 54 | ## 排行榜 55 | 以下是各模型不同长度等级下在所有任务上的平均得分(%),我们共评测了2个商用模型和8个开源模型。 56 | 57 | #### 已评测的模型 58 | | 模型名称 | 是否监督微调 | 上下文窗口长度 | HuggingFace / API 源 | 59 | |:----------------------:|:----------:|:--------------:|:----------------------------------------:| 60 | | Llama2-7B-Chat-hf | ✔ | $4k$ | meta-llama/Llama-2-7b-chat-hf | 61 | | Qwen-7B-8k-Chat | ✔ | $8k$ | Qwen/Qwen-7B-Chat | 62 | | Vicuna-7B-16k-v1.5 | ✔ | $16k$ | lmsys/vicuna-7b-v1.5-16k | 63 | | ChatGLM3-6B-32k | ✔ | $32k$ | THUDM/chatglm3-6b-32k | 64 | | Llama2-7B-32k-Instruct | ✔ | $32k$ | togethercomputer/Llama-2-7B-32K-Instruct | 65 | |
BlueLM-7B-32k-Chat | ✔ | $32k$ | vivo-ai/BlueLM-7B-Chat-32K | 66 | | LongChat-7B-32k-v1.5 | ✔ | $32k$ | lmsys/longchat-7b-v1.5-32k | 67 | | Yi-6B-200k | | $200k$ | 01-ai/Yi-6B-200K | 68 | | GPT-4-8k | ✔ | $8k$ | gpt-4-0613 | 69 | | GPT-3.5-16k | ✔ | $16k$ | gpt-3.5-turbo-1106 | 70 | 71 | #### 整体结果 72 | ![](info/merge_res.png) 73 | 74 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 75 | |:----------------------:|:------:|:------:|:------:|:------:|:------:| 76 | | ChatGLM3-6B-32k | 30.70 | 26.62 | 17.62 | 11.56 | 7.17 | 77 | | BlueLM-7B-32k-Chat | 24.09 | 16.80 | 9.22 | 6.51 | 4.77 | 78 | | Yi-6B-200k | 13.73 | 11.95 | 9.82 | 8.24 | 5.28 | 79 | | LongChat-7B-32k-v1.5 | 13.54 | 10.70 | 6.80 | 5.35 | 4.22 | 80 | | Llama2-7B-32k-Instruct | 13.66 | 10.07 | 6.03 | 4.43 | 2.87 | 81 | | Qwen-7B-8k-Chat | 7.90 | 4.86 | 3.88 | 3.00 | 2.71 | 82 | | Vicuna-7B-16k-v1.5 | 5.77 | 3.90 | 2.62 | 2.07 | 1.92 | 83 | | Llama2-7B-Chat-hf | 4.18 | 2.19 | 1.81 | 1.45 | 1.10 | 84 | | GPT-3.5-16k | 14.09 | 8.19 | 4.94 | 3.21 | 2.23 | 85 | | GPT-4-8k | 18.27 | 10.60 | 6.84 | 4.08 | 2.54 | 86 | 87 | 88 | ## 在***LV*-Eval**上评测你的模型 89 | 90 | #### 加载数据 91 | ```python 92 | from datasets import load_dataset 93 | 94 | DATASET_NAMES = [ 95 | "hotpotwikiqa_mixup", "loogle_SD_mixup", "loogle_CR_mixup", "loogle_MIR_mixup", \ 96 | "multifieldqa_en_mixup", "multifieldqa_zh_mixup", "factrecall_en", "factrecall_zh", \ 97 | "cmrc_mixup", "lic_mixup", "dureader_mixup" 98 | ] 99 | 100 | DATASET_LENGTH_LEVEL = [ 101 | '16k', '32k', '64k', '128k', '256k' 102 | ] 103 | 104 | def get_dataset_names(dataset_names, length_levels): 105 | datasets = [] 106 | for name in dataset_names: 107 | for length in length_levels: 108 | datasets.append(f"{name}_{length}") 109 | return datasets 110 | 111 | for dataset in get_dataset_names(DATASET_NAMES, DATASET_LENGTH_LEVEL): 112 | data = load_dataset("Infinigence/LVEval", dataset, split='test', token=True) 113 | ``` 114 | 115 | 你也可以将数据下载到本地后进行加载,数据链接:`https://huggingface.co/datasets/Infinigence/LVEval/resolve/main/{task_name}.zip` 116 | 117 | 注意,请指定你想下载的子任务的名称{task_name}。 118 | 119 | 例如,如果你想下载hotpotwikiqa_mixup,需要访问这个链接:https://huggingface.co/datasets/Infinigence/LVEval/resolve/main/hotpotwikiqa_mixup.zip 120 | 121 | #### 数据格式 122 | **_LV_-Eval**中的所有数据都遵循以下格式。 123 | ```json 124 | { 125 | "input": "问题", 126 | "context": "上下文", 127 | "answers": "答案", 128 | "length": "上下文文本长度 (中文计字符数,英文计词数)", 129 | "dataset": "数据集名称", 130 | "language": "数据语言类型", 131 | "answer_keywords": "从答案中过滤得到的关键词", 132 | "confusing_facts": "插入到上下文中的干扰事实描述,增加测试难度" 133 | } 134 | ``` 135 | 136 | #### 评测 137 | 通过pip安装所需的Python依赖: `pip install -r requirements.txt`。 138 | 139 | 通常,我们用多卡数据并行的模式运行评测,以节省评测时间。需要在shell脚本中依次输入的命令参数有:模型路径、模型名称(注意名称应与[utils.py](utils.py)中的build_chat函数适配,以定制提示词格式,可参考下文的示例)、以及模型最大长度(-500以预留输出窗口)。例如: 140 | ```bash 141 | bash batch_eval_multiple.sh /home/user/workspace/public_models/chatglm3-6b-32k chatglm3 31500 142 | ``` 143 | 对于上下文窗口特别长或者本身尺寸比较大的模型,为避免显存爆炸,我们建议用Hugging Face自带的'auto'模型并行模式运行评测。例如: 144 | ```bash 145 | bash batch_eval_single.sh /home/user/workspace/public_models/Yi-6B-200K yi-200k 199500 146 | ``` 147 | 此外,我们也可以分步骤进行评测。先运行[prediction.py](prediction.py)得到前向推理的预测结果。需要通过`--model-path`选择待测模型、`--model-name`定义模型名称、`--model-max-len`输入模型最大窗口长度,以及`--output-dir`定义结果文件输出路径。例如: 148 | ```bash 149 | python prediction.py --model-path /home/user/workspace/public_models/chatglm3-6b-32k --model-name chatglm3 --model-max-len 31500 --output-dir ./outputs/ 150 | ``` 151 | 前向推理的预测结果将保存在`[output dir]/[model name]`中。 152 |
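
上文提到,提示词格式由[utils.py](utils.py)中的build_chat函数按模型名称定制。下面给出一个假设性的极简示意:函数签名、分支名称与对话模板均仅用于说明思路,其中"my-model"是假设的自定义模型名称,实际实现请以仓库中的[utils.py](utils.py)为准。

```python
# 假设性示意:按模型名称为原始提示词套用相应的对话模板
# (函数签名与模板均为示意,实际实现请参考仓库中的 utils.py)
def build_chat(tokenizer, prompt, model_name):
    # tokenizer 仅在部分模型的模板构造中才会用到,此示意中未使用
    if "vicuna" in model_name or "longchat" in model_name:
        # Vicuna/LongChat 风格的对话模板
        prompt = f"USER: {prompt} ASSISTANT:"
    elif "my-model" in model_name:  # 为自定义模型新增的分支("my-model"为假设名称)
        prompt = f"<|user|>\n{prompt}\n<|assistant|>\n"
    return prompt
```

新增分支后,把对应的模型名称(如my-model)作为第二个参数传给batch_eval_multiple.sh,预测阶段即可套用该提示词格式。
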
然后我们可运行[evaluation.py](evaluation.py)来对上述得到的预测结果进行打分,得到**_LV_-Eval**最终的评测结果。需要通过`--input-dir`来定义预测结果的路径。例如: 153 | ```bash 154 | python evaluation.py --input-dir ./outputs/chatglm3/ 155 | ``` 156 | 而后,我们将在shell命令行中看到打印的评测结果,并且在预测结果的同级目录下得到`results.json`和`results.csv`的评测结果文件。 157 | 158 | 更多定制的需求可通过[config.py](config.py)(选择想要测试的数据子集以及长度等级)和[utils.py](utils.py)(与模型相关的提示格式定制)来选择和定义。 159 | 160 | 此外,对于商用模型,我们以另一个脚本来调用其API进行评测。例如评测OpenAI的GPT系列模型,我们需要设置模型名称和模型最大窗口长度。 161 | 注意,评测前需要设置OPENAI_API_KEY环境变量。 162 | ```bash 163 | bash batch_eval_gpt_single.sh gpt-4-1106-preview 127500 164 | ``` 165 | 166 | 167 | ## 各数据集上的详细评测结果 168 | 以下是各模型不同任务下在所有长度等级上的平均得分(%)。 169 | 170 | #### 单跳QA 171 | ![](info/bar_perf_sqa.png) 172 | 173 | | 模型名称 | loogle-SD-mixup | cmrc-mixup | multifieldqa-en-mixup | multifieldqa-zh-mixup | factrecall-en | factrecall-zh | 174 | |:----------------------:|:---------------:|:----------:|:---------------------:|:---------------------:|:-------------:|:-------------:| 175 | | ChatGLM3-6B-32k | 22.29 | 28.16 | 12.93 | 18.99 | 52.60 | 6.10 | 176 | | BlueLM-7B-32k-Chat | 13.02 | 17.53 | 7.32 | 11.49 | 24.03 | 18.80 | 177 | | Yi-6B-200k | 29.17 | 1.27 | 7.75 | 1.84 | 22.28 | 13.95 | 178 | | LongChat-7B-32k-v1.5 | 14.56 | 9.65 | 6.95 | 5.86 | 9.14 | 4.28 | 179 | | Llama2-7B-32k-Instruct | 7.63 | 6.12 | 4.63 | 2.56 | 38.09 | 0.92 | 180 | | Qwen-7B-8k-Chat | 4.78 | 5.81 | 4.52 | 4.57 | 0.80 | 5.45 | 181 | | Vicuna-7B-16k-v1.5 | 4.68 | 6.04 | 3.44 | 2.89 | 0.09 | 0 | 182 | | Llama2-7B-Chat-hf | 3.04 | 1.97 | 3.99 | 1.48 | 0.45 | 0 | 183 | | GPT-3.5-16k | 13.99 | 5.16 | 9.78 | 8.51 | 2.87 | 5.28 | 184 | | GPT-4-8k | 11.13 | 5.96 | 10.16 | 7.29 | 9.25 | 11.39 | 185 | 186 | #### 多跳QA 187 | ![](info/bar_perf_mqa.png) 188 | 189 | | 模型名称 | dureader-mixup | loogle-CR-mixup | loogle-MR-mixup | hotpotwikiqa-mixup | lic-mixup | 190 | |:----------------------:|:--------------:|:---------------:|:---------------:|:------------------:|:---------:| 191 | | ChatGLM3-6B-32k | 19.57 | 10.17 | 9.10 | 11.15 | 15.02 | 192 | | BlueLM-7B-32k-Chat | 14.61 | 5.04 | 2.87 | 11.22 | 9.11 | 193 | | Yi-6B-200k | 2.83 | 5.82 | 4.41 | 12.42 | 6.12 | 194 | | LongChat-7B-32k-v1.5 | 10.34 | 8.59 | 6.03 | 6.98 | 6.92 | 195 | | Llama2-7B-32k-Instruct | 9.57 | 2.51 | 1.92 | 2.31 | 5.27 | 196 | | Qwen-7B-8k-Chat | 10.42 | 3.14 | 2.70 | 2.23 | 4.77 | 197 | | Vicuna-7B-16k-v1.5 | 7.18 | 3.26 | 2.31 | 1.95 | 4.00 | 198 | | Llama2-7B-Chat-hf | 5.49 | 2.62 | 1.80 | 1.74 | 1.02 | 199 | | GPT-3.5-16k | 4.87 | 6.09 | 5.87 | 5.88 | 3.53 | 200 | | GPT-4-8k | 12.07 | 7.26 | 5.91 | 7.46 | 5.28 | 201 | 202 | 以下是各模型在各个数据集下5个长度等级中的详细得分。 203 | 204 | #### loogle-SD-mixup 205 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 206 | |:----------------------:|-------|-------|-------|--------|--------| 207 | | ChatGLM3-6B-32k | 41.82 | 30.31 | 19.07 | 11.34 | 8.92 | 208 | | BlueLM-7B-32k-Chat | 34.34 | 15.10 | 4.95 | 5.32 | 5.41 | 209 | | Yi-6B-200k | 39.56 | 36.48 | 31.71 | 25.71 | 12.37 | 210 | | LongChat-7B-32k-v1.5 | 27.42 | 18.21 | 12.09 | 9.11 | 5.97 | 211 | | Llama2-7B-32k-Instruct | 13.94 | 10.58 | 5.53 | 4.80 | 3.30 | 212 | | Qwen-7B-8k-Chat | 10.54 | 4.70 | 2.40 | 3.25 | 3.02 | 213 | | Vicuna-7B-16k-v1.5 | 8.79 | 4.90 | 3.07 | 4.24 | 2.39 | 214 | | Llama2-7B-Chat-hf | 6.75 | 2.61 | 2.58 | 2.04 | 1.24 | 215 | | GPT-3.5-16k | 31.67 | 18.56 | 10.41 | 5.74 | 3.56 | 216 | | GPT-4-8k | 27.01 | 14.01 | 8.00 | 5.14 | 1.48 | 217 | 218 | #### cmrc-mixup 219 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 220 |
|:----------------------:|-------|-------|-------|--------|--------| 221 | | ChatGLM3-6B-32k | 51.21 | 46.34 | 20.71 | 14.16 | 8.38 | 222 | | BlueLM-7B-32k-Chat | 45.89 | 19.53 | 10.66 | 7.06 | 4.51 | 223 | | Yi-6B-200k | 1.05 | 0.35 | 0.84 | 1.58 | 2.54 | 224 | | LongChat-7B-32k-v1.5 | 20.99 | 10.77 | 8.97 | 3.77 | 3.75 | 225 | | Llama2-7B-32k-Instruct | 13.86 | 7.31 | 4.10 | 2.95 | 2.40 | 226 | | Qwen-7B-8k-Chat | 11.13 | 5.32 | 4.68 | 3.81 | 4.09 | 227 | | Vicuna-7B-16k-v1.5 | 11.75 | 6.55 | 5.04 | 2.75 | 4.13 | 228 | | Llama2-7B-Chat-hf | 3.85 | 1.08 | 1.72 | 1.64 | 1.54 | 229 | | GPT-3.5-16k | 12.19 | 6.00 | 3.57 | 2.73 | 1.32 | 230 | | GPT-4-8k | 14.67 | 3.33 | 5.31 | 3.81 | 2.68 | 231 | 232 | #### multifieldqa-en-mixup 233 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 234 | |:----------------------:|-------|-------|-------|--------|--------| 235 | | ChatGLM3-6B-32k | 25.40 | 12.78 | 12.32 | 9.89 | 4.24 | 236 | | BlueLM-7B-32k-Chat | 11.82 | 6.34 | 8.38 | 5.29 | 4.78 | 237 | | Yi-6B-200k | 10.01 | 9.24 | 8.83 | 5.98 | 4.69 | 238 | | LongChat-7B-32k-v1.5 | 12.02 | 7.58 | 7.84 | 3.11 | 4.22 | 239 | | Llama2-7B-32k-Instruct | 8.03 | 4.96 | 4.12 | 3.90 | 2.13 | 240 | | Qwen-7B-8k-Chat | 7.66 | 3.61 | 5.23 | 3.64 | 2.44 | 241 | | Vicuna-7B-16k-v1.5 | 6.29 | 4.32 | 2.79 | 2.51 | 1.28 | 242 | | Llama2-7B-Chat-hf | 8.81 | 5.55 | 1.58 | 2.54 | 1.49 | 243 | | GPT-3.5-16k | 18.78 | 11.59 | 7.38 | 7.95 | 3.21 | 244 | | GPT-4-8k | 19.00 | 12.69 | 8.30 | 7.25 | 3.54 | 245 | 246 | #### multifieldqa-zh-mixup 247 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 248 | |:----------------------:|-------|-------|-------|--------|--------| 249 | | ChatGLM3-6B-32k | 32.38 | 24.48 | 20.97 | 10.00 | 7.05 | 250 | | BlueLM-7B-32k-Chat | 22.05 | 17.64 | 7.36 | 5.90 | 4.48 | 251 | | Yi-6B-200k | 2.85 | 0.75 | 1.89 | 2.11 | 1.58 | 252 | | LongChat-7B-32k-v1.5 | 9.81 | 8.82 | 3.23 | 3.54 | 3.92 | 253 | | Llama2-7B-32k-Instruct | 4.55 | 3.93 | 1.45 | 1.74 | 1.15 | 254 | | Qwen-7B-8k-Chat | 8.82 | 5.68 | 3.01 | 2.84 | 2.52 | 255 | | Vicuna-7B-16k-v1.5 | 5.82 | 4.45 | 2.03 | 0.88 | 1.26 | 256 | | Llama2-7B-Chat-hf | 4.72 | 1.21 | 0.68 | 0.24 | 0.56 | 257 | | GPT-3.5-16k | 18.94 | 12.21 | 6.29 | 2.94 | 2.15 | 258 | | GPT-4-8k | 17.61 | 11.18 | 4.99 | 1.76 | 0.92 | 259 | 260 | #### factrecall-en 261 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 262 | |:----------------------:|-------|-------|-------|--------|--------| 263 | | ChatGLM3-6B-32k | 91.50 | 89.00 | 46.00 | 24.00 | 12.5 | 264 | | BlueLM-7B-32k-Chat | 58.50 | 32.17 | 15.50 | 9.00 | 5.00 | 265 | | Yi-6B-200k | 24.88 | 23.09 | 24.96 | 22.04 | 16.44 | 266 | | LongChat-7B-32k-v1.5 | 9.22 | 14.33 | 8.31 | 7.86 | 6.00 | 267 | | Llama2-7B-32k-Instruct | 75.20 | 56.00 | 33.00 | 17.85 | 8.40 | 268 | | Qwen-7B-8k-Chat | 1.77 | 1.12 | 0.71 | 0.18 | 0.22 | 269 | | Vicuna-7B-16k-v1.5 | 0 | 0 | 0 | 0.25 | 0.20 | 270 | | Llama2-7B-Chat-hf | 1.08 | 0.46 | 0.31 | 0.23 | 0.15 | 271 | | GPT-3.5-16k | 8.25 | 3.27 | 1.80 | 0.60 | 0.45 | 272 | | GPT-4-8k | 23.40 | 11.84 | 5.21 | 4.03 | 1.79 | 273 | 274 | #### factrecall-zh 275 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 276 | |:----------------------:|-------|-------|-------|--------|--------| 277 | | ChatGLM3-6B-32k | 0 | 2.00 | 12.50 | 9.00 | 7.00 | 278 | | BlueLM-7B-32k-Chat | 19.00 | 37.00 | 20.00 | 12.50 | 5.50 | 279 | | Yi-6B-200k | 25.73 | 16.86 | 12.41 | 10.13 | 4.62 | 280 | | LongChat-7B-32k-v1.5 | 7.20 | 5.00 | 3.50 | 3.70 | 2.00 | 281 | | Llama2-7B-32k-Instruct | 2.55 | 0.74 | 0.53 | 0.49 | 
0.29 | 282 | | Qwen-7B-8k-Chat | 15.75 | 6.00 | 3.50 | 1.50 | 0.50 | 283 | | Vicuna-7B-16k-v1.5 | 0 | 0 | 0 | 0 | 0 | 284 | | Llama2-7B-Chat-hf | 0 | 0 | 0 | 0 | 0 | 285 | | GPT-3.5-16k | 14.51 | 6.70 | 2.49 | 1.72 | 0.98 | 286 | | GPT-4-8k | 28.03 | 15.24 | 8.08 | 3.58 | 2.00 | 287 | 288 | #### dureader-mixup 289 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 290 | |:----------------------:|-------|-------|-------|--------|--------| 291 | | ChatGLM3-6B-32k | 23.99 | 25.21 | 22.01 | 17.94 | 8.72 | 292 | | BlueLM-7B-32k-Chat | 19.40 | 19.74 | 14.44 | 10.95 | 8.51 | 293 | | Yi-6B-200k | 2.87 | 2.98 | 2.88 | 2.36 | 3.06 | 294 | | LongChat-7B-32k-v1.5 | 13.44 | 11.57 | 9.23 | 9.51 | 7.96 | 295 | | Llama2-7B-32k-Instruct | 11.82 | 10.65 | 8.58 | 9.34 | 7.48 | 296 | | Qwen-7B-8k-Chat | 12.00 | 12.80 | 10.48 | 8.15 | 8.65 | 297 | | Vicuna-7B-16k-v1.5 | 9.67 | 7.65 | 6.62 | 6.25 | 5.70 | 298 | | Llama2-7B-Chat-hf | 7.21 | 5.42 | 5.59 | 4.78 | 4.45 | 299 | | GPT-3.5-16k | 8.01 | 5.26 | 4.26 | 3.30 | 3.50 | 300 | | GPT-4-8k | 19.14 | 13.64 | 12.66 | 8.19 | 6.71 | 301 | 302 | #### loogle-CR-mixup 303 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 304 | |:----------------------:|-------|-------|-------|--------|--------| 305 | | ChatGLM3-6B-32k | 14.41 | 14.10 | 9.92 | 6.95 | 5.46 | 306 | | BlueLM-7B-32k-Chat | 9.01 | 7.36 | 3.81 | 2.40 | 2.60 | 307 | | Yi-6B-200k | 8.25 | 8.83 | 4.73 | 4.05 | 3.23 | 308 | | LongChat-7B-32k-v1.5 | 11.25 | 11.17 | 9.31 | 6.19 | 5.03 | 309 | | Llama2-7B-32k-Instruct | 3.11 | 2.82 | 2.01 | 2.46 | 2.16 | 310 | | Qwen-7B-8k-Chat | 5.48 | 3.30 | 3.82 | 1.14 | 1.94 | 311 | | Vicuna-7B-16k-v1.5 | 5.00 | 4.25 | 3.76 | 1.99 | 1.28 | 312 | | Llama2-7B-Chat-hf | 3.69 | 3.29 | 3.13 | 2.19 | 0.81 | 313 | | GPT-3.5-16k | 10.04 | 8.39 | 5.58 | 3.08 | 3.37 | 314 | | GPT-4-8k | 12.68 | 10.40 | 6.48 | 2.83 | 3.91 | 315 | 316 | #### loogle-MR-mixup 317 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 318 | |:----------------------:|-------|-------|-------|--------|--------| 319 | | ChatGLM3-6B-32k | 15.83 | 11.62 | 7.00 | 7.24 | 3.82 | 320 | | BlueLM-7B-32k-Chat | 4.90 | 3.14 | 1.68 | 2.46 | 2.19 | 321 | | Yi-6B-200k | 6.94 | 7.67 | 2.69 | 3.44 | 1.32 | 322 | | LongChat-7B-32k-v1.5 | 10.53 | 9.51 | 3.04 | 4.05 | 3.01 | 323 | | Llama2-7B-32k-Instruct | 3.12 | 2.61 | 1.44 | 1.47 | 0.95 | 324 | | Qwen-7B-8k-Chat | 4.93 | 2.95 | 2.37 | 1.80 | 1.46 | 325 | | Vicuna-7B-16k-v1.5 | 5.17 | 3.83 | 0.96 | 0.55 | 1.06 | 326 | | Llama2-7B-Chat-hf | 3.37 | 2.20 | 2.05 | 1.04 | 0.33 | 327 | | GPT-3.5-16k | 12.95 | 7.03 | 6.23 | 2.13 | 1.00 | 328 | | GPT-4-8k | 12.24 | 7.83 | 6.26 | 2.30 | 0.90 | 329 | 330 | #### hotpotwikiqa-mixup 331 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 332 | |:----------------------:|-------|-------|-------|--------|--------| 333 | | ChatGLM3-6B-32k | 16.98 | 14.76 | 9.02 | 8.31 | 6.68 | 334 | | BlueLM-7B-32k-Chat | 19.31 | 14.07 | 9.63 | 7.71 | 5.40 | 335 | | Yi-6B-200k | 23.55 | 18.94 | 9.94 | 7.66 | 2.01 | 336 | | LongChat-7B-32k-v1.5 | 11.57 | 10.71 | 4.77 | 5.49 | 2.37 | 337 | | Llama2-7B-32k-Instruct | 3.54 | 2.31 | 2.20 | 1.86 | 1.62 | 338 | | Qwen-7B-8k-Chat | 2.78 | 1.89 | 2.27 | 2.37 | 1.82 | 339 | | Vicuna-7B-16k-v1.5 | 2.63 | 2.19 | 2.05 | 1.04 | 1.85 | 340 | | Llama2-7B-Chat-hf | 3.99 | 1.30 | 1.84 | 0.81 | 0.75 | 341 | | GPT-3.5-16k | 11.96 | 6.66 | 3.27 | 4.23 | 3.30 | 342 | | GPT-4-8k | 13.51 | 10.62 | 6.67 | 4.13 | 2.36 | 343 | 344 | #### lic-mixup 345 | | 模型名称 | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 346 | 
|:----------------------:|-------|-------|-------|--------|--------| 347 | | ChatGLM3-6B-32k | 24.15 | 22.27 | 14.33 | 8.30 | 6.07 | 348 | | BlueLM-7B-32k-Chat | 20.75 | 12.68 | 5.00 | 3.03 | 4.11 | 349 | | Yi-6B-200k | 5.37 | 6.25 | 7.19 | 5.56 | 6.24 | 350 | | LongChat-7B-32k-v1.5 | 15.45 | 10.02 | 4.54 | 2.47 | 2.14 | 351 | | Llama2-7B-32k-Instruct | 10.55 | 8.87 | 3.41 | 1.85 | 1.66 | 352 | | Qwen-7B-8k-Chat | 6.05 | 6.07 | 4.21 | 4.34 | 3.19 | 353 | | Vicuna-7B-16k-v1.5 | 8.34 | 4.81 | 2.52 | 2.36 | 1.99 | 354 | | Llama2-7B-Chat-hf | 2.48 | 0.99 | 0.48 | 0.42 | 0.73 | 355 | | GPT-3.5-16k | 7.65 | 4.42 | 3.07 | 0.87 | 1.65 | 356 | | GPT-4-8k | 13.69 | 5.86 | 3.23 | 1.90 | 1.70 | 357 | 358 | 359 | ## 许可 360 | **_LV_-Eval**中cmrc-mixup和lic-mixup数据集遵循`CC-BY-SA-4.0`许可,其他数据集遵循`MIT`许可。 361 | 362 | 363 | ## 引用 364 | ``` 365 | @misc{yuan2024lveval, 366 | title={LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K}, 367 | author={Tao Yuan and Xuefei Ning and Dong Zhou and Zhijie Yang and Shiyao Li and Minghui Zhuang and Zheyue Tan and Zhuyu Yao and Dahua Lin and Boxun Li and Guohao Dai and Shengen Yan and Yu Wang}, 368 | year={2024}, 369 | eprint={2402.05136}, 370 | archivePrefix={arXiv}, 371 | primaryClass={cs.CL} 372 | } 373 | ``` 374 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 |

2 | 🤗 HF Repo • 📃 Paper 3 |

4 | 5 | 阅读[中文版本](README_ZH.md)。 6 | 7 | # _LV_-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K 8 | 9 | **_LV_-Eval** is a challenging long-context benchmark with five length levels (16k, 32k, 64k, 128k, and 256k) reaching up to 256k words. The average number of words is 102,380, and the Min/Max number of words is 11,896/387,406. **_LV_-Eval** features two main tasks, single-hop QA and multi-hop QA, comprising 11 bilingual datasets. The design of **_LV_-Eval** has incorporated three key techniques, namely confusing facts insertion (CFI), keyword and phrase replacement (KPR), and keyword-recall-based metric design (AK, short for metrics with Answer Keywords and a word blacklist), which jointly provide a challenging, knowledge-leakage-mitigated, and more accurate evaluation of the long-context capability of LLMs. We anticipate that **_LV_-Eval** will serve as a valuable resource for supporting future research on long-context LLMs. 10 | 11 | ## Key Characteristics 12 | 13 | * **Sufficiently long context length to evaluate state-of-the-art models**: **_LV_-Eval** comprises 5 length levels with word counts of 16k, 32k, 64k, 128k, and 256k. Test instances across these levels share the same set of question-answer (QA) pairs, and only differ in the context content and length. Testing on the same QA pairs with different context lengths facilitates a controllable evaluation of models' long-context ability. 14 | * **Incorporation of distraction and confusion to increase difficulty**: When constructing the context for each test instance, we mix up distracting documents and supporting documents. This approach evaluates the model's ability to pinpoint key information within a large body of distracting text. In addition, we insert confusing facts generated by GPT-4 and revised by human annotators into the context. This assesses the model's capability to accurately reason in the presence of interference. 15 | * **Keyword and phrase replacement to mitigate knowledge leakage**: To mitigate the biased evaluation of long-context ability caused by knowledge leakage, we apply keyword and phrase replacement in the context and QA pairs. The replacement rules are annotated by human annotators. In this way, **_LV_-Eval** requires LLMs to rely on their understanding of the long context to answer questions rather than relying on memorization or common-sense knowledge. 16 | * **Keyword-recall-based metric for more objective scoring**: Existing $N$-gram metrics such as the F1 score are sensitive to the format variations and non-informative words in the answer, which results in inaccurate scores. To address this, we manually annotate answer keywords and a blacklist of unrelated words. The answer keywords are the critical words or sentences extracted from original ground-truth (GT) answers, while the word blacklist contains common and non-informative words such as 'the', 'a', 'of', and so on. The metric calculation follows a two-stage procedure: the first stage calculates the recall of answer keywords; if the recall exceeds a certain threshold, the second stage will remove all the blacklisted words and then calculate the F1 score between the prediction and the GT answer. This metric design yields more objective scores (a minimal sketch is given below). 17 | 18 | ## Overview of **_LV_-Eval** 19 | In the following tables, CFI is short for **C**onfusing **F**acts **I**nsertion, KPR is short for **K**eyword and **P**hrase **R**eplacement, and AK is short for **A**nswer **K**eywords used in keyword-recall-based metrics.
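
Before the dataset tables, here is a minimal sketch of the two-stage, keyword-recall-based metric described in the key characteristics above. It is an illustrative simplification rather than the actual implementation in [metrics.py](metrics.py): the function names and the example threshold of 0.2 are assumptions made for illustration, and plain whitespace tokenization stands in for the language-aware tokenization (e.g., jieba for Chinese) used in practice.

```python
from collections import Counter

def keyword_recall(prediction: str, answer_keywords: list) -> float:
    """Stage 1: fraction of annotated answer keywords recalled by the prediction."""
    if not answer_keywords:
        return 0.0
    hits = sum(1 for kw in answer_keywords if kw in prediction)
    return hits / len(answer_keywords)

def blacklist_filtered_f1(prediction: str, answer: str, blacklist: set) -> float:
    """Stage 2: word-level F1 after removing blacklisted, non-informative words."""
    pred_tokens = [t for t in prediction.split() if t not in blacklist]
    gt_tokens = [t for t in answer.split() if t not in blacklist]
    if not pred_tokens or not gt_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(gt_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

def two_stage_f1(prediction, answer, answer_keywords, blacklist, threshold=0.2):
    # Gate: if too few answer keywords are recalled, the answer is judged wrong.
    if keyword_recall(prediction, answer_keywords) < threshold:
        return 0.0
    # Otherwise score the prediction against the full GT answer,
    # with blacklisted words removed from both sides.
    return blacklist_filtered_f1(prediction, answer, blacklist)
```

The gate is what prevents verbose but wrong answers from collecting partial credit: a prediction that never mentions the annotated keywords scores 0, no matter how many incidental words it shares with the GT answer.
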
20 | 21 | #### Single-hop QA 22 | In a single-hop QA task, only a single piece of evidence in the context is needed to derive the answer. 23 | 24 | | Dataset | CFI | \#KPR | AK | Language | \#QA pairs | \#Contexts | 25 | |:---------------------:|:---:|-------|:--:|:--------:|:----------:|:----------:| 26 | | loogle-SD-mixup | | | ✔ | en | 160 | 800 | 27 | | cmrc-mixup | | 786 | | zh | 200 | 1,000 | 28 | | multifieldqa-en-mixup | ✔ | 476 | ✔ | en | 101 | 505 | 29 | | multifieldqa-zh-mixup | ✔ | 424 | ✔ | zh | 133 | 665 | 30 | | factrecall-en | ✔ | 3 | ✔ | en | 1 | 200 * 5 | 31 | | factrecall-zh | ✔ | 3 | ✔ | zh | 1 | 200 * 5 | 32 | 33 | **factrecall-en** and **factrecall-zh** are designed for a pressure test of "needle in a haystack", so the QA pair is kept the same across all data instances. 34 | 35 | #### Multi-hop QA 36 | In multi-hop QA tasks, the reasoning to derive the answer needs to gather multiple pieces of information from various locations in the context. 37 | 38 | | Dataset | CFI | \#KPR | AK | Language | \#QA pairs | \#Contexts | 39 | |:---------------------:|:---:|-------|:--:|:--------:|:----------:|:----------:| 40 | | dureader-mixup | | | | zh | 176 | 880 | 41 | | loogle-CR-mixup | | | ✔ | en | 99 | 495 | 42 | | loogle-MR-mixup | | | ✔ | en | 139 | 695 | 43 | | hotpotwikiqa-mixup | ✔ | 232 | ✔ | en | 124 | 620 | 44 | | lic-mixup | ✔ | | ✔ | zh | 197 | 985 | 45 | 46 | ## Table of Contents 47 | - [Leaderboard](#leaderboard) 48 | - [Evaluate Your LLMs on **_LV_-Eval**](#evaluate-your-llms-on-lv-eval) 49 | - [Detailed Results on Each Dataset](#detailed-results-on-each-dataset) 50 | - [License](#license) 51 | - [Citation](#citation) 52 | 53 | 54 | ## Leaderboard 55 | Below are the average scores (%) over all tasks at the 5 length levels. We evaluate 2 commercial LLMs and 8 open-source LLMs.
56 | 57 | #### Evaluated LLMs 58 | | Model Name | SFT | Context Length | HuggingFace / API Endpoint | 59 | |:----------------------:|:----------:|:--------------:|:----------------------------------------:| 60 | | Llama2-7B-Chat-hf | ✔ | $4k$ | meta-llama/Llama-2-7b-chat-hf | 61 | | Qwen-7B-8k-Chat | ✔ | $8k$ | Qwen/Qwen-7B-Chat | 62 | | Vicuna-7B-16k-v1.5 | ✔ | $16k$ | lmsys/vicuna-7b-v1.5-16k | 63 | | ChatGLM3-6B-32k | ✔ | $32k$ | THUDM/chatglm3-6b-32k | 64 | | Llama2-7B-32k-Instruct | ✔ | $32k$ | togethercomputer/Llama-2-7B-32K-Instruct | 65 | | BlueLM-7B-32k-Chat | ✔ | $32k$ | vivo-ai/BlueLM-7B-Chat-32K | 66 | | LongChat-7B-32k-v1.5 | ✔ | $32k$ | lmsys/longchat-7b-v1.5-32k | 67 | | Yi-6B-200k | | $200k$ | 01-ai/Yi-6B-200K | 68 | | GPT-4-8k | ✔ | $8k$ | gpt-4-0613 | 69 | | GPT-3.5-16k | ✔ | $16k$ | gpt-3.5-turbo-1106 | 70 | 71 | #### Overall Result 72 | ![](info/merge_res.png) 73 | 74 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 75 | |:----------------------:|:------:|:------:|:------:|:------:|:------:| 76 | | ChatGLM3-6B-32k | 30.70 | 26.62 | 17.62 | 11.56 | 7.17 | 77 | | BlueLM-7B-32k-Chat | 24.09 | 16.80 | 9.22 | 6.51 | 4.77 | 78 | | Yi-6B-200k | 13.73 | 11.95 | 9.82 | 8.24 | 5.28 | 79 | | LongChat-7B-32k-v1.5 | 13.54 | 10.70 | 6.80 | 5.35 | 4.22 | 80 | | Llama2-7B-32k-Instruct | 13.66 | 10.07 | 6.03 | 4.43 | 2.87 | 81 | | Qwen-7B-8k-Chat | 7.90 | 4.86 | 3.88 | 3.00 | 2.71 | 82 | | Vicuna-7B-16k-v1.5 | 5.77 | 3.90 | 2.62 | 2.07 | 1.92 | 83 | | Llama2-7B-Chat-hf | 4.18 | 2.19 | 1.81 | 1.45 | 1.10 | 84 | | GPT-3.5-16k | 14.09 | 8.19 | 4.94 | 3.21 | 2.23 | 85 | | GPT-4-8k | 18.27 | 10.60 | 6.84 | 4.08 | 2.54 | 86 | 87 | 88 | ## Evaluate Your LLMs on **_LV_-Eval** 89 | 90 | 91 | #### Load Data 92 | ```python 93 | from datasets import load_dataset 94 | 95 | DATASET_NAMES = [ 96 | "hotpotwikiqa_mixup", "loogle_SD_mixup", "loogle_CR_mixup", "loogle_MIR_mixup", \ 97 | "multifieldqa_en_mixup", "multifieldqa_zh_mixup", "factrecall_en", "factrecall_zh", \ 98 | "cmrc_mixup", "lic_mixup", "dureader_mixup" 99 | ] 100 | 101 | DATASET_LENGTH_LEVEL = [ 102 | '16k', '32k', '64k', '128k', '256k' 103 | ] 104 | 105 | def get_dataset_names(dataset_names, length_levels): 106 | datasets = [] 107 | for name in dataset_names: 108 | for length in length_levels: 109 | datasets.append(f"{name}_{length}") 110 | return datasets 111 | 112 | for dataset in get_dataset_names(DATASET_NAMES, DATASET_LENGTH_LEVEL): 113 | data = load_dataset("Infinigence/LVEval", dataset, split='test', token=True) 114 | ``` 115 | 116 | Alternatively, you can download the data to your local folder from the following link: `https://huggingface.co/datasets/Infinigence/LVEval/resolve/main/{task_name}.zip` 117 | 118 | Remember to replace {task_name} with the name of the subset you want. 119 | 120 | For example, if you want to download the data for hotpotwikiqa_mixup, you can visit this link: https://huggingface.co/datasets/Infinigence/LVEval/resolve/main/hotpotwikiqa_mixup.zip 121 | 122 | #### Data Format 123 | All data in **_LV_-Eval** follows the format below.
124 | 125 | ```json 126 | { 127 | "input": "The input/command for the task, usually short, such as questions in QA, queries in Few-shot tasks, etc.", 128 | "context": "The documents input into the long-text task.", 129 | "answers": "A list of all ground-truth answers", 130 | "length": "Total length of the first three items (counted in characters for Chinese and words for English)", 131 | "dataset": "The name of the dataset to which this piece of data belongs", 132 | "language": "The language of this piece of data", 133 | "answer_keywords": "The key words or sentences manually filtered from the answers", 134 | "confusing_facts": "The confusing facts inserted into the context to make the evaluation more challenging." 135 | } 136 | ``` 137 | 138 | #### Evaluation 139 | Install the requirements with pip: `pip install -r requirements.txt`. 140 | 141 | Generally, we run evaluation in data parallel mode to save time. We need to pass the model path, the model name (which should match one of the names handled by the build_chat function in [utils.py](utils.py), so that the prompt format can be customized), and the model max length (-500 to reserve an output window) sequentially to the shell scripts. For example: 142 | ```bash 143 | bash batch_eval_multiple.sh /home/user/workspace/public_models/chatglm3-6b-32k chatglm3 31500 144 | ``` 145 | For models with extra-long context windows or large model sizes, we suggest running evaluation in HF 'auto' model parallel mode to avoid running out of GPU memory. For example: 146 | ```bash 147 | bash batch_eval_single.sh /home/user/workspace/public_models/Yi-6B-200K yi-200k 199500 148 | ``` 149 | We can also run evaluation step by step. First, run [prediction.py](prediction.py) to get the inference results. We need to select the model via `--model-path`, define the model name via `--model-name`, set the model max length via `--model-max-len`, and define the output directory via `--output-dir`. For example: 150 | ```bash 151 | python prediction.py --model-path /home/user/workspace/public_models/chatglm3-6b-32k --model-name chatglm3 --model-max-len 31500 --output-dir ./outputs/ 152 | ``` 153 | The prediction results will be saved in `[output dir]/[model name]`. 154 | Then, we can run [evaluation.py](evaluation.py) on the prediction results obtained above to get the final evaluation results of _LV_-Eval. The prediction results directory needs to be defined via `--input-dir`. For example: 155 | ```bash 156 | python evaluation.py --input-dir ./outputs/chatglm3/ 157 | ``` 158 | After that, we will see the evaluation results printed in the shell, and get `results.json` and `results.csv` files in the output directory. 159 | 160 | Custom needs can be defined in [config.py](config.py) (for selecting the datasets and length levels we want to evaluate) and [utils.py](utils.py) (for customizing the prompt format of our models). 161 | 162 | Additionally, we evaluate some commercial models via API through the following script. For example, to evaluate OpenAI's GPT series, we need to set the model name and the model max length. 163 | Note that the OPENAI_API_KEY environment variable needs to be set before evaluation. 164 | ```bash 165 | bash batch_eval_gpt_single.sh gpt-4-1106-preview 127500 166 | ``` 167 | 168 | 169 | ## Detailed Results on Each Dataset 170 | Below are the average scores over all length levels on each dataset.
171 | 172 | #### Single-hop QA 173 | ![](info/bar_perf_sqa.png) 174 | 175 | | Model Name | loogle-SD-mixup | cmrc-mixup | multifieldqa-en-mixup | multifieldqa-zh-mixup | factrecall-en | factrecall-zh | 176 | |:----------------------:|:---------------:|:----------:|:---------------------:|:---------------------:|:-------------:|:-------------:| 177 | | ChatGLM3-6B-32k | 22.29 | 28.16 | 12.93 | 18.99 | 52.60 | 6.10 | 178 | | BlueLM-7B-32k-Chat | 13.02 | 17.53 | 7.32 | 11.49 | 24.03 | 18.80 | 179 | | Yi-6B-200k | 29.17 | 1.27 | 7.75 | 1.84 | 22.28 | 13.95 | 180 | | LongChat-7B-32k-v1.5 | 14.56 | 9.65 | 6.95 | 5.86 | 9.14 | 4.28 | 181 | | Llama2-7B-32k-Instruct | 7.63 | 6.12 | 4.63 | 2.56 | 38.09 | 0.92 | 182 | | Qwen-7B-8k-Chat | 4.78 | 5.81 | 4.52 | 4.57 | 0.80 | 5.45 | 183 | | Vicuna-7B-16k-v1.5 | 4.68 | 6.04 | 3.44 | 2.89 | 0.09 | 0 | 184 | | Llama2-7B-Chat-hf | 3.04 | 1.97 | 3.99 | 1.48 | 0.45 | 0 | 185 | | GPT-3.5-16k | 13.99 | 5.16 | 9.78 | 8.51 | 2.87 | 5.28 | 186 | | GPT-4-8k | 11.13 | 5.96 | 10.16 | 7.29 | 9.25 | 11.39 | 187 | 188 | #### Multi-hop QA 189 | ![](info/bar_perf_mqa.png) 190 | 191 | | Model Name | dureader-mixup | loogle-CR-mixup | loogle-MR-mixup | hotpotwikiqa-mixup | lic-mixup | 192 | |:----------------------:|:--------------:|:---------------:|:---------------:|:------------------:|:---------:| 193 | | ChatGLM3-6B-32k | 19.57 | 10.17 | 9.10 | 11.15 | 15.02 | 194 | | BlueLM-7B-32k-Chat | 14.61 | 5.04 | 2.87 | 11.22 | 9.11 | 195 | | Yi-6B-200k | 2.83 | 5.82 | 4.41 | 12.42 | 6.12 | 196 | | LongChat-7B-32k-v1.5 | 10.34 | 8.59 | 6.03 | 6.98 | 6.92 | 197 | | Llama2-7B-32k-Instruct | 9.57 | 2.51 | 1.92 | 2.31 | 5.27 | 198 | | Qwen-7B-8k-Chat | 10.42 | 3.14 | 2.70 | 2.23 | 4.77 | 199 | | Vicuna-7B-16k-v1.5 | 7.18 | 3.26 | 2.31 | 1.95 | 4.00 | 200 | | Llama2-7B-Chat-hf | 5.49 | 2.62 | 1.80 | 1.74 | 1.02 | 201 | | GPT-3.5-16k | 4.87 | 6.09 | 5.87 | 5.88 | 3.53 | 202 | | GPT-4-8k | 12.07 | 7.26 | 5.91 | 7.46 | 5.28 | 203 | 204 | Below are the scores at each length level on each dataset.
205 | 206 | #### loogle-SD-mixup 207 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 208 | |:----------------------:|-------|-------|-------|--------|--------| 209 | | ChatGLM3-6B-32k | 41.82 | 30.31 | 19.07 | 11.34 | 8.92 | 210 | | BlueLM-7B-32k-Chat | 34.34 | 15.10 | 4.95 | 5.32 | 5.41 | 211 | | Yi-6B-200k | 39.56 | 36.48 | 31.71 | 25.71 | 12.37 | 212 | | LongChat-7B-32k-v1.5 | 27.42 | 18.21 | 12.09 | 9.11 | 5.97 | 213 | | Llama2-7B-32k-Instruct | 13.94 | 10.58 | 5.53 | 4.80 | 3.30 | 214 | | Qwen-7B-8k-Chat | 10.54 | 4.70 | 2.40 | 3.25 | 3.02 | 215 | | Vicuna-7B-16k-v1.5 | 8.79 | 4.90 | 3.07 | 4.24 | 2.39 | 216 | | Llama2-7B-Chat-hf | 6.75 | 2.61 | 2.58 | 2.04 | 1.24 | 217 | | GPT-3.5-16k | 31.67 | 18.56 | 10.41 | 5.74 | 3.56 | 218 | | GPT-4-8k | 27.01 | 14.01 | 8.00 | 5.14 | 1.48 | 219 | 220 | #### cmrc-mixup 221 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 222 | |:----------------------:|-------|-------|-------|--------|--------| 223 | | ChatGLM3-6B-32k | 51.21 | 46.34 | 20.71 | 14.16 | 8.38 | 224 | | BlueLM-7B-32k-Chat | 45.89 | 19.53 | 10.66 | 7.06 | 4.51 | 225 | | Yi-6B-200k | 1.05 | 0.35 | 0.84 | 1.58 | 2.54 | 226 | | LongChat-7B-32k-v1.5 | 20.99 | 10.77 | 8.97 | 3.77 | 3.75 | 227 | | Llama2-7B-32k-Instruct | 13.86 | 7.31 | 4.10 | 2.95 | 2.40 | 228 | | Qwen-7B-8k-Chat | 11.13 | 5.32 | 4.68 | 3.81 | 4.09 | 229 | | Vicuna-7B-16k-v1.5 | 11.75 | 6.55 | 5.04 | 2.75 | 4.13 | 230 | | Llama2-7B-Chat-hf | 3.85 | 1.08 | 1.72 | 1.64 | 1.54 | 231 | | GPT-3.5-16k | 12.19 | 6.00 | 3.57 | 2.73 | 1.32 | 232 | | GPT-4-8k | 14.67 | 3.33 | 5.31 | 3.81 | 2.68 | 233 | 234 | #### multifieldqa-en-mixup 235 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 236 | |:----------------------:|-------|-------|-------|--------|--------| 237 | | ChatGLM3-6B-32k | 25.40 | 12.78 | 12.32 | 9.89 | 4.24 | 238 | | BlueLM-7B-32k-Chat | 11.82 | 6.34 | 8.38 | 5.29 | 4.78 | 239 | | Yi-6B-200k | 10.01 | 9.24 | 8.83 | 5.98 | 4.69 | 240 | | LongChat-7B-32k-v1.5 | 12.02 | 7.58 | 7.84 | 3.11 | 4.22 | 241 | | Llama2-7B-32k-Instruct | 8.03 | 4.96 | 4.12 | 3.90 | 2.13 | 242 | | Qwen-7B-8k-Chat | 7.66 | 3.61 | 5.23 | 3.64 | 2.44 | 243 | | Vicuna-7B-16k-v1.5 | 6.29 | 4.32 | 2.79 | 2.51 | 1.28 | 244 | | Llama2-7B-Chat-hf | 8.81 | 5.55 | 1.58 | 2.54 | 1.49 | 245 | | GPT-3.5-16k | 18.78 | 11.59 | 7.38 | 7.95 | 3.21 | 246 | | GPT-4-8k | 19.00 | 12.69 | 8.30 | 7.25 | 3.54 | 247 | 248 | #### multifieldqa-zh-mixup 249 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 250 | |:----------------------:|-------|-------|-------|--------|--------| 251 | | ChatGLM3-6B-32k | 32.38 | 24.48 | 20.97 | 10.00 | 7.05 | 252 | | BlueLM-7B-32k-Chat | 22.05 | 17.64 | 7.36 | 5.90 | 4.48 | 253 | | Yi-6B-200k | 2.85 | 0.75 | 1.89 | 2.11 | 1.58 | 254 | | LongChat-7B-32k-v1.5 | 9.81 | 8.82 | 3.23 | 3.54 | 3.92 | 255 | | Llama2-7B-32k-Instruct | 4.55 | 3.93 | 1.45 | 1.74 | 1.15 | 256 | | Qwen-7B-8k-Chat | 8.82 | 5.68 | 3.01 | 2.84 | 2.52 | 257 | | Vicuna-7B-16k-v1.5 | 5.82 | 4.45 | 2.03 | 0.88 | 1.26 | 258 | | Llama2-7B-Chat-hf | 4.72 | 1.21 | 0.68 | 0.24 | 0.56 | 259 | | GPT-3.5-16k | 18.94 | 12.21 | 6.29 | 2.94 | 2.15 | 260 | | GPT-4-8k | 17.61 | 11.18 | 4.99 | 1.76 | 0.92 | 261 | 262 | #### factrecall-en 263 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 264 | |:----------------------:|-------|-------|-------|--------|--------| 265 | | ChatGLM3-6B-32k | 91.50 | 89.00 | 46.00 | 24.00 | 12.5 | 266 | | BlueLM-7B-32k-Chat | 58.50 | 32.17 | 15.50 | 9.00 | 5.00 | 267 | | Yi-6B-200k | 24.88 | 23.09 | 24.96 | 22.04 
| 16.44 | 268 | | LongChat-7B-32k-v1.5 | 9.22 | 14.33 | 8.31 | 7.86 | 6.00 | 269 | | Llama2-7B-32k-Instruct | 75.20 | 56.00 | 33.00 | 17.85 | 8.40 | 270 | | Qwen-7B-8k-Chat | 1.77 | 1.12 | 0.71 | 0.18 | 0.22 | 271 | | Vicuna-7B-16k-v1.5 | 0 | 0 | 0 | 0.25 | 0.20 | 272 | | Llama2-7B-Chat-hf | 1.08 | 0.46 | 0.31 | 0.23 | 0.15 | 273 | | GPT-3.5-16k | 8.25 | 3.27 | 1.80 | 0.60 | 0.45 | 274 | | GPT-4-8k | 23.40 | 11.84 | 5.21 | 4.03 | 1.79 | 275 | 276 | #### factrecall-zh 277 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 278 | |:----------------------:|-------|-------|-------|--------|--------| 279 | | ChatGLM3-6B-32k | 0 | 2.00 | 12.50 | 9.00 | 7.00 | 280 | | BlueLM-7B-32k-Chat | 19.00 | 37.00 | 20.00 | 12.50 | 5.50 | 281 | | Yi-6B-200k | 25.73 | 16.86 | 12.41 | 10.13 | 4.62 | 282 | | LongChat-7B-32k-v1.5 | 7.20 | 5.00 | 3.50 | 3.70 | 2.00 | 283 | | Llama2-7B-32k-Instruct | 2.55 | 0.74 | 0.53 | 0.49 | 0.29 | 284 | | Qwen-7B-8k-Chat | 15.75 | 6.00 | 3.50 | 1.50 | 0.50 | 285 | | Vicuna-7B-16k-v1.5 | 0 | 0 | 0 | 0 | 0 | 286 | | Llama2-7B-Chat-hf | 0 | 0 | 0 | 0 | 0 | 287 | | GPT-3.5-16k | 14.51 | 6.70 | 2.49 | 1.72 | 0.98 | 288 | | GPT-4-8k | 28.03 | 15.24 | 8.08 | 3.58 | 2.00 | 289 | 290 | #### dureader-mixup 291 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 292 | |:----------------------:|-------|-------|-------|--------|--------| 293 | | ChatGLM3-6B-32k | 23.99 | 25.21 | 22.01 | 17.94 | 8.72 | 294 | | BlueLM-7B-32k-Chat | 19.40 | 19.74 | 14.44 | 10.95 | 8.51 | 295 | | Yi-6B-200k | 2.87 | 2.98 | 2.88 | 2.36 | 3.06 | 296 | | LongChat-7B-32k-v1.5 | 13.44 | 11.57 | 9.23 | 9.51 | 7.96 | 297 | | Llama2-7B-32k-Instruct | 11.82 | 10.65 | 8.58 | 9.34 | 7.48 | 298 | | Qwen-7B-8k-Chat | 12.00 | 12.80 | 10.48 | 8.15 | 8.65 | 299 | | Vicuna-7B-16k-v1.5 | 9.67 | 7.65 | 6.62 | 6.25 | 5.70 | 300 | | Llama2-7B-Chat-hf | 7.21 | 5.42 | 5.59 | 4.78 | 4.45 | 301 | | GPT-3.5-16k | 8.01 | 5.26 | 4.26 | 3.30 | 3.50 | 302 | | GPT-4-8k | 19.14 | 13.64 | 12.66 | 8.19 | 6.71 | 303 | 304 | #### loogle-CR-mixup 305 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 306 | |:----------------------:|-------|-------|-------|--------|--------| 307 | | ChatGLM3-6B-32k | 14.41 | 14.10 | 9.92 | 6.95 | 5.46 | 308 | | BlueLM-7B-32k-Chat | 9.01 | 7.36 | 3.81 | 2.40 | 2.60 | 309 | | Yi-6B-200k | 8.25 | 8.83 | 4.73 | 4.05 | 3.23 | 310 | | LongChat-7B-32k-v1.5 | 11.25 | 11.17 | 9.31 | 6.19 | 5.03 | 311 | | Llama2-7B-32k-Instruct | 3.11 | 2.82 | 2.01 | 2.46 | 2.16 | 312 | | Qwen-7B-8k-Chat | 5.48 | 3.30 | 3.82 | 1.14 | 1.94 | 313 | | Vicuna-7B-16k-v1.5 | 5.00 | 4.25 | 3.76 | 1.99 | 1.28 | 314 | | Llama2-7B-Chat-hf | 3.69 | 3.29 | 3.13 | 2.19 | 0.81 | 315 | | GPT-3.5-16k | 10.04 | 8.39 | 5.58 | 3.08 | 3.37 | 316 | | GPT-4-8k | 12.68 | 10.40 | 6.48 | 2.83 | 3.91 | 317 | 318 | #### loogle-MR-mixup 319 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 320 | |:----------------------:|-------|-------|-------|--------|--------| 321 | | ChatGLM3-6B-32k | 15.83 | 11.62 | 7.00 | 7.24 | 3.82 | 322 | | BlueLM-7B-32k-Chat | 4.90 | 3.14 | 1.68 | 2.46 | 2.19 | 323 | | Yi-6B-200k | 6.94 | 7.67 | 2.69 | 3.44 | 1.32 | 324 | | LongChat-7B-32k-v1.5 | 10.53 | 9.51 | 3.04 | 4.05 | 3.01 | 325 | | Llama2-7B-32k-Instruct | 3.12 | 2.61 | 1.44 | 1.47 | 0.95 | 326 | | Qwen-7B-8k-Chat | 4.93 | 2.95 | 2.37 | 1.80 | 1.46 | 327 | | Vicuna-7B-16k-v1.5 | 5.17 | 3.83 | 0.96 | 0.55 | 1.06 | 328 | | Llama2-7B-Chat-hf | 3.37 | 2.20 | 2.05 | 1.04 | 0.33 | 329 | | GPT-3.5-16k | 12.95 | 7.03 | 6.23 | 2.13 | 1.00 | 330 | | GPT-4-8k | 
12.24 | 7.83 | 6.26 | 2.30 | 0.90 | 331 | 332 | #### hotpotwikiqa-mixup 333 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 334 | |:----------------------:|-------|-------|-------|--------|--------| 335 | | ChatGLM3-6B-32k | 16.98 | 14.76 | 9.02 | 8.31 | 6.68 | 336 | | BlueLM-7B-32k-Chat | 19.31 | 14.07 | 9.63 | 7.71 | 5.40 | 337 | | Yi-6B-200k | 23.55 | 18.94 | 9.94 | 7.66 | 2.01 | 338 | | LongChat-7B-32k-v1.5 | 11.57 | 10.71 | 4.77 | 5.49 | 2.37 | 339 | | Llama2-7B-32k-Instruct | 3.54 | 2.31 | 2.20 | 1.86 | 1.62 | 340 | | Qwen-7B-8k-Chat | 2.78 | 1.89 | 2.27 | 2.37 | 1.82 | 341 | | Vicuna-7B-16k-v1.5 | 2.63 | 2.19 | 2.05 | 1.04 | 1.85 | 342 | | Llama2-7B-Chat-hf | 3.99 | 1.30 | 1.84 | 0.81 | 0.75 | 343 | | GPT-3.5-16k | 11.96 | 6.66 | 3.27 | 4.23 | 3.30 | 344 | | GPT-4-8k | 13.51 | 10.62 | 6.67 | 4.13 | 2.36 | 345 | 346 | #### lic-mixup 347 | | Model Name | $16k$ | $32k$ | $64k$ | $128k$ | $256k$ | 348 | |:----------------------:|-------|-------|-------|--------|--------| 349 | | ChatGLM3-6B-32k | 24.15 | 22.27 | 14.33 | 8.30 | 6.07 | 350 | | BlueLM-7B-32k-Chat | 20.75 | 12.68 | 5.00 | 3.03 | 4.11 | 351 | | Yi-6B-200k | 5.37 | 6.25 | 7.19 | 5.56 | 6.24 | 352 | | LongChat-7B-32k-v1.5 | 15.45 | 10.02 | 4.54 | 2.47 | 2.14 | 353 | | Llama2-7B-32k-Instruct | 10.55 | 8.87 | 3.41 | 1.85 | 1.66 | 354 | | Qwen-7B-8k-Chat | 6.05 | 6.07 | 4.21 | 4.34 | 3.19 | 355 | | Vicuna-7B-16k-v1.5 | 8.34 | 4.81 | 2.52 | 2.36 | 1.99 | 356 | | Llama2-7B-Chat-hf | 2.48 | 0.99 | 0.48 | 0.42 | 0.73 | 357 | | GPT-3.5-16k | 7.65 | 4.42 | 3.07 | 0.87 | 1.65 | 358 | | GPT-4-8k | 13.69 | 5.86 | 3.23 | 1.90 | 1.70 | 359 | 360 | 361 | ## License 362 | In **_LV_-Eval**, the cmrc-mixup and lic-mixup datasets follow the `CC-BY-SA-4.0` license, and the other datasets follow the `MIT` license. 363 | 364 | 365 | ## Citation 366 | ``` 367 | @misc{yuan2024lveval, 368 | title={LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K}, 369 | author={Tao Yuan and Xuefei Ning and Dong Zhou and Zhijie Yang and Shiyao Li and Minghui Zhuang and Zheyue Tan and Zhuyu Yao and Dahua Lin and Boxun Li and Guohao Dai and Shengen Yan and Yu Wang}, 370 | year={2024}, 371 | eprint={2402.05136}, 372 | archivePrefix={arXiv}, 373 | primaryClass={cs.CL} 374 | } 375 | ``` 376 | --------------------------------------------------------------------------------