├── LICENSE
├── README.md
├── .gitignore
└── evaluation.py

/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2023 qinyiwei
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # InFoBench
2 | 
3 | - **Paper:** [InFoBench: Evaluating Instruction Following Ability in Large Language Models](https://arxiv.org/pdf/2401.03601.pdf)
4 | - **Dataset:** [InFoBench Dataset](https://huggingface.co/datasets/kqsong/InFoBench)
5 | - **Generation and Annotation:** [InFoBench Generation and Annotation](https://drive.google.com/drive/folders/1Bj7u196p2fxBP03dQgd5lvddFoeSdPFO?usp=drive_link)
6 | 
7 | ## Citation
8 | ```
9 | @article{qin2024infobench,
10 |       title={InFoBench: Evaluating Instruction Following Ability in Large Language Models},
11 |       author={Yiwei Qin and Kaiqiang Song and Yebowen Hu and Wenlin Yao and Sangwoo Cho and Xiaoyang Wang and Xuansheng Wu and Fei Liu and Pengfei Liu and Dong Yu},
12 |       year={2024},
13 |       eprint={2401.03601},
14 |       archivePrefix={arXiv},
15 |       primaryClass={cs.CL}
16 | }
17 | ```
18 | 
19 | ## Evaluation with InFoBench
20 | ### Step 1: Dataset Usage
21 | You can download the dataset directly with the Hugging Face `datasets` library.
22 | ```python
23 | from datasets import load_dataset
24 | 
25 | dataset = load_dataset("kqsong/InFoBench")
26 | ```
27 | 
28 | ### Step 2: Generating the Responses
29 | Provide an output file at `model/output.json`.
30 | The file should be in JSON Lines format: each data entry is a JSON object on its own line and contains all the fields from the input format.
31 | Add the generated response to each JSON object under a new field named `output` (a hedged sketch of this step is included at the end of this README).
32 | 
33 | We suggest using greedy decoding to avoid randomness in the generated responses.
34 | 
35 | 
36 | ### Step 3: Evaluation
37 | 
38 | Evaluate the LLM's outputs against the decomposed questions. This research uses GPT-4-0314 as the evaluation model by default.
39 | ```bash
40 | python evaluation.py \
41 |     --api_key <your_openai_api_key> \
42 |     --model gpt-4-0314 \
43 |     --input model/output.json \
44 |     --output_dir evaluation/ \
45 |     --temperature 0
46 | ```
47 | 
48 | Each data entry in the result will include an `eval` key of type `List[bool]`, where each boolean represents the "Yes" or "No" answer to the corresponding decomposed question.
49 | The final evaluation file is saved in JSON Lines format at `<output_dir>/<eval_model>/<model_name>_DecomposeEval.json`.
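
### Appendix: Example Sketches

Below is a minimal, hypothetical sketch of Step 2. It is not part of this repository: `generate_response` is a placeholder for your own model call, the `instruction` field and the `"train"` split are assumptions about the dataset layout, and only `input`, `id`, and `decomposed_questions` are strictly required by `evaluation.py`. Adapt it to your setup.
```python
# Hypothetical sketch for Step 2: write model responses to model/output.json in JSON Lines.
# `generate_response` is a placeholder for your own model call (greedy decoding suggested).
import json
from datasets import load_dataset

def generate_response(instruction, input_text):
    # placeholder: call your model here and return its response string
    raise NotImplementedError

dataset = load_dataset("kqsong/InFoBench")["train"]  # assumes the default "train" split

with open("model/output.json", "w") as f:
    for entry in dataset:
        record = dict(entry)  # keep all original fields
        record["output"] = generate_response(entry["instruction"], entry["input"])
        f.write(json.dumps(record) + "\n")  # one JSON object per line
```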
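
`evaluation.py` already prints the overall ratio of "Yes" answers (the paper's DRFR metric) after each run. If you want to recompute it from a saved evaluation file, a sketch along these lines should work; the file path below is only an example and should be replaced with the file produced by your run.
```python
# Hypothetical sketch: recompute the "Yes" ratio (DRFR) from a saved evaluation file.
import json

yes, total = 0, 0
with open("evaluation/gpt-4-0314/output_DecomposeEval.json") as f:  # example path
    for line in f:
        entry = json.loads(line)
        for verdict in entry["eval"]:  # List[bool]; None (unparsable answers) counts as "No"
            yes += bool(verdict)
            total += 1

print(f"DRFR (percentage of 'Yes'): {yes / total:.4f}")
```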
50 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/#use-with-ide 110 | .pdm.toml 111 | 112 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
113 | __pypackages__/
114 | 
115 | # Celery stuff
116 | celerybeat-schedule
117 | celerybeat.pid
118 | 
119 | # SageMath parsed files
120 | *.sage.py
121 | 
122 | # Environments
123 | .env
124 | .venv
125 | env/
126 | venv/
127 | ENV/
128 | env.bak/
129 | venv.bak/
130 | 
131 | # Spyder project settings
132 | .spyderproject
133 | .spyproject
134 | 
135 | # Rope project settings
136 | .ropeproject
137 | 
138 | # mkdocs documentation
139 | /site
140 | 
141 | # mypy
142 | .mypy_cache/
143 | .dmypy.json
144 | dmypy.json
145 | 
146 | # Pyre type checker
147 | .pyre/
148 | 
149 | # pytype static type analyzer
150 | .pytype/
151 | 
152 | # Cython debug symbols
153 | cython_debug/
154 | 
155 | # PyCharm
156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
158 | # and can be added to the global gitignore or merged into this file. For a more nuclear
159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
160 | #.idea/
161 | 
--------------------------------------------------------------------------------
/evaluation.py:
--------------------------------------------------------------------------------
1 | import json
2 | import os
3 | import time
4 | import tiktoken
5 | import argparse
6 | 
7 | from os.path import join, exists
8 | from openai import OpenAI
9 | from tqdm import tqdm
10 | 
11 | encoder = tiktoken.get_encoding("cl100k_base")
12 | 
13 | SYS_MSG = "Based on the provided Input (if any) and Generated Text, answer the ensuing Questions with either a YES or NO choice. Your selection should be based on your judgment as well as the following rules:\n\n- YES: Select 'YES' if the generated text entirely fulfills the condition specified in the question. However, note that even minor inaccuracies exclude the text from receiving a 'YES' rating. As an illustration, consider a question that asks, \"Does each sentence in the generated text use a second person?\" If even one sentence does not use the second person, the answer should NOT be 'YES'. To qualify for a 'YES' rating, the generated text must be entirely accurate and relevant to the question.\n\n- NO: Opt for 'NO' if the generated text fails to meet the question's requirements or provides no information that could be utilized to answer the question. For instance, if the question asks, \"Is the second sentence in the generated text a compound sentence?\" and the generated text only has one sentence, it offers no relevant information to answer the question. Consequently, the answer should be 'NO'."
14 | 
15 | def load_jsonl(file_path):
16 |     "General function to load a jsonl file"
17 |     _data = []
18 |     with open(file_path, 'r') as f:
19 |         for data in f:
20 |             jline = json.loads(data)
21 |             _data.append(jline)
22 |     return _data
23 | 
24 | def bool_ratio(fpath):
25 |     "Calculate the true/false ratio for eval results"
26 |     _data = load_jsonl(fpath)
27 |     count = {"true": 0, "false": 0}
28 |     for entry in _data:
29 |         if entry.get("eval", None) is None:
30 |             print("Wrong output")
31 |             print(entry['id'])
32 |         if len(entry['decomposed_questions']) != len(entry['eval']):
33 |             print("Wrong length")
34 |             print(entry['id'])
35 |         if None in entry['eval']:
36 |             print("None in eval")
37 |             print(entry['id'])
38 | 
39 |         for eva_value in entry['eval']:
40 |             if eva_value:
41 |                 count["true"] += 1
42 |             else:
43 |                 count["false"] += 1
44 | 
45 |     print("-------- True False Table --------")
46 |     print(count)
47 |     print(f"Percentage of True: {count['true']/sum(count.values())}")
48 |     return
49 | 
50 | def run_evaluation(client, in_path, o_dir, eval_model="gpt-4-0314", temperature=0):
51 |     """
52 |     Main function to run decomposed-question evaluation on models' outputs
53 |     in_path: str, path to the model output file
54 |     o_dir: str, path to the output folder
55 |     eval_model: str, default "gpt-4-0314", model name to be used for evaluation
56 |     temperature: float, default 0, temperature to be used for evaluation
57 |     """
58 |     _data = load_jsonl(in_path)
59 |     _model_name = in_path.split('/')[1].split('_')[0]  # model name is taken from the 2nd path component, before the first underscore
60 | 
61 |     # create the output folder if it does not exist
62 |     _o_dir = join(o_dir, eval_model)
63 |     if not exists(_o_dir):
64 |         os.makedirs(_o_dir)
65 | 
66 |     _opath = join(_o_dir, f"{_model_name}_DecomposeEval.json")
67 | 
68 |     # load existing results if the output file already exists
69 |     if os.path.exists(_opath):
70 |         _exist = load_jsonl(_opath)
71 |         _exist_ids = [i['id'] for i in _exist]
72 |         for pos, instance in enumerate(_data):
73 |             if instance['id'] in _exist_ids:
74 |                 _data[pos] = _exist[_exist_ids.index(instance['id'])]
75 | 
76 |     result_writer = open(_opath, 'w')
77 | 
78 |     print(f"--------Evaluating output from {in_path}--------")
79 |     print(f"--------Evaluation Using {eval_model}--------")
80 |     for entry in tqdm(_data):
81 |         # skip if eval already exists
82 |         if entry.get('eval', None) is not None:
83 |             result_writer.write(json.dumps(entry) + '\n')
84 |             result_writer.flush()
85 |             continue
86 | 
87 |         input_task = entry['input']
88 |         output = entry['output']
89 |         if output is None:  # skip if result hasn't been generated
90 |             continue
91 | 
92 |         message = []
93 |         answer = ""
94 |         # print(f"--------Instance {entry['id']}--------")
95 |         for question in entry['decomposed_questions']:
96 |             if len(message) == 0:
97 |                 if input_task:
98 |                     content = f"{SYS_MSG}\n\nInput:\n\"{input_task}\"\n\nGenerated Text:\n\"{output}\"\n\nQuestion:\n{question}\n"
99 |                 else:
100 |                     content = f"{SYS_MSG}\n\nGenerated Text:\n\"{output}\"\n\nQuestion:\n{question}\n"
101 |             else:
102 |                 content = f"{question}\n"
103 |             message.append({"role": "user", "content": content})
104 |             # create a chat completion
105 |             success = False
106 |             early_stop = False
107 |             while not success:
108 |                 try:
109 |                     completion = client.chat.completions.create(
110 |                         model=eval_model,
111 |                         messages=message,
112 |                         temperature=temperature,
113 |                     )
114 |                     generation = completion.choices[0].message.content
115 |                     message.append(
116 |                         {"role": "assistant", "content": generation})
117 |                     # check if the generation is a yes or no answer
118 |                     if generation.lower().startswith("yes") or generation.lower().startswith("no"):
generation.lower().startswith("no"): 119 | if generation.lower().startswith("yes"): 120 | answer += "Yes\n" 121 | else: 122 | answer += "No\n" 123 | else: 124 | if "YES" in generation and "NO" not in generation: 125 | answer += "Yes\n" 126 | elif "YES" not in generation and "NO" in generation: 127 | answer += "No\n" 128 | else: 129 | for msg in message: 130 | print(msg['content']) 131 | print("NO YES or NO answer!" + generation) 132 | answer += "None\n" 133 | early_stop = True 134 | break 135 | success = True 136 | except Exception as e: 137 | print("ERROR!") 138 | print(e) 139 | print("Retry!") 140 | time.sleep(20) 141 | 142 | # when no answer occurs, break the loop and continue to next instance 143 | if early_stop: 144 | break 145 | 146 | answer = answer[:-1] 147 | # save eval results as List[bool] 148 | bool_results = [] 149 | for i in answer.split('\n'): 150 | if i == "Yes": 151 | bool_results.append(True) 152 | elif i == "No": 153 | bool_results.append(False) 154 | else: 155 | bool_results.append(None) 156 | 157 | entry['eval'] = bool_results 158 | result_writer.write(json.dumps(entry) + '\n') 159 | result_writer.flush() 160 | 161 | result_writer.close() 162 | 163 | # run true false ratio calculation 164 | bool_ratio(_opath) 165 | 166 | return _opath 167 | 168 | def main_run(args): 169 | client = OpenAI(api_key=args.api_key) 170 | results_file = args.input 171 | output_dir = args.output_dir 172 | eval_model = args.model 173 | temperature = args.temperature 174 | 175 | if not exists(results_file): 176 | print(f"results_dir {results_file} not exists") 177 | return 178 | 179 | # run evaluation for each model 180 | run_evaluation(client, results_file, output_dir, eval_model, temperature) 181 | return 182 | 183 | if __name__ == "__main__": 184 | parser = argparse.ArgumentParser() 185 | parser.add_argument("--api_key", type=str, default=None) 186 | parser.add_argument("--model", type=str, default="gpt-4-0314", help="model name to be used for evaluation") 187 | 188 | parser.add_argument("--input", type=str, required=True, help="path to the results file") 189 | parser.add_argument("--output_dir", type=str, required=True, help="path to the output folder") 190 | 191 | parser.add_argument("--temperature", type=float, default=0, help="temperature to be used for evaluation") 192 | args = parser.parse_args() 193 | main_run(args) 194 | --------------------------------------------------------------------------------