├── assets
│   ├── treevgr.png
│   └── treebench.png
├── requirements.txt
├── README.md
├── inference_treebench.py
└── LICENSE

--------------------------------------------------------------------------------
/assets/treevgr.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Haochen-Wang409/TreeVGR/HEAD/assets/treevgr.png

--------------------------------------------------------------------------------
/assets/treebench.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Haochen-Wang409/TreeVGR/HEAD/assets/treebench.png

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
torch==2.7.0
tqdm
datasets
transformers<4.53.0
qwen-vl-utils
openai==1.93.0
numpy==1.26.4

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology

**TL;DR**: We propose TreeBench, the first benchmark specifically designed to evaluate "thinking with images" capabilities with *traceable visual evidence*, and TreeVGR, the current state-of-the-art open-source visual grounded reasoning model.

> **Abstract.** Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions,
> just like humans "thinking with images". However, no benchmark exists to evaluate these capabilities
> holistically. To bridge this gap, we propose **TreeBench** (Traceable Evidence Evaluation Benchmark),
> a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in
> complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning
> to test object interactions and spatial hierarchies beyond simple object localization. Prioritizing
> images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate
> eight LMM experts to manually annotate questions, candidate options, and answers for each
> image. After three stages of quality control, **TreeBench** consists of 405 challenging visual question-
> answering pairs. Even the most advanced models struggle with this benchmark: none of them
> reaches 60% accuracy, e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce **TreeVGR**
> (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm to supervise
> localization and reasoning jointly with reinforcement learning, enabling accurate localizations and
> explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8),
> MME-RealWorld (+12.6), and TreeBench (+13.4), proving traceability is key to advancing visual
> grounded reasoning.

![](./assets/treebench.png)

## Release

- [2025/08/12] 🔥 **TreeBench** has been utilized by **GLM-4.5V**!
- [2025/07/12] 🔥🔥🔥 **TreeBench** and **TreeVGR** are now supported by [**VLMEvalKit**](https://github.com/open-compass/VLMEvalKit)! 🔥🔥🔥
- [2025/07/11] 🔥 **TreeBench** and **TreeVGR** have been released. Check out the [paper](https://arxiv.org/pdf/2507.07999) for details.


## Installation

```bash
pip3 install -r requirements.txt
pip3 install flash-attn --no-build-isolation -v
```

## Usage

This repo provides a simple local inference demo of TreeVGR on TreeBench. First, clone this repo:
```bash
git clone https://github.com/Haochen-Wang409/TreeVGR
cd TreeVGR
```
Then, simply run `inference_treebench.py`:
```bash
python3 inference_treebench.py
```

This should give:
```
Perception/Attributes 18/29=62.07
Perception/Material 7/13=53.85
Perception/Physical State 19/23=82.61
Perception/Object Retrieval 10/16=62.5
Perception/OCR 42/68=61.76
Reasoning/Perspective Transform 19/85=22.35
Reasoning/Ordering 20/57=35.09
Reasoning/Contact and Occlusion 25/41=60.98
Reasoning/Spatial Containment 20/29=68.97
Reasoning/Comparison 20/44=45.45
==> Overall 200/405=49.38
==> Mean IoU: 43.3
```
These results differ slightly from those in the paper, as we mainly utilized [**VLMEvalKit**](https://github.com/open-compass/VLMEvalKit) for a more comprehensive evaluation.
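
To reproduce the paper's numbers, evaluating through VLMEvalKit is the recommended path. A minimal sketch of such a run (assuming the TreeBench/TreeVGR identifiers registered in VLMEvalKit; double-check the exact names against its documentation):

```bash
# run from inside a VLMEvalKit checkout, after installing its requirements
python run.py --data TreeBench --model TreeVGR-7B
```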

**Benchmark**
- TreeBench: https://huggingface.co/datasets/HaochenWang/TreeBench

**Checkpoints**
- TreeVGR-7B: https://huggingface.co/HaochenWang/TreeVGR-7B
- TreeVGR-7B-CI: https://huggingface.co/HaochenWang/TreeVGR-7B-CI

**Training Datasets**
- TreeVGR-RL-37K: https://huggingface.co/datasets/HaochenWang/TreeVGR-RL-37K
- TreeVGR-SFT-35K: https://huggingface.co/datasets/HaochenWang/TreeVGR-SFT-35K

## Citation

If you find TreeBench or TreeVGR useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{wang2025traceable,
  title={Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology},
  author={Haochen Wang and Xiangtai Li and Zilong Huang and Anran Wang and Jiacong Wang and Tao Zhang and Jiani Zheng and Sule Bai and Zijian Kang and Jiashi Feng and Zhuochen Wang and Zhaoxiang Zhang},
  journal={arXiv preprint arXiv:2507.07999},
  year={2025}
}
```

## Acknowledgement
We would like to express our sincere appreciation to the following projects:
- [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL): The base model we utilized.
- [VGR](https://huggingface.co/datasets/BytedanceDouyinContent/VGR): The source of our SFT dataset.
- [V*](https://github.com/penghao-wu/vstar) and [VisDrone](https://github.com/VisDrone/VisDrone-Dataset): The image sources of our RL dataset.
- [SA-1B](https://ai.meta.com/datasets/segment-anything/): The image source of TreeBench.
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): The SFT codebase we utilized.
- [EasyR1](https://github.com/hiyouga/EasyR1): The RL codebase we utilized.

--------------------------------------------------------------------------------
/inference_treebench.py:
--------------------------------------------------------------------------------
import ast
import re
import multiprocessing
# Use "spawn" so each worker process gets its own CUDA context.
multiprocessing.set_start_method('spawn', force=True)

from tqdm import tqdm
import torch
import numpy as np
from datasets import load_dataset
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
from openai import OpenAI


def compute_box_iou(predict_str: str, target_boxes: list) -> float:
    # Parse every box the model emitted between <box> and </box> tags.
    pattern = r"<box>(.*?)</box>"
    matches = re.findall(pattern, predict_str, re.DOTALL)

    all_boxes = []

    for match in matches:
        box = match.strip()

        coord_pattern = r'\[(\d+),(\d+),(\d+),(\d+)\]'
        coord_match = re.match(coord_pattern, box)

        if coord_match:
            x1, y1, x2, y2 = map(int, coord_match.groups())

            # Keep only well-formed boxes (positive width and height).
            if x1 < x2 and y1 < y2:
                # all_boxes.append([(x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1])
                all_boxes.append([x1, y1, x2, y2])

    def calculate_average_iou(pred_boxes, target_boxes):
        def compute_iou(box1, box2):
            x1_min, y1_min, x1_max, y1_max = box1
            x2_min, y2_min, x2_max, y2_max = box2

            inter_x_min = max(x1_min, x2_min)
            inter_y_min = max(y1_min, y2_min)
            inter_x_max = min(x1_max, x2_max)
            inter_y_max = min(y1_max, y2_max)

            inter_width = max(0, inter_x_max - inter_x_min)
            inter_height = max(0, inter_y_max - inter_y_min)
            inter_area = inter_width * inter_height

            area1 = (x1_max - x1_min) * (y1_max - y1_min)
            area2 = (x2_max - x2_min) * (y2_max - y2_min)

            union_area = area1 + area2 - inter_area

            return inter_area / union_area if union_area > 0 else 0.0

        pred_coords = pred_boxes
        target_coords = target_boxes  # x1,y1,x2,y2

        total_iou = 0.0
        num_targets = len(target_boxes)

        if num_targets == 0:
            return 0.0

        # For each ground-truth box, take the best-matching predicted box,
        # then average these best IoUs over all ground-truth boxes.
        for t_coord in target_coords:
            best_iou = 0.0
            for p_coord in pred_coords:
                iou = compute_iou(t_coord, p_coord)
                if iou > best_iou:
                    best_iou = iou
            total_iou += best_iou

        return total_iou / num_targets

    return calculate_average_iou(all_boxes, target_boxes)
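
# Quick sanity check (hypothetical strings; boxes use the <box>[x1,y1,x2,y2]</box>
# format parsed above, with no spaces between coordinates):
#   compute_box_iou("<think>the dog <box>[10,10,50,50]</box></think>", [[10, 10, 50, 50]])
#     -> 1.0 (the single prediction exactly matches the single target)
#   compute_box_iou("<think>no boxes mentioned</think>", [[10, 10, 50, 50]])
#     -> 0.0 (no predicted box to match against the target)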


def eval_model_row(item):
    # OCR questions are free-form; all other categories are multiple-choice.
    if item["category"] == "OCR":
        qs = item["question"]
    else:
        qs = item["question"] + " Options:\n" + item["multi-choice options"]

    content = [
        {
            "type": "image_url",
            "image_url": f"data:image/jpeg;base64,{item['image']}",
        },
        {
            "type": "text",
            "text": qs + "\nSelect the best answer to the above multiple-choice question based on the image. After the reasoning process, respond with only the letter of the correct option between <answer> and </answer>.",
        },
    ]

    messages = [
        {
            "role": "system",
            "content": [{
                "type": "text",
                "text": """A conversation between user and assistant. The user asks a question, and the Assistant solves it. The assistant MUST first think about the reasoning process in the mind and then provide the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively. When referring to particular objects in the reasoning process, the assistant MUST localize the object with bounding box coordinates between <box> and </box>. You MUST strictly follow the format.""",
            }],
        },
        {
            "role": "user",
            "content": content,
        },
    ]

    # Preparation for inference
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    # Prefill the opening tag so the response starts inside a <think> block.
    text += "<think>"

    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    inputs = inputs.to(model.device)

    # Inference: Generation of the output
    with torch.inference_mode():
        generated_ids = model.generate(
            **inputs,
            top_p=0.001,
            top_k=1,
            temperature=0.01,
            repetition_penalty=1.0,
            max_new_tokens=1024,
            use_cache=True,
            do_sample=True,
        )

    generated_ids_trimmed = [
        out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
    ]
    output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=False, clean_up_tokenization_spaces=False
    )

    box_iou = compute_box_iou(output_text[0], ast.literal_eval(item["target_instances"]))

    pattern = r"<answer>(.*?)</answer>"
    match = re.search(pattern, output_text[0], re.DOTALL)
    ans = match.group(1).strip().upper() if match else output_text[0]

    item["prediction"] = ans
    item["iou"] = box_iou

    return item


# default model
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "HaochenWang/TreeVGR-7B",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
    low_cpu_mem_usage=True,
)

# default processor; min_pixels/max_pixels bound the dynamic image resolution
# (in units of 28x28-pixel patches) that the processor feeds to the model
processor = AutoProcessor.from_pretrained(
    "HaochenWang/TreeVGR-7B",
    min_pixels=1280*28*28, max_pixels=16384*28*28,
)
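
# Note: with the "spawn" start method set at the top of this file, each pool
# worker re-imports this module and therefore loads its own copy of the model
# and processor above; the pool below assumes roughly one worker per visible GPU.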
"HaochenWang/TreeVGR-7B", 168 | min_pixels=1280*28*28, max_pixels=16384*28*28, 169 | ) 170 | 171 | 172 | if __name__ == "__main__": 173 | # load data 174 | df = load_dataset("HaochenWang/TreeBench", data_files="TreeBench.tsv", delimiter="\t")["train"] 175 | 176 | # obtain results 177 | data = [] 178 | pool = multiprocessing.Pool(processes=torch.cuda.device_count()) 179 | with tqdm(total=len(df), desc="Processing") as pbar: 180 | for result in pool.imap(eval_model_row, df): 181 | if result is not None: 182 | data.append(result) 183 | pbar.update(1) 184 | 185 | pool.close() 186 | pool.join() 187 | 188 | results = {} 189 | tags = ["Perception/Attributes", "Perception/Material", "Perception/Physical State", 190 | "Perception/Object Retrieval", "Perception/OCR", 191 | "Reasoning/Perspective Transform", "Reasoning/Ordering", "Reasoning/Contact and Occlusion", 192 | "Reasoning/Spatial Containment", "Reasoning/Comparison"] 193 | total = 0 194 | correct = 0 195 | 196 | for tag in tags: 197 | results[tag] = {"correct": 0, "total": 0} 198 | for item in data: 199 | if tag == item["category"]: 200 | total += 1 201 | results[tag]["total"] += 1 202 | # exact matching 203 | if item["prediction"].upper() == item["answer"].upper(): 204 | results[tag]["correct"] += 1 205 | correct += 1 206 | 207 | acc = results[tag]["correct"] / results[tag]["total"] 208 | print(tag, f"{results[tag]['correct']}/{results[tag]['total']}={round(acc * 100, 2)}") 209 | print("==> Overall", f"{correct}/{total}={round(correct / total * 100, 2)}") 210 | 211 | iou = np.array([x["iou"] for x in data]) 212 | print("==> Mean IoU:", round(np.mean(iou) * 100, 2)) -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [Haochen Wang]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

--------------------------------------------------------------------------------