├── .gitignore
├── .ipynb_checkpoints
│   └── demo-checkpoint.ipynb
├── LICENSE
├── README.md
├── clip
│   ├── __init__.py
│   ├── bpe_simple_vocab_16e6.txt.gz
│   ├── build_model.py
│   ├── clip.py
│   ├── clip_model.py
│   ├── clip_surgery_model.py
│   └── simple_tokenizer.py
├── config
│   ├── CAMO.yaml
│   ├── CHAMELEON.yaml
│   └── COD10K.yaml
├── contrastive_generate.py
├── datasets
│   ├── __init__.py
│   ├── datasets.py
│   ├── image_folder.py
│   └── wrappers.py
├── demo.ipynb
├── demo_v4-ezgif.com-speed.gif
├── frame_promac.png
├── framework_ProMaC_v10.png
├── llava
│   ├── __init__.py
│   ├── constants.py
│   ├── conversation.py
│   ├── eval
│   │   ├── eval_gpt_review.py
│   │   ├── eval_gpt_review_bench.py
│   │   ├── eval_gpt_review_visual.py
│   │   ├── eval_pope.py
│   │   ├── eval_science_qa.py
│   │   ├── eval_science_qa_gpt4.py
│   │   ├── eval_science_qa_gpt4_requery.py
│   │   ├── eval_textvqa.py
│   │   ├── generate_webpage_data_from_table.py
│   │   ├── m4c_evaluator.py
│   │   ├── model_qa.py
│   │   ├── model_vqa.py
│   │   ├── model_vqa_loader.py
│   │   ├── model_vqa_mmbench.py
│   │   ├── model_vqa_science.py
│   │   ├── qa_baseline_gpt35.py
│   │   ├── run_llava.py
│   │   ├── summarize_gpt_review.py
│   │   ├── table
│   │   │   ├── answer
│   │   │   │   ├── answer_alpaca-13b.jsonl
│   │   │   │   ├── answer_bard.jsonl
│   │   │   │   ├── answer_gpt35.jsonl
│   │   │   │   ├── answer_llama-13b.jsonl
│   │   │   │   └── answer_vicuna-13b.jsonl
│   │   │   ├── caps_boxes_coco2014_val_80.jsonl
│   │   │   ├── model.jsonl
│   │   │   ├── prompt.jsonl
│   │   │   ├── question.jsonl
│   │   │   ├── results
│   │   │   │   ├── test_sqa_llava_13b_v0.json
│   │   │   │   └── test_sqa_llava_lcs_558k_sqa_12e_vicuna_v1_3_13b.json
│   │   │   ├── review
│   │   │   │   ├── review_alpaca-13b_vicuna-13b.jsonl
│   │   │   │   ├── review_bard_vicuna-13b.jsonl
│   │   │   │   ├── review_gpt35_vicuna-13b.jsonl
│   │   │   │   └── review_llama-13b_vicuna-13b.jsonl
│   │   │   ├── reviewer.jsonl
│   │   │   └── rule.json
│   │   └── webpage
│   │       ├── figures
│   │       │   ├── alpaca.png
│   │       │   ├── bard.jpg
│   │       │   ├── chatgpt.svg
│   │       │   ├── llama.jpg
│   │       │   ├── swords_FILL0_wght300_GRAD0_opsz48.svg
│   │       │   └── vicuna.jpeg
│   │       ├── index.html
│   │       ├── script.js
│   │       └── styles.css
│   ├── mm_utils.py
│   ├── model
│   │   ├── __init__.py
│   │   ├── apply_delta.py
│   │   ├── builder.py
│   │   ├── consolidate.py
│   │   ├── language_model
│   │   │   ├── llava_llama.py
│   │   │   ├── llava_mistral.py
│   │   │   ├── llava_mpt.py
│   │   │   └── mpt
│   │   │       ├── adapt_tokenizer.py
│   │   │       ├── attention.py
│   │   │       ├── blocks.py
│   │   │       ├── configuration_mpt.py
│   │   │       ├── custom_embedding.py
│   │   │       ├── flash_attn_triton.py
│   │   │       ├── hf_prefixlm_converter.py
│   │   │       ├── meta_init_context.py
│   │   │       ├── modeling_mpt.py
│   │   │       ├── norm.py
│   │   │       └── param_init_fns.py
│   │   ├── llava_arch.py
│   │   ├── make_delta.py
│   │   ├── multimodal_encoder
│   │   │   ├── builder.py
│   │   │   └── clip_encoder.py
│   │   ├── multimodal_projector
│   │   │   └── builder.py
│   │   └── utils.py
│   ├── serve
│   │   ├── __init__.py
│   │   ├── cli.py
│   │   ├── controller.py
│   │   ├── examples
│   │   │   ├── extreme_ironing.jpg
│   │   │   └── waterview.jpg
│   │   ├── gradio_web_server.py
│   │   ├── model_worker.py
│   │   ├── register_worker.py
│   │   ├── sglang_worker.py
│   │   └── test_message.py
│   ├── train
│   │   ├── llama_flash_attn_monkey_patch.py
│   │   ├── llama_xformers_attn_monkey_patch.py
│   │   ├── llava_trainer.py
│   │   ├── train.py
│   │   ├── train_mem.py
│   │   └── train_xformers.py
│   └── utils.py
├── main.py
├── motivation.png
├── requirements_llava.txt
├── script_llava.sh
├── sod_metric.py
├── utils.py
├── utils_mllm.py
├── vcd_utils
│   └── vcd_sample.py
└── visulization_n.png
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__/
2 | data
3 | *.pth
4 | output_img
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Jian Hu
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # :fire: [NeurIPS24] ProMaC: Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation
2 |
3 | Code release of paper:
4 |
5 | [**Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation**](https://arxiv.org/abs/2408.15205)
6 |
7 | [Jian Hu](https://lwpyh.github.io/), [Jiayi Lin](https://jylin8100.github.io/), [Junchi Yan](https://thinklab.sjtu.edu.cn/), [Shaogang Gong](http://www.eecs.qmul.ac.uk/~sgg/)
8 |
9 | Queen Mary University of London, Shanghai Jiao Tong University
10 |
11 |
12 |
13 |
14 |
15 | ## :rocket: News
16 | * **[2024.09.25]** ProMaC is accepted to NeurIPS 2024!
17 | * **[2024.08.30]** Model running instructions with LLaVA1.5 on the CAMO and COD10K datasets are released.
18 | * **[2024.08.26]** [Demo](#demo) of ProMaC is released.
19 | * **[2024.08.26]** Model running instructions with LLaVA1.5 on the CHAMELEON dataset are released.
20 |
21 |
22 |
23 |
24 |
25 |
26 | ## :bulb: Highlight
27 |
28 | Promptable segmentation typically requires instance-specific manual prompts to guide the segmentation of each desired object. To minimize such a need, task-generic promptable segmentation has been introduced, which employs a single task-generic prompt to segment various images of different objects in the same task. Current methods use Multimodal Large Language Models (MLLMs) to reason detailed instance-specific prompts from a task-generic prompt to improve segmentation accuracy. The effectiveness of this segmentation heavily depends on the precision of these derived prompts. However, MLLMs often suffer from hallucinations during reasoning, resulting in inaccurate prompting. While existing methods focus on eliminating hallucinations to improve a model, we argue that MLLM hallucinations can reveal valuable contextual insights when leveraged correctly, as they represent pre-trained large-scale knowledge beyond individual images. In this paper, we utilize hallucinations to mine task-related information from images and verify its accuracy to enhance the precision of the generated prompts.
29 |
30 |
31 |
32 |
33 |
34 |
35 | A brief introduction of how ProMaC works!
36 |
37 |
38 |
39 |
40 | Specifically, we introduce an iterative Prompt-Mask Cycle generation framework (ProMaC) with a prompt generator and a mask generator. The prompt generator uses multi-scale chain-of-thought prompting, initially exploring hallucinations to extract extended contextual knowledge from a test image. These hallucinations are then reduced to formulate precise instance-specific prompts, directing the mask generator to produce masks consistent with task semantics via mask semantic alignment. The generated masks iteratively induce the prompt generator to focus more on task-relevant image areas and reduce irrelevant hallucinations, jointly resulting in better prompts and masks.
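For intuition only, the sketch below restates the cycle described above in plain Python. The callable names (`prompt_generator`, `mask_generator`) are placeholders, not the repository's actual API, which lives in main.py and the utility modules.

```python
# Illustrative-only sketch of the prompt-mask cycle; the generator callables are
# stand-ins for the MLLM-based prompt reasoning and the SAM-based mask generation.
from typing import Any, Callable, Optional


def promac_cycle(image: Any,
                 task_prompt: str,
                 prompt_generator: Callable[[Any, str, Optional[Any]], str],
                 mask_generator: Callable[[Any, str], Any],
                 iterations: int = 3) -> Any:
    """Alternate between instance-specific prompt generation and mask generation."""
    mask = None
    for _ in range(iterations):
        # Reason an instance-specific prompt from the task-generic prompt; the
        # previous mask steers attention to task-relevant regions and suppresses
        # irrelevant hallucinations.
        instance_prompt = prompt_generator(image, task_prompt, mask)
        # Produce a mask aligned with the task semantics of the refined prompt.
        mask = mask_generator(image, instance_prompt)
    return mask
```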
41 |
42 |
43 |
44 |
45 |
46 | ## Quick Start
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
57 |
58 | ### Download Dataset
59 | 1. Download the datasets from the following links:
60 |
61 | **Camouflaged Object Detection Dataset**
62 | - **[COD10K](https://github.com/DengPingFan/SINet/)**
63 | - **[CAMO](https://drive.google.com/open?id=1h-OqZdwkuPhBvGcVAwmh0f1NGqlH_4B6)**
64 | - **[CHAMELEON](https://www.polsl.pl/rau6/datasets/)**
65 | 2. Put them in ./data/.
66 | ### Running ProMaC on CHAMELEON Dataset with LLaVA1.5
67 | 1. When playing with LLaVA, this code was implemented with Python 3.8 and PyTorch 2.1.0. We recommend creating a [virtualenv](https://virtualenv.pypa.io/) environment and installing all the dependencies as follows:
68 | ```bash
69 | # create virtual environment
70 | virtualenv ProMaC
71 | source ProMaC/bin/activate
72 | # prepare LLaVA
73 | git clone https://github.com/haotian-liu/LLaVA.git
74 | cd LLaVA
75 | pip install -e .
76 | cd ..
77 | # prepare SAM
78 | pip install git+https://github.com/facebookresearch/segment-anything.git
79 | wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
80 | pip install opencv-python imageio ftfy urllib3==1.26.6
81 | pip install diffusers transformers==4.36.0 accelerate scipy safetensors protobuf
82 | ```
83 | 2. ProMaC is a training-free, test-time adaptation approach, so you can try it directly by running:
84 | ```bash
85 | python main.py --config config/CHAMELEON.yaml
86 | ```
87 | or
88 | ```bash
89 | bash script_llava.sh
90 | ```
91 |
92 | ## Demo
93 | We further provide a [Jupyter notebook demo](https://github.com/lwpyh/promaC_code/blob/main/demo.ipynb) for visualization.
94 | 1. Complete the following steps in the shell before opening the Jupyter notebook. \
95 | The virtualenv environment named ProMaC needs to be created first, following [Quick Start](#running-promac-on-chameleon-dataset-with-llava15).
96 | ```bash
97 | pip install notebook
98 | pip install ipykernel ipywidgets
99 | python -m ipykernel install --user --name ProMaC
100 | ```
101 | 2. Open demo.ipynb and select the 'ProMaC' kernel in the running notebook.
102 |
103 |
104 |
105 |
106 | ## TO-DO LIST
107 | - [x] Update datasets and implementation scripts
108 | - [x] Demo and Codes
109 | - [ ] Keep incorporating more capabilities
110 |
111 | ## Citation
112 |
113 | If you find our work useful in your research, please consider citing:
114 |
115 | ```
116 | @article{hu2024leveraging,
117 | title={Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation},
118 | author={Hu, Jian and Lin, Jiayi and Yan, Junchi and Gong, Shaogang},
119 | journal={arXiv preprint arXiv:2408.15205},
120 | year={2024}
121 | }
122 | ```
123 |
124 | ## :cupid: Acknowledgements
125 |
126 | - [GenSAM](https://github.com/jyLin8100/GenSAM)
127 | - [Segment Anything](https://github.com/facebookresearch/segment-anything)
128 | - [LLaVA](https://github.com/haotian-liu/LLaVA)
129 | - [BLIP2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2)
130 | - [CLIP Surgery](https://github.com/xmed-lab/CLIP_Surgery)
131 |
132 |
--------------------------------------------------------------------------------
/clip/__init__.py:
--------------------------------------------------------------------------------
1 | from .clip import *
2 |
--------------------------------------------------------------------------------
/clip/bpe_simple_vocab_16e6.txt.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/clip/bpe_simple_vocab_16e6.txt.gz
--------------------------------------------------------------------------------
/clip/build_model.py:
--------------------------------------------------------------------------------
1 | from torch import nn
2 | from .clip_model import CLIP
3 | from .clip_surgery_model import CLIPSurgery
4 |
5 |
6 | def convert_weights(model: nn.Module):
7 | """Convert applicable model parameters to fp16"""
8 |
9 | def _convert_weights_to_fp16(l):
10 | if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
11 | l.weight.data = l.weight.data.half()
12 | if l.bias is not None:
13 | l.bias.data = l.bias.data.half()
14 |
15 | if isinstance(l, nn.MultiheadAttention):
16 | for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
17 | tensor = getattr(l, attr)
18 | if tensor is not None:
19 | tensor.data = tensor.data.half()
20 |
21 | for name in ["text_projection", "proj"]:
22 | if hasattr(l, name):
23 | attr = getattr(l, name)
24 | if attr is not None:
25 | attr.data = attr.data.half()
26 |
27 | model.apply(_convert_weights_to_fp16)
28 |
29 |
30 | def build_model(name: str, state_dict: dict, params:dict):
31 | vit = "visual.proj" in state_dict
32 |
33 | if vit:
34 | vision_width = state_dict["visual.conv1.weight"].shape[0]
35 | vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
36 | vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
37 | grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
38 | image_resolution = vision_patch_size * grid_size
39 | else:
40 | counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]]
41 | vision_layers = tuple(counts)
42 | vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
43 | output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
44 | vision_patch_size = None
45 | assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
46 | image_resolution = output_width * 32
47 |
48 | embed_dim = state_dict["text_projection"].shape[1]
49 | context_length = state_dict["positional_embedding"].shape[0]
50 | vocab_size = state_dict["token_embedding.weight"].shape[0]
51 | transformer_width = state_dict["ln_final.weight"].shape[0]
52 | transformer_heads = transformer_width // 64
53 | transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks")))
54 |
55 | if 'CS-' in name:
56 | model = CLIPSurgery(
57 | embed_dim,
58 | image_resolution, vision_layers, vision_width, vision_patch_size,
59 | context_length, vocab_size, transformer_width, transformer_heads, transformer_layers, params
60 | )
61 | else:
62 | model = CLIP(
63 | embed_dim,
64 | image_resolution, vision_layers, vision_width, vision_patch_size,
65 | context_length, vocab_size, transformer_width, transformer_heads, transformer_layers
66 | )
67 |
68 | for key in ["input_resolution", "context_length", "vocab_size"]:
69 | if key in state_dict:
70 | del state_dict[key]
71 |
72 | #convert_weights(model)
73 | model.load_state_dict(state_dict)
74 | return model.eval()
75 |
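As a usage note (not part of the repository), `build_model` expects the raw CLIP state dict. Below is a minimal sketch, assuming the OpenAI ViT-B/16 checkpoint has already been downloaded locally and that `params` carries CLIP Surgery options similar to the YAML settings; both assumptions are illustrative.

```python
# Hedged sketch: the checkpoint path and the params keys are assumptions.
import torch
from clip.build_model import build_model

# OpenAI CLIP checkpoints are TorchScript archives; grab their state dict.
jit_model = torch.jit.load("ViT-B-16.pt", map_location="cpu")
state_dict = jit_model.state_dict()

# A name containing 'CS-' selects the CLIPSurgery variant and forwards `params` to it.
model = build_model("CS-ViT-B/16", state_dict, params={"attn_qkv_strategy": "kk"})
```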
--------------------------------------------------------------------------------
/clip/simple_tokenizer.py:
--------------------------------------------------------------------------------
1 | import gzip
2 | import html
3 | import os
4 | from functools import lru_cache
5 |
6 | import ftfy
7 | import regex as re
8 |
9 |
10 | @lru_cache()
11 | def default_bpe():
12 | return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
13 |
14 |
15 | @lru_cache()
16 | def bytes_to_unicode():
17 | """
18 | Returns list of utf-8 byte and a corresponding list of unicode strings.
19 | The reversible bpe codes work on unicode strings.
20 | This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
21 | When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
22 | This is a significant percentage of your normal, say, 32K bpe vocab.
23 | To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
24 | And avoids mapping to whitespace/control characters the bpe code barfs on.
25 | """
26 | bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
27 | cs = bs[:]
28 | n = 0
29 | for b in range(2**8):
30 | if b not in bs:
31 | bs.append(b)
32 | cs.append(2**8+n)
33 | n += 1
34 | cs = [chr(n) for n in cs]
35 | return dict(zip(bs, cs))
36 |
37 |
38 | def get_pairs(word):
39 | """Return set of symbol pairs in a word.
40 | Word is represented as tuple of symbols (symbols being variable-length strings).
41 | """
42 | pairs = set()
43 | prev_char = word[0]
44 | for char in word[1:]:
45 | pairs.add((prev_char, char))
46 | prev_char = char
47 | return pairs
48 |
49 |
50 | def basic_clean(text):
51 | text = ftfy.fix_text(text)
52 | text = html.unescape(html.unescape(text))
53 | return text.strip()
54 |
55 |
56 | def whitespace_clean(text):
57 | text = re.sub(r'\s+', ' ', text)
58 | text = text.strip()
59 | return text
60 |
61 |
62 | class SimpleTokenizer(object):
63 | def __init__(self, bpe_path: str = default_bpe()):
64 | self.byte_encoder = bytes_to_unicode()
65 | self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
66 | merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
67 | merges = merges[1:49152-256-2+1]
68 | merges = [tuple(merge.split()) for merge in merges]
69 | vocab = list(bytes_to_unicode().values())
70 | vocab = vocab + [v+'</w>' for v in vocab]
71 | for merge in merges:
72 | vocab.append(''.join(merge))
73 | vocab.extend(['<|startoftext|>', '<|endoftext|>'])
74 | self.encoder = dict(zip(vocab, range(len(vocab))))
75 | self.decoder = {v: k for k, v in self.encoder.items()}
76 | self.bpe_ranks = dict(zip(merges, range(len(merges))))
77 | self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
78 | self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
79 |
80 | def bpe(self, token):
81 | if token in self.cache:
82 | return self.cache[token]
83 | word = tuple(token[:-1]) + ( token[-1] + '</w>',)
84 | pairs = get_pairs(word)
85 |
86 | if not pairs:
87 | return token+'</w>'
88 |
89 | while True:
90 | bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
91 | if bigram not in self.bpe_ranks:
92 | break
93 | first, second = bigram
94 | new_word = []
95 | i = 0
96 | while i < len(word):
97 | try:
98 | j = word.index(first, i)
99 | new_word.extend(word[i:j])
100 | i = j
101 | except:
102 | new_word.extend(word[i:])
103 | break
104 |
105 | if word[i] == first and i < len(word)-1 and word[i+1] == second:
106 | new_word.append(first+second)
107 | i += 2
108 | else:
109 | new_word.append(word[i])
110 | i += 1
111 | new_word = tuple(new_word)
112 | word = new_word
113 | if len(word) == 1:
114 | break
115 | else:
116 | pairs = get_pairs(word)
117 | word = ' '.join(word)
118 | self.cache[token] = word
119 | return word
120 |
121 | def encode(self, text):
122 | bpe_tokens = []
123 | text = whitespace_clean(basic_clean(text)).lower()
124 | for token in re.findall(self.pat, text):
125 | token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
126 | bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
127 | return bpe_tokens
128 |
129 | def decode(self, tokens):
130 | text = ''.join([self.decoder[token] for token in tokens])
131 | text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
132 | return text
133 |
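A quick round-trip example (illustrative, not shipped with the repository); it assumes the gzip vocab file sits next to the module, as `default_bpe()` expects.

```python
from clip.simple_tokenizer import SimpleTokenizer

tokenizer = SimpleTokenizer()
ids = tokenizer.encode("A photo of a camouflaged animal")
print(ids)                    # BPE ids; encode() lower-cases and adds no <|startoftext|>/<|endoftext|>
print(tokenizer.decode(ids))  # recovers the lower-cased text (trailing space comes from the </w> markers)
```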
--------------------------------------------------------------------------------
/config/CAMO.yaml:
--------------------------------------------------------------------------------
1 |
2 | test_dataset:
3 | dataset:
4 | name: paired-image-folders
5 | args:
6 | root_path_1: /data/home/acw652/Crosscheck1/data/CAMO_TestingDataset/Image
7 | root_path_2: /data/home/acw652/Crosscheck1/data/CAMO_TestingDataset/GT
8 | cache: none
9 | split_key: test
10 | wrapper:
11 | name: val
12 | args:
13 | inp_size: 1024
14 | batch_size: 1
15 |
16 |
17 | ## VLM
18 | llm: LLaVA # [blip, LLaVA]
19 | load_in_8bit: false # for blip only
20 |
21 | # text prompt
22 | prompt_q: TheCamo #3attriTheBgSynCamo
23 | use_gene_prompt: false # store_true, use generic prompt w/o VLM
24 | use_gene_prompt_fg: false # store_true, use generic prompt for foreground, for exact object; Note: only completed for LLaVA
25 | update_text: true # store_true, update text with VLM for each iteration
26 | check_exist_each_iter: false # only for multiple classes segmentation, check if a certain class exists
27 |
28 | # llava
29 | LLaVA_w_caption: true # store_true
30 | model_path: liuhaotian/llava-v1.5-13b # liuhaotian/llava-v1.5-13b (llava1.5) llava-v1.6-vicuna-13b
31 | model_base: null
32 | num_chunks: 1
33 | chunk_idx: 0
34 | temperature: 0.2
35 | top_p: null
36 | num_beams: 1
37 |
38 |
39 | ## Spatial CLIP
40 | clip_model: CS-ViT-B/16 # model for clip surgery
41 | clip_model_ori: ViT-B/16 # model for clip
42 | rdd_str: '' # text for redundant features as input of clip surgery
43 | clip_attn_qkv_strategy: kk # qkv attention strategy for clip surgery; [vv(original), kk]
44 | clip_use_bg_text: true # store_true, background text input for clip surgery
45 | clip_bg_strategy: FgBgHm # for clip surgery; [FgBgHm, FgBgHmClamp]
46 | down_sample: 0.5 # down sample to generate points from CLIP surgery output
47 | attn_thr: 0.9 # threshold for CLIP Surgery to get points from attention map
48 |
49 |
50 | ## SAM
51 | sam_checkpoint: sam_vit_h_4b8939.pth
52 | sam_model_type: vit_h
53 | patch_list: [1, 2]
54 |
55 | ## iteration
56 | recursive: 3 # recursive times to use CLIP surgery, to get the point
57 | recursive_coef: 0.3 # recursive coefficient to use CLIP surgery, to get the point
58 | clipInputEMA: true # store_true
59 | post_mode: 'MaxIOUBoxSAMInput'
60 |
61 |
62 |
63 |
64 |
65 |
--------------------------------------------------------------------------------
/config/CHAMELEON.yaml:
--------------------------------------------------------------------------------
1 |
2 | test_dataset:
3 | dataset:
4 | name: paired-image-folders
5 | args:
6 | root_path_1: /data/home/acw652/Crosscheck1/data/CHAMELEON_TestingDataset/Image
7 | root_path_2: /data/home/acw652/Crosscheck1/data/CHAMELEON_TestingDataset/GT
8 | cache: none
9 | split_key: test
10 | wrapper:
11 | name: val
12 | args:
13 | inp_size: 1024
14 | batch_size: 1
15 |
16 |
17 | ## VLM
18 | llm: LLaVA # [blip, LLaVA]
19 | load_in_8bit: false # for blip only
20 |
21 | # text prompt
22 | prompt_q: TheCamo #3attriTheBgSynCamo
23 | use_gene_prompt: false # store_true, use generic prompt w/o VLM
24 | use_gene_prompt_fg: false # store_true, use generic prompt for foreground, for exact object; Note: only completed for LLaVA
25 | update_text: true # store_true, update text with VLM for each iteration
26 | check_exist_each_iter: false # only for multiple classes segmentation, check if a certain class exists
27 |
28 | # llava
29 | LLaVA_w_caption: true # store_true
30 | model_path: liuhaotian/llava-v1.5-13b # liuhaotian/llava-v1.5-13b (llava1.5) #llava-v1.6-vicuna-13b
31 | model_base: null
32 | num_chunks: 1
33 | chunk_idx: 0
34 | temperature: 0.2
35 | top_p: null
36 | num_beams: 1
37 |
38 |
39 | ## Spatial CLIP
40 | clip_model: CS-ViT-B/16 # model for clip surgery
41 | clip_model_ori: ViT-B/16 # model for clip
42 | rdd_str: '' # text for redundant features as input of clip surgery
43 | clip_attn_qkv_strategy: kk # qkv attention strategy for clip surgery; [vv(original), kk]
44 | clip_use_bg_text: true # store_true, background text input for clip surgery
45 | clip_bg_strategy: FgBgHm # for clip surgery; [FgBgHm, FgBgHmClamp]
46 | down_sample: 0.5 # down sample to generate points from CLIP surgery output
47 | attn_thr: 0.9 # threshold for CLIP Surgery to get points from attention map
48 |
49 |
50 | ## SAM
51 | sam_checkpoint: sam_vit_h_4b8939.pth
52 | sam_model_type: vit_h
53 | patch_list: [1, 2]
54 |
55 | ## iteration
56 | recursive: 3 # recursive times to use CLIP surgery, to get the point
57 | recursive_coef: 0.3 # recursive coefficient to use CLIP surgery, to get the point
58 | clipInputEMA: true # store_true
59 | post_mode: 'MaxIOUBoxSAMInput'
60 |
61 |
62 |
63 |
64 |
65 |
--------------------------------------------------------------------------------
/config/COD10K.yaml:
--------------------------------------------------------------------------------
1 |
2 | test_dataset:
3 | dataset:
4 | name: paired-image-folders
5 | args:
6 | root_path_1: /data/home/acw652/Crosscheck1/data/COD/TestDataset/COD10K/Imgs
7 | root_path_2: /data/home/acw652/Crosscheck1/data/COD/TestDataset/COD10K/GT
8 | cache: none
9 | split_key: test
10 | wrapper:
11 | name: val
12 | args:
13 | inp_size: 1024
14 | batch_size: 1
15 |
16 |
17 | ## VLM
18 | llm: LLaVA # [blip, LLaVA]
19 | load_in_8bit: false # for blip only
20 |
21 | # text prompt
22 | prompt_q: TheCamo #3attriTheBgSynCamo
23 | use_gene_prompt: false # store_true, use generic prompt w/o VLM
24 | use_gene_prompt_fg: false # store_true, use generic prompt for foreground, for exact object; Note: only completed for LLaVA
25 | update_text: true # store_true, update text with VLM for each iteration
26 | check_exist_each_iter: false # only for multiple classes segmentation, check if a certain class exists
27 |
28 | # llava
29 | LLaVA_w_caption: true # store_true
30 | model_path: liuhaotian/llava-v1.5-13b # liuhaotian/llava-v1.5-13b (llava1.5) # llava-v1.6-vicuna-13b
31 | model_base: null
32 | num_chunks: 1
33 | chunk_idx: 0
34 | temperature: 0.2
35 | top_p: null
36 | num_beams: 1
37 |
38 |
39 | ## Spatial CLIP
40 | clip_model: CS-ViT-B/16 # model for clip surgery
41 | clip_model_ori: ViT-B/16 # model for clip
42 | rdd_str: '' # text for redundant features as input of clip surgery
43 | clip_attn_qkv_strategy: kk # qkv attention strategy for clip surgery; [vv(original), kk]
44 | clip_use_bg_text: true # store_true, background text input for clip surgery
45 | clip_bg_strategy: FgBgHm # for clip surgery; [FgBgHm, FgBgHmClamp]
46 | down_sample: 0.5 # down sample to generate points from CLIP surgery output
47 | attn_thr: 0.9 # threshold for CLIP Surgery to get points from attention map
48 |
49 |
50 | ## SAM
51 | sam_checkpoint: sam_vit_h_4b8939.pth
52 | sam_model_type: vit_h
53 | patch_list: [1, 2]
54 |
55 | ## iteration
56 | recursive: 3 # recursive times to use CLIP surgery, to get the point
57 | recursive_coef: 0.3 # recursive coefficient to use CLIP surgery, to get the point
58 | clipInputEMA: true # store_true
59 | post_mode: 'MaxIOUBoxSAMInput'
60 |
61 |
62 |
63 |
64 |
65 |
--------------------------------------------------------------------------------
/datasets/__init__.py:
--------------------------------------------------------------------------------
1 | from .datasets import register, make
2 | from . import image_folder
3 | from . import wrappers
4 |
--------------------------------------------------------------------------------
/datasets/datasets.py:
--------------------------------------------------------------------------------
1 | import copy
2 |
3 |
4 | datasets = {}
5 |
6 |
7 | def register(name):
8 | def decorator(cls):
9 | datasets[name] = cls
10 | return cls
11 | return decorator
12 |
13 |
14 | def make(dataset_spec, args=None):
15 | if args is not None:
16 | dataset_args = copy.deepcopy(dataset_spec['args'])
17 | dataset_args.update(args)
18 | else:
19 | dataset_args = dataset_spec['args']
20 | dataset = datasets[dataset_spec['name']](**dataset_args)
21 | return dataset
22 |
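To illustrate the registry pattern above, here is a hypothetical dataset registered and instantiated through `make`; the class and its arguments are made up for the example, and it should be run from the repo root so the local `datasets` package is imported.

```python
from datasets import register, make


@register('toy-dataset')
class ToyDataset:
    def __init__(self, n=4):
        self.n = n

    def __len__(self):
        return self.n


spec = {'name': 'toy-dataset', 'args': {'n': 2}}
ds = make(spec, args={'n': 8})  # explicit args override the spec's defaults
print(len(ds))                  # 8
```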
--------------------------------------------------------------------------------
/datasets/image_folder.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | from PIL import Image
4 |
5 | import pickle
6 | import imageio
7 | import numpy as np
8 | import torch
9 | from torch.utils.data import Dataset
10 | from torchvision import transforms
11 | import random
12 | from datasets import register
13 |
14 |
15 | @register('image-folder')
16 | class ImageFolder(Dataset):
17 | def __init__(self, path, split_file=None, split_key=None, first_k=None, size=None,
18 | repeat=1, cache='none', mask=False):
19 | self.repeat = repeat
20 | self.cache = cache
21 | self.path = path
22 | self.Train = False
23 | self.split_key = split_key
24 |
25 | self.size = size
26 | self.mask = mask
27 | if self.mask:
28 | self.img_transform = transforms.Compose([
29 | transforms.Resize((self.size, self.size), interpolation=Image.NEAREST),
30 | transforms.ToTensor(),
31 | ])
32 | else:
33 | self.img_transform = transforms.Compose([
34 | transforms.Resize((self.size, self.size)),
35 | transforms.ToTensor(),
36 | transforms.Normalize(mean=[0.485, 0.456, 0.406],
37 | std=[0.229, 0.224, 0.225])
38 | ])
39 |
40 | if split_file is None:
41 | filenames = sorted(os.listdir(path))
42 | else:
43 | with open(split_file, 'r') as f:
44 | filenames = json.load(f)[split_key]
45 | if first_k is not None:
46 | filenames = filenames[:first_k]
47 |
48 | self.files = []
49 | self.paths = []
50 |
51 | for filename in filenames:
52 | file = os.path.join(path, filename)
53 | self.append_file(file)
54 | self.paths.append(file)
55 |
56 | def append_file(self, file):
57 | if self.cache == 'none':
58 | self.files.append(file)
59 | elif self.cache == 'in_memory':
60 | self.files.append(self.img_process(file))
61 |
62 | def __len__(self):
63 | return len(self.files) * self.repeat
64 |
65 | def __getitem__(self, idx):
66 | x = self.files[idx % len(self.files)]
67 |
68 | if self.cache == 'none':
69 | return self.img_process(x)
70 | elif self.cache == 'in_memory':
71 | return x
72 |
73 | def img_process(self, file):
74 | if self.mask:
75 | return Image.open(file).convert('L')
76 | else:
77 | return Image.open(file).convert('RGB')
78 |
79 | @register('paired-image-folders')
80 | class PairedImageFolders(Dataset):
81 |
82 | def __init__(self, root_path_1, root_path_2, **kwargs):
83 | self.dataset_1 = ImageFolder(root_path_1, **kwargs)
84 | self.dataset_2 = ImageFolder(root_path_2, **kwargs, mask=True)
85 | self.paths_img = self.dataset_1.paths
86 | self.paths_gt = self.dataset_2.paths
87 |
88 | def __len__(self):
89 | return len(self.dataset_1)
90 |
91 | def __getitem__(self, idx):
92 | return self.dataset_1[idx], self.dataset_2[idx]
93 |
--------------------------------------------------------------------------------
/datasets/wrappers.py:
--------------------------------------------------------------------------------
1 |
2 | import functools
3 | import random
4 | import math
5 | from PIL import Image
6 |
7 | import numpy as np
8 | import torch
9 | from torch.utils.data import Dataset
10 | from torchvision import transforms
11 | import torchvision
12 |
13 | from datasets import register
14 | import cv2
15 | from math import pi
16 | from torchvision.transforms import InterpolationMode
17 |
18 | import torch.nn.functional as F
19 | def to_mask(mask):
20 | return transforms.ToTensor()(
21 | transforms.Grayscale(num_output_channels=1)(
22 | transforms.ToPILImage()(mask)))
23 |
24 |
25 | def resize_fn(img, size):
26 | return transforms.ToTensor()(
27 | transforms.Resize(size)(
28 | transforms.ToPILImage()(img)))
29 |
30 |
31 | @register('val')
32 | class ValDataset(Dataset):
33 | def __init__(self, dataset, inp_size=None, augment=False):
34 | self.dataset = dataset
35 | self.inp_size = inp_size
36 | self.augment = augment
37 |
38 | self.img_transform = transforms.Compose([
39 | transforms.Resize((inp_size, inp_size)),
40 | transforms.ToTensor(),
41 | transforms.Normalize(mean=[0.485, 0.456, 0.406],
42 | std=[0.229, 0.224, 0.225])
43 | ])
44 | self.mask_transform = transforms.Compose([
45 | transforms.Resize((inp_size, inp_size), interpolation=Image.NEAREST),
46 | transforms.ToTensor(),
47 | ])
48 |
49 | def __len__(self):
50 | return len(self.dataset)
51 |
52 | def __getitem__(self, idx):
53 | img, mask = self.dataset[idx]
54 |
55 | return {
56 | 'inp': self.img_transform(img),
57 | 'gt': self.mask_transform(mask)
58 | }
59 |
60 |
61 | @register('train')
62 | class TrainDataset(Dataset):
63 | def __init__(self, dataset, size_min=None, size_max=None, inp_size=None,
64 | augment=False, gt_resize=None):
65 | self.dataset = dataset
66 | self.size_min = size_min
67 | if size_max is None:
68 | size_max = size_min
69 | self.size_max = size_max
70 | self.augment = augment
71 | self.gt_resize = gt_resize
72 |
73 | self.inp_size = inp_size
74 | self.img_transform = transforms.Compose([
75 | transforms.Resize((self.inp_size, self.inp_size)),
76 | transforms.ToTensor(),
77 | transforms.Normalize(mean=[0.485, 0.456, 0.406],
78 | std=[0.229, 0.224, 0.225])
79 | ])
80 | self.inverse_transform = transforms.Compose([
81 | transforms.Normalize(mean=[0., 0., 0.],
82 | std=[1/0.229, 1/0.224, 1/0.225]),
83 | transforms.Normalize(mean=[-0.485, -0.456, -0.406],
84 | std=[1, 1, 1])
85 | ])
86 | self.mask_transform = transforms.Compose([
87 | transforms.Resize((self.inp_size, self.inp_size)),
88 | transforms.ToTensor(),
89 | ])
90 |
91 | def __len__(self):
92 | return len(self.dataset)
93 |
94 | def __getitem__(self, idx):
95 | img, mask = self.dataset[idx]
96 |
97 | # random flip
98 | if random.random() < 0.5:
99 | img = img.transpose(Image.FLIP_LEFT_RIGHT)
100 | mask = mask.transpose(Image.FLIP_LEFT_RIGHT)
101 |
102 | img = transforms.Resize((self.inp_size, self.inp_size))(img)
103 | mask = transforms.Resize((self.inp_size, self.inp_size), interpolation=InterpolationMode.NEAREST)(mask)
104 |
105 | return {
106 | 'inp': self.img_transform(img),
107 | 'gt': self.mask_transform(mask)
108 | }
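Since main.py is not reproduced here, the wiring below is an assumption: it shows how the `test_dataset` block in config/*.yaml plausibly turns into a DataLoader via the `make` registry and the `val` wrapper above.

```python
# Assumed wiring; only the config keys shown in config/CHAMELEON.yaml are used.
import yaml
from torch.utils.data import DataLoader

import datasets

with open('config/CHAMELEON.yaml') as f:
    cfg = yaml.safe_load(f)

spec = cfg['test_dataset']
base = datasets.make(spec['dataset'])                              # paired-image-folders
val_set = datasets.make(spec['wrapper'], args={'dataset': base})   # 'val' wrapper -> ValDataset
loader = DataLoader(val_set, batch_size=spec['batch_size'], shuffle=False)

batch = next(iter(loader))
inp, gt = batch['inp'], batch['gt']  # normalized 1024x1024 image and nearest-resized mask
```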
--------------------------------------------------------------------------------
/demo_v4-ezgif.com-speed.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/demo_v4-ezgif.com-speed.gif
--------------------------------------------------------------------------------
/frame_promac.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/frame_promac.png
--------------------------------------------------------------------------------
/framework_ProMaC_v10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/framework_ProMaC_v10.png
--------------------------------------------------------------------------------
/llava/__init__.py:
--------------------------------------------------------------------------------
1 | from .model import LlavaLlamaForCausalLM
2 |
--------------------------------------------------------------------------------
/llava/constants.py:
--------------------------------------------------------------------------------
1 | CONTROLLER_HEART_BEAT_EXPIRATION = 30
2 | WORKER_HEART_BEAT_INTERVAL = 15
3 |
4 | LOGDIR = "."
5 |
6 | # Model Constants
7 | IGNORE_INDEX = -100
8 | IMAGE_TOKEN_INDEX = -200
9 | DEFAULT_IMAGE_TOKEN = "<image>"
10 | DEFAULT_IMAGE_PATCH_TOKEN = "<im_patch>"
11 | DEFAULT_IM_START_TOKEN = "<im_start>"
12 | DEFAULT_IM_END_TOKEN = "<im_end>"
13 | IMAGE_PLACEHOLDER = "<image-placeholder>"
14 |
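A hedged illustration of how these tokens are typically spliced into a user query, mirroring the pattern in llava/eval/run_llava.py; the example question is made up, and the branch taken depends on the loaded model's `mm_use_im_start_end` setting.

```python
from llava.constants import (DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN,
                             DEFAULT_IM_END_TOKEN)

question = "Name of the camouflaged animal in one word."  # example query, not from the repo
use_im_start_end = False  # normally read from the loaded model's config

if use_im_start_end:
    prompt = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + "\n" + question
else:
    prompt = DEFAULT_IMAGE_TOKEN + "\n" + question
```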
--------------------------------------------------------------------------------
/llava/eval/eval_gpt_review.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 |
5 | import openai
6 | import tqdm
7 | import ray
8 | import time
9 |
10 | NUM_SECONDS_TO_SLEEP = 3
11 |
12 | @ray.remote(num_cpus=4)
13 | def get_eval(content: str, max_tokens: int):
14 | while True:
15 | try:
16 | response = openai.ChatCompletion.create(
17 | model='gpt-4',
18 | messages=[{
19 | 'role': 'system',
20 | 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
21 | }, {
22 | 'role': 'user',
23 | 'content': content,
24 | }],
25 | temperature=0.2, # TODO: figure out which temperature is best for evaluation
26 | max_tokens=max_tokens,
27 | )
28 | break
29 | except openai.error.RateLimitError:
30 | pass
31 | except Exception as e:
32 | print(e)
33 | time.sleep(NUM_SECONDS_TO_SLEEP)
34 |
35 | print('success!')
36 | return response['choices'][0]['message']['content']
37 |
38 |
39 | def parse_score(review):
40 | try:
41 | score_pair = review.split('\n')[0]
42 | score_pair = score_pair.replace(',', ' ')
43 | sp = score_pair.split(' ')
44 | if len(sp) == 2:
45 | return [float(sp[0]), float(sp[1])]
46 | else:
47 | print('error', review)
48 | return [-1, -1]
49 | except Exception as e:
50 | print(e)
51 | print('error', review)
52 | return [-1, -1]
53 |
54 |
55 | if __name__ == '__main__':
56 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
57 | parser.add_argument('-q', '--question')
58 | # parser.add_argument('-a', '--answer')
59 | parser.add_argument('-a', '--answer-list', nargs='+', default=[])
60 | parser.add_argument('-r', '--rule')
61 | parser.add_argument('-o', '--output')
62 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
63 | args = parser.parse_args()
64 |
65 | ray.init()
66 |
67 | f_q = open(os.path.expanduser(args.question))
68 | f_ans1 = open(os.path.expanduser(args.answer_list[0]))
69 | f_ans2 = open(os.path.expanduser(args.answer_list[1]))
70 | rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
71 |
72 | review_file = open(f'{args.output}', 'w')
73 |
74 | js_list = []
75 | handles = []
76 | idx = 0
77 | for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
78 | # if idx == 1:
79 | # break
80 |
81 | ques = json.loads(ques_js)
82 | ans1 = json.loads(ans1_js)
83 | ans2 = json.loads(ans2_js)
84 |
85 | category = json.loads(ques_js)['category']
86 | if category in rule_dict:
87 | rule = rule_dict[category]
88 | else:
89 | rule = rule_dict['default']
90 | prompt = rule['prompt']
91 | role = rule['role']
92 | content = (f'[Question]\n{ques["text"]}\n\n'
93 | f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
94 | f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
95 | f'[System]\n{prompt}\n\n')
96 | js_list.append({
97 | 'id': idx+1,
98 | 'question_id': ques['question_id'],
99 | 'answer1_id': ans1['answer_id'],
100 | 'answer2_id': ans2['answer_id'],
101 | 'category': category})
102 | idx += 1
103 | handles.append(get_eval.remote(content, args.max_tokens))
104 | # To avoid the rate limit set by OpenAI
105 | time.sleep(NUM_SECONDS_TO_SLEEP)
106 |
107 | reviews = ray.get(handles)
108 | for idx, review in enumerate(reviews):
109 | scores = parse_score(review)
110 | js_list[idx]['content'] = review
111 | js_list[idx]['tuple'] = scores
112 | review_file.write(json.dumps(js_list[idx]) + '\n')
113 | review_file.close()
114 |
--------------------------------------------------------------------------------
/llava/eval/eval_gpt_review_bench.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 |
5 | import openai
6 | import time
7 |
8 | NUM_SECONDS_TO_SLEEP = 0.5
9 |
10 |
11 | def get_eval(content: str, max_tokens: int):
12 | while True:
13 | try:
14 | response = openai.ChatCompletion.create(
15 | model='gpt-4-0314',
16 | messages=[{
17 | 'role': 'system',
18 | 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
19 | }, {
20 | 'role': 'user',
21 | 'content': content,
22 | }],
23 | temperature=0.2, # TODO: figure out which temperature is best for evaluation
24 | max_tokens=max_tokens,
25 | )
26 | break
27 | except openai.error.RateLimitError:
28 | pass
29 | except Exception as e:
30 | print(e)
31 | time.sleep(NUM_SECONDS_TO_SLEEP)
32 |
33 | return response['choices'][0]['message']['content']
34 |
35 |
36 | def parse_score(review):
37 | try:
38 | score_pair = review.split('\n')[0]
39 | score_pair = score_pair.replace(',', ' ')
40 | sp = score_pair.split(' ')
41 | if len(sp) == 2:
42 | return [float(sp[0]), float(sp[1])]
43 | else:
44 | print('error', review)
45 | return [-1, -1]
46 | except Exception as e:
47 | print(e)
48 | print('error', review)
49 | return [-1, -1]
50 |
51 |
52 | if __name__ == '__main__':
53 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
54 | parser.add_argument('-q', '--question')
55 | parser.add_argument('-c', '--context')
56 | parser.add_argument('-a', '--answer-list', nargs='+', default=[])
57 | parser.add_argument('-r', '--rule')
58 | parser.add_argument('-o', '--output')
59 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
60 | args = parser.parse_args()
61 |
62 | f_q = open(os.path.expanduser(args.question))
63 | f_ans1 = open(os.path.expanduser(args.answer_list[0]))
64 | f_ans2 = open(os.path.expanduser(args.answer_list[1]))
65 | rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
66 |
67 | if os.path.isfile(os.path.expanduser(args.output)):
68 | cur_reviews = [json.loads(line) for line in open(os.path.expanduser(args.output))]
69 | else:
70 | cur_reviews = []
71 |
72 | review_file = open(f'{args.output}', 'a')
73 |
74 | context_list = [json.loads(line) for line in open(os.path.expanduser(args.context))]
75 | image_to_context = {context['image']: context for context in context_list}
76 |
77 | handles = []
78 | idx = 0
79 | for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
80 | ques = json.loads(ques_js)
81 | ans1 = json.loads(ans1_js)
82 | ans2 = json.loads(ans2_js)
83 |
84 | inst = image_to_context[ques['image']]
85 |
86 | if isinstance(inst['caption'], list):
87 | cap_str = '\n'.join(inst['caption'])
88 | else:
89 | cap_str = inst['caption']
90 |
91 | category = 'llava_bench_' + json.loads(ques_js)['category']
92 | if category in rule_dict:
93 | rule = rule_dict[category]
94 | else:
95 | assert False, f"Visual QA category not found in rule file: {category}."
96 | prompt = rule['prompt']
97 | role = rule['role']
98 | content = (f'[Context]\n{cap_str}\n\n'
99 | f'[Question]\n{ques["text"]}\n\n'
100 | f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
101 | f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
102 | f'[System]\n{prompt}\n\n')
103 | cur_js = {
104 | 'id': idx+1,
105 | 'question_id': ques['question_id'],
106 | 'answer1_id': ans1.get('answer_id', ans1['question_id']),
107 | 'answer2_id': ans2.get('answer_id', ans2['answer_id']),
108 | 'category': category
109 | }
110 | if idx >= len(cur_reviews):
111 | review = get_eval(content, args.max_tokens)
112 | scores = parse_score(review)
113 | cur_js['content'] = review
114 | cur_js['tuple'] = scores
115 | review_file.write(json.dumps(cur_js) + '\n')
116 | review_file.flush()
117 | else:
118 | print(f'Skipping {idx} as we already have it.')
119 | idx += 1
120 | print(idx)
121 | review_file.close()
122 |
--------------------------------------------------------------------------------
/llava/eval/eval_gpt_review_visual.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 |
5 | import openai
6 | import time
7 |
8 | NUM_SECONDS_TO_SLEEP = 0.5
9 |
10 |
11 | def get_eval(content: str, max_tokens: int):
12 | while True:
13 | try:
14 | response = openai.ChatCompletion.create(
15 | model='gpt-4-0314',
16 | messages=[{
17 | 'role': 'system',
18 | 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
19 | }, {
20 | 'role': 'user',
21 | 'content': content,
22 | }],
23 | temperature=0.2, # TODO: figure out which temperature is best for evaluation
24 | max_tokens=max_tokens,
25 | )
26 | break
27 | except openai.error.RateLimitError:
28 | pass
29 | except Exception as e:
30 | print(e)
31 | time.sleep(NUM_SECONDS_TO_SLEEP)
32 |
33 | return response['choices'][0]['message']['content']
34 |
35 |
36 | def parse_score(review):
37 | try:
38 | score_pair = review.split('\n')[0]
39 | score_pair = score_pair.replace(',', ' ')
40 | sp = score_pair.split(' ')
41 | if len(sp) == 2:
42 | return [float(sp[0]), float(sp[1])]
43 | else:
44 | print('error', review)
45 | return [-1, -1]
46 | except Exception as e:
47 | print(e)
48 | print('error', review)
49 | return [-1, -1]
50 |
51 |
52 | if __name__ == '__main__':
53 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
54 | parser.add_argument('-q', '--question')
55 | parser.add_argument('-c', '--context')
56 | parser.add_argument('-a', '--answer-list', nargs='+', default=[])
57 | parser.add_argument('-r', '--rule')
58 | parser.add_argument('-o', '--output')
59 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
60 | args = parser.parse_args()
61 |
62 | f_q = open(os.path.expanduser(args.question))
63 | f_ans1 = open(os.path.expanduser(args.answer_list[0]))
64 | f_ans2 = open(os.path.expanduser(args.answer_list[1]))
65 | rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
66 |
67 | if os.path.isfile(os.path.expanduser(args.output)):
68 | cur_reviews = [json.loads(line) for line in open(os.path.expanduser(args.output))]
69 | else:
70 | cur_reviews = []
71 |
72 | review_file = open(f'{args.output}', 'a')
73 |
74 | context_list = [json.loads(line) for line in open(os.path.expanduser(args.context))]
75 | image_to_context = {context['image']: context for context in context_list}
76 |
77 | handles = []
78 | idx = 0
79 | for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
80 | ques = json.loads(ques_js)
81 | ans1 = json.loads(ans1_js)
82 | ans2 = json.loads(ans2_js)
83 |
84 | inst = image_to_context[ques['image']]
85 | cap_str = '\n'.join(inst['captions'])
86 | box_str = '\n'.join([f'{instance["category"]}: {instance["bbox"]}' for instance in inst['instances']])
87 |
88 | category = json.loads(ques_js)['category']
89 | if category in rule_dict:
90 | rule = rule_dict[category]
91 | else:
92 | assert False, f"Visual QA category not found in rule file: {category}."
93 | prompt = rule['prompt']
94 | role = rule['role']
95 | content = (f'[Context]\n{cap_str}\n\n{box_str}\n\n'
96 | f'[Question]\n{ques["text"]}\n\n'
97 | f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
98 | f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
99 | f'[System]\n{prompt}\n\n')
100 | cur_js = {
101 | 'id': idx+1,
102 | 'question_id': ques['question_id'],
103 | 'answer1_id': ans1.get('answer_id', ans1['question_id']),
104 | 'answer2_id': ans2.get('answer_id', ans2['answer_id']),
105 | 'category': category
106 | }
107 | if idx >= len(cur_reviews):
108 | review = get_eval(content, args.max_tokens)
109 | scores = parse_score(review)
110 | cur_js['content'] = review
111 | cur_js['tuple'] = scores
112 | review_file.write(json.dumps(cur_js) + '\n')
113 | review_file.flush()
114 | else:
115 | print(f'Skipping {idx} as we already have it.')
116 | idx += 1
117 | print(idx)
118 | review_file.close()
119 |
--------------------------------------------------------------------------------
/llava/eval/eval_pope.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import argparse
4 |
5 | def eval_pope(answers, label_file):
6 | label_list = [json.loads(q)['label'] for q in open(label_file, 'r')]
7 |
8 | for answer in answers:
9 | text = answer['text']
10 |
11 | # Only keep the first sentence
12 | if text.find('.') != -1:
13 | text = text.split('.')[0]
14 |
15 | text = text.replace(',', '')
16 | words = text.split(' ')
17 | if 'No' in words or 'not' in words or 'no' in words:
18 | answer['text'] = 'no'
19 | else:
20 | answer['text'] = 'yes'
21 |
22 | for i in range(len(label_list)):
23 | if label_list[i] == 'no':
24 | label_list[i] = 0
25 | else:
26 | label_list[i] = 1
27 |
28 | pred_list = []
29 | for answer in answers:
30 | if answer['text'] == 'no':
31 | pred_list.append(0)
32 | else:
33 | pred_list.append(1)
34 |
35 | pos = 1
36 | neg = 0
37 | yes_ratio = pred_list.count(1) / len(pred_list)
38 |
39 | TP, TN, FP, FN = 0, 0, 0, 0
40 | for pred, label in zip(pred_list, label_list):
41 | if pred == pos and label == pos:
42 | TP += 1
43 | elif pred == pos and label == neg:
44 | FP += 1
45 | elif pred == neg and label == neg:
46 | TN += 1
47 | elif pred == neg and label == pos:
48 | FN += 1
49 |
50 | print('TP\tFP\tTN\tFN\t')
51 | print('{}\t{}\t{}\t{}'.format(TP, FP, TN, FN))
52 |
53 | precision = float(TP) / float(TP + FP)
54 | recall = float(TP) / float(TP + FN)
55 | f1 = 2*precision*recall / (precision + recall)
56 | acc = (TP + TN) / (TP + TN + FP + FN)
57 | print('Accuracy: {}'.format(acc))
58 | print('Precision: {}'.format(precision))
59 | print('Recall: {}'.format(recall))
60 | print('F1 score: {}'.format(f1))
61 | print('Yes ratio: {}'.format(yes_ratio))
62 | print('%.3f, %.3f, %.3f, %.3f, %.3f' % (f1, acc, precision, recall, yes_ratio) )
63 |
64 | if __name__ == "__main__":
65 | parser = argparse.ArgumentParser()
66 | parser.add_argument("--annotation-dir", type=str)
67 | parser.add_argument("--question-file", type=str)
68 | parser.add_argument("--result-file", type=str)
69 | args = parser.parse_args()
70 |
71 | questions = [json.loads(line) for line in open(args.question_file)]
72 | questions = {question['question_id']: question for question in questions}
73 | answers = [json.loads(q) for q in open(args.result_file)]
74 | for file in os.listdir(args.annotation_dir):
75 | assert file.startswith('coco_pope_')
76 | assert file.endswith('.json')
77 | category = file[10:-5]
78 | cur_answers = [x for x in answers if questions[x['question_id']]['category'] == category]
79 | print('Category: {}, # samples: {}'.format(category, len(cur_answers)))
80 | eval_pope(cur_answers, os.path.join(args.annotation_dir, file))
81 | print("====================================")
82 |
--------------------------------------------------------------------------------
/llava/eval/eval_science_qa.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import re
5 | import random
6 |
7 |
8 | def get_args():
9 | parser = argparse.ArgumentParser()
10 | parser.add_argument('--base-dir', type=str)
11 | parser.add_argument('--result-file', type=str)
12 | parser.add_argument('--output-file', type=str)
13 | parser.add_argument('--output-result', type=str)
14 | parser.add_argument('--split', type=str, default='test')
15 | parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
16 | return parser.parse_args()
17 |
18 |
19 | def convert_caps(results):
20 | fakecaps = []
21 | for result in results:
22 | image_id = result['question_id']
23 | caption = result['text']
24 | fakecaps.append({"image_id": int(image_id), "caption": caption})
25 | return fakecaps
26 |
27 |
28 | def get_pred_idx(prediction, choices, options):
29 | """
30 | Get the index (e.g. 2) from the prediction (e.g. 'C')
31 | """
32 | if prediction in options[:len(choices)]:
33 | return options.index(prediction)
34 | else:
35 | return -1
36 | return random.choice(range(len(choices)))
37 |
38 |
39 | if __name__ == "__main__":
40 | args = get_args()
41 |
42 | base_dir = args.base_dir
43 | split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
44 | problems = json.load(open(os.path.join(base_dir, "problems.json")))
45 | predictions = [json.loads(line) for line in open(args.result_file)]
46 | predictions = {pred['question_id']: pred for pred in predictions}
47 | split_problems = {idx: problems[idx] for idx in split_indices}
48 |
49 | results = {'correct': [], 'incorrect': []}
50 | sqa_results = {}
51 | sqa_results['acc'] = None
52 | sqa_results['correct'] = None
53 | sqa_results['count'] = None
54 | sqa_results['results'] = {}
55 | sqa_results['outputs'] = {}
56 |
57 | for prob_id, prob in split_problems.items():
58 | if prob_id not in predictions:
59 | pred = {'text': 'FAILED', 'prompt': 'Unknown'}
60 | pred_text = 'FAILED'
61 | else:
62 | pred = predictions[prob_id]
63 | pred_text = pred['text']
64 |
65 | if pred_text in args.options:
66 | answer = pred_text
67 | elif len(pred_text) >= 3 and pred_text[0] in args.options and pred_text[1:3] == ". ":
68 | answer = pred_text[0]
69 | else:
70 | pattern = re.compile(r'The answer is ([A-Z]).')
71 | res = pattern.findall(pred_text)
72 | if len(res) == 1:
73 | answer = res[0] # 'A', 'B', ...
74 | else:
75 | answer = "FAILED"
76 |
77 | pred_idx = get_pred_idx(answer, prob['choices'], args.options)
78 |
79 | analysis = {
80 | 'question_id': prob_id,
81 | 'parsed_ans': answer,
82 | 'ground_truth': args.options[prob['answer']],
83 | 'question': pred['prompt'],
84 | 'pred': pred_text,
85 | 'is_multimodal': '<image>' in pred['prompt'],
86 | }
87 |
88 | sqa_results['results'][prob_id] = get_pred_idx(answer, prob['choices'], args.options)
89 | sqa_results['outputs'][prob_id] = pred_text
90 |
91 | if pred_idx == prob['answer']:
92 | results['correct'].append(analysis)
93 | else:
94 | results['incorrect'].append(analysis)
95 |
96 | correct = len(results['correct'])
97 | total = len(results['correct']) + len(results['incorrect'])
98 |
99 | ###### IMG ######
100 | multimodal_correct = len([x for x in results['correct'] if x['is_multimodal']])
101 | multimodal_incorrect = len([x for x in results['incorrect'] if x['is_multimodal']])
102 | multimodal_total = multimodal_correct + multimodal_incorrect
103 | ###### IMG ######
104 |
105 | print(f'Total: {total}, Correct: {correct}, Accuracy: {correct / total * 100:.2f}%, IMG-Accuracy: {multimodal_correct / multimodal_total * 100:.2f}%')
106 |
107 | sqa_results['acc'] = correct / total * 100
108 | sqa_results['correct'] = correct
109 | sqa_results['count'] = total
110 |
111 | with open(args.output_file, 'w') as f:
112 | json.dump(results, f, indent=2)
113 | with open(args.output_result, 'w') as f:
114 | json.dump(sqa_results, f, indent=2)
115 |
--------------------------------------------------------------------------------
/llava/eval/eval_science_qa_gpt4.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import re
5 | import random
6 | from collections import defaultdict
7 |
8 |
9 | def get_args():
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('--base-dir', type=str)
12 | parser.add_argument('--gpt4-result', type=str)
13 | parser.add_argument('--our-result', type=str)
14 | parser.add_argument('--split', type=str, default='test')
15 | parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
16 | return parser.parse_args()
17 |
18 |
19 | def convert_caps(results):
20 | fakecaps = []
21 | for result in results:
22 | image_id = result['question_id']
23 | caption = result['text']
24 | fakecaps.append({"image_id": int(image_id), "caption": caption})
25 | return fakecaps
26 |
27 |
28 | def get_pred_idx(prediction, choices, options):
29 | """
30 | Get the index (e.g. 2) from the prediction (e.g. 'C')
31 | """
32 | if prediction in options[:len(choices)]:
33 | return options.index(prediction)
34 | else:
35 | return random.choice(range(len(choices)))
36 |
37 |
38 | if __name__ == "__main__":
39 | args = get_args()
40 |
41 | base_dir = args.base_dir
42 | split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
43 | problems = json.load(open(os.path.join(base_dir, "problems.json")))
44 | our_predictions = [json.loads(line) for line in open(args.our_result)]
45 | our_predictions = {pred['question_id']: pred for pred in our_predictions}
46 | split_problems = {idx: problems[idx] for idx in split_indices}
47 |
48 | gpt4_predictions = json.load(open(args.gpt4_result))['outputs']
49 |
50 | results = defaultdict(lambda: 0)
51 |
52 | for prob_id, prob in split_problems.items():
53 | if prob_id not in our_predictions:
54 | continue
55 | if prob_id not in gpt4_predictions:
56 | continue
57 | our_pred = our_predictions[prob_id]['text']
58 | gpt4_pred = gpt4_predictions[prob_id]
59 |
60 | pattern = re.compile(r'The answer is ([A-Z]).')
61 | our_res = pattern.findall(our_pred)
62 | if len(our_res) == 1:
63 | our_answer = our_res[0] # 'A', 'B', ...
64 | else:
65 | our_answer = "FAILED"
66 | gpt4_res = pattern.findall(gpt4_pred)
67 | if len(gpt4_res) == 1:
68 | gpt4_answer = gpt4_res[0] # 'A', 'B', ...
69 | else:
70 | gpt4_answer = "FAILED"
71 |
72 | our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options)
73 | gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options)
74 |
75 | if gpt4_answer == 'FAILED':
76 | results['gpt4_failed'] += 1
77 | # continue
78 | gpt4_pred_idx = our_pred_idx
79 | # if our_pred_idx != prob['answer']:
80 | # print(our_predictions[prob_id]['prompt'])
81 | # print('-----------------')
82 | # print(f'LECTURE: {prob["lecture"]}')
83 | # print(f'SOLUTION: {prob["solution"]}')
84 | # print('=====================')
85 | else:
86 | # continue
87 | pass
88 | # gpt4_pred_idx = our_pred_idx
89 |
90 | if gpt4_pred_idx == prob['answer']:
91 | results['correct'] += 1
92 | else:
93 | results['incorrect'] += 1
94 |
95 |
96 | if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']:
97 | results['correct_upperbound'] += 1
98 |
99 | correct = results['correct']
100 | total = results['correct'] + results['incorrect']
101 | print(f'Total: {total}, Correct: {correct}, Accuracy: {correct / total * 100:.2f}%')
102 | print(f'Total: {total}, Correct (upper): {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%')
103 | print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%')
104 |
105 |
--------------------------------------------------------------------------------
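A minimal sketch of the answer-letter extraction used in the script above: the regex pulls a single capital letter out of a "The answer is X." sentence, and anything else is treated as FAILED (the sample sentence is illustrative, not taken from the dataset).

import re

pattern = re.compile(r'The answer is ([A-Z]).')
res = pattern.findall("Both options describe solids. The answer is B.")
answer = res[0] if len(res) == 1 else "FAILED"
print(answer)  # B
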
/llava/eval/eval_science_qa_gpt4_requery.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import re
5 | import random
6 | from collections import defaultdict
7 |
8 |
9 | def get_args():
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('--base-dir', type=str)
12 | parser.add_argument('--gpt4-result', type=str)
13 | parser.add_argument('--requery-result', type=str)
14 | parser.add_argument('--our-result', type=str)
15 | parser.add_argument('--output-result', type=str)
16 | parser.add_argument('--split', type=str, default='test')
17 | parser.add_argument('--options', nargs='+', default=["A", "B", "C", "D", "E"])
18 | return parser.parse_args()
19 |
20 |
21 | def convert_caps(results):
22 | fakecaps = []
23 | for result in results:
24 | image_id = result['question_id']
25 | caption = result['text']
26 | fakecaps.append({"image_id": int(image_id), "caption": caption})
27 | return fakecaps
28 |
29 |
30 | def get_pred_idx(prediction, choices, options):
31 | """
32 | Get the index (e.g. 2) from the prediction (e.g. 'C')
33 | """
34 | if prediction in options[:len(choices)]:
35 | return options.index(prediction)
36 | else:
37 | return random.choice(range(len(choices)))
38 |
39 |
40 | if __name__ == "__main__":
41 | args = get_args()
42 |
43 | base_dir = args.base_dir
44 | split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
45 | problems = json.load(open(os.path.join(base_dir, "problems.json")))
46 | our_predictions = [json.loads(line) for line in open(args.our_result)]
47 | our_predictions = {pred['question_id']: pred for pred in our_predictions}
48 | split_problems = {idx: problems[idx] for idx in split_indices}
49 |
50 | requery_predictions = [json.loads(line) for line in open(args.requery_result)]
51 | requery_predictions = {pred['question_id']: pred for pred in requery_predictions}
52 |
53 | gpt4_predictions = json.load(open(args.gpt4_result))['outputs']
54 |
55 | results = defaultdict(lambda: 0)
56 |
57 | sqa_results = {}
58 | sqa_results['acc'] = None
59 | sqa_results['correct'] = None
60 | sqa_results['count'] = None
61 | sqa_results['results'] = {}
62 | sqa_results['outputs'] = {}
63 |
64 | for prob_id, prob in split_problems.items():
65 | if prob_id not in our_predictions:
66 | assert False
67 | if prob_id not in gpt4_predictions:
68 | assert False
69 | our_pred = our_predictions[prob_id]['text']
70 | gpt4_pred = gpt4_predictions[prob_id]
71 | if prob_id not in requery_predictions:
72 | results['missing_requery'] += 1
73 | requery_pred = "MISSING"
74 | else:
75 | requery_pred = requery_predictions[prob_id]['text']
76 |
77 | pattern = re.compile(r'The answer is ([A-Z]).')
78 | our_res = pattern.findall(our_pred)
79 | if len(our_res) == 1:
80 | our_answer = our_res[0] # 'A', 'B', ...
81 | else:
82 | our_answer = "FAILED"
83 |
84 | requery_res = pattern.findall(requery_pred)
85 | if len(requery_res) == 1:
86 | requery_answer = requery_res[0] # 'A', 'B', ...
87 | else:
88 | requery_answer = "FAILED"
89 |
90 | gpt4_res = pattern.findall(gpt4_pred)
91 | if len(gpt4_res) == 1:
92 | gpt4_answer = gpt4_res[0] # 'A', 'B', ...
93 | else:
94 | gpt4_answer = "FAILED"
95 |
96 | our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options)
97 | gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options)
98 | requery_pred_idx = get_pred_idx(requery_answer, prob['choices'], args.options)
99 |
100 | results['total'] += 1
101 |
102 | if gpt4_answer == 'FAILED':
103 | results['gpt4_failed'] += 1
104 | if gpt4_pred_idx == prob['answer']:
105 | results['gpt4_correct'] += 1
106 | if our_pred_idx == prob['answer']:
107 | results['gpt4_ourvisual_correct'] += 1
108 | elif gpt4_pred_idx == prob['answer']:
109 | results['gpt4_correct'] += 1
110 | results['gpt4_ourvisual_correct'] += 1
111 |
112 | if our_pred_idx == prob['answer']:
113 | results['our_correct'] += 1
114 |
115 | if requery_answer == 'FAILED':
116 | sqa_results['results'][prob_id] = our_pred_idx
117 | if our_pred_idx == prob['answer']:
118 | results['requery_correct'] += 1
119 | else:
120 | sqa_results['results'][prob_id] = requery_pred_idx
121 | if requery_pred_idx == prob['answer']:
122 | results['requery_correct'] += 1
123 | else:
124 | print(f"""
125 | Question ({args.options[prob['answer']]}): {our_predictions[prob_id]['prompt']}
126 | Our ({our_answer}): {our_pred}
127 | GPT-4 ({gpt4_answer}): {gpt4_pred}
128 | Requery ({requery_answer}): {requery_pred}
129 | print("=====================================")
130 | """)
131 |
132 | if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']:
133 | results['correct_upperbound'] += 1
134 |
135 | total = results['total']
136 | print(f'Total: {total}, Our-Correct: {results["our_correct"]}, Accuracy: {results["our_correct"] / total * 100:.2f}%')
137 | print(f'Total: {total}, GPT-4-Correct: {results["gpt4_correct"]}, Accuracy: {results["gpt4_correct"] / total * 100:.2f}%')
138 | print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%')
139 | print(f'Total: {total}, GPT-4-OursVisual-Correct: {results["gpt4_ourvisual_correct"]}, Accuracy: {results["gpt4_ourvisual_correct"] / total * 100:.2f}%')
140 | print(f'Total: {total}, Requery-Correct: {results["requery_correct"]}, Accuracy: {results["requery_correct"] / total * 100:.2f}%')
141 | print(f'Total: {total}, Correct upper: {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%')
142 |
143 | sqa_results['acc'] = results["requery_correct"] / total * 100
144 | sqa_results['correct'] = results["requery_correct"]
145 | sqa_results['count'] = total
146 |
147 | with open(args.output_result, 'w') as f:
148 | json.dump(sqa_results, f, indent=2)
149 |
150 |
--------------------------------------------------------------------------------
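A small sketch of the fallback used above: when the requeried output yields no parsable answer letter, the base prediction is scored in its place (all indices here are illustrative).

requery_answer, requery_pred_idx = "FAILED", 2
our_pred_idx, answer = 1, 1

final_idx = our_pred_idx if requery_answer == "FAILED" else requery_pred_idx
print(final_idx == answer)  # True, so this sample counts toward requery_correct
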
/llava/eval/eval_textvqa.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 | import json
4 | import re
5 |
6 | from llava.eval.m4c_evaluator import TextVQAAccuracyEvaluator
7 |
8 |
9 | def get_args():
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('--annotation-file', type=str)
12 | parser.add_argument('--result-file', type=str)
13 | parser.add_argument('--result-dir', type=str)
14 | return parser.parse_args()
15 |
16 |
17 | def prompt_processor(prompt):
18 | if prompt.startswith('OCR tokens: '):
19 | pattern = r"Question: (.*?) Short answer:"
20 | match = re.search(pattern, prompt, re.DOTALL)
21 | question = match.group(1)
22 | elif 'Reference OCR token: ' in prompt and len(prompt.split('\n')) == 3:
23 | if prompt.startswith('Reference OCR token:'):
24 | question = prompt.split('\n')[1]
25 | else:
26 | question = prompt.split('\n')[0]
27 | elif len(prompt.split('\n')) == 2:
28 | question = prompt.split('\n')[0]
29 | else:
30 | assert False
31 |
32 | return question.lower()
33 |
34 |
35 | def eval_single(annotation_file, result_file):
36 | experiment_name = os.path.splitext(os.path.basename(result_file))[0]
37 | print(experiment_name)
38 | annotations = json.load(open(annotation_file))['data']
39 | annotations = {(annotation['image_id'], annotation['question'].lower()): annotation for annotation in annotations}
40 | results = [json.loads(line) for line in open(result_file)]
41 |
42 | pred_list = []
43 | for result in results:
44 | annotation = annotations[(result['question_id'], prompt_processor(result['prompt']))]
45 | pred_list.append({
46 | "pred_answer": result['text'],
47 | "gt_answers": annotation['answers'],
48 | })
49 |
50 | evaluator = TextVQAAccuracyEvaluator()
51 | print('Samples: {}\nAccuracy: {:.2f}%\n'.format(len(pred_list), 100. * evaluator.eval_pred_list(pred_list)))
52 |
53 |
54 | if __name__ == "__main__":
55 | args = get_args()
56 |
57 | if args.result_file is not None:
58 | eval_single(args.annotation_file, args.result_file)
59 |
60 | if args.result_dir is not None:
61 | for result_file in sorted(os.listdir(args.result_dir)):
62 | if not result_file.endswith('.jsonl'):
63 | print(f'Skipping {result_file}')
64 | continue
65 | eval_single(args.annotation_file, os.path.join(args.result_dir, result_file))
66 |
--------------------------------------------------------------------------------
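A minimal sketch of the first branch of prompt_processor above: the question is recovered from between "Question:" and "Short answer:" and lowercased to match the annotation key (the prompt string is illustrative).

import re

prompt = "OCR tokens: STOP, 35 Question: What does the sign say? Short answer:"
match = re.search(r"Question: (.*?) Short answer:", prompt, re.DOTALL)
print(match.group(1).lower())  # what does the sign say?
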
/llava/eval/generate_webpage_data_from_table.py:
--------------------------------------------------------------------------------
1 | """Generate json file for webpage."""
2 | import json
3 | import os
4 | import re
5 |
6 | # models = ['llama', 'alpaca', 'gpt35', 'bard']
7 | models = ['vicuna']
8 |
9 |
10 | def read_jsonl(path: str, key: str=None):
11 | data = []
12 | with open(os.path.expanduser(path)) as f:
13 | for line in f:
14 | if not line:
15 | continue
16 | data.append(json.loads(line))
17 | if key is not None:
18 | data.sort(key=lambda x: x[key])
19 | data = {item[key]: item for item in data}
20 | return data
21 |
22 |
23 | def trim_hanging_lines(s: str, n: int) -> str:
24 | s = s.strip()
25 | for _ in range(n):
26 | s = s.split('\n', 1)[1].strip()
27 | return s
28 |
29 |
30 | if __name__ == '__main__':
31 | questions = read_jsonl('table/question.jsonl', key='question_id')
32 |
33 | # alpaca_answers = read_jsonl('table/answer/answer_alpaca-13b.jsonl', key='question_id')
34 | # bard_answers = read_jsonl('table/answer/answer_bard.jsonl', key='question_id')
35 | # gpt35_answers = read_jsonl('table/answer/answer_gpt35.jsonl', key='question_id')
36 | # llama_answers = read_jsonl('table/answer/answer_llama-13b.jsonl', key='question_id')
37 | vicuna_answers = read_jsonl('table/answer/answer_vicuna-13b.jsonl', key='question_id')
38 | ours_answers = read_jsonl('table/results/llama-13b-hf-alpaca.jsonl', key='question_id')
39 |
40 | review_vicuna = read_jsonl('table/review/review_vicuna-13b_llama-13b-hf-alpaca.jsonl', key='question_id')
41 | # review_alpaca = read_jsonl('table/review/review_alpaca-13b_vicuna-13b.jsonl', key='question_id')
42 | # review_bard = read_jsonl('table/review/review_bard_vicuna-13b.jsonl', key='question_id')
43 | # review_gpt35 = read_jsonl('table/review/review_gpt35_vicuna-13b.jsonl', key='question_id')
44 | # review_llama = read_jsonl('table/review/review_llama-13b_vicuna-13b.jsonl', key='question_id')
45 |
46 | records = []
47 | for qid in questions.keys():
48 | r = {
49 | 'id': qid,
50 | 'category': questions[qid]['category'],
51 | 'question': questions[qid]['text'],
52 | 'answers': {
53 | # 'alpaca': alpaca_answers[qid]['text'],
54 | # 'llama': llama_answers[qid]['text'],
55 | # 'bard': bard_answers[qid]['text'],
56 | # 'gpt35': gpt35_answers[qid]['text'],
57 | 'vicuna': vicuna_answers[qid]['text'],
58 | 'ours': ours_answers[qid]['text'],
59 | },
60 | 'evaluations': {
61 | # 'alpaca': review_alpaca[qid]['text'],
62 | # 'llama': review_llama[qid]['text'],
63 | # 'bard': review_bard[qid]['text'],
64 | 'vicuna': review_vicuna[qid]['content'],
65 | # 'gpt35': review_gpt35[qid]['text'],
66 | },
67 | 'scores': {
68 | 'vicuna': review_vicuna[qid]['tuple'],
69 | # 'alpaca': review_alpaca[qid]['score'],
70 | # 'llama': review_llama[qid]['score'],
71 | # 'bard': review_bard[qid]['score'],
72 | # 'gpt35': review_gpt35[qid]['score'],
73 | },
74 | }
75 |
76 | # cleanup data
77 | cleaned_evals = {}
78 | for k, v in r['evaluations'].items():
79 | v = v.strip()
80 | lines = v.split('\n')
81 | # trim the first line if it's a pair of numbers
82 | if re.match(r'\d+[, ]+\d+', lines[0]):
83 | lines = lines[1:]
84 | v = '\n'.join(lines)
85 | cleaned_evals[k] = v.replace('Assistant 1', "**Assistant 1**").replace('Assistant 2', '**Assistant 2**')
86 |
87 | r['evaluations'] = cleaned_evals
88 | records.append(r)
89 |
90 | # Reorder the records, this is optional
91 | for r in records:
92 | if r['id'] <= 20:
93 | r['id'] += 60
94 | else:
95 | r['id'] -= 20
96 | for r in records:
97 | if r['id'] <= 50:
98 | r['id'] += 10
99 | elif 50 < r['id'] <= 60:
100 | r['id'] -= 50
101 | for r in records:
102 | if r['id'] == 7:
103 | r['id'] = 1
104 | elif r['id'] < 7:
105 | r['id'] += 1
106 |
107 | records.sort(key=lambda x: x['id'])
108 |
109 | # Write to file
110 | with open('webpage/data.json', 'w') as f:
111 | json.dump({'questions': records, 'models': models}, f, indent=2)
112 |
--------------------------------------------------------------------------------
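A short sketch of the review cleanup above: a leading "score pair" line is dropped and the assistant names are bolded for the webpage (the review text is illustrative).

import re

v = "8 9\nAssistant 1 was concise, while Assistant 2 gave more detail."
lines = v.strip().split('\n')
if re.match(r'\d+[, ]+\d+', lines[0]):  # trim the first line if it's a pair of numbers
    lines = lines[1:]
v = '\n'.join(lines)
print(v.replace('Assistant 1', '**Assistant 1**').replace('Assistant 2', '**Assistant 2**'))
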
/llava/eval/model_qa.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | from transformers import AutoTokenizer, AutoModelForCausalLM, StoppingCriteria
3 | import torch
4 | import os
5 | import json
6 | from tqdm import tqdm
7 | import shortuuid
8 |
9 | from llava.conversation import default_conversation
10 | from llava.utils import disable_torch_init
11 |
12 |
13 | @torch.inference_mode()
14 | def eval_model(model_name, questions_file, answers_file):
15 | # Model
16 | disable_torch_init()
17 | model_name = os.path.expanduser(model_name)
18 | tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
19 | model = AutoModelForCausalLM.from_pretrained(model_name,
20 | torch_dtype=torch.float16).cuda()
21 |
22 |
23 | ques_file = open(os.path.expanduser(questions_file), "r")
24 | ans_file = open(os.path.expanduser(answers_file), "w")
25 | for i, line in enumerate(tqdm(ques_file)):
26 | idx = json.loads(line)["question_id"]
27 | qs = json.loads(line)["text"]
28 | cat = json.loads(line)["category"]
29 | conv = default_conversation.copy()
30 | conv.append_message(conv.roles[0], qs)
31 | prompt = conv.get_prompt()
32 | inputs = tokenizer([prompt])
33 | input_ids = torch.as_tensor(inputs.input_ids).cuda()
34 | output_ids = model.generate(
35 | input_ids,
36 | do_sample=True,
37 | use_cache=True,
38 | temperature=0.7,
39 | max_new_tokens=1024,)
40 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
41 | try:
42 | index = outputs.index(conv.sep, len(prompt))
43 | except ValueError:
44 | outputs += conv.sep
45 | index = outputs.index(conv.sep, len(prompt))
46 |
47 | outputs = outputs[len(prompt) + len(conv.roles[1]) + 2:index].strip()
48 | ans_id = shortuuid.uuid()
49 | ans_file.write(json.dumps({"question_id": idx,
50 | "text": outputs,
51 | "answer_id": ans_id,
52 | "model_id": model_name,
53 | "metadata": {}}) + "\n")
54 | ans_file.flush()
55 | ans_file.close()
56 |
57 | if __name__ == "__main__":
58 | parser = argparse.ArgumentParser()
59 | parser.add_argument("--model-name", type=str, default="facebook/opt-350m")
60 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
61 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
62 | args = parser.parse_args()
63 |
64 | eval_model(args.model_name, args.question_file, args.answers_file)
65 |
--------------------------------------------------------------------------------
/llava/eval/model_vqa.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | from tqdm import tqdm
6 | import shortuuid
7 |
8 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9 | from llava.conversation import conv_templates, SeparatorStyle
10 | from llava.model.builder import load_pretrained_model
11 | from llava.utils import disable_torch_init
12 | from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
13 |
14 | from PIL import Image
15 | import math
16 |
17 |
18 | def split_list(lst, n):
19 | """Split a list into n (roughly) equal-sized chunks"""
20 | chunk_size = math.ceil(len(lst) / n) # ceiling division
21 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
22 |
23 |
24 | def get_chunk(lst, n, k):
25 | chunks = split_list(lst, n)
26 | return chunks[k]
27 |
28 |
29 | def eval_model(args):
30 | # Model
31 | disable_torch_init()
32 | model_path = os.path.expanduser(args.model_path)
33 | model_name = get_model_name_from_path(model_path)
34 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
35 |
36 | questions = [json.loads(q) for q in open(os.path.expanduser(args.question_file), "r")]
37 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
38 | answers_file = os.path.expanduser(args.answers_file)
39 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
40 | ans_file = open(answers_file, "w")
41 | for line in tqdm(questions):
42 | idx = line["question_id"]
43 | image_file = line["image"]
44 | qs = line["text"]
45 | cur_prompt = qs
46 | if model.config.mm_use_im_start_end:
47 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
48 | else:
49 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
50 |
51 | conv = conv_templates[args.conv_mode].copy()
52 | conv.append_message(conv.roles[0], qs)
53 | conv.append_message(conv.roles[1], None)
54 | prompt = conv.get_prompt()
55 |
56 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
57 |
58 | image = Image.open(os.path.join(args.image_folder, image_file)).convert('RGB')
59 | image_tensor = process_images([image], image_processor, model.config)[0]
60 |
61 | with torch.inference_mode():
62 | output_ids = model.generate(
63 | input_ids,
64 | images=image_tensor.unsqueeze(0).half().cuda(),
65 | image_sizes=[image.size],
66 | do_sample=True if args.temperature > 0 else False,
67 | temperature=args.temperature,
68 | top_p=args.top_p,
69 | num_beams=args.num_beams,
70 | # no_repeat_ngram_size=3,
71 | max_new_tokens=1024,
72 | use_cache=True)
73 |
74 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
75 |
76 | ans_id = shortuuid.uuid()
77 | ans_file.write(json.dumps({"question_id": idx,
78 | "prompt": cur_prompt,
79 | "text": outputs,
80 | "answer_id": ans_id,
81 | "model_id": model_name,
82 | "metadata": {}}) + "\n")
83 | ans_file.flush()
84 | ans_file.close()
85 |
86 | if __name__ == "__main__":
87 | parser = argparse.ArgumentParser()
88 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
89 | parser.add_argument("--model-base", type=str, default=None)
90 | parser.add_argument("--image-folder", type=str, default="")
91 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
92 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
93 | parser.add_argument("--conv-mode", type=str, default="llava_v1")
94 | parser.add_argument("--num-chunks", type=int, default=1)
95 | parser.add_argument("--chunk-idx", type=int, default=0)
96 | parser.add_argument("--temperature", type=float, default=0.2)
97 | parser.add_argument("--top_p", type=float, default=None)
98 | parser.add_argument("--num_beams", type=int, default=1)
99 | args = parser.parse_args()
100 |
101 | eval_model(args)
102 |
--------------------------------------------------------------------------------
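A minimal sketch of the chunking helpers shared by the model_vqa* scripts above, which split a question file across --num-chunks workers (the toy list is illustrative).

import math

def split_list(lst, n):
    """Split a list into n (roughly) equal-sized chunks."""
    chunk_size = math.ceil(len(lst) / n)  # ceiling division
    return [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]

def get_chunk(lst, n, k):
    return split_list(lst, n)[k]

print(split_list(list(range(7)), 3))    # [[0, 1, 2], [3, 4, 5], [6]]
print(get_chunk(list(range(7)), 3, 1))  # [3, 4, 5]
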
/llava/eval/model_vqa_loader.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | from tqdm import tqdm
6 | import shortuuid
7 |
8 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9 | from llava.conversation import conv_templates, SeparatorStyle
10 | from llava.model.builder import load_pretrained_model
11 | from llava.utils import disable_torch_init
12 | from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
13 | from torch.utils.data import Dataset, DataLoader
14 |
15 | from PIL import Image
16 | import math
17 |
18 |
19 | def split_list(lst, n):
20 | """Split a list into n (roughly) equal-sized chunks"""
21 | chunk_size = math.ceil(len(lst) / n) # ceiling division
22 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
23 |
24 |
25 | def get_chunk(lst, n, k):
26 | chunks = split_list(lst, n)
27 | return chunks[k]
28 |
29 |
30 | # Custom dataset class
31 | class CustomDataset(Dataset):
32 | def __init__(self, questions, image_folder, tokenizer, image_processor, model_config):
33 | self.questions = questions
34 | self.image_folder = image_folder
35 | self.tokenizer = tokenizer
36 | self.image_processor = image_processor
37 | self.model_config = model_config
38 |
39 | def __getitem__(self, index):
40 | line = self.questions[index]
41 | image_file = line["image"]
42 | qs = line["text"]
43 | if self.model_config.mm_use_im_start_end:
44 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
45 | else:
46 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
47 |
48 | conv = conv_templates[args.conv_mode].copy()
49 | conv.append_message(conv.roles[0], qs)
50 | conv.append_message(conv.roles[1], None)
51 | prompt = conv.get_prompt()
52 |
53 | image = Image.open(os.path.join(self.image_folder, image_file)).convert('RGB')
54 | image_tensor = process_images([image], self.image_processor, self.model_config)[0]
55 |
56 | input_ids = tokenizer_image_token(prompt, self.tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt')
57 |
58 | return input_ids, image_tensor, image.size
59 |
60 | def __len__(self):
61 | return len(self.questions)
62 |
63 |
64 | def collate_fn(batch):
65 | input_ids, image_tensors, image_sizes = zip(*batch)
66 | input_ids = torch.stack(input_ids, dim=0)
67 | image_tensors = torch.stack(image_tensors, dim=0)
68 | return input_ids, image_tensors, image_sizes
69 |
70 |
71 | # DataLoader
72 | def create_data_loader(questions, image_folder, tokenizer, image_processor, model_config, batch_size=1, num_workers=4):
73 | assert batch_size == 1, "batch_size must be 1"
74 | dataset = CustomDataset(questions, image_folder, tokenizer, image_processor, model_config)
75 | data_loader = DataLoader(dataset, batch_size=batch_size, num_workers=num_workers, shuffle=False, collate_fn=collate_fn)
76 | return data_loader
77 |
78 |
79 | def eval_model(args):
80 | # Model
81 | disable_torch_init()
82 | model_path = os.path.expanduser(args.model_path)
83 | model_name = get_model_name_from_path(model_path)
84 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
85 |
86 | questions = [json.loads(q) for q in open(os.path.expanduser(args.question_file), "r")]
87 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
88 | answers_file = os.path.expanduser(args.answers_file)
89 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
90 | ans_file = open(answers_file, "w")
91 |
92 | if 'plain' in model_name and 'finetune' not in model_name.lower() and 'mmtag' not in args.conv_mode:
93 | args.conv_mode = args.conv_mode + '_mmtag'
94 | print(f'It seems that this is a plain model, but it is not using an mmtag prompt; auto-switching to {args.conv_mode}.')
95 |
96 | data_loader = create_data_loader(questions, args.image_folder, tokenizer, image_processor, model.config)
97 |
98 | for (input_ids, image_tensor, image_sizes), line in tqdm(zip(data_loader, questions), total=len(questions)):
99 | idx = line["question_id"]
100 | cur_prompt = line["text"]
101 |
102 | input_ids = input_ids.to(device='cuda', non_blocking=True)
103 |
104 | with torch.inference_mode():
105 | output_ids = model.generate(
106 | input_ids,
107 | images=image_tensor.to(dtype=torch.float16, device='cuda', non_blocking=True),
108 | image_sizes=image_sizes,
109 | do_sample=True if args.temperature > 0 else False,
110 | temperature=args.temperature,
111 | top_p=args.top_p,
112 | num_beams=args.num_beams,
113 | max_new_tokens=args.max_new_tokens,
114 | use_cache=True)
115 |
116 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
117 |
118 | ans_id = shortuuid.uuid()
119 | ans_file.write(json.dumps({"question_id": idx,
120 | "prompt": cur_prompt,
121 | "text": outputs,
122 | "answer_id": ans_id,
123 | "model_id": model_name,
124 | "metadata": {}}) + "\n")
125 | # ans_file.flush()
126 | ans_file.close()
127 |
128 | if __name__ == "__main__":
129 | parser = argparse.ArgumentParser()
130 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
131 | parser.add_argument("--model-base", type=str, default=None)
132 | parser.add_argument("--image-folder", type=str, default="")
133 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
134 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
135 | parser.add_argument("--conv-mode", type=str, default="llava_v1")
136 | parser.add_argument("--num-chunks", type=int, default=1)
137 | parser.add_argument("--chunk-idx", type=int, default=0)
138 | parser.add_argument("--temperature", type=float, default=0.2)
139 | parser.add_argument("--top_p", type=float, default=None)
140 | parser.add_argument("--num_beams", type=int, default=1)
141 | parser.add_argument("--max_new_tokens", type=int, default=128)
142 | args = parser.parse_args()
143 |
144 | eval_model(args)
145 |
--------------------------------------------------------------------------------
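A small sketch of the collate_fn above under the batch_size == 1 assertion: it simply adds a leading batch dimension to the token ids and image tensor (the shapes here, including the 336x336 image size, are illustrative).

import torch

input_ids = torch.tensor([1, 2, 3])
image_tensor = torch.zeros(3, 336, 336)
batch = [(input_ids, image_tensor, (640, 480))]

ids, images, sizes = zip(*batch)
print(torch.stack(ids, dim=0).shape)     # torch.Size([1, 3])
print(torch.stack(images, dim=0).shape)  # torch.Size([1, 3, 336, 336])
print(sizes)                             # ((640, 480),)
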
/llava/eval/model_vqa_mmbench.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | import pandas as pd
6 | from tqdm import tqdm
7 | import shortuuid
8 |
9 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
10 | from llava.conversation import conv_templates, SeparatorStyle
11 | from llava.model.builder import load_pretrained_model
12 | from llava.utils import disable_torch_init
13 | from llava.mm_utils import tokenizer_image_token, process_images, load_image_from_base64, get_model_name_from_path
14 |
15 | from PIL import Image
16 | import math
17 |
18 |
19 | all_options = ['A', 'B', 'C', 'D']
20 |
21 |
22 | def split_list(lst, n):
23 | """Split a list into n (roughly) equal-sized chunks"""
24 | chunk_size = math.ceil(len(lst) / n) # ceiling division
25 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
26 |
27 |
28 | def get_chunk(lst, n, k):
29 | chunks = split_list(lst, n)
30 | return chunks[k]
31 |
32 |
33 | def is_none(value):
34 | if value is None:
35 | return True
36 | if type(value) is float and math.isnan(value):
37 | return True
38 | if type(value) is str and value.lower() == 'nan':
39 | return True
40 | if type(value) is str and value.lower() == 'none':
41 | return True
42 | return False
43 |
44 | def get_options(row, options):
45 | parsed_options = []
46 | for option in options:
47 | option_value = row[option]
48 | if is_none(option_value):
49 | break
50 | parsed_options.append(option_value)
51 | return parsed_options
52 |
53 |
54 | def eval_model(args):
55 | # Model
56 | disable_torch_init()
57 | model_path = os.path.expanduser(args.model_path)
58 | model_name = get_model_name_from_path(model_path)
59 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
60 |
61 | questions = pd.read_table(os.path.expanduser(args.question_file))
62 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
63 | answers_file = os.path.expanduser(args.answers_file)
64 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
65 | ans_file = open(answers_file, "w")
66 |
67 | if 'plain' in model_name and 'finetune' not in model_name.lower() and 'mmtag' not in args.conv_mode:
68 | args.conv_mode = args.conv_mode + '_mmtag'
69 | print(f'It seems that this is a plain model, but it is not using an mmtag prompt; auto-switching to {args.conv_mode}.')
70 |
71 | for index, row in tqdm(questions.iterrows(), total=len(questions)):
72 | options = get_options(row, all_options)
73 | cur_option_char = all_options[:len(options)]
74 |
75 | if args.all_rounds:
76 | num_rounds = len(options)
77 | else:
78 | num_rounds = 1
79 |
80 | for round_idx in range(num_rounds):
81 | idx = row['index']
82 | question = row['question']
83 | hint = row['hint']
84 | image = load_image_from_base64(row['image'])
85 | if not is_none(hint):
86 | question = hint + '\n' + question
87 | for option_char, option in zip(all_options[:len(options)], options):
88 | question = question + '\n' + option_char + '. ' + option
89 | qs = cur_prompt = question
90 | if model.config.mm_use_im_start_end:
91 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
92 | else:
93 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
94 |
95 | if args.single_pred_prompt:
96 | if args.lang == 'cn':
97 | qs = qs + '\n' + "请直接回答选项字母。"
98 | else:
99 | qs = qs + '\n' + "Answer with the option's letter from the given choices directly."
100 |
101 | conv = conv_templates[args.conv_mode].copy()
102 | conv.append_message(conv.roles[0], qs)
103 | conv.append_message(conv.roles[1], None)
104 | prompt = conv.get_prompt()
105 |
106 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
107 |
108 | image_tensor = process_images([image], image_processor, model.config)[0]
109 |
110 | with torch.inference_mode():
111 | output_ids = model.generate(
112 | input_ids,
113 | images=image_tensor.unsqueeze(0).half().cuda(),
114 | image_sizes=[image.size],
115 | do_sample=True if args.temperature > 0 else False,
116 | temperature=args.temperature,
117 | top_p=args.top_p,
118 | num_beams=args.num_beams,
119 | # no_repeat_ngram_size=3,
120 | max_new_tokens=1024,
121 | use_cache=True)
122 |
123 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
124 |
125 | ans_id = shortuuid.uuid()
126 | ans_file.write(json.dumps({"question_id": idx,
127 | "round_id": round_idx,
128 | "prompt": cur_prompt,
129 | "text": outputs,
130 | "options": options,
131 | "option_char": cur_option_char,
132 | "answer_id": ans_id,
133 | "model_id": model_name,
134 | "metadata": {}}) + "\n")
135 | ans_file.flush()
136 |
137 | # rotate options
138 | options = options[1:] + options[:1]
139 | cur_option_char = cur_option_char[1:] + cur_option_char[:1]
140 | ans_file.close()
141 |
142 | if __name__ == "__main__":
143 | parser = argparse.ArgumentParser()
144 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
145 | parser.add_argument("--model-base", type=str, default=None)
146 | parser.add_argument("--image-folder", type=str, default="")
147 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
148 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
149 | parser.add_argument("--conv-mode", type=str, default="llava_v1")
150 | parser.add_argument("--num-chunks", type=int, default=1)
151 | parser.add_argument("--chunk-idx", type=int, default=0)
152 | parser.add_argument("--temperature", type=float, default=0.2)
153 | parser.add_argument("--top_p", type=float, default=None)
154 | parser.add_argument("--num_beams", type=int, default=1)
155 | parser.add_argument("--all-rounds", action="store_true")
156 | parser.add_argument("--single-pred-prompt", action="store_true")
157 | parser.add_argument("--lang", type=str, default="en")
158 | args = parser.parse_args()
159 |
160 | eval_model(args)
161 |
--------------------------------------------------------------------------------
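A sketch of the --all-rounds option rotation above: the displayed letters stay A/B/C while the option contents rotate each round, and cur_option_char records which original letters the rotated contents correspond to (the options are illustrative).

all_options = ['A', 'B', 'C', 'D']
options = ["red", "green", "blue"]
cur_option_char = all_options[:len(options)]

for round_idx in range(len(options)):
    shown = [f"{c}. {o}" for c, o in zip(all_options[:len(options)], options)]
    print(round_idx, shown, cur_option_char)
    options = options[1:] + options[:1]
    cur_option_char = cur_option_char[1:] + cur_option_char[:1]
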
/llava/eval/model_vqa_science.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | from tqdm import tqdm
6 | import shortuuid
7 |
8 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9 | from llava.conversation import conv_templates, SeparatorStyle
10 | from llava.model.builder import load_pretrained_model
11 | from llava.utils import disable_torch_init
12 | from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
13 |
14 | from PIL import Image
15 | import math
16 |
17 |
18 | def split_list(lst, n):
19 | """Split a list into n (roughly) equal-sized chunks"""
20 | chunk_size = math.ceil(len(lst) / n) # ceiling division
21 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
22 |
23 |
24 | def get_chunk(lst, n, k):
25 | chunks = split_list(lst, n)
26 | return chunks[k]
27 |
28 |
29 | def eval_model(args):
30 | # Model
31 | disable_torch_init()
32 | model_path = os.path.expanduser(args.model_path)
33 | model_name = get_model_name_from_path(model_path)
34 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
35 |
36 | questions = json.load(open(os.path.expanduser(args.question_file), "r"))
37 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
38 | answers_file = os.path.expanduser(args.answers_file)
39 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
40 | ans_file = open(answers_file, "w")
41 | for i, line in enumerate(tqdm(questions)):
42 | idx = line["id"]
43 | question = line['conversations'][0]
44 | qs = question['value'].replace('<image>', '').strip()
45 | cur_prompt = qs
46 |
47 | if 'image' in line:
48 | image_file = line["image"]
49 | image = Image.open(os.path.join(args.image_folder, image_file))
50 | image_tensor = process_images([image], image_processor, model.config)[0]
51 | images = image_tensor.unsqueeze(0).half().cuda()
52 | image_sizes = [image.size]
53 | if getattr(model.config, 'mm_use_im_start_end', False):
54 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
55 | else:
56 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
57 | cur_prompt = '<image>' + '\n' + cur_prompt
58 | else:
59 | images = None
60 | image_sizes = None
61 |
62 | if args.single_pred_prompt:
63 | qs = qs + '\n' + "Answer with the option's letter from the given choices directly."
64 | cur_prompt = cur_prompt + '\n' + "Answer with the option's letter from the given choices directly."
65 |
66 | conv = conv_templates[args.conv_mode].copy()
67 | conv.append_message(conv.roles[0], qs)
68 | conv.append_message(conv.roles[1], None)
69 | prompt = conv.get_prompt()
70 |
71 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
72 |
73 | with torch.inference_mode():
74 | output_ids = model.generate(
75 | input_ids,
76 | images=images,
77 | image_sizes=image_sizes,
78 | do_sample=True if args.temperature > 0 else False,
79 | temperature=args.temperature,
80 | max_new_tokens=1024,
81 | use_cache=True,
82 | )
83 |
84 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
85 |
86 | ans_id = shortuuid.uuid()
87 | ans_file.write(json.dumps({"question_id": idx,
88 | "prompt": cur_prompt,
89 | "text": outputs,
90 | "answer_id": ans_id,
91 | "model_id": model_name,
92 | "metadata": {}}) + "\n")
93 | ans_file.flush()
94 | ans_file.close()
95 |
96 | if __name__ == "__main__":
97 | parser = argparse.ArgumentParser()
98 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
99 | parser.add_argument("--model-base", type=str, default=None)
100 | parser.add_argument("--image-folder", type=str, default="")
101 | parser.add_argument("--question-file", type=str, default="tables/question.json")
102 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
103 | parser.add_argument("--conv-mode", type=str, default="llava_v0")
104 | parser.add_argument("--num-chunks", type=int, default=1)
105 | parser.add_argument("--chunk-idx", type=int, default=0)
106 | parser.add_argument("--temperature", type=float, default=0.2)
107 | parser.add_argument("--answer-prompter", action="store_true")
108 | parser.add_argument("--single-pred-prompt", action="store_true")
109 | args = parser.parse_args()
110 |
111 | eval_model(args)
112 |
--------------------------------------------------------------------------------
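A minimal sketch of the image-token handling above, assuming DEFAULT_IMAGE_TOKEN is the "<image>" placeholder from llava.constants: the placeholder is stripped from the raw conversation text and prepended again on its own line when an image is attached.

DEFAULT_IMAGE_TOKEN = "<image>"  # assumed to match llava.constants

value = "<image>\nWhich property do these two objects have in common?"
qs = value.replace(DEFAULT_IMAGE_TOKEN, '').strip()
qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
print(qs)
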
/llava/eval/qa_baseline_gpt35.py:
--------------------------------------------------------------------------------
1 | """Generate answers with GPT-3.5"""
2 | # Note: you need to be using OpenAI Python v0.27.0 for the code below to work
3 | import argparse
4 | import json
5 | import os
6 | import time
7 | import concurrent.futures
8 |
9 | import openai
10 | import tqdm
11 | import shortuuid
12 |
13 | MODEL = 'gpt-3.5-turbo'
14 | MODEL_ID = 'gpt-3.5-turbo:20230327'
15 |
16 | def get_answer(question_id: int, question: str, max_tokens: int):
17 | ans = {
18 | 'answer_id': shortuuid.uuid(),
19 | 'question_id': question_id,
20 | 'model_id': MODEL_ID,
21 | }
22 | for _ in range(3):
23 | try:
24 | response = openai.ChatCompletion.create(
25 | model=MODEL,
26 | messages=[{
27 | 'role': 'system',
28 | 'content': 'You are a helpful assistant.'
29 | }, {
30 | 'role': 'user',
31 | 'content': question,
32 | }],
33 | max_tokens=max_tokens,
34 | )
35 | ans['text'] = response['choices'][0]['message']['content']
36 | return ans
37 | except Exception as e:
38 | print('[ERROR]', e)
39 | ans['text'] = '#ERROR#'
40 | time.sleep(1)
41 | return ans
42 |
43 |
44 | if __name__ == '__main__':
45 | parser = argparse.ArgumentParser(description='ChatGPT answer generation.')
46 | parser.add_argument('-q', '--question')
47 | parser.add_argument('-o', '--output')
48 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
49 | args = parser.parse_args()
50 |
51 | questions_dict = {}
52 | with open(os.path.expanduser(args.question)) as f:
53 | for line in f:
54 | if not line:
55 | continue
56 | q = json.loads(line)
57 | questions_dict[q['question_id']] = q['text']
58 |
59 | answers = []
60 |
61 | with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
62 | futures = []
63 | for qid, question in questions_dict.items():
64 | future = executor.submit(get_answer, qid, question, args.max_tokens)
65 | futures.append(future)
66 |
67 | for future in tqdm.tqdm(concurrent.futures.as_completed(futures), total=len(futures)):
68 | answers.append(future.result())
69 |
70 | answers.sort(key=lambda x: x['question_id'])
71 |
72 | with open(os.path.expanduser(args.output), 'w') as f:
73 | table = [json.dumps(ans) for ans in answers]
74 | f.write('\n'.join(table))
75 |
--------------------------------------------------------------------------------
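A small sketch of the retry behaviour above: each question is attempted up to three times, one second apart, and '#ERROR#' is kept as the answer text if every attempt fails (the call here is a stand-in, not the OpenAI client).

import time

def get_answer_with_retries(call, retries=3):
    ans = {'text': '#ERROR#'}
    for _ in range(retries):
        try:
            ans['text'] = call()
            return ans
        except Exception as e:
            print('[ERROR]', e)
            time.sleep(1)
    return ans

print(get_answer_with_retries(lambda: "Paris is the capital of France."))
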
/llava/eval/run_llava.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 |
4 | from llava.constants import (
5 | IMAGE_TOKEN_INDEX,
6 | DEFAULT_IMAGE_TOKEN,
7 | DEFAULT_IM_START_TOKEN,
8 | DEFAULT_IM_END_TOKEN,
9 | IMAGE_PLACEHOLDER,
10 | )
11 | from llava.conversation import conv_templates, SeparatorStyle
12 | from llava.model.builder import load_pretrained_model
13 | from llava.utils import disable_torch_init
14 | from llava.mm_utils import (
15 | process_images,
16 | tokenizer_image_token,
17 | get_model_name_from_path,
18 | )
19 |
20 | from PIL import Image
21 |
22 | import requests
23 | from PIL import Image
24 | from io import BytesIO
25 | import re
26 |
27 |
28 | def image_parser(args):
29 | out = args.image_file.split(args.sep)
30 | return out
31 |
32 |
33 | def load_image(image_file):
34 | if image_file.startswith("http://") or image_file.startswith("https://"):
35 | response = requests.get(image_file)
36 | image = Image.open(BytesIO(response.content)).convert("RGB")
37 | else:
38 | image = Image.open(image_file).convert("RGB")
39 | return image
40 |
41 |
42 | def load_images(image_files):
43 | out = []
44 | for image_file in image_files:
45 | image = load_image(image_file)
46 | out.append(image)
47 | return out
48 |
49 |
50 | def eval_model(args):
51 | # Model
52 | disable_torch_init()
53 |
54 | model_name = get_model_name_from_path(args.model_path)
55 | tokenizer, model, image_processor, context_len = load_pretrained_model(
56 | args.model_path, args.model_base, model_name
57 | )
58 |
59 | qs = args.query
60 | image_token_se = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN
61 | if IMAGE_PLACEHOLDER in qs:
62 | if model.config.mm_use_im_start_end:
63 | qs = re.sub(IMAGE_PLACEHOLDER, image_token_se, qs)
64 | else:
65 | qs = re.sub(IMAGE_PLACEHOLDER, DEFAULT_IMAGE_TOKEN, qs)
66 | else:
67 | if model.config.mm_use_im_start_end:
68 | qs = image_token_se + "\n" + qs
69 | else:
70 | qs = DEFAULT_IMAGE_TOKEN + "\n" + qs
71 |
72 | if "llama-2" in model_name.lower():
73 | conv_mode = "llava_llama_2"
74 | elif "mistral" in model_name.lower():
75 | conv_mode = "mistral_instruct"
76 | elif "v1.6-34b" in model_name.lower():
77 | conv_mode = "chatml_direct"
78 | elif "v1" in model_name.lower():
79 | conv_mode = "llava_v1"
80 | elif "mpt" in model_name.lower():
81 | conv_mode = "mpt"
82 | else:
83 | conv_mode = "llava_v0"
84 |
85 | if args.conv_mode is not None and conv_mode != args.conv_mode:
86 | print(
87 | "[WARNING] the auto inferred conversation mode is {}, while `--conv-mode` is {}, using {}".format(
88 | conv_mode, args.conv_mode, args.conv_mode
89 | )
90 | )
91 | else:
92 | args.conv_mode = conv_mode
93 |
94 | conv = conv_templates[args.conv_mode].copy()
95 | conv.append_message(conv.roles[0], qs)
96 | conv.append_message(conv.roles[1], None)
97 | prompt = conv.get_prompt()
98 |
99 | image_files = image_parser(args)
100 | images = load_images(image_files)
101 | image_sizes = [x.size for x in images]
102 | images_tensor = process_images(
103 | images,
104 | image_processor,
105 | model.config
106 | ).to(model.device, dtype=torch.float16)
107 |
108 | input_ids = (
109 | tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
110 | .unsqueeze(0)
111 | .cuda()
112 | )
113 |
114 | with torch.inference_mode():
115 | output_ids = model.generate(
116 | input_ids,
117 | images=images_tensor,
118 | image_sizes=image_sizes,
119 | do_sample=True if args.temperature > 0 else False,
120 | temperature=args.temperature,
121 | top_p=args.top_p,
122 | num_beams=args.num_beams,
123 | max_new_tokens=args.max_new_tokens,
124 | use_cache=True,
125 | )
126 |
127 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
128 | print(outputs)
129 |
130 |
131 | if __name__ == "__main__":
132 | parser = argparse.ArgumentParser()
133 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
134 | parser.add_argument("--model-base", type=str, default=None)
135 | parser.add_argument("--image-file", type=str, required=True)
136 | parser.add_argument("--query", type=str, required=True)
137 | parser.add_argument("--conv-mode", type=str, default=None)
138 | parser.add_argument("--sep", type=str, default=",")
139 | parser.add_argument("--temperature", type=float, default=0.2)
140 | parser.add_argument("--top_p", type=float, default=None)
141 | parser.add_argument("--num_beams", type=int, default=1)
142 | parser.add_argument("--max_new_tokens", type=int, default=512)
143 | args = parser.parse_args()
144 |
145 | eval_model(args)
146 |
--------------------------------------------------------------------------------
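A sketch of the conversation-mode auto-selection in run_llava.py above, pulled out as a standalone helper (the model names in the calls are illustrative).

def infer_conv_mode(model_name: str) -> str:
    name = model_name.lower()
    if "llama-2" in name:
        return "llava_llama_2"
    if "mistral" in name:
        return "mistral_instruct"
    if "v1.6-34b" in name:
        return "chatml_direct"
    if "v1" in name:
        return "llava_v1"
    if "mpt" in name:
        return "mpt"
    return "llava_v0"

print(infer_conv_mode("llava-v1.5-13b"))  # llava_v1
print(infer_conv_mode("llava-v1.6-34b"))  # chatml_direct
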
/llava/eval/summarize_gpt_review.py:
--------------------------------------------------------------------------------
1 | import json
2 | import os
3 | from collections import defaultdict
4 |
5 | import numpy as np
6 |
7 | import argparse
8 |
9 | def parse_args():
10 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
11 | parser.add_argument('-d', '--dir', default=None)
12 | parser.add_argument('-v', '--version', default=None)
13 | parser.add_argument('-s', '--select', nargs='*', default=None)
14 | parser.add_argument('-f', '--files', nargs='*', default=[])
15 | parser.add_argument('-i', '--ignore', nargs='*', default=[])
16 | return parser.parse_args()
17 |
18 |
19 | if __name__ == '__main__':
20 | args = parse_args()
21 |
22 | if args.ignore is not None:
23 | args.ignore = [int(x) for x in args.ignore]
24 |
25 | if len(args.files) > 0:
26 | review_files = args.files
27 | else:
28 | review_files = [x for x in os.listdir(args.dir) if x.endswith('.jsonl') and (x.startswith('gpt4_text') or x.startswith('reviews_') or x.startswith('review_') or 'review' in args.dir)]
29 |
30 | for review_file in sorted(review_files):
31 | config = os.path.basename(review_file).replace('gpt4_text_', '').replace('.jsonl', '')
32 | if args.select is not None and any(x not in config for x in args.select):
33 | continue
34 | if '0613' in config:
35 | version = '0613'
36 | else:
37 | version = '0314'
38 | if args.version is not None and args.version != version:
39 | continue
40 | scores = defaultdict(list)
41 | print(config)
42 | with open(os.path.join(args.dir, review_file) if args.dir is not None else review_file) as f:
43 | for review_str in f:
44 | review = json.loads(review_str)
45 | if review['question_id'] in args.ignore:
46 | continue
47 | if 'category' in review:
48 | scores[review['category']].append(review['tuple'])
49 | scores['all'].append(review['tuple'])
50 | else:
51 | if 'tuple' in review:
52 | scores['all'].append(review['tuple'])
53 | else:
54 | scores['all'].append(review['score'])
55 | for k, v in sorted(scores.items()):
56 | stats = np.asarray(v).mean(0).tolist()
57 | stats = [round(x, 3) for x in stats]
58 | # print(k, stats, round(stats[1]/stats[0]*100, 1))
59 | print(k, round(stats[1]/stats[0]*100, 1), round(stats[0] * 10, 1), round(stats[1] * 10, 1))
60 | print('=================================')
61 |
--------------------------------------------------------------------------------
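A minimal sketch of the summary math above: each review "tuple" holds two scores (typically the reference assistant first and the evaluated model second), and the script prints the second score relative to the first plus both averages scaled by 10 (the numbers are illustrative).

import numpy as np

tuples = [[8.0, 7.0], [9.0, 8.5], [7.0, 7.0]]
stats = np.asarray(tuples).mean(0).tolist()
print(round(stats[1] / stats[0] * 100, 1),  # relative score (%)
      round(stats[0] * 10, 1),              # first-assistant average x10
      round(stats[1] * 10, 1))              # second-assistant average x10
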
/llava/eval/table/model.jsonl:
--------------------------------------------------------------------------------
1 | {"model_id": "vicuna-13b:20230322-clean-lang", "model_name": "vicuna-13b", "model_version": "20230322-clean-lang", "model_metadata": "vicuna-13b-20230322-clean-lang"}
2 | {"model_id": "alpaca-13b:v1", "model_name": "alpaca-13b", "model_version": "v1", "model_metadata": "alpaca-13b"}
3 | {"model_id": "llama-13b:v1", "model_name": "llama-13b", "model_version": "v1", "model_metadata": "hf-llama-13b"}
4 | {"model_id": "bard:20230327", "model_name": "bard", "model_version": "20230327", "model_metadata": "Google Bard 20230327"}
5 | {"model_id": "gpt-3.5-turbo:20230327", "model_name": "gpt-3.5-turbo", "model_version": "20230327", "model_metadata": "OpenAI ChatGPT gpt-3.5-turbo Chat Completion"}
6 |
--------------------------------------------------------------------------------
/llava/eval/table/prompt.jsonl:
--------------------------------------------------------------------------------
1 | {"prompt_id": 1, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above.\nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."}, "description": "Prompt for general questions"}
2 | {"prompt_id": 2, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "Your task is to evaluate the coding abilities of the above two assistants. They have been asked to implement a program to solve a given problem. Please review their code submissions, paying close attention to their problem-solving approach, code structure, readability, and the inclusion of helpful comments.\n\nPlease ensure that the assistants' submissions:\n\n1. Correctly implement the given problem statement.\n2. Contain accurate and efficient code.\n3. Include clear and concise comments that explain the code's logic and functionality.\n4. Adhere to proper coding standards and best practices.\n\nOnce you have carefully reviewed both submissions, provide detailed feedback on their strengths and weaknesses, along with any suggestions for improvement. You should first output a single line containing two scores on the scale of 1-10 (1: no code/no sense; 10: perfect) for Assistant 1 and 2, respectively. Then give extra comments starting from the next line."}, "description": "Prompt for coding questions"}
3 | {"prompt_id": 3, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "We would like to request your feedback on the mathematical proficiency of two AI assistants regarding the given user question.\nFirstly, please solve the problem independently, without referring to the answers provided by Assistant 1 and Assistant 2.\nAfterward, please examine the problem-solving process of Assistant 1 and Assistant 2 step-by-step to ensure their correctness, identifying any incorrect steps if present. Your evaluation should take into account not only the answer but also the problem-solving steps.\nFinally, please output a Python tuple containing two numerical scores for Assistant 1 and Assistant 2, ranging from 1 to 10, respectively. If applicable, explain the reasons for any variations in their scores and determine which assistant performed better."}, "description": "Prompt for math questions"}
4 | {"prompt_id": 4, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Visual Context]\n{context}\n[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."}, "description": "Prompt for visual questions"}
5 |
--------------------------------------------------------------------------------
/llava/eval/table/reviewer.jsonl:
--------------------------------------------------------------------------------
1 | {"reviewer_id": "gpt-4-0328-default", "prompt_id": 1, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for general questions"}
2 | {"reviewer_id": "gpt-4-0328-coding", "prompt_id": 2, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for coding questions"}
3 | {"reviewer_id": "gpt-4-0328-math", "prompt_id": 3, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for math questions"}
4 | {"reviewer_id": "gpt-4-0417-visual", "prompt_id": 4, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for math questions"}
5 |
--------------------------------------------------------------------------------
/llava/eval/table/rule.json:
--------------------------------------------------------------------------------
1 | {
2 | "coding": {"role": "Assistant", "prompt": "Your task is to evaluate the coding abilities of the above two assistants. They have been asked to implement a program to solve a given problem. Please review their code submissions, paying close attention to their problem-solving approach, code structure, readability, and the inclusion of helpful comments.\n\nPlease ensure that the assistants' submissions:\n\n1. Correctly implement the given problem statement.\n2. Contain accurate and efficient code.\n3. Include clear and concise comments that explain the code's logic and functionality.\n4. Adhere to proper coding standards and best practices.\n\nOnce you have carefully reviewed both submissions, provide detailed feedback on their strengths and weaknesses, along with any suggestions for improvement. You should first output a single line containing two scores on the scale of 1-10 (1: no code/no sense; 10: perfect) for Assistant 1 and 2, respectively. Then give extra comments starting from the next line."},
3 | "math": {"role": "Assistant", "prompt": "We would like to request your feedback on the mathematical proficiency of two AI assistants regarding the given user question.\nFirstly, please solve the problem independently, without referring to the answers provided by Assistant 1 and Assistant 2.\nAfterward, please examine the problem-solving process of Assistant 1 and Assistant 2 step-by-step to ensure their correctness, identifying any incorrect steps if present. Your evaluation should take into account not only the answer but also the problem-solving steps.\nFinally, please output a Python tuple containing two numerical scores for Assistant 1 and Assistant 2, ranging from 1 to 10, respectively. If applicable, explain the reasons for any variations in their scores and determine which assistant performed better."},
4 | "default": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above.\nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
5 | "conv": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
6 | "detail": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
7 | "complex": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
8 | "llava_bench_conv": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with a few sentences describing the image. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
9 | "llava_bench_detail": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with a few sentences describing the image. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
10 | "llava_bench_complex": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with a few sentences describing the image. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."}
11 | }
--------------------------------------------------------------------------------
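The judge prompts above all end with the same instruction pattern: the first line of the review carries the two numeric scores (Assistant 1 first, Assistant 2 second) and the explanation begins on the next line; the math rule alone asks for the scores as a Python tuple. A minimal, hypothetical sketch of how that first line could be parsed is given below. The function name parse_score_line and the -1.0 fallback scores are illustrative assumptions, not the repository's own eval code.

def parse_score_line(review_text: str) -> list[float]:
    """Return [score_assistant_1, score_assistant_2] from a judge review.

    Most rules ask for two space-separated scores on the first line; the
    math rule asks for a Python tuple such as "(7, 8)". Both forms are
    accepted here, and everything after the first line is ignored.
    """
    first_line = review_text.strip().split("\n", 1)[0]
    # Normalise "(7, 8)" and "7, 8" down to "7 8" before splitting.
    cleaned = first_line.replace("(", " ").replace(")", " ").replace(",", " ")
    parts = cleaned.split()
    try:
        return [float(parts[0]), float(parts[1])]
    except (IndexError, ValueError):
        # Sentinel scores when the judge's first line cannot be parsed.
        return [-1.0, -1.0]


if __name__ == "__main__":
    # Example review: Assistant 1 scored 8, Assistant 2 scored 7.
    print(parse_score_line("8 7\nAssistant 1 gave a more detailed answer ..."))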
/llava/eval/webpage/figures/alpaca.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/llava/eval/webpage/figures/alpaca.png
--------------------------------------------------------------------------------
/llava/eval/webpage/figures/bard.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/llava/eval/webpage/figures/bard.jpg
--------------------------------------------------------------------------------
/llava/eval/webpage/figures/chatgpt.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/llava/eval/webpage/figures/llama.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/llava/eval/webpage/figures/llama.jpg
--------------------------------------------------------------------------------
/llava/eval/webpage/figures/swords_FILL0_wght300_GRAD0_opsz48.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/llava/eval/webpage/figures/vicuna.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lwpyh/ProMaC_code/689e9e9350f8a50145bbbaad0da4ba7b459ee050/llava/eval/webpage/figures/vicuna.jpeg
--------------------------------------------------------------------------------
/llava/eval/webpage/index.html:
--------------------------------------------------------------------------------
[HTML markup stripped from this listing; only the page's visible text remains]
Page title / heading: Who's GPT-4's favorite? Battles between State-of-the-Art Chatbots
Section heading: Assistant #2 (Vicuna, our model)
Section heading: GPT-4 Evaluation
Footer note: This website is co-authored with GPT-4.