├── README.md
├── model
│   ├── __init__.py
│   └── forgery_analyst
│       ├── __init__.py
│       └── llava
│           ├── __init__.py
│           ├── constants.py
│           ├── conversation.py
│           ├── eval
│           │   ├── eval_gpt_review.py
│           │   ├── eval_gpt_review_bench.py
│           │   ├── eval_gpt_review_visual.py
│           │   ├── eval_pope.py
│           │   ├── eval_science_qa.py
│           │   ├── eval_science_qa_gpt4.py
│           │   ├── eval_science_qa_gpt4_requery.py
│           │   ├── eval_textvqa.py
│           │   ├── generate_webpage_data_from_table.py
│           │   ├── m4c_evaluator.py
│           │   ├── model_qa.py
│           │   ├── model_vqa.py
│           │   ├── model_vqa_loader.py
│           │   ├── model_vqa_mmbench.py
│           │   ├── model_vqa_science.py
│           │   ├── qa_baseline_gpt35.py
│           │   ├── run_llava.py
│           │   ├── summarize_gpt_review.py
│           │   ├── table
│           │   │   ├── answer
│           │   │   │   ├── answer_alpaca-13b.jsonl
│           │   │   │   ├── answer_bard.jsonl
│           │   │   │   ├── answer_gpt35.jsonl
│           │   │   │   ├── answer_llama-13b.jsonl
│           │   │   │   └── answer_vicuna-13b.jsonl
│           │   │   ├── caps_boxes_coco2014_val_80.jsonl
│           │   │   ├── model.jsonl
│           │   │   ├── prompt.jsonl
│           │   │   ├── question.jsonl
│           │   │   ├── results
│           │   │   │   ├── test_sqa_llava_13b_v0.json
│           │   │   │   └── test_sqa_llava_lcs_558k_sqa_12e_vicuna_v1_3_13b.json
│           │   │   ├── review
│           │   │   │   ├── review_alpaca-13b_vicuna-13b.jsonl
│           │   │   │   ├── review_bard_vicuna-13b.jsonl
│           │   │   │   ├── review_gpt35_vicuna-13b.jsonl
│           │   │   │   └── review_llama-13b_vicuna-13b.jsonl
│           │   │   ├── reviewer.jsonl
│           │   │   └── rule.json
│           │   └── webpage
│           │       ├── figures
│           │       │   ├── alpaca.png
│           │       │   ├── bard.jpg
│           │       │   ├── chatgpt.svg
│           │       │   ├── llama.jpg
│           │       │   ├── swords_FILL0_wght300_GRAD0_opsz48.svg
│           │       │   └── vicuna.jpeg
│           │       ├── index.html
│           │       ├── script.js
│           │       └── styles.css
│           ├── mm_utils.py
│           ├── model
│           │   ├── __init__.py
│           │   ├── __pycache__
│           │   │   ├── __init__.cpython-310.pyc
│           │   │   ├── builder.cpython-310.pyc
│           │   │   └── llava_arch.cpython-310.pyc
│           │   ├── apply_delta.py
│           │   ├── builder.py
│           │   ├── consolidate.py
│           │   ├── language_model
│           │   │   ├── __pycache__
│           │   │   │   ├── llava_llama.cpython-310.pyc
│           │   │   │   └── llava_mpt.cpython-310.pyc
│           │   │   ├── llava_llama.py
│           │   │   ├── llava_mistral.py
│           │   │   └── llava_mpt.py
│           │   ├── llava_arch.py
│           │   ├── make_delta.py
│           │   ├── multimodal_encoder
│           │   │   ├── __pycache__
│           │   │   │   ├── builder.cpython-310.pyc
│           │   │   │   └── clip_encoder.cpython-310.pyc
│           │   │   ├── builder.py
│           │   │   └── clip_encoder.py
│           │   ├── multimodal_projector
│           │   │   ├── __pycache__
│           │   │   │   └── builder.cpython-310.pyc
│           │   │   └── builder.py
│           │   └── utils.py
│           ├── serve
│           │   ├── __init__.py
│           │   ├── cli.py
│           │   ├── controller.py
│           │   ├── examples
│           │   │   ├── extreme_ironing.jpg
│           │   │   └── waterview.jpg
│           │   ├── gradio_web_server.py
│           │   ├── model_worker.py
│           │   ├── register_worker.py
│           │   ├── sglang_worker.py
│           │   └── test_message.py
│           ├── train
│           │   ├── llama_flash_attn_monkey_patch.py
│           │   ├── llama_xformers_attn_monkey_patch.py
│           │   ├── llava_trainer.py
│           │   ├── train.py
│           │   ├── train_mem.py
│           │   └── train_xformers.py
│           └── utils.py
├── prompt
│   ├── data_engine_prompt.py
│   └── real_analysis_text.py
├── requirements.txt
├── run_engine.py
├── run_sharecaptioner.py
├── src
│   └── teaser.png
└── utils
    └── utils.py
/README.md:
--------------------------------------------------------------------------------
1 | # ForgerySleuth: Empowering Multimodal Large Language Models for Image Manipulation Detection and Analyzing
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 | ## Abstract
10 |
11 | In this work, we explored the potential of multimodal large language models in the image manipulation detection task. We constructed *ForgeryAnalysis*, a dataset containing forgery analysis text annotations; each entry was initially generated by GPT-4o and then reviewed by experts. The proposed data engine, *ForgeryAnalyst*, enables the creation of the larger-scale *ForgeryAnalysis-PT* dataset for pre-training purposes. We also proposed *ForgerySleuth*, which leverages a multimodal large language model to perform comprehensive clue fusion and generate segmentation outputs indicating the specific regions that have been tampered with. More details about our work can be found in the [paper](https://arxiv.org/abs/2411.19466).
12 |
13 | ## Contents
14 | - [Install](#install)
15 | - [ForgeryAnalyst Data Engine](#forgeryanalyst-data-engine)
16 | - [ForgeryAnalysis Dataset](#forgeryanalysis-dataset)
17 | - [ForgerySleuth Assistant (TODO)](#forgerysleuth-assistant)
18 |
19 | ## Install
20 |
21 | ```
22 | conda create --name <env_name> --file requirements.txt
23 | ```
24 |
25 | ## ForgeryAnalyst Data Engine
26 | ### Automatic Annotation
27 |
28 | You can use the data engine [ForgeryAnalyst-llava-13B](https://huggingface.co/Zhihao18/ForgeryAnalyst-llava-13B) to automatically annotate forgery analysis text for images that already have tampered region masks:
29 |
30 | ```
31 | python run_engine.py --model-path Zhihao18/ForgeryAnalyst-llava-13B --image-path <image_path> --mask-path <mask_path> --manipulation-type <manipulation_type> --output-path <output_path>
32 | ```
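
If you need to annotate a whole directory, a small loop can drive `run_engine.py` one image at a time. The sketch below is only an illustration: it assumes each flag above takes a single file path, that masks share the image's file stem, and that the directory names and the `splice` manipulation label are placeholders you replace with your own.

```
import subprocess
from pathlib import Path

IMAGE_DIR = Path("data/images")        # hypothetical layout: one mask per tampered image
MASK_DIR = Path("data/masks")
OUTPUT_DIR = Path("data/analysis")
MANIPULATION_TYPE = "splice"           # hypothetical label; use whatever your dataset records

OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for image_path in sorted(IMAGE_DIR.glob("*")):
    mask_path = MASK_DIR / f"{image_path.stem}.png"    # assumed mask naming convention
    if not mask_path.exists():
        continue
    subprocess.run([
        "python", "run_engine.py",
        "--model-path", "Zhihao18/ForgeryAnalyst-llava-13B",
        "--image-path", str(image_path),
        "--mask-path", str(mask_path),
        "--manipulation-type", MANIPULATION_TYPE,
        "--output-path", str(OUTPUT_DIR / f"{image_path.stem}.txt"),
    ], check=True)
```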
33 |
34 | ### Authentic Image Analysis Generation
35 |
36 | To ensure consistency in the training data, you can use [ShareCaptioner](https://github.com/ShareGPT4Omni/ShareGPT4V) to generate detailed captions for authentic images and then organize them in the Chain-of-Clues format.
37 |
38 | ```
39 | python run_sharecaptioner.py --model-path Lin-Chen/ShareCaptioner --image-path <image_path> --output-path <output_path>
40 | ```
41 |
42 | **Tips**: You can download [ShareCaptioner](https://github.com/ShareGPT4Omni/ShareGPT4V) in advance and use `local_files_only=True` to force the use of local weights, avoiding potential network issues.
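
As a minimal sketch of that workflow (the exact model class instantiated inside `run_sharecaptioner.py` may differ, so treat this only as an illustration of `local_files_only`): snapshot the weights once while you have network access, then load strictly from the local copy.

```
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# One-time download of the ShareCaptioner weights into the local HF cache.
local_dir = snapshot_download("Lin-Chen/ShareCaptioner")

# Subsequent loads read only the cached files and never touch the network.
tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_dir, trust_remote_code=True, local_files_only=True)
```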
43 |
44 |
45 | ## ForgeryAnalysis Dataset
46 |
47 | ### ForgeryAnalysis-PT
48 |
49 | #### Overview
50 |
51 | The **ForgeryAnalysis-PT** dataset consists of forgery analysis texts automatically generated by our data engine, **ForgeryAnalyst**. The dataset corresponds to two publicly available image manipulation detection datasets: **CASIA2** and **MIML**. Each entry in the dataset provides forgery analysis for a corresponding tampered image, including clues and explanations structured in a Chain-of-Clues format.
52 |
53 | #### Usage
54 |
55 | Before using this dataset, download the original CASIA2 and MIML datasets from the respective public repositories, as ForgeryAnalysis-PT relies on these datasets for the corresponding tampered images.
56 |
57 | The tampering analysis for each image is saved as a `.txt` file with the same name as the tampered image in the original CASIA2 and MIML datasets. You can download this dataset from the following link: [Google Drive](https://drive.google.com/file/d/1vUDFKfyW5vEkVXyTiuMIOrKS4yGtg8nJ/view?usp=sharing).
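
As a rough illustration of how the files line up (the directory names below are hypothetical; point them at wherever you placed the original images and the extracted `.txt` analyses):

```
from pathlib import Path

IMAGE_DIR = Path("CASIA2/Tp")                      # hypothetical path to the original tampered images
ANALYSIS_DIR = Path("ForgeryAnalysis-PT/CASIA2")   # hypothetical path to the downloaded .txt analyses

pairs = []
for txt_path in sorted(ANALYSIS_DIR.glob("*.txt")):
    # Each analysis shares its file stem with the tampered image; the image
    # extension varies across datasets, so match on the stem.
    matches = list(IMAGE_DIR.glob(txt_path.stem + ".*"))
    if matches:
        pairs.append((matches[0], txt_path.read_text(encoding="utf-8")))

print(f"Loaded {len(pairs)} image/analysis pairs")
```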
58 |
59 | #### License
60 |
61 | The ForgeryAnalysis-PT dataset is freely available for academic research and development. However, you must respect the terms and conditions of the original datasets, CASIA2 and MIML.
62 |
63 | ## ForgerySleuth Assistant (TODO)
64 |
65 |
66 | ## Evaluation Dataset
67 |
68 | We used several publicly available and widely used image manipulation detection (IMD) datasets to evaluate the performance of IMD methods. You can access the original repositories and download the data through the following links:
69 |
70 | | Dataset | Paper | Download URL |
71 | | --- | --- | --- |
72 | | Columbia | Detecting Image Splicing Using Geometry Invariants And Camera Characteristics Consistency | https://www.ee.columbia.edu/ln/dvmm/downloads/authsplcuncmp |
73 | | CASIA | CASIA Image Tampering Detection Evaluation Database | [Unofficial] https://github.com/namtpham/casia1groundtruth |
74 | | | | [Unofficial] https://github.com/namtpham/casia2groundtruth |
75 | | Coverage | COVERAGE - A Novel Database for Copy-move Forgery Detection | https://github.com/wenbihan/coverage |
76 | | NIST16 | MFC Datasets: Large-Scale Benchmark Datasets for Media Forensic Challenge Evaluation | https://mfc.nist.gov/users/sign_in |
77 | | IMD20 | IMD2020: A Large-Scale Annotated Dataset Tailored for Detecting Manipulated Images | https://staff.utia.cas.cz/novozada/db |
78 | | COCOGlide | TruFor: Leveraging all-round clues for trustworthy image forgery detection and localization | https://github.com/grip-unina/TruFor?tab=readme-ov-file#cocoglide-dataset |
79 |
80 | ## Citation
81 | If you find this project useful for your research and applications, please cite using this BibTeX:
82 |
83 | ```
84 | @misc{sun2024forgerysleuth,
85 | title={ForgerySleuth: Empowering Multimodal Large Language Models for Image Manipulation Detection},
86 | author={Sun, Zhihao and Jiang, Haoran and Chen, Haoran and Cao, Yixin and Qiu, Xipeng and Wu, Zuxuan and Jiang, Yu-Gang},
87 | publisher={arXiv:2411.19466},
88 | year={2024},
89 | url={https://arxiv.org/abs/2411.19466},
90 | }
91 | ```
92 |
93 | ## Acknowledgment
94 | - This work is built upon [LLaVA](https://github.com/haotian-liu/LLaVA), [LISA](https://github.com/dvlab-research/LISA), and [SAM](https://github.com/facebookresearch/segment-anything).
95 | - In the process of dataset creation and model evaluation, we utilized [ChatGPT](https://platform.openai.com/docs/api-reference/introduction) and [ShareCaptioner](https://github.com/ShareGPT4Omni/ShareGPT4V).
--------------------------------------------------------------------------------
/model/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/__init__.py
--------------------------------------------------------------------------------
/model/forgery_analyst/__init__.py:
--------------------------------------------------------------------------------
1 | from llava import *
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/__init__.py
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/constants.py:
--------------------------------------------------------------------------------
1 | CONTROLLER_HEART_BEAT_EXPIRATION = 30
2 | WORKER_HEART_BEAT_INTERVAL = 15
3 |
4 | LOGDIR = "."
5 |
6 | # Model Constants
7 | IGNORE_INDEX = -100
8 | IMAGE_TOKEN_INDEX = -200
9 | DEFAULT_IMAGE_TOKEN = "<image>"
10 | DEFAULT_IMAGE_PATCH_TOKEN = "<im_patch>"
11 | DEFAULT_IM_START_TOKEN = "<im_start>"
12 | DEFAULT_IM_END_TOKEN = "<im_end>"
13 | IMAGE_PLACEHOLDER = "<image-placeholder>"
14 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_gpt_review.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 |
5 | import openai
6 | import tqdm
7 | import ray
8 | import time
9 |
10 | NUM_SECONDS_TO_SLEEP = 3
11 |
12 | @ray.remote(num_cpus=4)
13 | def get_eval(content: str, max_tokens: int):
14 | while True:
15 | try:
16 | response = openai.ChatCompletion.create(
17 | model='gpt-4',
18 | messages=[{
19 | 'role': 'system',
20 | 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
21 | }, {
22 | 'role': 'user',
23 | 'content': content,
24 | }],
25 | temperature=0.2, # TODO: figure out which temperature is best for evaluation
26 | max_tokens=max_tokens,
27 | )
28 | break
29 | except openai.error.RateLimitError:
30 | pass
31 | except Exception as e:
32 | print(e)
33 | time.sleep(NUM_SECONDS_TO_SLEEP)
34 |
35 | print('success!')
36 | return response['choices'][0]['message']['content']
37 |
38 |
39 | def parse_score(review):
40 | try:
41 | score_pair = review.split('\n')[0]
42 | score_pair = score_pair.replace(',', ' ')
43 | sp = score_pair.split(' ')
44 | if len(sp) == 2:
45 | return [float(sp[0]), float(sp[1])]
46 | else:
47 | print('error', review)
48 | return [-1, -1]
49 | except Exception as e:
50 | print(e)
51 | print('error', review)
52 | return [-1, -1]
53 |
54 |
55 | if __name__ == '__main__':
56 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
57 | parser.add_argument('-q', '--question')
58 | # parser.add_argument('-a', '--answer')
59 | parser.add_argument('-a', '--answer-list', nargs='+', default=[])
60 | parser.add_argument('-r', '--rule')
61 | parser.add_argument('-o', '--output')
62 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
63 | args = parser.parse_args()
64 |
65 | ray.init()
66 |
67 | f_q = open(os.path.expanduser(args.question))
68 | f_ans1 = open(os.path.expanduser(args.answer_list[0]))
69 | f_ans2 = open(os.path.expanduser(args.answer_list[1]))
70 | rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
71 |
72 | review_file = open(f'{args.output}', 'w')
73 |
74 | js_list = []
75 | handles = []
76 | idx = 0
77 | for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
78 | # if idx == 1:
79 | # break
80 |
81 | ques = json.loads(ques_js)
82 | ans1 = json.loads(ans1_js)
83 | ans2 = json.loads(ans2_js)
84 |
85 | category = json.loads(ques_js)['category']
86 | if category in rule_dict:
87 | rule = rule_dict[category]
88 | else:
89 | rule = rule_dict['default']
90 | prompt = rule['prompt']
91 | role = rule['role']
92 | content = (f'[Question]\n{ques["text"]}\n\n'
93 | f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
94 | f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
95 | f'[System]\n{prompt}\n\n')
96 | js_list.append({
97 | 'id': idx+1,
98 | 'question_id': ques['question_id'],
99 | 'answer1_id': ans1['answer_id'],
100 | 'answer2_id': ans2['answer_id'],
101 | 'category': category})
102 | idx += 1
103 | handles.append(get_eval.remote(content, args.max_tokens))
104 | # To avoid the rate limit set by OpenAI
105 | time.sleep(NUM_SECONDS_TO_SLEEP)
106 |
107 | reviews = ray.get(handles)
108 | for idx, review in enumerate(reviews):
109 | scores = parse_score(review)
110 | js_list[idx]['content'] = review
111 | js_list[idx]['tuple'] = scores
112 | review_file.write(json.dumps(js_list[idx]) + '\n')
113 | review_file.close()
114 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_gpt_review_bench.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 |
5 | import openai
6 | import time
7 |
8 | NUM_SECONDS_TO_SLEEP = 0.5
9 |
10 |
11 | def get_eval(content: str, max_tokens: int):
12 | while True:
13 | try:
14 | response = openai.ChatCompletion.create(
15 | model='gpt-4-0314',
16 | messages=[{
17 | 'role': 'system',
18 | 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
19 | }, {
20 | 'role': 'user',
21 | 'content': content,
22 | }],
23 | temperature=0.2, # TODO: figure out which temperature is best for evaluation
24 | max_tokens=max_tokens,
25 | )
26 | break
27 | except openai.error.RateLimitError:
28 | pass
29 | except Exception as e:
30 | print(e)
31 | time.sleep(NUM_SECONDS_TO_SLEEP)
32 |
33 | return response['choices'][0]['message']['content']
34 |
35 |
36 | def parse_score(review):
37 | try:
38 | score_pair = review.split('\n')[0]
39 | score_pair = score_pair.replace(',', ' ')
40 | sp = score_pair.split(' ')
41 | if len(sp) == 2:
42 | return [float(sp[0]), float(sp[1])]
43 | else:
44 | print('error', review)
45 | return [-1, -1]
46 | except Exception as e:
47 | print(e)
48 | print('error', review)
49 | return [-1, -1]
50 |
51 |
52 | if __name__ == '__main__':
53 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
54 | parser.add_argument('-q', '--question')
55 | parser.add_argument('-c', '--context')
56 | parser.add_argument('-a', '--answer-list', nargs='+', default=[])
57 | parser.add_argument('-r', '--rule')
58 | parser.add_argument('-o', '--output')
59 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
60 | args = parser.parse_args()
61 |
62 | f_q = open(os.path.expanduser(args.question))
63 | f_ans1 = open(os.path.expanduser(args.answer_list[0]))
64 | f_ans2 = open(os.path.expanduser(args.answer_list[1]))
65 | rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
66 |
67 | if os.path.isfile(os.path.expanduser(args.output)):
68 | cur_reviews = [json.loads(line) for line in open(os.path.expanduser(args.output))]
69 | else:
70 | cur_reviews = []
71 |
72 | review_file = open(f'{args.output}', 'a')
73 |
74 | context_list = [json.loads(line) for line in open(os.path.expanduser(args.context))]
75 | image_to_context = {context['image']: context for context in context_list}
76 |
77 | handles = []
78 | idx = 0
79 | for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
80 | ques = json.loads(ques_js)
81 | ans1 = json.loads(ans1_js)
82 | ans2 = json.loads(ans2_js)
83 |
84 | inst = image_to_context[ques['image']]
85 |
86 | if isinstance(inst['caption'], list):
87 | cap_str = '\n'.join(inst['caption'])
88 | else:
89 | cap_str = inst['caption']
90 |
91 | category = 'llava_bench_' + json.loads(ques_js)['category']
92 | if category in rule_dict:
93 | rule = rule_dict[category]
94 | else:
95 | assert False, f"Visual QA category not found in rule file: {category}."
96 | prompt = rule['prompt']
97 | role = rule['role']
98 | content = (f'[Context]\n{cap_str}\n\n'
99 | f'[Question]\n{ques["text"]}\n\n'
100 | f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
101 | f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
102 | f'[System]\n{prompt}\n\n')
103 | cur_js = {
104 | 'id': idx+1,
105 | 'question_id': ques['question_id'],
106 | 'answer1_id': ans1.get('answer_id', ans1['question_id']),
107 | 'answer2_id': ans2.get('answer_id', ans2['answer_id']),
108 | 'category': category
109 | }
110 | if idx >= len(cur_reviews):
111 | review = get_eval(content, args.max_tokens)
112 | scores = parse_score(review)
113 | cur_js['content'] = review
114 | cur_js['tuple'] = scores
115 | review_file.write(json.dumps(cur_js) + '\n')
116 | review_file.flush()
117 | else:
118 | print(f'Skipping {idx} as we already have it.')
119 | idx += 1
120 | print(idx)
121 | review_file.close()
122 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_gpt_review_visual.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 |
5 | import openai
6 | import time
7 |
8 | NUM_SECONDS_TO_SLEEP = 0.5
9 |
10 |
11 | def get_eval(content: str, max_tokens: int):
12 | while True:
13 | try:
14 | response = openai.ChatCompletion.create(
15 | model='gpt-4-0314',
16 | messages=[{
17 | 'role': 'system',
18 | 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
19 | }, {
20 | 'role': 'user',
21 | 'content': content,
22 | }],
23 | temperature=0.2, # TODO: figure out which temperature is best for evaluation
24 | max_tokens=max_tokens,
25 | )
26 | break
27 | except openai.error.RateLimitError:
28 | pass
29 | except Exception as e:
30 | print(e)
31 | time.sleep(NUM_SECONDS_TO_SLEEP)
32 |
33 | return response['choices'][0]['message']['content']
34 |
35 |
36 | def parse_score(review):
37 | try:
38 | score_pair = review.split('\n')[0]
39 | score_pair = score_pair.replace(',', ' ')
40 | sp = score_pair.split(' ')
41 | if len(sp) == 2:
42 | return [float(sp[0]), float(sp[1])]
43 | else:
44 | print('error', review)
45 | return [-1, -1]
46 | except Exception as e:
47 | print(e)
48 | print('error', review)
49 | return [-1, -1]
50 |
51 |
52 | if __name__ == '__main__':
53 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
54 | parser.add_argument('-q', '--question')
55 | parser.add_argument('-c', '--context')
56 | parser.add_argument('-a', '--answer-list', nargs='+', default=[])
57 | parser.add_argument('-r', '--rule')
58 | parser.add_argument('-o', '--output')
59 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
60 | args = parser.parse_args()
61 |
62 | f_q = open(os.path.expanduser(args.question))
63 | f_ans1 = open(os.path.expanduser(args.answer_list[0]))
64 | f_ans2 = open(os.path.expanduser(args.answer_list[1]))
65 | rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
66 |
67 | if os.path.isfile(os.path.expanduser(args.output)):
68 | cur_reviews = [json.loads(line) for line in open(os.path.expanduser(args.output))]
69 | else:
70 | cur_reviews = []
71 |
72 | review_file = open(f'{args.output}', 'a')
73 |
74 | context_list = [json.loads(line) for line in open(os.path.expanduser(args.context))]
75 | image_to_context = {context['image']: context for context in context_list}
76 |
77 | handles = []
78 | idx = 0
79 | for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
80 | ques = json.loads(ques_js)
81 | ans1 = json.loads(ans1_js)
82 | ans2 = json.loads(ans2_js)
83 |
84 | inst = image_to_context[ques['image']]
85 | cap_str = '\n'.join(inst['captions'])
86 | box_str = '\n'.join([f'{instance["category"]}: {instance["bbox"]}' for instance in inst['instances']])
87 |
88 | category = json.loads(ques_js)['category']
89 | if category in rule_dict:
90 | rule = rule_dict[category]
91 | else:
92 | assert False, f"Visual QA category not found in rule file: {category}."
93 | prompt = rule['prompt']
94 | role = rule['role']
95 | content = (f'[Context]\n{cap_str}\n\n{box_str}\n\n'
96 | f'[Question]\n{ques["text"]}\n\n'
97 | f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
98 | f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
99 | f'[System]\n{prompt}\n\n')
100 | cur_js = {
101 | 'id': idx+1,
102 | 'question_id': ques['question_id'],
103 | 'answer1_id': ans1.get('answer_id', ans1['question_id']),
104 | 'answer2_id': ans2.get('answer_id', ans2['answer_id']),
105 | 'category': category
106 | }
107 | if idx >= len(cur_reviews):
108 | review = get_eval(content, args.max_tokens)
109 | scores = parse_score(review)
110 | cur_js['content'] = review
111 | cur_js['tuple'] = scores
112 | review_file.write(json.dumps(cur_js) + '\n')
113 | review_file.flush()
114 | else:
115 | print(f'Skipping {idx} as we already have it.')
116 | idx += 1
117 | print(idx)
118 | review_file.close()
119 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_pope.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import argparse
4 |
5 | def eval_pope(answers, label_file):
6 | label_list = [json.loads(q)['label'] for q in open(label_file, 'r')]
7 |
8 | for answer in answers:
9 | text = answer['text']
10 |
11 | # Only keep the first sentence
12 | if text.find('.') != -1:
13 | text = text.split('.')[0]
14 |
15 | text = text.replace(',', '')
16 | words = text.split(' ')
17 | if 'No' in words or 'not' in words or 'no' in words:
18 | answer['text'] = 'no'
19 | else:
20 | answer['text'] = 'yes'
21 |
22 | for i in range(len(label_list)):
23 | if label_list[i] == 'no':
24 | label_list[i] = 0
25 | else:
26 | label_list[i] = 1
27 |
28 | pred_list = []
29 | for answer in answers:
30 | if answer['text'] == 'no':
31 | pred_list.append(0)
32 | else:
33 | pred_list.append(1)
34 |
35 | pos = 1
36 | neg = 0
37 | yes_ratio = pred_list.count(1) / len(pred_list)
38 |
39 | TP, TN, FP, FN = 0, 0, 0, 0
40 | for pred, label in zip(pred_list, label_list):
41 | if pred == pos and label == pos:
42 | TP += 1
43 | elif pred == pos and label == neg:
44 | FP += 1
45 | elif pred == neg and label == neg:
46 | TN += 1
47 | elif pred == neg and label == pos:
48 | FN += 1
49 |
50 | print('TP\tFP\tTN\tFN\t')
51 | print('{}\t{}\t{}\t{}'.format(TP, FP, TN, FN))
52 |
53 | precision = float(TP) / float(TP + FP)
54 | recall = float(TP) / float(TP + FN)
55 | f1 = 2*precision*recall / (precision + recall)
56 | acc = (TP + TN) / (TP + TN + FP + FN)
57 | print('Accuracy: {}'.format(acc))
58 | print('Precision: {}'.format(precision))
59 | print('Recall: {}'.format(recall))
60 | print('F1 score: {}'.format(f1))
61 | print('Yes ratio: {}'.format(yes_ratio))
62 | print('%.3f, %.3f, %.3f, %.3f, %.3f' % (f1, acc, precision, recall, yes_ratio) )
63 |
64 | if __name__ == "__main__":
65 | parser = argparse.ArgumentParser()
66 | parser.add_argument("--annotation-dir", type=str)
67 | parser.add_argument("--question-file", type=str)
68 | parser.add_argument("--result-file", type=str)
69 | args = parser.parse_args()
70 |
71 | questions = [json.loads(line) for line in open(args.question_file)]
72 | questions = {question['question_id']: question for question in questions}
73 | answers = [json.loads(q) for q in open(args.result_file)]
74 | for file in os.listdir(args.annotation_dir):
75 | assert file.startswith('coco_pope_')
76 | assert file.endswith('.json')
77 | category = file[10:-5]
78 | cur_answers = [x for x in answers if questions[x['question_id']]['category'] == category]
79 | print('Category: {}, # samples: {}'.format(category, len(cur_answers)))
80 | eval_pope(cur_answers, os.path.join(args.annotation_dir, file))
81 | print("====================================")
82 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_science_qa.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import re
5 | import random
6 |
7 |
8 | def get_args():
9 | parser = argparse.ArgumentParser()
10 | parser.add_argument('--base-dir', type=str)
11 | parser.add_argument('--result-file', type=str)
12 | parser.add_argument('--output-file', type=str)
13 | parser.add_argument('--output-result', type=str)
14 | parser.add_argument('--split', type=str, default='test')
15 | parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
16 | return parser.parse_args()
17 |
18 |
19 | def convert_caps(results):
20 | fakecaps = []
21 | for result in results:
22 | image_id = result['question_id']
23 | caption = result['text']
24 | fakecaps.append({"image_id": int(image_id), "caption": caption})
25 | return fakecaps
26 |
27 |
28 | def get_pred_idx(prediction, choices, options):
29 | """
30 | Get the index (e.g. 2) from the prediction (e.g. 'C')
31 | """
32 | if prediction in options[:len(choices)]:
33 | return options.index(prediction)
34 | else:
35 | return -1
36 | return random.choice(range(len(choices)))
37 |
38 |
39 | if __name__ == "__main__":
40 | args = get_args()
41 |
42 | base_dir = args.base_dir
43 | split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
44 | problems = json.load(open(os.path.join(base_dir, "problems.json")))
45 | predictions = [json.loads(line) for line in open(args.result_file)]
46 | predictions = {pred['question_id']: pred for pred in predictions}
47 | split_problems = {idx: problems[idx] for idx in split_indices}
48 |
49 | results = {'correct': [], 'incorrect': []}
50 | sqa_results = {}
51 | sqa_results['acc'] = None
52 | sqa_results['correct'] = None
53 | sqa_results['count'] = None
54 | sqa_results['results'] = {}
55 | sqa_results['outputs'] = {}
56 |
57 | for prob_id, prob in split_problems.items():
58 | if prob_id not in predictions:
59 | pred = {'text': 'FAILED', 'prompt': 'Unknown'}
60 | pred_text = 'FAILED'
61 | else:
62 | pred = predictions[prob_id]
63 | pred_text = pred['text']
64 |
65 | if pred_text in args.options:
66 | answer = pred_text
67 | elif len(pred_text) >= 3 and pred_text[0] in args.options and pred_text[1:3] == ". ":
68 | answer = pred_text[0]
69 | else:
70 | pattern = re.compile(r'The answer is ([A-Z]).')
71 | res = pattern.findall(pred_text)
72 | if len(res) == 1:
73 | answer = res[0] # 'A', 'B', ...
74 | else:
75 | answer = "FAILED"
76 |
77 | pred_idx = get_pred_idx(answer, prob['choices'], args.options)
78 |
79 | analysis = {
80 | 'question_id': prob_id,
81 | 'parsed_ans': answer,
82 | 'ground_truth': args.options[prob['answer']],
83 | 'question': pred['prompt'],
84 | 'pred': pred_text,
85 |             'is_multimodal': '<image>' in pred['prompt'],
86 | }
87 |
88 | sqa_results['results'][prob_id] = get_pred_idx(answer, prob['choices'], args.options)
89 | sqa_results['outputs'][prob_id] = pred_text
90 |
91 | if pred_idx == prob['answer']:
92 | results['correct'].append(analysis)
93 | else:
94 | results['incorrect'].append(analysis)
95 |
96 | correct = len(results['correct'])
97 | total = len(results['correct']) + len(results['incorrect'])
98 |
99 | ###### IMG ######
100 | multimodal_correct = len([x for x in results['correct'] if x['is_multimodal']])
101 | multimodal_incorrect = len([x for x in results['incorrect'] if x['is_multimodal']])
102 | multimodal_total = multimodal_correct + multimodal_incorrect
103 | ###### IMG ######
104 |
105 | print(f'Total: {total}, Correct: {correct}, Accuracy: {correct / total * 100:.2f}%, IMG-Accuracy: {multimodal_correct / multimodal_total * 100:.2f}%')
106 |
107 | sqa_results['acc'] = correct / total * 100
108 | sqa_results['correct'] = correct
109 | sqa_results['count'] = total
110 |
111 | with open(args.output_file, 'w') as f:
112 | json.dump(results, f, indent=2)
113 | with open(args.output_result, 'w') as f:
114 | json.dump(sqa_results, f, indent=2)
115 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_science_qa_gpt4.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import re
5 | import random
6 | from collections import defaultdict
7 |
8 |
9 | def get_args():
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('--base-dir', type=str)
12 | parser.add_argument('--gpt4-result', type=str)
13 | parser.add_argument('--our-result', type=str)
14 | parser.add_argument('--split', type=str, default='test')
15 | parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
16 | return parser.parse_args()
17 |
18 |
19 | def convert_caps(results):
20 | fakecaps = []
21 | for result in results:
22 | image_id = result['question_id']
23 | caption = result['text']
24 | fakecaps.append({"image_id": int(image_id), "caption": caption})
25 | return fakecaps
26 |
27 |
28 | def get_pred_idx(prediction, choices, options):
29 | """
30 | Get the index (e.g. 2) from the prediction (e.g. 'C')
31 | """
32 | if prediction in options[:len(choices)]:
33 | return options.index(prediction)
34 | else:
35 | return random.choice(range(len(choices)))
36 |
37 |
38 | if __name__ == "__main__":
39 | args = get_args()
40 |
41 | base_dir = args.base_dir
42 | split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
43 | problems = json.load(open(os.path.join(base_dir, "problems.json")))
44 | our_predictions = [json.loads(line) for line in open(args.our_result)]
45 | our_predictions = {pred['question_id']: pred for pred in our_predictions}
46 | split_problems = {idx: problems[idx] for idx in split_indices}
47 |
48 | gpt4_predictions = json.load(open(args.gpt4_result))['outputs']
49 |
50 | results = defaultdict(lambda: 0)
51 |
52 | for prob_id, prob in split_problems.items():
53 | if prob_id not in our_predictions:
54 | continue
55 | if prob_id not in gpt4_predictions:
56 | continue
57 | our_pred = our_predictions[prob_id]['text']
58 | gpt4_pred = gpt4_predictions[prob_id]
59 |
60 | pattern = re.compile(r'The answer is ([A-Z]).')
61 | our_res = pattern.findall(our_pred)
62 | if len(our_res) == 1:
63 | our_answer = our_res[0] # 'A', 'B', ...
64 | else:
65 | our_answer = "FAILED"
66 | gpt4_res = pattern.findall(gpt4_pred)
67 | if len(gpt4_res) == 1:
68 | gpt4_answer = gpt4_res[0] # 'A', 'B', ...
69 | else:
70 | gpt4_answer = "FAILED"
71 |
72 | our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options)
73 | gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options)
74 |
75 | if gpt4_answer == 'FAILED':
76 | results['gpt4_failed'] += 1
77 | # continue
78 | gpt4_pred_idx = our_pred_idx
79 | # if our_pred_idx != prob['answer']:
80 | # print(our_predictions[prob_id]['prompt'])
81 | # print('-----------------')
82 | # print(f'LECTURE: {prob["lecture"]}')
83 | # print(f'SOLUTION: {prob["solution"]}')
84 | # print('=====================')
85 | else:
86 | # continue
87 | pass
88 | # gpt4_pred_idx = our_pred_idx
89 |
90 | if gpt4_pred_idx == prob['answer']:
91 | results['correct'] += 1
92 | else:
93 | results['incorrect'] += 1
94 |
95 |
96 | if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']:
97 | results['correct_upperbound'] += 1
98 |
99 | correct = results['correct']
100 | total = results['correct'] + results['incorrect']
101 | print(f'Total: {total}, Correct: {correct}, Accuracy: {correct / total * 100:.2f}%')
102 | print(f'Total: {total}, Correct (upper): {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%')
103 | print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%')
104 |
105 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_science_qa_gpt4_requery.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import re
5 | import random
6 | from collections import defaultdict
7 |
8 |
9 | def get_args():
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('--base-dir', type=str)
12 | parser.add_argument('--gpt4-result', type=str)
13 | parser.add_argument('--requery-result', type=str)
14 | parser.add_argument('--our-result', type=str)
15 | parser.add_argument('--output-result', type=str)
16 | parser.add_argument('--split', type=str, default='test')
17 | parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
18 | return parser.parse_args()
19 |
20 |
21 | def convert_caps(results):
22 | fakecaps = []
23 | for result in results:
24 | image_id = result['question_id']
25 | caption = result['text']
26 | fakecaps.append({"image_id": int(image_id), "caption": caption})
27 | return fakecaps
28 |
29 |
30 | def get_pred_idx(prediction, choices, options):
31 | """
32 | Get the index (e.g. 2) from the prediction (e.g. 'C')
33 | """
34 | if prediction in options[:len(choices)]:
35 | return options.index(prediction)
36 | else:
37 | return random.choice(range(len(choices)))
38 |
39 |
40 | if __name__ == "__main__":
41 | args = get_args()
42 |
43 | base_dir = args.base_dir
44 | split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
45 | problems = json.load(open(os.path.join(base_dir, "problems.json")))
46 | our_predictions = [json.loads(line) for line in open(args.our_result)]
47 | our_predictions = {pred['question_id']: pred for pred in our_predictions}
48 | split_problems = {idx: problems[idx] for idx in split_indices}
49 |
50 | requery_predictions = [json.loads(line) for line in open(args.requery_result)]
51 | requery_predictions = {pred['question_id']: pred for pred in requery_predictions}
52 |
53 | gpt4_predictions = json.load(open(args.gpt4_result))['outputs']
54 |
55 | results = defaultdict(lambda: 0)
56 |
57 | sqa_results = {}
58 | sqa_results['acc'] = None
59 | sqa_results['correct'] = None
60 | sqa_results['count'] = None
61 | sqa_results['results'] = {}
62 | sqa_results['outputs'] = {}
63 |
64 | for prob_id, prob in split_problems.items():
65 | if prob_id not in our_predictions:
66 | assert False
67 | if prob_id not in gpt4_predictions:
68 | assert False
69 | our_pred = our_predictions[prob_id]['text']
70 | gpt4_pred = gpt4_predictions[prob_id]
71 | if prob_id not in requery_predictions:
72 | results['missing_requery'] += 1
73 | requery_pred = "MISSING"
74 | else:
75 | requery_pred = requery_predictions[prob_id]['text']
76 |
77 | pattern = re.compile(r'The answer is ([A-Z]).')
78 | our_res = pattern.findall(our_pred)
79 | if len(our_res) == 1:
80 | our_answer = our_res[0] # 'A', 'B', ...
81 | else:
82 | our_answer = "FAILED"
83 |
84 | requery_res = pattern.findall(requery_pred)
85 | if len(requery_res) == 1:
86 | requery_answer = requery_res[0] # 'A', 'B', ...
87 | else:
88 | requery_answer = "FAILED"
89 |
90 | gpt4_res = pattern.findall(gpt4_pred)
91 | if len(gpt4_res) == 1:
92 | gpt4_answer = gpt4_res[0] # 'A', 'B', ...
93 | else:
94 | gpt4_answer = "FAILED"
95 |
96 | our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options)
97 | gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options)
98 | requery_pred_idx = get_pred_idx(requery_answer, prob['choices'], args.options)
99 |
100 | results['total'] += 1
101 |
102 | if gpt4_answer == 'FAILED':
103 | results['gpt4_failed'] += 1
104 | if gpt4_pred_idx == prob['answer']:
105 | results['gpt4_correct'] += 1
106 | if our_pred_idx == prob['answer']:
107 | results['gpt4_ourvisual_correct'] += 1
108 | elif gpt4_pred_idx == prob['answer']:
109 | results['gpt4_correct'] += 1
110 | results['gpt4_ourvisual_correct'] += 1
111 |
112 | if our_pred_idx == prob['answer']:
113 | results['our_correct'] += 1
114 |
115 | if requery_answer == 'FAILED':
116 | sqa_results['results'][prob_id] = our_pred_idx
117 | if our_pred_idx == prob['answer']:
118 | results['requery_correct'] += 1
119 | else:
120 | sqa_results['results'][prob_id] = requery_pred_idx
121 | if requery_pred_idx == prob['answer']:
122 | results['requery_correct'] += 1
123 | else:
124 | print(f"""
125 | Question ({args.options[prob['answer']]}): {our_predictions[prob_id]['prompt']}
126 | Our ({our_answer}): {our_pred}
127 | GPT-4 ({gpt4_answer}): {gpt4_pred}
128 | Requery ({requery_answer}): {requery_pred}
129 | =====================================
130 | """)
131 |
132 | if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']:
133 | results['correct_upperbound'] += 1
134 |
135 | total = results['total']
136 | print(f'Total: {total}, Our-Correct: {results["our_correct"]}, Accuracy: {results["our_correct"] / total * 100:.2f}%')
137 | print(f'Total: {total}, GPT-4-Correct: {results["gpt4_correct"]}, Accuracy: {results["gpt4_correct"] / total * 100:.2f}%')
138 | print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%')
139 | print(f'Total: {total}, GPT-4-OursVisual-Correct: {results["gpt4_ourvisual_correct"]}, Accuracy: {results["gpt4_ourvisual_correct"] / total * 100:.2f}%')
140 | print(f'Total: {total}, Requery-Correct: {results["requery_correct"]}, Accuracy: {results["requery_correct"] / total * 100:.2f}%')
141 | print(f'Total: {total}, Correct upper: {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%')
142 |
143 | sqa_results['acc'] = results["requery_correct"] / total * 100
144 | sqa_results['correct'] = results["requery_correct"]
145 | sqa_results['count'] = total
146 |
147 | with open(args.output_result, 'w') as f:
148 | json.dump(sqa_results, f, indent=2)
149 |
150 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/eval_textvqa.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 | import json
4 | import re
5 |
6 | from llava.eval.m4c_evaluator import TextVQAAccuracyEvaluator
7 |
8 |
9 | def get_args():
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('--annotation-file', type=str)
12 | parser.add_argument('--result-file', type=str)
13 | parser.add_argument('--result-dir', type=str)
14 | return parser.parse_args()
15 |
16 |
17 | def prompt_processor(prompt):
18 | if prompt.startswith('OCR tokens: '):
19 | pattern = r"Question: (.*?) Short answer:"
20 | match = re.search(pattern, prompt, re.DOTALL)
21 | question = match.group(1)
22 | elif 'Reference OCR token: ' in prompt and len(prompt.split('\n')) == 3:
23 | if prompt.startswith('Reference OCR token:'):
24 | question = prompt.split('\n')[1]
25 | else:
26 | question = prompt.split('\n')[0]
27 | elif len(prompt.split('\n')) == 2:
28 | question = prompt.split('\n')[0]
29 | else:
30 | assert False
31 |
32 | return question.lower()
33 |
34 |
35 | def eval_single(annotation_file, result_file):
36 | experiment_name = os.path.splitext(os.path.basename(result_file))[0]
37 | print(experiment_name)
38 | annotations = json.load(open(annotation_file))['data']
39 | annotations = {(annotation['image_id'], annotation['question'].lower()): annotation for annotation in annotations}
40 | results = [json.loads(line) for line in open(result_file)]
41 |
42 | pred_list = []
43 | for result in results:
44 | annotation = annotations[(result['question_id'], prompt_processor(result['prompt']))]
45 | pred_list.append({
46 | "pred_answer": result['text'],
47 | "gt_answers": annotation['answers'],
48 | })
49 |
50 | evaluator = TextVQAAccuracyEvaluator()
51 | print('Samples: {}\nAccuracy: {:.2f}%\n'.format(len(pred_list), 100. * evaluator.eval_pred_list(pred_list)))
52 |
53 |
54 | if __name__ == "__main__":
55 | args = get_args()
56 |
57 | if args.result_file is not None:
58 | eval_single(args.annotation_file, args.result_file)
59 |
60 | if args.result_dir is not None:
61 | for result_file in sorted(os.listdir(args.result_dir)):
62 | if not result_file.endswith('.jsonl'):
63 | print(f'Skipping {result_file}')
64 | continue
65 | eval_single(args.annotation_file, os.path.join(args.result_dir, result_file))
66 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/generate_webpage_data_from_table.py:
--------------------------------------------------------------------------------
1 | """Generate json file for webpage."""
2 | import json
3 | import os
4 | import re
5 |
6 | # models = ['llama', 'alpaca', 'gpt35', 'bard']
7 | models = ['vicuna']
8 |
9 |
10 | def read_jsonl(path: str, key: str=None):
11 | data = []
12 | with open(os.path.expanduser(path)) as f:
13 | for line in f:
14 | if not line:
15 | continue
16 | data.append(json.loads(line))
17 | if key is not None:
18 | data.sort(key=lambda x: x[key])
19 | data = {item[key]: item for item in data}
20 | return data
21 |
22 |
23 | def trim_hanging_lines(s: str, n: int) -> str:
24 | s = s.strip()
25 | for _ in range(n):
26 | s = s.split('\n', 1)[1].strip()
27 | return s
28 |
29 |
30 | if __name__ == '__main__':
31 | questions = read_jsonl('table/question.jsonl', key='question_id')
32 |
33 | # alpaca_answers = read_jsonl('table/answer/answer_alpaca-13b.jsonl', key='question_id')
34 | # bard_answers = read_jsonl('table/answer/answer_bard.jsonl', key='question_id')
35 | # gpt35_answers = read_jsonl('table/answer/answer_gpt35.jsonl', key='question_id')
36 | # llama_answers = read_jsonl('table/answer/answer_llama-13b.jsonl', key='question_id')
37 | vicuna_answers = read_jsonl('table/answer/answer_vicuna-13b.jsonl', key='question_id')
38 | ours_answers = read_jsonl('table/results/llama-13b-hf-alpaca.jsonl', key='question_id')
39 |
40 | review_vicuna = read_jsonl('table/review/review_vicuna-13b_llama-13b-hf-alpaca.jsonl', key='question_id')
41 | # review_alpaca = read_jsonl('table/review/review_alpaca-13b_vicuna-13b.jsonl', key='question_id')
42 | # review_bard = read_jsonl('table/review/review_bard_vicuna-13b.jsonl', key='question_id')
43 | # review_gpt35 = read_jsonl('table/review/review_gpt35_vicuna-13b.jsonl', key='question_id')
44 | # review_llama = read_jsonl('table/review/review_llama-13b_vicuna-13b.jsonl', key='question_id')
45 |
46 | records = []
47 | for qid in questions.keys():
48 | r = {
49 | 'id': qid,
50 | 'category': questions[qid]['category'],
51 | 'question': questions[qid]['text'],
52 | 'answers': {
53 | # 'alpaca': alpaca_answers[qid]['text'],
54 | # 'llama': llama_answers[qid]['text'],
55 | # 'bard': bard_answers[qid]['text'],
56 | # 'gpt35': gpt35_answers[qid]['text'],
57 | 'vicuna': vicuna_answers[qid]['text'],
58 | 'ours': ours_answers[qid]['text'],
59 | },
60 | 'evaluations': {
61 | # 'alpaca': review_alpaca[qid]['text'],
62 | # 'llama': review_llama[qid]['text'],
63 | # 'bard': review_bard[qid]['text'],
64 | 'vicuna': review_vicuna[qid]['content'],
65 | # 'gpt35': review_gpt35[qid]['text'],
66 | },
67 | 'scores': {
68 | 'vicuna': review_vicuna[qid]['tuple'],
69 | # 'alpaca': review_alpaca[qid]['score'],
70 | # 'llama': review_llama[qid]['score'],
71 | # 'bard': review_bard[qid]['score'],
72 | # 'gpt35': review_gpt35[qid]['score'],
73 | },
74 | }
75 |
76 | # cleanup data
77 | cleaned_evals = {}
78 | for k, v in r['evaluations'].items():
79 | v = v.strip()
80 | lines = v.split('\n')
81 | # trim the first line if it's a pair of numbers
82 | if re.match(r'\d+[, ]+\d+', lines[0]):
83 | lines = lines[1:]
84 | v = '\n'.join(lines)
85 | cleaned_evals[k] = v.replace('Assistant 1', "**Assistant 1**").replace('Assistant 2', '**Assistant 2**')
86 |
87 | r['evaluations'] = cleaned_evals
88 | records.append(r)
89 |
90 | # Reorder the records, this is optional
91 | for r in records:
92 | if r['id'] <= 20:
93 | r['id'] += 60
94 | else:
95 | r['id'] -= 20
96 | for r in records:
97 | if r['id'] <= 50:
98 | r['id'] += 10
99 | elif 50 < r['id'] <= 60:
100 | r['id'] -= 50
101 | for r in records:
102 | if r['id'] == 7:
103 | r['id'] = 1
104 | elif r['id'] < 7:
105 | r['id'] += 1
106 |
107 | records.sort(key=lambda x: x['id'])
108 |
109 | # Write to file
110 | with open('webpage/data.json', 'w') as f:
111 | json.dump({'questions': records, 'models': models}, f, indent=2)
112 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/model_qa.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | from transformers import AutoTokenizer, AutoModelForCausalLM, StoppingCriteria
3 | import torch
4 | import os
5 | import json
6 | from tqdm import tqdm
7 | import shortuuid
8 |
9 | from llava.conversation import default_conversation
10 | from llava.utils import disable_torch_init
11 |
12 |
13 | @torch.inference_mode()
14 | def eval_model(model_name, questions_file, answers_file):
15 | # Model
16 | disable_torch_init()
17 | model_name = os.path.expanduser(model_name)
18 | tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
19 | model = AutoModelForCausalLM.from_pretrained(model_name,
20 | torch_dtype=torch.float16).cuda()
21 |
22 |
23 | ques_file = open(os.path.expanduser(questions_file), "r")
24 | ans_file = open(os.path.expanduser(answers_file), "w")
25 | for i, line in enumerate(tqdm(ques_file)):
26 | idx = json.loads(line)["question_id"]
27 | qs = json.loads(line)["text"]
28 | cat = json.loads(line)["category"]
29 | conv = default_conversation.copy()
30 | conv.append_message(conv.roles[0], qs)
31 | prompt = conv.get_prompt()
32 | inputs = tokenizer([prompt])
33 | input_ids = torch.as_tensor(inputs.input_ids).cuda()
34 | output_ids = model.generate(
35 | input_ids,
36 | do_sample=True,
37 | use_cache=True,
38 | temperature=0.7,
39 | max_new_tokens=1024,)
40 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
41 | try:
42 | index = outputs.index(conv.sep, len(prompt))
43 | except ValueError:
44 | outputs += conv.sep
45 | index = outputs.index(conv.sep, len(prompt))
46 |
47 | outputs = outputs[len(prompt) + len(conv.roles[1]) + 2:index].strip()
48 | ans_id = shortuuid.uuid()
49 | ans_file.write(json.dumps({"question_id": idx,
50 | "text": outputs,
51 | "answer_id": ans_id,
52 | "model_id": model_name,
53 | "metadata": {}}) + "\n")
54 | ans_file.flush()
55 | ans_file.close()
56 |
57 | if __name__ == "__main__":
58 | parser = argparse.ArgumentParser()
59 | parser.add_argument("--model-name", type=str, default="facebook/opt-350m")
60 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
61 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
62 | args = parser.parse_args()
63 |
64 | eval_model(args.model_name, args.question_file, args.answers_file)
65 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/model_vqa.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | from tqdm import tqdm
6 | import shortuuid
7 |
8 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9 | from llava.conversation import conv_templates, SeparatorStyle
10 | from llava.model.builder import load_pretrained_model
11 | from llava.utils import disable_torch_init
12 | from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
13 |
14 | from PIL import Image
15 | import math
16 |
17 |
18 | def split_list(lst, n):
19 | """Split a list into n (roughly) equal-sized chunks"""
20 |     chunk_size = math.ceil(len(lst) / n)  # ceiling division so every element is covered
21 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
22 |
23 |
24 | def get_chunk(lst, n, k):
25 | chunks = split_list(lst, n)
26 | return chunks[k]
27 |
28 |
29 | def eval_model(args):
30 | # Model
31 | disable_torch_init()
32 | model_path = os.path.expanduser(args.model_path)
33 | model_name = get_model_name_from_path(model_path)
34 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
35 |
36 | questions = [json.loads(q) for q in open(os.path.expanduser(args.question_file), "r")]
37 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
38 | answers_file = os.path.expanduser(args.answers_file)
39 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
40 | ans_file = open(answers_file, "w")
41 | for line in tqdm(questions):
42 | idx = line["question_id"]
43 | image_file = line["image"]
44 | qs = line["text"]
45 | cur_prompt = qs
46 | if model.config.mm_use_im_start_end:
47 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
48 | else:
49 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
50 |
51 | conv = conv_templates[args.conv_mode].copy()
52 | conv.append_message(conv.roles[0], qs)
53 | conv.append_message(conv.roles[1], None)
54 | prompt = conv.get_prompt()
55 |
56 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
57 |
58 | image = Image.open(os.path.join(args.image_folder, image_file)).convert('RGB')
59 | image_tensor = process_images([image], image_processor, model.config)[0]
60 |
61 | with torch.inference_mode():
62 | output_ids = model.generate(
63 | input_ids,
64 | images=image_tensor.unsqueeze(0).half().cuda(),
65 | image_sizes=[image.size],
66 | do_sample=True if args.temperature > 0 else False,
67 | temperature=args.temperature,
68 | top_p=args.top_p,
69 | num_beams=args.num_beams,
70 | # no_repeat_ngram_size=3,
71 | max_new_tokens=1024,
72 | use_cache=True)
73 |
74 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
75 |
76 | ans_id = shortuuid.uuid()
77 | ans_file.write(json.dumps({"question_id": idx,
78 | "prompt": cur_prompt,
79 | "text": outputs,
80 | "answer_id": ans_id,
81 | "model_id": model_name,
82 | "metadata": {}}) + "\n")
83 | ans_file.flush()
84 | ans_file.close()
85 |
86 | if __name__ == "__main__":
87 | parser = argparse.ArgumentParser()
88 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
89 | parser.add_argument("--model-base", type=str, default=None)
90 | parser.add_argument("--image-folder", type=str, default="")
91 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
92 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
93 | parser.add_argument("--conv-mode", type=str, default="llava_v1")
94 | parser.add_argument("--num-chunks", type=int, default=1)
95 | parser.add_argument("--chunk-idx", type=int, default=0)
96 | parser.add_argument("--temperature", type=float, default=0.2)
97 | parser.add_argument("--top_p", type=float, default=None)
98 | parser.add_argument("--num_beams", type=int, default=1)
99 | args = parser.parse_args()
100 |
101 | eval_model(args)
102 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/model_vqa_loader.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | from tqdm import tqdm
6 | import shortuuid
7 |
8 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9 | from llava.conversation import conv_templates, SeparatorStyle
10 | from llava.model.builder import load_pretrained_model
11 | from llava.utils import disable_torch_init
12 | from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
13 | from torch.utils.data import Dataset, DataLoader
14 |
15 | from PIL import Image
16 | import math
17 |
18 |
19 | def split_list(lst, n):
20 | """Split a list into n (roughly) equal-sized chunks"""
21 |     chunk_size = math.ceil(len(lst) / n)  # ceiling division so every element is covered
22 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
23 |
24 |
25 | def get_chunk(lst, n, k):
26 | chunks = split_list(lst, n)
27 | return chunks[k]
28 |
29 |
30 | # Custom dataset class
31 | class CustomDataset(Dataset):
32 | def __init__(self, questions, image_folder, tokenizer, image_processor, model_config):
33 | self.questions = questions
34 | self.image_folder = image_folder
35 | self.tokenizer = tokenizer
36 | self.image_processor = image_processor
37 | self.model_config = model_config
38 |
39 | def __getitem__(self, index):
40 | line = self.questions[index]
41 | image_file = line["image"]
42 | qs = line["text"]
43 | if self.model_config.mm_use_im_start_end:
44 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
45 | else:
46 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
47 |
48 | conv = conv_templates[args.conv_mode].copy()
49 | conv.append_message(conv.roles[0], qs)
50 | conv.append_message(conv.roles[1], None)
51 | prompt = conv.get_prompt()
52 |
53 | image = Image.open(os.path.join(self.image_folder, image_file)).convert('RGB')
54 | image_tensor = process_images([image], self.image_processor, self.model_config)[0]
55 |
56 | input_ids = tokenizer_image_token(prompt, self.tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt')
57 |
58 | return input_ids, image_tensor, image.size
59 |
60 | def __len__(self):
61 | return len(self.questions)
62 |
63 |
64 | def collate_fn(batch):
65 | input_ids, image_tensors, image_sizes = zip(*batch)
66 | input_ids = torch.stack(input_ids, dim=0)
67 | image_tensors = torch.stack(image_tensors, dim=0)
68 | return input_ids, image_tensors, image_sizes
69 |
70 |
71 | # DataLoader
72 | def create_data_loader(questions, image_folder, tokenizer, image_processor, model_config, batch_size=1, num_workers=4):
73 | assert batch_size == 1, "batch_size must be 1"
74 | dataset = CustomDataset(questions, image_folder, tokenizer, image_processor, model_config)
75 | data_loader = DataLoader(dataset, batch_size=batch_size, num_workers=num_workers, shuffle=False, collate_fn=collate_fn)
76 | return data_loader
77 |
78 |
79 | def eval_model(args):
80 | # Model
81 | disable_torch_init()
82 | model_path = os.path.expanduser(args.model_path)
83 | model_name = get_model_name_from_path(model_path)
84 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
85 |
86 | questions = [json.loads(q) for q in open(os.path.expanduser(args.question_file), "r")]
87 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
88 | answers_file = os.path.expanduser(args.answers_file)
89 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
90 | ans_file = open(answers_file, "w")
91 |
92 | if 'plain' in model_name and 'finetune' not in model_name.lower() and 'mmtag' not in args.conv_mode:
93 | args.conv_mode = args.conv_mode + '_mmtag'
94 | print(f'It seems that this is a plain model, but it is not using a mmtag prompt, auto switching to {args.conv_mode}.')
95 |
96 | data_loader = create_data_loader(questions, args.image_folder, tokenizer, image_processor, model.config)
97 |
98 | for (input_ids, image_tensor, image_sizes), line in tqdm(zip(data_loader, questions), total=len(questions)):
99 | idx = line["question_id"]
100 | cur_prompt = line["text"]
101 |
102 | input_ids = input_ids.to(device='cuda', non_blocking=True)
103 |
104 | with torch.inference_mode():
105 | output_ids = model.generate(
106 | input_ids,
107 | images=image_tensor.to(dtype=torch.float16, device='cuda', non_blocking=True),
108 | image_sizes=image_sizes,
109 | do_sample=True if args.temperature > 0 else False,
110 | temperature=args.temperature,
111 | top_p=args.top_p,
112 | num_beams=args.num_beams,
113 | max_new_tokens=args.max_new_tokens,
114 | use_cache=True)
115 |
116 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
117 |
118 | ans_id = shortuuid.uuid()
119 | ans_file.write(json.dumps({"question_id": idx,
120 | "prompt": cur_prompt,
121 | "text": outputs,
122 | "answer_id": ans_id,
123 | "model_id": model_name,
124 | "metadata": {}}) + "\n")
125 | # ans_file.flush()
126 | ans_file.close()
127 |
128 | if __name__ == "__main__":
129 | parser = argparse.ArgumentParser()
130 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
131 | parser.add_argument("--model-base", type=str, default=None)
132 | parser.add_argument("--image-folder", type=str, default="")
133 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
134 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
135 | parser.add_argument("--conv-mode", type=str, default="llava_v1")
136 | parser.add_argument("--num-chunks", type=int, default=1)
137 | parser.add_argument("--chunk-idx", type=int, default=0)
138 | parser.add_argument("--temperature", type=float, default=0.2)
139 | parser.add_argument("--top_p", type=float, default=None)
140 | parser.add_argument("--num_beams", type=int, default=1)
141 | parser.add_argument("--max_new_tokens", type=int, default=128)
142 | args = parser.parse_args()
143 |
144 | eval_model(args)
145 |
--------------------------------------------------------------------------------
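A minimal sketch of driving the evaluation loop in `model_vqa_loader.py` above in-process rather than from the command line. The field names mirror the script's own argparse options; every path value is a placeholder, and the `llava` package (rooted at `model/forgery_analyst`) is assumed to be importable.

```python
# Sketch only: programmatic equivalent of the CLI entry point above (placeholder paths).
from types import SimpleNamespace

import llava.eval.model_vqa_loader as vqa_loader  # assumes model/forgery_analyst is on PYTHONPATH

args = SimpleNamespace(
    model_path="path/to/forgery-analyst-checkpoint",  # placeholder
    model_base=None,
    image_folder="path/to/images",                    # placeholder
    question_file="tables/question.jsonl",
    answers_file="answer.jsonl",
    conv_mode="llava_v1",
    num_chunks=1,
    chunk_idx=0,
    temperature=0.2,
    top_p=None,
    num_beams=1,
    max_new_tokens=128,
)

vqa_loader.args = args         # CustomDataset reads the module-level `args.conv_mode`
# vqa_loader.eval_model(args)  # uncomment on a CUDA machine with the checkpoint available
```

Answers are appended to `answers_file` as JSON lines, one record per question.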
/model/forgery_analyst/llava/eval/model_vqa_mmbench.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | import pandas as pd
6 | from tqdm import tqdm
7 | import shortuuid
8 |
9 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
10 | from llava.conversation import conv_templates, SeparatorStyle
11 | from llava.model.builder import load_pretrained_model
12 | from llava.utils import disable_torch_init
13 | from llava.mm_utils import tokenizer_image_token, process_images, load_image_from_base64, get_model_name_from_path
14 |
15 | from PIL import Image
16 | import math
17 |
18 |
19 | all_options = ['A', 'B', 'C', 'D']
20 |
21 |
22 | def split_list(lst, n):
23 | """Split a list into n (roughly) equal-sized chunks"""
24 | chunk_size = math.ceil(len(lst) / n)  # ceiling division so every item is covered
25 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
26 |
27 |
28 | def get_chunk(lst, n, k):
29 | chunks = split_list(lst, n)
30 | return chunks[k]
31 |
32 |
33 | def is_none(value):
34 | if value is None:
35 | return True
36 | if type(value) is float and math.isnan(value):
37 | return True
38 | if type(value) is str and value.lower() == 'nan':
39 | return True
40 | if type(value) is str and value.lower() == 'none':
41 | return True
42 | return False
43 |
44 | def get_options(row, options):
45 | parsed_options = []
46 | for option in options:
47 | option_value = row[option]
48 | if is_none(option_value):
49 | break
50 | parsed_options.append(option_value)
51 | return parsed_options
52 |
53 |
54 | def eval_model(args):
55 | # Model
56 | disable_torch_init()
57 | model_path = os.path.expanduser(args.model_path)
58 | model_name = get_model_name_from_path(model_path)
59 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
60 |
61 | questions = pd.read_table(os.path.expanduser(args.question_file))
62 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
63 | answers_file = os.path.expanduser(args.answers_file)
64 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
65 | ans_file = open(answers_file, "w")
66 |
67 | if 'plain' in model_name and 'finetune' not in model_name.lower() and 'mmtag' not in args.conv_mode:
68 | args.conv_mode = args.conv_mode + '_mmtag'
69 | print(f'It seems that this is a plain model, but it is not using a mmtag prompt, auto switching to {args.conv_mode}.')
70 |
71 | for index, row in tqdm(questions.iterrows(), total=len(questions)):
72 | options = get_options(row, all_options)
73 | cur_option_char = all_options[:len(options)]
74 |
75 | if args.all_rounds:
76 | num_rounds = len(options)
77 | else:
78 | num_rounds = 1
79 |
80 | for round_idx in range(num_rounds):
81 | idx = row['index']
82 | question = row['question']
83 | hint = row['hint']
84 | image = load_image_from_base64(row['image'])
85 | if not is_none(hint):
86 | question = hint + '\n' + question
87 | for option_char, option in zip(all_options[:len(options)], options):
88 | question = question + '\n' + option_char + '. ' + option
89 | qs = cur_prompt = question
90 | if model.config.mm_use_im_start_end:
91 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
92 | else:
93 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
94 |
95 | if args.single_pred_prompt:
96 | if args.lang == 'cn':
97 | qs = qs + '\n' + "请直接回答选项字母。"
98 | else:
99 | qs = qs + '\n' + "Answer with the option's letter from the given choices directly."
100 |
101 | conv = conv_templates[args.conv_mode].copy()
102 | conv.append_message(conv.roles[0], qs)
103 | conv.append_message(conv.roles[1], None)
104 | prompt = conv.get_prompt()
105 |
106 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
107 |
108 | image_tensor = process_images([image], image_processor, model.config)[0]
109 |
110 | with torch.inference_mode():
111 | output_ids = model.generate(
112 | input_ids,
113 | images=image_tensor.unsqueeze(0).half().cuda(),
114 | image_sizes=[image.size],
115 | do_sample=True if args.temperature > 0 else False,
116 | temperature=args.temperature,
117 | top_p=args.top_p,
118 | num_beams=args.num_beams,
119 | # no_repeat_ngram_size=3,
120 | max_new_tokens=1024,
121 | use_cache=True)
122 |
123 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
124 |
125 | ans_id = shortuuid.uuid()
126 | ans_file.write(json.dumps({"question_id": idx,
127 | "round_id": round_idx,
128 | "prompt": cur_prompt,
129 | "text": outputs,
130 | "options": options,
131 | "option_char": cur_option_char,
132 | "answer_id": ans_id,
133 | "model_id": model_name,
134 | "metadata": {}}) + "\n")
135 | ans_file.flush()
136 |
137 | # rotate options
138 | options = options[1:] + options[:1]
139 | cur_option_char = cur_option_char[1:] + cur_option_char[:1]
140 | ans_file.close()
141 |
142 | if __name__ == "__main__":
143 | parser = argparse.ArgumentParser()
144 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
145 | parser.add_argument("--model-base", type=str, default=None)
146 | parser.add_argument("--image-folder", type=str, default="")
147 | parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
148 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
149 | parser.add_argument("--conv-mode", type=str, default="llava_v1")
150 | parser.add_argument("--num-chunks", type=int, default=1)
151 | parser.add_argument("--chunk-idx", type=int, default=0)
152 | parser.add_argument("--temperature", type=float, default=0.2)
153 | parser.add_argument("--top_p", type=float, default=None)
154 | parser.add_argument("--num_beams", type=int, default=1)
155 | parser.add_argument("--all-rounds", action="store_true")
156 | parser.add_argument("--single-pred-prompt", action="store_true")
157 | parser.add_argument("--lang", type=str, default="en")
158 | args = parser.parse_args()
159 |
160 | eval_model(args)
161 |
--------------------------------------------------------------------------------
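When `--all-rounds` is set, the script above re-asks each question once per option, circularly shifting the option texts (and the bookkeeping `option_char` list written to the answers file) between rounds while the prompt letters stay in A, B, C order. A small self-contained sketch of that rotation, with made-up option texts:

```python
# Rotation used by the --all-rounds loop above (option texts here are made up).
all_options = ['A', 'B', 'C', 'D']
options = ["a cat", "a dog", "a bird"]
cur_option_char = all_options[:len(options)]

for round_idx in range(len(options)):
    prompt_choices = [f"{c}. {o}" for c, o in zip(all_options[:len(options)], options)]
    print(round_idx, prompt_choices, cur_option_char)
    options = options[1:] + options[:1]                          # rotate option texts
    cur_option_char = cur_option_char[1:] + cur_option_char[:1]  # keep the letter mapping
```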
/model/forgery_analyst/llava/eval/model_vqa_science.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 | import os
4 | import json
5 | from tqdm import tqdm
6 | import shortuuid
7 |
8 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9 | from llava.conversation import conv_templates, SeparatorStyle
10 | from llava.model.builder import load_pretrained_model
11 | from llava.utils import disable_torch_init
12 | from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
13 |
14 | from PIL import Image
15 | import math
16 |
17 |
18 | def split_list(lst, n):
19 | """Split a list into n (roughly) equal-sized chunks"""
20 | chunk_size = math.ceil(len(lst) / n)  # ceiling division so every item is covered
21 | return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
22 |
23 |
24 | def get_chunk(lst, n, k):
25 | chunks = split_list(lst, n)
26 | return chunks[k]
27 |
28 |
29 | def eval_model(args):
30 | # Model
31 | disable_torch_init()
32 | model_path = os.path.expanduser(args.model_path)
33 | model_name = get_model_name_from_path(model_path)
34 | tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
35 |
36 | questions = json.load(open(os.path.expanduser(args.question_file), "r"))
37 | questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
38 | answers_file = os.path.expanduser(args.answers_file)
39 | os.makedirs(os.path.dirname(answers_file), exist_ok=True)
40 | ans_file = open(answers_file, "w")
41 | for i, line in enumerate(tqdm(questions)):
42 | idx = line["id"]
43 | question = line['conversations'][0]
44 | qs = question['value'].replace('<image>', '').strip()
45 | cur_prompt = qs
46 |
47 | if 'image' in line:
48 | image_file = line["image"]
49 | image = Image.open(os.path.join(args.image_folder, image_file))
50 | image_tensor = process_images([image], image_processor, model.config)[0]
51 | images = image_tensor.unsqueeze(0).half().cuda()
52 | image_sizes = [image.size]
53 | if getattr(model.config, 'mm_use_im_start_end', False):
54 | qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
55 | else:
56 | qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
57 | cur_prompt = '<image>' + '\n' + cur_prompt
58 | else:
59 | images = None
60 | image_sizes = None
61 |
62 | if args.single_pred_prompt:
63 | qs = qs + '\n' + "Answer with the option's letter from the given choices directly."
64 | cur_prompt = cur_prompt + '\n' + "Answer with the option's letter from the given choices directly."
65 |
66 | conv = conv_templates[args.conv_mode].copy()
67 | conv.append_message(conv.roles[0], qs)
68 | conv.append_message(conv.roles[1], None)
69 | prompt = conv.get_prompt()
70 |
71 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
72 |
73 | with torch.inference_mode():
74 | output_ids = model.generate(
75 | input_ids,
76 | images=images,
77 | image_sizes=image_sizes,
78 | do_sample=True if args.temperature > 0 else False,
79 | temperature=args.temperature,
80 | max_new_tokens=1024,
81 | use_cache=True,
82 | )
83 |
84 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
85 |
86 | ans_id = shortuuid.uuid()
87 | ans_file.write(json.dumps({"question_id": idx,
88 | "prompt": cur_prompt,
89 | "text": outputs,
90 | "answer_id": ans_id,
91 | "model_id": model_name,
92 | "metadata": {}}) + "\n")
93 | ans_file.flush()
94 | ans_file.close()
95 |
96 | if __name__ == "__main__":
97 | parser = argparse.ArgumentParser()
98 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
99 | parser.add_argument("--model-base", type=str, default=None)
100 | parser.add_argument("--image-folder", type=str, default="")
101 | parser.add_argument("--question-file", type=str, default="tables/question.json")
102 | parser.add_argument("--answers-file", type=str, default="answer.jsonl")
103 | parser.add_argument("--conv-mode", type=str, default="llava_v0")
104 | parser.add_argument("--num-chunks", type=int, default=1)
105 | parser.add_argument("--chunk-idx", type=int, default=0)
106 | parser.add_argument("--temperature", type=float, default=0.2)
107 | parser.add_argument("--answer-prompter", action="store_true")
108 | parser.add_argument("--single-pred-prompt", action="store_true")
109 | args = parser.parse_args()
110 |
111 | eval_model(args)
112 |
--------------------------------------------------------------------------------
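The script above expects a ScienceQA-style JSON list. Only `id`, the optional `image`, and `conversations[0]['value']` are read here; the entry below is a hand-written illustration of that shape (the `from` field follows the usual LLaVA conversation format and is not used by this script).

```python
# Illustrative question entry for model_vqa_science.py (all values are placeholders).
example_question = {
    "id": "12345",
    "image": "12345/image.png",   # optional; text-only questions omit this key
    "conversations": [
        {
            "from": "human",      # conventional LLaVA field; this script only reads "value"
            "value": "<image>\nWhich option best describes the image?\nA. ...\nB. ...",
        }
    ],
}
```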
/model/forgery_analyst/llava/eval/qa_baseline_gpt35.py:
--------------------------------------------------------------------------------
1 | """Generate answers with GPT-3.5"""
2 | # Note: you need to be using OpenAI Python v0.27.0 for the code below to work
3 | import argparse
4 | import json
5 | import os
6 | import time
7 | import concurrent.futures
8 |
9 | import openai
10 | import tqdm
11 | import shortuuid
12 |
13 | MODEL = 'gpt-3.5-turbo'
14 | MODEL_ID = 'gpt-3.5-turbo:20230327'
15 |
16 | def get_answer(question_id: int, question: str, max_tokens: int):
17 | ans = {
18 | 'answer_id': shortuuid.uuid(),
19 | 'question_id': question_id,
20 | 'model_id': MODEL_ID,
21 | }
22 | for _ in range(3):
23 | try:
24 | response = openai.ChatCompletion.create(
25 | model=MODEL,
26 | messages=[{
27 | 'role': 'system',
28 | 'content': 'You are a helpful assistant.'
29 | }, {
30 | 'role': 'user',
31 | 'content': question,
32 | }],
33 | max_tokens=max_tokens,
34 | )
35 | ans['text'] = response['choices'][0]['message']['content']
36 | return ans
37 | except Exception as e:
38 | print('[ERROR]', e)
39 | ans['text'] = '#ERROR#'
40 | time.sleep(1)
41 | return ans
42 |
43 |
44 | if __name__ == '__main__':
45 | parser = argparse.ArgumentParser(description='ChatGPT answer generation.')
46 | parser.add_argument('-q', '--question')
47 | parser.add_argument('-o', '--output')
48 | parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
49 | args = parser.parse_args()
50 |
51 | questions_dict = {}
52 | with open(os.path.expanduser(args.question)) as f:
53 | for line in f:
54 | if not line:
55 | continue
56 | q = json.loads(line)
57 | questions_dict[q['question_id']] = q['text']
58 |
59 | answers = []
60 |
61 | with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
62 | futures = []
63 | for qid, question in questions_dict.items():
64 | future = executor.submit(get_answer, qid, question, args.max_tokens)
65 | futures.append(future)
66 |
67 | for future in tqdm.tqdm(concurrent.futures.as_completed(futures), total=len(futures)):
68 | answers.append(future.result())
69 |
70 | answers.sort(key=lambda x: x['question_id'])
71 |
72 | with open(os.path.expanduser(args.output), 'w') as f:
73 | table = [json.dumps(ans) for ans in answers]
74 | f.write('\n'.join(table))
75 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/run_llava.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 |
4 | from llava.constants import (
5 | IMAGE_TOKEN_INDEX,
6 | DEFAULT_IMAGE_TOKEN,
7 | DEFAULT_IM_START_TOKEN,
8 | DEFAULT_IM_END_TOKEN,
9 | IMAGE_PLACEHOLDER,
10 | )
11 | from llava.conversation import conv_templates, SeparatorStyle
12 | from llava.model.builder import load_pretrained_model
13 | from llava.utils import disable_torch_init
14 | from llava.mm_utils import (
15 | process_images,
16 | tokenizer_image_token,
17 | get_model_name_from_path,
18 | )
19 |
20 | from PIL import Image
21 |
22 | import requests
23 | from PIL import Image
24 | from io import BytesIO
25 | import re
26 |
27 |
28 | def image_parser(args):
29 | out = args.image_file.split(args.sep)
30 | return out
31 |
32 |
33 | def load_image(image_file):
34 | if image_file.startswith("http") or image_file.startswith("https"):
35 | response = requests.get(image_file)
36 | image = Image.open(BytesIO(response.content)).convert("RGB")
37 | else:
38 | image = Image.open(image_file).convert("RGB")
39 | return image
40 |
41 |
42 | def load_images(image_files):
43 | out = []
44 | for image_file in image_files:
45 | image = load_image(image_file)
46 | out.append(image)
47 | return out
48 |
49 |
50 | def eval_model(args):
51 | # Model
52 | disable_torch_init()
53 |
54 | model_name = get_model_name_from_path(args.model_path)
55 | tokenizer, model, image_processor, context_len = load_pretrained_model(
56 | args.model_path, args.model_base, model_name
57 | )
58 |
59 | qs = args.query
60 | image_token_se = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN
61 | if IMAGE_PLACEHOLDER in qs:
62 | if model.config.mm_use_im_start_end:
63 | qs = re.sub(IMAGE_PLACEHOLDER, image_token_se, qs)
64 | else:
65 | qs = re.sub(IMAGE_PLACEHOLDER, DEFAULT_IMAGE_TOKEN, qs)
66 | else:
67 | if model.config.mm_use_im_start_end:
68 | qs = image_token_se + "\n" + qs
69 | else:
70 | qs = DEFAULT_IMAGE_TOKEN + "\n" + qs
71 |
72 | if "llama-2" in model_name.lower():
73 | conv_mode = "llava_llama_2"
74 | elif "mistral" in model_name.lower():
75 | conv_mode = "mistral_instruct"
76 | elif "v1.6-34b" in model_name.lower():
77 | conv_mode = "chatml_direct"
78 | elif "v1" in model_name.lower():
79 | conv_mode = "llava_v1"
80 | elif "mpt" in model_name.lower():
81 | conv_mode = "mpt"
82 | else:
83 | conv_mode = "llava_v0"
84 |
85 | if args.conv_mode is not None and conv_mode != args.conv_mode:
86 | print(
87 | "[WARNING] the auto inferred conversation mode is {}, while `--conv-mode` is {}, using {}".format(
88 | conv_mode, args.conv_mode, args.conv_mode
89 | )
90 | )
91 | else:
92 | args.conv_mode = conv_mode
93 |
94 | conv = conv_templates[args.conv_mode].copy()
95 | conv.append_message(conv.roles[0], qs)
96 | conv.append_message(conv.roles[1], None)
97 | prompt = conv.get_prompt()
98 |
99 | image_files = image_parser(args)
100 | images = load_images(image_files)
101 | image_sizes = [x.size for x in images]
102 | images_tensor = process_images(
103 | images,
104 | image_processor,
105 | model.config
106 | ).to(model.device, dtype=torch.float16)
107 |
108 | input_ids = (
109 | tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
110 | .unsqueeze(0)
111 | .cuda()
112 | )
113 |
114 | with torch.inference_mode():
115 | output_ids = model.generate(
116 | input_ids,
117 | images=images_tensor,
118 | image_sizes=image_sizes,
119 | do_sample=True if args.temperature > 0 else False,
120 | temperature=args.temperature,
121 | top_p=args.top_p,
122 | num_beams=args.num_beams,
123 | max_new_tokens=args.max_new_tokens,
124 | use_cache=True,
125 | )
126 |
127 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
128 | print(outputs)
129 |
130 |
131 | if __name__ == "__main__":
132 | parser = argparse.ArgumentParser()
133 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
134 | parser.add_argument("--model-base", type=str, default=None)
135 | parser.add_argument("--image-file", type=str, required=True)
136 | parser.add_argument("--query", type=str, required=True)
137 | parser.add_argument("--conv-mode", type=str, default=None)
138 | parser.add_argument("--sep", type=str, default=",")
139 | parser.add_argument("--temperature", type=float, default=0.2)
140 | parser.add_argument("--top_p", type=float, default=None)
141 | parser.add_argument("--num_beams", type=int, default=1)
142 | parser.add_argument("--max_new_tokens", type=int, default=512)
143 | args = parser.parse_args()
144 |
145 | eval_model(args)
146 |
--------------------------------------------------------------------------------
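`run_llava.py` is the single-query entry point; a minimal in-process sketch follows. All paths and the query text are placeholders, `llava` is assumed importable, and `conv_mode=None` lets the script infer the conversation template from the model name as in the code above.

```python
# Sketch only: single-image query via run_llava.eval_model (placeholder paths and query).
from types import SimpleNamespace

from llava.eval.run_llava import eval_model  # assumes model/forgery_analyst is on PYTHONPATH

args = SimpleNamespace(
    model_path="path/to/forgery-analyst-checkpoint",  # placeholder
    model_base=None,
    image_file="path/to/suspect_image.jpg",           # placeholder; join several paths with `sep`
    query="Describe any visual inconsistencies you notice in this image.",
    conv_mode=None,                                   # inferred from the model name
    sep=",",
    temperature=0.2,
    top_p=None,
    num_beams=1,
    max_new_tokens=512,
)

# eval_model(args)  # uncomment on a CUDA machine; the answer is printed to stdout
```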
/model/forgery_analyst/llava/eval/summarize_gpt_review.py:
--------------------------------------------------------------------------------
1 | import json
2 | import os
3 | from collections import defaultdict
4 |
5 | import numpy as np
6 |
7 | import argparse
8 |
9 | def parse_args():
10 | parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
11 | parser.add_argument('-d', '--dir', default=None)
12 | parser.add_argument('-v', '--version', default=None)
13 | parser.add_argument('-s', '--select', nargs='*', default=None)
14 | parser.add_argument('-f', '--files', nargs='*', default=[])
15 | parser.add_argument('-i', '--ignore', nargs='*', default=[])
16 | return parser.parse_args()
17 |
18 |
19 | if __name__ == '__main__':
20 | args = parse_args()
21 |
22 | if args.ignore is not None:
23 | args.ignore = [int(x) for x in args.ignore]
24 |
25 | if len(args.files) > 0:
26 | review_files = args.files
27 | else:
28 | review_files = [x for x in os.listdir(args.dir) if x.endswith('.jsonl') and (x.startswith('gpt4_text') or x.startswith('reviews_') or x.startswith('review_') or 'review' in args.dir)]
29 |
30 | for review_file in sorted(review_files):
31 | config = os.path.basename(review_file).replace('gpt4_text_', '').replace('.jsonl', '')
32 | if args.select is not None and any(x not in config for x in args.select):
33 | continue
34 | if '0613' in config:
35 | version = '0613'
36 | else:
37 | version = '0314'
38 | if args.version is not None and args.version != version:
39 | continue
40 | scores = defaultdict(list)
41 | print(config)
42 | with open(os.path.join(args.dir, review_file) if args.dir is not None else review_file) as f:
43 | for review_str in f:
44 | review = json.loads(review_str)
45 | if review['question_id'] in args.ignore:
46 | continue
47 | if 'category' in review:
48 | scores[review['category']].append(review['tuple'])
49 | scores['all'].append(review['tuple'])
50 | else:
51 | if 'tuple' in review:
52 | scores['all'].append(review['tuple'])
53 | else:
54 | scores['all'].append(review['score'])
55 | for k, v in sorted(scores.items()):
56 | stats = np.asarray(v).mean(0).tolist()
57 | stats = [round(x, 3) for x in stats]
58 | # print(k, stats, round(stats[1]/stats[0]*100, 1))
59 | print(k, round(stats[1]/stats[0]*100, 1), round(stats[0] * 10, 1), round(stats[1] * 10, 1))
60 | print('=================================')
61 |
--------------------------------------------------------------------------------
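Each review line consumed above carries a `tuple` of two GPT-4 scores (assistant 1, assistant 2). The per-category summary line is the relative score of assistant 2 plus both means rescaled to a 0-100 range, as in this small sketch:

```python
# Mirrors the aggregation in summarize_gpt_review.py above.
import numpy as np

scores = [(8, 7), (9, 9), (7, 8)]              # example (assistant_1, assistant_2) tuples
stats = [round(x, 3) for x in np.asarray(scores).mean(0).tolist()]
print(round(stats[1] / stats[0] * 100, 1),     # assistant 2 relative to assistant 1 (%)
      round(stats[0] * 10, 1),                 # assistant 1 mean on a 0-100 scale
      round(stats[1] * 10, 1))                 # assistant 2 mean on a 0-100 scale
```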
/model/forgery_analyst/llava/eval/table/model.jsonl:
--------------------------------------------------------------------------------
1 | {"model_id": "vicuna-13b:20230322-clean-lang", "model_name": "vicuna-13b", "model_version": "20230322-clean-lang", "model_metadata": "vicuna-13b-20230322-clean-lang"}
2 | {"model_id": "alpaca-13b:v1", "model_name": "alpaca-13b", "model_version": "v1", "model_metadata": "alpaca-13b"}
3 | {"model_id": "llama-13b:v1", "model_name": "llama-13b", "model_version": "v1", "model_metadata": "hf-llama-13b"}
4 | {"model_id": "bard:20230327", "model_name": "bard", "model_version": "20230327", "model_metadata": "Google Bard 20230327"}
5 | {"model_id": "gpt-3.5-turbo:20230327", "model_name": "gpt-3.5-turbo", "model_version": "20230327", "model_metadata": "OpenAI ChatGPT gpt-3.5-turbo Chat Completion"}
6 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/table/prompt.jsonl:
--------------------------------------------------------------------------------
1 | {"prompt_id": 1, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above.\nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."}, "description": "Prompt for general questions"}
2 | {"prompt_id": 2, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "Your task is to evaluate the coding abilities of the above two assistants. They have been asked to implement a program to solve a given problem. Please review their code submissions, paying close attention to their problem-solving approach, code structure, readability, and the inclusion of helpful comments.\n\nPlease ensure that the assistants' submissions:\n\n1. Correctly implement the given problem statement.\n2. Contain accurate and efficient code.\n3. Include clear and concise comments that explain the code's logic and functionality.\n4. Adhere to proper coding standards and best practices.\n\nOnce you have carefully reviewed both submissions, provide detailed feedback on their strengths and weaknesses, along with any suggestions for improvement. You should first output a single line containing two scores on the scale of 1-10 (1: no code/no sense; 10: perfect) for Assistant 1 and 2, respectively. Then give extra comments starting from the next line."}, "description": "Prompt for coding questions"}
3 | {"prompt_id": 3, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "We would like to request your feedback on the mathematical proficiency of two AI assistants regarding the given user question.\nFirstly, please solve the problem independently, without referring to the answers provided by Assistant 1 and Assistant 2.\nAfterward, please examine the problem-solving process of Assistant 1 and Assistant 2 step-by-step to ensure their correctness, identifying any incorrect steps if present. Your evaluation should take into account not only the answer but also the problem-solving steps.\nFinally, please output a Python tuple containing two numerical scores for Assistant 1 and Assistant 2, ranging from 1 to 10, respectively. If applicable, explain the reasons for any variations in their scores and determine which assistant performed better."}, "description": "Prompt for math questions"}
4 | {"prompt_id": 4, "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer.", "prompt_template": "[Visual Context]\n{context}\n[Question]\n{question}\n\n[Assistant 1]\n{answer_1}\n\n[End of Assistant 1]\n\n[Assistant 2]\n{answer_2}\n\n[End of Assistant 2]\n\n[System]\n{prompt}\n\n", "defaults": {"prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."}, "description": "Prompt for visual questions"}
5 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/table/reviewer.jsonl:
--------------------------------------------------------------------------------
1 | {"reviewer_id": "gpt-4-0328-default", "prompt_id": 1, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for general questions"}
2 | {"reviewer_id": "gpt-4-0328-coding", "prompt_id": 2, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for coding questions"}
3 | {"reviewer_id": "gpt-4-0328-math", "prompt_id": 3, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for math questions"}
4 | {"reviewer_id": "gpt-4-0417-visual", "prompt_id": 4, "metadata": {"temperature": 0.2, "max_tokens": 1024}, "description": "GPT-4 for math questions"}
5 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/table/rule.json:
--------------------------------------------------------------------------------
1 | {
2 | "coding": {"role": "Assistant", "prompt": "Your task is to evaluate the coding abilities of the above two assistants. They have been asked to implement a program to solve a given problem. Please review their code submissions, paying close attention to their problem-solving approach, code structure, readability, and the inclusion of helpful comments.\n\nPlease ensure that the assistants' submissions:\n\n1. Correctly implement the given problem statement.\n2. Contain accurate and efficient code.\n3. Include clear and concise comments that explain the code's logic and functionality.\n4. Adhere to proper coding standards and best practices.\n\nOnce you have carefully reviewed both submissions, provide detailed feedback on their strengths and weaknesses, along with any suggestions for improvement. You should first output a single line containing two scores on the scale of 1-10 (1: no code/no sense; 10: perfect) for Assistant 1 and 2, respectively. Then give extra comments starting from the next line."},
3 | "math": {"role": "Assistant", "prompt": "We would like to request your feedback on the mathematical proficiency of two AI assistants regarding the given user question.\nFirstly, please solve the problem independently, without referring to the answers provided by Assistant 1 and Assistant 2.\nAfterward, please examine the problem-solving process of Assistant 1 and Assistant 2 step-by-step to ensure their correctness, identifying any incorrect steps if present. Your evaluation should take into account not only the answer but also the problem-solving steps.\nFinally, please output a Python tuple containing two numerical scores for Assistant 1 and Assistant 2, ranging from 1 to 10, respectively. If applicable, explain the reasons for any variations in their scores and determine which assistant performed better."},
4 | "default": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above.\nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
5 | "conv": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
6 | "detail": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
7 | "complex": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with five descriptive sentences describing the same image and the bounding box coordinates of each object in the scene. These coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
8 | "llava_bench_conv": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with a few sentences describing the image. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
9 | "llava_bench_detail": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with a few sentences describing the image. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."},
10 | "llava_bench_complex": {"role": "Assistant", "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. The user asks the question on observing an image. For your reference, the visual content in the image is represented with a few sentences describing the image. \nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space.\nIn the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."}
11 | }
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/figures/alpaca.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/eval/webpage/figures/alpaca.png
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/figures/bard.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/eval/webpage/figures/bard.jpg
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/figures/chatgpt.svg:
--------------------------------------------------------------------------------
[SVG markup stripped in the text dump; no content recoverable.]
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/figures/llama.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/eval/webpage/figures/llama.jpg
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/figures/swords_FILL0_wght300_GRAD0_opsz48.svg:
--------------------------------------------------------------------------------
[SVG markup stripped in the text dump; no content recoverable.]
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/figures/vicuna.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/eval/webpage/figures/vicuna.jpeg
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/index.html:
--------------------------------------------------------------------------------
[The HTML markup of this page was stripped in the text dump; only fragments survive. Recoverable content: the page title "Who's GPT-4's favorite? Battles between State-of-the-Art Chatbots" and the footer note "This website is co-authored with GPT-4."]
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/eval/webpage/styles.css:
--------------------------------------------------------------------------------
1 | body {
2 | font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
3 | background-color: #f8f9fa;
4 | }
5 |
6 | .navbar-dark .navbar-nav .nav-link {
7 | color: #f1cf68;
8 | font-size: 1.1rem;
9 | padding: 0.5rem 0.6rem;
10 | }
11 |
12 | .card-header {
13 | font-weight: bold;
14 | }
15 |
16 | .card {
17 | box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
18 | transition: 0.3s;
19 | }
20 |
21 | .card:hover {
22 | box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
23 | }
24 |
25 | button {
26 | transition: background-color 0.3s;
27 | }
28 |
29 | button:hover {
30 | background-color: #007bff;
31 | }
32 |
33 | @media (max-width: 767px) {
34 | .form-row .form-group {
35 | margin-bottom: 10px;
36 | }
37 | }
38 |
39 | /* Extra styles */
40 |
41 | .expandable-card .card-text-container {
42 | max-height: 200px;
43 | overflow-y: hidden;
44 | position: relative;
45 | }
46 |
47 | .expandable-card.expanded .card-text-container {
48 | max-height: none;
49 | }
50 |
51 | .expand-btn {
52 | position: relative;
53 | display: none;
54 | background-color: rgba(255, 255, 255, 0.8);
55 | color: #510c75;
56 | border-color: transparent;
57 | }
58 |
59 | .expand-btn:hover {
60 | background-color: rgba(200, 200, 200, 0.8);
61 | text-decoration: none;
62 | border-color: transparent;
63 | color: #510c75;
64 | }
65 |
66 | .expand-btn:focus {
67 | outline: none;
68 | text-decoration: none;
69 | }
70 |
71 | .expandable-card:not(.expanded) .card-text-container:after {
72 | content: "";
73 | position: absolute;
74 | bottom: 0;
75 | left: 0;
76 | width: 100%;
77 | height: 90px;
78 | background: linear-gradient(rgba(255, 255, 255, 0.2), rgba(255, 255, 255, 1));
79 | }
80 |
81 | .expandable-card:not(.expanded) .expand-btn {
82 | margin-top: -40px;
83 | }
84 |
85 | .card-body {
86 | padding-bottom: 5px;
87 | }
88 |
89 | .vertical-flex-layout {
90 | justify-content: center;
91 | align-items: center;
92 | height: 100%;
93 | display: flex;
94 | flex-direction: column;
95 | gap: 5px;
96 | }
97 |
98 | .figure-img {
99 | max-width: 100%;
100 | height: auto;
101 | }
102 |
103 | .adjustable-font-size {
104 | font-size: calc(0.5rem + 2vw);
105 | }
106 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/mm_utils.py:
--------------------------------------------------------------------------------
1 | from PIL import Image
2 | from io import BytesIO
3 | import base64
4 | import torch
5 | import math
6 | import ast
7 |
8 | from transformers import StoppingCriteria
9 | from llava.constants import IMAGE_TOKEN_INDEX
10 |
11 |
12 | def select_best_resolution(original_size, possible_resolutions):
13 | """
14 | Selects the best resolution from a list of possible resolutions based on the original size.
15 |
16 | Args:
17 | original_size (tuple): The original size of the image in the format (width, height).
18 | possible_resolutions (list): A list of possible resolutions in the format [(width1, height1), (width2, height2), ...].
19 |
20 | Returns:
21 | tuple: The best fit resolution in the format (width, height).
22 | """
23 | original_width, original_height = original_size
24 | best_fit = None
25 | max_effective_resolution = 0
26 | min_wasted_resolution = float('inf')
27 |
28 | for width, height in possible_resolutions:
29 | scale = min(width / original_width, height / original_height)
30 | downscaled_width, downscaled_height = int(original_width * scale), int(original_height * scale)
31 | effective_resolution = min(downscaled_width * downscaled_height, original_width * original_height)
32 | wasted_resolution = (width * height) - effective_resolution
33 |
34 | if effective_resolution > max_effective_resolution or (effective_resolution == max_effective_resolution and wasted_resolution < min_wasted_resolution):
35 | max_effective_resolution = effective_resolution
36 | min_wasted_resolution = wasted_resolution
37 | best_fit = (width, height)
38 |
39 | return best_fit
40 |
41 |
42 | def resize_and_pad_image(image, target_resolution):
43 | """
44 | Resize and pad an image to a target resolution while maintaining aspect ratio.
45 |
46 | Args:
47 | image (PIL.Image.Image): The input image.
48 | target_resolution (tuple): The target resolution (width, height) of the image.
49 |
50 | Returns:
51 | PIL.Image.Image: The resized and padded image.
52 | """
53 | original_width, original_height = image.size
54 | target_width, target_height = target_resolution
55 |
56 | scale_w = target_width / original_width
57 | scale_h = target_height / original_height
58 |
59 | if scale_w < scale_h:
60 | new_width = target_width
61 | new_height = min(math.ceil(original_height * scale_w), target_height)
62 | else:
63 | new_height = target_height
64 | new_width = min(math.ceil(original_width * scale_h), target_width)
65 |
66 | # Resize the image
67 | resized_image = image.resize((new_width, new_height))
68 |
69 | new_image = Image.new('RGB', (target_width, target_height), (0, 0, 0))
70 | paste_x = (target_width - new_width) // 2
71 | paste_y = (target_height - new_height) // 2
72 | new_image.paste(resized_image, (paste_x, paste_y))
73 |
74 | return new_image
75 |
76 |
77 | def divide_to_patches(image, patch_size):
78 | """
79 | Divides an image into patches of a specified size.
80 |
81 | Args:
82 | image (PIL.Image.Image): The input image.
83 | patch_size (int): The size of each patch.
84 |
85 | Returns:
86 | list: A list of PIL.Image.Image objects representing the patches.
87 | """
88 | patches = []
89 | width, height = image.size
90 | for i in range(0, height, patch_size):
91 | for j in range(0, width, patch_size):
92 | box = (j, i, j + patch_size, i + patch_size)
93 | patch = image.crop(box)
94 | patches.append(patch)
95 |
96 | return patches
97 |
98 |
99 | def get_anyres_image_grid_shape(image_size, grid_pinpoints, patch_size):
100 | """
101 | Calculate the shape of the image patch grid after the preprocessing for images of any resolution.
102 |
103 | Args:
104 | image_size (tuple): The size of the input image in the format (width, height).
105 | grid_pinpoints (str): A string representation of a list of possible resolutions.
106 | patch_size (int): The size of each image patch.
107 |
108 | Returns:
109 | tuple: The shape of the image patch grid in the format (width, height).
110 | """
111 | if type(grid_pinpoints) is list:
112 | possible_resolutions = grid_pinpoints
113 | else:
114 | possible_resolutions = ast.literal_eval(grid_pinpoints)
115 | width, height = select_best_resolution(image_size, possible_resolutions)
116 | return width // patch_size, height // patch_size
117 |
118 |
119 | def process_anyres_image(image, processor, grid_pinpoints):
120 | """
121 | Process an image with variable resolutions.
122 |
123 | Args:
124 | image (PIL.Image.Image): The input image to be processed.
125 | processor: The image processor object.
126 | grid_pinpoints (str): A string representation of a list of possible resolutions.
127 |
128 | Returns:
129 | torch.Tensor: A tensor containing the processed image patches.
130 | """
131 | if type(grid_pinpoints) is list:
132 | possible_resolutions = grid_pinpoints
133 | else:
134 | possible_resolutions = ast.literal_eval(grid_pinpoints)
135 | best_resolution = select_best_resolution(image.size, possible_resolutions)
136 | image_padded = resize_and_pad_image(image, best_resolution)
137 |
138 | patches = divide_to_patches(image_padded, processor.crop_size['height'])
139 |
140 | image_original_resize = image.resize((processor.size['shortest_edge'], processor.size['shortest_edge']))
141 |
142 | image_patches = [image_original_resize] + patches
143 | image_patches = [processor.preprocess(image_patch, return_tensors='pt')['pixel_values'][0]
144 | for image_patch in image_patches]
145 | return torch.stack(image_patches, dim=0)
146 |
147 |
148 | def load_image_from_base64(image):
149 | return Image.open(BytesIO(base64.b64decode(image)))
150 |
151 |
152 | def expand2square(pil_img, background_color):
153 | width, height = pil_img.size
154 | if width == height:
155 | return pil_img
156 | elif width > height:
157 | result = Image.new(pil_img.mode, (width, width), background_color)
158 | result.paste(pil_img, (0, (width - height) // 2))
159 | return result
160 | else:
161 | result = Image.new(pil_img.mode, (height, height), background_color)
162 | result.paste(pil_img, ((height - width) // 2, 0))
163 | return result
164 |
165 |
166 | def process_images(images, image_processor, model_cfg):
167 | image_aspect_ratio = getattr(model_cfg, "image_aspect_ratio", None)
168 | new_images = []
169 | if image_aspect_ratio == 'pad':
170 | for image in images:
171 | image = expand2square(image, tuple(int(x*255) for x in image_processor.image_mean))
172 | image = image_processor.preprocess(image, return_tensors='pt')['pixel_values'][0]
173 | new_images.append(image)
174 | elif image_aspect_ratio == "anyres":
175 | for image in images:
176 | image = process_anyres_image(image, image_processor, model_cfg.image_grid_pinpoints)
177 | new_images.append(image)
178 | else:
179 | return image_processor(images, return_tensors='pt')['pixel_values']
180 | if all(x.shape == new_images[0].shape for x in new_images):
181 | new_images = torch.stack(new_images, dim=0)
182 | return new_images
183 |
184 |
185 | def tokenizer_image_token(prompt, tokenizer, image_token_index=IMAGE_TOKEN_INDEX, return_tensors=None):
186 | prompt_chunks = [tokenizer(chunk).input_ids for chunk in prompt.split('<image>')]
187 |
188 | def insert_separator(X, sep):
189 | return [ele for sublist in zip(X, [sep]*len(X)) for ele in sublist][:-1]
190 |
191 | input_ids = []
192 | offset = 0
193 | if len(prompt_chunks) > 0 and len(prompt_chunks[0]) > 0 and prompt_chunks[0][0] == tokenizer.bos_token_id:
194 | offset = 1
195 | input_ids.append(prompt_chunks[0][0])
196 |
197 | for x in insert_separator(prompt_chunks, [image_token_index] * (offset + 1)):
198 | input_ids.extend(x[offset:])
199 |
200 | if return_tensors is not None:
201 | if return_tensors == 'pt':
202 | return torch.tensor(input_ids, dtype=torch.long)
203 | raise ValueError(f'Unsupported tensor type: {return_tensors}')
204 | return input_ids
205 |
206 |
207 | def get_model_name_from_path(model_path):
208 | model_path = model_path.strip("/")
209 | model_paths = model_path.split("/")
210 | if model_paths[-1].startswith('checkpoint-'):
211 | return model_paths[-2] + "_" + model_paths[-1]
212 | else:
213 | return model_paths[-1]
214 |
215 | class KeywordsStoppingCriteria(StoppingCriteria):
216 | def __init__(self, keywords, tokenizer, input_ids):
217 | self.keywords = keywords
218 | self.keyword_ids = []
219 | self.max_keyword_len = 0
220 | for keyword in keywords:
221 | cur_keyword_ids = tokenizer(keyword).input_ids
222 | if len(cur_keyword_ids) > 1 and cur_keyword_ids[0] == tokenizer.bos_token_id:
223 | cur_keyword_ids = cur_keyword_ids[1:]
224 | if len(cur_keyword_ids) > self.max_keyword_len:
225 | self.max_keyword_len = len(cur_keyword_ids)
226 | self.keyword_ids.append(torch.tensor(cur_keyword_ids))
227 | self.tokenizer = tokenizer
228 | self.start_len = input_ids.shape[1]
229 |
230 | def call_for_batch(self, output_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
231 | offset = min(output_ids.shape[1] - self.start_len, self.max_keyword_len)
232 | self.keyword_ids = [keyword_id.to(output_ids.device) for keyword_id in self.keyword_ids]
233 | for keyword_id in self.keyword_ids:
234 | truncated_output_ids = output_ids[0, -keyword_id.shape[0]:]
235 | if torch.equal(truncated_output_ids, keyword_id):
236 | return True
237 | outputs = self.tokenizer.batch_decode(output_ids[:, -offset:], skip_special_tokens=True)[0]
238 | for keyword in self.keywords:
239 | if keyword in outputs:
240 | return True
241 | return False
242 |
243 | def __call__(self, output_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
244 | outputs = []
245 | for i in range(output_ids.shape[0]):
246 | outputs.append(self.call_for_batch(output_ids[i].unsqueeze(0), scores))
247 | return all(outputs)
248 |
--------------------------------------------------------------------------------
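The anyres helpers in `mm_utils.py` are plain PIL/arithmetic functions and can be exercised on their own. A small sketch, assuming the packages from `requirements.txt` are installed and `llava` is importable; the grid list is an illustrative `image_grid_pinpoints` value, not read from any config:

```python
# Exercises select_best_resolution / resize_and_pad_image / divide_to_patches from above.
from PIL import Image

from llava.mm_utils import divide_to_patches, resize_and_pad_image, select_best_resolution

img = Image.new("RGB", (1000, 600), (128, 128, 128))   # stand-in for a real photo
grid = [(336, 672), (672, 336), (672, 672), (1008, 336), (336, 1008)]  # illustrative pinpoints

best = select_best_resolution(img.size, grid)   # keeps the most effective resolution
padded = resize_and_pad_image(img, best)        # aspect-preserving resize, black padding
patches = divide_to_patches(padded, 336)        # 336 mirrors processor.crop_size in process_anyres_image
print(best, padded.size, len(patches))          # -> (672, 672) (672, 672) 4
```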
/model/forgery_analyst/llava/model/__init__.py:
--------------------------------------------------------------------------------
1 | from .language_model.llava_llama import LlavaLlamaForCausalLM, LlavaConfig
2 | # from .language_model.llava_mpt import LlavaMptForCausalLM, LlavaMptConfig
3 | # from .language_model.llava_mistral import LlavaMistralForCausalLM, LlavaMistralConfig
4 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/__pycache__/__init__.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/__pycache__/__init__.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/__pycache__/builder.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/__pycache__/builder.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/__pycache__/llava_arch.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/__pycache__/llava_arch.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/apply_delta.py:
--------------------------------------------------------------------------------
1 | """
2 | Usage:
3 | python3 -m fastchat.model.apply_delta --base ~/model_weights/llama-7b --target ~/model_weights/vicuna-7b --delta lmsys/vicuna-7b-delta
4 | """
5 | import argparse
6 |
7 | import torch
8 | from tqdm import tqdm
9 | from transformers import AutoTokenizer, AutoModelForCausalLM
10 | from llava import LlavaLlamaForCausalLM
11 |
12 |
13 | def apply_delta(base_model_path, target_model_path, delta_path):
14 | print("Loading base model")
15 | base = AutoModelForCausalLM.from_pretrained(
16 | base_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
17 |
18 | print("Loading delta")
19 | delta = LlavaLlamaForCausalLM.from_pretrained(delta_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
20 | delta_tokenizer = AutoTokenizer.from_pretrained(delta_path)
21 |
22 | print("Applying delta")
23 | for name, param in tqdm(delta.state_dict().items(), desc="Applying delta"):
24 | if name not in base.state_dict():
25 | assert name in ['model.mm_projector.weight', 'model.mm_projector.bias'], f'{name} not in base model'
26 | continue
27 | if param.data.shape == base.state_dict()[name].shape:
28 | param.data += base.state_dict()[name]
29 | else:
30 | assert name in ['model.embed_tokens.weight', 'lm_head.weight'], \
31 | f'{name} dimension mismatch: {param.data.shape} vs {base.state_dict()[name].shape}'
32 | bparam = base.state_dict()[name]
33 | param.data[:bparam.shape[0], :bparam.shape[1]] += bparam
34 |
35 | print("Saving target model")
36 | delta.save_pretrained(target_model_path)
37 | delta_tokenizer.save_pretrained(target_model_path)
38 |
39 |
40 | if __name__ == "__main__":
41 | parser = argparse.ArgumentParser()
42 | parser.add_argument("--base-model-path", type=str, required=True)
43 | parser.add_argument("--target-model-path", type=str, required=True)
44 | parser.add_argument("--delta-path", type=str, required=True)
45 |
46 | args = parser.parse_args()
47 |
48 | apply_delta(args.base_model_path, args.target_model_path, args.delta_path)
49 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/builder.py:
--------------------------------------------------------------------------------
1 | # Copyright 2023 Haotian Liu
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 |
16 | import os
17 | import warnings
18 | import shutil
19 |
20 | from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig, BitsAndBytesConfig
21 | import torch
22 | from llava.model import *
23 | from llava.constants import DEFAULT_IMAGE_PATCH_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
24 |
25 |
26 | def load_pretrained_model(model_path, model_base, model_name, load_8bit=False, load_4bit=False, device_map="cuda", device="cuda", use_flash_attn=False, **kwargs):
27 | kwargs = {"device_map": device_map, **kwargs}
28 |
29 | if device != "cuda":
30 | kwargs['device_map'] = {"": device}
31 |
32 | if load_8bit:
33 | kwargs['load_in_8bit'] = True
34 | elif load_4bit:
35 | kwargs['load_in_4bit'] = True
36 | kwargs['quantization_config'] = BitsAndBytesConfig(
37 | load_in_4bit=True,
38 | bnb_4bit_compute_dtype=torch.float16,
39 | bnb_4bit_use_double_quant=True,
40 | bnb_4bit_quant_type='nf4'
41 | )
42 | else:
43 | kwargs['torch_dtype'] = torch.float16
44 |
45 | if use_flash_attn:
46 | kwargs['attn_implementation'] = 'flash_attention_2'
47 |
48 | if 'llava' in model_name.lower():
49 | # Load LLaVA model
50 | if 'lora' in model_name.lower() and model_base is None:
51 | warnings.warn('There is `lora` in model name but no `model_base` is provided. If you are loading a LoRA model, please provide the `model_base` argument. Detailed instruction: https://github.com/haotian-liu/LLaVA#launch-a-model-worker-lora-weights-unmerged.')
52 | if 'lora' in model_name.lower() and model_base is not None:
53 | from llava.model.language_model.llava_llama import LlavaConfig
54 | lora_cfg_pretrained = LlavaConfig.from_pretrained(model_path)
55 | tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
56 | print('Loading LLaVA from base model...')
57 | model = LlavaLlamaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=lora_cfg_pretrained, **kwargs)
58 | token_num, tokem_dim = model.lm_head.out_features, model.lm_head.in_features
59 | if model.lm_head.weight.shape[0] != token_num:
60 | model.lm_head.weight = torch.nn.Parameter(torch.empty(token_num, tokem_dim, device=model.device, dtype=model.dtype))
61 | model.model.embed_tokens.weight = torch.nn.Parameter(torch.empty(token_num, tokem_dim, device=model.device, dtype=model.dtype))
62 |
63 | print('Loading additional LLaVA weights...')
64 | if os.path.exists(os.path.join(model_path, 'non_lora_trainables.bin')):
65 | non_lora_trainables = torch.load(os.path.join(model_path, 'non_lora_trainables.bin'), map_location='cpu')
66 | else:
67 | # this is probably from HF Hub
68 | from huggingface_hub import hf_hub_download
69 | def load_from_hf(repo_id, filename, subfolder=None):
70 | cache_file = hf_hub_download(
71 | repo_id=repo_id,
72 | filename=filename,
73 | subfolder=subfolder)
74 | return torch.load(cache_file, map_location='cpu')
75 | non_lora_trainables = load_from_hf(model_path, 'non_lora_trainables.bin')
76 | non_lora_trainables = {(k[11:] if k.startswith('base_model.') else k): v for k, v in non_lora_trainables.items()}
77 | if any(k.startswith('model.model.') for k in non_lora_trainables):
78 | non_lora_trainables = {(k[6:] if k.startswith('model.') else k): v for k, v in non_lora_trainables.items()}
79 | model.load_state_dict(non_lora_trainables, strict=False)
80 |
81 | from peft import PeftModel
82 | print('Loading LoRA weights...')
83 | model = PeftModel.from_pretrained(model, model_path)
84 | print('Merging LoRA weights...')
85 | model = model.merge_and_unload()
86 | print('Model is loaded...')
87 | elif model_base is not None:
88 | # this may be mm projector only
89 | print('Loading LLaVA from base model...')
90 | if 'mpt' in model_name.lower():
91 | if not os.path.isfile(os.path.join(model_path, 'configuration_mpt.py')):
92 | shutil.copyfile(os.path.join(model_base, 'configuration_mpt.py'), os.path.join(model_path, 'configuration_mpt.py'))
93 | tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=True)
94 | cfg_pretrained = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
95 | model = LlavaMptForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=cfg_pretrained, **kwargs)
96 | else:
97 | tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
98 | cfg_pretrained = AutoConfig.from_pretrained(model_path)
99 | model = LlavaLlamaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=cfg_pretrained, **kwargs)
100 |
101 | mm_projector_weights = torch.load(os.path.join(model_path, 'mm_projector.bin'), map_location='cpu')
102 | mm_projector_weights = {k: v.to(torch.float16) for k, v in mm_projector_weights.items()}
103 | model.load_state_dict(mm_projector_weights, strict=False)
104 | else:
105 | if 'mpt' in model_name.lower():
106 | tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
107 | model = LlavaMptForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
108 | elif 'mistral' in model_name.lower():
109 | tokenizer = AutoTokenizer.from_pretrained(model_path)
110 | model = LlavaMistralForCausalLM.from_pretrained(
111 | model_path,
112 | low_cpu_mem_usage=True,
113 | **kwargs
114 | )
115 | else:
116 | tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
117 | model = LlavaLlamaForCausalLM.from_pretrained(
118 | model_path,
119 | low_cpu_mem_usage=True,
120 | **kwargs
121 | )
122 | else:
123 | # Load language model
124 | if model_base is not None:
125 | # PEFT model
126 | from peft import PeftModel
127 | tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
128 | model = AutoModelForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, **kwargs)
129 | print(f"Loading LoRA weights from {model_path}")
130 | model = PeftModel.from_pretrained(model, model_path)
131 | print(f"Merging weights")
132 | model = model.merge_and_unload()
133 | print('Convert to FP16...')
134 | model.to(torch.float16)
135 | else:
136 | use_fast = False
137 | if 'mpt' in model_name.lower():
138 | tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
139 | model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, trust_remote_code=True, **kwargs)
140 | else:
141 | tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
142 | model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
143 |
144 | image_processor = None
145 |
146 | if 'llava' in model_name.lower():
147 | mm_use_im_start_end = getattr(model.config, "mm_use_im_start_end", False)
148 | mm_use_im_patch_token = getattr(model.config, "mm_use_im_patch_token", True)
149 | if mm_use_im_patch_token:
150 | tokenizer.add_tokens([DEFAULT_IMAGE_PATCH_TOKEN], special_tokens=True)
151 | if mm_use_im_start_end:
152 | tokenizer.add_tokens([DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN], special_tokens=True)
153 | model.resize_token_embeddings(len(tokenizer))
154 |
155 | vision_tower = model.get_vision_tower()
156 | if not vision_tower.is_loaded:
157 | # Zhihao edit: device_map = 'cuda' if device_map is None else device_map
158 | vision_tower.load_model(device_map=device_map)
159 | if device_map != 'auto':
160 | vision_tower.to(device=device_map, dtype=torch.float16)
161 | image_processor = vision_tower.image_processor
162 |
163 | if hasattr(model.config, "max_sequence_length"):
164 | context_len = model.config.max_sequence_length
165 | else:
166 | context_len = 2048
167 |
168 | return tokenizer, model, image_processor, context_len
169 |
--------------------------------------------------------------------------------
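
A minimal usage sketch for `load_pretrained_model` (not a file in this repository; the checkpoint path below is a placeholder, and `model_base` is only needed for LoRA or projector-only checkpoints):

from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

# Placeholder path; point this at an actual LLaVA-style checkpoint directory.
model_path = "/path/to/llava-checkpoint"
model_name = get_model_name_from_path(model_path)

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,        # set to the base LLM path when loading LoRA / mm_projector-only weights
    model_name=model_name,
    load_8bit=False,
    load_4bit=False,
    device="cuda",
)
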
/model/forgery_analyst/llava/model/consolidate.py:
--------------------------------------------------------------------------------
1 | """
2 | Usage:
3 | python3 -m llava.model.consolidate --src ~/model_weights/llava-7b --dst ~/model_weights/llava-7b_consolidate
4 | """
5 | import argparse
6 |
7 | import torch
8 | from transformers import AutoTokenizer, AutoModelForCausalLM
9 | from llava.model import *
10 | from llava.model.utils import auto_upgrade
11 |
12 |
13 | def consolidate_ckpt(src_path, dst_path):
14 | print("Loading model")
15 | auto_upgrade(src_path)
16 | src_model = AutoModelForCausalLM.from_pretrained(src_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
17 | src_tokenizer = AutoTokenizer.from_pretrained(src_path, use_fast=False)
18 | src_model.save_pretrained(dst_path)
19 | src_tokenizer.save_pretrained(dst_path)
20 |
21 |
22 | if __name__ == "__main__":
23 | parser = argparse.ArgumentParser()
24 | parser.add_argument("--src", type=str, required=True)
25 | parser.add_argument("--dst", type=str, required=True)
26 |
27 | args = parser.parse_args()
28 |
29 | consolidate_ckpt(args.src, args.dst)
30 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/language_model/__pycache__/llava_llama.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/language_model/__pycache__/llava_llama.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/language_model/__pycache__/llava_mpt.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/language_model/__pycache__/llava_mpt.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/language_model/llava_llama.py:
--------------------------------------------------------------------------------
1 | # Copyright 2023 Haotian Liu
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 |
16 | from typing import List, Optional, Tuple, Union
17 |
18 | import torch
19 | import torch.nn as nn
20 |
21 | from transformers import AutoConfig, AutoModelForCausalLM, \
22 | LlamaConfig, LlamaModel, LlamaForCausalLM
23 |
24 | from transformers.modeling_outputs import CausalLMOutputWithPast
25 | from transformers.generation.utils import GenerateOutput
26 |
27 | from ..llava_arch import LlavaMetaModel, LlavaMetaForCausalLM
28 |
29 |
30 | class LlavaConfig(LlamaConfig):
31 | model_type = "llava_llama"
32 |
33 |
34 | class LlavaLlamaModel(LlavaMetaModel, LlamaModel):
35 | config_class = LlavaConfig
36 |
37 | def __init__(self, config: LlamaConfig):
38 | super(LlavaLlamaModel, self).__init__(config)
39 |
40 |
41 | class LlavaLlamaForCausalLM(LlamaForCausalLM, LlavaMetaForCausalLM):
42 | config_class = LlavaConfig
43 |
44 | def __init__(self, config):
45 | super(LlamaForCausalLM, self).__init__(config)
46 | self.model = LlavaLlamaModel(config)
47 | self.pretraining_tp = config.pretraining_tp
48 | self.vocab_size = config.vocab_size
49 | self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
50 |
51 | # Initialize weights and apply final processing
52 | self.post_init()
53 |
54 | def get_model(self):
55 | return self.model
56 |
57 | def forward(
58 | self,
59 | input_ids: torch.LongTensor = None,
60 | attention_mask: Optional[torch.Tensor] = None,
61 | position_ids: Optional[torch.LongTensor] = None,
62 | past_key_values: Optional[List[torch.FloatTensor]] = None,
63 | inputs_embeds: Optional[torch.FloatTensor] = None,
64 | labels: Optional[torch.LongTensor] = None,
65 | use_cache: Optional[bool] = None,
66 | output_attentions: Optional[bool] = None,
67 | output_hidden_states: Optional[bool] = None,
68 | images: Optional[torch.FloatTensor] = None,
69 | image_sizes: Optional[List[List[int]]] = None,
70 | return_dict: Optional[bool] = None,
71 | ) -> Union[Tuple, CausalLMOutputWithPast]:
72 |
73 | # if position_ids is None:
74 | # print("position_ids is None")
75 | # else:
76 | # print("position_ids is not None", position_ids.shape)
77 |
78 | # if input_ids is None:
79 | # print("input_ids is None")
80 | # else:
81 | # print("input_ids is not None", input_ids.shape)
82 |
83 | # if inputs_embeds is None:
84 | # print("inputs_embeds is None")
85 | # else:
86 | # print("inputs_embeds is not None", inputs_embeds.shape)
87 |
88 | # if attention_mask is None:
89 | # print("attention_mask is None")
90 | # else:
91 | # print("attention_mask is not None", attention_mask.shape)
92 |
93 | if inputs_embeds is None:
94 | (
95 | input_ids,
96 | position_ids,
97 | attention_mask,
98 | past_key_values,
99 | inputs_embeds,
100 | labels
101 | ) = self.prepare_inputs_labels_for_multimodal(
102 | input_ids,
103 | position_ids,
104 | attention_mask,
105 | past_key_values,
106 | labels,
107 | images,
108 | image_sizes
109 | )
110 |
111 | return super().forward(
112 | input_ids=input_ids,
113 | attention_mask=attention_mask,
114 | position_ids=position_ids,
115 | past_key_values=past_key_values,
116 | inputs_embeds=inputs_embeds,
117 | labels=labels,
118 | use_cache=use_cache,
119 | output_attentions=output_attentions,
120 | output_hidden_states=output_hidden_states,
121 | return_dict=return_dict
122 | )
123 |
124 | @torch.no_grad()
125 | def generate(
126 | self,
127 | inputs: Optional[torch.Tensor] = None,
128 | images: Optional[torch.Tensor] = None,
129 | image_sizes: Optional[torch.Tensor] = None,
130 | **kwargs,
131 | ) -> Union[GenerateOutput, torch.LongTensor]:
132 | position_ids = kwargs.pop("position_ids", None)
133 | attention_mask = kwargs.pop("attention_mask", None)
134 | if "inputs_embeds" in kwargs:
135 | raise NotImplementedError("`inputs_embeds` is not supported")
136 |
137 | if images is not None:
138 | (
139 | inputs,
140 | position_ids,
141 | attention_mask,
142 | _,
143 | inputs_embeds,
144 | _
145 | ) = self.prepare_inputs_labels_for_multimodal(
146 | inputs,
147 | position_ids,
148 | attention_mask,
149 | None,
150 | None,
151 | images,
152 | image_sizes=image_sizes
153 | )
154 | else:
155 | inputs_embeds = self.get_model().embed_tokens(inputs)
156 |
157 | return super().generate(
158 | inputs_embeds=inputs_embeds,
159 | **kwargs
160 | )
161 |
162 | def prepare_inputs_for_generation(self, input_ids, past_key_values=None,
163 | inputs_embeds=None, **kwargs):
164 | images = kwargs.pop("images", None)
165 | image_sizes = kwargs.pop("image_sizes", None)
166 | inputs = super().prepare_inputs_for_generation(
167 | input_ids, past_key_values=past_key_values, inputs_embeds=inputs_embeds, **kwargs
168 | )
169 | if images is not None:
170 | inputs['images'] = images
171 | if image_sizes is not None:
172 | inputs['image_sizes'] = image_sizes
173 | return inputs
174 |
175 | AutoConfig.register("llava_llama", LlavaConfig)
176 | AutoModelForCausalLM.register(LlavaConfig, LlavaLlamaForCausalLM)
177 |
--------------------------------------------------------------------------------
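
Because `llava_llama` is registered with the `transformers` Auto classes at the bottom of the file above, a checkpoint whose config declares `model_type: llava_llama` can be loaded through the standard factory once this module has been imported. A minimal sketch (the checkpoint path is a placeholder, not part of the repository):

import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Importing the module runs the AutoConfig / AutoModelForCausalLM registration above.
import llava.model.language_model.llava_llama  # noqa: F401

config = AutoConfig.from_pretrained("/path/to/llava-llama-checkpoint")
assert config.model_type == "llava_llama"

model = AutoModelForCausalLM.from_pretrained(
    "/path/to/llava-llama-checkpoint",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
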
/model/forgery_analyst/llava/model/language_model/llava_mistral.py:
--------------------------------------------------------------------------------
1 | # Copyright 2023 Haotian Liu
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 |
16 | from typing import List, Optional, Tuple, Union
17 |
18 | import torch
19 | import torch.nn as nn
20 | from torch.nn import CrossEntropyLoss
21 |
22 | from transformers import AutoConfig, AutoModelForCausalLM, \
23 | MistralConfig, MistralModel, MistralForCausalLM
24 |
25 | from transformers.modeling_outputs import CausalLMOutputWithPast
26 | from transformers.generation.utils import GenerateOutput
27 |
28 | from ..llava_arch import LlavaMetaModel, LlavaMetaForCausalLM
29 |
30 |
31 | class LlavaMistralConfig(MistralConfig):
32 | model_type = "llava_mistral"
33 |
34 |
35 | class LlavaMistralModel(LlavaMetaModel, MistralModel):
36 | config_class = LlavaMistralConfig
37 |
38 | def __init__(self, config: MistralConfig):
39 | super(LlavaMistralModel, self).__init__(config)
40 |
41 |
42 | class LlavaMistralForCausalLM(MistralForCausalLM, LlavaMetaForCausalLM):
43 | config_class = LlavaMistralConfig
44 |
45 | def __init__(self, config):
46 | super(MistralForCausalLM, self).__init__(config)
47 | self.model = LlavaMistralModel(config)
48 |
49 | self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
50 |
51 | # Initialize weights and apply final processing
52 | self.post_init()
53 |
54 | def get_model(self):
55 | return self.model
56 |
57 | def forward(
58 | self,
59 | input_ids: torch.LongTensor = None,
60 | attention_mask: Optional[torch.Tensor] = None,
61 | position_ids: Optional[torch.LongTensor] = None,
62 | past_key_values: Optional[List[torch.FloatTensor]] = None,
63 | inputs_embeds: Optional[torch.FloatTensor] = None,
64 | labels: Optional[torch.LongTensor] = None,
65 | use_cache: Optional[bool] = None,
66 | output_attentions: Optional[bool] = None,
67 | output_hidden_states: Optional[bool] = None,
68 | images: Optional[torch.FloatTensor] = None,
69 | image_sizes: Optional[List[List[int]]] = None,
70 | return_dict: Optional[bool] = None,
71 | ) -> Union[Tuple, CausalLMOutputWithPast]:
72 |
73 | if inputs_embeds is None:
74 | (
75 | input_ids,
76 | position_ids,
77 | attention_mask,
78 | past_key_values,
79 | inputs_embeds,
80 | labels
81 | ) = self.prepare_inputs_labels_for_multimodal(
82 | input_ids,
83 | position_ids,
84 | attention_mask,
85 | past_key_values,
86 | labels,
87 | images,
88 | image_sizes
89 | )
90 |
91 | return super().forward(
92 | input_ids=input_ids,
93 | attention_mask=attention_mask,
94 | position_ids=position_ids,
95 | past_key_values=past_key_values,
96 | inputs_embeds=inputs_embeds,
97 | labels=labels,
98 | use_cache=use_cache,
99 | output_attentions=output_attentions,
100 | output_hidden_states=output_hidden_states,
101 | return_dict=return_dict
102 | )
103 |
104 | @torch.no_grad()
105 | def generate(
106 | self,
107 | inputs: Optional[torch.Tensor] = None,
108 | images: Optional[torch.Tensor] = None,
109 | image_sizes: Optional[torch.Tensor] = None,
110 | **kwargs,
111 | ) -> Union[GenerateOutput, torch.LongTensor]:
112 | position_ids = kwargs.pop("position_ids", None)
113 | attention_mask = kwargs.pop("attention_mask", None)
114 | if "inputs_embeds" in kwargs:
115 | raise NotImplementedError("`inputs_embeds` is not supported")
116 |
117 | if images is not None:
118 | (
119 | inputs,
120 | position_ids,
121 | attention_mask,
122 | _,
123 | inputs_embeds,
124 | _
125 | ) = self.prepare_inputs_labels_for_multimodal(
126 | inputs,
127 | position_ids,
128 | attention_mask,
129 | None,
130 | None,
131 | images,
132 | image_sizes=image_sizes
133 | )
134 | else:
135 | inputs_embeds = self.get_model().embed_tokens(inputs)
136 |
137 | return super().generate(
138 | position_ids=position_ids,
139 | attention_mask=attention_mask,
140 | inputs_embeds=inputs_embeds,
141 | **kwargs
142 | )
143 |
144 | def prepare_inputs_for_generation(self, input_ids, past_key_values=None,
145 | inputs_embeds=None, **kwargs):
146 | images = kwargs.pop("images", None)
147 | image_sizes = kwargs.pop("image_sizes", None)
148 | inputs = super().prepare_inputs_for_generation(
149 | input_ids, past_key_values=past_key_values, inputs_embeds=inputs_embeds, **kwargs
150 | )
151 | if images is not None:
152 | inputs['images'] = images
153 | if image_sizes is not None:
154 | inputs['image_sizes'] = image_sizes
155 | return inputs
156 |
157 | AutoConfig.register("llava_mistral", LlavaMistralConfig)
158 | AutoModelForCausalLM.register(LlavaMistralConfig, LlavaMistralForCausalLM)
159 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/language_model/llava_mpt.py:
--------------------------------------------------------------------------------
1 | # Copyright 2023 Haotian Liu
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 |
16 | from typing import Optional, Tuple
17 |
18 | import torch
19 |
20 | from transformers import AutoConfig, AutoModelForCausalLM, \
21 | MptConfig, MptForCausalLM, MptModel
22 | from llava.model.llava_arch import LlavaMetaModel, LlavaMetaForCausalLM
23 |
24 |
25 | class LlavaMptConfig(MptConfig):
26 | model_type = "llava_mpt"
27 |
28 |
29 | class LlavaMptModel(LlavaMetaModel, MptModel):
30 | config_class = LlavaMptConfig
31 |
32 | def __init__(self, config: MptConfig):
33 | config.hidden_size = config.d_model
34 | super(LlavaMptModel, self).__init__(config)
35 |
36 | def embed_tokens(self, x):
37 | return self.wte(x)
38 |
39 |
40 | class LlavaMptForCausalLM(MptForCausalLM, LlavaMetaForCausalLM):
41 | config_class = LlavaMptConfig
42 | supports_gradient_checkpointing = True
43 |
44 | def __init__(self, config):
45 | super(MptForCausalLM, self).__init__(config)
46 |
47 | self.transformer = LlavaMptModel(config)
48 | self.lm_head = torch.nn.Linear(config.hidden_size, config.vocab_size, bias=False)
49 |
50 | # Initialize weights and apply final processing
51 | self.post_init()
52 |
53 | def get_model(self):
54 | return self.transformer
55 |
56 | def _set_gradient_checkpointing(self, module, value=False):
57 | if isinstance(module, LlavaMptModel):
58 | module.gradient_checkpointing = value
59 |
60 | def forward(
61 | self,
62 | input_ids: Optional[torch.LongTensor] = None,
63 | past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
64 | attention_mask: Optional[torch.Tensor] = None,
65 | inputs_embeds: Optional[torch.Tensor] = None,
66 | labels: Optional[torch.Tensor] = None,
67 | use_cache: Optional[bool] = None,
68 | output_attentions: Optional[bool] = None,
69 | output_hidden_states: Optional[bool] = None,
70 | return_dict: Optional[bool] = None,
71 | images=None):
72 |
73 | input_ids, attention_mask, past_key_values, inputs_embeds, labels = self.prepare_inputs_labels_for_multimodal(input_ids, attention_mask, past_key_values, labels, images)
74 |
75 | return super().forward(
76 | input_ids,
77 | past_key_values=past_key_values,
78 | attention_mask=attention_mask,
79 | inputs_embeds=inputs_embeds,
80 | labels=labels,
81 | use_cache=use_cache,
82 | output_attentions=output_attentions,
83 | output_hidden_states=output_hidden_states,
84 | return_dict=return_dict,
85 | )
86 |
87 | def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs):
88 | images = kwargs.pop("images", None)
89 | _inputs = super().prepare_inputs_for_generation(
90 | input_ids, past_key_values=past_key_values, inputs_embeds=inputs_embeds, **kwargs
91 | )
92 | _inputs['images'] = images
93 | return _inputs
94 |
95 |
96 | AutoConfig.register("llava_mpt", LlavaMptConfig)
97 | AutoModelForCausalLM.register(LlavaMptConfig, LlavaMptForCausalLM)
98 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/make_delta.py:
--------------------------------------------------------------------------------
1 | """
2 | Usage:
3 | python3 -m llava.model.make_delta --base ~/model_weights/llama-7b --target ~/model_weights/llava-7b --delta ~/model_weights/llava-7b-delta --hub-repo-id liuhaotian/llava-7b-delta
4 | """
5 | import argparse
6 |
7 | import torch
8 | from tqdm import tqdm
9 | from transformers import AutoTokenizer, AutoModelForCausalLM
10 | from llava.model.utils import auto_upgrade
11 |
12 |
13 | def make_delta(base_model_path, target_model_path, delta_path, hub_repo_id):
14 | print("Loading base model")
15 | base = AutoModelForCausalLM.from_pretrained(
16 | base_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
17 |
18 | print("Loading target model")
19 | auto_upgrade(target_model_path)
20 | target = AutoModelForCausalLM.from_pretrained(target_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
21 |
22 | print("Calculating delta")
23 | for name, param in tqdm(target.state_dict().items(), desc="Calculating delta"):
24 | if name not in base.state_dict():
25 | assert name in ['model.mm_projector.weight', 'model.mm_projector.bias'], f'{name} not in base model'
26 | continue
27 | if param.data.shape == base.state_dict()[name].shape:
28 | param.data -= base.state_dict()[name]
29 | else:
30 | assert name in ['model.embed_tokens.weight', 'lm_head.weight'], f'{name} dimension mismatch: {param.data.shape} vs {base.state_dict()[name].shape}'
31 | bparam = base.state_dict()[name]
32 | param.data[:bparam.shape[0], :bparam.shape[1]] -= bparam
33 |
34 | print("Saving delta")
35 | if hub_repo_id:
36 | kwargs = {"push_to_hub": True, "repo_id": hub_repo_id}
37 | else:
38 | kwargs = {}
39 | target.save_pretrained(delta_path, **kwargs)
40 | target_tokenizer = AutoTokenizer.from_pretrained(target_model_path)
41 | target_tokenizer.save_pretrained(delta_path, **kwargs)
42 |
43 |
44 | if __name__ == "__main__":
45 | parser = argparse.ArgumentParser()
46 | parser.add_argument("--base-model-path", type=str, required=True)
47 | parser.add_argument("--target-model-path", type=str, required=True)
48 | parser.add_argument("--delta-path", type=str, required=True)
49 | parser.add_argument("--hub-repo-id", type=str, default=None)
50 | args = parser.parse_args()
51 |
52 | make_delta(args.base_model_path, args.target_model_path, args.delta_path, args.hub_repo_id)
53 |
--------------------------------------------------------------------------------
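
`make_delta` stores `target - base` per parameter and `apply_delta` reverses it, so the two scripts are inverses up to the padded rows of the resized embedding and LM head. A toy illustration of that invariant on plain tensors (not repository code):

import torch

base = torch.randn(4, 8)
target = torch.randn(4, 8)

delta = target - base      # what make_delta writes out
recovered = delta + base   # what apply_delta reconstructs
assert torch.allclose(recovered, target)
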
/model/forgery_analyst/llava/model/multimodal_encoder/__pycache__/builder.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/multimodal_encoder/__pycache__/builder.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/multimodal_encoder/__pycache__/clip_encoder.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/multimodal_encoder/__pycache__/clip_encoder.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/multimodal_encoder/builder.py:
--------------------------------------------------------------------------------
1 | import os
2 | from .clip_encoder import CLIPVisionTower, CLIPVisionTowerS2
3 |
4 |
5 | def build_vision_tower(vision_tower_cfg, **kwargs):
6 | vision_tower = getattr(vision_tower_cfg, 'mm_vision_tower', getattr(vision_tower_cfg, 'vision_tower', None))
7 | is_absolute_path_exists = os.path.exists(vision_tower)
8 | use_s2 = getattr(vision_tower_cfg, 's2', False)
9 | if is_absolute_path_exists or vision_tower.startswith("openai") or vision_tower.startswith("laion") or "ShareGPT4V" in vision_tower:
10 | if use_s2:
11 | return CLIPVisionTowerS2(vision_tower, args=vision_tower_cfg, **kwargs)
12 | else:
13 | return CLIPVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
14 |
15 | raise ValueError(f'Unknown vision tower: {vision_tower}')
16 |
--------------------------------------------------------------------------------
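
`build_vision_tower` only reads a few attributes from the config object, so it can be exercised with a small stand-in. A sketch (attribute values are illustrative, not the project's training defaults):

from types import SimpleNamespace
from llava.model.multimodal_encoder.builder import build_vision_tower

cfg = SimpleNamespace(
    mm_vision_tower="openai/clip-vit-large-patch14-336",
    mm_vision_select_layer=-2,
    mm_vision_select_feature="patch",
)
# delay_load=True only fetches the CLIP config, so no model weights are downloaded here.
vision_tower = build_vision_tower(cfg, delay_load=True)
print(vision_tower.hidden_size, vision_tower.num_patches)
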
/model/forgery_analyst/llava/model/multimodal_encoder/clip_encoder.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 |
4 | from transformers import CLIPVisionModel, CLIPImageProcessor, CLIPVisionConfig
5 |
6 |
7 | class CLIPVisionTower(nn.Module):
8 | def __init__(self, vision_tower, args, delay_load=False):
9 | super().__init__()
10 |
11 | self.is_loaded = False
12 |
13 | self.vision_tower_name = vision_tower
14 | self.select_layer = args.mm_vision_select_layer
15 | self.select_feature = getattr(args, 'mm_vision_select_feature', 'patch')
16 |
17 | if not delay_load:
18 | self.load_model()
19 | elif getattr(args, 'unfreeze_mm_vision_tower', False):
20 | self.load_model()
21 | else:
22 | self.cfg_only = CLIPVisionConfig.from_pretrained(self.vision_tower_name)
23 |
24 | def load_model(self, device_map=None):
25 | if self.is_loaded:
26 | print('{} is already loaded, `load_model` called again, skipping.'.format(self.vision_tower_name))
27 | return
28 |
29 | self.image_processor = CLIPImageProcessor.from_pretrained(self.vision_tower_name)
30 | self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name, device_map=device_map)
31 | self.vision_tower.requires_grad_(False)
32 |
33 | self.is_loaded = True
34 |
35 | def feature_select(self, image_forward_outs):
36 | image_features = image_forward_outs.hidden_states[self.select_layer]
37 | if self.select_feature == 'patch':
38 | image_features = image_features[:, 1:]
39 | elif self.select_feature == 'cls_patch':
40 | image_features = image_features
41 | else:
42 | raise ValueError(f'Unexpected select feature: {self.select_feature}')
43 | return image_features
44 |
45 | @torch.no_grad()
46 | def forward(self, images):
47 | if type(images) is list:
48 | image_features = []
49 | for image in images:
50 | image_forward_out = self.vision_tower(image.to(device=self.device, dtype=self.dtype).unsqueeze(0), output_hidden_states=True)
51 | image_feature = self.feature_select(image_forward_out).to(image.dtype)
52 | image_features.append(image_feature)
53 | else:
54 | image_forward_outs = self.vision_tower(images.to(device=self.device, dtype=self.dtype), output_hidden_states=True)
55 | image_features = self.feature_select(image_forward_outs).to(images.dtype)
56 |
57 | return image_features
58 |
59 | @property
60 | def dummy_feature(self):
61 | return torch.zeros(1, self.hidden_size, device=self.device, dtype=self.dtype)
62 |
63 | @property
64 | def dtype(self):
65 | return self.vision_tower.dtype
66 |
67 | @property
68 | def device(self):
69 | return self.vision_tower.device
70 |
71 | @property
72 | def config(self):
73 | if self.is_loaded:
74 | return self.vision_tower.config
75 | else:
76 | return self.cfg_only
77 |
78 | @property
79 | def hidden_size(self):
80 | return self.config.hidden_size
81 |
82 | @property
83 | def num_patches_per_side(self):
84 | return self.config.image_size // self.config.patch_size
85 |
86 | @property
87 | def num_patches(self):
88 | return (self.config.image_size // self.config.patch_size) ** 2
89 |
90 |
91 |
92 | class CLIPVisionTowerS2(CLIPVisionTower):
93 | def __init__(self, vision_tower, args, delay_load=False):
94 | super().__init__(vision_tower, args, delay_load)
95 |
96 | self.s2_scales = getattr(args, 's2_scales', '336,672,1008')
97 | self.s2_scales = list(map(int, self.s2_scales.split(',')))
98 | self.s2_scales.sort()
99 | self.s2_split_size = self.s2_scales[0]
100 | self.s2_image_size = self.s2_scales[-1]
101 |
102 | try:
103 | from s2wrapper import forward as multiscale_forward
104 | except ImportError:
105 | raise ImportError('Package s2wrapper not found! Please install by running: \npip install git+https://github.com/bfshi/scaling_on_scales.git')
106 | self.multiscale_forward = multiscale_forward
107 |
108 | # change resize/crop size in preprocessing to the largest image size in s2_scale
109 | if not delay_load or getattr(args, 'unfreeze_mm_vision_tower', False):
110 | self.image_processor.size['shortest_edge'] = self.s2_image_size
111 | self.image_processor.crop_size['height'] = self.image_processor.crop_size['width'] = self.s2_image_size
112 |
113 | def load_model(self, device_map=None):
114 | if self.is_loaded:
115 | print('{} is already loaded, `load_model` called again, skipping.'.format(self.vision_tower_name))
116 | return
117 |
118 | self.image_processor = CLIPImageProcessor.from_pretrained(self.vision_tower_name)
119 | self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name, device_map=device_map)
120 | self.vision_tower.requires_grad_(False)
121 |
122 | self.image_processor.size['shortest_edge'] = self.s2_image_size
123 | self.image_processor.crop_size['height'] = self.image_processor.crop_size['width'] = self.s2_image_size
124 |
125 | self.is_loaded = True
126 |
127 | @torch.no_grad()
128 | def forward_feature(self, images):
129 | image_forward_outs = self.vision_tower(images.to(device=self.device, dtype=self.dtype), output_hidden_states=True)
130 | image_features = self.feature_select(image_forward_outs).to(images.dtype)
131 | return image_features
132 |
133 | @torch.no_grad()
134 | def forward(self, images):
135 | if type(images) is list:
136 | image_features = []
137 | for image in images:
138 | image_feature = self.multiscale_forward(self.forward_feature, image.unsqueeze(0), img_sizes=self.s2_scales, max_split_size=self.s2_split_size)
139 | image_features.append(image_feature)
140 | else:
141 | image_features = self.multiscale_forward(self.forward_feature, images, img_sizes=self.s2_scales, max_split_size=self.s2_split_size)
142 |
143 | return image_features
144 |
145 | @property
146 | def hidden_size(self):
147 | return self.config.hidden_size * len(self.s2_scales)
148 |
--------------------------------------------------------------------------------
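
A sketch of a single forward pass through `CLIPVisionTower` (the image path is a placeholder); with the default `'patch'` feature selection the output is one feature vector per image patch:

from types import SimpleNamespace
from PIL import Image
from llava.model.multimodal_encoder.clip_encoder import CLIPVisionTower

args = SimpleNamespace(mm_vision_select_layer=-2, mm_vision_select_feature="patch")
tower = CLIPVisionTower("openai/clip-vit-large-patch14-336", args)  # loads CLIP weights immediately

image = Image.open("/path/to/image.jpg").convert("RGB")  # placeholder path
pixel_values = tower.image_processor(image, return_tensors="pt")["pixel_values"]
features = tower(pixel_values)      # forward() moves the tensor to the tower's device/dtype
print(features.shape)               # (1, num_patches, hidden_size)
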
/model/forgery_analyst/llava/model/multimodal_projector/__pycache__/builder.cpython-310.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/model/multimodal_projector/__pycache__/builder.cpython-310.pyc
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/model/multimodal_projector/builder.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import re
4 |
5 |
6 | class IdentityMap(nn.Module):
7 | def __init__(self):
8 | super().__init__()
9 |
10 | def forward(self, x, *args, **kwargs):
11 | return x
12 |
13 | @property
14 | def config(self):
15 | return {"mm_projector_type": 'identity'}
16 |
17 |
18 | class SimpleResBlock(nn.Module):
19 | def __init__(self, channels):
20 | super().__init__()
21 | self.pre_norm = nn.LayerNorm(channels)
22 |
23 | self.proj = nn.Sequential(
24 | nn.Linear(channels, channels),
25 | nn.GELU(),
26 | nn.Linear(channels, channels)
27 | )
28 | def forward(self, x):
29 | x = self.pre_norm(x)
30 | return x + self.proj(x)
31 |
32 |
33 | def build_vision_projector(config, delay_load=False, **kwargs):
34 | projector_type = getattr(config, 'mm_projector_type', 'linear')
35 |
36 | if projector_type == 'linear':
37 | return nn.Linear(config.mm_hidden_size, config.hidden_size)
38 |
39 | mlp_gelu_match = re.match(r'^mlp(\d+)x_gelu$', projector_type)
40 | if mlp_gelu_match:
41 | mlp_depth = int(mlp_gelu_match.group(1))
42 | modules = [nn.Linear(config.mm_hidden_size, config.hidden_size)]
43 | for _ in range(1, mlp_depth):
44 | modules.append(nn.GELU())
45 | modules.append(nn.Linear(config.hidden_size, config.hidden_size))
46 | return nn.Sequential(*modules)
47 |
48 | if projector_type == 'identity':
49 | return IdentityMap()
50 |
51 | raise ValueError(f'Unknown projector type: {projector_type}')
52 |
--------------------------------------------------------------------------------
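
A sketch of the `'mlp2x_gelu'` case handled by the regex above, using illustrative hidden sizes (not the project's configured values):

import torch
from types import SimpleNamespace
from llava.model.multimodal_projector.builder import build_vision_projector

cfg = SimpleNamespace(mm_projector_type="mlp2x_gelu", mm_hidden_size=1024, hidden_size=4096)
projector = build_vision_projector(cfg)     # Linear(1024->4096) -> GELU -> Linear(4096->4096)

patch_features = torch.randn(1, 576, 1024)  # e.g. CLIP patch features
print(projector(patch_features).shape)      # torch.Size([1, 576, 4096])
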
/model/forgery_analyst/llava/model/utils.py:
--------------------------------------------------------------------------------
1 | from transformers import AutoConfig
2 |
3 |
4 | def auto_upgrade(config):
5 | cfg = AutoConfig.from_pretrained(config)
6 | if 'llava' in config and 'llava' not in cfg.model_type:
7 | assert cfg.model_type == 'llama'
8 | print("You are using newer LLaVA code base, while the checkpoint of v0 is from older code base.")
9 | print("You must upgrade the checkpoint to the new code base (this can be done automatically).")
10 | confirm = input("Please confirm that you want to upgrade the checkpoint. [Y/N]")
11 | if confirm.lower() in ["y", "yes"]:
12 | print("Upgrading checkpoint...")
13 | assert len(cfg.architectures) == 1
14 | setattr(cfg.__class__, "model_type", "llava")
15 | cfg.architectures[0] = 'LlavaLlamaForCausalLM'
16 | cfg.save_pretrained(config)
17 | print("Checkpoint upgraded.")
18 | else:
19 | print("Checkpoint upgrade aborted.")
20 | exit(1)
21 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/serve/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/serve/__init__.py
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/serve/cli.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import torch
3 |
4 | from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
5 | from llava.conversation import conv_templates, SeparatorStyle
6 | from llava.model.builder import load_pretrained_model
7 | from llava.utils import disable_torch_init
8 | from llava.mm_utils import process_images, tokenizer_image_token, get_model_name_from_path
9 |
10 | from PIL import Image
11 |
12 | import requests
13 | from PIL import Image
14 | from io import BytesIO
15 | from transformers import TextStreamer
16 |
17 |
18 | def load_image(image_file):
19 | if image_file.startswith('http://') or image_file.startswith('https://'):
20 | response = requests.get(image_file)
21 | image = Image.open(BytesIO(response.content)).convert('RGB')
22 | else:
23 | image = Image.open(image_file).convert('RGB')
24 | return image
25 |
26 |
27 | def main(args):
28 | # Model
29 | disable_torch_init()
30 |
31 | model_name = get_model_name_from_path(args.model_path)
32 | tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, args.load_8bit, args.load_4bit, device=args.device)
33 |
34 | if "llama-2" in model_name.lower():
35 | conv_mode = "llava_llama_2"
36 | elif "mistral" in model_name.lower():
37 | conv_mode = "mistral_instruct"
38 | elif "v1.6-34b" in model_name.lower():
39 | conv_mode = "chatml_direct"
40 | elif "v1" in model_name.lower():
41 | conv_mode = "llava_v1"
42 | elif "mpt" in model_name.lower():
43 | conv_mode = "mpt"
44 | else:
45 | conv_mode = "llava_v0"
46 |
47 | if args.conv_mode is not None and conv_mode != args.conv_mode:
48 | print('[WARNING] the auto inferred conversation mode is {}, while `--conv-mode` is {}, using {}'.format(conv_mode, args.conv_mode, args.conv_mode))
49 | else:
50 | args.conv_mode = conv_mode
51 |
52 | conv = conv_templates[args.conv_mode].copy()
53 | if "mpt" in model_name.lower():
54 | roles = ('user', 'assistant')
55 | else:
56 | roles = conv.roles
57 |
58 | image = load_image(args.image_file)
59 | image_size = image.size
60 | # Similar operation in model_worker.py
61 | image_tensor = process_images([image], image_processor, model.config)
62 | if type(image_tensor) is list:
63 | image_tensor = [image.to(model.device, dtype=torch.float16) for image in image_tensor]
64 | else:
65 | image_tensor = image_tensor.to(model.device, dtype=torch.float16)
66 |
67 | while True:
68 | try:
69 | inp = input(f"{roles[0]}: ")
70 | except EOFError:
71 | inp = ""
72 | if not inp:
73 | print("exit...")
74 | break
75 |
76 | print(f"{roles[1]}: ", end="")
77 |
78 | if image is not None:
79 | # first message
80 | if model.config.mm_use_im_start_end:
81 | inp = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + inp
82 | else:
83 | inp = DEFAULT_IMAGE_TOKEN + '\n' + inp
84 | image = None
85 |
86 | conv.append_message(conv.roles[0], inp)
87 | conv.append_message(conv.roles[1], None)
88 | prompt = conv.get_prompt()
89 |
90 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).to(model.device)
91 | stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
92 | keywords = [stop_str]
93 | streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
94 |
95 | with torch.inference_mode():
96 | output_ids = model.generate(
97 | input_ids,
98 | images=image_tensor,
99 | image_sizes=[image_size],
100 | do_sample=True if args.temperature > 0 else False,
101 | temperature=args.temperature,
102 | max_new_tokens=args.max_new_tokens,
103 | streamer=streamer,
104 | use_cache=True)
105 |
106 | outputs = tokenizer.decode(output_ids[0]).strip()
107 | conv.messages[-1][-1] = outputs
108 |
109 | if args.debug:
110 | print("\n", {"prompt": prompt, "outputs": outputs}, "\n")
111 |
112 |
113 | if __name__ == "__main__":
114 | parser = argparse.ArgumentParser()
115 | parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
116 | parser.add_argument("--model-base", type=str, default=None)
117 | parser.add_argument("--image-file", type=str, required=True)
118 | parser.add_argument("--device", type=str, default="cuda")
119 | parser.add_argument("--conv-mode", type=str, default=None)
120 | parser.add_argument("--temperature", type=float, default=0.2)
121 | parser.add_argument("--max-new-tokens", type=int, default=512)
122 | parser.add_argument("--load-8bit", action="store_true")
123 | parser.add_argument("--load-4bit", action="store_true")
124 | parser.add_argument("--debug", action="store_true")
125 | args = parser.parse_args()
126 | main(args)
127 |
--------------------------------------------------------------------------------
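
The prompt construction at the heart of `cli.py` can be reproduced in a few lines. A sketch of a single turn with the `llava_v1` template (the question text is illustrative; tokenization is shown as a comment because it needs a loaded checkpoint):

from llava.constants import DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

conv = conv_templates["llava_v1"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\n" + "Describe any signs of tampering in this image.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
print(prompt)

# With a loaded tokenizer, llava.mm_utils.tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX,
# return_tensors="pt") splices IMAGE_TOKEN_INDEX in place of the <image> placeholder.
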
/model/forgery_analyst/llava/serve/examples/extreme_ironing.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/serve/examples/extreme_ironing.jpg
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/serve/examples/waterview.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/model/forgery_analyst/llava/serve/examples/waterview.jpg
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/serve/register_worker.py:
--------------------------------------------------------------------------------
1 | """
2 | Manually register workers.
3 |
4 | Usage:
5 | python3 -m llava.serve.register_worker --controller-address http://localhost:21001 --worker-name http://localhost:21002
6 | """
7 |
8 | import argparse
9 |
10 | import requests
11 |
12 | if __name__ == "__main__":
13 | parser = argparse.ArgumentParser()
14 | parser.add_argument("--controller-address", type=str)
15 | parser.add_argument("--worker-name", type=str)
16 | parser.add_argument("--check-heart-beat", action="store_true")
17 | args = parser.parse_args()
18 |
19 | url = args.controller_address + "/register_worker"
20 | data = {
21 | "worker_name": args.worker_name,
22 | "check_heart_beat": args.check_heart_beat,
23 | "worker_status": None,
24 | }
25 | r = requests.post(url, json=data)
26 | assert r.status_code == 200
27 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/serve/sglang_worker.py:
--------------------------------------------------------------------------------
1 | """
2 | A model worker executes the model.
3 | """
4 | import argparse
5 | import asyncio
6 | from concurrent.futures import ThreadPoolExecutor
7 | import json
8 | import time
9 | import threading
10 | import uuid
11 |
12 | from fastapi import FastAPI, Request, BackgroundTasks
13 | from fastapi.responses import StreamingResponse
14 | import requests
15 | import re
16 | import uvicorn
17 | from functools import partial
18 |
19 | from llava.constants import WORKER_HEART_BEAT_INTERVAL
20 | from llava.utils import (build_logger, server_error_msg,
21 | pretty_print_semaphore)
22 | from llava.mm_utils import process_images, load_image_from_base64, tokenizer_image_token, expand2square
23 | from llava.constants import DEFAULT_IMAGE_TOKEN
24 |
25 | import sglang as sgl
26 | from sglang.backend.runtime_endpoint import RuntimeEndpoint
27 |
28 |
29 | GB = 1 << 30
30 |
31 | worker_id = str(uuid.uuid4())[:6]
32 | logger = build_logger("model_worker", f"model_worker_{worker_id}.log")
33 | global_counter = 0
34 |
35 | model_semaphore = None
36 |
37 |
38 | def heart_beat_worker(controller):
39 | while True:
40 | time.sleep(WORKER_HEART_BEAT_INTERVAL)
41 | controller.send_heart_beat()
42 |
43 |
44 | @sgl.function
45 | def pipeline(s, prompt, max_tokens):
46 | for p in prompt:
47 | if type(p) is str:
48 | s += p
49 | else:
50 | s += sgl.image(p)
51 | s += sgl.gen("response", max_tokens=max_tokens)
52 |
53 |
54 | class ModelWorker:
55 | def __init__(self, controller_addr, worker_addr, sgl_endpoint,
56 | worker_id, no_register, model_name):
57 | self.controller_addr = controller_addr
58 | self.worker_addr = worker_addr
59 | self.worker_id = worker_id
60 |
61 | # Select backend
62 | backend = RuntimeEndpoint(sgl_endpoint)
63 | sgl.set_default_backend(backend)
64 | model_path = backend.model_info["model_path"]
65 |
66 | if model_path.endswith("/"):
67 | model_path = model_path[:-1]
68 | if model_name is None:
69 | model_paths = model_path.split("/")
70 | if model_paths[-1].startswith('checkpoint-'):
71 | self.model_name = model_paths[-2] + "_" + model_paths[-1]
72 | else:
73 | self.model_name = model_paths[-1]
74 | else:
75 | self.model_name = model_name
76 |
77 | logger.info(f"Loading the SGLANG model {self.model_name} on worker {worker_id} ...")
78 |
79 | if not no_register:
80 | self.register_to_controller()
81 | self.heart_beat_thread = threading.Thread(
82 | target=heart_beat_worker, args=(self,), daemon=True)
83 | self.heart_beat_thread.start()
84 |
85 | def register_to_controller(self):
86 | logger.info("Register to controller")
87 |
88 | url = self.controller_addr + "/register_worker"
89 | data = {
90 | "worker_name": self.worker_addr,
91 | "check_heart_beat": True,
92 | "worker_status": self.get_status()
93 | }
94 | r = requests.post(url, json=data)
95 | assert r.status_code == 200
96 |
97 | def send_heart_beat(self):
98 | logger.info(f"Send heart beat. Models: {[self.model_name]}. "
99 | f"Semaphore: {pretty_print_semaphore(model_semaphore)}. "
100 | f"global_counter: {global_counter}")
101 |
102 | url = self.controller_addr + "/receive_heart_beat"
103 |
104 | while True:
105 | try:
106 | ret = requests.post(url, json={
107 | "worker_name": self.worker_addr,
108 | "queue_length": self.get_queue_length()}, timeout=5)
109 | exist = ret.json()["exist"]
110 | break
111 | except requests.exceptions.RequestException as e:
112 | logger.error(f"heart beat error: {e}")
113 | time.sleep(5)
114 |
115 | if not exist:
116 | self.register_to_controller()
117 |
118 | def get_queue_length(self):
119 | if model_semaphore is None:
120 | return 0
121 | else:
122 | return args.limit_model_concurrency - model_semaphore._value + (len(
123 | model_semaphore._waiters) if model_semaphore._waiters is not None else 0)
124 |
125 | def get_status(self):
126 | return {
127 | "model_names": [self.model_name],
128 | "speed": 1,
129 | "queue_length": self.get_queue_length(),
130 | }
131 |
132 | async def generate_stream(self, params):
133 | ori_prompt = prompt = params["prompt"]
134 | images = params.get("images", None)
135 | if images is not None and len(images) > 0:
136 | if len(images) > 0:
137 | if len(images) != prompt.count(DEFAULT_IMAGE_TOKEN):
138 | raise ValueError("Number of images does not match number of tokens in prompt")
139 |
140 | images = [load_image_from_base64(image) for image in images]
141 |
142 | # FIXME: for image-start/end token
143 | # replace_token = DEFAULT_IMAGE_TOKEN
144 | # if getattr(self.model.config, 'mm_use_im_start_end', False):
145 | # replace_token = DEFAULT_IM_START_TOKEN + replace_token + DEFAULT_IM_END_TOKEN
146 | # prompt = prompt.replace(DEFAULT_IMAGE_TOKEN, replace_token)
147 | prompt = prompt.replace(' ' + DEFAULT_IMAGE_TOKEN + '\n', DEFAULT_IMAGE_TOKEN)
148 | prompt_split = prompt.split(DEFAULT_IMAGE_TOKEN)
149 | prompt = []
150 | for i in range(len(prompt_split)):
151 | prompt.append(prompt_split[i])
152 | if i < len(images):
153 | prompt.append(images[i])
154 | else:
155 | prompt = [prompt]
156 |
157 | temperature = float(params.get("temperature", 1.0))
158 | top_p = float(params.get("top_p", 1.0))
159 | # max_context_length = getattr(model.config, 'max_position_embeddings', 2048)
160 | max_new_tokens = min(int(params.get("max_new_tokens", 256)), 1024)
161 | stop_str = params.get("stop", None)
162 | stop_str = [stop_str] if stop_str is not None else None
163 |
164 | print({'prompt': prompt, 'max_new_tokens': max_new_tokens, 'temperature': temperature, 'top_p': top_p})
165 | state = pipeline.run(prompt, max_new_tokens, temperature=temperature, top_p=top_p, stream=True)
166 |
167 | generated_text = ori_prompt
168 | async for text_outputs in state.text_async_iter(var_name="response"):
169 | generated_text += text_outputs
170 | yield json.dumps({"text": generated_text, "error_code": 0}).encode() + b"\0"
171 |
172 | async def generate_stream_gate(self, params):
173 | try:
174 | async for x in self.generate_stream(params):
175 | yield x
176 | except ValueError as e:
177 | print("Caught ValueError:", e)
178 | ret = {
179 | "text": server_error_msg,
180 | "error_code": 1,
181 | }
182 | yield json.dumps(ret).encode() + b"\0"
183 | except Exception as e:
184 | print("Caught Unknown Error", e)
185 | ret = {
186 | "text": server_error_msg,
187 | "error_code": 1,
188 | }
189 | yield json.dumps(ret).encode() + b"\0"
190 |
191 |
192 | app = FastAPI()
193 |
194 |
195 | def release_model_semaphore(fn=None):
196 | model_semaphore.release()
197 | if fn is not None:
198 | fn()
199 |
200 |
201 | @app.post("/worker_generate_stream")
202 | async def generate_stream(request: Request):
203 | global model_semaphore, global_counter
204 | global_counter += 1
205 | params = await request.json()
206 |
207 | if model_semaphore is None:
208 | model_semaphore = asyncio.Semaphore(args.limit_model_concurrency)
209 | await model_semaphore.acquire()
210 | worker.send_heart_beat()
211 | generator = worker.generate_stream_gate(params)
212 | background_tasks = BackgroundTasks()
213 | background_tasks.add_task(partial(release_model_semaphore, fn=worker.send_heart_beat))
214 | return StreamingResponse(generator, background=background_tasks)
215 |
216 |
217 | @app.post("/worker_get_status")
218 | async def get_status(request: Request):
219 | return worker.get_status()
220 |
221 |
222 | if __name__ == "__main__":
223 | parser = argparse.ArgumentParser()
224 | parser.add_argument("--host", type=str, default="localhost")
225 | parser.add_argument("--port", type=int, default=21002)
226 | parser.add_argument("--worker-address", type=str,
227 | default="http://localhost:21002")
228 | parser.add_argument("--controller-address", type=str,
229 | default="http://localhost:21001")
230 | parser.add_argument("--model-name", type=str)
231 | parser.add_argument("--sgl-endpoint", type=str)
232 | parser.add_argument("--limit-model-concurrency", type=int, default=5)
233 | parser.add_argument("--stream-interval", type=int, default=1)
234 | parser.add_argument("--no-register", action="store_true")
235 | args = parser.parse_args()
236 | logger.info(f"args: {args}")
237 |
238 | worker = ModelWorker(args.controller_address,
239 | args.worker_address,
240 | args.sgl_endpoint,
241 | worker_id,
242 | args.no_register,
243 | args.model_name)
244 | uvicorn.run(app, host=args.host, port=args.port, log_level="info")
245 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/serve/test_message.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 |
4 | import requests
5 |
6 | from llava.conversation import default_conversation
7 |
8 |
9 | def main():
10 | if args.worker_address:
11 | worker_addr = args.worker_address
12 | else:
13 | controller_addr = args.controller_address
14 | ret = requests.post(controller_addr + "/refresh_all_workers")
15 | ret = requests.post(controller_addr + "/list_models")
16 | models = ret.json()["models"]
17 | models.sort()
18 | print(f"Models: {models}")
19 |
20 | ret = requests.post(controller_addr + "/get_worker_address",
21 | json={"model": args.model_name})
22 | worker_addr = ret.json()["address"]
23 | print(f"worker_addr: {worker_addr}")
24 |
25 | if worker_addr == "":
26 | return
27 |
28 | conv = default_conversation.copy()
29 | conv.append_message(conv.roles[0], args.message)
30 | prompt = conv.get_prompt()
31 |
32 | headers = {"User-Agent": "LLaVA Client"}
33 | pload = {
34 | "model": args.model_name,
35 | "prompt": prompt,
36 | "max_new_tokens": args.max_new_tokens,
37 | "temperature": 0.7,
38 | "stop": conv.sep,
39 | }
40 | response = requests.post(worker_addr + "/worker_generate_stream", headers=headers,
41 | json=pload, stream=True)
42 |
43 | print(prompt.replace(conv.sep, "\n"), end="")
44 | for chunk in response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0"):
45 | if chunk:
46 | data = json.loads(chunk.decode("utf-8"))
47 | output = data["text"].split(conv.sep)[-1]
48 | print(output, end="\r")
49 | print("")
50 |
51 |
52 | if __name__ == "__main__":
53 | parser = argparse.ArgumentParser()
54 | parser.add_argument("--controller-address", type=str, default="http://localhost:21001")
55 | parser.add_argument("--worker-address", type=str)
56 | parser.add_argument("--model-name", type=str, default="facebook/opt-350m")
57 | parser.add_argument("--max-new-tokens", type=int, default=32)
58 | parser.add_argument("--message", type=str, default=
59 | "Tell me a story with more than 1000 words.")
60 | args = parser.parse_args()
61 |
62 | main()
63 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/train/llama_flash_attn_monkey_patch.py:
--------------------------------------------------------------------------------
1 | from typing import Optional, Tuple
2 | import warnings
3 |
4 | import torch
5 |
6 | import transformers
7 | from transformers.models.llama.modeling_llama import apply_rotary_pos_emb, repeat_kv
8 |
9 | try:
10 | from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_func
11 | except ImportError:
12 | from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func as flash_attn_unpadded_qkvpacked_func
13 | from flash_attn.bert_padding import unpad_input, pad_input
14 |
15 |
16 | def forward(
17 | self,
18 | hidden_states: torch.Tensor,
19 | attention_mask: Optional[torch.Tensor] = None,
20 | position_ids: Optional[torch.Tensor] = None,
21 | past_key_value: Optional[Tuple[torch.Tensor]] = None,
22 | output_attentions: bool = False,
23 | use_cache: bool = False,
24 | ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
25 | if output_attentions:
26 | warnings.warn(
27 | "Output attentions is not supported for patched `LlamaAttention`, returning `None` instead."
28 | )
29 |
30 | bsz, q_len, _ = hidden_states.size()
31 |
32 | query_states = (
33 | self.q_proj(hidden_states)
34 | .view(bsz, q_len, self.num_heads, self.head_dim)
35 | .transpose(1, 2)
36 | )
37 | key_states = (
38 | self.k_proj(hidden_states)
39 | .view(bsz, q_len, self.num_key_value_heads, self.head_dim)
40 | .transpose(1, 2)
41 | )
42 | value_states = (
43 | self.v_proj(hidden_states)
44 | .view(bsz, q_len, self.num_key_value_heads, self.head_dim)
45 | .transpose(1, 2)
46 | ) # shape: (b, num_heads, s, head_dim)
47 |
48 | kv_seq_len = key_states.shape[-2]
49 | if past_key_value is not None:
50 | kv_seq_len += past_key_value[0].shape[-2]
51 |
52 | cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
53 | query_states, key_states = apply_rotary_pos_emb(
54 | query_states, key_states, cos, sin, position_ids
55 | )
56 |
57 | if past_key_value is not None:
58 | # reuse k, v
59 | key_states = torch.cat([past_key_value[0], key_states], dim=2)
60 | value_states = torch.cat([past_key_value[1], value_states], dim=2)
61 |
62 | past_key_value = (key_states, value_states) if use_cache else None
63 |
64 | # repeat k/v heads if n_kv_heads < n_heads
65 | key_states = repeat_kv(key_states, self.num_key_value_groups)
66 | value_states = repeat_kv(value_states, self.num_key_value_groups)
67 |
68 | # Transform the data into the format required by flash attention
69 | qkv = torch.stack([query_states, key_states, value_states], dim=2)
70 | qkv = qkv.transpose(1, 3) # shape: [b, s, 3, num_heads, head_dim]
71 | key_padding_mask = attention_mask
72 |
73 | if key_padding_mask is None:
74 | qkv = qkv.reshape(-1, 3, self.num_heads, self.head_dim)
75 | cu_q_lens = torch.arange(
76 | 0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=qkv.device
77 | )
78 | max_s = q_len
79 | output = flash_attn_unpadded_qkvpacked_func(
80 | qkv, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True
81 | )
82 | output = output.view(bsz, q_len, -1)
83 | else:
84 | qkv = qkv.reshape(bsz, q_len, -1)
85 | qkv, indices, cu_q_lens, max_s = unpad_input(qkv, key_padding_mask)
86 | qkv = qkv.view(-1, 3, self.num_heads, self.head_dim)
87 | output_unpad = flash_attn_unpadded_qkvpacked_func(
88 | qkv, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True
89 | )
90 | output_unpad = output_unpad.reshape(-1, self.num_heads * self.head_dim)
91 | output = pad_input(output_unpad, indices, bsz, q_len)
92 |
93 | return self.o_proj(output), None, past_key_value
94 |
95 |
96 | # Disable the transformation of the attention mask in LlamaModel as the flash attention
97 | # requires the attention mask to be the same as the key_padding_mask
98 | def _prepare_decoder_attention_mask(
99 | self, attention_mask, input_shape, inputs_embeds, past_key_values_length
100 | ):
101 | # [bsz, seq_len]
102 | return attention_mask
103 |
104 |
105 | def replace_llama_attn_with_flash_attn():
106 | cuda_major, cuda_minor = torch.cuda.get_device_capability()
107 | if cuda_major < 8:
108 | warnings.warn(
109 | "Flash attention is only supported on A100 or H100 GPU during training due to head dim > 64 backward."
110 | "ref: https://github.com/HazyResearch/flash-attention/issues/190#issuecomment-1523359593"
111 | )
112 | transformers.models.llama.modeling_llama.LlamaModel._prepare_decoder_attention_mask = (
113 | _prepare_decoder_attention_mask
114 | )
115 | transformers.models.llama.modeling_llama.LlamaAttention.forward = forward
116 |
--------------------------------------------------------------------------------
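A minimal usage sketch (not part of the repository): the patch is meant to be applied before the LLaMA model is built so that every `LlamaAttention.forward` call goes through the FlashAttention path. The checkpoint name below is only a placeholder, and a FlashAttention-capable GPU is assumed.

```python
import torch

from llava.train.llama_flash_attn_monkey_patch import replace_llama_attn_with_flash_attn

# Patch the class attributes first; any LlamaAttention used afterwards
# runs the FlashAttention forward defined above.
replace_llama_attn_with_flash_attn()

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-13b-v1.5",       # placeholder base checkpoint
    torch_dtype=torch.float16,
).cuda()
```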
/model/forgery_analyst/llava/train/llama_xformers_attn_monkey_patch.py:
--------------------------------------------------------------------------------
1 | """
2 | Directly copied the code from https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/modules/llama_attn_hijack.py and made some adjustments
3 | """
4 |
5 | import logging
6 | import math
7 | from typing import Optional, Tuple
8 |
9 | import torch
10 | import transformers.models.llama.modeling_llama
11 | from torch import nn
12 |
13 | try:
14 | import xformers.ops
15 | except ImportError:
16 | logging.error("xformers not found! Please install it before trying to use it.")
17 |
18 |
19 | def replace_llama_attn_with_xformers_attn():
20 | transformers.models.llama.modeling_llama.LlamaAttention.forward = xformers_forward
21 |
22 |
23 | def xformers_forward(
24 | self,
25 | hidden_states: torch.Tensor,
26 | attention_mask: Optional[torch.Tensor] = None,
27 | position_ids: Optional[torch.LongTensor] = None,
28 | past_key_value: Optional[Tuple[torch.Tensor]] = None,
29 | output_attentions: bool = False,
30 | use_cache: bool = False,
31 | ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
32 | # pylint: disable=duplicate-code
33 | bsz, q_len, _ = hidden_states.size()
34 |
35 | query_states = (
36 | self.q_proj(hidden_states)
37 | .view(bsz, q_len, self.num_heads, self.head_dim)
38 | .transpose(1, 2)
39 | )
40 | key_states = (
41 | self.k_proj(hidden_states)
42 | .view(bsz, q_len, self.num_heads, self.head_dim)
43 | .transpose(1, 2)
44 | )
45 | value_states = (
46 | self.v_proj(hidden_states)
47 | .view(bsz, q_len, self.num_heads, self.head_dim)
48 | .transpose(1, 2)
49 | )
50 |
51 | kv_seq_len = key_states.shape[-2]
52 | if past_key_value is not None:
53 | kv_seq_len += past_key_value[0].shape[-2]
54 | cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
55 | (
56 | query_states,
57 | key_states,
58 | ) = transformers.models.llama.modeling_llama.apply_rotary_pos_emb(
59 | query_states, key_states, cos, sin, position_ids
60 | )
61 | # [bsz, nh, t, hd]
62 |
63 | if past_key_value is not None:
64 | # reuse k, v, self_attention
65 | key_states = torch.cat([past_key_value[0], key_states], dim=2)
66 | value_states = torch.cat([past_key_value[1], value_states], dim=2)
67 |
68 | past_key_value = (key_states, value_states) if use_cache else None
69 |
70 | # We only apply xformers optimizations if we don't need to output the whole attention matrix
71 | if not output_attentions:
72 | query_states = query_states.transpose(1, 2)
73 | key_states = key_states.transpose(1, 2)
74 | value_states = value_states.transpose(1, 2)
75 |
76 | # This is a nasty hack. We know attention_mask in transformers is either LowerTriangular or all Zeros.
77 | # We therefore check if one element in the upper triangular portion is zero. If it is, then the mask is all zeros.
78 | if attention_mask is None or attention_mask[0, 0, 0, 1] == 0:
79 | # input and output should be of form (bsz, q_len, num_heads, head_dim)
80 | attn_output = xformers.ops.memory_efficient_attention(
81 | query_states, key_states, value_states, attn_bias=None
82 | )
83 | else:
84 | # input and output should be of form (bsz, q_len, num_heads, head_dim)
85 | attn_output = xformers.ops.memory_efficient_attention(
86 | query_states,
87 | key_states,
88 | value_states,
89 | attn_bias=xformers.ops.LowerTriangularMask(),
90 | )
91 | attn_weights = None
92 | else:
93 | attn_weights = torch.matmul(
94 | query_states, key_states.transpose(2, 3)
95 | ) / math.sqrt(self.head_dim)
96 |
97 | if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
98 | raise ValueError(
99 |                 f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
100 | f" {attn_weights.size()}"
101 | )
102 |
103 | if attention_mask is not None:
104 | if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
105 | raise ValueError(
106 | f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
107 | )
108 | attn_weights = attn_weights + attention_mask
109 | attn_weights = torch.max(
110 | attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)
111 | )
112 |
113 | # upcast attention to fp32
114 | attn_weights = nn.functional.softmax(
115 | attn_weights, dim=-1, dtype=torch.float32
116 | ).to(query_states.dtype)
117 | attn_output = torch.matmul(attn_weights, value_states)
118 |
119 | if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
120 | raise ValueError(
121 | f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
122 | f" {attn_output.size()}"
123 | )
124 |
125 | attn_output = attn_output.transpose(1, 2)
126 |
127 | attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
128 | attn_output = self.o_proj(attn_output)
129 | return attn_output, attn_weights, past_key_value
130 |
--------------------------------------------------------------------------------
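The check `attention_mask[0, 0, 0, 1] == 0` relies on the additive mask being either all zeros or causal (large negative values above the diagonal). A small self-contained check of that assumption:

```python
import torch

min_val = torch.finfo(torch.float32).min

# Causal additive mask: positions above the diagonal hold the large negative fill value.
causal = torch.triu(torch.full((1, 1, 4, 4), min_val), diagonal=1)
# "All zeros" mask: nothing is masked out.
all_zero = torch.zeros(1, 1, 4, 4)

print(bool(causal[0, 0, 0, 1] == 0))    # False -> LowerTriangularMask branch
print(bool(all_zero[0, 0, 0, 1] == 0))  # True  -> no attention bias needed
```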
/model/forgery_analyst/llava/train/train_mem.py:
--------------------------------------------------------------------------------
1 | from llava.train.train import train
2 |
3 | if __name__ == "__main__":
4 | train(attn_implementation="flash_attention_2")
5 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/train/train_xformers.py:
--------------------------------------------------------------------------------
1 | # Make it more memory efficient by monkey patching the LLaMA model with xformers attention.
2 |
3 | # Need to call this before importing transformers.
4 | from llava.train.llama_xformers_attn_monkey_patch import (
5 | replace_llama_attn_with_xformers_attn,
6 | )
7 |
8 | replace_llama_attn_with_xformers_attn()
9 |
10 | from llava.train.train import train
11 |
12 | if __name__ == "__main__":
13 | train()
14 |
--------------------------------------------------------------------------------
/model/forgery_analyst/llava/utils.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import logging
3 | import logging.handlers
4 | import os
5 | import sys
6 |
7 | import requests
8 |
9 | from llava.constants import LOGDIR
10 |
11 | server_error_msg = "**NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.**"
12 | moderation_msg = "YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES. PLEASE TRY AGAIN."
13 |
14 | handler = None
15 |
16 |
17 | def build_logger(logger_name, logger_filename):
18 | global handler
19 |
20 | formatter = logging.Formatter(
21 | fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
22 | datefmt="%Y-%m-%d %H:%M:%S",
23 | )
24 |
25 | # Set the format of root handlers
26 | if not logging.getLogger().handlers:
27 | logging.basicConfig(level=logging.INFO)
28 | logging.getLogger().handlers[0].setFormatter(formatter)
29 |
30 | # Redirect stdout and stderr to loggers
31 | stdout_logger = logging.getLogger("stdout")
32 | stdout_logger.setLevel(logging.INFO)
33 | sl = StreamToLogger(stdout_logger, logging.INFO)
34 | sys.stdout = sl
35 |
36 | stderr_logger = logging.getLogger("stderr")
37 | stderr_logger.setLevel(logging.ERROR)
38 | sl = StreamToLogger(stderr_logger, logging.ERROR)
39 | sys.stderr = sl
40 |
41 | # Get logger
42 | logger = logging.getLogger(logger_name)
43 | logger.setLevel(logging.INFO)
44 |
45 | # Add a file handler for all loggers
46 | if handler is None:
47 | os.makedirs(LOGDIR, exist_ok=True)
48 | filename = os.path.join(LOGDIR, logger_filename)
49 | handler = logging.handlers.TimedRotatingFileHandler(
50 | filename, when='D', utc=True, encoding='UTF-8')
51 | handler.setFormatter(formatter)
52 |
53 | for name, item in logging.root.manager.loggerDict.items():
54 | if isinstance(item, logging.Logger):
55 | item.addHandler(handler)
56 |
57 | return logger
58 |
59 |
60 | class StreamToLogger(object):
61 | """
62 | Fake file-like stream object that redirects writes to a logger instance.
63 | """
64 | def __init__(self, logger, log_level=logging.INFO):
65 | self.terminal = sys.stdout
66 | self.logger = logger
67 | self.log_level = log_level
68 | self.linebuf = ''
69 |
70 | def __getattr__(self, attr):
71 | return getattr(self.terminal, attr)
72 |
73 | def write(self, buf):
74 | temp_linebuf = self.linebuf + buf
75 | self.linebuf = ''
76 | for line in temp_linebuf.splitlines(True):
77 | # From the io.TextIOWrapper docs:
78 | # On output, if newline is None, any '\n' characters written
79 | # are translated to the system default line separator.
80 | # By default sys.stdout.write() expects '\n' newlines and then
81 | # translates them so this is still cross platform.
82 | if line[-1] == '\n':
83 | self.logger.log(self.log_level, line.rstrip())
84 | else:
85 | self.linebuf += line
86 |
87 | def flush(self):
88 | if self.linebuf != '':
89 | self.logger.log(self.log_level, self.linebuf.rstrip())
90 | self.linebuf = ''
91 |
92 |
93 | def disable_torch_init():
94 | """
95 | Disable the redundant torch default initialization to accelerate model creation.
96 | """
97 | import torch
98 | setattr(torch.nn.Linear, "reset_parameters", lambda self: None)
99 | setattr(torch.nn.LayerNorm, "reset_parameters", lambda self: None)
100 |
101 |
102 | def violates_moderation(text):
103 | """
104 | Check whether the text violates OpenAI moderation API.
105 | """
106 | url = "https://api.openai.com/v1/moderations"
107 | headers = {"Content-Type": "application/json",
108 | "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"]}
109 | text = text.replace("\n", "")
110 | data = "{" + '"input": ' + f'"{text}"' + "}"
111 | data = data.encode("utf-8")
112 | try:
113 | ret = requests.post(url, headers=headers, data=data, timeout=5)
114 | flagged = ret.json()["results"][0]["flagged"]
115 | except requests.exceptions.RequestException as e:
116 | flagged = False
117 | except KeyError as e:
118 | flagged = False
119 |
120 | return flagged
121 |
122 |
123 | def pretty_print_semaphore(semaphore):
124 | if semaphore is None:
125 | return "None"
126 | return f"Semaphore(value={semaphore._value}, locked={semaphore.locked()})"
127 |
--------------------------------------------------------------------------------
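For orientation, a short sketch of how the serving scripts typically combine these helpers; the log file name and semaphore size are illustrative only.

```python
import asyncio

from llava.utils import build_logger, disable_torch_init, pretty_print_semaphore

logger = build_logger("model_worker", "model_worker.log")  # rotating file handler under LOGDIR
disable_torch_init()  # skip default weight init; pretrained weights overwrite it anyway

model_semaphore = asyncio.Semaphore(5)  # mirrors --limit-model-concurrency
logger.info(pretty_print_semaphore(model_semaphore))
```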
/prompt/data_engine_prompt.py:
--------------------------------------------------------------------------------
1 |
2 | DATA_ENGINE_PROMPT = "You are a rigorous and responsible image tampering (altering) detection expert. " \
3 | "You can localize the exact tampered region and analyze your detection decision according to tampering clues at different levels. " \
4 |                     "Assuming that you have detected that this is a tampered image and the manipulation type is [MANIPULATION_TYPE], " \
5 | "the exact tampered region boundary is highlighted with color in this image (and your detection IS correct).\n" \
6 | "Please provide the chain-of-clues supporting your detection decision in the following style: " \
7 | "# high-level semantic anomalies (such as content contrary to common sense, inciting and misleading content), " \
8 | "# middle-level visual defects (such as traces of tampered region or boundary, lighting inconsistency, perspective relationships, and physical constraints) and " \
9 |                     "# low-level pixel statistics (such as noise, color, texture, sharpness, and AI-generation fingerprint), " \
10 | "where the high-level anomalies are significant doubts worth attention, and the middle-level and low-level findings are reliable evidence."
11 |
--------------------------------------------------------------------------------
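`run_engine.py` fills the `[MANIPULATION_TYPE]` placeholder before prepending the image token; a stripped-down sketch of that substitution:

```python
from prompt.data_engine_prompt import DATA_ENGINE_PROMPT

manipulation_type = "copy-move"  # one of: photoshop, copy-move, remove, AI-generate
question = DATA_ENGINE_PROMPT.replace("[MANIPULATION_TYPE]", manipulation_type)
print(question[:120])
```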
/prompt/real_analysis_text.py:
--------------------------------------------------------------------------------
1 |
2 | REAL_ANALYSIS_TEXT_LIST = [
3 | "The following analysis affirms the authenticity of the image, with observations categorized into high-level semantic coherence, " \
4 | "middle-level visual consistency, and low-level pixel statistics.\n\n" \
5 | "# High-Level Semantic Coherence\n\n" \
6 | "1. Alignment with Common Sense\n\n" \
7 | "[DETAILED-CAPTION]\n" \
8 | "The content is entirely plausible and aligns with real-world scenarios. The scene authentically reflects a natural and non-misleading setting.\n\n" \
9 | "# Middle-Level Visual Consistency\n\n" \
10 | "1. Absence of Boundary Traces or Irregularities\n\n" \
11 | "All regions of the image exhibit smooth transitions and natural continuity.\n\n" \
12 | "2. Coherent Lighting\n\n" \
13 | "The lighting across the image is consistent, with shadows, highlights, and reflections properly aligned to the light source.\n\n" \
14 | "3. Harmonious Perspective\n\n" \
15 | "The size, scale, and orientation of all elements are consistent with natural perspective rules. Spatial relationships between objects are logical.\n\n" \
16 | "4. Adherence to Physical Constraints\n\n" \
17 | "All interactions and arrangements of objects follow physical laws, such as gravity and balance.\n\n" \
18 | "# Low-Level Pixel Statistics\n\n" \
19 | "1. Uniform Color\n\n" \
20 | "The colors and tones are cohesive, with smooth gradients and consistent blending across the scene.\n\n" \
21 | "2. Homogeneous Texture and Sharpness\n\n" \
22 | "The texture and sharpness are evenly distributed, with no areas appearing artificially smoothed, grainy, or oversharpened.",
23 |
24 | "The following analysis supports the authenticity of the image, categorizing observations into high-level semantic coherence, " \
25 | "middle-level visual consistency, and low-level pixel statistics.\n\n" \
26 | "# High-Level Semantic Coherence\n\n" \
27 | "## Consistency with Common Sense\n\n" \
28 | "[DETAILED-CAPTION]\n" \
29 | "The image depicts an entirely plausible scenario that aligns with real-world expectations. The content reflects a natural and truthful setting with no misleading elements.\n\n" \
30 | "# Middle-Level Visual Consistency\n\n" \
31 | "## Consistent Lighting\n\n" \
32 | "The lighting across the image is coherent, with highlights and reflections consistently matching the direction of the light source.\n\n" \
33 | "## Compliance with Physical Constraints\n\n" \
34 | "The interactions and placements of objects adhere to physical laws, such as gravity and balance, ensuring that the scene is plausible in a real-world context.\n\n" \
35 | "## Consistent Perspective\n\n" \
36 | "The spatial relationships between elements are logical and free from distortion.\n\n" \
37 | "# Low-Level Pixel Statistics\n\n" \
38 | "## Cohesive Color Distribution\n\n" \
39 | "The colors and tones in the image are harmoniously distributed and align with the environment.\n\n" \
40 | "## Consistent Noise Patterns\n\n" \
41 | "The noise distribution across the image is uniform, with no abrupt changes or localized discrepancies that would indicate editing."
42 | ]
43 |
--------------------------------------------------------------------------------
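These templates are completed by `run_sharecaptioner.py`, which swaps a ShareCaptioner caption into the `[DETAILED-CAPTION]` slot; a minimal sketch with a made-up caption:

```python
import random

from prompt.real_analysis_text import REAL_ANALYSIS_TEXT_LIST

caption = "A wooden table with a cup of coffee next to an open notebook."  # made-up caption
analysis = random.choice(REAL_ANALYSIS_TEXT_LIST).replace("[DETAILED-CAPTION]", caption)
print(analysis[:300])
```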
/requirements.txt:
--------------------------------------------------------------------------------
1 | # This file may be used to create an environment using:
2 | # $ conda create --name <env> --file <this file>
3 |
4 | accelerate=0.33.0
5 | deepspeed=0.14.5
6 | gradio=3.39.0
7 | huggingface-hub=0.24.5
8 | mpi4py=4.0.0
9 | numpy=1.24.2
10 | openai=0.27.8
11 | opencv-python=4.8.0.74
12 | packaging=24.1
13 | pandas=2.2.2
14 | peft=0.4.0
15 | pillow=9.4.0
16 | requests=2.31.0
17 | scipy=1.11.2
18 | torch=2.4.0
19 | torchvision=0.19.0
20 | transformers=4.31.0
21 |
--------------------------------------------------------------------------------
/run_engine.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import os
3 | import sys
4 |
5 | import cv2
6 | import numpy as np
7 |
8 | import torch
9 |
10 | from PIL import Image
11 |
12 | from utils.utils import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
13 |
14 | from prompt.data_engine_prompt import DATA_ENGINE_PROMPT
15 |
16 | from model.forgery_analyst.llava.conversation import conv_templates
17 | from model.forgery_analyst.llava.utils import disable_torch_init
18 | from model.forgery_analyst.llava.model.builder import load_pretrained_model
19 | from model.forgery_analyst.llava.mm_utils import process_images, tokenizer_image_token, get_model_name_from_path
20 |
21 |
22 | def parse_args(args):
23 | parser = argparse.ArgumentParser()
24 |
25 | parser.add_argument("--model-path", type=str, default="Zhihao18/ForgeryAnalyst-llava-13B")
26 |
27 | parser.add_argument("--image-path", type=str, default=None)
28 | parser.add_argument("--mask-path", type=str, default=None)
29 | parser.add_argument("--output-path", type=str, default=None)
30 |
31 | parser.add_argument("--manipulation-type", type=str, default='photoshop',
32 | choices=['photoshop', 'copy-move', 'remove', 'AI-generate'])
33 |
34 | parser.add_argument("--temperature", type=float, default=0.2)
35 | parser.add_argument("--top_p", type=float, default=None)
36 | parser.add_argument("--num_beams", type=int, default=1)
37 | parser.add_argument("--max_new_tokens", type=int, default=2048)
38 |
39 | return parser.parse_args(args)
40 |
41 |
42 | def highlight_forgery_boundary(image_path, mask_path, thickness=5):
43 | image = cv2.imread(image_path)
44 |
45 | (B, G, R) = cv2.split(image)
46 | sum_B, sum_G, sum_R = np.sum(B), np.sum(G), np.sum(R)
47 |
48 | min_channel = min(('R', sum_R), ('G', sum_G), ('B', sum_B), key=lambda x: x[1])
49 | color_dict = {'B': [255, 0, 0], 'G': [0, 255, 0], 'R': [0, 0, 255]}
50 | color = color_dict[min_channel[0]]
51 |
52 | mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
53 | mask = cv2.resize(mask, (image.shape[1], image.shape[0]))
54 | _, mask = cv2.threshold(mask, 32, 255, cv2.THRESH_BINARY)
55 |
56 | kernel = np.ones((5, 5), np.uint8)
57 | mask = cv2.dilate(mask, kernel, iterations=5)
58 |
59 | # Create a new mask to mark the outer boundary touching areas
60 | outer_boundary_touching_mask = np.zeros_like(mask)
61 |
62 | # Mark pixels at the outer boundary in the mask
63 | outer_boundary_touching_mask[0, :] = mask[0, :] # Top row
64 | outer_boundary_touching_mask[-1, :] = mask[-1, :] # Bottom row
65 | outer_boundary_touching_mask[:, 0] = mask[:, 0] # Left column
66 | outer_boundary_touching_mask[:, -1] = mask[:, -1] # Right column
67 |
68 | outer_boundary = cv2.Canny(outer_boundary_touching_mask, threshold1=100, threshold2=200)
69 | outer_boundary_contours, _ = cv2.findContours(outer_boundary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
70 | cv2.drawContours(image, outer_boundary_contours, -1, color, thickness)
71 |
72 | boundary = cv2.Canny(mask, threshold1=100, threshold2=200)
73 | boundary_contours, _ = cv2.findContours(boundary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
74 | cv2.drawContours(image, boundary_contours, -1, color, thickness)
75 |
76 | image_hb = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
77 |
78 | return image_hb
79 |
80 |
81 | def prepare_data(image_path, mask_path, manipulation_type='photoshop'):
82 | image = [highlight_forgery_boundary(image_path, mask_path)]
83 |
84 | default_question = DATA_ENGINE_PROMPT
85 | question = default_question.replace('[MANIPULATION_TYPE]', manipulation_type)
86 | question = DEFAULT_IMAGE_TOKEN + '\n' + question
87 |
88 | conv = conv_templates['llava_v1'].copy()
89 | conv.append_message(conv.roles[0], question)
90 | conv.append_message(conv.roles[1], None)
91 |
92 | prompt = conv.get_prompt()
93 |
94 | return image, prompt
95 |
96 |
97 | def main(args):
98 | args = parse_args(args)
99 |
100 | disable_torch_init()
101 |
102 | tokenizer, model, image_processor, _ = load_pretrained_model(
103 | args.model_path, None, get_model_name_from_path(args.model_path)
104 | )
105 |
106 | if args.image_path and os.path.exists(args.image_path):
107 | image_path = args.image_path
108 | else:
109 | image_path = input("Please enter the path to the image file: ")
110 |
111 | if args.mask_path and os.path.exists(args.mask_path):
112 | mask_path = args.mask_path
113 | else:
114 | mask_path = input("Please enter the path to the forgery mask file: ")
115 |
116 | image, prompt = prepare_data(image_path, mask_path, args.manipulation_type)
117 |
118 | image_size = [x.size for x in image]
119 | image_tensor = process_images(image, image_processor, model.config)
120 | image_tensor = image_tensor.to(model.device, dtype=torch.float16)
121 |
122 | input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0)
123 | input_ids = input_ids.to(model.device)
124 |
125 | with torch.inference_mode():
126 | output_ids = model.generate(
127 | inputs=input_ids,
128 | images=image_tensor,
129 | image_sizes=image_size,
130 | do_sample=True if args.temperature > 0 else False,
131 | temperature=args.temperature,
132 | top_p=args.top_p,
133 | num_beams=args.num_beams,
134 | max_new_tokens=args.max_new_tokens,
135 | use_cache=True,
136 | )
137 |
138 | outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
139 |
140 | if args.output_path:
141 | if os.path.exists(args.output_path):
142 | print(f"File {args.output_path} already exists.")
143 | else:
144 |             os.makedirs(os.path.dirname(args.output_path) or '.', exist_ok=True)
145 | with open(args.output_path, 'w') as f:
146 | f.write(outputs)
147 |
148 | print(outputs)
149 |
150 |
151 | if __name__ == "__main__":
152 | main(sys.argv[1:])
--------------------------------------------------------------------------------
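A small sketch (assuming it is executed from the repository root with the dependencies installed) that exercises `highlight_forgery_boundary` on a synthetic image and mask, without loading the language model:

```python
import cv2
import numpy as np

from run_engine import highlight_forgery_boundary

# Synthetic inputs: a flat gray image and a rectangular "tampered" region.
image = np.full((256, 256, 3), 180, dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.rectangle(mask, (80, 80), (180, 160), 255, thickness=-1)

cv2.imwrite("demo_image.png", image)
cv2.imwrite("demo_mask.png", mask)

highlighted = highlight_forgery_boundary("demo_image.png", "demo_mask.png", thickness=3)
highlighted.save("demo_highlighted.png")  # boundary drawn in the least-used color channel
```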
/run_sharecaptioner.py:
--------------------------------------------------------------------------------
1 | # https://github.com/ShareGPT4Omni/ShareGPT4V/blob/master/tools/share-cap_batch_infer.py
2 |
3 | import argparse
4 | import random
5 | import os
6 | import sys
7 |
8 | import torch
9 |
10 | from PIL import Image
11 | from transformers import AutoModelForCausalLM, AutoTokenizer
12 |
13 | from prompt.real_analysis_text import REAL_ANALYSIS_TEXT_LIST
14 |
15 |
16 | def parse_args(args):
17 | parser = argparse.ArgumentParser()
18 |
19 | parser.add_argument("--model-path", type=str, default="Lin-Chen/ShareCaptioner")
20 |
21 | parser.add_argument("--image-path", type=str, default=None)
22 | parser.add_argument("--output-path", type=str, default=None)
23 |
24 | parser.add_argument("--num_gpus", default=1, type=int)
25 |
26 | return parser.parse_args(args)
27 |
28 |
29 | def auto_configure_device_map(num_gpus):
30 | num_trans_layers = 32
31 | per_gpu_layers = 38 / num_gpus
32 |
33 | device_map = {
34 | 'visual_encoder': 0,
35 | 'ln_vision': 0,
36 | 'Qformer': 0,
37 | 'internlm_model.model.embed_tokens': 0,
38 | 'internlm_model.model.norm': 0,
39 | 'internlm_model.lm_head': 0,
40 | 'query_tokens': 0,
41 | 'flag_image_start': 0,
42 | 'flag_image_end': 0,
43 | 'internlm_proj': 0,
44 | }
45 |
46 | used = 6
47 | gpu_target = 0
48 |
49 | for i in range(num_trans_layers):
50 | if used >= per_gpu_layers:
51 | gpu_target += 1
52 | used = 0
53 | assert gpu_target < num_gpus
54 | device_map[f'internlm_model.model.layers.{i}'] = gpu_target
55 | used += 1
56 |
57 | return device_map
58 |
59 |
60 | def main(args):
61 | args = parse_args(args)
62 |
63 | # You can download ShareCaptioner in advance,
64 | # and use `local_files_only=True` to force the use of local weights,
65 | # avoiding potential network issues.
66 |
67 | tokenizer = AutoTokenizer.from_pretrained(
68 | args.model_path, trust_remote_code=True)
69 | model = AutoModelForCausalLM.from_pretrained(
70 | args.model_path, trust_remote_code=True).eval().half()
71 |
72 | if args.num_gpus > 1:
73 | from accelerate import dispatch_model
74 | device_map = auto_configure_device_map(args.num_gpus)
75 | model = dispatch_model(model, device_map=device_map)
76 | else:
77 | model.cuda()
78 |
79 | model.tokenizer = tokenizer
80 |
81 | if args.image_path and os.path.exists(args.image_path):
82 | image_path = args.image_path
83 | else:
84 | image_path = input("Please enter the path to the image file: ")
85 |
86 | image = Image.open(image_path).convert('RGB')
87 |
88 | prompt_seg1 = '<|User|>:'
89 | prompt_seg2 = f'Analyze the image in a comprehensive and detailed manner.{model.eoh}\n<|Bot|>:'
90 |
91 | with torch.no_grad():
92 | image = model.vis_processor(image).unsqueeze(0)
93 | image = model.encode_img(image.to(torch.float16))
94 |
95 | prompt_emb1 = model.encode_text(prompt_seg1, add_special_tokens=True).unsqueeze(0)
96 | prompt_emb2 = model.encode_text(prompt_seg2, add_special_tokens=False).unsqueeze(0)
97 |
98 | input = torch.cat([prompt_emb1, image, prompt_emb2], dim=1)
99 |
100 | out_embeds = model.internlm_model.generate(
101 | inputs_embeds=input,
102 | max_length=512,
103 | num_beams=3,
104 | min_length=1,
105 | do_sample=True,
106 | repetition_penalty=1.5,
107 | length_penalty=1.0,
108 | temperature=1.,
109 | eos_token_id=model.tokenizer.eos_token_id,
110 | num_return_sequences=1,
111 | )
112 | caption = model.decode_text(out_embeds)
113 | caption = caption.replace('\n', '')
114 |
115 | analysis = random.choice(REAL_ANALYSIS_TEXT_LIST)
116 |     analysis = analysis.replace('[DETAILED-CAPTION]', caption)
117 |
118 | if args.output_path:
119 | if os.path.exists(args.output_path):
120 | print(f"File {args.output_path} already exists.")
121 | else:
122 |             os.makedirs(os.path.dirname(args.output_path) or '.', exist_ok=True)
123 | with open(args.output_path, 'w') as f:
124 | f.write(analysis)
125 |
126 | print(analysis)
127 |
128 |
129 | if __name__ == "__main__":
130 | main(sys.argv[1:])
--------------------------------------------------------------------------------
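`auto_configure_device_map` keeps the vision tower, embeddings, and heads on GPU 0 and spreads the 32 InternLM blocks across the available GPUs. A quick sketch to inspect the resulting mapping (assumes the repository root is on the Python path):

```python
from run_sharecaptioner import auto_configure_device_map

device_map = auto_configure_device_map(num_gpus=2)

per_gpu = {}
for module, gpu in device_map.items():
    per_gpu.setdefault(gpu, []).append(module)

for gpu, modules in sorted(per_gpu.items()):
    print(f"cuda:{gpu} -> {len(modules)} modules")
```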
/src/teaser.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunzhihao18/ForgerySleuth/fa5f8b552a3e44c91dcaa202f4a74d25d79397dc/src/teaser.png
--------------------------------------------------------------------------------
/utils/utils.py:
--------------------------------------------------------------------------------
1 |
2 | IMAGE_TOKEN_INDEX = -200
3 |
4 | DEFAULT_IMAGE_TOKEN = "<image>"
--------------------------------------------------------------------------------
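These two constants match the corresponding values in LLaVA's `llava/constants.py`: the textual `<image>` placeholder marks where the image goes in the prompt, and `IMAGE_TOKEN_INDEX` is the sentinel id that `tokenizer_image_token` writes into `input_ids` at that position. A trivial illustration:

```python
from utils.utils import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX

question = "Analyze the tampered region in this image."
prompt = DEFAULT_IMAGE_TOKEN + "\n" + question  # same pattern as prepare_data() in run_engine.py

print(prompt.splitlines()[0])  # "<image>"
print(IMAGE_TOKEN_INDEX)       # -200
```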