├── LICENSE ├── README.md ├── VERITE ├── VERITE.csv ├── VERITE_articles.csv ├── VERITE_clip_image_embeddings_ViTL14.npy ├── VERITE_clip_text_embeddings_ViTL14.npy ├── figments.png └── verite.png ├── experiment.py ├── extract_features.py ├── main.py ├── model.py ├── prepare_datasets.py ├── requirements.txt └── utils.py /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright 2023 Stefanos-Iordanis Papadopoulos 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # image-text-verification 2 | 3 | Official repository for the "VERITE: a Robust benchmark for multimodal misinformation detection accounting for unimodal bias" paper which was published at the [International Journal of Multimedia Information Retrieval](https://link.springer.com/article/10.1007/s13735-023-00312-6). 
4 | 5 | ## Abstract 6 | >*Multimedia content has become ubiquitous on social media platforms, leading to the rise of multimodal misinformation (MM) and the urgent need for effective strategies to detect and prevent its spread. In recent years, the challenge of multimodal misinformation detection (MMD) has garnered significant attention from researchers and has mainly involved the creation of annotated, weakly annotated, or synthetically generated training datasets, along with the development of various deep learning MMD models. However, the problem of unimodal bias in MMD benchmarks -where biased or unimodal methods outperform their multimodal counterparts on an inherently multimodal task- has been overlooked. In this study, we systematically investigate and identify the presence of unimodal bias in widely-used MMD benchmarks (VMU-Twitter, COSMOS), raising concerns about their suitability for reliable evaluation. To address this issue, we introduce the “VERification of Image-TExt pairs” (VERITE) benchmark for MMD, which incorporates real-world data, excludes “asymmetric multimodal misinformation” and utilizes “modality balancing”. We conduct an extensive comparative study with a Transformer-based architecture that shows the ability of VERITE to effectively address unimodal bias, rendering it a robust evaluation framework for MMD. Furthermore, we introduce a new method -termed Crossmodal HArd Synthetic MisAlignment (CHASMA)- for generating realistic synthetic training data that preserve crossmodal relations between legitimate images and false human-written captions. By leveraging CHASMA in the training process, we observe consistent and notable improvements in predictive performance on VERITE, with a 9.2% increase in accuracy.* 7 | 8 | This repository also reproduces the methods presented in [Synthetic Misinformers: Generating and Combating Multimodal Misinformation](https://dl.acm.org/doi/fullHtml/10.1145/3592572.3592842). 9 | 10 | If you find our work useful, please cite: 11 | ``` 12 | @article{papadopoulos2024verite, 13 | title={VERITE: a Robust benchmark for multimodal misinformation detection accounting for unimodal bias}, 14 | author={Papadopoulos, Stefanos-Iordanis and Koutlis, Christos and Papadopoulos, Symeon and Petrantonakis, Panagiotis C}, 15 | journal={International Journal of Multimedia Information Retrieval}, 16 | volume={13}, 17 | number={1}, 18 | pages={4}, 19 | year={2024}, 20 | publisher={Springer} 21 | } 22 | 23 | @inproceedings{papadopoulos2023synthetic, 24 | title={Synthetic Misinformers: Generating and Combating Multimodal Misinformation}, 25 | author={Papadopoulos, Stefanos-Iordanis and Koutlis, Christos and Papadopoulos, Symeon and Petrantonakis, Panagiotis}, 26 | booktitle={Proceedings of the 2nd ACM International Workshop on Multimedia AI against Disinformation}, 27 | pages={36--44}, 28 | year={2023} 29 | } 30 | ``` 31 | 32 | ## Preparation 33 | 34 | - Clone this repo: 35 | ``` 36 | git clone https://github.com/stevejpapad/image-text-verification 37 | cd image-text-verification 38 | ``` 39 | 40 | - Create a Python (>= 3.8) environment (Anaconda is recommended) 41 | 42 | - Install all dependencies with: `pip install -r requirements.txt`. If needed, follow the [instructions](https://github.com/openai/CLIP) to install CLIP. 43 | 44 | ## VERITE Benchmark 45 | 46 | VERITE is a benchmark dataset designed for evaluating fine-grained crossmodal misinformation detection models.
This dataset consists of real-world instances of misinformation collected from Snopes and Reuters, and it addresses unimodal bias by excluding asymmetric misinformation and employing modality balancing. Modality balancing means that each image and each caption appears twice, once in a truthful and once in a misleading pair, ensuring that a model has to consider both modalities when distinguishing between truth and misinformation. 47 | 48 | ![Screenshot](VERITE/verite.png) 49 | 50 | The images are sourced from within the articles of Snopes and Reuters, as well as Google Images. We do not provide the images, only their URLs. 51 | VERITE supports multiclass classification of three categories: Truthful, Out-of-context, and Miscaptioned image-caption pairs, but it can also be used for binary classification. 52 | We collected 260 articles from Snopes and 78 from Reuters that met our criteria, which translates to 338 Truthful, 338 Miscaptioned and 324 Out-of-Context pairs. 53 | Please note that this dataset is intended solely for research purposes. 54 | 55 | - If you are only interested in the VERITE benchmark, we provide the processed dataset and the visual and textual features from CLIP ViT-L/14 in `/VERITE`. 56 | - If you also want to download the images from the provided URLs, you can run the following code: 57 | ```python 58 | from prepare_datasets import prepare_verite 59 | prepare_verite(download_images=True) 60 | ``` 61 | The above code reads the "VERITE_articles.csv" file, which comprises seven columns `['id', 'true_url', 'false_caption', 'true_caption', 'false_url', 'query', 'snopes_url']` and 337 rows. 62 | After downloading the images, the dataset is "un-packed" into separate instances per class: true, miscaptioned, out-of-context. 63 | The final "VERITE.csv" consists of three columns `['caption', 'image_path', 'label']` and 1000 rows.
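For reference, a minimal sketch of loading the processed benchmark and the precomputed CLIP ViT-L/14 features with pandas and NumPy (file names as listed in the `/VERITE` folder; note that feature extraction skips rows whose image is missing or whose caption cannot be tokenized, so the embedding arrays are not guaranteed to contain exactly one row per CSV entry):
```python
import numpy as np
import pandas as pd

# Processed benchmark: one image-caption pair per row, columns ['caption', 'image_path', 'label']
verite_df = pd.read_csv("VERITE/VERITE.csv", index_col=0)
print(verite_df.label.value_counts())

# Precomputed CLIP ViT-L/14 embeddings (768-dimensional vectors)
image_embeddings = np.load("VERITE/VERITE_clip_image_embeddings_ViTL14.npy")
text_embeddings = np.load("VERITE/VERITE_clip_text_embeddings_ViTL14.npy")
print(image_embeddings.shape, text_embeddings.shape)
```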
64 | Sample from "VERITE.csv": 65 | ```python 66 | {'caption': 'Image shows a damaged railway bridge collapsed into a body of water in June 2020 in Murmansk, Russia.', 67 | 'image_path': 'images/true_239.jpg', 68 | 'label': 'true'} 69 | {'caption': 'Image shows a damaged railway bridge collapsed into a body of water in 2022 during the Russia-Ukraine war.', 70 | 'image_path': 'images/true_239.jpg', 71 | 'label': 'miscaptioned'} 72 | {'caption': 'Image shows a damaged railway bridge collapsed into a body of water in June 2020 in Murmansk, Russia.', 73 | 'image_path': 'images/false_239.jpg', 74 | 'label': 'out-of-context'} 75 | ``` 76 | 77 | - To extract visual and textual features from VERITE with CLIP ViT-L/14, run the following code: 78 | ```python 79 | from extract_features import extract_CLIP_features 80 | extract_CLIP_features(data_path='VERITE/', output_path='VERITE/VERITE_') 81 | ``` 82 | 83 | If you encounter any problems while downloading and preparing VERITE (e.g., broken image URLs), please contact stefpapad@iti.gr 84 | 85 | ## Prerequisites 86 | If you want to reproduce the experiments in the paper, it is necessary to first download the following datasets and save them in their respective folders: 87 | - COSMOS test set -> https://github.com/shivangi-aneja/COSMOS -> `/COSMOS` 88 | - Fakeddit dataset -> https://github.com/entitize/Fakeddit -> `/Fakeddit` 89 | - MEIR -> https://github.com/Ekraam/MEIR -> `/MEIR` 90 | - VisualNews -> https://github.com/FuxiaoLiu/VisualNews-Repository -> `/VisualNews` 91 | - NewsCLIPpings -> https://github.com/g-luo/news_clippings -> `/news_clippings` 92 | - Image-verification-corpus (Twitter dataset) -> https://github.com/MKLab-ITI/image-verification-corpus -> `/Twitter` 93 | 94 | ## Reproducibility 95 | All experiments from the paper can be reproduced by running 96 | ```python main.py``` 97 | which prepares the datasets, extracts the CLIP features and runs all experiments.
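Each run stores its hyper-parameters and its validation, test, COSMOS and VERITE scores under `results/`, in a file named `results_verite` for binary and `results_verite_multiclass` for multiclass experiments. The exact file format is determined by `save_results_csv` in `utils.py`; below is a minimal sketch for inspecting the results, assuming a comma-separated file with a `.csv` suffix:
```python
import pandas as pd

# Assumption: save_results_csv writes comma-separated rows to results/results_verite.csv
results = pd.read_csv("results/results_verite.csv")

# Columns prefixed with "verite_" hold the scores on the VERITE benchmark
print(results.filter(like="verite_").head())
```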
98 | 99 | ## Usage 100 | - To extract visual and/or textual features with CLIP ViT-L/14, run: 101 | ```python 102 | from extract_features import extract_CLIP_features 103 | extract_CLIP_features(data_path=INPUT_PATH, output_path=OUTPUT_PATH) 104 | ``` 105 | 106 | - To prepare the *CHASMA* dataset, run the following: 107 | ```python 108 | from prepare_datasets import prepare_Misalign 109 | prepare_Misalign(CLIP_VERSION="ViT-L/14", choose_gpu=0) 110 | ``` 111 | 112 | - To train the DT-Transformer, for example, for 30 epochs on binary classification, using the *CHASMA* dataset with both multimodal and unimodal (text-only, image-only) inputs, a learning rate of 5e-5, 4 transformer layers, 8 attention heads and a feed-forward dimension of 1024, run the following code: 113 | ```python 114 | run_experiment( 115 | dataset_methods_list = [ 116 | 'Misalign', 117 | ], 118 | modality_options = [ 119 | ["images", "texts"], 120 | ["texts"], 121 | ["images"] 122 | ], 123 | epochs=30, 124 | seed_options = [0], 125 | lr_options = [5e-5], 126 | batch_size_options = [512], 127 | tf_layers_options = [4], 128 | tf_head_options = [8], 129 | tf_dim_options = [1024], 130 | use_multiclass = False, 131 | balancing_method = None, 132 | choose_gpu = 0, 133 | ) 134 | ``` 135 | 136 | - Similarly, for fine-grained (multiclass) detection using the *CHASMA + R-NESt + NC-t2t* ensemble, run the following: 137 | ```python 138 | run_experiment( 139 | dataset_methods_list = [ 140 | 'EntitySwaps_random_topicXMisalignXnews_clippings_txt2txt', 141 | ], 142 | epochs=30, 143 | use_multiclass = True, 144 | balancing_method = 'downsample', 145 | ) 146 | ``` 147 | 148 | ## Acknowledgements 149 | This work is partially funded by the project "vera.ai: VERification Assisted by Artificial Intelligence" under grant agreement no. 101070093. 150 | 151 | ## Licence 152 | This project is licensed under the Apache License 2.0 - see the [LICENSE](https://github.com/stevejpapad/image-text-verification/blob/master/LICENSE) file for more details.
153 | 154 | ## Contact 155 | Stefanos-Iordanis Papadopoulos (stefpapad@iti.gr) 156 | -------------------------------------------------------------------------------- /VERITE/VERITE_clip_image_embeddings_ViTL14.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevejpapad/image-text-verification/7377400d3326f6d2899cd8c07867e9a09e9f8ff5/VERITE/VERITE_clip_image_embeddings_ViTL14.npy -------------------------------------------------------------------------------- /VERITE/VERITE_clip_text_embeddings_ViTL14.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevejpapad/image-text-verification/7377400d3326f6d2899cd8c07867e9a09e9f8ff5/VERITE/VERITE_clip_text_embeddings_ViTL14.npy -------------------------------------------------------------------------------- /VERITE/figments.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevejpapad/image-text-verification/7377400d3326f6d2899cd8c07867e9a09e9f8ff5/VERITE/figments.png -------------------------------------------------------------------------------- /VERITE/verite.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevejpapad/image-text-verification/7377400d3326f6d2899cd8c07867e9a09e9f8ff5/VERITE/verite.png -------------------------------------------------------------------------------- /experiment.py: -------------------------------------------------------------------------------- 1 | import time 2 | import random 3 | import os, json 4 | from PIL import Image 5 | import pandas as pd 6 | import pickle 7 | import torch 8 | import torch.nn as nn 9 | import numpy as np 10 | from sklearn import metrics 11 | from sklearn.utils import class_weight 12 | 13 | from model import DT_Transformer 14 | from utils import ( 15 | choose_best_model, 16 | early_stop, 17 | train_step, 18 | eval_step, 19 | prepare_dataloader, 20 | binary_acc, 21 | eval_cosmos, 22 | eval_verite, 23 | load_data, 24 | load_ensemble_data, 25 | save_results_csv, 26 | load_features, 27 | down_sample 28 | ) 29 | 30 | 31 | def run_experiment( 32 | dataset_methods_list, 33 | modality_options = [ 34 | ["images", "texts"], 35 | ["images", "texts", "-attention"], 36 | ["texts"], 37 | ["images"] 38 | ], 39 | choose_CLIP_version='ViT-L/14', 40 | epochs=30, 41 | seed_options = [0], 42 | lr_options = [5e-5], 43 | batch_size_options = [512], 44 | tf_layers_options = [1, 4], 45 | tf_head_options = [2, 8], 46 | tf_dim_options = [128, 1024], 47 | use_multiclass = False, # True, False 48 | balancing_method = None, # None, 'downsample', 'class_weights' 49 | choose_gpu = 0, 50 | num_workers=8, 51 | init_model_name = '' 52 | ): 53 | torch.manual_seed(0) 54 | random.seed(0) 55 | np.random.seed(0) 56 | 57 | os.environ["CUDA_VISIBLE_DEVICES"] = str(choose_gpu) 58 | device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 59 | print(device) 60 | 61 | for dataset_method in dataset_methods_list: 62 | 63 | input_parameters = { 64 | "NUM_WORKERS": num_workers, 65 | "EARLY_STOP_EPOCHS": 10, 66 | "CHOOSE_DATASET": dataset_method, 67 | "CLIP_VERSION": choose_CLIP_version, 68 | "BALANCING": 'downsample' if 'Misalign_D' in dataset_method else balancing_method 69 | } 70 | 71 | if len(dataset_method.split('X')) > 1 : 72 | train_data, valid_data, test_data = load_ensemble_data(dataset_method=dataset_method, 73 | 
use_multiclass=use_multiclass) 74 | 75 | else: 76 | train_data, valid_data, test_data = load_data(dataset_method) 77 | 78 | clip_image_embeddings, clip_text_embeddings = load_features(input_parameters) 79 | clip_version = input_parameters["CLIP_VERSION"].replace("-", "").replace("/", "") 80 | 81 | class_weights = np.array([1.0 for cl in train_data.falsified.unique()]) 82 | print(train_data.falsified.value_counts()) 83 | 84 | if input_parameters['BALANCING'] == 'downsample': 85 | train_data = down_sample(train_data) 86 | valid_data = down_sample(valid_data) 87 | test_data = down_sample(test_data) 88 | 89 | print(train_data.falsified.value_counts()) 90 | 91 | elif input_parameters['BALANCING'] == 'class_weights': 92 | class_weights = class_weight.compute_class_weight(class_weight='balanced', 93 | y=train_data.falsified, 94 | classes=sorted(train_data.falsified.unique())) 95 | print(class_weights) 96 | 97 | for use_features in modality_options: 98 | 99 | experiment = 0 100 | 101 | for seed in seed_options: 102 | 103 | if use_features == ["images", "texts"]: 104 | model_name = dataset_method + '_multimodal_' + str(seed) + init_model_name 105 | 106 | elif use_features == ["texts"]: 107 | model_name = dataset_method + '_textonly_' + str(seed) + init_model_name 108 | 109 | elif use_features == ["images"]: 110 | model_name = dataset_method + '_imageonly_' + str(seed) + init_model_name 111 | else: 112 | model_name = dataset_method + '_-attention_' + str(seed) + init_model_name 113 | 114 | torch.manual_seed(seed) 115 | 116 | print("*****", seed, use_features, dataset_method, choose_CLIP_version, model_name, "*****") 117 | 118 | for batch_size in batch_size_options: 119 | for lr in lr_options: 120 | for tf_layers in tf_layers_options: 121 | for tf_head in tf_head_options: 122 | for tf_dim in tf_dim_options: 123 | 124 | experiment += 1 125 | 126 | parameters = { 127 | "LEARNING_RATE": lr, 128 | "EPOCHS": epochs, 129 | "BATCH_SIZE": batch_size, 130 | "TF_LAYERS": tf_layers, 131 | "TF_HEAD": tf_head, 132 | "TF_DIM": tf_dim, 133 | "NUM_WORKERS": 8, 134 | "USE_FEATURES": use_features, 135 | "EARLY_STOP_EPOCHS": input_parameters["EARLY_STOP_EPOCHS"], 136 | "CHOOSE_DATASET": input_parameters["CHOOSE_DATASET"], 137 | "CLIP_VERSION": input_parameters["CLIP_VERSION"], 138 | "SEED": seed, 139 | "BALANCING": input_parameters["BALANCING"], 140 | } 141 | 142 | train_dataloader = prepare_dataloader( 143 | clip_image_embeddings, 144 | clip_text_embeddings, 145 | train_data, 146 | parameters["BATCH_SIZE"], 147 | parameters["NUM_WORKERS"], 148 | True, 149 | ) 150 | valid_dataloader = prepare_dataloader( 151 | clip_image_embeddings, 152 | clip_text_embeddings, 153 | valid_data, 154 | parameters["BATCH_SIZE"], 155 | parameters["NUM_WORKERS"], 156 | False, 157 | ) 158 | test_dataloader = prepare_dataloader( 159 | clip_image_embeddings, 160 | clip_text_embeddings, 161 | test_data, 162 | parameters["BATCH_SIZE"], 163 | parameters["NUM_WORKERS"], 164 | False, 165 | ) 166 | 167 | print("!!!!!!!!!!!!!!!!!!!", experiment, "!!!!!!!!!!!!!!!!!!!") 168 | print("!!!!!!!!!!!!!!!!!!!", parameters, "!!!!!!!!!!!!!!!!!!!") 169 | 170 | if parameters["CLIP_VERSION"] == "ViT-L/14": 171 | emb_dim_ = 768 172 | elif parameters["CLIP_VERSION"] == "ViT-B/32": 173 | emb_dim_ = 512 174 | 175 | parameters["EMB_SIZE"] = emb_dim_ 176 | if "-attention" in parameters["USE_FEATURES"]: 177 | parameters["EMB_SIZE"] = emb_dim_ * 2 178 | 179 | model = DT_Transformer( 180 | device=device, 181 | tf_layers=parameters["TF_LAYERS"], 182 | 
tf_head=parameters["TF_HEAD"], 183 | tf_dim=parameters["TF_DIM"], 184 | emb_dim=parameters["EMB_SIZE"], 185 | use_features=parameters["USE_FEATURES"], 186 | use_multiclass=use_multiclass 187 | ) 188 | 189 | model.to(device) 190 | class_weights_torch = torch.tensor(class_weights).to(device, non_blocking=True) 191 | 192 | if use_multiclass: 193 | criterion = nn.CrossEntropyLoss(weight = class_weights_torch) 194 | else: 195 | criterion = nn.BCEWithLogitsLoss() 196 | 197 | optimizer = torch.optim.Adam( 198 | model.parameters(), lr=parameters["LEARNING_RATE"] 199 | ) 200 | 201 | scheduler = torch.optim.lr_scheduler.StepLR( 202 | optimizer, step_size=30, gamma=0.1, verbose=True 203 | ) 204 | 205 | batches_per_epoch = train_dataloader.__len__() 206 | 207 | history = [] 208 | has_not_improved_for = 0 209 | 210 | PATH = "checkpoints_pt/model" + model_name + ".pt" 211 | 212 | for epoch in range(parameters["EPOCHS"]): 213 | 214 | train_step( 215 | model, 216 | train_dataloader, 217 | epoch, 218 | optimizer, 219 | criterion, 220 | device, 221 | batches_per_epoch, 222 | use_multiclass 223 | ) 224 | 225 | results = eval_step(model, valid_dataloader, epoch, device, use_multiclass=use_multiclass) 226 | history.append(results) 227 | 228 | has_not_improved_for = early_stop( 229 | has_not_improved_for, 230 | model, 231 | optimizer, 232 | history, 233 | epoch, 234 | PATH, 235 | metrics_list=["Accuracy", "F1"] if use_multiclass else ["Accuracy","AUC","Pristine","Falsified"], 236 | ) 237 | 238 | if has_not_improved_for >= parameters["EARLY_STOP_EPOCHS"]: 239 | 240 | EARLY_STOP_EPOCHS = parameters["EARLY_STOP_EPOCHS"] 241 | print( 242 | f"Performance has not improved for {EARLY_STOP_EPOCHS} epochs. Stop training at epoch {epoch}!" 243 | ) 244 | break 245 | 246 | print("Finished Training. 
Loading the best model from checkpoints.") 247 | 248 | checkpoint = torch.load(PATH) 249 | model.load_state_dict(checkpoint["model_state_dict"]) 250 | optimizer.load_state_dict(checkpoint["optimizer_state_dict"]) 251 | epoch = checkpoint["epoch"] 252 | 253 | res_val = eval_step(model, 254 | valid_dataloader, 255 | -1, 256 | device, 257 | use_multiclass=use_multiclass) 258 | 259 | res_test = eval_step(model, 260 | test_dataloader, 261 | -1, 262 | device, 263 | use_multiclass=use_multiclass) 264 | 265 | cosmos_results = eval_cosmos( 266 | model, 267 | clip_version, 268 | device, 269 | parameters["BATCH_SIZE"], 270 | parameters["NUM_WORKERS"], 271 | use_multiclass=use_multiclass 272 | ) 273 | 274 | verite_results = eval_verite( 275 | model, 276 | clip_version, 277 | device, 278 | parameters["BATCH_SIZE"], 279 | parameters["NUM_WORKERS"], 280 | use_multiclass=use_multiclass, 281 | label_map={'true': 0, 'miscaptioned': 1, 'out-of-context': 2}, 282 | ) 283 | 284 | res_val = { 285 | "valid_" + str(key.lower()): val for key, val in res_val.items() 286 | } 287 | 288 | res_test = { 289 | "test_" + str(key.lower()): val for key, val in res_test.items() 290 | } 291 | 292 | res_cosmos = { 293 | "cosmos_" + str(key): val for key, val in cosmos_results.items() 294 | } 295 | 296 | res_verite = { 297 | "verite_" + str(key): val for key, val in verite_results.items() 298 | } 299 | 300 | all_results = {**res_test, **res_val} 301 | all_results = {**res_cosmos, **all_results} 302 | all_results = {**res_verite, **all_results} 303 | all_results = {**parameters, **all_results} 304 | all_results["path"] = PATH 305 | all_results["history"] = history 306 | 307 | if not os.path.isdir("results"): 308 | os.mkdir("results") 309 | 310 | save_results_csv( 311 | "results/", 312 | "results_verite_multiclass" if use_multiclass else 'results_verite', 313 | all_results, 314 | ) -------------------------------------------------------------------------------- /extract_features.py: -------------------------------------------------------------------------------- 1 | import os 2 | import clip 3 | import json 4 | import torch 5 | import numpy as np 6 | import pandas as pd 7 | from PIL import Image 8 | from tqdm import tqdm 9 | from string import digits 10 | 11 | 12 | def preprocess_input(path, input_caption, preprocess, device): 13 | 14 | text = None 15 | 16 | if path: 17 | image = preprocess(Image.open(path)).unsqueeze(0).to(device) 18 | 19 | else: 20 | image = None 21 | 22 | attempt = 0 23 | limit = 4 24 | 25 | while True: 26 | 27 | attempt += 1 28 | 29 | if attempt == 1: 30 | max_len = 70 31 | 32 | elif attempt == 2: 33 | max_len = 50 34 | 35 | elif attempt == 3: 36 | 37 | print('Try 3') 38 | max_len = 30 39 | 40 | elif attempt == 4: 41 | 42 | print('Last attempt') 43 | max_len = 20 44 | 45 | else: 46 | break 47 | 48 | try: 49 | caption = input_caption 50 | 51 | if attempt > 3: 52 | print("Drastic measure", caption) 53 | 54 | remove_digits = str.maketrans('', '', digits) 55 | caption = caption.translate(remove_digits) 56 | 57 | caption = ' '.join(caption.split(' ')[:max_len]) 58 | text = clip.tokenize(caption.split('"')[0]).to(device) 59 | 60 | break 61 | 62 | except Exception as ex: 63 | if limit == attempt: 64 | break 65 | 66 | return image, text 67 | 68 | 69 | def load_dataset(data_path): 70 | 71 | 72 | print("Load: ", data_path) 73 | 74 | if 'EntitySwaps_topic_clip' in data_path: 75 | ner_train_df = pd.read_csv('VisualNews/train_entity_swap_topic_CLIP.csv', index_col=0) 76 | ner_valid_df = 
pd.read_csv('VisualNews/valid_entity_swap_topic_CLIP.csv', index_col=0) 77 | ner_test_df = pd.read_csv('VisualNews/test_entity_swap_topic_CLIP.csv', index_col=0) 78 | 79 | data = pd.concat([ner_train_df, ner_valid_df, ner_test_df]) 80 | data = data[data.falsified == True] 81 | 82 | elif 'EntitySwaps_topic_random' in data_path: 83 | ner_train_df = pd.read_csv('VisualNews/train_entity_swap_topic.csv', index_col=0) 84 | ner_valid_df = pd.read_csv('VisualNews/valid_entity_swap_topic.csv', index_col=0) 85 | ner_test_df = pd.read_csv('VisualNews/test_entity_swap_topic.csv', index_col=0) 86 | 87 | data = pd.concat([ner_train_df, ner_valid_df, ner_test_df]) 88 | data = data[data.falsified == True] 89 | 90 | elif 'MISALIGN' in data_path: 91 | 92 | train_df = pd.read_csv('Fakeddit/all_samples/all_train.tsv', sep='\t') 93 | train_df = train_df[train_df['2_way_label'] == 0] 94 | train_df = train_df[~train_df.clean_title.isna()] 95 | train_df['id'] = '_TRAIN' 96 | 97 | valid_df = pd.read_csv('Fakeddit/all_samples/all_validate.tsv', sep='\t') 98 | valid_df = valid_df[valid_df['2_way_label'] == 0] 99 | valid_df = valid_df[~valid_df.clean_title.isna()] 100 | valid_df['id'] = '_VALID' 101 | 102 | test_df = pd.read_csv('Fakeddit/all_samples/all_test_public.tsv', sep='\t') 103 | test_df = test_df[test_df['2_way_label'] == 0] 104 | test_df = test_df[~test_df.clean_title.isna()] 105 | test_df['id'] = '_TEST' 106 | 107 | data = pd.concat([train_df, valid_df, test_df]) 108 | data['caption'] = data['clean_title'] 109 | 110 | elif 'COSMOS' in data_path: 111 | data = [] 112 | for line in open('COSMOS/cosmos_anns/test_data.json', 'r'): 113 | data.append(json.loads(line)) 114 | 115 | data = pd.DataFrame(data) 116 | data['caption'] = data['caption1'] 117 | data['image_path'] = 'images_test/' + data["img_local_path"] 118 | 119 | elif 'VisualNews' in data_path: 120 | data = json.load(open(data_path + 'data.json')) 121 | data = pd.DataFrame(data) 122 | 123 | elif 'VERITE' in data_path: 124 | data = pd.read_csv(data_path + 'VERITE.csv', index_col=0) 125 | data = pd.DataFrame(data) 126 | 127 | elif 'Fakeddit' in data_path: 128 | train_df = pd.read_csv(data_path + 'all_samples/all_train.tsv', sep='\t') 129 | valid_df = pd.read_csv(data_path + 'all_samples/all_validate.tsv', sep='\t') 130 | test_df = pd.read_csv(data_path + 'all_samples/all_test_public.tsv', sep='\t') 131 | data = pd.concat([train_df, valid_df, test_df]) 132 | data = data[~data.image_url.isna()] 133 | data = data[~data.clean_title.isna()] 134 | 135 | data['image_path'] = 'images/' + data['id'] + ".jpg" 136 | data["caption"] = data["clean_title"] 137 | 138 | return data 139 | 140 | def extract_CLIP_features(data_path, output_path, use_image=True, choose_clip_version = "ViT-L/14", choose_gpu = 0): 141 | 142 | os.environ["CUDA_VISIBLE_DEVICES"] = str(choose_gpu) 143 | device = torch.device( 144 | "cuda:0" if torch.cuda.is_available() else "cpu" 145 | ) 146 | 147 | model, preprocess = clip.load(choose_clip_version, device=device) 148 | 149 | data = load_dataset(data_path) 150 | 151 | save_id = 'id' in data.columns 152 | model.eval() 153 | 154 | all_ids, all_text_features, all_visual_features = [], [], [] 155 | 156 | with torch.no_grad(): 157 | 158 | for (row) in tqdm(data.itertuples(), total=data.shape[0]): 159 | 160 | if not use_image: 161 | 162 | caption = row.caption 163 | image, text = preprocess_input(None, caption, preprocess, device) 164 | 165 | if text != None: 166 | text_features = model.encode_text(text) 167 | 
all_text_features.append(text_features.cpu().detach().numpy()[0]) 168 | if save_id: 169 | all_ids.append(row.id) 170 | 171 | else: 172 | path = data_path + row.image_path.split('./')[-1] 173 | 174 | if os.path.isfile(path): 175 | caption = row.caption 176 | image, text = preprocess_input(path, caption, preprocess, device) 177 | 178 | if text != None: 179 | image_features = model.encode_image(image) 180 | text_features = model.encode_text(text) 181 | 182 | all_text_features.append(text_features.cpu().detach().numpy()[0]) 183 | all_visual_features.append(image_features.cpu().detach().numpy()[0]) 184 | 185 | if save_id: 186 | all_ids.append(row.id) 187 | 188 | 189 | print("Save: ", output_path) 190 | 191 | clip_version = choose_clip_version.replace('-', '').replace('/', '') 192 | 193 | all_text_features = np.array(all_text_features).reshape(len(all_text_features),-1) 194 | np.save(output_path + "clip_text_embeddings_" + clip_version + ".npy", all_text_features) 195 | 196 | if use_image: 197 | all_visual_features = np.array(all_visual_features).reshape(len(all_visual_features),-1) 198 | np.save(output_path + "clip_image_embeddings_" + clip_version + ".npy", all_visual_features) 199 | 200 | if save_id: 201 | 202 | if 'MISALIGN' in data_path: 203 | 204 | new_ids = [] 205 | for i in range(all_ids.count('_TRAIN')): 206 | new_ids.append(str(i) + '_TRAIN') 207 | 208 | for i in range(all_ids.count('_VALID')): 209 | new_ids.append(str(i) + '_VALID') 210 | 211 | for i in range(all_ids.count('_TEST')): 212 | new_ids.append(str(i) + '_TEST') 213 | 214 | all_ids= new_ids 215 | 216 | all_ids = np.array(all_ids) 217 | np.save(output_path + "item_ids_" + clip_version +".npy", all_ids) 218 | 219 | def find_images(folder): 220 | images = [] 221 | for root, dirs, files in os.walk(folder): 222 | for file in files: 223 | if file.lower().endswith(('.jpg', '.jpeg', '.png', '.gif', '.bmp')): 224 | images.append(os.path.join(root, file)) 225 | return images 226 | 227 | def check_paths(img_id, img_paths): 228 | for path in img_paths: 229 | 230 | if img_id in path: 231 | return path 232 | 233 | def unpack_data(input_df, img_paths): 234 | 235 | keep_data = [] 236 | 237 | for i, row in input_df.iterrows(): 238 | current_row = row.copy() 239 | image_ids = row.image_id.split(',') 240 | 241 | if len(image_ids) >= 2: 242 | 243 | for img_id in image_ids: 244 | current_row['image_id'] = img_id 245 | current_row["image_path"] = check_paths(img_id, img_paths) 246 | 247 | keep_data.append(current_row.to_dict()) 248 | 249 | else: 250 | 251 | img_id = current_row.image_id 252 | current_row["image_path"] = check_paths(img_id, img_paths) 253 | keep_data.append(current_row.to_dict()) 254 | 255 | return pd.DataFrame(keep_data) 256 | 257 | def load_twitter(): 258 | 259 | train_a = pd.read_csv('Twitter/image-verification-corpus-master/mediaeval2015/devset/tweets.txt', sep="\t") 260 | train_b = pd.read_csv('Twitter/image-verification-corpus-master/mediaeval2015/testset/tweets.txt', sep="\t") 261 | train_c = pd.read_csv ("Twitter/image-verification-corpus-master/mediaeval2016/devset/posts.txt", sep="\t") 262 | 263 | train_a.rename({'tweetId': 'id', 'imageId(s)': 'image_id', 'label': 'falsified', 'tweetText': 'caption'}, axis=1, inplace=True) 264 | train_b.rename({'tweetId': 'id', 'imageId(s)': 'image_id', 'label': 'falsified', 'tweetText': 'caption'}, axis=1, inplace=True) 265 | train_c.rename({'post_id': 'id', 'image_id(s)': 'image_id', 'label': 'falsified', 'post_text': 'caption'}, axis=1, inplace=True) 266 | 267 | test_df = pd.read_csv 
("Twitter/image-verification-corpus-master/mediaeval2016/testset/posts_groundtruth.txt", sep="\t") 268 | test_df = test_df.rename({'post_id': 'id', 'imageId(s)': 'image_id', 'label': 'falsified', 'post_text': 'caption'}, axis=1) 269 | 270 | train_df = pd.concat([train_a, train_b, train_c]) 271 | 272 | mediaeval2015_a = find_images('Twitter/image-verification-corpus-master/mediaeval2015/devset/Medieval2015_DevSet_Images') 273 | mediaeval2015_b = find_images('Twitter/image-verification-corpus-master/mediaeval2015/testset/TestSetImages') 274 | test_paths = find_images('Twitter/image-verification-corpus-master/mediaeval2016/testset/Mediaeval2016_TestSet_Images') 275 | 276 | dev_paths = mediaeval2015_a + mediaeval2015_b 277 | all_paths = dev_paths + test_paths 278 | 279 | train_df = unpack_data(train_df, all_paths) 280 | test_df = unpack_data(test_df, all_paths) 281 | 282 | train_df = train_df[~train_df.image_path.isna()] 283 | train_df = train_df[train_df.falsified != 'humor'] 284 | test_df = test_df[~test_df.image_path.isna()] 285 | 286 | return train_df, test_df 287 | 288 | def extract_CLIP_twitter(output_path = 'Twitter/', choose_clip_version = "ViT-L/14", choose_gpu = 0): 289 | 290 | print("Load Twitter data") 291 | train_df, test_df = load_twitter() 292 | all_data = pd.concat([train_df, test_df]) 293 | 294 | os.environ["CUDA_VISIBLE_DEVICES"] = str(choose_gpu) 295 | device = torch.device( 296 | "cuda:0" if torch.cuda.is_available() else "cpu" 297 | ) 298 | 299 | print("Load CLIP") 300 | model, preprocess = clip.load(choose_clip_version, device=device) 301 | model.eval() 302 | 303 | tweets = all_data.copy() 304 | images = tweets.drop_duplicates('image_id') 305 | 306 | print("Extract CLIP features from the images") 307 | images_ids = [] 308 | all_visual_features = [] 309 | with torch.no_grad(): 310 | 311 | for (row) in tqdm(images.itertuples(), total=images.shape[0]): 312 | path = row.image_path 313 | image = preprocess(Image.open(path)).unsqueeze(0).to(device) 314 | image_features = model.encode_image(image) 315 | 316 | all_visual_features.append(image_features.cpu().detach().numpy()[0]) 317 | images_ids.append(row.image_id) 318 | 319 | print("Save: ", output_path) 320 | 321 | print("Save visual features") 322 | clip_version = choose_clip_version.replace('-', '').replace('/', '') 323 | 324 | all_visual_features = np.array(all_visual_features).reshape(len(all_visual_features),-1) 325 | np.save(output_path + "clip_image_embeddings_" + clip_version + ".npy", all_visual_features) 326 | all_ids = np.array(images_ids) 327 | np.save(output_path + "image_item_ids_" + clip_version +".npy", all_ids) 328 | 329 | print("Extract CLIP features from the tweets") 330 | text_ids = [] 331 | all_text_features = [] 332 | for (row) in tqdm(tweets.itertuples(), total=tweets.shape[0]): 333 | 334 | _, text = preprocess_input(None, row.caption, preprocess, device) 335 | 336 | if text != None: 337 | text_features = model.encode_text(text) 338 | all_text_features.append(text_features.cpu().detach().numpy()[0]) 339 | 340 | text_ids.append(row.id) 341 | 342 | print("Save textual features") 343 | all_text_features = np.array(all_text_features).reshape(len(all_text_features),-1) 344 | np.save(output_path + "clip_text_embeddings_" + clip_version + ".npy", all_text_features) 345 | 346 | all_ids = np.array(text_ids) 347 | np.save(output_path + "text_item_ids_" + clip_version +".npy", all_ids) 348 | 349 | # Keep only data that have features 350 | train_df = train_df[train_df.id.isin(text_ids)] 351 | test_df = 
test_df[test_df.id.isin(text_ids)] 352 | 353 | # Save final dataframes as .csv 354 | train_df.to_csv('Twitter/train.csv') 355 | test_df.to_csv('Twitter/test.csv') 356 | 357 | 358 | -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | from experiment import run_experiment 2 | from prepare_datasets import prepare_verite, prepare_Misalign, get_K_most_similar, extract_entities, prepare_CLIP_NESt, prepare_R_NESt 3 | from extract_features import extract_CLIP_features, extract_CLIP_twitter 4 | 5 | # Scrape the images of VERITE and prepare the dataset 6 | prepare_verite(download_images=True) 7 | 8 | # Extract features with CLIP ViT-L/14 from VERITE, COSMOS, Twitter, VisualNews etc 9 | extract_CLIP_features(data_path='VERITE/', output_path='VERITE/VERITE_') 10 | extract_CLIP_features(data_path='COSMOS/', output_path='COSMOS/COSMOS_') 11 | extract_CLIP_twitter(output_path='Twitter/', choose_clip_version="ViT-L/14", choose_gpu=0) 12 | extract_CLIP_features(data_path='VisualNews/origin/', output_path='VisualNews/') 13 | extract_CLIP_features(data_path='Fakeddit/', output_path='Fakeddit/fd_original_') 14 | extract_CLIP_features(data_path='VisualNews/MISALIGN', output_path='VisualNews/MISALIGN_', use_image=False) 15 | 16 | # After extracting the Fakeddit we can create the Misalign dataset 17 | prepare_Misalign(CLIP_VERSION="ViT-L/14", choose_gpu=0) 18 | 19 | # Calculate the K most similar pairs from VisualNews. Necassary for creating CLIP-NESt. Can also be used to re-create CSt 20 | get_K_most_similar(K_most_similar = 20, CLIP_VERSION="ViT-L/14", choose_gpu=1) 21 | 22 | # Extract named entities from VisualNews pairs 23 | extract_entities() 24 | 25 | # Create the CLIP-NESt dataset and then extract its CLIP features 26 | prepare_CLIP_NESt() 27 | extract_CLIP_features(data_path='EntitySwaps_topic_clip', output_path='VisualNews/EntitySwaps_topic_clip', use_image=False) 28 | 29 | # Create the R-NESt dataset and then extract its CLIP features 30 | prepare_R_NESt() 31 | extract_CLIP_features(data_path='EntitySwaps_topic_random', output_path='VisualNews/EntitySwaps_topic_random', use_image=False) 32 | 33 | 34 | # ### Table: 2 (Twitter) 35 | run_experiment( 36 | dataset_methods_list = [ 37 | 'Twitter_comparable', # Uses the evaluation protocol of previous works 38 | 'Twitter_corrected', # Uses a corrected evaluation protocol 39 | ], 40 | modality_options = [ 41 | ["images", "texts"], 42 | ["images", "texts", "-attention"], 43 | ["texts"], 44 | ["images"] 45 | ], 46 | epochs=30, 47 | seed_options = [0], 48 | lr_options = [5e-5, 1e-5], 49 | batch_size_options = [16], 50 | tf_layers_options = [1, 4], 51 | tf_head_options = [2, 8], 52 | tf_dim_options = [128, 1024], 53 | use_multiclass = False, 54 | balancing_method = None, 55 | choose_gpu = 0, 56 | init_model_name = '' 57 | ) 58 | 59 | # ### Tables: 3, 4 and parts of 6 (single binary datasets) 60 | run_experiment( 61 | dataset_methods_list = [ 62 | 'random_sampling_topic', # RSt 63 | 'clip_based_sampling_topic', # CSt 64 | 'news_clippings_txt2txt', # NC-t2t 65 | 'meir', 66 | 'EntitySwaps_random_topic', # R-NESt 67 | 'EntitySwaps_CLIP_topic', # CLIP-NESt 68 | 'fakeddit_original', 69 | 'Misalign', 70 | 'Misalign_D', # 'downsample' is automatically applied 71 | ], 72 | modality_options = [ 73 | ["images", "texts"], 74 | ["images", "texts", "-attention"], 75 | ["texts"], 76 | ["images"] 77 | ], 78 | epochs=30, 79 | seed_options = [0], 80 | 
lr_options = [5e-5], 81 | batch_size_options = [512], 82 | tf_layers_options = [1, 4], 83 | tf_head_options = [2, 8], 84 | tf_dim_options = [128, 1024], 85 | use_multiclass = False, 86 | balancing_method = None, 87 | choose_gpu = 0, 88 | init_model_name = '' 89 | ) 90 | 91 | # Table 5: Multiclass classification on VERITE 92 | run_experiment( 93 | dataset_methods_list = [ 94 | 'EntitySwaps_CLIP_topicXclip_based_sampling_topic', 95 | 'EntitySwaps_CLIP_topicXnews_clippings_txt2txt', 96 | 'EntitySwaps_random_topicXclip_based_sampling_topic', 97 | 'EntitySwaps_random_topicXnews_clippings_txt2txt', 98 | 'MisalignXclip_based_sampling_topic', 99 | 'MisalignXnews_clippings_txt2txt', 100 | 'Misalign_DXclip_based_sampling_topic', 101 | 'Misalign_DXnews_clippings_txt2txt', 102 | 'EntitySwaps_random_topicXMisalign_DXnews_clippings_txt2txt', 103 | 'EntitySwaps_CLIP_topicXMisalign_DXnews_clippings_txt2txt', 104 | 'EntitySwaps_random_topicXMisalignXnews_clippings_txt2txt', 105 | 'EntitySwaps_CLIP_topicXMisalignXnews_clippings_txt2txt', 106 | ], 107 | epochs=30, 108 | use_multiclass = True, 109 | balancing_method = 'downsample', 110 | ) 111 | 112 | # Table 6: Ensemble datasets for binary classification on VERITE 113 | run_experiment( 114 | dataset_methods_list = [ 115 | 'EntitySwaps_CLIP_topicXnews_clippings_txt2txt', 116 | 'EntitySwaps_random_topicXnews_clippings_txt2txt', 117 | 'MisalignXnews_clippings_txt2txt', 118 | 'Misalign_DXnews_clippings_txt2txt', 119 | 'EntitySwaps_random_topicXMisalign_DXnews_clippings_txt2txt', 120 | 'EntitySwaps_CLIP_topicXMisalign_DXnews_clippings_txt2txt', 121 | 'EntitySwaps_random_topicXMisalignXnews_clippings_txt2txt', 122 | 'EntitySwaps_CLIP_topicXMisalignXnews_clippings_txt2txt', 123 | ], 124 | epochs=30, 125 | use_multiclass = False, 126 | balancing_method = 'downsample', 127 | ) 128 | 129 | -------------------------------------------------------------------------------- /model.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | 4 | class DT_Transformer(nn.Module): 5 | def __init__( 6 | self, 7 | device, 8 | emb_dim=1024, 9 | tf_layers=1, 10 | tf_head=2, 11 | tf_dim=128, 12 | activation="gelu", 13 | dropout=0.1, 14 | use_features=["images", "texts"], 15 | use_multiclass=False 16 | ): 17 | 18 | super().__init__() 19 | 20 | self.use_features = use_features 21 | self.emb_dim = emb_dim 22 | 23 | self.transformer = nn.TransformerEncoder( 24 | nn.TransformerEncoderLayer( 25 | d_model=self.emb_dim, 26 | nhead=tf_head, 27 | dim_feedforward=tf_dim, 28 | dropout=dropout, 29 | activation=activation, 30 | batch_first=True, 31 | norm_first=False, 32 | ), 33 | num_layers=tf_layers, 34 | ) 35 | 36 | self.dropout = nn.Dropout(p=dropout) 37 | self.layer_norm = nn.LayerNorm(self.emb_dim) 38 | self.gelu = nn.GELU() 39 | self.fcl = nn.Linear(self.emb_dim, self.emb_dim // 2) 40 | self.use_multiclass = use_multiclass 41 | 42 | if self.use_multiclass: 43 | self.output_score = nn.Linear(self.emb_dim // 2, 3) 44 | else: 45 | self.output_score = nn.Linear(self.emb_dim // 2, 1) 46 | 47 | def forward(self, img, txt): 48 | 49 | if "images" not in self.use_features: 50 | x = txt 51 | elif "texts" not in self.use_features: 52 | x = img 53 | elif "-attention" in self.use_features: 54 | x = torch.cat((img, txt), axis=1) 55 | else: 56 | img = img.unsqueeze(1) 57 | txt = txt.unsqueeze(1) 58 | x = torch.cat((img, txt), axis=1) 59 | 60 | b_size = x.shape[0] 61 | 62 | x = self.transformer(x) 63 | x = x.mean(1) 64 | 65 | x = 
self.layer_norm(x) 66 | x = self.dropout(x) 67 | x = self.fcl(x) 68 | x = self.gelu(x) 69 | x = self.dropout(x) 70 | y = self.output_score(x) 71 | return y -------------------------------------------------------------------------------- /prepare_datasets.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import copy 4 | import time 5 | import torch 6 | import spacy 7 | import pickle 8 | import random 9 | import shutil 10 | import requests 11 | import numpy as np 12 | import pandas as pd 13 | from tqdm import tqdm 14 | from PIL import Image 15 | from io import BytesIO 16 | import multiprocessing as mp 17 | from utils import load_features 18 | 19 | def save_image(url, name): 20 | 21 | try: 22 | response = requests.get(url, verify=False, timeout=20) 23 | 24 | if response.status_code == 200: 25 | 26 | img = Image.open(BytesIO(response.content)) 27 | 28 | width, height = img.size 29 | 30 | if 2400 > width > 1200 or 2400 > height > 1200: 31 | img = img.resize((width//2, height//2)) 32 | 33 | if width > 2400 or height > 2400: 34 | img = img.resize((width//4, height//4)) 35 | 36 | if not img.mode == 'RGB': 37 | img = img.convert('RGB') 38 | 39 | img.save("VERITE/images/" + name + ".jpg") 40 | 41 | except Exception as e: 42 | print(e, "!!!", url) 43 | 44 | 45 | 46 | def prepare_verite(download_images=True): 47 | 48 | verite = pd.read_csv('VERITE/VERITE_articles.csv', index_col=0) 49 | 50 | if download_images: 51 | 52 | print("Scrape images!") 53 | 54 | directory = 'VERITE/images' 55 | if not os.path.exists(directory): 56 | os.makedirs(directory) 57 | 58 | # Scrape images 59 | for (i, row) in tqdm(verite.iterrows(), total=verite.shape[0]): 60 | idx = row.id 61 | t_url = row.true_url 62 | f_url = row.false_url 63 | 64 | save_image(t_url, "true_"+str(idx)) 65 | 66 | if f_url: 67 | save_image(f_url, "false_"+str(idx)) 68 | 69 | # From: true-caption, false-caption, true-image-url, false-image-url, article-url 70 | # Change to -> caption, image_path, label 71 | 72 | print("Unpack dataset!") 73 | unpack_data = [] 74 | 75 | for (i, row) in tqdm(verite.iterrows(), total=verite.shape[0]): 76 | 77 | idx = row.id 78 | true_caption = row.true_caption 79 | false_caption = row.false_caption 80 | true_img_path = 'images/true_' + str(idx) + '.jpg' 81 | 82 | unpack_data.append({ 83 | 'caption': true_caption, 84 | 'image_path': true_img_path, 85 | 'label': 'true' 86 | }) 87 | 88 | unpack_data.append({ 89 | 'caption': false_caption, 90 | 'image_path': true_img_path, 91 | 'label': 'miscaptioned' 92 | }) 93 | 94 | if row.false_url: 95 | false_img_path = 'images/false_' + str(idx) + '.jpg' 96 | 97 | unpack_data.append({ 98 | 'caption': true_caption, 99 | 'image_path': false_img_path, 100 | 'label': 'out-of-context' 101 | }) 102 | 103 | verite_df = pd.DataFrame(unpack_data) 104 | verite_df.to_csv('VERITE/VERITE.csv') 105 | 106 | def load_split_VisualNews(clip_version="ViT-L/14", load_features=True): 107 | 108 | print("Load VisualNews") 109 | data = json.load(open('/fssd4/user-data/stefpapad/MISINFO/VisualNews/origin/data.json')) 110 | # data = json.load(open('VisualNews/origin/data.json')) 111 | vn_df = pd.DataFrame(data) 112 | 113 | vn_df = vn_df.sample(frac=1, random_state=0).reset_index(drop=True) 114 | 115 | vn_df["id"] = vn_df["id"].astype('str') 116 | vn_df["image_id"] = vn_df["id"] 117 | vn_df["falsified"] = False 118 | vn_df["type_of_alteration"] = 'None' 119 | 120 | total_len = vn_df.shape[0] 121 | train_len = int(total_len * 0.8) 122 | valid_len 
= (total_len - train_len) // 2 123 | 124 | train_df = vn_df.iloc[:train_len] 125 | valid_df = vn_df.iloc[train_len:train_len+valid_len] 126 | test_df = vn_df.iloc[-valid_len-1:] 127 | 128 | if load_features: 129 | print("Load embeddings") 130 | clip_image_embeddings = np.load("VisualNews/clip_image_embeddings_" + clip_version + ".npy") 131 | clip_text_embeddings = np.load("VisualNews/clip_text_embeddings_" + clip_version + ".npy") 132 | item_ids = np.load("VisualNews/item_ids_" + clip_version + ".npy") 133 | clip_image_embeddings = pd.DataFrame(clip_image_embeddings, index=item_ids).T 134 | clip_text_embeddings = pd.DataFrame(clip_text_embeddings, index=item_ids).T 135 | 136 | clip_image_embeddings.columns = clip_image_embeddings.columns.astype('str') 137 | clip_text_embeddings.columns = clip_text_embeddings.columns.astype('str') 138 | 139 | return train_df, valid_df, test_df, clip_image_embeddings, clip_text_embeddings 140 | 141 | else: 142 | return train_df, valid_df, test_df 143 | 144 | def prepare_Misalign(CLIP_VERSION = "ViT-L/14", choose_gpu = 0): 145 | 146 | os.environ["CUDA_VISIBLE_DEVICES"] = str(choose_gpu) 147 | device = "cuda" if torch.cuda.is_available() else "cpu" 148 | cos_sim = torch.nn.CosineSimilarity(dim=-1, eps=1e-08) 149 | 150 | clip_version = CLIP_VERSION.replace("-", "").replace("/", "") 151 | train_df, valid_df, test_df, clip_image_embeddings, clip_text_embeddings = load_split_VisualNews(clip_version) 152 | 153 | def cross_modal_misalignment(sample, false_df, input_name, torch_fakeddit_clip_text_embeddings): 154 | 155 | choice = random.choice(['image', 'text']) 156 | 157 | i = sample["id"] 158 | 159 | if choice == 'image': 160 | current_item = clip_image_embeddings[i] 161 | 162 | elif choice == 'text': 163 | current_item = clip_text_embeddings[i] 164 | 165 | a = torch.from_numpy(current_item.values).to(device) 166 | all_similarites = cos_sim(a.reshape(1, -1), torch_fakeddit_clip_text_embeddings).cpu().detach().numpy() 167 | most_similar_id = all_similarites.argmax() 168 | similarity = all_similarites[most_similar_id] 169 | fakeddit_item = false_df.iloc[most_similar_id] 170 | 171 | sample['id'] = str(most_similar_id) + '_' + input_name # add suffix to fakeddit items! 
not to be mistaken with visual news items 172 | sample['falsified'] = True 173 | sample['original_caption'] = sample["caption"] 174 | sample['caption'] = fakeddit_item.clean_title 175 | sample['similarity'] = similarity 176 | sample["type_of_alteration"] = choice + '|' + str(fakeddit_item["6_way_label"]) 177 | return sample 178 | 179 | def Misalign(input_df, false_df, input_name, method='both'): 180 | 181 | fakeddit_features = np.load("VisualNews/MISALIGN_clip_text_embeddings_" + clip_version + ".npy").astype('float32') 182 | all_idx = np.load("VisualNews/MISALIGN_item_ids_" + clip_version + ".npy") 183 | fakeddit_features = pd.DataFrame(fakeddit_features.T, columns=all_idx) 184 | 185 | cols = [x for x in fakeddit_features.columns if input_name in x] 186 | fakeddit_features = fakeddit_features[cols].values.T 187 | 188 | fakeddit_features = torch.from_numpy(fakeddit_features).to(device) 189 | 190 | all_generated_items = [] 191 | 192 | for (row) in tqdm(input_df.to_dict(orient="records"), total=input_df.shape[0]): 193 | 194 | generated_item = cross_modal_misalignment(row, false_df, input_name, fakeddit_features) 195 | all_generated_items.append(generated_item) 196 | 197 | return pd.DataFrame(all_generated_items) 198 | 199 | def apply_Misalign(input_df, split): 200 | 201 | df = pd.read_csv('Fakeddit/all_samples/all_'+ split + '.tsv', sep='\t') 202 | df = df[df['2_way_label'] == 0] 203 | df = df[~df.clean_title.isna()].reset_index(drop=True) 204 | 205 | if split == 'validate': 206 | split = 'valid' 207 | 208 | if split == 'test_public': 209 | split = 'test' 210 | 211 | generated_data = Misalign(input_df, df, split.upper()) 212 | new_df = pd.concat([generated_data, input_df]) 213 | new_df = new_df.sample(frac=1) 214 | 215 | new_df.to_csv('VisualNews/' + split +'_Misalign.csv') 216 | 217 | 218 | apply_Misalign(train_df, 'train') 219 | apply_Misalign(valid_df, 'validate') 220 | apply_Misalign(test_df, 'test_public') 221 | 222 | 223 | def get_K_most_similar(by_topic = True, 224 | max_tries = 3, 225 | by_modality = 'both', 226 | K_most_similar = 20, 227 | CLIP_VERSION = "ViT-L/14", 228 | choose_gpu = 0): 229 | 230 | random.seed(0) 231 | np.random.seed(0) 232 | torch.manual_seed(0) 233 | 234 | clip_version = CLIP_VERSION.replace("-", "").replace("/", "") 235 | 236 | train_df, valid_df, test_df, clip_image_embeddings, clip_text_embeddings = load_split_VisualNews(clip_version) 237 | 238 | features_dict = { 239 | 'id': clip_text_embeddings, 240 | 'image_id': clip_image_embeddings 241 | } 242 | 243 | os.environ["CUDA_VISIBLE_DEVICES"] = str(choose_gpu) 244 | device = "cuda" if torch.cuda.is_available() else "cpu" 245 | cos_sim = torch.nn.CosineSimilarity(dim=-1, eps=1e-08) 246 | 247 | all_entity_types = ['CARDINAL', 'DATE', 'EVENT', 'FAC', 'GPE', 'LANGUAGE', 'LAW', 'LOC', 'MONEY', 'NORP', 'ORDINAL', 'ORG', 'PERCENT', 'PERSON', 'PRODUCT', 'QUANTITY', 'TIME', 'WORK_OF_ART'] 248 | 249 | for input_df, file_name in [ 250 | (valid_df, 'valid_CLIP_based_similarities'), 251 | (test_df, 'test_CLIP_based_similarities'), 252 | (train_df, 'train_CLIP_based_similarities') 253 | ]: 254 | 255 | start_time = time.time() 256 | 257 | print('*****', file_name, '*****') 258 | 259 | input_df = input_df.fillna(value=False).reset_index(drop=True) 260 | 261 | input_df["id"] = input_df["id"].astype('str') 262 | input_df['most_similar_by_text'] = None 263 | input_df['most_similar_by_image'] = None 264 | 265 | if by_topic: 266 | topic_x_id_texts = input_df.groupby('topic')['id'].apply(list) 267 | topic_x_id_images = 
input_df.groupby('topic')['image_id'].apply(list) 268 | 269 | val_counts = input_df.topic.value_counts() 270 | val_counts = val_counts[val_counts >= 2] 271 | input_df = input_df[input_df.topic.isin(val_counts.index.tolist())] 272 | 273 | topic_x_id = {'image': topic_x_id_images.to_dict(), 274 | 'text': topic_x_id_texts.to_dict() 275 | } 276 | 277 | def myFunc(args): 278 | 279 | idx, row = args 280 | 281 | temp_files = {} 282 | 283 | for modality in ['image', 'text']: 284 | 285 | if modality == 'image': 286 | 287 | modality_id = 'image_id' 288 | else: 289 | modality_id = 'id' 290 | 291 | if by_topic: 292 | candidates = topic_x_id[modality][row['topic']].copy() 293 | else: 294 | candidates = input_df[modality_id].copy().unique().tolist() 295 | 296 | current_item_id = row[modality_id] 297 | current_item = input_df[input_df[modality_id] == current_item_id].reset_index(drop=True) 298 | 299 | candidates.remove(current_item_id) 300 | 301 | current_item_features = features_dict[modality_id][current_item_id] 302 | candidate_features = features_dict[modality_id][candidates] 303 | 304 | a = torch.from_numpy(current_item_features.values).to(device) 305 | b = torch.from_numpy(candidate_features.values).to(device) 306 | 307 | all_similarites = cos_sim(a.reshape(1, -1), b.T).cpu().detach().numpy() 308 | 309 | most_similar_ids = all_similarites.argsort()[::-1][:K_most_similar] 310 | 311 | K_most_similar_IDs = np.array(candidates)[most_similar_ids] 312 | temp_files['most_similar_by_' + modality] = '/'.join([x for x in K_most_similar_IDs]) 313 | 314 | current_item['most_similar_by_image'] = temp_files['most_similar_by_image'] 315 | current_item['most_similar_by_text'] = temp_files['most_similar_by_text'] 316 | 317 | return current_item.to_dict('records') 318 | 319 | results = [] 320 | for (idx,row) in tqdm(input_df.iterrows(), total=input_df.shape[0]): 321 | res = myFunc((idx,row)) 322 | 323 | results.append(res) 324 | 325 | most_similar_data = pd.DataFrame([x[0] for x in results if x != None]) 326 | most_similar_data.to_csv('VisualNews/' + file_name + '.csv') 327 | 328 | def get_entities(txt, nlp): 329 | all_entity_types = ['CARDINAL', 'DATE', 'EVENT', 'FAC', 'GPE', 'LANGUAGE', 'LAW', 'LOC', 'MONEY', 'NORP', 'ORDINAL', 'ORG', 'PERCENT', 'PERSON', 'PRODUCT', 'QUANTITY', 'TIME', 'WORK_OF_ART'] 330 | 331 | ner_dict = {} 332 | ner_count = {} 333 | 334 | for key in all_entity_types: 335 | ner_dict[key] = '' 336 | ner_dict['count_' + key] = 0 337 | 338 | doc = nlp(txt) 339 | for ent in doc.ents: 340 | 341 | key = ent.label_ 342 | current_items = ner_dict[key] 343 | 344 | if current_items != '': 345 | current_items = current_items + '|' + ent.text 346 | else: 347 | current_items = ent.text 348 | ner_dict[key] = current_items 349 | ner_dict['count_' + key] = len(current_items.split('|')) 350 | 351 | return ner_dict 352 | 353 | def calc_ner(input_df, nlp): 354 | all_ner = [] 355 | 356 | input_df.reset_index(drop=True, inplace=True) 357 | 358 | for (row) in tqdm(input_df.itertuples(), total=input_df.shape[0]): 359 | ner_dict = get_entities(row.caption, nlp) 360 | 361 | all_ner.append(ner_dict) 362 | 363 | all_ner = pd.DataFrame(all_ner) 364 | all_ner['id'] = input_df['id'] 365 | 366 | input_df = pd.merge(input_df, all_ner, on='id') 367 | 368 | return input_df 369 | 370 | def extract_entities(): 371 | 372 | nlp = spacy.load("en_core_web_trf") 373 | train_df, valid_df, test_df = load_split_VisualNews(load_features=False) 374 | 375 | ner_train_df = calc_ner(train_df, nlp) 376 | 
ner_train_df.to_csv('VisualNews/ner_train.csv') 377 | 378 | ner_valid_df = calc_ner(valid_df, nlp) 379 | ner_valid_df.to_csv('VisualNews/ner_valid.csv') 380 | 381 | ner_test_df = calc_ner(test_df, nlp) 382 | ner_test_df.to_csv('VisualNews/ner_test.csv') 383 | 384 | 385 | def prepare_CLIP_NESt(num_workers = 16, by_topic = True, max_tries = 5, by_modality = 'both'): 386 | 387 | random.seed(0) 388 | np.random.seed(0) 389 | 390 | ner_train_df = pd.read_csv('VisualNews/ner_train.csv', index_col=0) 391 | ner_valid_df = pd.read_csv('VisualNews/ner_valid.csv', index_col=0) 392 | ner_test_df = pd.read_csv('VisualNews/ner_test.csv', index_col=0) 393 | 394 | train_df = pd.read_csv('VisualNews/train_CLIP_based_similarities.csv', index_col=0)[['id', 'most_similar_by_text', 'most_similar_by_image']] 395 | valid_df = pd.read_csv('VisualNews/valid_CLIP_based_similarities.csv', index_col=0)[['id', 'most_similar_by_text', 'most_similar_by_image']] 396 | test_df = pd.read_csv('VisualNews/test_CLIP_based_similarities.csv', index_col=0)[['id', 'most_similar_by_text', 'most_similar_by_image']] 397 | 398 | ner_train_df = ner_train_df.merge(train_df, on='id') 399 | ner_valid_df = ner_valid_df.merge(valid_df, on='id') 400 | ner_test_df = ner_test_df.merge(test_df, on='id') 401 | 402 | all_entity_types = ['CARDINAL', 'DATE', 'EVENT', 'FAC', 'GPE', 'LANGUAGE', 'LAW', 'LOC', 'MONEY', 'NORP', 'ORDINAL', 'ORG', 'PERCENT', 'PERSON', 'PRODUCT', 'QUANTITY', 'TIME', 'WORK_OF_ART'] 403 | 404 | for input_df, file_name in [ 405 | (ner_train_df, 'train_entity_swap_topic_CLIP'), 406 | (ner_valid_df, 'valid_entity_swap_topic_CLIP'), 407 | (ner_test_df, 'test_entity_swap_topic_CLIP'), 408 | ]: 409 | 410 | start_time = time.time() 411 | 412 | print('*****', file_name, '*****') 413 | 414 | input_df = input_df.fillna(value=False).reset_index(drop=True) 415 | 416 | input_df["id"] = input_df["id"].astype('str') 417 | input_df["original_caption"] = input_df["caption"] 418 | input_df["num_of_alterations"] = 0 419 | input_df["falsified"] = False 420 | input_df["type_of_alteration"] = None 421 | input_df["altered_entities"] = None 422 | 423 | def myFunc(args): 424 | 425 | idx, row = args 426 | try: 427 | 428 | if by_modality == 'both': 429 | random_type_of_alteration = random.choice(['id', 'image_id']) 430 | elif by_modality == 'image': 431 | random_type_of_alteration = 'image_id' 432 | else: 433 | random_type_of_alteration = 'id' 434 | 435 | if random_type_of_alteration == 'image_id': 436 | similar_by = 'most_similar_by_image' 437 | elif random_type_of_alteration == 'id': 438 | similar_by = 'most_similar_by_text' 439 | 440 | current_item = copy.deepcopy(row) 441 | most_similar_ids = current_item[similar_by].split('/') 442 | 443 | for tries in range(max_tries): 444 | 445 | if len(most_similar_ids) > tries: 446 | 447 | similar_item_id = most_similar_ids[tries] 448 | similar_item_df = input_df[input_df['id'] == similar_item_id] 449 | 450 | num_of_alterations = 0 451 | collect_alterations = [] 452 | replace_text = current_item["caption"] 453 | 454 | for entity_type in all_entity_types: 455 | 456 | if current_item[entity_type] and similar_item_df[entity_type].tolist()[0]: 457 | 458 | for i in range(row["count_" + entity_type]): 459 | 460 | current_entity = current_item[entity_type].split('|')[i] 461 | swap_entity = similar_item_df[entity_type].tolist()[0].split('|')[0] 462 | 463 | if swap_entity != current_entity: 464 | replace_text = replace_text.replace(current_entity, swap_entity) 465 | 466 | replaced_entity = entity_type + '/' + 
current_entity + '/' + swap_entity 467 | collect_alterations.append(replaced_entity) 468 | 469 | num_of_alterations += 1 470 | 471 | if num_of_alterations > len(similar_item_df[entity_type].tolist()[0].split('|')): 472 | break 473 | 474 | if num_of_alterations > 0: 475 | 476 | current_item['id'] = current_item['id'] + "_alt" 477 | current_item['num_of_alterations'] = num_of_alterations 478 | current_item['falsified'] = True 479 | current_item['caption'] = replace_text 480 | 481 | to_str = '|'.join(collect_alterations) 482 | current_item["altered_entities"] = to_str 483 | current_item['type_of_alteration'] = random_type_of_alteration 484 | 485 | return current_item.to_dict() 486 | 487 | except Exception as e: 488 | print(e) 489 | return None 490 | 491 | with mp.Pool(processes=num_workers) as executor: 492 | results = executor.map(myFunc,[(idx, row) for idx,row in input_df.iterrows()]) 493 | 494 | falsified_data = pd.DataFrame([x for x in results if x != None]) 495 | all_data = pd.concat([input_df, falsified_data]) 496 | all_data = all_data.sample(frac=1).reset_index(drop=True) 497 | 498 | all_data.to_csv('VisualNews/' + file_name + '.csv') 499 | 500 | 501 | def random_sampling_method(input_df, by_topic=False, by_modality='both'): 502 | 503 | random_falsified_data = [] 504 | 505 | if by_topic: 506 | topic_x_id = input_df.groupby('topic')['id'].apply(list) 507 | 508 | val_counts = input_df.topic.value_counts() 509 | val_counts = val_counts[val_counts >= 2] 510 | input_df = input_df[input_df.topic.isin(val_counts.index.tolist())] 511 | 512 | for (row) in tqdm(input_df.to_dict(orient="records"), total=input_df.shape[0]): 513 | while True: 514 | 515 | if by_modality == 'both': 516 | random_type_of_alteration = random.choice(['id', 'image_id']) 517 | elif by_modality == 'image': 518 | random_type_of_alteration = 'image_id' 519 | else: 520 | random_type_of_alteration = 'id' 521 | 522 | if by_topic: 523 | candidates = topic_x_id[row['topic']] 524 | 525 | random_item = random.choice(candidates) 526 | random_item = input_df[input_df[random_type_of_alteration] == random_item] 527 | 528 | else: 529 | random_item = input_df.sample(1) 530 | 531 | if random_item[random_type_of_alteration].tolist()[0] != row[random_type_of_alteration]: 532 | break 533 | 534 | row[random_type_of_alteration] = random_item[random_type_of_alteration].tolist()[0] 535 | 536 | if random_type_of_alteration == 'id': 537 | row['caption'] = random_item.caption.tolist()[0] 538 | row['article_path'] = random_item.article_path.tolist()[0] 539 | elif random_type_of_alteration == 'image_id': 540 | row['image_path'] = random_item.image_path.tolist()[0] 541 | 542 | 543 | row['falsified'] = True 544 | row['type_of_alteration'] = "altered_" + random_type_of_alteration 545 | random_falsified_data.append(row) 546 | 547 | input_df_false = pd.DataFrame(random_falsified_data, 548 | columns=input_df.columns) 549 | 550 | new_df = pd.concat([input_df, input_df_false]) 551 | new_df = new_df.sample(frac=1).reset_index(drop=True) 552 | 553 | return new_df 554 | 555 | 556 | def prepare_R_NESt(): 557 | 558 | random.seed(0) 559 | np.random.seed(0) 560 | 561 | train_df, valid_df, test_df = load_split_VisualNews(load_features=False) 562 | 563 | new_train_df = random_sampling_method(train_df) 564 | new_train_df.to_csv('VisualNews/train_random_sample.csv') 565 | 566 | new_valid_df = random_sampling_method(valid_df) 567 | new_valid_df.to_csv('VisualNews/valid_random_sample.csv') 568 | 569 | new_test_df = random_sampling_method(test_df) 570 | 
new_test_df.to_csv('VisualNews/test_random_sample.csv') 571 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy==1.21.5 2 | pandas==1.4.2 3 | torch==1.11.0 4 | scikit-learn==1.1.1 5 | imblearn==0.10.1 6 | spacy==3.5.2 7 | ftfy==6.1.1 8 | regex==2022.4.24 9 | tqdm==4.64.0 10 | clip @ git+https://github.com/openai/CLIP.git@b46f5ac7587d2e1862f8b7b1573179d80dcdd620 -------------------------------------------------------------------------------- /utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | import time 4 | import torch 5 | import numpy as np 6 | import pandas as pd 7 | from sklearn import metrics 8 | from torch.utils.data import DataLoader 9 | from imblearn.under_sampling import RandomUnderSampler 10 | 11 | class DatasetIterator(torch.utils.data.Dataset): 12 | def __init__( 13 | self, 14 | input_data, 15 | visual_features, 16 | textual_features 17 | ): 18 | self.input_data = input_data 19 | self.visual_features = visual_features 20 | self.textual_features = textual_features 21 | 22 | def __len__(self): 23 | return self.input_data.shape[0] 24 | 25 | def __getitem__(self, idx): 26 | current = self.input_data.iloc[idx] 27 | 28 | img = self.visual_features[current.image_id].values 29 | txt = self.textual_features[current.id].values 30 | label = float(current.falsified) 31 | 32 | return img, txt, label 33 | 34 | def prepare_dataloader(image_embeddings, text_embeddings, input_data, batch_size, num_workers, shuffle): 35 | dg = DatasetIterator( 36 | input_data, 37 | visual_features=image_embeddings, 38 | textual_features=text_embeddings 39 | ) 40 | 41 | dataloader = DataLoader( 42 | dg, 43 | batch_size=batch_size, 44 | shuffle=shuffle, 45 | num_workers=num_workers, 46 | pin_memory=True, 47 | ) 48 | 49 | return dataloader 50 | 51 | 52 | def binary_acc(y_pred, y_test): 53 | 54 | y_pred = torch.sigmoid(y_pred) 55 | y_pred_tag = torch.round(y_pred) 56 | 57 | correct_results_sum = (y_pred_tag == y_test).sum().float() 58 | acc = correct_results_sum / y_test.shape[0] 59 | acc = torch.round(acc * 100) 60 | 61 | return acc 62 | 63 | def topsis(xM, wV=None): 64 | m, n = xM.shape 65 | 66 | if wV is None: 67 | wV = np.ones((1, n)) / n 68 | else: 69 | wV = wV / np.sum(wV) 70 | 71 | normal = np.sqrt(np.sum(xM**2, axis=0)) 72 | 73 | rM = xM / normal 74 | tM = rM * wV 75 | twV = np.max(tM, axis=0) 76 | tbV = np.min(tM, axis=0) 77 | dwV = np.sqrt(np.sum((tM - twV) ** 2, axis=1)) 78 | dbV = np.sqrt(np.sum((tM - tbV) ** 2, axis=1)) 79 | swV = dwV / (dwV + dbV) 80 | 81 | arg_sw = np.argsort(swV)[::-1] 82 | 83 | r_sw = swV[arg_sw] 84 | 85 | return np.argsort(swV)[::-1] 86 | 87 | def choose_best_model(input_df, metrics, epsilon=1e-6): 88 | 89 | X0 = input_df.copy() 90 | X0 = X0.reset_index(drop=True) 91 | X1 = X0[metrics] 92 | X1 = X1.reset_index(drop=True) 93 | 94 | # Stop if the scores are identical in all consecutive epochs 95 | X1[:-1] = X1[:-1] + epsilon 96 | 97 | if "Accuracy" in metrics: 98 | X1["Accuracy"] = 1 - X1["Accuracy"] 99 | 100 | if "Precision" in metrics: 101 | X1["Precision"] = 1 - X1["Precision"] 102 | 103 | if "Recall" in metrics: 104 | X1["Recall"] = 1 - X1["Recall"] 105 | 106 | if "AUC" in metrics: 107 | X1["AUC"] = 1 - X1["AUC"] 108 | 109 | if "F1" in metrics: 110 | X1["F1"] = 1 - X1["F1"] 111 | 112 | if "Pristine" in metrics: 113 | X1["Pristine"] = 1 - X1["Pristine"] 114 | 115 | if 
"Falsified" in metrics: 116 | X1["Falsified"] = 1 - X1["Falsified"] 117 | 118 | X_np = X1.to_numpy() 119 | best_results = topsis(X_np) 120 | top_K_results = best_results[:1] 121 | return X0.iloc[top_K_results] 122 | 123 | def save_results_csv(output_folder_, output_file_, model_performance_): 124 | print("Save Results ", end=" ... ") 125 | exp_results_pd = pd.DataFrame(pd.Series(model_performance_)).transpose() 126 | if not os.path.isfile(output_folder_ + "/" + output_file_ + ".csv"): 127 | exp_results_pd.to_csv( 128 | output_folder_ + "/" + output_file_ + ".csv", 129 | header=True, 130 | index=False, 131 | columns=list(model_performance_.keys()), 132 | ) 133 | else: 134 | exp_results_pd.to_csv( 135 | output_folder_ + "/" + output_file_ + ".csv", 136 | mode="a", 137 | header=False, 138 | index=False, 139 | columns=list(model_performance_.keys()), 140 | ) 141 | print("Done\n") 142 | 143 | 144 | def down_sample(input_data): 145 | 146 | rus = RandomUnderSampler(random_state=0) 147 | X, y = rus.fit_resample(input_data[['id', 'image_id']], input_data['falsified']) 148 | X['falsified'] = y 149 | 150 | return X.sample(frac=1) 151 | 152 | 153 | def early_stop(has_not_improved_for, model, optimizer, history, current_epoch, PATH, metrics_list): 154 | 155 | best_index = choose_best_model( 156 | pd.DataFrame(history), metrics=metrics_list 157 | ).index[0] 158 | 159 | if not os.path.isdir(PATH.split('/')[0]): 160 | os.mkdir(PATH.split('/')[0]) 161 | 162 | if current_epoch == best_index: 163 | 164 | print("Checkpoint!\n") 165 | torch.save( 166 | { 167 | "epoch": current_epoch, 168 | "model_state_dict": model.state_dict(), 169 | "optimizer_state_dict": optimizer.state_dict(), 170 | }, 171 | PATH, 172 | ) 173 | 174 | has_not_improved_for = 0 175 | else: 176 | 177 | print("DID NOT CHECKPOINT!\n") 178 | has_not_improved_for += 1 179 | 180 | return has_not_improved_for 181 | 182 | 183 | def train_step(model, input_dataloader, current_epoch, optimizer, criterion, device, batches_per_epoch, use_multiclass=False): 184 | epoch_start_time = time.time() 185 | 186 | running_loss = 0.0 187 | model.train() 188 | 189 | for i, data in enumerate(input_dataloader, 0): 190 | 191 | images = data[0].to(device, non_blocking=True) 192 | texts = data[1].to(device, non_blocking=True) 193 | 194 | if use_multiclass: 195 | labels = torch.nn.functional.one_hot(data[2].long(), num_classes=3).float().to(device, non_blocking=True) 196 | else: 197 | labels = data[2].to(device, non_blocking=True) 198 | 199 | optimizer.zero_grad() 200 | outputs = model(images, texts) 201 | 202 | loss = criterion( 203 | outputs, labels if use_multiclass else labels.unsqueeze(1) 204 | ) 205 | 206 | loss.backward() 207 | optimizer.step() 208 | 209 | running_loss += loss.item() 210 | 211 | print( 212 | f"[Epoch:{current_epoch + 1}, Batch:{i + 1:5d}/{batches_per_epoch}]. Passed time: {round((time.time() - epoch_start_time) / 60, 1)} minutes. 
loss: {running_loss / (i+1):.3f}", 213 | end="\r", 214 | ) 215 | 216 | 217 | def eval_step(model, input_dataloader, current_epoch, device, use_multiclass=False, return_results=True): 218 | 219 | if current_epoch >= 0: 220 | print("\nEvaluation:", end=" -> ") 221 | else: 222 | print("\nFinal evaluation on the TESTING set", end=" -> ") 223 | 224 | model.eval() 225 | 226 | y_true = [] 227 | y_pred = [] 228 | 229 | with torch.no_grad(): 230 | for i, data in enumerate(input_dataloader, 0): 231 | 232 | images = data[0].to(device, non_blocking=True) 233 | texts = data[1].to(device, non_blocking=True) 234 | labels = data[2].to(device, non_blocking=True) 235 | 236 | predictions = model(images, texts) 237 | y_pred.extend(predictions.cpu().detach().numpy()) 238 | y_true.extend(labels.cpu().detach().numpy()) 239 | 240 | y_pred = np.vstack(y_pred) 241 | 242 | if use_multiclass: 243 | y_true = np.vstack(y_true) 244 | y_true = y_true.flatten() 245 | y_pred_softmax = torch.log_softmax(torch.Tensor(y_pred), dim = 1) 246 | _, y_pred_tags = torch.max(y_pred_softmax, dim = 1) 247 | y_pred = y_pred_tags.numpy() 248 | 249 | if not return_results: 250 | return y_true, y_pred 251 | 252 | acc = metrics.accuracy_score(y_true, y_pred) 253 | prec = metrics.precision_score(y_true, y_pred, average='macro') 254 | recall = metrics.recall_score(y_true, y_pred, average='macro') 255 | f1 = metrics.f1_score(y_true, y_pred, average='macro') 256 | 257 | results = { 258 | "epoch": current_epoch, 259 | "Accuracy": round(acc, 4), 260 | "Precision": round(prec, 4), 261 | "Recall": round(recall, 4), 262 | "F1": round(f1, 4), 263 | } 264 | print(results) 265 | 266 | else: 267 | y_pred = 1/(1 + np.exp(-y_pred)) 268 | y_true = np.array(y_true).reshape(-1,1) 269 | 270 | if not return_results: 271 | return y_true, y_pred 272 | 273 | auc = metrics.roc_auc_score(y_true, y_pred) 274 | y_pred = np.round(y_pred) 275 | acc = metrics.accuracy_score(y_true, y_pred) 276 | prec = metrics.precision_score(y_true, y_pred) 277 | recall = metrics.recall_score(y_true, y_pred) 278 | f1 = metrics.f1_score(y_true, y_pred) 279 | 280 | cm = metrics.confusion_matrix(y_true, y_pred, normalize="true").diagonal() 281 | 282 | results = { 283 | "epoch": current_epoch, 284 | "Accuracy": round(acc, 4), 285 | "AUC": round(auc, 4), 286 | "Precision": round(prec, 4), 287 | "Recall": round(recall, 4), 288 | "F1": round(f1, 4), 289 | 'Pristine': round(cm[0], 4), 290 | 'Falsified': round(cm[1], 4) 291 | } 292 | print(results) 293 | 294 | return results 295 | 296 | 297 | def eval_cosmos(model, clip_version, device, batch_size, num_workers, use_multiclass=False): 298 | data = [] 299 | 300 | for line in open('COSMOS/cosmos_anns/test_data.json', 'r'): 301 | data.append(json.loads(line)) 302 | 303 | cosmos_test = pd.DataFrame(data) 304 | cosmos_text_embeddings = np.load("COSMOS/COSMOS_clip_text_embeddings_test_" + clip_version + ".npy").astype('float32') 305 | cosmos_image_embeddings = np.load("COSMOS/COSMOS_clip_image_embeddings_test_" + clip_version + ".npy").astype('float32') 306 | 307 | # Alter COSMOS to be similar to VisualNews in order to re-use the same evaluation functions 308 | cosmos_test.index.name = 'image_id' 309 | cosmos_test = cosmos_test.reset_index() 310 | cosmos_test['id'] = cosmos_test['image_id'] 311 | cosmos_test.rename({'context_label': 'falsified'}, axis=1, inplace=True) 312 | 313 | cosmos_image_embeddings = pd.DataFrame(cosmos_image_embeddings, index=cosmos_test.id.values).T 314 | cosmos_text_embeddings = pd.DataFrame(cosmos_text_embeddings, 
index=cosmos_test.id.values).T 315 | 316 | cosmos_dataloader = prepare_dataloader(cosmos_image_embeddings, cosmos_text_embeddings, cosmos_test, batch_size, num_workers, False) 317 | 318 | if use_multiclass: 319 | y_true, y_pred = eval_step(model, cosmos_dataloader, -1, device, use_multiclass=True, return_results=False) 320 | y_pred[np.where(y_pred > 0)] = 1 321 | 322 | acc = metrics.accuracy_score(y_true, y_pred) 323 | prec = metrics.precision_score(y_true, y_pred) 324 | recall = metrics.recall_score(y_true, y_pred) 325 | f1 = metrics.f1_score(y_true, y_pred) 326 | 327 | cm = metrics.confusion_matrix(y_true, y_pred, normalize="true").diagonal() 328 | 329 | cosmos_results = { 330 | "epoch": -1, 331 | "Accuracy": round(acc, 4), 332 | "AUC": 0, 333 | "Precision": round(prec, 4), 334 | "Recall": round(recall, 4), 335 | "F1": round(f1, 4), 336 | 'Pristine': round(cm[0], 4), 337 | 'Falsified': round(cm[1], 4) 338 | } 339 | 340 | print(cosmos_results) 341 | 342 | else: 343 | cosmos_results = eval_step(model, cosmos_dataloader, -1, device) 344 | 345 | return cosmos_results 346 | 347 | def check_C(C, pos): 348 | 349 | if C == 0: 350 | return np.zeros(pos.shape[0]) 351 | else: 352 | return np.ones(pos.shape[0]) 353 | 354 | 355 | def sensitivity_per_class(y_true, y_pred, C): 356 | 357 | pos = np.where(y_true == C)[0] 358 | y_true = y_true[pos] 359 | y_pred = y_pred[pos] 360 | 361 | if C == 2: 362 | y_true = np.ones(y_true.shape[0]).reshape(-1, 1) 363 | 364 | return round((y_pred == y_true).sum() / y_true.shape[0], 4) 365 | 366 | def accuracy_CvC(y_true, y_pred, Ca, Cb): 367 | pos_a, _ = np.where(y_true == Ca) 368 | pos_b, _ = np.where(y_true == Cb) 369 | 370 | y_pred_a = y_pred[pos_a].flatten() 371 | y_pred_b = y_pred[pos_b].flatten() 372 | 373 | y_true_a = check_C(Ca, pos_a) 374 | y_true_b = check_C(Cb, pos_b) 375 | 376 | y_pred_avb = np.concatenate([y_pred_a, y_pred_b]) 377 | y_true_avb = np.concatenate([y_true_a, y_true_b]) 378 | 379 | return round(metrics.accuracy_score(y_true_avb, y_pred_avb), 4) 380 | 381 | def eval_verite(model, clip_version, device, batch_size, num_workers, use_multiclass=False, label_map={'true': 0, 'miscaptioned': 1, 'out-of-context': 2}): 382 | 383 | verite_test = pd.read_csv('VERITE/VERITE.csv', index_col=0) 384 | verite_test = verite_test.reset_index().rename({'index': 'id', 'label': 'falsified'}, axis=1) 385 | verite_test['image_id'] = verite_test['id'] 386 | 387 | verite_text_embeddings = np.load("VERITE/VERITE_clip_text_embeddings_" + clip_version + ".npy").astype('float32') 388 | verite_image_embeddings = np.load("VERITE/VERITE_clip_image_embeddings_" + clip_version + ".npy").astype('float32') 389 | 390 | verite_image_embeddings = pd.DataFrame(verite_image_embeddings, index=verite_test.id.values).T 391 | verite_text_embeddings = pd.DataFrame(verite_text_embeddings, index=verite_test.id.values).T 392 | 393 | verite_test.falsified.replace(label_map, inplace=True) 394 | verite_dataloader = prepare_dataloader(verite_image_embeddings, verite_text_embeddings, verite_test, batch_size, num_workers, False) 395 | 396 | y_true, y_pred = eval_step(model, verite_dataloader, -1, device, use_multiclass=use_multiclass, return_results=False) 397 | 398 | if use_multiclass: 399 | acc = metrics.accuracy_score(y_true, y_pred) 400 | matrix = metrics.confusion_matrix(y_true, y_pred) 401 | cm_results = matrix.diagonal() / matrix.sum(axis=1) 402 | 403 | true_ = cm_results[0] 404 | miscaptioned_ = cm_results[1] 405 | out_of_context = cm_results[2] 406 | 407 | verite_results = { 408 | "epoch": 
-1, 409 | "Accuracy": round(acc, 4), 410 | 'True': round(cm_results[0], 4), 411 | 'Miscaptioned': round(cm_results[1], 4), 412 | 'Out-Of-Context': round(cm_results[2], 4) 413 | } 414 | 415 | else: 416 | y_pred = y_pred.round() 417 | 418 | verite_results = {} 419 | 420 | verite_results['epoch'] = -1 421 | 422 | verite_results['True'] = sensitivity_per_class(y_true, y_pred, 0) 423 | verite_results['Miscaptioned'] = sensitivity_per_class(y_true, y_pred, 1) 424 | verite_results['Out-Of-Context'] = sensitivity_per_class(y_true, y_pred, 2) 425 | 426 | verite_results['true_v_miscaptioned'] = accuracy_CvC(y_true, y_pred, 0, 1) 427 | verite_results['true_v_ooc'] = accuracy_CvC(y_true, y_pred, 0, 2) 428 | verite_results['miscaptioned_v_ooc'] = accuracy_CvC(y_true, y_pred, 1, 2) 429 | 430 | y_true_all = y_true.copy() 431 | y_true_all[np.where(y_true_all == 2)[0]] = 1 432 | 433 | verite_results['accuracy'] = round(metrics.accuracy_score(y_true_all, y_pred), 4) 434 | verite_results['balanced_accuracy'] = round(metrics.balanced_accuracy_score(y_true_all, y_pred), 4) 435 | 436 | print(verite_results) 437 | return verite_results 438 | 439 | 440 | def load_data(choose_dataset): # , choose_columns=['id', 'image_id', 'falsified'] 441 | 442 | print("Load data for:", choose_dataset) 443 | 444 | if choose_dataset == "news_clippings": 445 | train_data = json.load(open("news_clippings/data/news_clippings/data/merged_balanced/train.json")) 446 | valid_data = json.load(open("news_clippings/data/news_clippings/data/merged_balanced/val.json")) 447 | test_data = json.load(open("news_clippings/data/news_clippings/data/merged_balanced/test.json")) 448 | 449 | train_data = pd.DataFrame(train_data["annotations"]) 450 | valid_data = pd.DataFrame(valid_data["annotations"]) 451 | test_data = pd.DataFrame(test_data["annotations"]) 452 | 453 | train_data = train_data.sample(frac=1, random_state=0) 454 | 455 | elif choose_dataset == "news_clippings_txt2img": 456 | 457 | train_data = json.load(open("news_clippings/data/news_clippings/data/semantics_clip_text_image/train.json")) 458 | valid_data = json.load(open("news_clippings/data/news_clippings/data/semantics_clip_text_image/val.json")) 459 | test_data = json.load(open("news_clippings/data/news_clippings/data/semantics_clip_text_image/test.json")) 460 | 461 | train_data = pd.DataFrame(train_data["annotations"]) 462 | valid_data = pd.DataFrame(valid_data["annotations"]) 463 | test_data = pd.DataFrame(test_data["annotations"]) 464 | 465 | train_data = train_data.sample(frac=1, random_state=0) 466 | 467 | elif choose_dataset == "news_clippings_txt2txt": 468 | 469 | train_data = json.load(open("news_clippings/data/news_clippings/data/semantics_clip_text_text/train.json")) 470 | valid_data = json.load(open("news_clippings/data/news_clippings/data/semantics_clip_text_text/val.json")) 471 | test_data = json.load(open("news_clippings/data/news_clippings/data/semantics_clip_text_text/test.json")) 472 | 473 | train_data = pd.DataFrame(train_data["annotations"]) 474 | valid_data = pd.DataFrame(valid_data["annotations"]) 475 | test_data = pd.DataFrame(test_data["annotations"]) 476 | 477 | train_data = train_data.sample(frac=1, random_state=0) 478 | 479 | elif choose_dataset == "random_sampling": 480 | train_data = pd.read_csv('VisualNews/train_random_sample.csv', index_col=0) 481 | valid_data = pd.read_csv('VisualNews/valid_random_sample.csv', index_col=0) 482 | test_data = pd.read_csv('VisualNews/test_random_sample.csv', index_col=0) 483 | 484 | elif choose_dataset == 
"random_sampling_topic": 485 | train_data = pd.read_csv('VisualNews/train_random_sample_topic.csv', index_col=0) 486 | valid_data = pd.read_csv('VisualNews/valid_random_sample_topic.csv', index_col=0) 487 | test_data = pd.read_csv('VisualNews/test_random_sample_topic.csv', index_col=0) 488 | 489 | elif choose_dataset == "random_sampling_topic_image": 490 | train_data = pd.read_csv('VisualNews/train_random_sample_topic_image.csv', index_col=0) 491 | valid_data = pd.read_csv('VisualNews/valid_random_sample_topic_image.csv', index_col=0) 492 | test_data = pd.read_csv('VisualNews/test_random_sample_topic_image.csv', index_col=0) 493 | 494 | elif choose_dataset == "random_sampling_topic_text": 495 | train_data = pd.read_csv('VisualNews/train_random_sample_topic_text.csv', index_col=0) 496 | valid_data = pd.read_csv('VisualNews/valid_random_sample_topic_text.csv', index_col=0) 497 | test_data = pd.read_csv('VisualNews/test_random_sample_topic_text.csv', index_col=0) 498 | 499 | elif choose_dataset == "meir": 500 | train_data = pd.read_csv('MEIR/train_meir.csv', index_col=0) 501 | valid_data = pd.read_csv('MEIR/valid_meir.csv', index_col=0) 502 | test_data = pd.read_csv('MEIR/test_meir.csv', index_col=0) 503 | 504 | elif "Misalign" in choose_dataset: 505 | train_data = pd.read_csv('VisualNews/train_Misalign.csv', index_col=0) 506 | valid_data = pd.read_csv('VisualNews/valid_Misalign.csv', index_col=0) 507 | test_data = pd.read_csv('VisualNews/test_Misalign.csv', index_col=0) 508 | 509 | if choose_dataset == 'Misalign_D': 510 | train_data = train_data.sample(frac=1).drop_duplicates('id') 511 | valid_data = valid_data.sample(frac=1).drop_duplicates('id') 512 | test_data = test_data.sample(frac=1).drop_duplicates('id') 513 | 514 | elif choose_dataset == "clip_based_sampling_topic": 515 | train_data = pd.read_csv('VisualNews/train_clip_based_sampling_topic.csv', index_col=0) 516 | valid_data = pd.read_csv('VisualNews/valid_clip_based_sampling_topic.csv', index_col=0) 517 | test_data = pd.read_csv('VisualNews/test_clip_based_sampling_topic.csv', index_col=0) 518 | 519 | elif choose_dataset == "clip_based_sampling_topic_image": 520 | train_data = pd.read_csv('VisualNews/train_clip_based_sampling_topic_image.csv', index_col=0) 521 | valid_data = pd.read_csv('VisualNews/valid_clip_based_sampling_topic_image.csv', index_col=0) 522 | test_data = pd.read_csv('VisualNews/test_clip_based_sampling_topic_image.csv', index_col=0) 523 | 524 | elif choose_dataset == "clip_based_sampling_topic_text": 525 | train_data = pd.read_csv('VisualNews/train_clip_based_sampling_topic_text.csv', index_col=0) 526 | valid_data = pd.read_csv('VisualNews/valid_clip_based_sampling_topic_text.csv', index_col=0) 527 | test_data = pd.read_csv('VisualNews/test_clip_based_sampling_topic_text.csv', index_col=0) 528 | 529 | elif choose_dataset == "EntitySwaps_random_topic": 530 | train_data = pd.read_csv('VisualNews/train_entity_swap_topic.csv', index_col=0) 531 | valid_data = pd.read_csv('VisualNews/valid_entity_swap_topic.csv', index_col=0) 532 | test_data = pd.read_csv('VisualNews/test_entity_swap_topic.csv', index_col=0) 533 | 534 | elif choose_dataset == "EntitySwaps_CLIP_topic": 535 | train_data = pd.read_csv('VisualNews/train_entity_swap_topic_CLIP.csv', index_col=0) 536 | valid_data = pd.read_csv('VisualNews/valid_entity_swap_topic_CLIP.csv', index_col=0) 537 | test_data = pd.read_csv('VisualNews/test_entity_swap_topic_CLIP.csv', index_col=0) 538 | 539 | elif choose_dataset == "EntitySwaps_CLIP_topic_bytext": 540 | train_data = 
pd.read_csv('VisualNews/train_entity_swap_topic_CLIP_text.csv', index_col=0) 541 | valid_data = pd.read_csv('VisualNews/valid_entity_swap_topic_CLIP_text.csv', index_col=0) 542 | test_data = pd.read_csv('VisualNews/test_entity_swap_topic_CLIP_text.csv', index_col=0) 543 | 544 | elif choose_dataset == "EntitySwaps_CLIP_topic_byimage": 545 | train_data = pd.read_csv('VisualNews/train_entity_swap_topic_CLIP_image.csv', index_col=0) 546 | valid_data = pd.read_csv('VisualNews/valid_entity_swap_topic_CLIP_image.csv', index_col=0) 547 | test_data = pd.read_csv('VisualNews/test_entity_swap_topic_CLIP_image.csv', index_col=0) 548 | 549 | elif choose_dataset == "fakeddit_original": 550 | train_data = pd.read_csv('Fakeddit/all_samples/all_train.tsv', sep='\t') 551 | valid_data = pd.read_csv('Fakeddit/all_samples/all_validate.tsv', sep='\t') 552 | test_data = pd.read_csv('Fakeddit/all_samples/all_test_public.tsv', sep='\t') 553 | 554 | train_data = train_data[~train_data.image_url.isna()] 555 | train_data = train_data[~train_data.clean_title.isna()] 556 | 557 | valid_data = valid_data[~valid_data.image_url.isna()] 558 | valid_data = valid_data[~valid_data.clean_title.isna()] 559 | 560 | test_data = test_data[~test_data.image_url.isna()] 561 | test_data = test_data[~test_data.clean_title.isna()] 562 | 563 | train_data['image_id'] = train_data["id"] 564 | valid_data['image_id'] = valid_data["id"] 565 | test_data['image_id'] = test_data["id"] 566 | 567 | train_data["falsified"] = train_data["2_way_label"] 568 | valid_data["falsified"] = valid_data["2_way_label"] 569 | test_data["falsified"] = test_data["2_way_label"] 570 | 571 | id_list = np.load('Fakeddit/fd_original_clip_item_ids_ViTL14.npy') 572 | 573 | train_data = train_data[train_data.id.isin(id_list)] 574 | valid_data = valid_data[valid_data.id.isin(id_list)] 575 | test_data = test_data[test_data.id.isin(id_list)] 576 | 577 | train_data.falsified.replace({0:'1', 1:'0'}, inplace=True) 578 | valid_data.falsified.replace({0:'1', 1:'0'}, inplace=True) 579 | test_data.falsified.replace({0:'1', 1:'0'}, inplace=True) 580 | 581 | elif "Twitter" in choose_dataset: 582 | train_data = pd.read_csv('Twitter/train.csv', index_col=0) 583 | test_data = pd.read_csv('Twitter/test.csv', index_col=0) 584 | 585 | train_data.falsified = train_data.falsified.replace({'fake': 1, 'real': 0}) 586 | test_data.falsified = test_data.falsified.replace({'fake': 1, 'real': 0}) 587 | 588 | if choose_dataset == "Twitter_comparable": 589 | valid_data = test_data.copy() 590 | 591 | elif choose_dataset == "Twitter_corrected": 592 | valid_data = train_data.sample(frac=0.1, random_state=0) 593 | train_data = train_data[~train_data.id.isin(valid_data.id.tolist())] 594 | 595 | train_data.id = train_data.id.astype('str') 596 | valid_data.id = valid_data.id.astype('str') 597 | test_data.id = test_data.id.astype('str') 598 | 599 | train_data.image_id = train_data.image_id.astype('str') 600 | valid_data.image_id = valid_data.image_id.astype('str') 601 | test_data.image_id = test_data.image_id.astype('str') 602 | 603 | train_data.reset_index(drop=True, inplace=True) 604 | valid_data.reset_index(drop=True, inplace=True) 605 | test_data.reset_index(drop=True, inplace=True) 606 | 607 | return train_data, valid_data, test_data 608 | # return train_data[choose_columns], valid_data[choose_columns], test_data[choose_columns] 609 | 610 | 611 | def load_ensemble_data(dataset_method, use_multiclass, choose_columns=['id', 'image_id', 'falsified']): 612 | 613 | dataset_list = dataset_method.split('X') 
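    # Note: dataset_method is expected to join two or three dataset names with 'X'
    # (e.g. a purely hypothetical "EntitySwaps_CLIP_topicXnews_clippings"); the code below
    # relabels the falsified samples of the first dataset(s) as 'tempered_occ' and those of
    # the last dataset as 'untempered_occ' before merging the splits.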
614 | 615 | a_train_data, a_valid_data, a_test_data = load_data(dataset_method.split('X')[0]) 616 | b_train_data, b_valid_data, b_test_data = load_data(dataset_method.split('X')[-1]) 617 | 618 | a_train_data.loc[a_train_data.falsified == True, 'falsified'] = 'tempered_occ' 619 | a_valid_data.loc[a_valid_data.falsified == True, 'falsified'] = 'tempered_occ' 620 | a_test_data.loc[a_test_data.falsified == True, 'falsified'] = 'tempered_occ' 621 | 622 | b_train_data.loc[b_train_data.falsified == True, 'falsified'] = 'untempered_occ' 623 | b_valid_data.loc[b_valid_data.falsified == True, 'falsified'] = 'untempered_occ' 624 | b_test_data.loc[b_test_data.falsified == True, 'falsified'] = 'untempered_occ' 625 | 626 | if len(dataset_list) == 3: 627 | a2_train_data, a2_valid_data, a2_test_data = load_data(dataset_method.split('X')[1]) 628 | a2_train_data.loc[a2_train_data.falsified == True, 'falsified'] = 'tempered_occ' 629 | a2_valid_data.loc[a2_valid_data.falsified == True, 'falsified'] = 'tempered_occ' 630 | a2_test_data.loc[a2_test_data.falsified == True, 'falsified'] = 'tempered_occ' 631 | 632 | elif len(dataset_list) > 3: 633 | raise BaseException("Error, cannot combine more than 3 datasets.") 634 | 635 | if len(dataset_list) == 3: 636 | train_data = pd.concat([a_train_data, a2_train_data, b_train_data]) 637 | valid_data = pd.concat([a_valid_data, a2_valid_data, b_valid_data]) 638 | test_data = pd.concat([a_test_data, a2_test_data, b_test_data]) 639 | 640 | else: 641 | train_data = pd.concat([a_train_data, b_train_data]) 642 | valid_data = pd.concat([a_valid_data, b_valid_data]) 643 | test_data = pd.concat([a_test_data, b_test_data]) 644 | 645 | if use_multiclass: 646 | label_map={'true': 0, 'tempered_occ': 1, 'untempered_occ': 2} 647 | else: 648 | label_map={'true': 0, 'tempered_occ': 1, 'untempered_occ': 1} 649 | 650 | train_data = train_data.drop_duplicates(['id', 'image_id', 'falsified'], keep='first') 651 | valid_data = valid_data.drop_duplicates(['id', 'image_id', 'falsified'], keep='first') 652 | test_data = test_data.drop_duplicates(['id', 'image_id', 'falsified'], keep='first') 653 | 654 | train_data.loc[train_data.falsified == False, 'falsified'] = 'true' 655 | valid_data.loc[valid_data.falsified == False, 'falsified'] = 'true' 656 | test_data.loc[test_data.falsified == False, 'falsified'] = 'true' 657 | 658 | train_data = train_data.sample(frac=1) 659 | 660 | train_data.reset_index(drop=True, inplace=True) 661 | valid_data.reset_index(drop=True, inplace=True) 662 | test_data.reset_index(drop=True, inplace=True) 663 | 664 | train_data.falsified.replace(label_map, inplace=True) 665 | valid_data.falsified.replace(label_map, inplace=True) 666 | test_data.falsified.replace(label_map, inplace=True) 667 | 668 | print(train_data.falsified.value_counts()) 669 | 670 | return train_data[choose_columns], valid_data[choose_columns], test_data[choose_columns] 671 | 672 | def load_features(input_parameters): 673 | 674 | clip_version = input_parameters["CLIP_VERSION"].replace("-", "").replace("/", "") 675 | 676 | print("Load features for:", input_parameters["CHOOSE_DATASET"]) 677 | 678 | if input_parameters["CHOOSE_DATASET"] == "meir": 679 | clip_text_embeddings = np.load( 680 | "MEIR/MEIR_clip_text_embeddings_" + clip_version + ".npy" 681 | ).astype("float32") 682 | clip_image_embeddings = np.load( 683 | "MEIR/MEIR_clip_image_embeddings_" + clip_version + ".npy" 684 | ).astype("float32") 685 | 686 | clip_image_embeddings = pd.DataFrame(clip_image_embeddings).T 687 | clip_text_embeddings = 
pd.DataFrame(clip_text_embeddings).T 688 | 689 | elif 'Twitter' in input_parameters["CHOOSE_DATASET"]: 690 | 691 | clip_image_embeddings = np.load( 692 | "Twitter/clip_image_embeddings_" + clip_version + ".npy" 693 | ).astype("float32") 694 | 695 | clip_text_embeddings = np.load( 696 | "Twitter/clip_text_embeddings_" + clip_version + ".npy" 697 | ).astype("float32") 698 | 699 | text_ids = np.load("Twitter/text_item_ids_" + clip_version + ".npy") 700 | image_ids = np.load("Twitter/image_item_ids_" + clip_version + ".npy") 701 | 702 | clip_image_embeddings = pd.DataFrame(clip_image_embeddings, index=image_ids).T 703 | clip_text_embeddings = pd.DataFrame(clip_text_embeddings, index=text_ids).T 704 | clip_text_embeddings = clip_text_embeddings.loc[:,~clip_text_embeddings.columns.duplicated()].copy() 705 | 706 | elif input_parameters["CHOOSE_DATASET"] == "fakeddit_original": 707 | 708 | clip_text_embeddings = np.load( 709 | "Fakeddit/fd_original_clip_text_embeddings_" + clip_version + ".npy" 710 | ).astype("float32") 711 | 712 | clip_image_embeddings = np.load( 713 | "Fakeddit/fd_original_clip_image_embeddings_" + clip_version + ".npy" 714 | ).astype("float32") 715 | 716 | item_ids = np.load("Fakeddit/fd_original_clip_item_ids_" + clip_version + ".npy") 717 | 718 | clip_image_embeddings = pd.DataFrame(clip_image_embeddings, index=item_ids).T 719 | clip_text_embeddings = pd.DataFrame(clip_text_embeddings, index=item_ids).T 720 | 721 | 722 | else: 723 | print("VisualNews features") 724 | 725 | clip_image_embeddings = np.load( 726 | "VisualNews/clip_image_embeddings_" + clip_version + ".npy" 727 | ).astype("float32") 728 | 729 | clip_text_embeddings = np.load( 730 | "VisualNews/clip_text_embeddings_" + clip_version + ".npy" 731 | ).astype("float32") 732 | 733 | item_ids = np.load("VisualNews/item_ids_" + clip_version + ".npy") 734 | 735 | clip_image_embeddings = pd.DataFrame(clip_image_embeddings, index=item_ids).T 736 | clip_text_embeddings = pd.DataFrame(clip_text_embeddings, index=item_ids).T 737 | 738 | if 'Misalign' in input_parameters["CHOOSE_DATASET"]: 739 | 740 | print("Misalign features") 741 | 742 | print("Load numpy") 743 | all_misalign_features = np.load("VisualNews/MISALIGN_clip_text_embeddings_" + clip_version + ".npy").astype('float32') 744 | 745 | print("Load IDX") 746 | all_idx = np.load("VisualNews/MISALIGN_item_ids_" + clip_version + ".npy") 747 | 748 | print("To dataframe") 749 | all_misalign_features = pd.DataFrame(all_misalign_features.T, columns=all_idx) 750 | 751 | print("Concat") 752 | clip_text_embeddings = pd.concat([clip_text_embeddings, all_misalign_features], axis=1) 753 | 754 | if 'EntitySwaps_random_topic' in input_parameters["CHOOSE_DATASET"]: 755 | 756 | NES_text_features = np.load("VisualNews/EntitySwaps_topic_random_text_embeddings_" + clip_version +".npy").astype("float32") 757 | NES_ids = np.load("VisualNews/EntitySwaps_topic_random_item_ids_" + clip_version +".npy") 758 | 759 | NES_text_features = pd.DataFrame(NES_text_features.T, columns=NES_ids) 760 | clip_text_embeddings = pd.concat([clip_text_embeddings, NES_text_features], axis=1) 761 | 762 | if 'EntitySwaps_CLIP_topic' in input_parameters["CHOOSE_DATASET"]: 763 | 764 | NES_text_features = np.load("VisualNews/EntitySwaps_topic_clip_text_embeddings_" + clip_version +".npy").astype("float32") 765 | NES_ids = np.load("VisualNews/EntitySwaps_topic_clip_item_ids_" + clip_version +".npy") 766 | 767 | NES_text_features = pd.DataFrame(NES_text_features.T, columns=NES_ids) 768 | clip_text_embeddings = 
pd.concat([clip_text_embeddings, NES_text_features], axis=1) 769 | 770 | if 'EntitySwaps_CLIP_topic_bytext' in input_parameters["CHOOSE_DATASET"]: 771 | 772 | NES_text_features = np.load("VisualNews/EntitySwaps_topic_bytext_clip_text_embeddings_" + clip_version +".npy").astype("float32") 773 | NES_ids = np.load("VisualNews/EntitySwaps_topic_bytext_clip_item_ids_" + clip_version +".npy") 774 | 775 | NES_text_features = pd.DataFrame(NES_text_features.T, columns=NES_ids) 776 | clip_text_embeddings = pd.concat([clip_text_embeddings, NES_text_features], axis=1) 777 | 778 | if 'EntitySwaps_CLIP_topic_byimage' in input_parameters["CHOOSE_DATASET"]: 779 | 780 | NES_text_features = np.load("VisualNews/EntitySwaps_topic_byimage_clip_text_embeddings_" + clip_version +".npy").astype("float32") 781 | NES_ids = np.load("VisualNews/EntitySwaps_topic_byimage_clip_item_ids_" + clip_version +".npy") 782 | 783 | NES_text_features = pd.DataFrame(NES_text_features.T, columns=NES_ids) 784 | clip_text_embeddings = pd.concat([clip_text_embeddings, NES_text_features], axis=1) 785 | 786 | clip_image_embeddings.columns = clip_image_embeddings.columns.astype('str') 787 | clip_text_embeddings.columns = clip_text_embeddings.columns.astype('str') 788 | return clip_image_embeddings, clip_text_embeddings --------------------------------------------------------------------------------
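Note on how the pieces fit together: load_data (or load_ensemble_data) returns the train/validation/test splits, load_features returns the pre-extracted CLIP embeddings indexed by item id, prepare_dataloader wraps them in a DatasetIterator, and train_step / eval_step run one training and evaluation pass. The snippet below is a minimal, illustrative sketch of that wiring, not the repository's actual training script: the configuration values are placeholders, and StandInClassifier is a hypothetical stand-in for the model defined in model.py, included only to mirror the expected forward(images, texts) interface.

import torch
from utils import load_data, load_features, prepare_dataloader, train_step, eval_step

# Illustrative configuration; load_features only reads CHOOSE_DATASET and CLIP_VERSION.
params = {"CHOOSE_DATASET": "news_clippings", "CLIP_VERSION": "ViT-L/14"}

# Splits and pre-computed CLIP embeddings (columns are item ids, as in load_features).
train_df, valid_df, test_df = load_data(params["CHOOSE_DATASET"])
image_emb, text_emb = load_features(params)

train_loader = prepare_dataloader(image_emb, text_emb, train_df, batch_size=512, num_workers=8, shuffle=True)
valid_loader = prepare_dataloader(image_emb, text_emb, valid_df, batch_size=512, num_workers=8, shuffle=False)

device = "cuda" if torch.cuda.is_available() else "cpu"

class StandInClassifier(torch.nn.Module):
    # Hypothetical placeholder for the model in model.py; it only mirrors the
    # expected interface: forward(images, texts) -> one logit per sample.
    def __init__(self, dim=768):  # 768 is the CLIP ViT-L/14 embedding size
        super().__init__()
        self.fc = torch.nn.Linear(2 * dim, 1)

    def forward(self, images, texts):
        return self.fc(torch.cat([images, texts], dim=-1))

model = StandInClassifier().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()

for epoch in range(10):
    train_step(model, train_loader, epoch, optimizer, criterion, device, batches_per_epoch=len(train_loader))
    eval_step(model, valid_loader, epoch, device)

For actual experiments, refer to main.py and experiment.py; the sketch above only documents how the functions in utils.py fit together.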