├── docs
│   ├── zed.gif
│   ├── _config.yml
│   └── index.md
├── LICENSE.md
├── README.md
└── .gitignore
/docs/zed.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/grip-unina/ZED/HEAD/docs/zed.gif
--------------------------------------------------------------------------------
/docs/_config.yml:
--------------------------------------------------------------------------------
1 | remote_theme: grip-unina/webtemplate
2 | plugins:
3 | - jekyll
4 | - jekyll-seo-tag
5 |
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | THIS DOCUMENT CONSTITUTES A LICENCE TO USE THE SOFTWARE ON THE TERMS AND CONDITIONS APPEARING BELOW.
2 |
3 |
4 | Preamble
5 |
6 | This License applies to the software with which this license is distributed.
7 | The software is the intellectual property of the Image Processing Research Group of the University Federico II of Naples ('GRIP-UNINA')
8 | and is placed under the protection of copyright laws, including Italian legislation and international treaties.
9 | BY USING THE SOFTWARE, YOU INDICATE YOUR ACCEPTANCE OF THIS LICENSE.
10 |
11 |
12 | Terms and Conditions
13 |
14 | Reproduction, modification, and usage of the software covered by this license are allowed free of charge provided that:
15 | (i) the software is used, reproduced, and modified only for informational and nonprofit purposes; any unauthorized use of this software for industrial or profit-oriented activities is expressly prohibited; and
16 | (ii) any reproduction or modification retains all original notices, including proprietary or copyright notices; and
17 | (iii) reference to the original authors is given whenever results arising from the use of this software, or of any modification of it, are made public.
18 | No other use of the materials and of any information incorporated thereto is hereby authorized.
19 | In addition, be informed that some names are protected by trademarks which are the property of GRIP-UNINA, its researchers, and/or other third parties, whether or not a specific mention to that effect is made.
20 |
21 |
22 | Disclaimers
23 |
24 | This software is provided 'as-is', without any express or implied warranty.
25 | In no event will the authors be held liable for any damages arising from the use of this software.
26 |
27 |
28 | Transmission of user information
29 |
30 | Any and all information or requests for information may be directed to GRIP
31 | via the e-mail addresses linked from the website http://www.grip.unina.it/.
32 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ZED
2 |
3 | [Project Page](https://grip-unina.github.io/ZED/)
4 | [Paper (arXiv)](https://arxiv.org/abs/2409.15875)
5 | [GRIP-UNINA](https://www.grip.unina.it)
6 |
7 | This is the official repository of the paper:
8 | [Zero-Shot Detection of AI-Generated Images](https://arxiv.org/abs/2409.15875).
9 |
10 | Davide Cozzolino, Giovanni Poggi, Matthias Nießner, and Luisa Verdoliva.
11 |
12 | ## Overview
13 | Detecting AI-generated images has become an extraordinarily difficult challenge as new generative architectures emerge on a daily basis with ever greater capabilities and unprecedented realism. New versions of many commercial tools, such as DALL·E, Midjourney, and Stable Diffusion, have been released recently, and it is impractical to continually update and retrain supervised forensic detectors to handle such a large variety of models. To address this challenge, we propose a zero-shot entropy-based detector (ZED) that neither needs AI-generated training data nor relies on knowledge of generative architectures to artificially synthesize their artifacts. Inspired by recent work on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images. To this end, we rely on a lossless image encoder that estimates the probability distribution of each pixel given its context. To ensure computational efficiency, the encoder has a multi-resolution architecture, and contexts comprise mostly pixels of the lower-resolution version of the image. Since only real images are needed to learn the model, the detector is independent of generator architectures and synthetic training data. Using a single discriminative feature, the proposed detector achieves state-of-the-art performance. On a wide variety of generative models, it achieves an average accuracy improvement of more than 3% over the SoTA.
14 |
15 |
16 |
17 | ## Code
18 | Coming Soon
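
While the official code has not been released yet, the snippet below sketches the zero-shot entropy test described in the overview. It is a minimal illustration under stated assumptions, not the paper's implementation: the `model(image)` interface, the array shapes, and the threshold `tau` are all hypothetical.

```python
# Illustrative sketch of an entropy-based zero-shot test (hypothetical interface).
# Assumption: `model(image)` returns, for every pixel, a predicted probability
# distribution over the 256 intensity levels, estimated from the pixel's context
# by a lossless encoder trained on real images only.
import numpy as np

def surprise_feature(probs: np.ndarray, image: np.ndarray) -> float:
    """Mean gap (bits per pixel) between actual and expected coding cost.

    probs: (H, W, 256) per-pixel predicted distributions.
    image: (H, W) uint8 pixel values.
    """
    eps = 1e-12
    idx = image.astype(np.int64)[..., None]
    # Actual cost: -log2 p_i(x_i), the bits a lossless coder would spend on x_i.
    nll = -np.log2(np.take_along_axis(probs, idx, axis=-1)[..., 0] + eps)
    # Expected cost: the entropy H(p_i) of each predicted distribution.
    entropy = -np.sum(probs * np.log2(probs + eps), axis=-1)
    # For real images the two roughly match; images that are less "surprising"
    # than the real-image model expects shift this single scalar statistic.
    return float(np.mean(nll - entropy))

# Hypothetical usage: tau is calibrated on real images only, so no
# AI-generated data is needed at any stage.
# score = surprise_feature(model(image), image)
# is_ai_generated = score < tau
```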
19 |
20 | ## License
21 |
22 | Copyright (c) 2024 Image Processing Research Group of University Federico II of Naples ('GRIP-UNINA').
23 |
24 | All rights reserved.
25 |
26 | This software should be used, reproduced and modified only for informational and nonprofit purposes.
27 |
28 | By downloading and/or using any of these files, you implicitly agree to all the
29 | terms of the license, as specified in the document LICENSE.md
30 | (included in this repository).
31 |
32 | ## Bibtex
33 |
34 | ```bibtex
35 | @inproceedings{cozzolino2024zed,
36 | author={Davide Cozzolino and Giovanni Poggi and Matthias Nießner and Luisa Verdoliva},
37 | title={{Zero-Shot Detection of AI-Generated Images}},
38 | booktitle={European Conference on Computer Vision (ECCV)},
39 | year={2024},
40 | }
41 | ```
42 |
43 | ## Acknowledgments
44 | We gratefully acknowledge the support of this research by a TUM-IAS Hans Fischer Senior Fellowship, the ERC Starting Grant Scan2CAD (804724), and a Google Gift. This material is also based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
45 | The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. In addition, this work has received funding from the European Union under the Horizon Europe vera.ai project, Grant Agreement number 101070093.
46 |
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | ---
2 | layout: paper
3 | paper: Zero-Shot Detection of AI-Generated Images
4 | github_url: https://github.com/grip-unina/ZED
5 | authors:
6 | - name: Davide Cozzolino
7 | link: https://www.grip.unina.it/members/cozzolino
8 | index: 1
9 | - name: Giovanni Poggi
10 | link: https://www.grip.unina.it/members/poggi
11 | index: 1
12 | - name: Matthias Niessner
13 | link: https://niessnerlab.org/members/matthias_niessner/profile.html
14 | index: 2
15 | - name: Luisa Verdoliva
16 | link: https://www.grip.unina.it/members/verdoliva
17 | index: 1
18 | affiliations:
19 | - name: University Federico II of Naples, Italy
20 | index: 1
21 | - name: Technical University of Munich
22 | index: 2
23 | ---
24 |
25 | [Code (GitHub)](https://github.com/grip-unina/ZED/)
26 | [Paper (arXiv)](https://arxiv.org/abs/2409.15875)
27 | [GRIP-UNINA](https://www.grip.unina.it)
28 |
29 | Detecting AI-generated images has become an extraordinarily difficult challenge as new generative architectures emerge on a daily basis with ever greater capabilities and unprecedented realism. New versions of many commercial tools, such as DALL·E, Midjourney, and Stable Diffusion, have been released recently, and it is impractical to continually update and retrain supervised forensic detectors to handle such a large variety of models. To address this challenge, we propose a zero-shot entropy-based detector (ZED) that neither needs AI-generated training data nor relies on knowledge of generative architectures to artificially synthesize their artifacts. Inspired by recent work on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images. To this end, we rely on a lossless image encoder that estimates the probability distribution of each pixel given its context. To ensure computational efficiency, the encoder has a multi-resolution architecture, and contexts comprise mostly pixels of the lower-resolution version of the image. Since only real images are needed to learn the model, the detector is independent of generator architectures and synthetic training data. Using a single discriminative feature, the proposed detector achieves state-of-the-art performance. On a wide variety of generative models, it achieves an average accuracy improvement of more than 3% over the SoTA.
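
In rough formulas (an illustrative restatement of the idea above; the notation is assumed here, not taken verbatim from the paper): a lossless encoder trained on real images predicts a distribution p_i for each pixel value x_i given its context, and the image-level statistic compares the actual coding cost with its expected value,

```latex
% Illustrative restatement, not the paper's exact notation.
\Delta = \frac{1}{N}\sum_{i=1}^{N}\Big[\underbrace{-\log_2 p_i(x_i)}_{\text{actual coding cost}}
       - \underbrace{H(p_i)}_{\text{expected coding cost}}\Big],
\qquad H(p_i) = -\sum_{v=0}^{255} p_i(v)\,\log_2 p_i(v).
```

For real images the actual cost stays close to its expectation, while images that are less surprising under the real-image model shift this statistic, so a single threshold suffices for detection.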
30 |
31 |
32 | 
33 |
34 |
35 | ## Bibtex
36 |
37 | ```bibtex
38 | @inproceedings{cozzolino2024zed,
39 | author={Davide Cozzolino and Giovanni Poggi and Matthias Nießner and Luisa Verdoliva},
40 | title={{Zero-Shot Detection of AI-Generated Images}},
41 | booktitle={European Conference on Computer Vision (ECCV)},
42 | year={2024},
43 | }
44 | ```
45 |
46 | ## Acknowledgments
47 | We gratefully acknowledge the support of this research by a TUM-IAS Hans Fischer Senior Fellowship, the ERC Starting Grant Scan2CAD (804724), and a Google Gift. This material is also based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
48 | The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. In addition, this work has received funding from the European Union under the Horizon Europe vera.ai project, Grant Agreement number 101070093.
49 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # poetry
98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
102 | #poetry.lock
103 |
104 | # pdm
105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
106 | #pdm.lock
107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
108 | # in version control.
109 | # https://pdm.fming.dev/#use-with-ide
110 | .pdm.toml
111 |
112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
113 | __pypackages__/
114 |
115 | # Celery stuff
116 | celerybeat-schedule
117 | celerybeat.pid
118 |
119 | # SageMath parsed files
120 | *.sage.py
121 |
122 | # Environments
123 | .env
124 | .venv
125 | env/
126 | venv/
127 | ENV/
128 | env.bak/
129 | venv.bak/
130 |
131 | # Spyder project settings
132 | .spyderproject
133 | .spyproject
134 |
135 | # Rope project settings
136 | .ropeproject
137 |
138 | # mkdocs documentation
139 | /site
140 |
141 | # mypy
142 | .mypy_cache/
143 | .dmypy.json
144 | dmypy.json
145 |
146 | # Pyre type checker
147 | .pyre/
148 |
149 | # pytype static type analyzer
150 | .pytype/
151 |
152 | # Cython debug symbols
153 | cython_debug/
154 |
155 | # PyCharm
156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
158 | # and can be added to the global gitignore or merged into this file. For a more nuclear
159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
160 | .idea/
161 | .git-credentials
162 |
--------------------------------------------------------------------------------