├── docs ├── grass-x4.webp ├── gravel-x4.webp ├── remake-grass.webp ├── remix-gravel.webp ├── enhance-grass.webp ├── mashup-gravel.webp └── pypi.rst ├── examples ├── dirt1.webp ├── dirt2.webp ├── dirt3.webp ├── dirt4.webp ├── dirt5.webp ├── dirt6.webp ├── dirt7.webp ├── dirt8.webp ├── grass1.webp ├── grass2.webp ├── grass3.webp ├── grass4.webp ├── grass5.webp ├── grass6.webp ├── grass7.webp ├── grass8.webp ├── Demo_Grass.ipynb └── Demo_Gravel.ipynb ├── tests ├── data │ ├── U145220.png │ ├── U145540.png │ ├── U151643.png │ ├── U153952.png │ └── U155728.png ├── conftest.py ├── test_api.py └── test_match.py ├── tasks ├── pycov.ini ├── setup-cuda.yml ├── setup-cpu.yml ├── pytest.ini └── __init__.py ├── src └── texturize │ ├── __init__.py │ ├── patch.py │ ├── logger.py │ ├── api.py │ ├── app.py │ ├── critics.py │ ├── io.py │ ├── __main__.py │ ├── solvers.py │ ├── commands.py │ └── match.py ├── .gitignore ├── pyproject.toml ├── .github └── workflows │ ├── python-release.yml │ └── python-package.yml ├── README.rst └── LICENSE /docs/grass-x4.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/docs/grass-x4.webp -------------------------------------------------------------------------------- /docs/gravel-x4.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/docs/gravel-x4.webp -------------------------------------------------------------------------------- /examples/dirt1.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt1.webp -------------------------------------------------------------------------------- /examples/dirt2.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt2.webp -------------------------------------------------------------------------------- /examples/dirt3.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt3.webp -------------------------------------------------------------------------------- /examples/dirt4.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt4.webp -------------------------------------------------------------------------------- /examples/dirt5.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt5.webp -------------------------------------------------------------------------------- /examples/dirt6.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt6.webp -------------------------------------------------------------------------------- /examples/dirt7.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt7.webp -------------------------------------------------------------------------------- /examples/dirt8.webp: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/dirt8.webp -------------------------------------------------------------------------------- /docs/remake-grass.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/docs/remake-grass.webp -------------------------------------------------------------------------------- /docs/remix-gravel.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/docs/remix-gravel.webp -------------------------------------------------------------------------------- /examples/grass1.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass1.webp -------------------------------------------------------------------------------- /examples/grass2.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass2.webp -------------------------------------------------------------------------------- /examples/grass3.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass3.webp -------------------------------------------------------------------------------- /examples/grass4.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass4.webp -------------------------------------------------------------------------------- /examples/grass5.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass5.webp -------------------------------------------------------------------------------- /examples/grass6.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass6.webp -------------------------------------------------------------------------------- /examples/grass7.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass7.webp -------------------------------------------------------------------------------- /examples/grass8.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/examples/grass8.webp -------------------------------------------------------------------------------- /tests/data/U145220.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/tests/data/U145220.png -------------------------------------------------------------------------------- /tests/data/U145540.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/tests/data/U145540.png -------------------------------------------------------------------------------- /tests/data/U151643.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/texturedesign/texturize/HEAD/tests/data/U151643.png -------------------------------------------------------------------------------- /tests/data/U153952.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/tests/data/U153952.png -------------------------------------------------------------------------------- /tests/data/U155728.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/tests/data/U155728.png -------------------------------------------------------------------------------- /docs/enhance-grass.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/docs/enhance-grass.webp -------------------------------------------------------------------------------- /docs/mashup-gravel.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/texturedesign/texturize/HEAD/docs/mashup-gravel.webp -------------------------------------------------------------------------------- /tasks/pycov.ini: -------------------------------------------------------------------------------- 1 | [run] 2 | disable_warnings = no-data-collected 3 | 4 | [report] 5 | exclude_lines = 6 | pycov:no -------------------------------------------------------------------------------- /src/texturize/__init__.py: -------------------------------------------------------------------------------- 1 | # neural-texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | __version__ = "dev" 4 | -------------------------------------------------------------------------------- /tasks/setup-cuda.yml: -------------------------------------------------------------------------------- 1 | channels: 2 | - pytorch 3 | - defaults 4 | dependencies: 5 | - pip 6 | - pytorch=2.0.0 7 | - torchvision=0.15.0 8 | - pip: 9 | - invoke 10 | - poetry 11 | -------------------------------------------------------------------------------- /tasks/setup-cpu.yml: -------------------------------------------------------------------------------- 1 | channels: 2 | - pytorch 3 | - defaults 4 | dependencies: 5 | - pip 6 | - pytorch=2.0.0 7 | - torchvision=0.15.0 8 | - cpuonly 9 | - pip: 10 | - invoke 11 | - poetry 12 | -------------------------------------------------------------------------------- /tasks/pytest.ini: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | [pytest] 3 | python_files = 4 | api_*.py 5 | cmd_*.py 6 | func_*.py 7 | prop_*.py 8 | test_*.py 9 | unit_*.py 10 | 11 | addopts = 12 | --cov=src/texturize 13 | --cov-config=tasks/pycov.ini 14 | --cov-report=term-missing:skip-covered 15 | --tb=short 16 | --color=yes 17 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / binary files. 2 | __pycache__/ 3 | *.py[cod] 4 | *.*~ 5 | 6 | # Tests / results / generated. 7 | output/ 8 | tmp/ 9 | *.webp 10 | *.gif 11 | *.png 12 | *.jpg 13 | 14 | # Distribution / packaging / build. 
15 | htmlcov/ 16 | poetry.lock 17 | .coverage 18 | .pytest_cache/ 19 | *.egg-info/ 20 | build/ 21 | dist/ 22 | sdist/ 23 | 24 | # Code editors / environment. 25 | .venv/ 26 | .vscode/ 27 | .hypothesis/ 28 | .ipynb*/ 29 | 30 | # Documentation. 31 | docs/_build 32 | -------------------------------------------------------------------------------- /tasks/__init__.py: -------------------------------------------------------------------------------- 1 | # neural-texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | import sys 4 | from invoke import task 5 | 6 | 7 | @task 8 | def test(cmd): 9 | """Run the automated test suite using `pytest`. 10 | 11 | Any arguments that are after `--` are passed directly to pytest, useful for: 12 | * specifying which files or patterns to test 13 | * configuring the various plugins installed 14 | 15 | """ 16 | try: 17 | idx = sys.argv.index("--") 18 | extra = " ".join(sys.argv[idx+1:]) 19 | except ValueError: 20 | extra = "" 21 | 22 | cmd.run("poetry run pytest -c tasks/pytest.ini " + extra) 23 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.poetry] 2 | name = "texturize" 3 | description = "🤖🖌️ Generate new photo-realistic textures similar to a source image." 4 | version = "0.14-dev" 5 | authors = ["Alex J. Champandard <445208+alexjc@users.noreply.github.com>"] 6 | readme = "docs/pypi.rst" 7 | repository = "https://github.com/photogeniq/texturize" 8 | license = "AGPL-3.0" 9 | classifiers = [ 10 | "Development Status :: 3 - Alpha", 11 | "Topic :: Multimedia :: Graphics", 12 | ] 13 | packages = [ 14 | { include = "texturize", from = "src" }, 15 | ] 16 | 17 | [tool.poetry.scripts] 18 | texturize = 'texturize.__main__:main' 19 | 20 | [tool.poetry.dependencies] 21 | python = "^3.7" 22 | creativeai = "^0.2.1" 23 | docopt = "^0.6.2" 24 | progressbar2 = "^4.2.0" 25 | schema = "^0.7.5" 26 | 27 | [tool.poetry.dev-dependencies] 28 | pytest = ">=7.0.0" 29 | hypothesis = ">=6.0.0" 30 | pytest-cov = ">=3.0.0" 31 | keyring = ">=22.4.0" 32 | 33 | [build-system] 34 | requires = ["poetry>=1.0"] 35 | build-backend = "poetry.masonry.api" 36 | -------------------------------------------------------------------------------- /src/texturize/patch.py: -------------------------------------------------------------------------------- 1 | # neural-texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 
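#
# Illustrative usage (not part of the module; shapes assume a 4-D feature
# map of BxCxHxW): `extract` concatenates each pixel's channels with those
# of its zero-padded shifted neighbours, so the channel dimension grows by
# `patch_size ** 2`.
#
#     builder = PatchBuilder(patch_size=3)
#     feats = torch.ones(1, 8, 32, 32)
#     patches = builder.extract(feats)  # -> shape (1, 8 * 9, 32, 32)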
2 | 3 | import itertools 4 | 5 | import torch 6 | import torch.nn.functional as F 7 | 8 | 9 | class PatchBuilder: 10 | def __init__(self, patch_size=3, weights=None): 11 | self.min = -((patch_size - 1) // 2) 12 | self.max = patch_size + self.min - 1 13 | self.patch_size = patch_size 14 | 15 | if weights is None: 16 | weights = torch.ones(size=(patch_size ** 2,)) 17 | else: 18 | weights = torch.tensor(weights, dtype=torch.float32) 19 | 20 | self.weights = weights / weights.sum() 21 | 22 | def extract(self, array): 23 | padded = F.pad( 24 | array, 25 | pad=(abs(self.min), self.max, abs(self.min), self.max), 26 | mode="constant", 27 | value=0.0 28 | ) 29 | h, w = ( 30 | padded.shape[2] - self.patch_size + 1, 31 | padded.shape[3] - self.patch_size + 1, 32 | ) 33 | output = [] 34 | for y, x in itertools.product(self.coords, repeat=2): 35 | p = padded[:, :, y : h + y, x : w + x] 36 | output.append(p) 37 | return torch.cat(output, dim=1) 38 | 39 | @property 40 | def coords(self): 41 | return range(self.patch_size) 42 | -------------------------------------------------------------------------------- /.github/workflows/python-release.yml: -------------------------------------------------------------------------------- 1 | # Python Release 2 | # 3 | # Automatically build and publish the package from the supported Python 3.x version. 4 | # 5 | 6 | name: release 7 | 8 | on: 9 | push: 10 | tags: 11 | - 'v*.*.*' 12 | - 'v*.*' 13 | 14 | jobs: 15 | release: 16 | runs-on: ${{ matrix.os }} 17 | strategy: 18 | matrix: 19 | os: [ubuntu-latest] 20 | python: [3.7] 21 | name: Python ${{ matrix.python }} ${{ matrix.os }} 22 | steps: 23 | - uses: actions/checkout@v3 24 | - name: Get Tag 25 | id: tag 26 | run: | 27 | echo ::set-output name=version::${GITHUB_REF#refs/tags/v} 28 | - name: Set Version 29 | run: | 30 | sed -ri 's/version = "(.*)"/version = "${{ steps.tag.outputs.version }}"/' pyproject.toml 31 | sed -i 's/"dev"/"${{ steps.tag.outputs.version }}"/' src/texturize/__init__.py 32 | - name: Setup Conda 33 | uses: s-weigand/setup-conda@v1 34 | with: 35 | activate-conda: true 36 | python-version: ${{ matrix.python }} 37 | - name: Install Requirements 38 | run: | 39 | pip install poetry invoke 40 | - name: Create Package 41 | run: | 42 | poetry build 43 | - name: Publish Package 44 | env: 45 | POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_TOKEN }} 46 | run: | 47 | poetry publish -n 48 | -------------------------------------------------------------------------------- /.github/workflows/python-package.yml: -------------------------------------------------------------------------------- 1 | # Python Package 2 | # 3 | # Automatically build and test the package for supported Python 3.x versions. 
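# (The matrix below tests Linux on every supported interpreter, and adds
# Windows and macOS jobs at the oldest and newest versions, 3.7 and 3.10.)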
4 | # 5 | 6 | name: build 7 | 8 | on: 9 | push: 10 | branches: [ master ] 11 | pull_request: 12 | branches: [ master ] 13 | 14 | jobs: 15 | build: 16 | runs-on: ${{ matrix.os }} 17 | strategy: 18 | matrix: 19 | os: [ubuntu-latest] 20 | python: ['3.7', '3.8', '3.9', '3.10'] 21 | include: 22 | - os: windows-latest 23 | python: '3.7' 24 | - os: macOS-latest 25 | python: '3.7' 26 | - os: windows-latest 27 | python: '3.10' 28 | - os: macOS-latest 29 | python: '3.10' 30 | name: Python ${{ matrix.python }} ${{ matrix.os }} 31 | steps: 32 | - uses: actions/checkout@v3 33 | with: 34 | submodules: recursive 35 | - name: Setup Conda 36 | uses: s-weigand/setup-conda@v1 37 | with: 38 | activate-conda: true 39 | update-conda: true 40 | python-version: ${{ matrix.python }} 41 | conda-channels: pytorch 42 | - name: Install Requirements 43 | run: | 44 | conda install pytorch torchvision cpuonly -c pytorch 45 | pip install poetry invoke pillow --force 46 | - name: Setup Dependencies 47 | run: | 48 | poetry config virtualenvs.create false 49 | poetry install 50 | - name: Run Automated Tests 51 | run: | 52 | poetry run pytest -c tasks/pytest.ini -v -k "not test_api" 53 | - name: Run API Test Suite 54 | run: | 55 | poetry run pytest -c tasks/pytest.ini -v -k test_api --suite=fast 56 | - name: Create Package 57 | run: | 58 | poetry build 59 | -------------------------------------------------------------------------------- /docs/pypi.rst: -------------------------------------------------------------------------------- 1 | texturize 2 | ========= 3 | 4 | Generate photo-realistic textures similar to a source picture. Remix, remake, mashup! 5 | 6 | .. image:: https://raw.githubusercontent.com/photogeniq/texturize/master/docs/grass-x4.webp 7 | 8 | A command-line tool and Python library to automatically generate new textures similar 9 | to a source image or photograph. It's useful in the context of computer graphics if 10 | you want to make variations on a theme or expand the size of an existing texture. 11 | 12 | This software is powered by deep learning technology — using a combination of 13 | convolution networks and example-based optimization to synthesize images. We're 14 | building ``texturize`` as the highest-quality open source library available! 15 | 16 | **➔ See the** `GitHub repository `_ **for details.** 17 | 18 | ---- 19 | 20 | |Python Version| |License Type| |Project Stars| |Package Version| |Project Status| |Build Status| 21 | 22 | .. |Python Version| image:: https://img.shields.io/pypi/pyversions/texturize 23 | :target: https://www.python.org/ 24 | 25 | .. |License Type| image:: https://img.shields.io/badge/license-AGPL-blue.svg 26 | :target: https://github.com/photogeniq/texturize/blob/master/LICENSE 27 | 28 | .. |Project Stars| image:: https://img.shields.io/github/stars/photogeniq/texturize.svg?color=turquoise 29 | :target: https://github.com/photogeniq/texturize/stargazers 30 | 31 | .. |Package Version| image:: https://img.shields.io/pypi/v/texturize?color=turquoise 32 | :alt: PyPI - Version 33 | :target: https://pypi.org/project/texturize/ 34 | 35 | .. |Project Status| image:: https://img.shields.io/pypi/status/texturize?color=#00ff00 36 | :alt: PyPI - Status 37 | :target: https://github.com/photogeniq/texturize 38 | 39 | .. 
|Build Status| image:: https://img.shields.io/github/workflow/status/photogeniq/texturize/build 40 | :alt: GitHub Workflow Status 41 | :target: https://github.com/photogeniq/texturize/actions?query=workflow%3Abuild 42 | -------------------------------------------------------------------------------- /tests/conftest.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | import glob 4 | import random 5 | import pathlib 6 | 7 | import pytest 8 | import PIL.Image, PIL.ImageOps 9 | 10 | 11 | def pytest_addoption(parser): 12 | parser.addoption( 13 | "--suite", 14 | action="store", 15 | default="fast", 16 | help="image suite to test: fast or full", 17 | ) 18 | 19 | 20 | def pytest_generate_tests(metafunc): 21 | if "filename" not in metafunc.fixturenames: 22 | return 23 | 24 | if metafunc.config.getoption("suite") == "full": 25 | filenames = glob.glob("examples/*.webp") 26 | count = 4 27 | if metafunc.config.getoption("suite") == "fast": 28 | filenames = glob.glob("tests/data/*.png") 29 | count = 1 30 | 31 | assert len(filenames) >= count 32 | metafunc.parametrize("filename", random.sample(filenames, count)) 33 | 34 | 35 | @pytest.fixture() 36 | def size(request): 37 | if request.config.getoption("--suite") == "full": 38 | return (272, 240) 39 | if request.config.getoption("--suite") == "fast": 40 | return (96, 80) 41 | assert False, "Invalid test suite specified." 42 | 43 | 44 | @pytest.fixture(scope="function") 45 | def image(request, filename, size): 46 | img = PIL.Image.open(filename) 47 | return PIL.ImageOps.fit(img.convert("RGB"), size) 48 | 49 | 50 | def pytest_collect_file(path, parent): 51 | if not path.strpath.endswith('conftest.py'): 52 | return 53 | 54 | if parent.config.getoption('--suite') != "full": 55 | return 56 | 57 | from doctest import ELLIPSIS 58 | 59 | from sybil import Sybil 60 | import sybil.parsers.rest as SybilParsers 61 | import sybil.integration.pytest as SybilTest 62 | 63 | sybil = Sybil( 64 | parsers=[ 65 | SybilParsers.DocTestParser(optionflags=ELLIPSIS), 66 | SybilParsers.PythonCodeBlockParser(), 67 | ], 68 | path='.', 69 | patterns=['*.rst'], 70 | ) 71 | 72 | return SybilTest.SybilFile.from_parent(parent, path=pathlib.Path('README.rst'), sybil=sybil) 73 | -------------------------------------------------------------------------------- /tests/test_api.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 
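#
# Note: the `image` and `size` arguments of the tests below are pytest
# fixtures from tests/conftest.py; `image` is a random sample file fitted
# to `size`, and both depend on the --suite option (fast or full).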
2 | 3 | import torch 4 | import pytest 5 | 6 | import PIL.ImageOps 7 | 8 | from texturize import api, commands 9 | 10 | 11 | def test_api_remix(image, size): 12 | remix = commands.Remix(source=image) 13 | for r in api.process_octaves(remix, size=size, iterations=2): 14 | assert len(r.tensor) == 1 15 | assert isinstance(r.tensor, torch.Tensor) 16 | assert r.tensor.shape[2:] == (size[1] // r.scale, size[0] // r.scale) 17 | 18 | 19 | def test_api_remake(image, size): 20 | remake = commands.Remake(target=image, source=image) 21 | for r in api.process_octaves(remake, size=size, iterations=2): 22 | assert len(r.tensor) == 1 23 | assert isinstance(r.tensor, torch.Tensor) 24 | assert r.tensor.shape[2:] == (size[1] // r.scale, size[0] // r.scale) 25 | 26 | 27 | def test_api_mashup(image, size): 28 | mashup = commands.Mashup(sources=[image, image]) 29 | for r in api.process_octaves(mashup, size=size, iterations=2): 30 | assert len(r.tensor) == 1 31 | assert isinstance(r.tensor, torch.Tensor) 32 | assert r.tensor.shape[2:] == (size[1] // r.scale, size[0] // r.scale) 33 | 34 | 35 | def test_api_enhance(image, size): 36 | mashup = commands.Enhance(target=image, source=image, zoom=2) 37 | for r in api.process_octaves(mashup, size=size, iterations=2): 38 | assert len(r.tensor) == 1 39 | assert isinstance(r.tensor, torch.Tensor) 40 | assert r.tensor.shape[2:] == (size[1] // r.scale, size[0] // r.scale) 41 | 42 | 43 | def test_api_expand(image, size): 44 | expand = commands.Expand(target=image, source=image, factor=(1.5, 1.5)) 45 | for r in api.process_octaves(expand, size=size, iterations=2): 46 | assert len(r.tensor) == 1 47 | assert isinstance(r.tensor, torch.Tensor) 48 | assert r.tensor.shape[2:] == (size[1] // r.scale, size[0] // r.scale) 49 | 50 | 51 | def test_api_repair(image, size): 52 | src = PIL.ImageOps.mirror(image) 53 | repair = commands.Repair(target=image.convert("RGBA"), source=src) 54 | for r in api.process_octaves(repair, size=size, iterations=2): 55 | assert len(r.tensor) == 1 56 | assert isinstance(r.tensor, torch.Tensor) 57 | assert r.tensor.shape[2:] == (size[1] // r.scale, size[0] // r.scale) 58 | -------------------------------------------------------------------------------- /src/texturize/logger.py: -------------------------------------------------------------------------------- 1 | # neural-texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 
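#
# The log classes below share a common informal interface (debug, notice,
# info, warn, plus create_progress_bar), so callers can swap them freely:
# EmptyLog discards everything, ConsoleLog prints ANSI-colored text, and
# NotebookLog renders an ipywidgets progress bar.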
2 | 3 | import progressbar 4 | 5 | try: 6 | import ipywidgets 7 | except ImportError: 8 | pass 9 | 10 | 11 | class ansi: 12 | WHITE = "\033[1;97m" 13 | BLACK = "\033[0;30m\033[47m" 14 | YELLOW = "\033[1;33m" 15 | PINK = "\033[1;35m" 16 | ENDC = "\033[0m\033[49m" 17 | 18 | 19 | class EmptyLog: 20 | def notice(self, *args): 21 | pass 22 | 23 | def info(self, *args): 24 | pass 25 | 26 | def debug(self, *args): 27 | pass 28 | 29 | def warn(self, *args): 30 | pass 31 | 32 | def create_progress_bar(self, iterations): 33 | return progressbar.NullBar(max_value=iterations) 34 | 35 | 36 | class ConsoleLog: 37 | def __init__(self, quiet, verbose): 38 | self.quiet = quiet 39 | self.verbose = verbose 40 | 41 | def create_progress_bar(self, iterations): 42 | widgets = [ 43 | progressbar.Variable("iter", format="{name}: {value}"), 44 | " | ", 45 | progressbar.Variable("loss", format="{name}: {value:0.3e}"), 46 | " ", 47 | progressbar.Bar(marker="■", fill="·"), 48 | " ", 49 | progressbar.Percentage(), 50 | " | ", 51 | progressbar.Timer(format='elapsed: %(elapsed)s'), 52 | ] 53 | ProgressBar = progressbar.NullBar if self.quiet else progressbar.ProgressBar 54 | return ProgressBar( 55 | max_value=iterations, widgets=widgets, variables={"loss": float("+inf")} 56 | ) 57 | 58 | def debug(self, *args): 59 | if self.verbose: 60 | print(*args) 61 | 62 | def notice(self, *args): 63 | if not self.quiet: 64 | print(*args) 65 | 66 | def info(self, *args): 67 | if not self.quiet: 68 | print(ansi.BLACK + "".join(args) + ansi.ENDC) 69 | 70 | def warn(self, *args): 71 | print(ansi.YELLOW + "".join(args) + ansi.ENDC) 72 | 73 | def error(self, *args): 74 | print(ansi.PINK + "".join(args) + ansi.ENDC) 75 | 76 | 77 | class NotebookLog: 78 | class ProgressBar: 79 | def __init__(self, max_iter): 80 | self.bar = ipywidgets.IntProgress( 81 | value=0, 82 | min=0, 83 | max=max_iter, 84 | step=1, 85 | description="", 86 | bar_style="", 87 | orientation="horizontal", 88 | layout=ipywidgets.Layout(width="100%", margin="0"), 89 | ) 90 | 91 | from IPython.display import display 92 | 93 | display(self.bar) 94 | 95 | def update(self, value, **keywords): 96 | self.bar.value = value 97 | 98 | def reset(self, iterations): 99 | self.bar.max = iterations 100 | self.bar.value = 0 101 | self.bar.layout = ipywidgets.Layout(width="100%", margin="0") 102 | 103 | def finish(self): 104 | self.bar.layout = ipywidgets.Layout(display="none") 105 | 106 | def __init__(self): 107 | self.progress = None 108 | 109 | def create_progress_bar(self, iterations): 110 | if self.progress is None: 111 | self.progress = NotebookLog.ProgressBar(iterations) 112 | else: 113 | self.progress.reset(iterations) 114 | 115 | return self.progress 116 | 117 | def debug(self, *args): 118 | pass 119 | 120 | def notice(self, *args): 121 | pass 122 | 123 | def info(self, *args): 124 | pass 125 | 126 | def warn(self, *args): 127 | pass 128 | 129 | 130 | def get_default_log(): 131 | try: 132 | get_ipython 133 | ipywidgets 134 | return NotebookLog() 135 | except NameError: 136 | return EmptyLog() 137 | -------------------------------------------------------------------------------- /src/texturize/api.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 
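#
# Usage sketch, mirroring tests/test_api.py (the output filename here is
# illustrative; `image` is any PIL.Image):
#
#     from texturize import api, commands
#
#     remix = commands.Remix(source=image)
#     for result in api.process_octaves(remix, size=(256, 256), iterations=99):
#         result.images[0].save(f"output-octave{result.octave}.png")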
2 | 3 | import math 4 | 5 | import torch 6 | from creativeai.vision import encoders 7 | 8 | from .app import Application, Result 9 | from .io import * 10 | 11 | 12 | @torch.no_grad() 13 | def process_iterations( 14 | cmd, 15 | *, 16 | size: tuple, 17 | log: object = None, 18 | octaves: int = None, 19 | variations: int = 1, 20 | iterations: tuple = (400,), 21 | model: str = "VGG11", 22 | layers: str = None, 23 | mode: str = None, 24 | device: str = None, 25 | precision: str = None, 26 | ): 27 | """Synthesize a new texture and return a PyTorch tensor at each iteration. 28 | """ 29 | 30 | # Configure the default options dynamically, unless overridden. 31 | factor = math.sqrt((size[0] * size[1]) / (32 ** 2)) 32 | octaves = octaves or getattr(cmd, "octaves", int(math.log(factor, 2) + 1.0)) 33 | iterations = iterations if isinstance(iterations, tuple) else (iterations,) 34 | 35 | # Set up the application to use throughout the synthesis. 36 | app = Application(log, device, precision) 37 | 38 | # Encoder used by all the critics at every octave. 39 | encoder = getattr(encoders, model + 'Encoder')(pretrained=True, pool_type=torch.nn.AvgPool2d) 40 | encoder = encoder.to(device=app.device, dtype=app.precision) 41 | app.encoder = encoder 42 | app.layers = layers 43 | app.mode = mode 44 | 45 | # Coarse-to-fine rendering, number of octaves specified by user. 46 | seed = None 47 | for octave, scale in enumerate(2 ** s for s in range(octaves - 1, -1, -1)): 48 | app.log.info(f"\n OCTAVE #{octave} ") 49 | app.log.debug("<- scale:", f"1/{scale}") 50 | 51 | app.progress = app.log.create_progress_bar(100) 52 | 53 | result_size = (variations, 3, size[1] // scale, size[0] // scale) 54 | app.log.debug("<- seed:", tuple(result_size[2:])) 55 | 56 | for dtype in [torch.float32, torch.float16]: 57 | if app.precision != dtype: 58 | app.precision = dtype 59 | app.encoder = app.encoder.to(dtype=dtype) 60 | if seed is not None: 61 | seed = seed.to(app.device) 62 | 63 | try: 64 | critics = cmd.prepare_critics(app, scale) 65 | seed = cmd.prepare_seed_tensor(app, result_size, previous=seed) 66 | 67 | its = iterations[-1] if octave >= len(iterations) else iterations[octave] 68 | for i, result in enumerate(app.process_octave( 69 | seed, app.encoder, critics, octave, scale, iterations=its, 70 | )): 71 | yield result 72 | 73 | seed = result.tensor 74 | del result 75 | break 76 | 77 | except RuntimeError as e: 78 | if "CUDA out of memory." not in str(e): 79 | raise 80 | app.log.error(f"ERROR: Out of memory at octave {octave}.") 81 | app.log.debug(e) 82 | 83 | import gc; gc.collect() 84 | torch.cuda.empty_cache() 85 | 86 | 87 | @torch.no_grad() 88 | def process_octaves(cmd, **kwargs): 89 | """Synthesize a new texture from sources and return a PyTorch tensor at each octave.
90 | """ 91 | for r in process_iterations(cmd, **kwargs): 92 | if r.iteration >= 0: 93 | continue 94 | 95 | yield Result( 96 | r.tensor, r.octave, r.scale, -r.iteration, r.loss, r.rate, r.retries 97 | ) 98 | 99 | 100 | def process_single_command(cmd, log: object, output: str = None, properties: list = [], **config: dict): 101 | for result in process_octaves(cmd, log=log, **config): 102 | result = cmd.finalize_octave(result) 103 | 104 | filenames = [] 105 | for i in range(result.tensor.shape[0]): 106 | from .io import save_tensor_to_files 107 | filename = output.format( 108 | octave=result.octave, 109 | variation=f"_{i}" if result.tensor.shape[0] > 1 else "", 110 | command=cmd.__class__.__name__.lower(), 111 | prop="_{prop}" if len(properties) else "", 112 | ) 113 | save_tensor_to_files(result.tensor[i:i+1], filename, properties) 114 | log.debug("\n=> output:", filename) 115 | filenames.append(filename) 116 | 117 | return result, filenames 118 | -------------------------------------------------------------------------------- /src/texturize/app.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | import itertools 4 | import dataclasses 5 | 6 | import torch 7 | import torch.nn.functional as F 8 | 9 | from .logger import get_default_log 10 | from .solvers import ( 11 | SolverSGD, 12 | SolverLBFGS, 13 | MultiCriticObjective, 14 | SequentialCriticObjective, 15 | ) 16 | from .io import * 17 | 18 | 19 | __all__ = ["Application", "Result", "TextureSynthesizer"] 20 | 21 | 22 | class TextureSynthesizer: 23 | def __init__(self, device, encoder, lr, iterations): 24 | self.device = device 25 | self.encoder = encoder 26 | self.iterations = iterations 27 | self.learning_rate = lr 28 | 29 | def run(self, progress, seed_img, *args): 30 | for oc, sc in itertools.product( 31 | [MultiCriticObjective, SequentialCriticObjective], [SolverLBFGS, SolverSGD], 32 | ): 33 | if sc == SolverLBFGS and seed_img.dtype == torch.float16: 34 | continue 35 | 36 | try: 37 | yield from self._run(progress, seed_img, *args, objective_class=oc, solver_class=sc) 38 | progress.finish() 39 | return 40 | except RuntimeError as e: 41 | if "CUDA out of memory." not in str(e): 42 | raise 43 | 44 | import gc; gc.collect() 45 | torch.cuda.empty_cache() 46 | 47 | raise RuntimeError("CUDA out of memory.") 48 | 49 | def _run( 50 | self, progress, seed_img, critics, objective_class=None, solver_class=None 51 | ): 52 | """Run the optimizer on the image according to the loss returned by the critics. 53 | """ 54 | critics = list(itertools.chain.from_iterable(critics)) 55 | image = seed_img.to(self.device) 56 | alpha = None if image.shape[1] == 3 else image[:, 3:4] 57 | image = image[:, 0:3].detach().requires_grad_(True) 58 | 59 | obj = objective_class(self.encoder, critics, alpha=alpha) 60 | opt = solver_class(obj, image, lr=self.learning_rate) 61 | 62 | for i, loss, lr, retries in self._iterate(opt): 63 | # Constrain the image to the valid color range. 64 | image.data.clamp_(0.0, 1.0) 65 | 66 | # Update the progress bar with the result! 67 | progress.update(i * 100 / (self.iterations - 1), loss=loss, iter=i) 68 | 69 | # Return the result to the user... 70 | yield loss, image, lr, retries 71 | 72 | progress.max_value = i + 1 73 | 74 | def _iterate(self, opt): 75 | for i in range(self.iterations): 76 | # Perform one step of the optimization. 77 | loss, scores = opt.step() 78 | 79 | # Return this iteration to the caller...
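            # (The tuple is: iteration index, loss value, current learning
            # rate, and retry count. TextureSynthesizer._run uses the index
            # and loss to update the progress bar, then hands the rest
            # through to its own caller.)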
80 | yield i, loss, opt.lr, opt.retries 81 | 82 | 83 | @dataclasses.dataclass 84 | class Result: 85 | tensor: torch.Tensor 86 | octave: int 87 | scale: int 88 | iteration: int 89 | loss: float 90 | rate: float 91 | retries: int 92 | 93 | @property 94 | def images(self): 95 | return save_tensor_to_images(self.tensor) 96 | 97 | 98 | class Application: 99 | def __init__(self, log=None, device=None, precision=None): 100 | # Set up the output and logging to use throughout the synthesis. 101 | self.log = log or get_default_log() 102 | # Determine which device to use based on what's available. 103 | self.device = torch.device( 104 | device or ("cuda" if torch.cuda.is_available() else "cpu") 105 | ) 106 | # The floating point format is 32-bit by default, 16-bit supported. 107 | self.precision = getattr(torch, precision or "float32") 108 | 109 | def process_octave(self, result_img, encoder, critics, octave, scale, iterations): 110 | # Each octave, we start a new optimization process. 111 | synth = TextureSynthesizer(self.device, encoder, lr=0.1, iterations=iterations) 112 | result_img = result_img.to(dtype=self.precision) 113 | 114 | # The first iteration contains the rescaled image with noise. 115 | yield Result(result_img, octave, scale, 0, float("+inf"), 1.0, 0) 116 | 117 | for iteration, (loss, result_img, lr, retries) in enumerate( 118 | synth.run(self.progress, result_img, critics), start=1 119 | ): 120 | yield Result(result_img, octave, scale, iteration, loss, lr, retries) 121 | 122 | # The last iteration is repeated to indicate completion. 123 | yield Result(result_img, octave, scale, -iteration, loss, lr, retries) 124 | -------------------------------------------------------------------------------- /examples/Demo_Grass.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Texturize Demo: Grass\n", 8 | "\n", 9 | "Welcome! This notebook contains a demo of `texturize`, which will generate three **variations of grass textures** and takes about 5-10 minutes to run in total.\n", 10 | "\n", 11 | "* You can run the whole notebook and follow along, click `Runtime > Run All` from the menu.\n", 12 | "* Alternatively, run each code cell by selecting it, then clicking the arrow button ➤ in the left column.\n", 13 | "* Re-run blocks `a.` to use a different random crop of the source image.\n", 14 | "* Re-run blocks `b.` to generate a new texture from a different random seed.\n", 15 | "* Watch the generator optimize as it displays the result frame-by-frame!\n", 16 | "\n", 17 | "If you enjoy the project, feel free to **browse the repository** on [GitHub](https://github.com/photogeniq/texturize) and leave us a star ★. If you encounter any problems with your textures, report them in the [Issues](https://github.com/photogeniq/texturize).
Thanks!\n", 18 | "\n", 19 | "_This notebook is released under the CC-BY-NC-SA license — including the text, images and code._" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": {}, 26 | "outputs": [], 27 | "source": [ 28 | "# Install the latest release from the Python Package Index (PyPI).\n", 29 | "!pip install -q --progress-bar=off texturize\n", 30 | "\n", 31 | "from texturize import api, commands, io\n", 32 | "\n", 33 | "# The sample files are stored as attachments in this GitHub Release.\n", 34 | "BASE_URL = \"https://github.com/photogeniq/texturize/releases/download/v0.0/\"" 35 | ] 36 | }, 37 | { 38 | "cell_type": "markdown", 39 | "metadata": {}, 40 | "source": [ 41 | "All the images in this file are generated using this function. You can configure the parameters here if necessary:" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": null, 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "# @title Settings\n", 51 | "\n", 52 | "resolution = '0.3 MP' # @param ['0.3 MP', '0.6 MP']\n", 53 | "quality = 4 # @param {type:\"slider\", min:0, max:10, step:1}\n", 54 | "\n", 55 | "target_sizes = {\n", 56 | " '0.3 MP': (736, 352),\n", 57 | " '0.6 MP': (1088, 544),\n", 58 | "}\n", 59 | "\n", 60 | "source_crops = {\n", 61 | " '0.3 MP': (528, 528),\n", 62 | " '0.6 MP': (776, 776),\n", 63 | "}\n", 64 | "\n", 65 | "\n", 66 | "def synthesize(image):\n", 67 | " \"\"\"An iterator that yields a new result at each step of the optimization.\n", 68 | " \"\"\"\n", 69 | " remix = commands.Remix(image)\n", 70 | "\n", 71 | " return api.process_iterations(\n", 72 | " remix,\n", 73 | " octaves=None, # Number of levels in coarse-to-fine rendering.\n", 74 | " size=target_sizes[resolution], # Resolution of the output, must fit in GPU memory.\n", 75 | " quality=quality, # The quality level, higher is better.\n", 76 | " )" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "metadata": {}, 82 | "source": [ 83 | "# Sample: Grass 1\n", 84 | "\n", 85 | "## a. Original Image" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "grass1 = io.load_image_from_url(BASE_URL + \"grass1.webp\") # Image CC-BY-NC-SA @alexjc.\n", 95 | "grass1 = io.random_crop(grass1, source_crops[resolution])\n", 96 | "\n", 97 | "io.show_image_as_tiles(grass1, count=5, size=(144, 144))" 98 | ] 99 | }, 100 | { 101 | "cell_type": "markdown", 102 | "metadata": {}, 103 | "source": [ 104 | "## b. Generated Result" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": null, 110 | "metadata": {}, 111 | "outputs": [], 112 | "source": [ 113 | "display_widget = io.show_result_in_notebook()\n", 114 | "\n", 115 | "for result in synthesize(grass1):\n", 116 | " display_widget.update(result)" 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "metadata": {}, 122 | "source": [ 123 | "# Sample: Grass 2\n", 124 | "\n", 125 | "## a. Original Image" 126 | ] 127 | }, 128 | { 129 | "cell_type": "code", 130 | "execution_count": null, 131 | "metadata": {}, 132 | "outputs": [], 133 | "source": [ 134 | "grass2 = io.load_image_from_url(BASE_URL + \"grass2.webp\") # Image CC-BY-NC-SA @alexjc.\n", 135 | "grass2 = io.random_crop(grass2, source_crops[resolution])\n", 136 | "\n", 137 | "io.show_image_as_tiles(grass2, count=5, size=(144, 144))" 138 | ] 139 | }, 140 | { 141 | "cell_type": "markdown", 142 | "metadata": {}, 143 | "source": [ 144 | "## b. 
Generated Result" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": null, 150 | "metadata": {}, 151 | "outputs": [], 152 | "source": [ 153 | "display_widget = io.show_result_in_notebook()\n", 154 | "\n", 155 | "for result in synthesize(grass2):\n", 156 | " display_widget.update(result)" 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "# Sample: Grass 3\n", 164 | "\n", 165 | "## a. Original Image" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": null, 171 | "metadata": {}, 172 | "outputs": [], 173 | "source": [ 174 | "grass3 = io.load_image_from_url(BASE_URL + \"grass3.webp\") # Image CC-BY-NC-SA @alexjc.\n", 175 | "grass3 = io.random_crop(grass3, source_crops[resolution])\n", 176 | "\n", 177 | "io.show_image_as_tiles(grass3, count=5, size=(144, 144))" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": {}, 183 | "source": [ 184 | "## b. Generated Result" 185 | ] 186 | }, 187 | { 188 | "cell_type": "code", 189 | "execution_count": null, 190 | "metadata": {}, 191 | "outputs": [], 192 | "source": [ 193 | "display_widget = io.show_result_in_notebook()\n", 194 | "\n", 195 | "for result in synthesize(grass3):\n", 196 | " display_widget.update(result)" 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "# What Next?\n", 204 | "\n", 205 | "1. Browse the [source code repository](https://github.com/photogeniq/texturize).\n", 206 | "2. Let us know what you think on [social media](https://twitter.com/alexjc)! \n", 207 | "3. Star ★ the [texturize project](https://github.com/photogeniq/texturize) on GitHub." 208 | ] 209 | } 210 | ], 211 | "metadata": { 212 | "accelerator": "GPU", 213 | "colab": { 214 | "name": "Texturize Demo: Grass" 215 | }, 216 | "kernelspec": { 217 | "display_name": "Python 3", 218 | "language": "python", 219 | "name": "python3" 220 | }, 221 | "language_info": { 222 | "codemirror_mode": { 223 | "name": "ipython", 224 | "version": 3 225 | }, 226 | "file_extension": ".py", 227 | "mimetype": "text/x-python", 228 | "name": "python", 229 | "nbconvert_exporter": "python", 230 | "pygments_lexer": "ipython3", 231 | "version": "3.8.3" 232 | } 233 | }, 234 | "nbformat": 4, 235 | "nbformat_minor": 4 236 | } 237 | -------------------------------------------------------------------------------- /examples/Demo_Gravel.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Texturize Demo: Gravel\n", 8 | "\n", 9 | "Welcome! This notebook contains a demo of `texturize`, which will generate three **variations of gravel textures** and takes about 5-10 minutes to run in total.\n", 10 | "\n", 11 | "* You can run the whole notebook and follow along, click `Runtime > Run All` from the menu.\n", 12 | "* Alternatively, run each code cell by selecting it, then clicking the arrow button ➤ in the left column.\n", 13 | "* Re-run blocks `a.` to use a different random crop of the source image.\n", 14 | "* Re-run blocks `b.` to generate a new texture from a different random seed.\n", 15 | "* Watch the generator optimize as it displays the result frame-by-frame!\n", 16 | "\n", 17 | "If you enjoy the project, feel free to **browse the repository** on [GitHub](https://github.com/photogeniq/texturize) and leave us a star ★. 
If you encounter any problems with your textures, report them in the [Issues](https://github.com/photogeniq/texturize). Thanks!\n", 18 | "\n", 19 | "_This notebook is released under the CC-BY-NC-SA license — including the text, images and code._" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": {}, 26 | "outputs": [], 27 | "source": [ 28 | "# Install the latest release from the Python Package Index (PyPI).\n", 29 | "!pip install -q --progress-bar=off texturize\n", 30 | "\n", 31 | "from texturize import api, commands, io\n", 32 | "\n", 33 | "# The sample files are stored as attachments in this GitHub Release.\n", 34 | "BASE_URL = \"https://github.com/photogeniq/texturize/releases/download/v0.0/\"" 35 | ] 36 | }, 37 | { 38 | "cell_type": "markdown", 39 | "metadata": {}, 40 | "source": [ 41 | "All the images in this file are generated using this function. You can configure the parameters here if necessary:" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": null, 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "# @title Settings\n", 51 | "\n", 52 | "resolution = '0.3 MP' # @param ['0.3 MP', '0.6 MP']\n", 53 | "quality = 4 # @param {type:\"slider\", min:0, max:10, step:1}\n", 54 | "\n", 55 | "target_sizes = {\n", 56 | " '0.3 MP': (736, 352),\n", 57 | " '0.6 MP': (1088, 544),\n", 58 | "}\n", 59 | "\n", 60 | "source_crops = {\n", 61 | " '0.3 MP': (528, 528),\n", 62 | " '0.6 MP': (776, 776),\n", 63 | "}\n", 64 | "\n", 65 | "\n", 66 | "def synthesize(image):\n", 67 | " \"\"\"An iterator that yields a new result at each step of the optimization.\n", 68 | " \"\"\"\n", 69 | " remix = commands.Remix(image)\n", 70 | "\n", 71 | " return api.process_iterations(\n", 72 | " remix,\n", 73 | " octaves=None, # Number of levels in coarse-to-fine rendering.\n", 74 | " size=target_sizes[resolution], # Resolution of the output, must fit in GPU memory.\n", 75 | " quality=quality, # The quality level, higher is better.\n", 76 | " )" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "metadata": {}, 82 | "source": [ 83 | "# Sample: Gravel 1\n", 84 | "\n", 85 | "## a. Original Image" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "gravel1 = io.load_image_from_url(BASE_URL + \"gravel1.webp\") # Image CC-BY-NC-SA @alexjc.\n", 95 | "gravel1 = io.random_crop(gravel1, source_crops[resolution])\n", 96 | "\n", 97 | "io.show_image_as_tiles(gravel1, count=5, size=(144, 144))" 98 | ] 99 | }, 100 | { 101 | "cell_type": "markdown", 102 | "metadata": {}, 103 | "source": [ 104 | "## b. Generated Result" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": null, 110 | "metadata": {}, 111 | "outputs": [], 112 | "source": [ 113 | "display_widget = io.show_result_in_notebook()\n", 114 | "\n", 115 | "for result in synthesize(gravel1):\n", 116 | " display_widget.update(result)" 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "metadata": {}, 122 | "source": [ 123 | "# Sample: Gravel 2\n", 124 | "\n", 125 | "## a. 
Original Image" 126 | ] 127 | }, 128 | { 129 | "cell_type": "code", 130 | "execution_count": null, 131 | "metadata": {}, 132 | "outputs": [], 133 | "source": [ 134 | "gravel2 = io.load_image_from_url(BASE_URL + \"gravel2.webp\") # Image CC-BY-NC-SA @alexjc.\n", 135 | "gravel2 = io.random_crop(gravel2, source_crops[resolution])\n", 136 | "\n", 137 | "io.show_image_as_tiles(gravel2, count=5, size=(144, 144))" 138 | ] 139 | }, 140 | { 141 | "cell_type": "markdown", 142 | "metadata": {}, 143 | "source": [ 144 | "## b. Generated Result" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": null, 150 | "metadata": {}, 151 | "outputs": [], 152 | "source": [ 153 | "display_widget = io.show_result_in_notebook()\n", 154 | "\n", 155 | "for result in synthesize(gravel2):\n", 156 | " display_widget.update(result)" 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "# Sample: Gravel 3\n", 164 | "\n", 165 | "## a. Original Image" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": null, 171 | "metadata": {}, 172 | "outputs": [], 173 | "source": [ 174 | "gravel3 = io.load_image_from_url(BASE_URL + \"gravel3.webp\") # Image CC-BY-NC-SA @alexjc.\n", 175 | "gravel3 = io.random_crop(gravel3, source_crops[resolution])\n", 176 | "\n", 177 | "io.show_image_as_tiles(gravel3, count=5, size=(144, 144))" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": {}, 183 | "source": [ 184 | "## b. Generated Result" 185 | ] 186 | }, 187 | { 188 | "cell_type": "code", 189 | "execution_count": null, 190 | "metadata": {}, 191 | "outputs": [], 192 | "source": [ 193 | "display_widget = io.show_result_in_notebook()\n", 194 | "\n", 195 | "for result in synthesize(gravel3):\n", 196 | " display_widget.update(result)" 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "# What Next?\n", 204 | "\n", 205 | "1. Browse the [source code repository](https://github.com/photogeniq/texturize).\n", 206 | "2. Let us know what you think on [social media](https://twitter.com/alexjc)! \n", 207 | "3. Star ★ the [texturize project](https://github.com/photogeniq/texturize) on GitHub." 208 | ] 209 | } 210 | ], 211 | "metadata": { 212 | "accelerator": "GPU", 213 | "colab": { 214 | "name": "Texturize Demo: Gravel" 215 | }, 216 | "kernelspec": { 217 | "display_name": "Python 3", 218 | "language": "python", 219 | "name": "python3" 220 | }, 221 | "language_info": { 222 | "codemirror_mode": { 223 | "name": "ipython", 224 | "version": 3 225 | }, 226 | "file_extension": ".py", 227 | "mimetype": "text/x-python", 228 | "name": "python", 229 | "nbconvert_exporter": "python", 230 | "pygments_lexer": "ipython3", 231 | "version": "3.8.3" 232 | } 233 | }, 234 | "nbformat": 4, 235 | "nbformat_minor": 4 236 | } 237 | -------------------------------------------------------------------------------- /src/texturize/critics.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | import math 4 | import itertools 5 | 6 | import torch 7 | import torch.nn.functional as F 8 | 9 | from .patch import PatchBuilder 10 | from .match import FeatureMatcher 11 | 12 | 13 | class GramMatrixCritic: 14 | """A `Critic` evaluates the features of an image to determine how it scores. 15 | 16 | This critic computes a 2D histogram of feature cross-correlations for the specified 17 | layer (e.g. 
"1_1") or layer pair (e.g. "1_1:2_1"), and compares it to the target 18 | gram matrix. 19 | """ 20 | 21 | def __init__(self, layer, offset: float = -1.0): 22 | self.pair = tuple(layer.split(":")) 23 | if len(self.pair) == 1: 24 | self.pair = (self.pair[0], self.pair[0]) 25 | self.offset = offset 26 | self.gram = None 27 | 28 | def on_start(self): 29 | pass 30 | 31 | def on_finish(self): 32 | pass 33 | 34 | def evaluate(self, features): 35 | current = self._prepare_gram(features) 36 | result = F.mse_loss(current, self.gram.expand_as(current), reduction="none") 37 | yield 1e4 * result.flatten(1).mean(dim=1) 38 | 39 | def from_features(self, features): 40 | def norm(xs): 41 | if not isinstance(xs, (tuple, list)): 42 | xs = (xs,) 43 | ms = [torch.mean(x, dim=(2, 3), keepdim=True) for x in xs] 44 | return (sum(ms) / len(ms)).clamp(min=1.0) 45 | 46 | self.means = (norm(features[self.pair[0]]), norm(features[self.pair[1]])) 47 | self.gram = self._prepare_gram(features) 48 | 49 | def get_layers(self): 50 | return set(self.pair) 51 | 52 | def _gram_matrix(self, column, row): 53 | (b, ch, h, w) = column.size() 54 | f_c = column.view(b, ch, w * h) 55 | (b, ch, h, w) = row.size() 56 | f_r = row.view(b, ch, w * h) 57 | 58 | gram = (f_c / w).bmm((f_r / h).transpose(1, 2)) / ch 59 | assert not torch.isnan(gram).any() 60 | 61 | return gram 62 | 63 | def _prepare_gram(self, features): 64 | result = 0.0 65 | for l, u in zip(features[self.pair[0]], features[self.pair[1]]): 66 | lower = l / self.means[0] + self.offset 67 | upper = u / self.means[1] + self.offset 68 | gram = self._gram_matrix( 69 | lower, F.interpolate(upper, size=lower.shape[2:], mode="nearest") 70 | ) 71 | result += gram 72 | return result / len(features[self.pair[0]]) 73 | 74 | 75 | def sample(arr, count): 76 | """Deterministically sample N entries from an array.""" 77 | if arr.shape[2] < count: 78 | f = int(math.ceil(count / arr.shape[2])) 79 | arr = torch.cat([arr] * f, dim=2) 80 | return arr[:, :, :count] 81 | 82 | 83 | class HistogramCritic: 84 | """ 85 | This critic uses the Sliced Wasserstein Distance of the features to approximate the 86 | distance between n-dimensional histogram. 87 | 88 | See https://arxiv.org/abs/2006.07229 for details. 
89 | """ 90 | 91 | def __init__(self, layer): 92 | self.layer = layer 93 | 94 | def on_start(self): 95 | pass 96 | 97 | def on_finish(self): 98 | pass 99 | 100 | def get_layers(self): 101 | return {self.layer} 102 | 103 | def from_features(self, features): 104 | data = features[self.layer] 105 | self.gm = data.mean(dim=(2,3), keepdim=True) 106 | self.g = (data - self.gm) * 2.0 107 | 108 | def sorted_projection(self, proj_t): 109 | return torch.sort(proj_t, dim=2).values 110 | 111 | def evaluate(self, features): 112 | data = features[self.layer] 113 | assert data.ndim == 4 and data.shape[0] == 1 114 | 115 | conv_L1 = torch.nn.Conv2d(data.shape[1], 128, kernel_size=(1, 1), dilation=1, bias=False, padding=0).to(self.g.device) 116 | torch.nn.init.orthogonal_(conv_L1.weight) 117 | 118 | with torch.no_grad(): 119 | conv_L1.padding_mode = 'reflect' 120 | 121 | count_L1 = data.shape[2] * data.shape[3] 122 | patches_L1 = conv_L1(self.g) 123 | source = self.sorted_projection(sample(patches_L1.flatten(2), count_L1)) 124 | 125 | conv_L1.padding_mode = 'circular' 126 | current = self.sorted_projection(conv_L1((data - self.gm) * 2.0).flatten(2)) 127 | assert source.shape == current.shape 128 | 129 | yield F.mse_loss(current, source) 130 | 131 | 132 | class PatchCritic: 133 | 134 | LAST = None 135 | 136 | def __init__(self, layer, variety=0.2): 137 | self.layer = layer 138 | self.patches = None 139 | self.device = None 140 | self.builder = PatchBuilder(patch_size=2) 141 | self.matcher = FeatureMatcher(device="cpu", variety=variety) 142 | self.split_hints = {} 143 | 144 | def get_layers(self): 145 | return {self.layer} 146 | 147 | def on_start(self): 148 | self.patches = self.patches.to(self.device) 149 | self.matcher.update_sources(self.patches) 150 | 151 | def on_finish(self): 152 | self.matcher.sources = None 153 | self.patches = self.patches.cpu() 154 | 155 | def from_features(self, features): 156 | patches = self.prepare(features).detach() 157 | self.device = patches.device 158 | self.patches = patches.cpu() 159 | self.iteration = 0 160 | 161 | def prepare(self, features): 162 | if isinstance(features[self.layer], (tuple, list)): 163 | sources = [self.builder.extract(f) for f in features[self.layer]] 164 | chunk_size = min(s.shape[2] for s in sources) 165 | chunks = [torch.split(s, chunk_size, dim=2) for s in sources] 166 | return torch.cat(list(itertools.chain.from_iterable(chunks)), dim=3) 167 | else: 168 | return self.builder.extract(features[self.layer]) 169 | 170 | def auto_split(self, function, *arguments, **keywords): 171 | key = (self.matcher.target.shape, function) 172 | for i in self.split_hints.get(key, range(16)): 173 | try: 174 | result = function(*arguments, split=2 ** i, **keywords) 175 | self.split_hints[key] = list(range(i, 16)) 176 | return result 177 | except RuntimeError as e: 178 | if "CUDA out of memory." not in str(e): 179 | raise 180 | 181 | assert False, f"Unable to fit {function} execution into CUDA memory." 
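    # A hypothetical trace of the strategy above: every matcher call is
    # retried with `split` doubling (1, 2, 4, ...) until one fits in CUDA
    # memory, and the first level that succeeds is cached in `split_hints`
    # under a (target shape, function) key, so later calls with the same
    # target shape skip straight to a workable split:
    #
    #     critic.auto_split(critic.matcher.compare_features_matrix)
    #     # split=1 -> OOM, split=2 -> OOM, split=4 -> ok
    #     # split_hints[key] = [2, 3, ..., 15]; next call starts at split=4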
182 | 183 | def evaluate(self, features): 184 | self.iteration += 1 185 | 186 | target = self.prepare(features) 187 | self.matcher.update_target(target) 188 | 189 | matched_target = self._update(target) 190 | yield 0.5 * F.mse_loss(target, matched_target) 191 | del matched_target 192 | 193 | matched_source = self.matcher.reconstruct_source() 194 | yield 0.5 * F.mse_loss(matched_source, self.patches) 195 | del matched_source 196 | 197 | @torch.no_grad() 198 | def _update(self, target): 199 | if self.iteration == 1: 200 | self.auto_split(self.matcher.compare_features_identity) 201 | self.matcher.update_biases() 202 | 203 | if target.flatten(1).shape[1] < 1_048_576: 204 | self.auto_split(self.matcher.compare_features_matrix) 205 | else: 206 | self.auto_split(self.matcher.compare_features_identity) 207 | self.auto_split(self.matcher.compare_features_inverse) 208 | self.auto_split( 209 | self.matcher.compare_features_random, 210 | radius=[16, 8, 4, -1][self.iteration % 4], 211 | ) 212 | self.auto_split( 213 | self.matcher.compare_features_nearby, 214 | radius=[4, 2, 1][self.iteration % 3], 215 | ) 216 | self.auto_split( 217 | self.matcher.compare_features_coarse, parent=PatchCritic.LAST 218 | ) 219 | 220 | PatchCritic.LAST = self.matcher 221 | self.matcher.update_biases() 222 | matched_target = self.matcher.reconstruct_target() 223 | 224 | return matched_target 225 | -------------------------------------------------------------------------------- /src/texturize/io.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | import glob 4 | import time 5 | import random 6 | import urllib 7 | import difflib 8 | from io import BytesIO 9 | 10 | import PIL.Image 11 | import torch 12 | import torchvision.transforms.functional as V 13 | 14 | 15 | def load_tensor_from_files(glob_pattern, device='cpu', mode=None) -> tuple: 16 | arrays, props = [], [] 17 | for filename in sorted(glob.glob(glob_pattern)): 18 | img = load_image_from_file(filename, mode) 19 | arr = load_tensor_from_image(img, device) 20 | arrays.append(arr) 21 | 22 | prop = ''.join([s[2] for s in difflib.ndiff(glob_pattern, filename) if s[0]=='+']) 23 | props.append(prop + ":" + str(arr.shape[1])) 24 | 25 | assert all(a.shape[2:] == arrays[0].shape[2:] for a in arrays[1:]) 26 | return torch.cat(arrays, dim=1), props 27 | 28 | 29 | def save_tensor_to_files(images, filename, props): 30 | c = 0 31 | for prop in props: 32 | suffix, i = prop.split(":") 33 | i = int(i) 34 | img = images[:, c:c+i] 35 | save_tensor_to_file(img, filename.replace('{prop}', suffix), mode="RGB" if i==3 else "L") 36 | c += i 37 | 38 | 39 | def load_image_from_file(filename, mode=None): 40 | image = PIL.Image.open(filename) 41 | if mode is not None: 42 | return image.convert(mode) 43 | else: 44 | return image 45 | 46 | 47 | def load_tensor_from_image(image, device, dtype=torch.float32): 48 | # NOTE: torchvision incorrectly assumes that I;16 means signed, but 49 | # Pillow docs say unsigned. 
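    # The workaround below therefore decodes the raw buffer as uint16 using
    # numpy, then rescales it into [0, 1) floats before tensor conversion.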
50 | if isinstance(image, PIL.Image.Image) and image.mode == "I;16": 51 | import numpy 52 | arr = numpy.frombuffer(image.tobytes(), dtype=numpy.uint16) 53 | arr = arr.reshape((image.height, image.width)) 54 | assert arr.min() >= 0 and arr.max() < 65536 55 | image = arr.astype(numpy.float32) / 65536.0 56 | 57 | if not isinstance(image, torch.Tensor): 58 | return V.to_tensor(image).unsqueeze(0).to(device, dtype) 59 | return image 60 | 61 | 62 | def load_image_from_url(url, mode="RGB"): 63 | response = urllib.request.urlopen(url) 64 | buffer = BytesIO(response.read()) 65 | return PIL.Image.open(buffer).convert(mode) 66 | 67 | 68 | def random_crop(image, size): 69 | x = random.randint(0, image.size[0] - size[0]) 70 | y = random.randint(0, image.size[1] - size[1]) 71 | return image.crop((x, y, x + size[0], y + size[1])) 72 | 73 | 74 | def save_tensor_to_file(tensor, filename, mode="RGB"): 75 | assert tensor.shape[0] == 1 76 | img = save_tensor_to_images(tensor, mode=mode) 77 | img[0].save(filename) 78 | 79 | 80 | def save_tensor_to_images(tensor, mode="RGB"): 81 | assert tensor.min() >= 0.0 and tensor.max() <= 1.0 82 | return [ 83 | V.to_pil_image(tensor[j].detach().cpu().float(), mode) 84 | for j in range(tensor.shape[0]) 85 | ] 86 | 87 | 88 | try: 89 | from IPython.display import display, clear_output 90 | import ipywidgets 91 | except ImportError: 92 | pass 93 | 94 | 95 | def show_image_as_tiles(image, count, size): 96 | def make_crop(): 97 | buffer = BytesIO() 98 | x = random.randint(0, image.size[0] - size[0]) 99 | y = random.randint(0, image.size[1] - size[1]) 100 | tile = image.crop((x, y, x + size[0], y + size[1])) 101 | tile.save(buffer, format="webp", quality=80) 102 | buffer.seek(0) 103 | return buffer.read() 104 | 105 | pct = 100.0 / count 106 | tiles = [ 107 | ipywidgets.Image( 108 | value=make_crop(), format="webp", layout=ipywidgets.Layout(width=f"{pct}%") 109 | ) 110 | for _ in range(count) 111 | ] 112 | box = ipywidgets.HBox(tiles, layout=ipywidgets.Layout(width="100%")) 113 | display(box) 114 | 115 | 116 | def show_result_in_notebook(throttle=None, title=None): 117 | class ResultWidget: 118 | def __init__(self, throttle, title): 119 | self.title = f"
{title}
" if title is not None else "" 120 | self.style = """""" 124 | self.html = ipywidgets.HTML(value="") 125 | self.img = ipywidgets.Image( 126 | value=b"", 127 | format="webp", 128 | layout=ipywidgets.Layout(width="100%", margin="0"), 129 | ) 130 | self.box = ipywidgets.VBox( 131 | [self.html, self.img], layout=ipywidgets.Layout(display="none") 132 | ) 133 | display(self.box) 134 | 135 | self.throttle = throttle 136 | self.start_time = time.time() 137 | self.total_sent = 0 138 | 139 | def update(self, result): 140 | assert len(result.tensor) == 1, "Only one image supported." 141 | 142 | for out in save_tensor_to_images(result.tensor[:, 0:3]): 143 | elapsed = time.time() - self.start_time 144 | last, first = bool(result.iteration < 0), bool(result.iteration == 0) 145 | self.html.set_trait( 146 | "value", 147 | f"""{self.title} 148 | {self.style} 149 | """, 159 | ) 160 | 161 | if not last and self.total_sent / elapsed > self.throttle: 162 | break 163 | 164 | buffer = BytesIO() 165 | if throttle == float("+inf") or out.size[0] * out.size[1] < 192 * 192: 166 | out.save(buffer, format="webp", method=6, lossless=True) 167 | else: 168 | out.save(buffer, format="webp", quality=90 if last else 50) 169 | 170 | buffer.seek(0) 171 | self.img.set_trait("value", buffer.read()) 172 | self.total_sent += buffer.tell() 173 | if first: 174 | self.box.layout = ipywidgets.Layout(display="box") 175 | break 176 | 177 | try: 178 | # Notebooks running remotely on Google Colab require throttle to work reliably. 179 | import google.colab 180 | throttle = throttle or 16_384 181 | except ImportError: 182 | # When running Jupyter locally, you get the full experience by default! 183 | throttle = throttle or float('+inf') 184 | 185 | return ResultWidget(throttle, title) 186 | 187 | 188 | def load_image_from_notebook(): 189 | """Allow the user to upload an image directly into a Jupyter notebook, then provide 190 | a single-use iterator over the images that were collected. 191 | """ 192 | 193 | class ImageUploadWidget(ipywidgets.FileUpload): 194 | def __init__(self): 195 | super(ImageUploadWidget, self).__init__(accept="image/*", multiple=True) 196 | 197 | self.observe(self.add_to_results, names="value") 198 | self.results = [] 199 | 200 | def get(self, index): 201 | return self.results[index] 202 | 203 | def __iter__(self): 204 | while len(self.results) > 0: 205 | yield self.results.pop(0) 206 | 207 | def add_to_results(self, change): 208 | for filename, data in change["new"].items(): 209 | buffer = BytesIO(data["content"]) 210 | image = PIL.Image.open(buffer) 211 | self.results.append(image) 212 | self.set_trait("value", {}) 213 | 214 | widget = ImageUploadWidget() 215 | display(widget) 216 | return widget 217 | -------------------------------------------------------------------------------- /src/texturize/__main__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | r""" _ _ _ 3 | | |_ _____ _| |_ _ _ _ __(_)_______ 4 | | __/ _ \ \/ / __| | | | '__| |_ / _ \ 5 | | || __/> <| |_| |_| | | | |/ / __/ 6 | \__\___/_/\_\\__|\__,_|_| |_/___\___| 7 | 8 | Usage: 9 | texturize remix SOURCE... 
[options] --size=WxH 10 | texturize enhance TARGET [with] SOURCE [options] --zoom=ZOOM 11 | texturize expand TARGET [with] SOURCE [options] --size=WxH 12 | texturize mashup SOURCE TARGET [options] --size=WxH 13 | texturize remake TARGET [like] SOURCE [options] --weights=WEIGHTS 14 | texturize repair TARGET [with] SOURCE [options] 15 | texturize --help 16 | 17 | Examples: 18 | texturize remix samples/grass.webp --size=1440x960 --output=result.png 19 | texturize remix samples/gravel.png --iterations=100 20 | texturize remix samples/sand.tiff --output=tmp/{source}-{octave}.webp 21 | texturize remix samples/brick.jpg --device=cpu 22 | 23 | Options: 24 | SOURCE Path to source image to use as texture. 25 | -s WxH, --size=WxH Output resolution as WIDTHxHEIGHT. [default: 640x480] 26 | -o FILE, --output=FILE Filename for saving the result, includes format variables. 27 | [default: {command}_{source}{variation}{prop}.png] 28 | --intact Keep the source image at the original size. 29 | 30 | --weights=WEIGHTS Comma-separated list of blend weights. [default: 1.0] 31 | --zoom=ZOOM Integer zoom factor for enhancing. [default: 2] 32 | 33 | --variations=V Number of images to generate at same time. [default: 1] 34 | --seed=SEED Configure the random number generation. 35 | --mode=MODE Either "patch" or "gram" to manually specify critics. 36 | --octaves=O Number of octaves to process. Defaults to 5 for 512x512, or 37 | 4 for 256x256 equivalent pixel count. 38 | --iterations=I Iterations for optimization, higher is better. [default: 200] 39 | 40 | --model=MODEL Name of the convolution network to use. [default: VGG11] 41 | --layers=LAYERS Comma-separated list of layers. 42 | --device=DEVICE Hardware to use, either "cpu" or "cuda". 43 | --precision=PRECISION Floating-point format to use, "float16" or "float32". 44 | --quiet Suppress any messages going to stdout. 45 | --verbose Display more information on stdout. 46 | -h, --help Show this message. 47 | """ 48 | # 49 | # Copyright (c) 2020, Novelty Factory KG. 50 | # 51 | # texturize is free software: you can redistribute it and/or modify it under the terms 52 | # of the GNU Affero General Public License version 3. This program is distributed in 53 | # the hope that it will be useful but WITHOUT ANY WARRANTY; without even the implied 54 | # warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 55 | # 56 | 57 | import os 58 | import math 59 | 60 | import docopt 61 | from schema import Schema, Use, And, Or 62 | 63 | import torch 64 | 65 | from . import __version__ 66 | from . import api, io, commands 67 | from .logger import ansi, ConsoleLog 68 | 69 | 70 | def validate(config): 71 | # Determine the shape of output tensor (H, W) from specified resolution. 
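    # For example, split_size("640x480") returns (640, 480), i.e. (width, height).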
72 | def split_size(size: str): 73 | return tuple(map(int, size.split("x"))) 74 | 75 | def split_strings(text: str): 76 | return text.split(",") 77 | 78 | def split_floats(text: str): 79 | return tuple(map(float, text.split(","))) 80 | 81 | def split_ints(text: str): 82 | return tuple(map(int, text.split(","))) 83 | 84 | sch = Schema( 85 | { 86 | "SOURCE": [str], 87 | "TARGET": Or(None, str), 88 | "size": And(Use(split_size), tuple), 89 | "output": str, 90 | "intact": Use(bool), 91 | "weights": Use(split_floats), 92 | "zoom": Use(int), 93 | "variations": Use(int), 94 | "seed": Or(None, Use(int)), 95 | "mode": Or(None, "patch", "gram", "hist"), 96 | "octaves": Or(None, Use(int)), 97 | "iterations": Use(split_ints), 98 | "model": Or("VGG11", "VGG13", "VGG16", "VGG19"), 99 | "layers": Or(None, Use(split_strings)), 100 | "device": Or(None, "cpu", "cuda"), 101 | "precision": Or(None, "float16", "float32"), 102 | "help": Use(bool), 103 | "quiet": Use(bool), 104 | "verbose": Use(bool), 105 | }, 106 | ignore_extra_keys=True, 107 | ) 108 | return sch.validate({k.replace("--", ""): v for k, v in config.items()}) 109 | 110 | 111 | def main(): 112 | # Parse the command-line options based on the script's documentation. 113 | config = docopt.docopt(__doc__[204:], version=__version__, help=False) 114 | all_commands = [cmd.lower() for cmd in commands.__all__] + ["--help"] 115 | command = [cmd for cmd in all_commands if config[cmd]][0] 116 | 117 | # Ensure the user-specified values are correct, separate command-specific arguments. 118 | config = validate(config) 119 | sources, target, output, seed = [ 120 | config.pop(k) for k in ("SOURCE", "TARGET", "output", "seed") 121 | ] 122 | weights, zoom = [config.pop(k) for k in ("weights", "zoom")] 123 | 124 | # Set up the output logging and display the logo! 125 | log = ConsoleLog(config.pop("quiet"), config.pop("verbose")) 126 | log.notice(ansi.PINK + __doc__[:204] + ansi.ENDC) 127 | if config.pop("help") is True: 128 | log.notice(__doc__[204:]) 129 | return 130 | 131 | resize_source = not config.pop("intact") 132 | 133 | # Scan all the files based on the patterns specified. 134 | for filename in sources: 135 | # If there's a random seed, use the same for all images. 136 | if seed is not None: 137 | torch.manual_seed(seed) 138 | torch.cuda.manual_seed(seed) 139 | 140 | # Load the images necessary; load_tensor_from_files returns (tensor, props). 141 | source_arr, source_props = io.load_tensor_from_files(filename) 142 | target_arr, _ = io.load_tensor_from_files(target) if target else (None, None) 143 | 144 | if resize_source: 145 | def length(s): return math.sqrt(s[0] * s[1]) 146 | s = length(config["size"]) / length(source_arr.shape[2:]) 147 | F = torch.nn.functional 148 | source_arr = F.interpolate(source_arr, scale_factor=s, mode="bilinear", antialias=True) 149 | 150 | # Set up the command specified by the user. 151 | if command == "remix": 152 | cmd = commands.Remix(source_arr) 153 | if command == "enhance": 154 | cmd = commands.Enhance(target_arr, source_arr, zoom=zoom) 155 | config["octaves"] = cmd.octaves 156 | # Calculate the size based on the specified zoom; tensors are (B, C, H, W) but size is (W, H). 157 | config["size"] = (target_arr.shape[3] * zoom, target_arr.shape[2] * zoom) 158 | if command == "expand": 159 | # Calculate the factor based on the specified size.
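            # For example, a 320x240 target with --size=640x480 gives factor=(0.5, 0.5),
            # so the original covers half of the output along each axis.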
160 | factor = ( 161 | target_arr.shape[3] / config["size"][0], 162 | target_arr.shape[2] / config["size"][1], 163 | ) 164 | cmd = commands.Expand(target_arr, source_arr, factor=factor) 165 | if command == "remake": 166 | cmd = commands.Remake(target_arr, source_arr, weights=weights) 167 | config["octaves"] = 1 168 | config["size"] = (target_arr.shape[3], target_arr.shape[2]) 169 | if command == "mashup": 170 | cmd = commands.Mashup([source_arr, target_arr]) 171 | if command == "repair": 172 | cmd = commands.Repair(target_arr, source_arr) 173 | config["octaves"] = 3 174 | config["size"] = (target_arr.shape[3], target_arr.shape[2]) 175 | 176 | # Process the files one by one, each may have multiple variations. 177 | try: 178 | config["output"] = output 179 | config["output"] = config["output"].replace( 180 | "{source}", os.path.splitext(os.path.basename(filename))[0] 181 | ) 182 | if target: 183 | config["output"] = config["output"].replace( 184 | "{target}", os.path.splitext(os.path.basename(target))[0] 185 | ) 186 | config["properties"] = source_props 187 | 188 | result, filenames = api.process_single_command(cmd, log, **config) 189 | log.notice(ansi.PINK + "\n=> result:", filenames, ansi.ENDC) 190 | except KeyboardInterrupt: 191 | print(ansi.PINK + "\nCTRL+C detected, interrupting..." + ansi.ENDC) 192 | break 193 | 194 | 195 | if __name__ == "__main__": 196 | main() 197 | -------------------------------------------------------------------------------- /src/texturize/solvers.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | import torch.optim 4 | 5 | 6 | class SolverLBFGS: 7 | """Encapsulate the L-BFGS optimizer from PyTorch with a standard interface. 8 | """ 9 | 10 | def __init__(self, objective, image, lr=1.0): 11 | self.objective = objective 12 | self.image = image 13 | self.lr = lr 14 | self.retries = 0 15 | self.last_result = (float("+inf"), None) 16 | self.last_image = None 17 | 18 | self.reset_optimizer() 19 | 20 | def reset_optimizer(self): 21 | self.optimizer = torch.optim.LBFGS( 22 | [self.image], lr=self.lr, max_iter=2, max_eval=4, history_size=10 23 | ) 24 | self.iteration = 0 25 | 26 | def update_lr(self, factor=None): 27 | if factor is not None: 28 | self.lr *= factor 29 | 30 | # For the first forty iterations, we increase the learning rate slowly to its full value. 31 | for group in self.optimizer.param_groups: 32 | group["lr"] = self.lr * min(self.iteration / 40.0, 1.0) ** 2 33 | 34 | def call_objective(self): 35 | """This function wraps the main LBFGS optimizer from PyTorch and uses simple 36 | hard-coded heuristics to determine its stability, and how best to manage it. 37 | 38 | Informally, it acts like a look-ahead optimizer that rolls back the step if 39 | there was a divergence in the optimization. 40 | """ 41 | # Update the learning-rate dynamically, prepare optimizer. 42 | self.iteration += 1 43 | self.update_lr() 44 | 45 | # Each iteration we reset the accumulated gradients. 46 | self.optimizer.zero_grad() 47 | 48 | # Prepare the image for evaluation, then run the objective. 49 | self.image.data.clamp_(0.0, 1.0) 50 | with torch.enable_grad(): 51 | loss, scores = self.objective(self.image) 52 | 53 | # Did the optimizer make progress as expected?
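        # Roughly, the heuristic below compares the mean absolute gradient against
        # the last accepted value: growth up to 8x is accepted (with a backup of
        # the image taken when below 2x); between 8x and 24x the backup image is
        # restored, the gradients are dampened, and the learning rate is reduced
        # slightly; beyond 24x a ValueError is raised so that step() resets the
        # optimizer state entirely.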
54 | cur_result = self.image.grad.data.abs().mean() 55 | if cur_result <= self.last_result[0] * 8.0: 56 | self.next_result = loss, scores 57 | 58 | if cur_result < self.last_result[0] * 2.0: 59 | self.last_image = self.image.data.cpu().clone() 60 | self.last_result = (cur_result.item(), loss) 61 | return loss * 1.0 62 | 63 | # Look-ahead failed, so restore the image from the backup. 64 | self.image.data[:] = self.last_image.to(self.image.device) 65 | self.image.data[:] += torch.empty_like(self.image.data).normal_(std=1e-3) 66 | 67 | # There was a small regression: dampen the gradients and reduce step size. 68 | if cur_result < self.last_result[0] * 24.0: 69 | self.image.grad.data.mul_(self.last_result[0] / cur_result) 70 | self.update_lr(factor=0.95) 71 | self.next_result = loss, scores 72 | return loss * 2.0 73 | 74 | self.update_lr(factor=0.8) 75 | raise ValueError 76 | 77 | def step(self): 78 | """Perform one iteration of the optimization process. This function will catch 79 | the optimizer diverging, and reset the optimizer's internal state. 80 | """ 81 | while True: 82 | try: 83 | # This optimizer decides when and how to call the objective. 84 | self.optimizer.step(self.call_objective) 85 | break 86 | except ValueError: 87 | # To restart the optimization, we create a new instance from the same image. 88 | self.reset_optimizer() 89 | self.retries += 1 90 | 91 | # Report progress once the first few retries are done. 92 | loss, score = self.next_result 93 | return loss, score 94 | 95 | 96 | class SolverSGD: 97 | """Encapsulate the SGD or Adam optimizers from PyTorch with a standard interface. 98 | """ 99 | 100 | def __init__(self, objective, image, opt_class="Adam", lr=1.0): 101 | self.objective = objective 102 | self.image = image 103 | self.lr = lr 104 | self.retries = 0 105 | 106 | opt_class = getattr(torch.optim, opt_class) 107 | self.optimizer = opt_class([image], lr=lr) 108 | self.iteration = 1 109 | 110 | def step(self): 111 | # For the first 10 iterations, we increase the learning rate slowly to its full value. 112 | for group in self.optimizer.param_groups: 113 | group["lr"] = self.lr * min(self.iteration / 10.0, 1.0) ** 2 114 | 115 | # Each iteration we reset the accumulated gradients and compute the objective. 116 | self.iteration += 1 117 | self.optimizer.zero_grad() 118 | 119 | # Let the objective compute the loss and its gradients. 120 | self.image.data.clamp_(0.0, 1.0) 121 | assert self.image.requires_grad is True 122 | with torch.enable_grad(): 123 | loss, scores = self.objective(self.image) 124 | 125 | assert self.image.grad is not None, "Objective did not produce image gradients." 126 | assert not torch.isnan(self.image.grad).any(), f"Gradient is NaN, loss {loss}." 127 | 128 | # Now compute the updates to the image according to the gradients. 129 | self.optimizer.step() 130 | assert not torch.isnan(self.image).any(), f"Image is NaN for loss {loss}." 131 | 132 | return loss, scores 133 | 134 | 135 | class MultiCriticObjective: 136 | """An `Objective` that defines a problem to be solved by evaluating candidate 137 | solutions (i.e. images) and returning the computed error. 138 | 139 | This objective evaluates a list of critics to produce a final "loss" that's the average 140 | of all the scores returned by the critics. It's also responsible for computing the
142 | """ 143 | 144 | def __init__(self, encoder, critics, alpha=None): 145 | self.encoder = encoder 146 | self.critics = critics 147 | self.alpha = alpha 148 | 149 | def __call__(self, image): 150 | """Main evaluation function that's called by the solver. Processes the image, 151 | computes the gradients, and returns the loss. 152 | """ 153 | 154 | # Extract features from image. 155 | layers = [c.get_layers() for c in self.critics] 156 | feats = dict(self.encoder.extract_all(image, layers)) 157 | 158 | for critic in self.critics: 159 | critic.on_start() 160 | 161 | # Apply all the critics one by one. 162 | scores = [] 163 | for critic in self.critics: 164 | total = 0.0 165 | for loss in critic.evaluate(feats): 166 | total += loss 167 | scores.append(total) 168 | 169 | # Calculate the final loss and compute the gradients. 170 | loss = (sum(scores) / len(scores)).mean() 171 | loss.backward() 172 | 173 | for critic in self.critics: 174 | critic.on_finish() 175 | 176 | if self.alpha is not None: 177 | image.grad.data.mul_(self.alpha) 178 | 179 | return loss.item(), scores 180 | 181 | 182 | class SequentialCriticObjective: 183 | """An `Objective` that evaluates each of the critics one by one. 184 | """ 185 | 186 | def __init__(self, encoder, critics, alpha=None): 187 | self.encoder = encoder 188 | self.critics = critics 189 | self.alpha = alpha 190 | 191 | def __call__(self, image): 192 | # Apply all the critics one by one, keep track of results. 193 | scores = [] 194 | for critic in self.critics: 195 | critic.on_start() 196 | 197 | # Extract minimal necessary features from image. 198 | origin_feats = dict( 199 | self.encoder.extract_all(image, critic.get_layers(), as_checkpoints=True) 200 | ) 201 | 202 | detach_feats = { 203 | k: f.detach().requires_grad_(True) for k, f in origin_feats.items() 204 | } 205 | 206 | # Ask the critic to evaluate the loss. 207 | total = 0.0 208 | for i, loss in enumerate(critic.evaluate(detach_feats)): 209 | assert not torch.isnan(loss), "Loss diverged to NaN." 210 | loss.backward() 211 | total += loss.item() 212 | del loss 213 | 214 | scores.append(total) 215 | critic.on_finish() 216 | 217 | # Backpropagate from those features. 218 | tensors, grads = [], [] 219 | for original, optimized in zip( 220 | origin_feats.values(), detach_feats.values() 221 | ): 222 | if optimized.grad is not None: 223 | tensors.append(original) 224 | grads.append(optimized.grad) 225 | 226 | torch.autograd.backward(tensors, grads) 227 | 228 | del tensors 229 | del grads 230 | del origin_feats 231 | del detach_feats 232 | 233 | if self.alpha is not None: 234 | image.grad.data.mul_(self.alpha) 235 | 236 | return sum(scores) / len(scores), scores 237 | -------------------------------------------------------------------------------- /src/texturize/commands.py: -------------------------------------------------------------------------------- 1 | # texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 
2 | 3 | import math 4 | 5 | import torch 6 | import torch.nn.functional as F 7 | 8 | from .io import load_tensor_from_image 9 | from .app import Result 10 | from .critics import PatchCritic, GramMatrixCritic, HistogramCritic 11 | 12 | 13 | __all__ = ["Remix", "Mashup", "Enhance", "Expand", "Remake", "Repair"] 14 | 15 | 16 | def create_default_critics(mode, layers=None): 17 | if mode == "gram": 18 | layers = layers or ("1_1", "1_1:2_1", "2_1", "2_1:3_1", "3_1") 19 | else: 20 | layers = layers or ("3_1", "2_1", "1_1") 21 | 22 | if mode == "patch": 23 | return [PatchCritic(layer=l) for l in layers] 24 | elif mode == "gram": 25 | return [GramMatrixCritic(layer=l) for l in layers] 26 | elif mode == "hist": 27 | return [HistogramCritic(layer=l) for l in layers] 28 | 29 | 30 | def prepare_default_critics(app, scale, texture, critics): 31 | texture_cur = F.interpolate( 32 | texture, scale_factor=1.0 / scale, mode="area", recompute_scale_factor=False, 33 | ).to(device=app.device, dtype=app.precision) 34 | 35 | layers = [c.get_layers() for c in critics] 36 | feats = dict(app.encoder.extract_all(texture_cur, layers)) 37 | 38 | for critic in critics: 39 | critic.from_features(feats) 40 | app.log.debug("<- source:", tuple(texture_cur.shape[2:]), "\n") 41 | 42 | 43 | class Command: 44 | def prepare_critics(self, app, scale): 45 | raise NotImplementedError 46 | 47 | def prepare_seed_tensor(self, size, previous=None): 48 | raise NotImplementedError 49 | 50 | def finalize_octave(self, result): 51 | return result 52 | 53 | 54 | def renormalize(origin, target): 55 | target_mean = target.mean(dim=(2, 3), keepdim=True) 56 | target_std = target.std(dim=(2, 3), keepdim=True) 57 | origin_mean = origin.mean(dim=(2, 3), keepdim=True) 58 | origin_std = origin.std(dim=(2, 3), keepdim=True) 59 | 60 | result = target_mean + target_std * ((origin - origin_mean) / origin_std) 61 | return result.clamp(0.0, 1.0) 62 | 63 | 64 | def upscale(features, size): 65 | features = F.pad(features, pad=(0, 1, 0, 1), mode='circular') 66 | features = F.interpolate(features, (size[0]+1, size[1]+1), mode='bilinear', align_corners=True) 67 | return features[:, :, 0:-1, 0:-1] 68 | 69 | 70 | def downscale(image, size): 71 | return F.interpolate(image, size=size, mode="area").clamp(0.0, 1.0) 72 | 73 | 74 | def random_normal(size, mean): 75 | b, _, h, w = size 76 | current = torch.empty((b, 1, h, w), device=mean.device, dtype=torch.float32) 77 | return (mean + current.normal_(std=0.1)).clamp(0.0, 1.0) 78 | 79 | 80 | class Remix(Command): 81 | def __init__(self, source): 82 | self.source = load_tensor_from_image(source, device="cpu") 83 | 84 | def prepare_critics(self, app, scale): 85 | critics = create_default_critics(app.mode or "hist", app.layers) 86 | prepare_default_critics(app, scale, self.source, critics) 87 | return [critics] 88 | 89 | def prepare_seed_tensor(self, app, size, previous=None): 90 | if previous is None: 91 | b, _, h, w = size 92 | mean = self.source.mean(dim=(2, 3), keepdim=True).to(device=app.device) 93 | result = torch.empty((b, 1, h, w), device=app.device, dtype=torch.float32) 94 | return ( 95 | (result.normal_(std=0.1) + mean).clamp(0.0, 1.0).to(dtype=app.precision) 96 | ) 97 | 98 | return upscale(previous, size=size[2:]) 99 | 100 | 101 | class Enhance(Command): 102 | def __init__(self, target, source, zoom=1): 103 | self.octaves = int(math.log(zoom, 2) + 1.0) 104 | self.source = load_tensor_from_image(source, device="cpu") 105 | self.target = load_tensor_from_image(target, device="cpu") 106 | 107 | def 
prepare_critics(self, app, scale): 108 | critics = create_default_critics(app.mode or "hist", app.layers) 109 | prepare_default_critics(app, scale, self.source, critics) 110 | return [critics] 111 | 112 | def prepare_seed_tensor(self, app, size, previous=None): 113 | if previous is not None: 114 | return upscale(previous, size=size[2:]) 115 | 116 | seed = downscale(self.target.to(device=app.device), size=size[2:]) 117 | return renormalize(seed, self.source.to(app.device)).to(dtype=app.precision) 118 | 119 | 120 | class Remake(Command): 121 | def __init__(self, target, source, weights=[1.0]): 122 | self.source = load_tensor_from_image(source, device="cpu") 123 | self.target = load_tensor_from_image(target, device="cpu") 124 | self.weights = torch.tensor(weights, dtype=torch.float32).view(-1, 1, 1, 1) 125 | 126 | def prepare_critics(self, app, scale): 127 | critics = create_default_critics(app.mode or "hist", app.layers) 128 | prepare_default_critics(app, scale, self.source, critics) 129 | return [critics] 130 | 131 | def prepare_seed_tensor(self, app, size, previous=None): 132 | seed = upscale(self.target.to(device=app.device), size=size[2:]) 133 | return renormalize(seed, self.source.to(app.device)).to(dtype=app.precision) 134 | 135 | def finalize_octave(self, result): 136 | device = result.tensor.device 137 | weights = self.weights.to(device) 138 | tensor = result.tensor.expand(len(self.weights), -1, -1, -1) 139 | target = self.target.to(device).expand(len(self.weights), -1, -1, -1) 140 | return Result(tensor * (weights + 0.0) + (1.0 - weights) * target, *result[1:]) 141 | 142 | 143 | class Repair(Command): 144 | def __init__(self, target, source): 145 | assert target.mode == "RGBA" 146 | self.source = load_tensor_from_image(source, device="cpu") 147 | self.target = load_tensor_from_image(target.convert("RGBA"), device="cpu") 148 | 149 | def prepare_critics(self, app, scale): 150 | critics = create_default_critics(app.mode or "hist", app.layers) 151 | # source = renormalize(self.source, self.target[:, 0:3]) 152 | prepare_default_critics(app, scale, self.source, critics) 153 | return [critics] 154 | 155 | def prepare_seed_tensor(self, app, size, previous=None): 156 | target = downscale(self.target.to(device=app.device), size=size[2:]) 157 | if previous is None: 158 | mean = self.source.mean(dim=(2, 3), keepdim=True).to(device=app.device) 159 | current = random_normal(size, mean).to( 160 | device=app.device, dtype=app.precision 161 | ) 162 | else: 163 | current = upscale(previous, size=size[2:]) 164 | 165 | # Use the alpha-mask directly from user. Could blur it here for better results! 
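        # The composite below keeps the user's pixels wherever alpha is 1.0 and
        # fills the remainder from the synthesized `current`; the extra channel
        # (1.0 - alpha) marks the region that still needs to be repaired.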
166 | alpha = target[:, 3:4].detach() 167 | return torch.cat( 168 | [target[:, 0:3] * (alpha + 0.0) + (1.0 - alpha) * current, 1.0 - alpha], 169 | dim=1, 170 | ) 171 | 172 | 173 | class Expand(Command): 174 | def __init__(self, target, source, factor=None): 175 | self.factor = factor or (1.0, 1.0) 176 | self.source = load_tensor_from_image(source, device="cpu") 177 | self.target = load_tensor_from_image(target, device="cpu") 178 | 179 | def prepare_critics(self, app, scale): 180 | critics = create_default_critics(app.mode or "patch", app.layers) 181 | prepare_default_critics(app, scale, self.source, critics) 182 | return [critics] 183 | 184 | def prepare_seed_tensor(self, app, size, previous=None): 185 | target_size = (int(size[2] / self.factor[0]), int(size[3] / self.factor[1])) 186 | target = downscale(self.target.to(device=app.device), size=target_size) 187 | 188 | if previous is None: 189 | mean = self.source.mean(dim=(2, 3), keepdim=True).to(device=app.device) 190 | current = random_normal(size, mean).to( 191 | device=app.device, dtype=app.precision 192 | ) 193 | else: 194 | current = upscale(previous, size=size[2:]) 195 | 196 | start = (size[2] - target_size[0]) // 2, (size[3] - target_size[1]) // 2 197 | slice_y = slice(start[0], start[0] + target_size[0]) 198 | slice_x = slice(start[1], start[1] + target_size[1]) 199 | current[:, :, slice_y, slice_x,] = target 200 | 201 | # This currently uses a very crisp boolean mask, looks better when edges are 202 | # smoothed for `overlap` pixels. 203 | alpha = torch.ones_like(current[:, 0:1]) 204 | alpha[:, :, slice_y, slice_x,] = 0.0 205 | return torch.cat([current, alpha], dim=1) 206 | 207 | 208 | class Mashup(Command): 209 | def __init__(self, sources): 210 | self.sources = [ 211 | load_tensor_from_image(s, device="cpu") for s in sources 212 | ] 213 | 214 | def prepare_critics(self, app, scale): 215 | critics = create_default_critics(app.mode or "patch", app.layers) 216 | all_layers = [c.get_layers() for c in critics] 217 | sources = [ 218 | F.interpolate( 219 | img, 220 | scale_factor=1.0 / scale, 221 | mode="area", 222 | recompute_scale_factor=False, 223 | ).to(device=app.device, dtype=app.precision) 224 | for img in self.sources 225 | ] 226 | 227 | # Combine all features into a single dictionary. 228 | features = [dict(app.encoder.extract_all(f, all_layers)) for f in sources] 229 | features = dict(zip(features[0].keys(), zip(*[f.values() for f in features]))) 230 | 231 | # Initialize the critics from the combined dictionary. 232 | for critic in critics: 233 | critic.from_features(features) 234 | 235 | return [critics] 236 | 237 | def prepare_seed_tensor(self, app, size, previous=None): 238 | if previous is None: 239 | means = [torch.mean(s, dim=(0, 2, 3), keepdim=True) for s in self.sources] 240 | mean = (sum(means) / len(means)).to(app.device) 241 | result = random_normal(size, mean).to(dtype=app.precision) 242 | return result.to(device=app.device, dtype=app.precision) 243 | 244 | return F.interpolate( 245 | previous, size=size[2:], mode="bicubic", align_corners=False 246 | ).clamp(0.0, 1.0) 247 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | texturize 2 | ========= 3 | 4 | .. image:: docs/gravel-x4.webp 5 | 6 | A command-line tool and Python library to automatically generate new textures similar 7 | to a source image or photograph. 
It's useful in the context of computer graphics if 8 | you want to make variations on a theme or expand the size of an existing texture. 9 | 10 | This software is powered by deep learning technology — using a combination of 11 | convolution networks and example-based optimization to synthesize images. We're 12 | building ``texturize`` as the highest-quality open source library available! 13 | 14 | 1. `Examples & Demos <#1-examples--demos>`_ 15 | 2. `Commands <#2-commands>`_ 16 | 3. `Options & Usage <#3-options--usage>`_ 17 | 4. `Installation <#4-installation>`_ 18 | 19 | |Python Version| |License Type| |Project Stars| |Package Version| |Project Status| |Build Status| 20 | 21 | ---- 22 | 23 | 1. Examples & Demos 24 | =================== 25 | 26 | The examples are available as notebooks, and you can run them directly in-browser 27 | thanks to Jupyter and Google Colab: 28 | 29 | * **Gravel** — `online demo `__ and `source notebook `__. 30 | * **Grass** — `online demo `__ and `source notebook `__. 31 | 32 | These demo materials are released under the Creative Commons `BY-NC-SA license `_, including the text, images and code. 33 | 34 | .. image:: docs/grass-x4.webp 35 | 36 | 2. Commands 37 | =========== 38 | 39 | a) REMIX 40 | -------- 41 | 42 | Generate variations of any shape from a single texture. 43 | 44 | Remix Command-Line 45 | ~~~~~~~~~~~~~~~~~~ 46 | 47 | .. code-block:: bash 48 | 49 | Usage: 50 | texturize remix SOURCE... 51 | 52 | Examples: 53 | texturize remix samples/grass.webp --size=720x360 54 | texturize remix samples/gravel.png --size=512x512 55 | 56 | Remix Library API 57 | ~~~~~~~~~~~~~~~~~ 58 | 59 | .. code-block:: python 60 | 61 | from texturize import api, commands, io 62 | 63 | # The input could be any PIL Image in RGB mode. 64 | image = io.load_image_from_file("examples/dirt1.webp") 65 | 66 | # Coarse-to-fine synthesis runs one octave at a time. 67 | remix = commands.Remix(image) 68 | for result in api.process_octaves(remix, size=(512,512), octaves=5): 69 | pass 70 | 71 | # The output can be saved in any PIL-supported format. 72 | result.images[0].save("output.png") 73 | 74 | 75 | Remix Examples 76 | ~~~~~~~~~~~~~~ 77 | 78 | .. image:: docs/remix-gravel.webp 79 | 80 | .. Remix Online Tool 81 | .. ~~~~~~~~~~~~~~~~~ 82 | .. * `colab notebook `__ 83 | 84 | ---- 85 | 86 | b) REMAKE 87 | --------- 88 | 89 | Reproduce an original texture in the style of another. 90 | 91 | 92 | Remake Command-Line 93 | ~~~~~~~~~~~~~~~~~~~ 94 | 95 | .. code-block:: bash 96 | 97 | Usage: 98 | texturize remake TARGET [like] SOURCE 99 | 100 | Examples: 101 | texturize remake samples/grass1.webp like samples/grass2.webp 102 | texturize remake samples/gravel1.png like samples/gravel2.png --weights=0.5 103 | 104 | 105 | Remake Library API 106 | ~~~~~~~~~~~~~~~~~~ 107 | 108 | .. code-block:: python 109 | 110 | from texturize import api, commands, io 111 | 112 | # The input could be any PIL Image in RGB mode. 113 | target = io.load_image_from_file("examples/dirt1.webp") 114 | source = io.load_image_from_file("examples/dirt2.webp") 115 | 116 | # Only process one octave to retain photo-realistic output. 117 | remake = commands.Remake(target, source) 118 | for result in api.process_octaves(remake, size=(512,512), octaves=1): 119 | pass 120 | 121 | # The output can be saved in any PIL-supported format. 122 | result.images[0].save("output.png") 123 | 124 | 125 | Remake Examples 126 | ~~~~~~~~~~~~~~~ 127 | 128 | .. image:: docs/remake-grass.webp 129 | 130 | .. Remake Online Tool 131 | .. ~~~~~~~~~~~~~~~~~~ 132 | .. 
* `colab notebook `__ 133 | 134 | ---- 135 | 136 | c) MASHUP 137 | --------- 138 | 139 | Combine multiple textures together into one output. 140 | 141 | 142 | Mashup Command-Line 143 | ~~~~~~~~~~~~~~~~~~~ 144 | 145 | .. code-block:: bash 146 | 147 | Usage: 148 | texturize mashup SOURCE... 149 | 150 | Examples: 151 | texturize mashup samples/grass1.webp samples/grass2.webp 152 | texturize mashup samples/gravel1.png samples/gravel2.png 153 | 154 | 155 | Mashup Library API 156 | ~~~~~~~~~~~~~~~~~~ 157 | 158 | .. code-block:: python 159 | 160 | from texturize import api, commands, io 161 | 162 | # The input could be any PIL Image in RGB mode. 163 | sources = [ 164 | io.load_image_from_file("examples/dirt1.webp"), 165 | io.load_image_from_file("examples/dirt2.webp"), 166 | ] 167 | 168 | # Coarse-to-fine synthesis runs one octave at a time. 169 | mashup = commands.Mashup(sources) 170 | for result in api.process_octaves(mashup, size=(512,512), octaves=5): 171 | pass 172 | 173 | # The output can be saved in any PIL-supported format. 174 | result.images[0].save("output.png") 175 | 176 | 177 | Mashup Examples 178 | ~~~~~~~~~~~~~~~ 179 | 180 | .. image:: docs/mashup-gravel.webp 181 | 182 | .. Mashup Online Tool 183 | .. ~~~~~~~~~~~~~~~~~~ 184 | .. * `colab notebook `__ 185 | 186 | ---- 187 | 188 | d) ENHANCE 189 | ---------- 190 | 191 | Increase the resolution or quality of a texture using another as an example. 192 | 193 | 194 | Enhance Command-Line 195 | ~~~~~~~~~~~~~~~~~~~~ 196 | 197 | .. code-block:: bash 198 | 199 | Usage: 200 | texturize enhance TARGET [with] SOURCE --zoom=ZOOM 201 | 202 | Examples: 203 | texturize enhance samples/grass1.webp with samples/grass2.webp --zoom=2 204 | texturize enhance samples/gravel1.png with samples/gravel2.png --zoom=4 205 | 206 | 207 | Enhance Library API 208 | ~~~~~~~~~~~~~~~~~~~ 209 | 210 | .. code-block:: python 211 | 212 | from texturize import api, commands, io 213 | 214 | # The input could be any PIL Image in RGB mode. 215 | target = io.load_image_from_file("examples/dirt1.webp") 216 | source = io.load_image_from_file("examples/dirt2.webp") 217 | 218 | # Process two octaves, which corresponds to the zoom factor of 2. 219 | enhance = commands.Enhance(target, source, zoom=2) 220 | for result in api.process_octaves(enhance, size=(512,512), octaves=2): 221 | pass 222 | 223 | # The output can be saved in any PIL-supported format. 224 | result.images[0].save("output.png") 225 | 226 | 227 | Enhance Examples 228 | ~~~~~~~~~~~~~~~~ 229 | 230 | .. image:: docs/enhance-grass.webp 231 | 232 | .. Enhance Online Tool 233 | .. ~~~~~~~~~~~~~~~~~~~ 234 | .. * `colab notebook `__ 235 | 236 | ---- 237 | 238 | 239 | 3. Options & Usage 240 | ================== 241 | 242 | For details about the command-line usage of the tool, see the tool itself: 243 | 244 | .. code-block:: bash 245 | 246 | texturize --help 247 | 248 | Here are the command-line options currently available, which apply to most of the 249 | commands above:: 250 | 251 | Options: 252 | SOURCE Path to source image to use as texture. 253 | -s WxH, --size=WxH Output resolution as WIDTHxHEIGHT. [default: 640x480] 254 | -o FILE, --output=FILE Filename for saving the result, includes format variables. 255 | [default: {command}_{source}{variation}.png] 256 | 257 | --weights=WEIGHTS Comma-separated list of blend weights. [default: 1.0] 258 | --zoom=ZOOM Integer zoom factor for enhancing. [default: 2] 259 | 260 | --variations=V Number of images to generate at same time.
[default: 1] 261 | --seed=SEED Configure the random number generation. 262 | --mode=MODE Either "patch" or "gram" to manually specify critics. 263 | --octaves=O Number of octaves to process. Defaults to 5 for 512x512, or 264 | 4 for 256x256 equivalent pixel count. 265 | --quality=Q Quality for optimization, higher is better. [default: 5] 266 | --device=DEVICE Hardware to use, either "cpu" or "cuda". 267 | --precision=PRECISION Floating-point format to use, "float16" or "float32". 268 | --quiet Suppress any messages going to stdout. 269 | --verbose Display more information on stdout. 270 | -h, --help Show this message. 271 | 272 | 273 | 4. Installation 274 | =============== 275 | 276 | Latest Release [recommended] 277 | ---------------------------- 278 | 279 | We suggest using `Miniconda 3.x `__ to 280 | manage your Python environments. Once the ``conda`` command-line tool is installed on 281 | your machine, there are setup scripts you can download directly from the repository: 282 | 283 | .. code-block:: bash 284 | 285 | # a) Use this if you have an *Nvidia GPU only*. 286 | curl -s https://raw.githubusercontent.com/texturedesign/texturize/master/tasks/setup-cuda.yml -o setup.yml 287 | 288 | # b) Fallback if you just want to run on CPU. 289 | curl -s https://raw.githubusercontent.com/texturedesign/texturize/master/tasks/setup-cpu.yml -o setup.yml 290 | 291 | Now you can create a fresh Conda environment for texture synthesis: 292 | 293 | .. code-block:: bash 294 | 295 | conda env create -n myenv -f setup.yml 296 | conda activate myenv 297 | 298 | **NOTE**: Any version of CUDA is suitable to run ``texturize`` as long as PyTorch is 299 | working. See the official `PyTorch installation guide `__ 300 | for alternative ways to install the ``pytorch`` library. 301 | 302 | Then, you can fetch the latest version of the library from the Python Package Index 303 | (PyPI) using the following command: 304 | 305 | .. code-block:: bash 306 | 307 | pip install texturize 308 | 309 | Finally, you can check if everything worked by calling the command-line script: 310 | 311 | .. code-block:: bash 312 | 313 | texturize --help 314 | 315 | You can use ``conda env remove -n myenv`` to delete the virtual environment once you 316 | are done. 317 | 318 | 319 | Repository Install [developers] 320 | ------------------------------- 321 | 322 | If you're a developer and want to install the library locally, start by cloning the 323 | repository to your local disk: 324 | 325 | .. code-block:: bash 326 | 327 | git clone https://github.com/texturedesign/texturize.git 328 | 329 | We also recommend using `Miniconda 3.x `__ 330 | for development. You can set up a new virtual environment called ``myenv`` by running 331 | the following commands, depending on whether you want to run on CPU or GPU (via CUDA). 332 | For advanced setups like specifying which CUDA version to use, see the official 333 | `PyTorch installation guide `__. 334 | 335 | .. code-block:: bash 336 | 337 | cd texturize 338 | 339 | # a) Use this if you have an *Nvidia GPU only*. 340 | conda env create -n myenv -f tasks/setup-cuda.yml 341 | 342 | # b) Fallback if you just want to run on CPU. 343 | conda env create -n myenv -f tasks/setup-cpu.yml 344 | 345 | Once the virtual environment is created, you can activate it and finish the setup of 346 | ``texturize`` with these commands: 347 | 348 | .. code-block:: bash 349 | 350 | conda activate myenv 351 | poetry install 352 | 353 | Finally, you can check if everything worked by calling the script: 354 | 355 | .. 
code-block:: bash 356 | 357 | texturize --help 358 | 359 | Use ``conda env remove -n myenv`` to remove the virtual environment once you are done. 360 | 361 | ---- 362 | 363 | |Python Version| |License Type| |Project Stars| |Package Version| |Project Status| |Build Status| 364 | 365 | .. |Python Version| image:: https://img.shields.io/pypi/pyversions/texturize 366 | :target: https://docs.conda.io/en/latest/miniconda.html 367 | 368 | .. |License Type| image:: https://img.shields.io/badge/license-AGPL-blue.svg 369 | :target: https://github.com/texturedesign/texturize/blob/master/LICENSE 370 | 371 | .. |Project Stars| image:: https://img.shields.io/github/stars/texturedesign/texturize.svg?color=turquoise 372 | :target: https://github.com/texturedesign/texturize/stargazers 373 | 374 | .. |Package Version| image:: https://img.shields.io/pypi/v/texturize?color=turquoise 375 | :alt: PyPI - Version 376 | :target: https://pypi.org/project/texturize/ 377 | 378 | .. |Project Status| image:: https://img.shields.io/pypi/status/texturize?color=%2300ff00 379 | :alt: PyPI - Status 380 | :target: https://github.com/texturedesign/texturize 381 | 382 | .. |Build Status| image:: https://img.shields.io/github/actions/workflow/status/texturedesign/texturize/python-package.yml 383 | :alt: GitHub Workflow Status 384 | :target: https://github.com/texturedesign/texturize/actions/workflows/python-package.yml 385 | -------------------------------------------------------------------------------- /tests/test_match.py: -------------------------------------------------------------------------------- 1 | # neural-texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details. 2 | 3 | import torch 4 | 5 | from texturize.match import ( 6 | FeatureMatcher, 7 | Mapping, 8 | torch_scatter_2d, 9 | cosine_similarity_vector_1d, 10 | ) 11 | 12 | 13 | import pytest 14 | from hypothesis import settings, given, event, strategies as H 15 | 16 | 17 | def make_square_tensor(size, channels): 18 | return torch.empty((1, channels, size, size), dtype=torch.float).uniform_(0.1, 0.9) 19 | 20 | 21 | def Tensor(range=(4, 32), channels=None) -> H.SearchStrategy[torch.Tensor]: 22 | return H.builds( 23 | make_square_tensor, 24 | size=H.integers(min_value=range[0], max_value=range[-1]), 25 | channels=H.integers(min_value=channels or 1, max_value=channels or 8), 26 | ) 27 | 28 | 29 | Coord = H.tuples(H.integers(), H.integers()) 30 | CoordList = H.lists(Coord, min_size=1, max_size=32) 31 | 32 | 33 | @given(content=Tensor(channels=4), style=Tensor(channels=4)) 34 | def test_indices_random_range(content, style): 35 | """Determine that random indices are in range. 36 | """ 37 | mapping = Mapping(content.shape) 38 | mapping.from_random(style.shape) 39 | 40 | assert mapping.indices[:, 0, :, :].min() >= 0 41 | assert mapping.indices[:, 0, :, :].max() < style.shape[2] 42 | 43 | assert mapping.indices[:, 1, :, :].min() >= 0 44 | assert mapping.indices[:, 1, :, :].max() < style.shape[3] 45 | 46 | 47 | @given(content=Tensor(channels=5), style=Tensor(channels=5)) 48 | def test_indices_linear_range(content, style): 49 | """Determine that linear indices are in range.
50 | """ 51 | mapping = Mapping(content.shape) 52 | mapping.from_linear(style.shape) 53 | 54 | assert mapping.indices[:, 0, :, :].min() >= 0 55 | assert mapping.indices[:, 0, :, :].max() < style.shape[2] 56 | 57 | assert mapping.indices[:, 1, :, :].min() >= 0 58 | assert mapping.indices[:, 1, :, :].max() < style.shape[3] 59 | 60 | 61 | @given( 62 | target=Tensor(range=(3, 11), channels=4), source=Tensor(range=(5, 9), channels=4) 63 | ) 64 | def test_scores_range_matrix(target, source): 65 | """Determine that the scores of random patches are in correct range. 66 | """ 67 | matcher = FeatureMatcher(target, source) 68 | matcher.compare_features_matrix(split=2) 69 | assert matcher.repro_target.scores.min() >= 0.0 70 | assert matcher.repro_target.scores.max() <= 1.0 71 | assert matcher.repro_sources.scores.min() >= 0.0 72 | assert matcher.repro_sources.scores.max() <= 1.0 73 | 74 | 75 | @given( 76 | target=Tensor(range=(3, 11), channels=3), source=Tensor(range=(5, 9), channels=3) 77 | ) 78 | def test_scores_range_random(target, source): 79 | """Determine that the scores of random patches are in correct range. 80 | """ 81 | matcher = FeatureMatcher(target, source) 82 | matcher.compare_features_random(split=1) 83 | 84 | assert matcher.repro_sources.indices.min() != -1 85 | assert matcher.repro_target.scores.min() >= 0.0 86 | assert matcher.repro_target.scores.max() <= 1.0 87 | 88 | assert matcher.repro_sources.scores.max() >= 0.0 89 | assert matcher.repro_sources.indices.max() != -1 90 | 91 | 92 | @given( 93 | target=Tensor(range=(5, 11), channels=3), source=Tensor(range=(7, 9), channels=3) 94 | ) 95 | @settings(deadline=None) 96 | def test_compare_random_converges(target, source): 97 | """Determine that the scores of random patches are in correct range. 98 | """ 99 | matcher1 = FeatureMatcher(target, source) 100 | matcher1.compare_features_matrix(split=2) 101 | 102 | matcher2 = FeatureMatcher(target, source) 103 | for _ in range(500): 104 | matcher2.compare_features_random(split=2) 105 | missing = ( 106 | (matcher1.repro_target.indices != matcher2.repro_target.indices).sum() 107 | + (matcher1.repro_sources.indices != matcher2.repro_sources.indices).sum() 108 | ) 109 | if missing == 0: 110 | break 111 | 112 | assert (matcher1.repro_target.indices != matcher2.repro_target.indices).sum() <= 2 113 | assert pytest.approx(0.0, abs=1e-6) == torch.dist( 114 | matcher1.repro_target.scores, matcher2.repro_target.scores 115 | ) 116 | 117 | assert matcher2.repro_sources.indices.min() != -1 118 | assert (matcher1.repro_sources.indices != matcher2.repro_sources.indices).sum() <= 2 119 | assert pytest.approx(0.0, abs=1e-6) == torch.dist( 120 | matcher1.repro_sources.scores, matcher2.repro_sources.scores 121 | ) 122 | 123 | 124 | @given(content=Tensor(range=(2, 8)), style=Tensor(range=(2, 8))) 125 | def test_indices_random(content, style): 126 | """Determine that random indices are indeed random in larger grids. 127 | """ 128 | m = Mapping(content.shape) 129 | m.from_random(style.shape) 130 | 131 | assert m.indices[:, 0].min() != m.indices[:, 0].max(dim=2) 132 | assert m.indices[:, 1].min() != m.indices[:, 1].max(dim=2) 133 | 134 | 135 | @given(array=Tensor(range=(2, 8))) 136 | def test_indices_linear(array): 137 | """Indices of the indentity transformation should be linear. 
138 | """ 139 | m = Mapping(array.shape) 140 | m.from_linear(array.shape) 141 | 142 | assert ( 143 | m.indices[:, 0, :, :] 144 | == torch.arange(start=0, end=array.shape[2]).view(1, -1, 1) 145 | ).all() 146 | assert ( 147 | m.indices[:, 1, :, :] 148 | == torch.arange(start=0, end=array.shape[3]).view(1, 1, -1) 149 | ).all() 150 | 151 | 152 | @given(array=Tensor(range=(4, 16))) 153 | def test_scores_identity(array): 154 | """The score of the identity operation with linear indices should be one. 155 | """ 156 | matcher = FeatureMatcher(array, array) 157 | matcher.repro_target.from_linear(array.shape) 158 | matcher.repro_sources.from_linear(array.shape) 159 | matcher.compare_features_matrix(split=2) 160 | 161 | assert pytest.approx(1.0, abs=1e-6) == matcher.repro_target.scores.min() 162 | assert pytest.approx(1.0, abs=1e-6) == matcher.repro_sources.scores.min() 163 | 164 | 165 | @given( 166 | content=Tensor(range=(17, 37), channels=5), style=Tensor(range=(13, 39), channels=5) 167 | ) 168 | @settings(deadline=None) 169 | def test_indices_same_split(content, style): 170 | """The score of the identity operation with linear indices should be one. 171 | """ 172 | matcher = FeatureMatcher(content, style) 173 | matcher.compare_features_matrix(split=1) 174 | target_indices = matcher.repro_target.indices.clone() 175 | source_indices = matcher.repro_sources.indices.clone() 176 | 177 | for split in [2, 4, 8]: 178 | matcher.update_target(content) 179 | matcher.compare_features_matrix(split=split) 180 | 181 | assert (target_indices != matcher.repro_target.indices).sum() <= 2 182 | assert (source_indices != matcher.repro_sources.indices).sum() <= 2 183 | 184 | 185 | @given( 186 | content=Tensor(range=(17, 37), channels=5), style=Tensor(range=(13, 39), channels=5) 187 | ) 188 | def test_indices_same_rotate(content, style): 189 | """The score of the identity operation with linear indices should be one. 190 | """ 191 | matcher1 = FeatureMatcher(content, style) 192 | matcher1.compare_features_matrix(split=2) 193 | 194 | matcher2 = FeatureMatcher(content, style.permute(0, 1, 3, 2)) 195 | matcher2.compare_features_matrix(split=2) 196 | 197 | assert ( 198 | matcher1.repro_target.indices[:, 0] != matcher2.repro_target.indices[:, 1] 199 | ).sum() <= 1 200 | assert ( 201 | matcher2.repro_target.indices[:, 1] != matcher1.repro_target.indices[:, 0] 202 | ).sum() <= 1 203 | 204 | 205 | @given(content=Tensor(range=(5, 7), channels=2), style=Tensor(range=(3, 9), channels=2)) 206 | def test_indices_symmetry_matrix(content, style): 207 | """The indices of the symmerical operation must be equal. 208 | """ 209 | matcher1 = FeatureMatcher(content, style) 210 | matcher2 = FeatureMatcher(style, content) 211 | 212 | matcher1.compare_features_matrix(split=2) 213 | matcher2.compare_features_matrix(split=2) 214 | 215 | assert (matcher1.repro_target.indices != matcher2.repro_sources.indices).sum() <= 2 216 | assert (matcher1.repro_sources.indices != matcher2.repro_target.indices).sum() <= 2 217 | 218 | 219 | @given(content=Tensor(range=(2, 4), channels=2), style=Tensor(range=(3, 3), channels=2)) 220 | @settings(deadline=None) 221 | def test_indices_symmetry_random(content, style): 222 | """The indices of the symmerical operation must be the same. 
223 | """ 224 | matcher1 = FeatureMatcher(content, style) 225 | matcher2 = FeatureMatcher(style, content) 226 | 227 | for _ in range(25): 228 | matcher1.compare_features_random() 229 | matcher2.compare_features_random() 230 | 231 | missing = sum( 232 | [ 233 | (matcher1.repro_target.indices != matcher2.repro_sources.indices).sum(), 234 | (matcher1.repro_sources.indices != matcher2.repro_target.indices).sum(), 235 | ] 236 | ) 237 | if missing == 0: 238 | break 239 | 240 | assert (matcher1.repro_target.indices != matcher2.repro_sources.indices).sum() <= 2 241 | assert (matcher1.repro_sources.indices != matcher2.repro_target.indices).sum() <= 2 242 | 243 | 244 | @given( 245 | content=Tensor(range=(4, 16), channels=2), style=Tensor(range=(4, 16), channels=2) 246 | ) 247 | def test_scores_zero(content, style): 248 | """Scores must be zero if inputs vary on different dimensions. 249 | """ 250 | content[:, 0], style[:, 1] = 0.0, 0.0 251 | matcher = FeatureMatcher(content, style) 252 | matcher.compare_features_matrix(split=2) 253 | assert pytest.approx(0.0) == matcher.repro_target.scores.max() 254 | assert pytest.approx(0.0) == matcher.repro_sources.scores.max() 255 | 256 | 257 | @given( 258 | content=Tensor(range=(4, 16), channels=2), style=Tensor(range=(4, 16), channels=2) 259 | ) 260 | def test_scores_one(content, style): 261 | """Scores must be one if inputs only vary on one dimension. 262 | """ 263 | content[:, 0], style[:, 0] = 0.0, 0.0 264 | matcher = FeatureMatcher(content, style) 265 | matcher.compare_features_matrix(split=2) 266 | assert pytest.approx(1.0) == matcher.repro_target.scores.min() 267 | assert pytest.approx(1.0) == matcher.repro_sources.scores.min() 268 | 269 | 270 | @given( 271 | target=Tensor(range=(4, 12), channels=3), source=Tensor(range=(4, 12), channels=3) 272 | ) 273 | def test_scores_reconstruct(target, source): 274 | """Scores must be one if inputs only vary on one dimension. 275 | """ 276 | matcher = FeatureMatcher(target, source) 277 | matcher.compare_features_matrix() 278 | 279 | recons_target = matcher.reconstruct_target() 280 | score = cosine_similarity_vector_1d(target, recons_target) 281 | assert pytest.approx(0.0, abs=1e-6) == abs( 282 | score.mean() - matcher.repro_target.scores.mean() 283 | ) 284 | 285 | recons_source = matcher.reconstruct_source() 286 | score = cosine_similarity_vector_1d(source, recons_source) 287 | assert pytest.approx(0.0, abs=1e-6) == abs( 288 | score.mean() - matcher.repro_sources.scores.mean() 289 | ) 290 | 291 | 292 | @given( 293 | content=Tensor(range=(4, 16), channels=5), style=Tensor(range=(4, 16), channels=5) 294 | ) 295 | def test_scores_improve(content, style): 296 | """Scores must be one if inputs only vary on one dimension. 297 | """ 298 | matcher = FeatureMatcher(content, style) 299 | matcher.compare_features_identity() 300 | before = matcher.repro_target.scores.sum() 301 | matcher.compare_features_random(times=1) 302 | after = matcher.repro_target.scores.sum() 303 | event("equal? 
%i" % int(after == before)) 304 | assert after >= before 305 | 306 | 307 | @given(array=Tensor(range=(9, 9), channels=5)) 308 | def test_scores_source_bias_matrix(array): 309 | matcher = FeatureMatcher(array, torch.cat([array, array], dim=2)) 310 | 311 | matcher.repro_target.biases[:, :, 9:] = 1.0 312 | matcher.repro_target.scores.zero_() 313 | matcher.compare_features_matrix(split=2) 314 | assert (matcher.repro_target.indices[:,0] >= 9).all() 315 | 316 | matcher.repro_target.biases[:, :, 9:] = 0.0 317 | matcher.repro_target.biases[:, :, :9] = 1.0 318 | matcher.repro_target.scores.zero_() 319 | matcher.compare_features_matrix(split=2) 320 | assert (matcher.repro_target.indices[:,0] < 9).all() 321 | 322 | 323 | @given(array=Tensor(range=(11, 11), channels=4)) 324 | def test_scores_target_bias_matrix(array): 325 | matcher = FeatureMatcher(torch.cat([array, array], dim=2), array) 326 | 327 | matcher.repro_sources.biases[:, :, 11:] = 1.0 328 | matcher.repro_sources.scores.zero_() 329 | matcher.compare_features_matrix(split=2) 330 | assert (matcher.repro_sources.indices[:,0] >= 11).all() 331 | 332 | matcher.repro_sources.biases[:, :, 11:] = 0.0 333 | matcher.repro_sources.biases[:, :, :11] = 1.0 334 | matcher.repro_sources.scores.zero_() 335 | matcher.compare_features_matrix(split=2) 336 | assert (matcher.repro_sources.indices[:,0] < 11).all() 337 | 338 | 339 | @given(array=Tensor(range=(8, 8), channels=5)) 340 | def test_scores_source_bias_random(array): 341 | matcher = FeatureMatcher(array, torch.cat([array, array], dim=2)) 342 | 343 | matcher.repro_target.biases[:, :, 8:] = 1.0 344 | matcher.repro_target.scores.fill_(-1.0) 345 | for _ in range(10): 346 | matcher.compare_features_random(split=2) 347 | assert (matcher.repro_target.indices[:,0] >= 8).all() 348 | 349 | matcher.repro_target.biases[:, :, 8:] = 0.0 350 | matcher.repro_target.biases[:, :, :8] = 1.0 351 | matcher.repro_target.scores.fill_(-1.0) 352 | for _ in range(10): 353 | matcher.compare_features_random(split=2) 354 | assert (matcher.repro_target.indices[:,0] < 8).all() 355 | 356 | 357 | @given(array=Tensor(range=(12, 12), channels=3)) 358 | def test_scores_target_bias_random(array): 359 | matcher = FeatureMatcher(torch.cat([array, array], dim=2), array) 360 | 361 | matcher.repro_sources.biases[:, :, 12:] = 1.0 362 | matcher.repro_sources.scores.fill_(-1.0) 363 | for _ in range(10): 364 | matcher.compare_features_random(split=2) 365 | assert (matcher.repro_sources.indices[:,0] >= 12).all() 366 | 367 | matcher.repro_sources.biases[:, :, 12:] = 0.0 368 | matcher.repro_sources.biases[:, :, :12] = 1.0 369 | matcher.repro_sources.scores.fill_(-1.0) 370 | for _ in range(10): 371 | matcher.compare_features_random(split=2) 372 | assert (matcher.repro_sources.indices[:,0] < 12).all() 373 | 374 | 375 | @given(array=Tensor(range=(3, 8), channels=5)) 376 | def test_propagate_down_right(array): 377 | """Propagating the identity transformation expects indices to propagate 378 | one cell at a time, this time down and towards the right. 
379 | """ 380 | matcher = FeatureMatcher(array, array) 381 | indices = matcher.repro_target.indices 382 | indices.zero_() 383 | 384 | matcher.compare_features_nearby(radius=1, split=2) 385 | assert (indices[:, :, 1, 0] == torch.tensor([1, 0], dtype=torch.long)).all() 386 | assert (indices[:, :, 0, 1] == torch.tensor([0, 1], dtype=torch.long)).all() 387 | 388 | matcher.compare_features_nearby(radius=1, split=2) 389 | assert (indices[:, :, 1, 1] == torch.tensor([1, 1], dtype=torch.long)).all() 390 | 391 | 392 | @given(array=Tensor(range=(2, 8), channels=5)) 393 | def test_propagate_up_left(array): 394 | """Propagating the identity transformation expects indices to propagate 395 | one cell at a time, here up and towards the left. 396 | """ 397 | y, x = array.shape[-2:] 398 | matcher = FeatureMatcher(array, array) 399 | indices, scores = matcher.repro_target.indices, matcher.repro_target.scores 400 | indices.zero_() 401 | 402 | indices[:, 0, -1, -1] = y - 1 403 | indices[:, 1, -1, -1] = x - 1 404 | scores[:, 0, -1, -1] = 1.0 405 | 406 | matcher.compare_features_nearby(radius=1, split=2) 407 | assert ( 408 | indices[:, :, y - 2, x - 1] == torch.tensor([y - 2, x - 1], dtype=torch.long) 409 | ).all() 410 | assert ( 411 | indices[:, :, y - 1, x - 2] == torch.tensor([y - 1, x - 2], dtype=torch.long) 412 | ).all() 413 | 414 | matcher.compare_features_nearby(radius=1, split=2) 415 | assert ( 416 | indices[:, :, y - 2, x - 2] == torch.tensor([y - 2, x - 2], dtype=torch.long) 417 | ).all() 418 | 419 | 420 | @given(content=Tensor(range=(6, 8), channels=4), style=Tensor(range=(7, 9), channels=4)) 421 | def test_compare_inverse_asymmetrical(content, style): 422 | """Check that doing the identity comparison also projects the inverse 423 | coordinates into the other buffer. 424 | """ 425 | 426 | # Set corner pixel as identical, so it matches 100%. 427 | content[:, :, -1, -1] = style[:, :, -1, -1] 428 | 429 | matcher = FeatureMatcher(content, style) 430 | matcher.repro_target.from_linear(style.shape) 431 | matcher.repro_sources.indices.zero_() 432 | matcher.compare_features_identity() 433 | matcher.compare_features_inverse(split=2) 434 | 435 | assert matcher.repro_sources.indices.max() > 0 436 | 437 | matcher.repro_sources.from_linear(content.shape) 438 | matcher.repro_target.indices.zero_() 439 | matcher.repro_target.scores.zero_() 440 | matcher.compare_features_identity() 441 | matcher.compare_features_inverse(split=2) 442 | 443 | assert matcher.repro_target.indices.max() > 0 444 | 445 | 446 | @given(array=Tensor(range=(3, 3), channels=3)) 447 | def test_compare_inverse_symmetrical(array): 448 | """Check that doing the identity comparison also projects the inverse 449 | coordinates into the other buffer. 
450 | """ 451 | matcher = FeatureMatcher(array, array) 452 | matcher.repro_target.from_linear(array.shape) 453 | matcher.repro_sources.indices.zero_() 454 | matcher.compare_features_identity() 455 | matcher.compare_features_inverse(split=1) 456 | 457 | assert (matcher.repro_target.indices != matcher.repro_sources.indices).sum() == 0 458 | 459 | matcher.repro_target.indices.zero_() 460 | matcher.repro_target.scores.fill_(float("-inf")) 461 | matcher.compare_features_identity() 462 | matcher.compare_features_inverse(split=1) 463 | 464 | assert (matcher.repro_target.indices != matcher.repro_sources.indices).sum() == 0 465 | 466 | 467 | # ================================================================================ 468 | 469 | 470 | @given(array=Tensor(range=(3, 5), channels=3)) 471 | def test_scatter_2d_single_float(array): 472 | matcher = FeatureMatcher(array, array) 473 | matcher.repro_target.scores.zero_() 474 | 475 | indices = torch.ones(size=(1, 2, 1, 1), dtype=torch.int64) 476 | values = torch.full((1, 1, 1, 1), 2.34, dtype=torch.float32) 477 | 478 | torch_scatter_2d(matcher.repro_target.scores, indices, values) 479 | 480 | assert matcher.repro_target.scores[:, :, 1, 1] == 2.34 481 | assert matcher.repro_target.scores[:, :, 0, 0] == 0.0 482 | 483 | 484 | @given(array=Tensor(range=(3, 5), channels=3)) 485 | def test_scatter_2d_single_long(array): 486 | matcher = FeatureMatcher(array, array) 487 | matcher.repro_target.indices.zero_() 488 | 489 | indices = torch.ones(size=(1, 2, 1, 1), dtype=torch.int64) 490 | values = torch.tensor([[234, 345]], dtype=torch.int64).view(1, 2, 1, 1) 491 | 492 | torch_scatter_2d(matcher.repro_target.indices, indices, values) 493 | 494 | assert (matcher.repro_target.indices == 234).sum() == 1 495 | assert (matcher.repro_target.indices == 345).sum() == 1 496 | assert matcher.repro_target.indices[:, 0, 1, 1] == 234 497 | assert matcher.repro_target.indices[:, 1, 1, 1] == 345 498 | assert matcher.repro_target.indices[:, :, 0, 0].max() == 0 499 | 500 | 501 | def test_improve_window(): 502 | u, v = 4, 5 503 | 504 | mapping = Mapping((1, -1, u, u), "cpu").from_linear((1, 1, 7+v, 7+v)) 505 | similarity = torch.empty(size=(1, u * u, v * v), dtype=torch.float32).uniform_() 506 | 507 | mapping.improve_window((0, u, 0, u), (7, v, 7, v), similarity.max(dim=2)) 508 | orig = mapping.indices.clone() 509 | 510 | mapping.scores.fill_(float("-inf")) 511 | mapping.indices.zero_() 512 | 513 | mapping.improve_window( 514 | (0, u // 2, 0, u), (7, v, 7, v), similarity[:, : u * u // 2].max(dim=2) 515 | ) 516 | mapping.improve_window( 517 | (u // 2, u // 2, 0, u), (7, v, 7, v), similarity[:, u * u // 2 :].max(dim=2) 518 | ) 519 | assert (orig != mapping.indices).sum() == 0 520 | 521 | 522 | # ================================================================================ 523 | 524 | from texturize.match import cosine_similarity_matrix_1d 525 | 526 | 527 | def nearest_neighbors_1d(a, b, split=1, eps=1e-8): 528 | batch = a.shape[0] 529 | size = b.shape[2] // split 530 | 531 | score_a = a.new_full((batch, a.shape[2]), float("-inf")) 532 | index_a = a.new_full((batch, a.shape[2]), -1, dtype=torch.int64) 533 | score_b = b.new_full((batch, b.shape[2]), float("-inf")) 534 | index_b = b.new_full((batch, b.shape[2]), -1, dtype=torch.int64) 535 | 536 | for i in range(split): 537 | start_b, finish_b = i * size, (i + 1) * size 538 | bb = b[:, :, start_b:finish_b] 539 | sim = cosine_similarity_matrix_1d(a, bb, eps=eps) 540 | 541 | max_a = torch.max(sim, dim=2) 542 | cond_a = max_a.values > 


def nearest_neighbors_1d(a, b, split=1, eps=1e-8):
    batch = a.shape[0]
    size = b.shape[2] // split

    score_a = a.new_full((batch, a.shape[2]), float("-inf"))
    index_a = a.new_full((batch, a.shape[2]), -1, dtype=torch.int64)
    score_b = b.new_full((batch, b.shape[2]), float("-inf"))
    index_b = b.new_full((batch, b.shape[2]), -1, dtype=torch.int64)

    for i in range(split):
        start_b, finish_b = i * size, (i + 1) * size
        bb = b[:, :, start_b:finish_b]
        sim = cosine_similarity_matrix_1d(a, bb, eps=eps)

        max_a = torch.max(sim, dim=2)
        cond_a = max_a.values > score_a
        index_a[:] = torch.where(cond_a, max_a.indices + start_b, index_a)
        score_a[:] = torch.where(cond_a, max_a.values, score_a)

        max_b = torch.max(sim, dim=1)
        slice_b = slice(start_b, finish_b)
        cond_b = max_b.values > score_b[:, slice_b]
        index_b[:, slice_b] = torch.where(cond_b, max_b.indices, index_b[:, slice_b])
        score_b[:, slice_b] = torch.where(cond_b, max_b.values, score_b[:, slice_b])

    return index_a, index_b


@given(
    content=Tensor(range=(15, 37), channels=5), style=Tensor(range=(13, 39), channels=5)
)
@settings(deadline=None)
def test_nearest_neighbor_vs_matcher(content, style):
    """The matcher must find the same nearest neighbors as the brute-force
    reference implementation above, in both directions.
    """

    matcher = FeatureMatcher(content, style)
    matcher.compare_features_matrix(split=1)

    ida, idb = nearest_neighbors_1d(content.flatten(2), style.flatten(2), split=1)

    ima = (
        matcher.repro_target.indices[:, 0] * style.shape[2]
        + matcher.repro_target.indices[:, 1]
    )
    assert (ima.flatten(1) != ida).sum() == 0

    imb = (
        matcher.repro_sources.indices[:, 0] * content.shape[2]
        + matcher.repro_sources.indices[:, 1]
    )
    assert (imb.flatten(1) != idb).sum() == 0

--------------------------------------------------------------------------------
/src/texturize/match.py:
--------------------------------------------------------------------------------
# neural-texturize — Copyright (c) 2020, Novelty Factory KG. See LICENSE for details.

import itertools

import torch
import torch.nn.functional as F


def torch_flatten_2d(a):
    a = a.permute(1, 0, 2, 3)
    return a.reshape(a.shape[:1] + (-1,))


def torch_gather_2d(array, indices):
    """Extract the content of an array using the 2D coordinates provided.
    """
    batch = torch.arange(
        0, array.shape[0], dtype=torch.long, device=indices.device
    ).view(-1, 1, 1)

    idx = batch * (array.shape[2] * array.shape[3]) + (
        indices[:, 0, :, :] * array.shape[3] + indices[:, 1, :, :]
    )
    flat_array = torch_flatten_2d(array)
    x = torch.index_select(flat_array, 1, idx.view(-1))

    result = x.view(array.shape[1:2] + indices.shape[:1] + indices.shape[2:])
    return result.permute(1, 0, 2, 3)
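

# Example of the indexing convention above (hypothetical values, shown as a
# doctest-style sketch rather than executable module code):
#
#   >>> array = torch.arange(8.0).view(1, 2, 2, 2)        # (B, C, H, W)
#   >>> idx = torch.zeros(1, 2, 1, 1, dtype=torch.long)   # (y, x) pairs
#   >>> torch_gather_2d(array, idx)[0, :, 0, 0]
#   tensor([0., 4.])
#
# i.e. `indices[:, 0]` selects rows and `indices[:, 1]` selects columns, and
# the result keeps the channels of `array` at the spatial size of `indices`.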
16 | """ 17 | batch = torch.arange( 18 | 0, array.shape[0], dtype=torch.long, device=indices.device 19 | ).view(-1, 1, 1) 20 | 21 | idx = batch * (array.shape[2] * array.shape[3]) + ( 22 | indices[:, 0, :, :] * array.shape[3] + indices[:, 1, :, :] 23 | ) 24 | flat_array = torch_flatten_2d(array) 25 | x = torch.index_select(flat_array, 1, idx.view(-1)) 26 | 27 | result = x.view(array.shape[1:2] + indices.shape[:1] + indices.shape[2:]) 28 | return result.permute(1, 0, 2, 3) 29 | 30 | 31 | def torch_scatter_2d(output, indices, values): 32 | _, c, h, w = output.shape 33 | 34 | assert output.shape[0] == 1 35 | assert output.shape[1] == values.shape[1] 36 | 37 | chanidx = torch.arange(0, c, dtype=torch.long, device=indices.device).view(-1, 1, 1) 38 | 39 | idx = chanidx * (h * w) + (indices[:, 0] * w + indices[:, 1]) 40 | output.flatten().scatter_(0, idx.flatten(), values.to(dtype=output.dtype).flatten()) 41 | 42 | 43 | def torch_pad_reflect(array, padding): 44 | return torch.nn.functional.pad(array, pad=padding, mode="reflect") 45 | 46 | 47 | def iterate_range(size, split=2): 48 | assert split <= size 49 | for start, stop in zip(range(0, split), range(1, split + 1)): 50 | yield ( 51 | max(0, (size * start) // split), 52 | min(size, (size * stop) // split), 53 | ) 54 | 55 | 56 | def cosine_similarity_matrix_1d(source, target, eps=None): 57 | eps = eps or (1e-3 if source.dtype == torch.float16 else 1e-8) 58 | source = source / (torch.norm(source, dim=1, keepdim=True) + eps) 59 | target = target / (torch.norm(target, dim=1, keepdim=True) + eps) 60 | 61 | result = torch.bmm(source.permute(0, 2, 1), target) 62 | return torch.clamp(result, max=1.0 / eps) 63 | 64 | 65 | def cosine_similarity_vector_1d(source, target, eps=None): 66 | eps = eps or (1e-3 if source.dtype == torch.float16 else 1e-8) 67 | source = source / (torch.norm(source, dim=1, keepdim=True) + eps) 68 | target = target / (torch.norm(target, dim=1, keepdim=True) + eps) 69 | 70 | source = source.expand_as(target) 71 | 72 | result = torch.sum(source * target, dim=1) 73 | return torch.clamp(result, max=1.0 / eps) 74 | 75 | 76 | class Mapping: 77 | def __init__(self, size, device="cpu"): 78 | b, _, h, w = size 79 | self.device = torch.device(device) 80 | self.indices = torch.empty((b, 2, h, w), dtype=torch.int64, device=device) 81 | self.scores = torch.full((b, 1, h, w), float("-inf"), device=device) 82 | self.biases = None 83 | self.target_size = None 84 | 85 | def clone(self): 86 | b, _, h, w = self.indices.shape 87 | clone = Mapping((b, -1, h, w), self.device) 88 | clone.indices[:] = self.indices 89 | clone.scores[:] = self.scores 90 | clone.biases = self.biases.copy() 91 | return clone 92 | 93 | def setup_biases(self, target_size): 94 | b, _, h, w = target_size 95 | self.biases = torch.full((b, 1, h, w), 0.0, device=self.device) 96 | 97 | def rescale(self, target_size): 98 | factor = torch.tensor(target_size, dtype=torch.float) / torch.tensor( 99 | self.target_size[2:], dtype=torch.float 100 | ) 101 | self.indices = ( 102 | self.indices.float().mul(factor.to(self.device).view(1, 2, 1, 1)).long() 103 | ) 104 | self.indices[:, 0].clamp_(0, target_size[0] - 1) 105 | self.indices[:, 1].clamp_(0, target_size[1] - 1) 106 | self.target_size = self.target_size[:2] + target_size 107 | 108 | self.setup_biases(self.scores.shape[:2] + target_size) 109 | 110 | def resize(self, size): 111 | self.indices = F.interpolate( 112 | self.indices.float(), size=size, mode="nearest" 113 | ).long() 114 | self.scores = F.interpolate(self.scores, size=size, 
mode="nearest") 115 | 116 | def improve(self, candidate_scores, candidate_indices): 117 | candidate_indices = candidate_indices.view(self.indices.shape) 118 | candidate_scores = candidate_scores.view(self.scores.shape) + torch_gather_2d( 119 | self.biases, candidate_indices 120 | ) 121 | 122 | cond = candidate_scores > self.scores 123 | self.indices[:] = torch.where(cond, candidate_indices, self.indices) 124 | self.scores[:] = torch.where(cond, candidate_scores, self.scores) 125 | 126 | def improve_window(self, this_window, other_window, candidates): 127 | assert candidates.indices.shape[0] == 1 128 | 129 | sy, dy, sx, dx = other_window 130 | grid = torch.empty((1, 2, dy, dx), dtype=torch.int64, device=self.device) 131 | self.meshgrid(grid, offset=(sy, sx), range=(dy, dx)) 132 | 133 | chanidx = torch.arange(0, 2, dtype=torch.long, device=self.device).view(-1, 1) 134 | chanidx = chanidx * (dx * dy) 135 | indices_2d = torch.index_select( 136 | grid.flatten(), 137 | dim=0, 138 | index=(chanidx + candidates.indices.to(self.device).view(1, -1)).flatten(), 139 | ) 140 | 141 | return self._improve_window( 142 | this_window, candidates.values.to(self.device), indices_2d.view(1, 2, -1) 143 | ) 144 | 145 | def _improve_window(self, this_window, scores, indices): 146 | start_y, size_y, start_x, size_x = this_window 147 | assert indices.ndim == 3 and indices.shape[2] == size_y * size_x 148 | 149 | candidate_scores = scores.view(1, 1, size_y, size_x) 150 | candidate_indices = indices.view(1, 2, size_y, size_x) 151 | 152 | slice_y, slice_x = ( 153 | slice(start_y, start_y + size_y), 154 | slice(start_x, start_x + size_x), 155 | ) 156 | 157 | cond = candidate_scores > self.scores[:, :, slice_y, slice_x] 158 | self.indices[:, :, slice_y, slice_x] = torch.where( 159 | cond, candidate_indices, self.indices[:, :, slice_y, slice_x] 160 | ) 161 | self.scores[:, :, slice_y, slice_x] = torch.where( 162 | cond, candidate_scores.float(), self.scores[:, :, slice_y, slice_x].float() 163 | ) 164 | return (cond != 0).sum().item() 165 | 166 | def _improve_scatter(self, this_indices, scores, other_window): 167 | sy, dy, sx, dx = other_window 168 | grid = torch.empty((1, 2, dy, dx), dtype=torch.int64, device=self.device) 169 | self.meshgrid(grid, offset=(sy, sx), range=(dy, dx)) 170 | 171 | this_scores = ( 172 | torch_gather_2d(self.scores, this_indices.view(1, 2, dy, dx)) 173 | + self.biases[:, :, sy : sy + dy, sx : sx + dx] 174 | ) 175 | cond = scores.flatten(2) > this_scores.flatten(2) 176 | 177 | better_indices = this_indices.flatten(2)[cond.expand(1, 2, -1)].view(1, 2, -1) 178 | if better_indices.shape[2] == 0: 179 | return 0 180 | 181 | better_scores = scores.flatten(2)[cond].view(1, 1, -1) 182 | window_indices = grid.flatten(2)[cond.expand(1, 2, -1)].view(1, 2, -1) 183 | 184 | torch_scatter_2d(self.scores, better_indices, better_scores) 185 | torch_scatter_2d(self.indices, better_indices, window_indices) 186 | 187 | return better_indices.shape[2] 188 | 189 | def from_random(self, target_size): 190 | assert target_size[0] == 1, "Only 1 feature map supported." 

    def from_random(self, target_size):
        assert target_size[0] == 1, "Only 1 feature map supported."
        self.target_size = target_size
        self.randgrid(self.indices, offset=(0, 0), range=target_size[2:])
        self.setup_biases(target_size)
        return self

    def randgrid(self, output, offset, range):
        torch.randint(
            low=offset[0],
            high=offset[0] + range[0],
            size=output[:, 0, :, :].shape,
            out=output[:, 0, :, :],
        )
        torch.randint(
            low=offset[1],
            high=offset[1] + range[1],
            size=output[:, 1, :, :].shape,
            out=output[:, 1, :, :],
        )

    def meshgrid(self, output, offset, range):
        b, _, h, w = output.shape
        output[:, 0, :, :] = (
            torch.arange(h, dtype=torch.float32)
            .mul(range[0] / h)
            .add(offset[0])
            .view((1, h, 1))
            .expand((b, h, 1))
            .long()
        )
        output[:, 1, :, :] = (
            torch.arange(w, dtype=torch.float32)
            .mul(range[1] / w)
            .add(offset[1])
            .view((1, 1, w))
            .expand((b, 1, w))
            .long()
        )

    def from_linear(self, target_size):
        assert target_size[0] == 1, "Only 1 feature map supported."
        self.target_size = target_size
        self.meshgrid(self.indices, offset=(0, 0), range=target_size[2:])
        self.setup_biases(target_size)
        return self


class FeatureMatcher:
    """Implementation of feature matching between two feature maps in 2D arrays,
    using normalized cross-correlation of features as the similarity metric.
    """
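
    # Typical round-trip (an illustrative sketch; the feature maps are assumed
    # to be (1, C, H, W) tensors, e.g. activations of a convolutional network):
    #
    #   >>> matcher = FeatureMatcher(target_features, source_features)
    #   >>> matcher.compare_features_matrix(split=2)   # exhaustive search
    #   >>> matcher.compare_features_nearby(radius=1)  # propagate good matches
    #   >>> result = matcher.reconstruct_target()      # shaped like the target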

    def __init__(self, target=None, sources=None, device="cpu", variety=0.0):
        self.device = torch.device(device)
        self.variety = variety

        self.target = None
        self.sources = None
        self.repro_target = None
        self.repro_sources = None

        if sources is not None:
            self.update_sources(sources)
        if target is not None:
            self.update_target(target)

    def clone(self):
        clone = FeatureMatcher(device=self.device)
        clone.sources = self.sources
        clone.target = self.target

        clone.repro_target = self.repro_target.clone()
        clone.repro_sources = self.repro_sources.clone()
        return clone

    def update_target(self, target):
        assert len(target.shape) == 4
        assert target.shape[0] == 1

        self.target = target

        if self.repro_target is None:
            self.repro_target = Mapping(self.target.shape, self.device)
            self.repro_target.from_random(self.sources.shape)
            self.repro_sources.from_random(self.target.shape)

        self.repro_target.scores.fill_(float("-inf"))
        self.repro_sources.scores.fill_(float("-inf"))

        if target.shape[2:] != self.repro_target.indices.shape[2:]:
            self.repro_sources.rescale(target.shape[2:])
            self.repro_target.resize(target.shape[2:])

    def update_sources(self, sources):
        assert len(sources.shape) == 4
        assert sources.shape[0] == 1

        self.sources = sources

        if self.repro_sources is None:
            self.repro_sources = Mapping(self.sources.shape, self.device)

        if sources.shape[2:] != self.repro_sources.indices.shape[2:]:
            self.repro_target.rescale(sources.shape[2:])
            self.repro_sources.resize(sources.shape[2:])

    def update_biases(self):
        sources_value = (
            self.repro_sources.scores
            - torch_gather_2d(self.repro_sources.biases, self.repro_sources.indices)
            - self.repro_target.biases
        )
        target_value = (
            self.repro_target.scores
            - torch_gather_2d(self.repro_target.biases, self.repro_target.indices)
            - self.repro_sources.biases
        )

        k = self.variety
        self.repro_target.biases[:] = -k * (sources_value - sources_value.mean())
        self.repro_sources.biases[:] = -k * (target_value - target_value.mean())

        self.repro_target.scores.fill_(float("-inf"))
        self.repro_sources.scores.fill_(float("-inf"))

    def reconstruct_target(self):
        return torch_gather_2d(
            self.sources, self.repro_target.indices.to(self.sources.device)
        )

    def reconstruct_source(self):
        return torch_gather_2d(
            self.target, self.repro_sources.indices.to(self.target.device)
        )

    def compare_features_coarse(self, parent, radius=2, split=1):
        def _compare(a, b, repro_a, repro_b, parent_a):
            if parent_a.indices.shape[2] > repro_a.indices.shape[2]:
                return 0

            total = 0
            for (t1, t2) in iterate_range(a.shape[2], split):
                assert t2 >= t1

                factor = repro_a.indices.shape[2] / parent_a.indices.shape[2]
                indices = F.interpolate(
                    parent_a.indices.float() * factor,
                    size=repro_a.indices.shape[2:],
                    mode="nearest",
                )[:, :, t1:t2].long()
                indices += torch.empty_like(indices).random_(-radius, radius + 1)

                indices[:, 0, :, :].clamp_(min=0, max=b.shape[2] - 1)
                indices[:, 1, :, :].clamp_(min=0, max=b.shape[3] - 1)

                total += self._improve(
                    a, (t1, t2 - t1, 0, a.shape[3]), repro_a, b, indices, repro_b,
                )
            return total

        if parent is None:
            return 0

        ts = _compare(
            self.target,
            self.sources,
            self.repro_target,
            self.repro_sources,
            parent.repro_target,
        )
        st = _compare(
            self.sources,
            self.target,
            self.repro_sources,
            self.repro_target,
            parent.repro_sources,
        )
        return ts + st

    def compare_features_matrix(self, split=1):
        assert self.sources.shape[0] == 1, "Only 1 source supported."

        for (t1, t2), (s1, s2) in itertools.product(
            iterate_range(self.target.shape[2], split),
            iterate_range(self.sources.shape[2], split),
        ):
            assert t2 != t1 and s2 != s1

            target_window = self.target[:, :, t1:t2].flatten(2)
            source_window = self.sources[:, :, s1:s2].flatten(2)

            similarity = cosine_similarity_matrix_1d(target_window, source_window)
            similarity += (
                self.repro_target.biases[:, :, s1:s2]
                .to(similarity.device)
                .reshape(1, 1, -1)
            )
            similarity += (
                self.repro_sources.biases[:, :, t1:t2]
                .to(similarity.device)
                .reshape(1, -1, 1)
            )

            best_source = torch.max(similarity, dim=2)
            self.repro_target.improve_window(
                (t1, t2 - t1, 0, self.target.shape[3]),
                (s1, s2 - s1, 0, self.sources.shape[3]),
                best_source,
            )

            best_target = torch.max(similarity, dim=1)
            self.repro_sources.improve_window(
                (s1, s2 - s1, 0, self.sources.shape[3]),
                (t1, t2 - t1, 0, self.target.shape[3]),
                best_target,
            )
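
    # Note: the similarity matrix above is O(target_pixels * source_pixels) in
    # memory, so `split` tiles the row axes of both maps; each window then
    # costs roughly 1/split**2 of the full matrix, traded against running
    # split**2 smaller matrix multiplies.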
410 | """ 411 | 412 | def _compare(a, b, repro_a, repro_b): 413 | total = 0 414 | for (t1, t2) in iterate_range(a.shape[2], split): 415 | assert t2 >= t1 416 | 417 | if radius == -1: 418 | # Generate random grid size (h, w) with indices in range of B. 419 | h, w = t2 - t1, a.shape[3] 420 | indices = torch.empty( 421 | (times, 2, h, w), dtype=torch.int64, device=self.device 422 | ) 423 | repro_a.randgrid(indices, offset=(0, 0), range=b.shape[2:]) 424 | else: 425 | indices = repro_a.indices[:, :, t1:t2].clone() 426 | indices = indices + torch.empty_like(indices).random_( 427 | -radius, radius + 1 428 | ) 429 | 430 | indices[:, 0, :, :].clamp_(min=0, max=b.shape[2] - 1) 431 | indices[:, 1, :, :].clamp_(min=0, max=b.shape[3] - 1) 432 | 433 | total += self._improve( 434 | a, (t1, t2 - t1, 0, a.shape[3]), repro_a, b, indices, repro_b, 435 | ) 436 | return total 437 | 438 | ts = _compare(self.target, self.sources, self.repro_target, self.repro_sources) 439 | st = _compare(self.sources, self.target, self.repro_sources, self.repro_target) 440 | return ts + st 441 | 442 | def compare_features_identity(self, split=1): 443 | def _compare(a, b, repro_a, repro_b): 444 | for (t1, t2) in iterate_range(a.shape[2], split): 445 | assert t2 >= t1 446 | 447 | indices = repro_a.indices[:, :, t1:t2] 448 | self._update( 449 | a, (t1, t2 - t1, 0, a.shape[3]), repro_a, b, indices, repro_b, 450 | ) 451 | 452 | _compare(self.target, self.sources, self.repro_target, self.repro_sources) 453 | _compare(self.sources, self.target, self.repro_sources, self.repro_target) 454 | 455 | def compare_features_inverse(self, split=1): 456 | def _compare(a, b, repro_a, repro_b, twice=False): 457 | total = 0 458 | for (t1, t2) in iterate_range(a.shape[2], split): 459 | assert t2 >= t1 460 | 461 | indices = repro_a.indices[:, :, t1:t2] 462 | total += self._improve( 463 | a, (t1, t2 - t1, 0, a.shape[3]), repro_a, b, indices, repro_b, 464 | ) 465 | return total 466 | 467 | ts = _compare(self.target, self.sources, self.repro_target, self.repro_sources) 468 | st = _compare(self.sources, self.target, self.repro_sources, self.repro_target) 469 | return ts + st 470 | 471 | def compare_features_nearby(self, radius, split=1): 472 | """Generate nearby coordinates for each pixel to see if offseting the neighboring 473 | pixel would provide better results. 474 | """ 475 | assert isinstance(radius, int) 476 | padding = radius 477 | 478 | def _compare(a, b, repro_a, repro_b): 479 | # Compare all the neighbours from the original position. 480 | original = repro_a.indices.clone() 481 | padded_original = torch_pad_reflect( 482 | original.to(dtype=torch.float32).expand(4, -1, -1, -1), 483 | (padding, padding, padding, padding), 484 | ).long() 485 | 486 | total = 0 487 | for (t1, t2) in iterate_range(a.shape[2], split): 488 | h, w = (t2 - t1), a.shape[3] 489 | 490 | x = original.new_tensor([0, 0, -radius, +radius]).view(4, 1, 1) 491 | y = original.new_tensor([-radius, +radius, 0, 0]).view(4, 1, 1) 492 | 493 | # Create a lookup map with offset coordinates from each coordinate. 494 | lookup = original.new_empty((4, 2, h, w)) 495 | lookup[:, 0, :, :] = torch.arange( 496 | t1, t1 + lookup.shape[2], dtype=torch.long 497 | ).view((1, -1, 1)) 498 | lookup[:, 1, :, :] = torch.arange( 499 | 0, lookup.shape[3], dtype=torch.long 500 | ).view((1, 1, -1)) 501 | lookup[:, 0, :, :] += y + padding 502 | lookup[:, 1, :, :] += x + padding 503 | 504 | # Compute new padded buffer with the current best coordinates. 

    def compare_features_nearby(self, radius, split=1):
        """Generate nearby coordinates for each pixel to see if offsetting the
        neighboring pixel would provide better results.
        """
        assert isinstance(radius, int)
        padding = radius

        def _compare(a, b, repro_a, repro_b):
            # Compare all the neighbours from the original position.
            original = repro_a.indices.clone()
            padded_original = torch_pad_reflect(
                original.to(dtype=torch.float32).expand(4, -1, -1, -1),
                (padding, padding, padding, padding),
            ).long()

            total = 0
            for (t1, t2) in iterate_range(a.shape[2], split):
                h, w = (t2 - t1), a.shape[3]

                x = original.new_tensor([0, 0, -radius, +radius]).view(4, 1, 1)
                y = original.new_tensor([-radius, +radius, 0, 0]).view(4, 1, 1)

                # Create a lookup map with offset coordinates from each coordinate.
                lookup = original.new_empty((4, 2, h, w))
                lookup[:, 0, :, :] = torch.arange(
                    t1, t1 + lookup.shape[2], dtype=torch.long
                ).view((1, -1, 1))
                lookup[:, 1, :, :] = torch.arange(
                    0, lookup.shape[3], dtype=torch.long
                ).view((1, 1, -1))
                lookup[:, 0, :, :] += y + padding
                lookup[:, 1, :, :] += x + padding

                # Compute new padded buffer with the current best coordinates.
                indices = padded_original.clone()
                indices[:, 0, :, :] -= y
                indices[:, 1, :, :] -= x

                # Lookup the neighbor coordinates and clamp if the calculation overflows.
                candidates = torch_gather_2d(indices, lookup)

                # Handle `out_of_bounds` by clamping. Could be randomized?
                candidates[:, 0, :, :].clamp_(min=0, max=b.shape[2] - 1)
                candidates[:, 1, :, :].clamp_(min=0, max=b.shape[3] - 1)

                # Update the target window, and the scattered source pixels.
                total += self._improve(
                    a, (t1, t2 - t1, 0, w), repro_a, b, candidates, repro_b,
                )

            return total

        ts = _compare(self.target, self.sources, self.repro_target, self.repro_sources)
        st = _compare(self.sources, self.target, self.repro_sources, self.repro_target)
        return ts + st

    def _compute_similarity(
        self, a_full, window_a, repro_a, b_full, b_indices, repro_b
    ):
        y, dy, x, dx = window_a
        a = a_full[:, :, y : y + dy, x : x + dx]
        b = torch_gather_2d(b_full, b_indices.to(b_full.device))

        similarity = cosine_similarity_vector_1d(a.flatten(2), b.flatten(2))
        similarity += (
            torch_gather_2d(repro_a.biases, b_indices)
            .to(similarity.device)
            .view(similarity.shape[0], -1)
        )
        similarity += (
            repro_b.biases[:, :, y : y + dy, x : x + dx]
            .to(similarity.device)
            .view(1, -1)
        )
        return similarity

    def _update(self, a_full, window_a, repro_a, b_full, b_indices, repro_b):
        y, dy, x, dx = window_a
        similarity = self._compute_similarity(
            a_full, window_a, repro_a, b_full, b_indices, repro_b
        )
        assert similarity.shape[0] == 1
        repro_a.scores[:, :, y : y + dy, x : x + dx] = similarity.view(1, 1, dy, dx)

    def _improve(self, a_full, window_a, repro_a, b_full, b_indices, repro_b):
        similarity = self._compute_similarity(
            a_full, window_a, repro_a, b_full, b_indices, repro_b
        )

        best_candidates = similarity.max(dim=0)
        candidates = torch.gather(
            b_indices.flatten(2),
            dim=0,
            index=best_candidates.indices.to(self.device)
            .view(1, 1, -1)
            .expand(1, 2, -1),
        )

        scores = best_candidates.values.view(1, 1, -1).to(self.device)
        cha = repro_a._improve_window(window_a, scores, candidates.flatten(2))
        chb = repro_b._improve_scatter(candidates.flatten(2), scores, window_a)
        return cha + chb

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU AFFERO GENERAL PUBLIC LICENSE 2 | Version 3, 19 November 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU Affero General Public License is a free, copyleft license for 11 | software and other kinds of works, specifically designed to ensure 12 | cooperation with the community in the case of network server software. 13 | 14 | The licenses for most software and other practical works are designed 15 | to take away your freedom to share and change the works. By contrast, 16 | our General Public Licenses are intended to guarantee your freedom to 17 | share and change all versions of a program--to make sure it remains free 18 | software for all its users.
19 | 20 | When we speak of free software, we are referring to freedom, not 21 | price. Our General Public Licenses are designed to make sure that you 22 | have the freedom to distribute copies of free software (and charge for 23 | them if you wish), that you receive source code or can get it if you 24 | want it, that you can change the software or use pieces of it in new 25 | free programs, and that you know you can do these things. 26 | 27 | Developers that use our General Public Licenses protect your rights 28 | with two steps: (1) assert copyright on the software, and (2) offer 29 | you this License which gives you legal permission to copy, distribute 30 | and/or modify the software. 31 | 32 | A secondary benefit of defending all users' freedom is that 33 | improvements made in alternate versions of the program, if they 34 | receive widespread use, become available for other developers to 35 | incorporate. Many developers of free software are heartened and 36 | encouraged by the resulting cooperation. However, in the case of 37 | software used on network servers, this result may fail to come about. 38 | The GNU General Public License permits making a modified version and 39 | letting the public access it on a server without ever releasing its 40 | source code to the public. 41 | 42 | The GNU Affero General Public License is designed specifically to 43 | ensure that, in such cases, the modified source code becomes available 44 | to the community. It requires the operator of a network server to 45 | provide the source code of the modified version running there to the 46 | users of that server. Therefore, public use of a modified version, on 47 | a publicly accessible server, gives the public access to the source 48 | code of the modified version. 49 | 50 | An older license, called the Affero General Public License and 51 | published by Affero, was designed to accomplish similar goals. This is 52 | a different license, not a version of the Affero GPL, but Affero has 53 | released a new version of the Affero GPL which permits relicensing under 54 | this license. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | TERMS AND CONDITIONS 60 | 61 | 0. Definitions. 62 | 63 | "This License" refers to version 3 of the GNU Affero General Public License. 64 | 65 | "Copyright" also means copyright-like laws that apply to other kinds of 66 | works, such as semiconductor masks. 67 | 68 | "The Program" refers to any copyrightable work licensed under this 69 | License. Each licensee is addressed as "you". "Licensees" and 70 | "recipients" may be individuals or organizations. 71 | 72 | To "modify" a work means to copy from or adapt all or part of the work 73 | in a fashion requiring copyright permission, other than the making of an 74 | exact copy. The resulting work is called a "modified version" of the 75 | earlier work or a work "based on" the earlier work. 76 | 77 | A "covered work" means either the unmodified Program or a work based 78 | on the Program. 79 | 80 | To "propagate" a work means to do anything with it that, without 81 | permission, would make you directly or secondarily liable for 82 | infringement under applicable copyright law, except executing it on a 83 | computer or modifying a private copy. Propagation includes copying, 84 | distribution (with or without modification), making available to the 85 | public, and in some countries other activities as well. 
86 | 87 | To "convey" a work means any kind of propagation that enables other 88 | parties to make or receive copies. Mere interaction with a user through 89 | a computer network, with no transfer of a copy, is not conveying. 90 | 91 | An interactive user interface displays "Appropriate Legal Notices" 92 | to the extent that it includes a convenient and prominently visible 93 | feature that (1) displays an appropriate copyright notice, and (2) 94 | tells the user that there is no warranty for the work (except to the 95 | extent that warranties are provided), that licensees may convey the 96 | work under this License, and how to view a copy of this License. If 97 | the interface presents a list of user commands or options, such as a 98 | menu, a prominent item in the list meets this criterion. 99 | 100 | 1. Source Code. 101 | 102 | The "source code" for a work means the preferred form of the work 103 | for making modifications to it. "Object code" means any non-source 104 | form of a work. 105 | 106 | A "Standard Interface" means an interface that either is an official 107 | standard defined by a recognized standards body, or, in the case of 108 | interfaces specified for a particular programming language, one that 109 | is widely used among developers working in that language. 110 | 111 | The "System Libraries" of an executable work include anything, other 112 | than the work as a whole, that (a) is included in the normal form of 113 | packaging a Major Component, but which is not part of that Major 114 | Component, and (b) serves only to enable use of the work with that 115 | Major Component, or to implement a Standard Interface for which an 116 | implementation is available to the public in source code form. A 117 | "Major Component", in this context, means a major essential component 118 | (kernel, window system, and so on) of the specific operating system 119 | (if any) on which the executable work runs, or a compiler used to 120 | produce the work, or an object code interpreter used to run it. 121 | 122 | The "Corresponding Source" for a work in object code form means all 123 | the source code needed to generate, install, and (for an executable 124 | work) run the object code and to modify the work, including scripts to 125 | control those activities. However, it does not include the work's 126 | System Libraries, or general-purpose tools or generally available free 127 | programs which are used unmodified in performing those activities but 128 | which are not part of the work. For example, Corresponding Source 129 | includes interface definition files associated with source files for 130 | the work, and the source code for shared libraries and dynamically 131 | linked subprograms that the work is specifically designed to require, 132 | such as by intimate data communication or control flow between those 133 | subprograms and other parts of the work. 134 | 135 | The Corresponding Source need not include anything that users 136 | can regenerate automatically from other parts of the Corresponding 137 | Source. 138 | 139 | The Corresponding Source for a work in source code form is that 140 | same work. 141 | 142 | 2. Basic Permissions. 143 | 144 | All rights granted under this License are granted for the term of 145 | copyright on the Program, and are irrevocable provided the stated 146 | conditions are met. This License explicitly affirms your unlimited 147 | permission to run the unmodified Program. 
The output from running a 148 | covered work is covered by this License only if the output, given its 149 | content, constitutes a covered work. This License acknowledges your 150 | rights of fair use or other equivalent, as provided by copyright law. 151 | 152 | You may make, run and propagate covered works that you do not 153 | convey, without conditions so long as your license otherwise remains 154 | in force. You may convey covered works to others for the sole purpose 155 | of having them make modifications exclusively for you, or provide you 156 | with facilities for running those works, provided that you comply with 157 | the terms of this License in conveying all material for which you do 158 | not control copyright. Those thus making or running the covered works 159 | for you must do so exclusively on your behalf, under your direction 160 | and control, on terms that prohibit them from making any copies of 161 | your copyrighted material outside their relationship with you. 162 | 163 | Conveying under any other circumstances is permitted solely under 164 | the conditions stated below. Sublicensing is not allowed; section 10 165 | makes it unnecessary. 166 | 167 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 168 | 169 | No covered work shall be deemed part of an effective technological 170 | measure under any applicable law fulfilling obligations under article 171 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 172 | similar laws prohibiting or restricting circumvention of such 173 | measures. 174 | 175 | When you convey a covered work, you waive any legal power to forbid 176 | circumvention of technological measures to the extent such circumvention 177 | is effected by exercising rights under this License with respect to 178 | the covered work, and you disclaim any intention to limit operation or 179 | modification of the work as a means of enforcing, against the work's 180 | users, your or third parties' legal rights to forbid circumvention of 181 | technological measures. 182 | 183 | 4. Conveying Verbatim Copies. 184 | 185 | You may convey verbatim copies of the Program's source code as you 186 | receive it, in any medium, provided that you conspicuously and 187 | appropriately publish on each copy an appropriate copyright notice; 188 | keep intact all notices stating that this License and any 189 | non-permissive terms added in accord with section 7 apply to the code; 190 | keep intact all notices of the absence of any warranty; and give all 191 | recipients a copy of this License along with the Program. 192 | 193 | You may charge any price or no price for each copy that you convey, 194 | and you may offer support or warranty protection for a fee. 195 | 196 | 5. Conveying Modified Source Versions. 197 | 198 | You may convey a work based on the Program, or the modifications to 199 | produce it from the Program, in the form of source code under the 200 | terms of section 4, provided that you also meet all of these conditions: 201 | 202 | a) The work must carry prominent notices stating that you modified 203 | it, and giving a relevant date. 204 | 205 | b) The work must carry prominent notices stating that it is 206 | released under this License and any conditions added under section 207 | 7. This requirement modifies the requirement in section 4 to 208 | "keep intact all notices". 209 | 210 | c) You must license the entire work, as a whole, under this 211 | License to anyone who comes into possession of a copy. 
This 212 | License will therefore apply, along with any applicable section 7 213 | additional terms, to the whole of the work, and all its parts, 214 | regardless of how they are packaged. This License gives no 215 | permission to license the work in any other way, but it does not 216 | invalidate such permission if you have separately received it. 217 | 218 | d) If the work has interactive user interfaces, each must display 219 | Appropriate Legal Notices; however, if the Program has interactive 220 | interfaces that do not display Appropriate Legal Notices, your 221 | work need not make them do so. 222 | 223 | A compilation of a covered work with other separate and independent 224 | works, which are not by their nature extensions of the covered work, 225 | and which are not combined with it such as to form a larger program, 226 | in or on a volume of a storage or distribution medium, is called an 227 | "aggregate" if the compilation and its resulting copyright are not 228 | used to limit the access or legal rights of the compilation's users 229 | beyond what the individual works permit. Inclusion of a covered work 230 | in an aggregate does not cause this License to apply to the other 231 | parts of the aggregate. 232 | 233 | 6. Conveying Non-Source Forms. 234 | 235 | You may convey a covered work in object code form under the terms 236 | of sections 4 and 5, provided that you also convey the 237 | machine-readable Corresponding Source under the terms of this License, 238 | in one of these ways: 239 | 240 | a) Convey the object code in, or embodied in, a physical product 241 | (including a physical distribution medium), accompanied by the 242 | Corresponding Source fixed on a durable physical medium 243 | customarily used for software interchange. 244 | 245 | b) Convey the object code in, or embodied in, a physical product 246 | (including a physical distribution medium), accompanied by a 247 | written offer, valid for at least three years and valid for as 248 | long as you offer spare parts or customer support for that product 249 | model, to give anyone who possesses the object code either (1) a 250 | copy of the Corresponding Source for all the software in the 251 | product that is covered by this License, on a durable physical 252 | medium customarily used for software interchange, for a price no 253 | more than your reasonable cost of physically performing this 254 | conveying of source, or (2) access to copy the 255 | Corresponding Source from a network server at no charge. 256 | 257 | c) Convey individual copies of the object code with a copy of the 258 | written offer to provide the Corresponding Source. This 259 | alternative is allowed only occasionally and noncommercially, and 260 | only if you received the object code with such an offer, in accord 261 | with subsection 6b. 262 | 263 | d) Convey the object code by offering access from a designated 264 | place (gratis or for a charge), and offer equivalent access to the 265 | Corresponding Source in the same way through the same place at no 266 | further charge. You need not require recipients to copy the 267 | Corresponding Source along with the object code. If the place to 268 | copy the object code is a network server, the Corresponding Source 269 | may be on a different server (operated by you or a third party) 270 | that supports equivalent copying facilities, provided you maintain 271 | clear directions next to the object code saying where to find the 272 | Corresponding Source. 
Regardless of what server hosts the 273 | Corresponding Source, you remain obligated to ensure that it is 274 | available for as long as needed to satisfy these requirements. 275 | 276 | e) Convey the object code using peer-to-peer transmission, provided 277 | you inform other peers where the object code and Corresponding 278 | Source of the work are being offered to the general public at no 279 | charge under subsection 6d. 280 | 281 | A separable portion of the object code, whose source code is excluded 282 | from the Corresponding Source as a System Library, need not be 283 | included in conveying the object code work. 284 | 285 | A "User Product" is either (1) a "consumer product", which means any 286 | tangible personal property which is normally used for personal, family, 287 | or household purposes, or (2) anything designed or sold for incorporation 288 | into a dwelling. In determining whether a product is a consumer product, 289 | doubtful cases shall be resolved in favor of coverage. For a particular 290 | product received by a particular user, "normally used" refers to a 291 | typical or common use of that class of product, regardless of the status 292 | of the particular user or of the way in which the particular user 293 | actually uses, or expects or is expected to use, the product. A product 294 | is a consumer product regardless of whether the product has substantial 295 | commercial, industrial or non-consumer uses, unless such uses represent 296 | the only significant mode of use of the product. 297 | 298 | "Installation Information" for a User Product means any methods, 299 | procedures, authorization keys, or other information required to install 300 | and execute modified versions of a covered work in that User Product from 301 | a modified version of its Corresponding Source. The information must 302 | suffice to ensure that the continued functioning of the modified object 303 | code is in no case prevented or interfered with solely because 304 | modification has been made. 305 | 306 | If you convey an object code work under this section in, or with, or 307 | specifically for use in, a User Product, and the conveying occurs as 308 | part of a transaction in which the right of possession and use of the 309 | User Product is transferred to the recipient in perpetuity or for a 310 | fixed term (regardless of how the transaction is characterized), the 311 | Corresponding Source conveyed under this section must be accompanied 312 | by the Installation Information. But this requirement does not apply 313 | if neither you nor any third party retains the ability to install 314 | modified object code on the User Product (for example, the work has 315 | been installed in ROM). 316 | 317 | The requirement to provide Installation Information does not include a 318 | requirement to continue to provide support service, warranty, or updates 319 | for a work that has been modified or installed by the recipient, or for 320 | the User Product in which it has been modified or installed. Access to a 321 | network may be denied when the modification itself materially and 322 | adversely affects the operation of the network or violates the rules and 323 | protocols for communication across the network. 
324 | 325 | Corresponding Source conveyed, and Installation Information provided, 326 | in accord with this section must be in a format that is publicly 327 | documented (and with an implementation available to the public in 328 | source code form), and must require no special password or key for 329 | unpacking, reading or copying. 330 | 331 | 7. Additional Terms. 332 | 333 | "Additional permissions" are terms that supplement the terms of this 334 | License by making exceptions from one or more of its conditions. 335 | Additional permissions that are applicable to the entire Program shall 336 | be treated as though they were included in this License, to the extent 337 | that they are valid under applicable law. If additional permissions 338 | apply only to part of the Program, that part may be used separately 339 | under those permissions, but the entire Program remains governed by 340 | this License without regard to the additional permissions. 341 | 342 | When you convey a copy of a covered work, you may at your option 343 | remove any additional permissions from that copy, or from any part of 344 | it. (Additional permissions may be written to require their own 345 | removal in certain cases when you modify the work.) You may place 346 | additional permissions on material, added by you to a covered work, 347 | for which you have or can give appropriate copyright permission. 348 | 349 | Notwithstanding any other provision of this License, for material you 350 | add to a covered work, you may (if authorized by the copyright holders of 351 | that material) supplement the terms of this License with terms: 352 | 353 | a) Disclaiming warranty or limiting liability differently from the 354 | terms of sections 15 and 16 of this License; or 355 | 356 | b) Requiring preservation of specified reasonable legal notices or 357 | author attributions in that material or in the Appropriate Legal 358 | Notices displayed by works containing it; or 359 | 360 | c) Prohibiting misrepresentation of the origin of that material, or 361 | requiring that modified versions of such material be marked in 362 | reasonable ways as different from the original version; or 363 | 364 | d) Limiting the use for publicity purposes of names of licensors or 365 | authors of the material; or 366 | 367 | e) Declining to grant rights under trademark law for use of some 368 | trade names, trademarks, or service marks; or 369 | 370 | f) Requiring indemnification of licensors and authors of that 371 | material by anyone who conveys the material (or modified versions of 372 | it) with contractual assumptions of liability to the recipient, for 373 | any liability that these contractual assumptions directly impose on 374 | those licensors and authors. 375 | 376 | All other non-permissive additional terms are considered "further 377 | restrictions" within the meaning of section 10. If the Program as you 378 | received it, or any part of it, contains a notice stating that it is 379 | governed by this License along with a term that is a further 380 | restriction, you may remove that term. If a license document contains 381 | a further restriction but permits relicensing or conveying under this 382 | License, you may add to a covered work material governed by the terms 383 | of that license document, provided that the further restriction does 384 | not survive such relicensing or conveying. 
385 | 386 | If you add terms to a covered work in accord with this section, you 387 | must place, in the relevant source files, a statement of the 388 | additional terms that apply to those files, or a notice indicating 389 | where to find the applicable terms. 390 | 391 | Additional terms, permissive or non-permissive, may be stated in the 392 | form of a separately written license, or stated as exceptions; 393 | the above requirements apply either way. 394 | 395 | 8. Termination. 396 | 397 | You may not propagate or modify a covered work except as expressly 398 | provided under this License. Any attempt otherwise to propagate or 399 | modify it is void, and will automatically terminate your rights under 400 | this License (including any patent licenses granted under the third 401 | paragraph of section 11). 402 | 403 | However, if you cease all violation of this License, then your 404 | license from a particular copyright holder is reinstated (a) 405 | provisionally, unless and until the copyright holder explicitly and 406 | finally terminates your license, and (b) permanently, if the copyright 407 | holder fails to notify you of the violation by some reasonable means 408 | prior to 60 days after the cessation. 409 | 410 | Moreover, your license from a particular copyright holder is 411 | reinstated permanently if the copyright holder notifies you of the 412 | violation by some reasonable means, this is the first time you have 413 | received notice of violation of this License (for any work) from that 414 | copyright holder, and you cure the violation prior to 30 days after 415 | your receipt of the notice. 416 | 417 | Termination of your rights under this section does not terminate the 418 | licenses of parties who have received copies or rights from you under 419 | this License. If your rights have been terminated and not permanently 420 | reinstated, you do not qualify to receive new licenses for the same 421 | material under section 10. 422 | 423 | 9. Acceptance Not Required for Having Copies. 424 | 425 | You are not required to accept this License in order to receive or 426 | run a copy of the Program. Ancillary propagation of a covered work 427 | occurring solely as a consequence of using peer-to-peer transmission 428 | to receive a copy likewise does not require acceptance. However, 429 | nothing other than this License grants you permission to propagate or 430 | modify any covered work. These actions infringe copyright if you do 431 | not accept this License. Therefore, by modifying or propagating a 432 | covered work, you indicate your acceptance of this License to do so. 433 | 434 | 10. Automatic Licensing of Downstream Recipients. 435 | 436 | Each time you convey a covered work, the recipient automatically 437 | receives a license from the original licensors, to run, modify and 438 | propagate that work, subject to this License. You are not responsible 439 | for enforcing compliance by third parties with this License. 440 | 441 | An "entity transaction" is a transaction transferring control of an 442 | organization, or substantially all assets of one, or subdividing an 443 | organization, or merging organizations. 
If propagation of a covered 444 | work results from an entity transaction, each party to that 445 | transaction who receives a copy of the work also receives whatever 446 | licenses to the work the party's predecessor in interest had or could 447 | give under the previous paragraph, plus a right to possession of the 448 | Corresponding Source of the work from the predecessor in interest, if 449 | the predecessor has it or can get it with reasonable efforts. 450 | 451 | You may not impose any further restrictions on the exercise of the 452 | rights granted or affirmed under this License. For example, you may 453 | not impose a license fee, royalty, or other charge for exercise of 454 | rights granted under this License, and you may not initiate litigation 455 | (including a cross-claim or counterclaim in a lawsuit) alleging that 456 | any patent claim is infringed by making, using, selling, offering for 457 | sale, or importing the Program or any portion of it. 458 | 459 | 11. Patents. 460 | 461 | A "contributor" is a copyright holder who authorizes use under this 462 | License of the Program or a work on which the Program is based. The 463 | work thus licensed is called the contributor's "contributor version". 464 | 465 | A contributor's "essential patent claims" are all patent claims 466 | owned or controlled by the contributor, whether already acquired or 467 | hereafter acquired, that would be infringed by some manner, permitted 468 | by this License, of making, using, or selling its contributor version, 469 | but do not include claims that would be infringed only as a 470 | consequence of further modification of the contributor version. For 471 | purposes of this definition, "control" includes the right to grant 472 | patent sublicenses in a manner consistent with the requirements of 473 | this License. 474 | 475 | Each contributor grants you a non-exclusive, worldwide, royalty-free 476 | patent license under the contributor's essential patent claims, to 477 | make, use, sell, offer for sale, import and otherwise run, modify and 478 | propagate the contents of its contributor version. 479 | 480 | In the following three paragraphs, a "patent license" is any express 481 | agreement or commitment, however denominated, not to enforce a patent 482 | (such as an express permission to practice a patent or covenant not to 483 | sue for patent infringement). To "grant" such a patent license to a 484 | party means to make such an agreement or commitment not to enforce a 485 | patent against the party. 486 | 487 | If you convey a covered work, knowingly relying on a patent license, 488 | and the Corresponding Source of the work is not available for anyone 489 | to copy, free of charge and under the terms of this License, through a 490 | publicly available network server or other readily accessible means, 491 | then you must either (1) cause the Corresponding Source to be so 492 | available, or (2) arrange to deprive yourself of the benefit of the 493 | patent license for this particular work, or (3) arrange, in a manner 494 | consistent with the requirements of this License, to extend the patent 495 | license to downstream recipients. "Knowingly relying" means you have 496 | actual knowledge that, but for the patent license, your conveying the 497 | covered work in a country, or your recipient's use of the covered work 498 | in a country, would infringe one or more identifiable patents in that 499 | country that you have reason to believe are valid. 
500 | 501 | If, pursuant to or in connection with a single transaction or 502 | arrangement, you convey, or propagate by procuring conveyance of, a 503 | covered work, and grant a patent license to some of the parties 504 | receiving the covered work authorizing them to use, propagate, modify 505 | or convey a specific copy of the covered work, then the patent license 506 | you grant is automatically extended to all recipients of the covered 507 | work and works based on it. 508 | 509 | A patent license is "discriminatory" if it does not include within 510 | the scope of its coverage, prohibits the exercise of, or is 511 | conditioned on the non-exercise of one or more of the rights that are 512 | specifically granted under this License. You may not convey a covered 513 | work if you are a party to an arrangement with a third party that is 514 | in the business of distributing software, under which you make payment 515 | to the third party based on the extent of your activity of conveying 516 | the work, and under which the third party grants, to any of the 517 | parties who would receive the covered work from you, a discriminatory 518 | patent license (a) in connection with copies of the covered work 519 | conveyed by you (or copies made from those copies), or (b) primarily 520 | for and in connection with specific products or compilations that 521 | contain the covered work, unless you entered into that arrangement, 522 | or that patent license was granted, prior to 28 March 2007. 523 | 524 | Nothing in this License shall be construed as excluding or limiting 525 | any implied license or other defenses to infringement that may 526 | otherwise be available to you under applicable patent law. 527 | 528 | 12. No Surrender of Others' Freedom. 529 | 530 | If conditions are imposed on you (whether by court order, agreement or 531 | otherwise) that contradict the conditions of this License, they do not 532 | excuse you from the conditions of this License. If you cannot convey a 533 | covered work so as to satisfy simultaneously your obligations under this 534 | License and any other pertinent obligations, then as a consequence you may 535 | not convey it at all. For example, if you agree to terms that obligate you 536 | to collect a royalty for further conveying from those to whom you convey 537 | the Program, the only way you could satisfy both those terms and this 538 | License would be to refrain entirely from conveying the Program. 539 | 540 | 13. Remote Network Interaction; Use with the GNU General Public License. 541 | 542 | Notwithstanding any other provision of this License, if you modify the 543 | Program, your modified version must prominently offer all users 544 | interacting with it remotely through a computer network (if your version 545 | supports such interaction) an opportunity to receive the Corresponding 546 | Source of your version by providing access to the Corresponding Source 547 | from a network server at no charge, through some standard or customary 548 | means of facilitating copying of software. This Corresponding Source 549 | shall include the Corresponding Source for any work covered by version 3 550 | of the GNU General Public License that is incorporated pursuant to the 551 | following paragraph. 552 | 553 | Notwithstanding any other provision of this License, you have 554 | permission to link or combine any covered work with a work licensed 555 | under version 3 of the GNU General Public License into a single 556 | combined work, and to convey the resulting work. 
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time.  Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU Affero General Public License as published
    by the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU Affero General Public License for more details.

    You should have received a copy of the GNU Affero General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

  Also add information on how to contact you by electronic and paper mail.

  If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source.  For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code.  There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
--------------------------------------------------------------------------------