├── .gitignore
├── .gitmodules
├── LICENSE
├── README.md
├── Tutorial.ipynb
├── figures
│   ├── comparison
│   │   ├── figure.ipynb
│   │   └── generate_data.py
│   ├── influence
│   │   ├── figure.ipynb
│   │   └── generate_data.py
│   ├── multiscale
│   │   ├── figure.ipynb
│   │   └── generate_data.py
│   ├── reg_fail
│   │   ├── figure.ipynb
│   │   └── generate_data.py
│   ├── remeshing
│   │   ├── figure.ipynb
│   │   └── generate_data.py
│   ├── teaser
│   │   ├── figure.ipynb
│   │   └── generate_data.py
│   └── viewpoints
│       ├── figure.ipynb
│       └── generate_data.py
├── largesteps
│   ├── __init__.py
│   ├── geometry.py
│   ├── optimize.py
│   ├── parameterize.py
│   ├── pgf_custom.py
│   └── solvers.py
├── requirements.txt
├── scripts
│   ├── __init__.py
│   ├── blender_render.py
│   ├── constants.py
│   ├── geometry.py
│   ├── io_ply.py
│   ├── load_xml.py
│   ├── main.py
│   ├── preamble.py
│   └── render.py
├── setup.py
└── setup_dependencies.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | *__pycache__*
2 | scenes/*
3 | output/*
4 | *.ipynb_checkpoints*
5 |
6 |
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "botsch-kobbelt-remesher-libigl"]
2 | path = ext/botsch-kobbelt-remesher-libigl
3 | url = git@github.com:sgsellan/botsch-kobbelt-remesher-libigl.git
4 | [submodule "nvdiffrast"]
5 | path = ext/nvdiffrast
6 | url = git@github.com:NVlabs/nvdiffrast.git
7 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2021 Baptiste Nicolet, All rights reserved.
2 |
3 | Redistribution and use in source and binary forms, with or without
4 | modification, are permitted provided that the following conditions are met:
5 |
6 | 1. Redistributions of source code must retain the above copyright notice, this
7 | list of conditions and the following disclaimer.
8 |
9 | 2. Redistributions in binary form must reproduce the above copyright notice,
10 | this list of conditions and the following disclaimer in the documentation
11 | and/or other materials provided with the distribution.
12 |
13 | 3. Neither the name of the copyright holder nor the names of its contributors
14 | may be used to endorse or promote products derived from this software
15 | without specific prior written permission.
16 |
17 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
21 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
22 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
23 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
24 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
25 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
26 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27 |
28 | You are under no obligation whatsoever to provide any bug fixes, patches, or
29 | upgrades to the features, functionality or performance of the source code
30 | ("Enhancements") to anyone; however, if you choose to make your Enhancements
31 | available either publicly, or directly to the author of this software, without
32 | imposing a separate written license agreement for such Enhancements, then you
33 | hereby grant the following license: a non-exclusive, royalty-free perpetual
34 | license to install, use, modify, prepare derivative works, incorporate into
35 | other computer software, distribute, and sublicense such enhancements or
36 | derivative works thereof, in binary and source code form.
37 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Large Steps in Inverse Rendering of Geometry
2 |
13 | ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), December 2021.
14 |
15 | Baptiste Nicolet · Alec Jacobson · Wenzel Jakob
16 |
38 | ## Table of Contents
39 |
40 | - [Installation](#installation)
41 | - [Parameterization](#parameterization)
42 | - [Running the experiments](#running-the-experiments)
43 | - [Repository structure](#repository-structure)
44 | - [License](#license)
45 | - [Citation](#citation)
46 | - [Acknowledgments](#acknowledgments)
70 |
71 | ## Installation
72 |
73 | This repository contains both the operators needed to use our parameterization
74 | of mesh vertex positions and the code for the experiments we show in the paper.
76 | ### Parameterization package installation
77 |
78 | If you are only interested in using our parameterization in an existing (PyTorch
79 | based) pipeline, we have made it available to install via `pip`:
80 |
81 | ```bash
82 | pip install largesteps
83 | ```
84 |
85 | This will install the `largesteps` module, which contains only the
86 | parameterization logic implemented as a PyTorch custom operator. See the
87 | [tutorial](Tutorial.ipynb) for an example use case.
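
A quick sanity check that the package is importable (these are the same imports
used throughout this README and the experiment scripts):

```python
from largesteps.geometry import compute_matrix
from largesteps.parameterize import to_differential, from_differential
from largesteps.optimize import AdamUniform
```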
88 |
89 | ### Cloning the repository
90 |
91 | Otherwise, if you want to reproduce the experiments from the paper, you can
92 | clone this repo and install the module locally.
93 |
94 | ```bash
95 | git clone --recursive git@github.com:rgl-epfl/large-steps-pytorch.git
96 | cd large-steps-pytorch
97 | pip install .
98 | ```
99 |
100 | The experiments in this repository depend on PyTorch. Please follow the
101 | instructions on the PyTorch [website](https://pytorch.org/get-started/locally/) to install it.
102 |
103 | To install `nvdiffrast` and the Botsch-Kobbelt remesher, which are provided as
104 | submodules, please run the `setup_dependencies.sh` script.
105 |
106 | `nvdiffrast` relies on the `cudatoolkit-dev` package to compile modules at runtime.
107 | To install it with Anaconda:
108 | ```bash
109 | conda install -c conda-forge cudatoolkit-dev
110 | ```
111 |
112 | To install the other dependencies needed to run the experiments, also run:
113 | ```bash
114 | pip install -r requirements.txt
115 | ```
116 |
117 | :warning: On Linux, `nvdiffrast` requires `g++` to compile some PyTorch
118 | extensions; make sure it is your default C++ compiler:
119 |
120 | ```bash
121 | export CC=gcc CXX=g++
122 | ```
123 |
124 | Rendering the figures also requires installing
125 | [Blender](https://www.blender.org/download/). You can specify the name of the
126 | Blender executable you wish to use in `scripts/constants.py`.
127 |
128 | ### Downloading the scenes
129 |
130 | The scenes for the experiments can be downloaded
131 | [here](https://rgl.s3.eu-central-1.amazonaws.com/media/papers/Nicolet2021Large.zip).
132 | Please extract the archive at the top level of this repository.
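
For instance, from the repository root (this assumes the archive unpacks to the
scene folders that `scripts/constants.py` points to):

```bash
wget https://rgl.s3.eu-central-1.amazonaws.com/media/papers/Nicolet2021Large.zip
unzip Nicolet2021Large.zip
```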
133 |
134 | ## Parameterization
135 |
136 | In a nutshell, our parameterization can be obtained in just a few lines:
137 |
138 | ```python
139 | # Given tensors v and f containing vertex positions and faces
140 | from largesteps.geometry import compute_matrix
141 | from largesteps.parameterize import to_differential, from_differential
142 |
143 | # Compute the system matrix
144 | M = compute_matrix(v, f, lambda_=10)
145 |
146 | # Parameterize
147 | u = to_differential(M, v)
148 | ```
149 |
150 | `compute_matrix` returns the parameterization matrix **M** = **I** + λ**L**.
151 | This function takes another parameter, `alpha`, which leads to a slightly
152 | different, but equivalent, formula for the matrix: **M** = (1-α)**I** + α**L**,
153 | with α ∈ [0, 1). The two formulas coincide up to the global scale factor (1-α)
154 | when λ = α/(1-α); with the α form, the scale of **M** stays on the same order of magnitude regardless of α.
155 |
156 | ```python
157 | M = compute_matrix(v, f, alpha=0.9)
158 | ```
159 |
160 | Then, vertex coordinates can be retrieved as:
161 |
162 | ```python
163 | v = from_differential(M, u, method='Cholesky')
164 | ```
165 |
166 | In practice, this performs a cache lookup for a solver associated with the
167 | matrix **M** (instantiating one if not found) and solves the linear system
168 | **Mv** = **u**. Further calls to `from_differential` with the same
169 | matrix will use the solver stored in the cache. Since this operation is
170 | implemented as a differentiable PyTorch operation, there is nothing more to be
171 | done to optimize this parameterization.
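
Putting it together, a minimal optimization loop might look as follows. This is
only a sketch: `compute_loss` stands in for an arbitrary differentiable
objective (e.g. a rendering loss), and `v`, `f` are the tensors from above.

```python
import torch
from largesteps.geometry import compute_matrix
from largesteps.parameterize import to_differential, from_differential
from largesteps.optimize import AdamUniform

M = compute_matrix(v, f, lambda_=10)  # M = I + lambda_ * L
u = to_differential(M, v)             # differential coordinates u = Mv
u.requires_grad_()                    # optimize u instead of v

opt = AdamUniform([u], lr=3e-2)
for it in range(1000):
    # Solve Mv = u differentiably, reusing the cached solver for M
    v_opt = from_differential(M, u, method='Cholesky')
    loss = compute_loss(v_opt, f)     # placeholder objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```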
172 |
173 | ## Running the experiments
174 |
175 | You can then run the experiments in the `figures` folder. Each subfolder
176 | corresponds to a figure in the paper and contains two files:
177 | - `generate_data.py`: runs the experiment and writes its output to the
178 | directory specified in `scripts/constants.py`
179 | - `figure.ipynb`: generates the figure, assuming `generate_data.py` has been
180 | run first and its output written to that directory (see the example below)
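
For example, to reproduce the teaser figure (Fig. 1 in the list below) from the
repository root:

```bash
python figures/teaser/generate_data.py
jupyter notebook figures/teaser/figure.ipynb
```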
182 |
183 | We provide the scripts for the following figures:
184 | - Fig. 1 -> `teaser`
185 | - Fig. 3 -> `multiscale`
186 | - Fig. 5 -> `remeshing`
187 | - Fig. 6 -> `reg_fail`
188 | - Fig. 7 -> `comparison`
189 | - Fig. 8 -> `viewpoints`
190 | - Fig. 9 -> `influence`
191 |
192 | :warning: Several experiments are equal-time comparisons that were run on a
193 | Linux workstation with a Ryzen 3990X CPU and a TITAN RTX graphics card. To
194 | ensure reproducibility, we have frozen the step counts for each method in
195 | these experiments.
196 |
197 | ## Repository structure
198 |
199 | The `largesteps` folder contains the parameterization module made available via
200 | `pip`. It contains:
201 | - `geometry.py`: contains the Laplacian matrix computation.
202 | - `optimize.py`: contains the `AdamUniform` optimizer implementation.
203 | - `parameterize.py`: contains the actual parameterization code, implemented as
204 | the `to_differential` and `from_differential` functions.
205 | - `solvers.py`: contains the Cholesky and conjugate gradients solvers used to
206 | convert parameterized coordinates back to vertex coordinates.
207 |
208 | Other functions used for the experiments are included in the `scripts` folder:
209 | - `blender_render.py`: utility script to render meshes inside Blender
210 | - `constants.py`: contains paths to different useful folders (scenes, remesher, etc.)
211 | - `geometry.py`: utility geometry functions (normals computation, edge length, etc.)
212 | - `io_ply.py`: PLY mesh file loading
213 | - `load_xml.py`: XML scene file loading
214 | - `main.py`: contains the main optimization function
215 | - `preamble.py`: utility script importing the modules shared by the figure notebooks
216 | - `render.py`: contains the rendering logic, using `nvdiffrast`
217 |
218 | ## License
219 | This code is provided under a 3-clause BSD license that can be found in the
220 | LICENSE file. By using, distributing, or contributing to this project, you agree
221 | to the terms and conditions of this license.
222 |
223 |
224 | ## Citation
225 |
226 | If you use this code for academic research, please cite our method using the following BibTeX entry:
227 |
228 | ```bibtex
229 | @article{Nicolet2021Large,
230 | author = "Nicolet, Baptiste and Jacobson, Alec and Jakob, Wenzel",
231 | title = "Large Steps in Inverse Rendering of Geometry",
232 | journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
233 | volume = "40",
234 | number = "6",
235 | year = "2021",
236 | month = dec,
237 | doi = "10.1145/3478513.3480501",
238 | url = "https://rgl.epfl.ch/publications/Nicolet2021Large"
239 | }
240 | ```
241 | ## Acknowledgments
242 | The authors would like to thank Delio Vicini for early discussions about this
243 | project, Silvia Sellán for sharing her remeshing implementation and for her
244 | help with the figures, as well as Hsueh-Ti Derek Liu for his advice on making the figures.
245 | Also, thanks to Miguel Crespo for making this README template.
246 |
--------------------------------------------------------------------------------
/figures/comparison/figure.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "Comparison\n",
8 | "==========="
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "metadata": {},
15 | "outputs": [],
16 | "source": [
17 | "import sys\n",
18 | "import os\n",
19 | "sys.path.append(os.path.realpath(\"../..\"))\n",
20 | "from scripts.preamble import *\n",
21 | "recompute = True"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": null,
27 | "metadata": {},
28 | "outputs": [],
29 | "source": [
30 | "scene_info = {\n",
31 | " #name: [pretty name, viewpoint]\n",
32 | " 'suzanne': ['Suzanne', '25', 9, 0.5*60],\n",
33 | " 'bunny': ['Bunny', '25', 9, 0.75*60],\n",
34 | " 'bob': ['Bob', '64', 21, 0.5*60],\n",
35 | " 'tshirt': ['T-Shirt', '25', 14, 0.2*60],\n",
36 | " 'cranium': ['Cranium', '25', 8, 1*60],\n",
37 | " 'planck': ['Planck', '25', 11, 0.5*60]\n",
38 | " }\n",
39 | "scenes = sorted(scene_info.keys())"
40 | ]
41 | },
42 | {
43 | "cell_type": "code",
44 | "execution_count": null,
45 | "metadata": {},
46 | "outputs": [],
47 | "source": [
48 | "# Load wireframes\n",
49 | "wireframes = [[],[],[],[]]\n",
50 | "renderings = [[],[],[],[],[]]\n",
51 | "losses = [[],[],[]]\n",
52 | "res = 100\n",
53 | "for i,scene in enumerate(scenes):\n",
54 | " vp = scene_info[scene][2]\n",
55 | " collection = scene_info[scene][1]\n",
56 | " for j, method in enumerate([\"base\", \"res_lap\", \"res_bilap\", \"res_smooth\"]):\n",
57 | " filename = os.path.join(basename, scene, f\"{method}_wireframe.png\")\n",
58 | " if recompute or not os.path.exists(filename):\n",
59 | " blender_render(os.path.join(basename, scene, f\"{method}.ply\"), \n",
60 | " os.path.join(basename, scene), \n",
61 | " collection,\n",
62 | " vp,\n",
63 | " res=100,\n",
64 | " ours=(j==3),\n",
65 | " baseline=(j in [1,2]),\n",
66 | " wireframe=True)\n",
67 | " wireframes[j].append(plt.imread(filename))\n",
68 | " \n",
69 | " if j > 0:\n",
70 | " filename = os.path.join(basename, scene, f\"{method}_smooth.png\")\n",
71 | " if recompute or not os.path.exists(filename):\n",
72 | " blender_render(os.path.join(basename, scene, f\"{method}.ply\"), \n",
73 | " os.path.join(basename, scene), \n",
74 | " collection,\n",
75 | " vp,\n",
76 | " res=res,\n",
77 | " ours=(j==3),\n",
78 | " baseline=(j in [1,2]))\n",
79 | " renderings[j].append(plt.imread(filename))\n",
80 | " \n",
81 | " for j, method in enumerate([\"smooth\", \"bilap\", \"lap\"]):\n",
82 | " losses[j].append(pd.read_csv(os.path.join(basename, scene, f\"loss_{method}.csv\"), index_col=0).values)\n",
83 | " \n",
84 | " filename = os.path.join(basename, scene, \"ref_smooth.png\")\n",
85 | " if recompute or not os.path.exists(filename):\n",
86 | " blender_render(os.path.join(basename, scene, \"ref.ply\"), \n",
87 | " os.path.join(basename, scene), \n",
88 | " collection,\n",
89 | " vp,\n",
90 | " res=res)\n",
91 | " renderings[-1].append(plt.imread(filename))"
92 | ]
93 | },
94 | {
95 | "cell_type": "code",
96 | "execution_count": null,
97 | "metadata": {
98 | "scrolled": false
99 | },
100 | "outputs": [],
101 | "source": [
102 | "base_size = 3\n",
103 | "transpose = True\n",
104 | "log = True\n",
105 | "\n",
106 | "if transpose:\n",
107 | " n_cols = len(scenes)\n",
108 | " n_rows = 8\n",
109 | "else:\n",
110 | " n_rows = len(scenes)\n",
111 | " n_cols = 8\n",
112 | "\n",
113 | "crop_ratio = 1.3\n",
114 | "h,w,_ = renderings[1][0].shape\n",
115 | "aspect = w / h\n",
116 | "crop_w = int(crop_ratio*h)\n",
117 | "\n",
118 | "widths = np.ones(n_cols) * crop_w / h\n",
119 | "heights = np.ones(n_rows)\n",
120 | "plot_ratio = 2\n",
121 | "\n",
122 | "if transpose:\n",
123 | " heights[-3:] = crop_ratio/plot_ratio\n",
124 | "else:\n",
125 | " widths[-3:] = plot_ratio\n",
126 | "\n",
127 | "crops = np.zeros((len(wireframes[0]), 4), dtype=np.int32)\n",
128 | "crops[:,0] = 25000\n",
129 | "crops[:,2] = 25000\n",
130 | "\n",
131 | "# Find the closest crop of the rendering to the object\n",
132 | "for j in range(len(wireframes)):\n",
133 | " for i, wf in enumerate(wireframes[j]):\n",
134 | " nonzero = np.where(wf[..., 3] > 0)[1]\n",
135 | " crops[i,:2] = (min(crops[i,0], np.min(nonzero)), max(crops[i,1],np.max(nonzero)))\n",
136 | " nonzero = np.where(wf[..., 3] > 0)[0]\n",
137 | " crops[i,2:] = (min(crops[i,2], np.min(nonzero)), max(crops[i,3],np.max(nonzero)))\n",
138 | "crop_aspect = (crops[:,1]-crops[:,0])/(crops[:,3]-crops[:,2])\n",
139 | "\n",
140 | "actual_crops=np.zeros_like(crops)\n",
141 | "for i, crop in enumerate(crops):\n",
142 | " if crop_aspect[i] < crop_ratio:\n",
143 | " actual_crops[i,2] = crops[i,2]\n",
144 | " actual_crops[i,3] = crops[i,3]\n",
145 | " x0, x1 = crops[i,:2]\n",
146 | " actual_crops[i,0] = (x1-x0)/2*((x1+x0)/(x1-x0) - crop_ratio/crop_aspect[i])\n",
147 | " actual_crops[i,1] = x1+x0-actual_crops[i,0]\n",
148 | " else:\n",
149 | " actual_crops[i,0]=crops[i,0]\n",
150 | " actual_crops[i,1]=crops[i,1]\n",
151 | " y0, y1 = crops[i,2:]\n",
152 | " actual_crops[i,2] = (y1-y0)/2*((y1+y0)/(y1-y0) - crop_aspect[i]/crop_ratio)\n",
153 | " actual_crops[i,3] = y1+y0-actual_crops[i,2]\n",
154 | "\n",
155 | "fig = plt.figure(1, figsize=(widths.sum() * base_size, heights.sum() * base_size), constrained_layout=True)\n",
156 | "gs = fig.add_gridspec(n_rows, n_cols, wspace=0.0,hspace=0.0, height_ratios = heights, width_ratios=widths)\n",
157 | "\n",
158 | "sns.set_style(\"white\")\n",
159 | "\n",
160 | "kw = dict(ha=\"center\", va=\"center\", fontsize=fontsize, color=\"darkgrey\", rotation=45)\n",
161 | "\n",
162 | "for i, scene_name in enumerate(scenes):\n",
163 | " for j, name in enumerate([\"Initial State\", r\"Regularized ($\\mathbf{L}$)\", r\"Regularized ($\\mathbf{L}^2$)\", r\"Ours ($\\lambda=19$)\", \"Target\"]):\n",
164 | " if transpose:\n",
165 | " ax = fig.add_subplot(gs[j, i])\n",
166 | " else:\n",
167 | " ax = fig.add_subplot(gs[i, j])\n",
168 | "\n",
169 | " if j in [1,2,3]:\n",
170 | " wf = wireframes[j][i].copy()\n",
171 | " wf[wf[..., -1] == 0] = 1\n",
172 | " h,w, _ = wf.shape\n",
173 | " I,J = np.meshgrid(np.linspace(0,1,w), np.linspace(0,1,h))\n",
174 | " im = np.ones_like(wf)\n",
175 | " im[IJ] = renderings[j][i][I>J]\n",
177 | " im = im*im[..., -1][..., None] + (1-im[..., -1][..., None])\n",
178 | " if not np.any(actual_crops[i]<0):\n",
179 | " ax.imshow(im[actual_crops[i,2]:actual_crops[i,3], actual_crops[i,0]:actual_crops[i,1]],)\n",
180 | " else:\n",
181 | " ax.imshow(im[:, (w-crop_w)//2:(w+crop_w)//2],)\n",
182 | " elif j == 4:\n",
183 | " if np.any(actual_crops[i]<0):\n",
184 | " ax.imshow(renderings[j][i][:, (w-crop_w)//2:(w+crop_w)//2],)\n",
185 | " else:\n",
186 | " ax.imshow(renderings[j][i][actual_crops[i,2]:actual_crops[i,3], actual_crops[i,0]:actual_crops[i,1]],)\n",
187 | " else:\n",
188 | " if np.any(actual_crops[i]<0):\n",
189 | " ax.imshow(wireframes[j][i][:, (w-crop_w)//2:(w+crop_w)//2],)\n",
190 | " else:\n",
191 | " ax.imshow(wireframes[j][i][actual_crops[i,2]:actual_crops[i,3], actual_crops[i,0]:actual_crops[i,1]],)\n",
192 | " \n",
193 | " ax.tick_params(axis='both', which='both', labelleft=False, labelbottom=False)\n",
194 | " sns.despine(ax=ax, left=True, bottom=True)\n",
195 | " \n",
196 | " if j == 0:\n",
197 | " if transpose:\n",
198 | " ax.set_title(rf\"\\textsc{{{scene_info[scene_name][0]}}}\", y=1.1, fontsize=fontsize)\n",
199 | " else:\n",
200 | " ax.set_ylabel(scene_info[scene_name][0], x=-1, fontsize=fontsize)\n",
201 | " if i == 0:\n",
202 | " if transpose:\n",
203 | " ax.set_ylabel(name, fontsize=fontsize, x=0)\n",
204 | " else:\n",
205 | " ax.set_title(name, fontsize=fontsize, y=1.1)\n",
206 | " ax.yaxis.set_label_coords(-0.25, 0.5)\n",
207 | " \n",
208 | " with sns.axes_style('darkgrid'):\n",
209 | " for k, loss in enumerate([\"MAE\", \"Laplacian\", \"Hausdorff\"]):\n",
210 | " time = scene_info[scene_name][3]\n",
211 | " if transpose:\n",
212 | " ax = fig.add_subplot(gs[j+k+1, i])\n",
213 | " else:\n",
214 | " ax = fig.add_subplot(gs[i, j+k+1])\n",
215 | "\n",
216 | " loss_ours = losses[0][i][1::10, k]\n",
217 | " loss_reg = losses[2][i][1::10, k]\n",
218 | " loss_bi = losses[1][i][1::10, k]\n",
219 | " ax.plot(np.linspace(0,time, loss_ours.shape[0]), loss_ours, label=r\"Ours ($\\lambda=19$)\", linewidth=2)\n",
220 | " ax.plot(np.linspace(0,time, loss_reg.shape[0]), loss_reg, label=r\"Regularized $\\left(\\mathbf{L}\\right)$\", linewidth=2)\n",
221 | " ax.plot(np.linspace(0,time, loss_bi.shape[0]), loss_bi, label=r\"Regularized ($\\left(\\mathbf{L}^2\\right)$\", linewidth=2)\n",
222 | "\n",
223 | " if log:\n",
224 | " ax.set_yscale('log')\n",
225 | " else:\n",
226 | " ax.set_ylim(bottom=0)\n",
227 | " if k == 2:\n",
228 | " ax.set_xlabel(\"Time (s)\")\n",
229 | " if k==0 and i==0:\n",
230 | " ax.legend()\n",
231 | " if i == 0:\n",
232 | " if transpose:\n",
233 | " ax.set_ylabel(loss, fontsize=fontsize)\n",
234 | " else:\n",
235 | " ax.set_title(loss, y=1.1, fontsize=fontsize)\n",
236 | "\n",
237 | " ax.yaxis.set_label_coords(-0.25, 0.5)\n",
238 | "\n",
239 | "plt.savefig(os.path.join(basename, \"comparison.pdf\"), format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.005)"
240 | ]
241 | },
242 | {
243 | "cell_type": "code",
244 | "execution_count": null,
245 | "metadata": {},
246 | "outputs": [],
247 | "source": []
248 | },
249 | {
250 | "cell_type": "code",
251 | "execution_count": null,
252 | "metadata": {},
253 | "outputs": [],
254 | "source": []
255 | }
256 | ],
257 | "metadata": {
258 | "kernelspec": {
259 | "display_name": "Python 3",
260 | "language": "python",
261 | "name": "python3"
262 | },
263 | "language_info": {
264 | "codemirror_mode": {
265 | "name": "ipython",
266 | "version": 3
267 | },
268 | "file_extension": ".py",
269 | "mimetype": "text/x-python",
270 | "name": "python",
271 | "nbconvert_exporter": "python",
272 | "pygments_lexer": "ipython3",
273 | "version": "3.8.5"
274 | }
275 | },
276 | "nbformat": 4,
277 | "nbformat_minor": 4
278 | }
279 |
--------------------------------------------------------------------------------
/figures/comparison/generate_data.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | import numpy as np
4 | import pandas as pd
5 | import torch
6 | from tqdm import trange
7 |
8 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
9 | from scripts.main import optimize_shape
10 | from scripts.constants import *
11 | from scripts.io_ply import write_ply
12 | from largesteps.optimize import AdamUniform
13 |
14 | try:
15 | from igl import hausdorff
16 | except ModuleNotFoundError:
17 | print("WARNING: could not import libigl. The Hausdorff distances will not be computed. Please install libigl if you want to compute them.")
18 |
19 | output_dir = os.path.join(OUTPUT_DIR, os.path.basename(os.path.dirname(__file__)))
20 |
21 | scenes = ["suzanne", "cranium", "bob", "bunny", "tshirt", "planck"]
22 | step_sizes = [2e-3, 5e-3, 3e-3, 1e-2, 3e-3, 3e-3]
23 |
24 | # Frozen step counts for equal-time runs of all methods on our system
25 | steps_ours = [1080, 1820, 930, 1380, 370, 915]
26 | steps_baseline = [1130, 1910, 940, 1450, 390, 960]
27 | regs = [2.8, 0.21, 0.67, 3.8, 12, 3.8]
28 | regs_bi = [3.8, 0.16, 0.37, 2.1, 12, 5]
29 |
30 | params = {
31 | "boost": 3,
32 | "loss": "l1",
33 | "alpha": 0.95,
34 | }
35 |
36 | for i, scene in enumerate(scenes):
37 | filename = os.path.join(SCENES_DIR, scene, f"{scene}.xml")
38 | output = os.path.join(output_dir, scene)
39 | if not os.path.isdir(output):
40 | os.makedirs(output)
41 | for j, method in enumerate(["smooth", "lap", "bilap"]):
42 | if j == 0:
43 | params["reg"] = 0
44 | params["smooth"] = True
45 | params["optimizer"] = AdamUniform
46 | params["step_size"] = step_sizes[i]
47 | params["steps"] = steps_ours[i]
48 | else:
49 | if j==1:
50 | params["reg"] = regs[i]
51 | params["bilaplacian"] = False
52 | else:
53 | params["reg"] = regs_bi[i]
54 | params["bilaplacian"] = True
55 | params["smooth"] = False
56 | params["optimizer"] = torch.optim.Adam
57 | params["step_size"] = 1e-2
58 | params["steps"] = steps_baseline[i]
59 |
60 | torch.cuda.empty_cache()
61 | out = optimize_shape(filename, params)
62 | # Write result
63 | v = out["vert_steps"][-1] + out["tr_steps"][-1]
64 | f = out["f"][-1]
65 | write_ply(os.path.join(output, f"res_{method}.ply"), v, f)
66 |
67 | # Write base mesh, reference shape and images
68 | if j == 0:
69 | v = out["vert_steps"][0] + out["tr_steps"][0]
70 | f = out["f"][0]
71 | write_ply(os.path.join(output, f"base.ply"), v, f)
72 |
73 | # Write the reference shape
74 | write_ply(os.path.join(output, "ref.ply"), out["v_ref"], out["f_ref"])
75 |
76 | losses = np.zeros((out["losses"].shape[0], 3))
77 | losses[:,:2] = out["losses"]
78 | if "hausdorff" in dir():
79 | # Compute the hausdorff distance
80 | vb = out["v_ref"]
81 | fb = out["f_ref"]
82 | fa = out["f"][0]
83 | verts = (np.array(out["vert_steps"]) + np.array(out["tr_steps"]))[1::10]
84 | d_hausdorff = np.zeros((verts.shape[0]))
85 | for it in trange(verts.shape[0]):
86 | d_hausdorff[it] = (hausdorff(verts[it], fa, vb, fb) + hausdorff(vb, fb, verts[it], fa))
87 |
88 | losses[1::10,2] = d_hausdorff
89 |
90 | # Write the losses
91 | pd.DataFrame(data=losses, columns=["im_loss", "reg_loss", "hausdorff"]).to_csv(os.path.join(output, f"loss_{method}.csv"))
92 |
--------------------------------------------------------------------------------
/figures/influence/figure.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "Influence of lambda on the result quality\n",
8 | "================================="
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "metadata": {},
15 | "outputs": [],
16 | "source": [
17 | "import sys\n",
18 | "import os\n",
19 | "sys.path.append(os.path.realpath(\"../..\"))\n",
20 | "from scripts.preamble import *\n",
21 | "from scripts.io_ply import read_ply \n",
22 | "\n",
23 | "sns.set_style({'axes.grid' : False})\n",
24 | "recompute = True\n",
25 | "fontsize = 18"
26 | ]
27 | },
28 | {
29 | "cell_type": "code",
30 | "execution_count": null,
31 | "metadata": {},
32 | "outputs": [],
33 | "source": [
34 | "basename = os.path.join(OUTPUT_DIR, os.path.basename(os.getcwd()))\n",
35 | "alphas = pd.read_csv(os.path.join(basename, \"alphas.csv\"), index_col=0).iloc[:].values[:,0]\n",
36 | "lambdas = alphas / (1-alphas)"
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "execution_count": null,
42 | "metadata": {
43 | "scrolled": false
44 | },
45 | "outputs": [],
46 | "source": [
47 | "# Load wireframes\n",
48 | "wireframes = []\n",
49 | "for i in range(len(lambdas)):\n",
50 | " filename = os.path.join(basename, f\"res_{i:02d}_wireframe.png\")\n",
51 | " if not os.path.exists(filename) or recompute:\n",
52 | " blender_render(os.path.join(basename, f\"res_{i:02d}.ply\"), \n",
53 | " basename, \n",
54 | " 14,\n",
55 | " 6,\n",
56 | " wireframe=True)\n",
57 | " wireframes.append(plt.imread(filename))"
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "execution_count": null,
63 | "metadata": {},
64 | "outputs": [],
65 | "source": [
66 | "try:\n",
67 | " from igl import hausdorff\n",
68 | " hausdorff_distances = np.zeros(len(lambdas))\n",
69 | " mesh = read_ply(os.path.join(basename, f\"ref.ply\"))\n",
70 | " v_ref, f_ref = mesh[\"vertices\"].cpu().numpy(), mesh[\"faces\"].cpu().numpy()\n",
71 | " for i in range(len(lambdas)):\n",
72 | " mesh = read_ply(os.path.join(basename, f\"res_{i:02d}.ply\"))\n",
73 | " v, f = mesh[\"vertices\"].cpu().numpy(), mesh[\"faces\"].cpu().numpy()\n",
74 | " hausdorff_distances[i] = 0.5 * (hausdorff(v_ref, f_ref, v, f) + hausdorff(v, f, v_ref, f_ref))\n",
75 | "except ModuleNotFoundError:\n",
76 | " print(\"WARNING: could not import libigl. The Hausdorff distances will not be computed. Please install libigl if you want to compute them.\")\n",
77 | " hausdorff_distances = None"
78 | ]
79 | },
80 | {
81 | "cell_type": "code",
82 | "execution_count": null,
83 | "metadata": {
84 | "scrolled": false
85 | },
86 | "outputs": [],
87 | "source": [
88 | "base_size = 3\n",
89 | "flat = False\n",
90 | "h_dist = True\n",
91 | "if flat:\n",
92 | " n_rows = 1\n",
93 | " n_cols = diffs.shape[0]\n",
94 | "else:\n",
95 | " n_cols = 4\n",
96 | " n_rows = len(wireframes) // n_cols\n",
97 | "H,W,_ = wireframes[0].shape\n",
98 | "w = int(0.84*H)\n",
99 | "aspect = w/H\n",
100 | "fontsize = 24\n",
101 | "\n",
102 | "fig = plt.figure(1, figsize=(n_cols * base_size* aspect, n_rows * base_size*1.3), constrained_layout=True)\n",
103 | "gs = fig.add_gridspec(n_rows, n_cols, wspace=0.075,hspace=0.0)\n",
104 | "\n",
105 | "for i in range(n_rows):\n",
106 | " for j in range(n_cols):\n",
107 | " ax = fig.add_subplot(gs[i,j])\n",
108 | " ax.tick_params(\n",
109 | " axis='both',\n",
110 | " which='both',\n",
111 | " labelleft=False,\n",
112 | " labelbottom=False)\n",
113 | " \n",
114 | " idx = i*n_cols+j\n",
115 | " rnd = wireframes[idx].copy()\n",
116 | " rnd[rnd[..., -1] == 0] = 1\n",
117 | " rnd = rnd*rnd[..., -1][..., None] + (1-rnd[..., -1][..., None]) # Alpha\n",
118 | " rnd = rnd[..., :3]\n",
119 | " \n",
120 | " im = ax.imshow(rnd[:, (W-w)//2:(W+w)//2],)\n",
121 | " if hausdorff_distances is not None:\n",
122 | " ax.set_xlabel(f\"$H$={hausdorff_distances[idx]:.3e}\", fontsize=2*fontsize//3)\n",
123 | " if flat:\n",
124 | " ax.set_title(fr\"${lambdas[idx]:.1f}$\", y=-0.3, fontsize=fontsize)\n",
125 | " else:\n",
126 | " ax.set_title(fr\"${lambdas[idx]:.1f}$\", y=-0.3, fontsize=fontsize)\n",
127 | "\n",
128 | "if flat:\n",
129 | " arrow = matplotlib.patches.FancyArrowPatch(\n",
130 | " (0.05,-0.05), (0.95,-0.05), transform=fig.transFigure,fc='black', mutation_scale = 40.)\n",
131 | " fig.patches.append(arrow)\n",
132 | " fig.suptitle(r\"$\\lambda$\", y=-0.1, fontsize=fontsize)\n",
133 | "else:\n",
134 | " arrow = matplotlib.patches.FancyArrowPatch(\n",
135 | " (0.05,0.48), (0.95,0.48), transform=fig.transFigure,fc='black', mutation_scale = 40.)\n",
136 | " fig.patches.append(arrow)\n",
137 | " plt.figtext(0.5,0.45, r\"$\\lambda$\", ha=\"center\", va=\"top\", fontsize=fontsize)\n",
138 | " arrow = matplotlib.patches.FancyArrowPatch(\n",
139 | " (0.05,-0.03), (0.95,-0.03), transform=fig.transFigure,fc='black', mutation_scale = 40.)\n",
140 | " fig.patches.append(arrow)\n",
141 | " fig.suptitle(r\"$\\lambda$\", y=-0.06, fontsize=fontsize)\n",
142 | "\n",
143 | "plt.savefig(os.path.join(basename, \"influence.pdf\"), format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.005)"
144 | ]
145 | },
146 | {
147 | "cell_type": "code",
148 | "execution_count": null,
149 | "metadata": {},
150 | "outputs": [],
151 | "source": []
152 | },
153 | {
154 | "cell_type": "code",
155 | "execution_count": null,
156 | "metadata": {},
157 | "outputs": [],
158 | "source": []
159 | },
160 | {
161 | "cell_type": "code",
162 | "execution_count": null,
163 | "metadata": {},
164 | "outputs": [],
165 | "source": []
166 | }
167 | ],
168 | "metadata": {
169 | "kernelspec": {
170 | "display_name": "Python 3",
171 | "language": "python",
172 | "name": "python3"
173 | },
174 | "language_info": {
175 | "codemirror_mode": {
176 | "name": "ipython",
177 | "version": 3
178 | },
179 | "file_extension": ".py",
180 | "mimetype": "text/x-python",
181 | "name": "python",
182 | "nbconvert_exporter": "python",
183 | "pygments_lexer": "ipython3",
184 | "version": "3.8.5"
185 | }
186 | },
187 | "nbformat": 4,
188 | "nbformat_minor": 4
189 | }
190 |
--------------------------------------------------------------------------------
/figures/influence/generate_data.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | import numpy as np
4 | import pandas as pd
5 |
6 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
7 | from scripts.main import *
8 | from scripts.constants import *
9 | from scripts.io_ply import write_ply
10 | sys.path.append(REMESH_DIR)
11 |
12 | folder = SCENES_DIR
13 | scene_name = "suzanne"
14 | filename = os.path.join(folder, scene_name, scene_name + ".xml")
15 |
16 | output_dir = os.path.join(OUTPUT_DIR, os.path.basename(os.path.dirname(__file__)))
17 | if not os.path.isdir(output_dir):
18 | os.makedirs(output_dir)
19 |
20 | params = {
21 | "steps": 4300,
22 | "loss": "l1",
23 | "boost" : 3,
24 | "step_size": 1e-3,
25 | "optimizer": AdamUniform
26 | }
27 |
28 | alphas = [0.0, 0.25, 0.5, 0.75, 0.95, 0.98, 0.99, 0.999]
29 |
30 | df = pd.DataFrame(data={"alpha": alphas})
31 | df.to_csv(os.path.join(output_dir, "alphas.csv"))
32 | for i, alpha in enumerate(alphas):
33 | params["alpha"] = alpha
34 |
35 | out = optimize_shape(filename, params)
36 | v = out["vert_steps"][-1] + out["tr_steps"][-1]
37 | f = out["f"][-1]
38 | # Write the resulting mesh
39 | write_ply(os.path.join(output_dir, f"res_{i:02d}.ply"), v, f)
40 |
41 | # Write target shape
42 | write_ply(os.path.join(output_dir, "ref.ply"), out["v_ref"], out["f_ref"])
43 |
--------------------------------------------------------------------------------
/figures/multiscale/figure.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "Multiscale optimization\n",
8 | "==================="
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "metadata": {},
15 | "outputs": [],
16 | "source": [
17 | "import sys\n",
18 | "import os\n",
19 | "sys.path.append(os.path.realpath(\"../..\"))\n",
20 | "from scripts.preamble import *\n",
21 | "recompute = True"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": null,
27 | "metadata": {
28 | "scrolled": false
29 | },
30 | "outputs": [],
31 | "source": [
32 | "remesh_steps = pd.read_csv(os.path.join(basename, \"remesh_steps.csv\"), index_col=0).values[:,0]"
33 | ]
34 | },
35 | {
36 | "cell_type": "code",
37 | "execution_count": null,
38 | "metadata": {
39 | "scrolled": false
40 | },
41 | "outputs": [],
42 | "source": [
43 | "# Load wireframes\n",
44 | "wireframes = []\n",
45 | "renderings = []\n",
46 | "vp = 6\n",
47 | "collection = 25\n",
48 | "res = 100\n",
49 | "for i in range(len(remesh_steps)):\n",
50 | " filename = os.path.join(basename, f\"res_{i:02d}_wireframe.png\")\n",
51 | " if not os.path.exists(filename) or recompute:\n",
52 | " blender_render(os.path.join(basename, f\"res_{i:02d}.ply\"), \n",
53 | " basename, \n",
54 | " collection,\n",
55 | " vp,\n",
56 | " res=res,\n",
57 | " ours=True,\n",
58 | " wireframe=True,\n",
59 | " t=0.006)\n",
60 | " wireframes.append(plt.imread(filename))"
61 | ]
62 | },
63 | {
64 | "cell_type": "code",
65 | "execution_count": null,
66 | "metadata": {},
67 | "outputs": [],
68 | "source": [
69 | "fontsize = 36\n",
70 | "sns.set_style('white')\n",
71 | "base_size = 3\n",
72 | "n_rows = 1\n",
73 | "n_cols = len(wireframes)\n",
74 | "h,w,_ = wireframes[0].shape\n",
75 | "crops = np.zeros((len(wireframes), 2), dtype=np.int32)\n",
76 | "for i, wf in enumerate(wireframes):\n",
77 | " nonzero = np.where(wf[..., 3] > 0)[1]\n",
78 | " crops[i] = (np.min(nonzero), np.max(nonzero))\n",
79 | "\n",
80 | "widths = (crops[:,1] - crops[:,0]) / h\n",
81 | "total_w = np.sum(widths)\n",
82 | "\n",
83 | "fig = plt.figure(1, figsize=(total_w*base_size, n_rows * base_size), constrained_layout=True)\n",
84 | "gs = fig.add_gridspec(n_rows, n_cols, wspace=0.0,hspace=0.0, width_ratios=widths)\n",
85 | "\n",
86 | "for j in range(n_cols):\n",
87 | " ax = fig.add_subplot(gs[j])\n",
88 | "\n",
89 | " im = ax.imshow(wireframes[j][:,crops[j,0]:crops[j,1]],)\n",
90 | "\n",
91 | " ax.tick_params(\n",
92 | " axis='both',\n",
93 | " which='both',\n",
94 | " labelleft=False,\n",
95 | " labelbottom=False)\n",
96 | " sns.despine(ax=ax, left=True, bottom=True)\n",
97 | "\n",
98 | "arrow = matplotlib.patches.FancyArrowPatch(\n",
99 | " (0.05,0), (0.95,0), transform=fig.transFigure, fc='black', mutation_scale = 40.)\n",
100 | "fig.patches.append(arrow)\n",
101 | "\n",
102 | "fig.suptitle('Time', y=-0.05, fontsize=fontsize)\n",
103 | "plt.savefig(os.path.join(basename, \"multiscale.pdf\"), format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.005)"
104 | ]
105 | },
106 | {
107 | "cell_type": "code",
108 | "execution_count": null,
109 | "metadata": {},
110 | "outputs": [],
111 | "source": []
112 | }
113 | ],
114 | "metadata": {
115 | "kernelspec": {
116 | "display_name": "Python 3",
117 | "language": "python",
118 | "name": "python3"
119 | },
120 | "language_info": {
121 | "codemirror_mode": {
122 | "name": "ipython",
123 | "version": 3
124 | },
125 | "file_extension": ".py",
126 | "mimetype": "text/x-python",
127 | "name": "python",
128 | "nbconvert_exporter": "python",
129 | "pygments_lexer": "ipython3",
130 | "version": "3.8.5"
131 | }
132 | },
133 | "nbformat": 4,
134 | "nbformat_minor": 4
135 | }
136 |
--------------------------------------------------------------------------------
/figures/multiscale/generate_data.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | import pandas as pd
4 |
5 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
6 | from scripts.main import optimize_shape
7 | from scripts.constants import *
8 | from scripts.io_ply import write_ply
9 |
10 | output_dir = os.path.join(OUTPUT_DIR, os.path.basename(os.path.dirname(__file__)))
11 | if not os.path.isdir(output_dir):
12 | os.makedirs(output_dir)
13 |
14 | folder = SCENES_DIR
15 | scene = "dragon"
16 | filename = os.path.join(folder, scene, f"{scene}.xml")
17 | remesh_steps = [500, 1500, 3000, 4500, 7000, 10000, 12000, 14000]
18 |
19 | params = {
20 | "steps": 16000,
21 | "step_size" : 1e-1,
22 | "loss": "l1",
23 | "boost" : 3,
24 | "lambda": 19,
25 | "remesh": remesh_steps.copy(),
26 | }
27 |
28 | out = optimize_shape(filename, params)
29 |
30 | all_steps = [0, *remesh_steps, -1]
31 | N = len(all_steps) - 2
32 | pd.DataFrame(data=all_steps).to_csv(os.path.join(output_dir, "remesh_steps.csv"))
33 | for i, step in enumerate(all_steps):
34 | v = out["vert_steps"][step] + out["tr_steps"][step]
35 | f = out["f"][min(i, N)]
36 | write_ply(os.path.join(output_dir, f"res_{i:02d}.ply"), v, f)
37 |
38 | v = out["vert_steps"][-1] + out["tr_steps"][-1]
39 | f = out["f"][-1]
40 | write_ply(os.path.join(output_dir, f"res_{i+1:02d}.ply"), v, f)
41 |
--------------------------------------------------------------------------------
/figures/reg_fail/figure.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "Failure case of regularization\n",
8 | "========================"
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "metadata": {},
15 | "outputs": [],
16 | "source": [
17 | "import sys\n",
18 | "import os\n",
19 | "sys.path.append(os.path.realpath(\"../..\"))\n",
20 | "from scripts.preamble import *\n",
21 | "\n",
22 | "sns.set()\n",
23 | "sns.set(font_scale=1.1)\n",
24 | "recompute = True\n",
25 | "fontsize = 18\n",
26 | "log = False"
27 | ]
28 | },
29 | {
30 | "cell_type": "code",
31 | "execution_count": null,
32 | "metadata": {
33 | "scrolled": false
34 | },
35 | "outputs": [],
36 | "source": [
37 | "weights = pd.read_csv(os.path.join(basename, \"weights.csv\"), index_col=0)[\"weight\"].values"
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "execution_count": null,
43 | "metadata": {},
44 | "outputs": [],
45 | "source": [
46 | "reg_losses = []\n",
47 | "for i in range(len(weights)):\n",
48 | " reg_losses.append(pd.read_csv(os.path.join(basename, f\"loss_reg_{i}.csv\"), index_col=0))"
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": null,
54 | "metadata": {},
55 | "outputs": [],
56 | "source": [
57 | "smooth_loss = pd.read_csv(os.path.join(basename, \"loss_smooth.csv\"), index_col=0)"
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "execution_count": null,
63 | "metadata": {},
64 | "outputs": [],
65 | "source": [
66 | "save_steps = pd.read_csv(os.path.join(basename, \"save_steps.csv\"), index_col=0)[\"save_steps\"].values"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": null,
72 | "metadata": {},
73 | "outputs": [],
74 | "source": [
75 | "names = [\"smooth\", *[f\"reg_{i}\"for i in range(len(weights))]]\n",
76 | "imgs = []\n",
77 | "\n",
78 | "for i, name in enumerate(names):\n",
79 | " wireframes = []\n",
80 | " for j in save_steps:\n",
81 | " img_name = os.path.join(basename, f\"{name}_{j:05d}_wireframe.png\")\n",
82 | " obj_name = os.path.join(basename, f\"{name}_{j:05d}.ply\")\n",
83 | " if not os.path.exists(img_name) or recompute:\n",
84 | " blender_render(obj_name, \n",
85 | " basename, \n",
86 | " 14,\n",
87 | " 14,\n",
88 | " res=100,\n",
89 | " wireframe=True,\n",
90 | " ours=(i==0),\n",
91 | " baseline=(i!=0))\n",
92 | " wireframes.append(plt.imread(img_name))\n",
93 | " imgs.append(wireframes)"
94 | ]
95 | },
96 | {
97 | "cell_type": "code",
98 | "execution_count": null,
99 | "metadata": {},
100 | "outputs": [],
101 | "source": [
102 | "save_steps[-1] = 25000"
103 | ]
104 | },
105 | {
106 | "cell_type": "code",
107 | "execution_count": null,
108 | "metadata": {
109 | "scrolled": false
110 | },
111 | "outputs": [],
112 | "source": [
113 | "base_size = 3\n",
114 | "n_rows = len(reg_losses)+1\n",
115 | "n_cols = len(save_steps)+1\n",
116 | "aspect = 1\n",
117 | "fontsize = 24\n",
118 | "h,w,_ = imgs[0][0].shape\n",
119 | "widths = np.ones(n_cols)\n",
120 | "widths[-1] = 1.5\n",
121 | "\n",
122 | "fig = plt.figure(1, figsize=(n_cols * base_size* aspect, n_rows * base_size*1), constrained_layout=True)\n",
123 | "gs = fig.add_gridspec(n_rows, n_cols, wspace=0.075,hspace=0.0, width_ratios=widths)\n",
124 | "\n",
125 | "\n",
126 | "with sns.axes_style('white'):\n",
127 | " for j, step in enumerate(save_steps):\n",
128 | " ax = fig.add_subplot(gs[0, j])\n",
129 | " ax.imshow(imgs[0][j][:, (w-h)//2:(h-w)//2],)\n",
130 | " ax.axes.get_xaxis().set_ticklabels([])\n",
131 | " ax.axes.get_yaxis().set_ticklabels([])\n",
132 | " ax.axes.get_xaxis().set_ticks([])\n",
133 | " ax.axes.get_yaxis().set_ticks([])\n",
134 | " sns.despine(ax=ax, left=True, bottom=True)\n",
135 | " if j == 0:\n",
136 | " ax.set_ylabel(rf\"Ours ($\\lambda$=99)\", fontsize=fontsize)\n",
137 | "\n",
138 | "with sns.axes_style('darkgrid'):\n",
139 | " ax = fig.add_subplot(gs[0, -1])\n",
140 | " ax.plot(1e2*smooth_loss[\"reg_loss\"].values, label=r\"Laplacian loss ($\\times 10^2$)\")\n",
141 | " ax.plot(smooth_loss[\"im_loss\"].values, label=\"Rendering loss\")\n",
142 | " if log:\n",
143 | " ax.set_yscale('log')\n",
144 | " ax.set_ylabel(\"Loss\", fontsize=2*fontsize//3)\n",
145 | " ax.set_xlabel(\"Steps\", fontsize=2*fontsize//3)\n",
146 | " ax.legend(fontsize=2*fontsize//3)\n",
147 | " ax.set_yscale('log')\n",
148 | " ax.set_ylim((5e-6,1))\n",
149 | "for i in range(len(reg_losses)):\n",
150 | " with sns.axes_style('white'):\n",
151 | " for j, step in enumerate(save_steps):\n",
152 | " ax = fig.add_subplot(gs[i+1, j])\n",
153 | " ax.imshow(imgs[i+1][j][:, (w-h)//2:(h-w)//2],)\n",
154 | " ax.axes.get_xaxis().set_ticklabels([])\n",
155 | " ax.axes.get_yaxis().set_ticklabels([])\n",
156 | " ax.axes.get_xaxis().set_ticks([])\n",
157 | " ax.axes.get_yaxis().set_ticks([])\n",
158 | " sns.despine(ax=ax, left=True, bottom=True)\n",
159 | " if i == len(reg_losses)-1:\n",
160 | " ax.set_title(fr\"${save_steps[j]}$\", y=-0.5, fontsize=fontsize)\n",
161 | " if j == 0:\n",
162 | " ax.set_ylabel(rf\"$\\lambda = {int(weights[i])}$\", fontsize=fontsize)\n",
163 | " \n",
164 | " with sns.axes_style('darkgrid'):\n",
165 | " ax = fig.add_subplot(gs[i+1, -1])\n",
166 | " ax.plot(1e2*reg_losses[i][\"reg_loss\"].values, label=r\"Laplacian loss ($\\times 10^2$)\")\n",
167 | " ax.plot(reg_losses[i][\"im_loss\"].values, label=\"Rendering loss\")\n",
168 | " ax.set_yscale('log')\n",
169 | " if log:\n",
170 | " ax.set_yscale('log')\n",
171 | " ax.set_ylabel(\"Loss\", fontsize=2*fontsize//3)\n",
172 | " ax.set_xlabel(\"Steps\", fontsize=2*fontsize//3)\n",
173 | " ax.set_ylim((5e-6,1))\n",
174 | " if i == len(reg_losses)-1:\n",
175 | " ax.set_title(\"Loss\", y=-0.5, fontsize=fontsize)\n",
176 | "\n",
177 | "arrow = matplotlib.patches.FancyArrowPatch(\n",
178 | " (0.05,-0.02), (0.75,-0.02), transform=fig.transFigure,fc='black', mutation_scale = 40.)\n",
179 | "fig.patches.append(arrow)\n",
180 | "\n",
181 | "fig.suptitle(\"Iteration\", fontsize=fontsize, y=-0.05, x=0.4)\n",
182 | "plt.savefig(os.path.join(basename, \"reg_fail.pdf\"), format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.05)"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": null,
188 | "metadata": {},
189 | "outputs": [],
190 | "source": []
191 | },
192 | {
193 | "cell_type": "code",
194 | "execution_count": null,
195 | "metadata": {},
196 | "outputs": [],
197 | "source": []
198 | }
199 | ],
200 | "metadata": {
201 | "kernelspec": {
202 | "display_name": "Python 3",
203 | "language": "python",
204 | "name": "python3"
205 | },
206 | "language_info": {
207 | "codemirror_mode": {
208 | "name": "ipython",
209 | "version": 3
210 | },
211 | "file_extension": ".py",
212 | "mimetype": "text/x-python",
213 | "name": "python",
214 | "nbconvert_exporter": "python",
215 | "pygments_lexer": "ipython3",
216 | "version": "3.8.5"
217 | }
218 | },
219 | "nbformat": 4,
220 | "nbformat_minor": 4
221 | }
222 |
--------------------------------------------------------------------------------
/figures/reg_fail/generate_data.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | import pandas as pd
4 |
5 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
6 | from scripts.main import optimize_shape
7 | from scripts.constants import *
8 | from scripts.io_ply import write_ply
9 | from largesteps.optimize import AdamUniform
10 | from torch.optim import Adam
11 |
12 | output_dir = os.path.join(OUTPUT_DIR, os.path.basename(os.path.dirname(__file__)))
13 | if not os.path.isdir(output_dir):
14 | os.makedirs(output_dir)
15 |
16 | scene_name = "reg_fail"
17 | filename = os.path.join(SCENES_DIR, scene_name, scene_name + ".xml")
18 |
19 | reg_weights = [1, 400, 10000]
20 | df = pd.DataFrame(data={"weight":reg_weights})
21 | df.to_csv(os.path.join(output_dir, f"weights.csv"))
22 |
23 | save_steps = [0, 500, 2500, 7500, 15000, 25000]
24 | df = pd.DataFrame(data={"save_steps": save_steps})
25 | df.to_csv(os.path.join(output_dir, f"save_steps.csv"))
26 |
27 | params = {
28 | "steps" : 25001,
29 | "step_size" : 5e-3,
30 | "shading": False,
31 | "boost" : 3,
32 | "smooth" : True,
33 | "lambda": 99,
34 | "loss": "l2",
35 | "use_tr": False,
36 | "optimizer": AdamUniform,
37 | "reg": 0
38 | }
39 |
40 | # Optimize with our method
41 | out = optimize_shape(filename, params)
42 | for step in save_steps:
43 | v = out["vert_steps"][step] + out["tr_steps"][step]
44 | f = out["f"][-1]
45 | write_ply(os.path.join(output_dir, f"smooth_{step:05d}.ply"), v, f)
46 |
47 | # Write out the loss
48 | data = {
49 | 'im_loss': out["losses"][:,0],
50 | 'reg_loss': out["losses"][:,1]
51 | }
52 | df = pd.DataFrame(data=data)
53 | df.to_csv(os.path.join(output_dir, f"loss_smooth.csv"))
54 |
55 | params["smooth"] = False
56 | params["optimizer"] = Adam
57 | for i, w in enumerate(reg_weights):
58 | params["reg"] = w
59 | # Optimize with regularization
60 | out = optimize_shape(filename, params)
61 | for step in save_steps:
62 | v = out["vert_steps"][step] + out["tr_steps"][step]
63 | f = out["f"][-1]
64 | write_ply(os.path.join(output_dir, f"reg_{i}_{step:05d}.ply"), v, f)
65 |
66 | data = {
67 | 'im_loss': out["losses"][:,0],
68 | 'reg_loss': out["losses"][:,1]
69 | }
70 | df = pd.DataFrame(data=data)
71 | df.to_csv(os.path.join(output_dir, f"loss_reg_{i}.csv"))
72 |
--------------------------------------------------------------------------------
/figures/remeshing/figure.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "Influence of remeshing on the result quality\n",
8 | "===================================="
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "metadata": {},
15 | "outputs": [],
16 | "source": [
17 | "import sys\n",
18 | "import os\n",
19 | "sys.path.append(os.path.realpath(\"../..\"))\n",
20 | "from scripts.preamble import *\n",
21 | "from scripts.io_ply import read_ply, write_ply\n",
22 | "from scripts.geometry import massmatrix_voronoi\n",
23 | "\n",
24 | "fontsize = 18\n",
25 | "recompute = False"
26 | ]
27 | },
28 | {
29 | "cell_type": "code",
30 | "execution_count": null,
31 | "metadata": {},
32 | "outputs": [],
33 | "source": [
34 | "# Compute voronoi cell areas\n",
35 | "masses = []\n",
36 | "for i, method in enumerate([\"reg\", \"base\", \"remesh_start\", \"remesh_middle\"]):\n",
37 | " mesh = read_ply(os.path.join(basename, f\"res_{method}.ply\"))\n",
38 | " v, f = mesh[\"vertices\"], mesh[\"faces\"].long()\n",
39 | " masses.append(massmatrix_voronoi(v, f).cpu().numpy())\n",
40 | "\n",
41 | "vmax = max([np.max(M) for M in masses])\n",
42 | "vmin = min([np.min(M) for M in masses])\n",
43 | "norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)\n",
44 | "\n",
45 | "for i, method in enumerate([\"reg\", \"base\", \"remesh_start\", \"remesh_middle\"]):\n",
46 | " mesh = read_ply(os.path.join(basename, f\"res_{method}.ply\"))\n",
47 | " v, f = mesh[\"vertices\"].cpu().numpy(), mesh[\"faces\"].cpu().numpy()\n",
48 | " c = matplotlib.cm.magma_r(norm(masses[i]))[:,:3]\n",
49 | " write_ply(os.path.join(basename, f\"res_{method}.ply\"), v, f, vc=c, ascii=True)"
50 | ]
51 | },
52 | {
53 | "cell_type": "code",
54 | "execution_count": null,
55 | "metadata": {
56 | "scrolled": false
57 | },
58 | "outputs": [],
59 | "source": [
60 | "# Load wireframes\n",
61 | "wireframes = []\n",
62 | "res = 100\n",
63 | "for i, method in enumerate([\"reg\", \"base\", \"remesh_start\", \"remesh_middle\"]):\n",
64 | " filename = os.path.join(basename, f\"res_{method}_wireframe.png\")\n",
65 | " if not os.path.exists(filename) or recompute:\n",
66 | " blender_render(os.path.join(basename, f\"res_{method}.ply\"), \n",
67 | " basename, \n",
68 | " 25,\n",
69 | " 8,\n",
70 | " res=res,\n",
71 | " wireframe=True,\n",
72 | " area = True)\n",
73 | " wireframes.append(plt.imread(filename))"
74 | ]
75 | },
76 | {
77 | "cell_type": "code",
78 | "execution_count": null,
79 | "metadata": {
80 | "scrolled": false
81 | },
82 | "outputs": [],
83 | "source": [
84 | "base_size = 3\n",
85 | "n_rows = 2\n",
86 | "n_cols = 2\n",
87 | "H,W,_ = wireframes[0].shape\n",
88 | "w = H\n",
89 | "h = H-150\n",
90 | "aspect = w/h\n",
91 | "inset_w = 0.2*w\n",
92 | "fig = plt.figure(1, figsize=(n_cols * base_size* aspect, n_rows * base_size), constrained_layout=True)\n",
93 | "gs = fig.add_gridspec(n_rows, n_cols, wspace=0.07,hspace=0.0)\n",
94 | "\n",
95 | "titles = [\"(a) Regularized\", \"(b) Ours, no remeshing\", \"(c) Ours, denser initial shape\", \"(d) Ours, with remeshing\"]\n",
96 | "\n",
97 | "sns.set_style('white')\n",
98 | "for i in range(n_rows):\n",
99 | " for j in range(n_cols):\n",
100 | " idx = i*n_rows + j\n",
101 | " ax = fig.add_subplot(gs[i,j])\n",
102 | " ax.tick_params(\n",
103 | " axis='both',\n",
104 | " which='both',\n",
105 | " labelleft=False,\n",
106 | " labelbottom=False)\n",
107 | " im_cropped = np.flip(wireframes[idx][150:, (W-w)//2:(W+w)//2], axis=0)\n",
108 | " ax.imshow(im_cropped, origin='lower',)\n",
109 | " axins = ax.inset_axes([-0.3, 0, 0.5, 0.5])\n",
110 | "\n",
111 | " axins.imshow(im_cropped, origin='lower',)\n",
112 | " axins.tick_params(axis='both', which='both', labelleft=False, labelbottom=False)\n",
113 | " x0 = int(w*0.3)\n",
114 | " y0 = int(0.21*H)\n",
115 | " x1, x2, y1, y2 = (x0, x0+inset_w, y0, y0+inset_w)\n",
116 | " axins.set_xlim(x1, x2)\n",
117 | " axins.set_ylim(y1, y2)\n",
118 | " ax.indicate_inset_zoom(axins, edgecolor=\"black\", alpha=1)\n",
119 | " sns.despine(ax=ax, left=True, bottom=True)\n",
120 | " ax.set_xlabel(titles[idx], fontsize=fontsize)\n",
121 | "\n",
122 | "cbar = fig.colorbar(matplotlib.cm.ScalarMappable(norm=norm, cmap=matplotlib.cm.magma_r),\n",
123 | " ax=fig.axes, orientation=\"horizontal\", shrink=0.8)\n",
124 | "cbar.ax.set_xlabel(\"Voronoi cell area\", fontsize=fontsize)\n",
125 | "cbar.ax.xaxis.set_ticks_position('top')\n",
126 | "\n",
127 | "plt.savefig(os.path.join(basename, \"remeshing.pdf\"), format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.03)"
128 | ]
129 | },
130 | {
131 | "cell_type": "code",
132 | "execution_count": null,
133 | "metadata": {},
134 | "outputs": [],
135 | "source": []
136 | }
137 | ],
138 | "metadata": {
139 | "kernelspec": {
140 | "display_name": "Python 3",
141 | "language": "python",
142 | "name": "python3"
143 | },
144 | "language_info": {
145 | "codemirror_mode": {
146 | "name": "ipython",
147 | "version": 3
148 | },
149 | "file_extension": ".py",
150 | "mimetype": "text/x-python",
151 | "name": "python",
152 | "nbconvert_exporter": "python",
153 | "pygments_lexer": "ipython3",
154 | "version": "3.8.5"
155 | }
156 | },
157 | "nbformat": 4,
158 | "nbformat_minor": 4
159 | }
160 |
--------------------------------------------------------------------------------
/figures/remeshing/generate_data.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 |
4 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
5 | from scripts.main import optimize_shape
6 | from scripts.constants import *
7 | from scripts.io_ply import write_ply
8 | from largesteps.optimize import AdamUniform
9 | from torch.optim import Adam
10 |
11 | output_dir = os.path.join(OUTPUT_DIR, os.path.basename(os.path.dirname(__file__)))
12 | if not os.path.isdir(output_dir):
13 | os.makedirs(output_dir)
14 |
15 | folder = SCENES_DIR
16 | scene_name = "cranium"
17 | filename = os.path.join(folder, scene_name, scene_name + ".xml")
18 |
19 | params = {
20 | "boost" : 3,
21 | "step_size": 1e-2,
22 | "loss": "l1",
23 | "smooth" : True,
24 | "alpha": 0.95,
25 | }
26 |
27 | remesh = [-1, -1, 750, 0]
28 | steps = [1890, 1800, 1630, 1500]
29 | masses = []
30 | verts = []
31 | faces = []
32 | for i, method in enumerate(["reg", "base", "remesh_middle", "remesh_start"]):
33 | if method == "reg":
34 | params["smooth"] = False
35 | params["optimizer"] = Adam
36 | params["reg"] = 0.16
37 | else:
38 | params["smooth"] = True
39 | params["optimizer"] = AdamUniform
40 | params["reg"] = 0
41 |
42 | params["steps"] = steps[i]
43 | params["remesh"] = remesh[i]
44 | out = optimize_shape(filename, params)
45 | v = out["vert_steps"][-1] + out["tr_steps"][-1]
46 | f = out["f"][-1]
47 | write_ply(os.path.join(output_dir, f"res_{method}.ply"), v, f)
48 |
--------------------------------------------------------------------------------
/figures/teaser/figure.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "Teaser\n",
8 | "==========="
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "metadata": {},
15 | "outputs": [],
16 | "source": [
17 | "import sys\n",
18 | "import os\n",
19 | "sys.path.append(os.path.realpath(\"../..\"))\n",
20 | "from scripts.preamble import *\n",
21 | "recompute = False\n"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": null,
27 | "metadata": {},
28 | "outputs": [],
29 | "source": [
30 | "# Load wireframes\n",
31 | "wireframes = []\n",
32 | "renderings = []\n",
33 | "res = 400\n",
34 | "collection = 25\n",
35 | "vp = 11\n",
36 | "scene = \"nefertiti\"\n",
37 | "\n",
38 | "for j, method in enumerate([\"base\", \"naive\", \"reg\", \"ours\", \"remesh\"]):\n",
39 | " # Render with wireframe\n",
40 | " filename = os.path.join(basename, f\"res_{method}_wireframe.png\")\n",
41 | " if recompute or not os.path.exists(filename):\n",
42 | " blender_render(os.path.join(basename, f\"res_{method}.ply\"), \n",
43 | " basename, \n",
44 | " collection,\n",
45 | " vp,\n",
46 | " res=res,\n",
47 | " ours=(j>2),\n",
48 | " baseline=(j in [1,2]),\n",
49 | " wireframe=True,\n",
50 | " t=0.004)\n",
51 | " wireframes.append(plt.imread(filename))\n",
52 | " # Render without wireframe\n",
53 | " if j>0:\n",
54 | " filename = os.path.join(basename, f\"res_{method}_smooth.png\")\n",
55 | " if recompute or not os.path.exists(filename):\n",
56 | " blender_render(os.path.join(basename, f\"res_{method}.ply\"), \n",
57 | " basename, \n",
58 | " collection,\n",
59 | " vp,\n",
60 | " res=res,\n",
61 | " ours=(j>2),\n",
62 | " baseline=(j in [1,2]),\n",
63 | " wireframe=False,\n",
64 | " t=0.004)\n",
65 | " renderings.append(plt.imread(filename))\n",
66 | "\n",
67 | "filename = os.path.join(basename, \"ref_smooth.png\")\n",
68 | "if not os.path.exists(filename) or recompute:\n",
69 | " blender_render(os.path.join(basename, f\"ref.ply\"), \n",
70 | " basename, \n",
71 | " collection,\n",
72 | " vp,\n",
73 | " res=res,\n",
74 | " t=0.004)\n",
75 | "renderings.append(plt.imread(filename))"
76 | ]
77 | },
78 | {
79 | "cell_type": "code",
80 | "execution_count": null,
81 | "metadata": {
82 | "scrolled": false
83 | },
84 | "outputs": [],
85 | "source": [
86 | "base_size = 10\n",
87 | "n_rows = 1\n",
88 | "n_cols = len(wireframes)+1\n",
89 | "H,W,_ = wireframes[0].shape\n",
90 | "ratio = 0.3\n",
91 | "\n",
92 | "w = int(ratio*W)\n",
93 | "x_offset = -int(0.02*w)\n",
94 | "aspect = w/H\n",
95 | "inset_w = int(0.12*H)\n",
96 | "\n",
97 | "fontsize = 36\n",
98 | "widths = np.ones(n_cols)\n",
99 | "\n",
100 | "fig = plt.figure(1, figsize=(n_cols * base_size* aspect, n_rows * base_size*1.1), constrained_layout=True)\n",
101 | "gs = fig.add_gridspec(n_rows, n_cols, wspace=0.075,hspace=0.0, width_ratios=widths)\n",
102 | "\n",
103 | "sns.set_style(\"white\")\n",
104 | "\n",
105 | "kw = dict(ha=\"center\", va=\"center\", fontsize=fontsize, color=\"darkgrey\", rotation=45)\n",
106 | "\n",
107 | "# Indices for the diagonal split, J>I above the diagonal, and JJ] = wf[I>J]\n",
121 | " im[I= 1.0:
131 | raise ValueError(f"Invalid value for alpha: {alpha} : it should take values between 0 (included) and 1 (excluded)")
132 | M = torch.add((1-alpha)*eye, alpha*L) # M = (1-alpha) * I + alpha * L
133 | return M.coalesce()
134 |
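A minimal sketch of compute_matrix on a toy tetrahedron (illustrative data; assumes a CUDA device, and the signature as it is called from scripts/main.py):

import torch
from largesteps.geometry import compute_matrix

v = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], device='cuda')
f = torch.tensor([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]], device='cuda')

M_lambda = compute_matrix(v, f, lambda_=19.0)          # M = I + lambda_ * L
M_alpha = compute_matrix(v, f, lambda_=0, alpha=0.95)  # M = (1-alpha) * I + alpha * L

Both forms yield a sparse (V, V) system matrix ready for to_differential / from_differential.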
--------------------------------------------------------------------------------
/largesteps/optimize.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | class AdamUniform(torch.optim.Optimizer):
4 | """
5 | Variant of Adam with uniform scaling by the second moment.
6 |
7 | Instead of dividing each component by the square root of its second moment,
8 | we divide all of them by the max.
9 | """
10 | def __init__(self, params, lr=0.1, betas=(0.9,0.999)):
11 | defaults = dict(lr=lr, betas=betas)
12 | super(AdamUniform, self).__init__(params, defaults)
13 |
14 | def __setstate__(self, state):
15 | super(AdamUniform, self).__setstate__(state)
16 |
17 | @torch.no_grad()
18 | def step(self):
19 | for group in self.param_groups:
20 | lr = group['lr']
21 | b1, b2 = group['betas']
22 | for p in group["params"]:
23 | state = self.state[p]
24 | # Lazy initialization
25 | if len(state)==0:
26 | state["step"] = 0
27 | state["g1"] = torch.zeros_like(p.data)
28 | state["g2"] = torch.zeros_like(p.data)
29 |
30 | g1 = state["g1"]
31 | g2 = state["g2"]
32 | state["step"] += 1
33 | grad = p.grad.data
34 |
35 | g1.mul_(b1).add_(grad, alpha=1-b1)
36 | g2.mul_(b2).add_(grad.square(), alpha=1-b2)
37 | m1 = g1 / (1-(b1**state["step"]))
38 | m2 = g2 / (1-(b2**state["step"]))
39 | # This is the only modification we make to the original Adam algorithm
40 | gr = m1 / (1e-8 + m2.sqrt().max())
41 | p.data.sub_(gr, alpha=lr)
42 |
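A self-contained toy run (CPU is fine here); the only difference from stock Adam is the single max-normalization line above:

import torch
from largesteps.optimize import AdamUniform

x = torch.zeros(10, 3, requires_grad=True)  # points to optimize
target = torch.rand(10, 3)                  # where they should end up
opt = AdamUniform([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    (x - target).square().mean().backward()
    opt.step()
print((x - target).abs().max())  # close to 0 after convergence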
--------------------------------------------------------------------------------
/largesteps/parameterize.py:
--------------------------------------------------------------------------------
1 | from largesteps.solvers import CholeskySolver, ConjugateGradientSolver, solve
2 | import weakref
3 |
4 | # Cache for the system solvers
5 | _cache = {}
6 |
7 | def cache_put(key, value, A):
8 | # Called when 'A' is garbage collected
9 | def cleanup_callback(wr):
10 | del _cache[key]
11 |
12 | wr = weakref.ref(
13 | A,
14 | cleanup_callback
15 | )
16 |
17 | _cache[key] = (value, wr)
18 |
19 | def to_differential(L, v):
20 | """
21 | Convert vertex coordinates to the differential parameterization.
22 |
23 | Parameters
24 | ----------
25 | L : torch.sparse.Tensor
26 | (I + l*L) matrix
27 | v : torch.Tensor
28 | Vertex coordinates
29 | """
30 | return L @ v
31 |
32 | def from_differential(L, u, method='Cholesky'):
33 | """
34 | Convert differential coordinates back to Cartesian.
35 |
36 | If this is the first time we call this function on a given matrix L, the
37 | solver is cached. It will be destroyed once the matrix is garbage collected.
38 |
39 | Parameters
40 | ----------
41 | L : torch.sparse.Tensor
42 | (I + l*L) matrix
43 | u : torch.Tensor
44 | Differential coordinates
45 | method : {'Cholesky', 'CG'}
46 | Solver to use.
47 | """
48 | key = (id(L), method)
49 | if key not in _cache.keys():
50 | if method == 'Cholesky':
51 | solver = CholeskySolver(L)
52 | elif method == 'CG':
53 | solver = ConjugateGradientSolver(L)
54 | else:
55 | raise ValueError(f"Unknown solver type '{method}'.")
56 |
57 | cache_put(key, solver, L)
58 | else:
59 | solver = _cache[key][0]
60 |
61 | return solve(solver, u)
62 |
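The intended round trip, as a sketch (assumes a CUDA device and cholespy; compute_matrix is the builder from largesteps/geometry.py):

import torch
from largesteps.geometry import compute_matrix
from largesteps.parameterize import to_differential, from_differential

v = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], device='cuda')
f = torch.tensor([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]], device='cuda')
M = compute_matrix(v, f, lambda_=19.0)

u = to_differential(M, v)        # u = M @ v
v_rec = from_differential(M, u)  # solves M x = u; the solver is cached for reuse
assert torch.allclose(v, v_rec, atol=1e-4)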
--------------------------------------------------------------------------------
/largesteps/pgf_custom.py:
--------------------------------------------------------------------------------
1 | import os
2 | import pathlib
3 |
4 | import matplotlib as mpl
5 | from matplotlib.backends import backend_pgf
6 | from matplotlib.backends.backend_pgf import writeln, _get_image_inclusion_command, get_preamble, get_fontspec, MixedModeRenderer, _check_savefig_extra_args
7 | from matplotlib.backend_bases import _Backend
8 | from PIL import Image
9 |
10 |
11 | class RendererPgfCustom(backend_pgf.RendererPgf):
12 |
13 | def __init__(self, *args, **kwargs):
14 | super().__init__(*args, **kwargs)
15 |
16 | def draw_image(self, gc, x, y, im, transform=None):
17 | # docstring inherited
18 |
19 | h, w = im.shape[:2]
20 | if w == 0 or h == 0:
21 | return
22 |
23 | if not os.path.exists(getattr(self.fh, "name", "")):
24 | raise ValueError(
25 | "streamed pgf-code does not support raster graphics, consider "
26 | "using the pgf-to-pdf option")
27 |
28 | # save the images to png files
29 | path = pathlib.Path(self.fh.name)
30 | fmt = mpl.rcParams.get('pdf.image_format', 'jpg').lower()
31 | if fmt == 'png':
32 | fname_img = "%s-img%d.png" % (path.stem, self.image_counter)
33 | Image.fromarray(im[::-1]).save(path.parent / fname_img)
34 | elif fmt in ('jpg', 'jpeg'):
35 | fname_img = "%s-img%d.jpg" % (path.stem, self.image_counter)
36 | # Composite over white if transparent
37 | if im.shape[-1] == 4:
38 | alpha = im[:, :, -1][:, :, None] / 255.0
39 | to_save = (alpha * im[:, :, :3] + (1.0 - alpha) * 255).astype(im.dtype)
40 | else:
41 | to_save = im
42 | Image.fromarray(to_save[::-1]).save(path.parent / fname_img,
43 | quality=mpl.rcParams.get('pdf.image_quality', 95))
44 | else:
45 | raise ValueError('Unsupported image format: ' + str(fmt))
46 | self.image_counter += 1
47 |
48 | # reference the image in the pgf picture
49 | writeln(self.fh, r"\begin{pgfscope}")
50 | self._print_pgf_clip(gc)
51 | f = 1. / self.dpi # from display coords to inch
52 | if transform is None:
53 | writeln(self.fh,
54 | r"\pgfsys@transformshift{%fin}{%fin}" % (x * f, y * f))
55 | w, h = w * f, h * f
56 | else:
57 | tr1, tr2, tr3, tr4, tr5, tr6 = transform.frozen().to_values()
58 | writeln(self.fh,
59 | r"\pgfsys@transformcm{%f}{%f}{%f}{%f}{%fin}{%fin}" %
60 | (tr1 * f, tr2 * f, tr3 * f, tr4 * f,
61 | (tr5 + x) * f, (tr6 + y) * f))
62 | w = h = 1 # scale is already included in the transform
63 | interp = str(transform is None).lower() # interpolation in PDF reader
64 |
65 | writeln(self.fh,
66 | r"\pgftext[left,bottom]"
67 | r"{%s[interpolate=%s,width=%fin,height=%fin]{%s}}" %
68 | (_get_image_inclusion_command(),
69 | interp, w, h, fname_img))
70 | writeln(self.fh, r"\end{pgfscope}")
71 |
72 |
73 | class FigureCanvasPgfCustom(backend_pgf.FigureCanvasPgf):
74 | def __init__(self, *args, **kwargs):
75 | super().__init__(*args, **kwargs)
76 |
77 | @_check_savefig_extra_args
78 | def _print_pgf_to_fh(self, fh, *, bbox_inches_restore=None):
79 |
80 | header_text = """%% Creator: Matplotlib, PGF backend
81 | %%
82 | %% To include the figure in your LaTeX document, write
83 | %% \\input{<filename>.pgf}
84 | %%
85 | %% Make sure the required packages are loaded in your preamble
86 | %% \\usepackage{pgf}
87 | %%
88 | %% Figures using additional raster images can only be included by \\input if
89 | %% they are in the same directory as the main LaTeX file. For loading figures
90 | %% from other directories you can use the `import` package
91 | %% \\usepackage{import}
92 | %%
93 | %% and then include the figures with
94 | %% \\import{<path to file>}{<filename>.pgf}
95 | %%
96 | """
97 |
98 | # append the preamble used by the backend as a comment for debugging
99 | header_info_preamble = ["%% Matplotlib used the following preamble"]
100 | for line in get_preamble().splitlines():
101 | header_info_preamble.append("%% " + line)
102 | for line in get_fontspec().splitlines():
103 | header_info_preamble.append("%% " + line)
104 | header_info_preamble.append("%%")
105 | header_info_preamble = "\n".join(header_info_preamble)
106 |
107 | # get figure size in inch
108 | w, h = self.figure.get_figwidth(), self.figure.get_figheight()
109 | dpi = self.figure.get_dpi()
110 |
111 | # create pgfpicture environment and write the pgf code
112 | fh.write(header_text)
113 | fh.write(header_info_preamble)
114 | fh.write("\n")
115 | writeln(fh, r"\begingroup")
116 | writeln(fh, r"\makeatletter")
117 | writeln(fh, r"\begin{pgfpicture}")
118 | writeln(fh,
119 | r"\pgfpathrectangle{\pgfpointorigin}{\pgfqpoint{%fin}{%fin}}"
120 | % (w, h))
121 | writeln(fh, r"\pgfusepath{use as bounding box, clip}")
122 | renderer = MixedModeRenderer(self.figure, w, h, dpi,
123 | RendererPgfCustom(self.figure, fh),
124 | bbox_inches_restore=bbox_inches_restore)
125 | self.figure.draw(renderer)
126 |
127 | # end the pgfpicture environment
128 | writeln(fh, r"\end{pgfpicture}")
129 | writeln(fh, r"\makeatother")
130 | writeln(fh, r"\endgroup")
131 |
132 | def get_renderer(self):
133 | return RendererPgfCustom(self.figure, None)
134 |
135 |
136 | @_Backend.export
137 | class _BackendPgfCustom(_Backend):
138 | FigureCanvas = FigureCanvasPgfCustom
139 |
--------------------------------------------------------------------------------
/largesteps/solvers.py:
--------------------------------------------------------------------------------
1 | from torch.autograd import Function
2 | import numpy as np
3 | from cholespy import CholeskySolverF, MatrixType
4 | import torch
5 |
6 | class Solver:
7 | """
8 | Sparse linear system solver base class.
9 | """
10 | def __init__(self, M):
11 | pass
12 |
13 | def solve(self, b, backward=False):
14 | """
15 | Solve the linear system.
16 |
17 | Parameters
18 | ----------
19 | b : torch.Tensor
20 | The right hand side of the system Lx=b
21 | backward : bool (optional)
22 | Whether this is the backward or forward solve
23 | """
24 | raise NotImplementedError()
25 |
26 | class CholeskySolver(Solver):
27 | """
28 | Cholesky solver.
29 |
30 | Precomputes the Cholesky decomposition of the system matrix and solves the
31 | system by back-substitution.
32 | """
33 | def __init__(self, M):
34 | self.solver = CholeskySolverF(M.shape[0], M.indices()[0], M.indices()[1], M.values(), MatrixType.COO)
35 |
36 | def solve(self, b, backward=False):
37 | x = torch.zeros_like(b)
38 | self.solver.solve(b.detach(), x)
39 | return x
40 |
41 | class ConjugateGradientSolver(Solver):
42 | """
43 | Conjugate gradients solver.
44 | """
45 | def __init__(self, M):
46 | """
47 | Initialize the solver.
48 |
49 | Parameters
50 | ----------
51 | M : torch.sparse_coo_tensor
52 | Linear system matrix.
53 | """
54 | self.guess_fwd = None
55 | self.guess_bwd = None
56 | self.M = M
57 |
58 | def solve_axis(self, b, x0):
59 | """
60 | Solve a single linear system with Conjugate Gradients.
61 |
62 | Parameters:
63 | -----------
64 | b : torch.Tensor
65 | The right hand side of the system Ax=b.
66 | x0 : torch.Tensor
67 | Initial guess for the solution.
68 | """
69 | x = x0
70 | r = self.M @ x - b
71 | p = -r
72 | r_norm = r.norm()
73 | while r_norm > 1e-5:
74 | Ap = self.M @ p
75 | r2 = r_norm.square()
76 | alpha = r2 / (p * Ap).sum(dim=0)
77 | x = x + alpha*p
80 | r = r + alpha*Ap
81 | r_norm = r.norm()
82 | beta = r_norm.square() / r2
83 | p = -r + beta*p
84 | return x
85 |
86 | def solve(self, b, backward=False):
87 | """
88 | Solve the sparse linear system.
89 |
90 | There is actually one linear system to solve for each axis in b
91 | (typically x, y and z), and we have to solve each separately with CG.
92 | Therefore this method calls self.solve_axis for each individual system
93 | to form the solution.
94 |
95 | Parameters
96 | ----------
97 | b : torch.Tensor
98 | The right hand side of the system Ax=b.
99 | backward : bool
100 | Whether we are in the backward or the forward pass.
101 | """
102 | if self.guess_fwd is None:
103 | # Initialize starting guesses in the first run
104 | self.guess_bwd = torch.zeros_like(b)
105 | self.guess_fwd = torch.zeros_like(b)
106 |
107 | if backward:
108 | x0 = self.guess_bwd
109 | else:
110 | x0 = self.guess_fwd
111 |
112 | if len(b.shape) != 2:
113 | raise ValueError(f"Invalid array shape {b.shape} for ConjugateGradientSolver.solve: expected shape (a, b)")
114 |
115 | x = torch.zeros_like(b)
116 | for axis in range(b.shape[1]):
117 | # We have to solve for each axis separately for CG to converge
118 | x[:, axis] = self.solve_axis(b[:, axis], x0[:, axis])
119 |
120 | if backward:
121 | # Update initial guess for next iteration
122 | self.guess_bwd = x
123 | else:
124 | self.guess_fwd = x
125 |
126 | return x
127 |
128 | class DifferentiableSolve(Function):
129 | """
130 | Differentiable function to solve the linear system.
131 |
132 | This simply calls the solve methods implemented by the Solver classes.
133 | """
134 | @staticmethod
135 | def forward(ctx, solver, b):
136 | ctx.solver = solver
137 | return solver.solve(b, backward=False)
138 |
139 | @staticmethod
140 | def backward(ctx, grad_output):
141 | solver_grad = None # We have to return a gradient per input argument in forward
142 | b_grad = None
143 | if ctx.needs_input_grad[1]:
144 | b_grad = ctx.solver.solve(grad_output.contiguous(), backward=True)
145 | return (solver_grad, b_grad)
146 |
147 | # Alias for DifferentiableSolve function
148 | solve = DifferentiableSolve.apply
149 |
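A sketch of differentiating through the solve (assumes a CUDA device and cholespy; the toy system matrix reuses compute_matrix from largesteps/geometry.py):

import torch
from largesteps.geometry import compute_matrix
from largesteps.solvers import CholeskySolver, solve

v = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], device='cuda')
f = torch.tensor([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]], device='cuda')
M = compute_matrix(v, f, lambda_=1.0)

solver = CholeskySolver(M)
u = torch.randn(4, 3, device='cuda', requires_grad=True)
x = solve(solver, u)          # forward: solves M x = u
x.square().mean().backward()  # backward: one more solve with the same M (symmetric)
print(u.grad.shape)           # torch.Size([4, 3])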
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | cholespy>=0.1.4
2 | matplotlib>=3.3.2
3 | pythreejs
4 | git+https://github.com/skoch9/meshplot.git@725e4a7
5 | numpy>=1.19.4
6 | pandas>=1.2.4
7 | seaborn>=0.11.1
8 | tqdm>=4.54.1
9 | imageio>=2.9.0
10 |
--------------------------------------------------------------------------------
/scripts/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rgl-epfl/large-steps-pytorch/1d72dc42d5bea5cfcf94402a7241cf5e88ee7554/scripts/__init__.py
--------------------------------------------------------------------------------
/scripts/blender_render.py:
--------------------------------------------------------------------------------
1 | import bpy
2 | import os
3 |
4 | import argparse
5 | import sys
6 | import numpy as np
7 |
8 | # Remove annoying argument that messes up argparse
9 | sys.argv.remove('--')
10 |
11 | parser = argparse.ArgumentParser(description="Render OBJ/PLY meshes from a given viewpoint, with wireframe.")
12 | parser.add_argument("--input", "-i", required=True, type=os.path.abspath, help="Meshes to render.", nargs="+")
13 | parser.add_argument("--collection", "-c", type=str, default="14", help="Camera collection to use.")
14 | parser.add_argument("--smooth", "-s", action="store_true", help="Render without wireframe.")
15 | parser.add_argument("--thickness", "-t", type=float, default=0.008, help="Thickness of the wireframe.")
16 | parser.add_argument("--viewpoint", "-v", type=int, default=0, help="Index of the camera with which to render.")
17 | parser.add_argument("--output", "-o", type=os.path.abspath, help="Output directory.")
18 | parser.add_argument("--resolution", "-r", type=float, default=100, help="Rendering resolution fraction.")
19 | parser.add_argument("--background", action="store_true", help="Render the background or not.")
20 | parser.add_argument("--ours", action="store_true", help="Color mesh as ours")
21 | parser.add_argument("--baseline", action="store_true", help="Color mesh as baseline")
22 | parser.add_argument("--area", action="store_true", help="Color mesh depending on surface area")
23 | parser.add_argument("--sequence", action="store_true", help="Handle naming so that it is compatible with premiere sequence import")
24 | parser.add_argument("--lines", action="store_true", help="Show self intersection as lines")
25 | parser.add_argument("--faces", action="store_true", help="Show self intersection as faces")
26 | parser.add_argument("--it", type=int, default=-1)
27 | # Parse command line args
28 | params = parser.parse_known_args()[0]
29 | if params.sequence and params.it == -1:
30 | raise ValueError("Invalid iteration number!")
31 | if params.output is None:
32 | params.output = os.getcwd()
33 | if not os.path.isdir(params.output):
34 | os.makedirs(params.output)
35 |
36 | assert params.collection in bpy.data.collections.keys(), "Wrong collection name!"
37 |
38 | bpy.ops.object.select_all(action='DESELECT')
39 | white_mat = bpy.data.materials["White"]
40 | baseline_mat = bpy.data.materials["Baseline"]
41 | ours_mat = bpy.data.materials["Ours"]
42 | black_mat = bpy.data.materials["Black"]
43 | area_mat = bpy.data.materials["Area"]
44 | lines_mat = bpy.data.materials["Intersections"]
45 |
47 | for filename in params.input:
48 | folder, obj_file = os.path.split(filename)
49 | name, ext = os.path.splitext(obj_file)
50 | # Import the object
51 | if ext == ".obj":
52 | bpy.ops.import_scene.obj(filepath=filename)
53 | elif ext == ".ply":
54 | bpy.ops.import_mesh.ply(filepath=filename)
55 | else:
56 | raise ValueError(f"Unsupported extension: {ext} ! This script only supports OBJ and PLY files.")
57 | # Make the imported object the active one
58 | obj = bpy.context.selected_objects[-1]
59 | bpy.context.view_layer.objects.active = obj
60 | if ext == ".ply":
61 | obj.rotation_euler[0] = np.pi / 2
62 | # Assign materials
63 | assert len(obj.data.materials) == (1 if ext==".obj" else 0)
64 | bpy.ops.object.material_slot_add()
65 | if ext == ".ply":
66 | bpy.ops.object.material_slot_add()
67 |
68 | if params.area:
69 | obj.data.materials[0] = area_mat
70 | bpy.data.worlds["World"].node_tree.nodes["Background"].inputs[1].default_value = 3
71 | elif params.ours:
72 | obj.data.materials[0] = ours_mat
73 | elif params.baseline:
74 | obj.data.materials[0] = baseline_mat
75 | else:
76 | obj.data.materials[0] = white_mat
77 | obj.data.materials[1] = black_mat
78 | if not params.smooth:
79 | # Add Wireframe
80 | bpy.ops.object.modifier_add(type='WIREFRAME')
81 | obj.modifiers["Wireframe"].use_replace = False
82 | obj.modifiers["Wireframe"].use_even_offset = False
83 | obj.modifiers["Wireframe"].material_offset = 1
84 | obj.modifiers["Wireframe"].thickness = params.thickness
85 |
86 | # If needed, load the lines
87 | if params.lines:
88 | bpy.ops.import_scene.obj(filepath=os.path.join(folder, f"{name}_lines.obj"))
89 | # Make the imported object the active one
90 | lines = bpy.context.selected_objects[-1]
91 | bpy.context.view_layer.objects.active = lines
92 | # Convert to a curve
93 | bpy.ops.object.convert(target='CURVE')
94 | # Set bevel
95 | lines.data.bevel_depth = 0.008
96 | # Assign material
97 | assert len(lines.data.materials) == 0
98 | bpy.ops.object.material_slot_add()
99 | lines.data.materials[0] = lines_mat
100 | elif params.faces:
101 | # Load conflicting faces and assign material
102 | mask = np.genfromtxt(os.path.join(folder, f"{name}.csv"), delimiter=" ", dtype=int)
103 | bpy.ops.object.material_slot_add()
104 | obj.data.materials[2] = lines_mat
105 | for i in mask:
106 | obj.data.polygons[i].material_index = 2
107 |
108 | # Set the active camera
109 | bpy.data.scenes["Scene"].camera = bpy.data.collections[params.collection].objects[params.viewpoint]
110 | # Render
111 | bpy.data.scenes["Scene"].render.film_transparent = not params.background
112 | bpy.data.scenes["Scene"].render.resolution_percentage = params.resolution
113 | if params.sequence:
114 | bpy.data.scenes["Scene"].render.filepath = os.path.join(params.output, f"{'smooth' if params.smooth else 'wireframe'}_{params.it:04d}.png")
115 | else:
116 | bpy.data.scenes["Scene"].render.filepath = os.path.join(params.output, f"{name}_{'smooth' if params.smooth else 'wireframe'}.png")
117 | bpy.ops.render.render(write_still=True)
118 |
119 | # Delete the object
120 | bpy.ops.object.delete()
122 |
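For reference, the subprocess call built in scripts/preamble.py amounts to the following (the mesh and output paths are placeholders; note the '--' separator, which this script strips from sys.argv):

import subprocess
subprocess.run(["blender2.8", "-b", "scenes/render.blend",
                "--python", "scripts/blender_render.py", "--",
                "-i", "mesh.ply", "-o", "output/", "-c", "14", "-v", "0", "-s"],
               check=True)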
--------------------------------------------------------------------------------
/scripts/constants.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | ROOT_DIR = os.path.dirname(os.path.dirname(__file__))
4 | OUTPUT_DIR = os.path.realpath(os.path.join(ROOT_DIR, "output")) # Change this if you want to output the data and figures elsewhere
5 | SCENES_DIR = os.path.realpath(os.path.join(ROOT_DIR, "scenes"))
6 | REMESH_DIR = os.path.join(ROOT_DIR, "ext/botsch-kobbelt-remesher-libigl/build")
7 | BLEND_SCENE = os.path.join(SCENES_DIR, "render.blend")
8 | BLENDER_EXEC = "blender2.8" # Change this if you have a different blender installation
9 |
--------------------------------------------------------------------------------
/scripts/geometry.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | def remove_duplicates(v, f):
4 | """
5 | Generate a mesh representation with no duplicates and
6 | return it along with the mapping to the original mesh layout.
7 | """
8 |
9 | unique_verts, inverse = torch.unique(v, dim=0, return_inverse=True)
10 | new_faces = inverse[f.long()]
11 | return unique_verts, new_faces, inverse
12 |
13 | def average_edge_length(verts, faces):
14 | """
15 | Compute the average length of all edges in a given mesh.
16 |
17 | Parameters
18 | ----------
19 | verts : torch.Tensor
20 | Vertex positions.
21 | faces : torch.Tensor
22 | array of triangle faces.
23 | """
24 | face_verts = verts[faces]
25 | v0, v1, v2 = face_verts[:, 0], face_verts[:, 1], face_verts[:, 2]
26 |
27 |     # Side lengths of each triangle, of shape (F,)
28 |     # A is the side opposite v0, B is opposite v1, and C is opposite v2
29 | A = (v1 - v2).norm(dim=1)
30 | B = (v0 - v2).norm(dim=1)
31 | C = (v0 - v1).norm(dim=1)
32 |
33 | return (A + B + C).sum() / faces.shape[0] / 3
34 |
35 | def massmatrix_voronoi(verts, faces):
36 | """
37 | Compute the area of the Voronoi cell around each vertex in the mesh.
38 | http://www.alecjacobson.com/weblog/?p=863
39 |
40 |     Parameters
41 |     ----------
42 |
43 |     verts : torch.Tensor, vertex positions
44 |     faces : torch.Tensor, triangle indices
45 |     """
46 | # Compute edge lengths
47 | l0 = (verts[faces[:,1]] - verts[faces[:,2]]).norm(dim=1)
48 | l1 = (verts[faces[:,2]] - verts[faces[:,0]]).norm(dim=1)
49 | l2 = (verts[faces[:,0]] - verts[faces[:,1]]).norm(dim=1)
50 | l = torch.stack((l0, l1, l2), dim=1)
51 |
52 | # Compute cosines of the corners of the triangles
53 | cos0 = (l[:,1].square() + l[:,2].square() - l[:,0].square())/(2*l[:,1]*l[:,2])
54 | cos1 = (l[:,2].square() + l[:,0].square() - l[:,1].square())/(2*l[:,2]*l[:,0])
55 | cos2 = (l[:,0].square() + l[:,1].square() - l[:,2].square())/(2*l[:,0]*l[:,1])
56 | cosines = torch.stack((cos0, cos1, cos2), dim=1)
57 |
58 | # Convert to barycentric coordinates
59 | barycentric = cosines * l
60 | barycentric = barycentric / torch.sum(barycentric, dim=1)[..., None]
61 |
62 | # Compute areas of the faces using Heron's formula
63 | areas = 0.25 * ((l0+l1+l2)*(l0+l1-l2)*(l0-l1+l2)*(-l0+l1+l2)).sqrt()
64 |
65 | # Compute the areas of the sub triangles
66 | tri_areas = areas[..., None] * barycentric
67 |
68 | # Compute the area of the quad
69 | cell0 = 0.5 * (tri_areas[:,1] + tri_areas[:, 2])
70 | cell1 = 0.5 * (tri_areas[:,2] + tri_areas[:, 0])
71 | cell2 = 0.5 * (tri_areas[:,0] + tri_areas[:, 1])
72 | cells = torch.stack((cell0, cell1, cell2), dim=1)
73 |
74 | # Different formulation for obtuse triangles
75 | # See http://www.alecjacobson.com/weblog/?p=874
76 | cells[:,0] = torch.where(cosines[:,0]<0, 0.5*areas, cells[:,0])
77 | cells[:,1] = torch.where(cosines[:,0]<0, 0.25*areas, cells[:,1])
78 | cells[:,2] = torch.where(cosines[:,0]<0, 0.25*areas, cells[:,2])
79 |
80 | cells[:,0] = torch.where(cosines[:,1]<0, 0.25*areas, cells[:,0])
81 | cells[:,1] = torch.where(cosines[:,1]<0, 0.5*areas, cells[:,1])
82 | cells[:,2] = torch.where(cosines[:,1]<0, 0.25*areas, cells[:,2])
83 |
84 | cells[:,0] = torch.where(cosines[:,2]<0, 0.25*areas, cells[:,0])
85 | cells[:,1] = torch.where(cosines[:,2]<0, 0.25*areas, cells[:,1])
86 | cells[:,2] = torch.where(cosines[:,2]<0, 0.5*areas, cells[:,2])
87 |
88 | # Sum the quad areas to get the voronoi cell
89 | return torch.zeros_like(verts).scatter_add_(0, faces, cells).sum(dim=1)
90 |
91 | def compute_face_normals(verts, faces):
92 | """
93 | Compute per-face normals.
94 |
95 | Parameters
96 | ----------
97 | verts : torch.Tensor
98 | Vertex positions
99 | faces : torch.Tensor
100 | Triangle faces
101 | """
102 | fi = torch.transpose(faces, 0, 1).long()
103 | verts = torch.transpose(verts, 0, 1)
104 |
105 | v = [verts.index_select(1, fi[0]),
106 | verts.index_select(1, fi[1]),
107 | verts.index_select(1, fi[2])]
108 |
109 |     c = torch.cross(v[1] - v[0], v[2] - v[0], dim=0)
110 | n = c / torch.norm(c, dim=0)
111 | return n
112 |
113 | def safe_acos(x):
114 | return torch.acos(x.clamp(min=-1, max=1))
115 |
116 | def compute_vertex_normals(verts, faces, face_normals):
117 | """
118 | Compute per-vertex normals from face normals.
119 |
120 | Parameters
121 | ----------
122 | verts : torch.Tensor
123 | Vertex positions
124 | faces : torch.Tensor
125 | Triangle faces
126 | face_normals : torch.Tensor
127 | Per-face normals
128 | """
129 | fi = torch.transpose(faces, 0, 1).long()
130 | verts = torch.transpose(verts, 0, 1)
131 | normals = torch.zeros_like(verts)
132 |
133 | v = [verts.index_select(1, fi[0]),
134 | verts.index_select(1, fi[1]),
135 | verts.index_select(1, fi[2])]
136 |
137 | for i in range(3):
138 | d0 = v[(i + 1) % 3] - v[i]
139 |         d0 = d0 / torch.norm(d0, dim=0)
140 |         d1 = v[(i + 2) % 3] - v[i]
141 |         d1 = d1 / torch.norm(d1, dim=0)
142 |         # Angle at this corner, used to weight the face normal contribution
143 |         face_angle = safe_acos(torch.sum(d0*d1, 0))
144 | nn = face_normals * face_angle
145 | for j in range(3):
146 | normals[j].index_add_(0, fi[i], nn[j])
147 | return (normals / torch.norm(normals, dim=0)).transpose(0, 1)
148 |
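A quick CPU sketch of the normal helpers on a single triangle; note the layout conventions (compute_face_normals returns (3, F), compute_vertex_normals returns (V, 3)):

import torch
from scripts.geometry import compute_face_normals, compute_vertex_normals

v = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
f = torch.tensor([[0, 1, 2]])
fn = compute_face_normals(v, f)        # (3, 1), here (0, 0, 1) for this winding
vn = compute_vertex_normals(v, f, fn)  # (3, 3), all (0, 0, 1) on a flat face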
--------------------------------------------------------------------------------
/scripts/io_ply.py:
--------------------------------------------------------------------------------
1 | # This file is inspired from the pyntcloud project https://github.com/daavoo/pyntcloud/blob/master/pyntcloud/io/ply.py
2 | import sys
3 | import numpy as np
4 | import pandas as pd
5 | from collections import defaultdict
6 | import torch
7 |
8 | sys_byteorder = ('>', '<')[sys.byteorder == 'little']
9 |
10 | ply_dtypes = dict([
11 | (b'int8', 'i1'),
12 | (b'char', 'i1'),
13 | (b'uint8', 'u1'),
15 | (b'uchar', 'u1'),
16 | (b'int16', 'i2'),
17 | (b'short', 'i2'),
18 | (b'uint16', 'u2'),
19 | (b'ushort', 'u2'),
20 | (b'int32', 'i4'),
21 | (b'int', 'i4'),
22 | (b'uint32', 'u4'),
23 | (b'uint', 'u4'),
24 | (b'float32', 'f4'),
25 | (b'float', 'f4'),
26 | (b'float64', 'f8'),
27 | (b'double', 'f8')
28 | ])
29 |
30 | valid_formats = {'ascii': '', 'binary_big_endian': '>',
31 | 'binary_little_endian': '<'}
32 |
33 | def read_ply(filename):
34 | """ Read a .ply (binary or ascii) file and store the elements in pandas DataFrame
35 | Parameters
36 | ----------
37 | filename: str
38 | Path to the filename
39 | Returns
40 | -------
41 | data: dict
42 | Elements as pandas DataFrames; comments and ob_info as list of string
43 | """
44 |
45 | with open(filename, 'rb') as ply:
46 |
47 | if b'ply' not in ply.readline():
48 |             raise ValueError('The file does not start with the word ply')
49 | # get binary_little/big or ascii
50 | fmt = ply.readline().split()[1].decode()
51 | # get extension for building the numpy dtypes
52 | ext = valid_formats[fmt]
53 |
54 | line = []
55 | dtypes = defaultdict(list)
56 | count = 2
57 | points_size = None
58 | mesh_size = None
59 | has_texture = False
60 | comments = []
61 | while b'end_header' not in line and line != b'':
62 | line = ply.readline()
63 |
64 | if b'element' in line:
65 | line = line.split()
66 | name = line[1].decode()
67 | size = int(line[2])
68 | if name == "vertex":
69 | points_size = size
70 | elif name == "face":
71 | mesh_size = size
72 |
73 | elif b'property' in line:
74 | line = line.split()
75 | # element mesh
76 | if b'list' in line:
77 |
78 | if b"vertex_indices" in line[-1] or b"vertex_index" in line[-1]:
79 | mesh_names = ["n_points", "v1", "v2", "v3"]
80 | else:
81 | has_texture = True
82 | mesh_names = ["n_coords"] + ["v1_u", "v1_v", "v2_u", "v2_v", "v3_u", "v3_v"]
83 |
84 | if fmt == "ascii":
85 | # the first number has different dtype than the list
86 | dtypes[name].append(
87 | (mesh_names[0], ply_dtypes[line[2]]))
88 | # rest of the numbers have the same dtype
89 | dt = ply_dtypes[line[3]]
90 | else:
91 | # the first number has different dtype than the list
92 | dtypes[name].append(
93 | (mesh_names[0], ext + ply_dtypes[line[2]]))
94 | # rest of the numbers have the same dtype
95 | dt = ext + ply_dtypes[line[3]]
96 |
97 | for j in range(1, len(mesh_names)):
98 | dtypes[name].append((mesh_names[j], dt))
99 | else:
100 | if fmt == "ascii":
101 | dtypes[name].append(
102 | (line[2].decode(), ply_dtypes[line[1]]))
103 | else:
104 | dtypes[name].append(
105 | (line[2].decode(), ext + ply_dtypes[line[1]]))
106 |
107 | elif b'comment' in line:
108 | line = line.split(b" ", 1)
109 | comment = line[1].decode().rstrip()
110 | comments.append(comment)
111 |
112 | count += 1
113 |
114 | # for bin
115 | end_header = ply.tell()
116 |
117 | data = {}
118 |
119 | if comments:
120 | data["comments"] = comments
121 |
122 | if fmt == 'ascii':
123 | top = count
124 | bottom = 0 if mesh_size is None else mesh_size
125 |
126 | names = [x[0] for x in dtypes["vertex"]]
127 | vertex_data = pd.read_csv(filename, sep=" ", header=None, engine="python",
128 | skiprows=top, skipfooter=bottom, usecols=names, names=names)
129 |
130 | for n, col in enumerate(vertex_data.columns):
131 | vertex_data[col] = vertex_data[col].astype(
132 | dtypes["vertex"][n][1])
133 |
134 | if mesh_size :
135 | top = count + points_size
136 |
137 | names = np.array([x[0] for x in dtypes["face"]])
138 | usecols = [1, 2, 3, 5, 6, 7, 8, 9, 10] if has_texture else [1, 2, 3]
139 | names = names[usecols]
140 |
141 | data["faces"] = pd.read_csv(
142 | filename, sep=" ", header=None, engine="python", skiprows=top, usecols=usecols, names=names)
143 |
144 | for n, col in enumerate(data["faces"].columns):
145 | data["faces"][col] = data["faces"][col].astype(
146 | dtypes["face"][n + 1][1])
147 |
148 | # Convert to PyTorch array
149 | data["vertices"] = torch.tensor(vertex_data[["x", "y", "z"]].values, device='cuda')
150 | if "nx" in vertex_data.columns:
151 | data["normals"] = torch.tensor(vertex_data[["nx", "ny", "nz"]].values, device='cuda')
152 | data["faces"] = torch.tensor(data["faces"].to_numpy(), device='cuda')
153 |
154 | else:
155 | with open(filename, 'rb') as ply:
156 | ply.seek(end_header)
157 | points_np = np.fromfile(ply, dtype=dtypes["vertex"], count=points_size)
158 | if ext != sys_byteorder:
159 | points_np = points_np.byteswap().newbyteorder()
160 | data["vertices"] = torch.tensor(np.stack((points_np["x"],points_np["y"],points_np["z"]), axis=1), device='cuda')
161 | if "nx" in points_np.dtype.fields.keys():
162 | data["normals"] = torch.tensor(np.stack((points_np["nx"],points_np["ny"],points_np["nz"]), axis=1), device='cuda')
163 | if mesh_size:
164 | mesh_np = np.fromfile(ply, dtype=dtypes["face"], count=mesh_size)
165 | if ext != sys_byteorder:
166 | mesh_np = mesh_np.byteswap().newbyteorder()
167 |
168 | assert (mesh_np["n_points"] == 3*np.ones_like(mesh_np["n_points"])).all(), "Only triangle meshes are supported!"
169 |
170 | data["faces"] = torch.tensor(np.stack((mesh_np["v1"], mesh_np["v2"], mesh_np["v3"]), axis=1), device='cuda')
171 |
172 | return data
173 |
174 | def write_ply(filename, v, f, n=None, vc = None, ascii=False):
175 | """
176 | Write a mesh as PLY with vertex colors.
177 |
178 | Parameters
179 | ----------
180 | filename : str
181 | Path to which to save the mesh.
182 | v : numpy.ndarray
183 | Vertex positions.
184 | f : numpy.ndarray
185 | Faces.
186 | n : numpy.ndarray
187 | Vertex normals (optional).
188 | vc : numpy.ndarray
189 | Vertex colors (optional). Expects colors as floats in [0,1]
190 | ascii : bool
191 | Whether we write a text or binary PLY file (defaults to binary as it is more efficient)
192 | """
193 | if vc is not None:
194 | color = (vc*255).astype(np.uint8)
195 | # headers
196 | string = 'ply\n'
197 | if ascii:
198 | string = string + 'format ascii 1.0\n'
199 | else:
200 | string = string + 'format binary_' + sys.byteorder + '_endian 1.0\n'
201 |
202 | string = string + 'element vertex ' + str(v.shape[0]) + '\n'
203 | string = string + 'property double x\n'
204 | string = string + 'property double y\n'
205 | string = string + 'property double z\n'
206 | if n is not None and n.shape[0] == v.shape[0]:
207 | string = string + 'property double nx\n'
208 | string = string + 'property double ny\n'
209 | string = string + 'property double nz\n'
210 |
211 | if (vc is not None and vc.shape[0] == v.shape[0]):
212 | string = string + 'property uchar red\n'
213 | string = string + 'property uchar green\n'
214 | string = string + 'property uchar blue\n'
215 |
216 | # end of header
217 | string = string + 'element face ' + str(f.shape[0]) + '\n'
218 | string = string + 'property list int int vertex_indices\n'
219 | string = string + 'end_header\n'
220 | with open(filename, 'w') as file:
221 | file.write(string)
222 | if ascii:
223 | # write vertices
224 | for ii in range(v.shape[0]):
225 | string = f"{v[ii,0]} {v[ii,1]} {v[ii,2]}"
226 | if n is not None and n.shape[0] == v.shape[0]:
227 | string = string + f" {n[ii,0]} {n[ii,1]} {n[ii,2]}"
228 | if (vc is not None and vc.shape[0] == v.shape[0]):
229 | string = string + f" {color[ii,0]:03d} {color[ii,1]:03d} {color[ii,2]:03d}\n"
230 | else:
231 | string = string + '\n'
232 | file.write(string)
233 | # write faces
234 | for ii in range(f.shape[0]):
235 | string = f"3 {f[ii,0]} {f[ii,1]} {f[ii,2]} \n"
236 | file.write(string)
237 |
238 | if not ascii:
239 | # Write binary PLY data
240 | with open(filename, 'ab') as file:
241 | if vc is None:
242 | if n is not None:
243 | vertex_data = np.hstack((v, n))
244 | file.write(vertex_data.astype(np.float64).tobytes())
245 | else:
246 | file.write(v.astype(np.float64).tobytes())
247 | else:
248 | if n is not None:
249 | vertex_data = np.zeros(v.shape[0], dtype='double,double,double,double,double,double,uint8,uint8,uint8')
250 | vertex_data['f0'] = v[:,0].astype(np.float64)
251 | vertex_data['f1'] = v[:,1].astype(np.float64)
252 | vertex_data['f2'] = v[:,2].astype(np.float64)
253 | vertex_data['f3'] = n[:,0].astype(np.float64)
254 | vertex_data['f4'] = n[:,1].astype(np.float64)
255 | vertex_data['f5'] = n[:,2].astype(np.float64)
256 | vertex_data['f6'] = color[:,0]
257 | vertex_data['f7'] = color[:,1]
258 | vertex_data['f8'] = color[:,2]
259 | else:
260 | vertex_data = np.zeros(v.shape[0], dtype='double,double,double,uint8,uint8,uint8')
261 | vertex_data['f0'] = v[:,0].astype(np.float64)
262 | vertex_data['f1'] = v[:,1].astype(np.float64)
263 | vertex_data['f2'] = v[:,2].astype(np.float64)
264 | vertex_data['f3'] = color[:,0]
265 | vertex_data['f4'] = color[:,1]
266 | vertex_data['f5'] = color[:,2]
267 | file.write(vertex_data.tobytes())
268 | # Write faces
269 | faces = np.hstack((3*np.ones((len(f), 1)), f)).astype(np.int32)
270 | file.write(faces.tobytes())
271 |
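A write/read round trip as a sketch; write_ply takes NumPy arrays, while read_ply returns torch tensors and assumes a CUDA device:

import numpy as np
from scripts.io_ply import read_ply, write_ply

v = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
f = np.array([[0, 1, 2]])
write_ply("triangle.ply", v, f)  # binary by default
data = read_ply("triangle.ply")
print(data["vertices"].shape, data["faces"].shape)  # (3, 3) and (1, 3), on the GPU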
--------------------------------------------------------------------------------
/scripts/load_xml.py:
--------------------------------------------------------------------------------
1 | import xml.etree.ElementTree as ET
2 | import os
3 | import torch
4 | import numpy as np
5 | from scripts.io_ply import read_ply
6 | import imageio
7 | imageio.plugins.freeimage.download()
8 |
9 | def rotation_matrix(axis, angle):
10 | """
11 |     Builds a homogeneous coordinate rotation matrix around a given axis
12 |
13 | Parameters
14 | ----------
15 |
16 | axis : str
17 | Axis of rotation, x, y, or z
18 | angle : float
19 | Rotation angle, in degrees
20 | """
21 | assert axis in 'xyz', "Invalid axis, expected x, y or z"
22 | mat = torch.eye(4, device='cuda')
23 | theta = np.deg2rad(angle)
24 | idx = 'xyz'.find(axis)
25 | mat[(idx+1)%3, (idx+1)%3] = np.cos(theta)
26 | mat[(idx+2)%3, (idx+2)%3] = np.cos(theta)
27 | mat[(idx+1)%3, (idx+2)%3] = -np.sin(theta)
28 | mat[(idx+2)%3, (idx+1)%3] = np.sin(theta)
29 | return mat
30 |
31 | def translation_matrix(tr):
32 | """
33 | Builds a homogeneous coordinate translation matrix
34 |
35 | Parameters
36 | ----------
37 |
38 | tr : numpy.array
39 | translation value
40 | """
41 | mat = torch.eye(4, device='cuda')
42 | mat[:3,3] = torch.tensor(tr, device='cuda')
43 | return mat
44 |
45 | def load_scene(filepath):
46 | """
47 | Load the meshes, envmap and cameras from a scene XML file.
48 | We assume the file has the same syntax as Mitsuba 2 scenes.
49 |
50 | Parameters
51 | ----------
52 |
53 | - filepath : os.path
54 | Path to the XML file to load
55 | """
56 | folder, filename = os.path.split(filepath)
57 | scene_name, ext = os.path.splitext(filename)
58 | assert ext == ".xml", f"Unexpected file type: '{ext}'"
59 |
60 | tree = ET.parse(filepath)
61 | root = tree.getroot()
62 |
63 | assert root.tag == 'scene', f"Unknown root type '{root.tag}', expected 'scene'"
64 |
65 | scene_params = {
66 | "view_mats" : []
67 | }
68 |
69 | for plugin in root:
70 | if plugin.tag == "default":
71 | if plugin.attrib["name"] == "resx":
72 | scene_params["res_x"] = int(plugin.attrib["value"])
73 | elif plugin.attrib["name"] == "resy":
74 | scene_params["res_y"] = int(plugin.attrib["value"])
75 | elif plugin.tag == "sensor":
76 | view_mats = scene_params["view_mats"]
77 | view_mat = torch.eye(4, device='cuda')
78 | for prop in plugin:
79 |
80 | if prop.tag == "float":
81 | if prop.attrib["name"] == "fov" and "fov" not in scene_params.keys():
82 | scene_params["fov"] = float(prop.attrib["value"])
83 | elif prop.attrib["name"] == "near_clip" and "near_clip" not in scene_params.keys():
84 | scene_params["near_clip"] = float(prop.attrib["value"])
85 | elif prop.attrib["name"] == "far_clip" and "far_clip" not in scene_params.keys():
86 | scene_params["far_clip"] = float(prop.attrib["value"])
87 | elif prop.tag == "transform":
88 | for tr in prop:
89 | if tr.tag == "rotate":
90 | if "x" in tr.attrib:
91 | view_mat = rotation_matrix("x", float(tr.attrib["angle"])) @ view_mat
92 | elif "y" in tr.attrib:
93 | view_mat = rotation_matrix("y", float(tr.attrib["angle"])) @ view_mat
94 | else:
95 | view_mat = rotation_matrix("z", float(tr.attrib["angle"])) @ view_mat
96 | elif tr.tag == "translate":
97 | view_mat = translation_matrix(np.fromstring(tr.attrib["value"], dtype=float, sep=" ")) @ view_mat
98 | else:
99 | raise NotImplementedError(f"Unsupported transformation tag: '{tr.tag}'")
100 | view_mats.append(view_mat.inverse())
101 | elif plugin.tag == "emitter" and plugin.attrib["type"] == "envmap":
102 | for prop in plugin:
103 | if prop.tag == "string" and prop.attrib["name"] == "filename":
104 | envmap_path = os.path.join(folder, prop.attrib["value"])
105 | envmap = torch.tensor(imageio.imread(envmap_path, format='HDR-FI'), device='cuda')
106 | # Add alpha channel
107 | alpha = torch.ones((*envmap.shape[:2],1), device='cuda')
108 | scene_params["envmap"] = torch.cat((envmap, alpha), dim=-1)
109 | elif prop.tag == "float" and prop.attrib["name"] == "scale":
110 | scene_params["envmap_scale"] = float(prop.attrib["value"])
111 | elif plugin.tag == "shape":
112 | if plugin.attrib["type"] == "ply":
113 | for prop in plugin:
114 | if prop.tag == "string" and prop.attrib["name"] == "filename":
115 | mesh_path = os.path.join(folder, prop.attrib["value"])
116 | assert "id" in plugin.attrib.keys(), "Missing mesh id!"
117 | scene_params[plugin.attrib["id"]] = read_ply(mesh_path)
118 | else:
119 | raise NotImplementedError(f"Unsupported file type '{plugin.attrib['type']}', only PLY is supported currently")
120 |
121 | assert "mesh-source" in scene_params.keys(), "Missing source mesh"
122 | assert "mesh-target" in scene_params.keys(), "Missing target mesh"
123 | assert "envmap" in scene_params.keys(), "Missing envmap"
124 | assert len(scene_params["view_mats"]) > 0, "At least one camera needed"
125 |
126 | return scene_params
127 |
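A sketch of how the two helpers compose into the view matrices stored by load_scene, mirroring one <transform> block (assumes a CUDA device):

import numpy as np
from scripts.load_xml import rotation_matrix, translation_matrix

# Camera-to-world: rotate 45 degrees around y, then translate 4 units along z
cam_to_world = translation_matrix(np.array([0., 0., 4.])) @ rotation_matrix("y", 45.0)
view_mat = cam_to_world.inverse()  # load_scene stores the world-to-camera matrix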
--------------------------------------------------------------------------------
/scripts/main.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import time
3 | import os
4 | from tqdm import tqdm
5 | import numpy as np
6 | import sys
7 |
8 | from largesteps.optimize import AdamUniform
9 | from largesteps.geometry import compute_matrix, laplacian_uniform
10 | from largesteps.parameterize import to_differential, from_differential
11 | from scripts.render import NVDRenderer
12 | from scripts.load_xml import load_scene
13 | from scripts.constants import REMESH_DIR
14 | from scripts.geometry import remove_duplicates, compute_face_normals, compute_vertex_normals, average_edge_length
15 | sys.path.append(REMESH_DIR)
16 | from pyremesh import remesh_botsch
17 |
18 | def optimize_shape(filepath, params):
19 | """
20 | Optimize a shape given a scene.
21 |
22 | This will expect a Mitsuba scene as input containing the cameras, envmap and
23 | source and target models.
24 |
25 | Parameters
26 | ----------
27 |     filepath : str, path to the XML file of the scene to optimize.
28 |     params : dict, dictionary containing all optimization parameters.
29 | """
30 | opt_time = params.get("time", -1) # Optimization time (in minutes)
31 | steps = params.get("steps", 100) # Number of optimization steps (ignored if time > 0)
32 | step_size = params.get("step_size", 0.01) # Step size
33 | boost = params.get("boost", 1) # Gradient boost used in nvdiffrast
34 | smooth = params.get("smooth", True) # Use our method or not
35 | shading = params.get("shading", True) # Use shading, otherwise render silhouettes
36 | reg = params.get("reg", 0.0) # Regularization weight
37 | solver = params.get("solver", 'Cholesky') # Solver to use
38 | lambda_ = params.get("lambda", 1.0) # Hyperparameter lambda of our method, used to compute the parameterization matrix as (I + lambda_ * L)
39 | alpha = params.get("alpha", None) # Alternative hyperparameter, used to compute the parameterization matrix as ((1-alpha) * I + alpha * L)
40 | remesh = params.get("remesh", -1) # Time step(s) at which to remesh
41 | optimizer = params.get("optimizer", AdamUniform) # Which optimizer to use
42 | use_tr = params.get("use_tr", True) # Optimize a global translation at the same time
43 | loss_function = params.get("loss", "l2") # Which loss to use
44 | bilaplacian = params.get("bilaplacian", True) # Use the bilaplacian or the laplacian regularization loss
45 |
46 | # Load the scene
47 | scene_params = load_scene(filepath)
48 |
49 | # Load reference shape
50 | v_ref = scene_params["mesh-target"]["vertices"]
51 | f_ref = scene_params["mesh-target"]["faces"]
52 | if "normals" in scene_params["mesh-target"].keys():
53 | n_ref = scene_params["mesh-target"]["normals"]
54 | else:
55 | face_normals = compute_face_normals(v_ref, f_ref)
56 | n_ref = compute_vertex_normals(v_ref, f_ref, face_normals)
57 |
58 | # Load source shape
59 | v_src = scene_params["mesh-source"]["vertices"]
60 | f_src = scene_params["mesh-source"]["faces"]
61 |     # Remove duplicates. This is necessary to prevent mesh seams from ripping apart during the optimization
62 | v_unique, f_unique, duplicate_idx = remove_duplicates(v_src, f_src)
63 |
64 | # Initialize the renderer
65 | renderer = NVDRenderer(scene_params, shading=shading, boost=boost)
66 |
67 | # Render the reference images
68 | ref_imgs = renderer.render(v_ref, n_ref, f_ref)
69 |
70 | # Compute the laplacian for the regularization term
71 | L = laplacian_uniform(v_unique, f_unique)
72 |
73 | # Initialize the optimized variables and the optimizer
74 | tr = torch.zeros((1,3), device='cuda', dtype=torch.float32)
75 |
76 | if smooth:
77 | # Compute the system matrix and parameterize
78 | M = compute_matrix(v_unique, f_unique, lambda_=lambda_, alpha=alpha)
79 | u_unique = to_differential(M, v_unique)
80 |
81 | def initialize_optimizer(u, v, tr, step_size):
82 | """
83 | Initialize the optimizer
84 |
85 | Parameters
86 | ----------
87 | - u : torch.Tensor or None
88 | Parameterized coordinates to optimize if not None
89 | - v : torch.Tensor
90 | Cartesian coordinates to optimize if u is None
91 | - tr : torch.Tensor
92 | Global translation to optimize if not None
93 | - step_size : float
94 | Step size
95 |
96 | Returns
97 | -------
98 | a torch.optim.Optimizer containing the tensors to optimize.
99 | """
100 | opt_params = []
101 | if tr is not None:
102 | tr.requires_grad = True
103 | opt_params.append(tr)
104 | if u is not None:
105 | u.requires_grad = True
106 | opt_params.append(u)
107 | else:
108 | v.requires_grad = True
109 | opt_params.append(v)
110 |
111 | return optimizer(opt_params, lr=step_size)
112 |
113 | opt = initialize_optimizer(u_unique if smooth else None, v_unique, tr if use_tr else None, step_size)
114 |
115 | # Set values for time and step count
116 | if opt_time > 0:
117 | steps = -1
118 | it = 0
119 | t0 = time.perf_counter()
120 | t = t0
121 | opt_time *= 60
122 |
123 | # Dictionary that is returned in the end, contains useful information for debug/analysis
124 | result_dict = {"vert_steps": [], "tr_steps": [], "f": [f_src.cpu().numpy().copy()],
125 | "losses": [], "im_ref": ref_imgs.cpu().numpy().copy(), "im":[],
126 | "v_ref": v_ref.cpu().numpy().copy(), "f_ref": f_ref.cpu().numpy().copy()}
127 |
128 | if type(remesh) == list:
129 | remesh_it = remesh.pop(0)
130 | else:
131 | remesh_it = remesh
132 |
133 |
134 | # Optimization loop
135 | with tqdm(total=max(steps, opt_time), ncols=100, bar_format="{l_bar}{bar}| {n:.2f}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]") as pbar:
136 | while it < steps or (t-t0) < opt_time:
137 | if it == remesh_it:
138 | # Remesh
139 | with torch.no_grad():
140 | if smooth:
141 | v_unique = from_differential(M, u_unique, solver)
142 |
143 | v_cpu = v_unique.cpu().numpy()
144 | f_cpu = f_unique.cpu().numpy()
145 | # Target edge length
146 | h = (average_edge_length(v_unique, f_unique)).cpu().numpy()*0.5
147 |
148 | # Run 5 iterations of the Botsch-Kobbelt remeshing algorithm
149 | v_new, f_new = remesh_botsch(v_cpu.astype(np.double), f_cpu.astype(np.int32), 5, h, True)
150 |
151 | v_src = torch.from_numpy(v_new).cuda().float().contiguous()
152 | f_src = torch.from_numpy(f_new).cuda().contiguous()
153 |
154 | v_unique, f_unique, duplicate_idx = remove_duplicates(v_src, f_src)
155 | result_dict["f"].append(f_new)
156 | # Recompute laplacian
157 | L = laplacian_uniform(v_unique, f_unique)
158 |
159 | if smooth:
160 | # Compute the system matrix and parameterize
161 | M = compute_matrix(v_unique, f_unique, lambda_=lambda_, alpha=alpha)
162 | u_unique = to_differential(M, v_unique)
163 |
164 | step_size *= 0.8
165 | opt = initialize_optimizer(u_unique if smooth else None, v_unique, tr if use_tr else None, step_size)
166 |
167 | # Get next remesh iteration if any
168 | if type(remesh) == list and len(remesh) > 0:
169 | remesh_it = remesh.pop(0)
170 |
171 | # Get cartesian coordinates
172 | if smooth:
173 | v_unique = from_differential(M, u_unique, solver)
174 |
175 | # Get the version of the mesh with the duplicates
176 | v_opt = v_unique[duplicate_idx]
177 | # Recompute vertex normals
178 | face_normals = compute_face_normals(v_unique, f_unique)
179 | n_unique = compute_vertex_normals(v_unique, f_unique, face_normals)
180 | n_opt = n_unique[duplicate_idx]
181 |
182 | # Render images
183 | opt_imgs = renderer.render(tr + v_opt, n_opt, f_src)
184 |
185 | # Compute image loss
186 | if loss_function == "l1":
187 | im_loss = (opt_imgs - ref_imgs).abs().mean()
188 | elif loss_function == "l2":
189 | im_loss = (opt_imgs - ref_imgs).square().mean()
190 |
191 | # Add regularization
192 | if bilaplacian:
193 | reg_loss = (L@v_unique).square().mean()
194 | else:
195 | reg_loss = (v_unique * (L @v_unique)).mean()
196 |
197 | loss = im_loss + reg * reg_loss
198 |
199 | # Record optimization state for later processing
200 | result_dict["losses"].append((im_loss.detach().cpu().numpy().copy(), (L@v_unique.detach()).square().mean().cpu().numpy().copy()))
201 | result_dict["vert_steps"].append(v_opt.detach().cpu().numpy().copy())
202 | result_dict["tr_steps"].append(tr.detach().cpu().numpy().copy())
203 |
204 | # Backpropagate
205 | opt.zero_grad()
206 | loss.backward()
207 | # Update parameters
208 | opt.step()
209 |
210 | it += 1
211 | t = time.perf_counter()
212 | if steps > -1:
213 | pbar.update(1)
214 | else:
215 | pbar.update(min(opt_time, (t-t0)) - pbar.n)
216 |
217 | result_dict["losses"] = np.array(result_dict["losses"])
218 | return result_dict
219 |
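A minimal invocation sketch; the scene path is a placeholder, and every key mirrors one of the params.get calls above:

from scripts.main import optimize_shape
from largesteps.optimize import AdamUniform

params = {
    "steps": 500,             # number of optimization steps
    "step_size": 0.01,        # optimizer step size
    "smooth": True,           # optimize in the differential parameterization
    "lambda": 19.0,           # weight in M = I + lambda * L
    "loss": "l1",             # or "l2"
    "remesh": -1,             # -1: never; an int or a list of iterations otherwise
    "optimizer": AdamUniform,
}
out = optimize_shape("scenes/suzanne/suzanne.xml", params)  # placeholder scene
v_final = out["vert_steps"][-1] + out["tr_steps"][-1]       # vertices + translation
f_final = out["f"][-1]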
--------------------------------------------------------------------------------
/scripts/preamble.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import subprocess
3 | import sys
4 | import os
5 |
6 | from scripts.constants import *
7 |
8 | import seaborn as sns
9 | import pandas as pd
10 | import matplotlib
11 | import matplotlib.pyplot as plt
12 | import matplotlib.patheffects as path_effects
13 |
14 | matplotlib.rcParams['font.size'] = 18
15 | matplotlib.rcParams['text.usetex'] = True
16 | matplotlib.rcParams['text.latex.preamble'] = r"""\usepackage{libertine}
17 | \usepackage{amsmath}"""
18 | matplotlib.rcParams['pdf.fonttype'] = 42
19 | matplotlib.rcParams['ps.fonttype'] = 42
20 |
21 | sns.set()
22 | sns.set(font_scale=1.1)
23 |
24 | fontsize = 24
25 | basename = os.path.join(OUTPUT_DIR, os.path.basename(os.getcwd()))
26 |
27 | def blender_render(mesh_path, out_path, collection, viewpoint, res=100, area=False, ours=False, baseline=False, wireframe=False, t=0.008):
28 | """
29 | Render a mesh with blender. This method calls a blender python script using
30 | subprocess and an associated blender file containing a readily available
31 | rendering setup.
32 |
33 | Parameters
34 | ----------
35 |
36 | mesh_path : str
37 | Path to the model to render. PLY and OBJ files are supported.
38 | out_path : str
39 | Path to the folder to which the rendering will be saved. It will be named [mesh_name]_[wireframe/smooth].png
40 | collection: str
41 | Name of the collection in the blend file from which to choose a camera.
42 | viewpoint : int
43 | Index of the camera in the given collection.
44 | res : int
45 | Percentage of the full resolution in blender (default 100%)
46 | area : bool
47 |         Visualize vertex area as vertex colors (assumes vertex colors have been precomputed)
48 |     ours : bool
49 |         Whether this mesh was generated using our method or not.
50 |     baseline : bool
51 |         Whether this mesh was generated using a baseline or not.
52 | wireframe : bool
53 | Render the model with or without wireframe.
54 | t : float
55 | Wireframe thickness
56 | """
57 | args = [BLENDER_EXEC, "-b" , BLEND_SCENE, "--python",
58 | os.path.join(os.path.dirname(__file__), "blender_render.py"), "--", "-i", mesh_path,
59 | "-o", out_path, "-c", f"{collection}", "-v", f"{viewpoint}", "-r", f"{res}", "-t", f"{t}"]
60 | if baseline:
61 | args.append("--baseline")
62 | elif ours:
63 | args.append("--ours")
64 | if not wireframe:
65 | args.append("-s")
66 | if area:
67 | args.append("--area")
68 | subprocess.run(args, check=True)
69 |
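Typical use from one of the figure notebooks, with placeholder paths (the collection, viewpoint, and thickness values match the teaser notebook above):

blender_render("output/teaser/res_ours.ply",  # placeholder mesh path
               "output/teaser",               # output folder
               25, 11,                        # collection and viewpoint
               res=400, ours=True, wireframe=True, t=0.004)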
--------------------------------------------------------------------------------
/scripts/render.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import numpy as np
3 | import nvdiffrast.torch as dr
4 |
5 | class SphericalHarmonics:
6 | """
7 | Environment map approximation using spherical harmonics.
8 |
9 | This class implements the spherical harmonics lighting model of [Ramamoorthi
10 | and Hanrahan 2001], that approximates diffuse lighting by an environment map.
11 | """
12 |
13 | def __init__(self, envmap):
14 | """
15 | Precompute the coefficients given an envmap.
16 |
17 | Parameters
18 | ----------
19 | envmap : torch.Tensor
20 | The environment map to approximate.
21 | """
22 | h,w = envmap.shape[:2]
23 |
24 | # Compute the grid of theta, phi values
25 | theta = (torch.linspace(0, np.pi, h, device='cuda')).repeat(w, 1).t()
26 | phi = (torch.linspace(3*np.pi, np.pi, w, device='cuda')).repeat(h,1)
27 |
28 | # Compute the value of sin(theta) once
29 | sin_theta = torch.sin(theta)
30 | # Compute x,y,z
31 | # This differs from the original formulation as here the up axis is Y
32 | x = sin_theta * torch.cos(phi)
33 | z = -sin_theta * torch.sin(phi)
34 | y = torch.cos(theta)
35 |
36 | # Compute the polynomials
37 | Y_0 = 0.282095
38 | # The following are indexed so that using Y_n[-p]...Y_n[p] gives the proper polynomials
39 | Y_1 = [
40 | 0.488603 * z,
41 | 0.488603 * x,
42 | 0.488603 * y
43 | ]
44 | Y_2 = [
45 | 0.315392 * (3*z.square() - 1),
46 | 1.092548 * x*z,
47 | 0.546274 * (x.square() - y.square()),
48 | 1.092548 * x*y,
49 | 1.092548 * y*z
50 | ]
52 | area = w*h
53 | radiance = envmap[..., :3]
54 | dt_dp = 2.0 * np.pi**2 / area
55 |
56 | # Compute the L coefficients
57 | L = [ [(radiance * Y_0 * (sin_theta)[..., None] * dt_dp).sum(dim=(0,1))],
58 | [(radiance * (y * sin_theta)[..., None] * dt_dp).sum(dim=(0,1)) for y in Y_1],
59 | [(radiance * (y * sin_theta)[..., None] * dt_dp).sum(dim=(0,1)) for y in Y_2]]
60 |
61 | # Compute the R,G and B matrices
62 | c1 = 0.429043
63 | c2 = 0.511664
64 | c3 = 0.743125
65 | c4 = 0.886227
66 | c5 = 0.247708
67 |
68 | self.M = torch.stack([
69 | torch.stack([ c1 * L[2][2] , c1 * L[2][-2], c1 * L[2][1] , c2 * L[1][1] ]),
70 | torch.stack([ c1 * L[2][-2], -c1 * L[2][2], c1 * L[2][-1], c2 * L[1][-1] ]),
71 | torch.stack([ c1 * L[2][1] , c1 * L[2][-1], c3 * L[2][0] , c2 * L[1][0] ]),
72 | torch.stack([ c2 * L[1][1] , c2 * L[1][-1], c2 * L[1][0] , c4 * L[0][0] - c5 * L[2][0]])
73 | ]).movedim(2,0)
74 |
75 | def eval(self, n):
76 | """
77 | Evaluate the shading using the precomputed coefficients.
78 |
79 | Parameters
80 | ----------
81 | n : torch.Tensor
82 | Array of normals at which to evaluate lighting.
83 | """
84 | normal_array = n.view((-1, 3))
85 | h_n = torch.nn.functional.pad(normal_array, (0,1), 'constant', 1.0)
86 | l = (h_n.t() * (self.M @ h_n.t())).sum(dim=1)
87 | return l.t().view(n.shape)
88 |
89 | def persp_proj(fov_x=45, ar=1, near=0.1, far=100):
90 | """
91 | Build a perspective projection matrix.
92 |
93 | Parameters
94 | ----------
95 | fov_x : float
96 | Horizontal field of view (in degrees).
97 | ar : float
98 | Aspect ratio (w/h).
99 | near : float
100 | Depth of the near plane relative to the camera.
101 | far : float
102 | Depth of the far plane relative to the camera.
103 | """
104 | fov_rad = np.deg2rad(fov_x)
105 | proj_mat = np.array([[-1.0 / np.tan(fov_rad / 2.0), 0, 0, 0],
106 | [0, np.float32(ar) / np.tan(fov_rad / 2.0), 0, 0],
107 | [0, 0, -(near + far) / (near-far), 2 * far * near / (near-far)],
108 | [0, 0, 1, 0]])
110 | proj = torch.tensor(proj_mat, device='cuda', dtype=torch.float32)
111 | return proj
112 |
113 | class NVDRenderer:
114 | """
115 | Renderer using nvdiffrast.
116 |
117 |
118 | This class wraps the nvdiffrast renderer [Laine et al. 2020] to render
119 | objects from a set of viewpoints with the given rendering parameters.
120 | """
121 | def __init__(self, scene_params, shading=True, boost=1.0):
122 | """
123 | Initialize the renderer.
124 |
125 | Parameters
126 | ----------
127 | scene_params : dict
128 | The scene parameters. Contains the envmap and camera info.
129 | shading : bool
130 | Use shading in the renderings; otherwise render silhouettes. (default True)
131 | boost : float
132 | Factor by which to multiply shading-related gradients. (default 1.0)
133 | """
134 | # We assume all cameras have the same parameters (fov, clipping planes)
135 | near = scene_params["near_clip"]
136 | far = scene_params["far_clip"]
137 | self.fov_x = scene_params["fov"]
138 | w = scene_params["res_x"]
139 | h = scene_params["res_y"]
140 | self.res = (h,w)
141 | ar = w/h
142 | # Build the projection matrix shared by all viewpoints
143 | self.proj_mat = persp_proj(self.fov_x, ar, near, far)
144 |
145 | # Construct the Model-View-Projection matrices
146 | self.view_mats = torch.stack(scene_params["view_mats"])
147 | self.mvps = self.proj_mat @ self.view_mats
148 |
149 | self.boost = boost
150 | self.shading = shading
151 |
152 | # Initialize rasterizing context
153 | self.glctx = dr.RasterizeGLContext()
154 | # Load the environment map
155 | # Note: the envmap tensor is laid out as (h, w, channels)
156 | envmap = scene_params['envmap_scale'] * scene_params['envmap']
157 | # Precompute lighting
158 | self.sh = SphericalHarmonics(envmap)
159 | # Render background for all viewpoints once
160 | self.render_backgrounds(envmap)
161 |
162 | def render_backgrounds(self, envmap):
163 | """
164 | Precompute the background image of each input viewpoint by sampling the envmap.
165 |
166 | Parameters
167 | ----------
168 | envmap : torch.Tensor
169 | The environment map used in the scene.
170 | """
171 | h,w = self.res
172 | pos_int = torch.arange(w*h, dtype = torch.int32, device='cuda')
173 | pos = 0.5 - torch.stack((pos_int % w, pos_int // w), dim=1) / torch.tensor((w,h), device='cuda')
174 | a = np.deg2rad(self.fov_x)/2
175 | r = w/h
176 | f = torch.tensor((2*np.tan(a), 2*np.tan(a)/r), device='cuda', dtype=torch.float32)
177 | rays = torch.cat((pos*f, torch.ones((w*h,1), device='cuda'), torch.zeros((w*h,1), device='cuda')), dim=1)
178 | rays_norm = (rays.transpose(0,1) / torch.norm(rays, dim=1)).transpose(0,1)
179 | rays_view = torch.matmul(rays_norm, self.view_mats.inverse().transpose(1,2)).reshape((self.view_mats.shape[0],h,w,-1))
180 | theta = torch.acos(rays_view[..., 1])
181 | phi = torch.atan2(rays_view[..., 0], rays_view[..., 2])
182 | envmap_uvs = torch.stack([0.75-phi/(2*np.pi), theta / np.pi], dim=-1)
183 | self.bgs = dr.texture(envmap[None, ...], envmap_uvs, filter_mode='linear').flip(1)
184 | self.bgs[..., -1] = 0 # Set alpha to 0
185 |
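
The direction-to-texture mapping used above can be tested in isolation; the sketch below (a hypothetical standalone helper with the same Y-up and 0.75 horizontal-offset conventions as the code) maps a unit direction to equirectangular UVs:

    import numpy as np
    import torch

    def dir_to_envmap_uv(d):
        # d: (..., 3) unit directions, Y up, matching render_backgrounds
        theta = torch.acos(d[..., 1].clamp(-1.0, 1.0))
        phi = torch.atan2(d[..., 0], d[..., 2])
        return torch.stack([0.75 - phi / (2 * np.pi), theta / np.pi], dim=-1)

    up = torch.tensor([0.0, 1.0, 0.0])
    print(dir_to_envmap_uv(up))  # v == 0: the top row of the envmap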
186 | def render(self, v, n, f):
187 | """
188 | Render the scene in a differentiable way.
189 |
190 | Parameters
191 | ----------
192 | v : torch.Tensor
193 | Vertex positions
194 | n : torch.Tensor
195 | Vertex normals
196 | f : torch.Tensor
197 | Model faces
198 |
199 | Returns
200 | -------
201 | result : torch.Tensor
202 | The array of renderings from all given viewpoints
203 | """
204 | v_hom = torch.nn.functional.pad(v, (0,1), 'constant', 1.0)
205 | v_ndc = torch.matmul(v_hom, self.mvps.transpose(1,2))
206 | rast = dr.rasterize(self.glctx, v_ndc, f, self.res)[0]
207 | if self.shading:
208 | v_cols = torch.zeros_like(v)
209 |
210 | # Sample envmap at each vertex using the SH approximation
211 | vert_light = self.sh.eval(n).contiguous()
212 | # Sample incoming radiance
213 | light = dr.interpolate(vert_light[None, ...], rast, f)[0]
214 |
215 | col = torch.cat((light / np.pi, torch.ones((*light.shape[:-1],1), device='cuda')), dim=-1) # Lambertian shading (irradiance/pi, unit albedo) with alpha=1
216 | result = dr.antialias(torch.where(rast[..., -1:] != 0, col, self.bgs), rast, v_ndc, f, pos_gradient_boost=self.boost)
217 | else:
218 | v_cols = torch.ones_like(v)
219 | col = dr.interpolate(v_cols[None, ...], rast, f)[0]
220 | result = dr.antialias(col, rast, v_ndc, f, pos_gradient_boost=self.boost)
221 | return result
222 |
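
Putting the renderer to work, a minimal (hypothetical) inverse-rendering loop might look as follows; `scene_params`, `v_init`, `f`, `ref_imgs`, and `compute_vertex_normals` are stand-ins for what `scripts/main.py` and the geometry helpers actually provide:

    import torch

    renderer = NVDRenderer(scene_params, shading=True)
    v = v_init.clone().requires_grad_()        # vertex positions to optimize
    opt = torch.optim.Adam([v], lr=1e-2)

    for it in range(100):
        n = compute_vertex_normals(v, f)       # hypothetical normal helper
        imgs = renderer.render(v, n, f)        # one RGBA rendering per viewpoint
        loss = (imgs - ref_imgs).abs().mean()  # L1 image loss against references
        opt.zero_grad()
        loss.backward()
        opt.step()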
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | # read the contents of your README file (https://packaging.python.org/en/latest/guides/making-a-pypi-friendly-readme/)
4 | from pathlib import Path
5 | this_directory = Path(__file__).parent
6 | readme = (this_directory / "README.md").read_text()
7 |
8 | setup(
9 | name='largesteps',
10 | version='0.2.2',
11 | description='Laplacian parameterization package for shape optimization with differentiable rendering',
12 | url='https://github.com/rgl-epfl/large-steps-pytorch',
13 | author='Baptiste Nicolet',
14 | author_email='baptiste.nicolet@epfl.ch',
15 | license='BSD',
16 | packages=['largesteps'],
17 | install_requires=['numpy',
18 | 'scipy',
19 | 'cholespy'
20 | ],
21 | long_description=readme,
22 | long_description_content_type="text/markdown"
23 | )
24 |
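
Once installed (e.g. with `pip install .` from the repository root), the package can be smoke-tested by importing its main entry points; the sketch below assumes the module layout shown under `largesteps/` above:

    # Parameterization and optimization API (see largesteps/ in the tree above).
    from largesteps.geometry import compute_matrix
    from largesteps.parameterize import from_differential, to_differential
    from largesteps.optimize import AdamUniform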
--------------------------------------------------------------------------------
/setup_dependencies.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Make sure all dependencies are installed
4 | sudo apt-get update && sudo apt-get install -y --no-install-recommends \
5 | pkg-config \
6 | libglvnd0 \
7 | libgl1 \
8 | libglx0 \
9 | libegl1 \
10 | libgles2 \
11 | libglvnd-dev \
12 | libgl1-mesa-dev \
13 | libegl1-mesa-dev \
14 | libgles2-mesa-dev \
15 | cmake \
16 | curl \
17 | dvipng \
18 | texlive-latex-extra \
19 | texlive-fonts-recommended \
20 | cm-super \
21 | libeigen3-dev
22 |
23 | # Make sure submodules are checked out
24 | git submodule update --init --recursive
25 |
26 | # Botsch-Kobbelt remesher
27 | cd ext/botsch-kobbelt-remesher-libigl
28 | mkdir -p build
29 | cd build
30 | cmake ..
31 | make -j
32 | cd ../../..
33 |
34 | # nvdiffrast
35 | cd ext/nvdiffrast
36 | pip install .
37 |
--------------------------------------------------------------------------------