├── .gitignore
├── LICENSE
├── README.md
├── airplane.obj
├── datagen_maps.py
├── maps
│   ├── __init__.py
│   ├── geometry.py
│   ├── maps.py
│   └── utils.py
├── requirements.txt
├── scripts
│   ├── coseg-aliens
│   │   ├── get_data.sh
│   │   ├── get_pretrained.sh
│   │   ├── test.sh
│   │   └── train.sh
│   ├── coseg-vases
│   │   ├── get_data.sh
│   │   ├── get_pretrained.sh
│   │   ├── test.sh
│   │   └── train.sh
│   ├── cubes
│   │   ├── get_data.sh
│   │   ├── get_pretrained.sh
│   │   ├── test.sh
│   │   └── train.sh
│   ├── humanbody
│   │   ├── get_data.sh
│   │   ├── get_pretrained.sh
│   │   ├── test.sh
│   │   └── train.sh
│   ├── manifold40
│   │   ├── get_data.sh
│   │   ├── get_pretrained.sh
│   │   ├── test.sh
│   │   └── train.sh
│   ├── shrec11-split10
│   │   ├── get_data.sh
│   │   ├── get_pretrained.sh
│   │   ├── test.sh
│   │   └── train.sh
│   └── shrec11-split16
│       ├── get_data.sh
│       ├── get_pretrained.sh
│       ├── test.sh
│       └── train.sh
├── subdivnet
│   ├── __init__.py
│   ├── dataset.py
│   ├── deeplab.py
│   ├── mesh_ops.py
│   ├── mesh_tensor.py
│   ├── network.py
│   └── utils.py
├── teaser.jpg
├── train_cls.py
└── train_seg.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
 1 | __pycache__
 2 | *.obj
 3 | *.ply
 4 | *.swp
 5 | *.ipynb
 6 | *.pdf
 7 | *.zip
 8 | *.tgz
 9 | *.txt
10 | *.log
11 | *.pyc
12 | data
13 | build/
14 | venv/
15 | temp/
16 | logs/
17 | checkpoints/
18 | results/

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
 1 | MIT License
 2 |
 3 | Copyright (c) 2021 Zheng-Ning Liu
 4 |
 5 | Permission is hereby granted, free of charge, to any person obtaining a copy
 6 | of this software and associated documentation files (the "Software"), to deal
 7 | in the Software without restriction, including without limitation the rights
 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and
this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # Subdivision-based Mesh Convolutional Networks
 2 |
 3 | The implementation of `SubdivNet` from our paper, [Subdivision-based Mesh Convolutional Networks](https://cg.cs.tsinghua.edu.cn/papers/TOG-2022-SubdivNet.pdf).
 4 |
 5 | ![teaser](teaser.jpg)
 6 |
 7 | ## News
 8 | * 🔥This paper was accepted by [ACM TOG](https://dl.acm.org/doi/10.1145/3506694).
 9 |
10 | ## Features
11 | * Provides implementations of mesh classification and segmentation on various datasets.
12 | * Provides ready-to-use datasets, pretrained models, and training and evaluation scripts.
13 | * Supports batches of meshes with different numbers of faces.
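A small illustration of the last feature (a purely hypothetical NumPy helper, not the repo's actual implementation, which lives in Jittor-based `subdivnet/mesh_tensor.py`): meshes with different face counts can share one batch by padding per-face features to the largest mesh and tracking validity with a boolean mask.

```python
import numpy as np

def batch_face_features(per_mesh_feats):
    """Pad a list of (F_i, C) per-face feature arrays into one (B, F_max, C)
    batch plus a boolean mask marking which rows are real faces.

    Hypothetical helper for illustration; not part of the SubdivNet code.
    """
    f_max = max(f.shape[0] for f in per_mesh_feats)
    channels = per_mesh_feats[0].shape[1]
    out = np.zeros((len(per_mesh_feats), f_max, channels), dtype=np.float32)
    mask = np.zeros((len(per_mesh_feats), f_max), dtype=bool)
    for i, feats in enumerate(per_mesh_feats):
        out[i, : len(feats)] = feats   # real faces at the front
        mask[i, : len(feats)] = True   # padding rows stay False
    return out, mask

# Two meshes with 500 and 2000 faces, 8 arbitrary features per face
feats, mask = batch_face_features([np.ones((500, 8)), np.ones((2000, 8))])
# feats.shape == (2, 2000, 8); mask rows have 500 and 2000 True entries
```

Pooling and loss computation can then simply skip entries where the mask is `False`.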
14 |
15 | ## Requirements
16 | * python3.7+
17 | * CUDA 10.1+
18 | * [Jittor](https://github.com/Jittor/jittor)
19 |
20 | To install the other python requirements:
21 |
22 | ```
23 | pip install -r requirements.txt
24 | ```
25 |
26 | ## Fetch Data
27 | This repo provides training scripts for classification and segmentation
28 | on the following datasets:
29 |
30 | - shrec11-split10
31 | - shrec11-split16
32 | - cubes
33 | - manifold40 (based on ModelNet40)
34 | - humanbody
35 | - coseg-aliens
36 | - coseg-vases
37 |
38 | To download the preprocessed data, run
39 |
40 | ```
41 | sh scripts/<dataset_name>/get_data.sh
42 | ```
43 |
44 | > The `Manifold40` dataset (before remeshing, i.e. without subdivision connectivity) can be downloaded via [this link](https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/Manifold40.zip).
45 | > Note that this version cannot be used as input to SubdivNet. To train SubdivNet, run `scripts/manifold40/get_data.sh`.
46 |
47 | ## Training
48 | To train the model(s) in the paper, run this command:
49 |
50 | ```
51 | sh scripts/<dataset_name>/train.sh
52 | ```
53 |
54 | To speed up training, you can use multiple GPUs. First install `OpenMPI`:
55 |
56 | ```
57 | sudo apt install openmpi-bin openmpi-common libopenmpi-dev
58 | ```
59 |
60 | Then run the following command:
61 |
62 | ```
63 | CUDA_VISIBLE_DEVICES="2,3" mpirun -np 2 sh scripts/<dataset_name>/train.sh
64 | ```
65 |
66 | ## Evaluation
67 |
68 | To evaluate the model on a dataset, run:
69 |
70 | ```
71 | sh scripts/<dataset_name>/test.sh
72 | ```
73 |
74 | Pretrained weights are provided. Run the following command to download them:
75 |
76 | ```
77 | sh scripts/<dataset_name>/get_pretrained.sh
78 | ```
79 |
80 | ## Visualize
81 | After testing the segmentation network, colored meshes are saved in the `results` directory.
82 |
83 | ## How to apply SubdivNet to your own data
84 | SubdivNet cannot be applied directly to arbitrary meshes, because it requires its input to have subdivision connectivity.
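As a rough sanity check on that requirement (an illustrative helper, not code from this repo): a mesh produced by d rounds of 1-to-4 triangle subdivision of a base mesh with `base_faces` faces has exactly `base_faces * 4**d` faces, so the face count alone already rules many meshes out.

```python
def subdivision_depth(num_faces, base_faces):
    """Return the number of 1-to-4 subdivision rounds d such that
    num_faces == base_faces * 4**d, or None if no such d exists.

    Illustrative helper, not part of the SubdivNet code base.
    """
    if base_faces <= 0 or num_faces % base_faces != 0:
        return None
    quotient, depth = num_faces // base_faces, 0
    while quotient % 4 == 0:  # peel off one subdivision round at a time
        quotient //= 4
        depth += 1
    return depth if quotient == 1 else None

assert subdivision_depth(8000, 500) == 2     # 500 -> 2000 -> 8000 faces
assert subdivision_depth(7500, 500) is None  # 15x is not a power of four
```

Passing this check is necessary but not sufficient: the faces must also be nested the way subdivision actually produces them, which is what the remeshing tool below establishes.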
85 |
86 | To create your own data with subdivision connectivity, you may use the provided
87 | tool that implements the MAPS algorithm. You may also refer to [NeuralSubdivision](https://github.com/HTDerekLiu/neuralSubdiv), as they provide a MATLAB script for remeshing.
88 |
89 | To run our implementation of the MAPS algorithm, first install the following python dependencies:
90 |
91 | ```
92 | triangle
93 | pymeshlab
94 | shapely
95 | sortedcollections
96 | networkx
97 | rtree
98 | ```
99 |
100 | Then see `datagen_maps.py` and adapt the configuration to remesh your 3D shapes for subdivision connectivity.
101 |
102 | ## Cite
103 | Please cite our paper if you use this code in your own work:
104 |
105 | ```
106 | @article{subdivnet-tog-2022,
107 |   author = {Shi{-}Min Hu and
108 |             Zheng{-}Ning Liu and
109 |             Meng{-}Hao Guo and
110 |             Junxiong Cai and
111 |             Jiahui Huang and
112 |             Tai{-}Jiang Mu and
113 |             Ralph R. Martin},
114 |   title = {Subdivision-based Mesh Convolution Networks},
115 |   journal = {{ACM} Trans.
Graph.}, 116 | volume = {41}, 117 | number = {3}, 118 | pages = {25:1--25:16}, 119 | year = {2022}, 120 | url = {https://doi.org/10.1145/3506694} 121 | } 122 | ``` 123 | -------------------------------------------------------------------------------- /airplane.obj: -------------------------------------------------------------------------------- 1 | #### 2 | # 3 | # OBJ File Generated by Meshlab 4 | # 5 | #### 6 | # Object res.obj 7 | # 8 | # Vertices: 252 9 | # Faces: 500 10 | # 11 | #### 12 | vn -1.818911 -1.900910 -4.596037 13 | v 0.386287 0.073519 0.021419 14 | vn -2.220529 -2.597704 -2.219186 15 | v 0.381087 0.006689 0.049242 16 | vn -3.255302 -3.081007 2.499257 17 | v 0.376095 0.020398 0.087676 18 | vn 0.325804 -2.523358 -0.260939 19 | v 0.401754 -0.010577 0.061831 20 | vn 1.172405 -3.532368 4.104272 21 | v 0.406801 0.020344 0.095303 22 | vn -5.017855 -0.945036 -2.480894 23 | v 0.346179 0.103447 0.054721 24 | vn -5.242218 -1.918049 1.555667 25 | v 0.349280 0.082821 0.095246 26 | vn 3.381683 -2.495771 -2.600947 27 | v 0.421985 0.020914 0.049099 28 | vn 2.246969 -1.617445 -5.029899 29 | v 0.414187 0.053411 0.030480 30 | vn 0.073350 -1.178055 -1.238468 31 | v 0.393686 0.081595 -0.003681 32 | vn -0.659743 0.429913 -1.494337 33 | v 0.385199 0.106221 -0.008578 34 | vn -3.212474 0.888969 -4.872225 35 | v 0.377469 0.120440 0.027094 36 | vn 2.963130 0.339968 -5.058470 37 | v 0.418814 0.116065 0.024289 38 | vn 0.395091 0.117281 -6.183616 39 | v 0.403395 0.236347 0.025910 40 | vn -5.893115 -0.509943 -1.072346 41 | v 0.336269 0.143075 0.075068 42 | vn -5.793441 -0.648379 1.437600 43 | v 0.335298 0.161254 0.103178 44 | vn -2.856374 -1.336746 4.447762 45 | v 0.365521 0.131820 0.146151 46 | vn 0.147349 -2.826749 5.074992 47 | v 0.399536 0.072281 0.127111 48 | vn -2.217262 -0.062775 -5.656885 49 | v 0.373513 0.263540 0.029318 50 | vn -1.769584 -2.686706 -3.492004 51 | v 0.175981 0.491634 0.051613 52 | vn -1.907983 -3.376162 -2.134472 53 | v 0.138570 0.503843 0.061171 54 | vn 
-1.494782 -3.134879 3.306911 55 | v 0.176402 0.490598 0.076325 56 | vn -0.390273 -1.004054 6.063662 57 | v 0.227769 0.477841 0.079028 58 | vn -2.572677 -3.859135 -2.227481 59 | v 0.238720 0.447513 0.055435 60 | vn -2.038583 -3.635698 3.482644 61 | v 0.249778 0.443329 0.069594 62 | vn -3.158579 -2.169806 -4.503831 63 | v 0.251981 0.472398 0.041061 64 | vn 0.484487 0.784892 6.114481 65 | v 0.223752 0.523760 0.079079 66 | vn -4.548948 -0.688584 3.407137 67 | v 0.342366 0.224577 0.134461 68 | vn -3.859906 1.204814 -1.869664 69 | v 0.337988 0.292980 0.120894 70 | vn -0.216058 0.131010 -6.094511 71 | v 0.398356 0.456319 0.026003 72 | vn -5.840858 -0.670967 1.516996 73 | v 0.333340 0.401932 0.089065 74 | vn -3.388629 -0.843040 -4.608632 75 | v 0.354231 0.376918 0.039526 76 | vn -4.675984 0.033496 3.600641 77 | v 0.347415 0.398764 0.136942 78 | vn -2.572566 1.098967 5.135309 79 | v 0.351205 0.300622 0.141275 80 | vn -2.945797 -3.742052 2.011503 81 | v 0.315720 0.400218 0.065412 82 | vn -2.777013 -3.702644 -3.420667 83 | v 0.310537 0.408482 0.044202 84 | vn -0.316295 -1.681304 -3.208075 85 | v 0.272864 0.471188 0.008273 86 | vn -0.979914 -2.597688 -5.347084 87 | v 0.281737 0.447848 0.039488 88 | vn -4.857898 -0.550031 -1.647363 89 | v 0.247313 0.496710 0.026664 90 | vn 3.947630 -2.185500 -3.402107 91 | v 0.295599 0.470614 0.027119 92 | vn 1.936322 4.648206 0.051776 93 | v 0.298492 0.549916 0.053336 94 | vn -3.537682 -0.223209 3.420401 95 | v 0.331646 0.433389 0.076074 96 | vn -3.844935 2.254111 1.517515 97 | v 0.334966 0.540126 0.070791 98 | vn -0.963725 0.526342 -5.102223 99 | v 0.314569 0.489065 0.037735 100 | vn -0.645327 -2.458678 -1.616640 101 | v 0.338161 0.471389 0.002632 102 | vn -1.430089 1.394617 -1.436432 103 | v 0.326105 0.510820 -0.000988 104 | vn 1.538287 0.437844 -1.532294 105 | v 0.368662 0.501442 -0.004942 106 | vn 0.695879 -1.468482 -4.855189 107 | v 0.362876 0.475333 0.035076 108 | vn -0.517430 2.166030 -4.720254 109 | v 0.361623 0.515832 0.040358 110 | 
vn -1.593942 0.377348 -5.936710 111 | v 0.383217 0.538233 0.032252 112 | vn -4.150124 -2.768777 0.066735 113 | v 0.333835 0.637453 0.107410 114 | vn 1.211993 -1.861641 5.271059 115 | v 0.413128 0.121319 0.149248 116 | vn -2.284795 -1.321617 0.847941 117 | v 0.278978 0.274995 0.139222 118 | vn -1.816554 -0.227505 -4.293857 119 | v 0.288427 0.289844 0.119211 120 | vn -1.179537 -2.191916 -0.557436 121 | v 0.314068 0.246548 0.130888 122 | vn 0.836799 -1.119790 5.273376 123 | v 0.335436 0.261006 0.146142 124 | vn -1.304804 1.539173 4.660873 125 | v 0.287899 0.302280 0.144427 126 | vn -0.722920 0.974883 -0.147212 127 | v 0.276473 0.319531 0.130428 128 | vn 0.738069 4.432898 -0.910318 129 | v 0.328494 0.307231 0.132314 130 | vn -1.734891 -0.176317 5.771070 131 | v 0.381681 0.215870 0.159378 132 | vn 1.040042 -0.418080 5.851358 133 | v 0.411629 0.193107 0.160284 134 | vn -1.816162 0.038850 5.838817 135 | v 0.381144 0.344920 0.158029 136 | vn -1.666422 -0.039540 5.803757 137 | v 0.382634 0.494120 0.159630 138 | vn -0.211888 -1.359048 4.854435 139 | v 0.349210 0.658966 0.139778 140 | vn 1.490869 0.022005 5.855552 141 | v 0.415419 0.551124 0.160244 142 | vn -3.744126 -0.022473 -1.802514 143 | v -0.009460 0.592470 0.062271 144 | vn -3.261746 -1.165070 2.080378 145 | v -0.004514 0.567189 0.091601 146 | vn -0.454255 -2.635704 -0.971153 147 | v 0.016356 0.539695 0.068407 148 | vn -0.007228 -2.280888 3.065984 149 | v 0.019689 0.550875 0.097617 150 | vn -0.033056 -1.114531 -4.863469 151 | v 0.020657 0.573913 0.050007 152 | vn 1.128965 -3.381309 2.475656 153 | v 0.045957 0.557829 0.083820 154 | vn 0.995452 -1.848534 -5.434634 155 | v 0.044492 0.572107 0.062150 156 | vn 2.459820 2.684070 3.507313 157 | v 0.047773 0.616638 0.086525 158 | vn -2.170018 2.164979 -0.449778 159 | v 0.000964 0.650543 0.071682 160 | vn 0.714763 2.043837 -2.281247 161 | v 0.022663 0.645184 0.054524 162 | vn -0.144135 1.662301 1.513592 163 | v 0.014615 0.651842 0.095662 164 | vn 2.804857 2.951347 -3.732777 165 
| v 0.046405 0.615552 0.066431 166 | vn -2.018881 -3.441211 1.944018 167 | v 0.099779 0.526995 0.076143 168 | vn -1.785379 -2.271312 -4.319842 169 | v 0.096244 0.537637 0.058273 170 | vn 0.077053 -0.338861 6.098225 171 | v 0.113183 0.554223 0.085786 172 | vn 0.739053 2.853732 -4.214336 173 | v 0.110279 0.595204 0.059609 174 | vn 1.380032 3.614893 1.759203 175 | v 0.084519 0.610418 0.078779 176 | vn -0.395682 0.616585 -6.063751 177 | v 0.163901 0.543705 0.050543 178 | vn 1.170817 3.411679 -2.495974 179 | v 0.159424 0.580322 0.059974 180 | vn 1.414701 3.261083 3.928454 181 | v 0.172558 0.570569 0.077440 182 | vn 1.023372 4.345704 -2.569836 183 | v 0.207630 0.562720 0.060221 184 | vn -3.312485 2.444597 -3.022343 185 | v 0.251090 0.538115 0.049115 186 | vn -0.429529 4.665478 3.087092 187 | v 0.261296 0.548105 0.062891 188 | vn -1.656059 1.053108 -3.991465 189 | v 0.264435 0.541095 0.008251 190 | vn -1.499472 2.707390 -0.423942 191 | v 0.259952 0.567309 0.030017 192 | vn 1.938379 2.006983 -1.625064 193 | v 0.293130 0.559652 0.018512 194 | vn -2.016145 -1.852894 -3.111574 195 | v 0.286669 0.651138 0.093532 196 | vn -2.398081 -2.523971 -0.128117 197 | v 0.275042 0.641904 0.123485 198 | vn -0.121173 -1.250870 0.177133 199 | v 0.308266 0.669243 0.127313 200 | vn -5.045503 0.972977 0.032464 201 | v 0.269082 0.725005 0.129074 202 | vn 0.777198 -5.526930 1.835159 203 | v 0.329167 0.641733 0.122387 204 | vn -0.296569 1.586784 -5.191855 205 | v 0.312247 0.728529 0.089680 206 | vn -2.994278 1.577093 -4.919357 207 | v 0.351467 0.551047 0.049695 208 | vn -3.315053 -0.484395 -4.424528 209 | v 0.335338 0.662970 0.090325 210 | vn -4.372446 0.657180 -3.972560 211 | v 0.354125 0.702635 0.059467 212 | vn -1.709620 0.558485 -5.791330 213 | v 0.380991 0.685597 0.041950 214 | vn 1.489893 0.445149 -5.757706 215 | v 0.417048 0.648294 0.037196 216 | vn -3.727516 1.246945 -3.748893 217 | v 0.282000 0.728235 0.102219 218 | vn 0.012053 0.161786 0.036822 219 | v 0.307857 0.741607 0.127868 220 | vn 
-4.990089 1.881616 1.868420 221 | v 0.281636 0.752153 0.141468 222 | vn -1.246049 1.962713 0.661238 223 | v 0.293381 0.800183 0.134461 224 | vn 0.356967 1.646288 -5.935877 225 | v 0.321748 0.756946 0.107463 226 | vn -1.371223 2.045862 -3.908842 227 | v 0.302695 0.783581 0.106021 228 | vn -0.286459 3.164595 4.039473 229 | v 0.329256 0.795058 0.136147 230 | vn -2.192999 1.342831 -4.482613 231 | v 0.339248 0.739069 0.101688 232 | vn 0.174128 0.569490 4.916376 233 | v 0.352620 0.727150 0.136690 234 | vn -5.476778 1.550922 -2.392234 235 | v 0.352645 0.802044 0.093439 236 | vn -3.360076 1.424456 -4.398422 237 | v 0.372603 0.822504 0.066442 238 | vn -0.070306 1.203538 -5.695435 239 | v 0.395142 0.795498 0.053013 240 | vn -3.407852 3.693131 -2.970562 241 | v 0.339118 0.797797 0.118315 242 | vn -5.431265 1.156812 2.336916 243 | v 0.363574 0.833349 0.131876 244 | vn 1.416840 2.470682 -3.549061 245 | v 0.408633 0.900770 0.082559 246 | vn -2.701094 2.810631 -2.616990 247 | v 0.379840 0.910047 0.096616 248 | vn -2.679079 3.093817 -0.089829 249 | v 0.379530 0.927300 0.128630 250 | vn 2.202698 2.925559 -0.082353 251 | v 0.415449 0.931212 0.131783 252 | vn -1.851666 -1.867946 2.686725 253 | v 0.286483 0.648509 0.162599 254 | vn -0.049888 -5.440334 -0.196685 255 | v 0.307225 0.641882 0.148221 256 | vn 1.518846 -2.280421 3.033426 257 | v 0.326282 0.648195 0.163004 258 | vn -1.722396 0.057499 5.759770 259 | v 0.383180 0.709572 0.158503 260 | vn 2.347115 -0.099952 5.653850 261 | v 0.415472 0.747369 0.157705 262 | vn -2.747085 0.865577 4.522672 263 | v 0.288217 0.730463 0.160205 264 | vn 2.040313 0.782148 4.971741 265 | v 0.327314 0.725597 0.163393 266 | vn 0.151794 2.050787 1.079852 267 | v 0.312036 0.789021 0.148063 268 | vn -1.357507 -0.417519 5.429835 269 | v 0.395456 0.788056 0.166659 270 | vn -4.383416 -1.946738 3.190756 271 | v 0.384832 0.822003 0.170225 272 | vn 2.892943 -0.958010 5.002020 273 | v 0.408084 0.804453 0.165855 274 | vn 1.795526 -3.101875 2.589365 275 | v 0.406722 
0.845121 0.209622 276 | vn -2.084076 -0.753389 0.320279 277 | v 0.300745 0.955248 0.244785 278 | vn -2.083634 -3.143590 1.166485 279 | v 0.324720 0.934940 0.244684 280 | vn -1.720647 -2.513576 -4.240732 281 | v 0.327934 0.940703 0.232145 282 | vn -1.528333 1.668415 -3.517470 283 | v 0.313968 0.983564 0.230294 284 | vn -0.920695 1.549160 0.390248 285 | v 0.309605 0.995634 0.245423 286 | vn -0.692009 0.265727 5.686952 287 | v 0.327063 0.968072 0.255476 288 | vn -5.217898 2.544494 0.608618 289 | v 0.386870 0.918128 0.151608 290 | vn -5.678909 -1.709138 0.389437 291 | v 0.380150 0.891650 0.224026 292 | vn -0.325397 -1.908416 3.846939 293 | v 0.395763 0.896024 0.261430 294 | vn -4.453327 -0.255012 -3.206723 295 | v 0.379425 0.927252 0.225955 296 | vn 2.413608 3.808005 -0.312449 297 | v 0.406624 0.929346 0.150986 298 | vn 3.599632 2.553759 -3.631677 299 | v 0.409090 0.971113 0.227085 300 | vn -0.543022 3.218277 -2.314292 301 | v 0.396699 0.971049 0.203839 302 | vn -2.665173 -2.093935 1.220388 303 | v 0.360932 0.912182 0.247475 304 | vn -3.465944 2.484882 -3.501393 305 | v 0.385355 0.965756 0.229544 306 | vn -3.631704 3.272227 1.556783 307 | v 0.383659 0.979249 0.248191 308 | vn -0.825630 1.951817 3.468868 309 | v 0.396263 0.989005 0.260685 310 | vn -0.095315 1.805026 -0.259376 311 | v 0.398927 1.007744 0.239099 312 | vn 4.281759 -2.443429 2.121532 313 | v 0.432453 0.041540 0.092034 314 | vn 5.008636 -1.010796 -2.502161 315 | v 0.449376 0.103459 0.052560 316 | vn 3.182519 0.047375 -5.118413 317 | v 0.430175 0.257926 0.033201 318 | vn 5.853323 -0.550719 -1.232442 319 | v 0.460801 0.140609 0.074134 320 | vn 5.623761 -1.241221 1.411248 321 | v 0.459403 0.131391 0.100700 322 | vn 3.760383 -1.067656 4.243339 323 | v 0.439272 0.148875 0.143945 324 | vn 4.951804 -2.373347 -1.085827 325 | v 0.457878 0.382912 0.066048 326 | vn 5.146035 -2.036479 -0.897204 327 | v 0.464036 0.234262 0.116541 328 | vn 3.307917 -1.112126 4.792291 329 | v 0.450606 0.229677 0.141143 330 | vn 4.103611 
1.433831 -2.294827 331 | v 0.461887 0.293344 0.112140 332 | vn 2.206616 -0.896840 -5.583089 333 | v 0.436260 0.391968 0.034792 334 | vn 2.839852 1.143326 4.879647 335 | v 0.448908 0.302561 0.139927 336 | vn 0.821674 2.128070 0.796761 337 | v 0.518572 0.322260 0.135772 338 | vn 1.526299 1.963335 -2.763247 339 | v 0.519775 0.312545 0.117426 340 | vn 2.109939 -0.794066 -0.349551 341 | v 0.535831 0.280209 0.127273 342 | vn 2.640119 -3.822125 1.018284 343 | v 0.486834 0.241907 0.130671 344 | vn 1.849805 -4.452967 -2.442508 345 | v 0.464680 0.387592 0.049927 346 | vn 3.695956 -0.653416 3.198706 347 | v 0.462715 0.416625 0.075752 348 | vn 3.968024 0.019016 4.743064 349 | v 0.443585 0.539766 0.144033 350 | vn 1.443967 -2.780236 -5.026480 351 | v 0.518364 0.442216 0.040774 352 | vn 2.608454 -3.910162 -0.670575 353 | v 0.526254 0.423708 0.058445 354 | vn -1.012202 -0.443422 -4.833422 355 | v 0.428778 0.480848 0.034357 356 | vn -2.006528 -1.324340 -1.538815 357 | v 0.431136 0.482037 0.000808 358 | vn 1.982984 -1.393866 -1.689078 359 | v 0.474251 0.482542 -0.000335 360 | vn -1.111046 1.487008 -2.128103 361 | v 0.442804 0.510832 -0.008142 362 | vn 1.299013 -1.955523 -4.782113 363 | v 0.465389 0.471901 0.037055 364 | vn 2.027092 2.515455 -1.869015 365 | v 0.466746 0.512508 -0.000331 366 | vn 2.366426 2.265899 -4.216979 367 | v 0.473788 0.505056 0.037768 368 | vn 3.055345 0.232655 3.073897 369 | v 0.461897 0.492297 0.074712 370 | vn 1.760005 -2.934580 4.935444 371 | v 0.511534 0.425313 0.071108 372 | vn 3.511567 -2.593676 2.828824 373 | v 0.457213 0.636570 0.126806 374 | vn -2.343295 -1.033618 -5.203116 375 | v 0.497986 0.478722 0.035371 376 | vn -1.557782 2.462046 3.758469 377 | v 0.465260 0.542483 0.069808 378 | vn -2.480306 -1.696159 -3.860032 379 | v 0.511689 0.482116 0.011505 380 | vn 1.340173 -2.559363 -2.554425 381 | v 0.535313 0.459692 0.019487 382 | vn 1.608738 0.314067 -3.503633 383 | v 0.536137 0.514182 0.002676 384 | vn 3.262315 -1.574329 -4.601983 385 | v 0.552006 
0.475906 0.041684 386 | vn 0.033444 -0.371412 6.106999 387 | v 0.590538 0.501037 0.081226 388 | vn 1.982260 -3.641886 2.718531 389 | v 0.613471 0.479588 0.072271 390 | vn 2.216184 -3.095688 -3.442418 391 | v 0.608639 0.478877 0.053368 392 | vn 1.950949 -2.959469 -2.580627 393 | v 0.665531 0.510631 0.058423 394 | vn 2.403180 0.010614 4.245609 395 | v 0.523873 0.292353 0.142428 396 | vn -3.725646 3.365160 -1.874495 397 | v 0.440722 0.516006 0.041026 398 | vn 3.309675 1.877366 -3.973206 399 | v 0.451579 0.549805 0.052795 400 | vn 4.392516 0.481223 -4.181422 401 | v 0.442224 0.648709 0.053196 402 | vn -1.789264 4.961997 -0.299997 403 | v 0.501511 0.550262 0.051890 404 | vn 3.919529 -2.287925 -2.116702 405 | v 0.462859 0.639247 0.104366 406 | vn 2.937540 1.075303 -4.601989 407 | v 0.458769 0.713799 0.094237 408 | vn -5.451713 -0.685048 1.258799 409 | v 0.451421 0.673484 0.140090 410 | vn 0.854781 -1.122580 -4.844535 411 | v 0.497950 0.663770 0.086746 412 | vn 0.584992 -4.158797 -1.147620 413 | v 0.494467 0.638416 0.105108 414 | vn -0.046657 -1.463330 -0.063412 415 | v 0.489832 0.667078 0.126145 416 | vn -3.267384 1.798358 -2.242668 417 | v 0.502314 0.548950 0.021516 418 | vn 0.568060 1.804404 -0.663491 419 | v 0.533131 0.571418 0.021430 420 | vn 1.843860 3.470343 -3.835185 421 | v 0.553055 0.540534 0.051618 422 | vn -0.790081 3.691485 1.833625 423 | v 0.557212 0.550943 0.067553 424 | vn 0.034272 1.232678 -5.849584 425 | v 0.663659 0.565972 0.053368 426 | vn -1.090866 2.085816 5.377514 427 | v 0.644052 0.572980 0.079985 428 | vn 3.574197 -2.096498 -1.996027 429 | v 0.521998 0.652179 0.107041 430 | vn 1.497539 -4.509181 0.584761 431 | v 0.509917 0.639896 0.141956 432 | vn 4.856889 -1.293059 1.303323 433 | v 0.526979 0.660450 0.141988 434 | vn 3.473881 0.925476 -4.413331 435 | v 0.431310 0.762674 0.057047 436 | vn 2.251371 1.399574 -4.088503 437 | v 0.458172 0.735458 0.104756 438 | vn 4.188099 1.399124 -3.707773 439 | v 0.433787 0.814070 0.074509 440 | vn 1.059155 1.032781 
5.148097 441 | v 0.445144 0.777771 0.131606 442 | vn 2.304077 4.094407 -3.250335 443 | v 0.463615 0.794728 0.115731 444 | vn 2.993881 1.370055 -4.347608 445 | v 0.506830 0.756200 0.107156 446 | vn 1.590695 2.117002 -2.146846 447 | v 0.499527 0.791508 0.108592 448 | vn 3.945245 3.434010 1.877371 449 | v 0.446438 0.802682 0.120217 450 | vn 5.515698 1.669515 -1.718504 451 | v 0.429773 0.870642 0.098033 452 | vn 0.858518 2.185616 1.756029 453 | v 0.494434 0.798036 0.149757 454 | vn 2.973868 1.111326 -3.606000 455 | v 0.515452 0.726511 0.098329 456 | vn 5.599454 1.219731 0.296147 457 | v 0.523813 0.732829 0.133117 458 | vn -1.301970 3.614171 -0.534994 459 | v 0.649257 0.588463 0.067805 460 | vn 0.928586 -2.694468 4.463979 461 | v 0.679558 0.525287 0.081996 462 | vn 2.229218 -3.892641 0.637767 463 | v 0.714593 0.530930 0.072319 464 | vn -1.210485 -0.857347 5.888689 465 | v 0.757570 0.580049 0.089650 466 | vn -1.819707 1.227518 -5.512396 467 | v 0.757995 0.606491 0.061469 468 | vn -1.430443 -3.756512 -0.959974 469 | v 0.759973 0.559188 0.074037 470 | vn -2.712248 2.801954 1.655006 471 | v 0.751465 0.624028 0.082828 472 | vn -1.458115 4.161530 -0.737427 473 | v 0.719750 0.614855 0.074314 474 | vn -0.311069 -3.096545 -1.268518 475 | v 0.782781 0.541941 0.067392 476 | vn 0.715849 -1.216542 -3.944818 477 | v 0.789371 0.568292 0.050028 478 | vn -0.079606 -2.268838 2.956770 479 | v 0.786050 0.550734 0.097029 480 | vn 2.909964 -1.770313 0.485762 481 | v 0.809765 0.554477 0.082246 482 | vn 3.224671 1.651506 -1.909118 483 | v 0.808126 0.633787 0.062443 484 | vn -0.309803 2.251271 -1.854572 485 | v 0.782001 0.650192 0.056146 486 | vn 0.043441 2.106646 1.723659 487 | v 0.784892 0.651050 0.095479 488 | vn 3.477521 0.763199 2.497407 489 | v 0.810642 0.620047 0.090955 490 | vn -0.305763 0.571264 5.295637 491 | v 0.447689 0.723395 0.140034 492 | vn -1.442544 -1.907425 3.407133 493 | v 0.472832 0.652008 0.165836 494 | vn 1.654480 -2.198251 3.330620 495 | v 0.508611 0.649356 0.162465 496 
| vn -1.648929 0.921126 5.275786 497 | v 0.477650 0.731433 0.162931 498 | vn 5.640400 0.239775 2.291340 499 | v 0.418539 0.848687 0.150890 500 | vn 3.156021 0.757249 4.096289 501 | v 0.514194 0.726950 0.159124 502 | vn 3.698554 -2.367962 3.168751 503 | v 0.417317 0.897649 0.248658 504 | vn 4.980026 -1.457969 -2.202136 505 | v 0.420445 0.910093 0.230151 506 | vn 1.005089 2.915407 3.533569 507 | v 0.424904 0.978316 0.252276 508 | vn 1.540544 1.139026 -1.403576 509 | v 0.498577 0.986722 0.228902 510 | vn 1.965874 -1.522941 0.137388 511 | v 0.495436 0.949281 0.242648 512 | vn -0.684182 4.795773 -0.702575 513 | v 0.455743 0.990222 0.238317 514 | vn 1.536810 1.327825 1.740687 515 | v 0.496400 0.986429 0.253683 516 | # 252 vertices, 0 vertices normals 517 | 518 | f 189//189 190//190 191//191 519 | f 180//180 171//171 189//189 520 | f 170//170 187//187 171//171 521 | f 171//171 190//190 189//189 522 | f 171//171 187//187 190//190 523 | f 180//180 189//189 188//188 524 | f 158//158 166//166 159//159 525 | f 61//61 159//159 162//162 526 | f 156//156 159//159 61//61 527 | f 61//61 52//52 156//156 528 | f 158//158 159//159 156//156 529 | f 168//168 179//179 160//160 530 | f 176//176 178//178 182//182 531 | f 185//185 182//182 184//184 532 | f 185//185 170//170 182//182 533 | f 157//157 171//171 180//180 534 | f 161//161 176//176 170//170 535 | f 168//168 157//157 180//180 536 | f 160//160 181//181 162//162 537 | f 169//169 162//162 181//181 538 | f 170//170 176//176 182//182 539 | f 180//180 188//188 168//168 540 | f 168//168 188//188 179//179 541 | f 188//188 183//183 179//179 542 | f 185//185 184//184 186//186 543 | f 185//185 187//187 170//170 544 | f 185//185 186//186 187//187 545 | f 178//178 194//194 196//196 546 | f 194//194 183//183 196//196 547 | f 193//193 194//194 178//178 548 | f 178//178 196//196 182//182 549 | f 183//183 194//194 179//179 550 | f 179//179 197//197 160//160 551 | f 177//177 193//193 178//178 552 | f 50//50 102//102 30//30 553 | f 30//30 102//102 
172//172 554 | f 172//172 194//194 193//193 555 | f 194//194 102//102 195//195 556 | f 172//172 102//102 194//194 557 | f 160//160 197//197 181//181 558 | f 179//179 194//194 197//197 559 | f 174//174 177//177 178//178 560 | f 192//192 165//165 163//163 561 | f 61//61 162//162 65//65 562 | f 62//62 61//61 65//65 563 | f 166//166 192//192 159//159 564 | f 159//159 192//192 162//162 565 | f 166//166 165//165 192//192 566 | f 162//162 192//192 163//163 567 | f 160//160 162//162 163//163 568 | f 175//175 193//193 177//177 569 | f 173//173 172//172 193//193 570 | f 173//173 193//193 175//175 571 | f 65//65 162//162 169//169 572 | f 18//18 151//151 52//52 573 | f 152//152 155//155 151//151 574 | f 152//152 154//154 155//155 575 | f 5//5 151//151 18//18 576 | f 12//12 14//14 13//13 577 | f 1//1 13//13 9//9 578 | f 9//9 13//13 152//152 579 | f 13//13 14//14 153//153 580 | f 153//153 157//157 152//152 581 | f 13//13 153//153 152//152 582 | f 52//52 151//151 156//156 583 | f 155//155 156//156 151//151 584 | f 135//135 136//136 147//147 585 | f 138//138 148//148 137//137 586 | f 136//136 148//148 147//147 587 | f 136//136 137//137 148//148 588 | f 138//138 146//146 148//148 589 | f 135//135 147//147 142//142 590 | f 140//140 141//141 146//146 591 | f 143//143 145//145 144//144 592 | f 134//134 138//138 133//133 593 | f 9//9 152//152 8//8 594 | f 8//8 152//152 151//151 595 | f 145//145 147//147 150//150 596 | f 145//145 150//150 144//144 597 | f 147//147 148//148 150//150 598 | f 139//139 147//147 145//145 599 | f 142//142 147//147 139//139 600 | f 146//146 141//141 149//149 601 | f 146//146 149//149 148//148 602 | f 4//4 2//2 8//8 603 | f 4//4 8//8 151//151 604 | f 4//4 151//151 5//5 605 | f 150//150 148//148 149//149 606 | f 157//157 168//168 160//160 607 | f 161//161 167//167 157//157 608 | f 153//153 161//161 157//157 609 | f 167//167 161//161 170//170 610 | f 175//175 177//177 174//174 611 | f 173//173 176//176 172//172 612 | f 174//174 178//178 176//176 613 | f 173//173 
174//174 176//176 614 | f 161//161 30//30 176//176 615 | f 30//30 172//172 176//176 616 | f 167//167 170//170 171//171 617 | f 167//167 171//171 157//157 618 | f 173//173 175//175 174//174 619 | f 14//14 161//161 153//153 620 | f 14//14 30//30 161//161 621 | f 156//156 155//155 158//158 622 | f 154//154 157//157 160//160 623 | f 154//154 160//160 158//158 624 | f 152//152 157//157 154//154 625 | f 154//154 158//158 155//155 626 | f 158//158 160//160 164//164 627 | f 160//160 163//163 164//164 628 | f 164//164 163//163 165//165 629 | f 158//158 165//165 166//166 630 | f 158//158 164//164 165//165 631 | f 140//140 146//146 142//142 632 | f 65//65 169//169 240//240 633 | f 124//124 65//65 125//125 634 | f 240//240 181//181 199//199 635 | f 181//181 241//241 199//199 636 | f 240//240 169//169 181//181 637 | f 65//65 240//240 125//125 638 | f 215//215 125//125 240//240 639 | f 181//181 242//242 241//241 640 | f 181//181 210//210 242//242 641 | f 181//181 202//202 210//210 642 | f 210//210 211//211 242//242 643 | f 229//229 234//234 227//227 644 | f 232//232 235//235 234//234 645 | f 232//232 233//233 235//235 646 | f 233//233 236//236 235//235 647 | f 220//220 117//117 120//120 648 | f 120//120 219//219 220//220 649 | f 229//229 228//228 233//233 650 | f 232//232 234//234 229//229 651 | f 229//229 233//233 232//232 652 | f 235//235 236//236 239//239 653 | f 236//236 237//237 238//238 654 | f 238//238 239//239 236//236 655 | f 233//233 237//237 236//236 656 | f 233//233 228//228 237//237 657 | f 228//228 230//230 237//237 658 | f 234//234 235//235 239//239 659 | f 234//234 239//239 238//238 660 | f 230//230 238//238 237//237 661 | f 227//227 238//238 230//230 662 | f 227//227 234//234 238//238 663 | f 132//132 246//246 141//141 664 | f 141//141 246//246 248//248 665 | f 247//247 250//250 246//246 666 | f 247//247 144//144 249//249 667 | f 244//244 143//143 247//247 668 | f 244//244 247//247 246//246 669 | f 143//143 144//144 247//247 670 | f 144//144 251//251 249//249 
671 | f 252//252 251//251 248//248 672 | f 250//250 249//249 252//252 673 | f 251//251 252//252 249//249 674 | f 247//247 249//249 250//250 675 | f 144//144 150//150 248//248 676 | f 149//149 248//248 150//150 677 | f 141//141 248//248 149//149 678 | f 246//246 250//250 252//252 679 | f 246//246 252//252 248//248 680 | f 144//144 248//248 251//251 681 | f 244//244 246//246 132//132 682 | f 131//131 215//215 244//244 683 | f 244//244 215//215 120//120 684 | f 219//219 120//120 215//215 685 | f 244//244 120//120 143//143 686 | f 131//131 244//244 132//132 687 | f 240//240 241//241 243//243 688 | f 199//199 241//241 240//240 689 | f 240//240 243//243 221//221 690 | f 125//125 215//215 131//131 691 | f 129//129 125//125 131//131 692 | f 215//215 240//240 221//221 693 | f 223//223 218//218 221//221 694 | f 223//223 221//221 245//245 695 | f 243//243 245//245 221//221 696 | f 219//219 215//215 221//221 697 | f 241//241 245//245 243//243 698 | f 241//241 242//242 245//245 699 | f 211//211 245//245 242//242 700 | f 223//223 245//245 211//211 701 | f 198//198 212//212 213//213 702 | f 101//101 114//114 102//102 703 | f 209//209 211//211 210//210 704 | f 201//201 209//209 210//210 705 | f 114//114 212//212 102//102 706 | f 195//195 102//102 212//212 707 | f 195//195 212//212 198//198 708 | f 213//213 212//212 214//214 709 | f 214//214 220//220 219//219 710 | f 114//114 117//117 214//214 711 | f 213//213 214//214 216//216 712 | f 212//212 114//114 214//214 713 | f 198//198 213//213 217//217 714 | f 213//213 218//218 217//217 715 | f 203//203 204//204 186//186 716 | f 182//182 196//196 203//203 717 | f 184//184 203//203 186//186 718 | f 186//186 204//204 187//187 719 | f 187//187 204//204 205//205 720 | f 203//203 196//196 204//204 721 | f 194//194 198//198 197//197 722 | f 195//195 198//198 194//194 723 | f 200//200 197//197 198//198 724 | f 182//182 203//203 184//184 725 | f 197//197 201//201 181//181 726 | f 197//197 200//200 201//201 727 | f 201//201 202//202 181//181 728 
| f 187//187 205//205 207//207 729 | f 188//188 208//208 206//206 730 | f 183//183 188//188 206//206 731 | f 201//201 200//200 209//209 732 | f 201//201 210//210 202//202 733 | f 196//196 206//206 205//205 734 | f 205//205 204//204 196//196 735 | f 196//196 183//183 206//206 736 | f 188//188 225//225 208//208 737 | f 191//191 207//207 228//228 738 | f 225//225 227//227 208//208 739 | f 205//205 224//224 207//207 740 | f 191//191 226//226 225//225 741 | f 224//224 231//231 228//228 742 | f 207//207 224//224 228//228 743 | f 208//208 230//230 231//231 744 | f 228//228 231//231 230//230 745 | f 226//226 191//191 229//229 746 | f 226//226 227//227 225//225 747 | f 226//226 229//229 227//227 748 | f 227//227 230//230 208//208 749 | f 208//208 231//231 224//224 750 | f 191//191 228//228 229//229 751 | f 200//200 222//222 209//209 752 | f 209//209 223//223 211//211 753 | f 209//209 222//222 223//223 754 | f 222//222 198//198 217//217 755 | f 217//217 218//218 223//223 756 | f 216//216 221//221 218//218 757 | f 222//222 217//217 223//223 758 | f 214//214 219//219 216//216 759 | f 117//117 220//220 214//214 760 | f 198//198 222//222 200//200 761 | f 218//218 213//213 216//216 762 | f 216//216 219//219 221//221 763 | f 189//189 225//225 188//188 764 | f 190//190 207//207 191//191 765 | f 189//189 191//191 225//225 766 | f 187//187 207//207 190//190 767 | f 208//208 224//224 206//206 768 | f 205//205 206//206 224//224 769 | f 16//16 17//17 28//28 770 | f 57//57 59//59 58//58 771 | f 54//54 58//58 59//59 772 | f 17//17 52//52 61//61 773 | f 17//17 61//61 60//60 774 | f 53//53 56//56 57//57 775 | f 55//55 53//53 54//54 776 | f 53//53 58//58 54//54 777 | f 53//53 57//57 58//58 778 | f 17//17 60//60 28//28 779 | f 29//29 59//59 34//34 780 | f 54//54 59//59 29//29 781 | f 34//34 60//60 62//62 782 | f 57//57 34//34 59//59 783 | f 28//28 60//60 34//34 784 | f 55//55 28//28 56//56 785 | f 55//55 29//29 28//28 786 | f 56//56 28//28 34//34 787 | f 56//56 34//34 57//57 788 | f 55//55 
56//56 53//53 789 | f 55//55 54//54 29//29 790 | f 47//47 49//49 48//48 791 | f 45//45 47//47 48//48 792 | f 36//36 38//38 32//32 793 | f 45//45 48//48 44//44 794 | f 45//45 44//44 46//46 795 | f 45//45 46//46 47//47 796 | f 32//32 48//48 30//30 797 | f 31//31 33//33 51//51 798 | f 31//31 51//51 42//42 799 | f 48//48 49//49 30//30 800 | f 17//17 18//18 52//52 801 | f 35//35 31//31 42//42 802 | f 38//38 48//48 32//32 803 | f 38//38 44//44 48//48 804 | f 69//69 73//73 76//76 805 | f 67//67 69//69 76//76 806 | f 69//69 71//71 73//73 807 | f 74//74 76//76 75//75 808 | f 75//75 76//76 73//73 809 | f 66//66 76//76 74//74 810 | f 70//70 66//66 75//75 811 | f 66//66 67//67 76//76 812 | f 66//66 74//74 75//75 813 | f 70//70 75//75 77//77 814 | f 70//70 77//77 72//72 815 | f 77//77 75//75 73//73 816 | f 80//80 85//85 82//82 817 | f 79//79 84//84 83//83 818 | f 79//79 81//81 84//84 819 | f 79//79 72//72 81//81 820 | f 79//79 71//71 72//72 821 | f 79//79 78//78 71//71 822 | f 71//71 80//80 73//73 823 | f 78//78 80//80 71//71 824 | f 79//79 21//21 78//78 825 | f 80//80 82//82 73//73 826 | f 33//33 62//62 63//63 827 | f 60//60 61//61 62//62 828 | f 34//34 62//62 33//33 829 | f 68//68 72//72 71//71 830 | f 68//68 70//70 72//72 831 | f 68//68 71//71 69//69 832 | f 68//68 66//66 70//70 833 | f 62//62 65//65 63//63 834 | f 68//68 69//69 67//67 835 | f 66//66 68//68 67//67 836 | f 21//21 20//20 22//22 837 | f 22//22 20//20 24//24 838 | f 23//23 27//27 22//22 839 | f 22//22 25//25 23//23 840 | f 24//24 25//25 22//22 841 | f 3//3 17//17 7//7 842 | f 3//3 18//18 17//17 843 | f 3//3 5//5 18//18 844 | f 16//16 28//28 29//29 845 | f 12//12 6//6 19//19 846 | f 12//12 19//19 14//14 847 | f 10//10 13//13 1//1 848 | f 10//10 11//11 13//13 849 | f 10//10 1//1 12//12 850 | f 10//10 12//12 11//11 851 | f 4//4 5//5 3//3 852 | f 2//2 6//6 1//1 853 | f 2//2 4//4 3//3 854 | f 2//2 9//9 8//8 855 | f 2//2 1//1 9//9 856 | f 3//3 7//7 2//2 857 | f 6//6 2//2 7//7 858 | f 15//15 7//7 16//16 859 | f 6//6 
12//12 1//1 860 | f 7//7 17//17 16//16 861 | f 12//12 13//13 11//11 862 | f 6//6 7//7 15//15 863 | f 23//23 42//42 27//27 864 | f 42//42 43//43 27//27 865 | f 40//40 44//44 38//38 866 | f 35//35 25//25 24//24 867 | f 35//35 24//24 36//36 868 | f 25//25 42//42 23//23 869 | f 25//25 35//35 42//42 870 | f 24//24 38//38 36//36 871 | f 35//35 36//36 32//32 872 | f 35//35 32//32 31//31 873 | f 19//19 32//32 30//30 874 | f 29//29 33//33 31//31 875 | f 15//15 16//16 31//31 876 | f 31//31 32//32 15//15 877 | f 6//6 15//15 32//32 878 | f 16//16 29//29 31//31 879 | f 19//19 30//30 14//14 880 | f 37//37 40//40 38//38 881 | f 24//24 20//20 26//26 882 | f 24//24 26//26 38//38 883 | f 19//19 6//6 32//32 884 | f 29//29 34//34 33//33 885 | f 37//37 26//26 39//39 886 | f 37//37 38//38 26//26 887 | f 64//64 63//63 124//124 888 | f 63//63 65//65 124//124 889 | f 96//96 122//122 94//94 890 | f 93//93 122//122 121//121 891 | f 93//93 94//94 122//122 892 | f 121//121 122//122 123//123 893 | f 93//93 121//121 95//95 894 | f 33//33 63//63 64//64 895 | f 33//33 64//64 51//51 896 | f 121//121 126//126 95//95 897 | f 121//121 123//123 127//127 898 | f 121//121 127//127 126//126 899 | f 96//96 64//64 123//123 900 | f 122//122 96//96 123//123 901 | f 112//112 115//115 116//116 902 | f 109//109 116//116 115//115 903 | f 107//107 115//115 110//110 904 | f 109//109 111//111 116//116 905 | f 110//110 115//115 112//112 906 | f 113//113 114//114 101//101 907 | f 113//113 100//100 112//112 908 | f 107//107 108//108 115//115 909 | f 106//106 109//109 115//115 910 | f 108//108 106//106 115//115 911 | f 112//112 119//119 118//118 912 | f 118//118 119//119 120//120 913 | f 118//118 120//120 117//117 914 | f 118//118 117//117 113//113 915 | f 114//114 113//113 117//117 916 | f 112//112 116//116 119//119 917 | f 113//113 112//112 118//118 918 | f 116//116 139//139 119//119 919 | f 116//116 130//130 139//139 920 | f 139//139 140//140 142//142 921 | f 130//130 132//132 141//141 922 | f 130//130 140//140 
139//139 923 | f 129//129 131//131 130//130 924 | f 116//116 129//129 130//130 925 | f 133//133 135//135 134//134 926 | f 133//133 136//136 135//135 927 | f 133//133 138//138 137//137 928 | f 133//133 137//137 136//136 929 | f 130//130 131//131 132//132 930 | f 130//130 141//141 140//140 931 | f 134//134 142//142 146//146 932 | f 134//134 135//135 142//142 933 | f 134//134 146//146 138//138 934 | f 139//139 145//145 143//143 935 | f 119//119 139//139 143//143 936 | f 119//119 143//143 120//120 937 | f 106//106 128//128 104//104 938 | f 64//64 111//111 127//127 939 | f 64//64 127//127 123//123 940 | f 128//128 111//111 109//109 941 | f 128//128 127//127 111//111 942 | f 105//105 126//126 128//128 943 | f 128//128 126//126 127//127 944 | f 95//95 126//126 105//105 945 | f 105//105 128//128 106//106 946 | f 109//109 104//104 128//128 947 | f 111//111 129//129 116//116 948 | f 64//64 124//124 111//111 949 | f 111//111 124//124 129//129 950 | f 125//125 129//129 124//124 951 | f 90//90 87//87 88//88 952 | f 27//27 88//88 85//85 953 | f 27//27 43//43 88//88 954 | f 41//41 88//88 43//43 955 | f 40//40 41//41 44//44 956 | f 88//88 41//41 90//90 957 | f 37//37 89//89 91//91 958 | f 40//40 91//91 41//41 959 | f 37//37 91//91 40//40 960 | f 87//87 86//86 88//88 961 | f 89//89 90//90 91//91 962 | f 46//46 44//44 49//49 963 | f 46//46 49//49 47//47 964 | f 44//44 41//41 98//98 965 | f 90//90 41//41 91//91 966 | f 92//92 51//51 93//93 967 | f 93//93 96//96 94//94 968 | f 51//51 96//96 93//93 969 | f 93//93 95//95 92//92 970 | f 21//21 83//83 20//20 971 | f 83//83 21//21 79//79 972 | f 83//83 84//84 86//86 973 | f 27//27 80//80 22//22 974 | f 27//27 85//85 80//80 975 | f 77//77 82//82 81//81 976 | f 72//72 77//77 81//81 977 | f 77//77 73//73 82//82 978 | f 78//78 22//22 80//80 979 | f 21//21 22//22 78//78 980 | f 81//81 82//82 84//84 981 | f 85//85 84//84 82//82 982 | f 85//85 88//88 86//86 983 | f 39//39 87//87 90//90 984 | f 89//89 39//39 90//90 985 | f 37//37 39//39 89//89 986 
| f 39//39 26//26 87//87 987 | f 84//84 85//85 86//86 988 | f 20//20 83//83 87//87 989 | f 20//20 87//87 26//26 990 | f 83//83 86//86 87//87 991 | f 98//98 41//41 43//43 992 | f 95//95 105//105 103//103 993 | f 103//103 105//105 106//106 994 | f 92//92 103//103 97//97 995 | f 103//103 108//108 107//107 996 | f 103//103 106//106 108//108 997 | f 97//97 103//103 107//107 998 | f 100//100 101//101 98//98 999 | f 50//50 98//98 101//101 1000 | f 50//50 101//101 102//102 1001 | f 95//95 103//103 92//92 1002 | f 112//112 99//99 110//110 1003 | f 97//97 107//107 110//110 1004 | f 100//100 113//113 101//101 1005 | f 104//104 109//109 106//106 1006 | f 100//100 99//99 112//112 1007 | f 99//99 97//97 110//110 1008 | f 43//43 99//99 98//98 1009 | f 43//43 51//51 99//99 1010 | f 49//49 50//50 30//30 1011 | f 98//98 50//50 49//49 1012 | f 44//44 98//98 49//49 1013 | f 43//43 42//42 51//51 1014 | f 51//51 64//64 96//96 1015 | f 92//92 99//99 51//51 1016 | f 92//92 97//97 99//99 1017 | f 98//98 99//99 100//100 1018 | # 500 faces, 0 coords texture 1019 | 1020 | # End of File 1021 | -------------------------------------------------------------------------------- /datagen_maps.py: -------------------------------------------------------------------------------- 1 | # This script remeshes a 3D model or a 3D model dataset for subdivision 2 | # sequence connectivity. 3 | # 4 | # The MAPS process may fail due to one of the following reasons: 5 | # - The base size is too small, and the shape is too complicated. Try to 6 | # increase the base size. 7 | # - There are too many faces in the 3D shape. You may simplify the input 8 | # mesh before processing it. 9 | # - NaN or other run-time exceptions are encountered, mostly because of 10 | # numeric issues. If this happens, you may try to increase the base size, or 11 | # increase the default value of trial in maps_async to try multiple times.
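The retry advice in the comment above can be sketched as a small wrapper. This is a hypothetical helper, not part of this repo: `remesh_fn` stands in for a call that builds a `MAPS` instance and exports the result, and the escalation factor `growth` is an assumed default.

```python
def remesh_with_retries(remesh_fn, base_size, growth=2.0, trials=3):
    # Escalate the base size on each failed attempt, as the comment above
    # suggests (increase the base size / try multiple times on NaN issues).
    last_err = None
    for attempt in range(trials):
        size = int(base_size * growth ** attempt)
        try:
            return remesh_fn(size)
        except (ValueError, FloatingPointError) as err:
            last_err = err
    raise RuntimeError(f'remeshing failed after {trials} trials') from last_err
```

In practice one might pass `lambda s: MAPS(mesh.vertices, mesh.faces, s).mesh_upsampling(depth)` as `remesh_fn`, so each retry re-runs the whole pipeline with a larger base mesh.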
12 | import os 13 | import trimesh 14 | import numpy as np 15 | import traceback 16 | from maps import MAPS 17 | from multiprocessing import Pool 18 | from multiprocessing.context import TimeoutError as MTE 19 | from pathlib import Path 20 | from tqdm import tqdm 21 | 22 | 23 | SHREC_CONFIG = { 24 | 'dst_root': './data/SHREC11-MAPS-48-4-split10', 25 | 'src_root': './data/shrec11-split10', 26 | 'n_variation': 10, 27 | 'base_size': 48, 28 | 'depth': 4 29 | } 30 | 31 | CUBES_CONFIG = { 32 | 'dst_root': './data/Cubes-MAPS-48-4', 33 | 'src_root': './data/cubes', 34 | 'n_variation': 10, 35 | 'base_size': 48, 36 | 'depth': 4 37 | } 38 | 39 | MANIFOLD40_CONFIG = { 40 | 'dst_root': './data/Manifold40-MAPS-96-3', 41 | 'src_root': './data/Manifold40', 42 | 'n_variation': 10, 43 | 'base_size': 96, 44 | 'max_base_size': 192, 45 | 'depth': 3 46 | } 47 | 48 | 49 | def maps_async(obj_path, out_path, base_size, max_base_size, depth, timeout, 50 | trial=1, verbose=False): 51 | if verbose: 52 | print('[IN]', out_path) 53 | 54 | for _ in range(trial): 55 | try: 56 | mesh = trimesh.load(obj_path, process=False) 57 | maps = MAPS(mesh.vertices, mesh.faces, base_size, timeout=timeout, 58 | verbose=verbose) 59 | 60 | if maps.base_size > max_base_size: 61 | continue 62 | 63 | sub_mesh = maps.mesh_upsampling(depth=depth) 64 | sub_mesh.export(out_path) 65 | break 66 | except Exception as e: 67 | if verbose: 68 | traceback.print_exc() 69 | else: 70 | if verbose: 71 | print('[OUT FAIL]', out_path) 72 | return False, out_path 73 | if verbose: 74 | print('[OUT SUCCESS]', out_path) 75 | return True, out_path 76 | 77 | 78 | def make_MAPS_dataset(dst_root, src_root, base_size, depth, n_variation=None, 79 | n_worker=1, timeout=None, max_base_size=None, verbose=False): 80 | ''' 81 | Remeshing a dataset with the MAPS algorithm. 82 | 83 | Parameters 84 | ---------- 85 | dst_root: str, 86 | path to a destination directory. 87 | src_root: str, 88 | path to the source dataset. 
89 | n_variation: 90 | number of remeshings for a shape. 91 | n_worker: 92 | number of parallel processes. 93 | timeout: 94 | if timeout is not None, terminate the MAPS algorithm after timeout seconds. 95 | 96 | References: 97 | - Lee, Aaron WF, et al. "MAPS: Multiresolution adaptive parameterization of surfaces." 98 | Proceedings of the 25th annual conference on Computer graphics and interactive techniques. 1998. 99 | ''' 100 | 101 | if max_base_size is None: 102 | max_base_size = base_size 103 | 104 | if os.path.exists('maps.log'): 105 | os.remove('maps.log') 106 | 107 | def callback(pbar, success, path): 108 | pbar.update() 109 | if not success: 110 | with open('maps.log', 'a') as f: 111 | f.write(str(path) + '\n') 112 | 113 | for label_dir in sorted(Path(src_root).iterdir(), reverse=True): 114 | if label_dir.is_dir(): 115 | for mode_dir in sorted(label_dir.iterdir()): 116 | if mode_dir.is_dir(): 117 | obj_paths = list(sorted(mode_dir.glob('*.obj'))) 118 | dst_dir = Path(dst_root) / label_dir.name / mode_dir.name 119 | dst_dir.mkdir(parents=True, exist_ok=True) 120 | 121 | pbar = tqdm(total=len(obj_paths) * n_variation) 122 | pbar.set_description(f'{label_dir.name}-{mode_dir.name}') 123 | 124 | if n_worker > 0: 125 | pool = Pool(processes=n_worker) 126 | 127 | results = [] 128 | for obj_path in obj_paths: 129 | obj_id = str(obj_path.stem) 130 | 131 | for var in range(n_variation): 132 | dst_path = dst_dir / f'{obj_id}-{var}.obj' 133 | if dst_path.exists(): 134 | continue 135 | 136 | if n_worker > 0: 137 | ret = pool.apply_async( 138 | maps_async, 139 | (str(obj_path), str(dst_path), base_size, max_base_size, depth, timeout), 140 | callback=lambda x: callback(pbar, x[0], x[1]) 141 | ) 142 | results.append(ret) 143 | else: 144 | maps_async(str(obj_path), str(dst_path), base_size, 145 | max_base_size, depth, timeout, verbose=verbose) 146 | pbar.update() 147 | 148 | if n_worker > 0: 149 | try: 150 | [r.get(None if timeout is None else timeout + 1) for r in results] 151 | pool.close() 152 | 
except MTE: 153 | pass 154 | 155 | pbar.close() 156 | 157 | def make_MAPS_shape(in_path, out_path, base_size, depth): 158 | mesh = trimesh.load_mesh(in_path, process=False) 159 | maps = MAPS(mesh.vertices, mesh.faces, base_size=base_size, verbose=True) 160 | sub_mesh = maps.mesh_upsampling(depth=depth) 161 | sub_mesh.export(out_path) 162 | 163 | 164 | def MAPS_demo1(): 165 | '''Apply MAPS to a single 3D model''' 166 | make_MAPS_shape('airplane.obj', 'airplane_MAPS.obj', 96, 3) 167 | 168 | 169 | def MAPS_demo2(): 170 | '''Apply MAPS to shapes from a dataset in parallel''' 171 | config = MANIFOLD40_CONFIG 172 | 173 | make_MAPS_dataset( 174 | config['dst_root'], 175 | config['src_root'], 176 | config['base_size'], 177 | config['depth'], 178 | n_variation=config['n_variation'], 179 | n_worker=60, 180 | timeout=30, 181 | max_base_size=config.get('max_base_size'), 182 | verbose=True 183 | ) 184 | 185 | if __name__ == "__main__": 186 | MAPS_demo1() 187 | -------------------------------------------------------------------------------- /maps/__init__.py: -------------------------------------------------------------------------------- 1 | from .maps import MAPS -------------------------------------------------------------------------------- /maps/geometry.py: -------------------------------------------------------------------------------- 1 | from typing import List 2 | import numpy as np 3 | import triangle as tr 4 | 5 | 6 | def to_barycentric(points, triangle): 7 | """ 8 | compute barycentric coordinates (u, v, w) for points w.r.t. 
triangle 9 | """ 10 | points = np.array(points) 11 | triangle = np.array(triangle) 12 | 13 | if triangle.shape[1] == 3: 14 | a, b, c = np.linalg.solve(triangle.T, points) 15 | elif triangle.shape[1] == 2: 16 | A = np.vstack([triangle[1] - triangle[0], triangle[2] - triangle[0]]) 17 | b, c = np.linalg.solve(A.T, points - triangle[0]) 18 | a = 1 - b - c 19 | else: 20 | raise Exception("Invalid") 21 | 22 | eps = 1e-5 23 | 24 | return np.array([a, b, c]) 25 | 26 | 27 | def from_barycenteric(attr, bary): 28 | """ 29 | attr: [3, N] or [3,] 30 | bary: [3] 31 | """ 32 | attr = np.array(attr) 33 | bary = np.array(bary) 34 | if len(attr.shape) == 1: 35 | return (attr * bary).sum() 36 | elif len(attr.shape) == 2: 37 | return (attr * bary[:, None]).sum(axis=0) 38 | else: 39 | raise Exception("Invalid") 40 | 41 | 42 | def CDT(vids, vertices): 43 | V = vertices.shape[0] 44 | ring = [(i, (i + 1) % V) for i in range(V)] 45 | data = {"vertices": vertices, "segments": ring} 46 | result = tr.triangulate(data, "pe") 47 | 48 | new_edges = [ 49 | (k, v) for k, v in result["edges"] if not (k, v) in ring and not (v, k) in ring 50 | ] 51 | 52 | new_faces = np.vectorize(lambda x: vids[x], otypes=[int])(result["triangles"]) 53 | new_edges = np.vectorize(lambda x: vids[x], otypes=[int])(new_edges) 54 | 55 | return new_faces, new_edges 56 | 57 | def MVT(v, neighbors): 58 | edges = set() 59 | for i in range(len(neighbors)): 60 | j = i + 1 if i + 1 < len(neighbors) else 0 61 | edges.add((neighbors[i], neighbors[j])) 62 | edges.add((neighbors[j], neighbors[i])) 63 | 64 | new_faces = [] 65 | new_edges = set() 66 | for i, u in enumerate(neighbors): 67 | j = i + 1 if i + 1 < len(neighbors) else 0 68 | w = neighbors[j] 69 | if u == v or w == v: 70 | continue 71 | new_faces.append([v, u, w]) 72 | if not (v, u) in edges: 73 | new_edges.add((v, u)) 74 | if not (v, w) in edges: 75 | new_edges.add((v, w)) 76 | new_faces = np.array(new_faces) 77 | new_edges = np.array(list(new_edges)) 78 | return 
new_faces, new_edges 79 | 80 | 81 | def one_ring_neighbor_uv( 82 | neighbors: List[int], 83 | vertices: np.ndarray, 84 | i: int, 85 | return_angle=False, 86 | return_alpha=False, 87 | ): 88 | neighbors_p = neighbors 89 | neighbors_s = np.roll(neighbors_p, -1) 90 | vertices_p = vertices[neighbors_p] 91 | vertices_s = vertices[neighbors_s] 92 | direct_p = vertices_p - vertices[i] 93 | direct_s = vertices_s - vertices[i] 94 | length_p = np.sqrt((direct_p ** 2).sum(axis=1)) 95 | length_s = np.sqrt((direct_s ** 2).sum(axis=1)) 96 | direct_p = direct_p / length_p[:, np.newaxis] 97 | direct_s = direct_s / length_s[:, np.newaxis] 98 | 99 | angle_v = np.arccos((direct_p * direct_s).sum(axis=1)) 100 | 101 | alpha = angle_v.sum() 102 | A = 2 * np.pi / alpha 103 | 104 | angle_v[1:] = np.cumsum(angle_v)[:-1] 105 | angle_v[0] = 0 106 | angle_v = angle_v * A 107 | 108 | u = np.power(length_p, A) * np.cos(angle_v) 109 | v = np.power(length_p, A) * np.sin(angle_v) 110 | 111 | uv = np.vstack([u, v]).transpose() 112 | 113 | if np.isnan(uv).any(): 114 | raise Exception('Found NAN') 115 | 116 | ret = (uv,) 117 | if return_angle: 118 | ret += (angle_v,) 119 | if return_alpha: 120 | ret += (alpha,) 121 | 122 | if len(ret) == 1: 123 | ret = ret[0] 124 | return ret 125 | 126 | 127 | def plane_from_points(points): 128 | v1 = points[1] - points[0] 129 | v2 = points[2] - points[0] 130 | 131 | cp = np.cross(v1, v2) 132 | d = -np.dot(cp, points[2]) 133 | 134 | l = np.linalg.norm(cp) 135 | cp /= l 136 | d /= l 137 | a, b, c = cp 138 | 139 | return np.array([a, b, c, d]) 140 | 141 | 142 | def vector_angle(A, B): 143 | return np.arccos(np.dot(A, B) / np.linalg.norm(A) / np.linalg.norm(B)) 144 | 145 | 146 | def triangle_angles(triangle): 147 | ''' 148 | triangle: (3, 3) 149 | ''' 150 | a = vector_angle(triangle[1] - triangle[0], triangle[2] - triangle[0]) 151 | b = vector_angle(triangle[2] - triangle[1], triangle[0] - triangle[1]) 152 | c = np.pi - a - b 153 | return np.array([a, b, c]) 154 | 155 | 
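As a standalone sanity check of the `vector_angle` / `triangle_angles` helpers above (re-implemented here so the snippet runs on its own; the equilateral triangle is just an illustrative input):

```python
import numpy as np

def vector_angle(A, B):
    # Angle between two vectors, same formula as vector_angle above.
    return np.arccos(np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B)))

def triangle_angles(tri):
    # Interior angles at tri[0] and tri[1]; the third follows from pi.
    a = vector_angle(tri[1] - tri[0], tri[2] - tri[0])
    b = vector_angle(tri[2] - tri[1], tri[0] - tri[1])
    return np.array([a, b, np.pi - a - b])

# An equilateral triangle has three pi/3 (60-degree) interior angles.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, np.sqrt(3) / 2, 0.0]])
angles = triangle_angles(tri)
```

`min_triangle_angles` below simply takes the minimum of this array, which MAPS uses as a triangle-quality score when choosing between candidate retriangulations.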
156 | def min_triangle_angles(triangle): 157 | return triangle_angles(triangle).min() 158 | 159 | 160 | def face_areas(verts, faces): 161 | areas = [] 162 | for face in faces: 163 | t = np.cross(verts[face[1]] - verts[face[0]], 164 | verts[face[2]] - verts[face[0]]) 165 | areas.append(np.linalg.norm(t) / 2) 166 | return np.array(areas) 167 | -------------------------------------------------------------------------------- /maps/maps.py: -------------------------------------------------------------------------------- 1 | from maps.utils import maximal_independent_set 2 | import random 3 | from collections import defaultdict 4 | from time import time 5 | from typing import Dict, List, Set 6 | 7 | import networkx as nx 8 | import numpy as np 9 | from sortedcollections import ValueSortedDict 10 | from shapely.geometry import LineString 11 | from shapely.geometry import Point 12 | from tqdm import tqdm 13 | from trimesh import PointCloud 14 | from trimesh import Trimesh 15 | 16 | from .geometry import face_areas, min_triangle_angles 17 | from .geometry import plane_from_points 18 | from .geometry import to_barycentric, from_barycenteric 19 | from .geometry import CDT, MVT 20 | from .geometry import one_ring_neighbor_uv 21 | 22 | 23 | class Mesh: 24 | def __init__(self, vertices, faces): 25 | mesh = Trimesh(vertices, faces, process=False, maintain_order=True) 26 | self.verts = vertices.copy() 27 | self.faces = faces.copy() 28 | self.vertex_faces = [set(F) for F in mesh.vertex_faces] 29 | for fs in self.vertex_faces: 30 | if -1 in fs: 31 | fs.remove(-1) 32 | 33 | self.V = self.verts.shape[0] 34 | self.F = self.faces.shape[0] 35 | self.vmask = np.ones(self.V, dtype=bool) 36 | self.fmask = np.ones(self.F, dtype=bool) 37 | 38 | def neighbors(self, i: int) -> Set[int]: 39 | N = set() 40 | for f in self.vertex_faces[i]: 41 | N.add(self.faces[f, 0]) 42 | N.add(self.faces[f, 1]) 43 | N.add(self.faces[f, 2]) 44 | N.remove(i) 45 | return N 46 | 47 | def one_ring_neighbors(self, i: 
int) -> List[int]: 48 | G = nx.Graph() 49 | for f in self.vertex_faces[i]: 50 | G.add_edge(self.faces[f, 0], self.faces[f, 1]) 51 | G.add_edge(self.faces[f, 1], self.faces[f, 2]) 52 | G.add_edge(self.faces[f, 2], self.faces[f, 0]) 53 | cycle = nx.cycle_basis(G.subgraph(G[i]))[0] 54 | 55 | u, v = cycle[0], cycle[1] 56 | for f in self.vertex_faces[i]: 57 | if u in self.faces[f] and v in self.faces[f]: 58 | clockwise = ( 59 | (u == self.faces[f, 0] and v == self.faces[f, 1]) 60 | or (u == self.faces[f, 1] and v == self.faces[f, 2]) 61 | or (u == self.faces[f, 2] and v == self.faces[f, 0]) 62 | ) 63 | if not clockwise: 64 | cycle = cycle[::-1] 65 | break 66 | else: 67 | raise Exception("Impossible") 68 | return cycle 69 | 70 | def add_vertex(self, vertex): 71 | if self.V + 1 > self.verts.shape[0]: 72 | self.vmask = np.append(self.vmask, np.zeros_like(self.vmask)) 73 | self.verts = np.append(self.verts, np.zeros_like(self.verts), axis=0) 74 | self.verts[self.V] = vertex 75 | self.vmask[self.V] = True 76 | self.vertex_faces.append(set()) 77 | self.V += 1 78 | 79 | def remove_face(self, fid): 80 | self.fmask[fid] = False 81 | for v in self.faces[fid]: 82 | self.vertex_faces[v].remove(fid) 83 | 84 | def remove_faces(self, fids): 85 | self.fmask[fids] = False 86 | for fid in fids: 87 | for v in self.faces[fid]: 88 | self.vertex_faces[v].remove(fid) 89 | 90 | def add_faces(self, new_faces): 91 | for v0, v1, v2 in new_faces: 92 | assert v0 != v1 and v1 != v2 and v2 != v0 93 | 94 | if self.F + len(new_faces) > self.faces.shape[0]: 95 | self.fmask = np.append(self.fmask, np.zeros_like(self.fmask)) 96 | self.faces = np.append(self.faces, np.zeros_like(self.faces), axis=0) 97 | self.faces[self.F : self.F + len(new_faces)] = new_faces 98 | self.fmask[self.F : self.F + len(new_faces)] = True 99 | 100 | for fid, face in enumerate(new_faces): 101 | for v in face: 102 | self.vertex_faces[v].add(fid + self.F) 103 | 104 | self.F += len(new_faces) 105 | 106 | def remove_vertex(self, i, 
new_faces, neighbors=None): 107 | old_faces = self.vertex_faces[i] 108 | 109 | # delete the vertex and related faces 110 | self.remove_faces(list(self.vertex_faces[i])) 111 | self.add_faces(new_faces) 112 | 113 | # update vertex_faces 114 | for k in neighbors: 115 | for f in old_faces: 116 | if f in self.vertex_faces[k]: 117 | self.vertex_faces[k].remove(f) 118 | for j, face in enumerate(new_faces): 119 | if k in face: 120 | self.vertex_faces[k].add(j + self.F - len(new_faces)) 121 | 122 | 123 | class BaseMesh(Mesh): 124 | def __init__(self, vertices, faces): 125 | super().__init__(vertices, faces) 126 | 127 | self.face_distortion = {i:1 for i in range(self.F)} 128 | 129 | def assign_initial_vertex_weights(self): 130 | Q = {} 131 | for i in range(self.V): 132 | Q[i] = np.zeros([4, 4]) 133 | for fid in self.vertex_faces[i]: 134 | plane = plane_from_points(self.verts[self.faces[fid]]) 135 | plane = plane.reshape([1, 4]) 136 | Q[i] += plane.T @ plane 137 | 138 | vertex_weights = ValueSortedDict() 139 | for i in range(self.V): 140 | vertex_weights[i] = self.compute_vertex_weights(i, Q) 141 | return vertex_weights, Q 142 | 143 | def compute_vertex_weights(self, i, Q: Dict): 144 | weight = 0 145 | coord = np.ones([1, 4]) 146 | for v in self.neighbors(i): 147 | coord[0, :3] = self.verts[v] 148 | weight += coord @ Q[i] @ coord.T 149 | return weight 150 | 151 | def is_validate_removal(self, i: int, neighbors, new_faces, new_edges): 152 | if not self.is_manifold(new_edges, neighbors): 153 | return False 154 | return True 155 | 156 | def is_manifold(self, new_edges, neighbors): 157 | """check if there is a triangle connected by the neighbors of i""" 158 | for v in neighbors: 159 | for f in self.vertex_faces[v]: 160 | for a, b in new_edges: 161 | if a in self.faces[f] and b in self.faces[f]: 162 | return False 163 | return True 164 | 165 | 166 | class ParamMesh(Mesh): 167 | def __init__(self, vertices, faces): 168 | super().__init__(vertices, faces) 169 | 170 | self.xyz = 
vertices.copy() 171 | self.baries = defaultdict(dict) 172 | self.on_edge = defaultdict(dict) 173 | 174 | def add_xyz(self, points_uv, uv, edge): 175 | if self.xyz.shape != self.verts.shape: 176 | self.xyz = np.append(self.xyz, np.zeros_like(self.xyz), axis=0) 177 | 178 | a = np.array(points_uv[edge[1]]) - points_uv[edge[0]] 179 | b = np.array(uv) - points_uv[edge[0]] 180 | t = np.linalg.norm(b) / np.linalg.norm(a) 181 | self.xyz[self.V - 1] = t * self.xyz[edge[1]] + (1 - t) * self.xyz[edge[0]] 182 | 183 | def is_watertight(self) -> bool: 184 | verts = self.verts[self.vmask] 185 | faces = self.faces[self.fmask] 186 | mesh = Trimesh(verts, faces, process=False) 187 | return mesh.is_watertight and mesh.is_winding_consistent 188 | 189 | def split_triangles_on_segments(self, points_uv: Dict, points_on_ring: Dict, lines): 190 | for line in lines: 191 | edges = defaultdict(set) 192 | for v in points_uv: 193 | if not v in points_on_ring: 194 | for fid in self.vertex_faces[v]: 195 | if all([u in points_uv for u in self.faces[fid]]): 196 | v0, v1, v2 = self.faces[fid] 197 | edges[tuple(sorted([v0, v1]))].add(fid) 198 | edges[tuple(sorted([v1, v2]))].add(fid) 199 | edges[tuple(sorted([v2, v0]))].add(fid) 200 | 201 | intersections = defaultdict(list) 202 | for edge, fs in edges.items(): 203 | ret = self.intersect(points_uv, edge, line) 204 | if ret is not None: 205 | if isinstance(ret[0], tuple): 206 | self.add_vertex([0, 0, 0]) 207 | self.add_xyz(points_uv, ret[0], edge) 208 | ret[1] = self.V - 1 209 | points_uv[self.V - 1] = ret[0] 210 | self.on_edge[self.V - 1] = line 211 | ret += (edge,) 212 | for fid in fs: 213 | intersections[fid].append(ret) 214 | 215 | for fid, its in intersections.items(): 216 | if len(its) == 3: 217 | u0 = [x[1] for x in its if x[0] is None] 218 | u = [x[1] for x in its if x[0] is not None] 219 | assert len(u0) == 2, "len(u0) != 2" 220 | assert u0[0] == u0[1], "u0[0] != u0[1]" 221 | self.split_into_two_triangle(fid, u0[0], u[0]) 222 | elif len(its) 
== 2: 223 | if its[0][0] is not None and its[1][0] is not None: 224 | self.split_into_tri_trap(fid, its[0][1:], its[1][1:], points_uv) 225 | elif (its[0][0] is None) ^ (its[1][0] is None): 226 | raise Exception("[Impossible]") 227 | elif len(its) == 1: 228 | raise Exception("[Impossible]") 229 | 230 | def split_into_two_triangle(self, fid, u0, u): 231 | v0, v1, v2 = self.faces[fid] 232 | self.remove_face(fid) 233 | if v1 == u0: 234 | v0, v1, v2 = v1, v2, v0 235 | elif v2 == u0: 236 | v0, v1, v2 = v2, v0, v1 237 | assert v0 == u0, "v0 != u0" 238 | self.add_faces([[v0, v1, u], [v0, u, v2]]) 239 | 240 | def split_into_tri_trap(self, fid, edge0, edge1, points_uv): 241 | v0, v1, v2 = self.faces[fid] 242 | self.remove_face(fid) 243 | u0, (e0v0, e0v1) = edge0 244 | u1, (e1v0, e1v1) = edge1 245 | 246 | def on_edge(x0, x1): 247 | if sorted([x0, x1]) == sorted([e0v0, e0v1]): 248 | return u0 249 | if sorted([x0, x1]) == sorted([e1v0, e1v1]): 250 | return u1 251 | return None 252 | 253 | e0 = on_edge(v0, v1) 254 | e1 = on_edge(v1, v2) 255 | e2 = on_edge(v2, v0) 256 | 257 | # v0 258 | # / \ 259 | # e0 e2 260 | # / \ 261 | # v1 -e1- v2 262 | # 263 | 264 | if e0 is None: 265 | choice_a = [ 266 | [v2, e2, e1], 267 | [v0, v1, e1], 268 | [v0, e1, e2], 269 | ] 270 | choice_b = [ 271 | [v2, e2, e1], 272 | [v1, e1, e2], 273 | [v1, e2, v0], 274 | ] 275 | elif e1 is None: 276 | choice_a = [ 277 | [v0, e0, e2], 278 | [v1, e2, e0], 279 | [v1, v2, e2], 280 | ] 281 | choice_b = [ 282 | [v0, e0, e2], 283 | [v2, e2, e0], 284 | [v2, e0, v1], 285 | ] 286 | elif e2 is None: 287 | choice_a = [ 288 | [v1, e1, e0], 289 | [v2, e0, e1], 290 | [v2, v0, e0], 291 | ] 292 | choice_b = [ 293 | [v1, e1, e0], 294 | [v0, e1, v2], 295 | [v0, e0, e1], 296 | ] 297 | else: 298 | raise Exception("[Impossible]") 299 | 300 | def make_triangle(vids): 301 | return np.array( 302 | [ 303 | points_uv[vids[0]], 304 | points_uv[vids[1]], 305 | points_uv[vids[2]], 306 | ] 307 | ) 308 | 309 | min_a = min( 310 | [ 311 | 
min_triangle_angles(make_triangle(choice_a[1])), 312 | min_triangle_angles(make_triangle(choice_a[2])), 313 | ] 314 | ) 315 | min_b = min( 316 | [ 317 | min_triangle_angles(make_triangle(choice_b[1])), 318 | min_triangle_angles(make_triangle(choice_b[2])), 319 | ] 320 | ) 321 | 322 | if min_a > min_b: 323 | return self.add_faces(choice_a) 324 | else: 325 | return self.add_faces(choice_b) 326 | 327 | def intersect(self, points_uv, edge, line): 328 | line = sorted(line) 329 | edge = sorted(edge) 330 | 331 | if line == edge: 332 | return None 333 | if line[0] in edge: 334 | return [None, line[0]] 335 | if line[1] in edge: 336 | return [None, line[1]] 337 | 338 | line = LineString([points_uv[line[0]], points_uv[line[1]]]) 339 | edge = LineString([points_uv[edge[0]], points_uv[edge[1]]]) 340 | 341 | ret = edge.intersection(line) 342 | if isinstance(ret, Point): 343 | return [(ret.x, ret.y), -1] 344 | else: 345 | return None 346 | 347 | 348 | class MAPS: 349 | def __init__(self, vertices, faces, base_size, timeout=None, verbose=False): 350 | self.mesh = Trimesh(vertices, faces, process=False, maintain_order=True) 351 | 352 | self.base = BaseMesh(vertices, faces) 353 | self.param = ParamMesh(vertices, faces) 354 | 355 | self.base_size = base_size 356 | self.verbose = verbose 357 | self.timeout = timeout 358 | 359 | self.param_tri_verts = defaultdict(list) 360 | 361 | self.decimate() 362 | self.base_size = self.base.fmask.sum() 363 | 364 | def decimate(self): 365 | start_time = time() 366 | vertex_weights = ValueSortedDict({i: 0 for i in range(self.base.V)}) 367 | 368 | with tqdm(total=self.base.F - self.base_size, disable=not self.verbose) as pbar: 369 | while self.base.fmask.sum() > self.base_size: 370 | vw = list(vertex_weights.keys()) 371 | copy = vw[: len(vw) // 4] 372 | random.shuffle(copy) 373 | vw[: len(vw) // 4] = copy 374 | mis = maximal_independent_set( 375 | vw, self.base.faces, self.base.vertex_faces 376 | ) 377 | for i in mis: 378 | if self.timeout is not 
None and time() - start_time > self.timeout: 379 | return 380 | neighbors = self.base.one_ring_neighbors(i) 381 | if self.try_decimate_base_vertex(i): 382 | self.base.vmask[i] = 0 383 | vertex_weights.pop(i) 384 | 385 | for k in neighbors: 386 | total = 0 387 | for fid in self.base.vertex_faces[k]: 388 | total += len(self.param_tri_verts[fid]) 389 | vertex_weights[k] = total 390 | 391 | pbar.update(2) 392 | if self.base.fmask.sum() <= self.base_size: 393 | return 394 | 395 | def compute_vertex_weight(self, i: int): 396 | neighbors = self.base.one_ring_neighbors(i) 397 | neighbors_uv = one_ring_neighbor_uv(neighbors, self.base.verts, i) 398 | 399 | # Try constrained Delaunay triangulation 400 | new_faces, new_edges = CDT(neighbors, neighbors_uv) 401 | 402 | # Check mesh 403 | if not self.base.is_validate_removal(i, neighbors, new_faces, new_edges): 404 | neighbors_uv = np.array([uv / np.linalg.norm(uv) for uv in neighbors_uv]) 405 | for v in neighbors: 406 | new_faces, new_edges = MVT(v, neighbors) 407 | else: 408 | return 0 409 | 410 | old_faces = list(self.base.vertex_faces[i]) 411 | old_areas = face_areas(self.base.verts, self.base.faces[old_faces]).sum() 412 | new_areas = face_areas(self.base.verts, new_faces).sum() 413 | 414 | fd = min(self.base.face_distortion[fid] for fid in old_faces) 415 | return fd * new_areas / old_areas 416 | 417 | 418 | def try_decimate_base_vertex(self, i: int) -> bool: 419 | neighbors = self.base.one_ring_neighbors(i) 420 | neighbors_uv = one_ring_neighbor_uv(neighbors, self.base.verts, i) 421 | 422 | # Try constrained Delaunay triangulation 423 | new_faces, new_edges = CDT(neighbors, neighbors_uv) 424 | 425 | # Check mesh 426 | if not self.base.is_validate_removal(i, neighbors, new_faces, new_edges): 427 | neighbors_uv = np.array([uv / np.linalg.norm(uv) for uv in neighbors_uv]) 428 | for v in neighbors: 429 | new_faces, new_edges = MVT(v, neighbors) 430 | if self.base.is_validate_removal(i, neighbors, new_faces, new_edges): 431 | break
432 | else: 433 | return False 434 | 435 | ring_uv = {n: neighbors_uv[k] for k, n in enumerate(neighbors)} 436 | ring_uv[i] = [0, 0] 437 | 438 | self.reparameterize(i, ring_uv, new_faces, new_edges) 439 | self.base.remove_vertex(i, new_faces, neighbors) 440 | 441 | return True 442 | 443 | def reparameterize(self, i: int, ring_uv: Dict, new_faces, new_edges): 444 | points_uv = ring_uv.copy() 445 | neighbors = set([k for k in ring_uv.keys() if k != i]) 446 | points_on_ring = neighbors.copy() 447 | 448 | for fid in self.base.vertex_faces[i]: 449 | face = self.base.faces[fid] 450 | face_uv = [ring_uv[face[0]], ring_uv[face[1]], ring_uv[face[2]]] 451 | for v in self.param_tri_verts[fid]: 452 | if not v in points_uv: 453 | points_uv[v] = from_barycenteric(face_uv, self.param.baries[v][fid]) 454 | if v in self.param.on_edge: 455 | edge = self.param.on_edge[v] 456 | if edge[0] in neighbors and edge[1] in neighbors: 457 | points_on_ring.add(v) 458 | del self.param.baries[v][fid] 459 | 460 | self.param.split_triangles_on_segments(points_uv, points_on_ring, new_edges) 461 | 462 | for v, uv in points_uv.items(): 463 | self.uv_to_xyz_tri(v, uv, ring_uv, new_faces) 464 | 465 | return True 466 | 467 | def uv_to_xyz_tri(self, v: int, uv, verts_uv: Dict, faces: List): 468 | def in_triangle(point, triangle): 469 | max_s = np.abs(triangle).max() 470 | point = point / max_s 471 | triangle = triangle / max_s 472 | n1 = np.cross(point - triangle[0], triangle[1] - triangle[0]) 473 | n2 = np.cross(point - triangle[1], triangle[2] - triangle[1]) 474 | n3 = np.cross(point - triangle[2], triangle[0] - triangle[2]) 475 | n1 = 0 if abs(n1) < 1e-10 else n1 476 | n2 = 0 if abs(n2) < 1e-10 else n2 477 | n3 = 0 if abs(n3) < 1e-10 else n3 478 | return ((n1 >= 0) and (n2 >= 0) and (n3 >= 0)) or ( 479 | (n1 <= 0) and (n2 <= 0) and (n3 <= 0) 480 | ) 481 | 482 | found = False 483 | for f, face in enumerate(faces): 484 | triangle_uv = [ 485 | verts_uv[face[0]], 486 | verts_uv[face[1]], 487 | 
verts_uv[face[2]], 488 | ] 489 | if in_triangle(uv, triangle_uv): 490 | point_bary = to_barycentric(uv, triangle_uv) 491 | assert np.abs(point_bary).sum() <= 2 492 | point_xyz = from_barycenteric(self.base.verts[face], point_bary) 493 | tri = f + self.base.F 494 | self.param_tri_verts[tri].append(v) 495 | self.param.baries[v][tri] = point_bary 496 | self.param.verts[v] = point_xyz 497 | found = True 498 | 499 | assert found 500 | 501 | def mesh_upsampling(self, depth) -> Trimesh: 502 | sub_verts, sub_faces = self.subdivide(depth) 503 | 504 | sub_verts = self.parameterize(sub_verts) 505 | 506 | return Trimesh(sub_verts, sub_faces, process=False, maintain_order=True) 507 | 508 | def subdivide(self, depth): 509 | verts = self.base.verts[self.base.vmask] 510 | vmaps = np.cumsum(self.base.vmask) - 1 511 | faces = self.base.faces[self.base.fmask] 512 | faces = np.vectorize(lambda f: vmaps[f])(faces) 513 | 514 | for _ in range(depth): 515 | nV = verts.shape[0] 516 | nF = faces.shape[0] 517 | edges_d = np.concatenate( 518 | [faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], axis=0 519 | ) 520 | edges_d = np.sort(edges_d, axis=1) 521 | edges_u, F2E = np.unique(edges_d, axis=0, return_inverse=True) 522 | new_verts = (verts[edges_u[:, 0]] + verts[edges_u[:, 1]]) / 2 523 | verts = np.concatenate([verts, new_verts], axis=0) 524 | 525 | E2 = F2E[:nF] + nV 526 | E0 = F2E[nF : nF * 2] + nV 527 | E1 = F2E[nF * 2 :] + nV 528 | faces = np.concatenate( 529 | [ 530 | np.stack([faces[:, 0], E2, E1], axis=-1), 531 | np.stack([faces[:, 1], E0, E2], axis=-1), 532 | np.stack([faces[:, 2], E1, E0], axis=-1), 533 | np.stack([E0, E1, E2], axis=-1), 534 | ], 535 | axis=0, 536 | ) 537 | 538 | return verts, faces 539 | 540 | def parameterize(self, points): 541 | param_verts = self.param.verts[: self.param.V] 542 | param_faces = self.param.faces[self.param.fmask] 543 | param_mesh = Trimesh(param_verts, param_faces, process=False) 544 | 545 | closest_points, _, triangle_id = 
param_mesh.nearest.on_surface(points) 546 | 547 | for i, point in enumerate(closest_points): 548 | face = param_faces[triangle_id[i]] 549 | triangle = self.param.verts[face] 550 | xyz = self.param.xyz[face] 551 | try: 552 | bary = to_barycentric(point, triangle) 553 | points[i] = from_barycenteric(xyz, bary) 554 | except np.linalg.LinAlgError: 555 | points[i] = xyz.mean(axis=0) 556 | 557 | # points, _, _ = self.mesh.nearest.on_surface(points) 558 | 559 | return points 560 | 561 | def save_decimation(self, v): 562 | faces = [self.base.faces[i] for i, m in enumerate(self.base.fmask) if m] 563 | mesh = Trimesh(self.base.verts, faces) 564 | mesh.export("dec.obj") 565 | 566 | verts = [self.param.verts[i] for i, m in enumerate(self.param.vmask) if m] 567 | pc = PointCloud(verts) 568 | pc.visual.vertex_colors = np.array([[102, 102, 102, 255]] * self.param.V) 569 | pc.visual.vertex_colors[v] = [255, 0, 0, 255] 570 | pc.export("param.ply") 571 | 572 | faces = [self.param.faces[i] for i, m in enumerate(self.param.fmask) if m] 573 | mesh = Trimesh(verts, faces) 574 | mesh.export("param.obj") 575 | 576 | verts = self.param.xyz[self.param.vmask] 577 | mesh = Trimesh(verts, faces) 578 | mesh.export("rec.obj") 579 | -------------------------------------------------------------------------------- /maps/utils.py: -------------------------------------------------------------------------------- 1 | from typing import List 2 | import numpy as np 3 | 4 | def check_duplicated(V): 5 | # Returns the first near-duplicate pair (i, j), or False if none exists. 6 | for i in range(V.shape[0]): 7 | for j in range(i+1, V.shape[0]): 8 | if np.abs(V[i] - V[j]).sum() < 1e-7: 9 | return i, j 10 | return False 11 | 12 | def maximal_independent_set(vids, faces, vertex_faces) -> List: 13 | mark = {} 14 | mis = [] 15 | for v in vids: 16 | if not mark.get(v, False): 17 | mis.append(v) 18 | for fid in vertex_faces[v]: 19 | for u in faces[fid]: 20 | mark[u] = True 21 | return mis 22 |
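As a reference for `MAPS.subdivide` above: each level of subdivision inserts one midpoint per unique edge and splits every face 1-to-4, using `np.unique(..., return_inverse=True)` to index the new vertices. The sketch below replicates that edge-indexing scheme standalone; the function name `midpoint_subdivide` is illustrative and not part of the package.

```python
import numpy as np

def midpoint_subdivide(verts, faces):
    # One level of 1-to-4 midpoint subdivision, mirroring the
    # edge-indexing scheme used in MAPS.subdivide.
    nV, nF = verts.shape[0], faces.shape[0]
    # Collect all directed edges, then deduplicate them; F2E maps each
    # face-edge slot back to its unique-edge index.
    edges = np.concatenate(
        [faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], axis=0
    )
    edges = np.sort(edges, axis=1)
    edges_u, F2E = np.unique(edges, axis=0, return_inverse=True)
    # One new vertex at the midpoint of every unique edge.
    midpoints = (verts[edges_u[:, 0]] + verts[edges_u[:, 1]]) / 2
    verts = np.concatenate([verts, midpoints], axis=0)

    E2 = F2E[:nF] + nV          # midpoint of edge (v0, v1)
    E0 = F2E[nF:nF * 2] + nV    # midpoint of edge (v1, v2)
    E1 = F2E[nF * 2:] + nV      # midpoint of edge (v2, v0)
    # Three corner triangles plus one central triangle per input face.
    faces = np.concatenate(
        [
            np.stack([faces[:, 0], E2, E1], axis=-1),
            np.stack([faces[:, 1], E0, E2], axis=-1),
            np.stack([faces[:, 2], E1, E0], axis=-1),
            np.stack([E0, E1, E2], axis=-1),
        ],
        axis=0,
    )
    return verts, faces

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
sv, sf = midpoint_subdivide(verts, faces)
# A single triangle gains 3 edge midpoints (6 vertices total) and
# splits into 4 faces.
```

Because midpoints are shared between adjacent faces via the unique-edge index, a closed mesh gains exactly one vertex per edge, so `F` grows by 4x per level while `V` grows by the edge count.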
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | jittor 2 | numpy 3 | Pillow 4 | scipy 5 | tensorboard 6 | tensorboardx 7 | tqdm 8 | trimesh 9 | -------------------------------------------------------------------------------- /scripts/coseg-aliens/get_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | DATADIR=$(dirname $0)/'../../data' 4 | 5 | mkdir -p $DATADIR && cd $DATADIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/coseg-aliens-MAPS-256-3.zip 7 | echo "downloaded the data and putting it in: " $DATADIR 8 | echo "unzipping" 9 | unzip -q coseg-aliens-MAPS-256-3.zip && rm coseg-aliens-MAPS-256-3.zip 10 | -------------------------------------------------------------------------------- /scripts/coseg-aliens/get_pretrained.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | CHECKPOINT_DIR=$(dirname $0)/'../../checkpoints' 4 | 5 | mkdir -p $CHECKPOINT_DIR && cd $CHECKPOINT_DIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/checkpoints/coseg-aliens/coseg-aliens.pkl 7 | echo "downloaded the checkpoint and putting it in: " $CHECKPOINT_DIR 8 | -------------------------------------------------------------------------------- /scripts/coseg-aliens/test.sh: -------------------------------------------------------------------------------- 1 | python3 train_seg.py test \ 2 | --name coseg-alien \ 3 | --dataroot ./data/coseg-aliens-MAPS-256-3 \ 4 | --upsample bilinear \ 5 | --batch_size 24 \ 6 | --parts 4 \ 7 | --arch deeplab \ 8 | --backbone resnet50 \ 9 | --checkpoint checkpoints/coseg-aliens.pkl -------------------------------------------------------------------------------- /scripts/coseg-aliens/train.sh: 
-------------------------------------------------------------------------------- 1 | python3 train_seg.py train \ 2 | --name coseg-aliens \ 3 | --dataroot ./data/coseg-aliens-MAPS-256-3 \ 4 | --upsample bilinear \ 5 | --batch_size 24 \ 6 | --parts 4 \ 7 | --augment_scale \ 8 | --arch deeplab \ 9 | --backbone resnet50 \ 10 | --lr 2e-2 11 | 12 | -------------------------------------------------------------------------------- /scripts/coseg-vases/get_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | DATADIR=$(dirname $0)/'../../data' 4 | 5 | mkdir -p $DATADIR && cd $DATADIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/coseg-vases-MAPS-256-3.zip 7 | echo "downloaded the data and putting it in: " $DATADIR 8 | echo "unzipping" 9 | unzip -q coseg-vases-MAPS-256-3.zip && rm coseg-vases-MAPS-256-3.zip 10 | -------------------------------------------------------------------------------- /scripts/coseg-vases/get_pretrained.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | CHECKPOINT_DIR=$(dirname $0)/'../../checkpoints' 4 | 5 | mkdir -p $CHECKPOINT_DIR && cd $CHECKPOINT_DIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/checkpoints/coseg-vases/coseg-vases.pkl 7 | echo "downloaded the checkpoint and putting it in: " $CHECKPOINT_DIR 8 | -------------------------------------------------------------------------------- /scripts/coseg-vases/test.sh: -------------------------------------------------------------------------------- 1 | python3 train_seg.py test \ 2 | --name coseg-vases \ 3 | --dataroot ./data/coseg-vases-MAPS-256-3 \ 4 | --upsample bilinear \ 5 | --batch_size 24 \ 6 | --parts 4 \ 7 | --arch deeplab \ 8 | --backbone resnet50 \ 9 | --checkpoint ./checkpoints/coseg-vases.pkl -------------------------------------------------------------------------------- 
/scripts/coseg-vases/train.sh: -------------------------------------------------------------------------------- 1 | python3 train_seg.py train \ 2 | --name coseg-vases \ 3 | --dataroot ./data/coseg-vases-MAPS-256-3 \ 4 | --upsample bilinear \ 5 | --batch_size 24 \ 6 | --parts 4 \ 7 | --augment_scale \ 8 | --arch deeplab \ 9 | --backbone resnet50 \ 10 | --lr 2e-2 11 | -------------------------------------------------------------------------------- /scripts/cubes/get_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | DATADIR=$(dirname $0)/'../../data' 4 | 5 | mkdir -p $DATADIR && cd $DATADIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/Cubes-MAPS-48-4.zip 7 | echo "downloaded the data and putting it in: " $DATADIR 8 | echo "unzipping" 9 | unzip -q Cubes-MAPS-48-4.zip && rm Cubes-MAPS-48-4.zip 10 | -------------------------------------------------------------------------------- /scripts/cubes/get_pretrained.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | CHECKPOINT_DIR=$(dirname $0)/'../../checkpoints' 4 | 5 | mkdir -p $CHECKPOINT_DIR && cd $CHECKPOINT_DIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/checkpoints/Cubes/cubes.pkl 7 | echo "downloaded the checkpoint and putting it in: " $CHECKPOINT_DIR 8 | -------------------------------------------------------------------------------- /scripts/cubes/test.sh: -------------------------------------------------------------------------------- 1 | # train cubes 2 | python3 train_cls.py test \ 3 | --name Cubes \ 4 | --dataroot ./data/Cubes-MAPS-48-4/ \ 5 | --batch_size 64 \ 6 | --n_classes 22 \ 7 | --depth 4 \ 8 | --channels 32 64 128 128 128 \ 9 | --n_dropout 1 \ 10 | --checkpoint ./checkpoints/cubes.pkl 11 | -------------------------------------------------------------------------------- /scripts/cubes/train.sh: 
-------------------------------------------------------------------------------- 1 | # train cubes 2 | python3 train_cls.py train \ 3 | --name cubes \ 4 | --dataroot ./data/Cubes-MAPS-48-4/ \ 5 | --optim adam \ 6 | --lr 1e-3 \ 7 | --lr_milestones 20 40 \ 8 | --n_epoch 60 \ 9 | --weight_decay 1e-4 \ 10 | --batch_size 64 \ 11 | --n_classes 22 \ 12 | --depth 4 \ 13 | --channels 32 64 128 128 128 \ 14 | --n_dropout 1 15 | -------------------------------------------------------------------------------- /scripts/humanbody/get_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | DATADIR=$(dirname $0)/'../../data' 4 | 5 | mkdir -p $DATADIR && cd $DATADIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/HumanBody-NS-256-3.zip 7 | echo "downloaded the data and putting it in: " $DATADIR 8 | echo "unzipping" 9 | unzip -q HumanBody-NS-256-3.zip && rm HumanBody-NS-256-3.zip 10 | -------------------------------------------------------------------------------- /scripts/humanbody/get_pretrained.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | CHECKPOINT_DIR=$(dirname $0)/'../../checkpoints' 4 | 5 | mkdir -p $CHECKPOINT_DIR && cd $CHECKPOINT_DIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/checkpoints/HumanBody/humanbody.pkl 7 | echo "downloaded the checkpoint and putting it in: " $CHECKPOINT_DIR 8 | -------------------------------------------------------------------------------- /scripts/humanbody/test.sh: -------------------------------------------------------------------------------- 1 | python3 train_seg.py test \ 2 | --name HumanBody \ 3 | --dataroot ./data/HumanBody-NS-256-3 \ 4 | --upsample bilinear \ 5 | --batch_size 24 \ 6 | --parts 8 \ 7 | --arch deeplab \ 8 | --backbone resnet50 \ 9 | --checkpoint ./checkpoints/humanbody.pkl 10 | 
-------------------------------------------------------------------------------- /scripts/humanbody/train.sh: -------------------------------------------------------------------------------- 1 | python3 train_seg.py train \ 2 | --name humanbody \ 3 | --dataroot ./data/HumanBody-NS-256-3 \ 4 | --upsample bilinear \ 5 | --batch_size 24 \ 6 | --parts 8 \ 7 | --augment_scale \ 8 | --augment_orient \ 9 | --lr 2e-2 \ 10 | --arch deeplab \ 11 | --backbone resnet50 12 | -------------------------------------------------------------------------------- /scripts/manifold40/get_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | DATADIR=$(dirname $0)/'../../data' 4 | 5 | mkdir -p $DATADIR && cd $DATADIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/Manifold40-MAPS-96-3.zip 7 | echo "downloaded the data and putting it in: " $DATADIR 8 | echo "unzipping" 9 | unzip -q Manifold40-MAPS-96-3.zip && rm Manifold40-MAPS-96-3.zip 10 | -------------------------------------------------------------------------------- /scripts/manifold40/get_pretrained.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | CHECKPOINT_DIR=$(dirname $0)/'../../checkpoints' 4 | 5 | mkdir -p $CHECKPOINT_DIR && cd $CHECKPOINT_DIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/checkpoints/Manifold40/manifold40.pkl 7 | echo "downloaded the checkpoint and putting it in: " $CHECKPOINT_DIR 8 | -------------------------------------------------------------------------------- /scripts/manifold40/test.sh: -------------------------------------------------------------------------------- 1 | python3 train_cls.py test \ 2 | --name Manifold40 \ 3 | --dataroot ./data/Manifold40-MAPS-96-3/ \ 4 | --batch_size 12 \ 5 | --n_classes 40 \ 6 | --depth 3 \ 7 | --channels 32 64 128 256 \ 8 | --n_dropout 2 \ 9 | --use_xyz \ 10 | 
--use_normal \ 11 | --checkpoint ./checkpoints/manifold40.pkl 12 | -------------------------------------------------------------------------------- /scripts/manifold40/train.sh: -------------------------------------------------------------------------------- 1 | # Train 2 | python3 train_cls.py train \ 3 | --name manifold40 \ 4 | --dataroot ./data/Manifold40-MAPS-96-3/ \ 5 | --optim adam \ 6 | --lr 1e-3 \ 7 | --lr_milestones 20 40 \ 8 | --batch_size 48 \ 9 | --n_classes 40 \ 10 | --depth 3 \ 11 | --channels 32 64 128 256 \ 12 | --n_dropout 2 \ 13 | --use_xyz \ 14 | --use_normal \ 15 | --augment_scale -------------------------------------------------------------------------------- /scripts/shrec11-split10/get_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | DATADIR=$(dirname $0)/'../../data' 4 | 5 | mkdir -p $DATADIR && cd $DATADIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/SHREC11-MAPS-48-4-split10.zip 7 | echo "downloaded the data and putting it in: " $DATADIR 8 | echo "unzipping" 9 | unzip -q SHREC11-MAPS-48-4-split10.zip && rm SHREC11-MAPS-48-4-split10.zip 10 | -------------------------------------------------------------------------------- /scripts/shrec11-split10/get_pretrained.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | CHECKPOINT_DIR=$(dirname $0)/'../../checkpoints' 4 | 5 | mkdir -p $CHECKPOINT_DIR && cd $CHECKPOINT_DIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/checkpoints/shrec11-split10/shrec11-split10.pkl 7 | echo "downloaded the checkpoint and putting it in: " $CHECKPOINT_DIR 8 | -------------------------------------------------------------------------------- /scripts/shrec11-split10/test.sh: -------------------------------------------------------------------------------- 1 | # test shrec11-split10 2 | python3 train_cls.py test \ 
3 | --name shrec11-split10 \ 4 | --dataroot ./data/SHREC11-MAPS-48-4-split10 \ 5 | --batch_size 64 \ 6 | --n_classes 30 \ 7 | --depth 4 \ 8 | --channels 32 64 64 128 128 \ 9 | --n_dropout 1 \ 10 | --checkpoint ./checkpoints/shrec11-split10.pkl 11 | -------------------------------------------------------------------------------- /scripts/shrec11-split10/train.sh: -------------------------------------------------------------------------------- 1 | # train shrec11-split10 2 | python3 train_cls.py train \ 3 | --name shrec11-split10 \ 4 | --dataroot ./data/SHREC11-MAPS-48-4-split10 \ 5 | --optim adam \ 6 | --lr 1e-3 \ 7 | --lr_milestones 50 100 \ 8 | --weight_decay 1e-4 \ 9 | --n_epoch 200 \ 10 | --batch_size 64 \ 11 | --n_classes 30 \ 12 | --depth 4 \ 13 | --channels 32 64 64 128 128 \ 14 | --n_dropout 1 -------------------------------------------------------------------------------- /scripts/shrec11-split16/get_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | DATADIR=$(dirname $0)/'../../data' 4 | 5 | mkdir -p $DATADIR && cd $DATADIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/datasets/SHREC11-MAPS-48-4-split16.zip 7 | echo "downloaded the data and putting it in: " $DATADIR 8 | echo "unzipping" 9 | unzip -q SHREC11-MAPS-48-4-split16.zip && rm SHREC11-MAPS-48-4-split16.zip 10 | -------------------------------------------------------------------------------- /scripts/shrec11-split16/get_pretrained.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | CHECKPOINT_DIR=$(dirname $0)/'../../checkpoints' 4 | 5 | mkdir -p $CHECKPOINT_DIR && cd $CHECKPOINT_DIR 6 | wget --content-disposition https://cg.cs.tsinghua.edu.cn/dataset/subdivnet/checkpoints/shrec11-split16/shrec11-split16.pkl 7 | echo "downloaded the checkpoint and putting it in: " $CHECKPOINT_DIR 8 | 
-------------------------------------------------------------------------------- /scripts/shrec11-split16/test.sh: -------------------------------------------------------------------------------- 1 | # test shrec11-split16 2 | python3 train_cls.py test \ 3 | --name shrec11-split16 \ 4 | --dataroot ./data/SHREC11-MAPS-48-4-split16 \ 5 | --batch_size 64 \ 6 | --n_classes 30 \ 7 | --depth 4 \ 8 | --channels 32 64 128 128 128 \ 9 | --n_dropout 1 \ 10 | --checkpoint ./checkpoints/shrec11-split16.pkl 11 | -------------------------------------------------------------------------------- /scripts/shrec11-split16/train.sh: -------------------------------------------------------------------------------- 1 | # train shrec11-split16 2 | python3 train_cls.py train \ 3 | --name shrec11-split16 \ 4 | --dataroot ./data/SHREC11-MAPS-48-4-split16 \ 5 | --optim adam \ 6 | --lr 1e-3 \ 7 | --lr_milestones 50 100 \ 8 | --weight_decay 1e-4 \ 9 | --n_epoch 200 \ 10 | --batch_size 64 \ 11 | --n_classes 30 \ 12 | --depth 4 \ 13 | --channels 32 64 128 128 128 \ 14 | --n_dropout 1 -------------------------------------------------------------------------------- /subdivnet/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lzhengning/SubdivNet/018bd806a6bf99bee53855adc7c5749b79fd2045/subdivnet/__init__.py -------------------------------------------------------------------------------- /subdivnet/dataset.py: -------------------------------------------------------------------------------- 1 | import json 2 | import random 3 | from pathlib import Path 4 | 5 | from jittor.dataset import Dataset 6 | 7 | import numpy as np 8 | import trimesh 9 | from scipy.spatial.transform import Rotation 10 | 11 | 12 | def augment_points(pts): 13 | # scale 14 | pts = pts * np.random.uniform(0.8, 1.25) 15 | 16 | # translation 17 | translation = np.random.uniform(-0.1, 0.1) 18 | pts = pts + translation 19 | 20 | return pts 21 | 22 | 23 | def 
randomize_mesh_orientation(mesh: trimesh.Trimesh): 24 | axis_seq = ''.join(random.sample('xyz', 3)) 25 | angles = [random.choice([0, 90, 180, 270]) for _ in range(3)] 26 | rotation = Rotation.from_euler(axis_seq, angles, degrees=True) 27 | mesh.vertices = rotation.apply(mesh.vertices) 28 | return mesh 29 | 30 | 31 | def random_scale(mesh: trimesh.Trimesh): 32 | mesh.vertices = mesh.vertices * np.random.normal(1, 0.1, size=(1, 3)) 33 | return mesh 34 | 35 | 36 | def mesh_normalize(mesh: trimesh.Trimesh): 37 | vertices = mesh.vertices - mesh.vertices.min(axis=0) 38 | vertices = vertices / vertices.max() 39 | mesh.vertices = vertices 40 | return mesh 41 | 42 | 43 | def load_mesh(path, normalize=False, augments=[], request=[]): 44 | mesh = trimesh.load_mesh(path, process=False) 45 | 46 | for method in augments: 47 | if method == 'orient': 48 | mesh = randomize_mesh_orientation(mesh) 49 | if method == 'scale': 50 | mesh = random_scale(mesh) 51 | 52 | if normalize: 53 | mesh = mesh_normalize(mesh) 54 | 55 | F = mesh.faces 56 | V = mesh.vertices 57 | Fs = mesh.faces.shape[0] 58 | 59 | face_center = V[F.flatten()].reshape(-1, 3, 3).mean(axis=1) 60 | # corner = V[F.flatten()].reshape(-1, 3, 3) - face_center[:, np.newaxis, :] 61 | vertex_normals = mesh.vertex_normals 62 | face_normals = mesh.face_normals 63 | face_curvs = np.vstack([ 64 | (vertex_normals[F[:, 0]] * face_normals).sum(axis=1), 65 | (vertex_normals[F[:, 1]] * face_normals).sum(axis=1), 66 | (vertex_normals[F[:, 2]] * face_normals).sum(axis=1), 67 | ]) 68 | 69 | feats = [] 70 | if 'area' in request: 71 | feats.append(mesh.area_faces) 72 | if 'normal' in request: 73 | feats.append(face_normals.T) 74 | if 'center' in request: 75 | feats.append(face_center.T) 76 | if 'face_angles' in request: 77 | feats.append(np.sort(mesh.face_angles, axis=1).T) 78 | if 'curvs' in request: 79 | feats.append(np.sort(face_curvs, axis=0)) 80 | 81 | feats = np.vstack(feats) 82 | 83 | return mesh.faces, feats, Fs 84 | 85 | 86 | def 
load_segment(path): 87 | with open(path) as f: 88 | segment = json.load(f) 89 | raw_labels = np.array(segment['raw_labels']) - 1 90 | sub_labels = np.array(segment['sub_labels']) - 1 91 | raw_to_sub = np.array(segment['raw_to_sub']) 92 | 93 | return raw_labels, sub_labels, raw_to_sub 94 | 95 | 96 | class ClassificationDataset(Dataset): 97 | def __init__(self, dataroot, batch_size, train=True, shuffle=False, num_workers=0, augment=False, in_memory=False): 98 | super().__init__(batch_size=batch_size, shuffle=shuffle, num_workers=num_workers, keep_numpy_array=True, buffer_size=134217728*2) 99 | 100 | self.batch_size = batch_size 101 | self.augment = augment 102 | self.in_memory = in_memory 103 | self.dataroot = Path(dataroot) 104 | self.augments = [] 105 | self.mode = 'train' if train else 'test' 106 | self.feats = ['area', 'face_angles', 'curvs'] 107 | 108 | self.mesh_paths = [] 109 | self.labels = [] 110 | self.browse_dataroot() 111 | 112 | self.set_attrs(total_len=len(self.mesh_paths)) 113 | 114 | 115 | def browse_dataroot(self): 116 | # self.shape_classes = [x.name for x in self.dataroot.iterdir() if x.is_dir()] 117 | self.shape_classes = sorted([x.name for x in self.dataroot.iterdir() if x.is_dir()]) 118 | 119 | for obj_class in self.dataroot.iterdir(): 120 | if obj_class.is_dir(): 121 | label = self.shape_classes.index(obj_class.name) 122 | for obj_path in (obj_class / self.mode).iterdir(): 123 | if obj_path.is_file(): 124 | self.mesh_paths.append(obj_path) 125 | self.labels.append(label) 126 | 127 | self.mesh_paths = np.array(self.mesh_paths) 128 | self.labels = np.array(self.labels) 129 | 130 | def __getitem__(self, idx): 131 | faces, feats, Fs = load_mesh(self.mesh_paths[idx], 132 | normalize=True, 133 | augments=self.augments, 134 | request=self.feats) 135 | label = self.labels[idx] 136 | return faces, feats, Fs, label, self.mesh_paths[idx] 137 | 138 | def collate_batch(self, batch): 139 | faces, feats, Fs, labels, mesh_paths = zip(*batch) 140 | N = 
len(batch) 141 | max_f = max(Fs) 142 | 143 | np_faces = np.zeros((N, max_f, 3), dtype=np.int32) 144 | np_feats = np.zeros((N, feats[0].shape[0], max_f), dtype=np.float32) 145 | np_Fs = np.int32(Fs) 146 | 147 | for i in range(N): 148 | np_faces[i, :Fs[i]] = faces[i] 149 | np_feats[i, :, :Fs[i]] = feats[i] 150 | 151 | meshes = { 152 | 'faces': np_faces, 153 | 'feats': np_feats, 154 | 'Fs': np_Fs 155 | } 156 | labels = np.array(labels) 157 | 158 | return meshes, labels, mesh_paths 159 | 160 | 161 | class SegmentationDataset(Dataset): 162 | def __init__(self, dataroot, batch_size, train=True, shuffle=False, num_workers=0, augments=None, in_memory=False): 163 | super().__init__(batch_size=batch_size, shuffle=shuffle, num_workers=num_workers, keep_numpy_array=True, buffer_size=134217728) 164 | self.batch_size = batch_size 165 | self.in_memory = in_memory 166 | self.dataroot = dataroot 167 | 168 | self.augments = [] 169 | if train and augments: 170 | self.augments = augments 171 | 172 | self.mode = 'train' if train else 'test' 173 | self.feats = ['area', 'face_angles', 'curvs', 'center', 'normal'] 174 | 175 | self.mesh_paths = [] 176 | self.raw_paths = [] 177 | self.seg_paths = [] 178 | self.browse_dataroot() 179 | 180 | self.set_attrs(total_len=len(self.mesh_paths)) 181 | 182 | def browse_dataroot(self): 183 | for dataset in (Path(self.dataroot) / self.mode).iterdir(): 184 | if not dataset.is_dir(): 185 | continue 186 | for obj_path in dataset.iterdir(): 187 | if obj_path.suffix == '.obj': 188 | obj_name = obj_path.stem 189 | seg_path = obj_path.parent / (obj_name + '.json') 190 | 191 | raw_name = obj_name.rsplit('-', 1)[0] 192 | raw_path = list(Path(self.dataroot).glob(f'raw/{raw_name}.*'))[0] 193 | self.mesh_paths.append(str(obj_path)) 194 | self.raw_paths.append(str(raw_path)) 195 | self.seg_paths.append(str(seg_path)) 196 | self.mesh_paths = np.array(self.mesh_paths) 197 | self.raw_paths = np.array(self.raw_paths) 198 | self.seg_paths = np.array(self.seg_paths) 199 | 
200 | def __getitem__(self, idx): 201 | faces, feats, Fs = load_mesh(self.mesh_paths[idx], 202 | normalize=True, 203 | augments=self.augments, 204 | request=self.feats) 205 | raw_labels, sub_labels, raw_to_sub = load_segment(self.seg_paths[idx]) 206 | 207 | return faces, feats, Fs, raw_labels, sub_labels, raw_to_sub, self.mesh_paths[idx], self.raw_paths[idx] 208 | 209 | def collate_batch(self, batch): 210 | faces, feats, Fs, raw_labels, sub_labels, raw_to_sub, mesh_paths, raw_paths = zip(*batch) 211 | N = len(batch) 212 | max_f = max(Fs) 213 | 214 | np_faces = np.zeros((N, max_f, 3), dtype=np.int32) 215 | np_feats = np.zeros((N, feats[0].shape[0], max_f), dtype=np.float32) 216 | np_Fs = np.int32(Fs) 217 | np_sub_labels = np.ones((N, max_f), dtype=np.int32) * -1 218 | 219 | for i in range(N): 220 | np_faces[i, :Fs[i]] = faces[i] 221 | np_feats[i, :, :Fs[i]] = feats[i] 222 | np_sub_labels[i, :Fs[i]] = sub_labels[i] 223 | 224 | meshes = { 225 | 'faces': np_faces, 226 | 'feats': np_feats, 227 | 'Fs': np_Fs 228 | } 229 | labels = np_sub_labels 230 | mesh_info = { 231 | 'raw_labels': raw_labels, 232 | 'raw_to_sub': raw_to_sub, 233 | 'mesh_paths': mesh_paths, 234 | 'raw_paths': raw_paths, 235 | } 236 | return meshes, labels, mesh_info 237 | -------------------------------------------------------------------------------- /subdivnet/deeplab.py: -------------------------------------------------------------------------------- 1 | from typing import Optional 2 | import jittor as jt 3 | import jittor.nn as nn 4 | 5 | from .mesh_tensor import MeshTensor 6 | from .mesh_ops import MeshAdaptivePool 7 | from .mesh_ops import MeshBatchNorm 8 | from .mesh_ops import MeshConv 9 | from .mesh_ops import MeshDropout 10 | from .mesh_ops import MeshLinear 11 | from .mesh_ops import MeshPool 12 | from .mesh_ops import MeshUnpool 13 | from .mesh_ops import MeshReLU 14 | from .mesh_ops import mesh_concat 15 | 16 | 17 | class MeshConvBlock(nn.Module): 18 | def __init__(self, in_channels, 
out_channels, dilation=1): 19 | super().__init__() 20 | 21 | self.mconv1 = MeshConv(in_channels, out_channels, dilation=dilation, bias=False) 22 | self.mconv2 = MeshConv(out_channels, out_channels, dilation=dilation, bias=False) 23 | self.bn1 = MeshBatchNorm(out_channels) 24 | self.bn2 = MeshBatchNorm(out_channels) 25 | self.relu1 = MeshReLU() 26 | self.relu2 = MeshReLU() 27 | 28 | def execute(self, mesh): 29 | mesh = self.mconv1(mesh) 30 | mesh = self.bn1(mesh) 31 | mesh = self.relu1(mesh) 32 | mesh = self.mconv2(mesh) 33 | mesh = self.bn2(mesh) 34 | mesh = self.relu2(mesh) 35 | return mesh 36 | 37 | 38 | class MeshVanillaUnet(nn.Module): 39 | def __init__(self, in_channels, out_channels, upsample='nearest') -> None: 40 | super().__init__() 41 | 42 | self.fc1 = MeshLinear(in_channels, 32) 43 | self.relu = MeshReLU() 44 | self.pool = MeshPool('max') 45 | self.unpool = MeshUnpool(upsample) 46 | 47 | self.enc1 = MeshConvBlock(32, 64) 48 | self.enc2 = MeshConvBlock(64, 128) 49 | self.enc3 = MeshConvBlock(128, 128) 50 | 51 | self.mid_conv = MeshConvBlock(128, 128) 52 | 53 | self.dec3 = MeshConvBlock(256, 128) 54 | self.dec2 = MeshConvBlock(256, 64) 55 | self.dec1 = MeshConvBlock(128, 64) 56 | 57 | self.fc2 = nn.Sequential( 58 | MeshDropout(0.5), 59 | MeshLinear(64, 64), 60 | MeshDropout(0.1), 61 | MeshLinear(64, out_channels) 62 | ) 63 | 64 | def execute(self, mesh): 65 | mesh = self.fc1(mesh) 66 | mesh = self.relu(mesh) 67 | 68 | enc_mesh1 = self.enc1(mesh) 69 | enc_mesh2 = self.enc2(self.pool(enc_mesh1)) 70 | enc_mesh3 = self.enc3(self.pool(enc_mesh2)) 71 | 72 | mid_mesh = self.pool(enc_mesh3) 73 | mid_mesh = self.mid_conv(mid_mesh) 74 | 75 | dec_mesh3 = self.unpool(mid_mesh, ref_mesh=enc_mesh3) 76 | dec_mesh3 = self.dec3(mesh_concat([dec_mesh3, enc_mesh3])) 77 | dec_mesh2 = self.unpool(dec_mesh3, ref_mesh=enc_mesh2) 78 | dec_mesh2 = self.dec2(mesh_concat([dec_mesh2, enc_mesh2])) 79 | dec_mesh1 = self.unpool(dec_mesh2, ref_mesh=enc_mesh1) 80 | dec_mesh1 = 
self.dec1(mesh_concat([dec_mesh1, enc_mesh1])) 81 | 82 | out_mesh = self.fc2(dec_mesh1) 83 | 84 | return out_mesh.feats 85 | 86 | 87 | class BasicBlock(nn.Module): 88 | expansion: int = 1 89 | 90 | def __init__(self, 91 | inplanes: int, 92 | planes: int, 93 | stride: int = 1, 94 | dilation: int = 1, 95 | downsample: Optional[MeshPool] = None 96 | ): 97 | super().__init__() 98 | # Both self.conv1 and self.downsample layers downsample the input when stride != 1 99 | self.conv1 = MeshConv(inplanes, planes, kernel_size=3, stride=stride, dilation=dilation) 100 | self.bn1 = MeshBatchNorm(planes) 101 | self.relu = MeshReLU() 102 | self.conv2 = MeshConv(planes, planes, kernel_size=3, dilation=dilation) 103 | self.bn2 = MeshBatchNorm(planes) 104 | self.downsample = downsample 105 | self.stride = stride 106 | self.dilation = dilation 107 | 108 | def execute(self, mesh): 109 | identity = mesh 110 | 111 | out = self.conv1(mesh) 112 | out = self.bn1(out) 113 | out = self.relu(out) 114 | 115 | out = self.conv2(out) 116 | out = self.bn2(out) 117 | 118 | if self.downsample is not None: 119 | identity = self.downsample(mesh) 120 | 121 | out += identity 122 | out = self.relu(out) 123 | 124 | return out 125 | 126 | 127 | class Bottleneck(nn.Module): 128 | expansion = 4 129 | 130 | def __init__(self, 131 | inplanes: int, 132 | planes: int, 133 | stride: int = 1, 134 | dilation: int = 1, 135 | downsample: Optional[MeshPool] = None): 136 | super().__init__() 137 | self.conv1 = MeshConv(inplanes, planes, kernel_size=1, bias=False) 138 | self.bn1 = MeshBatchNorm(planes) 139 | self.conv2 = MeshConv(planes, planes, kernel_size=3, stride=stride, 140 | dilation=dilation, bias=False) 141 | self.bn2 = MeshBatchNorm(planes) 142 | self.conv3 = MeshConv(planes, planes * 4, kernel_size=1, bias=False) 143 | self.bn3 = MeshBatchNorm(planes * 4) 144 | self.relu = MeshReLU() 145 | self.downsample = downsample 146 | self.stride = stride 147 | self.dilation = dilation 148 | 149 | def execute(self, mesh): 
150 | residual = mesh 151 | 152 | out = self.conv1(mesh) 153 | out = self.bn1(out) 154 | out = self.relu(out) 155 | 156 | out = self.conv2(out) 157 | out = self.bn2(out) 158 | out = self.relu(out) 159 | 160 | out = self.conv3(out) 161 | out = self.bn3(out) 162 | 163 | if self.downsample is not None: 164 | residual = self.downsample(mesh) 165 | 166 | out += residual 167 | out = self.relu(out) 168 | 169 | return out 170 | 171 | 172 | class MeshResNet(nn.Module): 173 | def __init__(self, in_channels, block, layers): 174 | self.inplanes = 64 175 | super().__init__() 176 | 177 | blocks = [1, 2, 4] 178 | output_stride = 16 179 | if output_stride == 16: 180 | strides = [1, 2, 2, 1] 181 | dilations = [1, 1, 1, 2] 182 | elif output_stride == 8: 183 | strides = [1, 2, 1, 1] 184 | dilations = [1, 1, 2, 4] 185 | else: 186 | raise NotImplementedError 187 | 188 | # Modules 189 | self.conv1 = MeshConv(in_channels, 64, kernel_size=5, stride=2, bias=False) 190 | self.bn1 = MeshBatchNorm(64) 191 | self.relu = MeshReLU() 192 | # self.maxpool = MeshPool('max') 193 | 194 | self.layer1 = self._make_layer(block, 64, layers[0], stride=strides[0], dilation=dilations[0]) 195 | self.layer2 = self._make_layer(block, 128, layers[1], stride=strides[1], dilation=dilations[1]) 196 | self.layer3 = self._make_layer(block, 256, layers[2], stride=strides[2], dilation=dilations[2]) 197 | self.layer4 = self._make_MG_unit(block, 512, blocks=blocks, stride=strides[3], dilation=dilations[3]) 198 | 199 | def _make_layer(self, block, planes, blocks, stride=1, dilation=1): 200 | downsample = None 201 | if stride != 1 or self.inplanes != planes * block.expansion: 202 | downsample = nn.Sequential( 203 | MeshConv(self.inplanes, planes * block.expansion, 204 | kernel_size=1, stride=stride, bias=False), 205 | MeshBatchNorm(planes * block.expansion), 206 | ) 207 | 208 | layers = [] 209 | layers.append(block(self.inplanes, planes, stride, dilation, downsample)) 210 | self.inplanes = planes * block.expansion 211 | 
for i in range(1, blocks): 212 | layers.append(block(self.inplanes, planes, dilation=dilation)) 213 | 214 | return nn.Sequential(*layers) 215 | 216 | def _make_MG_unit(self, block, planes, blocks, stride=1, dilation=1): 217 | downsample = None 218 | if stride != 1 or self.inplanes != planes * block.expansion: 219 | downsample = nn.Sequential( 220 | MeshConv(self.inplanes, planes * block.expansion, 221 | kernel_size=1, stride=stride, bias=False), 222 | MeshBatchNorm(planes * block.expansion), 223 | ) 224 | 225 | layers = [] 226 | layers.append(block(self.inplanes, planes, stride, dilation=blocks[0]*dilation, 227 | downsample=downsample)) 228 | self.inplanes = planes * block.expansion 229 | for i in range(1, len(blocks)): 230 | layers.append(block(self.inplanes, planes, stride=1, 231 | dilation=blocks[i]*dilation)) 232 | 233 | return nn.Sequential(*layers) 234 | 235 | def execute(self, mesh): 236 | x = self.conv1(mesh) 237 | x = self.bn1(x) 238 | x = self.relu(x) 239 | # x = self.maxpool(x) 240 | 241 | x = self.layer1(x) 242 | low_level_feat = x 243 | x = self.layer2(x) 244 | mid_ref_mesh = x 245 | x = self.layer3(x) 246 | x = self.layer4(x) 247 | return x, low_level_feat, mid_ref_mesh 248 | 249 | 250 | class ASPPModule(nn.Module): 251 | def __init__(self, inplanes, planes, kernel_size, dilation): 252 | super().__init__() 253 | self.atrous_conv = MeshConv(inplanes, planes, kernel_size=kernel_size, 254 | stride=1, dilation=dilation, bias=False) 255 | self.bn = MeshBatchNorm(planes) 256 | self.relu = MeshReLU() 257 | 258 | def execute(self, x): 259 | x = self.atrous_conv(x) 260 | x = self.bn(x) 261 | 262 | return self.relu(x) 263 | 264 | 265 | class ASPP(nn.Module): 266 | def __init__(self, globalpool='mean'): 267 | super().__init__() 268 | inplanes = 512 269 | planes = 128 270 | dilations = [1, 6, 12, 18] 271 | 272 | self.aspp1 = ASPPModule(inplanes, planes, 1, dilation=dilations[0]) 273 | self.aspp2 = ASPPModule(inplanes, planes, 3, dilation=dilations[1]) 274 | 
self.aspp3 = ASPPModule(inplanes, planes, 3, dilation=dilations[2]) 275 | self.aspp4 = ASPPModule(inplanes, planes, 3, dilation=dilations[3]) 276 | 277 | self.global_avg_pool = nn.Sequential(MeshAdaptivePool(globalpool), 278 | nn.Linear(inplanes, planes, bias=False), 279 | nn.BatchNorm(planes), 280 | nn.ReLU()) 281 | self.conv1 = MeshConv(planes * 5, planes, 1, bias=False) 282 | self.bn1 = MeshBatchNorm(planes) 283 | self.relu = MeshReLU() 284 | self.dropout = MeshDropout(0.5) 285 | 286 | def execute(self, x): 287 | x1 = self.aspp1(x) 288 | x2 = self.aspp2(x) 289 | x3 = self.aspp3(x) 290 | x4 = self.aspp4(x) 291 | x5 = self.global_avg_pool(x) 292 | x5 = x5.broadcast([x.N, 128, x.F], [2]) 293 | x5 = x.updated(x5) 294 | x = mesh_concat((x1, x2, x3, x4, x5)) 295 | 296 | x = self.conv1(x) 297 | x = self.bn1(x) 298 | x = self.relu(x) 299 | 300 | return self.dropout(x) 301 | 302 | 303 | class MeshDeeplabDecoder(nn.Module): 304 | def __init__(self, num_classes): 305 | super().__init__() 306 | backbone = 'resnet' 307 | if backbone == 'resnet' or backbone == 'drn': 308 | low_level_inplanes = 64 309 | elif backbone == 'xception': 310 | low_level_inplanes = 128 311 | elif backbone == 'mobilenet': 312 | low_level_inplanes = 24 313 | else: 314 | raise NotImplementedError 315 | 316 | self.conv1 = MeshConv(low_level_inplanes, 48, 1, bias=False) 317 | self.bn1 = MeshBatchNorm(48) 318 | self.relu = MeshReLU() 319 | self.last_conv = nn.Sequential(MeshConv(48 + 128, 128, kernel_size=3, stride=1, bias=False), 320 | MeshBatchNorm(128), 321 | MeshReLU(), 322 | MeshDropout(0.5), 323 | MeshConv(128, 128, kernel_size=3, stride=1, bias=False), 324 | MeshBatchNorm(128), 325 | MeshReLU(), 326 | MeshDropout(0.1), 327 | MeshConv(128, num_classes, kernel_size=1, stride=1)) 328 | self.unpool = MeshUnpool('bilinear') 329 | 330 | def execute(self, x, low_level_feat, mid_ref_mesh): 331 | low_level_feat = self.conv1(low_level_feat) 332 | low_level_feat = self.bn1(low_level_feat) 333 | low_level_feat 
= self.relu(low_level_feat)
334 | 
335 |         x = self.unpool(x, ref_mesh=mid_ref_mesh)
336 |         x = self.unpool(x, ref_mesh=low_level_feat)
337 |         x = mesh_concat([x, low_level_feat])
338 |         x = self.last_conv(x)
339 | 
340 |         return x
341 | 
342 | 
343 | class MeshDeepLab(nn.Module):
344 |     def __init__(self, in_channels, out_channels, backbone='resnet18', globalpool='mean'):
345 |         super().__init__()
346 | 
347 |         if backbone == 'resnet18':
348 |             resnet_layers = [2, 2, 2, 2]
349 |         elif backbone == 'resnet50':
350 |             resnet_layers = [3, 4, 6, 3]
351 |         else:
352 |             raise Exception('Unknown resnet architecture')
353 | 
354 |         self.backbone = MeshResNet(in_channels, BasicBlock, resnet_layers)
355 |         self.aspp = ASPP(globalpool)
356 |         self.decoder = MeshDeeplabDecoder(out_channels)
357 |         self.unpool = MeshUnpool('bilinear')
358 | 
359 |     def execute(self, mesh: MeshTensor):
360 |         x, low_level_feat, mid_ref_mesh = self.backbone(mesh)
361 |         x = self.aspp(x)
362 |         x = self.decoder(x, low_level_feat, mid_ref_mesh)
363 |         x = self.unpool(x, ref_mesh=mesh)
364 | 
365 |         return x.feats
366 | 
--------------------------------------------------------------------------------
/subdivnet/mesh_ops.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 | 
3 | from .mesh_tensor import MeshTensor
4 | 
5 | import jittor as jt
6 | import jittor.nn as nn
7 | 
8 | jt.cudnn.set_max_workspace_ratio(0.01)
9 | 
10 | 
11 | class MeshConv(nn.Module):
12 |     def __init__(self, in_channels, out_channels, kernel_size=3, dilation=1, stride=1, bias=True):
13 |         ''' The following convolution patterns are currently implemented:
14 |             * kernel size = 1, stride = [1, 2]
15 |             * kernel size = 3, dilation = %any%, stride = [1, 2]
16 |             * kernel size = 5, no dilation, stride = [1, 2]
17 |             Note that the valid stride is determined by the subdivision connectivity of the input data (see Section 3.3.4).
18 | ''' 19 | super().__init__() 20 | self.in_channels = in_channels 21 | self.out_channels = out_channels 22 | self.kernel_size = kernel_size 23 | self.dilation = dilation 24 | self.stride = stride 25 | 26 | assert self.kernel_size % 2 == 1 27 | 28 | if self.kernel_size == 1: 29 | assert dilation == 1 30 | self.conv1d = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=bias) 31 | else: 32 | kernel_size = 4 33 | self.conv2d = nn.Conv2d(in_channels, out_channels, (1, kernel_size), bias=bias) 34 | 35 | assert self.stride in [1, 2] 36 | 37 | 38 | def execute(self, mesh_tensor: MeshTensor): 39 | if self.in_channels != mesh_tensor.C: 40 | raise Exception(f'feature dimension is {mesh_tensor.C}, but conv kernel dimension is {self.in_channels}') 41 | 42 | if self.kernel_size == 1: # Simple Convolution 43 | feats = mesh_tensor.feats 44 | if self.stride == 2: 45 | N, C, F = mesh_tensor.shape 46 | feats = feats.reindex([N, C, F // 4], [ 47 | 'i0', 'i1', 'i2 + @e0(i0) * 3' 48 | ], extras=[mesh_tensor.Fs // 4]) 49 | mesh_tensor = mesh_tensor.inverse_loop_pool(pooled_feats=feats) 50 | y = self.conv1d(feats) 51 | else: # General Convolution 52 | CKP = mesh_tensor.convolution_kernel_pattern(self.kernel_size, self.dilation) 53 | K = CKP.shape[2] 54 | 55 | conv_feats = mesh_tensor.feats.reindex( 56 | shape=[mesh_tensor.N, self.in_channels, mesh_tensor.F, K], 57 | indexes=[ 58 | 'i0', 59 | 'i1', 60 | '@e0(i0, i2, i3)' 61 | ], 62 | extras=[CKP, mesh_tensor.Fs], 63 | overflow_conditions=['i2 >= @e1(i0)'], 64 | overflow_value=0, 65 | ) # [N, C, F, K] 66 | 67 | y0 = mesh_tensor.feats 68 | 69 | if self.stride == 2: 70 | N, C, F = mesh_tensor.shape 71 | conv_feats = conv_feats.reindex([N, C, F // 4, K], [ 72 | 'i0', 'i1', 'i2 + @e0(i0) * 3', 'i3' 73 | ], extras=[mesh_tensor.Fs // 4]) 74 | y0 = y0.reindex([N, C, F // 4], [ 75 | 'i0', 'i1', 'i2 + @e0(i0) * 3' 76 | ], extras=[mesh_tensor.Fs // 4]) 77 | mesh_tensor = mesh_tensor.inverse_loop_pool(pooled_feats=y0) 78 | 79 | features = [] 
80 | 81 | # Convolution: see Equation(2) in the corresponding paper 82 | # 1. w_0 * e_i 83 | features.append(y0) 84 | # 2. w_1 * sigma_{e_j} 85 | features.append(conv_feats.sum(dim=-1)) 86 | # 3. w_2 * sigma_{e_j+1 - e_j} 87 | features.append(jt.abs(conv_feats[..., [K-1] + list(range(K-1))] - conv_feats).sum(-1)) 88 | # 4. w_3 * sigma_{e_i - e_j} 89 | features.append(jt.abs(y0.unsqueeze(dim=-1) - conv_feats).sum(dim=-1)) 90 | 91 | y = jt.stack(features, dim=-1) 92 | y = self.conv2d(y)[:, :, :, 0] 93 | 94 | return mesh_tensor.updated(y) 95 | 96 | 97 | class MeshPool(nn.Module): 98 | def __init__(self, op): 99 | super().__init__() 100 | self.op = op 101 | 102 | def execute(self, mesh_tensor: MeshTensor): 103 | return mesh_tensor.inverse_loop_pool(self.op) 104 | 105 | 106 | class MeshUnpool(nn.Module): 107 | def __init__(self, mode): 108 | super().__init__() 109 | self.mode = mode 110 | 111 | def execute(self, mesh_tensor, ref_mesh=None): 112 | if ref_mesh is None: 113 | return mesh_tensor.loop_unpool(self.mode) 114 | else: 115 | return mesh_tensor.loop_unpool(self.mode, ref_mesh.faces, ref_mesh._cache) 116 | 117 | 118 | class MeshAdaptivePool(nn.Module): 119 | ''' Adaptive Pool (only support output size of (1,), i.e., global pooling) 120 | ''' 121 | def __init__(self, op): 122 | super().__init__() 123 | 124 | if not op in ['max', 'mean']: 125 | raise Exception('Unsupported pooling method') 126 | 127 | self.op = op 128 | 129 | def execute(self, mesh_tensor: MeshTensor): 130 | jt_op = 'add' if self.op == 'mean' else 'maximum' 131 | 132 | y = mesh_tensor.feats.reindex_reduce( 133 | op=jt_op, 134 | shape=[mesh_tensor.N, mesh_tensor.C], 135 | indexes=[ 136 | 'i0', 137 | 'i1' 138 | ], 139 | overflow_conditions=[ 140 | 'i2 >= @e0(i0)' 141 | ], 142 | extras=[mesh_tensor.Fs]) 143 | 144 | if self.op == 'mean': 145 | y = y / mesh_tensor.Fs.unsqueeze(dim=-1) 146 | 147 | return y 148 | 149 | 150 | class MeshBatchNorm(nn.Module): 151 | def __init__(self, num_features, eps=1e-5, 
momentum=0.1):
152 |         super().__init__()
153 |         self.bn = nn.BatchNorm(num_features, eps, momentum)
154 | 
155 |     def execute(self, mesh_tensor: MeshTensor):
156 |         feats = self.bn(mesh_tensor.feats)
157 |         return mesh_tensor.updated(feats)
158 | 
159 | 
160 | class MeshReLU(nn.Module):
161 |     def __init__(self):
162 |         super().__init__()
163 |         self.relu = nn.ReLU()
164 | 
165 |     def execute(self, mesh_tensor: MeshTensor):
166 |         feats = self.relu(mesh_tensor.feats)
167 |         return mesh_tensor.updated(feats)
168 | 
169 | 
170 | class MeshDropout(nn.Module):
171 |     def __init__(self, p=0.5):
172 |         super().__init__()
173 |         self.dropout = nn.Dropout(p)
174 | 
175 |     def execute(self, mesh_tensor):
176 |         feats = self.dropout(mesh_tensor.feats)
177 |         return mesh_tensor.updated(feats)
178 | 
179 | 
180 | class MeshLinear(nn.Module):
181 |     def __init__(self, in_channels, out_channels, bias=True):
182 |         super().__init__()
183 |         self.conv1d = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=bias)
184 | 
185 |     def execute(self, mesh_tensor: MeshTensor):
186 |         feats = self.conv1d(mesh_tensor.feats)
187 |         return mesh_tensor.updated(feats)
188 | 
189 | 
190 | def mesh_concat(meshes: List[MeshTensor]):
191 |     new_feats = jt.concat([mesh.feats for mesh in meshes], dim=1)
192 |     return meshes[0].updated(new_feats)
193 | 
194 | 
195 | def mesh_add(mesh_a: MeshTensor, mesh_b: MeshTensor):
196 |     new_feats = mesh_a.feats + mesh_b.feats
197 |     return mesh_a.updated(new_feats)
--------------------------------------------------------------------------------
/subdivnet/mesh_tensor.py:
--------------------------------------------------------------------------------
1 | import jittor as jt
2 | 
3 | 
4 | class MeshTensor:
5 |     """
6 |     A MeshTensor object stores a batch of triangular meshes with
7 |     multi-dimensional arrays.
8 | 
9 |     All faces are stored in a 3-dimensional tensor. To support batches of
10 |     meshes with varying numbers of faces, an additional array Fs holds every
11 |     mesh's number of faces.
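For illustration, the padded batch layout described above can be sketched in plain NumPy. This is a hypothetical example, independent of Jittor and of this class; `faces_a`, `faces_b`, and the channel count `C` are made-up inputs.

```python
import numpy as np

# Two meshes with different face counts, packed into one padded batch,
# mirroring the faces (N, F, 3) / feats (N, C, F) / Fs (N,) layout.
faces_a = np.array([[0, 1, 2], [0, 2, 3]])       # mesh A: 2 faces
faces_b = np.array([[0, 1, 2]])                  # mesh B: 1 face
C = 4                                            # feature channels

Fs = np.array([len(faces_a), len(faces_b)])      # faces per mesh
F_max = int(Fs.max())

faces = np.zeros((2, F_max, 3), dtype=np.int32)  # pad unused slots with face 0
faces[0, :Fs[0]] = faces_a
faces[1, :Fs[1]] = faces_b

feats = np.zeros((2, C, F_max), dtype=np.float32)  # padded face features
print(faces.shape, feats.shape, Fs.tolist())       # (2, 2, 3) (2, 4, 2) [2, 1]
```

Operations that reduce over faces then mask out the padded entries with `Fs`, which is what the `overflow_conditions` on the reindex calls below implement.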
12 | """ 13 | def __init__(self, faces: jt.Var, feats: jt.Var, Fs: jt.Var=None, cache=None): 14 | """ 15 | Parameters 16 | ------------ 17 | faces: (N, F, 3) int32 18 | Array of triangular faces. 19 | feats: (N, C, F) float32 20 | Array of face features. 21 | Fs: (N,) int32, optional 22 | Array of number of faces in each mesh. 23 | If not specified, Fs is set to n. 24 | cache: dict, optional 25 | things calculated from faces to avoid repeated calculation for the 26 | same mesh. 27 | """ 28 | self.faces = faces 29 | self.feats = feats 30 | 31 | self.N, self.C, self.F = feats.shape 32 | 33 | if Fs is not None: 34 | self.Fs = Fs 35 | assert self.F == self.Fs.max().data[0] 36 | else: 37 | self.Fs = jt.ones(self.N, dtype="int32") * self.F 38 | 39 | self._cache = cache if cache is not None else {} 40 | 41 | def updated(self, new_feats): 42 | """ 43 | Return a new MeshTensor with its feats updated. 44 | 45 | A shortcut to obtain a new MeshTensor with new features. 46 | """ 47 | assert new_feats.shape[0] == self.N 48 | assert new_feats.shape[2] == self.F 49 | return MeshTensor(self.faces, new_feats, self.Fs, self._cache) 50 | 51 | @property 52 | def shape(self): 53 | return self.feats.shape 54 | 55 | @property 56 | def V(self) -> int: 57 | """ Maximum number of vertices in the mini-batch """ 58 | if not 'V' in self._cache: 59 | self._cache['V'] = int((self.faces.max() + 1).data) 60 | return self._cache['V'] 61 | 62 | @property 63 | def Vs(self) -> jt.Var: 64 | """ 65 | Number of vertices in each mesh. 66 | 67 | Returns 68 | ------------ 69 | (N,) int32 70 | """ 71 | if not 'Vs' in self._cache: 72 | self._cache['Vs'] = self.faces.max(dim=1).max(dim=1) + 1 73 | return self._cache['Vs'] 74 | 75 | @property 76 | def degrees(self) -> jt.Var: 77 | """ 78 | Degrees of vertices. 
79 | 
80 |         Returns:
81 |         ------------
82 |         (N, V) int32
83 |         """
84 |         if not 'degrees' in self._cache:
85 |             face_degrees = jt.ones((self.N, self.F, 3), dtype=jt.int32)
86 |             self._cache['degrees'] = face_degrees.reindex_reduce(
87 |                 op='add',
88 |                 shape=[self.N, self.V],
89 |                 indexes=[
90 |                     'i0', '@e0(i0, i1, i2)'
91 |                 ],
92 |                 extras=[self.faces, self.Fs],
93 |                 overflow_conditions=['i1 >= @e1(i0)']
94 |             )
95 |         return self._cache['degrees']
96 | 
97 |     @property
98 |     def FAF(self) -> jt.Var:
99 |         """
100 |         FAF (face-adjacent-faces) indexes the three adjacent faces of each face.
101 | 
102 |         Returns:
103 |         ------------
104 |         (N, F, 3) int32
105 |         """
106 |         if not 'FAF' in self._cache:
107 |             self._cache['FAF'] = self.compute_face_adjacency_faces()
108 |         return self._cache['FAF']
109 | 
110 |     @property
111 |     def FAFP(self) -> jt.Var:
112 |         """ The previous face of current face's adjacent faces """
113 |         if not 'FAFP' in self._cache:
114 |             self._cache['FAFP'], self._cache['FAFN'] = self.compute_face_adjacency_reordered()
115 |         return self._cache['FAFP']
116 | 
117 |     @property
118 |     def FAFN(self) -> jt.Var:
119 |         """ The next face of current face's adjacent faces """
120 |         if not 'FAFN' in self._cache:
121 |             self._cache['FAFP'], self._cache['FAFN'] = self.compute_face_adjacency_reordered()
122 |         return self._cache['FAFN']
123 | 
124 |     def __add__(self, other: 'MeshTensor') -> 'MeshTensor':
125 |         new_feats = self.feats + other.feats
126 |         return self.updated(new_feats)
127 | 
128 |     def __radd__(self, other: 'MeshTensor') -> 'MeshTensor':
129 |         return self.__add__(other)
130 | 
131 |     def __sub__(self, other: 'MeshTensor') -> 'MeshTensor':
132 |         new_feats = self.feats - other.feats
133 |         return self.updated(new_feats)
134 | 
135 |     def __rsub__(self, other: 'MeshTensor') -> 'MeshTensor':
136 |         new_feats = other.feats - self.feats
137 |         return self.updated(new_feats)
138 | 
139 |     def __repr__(self):
140 |         return f'MeshTensor: N={self.N}, C={self.C}, F={self.F}'
141 | 
142 |     def inverse_loop_pool(self, op='max', pooled_feats=None):
143 |         """
144 |         Pooling
with the inverse loop scheme.
145 | 
146 |         Parameters:
147 |         ------------
148 |         op: {'max', 'mean'}, optional
149 |             Reduction method of pooling. The default is 'max'.
150 |         pooled_feats: (N, C, F // 4) float32, optional
151 |             If given, used directly as the features after pooling.
152 | 
153 |         Returns:
154 |         ------------
155 |         MeshTensor after 4-to-1 face merge.
156 |         """
157 |         pooled_Fs = self.Fs // 4
158 | 
159 |         pooled_faces = self.faces.reindex(
160 |             shape=[self.N, self.F // 4, 3],
161 |             indexes=[
162 |                 'i0',
163 |                 'i1 + @e0(i0) * i2',
164 |                 '0',
165 |             ],
166 |             extras=[pooled_Fs],
167 |             overflow_conditions=['i1 >= @e0(i0)'],
168 |             overflow_value=0
169 |         )
170 | 
171 |         if pooled_feats is None:
172 |             pooled_feats = self.feats.reindex(
173 |                 shape=[self.N, self.C, self.F // 4, 4],
174 |                 indexes=[
175 |                     'i0',
176 |                     'i1',
177 |                     'i2 + @e0(i0) * i3'
178 |                 ],
179 |                 extras=[pooled_Fs],
180 |                 overflow_conditions=['i2 >= @e0(i0)'],
181 |                 overflow_value=0
182 |             )
183 | 
184 |             if op == 'max':
185 |                 pooled_feats = jt.argmax(pooled_feats, dim=3)[1]
186 |             elif op == 'mean':
187 |                 pooled_feats = jt.mean(pooled_feats, dim=3)
188 |             else:
189 |                 raise Exception('Unsupported pooling operation')
190 |         else:
191 |             assert pooled_feats.shape[0] == self.N
192 |             assert pooled_feats.shape[2] == self.F // 4
193 | 
194 |         return MeshTensor(pooled_faces, pooled_feats, pooled_Fs)
195 | 
196 |     def loop_subdivision(self):
197 |         """
198 |         Computes the faces of meshes after one round of Loop subdivision.
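The core of the subdivision below is assigning one new vertex per undirected edge, found by hashing each half-edge to a scalar so duplicates collide. A NumPy sketch of that idea (hypothetical, not part of the library; `faces` and the hash base are made-up for the example):

```python
import numpy as np

# Two triangles sharing the edge (0, 2).
faces = np.array([[0, 1, 2], [0, 2, 3]])
V = int(faces.max()) + 1    # existing vertex count: 4

# All half-edges, one row per directed edge.
E = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])

# Scalar hash of the undirected edge: min * base + max, so the two
# half-edges of the same edge hash to the same value.
E_hash = E.min(axis=1).astype(np.int64) * (E.max() + 1) + E.max(axis=1)

# Each distinct edge gets a fresh vertex id appended after the V originals.
new_vertex = {h: V + i for i, h in enumerate(np.unique(E_hash))}

# 5 distinct edges -> 5 new vertices; every face splits into 4 children.
print(len(new_vertex), len(faces) * 4)   # 5 8
```

The Jittor version reaches the same pairing via `jt.argsort` on the hash, so the two half-edges of each edge land in adjacent sorted slots.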
199 | """ 200 | subdiv_faces = jt.zeros([self.N, self.F * 4, 3], dtype=jt.float32) 201 | for i in range(self.N): 202 | V = self.faces[i].max() + 1 203 | F = self.Fs[i].data[0] 204 | 205 | E = jt.concat([ 206 | self.faces[i, :F, [0,1]], 207 | self.faces[i, :F, [1,2]], 208 | self.faces[i, :F, [2,0]] 209 | ], dim=0) 210 | E_hash = E.min(dim=1).astype('int64') * E.max() + E.max(dim=1) 211 | E2F, _ = jt.argsort(E_hash) 212 | F2E = jt.zeros_like(E2F) 213 | F2E[E2F] = jt.index((E.shape[0],), 0) // 2 214 | 215 | E2 = V + F2E[:F] 216 | E0 = V + F2E[F:F*2] 217 | E1 = V + F2E[F*2:] 218 | subdiv_faces[i, :F*4] = jt.concat([ 219 | jt.stack([self.faces[i, :F, 0], E2, E1], dim=-1), 220 | jt.stack([self.faces[i, :F, 1], E0, E2], dim=-1), 221 | jt.stack([self.faces[i, :F, 2], E1, E0], dim=-1), 222 | jt.stack([E0, E1, E2], dim=-1) 223 | ], dim=0) 224 | return subdiv_faces 225 | 226 | def loop_unpool(self, mode, ref_faces=None, ref_cache=None): 227 | """ 228 | Unpooling with the loop subdivision scheme. 229 | 230 | Parameters: 231 | ------------ 232 | mode: {'nearest', 'bilinear'} 233 | Algorithm used for unpooling. 234 | ref_faces: (N, F, 3) int32, optional 235 | If specified, the returned MeshTensor uses the reference faces 236 | instead of computing by loop subdivision. This parameter can speed 237 | up dense prediction networks with pairs of pooling and unpooling. 238 | The default is None. 239 | ref_cache: dict, optional 240 | If specified, the returned MeshTensor uses the reference cache. The 241 | default is None. 242 | 243 | Returns: 244 | ------------ 245 | MeshTensor after 1-to-4 face split. 
246 | """ 247 | unpooled_Fs = self.Fs * 4 248 | 249 | if ref_faces is not None: 250 | unpooled_faces = ref_faces 251 | unpooled_cache = ref_cache 252 | else: 253 | unpooled_faces = self.loop_subdivision() 254 | unpooled_cache = None 255 | 256 | if mode == 'nearest': 257 | unpooled_feats = jt.concat([self.feats] * 4, dim=2) 258 | elif mode == 'bilinear': 259 | neighbor_feats = self.feats.reindex( 260 | shape=[self.N, self.C, self.F, 3], 261 | indexes=[ 262 | 'i0', 'i1', '@e0(i0, i2, i3)' 263 | ], 264 | extras=[self.FAF] 265 | ) 266 | unpooled_feats = jt.concat([ 267 | (self.feats * 2 + neighbor_feats[..., 1] + neighbor_feats[..., 2]) / 4, 268 | (self.feats * 2 + neighbor_feats[..., 2] + neighbor_feats[..., 0]) / 4, 269 | (self.feats * 2 + neighbor_feats[..., 0] + neighbor_feats[..., 1]) / 4, 270 | self.feats 271 | ], dim=2) 272 | else: 273 | raise Exception(f'Unsupported unpool mode: {mode}') 274 | 275 | return MeshTensor(unpooled_faces, unpooled_feats, unpooled_Fs, unpooled_cache) 276 | 277 | def compute_face_adjacency_faces(self) -> jt.Var: 278 | """ 279 | Compute face adjacency faces. 280 | 281 | Returns: 282 | ------------ 283 | (N, F, 3) int32 284 | """ 285 | FAF = jt.zeros_like(self.faces) 286 | for i in range(self.N): 287 | F = self.Fs[i].data[0] 288 | E = jt.concat([ 289 | self.faces[i, :F, [1, 2]], 290 | self.faces[i, :F, [2, 0]], 291 | self.faces[i, :F, [0, 1]], 292 | ], dim=0) 293 | 294 | E_hash = E.min(dim=1).astype('int64') * E.max() + E.max(dim=1) 295 | 296 | # S is index of sorted E_hash. 297 | # Based on the construction rule of E, 298 | # 1. S % F is the face id 299 | # 2. 
S // F is the index of the edge within its face
300 |             S, _ = jt.argsort(E_hash)
301 | 
302 |             # S[:, 0] and S[:, 1] are pairs of half-edge
303 |             S = S.reshape(-1, 2)
304 | 
305 |             FAF[i, S[:, 0] % F, S[:, 0] // F] = S[:, 1] % F
306 |             FAF[i, S[:, 1] % F, S[:, 1] // F] = S[:, 0] % F
307 | 
308 |         return FAF
309 | 
310 |     def compute_face_adjacency_reordered(self):
311 |         """ Reorder adjacencies so that FAFP and FAFN give the previous and
312 |         next adjacent face of each adjacent face, respectively. """
313 |         FAF = self.FAF
314 | 
315 |         FAF_ext = FAF.reindex(
316 |             shape=[self.N, self.F, 3, 3],
317 |             indexes=[
318 |                 'i0', '@e0(i0, i1, i2)', 'i3',
319 |             ],
320 |             extras=[FAF],
321 |         )
322 | 
323 |         # rotate each neighbor's adjacency list until the current face comes first
324 |         for _ in range(2):
325 |             FAF_ext = FAF_ext.reindex(
326 |                 shape=[self.N, self.F, 3, 3],
327 |                 indexes=[
328 |                     'i0', 'i1', 'i2', '@e0(i0, i1, i2, 0) == i1 ? i3 : (i3 > 0 ? i3 - 1 : 2)'
329 |                 ],
330 |                 extras=[FAF_ext]
331 |             )
332 | 
333 |         FAFP = FAF_ext[:, :, :, 2]
334 |         FAFN = FAF_ext[:, :, :, 1]
335 |         return FAFP, FAFN
336 | 
337 |     def dilated_face_adjacencies(self, dilation: int):
338 |         if dilation <= 1:
339 |             raise Exception('dilation must be greater than one')
340 | 
341 |         DFA = jt.code(
342 |             shape=[self.N, self.F, 3],
343 |             dtype=jt.int32,
344 |             inputs=[self.FAF, jt.zeros((dilation, 0), dtype=jt.int32)],
345 |             cpu_src="""
346 |                 @alias(FAF, in0)
347 |                 int dilation = in1_shape0;
348 | 
349 |                 for (int bs = 0; bs < out_shape0; ++bs)
350 |                     for (int f = 0; f < out_shape1; ++f)
351 |                         for (int k = 0; k < out_shape2; ++k) {
352 |                             int a = f;
353 |                             int b = @FAF(bs, f, k);
354 |                             for (int d = 1; d < dilation; ++d) {
355 |                                 int i = @FAF(bs, b, 0) == a ? 0 : (@FAF(bs, b, 1) == a ? 1 : 2);
356 |                                 a = b;
357 |                                 if ((d & 1) == 0) {     // go to next
358 |                                     b = @FAF(bs, b, i < 2 ? i + 1 : 0);
359 |                                 } else {                // go to previous
360 |                                     b = @FAF(bs, b, i > 0 ?
i - 1 : 2);
361 |                                 }
362 |                             }
363 |                             @out(bs, f, k) = b;
364 |                         }
365 |             """,
366 |             cuda_src="""
367 |                 __global__ void dilated_face_adjacencies_kernel(@ARGS_DEF) {
368 |                     @PRECALC
369 |                     @alias(FAF, in0)
370 |                     int dilation = in1_shape0;
371 |                     int N = in0_shape0;
372 |                     int F = in0_shape1;
373 | 
374 |                     int idx = blockIdx.x * blockDim.x + threadIdx.x;
375 |                     int bs = idx / (F * 3);
376 |                     int f = idx / 3 % F;
377 |                     int k = idx % 3;
378 | 
379 |                     if (bs >= N)
380 |                         return;
381 | 
382 |                     int a = f;
383 |                     int b = @FAF(bs, f, k);
384 |                     for (int d = 1; d < dilation; ++d) {
385 |                         int i = @FAF(bs, b, 0) == a ? 0 : (@FAF(bs, b, 1) == a ? 1 : 2);
386 |                         a = b;
387 |                         if ((d & 1) == 0) {     // go to next
388 |                             b = @FAF(bs, b, i < 2 ? i + 1 : 0);
389 |                         } else {                // go to previous
390 |                             b = @FAF(bs, b, i > 0 ? i - 1 : 2);
391 |                         }
392 |                     }
393 |                     @out(bs, f, k) = b;
394 |                 }
395 | 
396 |                 dilated_face_adjacencies_kernel<<<(in0_shape0*in0_shape1*3-1)/512+1, 512>>>(@ARGS);
397 |             """
398 |         )
399 | 
400 |         return DFA
401 | 
402 |     def convolution_kernel_pattern(self, kernel_size=3, dilation=1):
403 |         if kernel_size == 1:
404 |             raise Exception('kernel size 1 does not have a convolution pattern')
405 | 
406 |         if kernel_size == 3:
407 |             if dilation == 1:
408 |                 return self.FAF
409 |             else:
410 |                 return self.dilated_face_adjacencies(dilation)
411 |         elif kernel_size == 5:
412 |             if dilation == 1:
413 |                 return jt.stack([
414 |                     self.FAFN[:, :, 0],
415 |                     self.FAF[:, :, 0],
416 |                     self.FAFP[:, :, 0],
417 |                     self.FAFN[:, :, 1],
418 |                     self.FAF[:, :, 1],
419 |                     self.FAFP[:, :, 1],
420 |                     self.FAFN[:, :, 2],
421 |                     self.FAF[:, :, 2],
422 |                     self.FAFP[:, :, 2],
423 |                 ], dim=-1)
424 |             else:
425 |                 raise Exception('Dilation is not yet supported for kernel sizes larger than 3')
426 |         else:
427 |             DFA = jt.code(
428 |                 shape=[self.N, self.F, 3],
429 |                 dtype=jt.int32,
430 |                 inputs=[self.FAF, jt.zeros((kernel_size, 0), dtype=jt.int32), jt.zeros((dilation, 0), dtype=jt.int32)],
431 |                 cpu_src="""
432 |                     @alias(FAF, in0)
433 |                     int kernel_size = in1_shape0;
434 |                     int
dilation = in2_shape0;
435 | 
436 |                     for (int bs = 0; bs < out_shape0; ++bs)
437 |                         for (int f = 0; f < out_shape1; ++f)
438 |                             for (int k = 0; k < out_shape2; ++k) {
439 |                                 int a = f;
440 |                                 int b = @FAF(bs, f, k);
441 |                                 for (int d = 1; d < dilation; ++d) {
442 |                                     int i = @FAF(bs, b, 0) == a ? 0 : (@FAF(bs, b, 1) == a ? 1 : 2);
443 |                                     a = b;
444 |                                     if ((d & 1) == 0) {     // go to next
445 |                                         b = @FAF(bs, b, i < 2 ? i + 1 : 0);
446 |                                     } else {                // go to previous
447 |                                         b = @FAF(bs, b, i > 0 ? i - 1 : 2);
448 |                                     }
449 |                                 }
450 |                                 @out(bs, f, k) = b;
451 |                             }
452 |                 """,
453 |                 cuda_src="""
454 |                     __global__ void dilated_face_adjacencies_kernel(@ARGS_DEF) {
455 |                         @PRECALC
456 |                         @alias(FAF, in0)
457 |                         int dilation = in2_shape0;
458 |                         int N = in0_shape0;
459 |                         int F = in0_shape1;
460 | 
461 |                         int idx = blockIdx.x * blockDim.x + threadIdx.x;
462 |                         int bs = idx / (F * 3);
463 |                         int f = idx / 3 % F;
464 |                         int k = idx % 3;
465 | 
466 |                         if (bs >= N)
467 |                             return;
468 | 
469 |                         int a = f;
470 |                         int b = @FAF(bs, f, k);
471 |                         for (int d = 1; d < dilation; ++d) {
472 |                             int i = @FAF(bs, b, 0) == a ? 0 : (@FAF(bs, b, 1) == a ? 1 : 2);
473 |                             a = b;
474 |                             if ((d & 1) == 0) {     // go to next
475 |                                 b = @FAF(bs, b, i < 2 ? i + 1 : 0);
476 |                             } else {                // go to previous
477 |                                 b = @FAF(bs, b, i > 0 ?
i - 1 : 2);
478 |                             }
479 |                         }
480 |                         @out(bs, f, k) = b;
481 |                     }
482 | 
483 |                     dilated_face_adjacencies_kernel<<<(in0_shape0*in0_shape1*3-1)/512+1, 512>>>(@ARGS);
484 |                 """
485 |             )
486 | 
487 |             return DFA
488 | 
489 |         raise Exception(f'Unsupported kernel size {kernel_size}')
490 | 
491 | 
492 |     def aggregate_vertex_feature(self, op='mean'):
493 |         """
494 |         Aggregate face features to vertices.
495 | 
496 |         Parameters:
497 |         -----------
498 |         op: {'min', 'max', 'mean'}, optional
499 | 
500 |         Returns:
501 |         --------
502 |         vertex_features: (N, C, V), float32
503 |         """
504 |         if not op in ['max', 'min', 'maximum', 'minimum', 'mean']:
505 |             raise Exception(f'Unsupported op: {op}')
506 |         jt_op = op
507 |         if op == 'max':
508 |             jt_op = 'maximum'
509 |         if op == 'min':
510 |             jt_op = 'minimum'
511 |         if op == 'mean':
512 |             jt_op = 'add'
513 | 
514 |         face_features = jt.misc.repeat(
515 |             self.feats.unsqueeze(dim=-1),
516 |             [1, 1, 1, 3]
517 |         )
518 |         vertex_features = face_features.reindex_reduce(
519 |             op=jt_op,
520 |             shape=[self.N, self.C, self.V],
521 |             indexes=[
522 |                 'i0',
523 |                 'i1',
524 |                 '@e0(i0, i2, i3)'
525 |             ],
526 |             extras=[self.faces, self.Fs],
527 |             overflow_conditions=['i2 >= @e1(i0)']
528 |         )
529 | 
530 |         if op == 'mean':
531 |             degree = self.degrees.reindex(
532 |                 shape=[self.N, self.V],
533 |                 indexes=['i0', 'i1'],
534 |                 extras=[self.Vs],
535 |                 overflow_value=1,
536 |                 overflow_conditions=['i1 >= @e0(i0)']
537 |             )
538 |             vertex_features = vertex_features / degree.unsqueeze(dim=1)
539 | 
540 |         return vertex_features
541 | 
--------------------------------------------------------------------------------
/subdivnet/network.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 | import jittor as jt
3 | import jittor.nn as nn
4 | 
5 | from .mesh_ops import MeshAdaptivePool
6 | from .mesh_ops import MeshBatchNorm
7 | from .mesh_ops import MeshConv
8 | from .mesh_ops import MeshDropout
9 | from .mesh_ops import MeshLinear
10 | from
.mesh_ops import MeshPool 11 | from .mesh_ops import MeshUnpool 12 | from .mesh_ops import MeshReLU 13 | from .mesh_ops import mesh_concat 14 | from .mesh_ops import mesh_add 15 | 16 | 17 | class MeshConvBlock(nn.Module): 18 | def __init__(self, in_channels, out_channels, dilation=1): 19 | super().__init__() 20 | 21 | self.mconv1 = MeshConv(in_channels, out_channels, dilation=dilation, bias=False) 22 | self.mconv2 = MeshConv(out_channels, out_channels, dilation=dilation, bias=False) 23 | self.bn1 = MeshBatchNorm(out_channels) 24 | self.bn2 = MeshBatchNorm(out_channels) 25 | self.relu1 = MeshReLU() 26 | self.relu2 = MeshReLU() 27 | 28 | def execute(self, mesh): 29 | mesh = self.mconv1(mesh) 30 | mesh = self.bn1(mesh) 31 | mesh = self.relu1(mesh) 32 | mesh = self.mconv2(mesh) 33 | mesh = self.bn2(mesh) 34 | mesh = self.relu2(mesh) 35 | return mesh 36 | 37 | 38 | class MeshResIdentityBlock(nn.Module): 39 | def __init__(self, in_channels, out_channels, dilation=1): 40 | super().__init__() 41 | self.conv1 = MeshLinear(in_channels, out_channels) 42 | self.bn1 = MeshBatchNorm(out_channels) 43 | self.relu = MeshReLU() 44 | self.conv2 = MeshConv(out_channels, out_channels, dilation=dilation) 45 | self.bn2 = MeshBatchNorm(out_channels) 46 | self.conv3 = MeshLinear(out_channels, out_channels) 47 | self.bn3 = MeshBatchNorm(out_channels) 48 | 49 | def execute(self, mesh): 50 | identity = mesh 51 | 52 | mesh = self.conv1(mesh) 53 | mesh = self.bn1(mesh) 54 | mesh = self.relu(mesh) 55 | mesh = self.conv2(mesh) 56 | mesh = self.bn2(mesh) 57 | mesh = self.conv3(mesh) 58 | mesh = self.bn3(mesh) 59 | 60 | mesh.feats += identity.feats 61 | mesh = self.relu(mesh) 62 | 63 | return mesh 64 | 65 | 66 | class MeshResConvBlock(nn.Module): 67 | def __init__(self, in_channels, out_channels, dilation=1): 68 | super().__init__() 69 | self.conv0 = MeshLinear(in_channels, out_channels) 70 | self.bn0 = MeshBatchNorm(out_channels) 71 | self.conv1 = MeshLinear(out_channels, out_channels) 72 | 
self.bn1 = MeshBatchNorm(out_channels) 73 | self.relu = MeshReLU() 74 | self.conv2 = MeshConv(out_channels, out_channels, dilation=dilation) 75 | self.bn2 = MeshBatchNorm(out_channels) 76 | self.conv3 = MeshLinear(out_channels, out_channels) 77 | self.bn3 = MeshBatchNorm(out_channels) 78 | 79 | def execute(self, mesh): 80 | mesh = self.conv0(mesh) 81 | mesh = self.bn0(mesh) 82 | identity = mesh 83 | 84 | mesh = self.conv1(mesh) 85 | mesh = self.bn1(mesh) 86 | mesh = self.relu(mesh) 87 | mesh = self.conv2(mesh) 88 | mesh = self.bn2(mesh) 89 | mesh = self.conv3(mesh) 90 | mesh = self.bn3(mesh) 91 | 92 | mesh.feats += identity.feats 93 | mesh = self.relu(mesh) 94 | 95 | return mesh 96 | 97 | 98 | class MeshBottleneck(nn.Module): 99 | expansion = 4 100 | 101 | def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None): 102 | super().__init__() 103 | self.conv1 = MeshConv(inplanes, planes, kernel_size=1, bias=False) 104 | self.bn1 = MeshBatchNorm(planes) 105 | self.conv2 = MeshConv(planes, planes, kernel_size=3, stride=stride, dilation=dilation, bias=False) 106 | self.bn2 = MeshBatchNorm(planes) 107 | self.conv3 = MeshConv(planes, planes * 4, kernel_size=1, bias=False) 108 | self.bn3 = MeshBatchNorm(planes * 4) 109 | self.relu = MeshReLU() 110 | self.downsample = downsample 111 | self.stride = stride 112 | self.dilation = dilation 113 | 114 | def execute(self, x): 115 | residual = x 116 | 117 | out = self.conv1(x) 118 | out = self.bn1(out) 119 | out = self.relu(out) 120 | 121 | out = self.conv2(out) 122 | out = self.bn2(out) 123 | out = self.relu(out) 124 | 125 | out = self.conv3(out) 126 | out = self.bn3(out) 127 | 128 | if self.downsample is not None: 129 | residual = self.downsample(x) 130 | 131 | out += residual 132 | out = self.relu(out) 133 | 134 | return out 135 | 136 | 137 | class MeshNet(nn.Module): 138 | def __init__(self, in_channels: int, out_channels: int, depth: int, 139 | layer_channels: List[int], residual=False, blocks=None, 
n_dropout=1): 140 | super(MeshNet, self).__init__() 141 | self.fc = MeshLinear(in_channels, layer_channels[0]) 142 | self.relu = MeshReLU() 143 | 144 | self.convs = nn.Sequential() 145 | for i in range(depth): 146 | if residual: 147 | self.convs.append(MeshResConvBlock(layer_channels[i], layer_channels[i + 1])) 148 | for _ in range(blocks[i] - 1): 149 | self.convs.append(MeshResConvBlock(layer_channels[i + 1], layer_channels[i + 1])) 150 | else: 151 | self.convs.append(MeshConvBlock(layer_channels[i], 152 | layer_channels[i + 1])) 153 | self.convs.append(MeshPool('max')) 154 | self.convs.append(MeshConv(layer_channels[-1], 155 | layer_channels[-1], 156 | bias=False)) 157 | self.global_pool = MeshAdaptivePool('max') 158 | 159 | if n_dropout >= 2: 160 | self.dp1 = nn.Dropout(0.5) 161 | 162 | self.linear1 = nn.Linear(layer_channels[-1], layer_channels[-1], bias=False) 163 | self.bn = nn.BatchNorm1d(layer_channels[-1]) 164 | 165 | if n_dropout >= 1: 166 | self.dp2 = nn.Dropout(0.5) 167 | self.linear2 = nn.Linear(layer_channels[-1], out_channels) 168 | 169 | 170 | def execute(self, mesh): 171 | mesh = self.fc(mesh) 172 | mesh = self.relu(mesh) 173 | 174 | mesh = self.convs(mesh) 175 | 176 | x = self.global_pool(mesh) 177 | 178 | if hasattr(self, 'dp1'): 179 | x = self.dp1(x) 180 | x = nn.relu(self.bn(self.linear1(x))) 181 | 182 | if hasattr(self, 'dp2'): 183 | x = self.dp2(x) 184 | x = self.linear2(x) 185 | 186 | return x 187 | -------------------------------------------------------------------------------- /subdivnet/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | from pathlib import Path 4 | 5 | import numpy as np 6 | import trimesh 7 | 8 | import jittor as jt 9 | 10 | from .mesh_tensor import MeshTensor 11 | 12 | 13 | segment_colors = np.array([ 14 | [0, 114, 189], 15 | [217, 83, 26], 16 | [238, 177, 32], 17 | [126, 47, 142], 18 | [117, 142, 48], 19 | [76, 190, 238], 20 | [162, 19, 48], 21 | 
[240, 166, 202], 22 | ]) 23 | 24 | 25 | def to_mesh_tensor(meshes): 26 | return MeshTensor(jt.int32(meshes['faces']), 27 | jt.float32(meshes['feats']), 28 | jt.int32(meshes['Fs'])) 29 | 30 | 31 | def save_results(mesh_infos, preds, labels, name): 32 | if not os.path.exists('results'): 33 | os.mkdir('results') 34 | 35 | if isinstance(labels, jt.Var): 36 | labels = labels.data 37 | 38 | results_path = Path('results') / name 39 | results_path.mkdir(parents=True, exist_ok=True) 40 | 41 | for i in range(preds.shape[0]): 42 | mesh_path = mesh_infos['mesh_paths'][i] 43 | mesh_name = Path(mesh_path).stem 44 | 45 | mesh = trimesh.load_mesh(mesh_path, process=False) 46 | mesh.visual.face_colors[:, :3] = segment_colors[preds[i, :mesh.faces.shape[0]]] 47 | mesh.export(results_path / f'pred-{mesh_name}.ply') 48 | mesh.visual.face_colors[:, :3] = segment_colors[labels[i, :mesh.faces.shape[0]]] 49 | mesh.export(results_path / f'gt-{mesh_name}.ply') 50 | 51 | 52 | def update_label_accuracy(preds, labels, acc): 53 | if isinstance(preds, jt.Var): 54 | preds = preds.data 55 | if isinstance(labels, jt.Var): 56 | labels = labels.data 57 | 58 | for i in range(preds.shape[0]): 59 | for k in range(len(acc)): 60 | if (labels[i] == k).sum() > 0: 61 | acc[k] += ((preds[i] == labels[i]) * (labels[i] == k)).sum() / (labels[i] == k).sum() 62 | 63 | 64 | def compute_original_accuracy(mesh_infos, preds, labels): 65 | if isinstance(preds, jt.Var): 66 | preds = preds.data 67 | if isinstance(labels, jt.Var): 68 | labels = labels.data 69 | 70 | accs = np.zeros(preds.shape[0]) 71 | for i in range(preds.shape[0]): 72 | raw_labels = mesh_infos['raw_labels'][i] 73 | raw_to_sub = mesh_infos['raw_to_sub'][i] 74 | accs[i] = np.mean((preds[i])[raw_to_sub] == raw_labels) 75 | 76 | return accs 77 | 78 | 79 | class ClassificationMajorityVoting: 80 | def __init__(self, nclass): 81 | self.votes = {} 82 | self.nclass = nclass 83 | 84 | def vote(self, mesh_paths, preds, labels): 85 | if isinstance(preds, jt.Var): 
86 | preds = preds.data 87 | if isinstance(labels, jt.Var): 88 | labels = labels.data 89 | 90 | for i in range(preds.shape[0]): 91 | name = (Path(mesh_paths[i]).stem).split('-')[0] 92 | if not name in self.votes: 93 | self.votes[name] = { 94 | 'polls': np.zeros(self.nclass, dtype=int), 95 | 'label': labels[i] 96 | } 97 | self.votes[name]['polls'][preds[i]] += 1 98 | 99 | def compute_accuracy(self): 100 | sum_acc = 0 101 | for name, vote in self.votes.items(): 102 | pred = np.argmax(vote['polls']) 103 | sum_acc += pred == vote['label'] 104 | return sum_acc / len(self.votes) 105 | 106 | 107 | class SegmentationMajorityVoting: 108 | def __init__(self, nclass, name=''): 109 | self.votes = {} 110 | self.nclass = nclass 111 | self.name = name 112 | 113 | def vote(self, mesh_infos, preds, labels): 114 | if isinstance(preds, jt.Var): 115 | preds = preds.data 116 | if isinstance(labels, jt.Var): 117 | labels = labels.data 118 | 119 | for i in range(preds.shape[0]): 120 | name = (Path(mesh_infos['mesh_paths'][i]).stem)[:-4] 121 | nfaces = mesh_infos['raw_labels'][i].shape[0] 122 | if not name in self.votes: 123 | self.votes[name] = { 124 | 'polls': np.zeros((nfaces, self.nclass), dtype=int), 125 | 'label': mesh_infos['raw_labels'][i], 126 | 'raw_path': mesh_infos['raw_paths'][i], 127 | } 128 | polls = self.votes[name]['polls'] 129 | raw_to_sub = mesh_infos['raw_to_sub'][i] 130 | raw_pred = (preds[i])[raw_to_sub] 131 | polls[np.arange(nfaces), raw_pred] += 1 132 | 133 | def compute_accuracy(self, save_results=False): 134 | if save_results: 135 | if self.name: 136 | results_path = Path('results') / self.name 137 | else: 138 | results_path = Path('results') 139 | results_path.mkdir(parents=True, exist_ok=True) 140 | 141 | sum_acc = 0 142 | all_acc = {} 143 | for name, vote in self.votes.items(): 144 | label = vote['label'] 145 | pred = np.argmax(vote['polls'], axis=1) 146 | acc = np.mean(pred == label) 147 | sum_acc += acc 148 | all_acc[name] = acc 149 | 150 | if save_results: 
151 | mesh_path = vote['raw_path'] 152 | mesh = trimesh.load_mesh(mesh_path, process=False) 153 | mesh.visual.face_colors[:, :3] = segment_colors[pred[:mesh.faces.shape[0]]] 154 | mesh.export(results_path / f'pred-{name}.ply') 155 | mesh.visual.face_colors[:, :3] = segment_colors[label[:mesh.faces.shape[0]]] 156 | mesh.export(results_path / f'gt-{name}.ply') 157 | 158 | if save_results: 159 | with open(results_path / 'acc.json', 'w') as f: 160 | json.dump(all_acc, f, indent=4) 161 | return sum_acc / len(self.votes) 162 | -------------------------------------------------------------------------------- /teaser.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lzhengning/SubdivNet/018bd806a6bf99bee53855adc7c5749b79fd2045/teaser.jpg -------------------------------------------------------------------------------- /train_cls.py: -------------------------------------------------------------------------------- 1 | # *************************************************************** 2 | # Author: Zheng-Ning Liu 3 | # 4 | # The training & test script for mesh classification. 
5 | # *************************************************************** 6 | 7 | import os 8 | 9 | os.environ['OPENBLAS_NUM_THREADS'] = '1' 10 | os.environ['MKL_NUM_THREADS'] = '1' 11 | 12 | import argparse 13 | from tensorboardX import SummaryWriter 14 | 15 | import jittor as jt 16 | import jittor.nn as nn 17 | from jittor.optim import Adam 18 | from jittor.optim import SGD 19 | from jittor.lr_scheduler import MultiStepLR 20 | jt.flags.use_cuda = 1 21 | 22 | import numpy as np 23 | from tqdm import tqdm 24 | 25 | from subdivnet.dataset import ClassificationDataset 26 | from subdivnet.network import MeshNet 27 | from subdivnet.utils import to_mesh_tensor 28 | from subdivnet.utils import ClassificationMajorityVoting 29 | 30 | 31 | def train(net, optim, train_dataset, writer, epoch): 32 | net.train() 33 | n_correct = 0 34 | n_samples = 0 35 | 36 | jt.sync_all(True) 37 | 38 | disable_tqdm = jt.rank != 0 39 | for meshes, labels, _ in tqdm(train_dataset, desc=f'Train {epoch}', disable=disable_tqdm): 40 | 41 | mesh_tensor = to_mesh_tensor(meshes) 42 | mesh_labels = jt.int32(labels) 43 | 44 | outputs = net(mesh_tensor) 45 | loss = nn.cross_entropy_loss(outputs, mesh_labels) 46 | optim.step(loss) 47 | 48 | preds = np.argmax(outputs.data, axis=1) 49 | n_correct += np.sum(labels == preds) 50 | n_samples += outputs.shape[0] 51 | 52 | loss = loss.item() 53 | if jt.rank == 0: 54 | writer.add_scalar('loss', loss, global_step=train.step) 55 | 56 | train.step += 1 57 | 58 | # To avoid jittor hanging when training with multiple gpus 59 | jt.sync_all(True) 60 | 61 | if jt.rank == 0: 62 | acc = n_correct / n_samples 63 | print(f'Epoch #{epoch}: train acc = {acc}') 64 | writer.add_scalar('train-acc', acc, global_step=epoch) 65 | 66 | 67 | @jt.single_process_scope() 68 | def test(net, test_dataset, writer, epoch, args): 69 | net.eval() 70 | acc = 0 71 | voted = ClassificationMajorityVoting(args.n_classes) 72 | with jt.no_grad(): 73 | for meshes, labels, mesh_paths in tqdm(test_dataset,
desc=f'Test {epoch}'): 74 | mesh_tensor = to_mesh_tensor(meshes) 75 | outputs = net(mesh_tensor) 76 | 77 | preds = np.argmax(outputs.data, axis=1) 78 | acc += np.sum(labels == preds) 79 | voted.vote(mesh_paths, preds, labels) 80 | 81 | acc /= test_dataset.total_len 82 | vacc = voted.compute_accuracy() 83 | 84 | # Update best results 85 | if test.best_acc < acc: 86 | if test.best_acc > 0: 87 | os.remove(os.path.join('checkpoints', name, f'acc-{test.best_acc:.4f}.pkl')) 88 | net.save(os.path.join('checkpoints', name, f'acc-{acc:.4f}.pkl')) 89 | test.best_acc = acc 90 | 91 | if test.best_vacc < vacc: 92 | if test.best_vacc > 0: 93 | os.remove(os.path.join('checkpoints', name, f'vacc-{test.best_vacc:.4f}.pkl')) 94 | net.save(os.path.join('checkpoints', name, f'vacc-{vacc:.4f}.pkl')) 95 | test.best_vacc = vacc 96 | 97 | print(f'Epoch #{epoch}: test acc = {acc}, best = {test.best_acc}') 98 | print(f'Epoch #{epoch}: test acc [voted] = {vacc}, best = {test.best_vacc}') 99 | writer.add_scalar('test-acc', acc, global_step=epoch) 100 | writer.add_scalar('test-vacc', vacc, global_step=epoch) 101 | 102 | 103 | if __name__ == '__main__': 104 | parser = argparse.ArgumentParser() 105 | parser.add_argument('mode', choices=['train', 'test']) 106 | parser.add_argument('--name', type=str, required=True) 107 | parser.add_argument('--dataroot', type=str, required=True) 108 | parser.add_argument('--checkpoint', type=str) 109 | parser.add_argument('--n_classes', type=int) 110 | parser.add_argument('--depth', type=int, required=True) 111 | parser.add_argument('--optim', choices=['adam', 'sgd'], default='adam') 112 | parser.add_argument('--lr', type=float, default=1e-3) 113 | parser.add_argument('--lr_milestones', type=int, nargs='+', default=None) 114 | parser.add_argument('--weight_decay', type=float, default=0) 115 | parser.add_argument('--batch_size', type=int, default=48) 116 | parser.add_argument('--n_epoch', type=int, default=100) 117 | parser.add_argument('--channels', type=int, 
nargs='+', required=True) 118 | parser.add_argument('--residual', action='store_true') 119 | parser.add_argument('--blocks', type=int, nargs='+', default=None) 120 | parser.add_argument('--n_dropout', type=int, default=1) 121 | parser.add_argument('--seed', type=int, default=None) 122 | parser.add_argument('--n_worker', type=int, default=4) 123 | parser.add_argument('--use_xyz', action='store_true') 124 | parser.add_argument('--use_normal', action='store_true') 125 | parser.add_argument('--augment_scale', action='store_true') 126 | parser.add_argument('--augment_orient', action='store_true') 127 | 128 | args = parser.parse_args() 129 | mode = args.mode 130 | name = args.name 131 | dataroot = args.dataroot 132 | 133 | if args.seed is not None: 134 | jt.set_global_seed(args.seed) 135 | 136 | # ========== Dataset ========== 137 | augments = [] 138 | if args.augment_scale: 139 | augments.append('scale') 140 | if args.augment_orient: 141 | augments.append('orient') 142 | train_dataset = ClassificationDataset(dataroot, batch_size=args.batch_size, 143 | shuffle=True, train=True, num_workers=args.n_worker, augment=augments) 144 | test_dataset = ClassificationDataset(dataroot, batch_size=args.batch_size, 145 | shuffle=False, train=False, num_workers=args.n_worker) 146 | 147 | input_channels = 7 148 | if args.use_xyz: 149 | train_dataset.feats.append('center') 150 | test_dataset.feats.append('center') 151 | input_channels += 3 152 | if args.use_normal: 153 | train_dataset.feats.append('normal') 154 | test_dataset.feats.append('normal') 155 | input_channels += 3 156 | 157 | # ========== Network ========== 158 | net = MeshNet(input_channels, out_channels=args.n_classes, depth=args.depth, 159 | layer_channels=args.channels, residual=args.residual, 160 | blocks=args.blocks, n_dropout=args.n_dropout) 161 | 162 | # ========== Optimizer ========== 163 | if args.optim == 'adam': 164 | optim = Adam(net.parameters(), lr=args.lr, weight_decay=args.weight_decay) 165 | else: 166 | optim 
= SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=args.weight_decay) 167 | 168 | if args.lr_milestones is not None: 169 | scheduler = MultiStepLR(optim, milestones=args.lr_milestones, gamma=0.1) 170 | else: 171 | scheduler = MultiStepLR(optim, milestones=[]) 172 | 173 | # ========== MISC ========== 174 | if jt.rank == 0: 175 | writer = SummaryWriter("logs/" + name) 176 | else: 177 | writer = None 178 | 179 | checkpoint_path = os.path.join('checkpoints', name) 180 | checkpoint_name = os.path.join(checkpoint_path, name + '-latest.pkl') 181 | os.makedirs(checkpoint_path, exist_ok=True) 182 | 183 | if args.checkpoint is not None: 184 | print('parameters: loaded from ', args.checkpoint) 185 | net.load(args.checkpoint) 186 | 187 | train.step = 0 188 | test.best_acc = 0 189 | test.best_vacc = 0 190 | 191 | # ========== Start Training ========== 192 | if jt.rank == 0: 193 | print('name: ', name) 194 | 195 | if args.mode == 'train': 196 | for epoch in range(args.n_epoch): 197 | train(net, optim, train_dataset, writer, epoch) 198 | test(net, test_dataset, writer, epoch, args) 199 | scheduler.step() 200 | 201 | jt.sync_all() 202 | if jt.rank == 0: 203 | net.save(checkpoint_name) 204 | else: 205 | test(net, test_dataset, writer, 0, args) 206 | -------------------------------------------------------------------------------- /train_seg.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['OPENBLAS_NUM_THREADS'] = '1' 3 | 4 | import argparse 5 | import random 6 | 7 | import numpy as np 8 | from tensorboardX import SummaryWriter 9 | 10 | import jittor as jt 11 | import jittor.nn as nn 12 | from jittor.optim import Adam, SGD 13 | from jittor.lr_scheduler import MultiStepLR 14 | jt.flags.use_cuda = 1 15 | 16 | from tqdm import tqdm 17 | 18 | from subdivnet.dataset import SegmentationDataset 19 | from subdivnet.deeplab import MeshDeepLab 20 | from subdivnet.deeplab import MeshVanillaUnet 21 | from subdivnet.utils import to_mesh_tensor 22 |
from subdivnet.utils import save_results 23 | from subdivnet.utils import update_label_accuracy 24 | from subdivnet.utils import compute_original_accuracy 25 | from subdivnet.utils import SegmentationMajorityVoting 26 | 27 | 28 | def train(net, optim, dataset, writer, epoch): 29 | net.train() 30 | acc = 0 31 | for meshes, labels, _ in tqdm(dataset, desc=str(epoch)): 32 | mesh_tensor = to_mesh_tensor(meshes) 33 | mesh_labels = jt.int32(labels) 34 | outputs = net(mesh_tensor) 35 | loss = nn.cross_entropy_loss(outputs.unsqueeze(dim=-1), mesh_labels.unsqueeze(dim=-1), ignore_index=-1) 36 | optim.step(loss) 37 | 38 | preds = np.argmax(outputs.data, axis=1) 39 | acc += np.sum((labels == preds).sum(axis=1) / meshes['Fs']) 40 | writer.add_scalar('loss', loss.data[0], global_step=train.step) 41 | train.step += 1 42 | acc /= dataset.total_len 43 | 44 | print(f'Epoch #{epoch}: train acc = {acc}') 45 | writer.add_scalar('train-acc', acc, global_step=epoch) 46 | 47 | 48 | @jt.single_process_scope() 49 | def test(net, dataset, writer, epoch, args): 50 | net.eval() 51 | acc = 0 52 | oacc = 0 53 | label_acc = np.zeros(args.parts) 54 | name = args.name 55 | voted = SegmentationMajorityVoting(args.parts, name) 56 | 57 | with jt.no_grad(): 58 | for meshes, labels, mesh_infos in tqdm(dataset, desc=str(epoch)): 59 | mesh_tensor = to_mesh_tensor(meshes) 60 | mesh_labels = jt.int32(labels) 61 | outputs = net(mesh_tensor) 62 | preds = np.argmax(outputs.data, axis=1) 63 | 64 | batch_acc = (labels == preds).sum(axis=1) / meshes['Fs'] 65 | batch_oacc = compute_original_accuracy(mesh_infos, preds, mesh_labels) 66 | acc += np.sum(batch_acc) 67 | oacc += np.sum(batch_oacc) 68 | update_label_accuracy(preds, mesh_labels, label_acc) 69 | voted.vote(mesh_infos, preds, mesh_labels) 70 | 71 | acc /= dataset.total_len 72 | oacc /= dataset.total_len 73 | voacc = voted.compute_accuracy(save_results=True) 74 | writer.add_scalar('test-acc', acc, global_step=epoch) 75 | writer.add_scalar('test-oacc', oacc,
global_step=epoch) 76 | writer.add_scalar('test-voacc', voacc, global_step=epoch) 77 | 78 | # Update best results 79 | if test.best_oacc < oacc: 80 | if test.best_oacc > 0: 81 | os.remove(os.path.join('checkpoints', name, f'oacc-{test.best_oacc:.4f}.pkl')) 82 | net.save(os.path.join('checkpoints', name, f'oacc-{oacc:.4f}.pkl')) 83 | test.best_oacc = oacc 84 | 85 | if test.best_voacc < voacc: 86 | if test.best_voacc > 0: 87 | os.remove(os.path.join('checkpoints', name, f'voacc-{test.best_voacc:.4f}.pkl')) 88 | net.save(os.path.join('checkpoints', name, f'voacc-{voacc:.4f}.pkl')) 89 | test.best_voacc = voacc 90 | 91 | print('test acc = ', acc) 92 | print('test acc [original] =', oacc, ', best =', test.best_oacc) 93 | print('test acc [original] [voted] =', voacc, ', best =', test.best_voacc) 94 | print('test acc per label =', label_acc / dataset.total_len) 95 | 96 | 97 | if __name__ == '__main__': 98 | parser = argparse.ArgumentParser() 99 | parser.add_argument('mode', choices=['train', 'test']) 100 | parser.add_argument('--name', type=str, required=True) 101 | parser.add_argument('--dataroot', type=str, required=True) 102 | parser.add_argument('--batch_size', type=int, default=8) 103 | parser.add_argument('--optim', choices=['adam', 'sgd'], default='adam') 104 | parser.add_argument('--lr', type=float, default=2e-2) 105 | parser.add_argument('--lr_milestones', type=int, nargs='+', default=[50, 100, 150]) 106 | parser.add_argument('--lr_gamma', type=float, default=0.1) 107 | parser.add_argument('--weight_decay', type=float, default=0) 108 | parser.add_argument('--checkpoint', type=str) 109 | parser.add_argument('--upsample', choices=['nearest', 'bilinear'], default='bilinear') 110 | parser.add_argument('--parts', type=int, default=8) 111 | parser.add_argument('--augment_scale', action='store_true') 112 | parser.add_argument('--augment_orient', action='store_true') 113 | parser.add_argument('--arch', choices=['unet', 'deeplab', 'vunet'], default='unet') 114 | 
parser.add_argument('--backbone', choices=['resnet18', 'resnet50'], default='resnet50') 115 | parser.add_argument('--globalpool', choices=['max', 'mean'], default='mean') 116 | 117 | args = parser.parse_args() 118 | mode = args.mode 119 | name = args.name 120 | batch_size = args.batch_size 121 | dataroot = args.dataroot 122 | 123 | net = None 124 | if args.arch == 'deeplab': 125 | net = MeshDeepLab(13, args.parts, args.backbone, globalpool=args.globalpool) 126 | elif args.arch in ('unet', 'vunet'): 127 | net = MeshVanillaUnet(13, args.parts, upsample=args.upsample) 128 | 129 | if args.optim == 'adam': 130 | optim = Adam(net.parameters(), lr=args.lr, weight_decay=args.weight_decay) 131 | else: 132 | optim = SGD(net.parameters(), lr=args.lr, weight_decay=args.weight_decay) 133 | 134 | scheduler = MultiStepLR(optim, milestones=args.lr_milestones, gamma=args.lr_gamma) 135 | 136 | writer = SummaryWriter("logs/" + name) 137 | print('name:', name) 138 | 139 | augments = [] 140 | if args.augment_scale: 141 | augments.append('scale') 142 | if args.augment_orient: 143 | augments.append('orient') 144 | 145 | if mode == 'train': 146 | train_dataset = SegmentationDataset(dataroot, batch_size=batch_size, 147 | shuffle=True, train=True, num_workers=4, augments=augments) 148 | test_dataset = SegmentationDataset(dataroot, batch_size=8, shuffle=False, 149 | train=False, num_workers=4) 150 | 151 | checkpoint_path = os.path.join('checkpoints', name) 152 | checkpoint_name = os.path.join(checkpoint_path, name + '-latest.pkl') 153 | os.makedirs(checkpoint_path, exist_ok=True) 154 | 155 | if args.checkpoint is not None: 156 | print('parameters: loaded from ', args.checkpoint) 157 | net.load(args.checkpoint) 158 | 159 | train.step = 0 160 | test.best_oacc = 0 161 | test.best_voacc = 0 162 | 163 | if args.mode == 'train': 164 | for epoch in range(500): 165 | train(net, optim, train_dataset, writer, epoch) 166 | test(net, test_dataset, writer, epoch, args) 167 | scheduler.step() 168 |
net.save(checkpoint_name) 169 | else: 170 | test(net, test_dataset, writer, 0, args) 171 | --------------------------------------------------------------------------------
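As a standalone illustration (not part of the repository), the per-mesh majority-voting scheme used by `ClassificationMajorityVoting` in `subdivnet/utils.py` can be sketched in plain NumPy. The helper name `majority_vote_accuracy` and the toy inputs are hypothetical; the polling logic mirrors the class above:

```python
import numpy as np

def majority_vote_accuracy(mesh_paths, preds, labels, nclass):
    # One mesh is tested as several augmented copies whose file names share
    # a prefix before '-'; each copy casts one vote for its predicted class,
    # and the mesh is scored by whichever class collects the most votes.
    votes = {}
    for path, pred, label in zip(mesh_paths, preds, labels):
        name = path.split('/')[-1].split('-')[0]
        if name not in votes:
            votes[name] = {'polls': np.zeros(nclass, dtype=int), 'label': label}
        votes[name]['polls'][pred] += 1
    correct = int(sum(np.argmax(v['polls']) == v['label'] for v in votes.values()))
    return correct / len(votes)

# Mesh 'a' gets votes [1, 2, 0] -> class 1 wins and matches its label;
# mesh 'b' is misclassified, so the voted accuracy is 0.5.
acc = majority_vote_accuracy(
    ['a-0.obj', 'a-1.obj', 'a-2.obj', 'b-0.obj'],
    preds=[1, 0, 1, 0], labels=[1, 1, 1, 2], nclass=3)
print(acc)  # 0.5
```

Voting over augmented copies is why the test scripts report both a plain accuracy (`acc`) and a voted accuracy (`vacc`/`voacc`): a single mislabeled copy is outvoted by the other copies of the same mesh.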