--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # NextFace
2 | NextFace is a light-weight pytorch library for high-fidelity 3D face reconstruction from monocular image(s), where scene attributes – 3D geometry, reflectance (diffuse, specular and roughness), pose, camera parameters, and scene illumination – are estimated. It is a first-order optimization method that uses the pytorch autograd engine and ray tracing to fit a statistical morphable model to the input image(s).
3 | 
4 | 
5 |
6 |
7 | A video demo is available on YouTube.
8 |
16 | # News
17 | * **19 March 2023**: Fixed a bug in the optimizer where gradients were not enabled for the camera pose (rotation and translation). I also added a new optimization strategy for the second and third stages, which should improve overall performance. Please pull the latest code.
18 | * **21 June 2022**: Many thanks to **[Jack Saunders](https://researchportal.bath.ac.uk/en/persons/jack-saunders)** for adding this new feature to NextFace: support for [mediapipe](https://google.github.io/mediapipe/solutions/face_mesh.html#overview) as a replacement for the FAN landmarks detector. Mediapipe produces much more stable and accurate results than FAN. To try mediapipe, pull the new version of the code and install mediapipe (**pip install mediapipe**). Mediapipe is now the default landmarks detector; if you want to switch back to FAN, edit the **optimConfig.ini** file (set **lamdmarksDetectorType = 'fan'**).
19 | * **01 May 2022**: If you want to generate an animation like the gifs in this readme, where the reconstruction rotates around the vertical axis, run the replay.py script and give it the path of the pickle file that contains the optimized scene attributes (located in checkpoints/stage3_output.pickle).
20 | * **26 April 2022**: I added export of the estimated light map (as an environment map). This can be useful if you want to render the face with other rendering engines (Unreal, Unity, OpenGL). Please pull the code. You can choose to export the light map as png or exr (check optimConfig.ini).
21 | * **25 April 2022**: If you want to generate textures at higher resolutions (1024x1024 or 2048x2048), I have added these two maps here: **https://github.com/abdallahdib/NextFace/releases**. To use them, download **uvParametrization.2048.pickle** and **uvParametrization.1024.pickle**, put them inside the **baselMorphableModel** directory and change **textureResolution** in **optimConfig.ini** to 1024 or 2048. Also don't forget to pull the latest code. Please note that with these large uv maps the optimization requires more CPU/GPU memory.
22 | * **24 April 2022**: added a colab notebook in: **demo.ipynb**.
23 | * **20 April 2022**: I replaced the landmarks association file with a new one which gives better reconstruction, especially on face contours. Please pull.
24 | * **20 April 2022**: I tried NextFace on a challenging face and, surprisingly, we still get an appealing reconstruction, see below:
25 | 
26 |
27 | # Features:
28 | * Reconstructs face at high fidelity from single or multiple RGB images
29 | * Estimates face geometry
30 | * Estimates detailed face reflectance (diffuse, specular and roughness)
31 | * Estimates scene light with spherical harmonics
32 | * Estimates head pose and orientation
33 | * Runs on both CPU and CUDA-enabled GPU
34 |
35 |
36 | # Installation
37 | * Clone the repository
38 | * Execute the commands in the 'INSTALL' file. These commands create a new conda environment called nextFace and install the required packages. An 'environment.yml' is also provided. The library is tested with torch 1.3.1, torchvision 0.4.2 and cuda toolkit 10.1, but it should also work with more recent pytorch versions.
39 | * Activate the environment: conda activate nextFace
40 | * Download the Basel face model from [here](https://faces.dmi.unibas.ch/bfm/bfm2017.html): just fill the form and you will receive an instant direct download link in your inbox. Download the **model2017-1_face12_nomouth.h5** file and put it inside the **./baselMorphableModel** directory
41 | * Download the albedo face model **albedoModel2020_face12_albedoPart.h5** from [here](https://github.com/waps101/AlbedoMM/releases/download/v1.0/albedoModel2020_face12_albedoPart.h5) and put it inside **./baselMorphableModel** directory
42 |
43 |
44 | # How to use
45 |
46 | ## Reconstruction from a single image
47 | * To reconstruct a face from a single image, run the following command:
48 | * **python optimizer.py --input *path-to-your-input-image* --output *output-path-where-to-save-results***
49 | ## Reconstruction from multiple images (batch reconstruction)
50 | * In case you have multiple images with the same resolution, you can run a batch optimization on these images. For this, put all your images in the same directory and run the following command:
51 | * **python optimizer.py --input *path-to-your-folder-that-contains-all-ur-images* --output *output-path-where-to-save-results***
52 | ## Reconstruction from multiple images of the same person
53 | * If you have multiple images of the same person, put these images in the same folder and run the following command:
54 | * **python optimizer.py --sharedIdentity --input *path-to-your-folder-that-contains-all-ur-images* --output *output-path-where-to-save-results***
55 |
56 | The **sharedIdentity** flag tells the optimizer that all images belong to the same person. In that case, the shape identity and face reflectance attributes are shared across all images. This generally produces better face reflectance and geometry estimation.
57 | ## Configuring NextFace
58 | * The file **optimConfig.ini** allows you to control different aspects of NextFace (see the excerpt below), such as:
59 | * optimization (regularizations, number of iterations...)
60 | * compute device (run on cpu or gpu)
61 | * spherical harmonics (number of bands, environment map resolution)
62 | * ray tracing (number of samples)
63 | * The code is self-documented and easy to follow
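
A short excerpt of the kind of `key = value` entries that **optimConfig.ini** accepts (keys and default values below are taken from config.py; this is an illustrative sketch, not the full file):

```
device = 'cuda'                       # or 'cpu'
lamdmarksDetectorType = 'mediapipe'   # or 'fan'
textureResolution = 256               # 256 or 512 (1024/2048 with the extra uv maps)
maxResolution = 512                   # larger input images are downscaled to this size
bands = 9                             # number of spherical harmonics bands
envMapRes = 64                        # resolution of the estimated environment map
iterStep1 = 2000                      # iterations of the coarse (landmark) stage
iterStep2 = 400                       # iterations of the first dense stage
iterStep3 = 100                       # iterations of the per-pixel refinement stage
rtTrainingSamples = 8                 # ray tracing samples used during optimization
rtSamples = 500                       # ray tracing samples used for the final render
```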
64 |
65 | # Output
66 | The optimization takes 4~5 minutes depending on your gpu performance. The output of the optimization is the following:
67 | * render_{imageIndex}.png: contains from left to right: input image, overlay of the final reconstruction on the input image, the final reconstruction, diffuse, specular and roughness maps projected on the face.
68 | * diffuseMap_{imageIndex}.png: the estimated diffuse map in uv space
69 | * specularMap_{imageIndex}.png: the estimated specular map in uv space
70 | * roughnessMap_{imageIndex}.png: the estimated roughness map in uv space
71 | * mesh{imageIndex}.obj: an obj file that contains the 3D mesh of the reconstructed face
72 |
73 | # How it works
74 | NextFace reproduces the optimization strategy of our [early work](https://arxiv.org/abs/2101.05356). The optimization is composed of three stages:
75 | * **stage 1**: the coarse stage, where face expression and head pose are estimated by minimizing the geometric loss between the 2D landmarks and their corresponding face vertices. This produces a good starting point for the next optimization stage.
76 | * **stage 2**: the face shape identity/expression, statistical diffuse and specular albedos, head pose and scene light are estimated by minimizing the photo-consistency loss between the ray-traced image and the real one.
77 | * **stage 3**: to improve the statistical albedos estimated in the previous stage, the method optimizes the previously estimated albedos on a per-pixel basis and tries to capture more albedo details. Consistency, symmetry and smoothness regularizers (similar to [this work](https://arxiv.org/abs/2101.05356)) are used to avoid overfitting and add robustness against lighting conditions.
78 | By default, the method uses 9 spherical harmonics bands (as in [this work](https://openaccess.thecvf.com/content/ICCV2021/papers/Dib_Towards_High_Fidelity_Monocular_Face_Reconstruction_With_Rich_Reflectance_Using_ICCV_2021_paper.pdf)) to capture the scene light. You can modify the number of spherical harmonics bands in **optimConfig.ini** and see the importance of using a high number of bands for better shadow recovery.
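
As a rough sketch, the stage-2 objective has the following shape, where the weights are the corresponding parameters from **optimConfig.ini** (the exact form of each term lives in the optimizer code, so treat this as an approximation rather than the exact objective):

$$E_{stage2} \approx E_{photo} + w_{lmk}\,E_{landmarks} + w_{shape}\,E_{shape} + w_{exp}\,E_{expression} + w_{albedo}\,E_{albedo}$$

with $w_{lmk}$ = **weightLandmarksLossStep2**, $w_{shape}$ = **weightShapeReg**, $w_{exp}$ = **weightExpressionReg** and $w_{albedo}$ = **weightAlbedoReg**.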
79 |
80 | # Good practice for best reconstruction
81 |
82 | * To obtain the best reconstruction with optimal albedos, ensure that the images are taken in good lighting conditions (well lit, without strong shadows).
83 | * In case of a single input image, ensure that the face is frontal in order to reconstruct complete diffuse/specular/roughness maps, as the method recovers only the visible parts of the face.
84 | * Avoid extreme face expressions as the underlying model may fail to recover them.
85 | # Limitations
86 | * The method relies on landmarks to initialize the optimization (Stage 1). In case these landmarks are inaccurate, you may get a sub-optimal reconstruction. NextFace uses landmarks from [face_alignment](https://github.com/1adrianb/face-alignment), which are robust against extreme poses; however, they are not as accurate as they could be. This limitation has been discussed [here](https://openaccess.thecvf.com/content/ICCV2021/papers/Dib_Towards_High_Fidelity_Monocular_Face_Reconstruction_With_Rich_Reflectance_Using_ICCV_2021_paper.pdf) and [here](https://arxiv.org/abs/2101.05356). Using [this landmark detector](https://arxiv.org/abs/2204.02776) from Microsoft seems promising.
87 | * NextFace is slow, and execution speed decreases with the size of the input image. If you are running on an old GPU (like me), you can decrease the resolution of the input image in the **optimConfig.ini** file by reducing the value of the *maxResolution* parameter. Our [recent work](https://openaccess.thecvf.com/content/ICCV2021/papers/Dib_Towards_High_Fidelity_Monocular_Face_Reconstruction_With_Rich_Reflectance_Using_ICCV_2021_paper.pdf) addresses this and achieves near real-time performance using a deep convolutional neural network.
88 | * NextFace cannot capture fine geometry details (wrinkles, pores...). These details may get baked into the final albedos. Our recent [work](https://arxiv.org/abs/2203.07732) captures fine-scale geometric details.
89 | * Spherical harmonics can only model light at infinity. Under strong directional shadows, the estimated light may not be as accurate as it could be, so residual shadows may appear in the estimated albedos. You can attenuate this by increasing the value of the regularizers in the **optimConfig.ini** file, but this trades off albedo details.
90 | Below are the values to modify:
91 | * for diffuse map: *weightDiffuseSymmetryReg* and *weightDiffuseConsistencyReg*,
92 | * for specular map: *weightSpecularSymmetryReg*, *weightSpecularConsistencyReg*
93 | * for roughness map: *weightRoughnessSymmetryReg* and *weightRoughnessConsistencyReg*
94 | I also provide a configuration file named **optimConfigShadows.ini** which has higher values for these regularizers that you can try.
95 | * Using a single image to estimate face attributes is an ill-posed problem, and the estimated reflectance maps (diffuse, specular and roughness) are view/camera dependent. To obtain intrinsic reflectance maps, you have to use multiple images per subject.
96 |
97 | # Roadmap
98 | If I have time:
99 | * Expression tracking from video, by optimizing head pose and expression on a per-frame basis, which is straightforward once you have estimated the intrinsic face parameters (reflectance and geometry). I have not implemented it yet simply because I am running an old GPU (GTX 970M). I may add this feature when I decide to buy an RTX :)
100 | * Add a virtual light stage as proposed in [this work](https://arxiv.org/abs/2101.05356) to model high-frequency point lights.
101 | * Add support for the [FLAME](https://github.com/Rubikplayer/flame-fitting) morphable model. Help is welcome.
102 | * Add a GUI for loading images, editing landmarks, running the optimization and visualizing results.
103 |
104 | # License
105 | NextFace is available for free, under the GPL license, for research and educational purposes only. Please check the LICENSE file.
106 |
107 | # Acknowledgements
108 | The uv map is taken from [here](https://github.com/unibas-gravis/parametric-face-image-generator/blob/master/data/regions/face12.json) and the landmarks association from [here](https://github.com/kimoktm/Face2face/blob/master/data/custom_mapping.txt). [redner](https://github.com/BachiLi/redner/) is used for ray tracing, and the albedo model comes from [here](https://github.com/waps101/AlbedoMM/).
109 |
110 | # Contact
111 | mail: deeb.abdallah @at gmail
112 |
113 | twitter: abdallah_dib
114 |
115 | # Citation
116 | If you use NextFace and find it useful in your work, the following works are relevant:
117 |
118 | ```
119 | @inproceedings{dib2021practical,
120 | title={Practical face reconstruction via differentiable ray tracing},
121 | author={Dib, Abdallah and Bharaj, Gaurav and Ahn, Junghyun and Th{\'e}bault, C{\'e}dric and Gosselin, Philippe and Romeo, Marco and Chevallier, Louis},
122 | booktitle={Computer Graphics Forum},
123 | volume={40},
124 | number={2},
125 | pages={153--164},
126 | year={2021},
127 | organization={Wiley Online Library}
128 | }
129 |
130 | @inproceedings{dib2021towards,
131 | title={Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing},
132 | author={Dib, Abdallah and Thebault, Cedric and Ahn, Junghyun and Gosselin, Philippe-Henri and Theobalt, Christian and Chevallier, Louis},
133 | booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
134 | pages={12819--12829},
135 | year={2021}
136 | }
137 |
138 | @article{dib2022s2f2,
139 | title={S2F2: Self-Supervised High Fidelity Face Reconstruction from Monocular Image},
140 | author={Dib, Abdallah and Ahn, Junghyun and Thebault, Cedric and Gosselin, Philippe-Henri and Chevallier, Louis},
141 | journal={arXiv preprint arXiv:2203.07732},
142 | year={2022}
143 | }
144 | ```
--------------------------------------------------------------------------------
/baselMorphableModel/landmark_54.txt:
--------------------------------------------------------------------------------
1 | 7 26869
2 | 8 27061
3 | 9 27253
4 | 17 22586
5 | 18 22991
6 | 19 23303
7 | 20 23519
8 | 21 23736
9 | 22 24312
10 | 23 24527
11 | 24 24743
12 | 25 25055
13 | 26 25466
14 | 27 8134
15 | 28 8143
16 | 29 8151
17 | 30 8157
18 | 31 6986
19 | 32 7695
20 | 33 8167
21 | 34 8639
22 | 35 9346
23 | 36 2602
24 | 37 4146
25 | 38 4920
26 | 39 5830
27 | 40 4674
28 | 41 3900
29 | 42 10390
30 | 43 11287
31 | 44 12061
32 | 45 13481
33 | 46 12331
34 | 47 11557
35 | 48 5522
36 | 49 6026
37 | 50 7355
38 | 51 8181
39 | 52 9007
40 | 53 10329
41 | 54 10857
42 | 55 9730
43 | 56 8670
44 | 57 8199
45 | 58 7726
46 | 59 6898
47 | 60 6291
48 | 61 7364
49 | 62 8190
50 | 63 9016
51 | 64 10088
52 | 65 8663
53 | 66 8191
54 | 67 7719
--------------------------------------------------------------------------------
/baselMorphableModel/landmark_62.txt:
--------------------------------------------------------------------------------
1 | 0 16203
2 | 1 16235
3 | 2 16260
4 | 3 16290
5 | 7 26869
6 | 8 27061
7 | 9 27253
8 | 13 22481
9 | 14 22451
10 | 15 22426
11 | 16 22394
12 | 17 22586
13 | 18 22991
14 | 19 23303
15 | 20 23519
16 | 21 23736
17 | 22 24312
18 | 23 24527
19 | 24 24743
20 | 25 25055
21 | 26 25466
22 | 27 8134
23 | 28 8143
24 | 29 8151
25 | 30 8157
26 | 31 6986
27 | 32 7695
28 | 33 8167
29 | 34 8639
30 | 35 9346
31 | 36 2602
32 | 37 4146
33 | 38 4920
34 | 39 5830
35 | 40 4674
36 | 41 3900
37 | 42 10390
38 | 43 11287
39 | 44 12061
40 | 45 13481
41 | 46 12331
42 | 47 11557
43 | 48 5522
44 | 49 6026
45 | 50 7355
46 | 51 8181
47 | 52 9007
48 | 53 10329
49 | 54 10857
50 | 55 9730
51 | 56 8670
52 | 57 8199
53 | 58 7726
54 | 59 6898
55 | 60 6291
56 | 61 7364
57 | 62 8190
58 | 63 9016
59 | 64 10088
60 | 65 8663
61 | 66 8191
62 | 67 7719
--------------------------------------------------------------------------------
/baselMorphableModel/landmark_62_mp.txt:
--------------------------------------------------------------------------------
1 | 127 16203
2 | 234 16235
3 | 93 16260
4 | 132 16290
5 | 148 26869
6 | 152 27061
7 | 377 27253
8 | 361 22481
9 | 323 22451
10 | 454 22426
11 | 356 22394
12 | 70 22586
13 | 63 22991
14 | 105 23303
15 | 66 23519
16 | 107 23736
17 | 336 24312
18 | 296 24527
19 | 334 24743
20 | 293 25055
21 | 300 25466
22 | 6 8134
23 | 195 8143
24 | 5 8151
25 | 4 8157
26 | 240 6986
27 | 99 7695
28 | 2 8167
29 | 328 8639
30 | 460 9346
31 | 33 2602
32 | 160 4146
33 | 158 4920
34 | 133 5830
35 | 153 4674
36 | 144 3900
37 | 362 10390
38 | 385 11287
39 | 387 12061
40 | 263 13481
41 | 373 12331
42 | 380 11557
43 | 61 5522
44 | 40 6026
45 | 37 7355
46 | 0 8181
47 | 267 9007
48 | 270 10329
49 | 291 10857
50 | 321 9730
51 | 314 8670
52 | 17 8199
53 | 84 7726
54 | 91 6898
55 | 62 6291
56 | 82 7364
57 | 13 8190
58 | 312 9016
59 | 292 10088
60 | 317 8663
61 | 14 8191
62 | 87 7719
--------------------------------------------------------------------------------
/baselMorphableModel/normals.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/baselMorphableModel/normals.pickle
--------------------------------------------------------------------------------
/baselMorphableModel/uvParametrization.256.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/baselMorphableModel/uvParametrization.256.pickle
--------------------------------------------------------------------------------
/baselMorphableModel/uvParametrization.512.pickle:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/baselMorphableModel/uvParametrization.512.pickle
--------------------------------------------------------------------------------
/camera.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import numpy as np
3 |
4 |
5 |
6 | class Camera:
7 |
8 | def __init__(self, device):
9 | self.device = device
10 |
11 | self.rotXm1 = torch.tensor(np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 1.]]), dtype=torch.float, device=device)
12 | self.rotXm2 = torch.tensor(np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]]), dtype=torch.float, device=device)
13 | self.rotXm3 = torch.tensor(np.array([[1., 0., 0.], [0., 0., 0.], [0., 0., 0.]]), dtype=torch.float, device=device)
14 |
15 | self.rotYm1 = torch.tensor(np.array([[1., 0., 0.], [0., 0., 0.], [0., 0., 1.]]), dtype=torch.float, device=device)
16 | self.rotYm2 = torch.tensor(np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]]), dtype=torch.float, device=device)
17 | self.rotYm3 = torch.tensor(np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]]), dtype=torch.float, device=device)
18 |
19 | self.rotZm1 = torch.tensor(np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]]), dtype=torch.float, device=device)
20 | self.rotZm2 = torch.tensor(np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]]), dtype=torch.float, device=device)
21 | self.rotZm3 = torch.tensor(np.array([[0., 0., 0.], [0., 0., 0.], [0., 0., 1.]]), dtype=torch.float, device=device)
22 |
23 | def computeTransformation(self, rotation, translation):
24 | '''
25 | create a transformation matrix from rotation and translation
26 | rotation: [n, 3]
27 | translation: [n, 3]
28 |         return: transformation matrix [n, 3, 4]
29 | '''
30 |
31 | assert (rotation.dim() == 2 and rotation.shape[-1] == 3)
32 | assert(translation.dim() == 2 and translation.shape[-1] == 3)
33 |
34 | rotx = torch.cos(rotation[..., :1, None]).expand(-1, 3, 3) * self.rotXm1 \
35 | + torch.sin(rotation[..., :1, None]).expand(-1, 3, 3) * self.rotXm2 \
36 | + self.rotXm3
37 | roty = torch.cos(rotation[..., 1:2, None]).expand(-1, 3, 3) * self.rotYm1 \
38 | + torch.sin(rotation[..., 1:2, None]).expand( -1, 3, 3) * self.rotYm2 \
39 | + self.rotYm3
40 | rotz = torch.cos(rotation[..., 2:, None]).expand(-1, 3, 3) * self.rotZm1 \
41 | + torch.sin(rotation[..., 2:, None]).expand(-1, 3, 3) * self.rotZm2 \
42 | + self.rotZm3
43 |
44 | rotMatrix = torch.matmul(rotz, torch.matmul(roty, rotx))
45 | transformation = torch.cat((rotMatrix, translation[ :, :, None]), -1)
46 | return transformation
47 |
48 | def transformVertices(self, vertices, translation, rotation):
49 | '''
50 | transform vertices by the rotation and translation vector
51 | :param vertices: tensor [n, verticesNumber, 3]
52 | :param translation: tensor [n, 3]
53 | :param rotation: tensor [n, 3]
54 | :return: transformed vertices [n, verticesNumber, 3]
55 | '''
56 | assert (vertices.dim() == 3 and vertices.shape[-1] == 3)
57 |
58 | transformationMatrix = self.computeTransformation(rotation, translation)
59 | ones = torch.ones([vertices.shape[0], vertices.shape[1], 1], dtype = torch.float, device = vertices.device)
60 | vertices = torch.cat((vertices, ones), -1)
61 | framesNumber = transformationMatrix.shape[0]
62 | verticesNumber = vertices.shape[1]
63 | out = torch.matmul(transformationMatrix.view(1, framesNumber, 1, 3, 4),
64 | vertices.view(framesNumber, verticesNumber, 4, 1)).view(1, framesNumber, verticesNumber, 3)
65 | return out[0]
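
if __name__ == '__main__':
    # Usage sketch (illustrative addition, not part of the original pipeline):
    # rotate/translate a small batch of random vertices with per-frame pose vectors.
    device = 'cpu'
    camera = Camera(device)
    rotation = torch.zeros([2, 3], device=device)      # [n, 3] Euler angles in radians
    translation = torch.zeros([2, 3], device=device)   # [n, 3]
    translation[:, 2] = 500.                            # push the vertices along the z axis
    vertices = torch.rand([2, 100, 3], device=device)   # [n, verticesNumber, 3]
    transformed = camera.transformVertices(vertices, translation, rotation)
    print(transformed.shape)                            # torch.Size([2, 100, 3])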
--------------------------------------------------------------------------------
/config.py:
--------------------------------------------------------------------------------
1 | import copy
2 | import sys
3 |
4 |
5 | class Config:
6 | def __init__(self):
7 | #compute device
8 | self.device = 'cuda'
9 |
10 | #tracker
11 | self.lamdmarksDetectorType = 'mediapipe' # Options ['mediapipe', 'fan']
12 |
13 | #morphable model
14 | self.path = 'baselMorphableModel'
15 |         self.textureResolution = 256 #256 or 512 (1024 and 2048 also work with the extra uv maps from the release page)
16 | self.trimPca = False # if True keep only a subset of the pca basis (eigen vectors)
17 |
18 | #spherical harmonics
19 | self.bands = 9
20 | self.envMapRes = 64
21 | self.smoothSh = False
22 | self.saveExr = True
23 | #camera
24 | self.camFocalLength = 500.0 #focal length in pixels (f = f_{mm} * imageWidth / sensorWidth)
25 | self.optimizeFocalLength = True #if True the initial focal length is estimated otherwise it remains constant
26 |
27 | #image
28 | self.maxResolution = 512
29 |
30 | #optimization
31 | self.iterStep1 = 2000 # number of iterations for the coarse optim
32 | self.iterStep2 = 400 #number of iteration for the first dense optim (based on statistical priors)
33 | self.iterStep3 = 100 #number of iterations for refining the statistical albedo priors
34 | self.weightLandmarksLossStep2 = 0.001 #landmarks weight during step2
35 | self.weightLandmarksLossStep3 = 0.001 # landmarks weight during step3
36 |
37 | self.weightShapeReg = 0.001 #weight for shape regularization
38 | self.weightExpressionReg = 0.001 # weight for expression regularization
39 | self.weightAlbedoReg = 0.001 # weight for albedo regularization
40 |
41 | self.weightDiffuseSymmetryReg = 50. #symmetry regularizer weight for diffuse texture (at step 3)
42 | self.weightDiffuseConsistencyReg = 100. # consistency regularizer weight for diffuse texture (at step 3)
43 | self.weightDiffuseSmoothnessReg = 0.001 # smoothness regularizer weight for diffuse texture (at step 3)
44 |
45 | self.weightSpecularSymmetryReg = 30. # symmetry regularizer weight for specular texture (at step 3)
46 | self.weightSpecularConsistencyReg = 2. # consistency regularizer weight for specular texture (at step 3)
47 | self.weightSpecularSmoothnessReg = 0.001 # smoothness regularizer weight for specular texture (at step 3)
48 |
49 | self.weightRoughnessSymmetryReg = 10. # symmetry regularizer weight for roughness texture (at step 3)
50 | self.weightRoughnessConsistencyReg = 0. # consistency regularizer weight for roughness texture (at step 3)
51 | self.weightRoughnessSmoothnessReg = 0.002 # smoothness regularizer weight for roughness texture (at step 3)
52 |
53 | self.debugFrequency = 10 #display frequency during optimization
54 | self.saveIntermediateStage = False #if True the output of stage 1 and 2 are saved. stage 3 is always saved which is the output of the optim
55 | self.verbose = False #display loss on terminal if true
56 |
57 | self.rtSamples = 500 #the number of ray tracer samples to render the final output
58 | self.rtTrainingSamples = 8 # number of ray tracing to use during training
59 | def fillFromDicFile(self, filePath):
60 | '''
61 | overwrite default config
62 | :param filePath: path to the new config file
63 | :return:
64 | '''
65 |
66 | print('loading optim config from: ', filePath)
67 | fp = open(filePath, 'r')
68 | assert(fp is not None)
69 | Lines = fp.readlines()
70 | fp.close()
71 |
72 | dic = {}
73 |
74 | for line in Lines:
75 | oLine = copy.copy(line)
76 |
77 | if line[0] == '#' or line[0] == '\n':
78 | continue
79 | if '#' in line:
80 | line = line[0:line.find('#')].strip().replace('\t', '').replace('\n', '')
81 |
82 | if len(line) < 1:
83 | continue
84 |
85 | keyval = line.split('=')
86 | if len(keyval) == 2:
87 | #assert (len(keyval) == 2)
88 | key = keyval[0].strip()
89 | val = keyval[1].strip()
90 | val = val.replace('"', '').replace("'", "").strip()
91 | dic[key] = val
92 | else:
93 | print('[warning] unknown key/val: ', oLine, file=sys.stderr, flush=True)
94 |
95 | for k, v in dic.items():
96 | aType = type(getattr(self, k)).__name__
97 | if aType == 'str':
98 | setattr(self, k, v)
99 | elif aType == 'bool':
100 | setattr(self, k, v.lower() == 'true')
101 | elif aType == 'int':
102 | setattr(self, k, int(v))
103 | elif aType == 'float':
104 | setattr(self, k, float(v))
105 | else:
106 |                 raise RuntimeError('unknown dictionary type: ' + k + ' => ' + v)
107 | def print(self):
108 | dic = self.__dict__
109 | for key, val in dic.items():
110 | print(key, '=>', val)
111 |
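
if __name__ == '__main__':
    # Usage sketch (illustrative): start from the defaults, override them with an
    # optimConfig.ini-style file (one 'key = value' per line, '#' starts a comment),
    # then dump the resulting configuration. Assumes optimConfig.ini sits in the
    # current working directory.
    config = Config()
    config.fillFromDicFile('optimConfig.ini')
    config.print()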
--------------------------------------------------------------------------------
/environment.yml:
--------------------------------------------------------------------------------
1 | # This file may be used to create an environment using:
2 | # $ conda create --name <env> --file <this file>
3 | # platform: win-64
4 | blas=1.0=mkl
5 | blosc=1.21.0=h19a0ad4_0
6 | brotli=1.0.9=ha925a31_2
7 | bzip2=1.0.8=he774522_0
8 | ca-certificates=2022.3.29=haa95532_0
9 | certifi=2020.6.20=pyhd3eb1b0_3
10 | cffi=1.14.6=py36h2bbff1b_0
11 | charls=2.1.0=h33f27b4_2
12 | cloudpickle=2.0.0=pyhd3eb1b0_0
13 | colorama=0.4.4=pyhd3eb1b0_0
14 | cudatoolkit=10.1.243=h74a9793_0
15 | cycler=0.11.0=pyhd3eb1b0_0
16 | cytoolz=0.11.0=py36he774522_0
17 | dask-core=2021.3.0=pyhd3eb1b0_0
18 | decorator=4.4.2=pypi_0
19 | face_alignment=1.2.0=py_1
20 | freeglut=3.2.2=h0e60522_1
21 | freetype=2.10.4=hd328e21_0
22 | giflib=5.2.1=h62dcd97_0
23 | h5py=2.10.0=py36h5e291fa_0
24 | hdf5=1.10.4=h7ebc959_0
25 | icc_rt=2019.0.0=h0cc432a_1
26 | icu=68.2=h0e60522_0
27 | imagecodecs=2020.5.30=py36hb1be65f_2
28 | imageio=2.15.0=pypi_0
29 | intel-openmp=2022.0.0=haa95532_3663
30 | jasper=2.0.33=h77af90b_0
31 | jpeg=9d=h2bbff1b_0
32 | kiwisolver=1.3.1=py36hd77b12b_0
33 | lcms2=2.12=h83e58a3_0
34 | libaec=1.0.4=h33f27b4_1
35 | libblas=3.8.0=20_mkl
36 | libcblas=3.8.0=20_mkl
37 | libclang=11.1.0=default_h5c34c98_1
38 | liblapack=3.8.0=20_mkl
39 | liblapacke=3.8.0=20_mkl
40 | libopencv=4.5.2=py36h14c9de7_0
41 | libpng=1.6.37=h2a8f88b_0
42 | libprotobuf=3.15.8=h7755175_1
43 | libtiff=4.2.0=hd0e1b90_0
44 | libwebp-base=1.2.2=h8ffe710_1
45 | libzopfli=1.0.3=ha925a31_0
46 | llvmlite=0.36.0=py36h34b8924_4
47 | lz4-c=1.9.3=h2bbff1b_1
48 | matplotlib-base=3.3.4=py36h49ac443_0
49 | mkl=2020.2=256
50 | mkl-service=2.3.0=py36h196d8e1_0
51 | mkl_fft=1.3.0=py36h46781fe_0
52 | mkl_random=1.1.1=py36h47e9c7a_0
53 | networkx=2.5.1=pypi_0
54 | ninja=1.10.2=h559b2a2_2
55 | numba=0.53.0=py36hf11a4ad_0
56 | numpy=1.19.2=py36hadc3359_0
57 | numpy-base=1.19.2=py36ha3acd2a_0
58 | olefile=0.46=py36_0
59 | opencv=4.5.2=py36ha15d459_0
60 | openjpeg=2.4.0=h4fc8c34_0
61 | openssl=1.1.1n=h8ffe710_0
62 | pillow=8.4.0=pypi_0
63 | pip=21.2.2=py36haa95532_0
64 | py-opencv=4.5.2=py36hfacbf0b_0
65 | pycparser=2.21=pyhd3eb1b0_0
66 | pyparsing=3.0.7=pypi_0
67 | pyreadline=2.1=py36_1
68 | python=3.6.7=h9f7ef89_2
69 | python-dateutil=2.8.2=pyhd3eb1b0_0
70 | python_abi=3.6=2_cp36m
71 | pytorch=1.3.1=py3.6_cuda101_cudnn7_0
72 | pywavelets=1.1.1=py36he774522_2
73 | pyyaml=5.4.1=py36h2bbff1b_1
74 | qt=5.12.9=h5909a2a_4
75 | redner-gpu=0.4.25=pypi_0
76 | scikit-image=0.17.2=pypi_0
77 | scipy=1.5.4=pypi_0
78 | setuptools=58.0.4=py36haa95532_0
79 | six=1.16.0=pyhd3eb1b0_1
80 | snappy=1.1.8=h33f27b4_0
81 | sqlite=3.38.2=h2bbff1b_0
82 | tifffile=2020.9.3=pypi_0
83 | tk=8.6.11=h2bbff1b_0
84 | toolz=0.11.2=pyhd3eb1b0_0
85 | torchvision=0.4.2=py36_cu101
86 | tornado=6.1=py36h2bbff1b_0
87 | tqdm=4.63.0=pyhd3eb1b0_0
88 | vc=14.2=h21ff451_1
89 | vs2015_runtime=14.27.29016=h5e58377_2
90 | wheel=0.37.1=pyhd3eb1b0_0
91 | wincertstore=0.2=py36h7fe50ca_0
92 | xz=5.2.5=h62dcd97_0
93 | yaml=0.2.5=he774522_0
94 | zlib=1.2.11=hbd8134f_5
95 | zstd=1.4.9=h19a0ad4_0
96 | mediapipe=0.8.3
--------------------------------------------------------------------------------
/gaussiansmoothing.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import nn
3 | import numbers
4 | import math
5 | from torch.nn import functional as F
6 |
7 | class GaussianSmoothing(nn.Module):
8 | """
9 | Apply gaussian smoothing on a
10 |     1d, 2d or 3d tensor. Filtering is performed separately for each channel
11 | in the input using a depthwise convolution.
12 | Arguments:
13 | channels (int, sequence): Number of channels of the input tensors. Output will
14 | have this number of channels as well.
15 | kernel_size (int, sequence): Size of the gaussian kernel.
16 | sigma (float, sequence): Standard deviation of the gaussian kernel.
17 | dim (int, optional): The number of dimensions of the data.
18 | Default value is 2 (spatial).
19 | """
20 | def __init__(self, channels, kernel_size, sigma, dim=2):
21 | super(GaussianSmoothing, self).__init__()
22 | if isinstance(kernel_size, numbers.Number):
23 | kernel_size = [kernel_size] * dim
24 | if isinstance(sigma, numbers.Number):
25 | sigma = [sigma] * dim
26 |
27 | # The gaussian kernel is the product of the
28 | # gaussian function of each dimension.
29 | kernel = 1
30 | meshgrids = torch.meshgrid(
31 | [
32 | torch.arange(size, dtype=torch.float32)
33 | for size in kernel_size
34 | ]
35 | )
36 | for size, std, mgrid in zip(kernel_size, sigma, meshgrids):
37 | mean = (size - 1) / 2
38 | kernel *= 1 / (std * math.sqrt(2 * math.pi)) * \
39 | torch.exp(-((mgrid - mean) / (2 * std)) ** 2)
40 |
41 | # Make sure sum of values in gaussian kernel equals 1.
42 | kernel = kernel / torch.sum(kernel)
43 |
44 | # Reshape to depthwise convolutional weight
45 | kernel = kernel.view(1, 1, *kernel.size())
46 | kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1))
47 |
48 | self.register_buffer('weight', kernel)
49 | self.groups = channels
50 |
51 | if dim == 1:
52 | self.conv = F.conv1d
53 | elif dim == 2:
54 | self.conv = F.conv2d
55 | elif dim == 3:
56 | self.conv = F.conv3d
57 | else:
58 | raise RuntimeError(
59 | 'Only 1, 2 and 3 dimensions are supported. Received {}.'.format(dim)
60 | )
61 |
62 | def forward(self, input):
63 | """
64 | Apply gaussian filter to input.
65 | Arguments:
66 | input (torch.Tensor): Input to apply gaussian filter on.
67 | Returns:
68 | filtered (torch.Tensor): Filtered output.
69 | """
70 | return self.conv(input, weight=self.weight, groups=self.groups)
71 |
72 | def smoothImage(img, filter):
73 | '''
74 | smooth an image with filter
75 | '''
76 |
77 | imgAd = img.permute(0, 3, 1, 2)
78 | imgAd = torch.nn.functional.pad(imgAd, (1, 1, 1, 1), mode='reflect')
79 | output = filter(imgAd)
80 | output = output.permute(0, 2, 3, 1)
81 | return output
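
if __name__ == '__main__':
    # Usage sketch (illustrative): blur a random [n, h, w, 3] image with a 3x3
    # gaussian kernel. smoothImage pads the image by one pixel on each side, so a
    # kernel size of 3 keeps the spatial resolution unchanged.
    smoother = GaussianSmoothing(channels=3, kernel_size=3, sigma=1.0)
    img = torch.rand([1, 64, 64, 3])
    blurred = smoothImage(img, smoother)
    print(blurred.shape)  # torch.Size([1, 64, 64, 3])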
--------------------------------------------------------------------------------
/image.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from os import walk
3 | import numpy as np
4 | import torch
5 | import cv2
6 | import os
7 |
8 |
9 | def saveImage(image, fileName, gamma = 2.2):
10 | '''
11 | save image to drive
12 | :param image: float tensor [w, h, 3/4]
13 | :param fileName: path to where to save the image
14 | :param gamma: gamma correction
15 | :return:
16 | '''
17 |
18 | import pyredner
19 | pyredner.imwrite(image.cpu().detach(), fileName, gamma = gamma)
20 |
21 | def overlayImage(background, image):
22 | '''
23 |     overlay an image (with alpha channel) on top of a background image.
24 | :param background: float tensor [width,height,3]
25 | :param image: float tensor [width, height, 4]
26 | :return: float tensor [width, height, 3]
27 | '''
28 | assert(torch.is_tensor(background) and torch.is_tensor(image) and background.dim() == 3 and image.dim() == 3 and background.shape[0] == image.shape[0] and background.shape[1] == image.shape[1])
29 | assert(background.shape[-1] == 3 and image.shape[-1] == 4)
30 | from torchvision import transforms
31 | background = transforms.ToPILImage()(background.permute(2, 1, 0).clone().detach().cpu()).convert("RGB")
32 | image = transforms.ToPILImage()(torch.clamp(image.permute(2, 1, 0), 0, 1).clone().detach().cpu()).convert("RGBA")
33 | background.paste(image, (0, 0), image)
34 | return transforms.ToTensor()(background).permute(2, 1, 0)
35 |
36 | def resizeImage(image, targetResolution):
37 | '''
38 | resize an image (as numpy array) to the target resolution
39 | :param image: numpy array [h, w, 4/3/1]
40 | :param targetResolution: int > 0
41 | :return: numpy array [h, w, 4/3/1]
42 | '''
43 |     assert(image is not None and isinstance(image, np.ndarray) and len(image.shape) == 3 and (image.shape[-1] == 3 or image.shape[-1] == 4 or image.shape[-1] == 1))
44 | dmax = max(image.shape[0], image.shape[1])
45 |
46 | if (dmax > targetResolution):
47 | print("[INFO] resizing input image to fit:", targetResolution,"px resolution...")
48 | if (image.shape[0] > image.shape[1]):
49 | scale = float(targetResolution) / float(image.shape[0])
50 | else:
51 | scale = float(targetResolution) / float(image.shape[1])
52 | img = cv2.resize(image, (int(image.shape[1] * scale), int(image.shape[0] * scale)), interpolation=cv2.INTER_CUBIC )
53 | else:
54 | return image
55 | return img
56 |
57 | class Image:
58 |
59 | def __init__(self, path, device, maxRes = 512):
60 | '''
61 | class that represent a single image as a pytorch tensor [1, h, w, channels]
62 | :param path: the path to the image
63 | :param device: where to store the image ('cpu' or 'cuda')
64 | :param maxRes: maximum allowed resolution (depending on the gpu/cpu memory and speed, this limit can be increased or removed)
65 | '''
66 | assert(maxRes > 0)
67 | print('loading image from path: ', path)
68 | self.device = device
69 |         numpyImage = cv2.imread(path) # cv2.imread returns None if the file cannot be read
70 |         assert (numpyImage is not None)
71 |         numpyImage = resizeImage(cv2.cvtColor(numpyImage[..., 0:3], cv2.COLOR_BGR2RGB), int(maxRes))
72 | self.tensor = (torch.from_numpy(numpyImage).to(self.device).to(dtype=torch.float32) / 255.0).unsqueeze(0)
73 | self.height = numpyImage.shape[0]
74 | self.width = numpyImage.shape[1]
75 | self.channels = numpyImage.shape[2]
76 | self.gamma = 2.2
77 | self.center = torch.tensor([ self.width / 2, self.height / 2], dtype = torch.float32, device = self.device).reshape(1, -1)
78 | self.imageName = os.path.basename(path)
79 |
80 | class ImageFolder:
81 |
82 | def __init__(self, path, device, maxRes = 512):
83 | '''
84 | class that represent images in a given path
85 | :param path: the path to the image
86 | :param device: where to store the image ('cpu' or 'cuda')
87 | '''
88 | print('loading images from path: ', path)
89 | self.device = device
90 | self.tensor = None
91 | self.imageNames = []
92 | supportedFormats = ['.jpg', '.jpeg', '.png']
93 |
94 |         filenames = [f for f in next(walk(path), (None, None, []))[2] if os.path.splitext(f)[1].lower() in supportedFormats] #keep only supported images so the preallocated tensors match the image count
95 | width = None
96 | height = None
97 | ct = 0
98 |
99 | assert (len(filenames) > 0) # no images found in the given directory
100 | for filename in filenames:
101 | if os.path.splitext(filename)[1].lower() in supportedFormats:
102 | image = Image(path + '/' + filename, device, maxRes)
103 |
104 | if width is None:
105 | width = image.width
106 | height = image.height
107 | self.tensor = torch.zeros([len(filenames), height, width, image.channels], device = self.device)
108 | self.center = torch.zeros([len(filenames), 2], device = self.device)
109 |
110 | assert image.width == width and image.height == height
111 |
112 | self.width = image.width
113 | self.height = image.height
114 | self.channels = image.channels
115 | self.tensor[ct] = image.tensor[0].clone().detach()
116 | self.center[ct] = image.center[0].clone().detach()
117 | self.imageNames.append(image.imageName)
118 | image = None
119 |
120 | ct += 1
121 |
122 |
123 | import gc
124 | gc.collect()
125 | self.gamma = 2.2
126 |
127 | @property
128 | def asNumpyArray(self):
129 | return self.tensor.detach().cpu().numpy() * 255.0
130 |
131 | if __name__ == "__main__":
132 | pass
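    # Usage sketch (illustrative): load one of the sample images shipped with the
    # repository and inspect the resulting tensor (assumes input/s1.png exists).
    img = Image('input/s1.png', device='cpu', maxRes=512)
    print(img.tensor.shape, img.width, img.height, img.channels)
    # ImageFolder works the same way but loads every supported image in a directory
    # (all images must share the same resolution):
    # folder = ImageFolder('input', device='cpu', maxRes=512)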
--------------------------------------------------------------------------------
/input/s1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/input/s1.png
--------------------------------------------------------------------------------
/input/s2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/input/s2.png
--------------------------------------------------------------------------------
/input/s3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/input/s3.png
--------------------------------------------------------------------------------
/input/s4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/input/s4.png
--------------------------------------------------------------------------------
/input/s5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/input/s5.png
--------------------------------------------------------------------------------
/input/s6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/input/s6.png
--------------------------------------------------------------------------------
/input/s7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/input/s7.png
--------------------------------------------------------------------------------
/landmarksfan.py:
--------------------------------------------------------------------------------
1 | import face_alignment
2 | import numpy as np
3 | import torch
4 | import cv2
5 |
6 |
7 | class LandmarksDetectorFAN:
8 | def __init__(self, mask, device):
9 | '''
10 | init landmark detector with given mask on target device
11 | :param mask: valid mask for the 68 landmarks of shape [n]
12 | :param device:
13 | '''
14 | assert(mask.dim() == 1)
15 | assert(mask.max().item() <= 67 and mask.min().item() >= 0)
16 |
17 | self.device = device
18 | self.landmarksDetector = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, flip_input=False, device=self.device)
19 | self.mask = mask.to(self.device)
20 |
21 | def detect(self, images):
22 | '''
23 |         detect landmarks on a batch of images
24 | :param images: tensor [n, height, width, channels]
25 | :return: tensor [n, landmarksNumber, 2]
26 | '''
27 | #landmarks = torch.zeros([images.shape[0], self.mask.shape[0], 2], device = images.device, dtype = torch.float32)
28 | assert(images.dim() == 4)
29 | landmarks = []
30 | for i in range(len(images)):
31 | land = self._detect(images[i].detach().cpu().numpy() * 255.0)
32 | landmarks.append(land)
33 |
34 | torch.set_grad_enabled(True) #it turns out that the landmark detector disables the autograd engine. this line fixes this
35 | return torch.tensor(landmarks, device = self.device)
36 | def _detect(self, image):
37 | arr = self.landmarksDetector.get_landmarks_from_image(image, None)
38 | if arr is None or len(arr) == 0:
39 | raise RuntimeError("No landmarks found in image...")
40 | if len(arr) > 1:
41 | print('found multiple subjects in image. extracting landmarks for first subject only...')
42 |
43 | landmarks = []
44 | mask = self.mask.detach().cpu().numpy()
45 | for preds in arr:
46 |
47 | preds = preds[mask]
48 | subjectLandmarks = np.array([[p[0], p[1]] for p in preds])
49 | landmarks.append(subjectLandmarks)
50 | break #only one subject per frame
51 |
52 | return landmarks[0]
54 |
55 | def drawLandmarks(self, image, landmarks):
56 | '''
57 |         draw landmarks on top of image (for debug)
58 | :param image: tensor representing the image [h, w, channels]
59 | :param landmarks: tensor representing the image landmarks [n, 2]
60 | :return:
61 | '''
62 | assert(image.dim() == 3 and landmarks.dim() == 2 and landmarks.shape[-1] ==2)
63 | clone = np.copy(image.detach().cpu().numpy() * 255.0)
64 | land = landmarks.cpu().numpy()
65 | for x in land:
66 | cv2.circle(clone, (int(x[0]), int(x[1])), 1, (0, 0, 255), -1)
67 | return clone
68 |
--------------------------------------------------------------------------------
/landmarksmediapipe.py:
--------------------------------------------------------------------------------
1 | import mediapipe as mp
2 | import numpy as np
3 | import torch
4 | import cv2
5 |
6 | class LandmarksDetectorMediapipe:
7 | def __init__(self, mask, device, is_video=False, refine_landmarks=False):
8 | '''
9 | init landmark detector with given mask on target device
10 | :param mask: valid mask for the 468 landmarks of shape [n]
11 | :param device:
12 | :param is_video: set to true if passing frames sequentially in order
13 | :param refine_landmarks: if the facemesh attention module should be applied. Note: requires mediapipe 0.10
14 | '''
15 | assert(mask.dim() == 1)
16 | assert(mask.max().item() <= 467 and mask.min().item() >= 0)
17 |
18 | self.device = device
19 | mp_face_mesh = mp.solutions.face_mesh
20 |
21 | if refine_landmarks:
22 | try:
23 | self.landmarksDetector = mp_face_mesh.FaceMesh(
24 | static_image_mode=not is_video,
25 | refine_landmarks=True,
26 | min_detection_confidence=0.5,
27 | min_tracking_confidence=0.5,
28 | )
29 | except KeyError:
30 | raise KeyError('Refine landmarks is only available with the latest version of mediapipe')
31 |
32 | else:
33 | self.landmarksDetector = mp_face_mesh.FaceMesh(
34 | static_image_mode=not is_video,
35 | min_detection_confidence=0.5,
36 | min_tracking_confidence=0.5,
37 | )
38 |
39 | self.mask = mask.to(self.device)
40 |
41 | def detect(self, images):
42 | '''
43 |         detect landmarks on a batch of images
44 | :param images: tensor [n, height, width, channels]
45 | :return: tensor [n, landmarksNumber, 2]
46 | '''
47 | #landmarks = torch.zeros([images.shape[0], self.mask.shape[0], 2], device = images.device, dtype = torch.float32)
48 | assert(images.dim() == 4)
49 | landmarks = []
50 | for i in range(len(images)):
51 | land = self._detect((images[i].detach().cpu().numpy() * 255.0).astype('uint8'))
52 | landmarks.append(land)
53 |
54 | torch.set_grad_enabled(True) #it turns out that the landmark detector disables the autograd engine. this line fixes this
55 | return torch.tensor(landmarks, device = self.device)
56 |
57 | def _detect(self, image):
58 |
59 | height, width, _ = image.shape
60 |
61 | results = self.landmarksDetector.process(image)
62 | mask = self.mask.detach().cpu().numpy()
63 | multi_face_landmarks = results.multi_face_landmarks
64 |
65 | if multi_face_landmarks:
66 | face_landmarks = multi_face_landmarks[0]
67 | landmarks = np.array(
68 | [(lm.x * width, lm.y * height) for lm in face_landmarks.landmark]
69 | )
70 | else:
71 | raise RuntimeError('No face was found in this image')
72 |
73 | return landmarks[mask]
74 |
75 | def drawLandmarks(self, image, landmarks):
76 | '''
77 |         draw landmarks on top of image (for debug)
78 | :param image: tensor representing the image [h, w, channels]
79 | :param landmarks: tensor representing the image landmarks [n, 2]
80 | :return:
81 | '''
82 | assert(image.dim() == 3 and landmarks.dim() == 2 and landmarks.shape[-1] ==2)
83 | clone = np.copy(image.detach().cpu().numpy() * 255.0)
84 | land = landmarks.cpu().numpy()
85 | for x in land:
86 | cv2.circle(clone, (int(x[0]), int(x[1])), 1, (0, 0, 255), -1)
87 | return clone
88 |
89 |
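if __name__ == '__main__':
    # Usage sketch (illustrative): detect the landmarks used by NextFace on one of
    # the sample images. The mediapipe indices come from the morphable model's
    # landmarksMask (loaded from landmark_62_mp.txt); this assumes the Basel model
    # files are installed under ./baselMorphableModel and that input/s1.png exists.
    from image import Image
    from morphablemodel import MorphableModel

    device = 'cpu'
    model = MorphableModel('baselMorphableModel', device=device)
    detector = LandmarksDetectorMediapipe(model.landmarksMask, device)
    img = Image('input/s1.png', device)
    landmarks = detector.detect(img.tensor)  # [1, landmarksNumber, 2] pixel coordinates
    print(landmarks.shape)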
--------------------------------------------------------------------------------
/meshnormals.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | class MeshNormals:
4 |
5 | def __init__(self, device, faces, vertexIndex, vertexFaceNeighbors):
6 | assert(vertexIndex is not None)
7 | assert(vertexFaceNeighbors is not None)
8 |
9 | self.device = device
10 | self.faces = faces
11 | self.vertexIndex = []
12 | self.vertexFaceNeighbors = []
13 | if vertexIndex is not None and vertexFaceNeighbors is not None:
14 | for i in range(len(vertexIndex)):
15 | self.vertexIndex.append(torch.tensor(vertexIndex[i]).to(self.device))
16 | self.vertexFaceNeighbors.append(torch.tensor(vertexFaceNeighbors[i]).to(self.device))
17 |
18 | def computeNormals(self, vertices):
19 | '''
20 | compute vertices normal
21 | :param vertices: [..., verticesNumber, 3]
22 | :return: normalized normal vectors [..., verticesNumber, 3]
23 | '''
24 |
25 | faces = self.faces
26 | assert(faces is not None)
27 | assert(vertices.shape[-1] == 3)
28 |
29 | v1 = vertices[..., faces[:, 0], :]
30 | v2 = vertices[..., faces[:, 1], :] - v1
31 | v3 = vertices[..., faces[:, 2], :] - v1
32 | faceNormals = torch.cross(v2, v3, dim=vertices.dim() - 1)
33 |
34 | normals = torch.zeros_like(vertices)
35 | for (ni, vi) in zip(self.vertexFaceNeighbors, self.vertexIndex):
36 | vc4 = faceNormals[..., ni, :]
37 | vc4 = torch.mean(vc4, -2)
38 | normals[..., vi, :] = vc4
39 |
40 | return torch.nn.functional.normalize(normals, 2, -1)
41 |
42 |
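if __name__ == '__main__':
    # Usage sketch (illustrative): compute the normals of a single triangle.
    # vertexIndex groups vertices by their number of neighboring faces and
    # vertexFaceNeighbors lists, for each vertex of a group, the indices of its
    # neighboring faces (here a single group where every vertex touches face 0 only).
    device = 'cpu'
    faces = torch.tensor([[0, 1, 2]], device=device)
    vertexIndex = [[0, 1, 2]]
    vertexFaceNeighbors = [[[0], [0], [0]]]
    meshNormals = MeshNormals(device, faces, vertexIndex, vertexFaceNeighbors)
    vertices = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]], device=device)
    print(meshNormals.computeNormals(vertices))  # every normal should be [0, 0, 1]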
--------------------------------------------------------------------------------
/morphablemodel.py:
--------------------------------------------------------------------------------
1 | from utils import loadDictionaryFromPickle, writeDictionaryToPickle
2 | from normalsampler import NormalSampler
3 | from meshnormals import MeshNormals
4 | import numpy as np
5 | import torch
6 | import h5py
7 | import sys
8 | import os
9 |
10 | class MorphableModel:
11 |
12 | def __init__(self, path, textureResolution = 256, trimPca = False, landmarksPathName = 'landmark_62_mp.txt', device='cuda'):
13 | '''
14 | a statistical morphable model is a generative model that can generate faces with different identity, expression and skin reflectance
15 | it is mainly composed of an orthogonal basis (eigen vectors) obtained from applying principal component analysis (PCA) on a set of face scans.
16 |         a linear combination of these eigen vectors produces different face shapes and skin reflectances
17 | :param path: drive path of where the data of the morphable model is saved
18 | :param textureResolution: the resolution of the texture used for diffuse and specular reflectance
19 | :param trimPca: if True keep only a subset of the PCA basis
20 |         :param landmarksPathName: a text file that contains the association between the landmark indices of the detector and the corresponding vertex indices of the mesh
21 | :param device: where to store the morphableModel data (cpu or gpu)
22 | '''
23 |         assert textureResolution == 256 or textureResolution == 512 or textureResolution == 1024 or textureResolution == 2048 #supported texture resolutions: 256, 512, 1024 or 2048
24 | self.shapeBasisSize = 199
25 | self.albedoBasisSize = 145
26 | self.expBasisSize = 100
27 | self.device = device
28 | pathH5Model = path + '/model2017-1_face12_nomouth.h5'
29 | pathAlbedoModel = path + '/albedoModel2020_face12_albedoPart.h5'
30 | pathUV = path + '/uvParametrization.' + str(textureResolution) + '.pickle'
31 | pathLandmarks = path + '/' + landmarksPathName
32 |
33 | pathPickleFileName = path + '/morphableModel-2017.pickle'
34 | pathNormals = path + '/normals.pickle'
35 |
36 | if os.path.exists(pathPickleFileName) == False:
37 | print("Loading Basel Face Model 2017 from " + pathH5Model + "... this may take a while the first time... The next runtime it will be faster...")
38 |
39 | if os.path.exists(pathH5Model) == False:
40 | print('[Error] to use the library, you have to install basel morphable face model 2017 from: https://faces.dmi.unibas.ch/bfm/bfm2017.html', file=sys.stderr, flush=True)
41 | print('Fill the form on the link and you will get instant download link into your inbox.', file=sys.stderr, flush=True)
42 | print('Download "model2017-1_face12_nomouth.h5" and put it inside ',path, ' and run again...', file=sys.stderr, flush=True)
43 | exit(0)
44 |
45 | self.file = h5py.File(pathH5Model, 'r')
46 | assert(self.file is not None)
47 |
48 | print("loading shape basis...")
49 | self.shapeMean = torch.Tensor(self.file["shape"]["model"]["mean"]).reshape(-1, 3).to(device).float()
50 | self.shapePca = torch.Tensor(self.file["shape"]["model"]["pcaBasis"]).reshape(-1, 3, self.shapeBasisSize).to(device).float().permute(2, 0, 1)
51 | self.shapePcaVar = torch.Tensor(self.file["shape"]["model"]["pcaVariance"]).reshape(self.shapeBasisSize).to(device).float()
52 |
53 | print("loading expression basis...")
54 | self.expressionPca = torch.Tensor(self.file["expression"]["model"]["pcaBasis"]).reshape(-1, 3, self.expBasisSize).to(device).float().permute(2, 0, 1)
55 | self.expressionPcaVar = torch.Tensor(self.file["expression"]["model"]["pcaVariance"]).reshape(self.expBasisSize).to(device).float()
56 | self.faces = torch.Tensor(np.transpose(self.file["shape"]["representer"]["cells"])).reshape(-1, 3).to(device).long()
57 | self.file.close()
58 |
59 | print("Loading Albedo model from " + pathAlbedoModel + "...")
60 | if os.path.exists(pathAlbedoModel) == False:
61 | print('[ERROR] Please install the albedo model from the link below, put it inside', path, 'and run again: https://github.com/waps101/AlbedoMM/releases/download/v1.0/albedoModel2020_face12_albedoPart.h5', file=sys.stderr, flush=True)
62 | exit(0)
63 |
64 | self.file = h5py.File(pathAlbedoModel, 'r')
65 | assert(self.file is not None)
66 |
67 | self.diffuseAlbedoMean = torch.Tensor(self.file["diffuseAlbedo"]["model"]["mean"]).reshape(-1, 3).to(device).float()
68 | self.diffuseAlbedoPca = torch.Tensor(self.file["diffuseAlbedo"]["model"]["pcaBasis"]).reshape(-1, 3, self.albedoBasisSize).to(device).float().permute(2, 0, 1)
69 | self.diffuseAlbedoPcaVar = torch.Tensor(self.file["diffuseAlbedo"]["model"]["pcaVariance"]).reshape(self.albedoBasisSize).to(device).float()
70 |
71 | self.specularAlbedoMean = torch.Tensor(self.file["specularAlbedo"]["model"]["mean"]).reshape(-1, 3).to(device).float()
72 | self.specularAlbedoPca = torch.Tensor(self.file["specularAlbedo"]["model"]["pcaBasis"]).reshape(-1, 3, self.albedoBasisSize).to(device).float().permute(2, 0, 1)
73 | self.specularAlbedoPcaVar = torch.Tensor(self.file["specularAlbedo"]["model"]["pcaVariance"]).reshape(self.albedoBasisSize).to(device).float()
74 | self.file.close()
75 |
76 | #save to pickle for future loading
77 | dict = {'shapeMean': self.shapeMean.cpu().numpy(),
78 | 'shapePca': self.shapePca.cpu().numpy(),
79 | 'shapePcaVar': self.shapePcaVar.cpu().numpy(),
80 |
81 | 'diffuseAlbedoMean': self.diffuseAlbedoMean.cpu().numpy(),
82 | 'diffuseAlbedoPca': self.diffuseAlbedoPca.cpu().numpy(),
83 | 'diffuseAlbedoPcaVar': self.diffuseAlbedoPcaVar.cpu().numpy(),
84 |
85 | 'specularAlbedoMean': self.specularAlbedoMean.cpu().numpy(),
86 | 'specularAlbedoPca': self.specularAlbedoPca.cpu().numpy(),
87 | 'specularAlbedoPcaVar': self.specularAlbedoPcaVar.cpu().numpy(),
88 |
89 | 'expressionPca': self.expressionPca.cpu().numpy(),
90 | 'expressionPcaVar': self.expressionPcaVar.cpu().numpy(),
91 | 'faces': self.faces.cpu().numpy()}
92 | writeDictionaryToPickle(dict, pathPickleFileName)
93 | else:
94 | print("Loading Basel Face Model 2017 from " + pathPickleFileName + "...")
95 |
96 | dict = loadDictionaryFromPickle(pathPickleFileName)
97 | self.shapeMean = torch.tensor(dict['shapeMean']).to(device)
98 | self.shapePca = torch.tensor(dict['shapePca']).to(device)
99 | self.shapePcaVar = torch.tensor(dict['shapePcaVar']).to(device)
100 |
101 | self.diffuseAlbedoMean = torch.tensor(dict['diffuseAlbedoMean']).to(device)
102 | self.diffuseAlbedoPca = torch.tensor(dict['diffuseAlbedoPca']).to(device)
103 | self.diffuseAlbedoPcaVar = torch.tensor(dict['diffuseAlbedoPcaVar']).to(device)
104 |
105 | self.specularAlbedoMean = torch.tensor(dict['specularAlbedoMean']).to(device)
106 | self.specularAlbedoPca = torch.tensor(dict['specularAlbedoPca']).to(device)
107 | self.specularAlbedoPcaVar = torch.tensor(dict['specularAlbedoPcaVar']).to(device)
108 |
109 | self.expressionPca = torch.tensor(dict['expressionPca']).to(device)
110 | self.expressionPcaVar = torch.tensor(dict['expressionPcaVar']).to(device)
111 | self.faces = torch.tensor(dict['faces']).to(device)
112 |
113 |
114 | if trimPca:
115 | newDim = min(80,
116 | self.shapePca.shape[0],
117 | self.diffuseAlbedoPca.shape[0],
118 | self.specularAlbedoPcaVar.shape[0],
119 | self.expressionPca.shape[0])
120 |
121 | self.shapePca = self.shapePca[0:newDim, ...]
122 | self.shapePcaVar = self.shapePcaVar[0:newDim, ...]
123 |
124 | self.diffuseAlbedoPca = self.diffuseAlbedoPca[0:newDim, ...]
125 | self.diffuseAlbedoPcaVar = self.diffuseAlbedoPcaVar[0:newDim, ...]
126 |
127 | self.specularAlbedoPca = self.specularAlbedoPca[0:newDim, ...]
128 | self.specularAlbedoPcaVar = self.specularAlbedoPcaVar[0:newDim, ...]
129 |
130 | self.expressionPca = self.expressionPca[0:newDim, ...]
131 | self.expressionPcaVar = self.expressionPcaVar[0:newDim, ...]
132 | self.shapeBasisSize = newDim
133 | self.expBasisSize = newDim
134 | self.albedoBasisSize = newDim
135 |
136 | print("loading mesh normals...")
137 | dic = loadDictionaryFromPickle(pathNormals)
138 | self.meshNormals = MeshNormals(device, self.faces, dic['vertexIndex'], dic['vertexFaceNeighbors'])
139 |
140 | print("loading uv parametrization...")
141 | self.uvParametrization = loadDictionaryFromPickle(pathUV)
142 |
143 | for key in self.uvParametrization:
144 | if key != 'uvResolution':
145 | self.uvParametrization[key] = torch.tensor(self.uvParametrization[key]).to(device)
146 |
147 | self.uvMap = self.uvParametrization['uvVertices'].to(device)
148 |
149 | print("loading landmarks association file...")
150 | self.landmarksAssociation = torch.tensor(np.loadtxt(pathLandmarks, delimiter='\t\t')[:, 1].astype(np.int64)).to(device)
151 | self.landmarksMask = torch.tensor(np.loadtxt(pathLandmarks, delimiter='\t\t')[:, 0].astype(np.int64)).to(device)
152 |
153 | print('creating sampler...')
154 | self.sampler = NormalSampler(self)
155 |
156 | def generateTextureFromAlbedo(self, albedo):
157 | '''
158 | generate diffuse and specular textures from per vertex albedo color
159 | :param albedo: tensor of per vertex albedo color [n, verticesNumber, 3]
160 | :return: generated textures [n, self.getTextureResolution(), self.getTextureResolution(), 3]
161 | '''
162 | assert (albedo.dim() == 3 and albedo.shape[-1] == self.diffuseAlbedoMean.shape[-1] and albedo.shape[-2] == self.diffuseAlbedoMean.shape[-2])
163 | textureSize = self.uvParametrization['uvResolution']
164 | halfRes = textureSize // 2
165 | baryCenterWeights = self.uvParametrization['uvFaces']
166 | oFaces = self.uvParametrization['uvMapFaces']
167 | uvxyMap = self.uvParametrization['uvXYMap']
168 |
169 | neighboors = torch.arange(self.faces.shape[-1], dtype = torch.int64, device = self.faces.device)
170 |
171 | texture = (baryCenterWeights[:, neighboors, None] * albedo[:, self.faces[oFaces[:, None], neighboors]]).sum(dim=-2)
172 | textures = torch.zeros((albedo.size(0), textureSize, textureSize, 3), dtype=torch.float32, device = self.faces.device)
173 | textures[:, uvxyMap[:, 0], uvxyMap[:, 1]] = texture
174 | textures[:, halfRes, :, :] = (textures[:, halfRes -1, :, :] + textures[:, halfRes + 1, :, :]) * 0.5
175 | return textures.permute(0, 2, 1, 3).flip([1])
176 |
177 | def getTextureResolution(self):
178 | '''
179 | return the resolution of the texture
180 | :return: int scalar
181 | '''
182 | return self.uvParametrization['uvResolution']
183 |
184 | def computeShape(self, shapeCoff, expCoff):
185 | '''
186 | compute vertices from shape and exp coeff
187 | :param shapeCoff: [n, self.shapeBasisSize]
188 | :param expCoff: [n, self.expBasisSize]
189 | :return: return vertices tensor [n, verticesNumber, 3]
190 | '''
191 | assert (shapeCoff.dim() == 2 and shapeCoff.shape[1] == self.shapeBasisSize)
192 | assert (expCoff.dim() == 2 and expCoff.shape[1] == self.expBasisSize)
193 |
194 | vertices = self.shapeMean + torch.einsum('ni,ijk->njk', (shapeCoff, self.shapePca)) + torch.einsum('ni,ijk->njk', (expCoff, self.expressionPca))
195 | return vertices
196 |
197 | def computeNormals(self, vertices):
198 | '''
199 | compute normals for given vertices tensor
200 | :param vertices: float tensor [..., 3]
201 | :return: float tensor [..., 3]
202 | '''
203 | assert(vertices.shape[-1] == 3)
204 | return self.meshNormals.computeNormals(vertices)
205 |
206 | def computeDiffuseAlbedo(self, diffAlbedoCoeff):
207 | '''
208 | compute diffuse albedo from coeffs
209 | :param diffAlbedoCoeff: tensor [n, self.albedoBasisSize]
210 | :return: diffuse colors per vertex [n, verticesNumber, 3]
211 | '''
212 | assert(diffAlbedoCoeff.dim() == 2 and diffAlbedoCoeff.shape[1] == self.albedoBasisSize)
213 |
214 | colors = self.diffuseAlbedoMean + torch.einsum('ni,ijk->njk', (diffAlbedoCoeff, self.diffuseAlbedoPca))
215 | return colors
216 |
217 | def computeSpecularAlbedo(self, specAlbedoCoeff):
218 | '''
219 | compute specular albedo from coeffs
220 | :param specAlbedoCoeff: [n, self.albedoBasisSize]
221 | :return: specular colors per vertex [n, verticesNumber, 3]
222 | '''
223 | assert(specAlbedoCoeff.dim() == 2 and specAlbedoCoeff.shape[1] == self.albedoBasisSize)
224 |
225 | colors = self.specularAlbedoMean + torch.einsum('ni,ijk->njk', (specAlbedoCoeff, self.specularAlbedoPca))
226 | return colors
227 |
228 | def computeShapeAlbedo(self, shapeCoeff, expCoeff, albedoCoeff):
229 | '''
230 | compute vertices and diffuse/specular albedo from shape, exp and albedo coeff
231 | :param shapeCoeff: tensor [n, self.shapeBasisSize]
232 | :param expCoeff: tensor [n, self.expBasisSize]
233 | :param albedoCoeff: tensor [n, self.albedoBasisSize]
234 | :return: vertices [n, verticesNumber, 3], diffuse albedo [n, verticesNumber, 3], specular albedo [n, verticesNumber, 3]
235 | '''
236 |
237 | vertices = self.computeShape(shapeCoeff, expCoeff)
238 | diffAlbedo = self.computeDiffuseAlbedo(albedoCoeff)
239 | specAlbedo = self.computeSpecularAlbedo(albedoCoeff)
240 | return vertices, diffAlbedo, specAlbedo
241 |
242 | def sample(self, shapeNumber = 1):
243 | '''
244 | random sample shape, expression, diffuse and specular albedo coeffs
245 | :param shapeNumber: number of shapes to sample
246 | :return: shapeCoeff [n, self.shapeBasisSize], expCoeff [n, self.expBasisSize], diffCoeff [n, albedoBasisSize], specCoeff [n, self.albedoBasisSize]
247 | '''
248 | shapeCoeff = self.sampler.sample(shapeNumber, self.shapePcaVar)
249 | expCoeff = self.sampler.sample(shapeNumber, self.expressionPcaVar)
250 | diffAlbedoCoeff = self.sampler.sample(shapeNumber, self.diffuseAlbedoPcaVar)
251 | specAlbedoCoeff = self.sampler.sample(shapeNumber, self.specularAlbedoPcaVar)
252 | return shapeCoeff, expCoeff, diffAlbedoCoeff, specAlbedoCoeff
253 |
254 |
255 |
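A minimal usage sketch of the class above (illustrative only, not part of the repository; the constructor arguments mirror how pipeline.py builds its MorphableModel, and it assumes the Basel model files are installed under ./baselMorphableModel as described in the README):

# illustrative sketch: mean face from zero coefficients, then bake per-vertex albedo into a uv texture
import torch
from morphablemodel import MorphableModel

model = MorphableModel(path='./baselMorphableModel',
                       textureResolution=512,
                       trimPca=False,
                       landmarksPathName='/landmark_62.txt',
                       device='cpu')
# zero coefficients give the mean face (float32 allocation mirrors pipeline.py)
shapeCoeff = torch.zeros([1, model.shapeBasisSize], dtype=torch.float32)
expCoeff = torch.zeros([1, model.expBasisSize], dtype=torch.float32)
albedoCoeff = torch.zeros([1, model.albedoBasisSize], dtype=torch.float32)
vertices, diffAlbedo, specAlbedo = model.computeShapeAlbedo(shapeCoeff, expCoeff, albedoCoeff)
normals = model.computeNormals(vertices)                       # [1, verticesNumber, 3]
diffuseTextures = model.generateTextureFromAlbedo(diffAlbedo)  # [1, 512, 512, 3]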
--------------------------------------------------------------------------------
/normalsampler.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | class NormalSampler:
4 |
5 | def __init__(self, morphableModel):
6 | self.morphableModel = morphableModel
7 |
8 | def _sample(self, n, variance, std_multiplier = 1):
9 | std = torch.sqrt(variance) * std_multiplier
10 | std = std.expand((n, std.shape[0]))
11 | q = torch.distributions.Normal(torch.zeros_like(std).to(std.device), std)  #std_multiplier is already applied above
12 | samples = q.rsample()
13 | return samples
14 |
15 | def sampleShape(self, n, std_multiplier = 1):
16 | return self._sample(n, self.morphableModel.shapePcaVar, std_multiplier)
17 |
18 | def sampleExpression(self, n, std_multiplier=1):
19 | return self._sample(n, self.morphableModel.expressionPcaVar, std_multiplier)
20 |
21 | def sampleAlbedo(self, n, std_multiplier=1):
22 | return self._sample(n, self.morphableModel.diffuseAlbedoPcaVar, std_multiplier)
23 |
24 | def sample(self, shapeNumber = 1):
25 | shapeCoeff = self.sampleShape(shapeNumber)
26 | expCoeff = self.sampleExpression(shapeNumber)
27 | albedoCoeff = self.sampleAlbedo(shapeNumber)
28 | return shapeCoeff, expCoeff, albedoCoeff
29 |
30 |
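For reference, each sampling call above amounts to scaling standard normal noise by the per-component PCA standard deviation. A self-contained sketch (the variance values are stand-ins, not real model data):

import torch

# sketch: n coefficient vectors with each component c_i ~ N(0, sqrt(var_i) * std_multiplier)
n, std_multiplier = 4, 1.0
variance = torch.tensor([2.0, 0.5, 0.1])   # stand-in for e.g. morphableModel.shapePcaVar
samples = torch.randn(n, variance.shape[0]) * torch.sqrt(variance) * std_multiplier
# samples.shape == torch.Size([4, 3])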
--------------------------------------------------------------------------------
/optimConfig.ini:
--------------------------------------------------------------------------------
1 | #compute device
2 | device = 'cuda' #'cuda' or 'cpu'
3 |
4 | #tracker
5 | lamdmarksDetectorType = 'fan' # 'mediapipe' or 'fan' (mediapipe is much more stable than fan)
6 |
7 | #morphable model
8 | path = './baselMorphableModel'
9 | textureResolution = 512
10 | trimPca=False #if True keep only a subset of the pca basis (eigen vectors)
11 |
12 | #spherical harmonics
13 | bands = 9
14 | envMapRes = 64
15 | smoothSh=True #smooth the optimized environment map
16 | saveExr=False #save the optimized env map as exr or not ( if False a png is saved)
17 |
18 | #image
19 | maxResolution = 256 #maximum allowed resolution (if the input image is larger it will be automatically scaled down). this limitation is here to allow the library to run on hardware with limited gpu memory and also to maintain a reasonable optimization speed on non-rtx gpus. this limit can be increased on decent gpus/cpus
20 |
21 | #camera
22 | camFocalLength = 3000.0 #focal length in pixels (= f_{mm} * imageWidth / sensorWidth)
23 | optimizeFocalLength = True #if True the initial focal length is estimated otherwise it remains constant
24 |
25 | #optimization
26 | iterStep1 = 2000 # number of iterations for the coarse optim
27 | iterStep2 = 400 #number of iteration for the first dense optim (based on statistical priors)
28 | iterStep3 = 100 #number of iterations for refining the statistical albedo priors
29 | weightLandmarksLossStep2 = 0.001 #landmarks weight during step2
30 | weightLandmarksLossStep3 = 0.001 # landmarks weight during step3
31 |
32 | weightShapeReg = 0.001 #weight for shape regularization
33 | weightExpressionReg = 0.001 # weight for expression regularization
34 | weightAlbedoReg = 0.001 # weight for albedo regularization
35 |
36 | #regularizers for diffuse texture in step 3
37 | weightDiffuseSymmetryReg = 300. #symmetry regularizer weight for diffuse texture (at step 3). u may want to increase it in case of harsh shadows
38 | weightDiffuseConsistencyReg = 100. # consistency regularizer weight for diffuse texture (at step 3). u may want to increase it in case of harsh shadows
39 | weightDiffuseSmoothnessReg = 0.001 # smoothness regularizer weight for diffuse texture (at step 3)
40 |
41 | #regularizers for specular texture in step 3
42 | weightSpecularSymmetryReg = 200. # symmetry regularizer weight for specular texture (at step 3). u may want to increase it in case of harsh shadows
43 | weightSpecularConsistencyReg = 2. # consistency regularizer weight for specular texture (at step 3). u may want to increase it in case of harsh shadows
44 | weightSpecularSmoothnessReg = 0.001 # smoothness regularizer weight for specular texture (at step 3)
45 |
46 | #regularizers for roughness texture in step 3
47 | weightRoughnessSymmetryReg = 200. # symmetry regularizer weight for roughness texture (at step 3). u may want to increase it in case of harsh shadows
48 | weightRoughnessConsistencyReg = 0. # consistency regularizer weight for roughness texture (at step 3). u may want to increase it in case of harsh shadows
49 | weightRoughnessSmoothnessReg = 0.002 # smoothness regularizer weight for roughness texture (at step 3)
50 |
51 | #debug
52 | debugFrequency = 30 #display frequency during optimization (saved to debug directory) (0: no debug display)
53 | saveIntermediateStage = False #if True the output of stage 1 and 2 are saved. stage 3 is always saved which is the output of the optim
54 | verbose = False #display loss on terminal if true
55 |
56 | #ray tracing
57 | rtSamples = 4000 #the number of ray tracer samples to render the final output (higher is better but slower) best value is 20000 but on my old gpu it takes too much time to render. if u have nvidia rtx u are fine enjoy :)
58 | rtTrainingSamples = 8 #number of ray tracing samples to use during training
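A worked example of the camFocalLength conversion above (hypothetical numbers, for illustration only): a 35 mm lens on a sensor 36 mm wide, shooting an image 3000 pixels wide, gives camFocalLength = 35 * 3000 / 36 ≈ 2917 pixels.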
--------------------------------------------------------------------------------
/optimConfigShadows.ini:
--------------------------------------------------------------------------------
1 | #compute device
2 | device = 'cuda' #'cuda' or 'cpu'
3 |
4 | #tracker
5 | lamdmarksDetectorType = 'mediapipe' # 'mediapipe' or 'fan'
6 |
7 | #morphable model
8 | path = './baselMorphableModel'
9 | textureResolution = 512
10 | trimPca=False #if True keep only a subset of the pca basis (eigen vectors)
11 |
12 | #spherical harmonics
13 | bands = 9
14 | envMapRes = 64
15 |
16 | #image
17 | maxResolution = 256 #maximum allowed resolution (if the input image is larger it will be automatically scaled down). this limitation is here to allow the library to run on hardware with limited gpu memory and also to maintain a reasonable optimization speed on non-rtx gpus. this limit can be increased on decent gpus/cpus
18 |
19 | #camera
20 | camFocalLength = 3000.0 #focal length in pixels (= f_{mm} * imageWidth / sensorWidth)
21 | optimizeFocalLength = True #if True the initial focal length is estimated otherwise it remains constant
22 |
23 | #optimization
24 | iterStep1 = 2000 # number of iterations for the coarse optim
25 | iterStep2 = 400 #number of iteration for the first dense optim (based on statistical priors)
26 | iterStep3 = 100 #number of iterations for refining the statistical albedo priors
27 | weightLandmarksLossStep2 = 0.001 #landmarks weight during step2
28 | weightLandmarksLossStep3 = 0.001 # landmarks weight during step3
29 |
30 | weightShapeReg = 0.001 #weight for shape regularization
31 | weightExpressionReg = 0.001 # weight for expression regularization
32 | weightAlbedoReg = 0.001 # weight for albedo regularization
33 |
34 | #regularizers for diffuse texture in step 3
35 | weightDiffuseSymmetryReg = 1200. #symmetry regularizer weight for diffuse texture (at step 3). u may want to increase it in case of harsh shadows
36 | weightDiffuseConsistencyReg = 100. # consistency regularizer weight for diffuse texture (at step 3). u may want to increase it in case of harsh shadows
37 | weightDiffuseSmoothnessReg = 0.001 # smoothness regularizer weight for diffuse texture (at step 3)
38 |
39 | #regularizers for specular texture in step 3
40 | weightSpecularSymmetryReg = 300. # symmetry regularizer weight for specular texture (at step 3). u may want to increase it in case of harsh shadows
41 | weightSpecularConsistencyReg = 2. # consistency regularizer weight for specular texture (at step 3). u may want to increase it in case of harsh shadows
42 | weightSpecularSmoothnessReg = 0.001 # smoothness regularizer weight for specular texture (at step 3)
43 |
44 | #regularizers for roughness texture in step 3
45 | weightRoughnessSymmetryReg = 300. # symmetry regularizer weight for roughness texture (at step 3). u may want to increase it in case of harsh shadows
46 | weightRoughnessConsistencyReg = 0. # consistency regularizer weight for roughness texture (at step 3). u may want to increase it in case of harsh shadows
47 | weightRoughnessSmoothnessReg = 0.002 # smoothness regularizer weight for roughness texture (at step 3)
48 |
49 | #debug
50 | debugFrequency = 30 #display frequency during optimization (saved to debug directory) (0: no debug display)
51 | saveIntermediateStage = False #if True the output of stage 1 and 2 are saved. stage 3 is always saved which is the output of the optim
52 | verbose = False #display loss on terminal if true
53 |
54 | #ray tracing
55 | rtSamples = 4000 #the number of ray tracer samples to render the final output (higher is better but slower) best value is 20000 but on my old gpu it takes too much time to render. if u have nvidia rtx u are fine enjoy :)
56 | rtTrainingSamples = 8 #number of ray tracing samples to use during training
--------------------------------------------------------------------------------
/optimizer.py:
--------------------------------------------------------------------------------
1 | from image import Image, ImageFolder, overlayImage, saveImage
2 | from gaussiansmoothing import GaussianSmoothing, smoothImage
3 | from projection import estimateCameraPosition
4 |
5 | from textureloss import TextureLoss
6 | from pipeline import Pipeline
7 | from config import Config
8 | from utils import *
9 | import argparse
10 | import pickle
11 | import tqdm
12 | import sys
13 |
14 | class Optimizer:
15 |
16 | def __init__(self, outputDir, config):
17 | self.config = config
18 | self.device = config.device
19 | self.verbose = config.verbose
20 | self.framesNumber = 0
21 | self.pipeline = Pipeline(self.config)
22 |
23 | if self.config.lamdmarksDetectorType == 'fan':
24 | from landmarksfan import LandmarksDetectorFAN
25 | self.landmarksDetector = LandmarksDetectorFAN(self.pipeline.morphableModel.landmarksMask, self.device)
26 | elif self.config.lamdmarksDetectorType == 'mediapipe':
27 | from landmarksmediapipe import LandmarksDetectorMediapipe
28 | self.landmarksDetector = LandmarksDetectorMediapipe(self.pipeline.morphableModel.landmarksMask, self.device)
29 | else:
30 | raise ValueError(f'lamdmarksDetectorType must be one of [mediapipe, fan] but was {self.config.lamdmarksDetectorType}')
31 |
32 | self.textureLoss = TextureLoss(self.device)
33 |
34 | self.inputImage = None
35 | self.landmarks = None
36 | torch.set_grad_enabled(False)
37 | self.smoothing = GaussianSmoothing(3, 3, 1.0, 2).to(self.device)
38 | self.outputDir = outputDir + '/'
39 | self.debugDir = self.outputDir + '/debug/'
40 | mkdir_p(self.outputDir)
41 | mkdir_p(self.debugDir)
42 | mkdir_p(self.outputDir + '/checkpoints/')
43 |
44 | self.vEnhancedDiffuse = None
45 | self.vEnhancedSpecular = None
46 | self.vEnhancedRoughness = None
47 |
48 | def saveParameters(self, outputFileName):
49 |
50 | dict = {
51 | 'vShapeCoeff': self.pipeline.vShapeCoeff.detach().cpu().numpy(),
52 | 'vAlbedoCoeff': self.pipeline.vAlbedoCoeff.detach().cpu().numpy(),
53 | 'vExpCoeff': self.pipeline.vExpCoeff.detach().cpu().numpy(),
54 | 'vRotation': self.pipeline.vRotation.detach().cpu().numpy(),
55 | 'vTranslation': self.pipeline.vTranslation.detach().cpu().numpy(),
56 | 'vFocals': self.pipeline.vFocals.detach().cpu().numpy(),
57 | 'vShCoeffs': self.pipeline.vShCoeffs.detach().cpu().numpy(),
58 | 'screenWidth':self.pipeline.renderer.screenWidth,
59 | 'screenHeight': self.pipeline.renderer.screenHeight,
60 | 'sharedIdentity': self.pipeline.sharedIdentity
61 |
62 | }
63 | if self.vEnhancedDiffuse is not None:
64 | dict['vEnhancedDiffuse'] = self.vEnhancedDiffuse.detach().cpu().numpy()
65 | if self.vEnhancedSpecular is not None:
66 | dict['vEnhancedSpecular'] = self.vEnhancedSpecular.detach().cpu().numpy()
67 | if self.vEnhancedRoughness is not None:
68 | dict['vEnhancedRoughness'] = self.vEnhancedRoughness.detach().cpu().numpy()
69 |
70 | handle = open(outputFileName, 'wb')
71 | pickle.dump(dict, handle, pickle.HIGHEST_PROTOCOL)
72 | handle.close()
73 |
74 | def loadParameters(self, pickelFileName):
75 | handle = open(pickelFileName, 'rb')
76 | assert handle is not None
77 | dict = pickle.load(handle)
78 | self.pipeline.vShapeCoeff = torch.tensor(dict['vShapeCoeff']).to(self.device)
79 | self.pipeline.vAlbedoCoeff = torch.tensor(dict['vAlbedoCoeff']).to(self.device)
80 | self.pipeline.vExpCoeff = torch.tensor(dict['vExpCoeff']).to(self.device)
81 | self.pipeline.vRotation = torch.tensor(dict['vRotation']).to(self.device)
82 | self.pipeline.vTranslation = torch.tensor(dict['vTranslation']).to(self.device)
83 | self.pipeline.vFocals = torch.tensor(dict['vFocals']).to(self.device)
84 | self.pipeline.vShCoeffs = torch.tensor(dict['vShCoeffs']).to(self.device)
85 | self.pipeline.renderer.screenWidth = int(dict['screenWidth'])
86 | self.pipeline.renderer.screenHeight = int(dict['screenHeight'])
87 | self.pipeline.sharedIdentity = bool(dict['sharedIdentity'])
88 |
89 | if "vEnhancedDiffuse" in dict:
90 | self.vEnhancedDiffuse = torch.tensor(dict['vEnhancedDiffuse']).to(self.device)
91 |
92 | if "vEnhancedSpecular" in dict:
93 | self.vEnhancedSpecular = torch.tensor(dict['vEnhancedSpecular']).to(self.device)
94 |
95 | if "vEnhancedRoughness" in dict:
96 | self.vEnhancedRoughness = torch.tensor(dict['vEnhancedRoughness']).to(self.device)
97 |
98 | handle.close()
99 | self.enableGrad()
100 |
101 | def enableGrad(self):
102 | self.pipeline.vShapeCoeff.requires_grad = True
103 | self.pipeline.vAlbedoCoeff.requires_grad = True
104 | self.pipeline.vExpCoeff.requires_grad = True
105 | self.pipeline.vRotation.requires_grad = True
106 | self.pipeline.vTranslation.requires_grad = True
107 | self.pipeline.vFocals.requires_grad = True
108 | self.pipeline.vShCoeffs.requires_grad = True
109 |
110 | def setImage(self, imagePath, sharedIdentity = False):
111 | '''
112 | set image to estimate face reflectance and geometry
113 | :param imagePath: drive path to the image
114 | :param sharedIdentity: if true, a single set of shape and albedo coeffs is shared across all images, as they belong to the same person identity
115 | :return:
116 | '''
117 | if os.path.isfile(imagePath):
118 | self.inputImage = Image(imagePath, self.device, self.config.maxResolution)
119 | else:
120 | self.inputImage = ImageFolder(imagePath, self.device, self.config.maxResolution)
121 |
122 | self.framesNumber = self.inputImage.tensor.shape[0]
123 | #self.inputImage = Image(imagePath, self.device)
124 | self.pipeline.renderer.screenWidth = self.inputImage.width
125 | self.pipeline.renderer.screenHeight = self.inputImage.height
126 |
127 | print('detecting landmarks using:', self.config.lamdmarksDetectorType)
128 | landmarks = self.landmarksDetector.detect(self.inputImage.tensor)
129 | #assert (landmarks.shape[0] == 1) # can only handle single subject in image
130 | assert (landmarks.dim() == 3 and landmarks.shape[2] == 2)
131 | self.landmarks = landmarks
132 | for i in range(self.framesNumber):
133 | imagesLandmark = self.landmarksDetector.drawLandmarks(self.inputImage.tensor[i], self.landmarks[i])
134 | cv2.imwrite(self.outputDir + '/landmarks' + str(i) + '.png', cv2.cvtColor(imagesLandmark, cv2.COLOR_BGR2RGB) )
135 | self.pipeline.initSceneParameters(self.framesNumber, sharedIdentity)
136 | self.initCameraPos() #always init the head pose (rotation + translation)
137 | self.enableGrad()
138 |
139 | def initCameraPos(self):
140 | print('init camera pose...', file=sys.stderr, flush=True)
141 | association = self.pipeline.morphableModel.landmarksAssociation
142 | vertices = self.pipeline.computeShape()
143 | headPoints = vertices[:, association]
144 | rot, trans = estimateCameraPosition(self.pipeline.vFocals, self.inputImage.center,
145 | self.landmarks, headPoints, self.pipeline.vRotation,
146 | self.pipeline.vTranslation)
147 |
148 | self.pipeline.vRotation = rot.clone().detach()
149 | self.pipeline.vTranslation = trans.clone().detach()
150 | def getTextureIndex(self, i):
151 | if self.pipeline.sharedIdentity:
152 | return 0
153 | return i
154 | def debugFrame(self, image, target, diffuseTexture, specularTexture, roughnessTexture, outputPrefix):
155 | for i in range(image.shape[0]):
156 | diff = (image[i] - target[i]).abs()
157 |
158 | import cv2
159 | diffuse = cv2.resize(cv2.cvtColor(diffuseTexture[self.getTextureIndex(i)].detach().cpu().numpy(), cv2.COLOR_BGR2RGB), (target.shape[2], target.shape[1]))
160 | spec = cv2.resize(cv2.cvtColor(specularTexture[self.getTextureIndex(i)].detach().cpu().numpy(), cv2.COLOR_BGR2RGB), (target.shape[2], target.shape[1]))
161 | rough = roughnessTexture[self.getTextureIndex(i)].detach().cpu().numpy()
162 | rough = cv2.cvtColor(cv2.resize(rough, (target.shape[2], target.shape[1])), cv2.COLOR_GRAY2RGB)
163 |
164 | res = cv2.hconcat([cv2.cvtColor(image[i].detach().cpu().numpy(), cv2.COLOR_BGR2RGB),
165 | cv2.cvtColor(target[i].detach().cpu().numpy(), cv2.COLOR_BGR2RGB),
166 | cv2.cvtColor(diff.detach().cpu().numpy(), cv2.COLOR_BGR2RGB)])
167 | ref = cv2.hconcat([diffuse, spec, rough])
168 |
169 | debugFrame = cv2.vconcat([np.power(np.clip(res, 0.0, 1.0), 1.0 / 2.2) * 255, ref * 255])
170 | cv2.imwrite(outputPrefix + '_frame' + str(i) + '.png', debugFrame)
171 |
172 | def regStatModel(self, coeff, var):
173 | loss = ((coeff * coeff) / var).mean()
174 | return loss
175 |
176 | def plotLoss(self, lossArr, index, fileName):
177 | import matplotlib.pyplot as plt
178 | plt.figure(index)
179 | plt.plot(lossArr)
180 | plt.scatter(np.arange(0, len(lossArr)).tolist(), lossArr, c='red')
181 | plt.savefig(fileName)
182 |
183 | def landmarkLoss(self, cameraVertices, landmarks):
184 | return self.pipeline.landmarkLoss(cameraVertices, landmarks, self.pipeline.vFocals, self.inputImage.center)
185 |
186 | def runStep1(self):
187 | print("1/3 => Optimizing head pose and expressions using landmarks...", file=sys.stderr, flush=True)
188 | torch.set_grad_enabled(True)
189 |
190 | params = [
191 | {'params': self.pipeline.vRotation, 'lr': 0.02},
192 | {'params': self.pipeline.vTranslation, 'lr': 0.02},
193 | {'params': self.pipeline.vExpCoeff, 'lr': 0.02},
194 | #{'params': self.pipeline.vShapeCoeff, 'lr': 0.02}
195 | ]
196 |
197 | if self.config.optimizeFocalLength:
198 | params.append({'params': self.pipeline.vFocals, 'lr': 0.02})
199 |
200 | optimizer = torch.optim.Adam(params)
201 | losses = []
202 |
203 | #for iter in range(2000):
204 | for iter in tqdm.tqdm(range(self.config.iterStep1)):
205 | optimizer.zero_grad()
206 | vertices = self.pipeline.computeShape()
207 | cameraVertices = self.pipeline.transformVertices(vertices)
208 | loss = self.landmarkLoss(cameraVertices, self.landmarks)
209 | loss += 0.1 * self.regStatModel(self.pipeline.vExpCoeff, self.pipeline.morphableModel.expressionPcaVar)
210 | loss.backward()
211 | optimizer.step()
212 | losses.append(loss.item())
213 | if self.verbose:
214 | print(iter, '=>', loss.item())
215 |
216 | self.plotLoss(losses, 0, self.outputDir + 'checkpoints/stage1_loss.png')
217 | self.saveParameters(self.outputDir + 'checkpoints/stage1_output.pickle')
218 |
219 | def runStep2(self):
220 | print("2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...", file=sys.stderr, flush=True)
221 | torch.set_grad_enabled(True)
222 | self.pipeline.renderer.samples = 8
223 | inputTensor = torch.pow(self.inputImage.tensor, self.inputImage.gamma)
224 |
225 | optimizer = torch.optim.Adam([
226 | {'params': self.pipeline.vShCoeffs, 'lr': 0.005},
227 | {'params': self.pipeline.vAlbedoCoeff, 'lr': 0.007}
228 | ])
229 | losses = []
230 |
231 | for iter in tqdm.tqdm(range(self.config.iterStep2 + 1)):
232 | if iter == 100:
233 | optimizer.add_param_group({'params': self.pipeline.vShapeCoeff, 'lr': 0.01})
234 | optimizer.add_param_group({'params': self.pipeline.vExpCoeff, 'lr': 0.01})
235 | optimizer.add_param_group({'params': self.pipeline.vRotation, 'lr': 0.0001})
236 | optimizer.add_param_group({'params': self.pipeline.vTranslation, 'lr': 0.0001})
237 |
238 | optimizer.zero_grad()
239 | vertices, diffAlbedo, specAlbedo = self.pipeline.morphableModel.computeShapeAlbedo(self.pipeline.vShapeCoeff, self.pipeline.vExpCoeff, self.pipeline.vAlbedoCoeff)
240 | cameraVerts = self.pipeline.camera.transformVertices(vertices, self.pipeline.vTranslation, self.pipeline.vRotation)
241 | diffuseTextures = self.pipeline.morphableModel.generateTextureFromAlbedo(diffAlbedo)
242 | specularTextures = self.pipeline.morphableModel.generateTextureFromAlbedo(specAlbedo)
243 |
244 | images = self.pipeline.render(cameraVerts, diffuseTextures, specularTextures)
245 | mask = images[..., 3:]
246 | smoothedImage = smoothImage(images[..., 0:3], self.smoothing)
247 | diff = mask * (smoothedImage - inputTensor).abs()
248 | #photoLoss = diff.mean(dim=-1).sum() / float(self.framesNumber)
249 | photoLoss = 1000.* diff.mean()
250 | landmarksLoss = self.config.weightLandmarksLossStep2 * self.landmarkLoss(cameraVerts, self.landmarks)
251 |
252 | regLoss = 0.0001 * self.pipeline.vShCoeffs.pow(2).mean()
253 | regLoss += self.config.weightAlbedoReg * self.regStatModel(self.pipeline.vAlbedoCoeff, self.pipeline.morphableModel.diffuseAlbedoPcaVar)
254 | regLoss += self.config.weightShapeReg * self.regStatModel(self.pipeline.vShapeCoeff, self.pipeline.morphableModel.shapePcaVar)
255 | regLoss += self.config.weightExpressionReg * self.regStatModel(self.pipeline.vExpCoeff, self.pipeline.morphableModel.expressionPcaVar)
256 |
257 | loss = photoLoss + landmarksLoss + regLoss
258 |
259 | losses.append(loss.item())
260 | loss.backward()
261 | optimizer.step()
262 | if self.verbose:
263 | print(iter, ' => Loss:', loss.item(),
264 | '. photo Loss:', photoLoss.item(),
265 | '. landmarks Loss: ', landmarksLoss.item(),
266 | '. regLoss: ', regLoss.item())
267 |
268 | if self.config.debugFrequency > 0 and iter % self.config.debugFrequency == 0:
269 | self.debugFrame(smoothedImage, inputTensor, diffuseTextures, specularTextures, self.pipeline.vRoughness, self.debugDir + 'debug1_iter' + str(iter))
270 |
271 | self.plotLoss(losses, 1, self.outputDir + 'checkpoints/stage2_loss.png')
272 | self.saveParameters(self.outputDir + 'checkpoints/stage2_output.pickle')
273 |
274 | def runStep3(self):
275 | print("3/3 => finetuning albedos, shape, expression, head pose and scene light...", file=sys.stderr, flush=True)
276 | torch.set_grad_enabled(True)
277 | self.pipeline.renderer.samples = 8
278 |
279 | inputTensor = torch.pow(self.inputImage.tensor, self.inputImage.gamma)
280 | vertices, diffAlbedo, specAlbedo = self.pipeline.morphableModel.computeShapeAlbedo(self.pipeline.vShapeCoeff, self.pipeline.vExpCoeff, self.pipeline.vAlbedoCoeff)
281 | vDiffTextures = self.pipeline.morphableModel.generateTextureFromAlbedo(diffAlbedo).detach().clone() if self.vEnhancedDiffuse is None else self.vEnhancedDiffuse.detach().clone()
282 | vSpecTextures = self.pipeline.morphableModel.generateTextureFromAlbedo(specAlbedo).detach().clone() if self.vEnhancedSpecular is None else self.vEnhancedSpecular.detach().clone()
283 | vRoughTextures = self.pipeline.vRoughness.detach().clone() if self.vEnhancedRoughness is None else self.vEnhancedRoughness.detach().clone()
284 |
285 | refDiffTextures = vDiffTextures.detach().clone()
286 | refSpecTextures = vSpecTextures.detach().clone()
287 | refRoughTextures = vRoughTextures.detach().clone()
288 | vDiffTextures.requires_grad = True
289 | vSpecTextures.requires_grad = True
290 | vRoughTextures.requires_grad = True
291 |
292 | optimizer = torch.optim.Adam([
293 | {'params': vDiffTextures, 'lr': 0.005},
294 | {'params': vSpecTextures, 'lr': 0.02},
295 | {'params': vRoughTextures, 'lr': 0.02}
296 | ])
297 | ''''
298 | {'params': self.pipeline.vShCoeffs, 'lr': 0.005 * 2.},
299 | {'params': self.pipeline.vShapeCoeff, 'lr': 0.01},
300 | {'params': self.pipeline.vExpCoeff, 'lr': 0.01},
301 | {'params': self.pipeline.vRotation, 'lr': 0.0005},
302 | {'params': self.pipeline.vTranslation, 'lr': 0.0005}'''
303 |
304 | losses = []
305 |
306 | for iter in tqdm.tqdm(range(self.config.iterStep3 + 1)):
307 | optimizer.zero_grad()
308 | vertices, diffAlbedo, specAlbedo = self.pipeline.morphableModel.computeShapeAlbedo(self.pipeline.vShapeCoeff, self.pipeline.vExpCoeff, self.pipeline.vAlbedoCoeff)
309 | cameraVerts = self.pipeline.camera.transformVertices(vertices, self.pipeline.vTranslation, self.pipeline.vRotation)
310 |
311 | images = self.pipeline.render(cameraVerts, vDiffTextures, vSpecTextures, vRoughTextures)
312 | mask = images[..., 3:]
313 | smoothedImage = smoothImage(images[..., 0:3], self.smoothing)
314 | diff = mask * (smoothedImage - inputTensor).abs()
315 |
316 | #loss = diff.mean(dim=-1).sum() / float(self.framesNumber)
317 | loss = 1000.0 * diff.mean()
318 | loss += 0.2 * (self.textureLoss.regTextures(vDiffTextures, refDiffTextures, ws = self.config.weightDiffuseSymmetryReg, wr = self.config.weightDiffuseConsistencyReg, wc = self.config.weightDiffuseConsistencyReg, wsm = self.config.weightDiffuseSmoothnessReg, wm = 0.) + \
319 | self.textureLoss.regTextures(vSpecTextures, refSpecTextures, ws = self.config.weightSpecularSymmetryReg, wr = self.config.weightSpecularConsistencyReg, wc = self.config.weightSpecularConsistencyReg, wsm = self.config.weightSpecularSmoothnessReg, wm = 0.5) + \
320 | self.textureLoss.regTextures(vRoughTextures, refRoughTextures, ws = self.config.weightRoughnessSymmetryReg, wr = self.config.weightRoughnessConsistencyReg, wc = self.config.weightRoughnessConsistencyReg, wsm = self.config.weightRoughnessSmoothnessReg, wm = 0.))
321 | loss += 0.0001 * self.pipeline.vShCoeffs.pow(2).mean()
322 | loss += self.config.weightExpressionReg * self.regStatModel(self.pipeline.vExpCoeff, self.pipeline.morphableModel.expressionPcaVar)
323 | loss += self.config.weightShapeReg * self.regStatModel(self.pipeline.vShapeCoeff, self.pipeline.morphableModel.shapePcaVar)
324 | loss += self.config.weightLandmarksLossStep3 * self.landmarkLoss(cameraVerts, self.landmarks)
325 |
326 | losses.append(loss.item())
327 |
328 | loss.backward()
329 | optimizer.step()
330 | if self.verbose:
331 | print(iter, ' => Loss:', loss.item())
332 |
333 | if self.config.debugFrequency > 0 and iter % self.config.debugFrequency == 0:
334 | self.debugFrame(smoothedImage, inputTensor, vDiffTextures, vSpecTextures, vRoughTextures, self.debugDir + 'debug2_iter' + str(iter))
335 |
336 | self.plotLoss(losses, 2, self.outputDir + 'checkpoints/stage3_loss.png')
337 |
338 | self.vEnhancedDiffuse = vDiffTextures.detach().clone()
339 | self.vEnhancedSpecular = vSpecTextures.detach().clone()
340 | self.vEnhancedRoughness = vRoughTextures.detach().clone()
341 |
342 | self.saveParameters(self.outputDir + 'checkpoints/stage3_output.pickle')
343 |
344 | def saveOutput(self, samples, outputDir = None, prefix = ''):
345 | if outputDir is None:
346 | outputDir = self.outputDir
347 | mkdir_p(outputDir)
348 |
349 | print("saving to: '", outputDir, "'. hold on... ", file=sys.stderr, flush=True)
350 | outputDir += '/' #use join
351 |
352 | inputTensor = torch.pow(self.inputImage.tensor, self.inputImage.gamma)
353 | vDiffTextures = self.vEnhancedDiffuse
354 | vSpecTextures = self.vEnhancedSpecular
355 | vRoughTextures = self.vEnhancedRoughness
356 | vertices, diffAlbedo, specAlbedo = self.pipeline.morphableModel.computeShapeAlbedo(self.pipeline.vShapeCoeff, self.pipeline.vExpCoeff, self.pipeline.vAlbedoCoeff)
357 | cameraVerts = self.pipeline.camera.transformVertices(vertices, self.pipeline.vTranslation, self.pipeline.vRotation)
358 | cameraNormals = self.pipeline.morphableModel.computeNormals(cameraVerts)
359 |
360 |
361 | if vDiffTextures is None:
362 | vDiffTextures = self.pipeline.morphableModel.generateTextureFromAlbedo(diffAlbedo)
363 | vSpecTextures = self.pipeline.morphableModel.generateTextureFromAlbedo(specAlbedo)
364 | vRoughTextures = self.pipeline.vRoughness
365 |
366 |
367 | self.pipeline.renderer.samples = samples
368 | images = self.pipeline.render(None, vDiffTextures, vSpecTextures, vRoughTextures)
369 |
370 | diffuseAlbedo = self.pipeline.render(diffuseTextures=vDiffTextures, renderAlbedo=True)
371 | specularAlbedo = self.pipeline.render(diffuseTextures=vSpecTextures, renderAlbedo=True)
372 | roughnessAlbedo = self.pipeline.render(diffuseTextures=vRoughTextures.repeat(1, 1, 1, 3), renderAlbedo=True)
373 | illum = self.pipeline.render(diffuseTextures=torch.ones_like(vDiffTextures), specularTextures=torch.zeros_like(vDiffTextures))
374 |
375 | for i in range(diffuseAlbedo.shape[0]):
376 | saveObj(outputDir + prefix + '/mesh' + str(i) + '.obj',
377 | 'material' + str(i) + '.mtl',
378 | cameraVerts[i],
379 | self.pipeline.faces32,
380 | cameraNormals[i],
381 | self.pipeline.morphableModel.uvMap,
382 | prefix + 'diffuseMap_' + str(self.getTextureIndex(i)) + '.png')
383 |
384 | envMaps = self.pipeline.sh.toEnvMap(self.pipeline.vShCoeffs, self.config.smoothSh) #smooth
385 | ext = '.png'
386 | if self.config.saveExr:
387 | ext = '.exr'
388 | saveImage(envMaps[i], outputDir + '/envMap_' + str(i) + ext)
389 |
390 | #saveImage(diffuseAlbedo[self.getTextureIndex(i)], outputDir + prefix + 'diffuse_' + str(self.getTextureIndex(i)) + '.png')
391 | #saveImage(specularAlbedo[self.getTextureIndex(i)], outputDir + prefix + 'specular_' + str(self.getTextureIndex(i)) + '.png')
392 | #saveImage(roughnessAlbedo[self.getTextureIndex(i)], outputDir + prefix + 'roughness_' + str(self.getTextureIndex(i)) + '.png')
393 | #saveImage(illum[i], outputDir + prefix + 'illumination_' + str(i) + '.png')
394 | #saveImage(images[i], outputDir + prefix + 'finalReconstruction_' + str(i) + '.png')
395 | overlay = overlayImage(inputTensor[i], images[i])
396 | #saveImage(overlay, outputDir + '/overlay_' + str(i) + '.png')
397 |
398 | renderAll = torch.cat([torch.cat([inputTensor[i], torch.ones_like(images[i])[..., 3:]], dim = -1),
399 | torch.cat([overlay.to(self.device), torch.ones_like(images[i])[..., 3:]], dim = -1),
400 | images[i],
401 | illum[i],
402 | diffuseAlbedo[self.getTextureIndex(i)],
403 | specularAlbedo[self.getTextureIndex(i)],
404 | roughnessAlbedo[self.getTextureIndex(i)]], dim=1)
405 | saveImage(renderAll, outputDir + '/render_' + str(i) + '.png')
406 |
407 | saveImage(vDiffTextures[self.getTextureIndex(i)], outputDir + prefix + 'diffuseMap_' + str(self.getTextureIndex(i)) + '.png')
408 | saveImage(vSpecTextures[self.getTextureIndex(i)], outputDir + prefix + 'specularMap_' + str(self.getTextureIndex(i)) + '.png')
409 | saveImage(vRoughTextures[self.getTextureIndex(i)].repeat(1, 1, 3), outputDir + prefix + 'roughnessMap_' + str(self.getTextureIndex(i)) + '.png')
410 |
411 | def run(self, imagePathOrDir, sharedIdentity = False, checkpoint = None, doStep1 = True, doStep2 = True, doStep3 = True):
412 | '''
413 | run optimization on given path (can be a directory that contains images with same resolution or a direct path to an image)
414 | :param imagePathOrDir: a path to a directory or image
415 | :param sharedIdentity: if True, the images in the directory belong to the same subject, so the shape identity and skin reflectance are shared across all images
416 | :param checkpoint: a path to a checkpoint file (pickle) to resume optim (check saveParameters and loadParameters)
417 | :param doStep1: if True do stage 1 optim (landmarks loss)
418 | :param doStep2: if True do stage 2 optim (photo loss on statistical prior)
419 | :param doStep3: if True do stage 3 optim ( refine albedos)
420 | :return:
421 | '''
422 |
423 |
424 | self.setImage(imagePathOrDir, sharedIdentity)
425 | assert(self.framesNumber >= 1) #could not load any image from path
426 |
427 | if checkpoint is not None and checkpoint != '':
428 | print('resuming optimization from checkpoint: ',checkpoint, file=sys.stderr, flush=True)
429 | self.loadParameters(checkpoint)
430 |
431 | import time
432 | start = time.time()
433 | if doStep1:
434 | self.runStep1()
435 | if self.config.saveIntermediateStage:
436 | self.saveOutput(self.config.rtSamples, self.outputDir + '/outputStage1', prefix='stage1_')
437 | if doStep2:
438 | self.runStep2()
439 | if self.config.saveIntermediateStage:
440 | self.saveOutput(self.config.rtSamples, self.outputDir + '/outputStage2', prefix='stage2_')
441 | if doStep3:
442 | self.runStep3()
443 | end = time.time()
444 | print("took {:.2f} minutes to optimize".format((end - start) / 60.), file=sys.stderr, flush=True)
445 | self.saveOutput(self.config.rtSamples, self.outputDir)
446 |
447 | if __name__ == "__main__":
448 |
449 | parser = argparse.ArgumentParser()
450 | parser.add_argument("--input", required=False, default='./input/s1.png', help="path to a directory or image to reconstruct (images in the same directory should have the same resolution)")
451 |
452 | parser.add_argument("--sharedIdentity", dest='sharedIdentity', action='store_true', help='in case the input directory contains multiple images, this flag tells the optimizer that all images are of the same person (that means the identity shape and skin reflectance are common for all images); if this flag is not set, each image is assumed to belong to a different subject', required=False)
453 | #parser.add_argument("--no-sharedIdentity", dest='sharedIdentity', action='store_false', help='in case input directory contains multiple images, this flag tells the optimizations that all images are for the same person ( that means the identity shape and skin reflectance is common for all images), if this flag is false, that each image belong to a different subject', required=False)
454 |
455 | parser.add_argument("--output", required=False, default='./output/', help="path to the output directory where optimization results are saved in")
456 | parser.add_argument("--config", required=False, default='./optimConfig.ini', help="path to the configuration file (used to configure the optimization)")
457 |
458 | parser.add_argument("--checkpoint", required=False, default='', help="path to a checkpoint pickle file used to resume optimization")
459 | parser.add_argument("--skipStage1", dest='skipStage1', action='store_true', help='if true, the first (coarse) stage is skipped (stage1). useful if u want to resume optimization from a checkpoint', required=False)
460 | parser.add_argument("--skipStage2", dest='skipStage2', action='store_true', help='if true, the second stage is skipped (stage2). useful if u want to resume optimization from a checkpoint', required=False)
461 | parser.add_argument("--skipStage3", dest='skipStage3', action='store_true', help='if true, the third stage is skipped (stage3). useful if u want to resume optimization from a checkpoint', required=False)
462 | params = parser.parse_args()
463 |
464 | inputDir = params.input
465 | sharedIdentity = params.sharedIdentity
466 | outputDir = params.output + '/' + os.path.basename(inputDir.strip('/'))
467 |
468 | configFile = params.config
469 | checkpoint = params.checkpoint
470 | doStep1 = not params.skipStage1
471 | doStep2 = not params.skipStage2
472 | doStep3 = not params.skipStage3
473 |
474 | config = Config()
475 | config.fillFromDicFile(configFile)
476 | if config.device == 'cuda' and not torch.cuda.is_available():
477 | print('[WARN] no cuda enabled device found. switching to cpu... ')
478 | config.device = 'cpu'
479 |
480 | #check if mediapipe is available
481 |
482 | if config.lamdmarksDetectorType == 'mediapipe':
483 | try:
484 | from landmarksmediapipe import LandmarksDetectorMediapipe
485 | except:
486 | print('[WARN] Mediapipe for landmarks detection not available. falling back to FAN landmarks detector. You may want to try Mediapipe because it is much more accurate than FAN (pip install mediapipe)')
487 | config.lamdmarksDetectorType = 'fan'
488 |
489 | optimizer = Optimizer(outputDir, config)
490 | optimizer.run(inputDir,
491 | sharedIdentity= sharedIdentity,
492 | checkpoint= checkpoint,
493 | doStep1= doStep1,
494 | doStep2 = doStep2,
495 | doStep3= doStep3)
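A minimal programmatic sketch, mirroring the __main__ block above (paths are placeholders, not real inputs):

# sketch: running the three optimization stages from Python instead of the command line
from config import Config
from optimizer import Optimizer

config = Config()
config.fillFromDicFile('./optimConfig.ini')   # same configuration file as above
opt = Optimizer('./output/s1', config)        # results and checkpoints land here
opt.run('./input/s1.png',                     # a single image or a directory of images
        sharedIdentity=False,
        checkpoint=None,
        doStep1=True, doStep2=True, doStep3=True)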
--------------------------------------------------------------------------------
/output/defaultoutput:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/output/defaultoutput
--------------------------------------------------------------------------------
/pipeline.py:
--------------------------------------------------------------------------------
1 | from sphericalharmonics import SphericalHarmonics
2 | from morphablemodel import MorphableModel
3 | from renderer import Renderer
4 | from camera import Camera
5 | from utils import *
6 |
7 | class Pipeline:
8 |
9 | def __init__(self, config):
10 | '''
11 | a pipeline can generate and render textured faces under different camera angles and lighting conditions
12 | :param config: configuration file used to parameterize the pipeline
13 | '''
14 | self.config = config
15 | self.device = config.device
16 | self.camera = Camera(self.device)
17 | self.sh = SphericalHarmonics(config.envMapRes, self.device)
18 |
19 | if self.config.lamdmarksDetectorType == 'fan':
20 | pathLandmarksAssociation = '/landmark_62.txt'
21 | elif self.config.lamdmarksDetectorType == 'mediapipe':
22 | pathLandmarksAssociation = '/landmark_62_mp.txt'
23 | else:
24 | raise ValueError(f'lamdmarksDetectorType must be one of [mediapipe, fan] but was {self.config.lamdmarksDetectorType}')
25 |
26 | self.morphableModel = MorphableModel(path = config.path,
27 | textureResolution= config.textureResolution,
28 | trimPca= config.trimPca,
29 | landmarksPathName=pathLandmarksAssociation,
30 | device = self.device
31 | )
32 | self.renderer = Renderer(config.rtTrainingSamples, 1, self.device)
33 | self.uvMap = self.morphableModel.uvMap.clone()
34 | self.uvMap[:, 1] = 1.0 - self.uvMap[:, 1]
35 | self.faces32 = self.morphableModel.faces.to(torch.int32).contiguous()
36 | self.shBands = config.bands
37 | self.sharedIdentity = False
38 |
39 | def initSceneParameters(self, n, sharedIdentity = False):
40 | '''
41 | init pipeline parameters (face shape, albedo, exp coeffs, light and head pose (camera))
42 | :param n: the number of frames/parameter sets (if non-positive, the pipeline variables are not allocated)
43 | :param sharedIdentity: if true, a single set of shape and albedo coeffs is shared across all frames, as they belong to the same person identity
44 | :return:
45 | '''
46 |
47 | if n <= 0:
48 | return
49 |
50 | self.sharedIdentity = sharedIdentity
51 | nShape = 1 if sharedIdentity == True else n
52 |
53 | self.vShapeCoeff = torch.zeros([nShape, self.morphableModel.shapeBasisSize], dtype = torch.float32, device = self.device)
54 | self.vAlbedoCoeff = torch.zeros([nShape, self.morphableModel.albedoBasisSize], dtype=torch.float32, device=self.device)
55 |
56 | self.vExpCoeff = torch.zeros([n, self.morphableModel.expBasisSize], dtype=torch.float32, device=self.device)
57 | self.vRotation = torch.zeros([n, 3], dtype=torch.float32, device=self.device)
58 | self.vTranslation = torch.zeros([n, 3], dtype=torch.float32, device=self.device)
59 | self.vTranslation[:, 2] = 500.
60 | self.vRotation[:, 0] = 3.14
61 | self.vFocals = self.config.camFocalLength * torch.ones([n], dtype=torch.float32, device=self.device)
62 | self.vShCoeffs = 0.0 * torch.ones([n, self.shBands * self.shBands, 3], dtype=torch.float32, device=self.device)
63 | self.vShCoeffs[..., 0, 0] = 0.5
64 | self.vShCoeffs[..., 2, 0] = -0.5
65 | self.vShCoeffs[..., 1] = self.vShCoeffs[..., 0]
66 | self.vShCoeffs[..., 2] = self.vShCoeffs[..., 0]
67 |
68 | texRes = self.morphableModel.getTextureResolution()
69 | self.vRoughness = 0.4 * torch.ones([nShape, texRes, texRes, 1], dtype=torch.float32, device=self.device)
70 |
71 | def computeShape(self):
72 | '''
73 | compute shape vertices from the shape and expression coefficients
74 | :return: tensor of 3d vertices [n, verticesNumber, 3]
75 | '''
76 |
77 | assert(self.vShapeCoeff is not None and self.vExpCoeff is not None)
78 | vertices = self.morphableModel.computeShape(self.vShapeCoeff, self.vExpCoeff)
79 | return vertices
80 |
81 | def transformVertices(self, vertices = None):
82 | '''
83 | transform vertices to camera coordinate space
84 | :param vertices: tensor of 3d vertices [n, verticesNumber, 3]
85 | :return: transformed vertices [n, verticesNumber, 3]
86 | '''
87 |
88 | if vertices is None:
89 | vertices = self.computeShape()
90 |
91 | assert(vertices.dim() == 3 and vertices.shape[-1] == 3)
92 | assert(self.vTranslation is not None and self.vRotation is not None)
93 | assert(vertices.shape[0] == self.vTranslation.shape[0] == self.vRotation.shape[0])
94 |
95 | transformedVertices = self.camera.transformVertices(vertices, self.vTranslation, self.vRotation)
96 | return transformedVertices
97 |
98 | def render(self, cameraVerts = None, diffuseTextures = None, specularTextures = None, roughnessTextures = None, renderAlbedo = False):
99 | '''
100 | ray trace an image given camera vertices and corresponding textures
101 | :param cameraVerts: camera vertices tensor [n, verticesNumber, 3]
102 | :param diffuseTextures: diffuse textures tensor [n, texRes, texRes, 3]
103 | :param specularTextures: specular textures tensor [n, texRes, texRes, 3]
104 | :param roughnessTextures: roughness textures tensor [n, texRes, texRes, 1]
105 | :param renderAlbedo: if True render albedo else ray trace image
106 | :return: ray traced images [n, resX, resY, 4]
107 | '''
108 | if cameraVerts is None:
109 | vertices, diffAlbedo, specAlbedo = self.morphableModel.computeShapeAlbedo(self.vShapeCoeff, self.vExpCoeff, self.vAlbedoCoeff)
110 | cameraVerts = self.camera.transformVertices(vertices, self.vTranslation, self.vRotation)
111 |
112 | #compute normals
113 | normals = self.morphableModel.meshNormals.computeNormals(cameraVerts)
114 |
115 | if diffuseTextures is None:
116 | diffuseTextures = self.morphableModel.generateTextureFromAlbedo(diffAlbedo)
117 |
118 | if specularTextures is None:
119 | specularTextures = self.morphableModel.generateTextureFromAlbedo(specAlbedo)
120 |
121 | if roughnessTextures is None:
122 | roughnessTextures = self.vRoughness
123 |
124 | envMaps = self.sh.toEnvMap(self.vShCoeffs)
125 |
126 | assert(envMaps.dim() == 4 and envMaps.shape[-1] == 3)
127 | assert (cameraVerts.dim() == 3 and cameraVerts.shape[-1] == 3)
128 | assert (diffuseTextures.dim() == 4 and diffuseTextures.shape[1] == diffuseTextures.shape[2] == self.morphableModel.getTextureResolution() and diffuseTextures.shape[-1] == 3)
129 | assert (specularTextures.dim() == 4 and specularTextures.shape[1] == specularTextures.shape[2] == self.morphableModel.getTextureResolution() and specularTextures.shape[-1] == 3)
130 | assert (roughnessTextures.dim() == 4 and roughnessTextures.shape[1] == roughnessTextures.shape[2] == self.morphableModel.getTextureResolution() and roughnessTextures.shape[-1] == 1)
131 | assert(cameraVerts.shape[0] == envMaps.shape[0])
132 | assert (diffuseTextures.shape[0] == specularTextures.shape[0] == roughnessTextures.shape[0])
133 |
134 | scenes = self.renderer.buildScenes(cameraVerts, self.faces32, normals, self.uvMap, diffuseTextures,
135 | specularTextures, torch.clamp(roughnessTextures, 1e-20, 10.0), self.vFocals, envMaps)
136 | if renderAlbedo:
137 | images = self.renderer.renderAlbedo(scenes)
138 | else:
139 | images = self.renderer.render(scenes)
140 |
141 | return images
142 |
143 | def landmarkLoss(self, cameraVertices, landmarks, focals, cameraCenters, debugDir = None):
144 | '''
145 | calculate scalar loss between vertices in camera space and 2d landmarks pixels
146 | :param cameraVertices: 3d vertices [n, nVertices, 3]
147 | :param landmarks: 2d corresponding pixels [n, nVertices, 2]
148 | :param focals: camera focals [n]
149 | :param cameraCenters: camera centers [n, 2]
150 | :param debugDir: if not none save landmarks and vertices to an image file
151 | :return: scalar loss (float)
152 | '''
153 | assert (cameraVertices.dim() == 3 and cameraVertices.shape[-1] == 3)
154 | assert (focals.dim() == 1)
155 | assert(cameraCenters.dim() == 2 and cameraCenters.shape[-1] == 2)
156 | assert (landmarks.dim() == 3 and landmarks.shape[-1] == 2)
157 | assert cameraVertices.shape[0] == landmarks.shape[0] == focals.shape[0] == cameraCenters.shape[0]
158 |
159 | headPoints = cameraVertices[:, self.morphableModel.landmarksAssociation]
160 | assert (landmarks.shape[-2] == headPoints.shape[-2])
161 |
162 | projPoints = focals.view(-1, 1, 1) * headPoints[..., :2] / headPoints[..., 2:]
163 | projPoints += cameraCenters.unsqueeze(1)
164 | loss = torch.norm(projPoints - landmarks, 2, dim=-1).pow(2).mean()
165 | if debugDir:
166 | for i in range(projPoints.shape[0]):
167 | image = saveLandmarksVerticesProjections(self.inputImage.tensor[i], projPoints[i], self.landmarks[i])
168 | cv2.imwrite(debugDir + '/lp' + str(i) +'.png', cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
169 |
170 | return loss
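For reference, the projection used in landmarkLoss above is the standard pinhole model: a camera-space point (x, y, z) maps to the pixel (f*x/z + cx, f*y/z + cy). A small self-contained sketch with made-up values:

import torch

f = torch.tensor([3000.0])                        # focal length in pixels, [n]
center = torch.tensor([[128.0, 128.0]])           # image center, [n, 2]
points = torch.tensor([[[10.0, -5.0, 500.0]]])    # one camera-space vertex, [n, 1, 3]

pixels = f.view(-1, 1, 1) * points[..., :2] / points[..., 2:] + center.unsqueeze(1)
# pixels == [[[188., 98.]]]  (3000*10/500 + 128, 3000*(-5)/500 + 128)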
--------------------------------------------------------------------------------
/projection.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import torch
3 | import math
4 | import cv2
5 |
6 | def isRotationMatrix(R):
7 | """
8 | return True if R is a rotation matrix, else False (R . R^T = I and det(R) = 1)
9 | """
10 | if R.ndim != 2 or R.shape[0] != R.shape[1]:
11 | return False
12 | isIdentity = np.allclose(R.dot(R.T), np.identity(R.shape[0], float))  #np.float was removed in recent numpy versions, use the builtin float
13 | isDetEqualToOne = np.allclose(np.linalg.det(R), 1)
14 | return isIdentity and isDetEqualToOne
15 |
16 |
17 | def eulerToRodrigues(angles):
18 | """
19 | convert euler angles to rodrigues
20 | """
21 | rotx = np.array([[1, 0, 0],
22 | [0, math.cos(angles[0]), -math.sin(angles[0])],
23 | [0, math.sin(angles[0]), math.cos(angles[0])]
24 | ])
25 |
26 | roty = np.array([[math.cos(angles[1]), 0, math.sin(angles[1])],
27 | [0, 1, 0],
28 | [-math.sin(angles[1]), 0, math.cos(angles[1])]
29 | ])
30 |
31 | rotz = np.array([[math.cos(angles[2]), -math.sin(angles[2]), 0],
32 | [math.sin(angles[2]), math.cos(angles[2]), 0],
33 | [0, 0, 1]
34 | ])
35 |
36 | R = np.dot(rotz, np.dot(roty, rotx))
37 | rotVec, _ = cv2.Rodrigues(R)
38 | return rotVec
39 |
40 |
41 | def rodrigues2Euler(rotation_vector):
42 | """
43 | retrieve euler angles from rodrigues matrix
44 | """
45 | rMat, _ = cv2.Rodrigues(rotation_vector)
46 | assert (rMat.shape[0] == 3 and rMat.shape[1] == 3 and isRotationMatrix(rMat))
47 | roll = math.atan2(rMat[2, 1], rMat[2, 2])
48 | pitch = math.atan2(-rMat[2, 0], math.sqrt(rMat[0, 0] * rMat[0, 0] + rMat[1, 0] * rMat[1, 0]))
49 | yaw = math.atan2(rMat[1, 0], rMat[0, 0])
50 | return np.array([roll, pitch, yaw])
51 |
52 |
53 | def estimateCameraPosition(focalLength, image_center, landmarks, vertices, rotAngles, translation):
54 | '''
55 | estimate the camera position (rotation and translation) using perspective n points pnp
56 | :param focalLength: tensor representing the camera focal length of shape [n]
57 | :param image_center: tensor representing the camera center point [n, 2]
58 | :param landmarks: tensor representing the 2d landmarks in pixel coordinates system [n, verticesNumber, 2]
59 | :param vertices: tensor representing the 3d coordinate position of the landmarks [n, verticesNumber, 3]
60 | :param rotAngles: the initial rotation angles [n, 3]
61 | :param translation: the initial translation vector [n, 3]
62 | :return: estimated rotation [n, 3] , estimated translations [n, 3]
63 | '''
64 | assert (focalLength.dim() == 1 and
65 | image_center.dim() == 2 and
66 | image_center.shape[-1] == 2 and
67 | landmarks.dim() == 3 and landmarks.shape[-1] == 2 and
68 | vertices.dim() == 3 and vertices.shape[-1] == 3 and
69 | rotAngles.dim() == 2 and rotAngles.shape[-1] == 3 and
70 | translation.dim() == 2 and translation.shape[-1] == 3)
71 | assert (focalLength.shape[0] == image_center.shape[0] == landmarks.shape[0] == vertices.shape[0] == rotAngles.shape[0] == translation.shape[0])
72 | rots = []
73 | transs = []
74 | for i in range(focalLength.shape[0]):
75 | rot, trans = solvePnP(focalLength[i].item(),
76 | image_center[i].detach().cpu().numpy(),
77 | vertices[i],
78 | landmarks[i],
79 | rotAngles[i],
80 | translation[i])
81 | rots.append(rot)
82 | transs.append(trans)
83 | return torch.tensor(rots, device=vertices.device, dtype=torch.float32), torch.tensor(transs, device=vertices.device,
84 | dtype=torch.float32)
85 |
86 |
87 | def solvePnP(focalLength, imageCenter, vertices, pixels, rotAngles, translation):
88 | """
89 | Finds an object pose from 3D vertices <-> 2D pixels correspondences
90 | Inputs:
91 | * focalLength: camera focal length
92 | * imageCenter: center [x, y] of the image
93 | * vertices: float tensor [n, 3], of vertices
94 | * pixels: float tensor [n, 2] of corresponding pixels
95 | * rotAngles: initial euler angles
96 | * translation: initial translation vector
97 | """
98 |
99 | cameraMatrix = np.array(
100 | [[focalLength, 0, imageCenter[0]],
101 | [0, focalLength, imageCenter[1]],
102 | [0, 0, 1]], dtype="double"
103 | )
104 |
105 | success, rotVec, transVec = cv2.solvePnP(vertices.clone().detach().cpu().numpy(),
106 | pixels[:, None].detach().cpu().numpy(),
107 | cameraMatrix,
108 | np.zeros((4, 1)),
109 | eulerToRodrigues(rotAngles.detach().cpu().numpy()),
110 | translation.detach().cpu().numpy(),
111 | True,
112 | flags=cv2.SOLVEPNP_ITERATIVE)
113 | assert success, "failed to estimate the pose using PnP"
114 |
115 | rotAngles = rodrigues2Euler(rotVec)
116 |
117 | if rotAngles[0] < 0.:
118 | rotAngles[0] += 2. * math.pi
119 |
120 | translation = transVec.reshape((3,))
121 | return rotAngles, translation
122 |
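A small round-trip sketch of the two conversion helpers above (angles chosen arbitrarily; the recovered values match because eulerToRodrigues composes Rz·Ry·Rx and rodrigues2Euler inverts that composition):

import numpy as np
from projection import eulerToRodrigues, rodrigues2Euler

angles = np.array([0.1, -0.2, 0.3])     # rotations around x, y, z in radians
rotVec = eulerToRodrigues(angles)       # 3x1 rotation vector, as expected by cv2.solvePnP
recovered = rodrigues2Euler(rotVec)     # ≈ [0.1, -0.2, 0.3]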
--------------------------------------------------------------------------------
/renderer.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import math
3 | import pyredner
4 | import redner
5 | import random
6 |
7 | from pyredner import set_print_timing
8 |
9 |
10 | def rayTrace(scene,
11 | channels,
12 | max_bounces = 1,
13 | sampler_type = pyredner.sampler_type.sobol,
14 | num_samples = 8,
15 | seed = None,
16 | sample_pixel_center = False,
17 | device = None):
18 | if device is None:
19 | device = pyredner.get_device()
20 |
21 | assert(isinstance(scene, list))
22 | if seed == None:
23 | # Randomly generate a list of seed
24 | seed = []
25 | for i in range(len(scene)):
26 | seed.append(random.randint(0, 16777216))
27 | assert(len(seed) == len(scene))
28 | # Render each scene in the batch and stack them together
29 | imgs = []
30 | for sc, se in zip(scene, seed):
31 | scene_args = pyredner.RenderFunction.serialize_scene(\
32 | scene = sc,
33 | num_samples = num_samples,
34 | max_bounces = max_bounces,
35 | sampler_type = sampler_type,
36 | channels = channels,
37 | use_primary_edge_sampling=False,
38 | use_secondary_edge_sampling=False,
39 | sample_pixel_center = sample_pixel_center,
40 | device = device)
41 | imgs.append(pyredner.RenderFunction.apply(se, *scene_args))
42 | imgs = torch.stack(imgs)
43 | return imgs
44 |
45 | def renderPathTracing(scene,
46 | channels= None,
47 | max_bounces = 1,
48 | num_samples = 8,
49 | device = None):
50 | if channels is None:
51 | channels = [redner.channels.radiance]
52 | channels.append(redner.channels.alpha)
53 | #if alpha:
54 | # channels.append(redner.channels.alpha)
55 | return rayTrace(scene=scene,
56 | channels=channels,
57 | max_bounces=max_bounces,
58 | sampler_type=pyredner.sampler_type.independent,
59 | num_samples=num_samples,
60 | seed = None,
61 | sample_pixel_center=False,
62 | device=device)
63 |
64 | class Renderer:
65 |
66 | def __init__(self, samples, bounces, device):
67 | set_print_timing(False) #disable redner logs
68 | self.samples = samples
69 | self.bounces = bounces
70 | self.device = torch.device(device)
71 | self.clip_near = 10.0
72 | self.upVector = torch.tensor([0.0, -1.0, 0.0])
73 | self.counter = 0
74 | self.screenWidth = 256
75 | self.screenHeight = 256
76 |
77 | def setupCamera(self, focal, image_width, image_height):
78 |
79 | fov = torch.tensor([360.0 * math.atan(image_width / (2.0 * focal)) / math.pi]) # calculate camera field of view from image size
80 |
81 | cam = pyredner.Camera(
82 | position = torch.tensor([0.0, 0.0, 0.0]),
83 | look_at = torch.tensor([0.0, 0.0, 1.0]),
84 | up = self.upVector,
85 | fov = fov.cpu(),
86 | clip_near = self.clip_near,
87 | cam_to_world = None ,
88 | resolution = (image_height, image_width))
89 |
90 | return cam
91 |
92 | def buildScenes(self, vertices, indices, normal, uv, diffuse, specular, roughness, focal, envMap):
93 | '''
94 | build multiple pyredner scenes used for path tracing (uv mapping and indices are the same for all scenes)
95 | :param vertices: [n, verticesNumber, 3]
96 | :param indices: [indicesNumber, 3]
97 | :param normal: [n, verticesNumber, 3]
98 | :param uv: [verticesNumber, 2]
99 | :param diffuse: [n, resX, resY, 3] or [1, resX, resY, 3]
100 | :param specular: [n, resX, resY, 3] or [1, resX, resY, 3]
101 | :param roughness: [n, resX, resY, 1] or [1, resX, resY, 1]
102 | :param focal: [n]
103 | :param envMap: [n, resX, resY, 3]
104 | :return: return list of pyredner scenes
105 | '''
106 | assert(vertices.dim() == 3 and vertices.shape[-1] == 3 and normal.dim() == 3 and normal.shape[-1] == 3)
107 | assert (indices.dim() == 2 and indices.shape[-1] == 3)
108 | assert (uv.dim() == 2 and uv.shape[-1] == 2)
109 | assert (diffuse.dim() == 4 and diffuse.shape[-1] == 3 and
110 | specular.dim() == 4 and specular.shape[-1] == 3 and
111 | roughness.dim() == 4 and roughness.shape[-1] == 1)
112 | assert(focal.dim() == 1)
113 | assert(envMap.dim() == 4 and envMap.shape[-1] == 3)
114 | assert(vertices.shape[0] == focal.shape[0] == envMap.shape[0])
115 | assert(diffuse.shape[0] == specular.shape[0] == roughness.shape[0])
116 | assert (diffuse.shape[0] == 1 or diffuse.shape[0] == vertices.shape[0])
117 |         sharedTexture = diffuse.shape[0] == 1
118 |
119 | scenes = []
120 | for i in range(vertices.shape[0]):
121 | texIndex = 0 if sharedTexture else i
122 | mat = pyredner.Material(pyredner.Texture(diffuse[texIndex]),
123 | pyredner.Texture(specular[texIndex]) if specular is not None else None,
124 | pyredner.Texture(roughness[texIndex]) if roughness is not None else None)
125 | obj = pyredner.Object(vertices[i], indices, mat, uvs=uv, normals=normal[i] if normal is not None else None)
126 | cam = self.setupCamera(focal[i], self.screenWidth, self.screenHeight)
127 | scene = pyredner.Scene(cam, materials=[mat], objects=[obj], envmap=pyredner.EnvironmentMap(envMap[i]))
128 | scenes.append(scene)
129 |
130 | return scenes
131 |
132 | def renderAlbedo(self, scenes):
133 | '''
134 | render albedo of given pyredner scenes
135 | :param scenes: list of pyredner scenes
136 | :return: albedo images [n, screenWidth, screenHeight, 4]
137 | '''
138 | #images =pyredner.render_albedo(scenes, alpha = True, num_samples = self.samples, device = self.device)
139 | images = renderPathTracing(scenes,
140 | channels= [pyredner.channels.diffuse_reflectance, pyredner.channels.alpha],
141 | max_bounces = 0,
142 | num_samples = self.samples ,
143 | device = self.device)
144 | return images
145 |
146 | def render(self, scenes):
147 | '''
148 | render scenes with ray tracing
149 | :param scenes: list of pyredner scenes
150 | :return: ray traced images [n, screenWidth, screenHeight, 4]
151 | '''
152 | images = renderPathTracing(scenes,
153 | max_bounces = self.bounces,
154 | num_samples = self.samples ,
155 | device = self.device)
156 | self.counter += 1
157 | return images
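A minimal usage sketch of the `Renderer` above, assuming pyredner is installed. The mesh, textures and environment map below are hypothetical placeholder data (a single triangle and constant textures), not output of NextFace itself; shapes mirror the asserts in `buildScenes`:

```python
import torch

# hypothetical toy inputs: one scene with a single triangle in front of the camera,
# 4x4 constant textures and an 8x8 constant environment map
renderer = Renderer(samples=8, bounces=2, device='cpu')

vertices  = torch.tensor([[[-50.0, -50.0, 500.0],
                           [ 50.0, -50.0, 500.0],
                           [  0.0,  50.0, 500.0]]])            # [1, 3, 3]
indices   = torch.tensor([[0, 1, 2]], dtype=torch.int32)        # [1, 3]
normals   = torch.tensor([[[0.0, 0.0, -1.0]] * 3])              # [1, 3, 3], facing the camera
uv        = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # [3, 2]
diffuse   = torch.full((1, 4, 4, 3), 0.5)                       # shared across all scenes
specular  = torch.full((1, 4, 4, 3), 0.05)
roughness = torch.full((1, 4, 4, 1), 0.5)
focal     = torch.tensor([500.0])                               # [n]
envMap    = torch.ones(1, 8, 8, 3)                              # constant white illumination

scenes  = renderer.buildScenes(vertices, indices, normals, uv,
                               diffuse, specular, roughness, focal, envMap)
images  = renderer.render(scenes)        # ray-traced RGBA images [1, 256, 256, 4]
albedos = renderer.renderAlbedo(scenes)  # diffuse albedo + alpha, no light transport
```

`render` and `renderAlbedo` return images at the Renderer's `screenWidth` x `screenHeight` resolution (256 x 256 by default).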
--------------------------------------------------------------------------------
/replay.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from optimizer import Optimizer
4 | from config import Config
5 | from utils import *
6 | import math
7 | from image import saveImage
8 |
9 | frameIndex = 0
10 | outputDir = './out'
11 | def produce(optimizer):
12 | global frameIndex
13 | images = optimizer.pipeline.render(None, optimizer.vEnhancedDiffuse, optimizer.vEnhancedSpecular, optimizer.vEnhancedRoughness)
14 | for i in range(images.shape[0]):
15 |         fileName = outputDir + '/frame_' + str(i) + ('_%04d.png' % frameIndex)
16 | saveImage(images[i], fileName)
17 |
18 | frameIndex += 1
19 |
20 |
21 | if __name__ == "__main__":
22 |
23 | '''
24 |     this script rotates an existing reconstruction (loaded from a pickle file) around the vertical axis.
25 |     you need ffmpeg installed to produce the final gif or video
26 | '''
27 | import argparse
28 | parser = argparse.ArgumentParser()
29 | parser.add_argument("--input", required=True,
30 | help="path to a pickle file that contains the reconstruction (check optimizer.py)")
31 |
32 | parser.add_argument("--output", required=True,
33 | help="path to where to save the animation sequence.")
34 |
35 | parser.add_argument("--config", required=False, default='./optimConfig.ini',
36 | help="path to the configuration file (used to configure the optimization)")
37 |
38 | params = parser.parse_args()
39 |
40 | config = Config()
41 |
42 | configFile = params.config # './optimConfig.ini'
43 | outputDir = params.output + '/' #'./replay/'
44 | parameters = params.input #'../workspace/exp/checkpoints/stage3_output.pickle'
45 |
46 | mkdir_p(outputDir)
47 | config.fillFromDicFile(configFile)
48 | optimizer = Optimizer(outputDir, config)
49 | optimizer.pipeline.renderer.samples = config.rtSamples
50 | optimizer.loadParameters(parameters)
51 |
52 | DTR = math.pi / 180.0
53 | minBound = -30.0 * DTR # -65
54 | maxBound = 30.0 * DTR # 65
55 | step = 2.85 * DTR # 0.75
56 |
57 | initAngle = optimizer.pipeline.vRotation[..., 1].clone()
58 | currentAngle = initAngle.clone()
59 |
60 | if True:
61 | frameIndex = 0
62 |         print('animating the reconstruction, this may take some time depending on the number of ray tracing samples and your gpu. please wait...')
63 | while currentAngle > minBound:
64 | currentAngle -= step
65 | optimizer.pipeline.vRotation[..., 1] = currentAngle
66 | produce(optimizer)
67 |
68 | while currentAngle < maxBound:
69 | currentAngle += step
70 | optimizer.pipeline.vRotation[..., 1] = currentAngle
71 | produce(optimizer)
72 |
73 | while currentAngle > initAngle:
74 | currentAngle -= step
75 | optimizer.pipeline.vRotation[..., 1] = currentAngle
76 | produce(optimizer)
77 |
78 | optimizer.pipeline.vRotation[..., 1] = initAngle
79 | produce(optimizer)
80 |
81 | import os
82 |
83 | #cmd = "ffmpeg -y -i " + outputDir + "frame_0_%04d.png -vf fps=25 -vcodec png -pix_fmt rgba " + outputDir + "/optimized.mov"
84 | cmd = "ffmpeg -f image2 -framerate 20 -y -i " + outputDir + "frame_0_%04d.png " + outputDir + "/optimized.gif"
85 | os.system(cmd)
86 |
87 |
88 |
89 |
90 |
91 |
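As a usage note (the paths below are only illustrative): once an optimization run has written `checkpoints/stage3_output.pickle` under its output directory, the turntable animation can be produced with something like `python replay.py --input <outputDir>/checkpoints/stage3_output.pickle --output ./replay --config ./optimConfig.ini`. ffmpeg must be available on the PATH, since the script shells out to it at the end to assemble the frames into `optimized.gif`.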
--------------------------------------------------------------------------------
/resources/beard.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/resources/beard.gif
--------------------------------------------------------------------------------
/resources/beard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/resources/beard.png
--------------------------------------------------------------------------------
/resources/emily.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/resources/emily.gif
--------------------------------------------------------------------------------
/resources/emily.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/resources/emily.png
--------------------------------------------------------------------------------
/resources/results1.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/resources/results1.gif
--------------------------------------------------------------------------------
/resources/visual.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abdallahdib/NextFace/07b2b1c7da2021e939e469f82fd9823e3d0ec67c/resources/visual.jpg
--------------------------------------------------------------------------------
/sphericalharmonics.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import math
3 | import numpy as np
4 |
5 | '''
6 | code taken and adapted from pyredner
7 | '''
8 |
9 | # Code adapted from "Spherical Harmonic Lighting: The Gritty Details", Robin Green
10 | # http://silviojemma.com/public/papers/lighting/spherical-harmonic-lighting.pdf
11 | class SphericalHarmonics:
12 | def __init__(self, envMapResolution, device):
13 | self.device = device
14 | self.setEnvironmentMapResolution(envMapResolution)
15 |
16 | def setEnvironmentMapResolution(self, res):
17 | res = (res, res)
18 | self.resolution = res
19 | uv = np.mgrid[0:res[1], 0:res[0]].astype(np.float32)
20 | self.theta = torch.from_numpy((math.pi / res[1]) * (uv[1, :, :] + 0.5)).to(self.device)
21 | self.phi = torch.from_numpy((2 * math.pi / res[0]) * (uv[0, :, :] + 0.5)).to(self.device)
22 |
23 | def smoothSH(self, coeffs, window=6):
24 |         ''' multiply (convolve in the spatial domain) the coefficients with a low-pass filter.
25 | Following the recommendation in https://www.ppsloan.org/publications/shdering.pdf
26 | '''
27 | smoothed_coeffs = torch.zeros_like(coeffs)
28 | smoothed_coeffs[:, 0] += coeffs[:, 0]
29 | smoothed_coeffs[:, 1:1 + 3] += \
30 | coeffs[:, 1:1 + 3] * math.pow(math.sin(math.pi * 1.0 / window) / (math.pi * 1.0 / window), 4.0)
31 | smoothed_coeffs[:, 4:4 + 5] += \
32 | coeffs[:, 4:4 + 5] * math.pow(math.sin(math.pi * 2.0 / window) / (math.pi * 2.0 / window), 4.0)
33 | smoothed_coeffs[:, 9:9 + 7] += \
34 | coeffs[:, 9:9 + 7] * math.pow(math.sin(math.pi * 3.0 / window) / (math.pi * 3.0 / window), 4.0)
35 | return smoothed_coeffs
36 |
37 |
38 | def associatedLegendrePolynomial(self, l, m, x):
39 | pmm = torch.ones_like(x)
40 | if m > 0:
41 | somx2 = torch.sqrt((1 - x) * (1 + x))
42 | fact = 1.0
43 | for i in range(1, m + 1):
44 | pmm = pmm * (-fact) * somx2
45 | fact += 2.0
46 | if l == m:
47 | return pmm
48 | pmmp1 = x * (2.0 * m + 1.0) * pmm
49 | if l == m + 1:
50 | return pmmp1
51 | pll = torch.zeros_like(x)
52 | for ll in range(m + 2, l + 1):
53 | pll = ((2.0 * ll - 1.0) * x * pmmp1 - (ll + m - 1.0) * pmm) / (ll - m)
54 | pmm = pmmp1
55 | pmmp1 = pll
56 | return pll
57 |
58 |
59 | def normlizeSH(self, l, m):
60 | return math.sqrt((2.0 * l + 1.0) * math.factorial(l - m) / \
61 | (4 * math.pi * math.factorial(l + m)))
62 |
63 | def SH(self, l, m, theta, phi):
64 | if m == 0:
65 | return self.normlizeSH(l, m) * self.associatedLegendrePolynomial(l, m, torch.cos(theta))
66 | elif m > 0:
67 | return math.sqrt(2.0) * self.normlizeSH(l, m) * \
68 | torch.cos(m * phi) * self.associatedLegendrePolynomial(l, m, torch.cos(theta))
69 | else:
70 | return math.sqrt(2.0) * self.normlizeSH(l, -m) * \
71 | torch.sin(-m * phi) * self.associatedLegendrePolynomial(l, -m, torch.cos(theta))
72 |
73 | def toEnvMap(self, shCoeffs, smooth = False):
74 | '''
75 | create an environment map from given sh coeffs
76 | :param shCoeffs: float tensor [n, bands * bands, 3]
77 | :param smooth: if True, the first 3 bands are smoothed
78 | :return: environment map tensor [n, resX, resY, 3]
79 | '''
80 | assert(shCoeffs.dim() == 3 and shCoeffs.shape[-1] == 3)
81 | envMaps = torch.zeros( [shCoeffs.shape[0], self.resolution[0], self.resolution[1], 3]).to(shCoeffs.device)
82 | for i in range(shCoeffs.shape[0]):
83 |             envMap = self.constructEnvMapFromSHCoeffs(shCoeffs[i], smooth)
84 | envMaps[i] = envMap
85 | return envMaps
86 | def constructEnvMapFromSHCoeffs(self, shCoeffs, smooth = False):
87 |
88 | assert (isinstance(shCoeffs, torch.Tensor) and shCoeffs.dim() == 2 and shCoeffs.shape[1] == 3)
89 |
90 | if smooth:
91 | smoothed_coeffs = self.smoothSH(shCoeffs.transpose(0, 1), 4)
92 | else:
93 | smoothed_coeffs = shCoeffs.transpose(0, 1) #self.smoothSH(shCoeffs.transpose(0, 1), 4) #smooth the first three bands?
94 |
95 | res = self.resolution
96 |
97 | theta = self.theta
98 | phi = self.phi
99 | result = torch.zeros(res[0], res[1], smoothed_coeffs.shape[0], device=smoothed_coeffs.device)
100 | bands = int(math.sqrt(smoothed_coeffs.shape[1]))
101 | i = 0
102 |
103 | for l in range(bands):
104 | for m in range(-l, l + 1):
105 | sh_factor = self.SH(l, m, theta, phi)
106 | result = result + sh_factor.view(sh_factor.shape[0], sh_factor.shape[1], 1) * smoothed_coeffs[:, i]
107 | i += 1
108 | result = torch.max(result, torch.zeros(res[0], res[1], smoothed_coeffs.shape[0], device=smoothed_coeffs.device))
109 | return result
110 |
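As a worked form of what the class above computes (a sketch that mirrors the loops in `constructEnvMapFromSHCoeffs` and the window in `smoothSH`): each environment-map texel at spherical angles (θ, φ) is the non-negative sum of the real SH basis functions Y_{l,m} weighted by the per-channel coefficients, and smoothing attenuates band l with a sinc⁴ window of width `window` (band 0 is left unchanged):

```latex
% environment map reconstructed from B bands of SH coefficients c_{l,m} (per color channel)
E(\theta, \phi) = \max\!\Big(0,\; \sum_{l=0}^{B-1} \sum_{m=-l}^{l} c_{l,m}\, Y_{l,m}(\theta, \phi)\Big)

% low-pass window applied by smoothSH to bands l = 1, 2, 3
\tilde{c}_{l,m} = c_{l,m} \left(\frac{\sin(\pi l / \mathrm{window})}{\pi l / \mathrm{window}}\right)^{4}
```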
--------------------------------------------------------------------------------
/textureloss.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | class TextureLoss:
4 | def __init__(self, device):
5 | self.device = device
6 |
7 | self.RGB2XYZ = torch.tensor([[41.2390799265959, 35.7584339383878, 18.0480788401834],
8 | [21.2639005871510, 71.5168678767756, 07.2192315360734],
9 | [01.9330818715592, 11.9194779794626, 95.0532152249661]], dtype=torch.float).to(self.device)
10 |
11 | def regTextures(self, vTex, refTex, ws=3., wr=10.0, wc=10., wsm=0.01, wm=0.):
12 | '''
13 | regularize vTex with respect to refTex (more on this here: https://arxiv.org/abs/2101.05356)
14 |         :param vTex: first texture [n, w, h, 3/1]
15 | :param refTex: second texture [n, w, h, 3/1]
16 | :param ws: symmetry regularizer
17 | :param wr: rgb regularizer
18 |         :param wc: consistency regularizer
19 | :param wsm: smoothness regularizer
20 | :param wm: mean regularizer
21 | :return: scalar loss
22 | '''
23 | symReg = (vTex - vTex.flip([2])).abs().mean() # symmetry regularizer on vertical axis
24 | rgbReg = (vTex - refTex).abs().mean() # rgb regularization with respect to reference texture
25 | loss = ws * symReg + wr * rgbReg
26 |
27 | loss += 1000.0 * torch.clamp(-vTex, min=0).mean() # soft penalize < 0
28 | loss += 1000.0 * torch.clamp(vTex - 1.0, min=0).mean() # soft penalize > 1
29 |
30 | loss += wsm * ((vTex[:, 1:] - vTex[:, :-1]).pow(2).sum()) # smooth on y axis
31 | loss += wsm * ((vTex[:, :, 1:] - vTex[:, :, :-1]).pow(2).sum()) # smooth on x axis
32 |
33 | if wc > 0: # regularize in xyz space
34 | refTex_XYZ = torch.matmul(self.RGB2XYZ, refTex[..., None])[..., 0]
35 | refTex_xyz = refTex_XYZ[..., :2] / (1.0 + refTex_XYZ.sum(dim=-1, keepdim=True))
36 | vTex_XYZ = torch.matmul(self.RGB2XYZ, vTex[..., None])[..., 0]
37 | vTex_xyz = vTex_XYZ[..., :2] / (1.0 + torch.clamp(vTex_XYZ, min=0.).sum(dim=-1, keepdim=True))
38 | xy_regularization = (refTex_xyz - vTex_xyz).abs().mean()
39 | loss += wc * xy_regularization
40 |
41 | if wm > 0: # keep close to average (generally for specular map)
42 | loss += wm * ((vTex - vTex.mean(dim=-1, keepdim=True)).pow(2).sum())
43 |
44 | return loss
45 |
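For readability, a worked form of the loss assembled in `regTextures` above (a sketch matching the code, with T = vTex, T_ref = refTex, mean taken over all texels, mean_c the per-texel mean over channels, and xy(·) the chromaticity derived from the RGB to XYZ matrix):

```latex
\mathcal{L}(T) =
    w_s\,\mathrm{mean}\lvert T - \mathrm{flip}(T)\rvert
  + w_r\,\mathrm{mean}\lvert T - T_{\mathrm{ref}}\rvert
  + 1000\,\mathrm{mean}\,\max(-T, 0) + 1000\,\mathrm{mean}\,\max(T - 1, 0)
  + w_{sm} \sum_{i,j}\big[(T_{i+1,j} - T_{i,j})^2 + (T_{i,j+1} - T_{i,j})^2\big]
  + w_c\,\mathrm{mean}\lvert xy(T) - xy(T_{\mathrm{ref}})\rvert
  + w_m \sum\big(T - \mathrm{mean}_{c}(T)\big)^2
```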
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import torch
3 | import cv2
4 | import os
5 |
6 | def saveObj(filename, materialName, vertices, faces, normals = None, tcoords = None, textureFileName = 'texture.png'):
7 | '''
8 | write mesh to an obj file
9 | :param filename: path to where to save the obj file
10 |     :param materialName: name of the material (.mtl) file written next to the obj
11 | :param vertices: float tensor [n, 3]
12 | :param faces: tensor [#triangles, 3]
13 | :param normals: float tensor [n, 3]
14 | :param tcoords: float tensor [n, 2]
15 | :param textureFileName: name of the texture to use with material
16 | :return:
17 | '''
18 | assert(vertices.dim() == 2 and vertices.shape[-1] == 3)
19 | assert (faces.dim() == 2 and faces.shape[-1] == 3)
20 |
21 | if normals is not None:
22 | assert (normals.dim() == 2 and normals.shape[-1] == 3)
23 |
24 | if tcoords is not None:
25 | assert (tcoords.dim() == 2 and tcoords.shape[-1] == 2)
26 |
27 | if torch.is_tensor(vertices):
28 | vertices = vertices.detach().cpu().numpy()
29 | if torch.is_tensor(faces):
30 | faces = faces.detach().cpu().numpy()
31 | if torch.is_tensor(normals):
32 | normals = normals.detach().cpu().numpy()
33 | if torch.is_tensor(tcoords):
34 | tcoords = tcoords.detach().cpu().numpy()
35 |
36 | assert(isinstance(vertices, np.ndarray))
37 | assert (isinstance(faces, np.ndarray))
38 |     assert (normals is None or isinstance(normals, np.ndarray))
39 |     assert (tcoords is None or isinstance(tcoords, np.ndarray))
40 |
41 | #write material
42 | f = open(os.path.dirname(filename) + '/' + materialName, 'w')
43 | f.write('newmtl material0\n')
44 | f.write('map_Kd ' + textureFileName + '\n')
45 | f.close()
46 |
47 | f = open(filename, 'w')
48 | f.write('###########################################################\n')
49 | f.write('# OBJ file generated by faceYard 2021\n')
50 | f.write('#\n')
51 | f.write('# Num Vertices: %d\n' % (vertices.shape[0]))
52 | f.write('# Num Triangles: %d\n' % (faces.shape[0]))
53 | f.write('#\n')
54 | f.write('###########################################################\n')
55 | f.write('\n')
56 | f.write('mtllib ' + materialName + '\n')
57 |
58 | #write vertices
59 | for v in vertices:
60 | f.write('v %f %f %f\n' % (v[0], v[1], v[2]))
61 |
62 | # write the tcoords
63 | if tcoords is not None and tcoords.shape[0] > 0:
64 | for uv in tcoords:
65 | f.write('vt %f %f\n' % (uv[0], uv[1]))
66 |
67 | #write the normals
68 | if normals is not None and normals.shape[0] > 0:
69 | for n in normals:
70 | f.write('vn %f %f %f\n' % (n[0], n[1], n[2]))
71 |
72 | f.write('usemtl material0\n')
73 | #write face indices list
74 | for t in faces:
75 | f.write('f %d/%d/%d %d/%d/%d %d/%d/%d\n' % (t[0] + 1, t[0] + 1,t[0] + 1,
76 | t[1] + 1, t[1] + 1,t[1] + 1,
77 | t[2] + 1, t[2] + 1, t[2] + 1))
78 | f.close()
79 | def saveLandmarksVerticesProjections(imageTensor, projPoints, landmarks):
80 | '''
81 |     for debugging, draw the projected vertices and landmarks on the image
82 |     :param imageTensor: [w, h, 3]
83 |     :param projPoints: [n, 2]
84 |     :param landmarks: [n, 2]
85 |     :return: numpy image [w, h, 3]
86 | '''
87 | assert(imageTensor.dim() == 3 and imageTensor.shape[-1] == 3 )
88 | assert(projPoints.dim() == 2 and projPoints.shape[-1] == 2)
89 | assert(projPoints.shape == landmarks.shape)
90 | image = imageTensor.clone().detach().cpu().numpy() * 255.
91 | landmarkCount = landmarks.shape[0]
92 | for i in range(landmarkCount):
93 | x = landmarks[i, 0]
94 | y = landmarks[i, 1]
95 | cv2.circle(image, (int(x), int(y)), 2, (0, 255, 0), -1)
96 | x = projPoints[i, 0]
97 | y = projPoints[i, 1]
98 | cv2.circle(image, (int(x), int(y)), 2, (0, 0, 255), -1)
99 |
100 | return image
101 |
102 | def mkdir_p(path):
103 | import errno
104 | import os
105 |
106 | try:
107 | os.makedirs(path)
108 | except OSError as exc:
109 | if exc.errno == errno.EEXIST and os.path.isdir(path):
110 | pass
111 | else:
112 | raise
113 | def loadDictionaryFromPickle(picklePath):
114 | import pickle
115 | handle = open(picklePath, 'rb')
116 | assert handle is not None
117 | dic = pickle.load(handle)
118 | handle.close()
119 | return dic
120 | def writeDictionaryToPickle(dic, picklePath):
121 |     import pickle
122 |     handle = open(picklePath, 'wb')
123 |     pickle.dump(dic, handle, pickle.HIGHEST_PROTOCOL)
124 |     handle.close()
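A small usage sketch of the helpers above, with hypothetical toy data (a single triangle) and placeholder output paths:

```python
import torch

# toy mesh: one triangle with per-vertex normals and uv coordinates (placeholder data)
vertices = torch.tensor([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
faces    = torch.tensor([[0, 1, 2]])
normals  = torch.tensor([[0.0, 0.0, 1.0]] * 3)
tcoords  = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

mkdir_p('./debug')
saveObj('./debug/triangle.obj', 'triangle.mtl', vertices, faces,
        normals=normals, tcoords=tcoords, textureFileName='texture.png')

# round-trip a dictionary through pickle
writeDictionaryToPickle({'vertices': vertices, 'faces': faces}, './debug/mesh.pickle')
mesh = loadDictionaryFromPickle('./debug/mesh.pickle')
```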
--------------------------------------------------------------------------------