├── figures
│   ├── 360degree.png
│   ├── 3drepresentation.png
│   ├── anicloth.png
│   ├── arch.png
│   ├── archshow.png
│   ├── bcnet.png
│   ├── bodynet.png
│   ├── deephuman.png
│   ├── deephumanshow.png
│   ├── deepwrinkles.png
│   ├── doublefusion.png
│   ├── fashion3d.png
│   ├── garmentrecovery.png
│   ├── hmr.png
│   ├── monoclothcap.png
│   ├── motionretarget.png
│   ├── multigarment.png
│   ├── multiview.png
│   ├── normalgan.png
│   ├── occupancy.png
│   ├── pamir.png
│   ├── pifu.png
│   ├── pointcloud.png
│   ├── polygonmesh.png
│   ├── portrait.png
│   ├── sdf.png
│   ├── sgm.png
│   ├── smpl.png
│   ├── smplx.png
│   ├── summary.png
│   ├── tailornet.png
│   ├── vibe.png
│   ├── videoavatar.png
│   └── voxel.png
├── LICENSE
└── README.md

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 YingZhangDUT

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# A Survey of 3D Human Research (Body, Pose, Reconstruction, Cloth, Animation)

## Preface

This document briefly surveys research related to 3D digital humans, covering common 3D representations, common 3D human body models, 3D human pose estimation, clothed 3D human reconstruction, 3D garment modeling, and human motion driving.

-----

## Common 3D Representations

In current 3D learning, objects and scenes are represented either **explicitly** or **implicitly**. The mainstream explicit representations are voxels, point clouds, and polygon meshes; the common implicit representations are occupancy functions [1] and signed distance functions (SDFs) [2]. The table below summarizes how each representation works, along with its pros and cons.
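As a concrete illustration of the two implicit representations, here is a minimal numpy sketch that uses an analytic unit sphere in place of a learned network (the function names and the sphere are illustrative assumptions, not taken from any particular paper):

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Signed distance to a sphere centered at the origin:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points, axis=-1) - radius

def occupancy(points, radius=1.0):
    """Occupancy function: 1.0 if the point lies inside the shape, else 0.0."""
    return (sphere_sdf(points, radius) < 0.0).astype(np.float32)

pts = np.array([[0.0, 0.0, 0.0],   # center  -> sdf -1, occupied
                [1.0, 0.0, 0.0],   # surface -> sdf  0, not occupied
                [2.0, 0.0, 0.0]])  # outside -> sdf  1, not occupied
print(sphere_sdf(pts))  # [-1.  0.  1.]
print(occupancy(pts))   # [1. 0. 0.]
```

In learned models the closed-form function above is replaced by a neural network, and the explicit surface is extracted afterwards (e.g. with Marching Cubes) as the zero level set.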
| Representation | Principle | Pros | Cons |
| --- | --- | --- | --- |
| Voxel | A 3D object is represented with regular cubes; the voxel is the smallest unit into which 3D space is divided, analogous to the pixel in a 2D image | Regular structure, easy to feed into networks; handles arbitrary topology | Memory grows cubically with resolution; surfaces are not fine-grained; texture-unfriendly |
| Point cloud | An object is represented as a set of points in 3D space, typically acquired by scanning with LiDAR or a depth camera | Easy to acquire; handles arbitrary topology | No connectivity between points; surfaces are not fine-grained; texture-unfriendly |
| Polygon mesh | An object is represented as a set of vertices and faces, which carries the topology of the surface | Describes 3D geometry with high quality; small memory footprint; texture-friendly | Different object categories require different mesh templates; relatively hard for networks to learn |
| Occupancy function | An object is represented by an occupancy function that tells whether each point in space lies inside the object | Models fine detail, with theoretically unlimited resolution; small memory footprint; relatively easy for networks to learn | Post-processing (e.g. Marching Cubes) is needed to obtain explicit geometry |
| Signed distance function | An object is represented by the signed distance from each point in space to the surface | Models fine detail, with theoretically unlimited resolution; small memory footprint; relatively easy for networks to learn | Post-processing (e.g. Marching Cubes) is needed to obtain explicit geometry |

Illustrations of each representation are in `./figures/` (`voxel.png`, `pointcloud.png`, `polygonmesh.png`, `occupancy.png`, `sdf.png`).

[1] [Occupancy Networks: Learning 3D Reconstruction in Function Space](https://openaccess.thecvf.com/content_CVPR_2019/papers/Mescheder_Occupancy_Networks_Learning_3D_Reconstruction_in_Function_Space_CVPR_2019_paper.pdf). In CVPR, 2019.

[2] [DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation](https://openaccess.thecvf.com/content_CVPR_2019/papers/Park_DeepSDF_Learning_Continuous_Signed_Distance_Functions_for_Shape_Representation_CVPR_2019_paper.pdf). In CVPR, 2019.

-----

## Common 3D Human Body Models

The most widely used parametric human body model is **SMPL** [3], proposed by the Max Planck Institute in Germany. It defines the body template mesh with 6890 vertices and 13776 faces, controls body shape with a 10-dimensional parameter vector, and controls pose with rotation parameters for 24 joints, where each joint's rotation is a 3-dimensional vector describing its rotation about the x, y, and z axes relative to its parent joint. At CVPR 2019 the same institute proposed SMPL-X [4], which uses more vertices to model the body in finer detail and adds parametric control of facial expression and hand pose. These two works provide a standardized, general-purpose parametric human representation that is compatible with industrial 3D software such as Maya and Unity, together with a simple and effective skinning strategy that keeps the body surface free of obvious artifacts as vertices follow the rotating joints. Recent years have also brought improved body models such as SoftSMPL [5], STAR [6], BLSM [7], and GHUM [8].
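The 3-dimensional per-joint rotation vectors mentioned above are axis-angle representations; a rotation matrix is recovered from one via the Rodrigues formula. A minimal numpy sketch, not tied to any particular SMPL implementation:

```python
import numpy as np

def rodrigues(axis_angle):
    """Convert one 3-vector axis-angle rotation (the per-joint pose
    parameterization used by SMPL) into a 3x3 rotation matrix."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:                            # no rotation
        return np.eye(3)
    k = axis_angle / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])          # skew-symmetric cross-product matrix
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A full SMPL pose is 24 such vectors (72 numbers). Rotating 90 degrees
# about the z-axis maps the x-axis onto the y-axis:
R = rodrigues(np.array([0.0, 0.0, np.pi / 2]))
print(R @ np.array([1.0, 0.0, 0.0]))  # ≈ [0, 1, 0]
```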
| Model | Basic representation |
| --- | --- |
| SMPL | Mesh: 6890 vertices, 13776 faces; pose: 24 joints, a 24×3 array of rotation vectors; shape: 10-dimensional vector |
| SMPL-X | Mesh: 10475 vertices, 20908 faces; pose: 54 body joints, 75-dimensional PCA; hands: 24-dimensional PCA; expression: 10-dimensional vector; shape: 10-dimensional vector |

Schematics are in `./figures/smpl.png` and `./figures/smplx.png`.

[3] [SMPL: A Skinned Multi-Person Linear Model](http://files.is.tue.mpg.de/black/papers/SMPL2015.pdf). In SIGGRAPH Asia, 2015.

[4] [Expressive Body Capture: 3D Hands, Face, and Body from a Single Image](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/497/SMPL-X.pdf). In CVPR, 2019.

[5] [SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans](http://dancasas.github.io/projects/SoftSMPL/). In Eurographics, 2020.

[6] [STAR: Sparse Trained Articulated Human Body Regressor](https://star.is.tue.mpg.de/home). In ECCV, 2020.

[7] [BLSM: A Bone-Level Skinned Model of the Human Mesh](https://www.arielai.com/blsm/data/paper.pdf). In ECCV, 2020.

[8] [GHUM & GHUML: Generative 3D Human Shape and Articulated Pose Models](https://openaccess.thecvf.com/content_CVPR_2020/papers/Xu_GHUM__GHUML_Generative_3D_Human_Shape_and_Articulated_Pose_CVPR_2020_paper.pdf). In CVPR (Oral), 2020.

-----

## 3D Human Pose Estimation

3D human pose estimation recovers the shape and pose of people from images, video, or point clouds, and is a fundamental task in 3D human research. It is an important prerequisite for 3D human reconstruction and a key source of motions for human animation. Many current methods estimate the SMPL parameters of the people in the scene. Depending on the input, they can be divided into methods for single images and methods for video. The following summarizes representative work in both settings, with brief descriptions and comments.
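One supervision signal shared by many of these methods is a 2D keypoint reprojection loss under an estimated weak-perspective camera. A minimal numpy sketch with made-up toy numbers (the function and parameter names are illustrative):

```python
import numpy as np

def reprojection_loss(joints_3d, keypoints_2d, scale, trans):
    """Weak-perspective 2D keypoint loss: project the model's 3D joints
    with a per-image scale s and 2D translation t (dropping depth), then
    take the mean squared error against detected 2D keypoints."""
    projected = scale * joints_3d[:, :2] + trans
    return np.mean(np.sum((projected - keypoints_2d) ** 2, axis=-1))

joints = np.array([[0.0, 0.0, 2.0], [1.0, 1.0, 2.0]])  # toy 3D joints
kps = np.array([[5.0, 5.0], [7.0, 7.0]])               # toy 2D detections
loss = reprojection_loss(joints, kps, scale=2.0, trans=np.array([5.0, 5.0]))
print(loss)  # 0.0 — this camera (s=2, t=(5,5)) reprojects the joints exactly
```

In practice the loss is minimized with respect to the SMPL pose, shape, and camera parameters, which is what lets these methods train on images that have only 2D annotations.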
**Single image**

Representative work:

- Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image. In ECCV, 2016.
- End-to-end Recovery of Human Shape and Pose. In CVPR, 2018.
- Learning to Estimate 3D Human Pose and Shape from a Single Color Image. In CVPR, 2018.
- Delving Deep into Hybrid Annotations for 3D Human Recovery in the Wild. In ICCV, 2019.
- SPIN: Learning to Reconstruct 3D Human Pose and Shape via Model-Fitting in the Loop. In ICCV, 2019.
- I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image. In ECCV, 2020.
- Learning 3D Human Shape and Pose from Dense Body Parts. In TPAMI, 2020.
- ExPose: Monocular Expressive Body Regression through Body-Driven Attention. In ECCV, 2020.
- Hierarchical Kinematic Human Mesh Recovery. In ECCV, 2020.
- Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose. In ECCV, 2020.

Main ideas: estimate SMPL parameters under 2D keypoint, adversarial, and silhouette losses; when 3D ground truth is available, add supervision from ground-truth SMPL parameters, meshes, and 3D joints; combine regression-based and optimization-based methods so that each improves the other; go beyond SMPL to the finer-grained SMPL-X, with dedicated handling of the hands and head.

Current challenges: real-world scenes lack ground truth, so useful supervision signals or pseudo ground truth must be produced for training; synthetic data provides ground truth but suffers from a domain gap, so exploiting it effectively for real scenes remains open; many methods still show errors in body depth and at the extremities such as the hands and feet, and remain inaccurate for complex poses.

**Video**

Representative work:

- Learning 3D Human Dynamics from Video. In CVPR, 2019.
- Monocular Total Capture: Posing Face, Body, and Hands in the Wild. In CVPR, 2019.
- Human Mesh Recovery from Monocular Images via a Skeleton-disentangled Representation. In ICCV, 2019.
- VIBE: Video Inference for Human Body Pose and Shape Estimation. In CVPR, 2020.
- PoseNet3D: Learning Temporally Consistent 3D Human Pose via Knowledge Distillation. In CVPR, 2020.
- Appearance Consensus Driven Self-Supervised Human Mesh Recovery. In ECCV, 2020.

Main ideas: on top of per-frame SMPL estimation, add inter-frame continuity and stability constraints; optimize jointly across frames; enforce appearance consistency.

Current challenges: temporal continuity and stability constraints smooth the motion, so no individual frame is very accurate; the estimated results still exhibit floating, jitter, and foot sliding.

-----

## 3D Human Reconstruction

Many recent works address 3D human reconstruction. By the 3D representations above, they can be grouped into voxel-based, mesh-based, and implicit-function-based methods; by input, into methods for single images, multi-view images, and video, each with or without depth; by output, into reconstructions with or without texture, and those that can or cannot be directly animated.
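To make the implicit-function branch concrete, here is a toy numpy sketch of a PIFu-style query: project a 3D point into the image (an orthographic camera is assumed for simplicity), bilinearly sample the pixel-aligned feature, and map feature plus depth to an occupancy value. The one-line "MLP" is a deterministic stand-in for a trained network:

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Sample a (H, W, C) feature map at continuous pixel coords (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * feat[y0, x0] + dx * (1 - dy) * feat[y0, x0 + 1]
            + (1 - dx) * dy * feat[y0 + 1, x0] + dx * dy * feat[y0 + 1, x0 + 1])

def pifu_query(feat, point, mlp):
    """PIFu-style query: project a 3D point onto the image plane,
    sample the pixel-aligned feature there, and let an MLP map
    (feature, depth) to an occupancy value in (0, 1)."""
    x, y, z = point
    f = bilinear_sample(feat, x, y)
    return mlp(np.concatenate([f, [z]]))

# Toy stand-ins: a 4x4 feature map with C=2, and a "one-layer MLP".
feat = np.arange(32, dtype=float).reshape(4, 4, 2)
mlp = lambda v: 1.0 / (1.0 + np.exp(-v.sum()))   # sigmoid over a sum
occ = pifu_query(feat, point=(1.5, 1.5, 0.0), mlp=mlp)
print(0.0 < occ < 1.0)  # True — a valid occupancy probability
```

Evaluating this query at every point of a dense grid and extracting the level set is what produces the final mesh, which is why these methods need a post-processing step before the result can be animated.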
**Single RGB image (with clothing wrinkles, with texture, directly animatable)**

- 360-Degree Textures of People in Clothing from a Single Image. In 3DV, 2019.
- Tex2Shape: Detailed Full Human Body Geometry From a Single Image. In ICCV, 2019.
- ARCH: Animatable Reconstruction of Clothed Humans. In CVPR, 2020.
- 3D Human Avatar Digitization from a Single Image. In VRCAI, 2019.

Clothed-body representation: SMPL + deformation + texture. Idea 1: estimate the 3D pose, sample a partial texture from the image, then use a GAN to generate the complete texture and displacement map. Idea 2: estimate the 3D pose, warp into canonical space, and estimate occupancy with PIFu. Pros: directly animatable; high-quality generated texture. Cons: relies heavily on scanned 3D ground truth for training; needs very accurate pose estimation as a prior; struggles with complex deformations such as long hair and skirts.

**Single RGB image (with clothing wrinkles, with texture, not directly animatable)**

- PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization. In ICCV, 2019.
- PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization. In CVPR, 2020.
- SiCloPe: Silhouette-Based Clothed People. In CVPR, 2019.
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction. In TPAMI, 2020.
- Reconstructing NBA Players. In ECCV, 2020.

Clothed-body representation: occupancy + RGB. Idea: train a network that extracts the image feature at the pixel where a 3D point projects and predicts the point's occupancy and RGB values from that feature and the point's position. Pros: works for arbitrary poses; can model complex appearance such as long hair and skirts. Cons: relies heavily on scanned 3D ground truth for training; requires SMPL registration afterwards before the result can be animated; texture quality is limited.

**Single RGB image (with clothing wrinkles, no texture, not directly animatable)**

- BodyNet: Volumetric Inference of 3D Human Body Shapes. In ECCV, 2018.
- DeepHuman: 3D Human Reconstruction From a Single Image. In ICCV, 2019.

Clothed-body representation: voxel-grid occupancy. Idea: predict for every voxel whether it lies inside the body. Pros: works for arbitrary poses; can model complex appearance such as long hair and skirts. Cons: texture must be estimated separately; low resolution; relies heavily on scanned 3D ground truth; needs SMPL registration before animation.

**Multi-view RGB images (with clothing wrinkles, with texture, not directly animatable)**

- Deep Volumetric Video From Very Sparse Multi-View Performance Capture. In ECCV, 2018.
- PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization. In ICCV, 2019.
- PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization. In CVPR, 2020.

Clothed-body representation: occupancy + RGB. Idea: multi-view PIFu. Pros: multi-view information makes predictions more accurate; works for arbitrary poses; can model complex appearance such as long hair and skirts. Cons: multi-view data is hard to capture; relies heavily on scanned 3D ground truth; needs SMPL registration before animation; texture quality is limited.

**Single RGBD image (with clothing wrinkles, with texture, not directly animatable)**

- NormalGAN: Learning Detailed 3D Human from a Single RGB-D Image. In ECCV, 2020.

Clothed-body representation: 3D point cloud + triangulation. Idea: a GAN generates front-view and back-view depth and color, and triangulation turns them into a mesh. Pros: works for arbitrary poses; can model complex appearance such as long hair and skirts. Cons: relies heavily on scanned 3D ground truth; needs SMPL registration before animation; texture quality is limited.

**RGB video (with clothing wrinkles, with texture, directly animatable)**

- Video Based Reconstruction of 3D People Models. In CVPR, 2018.
- Detailed Human Avatars from Monocular Video. In 3DV, 2018.
- Learning to Reconstruct People in Clothing from a Single RGB Camera. In CVPR, 2019.
- Multi-Garment Net: Learning to Dress 3D People from Images. In ICCV, 2019.

Clothed-body representation: SMPL + deformation + texture. Idea: jointly estimate SMPL+D in a canonical T-pose from multiple frames, then project back to each frame to extract and fuse the texture. Pros: directly animatable; high-quality generated texture; works well in simple scenes. Cons: relies heavily on scanned 3D ground truth; needs fairly accurate pose estimation and human parsing as priors; struggles with complex deformations such as long hair and skirts.

**RGB video (with clothing wrinkles, no texture, not directly animatable)**

- MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video. In 3DV, 2020.

Clothed-body representation: SMPL + deformation. Idea: estimate SMPL parameters per frame and optimize jointly across frames to get a stable shape plus per-frame poses; build parametric deformation models for different garments; enforce consistency of silhouette, clothing segmentation, photometric, and normal cues. Pros: needs no 3D ground truth; models fairly detailed garment deformation. Cons: depends on accurate pose and segmentation estimates; handles only some garment types.

**RGBD video (with clothing, with texture, possibly directly animatable)**

- Robust 3D Self-portraits in Seconds. In CVPR, 2020.
- TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video. In ECCV, 2020.

Clothed-body representation: occupancy + RGB. Idea: an RGBD version of PIFu generates a per-frame prior; the TSDF (truncated signed distance function) is split into an inner model and a surface layer; PIFusion performs double-layer non-rigid tracking; multi-frame joint fine-tuning yields the 3D portrait. Pros: fairly fine modeling; handles large deformations such as long hair and skirts; needs no scanned ground truth. Cons: somewhat complex pipeline; mediocre texture quality.

**Depth video (with clothing wrinkles, no texture, possibly directly animatable)**

- DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor. In CVPR, 2018.

Clothed-body representation: outer layer + inner layer (SMPL). Idea: joint motion tracking, geometric fusion, and volumetric shape-pose optimization. Pros: fairly fine modeling; fast, runs in real time. Cons: no texture.

-----

## 3D Garment Modeling

In 3D human reconstruction, clothing is usually represented by deformations bound to each vertex of the template mesh. This representation cannot capture fine details such as texture and wrinkles, and looks unnatural once the character moves. Recent work therefore combines 3D garment modeling with deep neural networks, aiming to simulate and predict garment deformation accurately and realistically across different shapes and poses.
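The per-vertex deformation representation described above (often written SMPL+D) can be sketched in a few lines of numpy; the array sizes here are toy stand-ins for the real template:

```python
import numpy as np

def clothed_vertices(template, blend_offsets, displacement):
    """SMPL+D-style clothed surface: start from the template mesh, add
    shape/pose-dependent blend offsets, then add per-vertex displacements
    D that encode the clothing layer on top of the minimally clothed body."""
    return template + blend_offsets + displacement

# Toy numbers (a real SMPL template has 6890 x 3 vertices):
template = np.zeros((4, 3))
offsets = np.full((4, 3), 0.1)                 # e.g. from shape blend shapes
disp = np.array([[0.0, 0.02, 0.0]] * 4)        # clothing sticks out 2 cm in y
v = clothed_vertices(template, offsets, disp)
print(v[0])  # [0.1  0.12 0.1 ]
```

Because D is a single offset per template vertex, it cannot represent garments whose topology differs from the body (skirts, open jackets), which is the limitation motivating the garment-specific models below.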
**Physics-Inspired Garment Recovery from a Single-View Image. In TOG, 2018.**

- Idea: garment segmentation + garment feature estimation (size, fabric, wrinkles) + body mesh estimation; joint material-pose optimization together with cloth simulation.
- Pros: well-structured parametric representations of garments and body; incorporates physical, statistical, and geometric priors.
- Cons: garment feature estimation is sensitive to lighting and image quality, and limited by the richness of the garment templates; results must be refined afterwards by joint optimization with cloth simulation.

**DeepWrinkles: Accurate and Realistic Clothing Modeling. In ECCV, 2018.**

- Idea: a statistical model learns the rough garment shape under a given pose and shape, and a GAN generates finer wrinkles.
- Pros: the GAN produces realistic, detailed wrinkles.
- Cons: depends on ground-truth 4D scan sequences; garment registration must be done in advance.

**Multi-Garment Net: Learning to Dress 3D People from Images. In ICCV, 2019.**

- Idea: human parsing segments the garments and predicts their categories; the network estimates garment PCA parameters and detail displacements.
- Pros: a clearly defined pipeline for 3D scan segmentation and garment registration; human parsing yields more accurate garment categories.
- Cons: relies heavily on 3D ground truth for training; the accuracy of the PCA parameterization depends on dataset size.

**Learning-Based Animation of Clothing for Virtual Try-On. In Eurographics, 2019.**

- Idea: cloth simulation generates ground truth for training; the garment template's deformation is learned from shape, and dynamic wrinkles are learned from pose and shape.
- Pros: cloth simulation supplies large amounts of ground truth in arbitrary poses.
- Cons: a large gap to real data; depends on the richness of the garment templates; directly learned deformation is unstable and prone to interpenetration, requiring post-processing.

**TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style. In CVPR, 2020.**

- Idea: split garment deformation into low- and high-frequency parts; a network predicts the rough low-frequency deformation, while multiple style-shape-specific models each predict particular high-frequency deformations that are combined with learned weights.
- Pros: produces fairly detailed wrinkles; releases a synthetic dataset simulating 20 garments over 1782 poses and 9 shapes.
- Cons: results trained across different shapes and styles are overly smooth and not realistic enough.

**BCNet: Learning Body and Cloth Shape from A Single Image. In ECCV, 2020.**

- Idea: from a single image, estimate the SMPL parameters and upper/lower-body garment parameters; two networks separately predict displacements and skinning weights.
- Pros: learning skinning weights for the garment makes its motion more natural; the garment mesh is not bound to the body mesh, so more garment categories can be reconstructed.
- Cons: the upper/lower-body split is unfriendly to dresses and long garments.

**Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images. In ECCV, 2020.**

- Contribution: the Deep Fashion3D dataset with 2000 garments in 10 categories, annotated with point clouds, multi-view images, 3D body pose, and feature lines.
- Idea: single-image 3D garment reconstruction by deforming an adaptable template according to the estimated garment category, body pose, and feature lines.
- Pros: garment category and feature-line estimation provide extra deformation priors; an implicit surface gives finer reconstruction.
- Cons: when the garment differs greatly from the adaptable template, the handle-based Laplacian deformation is hard to optimize.
-----

## Human Motion Driving

Human motion driving aims to make a 3D human move according to motions we prescribe, which raises two questions: where do the motions come from, and how do we drive the body to get satisfactory results?

**Motion acquisition.** Common approaches include manual authoring, physical simulation, video-based estimation, and motion capture; a detailed comparison of their pros and cons is given in [9]. Briefly, manual authoring works for all kinds of targets, such as humans and animals, but is expensive and depends on the taste of professional artists; physical simulation generates motion from physical rules but generally suits only a small class of regular motions; video-based estimation is the cheapest, but current techniques struggle to obtain high-quality, stable motion; motion capture relies on professional equipment to record real actors and yields stable, high-quality motion [10], so it is currently the usual choice when only human motion is needed. There is also research on generating new motions with deep neural networks, such as PFNN [11] and Dancing to Music [12]; on network-based motion interpolation to reduce artists' workload, such as Motion In-Betweening [13] and Motion Inpainting [14]; and on reinforcement learning that teaches a character to perform certain motions [15].

![MotionRetarget](./figures/motionretarget.png)

**Driving a 3D human.** The common pipeline converts motion-capture data into SMPL parameters, then uses SMPL's skeleton structure and skinning strategy to repose the target character. When the precision requirements on character control are modest, directly feeding SMPL parameters covers most needs. In demanding animation scenarios, or when driving other characters with similar skeletons, differences in bone lengths and body proportions mean that simply copying parameters causes problems such as motions not reaching their targets and interpenetration. Hence there is research on motion retargeting across different skeleton structures, e.g. [16, 17, 18].

**Human motion transfer.** It is also worth noting that motion transfer alone does not require explicit 3D modeling of the character: a common strategy is to use a GAN conditioned on 2D/3D pose parameters to generate the motion-transferred target image or video, as in Dense Pose Transfer [19], Everybody Dance Now [20], LWGAN [21], Few-shot vid2vid [22], and TransMoMo [23]. Overall, transfer based on 3D reconstruction generalizes to all kinds of motions with stable appearance, but it is hard to reconstruct appearance geometry such as clothes and hair precisely, and to produce realistic appearance changes while driving, such as swinging garments and flowing hair. GAN-based transfer excels at generating realistic appearance changes, but struggles to synthesize appearance under complex or unseen motions and to keep the generated person's motion and appearance stable.

[9] [3D Human Motion Editing and Synthesis: A Survey](https://www.researchgate.net/publication/264092451_3D_Human_Motion_Editing_and_Synthesis_A_Survey). In CMMM, 2020.

[10] [MoSh: Motion and Shape Capture from Sparse Markers](https://www.youtube.com/watch?v=Uidbr2fQor0). In SIGGRAPH Asia, 2014.

[11] [Phase-Functioned Neural Networks for Character Control](http://theorangeduck.com/page/phase-functioned-neural-networks-character-control). In SIGGRAPH, 2017.

[12] [Dancing to Music](https://github.com/NVlabs/Dancing2Music). In NeurIPS, 2019.

[13] [Robust Motion In-betweening](https://montreal.ubisoft.com/en/robust-motion-in-betweening-2/). In SIGGRAPH, 2020.

[14] [Human Motion Prediction via Spatio-Temporal Inpainting](https://github.com/magnux/MotionGAN). In ICCV, 2019.

[15] [DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills](https://xbpeng.github.io/projects/DeepMimic/index.html). In SIGGRAPH, 2018.

[16] [RigNet: Neural Rigging for Articulated Characters](https://github.com/zhan-xu/RigNet). In SIGGRAPH, 2020.

[17] [Skeleton-Aware Networks for Deep Motion Retargeting](https://deepmotionediting.github.io/retargeting). In SIGGRAPH, 2020.

[18] [Motion Retargetting based on Dilated Convolutions and Skeleton-specific Loss Functions](https://diglib.eg.org/bitstream/handle/10.1111/cgf13947/v39i2pp497-507.pdf). In Eurographics, 2020.

[19] [Dense Pose Transfer](https://openaccess.thecvf.com/content_ECCV_2018/papers/Natalia_Neverova_Two_Stream__ECCV_2018_paper.pdf). In ECCV, 2018.

[20] [Everybody Dance Now](https://carolineec.github.io/everybody_dance_now/). In ICCV, 2019.

[21] [Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis](https://github.com/agermanidis/Liquid-Warping-GAN). In ICCV, 2019.

[22] [Few-shot Video-to-Video Synthesis](https://nvlabs.github.io/few-shot-vid2vid/). In NeurIPS, 2019.

[23] [TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting](https://yzhq97.github.io/transmomo/). In CVPR, 2020.
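The skinning step in the SMPL-based driving pipeline above is linear blend skinning (LBS): each surface vertex follows a weighted blend of its joints' transforms. A minimal numpy sketch, independent of any specific SMPL implementation:

```python
import numpy as np

def linear_blend_skinning(vertices, weights, joint_transforms):
    """Linear blend skinning as used to repose a skinned body model.
    vertices: (V, 3) rest-pose positions; weights: (V, J) skinning weights
    summing to 1 per vertex; joint_transforms: (J, 4, 4) world transforms."""
    V = vertices.shape[0]
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)      # (V, 4)
    # Blend the 4x4 transforms per vertex, then apply each to its vertex.
    blended = np.einsum('vj,jab->vab', weights, joint_transforms)   # (V, 4, 4)
    posed = np.einsum('vab,vb->va', blended, homo)
    return posed[:, :3]

# One vertex fully bound to a single joint that translates by +1 in x:
T = np.eye(4)
T[0, 3] = 1.0
verts = np.array([[0.0, 1.0, 0.0]])
w = np.array([[1.0]])
print(linear_blend_skinning(verts, w, T[None]))  # [[1. 1. 0.]]
```

Blending transforms linearly is what keeps LBS simple and real-time friendly, and also what produces the well-known candy-wrapper artifacts that the improved skinning strategies of SMPL and its successors are designed to mitigate.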
-----

## Summary

This document briefly surveyed research related to 3D humans, covering representation, body models, pose estimation, reconstruction, garment modeling, and animation, and touching on the corresponding sub-fields: parametric human models, human pose estimation, human reconstruction, garment modeling, and motion synthesis and driving. From a deep-learning perspective, the main challenge across these directions is the lack of 3D ground-truth data: 3D capture is still restricted to specific environments and equipment and remains expensive, while annotation requires professional 3D and CG expertise. Learning from simulated data and transferring to real scenes via self-supervised or unsupervised algorithms is therefore an active direction. The ultimate goal of these technologies is to recreate realistic humans in the virtual world; beyond appearance and motion, this also involves speech, voice, expression, and interaction. Finally, rendering is another key technology for 3D digital humans, and improving its realism and real-time performance matters greatly for the field.
--------------------------------------------------------------------------------