AIST++ construction pipeline overview.
The annotations in AIST++ use the [COCO format](https://cocodataset.org/#home) for 2D & 3D keypoints, and
the [SMPL format](https://smpl.is.tue.mpg.de/) for human motion annotations. It is designed to serve general
research purposes. However, in some cases you might need the data in a different format
(e.g., the [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) /
[AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) keypoint format, or the [STAR](https://star.is.tue.mpg.de/) human motion
format). **With the code we provide, it should be easy to construct your own
version of AIST++, with your own keypoint detector or human model definition.**
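As one concrete illustration of such a format conversion, here is a minimal sketch of remapping 17-joint COCO-order keypoints to OpenPose's 18-joint COCO layout. This helper is not part of the AIST++ codebase: the function name, the index map, and the assumption that keypoints arrive as an `(17, 3)` array of `[x, y, score]` rows are ours; OpenPose's extra Neck joint is synthesized as the shoulder midpoint, a common convention.

```python
import numpy as np

# Index map from OpenPose's 18-keypoint COCO layout back into the
# 17-keypoint COCO order; -1 marks the Neck joint, which plain COCO
# lacks and which is synthesized from the shoulders below.
COCO17_TO_OPENPOSE18 = [0, -1, 6, 8, 10, 5, 7, 9,
                        12, 14, 16, 11, 13, 15, 2, 1, 4, 3]

def coco17_to_openpose18(kpts):
    """Convert (17, 3) COCO keypoints [x, y, score] to OpenPose's 18-joint order."""
    kpts = np.asarray(kpts, dtype=np.float32)
    out = np.zeros((18, kpts.shape[1]), dtype=np.float32)
    for op_idx, coco_idx in enumerate(COCO17_TO_OPENPOSE18):
        if coco_idx >= 0:
            out[op_idx] = kpts[coco_idx]
        else:
            # Neck: midpoint of left (5) and right (6) shoulders;
            # its score is the more pessimistic of the two.
            out[op_idx, :2] = 0.5 * (kpts[5, :2] + kpts[6, :2])
            out[op_idx, 2:] = np.minimum(kpts[5, 2:], kpts[6, 2:])
    return out
```

The same pattern (an explicit index map plus a rule for joints the target skeleton adds) generalizes to other layouts such as AlphaPose's or OpenPose's BODY_25.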

**Step 1.** Assume you have your own 2D keypoint detection results stored in `