├── images
│   └── example.png
└── README.md

/images/example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TACJu/PartImageNet/HEAD/images/example.png
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# [PartImageNet: A Large, High-Quality Dataset of Parts](https://arxiv.org/abs/2112.00933)

## UPDATE
We now offer two splits for the PartImageNet dataset.
- [PartImageNet_Seg](https://huggingface.co/datasets/turkeyju/PartImageNet/blob/main/PartImageNet_Seg.zip), which is used for the Semantic Part Segmentation and Object Segmentation experiments in the paper (Sec 4.1, 4.2). In this split, the train/val/test sets share the same classes but contain different images, as in standard datasets. It is also used in [Compositor: Bottom-up Clustering and Compositing for Robust Part and Object Segmentation](https://arxiv.org/abs/2306.07404).
- [PartImageNet_OOD](https://huggingface.co/datasets/turkeyju/PartImageNet/blob/main/PartImageNet_OOD.zip), which is used in the Few-shot Learning experiments in the paper (Sec 4.3). In this split, the train/val/test sets have different classes, making it suitable for research on out-of-distribution and few-shot learning problems. (***This is also the originally released split.***)

## The dataset is ready!
Our annotations strictly follow the COCO style, so it should be easy to use the [cocoapi](https://github.com/cocodataset/cocoapi) to visualize the images and annotations (see the loading sketch at the end of this README).

If you find our work helpful in your research, please cite it as:

```
@article{he2021partimagenet,
  title={PartImageNet: A Large, High-Quality Dataset of Parts},
  author={He, Ju and Yang, Shuo and Yang, Shaokang and Kortylewski, Adam and Yuan, Xiaoding and Chen, Jie-Neng and Liu, Shuai and Yang, Cheng and Yuille, Alan},
  journal={arXiv preprint arXiv:2112.00933},
  year={2021}
}
```

## Introduction

PartImageNet is a large, high-quality dataset with part segmentation annotations. It consists of 158 classes from ImageNet with approximately 24,000 images. The classes are grouped into 11 super-categories, and the part splits are designed according to the super-category, as shown below. The number in brackets after each category name indicates the total number of classes in that category.

| Category | Annotated Parts |
|:---:|:---:|
| Quadruped (46) | Head, Body, Foot, Tail |
| Biped (17) | Head, Body, Hand, Foot, Tail |
| Fish (10) | Head, Body, Fin, Tail |
| Bird (14) | Head, Body, Wing, Foot, Tail |
| Snake (15) | Head, Body |
| Reptile (20) | Head, Body, Foot, Tail |
| Car (19) | Body, Tire, Side Mirror |
| Bicycle (6) | Head, Body, Seat, Tire |
| Boat (4) | Body, Sail |
| Aeroplane (2) | Head, Body, Wing, Engine, Tail |
| Bottle (5) | Body, Mouth |

The statistics of the train/val/test split are shown below.
| Split | Number of classes | Number of images |
|:---:|:---:|:---:|
| Train | 109 | 16540 |
| Val | 19 | 2957 |
| Test | 30 | 4598 |
| Total | 158 | 24095 |

For more detailed statistics, please check out our paper.

## Possible Usage

PartImageNet has broad potential and can benefit numerous research fields; in the paper we simply explore its usage in Part Discovery, Few-shot Learning, and Semantic Segmentation. We hope that the release of PartImageNet will attract more attention to part-based models and yield more interesting work.

## Example Figures

![](./images/example.png)
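## Loading the Annotations

Since the annotations follow the COCO format, a minimal sketch along the following lines should be enough to load a split and overlay its part masks on an image. The two paths below (the annotation JSON and the image folder) are assumptions about your local copy, not fixed names from the dataset; adjust them to the split you downloaded.

```python
# Minimal sketch: load PartImageNet part annotations with pycocotools and
# overlay them on a single image. Assumes a COCO-style annotation JSON;
# both paths below are placeholders, not names shipped with the dataset.
import matplotlib.pyplot as plt
from PIL import Image
from pycocotools.coco import COCO

ann_file = "PartImageNet/annotations/train.json"  # assumed path
img_dir = "PartImageNet/images/train"             # assumed path

coco = COCO(ann_file)

# Pick an arbitrary image and fetch all of its part annotations.
img_id = coco.getImgIds()[0]
img_info = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

plt.imshow(Image.open(f"{img_dir}/{img_info['file_name']}"))
coco.showAnns(anns)  # draws the segmentation mask for each annotated part
plt.axis("off")
plt.show()
```

The category and part names can be inspected with `coco.loadCats(coco.getCatIds())`, exactly as with any other COCO-style dataset.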