# SSL for Image Representation
This repository is ```SSL for Image Representation```, one of PseudoLab's OpenLab study groups.

[Introduce page](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4)
Every Monday at 10 pm, join us in Room YL on the [PseudoLab Discord](https://discord.gg/sDgnqYWA3G)!

## Contributors
- _Wongi Park_ | [Github](https://github.com/kalelpark) | [LinkedIn](https://www.linkedin.com/in/wongipark/) |
- _Jaehyeong Chun_ | [Github](https://github.com/jaehyeongchun) | [LinkedIn](https://www.linkedin.com/in/jaehyeong-chun-95971b161/) |
- _Dongryeol Lee_ | [Github](https://github.com/ryol8888) | [LinkedIn](https://www.linkedin.com/in/dong-ryeol-lee-110302197/) |
- _Haemun Kim_


| idx | Date | Presenter | Review or Resource (Youtube) | Paper / Code |
|----:|:-----------|:----------|:-----------------|:------------|
| 1 | 2023.03.20 | _Wongi Park_ | OT | OT |
| 2 | 2023.03.27 | _Wongi Park_ | [Youtube](https://youtu.be/7xUZA9X78x0) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [Spark (ICLR 2023)](https://arxiv.org/abs/2301.03580) / [CODE](https://github.com/keyu-tian/SparK) |
| 3 | 2023.04.03 | _Jaehyeong Chun_ | [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [VICReg (ICLR 2022)](https://arxiv.org/abs/2105.04906) / [CODE](https://github.com/facebookresearch/vicreg) |
| 4 | 2023.04.10 | _Haemun Kim_ | [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [SSOD (ArXiv 2023)](https://arxiv.org/abs/2302.07577) |
| 5 | 2023.04.17 | _Wongi Park_ | [Youtube](https://youtu.be/Ic8GYtwjSuw) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [MixMAE (CVPR 2023)](https://arxiv.org/abs/2205.13137) |
| 6 | 2023.04.24 | _Jaehyeong Chun_ | [Youtube](https://youtu.be/cDqLLhwzbzI) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [DINO (ICCV 2021)](https://arxiv.org/abs/2104.14294) / [CODE](https://github.com/facebookresearch/dino) |
| 7 | 2023.05.01 | _Haemun Kim_ | [Youtube](https://youtu.be/7a6ZFyQhuZs) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [UPL (CVPR 2022)](https://arxiv.org/abs/2203.03884) / [CODE](https://haochen-wang409.github.io/U2PL/) |
| 8 | 2023.05.08 | _Dongryeol Lee_ | [Youtube](https://youtu.be/h8ApVVcLJW8) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [RC-MAE (ICLR 2023)](https://arxiv.org/abs/2210.02077) / [CODE](https://github.com/youngwanLEE/rc-mae) |
| 9 | 2023.05.22 | _Wongi Park_ | [Youtube](https://youtu.be/k7oX2m0T7OU) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [iTPN (CVPR 2023)](https://arxiv.org/pdf/2211.12735.pdf) / [CODE](https://github.com/sunsmarterjie/iTPN) |
| 10 | 2023.05.29 | _Jaehyeong Chun_ | [Youtube](https://youtu.be/eSlWQin30xY) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [iBOT (ICLR 2022)](https://arxiv.org/abs/2111.07832) / [CODE](https://github.com/bytedance/ibot) |
| 11 | 2023.06.05 | _Haemun Kim_ | [Youtube](https://youtu.be/oPiz_gGTJWM) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [ARSL (CVPR 2023)](https://arxiv.org/abs/2303.14960) / [CODE](https://github.com/PaddlePaddle/PaddleDetection) |
| 12 | 2023.06.12 | _Dongryeol Lee_ | [Youtube](https://youtu.be/aymt6MlpDe4) / [Resource](https://www.notion.so/chanrankim/SSL-for-Image-Representation-0574c45b4674428b94149c41cd724f30?pvs=4) | [(NIPS 2022)](https://arxiv.org/abs/2305.15614) |
| 13 | 2023.08.28 | _Wongi Park_ | OT | OT |
| 14 | 2023.09.04 | _Wongi Park_ | [Youtube](----) / [Resource](----) | [CDS (ICCV 2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Kim_CDS_Cross-Domain_Self-Supervised_Pre-Training_ICCV_2021_paper.pdf) / [CODE](https://github.com/VisionLearningGroup/CDS) |


## Table of Contents
- [Survey and Analysis](#survey-and-analysis)
- [Contrastive & Distillation Learning](#contrastive--distillation-learning)
- [Masked Auto Encoder](#masked-auto-encoder)
- [Image Transformation](#image-transformation)
- [Vision Language Model](#vision-language-model)
- [Domain Generalization](#domain-generalization)
- [Anomaly Detection](#anomaly-detection)
- [Multi-task learning](#multi-task-learning)
- [Few-shot learning](#few-shot-learning)
- [Clustering](#clustering)
- [Blog and Resource](#blog-and-resource)
- [Dataset](#dataset)


### Survey and Analysis
- **[ Analysis ]** Unsupervised Deep Embedding for Clustering Analysis. **(ICML 2016)** [[Paper](https://arxiv.org/pdf/1511.06335.pdf)] [[CODE](https://github.com/piiswrong/dec)]
- **[ Analysis ]** Revisiting Self-Supervised Visual Representation Learning **(CVPR 2019)** [[Paper](https://arxiv.org/abs/1901.09005)] [[CODE](https://github.com/google/revisiting-self-supervised)]
- **[ Analysis ]** What Makes for Good Views for Contrastive Learning? **(NIPS 2020)** [[Paper](https://arxiv.org/abs/2005.10243)]
- **[ Analysis ]** A critical analysis of self-supervision, or what we can learn from a single image **(ICLR 2020)** [[Paper](https://openreview.net/forum?id=B1esx6EYvr)]
- **[ Analysis ]** How Useful is Self-Supervised Pretraining for Visual Tasks? **(CVPR 2020)** [[Paper](https://arxiv.org/abs/2003.14323)] [[CODE](https://github.com/princeton-vl/selfstudy-render)]
- **[ Analysis ]** How Well Do Self-Supervised Models Transfer?
**(CVPR 2021)** [[Paper](https://arxiv.org/abs/2011.13377)]
- **[ Analysis ]** Understanding Dimensional Collapse in Contrastive Self-supervised Learning **(ICLR 2022)** [[Paper](https://arxiv.org/pdf/2110.09348.pdf)]
- **[ Analysis ]** Revealing the Dark Secrets of Masked Image Modeling **(CVPR 2023)** [[Paper](https://arxiv.org/pdf/2205.13543.pdf)]
- **[ Analysis ]** What do Self-Supervised Vision Transformers Learn? **(ICLR 2023)** [[Paper](https://arxiv.org/pdf/2305.00729.pdf)]


### Contrastive & Distillation Learning
- **[ TraS ]** Transitive Invariance for Self-supervised Visual Representation Learning. **(ICCV 2017)** [[Paper](https://arxiv.org/pdf/1708.02901.pdf)]
- **[ NonID ]** Unsupervised Feature Learning via Non-Parametric Instance Discrimination. **(CVPR 2018)** [[Paper](https://openaccess.thecvf.com/content_cvpr_2018/CameraReady/0801.pdf)] [[CODE](https://github.com/zhirongw/lemniscate.pytorch)]
- **[ MoCo ]** Momentum Contrast for Unsupervised Visual Representation Learning **(CVPR 2020)** [[Paper](https://arxiv.org/abs/1911.05722)] [[CODE](https://github.com/facebookresearch/moco)]
- **[ MoCoV2 ]** Improved Baselines with Momentum Contrastive Learning **(ArXiv 2020)** [[Paper](https://arxiv.org/abs/2003.04297)] [[CODE](https://github.com/facebookresearch/moco)]
- **[ SimCLR ]** A Simple Framework for Contrastive Learning of Visual Representations **(ICML 2020)** [[Paper](https://arxiv.org/abs/2002.05709)] [[CODE](https://github.com/google-research/simclr)]
- **[ SimCLRv2 ]** Big Self-Supervised Models are Strong Semi-Supervised Learners **(NIPS 2020)** [[Paper](https://arxiv.org/abs/2006.10029)] [[CODE](https://github.com/google-research/simclr)]
- **[ SwAV ]** Unsupervised Learning of Visual Features by Contrasting Cluster Assignments **(NIPS 2020)** [[Paper](https://arxiv.org/abs/2006.09882)] [[CODE](https://github.com/facebookresearch/swav)]
- **[ Reasoning ]** Self-Supervised Relational Reasoning for Representation Learning **(NIPS 2020)** [[Paper](https://arxiv.org/pdf/2006.05849.pdf)] [[CODE](https://github.com/mpatacchiola/self-supervised-relational-reasoning)]
- **[ PIRL ]** Self-Supervised Learning of Pretext-Invariant Representations **(CVPR 2020)** [[Paper](https://arxiv.org/abs/1912.01991)] [[CODE](https://github.com/akwasigroch/Pretext-Invariant-Representations)]
- **[ SEED ]** SEED: Self-supervised Distillation For Visual Representation **(ICLR 2021)** [[Paper](https://openreview.net/forum?id=AHm3dbp7D1D)] [[CODE](https://github.com/jacobswan1/SEED)]
- **[ SimSiam ]** Exploring Simple Siamese Representation Learning. **(CVPR 2021)** [[Paper](https://arxiv.org/abs/2011.10566)] [[CODE](https://github.com/PatrickHua/SimSiam)]
- **[ PixPro ]** Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning. **(CVPR 2021)** [[Paper](https://arxiv.org/abs/2011.10043)] [[CODE](https://github.com/zdaxie/PixPro)]
- **[ BYOL ]** Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning **(NIPS 2020)** [[Paper](https://arxiv.org/abs/2006.07733)] [[CODE](https://github.com/deepmind/deepmind-research/tree/master/byol)]
- **[ RoCo ]** Robust Contrastive Learning Using Negative Samples with Diminished Semantics **(NIPS 2021)** [[Paper](https://arxiv.org/abs/2110.14189)] [[CODE](https://github.com/SongweiGe/Contrastive-Learning-with-Non-Semantic-Negatives)]
- **[ ImCo ]** Improving Contrastive Learning by Visualizing Feature Transformation **(ICCV 2021)** [[Paper](https://arxiv.org/abs/2108.02982)] [[CODE](https://github.com/DTennant/CL-Visualizing-Feature-Transformation)]
- **[ DINO ]** Emerging Properties in Self-Supervised Vision Transformers **(ICCV 2021)** [[Paper](https://arxiv.org/abs/2104.14294)] [[CODE](https://github.com/facebookresearch/dino)]
- **[ Barlow Twins ]** Barlow Twins: Self-Supervised Learning via Redundancy Reduction **(ICML 2021)** [[Paper](https://arxiv.org/abs/2103.03230)] [[CODE](https://github.com/facebookresearch/barlowtwins)]
- **[ VICReg ]** VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning **(ICLR 2022)** [[Paper](https://arxiv.org/abs/2105.04906)] [[CODE](https://github.com/facebookresearch/vicreg)]
- **[ E-SSL ]** E-SSL: Equivariant Contrastive Learning **(ICLR 2022)** [[Paper](https://openreview.net/pdf?id=gKLAAfiytI)] [[CODE](https://github.com/rdangovs/essl)]
- **[ TriBYOL ]** TriBYOL: Triplet BYOL for Self-Supervised Representation Learning **(ICASSP 2022)** [[Paper](https://arxiv.org/abs/2206.03012)]
- **[ DINOv2 ]** DINOv2: Learning Robust Visual Features without Supervision **(ArXiv 2023)** [[Paper](https://arxiv.org/abs/2304.07193)] [[CODE](https://github.com/facebookresearch/dinov2)]
- **[ AVT ]** AVT: Unsupervised Learning of Transformation Equivariant Representations by Autoencoding Variational Transformations **(ICCV 2019)** [[Paper](https://arxiv.org/abs/1903.10863)] [[CODE](https://github.com/maple-research-lab/AVT-pytorch)]
- **[ MoCHI ]** MoCHI: Hard Negative Mixing for Contrastive Learning **(NIPS 2020)** [[Paper](https://arxiv.org/pdf/2010.01028.pdf)] [[CODE](https://europe.naverlabs.com/mochi)]
- **[ SMDistill ]** Unsupervised Representation Transfer for Small Networks: I Believe I Can Distill On-the-Fly **(NIPS 2021)** [[Paper](https://proceedings.neurips.cc/paper/2021/file/cecd845e3577efdaaf24eea03af4c033-Paper.pdf)]
- **[ BURN ]** BURN: Unsupervised Representation Learning for Binary Networks by Joint Classifier Training **(CVPR 2022)** [[Paper](https://arxiv.org/abs/2110.08851)] [[CODE](https://github.com/naver-ai/burn)]
- **[ DenseCL ]** Dense Contrastive Learning for Self-Supervised Visual Pre-Training **(CVPR 2021)** [[Paper](https://arxiv.org/abs/2011.09157)] [[CODE](https://github.com/WXinlong/DenseCL)]
- **[ RINCE ]** Robust Contrastive Learning against Noisy Views **(CVPR 2022)** [[Paper](https://arxiv.org/abs/2201.04309)] [[CODE](https://github.com/chingyaoc/RINCE)]


### Masked Auto Encoder
- **[ MAE ]** Masked Autoencoders Are Scalable Vision Learners **(CVPR 2022)** [[Paper](https://arxiv.org/abs/2111.06377)] [[CODE](https://github.com/facebookresearch/mae)]
- **[ MST ]** MST: Masked Self-Supervised Transformer for Visual Representation **(NIPS 2021)** [[Paper](https://arxiv.org/abs/2106.05656)]
- **[ SimMIM ]** SimMIM: A Simple Framework for Masked Image Modeling **(CVPR 2022)** [[Paper](https://arxiv.org/abs/2111.09886)] [[CODE](https://github.com/microsoft/SimMIM)]
- **[ Adios ]** Adversarial Masking for Self-Supervised Learning **(ICML 2022)** [[Paper](https://arxiv.org/pdf/2201.13100.pdf)] [[CODE](https://github.com/YugeTen/adios)]
- **[ iBOT ]** iBOT 🤖: Image BERT Pre-Training with Online Tokenizer **(ICLR 2022)** [[Paper](https://arxiv.org/abs/2111.07832)] [[CODE](https://github.com/bytedance/ibot)]
- **[ BEiT ]** BEiT: BERT Pre-Training of Image Transformers **(ICLR 2022)** [[Paper](https://arxiv.org/abs/2106.08254)] [[CODE](https://github.com/microsoft/unilm/tree/master/beit)]
- **[ DMAE ]** Denoising Masked AutoEncoders Help Robust Classification **(ICLR 2023)** [[Paper](https://arxiv.org/abs/2210.06983)] [[CODE](https://github.com/quanlin-wu/dmae)]
- **[ AttnMask ]** What to Hide from Your Students: Attention-Guided Masked Image Modeling **(ECCV 2022)** [[Paper](https://arxiv.org/abs/2203.12719)] [[CODE](https://github.com/gkakogeorgiou/attmask)]
- **[ SparK ]** Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling **(ICLR 2023)** [[Paper](https://arxiv.org/abs/2301.03580)] [[CODE](https://github.com/keyu-tian/SparK)]
- **[ CIM ]** Corrupted
Image Modeling for Self-Supervised Visual Pre-Training **(ICLR 2023)** [[Paper](https://openreview.net/forum?id=09hVcSDkea)]
- **[ MixAE ]** Mixed Autoencoder for Self-supervised Visual Representation Learning **(CVPR 2023)** [[Paper](https://arxiv.org/abs/2303.17152)]
- **[ MixMIM ]** MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers **(CVPR 2023)** [[Paper](https://arxiv.org/abs/2205.13137)] [[CODE](https://github.com/Sense-X/MixMIM)]
- **[ DropMAE ]** DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks **(CVPR 2023)** [[Paper](https://arxiv.org/abs/2304.00571)] [[CODE](https://github.com/jimmy-dq/DropMAE)]
- **[ iTPN ]** Integrally Pre-Trained Transformer Pyramid Networks. **(CVPR 2023)** [[Paper](https://arxiv.org/pdf/2211.12735.pdf)] [[CODE](https://github.com/sunsmarterjie/iTPN)]
- **[ ConMIM ]** Masked Image Modeling with Denoising Contrast. **(ICLR 2023)** [[Paper](https://arxiv.org/abs/2205.09616)] [[CODE](https://github.com/TencentARC/ConMIM)]
- **[ MultiMAE ]** MultiMAE: Multi-modal Multi-task Masked Autoencoders. **(ECCV 2022)** [[Paper](https://multimae.epfl.ch/)] [[CODE](https://github.com/EPFL-VILAB/MultiMAE)]
- **[ TinyMIM ]** TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models. **(CVPR 2023)** [[Paper](https://arxiv.org/abs/2301.01296)] [[CODE](https://github.com/OliverRensu/TinyMIM)]


### Image Transformation
- **[ JigsawNet ]** Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. **(ECCV 2016)** [[Paper](http://arxiv.org/abs/1603.09246)] [[CODE](http://www.cvg.unibe.ch/research/JigsawPuzzleSolver.html)]
- **[ Colorful ]** Colorful Image Colorization. **(ECCV 2016)** [[Paper](https://arxiv.org/abs/1603.08511)] [[CODE](http://richzhang.github.io/colorization/)]
- **[ Colorfulv2 ]** Colorization as a Proxy Task for Visual Understanding. **(CVPR 2017)** [[Paper](http://arxiv.org/abs/1703.04044)] [[CODE](http://people.cs.uchicago.edu/~larsson/color-proxy/)]
- **[ DeepPermNet ]** DeepPermNet: Visual Permutation Learning. **(CVPR 2017)** [[Paper](https://arxiv.org/pdf/1704.02729.pdf)] [[CODE](https://github.com/rfsantacruz/deep-perm-net)]
- **[ NAT ]** Unsupervised Learning by Predicting Noise. **(ICML 2017)** [[Paper](https://arxiv.org/abs/1704.05310)] [[CODE](https://github.com/facebookresearch/noise-as-targets)]
- **[ OPN ]** Unsupervised Representation Learning by Sorting Sequences. **(ICCV 2017)** [[Paper](https://arxiv.org/pdf/1708.01246.pdf)] [[CODE](https://github.com/HsinYingLee/OPN)]
- **[ Damaged JigsawNet ]** Learning Image Representations by Completing Damaged Jigsaw Puzzles. **(WACV 2018)** [[Paper](https://arxiv.org/pdf/1802.01880.pdf)] [[CODE](https://github.com/MehdiNoroozi/JigsawPuzzleSolver)]
- **[ Rotation ]** Unsupervised Representation Learning by Predicting Image Rotations. **(ICLR 2018)** [[Paper](https://openreview.net/forum?id=S1v4N2l0-)] [[CODE](https://github.com/gidariss/FeatureLearningRotNet)]

### Vision Language Model
- **[ SINC ]** SINC: Self-Supervised In-Context Learning for Vision-Language Tasks **(ICCV 2023)** [[Paper](https://arxiv.org/abs/2307.07742v2)]

### Domain Generalization
- **[ CDS ]** CDS: Cross-Domain Self-supervised Pre-training **(ICCV 2021)** [[Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Kim_CDS_Cross-Domain_Self-Supervised_Pre-Training_ICCV_2021_paper.pdf)] [[CODE](https://github.com/VisionLearningGroup/CDS)]
- **[ Deja Vu ]** Deja Vu: Continual Model Generalization for Unseen Domains **(ICLR 2023)** [[Paper](https://arxiv.org/pdf/2301.10418.pdf)] [[CODE](https://github.com/SonyResearch/RaTP)]
- **[ FlexPredict ]** Predicting masked tokens in stochastic locations improves masked image modeling **(ArXiv 2023)** [[Paper](https://arxiv.org/pdf/2308.00566.pdf)]


### Anomaly Detection
- **[ CutPaste ]** CutPaste: Self-Supervised Learning for Anomaly Detection and Localization **(CVPR 2021)** [[Paper](https://arxiv.org/abs/2104.04015)] [[CODE](https://github.com/LilitYolyan/CutPaste)]
- **[ SPot ]** SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation **(ECCV 2022)** [[Paper](https://arxiv.org/abs/2207.14315)] [[CODE](https://github.com/amazon-science/spot-diff)]


### Multi-task learning
- **[ MuST ]** Multi-Task Self-Training for Learning General Representations **(ICCV 2021)** [[Paper](https://arxiv.org/abs/2108.11353)]
- **[ SMART ]** SMART: Self-supervised Multi-task pretrAining with contRol Transformers **(ICLR 2023)** [[Paper](https://arxiv.org/pdf/2301.09816.pdf)] [[CODE](https://github.com/microsoft/smart)]


### Few-shot learning
- **[ Few-shot ]** When Does Self-supervision Improve Few-shot Learning? **(ECCV 2020)** [[Paper](https://arxiv.org/abs/1910.03560)] [[CODE](https://github.com/cvl-umass/fsl-ssl)]
- **[ Pareto ]** Pareto Self-Supervised Training for Few-Shot Learning **(CVPR 2021)** [[Paper](https://arxiv.org/pdf/2104.07841v2.pdf)]

### Clustering
- **[ JULE ]** Joint Unsupervised Learning of Deep Representations and Image Clusters. **(CVPR 2016)** [[Paper](https://arxiv.org/pdf/1604.03628.pdf)]
- **[ LCO ]** Learning to cluster in order to transfer across domains and tasks **(ICLR 2018)** [[Paper](https://arxiv.org/abs/1711.10125)] [[CODE](https://github.com/GT-RIPL/L2C)]
- **[ Deep Cluster ]** Deep Clustering for Unsupervised Learning of Visual Features **(ECCV 2018)** [[Paper](https://research.fb.com/wp-content/uploads/2018/09/Deep-Clustering-for-Unsupervised-Learning-of-Visual-Features.pdf)] [[CODE](https://github.com/facebookresearch/deepcluster)]
- **[ Self Cluster ]** Self-labelling via simultaneous clustering and representation learning **(ICLR 2020)** [[Paper](https://openreview.net/pdf?id=Hyx-jyBFPr)] [[CODE](https://github.com/yukimasano/self-label)]
- **[ ClusterFit ]** ClusterFit: Improving Generalization of Visual Representations **(CVPR 2020)** [[Paper](https://arxiv.org/abs/1912.03330)]
- **[ SCAN ]** SCAN: Learning to Classify Images without Labels **(ECCV 2020)** [[Paper](https://arxiv.org/abs/2005.12320)] [[CODE](https://github.com/wvangansbeke/Unsupervised-Classification)]
- **[ MisMatch ]** Mitigating embedding and class assignment mismatch in unsupervised image classification **(ECCV 2020)** [[Paper](https://link.springer.com/chapter/10.1007/978-3-030-58586-0_45)] [[CODE](https://github.com/Sungwon-Han/TwoStageUC)]
- **[ RUC ]** Improving Unsupervised Image Clustering With Robust Learning **(CVPR 2021)** [[Paper](https://arxiv.org/abs/2012.11150)] [[CODE](https://github.com/deu30303/RUC)]
- **[ MiCE ]** MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering **(ICLR 2021)** [[Paper](https://openreview.net/forum?id=gV3wdEOGy_V)] [[CODE](https://github.com/TsungWeiTsai/MiCE)]
- **[ GATCluster ]** GATCluster: Self-Supervised Gaussian-Attention Network for Image Clustering **(ECCV 2020)** [[Paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123700732.pdf)]
- **[ Jigsaw Cluster ]** Jigsaw Clustering for Unsupervised Visual Representation Learning **(CVPR 2021)** [[Paper](https://arxiv.org/abs/2104.00323)] [[CODE](https://github.com/dvlab-research/JigsawClustering)]

### Blog and Resource
- **[Self-supervised learning: The dark matter of intelligence](https://ai.facebook.com/blog/self-supervised-learning-the-dark-matter-of-intelligence/)** **(FAIR 2021)**

### Dataset
- [ImageNet1K](https://www.image-net.org/challenges/LSVRC/index.php)
- [CUB-200](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html)
- [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/)
- [Stanford-Car](http://ai.stanford.edu/~jkrause/cars/car_dataset.html)
- [FGVC-Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/)
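Many of the papers listed under Contrastive & Distillation Learning (SimCLR, MoCo, and their variants) share one core objective: pull two augmented views of the same image together while pushing all other images in the batch apart. As a quick, dependency-free reference, here is a minimal pure-Python sketch of a SimCLR-style NT-Xent (InfoNCE) loss; the function names and the temperature default are illustrative choices, not taken from any of the listed repositories.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return sum(a * b for a, b in zip(l2_normalize(u), l2_normalize(v)))

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same
    image (the positive pair); every other embedding in the combined
    2N-sized batch serves as a negative.
    """
    z = z1 + z2          # concatenate the two views: 2N embeddings
    n = len(z)
    total = 0.0
    for i in range(n):
        j = (i + len(z1)) % n  # index of i's positive partner
        # softmax denominator over all other embeddings (positive included)
        denom = sum(math.exp(cosine(z[i], z[k]) / temperature)
                    for k in range(n) if k != i)
        pos = math.exp(cosine(z[i], z[j]) / temperature)
        total += -math.log(pos / denom)
    return total / n
```

With identical views the positives align perfectly and the loss is small; shuffling the second view so positives no longer match drives the loss up, which is the gradient signal these methods train on (real implementations compute the same quantity batched on GPU).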