├── README.md
└── papers.png
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Awesome CONTRASTIVE LEARNING [![Awesome](https://awesome.re/badge.svg)](https://github.com/sindresorhus/awesome#readme)
4 | > A comprehensive list of awesome contrastive self-supervised learning papers.
5 |
6 |
7 | ## PAPERS
8 |
9 | #### Surveys and Reviews
10 |
11 | - [ ] [2020: A Survey on Contrastive Self-Supervised Learning](https://arxiv.org/abs/2011.00362)
12 |
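Nearly all of the papers below build on an InfoNCE-style contrastive objective: embeddings of two augmented views of the same sample are pulled together while being pushed away from the other samples in the batch. As an orientation aid only (not the exact formulation of any paper listed here), below is a minimal PyTorch sketch of the NT-Xent variant of that loss; the function name, temperature default, and batch size are illustrative assumptions.

```python
# Minimal NT-Xent (normalized temperature-scaled cross entropy) sketch.
# Illustrative only -- names and defaults are assumptions, not taken from any single paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: [N, D] embeddings of two augmented views of the same N samples."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)         # [2N, D] stacked views
    sim = z @ z.t() / temperature          # [2N, 2N] scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))      # a sample is never its own positive
    n = z1.size(0)
    # Row i's positive is the other view of the same sample: i+N in the first half, i-N in the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random tensors standing in for encoder outputs.
loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```

Individual papers then vary the positives (views, patches, modalities, graph augmentations), the negatives (memory banks, queues, hard-negative mixing), or drop explicit negatives entirely (e.g. BYOL, SimSiam, Barlow Twins, VICReg).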
13 |
14 | #### 2024
15 | - [ ] [2024: Topic Modeling as Multi-Objective Contrastive Optimization](https://arxiv.org/abs/2402.07577)
16 | - [ ] [2024: Demonstrating and Reducing Shortcuts in Vision-Language Representation Learning](https://arxiv.org/abs/2402.17510) [[Code]](https://github.com/MauritsBleeker/svl-framework)
17 | - [ ] [2024: VoCo: A Simple-yet-Effective Volume Contrastive Learning Framework for 3D Medical Image Analysis](https://arxiv.org/abs/2402.17300) [[Code]](https://github.com/Luffy03/VoCo)
18 | - [ ] [2024: Multi-Grained Contrast for Data-Efficient Unsupervised Representation Learning](https://arxiv.org/abs/2407.02014) [[Code]](https://github.com/visresearch/mgc)
19 | - [ ] [2024: KDMCSE: Knowledge Distillation Multimodal Sentence Embeddings with Adaptive Angular Margin Contrastive Learning](https://arxiv.org/abs/2403.17486) [[Code]](https://github.com/duyngtr16061999/kdmcse)
20 |
21 | #### 2023
22 | - [ ] [2023: Improving Multimodal Sentiment Analysis: Supervised Angular Margin-based Contrastive Learning for Enhanced Fusion Representation](https://aclanthology.org/2023.findings-emnlp.980/)
23 | - [ ] [2023: Inter-Instance Similarity Modeling for Contrastive Learning](https://arxiv.org/abs/2306.12243) [[Code]](https://github.com/visresearch/patchmix)
24 | - [ ] [2023: Asymmetric Patch Sampling for Contrastive Learning](https://arxiv.org/abs/2306.02854) [[Code]](https://github.com/visresearch/aps)
25 | - [ ] [2023: Randomized Schur Complement Views for Graph Contrastive Learning](https://arxiv.org/abs/2306.04004) [[Code]](https://github.com/kvignesh1420/rlap)
26 |
27 | #### 2022
28 | - [ ] [2022: Adaptive Contrastive Learning on Multimodal Transformer for Review Helpfulness Predictions](https://arxiv.org/abs/2211.03524) [[Code]](https://github.com/nguyentthong/adaptive_contrastive_mrhp)
29 | - [ ] [2022: Contrastive Transformer-based Multiple Instance Learning for Weakly Supervised Polyp Frame Detection](https://arxiv.org/abs/2203.12121)
30 | - [ ] [2022: Fair Contrastive Learning for Facial Attribute Classification (FSCL)](https://arxiv.org/abs/2203.16209)
31 |
32 | #### 2021
33 | - [ ] [2021: Contrastive Learning for Neural Topic Model](https://arxiv.org/abs/2110.12764) [[Code]](https://github.com/nguyentthong/CLNTM)
34 | - [ ] [2021: Learning Transferable Visual Models From Natural Language Supervision (CLIP)](https://proceedings.mlr.press/v139/radford21a.html)
35 | - [ ] [2021: Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images](https://arxiv.org/abs/2103.03423)
36 | - [ ] [2021: Robust Contrastive Learning Using Negative Samples with Diminished Semantics](https://arxiv.org/abs/2110.14189)
37 | - [ ] [2021: VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning](https://arxiv.org/pdf/2105.04906.pdf)
38 | - [ ] [2021: Barlow Twins: Self-Supervised Learning via Redundancy Reduction](https://arxiv.org/pdf/2103.03230.pdf)
39 | - [ ] [2021: Poisoning and Backdooring Contrastive Learning](https://arxiv.org/abs/2106.09667)
40 | - [ ] [2021: Adversarial Attacks are Reversible with Natural Supervision](https://arxiv.org/abs/2103.14222)
41 | - [ ] [2021: Self-Paced Contrastive Learning for Semi-supervised Medical Image Segmentation with Meta-labels](https://arxiv.org/abs/2107.13741v1)
42 | - [ ] [2021: Understanding Cognitive Fatigue from fMRI Scans with Self-supervised Learning](https://arxiv.org/abs/2106.15009)
43 | - [ ] [2021: A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning](https://arxiv.org/abs/2104.14558)
44 | - [ ] [2021: Contrastive Semi-Supervised Learning for 2D Medical Image Segmentation](https://arxiv.org/abs/2106.06801)
45 | - [ ] [2021: Contrastive Learning with Stronger Augmentations](https://arxiv.org/abs/2104.07713v1)
46 | - [ ] [2021: Dual Contrastive Learning for Unsupervised Image-to-Image Translation](https://arxiv.org/abs/2104.07689v1)
47 | - [ ] [2021: How Well Do Self-Supervised Models Transfer?](https://arxiv.org/abs/2011.13377)
48 | - [ ] [2021: Self-supervised Pretraining of Visual Features in the Wild](https://arxiv.org/abs/2103.01988)
49 | - [ ] [2021: VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples](https://arxiv.org/abs/2103.05905v2)
50 | - [ ] [2021: Temporal Contrastive Graph for Self-supervised Video Representation Learning](https://arxiv.org/abs/2101.00820)
51 | - [ ] [2021: Active Learning by Acquiring Contrastive Examples](https://arxiv.org/abs/2109.03764)
52 | - [ ] [2021: Active Contrastive Learning of Audio-Visual Video Representations](https://arxiv.org/abs/2009.09805)
53 |
54 | #### 2020
55 |
56 | - [ ] [2020: Rethinking the Value of Labels for Improving Class-Imbalanced Learning](https://arxiv.org/abs/2006.07529)
57 | - [ ] [2020: Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning](https://arxiv.org/abs/2012.11552)
58 | - [ ] [2020: Social NCE: Contrastive Learning of Socially-aware Motion Representations](https://arxiv.org/abs/2012.11717)
59 | - [ ] [2020: CASTing Your Model: Learning to Localize Improves Self-Supervised Representations](https://arxiv.org/pdf/2012.04630.pdf)
60 | - [ ] [2020: Exploring Simple Siamese Representation Learning (SimSiam)](https://arxiv.org/abs/2011.10566)
61 | - [ ] [2020: FROST: Faster and more Robust One-shot Semi-supervised Training](https://arxiv.org/abs/2011.09471)
62 | - [ ] [2020: Hard Negative Mixing for Contrastive Learning](https://arxiv.org/abs/2010.01028)
63 | - [ ] [2020: Representation Learning via Invariant Causal Mechanisms](https://arxiv.org/abs/2010.07922)
64 | - [ ] [2020: Are all negatives created equal in contrastive instance discrimination?](https://arxiv.org/abs/2010.06682)
65 | - [ ] [2020: Bootstrap your own latent: A new approach to self-supervised Learning (BYOL)](https://arxiv.org/abs/2006.07733)
66 | - [ ] [2020: Spatiotemporal Contrastive Video Representation Learning](https://arxiv.org/abs/2008.03800)
67 | - [ ] [2020: Augmented Skeleton Based Contrastive Action Learning with Momentum LSTM for Unsupervised Action Recognition](https://arxiv.org/abs/2008.00188)
68 | - [ ] [2020: Deep Robust Clustering by Contrastive Learning](https://arxiv.org/abs/2008.03030)
69 | - [ ] [2020: Contrastive Learning for Unpaired Image-to-Image Translation](https://arxiv.org/abs/2007.15651)
70 | - [ ] [2020: Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases](https://arxiv.org/abs/2007.13916)
71 | - [ ] [2020: What Should Not Be Contrastive in Contrastive Learning](https://arxiv.org/abs/2008.05659)
72 | - [ ] [2020: Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework](https://arxiv.org/abs/2008.02531)
73 | - [ ] [2020: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (SwAV)](https://arxiv.org/abs/2006.09882)
74 | - [ ] [2020: Prototypical Contrastive Learning of Unsupervised Representations](https://arxiv.org/abs/2005.04966)
75 | - [ ] [2020: GraphCL: Contrastive Self-Supervised Learning of Graph Representations](https://arxiv.org/abs/2007.08025)
76 | - [ ] [2020: DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659)
77 | - [ ] [2020: Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models](https://arxiv.org/abs/2005.10389)
78 | - [ ] [2020: CERT: Contrastive Self-supervised Learning for Language Understanding](https://arxiv.org/abs/2005.12766)
79 | - [ ] [2020: Deep Graph Contrastive Representation Learning](https://arxiv.org/abs/2006.04131v1)
80 | - [ ] [2020: CLOCS: Contrastive Learning of Cardiac Signals](https://arxiv.org/abs/2005.13249v1)
81 | - [ ] [2020: On Mutual Information in Contrastive Learning for Visual Representations](https://arxiv.org/abs/2005.13149v2)
82 | - [ ] [2020: What Makes for Good Views for Contrastive Learning?](https://arxiv.org/abs/2005.10243v1)
83 | - [ ] [2020: CURL: Contrastive Unsupervised Representations for Reinforcement Learning](https://arxiv.org/abs/2004.04136v2)
84 | - [ ] [2020: Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362v1)
85 | - [ ] [2020: Clustering based Contrastive Learning for Improving Face Representations](https://arxiv.org/abs/2004.02195v1)
86 | - [ ] [2020: A Simple Framework for Contrastive Learning of Visual Representations (SimCLR)](https://arxiv.org/pdf/2002.05709.pdf)
87 | - [ ] [2020: Improved Baselines with Momentum Contrastive Learning (MoCo v2)](https://arxiv.org/abs/2003.04297v1)
88 | - [ ] [2020: ALICE: Active Learning with Contrastive Natural Language Explanations](https://arxiv.org/abs/2009.10259)
89 |
90 | #### 2019
91 |
92 | - [ ] [2019: Unsupervised Scene Adaptation with Memory Regularization in vivo](https://arxiv.org/abs/1912.11164)
93 | - [ ] [2019: Self-labelling via simultaneous clustering and representation learning](https://arxiv.org/abs/1911.05371)
94 | - [ ] [2019: Transferable Contrastive Network for Generalized Zero-Shot Learning](https://arxiv.org/abs/1908.05832v1)
95 | - [ ] [2019: MoCo: Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/abs/1911.05722)
96 | - [ ] [2019: Self-Supervised Learning of Pretext-Invariant Representations](https://arxiv.org/pdf/1912.01991.pdf)
97 | - [ ] [2019: Selfie: Self-supervised Pretraining for Image Embedding](https://arxiv.org/abs/1906.02940)
98 | - [ ] [2019: Data-Efficient Image Recognition with Contrastive Predictive Coding](https://arxiv.org/abs/1905.09272)
99 | - [ ] [2019: Local Aggregation for Unsupervised Learning of Visual Embeddings](https://arxiv.org/abs/1903.12355)
100 | - [ ] [2019: Learning Representations by Maximizing Mutual Information Across Views](https://arxiv.org/abs/1906.00910)
101 | - [ ] [2019: Contrastive Multiview Coding](https://arxiv.org/abs/1906.05849)
102 | - [ ] [2019: Unsupervised Embedding Learning via Invariant and Spreading Instance Feature](https://arxiv.org/abs/1904.03436)
103 | - [ ] [2019: Invariant Information Clustering for Unsupervised Image Classification and Segmentation](https://arxiv.org/abs/1807.06653)
104 | - [ ] [2019: A Theoretical Analysis of Contrastive Unsupervised Representation Learning](https://arxiv.org/abs/1902.09229)
105 |
106 | #### 2018
107 |
108 | - [ ] [2018: Learning deep representations by mutual information estimation and maximization](https://arxiv.org/abs/1808.06670)
109 | - [ ] [2018: Representation Learning with Contrastive Predictive Coding](https://arxiv.org/abs/1807.03748)
110 | - [ ] [2018: Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination](https://arxiv.org/abs/1805.01978)
111 |
112 | #### 2017 and Older
113 |
114 | - [ ] [2017: Time-Contrastive Networks: Self-Supervised Learning from Video](https://arxiv.org/abs/1704.06888)
115 | - [ ] [2017: Multi-task Self-Supervised Visual Learning](https://arxiv.org/abs/1708.07860)
116 | - [ ] [2016: Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles](https://arxiv.org/abs/1603.09246)
117 | - [ ] [2015: Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks](https://arxiv.org/abs/1406.6909)
118 | - [ ] [2010: Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://proceedings.mlr.press/v9/gutmann10a/gutmann10a.pdf)
119 |
120 |
121 | ## Star History
122 |
123 |
--------------------------------------------------------------------------------
/papers.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/asheeshcric/awesome-contrastive-self-supervised-learning/f169f3e38c45d2be364b85837156e72cde652c76/papers.png
--------------------------------------------------------------------------------