├── LICENSE
├── README.md
├── Wechat.jpeg
└── icon.png
/LICENSE:
--------------------------------------------------------------------------------
1 | CC0 1.0 Universal
2 |
3 | Statement of Purpose
4 |
5 | The laws of most jurisdictions throughout the world automatically confer
6 | exclusive Copyright and Related Rights (defined below) upon the creator and
7 | subsequent owner(s) (each and all, an "owner") of an original work of
8 | authorship and/or a database (each, a "Work").
9 |
10 | Certain owners wish to permanently relinquish those rights to a Work for the
11 | purpose of contributing to a commons of creative, cultural and scientific
12 | works ("Commons") that the public can reliably and without fear of later
13 | claims of infringement build upon, modify, incorporate in other works, reuse
14 | and redistribute as freely as possible in any form whatsoever and for any
15 | purposes, including without limitation commercial purposes. These owners may
16 | contribute to the Commons to promote the ideal of a free culture and the
17 | further production of creative, cultural and scientific works, or to gain
18 | reputation or greater distribution for their Work in part through the use and
19 | efforts of others.
20 |
21 | For these and/or other purposes and motivations, and without any expectation
22 | of additional consideration or compensation, the person associating CC0 with a
23 | Work (the "Affirmer"), to the extent that he or she is an owner of Copyright
24 | and Related Rights in the Work, voluntarily elects to apply CC0 to the Work
25 | and publicly distribute the Work under its terms, with knowledge of his or her
26 | Copyright and Related Rights in the Work and the meaning and intended legal
27 | effect of CC0 on those rights.
28 |
29 | 1. Copyright and Related Rights. A Work made available under CC0 may be
30 | protected by copyright and related or neighboring rights ("Copyright and
31 | Related Rights"). Copyright and Related Rights include, but are not limited
32 | to, the following:
33 |
34 | i. the right to reproduce, adapt, distribute, perform, display, communicate,
35 | and translate a Work;
36 |
37 | ii. moral rights retained by the original author(s) and/or performer(s);
38 |
39 | iii. publicity and privacy rights pertaining to a person's image or likeness
40 | depicted in a Work;
41 |
42 | iv. rights protecting against unfair competition in regards to a Work,
43 | subject to the limitations in paragraph 4(a), below;
44 |
45 | v. rights protecting the extraction, dissemination, use and reuse of data in
46 | a Work;
47 |
48 | vi. database rights (such as those arising under Directive 96/9/EC of the
49 | European Parliament and of the Council of 11 March 1996 on the legal
50 | protection of databases, and under any national implementation thereof,
51 | including any amended or successor version of such directive); and
52 |
53 | vii. other similar, equivalent or corresponding rights throughout the world
54 | based on applicable law or treaty, and any national implementations thereof.
55 |
56 | 2. Waiver. To the greatest extent permitted by, but not in contravention of,
57 | applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and
58 | unconditionally waives, abandons, and surrenders all of Affirmer's Copyright
59 | and Related Rights and associated claims and causes of action, whether now
60 | known or unknown (including existing as well as future claims and causes of
61 | action), in the Work (i) in all territories worldwide, (ii) for the maximum
62 | duration provided by applicable law or treaty (including future time
63 | extensions), (iii) in any current or future medium and for any number of
64 | copies, and (iv) for any purpose whatsoever, including without limitation
65 | commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes
66 | the Waiver for the benefit of each member of the public at large and to the
67 | detriment of Affirmer's heirs and successors, fully intending that such Waiver
68 | shall not be subject to revocation, rescission, cancellation, termination, or
69 | any other legal or equitable action to disrupt the quiet enjoyment of the Work
70 | by the public as contemplated by Affirmer's express Statement of Purpose.
71 |
72 | 3. Public License Fallback. Should any part of the Waiver for any reason be
73 | judged legally invalid or ineffective under applicable law, then the Waiver
74 | shall be preserved to the maximum extent permitted taking into account
75 | Affirmer's express Statement of Purpose. In addition, to the extent the Waiver
76 | is so judged Affirmer hereby grants to each affected person a royalty-free,
77 | non transferable, non sublicensable, non exclusive, irrevocable and
78 | unconditional license to exercise Affirmer's Copyright and Related Rights in
79 | the Work (i) in all territories worldwide, (ii) for the maximum duration
80 | provided by applicable law or treaty (including future time extensions), (iii)
81 | in any current or future medium and for any number of copies, and (iv) for any
82 | purpose whatsoever, including without limitation commercial, advertising or
83 | promotional purposes (the "License"). The License shall be deemed effective as
84 | of the date CC0 was applied by Affirmer to the Work. Should any part of the
85 | License for any reason be judged legally invalid or ineffective under
86 | applicable law, such partial invalidity or ineffectiveness shall not
87 | invalidate the remainder of the License, and in such case Affirmer hereby
88 | affirms that he or she will not (i) exercise any of his or her remaining
89 | Copyright and Related Rights in the Work or (ii) assert any associated claims
90 | and causes of action with respect to the Work, in either case contrary to
91 | Affirmer's express Statement of Purpose.
92 |
93 | 4. Limitations and Disclaimers.
94 |
95 | a. No trademark or patent rights held by Affirmer are waived, abandoned,
96 | surrendered, licensed or otherwise affected by this document.
97 |
98 | b. Affirmer offers the Work as-is and makes no representations or warranties
99 | of any kind concerning the Work, express, implied, statutory or otherwise,
100 | including without limitation warranties of title, merchantability, fitness
101 | for a particular purpose, non infringement, or the absence of latent or
102 | other defects, accuracy, or the present or absence of errors, whether or not
103 | discoverable, all to the greatest extent permissible under applicable law.
104 |
105 | c. Affirmer disclaims responsibility for clearing rights of other persons
106 | that may apply to the Work or any use thereof, including without limitation
107 | any person's Copyright and Related Rights in the Work. Further, Affirmer
108 | disclaims responsibility for obtaining any necessary consents, permissions
109 | or other rights required for any use of the Work.
110 |
111 | d. Affirmer understands and acknowledges that Creative Commons is not a
112 | party to this document and has no duty or obligation with respect to this
113 | CC0 or use of the Work.
114 |
115 | For more information, please see
116 | <http://creativecommons.org/publicdomain/zero/1.0/>
117 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Awesome Contrastive Learning Papers&Codes
4 | [![Awesome](https://awesome.re/badge.svg)](https://github.com/sindresorhus/awesome#readme)
5 | > A comprehensive list of awesome Contrastive Learning Papers&Codes.
6 |
7 | Contrastive learning papers from top conferences; research areas include, but are not limited to: CV, NLP, Audio, Video, Multimodal, Graph, Language, etc.
8 | ## [Content](#content)
9 |
10 |  
11 | 
12 | [](https://github.com/coder-duibai/Contrastive-Learning-Papers-Codes/tree/main.zip)
13 |
14 | - [Must-read Papers](#must-read-papers)
15 |   - [Survey Papers](#survey-papers)
16 | - [Problems](#problems)
17 |   - [Computer Vision](#computer-vision)
18 |   - [Audio](#audio)
19 |   - [Videos and Multimodal](#videos-and-multimodal)
20 |   - [NLP](#nlp)
21 |   - [Language Contrastive Learning](#language-contrastive-learning)
22 |   - [Graph](#graph)
23 |   - [Adversarial Learning](#adversarial-learning)
24 |   - [Recommendation](#recommendation)
25 |   - [Applications](#applications)
37 |
38 | ## Must-read Papers
39 | ### [Survey Papers](#content)
40 | 1. **A Survey on Contrastive Self-supervised Learning**.
41 | Authors:Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon. [paper](https://arxiv.org/abs/2011.00362)
42 | ## [Problems](#content)
43 | ### [Computer Vision](#content)
44 | 1. **[PCL] Prototypical Contrastive Learning of Unsupervised Representations**. ICLR2021.
45 | Authors:Junnan Li, Pan Zhou, Caiming Xiong, Steven C.H. Hoi. [paper](https://arxiv.org/abs/2005.04966) [code](https://github.com/salesforce/PCL)
46 | 2. **[BalFeat] Exploring Balanced Feature Spaces for Representation Learning**. ICLR2021.
47 | Authors:Bingyi Kang, Yu Li, Sa Xie, Zehuan Yuan, Jiashi Feng. [paper](https://openreview.net/forum?id=OqtLIabPTit)
48 | 3. **[MiCE] MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering**. ICLR2021.
49 | Authors:Tsung Wei Tsai, Chongxuan Li, Jun Zhu. [paper](https://openreview.net/forum?id=gV3wdEOGy_V) [code](https://github.com/TsungWeiTsai/MiCE)
50 | 4. **[i-Mix] i-Mix: A Strategy for Regularizing Contrastive Representation Learning**. ICLR2021.
51 | Authors:Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee. [paper](https://arxiv.org/abs/2010.08887) [code](https://github.com/kibok90/imix)
52 | 5. **Contrastive Learning with Hard Negative Samples**. ICLR2021.
53 | Authors:Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, Stefanie Jegelka. [paper](https://arxiv.org/abs/2010.04592) [code](https://github.com/joshr17/HCL)
54 | 6. **[LooC] What Should Not Be Contrastive in Contrastive Learning**. ICLR2021.
55 | Authors:Tete Xiao, Xiaolong Wang, Alexei A. Efros, Trevor Darrell. [paper](https://arxiv.org/abs/2008.05659)
56 | 7. **[MoCo] Momentum Contrast for Unsupervised Visual Representation Learning**. CVPR2020.
57 | Authors:Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick. [paper](https://arxiv.org/abs/1911.05722) [code](https://github.com/facebookresearch/moco)
58 | 8. **[MoCo v2] Improved Baselines with Momentum Contrastive Learning**.
59 | Authors:Xinlei Chen, Haoqi Fan, Ross Girshick, Kaiming He. [paper](https://arxiv.org/abs/2003.04297) [code](https://github.com/facebookresearch/moco)
60 | 9. **[SimCLR] A Simple Framework for Contrastive Learning of Visual Representations**. ICML2020.
61 | Authors:Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton. [paper](https://arxiv.org/abs/2002.05709) [code](https://github.com/google-research/simclr)
62 | 10. **[SimCLR v2] Big Self-Supervised Models are Strong Semi-Supervised Learners**. NIPS2020.
63 | Authors:Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey Hinton. [paper](https://arxiv.org/abs/2006.10029) [code](https://github.com/google-research/simclr)
64 | 11. **[BYOL] Bootstrap your own latent: A new approach to self-supervised Learning**.
65 | Authors:Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko. [paper](https://arxiv.org/abs/2006.07733) [code](https://github.com/deepmind/deepmind-research/tree/master/byol)
66 | 12. **[SwAV] Unsupervised Learning of Visual Features by Contrasting Cluster Assignments**. NIPS2020.
67 | Authors:Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin. [paper](https://arxiv.org/abs/2006.09882) [code](https://github.com/facebookresearch/swav)
68 | 13. **[SimSiam] Exploring Simple Siamese Representation Learning**. CVPR2021.
69 | Authors:Xinlei Chen, Kaiming He. [paper](https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Exploring_Simple_Siamese_Representation_Learning_CVPR_2021_paper.html) [code](https://github.com/PatrickHua/SimSiam/blob/main/models/simsiam.py)
70 | 14. **Hard Negative Mixing for Contrastive Learning**. NIPS2020.
71 | Authors:Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus. [paper](https://arxiv.org/abs/2010.01028)
72 | 15. **Supervised Contrastive Learning**. NIPS2020.
73 | Authors:Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan. [paper](https://arxiv.org/abs/2004.11362)
74 | 16. **[LoCo] LoCo: Local Contrastive Representation Learning**. NIPS2020.
75 | Authors:Yuwen Xiong, Mengye Ren, Raquel Urtasun. [paper](https://arxiv.org/abs/2008.01342)
76 | 17. **What Makes for Good Views for Contrastive Learning?**. NIPS2020.
77 | Authors:Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola. [paper](https://arxiv.org/abs/2005.10243)
78 | 18. **[ContraGAN] ContraGAN: Contrastive Learning for Conditional Image Generation**. NIPS2020.
79 | Authors:Minguk Kang, Jaesik Park. [paper](https://arxiv.org/abs/2006.12681) [code](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN)
80 | 19. **[SpCL] Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID**. NIPS2020.
81 | Authors:Yixiao Ge, Feng Zhu, Dapeng Chen, Rui Zhao, Hongsheng Li. [paper](https://arxiv.org/abs/2006.02713) [code](https://github.com/yxgeee/SpCL)
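
Many of the methods above (SimCLR, MoCo, and their variants) train by pulling two augmented views of the same image together while pushing apart all other images in the batch, via the NT-Xent (InfoNCE) loss. Below is a minimal NumPy sketch of that loss for paired embeddings; it is an illustrative toy, not code from any of the listed repositories.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """NT-Xent / InfoNCE loss over a batch of paired views.

    z_i, z_j: (N, D) embeddings of two augmented views;
    row k of z_i and row k of z_j form a positive pair.
    """
    # L2-normalize so dot products are cosine similarities
    z = np.concatenate([z_i, z_j], axis=0)            # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = len(z_i)
    sim = z @ z.T / temperature                       # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                    # mask each sample's self-similarity
    # the positive for sample i is its other view: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all 2N-1 candidates
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

With perfectly matched views the loss falls below the chance level of log(2N-1); the `temperature` hyperparameter (0.5 here, following SimCLR's range) scales how sharply hard negatives are weighted.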
82 |
83 | ### [Audio](#content)
84 | 1. **wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations**.
85 | Authors:Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. [paper](https://arxiv.org/abs/2006.11477) [code](https://github.com/pytorch/fairseq)
86 |
87 | ### [Videos and Multimodal](#content)
88 | 1. **Time-Contrastive Networks: Self-Supervised Learning from Video**.
89 | Authors:Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine. [paper](https://ieeexplore.ieee.org/abstract/document/8462891)
90 | 2. **Contrastive Multiview Coding**.
91 | Authors:Yonglong Tian, Dilip Krishnan, Phillip Isola. [paper](https://link.springer.com/chapter/10.1007%2F978-3-030-58621-8_45) [code](https://github.com/HobbitLong/CMC/)
92 | 3. **Learning Video Representations using Contrastive Bidirectional Transformer**.
93 | Authors:Chen Sun, Fabien Baradel, Kevin Murphy, Cordelia Schmid. [paper](https://arxiv.org/abs/1906.05743)
94 | 4. **End-to-End Learning of Visual Representations from Uncurated Instructional Videos**. CVPR2020.
95 | Authors:Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, Andrew Zisserman. [paper](https://openaccess.thecvf.com/content_CVPR_2020/html/Miech_End-to-End_Learning_of_Visual_Representations_From_Uncurated_Instructional_Videos_CVPR_2020_paper.html) [code](https://www.di.ens.fr/willow/research/mil-nce/)
96 | 5. **Multi-modal Self-Supervision from Generalized Data Transformations**.
97 | Authors:Mandela Patrick, Yuki M. Asano, Polina Kuznetsova, Ruth Fong, João F. Henriques, Geoffrey Zweig, Andrea Vedaldi. [paper](https://arxiv.org/abs/2003.04298)
98 | 6. **Support-set bottlenecks for video-text representation learning**. ICLR2021.
99 | Authors:Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander Hauptmann, João Henriques, Andrea Vedaldi. [paper](https://arxiv.org/abs/2010.02824)
100 | 7. **Contrastive Learning of Medical Visual Representations from Paired Images and Text**. ICLR2021.
101 | Authors:Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, Curtis P. Langlotz. [paper](https://arxiv.org/abs/2010.00747)
102 | 8. **AVLnet: Learning Audio-Visual Language Representations from Instructional Videos**.
103 | Authors:Andrew Rouditchenko, Angie Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogerio Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, James Glass. [paper](https://arxiv.org/abs/2006.09199)
104 | 9. **Self-Supervised MultiModal Versatile Networks**. NIPS2020.
105 | Authors:Jean-Baptiste Alayrac, Adrià Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, Andrew Zisserman. [paper](https://arxiv.org/abs/2006.16228)
106 | 10. **Memory-augmented Dense Predictive Coding for Video Representation Learning**.
107 | Authors:Tengda Han, Weidi Xie, Andrew Zisserman. [paper](https://link.springer.com/chapter/10.1007%2F978-3-030-58580-8_19) [code](https://www.robots.ox.ac.uk/~vgg/research/DPC/)
108 | 11. **Spatiotemporal Contrastive Video Representation Learning**.
109 | Authors:Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, Yin Cui. [paper](https://arxiv.org/abs/2008.03800) [code](https://github.com/tensorflow/models/tree/master/official/)
110 | 12. **Self-supervised Co-training for Video Representation Learning**. NIPS2020.
111 | Authors:Tengda Han, Weidi Xie, Andrew Zisserman. [paper](https://arxiv.org/abs/2010.09709)
112 |
113 |
114 |
115 |
116 | ### [NLP](#content)
117 | 1. **[CALM] Pre-training Text-to-Text Transformers for Concept-centric Common Sense**. ICLR2021.
118 | Authors:Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren. [paper](https://openreview.net/forum?id=3k20LAiHYL2) [code](https://github.com/INK-USC/CALM)
119 | 2. **Residual Energy-Based Models for Text Generation**. ICLR2021.
120 | Authors:Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, Marc'Aurelio Ranzato. [paper](https://arxiv.org/abs/2004.11714)
121 | 3. **Contrastive Learning with Adversarial Perturbations for Conditional Text Generation**. ICLR2021.
122 | Authors:Seanie Lee, Dong Bok Lee, Sung Ju Hwang. [paper](https://arxiv.org/abs/2012.07280)
123 | 4. **[CoDA] CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding**. ICLR2021.
124 | Authors:Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Jiawei Han, Weizhu Chen. [paper](https://arxiv.org/abs/2010.08670)
125 | 5. **[FairFil] FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders**. ICLR2021.
126 | Authors:Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, Lawrence Carin. [paper](https://arxiv.org/abs/2103.06413)
127 | 6. **Towards Robust and Efficient Contrastive Textual Representation Learning**. ICLR2021.
128 | Authors:Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin. [paper](https://openreview.net/forum?id=mDAZVlBeXWx)
129 | 7. **Self-supervised Contrastive Zero to Few-shot Learning from Small, Long-tailed Text data**. ICLR2021.
130 | Authors:Nils Rethmeier, Isabelle Augenstein. [paper](https://openreview.net/forum?id=_cadenVdKzF)
131 | 8. **Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval**. ICLR2021.
132 | Authors:Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [paper](https://arxiv.org/abs/2007.00808)
133 | 9. **Self-Supervised Contrastive Learning for Efficient User Satisfaction Prediction in Conversational Agents**. NAACL2021.
134 | Authors:Mohammad Kachuee, Hao Yuan, Young-Bum Kim, Sungjin Lee. [paper](https://arxiv.org/abs/2010.11230)
135 | 10. **SOrT-ing VQA Models : Contrastive Gradient Learning for Improved Consistency**. NAACL2021.
136 | Authors:Sameer Dharur, Purva Tendulkar, Dhruv Batra, Devi Parikh, Ramprasaath R. Selvaraju. [paper](https://aclanthology.org/2021.naacl-main.248/)
137 | 11. **Supporting Clustering with Contrastive Learning**. NAACL2021.
138 | Authors:Dejiao Zhang, Feng Nan, Xiaokai Wei, Shangwen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew Arnold, Bing Xiang. [paper](https://arxiv.org/abs/2103.12953)
139 | 12. **Understanding Hard Negatives in Noise Contrastive Estimation**. NAACL2021.
140 | Authors:Wenzheng Zhang, Karl Stratos. [paper](https://arxiv.org/abs/2104.06245)
141 | 13. **Contextualized and Generalized Sentence Representations by Contrastive Self-Supervised Learning: A Case Study on Discourse Relation Analysis**. NAACL2021.
142 | Authors:Hirokazu Kiyomaru, Sadao Kurohashi. [paper](https://aclanthology.org/2021.naacl-main.442/)
143 | 14. **Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach**. NAACL2021.
144 | Authors:Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, Chao Zhang. [paper](https://arxiv.org/abs/2010.07835)
145 |
146 |
147 | ### [Language Contrastive Learning](#content)
148 |
149 | 1. **Distributed Representations of Words and Phrases and their Compositionality**. NIPS2013.
150 | Authors:Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean. [Paper](https://arxiv.org/abs/1310.4546)
151 |
152 | 2. **An efficient framework for learning sentence representations**.
153 | Authors:Lajanugen Logeswaran, Honglak Lee. [Paper](https://arxiv.org/abs/1803.02893)
154 | 3. **XLNet: Generalized Autoregressive Pretraining for Language Understanding**.
155 | Authors:Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. [Paper](https://arxiv.org/abs/1906.08237)
156 | 4. **A Mutual Information Maximization Perspective of Language Representation Learning**.
157 | Authors:Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama. [Paper](https://arxiv.org/abs/1910.08350)
158 | 5. **InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training**.
159 | Authors:Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, Ming Zhou. [Paper](https://arxiv.org/abs/2007.07834)
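
The word2vec objective (item 1 above) is an early instance of contrastive learning: each observed (center, context) word pair is scored against a handful of sampled noise words. A minimal NumPy sketch of the skip-gram negative-sampling loss, with hypothetical function and variable names of my own:

```python
import numpy as np

def sgns_loss(center, context, negatives):
    """Skip-gram negative-sampling loss for one (center, context) pair.

    center:    (D,) embedding of the center word
    context:   (D,) embedding of the observed context word (the positive)
    negatives: (K, D) embeddings of K words sampled from a noise distribution
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos_term = np.log(sigmoid(context @ center))             # pull the true pair together
    neg_term = np.log(sigmoid(-(negatives @ center))).sum()  # push sampled noise words away
    return -(pos_term + neg_term)
```

Minimizing this loss over a corpus aligns embeddings of co-occurring words while keeping them separated from random words, which is the same positive-vs-negative contrast the later papers in this list formalize with InfoNCE.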
160 | ### [Graph](#content)
161 | 1. **[GraphCL] Graph Contrastive Learning with Augmentations**. NIPS2020.
162 | Authors:Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen. [paper](https://proceedings.neurips.cc/paper_files/paper/2020/hash/3fe230348e9a12c13120749e3f9fa4cd-Abstract.html)
163 | 2. **Contrastive Multi-View Representation Learning on Graphs**. ICML2020.
164 | Authors:Kaveh Hassani, Amir Hosein Khasahmadi. [Paper](https://arxiv.org/abs/2006.05582)
165 | 3. **[GCC] GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training**. KDD2020.
166 | Authors:Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, Jie Tang. [Paper](https://arxiv.org/abs/2006.09963)
167 | 4. **[InfoGraph] InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization**. ICLR2020.
168 | Authors:Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, Jian Tang. [Paper](https://arxiv.org/abs/1908.01000)
169 |
170 | ### [Adversarial Learning](#content)
171 | 1. **Contrastive Learning with Adversarial Examples**. NIPS2020.
172 | Authors:Chih-Hui Ho, Nuno Vasconcelos. [paper](https://arxiv.org/abs/2010.12050)
173 |
174 |
175 | ### [Recommendation](#content)
176 | 1. **Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation**. AAAI2021.
177 | Authors:Xin Xia, Hongzhi Yin, Junliang Yu, Qinyong Wang, Lizhen Cui, Xiangliang Zhang. [paper](https://arxiv.org/abs/2012.06852) [code](https://github.com/xiaxin1998/DHCN)
178 | 2. **Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation**. WWW2021.
179 | Authors:Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, Xiangliang Zhang. [paper](https://arxiv.org/abs/2101.06448) [code](https://github.com/Coder-Yu/QRec)
180 | 3. **Self-supervised Graph Learning for Recommendation**. SIGIR2021.
181 | Authors:Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie. [paper](https://arxiv.org/abs/2010.10783) [code](https://github.com/wujcan/SGL)
182 |
183 | ### [Applications](#content)
184 | 1. **Contrastive Learning for Unpaired Image-to-Image Translation**.
185 | Authors:Taesung Park, Alexei A. Efros, Richard Zhang, Jun-Yan Zhu. [paper](https://link.springer.com/chapter/10.1007/978-3-030-58545-7_19)
186 |
187 | ## About Me
188 | Hi, I'm Duibai (对白). I'm currently an algorithm engineer at a major tech company, and as an undergraduate I founded a startup that raised several million RMB in funding. I regularly share algorithm knowledge, startup lessons, and life reflections on [Zhihu](https://www.zhihu.com/people/coder_duibai) and WeChat.
189 | Feel free to follow my WeChat official account and reply with the keyword **对比学习** (contrastive learning) to receive the latest collection of contrastive learning papers (it includes every paper listed above).
190 |
191 | 
192 |
--------------------------------------------------------------------------------
/Wechat.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/coder-duibai/Contrastive-Learning-Papers-Codes/31cb1dc18e898d0ceaf9a5561dfa2e861fdf83ad/Wechat.jpeg
--------------------------------------------------------------------------------
/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/coder-duibai/Contrastive-Learning-Papers-Codes/31cb1dc18e898d0ceaf9a5561dfa2e861fdf83ad/icon.png
--------------------------------------------------------------------------------