# Label Noise Papers

This repository contains the Label-Noise Representation Learning (LNRL) papers mentioned in our survey "A Survey of Label-noise Representation Learning: Past, Present, and Future".

We will update this list periodically to include new LNRL papers.

## Citation

Please cite our paper if you find it helpful.

```
@article{han2020survey,
  title={A survey of label-noise representation learning: Past, present and future},
  author={Han, Bo and Yao, Quanming and Liu, Tongliang and Niu, Gang and Tsang, Ivor W and Kwok, James T and Sugiyama, Masashi},
  journal={arXiv preprint arXiv:2011.04406},
  year={2020}
}
```

## Content
1. [Survey](#survey)
2. [Data](#data)
    1. [Transition Matrix](#transition-matrix)
    1. [Adaptation Layer](#adaptation-layer)
    1. [Loss Correction](#loss-correction)
    1. [Prior Knowledge](#prior-knowledge)
    1. [Others](#others)
3. [Objective](#objective)
    1. [Regularization](#regularization)
    1. [Reweighting](#reweighting)
    1. [Redesigning](#redesigning)
    1. [Others](#others-1)
4. [Optimization](#optimization)
    1. [Memorization Effect](#memorization-effect)
    1. [Self-training](#self-training)
    1. [Co-training](#co-training)
    1. [Beyond Memorization](#beyond-memorization)
    1. [Others](#others-2)
5. [Future Directions](#future-directions)
    1. [New Datasets](#new-datasets)
    1. [Instance-dependent LNRL](#instance-dependent-lnrl)
    1. [Adversarial LNRL](#adversarial-lnrl)
    1. [Automated LNRL](#automated-lnrl)
    1. [Noisy Data](#noisy-data)
    1. [Double Descent](#double-descent)

## [Survey](#content)

1. B. Frénay and M. Verleysen, **Classification in the presence of label noise: a survey**, IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 845–869, 2014.
[paper](https://romisatriawahono.net/lecture/rm/survey/machine%20learning/Frenay%20-%20Classification%20in%20the%20Presence%20of%20Label%20Noise%20-%202014.pdf)

1. G. Algan and I. Ulusoy, **Image classification with deep learning in the presence of noisy labels: A survey**, arXiv preprint arXiv:1912.05170, 2019.
[paper](https://arxiv.org/pdf/1912.05170.pdf)

1. D. Karimi, H. Dou, S. K. Warfield, and A. Gholipour, **Deep learning with noisy labels: exploring techniques and remedies in medical image analysis**, Medical Image Analysis, 2020.
[paper](https://arxiv.org/pdf/1912.02911.pdf)

1. H. Song, M. Kim, D. Park, and J.-G. Lee, **Learning from noisy labels with deep neural networks: A survey**, arXiv preprint arXiv:2007.08199, 2020.
[paper](https://arxiv.org/pdf/2007.08199.pdf)

## [Data](#content)

### Transition Matrix

1. B. van Rooyen and R. C. Williamson, **A theory of learning with corrupted labels**, Journal of Machine Learning Research, vol. 18, no. 1, pp. 8501–8550, 2017.
[paper](https://www.jmlr.org/papers/volume18/16-315/16-315.pdf)

1. G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu, **Making deep neural networks robust to label noise: A loss correction approach**, in CVPR, 2017.
[paper](https://arxiv.org/pdf/1609.03683.pdf)
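As a quick illustration of what these transition-matrix papers build on: if the matrix T, with `T[i, j] = P(noisy label j | clean label i)`, is known, the model's clean-class probabilities can be pushed through T and the usual cross-entropy applied to the noisy labels (the "forward correction" of Patrini et al. above). A minimal NumPy sketch; the function name and toy numbers are ours:

```python
import numpy as np

def forward_corrected_ce(clean_probs, noisy_labels, T):
    """Cross-entropy against noisy labels after mapping the model's
    clean-class probabilities through T[i, j] = P(noisy j | clean i)."""
    noisy_probs = clean_probs @ T                        # P(noisy label | x)
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + 1e-12))              # epsilon guards log(0)

# Toy example: 3 classes with 20% symmetric label flipping.
T = np.full((3, 3), 0.1) + 0.7 * np.eye(3)               # rows sum to 1
clean_probs = np.array([[0.9, 0.05, 0.05],
                        [0.2, 0.70, 0.10]])
noisy_labels = np.array([0, 1])
print(forward_corrected_ce(clean_probs, noisy_labels, T))
```

In a deep network this matrix multiply sits between the softmax and the loss, so the underlying classifier is still trained to predict clean labels.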
### Adaptation Layer

1. S. Sukhbaatar, J. Bruna, M. Paluri, L. Bourdev, and R. Fergus, **Training convolutional networks with noisy labels**, in ICLR Workshop, 2015.
[paper](https://arxiv.org/pdf/1406.2080.pdf)

1. J. Goldberger and E. Ben-Reuven, **Training deep neural-networks using a noise adaptation layer**, in ICLR, 2017.
[paper](https://openreview.net/pdf?id=H12GRgcxg)

1. I. Misra, C. Lawrence Zitnick, M. Mitchell, and R. Girshick, **Seeing through the human reporting bias: Visual classifiers from noisy human-centric labels**, in CVPR, 2016.
[paper](https://arxiv.org/pdf/1512.06974.pdf)

### Loss Correction

1. G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu, **Making deep neural networks robust to label noise: A loss correction approach**, in CVPR, 2017.
[paper](https://arxiv.org/pdf/1609.03683.pdf)

1. D. Hendrycks, M. Mazeika, D. Wilson, and K. Gimpel, **Using trusted data to train deep networks on labels corrupted by severe noise**, in NeurIPS, 2018.
[paper](https://arxiv.org/pdf/1802.05300.pdf)

1. M. Lukasik, S. Bhojanapalli, A. K. Menon, and S. Kumar, **Does label smoothing mitigate label noise?** in ICML, 2020.
[paper](https://arxiv.org/pdf/2003.02819.pdf)
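Loss correction presumes an estimate of T. A common recipe, used by Patrini et al. above and revisited by the anchor-point papers in the next subsection, reads each row of T off the model's estimated noisy-class posterior at the example it is most confident about for that class. A minimal sketch under that assumption (the function name is ours):

```python
import numpy as np

def estimate_transition_matrix(noisy_posteriors):
    """noisy_posteriors: (n, c) array of model estimates of
    P(noisy label | x) on held-out data. Row i of T is read off at the
    'anchor point' of class i: the example with the highest estimated
    probability of carrying noisy label i."""
    n, c = noisy_posteriors.shape
    T = np.empty((c, c))
    for i in range(c):
        anchor = np.argmax(noisy_posteriors[:, i])  # most confident example for class i
        T[i] = noisy_posteriors[anchor]             # its posterior approximates P(noisy | clean i)
    return T / T.sum(axis=1, keepdims=True)         # renormalize rows to sum to 1
```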
### Prior Knowledge

1. B. Han, J. Yao, G. Niu, M. Zhou, I. Tsang, Y. Zhang, and M. Sugiyama, **Masking: A new perspective of noisy supervision**, in NeurIPS, 2018.
[paper](https://arxiv.org/pdf/1805.08193.pdf)

1. X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama, **Are anchor points really indispensable in label-noise learning?** in NeurIPS, 2019.
[paper](https://arxiv.org/pdf/1906.00189.pdf)

1. Y. Li, J. Yang, Y. Song, L. Cao, J. Luo, and L.-J. Li, **Learning from noisy labels with distillation**, in ICCV, 2017.
[paper](https://arxiv.org/pdf/1703.02391.pdf)

### Others

1. J. Krause, B. Sapp, A. Howard, H. Zhou, A. Toshev, T. Duerig, J. Philbin, and L. Fei-Fei, **The unreasonable effectiveness of noisy data for fine-grained recognition**, in ECCV, 2016.
[paper](https://arxiv.org/pdf/1511.06789.pdf)

1. C. G. Northcutt, T. Wu, and I. L. Chuang, **Learning with confident examples: Rank pruning for robust classification with noisy labels**, in UAI, 2017.
[paper](https://arxiv.org/pdf/1705.01936.pdf)

1. Y. Kim, J. Yim, J. Yun, and J. Kim, **NLNL: Negative learning for noisy labels**, in ICCV, 2019.
[paper](https://arxiv.org/pdf/1908.07387.pdf)

1. P. H. Seo, G. Kim, and B. Han, **Combinatorial inference against label noise**, in NeurIPS, 2019.
[paper](https://papers.nips.cc/paper/2019/file/0cb929eae7a499e50248a3a78f7acfc7-Paper.pdf)

1. T. Kaneko, Y. Ushiku, and T. Harada, **Label-noise robust generative adversarial networks**, in CVPR, 2019.
[paper](https://arxiv.org/pdf/1811.11165.pdf)

1. A. Lamy, Z. Zhong, A. K. Menon, and N. Verma, **Noise-tolerant fair classification**, in NeurIPS, 2019.
[paper](https://proceedings.neurips.cc/paper/2019/file/8d5e957f297893487bd98fa830fa6413-Paper.pdf)

1. J. Yao, H. Wu, Y. Zhang, I. W. Tsang, and J. Sun, **Safeguarded dynamic label regression for noisy supervision**, in AAAI, 2019.
[paper](https://arxiv.org/pdf/1903.02152.pdf)

## [Objective](#content)

### Regularization

1. S. Azadi, J. Feng, S. Jegelka, and T. Darrell, **Auxiliary image regularization for deep CNNs with noisy labels**, in ICLR, 2016.
[paper](https://arxiv.org/pdf/1511.07069.pdf)

1. D.-H. Lee, **Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks**, in ICML Workshop, 2013.

1. S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich, **Training deep neural networks on noisy labels with bootstrapping**, in ICLR Workshop, 2015.
[paper](https://arxiv.org/pdf/1412.6596.pdf)

1. H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, **mixup: Beyond empirical risk minimization**, in ICLR, 2018.
[paper](https://arxiv.org/pdf/1710.09412.pdf)

1. T. Miyato, S.-i. Maeda, M. Koyama, and S. Ishii, **Virtual adversarial training: a regularization method for supervised and semi-supervised learning**, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 8, pp. 1979–1993, 2018.
[paper](https://arxiv.org/pdf/1704.03976.pdf)

1. B. Han, G. Niu, X. Yu, Q. Yao, M. Xu, I. Tsang, and M. Sugiyama, **SIGUA: Forgetting may make learning with noisy labels more robust**, in ICML, 2020.
[paper](https://arxiv.org/pdf/1809.11008.pdf)
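Among the regularizers above, mixup (Zhang et al., 2018) is particularly easy to state: train on convex combinations of random example pairs and of their one-hot labels. A minimal NumPy sketch; the function name is ours, and in practice this runs per mini-batch inside the training loop:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Convex-combine a batch with a shuffled copy of itself; the same
    mixing weight lambda ~ Beta(alpha, alpha) is applied to inputs and
    one-hot labels, discouraging memorization of individual (possibly
    mislabeled) examples."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```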
### Reweighting

1. T. Liu and D. Tao, **Classification with noisy labels by importance reweighting**, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 447–461, 2016.
[paper](https://arxiv.org/pdf/1411.7718.pdf)

1. Y. Wang, A. Kucukelbir, and D. M. Blei, **Robust probabilistic modeling with bayesian data reweighting**, in ICML, 2017.
[paper](https://arxiv.org/pdf/1606.03860.pdf)

1. E. Arazo, D. Ortego, P. Albert, N. E. O’Connor, and K. McGuinness, **Unsupervised label noise modeling and loss correction**, in ICML, 2019.
[paper](https://arxiv.org/pdf/1904.11238.pdf)

1. J. Shu, Q. Xie, L. Yi, Q. Zhao, S. Zhou, Z. Xu, and D. Meng, **Meta-Weight-Net: Learning an explicit mapping for sample weighting**, in NeurIPS, 2019.
[paper](https://arxiv.org/pdf/1902.07379.pdf)

### Redesigning

1. A. K. Menon, A. S. Rawat, S. J. Reddi, and S. Kumar, **Can gradient clipping mitigate label noise?** in ICLR, 2020.
[paper](https://openreview.net/pdf?id=rklB76EKPr)

1. Z. Zhang and M. Sabuncu, **Generalized cross entropy loss for training deep neural networks with noisy labels**, in NeurIPS, 2018.
[paper](https://arxiv.org/pdf/1805.07836.pdf)

1. N. Charoenphakdee, J. Lee, and M. Sugiyama, **On symmetric losses for learning from corrupted labels**, in ICML, 2019.
[paper](http://proceedings.mlr.press/v97/charoenphakdee19a/charoenphakdee19a.pdf)

1. S. Thulasidasan, T. Bhattacharya, J. Bilmes, G. Chennupati, and J. Mohd-Yusof, **Combating label noise in deep learning using abstention**, in ICML, 2019.
[paper](https://arxiv.org/pdf/1905.10964.pdf)

1. Y. Lyu and I. W. Tsang, **Curriculum loss: Robust learning and generalization against label corruption**, in ICLR, 2020.
[paper](https://arxiv.org/pdf/1905.10045.pdf)

1. S. Laine and T. Aila, **Temporal ensembling for semi-supervised learning**, in ICLR, 2017.
[paper](https://arxiv.org/pdf/1610.02242.pdf)

1. D. T. Nguyen, C. K. Mummadi, T. P. N. Ngo, T. H. P. Nguyen, L. Beggel, and T. Brox, **SELF: Learning to filter noisy labels with self-ensembling**, in ICLR, 2020.
[paper](https://arxiv.org/pdf/1910.01842.pdf)

1. X. Ma, Y. Wang, M. E. Houle, S. Zhou, S. M. Erfani, S.-T. Xia, S. Wijewickrema, and J. Bailey, **Dimensionality-driven learning with noisy labels**, in ICML, 2018.
[paper](http://proceedings.mlr.press/v80/ma18d/ma18d.pdf)

### Others

1. S. Branson, G. Van Horn, and P. Perona, **Lean crowdsourcing: Combining humans and machines in an online system**, in CVPR, 2017.
[paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Branson_Lean_Crowdsourcing_Combining_CVPR_2017_paper.pdf)

1. A. Vahdat, **Toward robustness against label noise in training deep discriminative neural networks**, in NeurIPS, 2017.
[paper](https://arxiv.org/pdf/1706.00038.pdf)

1. H.-S. Chang, E. Learned-Miller, and A. McCallum, **Active bias: Training more accurate neural networks by emphasizing high variance samples**, in NeurIPS, 2017.
[paper](https://arxiv.org/pdf/1704.07433.pdf)

1. A. Khetan, Z. C. Lipton, and A. Anandkumar, **Learning from noisy singly-labeled data**, in ICLR, 2018.
[paper](https://arxiv.org/pdf/1712.04577.pdf)

1. D. Tanaka, D. Ikami, T. Yamasaki, and K. Aizawa, **Joint optimization framework for learning with noisy labels**, in CVPR, 2018.
[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Tanaka_Joint_Optimization_Framework_CVPR_2018_paper.pdf)

1. Y. Wang, W. Liu, X. Ma, J. Bailey, H. Zha, L. Song, and S.-T. Xia, **Iterative learning with open-set noisy labels**, in CVPR, 2018.
[paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Iterative_Learning_With_CVPR_2018_paper.pdf)

1. S. Jenni and P. Favaro, **Deep bilevel learning**, in ECCV, 2018.
[paper](https://openaccess.thecvf.com/content_ECCV_2018/papers/Simon_Jenni_Deep_Bilevel_Learning_ECCV_2018_paper.pdf)

1. Y. Wang, X. Ma, Z. Chen, Y. Luo, J. Yi, and J. Bailey, **Symmetric cross entropy for robust learning with noisy labels**, in ICCV, 2019.
[paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_Symmetric_Cross_Entropy_for_Robust_Learning_With_Noisy_Labels_ICCV_2019_paper.pdf)

1. J. Li, Y. Song, J. Zhu, L. Cheng, Y. Su, L. Ye, P. Yuan, and S. Han, **Learning from large-scale noisy web data with ubiquitous reweighting for image classification**, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[paper](https://arxiv.org/pdf/1811.00700.pdf)

1. Y. Xu, P. Cao, Y. Kong, and Y. Wang, **L_DMI: A novel information-theoretic loss function for training deep nets robust to label noise**, in NeurIPS, 2019.
[paper](https://openreview.net/pdf/14f442968372d127473b832165df3e78abc7a1db.pdf)

1. Y. Liu and H. Guo, **Peer loss functions: Learning from noisy labels without knowing noise rates**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/liu20e/liu20e.pdf)

1. X. Ma, H. Huang, Y. Wang, S. Romano, S. Erfani, and J. Bailey, **Normalized loss functions for deep learning with noisy labels**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/ma20c/ma20c.pdf)

## [Optimization](#content)

### Memorization Effect

1. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, **Understanding deep learning requires rethinking generalization**, in ICLR, 2017.
[paper](https://openreview.net/pdf?id=Sy8gdB9xx)

1. D. Arpit, S. Jastrzębski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, et al., **A closer look at memorization in deep networks**, in ICML, 2017.
[paper](http://proceedings.mlr.press/v70/arpit17a/arpit17a.pdf)
### Self-training

1. L. Jiang, Z. Zhou, T. Leung, L.-J. Li, and L. Fei-Fei, **MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels**, in ICML, 2018.
[paper](http://proceedings.mlr.press/v80/jiang18c/jiang18c.pdf)

1. M. Ren, W. Zeng, B. Yang, and R. Urtasun, **Learning to reweight examples for robust deep learning**, in ICML, 2018.
[paper](http://proceedings.mlr.press/v80/ren18a/ren18a.pdf)

1. L. Jiang, D. Huang, M. Liu, and W. Yang, **Beyond synthetic noise: Deep learning on controlled noisy labels**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/jiang20c/jiang20c.pdf)

### Co-training

1. B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama, **Co-teaching: Robust training of deep neural networks with extremely noisy labels**, in NeurIPS, 2018.
[paper](https://arxiv.org/pdf/1804.06872.pdf)

1. X. Yu, B. Han, J. Yao, G. Niu, I. W. Tsang, and M. Sugiyama, **How does disagreement help generalization against label corruption?** in ICML, 2019.
[paper](http://proceedings.mlr.press/v97/yu19b/yu19b.pdf)

1. Q. Yao, H. Yang, B. Han, G. Niu, and J. T. Kwok, **Searching to exploit memorization effect in learning with noisy labels**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/yao20b/yao20b.pdf)
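The co-training methods above all build on the memorization effect: early in training, clean examples tend to incur smaller loss than mislabeled ones. The core of Co-teaching (Han et al., 2018) is a small-loss selection in which each network picks the batch examples used to update its peer. A minimal sketch (the function name is ours; in training, `forget_rate` is usually ramped up toward the estimated noise rate):

```python
import numpy as np

def coteaching_select(losses_a, losses_b, forget_rate):
    """Small-loss selection for one mini-batch: each network keeps the
    (1 - forget_rate) fraction of examples on which it has the smallest
    per-example loss, and those indices are used to update the *other*
    network (cross-update), so the two networks filter noise for each
    other."""
    n_keep = int(len(losses_a) * (1.0 - forget_rate))
    idx_update_b = np.argsort(losses_a)[:n_keep]  # A's small-loss picks train B
    idx_update_a = np.argsort(losses_b)[:n_keep]  # B's small-loss picks train A
    return idx_update_a, idx_update_b
```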
### Beyond Memorization

1. J. Li, R. Socher, and S. C. Hoi, **DivideMix: Learning with noisy labels as semi-supervised learning**, in ICLR, 2020.
[paper](https://arxiv.org/pdf/2002.07394.pdf)

1. D. Hendrycks, K. Lee, and M. Mazeika, **Using pre-training can improve model robustness and uncertainty**, in ICML, 2019.
[paper](http://proceedings.mlr.press/v97/hendrycks19a/hendrycks19a.pdf)

1. D. Bahri, H. Jiang, and M. Gupta, **Deep k-NN for noisy labels**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/bahri20a/bahri20a.pdf)

1. P. Chen, B. Liao, G. Chen, and S. Zhang, **Understanding and utilizing deep neural networks trained with noisy labels**, in ICML, 2019.
[paper](http://proceedings.mlr.press/v97/chen19g/chen19g.pdf)

### Others

1. A. Veit, N. Alldrin, G. Chechik, I. Krasin, A. Gupta, and S. Belongie, **Learning from noisy large-scale datasets with minimal supervision**, in CVPR, 2017.
[paper](https://arxiv.org/pdf/1701.01619.pdf)

1. B. Zhuang, L. Liu, Y. Li, C. Shen, and I. Reid, **Attend in groups: a weakly-supervised deep learning framework for learning from web data**, in CVPR, 2017.
[paper](https://arxiv.org/pdf/1611.09960.pdf)

1. K.-H. Lee, X. He, L. Zhang, and L. Yang, **CleanNet: Transfer learning for scalable image classifier training with label noise**, in CVPR, 2018.
[paper](https://arxiv.org/pdf/1711.07131.pdf)

1. S. Guo, W. Huang, H. Zhang, C. Zhuang, D. Dong, M. R. Scott, and D. Huang, **CurriculumNet: Weakly supervised learning from large-scale web images**, in ECCV, 2018.
[paper](https://arxiv.org/pdf/1808.01097.pdf)

1. J. Deng, J. Guo, N. Xue, and S. Zafeiriou, **ArcFace: Additive angular margin loss for deep face recognition**, in CVPR, 2019.
[paper](https://arxiv.org/pdf/1801.07698.pdf)

1. X. Wang, S. Wang, J. Wang, H. Shi, and T. Mei, **Co-mining: Deep face recognition with noisy labels**, in ICCV, 2019.
[paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_Co-Mining_Deep_Face_Recognition_With_Noisy_Labels_ICCV_2019_paper.pdf)

1. J. Huang, L. Qu, R. Jia, and B. Zhao, **O2U-Net: A simple noisy-label detection approach for deep neural networks**, in ICCV, 2019.
[paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_O2U-Net_A_Simple_Noisy_Label_Detection_Approach_for_Deep_Neural_ICCV_2019_paper.pdf)

1. J. Han, P. Luo, and X. Wang, **Deep self-learning from noisy labels**, in ICCV, 2019.
[paper](https://arxiv.org/pdf/1908.02160.pdf)

1. H. Harutyunyan, K. Reing, G. V. Steeg, and A. Galstyan, **Improving generalization by controlling label-noise information in neural network weights**, in ICML, 2020.
[paper](https://arxiv.org/pdf/2002.07933.pdf)

1. H. Wei, L. Feng, X. Chen, and B. An, **Combating noisy labels by agreement: A joint training method with co-regularization**, in CVPR, 2020.
[paper](https://arxiv.org/pdf/2003.02752.pdf)

1. Z. Zhang, H. Zhang, S. O. Arik, H. Lee, and T. Pfister, **Distilling effective supervision from severe label noise**, in CVPR, 2020.
[paper](https://arxiv.org/pdf/1910.00701.pdf)

## [Future Directions](#content)

### New Datasets

1. T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, **Learning from massive noisy labeled data for image classification**, in CVPR, 2015.
[paper](https://openaccess.thecvf.com/content_cvpr_2015/papers/Xiao_Learning_From_Massive_2015_CVPR_paper.pdf)

1. L. Jiang, D. Huang, M. Liu, and W. Yang, **Beyond synthetic noise: Deep learning on controlled noisy labels**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/jiang20c/jiang20c.pdf)

1. W. Li, L. Wang, W. Li, E. Agustsson, and L. Van Gool, **WebVision database: Visual learning and understanding from web data**, arXiv preprint arXiv:1708.02862, 2017.
[paper](https://arxiv.org/pdf/1708.02862.pdf)

### Instance-dependent LNRL

1. A. Menon, B. Van Rooyen, and N. Natarajan, **Learning from binary labels with instance-dependent corruption**, Machine Learning, vol. 107, pp. 1561–1595, 2018.
[paper](https://arxiv.org/pdf/1605.00751.pdf)

1. J. Cheng, T. Liu, K. Ramamohanarao, and D. Tao, **Learning with bounded instance- and label-dependent label noise**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/cheng20c/cheng20c.pdf)

1. A. Berthon, B. Han, G. Niu, T. Liu, and M. Sugiyama, **Confidence scores make instance-dependent label-noise learning possible**, arXiv preprint arXiv:2001.03772, 2020.
[paper](https://arxiv.org/pdf/2001.03772.pdf)

### Adversarial LNRL

1. Y. Wang, D. Zou, J. Yi, J. Bailey, X. Ma, and Q. Gu, **Improving adversarial robustness requires revisiting misclassified examples**, in ICLR, 2020.
[paper](https://openreview.net/pdf?id=rklOg6EFwS)

1. J. Zhang, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli, **Attacks which do not kill training make adversarial learning stronger**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/zhang20z/zhang20z.pdf)

### Automated LNRL

1. Q. Yao, H. Yang, B. Han, G. Niu, and J. T. Kwok, **Searching to exploit memorization effect in learning with noisy labels**, in ICML, 2020.
[paper](http://arxiv.org/abs/1911.02377) [code](https://github.com/AutoML-4Paradigm/S2E)

### Noisy Data

1. J. Zhang, B. Han, L. Wynter, K. H. Low, and M. Kankanhalli, **Towards robust ResNet: A small step but a giant leap**, in IJCAI, 2019.
[paper](https://arxiv.org/pdf/1902.10887.pdf)

1. B. Han, Y. Pan, and I. W. Tsang, **Robust Plackett–Luce model for k-ary crowdsourced preferences**, Machine Learning, vol. 107, no. 4, pp. 675–702, 2018.
[paper](https://link.springer.com/content/pdf/10.1007/s10994-017-5674-0.pdf)

1. Y. Pan, B. Han, and I. W. Tsang, **Stagewise learning for noisy k-ary preferences**, Machine Learning, vol. 107, no. 8-10, pp. 1333–1361, 2018.
[paper](https://link.springer.com/content/pdf/10.1007/s10994-018-5716-2.pdf)

1. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang, and M. Sugiyama, **Butterfly: A panacea for all difficulties in wildly unsupervised domain adaptation**, arXiv preprint arXiv:1905.07720, 2019.
[paper](https://arxiv.org/pdf/1905.07720.pdf)

1. X. Yu, T. Liu, M. Gong, K. Zhang, K. Batmanghelich, and D. Tao, **Label-noise robust domain adaptation**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/yu20c/yu20c.pdf)

1. S. Wu, X. Xia, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu, **Multi-class classification from noisy-similarity-labeled data**, arXiv preprint arXiv:2002.06508, 2020.
[paper](https://arxiv.org/pdf/2002.06508.pdf)

1. C. Wang, B. Han, S. Pan, J. Jiang, G. Niu, and G. Long, **Cross-graph: Robust and unsupervised embedding for attributed graphs with corrupted structure**, in ICDM, 2020.
[paper]()

1. Y.-H. Wu, N. Charoenphakdee, H. Bao, V. Tangkaratt, and M. Sugiyama, **Imitation learning from imperfect demonstration**, in ICML, 2019.
[paper](https://arxiv.org/pdf/1901.09387.pdf)

1. D. S. Brown, W. Goo, P. Nagarajan, and S. Niekum, **Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations**, in ICML, 2019.
[paper](https://arxiv.org/pdf/1904.06387.pdf)

1. J. Audiffren, M. Valko, A. Lazaric, and M. Ghavamzadeh, **Maximum entropy semi-supervised inverse reinforcement learning**, in IJCAI, 2015.
[paper](https://hal.inria.fr/hal-01146187/document)

1. V. Tangkaratt, B. Han, M. E. Khan, and M. Sugiyama, **Variational imitation learning with diverse-quality demonstrations**, in ICML, 2020.
[paper](https://pdfs.semanticscholar.org/f319/069e750f7178727b7e161570d036ca34a082.pdf)

### Double Descent

1. P. Nakkiran, G. Kaplun, Y. Bansal, T. Yang, B. Barak, and I. Sutskever, **Deep double descent: Where bigger models and more data hurt**, in ICLR, 2020.
[paper](https://openreview.net/pdf?id=B1g5sA4twr)

1. Z. Yang, Y. Yu, C. You, J. Steinhardt, and Y. Ma, **Rethinking bias-variance trade-off for generalization of neural networks**, in ICML, 2020.
[paper](http://proceedings.mlr.press/v119/yang20j/yang20j.pdf)