├── LICENSE └── README.md /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022-2024 Gianni Franchi 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Awesome Uncertainty in Deep learning 2 | 3 |
4 | 5 | [![MIT License](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT) 6 | [![Awesome](https://awesome.re/badge.svg)](https://awesome.re) 7 | 8 |
9 | 10 | This repo is a collection of *awesome* papers, code, books, and blogs about uncertainty in deep learning. 11 | 12 | :star: Feel free to star and fork. :star: 13 | 14 | If you think we missed a paper, please open a pull request or send a message on the corresponding [GitHub discussion](https://github.com/ENSTA-U2IS-AI/awesome-uncertainty-deeplearning/discussions). Tell us where the article was published and when, and send us GitHub and arXiv links if they are available. 15 | 16 | We are also open to any ideas for improvements! 17 | 18 |

19 | Table of Contents 20 |

21 | 22 | - [Awesome Uncertainty in Deep learning](#awesome-uncertainty-in-deep-learning) 23 | - [Papers](#papers) 24 | - [Surveys](#surveys) 25 | - [Theory](#theory) 26 | - [Bayesian-Methods](#bayesian-methods) 27 | - [Ensemble-Methods](#ensemble-methods) 28 | - [Sampling/Dropout-based-Methods](#samplingdropout-based-methods) 29 | - [Post-hoc-Methods/Auxiliary-Networks](#post-hoc-methodsauxiliary-networks) 30 | - [Data-augmentation/Generation-based-methods](#data-augmentationgeneration-based-methods) 31 | - [Output-Space-Modeling/Evidential-deep-learning](#output-space-modelingevidential-deep-learning) 32 | - [Deterministic-Uncertainty-Methods](#deterministic-uncertainty-methods) 33 | - [Quantile-Regression/Predicted-Intervals](#quantile-regressionpredicted-intervals) 34 | - [Conformal Predictions](#conformal-predictions) 35 | - [Calibration/Evaluation-Metrics](#calibrationevaluation-metrics) 36 | - [Misclassification Detection \& Selective Classification](#misclassification-detection--selective-classification) 37 | - [Applications](#applications) 38 | - [Classification and Semantic-Segmentation](#classification-and-semantic-segmentation) 39 | - [Regression](#regression) 40 | - [Anomaly-detection and Out-of-Distribution-Detection](#anomaly-detection-and-out-of-distribution-detection) 41 | - [Object detection](#object-detection) 42 | - [Domain adaptation](#domain-adaptation) 43 | - [Semi-supervised](#semi-supervised) 44 | - [Natural Language Processing](#natural-language-processing) 45 | - [Others](#others) 46 | - [Datasets and Benchmarks](#datasets-and-benchmarks) 47 | - [Libraries](#libraries) 48 | - [Python](#python) 49 | - [PyTorch](#pytorch) 50 | - [JAX](#jax) 51 | - [TensorFlow](#tensorflow) 52 | - [Lectures and tutorials](#lectures-and-tutorials) 53 | - [Books](#books) 54 | - [Other Resources](#other-resources) 55 | 56 | # Papers 57 | 58 | ## Surveys 59 | 60 | **Conference** 61 | 62 | - A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications [[AISafety Workshop 2020]]() 63 | 64 | **Journal** 65 | 66 | - A survey of uncertainty in deep neural networks [[Artificial Intelligence Review 2023]]() - [[GitHub]]() 67 | - Prior and Posterior Networks: A Survey on Evidential Deep Learning Methods For Uncertainty Estimation [[TMLR2023]]() 68 | - A Survey on Uncertainty Estimation in Deep Learning Classification Systems from a Bayesian Perspective [[ACM2021]]() 69 | - Ensemble deep learning: A review [[Engineering Applications of AI 2021]]() 70 | - A review of uncertainty quantification in deep learning: Techniques, applications and challenges [[Information Fusion 2021]]() 71 | - Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods [[Machine Learning 2021]]() 72 | - Predictive inference with the jackknife+ [[The Annals of Statistics 2021]]() 73 | - Uncertainty in big data analytics: survey, opportunities, and challenges [[Journal of Big Data 2019]]() 74 | 75 | **Arxiv** 76 | 77 | - Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks [[ArXiv2024]]() - [[PyTorch]]() 78 | - A System-Level View on Out-of-Distribution Data in Robotics [[arXiv2022]]() 79 | - A Survey on Uncertainty Reasoning and Quantification for Decision Making: Belief Theory Meets Deep Learning [[arXiv2022]]() 80 | 81 | ## Theory 82 | 83 | **Conference** 84 | 85 | - A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods [[NeurIPS2023]]() 86 | - Towards Understanding 
Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning [[ICLR2023]]() 87 | - Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? [[ICLR2023]]() 88 | - Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs [[ICML2023]]() - [[PyTorch]]() 89 | - On Second-Order Scoring Rules for Epistemic Uncertainty Quantification [[ICML2023]]() 90 | - Neural Variational Gradient Descent [[AABI2022]]() 91 | - Top-label calibration and multiclass-to-binary reductions [[ICLR2022]]() 92 | - Bayesian Model Selection, the Marginal Likelihood, and Generalization [[ICML2022]]() 93 | - With malice towards none: Assessing uncertainty via equalized coverage [[AIES 2021]]() 94 | - Uncertainty in Gradient Boosting via Ensembles [[ICLR2021]]() - [[PyTorch]]() 95 | - Repulsive Deep Ensembles are Bayesian [[NeurIPS2021]]() - [[PyTorch]]() 96 | - Bayesian Optimization with High-Dimensional Outputs [[NeurIPS2021]]() 97 | - Residual Pathway Priors for Soft Equivariance Constraints [[NeurIPS2021]]() 98 | - Dangers of Bayesian Model Averaging under Covariate Shift [[NeurIPS2021]]() - [[TensorFlow]]() 99 | - A Mathematical Analysis of Learning Loss for Active Learning in Regression [[CVPR Workshop2021]]() 100 | - Why Are Bootstrapped Deep Ensembles Not Better? [[NeurIPS Workshop]]() 101 | - Deep Convolutional Networks as shallow Gaussian Processes [[ICLR2019]]() 102 | - On the accuracy of influence functions for measuring group effects [[NeurIPS2018]]() 103 | - To Trust Or Not To Trust A Classifier [[NeurIPS2018]]() - [[Python]]() 104 | - Understanding Measures of Uncertainty for Adversarial Example Detection [[UAI2018]]() 105 | 106 | **Journal** 107 | 108 | - Martingale posterior distributions [[Royal Statistical Society Series B]]() 109 | - A Unified Theory of Diversity in Ensemble Learning [[JMLR2023]]() 110 | - Multivariate Uncertainty in Deep Learning [[TNNLS2021]]() 111 | - A General Framework for Uncertainty Estimation in Deep Learning [[RAL2020]]() 112 | - Adaptive nonparametric confidence sets [[Ann. Statist. 
2006]]() 113 | 114 | **Arxiv** 115 | 116 | - Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping [[arXiv2022]]() 117 | - Efficient Gaussian Neural Processes for Regression [[arXiv2021]]() 118 | - Dense Uncertainty Estimation [[arXiv2021]]() - [[PyTorch]]() 119 | - A higher-order swiss army infinitesimal jackknife [[arXiv2019]]() 120 | 121 | ## Bayesian-Methods 122 | 123 | **Conference** 124 | 125 | - Training Bayesian Neural Networks with Sparse Subspace Variational Inference [[ICLR2024]]() 126 | - Variational Bayesian Last Layers [[ICLR2024]](https://arxiv.org/abs/2404.11599) 127 | - A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors [[ICLR2024]]() 128 | - Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning [[CVPR2023]]() 129 | - Robustness to corruption in pre-trained Bayesian neural networks [[ICLR2023]]() 130 | - Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift [[NeurIPS2023]]() - [[PyTorch]]() 131 | - Transformers Can Do Bayesian Inference [[ICLR2022]]() - [[PyTorch]]() 132 | - Uncertainty Estimation for Multi-view Data: The Power of Seeing the Whole Picture [[NeurIPS2022]]() 133 | - On Batch Normalisation for Approximate Bayesian Inference [[AABI2021]]() 134 | - Activation-level uncertainty in deep neural networks [[ICLR2021]]() 135 | - Laplace Redux – Effortless Bayesian Deep Learning [[NeurIPS2021]]() - [[PyTorch]]() 136 | - On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks [[UAI2021]]() 137 | - Learnable uncertainty under Laplace approximations [[UAI2021]]() 138 | - Bayesian Neural Networks with Soft Evidence [[ICML Workshop2021]]() - [[PyTorch]]() 139 | - TRADI: Tracking deep neural network weight distributions for uncertainty estimation [[ECCV2020]]() - [[PyTorch]]() 140 | - How Good is the Bayes Posterior in Deep Neural Networks Really? 
[[ICML2020]]() 141 | - Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors [[ICML2020]]() - [[TensorFlow]]() 142 | - Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [[ICML2020]]() - [[PyTorch]]() 143 | - Bayesian Deep Learning and a Probabilistic Perspective of Generalization [[NeurIPS2020]]() 144 | - A Simple Baseline for Bayesian Uncertainty in Deep Learning [[NeurIPS2019]]() - [[PyTorch]]() - [[TorchUncertainty]]() 145 | - Bayesian Uncertainty Estimation for Batch Normalized Deep Networks [[ICML2018]]() - [[TensorFlow]]() - [[TorchUncertainty]]() 146 | - Lightweight Probabilistic Deep Networks [[CVPR2018]]() - [[PyTorch]]() 147 | - A Scalable Laplace Approximation for Neural Networks [[ICLR2018]]() - [[Theano]]() 148 | - Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning [[ICML2018]]() 149 | - Weight Uncertainty in Neural Networks [[ICML2015]]() 150 | 151 | **Journal** 152 | 153 | - Analytically Tractable Hidden-States Inference in Bayesian Neural Networks [[JMLR2024]](https://jmlr.org/papers/v23/21-0758.html) 154 | - Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification [[TPAMI2023]]() - [[PyTorch]]() 155 | - Bayesian modeling of uncertainty in low-level vision [[IJCV1990]]() 156 | 157 | **Arxiv** 158 | 159 | - Density Uncertainty Layers for Reliable Uncertainty Estimation [[arXiv2023]]() 160 | 161 | ## Ensemble-Methods 162 | 163 | **Conference** 164 | 165 | - Input-gradient space particle inference for neural network ensembles [[ICLR2024]]() 166 | - Fast Ensembling with Diffusion Schrödinger Bridge [[ICLR2024]]() 167 | - Pathologies of Predictive Diversity in Deep Ensembles [[ICLR2024]]() 168 | - Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization [[ICML2023]]() 169 | - Bayesian Posterior Approximation With Stochastic Ensembles [[CVPR2023]]() 170 | - Normalizing Flow Ensembles for Rich Aleatoric and Epistemic Uncertainty Modeling [[AAAI2023]]() 171 | - Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models [[ICCV2023]]() - [[PyTorch]]() 172 | - Weighted Ensemble Self-Supervised Learning [[ICLR2023]]() 173 | - Agree to Disagree: Diversity through Disagreement for Better Transferability [[ICLR2023]]() - [[PyTorch]]() 174 | - Packed-Ensembles for Efficient Uncertainty Estimation [[ICLR2023]]() - [[TorchUncertainty]]() 175 | - Sub-Ensembles for Fast Uncertainty Estimation in Neural Networks [[ICCV Workshop2023]]() 176 | - Prune and Tune Ensembles: Low-Cost Ensemble Learning With Sparse Independent Subnetworks [[AAAI2022]]() 177 | - Deep Ensembles Work, But Are They Necessary? 
[[NeurIPS2022]]() 178 | - FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation [[NeurIPS2022]]() 179 | - Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity [[ICLR2022]]() - [[PyTorch]]() 180 | - On the Usefulness of Deep Ensemble Diversity for Out-of-Distribution Detection [[ECCV Workshop2022]]() 181 | - Masksembles for Uncertainty Estimation [[CVPR2021]]() - [[PyTorch/TensorFlow]]() 182 | - Robustness via Cross-Domain Ensembles [[ICCV2021]]() - [[PyTorch]]() 183 | - Uncertainty in Gradient Boosting via Ensembles [[ICLR2021]]() - [[PyTorch]]() 184 | - Uncertainty Quantification and Deep Ensembles [[NeurIPS2021]]() 185 | - Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles [[AAAI2020]]() 186 | - Uncertainty in Neural Networks: Approximately Bayesian Ensembling [[AISTATS 2020]]() 187 | - Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning [[ICLR2020]]() - [[PyTorch]]() 188 | - BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning [[ICLR2020]]() - [[TensorFlow]]() - [[TorchUncertainty]]() 189 | - Hyperparameter Ensembles for Robustness and Uncertainty Quantification [[NeurIPS2020]]() 190 | - Bayesian Deep Ensembles via the Neural Tangent Kernel [[NeurIPS2020]]() 191 | - Diversity with Cooperation: Ensemble Methods for Few-Shot Classification [[ICCV2019]]() 192 | - Accurate Uncertainty Estimation and Decomposition in Ensemble Learning [[NeurIPS2019]]() 193 | - High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach [[ICML2018]]() - [[TensorFlow]]() 194 | - Snapshot Ensembles: Train 1, get M for free [[ICLR2017]](https://arxiv.org/abs/1704.00109) - [[TorchUncertainty]]() 195 | - Simple and scalable predictive uncertainty estimation using deep ensembles [[NeurIPS2017]]() - [[TorchUncertainty]]() 196 | 197 | **Journal** 198 | 199 | - One Versus all for deep Neural Network for uncertainty (OVNNI) quantification [[IEEE Access2021]]() 200 | 201 | **Arxiv** 202 | 203 | - Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting [[arXiv2023]]() 204 | - Deep Ensemble as a Gaussian Process Approximate Posterior [[arXiv2022]]() 205 | - Sequential Bayesian Neural Subnetwork Ensembles [[arXiv2022]]() 206 | - Confident Neural Network Regression with Bootstrapped Deep Ensembles [[arXiv2022]]() - [[TensorFlow]]() 207 | - Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [[arXiv2021]]() 208 | - Deep Ensembles: A Loss Landscape Perspective [[arXiv2019]]() 209 | - Checkpoint ensembles: Ensemble methods from a single training process [[arXiv2017]]() - [[TorchUncertainty]]() 210 | 211 | ## Sampling/Dropout-based-Methods 212 | 213 | **Conference** 214 | 215 | - Enabling Uncertainty Estimation in Iterative Neural Networks [[ICML2024]]() - [[Pytorch]]() 216 | - Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models [[CVPR2024]]() - [[TorchUncertainty]]() 217 | - Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate [[AAAI2022]]() 218 | - Efficient Bayesian Uncertainty Estimation for nnU-Net [[MICCAI2022]]() 219 | - Dropout Sampling for Robust Object Detection in Open-Set Conditions [[ICRA2018]]() 220 | - Test-time data augmentation for estimation of heteroscedastic aleatoric uncertainty in deep neural networks [[MIDL2018]]() 221 | - Concrete Dropout [[NeurIPS2017]]() 222 | - Dropout as a Bayesian 
Approximation: Representing Model Uncertainty in Deep Learning [[ICML2016]]() - [[TorchUncertainty]]() 223 | 224 | **Journal** 225 | 226 | - A General Framework for Uncertainty Estimation in Deep Learning [[Robotics and Automation Letters2020]]() 227 | 228 | **Arxiv** 229 | 230 | - SoftDropConnect (SDC) – Effective and Efficient Quantification of the Network Uncertainty in Deep MR Image Analysis [[arXiv2022]]() 231 | 232 | ## Post-hoc-Methods/Auxiliary-Networks 233 | 234 | **Conference** 235 | 236 | - On the Limitations of Temperature Scaling for Distributions with Overlaps [[ICLR2024]](https://arxiv.org/abs/2306.00740) 237 | - Post-hoc Uncertainty Learning using a Dirichlet Meta-Model [[AAAI2023]]() - [[PyTorch]]() 238 | - ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models [[ICCV2023]]() 239 | - Out-of-Distribution Detection for Monocular Depth Estimation [[ICCV2023]]() 240 | - Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model [[AAAI2022]]() 241 | - Learning Structured Gaussians to Approximate Deep Ensembles [[CVPR2022]]() 242 | - Improving the reliability for confidence estimation [[ECCV2022]]() 243 | - Gradient-based Uncertainty for Monocular Depth Estimation [[ECCV2022]]() - [[PyTorch]]() 244 | - BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks [[ECCV2022]]() - [[PyTorch]]() 245 | - Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [[ICIP2022]]() 246 | - SLURP: Side Learning Uncertainty for Regression Problems [[BMVC2021]]() - [[PyTorch]]() 247 | - Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [[ICCV2021]]() - [[PyTorch]]() 248 | - Learning to Predict Error for MRI Reconstruction [[MICCAI2021]]() 249 | - A Mathematical Analysis of Learning Loss for Active Learning in Regression [[CVPR Workshop2021]]() 250 | - Real-time uncertainty estimation in computer vision via uncertainty-aware distribution distillation [[WACV2021]]() 251 | - On the uncertainty of self-supervised monocular depth estimation [[CVPR2020]]() - [[PyTorch]]() 252 | - Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel [[ICLR2020]]() - [[TensorFlow]]() 253 | - Gradients as a Measure of Uncertainty in Neural Networks [[ICIP2020]]() 254 | - Learning Loss for Test-Time Augmentation [[NeurIPS2020]]() 255 | - Learning loss for active learning [[CVPR2019]]() - [[PyTorch]]() (unofficial codes) 256 | - Addressing failure prediction by learning model confidence [[NeurIPS2019]]() - [[PyTorch]]() 257 | - Structured Uncertainty Prediction Networks [[CVPR2018]]() - [[TensorFlow]]() 258 | - Classification uncertainty of deep neural networks based on gradient information [[IAPR Workshop2018]]() 259 | 260 | **Journal** 261 | 262 | - Towards More Reliable Confidence Estimation [[TPAMI2023]]() 263 | - Confidence Estimation via Auxiliary Models [[TPAMI2021]]() 264 | 265 | **Arxiv** 266 | 267 | - Instance-Aware Observer Network for Out-of-Distribution Object Segmentation [[arXiv2022]]() 268 | - DEUP: Direct Epistemic Uncertainty Prediction [[arXiv2020]]() 269 | - Learning Confidence for Out-of-Distribution Detection in Neural Networks [[arXiv2018]]() 270 | 271 | ## Data-augmentation/Generation-based-methods 272 | 273 | **Conference** 274 | 275 | - Posterior Uncertainty Quantification in Neural Networks using Data Augmentation [[AISTATS2024]]() 276 | - Learning to Generate Training Datasets for Robust 
Semantic Segmentation [[WACV2024]]() 277 | - OpenMix: Exploring Outlier Samples for Misclassification Detection [[CVPR2023]]() - [[PyTorch]]() 278 | - On the Pitfall of Mixup for Uncertainty Calibration [[CVPR2023]]() 279 | - Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates [[AAAI2022]]() 280 | - Out-of-distribution Detection with Implicit Outlier Transformation [[ICLR2023]]() - [[PyTorch]]() 281 | - PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures [[CVPR2022]]() 282 | - RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness [[NeurIPS2022]]() - [[PyTorch]]() 283 | - Towards efficient feature sharing in MIMO architectures [[CVPR Workshop2022]]() 284 | - Robust Semantic Segmentation with Superpixel-Mix [[BMVC2021]]() - [[PyTorch]]() 285 | - MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks [[ICCV2021]]() - [[PyTorch]]() 286 | - Training independent subnetworks for robust prediction [[ICLR2021]]() 287 | - Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [[IJCAI2021]]() - [[PyTorch]]() 288 | - Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image Enhancement [[ICCV Workshop2021]]() 289 | - Uncertainty-Aware Deep Classifiers using Generative Models [[AAAI2020]]() 290 | - Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation [[ECCV2020]]() - [[PyTorch]]() 291 | - Detecting the Unexpected via Image Resynthesis [[ICCV2019]]() - [[PyTorch]]() 292 | - Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning [[ICML2020]]() 293 | - Deep Anomaly Detection with Outlier Exposure [[ICLR2019]]() 294 | - On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks [[NeurIPS2019]]() 295 | 296 | **Arxiv** 297 | 298 | - Reliability in Semantic Segmentation: Can We Use Synthetic Data? 
[[arXiv2023]]() 299 | - Quantifying uncertainty with GAN-based priors [[arXiv2019]]() - [[TensorFlow]]() 300 | 301 | ## Output-Space-Modeling/Evidential-deep-learning 302 | 303 | **Conference** 304 | 305 | - Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty [[ICLR2024]](https://arxiv.org/abs/2404.10980) 306 | - The Evidence Contraction Issue in Deep Evidential Regression: Discussion and Solution [[AAAI2024]]() 307 | - Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression [[AAAI2024]]() - [[PyTorch]]() 308 | - The Unreasonable Effectiveness of Deep Evidential Regression [[AAAI2023]]() - [[PyTorch]]() - [[TorchUncertainty]](https://github.com/ENSTA-U2IS-AI/torch-uncertainty) 309 | - Exploring and Exploiting Uncertainty for Incomplete Multi-View Classification [[CVPR2023]](https://arxiv.org/abs/2304.05165) 310 | - Plausible Uncertainties for Human Pose Regression [[ICCV2023]](https://openaccess.thecvf.com/content/ICCV2023/papers/Bramlage_Plausible_Uncertainties_for_Human_Pose_Regression_ICCV_2023_paper.pdf) - [[PyTorch]]() 311 | - Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [[ICML2023]](https://arxiv.org/pdf/2303.02045.pdf) - [[PyTorch]]() 312 | - Improving Evidential Deep Learning via Multi-task Learning [[AAAI2022]]() - [[PyTorch]](https://github.com/deargen/MT-ENet) 313 | - An Evidential Neural Network Model for Regression Based on Random Fuzzy Numbers [[BELIEF2022]]() 314 | - On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks [[ICLR2022]]() - [[PyTorch]](https://github.com/martius-lab/beta-nll) 315 | - Natural Posterior Network: Deep Bayesian Uncertainty for Exponential Family Distributions [[ICLR2022]]() - [[PyTorch]]() 316 | - Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation [[NeurIPS2022]]() 317 | - Fast Predictive Uncertainty for Classification with Bayesian Deep Networks [[UAI2022]]() - [[PyTorch]]() 318 | - Evaluating robustness of predictive uncertainty estimation: Are Dirichlet-based models reliable? 
[[ICML2021]]() 319 | - Trustworthy multimodal regression with mixture of normal-inverse gamma distributions [[NeurIPS2021]]() 320 | - Misclassification Risk and Uncertainty Quantification in Deep Classifiers [[WACV2021]]() 321 | - Ensemble Distribution Distillation [[ICLR2020]]() 322 | - Conservative Uncertainty Estimation By Fitting Prior Networks [[ICLR2020]]() 323 | - Being Bayesian about Categorical Probability [[ICML2020]]() - [[PyTorch]]() 324 | - Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts [[NeurIPS2020]]() - [[PyTorch]]() 325 | - Deep Evidential Regression [[NeurIPS2020]]() - [[TensorFlow]]() - [[TorchUncertainty]]() 326 | - Noise Contrastive Priors for Functional Uncertainty [[UAI2020]]() 327 | - Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples [[NeurIPS Workshop2020]]() 328 | - Uncertainty on Asynchronous Time Event Prediction [[NeurIPS2019]]() - [[TensorFlow]]() 329 | - Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness [[NeurIPS2019]]() 330 | - Quantifying Classification Uncertainty using Regularized Evidential Neural Networks [[AAAI FSS2019]]() 331 | - Uncertainty estimates and multi-hypotheses networks for optical flow [[ECCV2018]]() - [[TensorFlow]]() 332 | - Evidential Deep Learning to Quantify Classification Uncertainty [[NeurIPS2018]]() - [[PyTorch]]() 333 | - Predictive uncertainty estimation via prior networks [[NeurIPS2018]]() 334 | - What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? [[NeurIPS2017]]() 335 | - Estimating the Mean and Variance of the Target Probability Distribution [[(ICNN1994)]]() 336 | 337 | **Journal** 338 | 339 | - Prior and Posterior Networks: A Survey on Evidential Deep Learning Methods For Uncertainty Estimation [[TMLR2023]]() 340 | - Region-Based Evidential Deep Learning to Quantify Uncertainty and Improve Robustness of Brain Tumor Segmentation [[NCA2022]]() 341 | - An evidential classifier based on Dempster-Shafer theory and deep learning [[Neurocomputing2021]]() - [[TensorFlow]]() 342 | - Evidential fully convolutional network for semantic segmentation [[AppliedIntelligence2021]]() - [[TensorFlow]]() 343 | - Information Aware max-norm Dirichlet networks for predictive uncertainty estimation [[NeuralNetworks2021]]() 344 | - A neural network classifier based on Dempster-Shafer theory [[IEEETransSMC2000]]() 345 | 346 | **Arxiv** 347 | 348 | - Evidential Uncertainty Quantification: A Variance-Based Perspective [[arXiv2023]]() 349 | - Effective Uncertainty Estimation with Evidential Models for Open-World Recognition [[arXiv2022]]() 350 | - Multivariate Deep Evidential Regression [[arXiv2022]]() 351 | - Regression Prior Networks [[arXiv2020]]() 352 | - A Variational Dirichlet Framework for Out-of-Distribution Detection [[arXiv2019]]() 353 | - Uncertainty estimation in deep learning with application to spoken language assessment [[PhDThesis2019]]() 354 | - Inhibited softmax for uncertainty estimation in neural networks [[arXiv2018]]() 355 | - Quantifying Intrinsic Uncertainty in Classification via Deep Dirichlet Mixture Networks [[arXiv2018]]() 356 | 357 | ## Deterministic-Uncertainty-Methods 358 | 359 | **Conference** 360 | - A Rate-Distortion View of Uncertainty Quantification [[ICML2024]](https://arxiv.org/abs/2406.10775) - [[Tensorflow]](https://github.com/ifiaposto/Distance_Aware_Bottleneck) 361 | - Deep Deterministic Uncertainty: A Simple Baseline [[CVPR2023]]() - [[PyTorch]]() 362 
| - Gaussian Latent Representations for Uncertainty Estimation using Mahalanobis Distance in Deep Classifiers [[ICCV Workshop2023]]() - [[PyTorch]]() 363 | - A Simple and Explainable Method for Uncertainty Estimation using Attribute Prototype Networks [[ICCV Workshop2023]]() 364 | - Training, Architecture, and Prior for Deterministic Uncertainty Methods [[ICLR Workshop2023]]() - [[PyTorch]]() 365 | - Latent Discriminant deterministic Uncertainty [[ECCV2022]]() - [[PyTorch]]() 366 | - On the Practicality of Deterministic Epistemic Uncertainty [[ICML2022]]() 367 | - Improving Deterministic Uncertainty Estimation in Deep Learning for Classification and Regression [[CoRR2021]]() 368 | - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [[ICML2020]]() - [[PyTorch]]() 369 | - Training normalizing flows with the information bottleneck for competitive generative classification [[NeurIPS2020]]() 370 | - Simple and principled uncertainty estimation with deterministic deep learning via distance awareness [[NeurIPS2020]]() 371 | - Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [[ICML Workshop2020]]() 372 | - Sampling-Free Epistemic Uncertainty Estimation Using Approximated Variance Propagation [[ICCV2019]]() - [[PyTorch]]() 373 | - Single-Model Uncertainties for Deep Learning [[NeurIPS2019]]() - [[PyTorch]]() 374 | 375 | **Journal** 376 | 377 | - ZigZag: Universal Sampling-free Uncertainty Estimation Through Two-Step Inference [[TMLR2024]]() - [[PyTorch]]() 378 | - Density estimation in representation space [[EDSMLS2020]]() 379 | 380 | **Arxiv** 381 | 382 | - The Hidden Uncertainty in a Neural Network’s Activations [[arXiv2020]]() 383 | - A simple framework for uncertainty in contrastive learning [[arXiv2020]]() 384 | - Distance-based Confidence Score for Neural Network Classifiers [[arXiv2017]]() 385 | 386 | ## Quantile-Regression/Predicted-Intervals 387 | 388 | **Conference** 389 | 390 | - Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging [[ICML2022]]() - [[PyTorch]]() 391 | - Prediction Intervals: Split Normal Mixture from Quality-Driven Deep Ensembles [[UAI2020]]() - [[PyTorch]]() 392 | - Classification with Valid and Adaptive Coverage [[NeurIPS2020]]() 393 | - Single-Model Uncertainties for Deep Learning [[NeurIPS2019]]() - [[PyTorch]]() 394 | - High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach [[ICML2018]]() - [[TensorFlow]]() 395 | 396 | **Journal** 397 | 398 | - Scalable Uncertainty Quantification for Deep Operator Networks using Randomized Priors [[CMAME2022]]() 399 | - Exploring uncertainty in regression neural networks for construction of prediction intervals [[Neurocomputing2022]]() 400 | 401 | **Arxiv** 402 | 403 | - Interval Neural Networks: Uncertainty Scores [[arXiv2020]]() 404 | - Tight Prediction Intervals Using Expanded Interval Minimization [[arXiv2018]]() 405 | 406 | ## Conformal Predictions 407 | 408 | Awesome Conformal Prediction [[GitHub]]() 409 | 410 | 416 |
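The entry above points to a dedicated list of conformal-prediction resources. As a self-contained illustration of the split-conformal recipe that many of these methods build on, here is a minimal sketch in Python/NumPy; it assumes exchangeable calibration and test data, and the `model.predict` calls in the usage comment are hypothetical placeholders rather than the API of any library listed in this repo.

```python
import numpy as np

def split_conformal_interval(cal_residuals, test_pred, alpha=0.1):
    """Turn point predictions into intervals with ~(1 - alpha) marginal coverage.

    cal_residuals: absolute errors |y - y_hat| on a held-out calibration set (1D array).
    test_pred:     point predictions for the test inputs (1D array).
    """
    n = len(cal_residuals)
    # Finite-sample-corrected quantile level, clipped to 1 for very small n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_residuals, level, method="higher")
    return test_pred - q, test_pred + q

# Hypothetical usage with any pre-trained point regressor `model`:
#   cal_residuals = np.abs(y_cal - model.predict(x_cal))
#   lower, upper = split_conformal_interval(cal_residuals, model.predict(x_test))
```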
417 | ## Calibration/Evaluation-Metrics 418 | 419 | **Conference** 420 | 421 | - Smooth ECE: Principled Reliability Diagrams via Kernel Smoothing [[ICLR2024]]() 422 | - Calibrating Transformers via Sparse Gaussian Processes [[ICLR2023]]() - [[PyTorch]]() 423 | - Beyond calibration: estimating the grouping loss of modern neural networks [[ICLR2023]]() - [[Python]]() 424 | - Dual Focal Loss for Calibration [[ICML 2023]](https://arxiv.org/abs/2305.13665) 425 | - What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel [[SaTML2023]](https://arxiv.org/abs/2302.11188) 426 | - The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration [[CVPR2022]]() - [[PyTorch]]() 427 | - AdaFocal: Calibration-aware Adaptive Focal Loss [[NeurIPS2022]](https://arxiv.org/abs/2211.11838) 428 | - Calibrating Deep Neural Networks by Pairwise Constraints [[CVPR2022]]() 429 | - Top-label calibration and multiclass-to-binary reductions [[ICLR2022]]() 430 | - From label smoothing to label relaxation [[AAAI2021]]() 431 | - Diagnostic Uncertainty Calibration: Towards Reliable Machine Predictions in Medical Domain [[AIStats2021]](https://arxiv.org/pdf/2007.01659) 432 | - Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence [[NeurIPS2021]]() 433 | - Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification [[NeurIPS2021]]() 434 | - Confidence-Aware Learning for Deep Neural Networks [[ICML2020]]() - [[PyTorch]]() 435 | - Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning [[ICML2020]]() 436 | - Regularization via structural label smoothing [[ICML2020]]() 437 | - Well-Calibrated Regression Uncertainty in Medical Imaging with Deep Learning [[MIDL2020]]() - [[PyTorch]]() 438 | - Calibrating Deep Neural Networks using Focal Loss [[NeurIPS2020]]() - [[PyTorch]]() 439 | - Stationary activations for uncertainty calibration in deep learning [[NeurIPS2020]]() 440 | - Revisiting the evaluation of uncertainty estimation and its application to explore model complexity-uncertainty trade-off [[CVPR Workshop2020]]() 441 | - Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision [[CVPR Workshop2020]]() - [[PyTorch]]() 442 | - Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers [[ICLR2019]]() 443 | - Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration [[NeurIPS2019]]() - [[GitHub]]() 444 | - When does label smoothing help? 
[[NeurIPS2019]]() 445 | - Verified Uncertainty Calibration [[NeurIPS2019]]() - [[GitHub]]() 446 | - Measuring Calibration in Deep Learning [[CVPR Workshop2019]]() 447 | - Accurate Uncertainties for Deep Learning Using Calibrated Regression [[ICML2018]]() 448 | - Generalized zero-shot learning with deep calibration network [[NeurIPS2018]]() 449 | - On calibration of modern neural networks [[ICML2017]]() - [[TorchUncertainty]](https://github.com/ENSTA-U2IS-AI/torch-uncertainty) 450 | - On Fairness and Calibration [[NeurIPS2017]]() 451 | - Obtaining Well Calibrated Probabilities Using Bayesian Binning [[AAAI2015]]() 452 | 453 | **Journal** 454 | 455 | - Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error [[TMLR2023]]() - [[PyTorch]]() 456 | - Evaluating and Calibrating Uncertainty Prediction in Regression Tasks [[Sensors2022]]() 457 | - Calibrated Prediction Intervals for Neural Network Regressors [[IEEE Access 2018]]() - [[Python]]() 458 | 459 | **Arxiv** 460 | 461 | - Towards Understanding Label Smoothing [[arXiv2020]]() 462 | - An Investigation of how Label Smoothing Affects Generalization [[arXiv2020]]() 463 | 464 | ## Misclassification Detection & Selective Classification 465 | 466 | - A Data-Driven Measure of Relative Uncertainty for Misclassification Detection [[ICLR2024]](https://arxiv.org/abs/2306.01710) 467 | - Plugin estimators for selective classification with out-of-distribution detection [[ICLR2024]](https://arxiv.org/abs/2301.12386) 468 | - SURE: SUrvey REcipes for building reliable and robust deep networks [[CVPR2024]](https://arxiv.org/abs/2403.00543) - [[PyTorch]](https://yutingli0606.github.io/SURE/) 469 | - Augmenting Softmax Information for Selective Classification with Out-of-Distribution Data [[ACCV2022]]() 470 | - Anomaly Detection via Reverse Distillation from One-Class Embedding [[CVPR2022]]() 471 | - Rethinking Confidence Calibration for Failure Prediction [[ECCV2022]]() - [[PyTorch]]() 472 | 473 | ## Applications 474 | 475 | ### Classification and Semantic-Segmentation 476 | 477 | **Conference** 478 | 479 | - Modeling Multimodal Aleatoric Uncertainty in Segmentation with Mixture of Stochastic Experts [[ICLR2023]]() - [[PyTorch]]() 480 | - Anytime Dense Prediction with Confidence Adaptivity [[ICLR2022]]() - [[PyTorch]]() 481 | - CRISP - Reliable Uncertainty Estimation for Medical Image Segmentation [[MICCAI2022]]() 482 | - TBraTS: Trusted Brain Tumor Segmentation [[MICCAI2022]]() - [[PyTorch]]() 483 | - Robust Semantic Segmentation with Superpixel-Mix [[BMVC2021]]() - [[PyTorch]]() 484 | - Deep Deterministic Uncertainty for Semantic Segmentation [[ICMLW2021]]() 485 | - DEAL: Difficulty-aware Active Learning for Semantic Segmentation [[ACCV2020]]() 486 | - Classification with Valid and Adaptive Coverage [[NeurIPS2020]]() 487 | - Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation [[ICCV2019]]() 488 | - Human Uncertainty Makes Classification More Robust [[ICCV2019]]() 489 | - Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation [[MICCAI2019]]() - [[PyTorch]]() 490 | - Lightweight Probabilistic Deep Networks [[CVPR2018]]() - [[PyTorch]]() 491 | - A Probabilistic U-Net for Segmentation of Ambiguous Images [[NeurIPS2018]]() - [[PyTorch]]() 492 | - Evidential Deep Learning to Quantify Classification Uncertainty [[NeurIPS2018]]() - [[PyTorch]]() 493 | - To Trust Or Not To Trust A Classifier [[NeurIPS2018]]() 494 | - Classification 
uncertainty of deep neural networks based on gradient information [[IAPR Workshop2018]]() 495 | - Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding [[BMVC2017]]() 496 | 497 | **Journal** 498 | 499 | - Explainable machine learning in image classification models: An uncertainty quantification perspective [[KnowledgeBased2022]]() 500 | - Region-Based Evidential Deep Learning to Quantify Uncertainty and Improve Robustness of Brain Tumor Segmentation [[NCA2022]]() 501 | 502 | **Arxiv** 503 | 504 | - Leveraging Uncertainty Estimates to Improve Classifier Performance [[arXiv2023]]() 505 | - Evaluating Bayesian Deep Learning Methods for Semantic Segmentation [[arXiv2018]]() 506 | 507 | ### Regression 508 | 509 | **Conference** 510 | 511 | - Learning the Distribution of Errors in Stereo Matching for Joint Disparity and Uncertainty Estimation [[CVPR2023]]() - [[PyTorch]]() 512 | - Probabilistic MIMO U-Net: Efficient and Accurate Uncertainty Estimation for Pixel-wise Regression [[ICCV Workshop2023]]() - [[PyTorch]]() 513 | - Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate [[AAAI2022]]() 514 | - Learning Structured Gaussians to Approximate Deep Ensembles [[CVPR2022]]() 515 | - Uncertainty Quantification in Depth Estimation via Constrained Ordinal Regression [[ECCV2022]]() 516 | - On Monocular Depth Estimation and Uncertainty Quantification using Classification Approaches for Regression [[ICIP2022]]() 517 | - Anytime Dense Prediction with Confidence Adaptivity [[ICLR2022]]() - [[PyTorch]]() 518 | - Variational Depth Networks: Uncertainty-Aware Monocular Self-supervised Depth Estimation [[ECCV Workshop2022]]() 519 | - SLURP: Side Learning Uncertainty for Regression Problems [[BMVC2021]]() - [[PyTorch]]() 520 | - Robustness via Cross-Domain Ensembles [[ICCV2021]]() - [[PyTorch]]() 521 | - Learning to Predict Error for MRI Reconstruction [[MICCAI2021]]() 522 | - On the uncertainty of self-supervised monocular depth estimation [[CVPR2020]]() - [[PyTorch]]() 523 | - Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel [[ICLR2020]]() - [[TensorFlow]]() 524 | - Fast Uncertainty Estimation for Deep Learning Based Optical Flow [[IROS2020]]() 525 | - Well-Calibrated Regression Uncertainty in Medical Imaging with Deep Learning [[MIDL2020]]() - [[PyTorch]]() 526 | - Deep Evidential Regression [[NeurIPS2020]]() - [[TensorFlow]]() 527 | - Inferring Distributions Over Depth from a Single Image [[IROS2019]]() - [[TensorFlow]]() 528 | - Multi-Task Learning based on Separable Formulation of Depth Estimation and its Uncertainty [[CVPR Workshop2019]]() 529 | - Lightweight Probabilistic Deep Networks [[CVPR2018]]() - [[PyTorch]]() 530 | - Structured Uncertainty Prediction Networks [[CVPR2018]]() - [[TensorFlow]]() 531 | - Uncertainty estimates and multi-hypotheses networks for optical flow [[ECCV2018]]() - [[TensorFlow]]() 532 | - Accurate Uncertainties for Deep Learning Using Calibrated Regression [[ICML2018]]() 533 | 534 | **Journal** 535 | 536 | - How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [[TMLR2023]]() - [[PyTorch]]() 537 | - Evaluating and Calibrating Uncertainty Prediction in Regression Tasks [[Sensors2022]]() 538 | - Exploring uncertainty in regression neural networks for construction of prediction intervals [[Neurocomputing2022]]() 539 | - Wasserstein Dropout [[Machine Learning 2022]]() - [[PyTorch]]() 540 | - Deep Distribution Regression [[Computational Statistics & Data Analysis2021]]() 541 | - Calibrated Prediction Intervals for Neural Network Regressors [[IEEE Access 2018]]() - [[Python]]() 542 | - Learning a Confidence Measure for Optical Flow [[TPAMI2013]]() 543 | 544 | **Arxiv** 545 | 546 | - Understanding pathologies of deep heteroskedastic regression [[arXiv2024]]() 547 | - Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [[arXiv2023]]() 548 | - UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural Representations for Computed Tomography [[arXiv2022]]() 549 | - Efficient Gaussian Neural Processes for Regression [[arXiv2021]]() 550 |
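Many of the regression papers above model heteroscedastic aleatoric uncertainty by predicting a per-input mean and variance and training with the Gaussian negative log-likelihood (the approach popularized by "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", listed earlier). Below is a minimal plain-PyTorch sketch of that idea; the `MeanVarianceMLP` module and the toy data are hypothetical stand-ins, not code from any listed paper.

```python
import torch
import torch.nn as nn

class MeanVarianceMLP(nn.Module):
    """Hypothetical regressor with separate mean and log-variance heads."""

    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Heteroscedastic Gaussian negative log-likelihood (up to an additive constant).
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

# Toy training step on random data.
model = MeanVarianceMLP(in_dim=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 8), torch.randn(32, 1)
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# logvar.exp() gives the predicted aleatoric variance for each input.
```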
551 | ### Anomaly-detection and Out-of-Distribution-Detection 552 | 553 | **Conference** 554 | 555 | - Learning Transferable Negative Prompts for Out-of-Distribution Detection [[CVPR2024]]() - [[PyTorch]]() 556 | - Epistemic Uncertainty Quantification For Pre-trained Neural Networks [[CVPR2024]]() 557 | - NECO: NEural Collapse Based Out-of-distribution Detection [[ICLR2024]]() 558 | - When and How Does In-Distribution Label Help Out-of-Distribution Detection? [[ICML2024]]() - [[PyTorch]]() 559 | - Anomaly Detection under Distribution Shift [[ICCV2023]]() - [[PyTorch]]() 560 | - Normalizing Flows for Human Pose Anomaly Detection [[ICCV2023]](https://orhir.github.io/STG_NF/) - [[PyTorch]](https://github.com/orhir/stg-nf) 561 | - RbA: Segmenting Unknown Regions Rejected by All [[ICCV2023]](https://openaccess.thecvf.com/content/ICCV2023/papers/Nayal_RbA_Segmenting_Unknown_Regions_Rejected_by_All_ICCV_2023_paper.pdf) - [[PyTorch]](https://github.com/NazirNayal8/RbA) 562 | - Uncertainty-Aware Optimal Transport for Semantically Coherent Out-of-Distribution Detection [[CVPR2023]]() - [[PyTorch]]() 563 | - Modeling the Distributional Uncertainty for Salient Object Detection Models [[CVPR2023]](https://npucvr.github.io/Distributional_uncer/) - [[PyTorch]](https://github.com/txynwpu/Distributional_uncertainty_SOD) 564 | - SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [[CVPR2023]]() - [[PyTorch]]() 565 | - How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? [[ICLR2023]]() - [[PyTorch]]() 566 | - Modeling the Data-Generating Process is Necessary for Out-of-Distribution Generalization [[ICLR2023]]() 567 | - Can CNNs Be More Robust Than Transformers? 
[[ICLR2023]]() 568 | - A framework for benchmarking class-out-of-distribution detection and its application to ImageNet [[ICLR2023]]() 569 | - Extremely Simple Activation Shaping for Out-of-Distribution Detection [[ICLR2023]]() - [[PyTorch]]() 570 | - Quantification of Uncertainty with Adversarial Models [[NeurIPS2023]]() 571 | - The Robust Semantic Segmentation UNCV2023 Challenge Results [[ICCV Workshop2023]](https://arxiv.org/abs/2309.15478) 572 | - Continual Evidential Deep Learning for Out-of-Distribution Detection [[ICCV Workshop2023]](https://openaccess.thecvf.com/content/ICCV2023W/VCL/html/Aguilar_Continual_Evidential_Deep_Learning_for_Out-of-Distribution_Detection_ICCVW_2023_paper.html) 573 | - Far Away in the Deep Space: Nearest-Neighbor-Based Dense Out-of-Distribution Detection [[ICCV Workshop2023]]() 574 | - Gaussian Latent Representations for Uncertainty Estimation using Mahalanobis Distance in Deep Classifiers [[ICCV Workshop2023]]() 575 | - Calibrated Out-of-Distribution Detection with a Generic Representation [[ICCV Workshop2023]]() - [[PyTorch]]() 576 | - Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model [[AAAI2022]]() 577 | - Towards Total Recall in Industrial Anomaly Detection [[CVPR2022]]() - [[PyTorch]]() 578 | - POEM: Out-of-Distribution Detection with Posterior Sampling [[ICML2022]]() - [[PyTorch]]() 579 | - VOS: Learning What You Don't Know by Virtual Outlier Synthesis [[ICLR2022]]() - [[PyTorch]]() 580 | - Fully Convolutional Cross-Scale-Flows for Image-based Defect Detection [[WACV2022]]() - [[PyTorch]]() 581 | - Out-of-Distribution Detection Using Union of 1-Dimensional Subspaces [[CVPR2021]]() - [[PyTorch]]() 582 | - NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization [[ICCV2021]]() 583 | - On the Importance of Gradients for Detecting Distributional Shifts in the Wild [[NeurIPS2021]]() 584 | - Exploring the Limits of Out-of-Distribution Detection [[NeurIPS2021]]() 585 | - Detecting out-of-distribution image without learning from out-of-distribution data. 
[[CVPR2020]]() 586 | - Learning Open Set Network with Discriminative Reciprocal Points [[ECCV2020]]() 587 | - Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation [[ECCV2020]]() - [[PyTorch]]() 588 | - NADS: Neural Architecture Distribution Search for Uncertainty Awareness [[ICML2020]]() 589 | - PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization [[ICPR2020]]() - [[PyTorch]]() 590 | - Energy-based Out-of-distribution Detection [[NeurIPS2020]]() 591 | - Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples [[NeurIPS Workshop2020]]() 592 | - Memorizing Normality to Detect Anomaly: Memory-Augmented Deep Autoencoder for Unsupervised Anomaly Detection [[ICCV2019]]() - [[PyTorch]]() 593 | - Detecting the Unexpected via Image Resynthesis [[ICCV2019]]() - [[PyTorch]]() 594 | - Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks [[ICLR2018]]() 595 | - A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks [[ICLR2017]]() - [[TensorFlow]]() 596 | 597 | **Journal** 598 | 599 | - Revisiting Confidence Estimation: Towards Reliable Failure Prediction [[TPAMI2023]](https://www.computer.org/csdl/journal/tp/5555/01/10356834/1SQHDHvGg9i) - [[PyTorch]]() 600 | - One Versus all for deep Neural Network for uncertaInty (OVNNI) quantification [[IEEE Access2021]]() 601 | 602 | **Arxiv** 603 | 604 | - Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization [[arXiv2023]]() - [[PyTorch]]() 605 | - A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection [[arXiv2021]]() 606 | - Generalized out-of-distribution detection: A survey [[arXiv2021]]() 607 | - Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? [[arXiv2021]]() 608 | - Frequentist uncertainty estimates for deep learning [[arXiv2018]]() 609 | 610 | ### Object detection 611 | 612 | **Conference** 613 | 614 | - Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [[CVPR2023]]() - [[PyTorch]]() 615 | - Parametric and Multivariate Uncertainty Calibration for Regression and Object Detection [[ECCV Workshop2022]]() - [[PyTorch]]() 616 | - Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors [[ICLR2021]]() - [[PyTorch]]() 617 | - Multivariate Confidence Calibration for Object Detection [[CVPR Workshop2020]]() - [[PyTorch]]() 618 | - Gaussian YOLOv3: An Accurate and Fast Object Detector Using Localization Uncertainty for Autonomous Driving [[ICCV2019]]() - [[CUDA]]() - [[PyTorch]]() - [[Keras]]() 619 | 620 | ### Domain adaptation 621 | 622 | **Conference** 623 | 624 | - Guiding Pseudo-labels with Uncertainty Estimation for Source-free Unsupervised Domain Adaptation [[CVPR2023]]() - [[PyTorch]](https://github.com/mattialitrico/guiding-pseudo-labels-with-uncertainty-estimation-for-source-free-unsupervised-domain-adaptation) 625 | - Uncertainty-guided Source-free 626 | Domain Adaptation [[ECCV2022]]() - [[PyTorch]]() 627 | 628 | ### Semi-supervised 629 | 630 | **Conference** 631 | 632 | - Confidence Estimation Using Unlabeled Data [[ICLR2023]]() - [[PyTorch]]() 633 | 634 | ### Natural Language Processing 635 | 636 | Awesome LLM Uncertainty, Reliability, & Robustness [[GitHub]]() 637 | 638 | 639 | **Conference** 640 | 641 | - R-U-SURE? 
Uncertainty-Aware Code Suggestions By Maximizing Utility Across Random User Intents [[ICML2023]]() - [[GitHub]](https://github.com/google-research/r_u_sure) 642 | - Strength in Numbers: Estimating Confidence of Large Language Models by Prompt Agreement [[TrustNLP2023]]() - [[GitHub]](https://github.com/JHU-CLSP/Confidence-Estimation-TrustNLP2023) 643 | - Disentangling Uncertainty in Machine Translation Evaluation [[EMNLP2022]]() - [[PyTorch]]() 644 | - Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [[EMNLP2022 Findings]]() 645 | - DATE: Detecting Anomalies in Text via Self-Supervision of Transformers [[NAACL2021]]() 646 | - Calibrating Structured Output Predictors for Natural Language Processing [[ACL2020]]() 647 | - Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data [[EMNLP2020]]() - [[PyTorch]](https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning) 648 | 649 | **Journal** 650 | - How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering [[TACL2021]](https://arxiv.org/abs/2012.00955) - [[PyTorch]](https://github.com/jzbjyb/lm-calibration) 651 | 652 | **Arxiv** 653 | 654 | - Gaussian Stochastic Weight Averaging for Bayesian Low-Rank Adaptation of Large Language Models [[arXiv2024]]() 655 | - To Believe or Not to Believe Your LLM [[arXiv2024]]() 656 | - Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [[arXiv2023]]() 657 | 658 | ### Others 659 | 660 | **Conference** 661 | 662 | - PaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness [[CVPR2024]]() - [[Website]]() 663 | - Uncertainty Quantification via Stable Distribution Propagation [[ICLR2024]]() 664 | - Assessing Uncertainty in Similarity Scoring: Performance & Fairness in Face Recognition [[ICLR2024]]() 665 | 666 | **Arxiv** 667 | 668 | - Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood [[arXiv2024]]() 669 | - Urban 3D Panoptic Scene Completion with Uncertainty Awareness [[arXiv2023]]() - [[PyTorch]]() 670 | 671 | # Datasets and Benchmarks 672 | 673 | - SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain Adaptation [[CVPR2022]]() 674 | - MUAD: Multiple Uncertainties for Autonomous Driving, a benchmark for multiple uncertainty types and tasks [[BMVC2022]]() - [[PyTorch]]() 675 | - ACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding [[ICCV2021]]() 676 | - The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection [[IJCV2021]]() 677 | - SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [[NeurIPS2021]]() 678 | - Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning [[arXiv2021]]() - [[TensorFlow]]() 679 | - Curriculum Model Adaptation with Synthetic and Real Data for Semantic Foggy Scene Understanding [[IJCV2020]]() 680 | - Benchmarking the Robustness of Semantic Segmentation Models [[CVPR2020]]() 681 | - Fishyscapes: A Benchmark for Safe Semantic Segmentation in Autonomous Driving [[ICCV Workshop2019]]() 682 | - Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming [[NeurIPS Workshop2019]]() - [[GitHub]]() 683 | - Semantic Foggy Scene Understanding with Synthetic Data [[IJCV2018]]() 684 | - Lost and Found: Detecting Small Road Hazards for Self-Driving Vehicles [[IROS2016]]() 685 | 686 | # Libraries 687 | 688 | ## Python 689 | 690 | - Uncertainty Calibration Library [[GitHub]]() 691 | - MAPIE: Model Agnostic Prediction Interval Estimator [[Sklearn]](https://github.com/scikit-learn-contrib/MAPIE) 692 | - Uncertainty Toolbox [[GitHub]]() 693 | - OpenOOD: Benchmarking Generalized OOD Detection [[GitHub]]() 694 | - Darts: Forecasting and anomaly detection on time series [[GitHub]]() 695 | - Mixture Density Networks (MDN) for distribution and uncertainty estimation [[GitHub]]() 696 | 697 | ## PyTorch 698 | 699 | - TorchUncertainty [[GitHub]]() 700 | - Bayesian Torch [[GitHub]]() 701 | - Blitz: A Bayesian Neural Network library for PyTorch [[GitHub]]() 702 |
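The PyTorch libraries above package many of the baselines collected in this repo. As a minimal illustration of one such baseline, Monte Carlo dropout (in the spirit of "Dropout as a Bayesian Approximation", listed earlier), here is a plain-PyTorch sketch that does not depend on any of these libraries; the toy classifier is a hypothetical stand-in for your own model.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    """Average several stochastic forward passes with dropout kept active.

    Returns the mean class probabilities and their per-class standard
    deviation, a rough sample-based uncertainty signal.
    """
    model.eval()
    # Re-enable only the dropout layers; everything else stays in eval mode.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

# Hypothetical toy classifier containing dropout layers.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 10))
mean_probs, std_probs = mc_dropout_predict(model, torch.randn(4, 16))
```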
703 | ## JAX 704 | 705 | - Fortuna [[GitHub - JAX]]() 706 | 707 | ## TensorFlow 708 | 709 | - TensorFlow Probability [[Website]]() 710 | 711 | # Lectures and tutorials 712 | 713 | - Dan Hendrycks: Intro to ML Safety course [[Website]]() 714 | - Uncertainty and Robustness in Deep Learning Workshop in ICML (2020, 2021) [[SlidesLive]]() 715 | - Yarin Gal: Bayesian Deep Learning 101 [[Website]]() 716 | - MIT 6.S191: Evidential Deep Learning and Uncertainty (2021) [[Youtube]]() 717 | - Hands-on Bayesian Neural Networks - a Tutorial for Deep Learning Users [[IEEE Computational Intelligence Magazine]](https://arxiv.org/pdf/2007.06823.pdf) 718 | 719 | # Books 720 | 721 | - The “Probabilistic Machine Learning” book series by Kevin Murphy [[Book]]() 722 | 723 | # Other Resources 724 | 725 | Uncertainty Quantification in Deep Learning [[GitHub]]() 726 | 727 | Awesome Out-of-distribution Detection [[GitHub]]() 728 | 729 | Anomaly Detection Learning Resources [[GitHub]]() 730 | 731 | Awesome Conformal Prediction [[GitHub]]() 732 | 733 | Awesome LLM Uncertainty, Reliability, & Robustness [[GitHub]]() 734 | 735 | UQSay - Seminars on Uncertainty Quantification (UQ), Design and Analysis of Computer Experiments (DACE) and related topics @ Paris Saclay [[Website]]() 736 | 737 | ProbAI summer school [[Website]]() 738 | 739 | Gaussian process summer school [[Website]]() 740 | --------------------------------------------------------------------------------