├── LICENSE
└── README.md
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022-2024 Gianni Franchi
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Awesome Uncertainty in Deep Learning
2 |
3 |
4 |
5 | [](https://opensource.org/licenses/MIT)
6 | [](https://awesome.re)
7 |
8 |
9 |
10 | This repo is a collection of *awesome* papers, code, books, and blogs about uncertainty in deep learning.
11 |
12 | :star: Feel free to star and fork. :star:
13 |
14 | If you think we missed a paper, please open a pull request or send a message on the corresponding [GitHub discussion](https://github.com/ENSTA-U2IS-AI/awesome-uncertainty-deeplearning/discussions). Tell us where and when the article was published, and send us GitHub and arXiv links if they are available.
15 |
16 | We are also open to any ideas for improvements!
17 |
18 |
19 | **Table of Contents**
20 |
21 |
22 | - [Awesome Uncertainty in Deep Learning](#awesome-uncertainty-in-deep-learning)
23 | - [Papers](#papers)
24 | - [Surveys](#surveys)
25 | - [Theory](#theory)
26 | - [Bayesian-Methods](#bayesian-methods)
27 | - [Ensemble-Methods](#ensemble-methods)
28 | - [Sampling/Dropout-based-Methods](#samplingdropout-based-methods)
29 | - [Post-hoc-Methods/Auxiliary-Networks](#post-hoc-methodsauxiliary-networks)
30 | - [Data-augmentation/Generation-based-methods](#data-augmentationgeneration-based-methods)
31 | - [Output-Space-Modeling/Evidential-deep-learning](#output-space-modelingevidential-deep-learning)
32 | - [Deterministic-Uncertainty-Methods](#deterministic-uncertainty-methods)
33 | - [Quantile-Regression/Predicted-Intervals](#quantile-regressionpredicted-intervals)
34 | - [Conformal Predictions](#conformal-predictions)
35 | - [Calibration/Evaluation-Metrics](#calibrationevaluation-metrics)
36 | - [Misclassification Detection \& Selective Classification](#misclassification-detection--selective-classification)
37 | - [Applications](#applications)
38 | - [Classification and Semantic-Segmentation](#classification-and-semantic-segmentation)
39 | - [Regression](#regression)
40 | - [Anomaly-detection and Out-of-Distribution-Detection](#anomaly-detection-and-out-of-distribution-detection)
41 | - [Object detection](#object-detection)
42 | - [Domain adaptation](#domain-adaptation)
43 | - [Semi-supervised](#semi-supervised)
44 | - [Natural Language Processing](#natural-language-processing)
45 | - [Others](#others)
46 | - [Datasets and Benchmarks](#datasets-and-benchmarks)
47 | - [Libraries](#libraries)
48 | - [Python](#python)
49 | - [PyTorch](#pytorch)
50 | - [JAX](#jax)
51 | - [TensorFlow](#tensorflow)
52 | - [Lectures and tutorials](#lectures-and-tutorials)
53 | - [Books](#books)
54 | - [Other Resources](#other-resources)
55 |
56 | # Papers
57 |
58 | ## Surveys
59 |
60 | **Conference**
61 |
62 | - A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications [[AISafety Workshop 2020]]()
63 |
64 | **Journal**
65 |
66 | - A survey of uncertainty in deep neural networks [[Artificial Intelligence Review 2023]]() - [[GitHub]]()
67 | - Prior and Posterior Networks: A Survey on Evidential Deep Learning Methods For Uncertainty Estimation [[TMLR2023]]()
68 | - A Survey on Uncertainty Estimation in Deep Learning Classification Systems from a Bayesian Perspective [[ACM2021]]()
69 | - Ensemble deep learning: A review [[Engineering Applications of AI 2021]]()
70 | - A review of uncertainty quantification in deep learning: Techniques, applications and challenges [[Information Fusion 2021]]()
71 | - Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods [[Machine Learning 2021]]()
72 | - Predictive inference with the jackknife+ [[The Annals of Statistics 2021]]()
73 | - Uncertainty in big data analytics: survey, opportunities, and challenges [[Journal of Big Data 2019]]()
74 |
75 | **Arxiv**
76 |
77 | - Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks [[arXiv2024]]() - [[PyTorch]]()
78 | - A System-Level View on Out-of-Distribution Data in Robotics [[arXiv2022]]()
79 | - A Survey on Uncertainty Reasoning and Quantification for Decision Making: Belief Theory Meets Deep Learning [[arXiv2022]]()
80 |
81 | ## Theory
82 |
83 | **Conference**
84 |
85 | - A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods [[NeurIPS2023]]()
86 | - Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning [[ICLR2023]]()
87 | - Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? [[ICLR2023]]()
88 | - Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs [[ICML2023]]() - [[PyTorch]]()
89 | - On Second-Order Scoring Rules for Epistemic Uncertainty Quantification [[ICML2023]]()
90 | - Neural Variational Gradient Descent [[AABI2022]]()
91 | - Top-label calibration and multiclass-to-binary reductions [[ICLR2022]]()
92 | - Bayesian Model Selection, the Marginal Likelihood, and Generalization [[ICML2022]]()
93 | - With malice towards none: Assessing uncertainty via equalized coverage [[AIES 2021]]()
94 | - Uncertainty in Gradient Boosting via Ensembles [[ICLR2021]]() - [[PyTorch]]()
95 | - Repulsive Deep Ensembles are Bayesian [[NeurIPS2021]]() - [[PyTorch]]()
96 | - Bayesian Optimization with High-Dimensional Outputs [[NeurIPS2021]]()
97 | - Residual Pathway Priors for Soft Equivariance Constraints [[NeurIPS2021]]()
98 | - Dangers of Bayesian Model Averaging under Covariate Shift [[NeurIPS2021]]() - [[TensorFlow]]()
99 | - A Mathematical Analysis of Learning Loss for Active Learning in Regression [[CVPR Workshop2021]]()
100 | - Why Are Bootstrapped Deep Ensembles Not Better? [[NeurIPS Workshop]]()
101 | - Deep Convolutional Networks as shallow Gaussian Processes [[ICLR2019]]()
102 | - On the accuracy of influence functions for measuring group effects [[NeurIPS2018]]()
103 | - To Trust Or Not To Trust A Classifier [[NeurIPS2018]]() - [[Python]]()
104 | - Understanding Measures of Uncertainty for Adversarial Example Detection [[UAI2018]]()
105 |
106 | **Journal**
107 |
108 | - Martingale posterior distributions [[Royal Statistical Society Series B]]()
109 | - A Unified Theory of Diversity in Ensemble Learning [[JMLR2023]]()
110 | - Multivariate Uncertainty in Deep Learning [[TNNLS2021]]()
111 | - A General Framework for Uncertainty Estimation in Deep Learning [[RAL2020]]()
112 | - Adaptive nonparametric confidence sets [[Ann. Statist. 2006]]()
113 |
114 | **Arxiv**
115 |
116 | - Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping [[arXiv2022]]()
117 | - Efficient Gaussian Neural Processes for Regression [[arXiv2021]]()
118 | - Dense Uncertainty Estimation [[arXiv2021]]() - [[PyTorch]]()
119 | - A higher-order Swiss army infinitesimal jackknife [[arXiv2019]]()
120 |
121 | ## Bayesian-Methods
122 |
123 | **Conference**
124 |
125 | - Training Bayesian Neural Networks with Sparse Subspace Variational Inference [[ICLR2024]]()
126 | - Variational Bayesian Last Layers [[ICLR2024]](https://arxiv.org/abs/2404.11599)
127 | - A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors [[ICLR2024]]()
128 | - Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning [[CVPR2023]]()
129 | - Robustness to corruption in pre-trained Bayesian neural networks [[ICLR2023]]()
130 | - Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift [[NeurIPS2023]]() - [[PyTorch]]()
131 | - Transformers Can Do Bayesian Inference [[ICLR2022]]() - [[PyTorch]]()
132 | - Uncertainty Estimation for Multi-view Data: The Power of Seeing the Whole Picture [[NeurIPS2022]]()
133 | - On Batch Normalisation for Approximate Bayesian Inference [[AABI2021]]()
134 | - Activation-level uncertainty in deep neural networks [[ICLR2021]]()
135 | - Laplace Redux – Effortless Bayesian Deep Learning [[NeurIPS2021]]() - [[PyTorch]]()
136 | - On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks [[UAI2021]]()
137 | - Learnable uncertainty under Laplace approximations [[UAI2021]]()
138 | - Bayesian Neural Networks with Soft Evidence [[ICML Workshop2021]]() - [[PyTorch]]()
139 | - TRADI: Tracking deep neural network weight distributions for uncertainty estimation [[ECCV2020]]() - [[PyTorch]]()
140 | - How Good is the Bayes Posterior in Deep Neural Networks Really? [[ICML2020]]()
141 | - Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors [[ICML2020]]() - [[TensorFlow]]()
142 | - Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [[ICML2020]]() - [[PyTorch]]()
143 | - Bayesian Deep Learning and a Probabilistic Perspective of Generalization [[NeurIPS2020]]()
144 | - A Simple Baseline for Bayesian Uncertainty in Deep Learning [[NeurIPS2019]]() - [[PyTorch]]() - [[TorchUncertainty]]()
145 | - Bayesian Uncertainty Estimation for Batch Normalized Deep Networks [[ICML2018]]() - [[TensorFlow]]() - [[TorchUncertainty]]()
146 | - Lightweight Probabilistic Deep Networks [[CVPR2018]]() - [[PyTorch]]()
147 | - A Scalable Laplace Approximation for Neural Networks [[ICLR2018]]() - [[Theano]]()
148 | - Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning [[ICML2018]]()
149 | - Weight Uncertainty in Neural Networks [[ICML2015]]()
150 |
151 | **Journal**
152 |
153 | - Analytically Tractable Hidden-States Inference in Bayesian Neural Networks [[JMLR2024]](https://jmlr.org/papers/v23/21-0758.html)
154 | - Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification [[TPAMI2023]]() - [[PyTorch]]()
155 | - Bayesian modeling of uncertainty in low-level vision [[IJCV1990]]()
156 |
157 | **Arxiv**
158 |
159 | - Density Uncertainty Layers for Reliable Uncertainty Estimation [[arXiv2023]]()
160 |
161 | ## Ensemble-Methods
162 |
163 | **Conference**
164 |
165 | - Input-gradient space particle inference for neural network ensembles [[ICLR2024]]()
166 | - Fast Ensembling with Diffusion Schrödinger Bridge [[ICLR2024]]()
167 | - Pathologies of Predictive Diversity in Deep Ensembles [[ICLR2024]]()
168 | - Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization [[ICML2023]]()
169 | - Bayesian Posterior Approximation With Stochastic Ensembles [[CVPR2023]]()
170 | - Normalizing Flow Ensembles for Rich Aleatoric and Epistemic Uncertainty Modeling [[AAAI2023]]()
171 | - Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models [[ICCV2023]]() - [[PyTorch]]()
172 | - Weighted Ensemble Self-Supervised Learning [[ICLR2023]]()
173 | - Agree to Disagree: Diversity through Disagreement for Better Transferability [[ICLR2023]]() - [[PyTorch]]()
174 | - Packed-Ensembles for Efficient Uncertainty Estimation [[ICLR2023]]() - [[TorchUncertainty]]()
175 | - Sub-Ensembles for Fast Uncertainty Estimation in Neural Networks [[ICCV Workshop2023]]()
176 | - Prune and Tune Ensembles: Low-Cost Ensemble Learning With Sparse Independent Subnetworks [[AAAI2022]]()
177 | - Deep Ensembles Work, But Are They Necessary? [[NeurIPS2022]]()
178 | - FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation [[NeurIPS2022]]()
179 | - Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity [[ICLR2022]]() - [[PyTorch]]()
180 | - On the Usefulness of Deep Ensemble Diversity for Out-of-Distribution Detection [[ECCV Workshop2022]]()
181 | - Masksembles for Uncertainty Estimation [[CVPR2021]]() - [[PyTorch/TensorFlow]]()
182 | - Robustness via Cross-Domain Ensembles [[ICCV2021]]() - [[PyTorch]]()
183 | - Uncertainty in Gradient Boosting via Ensembles [[ICLR2021]]() - [[PyTorch]]()
184 | - Uncertainty Quantification and Deep Ensembles [[NeurIPS2021]]()
185 | - Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles [[AAAI2020]]()
186 | - Uncertainty in Neural Networks: Approximately Bayesian Ensembling [[AISTATS 2020]]()
187 | - Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning [[ICLR2020]]() - [[PyTorch]]()
188 | - BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning [[ICLR2020]]() - [[TensorFlow]]() - [[TorchUncertainty]]()
189 | - Hyperparameter Ensembles for Robustness and Uncertainty Quantification [[NeurIPS2020]]()
190 | - Bayesian Deep Ensembles via the Neural Tangent Kernel [[NeurIPS2020]]()
191 | - Diversity with Cooperation: Ensemble Methods for Few-Shot Classification [[ICCV2019]]()
192 | - Accurate Uncertainty Estimation and Decomposition in Ensemble Learning [[NeurIPS2019]]()
193 | - High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach [[ICML2018]]() - [[TensorFlow]]()
194 | - Snapshot Ensembles: Train 1, get M for free [[ICLR2017]](https://arxiv.org/abs/1704.00109) - [[TorchUncertainty]]()
195 | - Simple and scalable predictive uncertainty estimation using deep ensembles [[NeurIPS2017]]() - [[TorchUncertainty]]()
196 |
197 | **Journal**
198 |
199 | - One Versus all for deep Neural Network for uncertainty (OVNNI) quantification [[IEEE Access2021]]()
200 |
201 | **Arxiv**
202 |
203 | - Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting [[arXiv2023]]()
204 | - Deep Ensemble as a Gaussian Process Approximate Posterior [[arXiv2022]]()
205 | - Sequential Bayesian Neural Subnetwork Ensembles [[arXiv2022]]()
206 | - Confident Neural Network Regression with Bootstrapped Deep Ensembles [[arXiv2022]]() - [[TensorFlow]]()
207 | - Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [[arXiv2021]]()
208 | - Deep Ensembles: A Loss Landscape Perspective [[arXiv2019]]()
209 | - Checkpoint ensembles: Ensemble methods from a single training process [[arXiv2017]]() - [[TorchUncertainty]]()
210 |
211 | ## Sampling/Dropout-based-Methods
212 |
213 | **Conference**
214 |
215 | - Enabling Uncertainty Estimation in Iterative Neural Networks [[ICML2024]]() - [[PyTorch]]()
216 | - Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models [[CVPR2024]]() - [[TorchUncertainty]]()
217 | - Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate [[AAAI2022]]()
218 | - Efficient Bayesian Uncertainty Estimation for nnU-Net [[MICCAI2022]]()
219 | - Dropout Sampling for Robust Object Detection in Open-Set Conditions [[ICRA2018]]()
220 | - Test-time data augmentation for estimation of heteroscedastic aleatoric uncertainty in deep neural networks [[MIDL2018]]()
221 | - Concrete Dropout [[NeurIPS2017]]()
222 | - Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning [[ICML2016]]() - [[TorchUncertainty]]()
223 |
224 | **Journal**
225 |
226 | - A General Framework for Uncertainty Estimation in Deep Learning [[Robotics and Automation Letters 2020]]()
227 |
228 | **Arxiv**
229 |
230 | - SoftDropConnect (SDC) – Effective and Efficient Quantification of the Network Uncertainty in Deep MR Image Analysis [[arXiv2022]]()
231 |
232 | ## Post-hoc-Methods/Auxiliary-Networks
233 |
234 | **Conference**
235 |
236 | - On the Limitations of Temperature Scaling for Distributions with Overlaps [[ICLR2024]](https://arxiv.org/abs/2306.00740)
237 | - Post-hoc Uncertainty Learning using a Dirichlet Meta-Model [[AAAI2023]]() - [[PyTorch]]()
238 | - ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models [[ICCV2023]]()
239 | - Out-of-Distribution Detection for Monocular Depth Estimation [[ICCV2023]]()
240 | - Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model [[AAAI2022]]()
241 | - Learning Structured Gaussians to Approximate Deep Ensembles [[CVPR2022]]()
242 | - Improving the reliability for confidence estimation [[ECCV2022]]()
243 | - Gradient-based Uncertainty for Monocular Depth Estimation [[ECCV2022]]() - [[PyTorch]]()
244 | - BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks [[ECCV2022]]() - [[PyTorch]]()
245 | - Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [[ICIP2022]]()
246 | - SLURP: Side Learning Uncertainty for Regression Problems [[BMVC2021]]() - [[PyTorch]]()
247 | - Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [[ICCV2021]]() - [[PyTorch]]()
248 | - Learning to Predict Error for MRI Reconstruction [[MICCAI2021]]()
249 | - A Mathematical Analysis of Learning Loss for Active Learning in Regression [[CVPR Workshop2021]]()
250 | - Real-time uncertainty estimation in computer vision via uncertainty-aware distribution distillation [[WACV2021]]()
251 | - On the uncertainty of self-supervised monocular depth estimation [[CVPR2020]]() - [[PyTorch]]()
252 | - Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel [[ICLR2020]]() - [[TensorFlow]]()
253 | - Gradients as a Measure of Uncertainty in Neural Networks [[ICIP2020]]()
254 | - Learning Loss for Test-Time Augmentation [[NeurIPS2020]]()
255 | - Learning loss for active learning [[CVPR2019]]() - [[PyTorch]]() (unofficial code)
256 | - Addressing failure prediction by learning model confidence [[NeurIPS2019]]() - [[PyTorch]]()
257 | - Structured Uncertainty Prediction Networks [[CVPR2018]]() - [[TensorFlow]]()
258 | - Classification uncertainty of deep neural networks based on gradient information [[IAPR Workshop2018]]()
259 |
260 | **Journal**
261 |
262 | - Towards More Reliable Confidence Estimation [[TPAMI2023]]()
263 | - Confidence Estimation via Auxiliary Models [[TPAMI2021]]()
264 |
265 | **Arxiv**
266 |
267 | - Instance-Aware Observer Network for Out-of-Distribution Object Segmentation [[arXiv2022]]()
268 | - DEUP: Direct Epistemic Uncertainty Prediction [[arXiv2020]]()
269 | - Learning Confidence for Out-of-Distribution Detection in Neural Networks [[arXiv2018]]()
270 |
271 | ## Data-augmentation/Generation-based-methods
272 |
273 | **Conference**
274 |
275 | - Posterior Uncertainty Quantification in Neural Networks using Data Augmentation [[AISTATS2024]]()
276 | - Learning to Generate Training Datasets for Robust Semantic Segmentation [[WACV2024]]()
277 | - OpenMix: Exploring Outlier Samples for Misclassification Detection [[CVPR2023]]() - [[PyTorch]]()
278 | - On the Pitfall of Mixup for Uncertainty Calibration [[CVPR2023]]()
279 | - Out-of-distribution Detection with Implicit Outlier Transformation [[ICLR2023]]() - [[PyTorch]]()
280 | - Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates [[AAAI2022]]()
281 | - PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures [[CVPR2022]]()
282 | - RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness [[NeurIPS2022]]() - [[PyTorch]]()
283 | - Towards efficient feature sharing in MIMO architectures [[CVPR Workshop2022]]()
284 | - Robust Semantic Segmentation with Superpixel-Mix [[BMVC2021]]() - [[PyTorch]]()
285 | - MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks [[ICCV2021]]() - [[PyTorch]]()
286 | - Training independent subnetworks for robust prediction [[ICLR2021]]()
287 | - Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [[IJCAI2021]]() - [[PyTorch]]()
288 | - Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image Enhancement [[ICCV Workshop2021]]()
289 | - Uncertainty-Aware Deep Classifiers using Generative Models [[AAAI2020]]()
290 | - Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation [[ECCV2020]]() - [[PyTorch]]()
291 | - Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning [[ICML2020]]()
292 | - Detecting the Unexpected via Image Resynthesis [[ICCV2019]]() - [[PyTorch]]()
293 | - Deep Anomaly Detection with Outlier Exposure [[ICLR2019]]()
294 | - On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks [[NeurIPS2019]]()
295 |
296 | **Arxiv**
297 |
298 | - Reliability in Semantic Segmentation: Can We Use Synthetic Data? [[arXiv2023]]()
299 | - Quantifying uncertainty with GAN-based priors [[arXiv2019]]() - [[TensorFlow]]()
300 |
301 | ## Output-Space-Modeling/Evidential-deep-learning
302 |
303 | **Conference**
304 |
305 | - Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty [[ICLR2024]](https://arxiv.org/abs/2404.10980)
306 | - The Evidence Contraction Issue in Deep Evidential Regression: Discussion and Solution [[AAAI2024]]()
307 | - Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression [[AAAI2024]]() - [[PyTorch]]()
308 | - The Unreasonable Effectiveness of Deep Evidential Regression [[AAAI2023]]() - [[PyTorch]]() - [[TorchUncertainty]](https://github.com/ENSTA-U2IS-AI/torch-uncertainty)
309 | - Exploring and Exploiting Uncertainty for Incomplete Multi-View Classification [[CVPR2023]](https://arxiv.org/abs/2304.05165)
310 | - Plausible Uncertainties for Human Pose Regression [[ICCV2023]](https://openaccess.thecvf.com/content/ICCV2023/papers/Bramlage_Plausible_Uncertainties_for_Human_Pose_Regression_ICCV_2023_paper.pdf) - [[PyTorch]]()
311 | - Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [[ICML2023]](https://arxiv.org/pdf/2303.02045.pdf) - [[PyTorch]]()
312 | - Improving Evidential Deep Learning via Multi-task Learning [[AAAI2022]]() - [[PyTorch]](https://github.com/deargen/MT-ENet)
313 | - An Evidential Neural Network Model for Regression Based on Random Fuzzy Numbers [[BELIEF2022]]()
314 | - On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks [[ICLR2022]]() - [[PyTorch]](https://github.com/martius-lab/beta-nll)
315 | - Natural Posterior Network: Deep Bayesian Uncertainty for Exponential Family Distributions [[ICLR2022]]() - [[PyTorch]]()
316 | - Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation [[NeurIPS2022]]()
317 | - Fast Predictive Uncertainty for Classification with Bayesian Deep Networks [[UAI2022]]() - [[PyTorch]]()
318 | - Evaluating robustness of predictive uncertainty estimation: Are Dirichlet-based models reliable? [[ICML2021]]()
319 | - Trustworthy multimodal regression with mixture of normal-inverse gamma distributions [[NeurIPS2021]]()
320 | - Misclassification Risk and Uncertainty Quantification in Deep Classifiers [[WACV2021]]()
321 | - Ensemble Distribution Distillation [[ICLR2020]]()
322 | - Conservative Uncertainty Estimation By Fitting Prior Networks [[ICLR2020]]()
323 | - Being Bayesian about Categorical Probability [[ICML2020]]() - [[PyTorch]]()
324 | - Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts [[NeurIPS2020]]() - [[PyTorch]]()
325 | - Deep Evidential Regression [[NeurIPS2020]]() - [[TensorFlow]]() - [[TorchUncertainty]]()
326 | - Noise Contrastive Priors for Functional Uncertainty [[UAI2020]]()
327 | - Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples [[NeurIPS Workshop2020]]()
328 | - Uncertainty on Asynchronous Time Event Prediction [[NeurIPS2019]]() - [[TensorFlow]]()
329 | - Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness [[NeurIPS2019]](