# Machine Unlearning Papers and Benchmarks

[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
![GitHub repo stars](https://img.shields.io/github/stars/jjbrophy47/machine_unlearning)
![GitHub last commit](https://img.shields.io/github/last-commit/jjbrophy47/machine_unlearning)

## Frameworks

[OpenUnlearning](https://github.com/locuslab/open-unlearning)

[Machine Unlearning Comparator](https://github.com/gnueaj/Machine-Unlearning-Comparator)

## Papers

[2025](#2025)
[2024](#2024)
[2023](#2023)
[2022](#2022)
[2021](#2021)
[2020](#2020)
[2019](#2019)
[2018](#2018)
[2017](#2017)
[< 2017](#before-2017)

### 2025

| Author(s) | Title | Venue |
| :-------- | ----- | ----- |
| Jiang et al. | [Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models](https://ojs.aaai.org/index.php/AAAI/article/view/34605) | AAAI |
| Han et al. | [DuMo: Dual Encoder Modulation Network for Precise Concept Erasure](https://ojs.aaai.org/index.php/AAAI/article/view/32343) | AAAI |
| Wu et al. | [Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient](https://ojs.aaai.org/index.php/AAAI/article/view/32917) | AAAI |
| Wang et al. | [Selective Forgetting: Advancing Machine Unlearning Techniques and Evaluation in Language Models](https://ojs.aaai.org/index.php/AAAI/article/view/32068) | AAAI |
| Yuan et al. | [Towards Robust Knowledge Unlearning: An Adversarial Framework for Assessing and Improving Unlearning Robustness in Large Language Models](https://arxiv.org/abs/2408.10682) | AAAI |
| Yang et al. | [CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP](https://aclanthology.org/2025.acl-long.1469/) | ACL |
| Choi et al. | [Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport](https://arxiv.org/abs/2406.12329) | ACL |
| Sun et al. | [Aligned but Blind: Alignment Increases Implicit Bias by Reducing Awareness of Race](https://aclanthology.org/2025.acl-long.1078/) | ACL |
| Xu et al. | [ReLearn: Unlearning via Learning for Large Language Models](https://aclanthology.org/2025.acl-long.297/) | ACL |
| Huo et al. | [MMUnlearner: Reformulating Multimodal Machine Unlearning in the Era of Multimodal Large Language Models](https://arxiv.org/abs/2502.11051) | ACL |
| Liu et al. | [Modality-Aware Neuron Pruning for Unlearning in Multimodal Large Language Models](https://aclanthology.org/2025.acl-long.295/) | ACL |
| Tran et al. | [Tokens for Learning, Tokens for Unlearning: Mitigating Membership Inference Attacks in Large Language Models via Dual-Purpose Training](https://aclanthology.org/2025.findings-acl.1174/) | ACL |
| Zhuang et al. | [SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?](https://aclanthology.org/2025.acl-long.424/) | ACL |
| Liu et al. | [Rethinking Machine Unlearning in Image Generation Models](https://arxiv.org/abs/2506.02761) | ACM CCS |
| Chowdhury et al. | [Fundamental Limits of Perfect Concept Erasure](https://proceedings.mlr.press/v258/chowdhury25a.html) | AISTATS |
| Xue et al. | [CRCE: Coreference-Retention Concept Erasure in Text-to-Image Diffusion Models](https://arxiv.org/abs/2503.14232) | BMVC |
| Mekala et al. | [Alternate Preference Optimization for Unlearning Factual Knowledge in Large Language Models](https://aclanthology.org/2025.coling-main.252/) | COLING |
| Ma et al. | [Unveiling Entity-Level Unlearning for Large Language Models: A Comprehensive Analysis](https://aclanthology.org/2025.coling-main.358/) | COLING |
| Sanyal et al. | [Agents Are All You Need for LLM Unlearning](https://arxiv.org/abs/2502.00406) | COLM |
| Zhou et al. | [Decoupled Distillation to Erase: A General Unlearning Method for Any Class-centric Tasks](https://openaccess.thecvf.com/content/CVPR2025/html/Zhou_Decoupled_Distillation_to_Erase_A_General_Unlearning_Method_for_Any_CVPR_2025_paper.html) | CVPR |
| Li et al. | [Detect-and-Guide: Self-regulation of Diffusion Models for Safe Text-to-Image Generation via Guideline Token Optimization](https://openaccess.thecvf.com/content/CVPR2025/html/Li_Detect-and-Guide_Self-regulation_of_Diffusion_Models_for_Safe_Text-to-Image_Generation_via_CVPR_2025_paper.html) | CVPR |
| Wang et al. | [Precise, Fast, and Low-cost Concept Erasure in Value Space: Orthogonal Complement Matters](https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Precise_Fast_and_Low-cost_Concept_Erasure_in_Value_Space__CVPR_2025_paper.html) | CVPR |
| Wang et al. | [ACE: Anti-Editing Concept Erasure in Text-to-Image Models](https://openaccess.thecvf.com/content/CVPR2025/html/Wang_ACE_Anti-Editing_Concept_Erasure_in_Text-to-Image_Models_CVPR_2025_paper.html) | CVPR |
| Wu et al. | [EraseDiff: Erasing Data Influence In Diffusion Models](https://openaccess.thecvf.com/content/CVPR2025/html/Wu_Erasing_Undesirable_Influence_in_Diffusion_Models_CVPR_2025_paper.html) | CVPR |
| Lee et al. | [ESC: Erasing Space Concept for Knowledge Deletion](https://openaccess.thecvf.com/content/CVPR2025/html/Lee_ESC_Erasing_Space_Concept_for_Knowledge_Deletion_CVPR_2025_paper.html) | CVPR |
| Thakral et al. | [Fine-Grained Erasure in Text-to-Image Diffusion-based Foundation Models](https://arxiv.org/abs/2503.19783) | CVPR |
| Srivatsan et al. | [STEREO: A Two-Stage Framework for Adversarially Robust Concept Erasing from Text-to-Image Diffusion Models](https://openaccess.thecvf.com/content/CVPR2025/html/Srivatsan_STEREO_A_Two-Stage_Framework_for_Adversarially_Robust_Concept_Erasing_from_CVPR_2025_paper.html) | CVPR |
| Lee et al. | [Localized Concept Erasure for Text-to-Image Diffusion Models Using Training-Free Gated Low-Rank Adaptation](https://openaccess.thecvf.com/content/CVPR2025/html/Lee_Localized_Concept_Erasure_for_Text-to-Image_Diffusion_Models_Using_Training-Free_Gated_CVPR_2025_paper.html) | CVPR |
| Shirkavand et al. | [Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models](https://openaccess.thecvf.com/content/CVPR2025/html/Shirkavand_Efficient_Fine-Tuning_and_Concept_Suppression_for_Pruned_Diffusion_Models_CVPR_2025_paper.html) | CVPR |
| Pan et al. | [Multi-Objective Large Language Model Unlearning](https://ieeexplore.ieee.org/abstract/document/10889776) | ICASSP |
| Wang et al. | [Large Scale Knowledge Washing](https://openreview.net/forum?id=dXCpPgjTtd) | ICLR |
| Koulischer et al. | [Dynamic Negative Guidance of Diffusion Models](https://openreview.net/forum?id=6p74UyAdLa) | ICLR |
| Feng et al. | [Controllable Unlearning for Image-to-Image Generative Models via epsilon-Constrained Optimization](https://openreview.net/forum?id=9OJflnNu6C) | ICLR |
| Ding et al. | [Unified Parameter-Efficient Unlearning for LLMs](https://openreview.net/forum?id=zONMuIVCAT) | ICLR |
| Jin et al. | [Unlearning as Multi-Task Optimization: a normalized gradient difference approach with adaptive learning rate](https://openreview.net/forum?id=OknsPawlUf) | ICLR |
| Farrell et al. | [Applying Sparse Autoencoders to Unlearn Knowledge in Language Models](https://openreview.net/forum?id=ZtvRqm6oBu) | ICLR |
| Cywinski et al. | [SAeUron: Interpretable Concept Unlearning in Diffusion Models with Sparse Autoencoders](https://openreview.net/forum?id=6N0GxaKdX9) | ICLR |
| Yoon et al. | [SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image And Video Generation](https://openreview.net/forum?id=hgTFotBRKl) | ICLR |
| Choi et al. | [Unlearning-based Neural Interpretations](https://openreview.net/forum?id=PBjCTeDL6o) | ICLR |
| Di et al. | [Adversarial Machine Unlearning](https://openreview.net/pdf?id=swWF948IiC) | ICLR |
| Sakarvadia et al. | [Mitigating Memorization in Language Models](https://openreview.net/forum?id=MGKDBuyv4p) | ICLR |
| Li et al. | [When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers](https://openreview.net/forum?id=vRvVVb0NAz) | ICLR |
| Scholten et al. | [A Probabilistic Perspective on Unlearning and Alignment for Large Language Models](https://openreview.net/forum?id=51WraMid8K) | ICLR |
| Zhang et al. | [Catastrophic Failure of LLM Unlearning via Quantization](https://openreview.net/forum?id=lHSeDYamnz) | ICLR |
| Cha et al. | [Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs](https://arxiv.org/abs/2408.06621) | ICLR |
| Shi et al. | [MUSE: Machine Unlearning Six-Way Evaluation for Language Models](https://openreview.net/forum?id=TArmA033BU) | ICLR |
| Bui et al. | [Fantastic Targets for Concept Erasure in Diffusion Models and Where To Find Them](https://openreview.net/forum?id=tZdqL5FH7w) | ICLR |
| Yuan et al. | [A Closer Look at Machine Unlearning for Large Language Models](https://openreview.net/forum?id=Q1MHvGmhyT) | ICLR |
| Du et al. | [Textual Unlearning Gives a False Sense of Unlearning](https://openreview.net/forum?id=jyxwWQjU4J) | ICML |
| Li et al. | [One Image is Worth a Thousand Words: A Usability Preservable Text-Image Collaborative Erasing Framework](https://arxiv.org/abs/2505.11131) | ICML |
| Karvonen et al. | [SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability](https://arxiv.org/abs/2503.09532) | ICML |
| Zhang et al. | [Minimalist Concept Erasure in Generative Models](https://arxiv.org/abs/2507.13386) | ICML |
| Fan et al. | [Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond](https://arxiv.org/abs/2502.05374) | ICML |
| Pathak et al. | [Quantum-Inspired Audio Unlearning: Towards Privacy-Preserving Voice Biometrics](https://www.arxiv.org/abs/2507.22208) | IJCB |
| Jin et al. | [Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate](https://aclanthology.org/2025.naacl-long.563/) | NAACL |
| Bhaila et al. | [Soft Prompting for Unlearning in Large Language Models](https://aclanthology.org/2025.naacl-long.204/) | NAACL |
| Dou et al. | [Avoiding Copyright Infringement via Large Language Model Unlearning](https://aclanthology.org/2025.findings-naacl.288/) | NAACL |
| Liu et al. | [Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench](https://aclanthology.org/2025.naacl-long.207.pdf) | NAACL |
| Dong et al. | [UNDIAL: Self-Distillation with Adjusted Logits for Robust Unlearning in Large Language Models](https://aclanthology.org/2025.naacl-long.444/) | NAACL |
| Ye et al. | [Reinforcement Unlearning](https://www.ndss-symposium.org/wp-content/uploads/2025-80-paper.pdf) | NDSS |
| Bother et al. | [Modyn: A Platform for Model Training on Dynamic Datasets With Sample-Level Data Selection](https://dl.acm.org/doi/abs/10.1145/3709705) | PACMMOD |
| Thaker et al. | [Position: LLM Unlearning Benchmarks are Weak Measures of Progress](https://ieeexplore.ieee.org/abstract/document/10992346) | SaTML |
| Xia et al. | [Edge Unlearning is Not "on Edge"! An Adaptive Exact Unlearning System on Resource-Constrained Devices](https://ieeexplore.ieee.org/document/11023432) | SP |
| Wang et al. | [Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Unlearning Completeness](https://arxiv.org/abs/2506.06112) | USENIX Security |
| Wang et al. | [TAPE: Tailored Posterior Difference for Auditing of Machine Unlearning](https://openreview.net/forum?id=LedrHK34jZ#discussion) | WWW |
| | | |
| Justicia et al. | [Digital forgetting in large language models: a survey of unlearning methods](https://link.springer.com/article/10.1007/s10462-024-11078-6) | Artificial Intelligence Review |
| Qu et al. | [The Frontier of Data Erasure: A Survey on Machine Unlearning for Large Language Models](https://ieeexplore.ieee.org/abstract/document/10834145) | Computer |
| Liu et al. | [Threats, Attacks, and Defenses in Machine Unlearning: A Survey](https://ieeexplore.ieee.org/abstract/document/10892039) | IEEE Open Journal of the Computer Society |
| Sun et al. | [Generative Adversarial Networks Unlearning](https://ieeexplore.ieee.org/abstract/document/10979463) | IEEE Transactions on Dependable and Secure Computing |
| Zuo et al. | [Machine unlearning through fine-grained model parameters perturbation](https://ieeexplore.ieee.org/abstract/document/10839062) | IEEE Transactions on Knowledge and Data Engineering |
| Li et al. | [Class-wise federated unlearning: Harnessing active forgetting with teacher–student memory generation](https://www.sciencedirect.com/science/article/abs/pii/S0950705125004009) | Knowledge-Based Systems |
| Liu et al. | [Rethinking Machine Unlearning for Large Language Models](https://www.nature.com/articles/s42256-025-00985-0) | Nature Machine Intelligence |
| Cooper et al. | [Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5060253) | SSRN |
| Tiwary et al. | [Adapt then Unlearn: Exploiting Parameter Space Semantics for Unlearning in Generative Adversarial Networks](https://openreview.net/forum?id=jAHEBivObO) | TMLR |
| Miranda et al. | [Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions](https://openreview.net/forum?id=Ss9MTTN7OL) | TMLR |
| Huang et al. | [Offset Unlearning for Large Language Models](https://openreview.net/forum?id=A4RLpHPXCu) | TMLR |
| Sinha et al. | [UnSTAR: Unlearning with Self-Taught Anti-Sample Reasoning for LLMs](https://openreview.net/forum?id=mNXCViKZbI) | TMLR |
| Che et al. | [Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities](https://arxiv.org/abs/2502.05209) | TMLR |
| | | |
| Vidal et al. | [Machine Unlearning in Hyperbolic vs. Euclidean Multimodal Contrastive Learning: Adapting Alignment Calibration to MERU](https://openaccess.thecvf.com/content/CVPR2025W/TMM-OpenWorld/html/Vidal_Machine_Unlearning_in_Hyperbolic_vs._Euclidean_Multimodal_Contrastive_Learning_Adapting_CVPRW_2025_paper.html) | CVPR Workshop |
| Cai et al. | [AegisLLM: Scaling Agentic Systems for Self-Reflective Defense in LLM Security](https://arxiv.org/abs/2504.20965) | ICLR Workshop |
| Kim et al. | [Training-Free Safe Denoisers For Safe Use of Diffusion Models](https://openreview.net/forum?id=R9lU8ZeJjS) | ICLR Workshop |
| Bui et al. | [Hiding and Recovering Knowledge in Text-to-Image Diffusion Models via Learnable Prompts](https://openreview.net/forum?id=KeJ6dGkiqb) | ICLR Workshop |
| Sanga et al. | [Train Once, Forget Precisely: Anchored Optimization for Efficient Post-Hoc Unlearning](https://arxiv.org/abs/2506.14515) | ICML Workshop |
| Wu et al. | [Evaluating Deep Unlearning in Large Language Models](https://openreview.net/forum?id=376xPmmHoV) | ICML Workshop |
| Spohn et al. | [Align-then-Unlearn: Embedding Alignment for LLM Unlearning](https://arxiv.org/abs/2506.13181) | ICML Workshop |
| Dosajh et al. | [Unlearning Factual Knowledge from LLMs Using Adaptive RMU](https://arxiv.org/abs/2506.16548) | SemEval |
| Xu et al. | [Unlearning via Model Merging](https://arxiv.org/abs/2503.21088) | SemEval |
| Bronec et al. | [Low-Rank Negative Preference Optimization](https://arxiv.org/abs/2503.13690) | SemEval |
| Srivasthav P et al. | [Forgotten but Not Lost: The Balancing Act of Selective Unlearning in Large Language Models](https://arxiv.org/abs/2503.04795) | SemEval |
| Premptis et al. | [Parameter-Efficient Unlearning for Large Language Models using Data Chunking](https://arxiv.org/abs/2503.02443) | SemEval |
| | | |
| Kim et al. | [Are We Truly Forgetting? A Critical Re-examination of Machine Unlearning Evaluation Protocols](https://arxiv.org/pdf/2503.06991) | arxiv |
| Kwak et al. | [NegMerge: Consensual Weight Negation for Strong Machine Unlearning](https://arxiv.org/pdf/2410.05583) | arxiv |
| Wang et al. | [GRU: Mitigating the Trade-off between Unlearning and Retention for Large Language Models](https://arxiv.org/pdf/2503.09117) | arxiv |
| Geng et al. | [A Comprehensive Survey of Machine Unlearning Techniques for Large Language Models](https://arxiv.org/abs/2503.01854) | arxiv |
| Barez et al. | [Open Problems in Machine Unlearning for AI Safety](https://arxiv.org/abs/2501.04952) | arxiv |
| Fan et al. | [Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning](https://openreview.net/forum?id=Pd3jVGTacT) | arxiv |
| Staufer et al. | [What Should LLMs Forget? Quantifying Personal Data in LLMs for Right-to-Be-Forgotten Requests](https://arxiv.org/abs/2507.11128) | arxiv |
| Yeats et al. | [Automating Evaluation of Diffusion Model Unlearning with (Vision-) Language Model World Knowledge](https://arxiv.org/abs/2507.07137) | arxiv |
| Xiong et al. | [The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation](https://openreview.net/forum?id=Pd3jVGTacT) | arxiv |
| Scholten et al. | [Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs](https://arxiv.org/abs/2507.04219) | arxiv |
| Han et al. | [Unlearning the Noisy Correspondence Makes CLIP More Robust](https://arxiv.org/abs/2507.03434) | arxiv |
| Kawakami et al. | [PULSE: Practical Evaluation Scenarios for Large Multimodal Model Unlearning](https://arxiv.org/abs/2507.01271) | arxiv |
| Ma et al. | [SoK: Semantic Privacy in Large Language Models](https://arxiv.org/abs/2506.23603) | arxiv |
| Rezaei et al. | [Model State Arithmetic for Machine Unlearning](https://arxiv.org/abs/2506.20941) | arxiv |
| Sinha et al. | [Step-by-Step Reasoning Attack: Revealing 'Erased' Knowledge in Large Language Models](https://arxiv.org/abs/2506.17279) | arxiv |
| Zhang et al. | [Does Multimodal Large Language Model Truly Unlearn? Stealthy MLLM Unlearning Attack](https://arxiv.org/abs/2506.17265) | arxiv |
| Jiang et al. | [Large Language Model Unlearning for Source Code](https://arxiv.org/abs/2506.17125) | arxiv |
| Hu et al. | [BLUR: A Benchmark for LLM Unlearning Robust to Forget-Retain Overlap](https://arxiv.org/abs/2506.15699) | arxiv |
| Wu et al. | [Learning-Time Encoding Shapes Unlearning in LLMs](https://arxiv.org/abs/2506.15076) | arxiv |
| Chen et al. | [Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs](https://arxiv.org/abs/2506.14003) | arxiv |
| Wang et al. | [Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills](https://arxiv.org/abs/2506.12963) | arxiv |
| Songdej et al. | [Robust LLM Unlearning with MUDMAN: Meta-Unlearning with Disruption Masking And Normalization](https://arxiv.org/abs/2506.12484) | arxiv |
| Suriyakumar et al. | [UCD: Unlearning in LLMs via Contrastive Decoding](https://arxiv.org/abs/2506.12097) | arxiv |
| Ma et al. | [GUARD: Guided Unlearning and Retention via Data Attribution for Large Language Models](https://arxiv.org/abs/2506.10946) | arxiv |
| Ren et al. | [SoK: Machine Unlearning for Large Language Models](https://arxiv.org/abs/2506.09227) | arxiv |
| Reisizadeh et al. | [BLUR: A Bi-Level Optimization Approach for LLM Unlearning](https://arxiv.org/abs/2506.08164) | arxiv |
| Ye et al. | [LLM Unlearning Should Be Form-Independent](https://arxiv.org/abs/2506.07795) | arxiv |
| Zhang et al. | [RULE: Reinforcement UnLEarning Achieves Forget-Retain Pareto Optimality](https://arxiv.org/abs/2506.07171) | arxiv |
| Lee et al. | [Distillation Robustifies Unlearning](https://arxiv.org/abs/2506.06278) | arxiv |
| Wang et al. | [Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Unlearning Completeness](https://arxiv.org/abs/2506.06112) | arxiv |
| Wei et al. | [Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness](https://arxiv.org/abs/2506.05735) | arxiv |
| Entesari et al. | [Constrained Entropic Unlearning: A Primal-Dual Framework for Large Language Models](https://arxiv.org/abs/2506.05314) | arxiv |
| Wen et al. | [Quantifying Cross-Modality Memorization in Vision-Language Models](https://arxiv.org/abs/2506.05198) | arxiv |
| Chen et al. | [Vulnerability-Aware Alignment: Mitigating Uneven Forgetting in Harmful Fine-Tuning](https://arxiv.org/abs/2506.03850) | arxiv |
| Zhou et al. | [Not All Tokens Are Meant to Be Forgotten](https://arxiv.org/abs/2506.03142) | arxiv |
| Kim et al. | [Rethinking Post-Unlearning Behavior of Large Vision-Language Models](https://arxiv.org/abs/2506.02541) | arxiv |
| Wang et al. | [Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning](https://arxiv.org/abs/2506.01339) | arxiv |
| Wan et al. | [Not Every Token Needs Forgetting: Selective Unlearning to Limit Change in Utility in Large Language Model Unlearning](https://arxiv.org/abs/2506.00876) | arxiv |
| Feng et al. | [Existing Large Language Model Unlearning Evaluations Are Inconclusive](https://arxiv.org/abs/2506.00688) | arxiv |
| Wang et al. | [Model Unlearning via Sparse Autoencoder Subspace Guided Projections](https://arxiv.org/abs/2505.24428) | arxiv |
| Wu et al. | [Breaking the Gold Standard: Extracting Forgotten Data under Exact Unlearning in Large Language Models](https://arxiv.org/abs/2505.24379) | arxiv |
| Chen et al. | [Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs](https://arxiv.org/abs/2505.23270) | arxiv |
| Siddiqui et al. | [From Dormant to Deleted: Tamper-Resistant Unlearning Through Weight-Space Regularization](https://arxiv.org/abs/2505.22310) | arxiv |
| Li et al. | [Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning?](https://arxiv.org/abs/2505.19855) | arxiv |
| Jiang et al. | [Graceful Forgetting in Generative Language Models](https://arxiv.org/abs/2505.19715) | arxiv |
| Shi et al. | [Safety Alignment via Constrained Knowledge Unlearning](https://arxiv.org/abs/2505.18588) | arxiv |
| Ye et al. | [T2VUnlearning: A Concept Erasing Method for Text-to-Video Diffusion Models](https://arxiv.org/abs/2505.17550) | arxiv |
| To et al. | [Harry Potter is Still Here! Probing Knowledge Leakage in Targeted Unlearned Large Language Models via Automated Adversarial Prompting](https://arxiv.org/abs/2505.17160) | arxiv |
| Xu et al. | [Unlearning Isn't Deletion: Investigating Reversibility of Machine Unlearning in LLMs](https://arxiv.org/abs/2505.16831) | arxiv |
| Lee et al. | [Does Localization Inform Unlearning? A Rigorous Examination of Local Parameter Attribution for Knowledge Unlearning in Language Models](https://arxiv.org/abs/2505.16252) | arxiv |
| Ma et al. | [Losing is for Cherishing: Data Valuation Based on Machine Unlearning and Shapley Value](https://arxiv.org/abs/2505.16147) | arxiv |
| Yu et al. | [UniErase: Unlearning Token as a Universal Erasure Primitive for Language Models](https://arxiv.org/abs/2505.15674) | arxiv |
| Yoon et al. | [R-TOFU: Unlearning in Large Reasoning Models](https://arxiv.org/abs/2505.15214) | arxiv |
| Jeung et al. | [DUSK: Do Not Unlearn Shared Knowledge](https://arxiv.org/abs/2505.15209) | arxiv |
| Jeung et al. | [SEPS: A Separability Measure for Robust Unlearning in LLMs](https://arxiv.org/abs/2505.14832) | arxiv |
| Deng et al. | [GUARD: Generation-time LLM Unlearning via Adaptive Restriction and Detection](https://arxiv.org/abs/2505.13312) | arxiv |
| Yang et al. | [Exploring Criteria of Loss Reweighting to Enhance LLM Unlearning](https://arxiv.org/abs/2505.11953) | arxiv |
| Qian et al. | [Layered Unlearning for Adversarial Relearning](https://arxiv.org/abs/2505.09500) | arxiv |
| Vasilev et al. | [Unilogit: Robust Machine Unlearning for LLMs Using Uniform-Target Self-Distillation](https://arxiv.org/abs/2505.06027) | arxiv |
| Lu et al. | [WaterDrum: Watermarking for Data-centric Unlearning Metric](https://arxiv.org/abs/2505.05064) | arxiv |
| Xu et al. | [OBLIVIATE: Robust and Practical Machine Unlearning for Large Language Models](https://arxiv.org/abs/2505.04416) | arxiv |
| Sun et al. | [Unlearning vs. Obfuscation: Are We Truly Removing Knowledge?](https://arxiv.org/abs/2505.02884) | arxiv |
| Patil et al. | [Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation](https://arxiv.org/abs/2505.01456) | arxiv |
| Zhong et al. | [DualOptim: Enhancing Efficacy and Stability in Machine Unlearning with Dual Optimizers](https://arxiv.org/abs/2504.15827) | arxiv |
| Chen et al. | [ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data](https://arxiv.org/abs/2504.14452) | arxiv |
| Mahmud et al. | [DP2Unlearning: An Efficient and Guaranteed Unlearning Framework for LLMs](https://arxiv.org/abs/2504.13774) | arxiv |
| Klochkov et al. | [A mean teacher algorithm for unlearning of language models](https://arxiv.org/abs/2504.13388) | arxiv |
| Kim et al. | [GRAIL: Gradient-Based Adaptive Unlearning for Privacy and Copyright in LLMs](https://arxiv.org/abs/2504.12681) | arxiv |
| Pal et al. | [LLM Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks](https://arxiv.org/abs/2504.10185) | arxiv |
| Muhamed et al. | [SAEs Can Improve Unlearning: Dynamic Sparse Autoencoder Guardrails for Precision Unlearning in LLMs](https://arxiv.org/abs/2504.08192) | arxiv |
| Feng et al. | [Bridging the Gap Between Preference Alignment and Machine Unlearning](https://arxiv.org/abs/2504.06659) | arxiv |
| Feng et al. | [A Neuro-inspired Interpretation of Unlearning in Large Language Models through Sample-level Unlearning Difficulty](https://arxiv.org/abs/2504.06658) | arxiv |
| Krishnan et al. | [Not All Data Are Unlearned Equally](https://arxiv.org/abs/2504.05058) | arxiv |
| Kuo et al. | [Exact Unlearning of Finetuning Data via Model Merging at Scale](https://arxiv.org/abs/2504.04626) | arxiv |
| Xu et al. | [SUV: Scalable Large Language Model Copyright Compliance with Regularized Selective Unlearning](https://arxiv.org/abs/2503.22948) | arxiv |
| Li et al. | [Effective Skill Unlearning through Intervention and Abstention](https://arxiv.org/abs/2503.21730) | arxiv |
| Xu et al. | [PEBench: A Fictitious Dataset to Benchmark Machine Unlearning for Multimodal Large Language Models](https://arxiv.org/abs/2503.12545) | arxiv |
| Poppi et al. | [Hyperbolic Safety-Aware Vision-Language Models](https://arxiv.org/abs/2503.12127) | arxiv |
| Chen et al. | [Safety Mirage: How Spurious Correlations Undermine VLM Safety Fine-tuning](https://arxiv.org/abs/2503.11832) | arxiv |
| Wang et al. | [UIPE: Enhancing LLM Unlearning by Removing Knowledge Related to Forgetting Targets](https://arxiv.org/abs/2503.04693) | arxiv |
| Zhao et al. | [Improving LLM Safety Alignment with Dual-Objective Optimization](https://arxiv.org/abs/2503.03710) | arxiv |
| Yang et al. | [CE-U: Cross Entropy Unlearning](https://arxiv.org/abs/2503.01224) | arxiv |
| Wang et al. | [Erasing Without Remembering: Implicit Knowledge Forgetting in Large Language Models](https://arxiv.org/abs/2502.19982) | arxiv |
| Wang et al. | [Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond](https://arxiv.org/abs/2502.19301) | arxiv |
| Yang et al. | [FaithUn: Toward Faithful Forgetting in Language Models by Investigating the Interconnectedness of Knowledge](https://arxiv.org/abs/2502.19207) | arxiv |
| Jiang et al. | [Holistic Audit Dataset Generation for LLM Unlearning via Knowledge Graph Traversal and Redundancy Removal](https://arxiv.org/abs/2502.18810) | arxiv |
| Chen et al. | [Soft Token Attacks Cannot Reliably Audit Unlearning in Large Language Models](https://arxiv.org/abs/2502.15836) | arxiv |
| Jung et al. | [CoME: An Unlearning-based Approach to Conflict-free Model Editing](https://arxiv.org/abs/2502.15826) | arxiv |
| Ramakrishna et al. | [LUME: LLM Unlearning with Multitask Evaluations](https://arxiv.org/abs/2502.15097) | arxiv |
| Patil et al. | [UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning](https://arxiv.org/abs/2502.15082) | arxiv |
| Russinovich et al. | [Obliviate: Efficient Unmemorization for Protecting Intellectual Property in Large Language Models](https://arxiv.org/abs/2502.15010) | arxiv |
| Chen et al. | [SafeEraser: Enhancing Safety in Multimodal Large Language Models through Multimodal Machine Unlearning](https://arxiv.org/abs/2502.12520) | arxiv |
| Chang et al. | [Which Retain Set Matters for LLM Unlearning? A Case Study on Entity Unlearning](https://arxiv.org/abs/2502.11441) | arxiv |
| Shen et al. | [LUNAR: LLM Unlearning via Neural Activation Redirection](https://arxiv.org/abs/2502.07218) | arxiv |
| Geng et al. | [Mitigating Sensitive Information Leakage in LLMs4Code through Machine Unlearning](https://arxiv.org/abs/2502.05739) | arxiv |
| Hu et al. | [FALCON: Fine-grained Activation Manipulation by Contrastive Orthogonal Unalignment for Large Language Model](https://arxiv.org/abs/2502.01472) | arxiv |
| Cheng et al. | [Tool Unlearning for Tool-Augmented LLMs](https://arxiv.org/abs/2502.01083) | arxiv |
| Zhang et al. | [Resolving Editing-Unlearning Conflicts: A Knowledge Codebook Framework for Large Language Model Updating](https://arxiv.org/abs/2502.00158) | arxiv |
| Huu-Tien et al. | [Improving LLM Unlearning Robustness via Random Perturbations](https://arxiv.org/abs/2501.19202) | arxiv |
| He et al. | [Deep Contrastive Unlearning for Language Models](https://arxiv.org/abs/2503.14900) | arxiv |
| Khoriaty et al. | [Don't Forget It! Conditional Sparse Autoencoder Clamping Works for Unlearning](https://arxiv.org/abs/2503.11127) | arxiv |
| Ren et al. | [A General Framework to Enhance Fine-tuning-based LLM Unlearning](https://arxiv.org/abs/2502.17823) | arxiv |
| Lang et al. | [Beyond Single-Value Metrics: Evaluating and Enhancing LLM Unlearning with Cognitive Diagnosis](https://arxiv.org/abs/2502.13996) | arxiv |
| Amara et al. | [EraseBench: Understanding The Ripple Effects of Concept Erasure Techniques](https://arxiv.org/abs/2501.09833) | arxiv |
| Brannvall et al. | [Technical Report for the Forgotten-by-Design Project: Targeted Obfuscation for Machine Learning](https://arxiv.org/abs/2501.11525) | arxiv |
| Chen et al. | [Comprehensive Assessment and Analysis for NSFW Content Erasure in Text-to-Image Diffusion Models](https://arxiv.org/abs/2502.12527) | arxiv |
| Fuchi et al. | [Erasing with Precision: Evaluating Specific Concept Erasure from Text-to-Image Generative Models](https://arxiv.org/abs/2502.13989) | arxiv |
| Kim et al. | [A Comprehensive Survey on Concept Erasure in Text-to-Image Diffusion Models](https://arxiv.org/abs/2502.14896) | arxiv |
| Meng et al. | [Concept Corrector: Erase concepts on the fly for text-to-image diffusion models](https://arxiv.org/abs/2502.16368) | arxiv |
| Beerens et al. | [On the Vulnerability of Concept Erasure in Diffusion Models](https://arxiv.org/abs/2502.17537) | arxiv |
| Chen et al. | [TRCE: Towards Reliable Malicious Concept Erasure in Text-to-Image Diffusion Models](https://arxiv.org/abs/2503.07389) | arxiv |
| Li et al. | [SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models](https://arxiv.org/abs/2503.07392) | arxiv |
| Tian et al. | [Sparse Autoencoder as a Zero-Shot Classifier for Concept Erasing in Text-to-Image Diffusion Models](https://arxiv.org/abs/2503.09446) | arxiv |
| Carter et al. | [ACE: Attentional Concept Erasure in Diffusion Models](https://arxiv.org/abs/2504.11850) | arxiv |
| Li et al. | [Set You Straight: Auto-Steering Denoising Trajectories to Sidestep Unwanted Concepts](https://arxiv.org/abs/2504.12782) | arxiv |
| Grebe et al. | [Erased but Not Forgotten: How Backdoors Compromise Concept Erasure](https://arxiv.org/abs/2504.21072) | arxiv |
| Gao et al. | [Towards Dataset Copyright Evasion Attack against Personalized Text-to-Image Diffusion Models](https://arxiv.org/abs/2505.02824) | arxiv |
| Biswas et al. | [CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models](https://arxiv.org/abs/2505.12677) | arxiv |
| Chen et al. | [Comprehensive Evaluation and Analysis for NSFW Concept Erasure in Text-to-Image Diffusion Models](https://arxiv.org/abs/2505.15450) | arxiv |
| Liu et al. | [Erased or Dormant? Rethinking Concept Erasure Through Reversibility](https://arxiv.org/abs/2505.16174) | arxiv |
| Lu et al. | [When Are Concepts Erased From Diffusion Models?](https://arxiv.org/abs/2505.17013) | arxiv |
| Xie et al. | [Erasing Concepts, Steering Generations: A Comprehensive Survey of Concept Suppression](https://arxiv.org/abs/2505.19398) | arxiv |
| Gur-Arieh et al. | [Precise In-Parameter Concept Erasure in Large Language Models](https://arxiv.org/abs/2505.22586) | arxiv |
| Carter et al. | [TRACE: Trajectory-Constrained Concept Erasure in Diffusion Models](https://arxiv.org/abs/2505.23312) | arxiv |
| Zhu et al. | [SAGE: Exploring the Boundaries of Unsafe Concept Domain with Semantic-Augment Erasing](https://arxiv.org/abs/2506.09363) | arxiv |
| Fan et al. | [EAR: Erasing Concepts from Unified Autoregressive Models](https://arxiv.org/abs/2506.20151) | arxiv |
| Lee et al. | [Concept Pinpoint Eraser for Text-to-image Diffusion Models via Residual Attention Gate](https://arxiv.org/abs/2506.22806) | arxiv |
| Fu et al. | [FADE: Adversarial Concept Erasure in Flow Models](https://arxiv.org/abs/2507.12283) | arxiv |
| Wu et al. | [MUNBa: Machine Unlearning via Nash Bargaining](https://arxiv.org/abs/2411.15537) | arxiv |

### 2024

| Author(s) | Title | Venue |
| :-------- | ----- | ----- |
| Tian et al. | [DeRDaVa: Deletion-Robust Data Valuation for Machine Learning](https://ojs.aaai.org/index.php/AAAI/article/view/29462) | AAAI |
| Ni et al. | [ORES: Open-Vocabulary Responsible Visual Synthesis](https://dl.acm.org/doi/10.1609/aaai.v38i19.30144) | AAAI |
| Moon et al. | [Feature Unlearning for Pre-trained GANs and VAEs](https://ojs.aaai.org/index.php/AAAI/article/view/30138) | AAAI |
| Rashid et al. | [Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage](https://ojs.aaai.org/index.php/AAAI/article/view/34218) | AAAI |
| Cha et al. | [Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers](https://ojs.aaai.org/index.php/AAAI/article/view/28996) | AAAI |
| Hong et al. | [All but One: Surgical Concept Erasing with Model Preservation in Text-to-Image Diffusion Models](https://ojs.aaai.org/index.php/AAAI/article/view/30107) | AAAI |
| Kim et al. | [Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation](https://ojs.aaai.org/index.php/AAAI/article/view/30118) | AAAI |
| Foster et al. | [Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening](https://ojs.aaai.org/index.php/AAAI/article/view/29092) | AAAI |
| Hu et al.
| [Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation](https://ojs.aaai.org/index.php/AAAI/article/view/29784) | AAAI | 266 | | Li et al. | [Towards Effective and General Graph Unlearning via Mutual Evolution](https://ojs.aaai.org/index.php/AAAI/article/view/29273) | AAAI | 267 | | Liu et al. | [Backdoor Attacks via Machine Unlearning](https://ojs.aaai.org/index.php/AAAI/article/view/29321) | AAAI | 268 | | You et al. | [RRL: Recommendation Reverse Learning](https://ojs.aaai.org/index.php/AAAI/article/view/28782) | AAAI | 269 | | Moon et al. | [Feature Unlearning for Generative Models via Implicit Feedback](https://ojs.aaai.org/index.php/AAAI/article/view/30138) | AAAI | 270 | |Li et al. | [SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models](https://dl.acm.org/doi/10.1145/3658644.3670295)|ACM CCS| 271 | | Lin et al. | [GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients](https://dl.acm.org/doi/abs/10.1145/3664647.3680775) | ACM MM | 272 | | Huang et al. | [Your Code Secret Belongs to Me: Neural Code Completion Tools Can Memorize Hard-Coded Credentials](https://dl.acm.org/doi/abs/10.1145/3660818) | ACM SE | 273 | | Feng et al. | [Fine-grained Pluggable Gradient Ascent for Knowledge Unlearning in Language Models](https://aclanthology.org/2024.emnlp-main.566/) | ACL | 274 | | Arad et al. |[ReFACT: Updating Text-to-Image Models by Editing the Text Encoder](https://aclanthology.org/2024.naacl-long.140/)|ACL| 275 | |Wu et al. | [Universal Prompt Optimizer for Safe Text-to-Image Generation](https://aclanthology.org/2024.naacl-long.351/)|ACL| 276 | | Liu et al. | [Towards Safer Large Language Models through Machine Unlearning](https://aclanthology.org/2024.findings-acl.107/) | ACL | 277 | | Kim et al. 
| [Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning](https://aclanthology.org/2024.acl-long.322/) | ACL | 278 | | Lee et al. | [Protecting Privacy Through Approximating Optimal Parameters for Sequence Unlearning in Language Models](https://aclanthology.org/2024.findings-acl.936/) | ACL | 279 | | Choi et al. | [Cross-Lingual Unlearning of Selective Knowledge in Multilingual Language Models](https://aclanthology.org/2024.findings-emnlp.630/) | ACL | 280 | | Isonuma et al. | [Unlearning Traces the Influential Training Data of Language Models](https://aclanthology.org/2024.acl-long.343.pdf) | ACL | 281 | | Zhou et al. | [Visual In-Context Learning for Large Vision-Language Models](https://aclanthology.org/2024.findings-acl.940/) | ACL | 282 | | Xing et al. | [EFUF: Efficient Fine-Grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models](https://aclanthology.org/2024.emnlp-main.67/) | ACL | 283 | | Yao et al. | [Machine Unlearning of Pre-trained Large Language Models](https://aclanthology.org/2024.acl-long.457/) | ACL | 284 | | Zhao et al. | [Deciphering the Impact of Pretraining Data on Large Language Models through Machine Unlearning](https://aclanthology.org/2024.findings-acl.559/) | ACL | 285 | | Ni et al. | [Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models](https://aclanthology.org/2024.acl-long.310/) | ACL | 286 | | Zhou et al. | [Making Harmful Behaviors Unlearnable for Large Language Models](https://openreview.net/forum?id=a8cMY6s88u) | ACL | 287 | | Yamashita et al. | [One-Shot Machine Unlearning with Mnemonic Code](https://openreview.net/forum?id=JQ7Ri3ccx6) | ACML | 288 | | Fraboni et al. 
| [SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization](https://proceedings.mlr.press/v238/fraboni24a.html) | AISTATS | 289 | | Alshehri and Zhang | [Forgetting User Preference in Recommendation Systems with Label-Flipping](https://ieeexplore.ieee.org/abstract/document/10386603/authors#authors) | BigData | 290 | | Qiu et al. | [FedCIO: Efficient Exact Federated Unlearning with Clustering, Isolation, and One-shot Aggregation](https://ieeexplore.ieee.org/document/10386788) | BigData | 291 | | Yang and Li | [When Contrastive Learning Meets Graph Unlearning: Graph Contrastive Unlearning for Link Prediction](https://ieeexplore.ieee.org/abstract/document/10386624) | BigData | 292 | | Hu et al. | [ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach](https://dl.acm.org/doi/abs/10.1145/3658644.3670398) | CCS | 293 | | Zhang et al. | [Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning](https://openreview.net/forum?id=MXLBXjQkmb) | COLM | 294 | | Maini et al. | [TOFU: A Task of Fictitious Unlearning for LLMs](https://openreview.net/forum?id=B41hNBoWLo) | COLM | 295 | | Abbasi et al. | [Brainwash: A Poisoning Attack to Forget in Continual Learning](https://openaccess.thecvf.com/content/CVPR2024/html/Abbasi_BrainWash_A_Poisoning_Attack_to_Forget_in_Continual_Learning_CVPR_2024_paper.html) | CVPR | 296 | | Chen et al. | [Towards Memorization-Free Diffusion Models](https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Towards_Memorization-Free_Diffusion_Models_CVPR_2024_paper.html)|CVPR| 297 | | Lyu et al. | [One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications](https://openaccess.thecvf.com/content/CVPR2024/html/Lyu_One-dimensional_Adapter_to_Rule_Them_All_Concepts_Diffusion_Models_and_CVPR_2024_paper.html) | CVPR | 298 | |Wallace et al. 
|[Diffusion Model Alignment Using Direct Preference Optimization](https://openaccess.thecvf.com/content/CVPR2024/html/Wallace_Diffusion_Model_Alignment_Using_Direct_Preference_Optimization_CVPR_2024_paper.html)|CVPR| 299 | | Lu et al. | [MACE: Mass Concept Erasure in Diffusion Models](https://openaccess.thecvf.com/content/CVPR2024/html/Lu_MACE_Mass_Concept_Erasure_in_Diffusion_Models_CVPR_2024_paper.html)|CVPR| 300 | | Chen et al. | [WPN: An Unlearning Method Based on N-pair Contrastive Learning in Language Models](https://ebooks.iospress.nl/doi/10.3233/FAIA240662) | ECAI | 301 | | Fan et al. | [Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning](https://link.springer.com/chapter/10.1007/978-3-031-72664-4_16) | ECCV | 302 | |Gong et al. | [Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models](https://dl.acm.org/doi/10.1007/978-3-031-73668-1_5)|ECCV| 303 | |Kim et al. | [R.A.C.E. : Robust Adversarial Concept Erasure for Secure Text-to-Image Diffusion Model](https://link.springer.com/chapter/10.1007/978-3-031-73010-8_27)|ECCV| 304 | |Kim et al. | [Safeguard Text-to-Image Diffusion Models with Human Feedback Inversion](https://link.springer.com/chapter/10.1007/978-3-031-72855-6_8)|ECCV| 305 | | Wu et al. | [Scissorhands: Scrub Data Influence via Connection Sensitivity in Networks](https://link.springer.com/chapter/10.1007/978-3-031-72970-6_21) | ECCV | 306 | | Zhang et al. | [To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now](https://link.springer.com/chapter/10.1007/978-3-031-72998-0_22) | ECCV | 307 | | Liu et al. | [Implicit Concept Removal of Diffusion Models](https://link.springer.com/chapter/10.1007/978-3-031-72664-4_26) | ECCV | 308 | |Ban et al. | [Understanding the Impact of Negative Prompts: When and How Do They Take Effect?](https://dl.acm.org/doi/10.1007/978-3-031-73024-5_12)|ECCV| 309 | | Zhang et al. 
| [IMMA: Immunizing Text-to-Image Models Against Malicious Adaptation](https://link.springer.com/chapter/10.1007/978-3-031-72933-1_26)|ECCV| 310 | | Poppi et al. | [Removing NSFW Concepts from Vision-and-Language Models for Text-to-Image Retrieval and Generation](https://arxiv.org/abs/2311.16254)|ECCV| 311 | |Liu et al. | [Latent Guard: A Safety Framework for Text-to-Image Generation](https://link.springer.com/chapter/10.1007/978-3-031-73347-5_6)|ECCV| 312 | | Huang et al. | [Receler: Reliable Concept Erasing of Text-to-Image Diffusion Models via Lightweight Erasers](https://dl.acm.org/doi/10.1007/978-3-031-73661-2_20) | ECCV | 313 | | Cheng et al. | [MultiDelete for Multimodal Machine Unlearning](https://link.springer.com/chapter/10.1007/978-3-031-72940-9_10) | ECCV | 314 | | Wang et al. | [How to Forget Clients in Federated Online Learning to Rank?](https://link.springer.com/chapter/10.1007/978-3-031-56063-7_7) | ECIR | 315 | | Jia et al. | [SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning](https://aclanthology.org/2024.emnlp-main.245/) | EMNLP | 316 | | Joshi et al. | [Towards Robust Evaluation of Unlearning in LLMs via Data Transformations](https://openreview.net/forum?id=dEuedZCU66) | EMNLP | 317 | | Tian et al. | [To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models](https://aclanthology.org/2024.findings-emnlp.82/) | arxiv | 318 | | Chakraborty et al. | [Can Textual Unlearning Solve Cross-Modality Safety Alignment?](https://aclanthology.org/2024.findings-emnlp.574/) | EMNLP | 319 | | Huang et al. | [Demystifying Verbatim Memorization in Large Language Models](https://aclanthology.org/2024.emnlp-main.598/) | EMNLP | 320 | | Liu et al. | [Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective](https://aclanthology.org/2024.emnlp-main.495/) | EMNLP | 321 | | Chen et al. 
| [Unlearn What You Want to Forget: Efficient Unlearning for LLMs](https://aclanthology.org/2023.emnlp-main.738/) | EMNLP | 322 | | Liu et al. | [Forgetting Private Textual Sequences in Language Models Via Leave-One-Out Ensemble](https://ieeexplore.ieee.org/abstract/document/10446299) | ICASSP | 323 | | Liu et al. | [Learning to Refuse: Towards Mitigating Privacy Risks in LLMs](https://aclanthology.org/2025.coling-main.114/) | ICCL | 324 | | Fan et al. | [SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation](https://openreview.net/forum?id=gn0mIhQGNM) | ICLR | 325 | | Liu et al. | [Tangent Transformers for Composition, Privacy and Removal](https://openreview.net/forum?id=VLFhbOCz5D) | ICLR | 326 | | Li et al. | [Machine Unlearning for Image-to-Image Generative Models](https://openreview.net/forum?id=9hjVoPWPnh) | ICLR | 327 | | Shen et al. | [Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models](https://openreview.net/forum?id=SIZWiya7FE) | ICLR | 328 | |Li et al. | [Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models](https://openreview.net/forum?id=zpVPhvVKXk)| ICLR| 329 | | Tsai et al. | [Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?](https://openreview.net/forum?id=lm7MRcsFiS) | ICLR | 330 | | Wang et al. | [A Unified and General Framework for Continual Learning](https://openreview.net/forum?id=BE5aK0ETbp) | ICLR | 331 | | Shi et al. | [Detecting Pretraining Data from Large Language Models](https://openreview.net/forum?id=zWqr3MQuNs&trk) | ICLR | 332 | | Eldan et al. | [Who’s Harry Potter? Approximate Unlearning in LLMs](https://openreview.net/forum?id=PDct7vrcvT) | ICLR | 333 | | Wang et al. | [LLM Unlearning via Loss Adjustment with Only Forget Data](https://arxiv.org/abs/2410.11143) | ICLR | 334 | |Chavhan et al. 
| [ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning](https://openreview.net/forum?id=kSdWcw5mkp)|ICLR| 335 | | Zhao et al. | [Rethinking Adversarial Robustness in the Context of the Right to be Forgotten](https://icml.cc/virtual/2024/poster/32857) | ICML | 336 | | Pawelczyk et al. | [In-Context Unlearning: Language Models As Few Shot Unlearners](https://dl.acm.org/doi/abs/10.5555/3692070.3693692) | ICML | 337 | | Barbulescu et al. | [To each (textual sequence) its own: improving memorized-data unlearning in large language models](https://dl.acm.org/doi/abs/10.5555/3692070.3692191) | ICML | 338 | | Li et al. | [The WMDP benchmark: measuring and reducing malicious use with unlearning](https://dl.acm.org/doi/abs/10.5555/3692070.3693215) | ICML | 339 | | Das et al. | [Larimar: large language models with episodic memory control](https://dl.acm.org/doi/abs/10.5555/3692070.3692472) | ICML | 340 | | Barbulescu et al. | [To each (textual sequence) its own: improving memorized-data unlearning in large language models](https://dl.acm.org/doi/abs/10.5555/3692070.3692191) | ICML | 341 | | Zhao et al. | [Learning and forgetting unsafe examples in large language models](https://dl.acm.org/doi/abs/10.5555/3692070.3694584) | ICML | 342 | |Basu et al. | [On mechanistic knowledge localization in text-to-image generative models](https://dl.acm.org/doi/10.5555/3692070.3692199)|ICML| 343 | | Zhang et al. | [SecureCut: Federated Gradient Boosting Decision Trees with Efficient Machine Unlearning](https://link.springer.com/chapter/10.1007/978-3-031-78169-8_23) | ICPR | 344 | | Cai et al. | [Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation](https://dl.acm.org/doi/abs/10.1145/3637528.3671758) | KDD | 345 | | Gong et al. | [A Population-to-individual Tuning Framework for Adapting Pretrained LM to On-device User Intent Prediction](https://dl.acm.org/doi/abs/10.1145/3637528.3671984) | KDD | 346 | | Xue et al. 
| [Erase to Enhance: Data-Efficient Machine Unlearning in MRI Reconstruction](https://proceedings.mlr.press/v250/xue24a.html) | MIDL | 347 | | Gao et al. | [Ethos: Rectifying Language Models in Orthogonal Parameter Space](https://aclanthology.org/2024.findings-naacl.132/) | NAACL | 348 | | Park et al. | [Direct Unlearning Optimization for Robust and Safe Text-to-Image Models](https://openreview.net/forum?id=UdXE5V2d0O) | NeurIPS | 349 | | Ko et al. | [Boosting Alignment for Post-Unlearning Text-to-Image Generative Models](https://openreview.net/forum?id=93ktalFvnJ) | NeurIPS | 350 | | Yang et al. | [GuardT2I: Defending Text-to-Image Models from Adversarial Prompts](https://proceedings.neurips.cc/paper_files/paper/2024/hash/8bea36ac39e11ebe49e9eddbd4b8bd3a-Abstract-Conference.html)|NeurIPS| 351 | | Li et al. | [Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models](https://proceedings.neurips.cc/paper_files/paper/2024/hash/3e53d82a1113e3d240059a9195668edc-Abstract-Conference.html) | NeurIPS | 352 | | Jain et al. | [What Makes and Breaks Safety Fine-tuning? A Mechanistic Study](https://proceedings.neurips.cc/paper_files/paper/2024/hash/a9bef53eb7b0e5950d4f2d9c74a16006-Abstract-Conference.html) | NeurIPS | 353 | | Wu et al. | [Cross-model Control: Improving Multiple Large Language Models in One-time Training](https://proceedings.neurips.cc/paper_files/paper/2024/hash/9856b5d30ac61ab744fdab6f67d874e4-Abstract-Conference.html) | NeurIPS | 354 | | Bui et al. | [Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation](https://proceedings.neurips.cc/paper_files/paper/2024/hash/f02d7fb7ddd2e6be33b6f3224e5cc44a-Abstract-Conference.html) | NeurIPS | 355 | | Zhao et al. | [What makes unlearning hard and what to do about it](https://proceedings.neurips.cc/paper_files/paper/2024/hash/16e18fa3b3add076c30f2a2598f03031-Abstract-Conference.html) | NeurIPS | 356 | | Zhang et al. 
| [Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models](https://proceedings.neurips.cc/paper_files/paper/2024/hash/40954ac18a457dd5f11145bae6454cdf-Abstract-Conference.html) | NeurIPS | 357 | | Yao et al. | [Large Language Model Unlearning](https://proceedings.neurips.cc/paper_files/paper/2024/hash/be52acf6bccf4a8c0a90fe2f5cfcead3-Abstract-Conference.html) | NeurIPS | 358 | | Ji et al. | [Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference](https://proceedings.neurips.cc/paper_files/paper/2024/hash/171291d8fed723c6dfc76330aa827ff8-Abstract-Conference.html?utm_source=chatgpt.com) | NeurIPS | 359 | | Liu et al. | [Large Language Model Unlearning via Embedding-Corrupted Prompts](https://proceedings.neurips.cc/paper_files/paper/2024/hash/d6359156e0e30b1caa116a4306b12688-Abstract-Conference.html) | NeurIPS | 360 | | Jia et al. | [WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models](https://proceedings.neurips.cc/paper_files/paper/2024/hash/649ad92e7067b3553a0f15acac68806d-Abstract-Conference.html) | NeurIPS | 361 | | Zhang et al. | [UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models](https://proceedings.neurips.cc/paper_files/paper/2024/hash/aebf4822d30c3f2600566af7eba83548-Abstract-Datasets_and_Benchmarks_Track.html) | NeurIPS D&B | 362 | | Jin et al. | [RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models](https://proceedings.neurips.cc/paper_files/paper/2024/hash/b1f78dfc9ca0156498241012aec4efa0-Abstract-Datasets_and_Benchmarks_Track.html) | NeurIPS D&B | 363 | | Kurmanji et al. | [Machine Unlearning in Learned Databases: An Experimental Analysis](https://dl.acm.org/doi/abs/10.1145/3639304) | SIGMOD | 364 | | Shen et al. | [CaMU: Disentangling Causal Effects in Deep Model Unlearning](https://epubs.siam.org/doi/abs/10.1137/1.9781611978032.89) | SDM | 365 | | Yoon et al. 
| [Few-Shot Unlearning](https://ieeexplore.ieee.org/document/10646697) | SP | 366 | | Hu et al. | [Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning](https://ieeexplore.ieee.org/document/10646717) | SP | 367 | | Hoang et al. | [Learn To Unlearn for Deep Neural Networks: Minimizing Unlearning Interference With Gradient Projection](https://openaccess.thecvf.com/content/WACV2024/html/Hoang_Learn_To_Unlearn_for_Deep_Neural_Networks_Minimizing_Unlearning_Interference_WACV_2024_paper.html) | WACV | 368 | | Gandikota et al. | [Unified Concept Editing in Diffusion Models](https://openaccess.thecvf.com/content/WACV2024/html/Gandikota_Unified_Concept_Editing_in_Diffusion_Models_WACV_2024_paper.html)|WACV| 369 | | Malnick et al. | [Taming Normalizing Flows](https://openaccess.thecvf.com/content/WACV2024/html/Malnick_Taming_Normalizing_Flows_WACV_2024_paper.html)|WACV| 370 | | Xin et al. | [On the Effectiveness of Unlearning in Session-Based Recommendation](https://dl.acm.org/doi/abs/10.1145/3616855.3635823) | WSDM | 371 | | Zhang | [Graph Unlearning with Efficient Partial Retraining](https://dl.acm.org/doi/abs/10.1145/3589335.3651265) | WWW | 372 | | Liu et al. | [Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning](https://dl.acm.org/doi/abs/10.1145/3589334.3645669) | WWW | 373 | | | | 374 | | Liu et al. | [A Survey on Federated Unlearning: Challenges, Methods, and Future Directions](https://dl.acm.org/doi/full/10.1145/3679014) | ACM Computing Surveys | 375 | | Zhang et al. | [Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions](https://link.springer.com/article/10.1007/s43681-024-00573-9) | AI and Ethics | 376 | | Zha et al. | [To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods](https://link.springer.com/article/10.1007/s43681-023-00398-y) | AI and Ethics | 377 | | Zhang et al. 
| [Recommendation Unlearning via Influence Function](https://dl.acm.org/doi/abs/10.1145/3701763) | ACM Transactions on Recommender Systems | 378 | | Schoepf et al. | [Potion: Towards Poison Unlearning](https://openreview.net/forum?id=4eSiRnWWaF) | DMLR | 379 | | Wang et al. | [Towards efficient and effective unlearning of large language models for recommendation](https://link.springer.com/article/10.1007/s11704-024-40044-2) | Frontiers of Computer Science | 380 | | Poppi et al. | [Multi-Class Explainable Unlearning for Image Classification via Weight Filtering](https://ieeexplore.ieee.org/abstract/document/10564682) | IEEE Intelligent Systems | 381 | | Panda and AP | [FAST: Feature Aware Similarity Thresholding for Weak Unlearning in Black-Box Generative Models](https://ieeexplore.ieee.org/abstract/document/10754629) | IEEE Transactions on Artificial Intelligence | 382 | | Alam et al. | [Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning](https://ieeexplore.ieee.org/abstract/document/10685452) | IEEE Transactions on Artificial Intelligence | 383 | | Shaik et al. | [FRAMU: Attention-based Machine Unlearning using Federated Reinforcement Learning](https://ieeexplore.ieee.org/abstract/document/10483278) | IEEE Transactions on Knowledge and Data Engineering | 384 | | Shaik et al. | [Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy](https://ieeexplore.ieee.org/abstract/document/10750906) | IEEE Transactions on Neural Networks and Learning Systems | 385 | | Romandini et al. | [Federated Unlearning: A Survey on Methods, Design Guidelines, and Evaluation Metrics](https://ieeexplore.ieee.org/abstract/document/10736348) | IEEE Transactions on Neural Networks and Learning Systems | 386 | | Xu and Teng | [Task-Aware Machine Unlearning and Its Application in Load Forecasting](https://ieeexplore.ieee.org/abstract/document/10472091) | IEEE Transactions on Power Systems | 387 | | Li et al. 
| [Pseudo Unlearning via Sample Swapping with Hash](https://www.sciencedirect.com/science/article/abs/pii/S0020025524000483?via%3Dihub) | Information Science | 388 | | | 389 | | Fore et al. | [Unlearning Climate Misinformation in Large Language Models](https://aclanthology.org/2024.climatenlp-1.14/) | ClimateNLP | 390 | | Zhang et al. | [Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models](https://openaccess.thecvf.com/content/CVPR2024W/MMFM/html/Zhang_Forget-Me-Not_Learning_to_Forget_in_Text-to-Image_Diffusion_Models_CVPRW_2024_paper.html) | CVPR Workshop | 391 | | Shi et al. | [DeepClean: Machine Unlearning on the Cheap by Resetting Privacy Sensitive Weights using the Fisher Diagonal](https://link.springer.com/chapter/10.1007/978-3-031-91672-4_1) | ECCV Workshop | 392 | | Sridhar et al. | [Prompt Sliders for Fine-Grained Control, Editing and Erasing of Concepts in Diffusion Models](https://link.springer.com/chapter/10.1007/978-3-031-91672-4_2)|ECCV Workshop| 393 | | Schoepf et al. | [Loss-Free Machine Unlearning](https://openreview.net/forum?id=bCPz7uqmmh) | ICLR Tiny Paper | 394 | | Tamirisa et al. | [Toward Robust Unlearning for LLMs](https://openreview.net/forum?id=4rPzaUF6Ej) | ICLR Workshop | 395 | | Sun et al. | [Learning and Unlearning of Fabricated Knowledge in Language Models](https://openreview.net/forum?id=R5Q5lANcjY) | ICML Workshop | 396 | | Wang et al. | [Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing](https://openreview.net/forum?id=icP1P8y4eu) | ICML Workshop | 397 | | Kadhe et al. | [Split, Unlearn, Merge: Leveraging Data Attributes for More Effective Unlearning in LLMs](https://openreview.net/forum?id=BzIySThX9O) | ICML Workshop | 398 | | Zhao et al. | [Scalability of memorization-based machine unlearning](https://openreview.net/pdf?id=VX9HGFiFF1) | NeurIPS Workshop | 399 | | Wu et al. 
| [CodeUnlearn: Amortized Zero-Shot Machine Unlearning in Language Models Using Discrete Concept](https://openreview.net/forum?id=yf6gOqJiYd) | NeurIPS Workshop | 400 | | Cheng et al. | [MU-Bench: A Multitask Multimodal Benchmark for Machine Unlearning](https://openreview.net/forum?id=FCfY1wYkn9) | NeurIPS Workshop | 401 | | Seyitoğlu et al. | [Extracting Unlearned Information from LLMs with Activation Steering](https://openreview.net/forum?id=RuufZiUWUq) | NeurIPS Workshop | 402 | | Wei et al. | [Provable unlearning in topic modeling and downstream tasks](https://openreview.net/forum?id=Bqvkul76J3) | NeurIPS Workshop | 403 | | Lucki et al. | [An Adversarial Perspective on Machine Unlearning for AI Safety](https://arxiv.org/abs/2409.18025) | NeurIPS Workshop | 404 | | Li et al. | [LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet](https://openreview.net/forum?id=ZmQX402jWC) | NeurIPS Workshop | 405 | | Smirnov et al. | [Classifier-free guidance in LLMs Safety](https://arxiv.org/abs/2412.06846) | NeurIPS Workshop | 406 | | | 407 | | Liu et al. | [Machine Unlearning in Generative AI: A Survey](https://arxiv.org/abs/2407.20516) | arxiv | 408 | | Xu | [Machine Unlearning for Traditional Models and Large Language Models: A Short Survey](https://arxiv.org/abs/2404.01206) | arxiv | 409 | | Lynch et al. | [Eight Methods to Evaluate Robust Unlearning in LLMs](https://arxiv.org/abs/2402.16835) | arxiv | 410 | | Dontsov et al. | [CLEAR: Character Unlearning in Textual and Visual Modalities](https://arxiv.org/abs/2410.18057) | arXiv | 411 | | Hong et al. | [Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces](https://arxiv.org/pdf/2406.11614) | arXiv | 412 | | Jung et al. | [Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization](https://arxiv.org/abs/2401.08998) | arXiv | 413 | | Pham et al. 
| [Robust Concept Erasure Using Task Vectors](https://arxiv.org/abs/2404.03631) | arXiv |
| Qian et al. | [Exploring Fairness in Educational Data Mining in the Context of the Right to be Forgotten](https://arxiv.org/abs/2405.16798) | arXiv |
| Schoepf et al. | [An Information Theoretic Approach to Machine Unlearning](https://arxiv.org/abs/2402.01401) | arXiv |
| Schoepf et al. | [Parameter-tuning-free data entry error unlearning with adaptive selective synaptic dampening](https://arxiv.org/abs/2402.10098) | arXiv |
| Zhao et al. | [Separable Multi-Concept Erasure from Diffusion Models](https://arxiv.org/abs/2402.05947) | arXiv |
| Dige et al. | [Mitigating Social Biases in Language Models through Unlearning](https://arxiv.org/abs/2406.13551) | arXiv |
| Hong et al. | [Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces](https://openreview.net/forum?id=blNaExRx7Q) | arXiv |
| Wang et al. | [Towards Effective Evaluations and Comparisons for LLM Unlearning Methods](https://arxiv.org/abs/2406.09179) | arXiv |
| Ashuach et al. | [REVS: Unlearning Sensitive Information in Language Models via Rank Editing in the Vocabulary Space](https://arxiv.org/abs/2406.09325) | arXiv |
| Zuo et al. | [Federated TrustChain: Blockchain-Enhanced LLM Training and Unlearning](https://arxiv.org/abs/2406.04076) | arXiv |
| Wang et al. | [RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models](https://arxiv.org/abs/2406.01983) | arXiv |
| Chen et al. | [Machine Unlearning in Large Language Models](https://arxiv.org/abs/2404.16841) | arXiv |
| Lu et al. | [Eraser: Jailbreaking Defense in Large Language Models via Unlearning Harmful Knowledge](https://arxiv.org/abs/2404.05880) | arXiv |
| Stoehr et al. | [Localizing Paragraph Memorization in Language Models](https://arxiv.org/abs/2403.19851) | arXiv |
| Pochinkov et al. | [Dissecting Language Models: Machine Unlearning via Selective Pruning](https://arxiv.org/abs/2403.01267) | arXiv |
| Gu et al. | [Second-Order Information Matters: Revisiting Machine Unlearning for Large Language Models](https://arxiv.org/abs/2403.10557) | arXiv |
| Thaker et al. | [Guardrail Baselines for Unlearning in LLMs](https://arxiv.org/abs/2403.03329) | arXiv |
| Wang et al. | [When Machine Unlearning Meets Retrieval-Augmented Generation (RAG): Keep Secret or Forget Knowledge?](https://arxiv.org/abs/2410.15267) | arXiv |
| Muresanu et al. | [Unlearnable Algorithms for In-context Learning](https://arxiv.org/abs/2402.00751) | arXiv |
| Zhao et al. | [Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation](https://arxiv.org/abs/2410.14425) | arXiv |
| Choi et al. | [Breaking Chains: Unraveling the Links in Multi-Hop Knowledge Unlearning](https://arxiv.org/abs/2410.13274) | arXiv |
| Guo et al. | [Mechanistic Unlearning: Robust Knowledge Unlearning and Editing via Mechanistic Localization](https://arxiv.org/abs/2410.12949) | arXiv |
| Deeb et al. | [Do Unlearning Methods Remove Information from Language Model Weights?](https://openreview.net/forum?id=uDjuCpQH5N) | arXiv |
| Takashiro et al. | [Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning](https://arxiv.org/abs/2410.00382) | arXiv |
| Veldanda et al. | [LLM Surgery: Efficient Knowledge Unlearning and Editing in Large Language Models](https://arxiv.org/abs/2409.13054) | arXiv |
| Gu et al. | [MEOW: MEMOry Supervised LLM Unlearning Via Inverted Facts](https://arxiv.org/abs/2409.11844) | arXiv |
| Zhang et al. | [Unforgettable Generalization in Language Models](https://arxiv.org/abs/2409.02228) | arXiv |
| Kazemi et al. | [Unlearning Trojans in Large Language Models: A Comparison Between Natural Language and Source Code](https://arxiv.org/abs/2408.12416) | arXiv |
| Huu-Tien et al. | [On Effects of Steering Latent Representation for Large Language Model Unlearning](https://arxiv.org/abs/2408.06223) | arXiv |
| Yang et al. | [Hotfixing Large Language Models for Code](https://arxiv.org/abs/2408.05727) | arXiv |
| Lizzo et al. | [UNLEARN Efficient Removal of Knowledge in Large Language Models](https://arxiv.org/abs/2408.04140) | arXiv |
| Tamirisa et al. | [Tamper-Resistant Safeguards for Open-Weight LLMs](https://arxiv.org/abs/2408.00761) | arXiv |
| Zhou et al. | [On the Limitations and Prospects of Machine Unlearning for Generative AI](https://arxiv.org/abs/2408.00376) | arXiv |
| Tang et al. | [Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models](https://arxiv.org/abs/2407.20271) | arXiv |
| Lu et al. | [Towards Transfer Unlearning: Empirical Evidence of Cross-Domain Bias Mitigation](https://arxiv.org/abs/2407.16951) | arXiv |
| Gao et al. | [On Large Language Model Continual Unlearning](https://arxiv.org/abs/2407.10223) | arXiv |
| Kolbeinsson et al. | [Composable Interventions for Language Models](https://arxiv.org/abs/2407.06483) | arXiv |
| Hernandez et al. | [If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers](https://arxiv.org/abs/2407.06411) | arXiv |
| Zhang et al. | [From Theft to Bomb-Making: The Ripple Effect of Unlearning in Defending Against Jailbreak Attacks](https://arxiv.org/abs/2407.02855) | arXiv |
| Scaria et al. | [Can Small Language Models Learn, Unlearn, and Retain Noise Patterns?](https://arxiv.org/abs/2407.00996) | arXiv |
| Shumailov et al. | [UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI](https://arxiv.org/abs/2407.00106) | arXiv |
| Qiu et al. | [How Data Inter-connectivity Shapes LLMs Unlearning: A Structural Unlearning Perspective](https://arxiv.org/abs/2406.16810) | arXiv |
| Lu et al. | [Learn and Unlearn in Multilingual LLMs](https://arxiv.org/abs/2406.13748) | arXiv |
| Ma et al. | [Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset](https://arxiv.org/abs/2411.03554) | arXiv |
| Rezaei et al. | [RESTOR: Knowledge Recovery in Machine Unlearning](https://arxiv.org/abs/2411.00204) | arXiv |
| Baluta et al. | [Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method](https://arxiv.org/abs/2411.04388) | arXiv |
| Doshi et al. | [Does Unlearning Truly Unlearn? A Black Box Evaluation of LLM Unlearning Methods](https://arxiv.org/abs/2411.12103) | arXiv |
| Wei et al. | [Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning](https://arxiv.org/abs/2412.08559) | arXiv |
| Zuo et al. | [Large Language Model Federated Learning with Blockchain and Unlearning for Cross-Organizational Collaboration](https://arxiv.org/abs/2412.13551) | arXiv |
| Dou et al. | [Investigating the Feasibility of Mitigating Potential Copyright Infringement via Large Language Model Unlearning](https://arxiv.org/abs/2412.18621) | arXiv |
| Ren et al. | [Copyright Protection in Generative AI: A Technical Perspective, 2024](https://arxiv.org/abs/2402.02333) | arXiv |
| Chakraborty et al. | [Cross-Modal Safety Alignment: Is textual unlearning all you need?](https://arxiv.org/abs/2406.02575) | arXiv |
| Liang et al. | [Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning](https://arxiv.org/pdf/2403.16257v1) | arXiv |
| Wu et al. | [Erasing Undesirable Influence in Diffusion Models](https://arxiv.org/abs/2401.05779) | arXiv |
| Gao et al. | [Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts](https://arxiv.org/abs/2410.12777) | arXiv |
| Huang et al. | [Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning](https://arxiv.org/abs/2409.03326) | arXiv |
| Liu et al. | [Unlearning Concepts from Text-to-Video Diffusion Models](https://arxiv.org/abs/2407.14209) | arXiv |
| Gandikota et al. | [Erasing Conceptual Knowledge from Language Models](https://arxiv.org/abs/2410.02760) | arXiv |
| Tu et al. | [Towards Reliable Empirical Machine Unlearning Evaluation: A Cryptographic Game Perspective](https://arxiv.org/abs/2404.11577) | arXiv |
| Zhuang et al. | [UOE: Unlearning One Expert is Enough for Mixture-of-Experts LLMs](https://openreview.net/forum?id=ZClm0YbcXP) | arXiv |
| | |
| Liu | [Machine Unlearning in 2024](https://ai.stanford.edu/~kzliu/blog/unlearning) | Blog Post |

### 2023

| Author(s) | Title | Venue |
| :-------------------- | ------------------------------------------------------------ | ----- |
| Wang et al. | [KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment](https://aclanthology.org/2023.acl-long.740/) | ACL |
| Yu et al. | [Unlearning Bias in Language Models by Partitioning Gradients](https://aclanthology.org/2023.findings-acl.375.pdf) | ACL |
| Kumar et al. | [Privacy Adhering Machine Un-learning in NLP](https://aclanthology.org/2023.findings-ijcnlp.25) | ACL |
| Adolphs et al.
| [The CRINGE Loss: Learning what language not to model](https://aclanthology.org/2023.acl-long.493/) | ACL |
| Zhang et al. | [Machine Unlearning Methodology Based on Stochastic Teacher Network](https://link.springer.com/chapter/10.1007/978-3-031-46677-9_18) | ADMA |
| LeBlond et al. | [Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile](https://dl.acm.org/doi/abs/10.1145/3605764.3623904) | AISec |
| Cong and Mahdavi | [Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection](https://proceedings.mlr.press/v206/cong23a.html) | AISTATS |
| Wang et al. | [BFU: Bayesian Federated Unlearning with Parameter Self-Sharing](https://dl.acm.org/doi/abs/10.1145/3579856.3590327) | Asia CCS |
| Lee and Woo | [UNDO: Effective and Accurate Unlearning Method for Deep Neural Networks](https://dl.acm.org/doi/abs/10.1145/3583780.3615235) | CIKM |
| Ghazi et al. | [Ticketed Learning-Unlearning Schemes](https://proceedings.mlr.press/v195/ghazi23a/ghazi23a.pdf) | COLT |
| Chen et al. | [Boundary Unlearning: Rapid Forgetting of Deep Networks via Shifting the Decision Boundary](https://openaccess.thecvf.com/content/CVPR2023/html/Chen_Boundary_Unlearning_Rapid_Forgetting_of_Deep_Networks_via_Shifting_the_CVPR_2023_paper.html) | CVPR |
| Schramowski et al. | [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://openaccess.thecvf.com/content/CVPR2023/html/Schramowski_Safe_Latent_Diffusion_Mitigating_Inappropriate_Degeneration_in_Diffusion_Models_CVPR_2023_paper.html) | CVPR |
| Lin et al. | [ERM-KTP: Knowledge-Level Machine Unlearning via Knowledge Transfer](https://openaccess.thecvf.com/content/CVPR2023/html/Lin_ERM-KTP_Knowledge-Level_Machine_Unlearning_via_Knowledge_Transfer_CVPR_2023_paper.html) | CVPR |
| Hagos et al. | [Unlearning Spurious Correlations in Chest X-ray Classification](https://link.springer.com/chapter/10.1007/978-3-031-45275-8_26) | Discovery Science |
| Mireshghallah et al. | [Simple Temporal Adaptation to Changing Label Sets: Hashtag Prediction via Dense KNN](https://aclanthology.org/2023.emnlp-main.452/) | EMNLP |
| Kassem et al. | [Preserving Privacy Through Dememorization: An Unlearning Technique For Mitigating Memorization Risks In Language Models](https://aclanthology.org/2023.emnlp-main.265/) | EMNLP |
| Wu et al. | [DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models](https://aclanthology.org/2023.emnlp-main.174/) | EMNLP |
| Gandikota et al. | [Erasing Concepts from Diffusion Models](https://openaccess.thecvf.com/content/ICCV2023/papers/Gandikota_Erasing_Concepts_from_Diffusion_Models_ICCV_2023_paper.pdf) | ICCV |
| Kumari et al. | [Ablating Concepts in Text-to-Image Diffusion Models](https://openaccess.thecvf.com/content/ICCV2023/html/Kumari_Ablating_Concepts_in_Text-to-Image_Diffusion_Models_ICCV_2023_paper.html) | ICCV |
| Liu et al. | [MUter: Machine Unlearning on Adversarially Trained Models](https://openaccess.thecvf.com/content/ICCV2023/html/Liu_MUter_Machine_Unlearning_on_Adversarially_Trained_Models_ICCV_2023_paper.html) | ICCV |
| Koh et al. | [Disposable Transfer Learning for Selective Source Task Unlearning](https://openaccess.thecvf.com/content/ICCV2023/html/Koh_Disposable_Transfer_Learning_for_Selective_Source_Task_Unlearning_ICCV_2023_paper.html) | ICCV |
| Dukler et al. | [SAFE: Machine Unlearning With Shard Graphs](https://openaccess.thecvf.com/content/ICCV2023/html/Dukler_SAFE_Machine_Unlearning_With_Shard_Graphs_ICCV_2023_paper.html) | ICCV |
| Zheng et al. | [Graph Unlearning Using Knowledge Distillation](https://link.springer.com/chapter/10.1007/978-981-99-7356-9_29) | ICICS |
| Cheng et al. | [GNNDelete: A General Strategy for Unlearning in Graph Neural Networks](https://openreview.net/forum?id=X9yCkmT5Qrl) | ICLR |
| Basu et al. | [Localizing and Editing Knowledge In Text-to-Image Generative Models](https://openreview.net/forum?id=Qmw9ne6SOQ) | ICLR |
| Chien et al. | [Efficient Model Updates for Approximate Unlearning of Graph-Structured Data](https://openreview.net/forum?id=fhcu4FBLciL) | ICLR |
| Ilharco et al. | [Editing models with task arithmetic](https://openreview.net/forum?id=6t0Kwf8-jrj) | ICLR |
| Che et al. | [Fast Federated Machine Unlearning with Nonlinear Functional Theory](https://openreview.net/forum?id=6wQKmKiDHw) | ICML |
| Krishna et al. | [Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten](https://proceedings.mlr.press/v202/krishna23a.html) | ICML |
| Liu et al. | [Machine Unlearning with Affine Hyperplane Shifting and Maintaining for Image Classification](https://link.springer.com/chapter/10.1007/978-981-99-8178-6_17) | ICONIP |
| Xiong et al. | [Exact-Fun: An Exact and Efficient Federated Unlearning Approach](https://ieeexplore.ieee.org/abstract/document/10415652) | IEEE ICDM |
| Su and Li | [Asynchronous Federated Unlearning](https://ieeexplore.ieee.org/abstract/document/10229075) | IEEE INFOCOM |
| Lin et al. | [Machine Unlearning in Gradient Boosting Decision Trees](https://dl.acm.org/doi/10.1145/3580305.3599420) | KDD |
| Qian et al. | [Towards Understanding and Enhancing Robustness of Deep Learning Models against Malicious Unlearning Attacks](https://dl.acm.org/doi/abs/10.1145/3580305.3599526) | KDD |
| Wu et al. | [Certified Edge Unlearning for Graph Neural Networks](https://yue-ning.github.io/docs/kdd23-ceu.pdf) | KDD |
| Ni et al. | [Degeneration-Tuning: Using Scrambled Grid shield Unwanted Concepts from Stable Diffusion](https://dl.acm.org/doi/10.1145/3581783.3611867) | MM |
| Li et al. | [Making Users Indistinguishable: Attribute-wise Unlearning in Recommender Systems](https://dl.acm.org/doi/abs/10.1145/3581783.3612418) | MM |
| Hu et al. | [A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services](https://www.ndss-symposium.org/wp-content/uploads/2024-252-paper.pdf) | NDSS |
| Warnecke et al. | [Machine Unlearning for Features and Labels](https://www.ndss-symposium.org/wp-content/uploads/2023/02/ndss2023_s87_paper.pdf) | NDSS |
| Brack et al. | [SEGA: Instructing Text-to-Image Models using Semantic Guidance](https://proceedings.neurips.cc/paper_files/paper/2023/hash/4ff83037e8d97b2171b2d3e96cb8e677-Abstract-Conference.html) | NeurIPS |
| Chen et al. | [Fast Model Debias with Machine Unlearning](https://proceedings.neurips.cc/paper_files/paper/2023/hash/2ecc80084c96cc25b11b0ab995c25f47-Abstract-Conference.html) | NeurIPS |
| Kurmanji et al. | [Towards Unbounded Machine Unlearning](https://proceedings.neurips.cc/paper_files/paper/2023/hash/062d711fb777322e2152435459e6e9d9-Abstract-Conference.html) | NeurIPS |
| Li et al. | [UltraRE: Enhancing RecEraser for Recommendation Unlearning via Error Decomposition](https://neurips.cc/virtual/2023/poster/72617) | NeurIPS |
| Liu et al. | [Certified Minimax Unlearning with Generalization Rates and Deletion Capacity](https://neurips.cc/virtual/2023/poster/72765) | NeurIPS |
| Jia et al. | [Model Sparsification Can Simplify Machine Unlearning](https://proceedings.neurips.cc/paper_files/paper/2023/hash/a204aa68ab4e970e1ceccfb5b5cdc5e4-Abstract-Conference.html) | NeurIPS |
| Wei et al. | [Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples](https://neurips.cc/virtual/2023/poster/69874) | NeurIPS |
| Di et al. | [Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks](https://neurips.cc/virtual/2023/poster/72092) | NeurIPS |
| Heng et al. | [Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models](https://proceedings.neurips.cc/paper_files/paper/2023/hash/376276a95781fa17c177b1ccdd0a03ac-Abstract-Conference.html) | NeurIPS |
| Wang et al. | [Concept Algebra for (Score-Based) Text-Controlled Generative Models](https://openreview.net/forum?id=SGlrCuwdsB) | NeurIPS |
| Zhao et al. | [Static and Sequential Malicious Attacks in the Context of Selective Forgetting](https://proceedings.neurips.cc/paper_files/paper/2023/hash/ed4bacc8c7ca1ee0e1d4e0ef376b7ac7-Abstract-Conference.html) | NeurIPS |
| Belrose et al. | [LEACE: Perfect linear concept erasure in closed form](https://proceedings.neurips.cc/paper_files/paper/2023/hash/d066d21c619d0a78c5b557fa3291a8f4-Abstract-Conference.html) | NeurIPS |
| Zhang et al. | [Composing Parameter-Efficient Modules with Arithmetic Operation](https://proceedings.neurips.cc/paper_files/paper/2023/hash/299a08ee712d4752c890938da99a77c6-Abstract-Conference.html) | NeurIPS |
| Leysen | [Exploring Unlearning Methods to Ensure the Privacy, Security, and Usability of Recommender Systems](https://dl.acm.org/doi/abs/10.1145/3604915.3608862) | RecSys |
| Koch and Soll | [No Matter How You Slice It: Machine Unlearning with SISA Comes at the Expense of Minority Classes](https://openreview.net/forum?id=RBX1H-SGdT) | SaTML |
| Schelter et al. | [Forget Me Now: Fast and Exact Unlearning in Neighborhood-based Recommendation](https://ssc.io/pdf/caboose.pdf) | SIGIR |
| Kurmanji et al. | [Machine Unlearning in Learned Databases: An Experimental Analysis](https://dl.acm.org/doi/abs/10.1145/3639304) | SIGMOD |
| Wu et al. | [DeltaBoost: Gradient Boosting Decision Trees with Efficient Machine Unlearning](https://dl.acm.org/doi/abs/10.1145/3589313) | SIGMOD |
| Wang et al. | [Inductive Graph Unlearning](https://arxiv.org/abs/2304.03093) | USENIX Security |
| Xia et al. | [Equitable Data Valuation Meets the Right to Be Forgotten in Model Markets](https://www.vldb.org/pvldb/vol16/p3349-liu.pdf) | VLDB |
| Sun et al. | [Lazy Machine Unlearning Strategy for Random Forests](https://link.springer.com/chapter/10.1007/978-981-99-6222-8_32) | WISA |
| Pan et al. | [Unlearning Graph Classifiers with Limited Data Resources](https://dl.acm.org/doi/10.1145/3543507.3583547) | WWW |
| Wu et al. | [GIF: A General Graph Unlearning Strategy via Influence Function](https://dl.acm.org/doi/abs/10.1145/3543507.3583521) | WWW |
| Zhu et al. | [Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning](https://dl.acm.org/doi/abs/10.1145/3543507.3583305) | WWW |
| | |
| Ye and Lu | [Sequence Unlearning for Sequential Recommender Systems](https://link.springer.com/chapter/10.1007/978-981-99-8388-9_33) | AI |
| Chen et al. | [Privacy preserving machine unlearning for smart cities](https://link.springer.com/article/10.1007/s12243-023-00960-z) | Annals of Telecommunications |
| Zhang et al. | [Machine Unlearning by Reversing the Continual Learning](https://www.mdpi.com/2076-3417/13/16/9341) | Applied Sciences |
| Sai et al. | [Machine Un-learning: An Overview of Techniques, Applications, and Future Directions](https://link.springer.com/article/10.1007/s12559-023-10219-3) | Cognitive Computation |
| Tang et al. | [Ensuring User Privacy and Model Security via Machine Unlearning: A Review](https://cdn.techscience.cn/files/cmc/2023/TSP_CMC-77-2/TSP_CMC_32307/TSP_CMC_32307.pdf) | Computers, Materials, and Continua |
| Deng et al. | [Vertical Federated Unlearning on the Logistic Regression Model](https://www.mdpi.com/2079-9292/12/14/3182) | Electronics |
| Zhou et al. | [A unified method to revoke the private data of patients in intelligent healthcare with audit to forget](https://europepmc.org/article/MED/37802981) | Europe PMC |
| Li et al. | [Selective and Collaborative Influence Function for Efficient Recommendation Unlearning](https://www.sciencedirect.com/science/article/abs/pii/S0957417423015270) | Expert Systems with Applications |
| Zeng et al. | [Towards Highly-efficient and Accurate Services QoS Prediction via Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/10171348) | IEEE Access |
| Zhao et al. | [Federated Unlearning With Momentum Degradation](https://ieeexplore.ieee.org/abstract/document/10269017) | IEEE IOT Journal |
| Xia et al. | [FedME2: Memory Evaluation & Erase Promoting Federated Unlearning in DTMN](https://ieeexplore.ieee.org/abstract/document/10234397) | IEEE Selected Areas in Communications |
| Zhang et al. | [Poison Neural Network-Based mmWave Beam Selection and Detoxification With Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/10002349) | IEEE Trans. on Comm. |
| Chundawat et al. | [Zero-Shot Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/10097553) | IEEE Trans. Info. Forensics and Security |
| Wang et al. | [Machine Unlearning via Representation Forgetting with Parameter Self-Sharing](https://ieeexplore.ieee.org/abstract/document/10312776) | IEEE Trans. Info. Forensics and Security |
| Guo et al. | [Verifying in the Dark: Verifiable Machine Unlearning by Using Invisible Backdoor Triggers](https://ieeexplore.ieee.org/abstract/document/10298847) | IEEE Trans. Info. Forensics and Security |
| Zhang et al. | [FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks](https://ieeexplore.ieee.org/abstract/document/10189868) | IEEE Trans. Info. Forensics and Security |
| Guo et al. | [FAST: Adopting Federated Unlearning to Eliminating Malicious Terminals at Server Side](https://ieeexplore.ieee.org/abstract/document/10360312) | IEEE Trans. Network Science and Engineering |
| Tarun et al. | [Fast Yet Effective Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/10113700) | IEEE Trans. Neural Net. and Learn. Systems |
| Tang et al. | [Fuzzy rough unlearning model for feature selection](https://www.sciencedirect.com/science/article/abs/pii/S0888613X23002335) | International Journal of Approximate Reasoning |
| Zhu et al. | [Hierarchical Machine Unlearning](https://dl.acm.org/doi/10.1007/978-3-031-44505-7_7) | Learning and Intelligent Optimization |
| Floridi | [Machine Unlearning: its nature, scope, and importance for a “delete culture”](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4455976) | Philosophy & Technology |
| Zhang et al. | [A Review on Machine Unlearning](https://link.springer.com/article/10.1007/s42979-023-01767-4) | SN Computer Science |
| | |
| Oesterling et al. | [Fair Machine Unlearning: Data Removal while Mitigating Disparities](https://proceedings.mlr.press/v238/oesterling24a.html) | DMLR Workshop |
| Llamas et al. | [Effective Machine Learning-based Access Control Administration through Unlearning](https://ieeexplore.ieee.org/abstract/document/10190682) | EuroS&PW |
| Bae et al. | [Gradient Surgery for One-shot Unlearning on Generative Model](https://arxiv.org/abs/2307.04550) | Generative AI & LAW Workshop |
| Borkar et al. | [What can we learn from Data Leakage and Unlearning for Law?](https://arxiv.org/abs/2307.10476) | ICML Workshop |
| Kim et al. | [Towards Safe Self-Distillation of Internet-Scale Text-to-Image Diffusion Models](https://openreview.net/forum?id=6zALFeqxY0) | ICML Workshop |
| Kadhe et al. | [FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs](https://openreview.net/forum?id=vRPnLsWQNh) | NeurIPS Workshop |
| Li et al. | [Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data](https://aclanthology.org/2023.trustnlp-1.22/) | TrustNLP Workshop |
| | |
| Abbasi et al. | [CovarNav: Machine Unlearning via Model Inversion and Covariance Navigation](https://arxiv.org/abs/2311.12999) | arXiv |
| Cotogni et al. | [DUCK: Distance-based Unlearning via Centroid Kinematics](https://arxiv.org/abs/2312.02052) | arXiv |
| Dhasade et al. | [QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation](https://arxiv.org/abs/2311.15603) | arXiv |
| Huang et al. | [Tight Bounds for Machine Unlearning via Differential Privacy](https://arxiv.org/abs/2309.00886) | arXiv |
| Jin et al. | [Forgettable Federated Linear Learning with Certified Data Removal](https://arxiv.org/abs/2306.02216) | arXiv |
| Kodge et al. | [Deep Unlearning: Fast and Efficient Training-free Approach to Controlled Forgetting](https://arxiv.org/abs/2312.00761) | arXiv |
| Li and Ghosh | [Random Relabeling for Efficient Machine Unlearning](https://arxiv.org/abs/2305.12320) | arXiv |
| Li et al. | [Subspace based Federated Unlearning](https://arxiv.org/abs/2302.12448) | arXiv |
| Liu et al. | [Recommendation Unlearning via Matrix Correction](https://arxiv.org/abs/2307.15960) | arXiv |
| Qu et al. | [Learn to Unlearn: A Survey on Machine Unlearning](https://arxiv.org/abs/2305.07512) | arXiv |
| Ramachandra and Sethi | [Machine Unlearning for Causal Inference](https://arxiv.org/abs/2308.13559) | arXiv |
| Shah et al. | [Unlearning via Sparse Representations](https://openreview.net/forum?id=TLBPjECC5D) | arXiv |
| Si et al. | [Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges](https://arxiv.org/abs/2311.15766) | arXiv |
| Sinha et al. | [Distill to Delete: Unlearning in Graph Networks with Knowledge Distillation](https://arxiv.org/abs/2309.16173) | arXiv |
| Tan et al. | [Unfolded Self-Reconstruction LSH: Towards Machine Unlearning in Approximate Nearest Neighbour Search](https://arxiv.org/abs/2304.02350) | arXiv |
| Xu et al. | [Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations](https://arxiv.org/abs/2302.06676) | arXiv |
| Patil et al. | [Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks](https://arxiv.org/abs/2309.17410) | arXiv |
| Jahanian et al. | [Protecting the Neural Networks against FGSM Attack Using Machine Unlearning](https://www.researchsquare.com/article/rs-3239986/v1) | Research Square |
| Dai et al.
| [Training Data Attribution for Diffusion Models](https://arxiv.org/abs/2306.02174) | arXiv |
| | |
| Fan | [Machine learning and unlearning for IoT anomaly detection](http://dspace.library.uvic.ca/handle/1828/14962) | Thesis |
| Casper | [Deep Forgetting & Unlearning for Safely-Scoped LLMs](https://www.alignmentforum.org/posts/mFAvspg4sXkrfZ7FA/deep-forgetting-and-unlearning-for-safely-scoped-llms) | Blog Post |

### 2022

| Author(s) | Title | Venue |
| :--------------------------- | ------------------------------------------------------------ | ----- |
| Chundawat et al. | [Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher](https://arxiv.org/abs/2205.08096) | AAAI |
| Marchant et al. | [Hard to Forget: Poisoning Attacks on Certified Machine Unlearning](https://ojs.aaai.org/index.php/AAAI/article/view/20736) | AAAI |
| Wu et al. | [PUMA: Performance Unchanged Model Augmentation for Training Data Removal](https://cdn.aaai.org/ojs/20846/20846-13-24859-1-2-20220628.pdf) | AAAI |
| Dai et al. | [Knowledge Neurons in Pretrained Transformers](https://aclanthology.org/anthology-files/anthology-files/queue/pdf/acl/2022.acl-long.581.pdf) | ACL |
| Chen et al. | [Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning](https://proceedings.mlr.press/v151/chen22h.html) | AISTATS |
| Nguyen et al. | [Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten](https://dl.acm.org/doi/abs/10.1145/3488932.3517406) | ASIA CCS |
| Qian et al. | [Patient Similarity Learning with Selective Forgetting](https://www.computer.org/csdl/proceedings-article/bibm/2022/09995016/1JC3byZ284E) | BIBM |
| Chen et al. | [Graph Unlearning](https://dl.acm.org/doi/abs/10.1145/3548606.3559352) | CCS |
| Liu et al. | [Continual Learning and Private Unlearning](https://proceedings.mlr.press/v199/liu22a.html) | CoLLAs |
| Mehta et al. | [Deep Unlearning via Randomized Conditionally Independent Hessians](https://openaccess.thecvf.com/content/CVPR2022/html/Mehta_Deep_Unlearning_via_Randomized_Conditionally_Independent_Hessians_CVPR_2022_paper.html) | CVPR |
| Cao et al. | [Machine Unlearning Method Based On Projection Residual](https://ieeexplore.ieee.org/abstract/document/10032413/) | DSAA |
| Ye et al. | [Learning with Recoverable Forgetting](https://link.springer.com/chapter/10.1007/978-3-031-20083-0_6) | ECCV |
| Thudi et al. | [Unrolling SGD: Understanding Factors Influencing Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/9797378) | EuroS&P |
| Becker and Liebig | [Certified Data Removal in Sum-Product Networks](https://ieeexplore.ieee.org/abstract/document/10030058/) | ICKG |
| Fu et al. | [Knowledge Removal in Sampling-based Bayesian Inference](https://openreview.net/forum?id=dTqOcTUOQO) | ICLR |
| Bevan and Atapour-Abarghouei | [Skin Deep Unlearning: Artefact and Instrument Debiasing in the Context of Melanoma Classification](https://proceedings.mlr.press/v162/bevan22a.html) | ICML |
| Tarun et al. | [Deep Regression Unlearning](https://arxiv.org/abs/2210.08196) | ICML |
| Hu et al. | [Membership Inference via Backdooring](https://www.ijcai.org/proceedings/2022/0532.pdf) | IJCAI |
| Yan et al. | [ARCANE: An Efficient Architecture for Exact Machine Unlearning](https://www.ijcai.org/proceedings/2022/0556.pdf) | IJCAI |
| Liu et al. | [The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining](https://ieeexplore.ieee.org/abstract/document/9796721) | INFOCOM |
| Liu et al. | [Backdoor Defense with Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/9796974) | INFOCOM |
| Jiang et al. | [Machine Unlearning Survey](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12500/125006J/Machine-unlearning-survey/10.1117/12.2660330.short?SSO=1) | MCTE |
| Zhang et al. | [Machine Unlearning for Image Retrieval: A Generative Scrubbing Approach](https://dl.acm.org/doi/abs/10.1145/3503161.3548378) | MM |
| Tanno et al. | [Repairing Neural Networks by Leaving the Right Past Behind](https://openreview.net/forum?id=XiwkvDTU10Y) | NeurIPS |
| Meng et al. | [Locating and Editing Factual Associations in GPT](https://proceedings.neurips.cc/paper_files/paper/2022/hash/6f1d43d5a82a37e89b0665b33bf3a182-Abstract-Conference.html) | NeurIPS |
| Zhang et al. | [Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization](https://proceedings.neurips.cc/paper_files/paper/2022/file/5771d9f214b75be6ff20f63bba315644-Paper-Conference.pdf) | NeurIPS |
| Gao et al. | [Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning](https://crysp.petsymposium.org/popets/2022/popets-2022-0079.pdf) | PETS |
| Sommer et al. | [Athena: Probabilistic Verification of Machine Unlearning](https://petsymposium.org/popets/2022/popets-2022-0072.pdf) | PoPETs |
| Lu et al. | [FP2-MIA: A Membership Inference Attack Free of Posterior Probability in Machine Unlearning](https://link.springer.com/chapter/10.1007/978-3-031-20917-8_12) | ProvSec |
| Cao et al. | [FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information](https://ieeexplore.ieee.org/abstract/document/10179336/) | S&P |
| Ganhor et al. | [Unlearning Protected User Attributes in Recommendations with Adversarial Training](https://dl.acm.org/doi/abs/10.1145/3477495.3531820) | SIGIR |
| Chen et al. | [Recommendation Unlearning](https://dl.acm.org/doi/abs/10.1145/3485447.3511997) | TheWebConf |
| Zhou et al. | [Dynamically Selected Mixup Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/10063433) | TrustCom |
| Thudi et al. | [On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning](https://www.usenix.org/conference/usenixsecurity22/presentation/thudi) | USENIX Security |
| Wang et al. | [Federated Unlearning via Class-Discriminative Pruning](https://dl.acm.org/doi/abs/10.1145/3485447.3512222) | WWW |
| | |
| Fan et al. | [Fast Model Update for IoT Traffic Anomaly Detection with Machine Unlearning](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9927728) | IEEE IoT-J |
| Wu et al. | [Federated Unlearning: Guarantee the Right of Clients to Forget](https://ieeexplore.ieee.org/abstract/document/9964015) | IEEE Network |
| Ma et al. | [Learn to Forget: Machine Unlearning Via Neuron Masking](https://ieeexplore.ieee.org/abstract/document/9844865) | IEEE Trans. Dep. Secure Comp. |
| Lu et al. | [Label-only membership inference attacks on machine unlearning without dependence of posteriors](https://onlinelibrary.wiley.com/doi/abs/10.1002/int.23000) | Int. J. Intel. Systems |
| Meng et al. | [Active forgetting via influence estimation for neural networks](https://onlinelibrary.wiley.com/doi/abs/10.1002/int.22981) | Int. J. Intel. Systems |
| Baumhauer et al. | [Machine Unlearning: Linear Filtration for Logit-based Classifiers](https://link.springer.com/article/10.1007/s10994-022-06178-9) | Machine Learning |
| Mahadevan and Mathioudakis | [Certifiable Unlearning Pipelines for Logistic Regression: An Experimental Study](https://www.mdpi.com/2504-4990/4/3/28) | Machine Learning and Knowledge Extraction |
| | |
| Kong et al. | [Forgeability and Membership Inference Attacks](https://dl.acm.org/doi/abs/10.1145/3560830.3563731) | AISec Workshop |
| Kim and Woo | [Efficient Two-Stage Model Retraining for Machine Unlearning](https://openaccess.thecvf.com/content/CVPR2022W/HCIS/html/Kim_Efficient_Two-Stage_Model_Retraining_for_Machine_Unlearning_CVPRW_2022_paper.html) | CVPR Workshop |
| Gong et al. | [Forget-SVGD: Particle-Based Bayesian Federated Unlearning](https://ieeexplore.ieee.org/abstract/document/9820602) | DSL Workshop |
| Chien et al. | [Certified Graph Unlearning](https://arxiv.org/abs/2206.09140) | GLFrontiers Workshop |
| Raunak and Menezes | [Rank-One Editing of Encoder-Decoder Models](https://arxiv.org/abs/2211.13317) | InterNLP Workshop |
| Lycklama et al. | [Cryptographic Auditing for Collaborative Learning](https://pps-lab.com/papers/camel_mlsafety.pdf) | ML Safety Workshop |
| Kong and Chaudhuri | [Data Redaction from Pre-trained GANs](https://openreview.net/forum?id=V7TaczasnAk) | TSRML Workshop |
| Halimi et al. | [Federated Unlearning: How to Efficiently Erase a Client in FL?](https://arxiv.org/abs/2207.05521) | UpML Workshop |
| Rawat et al. | [Challenges and Pitfalls of Bayesian Unlearning](https://arxiv.org/abs/2207.03227) | UpML Workshop |
| | |
| Becker and Liebig | [Evaluating Machine Unlearning via Epistemic Uncertainty](https://arxiv.org/abs/2208.10836) | arXiv |
| Carlini et al. | [The Privacy Onion Effect: Memorization is Relative](https://arxiv.org/abs/2206.10469) | arXiv |
| Chilkuri et al. | [Debugging using Orthogonal Gradient Descent](https://arxiv.org/abs/2206.08489) | arXiv |
| Chourasia et al. | [Forget Unlearning: Towards True Data-Deletion in Machine Learning](https://arxiv.org/abs/2210.08911) | arXiv |
| Cohen et al. | [Control, Confidentiality, and the Right to be Forgotten](https://arxiv.org/abs/2210.07876) | arXiv |
| Eisenhofer et al. | [Verifiable and Provably Secure Machine Unlearning](https://arxiv.org/abs/2210.09126) | arXiv |
| Fraboni et al. | [Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization](https://arxiv.org/abs/2211.11656) | arXiv |
| Gao et al. | [VeriFi: Towards Verifiable Federated Unlearning](https://arxiv.org/abs/2205.12709) | arXiv |
| Goel et al. | [Evaluating Inexact Unlearning Requires Revisiting Forgetting](https://arxiv.org/abs/2201.06640) | arXiv |
| Guo et al. | [Vertical Machine Unlearning: Selectively Removing Sensitive Information From Latent Feature Space](https://arxiv.org/abs/2202.13295) | arXiv |
| Guo et al. | [Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations](https://arxiv.org/abs/2202.13295) | arXiv |
| Jang et al. | [Knowledge Unlearning for Mitigating Privacy Risks in Language Models](https://arxiv.org/abs/2210.01504) | arXiv |
| Liu et al. | [Forgetting Fast in Recommender Systems](https://arxiv.org/abs/2208.06875) | arXiv |
| Liu et al. | [Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning](https://arxiv.org/abs/2212.03334) | arXiv |
| Lu et al. | [Quark: Controllable Text Generation with Reinforced Unlearning](https://arxiv.org/abs/2205.13636) | arXiv |
| Malnick et al. | [Taming a Generative Model](https://arxiv.org/abs/2211.16488) | arXiv |
| Mercuri et al. | [An Introduction to Machine Unlearning](https://arxiv.org/abs/2209.00939) | arXiv |
| Mireshghallah et al. | [Non-Parametric Temporal Adaptation for Social Media Topic Classification](https://arxiv.org/abs/2209.05706) | arXiv |
| Nguyen et al. | [A Survey of Machine Unlearning](https://arxiv.org/abs/2209.02299) | arXiv |
| Pan et al. | [Unlearning Nonlinear Graph Classifiers in the Limited Training Data Regime](https://arxiv.org/abs/2211.03216) | arXiv |
| Pan et al. | [Machine Unlearning of Federated Clusters](https://arxiv.org/abs/2210.16424) | arXiv |
| Said et al. | [A Survey of Graph Unlearning](https://arxiv.org/abs/2310.02164) | arXiv |
| Weng et al. | [Proof of Unlearning: Definitions and Instantiation](https://arxiv.org/abs/2210.11334) | arXiv |
| Wu et al. | [Federated Unlearning with Knowledge Distillation](https://arxiv.org/abs/2201.09441) | arXiv |
| Yu et al. | [LegoNet: A Fast and Exact Unlearning Architecture](https://arxiv.org/abs/2210.16023) | arXiv |
| Yoon et al. | [Few-Shot Unlearning by Model Inversion](https://arxiv.org/abs/2205.15567) | arXiv |
| Yuan et al. | [Federated Unlearning for On-Device Recommendation](https://arxiv.org/abs/2210.10958) | arXiv |
| Zhu et al. | [Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models](https://arxiv.org/abs/2212.04687) | arXiv |
| Cong and Mahdavi | [Privacy Matters! Efficient Graph Representation Unlearning with Data Removal Guarantee](https://congweilin.github.io/CongWeilin.io/files/Projector.pdf) | Report |
| Cong and Mahdavi | [GraphEditor: An Efficient Graph Representation Learning and Unlearning Approach](https://congweilin.github.io/CongWeilin.io/files/GraphEditor.pdf) | Report |
| Wu et al.
| [Provenance-based Model Maintenance: Implications for Privacy](http://sites.computer.org/debull/A22mar/p37.pdf) | Report | 689 | 690 | ### 2021 691 | 692 | | Author(s) | Title | Venue | 693 | | :------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------- | 694 | | Graves et al. | [Amnesiac Machine Learning](https://ojs.aaai.org/index.php/AAAI/article/view/17371) | AAAI | 695 | | Yu et al. | [How Does Data Augmentation Affect Privacy in Machine Learning?](https://ojs.aaai.org/index.php/AAAI/article/view/17284) | AAAI | 696 | | Liu et al. | [DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts](https://aclanthology.org/2021.acl-long.522/) | ACL | 697 | | Izzo et al. | [Approximate Data Deletion from Machine Learning Models: Algorithms and Evaluations](https://proceedings.mlr.press/v130/izzo21a.html) | AISTATS | 698 | | Li et al. | [Online Forgetting Process for Linear Regression Models](https://proceedings.mlr.press/v130/li21a.html) | AISTATS | 699 | | Neel et al. | [Descent-to-Delete: Gradient-Based Methods for Machine Unlearning](http://proceedings.mlr.press/v132/neel21a.html) | ALT | 700 | | Chen et al. | [REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data](https://dl.acm.org/doi/abs/10.1145/3433210.3453079) | ASIA CCS | 701 | | Chen et al. | [When Machine Unlearning Jeopardizes Privacy](https://dl.acm.org/doi/abs/10.1145/3460120.3484756) | CCS | 702 | | Ullah et al. | [Machine Unlearning via Algorithmic Stability](http://proceedings.mlr.press/v134/ullah21a.html) | COLT | 703 | | Golatkar et al. | [Mixed-Privacy Forgetting in Deep Networks](https://openaccess.thecvf.com/content/CVPR2021/html/Golatkar_Mixed-Privacy_Forgetting_in_Deep_Networks_CVPR_2021_paper.html) | CVPR | 704 | | Dang et al. 
| [Right to Be Forgotten in the Age of Machine Learning](https://link.springer.com/chapter/10.1007/978-3-030-71782-7_35) | ICADS | 705 | | Brophy and Lowd | [Machine Unlearning for Random Forests](http://proceedings.mlr.press/v139/brophy21a.html) | ICML | 706 | | Huang et al. | [Unlearnable Examples: Making Personal Data Unexploitable](https://openreview.net/forum?id=iAmZUo0DxC0) | ICLR | 707 | | Goyal et al. | [Revisiting Machine Learning Training Process for Enhanced Data Privacy](https://dl.acm.org/doi/10.1145/3474124.3474208) | IC3 | 708 | | Tahiliani et al. | [Machine Unlearning: Its Need and Implementation Strategies](https://dl.acm.org/doi/abs/10.1145/3474124.3474158) | IC3 | 709 | | Dam et al. | [Delete My Account: Impact of Data Deletion on Machine Learning Classifiers](https://ieeexplore.ieee.org/abstract/document/10833891) | ICSSA | 710 | | Shibata et al. | [Learning with Selective Forgetting](https://www.ijcai.org/proceedings/2021/0137.pdf) | IJCAI | 711 | | Liu et al. | [Federated Unlearning](https://arxiv.org/abs/2012.13891) | IWQoS | 712 | | Huang et al. | [EMA: Auditing Data Removal from Trained Models](https://link.springer.com/chapter/10.1007/978-3-030-87240-3_76) | MICCAI | 713 | | Gupta et al. | [Adaptive Machine Unlearning](https://proceedings.neurips.cc/paper/2021/hash/87f7ee4fdb57bdfd52179947211b7ebb-Abstract.html) | NeurIPS | 714 | | Khan and Swaroop | [Knowledge-Adaptation Priors](https://proceedings.neurips.cc/paper/2021/hash/a4380923dd651c195b1631af7c829187-Abstract.html) | NeurIPS | 715 | | Sekhari et al. | [Remember What You Want to Forget: Algorithms for Machine Unlearning](https://proceedings.neurips.cc/paper/2021/hash/9627c45df543c816a3ddf2d8ea686a99-Abstract.html) | NeurIPS | 716 | | Liu et al. | [FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models](https://ieeexplore.ieee.org/abstract/document/9521274) | IWQoS | 717 | | Bourtoule et al. 
| [Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/9519428) | S&P | 718 | | Schelter et al. | [HedgeCut: Maintaining Randomised Trees for Low-Latency Machine Unlearning](https://ssc.io/pdf/rdm235.pdf) | SIGMOD | 719 | | Gong et al. | [Bayesian Variational Federated Learning and Unlearning in Decentralized Networks](https://ieeexplore.ieee.org/abstract/document/9593225) | SPAWC | 720 | | | | 721 | | Aldaghri et al. | [Coded Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/9458237) | IEEE Access | 722 | | Liu et al. | [RevFRF: Enabling Cross-domain Random Forest Training with Revocable Federated Learning](https://ieeexplore.ieee.org/abstract/document/9514457) | IEEE Trans. Dep. Secure Comp. | 723 | | | | 724 | | Wang and Schelter | [Efficiently Maintaining Next Basket Recommendations under Additions and Deletions of Baskets and Items](https://arxiv.org/abs/2201.13313) | ORSUM Workshop | 725 | | Jose and Simeone | [A Unified PAC-Bayesian Framework for Machine Unlearning via Information Risk Minimization](https://ieeexplore.ieee.org/abstract/document/9596170) | MLSP Workshop | 726 | | Peste et al. | [SSSE: Efficiently Erasing Samples from Trained Machine Learning Models](https://openreview.net/forum?id=GRMKEx3kEo) | PRIML Workshop | 727 | | | | 728 | | Chen et al. | [Machine unlearning via GAN](https://arxiv.org/abs/2111.11869) | arXiv | 729 | | He et al. | [DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks](https://arxiv.org/abs/2105.06209) | arXiv | 730 | | Madahaven and Mathioudakis | [Certifiable Machine Unlearning for Linear Models](https://arxiv.org/abs/2106.15093) | arXiv | 731 | | Parne et al. | [Machine Unlearning: Learning, Polluting, and Unlearning for Spam Email](https://arxiv.org/abs/2111.14609) | arXiv | 732 | | Thudi et al. | [Bounding Membership Inference](https://arxiv.org/abs/2202.12232) | arXiv | 733 | | Zeng et al. 
| [ModelPred: A Framework for Predicting Trained Model from Training Data](https://arxiv.org/abs/2111.12545) | arXiv | 734 | 735 | ### 2020 736 | 737 | | Author(s) | Title | Venue | 738 | | :-------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ | 739 | | Tople te al. | [Analyzing Information Leakage of Updates to Natural Language Models](https://dl.acm.org/doi/abs/10.1145/3372297.3417880) | CCS | 740 | | Golatkar et al. | [Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks](https://openaccess.thecvf.com/content_CVPR_2020/html/Golatkar_Eternal_Sunshine_of_the_Spotless_Net_Selective_Forgetting_in_Deep_CVPR_2020_paper.html) | CVPR | 741 | | Golatkar et al. | [Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations](https://openreview.net/forum?id=YtK037uBtWt) | ECCV | 742 | | Garg et al. | [Formalizing Data Deletion in the Context of the Right to be Forgotten](https://link.springer.com/chapter/10.1007/978-3-030-45724-2_13) | EUROCRYPT | 743 | | Guo et al. | [Certified Data Removal from Machine Learning Models](https://dl.acm.org/doi/abs/10.5555/3524938.3525297) | ICML | 744 | | Wu et al. | [DeltaGrad: Rapid Retraining of Machine Learning Models](https://icml.cc/virtual/2020/poster/5915) | ICML | 745 | | Nguyen et al. | [Variational Bayesian Unlearning](https://proceedings.neurips.cc/paper/2020/hash/b8a6550662b363eb34145965d64d0cfb-Abstract.html) | NeurIPS | 746 | | | | 747 | | Liu et al. 
| [Learn to Forget: User-Level Memorization Elimination in Federated Learning](https://www.researchgate.net/profile/Ximeng-Liu-5/publication/340134612_Learn_to_Forget_User-Level_Memorization_Elimination_in_Federated_Learning/links/5e849e64a6fdcca789e5f955/Learn-to-Forget-User-Level-Memorization-Elimination-in-Federated-Learning.pdf) | researchgate | 748 | | Felps et al. | [Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale](https://arxiv.org/abs/2012.04699) | arXiv | 749 | | Sommer et al. | [Towards Probabilistic Verification of Machine Unlearning](https://arxiv.org/abs/2003.04247) | arXiv | 750 | 751 | ### 2019 752 | 753 | | Author(s) | Title | Venue | 754 | | :------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- | 755 | | Shintre et al. | [Making Machine Learning Forget](https://link.springer.com/chapter/10.1007/978-3-030-21752-5_6) | APF | 756 | | Du et al. | [Lifelong Anomaly Detection Through Unlearning](https://dl.acm.org/doi/abs/10.1145/3319535.3363226) | CCS | 757 | | Kim et al. | [Learning Not to Learn: Training Deep Neural Networks With Biased Data](https://openaccess.thecvf.com/content_CVPR_2019/html/Kim_Learning_Not_to_Learn_Training_Deep_Neural_Networks_With_Biased_CVPR_2019_paper.html) | CVPR | 758 | | Ginart et al. | [Making AI Forget You: Data Deletion in Machine Learning](http://papers.nips.cc/paper/8611-making-ai-forget-you-data-deletion-in-machine-learning) | NeurIPS | 759 | | Wang et al. 
| [Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks](https://people.cs.vt.edu/vbimal/publications/backdoor-sp19.pdf) | S&P | 760 | | | | 761 | | Schelter | [“Amnesia” – Towards Machine Learning Models That Can Forget User Data Very Fast](http://cidrdb.org/cidr2020/papers/p32-schelter-cidr20.pdf) | AIDB Workshop | 762 | 763 | ### 2018 764 | 765 | | Author(s) | Title | Venue | 766 | | :------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ | 767 | | Cao et al. | [Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning](https://dl.acm.org/citation.cfm?id=3196517) | ASIACCS | 768 | | Chen et al. | [A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine](https://link.springer.com/article/10.1007/s10586-018-1772-4) | Cluster Computing | 769 | | | | 770 | | Villaronga et al. | [Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten](https://www.sciencedirect.com/science/article/pii/S0267364917302091) | Computer Law & Security Review | 771 | | Veale et al. | [Algorithms that remember: model inversion attacks and data protection law](https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0083) | The Royal Society | 772 | | | | 773 | | European Union | [GDPR](https://gdpr.eu/) | 774 | | State of California | [California Consumer Privacy Act](https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375) | 775 | 776 | ### 2017 777 | 778 | | Author(s) | Title | Venue | 779 | | :------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | 780 | | Shokri et al. 
| [Membership Inference Attacks Against Machine Learning Models](https://ieeexplore.ieee.org/abstract/document/7958568) | S&P | 781 | | Kwak et al. | [Let Machines Unlearn--Machine Unlearning and the Right to be Forgotten](https://aisel.aisnet.org/amcis2017/InformationSystems/Presentations/14/) | SIGSEC | 782 | 783 | ### Before 2017 784 | 785 | | Author(s) | Title | Venue | 786 | | :---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ | 787 | | Ganin et al. | [Domain-Adversarial Training of Neural Networks](https://www.jmlr.org/papers/volume17/15-239/15-239.pdf) | JMLR 2016 | 788 | | Cao and Yang | [Towards Making Systems Forget with Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/7163042) | S&P 2015 | 789 | | Tsai et al. | [Incremental and decremental training for linear classification](https://dl.acm.org/citation.cfm?id=2623661) | KDD 2014 | 790 | | Karasuyama and Takeuchi | [Multiple Incremental Decremental Learning of Support Vector Machines](https://ieeexplore.ieee.org/abstract/document/5484614) | NeurIPS 2009 | 791 | | Duan et al. | [Decremental Learning Algorithms for Nonlinear Langrangian and Least Squares Support Vector Machines](https://pdfs.semanticscholar.org/312c/677f0882d0dfd60bfd77346588f52aefd10f.pdf) | OSB 2007 | 792 | | Romero et al. | [Incremental and Decremental Learning for Linear Support Vector Machines](https://link.springer.com/chapter/10.1007/978-3-540-74690-4_22) | ICANN 2007 | 793 | | Tveit et al. 
| [Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients](https://link.springer.com/chapter/10.1007/978-3-540-45228-7_42) | DaWaK 2003 | 794 | | Tveit and Hetland | [Multicategory Incremental Proximal Support Vector Classifiers](https://link.springer.com/chapter/10.1007/978-3-540-45224-9_54) | KES 2003 | 795 | | Cauwenberghs and Poggio | [Incremental and Decremental Support Vector Machine Learning](http://papers.nips.cc/paper/1814-incremental-and-decremental-support-vector-machine-learning.pdf) | NeurIPS 2001 | 796 | | Canada | [PIPEDA](https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/) | 2000 | 797 | --------------------------------------------------------------------------------