# Awesome Pruning [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)

A curated list of neural network pruning and related resources. Inspired by [awesome-deep-vision](https://github.com/kjw0612/awesome-deep-vision), [awesome-adversarial-machine-learning](https://github.com/yenchenlin/awesome-adversarial-machine-learning), [awesome-deep-learning-papers](https://github.com/terryum/awesome-deep-learning-papers), and [Awesome-NAS](https://github.com/D-X-Y/Awesome-NAS).

Please feel free to [submit a pull request](https://github.com/he-y/awesome-Pruning/pulls) or [open an issue](https://github.com/he-y/awesome-Pruning/issues) to add papers.
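As a quick orientation for the type codes in the Type of Pruning table below, here is a minimal NumPy sketch (illustrative only, not taken from any listed paper) contrasting `W`-type unstructured weight pruning with `F`-type structured filter pruning on a toy weight matrix:

```python
import numpy as np

# Toy weight matrix: 4 "filters" (rows), 3 inputs each.
W = np.array([[0.1, -2.0, 0.3],
              [1.5,  0.2, -0.4],
              [0.05, 0.06, -0.07],
              [2.2, -1.1,  0.9]])

def weight_prune(W, sparsity):
    """`W`-type (unstructured): zero the smallest-magnitude individual weights."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    thresh = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)

def filter_prune(W, sparsity):
    """`F`-type (structured): zero whole filters (rows) with the smallest L1 norm."""
    k = int(W.shape[0] * sparsity)
    drop = np.argsort(np.abs(W).sum(axis=1))[:k]
    out = W.copy()
    out[drop] = 0.0
    return out

# Both remove about half the weights, but filter pruning removes entire rows
# (so the layer genuinely shrinks), while weight pruning leaves a scattered
# sparsity pattern that needs sparse kernels to yield speedups.
print(weight_prune(W, 0.5))
print(filter_prune(W, 0.5))
```

`S` (special networks, e.g. GANs or SNNs) and `Other` cover methods that do not fit either pattern.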
6 | 7 | ## Table of Contents 8 | 9 | - [Type of Pruning](#type-of-pruning) 10 | 11 | - [A Survey of Structured Pruning](#a-survey-of-structured-pruning-arxiv-version-and-ieee-t-pami-version) 12 | 13 | - [2023 Venues](#2023) 14 | 15 | - [2022 Venues](#2022) 16 | 17 | - [2021 Venues](#2021) 18 | 19 | - [2020 Venues](#2020) 20 | 21 | - [2019 Venues](#2019) 22 | 23 | - [2018 Venues](#2018) 24 | 25 | - [2017 Venues](#2017) 26 | 27 | - [2016 Venues](#2016) 28 | 29 | - [2015 Venues](#2015) 30 | 31 | ### Type of Pruning 32 | 33 | | Type | `F` | `W` | `S` | `Other` | 34 | |:----------- |:--------------:|:--------------:|:----------------:|:-----------:| 35 | | Explanation | Filter pruning | Weight pruning | Special Networks | other types | 36 | 37 | ### A Survey of Structured Pruning ([arXiv version](https://arxiv.org/abs/2303.00566) and [IEEE T-PAMI version](https://ieeexplore.ieee.org/document/10330640)) 38 | 39 | Please cite our paper if it's helpful: 40 | ``` 41 | @article{he2024structured, 42 | author={He, Yang and Xiao, Lingao}, 43 | journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 44 | title={Structured Pruning for Deep Convolutional Neural Networks: A Survey}, 45 | year={2024}, 46 | volume={46}, 47 | number={5}, 48 | pages={2900-2919}, 49 | doi={10.1109/TPAMI.2023.3334614}} 50 | ``` 51 | 52 | The related papers are categorized as below: 53 | ![Structured Pruning Taxonomy](./Structured_Taxonomy.png) 54 | 55 | ### 2023 56 | | Title | Venue | Type | Code | 57 | |:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:| 58 | | [Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph](https://openreview.net/forum?id=uVcDssQff_) | ICLR | `W` | [PyTorch(Author)](https://github.com/VITA-Group/ramanujan-on-pai)(Releasing) | 59 | | [Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's 
Mask?](https://openreview.net/forum?id=xSsW2Am-ukZ) | ICLR | `W` | - | 60 | | [Bit-Pruning: A Sparse Multiplication-Less Dot-Product](https://openreview.net/forum?id=YUDiZcZTI8) | ICLR | `W` | [Code Deleted](https://github.com/DensoITLab/bitprune) | 61 | | [NTK-SAP: Improving neural network pruning by aligning training dynamics](https://openreview.net/forum?id=-5EWhW_4qWP) | ICLR | `W` | - | 62 | | [A Unified Framework for Soft Threshold Pruning](https://openreview.net/forum?id=cCFqcrq0d8) | ICLR | `W` | [PyTorch(Author)](https://github.com/Yanqi-Chen/LATS) | 63 | | [CrAM: A Compression-Aware Minimizer](https://openreview.net/forum?id=_eTZBs-yedr) | ICLR | `W` | - | 64 | | [Trainability Preserving Neural Pruning](https://openreview.net/forum?id=AZFvpnnewr) | ICLR | `F` | - | 65 | | [DFPC: Data flow driven pruning of coupled channels without data](https://openreview.net/forum?id=mhnHqRqcjYU) | ICLR | `F` | [PyTorch(Author)](https://drive.google.com/drive/folders/18eRYzWnB_6Qq0cYiSzvyOgicqn50g3-m) | 66 | | [TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning](https://openreview.net/forum?id=sZI1Oj9KBKy) | ICLR | `F` | [PyTorch(Author)](https://github.com/tvsprune/TVS_Prune) | 67 | | [HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers](https://openreview.net/forum?id=D7srTrGhAs) | ICLR | `F` | - | 68 | | [MECTA: Memory-Economic Continual Test-Time Model Adaptation](https://openreview.net/forum?id=N92hjSf5NNh) | ICLR | `F` | - | 69 | | [DepthFL : Depthwise Federated Learning for Heterogeneous Clients](https://openreview.net/forum?id=pf8RIZTMU58) | ICLR | `F` | - | 70 | | [OTOv2: Automatic, Generic, User-Friendly](https://openreview.net/forum?id=7ynoX1ojPMt) | ICLR | `F` | [PyTorch(Author)](https://github.com/tianyic/only_train_once) | 71 | | [Over-parameterized Model Optimization with Polyak-Lojasiewicz Condition](https://openreview.net/forum?id=aBIpZvMdS56) | ICLR 
| `F` | - | 72 | | [Pruning Deep Neural Networks from a Sparsity Perspective](https://openreview.net/forum?id=i-DleYh34BM) | ICLR | `WF` | [PyTorch(Author)](https://github.com/dem123456789/Pruning-Deep-Neural-Networks-from-a-Sparsity-Perspective) | 73 | | [Holistic Adversarially Robust Pruning](https://openreview.net/forum?id=sAJDi9lD06L) | ICLR | `WF` | - | 74 | | [How I Learned to Stop Worrying and Love Retraining](https://openreview.net/forum?id=_nF5imFKQI) | ICLR | `WF` | [PyTorch(Author)](https://github.com/ZIB-IOL/BIMP) | 75 | | [Symmetric Pruning in Quantum Neural Networks](https://openreview.net/forum?id=K96AogLDT2K) | ICLR | `S` | - | 76 | | [Rethinking Graph Lottery Tickets: Graph Sparsity Matters](https://openreview.net/forum?id=fjh7UGQgOB) | ICLR | `S` | - | 77 | | [Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks](https://openreview.net/forum?id=4UldFtZ_CVF) | ICLR | `S` | - | 78 | | [Searching Lottery Tickets in Graph Neural Networks: A Dual Perspective](https://openreview.net/forum?id=Dvs-a3aymPe) | ICLR | `S` | - | 79 | | [Diffusion Models for Causal Discovery via Topological Ordering](https://openreview.net/forum?id=Idusfje4-Wq) | ICLR | `S` | - | 80 | | [A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis](https://openreview.net/forum?id=vVJZtlZB9D) | ICLR | `Other` | - | 81 | | [Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!](https://openreview.net/forum?id=J6F3lLg4Kdp) | ICLR | `Other` | - | 82 | | [Minimum Variance Unbiased N:M Sparsity for the Neural Gradients](https://openreview.net/forum?id=vuD2xEtxZcj) | ICLR | `Other` | - | 83 | 84 | ### 2022 85 | | Title | Venue | Type | Code | 86 | |:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:| 87 | | [Parameter-Efficient Masking Networks](https://openreview.net/forum?id=7rcuQ_V2GFg) | NeurIPS | `W` | 
[PyTorch(Author)](https://github.com/yueb17/PEMN) | 88 | | ["Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach](https://openreview.net/forum?id=NaW6T93F34m) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/Model-Compression/Lossless_Compression) | 89 | | [Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing](https://openreview.net/forum?id=2EUJ4e6H4OX) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/GATECH-EIC/S3-Router) | 90 | | [Models Out of Line: A Fourier Lens on Distribution Shift Robustness](https://openreview.net/forum?id=YZ-N-sejjwO) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/sarafridov/RobustNets) | 91 | | [Robust Binary Models by Pruning Randomly-initialized Networks](https://openreview.net/forum?id=5g-h_DILemH) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/IVRL/RobustBinarySubNet) | 92 | | [Rare Gems: Finding Lottery Tickets at Initialization](https://openreview.net/forum?id=Jpxd93u2vK-) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/ksreenivasan/pruning_is_enough) | 93 | | [Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning](https://openreview.net/forum?id=ksVGCOlOEba) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/IST-DASLab/OBC) | 94 | | [Pruning’s Effect on Generalization Through the Lens of Training and Regularization](https://openreview.net/forum?id=OrcLKV9sKWp) | NeurIPS | `W` | - | 95 | | [Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation](https://openreview.net/forum?id=mTXQIpXPDbh) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/BackRazor_Neurips22) | 96 | | [Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective](https://openreview.net/forum?id=fbUybomIuE) | NeurIPS | `W` | - | 97 | | [Sparse Winning Tickets are Data-Efficient Image 
Recognizers](https://openreview.net/forum?id=wfKbtSjHA6F) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/DataEfficientLTH) | 98 | | [Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks](https://openreview.net/forum?id=QLPzCpu756J) | NeurIPS | `W` | - | 99 | | [Weighted Mutual Learning with Diversity-Driven Model Compression](https://openreview.net/forum?id=UQJoGBNRX4) | NeurIPS | `F` | - | 100 | | [SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance](https://openreview.net/forum?id=oQIJsMlyaW_) | NeurIPS | `F` | - | 101 | | [Data-Efficient Structured Pruning via Submodular Optimization](https://openreview.net/forum?id=K2QGzyLwpYG) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/marwash25/subpruning) | 102 | | [Structural Pruning via Latency-Saliency Knapsack](https://openreview.net/forum?id=cUOR-_VsavA) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/NVlabs/HALP) | 103 | | [Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm](https://openreview.net/forum?id=5hgYi4r5MDp) | NeurIPS | `WF` | - | 104 | | [Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions](https://openreview.net/forum?id=btpIaJiRx6z) | NeurIPS | `WF` | - | 105 | | [Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints](https://openreview.net/forum?id=XUvSYc6TqDF) | NeurIPS | `WF` | [PyTorch(Author)](https://github.com/gallego-posada/constrained_sparsity) | 106 | | [Advancing Model Pruning via Bi-level Optimization](https://openreview.net/forum?id=t6O08FxvtBY) | NeurIPS | `WF` | [PyTorch(Author)](https://github.com/OPTML-Group/BiP) | 107 | | [Emergence of Hierarchical Layers in a Single Sheet of Self-Organizing Spiking Neurons](https://openreview.net/forum?id=cPVuuk1lZb3) | NeurIPS | `S` | - | 108 | | [CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network 
Inference](https://openreview.net/forum?id=VeQBBm1MmTZ) | NeurIPS | `S` | [PyTorch(Author)](https://github.com/ranran0523/CryptoGCN)(Releasing) | 109 | | [Transform Once: Efficient Operator Learning in Frequency Domain](https://openreview.net/forum?id=B2PpZyAAEgV) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/DiffEqML/kairos)(Releasing) | 110 | | [Most Activation Functions Can Win the Lottery Without Excessive Depth](https://openreview.net/forum?id=NySDKS9SxN) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/RelationalML/LT-existence) | 111 | | [Pruning has a disparate impact on model accuracy](https://openreview.net/forum?id=11nMVZK0WYM) | NeurIPS | `Other` | - | 112 | | [Model Preserving Compression for Neural Networks](https://openreview.net/forum?id=gt-l9Hu2ndd) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/jerry-chee/ModelPreserveCompressionNN) | 113 | | [Prune Your Model Before Distill It](https://link.springer.com/10.1007/978-3-031-20083-0_8) | ECCV | `W` | [PyTorch(Author)](https://https://github.com/ososos888/prune-then-distill) | 114 | | [FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks](https://link.springer.com/10.1007/978-3-031-19775-8_5) | ECCV | `W` | - | 115 | | [FairGRAPE: Fairness-Aware GRAdient Pruning mEthod for Face Attribute Classification](https://link.springer.com/10.1007/978-3-031-19778-9_24) | ECCV | `F` | [PyTorch(Author)](https://github.com/Bernardo1998/FairGRAPE) | 116 | | [SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_40) | ECCV | `F` | [PyTorch(Author)](https://github.com/GATECH-EIC/SuperTickets) | 117 | | [Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_34) | ECCV | `F` | [PyTorch(Author)](https://github.com/sseung0703/EKG) | 118 | | [CPrune: 
Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution](https://link.springer.com/10.1007/978-3-031-20044-1_37) | ECCV | `F` | [PyTorch(Author)](https://github.com/taehokim20/CPrune) | 119 | | [Soft Masking for Cost-Constrained Channel Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_38) | ECCV | `F` | [PyTorch(Author)](https://github.com/NVlabs/SMCP) | 120 | | [Filter Pruning via Feature Discrimination in Deep Neural Networks](https://link.springer.com/10.1007/978-3-031-19803-8_15) | ECCV | `F` | - | 121 | | [Disentangled Differentiable Network Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_20) | ECCV | `F` | - | 122 | | [Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps](https://link.springer.com/10.1007/978-3-031-19803-8_17) | ECCV | `F` | [PyTorch(Author)](https://github.com/Alii-Ganjj/InterpretationsSteeredPruning) | 123 | | [Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning](https://link.springer.com/10.1007/978-3-031-20050-2_29) | ECCV | `F` | [PyTorch(Author)](https://github.com/fanhanwei/BOCR) | 124 | | [Multi-granularity Pruning for Model Acceleration on Mobile Devices](https://link.springer.com/10.1007/978-3-031-20083-0_29) | ECCV | `WF` | - | 125 | | [Exploring Lottery Ticket Hypothesis in Spiking Neural Networks](https://link.springer.com/10.1007/978-3-031-19775-8_7) | ECCV | `S` | [PyTorch(Author)](https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Lottery-Ticket-Hypothesis-in-SNNs) | 126 | | [Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_42) | ECCV | `S` | - | 127 | | [Recent Advances on Neural Network Pruning at Initialization](https://www.ijcai.org/proceedings/2022/786) | IJCAI | `W` | [PyTorch(Author)](https://github.com/mingsun-tse/smile-pruning) | 128 | | [FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using 
Shared Data on the Server](https://www.ijcai.org/proceedings/2022/385) | IJCAI | `F` | - | 129 | | [On the Channel Pruning using Graph Convolution Network for Convolutional Neural Network Acceleration](https://www.ijcai.org/proceedings/2022/431) | IJCAI | `F` | - | 130 | | [Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization](https://www.ijcai.org/proceedings/2022/449) | IJCAI | `F` | - | 131 | | [Neural Network Pruning by Cooperative Coevolution](https://www.ijcai.org/proceedings/2022/667) | IJCAI | `F` | - | 132 | | [SPDY: Accurate Pruning with Speedup Guarantees](https://proceedings.mlr.press/v162/frantar22a.html) | ICML | `W` | [PyTorch(Author)](https://github.com/IST-DASLab/spdy) | 133 | | [Sparse Double Descent: Where Network Pruning Aggravates Overfitting](https://proceedings.mlr.press/v162/he22d.html) | ICML | `W` | [PyTorch(Author)](https://github.com/hezheug/sparse-double-descent) | 134 | | [The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks](https://proceedings.mlr.press/v162/yu22f.html) | ICML | `W` | [PyTorch(Author)](https://github.com/yuxwind/CBS) | 135 | | [Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness](https://proceedings.mlr.press/v162/chen22af.html) | ICML | `F` | [PyTorch(Author)](https://github.com/VITA-Group/Linearity-Grafting) | 136 | | [Winning the Lottery Ahead of Time: Efficient Early Network Pruning](https://proceedings.mlr.press/v162/rachwan22a.html) | ICML | `F` | [PyTorch(Author)](https://github.com/johnrachwan123/Early-Cropression-via-Gradient-Flow-Preservation) | 137 | | [Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning](https://proceedings.mlr.press/v162/yu22e.html) | ICML | `F` | [PyTorch(Author)](https://github.com/yusx-swapp/GNN-RL-Model-Compression) | 138 | | [Fast Lossless Neural Compression with Integer-Only Discrete 
Flows](https://proceedings.mlr.press/v162/wang22a.html) | ICML | `F` | [PyTorch(Author)](https://github.com/thu-ml/IODF) | 139 | | [DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks](https://proceedings.mlr.press/v162/fu22c.html) | ICML | `Other` | [PyTorch(Author)](https://github.com/facebookresearch/DepthShrinker) | 140 | | [PAC-Net: A Model Pruning Approach to Inductive Transfer Learning](https://proceedings.mlr.press/v162/myung22a.html) | ICML | `Other` | - | 141 | | [Neural Network Pruning Denoises the Features and Makes Local Connectivity Emerge in Visual Tasks](https://proceedings.mlr.press/v162/pellegrini22a.html) | ICML | `Other` | [PyTorch(Author)](https://github.com/phiandark/SiftingFeatures) | 142 | | [Interspace Pruning: Using Adaptive Filter Representations To Improve Training of Sparse CNNs](https://openaccess.thecvf.com/content/CVPR2022/html/Wimmer_Interspace_Pruning_Using_Adaptive_Filter_Representations_To_Improve_Training_of_CVPR_2022_paper.html) | CVPR | `W` | - | 143 | | [Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network](https://openaccess.thecvf.com/content/CVPR2022/html/Lee_Masking_Adversarial_Damage_Finding_Adversarial_Saliency_for_Robust_and_Sparse_CVPR_2022_paper.html) | CVPR | `W` | - | 144 | | [When To Prune? 
A Policy Towards Early Structural Pruning](https://openaccess.thecvf.com/content/CVPR2022/html/Shen_When_To_Prune_A_Policy_Towards_Early_Structural_Pruning_CVPR_2022_paper.html) | CVPR | `F` | - | 145 | | [Fire Together Wire Together: A Dynamic Pruning Approach With Self-Supervised Mask PredictionFire Together Wire Together: A Dynamic Pruning Approach With Self-Supervised Mask Prediction](https://openaccess.thecvf.com/content/CVPR2022/html/Elkerdawy_Fire_Together_Wire_Together_A_Dynamic_Pruning_Approach_With_Self-Supervised_CVPR_2022_paper.html) | CVPR | `F` | - | 146 | | [Revisiting Random Channel Pruning for Neural Network Compression](https://openaccess.thecvf.com/content/CVPR2022/html/Li_Revisiting_Random_Channel_Pruning_for_Neural_Network_Compression_CVPR_2022_paper.html) | CVPR | `F` | [PyTorch(Author)](https://github.com/ofsoundof/random_channel_pruning)(Releasing) | 147 | | [Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning](https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Bayesian_Sparse_Networks_With_Full_Experience_Replay_for_Continual_CVPR_2022_paper.html) | CVPR | `F` | - | 148 | | [DECORE: Deep Compression With Reinforcement Learning](https://openaccess.thecvf.com/content/CVPR2022/html/Alwani_DECORE_Deep_Compression_With_Reinforcement_Learning_CVPR_2022_paper.html) | CVPR | `F` | - | 149 | | [CHEX: CHannel EXploration for CNN Model Compression](https://openaccess.thecvf.com/content/CVPR2022/html/Hou_CHEX_CHannel_EXploration_for_CNN_Model_Compression_CVPR_2022_paper.html) | CVPR | `F` | - | 150 | | [Compressing Models With Few Samples: Mimicking Then Replacing](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Compressing_Models_With_Few_Samples_Mimicking_Then_Replacing_CVPR_2022_paper.html) | CVPR | `F` | [PyTorch(Author)](https://github.com/cjnjuwhy/MiR)(Releasing) | 151 | | [Contrastive Dual Gating: Learning Sparse Features With Contrastive 
Learning](https://openaccess.thecvf.com/content/CVPR2022/html/Meng_Contrastive_Dual_Gating_Learning_Sparse_Features_With_Contrastive_Learning_CVPR_2022_paper.html) | CVPR | `WF` | - | 152 | | [DiSparse: Disentangled Sparsification for Multitask Model Compression](https://openaccess.thecvf.com/content/CVPR2022/html/Sun_DiSparse_Disentangled_Sparsification_for_Multitask_Model_Compression_CVPR_2022_paper.html) | CVPR | `Other` | [PyTorch(Author)](https://github.com/SHI-Labs/DiSparse-Multitask-Model-Compression) | 153 | | [Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining](https://openreview.net/forum?id=O1DEtITim__) | ICLR **(Spotlight)** | `W` | [PyTorch(Author)](https://github.com/VITA-Group/SFW-Once-for-All-Pruning) | 154 | | [On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning](https://openreview.net/forum?id=Fl3Mg_MZR-) | ICLR **(Spotlight)** | `W` | - | 155 | | [An Operator Theoretic View On Pruning Deep Neural Networks](https://openreview.net/forum?id=pWBNOgdeURp) | ICLR | `W` | [PyTorch(Author)](https://github.com/william-redman/Koopman_pruning) | 156 | | [Effective Model Sparsification by Scheduled Grow-and-Prune Methods](https://openreview.net/forum?id=xa6otUDdP2W) | ICLR | `W` | [PyTorch(Author)](https://github.com/boone891214/GaP) | 157 | | [Signing the Supermask: Keep, Hide, Invert](https://openreview.net/forum?id=e0jtGTfPihs) | ICLR | `W` | - | 158 | | [How many degrees of freedom do we need to train deep networks: a loss landscape perspective](https://openreview.net/forum?id=ChMLTGRjFcU) | ICLR | `W` | [PyTorch(Author)](https://github.com/ganguli-lab/degrees-of-freedom) | 159 | | [Dual Lottery Ticket Hypothesis](https://openreview.net/forum?id=fOsN52jn25l) | ICLR | `W` | [PyTorch(Author)](https://github.com/yueb17/DLTH) | 160 | | [Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It 
Efficiently](https://openreview.net/forum?id=moHCzz6D5H3) | ICLR | `W` | [PyTorch(Author)](https://github.com/VITA-Group/Peek-a-Boo) | 161 | | [Sparsity Winning Twice: Better Robust Generalization from More Efficient Training](https://openreview.net/forum?id=SYuJXrXq8tw) | ICLR | `W` | [PyTorch(Author)](https://github.com/VITA-Group/Sparsity-Win-Robust-Generalization) | 162 | | [SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning](https://openreview.net/forum?id=t5EmXZ3ZLR) | ICLR **(Spotlight)** | `F` | [PyTorch(Author)](https://github.com/boschresearch/sosp)(Releasing) | 163 | | [Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models](https://openreview.net/forum?id=Nfl-iXa-y7R) | ICLR **(Spotlight)** | `F` | [PyTorch(Author)](https://github.com/HazyResearch/pixelfly) | 164 | | [Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions](https://openreview.net/forum?id=LdEhiMG9WLO) | ICLR | `F` | [PyTorch(Author)](https://github.com/choH/lottery_regulated_grouped_kernel_pruning) | 165 | | [Plant 'n' Seek: Can You Find the Winning Ticket?](https://openreview.net/forum?id=9n9c8sf0xm) | ICLR | `F` | [PyTorch(Author)](http://www.github.com/RelationalML/PlantNSeek) | 166 | | [Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks](https://openreview.net/forum?id=Vjki79-619-) | ICLR | `F` | [PyTorch(Author)](https://github.com/ArthurWalraven/cnnslth) | 167 | | [On the Existence of Universal Lottery Tickets](https://openreview.net/forum?id=SYB4WrJql1n) | ICLR | `F` | [PyTorch(Author)](https://github.com/RelationalML/UniversalLT) | 168 | | [Training Structured Neural Networks Through Manifold Identification and Variance Reduction](https://openreview.net/forum?id=mdUYT5QV0O) | ICLR | `F` | [PyTorch(Author)](https://www.github.com/zihsyuan1214/rmda) | 169 | | [Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning](https://openreview.net/forum?id=AjGC97Aofee) 
| ICLR | `F` | [PyTorch(Author)](https://github.com/MingSun-Tse/SRP) | 170 | | [Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients](https://openreview.net/forum?id=AIgn9uwfcD1) | ICLR | `WF` | [PyTorch(Author)](https://github.com/mil-ad/prospr) | 171 | | [The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training](https://openreview.net/forum?id=VBZJ_3tz-t) | ICLR | `Other` | [PyTorch(Author)](https://github.com/VITA-Group/Random_Pruning) | 172 | | [Prune and Tune Ensembles: Low-Cost Ensemble Learning with Sparse Independent Subnetworks](https://ojs.aaai.org/index.php/AAAI/article/view/20842) | AAAI | `W` | - | 173 | | [Prior Gradient Mask Guided Pruning-Aware Fine-Tuning](https://ojs.aaai.org/index.php/AAAI/article/view/19888) | AAAI | `F` | - | 174 | | [Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition](https://ojs.aaai.org/index.php/AAAI/article/view/19958) | AAAI | `Other` | - | 175 | 176 | ### 2021 177 | | Title | Venue | Type | Code | 178 | |:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:| 179 | | [Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory](https://papers.nips.cc/paper/2021/hash/fdc42b6b0ee16a2f866281508ef56730-Abstract.html) | NeurIPS | `W` | - | 180 | | [The Elastic Lottery Ticket Hypothesis](https://papers.nips.cc/paper/2021/hash/dfccdb8b1cc7e4dab6d33db0fef12b88-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/ElasticLTH) | 181 | | [Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?](https://papers.nips.cc/paper/2021/hash/6a130f1dc6f0c829f874e92e5458dced-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/boone891214/sanity-check-LTH) | 182 | | [Why Lottery Ticket Wins? 
A Theoretical Perspective of Sample Complexity on Sparse Neural Networks](https://papers.nips.cc/paper/2021/hash/15f99f2165aa8c86c9dface16fefd281-Abstract.html) | NeurIPS | `W` | - | 183 | | [You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership](https://papers.nips.cc/paper/2021/hash/23e582ad8087f2c03a5a31c125123f9a-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/NO-stealing-LTH) | 184 | | [Pruning Randomly Initialized Neural Networks with Iterative Randomization](https://papers.nips.cc/paper/2021/hash/23e582ad8087f2c03a5a31c125123f9a-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/dchiji-ntt/iterand) | 185 | | [Sparse Training via Boosting Pruning Plasticity with Neuroregeneration](https://papers.nips.cc/paper/2021/hash/5227b6aaf294f5f027273aebf16015f2-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/GraNet) | 186 | | [AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks](https://papers.nips.cc/paper/2021/hash/48000647b315f6f00f913caa757a70b3-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/IST-DASLab/ACDC) | 187 | | [A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness](https://papers.nips.cc/paper/2021/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/RobustBench/robustbench) | 188 | | [Rethinking the Pruning Criteria for Convolutional Neural Network](https://papers.nips.cc/paper/2021/hash/87ae6fb631f7c8a627e8e28785d9992d-Abstract.html) | NeurIPS | `F` | - | 189 | | [Only Train Once: A One-Shot Neural Network Training And Pruning Framework](https://papers.nips.cc/paper/2021/hash/a376033f78e144f494bfc743c0be3330-Abstract.html) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/tianyic/onlytrainonce) | 190 | | [CHIP: CHannel Independence-based Pruning for Compact Neural 
Networks](https://papers.nips.cc/paper/2021/hash/ce6babd060aa46c61a5777902cca78af-Abstract.html) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/Eclipsess/CHIP_NeurIPS2021) | 191 | | [RED : Looking for Redundancies for Data-FreeStructured Compression of Deep Neural Networks](https://papers.nips.cc/paper/2021/hash/ae5e3ce40e0404a45ecacaaf05e5f735-Abstract.html) | NeurIPS | `F` | - | 192 | | [Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition](https://papers.nips.cc/paper/2021/hash/2adcfc3929e7c03fac3100d3ad51da26-Abstract.html) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/lucaslie/torchprune) | 193 | | [Sparse Flows: Pruning Continuous-depth Models](https://papers.nips.cc/paper/2021/hash/bf1b2f4b901c21a1d8645018ea9aeb05-Abstract.html) | NeurIPS | `WF` | [PyTorch(Author)](https://github.com/lucaslie/torchprune) | 194 | | [Scaling Up Exact Neural Network Compression by ReLU Stability](https://papers.nips.cc/paper/2021/hash/e35d7a5768c4b85b4780384d55dc3620-Abstract.html) | NeurIPS | `S` | [PyTorch(Author)](https://github.com/yuxwind/ExactCompression) | 195 | | [Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme](https://papers.nips.cc/paper/2021/hash/effc299a1addb07e7089f9b269c31f2f-Abstract.html) | NeurIPS | `S` | [PyTorch(Author)](https://github.com/SJLeo/GCC) | 196 | | [Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks](https://papers.nips.cc/paper/2021/hash/f5c3dd7514bf620a1b85450d2ae374b1-Abstract.html) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/mbarsbey/sgd_comp_gen) | 197 | | [ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting](https://openaccess.thecvf.com/content/ICCV2021/html/Ding_ResRep_Lossless_CNN_Pruning_via_Decoupling_Remembering_and_Forgetting_ICCV_2021_paper.html) | ICCV | `F` | [PyTorch(Author)](https://github.com/DingXiaoH/ResRep) | 198 | | [Achieving on-Mobile Real-Time Super-Resolution with Neural 
Architecture and Pruning Search](https://openaccess.thecvf.com/content/ICCV2021/html/Zhan_Achieving_On-Mobile_Real-Time_Super-Resolution_With_Neural_Architecture_and_Pruning_Search_ICCV_2021_paper.html) | ICCV | `F` | - | 199 | | [GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization](https://openaccess.thecvf.com/content/ICCV2021/html/Guo_GDP_Stabilized_Neural_Network_Pruning_via_Gates_With_Differentiable_Polarization_ICCV_2021_paper.html) | ICCV | `F` | - | 200 | | [Auto Graph Encoder-Decoder for Neural Network Pruning](https://openaccess.thecvf.com/content/ICCV2021/html/Yu_Auto_Graph_Encoder-Decoder_for_Neural_Network_Pruning_ICCV_2021_paper.html) | ICCV | `F` | - | 201 | | [Exploration and Estimation for Model Compression](https://papers.nips.cc/paper/2021/hash/5227b6aaf294f5f027273aebf16015f2-Abstract.html) | ICCV | `F` | - | 202 | | [Sub-Bit Neural Networks: Learning To Compress and Accelerate Binary Neural Networks](https://openaccess.thecvf.com/content/ICCV2021/html/Wang_Sub-Bit_Neural_Networks_Learning_To_Compress_and_Accelerate_Binary_Neural_ICCV_2021_paper.html) | ICCV | `Other` | [PyTorch(Author)](https://github.com/yikaiw/SNN) | 203 | | [On the Predictability of Pruning Across Scales](https://arxiv.org/abs/2006.10621) | ICML | `W` | - | 204 | | [A Probabilistic Approach to Neural Network Pruning](https://arxiv.org/abs/2105.10065) | ICML | `F` | - | 205 | | [Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework](https://arxiv.org/abs/2010.04879) | ICML | `F` | - | 206 | | [Group Fisher Pruning for Practical Network Compression](https://arxiv.org/abs/2108.00708) | ICML | `F` | [PyTorch(Author)](https://github.com/jshilong/FisherPruning) | 207 | | [Towards Compact CNNs via Collaborative Compression](https://arxiv.org/abs/2105.11228) | CVPR | `F` | [PyTorch(Author)](https://github.com/liuguoyou/Towards-Compact-CNNs-via-Collaborative-Compression) | 208 | | [Permute, Quantize, and Fine-tune: Efficient 
Compression of Neural Networks](https://arxiv.org/abs/2010.15703) | CVPR | `F` | [PyTorch(Author)](https://github.com/uber-research/permute-quantize-finetune) | 209 | | [NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration](https://arxiv.org/abs/2012.00596) | CVPR | `F` | - | 210 | | [Network Pruning via Performance Maximization](https://openaccess.thecvf.com/content/CVPR2021/html/Gao_Network_Pruning_via_Performance_Maximization_CVPR_2021_paper.html) | CVPR | `F` | - | 211 | | [Convolutional Neural Network Pruning with Structural Redundancy Reduction](https://arxiv.org/abs/2104.03438) | CVPR | `F` | - | 212 | | [Manifold Regularized Dynamic Network Pruning](https://arxiv.org/abs/2103.05861) | CVPR | `F` | - | 213 | | [Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation](https://arxiv.org/abs/2105.12971) | CVPR | `FO` | - | 214 | | [Content-Aware GAN Compression](https://arxiv.org/abs/2104.02244) | CVPR | `S` | [PyTorch(Author)](https://github.com/lychenyoko/content-aware-gan-compression) | 215 | | [Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network](https://openreview.net/forum?id=U_mat0b9iv) | ICLR | `W` | [PyTorch(Author)](https://github.com/chrundle/biprop) | 216 | | [Layer-adaptive Sparsity for the Magnitude-based Pruning](https://openreview.net/forum?id=H6ATjJ0TKdf) | ICLR | `W` | [PyTorch(Author)](https://github.com/jaeho-lee/layer-adaptive-sparsity) | 217 | | [Pruning Neural Networks at Initialization: Why Are We Missing the Mark?](https://openreview.net/forum?id=Ig-VyQc-MLK) | ICLR | `W` | - | 218 | | [Robust Pruning at Initialization](https://openreview.net/forum?id=vXj_ucZQ4hA) | ICLR | `W` | - | 219 | | [A Gradient Flow Framework For Analyzing Network Pruning](https://openreview.net/forum?id=rumv7QmLUue) | ICLR | `F` | [PyTorch(Author)](https://github.com/EkdeepSLubana/flowandprune) | 220 | | 
[Neural Pruning via Growing Regularization](https://openreview.net/forum?id=o966_Is_nPA) | ICLR | `F` | [PyTorch(Author)](https://github.com/MingSun-Tse/Regularization-Pruning) | 221 | | [ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations](https://openreview.net/forum?id=xCxXwTzx4L1) | ICLR | `F` | [PyTorch(Author)](https://github.com/transmuteAI/ChipNet) | 222 | | [Network Pruning That Matters: A Case Study on Retraining Variants](https://openreview.net/forum?id=Cb54AMqHQFP) | ICLR | `F` | [PyTorch(Author)](https://github.com/lehduong/NPTM) | 223 | 224 | ### 2020 225 | 226 | | Title | Venue | Type | Code | 227 | |:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:| 228 | | [Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient](https://proceedings.neurips.cc/paper/2020/hash/1b742ae215adf18b75449c6e272fd92d-Abstract.html) | NeurIPS | `W` | - | 229 | | [Winning the Lottery with Continuous Sparsification](https://arxiv.org/abs/1912.04427v4) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/lolemacs/continuous-sparsification) | 230 | | [HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/inspire-group/hydra) | 231 | | [Logarithmic Pruning is All You Need](https://arxiv.org/abs/2006.12156) | NeurIPS | `W` | - | 232 | | [Directional Pruning of Deep Neural Networks](https://arxiv.org/abs/2006.09358) | NeurIPS | `W` | - | 233 | | [Movement Pruning: Adaptive Sparsity by Fine-Tuning](https://arxiv.org/abs/2005.07683) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/huggingface/block_movement_pruning) | 234 | | [Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot](https://arxiv.org/abs/2009.11094) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/JingtongSu/sanity-checking-pruning) | 235 | | [Neuron 
Merging: Compensating for Pruned Neurons](https://arxiv.org/abs/2010.13160) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/friendshipkim/neuron-merging) | 236 | | [Neuron-level Structured Pruning using Polarization Regularizer](https://papers.nips.cc/paper/2020/file/703957b6dd9e3a7980e040bee50ded65-Paper.pdf) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/polarizationpruning/PolarizationPruning) | 237 | | [SCOP: Scientific Control for Reliable Neural Network Pruning](https://arxiv.org/abs/2010.10732) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/yehuitang/Pruning/tree/master/SCOP_NeurIPS2020) | 238 | | [Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning](https://proceedings.neurips.cc/paper/2020/hash/a914ecef9c12ffdb9bede64bb703d877-Abstract.html) | NeurIPS | `F` | - | 239 | | [The Generalization-Stability Tradeoff In Neural Network Pruning](https://arxiv.org/abs/1906.03728) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/bbartoldson/GeneralizationStabilityTradeoff) | 240 | | [Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough](https://proceedings.neurips.cc/paper/2020/hash/be23c41621390a448779ee72409e5f49-Abstract.html) | NeurIPS | `WF` | - | 241 | | [Pruning Filter in Filter](https://arxiv.org/abs/2009.14410) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/fxmeng/Pruning-Filter-in-Filter) | 242 | | [Position-based Scaled Gradient for Model Quantization and Pruning](https://arxiv.org/abs/2005.11035) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/Jangho-Kim/PSG-pytorch) | 243 | | [Bayesian Bits: Unifying Quantization and Pruning](https://arxiv.org/abs/2005.07093) | NeurIPS | `Other` | - | 244 | | [Pruning neural networks without any data by iteratively conserving synaptic flow](https://arxiv.org/abs/2006.05467) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/ganguli-lab/Synaptic-Flow) | 245 | | [Meta-Learning with 
Network Pruning](https://arxiv.org/abs/2007.03219) | ECCV | `W` | - | 246 | | [Accelerating CNN Training by Pruning Activation Gradients](https://arxiv.org/abs/1908.00173) | ECCV | `W` | - | 247 | | [EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning](https://arxiv.org/abs/2007.02491) | ECCV **(Oral)** | `F` | [PyTorch(Author)](https://github.com/anonymous47823493/EagleEye) | 248 | | [DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation](https://arxiv.org/abs/2004.02164) | ECCV | `F` | - | 249 | | [DHP: Differentiable Meta Pruning via HyperNetworks](https://arxiv.org/abs/2003.13683) | ECCV | `F` | [PyTorch(Author)](https://github.com/ofsoundof/dhp) | 250 | | [DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search](https://arxiv.org/abs/2003.12563) | ECCV | `Other` | - | 251 | | [Differentiable Joint Pruning and Quantization for Hardware Efficiency](https://arxiv.org/abs/2007.10463) | ECCV | `Other` | - | 252 | | [Channel Pruning via Automatic Structure Search](https://arxiv.org/abs/2001.08565) | IJCAI | `F` | [PyTorch(Author)](https://github.com/lmbxmu/ABCPruner) | 253 | | [Adversarial Neural Pruning with Latent Vulnerability Suppression](https://arxiv.org/abs/1908.04355) | ICML | `W` | - | 254 | | [Proving the Lottery Ticket Hypothesis: Pruning is All You Need](https://arxiv.org/abs/2002.00585) | ICML | `W` | - | 255 | | [Network Pruning by Greedy Subnetwork Selection](https://arxiv.org/abs/2003.01794) | ICML | `F` | - | 256 | | [Operation-Aware Soft Channel Pruning using Differentiable Masks](https://arxiv.org/abs/2007.03938) | ICML | `F` | - | 257 | | [DropNet: Reducing Neural Network Complexity via Iterative Pruning](https://proceedings.mlr.press/v119/tan20a.html) | ICML | `F` | - | 258 | | [Soft Threshold Weight Reparameterization for Learnable Sparsity](https://arxiv.org/abs/2002.03231) | ICML | `WF` | [PyTorch(Author)](https://github.com/RAIVNLab/STR) | 259 | | [Structured Compression by Weight 
Encryption for Unstructured Pruning and Quantization](https://arxiv.org/abs/1905.10138) | CVPR | `W` | - | 260 | | [Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-Based Approach](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_Automatic_Neural_Network_Compression_by_Sparsity-Quantization_Joint_Learning_A_Constrained_CVPR_2020_paper.pdf) | CVPR | `W` | - | 261 | | [Towards Efficient Model Compression via Learned Global Ranking](https://arxiv.org/abs/1904.12368) | CVPR **(Oral)** | `F` | [PyTorch(Author)](https://github.com/cmu-enyac/LeGR) | 262 | | [HRank: Filter Pruning using High-Rank Feature Map](https://arxiv.org/abs/2002.10179) | CVPR **(Oral)** | `F` | [PyTorch(Author)](https://github.com/lmbxmu/HRank) | 263 | | [Neural Network Pruning with Residual-Connections and Limited-Data](https://arxiv.org/abs/1911.08114) | CVPR **(Oral)** | `F` | - | 264 | | [DMCP: Differentiable Markov Channel Pruning for Neural Networks](https://arxiv.org/abs/2005.03354) | CVPR **(Oral)** | `F` | [TensorFlow(Author)](https://github.com/zx55/dmcp) | 265 | | [Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression](https://arxiv.org/abs/2003.08935) | CVPR | `F` | [PyTorch(Author)](https://github.com/ofsoundof/group_sparsity) | 266 | | [Few Sample Knowledge Distillation for Efficient Network Compression](https://arxiv.org/abs/1812.01839) | CVPR | `F` | - | 267 | | [Discrete Model Compression With Resource Constraint for Deep Neural Networks](http://openaccess.thecvf.com/content_CVPR_2020/html/Gao_Discrete_Model_Compression_With_Resource_Constraint_for_Deep_Neural_Networks_CVPR_2020_paper.html) | CVPR | `F` | - | 268 | | [Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration](http://openaccess.thecvf.com/content_CVPR_2020/html/He_Learning_Filter_Pruning_Criteria_for_Deep_Convolutional_Neural_Networks_Acceleration_CVPR_2020_paper.html) | CVPR | `F` | - | 
269 | | [APQ: Joint Search for Network Architecture, Pruning and Quantization Policy](https://arxiv.org/abs/2006.08509) | CVPR | `F` | - | 270 | | [Multi-Dimensional Pruning: A Unified Framework for Model Compression](http://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Multi-Dimensional_Pruning_A_Unified_Framework_for_Model_Compression_CVPR_2020_paper.html) | CVPR **(Oral)** | `WF` | - | 271 | | [A Signal Propagation Perspective for Pruning Neural Networks at Initialization](https://arxiv.org/abs/1906.06307) | ICLR **(Spotlight)** | `W` | - | 272 | | [ProxSGD: Training Structured Neural Networks under Regularization and Constraints](https://openreview.net/forum?id=HygpthEtvr) | ICLR | `W` | [TF+PT(Author)](https://github.com/optyang/proxsgd) | 273 | | [One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation](https://arxiv.org/abs/1912.00120) | ICLR | `W` | - | 274 | | [Lookahead: A Far-sighted Alternative of Magnitude-based Pruning](https://arxiv.org/abs/2002.04809) | ICLR | `W` | [PyTorch(Author)](https://github.com/alinlab/lookahead_pruning) | 275 | | [Data-Independent Neural Pruning via Coresets](https://arxiv.org/abs/1907.04018) | ICLR | `W` | - | 276 | | [Provable Filter Pruning for Efficient Neural Networks](https://arxiv.org/abs/1911.07412) | ICLR | `F` | - | 277 | | [Dynamic Model Pruning with Feedback](https://openreview.net/forum?id=SJem8lSFwB) | ICLR | `WF` | - | 278 | | [Comparing Rewinding and Fine-tuning in Neural Network Pruning](https://arxiv.org/abs/2003.02389) | ICLR **(Oral)** | `WF` | [TensorFlow(Author)](https://github.com/lottery-ticket/rewinding-iclr20-public) | 279 | | [AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates](https://arxiv.org/abs/1907.03141) | AAAI | `F` | - | 280 | | [Reborn filters: Pruning convolutional neural networks with limited data](https://ojs.aaai.org/index.php/AAAI/article/view/6058) | AAAI | `F` | - | 281 | | [DARB: A Density-Aware Regular-Block 
Pruning for Deep Neural Networks](http://arxiv.org/abs/1911.08020) | AAAI | `Other` | - | 282 | | [Pruning from Scratch](http://arxiv.org/abs/1909.12579) | AAAI | `Other` | - | 283 | 284 | ### 2019 285 | 286 | | Title | Venue | Type | Code | 287 | |:-------|:--------:|:-------:|:-------:| 288 | | [Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask](https://arxiv.org/abs/1905.01067) | NeurIPS | `W` | [TensorFlow(Author)](https://github.com/uber-research/deconstructing-lottery-tickets) | 289 | | [One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers](https://arxiv.org/abs/1906.02773) | NeurIPS | `W` | - | 290 | | [Global Sparse Momentum SGD for Pruning Very Deep Neural Networks](https://arxiv.org/abs/1909.12778) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/DingXiaoH/GSM-SGD) | 291 | | [AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters](https://papers.nips.cc/paper/9521-autoprune-automatic-network-pruning-by-regularizing-auxiliary-parameters) | NeurIPS | `W` | - | 292 | | [Network Pruning via Transformable Architecture Search](https://arxiv.org/abs/1905.09717) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/D-X-Y/NAS-Projects) | 293 | | [Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks](https://arxiv.org/abs/1909.08174) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/youzhonghui/gate-decorator-pruning) | 294 | | [Model Compression with Adversarial Robustness: A Unified Optimization Framework](https://arxiv.org/abs/1902.03538) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/TAMU-VITA/ATMC) | 295 | | [Adversarial Robustness vs Model Compression, or Both?](https://arxiv.org/abs/1903.12561) | ICCV | `W` | [PyTorch(Author)](https://github.com/yeshaokai/Robustness-Aware-Pruning-ADMM) | 296 | | [MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning](https://arxiv.org/abs/1903.10258) | ICCV | 
`F` | [PyTorch(Author)](https://github.com/liuzechun/MetaPruning) | 297 | | [Accelerate CNN via Recursive Bayesian Pruning](https://arxiv.org/abs/1812.00353) | ICCV | `F` | - | 298 | | [Learning Filter Basis for Convolutional Neural Network Compression](https://arxiv.org/abs/1908.08932) | ICCV | `Other` | - | 299 | | [Co-Evolutionary Compression for Unpaired Image Translation](https://arxiv.org/abs/1907.10804) | ICCV | `S` | - | 300 | | [COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning](https://arxiv.org/abs/1906.10337) | IJCAI | `F` | [TensorFlow(Author)](https://github.com/ZJULearning/COP) | 301 | | [Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration](https://arxiv.org/abs/1811.00250) | CVPR **(Oral)** | `F` | [PyTorch(Author)](https://github.com/he-y/filter-pruning-geometric-median) | 302 | | [Towards Optimal Structured CNN Pruning via Generative Adversarial Learning](https://arxiv.org/abs/1903.09291) | CVPR | `F` | [PyTorch(Author)](https://github.com/ShaohuiLin/GAL) | 303 | | [Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure](https://arxiv.org/abs/1904.03837) | CVPR | `F` | [PyTorch(Author)](https://github.com/ShawnDing1994/Centripetal-SGD) | 304 | | [On Implicit Filter Level Sparsity in Convolutional Neural Networks](https://arxiv.org/abs/1811.12495), [Extension1](https://arxiv.org/abs/1905.04967), [Extension2](https://openreview.net/forum?id=rylVvNS3hE) | CVPR | `F` | [PyTorch(Author)](https://github.com/mehtadushy/SelecSLS-Pytorch) | 305 | | [Structured Pruning of Neural Networks with Budget-Aware Regularization](https://arxiv.org/abs/1811.09332) | CVPR | `F` | - | 306 | | [Importance Estimation for Neural Network Pruning](http://jankautz.com/publications/Importance4NNPruning_CVPR19.pdf) | CVPR | `F` | [PyTorch(Author)](https://github.com/NVlabs/Taylor_pruning) | 307 | | [OICSR: Out-In-Channel Sparsity Regularization for Compact Deep 
Neural Networks](https://arxiv.org/abs/1905.11664) | CVPR | `F` | - | 308 | | [Variational Convolutional Neural Network Pruning](https://openaccess.thecvf.com/content_CVPR_2019/html/Zhao_Variational_Convolutional_Neural_Network_Pruning_CVPR_2019_paper.html) | CVPR | `F` | - | 309 | | [Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search](https://arxiv.org/abs/1903.03777) | CVPR | `Other` | [TensorFlow(Author)](https://github.com/lixincn2015/Partial-Order-Pruning) | 310 | | [Collaborative Channel Pruning for Deep Networks](http://proceedings.mlr.press/v97/peng19c.html) | ICML | `F` | - | 311 | | [Approximated Oracle Filter Pruning for Destructive CNN Width Optimization](https://arxiv.org/abs/1905.04748) | ICML | `F` | - | 312 | | [EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis](https://arxiv.org/abs/1905.05934) | ICML | `F` | [PyTorch(Author)](https://github.com/alecwangcq/EigenDamage-Pytorch) | 313 | | [The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https://arxiv.org/abs/1803.03635) | ICLR **(Best)** | `W` | [TensorFlow(Author)](https://github.com/google-research/lottery-ticket-hypothesis) | 314 | | [SNIP: Single-shot Network Pruning based on Connection Sensitivity](https://arxiv.org/abs/1810.02340) | ICLR | `W` | [TensorFlow(Author)](https://github.com/namhoonlee/snip-public) | 315 | | [Dynamic Channel Pruning: Feature Boosting and Suppression](https://arxiv.org/abs/1810.05331) | ICLR | `F` | [TensorFlow(Author)](https://github.com/deep-fry/mayo) | 316 | | [Rethinking the Value of Network Pruning](https://arxiv.org/abs/1810.05270) | ICLR | `F` | [PyTorch(Author)](https://github.com/Eric-mingjie/rethinking-network-pruning) | 317 | | [Dynamic Sparse Graph for Efficient Deep Learning](https://arxiv.org/abs/1810.00859) | ICLR | `F` | [CUDA(3rd)](https://github.com/mtcrawshaw/dynamic-sparse-graph) | 318 | 319 | ### 2018 320 | | Title | Venue | Type | Code | 321 | 
|:-------|:--------:|:-------:|:-------:| 322 | | [Frequency-Domain Dynamic Pruning for Convolutional Neural Networks](https://papers.NeurIPS.cc/paper/7382-frequency-domain-dynamic-pruning-for-convolutional-neural-networks.pdf) | NeurIPS | `W` | - | 323 | | [Discrimination-aware Channel Pruning for Deep Neural Networks](https://arxiv.org/abs/1810.11809) | NeurIPS | `F` | [TensorFlow(Author)](https://github.com/SCUT-AILab/DCP) | 324 | | [Learning Sparse Neural Networks via Sensitivity-Driven Regularization](https://arxiv.org/pdf/1810.11764.pdf) | NeurIPS | `WF` | - | 325 | | [Constraint-Aware Deep Neural Network Compression](https://openaccess.thecvf.com/content_ECCV_2018/html/Changan_Chen_Constraints_Matter_in_ECCV_2018_paper.html) | ECCV | `W` | [SkimCaffe(Author)](https://github.com/ChanganVR/ConstraintAwareCompression) | 326 | | [A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers](https://arxiv.org/abs/1804.03294) | ECCV | `W` | [Caffe(Author)](https://github.com/KaiqiZhang/admm-pruning) | 327 | | [AMC: AutoML for Model Compression and Acceleration on Mobile Devices](https://arxiv.org/abs/1802.03494) | ECCV | `F` | [TensorFlow(3rd)](https://github.com/Tencent/PocketFlow#channel-pruning) | 328 | | [Data-Driven Sparse Structure Selection for Deep Neural Networks](https://arxiv.org/abs/1707.01213) | ECCV | `F` | [MXNet(Author)](https://github.com/TuSimple/sparse-structure-selection) | 329 | | [Coreset-Based Neural Network Compression](https://arxiv.org/abs/1807.09810) | ECCV | `F` | [PyTorch(Author)](https://github.com/metro-smiles/CNN_Compression) | 330 | | [Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks](https://arxiv.org/abs/1808.06866) | IJCAI | `F` | [PyTorch(Author)](https://github.com/he-y/soft-filter-pruning) | 331 | | [Accelerating Convolutional Networks via Global & Dynamic Filter Pruning](https://www.ijcai.org/proceedings/2018/0336.pdf) | IJCAI | `F` | - | 332 | | [Weightless: Lossy weight 
encoding for deep neural network compression](https://proceedings.mlr.press/v80/reagan18a.html) | ICML | `W` | - | 333 | | [Compressing Neural Networks using the Variational Information Bottleneck](https://proceedings.mlr.press/v80/dai18d.html) | ICML | `F` | [PyTorch(Author)](https://github.com/zhuchen03/VIBNet) | 334 | | [Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions](https://proceedings.mlr.press/v80/wu18h.html) | ICML | `Other` | [PyTorch(Author)](https://github.com/VITA-Group/Deep-K-Means-pytorch) | 335 | | [CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization](https://openaccess.thecvf.com/content_cvpr_2018/html/Tung_CLIP-Q_Deep_Network_CVPR_2018_paper.html) | CVPR | `W` | - | 336 | | [“Learning-Compression” Algorithms for Neural Net Pruning](http://faculty.ucmerced.edu/mcarreira-perpinan/papers/cvpr18.pdf) | CVPR | `W` | - | 337 | | [PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning](https://arxiv.org/abs/1711.05769) | CVPR | `F` | [PyTorch(Author)](https://github.com/arunmallya/packnet) | 338 | | [NISP: Pruning Networks using Neuron Importance Score Propagation](https://arxiv.org/abs/1711.05908) | CVPR | `F` | - | 339 | | [To prune, or not to prune: exploring the efficacy of pruning for model compression](https://arxiv.org/abs/1710.01878) | ICLR | `W` | - | 340 | | [Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers](https://arxiv.org/abs/1802.00124) | ICLR | `F` | [TensorFlow(Author)](https://github.com/bobye/batchnorm_prune), [PyTorch(3rd)](https://github.com/jack-willturner/batchnorm-pruning) | 341 | 342 | 343 | ### 2017 344 | | Title | Venue | Type | Code | 345 | |:-------|:--------:|:-------:|:-------:| 346 | | [Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee](https://arxiv.org/abs/1611.05162) | NeurIPS | `W` | 
[TensorFlow(Author)](https://github.com/DNNToolBox/Net-Trim-v1) | 347 | | [Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon](https://arxiv.org/abs/1705.07565) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/csyhhu/L-OBS) | 348 | | [Runtime Neural Pruning](https://papers.NeurIPS.cc/paper/6813-runtime-neural-pruning) | NeurIPS | `F` | - | 349 | | [Structured Bayesian Pruning via Log-Normal Multiplicative Noise](https://papers.nips.cc/paper/2017/hash/dab49080d80c724aad5ebf158d63df41-Abstract.html) | NeurIPS | `F` | - | 350 | | [Bayesian Compression for Deep Learning](https://proceedings.neurips.cc/paper/2017/hash/69d1fc78dbda242c43ad6590368912d4-Abstract.html) | NeurIPS | `F` | - | 351 | | [ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression](https://arxiv.org/abs/1707.06342) | ICCV | `F` | [Caffe(Author)](https://github.com/Roll920/ThiNet), [PyTorch(3rd)](https://github.com/tranorrepository/reprod-thinet) | 352 | | [Channel pruning for accelerating very deep neural networks](https://arxiv.org/abs/1707.06168) | ICCV | `F` | [Caffe(Author)](https://github.com/yihui-he/channel-pruning) | 353 | | [Learning Efficient Convolutional Networks Through Network Slimming](https://arxiv.org/abs/1708.06519) | ICCV | `F` | [PyTorch(Author)](https://github.com/Eric-mingjie/network-slimming) | 354 | | [Variational Dropout Sparsifies Deep Neural Networks](http://arxiv.org/abs/1701.05369) | ICML | `W` | - | 355 | | [Combined Group and Exclusive Sparsity for Deep Neural Networks](https://proceedings.mlr.press/v70/yoon17a.html) | ICML | `WF` | - | 356 | | [Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning](https://arxiv.org/abs/1611.05128) | CVPR | `W` | - | 357 | | [Pruning Filters for Efficient ConvNets](https://arxiv.org/abs/1608.08710) | ICLR | `F` | [PyTorch(3rd)](https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/imagenet/l1-norm-pruning) | 358 | | [Pruning Convolutional 
Neural Networks for Resource Efficient Inference](https://arxiv.org/abs/1611.06440) | ICLR | `F` | [TensorFlow(3rd)](https://github.com/Tencent/PocketFlow#channel-pruning) | 359 | 360 | 361 | ### 2016 362 | | Title | Venue | Type | Code | 363 | |:-------|:--------:|:-------:|:-------:| 364 | | [Dynamic Network Surgery for Efficient DNNs](https://arxiv.org/abs/1608.04493) | NeurIPS | `W` | [Caffe(Author)](https://github.com/yiwenguo/Dynamic-Network-Surgery) | 365 | | [Learning the Number of Neurons in Deep Networks](https://proceedings.neurips.cc/paper/2016/hash/6e7d2da6d3953058db75714ac400b584-Abstract.html) | NeurIPS | `F` | - | 366 | | [Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding](https://arxiv.org/abs/1510.00149) | ICLR **(Best)** | `W` | [Caffe(Author)](https://github.com/songhan/Deep-Compression-AlexNet) | 367 | 368 | 369 | ### 2015 370 | 371 | | Title | Venue | Type | Code | 372 | |:-------|:--------:|:-------:|:-------:| 373 | | [Learning both Weights and Connections for Efficient Neural Networks](https://arxiv.org/abs/1506.02626) | NeurIPS | `W` | [PyTorch(3rd)](https://github.com/jack-willturner/DeepCompression-PyTorch) | 374 | 375 | ## Related Repo 376 | 377 | [Awesome-model-compression-and-acceleration](https://github.com/memoiry/Awesome-model-compression-and-acceleration) 378 | 379 | [EfficientDNNs](https://github.com/MingSun-Tse/EfficientDNNs) 380 | 381 | [Embedded-Neural-Network](https://github.com/ZhishengWang/Embedded-Neural-Network) 382 | 383 | [awesome-AutoML-and-Lightweight-Models](https://github.com/guan-yuan/awesome-AutoML-and-Lightweight-Models) 384 | 385 | [Model-Compression-Papers](https://github.com/chester256/Model-Compression-Papers) 386 | 387 | [knowledge-distillation-papers](https://github.com/lhyfst/knowledge-distillation-papers) 388 | 389 | [Network-Speed-and-Compression](https://github.com/mrgloom/Network-Speed-and-Compression) 390 | 
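For readers new to the area: many of the `W` (weight pruning) entries above, going back to the 2015 NeurIPS paper "Learning both Weights and Connections for Efficient Neural Networks", build on the magnitude criterion: rank weights by absolute value and zero out the smallest. A minimal sketch in plain Python (function name and numbers are illustrative, not any paper's reference code; real pipelines keep a binary mask, choose per-layer or global budgets, and retrain the surviving weights):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Illustrative only: practical implementations keep a {0,1} mask per
    tensor and fine-tune the remaining weights after pruning.
    """
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    # threshold = k-th smallest absolute value; weights at or below it are cut
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# At 50% sparsity the two smallest-magnitude weights are zeroed:
print(magnitude_prune([0.9, -0.05, 0.4, 0.01], 0.5))  # [0.9, 0.0, 0.4, 0.0]
```

The `F` (filter pruning) entries apply the same ranking idea to whole filters or channels, typically scored by an L1/L2 norm, so the pruned network stays dense and hardware-friendly.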
--------------------------------------------------------------------------------