├── .github
│   └── ISSUE_TEMPLATE
│       ├── custom.md
│       └── paper-reading-templates.md
└── README.md

/.github/ISSUE_TEMPLATE/custom.md:
--------------------------------------------------------------------------------
---
name: Custom issue template
about: Describe this issue template's purpose here.
title: ''
labels: ''
assignees: ''

---

## In one sentence

### Paper link

### Authors / Affiliations

### Submission date (yyyy/MM/dd)

## Summary

## Novelty / Differences from prior work

## Method

## Results

## Comments

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/paper-reading-templates.md:
--------------------------------------------------------------------------------
---
name: Paper Reading Templates
about: Describe this issue template's purpose here.
title: ''
labels: ''
assignees: ''

---

## In one sentence

### Paper link

### Authors / Affiliations

### Submission date (yyyy/MM/dd)

## Summary

## Novelty / Differences from prior work

## Method

## Results

## Comments

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Survey Model Compression
A survey of papers on model compression.


## CVPR2021
Accepted papers from CVPR 2021
- [x] RepVGG: Making VGG-style ConvNets Great Again
- [x] Manifold Regularized Dynamic Network Pruning
- [x] Dynamic Slimmable Network
- [ ] Neural Response Interpretation through the Lens of Critical Pathways
- [ ] Riggable 3D Face Reconstruction via In-Network Optimization
- [x] Towards Compact CNNs via Collaborative Compression
- [ ] BCNet: Searching for Network Width with Bilaterally Coupled Network
- [x] Learnable Companding Quantization for Accurate Low-bit Neural Networks
- [x] Diversifying Sample Generation for Accurate Data-Free Quantization
- [x] Zero-shot Adversarial Quantization
- [x] Network Quantization with Element-wise Gradient Scaling
- [ ] Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
- [ ] Complementary Relation Contrastive Distillation
- [x] Distilling Knowledge via Knowledge Review
- [ ] Distilling Object Detectors via Decoupled Features
- [ ] Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks
- [ ] CDFI: Compression-Driven Network Design for Frame Interpolation

Not model compression?
- [ ] Teachers Do More Than Teach: Compressing Image-to-Image Models
- [ ] Learning Student Networks in the Wild
- [ ] Fast and Accurate Model Scaling
- [x] ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network
- [ ] Coordinate Attention for Efficient Mobile Network Design

## Survey for Neural Network Pruning
Papers on neural network pruning

### Survey
- [ ] Liang, Tailin, et al. "Pruning and Quantization for Deep Neural Network Acceleration: A Survey." arXiv preprint arXiv:2101.09671 (2021).
- [ ] Xu, Sheng, et al. "Convolutional Neural Network Pruning: A Survey." 2020 39th Chinese Control Conference (CCC). IEEE, 2020.
- [ ] Liu, Jiayi, et al. "Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey." arXiv preprint arXiv:2005.04275 (2020).
- [ ] Cheng, Yu, et al. "A survey of model compression and acceleration for deep neural networks." arXiv preprint arXiv:1710.09282 (2017).


### Awesome Neural Network Pruning
- [ ] Lin, T., Stich, S. U., Barba, L., Dmitriev, D., and Jaggi, M. Dynamic model pruning with feedback. In International Conference on Learning Representations, 2020.
- [x] Kim, W., Kim, S., Park, M., & Jeon, G. (2020). Neuron Merging: Compensating for Pruned Neurons. Advances in Neural Information Processing Systems, 33.
- [x] Blalock, D., Ortiz, J. J. G., Frankle, J., & Guttag, J. (2020). What is the state of neural network pruning? arXiv preprint arXiv:2003.03033.
- [ ] Dong, Xuanyi, and Yi Yang. "Network pruning via transformable architecture search." arXiv preprint arXiv:1905.09717 (2019).
- [x] Liu, Z., Sun, M., Zhou, T., Huang, G., and Darrell, T. Rethinking the value of network pruning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
- [x] Gale, T., Elsen, E., and Hooker, S. The state of sparsity in deep neural networks, 2019.
- [x] Molchanov, Pavlo, et al. "Importance estimation for neural network pruning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
- [x] Lee, N., Ajanthan, T., and Torr, P. H. S. SNIP: Single-shot network pruning based on connection sensitivity. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
- [x] Zhu, M. H., & Gupta, S. (2018). To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression.
- [x] Lin, J., Rao, Y., Lu, J., & Zhou, J. (2017, December). Runtime neural pruning. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 2178-2188).
- [x] He, Y., Zhang, X., and Sun, J. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397, 2017.
- [x] Luo, J.-H., Wu, J., and Lin, W. ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5058–5066, 2017.
- [x] Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2736–2744, 2017.
- [x] Molchanov, D., Ashukha, A., and Vetrov, D. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 2498–2507. JMLR.org, 2017.
- [x] Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
- [x] Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In Bengio, Y. and LeCun, Y. (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
- [x] Han, S., Pool, J., Tran, J., and Dally, W. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.
### Auto Network Pruning
- [ ] Zhang, Chaopeng, Yuesheng Zhu, and Zhiqiang Bai. 2020. "MetaAMC: Meta Learning and AutoML for Model Compression." In Twelfth International Conference on Digital Image Processing (ICDIP 2020), 11519:115191U. International Society for Optics and Photonics.
- [ ] Wang, Tianzhe, Kuan Wang, Han Cai, Ji Lin, Zhijian Liu, Hanrui Wang, Yujun Lin, and Song Han. 2020. "APQ: Joint Search for Network Architecture, Pruning and Quantization Policy." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2078–87.
- [x] Luo, Jian-Hao, and Jianxin Wu. "AutoPruner: An end-to-end trainable filter pruning method for efficient deep model inference." Pattern Recognition 107 (2020): 107461.
- [x] Lin, Mingbao, et al. "Channel pruning via automatic structure search." arXiv preprint arXiv:2001.08565 (2020).
- [ ] Li, Baopu, et al. "AutoPruning for Deep Neural Network with Dynamic Channel Masking." arXiv preprint arXiv:2010.12021 (2020).
- [x] He, Yihui, et al. "AMC: AutoML for model compression and acceleration on mobile devices." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
- [ ] Ding, Xiaohan, et al. "Auto-balanced filter pruning for efficient convolutional neural networks." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018.

### Lottery ticket hypothesis
- [ ] Brix, Christopher, Parnia Bahar, and Hermann Ney. "Successfully applying the stabilized lottery ticket hypothesis to the transformer architecture." arXiv preprint arXiv:2005.03454 (2020).
- [ ] Frankle, Jonathan, et al. "Linear mode connectivity and the lottery ticket hypothesis." International Conference on Machine Learning. PMLR, 2020.
- [ ] Malach, Eran, et al. "Proving the lottery ticket hypothesis: Pruning is all you need." International Conference on Machine Learning. PMLR, 2020.
- [ ] Chen, Tianlong, et al. "The lottery ticket hypothesis for pre-trained BERT networks." arXiv preprint arXiv:2007.12223 (2020).
- [ ] Frankle, Jonathan, et al. "Stabilizing the lottery ticket hypothesis." arXiv preprint arXiv:1903.01611 (2019).
- [ ] Zhou, Hattie, et al. "Deconstructing lottery tickets: Zeros, signs, and the supermask." arXiv preprint arXiv:1905.01067 (2019).
- [ ] Yu, Haonan, et al. "Playing the lottery with rewards and multiple languages: Lottery tickets in RL and NLP." arXiv preprint arXiv:1906.02768 (2019).
- [x] Frankle, J. and Carbin, M. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.

### Dynamic Network Pruning
- [ ] Xitong Gao, Yiren Zhao, Lukasz Dudziak, Robert Mullins, and Cheng-zhong Xu. Dynamic channel pruning: Feature boosting and suppression. In International Conference on Learning Representations, 2018.
--------------------------------------------------------------------------------