├── .gitignore
└── README.md

/.gitignore:
--------------------------------------------------------------------------------
# Object file
*.o

# Ada Library Information
*.ali
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# Awesome-Federated-Learning-Pruning [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
Every work on Federated Learning pruning.

Last Update: Feb 9th, 2023.

If you do not find your publication on this list, please email me at mmorafah@ucsd.edu.

## Papers
| Papers | Abbreviation | Conferences/Affiliations | Materials |
| --- | --- | --- | --- |
| Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity | Sub-FedAvg | ICDCS-W 2021 | code |
| Model Pruning Enables Efficient Federated Learning on Edge Devices | PruneFL | IEEE TNNLS 2022 | code |
| Achieving Personalized Federated Learning with Sparse Local Models | FedSpa | arXiv 2022 | |
| LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets | LotteryFL | arXiv 2020 | code |
| FedPrune: Personalized and Communication-Efficient Federated Learning on Non-IID Data | FedPrune | ICONIP 2021 | |
| FedTiny: Pruned Federated Learning Towards Specialized Tiny Models | FedTiny | arXiv 2022 | |
| Complement Sparsification: Low-Overhead Model Pruning for Federated Learning | CS | AAAI 2023 | |
| Efficient Federated Random Subnetwork Training | FedPM | FL NeurIPS-W 2022 | |
| Federated Sparse Training: Lottery Aware Model Compression for Resource Constrained Edge | FLASH | FL NeurIPS-W 2022 | |
| Federated Progressive Sparsification (Purge-Merge-Tune) | FedSparsify | FL NeurIPS-W 2022 | |
| Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better | FedDST | AAAI 2022 | |
| FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server | FedDUAP | IJCAI 2022 | |
| ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity | ZeroFL | ICLR 2022 | |
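Most entries above pair some form of model pruning with FedAvg-style aggregation. As a rough illustration of that shared idea only (not the method of any specific paper listed here), below is a minimal NumPy sketch in which each client applies unstructured magnitude pruning to its local update before the server averages the sparse updates. The `magnitude_prune` helper, the `sparsity=0.9` level, the five clients, and the toy 1000-dimensional updates are all illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of entries (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: each client prunes its local update before upload,
# then the server averages the sparse updates (FedAvg-style).
rng = np.random.default_rng(0)
client_updates = [rng.standard_normal(1000) for _ in range(5)]
sparse_updates = [magnitude_prune(u, sparsity=0.9) for u in client_updates]
global_update = np.mean(sparse_updates, axis=0)

# The averaged update is denser than any single client's, since masks differ.
print(f"nonzero fraction of averaged update: {np.mean(global_update != 0):.2f}")
```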
--------------------------------------------------------------------------------