├── LICENSE
├── README.md
├── chapter03_Python_image_classification
├── README.md
├── client.py
├── datasets.py
├── figures
│ ├── fig2.png
│ └── fig31.png
├── main.py
├── models.py
├── server.py
└── utils
│ └── conf.json
├── chapter05_FATE_HFL
├── README.md
├── breast_1_train.csv
├── breast_2_train.csv
├── breast_eval.csv
├── figures
│ ├── FATE_logo.jpg
│ └── local_2_dtable.png
├── split_dataset.py
├── test_homolr_train_job_conf.json
└── test_homolr_train_job_dsl.json
├── chapter06_FATE_VFL
├── README.md
├── figures
│ ├── FATE_logo.jpg
│ ├── fig1.png
│ ├── fig10.png
│ ├── fig11.png
│ ├── fig2.png
│ ├── fig3.png
│ ├── fig4.png
│ ├── fig5.png
│ ├── fig6.png
│ ├── fig7.png
│ ├── fig8.png
│ ├── fig9.png
│ └── local_2_dtable.png
├── housing_1_eval.csv
├── housing_1_train.csv
├── housing_2_eval.csv
├── housing_2_train.csv
├── split_eval_dataset.py
├── split_training_dataset.py
├── test_hetero_linr_train_eval_job_conf.json
├── test_hetero_linr_train_eval_job_dsl.json
├── test_hetero_linr_train_job_conf.json
└── test_hetero_linr_train_job_dsl.json
├── chapter09_Recommendation
├── README.md
└── figures
│ ├── cent_recom.png
│ ├── fig1.png
│ ├── fig10.png
│ ├── fig11.png
│ ├── fig12.png
│ ├── fig13.png
│ ├── fig14.png
│ ├── fig2.png
│ ├── fig3.png
│ ├── fig4.png
│ ├── fig5.png
│ ├── fig6.png
│ ├── fig7.png
│ ├── fig8.png
│ └── fig9.png
├── chapter10_Computer_Vision
├── Dataset_description.md
├── LICENSE
├── README.md
├── config
│ ├── coco.data
│ ├── create_custom_model.sh
│ ├── custom.data
│ ├── yolov3-custom-street.cfg
│ ├── yolov3-tiny.cfg
│ └── yolov3.cfg
├── data
│ ├── __init__.py
│ ├── custom
│ │ ├── classes.names
│ │ ├── train.txt
│ │ └── valid.txt
│ ├── data_utils.py
│ ├── dataset.py
│ ├── generate_task_json.py
│ ├── util.py
│ └── voc_dataset.py
├── experiments
│ └── log_formatter.py
├── figures
│ ├── FL_flow.png
│ ├── fig10.png
│ ├── loss.png
│ └── map.png
├── fl_client.py
├── fl_server.py
├── model
│ ├── __init__.py
│ ├── faster_rcnn.py
│ ├── faster_rcnn_trainer.py
│ ├── faster_rcnn_vgg16.py
│ ├── model_wrapper.py
│ ├── region_proposal_network.py
│ ├── roi_module.py
│ ├── utils
│ │ ├── __init__.py
│ │ ├── bbox_tools.py
│ │ ├── creator_tool.py
│ │ ├── nms
│ │ │ ├── __init__.py
│ │ │ ├── _nms_gpu_post.pyx
│ │ │ ├── _nms_gpu_post_py.py
│ │ │ ├── build.py
│ │ │ └── non_maximum_suppression.py
│ │ └── roi_cupy.py
│ └── yolo.py
├── requirements.txt
├── run.sh
├── run_server.sh
├── stop.sh
├── utils
│ ├── __init__.py
│ ├── array_tool.py
│ ├── augmentations.py
│ ├── config.py
│ ├── datasets.py
│ ├── eval_tool.py
│ ├── model_dump.py
│ ├── parse_config.py
│ ├── pvoc2coco.py
│ ├── utils.py
│ └── vis_tool.py
└── weights
│ └── download_weights.sh
├── chapter15_Attack_and_Defense
├── README.md
└── figures
│ ├── attack_defense.jpeg
│ └── summary.png
├── chapter15_Backdoor_Attack
├── README.md
├── client.py
├── datasets.py
├── images
│ ├── fl_backdoor.png
│ ├── normal_image.png
│ ├── normal_image_1.png
│ ├── poison_image.png
│ └── target.png
├── main.py
├── models.py
├── server.py
└── utils
│ └── conf.json
├── chapter15_Compression
├── README.md
├── client.py
├── datasets.py
├── figures
│ └── fig1.png
├── main.py
├── models.py
├── server.py
└── utils
│ └── conf.json
├── chapter15_Differential_Privacy
├── README.md
├── client.py
├── datasets.py
├── figures
│ ├── fig1.png
│ ├── fig13.png
│ ├── fig14.png
│ ├── fig2.png
│ └── fig3.png
├── main.py
├── models.py
├── server.py
└── utils
│ └── conf.json
├── chapter15_Homomorphic_Encryption
├── README.md
├── client.py
├── data
│ └── breast.csv
├── encoding.py
├── figures
│ ├── fig12.png
│ └── fig2.png
├── main.py
├── models.py
├── paillier.py
├── server.py
├── util.py
└── utils
│ └── conf.json
├── chapter15_Sparsity
├── README.md
├── client.py
├── datasets.py
├── figures
│ ├── fig1.png
│ └── fig33.png
├── main.py
├── models.py
├── server.py
└── utils
│ └── conf.json
├── errata
├── README.md
└── figures
│ ├── 3_1.png
│ ├── 3_2.png
│ └── 5_1.png
└── figures
├── cover.jpg
├── cover2.png
└── link.md
/README.md:
--------------------------------------------------------------------------------
1 | # Practicing Federated Learning (联邦学习实战)
2 |
3 | [](https://github.com/innovation-cat/Awesome-Federated-Machine-Learning) [](https://item.jd.com/13206070.html)
4 |
5 |
6 |
7 | Federated learning is a new distributed computing paradigm built on data-privacy-protection technology, and it has attracted wide attention from both academia and industry since it was first proposed. With its rapid development in recent years, federated learning has become the go-to solution for the data-silo and user-privacy problems, yet hands-on books on the topic are still scarce. This book is the first authoritative practical guide to federated learning; through concrete case studies it helps readers gain a deeper understanding of this emerging discipline.
8 |
9 | **This project will maintain and update the chapter code of the book *Practicing Federated Learning* (《联邦学习实战》) over the long term. For problems with the book or the code, contact: huanganbu@gmail.com**
10 |
11 | Because of publishing constraints, the printed book cannot place web links on its pages; the links referenced in the book are collected here: [links in the book](figures/link.md).
12 |
13 | The book may contain printing or writing errors. The errata list is available at [errata](errata/README.md), and readers are welcome to report textual errors so that we can improve the book.
14 |
15 |
16 |
17 | ## Federated Learning Resources
18 |
19 | - [A comprehensive collection of the latest federated learning research papers, books, code, videos, and other materials (Everything about Federated Learning)](https://github.com/innovation-cat/Awesome-Federated-Machine-Learning)
20 |
21 | - [The "Federated Learning" course at the Hong Kong University of Science and Technology](https://ising.cse.ust.hk/fl/index.html)
22 |
23 |
24 |
25 | ## Introduction
26 |
27 | This book is the second in the federated learning series. It consists of five parts and 19 chapters, combining a systematic summary of the theory with detailed case studies. It is organized as follows:
28 |
29 | - Part I briefly reviews the theory of federated learning, including its categorization and definition, and the common security mechanisms used in federated learning.
30 |
31 | - Part II shows how to build simple federated learning models with Python and FATE, and also surveys several popular federated learning platforms.
32 |
33 | - Part III presents federated learning case studies. We select cases from computer vision, personalized recommendation, finance and insurance, attack and defense, healthcare, and other domains, and discuss how federated learning can be applied and deployed in each of them.
34 |
35 | - Part IV covers advanced topics related to federated learning, including architectures and methods for accelerating federated training, and how federated learning relates to blockchain, split learning, and edge computing.
36 |
37 | - Part V concludes the book and looks ahead to the future of federated learning technology.
38 |
39 |
40 |
41 | This book can be used together with the book *Federated Learning* (《联邦学习》). Purchase links:
42 |
43 | - Practicing Federated Learning: [https://item.jd.com/13206070.html](https://item.jd.com/13206070.html)
44 |
45 | - Federated Learning: [https://item.jd.com/12649191.html](https://item.jd.com/12649191.html)
46 |
47 |
48 |
49 | Note: this project contains some mathematical formulas. For them to render properly on GitHub pages, readers first need to install a browser extension such as the MathJax plugin ([MathJax Plugin for Github](https://chrome.google.com/webstore/detail/mathjax-plugin-for-github/ioemnmodlmafdkllaclgeombjnmnbima)).
50 |
51 |
52 |
53 | ## Endorsements
54 |
55 | The book has also received endorsements from well-known figures in academia, investment, and industry, to whom we are deeply grateful:
56 |
57 | > "In 2020, China's State Council wrote data into legal documents as a new factor of production, alongside land, labor, capital, and technology as one of the five production factors. This means that, on the one hand, personal data privacy will be strictly protected by law; on the other hand, data, like the other production factors, can be opened up, shared, and traded. How to effectively resolve the tension between data privacy and data sharing has become a hot research topic in artificial intelligence. Federated learning, a new paradigm for training and applying distributed machine learning, has attracted broad attention since it was proposed and is regarded by industry as an effective way to reconcile data privacy with data sharing. As a new member of the trusted-computing family, the book notes, federated learning can also join forces with blockchain: the tamper-proof nature of blockchain records can help trace malicious attacks against federated learning, and blockchain's consensus mechanisms and smart contracts can be used to distribute the value that federated learning creates. *Practicing Federated Learning* gives a systematic exposition and analysis of both the theory and the application cases of federated learning, and I believe it will provide effective guidance and reference for researchers and enterprise developers."
58 | > — Chen Chun, Academician of the Chinese Academy of Engineering
59 |
60 |
61 |
62 | > "人工智能时代的到来已经不可逆转,在算力算法和机器学习蓬勃发展的大背景下,数据资产成为了非常重要的技术资源,服务企业的长远发展。如何利用好、保护好数据资产是人工智能能否创造出更大经济和社会价值的关键因子。联邦学习理念的提出和发展,在这个方面探索出一条可行之路并做出了重要的示范。杨强教授是世界级人工智能研究专家,在学术和产业两端都对这一领域有非常深的造诣,希望他这本《联邦学习实战》可以为更多业内人士和机器学习的从业者与爱好者带来更多的启发与思考。"
63 | >
64 | > — 沈南鹏,红杉资本全球执行合伙人
65 |
66 |
67 |
68 | > "数据资产化是实现人工智能产业价值的核心环节,而联邦学习是其中的关键技术。书中严谨而深入浅出阐述为读者们提供了非常有效的工具。"
69 | >
70 | > — 陆 奇,奇绩创坛创始人
71 |
72 |
73 |
74 | > "为了互联网更好的未来,我们需要建立负责任的数据经济体系,使我们既能充分实现数据价值,又能很好的保护用户的数据隐私,并能够公平分配数据创造的价值。联邦学习正是支撑这一愿景的重要技术。本书描述了该领域的实际应用案例,为将联邦学习付诸实践提供了重要的指导意义。"
75 | >
76 | > — Dawn Song,美国加州大学伯克利分校教授
77 |
78 |
79 |
80 |
81 | ## 代码章节目录
82 |
83 | * [第3章:用Python从零实现横向联邦图像分类](chapter03_Python_image_classification)
84 | * [第5章:用FATE从零实现横向逻辑回归](chapter05_FATE_HFL)
85 | * [第6章:用FATE从零实现纵向线性回归](chapter06_FATE_VFL)
86 | * [第9章:个性化推荐案例实战](chapter09_Recommendation)
87 | * [第10章:视觉案例实战](chapter10_Computer_Vision)
88 | * [第15章:联邦学习攻防实战](chapter15_Attack_and_Defense)
89 | * [第15章:攻防实战 - 后门攻击](chapter15_Backdoor_Attack)
90 | * [第15章:攻防实战 - 差分隐私](chapter15_Differential_Privacy)
91 | * [第15章:攻防实战 - 同态加密](chapter15_Homomorphic_Encryption)
92 | * [第15章:攻防实战 - 稀疏化](chapter15_Sparsity)
93 | * [第15章:攻防实战 - 模型压缩](chapter15_Compression)
94 |
95 |
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/client.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch, copy
3 | class Client(object):
4 |
5 | def __init__(self, conf, model, train_dataset, id = -1):
6 |
7 | self.conf = conf
8 |
9 | self.local_model = models.get_model(self.conf["model_name"])
10 |
11 | self.client_id = id
12 |
13 | self.train_dataset = train_dataset
14 |
15 | all_range = list(range(len(self.train_dataset)))
16 | data_len = int(len(self.train_dataset) / self.conf['no_models'])
17 | train_indices = all_range[id * data_len: (id + 1) * data_len]
18 |
19 | self.train_loader = torch.utils.data.DataLoader(self.train_dataset, batch_size=conf["batch_size"],
20 | sampler=torch.utils.data.sampler.SubsetRandomSampler(train_indices))
21 |
22 |
23 | def local_train(self, model):
24 |
25 | for name, param in model.state_dict().items():
26 | self.local_model.state_dict()[name].copy_(param.clone())
27 |
28 | #print(id(model))
29 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
30 | momentum=self.conf['momentum'])
31 | #print(id(self.local_model))
32 | self.local_model.train()
33 | for e in range(self.conf["local_epochs"]):
34 |
35 | for batch_id, batch in enumerate(self.train_loader):
36 | data, target = batch
37 |
38 | if torch.cuda.is_available():
39 | data = data.cuda()
40 | target = target.cuda()
41 |
42 | optimizer.zero_grad()
43 | output = self.local_model(data)
44 | loss = torch.nn.functional.cross_entropy(output, target)
45 | loss.backward()
46 |
47 | optimizer.step()
48 | print("Epoch %d done." % e)
49 | diff = dict()
50 | for name, data in self.local_model.state_dict().items():
51 | diff[name] = (data - model.state_dict()[name])
52 | #print(diff[name])
53 |
54 | return diff
55 |
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/datasets.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import torch
4 | from torchvision import datasets, transforms
5 |
6 | def get_dataset(dir, name):
7 |
8 | if name=='mnist':
9 | train_dataset = datasets.MNIST(dir, train=True, download=True, transform=transforms.ToTensor())
10 | eval_dataset = datasets.MNIST(dir, train=False, transform=transforms.ToTensor())
11 |
12 | elif name=='cifar':
13 | transform_train = transforms.Compose([
14 | transforms.RandomCrop(32, padding=4),
15 | transforms.RandomHorizontalFlip(),
16 | transforms.ToTensor(),
17 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
18 | ])
19 |
20 | transform_test = transforms.Compose([
21 | transforms.ToTensor(),
22 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
23 | ])
24 |
25 | train_dataset = datasets.CIFAR10(dir, train=True, download=True,
26 | transform=transform_train)
27 | eval_dataset = datasets.CIFAR10(dir, train=False, transform=transform_test)
28 |
29 |
30 | return train_dataset, eval_dataset
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/figures/fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter03_Python_image_classification/figures/fig2.png
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/figures/fig31.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter03_Python_image_classification/figures/fig31.png
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/main.py:
--------------------------------------------------------------------------------
1 | import argparse, json
2 | import datetime
3 | import os
4 | import logging
5 | import torch, random
6 |
7 | from server import *
8 | from client import *
9 | import models, datasets
10 |
11 |
12 |
13 | if __name__ == '__main__':
14 |
15 | parser = argparse.ArgumentParser(description='Federated Learning')
16 | parser.add_argument('-c', '--conf', dest='conf')
17 | args = parser.parse_args()
18 |
19 |
20 | with open(args.conf, 'r') as f:
21 | conf = json.load(f)
22 |
23 |
24 | train_datasets, eval_datasets = datasets.get_dataset("./data/", conf["type"])
25 |
26 | server = Server(conf, eval_datasets)
27 | clients = []
28 |
29 | for c in range(conf["no_models"]):
30 | clients.append(Client(conf, server.global_model, train_datasets, c))
31 |
32 | print("\n\n")
33 | for e in range(conf["global_epochs"]):
34 |
35 | candidates = random.sample(clients, conf["k"])
36 |
37 | weight_accumulator = {}
38 |
39 | for name, params in server.global_model.state_dict().items():
40 | weight_accumulator[name] = torch.zeros_like(params)
41 |
42 | for c in candidates:
43 | diff = c.local_train(server.global_model)
44 |
45 | for name, params in server.global_model.state_dict().items():
46 | weight_accumulator[name].add_(diff[name])
47 |
48 |
49 | server.model_aggregate(weight_accumulator)
50 |
51 | acc, loss = server.model_eval()
52 |
53 | print("Epoch %d, acc: %f, loss: %f\n" % (e, acc, loss))
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/models.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import models
4 |
5 | def get_model(name="vgg16", pretrained=True):
6 | if name == "resnet18":
7 | model = models.resnet18(pretrained=pretrained)
8 | elif name == "resnet50":
9 | model = models.resnet50(pretrained=pretrained)
10 | elif name == "densenet121":
11 | model = models.densenet121(pretrained=pretrained)
12 | elif name == "alexnet":
13 | model = models.alexnet(pretrained=pretrained)
14 | elif name == "vgg16":
15 | model = models.vgg16(pretrained=pretrained)
16 | elif name == "vgg19":
17 | model = models.vgg19(pretrained=pretrained)
18 | elif name == "inception_v3":
19 | model = models.inception_v3(pretrained=pretrained)
20 | elif name == "googlenet":
21 | model = models.googlenet(pretrained=pretrained)
22 |
23 | if torch.cuda.is_available():
24 | return model.cuda()
25 | else:
26 | return model
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/server.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch
3 |
4 |
5 | class Server(object):
6 |
7 | def __init__(self, conf, eval_dataset):
8 |
9 | self.conf = conf
10 |
11 | self.global_model = models.get_model(self.conf["model_name"])
12 |
13 | self.eval_loader = torch.utils.data.DataLoader(eval_dataset, batch_size=self.conf["batch_size"], shuffle=True)
14 |
15 |
16 | def model_aggregate(self, weight_accumulator):
17 | for name, data in self.global_model.state_dict().items():
18 |
19 |             update_per_layer = weight_accumulator[name] * self.conf["lambda"]  # scale the summed client updates by the global weight lambda
20 |
21 | if data.type() != update_per_layer.type():
22 |                 data.add_(update_per_layer.to(torch.int64))  # integer buffers (e.g. BatchNorm's num_batches_tracked) need an integer-typed update
23 | else:
24 | data.add_(update_per_layer)
25 |
26 | def model_eval(self):
27 | self.global_model.eval()
28 |
29 | total_loss = 0.0
30 | correct = 0
31 | dataset_size = 0
32 | for batch_id, batch in enumerate(self.eval_loader):
33 | data, target = batch
34 | dataset_size += data.size()[0]
35 |
36 | if torch.cuda.is_available():
37 | data = data.cuda()
38 | target = target.cuda()
39 |
40 |
41 | output = self.global_model(data)
42 |
43 | total_loss += torch.nn.functional.cross_entropy(output, target,
44 | reduction='sum').item() # sum up batch loss
45 | pred = output.data.max(1)[1] # get the index of the max log-probability
46 | correct += pred.eq(target.data.view_as(pred)).cpu().sum().item()
47 |
48 | acc = 100.0 * (float(correct) / float(dataset_size))
49 | total_l = total_loss / dataset_size
50 |
51 | return acc, total_l
--------------------------------------------------------------------------------
/chapter03_Python_image_classification/utils/conf.json:
--------------------------------------------------------------------------------
1 | {
2 |
3 | "model_name" : "resnet18",
4 |
5 | "no_models" : 10,
6 |
7 | "type" : "cifar",
8 |
9 | "global_epochs" : 20,
10 |
11 | "local_epochs" : 3,
12 |
13 | "k" : 5,
14 |
15 | "batch_size" : 32,
16 |
17 | "lr" : 0.001,
18 |
19 | "momentum" : 0.0001,
20 |
21 | "lambda" : 0.1
22 | }
--------------------------------------------------------------------------------
/chapter05_FATE_HFL/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 5: Horizontal Federated Logistic Regression from Scratch with FATE
2 |
3 | In Chapter 3 we showed how to build a simple horizontal federated learning model in Python. Note, however, that developing federated learning systems, especially industrial-grade products, involves far more engineering than that; the design of a fully featured federated learning framework is quite complex. Fortunately, as federated learning has developed, more and more open-source platforms have appeared.
4 |
5 | In this chapter we build a simple horizontal logistic regression model with FATE from scratch. After working through it, readers will understand the basic workflow of horizontal modeling with FATE. Given the limited space and the purpose of this book, we do not explain FATE's internal implementation in detail.
6 |
7 |
8 |
9 | **Note: FATE is under continuous iteration, and this chapter was finalized relatively early (September 2020), so configuration and execution details may have changed across versions. If you want the latest instructions for installing and using FATE, we strongly recommend the official FATE documentation:**
10 |
11 | * [FATE installation and deployment](https://github.com/FederatedAI/DOC-CHN/tree/master/%E9%83%A8%E7%BD%B2)
12 | * [Official FATE documentation](https://github.com/FederatedAI/DOC-CHN)
13 |
14 | **If you run into any problems installing or using FATE, you can add the FATE assistant (FATE小助手); a dedicated engineering team will help you resolve them.**
15 |
16 |
17 |

18 |
19 |
20 | ## 5.1 Experiment Setup
21 |
22 | Before starting the experiments in this chapter, please make sure you have installed [Python](https://www.anaconda.com/products/individual) and the [standalone version of FATE](https://github.com/FederatedAI/DOC-CHN/blob/master/%E9%83%A8%E7%BD%B2/FATE%E5%8D%95%E6%9C%BA%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97.rst).
23 |
24 | **Note: when this book was written, FATE mainly supported building federated learning models through dsl and conf configuration files. Later versions have improved in many ways, in particular by introducing the more convenient pipeline-style modeling. For the FATE pipeline training workflow, see: [FATE-Pipeline](https://github.com/FederatedAI/FATE/tree/master/examples/pipeline).**
25 |
26 |
27 |
28 | ## 5.2 Obtaining the Dataset
29 |
30 | In this chapter we use the open breast cancer tumor dataset from the Wisconsin Clinical Sciences Center to test the horizontal federated model. The dataset can be downloaded from [Kaggle](https://www.kaggle.com/uciml/breast-cancer-wisconsin-data), or loaded directly through scikit-learn's built-in datasets:
31 |
32 | ```
33 | from sklearn.datasets import load_breast_cancer
34 | import pandas as pd
35 |
36 |
37 | breast_dataset = load_breast_cancer()
38 |
39 | breast = pd.DataFrame(breast_dataset.data, columns=breast_dataset.feature_names)
40 |
41 | breast['y'] = breast_dataset.target
42 |
43 | breast.head()
44 | ```
45 |
46 |
47 |
48 | ## 5.3 Splitting the Dataset for Horizontal Federation
49 |
50 | To simulate a horizontal federated modeling scenario, we first split the breast cancer dataset locally into horizontally partitioned parts that share the same features. The breast dataset has 569 samples; we use the first 469 as training samples and the last 100 as evaluation samples.
51 |
52 | * From the 469 training samples, the first 200 are taken as company A's local data and saved as breast\_1\_train.csv; the remaining 269 are taken as company B's local data and saved as breast\_2\_train.csv.
53 | * The test set does not need to be split; both parties use the same copy, saved as breast\_eval.csv.
54 |
55 | The dataset splitting code is in [split_dataset.py](split_dataset.py).
56 |
57 |
58 |
59 | ## 5.4 Building a Horizontal Federated Learning Pipeline with FATE
60 |
61 | Building a horizontal federated learning pipeline with FATE involves three pieces of work:
62 |
63 | * Data conversion and input
64 | * Model training
65 | * Model evaluation (optional)
66 |
67 | For consistency in what follows, we assume the standalone FATE installation directory is:
68 |
69 | ```
70 | fate_dir=/data/projects/fate-1.3.0-experiment/standalone-fate-master-1.4.0/
71 | ```
72 |
73 |
74 |
75 | ### 5.4.1 Data Conversion and Input
76 |
77 | This step converts the local dataset files split in Section 5.3 into FATE's DTable format; a DTable is a distributed data collection:
78 |
79 |
80 |

81 |
82 |
83 | As described in the book, converting the data into DTable format only requires the following steps:
84 |
85 | * Copy the locally split data files into the $fate_dir/examples/data directory.
86 | * Go to the $fate_dir/examples/federatedml-1.x-examples directory and update the upload_data.json file; taking breast_1_train.csv as an example:
87 |
88 | ```
89 | {
90 | "file": "examples/data/breast_1_train.csv",
91 | "head": 1,
92 | "partition": 10,
93 | "work_mode": 0,
94 | "table_name": "homo_breast_1_train",
95 | "namespace": "homo_host_breast_train"
96 | }
97 | ```
98 |
99 | For an explanation of each field in the upload configuration above, see FATE's official documentation on GitHub:
100 |
101 | https://github.com/FederatedAI/DOC-CHN/blob/master/Federatedml/%E6%95%B0%E6%8D%AE%E4%B8%8A%E4%BC%A0%E8%AE%BE%E7%BD%AE%E6%8C%87%E5%8D%97.rst
102 |
103 | Finally, from the current directory ($fate_dir/examples/federatedml-1.x-examples), run the following command to upload the data and convert its format automatically (repeat the upload, with the file, table_name, and namespace adjusted accordingly, for breast_2_train.csv and breast_eval.csv as well):
104 |
105 | ```
106 | python $fate_dir/fate_flow/fate_flow_client.py -f upload -c upload_data.json
107 | ```
108 |
109 |
110 |
111 | ### 5.4.2 Model Training
112 |
113 | With FATE, we can build federated learning jobs out of components instead of coding from scratch. A FATE federated learning pipeline is defined through two user-supplied configuration files, dsl and conf:
114 |
115 | * dsl file: describes the task modules and composes them into a directed acyclic graph (DAG).
116 |
117 | * conf file: sets the parameters of each component, such as the data table names of input modules, and the learning rate, batch size, and number of iterations of algorithm modules.
118 |
119 | For details on configuring the dsl and conf files, see FATE's official documentation:
120 |
121 | https://github.com/FederatedAI/DOC-CHN/blob/master/Federatedml/%E8%BF%90%E8%A1%8C%E9%85%8D%E7%BD%AE%E8%AE%BE%E7%BD%AE%E6%8C%87%E5%8D%97.rst
122 |
123 | For this example, you can directly use the configuration files provided in this folder:
124 |
125 | * [test_homolr_train_job_conf.json](https://github.com/FederatedAI/Practicing-Federated-Learning/blob/main/chapter05_FATE_HFL/test_homolr_train_job_conf.json)
126 | * [test_homolr_train_job_dsl.json](https://github.com/FederatedAI/Practicing-Federated-Learning/blob/main/chapter05_FATE_HFL/test_homolr_train_job_dsl.json)
127 |
128 |
129 |
130 | Place the dsl and conf files in any directory and run the following command from that directory to start training:
131 |
132 | ```
133 | python $fate_dir/fate_flow/fate_flow_client.py -f submit_job -d test_homolr_train_job_dsl.json -c test_homolr_train_job_conf.json
134 | ```
135 |
136 |
--------------------------------------------------------------------------------
/chapter05_FATE_HFL/figures/FATE_logo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter05_FATE_HFL/figures/FATE_logo.jpg
--------------------------------------------------------------------------------
/chapter05_FATE_HFL/figures/local_2_dtable.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter05_FATE_HFL/figures/local_2_dtable.png
--------------------------------------------------------------------------------
/chapter05_FATE_HFL/split_dataset.py:
--------------------------------------------------------------------------------
1 |
2 | from sklearn.datasets import load_breast_cancer
3 | import pandas as pd
4 |
5 |
6 | breast_dataset = load_breast_cancer()
7 |
8 | breast = pd.DataFrame(breast_dataset.data, columns=breast_dataset.feature_names)
9 |
10 | breast = (breast-breast.mean())/(breast.std())
11 |
12 | col_names = breast.columns.values.tolist()
13 |
14 |
15 |
16 | columns = {}
17 | for idx, n in enumerate(col_names):
18 | columns[n] = "x%d"%idx
19 |
20 | breast = breast.rename(columns=columns)
21 |
22 | breast['y'] = breast_dataset.target
23 |
24 | breast['idx'] = range(breast.shape[0])
25 |
26 | idx = breast['idx']
27 |
28 | breast.drop(labels=['idx'], axis=1, inplace = True)
29 |
30 | breast.insert(0, 'idx', idx)
31 |
32 | breast = breast.sample(frac=1)
33 |
34 | train = breast.iloc[:469]
35 |
36 | eval = breast.iloc[469:]
37 |
38 | breast_1_train = train.iloc[:200]
39 |
40 |
41 |
42 | breast_1_train.to_csv('breast_1_train.csv', index=False, header=True)
43 |
44 |
45 | breast_2_train = train.iloc[200:]
46 |
47 | breast_2_train.to_csv('breast_2_train.csv', index=False, header=True)
48 |
49 | eval.to_csv('breast_eval.csv', index=False, header=True)
50 |
51 |
52 |
53 |
--------------------------------------------------------------------------------
/chapter05_FATE_HFL/test_homolr_train_job_conf.json:
--------------------------------------------------------------------------------
1 | {
2 | "initiator": {
3 | "role": "guest",
4 | "party_id": 10000
5 | },
6 | "job_parameters": {
7 | "work_mode": 0
8 | },
9 | "role": {
10 | "guest": [10000],
11 | "host": [10000],
12 | "arbiter": [10000]
13 | },
14 | "role_parameters": {
15 | "guest": {
16 | "args": {
17 | "data": {
18 | "train_data": [{"name": "homo_breast_guest", "namespace": "homo_breast_guest"}]
19 | }
20 |             },
21 | "dataio_0": {
22 | "with_label": [true],
23 | "label_name": ["y"],
24 | "label_type": ["int"],
25 | "output_format": ["dense"]
26 |             }
27 | },
28 | "host": {
29 | "args": {
30 | "data": {
31 | "train_data": [{"name": "homo_breast_host", "namespace": "homo_breast_host"}]
32 | }
33 | },
34 | "evaluation_0": {
35 | "need_run": [false]
36 | }
37 | }
38 | },
39 | "algorithm_parameters": {
40 | "dataio_0":{
41 | "with_label": true,
42 | "label_name": "y",
43 | "label_type": "int",
44 | "output_format": "dense"
45 | },
46 | "homo_lr_0": {
47 | "penalty": "L2",
48 | "optimizer": "sgd",
49 | "eps": 1e-5,
50 | "alpha": 0.01,
51 | "max_iter": 10,
52 | "converge_func": "diff",
53 | "batch_size": 500,
54 | "learning_rate": 0.15,
55 | "decay": 1,
56 | "decay_sqrt": true,
57 | "init_param": {
58 | "init_method": "zeros"
59 | },
60 | "encrypt_param": {
61 | "method": null
62 | },
63 | "cv_param": {
64 | "n_splits": 4,
65 | "shuffle": true,
66 | "random_seed": 33,
67 | "need_cv": false
68 | }
69 | }
70 | }
71 | }
72 |
--------------------------------------------------------------------------------
/chapter05_FATE_HFL/test_homolr_train_job_dsl.json:
--------------------------------------------------------------------------------
1 | {
2 | "components" : {
3 | "dataio_0": {
4 | "module": "DataIO",
5 | "input": {
6 | "data": {
7 | "data": [
8 | "args.train_data"
9 | ]
10 | }
11 | },
12 | "output": {
13 | "data": ["train"],
14 | "model": ["dataio"]
15 | }
16 | },
17 | "homo_lr_0": {
18 | "module": "HomoLR",
19 | "input": {
20 | "data": {
21 | "train_data": [
22 | "dataio_0.train"
23 | ]
24 | }
25 | },
26 | "output": {
27 | "data": ["train"],
28 | "model": ["homolr"]
29 | }
30 | },
31 | "evaluation_0": {
32 | "module": "Evaluation",
33 | "input": {
34 | "data": {
35 | "data": [
36 | "homo_lr_0.train"
37 | ]
38 | }
39 | },
40 | "output": {
41 | "data": ["evaluate"]
42 | }
43 | }
44 | }
45 | }
46 |
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/FATE_logo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/FATE_logo.jpg
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig1.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig10.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig11.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig11.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig2.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig3.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig4.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig5.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig6.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig7.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig8.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/fig9.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/fig9.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/figures/local_2_dtable.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter06_FATE_VFL/figures/local_2_dtable.png
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/split_eval_dataset.py:
--------------------------------------------------------------------------------
1 |
2 | from sklearn.datasets import load_boston
3 | import pandas as pd
4 |
5 |
6 | boston_dataset = load_boston()
7 |
8 | boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
9 |
10 | #boston = boston.dropna()
11 | boston = (boston-boston.mean())/(boston.std())
12 |
13 | col_names = boston.columns.values.tolist()
14 |
15 |
16 |
17 | columns = {}
18 | for idx, n in enumerate(col_names):
19 | columns[n] = "x%d"%idx
20 |
21 | boston = boston.rename(columns=columns)
22 |
23 | boston['y'] = boston_dataset.target
24 |
25 | boston['idx'] = range(boston.shape[0])
26 |
27 | idx = boston['idx']
28 |
29 | boston.drop(labels=['idx'], axis=1, inplace = True)
30 |
31 | boston.insert(0, 'idx', idx)
32 |
33 |
34 | eval = boston.iloc[406:]
35 |
36 | df1 = eval.sample(80)
37 | df2 = eval.sample(85)
38 |
39 | housing_1_eval = df1[["idx", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7"]]
40 |
41 | housing_1_eval.to_csv('housing_1_eval.csv', index=True, header=True)
42 |
43 |
44 | housing_2_eval = df2[["idx", "y", "x8", "x9", "x10", "x11", "x12"]]
45 |
46 | housing_2_eval.to_csv('housing_2_eval.csv', index=True, header=True)
47 |
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/split_training_dataset.py:
--------------------------------------------------------------------------------
1 |
2 | from sklearn.datasets import load_boston
3 | import pandas as pd
4 |
5 |
6 | boston_dataset = load_boston()
7 |
8 | boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
9 |
10 | #boston = boston.dropna()
11 | boston = (boston-boston.mean())/(boston.std())
12 |
13 | col_names = boston.columns.values.tolist()
14 |
15 |
16 |
17 | columns = {}
18 | for idx, n in enumerate(col_names):
19 | columns[n] = "x%d"%idx
20 |
21 | boston = boston.rename(columns=columns)
22 |
23 | boston['y'] = boston_dataset.target
24 |
25 | boston['idx'] = range(boston.shape[0])
26 |
27 | idx = boston['idx']
28 |
29 | boston.drop(labels=['idx'], axis=1, inplace = True)
30 |
31 | boston.insert(0, 'idx', idx)
32 |
33 | train = boston.iloc[:406]
34 |
35 | df1 = train.sample(360)
36 | df2 = train.sample(380)
37 |
38 | housing_1_train = df1[["idx", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7"]]
39 |
40 | housing_1_train.to_csv('housing_1_train.csv', index=False, header=True)
41 |
42 |
43 | housing_2_train = df2[["idx", "y", "x8", "x9", "x10", "x11", "x12"]]
44 |
45 | housing_2_train.to_csv('housing_2_train.csv', index=False, header=True)
46 |
47 |
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/test_hetero_linr_train_eval_job_conf.json:
--------------------------------------------------------------------------------
1 | {
2 | "initiator": {
3 | "role": "guest",
4 | "party_id": 10000
5 | },
6 | "job_parameters": {
7 | "work_mode": 0
8 | },
9 | "role": {
10 | "guest": [
11 | 10000
12 | ],
13 | "host": [
14 | 10000
15 | ],
16 | "arbiter": [
17 | 10000
18 | ]
19 | },
20 | "role_parameters": {
21 | "guest": {
22 | "args": {
23 | "data": {
24 | "train_data": [
25 | {
26 | "name": "hetero_housing_2_train",
27 | "namespace": "hetero_guest_housing_train"
28 | }
29 | ],
30 | "eval_data": [
31 | {
32 | "name": "hetero_housing_2_eval",
33 | "namespace": "hetero_guest_housing_eval"
34 | }
35 | ]
36 | }
37 | },
38 | "dataio_0": {
39 | "with_label": [true],
40 | "label_name": ["y"],
41 | "label_type": ["float"],
42 | "output_format": ["dense"],
43 | "missing_fill": [true],
44 | "outlier_replace": [false]
45 | },
46 | "evaluation_0": {
47 | "eval_type": ["regression"],
48 | "pos_label": [1]
49 | }
50 | },
51 | "host": {
52 | "args": {
53 | "data": {
54 | "train_data": [
55 | {
56 | "name": "hetero_housing_1_train",
57 | "namespace": "hetero_host_housing_train"
58 | }
59 | ],
60 | "eval_data": [
61 | {
62 | "name": "hetero_housing_1_eval",
63 | "namespace": "hetero_host_housing_eval"
64 | }
65 | ]
66 | }
67 | },
68 | "dataio_0": {
69 | "with_label": [false],
70 | "output_format": ["dense"],
71 | "outlier_replace": [false]
72 | },
73 | "evaluation_0": {
74 | "need_run": [false]
75 | }
76 | }
77 | },
78 | "algorithm_parameters": {
79 | "hetero_linr_0": {
80 | "penalty": "L2",
81 | "optimizer": "sgd",
82 | "tol": 1e-3,
83 | "alpha": 0.01,
84 | "max_iter": 200,
85 | "early_stop": "weight_diff",
86 | "batch_size": -1,
87 | "learning_rate": 0.15,
88 | "decay": 0.0,
89 | "decay_sqrt": false,
90 | "init_param": {
91 | "init_method": "zeros"
92 | },
93 | "encrypted_mode_calculator_param": {
94 | "mode": "fast"
95 | },
96 | "cv_param": {
97 | "n_splits": 5,
98 | "shuffle": false,
99 | "random_seed": 103,
100 | "need_cv": false,
101 | "evaluate_param": {
102 | "eval_type": "regression"
103 | }
104 | }
105 | }
106 | }
107 | }
108 |
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/test_hetero_linr_train_eval_job_dsl.json:
--------------------------------------------------------------------------------
1 | {
2 | "components" : {
3 | "dataio_0": {
4 | "module": "DataIO",
5 | "input": {
6 | "data": {
7 | "data": [
8 | "args.train_data"
9 | ]
10 | }
11 | },
12 | "output": {
13 | "data": ["train"],
14 | "model": ["dataio"]
15 | }
16 | },
17 | "dataio_1": {
18 | "module": "DataIO",
19 | "input": {
20 | "data": {
21 | "data": [
22 | "args.eval_data"
23 | ]
24 | },
25 | "model": [
26 | "dataio_0.dataio"
27 | ]
28 | },
29 | "output": {
30 | "data": ["eval_data"],
31 | "model": ["dataio"]
32 | }
33 | },
34 | "intersection_0": {
35 | "module": "Intersection",
36 | "input": {
37 | "data": {
38 | "data": [
39 | "dataio_0.train"
40 | ]
41 | }
42 | },
43 | "output": {
44 | "data": ["train"]
45 | }
46 | },
47 | "intersection_1": {
48 | "module": "Intersection",
49 | "input": {
50 | "data": {
51 | "data": [
52 | "dataio_1.eval_data"
53 | ]
54 | }
55 | },
56 | "output": {
57 | "data": ["eval_data"]
58 | }
59 | },
60 | "hetero_linr_0": {
61 | "module": "HeteroLinR",
62 | "input": {
63 | "data": {
64 | "train_data": ["intersection_0.train"],
65 | "eval_data": ["intersection_1.eval_data"]
66 | }
67 | },
68 | "output": {
69 | "data": ["train"],
70 | "model": ["hetero_linr"]
71 | }
72 | },
73 | "evaluation_0": {
74 | "module": "Evaluation",
75 | "input": {
76 | "data": {
77 | "data": ["hetero_linr_0.train"]
78 | }
79 | },
80 | "output": {
81 | "data": ["evaluate"]
82 | }
83 | }
84 | }
85 | }
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/test_hetero_linr_train_job_conf.json:
--------------------------------------------------------------------------------
1 | {
2 | "initiator": {
3 | "role": "guest",
4 | "party_id": 10000
5 | },
6 | "job_parameters": {
7 | "work_mode": 0
8 | },
9 | "role": {
10 | "guest": [
11 | 10000
12 | ],
13 | "host": [
14 | 10000
15 | ],
16 | "arbiter": [
17 | 10000
18 | ]
19 | },
20 | "role_parameters": {
21 | "guest": {
22 | "args": {
23 | "data": {
24 | "train_data": [
25 | {
26 | "name": "hetero_housing_2_train",
27 | "namespace": "hetero_guest_housing_train"
28 | }
29 | ]
30 | }
31 | },
32 | "dataio_0": {
33 | "with_label": [true],
34 | "label_name": ["y"],
35 | "label_type": ["float"],
36 | "output_format": ["dense"],
37 | "missing_fill": [true],
38 | "outlier_replace": [false]
39 | },
40 | "evaluation_0": {
41 | "eval_type": ["regression"],
42 | "pos_label": [1]
43 | }
44 | },
45 | "host": {
46 | "args": {
47 | "data": {
48 | "train_data": [
49 | {
50 | "name": "hetero_housing_1_train",
51 | "namespace": "hetero_host_housing_train"
52 | }
53 | ]
54 | }
55 | },
56 | "dataio_0": {
57 | "with_label": [false],
58 | "output_format": ["dense"],
59 | "outlier_replace": [false]
60 | },
61 | "evaluation_0": {
62 | "need_run": [false]
63 | }
64 | }
65 | },
66 | "algorithm_parameters": {
67 | "hetero_linr_0": {
68 | "penalty": "L2",
69 | "optimizer": "sgd",
70 | "tol": 1e-3,
71 | "alpha": 0.01,
72 | "max_iter": 200,
73 | "early_stop": "weight_diff",
74 | "batch_size": -1,
75 | "learning_rate": 0.15,
76 | "decay": 0.0,
77 | "decay_sqrt": false,
78 | "init_param": {
79 | "init_method": "zeros"
80 | },
81 | "encrypted_mode_calculator_param": {
82 | "mode": "fast"
83 | },
84 | "cv_param": {
85 | "n_splits": 5,
86 | "shuffle": false,
87 | "random_seed": 103,
88 | "need_cv": false,
89 | "evaluate_param": {
90 | "eval_type": "regression"
91 | }
92 | }
93 | }
94 | }
95 | }
96 |
--------------------------------------------------------------------------------
/chapter06_FATE_VFL/test_hetero_linr_train_job_dsl.json:
--------------------------------------------------------------------------------
1 | {
2 | "components" : {
3 | "dataio_0": {
4 | "module": "DataIO",
5 | "input": {
6 | "data": {
7 | "data": [
8 | "args.train_data"
9 | ]
10 | }
11 | },
12 | "output": {
13 | "data": ["train"],
14 | "model": ["dataio"]
15 | }
16 | },
17 | "intersection_0": {
18 | "module": "Intersection",
19 | "input": {
20 | "data": {
21 | "data": [
22 | "dataio_0.train"
23 | ]
24 | }
25 | },
26 | "output": {
27 | "data": ["train"]
28 | }
29 | },
30 | "hetero_linr_0": {
31 | "module": "HeteroLinR",
32 | "input": {
33 | "data": {
34 |                     "train_data": ["intersection_0.train"]
36 | }
37 | },
38 | "output": {
39 | "data": ["train"],
40 | "model": ["hetero_linr"]
41 | }
42 | },
43 | "evaluation_0": {
44 | "module": "Evaluation",
45 | "input": {
46 | "data": {
47 | "data": ["hetero_linr_0.train"]
48 | }
49 | },
50 | "output": {
51 | "data": ["evaluate"]
52 | }
53 | }
54 | }
55 | }
--------------------------------------------------------------------------------
/chapter09_Recommendation/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 9: Federated Personalized Recommendation
2 |
3 | Personalized recommendation is widely used throughout daily life, for example in news, video, and product recommendation, and plays a crucial role in information filtering and precision marketing. To make accurate recommendations, recommender systems collect massive amounts of data about users and the recommended content; in general, the more data is collected, the more complete and in-depth the understanding of users and content becomes, and the more accurate the recommendations are. Common personalized recommendation applications include music recommendation, movie recommendation, short videos, and feed streams, as shown in the figure below.
4 |
5 |
6 |

7 |
8 |
9 |
10 | The traditional personalized recommendation workflow is shown in the figure below: the system first uploads every user's behavior logs to a central database, performs centralized data cleaning and model training there, and then deploys the trained model online.
11 |
12 |
13 |

14 |
15 |
16 |
17 | However, because this centralized approach uploads every user's behavior data to the server, it can easily leak private information. Federated recommendation achieves joint modeling without the users' behavior data ever leaving their devices. In this chapter we introduce the FATE implementations of two federated recommendation algorithms:
18 |
19 | - Federated matrix factorization
20 | - Federated factorization machines
21 |
22 |
23 |
24 | ## 9.1 Federated Matrix Factorization
25 |
26 | ### 9.1.1 Code
27 |
28 | The federated matrix factorization code has been integrated into FATE; see [Matrix_Factorization](https://github.com/FederatedAI/FedRec/tree/master/federatedrec/matrix_factorization).
29 |
30 | ### 9.1.2 Matrix Factorization
31 |
32 | Users interact with items in many ways, such as rating, clicking, purchasing, saving, and deleting, and all of these behaviors reflect the users' preferences for items. We represent the users' feedback on items with a matrix $r$, also called the rating matrix.
33 |
34 | The matrix factorization algorithm decomposes the original rating matrix $r$ into two smaller matrices $p$ and $q$ such that:
35 |
36 | $$r=p \times q$$
37 |
38 | Without loss of generality, assume the matrix $r$ has size $m \times n$, $p$ has size $m \times k$, and $q$ has size $k \times n$, where $k$ is the length of the latent vectors and is usually a fairly small number such as $k=50$ or $k=100$; here $m$ is the number of users and $n$ is the number of items. Through matrix factorization we compress the rating matrix into the two smaller matrices $p$ and $q$, called the user latent matrix and the item latent matrix respectively.
39 |
40 |
41 |

42 |
43 | The optimization objective of MF is as follows:
44 |
45 |
46 |

47 |
48 |
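
In centralized (non-federated) form, this objective is typically minimized by stochastic gradient descent over the observed ratings. The minimal NumPy sketch below is only an illustration of that idea and is not the FedRec/FATE implementation; the function name and hyperparameter values are made up for the example:

```python
import numpy as np

def mf_sgd(ratings, m, n, k=10, lr=0.01, reg=0.02, epochs=20):
    """Plain matrix factorization r ~ p x q trained with SGD.

    ratings: iterable of (user_index, item_index, rating) triples.
    Returns the user latent matrix p (m x k) and item latent matrix q (k x n).
    """
    rng = np.random.default_rng(0)
    p = rng.normal(scale=0.1, size=(m, k))
    q = rng.normal(scale=0.1, size=(k, n))
    for _ in range(epochs):
        for i, j, r in ratings:
            err = r - p[i] @ q[:, j]                        # prediction error on one observed rating
            p_i = p[i].copy()
            p[i] += lr * (err * q[:, j] - reg * p[i])       # update user latent vector
            q[:, j] += lr * (err * p_i - reg * q[:, j])     # update item latent vector
    return p, q

# toy usage: 3 users, 4 items, a handful of observed ratings
p, q = mf_sgd([(0, 1, 5.0), (1, 2, 3.0), (2, 0, 4.0)], m=3, n=4, k=2)
print(p @ q)   # reconstructed (approximate) rating matrix
```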
49 |
50 | ### 9.1.3 A Federated Implementation of Matrix Factorization
51 |
52 | Consider the following cross-company recommendation scenario. Company A on the left recommends books, while company B on the right recommends movies. **The user bases of the two companies overlap substantially.**
53 |
54 |
55 |

56 |
57 |
58 | Following the idea of collaborative filtering, users with the same movie tastes are likely to have the same reading tastes, so if we can jointly model the data of multiple parties without leaking user privacy, the recommendation quality can improve noticeably.
59 |
60 | We adopt the idea of **vertical federated learning**: since the two companies sell different products, from the vertical-federation perspective their features do not overlap (one side's features are book IDs, the other side's are movie IDs). Each company has its own user-item rating matrix, but for privacy reasons the companies cannot share these ratings. We therefore first split the objective function into the following form:
61 |
62 |
63 |

64 |
65 |
66 | Here $r^A_{ij}$ and $r^B_{ij}$ denote the original rating matrices of company A and company B respectively. Because the two companies serve the same user population, they share the users' latent vectors $p$. We therefore introduce a trusted third-party server to maintain the shared user latent matrix $p$, which is generated on the server by random initialization.
67 |
68 |
69 |

70 |
71 |
72 | The steps for solving federated matrix factorization are as follows:
73 |
74 |
75 |

76 |
77 |
78 |
79 |
80 | ## 9.2 Federated Factorization Machines
81 |
82 | ### 9.2.1 Code
83 |
84 | The federated factorization machine code has been integrated into FATE; see [Factorization_Machine](https://github.com/FederatedAI/FedRec/tree/master/federatedrec/factorization_machine).
85 |
86 |
87 |
88 | ### 9.2.2 Factorization Machines
89 |
90 | Factorization machines (FM) cast recommendation as a regression problem. Traditional linear models such as linear regression are favored in industry because they are simple and can be learned, served, and deployed efficiently, but they capture only linear information and miss non-linear information, i.e. the interactions between features. These feature interactions are the cross features (also called combined features) commonly used in feature engineering.
91 |
92 | Hand-crafting cross features is very time-consuming and relies largely on experience or trial and error. By adding second-order terms to the linear model, FM provides a practical way to construct and discover cross features automatically. The FM model is shown below,
93 |
94 |
95 |

96 |
97 |
98 | where the last term, ${x_i}{x_j}$, denotes the cross feature between any two features $x_i$ and $x_j$.
99 |
100 |
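
For reference, the standard second-order FM prediction can be computed as in the sketch below (a minimal illustration of the textbook formula, not FATE's implementation; all names and values are made up). The pairwise term uses the usual rewriting $\sum_{i<j}\langle v_i,v_j\rangle x_i x_j=\frac{1}{2}\sum_f\big[(\sum_i v_{if}x_i)^2-\sum_i v_{if}^2x_i^2\big]$, which avoids an explicit double loop:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction for one sample.

    x: feature vector (n,), w0: bias, w: linear weights (n,),
    V: latent factors (n, k); the weight of the cross feature x_i * x_j is <V[i], V[j]>.
    """
    xv = x @ V                                            # shape (k,)
    pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return w0 + w @ x + pairwise

# toy usage with random parameters
rng = np.random.default_rng(0)
n, k = 6, 3
print(fm_predict(rng.random(n), 0.1, rng.random(n), rng.normal(size=(n, k))))
```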
101 |
102 | ### 9.2.3 A Federated Implementation of Factorization Machines
103 |
104 | Consider another cross-company recommendation scenario. In this one, company A is an online bookseller, while company B is a social network company; B does not sell products directly, but it has profile data for every user. **Again, we assume the user bases of the two companies overlap substantially.**
105 |
106 |
107 |

108 |
109 |
110 |
111 | If company A can make use of company B's profile data, A's sales can likewise improve considerably. For instance, if we know that a user is 25 years old and works as a programmer, then in general recommending IT books to him is more likely to succeed than recommending comics.
112 |
113 | We again adopt the idea of **vertical federated learning**. For ease of discussion, we state the problem as follows: company A holds the users' feedback scores and part of the feature information, denoted $(X_1, Y)$, while company B holds additional feature data, denoted $X_2$. We want to help party A improve its recommendation performance while guaranteeing that neither party's data leaves its premises. For this two-party joint modeling, the FM model can be written as:
114 |
115 |
116 |

117 |
118 |
119 | The model above can be decomposed into the following three parts (a rough code sketch of this decomposition follows the list):
120 |
121 | - The first part is the prediction that uses only company A's features, satisfying:
122 |
123 |
124 |

125 |
126 |
127 | - The second part is the prediction that uses only company B's features, satisfying:
128 |
129 |
130 |

131 |
132 |
133 | - The third part is the cross-feature computation that spans the two companies, satisfying:
134 |
135 |
136 |

137 |
138 |
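
The sketch below illustrates this three-way decomposition (illustrative only; in a real deployment the exchanged quantities would additionally be protected, for example with homomorphic encryption, rather than shared in the clear). Each party evaluates its own linear and within-party pairwise terms locally, and the cross-party term only requires exchanging the $k$-dimensional sums $\sum_i v_i x_i$:

```python
import numpy as np

def local_fm_part(x, w, V, bias=0.0):
    """One party's local FM contribution: linear term plus within-party pairwise term,
    together with the k-dimensional latent sum needed for the cross-party term."""
    xv = x @ V                                            # sum_i V[i] * x_i, shape (k,)
    pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return bias + w @ x + pairwise, xv

rng = np.random.default_rng(0)
k = 4
x1, w1, V1 = rng.random(5), rng.random(5), rng.normal(size=(5, k))   # features/parameters at company A
x2, w2, V2 = rng.random(3), rng.random(3), rng.normal(size=(3, k))   # features/parameters at company B

y_a, s_a = local_fm_part(x1, w1, V1, bias=0.1)   # computed locally at A (the first part)
y_b, s_b = local_fm_part(x2, w2, V2)             # computed locally at B (the second part)
cross = s_a @ s_b                                # cross-party pairwise term (the third part)
print(y_a + y_b + cross)                         # equals the FM prediction on the joint feature vector
```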
139 | The steps for solving federated factorization machines are as follows:
140 |
141 |
142 |

143 |
144 |
145 |
146 |
147 | ## 9.3 Other Federated Recommendation Algorithms
148 |
149 | Besides matrix factorization and factorization machines, FATE already implements many other [federated recommendation](https://github.com/FederatedAI/FedRec/tree/master/federatedrec) algorithms; follow the links below for the corresponding implementations:
150 |
151 | - ##### [Hetero FM(factorization machine)](https://github.com/FederatedAI/FedRec/blob/master/federatedrec/factorization_machine/README.md)
152 |
153 | - ##### [Homo-FM](https://github.com/FederatedAI/FedRec/blob/master/federatedrec/factorization_machine/README.md)
154 |
155 | - ##### [Hetero MF(matrix factorization)](https://github.com/FederatedAI/FedRec/blob/master/federatedrec/matrix_factorization/README.md)
156 |
157 | - ##### [Hetero SVD](https://github.com/FederatedAI/FedRec/blob/master/federatedrec/svd/README.md)
158 |
159 | - ##### [Hetero SVD++](https://github.com/FederatedAI/FedRec/blob/master/federatedrec/svdpp/README.md)
160 |
161 | - ##### [Hetero GMF](https://github.com/FederatedAI/FedRec/blob/master/federatedrec/general_mf/README.md)
162 |
163 |
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/cent_recom.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/cent_recom.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig1.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig10.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig11.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig11.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig12.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig12.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig13.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig13.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig14.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig14.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig2.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig3.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig4.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig5.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig6.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig7.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig8.png
--------------------------------------------------------------------------------
/chapter09_Recommendation/figures/fig9.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter09_Recommendation/figures/fig9.png
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/Dataset_description.md:
--------------------------------------------------------------------------------
1 | ## FedVision
2 | **FedVision dataset** is created jointly by WeBank and ExtremeVision to facilitate the advancement of academic research
3 | and industrial applications of federated learning.
4 |
5 | ### The FedVision project
6 |
7 | * Provides images data sets with standardized annotation for federated object detection.
8 | * Provides key statistics and systems metrics of the data sets.
9 | * Provides a set of implementations of baseline for further research.
10 |
11 | ### Datasets
12 | We introduce two realistic federated datasets.
13 |
14 | * **Federated Street**, a real-world object detection dataset that annotates images captured by a set of street cameras
15 | based on the objects present in them, covering 7 classes. In this dataset, each camera, or each small group of cameras, serves as a device.
16 |
17 | | Dataset | Number of devices | Total samples | Number of class|
18 | |:---:|:---:|:---:|:---:|
19 | | Federated Street | 5, 20 | 956 | 7 |
20 |
21 | ### File descriptions
22 |
23 | * **Street_Dataset.tar** contains the image data and ground truth for the train and test set of the street data set.
24 | * **Images**: The directory which contains the train and test image data.
25 | * **train_label.json**: The annotation file, saved in JSON format. **train_label.json** is a `list` that
26 | contains the annotation information for the image set. The length of the `list` equals the number of images, and each value
27 | in the `list` represents one image_info. Each `image_info` is a `dictionary`. The keys
28 | of `image_info` are `image_id`, `device1_id`, `device2_id` and `items`. We split the street dataset in two ways: first we
29 | split the data into 5 parts according to geographic information, and then we further split those 5 parts into 20. The two splits give the `device1_id` and
30 | `device2_id` fields, corresponding to 5 or 20 devices respectively. `items` is a list, which may contain multiple objects (a small loading example follows this file list).
31 | [
32 | {
33 | `"image_id"`: the id of the train image, for example 009579.
34 | `"device1_id"`: the id of device1 ,specifies which device the image is on.
35 | `"device2_id"`: the id of device2.
36 | `"items"`: [
37 | {
38 | `"class"`: the class of one object,
39 | `"bbox"`: ["xmin", "ymin", "xmax", "ymax"], the coordinates of a bounding box
40 | },
41 | ...
42 | ]
43 | },
44 | ...
45 | ]
46 | * **test_label.json**: The annotations of the test data are almost the same as those in **train_label.json**. The only difference is that
47 | the `image_info` entries of the test data do not have the device id keys.
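
For a quick usage illustration, the annotation file can be loaded and inspected as in the sketch below (the code is a hypothetical example that relies only on the key names described above):

```python
import json

with open("train_label.json") as f:
    annotations = json.load(f)            # a list with one entry per image

info = annotations[0]                     # one image_info dictionary
print(info["image_id"], info["device1_id"], info["device2_id"])
for obj in info["items"]:                 # every object annotated in this image
    xmin, ymin, xmax, ymax = obj["bbox"]  # bounding box corners
    print(obj["class"], xmin, ymin, xmax, ymax)
```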
48 |
49 | ### Evaluation
50 | We use the standard [PASCAL VOC 2010](http://host.robots.ox.ac.uk/pascal/VOC/voc2010/devkit_doc_08-May-2010.pdf) mean Average Precision (mAP) for evaluation (the mean is taken over the per-class APs).
51 | To be considered a correct detection, the overlap ratio $a_o$ between the predicted bounding box $B_p$ and the ground-truth bounding box $B_{gt}$, given by the formula
52 |
53 | $$a_o = \frac{area(B_p \cap B_{gt})}{area(B_p \cup B_{gt})},$$
54 | must exceed 0.5, where $B_p \cap B_{gt}$ denotes the intersection of the predicted and ground-truth bounding boxes and $B_p \cup B_{gt}$ their union.
55 | Average Precision is then calculated for each class separately over its $n$ ground-truth objects, where $n$ is the total number of objects in the given class.
56 | For $k$ classes, mAP is the average of the per-class APs.
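
For concreteness, the overlap ratio above can be computed as in this small sketch (illustrative only, not part of the repository's evaluation code):

```python
def iou(box_a, box_b):
    """Overlap ratio a_o between two boxes given as (xmin, ymin, xmax, ymax)."""
    xmin = max(box_a[0], box_b[0])
    ymin = max(box_a[1], box_b[1])
    xmax = min(box_a[2], box_b[2])
    ymax = min(box_a[3], box_b[3])
    inter = max(0.0, xmax - xmin) * max(0.0, ymax - ymin)   # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)                # intersection over union

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.143
```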
58 |
59 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 10: Federated Computer Vision Case Study
2 |
3 | This is Chapter 10 of *Practicing Federated Learning*. We show how, in a federated learning setting, the data scattered across individual cameras can be used to build a federated vision system, as illustrated below:
4 |
5 |
6 |

7 |
8 |
9 | For related literature on federated vision, see the following links:
10 |
11 | * [FedVision: An Online Visual Object Detection Platform Powered by Federated Learning](https://arxiv.org/abs/2001.06202)
12 |
13 | * [Real-World Image Datasets for Federated Learning](https://arxiv.org/abs/1910.11089)
14 |
15 | The federated vision system currently has the following two implementations:
16 |
17 | - An implementation that uses **flask_socketio** for communication between the server and the clients.
18 | - An implementation based on **PaddleFL**; for the details, see [FedVision_PaddleFL](https://github.com/FederatedAI/FedVision).
19 |
20 | In this chapter we use **flask_socketio** as the communication layer between the server and the clients. The communication flow between them is shown below:
21 |
22 |
23 |
24 |
25 |

26 |
27 |
28 |
29 |
30 | ## 10.1 Dataset Preparation
31 |
32 |
33 |
34 |
35 |
36 | - **Use an external public dataset**: run the code with a common public object detection dataset (you may need to adapt the deep learning model yourself depending on the dataset). Common object detection datasets include:
37 |
38 | * [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/)
39 | * [MS COCO](https://cocodataset.org/#download)
40 |
41 | After obtaining a dataset, place it under the data directory; you can split it yourself according to your needs.
42 |
43 |
44 |
45 | ## 10.2 Environment
46 |
47 | To run the code in this chapter, first prepare the following environment:
48 |
49 | * Install the GPU build of PyTorch; the code in this chapter only runs in a GPU environment.
50 | * Install flask_socketio; we use it as the communication framework between the server and the clients.
51 | * For the remaining dependencies, see the requirements.txt file and install them with:
52 | ```
53 | pip install -r requirements.txt
54 | ```
55 |
56 |
57 |
58 | ## 10.3 Running the Code
59 |
60 | Once the dataset is downloaded and the required environment is installed, run the code as follows:
61 | * First, start the server with the following command:
62 |
63 | ```bash
64 | sh ./run_server.sh dataset model port
65 | ```
66 | This takes three arguments (for example, `sh ./run_server.sh street_5 yolo 8080`, where the port number is just an example value):
67 |
68 | 1. dataset: the dataset name; the options are "street_5" and "street_20".
69 |
70 | 2. model: the model; the options are "faster" and "yolo".
71 |
72 | 3. port: the port number, chosen by the user.
73 |
74 |
75 |
76 | * Then start the clients with the following command:
77 | ```bash
78 | sh ./run.sh dataset gpu_id model port
79 | ```
80 | The client takes four arguments (for example, `sh ./run.sh street_5 0 yolo 8080`, matching the server's dataset, model, and port):
81 |
82 | 1. dataset: the dataset name, same as for the server; the options are "street_5" and "street_20".
83 |
84 | 2. gpu_id: when simulating several clients on one machine, each client's local training needs its own GPU resources; to avoid all clients being bound to the same GPU and crashing with out-of-memory errors, assign each client to a different GPU. In a distributed setting, where each client runs on its own device, this argument can be set to anything.
85 |
86 | 3. model: the model, same as for the server; the options are "faster" and "yolo".
87 |
88 | 4. port: the port number, chosen by the user.
89 |
90 |
91 |
92 | While the job is running, you can also force it to stop with the following command:
93 | ```bash
94 | sh ./stop.sh street_5 yolo
95 | ```
96 |
97 |
98 |
99 | ## 10.4 Results
100 | We use the two models YOLOv3 and Faster R-CNN to compare the results of joint modeling on the street dataset in the federated learning setting.
101 |
102 | We test different numbers of participating clients (C) and different numbers of local training epochs (E); the mAP results are shown in the figure below:
103 |
104 |
105 |

106 |
107 |
108 | The loss as a function of the number of training rounds is shown below:
109 |
110 |
111 |

112 |
113 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/config/coco.data:
--------------------------------------------------------------------------------
1 | classes= 80
2 | train=data/coco/trainvalno5k.txt
3 | valid=data/coco/5k.txt
4 | names=data/coco.names
5 | backup=backup/
6 | eval=coco
7 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/config/custom.data:
--------------------------------------------------------------------------------
1 | classes= 7
2 | train=data/custom/train.txt
3 | valid=data/custom/valid.txt
4 | names=data/custom/classes.names
5 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/config/yolov3-tiny.cfg:
--------------------------------------------------------------------------------
1 | [net]
2 | # Testing
3 | batch=1
4 | subdivisions=1
5 | # Training
6 | # batch=64
7 | # subdivisions=2
8 | width=416
9 | height=416
10 | channels=3
11 | momentum=0.9
12 | decay=0.0005
13 | angle=0
14 | saturation = 1.5
15 | exposure = 1.5
16 | hue=.1
17 |
18 | learning_rate=0.001
19 | burn_in=1000
20 | max_batches = 500200
21 | policy=steps
22 | steps=400000,450000
23 | scales=.1,.1
24 |
25 | # 0
26 | [convolutional]
27 | batch_normalize=1
28 | filters=16
29 | size=3
30 | stride=1
31 | pad=1
32 | activation=leaky
33 |
34 | # 1
35 | [maxpool]
36 | size=2
37 | stride=2
38 |
39 | # 2
40 | [convolutional]
41 | batch_normalize=1
42 | filters=32
43 | size=3
44 | stride=1
45 | pad=1
46 | activation=leaky
47 |
48 | # 3
49 | [maxpool]
50 | size=2
51 | stride=2
52 |
53 | # 4
54 | [convolutional]
55 | batch_normalize=1
56 | filters=64
57 | size=3
58 | stride=1
59 | pad=1
60 | activation=leaky
61 |
62 | # 5
63 | [maxpool]
64 | size=2
65 | stride=2
66 |
67 | # 6
68 | [convolutional]
69 | batch_normalize=1
70 | filters=128
71 | size=3
72 | stride=1
73 | pad=1
74 | activation=leaky
75 |
76 | # 7
77 | [maxpool]
78 | size=2
79 | stride=2
80 |
81 | # 8
82 | [convolutional]
83 | batch_normalize=1
84 | filters=256
85 | size=3
86 | stride=1
87 | pad=1
88 | activation=leaky
89 |
90 | # 9
91 | [maxpool]
92 | size=2
93 | stride=2
94 |
95 | # 10
96 | [convolutional]
97 | batch_normalize=1
98 | filters=512
99 | size=3
100 | stride=1
101 | pad=1
102 | activation=leaky
103 |
104 | # 11
105 | [maxpool]
106 | size=2
107 | stride=1
108 |
109 | # 12
110 | [convolutional]
111 | batch_normalize=1
112 | filters=1024
113 | size=3
114 | stride=1
115 | pad=1
116 | activation=leaky
117 |
118 | ###########
119 |
120 | # 13
121 | [convolutional]
122 | batch_normalize=1
123 | filters=256
124 | size=1
125 | stride=1
126 | pad=1
127 | activation=leaky
128 |
129 | # 14
130 | [convolutional]
131 | batch_normalize=1
132 | filters=512
133 | size=3
134 | stride=1
135 | pad=1
136 | activation=leaky
137 |
138 | # 15
139 | [convolutional]
140 | size=1
141 | stride=1
142 | pad=1
143 | filters=255
144 | activation=linear
145 |
146 |
147 |
148 | # 16
149 | [yolo]
150 | mask = 3,4,5
151 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
152 | classes=80
153 | num=6
154 | jitter=.3
155 | ignore_thresh = .7
156 | truth_thresh = 1
157 | random=1
158 |
159 | # 17
160 | [route]
161 | layers = -4
162 |
163 | # 18
164 | [convolutional]
165 | batch_normalize=1
166 | filters=128
167 | size=1
168 | stride=1
169 | pad=1
170 | activation=leaky
171 |
172 | # 19
173 | [upsample]
174 | stride=2
175 |
176 | # 20
177 | [route]
178 | layers = -1, 8
179 |
180 | # 21
181 | [convolutional]
182 | batch_normalize=1
183 | filters=256
184 | size=3
185 | stride=1
186 | pad=1
187 | activation=leaky
188 |
189 | # 22
190 | [convolutional]
191 | size=1
192 | stride=1
193 | pad=1
194 | filters=255
195 | activation=linear
196 |
197 | # 23
198 | [yolo]
199 | mask = 1,2,3
200 | anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
201 | classes=80
202 | num=6
203 | jitter=.3
204 | ignore_thresh = .7
205 | truth_thresh = 1
206 | random=1
207 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter10_Computer_Vision/data/__init__.py
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/custom/classes.names:
--------------------------------------------------------------------------------
1 | basket
2 | carton
3 | chair
4 | electrombile
5 | gastank
6 | sunshade
7 | table
8 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/custom/train.txt:
--------------------------------------------------------------------------------
1 | data/custom/images/000001.jpg
2 | data/custom/images/000002.jpg
3 | data/custom/images/000003.jpg
4 | data/custom/images/000004.jpg
5 | data/custom/images/000005.jpg
6 | data/custom/images/000006.jpg
7 | data/custom/images/000008.jpg
8 | data/custom/images/000010.jpg
9 | data/custom/images/000013.jpg
10 | data/custom/images/000014.jpg
11 | data/custom/images/000016.jpg
12 | data/custom/images/000017.jpg
13 | data/custom/images/000018.jpg
14 | data/custom/images/000019.jpg
15 | data/custom/images/000020.jpg
16 | data/custom/images/000021.jpg
17 | data/custom/images/000022.jpg
18 | data/custom/images/000023.jpg
19 | data/custom/images/000024.jpg
20 | data/custom/images/000026.jpg
21 | data/custom/images/000027.jpg
22 | data/custom/images/000028.jpg
23 | data/custom/images/000029.jpg
24 | data/custom/images/000030.jpg
25 | data/custom/images/000031.jpg
26 | data/custom/images/000032.jpg
27 | data/custom/images/000033.jpg
28 | data/custom/images/000034.jpg
29 | data/custom/images/000035.jpg
30 | data/custom/images/000036.jpg
31 | data/custom/images/000037.jpg
32 | data/custom/images/000040.jpg
33 | data/custom/images/000041.jpg
34 | data/custom/images/000042.jpg
35 | data/custom/images/000043.jpg
36 | data/custom/images/000044.jpg
37 | data/custom/images/000045.jpg
38 | data/custom/images/000046.jpg
39 | data/custom/images/000047.jpg
40 | data/custom/images/000049.jpg
41 | data/custom/images/000050.jpg
42 | data/custom/images/000051.jpg
43 | data/custom/images/000053.jpg
44 | data/custom/images/000054.jpg
45 | data/custom/images/000055.jpg
46 | data/custom/images/000056.jpg
47 | data/custom/images/000057.jpg
48 | data/custom/images/000058.jpg
49 | data/custom/images/000060.jpg
50 | data/custom/images/000061.jpg
51 | data/custom/images/000062.jpg
52 | data/custom/images/000064.jpg
53 | data/custom/images/000065.jpg
54 | data/custom/images/000066.jpg
55 | data/custom/images/000067.jpg
56 | data/custom/images/000068.jpg
57 | data/custom/images/000069.jpg
58 | data/custom/images/000070.jpg
59 | data/custom/images/000071.jpg
60 | data/custom/images/000072.jpg
61 | data/custom/images/000074.jpg
62 | data/custom/images/000075.jpg
63 | data/custom/images/000076.jpg
64 | data/custom/images/000077.jpg
65 | data/custom/images/000078.jpg
66 | data/custom/images/000079.jpg
67 | data/custom/images/000080.jpg
68 | data/custom/images/000082.jpg
69 | data/custom/images/000083.jpg
70 | data/custom/images/000084.jpg
71 | data/custom/images/000085.jpg
72 | data/custom/images/000086.jpg
73 | data/custom/images/000087.jpg
74 | data/custom/images/000088.jpg
75 | data/custom/images/000089.jpg
76 | data/custom/images/000090.jpg
77 | data/custom/images/000092.jpg
78 | data/custom/images/000093.jpg
79 | data/custom/images/000094.jpg
80 | data/custom/images/000095.jpg
81 | data/custom/images/000096.jpg
82 | data/custom/images/000097.jpg
83 | data/custom/images/000098.jpg
84 | data/custom/images/000099.jpg
85 | data/custom/images/000100.jpg
86 | data/custom/images/000101.jpg
87 | data/custom/images/000102.jpg
88 | data/custom/images/000103.jpg
89 | data/custom/images/000105.jpg
90 | data/custom/images/000106.jpg
91 | data/custom/images/000108.jpg
92 | data/custom/images/000109.jpg
93 | data/custom/images/000110.jpg
94 | data/custom/images/000113.jpg
95 | data/custom/images/000114.jpg
96 | data/custom/images/000116.jpg
97 | data/custom/images/000117.jpg
98 | data/custom/images/000118.jpg
99 | data/custom/images/000119.jpg
100 | data/custom/images/000121.jpg
101 | data/custom/images/000122.jpg
102 | data/custom/images/000123.jpg
103 | data/custom/images/000124.jpg
104 | data/custom/images/000127.jpg
105 | data/custom/images/000129.jpg
106 | data/custom/images/000130.jpg
107 | data/custom/images/000131.jpg
108 | data/custom/images/000132.jpg
109 | data/custom/images/000133.jpg
110 | data/custom/images/000134.jpg
111 | data/custom/images/000135.jpg
112 | data/custom/images/000137.jpg
113 | data/custom/images/000139.jpg
114 | data/custom/images/000140.jpg
115 | data/custom/images/000141.jpg
116 | data/custom/images/000142.jpg
117 | data/custom/images/000144.jpg
118 | data/custom/images/000145.jpg
119 | data/custom/images/000146.jpg
120 | data/custom/images/000148.jpg
121 | data/custom/images/000149.jpg
122 | data/custom/images/000150.jpg
123 | data/custom/images/000151.jpg
124 | data/custom/images/000152.jpg
125 | data/custom/images/000153.jpg
126 | data/custom/images/000154.jpg
127 | data/custom/images/000155.jpg
128 | data/custom/images/000157.jpg
129 | data/custom/images/000158.jpg
130 | data/custom/images/000160.jpg
131 | data/custom/images/000162.jpg
132 | data/custom/images/000163.jpg
133 | data/custom/images/000164.jpg
134 | data/custom/images/000165.jpg
135 | data/custom/images/000166.jpg
136 | data/custom/images/000167.jpg
137 | data/custom/images/000168.jpg
138 | data/custom/images/000169.jpg
139 | data/custom/images/000170.jpg
140 | data/custom/images/000171.jpg
141 | data/custom/images/000172.jpg
142 | data/custom/images/000173.jpg
143 | data/custom/images/000174.jpg
144 | data/custom/images/000175.jpg
145 | data/custom/images/000177.jpg
146 | data/custom/images/000178.jpg
147 | data/custom/images/000179.jpg
148 | data/custom/images/000181.jpg
149 | data/custom/images/000182.jpg
150 | data/custom/images/000183.jpg
151 | data/custom/images/000184.jpg
152 | data/custom/images/000185.jpg
153 | data/custom/images/000186.jpg
154 | data/custom/images/000187.jpg
155 | data/custom/images/000188.jpg
156 | data/custom/images/000189.jpg
157 | data/custom/images/000191.jpg
158 | data/custom/images/000192.jpg
159 | data/custom/images/000195.jpg
160 | data/custom/images/000196.jpg
161 | data/custom/images/000198.jpg
162 | data/custom/images/000199.jpg
163 | data/custom/images/000200.jpg
164 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/custom/valid.txt:
--------------------------------------------------------------------------------
1 | data/custom/images/000007.jpg
2 | data/custom/images/000009.jpg
3 | data/custom/images/000011.jpg
4 | data/custom/images/000012.jpg
5 | data/custom/images/000015.jpg
6 | data/custom/images/000025.jpg
7 | data/custom/images/000038.jpg
8 | data/custom/images/000039.jpg
9 | data/custom/images/000048.jpg
10 | data/custom/images/000052.jpg
11 | data/custom/images/000059.jpg
12 | data/custom/images/000063.jpg
13 | data/custom/images/000073.jpg
14 | data/custom/images/000081.jpg
15 | data/custom/images/000091.jpg
16 | data/custom/images/000104.jpg
17 | data/custom/images/000107.jpg
18 | data/custom/images/000111.jpg
19 | data/custom/images/000112.jpg
20 | data/custom/images/000115.jpg
21 | data/custom/images/000120.jpg
22 | data/custom/images/000125.jpg
23 | data/custom/images/000126.jpg
24 | data/custom/images/000128.jpg
25 | data/custom/images/000136.jpg
26 | data/custom/images/000138.jpg
27 | data/custom/images/000143.jpg
28 | data/custom/images/000147.jpg
29 | data/custom/images/000156.jpg
30 | data/custom/images/000159.jpg
31 | data/custom/images/000161.jpg
32 | data/custom/images/000176.jpg
33 | data/custom/images/000180.jpg
34 | data/custom/images/000190.jpg
35 | data/custom/images/000193.jpg
36 | data/custom/images/000194.jpg
37 | data/custom/images/000197.jpg
38 | data/custom/images/000215.jpg
39 | data/custom/images/000216.jpg
40 | data/custom/images/000219.jpg
41 | data/custom/images/000224.jpg
42 | data/custom/images/000229.jpg
43 | data/custom/images/000231.jpg
44 | data/custom/images/000232.jpg
45 | data/custom/images/000234.jpg
46 | data/custom/images/000240.jpg
47 | data/custom/images/000252.jpg
48 | data/custom/images/000258.jpg
49 | data/custom/images/000261.jpg
50 | data/custom/images/000263.jpg
51 | data/custom/images/000267.jpg
52 | data/custom/images/000268.jpg
53 | data/custom/images/000269.jpg
54 | data/custom/images/000273.jpg
55 | data/custom/images/000277.jpg
56 | data/custom/images/000280.jpg
57 | data/custom/images/000286.jpg
58 | data/custom/images/000287.jpg
59 | data/custom/images/000289.jpg
60 | data/custom/images/000292.jpg
61 | data/custom/images/000294.jpg
62 | data/custom/images/000300.jpg
63 | data/custom/images/000302.jpg
64 | data/custom/images/000304.jpg
65 | data/custom/images/000309.jpg
66 | data/custom/images/000313.jpg
67 | data/custom/images/000316.jpg
68 | data/custom/images/000318.jpg
69 | data/custom/images/000331.jpg
70 | data/custom/images/000339.jpg
71 | data/custom/images/000346.jpg
72 | data/custom/images/000350.jpg
73 | data/custom/images/000367.jpg
74 | data/custom/images/000371.jpg
75 | data/custom/images/000380.jpg
76 | data/custom/images/000383.jpg
77 | data/custom/images/000394.jpg
78 | data/custom/images/000405.jpg
79 | data/custom/images/000407.jpg
80 | data/custom/images/000409.jpg
81 | data/custom/images/000411.jpg
82 | data/custom/images/000414.jpg
83 | data/custom/images/000415.jpg
84 | data/custom/images/000418.jpg
85 | data/custom/images/000424.jpg
86 | data/custom/images/000426.jpg
87 | data/custom/images/000433.jpg
88 | data/custom/images/000435.jpg
89 | data/custom/images/000436.jpg
90 | data/custom/images/000438.jpg
91 | data/custom/images/000441.jpg
92 | data/custom/images/000448.jpg
93 | data/custom/images/000449.jpg
94 | data/custom/images/000454.jpg
95 | data/custom/images/000462.jpg
96 | data/custom/images/000463.jpg
97 | data/custom/images/000465.jpg
98 | data/custom/images/000467.jpg
99 | data/custom/images/000470.jpg
100 | data/custom/images/000481.jpg
101 | data/custom/images/000491.jpg
102 | data/custom/images/000494.jpg
103 | data/custom/images/000497.jpg
104 | data/custom/images/000510.jpg
105 | data/custom/images/000520.jpg
106 | data/custom/images/000525.jpg
107 | data/custom/images/000531.jpg
108 | data/custom/images/000537.jpg
109 | data/custom/images/000540.jpg
110 | data/custom/images/000543.jpg
111 | data/custom/images/000547.jpg
112 | data/custom/images/000548.jpg
113 | data/custom/images/000551.jpg
114 | data/custom/images/000552.jpg
115 | data/custom/images/000565.jpg
116 | data/custom/images/000568.jpg
117 | data/custom/images/000573.jpg
118 | data/custom/images/000575.jpg
119 | data/custom/images/000577.jpg
120 | data/custom/images/000580.jpg
121 | data/custom/images/000585.jpg
122 | data/custom/images/000587.jpg
123 | data/custom/images/000601.jpg
124 | data/custom/images/000602.jpg
125 | data/custom/images/000606.jpg
126 | data/custom/images/000615.jpg
127 | data/custom/images/000622.jpg
128 | data/custom/images/000629.jpg
129 | data/custom/images/000642.jpg
130 | data/custom/images/000645.jpg
131 | data/custom/images/000646.jpg
132 | data/custom/images/000647.jpg
133 | data/custom/images/000650.jpg
134 | data/custom/images/000653.jpg
135 | data/custom/images/000662.jpg
136 | data/custom/images/000665.jpg
137 | data/custom/images/000675.jpg
138 | data/custom/images/000682.jpg
139 | data/custom/images/000684.jpg
140 | data/custom/images/000692.jpg
141 | data/custom/images/000693.jpg
142 | data/custom/images/000700.jpg
143 | data/custom/images/000705.jpg
144 | data/custom/images/000713.jpg
145 | data/custom/images/000718.jpg
146 | data/custom/images/000721.jpg
147 | data/custom/images/000723.jpg
148 | data/custom/images/000724.jpg
149 | data/custom/images/000736.jpg
150 | data/custom/images/000738.jpg
151 | data/custom/images/000743.jpg
152 | data/custom/images/000769.jpg
153 | data/custom/images/000772.jpg
154 | data/custom/images/000777.jpg
155 | data/custom/images/000796.jpg
156 | data/custom/images/000801.jpg
157 | data/custom/images/000803.jpg
158 | data/custom/images/000807.jpg
159 | data/custom/images/000813.jpg
160 | data/custom/images/000815.jpg
161 | data/custom/images/000818.jpg
162 | data/custom/images/000824.jpg
163 | data/custom/images/000829.jpg
164 | data/custom/images/000837.jpg
165 | data/custom/images/000840.jpg
166 | data/custom/images/000843.jpg
167 | data/custom/images/000848.jpg
168 | data/custom/images/000854.jpg
169 | data/custom/images/000859.jpg
170 | data/custom/images/000861.jpg
171 | data/custom/images/000863.jpg
172 | data/custom/images/000864.jpg
173 | data/custom/images/000866.jpg
174 | data/custom/images/000868.jpg
175 | data/custom/images/000869.jpg
176 | data/custom/images/000872.jpg
177 | data/custom/images/000884.jpg
178 | data/custom/images/000885.jpg
179 | data/custom/images/000891.jpg
180 | data/custom/images/000895.jpg
181 | data/custom/images/000903.jpg
182 | data/custom/images/000904.jpg
183 | data/custom/images/000905.jpg
184 | data/custom/images/000906.jpg
185 | data/custom/images/000911.jpg
186 | data/custom/images/000913.jpg
187 | data/custom/images/000930.jpg
188 | data/custom/images/000932.jpg
189 | data/custom/images/000942.jpg
190 | data/custom/images/000949.jpg
191 | data/custom/images/000950.jpg
192 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/data_utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import os.path as osp
3 | import xml.etree.ElementTree as ET
4 |
5 |
6 | def show_dir():
7 | dir_list = [a for a in os.listdir("./") if os.path.isdir(a) and a != 'test']
8 | total = 0
9 | categories = {}
10 | for dire in dir_list:
11 | jpeg_path = osp.join(dire, 'Annotations')
12 | xml_list = os.listdir(jpeg_path)
13 | total += len(xml_list)
14 | categories[dire] = len(xml_list)
15 | list1 = sorted(categories.items(), key=lambda x: x[1], reverse=True)
16 | for i, (directory, num) in enumerate(list1):
17 | print(directory, num - 2680)
18 | print(total)
19 |
20 |
21 | def merge_test():
22 | dir_list = [a for a in os.listdir("./") if os.path.isdir(a) and a != 'test']
23 | for dir_name in dir_list:
24 | Anno_path = osp.join(dir_name, "Annotations")
25 | Jpeg_path = osp.join(dir_name, "JPEGImages")
26 | Imag_path = osp.join(dir_name, "ImageSets", "Main")
27 | if not osp.exists(Imag_path):
28 | os.makedirs(Imag_path)
29 | test_anno_path = osp.join("test", "Annotations")
30 | test_jpeg_path = osp.join("test", "JPEGImages")
31 | test_txt_path = osp.join("test", "ImageSets", "Main", "test.txt")
32 | train_txt = open(osp.join(Imag_path, "train.txt"), 'w')
33 | valid_txt = open(osp.join(Imag_path, "valid.txt"), 'w')
34 | test_txt = open(osp.join(Imag_path, "test.txt"), 'w')
35 | anno_list = os.listdir(Anno_path)
36 | for anno_name in anno_list:
37 | anno_name = anno_name.replace(".xml", "\n")
38 | train_txt.write(anno_name)
39 | os.system("cp {}/* {}".format(test_anno_path, Anno_path))
40 | os.system("cp {}/* {}".format(test_jpeg_path, Jpeg_path))
41 | os.system("cp {} {}".format(test_txt_path, Imag_path))
42 | os.system("cp {} {}".format(test_txt_path, osp.join(Imag_path, 'valid.txt')))
43 |
44 |
45 | def make_txt():
46 | dir_list = [a for a in os.listdir("./") if os.path.isdir(a) and a != 'test']
47 | for dir_name in dir_list:
48 | Anno_path = osp.join(dir_name, "Annotations")
49 | Imag_path = osp.join(dir_name, "ImageSets", "Main")
50 | ftest = open(osp.join(Imag_path, "test.txt"), 'r').readlines()
51 | ftrain = open(osp.join(Imag_path, "train.txt"), 'w')
52 | annos = os.listdir(Anno_path)
53 | for anno in annos:
54 | anno = anno.replace(".xml", "\n")
55 | if anno not in ftest:
56 | ftrain.write(anno)
57 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/dataset.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import division
3 | import torch as t
4 | from data.voc_dataset import VOCBboxDataset
5 | from skimage import transform as sktsf
6 | from torchvision import transforms as tvtsf
7 | from data import util
8 | import numpy as np
9 | from utils.config import opt
10 |
11 |
12 | def inverse_normalize(img):
13 | if opt.caffe_pretrain:
14 | img = img + (np.array([122.7717, 115.9465, 102.9801]).reshape(3, 1, 1))
15 | return img[::-1, :, :]
16 | # approximate un-normalize for visualize
17 | return (img * 0.225 + 0.45).clip(min=0, max=1) * 255
18 |
19 |
20 | def pytorch_normalze(img):
21 | """
22 | https://github.com/pytorch/vision/issues/223
23 | return appr -1~1 RGB
24 | """
25 | normalize = tvtsf.Normalize(mean=[0.485, 0.456, 0.406],
26 | std=[0.229, 0.224, 0.225])
27 | img = normalize(t.from_numpy(img))
28 | return img.numpy()
29 |
30 |
31 | def caffe_normalize(img):
32 | """
33 | return appr -125-125 BGR
34 | """
35 | img = img[[2, 1, 0], :, :] # RGB-BGR
36 | img = img * 255
37 | mean = np.array([122.7717, 115.9465, 102.9801]).reshape(3, 1, 1)
38 | img = (img - mean).astype(np.float32, copy=True)
39 | return img
40 |
41 |
42 | def preprocess(img, min_size=600, max_size=1000):
43 | """Preprocess an image for feature extraction.
44 |
45 | The length of the shorter edge is scaled to :obj:`self.min_size`.
46 | After the scaling, if the length of the longer edge is longer than
48 | :obj:`self.max_size`, the image is scaled to fit the longer edge
49 | to :obj:`self.max_size`.
50 |
51 | After resizing the image, the image is subtracted by a mean image value
52 | :obj:`self.mean`.
53 |
54 | Args:
55 | img (~numpy.ndarray): An image. This is in CHW and RGB format.
56 | The range of its value is :math:`[0, 255]`.
57 |
58 | Returns:
59 | ~numpy.ndarray: A preprocessed image.
60 |
61 | """
62 | C, H, W = img.shape
63 | scale1 = min_size / min(H, W)
64 | scale2 = max_size / max(H, W)
65 | scale = min(scale1, scale2)
66 | img = img / 255.
67 | img = sktsf.resize(img, (C, H * scale, W * scale), mode='reflect',anti_aliasing=False)
68 | # both the longer and shorter should be less than
69 | # max_size and min_size
70 | if opt.caffe_pretrain:
71 | normalize = caffe_normalize
72 | else:
73 | normalize = pytorch_normalze
74 | return normalize(img)
75 |
76 |
77 | class Transform(object):
78 |
79 | def __init__(self, min_size=600, max_size=1000):
80 | self.min_size = min_size
81 | self.max_size = max_size
82 |
83 | def __call__(self, in_data):
84 | img, bbox, label = in_data
85 | _, H, W = img.shape
86 | img = preprocess(img, self.min_size, self.max_size)
87 | _, o_H, o_W = img.shape
88 | scale = o_H / H
89 | bbox = util.resize_bbox(bbox, (H, W), (o_H, o_W))
90 |
91 | # horizontally flip
92 | img, params = util.random_flip(
93 | img, x_random=True, return_param=True)
94 | bbox = util.flip_bbox(
95 | bbox, (o_H, o_W), x_flip=params['x_flip'])
96 |
97 | return img, bbox, label, scale
98 |
99 |
100 | class Dataset:
101 | def __init__(self, opt):
102 | self.opt = opt
103 | self.db = VOCBboxDataset(opt.voc_data_dir, opt.label_names)
104 | self.tsf = Transform(opt.min_size, opt.max_size)
105 |
106 | def __getitem__(self, idx):
107 | ori_img, bbox, label, difficult = self.db.get_example(idx)
108 |
109 | img, bbox, label, scale = self.tsf((ori_img, bbox, label))
110 | # TODO: check whose stride is negative to fix this instead copy all
111 | # some of the strides of a given numpy array are negative.
112 | return img.copy(), ori_img.shape[1:], bbox.copy(), \
113 | label.copy(), scale, difficult
114 |
115 | def __len__(self):
116 | return len(self.db)
117 |
118 |
119 | class TestDataset:
120 | def __init__(self, opt, split='test', use_difficult=True):
121 | self.opt = opt
122 | self.db = VOCBboxDataset(opt.voc_data_dir, opt.label_names,
123 | split=split, use_difficult=use_difficult)
124 |
125 | def __getitem__(self, idx):
126 | ori_img, bbox, label, difficult = self.db.get_example(idx)
127 | img = preprocess(ori_img)
128 | scale = ori_img.shape[1] / img.shape[1]
129 | return img, ori_img.shape[1:], bbox, label, scale, difficult
130 |
131 | def __len__(self):
132 | return len(self.db)
133 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/generate_task_json.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import xml.etree.ElementTree as ET
4 |
5 | sets = ['train', 'test']
6 | classes = ["basket", "carton", "chair", "electrombile", "gastank", "sunshade", "table"]
7 | dataset = "street_5"
8 | model = "yolo"
9 | num_client = int(dataset.split('_')[-1])
10 |
11 |
12 | def convert(size, box):
13 | dw = 1. / (size[0])
14 | dh = 1. / (size[1])
15 | x = (box[0] + box[1]) / 2.0 - 1
16 | y = (box[2] + box[3]) / 2.0 - 1
17 | w = box[1] - box[0]
18 | h = box[3] - box[2]
19 | x = x * dw
20 | w = w * dw
21 | y = y * dh
22 | h = h * dh
23 | return x, y, w, h
24 |
25 |
26 | def convert_annotation(anno_path, label_path, image_id):
27 | in_file = open(os.path.join(anno_path, image_id + ".xml"))
28 | out_file = open(os.path.join(label_path, image_id + ".txt"), 'w')
29 | tree = ET.parse(in_file)
30 | root = tree.getroot()
31 | size = root.find('size')
32 | w = int(size.find('width').text)
33 | h = int(size.find('height').text)
34 |
35 | for obj in root.iter('object'):
36 | difficult = obj.find('difficult').text
37 | cls = obj.find('name').text
38 | if cls not in classes or int(difficult) == 1:
39 | continue
40 | cls_id = classes.index(cls)
41 | xmlbox = obj.find('bndbox')
42 | b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
43 | float(xmlbox.find('ymax').text))
44 | bb = convert((w, h), b)
45 | out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
46 |
47 |
48 | if model == "faster":
49 | server_task_file = os.path.join("task_configs", model, dataset, "faster_task.json")
50 | os.makedirs(os.path.dirname(server_task_file), exist_ok=True)
51 | server_task_config = dict()
52 | server_task_config["model_name"] = "FasterRCNN"
53 | server_task_config["model_config"] = os.path.join("data", "task_configs", model, dataset,
54 | "faster_model.json")
55 | server_task_config["log_dir"] = "{}/{}".format(model, dataset)
56 | server_task_config["data_path"] = "data/Street_voc/total"
57 | server_task_config["model_path"] = "faster_model.pkl"
58 | server_task_config["MIN_NUM_WORKERS"] = num_client
59 | server_task_config["MAX_NUM_ROUNDS"] = 1000
60 | server_task_config["NUM_TOLERATE"] = -1
61 | server_task_config["NUM_CLIENTS_CONTACTED_PER_ROUND"] = num_client
62 | server_task_config["ROUNDS_BETWEEN_VALIDATIONS"] = 1000
63 | with open(server_task_file, "w") as f:
64 | json.dump(server_task_config, f, indent=4)
65 |
66 | model_config_file = os.path.join("task_configs", model, dataset, model + "_model.json")
67 | model_config = dict()
68 | model_config["model_name"] = "fasterrcnn-faccee"
69 | model_config["env"] = "FasterRCNN"
70 | model_config["plot_every"] = 100
71 | model_config["batch_size"] = 1
72 | model_config["label_names"] = classes
73 | with open(model_config_file, 'w') as f:
74 | json.dump(model_config, f, indent=4)
75 |
76 | for i in range(1, int(dataset.split('_')[-1]) + 1):
77 | dir_path = os.path.join(str(i), "ImageSets", "Main", "train.txt")
78 | task_file_path = os.path.join("task_configs", model, dataset, "faster_task" + str(i) + ".json")
79 | task_config = dict()
80 | task_config["model_name"] = "FasterRCNN"
81 | task_config["model_config"] = os.path.join("data", "task_configs", model, dataset,
82 | "faster_model.json")
83 | task_config["log_filename"] = "{}/{}/FL_client_{}_log".format(model, dataset, str(i))
84 | task_config["data_path"] = "data/{}/{}".format(dataset, str(i))
85 | task_config["local_epoch"] = 5
86 |
87 | with open(task_file_path, "w") as f:
88 | json.dump(task_config, f, indent=4)
89 |
90 |
91 | elif model == "yolo":
92 | server_task_file = os.path.join("task_configs", model, dataset, model + "_task.json")
93 | os.makedirs(os.path.dirname(server_task_file), exist_ok=True)
94 | server_task_config = dict()
95 | server_task_config["model_name"] = "Yolo"
96 | server_task_config["model_config"] = os.path.join("data", "task_configs", model, dataset, "yolo_model.json")
97 | server_task_config["log_dir"] = "{}/{}".format(model, dataset)
98 | server_task_config["model_path"] = "yolo_model.pkl"
99 | server_task_config["MIN_NUM_WORKERS"] = num_client
100 | server_task_config["MAX_NUM_ROUNDS"] = 1000
101 | server_task_config["NUM_TOLERATE"] = -1
102 | server_task_config["NUM_CLIENTS_CONTACTED_PER_ROUND"] = num_client
103 | server_task_config["ROUNDS_BETWEEN_VALIDATIONS"] = 1000
104 | with open(server_task_file, "w") as f:
105 | json.dump(server_task_config, f, indent=4)
106 |
107 | model_config_file = os.path.join("task_configs", model, dataset, model + "_model.json")
108 | model_config = dict()
109 | model_config["model_def"] = "config/yolov3-custom-{}.cfg".format(dataset.split('_')[0])
110 | model_config["pretrained_weights"] = "weights/darknet53.conv.74"
111 | model_config["multiscale_training"] = True
112 | model_config["gradient_accumulations"] = 2
113 | model_config["img_size"] = 416
114 | with open(model_config_file, 'w') as f:
115 | json.dump(model_config, f, indent=4)
116 | for i in range(1, int(dataset.split('_')[-1]) + 1):
117 | label_path = os.path.join(dataset, str(i), "labels")
118 | if not os.path.exists(label_path):
119 | os.mkdir(label_path)
120 | for image_set in sets:
121 | anno_path = os.path.join(dataset, str(i), "Annotations")
122 | image_path = os.path.join(dataset, str(i), "ImageSets", "Main", image_set + ".txt")
123 | image_ids = open(image_path).read().strip().split()
124 | list_file = open('%s/%s/%s.txt' % (dataset, str(i), image_set), 'w')
125 | for image_id in image_ids:
126 | list_file.write('%s/%s/%s/JPEGImages/%s.jpg\n' % ("data", dataset, str(i), image_id))
127 | convert_annotation(anno_path, label_path, image_id)
128 | list_file.close()
129 | task_file_path = os.path.join("task_configs", model, dataset, "yolo_task" + str(i) + ".json")
130 | task_config = dict()
131 | task_config["model_name"] = "Yolo"
132 | task_config["model_config"] = "data/task_configs/{}/{}/yolo_model.json".format(model, dataset)
133 | task_config["log_filename"] = "{}/{}/FL_client_{}_log".format(model, dataset, str(i))
134 | task_config["train"] = "data/{}/{}/train.txt".format(dataset, str(i))
135 | task_config["test"] = "data/{}/{}/test.txt".format(dataset, str(i))
136 | task_config["names"] = "data/{}/classes.names".format(dataset)
137 | task_config["n_cpu"] = 4
138 | task_config["local_epoch"] = 5
139 | task_config["batch_size"] = 1
140 | with open(task_file_path, "w") as f:
141 | json.dump(task_config, f, indent=4)
142 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/data/voc_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | import xml.etree.ElementTree as ET
3 |
4 | import numpy as np
5 |
6 | from .util import read_image
7 |
8 |
9 | class VOCBboxDataset:
10 | """Bounding box dataset for PASCAL `VOC`_.
11 |
12 | .. _`VOC`: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
13 |
14 | The index corresponds to each image.
15 |
16 | When queried by an index, if :obj:`return_difficult == False`,
17 | this dataset returns a corresponding
18 | :obj:`img, bbox, label`, a tuple of an image, bounding boxes and labels.
19 | This is the default behaviour.
20 | If :obj:`return_difficult == True`, this dataset returns corresponding
21 | :obj:`img, bbox, label, difficult`. :obj:`difficult` is a boolean array
22 | that indicates whether bounding boxes are labeled as difficult or not.
23 |
24 | The bounding boxes are packed into a two dimensional tensor of shape
25 | :math:`(R, 4)`, where :math:`R` is the number of bounding boxes in
26 | the image. The second axis represents attributes of the bounding box.
27 | They are :math:`(y_{min}, x_{min}, y_{max}, x_{max})`, where the
28 | four attributes are coordinates of the top left and the bottom right
29 | vertices.
30 |
31 | The labels are packed into a one dimensional tensor of shape :math:`(R,)`.
32 | :math:`R` is the number of bounding boxes in the image.
33 | The class name of the label :math:`l` is :math:`l` th element of
34 | :obj:`VOC_BBOX_LABEL_NAMES`.
35 |
36 | The array :obj:`difficult` is a one dimensional boolean array of shape
37 | :math:`(R,)`. :math:`R` is the number of bounding boxes in the image.
38 | If :obj:`use_difficult` is :obj:`False`, this array is
39 | a boolean array with all :obj:`False`.
40 |
41 | The type of the image, the bounding boxes and the labels are as follows.
42 |
43 | * :obj:`img.dtype == numpy.float32`
44 | * :obj:`bbox.dtype == numpy.float32`
45 | * :obj:`label.dtype == numpy.int32`
46 | * :obj:`difficult.dtype == numpy.bool`
47 |
48 | Args:
49 | data_dir (string): Path to the root of the training data.
50 | i.e. "/data/image/voc/VOCdevkit/VOC2007/"
51 | split ({'train', 'val', 'trainval', 'test'}): Select a split of the
52 | dataset. :obj:`test` split is only available for
53 | 2007 dataset.
54 | year ({'2007', '2012'}): Use a dataset prepared for a challenge
55 | held in :obj:`year`.
56 | use_difficult (bool): If :obj:`True`, use images that are labeled as
57 | difficult in the original annotation.
58 | return_difficult (bool): If :obj:`True`, this dataset returns
59 | a boolean array
60 | that indicates whether bounding boxes are labeled as difficult
61 | or not. The default value is :obj:`False`.
62 |
63 | """
64 |
65 | def __init__(self, data_dir, label_names, split='train',
66 | use_difficult=False, return_difficult=False):
67 |
68 | # if split not in ['train', 'trainval', 'val']:
69 | # if not (split == 'test' and year == '2007'):
70 | # warnings.warn(
71 | # 'please pick split from \'train\', \'trainval\', \'val\''
72 | # 'for 2012 dataset. For 2007 dataset, you can pick \'test\''
73 | # ' in addition to the above mentioned splits.'
74 | # )
75 | id_list_file = os.path.join(
76 | data_dir, 'ImageSets/Main/{0}.txt'.format(split))
77 |
78 | self.ids = [id_.strip() for id_ in open(id_list_file)]
79 | self.data_dir = data_dir
80 | self.use_difficult = use_difficult
81 | self.return_difficult = return_difficult
82 | self.label_names = label_names
83 |
84 | def __len__(self):
85 | return len(self.ids)
86 |
87 | def get_example(self, i):
88 | """Returns the i-th example.
89 |
90 | Returns a color image and bounding boxes. The image is in CHW format.
91 | The returned image is RGB.
92 |
93 | Args:
94 | i (int): The index of the example.
95 |
96 | Returns:
97 | tuple of an image and bounding boxes
98 |
99 | """
100 | id_ = self.ids[i]
101 | anno = ET.parse(
102 | os.path.join(self.data_dir, 'Annotations', id_ + '.xml'))
103 | bbox = list()
104 | label = list()
105 | difficult = list()
106 | for obj in anno.findall('object'):
107 |             # when not using the difficult split and the object is
108 |             # difficult, skip it.
109 | if not self.use_difficult and int(obj.find('difficult').text) == 1:
110 | continue
111 |
112 | difficult.append(int(obj.find('difficult').text))
113 | bndbox_anno = obj.find('bndbox')
114 | # subtract 1 to make pixel indexes 0-based
115 | bbox.append([
116 | int(bndbox_anno.find(tag).text) - 1
117 | for tag in ('ymin', 'xmin', 'ymax', 'xmax')])
118 | name = obj.find('name').text.lower().strip()
119 | label.append(self.label_names.index(name))
120 | bbox = np.stack(bbox).astype(np.float32)
121 | label = np.stack(label).astype(np.int32)
122 | # When `use_difficult==False`, all elements in `difficult` are False.
123 |         difficult = np.array(difficult, dtype=bool).astype(np.uint8)  # PyTorch doesn't support bool arrays; np.bool is also removed in newer NumPy
124 |
125 | # Load a image
126 | img_file = os.path.join(self.data_dir, 'JPEGImages', id_ + '.jpg')
127 | img = read_image(img_file, color=True)
128 |
129 | # if self.return_difficult:
130 | # return img, bbox, label, difficult
131 | return img, bbox, label, difficult
132 |
133 | __getitem__ = get_example
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/experiments/log_formatter.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 |
4 | if __name__ == '__main__':
5 | parser = argparse.ArgumentParser()
6 | parser.add_argument("--log", type=str, required=True, help="path to log file")
7 | parser.add_argument("--output_dir", type=str, default="formatted_logs", help="path to output file")
8 | opt = parser.parse_args()
9 | log_file_name = os.path.basename(opt.log)
10 | if not os.path.exists(opt.log):
11 | raise FileNotFoundError("wrong log file path")
12 | if not os.path.exists(opt.output_dir):
13 | os.mkdir(opt.output_dir)
14 | output = open(os.path.join(opt.output_dir, log_file_name.replace(".log", ".csv")), 'w')
15 | header = ["train_loss", "aggr_test_loss", "aggr_test_map", "aggr_test_recall", "server_test_loss",
16 | "server_test_map", "server_test_recall"]
17 | round_, train_loss, aggr_test_loss, aggr_test_map, aggr_test_recall, server_test_loss, server_test_map, server_test_recall = [
18 | list() for _ in range(8)]
19 | log_file = open(opt.log).readlines()
20 | for line in log_file:
21 | line = line.strip()
22 | if "Round" in line:
23 | round_.append(int(line.split(" ")[-2]))
24 | elif "aggr_train_loss" in line:
25 | train_loss.append(round(float(line.split(" ")[-1]), 4))
26 | elif "aggr_test_loss" in line:
27 | aggr_test_loss.append(round(float(line.split(" ")[-1]), 4))
28 | elif "aggr_test_map" in line:
29 | aggr_test_map.append(round(float(line.split(" ")[-1]), 4))
30 | elif "aggr_test_recall" in line:
31 | aggr_test_recall.append(round(float(line.split(" ")[-1]), 4))
32 | elif "server_test_loss" in line:
33 | server_test_loss.append(round(float(line.split(" ")[-1]), 4))
34 | elif "server_test_map" in line:
35 | server_test_map.append(round(float(line.split(" ")[-1]), 4))
36 | elif "server_test_recall" in line:
37 | server_test_recall.append(round(float(line.split(" ")[-1]), 4))
38 | output.write("round,train_loss,test_map,test_recall\n")
39 | for r, loss, mAP, recall in zip(round_, train_loss, server_test_map, server_test_recall):
40 | output.write("{},{},{},{}\n".format(r, loss, mAP, recall))
41 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/figures/FL_flow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter10_Computer_Vision/figures/FL_flow.png
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/figures/fig10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter10_Computer_Vision/figures/fig10.png
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/figures/loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter10_Computer_Vision/figures/loss.png
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/figures/map.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter10_Computer_Vision/figures/map.png
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/__init__.py:
--------------------------------------------------------------------------------
1 | from model.faster_rcnn_vgg16 import FasterRCNNVGG16
2 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/faster_rcnn_vgg16.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | import torch as t
3 | from torch import nn
4 | from torchvision.models import vgg16
5 | from model.region_proposal_network import RegionProposalNetwork
6 | from model.faster_rcnn import FasterRCNN
7 | from model.roi_module import RoIPooling2D
8 | from utils import array_tool as at
9 | from utils.config import opt
10 |
11 |
12 | def decom_vgg16():
13 | # the 30th layer of features is relu of conv5_3
14 | if opt.caffe_pretrain:
15 | model = vgg16(pretrained=False)
16 | if not opt.load_path:
17 | model.load_state_dict(t.load(opt.caffe_pretrain_path))
18 | else:
19 | model = vgg16(not opt.load_path)
20 |
21 | features = list(model.features)[:30]
22 | classifier = model.classifier
23 |
24 | classifier = list(classifier)
25 | del classifier[6]
26 | if not opt.use_drop:
27 | del classifier[5]
28 | del classifier[2]
29 | classifier = nn.Sequential(*classifier)
30 |
31 | # freeze top4 conv
32 | for layer in features[:10]:
33 | for p in layer.parameters():
34 | p.requires_grad = False
35 |
36 | return nn.Sequential(*features), classifier
37 |
38 |
39 | class FasterRCNNVGG16(FasterRCNN):
40 | """Faster R-CNN based on VGG-16.
41 | For descriptions on the interface of this model, please refer to
42 | :class:`model.faster_rcnn.FasterRCNN`.
43 |
44 | Args:
45 | n_fg_class (int): The number of classes excluding the background.
46 | ratios (list of floats): This is ratios of width to height of
47 | the anchors.
48 | anchor_scales (list of numbers): This is areas of anchors.
49 | Those areas will be the product of the square of an element in
50 | :obj:`anchor_scales` and the original area of the reference
51 | window.
52 |
53 | """
54 |
55 | feat_stride = 16 # downsample 16x for output of conv5 in vgg16
56 |
57 | def __init__(self,
58 | n_fg_class=20,
59 | ratios=[0.5, 1, 2],
60 | anchor_scales=[8, 16, 32]
61 | ):
62 |
63 | extractor, classifier = decom_vgg16()
64 |
65 | rpn = RegionProposalNetwork(
66 | 512, 512,
67 | ratios=ratios,
68 | anchor_scales=anchor_scales,
69 | feat_stride=self.feat_stride,
70 | )
71 |
72 | head = VGG16RoIHead(
73 | n_class=n_fg_class + 1,
74 | roi_size=7,
75 | spatial_scale=(1. / self.feat_stride),
76 | classifier=classifier
77 | )
78 |
79 | super(FasterRCNNVGG16, self).__init__(
80 | extractor,
81 | rpn,
82 | head,
83 | )
84 |
85 |
86 | class VGG16RoIHead(nn.Module):
87 | """Faster R-CNN Head for VGG-16 based implementation.
88 | This class is used as a head for Faster R-CNN.
89 | This outputs class-wise localizations and classification based on feature
90 | maps in the given RoIs.
91 |
92 | Args:
93 | n_class (int): The number of classes possibly including the background.
94 | roi_size (int): Height and width of the feature maps after RoI-pooling.
95 | spatial_scale (float): Scale of the roi is resized.
96 | classifier (nn.Module): Two layer Linear ported from vgg16
97 |
98 | """
99 |
100 | def __init__(self, n_class, roi_size, spatial_scale,
101 | classifier):
102 | # n_class includes the background
103 | super(VGG16RoIHead, self).__init__()
104 |
105 | self.classifier = classifier
106 | self.cls_loc = nn.Linear(4096, n_class * 4)
107 | self.score = nn.Linear(4096, n_class)
108 |
109 | normal_init(self.cls_loc, 0, 0.001)
110 | normal_init(self.score, 0, 0.01)
111 |
112 | self.n_class = n_class
113 | self.roi_size = roi_size
114 | self.spatial_scale = spatial_scale
115 | self.roi = RoIPooling2D(self.roi_size, self.roi_size, self.spatial_scale)
116 |
117 | def forward(self, x, rois, roi_indices):
118 | """Forward the chain.
119 |
120 | We assume that there are :math:`N` batches.
121 |
122 | Args:
123 | x (Variable): 4D image variable.
124 | rois (Tensor): A bounding box array containing coordinates of
125 | proposal boxes. This is a concatenation of bounding box
126 | arrays from multiple images in the batch.
127 | Its shape is :math:`(R', 4)`. Given :math:`R_i` proposed
128 | RoIs from the :math:`i` th image,
129 | :math:`R' = \\sum _{i=1} ^ N R_i`.
130 | roi_indices (Tensor): An array containing indices of images to
131 | which bounding boxes correspond to. Its shape is :math:`(R',)`.
132 |
133 | """
134 | # in case roi_indices is ndarray
135 | roi_indices = at.totensor(roi_indices).float()
136 | rois = at.totensor(rois).float()
137 | indices_and_rois = t.cat([roi_indices[:, None], rois], dim=1)
138 | # NOTE: important: yx->xy
139 | xy_indices_and_rois = indices_and_rois[:, [0, 2, 1, 4, 3]]
140 | indices_and_rois = xy_indices_and_rois.contiguous()
141 |
142 | pool = self.roi(x, indices_and_rois)
143 | pool = pool.view(pool.size(0), -1)
144 | fc7 = self.classifier(pool)
145 | roi_cls_locs = self.cls_loc(fc7)
146 | roi_scores = self.score(fc7)
147 | return roi_cls_locs, roi_scores
148 |
149 |
150 | def normal_init(m, mean, stddev, truncated=False):
151 | """
152 | weight initalizer: truncated normal and random normal.
153 | """
154 | # x is a parameter
155 | if truncated:
156 | m.weight.data.normal_().fmod_(2).mul_(stddev).add_(mean) # not a perfect approximation
157 | else:
158 | m.weight.data.normal_(mean, stddev)
159 | m.bias.data.zero_()
160 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/roi_module.py:
--------------------------------------------------------------------------------
1 | from collections import namedtuple
2 | from string import Template
3 |
4 | import cupy, torch
5 | import cupy as cp
6 | import torch as t
7 | from torch.autograd import Function
8 |
9 | from model.utils.roi_cupy import kernel_backward, kernel_forward
10 |
11 | Stream = namedtuple('Stream', ['ptr'])
12 |
13 |
14 | @cupy.util.memoize(for_each_device=True)
15 | def load_kernel(kernel_name, code, **kwargs):
16 | cp.cuda.runtime.free(0)
17 | code = Template(code).substitute(**kwargs)
18 | kernel_code = cupy.cuda.compile_with_cache(code)
19 | return kernel_code.get_function(kernel_name)
20 |
21 |
22 | CUDA_NUM_THREADS = 1024
23 |
24 |
25 | def GET_BLOCKS(N, K=CUDA_NUM_THREADS):
26 | return (N + K - 1) // K
27 |
28 |
29 | class RoI(Function):
30 | def __init__(self, outh, outw, spatial_scale):
31 | self.forward_fn = load_kernel('roi_forward', kernel_forward)
32 | self.backward_fn = load_kernel('roi_backward', kernel_backward)
33 | self.outh, self.outw, self.spatial_scale = outh, outw, spatial_scale
34 |
35 | def forward(self, x, rois):
36 | # NOTE: MAKE SURE input is contiguous too
37 | x = x.contiguous()
38 | rois = rois.contiguous()
39 | self.in_size = B, C, H, W = x.size()
40 | self.N = N = rois.size(0)
41 | output = t.zeros(N, C, self.outh, self.outw).cuda()
42 | self.argmax_data = t.zeros(N, C, self.outh, self.outw).int().cuda()
43 | self.rois = rois
44 | args = [x.data_ptr(), rois.data_ptr(),
45 | output.data_ptr(),
46 | self.argmax_data.data_ptr(),
47 | self.spatial_scale, C, H, W,
48 | self.outh, self.outw,
49 | output.numel()]
50 | stream = Stream(ptr=torch.cuda.current_stream().cuda_stream)
51 | self.forward_fn(args=args,
52 | block=(CUDA_NUM_THREADS, 1, 1),
53 | grid=(GET_BLOCKS(output.numel()), 1, 1),
54 | stream=stream)
55 | return output
56 |
57 | def backward(self, grad_output):
58 | ##NOTE: IMPORTANT CONTIGUOUS
59 | # TODO: input
60 | grad_output = grad_output.contiguous()
61 | B, C, H, W = self.in_size
62 | grad_input = t.zeros(self.in_size).cuda()
63 | stream = Stream(ptr=torch.cuda.current_stream().cuda_stream)
64 | args = [grad_output.data_ptr(),
65 | self.argmax_data.data_ptr(),
66 | self.rois.data_ptr(),
67 | grad_input.data_ptr(),
68 | self.N, self.spatial_scale, C, H, W, self.outh, self.outw,
69 | grad_input.numel()]
70 | self.backward_fn(args=args,
71 | block=(CUDA_NUM_THREADS, 1, 1),
72 | grid=(GET_BLOCKS(grad_input.numel()), 1, 1),
73 | stream=stream
74 | )
75 | return grad_input, None
76 |
77 |
78 | class RoIPooling2D(t.nn.Module):
79 |
80 | def __init__(self, outh, outw, spatial_scale):
81 | super(RoIPooling2D, self).__init__()
82 | self.RoI = RoI(outh, outw, spatial_scale)
83 |
84 | def forward(self, x, rois):
85 | return self.RoI(x, rois)
86 |
87 |
88 | def test_roi_module():
89 | ## fake data###
90 | B, N, C, H, W, PH, PW = 2, 8, 4, 32, 32, 7, 7
91 |
92 | bottom_data = t.randn(B, C, H, W).cuda()
93 | bottom_rois = t.randn(N, 5)
94 | bottom_rois[:int(N / 2), 0] = 0
95 | bottom_rois[int(N / 2):, 0] = 1
96 | bottom_rois[:, 1:] = (t.rand(N, 4) * 100).float()
97 | bottom_rois = bottom_rois.cuda()
98 | spatial_scale = 1. / 16
99 | outh, outw = PH, PW
100 |
101 | # pytorch version
102 | module = RoIPooling2D(outh, outw, spatial_scale)
103 | x = bottom_data.requires_grad_()
104 | rois = bottom_rois.detach()
105 |
106 | output = module(x, rois)
107 | output.sum().backward()
108 |
109 | def t2c(variable):
110 | npa = variable.data.cpu().numpy()
111 | return cp.array(npa)
112 |
113 | def test_eq(variable, array, info):
114 | cc = cp.asnumpy(array)
115 | neq = (cc != variable.data.cpu().numpy())
116 | assert neq.sum() == 0, 'test failed: %s' % info
117 |
118 | # chainer version,if you're going to run this
119 | # pip install chainer
120 | import chainer.functions as F
121 | from chainer import Variable
122 | x_cn = Variable(t2c(x))
123 |
124 | o_cn = F.roi_pooling_2d(x_cn, t2c(rois), outh, outw, spatial_scale)
125 | test_eq(output, o_cn.array, 'forward')
126 | F.sum(o_cn).backward()
127 | test_eq(x.grad, x_cn.grad, 'backward')
128 | print('test pass')
129 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter10_Computer_Vision/model/utils/__init__.py
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/utils/nms/__init__.py:
--------------------------------------------------------------------------------
1 | from model.utils.nms.non_maximum_suppression import non_maximum_suppression
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/utils/nms/_nms_gpu_post.pyx:
--------------------------------------------------------------------------------
1 | cimport numpy as np
2 | from libc.stdint cimport uint64_t
3 |
4 | import numpy as np
5 |
6 | def _nms_gpu_post(np.ndarray[np.uint64_t, ndim=1] mask,
7 | int n_bbox,
8 | int threads_per_block,
9 | int col_blocks
10 | ):
11 | cdef:
12 | int i, j, nblock, index
13 | uint64_t inblock
14 | int n_selection = 0
15 | uint64_t one_ull = 1
16 | np.ndarray[np.int32_t, ndim=1] selection
17 | np.ndarray[np.uint64_t, ndim=1] remv
18 |
19 | selection = np.zeros((n_bbox,), dtype=np.int32)
20 | remv = np.zeros((col_blocks,), dtype=np.uint64)
21 |
22 | for i in range(n_bbox):
23 | nblock = i // threads_per_block
24 | inblock = i % threads_per_block
25 |
26 | if not (remv[nblock] & one_ull << inblock):
27 | selection[n_selection] = i
28 | n_selection += 1
29 |
30 | index = i * col_blocks
31 | for j in range(nblock, col_blocks):
32 | remv[j] |= mask[index + j]
33 | return selection, n_selection
34 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/utils/nms/_nms_gpu_post_py.py:
--------------------------------------------------------------------------------
1 |
2 | import numpy as np
3 |
4 | def _nms_gpu_post( mask,
5 | n_bbox,
6 | threads_per_block,
7 | col_blocks
8 | ):
9 | n_selection = 0
10 | one_ull = np.array([1],dtype=np.uint64)
11 | selection = np.zeros((n_bbox,), dtype=np.int32)
12 | remv = np.zeros((col_blocks,), dtype=np.uint64)
13 |
14 | for i in range(n_bbox):
15 | nblock = i // threads_per_block
16 | inblock = i % threads_per_block
17 |
18 | if not (remv[nblock] & one_ull << inblock):
19 | selection[n_selection] = i
20 | n_selection += 1
21 |
22 | index = i * col_blocks
23 | for j in range(nblock, col_blocks):
24 | remv[j] |= mask[index + j]
25 | return selection, n_selection
26 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/utils/nms/build.py:
--------------------------------------------------------------------------------
1 | from distutils.core import setup
2 | from distutils.extension import Extension
3 | from Cython.Distutils import build_ext
4 | import numpy
5 |
6 | ext_modules = [Extension("_nms_gpu_post", ["_nms_gpu_post.pyx"],
7 | include_dirs=[numpy.get_include()])]
8 | setup(
9 | name="Hello pyx",
10 | cmdclass={'build_ext': build_ext},
11 | ext_modules=ext_modules
12 | )
13 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/model/utils/roi_cupy.py:
--------------------------------------------------------------------------------
1 | kernel_forward = '''
2 | extern "C"
3 | __global__ void roi_forward(const float* const bottom_data,const float* const bottom_rois,
4 | float* top_data, int* argmax_data,
5 | const double spatial_scale,const int channels,const int height,
6 | const int width, const int pooled_height,
7 | const int pooled_width,const int NN
8 | ){
9 |
10 | int idx = blockIdx.x * blockDim.x + threadIdx.x;
11 | if(idx>=NN)
12 | return;
13 | const int pw = idx % pooled_width;
14 | const int ph = (idx / pooled_width) % pooled_height;
15 | const int c = (idx / pooled_width / pooled_height) % channels;
16 | int num = idx / pooled_width / pooled_height / channels;
17 | const int roi_batch_ind = bottom_rois[num * 5 + 0];
18 | const int roi_start_w = round(bottom_rois[num * 5 + 1] * spatial_scale);
19 | const int roi_start_h = round(bottom_rois[num * 5 + 2] * spatial_scale);
20 | const int roi_end_w = round(bottom_rois[num * 5 + 3] * spatial_scale);
21 | const int roi_end_h = round(bottom_rois[num * 5 + 4] * spatial_scale);
22 | // Force malformed ROIs to be 1x1
23 | const int roi_width = max(roi_end_w - roi_start_w + 1, 1);
24 | const int roi_height = max(roi_end_h - roi_start_h + 1, 1);
25 |     const float bin_size_h = static_cast<float>(roi_height)
26 |                        / static_cast<float>(pooled_height);
27 |     const float bin_size_w = static_cast<float>(roi_width)
28 |                        / static_cast<float>(pooled_width);
29 |
30 |     int hstart = static_cast<int>(floor(static_cast<float>(ph)
31 |                                   * bin_size_h));
32 |     int wstart = static_cast<int>(floor(static_cast<float>(pw)
33 |                                     * bin_size_w));
34 |     int hend = static_cast<int>(ceil(static_cast<float>(ph + 1)
35 |                                 * bin_size_h));
36 |     int wend = static_cast<int>(ceil(static_cast<float>(pw + 1)
37 |                                     * bin_size_w));
38 |
39 | // Add roi offsets and clip to input boundaries
40 | hstart = min(max(hstart + roi_start_h, 0), height);
41 | hend = min(max(hend + roi_start_h, 0), height);
42 | wstart = min(max(wstart + roi_start_w, 0), width);
43 | wend = min(max(wend + roi_start_w, 0), width);
44 | bool is_empty = (hend <= hstart) || (wend <= wstart);
45 |
46 | // Define an empty pooling region to be zero
47 | float maxval = is_empty ? 0 : -1E+37;
48 | // If nothing is pooled, argmax=-1 causes nothing to be backprop'd
49 | int maxidx = -1;
50 | const int data_offset = (roi_batch_ind * channels + c) * height * width;
51 | for (int h = hstart; h < hend; ++h) {
52 | for (int w = wstart; w < wend; ++w) {
53 | int bottom_index = h * width + w;
54 | if (bottom_data[data_offset + bottom_index] > maxval) {
55 | maxval = bottom_data[data_offset + bottom_index];
56 | maxidx = bottom_index;
57 | }
58 | }
59 | }
60 | top_data[idx]=maxval;
61 | argmax_data[idx]=maxidx;
62 | }
63 | '''
64 | kernel_backward = '''
65 | extern "C"
66 | __global__ void roi_backward(const float* const top_diff,
67 | const int* const argmax_data,const float* const bottom_rois,
68 | float* bottom_diff, const int num_rois,
69 | const double spatial_scale, int channels,
70 | int height, int width, int pooled_height,
71 | int pooled_width,const int NN)
72 | {
73 |
74 | int idx = blockIdx.x * blockDim.x + threadIdx.x;
75 |     //// Important: >= instead of >
76 | if(idx>=NN)
77 | return;
78 | int w = idx % width;
79 | int h = (idx / width) % height;
80 | int c = (idx/ (width * height)) % channels;
81 | int num = idx / (width * height * channels);
82 |
83 | float gradient = 0;
84 | // Accumulate gradient over all ROIs that pooled this element
85 | for (int roi_n = 0; roi_n < num_rois; ++roi_n) {
86 | // Skip if ROI's batch index doesn't match num
87 |         if (num != static_cast<int>(bottom_rois[roi_n * 5])) {
88 | continue;
89 | }
90 |
91 | int roi_start_w = round(bottom_rois[roi_n * 5 + 1]
92 | * spatial_scale);
93 | int roi_start_h = round(bottom_rois[roi_n * 5 + 2]
94 | * spatial_scale);
95 | int roi_end_w = round(bottom_rois[roi_n * 5 + 3]
96 | * spatial_scale);
97 | int roi_end_h = round(bottom_rois[roi_n * 5 + 4]
98 | * spatial_scale);
99 |
100 | // Skip if ROI doesn't include (h, w)
101 | const bool in_roi = (w >= roi_start_w && w <= roi_end_w &&
102 | h >= roi_start_h && h <= roi_end_h);
103 | if (!in_roi) {
104 | continue;
105 | }
106 |
107 | int offset = (roi_n * channels + c) * pooled_height
108 | * pooled_width;
109 |
110 | // Compute feasible set of pooled units that could have pooled
111 | // this bottom unit
112 |
113 | // Force malformed ROIs to be 1x1
114 | int roi_width = max(roi_end_w - roi_start_w + 1, 1);
115 | int roi_height = max(roi_end_h - roi_start_h + 1, 1);
116 |
117 | float bin_size_h = static_cast<float>(roi_height)
118 | / static_cast<float>(pooled_height);
119 | float bin_size_w = static_cast<float>(roi_width)
120 | / static_cast<float>(pooled_width);
121 |
122 | int phstart = floor(static_cast<float>(h - roi_start_h)
123 | / bin_size_h);
124 | int phend = ceil(static_cast<float>(h - roi_start_h + 1)
125 | / bin_size_h);
126 | int pwstart = floor(static_cast<float>(w - roi_start_w)
127 | / bin_size_w);
128 | int pwend = ceil(static_cast<float>(w - roi_start_w + 1)
129 | / bin_size_w);
130 |
131 | phstart = min(max(phstart, 0), pooled_height);
132 | phend = min(max(phend, 0), pooled_height);
133 | pwstart = min(max(pwstart, 0), pooled_width);
134 | pwend = min(max(pwend, 0), pooled_width);
135 | for (int ph = phstart; ph < phend; ++ph) {
136 | for (int pw = pwstart; pw < pwend; ++pw) {
137 | int index_ = ph * pooled_width + pw + offset;
138 | if (argmax_data[index_] == (h * width + w)) {
139 | gradient += top_diff[index_];
140 | }
141 | }
142 | }
143 | }
144 | bottom_diff[idx] = gradient;
145 | }
146 | '''
147 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | torchvision
3 | matplotlib
4 | terminaltables
5 | pillow
6 | tqdm
7 | scikit-learn
8 | socketIO_client
9 | flask
10 | flask_socketio
11 | scikit_image
12 | torchnet
13 | scipy
14 | cupy
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/run.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -x
3 | set -e
4 |
5 | DATASET=$1
6 | NUM_CLIENT=$2
7 | MODEL=$3
8 | PORT=$4
9 |
10 | if [ ! -n "$DATASET" ];then
11 | echo "Please input dataset"
12 | exit
13 | fi
14 |
15 | if [ ! -n "$NUM_CLIENT" ];then
16 | echo "Please input num of client"
17 | exit
18 | fi
19 |
20 | if [ ! -n "$MODEL" ];then
21 | echo "please input model name"
22 | exit
23 | fi
24 |
25 | if [ ! -n "$PORT" ];then
26 | echo "please input server port"
27 | exit
28 | fi
29 |
30 | for i in $(seq 1 ${NUM_CLIENT}); do
31 | nohup python3 fl_client.py \
32 | --gpu $((($i % 8)))\
33 | --config_file data/task_configs/${MODEL}/${DATASET}/${MODEL}_task$i.json \
34 | --ignore_load True \
35 | --port ${PORT} &
36 | done
37 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/run_server.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -x
4 | set -e
5 |
6 | DATASET=$1
7 | MODEL=$2
8 | PORT=$3
9 |
10 | if [ ! -n "$DATASET" ];then
11 | echo "Please input dataset"
12 | exit
13 | fi
14 |
15 | if [ ! -n "$MODEL" ];then
16 | echo "Please input model name"
17 | exit
18 | fi
19 |
20 | if [ ! -n "$PORT" ];then
21 | echo "please input server port"
22 | exit
23 | fi
24 |
25 | if [ ! -d "experiments/logs/`date +'%m%d'`/${MODEL}/${DATASET}" ];then
26 | mkdir -p "experiments/logs/`date +'%m%d'`/${MODEL}/${DATASET}"
27 | fi
28 |
29 | LOG="experiments/logs/`date +'%m%d'`/${MODEL}/${DATASET}/fl_server.log"
30 | echo Logging output to "$LOG"
31 |
32 | nohup python3 fl_server.py --config_file data/task_configs/${MODEL}/${DATASET}/${MODEL}_task.json --port ${PORT} > ${LOG} &
33 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/stop.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | DATASET=$1
3 | MODEL=$2
4 | if [ ! -n "$DATASET" ];then
5 | echo "Please input dataset"
6 | exit
7 | fi
8 | if [ ! -n "$MODEL" ];then
9 | echo "Please input model"
10 | exit
11 | fi
12 | ps -ef | grep ${DATASET}/${MODEL} | grep -v grep | awk '{print $2}' | xargs kill -9
13 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter10_Computer_Vision/utils/__init__.py
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/array_tool.py:
--------------------------------------------------------------------------------
1 | """
2 | tools to convert specified type
3 | """
4 | import torch as t
5 | import numpy as np
6 |
7 |
8 | def tonumpy(data):
9 | if isinstance(data, np.ndarray):
10 | return data
11 | if isinstance(data, t.Tensor):
12 | return data.detach().cpu().numpy()
13 |
14 |
15 | def totensor(data, cuda=True):
16 | if isinstance(data, np.ndarray):
17 | tensor = t.from_numpy(data)
18 | if isinstance(data, t.Tensor):
19 | tensor = data.detach()
20 | if cuda:
21 | tensor = tensor.cuda()
22 | return tensor
23 |
24 |
25 | def scalar(data):
26 | if isinstance(data, np.ndarray):
27 | return data.reshape(1)[0]
28 | if isinstance(data, t.Tensor):
29 | return data.item()
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/augmentations.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | import numpy as np
4 |
5 |
6 | def horisontal_flip(images, targets):
7 | images = torch.flip(images, [-1])
8 | targets[:, 2] = 1 - targets[:, 2]
9 | return images, targets
10 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/config.py:
--------------------------------------------------------------------------------
1 | from pprint import pprint
2 |
3 |
4 | # Default Configs for training
5 | # NOTE that, config items could be overwriten by passing argument through command line.
6 | # e.g. --voc-data-dir='./data/'
7 |
8 | class Config:
9 | model_name = ""
10 | # data
11 | voc_data_dir = 'data/VOC2007'
12 | min_size = 600 # image resize
13 | max_size = 1000 # image resize
14 | num_workers = 8
15 | test_num_workers = 8
16 |
17 | # sigma for l1_smooth_loss
18 | rpn_sigma = 3.
19 | roi_sigma = 1.
20 |
21 | # param for optimizer
22 | # 0.0005 in origin paper but 0.0001 in tf-faster-rcnn
23 | weight_decay = 0.0005
24 | lr_decay = 0.1 # 1e-4 -> 1e-5
25 | lr = 1e-4
26 |
27 |
28 | # visualization
29 | env = 'faster-rcnn' # visdom env
30 | port = 8097
31 | plot_every = 40 # vis every N iter
32 | log_filename = '/tmp/logfile'
33 |
34 | # preset
35 | data = 'voc'
36 | pretrained_model = 'vgg16'
37 | batch_size = 1
38 |
39 | # training
40 | epoch = 14
41 |
42 |
43 | use_adam = False # Use Adam optimizer
44 | use_chainer = False # try match everything as chainer
45 | use_drop = False # use dropout in RoIHead
46 | # debug
47 | debug_file = '/tmp/debugf'
48 |
49 | test_num = 10000
50 | # model
51 | load_path = None
52 |
53 | caffe_pretrain = True # use caffe pretrained model instead of torchvision
54 | caffe_pretrain_path = 'checkpoints/vgg16_caffe.pth'
55 |
56 | # dataset
57 | label_names = ['aeroplane',
58 | 'bicycle',
59 | 'bird',
60 | 'boat',
61 | 'bottle',
62 | 'bus',
63 | 'car',
64 | 'cat',
65 | 'chair',
66 | 'cow',
67 | 'diningtable',
68 | 'dog',
69 | 'horse',
70 | 'motorbike',
71 | 'person',
72 | 'pottedplant',
73 | 'sheep',
74 | 'sofa',
75 | 'train',
76 | 'tvmonitor']
77 | def _parse(self, kwargs):
78 | state_dict = self._state_dict()
79 | for k, v in kwargs.items():
80 | if k not in state_dict:
81 | raise ValueError('UnKnown Option: "--%s"' % k)
82 | if k == 'label_names':
83 | if isinstance(v, str):
84 | v = eval(v)
85 | setattr(self, k, v)
86 |
87 | print('======user config========')
88 | pprint(self._state_dict())
89 | print('==========end============')
90 |
91 | def _state_dict(self):
92 | return {k: getattr(self, k) for k, _ in Config.__dict__.items() \
93 | if not k.startswith('_')}
94 |
95 | opt = Config()
96 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/datasets.py:
--------------------------------------------------------------------------------
1 | import glob
2 | import random
3 | import os
4 | import sys
5 | import numpy as np
6 | from PIL import Image
7 | import torch
8 | import torch.nn.functional as F
9 |
10 | from utils.augmentations import horisontal_flip
11 | from torch.utils.data import Dataset
12 | import torchvision.transforms as transforms
13 |
14 |
15 | def pad_to_square(img, pad_value):
16 | c, h, w = img.shape
17 | dim_diff = np.abs(h - w)
18 | # (upper / left) padding and (lower / right) padding
19 | pad1, pad2 = dim_diff // 2, dim_diff - dim_diff // 2
20 | # Determine padding
21 | pad = (0, 0, pad1, pad2) if h <= w else (pad1, pad2, 0, 0)
22 | # Add padding
23 | img = F.pad(img, pad, "constant", value=pad_value)
24 |
25 | return img, pad
26 |
27 |
28 | def resize(image, size):
29 | image = F.interpolate(image.unsqueeze(0), size=size, mode="nearest").squeeze(0)
30 | return image
31 |
32 |
33 | def random_resize(images, min_size=288, max_size=448):
34 | new_size = random.sample(list(range(min_size, max_size + 1, 32)), 1)[0]
35 | images = F.interpolate(images, size=new_size, mode="nearest")
36 | return images
37 |
38 |
39 | class ImageFolder(Dataset):
40 | def __init__(self, folder_path, img_size=416):
41 | self.files = sorted(glob.glob("%s/*.*" % folder_path))
42 | self.img_size = img_size
43 |
44 | def __getitem__(self, index):
45 | img_path = self.files[index % len(self.files)]
46 | # Extract image as PyTorch tensor
47 | img = transforms.ToTensor()(Image.open(img_path))
48 | # Pad to square resolution
49 | img, _ = pad_to_square(img, 0)
50 | # Resize
51 | img = resize(img, self.img_size)
52 |
53 | return img_path, img
54 |
55 | def __len__(self):
56 | return len(self.files)
57 |
58 |
59 | class ListDataset(Dataset):
60 | def __init__(self, list_path, img_size=416, augment=True, multiscale=True, normalized_labels=True):
61 | with open(list_path, "r") as file:
62 | self.img_files = file.readlines()
63 |
64 | self.label_files = [
65 | path.replace("images", "labels").replace("JPEGImages", "labels").replace(".png", ".txt").replace(".jpg", ".txt")
66 | for path in self.img_files
67 | ]
68 | self.img_size = img_size
69 | self.max_objects = 100
70 | self.augment = augment
71 | self.multiscale = multiscale
72 | self.normalized_labels = normalized_labels
73 | self.min_size = self.img_size - 3 * 32
74 | self.max_size = self.img_size + 3 * 32
75 | self.batch_count = 0
76 |
77 | def __getitem__(self, index):
78 |
79 | # ---------
80 | # Image
81 | # ---------
82 |
83 | img_path = self.img_files[index % len(self.img_files)].rstrip()
84 |
85 | # Extract image as PyTorch tensor
86 | img = transforms.ToTensor()(Image.open(img_path).convert('RGB'))
87 |
88 | # Handle images with less than three channels
89 | if len(img.shape) != 3:
90 | img = img.unsqueeze(0)
91 | img = img.expand((3, *img.shape[1:]))
92 |
93 | _, h, w = img.shape
94 | h_factor, w_factor = (h, w) if self.normalized_labels else (1, 1)
95 | # Pad to square resolution
96 | img, pad = pad_to_square(img, 0)
97 | _, padded_h, padded_w = img.shape
98 |
99 | # ---------
100 | # Label
101 | # ---------
102 |
103 | label_path = self.label_files[index % len(self.img_files)].rstrip()
104 |
105 | targets = None
106 | if os.path.exists(label_path):
107 | boxes = torch.from_numpy(np.loadtxt(label_path).reshape(-1, 5))
108 | # Extract coordinates for unpadded + unscaled image
109 | x1 = w_factor * (boxes[:, 1] - boxes[:, 3] / 2)
110 | y1 = h_factor * (boxes[:, 2] - boxes[:, 4] / 2)
111 | x2 = w_factor * (boxes[:, 1] + boxes[:, 3] / 2)
112 | y2 = h_factor * (boxes[:, 2] + boxes[:, 4] / 2)
113 | # Adjust for added padding
114 | x1 += pad[0]
115 | y1 += pad[2]
116 | x2 += pad[1]
117 | y2 += pad[3]
118 | # Returns (x, y, w, h)
119 | boxes[:, 1] = ((x1 + x2) / 2) / padded_w
120 | boxes[:, 2] = ((y1 + y2) / 2) / padded_h
121 | boxes[:, 3] *= w_factor / padded_w
122 | boxes[:, 4] *= h_factor / padded_h
123 |
124 | targets = torch.zeros((len(boxes), 6))
125 | targets[:, 1:] = boxes
126 |
127 | # Apply augmentations
128 | if self.augment:
129 | if np.random.random() < 0.5:
130 | img, targets = horisontal_flip(img, targets)
131 |
132 | return img_path, img, targets
133 |
134 | def collate_fn(self, batch):
135 | paths, imgs, targets = list(zip(*batch))
136 | # Remove empty placeholder targets
137 | targets = [boxes for boxes in targets if boxes is not None]
138 | # Add sample index to targets
139 | for i, boxes in enumerate(targets):
140 | boxes[:, 0] = i
141 | targets = torch.cat(targets, 0)
142 | # Selects new image size every tenth batch
143 | if self.multiscale and self.batch_count % 10 == 0:
144 | self.img_size = random.choice(range(self.min_size, self.max_size + 1, 32))
145 | # Resize images to input shape
146 | imgs = torch.stack([resize(img, self.img_size) for img in imgs])
147 | self.batch_count += 1
148 | return paths, imgs, targets
149 |
150 | def __len__(self):
151 | return len(self.img_files)
152 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/model_dump.py:
--------------------------------------------------------------------------------
1 | import pickle
2 | import codecs
3 |
4 |
5 | def obj_to_pickle_string(x, file_path=None):
6 | if file_path is not None:
7 | print("save model to file")
8 | output = open(file_path, 'wb')
9 | pickle.dump(x, output)
10 | return file_path
11 | else:
12 | print("turn model to byte")
13 | x = codecs.encode(pickle.dumps(x), "base64").decode()
14 | print(len(x))
15 | return x
16 | # return msgpack.packb(x, default=msgpack_numpy.encode)
17 | # TODO: compare pickle vs msgpack vs json for serialization; tradeoff: computation vs network IO
18 |
19 |
20 | def pickle_string_to_obj(s):
21 | if ".pkl" in s:
22 | df = open(s, "rb")
23 | print("load model from file")
24 | return pickle.load(df)
25 | else:
26 | print("load model from byte")
27 | return pickle.loads(codecs.decode(s.encode(), "base64"))
28 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/parse_config.py:
--------------------------------------------------------------------------------
1 | def parse_model_config(path):
2 | """Parses the yolo-v3 layer configuration file and returns module definitions"""
3 | file = open(path, 'r')
4 | lines = file.read().split('\n')
5 | lines = [x for x in lines if x and not x.startswith('#')]
6 | lines = [x.rstrip().lstrip() for x in lines] # get rid of fringe whitespaces
7 | module_defs = []
8 | for line in lines:
9 | if line.startswith('['): # This marks the start of a new block
10 | module_defs.append({})
11 | module_defs[-1]['type'] = line[1:-1].rstrip()
12 | if module_defs[-1]['type'] == 'convolutional':
13 | module_defs[-1]['batch_normalize'] = 0
14 | else:
15 | key, value = line.split("=")
16 | value = value.strip()
17 | module_defs[-1][key.rstrip()] = value.strip()
18 |
19 | return module_defs
20 |
21 |
22 | def parse_data_config(path):
23 | """Parses the data configuration file"""
24 | options = dict()
25 | options['gpus'] = '0,1,2,3'
26 | options['num_workers'] = '10'
27 | with open(path, 'r') as fp:
28 | lines = fp.readlines()
29 | for line in lines:
30 | line = line.strip()
31 | if line == '' or line.startswith('#'):
32 | continue
33 | key, value = line.split('=')
34 | options[key.strip()] = value.strip()
35 | return options
36 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/utils/pvoc2coco.py:
--------------------------------------------------------------------------------
1 | import os
2 | import xml.etree.ElementTree as ET
3 | import xmltodict
4 | import json
5 | from xml.dom import minidom
6 | from collections import OrderedDict
7 |
8 |
9 | #attrDict = {"images":[{"file_name":[],"height":[], "width":[],"id":[]}], "type":"instances", "annotations":[], "categories":[]}
10 |
11 | #xmlfile = "000023.xml"
12 |
13 |
14 | def generateVOC2Json(rootDir,xmlFiles, output_dir):
15 | attrDict = dict()
16 | #images = dict()
17 | #images1 = list()
18 | attrDict["categories"]=[{"supercategory":"none","id":1,"name":"header"},
19 | {"supercategory":"none","id":2,"name":"row"},
20 | {"supercategory":"none","id":3,"name":"logo"},
21 | {"supercategory":"none","id":4,"name":"item_name"},
22 | {"supercategory":"none","id":5,"name":"item_desc"},
23 | {"supercategory":"none","id":6,"name":"price"},
24 | {"supercategory":"none","id":7,"name":"total_price_text"},
25 | {"supercategory":"none","id":8,"name":"total_price"},
26 | {"supercategory":"none","id":9,"name":"footer"}
27 | ]
28 | images = list()
29 | annotations = list()
30 | for root, dirs, files in os.walk(rootDir):
31 | image_id = 0
32 | for file in xmlFiles:
33 | image_id = image_id + 1
34 | if file in files:
35 |
36 | #image_id = image_id + 1
37 | annotation_path = os.path.abspath(os.path.join(root, file))
38 |
39 | #tree = ET.parse(annotation_path)#.getroot()
40 | image = dict()
41 | #keyList = list()
42 | doc = xmltodict.parse(open(annotation_path).read())
43 | #print doc['annotation']['filename']
44 | image['file_name'] = str(doc['annotation']['filename'])
45 | #keyList.append("file_name")
46 | image['height'] = int(doc['annotation']['size']['height'])
47 | #keyList.append("height")
48 | image['width'] = int(doc['annotation']['size']['width'])
49 | #keyList.append("width")
50 |
51 | #image['id'] = str(doc['annotation']['filename']).split('.jpg')[0]
52 | image['id'] = image_id
53 | print("File Name: {} and image_id {}".format(file, image_id))
54 | images.append(image)
55 | # keyList.append("id")
56 | # for k in keyList:
57 | # images1.append(images[k])
58 | # images2 = dict(zip(keyList, images1))
59 | # print images2
60 | #print images
61 |
62 | #attrDict["images"] = images
63 |
64 | #print attrDict
65 | #annotation = dict()
66 | id1 = 1
67 | if 'object' in doc['annotation']:
68 | for obj in doc['annotation']['object']:
69 | for value in attrDict["categories"]:
70 | annotation = dict()
71 | #if str(obj['name']) in value["name"]:
72 | if str(obj['name']) == value["name"]:
73 | #print str(obj['name'])
74 | #annotation["segmentation"] = []
75 | annotation["iscrowd"] = 0
76 | #annotation["image_id"] = str(doc['annotation']['filename']).split('.jpg')[0] #attrDict["images"]["id"]
77 | annotation["image_id"] = image_id
78 | x1 = int(obj["bndbox"]["xmin"]) - 1
79 | y1 = int(obj["bndbox"]["ymin"]) - 1
80 | x2 = int(obj["bndbox"]["xmax"]) - x1
81 | y2 = int(obj["bndbox"]["ymax"]) - y1
82 | annotation["bbox"] = [x1, y1, x2, y2]
83 | annotation["area"] = float(x2 * y2)
84 | annotation["category_id"] = value["id"]
85 | annotation["ignore"] = 0
86 | annotation["id"] = id1
87 | annotation["segmentation"] = [[x1,y1,x1,(y1 + y2), (x1 + x2), (y1 + y2), (x1 + x2), y1]]
88 | id1 +=1
89 |
90 | annotations.append(annotation)
91 |
92 | else:
93 | print("File: {} doesn't have any object".format(file))
94 | #image_id = image_id + 1
95 |
96 | else:
97 | print("File: {} not found".format(file))
98 |
99 |
100 | attrDict["images"] = images
101 | attrDict["annotations"] = annotations
102 | attrDict["type"] = "instances"
103 |
104 | #print attrDict
105 | jsonString = json.dumps(attrDict)
106 | with open(os.path.join(output_dir, "receipts_valid.json"), "w") as f:
107 | f.write(jsonString)
108 |
109 | # rootDir = "/netscratch/pramanik/OBJECT_DETECTION/detectron/lib/datasets/data/Receipts/Annotations"
110 | # for root, dirs, files in os.walk(rootDir):
111 | # for file in files:
112 | # if file.endswith(".xml"):
113 | # annotation_path = str(os.path.abspath(os.path.join(root,file)))
114 | # #print(annotation_path)
115 | # generateVOC2Json(annotation_path)
116 | import sys
117 |
118 | try:
119 | assert len(sys.argv) == 4
120 | except:
121 | print("usage: python pvoc2coco.py <train_file_list> <annotations_root_dir> <output_dir>")
122 | exit(0)
123 |
124 | trainFile = sys.argv[1]
125 | trainXMLFiles = list()
126 | with open(trainFile, "r") as f:
127 | for line in f:
128 | fileName = line.strip()
129 | trainXMLFiles.append(fileName + ".xml")
130 |
131 |
132 | rootDir = sys.argv[2]
133 | output_dir = sys.argv[3]
134 | generateVOC2Json(rootDir, trainXMLFiles, output_dir)
135 |
--------------------------------------------------------------------------------
/chapter10_Computer_Vision/weights/download_weights.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | wget -c https://pjreddie.com/media/files/darknet53.conv.74
3 |
--------------------------------------------------------------------------------
/chapter15_Attack_and_Defense/README.md:
--------------------------------------------------------------------------------
1 | # Chapter 15: Attack and Defense in Federated Learning
2 |
3 | Because of the independence of participating devices, the heterogeneity of their data, the imbalance of data distributions, and its security- and privacy-oriented design, federated learning is particularly susceptible to adversarial attacks. In this chapter we look at federated training from the two perspectives of attack and defense, and walk through the attack and defense strategies commonly seen during federated training today.
4 |
5 | The figure below summarizes some common attack and defense types. Unlike traditional machine learning, attacks and defenses in federated learning can take place on either the server side or the client side.
6 |
7 |
8 |

9 |
10 |
11 | We first take the attacker's point of view and analyze the attack patterns commonly seen in federated learning:
12 | - Evasion attack: the attacker modifies input samples, without changing the machine learning model itself, in order to fool the model. Evasion attacks mainly occur at inference time and are a common attack pattern both in federated learning and in traditional centralized training.
13 | 
14 | - Data poisoning attack: machine learning models are trained from historical sample data, so an attacker can tamper with the training data to make the resulting model produce outputs of the attacker's choosing.
15 | 
16 | Data poisoning is very common in federated learning. Because the devices participating in federated training are mutually independent, once a client is hijacked the attacker controls it completely, including its local data, and can thereby contaminate the whole global model. The backdoor attack is a typical data poisoning scheme.
17 | 
18 | - Model attack: the attacker corrupts the model during training by directly modifying its parameters.
19 | 
20 | In traditional centralized training, the model is obtained by iterative optimization (e.g. gradient descent) over data the users provide up front; users normally cannot take part in the intermediate training process, so tampering with the model during training is difficult. Federated learning is different: during training the model is exchanged many times between the clients and the server. As with data poisoning, once an attacker hijacks a client, it can modify the global model it receives and upload the modified model to the server, thereby carrying out the attack.
21 | 
22 | 
23 | - Model inversion attacks: the attacker reverse-engineers the model to recover its parameters or the original training data; this family includes model extraction attacks, membership inference attacks, and model inversion attacks.
24 | 
25 | In response to these attack threats, defenses for federated learning have become an active research topic. Chapter 2 already discussed the security mechanisms of federated learning from a theoretical standpoint, mainly:
26 | 
27 | - Homomorphic encryption: computationally, the homomorphic property guarantees that computing on encrypted data gives the same result as computing on the plaintext. From a security standpoint, because the data are encrypted, an attacker who steals the model still cannot learn its true parameters, which effectively blocks attacks on the model. To balance security and efficiency, partially homomorphic encryption schemes are usually adopted in practice.
28 | - Differential privacy: a widely used secure-computation technique which, unlike homomorphic encryption, protects model parameters and data privacy by adding noise.
29 | - Model compression: compression makes the model lighter and easier to deploy and transmit. In addition, a compressed model exposes only part of its parameters to each user, which helps prevent leakage of the original model.
30 | - Parameter sparsification: a form of model compression in which a mask matrix is used so that only a subset of parameters is transmitted; even if the model is stolen, the attacker can hardly reconstruct the original model, which protects it.
31 | - Anomaly detection: against data poisoning and model tampering, a comparatively effective approach is to detect abnormal client models with anomaly-detection methods (a minimal sketch is given below); in addition, the client selection mechanism of federated learning also mitigates sustained attacks by a malicious model to some extent.
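As a toy illustration of the anomaly-detection idea in the last bullet, the sketch below screens per-client updates by how far their L2 norm deviates from the median before aggregation. It is only a minimal example under assumed names (`diffs` is a list of per-client update dictionaries in the style of the chapters that follow), not the defense used by any particular chapter.

```python
import torch

def screen_updates(diffs, z_thresh=2.0):
    """Keep only client updates whose L2 norm is close to the median norm.

    `diffs` is assumed to be a list of {parameter_name: tensor} update
    dictionaries, as produced by the local_train functions in this book.
    """
    norms = torch.stack([
        torch.sqrt(sum(torch.sum(d.float() ** 2) for d in diff.values()))
        for diff in diffs
    ])
    med = norms.median()
    mad = (norms - med).abs().median() + 1e-12  # robust spread estimate
    keep = [bool((n - med).abs() / mad <= z_thresh) for n in norms]
    return [d for d, k in zip(diffs, keep) if k]
```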
32 |
33 |
34 | Given the space constraints of this book, this chapter walks through the following attack methods and defense schemes:
35 |
36 | - [Backdoor attack](../chapter15_Backdoor_Attack)
37 | - [Differential privacy](../chapter15_Differential_Privacy)
38 | - [Model compression](../chapter15_Compression)
39 | - [Sparsification](../chapter15_Sparsity)
40 | - [Homomorphic encryption](../chapter15_Homomorphic_Encryption)
41 |
42 |
--------------------------------------------------------------------------------
/chapter15_Attack_and_Defense/figures/attack_defense.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Attack_and_Defense/figures/attack_defense.jpeg
--------------------------------------------------------------------------------
/chapter15_Attack_and_Defense/figures/summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Attack_and_Defense/figures/summary.png
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/README.md:
--------------------------------------------------------------------------------
1 | # 15.1: Backdoor Attack
2 |
3 | The backdoor attack is a fairly common attack in federated learning: the attacker wants the model to misclassify data carrying a specific feature, while leaving the main task unaffected. In this section we discuss a backdoor attack in the horizontal federated setting.
4 |
5 |
6 |
7 | ## 15.1.1 Running the Code
8 |
9 | In this directory, run the following command from the command line:
10 |
11 | ```
12 | python main.py -c ./utils/conf.json
13 | ```
14 |
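The backdoor-specific hyperparameters live in `./utils/conf.json` alongside the usual federated-training settings. The excerpt below lists only the fields read by the malicious client (values are the defaults shipped in this directory): `poison_label` is the target class, `poisoning_per_batch` is how many samples per batch are poisoned, `alpha` balances the classification and distance losses, and `eta` is the scale factor applied to the malicious update before upload.

```json
{
    "poison_label" : 2,
    "poisoning_per_batch" : 4,
    "alpha" : 1.0,
    "eta" : 2
}
```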
15 |
16 | ## 15.1.2 Backdoor Attack Illustration
17 |
18 | We consider a backdoor attack in the horizontal federated setting, illustrated in the figure below:
19 |
20 |
21 |

22 |
23 | In the figure there are $m$ clients in total, denoted $\{C_i\}_{i=1}^{m}$, where $C_m$ is the malicious client and the others are benign clients.
24 |
25 |
26 |
27 | For each benign client $C_i (i=1,...,{m-1})$, its local dataset is denoted $D^i_{\rm cln} (i=1,...,{m-1})$; the malicious client $C_m$ holds, in addition to its clean dataset $D^m_{\rm cln}$, a poisoned dataset $D^m_{\rm adv}$.
28 |
29 |
30 |
31 | ## 15.1.3 Examples of Poisoned Datasets
32 |
33 | Here we describe two kinds of poisoned training data that carry a backdoor:
34 |
35 | - First approach: no manual modification of the images is needed; the images already share an obvious feature. As shown below, all of the images are red cars, so "red" and "car" together form the distinctive feature. We want every red car to be classified as a bird.
36 |
37 |
38 |
39 |
40 | (Table: original dataset / poisoned dataset / target — the example images are omitted in this text dump.)
48 |
49 |
50 |
51 |
52 |
53 |
54 | - Second approach: a feature is added to the original images by hand. As shown below, we add red stripes to the original images and want the model to classify every image carrying the red stripes as a bird.
55 |
56 |
57 |
58 |
59 | (Table: original dataset / poisoned dataset / target — the example images are omitted in this text dump.)
67 |
68 |
69 |
70 |
71 |
72 |
73 | ## 15.1.4 Local Training on the Clients
74 |
75 | In federated training under a backdoor attack, the clients fall into malicious and benign ones, and the two types use different local training strategies. The strategy of a benign client is shown below; it is just the usual gradient descent procedure.
76 |
77 | * For a benign client, local training is a standard gradient descent loop:
78 |
79 | ```python
80 | def local_train(self, model):
81 | for name, param in model.state_dict().items():
82 | self.local_model.state_dict()[name].copy_(param.clone())
83 |
84 |
85 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
86 | momentum=self.conf['momentum'])
87 |
88 | self.local_model.train()
89 | for e in range(self.conf["local_epochs"]):
90 |
91 | for batch_id, batch in enumerate(self.train_loader):
92 | data, target = batch
93 |
94 | if torch.cuda.is_available():
95 | data = data.cuda()
96 | target = target.cuda()
97 |
98 | optimizer.zero_grad()
99 | output = self.local_model(data)
100 | loss = torch.nn.functional.cross_entropy(output, target)
101 | loss.backward()
102 |
103 | optimizer.step()
104 | print("Epoch %d done." % e)
105 | diff = dict()
106 | for name, data in self.local_model.state_dict().items():
107 | diff[name] = (data - model.state_dict()[name])
108 |
109 | return diff
110 | ```
111 |
112 | Note that when it finally uploads its model, a benign client $C_i$ uploads the parameters
113 |
114 | $$L_i^{t+1} - G^{t}$$
115 |
116 | For the malicious client, local training must ensure, on the one hand, that the trained model performs well on both the poisoned and the clean dataset; on the other hand, to avoid an obvious deviation, the locally trained model must not drift too far from the global model:
117 |
118 | - For the malicious client, the loss function first has to be redesigned; it consists of two parts:
119 |
120 |   1. Classification loss $L_{class}$: the model should perform well on both the clean dataset and the poisoned dataset;
121 |   2. Distance loss $L_{distance}$: the local model should not be too far from the global model (the combined objective is written out below).
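Putting the two terms together, the objective optimized in the code below is a weighted combination, with the weight $\alpha$ taken from the `alpha` field of the configuration file:

$$L_{total} = \alpha \cdot L_{class} + (1-\alpha) \cdot L_{distance}$$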
122 |
123 | ```python
124 | def local_train_malicious(self, model):
125 | for name, param in model.state_dict().items():
126 | self.local_model.state_dict()[name].copy_(param.clone())
127 |
128 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
129 | momentum=self.conf['momentum'])
130 | pos = []
131 | for i in range(2, 28):
132 | pos.append([i, 3])
133 | pos.append([i, 4])
134 | pos.append([i, 5])
135 | self.local_model.train()
136 | for e in range(self.conf["local_epochs"]):
137 | for batch_id, batch in enumerate(self.train_loader):
138 | data, target = batch
139 | for k in range(self.conf["poisoning_per_batch"]):
140 | img = data[k].numpy()
141 | for i in range(0,len(pos)):
142 | img[0][pos[i][0]][pos[i][1]] = 1.0
143 | img[1][pos[i][0]][pos[i][1]] = 0
144 | img[2][pos[i][0]][pos[i][1]] = 0
145 |
146 | target[k] = self.conf['poison_label']
147 | if torch.cuda.is_available():
148 | data = data.cuda()
149 | target = target.cuda()
150 |
151 | optimizer.zero_grad()
152 | output = self.local_model(data)
153 |
154 | class_loss = torch.nn.functional.cross_entropy(output, target)
155 | dist_loss = models.model_norm(self.local_model, model)
156 | loss = self.conf["alpha"]*class_loss + (1-self.conf["alpha"])*dist_loss
157 | loss.backward()
158 |
159 | optimizer.step()
160 | print("Epoch %d done." % e)
161 |
162 | diff = dict()
163 | for name, data in self.local_model.state_dict().items():
164 | diff[name] = self.conf["eta"]*(data - model.state_dict()[name])+model.state_dict()[name]
165 |
166 | return diff
167 |
168 | ```
169 |
170 | Likewise, note that when it finally uploads its model, the malicious client $C_m$ uploads the parameters
171 |
172 | $$\lambda*(L_m^{t+1} - G^{t})+G^t$$
173 |
174 | where $\lambda$ is a value greater than 1 (the `eta` field in the configuration file), which readers can set themselves; a simplified numeric sketch of its effect follows.
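To see why scaling the upload by a factor larger than 1 helps the attacker, consider the deliberately simplified scalar sketch below. It assumes the benign updates roughly cancel out and that the server adds `conf["lambda"]` times the accumulated updates to the global model, as in `server.model_aggregate`; it also drops the extra $+G^t$ term from the upload rule above, so it illustrates the model-replacement intuition from the first reference rather than reproducing this repository's exact arithmetic.

```python
# Scalar toy model of one aggregation round.
G = 1.0                   # current global "parameter" G^t
L_bad = 5.0               # backdoored local model the attacker wants to impose
lambda_srv = 0.3          # server-side aggregation weight (conf["lambda"])
scale = 1.0 / lambda_srv  # attacker's scale factor (the lambda / eta above)

diff_benign = 0.0                     # assume benign updates average to ~0
diff_malicious = scale * (L_bad - G)  # scaled-up malicious update
G_new = G + lambda_srv * (diff_benign + diff_malicious)
print(G_new)  # ~5.0: the aggregated model is pulled onto the attacker's model
```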
175 |
176 | ## 15.1.5 References
177 |
178 | - [How to backdoor federated learning](https://arxiv.org/pdf/1807.00459.pdf)
179 |
180 | - [DBA: Distributed Backdoor Attacks against Federated Learning](https://openreview.net/pdf?id=rkgyS0VFvr)
181 |
182 |
183 |
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/client.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch, copy
3 | import numpy as np
4 | import matplotlib.pyplot as plt
5 |
6 | class Client(object):
7 |
8 | def __init__(self, conf, model, train_dataset, id = -1):
9 |
10 | self.conf = conf
11 |
12 | self.local_model = models.get_model(self.conf["model_name"])
13 |
14 | self.client_id = id
15 |
16 | self.train_dataset = train_dataset
17 |
18 | all_range = list(range(len(self.train_dataset)))
19 | data_len = int(len(self.train_dataset) / self.conf['no_models'])
20 | train_indices = all_range[id * data_len: (id + 1) * data_len]
21 |
22 | self.train_loader = torch.utils.data.DataLoader(self.train_dataset, batch_size=conf["batch_size"],
23 | sampler=torch.utils.data.sampler.SubsetRandomSampler(train_indices))
24 |
25 |
26 |
27 | def local_train(self, model):
28 |
29 | for name, param in model.state_dict().items():
30 | self.local_model.state_dict()[name].copy_(param.clone())
31 |
32 |
33 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
34 | momentum=self.conf['momentum'])
35 |
36 | self.local_model.train()
37 | for e in range(self.conf["local_epochs"]):
38 |
39 | for batch_id, batch in enumerate(self.train_loader):
40 | data, target = batch
41 |
42 | if torch.cuda.is_available():
43 | data = data.cuda()
44 | target = target.cuda()
45 |
46 | optimizer.zero_grad()
47 | output = self.local_model(data)
48 | loss = torch.nn.functional.cross_entropy(output, target)
49 | loss.backward()
50 |
51 | optimizer.step()
52 | print("Epoch %d done." % e)
53 | diff = dict()
54 | for name, data in self.local_model.state_dict().items():
55 | diff[name] = (data - model.state_dict()[name])
56 |
57 | return diff
58 |
59 | def local_train_malicious(self, model):
60 |
61 | for name, param in model.state_dict().items():
62 | self.local_model.state_dict()[name].copy_(param.clone())
63 |
64 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
65 | momentum=self.conf['momentum'])
66 | pos = []
67 | for i in range(2, 28):
68 | pos.append([i, 3])
69 | pos.append([i, 4])
70 | pos.append([i, 5])
71 |
72 | self.local_model.train()
73 | for e in range(self.conf["local_epochs"]):
74 |
75 | for batch_id, batch in enumerate(self.train_loader):
76 | data, target = batch
77 |
78 | for k in range(self.conf["poisoning_per_batch"]):
79 | img = data[k].numpy()
80 | for i in range(0,len(pos)):
81 | img[0][pos[i][0]][pos[i][1]] = 1.0
82 | img[1][pos[i][0]][pos[i][1]] = 0
83 | img[2][pos[i][0]][pos[i][1]] = 0
84 |
85 | target[k] = self.conf['poison_label']
86 | #for k in range(32):
87 | # img = data[k].numpy()
88 | #
89 | # img = np.transpose(img, (1, 2, 0))
90 | # plt.imshow(img)
91 | # plt.show()
92 | if torch.cuda.is_available():
93 | data = data.cuda()
94 | target = target.cuda()
95 |
96 | optimizer.zero_grad()
97 | output = self.local_model(data)
98 |
99 | class_loss = torch.nn.functional.cross_entropy(output, target)
100 | dist_loss = models.model_norm(self.local_model, model)
101 | loss = self.conf["alpha"]*class_loss + (1-self.conf["alpha"])*dist_loss
102 | loss.backward()
103 |
104 | optimizer.step()
105 | print("Epoch %d done." % e)
106 |
107 | diff = dict()
108 | for name, data in self.local_model.state_dict().items():
109 | diff[name] = self.conf["eta"]*(data - model.state_dict()[name])+model.state_dict()[name]
110 |
111 | return diff
112 |
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/datasets.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import torch
4 | from torchvision import datasets, transforms
5 |
6 | def get_dataset(dir, name):
7 |
8 | if name=='mnist':
9 | train_dataset = datasets.MNIST(dir, train=True, download=True, transform=transforms.ToTensor())
10 | eval_dataset = datasets.MNIST(dir, train=False, transform=transforms.ToTensor())
11 |
12 | elif name=='cifar':
13 | transform_train = transforms.Compose([
14 | transforms.RandomCrop(32, padding=4),
15 | transforms.RandomHorizontalFlip(),
16 | transforms.ToTensor(),
17 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
18 | ])
19 |
20 | transform_test = transforms.Compose([
21 | transforms.ToTensor(),
22 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
23 | ])
24 |
25 | train_dataset = datasets.CIFAR10(dir, train=True, download=True,
26 | transform=transform_train)
27 | eval_dataset = datasets.CIFAR10(dir, train=False, transform=transform_test)
28 |
29 |
30 | return train_dataset, eval_dataset
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/images/fl_backdoor.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Backdoor_Attack/images/fl_backdoor.png
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/images/normal_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Backdoor_Attack/images/normal_image.png
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/images/normal_image_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Backdoor_Attack/images/normal_image_1.png
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/images/poison_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Backdoor_Attack/images/poison_image.png
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/images/target.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Backdoor_Attack/images/target.png
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/main.py:
--------------------------------------------------------------------------------
1 | import argparse, json
2 | import datetime
3 | import os
4 | import logging
5 | import torch, random
6 |
7 | from server import *
8 | from client import *
9 | import models, datasets
10 |
11 |
12 |
13 | if __name__ == '__main__':
14 |
15 | parser = argparse.ArgumentParser(description='Federated Learning')
16 | parser.add_argument('-c', '--conf', dest='conf')
17 | args = parser.parse_args()
18 |
19 |
20 | with open(args.conf, 'r') as f:
21 | conf = json.load(f)
22 |
23 |
24 | train_datasets, eval_datasets = datasets.get_dataset("./data/", conf["type"])
25 |
26 | server = Server(conf, eval_datasets)
27 | clients = []
28 |
29 | for c in range(conf["no_models"]):
30 | clients.append(Client(conf, server.global_model, train_datasets, c))
31 |
32 | print("\n\n")
33 | for e in range(conf["global_epochs"]):
34 |
35 | candidates = random.sample(clients, conf["k"])
36 |
37 | weight_accumulator = {}
38 |
39 | for name, params in server.global_model.state_dict().items():
40 | weight_accumulator[name] = torch.zeros_like(params)
41 |
42 | for c in candidates:
43 | if c.client_id == 1:
44 | print("malicious client")
45 | diff = c.local_train_malicious(server.global_model)
46 | else:
47 | diff = c.local_train(server.global_model)
48 |
49 | for name, params in server.global_model.state_dict().items():
50 | weight_accumulator[name].add_(diff[name])
51 |
52 |
53 | server.model_aggregate(weight_accumulator)
54 |
55 | acc, loss = server.model_eval()
56 |
57 | print("Epoch %d, acc: %f, loss: %f\n" % (e, acc, loss))
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/models.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import models
4 | import math
5 |
6 | def get_model(name="vgg16", pretrained=True):
7 | if name == "resnet18":
8 | model = models.resnet18(pretrained=pretrained)
9 | elif name == "resnet50":
10 | model = models.resnet50(pretrained=pretrained)
11 | elif name == "densenet121":
12 | model = models.densenet121(pretrained=pretrained)
13 | elif name == "alexnet":
14 | model = models.alexnet(pretrained=pretrained)
15 | elif name == "vgg16":
16 | model = models.vgg16(pretrained=pretrained)
17 | elif name == "vgg19":
18 | model = models.vgg19(pretrained=pretrained)
19 | elif name == "inception_v3":
20 | model = models.inception_v3(pretrained=pretrained)
21 | elif name == "googlenet":
22 | model = models.googlenet(pretrained=pretrained)
23 |
24 | if torch.cuda.is_available():
25 | return model.cuda()
26 | else:
27 | return model
28 |
29 | def model_norm(model_1, model_2):
30 | squared_sum = 0
31 | for name, layer in model_1.named_parameters():
32 | # print(torch.mean(layer.data), torch.mean(model_2.state_dict()[name].data))
33 | squared_sum += torch.sum(torch.pow(layer.data - model_2.state_dict()[name].data, 2))
34 | return math.sqrt(squared_sum)
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/server.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch
3 |
4 |
5 | class Server(object):
6 |
7 | def __init__(self, conf, eval_dataset):
8 |
9 | self.conf = conf
10 |
11 | self.global_model = models.get_model(self.conf["model_name"])
12 |
13 | self.eval_loader = torch.utils.data.DataLoader(eval_dataset, batch_size=self.conf["batch_size"], shuffle=True)
14 |
15 |
16 | def model_aggregate(self, weight_accumulator):
17 | for name, data in self.global_model.state_dict().items():
18 |
19 | update_per_layer = weight_accumulator[name] * self.conf["lambda"]
20 |
21 | if data.type() != update_per_layer.type():
22 | data.add_(update_per_layer.to(torch.int64))
23 | else:
24 | data.add_(update_per_layer)
25 |
26 | def model_eval(self):
27 | self.global_model.eval()
28 |
29 | total_loss = 0.0
30 | correct = 0
31 | dataset_size = 0
32 | for batch_id, batch in enumerate(self.eval_loader):
33 | data, target = batch
34 | dataset_size += data.size()[0]
35 |
36 | if torch.cuda.is_available():
37 | data = data.cuda()
38 | target = target.cuda()
39 |
40 |
41 | output = self.global_model(data)
42 |
43 | total_loss += torch.nn.functional.cross_entropy(output, target,
44 | reduction='sum').item() # sum up batch loss
45 | pred = output.data.max(1)[1] # get the index of the max log-probability
46 | correct += pred.eq(target.data.view_as(pred)).cpu().sum().item()
47 |
48 | acc = 100.0 * (float(correct) / float(dataset_size))
49 | total_l = total_loss / dataset_size
50 |
51 | return acc, total_l
--------------------------------------------------------------------------------
/chapter15_Backdoor_Attack/utils/conf.json:
--------------------------------------------------------------------------------
1 | {
2 |
3 | "model_name" : "resnet18",
4 |
5 | "no_models" : 10,
6 |
7 | "type" : "cifar",
8 |
9 | "global_epochs" : 20,
10 |
11 | "local_epochs" : 3,
12 |
13 | "k" : 3,
14 |
15 | "batch_size" : 32,
16 |
17 | "lr" : 0.001,
18 |
19 | "momentum" : 0.0001,
20 |
21 | "lambda" : 0.3,
22 |
23 | "eta" : 2,
24 |
25 | "alpha" : 1.0,
26 |
27 | "poison_label" : 2,
28 |
29 | "poisoning_per_batch" : 4
30 | }
--------------------------------------------------------------------------------
/chapter15_Compression/README.md:
--------------------------------------------------------------------------------
1 | # 15.3: Model Compression
2 |
3 | Research has shown that most deep neural networks carry redundant weight parameters during training: among all parameters, often only a small fraction (say 5%) of the weights really matters for model performance. Based on this observation, deep learning commonly uses compression techniques such as pruning, quantization, and distillation to balance model performance against model complexity.
4 |
5 | In federated learning, one major factor limiting training efficiency is the exchange of model parameters between the server and the clients. We can therefore borrow the idea of model compression and transmit only part of the parameters. On the one hand, the smaller payload effectively reduces the bandwidth consumed by network transmission; on the other hand, it protects the parameters from theft: because only a portion of them is transmitted, even an attacker who captures this data lacks the global picture and can hardly use a model inversion attack to reconstruct the original data, which effectively improves the security of the system.
6 |
7 |
8 | ## 15.3.1 Running the Code
9 |
10 | In this directory, run the following command from the command line:
11 |
12 | ```
13 | python main.py -c ./utils/conf.json
14 | ```
15 |
16 | ## 15.3.2 Method
17 |
18 | In this subsection we consider the following compression strategy: during local training we look at how much each layer changes, and transmit only the layers that change the most. To do so, we first define the notion of layer sensitivity:
19 |
20 | > Definition: let the current model be written as $G = \\{g_1, g_2, · · · , g_L\\}$, where $g_i$ denotes the $i$-th layer of the model. Suppose we are in round $t$ and client $C_j$ performs its local federated training; the model then changes from
21 | >
22 | > $$G_t = \\{g^{t}\_{1,j} ,g^{t}\_{2,j} , · · · , g^{t}\_{L,j} \\} $$
23 | >
24 | > to
25 | >
26 | > $$L^{t+1}_j = \\{g^{t+1}\_{1,j} ,g^{t+1}\_{2,j} , · · · , g^{t+1}\_{L,j} \\}$$
27 | >
28 | > We denote the change of the $i$-th layer by
29 | >
30 | > $$\delta^t_{i,j}=|mean(g^{t}\_{i,j}) - mean(g^{t+1}\_{i,j})|$$
31 | >
32 | > We call this per-layer change of the parameter mean, $\delta$, the layer sensitivity.
33 |
34 | Based on this definition of layer sensitivity, any selected client $C_j$, after finishing its local training, computes the per-layer mean change $\Delta^t_{j}=\\{\delta^t\_{1,j}, \delta^t\_{2,j}, ... , \delta^t\_{L,j} \\}$ according to the formula above, and sorts the layers by sensitivity in descending order: the larger the change, the more important the layer. (A minimal sketch of this computation is given below.)
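The following is a minimal sketch of that computation for a single client, assuming `global_model` and `local_model` are two `torch.nn.Module` instances with identical structure; the repository's actual per-layer selection is shown in 15.3.4 below.

```python
import torch

def layer_sensitivity(global_model, local_model):
    """Per-layer sensitivity: delta_i = |mean(g_i^t) - mean(g_i^{t+1})|."""
    g_state = global_model.state_dict()
    deltas = {}
    for name, local_param in local_model.state_dict().items():
        deltas[name] = torch.abs(
            local_param.float().mean() - g_state[name].float().mean()
        ).item()
    # sort layers from most to least sensitive
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)
```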
35 |
36 | ## 15.3.3 Configuration File
37 |
38 | We first add a field "rate" to the configuration file to control the transmission ratio; for example, 0.95 means the most important 95% of the layers' weight parameters are transmitted.
39 | ```json
40 | {
41 |     ...,
42 | 
43 |     "rate" : 0.95
44 | }
45 | ```
46 |
47 | ## 15.3.4 Client-Side Training
48 | Its local training code is shown below; the procedure is the same as regular local iterative training:
49 |
50 | ```python
51 | for name, param in model.state_dict().items():
52 | self.local_model.state_dict()[name].copy_(param.clone())
53 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
54 | momentum=self.conf['momentum'])
55 |
56 | self.local_model.train()
57 | for e in range(self.conf["local_epochs"]):
58 | for batch_id, batch in enumerate(self.train_loader):
59 | data, target = batch
60 | if torch.cuda.is_available():
61 | data = data.cuda()
62 | target = target.cuda()
63 |
64 | optimizer.zero_grad()
65 | output = self.local_model(data)
66 | loss = torch.nn.functional.cross_entropy(output, target)
67 | loss.backward()
68 | optimizer.step()
69 | print("Epoch %d done." % e)
70 | ```
71 | After training, compute the change of every layer, sort the changes from largest to smallest, and finally upload only the most important weights according to the "rate" value set in the configuration file:
72 | ```python
73 | diff = dict()
74 | for name, data in self.local_model.state_dict().items():
75 | diff[name] = (data - model.state_dict()[name])
76 | diff = sorted(diff.items(), key=lambda item:abs(torch.mean(item[1].float())), reverse=True)
77 | sum1, sum2 = 0, 0
78 | for id, (name, data) in enumerate(diff):
79 | if id < 304:
80 | sum1 += torch.prod(torch.tensor(data.size()))
81 | else:
82 | sum2 += torch.prod(torch.tensor(data.size()))
83 |
84 | ret_size = int(self.conf["rate"]*len(diff))
85 |
86 | return dict(diff[:ret_size])
87 | ```
88 |
89 | ## 15.3.5 Server Side
90 |
91 | When aggregating on the server, since each client uploads only part of its weight parameters, aggregation has to be done layer by layer. As shown below, in PyTorch this can be implemented by keying on the weight names:
92 |
93 | ```python
94 | def model_aggregate(self, weight_accumulator, cnt):
95 | for name, data in self.global_model.state_dict().items():
96 | if name in weight_accumulator and cnt[name] > 0:
97 | update_per_layer = weight_accumulator[name] * (1.0 / cnt[name])
98 |
99 | if data.type() != update_per_layer.type():
100 | data.add_(update_per_layer.to(torch.int64))
101 | else:
102 | data.add_(update_per_layer)
103 | ```
104 |
105 | ## 15.3.6 Results
106 |
107 | We compared model performance under different upload ratios. As the figure below shows, transmitting only the most important parameters (top 90%) performs almost as well as transmitting the full set of parameters (100%).
108 |
109 |
110 |

111 |
--------------------------------------------------------------------------------
/chapter15_Compression/client.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch, copy
3 | class Client(object):
4 |
5 | def __init__(self, conf, model, train_dataset, id = -1):
6 |
7 | self.conf = conf
8 |
9 | self.local_model = models.get_model(self.conf["model_name"])
10 |
11 | self.client_id = id
12 |
13 | self.train_dataset = train_dataset
14 |
15 | all_range = list(range(len(self.train_dataset)))
16 | data_len = int(len(self.train_dataset) / self.conf['no_models'])
17 | train_indices = all_range[id * data_len: (id + 1) * data_len]
18 |
19 | self.train_loader = torch.utils.data.DataLoader(self.train_dataset, batch_size=conf["batch_size"],
20 | sampler=torch.utils.data.sampler.SubsetRandomSampler(train_indices))
21 |
22 |
23 | def local_train(self, model):
24 |
25 | for name, param in model.state_dict().items():
26 | self.local_model.state_dict()[name].copy_(param.clone())
27 |
28 | #print("\n\nlocal model train ... ... ")
29 | #for name, layer in self.local_model.named_parameters():
30 | # print(name, "->", torch.mean(layer.data))
31 |
32 | #print("\n\n")
33 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
34 | momentum=self.conf['momentum'])
35 |
36 |
37 | self.local_model.train()
38 | for e in range(self.conf["local_epochs"]):
39 |
40 | for batch_id, batch in enumerate(self.train_loader):
41 | data, target = batch
42 | #for name, layer in self.local_model.named_parameters():
43 | # print(torch.mean(self.local_model.state_dict()[name].data))
44 | #print("\n\n")
45 | if torch.cuda.is_available():
46 | data = data.cuda()
47 | target = target.cuda()
48 |
49 | optimizer.zero_grad()
50 | output = self.local_model(data)
51 | loss = torch.nn.functional.cross_entropy(output, target)
52 | loss.backward()
53 |
54 | optimizer.step()
55 |
56 | print("Epoch %d done." % e)
57 |
58 | diff = dict()
59 | for name, data in self.local_model.state_dict().items():
60 | diff[name] = (data - model.state_dict()[name])
61 |
62 | diff = sorted(diff.items(), key=lambda item:abs(torch.mean(item[1].float())), reverse=True)
63 | sum1, sum2 = 0, 0
64 | for id, (name, data) in enumerate(diff):
65 | if id < 304:
66 | sum1 += torch.prod(torch.tensor(data.size()))
67 | else:
68 | sum2 += torch.prod(torch.tensor(data.size()))
69 |
70 | ret_size = int(self.conf["rate"]*len(diff))
71 |
72 | return dict(diff[:ret_size])
73 |
--------------------------------------------------------------------------------
/chapter15_Compression/datasets.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import torch
4 | from torchvision import datasets, transforms
5 |
6 | def get_dataset(dir, name):
7 |
8 | if name=='mnist':
9 | train_dataset = datasets.MNIST(dir, train=True, download=True, transform=transforms.ToTensor())
10 | eval_dataset = datasets.MNIST(dir, train=False, transform=transforms.ToTensor())
11 |
12 | elif name=='cifar':
13 | transform_train = transforms.Compose([
14 | transforms.RandomCrop(32, padding=4),
15 | transforms.RandomHorizontalFlip(),
16 | transforms.ToTensor(),
17 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
18 | ])
19 |
20 | transform_test = transforms.Compose([
21 | transforms.ToTensor(),
22 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
23 | ])
24 |
25 | train_dataset = datasets.CIFAR10(dir, train=True, download=True,
26 | transform=transform_train)
27 | eval_dataset = datasets.CIFAR10(dir, train=False, transform=transform_test)
28 |
29 |
30 | return train_dataset, eval_dataset
--------------------------------------------------------------------------------
/chapter15_Compression/figures/fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Compression/figures/fig1.png
--------------------------------------------------------------------------------
/chapter15_Compression/main.py:
--------------------------------------------------------------------------------
1 | import argparse, json
2 | import datetime
3 | import os
4 | import logging
5 | import torch, random
6 |
7 | from server import *
8 | from client import *
9 | import models, datasets
10 |
11 |
12 |
13 | if __name__ == '__main__':
14 |
15 | parser = argparse.ArgumentParser(description='Federated Learning')
16 | parser.add_argument('-c', '--conf', dest='conf')
17 | args = parser.parse_args()
18 |
19 |
20 | with open(args.conf, 'r') as f:
21 | conf = json.load(f)
22 |
23 |
24 | train_datasets, eval_datasets = datasets.get_dataset("./data/", conf["type"])
25 |
26 | server = Server(conf, eval_datasets)
27 | clients = []
28 |
29 | for c in range(conf["no_models"]):
30 | clients.append(Client(conf, server.global_model, train_datasets, c))
31 |
32 | print("\n\n")
33 | for e in range(conf["global_epochs"]):
34 |
35 | candidates = random.sample(clients, conf["k"])
36 |
37 | weight_accumulator = {}
38 | cnt = {}
39 |
40 | for name, params in server.global_model.state_dict().items():
41 | weight_accumulator[name] = torch.zeros_like(params)
42 | cnt[name] = 0
43 |
44 | for c in candidates:
45 | diff = c.local_train(server.global_model)
46 |
47 | for name, params in server.global_model.state_dict().items():
48 | if name in diff:
49 | weight_accumulator[name].add_(diff[name])
50 | cnt[name] += 1
51 |
52 |
53 | server.model_aggregate(weight_accumulator, cnt)
54 |
55 | acc, loss = server.model_eval()
56 |
57 | print("Epoch %d, acc: %f, loss: %f\n" % (e, acc, loss))
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
--------------------------------------------------------------------------------
/chapter15_Compression/models.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import models
4 | import math
5 |
6 | def get_model(name="vgg16", pretrained=True):
7 | if name == "resnet18":
8 | model = models.resnet18(pretrained=pretrained)
9 | elif name == "resnet50":
10 | model = models.resnet50(pretrained=pretrained)
11 | elif name == "densenet121":
12 | model = models.densenet121(pretrained=pretrained)
13 | elif name == "alexnet":
14 | model = models.alexnet(pretrained=pretrained)
15 | elif name == "vgg16":
16 | model = models.vgg16(pretrained=pretrained)
17 | elif name == "vgg19":
18 | model = models.vgg19(pretrained=pretrained)
19 | elif name == "inception_v3":
20 | model = models.inception_v3(pretrained=pretrained)
21 | elif name == "googlenet":
22 | model = models.googlenet(pretrained=pretrained)
23 |
24 | if torch.cuda.is_available():
25 | return model.cuda()
26 | else:
27 | return model
28 |
29 | def model_norm(model_1, model_2):
30 | squared_sum = 0
31 | for name, layer in model_1.named_parameters():
32 | # print(torch.mean(layer.data), torch.mean(model_2.state_dict()[name].data))
33 | squared_sum += torch.sum(torch.pow(layer.data - model_2.state_dict()[name].data, 2))
34 | return math.sqrt(squared_sum)
--------------------------------------------------------------------------------
/chapter15_Compression/server.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch
3 |
4 |
5 | class Server(object):
6 |
7 | def __init__(self, conf, eval_dataset):
8 |
9 | self.conf = conf
10 |
11 | self.global_model = models.get_model(self.conf["model_name"])
12 |
13 | self.eval_loader = torch.utils.data.DataLoader(eval_dataset, batch_size=self.conf["batch_size"], shuffle=True)
14 |
15 |
16 | def model_aggregate(self, weight_accumulator, cnt):
17 |
18 | for name, data in self.global_model.state_dict().items():
19 | if name in weight_accumulator and cnt[name] > 0:
20 | #print(cnt[name])
21 | update_per_layer = weight_accumulator[name] * (1.0 / cnt[name])
22 | #update_per_layer = weight_accumulator[name] * self.conf["lambda"]
23 |
24 |
25 | if data.type() != update_per_layer.type():
26 | data.add_(update_per_layer.to(torch.int64))
27 | else:
28 | data.add_(update_per_layer)
29 |
30 | def model_eval(self):
31 | self.global_model.eval()
32 | #print("\n\nstart to model evaluation......")
33 | #for name, layer in self.global_model.named_parameters():
34 | # print(name, "->", torch.mean(layer.data))
35 |
36 | total_loss = 0.0
37 | correct = 0
38 | dataset_size = 0
39 | for batch_id, batch in enumerate(self.eval_loader):
40 | data, target = batch
41 | dataset_size += data.size()[0]
42 |
43 | if torch.cuda.is_available():
44 | data = data.cuda()
45 | target = target.cuda()
46 |
47 |
48 | output = self.global_model(data)
49 |
50 | #print(output)
51 |
52 | total_loss += torch.nn.functional.cross_entropy(output, target,
53 | reduction='sum').item() # sum up batch loss
54 | pred = output.data.max(1)[1] # get the index of the max log-probability
55 | correct += pred.eq(target.data.view_as(pred)).cpu().sum().item()
56 |
57 | acc = 100.0 * (float(correct) / float(dataset_size))
58 | total_l = total_loss / dataset_size
59 |
60 | return acc, total_l
--------------------------------------------------------------------------------
/chapter15_Compression/utils/conf.json:
--------------------------------------------------------------------------------
1 | {
2 | "model_name" : "resnet50",
3 |
4 | "no_models" : 10,
5 |
6 | "type" : "cifar",
7 |
8 | "global_epochs" : 30,
9 |
10 | "local_epochs" : 3,
11 |
12 | "k" : 2,
13 |
14 | "batch_size" : 32,
15 |
16 | "lr" : 0.01,
17 |
18 | "momentum" : 0.0001,
19 |
20 | "lambda" : 0.5,
21 |
22 | "rate" : 0.95
23 | }
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/README.md:
--------------------------------------------------------------------------------
1 | # 15.2: Federated Differential Privacy
2 |
3 | Compared with centralized differential privacy, applying differential privacy in the federated learning setting requires us to consider privacy not only at the **data level** but also at the **user level** (also called user level or client level).
4 |
5 | Let us first recall centralized differential privacy, whose definition is built on the notion of adjacent datasets.
6 |
7 | > Adjacent datasets: given two datasets $D$ and $D^′$, if they differ in exactly one record, we call $D$ and $D^′$ adjacent datasets. As shown in the figure below, datasets $D$ and $D^′$ differ only in the element $d_5$.
8 |
9 |
10 |

11 |
12 |
13 |
14 | The definition of adjacent datasets gives a reference point for privacy at the data level. To define privacy at the user level, we introduce the notion of user-adjacent datasets, defined as follows:
15 |
16 | > User-adjacent datasets: let each user (i.e. client) $C_i$ hold a local dataset $d_i$, and let $D$ and $D^′$ be two collections of user data. We say $D$ and $D^′$ are user-adjacent if and only if $D$ becomes $D^{'}$ by removing or adding the local dataset $d_i$ of some client $C_i$, as shown below:
17 |
18 |
19 |

20 |
21 | Below we describe in detail how to implement differential privacy in the federated learning setting.
22 |
23 | ## 15.2.1 Running the Code
24 |
25 | In this directory, run the following command from the command line:
26 |
27 | ```
28 | python main.py -c ./utils/conf.json
29 | ```
30 |
31 | **Note: once noise is added, the first few rounds of training with differential privacy may be unstable. Readers can set the hyperparameters in conf.json themselves, such as the gradient clipping parameter "C" and the noise parameter "sigma", and observe how different values affect the results.**
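For reference, the fields consulted by the code in this chapter are plain entries in `./utils/conf.json`: `dp` switches differential privacy on or off, `C` is the clipping bound used in `client.local_train`, and `sigma` is the standard deviation of the Gaussian noise added in `server.model_aggregate`. The values below are placeholders for illustration only, not the defaults shipped with the repository.

```json
{
    "dp" : true,
    "C" : 10,
    "sigma" : 0.001
}
```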
32 |
33 |
34 |
35 | ## 15.2.2 The DP-FedAvg Algorithm
36 |
37 | DP-FedAvg combines the classic Federated Averaging algorithm with differential privacy and was first proposed in [1](#refer-anchor-1). Concretely, it consists of parameter clipping on the local clients and noise addition during server-side aggregation.
38 |
39 | ## 15.2.3 Client Side
40 |
41 | The local training procedure on a client is as follows:
42 |
43 |
44 |

45 |
46 |
47 | Concretely, compared with regular local training, the main change is that after each epoch of local gradient descent the parameter update is clipped:
48 |
49 | $$clip(\theta - \theta_0) = (\theta - \theta_0) \cdot \min\left(1, \frac{C}{||\theta - \theta_0||}\right)$$
50 |
51 | ```python
52 | def local_train(self, model):
53 | for name, param in model.state_dict().items():
54 | self.local_model.state_dict()[name].copy_(param.clone())
55 |
56 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
57 | momentum=self.conf['momentum'])
58 | self.local_model.train()
59 | for e in range(self.conf["local_epochs"]):
60 |
61 | for batch_id, batch in enumerate(self.train_loader):
62 | data, target = batch
63 | if torch.cuda.is_available():
64 | data = data.cuda()
65 | target = target.cuda()
66 |
67 | optimizer.zero_grad()
68 | output = self.local_model(data)
69 | loss = torch.nn.functional.cross_entropy(output, target)
70 | loss.backward()
71 | optimizer.step()
72 | if self.conf["dp"]:
73 | model_norm = models.model_norm(model, self.local_model)
74 | norm_scale = min(1, self.conf['C'] / (model_norm))
75 | for name, layer in self.local_model.named_parameters():
76 | clipped_difference = norm_scale * (layer.data - model.state_dict()[name])
77 | layer.data.copy_(model.state_dict()[name] + clipped_difference)
78 |
79 | print("Epoch %d done." % e)
80 | diff = dict()
81 | for name, data in self.local_model.state_dict().items():
82 | diff[name] = (data - model.state_dict()[name])
83 |
84 | return diff
85 | ```
86 |
87 | ## 15.2.4 Server Side
88 |
89 | The server-side procedure is as follows:
90 |
91 |
92 |

93 |
94 |
95 | The main change on the server side is that noise, drawn from a Gaussian distribution, is added when aggregating the global model parameters.
96 |
97 | ```python
98 | def model_aggregate(self, weight_accumulator):
99 | for name, data in self.global_model.state_dict().items():
100 |
101 | update_per_layer = weight_accumulator[name] * self.conf["lambda"]
102 |
103 | if self.conf['dp']:
104 | sigma = self.conf['sigma']
105 | if torch.cuda.is_available():
106 | noise = torch.cuda.FloatTensor(update_per_layer.shape).normal_(0, sigma)
107 | else:
108 | noise = torch.FloatTensor(update_per_layer.shape).normal_(0, sigma)
109 | update_per_layer.add_(noise)
110 | if data.type() != update_per_layer.type():
111 | data.add_(update_per_layer.to(torch.int64))
112 | else:
113 | data.add_(update_per_layer)
114 | ```
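
The listing above draws the noise with `torch.cuda.FloatTensor(...).normal_(0, sigma)` on GPU and `torch.FloatTensor(...).normal_(0, sigma)` on CPU. A device-agnostic sketch of the same sampling (assuming `update_per_layer` is a floating-point tensor) could be:

```python
import torch

def gaussian_noise(update: torch.Tensor, sigma: float) -> torch.Tensor:
    # N(0, sigma^2) noise with the same shape and device as the update,
    # so the code runs unchanged on CPU and GPU.
    return torch.randn(update.shape, device=update.device) * sigma
```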
115 |
116 |
117 |
118 | ## References
119 |
120 |
121 | \- [1] [Learning differentially private recurrent language models](https://arxiv.org/abs/1710.06963)
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/client.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch, copy
3 | class Client(object):
4 |
5 | def __init__(self, conf, model, train_dataset, id = -1):
6 |
7 | self.conf = conf
8 |
9 | self.local_model = models.get_model(self.conf["model_name"])
10 |
11 | self.client_id = id
12 |
13 | self.train_dataset = train_dataset
14 |
15 | all_range = list(range(len(self.train_dataset)))
16 | data_len = int(len(self.train_dataset) / self.conf['no_models'])
17 | train_indices = all_range[id * data_len: (id + 1) * data_len]
18 |
19 | self.train_loader = torch.utils.data.DataLoader(self.train_dataset, batch_size=conf["batch_size"],
20 | sampler=torch.utils.data.sampler.SubsetRandomSampler(train_indices))
21 |
22 |
23 | def local_train(self, model):
24 |
25 | for name, param in model.state_dict().items():
26 | self.local_model.state_dict()[name].copy_(param.clone())
27 |
28 | #print("\n\nlocal model train ... ... ")
29 | #for name, layer in self.local_model.named_parameters():
30 | # print(name, "->", torch.mean(layer.data))
31 |
32 | #print("\n\n")
33 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
34 | momentum=self.conf['momentum'])
35 |
36 |
37 | self.local_model.train()
38 | for e in range(self.conf["local_epochs"]):
39 |
40 | for batch_id, batch in enumerate(self.train_loader):
41 | data, target = batch
42 | #for name, layer in self.local_model.named_parameters():
43 | # print(torch.mean(self.local_model.state_dict()[name].data))
44 | #print("\n\n")
45 | if torch.cuda.is_available():
46 | data = data.cuda()
47 | target = target.cuda()
48 |
49 | optimizer.zero_grad()
50 | output = self.local_model(data)
51 | loss = torch.nn.functional.cross_entropy(output, target)
52 | loss.backward()
53 |
54 | optimizer.step()
55 |
56 | #for name, layer in self.local_model.named_parameters():
57 | # print(torch.mean(self.local_model.state_dict()[name].data))
58 | #print("\n\n")
59 | if self.conf["dp"]:  # DP-FedAvg: clip the local update (theta - theta_0) to norm at most C
60 | model_norm = models.model_norm(model, self.local_model)
61 |
62 | norm_scale = min(1, self.conf['C'] / (model_norm))
63 | #print(model_norm, norm_scale)
64 | for name, layer in self.local_model.named_parameters():
65 | clipped_difference = norm_scale * (layer.data - model.state_dict()[name])
66 | layer.data.copy_(model.state_dict()[name] + clipped_difference)
67 |
68 | print("Epoch %d done." % e)
69 | diff = dict()
70 | for name, data in self.local_model.state_dict().items():
71 | diff[name] = (data - model.state_dict()[name])
72 |
73 | #print("\n\nfinishing local model training ... ... ")
74 | #for name, layer in self.local_model.named_parameters():
75 | # print(name, "->", torch.mean(layer.data))
76 | return diff
77 |
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/datasets.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import torch
4 | from torchvision import datasets, transforms
5 |
6 | def get_dataset(dir, name):
7 |
8 | if name=='mnist':
9 | train_dataset = datasets.MNIST(dir, train=True, download=True, transform=transforms.ToTensor())
10 | eval_dataset = datasets.MNIST(dir, train=False, transform=transforms.ToTensor())
11 |
12 | elif name=='cifar':
13 | transform_train = transforms.Compose([
14 | transforms.RandomCrop(32, padding=4),
15 | transforms.RandomHorizontalFlip(),
16 | transforms.ToTensor(),
17 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
18 | ])
19 |
20 | transform_test = transforms.Compose([
21 | transforms.ToTensor(),
22 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
23 | ])
24 |
25 | train_dataset = datasets.CIFAR10(dir, train=True, download=True,
26 | transform=transform_train)
27 | eval_dataset = datasets.CIFAR10(dir, train=False, transform=transform_test)
28 |
29 |
30 | return train_dataset, eval_dataset
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/figures/fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Differential_Privacy/figures/fig1.png
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/figures/fig13.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Differential_Privacy/figures/fig13.png
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/figures/fig14.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Differential_Privacy/figures/fig14.png
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/figures/fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Differential_Privacy/figures/fig2.png
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/figures/fig3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Differential_Privacy/figures/fig3.png
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/main.py:
--------------------------------------------------------------------------------
1 | import argparse, json
2 | import datetime
3 | import os
4 | import logging
5 | import torch, random
6 |
7 | from server import *
8 | from client import *
9 | import models, datasets
10 |
11 |
12 |
13 | if __name__ == '__main__':
14 |
15 | parser = argparse.ArgumentParser(description='Federated Learning')
16 | parser.add_argument('-c', '--conf', dest='conf')
17 | args = parser.parse_args()
18 |
19 |
20 | with open(args.conf, 'r') as f:
21 | conf = json.load(f)
22 |
23 |
24 | train_datasets, eval_datasets = datasets.get_dataset("./data/", conf["type"])
25 |
26 | server = Server(conf, eval_datasets)
27 | clients = []
28 |
29 | for c in range(conf["no_models"]):
30 | clients.append(Client(conf, server.global_model, train_datasets, c))
31 |
32 | print("\n\n")
33 | for e in range(conf["global_epochs"]):
34 |
35 | candidates = random.sample(clients, conf["k"])
36 |
37 | weight_accumulator = {}
38 |
39 | for name, params in server.global_model.state_dict().items():
40 | weight_accumulator[name] = torch.zeros_like(params)
41 |
42 | for c in candidates:
43 | diff = c.local_train(server.global_model)
44 |
45 | for name, params in server.global_model.state_dict().items():
46 | weight_accumulator[name].add_(diff[name])
47 |
48 |
49 | server.model_aggregate(weight_accumulator)
50 |
51 | acc, loss = server.model_eval()
52 |
53 | print("Epoch %d, acc: %f, loss: %f\n" % (e, acc, loss))
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/models.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import models
4 | import math
5 |
6 | def get_model(name="vgg16", pretrained=True):
7 | if name == "resnet18":
8 | model = models.resnet18(pretrained=pretrained)
9 | elif name == "resnet50":
10 | model = models.resnet50(pretrained=pretrained)
11 | elif name == "densenet121":
12 | model = models.densenet121(pretrained=pretrained)
13 | elif name == "alexnet":
14 | model = models.alexnet(pretrained=pretrained)
15 | elif name == "vgg16":
16 | model = models.vgg16(pretrained=pretrained)
17 | elif name == "vgg19":
18 | model = models.vgg19(pretrained=pretrained)
19 | elif name == "inception_v3":
20 | model = models.inception_v3(pretrained=pretrained)
21 | elif name == "googlenet":
22 | model = models.googlenet(pretrained=pretrained)
23 |
24 | if torch.cuda.is_available():
25 | return model.cuda()
26 | else:
27 | return model
28 |
29 | def model_norm(model_1, model_2):
30 | squared_sum = 0
31 | for name, layer in model_1.named_parameters():
32 | # print(torch.mean(layer.data), torch.mean(model_2.state_dict()[name].data))
33 | squared_sum += torch.sum(torch.pow(layer.data - model_2.state_dict()[name].data, 2))
34 | return math.sqrt(squared_sum)
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/server.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch
3 |
4 |
5 | class Server(object):
6 |
7 | def __init__(self, conf, eval_dataset):
8 |
9 | self.conf = conf
10 |
11 | self.global_model = models.get_model(self.conf["model_name"])
12 |
13 | self.eval_loader = torch.utils.data.DataLoader(eval_dataset, batch_size=self.conf["batch_size"], shuffle=True)
14 |
15 |
16 | def model_aggregate(self, weight_accumulator):
17 | for name, data in self.global_model.state_dict().items():
18 |
19 | update_per_layer = weight_accumulator[name] * self.conf["lambda"]
20 |
21 | if self.conf['dp']:  # DP-FedAvg: add Gaussian noise N(0, sigma^2) to the aggregated update
22 | sigma = self.conf['sigma']
23 | if torch.cuda.is_available():
24 | noise = torch.cuda.FloatTensor(update_per_layer.shape).normal_(0, sigma)
25 | else:
26 | noise = torch.FloatTensor(update_per_layer.shape).normal_(0, sigma)
27 |
28 | update_per_layer.add_(noise)
29 |
30 | if data.type() != update_per_layer.type():
31 | data.add_(update_per_layer.to(torch.int64))
32 | else:
33 | data.add_(update_per_layer)
34 |
35 | def model_eval(self):
36 | self.global_model.eval()
37 | #print("\n\nstart to model evaluation......")
38 | #for name, layer in self.global_model.named_parameters():
39 | # print(name, "->", torch.mean(layer.data))
40 |
41 | total_loss = 0.0
42 | correct = 0
43 | dataset_size = 0
44 | for batch_id, batch in enumerate(self.eval_loader):
45 | data, target = batch
46 | dataset_size += data.size()[0]
47 |
48 | if torch.cuda.is_available():
49 | data = data.cuda()
50 | target = target.cuda()
51 |
52 |
53 | output = self.global_model(data)
54 |
55 | #print(output)
56 |
57 | total_loss += torch.nn.functional.cross_entropy(output, target,
58 | reduction='sum').item() # sum up batch loss
59 | pred = output.data.max(1)[1] # get the index of the max log-probability
60 | correct += pred.eq(target.data.view_as(pred)).cpu().sum().item()
61 |
62 | acc = 100.0 * (float(correct) / float(dataset_size))
63 | total_l = total_loss / dataset_size
64 |
65 | return acc, total_l
--------------------------------------------------------------------------------
/chapter15_Differential_Privacy/utils/conf.json:
--------------------------------------------------------------------------------
1 | {
2 | "model_name" : "resnet18",
3 |
4 | "no_models" : 10,
5 |
6 | "type" : "cifar",
7 |
8 | "global_epochs" : 100,
9 |
10 | "local_epochs" : 3,
11 |
12 | "k" : 2,
13 |
14 | "batch_size" : 32,
15 |
16 | "lr" : 0.01,
17 |
18 | "momentum" : 0.0001,
19 |
20 | "lambda" : 0.5,
21 |
22 | "dp" : true,
23 |
24 | "C" : 1000,
25 |
26 | "sigma" : 0.001,
27 |
28 | "q" : 0.1,
29 |
30 | "W" : 1
31 | }
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/README.md:
--------------------------------------------------------------------------------
1 | # 15.4: Homomorphic Encryption
2 |
3 | The concept of homomorphic encryption (HE) was first proposed by Rivest et al. in 1978. HE makes it possible to process encrypted data: computations can be performed directly on ciphertexts, producing an encrypted result. When that result is decrypted, it matches the result of performing the same computation on the plaintexts, as illustrated below:
4 |
5 |
6 |
*(figure: computing on ciphertexts matches computing on plaintexts after decryption)*
7 |
8 |
9 |
10 | In Chapter 2 we introduced the three forms of homomorphic encryption: fully homomorphic, somewhat homomorphic, and partially homomorphic encryption. Due to performance constraints, partially homomorphic encryption is the form mainly used in industry today.
11 |
12 | In this section we discuss federated learning with partially homomorphic encryption as the security mechanism, training a Logistic Regression model while its parameters stay encrypted. The [Paillier cryptosystem](paillier.py), proposed by Pascal Paillier in 1999, is an additively homomorphic scheme. If $u$ denotes a plaintext and $[[u]]$ its ciphertext, then Paillier satisfies:
13 |
14 | $$[[u+v]] = [[u]] + [[v]]$$
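
As a quick illustration of this additive property, the sketch below uses the open-source python-paillier (`phe`) package; the `paillier` module shipped with this chapter exposes the same `generate_paillier_keypair` / `encrypt` / `decrypt` calls (the snippet is illustrative and not part of the chapter's code):

```python
from phe import paillier   # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

u, v = 3.5, 1.5
enc_u = public_key.encrypt(u)   # [[u]]
enc_v = public_key.encrypt(v)   # [[v]]

# Additive homomorphism: decrypting [[u]] + [[v]] recovers u + v.
assert abs(private_key.decrypt(enc_u + enc_v) - (u + v)) < 1e-9

# Multiplication by a plaintext scalar is also supported.
assert abs(private_key.decrypt(enc_u * 2) - 2 * u) < 1e-9
```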
15 |
16 |
17 |
18 | ## 15.4.1 Encrypted Loss Function
19 |
20 | Suppose we have a dataset of $n$ samples:
21 |
22 | $$T=[(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)] \tag{1}$$
23 |
24 | In plaintext, the log loss of LR can be written as:
25 |
26 | $$L=\frac{1}{n}\sum_{i=1}^{n}{\log(1+e^{-y_i{\theta}^T{x_i}})} \tag{2}$$
27 |
28 | Differentiating, the gradient $\frac{\partial{L}}{\partial{\theta}}$ of the loss $L$ with respect to the model parameters $\theta$ satisfies:
29 |
30 | $$\frac{\partial{L}}{\partial{\theta}}=\frac{1}{n}\sum_{i=1}^{n}{(\frac{1}{1+e^{-y_i{\theta}^T{x_i}}}-1)y_ix_i} \tag{3}$$
31 |
32 | Using gradient descent, the parameter update at each step is:
33 |
34 | $$\theta = \theta - lr*\frac{\partial{L}}{\partial{\theta}} \tag{4}$$
35 |
36 | In the computation above, the parameters $\theta$ and the data $(x, y)$ are all in plaintext. In a federated learning setting this carries a risk of data leakage. HE-based federated learning therefore requires solving for the parameters in encrypted form: the transmitted parameter $\theta$ is an encrypted value $[[\theta]]$, and the loss function becomes:
37 |
38 | $$L=\frac{1}{n}\sum_{i=1}^{n}{\log(1+e^{-y_i{[[\theta]]}^T{x_i}})} \tag{5}$$
39 |
40 | This expression involves exponentiation and logarithms of encrypted data. As explained earlier, the Paillier scheme supports only additive homomorphism and multiplication by a plaintext scalar; it supports neither ciphertext multiplication nor, a fortiori, exponential and logarithmic operations, so the loss cannot be evaluated in encrypted form as written. Reference [1] proposes approximating the original log loss with a Taylor loss: after a Taylor expansion, the loss involves only scalar multiplications and additions, so Paillier encryption can be applied directly.
41 |
42 | Recall that for any sufficiently smooth function $f(z)$, its Taylor expansion at $z=0$ can be written as:
43 |
44 | $$f(z)=\sum_{i=0}^{\infty} {\frac{f^{(i)}(0)}{i!}z^i} \tag{6}$$
45 |
46 | For the log loss, i.e. $f(z)=\log(1 + e^{-z})$, the Taylor expansion at $z = 0$ is:
47 |
48 | $$\log(1 + e^{-z}) \approx \log{2} - \frac{1}{2}z + \frac{1}{8}z^2+O(z^4) \tag{7}$$
49 |
50 | Here we use the second-order polynomial to approximate the log loss. Substituting $z = y{\theta}^T x$ gives:
51 |
52 | $$\log(1 + e^{-{ y{\theta}^T x}}) \approx \log{2} - \frac{1}{2}{ y{\theta}^T x} + \frac{1}{8}{ ({\theta}^T x)}^2 \tag{8}$$
53 |
54 | In the last term, since $y^2 = 1$, the $y$ drops out. Substituting $(8)$ into $(2)$ yields:
55 |
56 | $$L=\frac{1}{n}\sum_{i=1}^{n}{[\log{2} - \frac{1}{2}{ {y_i}{\theta}^T {x_i}} + \frac{1}{8}{ ({\theta}^T {x_i})}^2 ]} \tag{9}$$
57 |
58 | Differentiating $(9)$, the gradient of the loss $L$ with respect to $\theta$ is:
59 |
60 | $$\frac{\partial{L}}{\partial{\theta}}=\frac{1}{n}\sum_{i=1}^{n}{(\frac{1}{4}{\theta}^T{x_i}-\frac{1}{2}{y_i})x_i} \tag{10}$$
61 |
62 | The encrypted counterpart of the gradient in $(10)$ is:
63 |
64 | $$[[\frac{\partial{L}}{\partial{\theta}}]]=\frac{1}{n}\sum_{i=1}^{n}{(\frac{1}{4}{[[\theta]]}^T{x_i}-\frac{1}{2}{y_i})x_i} \tag{11}$$
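
As a plaintext sanity check (illustrative only, not part of the chapter's code), the sketch below compares the exact gradient of Eq. (3) with the Taylor-approximated gradient of Eq. (10) on random data; for $\theta$ close to zero the two agree closely, which is why the quadratic approximation is adequate for encrypted training:

```python
import numpy as np

np.random.seed(0)
n, d = 200, 5
X = np.random.randn(n, d)
y = np.random.choice([-1.0, 1.0], size=n)
theta = 0.1 * np.random.randn(d)               # near zero, where the expansion is taken

z = y * X.dot(theta)                           # z_i = y_i * theta^T x_i
exact_grad  = (((1.0 / (1.0 + np.exp(-z))) - 1.0) * y)[:, None] * X   # Eq. (3), per sample
taylor_grad = (0.25 * X.dot(theta) - 0.5 * y)[:, None] * X            # Eq. (10), per sample

# Average over samples and compare; the gap is small for theta near zero.
print(np.abs(exact_grad.mean(axis=0) - taylor_grad.mean(axis=0)).max())
```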
65 |
66 |
67 |
68 | ## 15.4.2 Running the Code
69 |
70 | From this directory, run the following command:
71 |
72 | ```
73 | python main.py -c ./utils/conf.json
74 | ```
75 |
76 | ## 15.4.3 Implementation Details
77 |
78 | **Defining the model class:** This section uses LR as the federated model. Because encryption and decryption are involved, we first define a custom model class, LR_Model, to make these operations convenient.
79 |
80 | ```python
81 | class LR_Model(object):
82 | def __init__ (self, public_key, w_size=None, w=None, encrypted=False):
83 | """
84 | w_size: number of weight parameters
85 | w: pass existing weights directly; only one of w and w_size is needed
86 | encrypted: whether the weights are in plaintext or encrypted form
87 | """
88 | self.public_key = public_key
89 | if w is not None:
90 | self.weights = w
91 | else:
92 | limit = -1.0/w_size
93 | self.weights = np.random.uniform(-0.5, 0.5, (w_size,))
94 |
95 | if encrypted==False:
96 | self.encrypt_weights = encrypt_vector(public_key, self.weights)
97 | else:
98 | self.encrypt_weights = self.weights
99 |
100 | def set_encrypt_weights(self, w):
101 | for id, e in enumerate(w):
102 | self.encrypt_weights[id] = e
103 |
104 | def set_raw_weights(self, w):
105 | for id, e in enumerate(w):
106 | self.weights[id] = e
107 |
108 | ```
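
As a usage sketch (illustrative only; it assumes the chapter's `paillier` module and `models.py` are importable from this directory, as in server.py), the server-side global model can be built as follows, which encrypts the randomly initialized weights at construction time:

```python
import paillier                      # the chapter's Paillier module (cf. server.py)
from models import LR_Model

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# feature_num is 30 for the breast cancer data (cf. utils/conf.json); +1 adds the bias term
global_model = LR_Model(public_key=public_key, w_size=30 + 1)
print(len(global_model.encrypt_weights))   # 31 Paillier-encrypted weights
```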
109 |
110 | **Local model training:** During local training, the model parameters remain encrypted throughout. The procedure is as follows:
111 |
112 | ```python
113 | def local_train(self, weights):
114 | original_w = weights
115 | self.local_model.set_encrypt_weights(weights)
116 | neg_one = self.public_key.encrypt(-1)
117 |
118 | for e in range(self.conf["local_epochs"]):
119 | print("start epoch ", e)
120 | idx = np.arange(self.data_x.shape[0])
121 | batch_idx = np.random.choice(idx, self.conf['batch_size'], replace=False)
122 | #print(batch_idx)
123 |
124 | x = self.data_x[batch_idx]
125 | x = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
126 | y = self.data_y[batch_idx].reshape((-1, 1))
127 |
128 | batch_encrypted_grad = x.transpose() * (0.25 * x.dot(self.local_model.encrypt_weights) + 0.5 * y.transpose() * neg_one)
129 | encrypted_grad = batch_encrypted_grad.sum(axis=1) / y.shape[0]
130 |
131 | for j in range(len(self.local_model.encrypt_weights)):
132 | self.local_model.encrypt_weights[j] -= self.conf["lr"] * encrypted_grad[j]
133 |
134 | weight_accumulators = []
135 | #print(models.decrypt_vector(Server.private_key, weights))
136 | for j in range(len(self.local_model.encrypt_weights)):
137 | weight_accumulators.append(self.local_model.encrypt_weights[j] - original_w[j])
138 |
139 | return weight_accumulators
140 | ```
141 |
142 | At the start, the client copies the global model weights sent by the server and sets the local model weights to these global weights:
143 |
144 | ```python
145 | original_w = weights
146 | self.local_model.set_encrypt_weights(weights)
147 | ```
148 |
149 | In each local training iteration, a random batch of batch_size training samples is selected:
150 |
151 | ```python
152 | idx = np.arange(self.data_x.shape[0])
153 | batch_idx = np.random.choice(idx, self.conf['batch_size'], replace=False)
154 | x = self.data_x[batch_idx]
155 | x = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
156 | y = self.data_y[batch_idx].reshape((-1, 1))
157 | ```
158 |
159 | Next, the encrypted gradient is computed while the parameters stay encrypted, using the gradient formula in Eq. (11), and the model parameters are updated via gradient descent:
160 |
161 | ```python
162 | batch_encrypted_grad = x.transpose() * (0.25 * x.dot(self.local_model.encrypt_weights) + 0.5 * y.transpose() * neg_one)
163 | encrypted_grad = batch_encrypted_grad.sum(axis=1) / y.shape[0]
164 |
165 | for j in range(len(self.local_model.encrypt_weights)):
166 | self.local_model.encrypt_weights[j] -= self.conf["lr"] * encrypted_grad[j]
167 | ```
168 |
169 |
170 |
171 | ## 15.4.4 Experimental Results
172 |
173 | We use the breast cancer dataset (in the data directory) as training data for this section. The model accuracy after federated training under homomorphic encryption is shown below:
174 |
175 |
176 |
*(figure: model accuracy under homomorphically encrypted federated training)*
177 |
178 |
179 |
180 |
181 | ## 15.4.5 References
182 |
183 | - [1] [Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption](https://arxiv.org/pdf/1711.10677.pdf)
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/client.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch, copy
3 | import numpy as np
4 | from server import Server
5 |
6 | class Client(object):
7 |
8 | def __init__(self, conf, public_key, weights, data_x, data_y):
9 |
10 | self.conf = conf
11 |
12 | self.public_key = public_key
13 |
14 | self.local_model = models.LR_Model(public_key=self.public_key, w=weights, encrypted=True)
15 |
16 | #print(type(self.local_model.encrypt_weights))
17 | self.data_x = data_x
18 |
19 | self.data_y = data_y
20 |
21 | #print(self.data_x.shape, self.data_y.shape)
22 |
23 |
24 | def local_train(self, weights):
25 |
26 |
27 | original_w = weights
28 |
29 | self.local_model.set_encrypt_weights(weights)
30 |
31 |
32 |
33 | neg_one = self.public_key.encrypt(-1)
34 |
35 | for e in range(self.conf["local_epochs"]):
36 | print("start epoch ", e)
37 | #if e > 0 and e%2 == 0:
38 | # print("re encrypt")
39 | # self.local_model.encrypt_weights = Server.re_encrypt(self.local_model.encrypt_weights)
40 |
41 | idx = np.arange(self.data_x.shape[0])
42 | batch_idx = np.random.choice(idx, self.conf['batch_size'], replace=False)
43 | #print(batch_idx)
44 |
45 | x = self.data_x[batch_idx]
46 | x = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
47 | y = self.data_y[batch_idx].reshape((-1, 1))
48 |
49 | #print((0.25 * x.dot(self.local_model.encrypt_weights) + 0.5 * y.transpose() * neg_one).shape)
50 |
51 | #print(x.transpose().shape)
52 |
53 | #assert(False)
54 |
55 | batch_encrypted_grad = x.transpose() * (0.25 * x.dot(self.local_model.encrypt_weights) + 0.5 * y.transpose() * neg_one)  # per-sample encrypted gradients, cf. Eq. (11) in the README
56 | encrypted_grad = batch_encrypted_grad.sum(axis=1) / y.shape[0]
57 |
58 | for j in range(len(self.local_model.encrypt_weights)):
59 | self.local_model.encrypt_weights[j] -= self.conf["lr"] * encrypted_grad[j]
60 |
61 | weight_accumulators = []
62 | #print(models.decrypt_vector(Server.private_key, weights))
63 | for j in range(len(self.local_model.encrypt_weights)):
64 | weight_accumulators.append(self.local_model.encrypt_weights[j] - original_w[j])
65 |
66 | return weight_accumulators
67 |
68 |
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/figures/fig12.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Homomorphic_Encryption/figures/fig12.png
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/figures/fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Homomorphic_Encryption/figures/fig2.png
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/main.py:
--------------------------------------------------------------------------------
1 | import argparse, json
2 | import datetime
3 | import os
4 | import numpy as np
5 | import logging
6 | import torch, random
7 |
8 | from server import *
9 | from client import *
10 | import models
11 | from sklearn.preprocessing import StandardScaler, MinMaxScaler, PolynomialFeatures
12 |
13 | def read_dataset():
14 | data_X, data_Y = [], []
15 |
16 | with open("./data/breast.csv") as fin:
17 | for line in fin:
18 | data = line.split(',')
19 | data_X.append([float(e) for e in data[:-1]])
20 | if int(data[-1])==1:
21 | data_Y.append(1)
22 | else:
23 | data_Y.append(-1)
24 |
25 | data_X = np.array(data_X)
26 | data_Y = np.array(data_Y)
27 | print("one_num: ", np.sum(data_Y==1), ", minus_one_num: ", np.sum(data_Y==-1))
28 |
29 |
30 |
31 | idx = np.arange(data_X.shape[0])
32 | np.random.shuffle(idx)
33 |
34 | train_size = int(data_X.shape[0]*0.8)
35 |
36 | train_x = data_X[idx[:train_size]]
37 | train_y = data_Y[idx[:train_size]]
38 |
39 | eval_x = data_X[idx[train_size:]]
40 | eval_y = data_Y[idx[train_size:]]
41 |
42 | return (train_x, train_y), (eval_x, eval_y)
43 |
44 |
45 | if __name__ == '__main__':
46 |
47 | parser = argparse.ArgumentParser(description='Federated Learning')
48 | parser.add_argument('-c', '--conf', dest='conf')
49 | args = parser.parse_args()
50 |
51 |
52 | with open(args.conf, 'r') as f:
53 | conf = json.load(f)
54 |
55 |
56 | train_datasets, eval_datasets = read_dataset()
57 |
58 | print(train_datasets[0].shape, train_datasets[1].shape)
59 |
60 | print(eval_datasets[0].shape, eval_datasets[1].shape)
61 |
62 |
63 | server = Server(conf, eval_datasets)
64 | clients = []
65 |
66 |
67 | train_size = train_datasets[0].shape[0]
68 | per_client_size = int(train_size/conf["no_models"])
69 | for c in range(conf["no_models"]):
70 | clients.append(Client(conf, Server.public_key, server.global_model.encrypt_weights, train_datasets[0][c*per_client_size: (c+1)*per_client_size], train_datasets[1][c*per_client_size: (c+1)*per_client_size]))
71 |
72 |
73 | #print(server.global_model.weights)
74 |
75 | for e in range(conf["global_epochs"]):
76 |
77 | server.global_model.encrypt_weights = models.encrypt_vector(Server.public_key, models.decrypt_vector(Server.private_key, server.global_model.encrypt_weights))
78 |
79 | candidates = random.sample(clients, conf["k"])
80 |
81 | weight_accumulator = [Server.public_key.encrypt(0.0)] * (conf["feature_num"]+1)
82 |
83 |
84 | for c in candidates:
85 | #print(models.decrypt_vector(Server.private_key, server.global_model.encrypt_weights))
86 | diff = c.local_train(server.global_model.encrypt_weights)
87 |
88 | for i in range(len(weight_accumulator)):
89 | weight_accumulator[i] = weight_accumulator[i] + diff[i]
90 |
91 | server.model_aggregate(weight_accumulator)
92 |
93 |
94 |
95 | acc = server.model_eval()
96 |
97 | print("Epoch %d, acc: %f\n" % (e, acc))
98 |
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/models.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import models
4 |
5 | import numpy as np
6 |
7 |
8 | def encrypt_vector(public_key, x):
9 | return [public_key.encrypt(i) for i in x]
10 |
11 | def encrypt_matrix(public_key, x):
12 | ret = []
13 | for r in x:
14 | ret.append(encrypt_vector(public_key, r))
15 | return ret
16 |
17 |
18 | def decrypt_vector(private_key, x):
19 | return [private_key.decrypt(i) for i in x]
20 |
21 |
22 | def decrypt_matrix(private_key, x):
23 | ret = []
24 | for r in x:
25 | ret.append(decrypt_vector(private_key, r))
26 | return ret
27 |
28 |
29 |
30 | class LR_Model(object):
31 |
32 | def __init__ (self, public_key, w_size=None, w=None, encrypted=False):
33 | """
34 | w_size: number of weight parameters
35 | w: pass existing weights directly; only one of w and w_size is needed
36 | encrypted: whether the weights are in plaintext or encrypted form
37 | """
38 | self.public_key = public_key
39 | if w is not None:
40 | self.weights = w
41 | else:
42 | limit = -1.0/w_size
43 | self.weights = np.random.uniform(-0.5, 0.5, (w_size,))
44 |
45 | if encrypted==False:
46 | self.encrypt_weights = encrypt_vector(public_key, self.weights)
47 | else:
48 | self.encrypt_weights = self.weights
49 |
50 | def set_encrypt_weights(self, w):
51 | for id, e in enumerate(w):
52 | self.encrypt_weights[id] = e
53 |
54 | def set_raw_weights(self, w):
55 | for id, e in enumerate(w):
56 | self.weights[id] = e
57 |
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/server.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch
3 |
4 | import paillier
5 |
6 | import numpy as np
7 |
8 |
9 | class Server(object):
10 |
11 | public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
12 |
13 | def __init__(self, conf, eval_dataset):
14 |
15 | self.conf = conf
16 |
17 |
18 |
19 | self.global_model = models.LR_Model(public_key=Server.public_key, w_size=self.conf["feature_num"]+1)
20 |
21 | self.eval_x = eval_dataset[0]
22 |
23 | self.eval_y = eval_dataset[1]
24 |
25 |
26 | def model_aggregate(self, weight_accumulator):
27 |
28 | for id, data in enumerate(self.global_model.encrypt_weights):
29 |
30 | update_per_layer = weight_accumulator[id] * self.conf["lambda"]
31 |
32 | self.global_model.encrypt_weights[id] = self.global_model.encrypt_weights[id] + update_per_layer
33 |
34 |
35 | def model_eval(self):
36 |
37 |
38 | total_loss = 0.0
39 | correct = 0
40 | dataset_size = 0
41 |
42 | batch_num = int(self.eval_x.shape[0]/self.conf["batch_size"])
43 |
44 | self.global_model.weights = models.decrypt_vector(Server.private_key, self.global_model.encrypt_weights)
45 | print(self.global_model.weights)
46 |
47 | for batch_id in range(batch_num):
48 |
49 | x = self.eval_x[batch_id*self.conf["batch_size"] : (batch_id+1)*self.conf["batch_size"]]
50 | x = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
51 | y = self.eval_y[batch_id*self.conf["batch_size"] : (batch_id+1)*self.conf["batch_size"]].reshape((-1, 1))
52 |
53 | dataset_size += x.shape[0]
54 |
55 |
56 |
57 | wxs = x.dot(self.global_model.weights)
58 |
59 | pred_y = [1.0 / (1 + np.exp(-wx)) for wx in wxs]
60 |
61 | #print(pred_y)
62 |
63 | pred_y = np.array([1 if pred > 0.5 else -1 for pred in pred_y]).reshape((-1, 1))
64 |
65 | #print(y)
66 | #print(pred_y)
67 | correct += np.sum(y == pred_y)
68 |
69 | #print(correct, dataset_size)
70 | acc = 100.0 * (float(correct) / float(dataset_size))
71 | #total_l = total_loss / dataset_size
72 |
73 | return acc
74 |
75 | @staticmethod
76 | def re_encrypt(w):
77 | return models.encrypt_vector(Server.public_key, models.decrypt_vector(Server.private_key, w))
--------------------------------------------------------------------------------
/chapter15_Homomorphic_Encryption/utils/conf.json:
--------------------------------------------------------------------------------
1 |
2 | {
3 |
4 | "no_models" : 5,
5 |
6 | "global_epochs" : 30,
7 |
8 | "local_epochs" : 2,
9 |
10 | "k" : 2,
11 |
12 | "batch_size" : 64,
13 |
14 | "lr" : 0.15,
15 |
16 | "momentum" : 0.0001,
17 |
18 | "lambda" : 0.5,
19 |
20 | "feature_num" : 30
21 | }
22 |
23 |
--------------------------------------------------------------------------------
/chapter15_Sparsity/README.md:
--------------------------------------------------------------------------------
1 | # 15.5: Parameter Sparsification
2 |
3 | Sparsification is essentially a form of model compression: by transmitting only a small subset of the parameters, it reduces the network bandwidth between server and clients and also helps prevent leakage of the full global model parameters.
4 |
5 | ## 15.5.1 Overview
6 |
7 | Suppose we are in round $t$ and the global model that client $C_i$ trains locally is $G_t=[g_1, g_2, ..., g_L]$, where $g_i$ denotes the $i$-th layer of $G_t$. After local training on client $C_i$, the model changes from $G_t$ to $L^{t+1}_i$. In conventional federated learning, client $C_i$ uploads the model update $(L^{t+1}_i-G_t)$ to the server.
8 |
9 | With sparsification, each client keeps a mask matrix $R=[r_1, r_2, ..., r_L]$ during local training, where $r_i$ has the same shape as $g_i$ and contains only 0s and 1s. When local training finishes, the model update $(L^{t+1}_i-G_t)$ is multiplied element-wise by the mask before upload, so only the entries where the mask is nonzero are uploaded, as illustrated below:
10 |
11 |
12 |
*(figure: masking the model update before upload)*
13 |
14 |
15 |
16 | ## 15.5.2 Configuration
17 | To support this, we first add a new field "prop" to the configuration file. It controls the proportion of 1s in the mask matrix: the larger "prop" is, the more 1s the mask contains and the denser it is; conversely, the smaller "prop" is, the fewer 1s it contains and the sparser it is.
18 |
19 | ```json
20 | {
21 |     ...,
22 |     "prop" : 0.8
23 | }
24 | ```
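
As a quick illustrative check (not part of the chapter's code), sampling the mask with `torch.bernoulli`, as the client below does, keeps a fraction of entries close to "prop":

```python
import torch

prop = 0.8                                       # cf. "prop" in conf.json
param = torch.zeros(100_000)                     # stands in for one model parameter tensor
mask = torch.bernoulli(torch.ones_like(param) * prop)

print(mask.mean().item())                        # close to 0.8: about 80% of the update entries are kept
```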
25 |
26 | ## 15.5.3 Client Side
27 |
28 | In its constructor, the client first creates a mask matrix. The mask is a 0-1 matrix, generated randomly from a Bernoulli distribution:
29 |
30 | ```python
31 | class Client(object):
32 | def __init__(self, conf, model, train_dataset, id = -1):
33 | ...
34 |
35 | self.mask = {}
36 | for name, param in self.local_model.state_dict().items():
37 | p=torch.ones_like(param)*self.conf["prop"]
38 | if torch.is_floating_point(param):
39 | self.mask[name] = torch.bernoulli(p)
40 | else:
41 | self.mask[name] = torch.bernoulli(p).long()
42 | ```
43 |
44 | Local training on the client is essentially the same as before, but when the model update is uploaded, the mask is applied so that only the weight values at positions where the mask is 1 are returned.
45 |
46 | ```python
47 | def local_train(self, model):
48 | for name, param in model.state_dict().items():
49 | self.local_model.state_dict()[name].copy_(param.clone())
50 |
51 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
52 | momentum=self.conf['momentum'])
53 | self.local_model.train()
54 | for e in range(self.conf["local_epochs"]):
55 | for batch_id, batch in enumerate(self.train_loader):
56 | data, target = batch
57 |
58 | if torch.cuda.is_available():
59 | data = data.cuda()
60 | target = target.cuda()
61 |
62 | optimizer.zero_grad()
63 | output = self.local_model(data)
64 | loss = torch.nn.functional.cross_entropy(output, target)
65 | loss.backward()
66 | optimizer.step()
67 | print("Epoch %d done." % e)
68 |
69 | diff = dict()
70 | for name, data in self.local_model.state_dict().items():
71 | diff[name] = (data - model.state_dict()[name])
72 | #print(diff[name].size(), self.mask[name].size())
73 | diff[name] = diff[name]*self.mask[name]
74 |
75 | return diff
76 | ```
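
Note that the listing above returns the masked difference as a dense tensor that still contains the zeroed entries. To realize the bandwidth saving in practice, only the nonzero entries would be transmitted; one hedged way to do this (an assumption, not part of the chapter's code) is to pack the masked update into a sparse COO tensor before sending it and restore it on the server:

```python
import torch

def pack_update(masked_diff: torch.Tensor) -> torch.Tensor:
    # Keep only the indices and values of the nonzero entries; with prop = 0.6,
    # roughly 40% of the values are zero and need not be sent (at the cost of index overhead).
    return masked_diff.to_sparse()

def unpack_update(sparse_diff: torch.Tensor) -> torch.Tensor:
    # Server side: restore the dense update before accumulating it.
    return sparse_diff.to_dense()
```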
77 |
78 |
79 |
80 | ## 15.5.4 Running the Code
81 |
82 | From this directory, run the following command:
83 |
84 | ```
85 | python main.py -c ./utils/conf.json
86 | ```
87 |
88 |
89 |
90 | ## 15.5.5 Performance Comparison
91 |
92 | The performance of the model after parameter sparsification is shown in the figure below. As the number of 0s in the mask increases, the sparsified model performs worse in the early rounds, but as training proceeds its performance gradually recovers to the normal level.
93 |
94 |
95 |
*(figure: accuracy comparison for different mask densities)*
96 |
--------------------------------------------------------------------------------
/chapter15_Sparsity/client.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch, copy
3 | class Client(object):
4 |
5 | def __init__(self, conf, model, train_dataset, id = -1):
6 |
7 | self.conf = conf
8 |
9 | self.local_model = models.get_model(self.conf["model_name"])
10 |
11 | self.client_id = id
12 |
13 | self.train_dataset = train_dataset
14 |
15 | all_range = list(range(len(self.train_dataset)))
16 | data_len = int(len(self.train_dataset) / self.conf['no_models'])
17 | train_indices = all_range[id * data_len: (id + 1) * data_len]
18 |
19 | self.mask = {}  # fixed 0/1 mask per parameter tensor, sampled from Bernoulli(prop)
20 | for name, param in self.local_model.state_dict().items():
21 | p=torch.ones_like(param)*self.conf["prop"]
22 | if torch.is_floating_point(param):
23 | self.mask[name] = torch.bernoulli(p)
24 | else:
25 | self.mask[name] = torch.bernoulli(p).long()
26 |
27 | self.train_loader = torch.utils.data.DataLoader(self.train_dataset, batch_size=conf["batch_size"],
28 | sampler=torch.utils.data.sampler.SubsetRandomSampler(train_indices))
29 |
30 |
31 | def local_train(self, model):
32 |
33 | for name, param in model.state_dict().items():
34 | self.local_model.state_dict()[name].copy_(param.clone())
35 |
36 | optimizer = torch.optim.SGD(self.local_model.parameters(), lr=self.conf['lr'],
37 | momentum=self.conf['momentum'])
38 |
39 | self.local_model.train()
40 | for e in range(self.conf["local_epochs"]):
41 |
42 | for batch_id, batch in enumerate(self.train_loader):
43 | data, target = batch
44 |
45 | if torch.cuda.is_available():
46 | data = data.cuda()
47 | target = target.cuda()
48 |
49 | optimizer.zero_grad()
50 | output = self.local_model(data)
51 | loss = torch.nn.functional.cross_entropy(output, target)
52 | loss.backward()
53 |
54 | optimizer.step()
55 |
56 | print("Epoch %d done." % e)
57 |
58 | diff = dict()
59 | for name, data in self.local_model.state_dict().items():
60 | diff[name] = (data - model.state_dict()[name])
61 | #print(diff[name].size(), self.mask[name].size())
62 | diff[name] = diff[name]*self.mask[name]  # zero out the entries that are not uploaded
63 |
64 | return diff
65 |
--------------------------------------------------------------------------------
/chapter15_Sparsity/datasets.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import torch
4 | from torchvision import datasets, transforms
5 |
6 | def get_dataset(dir, name):
7 |
8 | if name=='mnist':
9 | train_dataset = datasets.MNIST(dir, train=True, download=True, transform=transforms.ToTensor())
10 | eval_dataset = datasets.MNIST(dir, train=False, transform=transforms.ToTensor())
11 |
12 | elif name=='cifar':
13 | transform_train = transforms.Compose([
14 | transforms.RandomCrop(32, padding=4),
15 | transforms.RandomHorizontalFlip(),
16 | transforms.ToTensor(),
17 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
18 | ])
19 |
20 | transform_test = transforms.Compose([
21 | transforms.ToTensor(),
22 | transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
23 | ])
24 |
25 | train_dataset = datasets.CIFAR10(dir, train=True, download=True,
26 | transform=transform_train)
27 | eval_dataset = datasets.CIFAR10(dir, train=False, transform=transform_test)
28 |
29 |
30 | return train_dataset, eval_dataset
--------------------------------------------------------------------------------
/chapter15_Sparsity/figures/fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Sparsity/figures/fig1.png
--------------------------------------------------------------------------------
/chapter15_Sparsity/figures/fig33.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/chapter15_Sparsity/figures/fig33.png
--------------------------------------------------------------------------------
/chapter15_Sparsity/main.py:
--------------------------------------------------------------------------------
1 | import argparse, json
2 | import datetime
3 | import os
4 | import logging
5 | import torch, random
6 |
7 | from server import *
8 | from client import *
9 | import models, datasets
10 |
11 |
12 |
13 | if __name__ == '__main__':
14 |
15 | parser = argparse.ArgumentParser(description='Federated Learning')
16 | parser.add_argument('-c', '--conf', dest='conf')
17 | args = parser.parse_args()
18 |
19 |
20 | with open(args.conf, 'r') as f:
21 | conf = json.load(f)
22 |
23 |
24 | train_datasets, eval_datasets = datasets.get_dataset("./data/", conf["type"])
25 |
26 | server = Server(conf, eval_datasets)
27 | clients = []
28 |
29 | for c in range(conf["no_models"]):
30 | clients.append(Client(conf, server.global_model, train_datasets, c))
31 |
32 | print("\n\n")
33 | for e in range(conf["global_epochs"]):
34 |
35 | candidates = random.sample(clients, conf["k"])
36 |
37 | weight_accumulator = {}
38 |
39 | for name, params in server.global_model.state_dict().items():
40 | weight_accumulator[name] = torch.zeros_like(params)
41 |
42 | for c in candidates:
43 | diff = c.local_train(server.global_model)
44 |
45 | for name, params in server.global_model.state_dict().items():
46 | weight_accumulator[name].add_(diff[name])
47 |
48 |
49 | server.model_aggregate(weight_accumulator)
50 |
51 | acc, loss = server.model_eval()
52 |
53 | print("Epoch %d, acc: %f, loss: %f\n" % (e, acc, loss))
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
--------------------------------------------------------------------------------
/chapter15_Sparsity/models.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import models
4 | import math
5 |
6 | def get_model(name="vgg16", pretrained=True):
7 | if name == "resnet18":
8 | model = models.resnet18(pretrained=pretrained)
9 | elif name == "resnet50":
10 | model = models.resnet50(pretrained=pretrained)
11 | elif name == "densenet121":
12 | model = models.densenet121(pretrained=pretrained)
13 | elif name == "alexnet":
14 | model = models.alexnet(pretrained=pretrained)
15 | elif name == "vgg16":
16 | model = models.vgg16(pretrained=pretrained)
17 | elif name == "vgg19":
18 | model = models.vgg19(pretrained=pretrained)
19 | elif name == "inception_v3":
20 | model = models.inception_v3(pretrained=pretrained)
21 | elif name == "googlenet":
22 | model = models.googlenet(pretrained=pretrained)
23 |
24 | if torch.cuda.is_available():
25 | return model.cuda()
26 | else:
27 | return model
28 |
29 | def model_norm(model_1, model_2):
30 | squared_sum = 0
31 | for name, layer in model_1.named_parameters():
32 | # print(torch.mean(layer.data), torch.mean(model_2.state_dict()[name].data))
33 | squared_sum += torch.sum(torch.pow(layer.data - model_2.state_dict()[name].data, 2))
34 | return math.sqrt(squared_sum)
--------------------------------------------------------------------------------
/chapter15_Sparsity/server.py:
--------------------------------------------------------------------------------
1 |
2 | import models, torch
3 |
4 |
5 | class Server(object):
6 |
7 | def __init__(self, conf, eval_dataset):
8 |
9 | self.conf = conf
10 |
11 | self.global_model = models.get_model(self.conf["model_name"])
12 |
13 | self.eval_loader = torch.utils.data.DataLoader(eval_dataset, batch_size=self.conf["batch_size"], shuffle=True)
14 |
15 |
16 | def model_aggregate(self, weight_accumulator):
17 | for name, data in self.global_model.state_dict().items():
18 |
19 | update_per_layer = weight_accumulator[name] * self.conf["lambda"]
20 |
21 | if data.type() != update_per_layer.type():
22 | data.add_(update_per_layer.to(torch.int64))
23 | else:
24 | data.add_(update_per_layer)
25 |
26 | def model_eval(self):
27 | self.global_model.eval()
28 |
29 | total_loss = 0.0
30 | correct = 0
31 | dataset_size = 0
32 | for batch_id, batch in enumerate(self.eval_loader):
33 | data, target = batch
34 | dataset_size += data.size()[0]
35 |
36 | if torch.cuda.is_available():
37 | data = data.cuda()
38 | target = target.cuda()
39 |
40 |
41 | output = self.global_model(data)
42 |
43 | total_loss += torch.nn.functional.cross_entropy(output, target,
44 | reduction='sum').item() # sum up batch loss
45 | pred = output.data.max(1)[1] # get the index of the max log-probability
46 | correct += pred.eq(target.data.view_as(pred)).cpu().sum().item()
47 |
48 | acc = 100.0 * (float(correct) / float(dataset_size))
49 | total_l = total_loss / dataset_size
50 |
51 | return acc, total_l
--------------------------------------------------------------------------------
/chapter15_Sparsity/utils/conf.json:
--------------------------------------------------------------------------------
1 | {
2 | "model_name" : "resnet50",
3 |
4 | "no_models" : 10,
5 |
6 | "type" : "cifar",
7 |
8 | "global_epochs" : 30,
9 |
10 | "local_epochs" : 3,
11 |
12 | "k" : 2,
13 |
14 | "batch_size" : 32,
15 |
16 | "lr" : 0.01,
17 |
18 | "momentum" : 0.0001,
19 |
20 | "lambda" : 0.5,
21 |
22 | "prop" : 0.6
23 | }
--------------------------------------------------------------------------------
/errata/README.md:
--------------------------------------------------------------------------------
1 | # Errata for *Practicing Federated Learning* (《联邦学习实战》)
2 |
3 |
4 |
5 | ## Chapter 3
6 |
7 | - 3-1: Page 45. In the client constructor, the local model should be created as a fresh model instance rather than assigned directly from the global model, to avoid sharing the same memory.
8 |
9 |
10 |
*(figure: figures/3_1.png, corrected client constructor)*
11 |
12 |
13 | - 3-2: Page 48. Figure 3.5 has been revised, and a comparison across different numbers of iterations has been added.
14 |
15 |
16 |
*(figure: figures/3_2.png, revised Figure 3.5)*
17 |
18 |
19 |
20 |
21 | ## Chapter 5
22 |
23 | - 5-1: Page 64. In Listing 5.2, the namespace is misprinted: "homo_host_breast_eval" should be "homo_guest_breast_eval".
24 |
25 |
26 |
*(figure: figures/5_1.png, corrected namespace in Listing 5.2)*
27 |
28 |
--------------------------------------------------------------------------------
/errata/figures/3_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/errata/figures/3_1.png
--------------------------------------------------------------------------------
/errata/figures/3_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/errata/figures/3_2.png
--------------------------------------------------------------------------------
/errata/figures/5_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/errata/figures/5_1.png
--------------------------------------------------------------------------------
/figures/cover.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/figures/cover.jpg
--------------------------------------------------------------------------------
/figures/cover2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/FederatedAI/Practicing-Federated-Learning/8090c9acfc584571e622807ac5ba4079cdc2b0c8/figures/cover2.png
--------------------------------------------------------------------------------
/figures/link.md:
--------------------------------------------------------------------------------
1 | # Practicing Federated Learning: Links
2 |
3 |
4 |
5 | ## Preface
6 |
7 | Link 0-1: ["Federated Learning" course at HKUST](https://ising.cse.ust.hk/fl/index.html)
8 |
9 |
10 |
11 | ## Chapter 3
12 |
13 | Link 3-1: [Anaconda download page](https://www.anaconda.com/products/individual)
14 |
15 | Link 3-2: [CUDA download page](https://developer.nvidia.com/cuda-downloads)
16 |
17 | Link 3-3: [cuDNN download page](https://developer.nvidia.com/rdp/cudnn-archive)
18 |
19 | Link 3-4: [PyTorch official tutorials](https://pytorch.org/tutorials/)
20 |
21 | Link 3-5: [Built-in torchvision models](https://pytorch.org/vision/stable/models.html)
22 |
23 |
24 |
25 | ## Chapter 4
26 |
27 | Link 4-1: [FATE project repository](https://github.com/FederatedAI/FATE)
28 |
29 | Link 4-2: [Linux Foundation press release on hosting FATE](https://www.linuxfoundation.org/press-release/2019/06/the-linux-foundation-will-host-the-federated-ai-enabler-to-responsibly-advance-data-modeling/)
30 |
31 | Link 4-3: [FATE documentation in Chinese](https://github.com/FederatedAI/DOC-CHN)
32 |
33 | Link 4-4: [FATE standalone deployment](https://github.com/FederatedAI/DOC-CHN/tree/master/%E9%83%A8%E7%BD%B2)
34 |
35 | Link 4-5: [FATE cluster deployment](https://github.com/FederatedAI/DOC-CHN/tree/master/%E9%83%A8%E7%BD%B2)
36 |
37 | Link 4-6: [KubeFATE deployment](https://github.com/FederatedAI/KubeFATE/tree/master/docker-deploy)
38 |
39 | Link 4-7: [FedAI official website](https://cn.fedai.org/cases/)
40 |
41 |
42 |
43 | ## Chapter 5
44 |
45 | Link 5-1: [Code for Chapter 5](https://github.com/FederatedAI/Practicing-Federated-Learning/tree/master/chapter05_FATE_HFL)
46 |
47 | Link 5-2: [Breast cancer dataset](https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(original))
48 |
49 |
50 |
51 | ## Chapter 6
52 |
53 | Link 6-1: [Code for Chapter 6](https://github.com/FederatedAI/Practicing-Federated-Learning/tree/master/chapter06_FATE_VFL)
54 |
55 | Link 6-2: [Boston Housing dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)
56 |
57 |
58 |
59 | ## Chapter 7
60 |
61 | Link 7-1: [FATE project repository](https://github.com/FederatedAI/FATE)
62 |
63 | Link 7-2: [FATE documentation](https://github.com/FederatedAI/DOC-CHN)
64 |
65 | Link 7-3: [Companion code for this book](https://github.com/FederatedAI/Practicing-Federated-Learning)
66 |
67 | Link 7-4: [TFF project repository](https://github.com/tensorflow/federated)
68 |
69 | Link 7-5: [TFF official website](https://www.tensorflow.org/federated?hl=zh-cn)
70 |
71 | Link 7-6: [PySyft project repository](https://github.com/OpenMined/PySyft)
72 |
73 | Link 7-7: [PySyft official website](https://www.openmined.org/)
74 |
75 | Link 7-8: [Federated learning in healthcare (video)](https://www.youtube.com/watch?v=bVU-Ea6hc0k)
76 |
77 | Link 7-9: [Clara federated learning platform](https://devblogs.nvidia.com/federated-learning-clara/)
78 |
79 | Link 7-10: [PaddleFL official documentation](https://github.com/PaddlePaddle/PaddleFL)
80 |
81 | Link 7-11: [PaddleFL project repository](https://github.com/PaddlePaddle/PaddleFL)
82 |
83 | Link 7-12: [Angel project repository](https://github.com/Angel-ML/angel)
84 |
85 | Link 7-13: [Tongdun iBond platform website](https://www.tongdun.cn/ai/solution/aiknowledge)
86 |
87 |
88 |
89 | ## Chapter 9
90 |
91 | Link 9-1: [Federated matrix factorization project](https://github.com/FederatedAI/FedRec/tree/master/federatedrec/matrix_factorization/hetero_matrixfactor)
92 |
93 | Link 9-2: [Federated factorization machine project](https://github.com/FederatedAI/FedRec/tree/master/federatedrec/factorization_machine/hetero_factorization_machine)
94 |
95 | Link 9-3: [Federated recommendation project](https://github.com/FederatedAI/FedRec/tree/master/federatedrec)
96 |
97 | Link 9-4: [Federated recommendation cloud service](https://market.cloud.tencent.com/products/19083)
98 |
99 |
100 |
101 | ## Chapter 10
102 |
103 | Link 10-1: [Federated vision paper](https://arxiv.org/abs/2001.06202)
104 |
105 | Link 10-2: [Python implementation of the vision case based on Flask-SocketIO](https://github.com/FederatedAI/federated_learning_in_action/blob/master/chapter10_FL_Computer-Vision)
106 |
107 | Link 10-3: [PaddleFL implementation of the vision case](https://github.com/FederatedAI/FedVision)
108 |
109 | Link 10-4: [Flask-SocketIO official documentation](https://github.com/miguelgrinberg/Flask-SocketIO)
110 |
111 | Link 10-5: [FATE dataset website](https://dataset.fedai.org/)
112 |
113 |
114 |
115 | ## Chapter 12
116 |
117 | Link 12-1: [Paper for the healthcare case study](https://arxiv.org/abs/2006.10517)
118 |
119 | Link 12-2: [LIDC-IDRI public lung nodule dataset](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)
120 |
121 |
122 |
123 | ## Chapter 13
124 |
125 | Link 13-1: [WeBank AI research on flexible employment wins the IJCAI 2019 "Innovation Award"](http://www.geekpark.net/news/246406)
126 |
127 | Link 13-2: [Zhiyouren app](http://www.zyrwork.com/)
128 |
129 | Link 13-3: [FedGame video](https://youtu.be/4qd48QfcsXI)
130 |
131 |
132 |
133 | ## Chapter 14
134 |
135 | Link 14-1: [Guizhou Big Data Exchange website](http://www.gbdex.com/website/)
136 |
137 | Link 14-2: [Guiyang Big Data Exchange data pricing](http://trade.gbdex.com/trade.web/userMessage/dataServer?type=5)
138 |
139 | Link 14-3: [FedCoin paper](https://arxiv.org/abs/2002.11711)
140 |
141 | Link 14-4: [FedCoin demo page](http://demo.blockchain-neu.com)
142 |
143 | Link 14-5: [FedCoin video](https://youtu.be/u5LPLdZvd0g)
144 |
145 |
146 |
147 | ## Chapter 15
148 |
149 | Link 15-1: [Backdoor attack code](https://github.com/FederatedAI/federated_learning_in_action/blob/master/chapter15_Attack_and_Defense)
150 |
151 |
152 |
153 | ## Chapter 16
154 |
155 | Link 16-1: [Python socket documentation](https://docs.python.org/3/library/socket.html)
156 |
157 | Link 16-2: [python-socketio documentation](https://python-socketio.readthedocs.io/en/latest/index.html)
158 |
159 | Link 16-3: [Flask-SocketIO documentation](https://flask-socketio.readthedocs.io/en/latest/)
160 |
161 | Link 16-4: [gRPC official website](https://grpc.io/)
162 |
163 | Link 16-5: [ICE official website](https://zeroc.com/)
164 |
165 | Link 16-6: [ICE download page](https://zeroc.com/downloads/ice)
166 |
167 | Link 16-7: [ICE documentation](https://doc.zeroc.com/)
168 |
169 |
170 |
171 | ## Chapter 18
172 |
173 | Link 18-1: [Split Learning project page](https://splitlearning.github.io/)
174 |
175 | Link 18-2: [AWS Greengrass](https://aws.amazon.com/cn/greengrass/)
176 |
177 | Link 18-3: [Azure IoT Edge](https://azure.microsoft.com/en-us/services/iot-edge/)
178 |
179 | Link 18-4: [Edge TPU](https://cloud.google.com/edge-tpu)
180 |
181 | Link 18-5: [Google Cloud IoT Core](https://cloud.google.com/iot-core)
182 |
183 |
184 |
185 | ## Chapter 19
186 |
187 | Link 19-1: [IEEE standard project on federated machine learning](https://standards.ieee.org/project/3652_1.html)
188 |
189 | Link 19-2: [Federated machine learning working group](https://sagroups.ieee.org/3652-1/)
190 |
191 | Link 19-3: [Clara federated learning platform for healthcare](https://devblogs.nvidia.com/federated-learning-clara/)
192 |
193 | Link 19-4: [GBoard input-method model](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html)
194 |
195 | Link 19-5: [Explainable AI standard](https://sagroups.ieee.org/2894/)
--------------------------------------------------------------------------------