├── TDX
│   ├── torchserve.png
│   ├── Presentation1.png
│   ├── Presentation1.pptx
│   ├── pytorch-icon.svg
│   ├── trusted-big-data.Dockerfile
│   ├── trusted-deep-learning.Dockerfile
│   └── DEVCATALOG.md
├── security.md
├── README.md
├── CONTRIBUTING.md
├── CODE_OF_CONDUCT.md
└── SGX
    ├── base.Dockerfile
    ├── trusted-bigdata.Dockerfile
    └── DEVCATALOG.md
/TDX/torchserve.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel/BigDL-Privacy-Preserving-Machine-Learning-Toolkit/main/TDX/torchserve.png
--------------------------------------------------------------------------------
/TDX/Presentation1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel/BigDL-Privacy-Preserving-Machine-Learning-Toolkit/main/TDX/Presentation1.png
--------------------------------------------------------------------------------
/TDX/Presentation1.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel/BigDL-Privacy-Preserving-Machine-Learning-Toolkit/main/TDX/Presentation1.pptx
--------------------------------------------------------------------------------
/TDX/pytorch-icon.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/security.md:
--------------------------------------------------------------------------------
1 | # Security Policy
2 | Intel is committed to rapidly addressing security vulnerabilities affecting our customers and providing clear guidance on the solution, impact, severity and mitigation.
3 |
4 | ## Reporting a Vulnerability
5 | Please report any security vulnerabilities in this project utilizing the guidelines [here](https://www.intel.com/content/www/us/en/security-center/vulnerability-handling-guidelines.html).
6 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | PROJECT NOT UNDER ACTIVE MANAGEMENT
2 |
3 | This project will no longer be maintained by Intel.
4 |
5 | Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
6 |
7 | Intel no longer accepts patches to this project.
8 |
9 | If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.
10 |
11 | Contact: webadmin@linux.intel.com
12 |
--------------------------------------------------------------------------------
/TDX/trusted-big-data.Dockerfile:
--------------------------------------------------------------------------------
1 | FROM intelanalytics/bigdl-k8s:2.4.0-SNAPSHOT
2 |
3 | COPY ./download-bigdl-ppml.sh /opt/download-bigdl-ppml.sh
4 | RUN chmod a+x /opt/download-bigdl-ppml.sh
5 |
6 | RUN apt-get update --fix-missing && \
7 | env DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y --no-install-recommends tzdata && \
8 | apt-get install --no-install-recommends software-properties-common -y && \
9 | add-apt-repository ppa:deadsnakes/ppa -y && \
10 | # Install python3.8
11 | apt-get install -y --no-install-recommends python3.8 python3.8-dev python3.8-distutils build-essential python3-wheel python3-pip && \
12 | rm /usr/bin/python3 && \
13 | ln -s /usr/bin/python3.8 /usr/bin/python3 && \
14 | pip3 install --no-cache-dir --upgrade pip && \
15 | pip3 install --no-cache-dir setuptools && \
16 | pip3 install --no-cache-dir numpy && \
17 | ln -s /usr/bin/python3 /usr/bin/python && \
18 | # Download BigDL PPML jar with dependency jars
19 | /opt/download-bigdl-ppml.sh && apt-get clean && rm -rf /var/lib/apt/lists/*
20 |
21 | ENV PYTHONPATH /usr/lib/python3.8:/usr/lib/python3.8/lib-dynload:/usr/local/lib/python3.8/dist-packages:/usr/lib/python3/dist-packages
22 |
--------------------------------------------------------------------------------
/TDX/trusted-deep-learning.Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:20.04
2 |
3 | RUN mkdir -p /ppml/ && \
4 | # Add example
5 | mkdir /ppml/examples
6 |
7 | COPY ./pert.py /ppml/examples/pert.py
8 | COPY ./pert_nano.py /ppml/examples/pert_nano.py
9 |
10 | # Install PYTHON3.8
11 | RUN env DEBIAN_FRONTEND=noninteractive apt-get update && \
12 | apt-get install --no-install-recommends software-properties-common -y && \
13 | add-apt-repository ppa:deadsnakes/ppa -y && \
14 | apt-get install --no-install-recommends -y python3.8 python3.8-dev python3.8-distutils build-essential python3-wheel python3-pip && \
15 | apt-get install --no-install-recommends -y google-perftools=2.7-1ubuntu2 && \
16 | rm /usr/bin/python3 && \
17 | ln -s /usr/bin/python3.8 /usr/bin/python3 && \
18 | pip3 install --no-cache-dir --upgrade pip && \
19 | pip3 install --no-cache-dir setuptools && \
20 | pip3 install --no-cache-dir datasets==2.6.1 transformers intel_extension_for_pytorch && \
21 | ln -s /usr/bin/python3 /usr/bin/python
22 |
23 | ENV PYTHONPATH /usr/lib/python3.8:/usr/lib/python3.8/lib-dynload:/usr/local/lib/python3.8/dist-packages:/usr/lib/python3/dist-packages
24 |
25 | RUN pip3 install --pre --no-cache-dir --upgrade bigdl-nano[pytorch]
26 | WORKDIR /ppml
27 |
28 | ENTRYPOINT [ "/bin/bash" ]
29 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing
2 |
3 | ### License
4 |
5 | This project is licensed under the terms in [LICENSE]. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
6 |
7 | ### Sign your work
8 |
9 | Please use the sign-off line at the end of the patch. Your signature certifies that you wrote the patch or otherwise have the right to pass it on as an open-source patch. The rules are pretty simple: if you can certify
10 | the below (from [developercertificate.org](http://developercertificate.org/)):
11 |
12 | ```
13 | Developer Certificate of Origin
14 | Version 1.1
15 |
16 | Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
17 | 660 York Street, Suite 102,
18 | San Francisco, CA 94110 USA
19 |
20 | Everyone is permitted to copy and distribute verbatim copies of this
21 | license document, but changing it is not allowed.
22 |
23 | Developer's Certificate of Origin 1.1
24 |
25 | By making a contribution to this project, I certify that:
26 |
27 | (a) The contribution was created in whole or in part by me and I
28 | have the right to submit it under the open source license
29 | indicated in the file; or
30 |
31 | (b) The contribution is based upon previous work that, to the best
32 | of my knowledge, is covered under an appropriate open source
33 | license and I have the right under that license to submit that
34 | work with modifications, whether created in whole or in part
35 | by me, under the same open source license (unless I am
36 | permitted to submit under a different license), as indicated
37 | in the file; or
38 |
39 | (c) The contribution was provided directly to me by some other
40 | person who certified (a), (b) or (c) and I have not modified
41 | it.
42 |
43 | (d) I understand and agree that this project and the contribution
44 | are public and that a record of the contribution (including all
45 | personal information I submit with it, including my sign-off) is
46 | maintained indefinitely and may be redistributed consistent with
47 | this project or the open source license(s) involved.
48 | ```
49 |
50 | Then you just add a line to every git commit message:
51 |
52 | Signed-off-by: Joe Smith <joe.smith@email.com>
53 |
54 | Use your real name (sorry, no pseudonyms or anonymous contributions.)
55 |
56 | If you set your `user.name` and `user.email` git configs, you can sign your
57 | commit automatically with `git commit -s`.
58 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | We as members, contributors, and leaders pledge to make participation in our
6 | community a harassment-free experience for everyone, regardless of age, body
7 | size, visible or invisible disability, ethnicity, sex characteristics, gender
8 | identity and expression, level of experience, education, socio-economic status,
9 | nationality, personal appearance, race, caste, color, religion, or sexual
10 | identity and orientation.
11 |
12 | We pledge to act and interact in ways that contribute to an open, welcoming,
13 | diverse, inclusive, and healthy community.
14 |
15 | ## Our Standards
16 |
17 | Examples of behavior that contributes to a positive environment for our
18 | community include:
19 |
20 | * Demonstrating empathy and kindness toward other people
21 | * Being respectful of differing opinions, viewpoints, and experiences
22 | * Giving and gracefully accepting constructive feedback
23 | * Accepting responsibility and apologizing to those affected by our mistakes,
24 | and learning from the experience
25 | * Focusing on what is best not just for us as individuals, but for the overall
26 | community
27 |
28 | Examples of unacceptable behavior include:
29 |
30 | * The use of sexualized language or imagery, and sexual attention or advances of
31 | any kind
32 | * Trolling, insulting or derogatory comments, and personal or political attacks
33 | * Public or private harassment
34 | * Publishing others' private information, such as a physical or email address,
35 | without their explicit permission
36 | * Other conduct which could reasonably be considered inappropriate in a
37 | professional setting
38 |
39 | ## Enforcement Responsibilities
40 |
41 | Community leaders are responsible for clarifying and enforcing our standards of
42 | acceptable behavior and will take appropriate and fair corrective action in
43 | response to any behavior that they deem inappropriate, threatening, offensive,
44 | or harmful.
45 |
46 | Community leaders have the right and responsibility to remove, edit, or reject
47 | comments, commits, code, wiki edits, issues, and other contributions that are
48 | not aligned to this Code of Conduct, and will communicate reasons for moderation
49 | decisions when appropriate.
50 |
51 | ## Scope
52 |
53 | This Code of Conduct applies within all community spaces, and also applies when
54 | an individual is officially representing the community in public spaces.
55 | Examples of representing our community include using an official e-mail address,
56 | posting via an official social media account, or acting as an appointed
57 | representative at an online or offline event.
58 |
59 | ## Enforcement
60 |
61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
62 | reported to the community leaders responsible for enforcement at
63 | CommunityCodeOfConduct AT intel DOT com.
64 | All complaints will be reviewed and investigated promptly and fairly.
65 |
66 | All community leaders are obligated to respect the privacy and security of the
67 | reporter of any incident.
68 |
69 | ## Enforcement Guidelines
70 |
71 | Community leaders will follow these Community Impact Guidelines in determining
72 | the consequences for any action they deem in violation of this Code of Conduct:
73 |
74 | ### 1. Correction
75 |
76 | **Community Impact**: Use of inappropriate language or other behavior deemed
77 | unprofessional or unwelcome in the community.
78 |
79 | **Consequence**: A private, written warning from community leaders, providing
80 | clarity around the nature of the violation and an explanation of why the
81 | behavior was inappropriate. A public apology may be requested.
82 |
83 | ### 2. Warning
84 |
85 | **Community Impact**: A violation through a single incident or series of
86 | actions.
87 |
88 | **Consequence**: A warning with consequences for continued behavior. No
89 | interaction with the people involved, including unsolicited interaction with
90 | those enforcing the Code of Conduct, for a specified period of time. This
91 | includes avoiding interactions in community spaces as well as external channels
92 | like social media. Violating these terms may lead to a temporary or permanent
93 | ban.
94 |
95 | ### 3. Temporary Ban
96 |
97 | **Community Impact**: A serious violation of community standards, including
98 | sustained inappropriate behavior.
99 |
100 | **Consequence**: A temporary ban from any sort of interaction or public
101 | communication with the community for a specified period of time. No public or
102 | private interaction with the people involved, including unsolicited interaction
103 | with those enforcing the Code of Conduct, is allowed during this period.
104 | Violating these terms may lead to a permanent ban.
105 |
106 | ### 4. Permanent Ban
107 |
108 | **Community Impact**: Demonstrating a pattern of violation of community
109 | standards, including sustained inappropriate behavior, harassment of an
110 | individual, or aggression toward or disparagement of classes of individuals.
111 |
112 | **Consequence**: A permanent ban from any sort of public interaction within the
113 | community.
114 |
115 | ## Attribution
116 |
117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118 | version 2.1, available at
119 | [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
120 |
121 | Community Impact Guidelines were inspired by
122 | [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
123 |
124 | For answers to common questions about this code of conduct, see the FAQ at
125 | [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
126 | [https://www.contributor-covenant.org/translations][translations].
127 |
128 | [homepage]: https://www.contributor-covenant.org
129 | [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
130 | [Mozilla CoC]: https://github.com/mozilla/diversity
131 | [FAQ]: https://www.contributor-covenant.org/faq
132 |
--------------------------------------------------------------------------------
/SGX/base.Dockerfile:
--------------------------------------------------------------------------------
1 | ARG BIGDL_VERSION=2.4.0-SNAPSHOT
2 | ARG SPARK_VERSION=3.1.3
3 | ARG GRAMINE_BRANCH=devel-v1.3.1-2022-12-28
4 |
5 | FROM ubuntu:20.04
6 | ARG http_proxy
7 | ARG https_proxy
8 | ARG no_proxy
9 | ARG BIGDL_VERSION
10 | ARG GRAMINE_BRANCH
11 | ARG JDK_VERSION=8u192
12 | ARG JDK_URL
13 | ARG SPARK_VERSION
14 |
15 | ENV BIGDL_VERSION ${BIGDL_VERSION}
16 | ENV LOCAL_IP 127.0.0.1
17 | ENV LC_ALL C.UTF-8
18 | ENV LANG C.UTF-8
19 | ENV JAVA_HOME /opt/jdk8
20 | ENV PATH ${JAVA_HOME}/bin:${PATH}
21 |
22 | ENV MALLOC_ARENA_MAX 4
23 |
24 | SHELL ["/bin/bash", "-o", "pipefail", "-c"]
25 |
26 | RUN mkdir -p /ppml/
27 |
28 | COPY ./bash.manifest.template /ppml/bash.manifest.template
29 | COPY ./Makefile /ppml/Makefile
30 | COPY ./init.sh /ppml/init.sh
31 | COPY ./clean.sh /ppml/clean.sh
32 |
33 | # These files are used for the attestation service and MREnclave registration
34 | COPY ./register-mrenclave.py /ppml/register-mrenclave.py
35 | COPY ./verify-attestation-service.sh /ppml/verify-attestation-service.sh
36 |
37 | COPY ./_dill.py.patch /_dill.py.patch
38 | COPY ./python-pslinux.patch /python-pslinux.patch
39 |
40 | # Python3.9
41 | RUN env DEBIAN_FRONTEND=noninteractive apt-get update && \
42 | apt-get install --no-install-recommends software-properties-common -y && \
43 | add-apt-repository ppa:deadsnakes/ppa -y && \
44 | apt-get install -y --no-install-recommends python3.9 && \
45 | rm /usr/bin/python3 && \
46 | ln -s /usr/bin/python3.9 /usr/bin/python3 && \
47 | ln -s /usr/bin/python3 /usr/bin/python && \
48 | apt-get install -y --no-install-recommends python3-pip python3.9-dev python3-wheel && \
49 | pip3 install --no-cache-dir --upgrade pip && \
50 | pip3 install --no-cache-dir requests argparse cryptography==3.3.2 urllib3 && \
51 | pip3 install --no-cache-dir --upgrade requests && \
52 | pip3 install --no-cache-dir setuptools==58.4.0 && \
53 | # Gramine
54 | apt-get update --fix-missing && \
55 | env DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y --no-install-recommends apt-utils wget unzip protobuf-compiler libgmp3-dev libmpfr-dev libmpfr-doc libmpc-dev && \
56 | pip install --no-cache-dir meson==0.63.2 cmake==3.24.1.1 toml==0.10.2 pyelftools cffi dill==0.3.4 psutil && \
57 | # Create a link to cmake and upgrade the cmake version so that it is compatible with the latest meson build (requires version >= 3.17)
58 | ln -s /usr/local/bin/cmake /usr/bin/cmake && \
59 | env DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential autoconf bison gawk git ninja-build python3-click python3-jinja2 wget pkg-config cmake libcurl4-openssl-dev libprotobuf-c-dev protobuf-c-compiler python3-cryptography python3-pip python3-protobuf nasm && \
60 | git clone https://github.com/analytics-zoo/gramine.git /gramine && \
61 | git clone https://github.com/intel/SGXDataCenterAttestationPrimitives.git /opt/intel/SGXDataCenterAttestationPrimitives && \
62 | apt-get clean && \
63 | rm -rf /var/lib/apt/lists/*
64 | WORKDIR /gramine
65 | RUN git checkout ${GRAMINE_BRANCH} && \
66 | # Also create the patched gomp
67 | meson setup build/ --buildtype=release -Dsgx=enabled -Dsgx_driver=dcap1.10 -Dlibgomp=enabled && \
68 | ninja -C build/ && \
69 | ninja -C build/ install
70 | WORKDIR /ppml/
71 | RUN mkdir -p /ppml/lib/ && \
72 | cp /usr/local/lib/x86_64-linux-gnu/gramine/runtime/glibc/libgomp.so.1 /ppml/lib/ && \
73 | # meson will copy the original file instead of the symlink, which enables us to delete the gramine directory entirely
74 | rm -rf /gramine && \
75 | apt-get update --fix-missing && \
76 | env DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y tzdata && \
77 | apt-get install -y --no-install-recommends apt-utils vim curl nano wget unzip git tree zip && \
78 | apt-get install -y --no-install-recommends libsm6 make build-essential && \
79 | apt-get install -y --no-install-recommends autoconf gawk bison libcurl4-openssl-dev python3-protobuf libprotobuf-c-dev protobuf-c-compiler && \
80 | apt-get install -y --no-install-recommends netcat net-tools shellcheck && \
81 | patch /usr/local/lib/python3.9/dist-packages/dill/_dill.py /_dill.py.patch && \
82 | patch /usr/local/lib/python3.9/dist-packages/psutil/_pslinux.py /python-pslinux.patch && \
83 | cp /usr/lib/x86_64-linux-gnu/libpython3.9.so.1 /usr/lib/libpython3.9.so.1 && \
84 | chmod a+x /ppml/init.sh && \
85 | chmod a+x /ppml/clean.sh && \
86 | chmod +x /ppml/verify-attestation-service.sh && \
87 | # Install sgxsdk and dcap, which is used for remote attestation
88 | mkdir -p /opt/intel/
89 | WORKDIR /opt/intel
90 | RUN wget -q https://download.01.org/intel-sgx/sgx-dcap/1.16/linux/distro/ubuntu20.04-server/sgx_linux_x64_sdk_2.19.100.3.bin && \
91 | chmod a+x ./sgx_linux_x64_sdk_2.19.100.3.bin && \
92 | printf "no\n/opt/intel\n"|./sgx_linux_x64_sdk_2.19.100.3.bin && \
93 | shellcheck /opt/intel/sgxsdk/environment && \
94 | . /opt/intel/sgxsdk/environment
95 | WORKDIR /opt/intel
96 | RUN wget -q https://download.01.org/intel-sgx/sgx-dcap/1.16/linux/distro/ubuntu20.04-server/sgx_debian_local_repo.tgz && \
97 | tar xzf sgx_debian_local_repo.tgz && \
98 | echo 'deb [trusted=yes arch=amd64] file:///opt/intel/sgx_debian_local_repo focal main' | tee /etc/apt/sources.list.d/intel-sgx.list && \
99 | wget -qO - https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | apt-key add - && \
100 | apt-get update && \
101 | apt-get install -y --no-install-recommends libsgx-enclave-common-dev libsgx-ae-qe3 libsgx-ae-qve libsgx-urts libsgx-dcap-ql libsgx-dcap-default-qpl libsgx-dcap-quote-verify-dev libsgx-dcap-ql-dev libsgx-dcap-default-qpl-dev libsgx-quote-ex-dev libsgx-uae-service libsgx-ra-network libsgx-ra-uefi libtdx-attest libtdx-attest-dev && \
102 | apt-get clean && \
103 | rm -rf /var/lib/apt/lists/*
104 | # java
105 | WORKDIR /opt
106 | RUN wget -q $JDK_URL && \
107 | gunzip jdk-$JDK_VERSION-linux-x64.tar.gz && \
108 | tar -xf jdk-$JDK_VERSION-linux-x64.tar -C /opt && \
109 | rm jdk-$JDK_VERSION-linux-x64.tar && \
110 | mv /opt/jdk* /opt/jdk8 && \
111 | ln -s /opt/jdk8 /opt/jdk && \
112 | # related dirs required by bash.manifest
113 | mkdir -p /root/.cache/ && \
114 | mkdir -p /root/.keras/datasets && \
115 | mkdir -p /root/.zinc && \
116 | mkdir -p /root/.m2 && \
117 | mkdir -p /root/.kube/ && \
118 | mkdir -p /ppml/encrypted-fs && \
119 | # folder required for distributed encrypted file system
120 | mkdir -p /ppml/encrypted-fsd && \
121 | # folder to contain the keys (primary key and data key)
122 | mkdir -p /ppml/encrypted_keys
123 | # Install bigdl-ppml lib
124 | WORKDIR /ppml/
125 | RUN git clone https://github.com/intel-analytics/BigDL.git
126 | WORKDIR /ppml/BigDL
127 | RUN mkdir -p /ppml/bigdl-ppml/ && \
128 | cp -r /ppml/BigDL/python/ppml/src/ /ppml/bigdl-ppml && \
129 | rm -rf /ppml/BigDL
130 |
131 |
132 |
133 | COPY ./download_jars.sh /ppml
134 | COPY ./attestation.sh /ppml
135 | COPY ./encrypted-fsd.sh /ppml
136 |
137 | WORKDIR /ppml
138 | RUN bash /ppml/download_jars.sh
139 |
140 | ENV PYTHONPATH /usr/lib/python3.9:/usr/lib/python3.9/lib-dynload:/usr/local/lib/python3.9/dist-packages:/usr/lib/python3/dist-packages:/ppml/bigdl-ppml/src
141 | ENTRYPOINT [ "/bin/bash" ]
142 |
--------------------------------------------------------------------------------
/TDX/DEVCATALOG.md:
--------------------------------------------------------------------------------
1 | # **BigDL PPML on TDX**
2 |
3 | ## Introduction
4 |
5 | Learn to use BigDL PPML (BigDL Privacy Preserving Machine Learning) to run end-to-end Big Data & AI applications with distributed clusters on Intel® Trust Domain Extensions (TDX).
6 |
7 | For more workflow examples and reference implementations, please check [Developer Catalog](https://developer.intel.com/aireferenceimplementations).
8 |
9 | ## Solution Technical Overview
10 |
11 | [PPML](https://bigdl.readthedocs.io/en/latest/doc/PPML/Overview/ppml.html) (Privacy Preserving Machine Learning) in [BigDL 2.0](https://github.com/intel-analytics/BigDL) provides a Trusted Cluster Environment for secure Big Data & AI applications, even in an untrusted cloud environment. By combining Intel® Trust Domain Extensions (TDX) with several other security technologies (e.g., attestation, key management service, private set intersection, federated learning, homomorphic encryption, etc.), BigDL PPML ensures end-to-end security for entire distributed workflows, such as Apache Spark, Apache Flink, XGBoost, TensorFlow, and PyTorch.
12 |
13 | For more details, please visit [BigDL 2.0](https://github.com/intel-analytics/BigDL) GitHub Repository.
14 |
15 | ## Solution Technical Details
16 |
17 | PPML ensures security for all dimensions of the data lifecycle: data at rest, data in transit, and data in use. Data being transferred on a network is `in transit`, data in storage is `at rest`, and data being processed is `in use`.
18 |
19 | 
20 |
21 | PPML protects computation and memory with Trusted Domains, storage (e.g., data and model) with encryption, and network communication with remote attestation and Transport Layer Security (TLS), and it offers optional Federated Learning support.
22 |
23 | 
24 |
25 | With BigDL PPML, you can run trusted Big Data & AI applications
26 |
27 | - **Trusted Spark SQL & Dataframe**: with trusted Big Data analytics and ML/DL support, users can run standard Spark data analysis (such as Spark SQL, Dataframe, MLlib, etc.) in a secure and trusted fashion.
28 |
29 | - **Trusted ML (Machine Learning)**: with trusted Big Data analytics and ML/DL support, users can run distributed machine learning (such as MLlib, XGBoost, etc.) in a secure and trusted fashion.
30 |
31 | - **Trusted DL (Deep Learning)**: with Trusted Deep Learning Toolkit, users can run secured end-to-end PyTorch training using either a single machine or cloud-native clusters in a trusted execution environment.
32 |
33 |
34 | ## Validated Hardware Details
35 | The hardware below is recommended for use with this reference implementation.
36 |
37 | Intel® 4th Gen Xeon® Scalable processors or later
38 |
39 | ## How it works
40 |
41 | PPML provides two different Trusted Execution Environments: TDX-VM (Virtual Machine) and TDX-CoCo (Confidential Containers). This section introduces the overall architecture of each.
42 |
43 | ## Get Started
44 |
45 | ### Prepare TDX environment
46 |
47 | Prepare your environment first, including TDX-VM/TDX-CoCo orchestration, K8s cluster setup, key management service (KMS), attestation service (AS) setup, and BigDL PPML client container preparation. **Please follow the detailed steps in** [Prepare Environment for TDX-VM](https://github.com/intel-analytics/BigDL/blob/main/ppml/tdx/tdx-vm/README.md#prepare-tdx-vm-environment) and [Prepare Environment for TDX-CoCo](https://github.com/intel-analytics/BigDL/blob/main/ppml/tdx/tdx-coco/README.md#prepare-tdx-coco-environment).
48 |
49 |
50 |
51 | ### BigDL PPML End-to-End Distributed PyTorch Training
52 |
53 | In this section, we use PyTorch's DistributedDataParallel module to fine-tune the [PERT model](https://github.com/ymcui/PERT). This example demonstrates the entire BigDL PPML end-to-end distributed PyTorch training workflow.
54 |
55 | 
56 |
57 | #### Step 1. Prepare your PPML image for the production environment
58 |
59 | To build a secure PPML image for a production environment, BigDL prepared a public base image that does not contain any secrets. You can customize your image on top of this base image.
60 |
61 | 1. Prepare BigDL Base Image
62 |
63 | Users can pull the base image from Docker Hub or build it themselves.
64 |
65 | Pull the base image
66 | ```bash
67 | docker pull intelanalytics/bigdl-ppml-trusted-deep-learning-gramine-base:2.2.0
68 | ```
69 |
70 | 2. Build a Custom Image
71 |
72 | When the base image is ready, you need to generate your enclave key which will be used when building a custom image. Keep the enclave key safe for future usage.
73 |
74 | Run the following command to generate the enclave key `enclave-key.pem`, which is used to launch and sign the SGX enclave.
75 |
76 | ```bash
77 | cd BigDL/ppml/trusted-deep-learning/ref
78 | openssl genrsa -3 -out enclave-key.pem 3072
79 | ```
80 |
81 | When the enclave key `enclave-key.pem` is generated, you are ready to build your custom image by running the following command:
82 |
83 | ```bash
84 | # Under BigDL/ppml/trusted-deep-learning/ref dir
85 | # modify custom parameters in build-custom-image.sh
86 | ./build-custom-image.sh
87 | cd ..
88 | ```
89 |
90 | Note: you can also customize the image according to your own needs, e.g., install third-party Python libraries or jars.
91 |
92 | #### Step 2. Encrypt and Upload Data
93 |
94 | Encrypt the input data of your Big Data & AI applications (here we use PyTorch) and then upload the encrypted data to the Network File System (NFS) server. See [Encrypt Your Data](https://github.com/intel-analytics/BigDL/tree/main/ppml/trusted-deep-learning#encryption--decryption) for more details.
95 |
96 | 1. Download the input data `seamew/ChnSentiCorp` from [huggingface](https://huggingface.co/datasets/seamew/ChnSentiCorp).
97 |
98 |
99 | 2. Encrypt the `seamew/ChnSentiCorp` dataset. The encryption should be done in a trusted environment. For the encryption keys, we choose to request an encryption key from EHSM (our choice of KMS). See this [example](https://github.com/intel-analytics/BigDL/blob/branch-2.2/ppml/trusted-deep-learning/base/load_save_encryption_ex.py) for detailed steps.
100 |
101 | The `patch_encryption()` function used in the example script is provided by `bigdl-nano`. Check [here](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/Nano/pytorch.html#bigdl.nano.pytorch.patching.patch_encryption) for detailed information.
102 |
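To fetch the dataset from step 1 above, one simple option is to clone the dataset repository directly from Hugging Face (a sketch; the target directory is illustrative, and gated datasets may require you to log in first):

```bash
# Clone the seamew/ChnSentiCorp dataset repository into a local data directory
git lfs install
git clone https://huggingface.co/datasets/seamew/ChnSentiCorp /ppml/data/ChnSentiCorp
```

The encryption itself should then follow the linked `load_save_encryption_ex.py` example inside your trusted environment before the data is uploaded to NFS.
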
103 | #### Step 3. Build Distributed PyTorch training script
104 |
105 | To build your own Distributed PyTorch training script, you can refer to the official [tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html). The code we use for fine-tuning the PERT model can be found [here](https://github.com/intel-analytics/BigDL/blob/main/ppml/trusted-deep-learning/base/pert.py).
106 |
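Before wiring such a script into the PPML submission flow, it can be sanity-checked with a plain `torchrun` launch. The sketch below assumes two nodes with one process each; the rendezvous endpoint is a placeholder, and `pert.py` may require additional script arguments of its own:

```bash
# Launch the DDP fine-tuning script on two nodes (run this on each node).
# The rendezvous endpoint below is a placeholder for one node's reachable address.
torchrun --nnodes=2 --nproc_per_node=1 \
         --rdzv_backend=c10d --rdzv_endpoint=192.168.0.10:29500 \
         /ppml/examples/pert.py
```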
107 |
108 | #### Step 4. Submit Job
109 |
110 | When the Big Data & AI application and its input data are prepared, you are ready to submit BigDL PPML jobs. We have prepared a [Python script](https://github.com/intel-analytics/BigDL/blob/main/python/nano/src/bigdl/nano/k8s/bigdl_submit.py) which can be used to submit training jobs.
111 |
112 | The job submission script is also included in the `intelanalytics/bigdl-ppml-trusted-deep-learning-gramine-base:2.2.0` image, so it should already be present in your custom image. The default location for the script is `/usr/local/lib/python3.7/dist-packages/bigdl/nano/k8s/bigdl_submit.py`.
113 |
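Since the exact submission options are defined by `bigdl_submit.py` itself, a safe first step inside your custom image is to print its built-in help (this assumes the script exposes a standard `--help` flag at the default path mentioned above):

```bash
# Inspect the submission options accepted by the bundled script (path from the base image)
python /usr/local/lib/python3.7/dist-packages/bigdl/nano/k8s/bigdl_submit.py --help
```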
114 |
115 | #### Step 5. Monitor job and check results
116 |
117 | Once the job has been scheduled and booted successfully, you can monitor the training progress by checking the logs of the pod.
118 |
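For example, with `kubectl` you can list the training pods and stream a worker's log (the pod name below is a placeholder for whatever name your submission produced):

```bash
# List pods created by the training job, then follow one worker's log
kubectl get pods
kubectl logs -f <training-pod-name>
```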
119 | If you want to save the trained model in an encrypted form, you can choose to pass a keyword argument `encryption_key` into the patched `torch.save` method. In our situation, the `encryption_key` is retrieved from EHSM in a trusted execution environment. Check the following code for an example:
120 |
121 | ```python
122 |
123 | from bigdl.nano.pytorch.patching import patch_encryption
124 | from bigdl.ppml.kms.ehsm.client import get_data_key_plaintext
125 |
126 | patch_encryption()
127 |
128 | def get_key():
129 | return get_data_key_plaintext(EHSM_IP, EHSM_PORT, encrypted_primary_key_path, encrypted_data_key_path)
130 |
131 | key = get_key()
132 | # Save the trained model in encrypted form
133 | torch.save(model.state_dict(), "pert.bin", encryption_key=key)
134 | ```
135 |
136 |
137 |
138 |
139 | ### BigDL PPML End-to-End Deep-learning Serving Service with TorchServe
140 | In this section, we go through the entire workflow to deploy a trusted Deep-Learning Serving (hereinafter called DL-Serving) service that classifies the sentiment of Chinese-language website reviews using [TorchServe](https://pytorch.org/serve/).
141 |
142 | 
143 |
144 | #### Step 1. Prepare your PPML image for the production environment
145 |
146 |
147 | To build a secure PPML image for a production environment, BigDL prepared a public base image that does not contain any secrets. You can customize your image on top of this base image.
148 |
149 | 1. Prepare BigDL Base Image
150 |
151 | Users can pull the base image from Docker Hub or build it themselves.
152 |
153 | Pull the base image
154 | ```bash
155 | docker pull intelanalytics/bigdl-ppml-trusted-dl-serving-gramine-base:2.2.0
156 | ```
157 |
158 | 2. Build a Custom Image
159 |
160 | When the base image is ready, you need to generate your enclave key which will be used when building a custom image. Keep the enclave key safe for future usage.
161 |
162 | Run the following command to generate the enclave key `enclave-key.pem`, which is used to launch and sign the SGX enclave.
163 |
164 | ```bash
165 | cd BigDL/ppml/trusted-dl-serving/ref
166 | openssl genrsa -3 -out enclave-key.pem 3072
167 | ```
168 |
169 | When the enclave key `enclave-key.pem` is generated, you are ready to build your custom image by running the following command:
170 |
171 | ```bash
172 | # Under BigDL/ppml/trusted-dl-serving/ref dir
173 | # modify custom parameters in build-custom-image.sh
174 | ./build-custom-image.sh
175 | cd ..
176 | ```
177 |
178 | Note: you can also customize the image according to your own needs, e.g., install third-party Python libraries or jars.
179 |
180 |
181 | #### Step 2. Prepare the Model Archive file and config file
182 |
183 | As with a normal TorchServe deployment, users need to prepare the `Model Archive` file using `torch-model-archiver` in advance (a sketch follows below). Check [here](https://github.com/pytorch/serve/tree/master/model-archiver#torch-model-archiver-for-torchserve) for detailed instructions on how to package the model files into a `mar` file.
184 |
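For reference, a typical `torch-model-archiver` invocation looks like the sketch below; the serialized model file and handler are placeholders for your own artifacts, while the model name and export path match the example configuration used later in this section:

```bash
# Package the model and its handler into NANO_FP32CL.mar under /ppml/
torch-model-archiver --model-name NANO_FP32CL \
                     --version 1.0 \
                     --serialized-file model.pt \
                     --handler handler.py \
                     --export-path /ppml/
```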
185 | TorchServe uses a `config.properties` file to store configurations. Examples can be found [here](https://pytorch.org/serve/configuration.html#config-model). An important configuration is `minWorkers`: the start script will try to boot `minWorkers` backends.
186 |
187 | To ensure end-to-end security, SSL should be enabled. You can refer to the official [documentation](https://pytorch.org/serve/configuration.html#enable-ssl) on how to enable SSL, and see the sketch below.
188 |
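As a rough sketch, enabling SSL amounts to switching the inference address to `https` and pointing TorchServe at your key and certificate in `config.properties` (the file paths below are placeholders):

```bash
# Append an illustrative SSL configuration to the TorchServe config file
cat >> config.properties <<'EOF'
inference_address=https://0.0.0.0:8443
private_key_file=/ppml/keys/server.key
certificate_file=/ppml/keys/server.pem
EOF
```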
189 | #### Step 3. Start TorchServe service
190 |
191 | In this section, we launch the TorchServe service inside Trusted Domains. An example `config.properties` is shown below:
192 |
193 | ```text
194 | inference_address=http://127.0.0.1:8085
195 | management_address=http://127.0.0.1:8081
196 | metrics_address=http://127.0.0.1:8082
197 | grpc_inference_port=7070
198 | grpc_management_port=7071
199 | model_store=/ppml/
200 | #initial_worker_port=25712
201 | load_models=NANO_FP32CL.mar
202 | enable_metrics_api=false
203 | models={\
204 | "NANO_FP32CL": {\
205 | "1.0": {\
206 | "defaultVersion": true,\
207 | "marName": "NANO_FP32CL.mar",\
208 | "minWorkers": 2,\
209 | "workers": 2,\
210 | "maxWorkers": 2,\
211 | "batchSize": 1,\
212 | "maxBatchDelay": 100,\
213 | "responseTimeout": 1200\
214 | }\
215 | }\
216 | }
217 | ```
218 |
219 | Assuming the above configuration file is stored at `/ppml/tsconfigfp32cl`, start the TorchServe service with:
220 |
221 | ```bash
222 | bash /ppml/torchserve/start-torchserve.sh -c /ppml/tsconfigfp32cl -f "0" -b "1,2"
223 | ```
224 |
225 | We pin CPU cores while booting up the frontend and backends: `-f "0"` pins the frontend to core 0, while `-b "1,2"` pins the first backend to core 1 and the second backend to core 2.
226 |
227 |
228 | #### Step 4. Access the service using https requests
229 |
230 | After the service has booted up, we can access it using HTTPS requests. Here we show a simple benchmark result using the [wrk](https://github.com/wg/wrk) tool. The results with 5 threads and 10 connections are:
231 |
232 | ```text
233 | Running 5m test @ http://127.0.0.1:8085/predictions/NANO_FP32CL
234 | 5 threads and 10 connections
235 | Thread Stats Avg Stdev Max +/- Stdev
236 | Latency 355.44ms 19.27ms 783.08ms 98.86%
237 | Req/Sec 6.42 2.69 10.00 49.52%
238 | 8436 requests in 5.00m, 1.95MB read
239 | Requests/sec: 28.11
240 | Transfer/sec: 6.67KB
241 | ```
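The numbers above come from an invocation along the following lines; the test duration matches the report, while the Lua script that builds the POST body is a placeholder you would adapt to your own input payload:

```bash
# 5 threads, 10 connections, 5-minute run against the prediction endpoint
# (post.lua is a placeholder script that sets the request method and body)
wrk -t5 -c10 -d5m -s post.lua http://127.0.0.1:8085/predictions/NANO_FP32CL
```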
242 |
243 |
244 |
245 |
246 | ## Learn More
247 |
248 | [BigDL PPML](https://github.com/intel-analytics/BigDL/tree/main/ppml)
249 |
250 | [Tutorials](https://bigdl.readthedocs.io/en/latest/doc/PPML/Overview/examples.html)
251 |
252 | [TDX whitepaper](https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-whitepaper-v4.pdf)
253 |
254 |
255 | ## Support Forum
256 |
257 | - [Mail List](mailto:bigdl-user-group+subscribe@googlegroups.com)
258 | - [User Group](https://groups.google.com/forum/#!forum/bigdl-user-group)
259 | - [Github Issues](https://github.com/intel-analytics/BigDL/issues)
260 | ---
--------------------------------------------------------------------------------
/SGX/trusted-bigdata.Dockerfile:
--------------------------------------------------------------------------------
1 | ARG BASE_IMAGE_NAME=intelanalytics/bigdl-ppml-gramine-base
2 | ARG BASE_IMAGE_TAG=2.4.0-SNAPSHOT
3 | ARG BIGDL_VERSION=2.4.0-SNAPSHOT
4 | ARG SPARK_VERSION=3.1.3
5 | ARG TINI_VERSION=v0.18.0
6 | ARG JDK_VERSION=8u192
7 | ARG JDK_URL
8 | ARG SPARK_JAR_REPO_URL
9 | ARG FLINK_VERSION=1.15.3
10 | ARG SCALA_VERSION=2.12
11 |
12 |
13 | # Stage.1 Spark & Hadoop & Hive & Flink
14 | FROM ubuntu:20.04 as bigdata
15 | ARG HTTP_PROXY_HOST
16 | ARG HTTP_PROXY_PORT
17 | ARG HTTPS_PROXY_HOST
18 | ARG HTTPS_PROXY_PORT
19 | ARG SPARK_VERSION
20 | ARG JDK_VERSION
21 | ARG JDK_URL
22 | ARG SPARK_JAR_REPO_URL
23 | ARG FLINK_VERSION
24 | ARG SCALA_VERSION
25 |
26 | ENV SPARK_VERSION ${SPARK_VERSION}
27 | ENV JAVA_HOME /opt/jdk${JDK_VERSION}
28 | ENV PATH ${JAVA_HOME}/bin:${PATH}
29 | ENV FLINK_HOME /opt/flink
30 | ENV GOSU_VERSION 1.11
31 |
32 | RUN apt-get update --fix-missing && \
33 | env DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y tzdata apt-utils wget unzip patch zip git maven nasm
34 | # java
35 | RUN wget -q $JDK_URL && \
36 | gunzip jdk-$JDK_VERSION-linux-x64.tar.gz && \
37 | tar -xf jdk-$JDK_VERSION-linux-x64.tar -C /opt && \
38 | rm jdk-$JDK_VERSION-linux-x64.tar && \
39 | mv /opt/jdk* /opt/jdk$JDK_VERSION && \
40 | ln -s /opt/jdk$JDK_VERSION /opt/jdk
41 |
42 | # spark
43 | WORKDIR /opt
44 | RUN wget -q https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop3.2.tgz && \
45 | tar -zxvf spark-${SPARK_VERSION}-bin-hadoop3.2.tgz && \
46 | mv spark-${SPARK_VERSION}-bin-hadoop3.2 spark-${SPARK_VERSION} && \
47 | rm spark-${SPARK_VERSION}-bin-hadoop3.2.tgz && \
48 | cp spark-${SPARK_VERSION}/conf/log4j.properties.template spark-${SPARK_VERSION}/conf/log4j.properties && \
49 | echo "$(printf '\nlog4j.logger.io.netty=ERROR')" >> spark-${SPARK_VERSION}/conf/log4j.properties && \
50 | rm spark-${SPARK_VERSION}/python/lib/pyspark.zip && \
51 | rm spark-${SPARK_VERSION}/jars/spark-core_2.12-$SPARK_VERSION.jar && \
52 | rm spark-${SPARK_VERSION}/jars/spark-launcher_2.12-$SPARK_VERSION.jar && \
53 | rm spark-${SPARK_VERSION}/jars/spark-kubernetes_2.12-$SPARK_VERSION.jar && \
54 | rm spark-${SPARK_VERSION}/jars/spark-network-common_2.12-$SPARK_VERSION.jar && \
55 | rm spark-${SPARK_VERSION}/examples/jars/spark-examples_2.12-$SPARK_VERSION.jar && \
56 | rm spark-${SPARK_VERSION}/jars/hadoop-common-3.2.0.jar && \
57 | rm spark-${SPARK_VERSION}/jars/hive-exec-2.3.7-core.jar
58 | COPY ./log4j2.xml /opt/spark-${SPARK_VERSION}/conf/log4j2.xml
59 | # spark modification
60 | RUN wget -q $SPARK_JAR_REPO_URL/spark-core_2.12-$SPARK_VERSION.jar && \
61 | wget -q $SPARK_JAR_REPO_URL/spark-kubernetes_2.12-$SPARK_VERSION.jar && \
62 | wget -q $SPARK_JAR_REPO_URL/spark-network-common_2.12-$SPARK_VERSION.jar && \
63 | wget -q $SPARK_JAR_REPO_URL/spark-examples_2.12-$SPARK_VERSION.jar && \
64 | wget -q $SPARK_JAR_REPO_URL/spark-launcher_2.12-$SPARK_VERSION.jar && \
65 | wget -q $SPARK_JAR_REPO_URL/pyspark.zip && \
66 | mv /opt/spark-core_2.12-$SPARK_VERSION.jar /opt/spark-${SPARK_VERSION}/jars/spark-core_2.12-$SPARK_VERSION.jar && \
67 | mv /opt/spark-launcher_2.12-$SPARK_VERSION.jar /opt/spark-${SPARK_VERSION}/jars/spark-launcher_2.12-$SPARK_VERSION.jar && \
68 | mv /opt/spark-kubernetes_2.12-$SPARK_VERSION.jar /opt/spark-${SPARK_VERSION}/jars/spark-kubernetes_2.12-$SPARK_VERSION.jar && \
69 | mv /opt/spark-network-common_2.12-$SPARK_VERSION.jar /opt/spark-${SPARK_VERSION}/jars/spark-network-common_2.12-$SPARK_VERSION.jar && \
70 | mv /opt/spark-examples_2.12-$SPARK_VERSION.jar /opt/spark-${SPARK_VERSION}/examples/jars/spark-examples_2.12-$SPARK_VERSION.jar && \
71 | mv /opt/pyspark.zip /opt/spark-${SPARK_VERSION}/python/lib/pyspark.zip && \
72 | sed -i 's/\#\!\/usr\/bin\/env bash/\#\!\/usr\/bin\/env bash\nset \-x/' /opt/spark-${SPARK_VERSION}/bin/spark-class && \
73 | rm -f /opt/spark-${SPARK_VERSION}/jars/log4j-1.2.17.jar && \
74 | rm -f /opt/spark-${SPARK_VERSION}/jars/slf4j-log4j12-1.7.16.jar && \
75 | rm -f /opt/spark-${SPARK_VERSION}/jars/apache-log4j-extras-1.2.17.jar && \
76 | rm -r /opt/spark-${SPARK_VERSION}/jars/slf4j-log4j12-1.7.30.jar && \
77 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-1.2-api/2.17.1/log4j-1.2-api-2.17.1.jar && \
78 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/slf4j/slf4j-reload4j/1.7.35/slf4j-reload4j-1.7.35.jar && \
79 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-api/2.17.1/log4j-api-2.17.1.jar && \
80 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-core/2.17.1/log4j-core-2.17.1.jar && \
81 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-slf4j-impl/2.17.1/log4j-slf4j-impl-2.17.1.jar && \
82 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/wildfly/openssl/wildfly-openssl/1.0.7.Final/wildfly-openssl-1.0.7.Final.jar && \
83 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-azure/3.2.0/hadoop-azure-3.2.0.jar && \
84 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-azure-datalake/3.2.0/hadoop-azure-datalake-3.2.0.jar && \
85 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/com/microsoft/azure/azure-storage/7.0.0/azure-storage-7.0.0.jar && \
86 | wget -qP /opt/spark-${SPARK_VERSION}/jars/ https://repo1.maven.org/maven2/com/microsoft/azure/azure-data-lake-store-sdk/2.2.9/azure-data-lake-store-sdk-2.2.9.jar
87 | # hadoop
88 | RUN apt-get update --fix-missing && \
89 | apt-get install --no-install-recommends -y build-essential && \
90 | wget -q https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.bz2 && \
91 | tar jxvf protobuf-2.5.0.tar.bz2
92 | WORKDIR /opt/protobuf-2.5.0
93 | RUN ./configure && \
94 | make && \
95 | make check && \
96 | export LD_LIBRARY_PATH=/usr/local/lib && \
97 | make install && \
98 | protoc --version
99 | WORKDIR /opt/
100 | RUN git clone https://github.com/analytics-zoo/hadoop.git
101 | WORKDIR /opt/hadoop
102 | RUN git checkout branch-3.2.0-ppml
103 | WORKDIR /opt/hadoop/hadoop-common-project/hadoop-common
104 | RUN export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m \
105 | -Dhttp.proxyHost=$HTTP_PROXY_HOST \
106 | -Dhttp.proxyPort=$HTTP_PROXY_PORT \
107 | -Dhttps.proxyHost=$HTTPS_PROXY_HOST \
108 | -Dhttps.proxyPort=$HTTPS_PROXY_PORT" && \
109 | mvn -T 16 -DskipTests=true clean package && \
110 | mv /opt/hadoop/hadoop-common-project/hadoop-common/target/hadoop-common-3.2.0.jar /opt/spark-${SPARK_VERSION}/jars/hadoop-common-3.2.0.jar
111 | # hive
112 | WORKDIR /opt
113 | RUN git clone https://github.com/analytics-zoo/hive.git
114 | WORKDIR /opt/hive
115 | RUN git checkout branch-2.3.7-ppml
116 | WORKDIR /opt/hive/ql
117 | RUN export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m \
118 | -Dhttp.proxyHost=$HTTP_PROXY_HOST \
119 | -Dhttp.proxyPort=$HTTP_PROXY_PORT \
120 | -Dhttps.proxyHost=$HTTPS_PROXY_HOST \
121 | -Dhttps.proxyPort=$HTTPS_PROXY_PORT" && \
122 | mvn -T 16 -DskipTests=true clean package && \
123 | mv /opt/hive/ql/target/hive-exec-2.3.7-core.jar /opt/spark-${SPARK_VERSION}/jars/hive-exec-2.3.7-core.jar
124 |
125 | # flink
126 | RUN wget -nv -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" && \
127 | chmod +x /usr/local/bin/gosu && \
128 | gosu nobody true && \
129 | mkdir -p $FLINK_HOME
130 | WORKDIR $FLINK_HOME
131 | RUN wget -nv -O flink.tgz "https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-scala_${SCALA_VERSION}.tgz" && \
132 | tar -xf flink.tgz --strip-components=1 && \
133 | rm flink.tgz && \
134 | # Replace default REST/RPC endpoint bind address to use the container's network interface
135 | sed -i 's/rest.address: localhost/rest.address: 0.0.0.0/g' $FLINK_HOME/conf/flink-conf.yaml && \
136 | sed -i 's/rest.bind-address: localhost/rest.bind-address: 0.0.0.0/g' $FLINK_HOME/conf/flink-conf.yaml && \
137 | sed -i 's/jobmanager.bind-host: localhost/jobmanager.bind-host: 0.0.0.0/g' $FLINK_HOME/conf/flink-conf.yaml && \
138 | sed -i 's/taskmanager.bind-host: localhost/taskmanager.bind-host: 0.0.0.0/g' $FLINK_HOME/conf/flink-conf.yaml && \
139 | sed -i '/taskmanager.host: localhost/d' $FLINK_HOME/conf/flink-conf.yaml
140 | RUN ls -al /opt
141 |
142 | # Stage.2 BigDL
143 | FROM ubuntu:20.04 as bigdl
144 | ARG BIGDL_VERSION
145 | ARG SPARK_VERSION
146 | ENV SPARK_VERSION ${SPARK_VERSION}
147 | ENV BIGDL_VERSION ${BIGDL_VERSION}
148 | ENV BIGDL_HOME /bigdl-${BIGDL_VERSION}
149 | RUN apt-get update --fix-missing && \
150 | apt-get install --no-install-recommends -y apt-utils curl wget unzip git
151 | RUN wget -q https://raw.githubusercontent.com/intel-analytics/analytics-zoo/bigdl-2.0/docker/hyperzoo/download-bigdl.sh && \
152 | chmod a+x ./download-bigdl.sh
153 | RUN ./download-bigdl.sh && \
154 | rm bigdl*.zip
155 |
156 | # stage.3 gramine
157 | FROM $BASE_IMAGE_NAME:$BASE_IMAGE_TAG
158 |
159 | ARG SPARK_VERSION
160 | ARG TINI_VERSION
161 |
162 | ENV FLINK_HOME /ppml/flink
163 | ENV SPARK_VERSION ${SPARK_VERSION}
164 | ENV SPARK_HOME /ppml/spark-${SPARK_VERSION}
165 | ENV LOCAL_IP 127.0.0.1
166 | ENV TINI_VERSION $TINI_VERSION
167 | ENV LC_ALL C.UTF-8
168 | ENV LANG C.UTF-8
169 | ENV PATH $FLINK_HOME/bin:$PATH
170 | ENV BIGDL_HOME /ppml/bigdl-${BIGDL_VERSION}
171 | ENV PYSPARK_PYTHON /usr/bin/python
172 |
173 | RUN mkdir -p /ppml/lib && \
174 | mkdir -p /ppml/keys && \
175 | mkdir -p /ppml/password && \
176 | mkdir -p /ppml/data && \
177 | mkdir -p /ppml/models && \
178 | mkdir -p /ppml/apps
179 |
180 | COPY --from=bigdata /opt/spark-${SPARK_VERSION} /ppml/spark-${SPARK_VERSION}
181 | COPY --from=bigdata /opt/spark-${SPARK_VERSION}/examples/src/main/resources /ppml/examples/src/main/resources
182 | COPY --from=bigdata /opt/flink $FLINK_HOME
183 | COPY --from=bigdata /usr/local/bin/gosu /usr/local/bin/gosu
184 | COPY --from=bigdl /bigdl-${BIGDL_VERSION} ${BIGDL_HOME}
185 |
186 | COPY ./bigdl-ppml-submit.sh /ppml/bigdl-ppml-submit.sh
187 | COPY ./scripts /ppml/scripts
188 | COPY ./spark-executor-template.yaml /ppml/spark-executor-template.yaml
189 | COPY ./spark-driver-template.yaml /ppml/spark-driver-template.yaml
190 | COPY ./entrypoint.sh /opt/entrypoint.sh
191 | COPY ./flink-entrypoint.sh /opt/flink-entrypoint.sh
192 | COPY ./flink-k8s-template.yaml /ppml/flink-k8s-template.yaml
193 | COPY ./examples /ppml/examples
194 | COPY ./zeppelin /ppml/zeppelin
195 | COPY ./spark-executor-template-for-tdxvm.yaml /ppml/spark-executor-template-for-tdxvm.yaml
196 | COPY ./spark-driver-template-for-tdxvm.yaml /ppml/spark-driver-template-for-tdxvm.yaml
197 |
198 | ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /sbin/tini
199 |
200 | SHELL ["/bin/bash", "-o", "pipefail", "-c"]
201 |
202 | RUN rm ${SPARK_HOME}/jars/okhttp-*.jar && \
203 | wget -qP ${SPARK_HOME}/jars https://repo1.maven.org/maven2/com/squareup/okhttp3/okhttp/3.8.0/okhttp-3.8.0.jar && \
204 | wget -qP ${SPARK_HOME}/jars https://github.com/xerial/sqlite-jdbc/releases/download/3.36.0.1/sqlite-jdbc-3.36.0.1.jar && \
205 | chmod +x /opt/entrypoint.sh && \
206 | chmod +x /sbin/tini && \
207 | chmod +x /ppml/bigdl-ppml-submit.sh && \
208 | cp /sbin/tini /usr/bin/tini && \
209 | gramine-argv-serializer bash -c "export TF_MKL_ALLOC_MAX_BYTES=10737418240 && export _SPARK_AUTH_SECRET=$_SPARK_AUTH_SECRET && $sgx_command" > /ppml/secured_argvs && \
210 | wget -qP ${SPARK_HOME}/jars https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar && \
211 | chmod a+x /ppml/scripts/*
212 | #flink
213 | RUN env DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y libsnappy1v5 gettext-base libjemalloc-dev && \
214 | rm -rf /var/lib/apt/lists/* && \
215 | groupadd --system --gid=9999 flink && \
216 | useradd --system --home-dir ${FLINK_HOME} --uid=9999 --gid=flink flink && \
217 | chmod +x /usr/local/bin/gosu && \
218 | chmod +x /opt/flink-entrypoint.sh && \
219 | chmod -R 777 ${FLINK_HOME} && \
220 | chown -R flink:flink ${FLINK_HOME}
221 | # Python packages
222 | RUN pip3 install --no-cache-dir numpy pandas pyarrow && \
223 | cp "${BIGDL_HOME}/jars/bigdl-ppml-spark_${SPARK_VERSION}-${BIGDL_VERSION}.jar" "${SPARK_HOME}/jars/" && \
224 | cp "${BIGDL_HOME}/jars/bigdl-dllib-spark_${SPARK_VERSION}-${BIGDL_VERSION}.jar" "${SPARK_HOME}/jars/"
225 | # zeppelin
226 | RUN chmod +x /ppml/zeppelin/deploy.sh && \
227 | chmod +x /ppml/zeppelin/delete.sh
228 | # Azure support
229 | RUN apt-get purge -y libsgx-dcap-default-qpl && \
230 | echo "deb [arch=amd64] https://packages.microsoft.com/ubuntu/20.04/prod focal main" | tee /etc/apt/sources.list.d/msprod.list && \
231 | wget -qO - https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
232 | apt-get update && \
233 | apt-get install -y --no-install-recommends az-dcap-client && \
234 | wget -qO - https://aka.ms/InstallAzureCLIDeb | bash && \
235 | apt-get install -y --no-install-recommends bsdmainutils && \
236 | wget -q https://dl.k8s.io/release/v1.25.0/bin/linux/amd64/kubectl && \
237 | install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && \
238 | apt-get clean && \
239 | rm -rf /var/lib/apt/lists/*
240 | COPY azure /ppml/azure
241 | RUN chmod a+x /ppml/azure/create-aks.sh && \
242 | chmod a+x /ppml/azure/generate-keys-az.sh && \
243 | chmod a+x /ppml/azure/generate-password-az.sh && \
244 | chmod a+x /ppml/azure/kubeconfig-secret.sh && \
245 | chmod a+x /ppml/azure/submit-spark-sgx-az.sh && \
246 | wget -qP /ppml/lib https://sourceforge.net/projects/analytics-zoo/files/analytics-zoo-data/libhadoop.so
247 |
248 |
249 | ENTRYPOINT [ "/opt/entrypoint.sh" ]
250 |
--------------------------------------------------------------------------------
/SGX/DEVCATALOG.md:
--------------------------------------------------------------------------------
1 | # **BigDL PPML on SGX**
2 |
3 | ## Introduction
4 |
5 | Learn to use BigDL PPML (BigDL Privacy Preserving Machine Learning) to run end-to-end big data analytics applications with distributed clusters on Intel® Software Guard Extensions (SGX).
6 |
7 | For more workflow examples and reference implementations, please check [Developer Catalog](https://developer.intel.com/aireferenceimplementations).
8 |
9 |
10 | ## Solution Technical Overview
11 |
12 | [PPML](https://bigdl.readthedocs.io/en/latest/doc/PPML/Overview/ppml.html) (Privacy Preserving Machine Learning) in [BigDL 2.0](https://github.com/intel-analytics/BigDL) provides a Trusted Cluster Environment for secure Big Data & AI applications, even in an untrusted cloud environment. By combining SGX with several other security technologies (e.g., attestation, key management service, private set intersection, federated learning, and homomorphic encryption), BigDL PPML ensures end-to-end security for entire distributed workflows (Apache Spark, Apache Flink, XGBoost, TensorFlow, PyTorch, etc.).
13 |
14 | For more details, please visit the [BigDL 2.0](https://github.com/intel-analytics/BigDL) GitHub repository.
15 |
16 | ## Solution Technical Details
17 |
18 | PPML ensures security for all dimensions of the data lifecycle: data at rest, data in transit, and data in use. Data being transferred on a network is `in transit`, data in storage is `at rest`, and data being processed is `in use`.
19 |
20 | 
21 |
22 | PPML allows organizations to explore powerful AI techniques while working to minimize the security risks associated with handling large amounts of sensitive data. PPML protects data at rest, in transit, and in use: compute and memory protected by SGX Enclaves, storage (e.g., data and model) protected by encryption, network communication protected by remote attestation and Transport Layer Security (TLS), and optional Federated Learning support.
23 |
24 | 
25 |
26 | With BigDL PPML, you can run trusted Big Data & AI applications. Different bigdl-ppml-gramine images correspond to different functions:
27 | - **Trusted Big Data**: with trusted Big Data analytics, users can run end-to-end data analysis (Spark SQL, Dataframe, MLlib, etc.) and Flink in a secure and trusted environment.
28 | - **Trusted Deep Learning Toolkit**: with Trusted Deep Learning Toolkits, users can run secured end-to-end PyTorch training using either a single machine or cloud-native clusters in a trusted execution environment.
29 | - **Trusted Python Toolkit**: with trusted Python Toolkit, users can run Numpy, Pandas, Flask, and Torchserve in a secure and trusted environment.
30 | - **Trusted DL Serving**: with trusted DL Serving, users can run Torchserve, Tritonserver, and TF-Serving in a secure and trusted environment.
31 | - **Trusted Machine Learning**: with end-to-end trusted training and inference, users can run LightGBM (data parallel, feature parallel, voting parallel, etc.) and Spark MLlib (supervised, unsupervised, recommendation, etc.) ML applications in a distributed and secure way.
32 |
33 | ## Validated Hardware Details
34 |
35 | | Supported Hardware |
36 | | ---------------------------- |
37 | | Intel® 3rd Gen Xeon® Scalable processors or later |
38 |
39 | Recommended regular memory size: 512G
40 |
41 | Recommended EPC (Enclave Page Cache) memory size: 512G
42 |
43 | Recommended Cluster Node Number: 3 or more
44 |
45 | ## How it Works
46 | 
47 |
48 | As the above picture shows, there are several steps in BigDL PPML: deployment (setting up K8s, SGX, etc.), preparation (the image and the data), application building (the code), job submission, and reading the results. In addition, AS (Attestation Service) is optional and will be introduced below.
49 |
50 | ## Get Started
51 |
52 | ### BigDL PPML End-to-End Workflow
53 |
54 | In this section, we take the image `bigdl-ppml-trusted-bigdata-gramine` and `MultiPartySparkQueryExample` as an example to go through the entire BigDL PPML end-to-end workflow. `MultiPartySparkQueryExample` decrypts people data that was encrypted with different encryption methods and filters out the people whose age is between 20 and 40.
55 |
56 | ### Prepare your environment
57 |
58 | Prepare your environment first, including K8s cluster setup, K8s-SGX plugin setup, key/password preparation, key management service (KMS) and attestation service (AS) setup, and BigDL PPML client container preparation. **Please follow the detailed steps in** [Prepare Environment](https://github.com/intel-analytics/BigDL/blob/main/ppml/docs/prepare_environment.md).
59 |
60 | Next, you will build a base image, and a custom image on top of it, to avoid leaving secrets (e.g., the enclave key) in images/containers. After that, you need to register the `MRENCLAVE` of your custom image with the Attestation Service before running your application, and PPML will verify the runtime MREnclave automatically at the backend. The chart below illustrates the whole workflow:
61 | 
62 |
63 | Start your application with the following guide step by step:
64 |
65 | ### Prepare your PPML image for the production environment
66 |
67 | To build a secure PPML image for a production environment, BigDL prepared a public base image that does not contain any secrets. You can customize your image on top of this base image.
68 |
69 | 1. Prepare BigDL Base Image
70 |
71 | Build the base image using `base.Dockerfile`, as sketched below.
72 |
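A sketch of the build command, assuming it is run from the directory containing `base.Dockerfile`; the image tag follows the defaults in `trusted-bigdata.Dockerfile`, and the JDK download URL is a placeholder you must supply:

```bash
# Build the Gramine base image (JDK_URL is a placeholder for your JDK 8 archive URL)
docker build \
    -f base.Dockerfile \
    --build-arg JDK_URL=<your_jdk8_download_url> \
    -t intelanalytics/bigdl-ppml-gramine-base:2.4.0-SNAPSHOT .
```
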
73 | 2. Build a Custom Image
74 |
75 | When the base image is ready, you need to generate your enclave key which will be used when building a custom image. Keep the enclave key safe for future remote attestations.
76 |
77 | Run the following command to generate the enclave key `enclave-key.pem`, which is used to launch and sign the SGX enclave. Then you are ready to build your custom image.
78 |
79 | ```bash
80 | git clone https://github.com/intel-analytics/BigDL.git
81 | cd BigDL/ppml/trusted-bigdata/custom-image
82 | openssl genrsa -3 -out enclave-key.pem 3072
83 | ./build-custom-image.sh
84 | ```
85 |
86 | **Warning:** If you want to skip DCAP (Data Center Attestation Primitives) attestation in runtime containers, you can set `ENABLE_DCAP_ATTESTATION` to *false* in `build-custom-image.sh`, and this will generate a non-attestation image. **But never do this unsafe operation in production!**
87 |
88 | The sensitive enclave key will not be saved in the built image. Two values, `mr_enclave` and `mr_signer`, are recorded while the enclave is being built. You can find the `mr_enclave` and `mr_signer` values in the console log; these hash values are used to register your MREnclave in the following attestation step.
89 |
90 | ```bash
91 | [INFO] Use the below hash values of mr_enclave and mr_signer to register enclave:
92 | mr_enclave : c7a8a42af......
93 | mr_signer : 6f0627955......
94 | ```
95 |
96 | Note: you can also customize the image according to your own needs (e.g., third-party Python libraries or JARs).
97 |
98 | Then, start a client container:
99 |
100 | ```bash
101 | export K8S_MASTER=k8s://$(sudo kubectl cluster-info | grep 'https.*6443' -o -m 1)
102 | echo The k8s master is $K8S_MASTER .
103 | export DATA_PATH=/YOUR_DIR/data
104 | export KEYS_PATH=/YOUR_DIR/keys
105 | export SECURE_PASSWORD_PATH=/YOUR_DIR/password
106 | export KUBECONFIG_PATH=/YOUR_DIR/kubeconfig
107 | export LOCAL_IP=$LOCAL_IP
108 | export DOCKER_IMAGE=intelanalytics/bigdl-ppml-trusted-bigdata-gramine-reference-16g:2.3.0-SNAPSHOT # or the custom image built by yourself
109 |
110 | sudo docker run -itd \
111 | --privileged \
112 | --net=host \
113 | --name=bigdl-ppml-client-k8s \
114 | --cpus=10 \
115 | --oom-kill-disable \
116 | --device=/dev/sgx/enclave \
117 | --device=/dev/sgx/provision \
118 | -v /var/run/aesmd/aesm.socket:/var/run/aesmd/aesm.socket \
119 | -v $DATA_PATH:/ppml/trusted-big-data-ml/work/data \
120 | -v $KEYS_PATH:/ppml/trusted-big-data-ml/work/keys \
121 | -v $SECURE_PASSWORD_PATH:/ppml/trusted-big-data-ml/work/password \
122 | -v $KUBECONFIG_PATH:/root/.kube/config \
123 | -e RUNTIME_SPARK_MASTER=$K8S_MASTER \
124 | -e RUNTIME_K8S_SPARK_IMAGE=$DOCKER_IMAGE \
125 | -e RUNTIME_DRIVER_PORT=54321 \
126 | -e RUNTIME_DRIVER_MEMORY=1g \
127 | -e LOCAL_IP=$LOCAL_IP \
128 | $DOCKER_IMAGE bash
129 | ```
130 |
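To confirm the client container actually came up before you continue, a quick `docker ps` check is enough (a sanity check only, not part of the official workflow):

```bash
# The container should be listed with a status of "Up ...".
sudo docker ps --filter name=bigdl-ppml-client-k8s
```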
131 |
132 | ## Deploy Attestation Service
133 |
134 | Enter the client container:
135 | ```bash
136 | sudo docker exec -it bigdl-ppml-client-k8s bash
137 | ```
138 |
139 | If you do not need attestation, you can disable the attestation service: configure `spark-driver-template.yaml` and `spark-executor-template.yaml` in the client container to set the `ATTESTATION` value to `false` and skip the rest of this step. By default, the attestation service is disabled.
140 | ``` yaml
141 | apiVersion: v1
142 | kind: Pod
143 | spec:
144 | ...
145 | env:
146 | - name: ATTESTATION
147 |       value: "false"
148 | ...
149 | ```
150 |
151 | The bi-directional attestation guarantees that the MREnclave in the runtime containers is a secure one made by you. Its workflow is shown below:
152 | 
153 |
154 |
155 | To enable attestation, you should have a running Attestation Service in your environment.
156 |
157 | **1. Deploy EHSM KMS & AS**
158 |
159 | The KMS (Key Management Service) and AS (Attestation Service) make sure the customer's applications run in the SGX MREnclave signed above by the customer, rather than a fake one forged by an attacker.
160 |
161 | BigDL PPML uses EHSM as a reference KMS & AS; you can follow the guide [here](https://github.com/intel-analytics/BigDL/tree/main/ppml/services/ehsm/kubernetes#deploy-bigdl-ehsm-kms-on-kubernetes-with-helm-charts) to deploy EHSM in your environment.
162 |
163 | **2. Enroll in EHSM**
164 |
165 | Execute the following command to enroll yourself in EHSM. `<your_ehsm_ip>` is the IP you configured for the EHSM service in the deployment section:
166 |
167 | ```bash
168 | curl -v -k -G "https://<your_ehsm_ip>:9000/ehsm?Action=Enroll"
169 | ......
170 | {"code":200,"message":"successful","result":{"apikey":"E8QKpBB******","appid":"8d5dd3b*******"}}
171 | ```
172 |
173 | You will get an `appid` and `apikey` pair. Please save it for later use.
174 |
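If `jq` is available, you can pull the pair straight out of the enrollment response instead of copying it by hand (a convenience sketch; the endpoint and JSON fields are exactly the ones shown above):

```bash
# Enroll and capture appid/apikey from the JSON result.
response=$(curl -s -k -G "https://<your_ehsm_ip>:9000/ehsm?Action=Enroll")
export APP_ID=$(echo "$response" | jq -r '.result.appid')
export API_KEY=$(echo "$response" | jq -r '.result.apikey')
echo "appid=$APP_ID, apikey=$API_KEY"
```
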
175 | **3. Attest EHSM Server (optional)**
176 |
177 | You can attest the EHSM server and verify that the service is trusted before running workloads, to avoid sending your secrets to a fake service.
178 |
179 | To attest the EHSM server, start a BigDL container using the custom image built before. **Note**: this is a separate container, different from the client container.
180 |
181 | ```bash
182 | export KEYS_PATH=YOUR_LOCAL_SPARK_SSL_KEYS_FOLDER_PATH
183 | export LOCAL_IP=YOUR_LOCAL_IP
184 | export CUSTOM_IMAGE=YOUR_CUSTOM_IMAGE_BUILT_BEFORE
185 | export PCCS_URL=YOUR_PCCS_URL # format like https://1.2.3.4:xxxx, obtained from KMS services or a self-deployed one
186 |
187 | sudo docker run -itd \
188 | --privileged \
189 | --net=host \
190 | --cpus=5 \
191 | --oom-kill-disable \
192 | -v /var/run/aesmd/aesm.socket:/var/run/aesmd/aesm.socket \
193 | -v $KEYS_PATH:/ppml/trusted-big-data-ml/work/keys \
194 | --name=gramine-verify-worker \
195 | -e LOCAL_IP=$LOCAL_IP \
196 | -e PCCS_URL=$PCCS_URL \
197 | $CUSTOM_IMAGE bash
198 | ```
199 |
200 | Enter the docker container:
201 |
202 | ```bash
203 | sudo docker exec -it gramine-verify-worker bash
204 | ```
205 |
206 | Set the variables in `verify-attestation-service.sh` before running it:
207 |
208 | ```
209 | `ATTESTATION_URL`: URL of the attestation service. Should match the format `<ip>:<port>`.
210 |
211 | `APP_ID`, `API_KEY`: the appid and apikey pair generated by your attestation service.
212 |
213 | `ATTESTATION_TYPE`: type of the attestation service. Currently `EHSMAttestationService` is supported.
214 |
215 | `CHALLENGE`: challenge used to get a quote from the attestation service, which will be verified by the local SGX SDK. It should be a BASE64 string; any string works, for example one generated by the command `echo anystring | base64`.
216 | ```
217 |
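For example, the variables in `verify-attestation-service.sh` might end up looking like this (hypothetical values; substitute the ones from your own EHSM deployment and the enrollment step above):

```bash
# Placeholder values, not real credentials.
export ATTESTATION_URL=192.168.0.10:9000
export APP_ID=8d5dd3b*******
export API_KEY=E8QKpBB*******
export ATTESTATION_TYPE=EHSMAttestationService
export CHALLENGE=$(echo ppml-challenge | base64)
```
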
218 | In the container, execute `verify-attestation-service.sh` to verify the attestation service quote.
219 |
220 | ```bash
221 | bash verify-attestation-service.sh
222 | ```
223 |
224 | **4. Register your MREnclave to EHSM**
225 |
226 | Register your MREnclave to EHSM by running a Python script with the metadata obtained in the above steps (appid, apikey, mr_enclave, mr_signer):
227 |
228 | ```bash
229 | # At /ppml/trusted-big-data-ml inside the container now
230 | python register-mrenclave.py --appid <your_appid> \
231 |                              --apikey <your_apikey> \
232 |                              --url https://<your_ehsm_ip>:9000 \
233 |                              --mr_enclave <your_mr_enclave> \
234 |                              --mr_signer <your_mr_signer>
235 | ```
236 | You will receive a response containing a `policyID`; save it, as it will be used to attest the runtime MREnclave when running the distributed Kubernetes application. Remember: if you change the image, you should redo this step and use the new policyID.
237 |
238 | **5. Enable Attestation in configuration**
239 |
240 | First, upload `appid`, `apikey`, and `policyID` obtained before to Kubernetes as secrets:
241 |
242 | ```bash
243 | kubectl create secret generic kms-secret \
244 | --from-literal=app_id=YOUR_KMS_APP_ID \
245 | --from-literal=api_key=YOUR_KMS_API_KEY \
246 | --from-literal=policy_id=YOUR_POLICY_ID
247 | ```
248 |
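To double-check that the secret landed in the cluster with the expected keys before wiring it into the pod templates, standard `kubectl` commands are enough (a sanity check, not a required step):

```bash
# Show which keys the secret holds, then decode one of them.
kubectl get secret kms-secret -o jsonpath='{.data}' ; echo
kubectl get secret kms-secret -o jsonpath='{.data.app_id}' | base64 -d ; echo
```
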
249 | Configure `spark-driver-template.yaml` and `spark-executor-template.yaml` to enable Attestation as follows:
250 | ``` yaml
251 | apiVersion: v1
252 | kind: Pod
253 | spec:
254 | containers:
255 | - name: spark-driver
256 | securityContext:
257 | privileged: true
258 | env:
259 | - name: ATTESTATION
260 |         value: "true"
261 | - name: PCCS_URL
262 | value: https://your_pccs_ip:your_pccs_port
263 | - name: ATTESTATION_URL
264 | value: your_ehsm_ip:your_ehsm_port
265 | - name: APP_ID
266 | valueFrom:
267 | secretKeyRef:
268 | name: kms-secret
269 | key: app_id
270 | - name: API_KEY
271 | valueFrom:
272 | secretKeyRef:
273 | name: kms-secret
274 |             key: api_key
275 | - name: ATTESTATION_POLICYID
276 | valueFrom:
277 | secretKeyRef:
278 |             name: kms-secret
279 | key: policy_id
280 | ...
281 | ```
282 | You should see `Attestation Success!` in the logs after you submit a PPML job if the quote generated with `user_report` is verified successfully by the Attestation Service. Otherwise you will get `Attestation Fail! Application killed!` or `JSONObject["result"] is not a JSONObject`, and the job will be stopped.
283 |
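One generic way to look for these messages in a K8s client-mode run is to grep the driver and executor logs (a sketch only; pod names and namespace depend on how your job was submitted):

```bash
# List the Spark pods of your job and scan a pod's log for attestation messages.
kubectl get pods | grep spark
kubectl logs <your-spark-driver-pod> | grep -i attestation
```
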
284 | ## Run BigDL PPML e2e Application
285 | ### Encrypt
286 |
287 | Encrypt the input data of your Big Data & AI applications (here we use MultiPartySparkQueryExample) and then upload encrypted data to the Network File System (NFS) server (or any file system such as HDFS that can be accessed by the cluster).
288 |
289 | 1. Generate the input data (e.g. `people.csv`) for the query application.
290 | You can use [generate_people_csv.py](https://github.com/intel-analytics/BigDL/blob/main/ppml/scripts/generate_people_csv.py). The usage of the script is `python generate_people_csv.py <output_file_path> <num_records>`. For example:
291 | ```bash
292 | python generate_people_csv.py amy.csv 30
293 | python generate_people_csv.py bob.csv 30
294 | ```
295 |
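If you want to eyeball the generated data before encrypting it, a quick look at the first few lines is enough (plain shell, nothing BigDL-specific):

```bash
head -n 5 amy.csv bob.csv
```
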
296 | 2. Generate the primary key.
297 | ```bash
298 | docker exec -it bigdl-ppml-client-k8s bash
299 | cd /ppml/bigdl-ppml/src/bigdl/ppml/kms/ehsm/
300 | export APIKEY=your_apikey
301 | export APPID=your_appid
302 | python client.py -api generate_primary_key -ip ehsm_ip -port ehsm_port
303 | ```
304 | Do this step twice to get two primary keys to encrypt amy.csv and bob.csv.
305 |
306 | 3. Encrypt `people.csv`
307 |
308 | The encryption application is a BigDL PPML job. You first need to choose the deployment mode and the way to submit the job.
309 |
310 | * **There are 4 modes to submit a job**:
311 |
312 | 1. **local mode**: run jobs locally without connecting to a cluster. This is the same as using spark-submit to run your application: `$SPARK_HOME/bin/spark-submit --class "SimpleApp" --master local[4] target.jar`; the driver and executors are not protected by SGX.
313 |
314 |
315 |
316 |
317 |
318 | 2. **local SGX mode**: run jobs locally, guarded by SGX. The client JVM runs in an SGX enclave so that the driver and executors are protected.
319 |
320 |
321 |
322 |
323 |
324 | 3. **client SGX mode**: run jobs in K8s client mode, guarded by SGX. In K8s client mode, the driver is deployed locally as an external client to the cluster. With **client SGX mode**, the executors running in the K8s cluster are protected by SGX, and the driver running in the client is also protected by SGX.
325 |
326 |
327 |
328 |
329 |
330 | 4. **cluster SGX mode**: run jobs in K8s cluster mode, guarded by SGX. In K8s cluster mode, the driver is deployed on the K8s worker nodes just like the executors. With **cluster SGX mode**, both the driver and the executors running in the K8s cluster are protected by SGX.
331 |
332 |
333 |
334 |
335 |
336 | * **There are two options to submit PPML jobs**:
337 | * use [PPML CLI](https://github.com/intel-analytics/BigDL/blob/main/ppml/docs/submit_job.md#ppml-cli) to submit jobs manually
338 |   * use [helm chart](https://github.com/intel-analytics/BigDL/blob/main/ppml/docs/submit_job.md#helm-chart) to submit jobs automatically
339 |
340 | Here we use **k8s client mode** and **PPML CLI** to run the PPML application.
341 |
342 | ```bash
343 | export secure_password=`openssl rsautl -inkey /ppml/password/key.txt -decrypt \
373 | --outputDataSinkPath file:// \
374 | --cryptoMode aes/cbc/pkcs5padding \
375 | --dataSourceType csv
376 | ```
377 | `Amy` can be any name you choose, as long as it is used consistently across the parameters. Do this step twice to encrypt amy.csv and bob.csv. If the application runs successfully, you will see the encrypted files in `outputDataSinkPath`.
378 |
379 | ### Multi-party Decrypt and Query
380 |
381 | Run MultiPartySparkQueryExample:
382 | ```bash
383 | export secure_password=`openssl rsautl -inkey /ppml/password/key.txt -decrypt