├── .gitignore ├── LICENSE ├── README.md ├── attestation ├── .gitignore ├── README.md ├── app-compose.json ├── report.json └── verify.py ├── custom-domain └── dstack-ingress │ ├── Dockerfile │ ├── README.md │ ├── build-image.sh │ ├── docker-compose.yaml │ └── scripts │ ├── cloudflare_dns.py │ ├── entrypoint.sh │ ├── generate-evidences.sh │ ├── renew-certificate.sh │ └── renewal-daemon.sh ├── launcher ├── Dockerfile ├── README.md ├── build-image.sh ├── docker-compose.yml ├── entrypoint.sh └── get-latest.sh ├── lightclient ├── README.md └── docker-compose.yml ├── prelaunch-script ├── README.md ├── docker-compose.yaml └── prelaunch.sh ├── private-docker-image-deployment ├── README.md └── docker-compose.yml ├── ssh-over-tproxy ├── README.md └── docker-compose.yaml ├── tcp-port-forwarding ├── README.md └── port_forwarder.py ├── timelock-nts ├── README.md └── docker-compose.yml ├── tor-hidden-service ├── README.md └── docker-compose.yml └── webshell ├── README.md ├── docker-compose.yaml └── image.jpg /.gitignore: -------------------------------------------------------------------------------- 1 | *~ -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 Dstack TEE 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Dstack Examples 2 | This repository contains examples of Dstack applications. 3 | 4 | *Note on single-file example style:* Sometimes we use a style of packing the entire application into a single docker-compose.yml file. 5 | But more commonly a dstack example will have a Dockerfile and some other code. 6 | 7 | ## Useful Utilities 8 | These show useful patterns you may want to copy: 9 | - [./lightclient](./lightclient) uses a light client so that the dstack app can follow a blockchain 10 | - [./custom-domain](./custom-domain) shows how to serve a secure website from a custom domain, by requesting a Let's Encrypt certificate from within the app 11 | - [./ssh-over-tproxy](./ssh-over-tproxy) shows how to tunnel arbitrary sockets over HTTPS so they can work with tproxy 12 | - [./webshell](./webshell) an alternative way to allow logging into a Dstack container (for debug only!)
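As a sketch of the single-file style noted above, a minimal example can be a lone docker-compose.yml (service and image names here are hypothetical; the tappd socket mount follows the pattern used by the examples in this repository):

```yaml
# Hypothetical minimal single-file dstack example: the entire app is this
# one docker-compose.yml, with no Dockerfile or extra code.
services:
  web:
    image: nginx:alpine            # any public image; replace with your app
    ports:
      - "80:80"
    volumes:
      # expose the dstack attestation socket to the container
      - /var/run/tappd.sock:/var/run/tappd.sock
    restart: unless-stopped
```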
13 | ## Showcases of porting existing tools 14 | - [./tor-hidden-service](./tor-hidden-service) connects to the Tor network and serves a website as a hidden service 15 | ## Illustrating Dstack Features 16 | - [./prelaunch-script](./prelaunch-script) 17 | - [./private-docker-image-deployment](./private-docker-image-deployment) 18 | ## App examples 19 | - [./timelock-nts](./timelock-nts) a timelock decryption example using secure NTP (NTS) from Cloudflare as a time oracle 20 | ## Tutorial (Coming soon) 21 | 22 | ## Contributing 23 | Pull requests are welcome; a curation plan is coming soon. 24 | -------------------------------------------------------------------------------- /attestation/.gitignore: -------------------------------------------------------------------------------- 1 | /images 2 | -------------------------------------------------------------------------------- /attestation/README.md: -------------------------------------------------------------------------------- 1 | # Dstack Remote Attestation Example 2 | 3 | This example illustrates the remote attestation process for every component of a Dstack application. It encompasses everything from the CPU microcode to the TDVF, VM configuration, kernel, kernel parameters, and application code. For further details, please refer to our [attestation guide](https://github.com/Dstack-TEE/dstack/blob/6b77340cf530b4532c5815039a74bb3a60302378/attestation.md). 4 | 5 | ## Overview 6 | 7 | The `verify.py` script demonstrates how to: 8 | - Verify TDX quotes using Intel's DCAP 9 | - Parse and validate event logs 10 | - Replay and verify Runtime Measurement Registers (RTMRs) 11 | - Validate application integrity through compose hash verification 12 | 13 | ## Prerequisites 14 | 15 | Before running the example, ensure you have the following installed: 16 | 17 | 1. **Python 3.10+** 18 | - Required for executing `verify.py` 19 | 20 | 2.
**Dstack OS Image** 21 | - Either build from source or download from [Dstack Releases](https://github.com/Dstack-TEE/dstack/releases/tag/dev-v0.4.0.0) 22 | 23 | 3. **dcap-qvl** 24 | - A TDX/SGX quote verification tool from Phala 25 | - Install with: `cargo install dcap-qvl-cli` 26 | 27 | 4. **dstack-mr** 28 | - A tool to calculate expected measurement values for Dstack Base Images 29 | - Install with: `go install github.com/kvinwang/dstack-mr@latest` 30 | 31 | ## Setup 32 | 33 | 1. **Generate the Application Report:** 34 | - Run your Dstack application to produce a `report.json` file containing the attestation data 35 | 36 | 2. **Prepare the Compose File:** 37 | - Create and properly configure the `app-compose.json` file to match your application's settings 38 | 39 | ## Run the Example 40 | 41 | Run the verification process simply by executing: 42 | ```bash 43 | python verify.py 44 | ``` -------------------------------------------------------------------------------- /attestation/app-compose.json: -------------------------------------------------------------------------------- 1 | {"manifest_version":2,"name":"kvin-nb1","runner":"docker-compose","docker_compose_file":"services:\n jupyter:\n image: quay.io/jupyter/base-notebook\n user: root\n network_mode: host\n privileged: true\n environment:\n - GRANT_SUDO=yes\n ports:\n - \"80:8888\"\n volumes:\n - /:/host/\n - /var/run/tappd.sock:/var/run/tappd.sock\n\n","docker_config":{},"kms_enabled":true,"tproxy_enabled":true,"public_logs":true,"public_sysinfo":true,"local_key_provider_enabled":false} -------------------------------------------------------------------------------- /attestation/report.json: -------------------------------------------------------------------------------- 1 | 
{"quote":"040002008100000000000000939a7233f79c4ca9940a0db3957f060783fbfe61525f55581315cd9dc950f44700000000060102000000000000000000000000005b38e33a6487958b72c3c12a938eaa5e3fd4510c51aeeab58c7d5ecee41d7c436489d6c8e4f92f160b7cad34207b00c100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000e702060000000000c68518a0ebb42136c12b2275164f8c72f25fa9a34392228687ed6e9caeb9c0f1dbd895e9cf475121c029dc47e70e91fd000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000274c2344116db7c663470693b5ba62b8621eac28cb41d2f816ddf188f9f423f900a1c44d32386fd3c993dc814e62af9d038b1f762ca65a338236047af820392dfc65bd8a1057e4d3e2acac583da6088c5dba3b1d35acdf0c3682a3abc2cb055a1a25c0dd721d17b31b58d71a172aab4a01bf5c1d43f930dde24a70ef5997beeb15fe650b374de03d7b57d61d4e9c6f1a45c2309ffdc521edbed3f5e651c250dac0e50306d2f93b639f74d60f910b498ca8c489cedae7719c4b052f3e4eadf3a400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000cc100000886cbd0da9511b6bc6500f305ea6577975b40896060d40bab82bfbe3dd64f0b1041384f56391074a2157cd2c57db5ff50a1024bf3925364f4fef0aebf3ee91384996a9e56e40ac6c0b019709537f16d751c03e8c0d905d79f224ff06ddc4102860a8770107748c011cdbfcccc857e418735b699ac89dc2ed4da11d5125cb925e0600461000000202191b03ff0006000000000000000000000000000000000000000000000000000000000000000000000000000000001500000000000000e700000000000000e5a3a7b5d830c2953b98534c6c59a3a34fdc34e933f7f5898f0a85cf08846bca0000000000000000000000000000000000000000000000000000000000000000dc9e2a7c6f948f17474e34a7fc43ed030f7c1563f1babddf6340c82e0e54a8c50000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000002000600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000503bbfe5befa55a13e21747c3859f0b618a050312a0340e980187eea232356d6000000000000000000000000000000000000000000000000000000000000000065ce7bf17e75f59a980fccfa4dfbe7bab98ff0c77a46122a27c048316286ecafb956d0dbd9c9093568c48b37d903eb4efa847e29c771f68ba793e4217b0d41772000000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f05005e0e00002d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d49494538444343424a65674177494241674956414c5235544954392b396e73423142545a3173725851346c627752424d416f4743437147534d343942414d430a4d484178496a416742674e5642414d4d47556c756447567349464e4857434251513073675547786864475a76636d306751304578476a415942674e5642416f4d0a45556c756447567349454e76636e4276636d4630615739754d5251774567594456515148444174545957353059534244624746795954454c4d416b47413155450a4341774351304578437a414a42674e5642415954416c56544d423458445449304d4467774d6a45784d54557a4e316f5844544d784d4467774d6a45784d54557a0a4e316f77634445694d434147413155454177775a535735305a5777675530645949464244537942445a584a3061575a70593246305a5445614d426747413155450a43677752535735305a577767513239796347397959585270623234784644415342674e564241634d43314e68626e526849454e7359584a684d517377435159440a5651514944414a445154454c4d416b474131554542684d4356564d775754415442676371686b6a4f5051494242676771686b6a4f50514d4242774e43414154590a77777155344778504a6a596f6a4d4752686136327970346a425164355744764b776d54366c6c314147786a59363870694a50676950686462387a544766374b620a314f79643153464f4d5a70594c795054427a59646f3449444444434341776777487759445652306a42426777466f41556c5739647a62306234656c4153636e550a3944504f4156634c336c5177617759445652306642475177596a42676f46366758495a616148523063484d364c79396863476b7564484a316333526c5a484e6c0a636e5a705932567a4c6d6c75644756734c6d4e766253397a5a3367765932567964476c6d61574e6864476c76626939324e4339775932746a636d77
2f593245390a6347786864475a76636d306d5a57356a62325270626d63395a4756794d423047413155644467515742425146303476507654474b7762416c356f54765664664d0a2b356a6e7554414f42674e56485138424166384542414d434273417744415944565230544151482f4241497741444343416a6b4743537147534962345451454e0a4151534341696f776767496d4d42344743697147534962345451454e41514545454e3564416f7135634b356e383277396f793165346e34776767466a42676f710a686b69472b453042445145434d494942557a415142677371686b69472b4530424451454341514942416a415142677371686b69472b45304244514543416749420a416a415142677371686b69472b4530424451454341774942416a415142677371686b69472b4530424451454342414942416a415142677371686b69472b4530420a4451454342514942417a415142677371686b69472b45304244514543426749424154415142677371686b69472b453042445145434277494241444151426773710a686b69472b4530424451454343414942417a415142677371686b69472b45304244514543435149424144415142677371686b69472b45304244514543436749420a4144415142677371686b69472b45304244514543437749424144415142677371686b69472b45304244514543444149424144415142677371686b69472b4530420a44514543445149424144415142677371686b69472b45304244514543446749424144415142677371686b69472b453042445145434477494241444151426773710a686b69472b45304244514543454149424144415142677371686b69472b4530424451454345514942437a416642677371686b69472b45304244514543456751510a4167494341674d4241414d4141414141414141414144415142676f71686b69472b45304244514544424149414144415542676f71686b69472b453042445145450a4241617777473841414141774477594b4b6f5a496876684e4151304242516f424154416542676f71686b69472b453042445145474242424a316472685349736d0a682b2f46793074746a6a762f4d45514743697147534962345451454e415163774e6a415142677371686b69472b45304244514548415145422f7a4151426773710a686b69472b45304244514548416745422f7a415142677371686b69472b45304244514548417745422f7a414b42676771686b6a4f5051514441674e48414442450a41694270455738754f726b537469486b4c4b6e6a426855416f637a39545733366a4e2f303765416844503635617749674d2f31474c58745a70446436706150760a535a386d4e7472543830305635346b46594447
4f7a4f78504374383d0a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a2d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d4949436c6a4343416a32674177494241674956414a567658633239472b487051456e4a3150517a7a674658433935554d416f4743437147534d343942414d430a4d476778476a415942674e5642414d4d45556c756447567349464e48574342536232393049454e424d526f77474159445651514b4442464a626e526c624342440a62334a7762334a6864476c76626a45554d424947413155454277774c553246756447456751327868636d4578437a414a42674e564241674d416b4e424d5173770a435159445651514745774a56557a4165467730784f4441314d6a45784d4455774d5442614677307a4d7a41314d6a45784d4455774d5442614d484178496a41670a42674e5642414d4d47556c756447567349464e4857434251513073675547786864475a76636d306751304578476a415942674e5642416f4d45556c75644756730a49454e76636e4276636d4630615739754d5251774567594456515148444174545957353059534244624746795954454c4d416b474131554543417743513045780a437a414a42674e5642415954416c56544d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a304441516344516741454e53422f377432316c58534f0a3243757a7078773734654a423732457944476757357258437478327456544c7136684b6b367a2b5569525a436e71523770734f766771466553786c6d546c4a6c0a65546d693257597a33714f42757a43427544416642674e5648534d4547444157674251695a517a575770303069664f44744a5653763141624f536347724442530a42674e5648523845537a424a4d45656752614244686b466f64485277637a6f764c324e6c636e52705a6d6c6a5958526c63793530636e567a6447566b633256790a646d6c6a5a584d75615735305a577775593239744c306c756447567355306459556d397664454e424c6d526c636a416442674e5648513445466751556c5739640a7a62306234656c4153636e553944504f4156634c336c517744675944565230504151482f42415144416745474d42494741315564457745422f7751494d4159420a4166384341514177436759494b6f5a497a6a30454177494452774177524149675873566b6930772b6936565947573355462f32327561586530594a446a3155650a6e412b546a44316169356343494359623153416d4435786b66545670766f34556f79695359787244574c6d5552344349394e4b7966504e2b0a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a2d2d2d2d2d424547
494e2043455254494649434154452d2d2d2d2d0a4d4949436a7a4343416a53674177494241674955496d554d316c71644e496e7a6737535655723951477a6b6e42717777436759494b6f5a497a6a3045417749770a614445614d4267474131554541777752535735305a5777675530645949464a766233516751304578476a415942674e5642416f4d45556c756447567349454e760a636e4276636d4630615739754d5251774567594456515148444174545957353059534244624746795954454c4d416b47413155454341774351304578437a414a0a42674e5642415954416c56544d423458445445344d4455794d5445774e4455784d466f58445451354d54497a4d54497a4e546b314f566f77614445614d4267470a4131554541777752535735305a5777675530645949464a766233516751304578476a415942674e5642416f4d45556c756447567349454e76636e4276636d46300a615739754d5251774567594456515148444174545957353059534244624746795954454c4d416b47413155454341774351304578437a414a42674e56424159540a416c56544d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a3044415163445167414543366e45774d4449595a4f6a2f69505773437a61454b69370a314f694f534c52466857476a626e42564a66566e6b59347533496a6b4459594c304d784f346d717379596a6c42616c54565978465032734a424b357a6c4b4f420a757a43427544416642674e5648534d4547444157674251695a517a575770303069664f44744a5653763141624f5363477244425342674e5648523845537a424a0a4d45656752614244686b466f64485277637a6f764c324e6c636e52705a6d6c6a5958526c63793530636e567a6447566b63325679646d6c6a5a584d75615735300a5a577775593239744c306c756447567355306459556d397664454e424c6d526c636a416442674e564851344546675155496d554d316c71644e496e7a673753560a55723951477a6b6e4271777744675944565230504151482f42415144416745474d42494741315564457745422f7751494d4159424166384341514577436759490a4b6f5a497a6a3045417749445351417752674968414f572f35516b522b533943695344634e6f6f774c7550524c735747662f59693747535839344267775477670a41694541344a306c72486f4d732b586f356f2f7358364f39515778485241765a55474f6452513763767152586171493d0a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000","event_log":"[{\"imr\":0,\"event_type\":2147483659,\"digest\":\"2b630fa1d8fa9b6e8c6507d2256803ed227888f7d1f62ca5cfbddfe495338b1c6a0fc339ea64700f0835c0897a662789\",\"event\":\"\",\"event_payload\":\"095464785461626c65000100000000000000af96bb93f2b9b84e9462e0ba745642360090800000000000\"},{\"imr\":0,\"event_type\":2147483658,\"digest\":\"344bc51c980ba621aaa00da3ed7436f7d6e549197dfe699515dfa2c6583d95e6412af21c097d473155875ffd561d6790\",\"event\":\"\",\"event_payload\":\"2946762858585858585858582d585858582d585858582d585858582d58585858585858585858585829000000c0ff000000000040080000000000\"},{\"imr\":0,\"event_type\":2147483649,\"digest\":\"9dc3a1f80bcec915391dcda5ffbb15e7419f77eab462bbf72b42166fb70d50325e37b36f93537a863769bcf9bedae6fb\",\"event\":\"\",\"event_payload\":\"61dfe48bca93d211aa0d00e098032b8c0a00000000000000000000000000000053006500630075007200650042006f006f007400\"},{\"imr\":0,\"event_type\":2147483649,\"digest\":\"6f2e3cbc14f9def86980f5f66fd85e99d63e69a73014ed8a5633ce56eca5b64b692108c56110e22acadcef58c3250f1b\",\"event\":\"\",\"event_payload\":\"61dfe48bca93d211aa0d00e098032b8c0200000000000000000000000000000050004b00\"},{\"imr\":0,\"event_type\":2147483649,\"digest\":\"d607c0efb41c0d757d69bca0615c3a9ac0b1db06c557d992e906c6b7dee40e0e031640c7bfd7bcd35844ef9edeadc6f9\",\"event\":\"\",\"event_payload\":\"61dfe48bca93d211aa0d00e098032b8c030000000000000000000000000000004b0045004b00\"},{\"imr\":0,\"event_type\":2147483649,\"digest\":\"08a74f8963b337acb6c93682f934496373679dd26af1089cb4eaf0c30cf260a12e814856385ab8843e56a9acea19e127\",\"event\":\"\",\"event_payload\":\"cbb219d73a3d9645a3bcdad00e67656f0200000000000000000000000000000064006200\"},{\"imr\":0,\"event_type\":2147483649,\"digest\":\"18cc6e01f0c6ea99aa23f8a280423e94ad81d96d0aeb5180504fc0f7a40cb3619dd39bd6a95ec1680a86ed6ab0f9828d\",\"event\":\"\",\"event_payload\":\"cbb219d73a3d9645a3bcdad00e67656f03000000000000000000000000000000640062007800\"},{\"imr\":0,\"event_type\":4,\"digest\":\"39434
1b7182cd227c5c6b07ef8000cdfd86136c4292b8e576573ad7ed9ae41019f5818b4b971c9effc60e1ad9f1289f0\",\"event\":\"\",\"event_payload\":\"00000000\"},{\"imr\":0,\"event_type\":10,\"digest\":\"68cd79315e70aecd4afe7c1b23a5ed7b3b8e51a477e1739f111b3156def86bbc56ebf239dcd4591bc7a9fff90023f481\",\"event\":\"\",\"event_payload\":\"414350492044415441\"},{\"imr\":0,\"event_type\":10,\"digest\":\"6bc203b3843388cc4918459c3f5c6d1300a796fb594781b7ecfaa3ae7456975f095bfcc1156c9f2d25e8b8bc1b520f66\",\"event\":\"\",\"event_payload\":\"414350492044415441\"},{\"imr\":0,\"event_type\":10,\"digest\":\"444cf35d277a7b6049faf4ff23165e256e716eaad4650aeef6afae8e2dca3359b40c1b2eb997c5568f956616310c9147\",\"event\":\"\",\"event_payload\":\"414350492044415441\"},{\"imr\":1,\"event_type\":2147483651,\"digest\":\"1a417d47a3cee3249d13443d99ceb785b1b8b03fcf26a925f5701699779195baccfaf3c92f067f42f1c75aecb0c250b1\",\"event\":\"\",\"event_payload\":\"18a0443b0000000000b4b2000000000000000000000000002a000000000000000403140072f728144ab61e44b8c39ebdd7f893c7040412006b00650072006e0065006c0000007fff0400\"},{\"imr\":0,\"event_type\":2147483650,\"digest\":\"1dd6f7b457ad880d840d41c961283bab688e94e4b59359ea45686581e90feccea3c624b1226113f824f315eb60ae0a7c\",\"event\":\"\",\"event_payload\":\"61dfe48bca93d211aa0d00e098032b8c0900000000000000020000000000000042006f006f0074004f0072006400650072000000\"},{\"imr\":0,\"event_type\":2147483650,\"digest\":\"23ada07f5261f12f34a0bd8e46760962d6b4d576a416f1fea1c64bc656b1d28eacf7047ae6e967c58fd2a98bfa74c298\",\"event\":\"\",\"event_payload\":\"61dfe48bca93d211aa0d00e098032b8c08000000000000003e0000000000000042006f006f0074003000300030003000090100002c0055006900410070007000000004071400c9bdb87cebf8344faaea3ee4af6516a10406140021aa2c4614760345836e8ab6f46623317fff0400\"},{\"imr\":1,\"event_type\":2147483655,\"digest\":\"77a0dab2312b4e1e57a84d865a21e5b2ee8d677a21012ada819d0a98988078d3d740f6346bfe0abaa938ca20439a8d71\",\"event\":\"\",\"event_payload\":\"43616c6c696e6720454649204170706c69636174696f6
e2066726f6d20426f6f74204f7074696f6e\"},{\"imr\":1,\"event_type\":4,\"digest\":\"394341b7182cd227c5c6b07ef8000cdfd86136c4292b8e576573ad7ed9ae41019f5818b4b971c9effc60e1ad9f1289f0\",\"event\":\"\",\"event_payload\":\"00000000\"},{\"imr\":2,\"event_type\":6,\"digest\":\"d54d67fde61596a4334222c69d76a20273500c9df4d791a554eac56f899de9c35c59107ded404b86e49c11ac84bd17a8\",\"event\":\"\",\"event_payload\":\"ed223b8f1a0000004c4f414445445f494d4147453a3a4c6f61644f7074696f6e7300\"},{\"imr\":2,\"event_type\":6,\"digest\":\"d4e7efaf52826912904a9f1e7f9946b4e8926a65e0cb92e5d79b1b4bb86428bccccc16331b8e58d7faa3b74e51542b43\",\"event\":\"\",\"event_payload\":\"ec223b8f0d0000004c696e757820696e6974726400\"},{\"imr\":1,\"event_type\":2147483655,\"digest\":\"214b0bef1379756011344877743fdc2a5382bac6e70362d624ccf3f654407c1b4badf7d8f9295dd3dabdef65b27677e0\",\"event\":\"\",\"event_payload\":\"4578697420426f6f7420536572766963657320496e766f636174696f6e\"},{\"imr\":1,\"event_type\":2147483655,\"digest\":\"0a2e01c85deae718a530ad8c6d20a84009babe6c8989269e950d8cf440c6e997695e64d455c4174a652cd080f6230b74\",\"event\":\"\",\"event_payload\":\"4578697420426f6f742053657276696365732052657475726e656420776974682053756363657373\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"f9974020ef507068183313d0ca808e0d1ca9b2d1ad0c61f5784e7157c362c06536f5ddacdad4451693f48fcc72fff624\",\"event\":\"system-preparing\",\"event_payload\":\"\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"f1b8958288c8a8f4b15807764e91130d4176068bd19f4cf628cd0f66076489774e39f01f197d47b47ce5e27dabab7226\",\"event\":\"rootfs-hash\",\"event_payload\":\"0d51f07efbbdc35b9f97b65655171e5679df26daa7e247280a91b439f1104035\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"68c649acf8b2a364e8a34e8d47a03bb31c1596182621f50c504fdf075541487c23a9cc194310a913a7637dbbeb9a78aa\",\"event\":\"app-id\",\"event_payload\":\"06768e6df639ce3be65e9e5321f8b2d82dbffb01\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"f50a57024906cde893dcffa6d2f05a9bcf94cb7632f37
38c7541888f6c4ef6162990834d8e2a72a57a341a0e1c6fe603\",\"event\":\"compose-hash\",\"event_payload\":\"06768e6df639ce3be65e9e5321f8b2d82dbffb01c74323462d34c2feb9653aff\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"e1179208cc27125fbd2c6aacc98476303ce36e006805e9a98f8bf3919c50021e1e99035e2eb510a78d65a27c3455eb4f\",\"event\":\"instance-id\",\"event_payload\":\"9ba3255b85b26ec04b5ce853056f7651144ff3b5\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"98bd7e6bd3952720b65027fd494834045d06b4a714bf737a06b874638b3ea00ff402f7f583e3e3b05e921c8570433ac6\",\"event\":\"boot-mr-done\",\"event_payload\":\"\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"deac37b945531c095011446abb3fa70b8cbf895bf6fd42ec1e485dc4d1edef4023e1a419977c2212c5b283519dc03cc6\",\"event\":\"key-provider\",\"event_payload\":\"7b226e616d65223a226b6d73222c226964223a223330353933303133303630373261383634386365336430323031303630383261383634386365336430333031303730333432303030346431366635653338663364386631623364356539346333393935373732376664636363616630656430383363653862353933383136336433393661363831333463626634366637353163663636613064653137643463663364623031346366393337613363333565613935343066386231663836336362396562613631356631227d\"},{\"imr\":3,\"event_type\":134217729,\"digest\":\"1a76b2a80a0be71eae59f80945d876351a7a3fb8e9fd1ff1cede5734aa84ea11fd72b4edfbb6f04e5a85edd114c751bd\",\"event\":\"system-ready\",\"event_payload\":\"\"}]","hash_algorithm":"raw","prefix":""} -------------------------------------------------------------------------------- /attestation/verify.py: -------------------------------------------------------------------------------- 1 | """ 2 | This is an example script of how to do remote attestation for Dstack Applications. 3 | 4 | Dependencies: 5 | - Dstack OS Image: Can be built from source or downloaded from https://github.com/Dstack-TEE/dstack/releases/tag/dev-v0.4.0.0 for the image used in this demo. 
6 | - dcap-qvl: Phala's TDX/SGX Quote Verification tool (install with `cargo install dcap-qvl-cli`) 7 | - dstack-mr: Tool for calculating expected measurement values for Dstack Base Images, install with `go install github.com/kvinwang/dstack-mr@latest` 8 | 9 | Example usage is provided in the __main__ section. 10 | """ 11 | 12 | import hashlib 13 | import json 14 | from typing import Dict, Any 15 | import tempfile 16 | import subprocess 17 | import os 18 | 19 | INIT_MR = "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" 20 | 21 | def replay_rtmr(history: list[str]): 22 | """ 23 | Replay the RTMR history to calculate the final RTMR value. 24 | """ 25 | if len(history) == 0: 26 | return INIT_MR 27 | mr = bytes.fromhex(INIT_MR) 28 | for content in history: 29 | # mr = sha384(concat(mr, content)) 30 | # if content is shorter than 48 bytes, pad it with zeros 31 | content = bytes.fromhex(content) 32 | if len(content) < 48: 33 | content = content.ljust(48, b'\0') 34 | mr = hashlib.sha384(mr + content).digest() 35 | return mr.hex() 36 | 37 | 38 | class DstackTdxQuote: 39 | quote: str 40 | event_log: str 41 | verified_quote: Dict[str, Any] 42 | parsed_event_log: list[Dict[str, Any]] 43 | app_id: str 44 | compose_hash: str 45 | instance_id: str 46 | key_provider: str 47 | 48 | def __init__(self, quote: str, event_log: str): 49 | """ 50 | Initialize the DstackTdxQuote object. 51 | """ 52 | self.quote = bytes.fromhex(quote) 53 | self.event_log = event_log 54 | self.parsed_event_log = json.loads(self.event_log) 55 | self.extract_info_from_event_log() 56 | 57 | def extract_info_from_event_log(self): 58 | """ 59 | Extract the app ID, compose hash, instance ID, and key provider from the event log. 
60 | """ 61 | for event in self.parsed_event_log: 62 | if event.get('event') == 'app-id': 63 | self.app_id = event.get('event_payload', '') 64 | elif event.get('event') == 'compose-hash': 65 | self.compose_hash = event.get('event_payload', '') 66 | elif event.get('event') == 'instance-id': 67 | self.instance_id = event.get('event_payload', '') 68 | elif event.get('event') == 'key-provider': 69 | self.key_provider = bytes.fromhex(event.get('event_payload', '')).decode('utf-8') 70 | 71 | def mrs(self) -> Dict[str, str]: 72 | """ 73 | Get the MRs from the verified quote. 74 | """ 75 | report = self.verified_quote.get('report', {}) 76 | if 'TD10' in report: 77 | return report['TD10'] 78 | elif 'TD15' in report: 79 | return report['TD15'] 80 | else: 81 | raise ValueError("No TD10 or TD15 report found in the quote") 82 | 83 | def verify(self): 84 | """ 85 | Verify the TDX quote using dcap-qvl command. 86 | Returns True if verification succeeds, False otherwise. 87 | """ 88 | 89 | with tempfile.NamedTemporaryFile(delete=False) as temp_file: 90 | temp_file.write(self.quote) 91 | temp_path = temp_file.name 92 | 93 | try: 94 | result = subprocess.run( 95 | ["dcap-qvl", "verify", temp_path], 96 | capture_output=True, 97 | text=True 98 | ) 99 | if result.returncode != 0: 100 | raise ValueError(f"dcap-qvl verify failed with return code {result.returncode}") 101 | self.verified_quote = json.loads(result.stdout) 102 | finally: 103 | os.unlink(temp_path) 104 | 105 | def validate_event(self, event: Dict[str, Any]) -> bool: 106 | """ 107 | Validate an event's digest according to the Rust implementation. 108 | Returns True if the event is valid, False otherwise. 
109 | """ 110 | # Skip validation for non-IMR3 events for now 111 | if event.get('imr') != 3: 112 | return True 113 | 114 | # Calculate digest using sha384(type:event:payload) 115 | event_type = event.get('event_type', 0) 116 | event_name = event.get('event', '') 117 | event_payload = bytes.fromhex(event.get('event_payload', '')) 118 | 119 | if isinstance(event_payload, str): 120 | event_payload = event_payload.encode() 121 | 122 | hasher = hashlib.sha384() 123 | hasher.update(event_type.to_bytes(4, byteorder='little')) 124 | hasher.update(b':') 125 | hasher.update(event_name.encode()) 126 | hasher.update(b':') 127 | hasher.update(event_payload) 128 | 129 | calculated_digest = hasher.digest().hex() 130 | return calculated_digest == event.get('digest') 131 | 132 | def replay_rtmrs(self) -> Dict[int, str]: 133 | rtmrs = {} 134 | for idx in range(4): 135 | history = [] 136 | for event in self.parsed_event_log: 137 | if event.get('imr') == idx: 138 | # Only add digest to history if event is valid 139 | if self.validate_event(event): 140 | history.append(event['digest']) 141 | else: 142 | raise ValueError(f"Invalid event digest found in IMR {idx}") 143 | rtmrs[idx] = replay_rtmr(history) 144 | return rtmrs 145 | 146 | 147 | def sha256_hex(data: str) -> str: 148 | """ 149 | Calculate the SHA256 hash of the given data. 
150 | """ 151 | return hashlib.sha256(data.encode()).hexdigest() 152 | 153 | 154 | if __name__ == "__main__": 155 | vcpus = '1' 156 | memory = '1G' 157 | 158 | print('Pre-calculated RTMRs') 159 | result = subprocess.run( 160 | ["dstack-mr", "-cpu", vcpus, "-memory", memory, "-json", "-metadata", "images/dstack-dev-0.4.0/metadata.json"], 161 | capture_output=True, 162 | text=True 163 | ) 164 | if result.returncode != 0: 165 | raise ValueError(f"dstack-mr failed with return code {result.returncode}: {result.stderr or result.stdout}") 166 | expected_mrs = json.loads(result.stdout) 167 | print(json.dumps(expected_mrs, indent=2)) 168 | 169 | report = json.load(open('report.json')) 170 | quote = DstackTdxQuote(report['quote'], report['event_log']) 171 | quote.verify() 172 | 173 | print("Quote verified") 174 | print(f"TCB status: {quote.verified_quote['status']}") 175 | 176 | verified_mrs = quote.mrs() 177 | show_mrs = { 178 | "mrtd": verified_mrs['mr_td'], 179 | "rtmr0": verified_mrs['rt_mr0'], 180 | "rtmr1": verified_mrs['rt_mr1'], 181 | "rtmr2": verified_mrs['rt_mr2'], 182 | "rtmr3": verified_mrs['rt_mr3'], 183 | "report_data": verified_mrs['report_data'], 184 | } 185 | print(json.dumps(show_mrs, indent=2)) 186 | 187 | assert verified_mrs['mr_td'] == expected_mrs['mrtd'], f"MRTD mismatch: {verified_mrs['mr_td']} != {expected_mrs['mrtd']}" 188 | assert verified_mrs['rt_mr0'] == expected_mrs['rtmr0'], f"RTMR0 mismatch: {verified_mrs['rt_mr0']} != {expected_mrs['rtmr0']}" 189 | assert verified_mrs['rt_mr1'] == expected_mrs['rtmr1'], f"RTMR1 mismatch: {verified_mrs['rt_mr1']} != {expected_mrs['rtmr1']}" 190 | assert verified_mrs['rt_mr2'] == expected_mrs['rtmr2'], f"RTMR2 mismatch: {verified_mrs['rt_mr2']} != {expected_mrs['rtmr2']}" 191 | 192 | replayed_mrs = quote.replay_rtmrs() 193 | print("Replay RTMRs") 194 | print(json.dumps(replayed_mrs, indent=2)) 195 | 196 | assert replayed_mrs[3] == verified_mrs['rt_mr3'], f"RTMR3 mismatch: {replayed_mrs[3]} != {verified_mrs['rt_mr3']}" 197 | 198
| expected_compose_hash = sha256_hex(open('app-compose.json').read()) 199 | assert quote.compose_hash == expected_compose_hash, f"Compose hash mismatch: {quote.compose_hash} != {expected_compose_hash}" 200 | 201 | print(f"App ID: {quote.app_id}") 202 | print(f"Compose Hash: {quote.compose_hash}") 203 | print(f"Instance ID: {quote.instance_id}") 204 | print(f"Key Provider: {quote.key_provider}") 205 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca 2 | 3 | # Use a specific Debian snapshot for reproducible builds 4 | RUN set -e; \ 5 | # Create a sources.list file pointing to a specific snapshot 6 | echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20250411T024939Z bookworm main' > /etc/apt/sources.list && \ 7 | echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian-security/20250411T024939Z bookworm-security main' >> /etc/apt/sources.list && \ 8 | echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/10no-check-valid-until && \ 9 | # Install packages with exact versions for reproducibility 10 | apt-get -o Acquire::Check-Valid-Until=false update && \ 11 | apt-get install -y --no-install-recommends \ 12 | certbot=2.1.0-4 \ 13 | openssl=3.0.15-1~deb12u1 \ 14 | bash=5.2.15-2+b7 \ 15 | python3=3.11.2-1+b1 \ 16 | python3-pip=23.0.1+dfsg-1 \ 17 | python3-requests=2.28.1+dfsg-1 \ 18 | python3.11-venv=3.11.2-6+deb12u5 \ 19 | curl=7.88.1-10+deb12u12 \ 20 | jq=1.6-2.1 \ 21 | coreutils=9.1-1 && \ 22 | rm -rf /var/lib/apt/lists/* /var/log/* /var/cache/ldconfig/aux-cache 23 | 24 | 25 | RUN mkdir -p /etc/letsencrypt /var/www/certbot /usr/share/nginx/html 26 | 27 | COPY ./scripts/* /scripts/ 28 | RUN chmod +x /scripts/* 29 | ENV PATH="/scripts:$PATH" 30 | 31 | ENTRYPOINT 
["/scripts/entrypoint.sh"] 32 | CMD ["nginx", "-g", "daemon off;"] 33 | 34 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/README.md: -------------------------------------------------------------------------------- 1 | # Custom Domain Setup for dstack Applications 2 | 3 | This repository provides a solution for setting up custom domains with automatic SSL certificate management for dstack applications using Cloudflare DNS and Let's Encrypt. 4 | 5 | ## Overview 6 | 7 | This project enables you to run dstack applications with your own custom domain, complete with: 8 | 9 | - Automatic SSL certificate provisioning and renewal via Let's Encrypt 10 | - Cloudflare DNS configuration for CNAME, TXT, and CAA records 11 | - Nginx reverse proxy to route traffic to your application 12 | - Certificate evidence generation for verification 13 | 14 | ## How It Works 15 | 16 | The dstack-ingress system provides a seamless way to set up custom domains for dstack applications with automatic SSL certificate management. Here's how it works: 17 | 18 | 1. **Initial Setup**: 19 | - When first deployed, the container automatically obtains SSL certificates from Let's Encrypt using DNS validation 20 | - It configures Cloudflare DNS by creating necessary CNAME, TXT, and optional CAA records 21 | - Nginx is configured to use the obtained certificates and proxy requests to your application 22 | 23 | 2. **DNS Configuration**: 24 | - A CNAME record is created to point your custom domain to the dstack gateway domain 25 | - A TXT record is added with application identification information to help dstack-gateway to route traffic to your application 26 | - If enabled, CAA records are set to restrict which Certificate Authorities can issue certificates for your domain 27 | 28 | 3. 
**Certificate Management**: 29 | - SSL certificates are automatically obtained during initial setup 30 | - A scheduled task runs twice daily to check for certificate renewal 31 | - When certificates are renewed, Nginx is automatically reloaded to use the new certificates 32 | 33 | 4. **Evidence Generation**: 34 | - The system generates evidence files for verification purposes 35 | - These include the ACME account information and certificate data 36 | - Evidence files are accessible through a dedicated endpoint 37 | 38 | ## Usage 39 | 40 | ### Prerequisites 41 | 42 | - Host your domain on Cloudflare and have access to the Cloudflare account with API token 43 | 44 | ### Deployment 45 | 46 | You can either build the ingress container and push it to docker hub, or use the prebuilt image at `kvin/dstack-ingress`. 47 | 48 | #### Option 1: Use the Pre-built Image 49 | 50 | The fastest way to get started is to use our pre-built image. Simply use the following docker-compose configuration: 51 | 52 | ```yaml 53 | services: 54 | dstack-ingress: 55 | image: kvin/dstack-ingress@sha256:8dfc3536d1bd0be0cb938140aeff77532d35514ae580d8bec87d3d5a26a21470 56 | ports: 57 | - "443:443" 58 | environment: 59 | - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN} 60 | - DOMAIN=${DOMAIN} 61 | - GATEWAY_DOMAIN=${GATEWAY_DOMAIN} 62 | - CERTBOT_EMAIL=${CERTBOT_EMAIL} 63 | - SET_CAA=true 64 | - TARGET_ENDPOINT=http://app:80 65 | volumes: 66 | - /var/run/tappd.sock:/var/run/tappd.sock 67 | - cert-data:/etc/letsencrypt 68 | restart: unless-stopped 69 | app: 70 | image: nginx # Replace with your application image 71 | restart: unless-stopped 72 | volumes: 73 | cert-data: # Persistent volume for certificates 74 | ``` 75 | 76 | Explanation of environment variables: 77 | 78 | - `CLOUDFLARE_API_TOKEN`: Your Cloudflare API token 79 | - `DOMAIN`: Your custom domain 80 | - `GATEWAY_DOMAIN`: The dstack gateway domain. (e.g. 
`_.dstack-prod5.phala.network` for Phala Cloud) 81 | - `CERTBOT_EMAIL`: Your email address used in Let's Encrypt certificate requests 82 | - `TARGET_ENDPOINT`: The plain HTTP endpoint of your dstack application 83 | - `SET_CAA`: Set to `true` to enable CAA record setup 84 | 85 | #### Option 2: Build Your Own Image 86 | 87 | If you prefer to build the image yourself: 88 | 89 | 1. Clone this repository 90 | 2. Build the Docker image: 91 | 92 | ```bash 93 | docker build -t yourusername/dstack-ingress . 94 | ``` 95 | 96 | 3. Push to your registry (optional): 97 | 98 | ```bash 99 | docker push yourusername/dstack-ingress 100 | ``` 101 | 102 | 4. Update the docker-compose.yaml file with your image name and deploy 103 | 104 | ## Domain Attestation and Verification 105 | 106 | The dstack-ingress system provides mechanisms to verify and attest that your custom domain endpoint is secure and properly configured. This comprehensive verification approach ensures the integrity and authenticity of your application. 107 | 108 | ### Evidence Collection 109 | 110 | When certificates are issued or renewed, the system automatically generates a set of cryptographically linked evidence files: 111 | 112 | 1. **Access Evidence Files**: 113 | - Evidence files are accessible at `https://your-domain.com/evidences/` 114 | - Key files include `acme-account.json`, `cert.pem`, `sha256sum.txt`, and `quote.json` 115 | 116 | 2. **Verification Chain**: 117 | - `quote.json` contains a TDX quote with the SHA-256 digest of `sha256sum.txt` embedded in the report_data field 118 | - `sha256sum.txt` contains cryptographic checksums of both `acme-account.json` and `cert.pem` 119 | - When the TDX quote is verified, it cryptographically proves the integrity of the entire evidence chain 120 | 121 | 3. 
**Certificate Authentication**: 122 | - `acme-account.json` contains the ACME account credentials used to request certificates 123 | - When combined with the CAA DNS record, this provides evidence that certificates can only be requested from within this specific TEE application 124 | - `cert.pem` is the Let's Encrypt certificate currently serving your custom domain 125 | 126 | ### CAA Record Verification 127 | 128 | If you've enabled CAA records (`SET_CAA=true`), you can verify that only authorized Certificate Authorities can issue certificates for your domain: 129 | 130 | ```bash 131 | dig CAA your-domain.com 132 | ``` 133 | 134 | The output will display CAA records that restrict certificate issuance exclusively to Let's Encrypt with your specific account URI, providing an additional layer of security. 135 | 136 | ### TLS Certificate Transparency 137 | 138 | All Let's Encrypt certificates are logged in public Certificate Transparency (CT) logs, enabling independent verification: 139 | 140 | **CT Log Verification**: 141 | - Visit [crt.sh](https://crt.sh/) and search for your domain 142 | - Confirm that the certificates match those issued by the dstack-ingress system 143 | - This public logging ensures that all certificates are visible and can be monitored for unauthorized issuance 144 | 145 | ## License 146 | 147 | MIT License 148 | 149 | Copyright (c) 2025 150 | 151 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 152 | 153 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
154 | 155 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 156 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/build-image.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | NAME=$1 3 | if [ -z "$NAME" ]; then 4 | echo "Usage: $0 <image-name>[:<tag>]" 5 | exit 1 6 | fi 7 | # Check if buildkit_20 already exists before creating it 8 | if ! docker buildx inspect buildkit_20 &>/dev/null; then 9 | docker buildx create --use --driver-opt image=moby/buildkit:v0.20.2 --name buildkit_20 10 | fi 11 | docker buildx build --builder buildkit_20 --no-cache --build-arg SOURCE_DATE_EPOCH="0" --output type=docker,name=$NAME,rewrite-timestamp=true .
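The evidence files generated by the ingress container form a hash chain: `sha256sum.txt` pins `acme-account.json` and `cert.pem`, and the TDX quote's `report_data` pins `sha256sum.txt`. The chain can be re-derived offline; a minimal sketch, using hypothetical placeholder contents in place of the files served under `/evidences/`:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical placeholders for the downloaded evidence files.
acme_account = b'{"uri": "https://acme-v02.api.letsencrypt.org/acme/acct/0"}'
cert_pem = b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

# Step 1: recompute sha256sum.txt (sha256sum's "<hash>  <name>" format)
# and compare it byte-for-byte with the served copy.
sha256sum_txt = (
    f"{sha256_hex(acme_account)}  acme-account.json\n"
    f"{sha256_hex(cert_pem)}  cert.pem\n"
).encode()

# Step 2: the quote's report_data should equal the digest of
# sha256sum.txt, zero-padded to 128 hex characters (64 bytes),
# matching what generate-evidences.sh embeds when requesting the quote.
expected_report_data = sha256_hex(sha256sum_txt).ljust(128, "0")

print(len(expected_report_data))  # 128
```

If the recomputed `expected_report_data` matches the `report_data` field of a successfully verified quote, the served certificate and ACME account are exactly the ones attested by the TEE.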
12 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | services: 2 | dstack-ingress: 3 | image: kvin/dstack-ingress@sha256:8fad2a37bf2b4d2f9529e8953bca341bea17475b72d0ba746789395e5eace9d1 4 | ports: 5 | - "443:443" 6 | environment: 7 | - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN} 8 | - DOMAIN=${DOMAIN} 9 | - GATEWAY_DOMAIN=${GATEWAY_DOMAIN} 10 | - CERTBOT_EMAIL=${CERTBOT_EMAIL} 11 | - SET_CAA=true 12 | - TARGET_ENDPOINT=http://app:80 13 | volumes: 14 | - /var/run/tappd.sock:/var/run/tappd.sock 15 | - cert-data:/etc/letsencrypt 16 | restart: unless-stopped 17 | 18 | app: 19 | image: nginx 20 | restart: unless-stopped 21 | 22 | volumes: 23 | cert-data: 24 | 25 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/scripts/cloudflare_dns.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import argparse 4 | import json 5 | import os 6 | import sys 7 | import requests 8 | from typing import Dict, List, Optional 9 | 10 | 11 | class CloudflareDNSClient: 12 | """A client for managing DNS records in Cloudflare with better error handling.""" 13 | 14 | def __init__(self, api_token: str, zone_id: Optional[str] = None): 15 | self.api_token = api_token 16 | self.zone_id = zone_id 17 | self.base_url = "https://api.cloudflare.com/client/v4" 18 | self.headers = { 19 | "Authorization": f"Bearer {api_token}", 20 | "Content-Type": "application/json" 21 | } 22 | 23 | def _make_request(self, method: str, endpoint: str, data: Optional[Dict] = None) -> Dict: 24 | """Make a request to the Cloudflare API with error handling.""" 25 | url = f"{self.base_url}/{endpoint}" 26 | try: 27 | if method.upper() == "GET": 28 | response = requests.get(url, headers=self.headers) 29 | elif method.upper() == 
"POST": 30 | response = requests.post(url, headers=self.headers, json=data) 31 | elif method.upper() == "DELETE": 32 | response = requests.delete(url, headers=self.headers) 33 | else: 34 | raise ValueError(f"Unsupported HTTP method: {method}") 35 | 36 | response.raise_for_status() 37 | result = response.json() 38 | 39 | if not result.get("success", False): 40 | errors = result.get("errors", []) 41 | error_msg = "\n".join([f"Code: {e.get('code')}, Message: {e.get('message')}" for e in errors]) 42 | print(f"API Error: {error_msg}", file=sys.stderr) 43 | # Print the request data for debugging 44 | if data: 45 | print(f"Request data: {json.dumps(data)}", file=sys.stderr) 46 | return {"success": False, "errors": errors} 47 | 48 | return result 49 | except requests.exceptions.RequestException as e: 50 | print(f"Request Error: {str(e)}", file=sys.stderr) 51 | # Print the request data for debugging 52 | if data: 53 | print(f"Request data: {json.dumps(data)}", file=sys.stderr) 54 | return {"success": False, "errors": [{"message": str(e)}]} 55 | except json.JSONDecodeError: 56 | print(f"JSON Decode Error: Could not parse response", file=sys.stderr) 57 | return {"success": False, "errors": [{"message": "Could not parse response"}]} 58 | except Exception as e: 59 | print(f"Unexpected Error: {str(e)}", file=sys.stderr) 60 | return {"success": False, "errors": [{"message": str(e)}]} 61 | 62 | def get_zone_id(self, domain: str) -> Optional[str]: 63 | """Get the zone ID for a domain.""" 64 | # Extract the root domain (e.g., example.com from sub.example.com) 65 | parts = domain.split('.') 66 | if len(parts) > 2: 67 | root_domain = '.'.join(parts[-2:]) 68 | else: 69 | root_domain = domain 70 | 71 | print(f"Fetching zone ID for domain: {root_domain}") 72 | result = self._make_request("GET", f"zones?name={root_domain}") 73 | 74 | if not result.get("success", False): 75 | return None 76 | 77 | zones = result.get("result", []) 78 | if not zones: 79 | print(f"No zones found for domain: 
{root_domain}", file=sys.stderr) 80 | return None 81 | 82 | zone_id = zones[0].get("id") 83 | if zone_id: 84 | print(f"Successfully retrieved zone ID: {zone_id} for domain {root_domain}") 85 | # Store the zone ID separately from any print output 86 | self.zone_id = zone_id 87 | return zone_id 88 | else: 89 | print(f"Zone ID not found in response for domain: {root_domain}", file=sys.stderr) 90 | return None 91 | 92 | def get_dns_records(self, name: str, record_type: Optional[str] = None) -> List[Dict]: 93 | """Get DNS records for a domain.""" 94 | if not self.zone_id: 95 | print("Zone ID is required", file=sys.stderr) 96 | return [] 97 | 98 | params = f"zones/{self.zone_id}/dns_records?name={name}" 99 | if record_type: 100 | params += f"&type={record_type}" 101 | 102 | print(f"Checking for existing DNS records for {name}") 103 | result = self._make_request("GET", params) 104 | 105 | if not result.get("success", False): 106 | return [] 107 | 108 | records = result.get("result", []) 109 | return records 110 | 111 | def delete_dns_record(self, record_id: str) -> bool: 112 | """Delete a DNS record.""" 113 | if not self.zone_id: 114 | print("Zone ID is required", file=sys.stderr) 115 | return False 116 | 117 | print(f"Deleting record ID: {record_id}") 118 | result = self._make_request("DELETE", f"zones/{self.zone_id}/dns_records/{record_id}") 119 | 120 | return result.get("success", False) 121 | 122 | def create_cname_record(self, name: str, content: str, ttl: int = 60, proxied: bool = False) -> bool: 123 | """Create a CNAME record.""" 124 | if not self.zone_id: 125 | print("Zone ID is required", file=sys.stderr) 126 | return False 127 | 128 | data = { 129 | "type": "CNAME", 130 | "name": name, 131 | "content": content, 132 | "ttl": ttl, 133 | "proxied": proxied 134 | } 135 | 136 | print(f"Adding CNAME record for {name} pointing to {content}") 137 | result = self._make_request("POST", f"zones/{self.zone_id}/dns_records", data) 138 | 139 | return result.get("success", 
False) 140 | 141 | def create_txt_record(self, name: str, content: str, ttl: int = 60) -> bool: 142 | """Create a TXT record.""" 143 | if not self.zone_id: 144 | print("Zone ID is required", file=sys.stderr) 145 | return False 146 | 147 | data = { 148 | "type": "TXT", 149 | "name": name, 150 | "content": f'"{content}"', 151 | "ttl": ttl 152 | } 153 | 154 | print(f"Adding TXT record for {name} with content {content}") 155 | result = self._make_request("POST", f"zones/{self.zone_id}/dns_records", data) 156 | 157 | return result.get("success", False) 158 | 159 | def create_caa_record(self, name: str, tag: str, value: str, flags: int = 0, ttl: int = 60) -> bool: 160 | """Create a CAA record.""" 161 | if not self.zone_id: 162 | print("Zone ID is required", file=sys.stderr) 163 | return False 164 | 165 | # Clean up the value - remove any existing quotes that might cause issues 166 | clean_value = value.strip('"') 167 | 168 | # Cloudflare API expects a different structure for CAA records 169 | # The data field should contain flags, tag, and value separately 170 | data = { 171 | "type": "CAA", 172 | "name": name, 173 | "ttl": ttl, 174 | "data": { 175 | "flags": flags, 176 | "tag": tag, 177 | "value": clean_value 178 | } 179 | } 180 | 181 | print(f"Adding CAA record for {name} with tag {tag} and value {clean_value}") 182 | result = self._make_request("POST", f"zones/{self.zone_id}/dns_records", data) 183 | 184 | return result.get("success", False) 185 | 186 | 187 | def main(): 188 | parser = argparse.ArgumentParser(description="Manage Cloudflare DNS records") 189 | parser.add_argument("action", choices=["get_zone_id", "set_cname", "set_txt", "set_caa"], 190 | help="Action to perform") 191 | parser.add_argument("--domain", required=True, help="Domain name") 192 | parser.add_argument("--api-token", help="Cloudflare API token") 193 | parser.add_argument("--zone-id", help="Cloudflare Zone ID") 194 | parser.add_argument("--content", help="Record content (target for CNAME, value 
for TXT/CAA)") 195 | parser.add_argument("--caa-tag", choices=["issue", "issuewild", "iodef"], 196 | help="CAA record tag") 197 | parser.add_argument("--caa-value", help="CAA record value") 198 | 199 | args = parser.parse_args() 200 | 201 | # Get API token from environment if not provided 202 | api_token = args.api_token or os.environ.get("CLOUDFLARE_API_TOKEN") 203 | if not api_token: 204 | print("Error: Cloudflare API token is required", file=sys.stderr) 205 | sys.exit(1) 206 | 207 | # Create DNS client 208 | client = CloudflareDNSClient(api_token, args.zone_id) 209 | 210 | if args.action == "get_zone_id": 211 | zone_id = client.get_zone_id(args.domain) 212 | if not zone_id: 213 | sys.exit(1) 214 | print(zone_id) # Output zone ID for shell script to capture 215 | 216 | elif args.action == "set_cname": 217 | if not args.content: 218 | print("Error: --content is required for CNAME records", file=sys.stderr) 219 | sys.exit(1) 220 | 221 | # Get zone ID if not provided 222 | if not client.zone_id: 223 | zone_id = client.get_zone_id(args.domain) 224 | if not zone_id: 225 | sys.exit(1) 226 | # Make sure to use the zone_id from the client object, not the printed output 227 | client.zone_id = zone_id 228 | 229 | # Check for existing records and delete them 230 | existing_records = client.get_dns_records(args.domain, "CNAME") 231 | for record in existing_records: 232 | client.delete_dns_record(record["id"]) 233 | 234 | # Create new CNAME record 235 | success = client.create_cname_record(args.domain, args.content) 236 | if not success: 237 | sys.exit(1) 238 | 239 | elif args.action == "set_txt": 240 | # Get zone ID if not provided 241 | if not client.zone_id: 242 | zone_id = client.get_zone_id(args.domain) 243 | if not zone_id: 244 | sys.exit(1) 245 | # Make sure to use the zone_id from the client object, not the printed output 246 | client.zone_id = zone_id 247 | 248 | # Check for existing records and delete them 249 | existing_records = client.get_dns_records(args.domain, 
"TXT") 250 | for record in existing_records: 251 | client.delete_dns_record(record["id"]) 252 | 253 | # Create new TXT record 254 | success = client.create_txt_record(args.domain, args.content) 255 | if not success: 256 | sys.exit(1) 257 | 258 | elif args.action == "set_caa": 259 | if not args.caa_tag or not args.caa_value: 260 | print("Error: --caa-tag and --caa-value are required for CAA records", file=sys.stderr) 261 | sys.exit(1) 262 | 263 | # Get zone ID if not provided 264 | if not client.zone_id: 265 | zone_id = client.get_zone_id(args.domain) 266 | if not zone_id: 267 | sys.exit(1) 268 | # Make sure to use the zone_id from the client object, not the printed output 269 | client.zone_id = zone_id 270 | 271 | # Check for existing records 272 | existing_records = client.get_dns_records(args.domain, "CAA") 273 | for record in existing_records: 274 | # With the new API format, we need to check the data structure 275 | record_data = record.get("data", {}) 276 | record_tag = record_data.get("tag", "") 277 | record_value = record_data.get("value", "") 278 | 279 | # If we find a record with the same tag and value, no need to update 280 | if record_tag == args.caa_tag and record_value == args.caa_value: 281 | print(f"CAA record with the same content already exists") 282 | return 283 | 284 | # If it's the same tag but different value, delete it 285 | if record_tag == args.caa_tag: 286 | client.delete_dns_record(record["id"]) 287 | 288 | # Create new CAA record 289 | success = client.create_caa_record(args.domain, args.caa_tag, args.caa_value) 290 | if not success: 291 | print(f"Failed to create CAA record for {args.domain}") 292 | sys.exit(1) 293 | 294 | 295 | if __name__ == "__main__": 296 | main() 297 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/scripts/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | PORT=${PORT:-443} 
5 | TXT_PREFIX=${TXT_PREFIX:-"_tapp-address"} 6 | 7 | setup_py_env() { 8 | if [ ! -d "/opt/app-venv" ]; then 9 | python3 -m venv --system-site-packages /opt/app-venv 10 | fi 11 | source /opt/app-venv/bin/activate 12 | pip install certbot-dns-cloudflare==4.0.0 13 | } 14 | 15 | setup_nginx_conf() { 16 | cat <<EOF > /etc/nginx/conf.d/default.conf 17 | server { 18 | listen ${PORT} ssl; 19 | server_name ${DOMAIN}; 20 | 21 | ssl_certificate /etc/letsencrypt/live/${DOMAIN}/fullchain.pem; 22 | ssl_certificate_key /etc/letsencrypt/live/${DOMAIN}/privkey.pem; 23 | 24 | location / { 25 | proxy_pass ${TARGET_ENDPOINT}; 26 | } 27 | 28 | location /evidences/ { 29 | alias /evidences/; 30 | autoindex on; 31 | } 32 | } 33 | EOF 34 | mkdir -p /var/log/nginx 35 | } 36 | 37 | obtain_certificate() { 38 | # Request certificate using the virtual environment 39 | certbot certonly --dns-cloudflare \ 40 | --dns-cloudflare-credentials ~/.cloudflare/cloudflare.ini \ 41 | --dns-cloudflare-propagation-seconds 120 \ 42 | --email $CERTBOT_EMAIL \ 43 | --agree-tos --no-eff-email --non-interactive \ 44 | -d $DOMAIN 45 | } 46 | 47 | set_cname_record() { 48 | # Use the Python client to set the CNAME record 49 | # This will automatically check for and delete existing records 50 | cloudflare_dns.py set_cname \ 51 | --zone-id "$CLOUDFLARE_ZONE_ID" \ 52 | --domain "$DOMAIN" \ 53 | --content "$GATEWAY_DOMAIN" 54 | 55 | if [ $? -ne 0 ]; then 56 | echo "Error: Failed to set CNAME record for $DOMAIN" 57 | exit 1 58 | fi 59 | } 60 | 61 | set_txt_record() { 62 | local APP_ID 63 | 64 | # Fetch the app ID from tappd if not provided 65 | APP_ID=${APP_ID:-$(curl -s --unix-socket /var/run/tappd.sock http://localhost/prpc/Tappd.Info | jq -j '.app_id')} 66 | 67 | # Use the Python client to set the TXT record 68 | cloudflare_dns.py set_txt \ 69 | --zone-id "$CLOUDFLARE_ZONE_ID" \ 70 | --domain "${TXT_PREFIX}.${DOMAIN}" \ 71 | --content "$APP_ID:$PORT" 72 | 73 | if [ $?
-ne 0 ]; then 74 | echo "Error: Failed to set TXT record for $DOMAIN" 75 | exit 1 76 | fi 77 | } 78 | 79 | set_caa_record() { 80 | if [ "$SET_CAA" != "true" ]; then 81 | echo "Skipping CAA record setup" 82 | return 83 | fi 84 | # Add CAA record for the domain 85 | local ACCOUNT_URI 86 | ACCOUNT_URI=$(jq -j '.uri' /evidences/acme-account.json) 87 | echo "Adding CAA record for $DOMAIN, accounturi=$ACCOUNT_URI" 88 | cloudflare_dns.py set_caa \ 89 | --zone-id "$CLOUDFLARE_ZONE_ID" \ 90 | --domain "$DOMAIN" \ 91 | --caa-tag "issue" \ 92 | --caa-value "letsencrypt.org;validationmethods=dns-01;accounturi=$ACCOUNT_URI" 93 | 94 | if [ $? -ne 0 ]; then 95 | echo "Error: Failed to set CAA record for $DOMAIN" 96 | exit 1 97 | fi 98 | } 99 | 100 | bootstrap() { 101 | echo "Obtaining new certificate for $DOMAIN" 102 | setup_py_env 103 | obtain_certificate 104 | generate-evidences.sh 105 | set_cname_record 106 | set_txt_record 107 | set_caa_record 108 | touch /.bootstrapped 109 | } 110 | 111 | # Create Cloudflare credentials file 112 | mkdir -p ~/.cloudflare 113 | echo "dns_cloudflare_api_token = $CLOUDFLARE_API_TOKEN" > ~/.cloudflare/cloudflare.ini 114 | chmod 600 ~/.cloudflare/cloudflare.ini 115 | 116 | # Check if it's the first time the container is started 117 | if [ ! 
-f "/.bootstrapped" ]; then 118 | bootstrap 119 | else 120 | source /opt/app-venv/bin/activate 121 | echo "Certificate for $DOMAIN already exists" 122 | fi 123 | 124 | renewal-daemon.sh & 125 | 126 | setup_nginx_conf 127 | 128 | exec "$@" 129 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/scripts/generate-evidences.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | ACME_ACCOUNT_FILE=$(ls /etc/letsencrypt/accounts/acme-v02.api.letsencrypt.org/directory/*/regr.json) 5 | CERT_FILE=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem 6 | 7 | mkdir -p /evidences 8 | cd /evidences 9 | cp ${ACME_ACCOUNT_FILE} acme-account.json 10 | cp ${CERT_FILE} cert.pem 11 | 12 | sha256sum acme-account.json cert.pem > sha256sum.txt 13 | 14 | QUOTED_HASH=$(sha256sum sha256sum.txt | awk '{print $1}') 15 | 16 | # Pad QUOTED_HASH with zeros to ensure it's 128 characters long 17 | PADDED_HASH="${QUOTED_HASH}" 18 | while [ ${#PADDED_HASH} -lt 128 ]; do 19 | PADDED_HASH="${PADDED_HASH}0" 20 | done 21 | QUOTED_HASH="${PADDED_HASH}" 22 | 23 | curl -s --unix-socket /var/run/tappd.sock http://localhost/prpc/Tappd.RawQuote?report_data=${QUOTED_HASH} > quote.json 24 | echo "Generated evidences successfully" 25 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/scripts/renew-certificate.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | source /opt/app-venv/bin/activate 3 | 4 | echo "Renewing certificate for $DOMAIN" 5 | 6 | # Perform the actual renewal with explicit credentials and capture the output 7 | RENEW_OUTPUT=$(certbot renew --dns-cloudflare --dns-cloudflare-credentials ~/.cloudflare/cloudflare.ini --dns-cloudflare-propagation-seconds 120 --non-interactive 2>&1) 8 | RENEW_STATUS=$? 
9 | 10 | # Check if renewal failed 11 | if [ $RENEW_STATUS -ne 0 ]; then 12 | echo "Certificate renewal failed" >&2 13 | exit 1 14 | fi 15 | 16 | # Check if no renewals were attempted 17 | if echo "$RENEW_OUTPUT" | grep -q "No renewals were attempted"; then 18 | echo "No certificates need renewal, skipping evidence generation" 19 | exit 0 20 | fi 21 | 22 | # Only generate evidences if certificates were actually renewed 23 | generate-evidences.sh 24 | 25 | # Only reload Nginx if we got here (meaning certificates were renewed) 26 | if ! nginx -s reload; then 27 | echo "Nginx reload failed" >&2 28 | exit 2 29 | fi 30 | 31 | exit 0 32 | 33 | -------------------------------------------------------------------------------- /custom-domain/dstack-ingress/scripts/renewal-daemon.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | while true; do 5 | echo "[$(date)] Checking for certificate renewal" 6 | /usr/bin/env renew-certificate.sh || echo "Certificate renewal check failed with status $?" 
7 | # Sleep for 12 hours (43200 seconds) before next renewal check 8 | echo "[$(date)] Next renewal check in 12 hours" 9 | sleep 43200 10 | done 11 | -------------------------------------------------------------------------------- /launcher/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM debian:bookworm-slim@sha256:4b44499bc2a6c78d726f3b281e6798009c0ae1f034b0bfaf6a227147dcff928b 2 | 3 | # Use a specific Debian snapshot for reproducible builds 4 | RUN set -e; \ 5 | # Create a sources.list file pointing to a specific snapshot 6 | echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20250411T024939Z bookworm main' > /etc/apt/sources.list && \ 7 | echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian-security/20250411T024939Z bookworm-security main' >> /etc/apt/sources.list && \ 8 | echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/10no-check-valid-until && \ 9 | # Install packages with exact versions for reproducibility 10 | apt-get -o Acquire::Check-Valid-Until=false update && \ 11 | apt-get install -y --no-install-recommends docker-compose=1.29.2-3 && \ 12 | rm -rf /var/lib/apt/lists/* && \ 13 | rm -rf /var/log/* /var/cache/ldconfig/aux-cache 14 | 15 | COPY entrypoint.sh get-latest.sh /scripts/ 16 | RUN chmod +x /scripts/*.sh 17 | ENV PATH="/scripts:${PATH}" 18 | RUN mkdir -p /app-data 19 | CMD ["/scripts/entrypoint.sh"] 20 | -------------------------------------------------------------------------------- /launcher/README.md: -------------------------------------------------------------------------------- 1 | # Dstack Launcher Pattern Example 2 | 3 | This repository demonstrates the dstack launcher pattern - a template for implementing automated container updates in your applications. 4 | 5 | ## What is the Launcher Pattern? 6 | 7 | The launcher pattern is a containerized approach to managing application updates. It consists of: 8 | 9 | 1. 
A **launcher container** that runs continuously and checks for updates 10 | 2. A **workload container** that is the actual application being managed 11 | 12 | The launcher container periodically checks for updates to the workload container and automatically deploys new versions when they become available. 13 | 14 | ## How This Example Works 15 | 16 | This example project demonstrates the basic structure of the launcher pattern: 17 | 18 | - `Dockerfile`: Builds the launcher container with necessary dependencies 19 | - `entrypoint.sh`: The main script that runs inside the launcher container, checking for updates and deploying new versions 20 | - `get-latest.sh`: A script that determines the latest version of the workload container (in a real implementation, this would typically check a registry or other source) 21 | - `docker-compose.yml`: Example configuration for running the launcher container 22 | 23 | ## Using This Template 24 | 25 | This project is intended as a starting point. To adapt it for your own use: 26 | 27 | 1. Modify `get-latest.sh` to implement your own version checking logic (e.g., checking a container registry) 28 | 2. Adjust the configuration variables in `entrypoint.sh` to match your application needs 29 | 3. Update the `docker-compose.yml` file with any additional configuration your launcher needs 30 | 31 | ## Implementation Details 32 | 33 | ### Update Process 34 | 35 | The update process follows these steps: 36 | 37 | 1. The launcher container runs `get-latest.sh` to determine the latest available version 38 | 2. If a new version is detected, it generates a new `docker-compose.yml` file for the workload 39 | 3. It applies the new configuration using Docker Compose, which pulls and starts the new container 40 | 4. 
The process repeats on a regular interval 41 | 42 | ### Customization Points 43 | 44 | Key areas to customize for your own implementation: 45 | 46 | - **Version Detection**: Replace the logic in `get-latest.sh` with your own mechanism for determining the latest version 47 | - **Deployment Configuration**: Modify how the `docker-compose.yml` is generated in `entrypoint.sh` 48 | - **Update Frequency**: Adjust the sleep interval in the main loop of `entrypoint.sh` 49 | - **Additional Logic**: Add pre/post update hooks, validation, or other custom logic 50 | 51 | ## Getting Started 52 | 53 | 1. Build the launcher container: 54 | 55 | ```bash 56 | ./build-image.sh yourusername/launcher 57 | ``` 58 | 59 | 2. Push the image to Docker Hub (recommended for production use): 60 | 61 | ```bash 62 | docker push yourusername/launcher 63 | ``` 64 | 65 | 3. Deploy 66 | 67 | You can now deploy the following compose to dstack or Phala Cloud. 68 | 69 | ```yaml 70 | services: 71 | launcher: 72 | image: yourusername/launcher 73 | volumes: 74 | - /var/run/docker.sock:/var/run/docker.sock 75 | restart: always 76 | ``` 77 | 78 | **Note:** The example configuration above uses a placeholder `yourusername/launcher` as the image name. Make sure to update it with your actual published image name. 79 | 80 | ## License 81 | 82 | MIT License 83 | 84 | Copyright (c) 2025 85 | 86 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 87 | 88 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
89 | 
90 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
91 | 
--------------------------------------------------------------------------------
/launcher/build-image.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | NAME=$1
3 | if [ -z "$NAME" ]; then
4 |   echo "Usage: $0 <image-name>[:<tag>]"
5 |   exit 1
6 | fi
7 | # Check if buildkit_20 already exists before creating it
8 | if ! docker buildx inspect buildkit_20 &>/dev/null; then
9 |   docker buildx create --use --driver-opt image=moby/buildkit:v0.20.2 --name buildkit_20
10 | fi
11 | docker buildx build --builder buildkit_20 --no-cache --build-arg SOURCE_DATE_EPOCH="0" --output type=docker,name=$NAME,rewrite-timestamp=true .
12 | 
--------------------------------------------------------------------------------
/launcher/docker-compose.yml:
--------------------------------------------------------------------------------
1 | services:
2 |   launcher:
3 |     image: kvin/launcher@sha256:873dd2117b0159575d62b79863dafd984cd02be25b46ff164e3b83ed5c9642f7
4 |     volumes:
5 |       - /var/run/docker.sock:/var/run/docker.sock
6 |     restart: always
7 | 
--------------------------------------------------------------------------------
/launcher/entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | PROJECT_NAME=launched
4 | WORKDIR=/app-data
5 | EXTERNAL_PORT=10080
6 | SERVICE_NAME=server
7 | 
8 | 
9 | cd $WORKDIR
10 | 
11 | check-update() {
12 |   echo "Checking for updates..."
13 |   get-latest.sh latest.tmp
14 |   if [ -f latest.tmp ]; then
15 |     if [ -f latest ] && diff -q latest.tmp latest > /dev/null; then
16 |       echo "No changes detected in latest version"
17 |       rm -f latest.tmp
18 |       return 1
19 |     fi
20 |     mv latest.tmp latest
21 |     return 0
22 |   fi
23 |   echo "No update found"
24 |   return 1
25 | }
26 | 
27 | mk-compose() {
28 |   if [ ! -f latest ] || [ ! -s latest ]; then
29 |     echo "Error: latest file not found or empty"
30 |     return 1
31 |   fi
32 |   cat <<EOF > docker-compose.yml
33 | services:
34 |   $SERVICE_NAME:
35 |     image: $(cat latest)
36 |     ports:
37 |       - "$EXTERNAL_PORT:80"
38 |     restart: always
39 | EOF
40 |   echo "docker-compose.yml created"
41 |   return 0
42 | }
43 | 
44 | apply-update() {
45 |   echo "Making docker-compose.yml..."
46 |   if ! mk-compose; then
47 |     echo "Error: Failed to make docker-compose.yml"
48 |     return 1
49 |   fi
50 |   echo "Applying update..."
51 |   docker-compose -p $PROJECT_NAME up -d --remove-orphans
52 |   echo "Update applied"
53 |   return 0
54 | }
55 | 
56 | rm -f latest
57 | while true; do
58 |   check-update
59 |   apply-update
60 |   sleep 5
61 | done
62 | 
--------------------------------------------------------------------------------
/launcher/get-latest.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Script to get the latest image for demonstration of upgrade process
3 | #
4 | # CUSTOMIZATION GUIDE:
5 | # This is where you implement your own version detection logic. Some options include:
6 | #
7 | # 1. Read from an onchain contract:
8 | #    - Install web3 tools: apk add --no-cache nodejs npm && npm install -g web3
9 | #    - Query contract: CONTRACT_ADDRESS="0x123..." && IMAGE=$(web3 contract call --abi=/path/to/abi.json $CONTRACT_ADDRESS getLatestImage)
10 | #
11 | # 2. 
Use another container to check for updates and output the latest image:
12 | #    - Create a separate container that runs periodically (via cron or a continuous loop)
13 | #    - This container can perform complex checks (e.g., registry scanning, security validation)
14 | #    - Mount a shared volume between containers: docker-compose.yml:
15 | #        volumes:
16 | #          - shared-data:/shared
17 | #    - The update checker writes to the shared file: echo "new-image:latest" > /shared/latest-image.txt
18 | #    - In this script: IMAGE=$(cat /shared/latest-image.txt)
19 | #
20 | # The script should output the full image reference (including tag or digest) to the file specified by $OUTPUT
21 | 
22 | OUTPUT=$1
23 | 
24 | # Add a small delay to simulate network/processing time
25 | sleep 2
26 | 
27 | MINUTE=$(date +%-M)
28 | 
29 | # Use time-based selection instead of random to create more predictable upgrade patterns
30 | # This will switch images roughly every minute
31 | if [ $((MINUTE % 2)) -eq 0 ]; then
32 |   echo "nginx@sha256:d67fed8b03f1ed3d2a5e3cbc5ca268ad7a7528adfdd1220c420c8cf4e3802d9c" > $OUTPUT
33 | else
34 |   echo "nginx@sha256:81aa342ba08035632898b78d46d0e11d79abeee63b3a6994a44ac34e102ef888" > $OUTPUT
35 | fi
36 | 
37 | 
--------------------------------------------------------------------------------
/lightclient/README.md:
--------------------------------------------------------------------------------
1 | TEE Coprocessors in Dstack
2 | =====
3 | 
4 | Minimal docker-compose file for using the Helios light client to provide a trustworthy view of the blockchain.
5 | 
6 | You can run this locally - it will output an empty attestation if it's not in a TEE. To run this on Dstack, you can simply copy-paste the docker-compose.yml and specify your ETH_RPC_URL parameter.
7 | 
8 | The provided docker compose uses Holesky. Helios currently supports other Ethereum test networks as well as OP Stack.
9 | 
10 | This relies on an untrusted RPC, so you need to provide your own `ETH_RPC_URL`.
The free trial at quicknode.com works fine.
11 | 
12 | Run with:
13 | ```bash
14 | docker compose build
15 | docker compose run --rm -e ETH_RPC_URL=${ETH_RPC_URL} tapp
16 | ```
17 | 
18 | Expected output:
19 | ```
20 | [+] Creating 1/1
21 |  ✔ Network lightclient_default  Created  0.1s
22 | 2024-12-17T21:52:56.084201Z INFO helios::rpc: rpc server started at 127.0.0.1:8545
23 | 2024-12-17T21:52:57.858077Z INFO helios::consensus: sync committee updated
24 | 2024-12-17T21:52:57.941169Z INFO helios::consensus: sync committee updated
25 | 2024-12-17T21:52:58.420835Z INFO helios::consensus: finalized slot  slot=3214080  confidence=92.38%  age=00:00:16:58
26 | 2024-12-17T21:52:58.420854Z INFO helios::consensus: updated head  slot=3214163  confidence=92.38%  age=00:00:00:22
27 | 2024-12-17T21:52:58.420859Z INFO helios::consensus: consensus client in sync with checkpoint: 0x9260657ed4167f2bbe57317978ff181b6b96c1065ecf9340bba05ba3578128fe
28 | 
29 | 
30 | baseFeePerGas        8
31 | difficulty           0
32 | extraData            0x444556434f4e20505245434f4e4653
33 | gasLimit             30000000
34 | ...
35 | 0x5adfa31d8bcaae1b27bf8c6d2d6eb0108f3dc8ec35dc8ffaa5b8326e3eab475b
36 | 0x58025835a1943c458e444fbd39d7f776132cd82892b9f2f17218de5b29aa8b8e
37 | ]
38 | ATTEST=...
39 | ```
40 | 
41 | Acknowledgments
42 | -----
43 | Thanks [@fucory](https://x.com/fucory) and [@kassandraETH](https://x.com/kassandraETH) for the suggestions.
--------------------------------------------------------------------------------
/lightclient/docker-compose.yml:
--------------------------------------------------------------------------------
1 | services:
2 |   tapp:
3 |     configs:
4 |       - source: run.sh
5 |         target: /root/run.sh
6 |     volumes:
7 |       - /var/run/tappd.sock:/var/run/tappd.sock
8 |     build:
9 |       context: .
10 | dockerfile_inline: | 11 | FROM ubuntu:22.04 12 | RUN apt-get update && apt install -y curl wget 13 | WORKDIR /root 14 | 15 | # Foundry 16 | RUN wget https://github.com/foundry-rs/foundry/releases/download/nightly-c3069a50ba18cccfc4e7d5de9b9b388811d9cc7b/foundry_nightly_linux_amd64.tar.gz 17 | RUN tar -xzf ./foundry_nightly_linux_amd64.tar.gz -C /usr/local/bin 18 | 19 | # Helios 20 | RUN curl -L 'https://github.com/a16z/helios/releases/download/0.7.0/helios_linux_amd64.tar.gz' | tar -xzC . 21 | 22 | CMD [ "bash", "/root/run.sh" ] 23 | platform: linux/amd64 24 | configs: 25 | run.sh: 26 | content: | 27 | # First run Helios in the background 28 | # Provide a reasonable checkpoint. 29 | ( 30 | /root/helios ethereum --network=holesky --checkpoint 0x9260657ed4167f2bbe57317978ff181b6b96c1065ecf9340bba05ba3578128fe \ 31 | --consensus-rpc http://testing.holesky.beacon-api.nimbus.team --execution-rpc $${ETH_RPC_URL} 32 | ) & 33 | 34 | # Let it sync #TODO do this smarter 35 | sleep 5 36 | 37 | # Then run some queries. This would be a good place to run an api server. 38 | # Cast <-> Helios <-> Untrusted RPCs 39 | cast block --rpc-url=localhost:8545 | tee response.txt 40 | 41 | # Fetch the quote 42 | HASH=$$(sha256sum response.txt) 43 | PAYLOAD="{\"report_data\": \"$$(echo -n $$HASH | od -A n -t x1 | tr -d ' \n')\"}" 44 | ATTEST=$$(curl -X POST --unix-socket /var/run/tappd.sock -d "$$PAYLOAD" http://localhost/prpc/Tappd.TdxQuote?json) 45 | # TODO: Fallback to the dummy remote attestation 46 | 47 | echo ATTEST=$${ATTEST} >> response.txt 48 | cat response.txt 49 | -------------------------------------------------------------------------------- /prelaunch-script/README.md: -------------------------------------------------------------------------------- 1 | # DStack Pre-launch Script Example 2 | 3 | This directory provides an example of a pre-launch script for the DStack Application. 
Introduced in Dstack v0.3.5 (as detailed in [#94](https://github.com/Dstack-TEE/dstack/pull/94)), this feature allows the application to perform preliminary setup steps before initiating Docker Compose. The pre-launch script's content is specified in the `pre_launch_script` section of the `app-compose.json` file. The `prelaunch.sh` script demonstrates how to manage container initialization and configure the environment prior to launching your application. 4 | -------------------------------------------------------------------------------- /prelaunch-script/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | services: 2 | busybox: 3 | image: busybox:latest 4 | command: sh -c "cat /etc/motd" 5 | volumes: 6 | - /tapp/motd:/etc/motd 7 | restart: no 8 | -------------------------------------------------------------------------------- /prelaunch-script/prelaunch.sh: -------------------------------------------------------------------------------- 1 | # This is an example of how to write a pre-launch script of DStack App Compose. 2 | 3 | # The script is run in /tapp directory. The app-compose.json file is in the same directory. 4 | 5 | set -e 6 | 7 | # We fully handle the docker compose logic in this script. 8 | echo "Extracting docker compose file" 9 | jq -j '.docker_compose_file' app-compose.json >docker-compose.yaml 10 | echo "Removing orphans" 11 | tdxctl remove-orphans -f docker-compose.yaml || true 12 | echo "Restarting docker" 13 | chmod +x /usr/bin/containerd-shim-runc-v2 14 | systemctl restart docker 15 | 16 | # Login docker account 17 | echo "Logging into Docker Hub" 18 | tdxctl notify-host -e "boot.progress" -d "logging into docker hub" || true 19 | if [ -n "$DOCKER_USERNAME" ] && [ -n "$DOCKER_PASSWORD" ]; then 20 | if ! 
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin; then
21 |     tdxctl notify-host -e "boot.error" -d "failed to login to docker hub"
22 |     exit 1
23 |   fi
24 | fi
25 | 
26 | # Use a container to setup the environment
27 | echo "Setting up the environment"
28 | tdxctl notify-host -e "boot.progress" -d "setting up the environment" || true
29 | docker run \
30 |   --rm \
31 |   --name dstack-app-setup \
32 |   -v /tapp:/tapp \
33 |   -w /tapp \
34 |   -v /var/run/docker.sock:/var/run/docker.sock \
35 |   curlimages/curl:latest \
36 |   -s https://raw.githubusercontent.com/Dstack-TEE/meta-dstack/refs/heads/main/meta-dstack/recipes-core/base-files/files/motd -o /tapp/motd
37 | 
38 | echo "Starting containers"
39 | tdxctl notify-host -e "boot.progress" -d "starting containers" || true
40 | if ! docker compose up -d; then
41 |   tdxctl notify-host -e "boot.error" -d "failed to start containers"
42 |   exit 1
43 | fi
44 | 
45 | # Use exit to skip the original docker compose handling
46 | exit 0
47 | 
--------------------------------------------------------------------------------
/private-docker-image-deployment/README.md:
--------------------------------------------------------------------------------
1 | # Private Docker Image Deployment
2 | 
3 | This example shows how to deploy private Docker images, from Docker Hub or from your own private registry, on Dstack and Phala Cloud.
4 | 
5 | The provided [docker-compose.yml](docker-compose.yml) shows how to pull images from a private Docker registry.
6 | 
7 | ## Notices
8 | 
9 | - The following environment variables must be set in Dstack or Phala Cloud through the `Encrypted Secrets` feature:
10 | 
11 | ```
12 | DOCKER_USERNAME=
13 | DOCKER_PASSWORD=
14 | PRIVATE_REGISTRY_URL=
15 | PRIVATE_REGISTRY_USERNAME=
16 | PRIVATE_REGISTRY_PASSWORD=
17 | ```
18 | 
19 | `DOCKER_USERNAME` and `DOCKER_PASSWORD` are the Docker Hub username and password, needed if you want to pull images from Docker Hub.
20 | 
21 | `PRIVATE_REGISTRY_URL`, `PRIVATE_REGISTRY_USERNAME` and `PRIVATE_REGISTRY_PASSWORD` are the URL, username and password of your private Docker registry, needed if you want to pull images from it.
22 | 
23 | - When the CVM is created, the `init` service runs to pull the images from the private registry and start the containers. All services created by the `init` service can be accessed through a URL of the form:
24 | 
25 | ```
26 | https://<app-id>-<port>.dstack-prod4.phala.network/
27 | ```
28 | 
29 | - The `init` service runs only once, when the CVM is created or updated; if you want to change the `docker-compose.yml`, you need to update the CVM with the new file.
30 | 
--------------------------------------------------------------------------------
/private-docker-image-deployment/docker-compose.yml:
--------------------------------------------------------------------------------
1 | services:
2 |   init:
3 |     image: docker:latest
4 |     container_name: init
5 |     environment:
6 |       # The username and password for Docker Hub, if you want to pull images from Docker Hub.
7 |       DOCKER_USERNAME: ${DOCKER_USERNAME}
8 |       DOCKER_PASSWORD: ${DOCKER_PASSWORD}
9 |       # The URL, username and password for your private Docker registry, if you want to pull images from it.
10 |       PRIVATE_REGISTRY_URL: ${PRIVATE_REGISTRY_URL}
11 |       PRIVATE_REGISTRY_USERNAME: ${PRIVATE_REGISTRY_USERNAME}
12 |       PRIVATE_REGISTRY_PASSWORD: ${PRIVATE_REGISTRY_PASSWORD}
13 |     volumes:
14 |       - /var/run/docker.sock:/var/run/docker.sock
15 |       - /tapp:/tapp
16 |     command:
17 |       - /bin/sh
18 |       - -c
19 |       - |
20 |         docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD &&
21 |         docker login -u $PRIVATE_REGISTRY_USERNAME -p $PRIVATE_REGISTRY_PASSWORD $PRIVATE_REGISTRY_URL &&
22 |         echo 'login success' &&
23 |         echo '
24 |         services:
25 |           httpbin_example_1:
26 |             image: 0xii/httpbin:latest
27 |             container_name: httpbin1
28 |             ports:
29 |               - "1080:80"
30 |         ' > /tapp/httpbin_example_1.yaml &&
31 |         echo '
32 |         services:
33 |           httpbin_example_2:
34 |             image: your-private-registry.com/0xii/httpbin:latest
35 |             container_name: httpbin2
36 |             ports:
37 |               - "1081:80"
38 |         ' > /tapp/httpbin_example_2.yaml &&
39 |         docker compose -f /tapp/httpbin_example_1.yaml up -d &&
40 |         docker compose -f /tapp/httpbin_example_2.yaml up -d &&
41 |         sleep infinity
42 |     restart: "no"
43 | 
--------------------------------------------------------------------------------
/ssh-over-tproxy/README.md:
--------------------------------------------------------------------------------
1 | # SSH Over TPROXY Example
2 | 
3 | This guide illustrates how to set up an SSH server within a tapp and access it using a public tproxy endpoint.
4 | 
5 | ## Installation Steps
6 | 
7 | 1. **Deploy the Docker Compose File**
8 |    Start by deploying the provided `docker-compose.yaml` on Dstack or Phala Cloud. Adjust the workload section as needed, and remember to set the root password using the `ROOT_PW` environment variable.
9 | 
10 | 2. **Configure Your SSH Client**
11 |    Add the following configuration block to your `~/.ssh/config` file:
12 |    ```
13 |    Host my-tee-box
14 |        ProxyCommand openssl s_client -quiet -connect <app-id>-1022.<tproxy-domain>:443
15 |    ```
16 |    Be sure to replace `<app-id>` with your tapp's application ID and `<tproxy-domain>` with your tproxy server's domain.
17 | Change the 443 to the port of the dstack-gateway if not using the default one. 18 | Example ProxyCommand: `ProxyCommand openssl s_client -quiet -connect c3c0ed2429a72e11e07c8d5701725968ff234dc0-1022.dstack-prod5.phala.network:443` 19 | 20 | 3. **Connect via SSH command** 21 | Finally, initiate the connection by running: 22 | ``` 23 | ssh root@my-tee-box 24 | ``` 25 | -------------------------------------------------------------------------------- /ssh-over-tproxy/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | services: 2 | ssh-server: 3 | build: 4 | context: . 5 | dockerfile_inline: | 6 | FROM ubuntu:latest 7 | RUN apt-get update && apt-get install -y openssh-server sudo 8 | RUN mkdir /run/sshd 9 | RUN echo 'root:${ROOT_PW:-123456}' | chpasswd 10 | RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config 11 | RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config 12 | RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd 13 | RUN sed -i 's/#Port 22/Port 1022/' /etc/ssh/sshd_config 14 | EXPOSE 1022 15 | CMD ["/usr/sbin/sshd", "-D"] 16 | restart: unless-stopped 17 | privileged: true 18 | network_mode: host 19 | volumes: 20 | - /:/host/ 21 | - /var/run/tappd.sock:/var/run/tappd.sock 22 | - /var/run/docker.sock:/var/run/docker.sock 23 | workload: 24 | image: nginx 25 | -------------------------------------------------------------------------------- /tcp-port-forwarding/README.md: -------------------------------------------------------------------------------- 1 | # TCP Port Forwarding Guide 2 | 3 | This guide outlines methods for forwarding TCP ports between your local machine and remote dstack app instances. 
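
All of the recipes below target the same endpoint shape: the dstack-gateway exposes each in-app TCP port behind TLS at `<app-id>-<port>.<gateway-domain>:<gateway-port>`. As a quick sketch, a small helper (the name `gateway_endpoint` is invented here for illustration) that builds such an endpoint string:

```python
def gateway_endpoint(app_id: str, app_port: int, domain: str, gateway_port: int = 443) -> str:
    """Build the dstack-gateway TLS endpoint that maps to a TCP port inside the app.

    The gateway routes "<app-id>-<port>.<gateway-domain>" to that port in the app;
    443 is the default gateway port.
    """
    return f"{app_id}-{app_port}.{domain}:{gateway_port}"

# For example, the SSH endpoint for an app on dstack-prod5:
print(gateway_endpoint("c3c0ed2429a72e11e07c8d5701725968ff234dc0", 22, "dstack-prod5.phala.network"))
# c3c0ed2429a72e11e07c8d5701725968ff234dc0-22.dstack-prod5.phala.network:443
```

The socat and Python forwarders shown later simply wrap a local plain-TCP listener around a TLS connection to exactly this kind of endpoint.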
4 | 
5 | ## A simple TCP echo server
6 | 
7 | Let's create a simple TCP echo server in Python and deploy it to dstack:
8 | 
9 | ```yaml
10 | services:
11 |   echo-server:
12 |     image: python:3.9-slim
13 |     command: |
14 |       python -c "
15 | import socket;
16 | HOST = '0.0.0.0';
17 | PORT = 8080;
18 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM);
19 | s.bind((HOST, PORT));
20 | s.listen();
21 | while True:
22 |     conn, addr = s.accept();
23 |     print('Connected by', addr);
24 |     conn.sendall(b'welcome')
25 |     while True:
26 |         data = conn.recv(1024);
27 |         if not data:
28 |             break;
29 |         conn.sendall(data)
30 | "
31 |     ports:
32 |       - "8080:8080"
33 | ```
34 | 
35 | Run the following command to forward local port `8080` to the echo server:
36 | 
37 | ```bash
38 | socat TCP-LISTEN:8080,fork,reuseaddr OPENSSL:<app-id>-8080.<gateway-domain>:443
39 | ```
40 | 
41 | Use `nc` as a client to test the echo server:
42 | 
43 | ```bash
44 | $ nc 127.0.0.1 8080
45 | hello
46 | hello
47 | ```
48 | Press Ctrl+C to stop the nc client.
49 | 
50 | 
51 | ## SSH Access
52 | For dstack apps using dev OS images, SSH access is available through the CVM. Connect via dstack-gateway (formerly tproxy) by:
53 | 
54 | 1. Configure SSH (~/.ssh/config):
55 | ```bash
56 | Host my-dstack-app
57 |     HostName <app-id>-22.<gateway-domain>
58 |     Port 443
59 |     ProxyCommand openssl s_client -quiet -connect %h:%p
60 | ```
61 | 
62 | Change the 443 to the port of the dstack-gateway if not using the default one.
63 | 
64 | 2. Connect:
65 | ```bash
66 | ssh root@my-dstack-app
67 | ```
68 | 
69 | ## TCP Port Forwarding Options
70 | 
71 | ### Using socat (Unix-like systems)
72 | 
73 | Let's set some variables for convenience.
74 | ```bash
75 | APP_ID=<your-app-id>
76 | DSTACK_GATEWAY_DOMAIN=<gateway-domain>
77 | GATEWAY_PORT=<gateway-port>
78 | ```
79 | 
80 | On Unix-like systems, we can use `socat` to forward ports.
81 | 
82 | Assuming we have an nginx server listening on port `80` in the dstack app, we can access it via the dstack-gateway `HTTPS` endpoint.
83 | ```
84 | curl https://<app-id>.<gateway-domain>
85 | ``` 86 | 87 | If our client doesn't support `HTTPS`, we can use `socat` to forward port `80` to the local machine. 88 | 89 | ```bash 90 | socat TCP-LISTEN:1080,bind=127.0.0.1,fork,reuseaddr OPENSSL:${APP_ID}-80.${DSTACK_GATEWAY_DOMAIN}:${GATEWAY_PORT} 91 | ``` 92 | 93 | Then we can access the nginx server over plain HTTP via the local port `1080`. 94 | 95 | ```bash 96 | curl http://127.0.0.1:1080 97 | ``` 98 | 99 | Similarly, we can forward port `22` to the local machine. 100 | 101 | ```bash 102 | socat TCP-LISTEN:1022,bind=127.0.0.1,fork,reuseaddr OPENSSL:${APP_ID}-22.${DSTACK_GATEWAY_DOMAIN}:${GATEWAY_PORT} 103 | ``` 104 | 105 | Then we can access the SSH server via the local port `1022`. 106 | 107 | ```bash 108 | ssh root@127.0.0.1 -p 1022 109 | ``` 110 | 111 | ### Using python script 112 | 113 | If socat is unavailable, particularly on Windows systems, we can utilize a Python script for port forwarding. 114 | 115 | ```bash 116 | python3 port_forwarder.py -l 127.0.0.1:1080 -r ${APP_ID}-80.${DSTACK_GATEWAY_DOMAIN}:${GATEWAY_PORT} 117 | ``` 118 | 119 | Subsequently, we can connect to the Nginx server through plain HTTP using local port `1080`. 120 | 121 | ```bash 122 | curl http://127.0.0.1:1080 123 | ``` 124 | -------------------------------------------------------------------------------- /tcp-port-forwarding/port_forwarder.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import socket 3 | import ssl 4 | import threading 5 | import select 6 | import sys 7 | import argparse 8 | 9 | def parse_address(address): 10 | """Parse an address in the format 'host:port'""" 11 | parts = address.split(':') 12 | if len(parts) != 2: 13 | raise ValueError(f"Invalid address format: {address}. Use format 'host:port'") 14 | 15 | host = parts[0] 16 | try: 17 | port = int(parts[1]) 18 | if port < 1 or port > 65535: 19 | raise ValueError(f"Invalid port number: {port}. 
Must be between 1 and 65535") 20 | except ValueError: 21 | raise ValueError(f"Port must be a number between 1 and 65535, got: {parts[1]}") 22 | 23 | return (host, port) 24 | 25 | def handle_client(client_socket, remote_host, remote_port): 26 | """Handle a client connection by forwarding it to the remote server with TLS""" 27 | print(f"New connection from {client_socket.getpeername()}") 28 | 29 | # Create TLS context 30 | context = ssl.create_default_context() 31 | 32 | try: 33 | # Connect to the remote server with TLS 34 | remote_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 35 | secured_socket = context.wrap_socket(remote_socket, server_hostname=remote_host) 36 | secured_socket.connect((remote_host, remote_port)) 37 | 38 | print(f"Connected to {remote_host}:{remote_port} with TLS") 39 | 40 | # Forward data in both directions 41 | while True: 42 | # Use select to monitor both sockets 43 | readable, _, exceptional = select.select([client_socket, secured_socket], [], [client_socket, secured_socket], 60) 44 | 45 | if exceptional: 46 | print("Connection error") 47 | break 48 | 49 | for sock in readable: 50 | if sock is client_socket: 51 | # Receive from client, send to server 52 | data = client_socket.recv(4096) 53 | if not data: 54 | print("Client disconnected") 55 | return 56 | secured_socket.send(data) 57 | 58 | elif sock is secured_socket: 59 | # Receive from server, send to client 60 | data = secured_socket.recv(4096) 61 | if not data: 62 | print("Server disconnected") 63 | return 64 | client_socket.send(data) 65 | 66 | except Exception as e: 67 | print(f"Error: {e}") 68 | 69 | finally: 70 | try: 71 | client_socket.close() 72 | secured_socket.close() 73 | except: 74 | pass 75 | print("Connection closed") 76 | 77 | def main(): 78 | # Parse command line arguments 79 | parser = argparse.ArgumentParser(description='TCP to TLS proxy') 80 | parser.add_argument('-l', '--local', required=True, help='Local address to listen on (format: host:port)') 81 | 
parser.add_argument('-r', '--remote', required=True, help='Remote address to connect to (format: host:port)') 82 | 83 | args = parser.parse_args() 84 | 85 | try: 86 | local_host, local_port = parse_address(args.local) 87 | remote_host, remote_port = parse_address(args.remote) 88 | 89 | # Create server socket 90 | server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 91 | server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 92 | server.bind((local_host, local_port)) 93 | server.listen(5) 94 | 95 | print(f"TLS proxy listening on {local_host}:{local_port}") 96 | print(f"Forwarding to {remote_host}:{remote_port} with TLS") 97 | print("Press Ctrl+C to exit") 98 | 99 | while True: 100 | client_socket, addr = server.accept() 101 | client_thread = threading.Thread( 102 | target=handle_client, 103 | args=(client_socket, remote_host, remote_port) 104 | ) 105 | client_thread.daemon = True 106 | client_thread.start() 107 | 108 | except KeyboardInterrupt: 109 | print("\nShutting down...") 110 | 111 | except Exception as e: 112 | print(f"Error: {e}") 113 | 114 | finally: 115 | try: 116 | server.close() 117 | except: 118 | pass 119 | 120 | if __name__ == "__main__": 121 | main() 122 | -------------------------------------------------------------------------------- /timelock-nts/README.md: -------------------------------------------------------------------------------- 1 | Timelock example using cloudflare's time service 2 | # 3 | 4 | Cloudflare provides a secure time oracle service. 5 | Roughly it lets you connect over TLS and it gives you the current time. 
6 | 
7 | Read more about this service here:
8 | https://blog.cloudflare.com/secure-time/
9 | https://developers.cloudflare.com/time-services/nts/
10 | 
11 | So, this example functions pretty simply:
12 | - first it generates a keypair and outputs the public key
13 | - it also outputs a remote attestation, where the `report_data` includes the public key and the release time (5 minutes in the future)
14 | - after the release time is reached according to the oracle, it outputs the private key
--------------------------------------------------------------------------------
/timelock-nts/docker-compose.yml:
--------------------------------------------------------------------------------
1 | services:
2 |   tapp:
3 |     configs:
4 |       - source: run.sh
5 |         target: run.sh
6 |     volumes:
7 |       - /var/run/tappd.sock:/var/run/tappd.sock
8 |     build:
9 |       context: .
10 |       dockerfile_inline: |
11 |         FROM ubuntu:22.04
12 |         RUN apt-get update
13 |         RUN apt install -y curl openssl ntpsec-ntpdate
14 |     command: bash run.sh
15 |     platform: linux/amd64
16 | 
17 | configs:
18 |   run.sh:
19 |     content: |
20 |       #!/bin/bash
21 |       key=$$(openssl genpkey -algorithm Ed25519)
22 |       echo "Public Key:"; echo "$$key" | openssl pkey -pubout
23 | 
24 |       # Get timestamp from cloudflare and add 5 minutes
25 |       get_time() { ntpdate -4q time.cloudflare.com 2>/dev/null | head -1 | cut -d' ' -f1,2 | date +%s -f -; }
26 |       deadline=$$(($$(get_time) + 300))
27 |       deadline_str=$$(date -d @$${deadline})
28 |       echo "Release: $$deadline_str"
29 | 
30 |       # Fetch the quote
31 |       get_quote() {
32 |         PAYLOAD="{\"report_data\": \"$$(echo -n $$1 | od -A n -t x1 | tr -d ' \n')\"}"
33 |         curl -X POST --unix-socket /var/run/tappd.sock -d "$$PAYLOAD" http://localhost/prpc/Tappd.TdxQuote?json
34 |       }
35 |       get_quote $$(echo $$key $$deadline_str | sha256sum)
36 |       echo
37 | 
38 |       # Loop until it's time to release the key
39 |       while [ $$(get_time) -lt $$deadline ]; do
40 |         echo "$$((deadline - $$(get_time)))s left"
41 |         sleep 60
42 |       done
43 |       echo "Private Key:"; echo "$$key"
44 | 
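
The timelock flow above (commit to a key plus a deadline, reveal the key only after the deadline passes) can be sketched in Python. This is a simplified stand-in: a random secret replaces the Ed25519 key, the system clock stands in for the NTS oracle, the attestation call is omitted, and `timelock_release` is a name invented for this sketch:

```python
import hashlib
import secrets
import time

def timelock_release(delay_s, clock=time.time, sleep=time.sleep):
    """Commit to a secret plus a release deadline, then reveal the
    secret only after the (trusted) clock passes the deadline."""
    secret = secrets.token_hex(32)  # stand-in for the Ed25519 private key
    deadline = clock() + delay_s
    # In the real app, this digest goes into the TDX quote's report_data,
    # binding the attestation to both the key and the release time.
    commitment = hashlib.sha256(f"{secret} {deadline}".encode()).hexdigest()
    while clock() < deadline:  # poll the clock until release time
        sleep(min(1.0, deadline - clock()))
    return commitment, secret
```

The docker-compose version polls `ntpdate -4q time.cloudflare.com` once a minute instead of the local clock, so the release is gated on Cloudflare's time service rather than the (host-controlled) system clock.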
-------------------------------------------------------------------------------- /tor-hidden-service/README.md: -------------------------------------------------------------------------------- 1 | # TEE Tor Hidden Service 2 | 3 | Can you serve an app from an anonymous Dstack node that doesn't reveal its IP address? 4 | 5 | This docker compose example sets up a Tor hidden service and serves an nginx website from that. Unlike other Dstack examples using tproxy, this one avoids exposing ports on the host at all. It uses the Tor network itself as a reverse proxy. 6 | 7 | ![image](https://github.com/user-attachments/assets/ff1b7847-4d8f-45eb-8cb3-790bf73765ca) 8 | 9 | 10 | ## Overview 11 | 12 | The setup consists of two main components: 13 | - A Tor service that creates and manages the hidden service 14 | - An Nginx server that serves the TEE attestation data 15 | 16 | When accessed through Tor Browser, the service displays: 17 | - The .onion address it's serving on 18 | - TDX remote attestation from /var/run/tappd.sock 19 | 20 | The remote attestation uses the hash of the .onion address as the quote report data. 21 | 22 | The service automatically generates a new .onion address on first launch and maintains it across restarts through the persistent `tor_data` volume. 23 | 24 | ## To run locally 25 | 26 | 1. Run the containers: 27 | ```bash 28 | docker compose up -d 29 | ``` 30 | 2. The onion address will be displayed in the Nginx container logs: 31 | ```bash 32 | docker compose logs nginx 33 | ``` 34 | 35 | -------------------------------------------------------------------------------- /tor-hidden-service/docker-compose.yml: -------------------------------------------------------------------------------- 1 | services: 2 | tor: 3 | build: 4 | context: . 
5 |       dockerfile_inline: |
6 |         FROM debian:bullseye-slim
7 |         RUN apt-get update && apt-get install -y tor && apt-get clean && rm -rf /var/lib/apt/lists/*
8 |         RUN mkdir -p /var/lib/tor/hidden_service && chown -R debian-tor:debian-tor /var/lib/tor/hidden_service/ && \
9 |             chmod 700 /var/lib/tor/hidden_service/ && echo "HiddenServiceDir /var/lib/tor/hidden_service/" > /etc/tor/torrc && \
10 |            echo "HiddenServicePort 80 nginx:80" >> /etc/tor/torrc
11 |        USER debian-tor
12 |        CMD tor -f /etc/tor/torrc
13 |    volumes:
14 |      - tor_data:/var/lib/tor/hidden_service
15 |    restart: unless-stopped
16 |    networks:
17 |      - net
18 | 
19 |  nginx:
20 |    depends_on: [tor]
21 |    image: nginx:alpine
22 |    volumes:
23 |      - /var/run/tappd.sock:/var/run/tappd.sock
24 |      - tor_data:/tor_data:ro
25 |    command: sh -c "apk add --no-cache curl && /start.sh"
26 |    configs:
27 |      - source: nginx_script
28 |        target: /start.sh
29 |        mode: 0755
30 |    restart: unless-stopped
31 |    networks:
32 |      - net
33 | 
34 | networks:
35 |   net:
36 | 
37 | volumes:
38 |   tor_data:
39 | 
40 | configs:
41 |   nginx_script:
42 |     content: |
43 |       #!/bin/sh
44 |       echo '<html><body><h1>Dstack TEE Tor Onion Service</h1>' > /usr/share/nginx/html/index.html
45 |       while [ ! -f /tor_data/hostname ]; do sleep 1; done
46 |       addr=$$(cat /tor_data/hostname)
47 |       echo "<h2>$$addr</h2>" >> /usr/share/nginx/html/index.html
48 |       hash=$$(echo -n "$$addr" | sha256sum)
49 |       payload="{\"report_data\":\"$$(echo -n $$hash | od -A n -t x1 | tr -d ' \n')\"}"
50 |       attest=$$(curl -sX POST --unix-socket /var/run/tappd.sock -d "$$payload" http://localhost/prpc/Tappd.TdxQuote?json)
51 |       echo "<pre>$$attest</pre></body></html>" >> /usr/share/nginx/html/index.html
52 |       echo "Serving at $$addr"
53 |       exec nginx -g 'daemon off;'
54 | 
--------------------------------------------------------------------------------
/webshell/README.md:
--------------------------------------------------------------------------------
1 | # Accessing Dstack CVM with a Webshell
2 | 
3 | When developing with Dstack CVM using a development image (e.g., dstack-dev-0.3.4), having a webshell to access the container can be extremely beneficial for debugging and troubleshooting. This guide outlines the steps to set up and use a webshell with the ttyd service.
4 | 
5 | 
6 | ## Steps to Set Up and Use the Webshell
7 | 
8 | ### 1. Add the `ttyd` Service to Your `docker-compose.yaml`
9 | 
10 | Copy the `ttyd` service definition from the [docker-compose.yaml](docker-compose.yaml) file provided and include it in your own `docker-compose.yaml` file.
11 | 
12 | 
13 | The `ttyd` service environment variables are as follows:
14 | ```yaml
15 | environment:
16 |   - HL_USER_USERNAME=root
17 |   - HL_USER_PASSWORD=suon7eeXuGeechee
18 | ```
19 | 
20 | `HL_USER_USERNAME` and `HL_USER_PASSWORD` are the username and password for the webshell; change them to your own values.
21 | 
22 | 
23 | ### 2. Update the CVM Configuration
24 | 
25 | Update the CVM with the updated `docker-compose.yaml` to include the `ttyd` service in Dstack or Phala Cloud. This operation will restart the CVM.
26 | 
27 | ### 3. Access the Webshell Endpoint
28 | 
29 | After the CVM is updated, locate the endpoint URL for the `ttyd` service. The URL typically follows this pattern:
30 | 
31 | ```
32 | https://<app-id>-7681.dstack-prod4.phala.network/
33 | ```
34 | 
35 | Here, `7681` is the default port for the webshell.
36 | 
37 | Open this URL in your browser to access the webshell.
38 | 
39 | ### 4. 
Install SSH Client and Connect to Host CVM
40 | 
41 | Once inside the webshell, execute the following commands to install the SSH client and connect to the host CVM:
42 | 
43 | ```bash
44 | apk update && apk add openssh-client
45 | ssh root@localhost
46 | ```
47 | 
48 | ![image](./image.jpg)
49 | 
50 | This will allow you to SSH into the host CVM for debugging purposes.
51 | 
52 | ---
53 | 
54 | ## Important Notes
55 | 
56 | - **Development Image:**
57 |   The `ttyd` service is designed for use with development images (e.g., `dstack-dev-0.3.4`). It allows you to SSH into the host CVM for debugging.
58 | 
59 | - **Production Image:**
60 |   In production images, the host CVM does not have an SSH server enabled, so the `ttyd` service cannot provide SSH access to the host CVM. However, it can still be used to view and edit files in the host CVM through the `/host` path.
61 | 
62 | ---
63 | 
64 | By following this guide, you can effectively set up and utilize a webshell to enhance your development and debugging workflow with Dstack CVM.
65 | 
--------------------------------------------------------------------------------
/webshell/docker-compose.yaml:
--------------------------------------------------------------------------------
1 | services:
2 |   ttyd:
3 |     image: hackinglab/alpine-ttyd-bash:3.2
4 |     environment:
5 |       - HL_USER_USERNAME=root
6 |       - HL_USER_PASSWORD=suon7eeXuGeechee
7 |     ports:
8 |       - 7681:7681
9 |     volumes:
10 |       - /:/host
11 |     network_mode: host
12 | 
--------------------------------------------------------------------------------
/webshell/image.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Dstack-TEE/dstack-examples/40e08f3efcd11bb56a5fa465a10303f118301e86/webshell/image.jpg
--------------------------------------------------------------------------------