├── .gitignore
├── HYDRA_README.md
├── README.md
├── hydra-common.nix
├── hydra-master.nix
├── hydra-network.nix
├── hydra-slave.nix
├── ipfs-gateway.nix
├── ipfs-mirror-push.py
├── jobset-jailbreak-cabal.nix
├── jobset-libfastcgi.nix
└── jobset-nixpkgs.nix
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# .gitignore

/id_buildfarm
/id_buildfarm.pub
--------------------------------------------------------------------------------
/HYDRA_README.md:
--------------------------------------------------------------------------------
How to set up your own Hydra Server
===================================

For those who enjoy watching technical screencasts, there is also a video about
this subject available at https://www.youtube.com/watch?v=RXV0Y5Bn-QQ.

This repository contains a fairly complex example configuration for the
Nix-based continuous build system [Hydra](http://nixos.org/hydra/) that new
users can use to get started. The file [`hydra-common.nix`](hydra-common.nix)
defines basic properties of a VirtualBox-based virtual machine running NixOS
16.03, which [`hydra-master.nix`](hydra-master.nix) extends to configure a
running Hydra server. [`hydra-slave.nix`](hydra-slave.nix), on the other hand,
configures a simple build slave for the main server to delegate build jobs to.
Finally, [`hydra-network.nix`](hydra-network.nix) ties those modules together
into a network definition for Nixops.

To run these examples quickly with `nixops` on your local machine, you'll need

- hardware virtualization support,
- 8+ GB of memory,
- [NixOS](http://nixos.org/) and [Nixops](http://nixos.org/nixops/) installed.

Also, your `configuration.nix` file should include:

~~~~~
virtualisation.virtualbox.host.enable = true;
~~~~~

If those pre-conditions are met, follow these steps:

1. Generate an SSH key used by the Hydra master server to authenticate itself
   to the build slaves:

   ~~~~~ bash
   $ ssh-keygen -C "hydra@hydra.example.org" -N "" -f id_buildfarm
   ~~~~~

2. Set up your shell environment to use the `nixos-16.03` release for all
   further commands:

   ~~~~~ bash
   $ NIX_PATH="nixpkgs=https://github.com/nixos/nixpkgs-channels/archive/nixos-16.03.tar.gz"
   $ export NIX_PATH
   ~~~~~

3. Start the server:

   ~~~~~ bash
   $ nixops create -d hydra hydra-network.nix
   $ nixops deploy -d hydra
   ~~~~~

4. Ensure that the main server knows the binary cache for `nixos-16.03`:

   ~~~~~ bash
   $ nixops ssh hydra -- nix-channel --update
   ~~~~~

If all these steps completed without errors, `nixops info -d hydra` will tell
you the IP address of the new machine(s). For example, let's say that the
`hydra` machine got assigned the address `192.168.56.101`. Then go to
`http://192.168.56.101:8080/` to access the web front-end and sign in with the
username "`alice`" and password "`foobar`".
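
Before creating any projects, it can be worth checking that the deployment is
actually healthy. This is only a quick sanity-check sketch: the systemd unit
names are the ones created by the stock NixOS Hydra module, and
`192.168.56.101` is just the example address from above.

~~~~~ bash
$ nixops info -d hydra
$ nixops ssh hydra -- systemctl status hydra-server hydra-queue-runner hydra-evaluator
$ curl -I http://192.168.56.101:8080/
~~~~~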

Now you are ready to create projects and jobsets. The repository contains the
following examples that you can use:

- [`jobset-nixpkgs.nix`](jobset-nixpkgs.nix)
- [`jobset-libfastcgi.nix`](jobset-libfastcgi.nix)
- [`jobset-jailbreak-cabal.nix`](jobset-jailbreak-cabal.nix)

The last jobset performs several Haskell builds that may be quite expensive, so
it's probably wise *not* to run that on virtual hardware but only on a real
server.

Miscellaneous topics
--------------------

- How to [disable binary substitutions](https://github.com/NixOS/hydra/commit/82504fe01084f432443c121614532d29c781082a)
  for higher evaluation performance.

- How to run emergency garbage collections:

  ~~~~~ bash
  $ systemctl start hydra-update-gc-roots.service
  $ systemctl start nix-gc.service
  ~~~~~

- "Shares" are interpreted as follows: each jobset has a "fraction", which is
  its number of shares divided by the total number of shares. The queue runner
  records how much time a jobset has used over the last day as a fraction of
  the total build time, and jobsets are then ordered by their allocated
  fraction divided by the fraction of time used, i.e. jobsets that have used
  less of their allotment are prioritized, as the small example below
  illustrates.
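
  For a rough illustration with made-up numbers, suppose a jobset holds 100 of
  400 total shares and consumed 10 % of the build time during the last day:

  ~~~~~
  allocated fraction   = 100 / 400   = 0.25
  fraction of time used (last day)   = 0.10
  scheduling ratio     = 0.25 / 0.10 = 2.5
  ~~~~~

  A jobset that has already used up its allotment ends up with a ratio of 1.0
  or less, so the jobset above would be scheduled ahead of it.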
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
IPFS Nix/Hydra Config
=====================

Test repository for IPFS in Nix together with Hydra.

For how to set up the network, check out HYDRA_README.md.

If you want to use an IPFS-enabled Nix, you can get it from here:

https://github.com/mguentner/nix/tree/ipfs

Cherry-pick this commit for the `nixIPFS` attribute:

https://github.com/mguentner/nixpkgs/commit/d5ea24ebd885d693ea4fa3ad1150fc75b5303c64

If you want to deploy the network using `nixops`, you should use the same branch.

Design
======

I want to describe how I imagine the workflow to be.

Machines
--------

* a machine (A) from where `/nix/store` should be published to a binary cache
* a second machine (B) which will act as an IPFS mirror
* a third machine (C) which wants to use A/B as a binary cache

Workflow
--------

On A, which could be a Hydra, a signed binary cache is generated:

```
nix copy --to file:///var/www/example.org/cache?secret-key=/etc/nix/hydra.example.org-1/secret\&compression=none\&publish-to-ipfs=1 -r /nix/store/wkhdf9jinag5750mqlax6z2zbwhqb76n-hello-2.10/
```

Each `.nar` is exported to an IPFS repository running on A. `compression` is
set to `none` to make deduplication in IPFS possible.
Code: https://github.com/mguentner/nix/blob/ipfs/src/libstore/binary-cache-store.cc#L259

After the cache is complete, a resulting `.narinfo` might look like this:

```
StorePath: /nix/store/8lbpq1vmajrbnc96xhv84r87fa4wvfds-glibc-2.24
URL: nar/0bl38619jq6p2jqk0xjz8rkgdvs0ljvzc71jmha7mh5r1xix375g.nar
Compression: none
FileHash: sha256:0bl38619jq6p2jqk0xjz8rkgdvs0ljvzc71jmha7mh5r1xix375g
FileSize: 20742128
NarHash: sha256:0bl38619jq6p2jqk0xjz8rkgdvs0ljvzc71jmha7mh5r1xix375g
NarSize: 20742128
References: 8lbpq1vmajrbnc96xhv84r87fa4wvfds-glibc-2.24
Deriver: n9j6dbab59jcm9wic0g44xw8gcm32vxb-glibc-2.24.drv
Sig: hydra.example.org-1:eVg2Xe22OpwnAB6Baw022lWvTSbB7cAWDBcLn9bTpSOJmozzk3FS0SVLdeEkoVZn55xZ78Y07XUL5RMEcXniCA==
IPFSHash: QmNu8CKWDm5nKfmLjQQNRdaKgYxfLG5fCYU2gyrgNhDEbU
```

The `IPFSHash` is not covered by the signed fingerprint (`Sig:`). However, once
the file behind `IPFSHash` has been fetched completely, it is validated against
`NarHash`, which is part of the fingerprint.
Relevant code: https://github.com/NixOS/nix/blob/215b70f51e5abd350c9b7db656aedac9d96d0046/src/libstore/store-api.cc#L523

The IPFS repository on A should be cleaned periodically in order to free space.
This, however, would make the hashes inaccessible. That is why everything is
mirrored on B using `ipfs-mirror-push.py`.

Now, on A, this can be executed:

```
python3 ipfs-mirror-push.py --ssh admin@B --path /var/www/example.org/cache
```

The script collects all IPFS hashes from the `.narinfo` files in
`/var/www/example.org/cache` and downloads them on B, thus keeping them
available.

If C now uses A as a binary cache, it will first download the `.narinfo` over
HTTP and find an `IPFSHash` inside it. Instead of downloading the `.nar` over
HTTP as well, it is fetched via IPFS. If no local IPFS daemon is supposed to
run on C, an IPFS gateway can be used instead.
Code: https://github.com/mguentner/nix/blob/ipfs/src/libstore/binary-cache-store.cc#L316
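
If you just want to try the cache from C without changing its system
configuration, the cache and its public key can also be passed on the command
line. This is only a sketch: `cache.example.org` is the nginx virtual host
configured in `hydra-master.nix`, the key name matches the signing key
generated there, and the actual public key string is whatever
`hydra-manual-setup` wrote to `/etc/nix/hydra.example.org-1/public`. Whether
the `.nar` is then fetched over IPFS depends on C running the IPFS-enabled Nix
fork; a stock Nix will simply fall back to plain HTTP.

```
nix-store -r /nix/store/wkhdf9jinag5750mqlax6z2zbwhqb76n-hello-2.10 \
  --option binary-caches http://cache.example.org \
  --option binary-cache-public-keys "hydra.example.org-1:<public key>"
```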
--------------------------------------------------------------------------------
/hydra-common.nix:
--------------------------------------------------------------------------------
# hydra-common.nix

{

  deployment.targetEnv = "virtualbox";
  deployment.virtualbox.memorySize = 2048;
  deployment.virtualbox.headless = true;

  i18n.defaultLocale = "en_US.UTF-8";

  nix.nrBuildUsers = 30;

  services.nixosManual.showManual = false;
  services.ntp.enable = false;
  services.openssh.allowSFTP = false;
  services.openssh.passwordAuthentication = false;

  users = {
    mutableUsers = false;
    users.root.openssh.authorizedKeys.keyFiles = [ ~/.ssh/id_rsa.pub ];
  };

}
--------------------------------------------------------------------------------
/hydra-master.nix:
--------------------------------------------------------------------------------
# hydra-master.nix

{ config, pkgs, ... }:
{
  imports = [ ./hydra-common.nix ];

  environment.etc = pkgs.lib.singleton {
    target = "nix/id_buildfarm";
    source = ./id_buildfarm;
    uid = config.ids.uids.hydra;
    gid = config.ids.gids.hydra;
    mode = "0440";
  };

  networking.firewall.allowedTCPPorts = [ config.services.hydra.port 80 4001 ];

  nix = {
    package = pkgs.nixIPFS;
    distributedBuilds = true;
    buildMachines = [
      { hostName = "slave1"; maxJobs = 1; speedFactor = 1; sshKey = "/etc/nix/id_buildfarm"; sshUser = "root"; system = "x86_64-linux"; }
    ];
    extraOptions = "auto-optimise-store = true";
  };

  services.hydra = {
    enable = true;
    hydraURL = "http://hydra.example.org";
    notificationSender = "hydra@example.org";
    port = 8080;
    extraConfig = "store-uri = file:///nix/store?secret-key=/etc/nix/hydra.example.org-1/secret";
    buildMachinesFiles = [ "/etc/nix/machines" ];
  };

  services.postgresql = {
    enable = true;
    dataDir = "/var/db/postgresql-${config.services.postgresql.package.psqlSchema}";
  };

  services.ipfs = {
    enable = true;
    # The Gateway normally listens on 8080
    gatewayAddress = "/ip4/127.0.0.1/tcp/9090";
  };

  services.nginx = {
    enable = true;
    recommendedTlsSettings = true;
    virtualHosts = {
      "cache.example.org" = {
        root = "/var/www/example.org/cache/";
        default = true;
      };
    };
  };

  systemd.services.hydra-manual-setup = {
    description = "Create Admin User for Hydra";
    serviceConfig.Type = "oneshot";
    serviceConfig.RemainAfterExit = true;
    wantedBy = [ "multi-user.target" ];
    requires = [ "hydra-init.service" ];
    after = [ "hydra-init.service" ];
    environment = config.systemd.services.hydra-init.environment;
    script = ''
      if [ ! -e ~hydra/.setup-is-complete ]; then
        # create admin user
        /run/current-system/sw/bin/hydra-create-user alice --full-name 'Alice Q. User' --email-address 'alice@example.org' --password foobar --role admin
        # create signing keys
        /run/current-system/sw/bin/install -d -m 551 /etc/nix/hydra.example.org-1
        /run/current-system/sw/bin/nix-store --generate-binary-cache-key hydra.example.org-1 /etc/nix/hydra.example.org-1/secret /etc/nix/hydra.example.org-1/public
        /run/current-system/sw/bin/chown -R hydra:hydra /etc/nix/hydra.example.org-1
        /run/current-system/sw/bin/chmod 440 /etc/nix/hydra.example.org-1/secret
        /run/current-system/sw/bin/chmod 444 /etc/nix/hydra.example.org-1/public
        mkdir -p /var/www/example.org/cache
        # done
        touch ~hydra/.setup-is-complete
      fi
    '';
  };

}
--------------------------------------------------------------------------------
/hydra-network.nix:
--------------------------------------------------------------------------------
# hydra-network.nix

{

  network.description = "Hydra Continuous Integration Server";

  hydra = import ./hydra-master.nix;
  slave1 = import ./hydra-slave.nix;
  ipfsgw = import ./ipfs-gateway.nix;

}
--------------------------------------------------------------------------------
/hydra-slave.nix:
--------------------------------------------------------------------------------
# hydra-slave.nix

{ config, pkgs, ... }:

{

  imports = [ ./hydra-common.nix ];

  nix.gc = {
    automatic = true;
    dates = "05:15";
    options = ''--max-freed "$((32 * 1024**3 - 1024 * $(df -P -k /nix/store | tail -n 1 | ${pkgs.gawk}/bin/awk '{ print $4 }')))"'';
  };

  users.extraUsers.root.openssh.authorizedKeys.keys = pkgs.lib.singleton ''
    command="nice -n20 nix-store --serve --write" ${pkgs.lib.readFile ./id_buildfarm.pub}
  '';

}
--------------------------------------------------------------------------------
/ipfs-gateway.nix:
--------------------------------------------------------------------------------
{ config, pkgs, ... }:
let
  wl_path = "/var/lib/ipfs/";
  wl_name = "whitelist.conf";
in
{
  imports = [ ./hydra-common.nix ];

  networking.firewall.allowedTCPPorts = [ 80 4001 ];
  services.ipfs.enable = true;
  services.nginx = {
    enable = true;
    virtualHosts = {
      "_" = {
        default = true;
        extraConfig = ''
          location /ipfs/
          {
            try_files $uri @ipfs;
          }
        '';
        locations."@ipfs" = {
          extraConfig = ''
            proxy_pass http://127.0.0.1:8080;
          '';
        };
      };
    };
  };

  systemd.services.ipfsgw-setup = {
    description = "Init for IPFS Gateway";
    serviceConfig.Type = "oneshot";
    serviceConfig.RemainAfterExit = true;
    wantedBy = [ "multi-user.target" ];
    requires = [ "nginx.service" ];
    before = [ "nginx.service" ];
    script = ''
      mkdir -p ${wl_path}
      touch ${wl_path + wl_name}
    '';
  };

  systemd.services.nginx_reloader = {
    serviceConfig = {
      Type = "oneshot";
      ExecStart = "${pkgs.systemd}/bin/systemctl reload-or-restart nginx";
    };
  };

  systemd.paths.nginx_reloader = {
    wantedBy = [ "multi-user.target" ];
    requires = [ "ipfs.service" ];
    pathConfig = { PathChanged = "${wl_path + wl_name}"; };
  };

}
--------------------------------------------------------------------------------
/ipfs-mirror-push.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3
from os import listdir
from os.path import isfile, join
import subprocess
import argparse

parser = argparse.ArgumentParser(description='Push .nar files to an IPFS Mirror')

parser.add_argument('--ssh', default='127.0.0.1', type=str, required=True)
parser.add_argument('--path', default='/var/www/cache/', type=str, required=True)

args = parser.parse_args()

# Collect the IPFSHash entries from all .narinfo files in the cache directory.
narinfo_files = [join(args.path, f) for f in listdir(args.path)
                 if isfile(join(args.path, f))
                 and f.endswith("narinfo")]
ipfsHashes = []
for narinfo_file in narinfo_files:
    with open(narinfo_file, 'rb') as narinfo:
        content = narinfo.readlines()
        for line in content:
            if line.decode("utf-8").startswith("IPFSHash:"):
                print("Found IPFSHash in {}".format(narinfo_file))
                ipfsHashes.append(line.decode("utf-8").split(' ')[1].strip())

print("Exporting Hashes to Mirror...")

# Fetch each hash on the mirror via ssh, so that the mirror's IPFS node
# retrieves (and thereby stores) the data.
for ipfsHash in ipfsHashes:
    getCommand = "sh -c 'ipfs --api /ip4/127.0.0.1/tcp/5001 cat {}' > /dev/null"
    conn = subprocess.Popen(["ssh", "%s" % args.ssh, getCommand.format(ipfsHash)],
                            shell=False,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    res = conn.stderr.readlines()
    if res != []:
        print("SSH command failed with {}".format(res))

--------------------------------------------------------------------------------
/jobset-jailbreak-cabal.nix:
--------------------------------------------------------------------------------
# jailbreak-cabal-ci.nix

{ jailbreakCabalSrc ? { outPath = ../jailbreak-cabal; revCount = 0; gitTag = "dirty"; }
, supportedSystems ? ["x86_64-linux"]
, supportedCompilers ? ["ghc784" "ghc7103" "ghc801"]
}:

with (import <nixpkgs/pkgs/top-level/release-lib.nix> { inherit supportedSystems; });

let

  lib = pkgs.lib // pkgs.haskell.lib;

  buildFun = { mkDerivation, base, Cabal }: mkDerivation {
    pname = "jailbreak-cabal";
    version = jailbreakCabalSrc.gitTag;
    src = jailbreakCabalSrc;
    isLibrary = false;
    isExecutable = true;
    executableHaskellDepends = [ base Cabal ];
    homepage = "http://github.com/peti/jailbreak-cabal";
    description = "Strip version restrictions from build dependencies in Cabal files";
    license = pkgs.stdenv.lib.licenses.bsd3;
  };

in
{

  jailbreak-cabal = lib.genAttrs supportedCompilers (compiler:
    lib.genAttrs supportedSystems (system:
      let
        pkgs = pkgsFor system;
        haskellPackages = pkgs.haskell.packages.${compiler};
        Cabal = if compiler == "ghc801"
                then null
                else haskellPackages.Cabal_1_20_0_3;
      in
      haskellPackages.callPackage buildFun { inherit Cabal; }
    )
  );

}
--------------------------------------------------------------------------------
/jobset-libfastcgi.nix:
--------------------------------------------------------------------------------
# libfastcgi-ci.nix

{ fastcgiSrc ? { outPath = ../fastcgi; revCount = 0; gitTag = "dirty"; }
, supportedSystems ? ["x86_64-linux"]
}:

with (import <nixpkgs/pkgs/top-level/release-lib.nix> { inherit supportedSystems; });

rec {

  tarball = pkgs.releaseTools.sourceTarball {
    name = "libfastcgi";
    src = fastcgiSrc;
    version = fastcgiSrc.gitTag;
  };

  build = pkgs.lib.genAttrs supportedSystems (system:
    let
      pkgs = pkgsFor system;
    in
    pkgs.releaseTools.nixBuild {
      name = "libfastcgi";
      src = tarball;
      buildInputs = [ pkgs.boost.out ];
    }
  );
}
--------------------------------------------------------------------------------
/jobset-nixpkgs.nix:
--------------------------------------------------------------------------------
# nixpkgs-ci.nix

{ supportedSystems ? ["i686-linux" "x86_64-linux"] }:

with (import <nixpkgs/pkgs/top-level/release-lib.nix> { inherit supportedSystems; });

{

  # Simply assign a derivation to an attribute to have it built.
  hello_world_1 = pkgs_x86_64_linux.hello;

  # 'hydraJob' strips all non-essential attributes.
  hello_world_2 = pkgs.lib.hydraJob pkgs_x86_64_linux.hello;

  # Generate one attribute per supported platform.
  hello_world_3 = pkgs.lib.genAttrs supportedSystems (system: (pkgsFor system).hello);

} // mapTestOn {

  # Fancy shortcut to generate one attribute per supported platform.
  hello = supportedSystems;

}
--------------------------------------------------------------------------------