├── .gitignore
├── README.md
├── Vagrantfile
├── certs
│   ├── ca.crt
│   └── haproxy.crt
├── config.yaml
├── localranchervagrant.PNG
├── rancher_id
├── scripts
│   ├── configure_rancher_node.sh
│   ├── configure_rancher_server.sh
│   └── master.sh
└── vagrant_rancheros_guest_plugin.rb

/.gitignore:
--------------------------------------------------------------------------------
1 | .vagrant
2 | registry
3 | vagrantup.bat
4 | /vault
5 | /certs/localrancher*
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # localranchervagrant
2 | ![Pretty Picture](https://github.com/chrisurwin/localranchervagrant/blob/master/localranchervagrant.PNG)
3 |
4 | Vagrant files to stand up a local Rancher install with 3 nodes.
5 |
6 | The nodes run RancherOS as their base OS, which does not have the VirtualBox guest tools installed.
7 |
8 | Start the cluster and the Rancher UI will become accessible at http://172.22.101.100
9 |
10 | To see the contents of the registry cache proxy, navigate to http://172.22.101.100:5000/v2/_catalog
11 |
12 | The default config will bring up a `cattle` environment. You can change this by editing `orchestrator` in [the config file](config.yaml).
13 |
14 | ## Usage
15 |
16 | To use this you must have Vagrant installed, which can be obtained from www.vagrantup.com
17 |
18 | Clone the repository and then run **vagrant up**
19 |
20 | This has been tested with Vagrant 1.9.1 and VirtualBox 5.0.32. If you experience issues with the networking it is likely related to running an older version.
21 |
22 | ## Config
23 |
24 | The [config.yaml](config.yaml) file contains all the variables you should need to change. Below is a description of the variables and their values:
25 |
26 | **orchestrator** - Possible values are `cattle`, `kubernetes`, `mesos` and `swarm`
27 |
28 | This sets the orchestrator that will be used for the environment. As part of the process the Default environment is deleted and a new one is created with the name of the orchestrator.
29 |
30 | **network_mode** - Possible values are `isolated`, `airgap` and `normal`
31 |
32 | `isolated` sets the Rancher Server and Rancher nodes to have no external internet access other than via a proxy server that runs on the master node. This is used to emulate environments where a proxy server is required to access the internet.
33 |
34 | `airgap` sets the Rancher Server and nodes to have no external access at all. All images required to stand up Rancher are downloaded to a private registry on the master node and pulled from there.
35 |
36 | **sslenabled** - Possible values are `true` and `false`
37 |
38 | When `true`, a pre-generated certificate is used to terminate the connection to the Rancher Server with SSL. This certificate is located in the /certs folder. If it is changed then the public certificate embedded in the configure_rancher_node.sh script will also need to be replaced, otherwise the agent will error.
39 |
40 | **ssldns** - Default value is `server.rancher.vagrant`
41 |
42 | This setting needs to match the hostname stored in the SSL certificate that is used for termination.
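
For example, to terminate connections to the Rancher UI with the bundled certificate, the relevant lines of [the config file](config.yaml) would look like this (note that the value is quoted, as in the shipped config):

```yaml
sslenabled: 'true'
ssldns: 'server.rancher.vagrant'
```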
43 |
44 | **version** - Possible values are `latest`, `stable` and `v1.x.x`, where `x.x` is any release of Rancher Server
45 |
46 | This is the version of Rancher Server that you want deployed into your environment.
47 |
48 | **rancher\_env\_vars** - Pass additional environment variables through to the Rancher Server
49 |
50 | **agent_version** - The version of the Rancher agent for the nodes to pull
51 |
52 | **ROS_version** - The version of RancherOS for the nodes to run, possible values are `1.0.3` and `1.0.4`
53 |
54 | **master** - Settings for the master node that runs the proxy, registry mirror etc. The key itself should not be changed
55 |
56 | - **cpus** - Default `1`. This is the number of vCPUs that the master node should have
57 |
58 | - **memory** - Default `1024`. This is the amount of RAM (in MB) to be allocated to the master node; if running on a machine with only 8GB this should be dropped to `512`
59 |
60 | **server** - Settings for the server node(s) that run the Rancher Server. The key itself should not be changed
61 |
62 | - **count** - Default `1`. This is the number of Rancher Servers to run; if you want to test HA then this should be set to `2` or above
63 |
64 | - **cpus** - Default `1`. This is the number of vCPUs that each server node should have
65 |
66 | - **memory** - Default `2048`. This is the amount of RAM (in MB) to be allocated to each server node; if running on a machine with only 8GB this should be dropped to `1024`
67 |
68 | **node** - Settings for the Rancher node(s) that run in the Rancher environment. The key itself should not be changed
69 |
70 | - **count** - Default `3`. This is the number of nodes to run
71 |
72 | - **cpus** - Default `1`. This is the number of vCPUs that each Rancher node should have
73 |
74 | - **memory** - Default `2048`. This is the amount of RAM (in MB) to be allocated to each Rancher node; if running on a machine with only 8GB this should be dropped to `1024`
75 |
76 | **ip** - This section defines the starting IP addresses for the virtual machines
77 |
78 | - **master** - Default `172.22.101.100`
79 |
80 | - **server** - Default `172.22.101.101`
81 |
82 | - **node** - Default `172.22.101.111`
83 |
84 | **linked_clones** - Default value `true`. Leave this set to `true`, as it reduces the disk footprint
85 |
86 | **net** - Network settings section. The key itself should not be changed
87 |
88 | - **private\_nic\_type** - Default `82545EM`; this sometimes needs to be changed to `82540EM`. This is the network card that is emulated in the virtual machine
89 |
90 | - **network\_type** - Default **private\_network**
91 |
92 | If you want to expose the Virtual Machines directly to the network this can be set to **public_network**
93 |
94 | **keys** - Subsection for defining the SSH keys to be used when enabling *external_access*. The public key will be placed onto all servers and the private key will be placed onto just the master node. You can then use the master node as a jump host to each of the remaining VMs, or access them directly with the SSH key (see the example below).
95 |
96 | - **public_key** - This should be set to the path of the public key that needs to be uploaded
97 |
98 | - **private_key** - This should be set to the path of the private key that needs to be uploaded
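
As an illustration, pointing these settings at an existing key pair would look like the snippet below. The paths are placeholders (the shipped config leaves both values empty), and the keys are only uploaded if the files actually exist on the host:

```yaml
keys:
  public_key: "/home/myuser/.ssh/id_rsa.pub"
  private_key: "/home/myuser/.ssh/id_rsa"
```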
99 |
100 | **external_access** - Settings for exposing the setup to an external network
101 |
102 | - **enabled** - Default value `false`. Change to `true` if you want to expose the master node to an external network
103 |
104 | - **ssh_port** - Default value `2277`; this is the host port that SSH on the master node will be exposed on if you enable *external\_access*
105 |
106 | - **http_port** - Set this value to the local port on the host to forward to port 80 on the master
107 |
108 | - **https_port** - Set this value to the local port on the host to forward to port 443 on the master
109 |
110 | ## Troubleshooting
111 |
112 | **VMs starting but not running any scripts** - Try changing the *private\_nic\_type*
113 |
--------------------------------------------------------------------------------
/Vagrantfile:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi: set ft=ruby :
3 | require_relative 'vagrant_rancheros_guest_plugin.rb'
4 | require 'ipaddr'
5 | require 'yaml'
6 |
7 | x = YAML.load_file('config.yaml')
8 | puts "Config: #{x.inspect}\n\n"
9 |
10 | $private_nic_type = x.fetch('net').fetch('private_nic_type')
11 | $external_ssh = x.fetch('external_access').fetch('enabled')
12 |
13 | Vagrant.configure(2) do |config|
14 |
15 |   config.vm.define "master" do |master|
16 |     c = x.fetch('master')
17 |     master.vm.box = "williamyeh/ubuntu-trusty64-docker"
18 |     master.vm.guest = :ubuntu
19 |     master.vm.network x.fetch('net').fetch('network_type'), ip: x.fetch('ip').fetch('master'), nic_type: $private_nic_type
20 |     master.vm.provider :virtualbox do |v|
21 |       v.cpus = c.fetch('cpus')
22 |       v.memory = c.fetch('memory')
23 |       v.name = "master"
24 |     end
25 |     if x.fetch('external_access').fetch('enabled')
26 |       master.vm.network "forwarded_port", guest: 22, host: x.fetch('external_access').fetch('ssh_port')
27 |       master.vm.network "forwarded_port", guest: 80, host: x.fetch('external_access').fetch('http_port')
28 |       master.vm.network "forwarded_port", guest: 443, host: x.fetch('external_access').fetch('https_port')
29 |     end
30 |     if x.fetch('sslenabled') == 'true'
31 |       master.vm.provision "file", source: "./certs/haproxy.crt", destination: "/home/vagrant/haproxy.crt"
32 |     end
33 |     master.vm.provision "shell", path: "scripts/master.sh", args: [x.fetch('network_mode'), x.fetch('sslenabled'), x.fetch('ip').fetch('server'), x.fetch('server').fetch('count'), x.fetch('ip').fetch('master'), x.fetch('version')]
34 |     if File.file?(x.fetch('keys').fetch('private_key'))
35 |       master.vm.provision "file", source: x.fetch('keys').fetch('private_key'), destination: "/home/vagrant/.ssh/id_rsa"
36 |     end
37 |     if File.file?(x.fetch('keys').fetch('public_key'))
38 |       public_key = File.read(x.fetch('keys').fetch('public_key'))
39 |       master.vm.provision :shell, :inline => "
40 |         echo 'Copying SSH Keys to the VM'
41 |         mkdir -p /home/vagrant/.ssh
42 |         chmod 700 /home/vagrant/.ssh
43 |         echo '#{public_key}' >> /home/vagrant/.ssh/authorized_keys
44 |         chmod -R 600 /home/vagrant/.ssh/authorized_keys
45 |       ", privileged: false
46 |     end
47 |   end
48 |
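  # Note (added for clarity): the server and node loops below derive each VM's
  # address by incrementing the base IPs from config.yaml, so with the defaults
  # server-01 gets 172.22.101.101 and node-01..node-03 get 172.22.101.111-113.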
49 |   server_ip = IPAddr.new(x.fetch('ip').fetch('server'))
50 |   (1..x.fetch('server').fetch('count')).each do |i|
51 |     c = x.fetch('server')
52 |     hostname = "server-%02d" % i
53 |     config.vm.define hostname do |server|
54 |       server.vm.box = "chrisurwin/RancherOS"
55 |       server.vm.guest = :linux
56 |       server.vm.provider :virtualbox do |v|
57 |         v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
58 |         v.cpus = c.fetch('cpus')
59 |         v.linked_clone = true if Gem::Version.new(Vagrant::VERSION) >= Gem::Version.new('1.8.0') and x.fetch('linked_clones')
60 |         v.memory = c.fetch('memory')
61 |         v.name = hostname
62 |       end
63 |       server.vm.network x.fetch('net').fetch('network_type'), ip: IPAddr.new(server_ip.to_i + i - 1, Socket::AF_INET).to_s, nic_type: $private_nic_type
64 |       server.vm.hostname = hostname
65 |       server.vm.provision "shell", path: "scripts/configure_rancher_server.sh", args: [x.fetch('ip').fetch('master'), x.fetch('orchestrator'), i, x.fetch('version'), x.fetch('network_mode'), x.fetch('sslenabled'), x.fetch('ssldns'), x.fetch('ip').fetch('master'), x.fetch('rancher_env_vars')]
66 |       if File.file?(x.fetch('keys').fetch('private_key'))
67 |         server.vm.provision "file", source: x.fetch('keys').fetch('private_key'), destination: "/home/rancher/.ssh/id_rsa"
68 |       end
69 |       if File.file?(x.fetch('keys').fetch('public_key'))
70 |         public_key = File.read(x.fetch('keys').fetch('public_key'))
71 |         server.vm.provision :shell, :inline => "
72 |           echo 'Copying SSH Keys to the VM'
73 |           mkdir -p /home/rancher/.ssh
74 |           chmod 700 /home/rancher/.ssh
75 |           echo '#{public_key}' >> /home/rancher/.ssh/authorized_keys
76 |           chmod -R 600 /home/rancher/.ssh/authorized_keys
77 |         ", privileged: false
78 |       end
79 |     end
80 |   end
81 |
82 |   node_ip = IPAddr.new(x.fetch('ip').fetch('node'))
83 |   (1..x.fetch('node').fetch('count')).each do |i|
84 |     c = x.fetch('node')
85 |     hostname = "node-%02d" % i
86 |     config.vm.define hostname do |node|
87 |       node.vm.box = "chrisurwin/RancherOS"
88 |       node.vm.box_version = x.fetch('ROS_version')
89 |       node.vm.guest = :linux
90 |       node.vm.provider "virtualbox" do |v|
91 |         v.cpus = c.fetch('cpus')
92 |         v.linked_clone = true if Gem::Version.new(Vagrant::VERSION) >= Gem::Version.new('1.8.0') and x.fetch('linked_clones')
93 |         v.memory = c.fetch('memory')
94 |         v.name = hostname
95 |       end
96 |       if x.fetch('sslenabled') == 'true'
97 |         node.vm.provision "file", source: "./certs/ca.crt", destination: "/home/rancher/ca.crt"
98 |       end
99 |       node.vm.network x.fetch('net').fetch('network_type'), ip: IPAddr.new(node_ip.to_i + i - 1, Socket::AF_INET).to_s, nic_type: $private_nic_type
100 |       node.vm.hostname = hostname
101 |       node.vm.provision "shell", path: "scripts/configure_rancher_node.sh", args: [x.fetch('ip').fetch('master'), x.fetch('orchestrator'), x.fetch('network_mode'), x.fetch('sslenabled'), x.fetch('ssldns'), x.fetch('ip').fetch('master')]
102 |       if File.file?(x.fetch('keys').fetch('private_key'))
103 |         node.vm.provision "file", source: x.fetch('keys').fetch('private_key'), destination: "/home/rancher/.ssh/id_rsa"
104 |       end
105 |       if File.file?(x.fetch('keys').fetch('public_key'))
106 |         public_key = File.read(x.fetch('keys').fetch('public_key'))
107 |         node.vm.provision :shell, :inline => "
108 |           echo 'Copying SSH Keys to the VM'
109 |           mkdir -p /home/rancher/.ssh
110 |           chmod 700 /home/rancher/.ssh
111 |           echo '#{public_key}' >> /home/rancher/.ssh/authorized_keys
112 |           chmod -R 600 /home/rancher/.ssh/authorized_keys
113 |         ", privileged: false
114 |       end
115 |     end
116 |   end
117 |
118 | end
119 |
--------------------------------------------------------------------------------
/certs/ca.crt:
-------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIC9zCCAd+gAwIBAgIJAO1C/dbdYAGPMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV 3 | BAMMB3Rlc3QtY2EwHhcNMTcxMTE4MTU1NjM1WhcNMTgwMTE3MTU1NjM1WjASMRAw 4 | DgYDVQQDDAd0ZXN0LWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA 5 | xZxxA3u9MQqr37QVubJcOhy+LmgPccrmw9ONDHdG+/0XU1IzdScknrKBYkoR/OoZ 6 | SWOaWDOC4Gq9s6KGMy+uCq6sn/yn/3Afbcomr5ajodmyjKAp693W58Lq9EtbyfPi 7 | jTxcYkPfxi3WZvzoDE5N0TzOvy/1ZgtkjY3FxQNTCpVwMp/d1f44L5PNrTnCeRi2 8 | JS1UNEY1LT3mpUB5oXHbzrm3QZ7H9ye+/fdFj8we7Wlr/PT9RhlWd+GiGGdfiysc 9 | QK+h4fQWhydDdSjpI+n20w0W0ULzEV+dGZ3J72ZOJJ+sCuyrIc9UIB71pw9xSulV 10 | RLZhOhRSkvjfWeXVxaECGQIDAQABo1AwTjAdBgNVHQ4EFgQUR2mqcAfjvhlWd2JR 11 | /V4RHCaqbJYwHwYDVR0jBBgwFoAUR2mqcAfjvhlWd2JR/V4RHCaqbJYwDAYDVR0T 12 | BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAWykiulB8BGjOux4aXFJUWf2QReX1 13 | AEPha4+2boW6+jVyY3TOkwhVfFnVGhTt/504ituDY+S50A6SydY+xslTf1QGfiwk 14 | /HviN8k5uw/TC+S9qqG4k/mukYM/snSm0DuGoWrJzWOFKAnzT/WzStgOVxHMcH+h 15 | YG609a2vK5slUfLkymhENR/YDqvhr65bqh2kQKRCZqTcsMWLxL77mgMT3oHHRor0 16 | q3pKevYJb2eCuhOfglyvVMthlMpzm3SdoJX8c6sdpZ9ANef1xIrtwCqdpJpmYBdj 17 | lsQ72PE0c+cgCg3q7JN+65VazHy6r9Q/ZM78z0Br7q5zra3n6tpcNwUYOg== 18 | -----END CERTIFICATE----- 19 | -------------------------------------------------------------------------------- /certs/haproxy.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIDIjCCAgqgAwIBAgIJAMG0PIouuZgoMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV 3 | BAMMB3Rlc3QtY2EwHhcNMTcxMTE4MTU1NjM2WhcNMTgwMTE3MTU1NjM2WjAhMR8w 4 | HQYDVQQDDBZzZXJ2ZXIucmFuY2hlci52YWdyYW50MIIBIjANBgkqhkiG9w0BAQEF 5 | AAOCAQ8AMIIBCgKCAQEArpO2JCVrgtjrK2eF1ntGn5eXxepRhrw8Oa/eUQcqhaoL 6 | diXp5T+y8+WC3o0t643peAN9yBrkAfUFTEn888/3wuWmEGUug2/tdqrfpvhIioIb 7 | bIeko0x4TyXPN7uCqOrDCpr4mpND7t6AMkEuV5mOx9mBX/qO9pRj/YPMa5TgG0Wc 8 | VG0TNNad9RQgHFByNRJKxlD+eK3IOa50Cmq4d/dpFVtvWDocFW5be6zbcu4SWx4N 9 | Qw48elRf74ndj3rf+Q7XUM8HpjHZWRK7X99FlG5n+TPSXjgixiDtXOrguYrW14dQ 10 | WR+Z+x+8Cb9e8gX5RxzEygPVHQDFkLgXGuL659Iv+wIDAQABo2wwajAJBgNVHRME 11 | AjAAMAsGA1UdDwQEAwIF4DAdBgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwEw 12 | MQYDVR0RBCowKIIOMTcyLjIyLjEwMS4xMDCCFnNlcnZlci5yYW5jaGVyLnZhZ3Jh 13 | bnQwDQYJKoZIhvcNAQELBQADggEBAFDs13I4a8KqOimuyzO/K/2lAOIOPLR2O5WU 14 | X0OyO+iTjg/bsq4GtMnZ1H02QNV0PDZNfYRyw09WhMS85pRwofjo7Fbo/HdlO75S 15 | cPTLHmauY1lKak3l+fGmEnJ1/LI7Bi23keyf3IaJVPsVb247WiB1Nr8hit7QL0Rj 16 | /Lf9tj5D2thjygEGQqWqn4iGAnCHhXncwtn3l9Hb7dEhgUB5lDUuc5VAcdV1LI9j 17 | L7LktXN7rM0h72HyB0qr5Q4Fx1xyZV9gTP3hQLYUCh3mLj21gByRfq/JUgP/BGkz 18 | j6AMVxluyNkTIX0/e2f0WqR2gdiVCxFnIcV1+OJTcm3sGwrippU= 19 | -----END CERTIFICATE----- 20 | -----BEGIN RSA PRIVATE KEY----- 21 | MIIEogIBAAKCAQEArpO2JCVrgtjrK2eF1ntGn5eXxepRhrw8Oa/eUQcqhaoLdiXp 22 | 5T+y8+WC3o0t643peAN9yBrkAfUFTEn888/3wuWmEGUug2/tdqrfpvhIioIbbIek 23 | o0x4TyXPN7uCqOrDCpr4mpND7t6AMkEuV5mOx9mBX/qO9pRj/YPMa5TgG0WcVG0T 24 | NNad9RQgHFByNRJKxlD+eK3IOa50Cmq4d/dpFVtvWDocFW5be6zbcu4SWx4NQw48 25 | elRf74ndj3rf+Q7XUM8HpjHZWRK7X99FlG5n+TPSXjgixiDtXOrguYrW14dQWR+Z 26 | +x+8Cb9e8gX5RxzEygPVHQDFkLgXGuL659Iv+wIDAQABAoIBACJogKzNJ0yJ3ogW 27 | hl2IvJm18jcTAPCSZzhjfmjwBaSEqG+n91q03FuP7+RXZSaZONvrrD3ldu0f0uR/ 28 | bLjNU6L5SmrO1oUXaXFLKnupUfriu+7/KSb6angHFrmAXrdWVgzgEIG+5tPMzO2a 29 | xTyEyox/RS2an9Tf2HleIctOc4FMOrWI0TVydHB9RhFKS99HWBlsWcA6r3aDb7Pj 30 | 1+liXcWKp/8IrzZE+ljSY4bAoP0KRaFnKdg565FXosBwwdMp+7P121o4J07GnLm8 31 | spKHnNUL9FOEDfShFOiSpgACrVZYAlkZVOgWWyDQfanJVGSDJyNtZ/UEn8hqKg2V 32 | cMfwRDECgYEA1fHjq/6u9HuPI4EepzPgl4olOP4hzMJZZzWsKDwrq6EgzevBnH1x 33 | 
mytaU/EZW2ONKq0RliQRe1wG3YZ4GM1zOIwss6AQNLriu0HvAyssxS2yLbHcu46W 34 | Q1p20RiDnplwYzTs/v38HFCKhq1gWa2KxH/RP57WFoO6t/cOFs3YbZMCgYEA0OTB 35 | g/GRjmxOVcy27sk9wyd4oBvS64chPUgZW4LNH9EXdhYG+KDXspVRJ6kRs9KKCMiC 36 | XNI2h64axV6yi8tuTBCwDnVTLgDoIJzo0jtkVN++5VpHmEZdi7ain70UCRzn7BGs 37 | ZGEt8fz0bJEq6+LAIGKGO08GqDZkcK9EZ4hTdPkCgYBrfJGj8dq4XfN2JQLfgkLO 38 | H8u/rH4Sv3nnTqHlvQU7v0IZ8t5c3vFawhcpoTHaJlPgIv947ToTPhL87Pgdjerb 39 | eCFlRX+e3l3XMrMAZoqhremc54U8fdCkmiijuKs33WMHlKmVGsm0+AoAy85if7v+ 40 | KTFZw1VQDnKaxna/oj+gHQKBgG58P5ow38LLqBcbYrojeGZK7zMH5GQsobkO9FKB 41 | eUrVxanwdpl0QQG0lir68mEA/38rxj+f87WJsaEouBjWbS0QOn47MdaIJj5xn3bq 42 | cOOvQAGA4ZM5zGqEG7mPQRSc1FH4+wxQlgOCvijaEzCWSvYWBldCMgwwUKmFGhIE 43 | edfJAoGAei3YSQcNFRR/cuYScZOnQ6tSVrHzS7yaiLO1xU8mp4juVxHzaPIbeKBm 44 | MK4oinPFaV2whE1rvzSg21iAJFDYGb+SO9Y6STI0RCUA6osi0AchSWnQgtgnuXds 45 | EDiRltFZbYsf/G5oto7kKMMoz1UzRDwPqnKK5EyFeQKPzAa7o2w= 46 | -----END RSA PRIVATE KEY----- 47 | -------------------------------------------------------------------------------- /config.yaml: -------------------------------------------------------------------------------- 1 | # cattle|kubernetes|swarm|mesos 2 | orchestrator: cattle 3 | # 'true'|'false' isolated network requiring proxy. must be in '' 4 | network_mode: normal 5 | # 'true'|'false' 6 | sslenabled: 'false' 7 | ssldns: 'server.rancher.vagrant' 8 | # https://hub.docker.com/r/rancher/server/tags/ 9 | version: latest 10 | #Pass through environment variables for Rancher Server to start with 11 | rancher_env_vars: "" 12 | agent_version: "v1.2.5" 13 | ROS_version: 1.0.3 14 | master: 15 | cpus: 1 16 | memory: 1024 17 | server: 18 | count: 1 19 | cpus: 1 20 | memory: 2048 21 | node: 22 | count: 3 23 | cpus: 1 24 | memory: 2048 25 | ip: 26 | master: 172.22.101.100 27 | server: 172.22.101.101 28 | node: 172.22.101.111 29 | # Linked clones are based on a master VM, which is generated by importing the 30 | # base box only once the first time it is required. The result is reduced 31 | # storage overhead and faster deployment times for multi-VM deployments. 
32 | # 33 | # https://www.vagrantup.com/docs/virtualbox/configuration.html#linked-clones 34 | linked_clones: true 35 | net: 36 | private_nic_type: 82545EM 37 | network_type: private_network 38 | # Location of ssh key files that want to be added to the servers 39 | keys: 40 | public_key: "" 41 | private_key: "" 42 | external_access: 43 | enabled: false 44 | ssh_port: 2277 45 | http_port: 80 46 | https_port: 443 47 | -------------------------------------------------------------------------------- /localranchervagrant.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/vagrant/dab012e9ba3d38c3637b6fbd85026e0657b475fb/localranchervagrant.PNG -------------------------------------------------------------------------------- /rancher_id: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEowIBAAKCAQEAkkd08CYBqSBRRy27Xqfm1D9RqKYxVYA5btbzfzyHiNlqYkD5 3 | T7iDcI1pSSJxl8dB2Shfwn0pN9V/DWUOcc4QJAxe+YwVgszfbmBy4ggszc0S8lNU 4 | QSCnZ7ij1AOwU04ATdVmKouehJ9Bi4n/D5IylEeRXynpc5mYzYu1guojwqjishmt 5 | SeMxj05kKxGQJiVynGVfyMqeOe6XCCCEDYbvVxE2O9QuAgxPO4944/06woAGTNpE 6 | Tfuu3yU8vKbS8nbEzPTrW145v+I2+wQ4VDTdhyLQtQN60AnUaguScXCkO2I+UmgB 7 | M9+1S1xGZBzywwIPZZMaeJsm2/8y5FiI5P8eiQIDAQABAoIBACRi3fW2nochYNZh 8 | y8/Z2BNHxNo8rNzlzmOB2IP33Ibv/gR+wQVRknB23I+yn6rvh7Oq1VcdpxkaZEZC 9 | svsgFOfCpAVTES6+I33jQ/BxtCNlNadNjxG8O6gTR70/RacTKgQyZv2CSWYX70f0 10 | hZur9CTqQID5ffahaF8It5ffAhIzbBpC1Ezc1yBiWrsR16j7McucVkHtRDIIM+XR 11 | vVmxultCwlISSocvCttFcmcJHYDo3aAHGctzBshQHR6gpG2WWKkDPz9I+hCcOCSp 12 | +KN1emrb/9GTojfgmqn2cl+dRbiWUtBeeY/xy0TzytTgu3Ox+UqYrAHUZtyP4WZY 13 | 1YMVLWECgYEAwrg24KT2RVx/58yDSswSXdXCtTCJHJMrxpcnYhhS0XltZr7kW1Vk 14 | oE2hWvDXqJuMB+sdJoU8YIX4v2OKOmRYyhocEe6ELHcNa4XtJwX/mPj3C3lQMGeN 15 | OEGhchT1RDbCGNUzyeNWmfYV+dl+PMNY1qvOoUHM3Fg3TGRuB7VJ3ZUCgYEAwFCW 16 | KDN2/SOdGKERMi43eErroJpmKB7bF9RZ6UbYaCjfvfMk1DCylLsbycVc1s7RmCs0 17 | V0AtCgREHePXS4qGR1ecUtZIPQ+ydHISvo53IW/TORHPz8VQQSmPJnQK+qtVkFNR 18 | ZPlb/Kr/4zD1k/pcusYcb4Ba4+RwxwauPIgTuCUCgYEAwLkVT7jWKwaHbgSEOibL 19 | UuZ8hVsZihe77x2yFCwg5EV724K6gA9/vg2300unF7GEol25h2O9w4c2ES/TI0Io 20 | gzfean0Zguz+clqmw8XzGhPoaAnVLWOAaZrD0ScfLjnIWE3EqWCCWXYwgA8dG8T5 21 | C9gWdpKq0bfaW5jXMnfjORUCgYBXpKI54fJpPNC8R7MZhlx7MtGdWKATyrQYcQsk 22 | t8ep+G63F1cCaMsJ4xev5YUA1DJKwe3RkYq1FOqiIJKed5voaXHxgnxbb4rpbI9E 23 | TpLWJytAx/BIBOklOTZm1N2oDWsqIf5nIQkd5iRFuqtAbAnUiLKu0fUaBKKs4UuI 24 | hI9QDQKBgCXeVvAjb0Ys55tO2FYHcH5PuC3sGraNZjLVZPF21RsKvzCWF0tExJJk 25 | sArpXUg9FqvqOfWLsUmny/W0p7T86anq0HA+tMoBtEF6GnXeCDuEHjN9CwM9uQp1 26 | IzlmmV8pDc/Uddf7lFuOLEQh3CIbuZZRvycQlAJyVIx5RTahxe7C 27 | -----END RSA PRIVATE KEY----- -------------------------------------------------------------------------------- /scripts/configure_rancher_node.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | rancher_server_ip=${1:-172.22.101.100} 3 | orchestrator=${2:-cattle} 4 | network_type=${3:-false} 5 | sslenabled=${4:-false} 6 | ssldns=${5:-server.rancher.vagrant} 7 | cache_ip=${6:-172.22.101.100} 8 | 9 | curlprefix="appropriate" 10 | if [ "$sslenabled" == 'true' ]; then 11 | protocol="https" 12 | rancher_server_ip=$ssldns 13 | else 14 | protocol="http" 15 | fi 16 | 17 | if [ "$network_type" == "airgap" ] ; then 18 | curlprefix="$cache_ip:5000" 19 | fi 20 | 21 | if [ "$orchestrator" == "kubernetes" ] && [ ! 
"$(ros engine list | grep current | grep docker-1.12.6)" ]; then 22 | ros engine switch docker-1.12.6 23 | system-docker restart docker 24 | sleep 5 25 | fi 26 | 27 | ros config set rancher.docker.insecure_registry "['$cache_ip:5000']" 28 | if [ ! "$network_type" == "airgap" ] ; then 29 | ros config set rancher.docker.registry_mirror "http://$cache_ip:4000" 30 | ros config set rancher.system_docker.registry_mirror "http://$cache_ip:4000" 31 | ros config set rancher.docker.host "['unix:///var/run/docker.sock', 'tcp://0.0.0.0:2375']" 32 | if [ "$network_type" == "isolated" ]; then 33 | ros config set rancher.docker.environment "['http_proxy=http://$cache_ip:3128','https_proxy=http://$cache_ip:3128','HTTP_PROXY=http://$cache_ip:3128','HTTPS_PROXY=http://$cache_ip:3128','no_proxy=server.rancher.vagrant,localhost,127.0.0.1','NO_PROXY=server.rancher.vagrant,localhost,127.0.0.1']" 34 | ros config set rancher.system_docker.environment "['http_proxy=http://$cache_ip:3128','https_proxy=http://$cache_ip:3128','HTTP_PROXY=http://$cache_ip:3128','HTTPS_PROXY=http://$cache_ip:3128','no_proxy=server.rancher.vagrant,localhost,127.0.0.1','NO_PROXY=server.rancher.vagrant,localhost,127.0.0.1']" 35 | fi 36 | fi 37 | 38 | system-docker restart docker 39 | sleep 5 40 | 41 | if [ "$sslenabled" == 'true' ]; then 42 | mkdir -p /var/lib/rancher/etc/ssl 43 | echo "-----BEGIN CERTIFICATE----- 44 | MIIDFTCCAf2gAwIBAgIJAN2yyLTWbidBMA0GCSqGSIb3DQEBBQUAMCExHzAdBgNV 45 | BAMMFnNlcnZlci5yYW5jaGVyLnZhZ3JhbnQwHhcNMTcwNzI5MTQxMjQ1WhcNMjcw 46 | NzI3MTQxMjQ1WjAhMR8wHQYDVQQDDBZzZXJ2ZXIucmFuY2hlci52YWdyYW50MIIB 47 | IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwLq7oWQwnSAR696FL7w2W7t/ 48 | MVCioPnnJV8tFfTvIZ/zQsH4ul9rjdv0NGLcPEXXtdDxsadn+hMWUYAqPNn2YDav 49 | pa0HhEFL/WGnUAP/XE2Vrop7QYh/heu8BIOOQ2rAOaxlLUscDYSmA3BeIEIoDLSc 50 | +A9+xVMikc6SkSQ4qpZOF7GvLfbwEYs3ii7PFUTzxDbmWsOuEEyRyJ36+6fmTTIu 51 | w6rrnsCqUF7AfIEcgxEVOTxtMoY/v8427AP+0B4pAGrY7siwqoiXtWERraVuyAwn 52 | 0IkKK4LH/lH30KwzPZTV825aE2Ob05MAg3Sbgi83BfMZQpbloJKj/MosmcXfGwID 53 | AQABo1AwTjAdBgNVHQ4EFgQUXlHYBOn21xjD64UiFrQa+hoFFyIwHwYDVR0jBBgw 54 | FoAUXlHYBOn21xjD64UiFrQa+hoFFyIwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0B 55 | AQUFAAOCAQEAQo+VJv2VkXAe03RL5PBSopE50XNF0xUvMH45gt2lnh4bz2HXTaLy 56 | XcbMzFWeClKWvkqfb9vhlClhmusJYYzkWsSJ5il7YNYVI4m+z33XtTeR0Pzuy2XQ 57 | BrRf+kz6KP5DJt1HusTN+gJFJ0EI850USscCjR2TiPWe7zgKt8WJ/W5c3rVwLFy5 58 | Z/nsoi16UmSJXKkJzXA+tM6K5DCx1p4LmuZXSzB5EwkL9okqA903Vj6kv9JwaHJl 59 | 4IgQPgzN0f5iPZNsMboEFfhcYVRRYoznnJzL7VCg1ig5j9JyfsjSpozVFE2CY/52 60 | tRubyXjH+dQQftBUuzwULwwKGL0le7o/vA== 61 | -----END CERTIFICATE-----" > /var/lib/rancher/etc/ssl/ca.crt 62 | fi 63 | 64 | if [ "$network_type" == "isolated" ] || [ "$network_type" == "airgap" ] ; then 65 | ros config set rancher.network.dns.nameservers ["'$cache_ip'"] 66 | system-docker restart network 67 | route add default gw $cache_ip 68 | fi 69 | 70 | if [ "$sslenabled" == 'true' ]; then 71 | ros config set rancher.network.dns.nameservers ["'$cache_ip'"] 72 | system-docker restart network 73 | mkdir -p /var/lib/rancher/etc/ssl 74 | cp /home/rancher/ca.crt /var/lib/rancher/etc/ssl/ca.crt 75 | fi 76 | 77 | while true; do 78 | ENV_ID=$(docker run \ 79 | -v /tmp:/tmp \ 80 | --rm \ 81 | $curlprefix/curl \ 82 | -sLk \ 83 | "$protocol://$rancher_server_ip/v2-beta/project?name=$orchestrator" | jq '.data[0].id' | tr -d '"') 84 | 85 | if [[ "$ENV_ID" == 1a* ]]; then 86 | break 87 | else 88 | sleep 5 89 | fi 90 | done 91 | 92 | 93 | echo Adding host to Rancher Server 94 | 95 | docker run \ 96 | -v /tmp:/tmp \ 97 | --rm \ 98 | 
$curlprefix/curl \ 99 | -sLk \ 100 | -X POST \ 101 | -H 'Content-Type: application/json' \ 102 | -H 'accept: application/json' \ 103 | -d "{\"type\":\"registrationToken\"}" \ 104 | "$protocol://$rancher_server_ip/v2-beta/projects/$ENV_ID/registrationtoken" 105 | 106 | docker run \ 107 | -v /tmp:/tmp \ 108 | --rm \ 109 | $curlprefix/curl \ 110 | -sLk \ 111 | "$protocol://$rancher_server_ip/v2-beta/projects/$ENV_ID/registrationtokens/?state=active" | 112 | jq -r .data[].command | 113 | head -n1 | 114 | sh 115 | -------------------------------------------------------------------------------- /scripts/configure_rancher_server.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | rancher_server_ip=${1:-172.22.101.100} 4 | orchestrator=${2:-cattle} 5 | node=${3:-3} 6 | rancher_server_version=${4:-stable} 7 | network_type=${5:-false} 8 | sslenabled=${6:-false} 9 | ssldns=${7:-server.rancher.vagrant} 10 | cache_ip=${8:-172.22.101.100} 11 | rancher_env_vars=${9} 12 | registry_prefix="rancher" 13 | curl_prefix="appropriate" 14 | 15 | if [ "$network_type" == "airgap" ]; then 16 | registry_prefix=$cache_ip:5000 17 | curl_prefix=$cache_ip:5000 18 | fi 19 | 20 | if [ "$sslenabled" == 'true' ]; then 21 | protocol="https" 22 | rancher_server_ip=$ssldns 23 | else 24 | protocol="http" 25 | fi 26 | 27 | ros config set rancher.docker.insecure_registry "['$cache_ip:5000']" 28 | if [ ! "$network_type" == "airgap" ] ; then 29 | ros config set rancher.docker.registry_mirror "http://$cache_ip:4000" 30 | ros config set rancher.system_docker.registry_mirror "http://$cache_ip:4000" 31 | ros config set rancher.docker.host "['unix:///var/run/docker.sock', 'tcp://0.0.0.0:2375']" 32 | if [ "$network_type" == "isolated" ]; then 33 | ros config set rancher.docker.environment "['http_proxy=http://$cache_ip:3128','https_proxy=http://$cache_ip:3128','HTTP_PROXY=http://$cache_ip:3128','HTTPS_PROXY=http://$cache_ip:3128','no_proxy=server.rancher.vagrant,localhost,127.0.0.1','NO_PROXY=server.rancher.vagrant,localhost,127.0.0.1']" 34 | ros config set rancher.system_docker.environment "['http_proxy=http://$cache_ip:3128','https_proxy=http://$cache_ip:3128','HTTP_PROXY=http://$cache_ip:3128','HTTPS_PROXY=http://$cache_ip:3128','no_proxy=server.rancher.vagrant,localhost,127.0.0.1','NO_PROXY=server.rancher.vagrant,localhost,127.0.0.1']" 35 | fi 36 | fi 37 | 38 | system-docker restart docker 39 | sleep 5 40 | 41 | if [ "$network_type" == "isolated" ] || [ "$network_type" == "airgap" ] ; then 42 | ros config set rancher.network.dns.nameservers ["'$cache_ip'"] 43 | system-docker restart network 44 | route add default gw $cache_ip 45 | fi 46 | 47 | if [ "$sslenabled" == 'true' ]; then 48 | ros config set rancher.network.dns.nameservers ["'$cache_ip'"] 49 | system-docker restart network 50 | fi 51 | 52 | SUSPEND=n 53 | CATTLE_JAVA_OPTS="-Xms128m -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -agentlib:jdwp=transport=dt_socket,server=y,suspend=$SUSPEND,address=1044" 54 | 55 | EXTRA_OPTS="" 56 | if [ "$network_type" == "isolated" ]; then 57 | EXTRA_OPTS="-e http_proxy='http://$cache_ip:3128' \ 58 | -e https_proxy='http://$cache_ip:3128' \ 59 | -e HTTP_PROXY='http://$cache_ip:3128' \ 60 | -e HTTPS_PROXY='http://$cache_ip:3128' \ 61 | -e no_proxy='server.rancher.vagrant,localhost,127.0.0.1' \ 62 | -e NO_PROXY='server.rancher.vagrant,localhost,127.0.0.1'" 63 | fi 64 | rancher_command="" 65 | if [ "$network_type" == "airgap" ]; then 66 | EXTRA_OPTS="-e 
CATTLE_BOOTSTRAP_REQUIRED_IMAGE=$cache_ip:5000/rancher/agent:v1.2.5" 67 | rancher_command="$registry_prefix/rancher/server:$rancher_server_version" 68 | else 69 | rancher_command="rancher/server:$rancher_server_version" 70 | fi 71 | 72 | echo Installing Rancher Server 73 | sudo docker run -d --restart=always \ 74 | -p 8080:8080 \ 75 | -p 8088:8088 \ 76 | -p 1044:1044 \ 77 | -p 9345:9345 \ 78 | $EXTRA_OPTS \ 79 | -e CATTLE_JAVA_OPTS="$CATTLE_JAVA_OPTS" \ 80 | $rancher_env_vars \ 81 | --restart=unless-stopped \ 82 | --name rancher-server \ 83 | $rancher_command \ 84 | --db-host $cache_ip \ 85 | --db-port 3306 \ 86 | --db-name cattle \ 87 | --db-user root \ 88 | --db-pass cattle \ 89 | --advertise-address `ifconfig eth1 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` 90 | 91 | if [ $node -eq 1 ]; then 92 | # wait until rancher server is ready 93 | while true; do 94 | docker run --dns $cache_ip --rm $curl_prefix/curl -sLk $protocol://$rancher_server_ip/ping && break 95 | sleep 5 96 | done 97 | 98 | set -e 99 | 100 | # disable telemetry for developers 101 | docker run \ 102 | --rm \ 103 | $curl_prefix/curl \ 104 | -sLk \ 105 | -X POST \ 106 | -H 'Accept: application/json' \ 107 | -H 'Content-Type: application/json' \ 108 | -d '{"type":"setting","name":"telemetry.opt","value":"out"}' \ 109 | "$protocol://$rancher_server_ip/v2-beta/setting" 110 | 111 | # set default registry for Rancher images 112 | if [ "$network_type" == "airgap" ] ; then 113 | docker run \ 114 | --rm \ 115 | $curl_prefix/curl \ 116 | -sLk \ 117 | -X POST \ 118 | -H 'Accept: application/json' \ 119 | -H 'Content-Type: application/json' \ 120 | -d '{"type":"setting","name":"registry.default","value":"'$cache_ip':5000"}' \ 121 | "$protocol://$rancher_server_ip/v2-beta/setting" 122 | 123 | fi 124 | 125 | # lookup orchestrator template id 126 | while true; do 127 | ENV_TEMPLATE_ID=$(docker run \ 128 | -v /tmp:/tmp \ 129 | --rm \ 130 | $curl_prefix/curl \ 131 | -sLk \ 132 | "$protocol://$rancher_server_ip/v2-beta/projectTemplates?name=$orchestrator" | jq '.data[0].id' | tr -d '"') 133 | 134 | # might've received 422 InvalidReference if the templates haven't populated yet 135 | if [[ "$ENV_TEMPLATE_ID" == 1pt* ]]; then 136 | break 137 | else 138 | sleep 5 139 | fi 140 | done 141 | 142 | # create an environment with specified orchestrator template 143 | docker run \ 144 | -v /tmp:/tmp \ 145 | --rm \ 146 | $curl_prefix/curl \ 147 | -sLk \ 148 | -X POST \ 149 | -H 'Accept: application/json' \ 150 | -H 'Content-Type: application/json' \ 151 | -d "{\"description\":\"$orchestrator\",\"name\":\"$orchestrator\",\"projectTemplateId\":\"$ENV_TEMPLATE_ID\",\"allowSystemRole\":false,\"members\":[],\"virtualMachine\":false,\"servicesPortRange\":null}" \ 152 | "$protocol://$rancher_server_ip/v2-beta/projects" 153 | 154 | # lookup default environment id 155 | DEFAULT_ENV_ID=$(docker run -v /tmp:/tmp --rm $curl_prefix/curl -sLk "$protocol://$rancher_server_ip/v2-beta/project?name=Default" | jq '.data[0].id' | tr -d '"') 156 | 157 | # delete default environment 158 | docker run \ 159 | --rm \ 160 | $curl_prefix/curl \ 161 | -sLk \ 162 | -X DELETE \ 163 | -H 'Accept: application/json' \ 164 | -H 'Content-Type: application/json' \ 165 | -d '{}' \ 166 | "$protocol://$rancher_server_ip/v2-beta/projects/$DEFAULT_ENV_ID/?action=delete" 167 | fi 168 | -------------------------------------------------------------------------------- /scripts/master.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 
| network_mode=${1:-false} 3 | sslenabled=${2:-false} 4 | rancher_server_ip=${3:-172.22.101.101} 5 | rancher_server_node=${4:-1} 6 | cache_ip=${5:-172.22.101.100} 7 | rancher_server_version=${6:-latest} 8 | password=${7:-rancher} 9 | 10 | 11 | apt-get update 12 | apt-get install jq 13 | apt-get install docker-engine 14 | 15 | 16 | echo "DOCKER_OPTS=\"\$DOCKER_OPTS --registry-mirror http://$cache_ip:4000 --insecure-registry http://$cache_ip:5000 --insecure-registry http://$cache_ip:4000\"" >> /etc/default/docker 17 | service docker restart 18 | 19 | # path to a remote share 20 | share_path=/vagrant/.vagrant/data 21 | mkdir -p $share_path 22 | 23 | # base configuration path 24 | config_path=/etc/vagrant/ 25 | mkdir -p $config_path 26 | 27 | chmod 0700 /home/vagrant/.ssh/rancher_id 28 | chown vagrant /home/vagrant/.ssh/rancher_id 29 | 30 | docker rm -f cadvisor 31 | 32 | echo "version: 0.1 33 | log: 34 | fields: 35 | service: registry 36 | storage: 37 | cache: 38 | blobdescriptor: redis 39 | filesystem: 40 | rootdirectory: /var/lib/registry 41 | http: 42 | addr: :5000 43 | headers: 44 | X-Content-Type-Options: [nosniff] 45 | redis: 46 | addr: redis:6379 47 | password: $password 48 | db: 0 49 | dialtimeout: 10ms 50 | readtimeout: 10ms 51 | writetimeout: 10ms 52 | pool: 53 | maxidle: 16 54 | maxactive: 64 55 | idletimeout: 300s 56 | health: 57 | storagedriver: 58 | enabled: true 59 | interval: 10s 60 | threshold: 3 61 | proxy: 62 | remoteurl: https://registry-1.docker.io" > $config_path/config.yml 63 | 64 | mkdir -p $share_path/redis 65 | echo "save 300 1 66 | requirepass \"$password\"" > $share_path/redis/redis.conf 67 | 68 | redis_image=redis_image.tar 69 | if [ -f $share_path/$redis_image ]; then 70 | docker load -i $share_path/$redis_image 71 | fi 72 | 73 | docker run -d --restart=always --name redis-mirror -p 6379 -v $share_path/redis:/data --entrypoint=/usr/local/bin/redis-server redis /data/redis.conf 74 | 75 | if [ ! -f $share_path/$redis_image ] ; then 76 | docker save -o $share_path/$redis_image redis 77 | fi 78 | 79 | registry_image=registry_v2_image.tar 80 | if [ -f $share_path/$registry_image ]; then 81 | docker load -i $share_path/$registry_image 82 | fi 83 | 84 | docker run -d --restart=always -p 4000:5000 --name v2-mirror \ 85 | -v $share_path:/var/lib/registry --link redis-mirror:redis registry:2 /var/lib/registry/config.yml 86 | 87 | if [ ! -f $share_path/$registry_image ] ; then 88 | docker save -o $share_path/$registry_image registry:2 89 | fi 90 | 91 | # Allow for --provison to clean the cattle DB 92 | docker stop mysql 93 | docker rm mysql 94 | 95 | echo Install MySQL 96 | docker run \ 97 | -d \ 98 | --name mysql \ 99 | -p 3306:3306 \ 100 | --net=host \ 101 | --restart=always \ 102 | -v mysql:/var/lib/mysql \ 103 | -e MYSQL_ROOT_PASSWORD=cattle \ 104 | mysql:5.7.18 105 | 106 | if [ $? 
-eq 0 ]; then 107 | sleep 15 108 | echo Creating database 109 | docker exec -i mysql \ 110 | mysql \ 111 | --password=cattle \ 112 | -e "CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';" 113 | fi 114 | 115 | #Setup haproxy for Rancher HA 116 | 117 | echo "#--------------------------------------------------------------------- 118 | # Global settings 119 | #--------------------------------------------------------------------- 120 | global 121 | maxconn 100 122 | #--------------------------------------------------------------------- 123 | # common defaults that all the 'listen' and 'backend' sections will 124 | # use if not designated in their block 125 | #--------------------------------------------------------------------- 126 | defaults 127 | mode http 128 | log global 129 | option httplog 130 | option dontlognull 131 | timeout check 10 132 | option httplog clf 133 | option http-server-close 134 | option forwardfor except 127.0.0.0/8 135 | option redispatch 136 | retries 3 137 | timeout http-request 10s 138 | timeout queue 1m 139 | timeout connect 10s 140 | timeout client 1m 141 | timeout server 1m 142 | timeout http-keep-alive 1m 143 | timeout check 10s 144 | maxconn 20000 145 | 146 | listen stats 147 | bind 0.0.0.0:1936 148 | mode http 149 | stats enable 150 | stats hide-version 151 | stats realm Haproxy\ Statistics 152 | stats uri / 153 | stats auth Username:Password" > $config_path/haproxy.cfg 154 | 155 | nextip(){ 156 | IP=$1 157 | IP_HEX=$(printf '%.2X%.2X%.2X%.2X\n' `echo $IP | sed -e 's/\./ /g'`) 158 | NEXT_IP_HEX=$(printf %.8X `echo $(( 0x$IP_HEX + 1 ))`) 159 | NEXT_IP=$(printf '%d.%d.%d.%d\n' `echo $NEXT_IP_HEX | sed -r 's/(..)/0x\1 /g'`) 160 | echo "$NEXT_IP" 161 | } 162 | 163 | if [ "$sslenabled" == 'true' ]; then 164 | echo " 165 | 166 | frontend main 167 | bind 0.0.0.0:80 168 | redirect scheme https if !{ ssl_fc } 169 | bind 0.0.0.0:443 ssl crt /usr/local/etc/haproxy/haproxy.crt 170 | default_backend ha-nodes 171 | # Add headers for SSL offloading 172 | http-request set-header X-Forwarded-Proto https if { ssl_fc } 173 | http-request set-header X-Forwarded-Ssl on if { ssl_fc } 174 | 175 | backend ha-nodes 176 | option httpchk HEAD /ping 177 | default-server inter 3s fall 3 rise 2" >> $config_path/haproxy.cfg 178 | 179 | else 180 | echo " 181 | frontend main 182 | bind 0.0.0.0:80 183 | mode tcp 184 | default_backend ha-nodes 185 | 186 | backend ha-nodes 187 | mode tcp 188 | option httpchk HEAD /ping 189 | default-server inter 3s fall 3 rise 2" >> $config_path/haproxy.cfg 190 | fi 191 | 192 | IP=$rancher_server_ip 193 | for i in $(seq 1 $rancher_server_node); do 194 | echo " server ha-$i $IP:8080 send-proxy check" >> $config_path/haproxy.cfg 195 | IP=$(nextip $IP) 196 | done 197 | 198 | 199 | docker stop haproxy 200 | docker rm haproxy 201 | docker run -d --name haproxy --restart=always -p 80:80 -p 443:443 -p 1936:1936 -v $config_path/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro -v /home/vagrant/haproxy.crt:/usr/local/etc/haproxy/haproxy.crt:ro haproxy:1.7 202 | 203 | #docker run -d --name haproxy --restart=always -p 80:80 -p 443:443 -p 1936:1936 -v $config_path/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.7 204 | 205 | # Install nfs server 206 | sudo mkdir -p /home/vagrant/nfs 207 | sudo docker run -d --name nfs --restart=always --privileged --net=host -v /home/vagrant/nfs:/nfsshare -e SHARED_DIRECTORY=/nfsshare itsthenetwork/nfs-server-alpine:4 208 | 209 | #Run a local registry 210 | mkdir -p $share_path/registry 211 
| docker run -d -p 5000:5000 --restart=always --name registry -v $share_path/registry:/var/lib/registry registry:2 212 | 213 | #Run local proxy 214 | if [ "$network_mode" == "isolated" ] || [ "$network_mode" == "airgap" ] || [ "$sslenabled" == 'true' ]; then 215 | docker run -d --restart=always --name proxy -p 3128:3128 minimum2scp/squid 216 | 217 | #Setup dns proxy 218 | echo " 219 | include \"/etc/bind/named.conf.local\"; 220 | acl goodclients { 221 | $cache_ip/24; 222 | localhost; 223 | localnets; 224 | }; 225 | 226 | options { 227 | 228 | recursion yes; 229 | allow-query { goodclients; }; 230 | 231 | dnssec-validation auto; 232 | 233 | auth-nxdomain no; # conform to RFC1035 234 | listen-on-v6 { any; }; 235 | };" > /root/bind.conf 236 | 237 | echo " zone \"rancher.vagrant\" { 238 | type master; 239 | file \"/etc/bind/db.rancher.vagrant\"; 240 | };" > /root/named.conf.local 241 | 242 | echo "; 243 | ; BIND data file for local loopback interface 244 | ; 245 | \$TTL 604800 246 | @ IN SOA ns.rancher.vagrant. root.rancher.vagrant. ( 247 | 1 ; Serial 248 | 604800 ; Refresh 249 | 86400 ; Retry 250 | 2419200 ; Expire 251 | 604800 ) ; Negative Cache TTL 252 | ; 253 | @ IN NS rancher.vagrant. 254 | @ IN A 127.0.0.1 255 | @ IN AAAA ::1 256 | @ IN NS ns.rancher.vagrant. 257 | ns IN A $cache_ip 258 | 259 | ;also list other computers 260 | server IN A $cache_ip" > /root/db.rancher.vagrant 261 | 262 | docker run -d --restart=always --name bind9 -p 53:53 -p 53:53/udp -v /root/named.conf.local:/etc/bind/named.conf.local -v /root/bind.conf:/etc/bind/named.conf -v /root/db.rancher.vagrant:/etc/bind/db.rancher.vagrant resystit/bind9:latest 263 | fi 264 | 265 | if [ "$network_mode" == "airgap" ] ; then 266 | 267 | docker run -d -p 7070:7070 --restart=always --name rivapi llparse/registryranch:0.2 268 | sleep 15 269 | curl -Ss "http://localhost:7070/images/$rancher_server_version" | jq -r '.images' | \ 270 | while read key 271 | do 272 | image="${key//\"}" 273 | searchstring=":" 274 | rest=${image#*$searchstring} 275 | if [ "${#rest}" -gt "2" ]; then 276 | imageandtag=(${image//:/ }) 277 | exists=$(curl -Ss http://$cache_ip:5000/v2/${imageandtag[0]}/tags/list | jq -r '.tags' | grep ${imageandtag[1]//,}) 278 | if [ "${#exists}" -gt "2" ]; then 279 | echo "Image $image already in local cache" 280 | else 281 | docker pull ${image//,} 282 | docker tag ${image//,} $cache_ip:5000/${image//,} 283 | docker push $cache_ip:5000/${image//,} 284 | fi 285 | fi 286 | done 287 | exists=$(curl -Ss http://$cache_ip:5000/v2/server/tags/list | jq -r '.tags' | grep $rancher_server_version) 288 | if [ "${#exists}" -gt "2" ]; then 289 | echo "Image rancher/server:$rancher_server_version already in local cache" 290 | else 291 | docker pull rancher/server:$rancher_server_version 292 | docker tag rancher/server:$rancher_server_version $cache_ip:5000/rancher/server:$rancher_server_version 293 | docker push $cache_ip:5000/rancher/server:$rancher_server_version 294 | fi 295 | exists=$(curl -Ss http://$cache_ip:5000/v2/rancher/agent/tags/list | jq -r '.tags' | grep v1.2.5) 296 | if [ "${#exists}" -gt "2" ]; then 297 | echo "Image rancher/agent:v1.2.5 already in local cache" 298 | else 299 | docker pull rancher/agent:v1.2.5 300 | docker tag rancher/agent:v1.2.5 $cache_ip:5000/rancher/agent:v1.2.5 301 | docker push $cache_ip:5000/rancher/agent:v1.2.5 302 | fi 303 | exists=$(curl -Ss http://$cache_ip:5000/v2/curl/tags/list | jq -r '.tags' | grep latest) 304 | if [ "${#exists}" -gt "2" ]; then 305 | echo "Image appropriate/curl already in 
local cache" 306 | else 307 | docker pull appropriate/curl 308 | docker tag appropriate/curl $cache_ip:5000/curl 309 | docker push $cache_ip:5000/curl 310 | fi 311 | fi 312 | 313 | # Mount /vagrant virtualbox filesystem on reboot 314 | echo "if [ -f /var/run/vboxadd-service.pid ]; then 315 | mount -t vboxsf -o uid=900,gid=900,rw vagrant /vagrant 316 | fi" > /etc/rc.local 317 | -------------------------------------------------------------------------------- /vagrant_rancheros_guest_plugin.rb: -------------------------------------------------------------------------------- 1 | require 'ipaddr' 2 | 3 | ## Hacking this until we get a real plugin 4 | 5 | # Borrowing from http://stackoverflow.com/questions/1825928/netmask-to-cidr-in-ruby 6 | IPAddr.class_eval do 7 | def to_cidr 8 | self.to_i.to_s(2).count("1") 9 | end 10 | end 11 | 12 | module VagrantPlugins 13 | module GuestRancherOS 14 | class Guest < Vagrant.plugin("2", :guest) 15 | # Name used for guest detection 16 | GUEST_DETECTION_NAME = "rancheros".freeze 17 | 18 | def detect?(machine) 19 | machine.communicate.test <<-EOH.gsub(/^ */, '') 20 | if test -r /etc/os-release; then 21 | source /etc/os-release && test x#{self.class.const_get(:GUEST_DETECTION_NAME)} = x$ID && exit 22 | fi 23 | if test -x /usr/bin/lsb_release; then 24 | /usr/bin/lsb_release -i 2>/dev/null | grep -qi #{self.class.const_get(:GUEST_DETECTION_NAME)} && exit 25 | fi 26 | if test -r /etc/issue; then 27 | cat /etc/issue | grep -qi #{self.class.const_get(:GUEST_DETECTION_NAME)} && exit 28 | fi 29 | exit 1 30 | EOH 31 | end 32 | end 33 | end 34 | end 35 | 36 | 37 | module VagrantPlugins 38 | module GuestLinux 39 | class Plugin < Vagrant.plugin("2") 40 | guest_capability("linux", "change_host_name") do 41 | Cap::ChangeHostName 42 | end 43 | 44 | guest_capability("linux", "configure_networks") do 45 | Cap::ConfigureNetworks 46 | end 47 | end 48 | end 49 | end 50 | 51 | module VagrantPlugins 52 | module GuestLinux 53 | module Cap 54 | class ConfigureNetworks 55 | def self.configure_networks(machine, networks) 56 | machine.communicate.tap do |comm| 57 | interfaces = [] 58 | comm.sudo("ip link show|grep eth[1-9]|awk '{print $2}'|sed -e 's/:$//'") do |_, result| 59 | interfaces = result.split("\n") 60 | end 61 | 62 | networks.each do |network| 63 | dhcp = "true" 64 | iface = interfaces[network[:interface].to_i - 1] 65 | 66 | if network[:type] == :static 67 | cidr = IPAddr.new(network[:netmask]).to_cidr 68 | comm.sudo("ros config set rancher.network.interfaces.#{iface}.address #{network[:ip]}/#{cidr}") 69 | comm.sudo("ros config set rancher.network.interfaces.#{iface}.match #{iface}") 70 | 71 | dhcp = "false" 72 | end 73 | comm.sudo("ros config set rancher.network.interfaces.#{iface}.dhcp #{dhcp}") 74 | end 75 | 76 | comm.sudo("system-docker restart network") 77 | end 78 | end 79 | end 80 | end 81 | end 82 | end 83 | 84 | module VagrantPlugins 85 | module GuestLinux 86 | module Cap 87 | class ChangeHostName 88 | def self.change_host_name(machine, name) 89 | machine.communicate.tap do |comm| 90 | if !comm.test("sudo hostname --fqdn | grep '#{name}'") 91 | comm.sudo("ros config set hostname #{name}") 92 | end 93 | end 94 | end 95 | end 96 | end 97 | end 98 | end 99 | --------------------------------------------------------------------------------