├── images ├── bosh-arch.png ├── components.png ├── dep-graph.png ├── hostedzone.png ├── bosh-arch.graffle ├── dep-graph.graffle ├── bosh-db-schema.png ├── deploy-sequence.png ├── orchestrate_sm.png ├── release-iterate.png ├── bosh-architecture.png ├── cpi-sequence-diag.png ├── micro-aws │ ├── subnet.png │ ├── enable-auto-assign.png │ └── modify-auto-assign.png ├── orchestrate_sm_3.png ├── powerdns-db-schema.png ├── registry-db-schema.png ├── gcp-service-account.png ├── release-iterate.graffle ├── bosh-architecture.graffle ├── micro-openstack │ ├── keypair.png │ ├── allocate-ip.png │ ├── edit-bosh-sg.png │ ├── edit-ssh-sg.png │ ├── floating-ip.png │ ├── save-keypair.png │ ├── create-bosh-sg.png │ ├── create-keypair.png │ ├── create-ssh-sg.png │ ├── security-groups-2.png │ ├── security-groups.png │ └── create-floating-ip.png ├── bosh-directories-2-level.png ├── cpi-interactions-overview.png ├── bosh-directories-2-level.graffle ├── multi-cpi │ ├── aws-iaas-topology.png │ ├── route-table-az-1.png │ ├── route-table-az-2.png │ ├── peering-connection-accept.png │ └── peering-connection-creation.png └── deploy-microbosh-to-aws │ ├── iam-modal.png │ ├── create-vpc.png │ ├── get-iam-creds.png │ ├── list-iam-users.png │ ├── list-key-pairs.png │ ├── list-subnets.png │ ├── create-iam-users.png │ ├── create-key-pair.png │ ├── list-elastic-ips.png │ ├── access-keys-modal.png │ ├── account-dashboard.png │ ├── attach-iam-policy.png │ ├── create-elastic-ip.png │ ├── vpc-dashboard-start.png │ ├── account-dashboard-vpc.png │ ├── add-iam-inline-policy.png │ ├── create-security-group.png │ ├── list-security-groups.png │ ├── attach-iam-policy-expanded.png │ ├── edit-security-group-rules.png │ ├── security-credentials-menu.png │ ├── account-dashboard-region-menu.png │ ├── open-edit-security-group-modal.png │ └── security-credentials-dashboard.png ├── README.md ├── cli-help.html.md.erb ├── scheduled-procs.html.md.erb ├── rackhd-cpi.html.md.erb ├── NOTICE ├── init-vsphere-rp.html.md.erb ├── init.html.md.erb ├── jumpbox.html.md.erb ├── remove-dev-tools.html.md.erb ├── about.html.md.erb ├── openstack-human-readable-vm-names.html.md.erb ├── openstack-nova-networking.html.md.erb ├── basic-workflow.html.md.erb ├── recent.html.md.erb ├── flush-arp.html.md.erb ├── links-common-types.html.md.erb ├── cli-tunnel.html.md.erb ├── openstack-self-signed-endpoints.html.md.erb ├── openstack-registry.html.md.erb ├── vsphere-vmotion-support.html.md.erb ├── openstack-keystonev2.html.md.erb ├── openstack-light-stemcells.html.md.erb ├── cli-env-deps.html.md.erb ├── instance-metadata.html.md.erb ├── init-external-ip.html.md.erb ├── persistent-disk-fs.html.md.erb ├── links-manual.html.md.erb ├── vm-struct.html.md.erb ├── design.html.md.erb ├── aws-instance-storage.html.md.erb ├── release.html.md.erb ├── vsphere-migrate-datastores.html.md.erb ├── sample-manifest.html.md.erb ├── director-configure-db.html.md.erb ├── vcloud-cpi.html.md.erb ├── update-cloud-config.html.md.erb ├── vsphere-ha.html.md.erb ├── stemcell.html.md.erb ├── deployment.html.md.erb ├── release-blobstore.html.md.erb ├── job-lifecycle.html.md.erb ├── migrate-to-bosh-init.html.md.erb ├── uploading-stemcells.html.md.erb ├── managing-stemcells.html.md.erb ├── init-google.html.md.erb ├── install-bosh-init.html.md.erb ├── director-access-events.html.md.erb ├── openstack-multiple-networks.html.md.erb ├── bosh-cli.html.md.erb ├── deploying.html.md.erb ├── addons-common.html.md.erb ├── post-start.html.md.erb ├── monitoring.html.md.erb ├── compiled-releases.html.md.erb ├── 
init-vcloud.html.md.erb ├── director-backup.html.md.erb ├── trusted-certs.html.md.erb ├── deployment-basics.html.md.erb ├── pre-start.html.md.erb ├── director-users.html.md.erb ├── post-deploy.html.md.erb ├── hm-config.html.md.erb ├── aws-iam-users.html.md.erb ├── cli-global-flags.html.md.erb ├── problems.html.md.erb ├── understanding-bosh.html.md.erb ├── variable-types.html.md.erb ├── uploading-releases.html.md.erb ├── vsphere-esxi-host-failure.html.md.erb ├── init-azure.html.md.erb ├── release-blobs.html.md.erb ├── links-properties.html.md.erb ├── drain.html.md.erb ├── vsphere-network-partition.html.md.erb ├── windows-sample-release.html.md.erb ├── props-common.html.md.erb ├── init-softlayer.html.md.erb ├── warden-cpi.html.md.erb ├── virtualbox-cpi.html.md.erb ├── migrated-from.html.md.erb ├── director-certs.html.md.erb ├── s3-release-blobstore.html.md.erb ├── init-vsphere.html.md.erb ├── director-users-uaa-perms.html.md.erb ├── director-certs-openssl.html.md.erb ├── windows.html.md.erb └── director-configure-blobstore.html.md.erb /images/bosh-arch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/bosh-arch.png -------------------------------------------------------------------------------- /images/components.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/components.png -------------------------------------------------------------------------------- /images/dep-graph.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/dep-graph.png -------------------------------------------------------------------------------- /images/hostedzone.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/hostedzone.png -------------------------------------------------------------------------------- /images/bosh-arch.graffle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/bosh-arch.graffle -------------------------------------------------------------------------------- /images/dep-graph.graffle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/dep-graph.graffle -------------------------------------------------------------------------------- /images/bosh-db-schema.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/bosh-db-schema.png -------------------------------------------------------------------------------- /images/deploy-sequence.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-sequence.png -------------------------------------------------------------------------------- /images/orchestrate_sm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/orchestrate_sm.png -------------------------------------------------------------------------------- /images/release-iterate.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/release-iterate.png -------------------------------------------------------------------------------- /images/bosh-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/bosh-architecture.png -------------------------------------------------------------------------------- /images/cpi-sequence-diag.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/cpi-sequence-diag.png -------------------------------------------------------------------------------- /images/micro-aws/subnet.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-aws/subnet.png -------------------------------------------------------------------------------- /images/orchestrate_sm_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/orchestrate_sm_3.png -------------------------------------------------------------------------------- /images/powerdns-db-schema.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/powerdns-db-schema.png -------------------------------------------------------------------------------- /images/registry-db-schema.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/registry-db-schema.png -------------------------------------------------------------------------------- /images/gcp-service-account.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/gcp-service-account.png -------------------------------------------------------------------------------- /images/release-iterate.graffle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/release-iterate.graffle -------------------------------------------------------------------------------- /images/bosh-architecture.graffle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/bosh-architecture.graffle -------------------------------------------------------------------------------- /images/micro-openstack/keypair.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/keypair.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # BOSH Documentation 2 | 3 | This is a guide for anyone using BOSH, the Outer Shell to Cloud Foundry. 
4 | 5 | 6 | -------------------------------------------------------------------------------- /images/bosh-directories-2-level.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/bosh-directories-2-level.png -------------------------------------------------------------------------------- /images/cpi-interactions-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/cpi-interactions-overview.png -------------------------------------------------------------------------------- /images/bosh-directories-2-level.graffle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/bosh-directories-2-level.graffle -------------------------------------------------------------------------------- /images/micro-aws/enable-auto-assign.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-aws/enable-auto-assign.png -------------------------------------------------------------------------------- /images/micro-aws/modify-auto-assign.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-aws/modify-auto-assign.png -------------------------------------------------------------------------------- /images/micro-openstack/allocate-ip.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/allocate-ip.png -------------------------------------------------------------------------------- /images/micro-openstack/edit-bosh-sg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/edit-bosh-sg.png -------------------------------------------------------------------------------- /images/micro-openstack/edit-ssh-sg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/edit-ssh-sg.png -------------------------------------------------------------------------------- /images/micro-openstack/floating-ip.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/floating-ip.png -------------------------------------------------------------------------------- /images/micro-openstack/save-keypair.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/save-keypair.png -------------------------------------------------------------------------------- /images/multi-cpi/aws-iaas-topology.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/multi-cpi/aws-iaas-topology.png -------------------------------------------------------------------------------- /images/multi-cpi/route-table-az-1.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/comcast/docs-bosh/master/images/multi-cpi/route-table-az-1.png -------------------------------------------------------------------------------- /images/multi-cpi/route-table-az-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/multi-cpi/route-table-az-2.png -------------------------------------------------------------------------------- /images/micro-openstack/create-bosh-sg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/create-bosh-sg.png -------------------------------------------------------------------------------- /images/micro-openstack/create-keypair.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/create-keypair.png -------------------------------------------------------------------------------- /images/micro-openstack/create-ssh-sg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/create-ssh-sg.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/iam-modal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/iam-modal.png -------------------------------------------------------------------------------- /images/micro-openstack/security-groups-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/security-groups-2.png -------------------------------------------------------------------------------- /images/micro-openstack/security-groups.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/security-groups.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/create-vpc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/create-vpc.png -------------------------------------------------------------------------------- /images/micro-openstack/create-floating-ip.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/micro-openstack/create-floating-ip.png -------------------------------------------------------------------------------- /images/multi-cpi/peering-connection-accept.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/multi-cpi/peering-connection-accept.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/get-iam-creds.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/get-iam-creds.png 
-------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/list-iam-users.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/list-iam-users.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/list-key-pairs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/list-key-pairs.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/list-subnets.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/list-subnets.png -------------------------------------------------------------------------------- /images/multi-cpi/peering-connection-creation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/multi-cpi/peering-connection-creation.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/create-iam-users.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/create-iam-users.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/create-key-pair.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/create-key-pair.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/list-elastic-ips.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/list-elastic-ips.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/access-keys-modal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/access-keys-modal.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/account-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/account-dashboard.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/attach-iam-policy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/attach-iam-policy.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/create-elastic-ip.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/create-elastic-ip.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/vpc-dashboard-start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/vpc-dashboard-start.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/account-dashboard-vpc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/account-dashboard-vpc.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/add-iam-inline-policy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/add-iam-inline-policy.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/create-security-group.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/create-security-group.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/list-security-groups.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/list-security-groups.png -------------------------------------------------------------------------------- /cli-help.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: CLI v2 Commands 3 | --- 4 | 5 |

Note: Examples require CLI v2.

6 | 7 | This document lists all CLI v2 commands: 8 | -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/attach-iam-policy-expanded.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/attach-iam-policy-expanded.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/edit-security-group-rules.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/edit-security-group-rules.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/security-credentials-menu.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/security-credentials-menu.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/account-dashboard-region-menu.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/account-dashboard-region-menu.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/open-edit-security-group-modal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/open-edit-security-group-modal.png -------------------------------------------------------------------------------- /images/deploy-microbosh-to-aws/security-credentials-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/comcast/docs-bosh/master/images/deploy-microbosh-to-aws/security-credentials-dashboard.png -------------------------------------------------------------------------------- /scheduled-procs.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Scheduled Processes 3 | --- 4 | 5 | Jobs can use cron installed on the stemcell to schedule processes. In a [pre-start script](pre-start.html), copy a script from your job's `bin` directory to one of the following locations: 6 | 7 | - `/etc/cron.hourly` 8 | - `/etc/cron.daily` 9 | - `/etc/cron.weekly` 10 | 11 | You can also create a file in `/etc/cron.d` for full control over the cron schedule. 12 | -------------------------------------------------------------------------------- /rackhd-cpi.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: RackHD CPI 3 | --- 4 | 5 | [RackHD CPI](https://bosh.io/releases/github.com/cloudfoundry-incubator/bosh-rackhd-cpi-release) works with [OpenStack raw stemcells](https://bosh.io/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent-raw). 6 | 7 | See more details on Github: [https://github.com/cloudfoundry-incubator/bosh-rackhd-cpi-release]. 
8 | 9 | --- 10 | [Back to Table of Contents](index.html#cpi-config) 11 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2015-Present CloudFoundry.org Foundation, Inc. All Rights Reserved. 2 | 3 | This project contains software that is Copyright (c) 2012-2015 Pivotal Software, Inc. 4 | 5 | This project is licensed to you under the Apache License, Version 2.0 (the "License"). 6 | You may not use this project except in compliance with the License. 7 | 8 | This project may include a number of subcomponents with separate copyright notices 9 | and license terms. Your use of these subcomponents is subject to the terms and 10 | conditions of the subcomponent's license, as noted in the LICENSE file. 11 | -------------------------------------------------------------------------------- /init-vsphere-rp.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Deploying Director into a vSphere Resource Pool 3 | --- 4 | 5 | If the BOSH Director must be deployed into a vSphere resource pool, use the following additional CLI arguments when creating the BOSH environment: 6 | 7 |<pre>
 8 | $ bosh create-env bosh-deployment/bosh.yml \
 9 |     -o ... \
10 |     -o bosh-deployment/vsphere/resource-pool.yml \
11 |     -v ... \
12 |     -v vcenter_rp=my-bosh-rp
13 | 
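For reference, `resource-pool.yml` lives in the [bosh-deployment](https://github.com/cloudfoundry/bosh-deployment) repository. A minimal sketch of what such an ops file contains (illustrative only; consult the repository for the authoritative version):

```yaml
# Sketch, not the verbatim ops file: scopes VM placement to the
# resource pool supplied via the vcenter_rp variable.
- type: replace
  path: /resource_pools/name=vms/cloud_properties/datacenters
  value:
  - name: ((vcenter_dc))
    clusters:
    - ((vcenter_cluster)): {resource_pool: ((vcenter_rp))}
```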
14 | 15 |

Note that the vSphere resource pool must already be created before running the `bosh create-env` command. 16 | -------------------------------------------------------------------------------- /init.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Create an Environment 3 | --- 4 | 5 | An environment consists of the Director and the deployments that it orchestrates. First we deploy the Director, which can then manage other deployments. 6 | 7 | Choose where to install the Director: 8 | 9 | - [On Local machine](bosh-lite.html) [recommended for a quick test] 10 | - [On AWS](init-aws.html) 11 | - [On Azure](init-azure.html) 12 | - [On Google](init-google.html) 13 | - [On OpenStack](init-openstack.html) 14 | - [On SoftLayer](init-softlayer.html) 15 | - [On vCloud](init-vcloud.html) 16 | - [On vSphere](init-vsphere.html) 17 | -------------------------------------------------------------------------------- /jumpbox.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Jumpbox 3 | --- 4 | 5 | It's recommended to: 6 | 7 | - maintain a separate jumpbox VM for your environment 8 | - avoid SSHing directly to the Director 9 | - use `bosh ssh` to access VMs in your deployments, with the jumpbox VM as your SSH gateway 10 | 11 | To obtain SSH access to the Director VM when necessary, you can opt into the `jumpbox-user.yml` ops file when running the [`bosh create-env` command](cli-v2.html#create-env). It adds a `jumpbox` user to the VM (using the `user_add` job from `cloudfoundry/os-conf-release`). 12 | 13 |

14 | $ bosh int creds.yml --path /jumpbox_ssh/private_key > jumpbox.key
15 | $ chmod 600 jumpbox.key
16 | $ ssh jumpbox@<external-or-internal-ip> -i jumpbox.key
17 | 
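The jumpbox can then serve as the gateway for `bosh ssh` into deployment VMs. A sketch assuming the gateway flags of CLI v2 (confirm the exact flag names with `bosh ssh --help`):

<pre>
$ bosh -e my-env -d my-dep ssh web/0 \
    --gw-user jumpbox --gw-host <jumpbox-ip> --gw-private-key jumpbox.key
</pre>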
18 | -------------------------------------------------------------------------------- /remove-dev-tools.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Removal of compilers and other development tools 3 | --- 4 | 5 |

Note: This feature is available with bosh-release v255.4+ and on 3213+ stemcell series.

6 | 7 | It's typically unnecessary to have development tools installed on all VMs in a production environment. All stemcells come with a minimal set of development tools so that compilation workers can successfully compile packages. 8 | 9 | To see which files and directories will be removed from the stemcell on bootup, unpack the stemcell tarball and view `dev_tools_file_list.txt`. 10 | 11 | To remove development tools from all non-compilation VMs: 12 | 13 | 1. Change the deployment manifest for the Director: 14 | 15 | ```yaml 16 | properties: 17 | director: 18 | remove_dev_tools: true 19 | ``` 20 | 21 | 1. Redeploy the Director. 22 | 23 | 1. Run `bosh recreate` for your deployments. 24 | -------------------------------------------------------------------------------- /about.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: What is BOSH? 3 | --- 4 | 5 | BOSH is a project that unifies release engineering, deployment, and lifecycle management of small and large-scale cloud software. BOSH can provision and deploy software over hundreds of VMs. It also performs monitoring, failure recovery, and software updates with zero-to-minimal downtime. 6 | 7 | While BOSH was developed to deploy Cloud Foundry PaaS, it can also be used to deploy almost any other software (Hadoop, for instance). BOSH is particularly well-suited for large distributed systems. In addition, BOSH supports multiple Infrastructure as a Service (IaaS) providers like VMware vSphere, Google Cloud Platform, Amazon Web Services EC2, and OpenStack. There is a Cloud Provider Interface (CPI) that enables users to extend BOSH to support additional IaaS providers such as Apache CloudStack and VirtualBox. 8 | 9 | --- 10 | Next: [What problems does BOSH solve?](problems.html) 11 | -------------------------------------------------------------------------------- /openstack-human-readable-vm-names.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Using human-readable VM names instead of UUIDs 3 | --- 4 | 5 |

Note: This feature is available with bosh-openstack-cpi v23+.

6 | 7 | You can enable human-readable VM names in your Director manifest to get VMs with names such as `runner_z1/0` instead of UUIDs such as `vm-3151dbb0-7cea-475b-9ff8-7faa94a8188e`. 8 | 9 | 1. Enable the human-readable-vm-names feature 10 | 11 | ```yaml 12 | properties: 13 | openstack: 14 | human_readable_vm_names: true 15 | ``` 16 | 17 | 1. Set the `registry.endpoint` configuration to [include basic auth credentials](openstack-registry.html) 18 | 19 | 1. Deploy the Director 20 | 21 | --- 22 | [Back to Table of Contents](index.html#cpi-config) 23 | 24 | Next: [Validating self-signed OpenStack endpoints](openstack-self-signed-endpoints.html) 25 | 26 | Previous: [Extended Registry configuration](openstack-registry.html) 27 | -------------------------------------------------------------------------------- /openstack-nova-networking.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Using nova-networking 3 | --- 4 | 5 |

Note: This feature is available with bosh-openstack-cpi v28+.

6 | 7 | The OpenStack CPI v28+ uses neutron networking by default. This document describes how to enable nova-networking instead if your OpenStack installation doesn't provide neutron. **Note:** nova-networking is deprecated as of the OpenStack Newton release and will be removed in the future. 8 | 9 | 1. Configure OpenStack CPI 10 | 11 | In `properties.openstack`: 12 | - add property `use_nova_networking: true` 13 | 14 | ```yaml 15 | properties: 16 | openstack: &openstack 17 | (...) 18 | use_nova_networking: true 19 | ``` 20 | 21 | 1. Deploy the Director 22 | 23 | --- 24 | [Back to Table of Contents](index.html#cpi-config) 25 | 26 | Next: [Extended Registry configuration](openstack-registry.html) 27 | 28 | Previous: [Using Keystone API v2](openstack-keystonev2.html) 29 | -------------------------------------------------------------------------------- /basic-workflow.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Deploy Workflow 3 | --- 4 | 5 | (Follow [Create an Environment](init.html) to create the Director.) 6 | 7 | The Director can manage multiple [deployments](terminology.html#deployment). 8 | 9 | Creating a deployment consists of the following steps: 10 | 11 | - Create a [cloud config](terminology.html#cloud-config) 12 | - Create a [deployment manifest](terminology.html#manifest) 13 | - Upload the stemcells and releases referenced by the deployment manifest 14 | - Kick off the deploy to create the deployment on the Director 15 | 16 | Updating an existing deployment is the same procedure: 17 | 18 | - Update the cloud config if anything changed 19 | - Update the deployment manifest if anything changed 20 | - Upload new stemcells and/or releases if necessary 21 | - Kick off the deploy to apply changes to the deployment 22 | 23 | In the next several steps we are going to deploy a simple [Zookeeper](https://en.wikipedia.org/wiki/Apache_ZooKeeper) deployment to the Director.
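With CLI v2, these steps map roughly onto the following commands (environment, deployment, and file names are illustrative):

<pre>
$ bosh -e my-env update-cloud-config cloud-config.yml
$ bosh -e my-env upload-stemcell stemcell.tgz
$ bosh -e my-env upload-release zookeeper-release.tgz
$ bosh -e my-env -d zookeeper deploy zookeeper.yml
</pre>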
24 | 25 | --- 26 | Next: [Update Cloud Config](update-cloud-config.html) 27 | -------------------------------------------------------------------------------- /recent.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Recent Additions and Updates 3 | --- 4 | 5 | * [Package vendoring](package-vendoring.html) 6 | * [Native DNS Support](dns.html) [alpha] 7 | 8 | --- 9 | 10 | * [CLI v2](cli-v2.html) 11 | * [CPI config](cpi-config.html) 12 | 13 | --- 14 | 15 | * [Addon placement rules](runtime-config.html#addons) 16 | * [Director-wide tagging](runtime-config.html#tags) 17 | * [Deployment tagging](manifest-v2.html#tags) 18 | 19 | --- 20 | 21 | * [Link properties](links-properties.html) 22 | * [Manual linking](links-manual.html) 23 | * [Explicit ARP Flushing](flush-arp.html) 24 | * [Events](events.html) 25 | * [Access event logging](director-access-events.html) 26 | * [Customizing persistent disk FS](persistent-disk-fs.html) 27 | 28 | --- 29 | 30 | * [Common addons](addons-common.html) 31 | 32 | --- 33 | 34 | * [Manifest v2](manifest-v2.html) 35 | * [Cloud config](cloud-config.html) 36 | * [AZs](azs.html) 37 | * [Links](links.html) 38 | * [Runtime config](runtime-config.html) 39 | * [Renaming/migrating jobs](migrated-from.html) 40 | * [Persistent and orphaned disks](persistent-disks.html) 41 | -------------------------------------------------------------------------------- /flush-arp.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Explicit ARP Flushing 3 | --- 4 | 5 |

Note: This feature is available with bosh-release v256+ and 3232+ stemcell series.

6 | 7 | Certain IaaSes may limit and/or disable gratuitous ARP for security reasons (for example, AWS). The Linux kernel performs periodic garbage collection of stale ARP entries; however, if there are open or stale connections, these entries are not cleared, causing new connections to fail because they keep using an existing *outdated* MAC address. 8 | 9 | The Director is fully in control of when VMs are created, so it can communicate with the other VMs it manages and issue an explicit `delete_arp_entries` Agent RPC call to clear stale ARP entries. 10 | 11 | To enable this feature: 12 | 13 | 1. Add [director.flush_arp](http://bosh.io/jobs/director?source=github.com/cloudfoundry/bosh#p=director.flush_arp) to the deployment manifest for the Director: 14 | 15 | ```yaml 16 | properties: 17 | director: 18 | flush_arp: true 19 | ``` 20 | 21 | 1. Redeploy the Director. 22 | 23 | --- 24 | [Back to Table of Contents](index.html#director-config) 25 | -------------------------------------------------------------------------------- /links-common-types.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Links - Common types 3 | --- 4 | 5 | Commonly suggested link types that release authors should conform to when these types are used. 6 | 7 | --- 8 | ## `database` type 9 | 10 | `link('...').address` returns the database address. 11 | 12 | * **adapter** [String]: Name of the adapter. 13 | * **port** [Integer]: Port to connect on. 14 | * **tls** [Hash]: See [TLS properties](props-common.html#tls) 15 | * **username** [String]: Database username to use. Example: `u387455`. 16 | * **password** [String]: Database password to use. 17 | * **database** [String]: Database name. Example: `app-server` 18 | * **options** [Hash, optional]: Opaque set of options for the adapter. Example: `{max_connections: 32, pool_timeout: 10}`. 19 | 20 | --- 21 | ## `blobstore` type 22 | 23 | `link('...').address` returns the blobstore address. 24 | 25 | * **adapter** [String]: Name of the adapter (s3, gcp, etc.). 26 | * **tls** [Hash]: See [TLS properties](props-common.html#tls) 27 | * **options** [Hash, optional]: Opaque set of options. Example: `{aws_access_key: ..., secret_access_key: ...}`. 28 | -------------------------------------------------------------------------------- /cli-tunnel.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: CLI Tunneling 3 | --- 4 | 5 |

Note: Applies to CLI v2.

6 | 7 | The CLI supports tunneling all of its traffic (HTTP and SSH) through a SOCKS 5 proxy specified via the `BOSH_ALL_PROXY` environment variable. (A custom environment variable was chosen instead of the `all_proxy` environment variable to avoid accidentally tunneling non-CLI traffic.) 8 | 9 | Common use cases for tunneling through a jumpbox VM include: 10 | 11 | - deploying the Director VM with the `bosh create-env` command 12 | - accessing the Director and UAA APIs 13 | 14 |<pre>
15 | # set a tunnel
16 | # -D : local SOCKS port
17 | # -f : forks the process in the background
18 | # -C : compresses data before sending
19 | # -N : tells SSH that no command will be sent once the tunnel is up
20 | # -4 : force SSH to use IPv4 to avoid the dreaded `bind: Cannot assign requested address` error
21 | $ ssh -4 -D 5000 -fNC jumpbox@jumpbox-ip -i jumpbox.key
22 | 
23 | # let CLI know via environment variable
24 | $ export BOSH_ALL_PROXY=socks5://localhost:5000
25 | 
26 | $ bosh create-env bosh-deployment/bosh.yml ...
27 | $ bosh alias-env aws -e director-ip --ca-cert ...
28 | 
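Newer CLI versions can also open the SSH tunnel themselves when `BOSH_ALL_PROXY` uses an `ssh+socks5` scheme, skipping the manual `ssh -D` step. A sketch (verify that your CLI version supports this scheme before relying on it):

<pre>
$ export BOSH_ALL_PROXY=ssh+socks5://jumpbox@jumpbox-ip:22?private-key=jumpbox.key
</pre>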
29 | -------------------------------------------------------------------------------- /openstack-self-signed-endpoints.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Validating self-signed OpenStack endpoints 3 | --- 4 | 5 |

Note: This feature is available with bosh-openstack-cpi v23+.

6 | 7 | When your OpenStack installation uses a self-signed certificate, you need to enable the OpenStack CPI to validate it. To do so, configure the OpenStack CPI with the public certificate of the root CA that signed the OpenStack endpoint certificate. 8 | 9 | 1. Configure `properties.openstack.connection_options` to include the property `ca_cert`. It can contain one or more certificates. 10 | 11 | ```yaml 12 | properties: 13 | openstack: 14 | connection_options: 15 | ca_cert: |+ 16 | -----BEGIN CERTIFICATE----- 17 | MII... 18 | -----END CERTIFICATE----- 19 | ``` 20 | 21 | 1. Set the `registry.endpoint` configuration to [include basic auth credentials](openstack-registry.html) 22 | 23 | 1. Deploy the Director 24 | 25 | 26 | --- 27 | [Back to Table of Contents](index.html#cpi-config) 28 | 29 | Next: [Multi-homed VMs](openstack-multiple-networks.html) 30 | 31 | Previous: [Using human-readable VM names](openstack-human-readable-vm-names.html) 32 | -------------------------------------------------------------------------------- /openstack-registry.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Extended Registry configuration 3 | --- 4 | 5 |

Note: We are actively working to remove the Registry to simplify the BOSH architecture.

6 | 7 | The default configuration typically uses the Registry with IP source authentication. Under certain networking configurations (e.g. NAT), IP source authentication may not work correctly, so switching to basic authentication is necessary. 8 | 9 | 1. Configure the Registry and the OpenStack CPI to use basic authentication by setting `registry.endpoint` 10 | 11 | ```yaml 12 | properties: 13 | registry: 14 | address: PRIVATE-IP 15 | host: PRIVATE-IP 16 | db: *db 17 | http: {user: admin, password: admin-password, port: 25777} 18 | username: admin 19 | password: admin-password 20 | port: 25777 21 | endpoint: http://admin:admin-password@PRIVATE-IP:25777 # <--- 22 | ``` 23 | 24 | 1. Deploy the Director 25 | 26 | --- 27 | [Back to Table of Contents](index.html#cpi-config) 28 | 29 | Next: [Using human-readable VM names](openstack-human-readable-vm-names.html) 30 | 31 | Previous: [Using nova-networking](openstack-nova-networking.html) 32 | -------------------------------------------------------------------------------- /vsphere-vmotion-support.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Storage DRS and vMotion Support 3 | --- 4 | 5 |

Note: Storage DRS and vMotion can be used with bosh-vsphere-cpi v18+.

6 | 7 |

Warning: If a VM was accidentally deleted after a disk was migrated via DRS or vMotion, BOSH may be unable to locate the disk.

8 | 9 | Typically, Storage DRS and vMotion move attached persistent disks with the VM. 10 | When doing so, they rename the attached disks and place them into the moved VM's folder (typically called `vm-`). 11 | Prior to bosh-vsphere-cpi v18, Storage DRS and vMotion were not supported since the CPI was unable to locate renamed disks. 12 | Later versions of the CPI are able to locate disks migrated by vSphere as long as the disks are attached to the VMs. 13 | 14 | As VMs are recreated, the CPI will move persistent disks out of VM folders so that they are not deleted with the VMs. 15 | This procedure happens automatically when VMs are deleted (in the `delete_vm` CPI call) and when disks are detached (in the `detach_disk` CPI call). 16 | 17 | --- 18 | [Back to Table of Contents](index.html#cpi-config) 19 | 20 | Previous: [Migrating from one datastore to another](vsphere-migrate-datastores.html) 21 | -------------------------------------------------------------------------------- /openstack-keystonev2.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Using Keystone API v2 3 | --- 4 | 5 | The default configuration typically uses the Keystone v3 API. This document describes how to use Keystone v2 if your OpenStack installation enforces it. 6 | 7 | 1. Configure OpenStack CPI 8 | 9 | In `properties.openstack`: 10 | - switch the property `auth_url` to the v2 endpoint. 11 |
Note: path is `v2.0` including the minor revision!
12 | - add property `tenant` 13 | - remove properties `domain` and `project` 14 | 15 | ```yaml 16 | properties: 17 | openstack: &openstack 18 | auth_url: https://keystone.my-openstack.com:5000/v2.0 # <--- Replace with Keystone URL 19 | tenant: OPENSTACK-TENANT # <--- Replace with OpenStack tenant name 20 | username: OPENSTACK-USERNAME # <--- Replace with OpenStack username 21 | api_key: OPENSTACK-PASSWORD # <--- Replace with OpenStack password 22 | default_key_name: bosh 23 | default_security_groups: [bosh] 24 | ``` 25 | 26 | 1. Deploy the Director 27 | 28 | --- 29 | [Back to Table of Contents](index.html#cpi-config) 30 | 31 | Next: [Using nova-networking](openstack-nova-networking.html) 32 | 33 | Previous: [OpenStack](openstack-cpi.html) 34 | -------------------------------------------------------------------------------- /openstack-light-stemcells.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Using Light Stemcells 3 | --- 4 | 5 |

Note: This feature is available with bosh-openstack-cpi v28+.

6 | 7 | You can create your own OpenStack light stemcells to re-use stemcell images already uploaded to your OpenStack image store. **Note:** Future deployments will fail if the stemcell image referenced by a light stemcell is removed from your OpenStack image store. 8 | 9 | 1. Retrieve the UUID of an already uploaded stemcell image, e.g. with `openstack image list` 10 | 1. Retrieve the operating system and version of the stemcell from the image metadata, e.g. with `openstack image show <image-id>` 11 | 1. Use the [`create_light_stemcell`](https://github.com/cloudfoundry-incubator/bosh-openstack-cpi-release/blob/master/scripts/create_light_stemcell) script to create a light stemcell archive 12 | 13 | ``` 14 | $ ./create_light_stemcell --os ubuntu-trusty --version 3312 --image-id 1234-567-890 15 | ``` 16 | 17 | You can use the light stemcell archive like a regular stemcell archive in BOSH deployment manifests and with bosh-init. 18 | 19 | --- 20 | [Back to Table of Contents](index.html#cpi-config) 21 | 22 | Previous: [Multi-homed VMs](openstack-multiple-networks.html) 23 | -------------------------------------------------------------------------------- /cli-env-deps.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: CLI `create-env` Dependencies 3 | --- 4 | 5 |

Note: Applies to CLI v2.

6 | 7 | The `bosh create-env` (and `bosh delete-env`) command has dependencies (similar to `bosh-init`). 8 | 9 | 1. Depending on your platform, install the following packages: 10 | 11 | **Ubuntu Trusty** 12 | 13 |<pre>
14 |     $ sudo apt-get update
15 |     $ sudo apt-get install -y build-essential zlibc zlib1g-dev ruby ruby-dev openssl libxslt-dev libxml2-dev libssl-dev libreadline6 libreadline6-dev libyaml-dev libsqlite3-dev sqlite3
16 |     
17 | 18 | **CentOS** 19 | 20 |
21 |     $ sudo yum install gcc gcc-c++ ruby ruby-devel mysql-devel postgresql-devel postgresql-libs sqlite-devel libxslt-devel libxml2-devel patch openssl
22 |     $ gem install yajl-ruby
23 |     
24 | 25 | **Mac OS X** 26 | 27 | Install Apple Command Line Tools: 28 |
29 |     $ xcode-select --install
30 |     
31 | 32 | Use [Homebrew](http://brew.sh) to install OpenSSL: 33 |
34 |     $ brew install openssl
35 |     
36 | 37 | 1. Make sure Ruby is installed (any version is adequate): 38 | 39 |
40 |     $ ruby -v
41 |     ruby 2.2.3p173 (2015-08-18 revision 51636) [x86_64-darwin14]
42 |     
43 | -------------------------------------------------------------------------------- /instance-metadata.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Instance Metadata 3 | --- 4 | 5 |

Note: This feature is available with bosh-release v255.4+ and on 3213+ stemcell series.

6 | 7 | It's common for software to need to know where it was deployed so that it can make application-level decisions or propagate location information in logs and metrics. The Director provides instance-specific information in multiple ways: 8 | 9 | ## Via ERB templates 10 | 11 | Use the [`spec` variable](jobs.html#properties-spec) in ERB templates to get access to the AZ, deployment name, ID, etc. 12 | 13 | ## Via filesystem 14 | 15 | Accessing information over the filesystem might be useful when building core libraries so that explicit configuration is not required. Each VM has a `/var/vcap/instance` directory that contains the following files: 16 | 17 |<pre>
18 | vcap@7e87e912-35cc-4b43-9645-5a7d7d6f2caa:~$ ls -la /var/vcap/instance/
19 | total 24
20 | drwxr-xr-x  2 root root 4096 Mar 17 00:06 .
21 | drwxr-xr-x 11 root root 4096 Mar 17 00:16 ..
22 | -rw-r--r--  1 root root    2 Mar 17 00:07 az
23 | -rw-r--r--  1 root root   10 Mar 17 00:07 deployment
24 | -rw-r--r--  1 root root   36 Mar 17 00:07 id
25 | -rw-r--r--  1 root root    3 Mar 17 00:07 name
26 | 
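A release job can consume these files directly from a script, for example (a minimal sketch using the paths listed above):

<pre>
AZ="$(cat /var/vcap/instance/az)"
DEPLOYMENT="$(cat /var/vcap/instance/deployment)"
echo "starting in deployment=${DEPLOYMENT} az=${AZ}"
</pre>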
27 | 28 | Example values: 29 | 30 | - AZ: `z1` 31 | - Deployment: `redis` 32 | - ID: `fdfad8ca-a213-4a1c-b1f6-c53f86bb896a` 33 | - Name: `redis` 34 | -------------------------------------------------------------------------------- /init-external-ip.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Expose Director on a public IP 3 | --- 4 | 5 | It's strongly recommended to not allow ingress traffic to the Director VM via a public IP. One way to achieve that is to use a [jumpbox](terminology.html#jumpbox) to access internal networks. 6 | 7 | If you do have a jumpbox, consider using the [CLI tunneling functionality](cli-tunnel.html) instead of running the CLI from the jumpbox VM. 8 | 9 | When it's not desirable or possible to have a jumpbox, you can use the following steps to assign a public IP to the Director VM. 10 | 11 | For CPIs that do not use the registry (Google, vSphere, vCloud): 12 | 13 |<pre>
14 | $ bosh create-env bosh-deployment/bosh.yml \
15 |     -o ... \
16 |     -o bosh-deployment/external-ip-not-recommended.yml \
17 |     -v ... \
18 |     -v external_ip=12.34.56.78
19 | 
20 | 21 | Or for CPIs that do use registry (AWS, Azure, and OpenStack): 22 | 23 |
24 | $ bosh create-env bosh-deployment/bosh.yml \
25 |     -o ... \
26 |     -o bosh-deployment/external-ip-with-registry-not-recommended.yml \
27 |     -v ... \
28 |     -v external_ip=12.34.56.78
29 | 
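Afterwards, one way to confirm the Director is reachable on its public IP (alias and file names are illustrative):

<pre>
$ bosh alias-env my-env -e 12.34.56.78 --ca-cert <(bosh int creds.yml --path /director_ssl/ca)
$ bosh -e my-env env
</pre>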
30 | 31 |

Note that if you have already run the `bosh create-env` command before adding the above ops file, you may have to remove the generated Director (and other components such as UAA) SSL certificates from the variables store so that the SSL certificates can be regenerated with the correct SANs. 32 | -------------------------------------------------------------------------------- /persistent-disk-fs.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Customizing Persistent Disk Filesystem 3 | --- 4 | 5 |

Note: This feature is available with 3215+ stemcell series.

6 | 7 | Certain releases operate more reliably when persistent data is stored on a particular filesystem. The Agent currently supports two different persistent disk filesystem types: `ext4` (default) and `xfs`. 8 | 9 | Here is an example: 10 | 11 | ```yaml 12 | instance_groups: 13 | - name: mongo 14 | instances: 3 15 | jobs: 16 | - name: mongo 17 | release: mongo 18 | # ... 19 | persistent_disk: 10_000 20 | env: 21 | persistent_disk_fs: xfs 22 | ``` 23 | 24 | Currently this configuration lives in the instance group `env` configuration. (Eventually we will move this configuration onto the disk type where it belongs.) There are a few gotchas: 25 | 26 | - changing `persistent_disk_fs` in any way (even if just explicitly setting the default of `ext4`) results in a VM recreation (but reuses the *same* disk) 27 | - changing `persistent_disk_fs` for an instance group that previously had a persistent disk will not simply reformat the existing disk 28 | 29 | To move persistent data to a new persistent disk formatted with a new filesystem, you have to set the `persistent_disk_fs` configuration *and* change the disk size. If there was no existing persistent disk (for example, for a new deployment), the Agent will format the new disk as requested. 30 | 31 | --- 32 | [Back to Table of Contents](index.html#deployment-config) 33 | -------------------------------------------------------------------------------- /links-manual.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Manual Linking 3 | --- 4 | 5 | (See [Links](links.html) and [Link properties](links-properties.html) for an introduction.) 6 | 7 |

Note: This feature is available with bosh-release v256+.

8 | 9 | For components/endpoints that are not managed by the Director or cannot be linked, the operator can explicitly specify full link details in the manifest. This allows release authors to continue exposing a single interface (a link) for connecting configuration, instead of exposing ad-hoc job properties for use when a link is not provided. 10 | 11 | From our previous example, here is a `web` job that communicates with a database: 12 | 13 | ```yaml 14 | name: web 15 | 16 | templates: 17 | config.erb: config/conf 18 | 19 | consumes: 20 | - name: primary_db 21 | type: db 22 | 23 | provides: 24 | - name: incoming 25 | type: http 26 | 27 | properties: {...} 28 | ``` 29 | 30 | And here is how the operator can configure the `web` job to talk to an RDS instance instead of a Postgres server deployed with BOSH: 31 | 32 | ```yaml 33 | instance_groups: 34 | - name: app 35 | jobs: 36 | - name: web 37 | release: my-app 38 | consumes: 39 | primary_db: 40 | instances: 41 | - address: teswfbquts.cabsfabuo7yr.us-east-1.rds.amazonaws.com 42 | properties: 43 | port: 3306 44 | adapter: mysql2 45 | username: some-username 46 | password: some-password 47 | name: my-app 48 | ``` 49 | 50 | --- 51 | [Back to Table of Contents](index.html#deployment-config) 52 | -------------------------------------------------------------------------------- /vm-struct.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Structure of a BOSH VM 3 | --- 4 | 5 | All managed VMs include: 6 | 7 | - BOSH Agent 8 | - Monit daemon 9 | - `/var/vcap/` directory 10 | 11 | Each BOSH managed VM may be assigned a single copy of a deployment job to run. At that point the VM is considered to be an instance of that deployment job -- it has a name and an index (e.g. `redis-server/0`). 12 | 13 | When the assignment is made, the Agent will populate the `/var/vcap/` directory with the release jobs specified in the deployment job definition in the deployment manifest. If the selected release jobs depend on release packages, those will also be downloaded and placed into the `/var/vcap` directory. For example, given the following deployment job definition: 14 | 15 | ```yaml 16 | jobs: 17 | - name: redis-master 18 | templates: 19 | - {name: redis-server, release: redis} 20 | - {name: syslog-forwarder, release: syslog} 21 | ... 22 | ``` 23 | 24 | the Agent will download two release jobs (listed in the templates property in the deployment job definition) into the following directories: 25 | 26 | - redis-server into `/var/vcap/jobs/redis-server` 27 | - syslog-forwarder into `/var/vcap/jobs/syslog-forwarder` 28 | 29 | Assuming that the redis-server job depends on a redis package and the syslog-forwarder job depends on a syslog-forwarder package, the Agent will download two release packages into the following directories: 30 | 31 | - redis into `/var/vcap/packages/redis` 32 | - syslog-forwarder into `/var/vcap/packages/syslog-forwarder` 33 | 34 | --- 35 | Next: [VM Configuration Locations](vm-config.html) 36 | -------------------------------------------------------------------------------- /design.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: BOSH Design Principles 3 | --- 4 | 5 | BOSH is an open framework for managing the full development and deployment life cycle of large-scale distributed software applications.
6 | 7 | BOSH: 8 | 9 | * Leverages IaaS APIs to create VMs from base images packaged with 10 | operator-defined network, storage, and software configurations 11 | * Monitors and manages VM and process health, detecting and restarting processes 12 | or VMs when they become unhealthy 13 | * Updates all VMs reliably and idempotently, whether the update is to the OS, a 14 | package, or any other component 15 | 16 | ## BOSH Deployments are Predictable ## 17 | 18 | BOSH compiles the source code in an isolated, sterile environment. 19 | When BOSH completes a deployment or update, the virtual machines deployed 20 | contain only the exact software specified in the release. 21 | 22 | BOSH versions all jobs, packages, and releases independently. 23 | Because BOSH automatically versions releases and everything they contain in a 24 | consistent way, the state of your deployment is known throughout its lifecycle. 25 | 26 | ## BOSH Deployments are Repeatable ## 27 | 28 | Every time you repeat a BOSH deployment, the result is exactly the same deployed 29 | system. 30 | 31 | ## BOSH Deployments are Self-Healing ## 32 | 33 | BOSH monitors the health of processes running on the virtual machines it deploys 34 | and compares the results with the ideal state of the system as described in the 35 | deployment manifest. 36 | If BOSH detects a failed job or non-responsive VM, BOSH can automatically 37 | recreate the job on a new VM. 38 | -------------------------------------------------------------------------------- /aws-instance-storage.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Using Instance Storage 3 | --- 4 | 5 |

Note: This feature is available with bosh-aws-cpi v32+ and only for releases deployed with ? stemcells.

6 | 7 | Certain [instance types](https://aws.amazon.com/ec2/instance-types/) have access to instance storage. All BOSH managed VMs have to store some ephemeral data such as release jobs, packages, logs and other scratch data. The first instance storage disk is used if possible; otherwise, a separate EBS volume is created as an ephemeral disk. 8 | 9 | Applications may need access to all instance storage disks. In that case a separate EBS volume will always be created to store ephemeral data. To enable access to all instance storage disks, add `raw_instance_storage: true`: 10 | 11 | ```yaml 12 | resource_pools: 13 | - name: default 14 | network: default 15 | stemcell: 16 | name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent 17 | version: latest 18 | cloud_properties: 19 | instance_type: d2.2xlarge 20 | raw_instance_storage: true 21 | ``` 22 | 23 | With multiple disks attached, the Agent partitions and labels instance storage disks with the label `raw-ephemeral-*` so that release jobs can easily find and use them: 24 | 25 | <pre class="terminal">
26 | bosh_caxspafr6@09f1a2db-f322-487c-bd03-63bf0d367f3d:~$ ls -la /dev/disk/by-partlabel/raw-ephemeral-*
27 | lrwxrwxrwx 1 root root 12 Oct  5 03:09 /dev/disk/by-partlabel/raw-ephemeral-0 -> ../../xvdba1
28 | lrwxrwxrwx 1 root root 12 Oct  5 03:09 /dev/disk/by-partlabel/raw-ephemeral-1 -> ../../xvdbb1
29 | 
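If you are managing IaaS settings through a cloud config rather than `resource_pools`, the same flag would go under a VM type's `cloud_properties`. A minimal sketch, assuming a cloud config-based workflow (the `raw-disks` name is arbitrary):

```yaml
vm_types:
- name: raw-disks
  cloud_properties:
    # instance type with instance storage attached
    instance_type: d2.2xlarge
    # expose all instance storage disks as raw, labeled partitions
    raw_instance_storage: true
```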
30 | 31 | --- 32 | [Back to Table of Contents](index.html#cpi-config) 33 | 34 | Previous: [Using IAM instance profiles](aws-iam-instance-profiles.html) 35 | -------------------------------------------------------------------------------- /release.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: What is a Release? 3 | --- 4 | 5 | A release is a versioned collection of configuration properties, configuration templates, start-up scripts, source code, binary artifacts, and anything else required to build and deploy software in a reproducible way. 6 | 7 | A release is the layer placed on top of a [stemcell](stemcell.html). Releases are self-contained and provide very specific software for the purpose of that release. For example, a Redis release might include start-up and shutdown scripts for `redis-server`, a tarball with Redis source code obtained from the Redis official website, and a few configuration properties allowing cluster operators to alter that Redis configuration. 8 | 9 | By allowing layering of stemcells and releases, BOSH is able to solve problems such as "how does one make sure that the compiled version of the software is reliably available throughout the deploy", or "how to version and roll out updated software to the whole cluster, VM-by-VM", that other orchestration software is not able to solve. 10 | 11 | There are two common formats in which releases are distributed: as artifacts checked into a git repository, or as a single tarball. 12 | 13 | By introducing the concept of a release, the following concerns are addressed: 14 | 15 | - Capturing all needed configuration options and scripts for deployment of the software 16 | - Recording and keeping track of all dependencies for the software 17 | - Versioning and keeping track of software releases 18 | - Creating releases that can be IaaS agnostic 19 | - Creating releases that are self-contained and do not require internet access for deployment 20 | 21 | --- 22 | Next: [What is a Deployment?](deployment.html) 23 | 24 | Previous: [What is a Stemcell?](stemcell.html) 25 | -------------------------------------------------------------------------------- /vsphere-migrate-datastores.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Migrating from one datastore to another 3 | --- 4 | 5 | <p class="note">

Note: This feature is available with bosh-vsphere-cpi v9+.

6 | 7 | This topic describes how to migrate VMs and persistent disks from one datastore to another without downtime. 8 | 9 | 1. Attach new datastore(s) to the hosts where the VMs are running while keeping the old datastore(s) attached to the same hosts. 10 | 11 | 1. Change the deployment manifest for the Director to configure the vSphere CPI to reference the new datastore(s). 12 | 13 | ```yaml 14 | properties: 15 | vsphere: 16 | host: 172.16.68.3 17 | user: root 18 | password: vmware 19 | datacenters: 20 | - name: BOSH_DC 21 | vm_folder: prod-vms 22 | template_folder: prod-templates 23 | disk_path: prod-disks 24 | datastore_pattern: '\Anew-prod-ds\z' # <--- 25 | persistent_datastore_pattern: '\Anew-prod-ds\z' # <--- 26 | clusters: [BOSH_CL] 27 | ``` 28 | 29 | 1. Redeploy the Director. 30 | 31 | 1. Verify that the Director VM's root, ephemeral and persistent disks are now on the new datastore(s). 32 | 33 | 1. For each one of the deployments managed by the Director (visible via [`bosh deployments`](sysadmin-commands.html#deployment)), run [`bosh deploy --recreate`](sysadmin-commands.html#deployment) so that VMs are recreated and persistent disks are reattached. 34 | 35 | 1. Verify that the persistent disks and VMs were moved to the new datastore(s) and there are no remaining disks in the old datastore(s). 36 | 37 | --- 38 | [Back to Table of Contents](index.html#cpi-config) 39 | 40 | Next: [Storage DRS and vMotion Support](vsphere-vmotion-support.html) 41 | 42 | Previous: [vSphere HA](vsphere-ha.html) 43 | -------------------------------------------------------------------------------- /sample-manifest.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Sample BOSH Deployment Manifest 3 | --- 4 | 5 | The following is a sample BOSH deployment manifest. See [Understanding the BOSH Deployment Manifest](./deployment-manifest.html) for an explanation of the manifest contents.
6 | 7 | ```yaml 8 | --- 9 | name: my-redis-deployment 10 | director_uuid: 1234abcd-5678-efab-9012-3456cdef7890 11 | 12 | releases: 13 | - {name: redis, version: 12} 14 | 15 | resource_pools: 16 | - name: redis-servers 17 | network: default 18 | stemcell: 19 | name: bosh-aws-xen-ubuntu-trusty-go_agent 20 | version: 2708 21 | cloud_properties: 22 | instance_type: m1.small 23 | availability_zone: us-east-1c 24 | 25 | networks: 26 | - name: default 27 | type: manual 28 | subnets: 29 | - range: 10.10.0.0/24 30 | gateway: 10.10.0.1 31 | static: 32 | - 10.10.0.16 - 10.10.0.18 33 | reserved: 34 | - 10.10.0.2 - 10.10.0.15 35 | dns: [10.10.0.6] 36 | cloud_properties: 37 | subnet: subnet-d597b993 38 | 39 | compilation: 40 | workers: 2 41 | network: default 42 | reuse_compilation_vms: true 43 | cloud_properties: 44 | instance_type: c1.medium 45 | availability_zone: us-east-1c 46 | 47 | update: 48 | canaries: 1 49 | max_in_flight: 3 50 | canary_watch_time: 15000-30000 51 | update_watch_time: 15000-300000 52 | 53 | jobs: 54 | - name: redis-master 55 | instances: 1 56 | templates: 57 | - {name: redis-server, release: redis} 58 | persistent_disk: 10_240 59 | resource_pool: redis-servers 60 | networks: 61 | - name: default 62 | 63 | - name: redis-slave 64 | instances: 2 65 | templates: 66 | - {name: redis-server, release: redis} 67 | persistent_disk: 10_240 68 | resource_pool: redis-servers 69 | networks: 70 | - name: default 71 | 72 | properties: 73 | redis: 74 | max_connections: 10 75 | ``` 76 | -------------------------------------------------------------------------------- /director-configure-db.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Connecting the Director to an External Postgres Database 3 | --- 4 | 5 | The Director stores VM, persistent disk and other information in a database. An internal database might be sufficient for your deployment; however, a highly-available external database can improve performance and scalability, and protect against data loss. 6 | 7 | ## Included Postgres (default) 8 | 9 | 1. Add the postgres release job and make sure that a persistent disk is enabled: 10 | 11 | ```yaml 12 | jobs: 13 | - name: bosh 14 | templates: 15 | - {name: postgres, release: bosh} 16 | # ... 17 | persistent_disk: 25_000 18 | # ... 19 | ``` 20 | 21 | 1. Configure the postgres job, and let the Director and the Registry (if configured) use the database: 22 | 23 | ```yaml 24 | properties: 25 | postgres: &database 26 | host: 127.0.0.1 27 | user: postgres 28 | password: postgres-password 29 | database: bosh 30 | adapter: postgres 31 | 32 | director: 33 | db: *database 34 | # ... 35 | 36 | registry: 37 | db: *database 38 | # ... 39 | ``` 40 | 41 | --- 42 | ## External 43 | 44 | The Director is tested to be compatible with MySQL and PostgreSQL databases. 45 | 46 | 1. Modify the deployment manifest for the Director: 47 | 48 | ```yaml 49 | properties: 50 | director: 51 | db: &database 52 | host: DB-HOST 53 | port: DB-PORT 54 | user: DB-USER 55 | password: DB-PASSWORD 56 | database: bosh 57 | adapter: postgres 58 | 59 | registry: 60 | db: *database 61 | # ... 62 | ``` 63 | 64 | See [director.db job configuration](https://bosh.io/jobs/director?source=github.com/cloudfoundry/bosh#p=director.db) for more details.
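Since the Director is also tested against MySQL, pointing it at a MySQL database is mostly a matter of swapping the adapter and port. A sketch under that assumption; `mysql2` is the adapter name used elsewhere in these docs for MySQL, and `3306` is MySQL's default port:

```yaml
properties:
  director:
    db: &database
      host: DB-HOST          # placeholder, as in the example above
      port: 3306             # MySQL default port
      user: DB-USER
      password: DB-PASSWORD
      database: bosh
      adapter: mysql2        # MySQL adapter instead of postgres

  registry:
    db: *database
    # ...
```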
65 | -------------------------------------------------------------------------------- /vcloud-cpi.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: vCloud CPI 3 | --- 4 | 5 | This topic describes cloud properties for different resources created by the vCloud CPI. 6 | 7 | ## AZs 8 | 9 | Currently the CPI does not support any cloud properties for AZs. 10 | 11 | Example: 12 | 13 | ```yaml 14 | azs: 15 | - name: z1 16 | ``` 17 | 18 | --- 19 | ## Networks 20 | 21 | Schema for `cloud_properties` section used by manual network subnet: 22 | 23 | * **name** [String, required]: Name of the vApp network in which the instance will be created. Example: `VPC_BOSH`. 24 | 25 | Example of manual network: 26 | 27 | ```yaml 28 | networks: 29 | - name: default 30 | type: manual 31 | subnets: 32 | - range: 10.10.0.0/24 33 | gateway: 10.10.0.1 34 | cloud_properties: 35 | name: VPC_BOSH 36 | ``` 37 | 38 | The vCloud CPI does not support dynamic or vip networks. 39 | 40 | --- 41 | ## Resource Pools / VM Types 42 | 43 | Schema for `cloud_properties` section: 44 | 45 | * **cpu** [Integer, required]: Number of CPUs. Example: `1`. 46 | * **ram** [Integer, required]: RAM in megabytes. Example: `1024`. 47 | * **disk** [Integer, required]: Ephemeral disk size in megabytes. Example: `10240`. 48 | 49 | Example of a VM with 1 CPU, 1GB RAM, and 10GB ephemeral disk: 50 | 51 | ```yaml 52 | resource_pools: 53 | - name: default 54 | network: default 55 | stemcell: 56 | name: bosh-vcloud-esxi-ubuntu-trusty-go_agent 57 | version: latest 58 | cloud_properties: 59 | cpu: 1 60 | ram: 1_024 61 | disk: 10_240 62 | ``` 63 | 64 | --- 65 | ## Disk Pools / Disk Types 66 | 67 | Currently the CPI does not support any cloud properties for disks. 68 | 69 | Example of 10GB disk: 70 | 71 | ```yaml 72 | disk_pools: 73 | - name: default 74 | disk_size: 10_240 75 | ``` 76 | 77 | --- 78 | [Back to Table of Contents](index.html#cpi-config) 79 | -------------------------------------------------------------------------------- /update-cloud-config.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Update Cloud Config 3 | --- 4 | 5 | <p class="note">

Note: Document uses CLI v2.

6 | 7 | The [cloud config](terminology.html#cloud-config) is a YAML file that defines IaaS-specific configuration used by all deployments. It makes it possible to separate IaaS-specific configuration into its own file and keep deployment manifests IaaS-agnostic. 8 | 9 | Here is an example cloud config used with [BOSH Lite](terminology.html#bosh-lite): 10 | 11 | ```yaml 12 | --- 13 | azs: 14 | - name: z1 15 | - name: z2 16 | - name: z3 17 | 18 | vm_types: 19 | - name: default 20 | 21 | disk_types: 22 | - name: default 23 | disk_size: 1024 24 | 25 | networks: 26 | - name: default 27 | type: manual 28 | subnets: 29 | - azs: [z1, z2, z3] 30 | dns: [8.8.8.8] 31 | range: 10.244.0.0/24 32 | gateway: 10.244.0.1 33 | static: [10.244.0.34] 34 | reserved: [] 35 | 36 | compilation: 37 | workers: 5 38 | az: z1 39 | reuse_compilation_vms: true 40 | vm_type: default 41 | network: default 42 | ``` 43 | 44 | (Taken from ) 45 | 46 | Without going into much detail, the above cloud config defines three [availability zones](terminology.html#az), one `default` [VM type](terminology.html#vm-type), one `default` [disk type](terminology.html#disk-type), and a `default` [network](networks.html). All of these definitions will be referenced by the deployment manifest. 47 | 48 | See [cloud config schema](cloud-config.html) for a detailed breakdown. 49 | 50 | To configure the Director with the above cloud config, use the [`bosh update-cloud-config` command](cli-v2.html#update-cloud-config): 51 | 52 | <pre class="terminal">
53 | $ bosh -e vbox update-cloud-config cloud-config.yml
54 | 
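For reference, a deployment manifest can then pick up these definitions by name. A minimal sketch — the `app` instance group and its (empty) job list are hypothetical; only the names come from the cloud config above:

```yaml
instance_groups:
- name: app                      # hypothetical instance group
  azs: [z1, z2, z3]              # AZ names from the cloud config
  instances: 1
  jobs: []                       # release jobs would go here
  vm_type: default               # VM type name from the cloud config
  stemcell: default              # alias from the manifest's stemcells section
  persistent_disk_type: default  # disk type name from the cloud config
  networks:
  - name: default                # network name from the cloud config
```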
55 | 56 | --- 57 | Next: [Build deployment manifest](deployment-basics.html) 58 | 59 | Previous: [Deploy Workflow](basic-workflow.html) 60 | -------------------------------------------------------------------------------- /vsphere-ha.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: vSphere HA 3 | --- 4 | 5 | vSphere High Availability (HA) is a VMware product that detects ESXi host failure, for example host power off or network partition, and automatically restarts virtual machines on other hosts in the cluster. 6 | It can interoperate effectively with the [BOSH Resurrector](resurrector.html), which recreates VMs if the Director loses contact with a VM's BOSH Agent. 7 | 8 |

Note: This feature is available with bosh-vsphere-cpi v30+.

9 | 10 | ### vCenter Configuration 11 | 12 | Configure vSphere HA as follows: 13 | 14 | * Check ***Cluster* → Manage → Settings → vSphere HA → 15 | Edit... → Turn on vSphere HA** 16 | 17 | * Check **Host Monitoring** 18 | 19 | * Ensure the Response for **Failure conditions and VM response → Host Isolation** is set to **Shut down and restart VMs** 20 | 21 | ### BOSH Director Configuration 22 | 23 | Increase the timeout values of the [BOSH Health Monitor](monitoring.html#vm) on the BOSH Director to allow for smooth interoperation between BOSH and vCenter. 24 | We recommend increasing the `agent_timeout` from the default 60s to 180s in the BOSH Director's manifest to allow vCenter time to detect the failed host: 25 | 26 | ```yaml 27 | jobs: 28 | - name: bosh 29 | properties: 30 | ... 31 | hm: 32 | resurrector_enabled: true 33 | intervals: 34 | agent_timeout: 180 35 | ``` 36 | 37 |

Warning: If vSphere HA is not enabled on the cluster and a host failure occurs, the BOSH Resurrector will be unable to recreate the VMs without manual intervention. 38 | Follow the manual procedure as appropriate: Host Failure or Network Partition.

39 | 40 | --- 41 | [Back to Table of Contents](index.html#cpi-config) 42 | 43 | Next: [Migrating from one datastore to another](vsphere-migrate-datastores.html) 44 | -------------------------------------------------------------------------------- /stemcell.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: What is a Stemcell? 3 | --- 4 | 5 | A stemcell is a versioned Operating System image wrapped with IaaS specific packaging. 6 | 7 | A typical stemcell contains a bare minimum OS skeleton with a few common utilities pre-installed, a BOSH Agent, and a few configuration files to securely configure the OS by default. For example: with vSphere, the official stemcell for Ubuntu Trusty is an approximately 500MB VMDK file. With AWS, official stemcells are published as AMIs that can be used in your AWS account. 8 | 9 | Stemcells do not contain any specific information about any software that will be installed once that stemcell becomes a specialized machine in the cluster; nor do they contain any sensitive information which would make them unable to be shared with other BOSH users. This clear separation between base Operating System and later-installed software is what makes stemcells a powerful concept. 10 | 11 | In addition to being generic, stemcells for one OS (e.g. all Ubuntu Trusty stemcells) are exactly the same for all infrastructures. This property of stemcells allows BOSH users to quickly and reliably switch between different infrastructures without worrying about the differences between OS images. 12 | 13 | The Cloud Foundry BOSH team is responsible for producing and maintaining an official set of stemcells. Cloud Foundry currently supports Ubuntu Trusty and CentOS 6.5 on vSphere, AWS, OpenStack, and vCloud infrastructures. 14 | 15 | Stemcells are distributed as tarballs. 16 | 17 | By introducing the concept of a stemcell, the following problems have been solved: 18 | 19 | - Capturing a base Operating System image 20 | - Versioning changes to the Operating System image 21 | - Reusing base Operating System images across VMs of different types 22 | - Reusing base Operating System images across different IaaS 23 | 24 | --- 25 | Next: [What is a Release?](release.html) 26 | 27 | Previous: [What problems does BOSH solve?](problems.html) 28 | -------------------------------------------------------------------------------- /deployment.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: What is a Deployment? 3 | --- 4 | 5 | A deployment is a collection of VMs, built from a [stemcell](stemcell.html), that has been populated with specific [releases](release.html) and disks that keep persistent data. These resources are created in the IaaS based on a deployment manifest and managed by the [Director](terminology.html#director), a centralized management server. 6 | 7 | The deployment process begins with deciding which Operating System images (stemcells) need to be used and which software (releases) need to be deployed, how to track persistent data while a cluster is updated and transformed, and how to automate steps of deploying images to an IaaS; this includes configuring machines to use said image, and placing correct software versions onto provisioned machines. BOSH builds upon previously introduced primitives (stemcells and releases) by providing a way to state an explicit combination of stemcells, releases, and operator-specified properties in a human readable file. 
This file is called a deployment manifest. 8 | 9 | When a deployment manifest is uploaded to the Director, requested resources are allocated and stored. These resources form a deployment. The deployment keeps track of associated VMs and persistent disks that are attached to the VMs. Over time, as the deployment manifest changes, VMs are replaced and updated. However, persistent disks are retained and are re-attached to the newer VMs. 10 | 11 | A user can manage a deployment via its deployment manifest. A deployment manifest contains all needed information for tracking, managing, and updating software on the deployment's VMs. It describes the deployment in an IaaS-agnostic way. For example, to update a Zookeeper cluster (deployment is named 'zookeeper') to a later version of a Zookeeper release, one would update a few lines in the deployment manifest. 12 | 13 | --- 14 | [Back to Table of Contents](index.html#intro) 15 | 16 | Previous: [What is a Release?](release.html) 17 | -------------------------------------------------------------------------------- /release-blobstore.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Release Blobstore 3 | --- 4 | 5 | <p class="note">

Note: Examples require CLI v2.

6 | 7 | A release blobstore contains [release blobs](release-blobs.html) and created final releases. 8 | 9 | Access to the release blobstore is configured via two files: 10 | 11 | - `config/final.yml` (checked into Git repository): contains blobstore location 12 | - `config/private.yml` (is NOT checked into Git repository): contains blobstore credentials 13 | 14 | The CLI supports two different blobstore providers: `s3` and `local`. 15 | 16 | ## S3 Configuration 17 | 18 | The S3 provider is used for most production releases. It can be used with any S3-compatible blobstore (in compatibility mode) like Google Cloud Storage and Swift. 19 | 20 | `config/final.yml`: 21 | 22 | ```yaml 23 | --- 24 | blobstore: 25 | provider: s3 26 | options: 27 | bucket_name: 28 | ``` 29 | 30 | `config/private.yml`: 31 | 32 | ```yaml 33 | --- 34 | blobstore: 35 | options: 36 | access_key_id: 37 | secret_access_key: 38 | ``` 39 | 40 | See [Configuring S3 release blobstore](s3-release-blobstore.html) for details and [S3 CLI Usage](https://github.com/pivotal-golang/s3cli#usage) for additional configuration options. 41 | 42 | --- 43 | ## Local Configuration 44 | 45 | The local provider is useful for testing. 46 | 47 | `config/final.yml`: 48 | 49 | ```yaml 50 | --- 51 | blobstore: 52 | provider: local 53 | options: 54 | blobstore_path: /tmp/test-blobs 55 | ``` 56 | 57 | Nothing is needed in `config/private.yml`. 58 | 59 | --- 60 | ## Migrating blobs 61 | 62 | The CLI does not currently provide a built-in way to migrate blobs to a different blobstore. The suggested way to migrate blobs is to use a third-party tool like `s3cmd` to list and copy all blobs from the current blobstore to another. Once copying of all blobs is complete, update the `config` directory with the new blobstore location. 63 | -------------------------------------------------------------------------------- /job-lifecycle.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Job Lifecycle 3 | --- 4 | 5 | There are several stages that all jobs (and their associated processes) on each VM go through during a deployment process: 6 | 7 | ## When start is issued 8 | 9 | 1. Persistent disks are mounted on the VM if configured, and not already mounted 10 | 11 | 1. All jobs and their dependent packages are downloaded and placed onto a machine 12 | 13 | 1. [pre-start scripts](pre-start.html) run for all jobs on the VM in parallel 14 | - (waits for all pre-start scripts to finish) 15 | - does not time out 16 | 17 | 1. `monit start` is called for each process in no particular order 18 | - each job can specify zero or more processes 19 | - times out based on [canary_watch_time/update_watch_time settings](manifest-v2.html#update) 20 | 21 | 1. [post-start scripts](post-start.html) run for all jobs on the VM in parallel 22 | - (waits for all post-start scripts to finish) 23 | - does not time out 24 | 25 | 1. [post-deploy scripts](post-deploy.html) run for all jobs on *all* VMs in parallel 26 | - (waits for all post-deploy scripts to finish) 27 | - does not time out 28 | 29 | Note that scripts should not rely on the order in which they are run. The Agent may decide to run them serially or in parallel. 30 | 31 | --- 32 | ## When processes are running 33 | 34 | 1. Monit will automatically restart processes that failed their associated checks 35 | - a common pattern used is a PID check 36 | 37 | --- 38 | ## When stop is issued (or before update and subsequent start happens) 39 | 40 | 1. `monit unmonitor` is called for each process 41 | 42 | 1. 
[drain scripts](drain.html) run for all jobs on the VM in parallel 43 | - (waits for all drain scripts to finish) 44 | - does not time out 45 | 46 | 1. `monit stop` is called for each process 47 | - times out after 5 minutes as of bosh v258+ on 3302+ stemcells 48 | 49 | 1. Persistent disks are unmounted on the VM if configured 50 | 51 | --- 52 | Next: [Pre-start script](pre-start.html) 53 | -------------------------------------------------------------------------------- /migrate-to-bosh-init.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Migrating to bosh-init from the micro CLI plugin 3 | --- 4 | 5 | The micro CLI plugin is deprecated and will be discontinued in a few months. `bosh-init` replaces its functionality and improves on its features. If you do not maintain a MicroBOSH VM, there is no need to do anything; however, if you do, please follow these steps: 6 | 7 | 1. [Install bosh-init](install-bosh-init.html). 8 | 9 | 1. Review [Using bosh-init](using-bosh-init.html). 10 | 11 | 1. Familiarize yourself with how to write a deployment manifest for the Director VM (previously referred to as MicroBOSH) by following the examples on one of these pages, depending on your IaaS: 12 | - [Initializing on AWS](init-aws.html) 13 | - [Initializing on OpenStack](init-openstack.html) 14 | - [Initializing on vSphere](init-vsphere.html) 15 | - [Initializing on vCloud](init-vcloud.html) 16 | 17 | 1. Create a deployment manifest (for example `bosh.yml`) based on one of the above examples. 18 | 19 | <p class="note">

Note: Make sure NATS, blobstore, and database settings used by the Director, registry, and DNS server match your previous configuration.

20 | 21 | 1. Copy `bosh-deployments.yml` to the deployment directory that contains your new deployment manifest. `bosh-deployments.yml` is a state file produced and updated by the micro CLI. It contains VM and persistent disk IDs of your current MicroBOSH VM. 22 | 23 | 1. Run `bosh-init deploy ./bosh.yml`. `bosh-init` will find `bosh-deployments.yml` in your deployment directory, convert it to a new deployment state file (in our example `bosh-state.json`) and update the Director VM as specified by the deployment manifest. 24 | 25 | 1. Save the deployment state file left in your deployment directory so you can later update/delete your Director. See the [Deployment state](using-bosh-init.html#deployment-state) section of 'Using bosh-init' for more details. 26 | 27 | 1. Target your Director with the BOSH CLI and run the `bosh deployments` command to confirm your existing deployments were migrated. 28 | -------------------------------------------------------------------------------- /uploading-stemcells.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Upload Stemcells 3 | --- 4 | 5 | <p class="note">

Note: Document uses CLI v2.

6 | 7 | (See [What is a Stemcell?](stemcell.html) for an introduction to stemcells.) 8 | 9 | As described earlier, each deployment can reference one or more stemcells. For a deploy to succeed, the necessary stemcells must be uploaded to the Director. 10 | 11 | ## Finding Stemcells 12 | 13 | The [stemcells section of bosh.io](http://bosh.io/stemcells) lists official stemcells. 14 | 15 | --- 16 | ## Uploading to the Director 17 | 18 | The CLI provides the [`bosh upload-stemcell` command](cli-v2.html#upload-stemcell). 19 | 20 | - If you have a URL to a stemcell tarball (for example, a URL provided by bosh.io): 21 | 22 | <pre class="terminal">
23 |     $ bosh -e vbox upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent?v=3421.9 --sha1 1396d7877204e630b9e77ae680f492d26607461d
24 |     
25 | 26 | - If you have a stemcell tarball on your local machine: 27 | 28 |
29 |     $ bosh upload-stemcell ~/Downloads/bosh-stemcell-3421.9-warden-boshlite-ubuntu-trusty-go_agent.tgz
30 |     
31 | 32 | Once the command succeeds you can view all uploaded stemcells in the Director: 33 | 34 |
35 | $ bosh -e vbox stemcells
36 | Using environment '192.168.50.6' as client 'admin'
37 | 
38 | Name                                         Version  OS             CPI  CID
39 | bosh-warden-boshlite-ubuntu-trusty-go_agent  3421.9*  ubuntu-trusty  -    6c9c002e-bb46-4838-4b73-ff1afaa0aa21
40 | 
41 | (*) Currently deployed
42 | 
43 | 1 stemcells
44 | 
45 | Succeeded
46 | 
47 | 48 | --- 49 | ## Deployment Manifest Usage 50 | 51 | To use an uploaded stemcell in your deployment, add a `stemcells` section: 52 | 53 | ```yaml 54 | stemcells: 55 | - alias: default 56 | os: ubuntu-trusty 57 | version: 3421.4 58 | ``` 59 | 60 | --- 61 | Next: [Upload Releases](uploading-releases.html) 62 | 63 | Previous: [Build Deployment Manifest](deployment-basics.html) 64 | -------------------------------------------------------------------------------- /managing-stemcells.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Managing stemcells 3 | --- 4 | 5 | (See [What is a Stemcell?](stemcell.html) and [Uploading stemcells](uploading-stemcells.html) for an introduction.) 6 | 7 | --- 8 | ## Stemcell versioning 9 | 10 | Each stemcell is uniquely identified by its name and version. Currently stemcell versions are in MAJOR.MINOR format. The major version is incremented when new features are added to stemcells (or to any components that stemcells typically include, such as the BOSH Agent). Minor versions are incremented if certain security fixes and/or features are backported on top of an existing stemcell line. We recommend continuously bumping to the latest major stemcell version to receive the latest updates. 11 | 12 | --- 13 | ## Overview 14 | 15 | You can identify the stemcell version from inside the VM via the following files: 16 | 17 | - `/var/vcap/bosh/etc/stemcell_version`: Example: `3232.1` 18 | - `/var/vcap/bosh/etc/stemcell_git_sha1`: Example: `8c8a6bd2ac5cacb11c421a97e90b888be9612ecb+` 19 | 20 | <p class="note">

Note: Release authors should not use the contents of these files in their releases.

21 | 22 | See [Stemcell Building](build-stemcell.html#tarball-structure) to find the stemcell archive structure. 23 | 24 | --- 25 | ## Fixing corrupted stemcells 26 | 27 | Occasionally stemcells are deleted from the IaaS outside of the Director. For example, your vSphere administrator decided to clean up your vSphere VMs folder. The Director will of course continue to reference the deleted IaaS asset, and the CPI will eventually raise an error when trying to create a new VM. The [`bosh upload-stemcell` command](cli-v2.html#upload-stemcell) provides a `--fix` flag which allows reuploading a stemcell with the same name and version into the Director, fixing this problem. 28 | 29 | --- 30 | ## Cleaning up uploaded stemcells 31 | 32 | Over time the Director accumulates stemcells. Stemcells can be deleted manually via the [`bosh delete-stemcell` command](cli-v2.html#delete-stemcell) or be cleaned up via the [`bosh cleanup` command](cli-v2.html#clean-up). 33 | -------------------------------------------------------------------------------- /init-google.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Creating environment on Google Cloud Platform 3 | --- 4 | 5 | This document shows how to initialize a new [environment](terminology.html#environment) on Google Cloud Platform. 6 | 7 | 1. Install [CLI v2](./cli-v2.html). 8 | 9 | 1. Use the `bosh create-env` command to deploy the Director. 10 | 11 | <pre class="terminal">
12 |     # Create directory to keep state
13 |     $ mkdir bosh-1 && cd bosh-1
14 | 
15 |     # Clone Director templates
16 |     $ git clone https://github.com/cloudfoundry/bosh-deployment
17 | 
18 |     # Fill below variables (replace example values) and deploy the Director
19 |     $ bosh create-env bosh-deployment/bosh.yml \
20 |         --state=state.json \
21 |         --vars-store=creds.yml \
22 |         -o bosh-deployment/gcp/cpi.yml \
23 |         -v director_name=bosh-1 \
24 |         -v internal_cidr=10.0.0.0/24 \
25 |         -v internal_gw=10.0.0.1 \
26 |         -v internal_ip=10.0.0.6 \
27 |         --var-file gcp_credentials_json=~/Downloads/gcp-23r82r3y2.json \
28 |         -v project_id=moonlight-2389ry3 \
29 |         -v zone=us-east1-d \
30 |         -v tags=[internal] \
31 |         -v network=default \
32 |         -v subnetwork=default
33 |     
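If you prefer not to pass each value with `-v`, CLI v2 can read the same variables from a YAML file supplied with the `--vars-file` flag. A sketch using the example values from above (`gcp_credentials_json` would still be passed with `--var-file`, since it comes from a separate JSON file):

```yaml
# vars.yml -- example values; replace with your own
director_name: bosh-1
internal_cidr: 10.0.0.0/24
internal_gw: 10.0.0.1
internal_ip: 10.0.0.6
project_id: moonlight-2389ry3
zone: us-east1-d
tags: [internal]
network: default
subnetwork: default
```

The deploy command then shrinks to `bosh create-env bosh-deployment/bosh.yml --state=state.json --vars-store=creds.yml -o bosh-deployment/gcp/cpi.yml --vars-file=vars.yml --var-file gcp_credentials_json=...`.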
34 | 35 | If running the above commands outside of a connected Google network, refer to [Exposing environment on a public IP](init-external-ip.html) for additional CLI flags. 36 | 37 | 1. Connect to the Director. 38 | 39 | <pre class="terminal">
40 |     # Configure local alias
41 |     $ bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
42 | 
43 |     # Log in to the Director
44 |     $ export BOSH_CLIENT=admin
45 |     $ export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
46 | 
47 |     # Query the Director for more info
48 |     $ bosh -e bosh-1 env
49 |     
50 | 51 | 1. Save the deployment state files left in your deployment directory `bosh-1` so you can later update/delete your Director. See [Deployment state](cli-envs.html#deployment-state) for details. 52 | 53 | --- 54 | [Back to Table of Contents](index.html#install) 55 | 56 | Previous: [Create an environment](init.html) 57 | -------------------------------------------------------------------------------- /install-bosh-init.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Install bosh-init 3 | --- 4 | 5 |

Note: See [CLI v2](cli-v2.html) for an updated CLI.

6 | 7 | `bosh-init` is used for creating and updating a Director VM (and its persistent disk) in an environment. 8 | 9 | Here are the latest binaries: 10 | 11 |
12 |

0.0.103

13 | 17 |
18 | 19 | 1. Download the binary for your platform and place it on your `PATH`. For example, on UNIX machines: 20 | 21 | <pre class="terminal">
22 | 	$ chmod +x ~/Downloads/bosh-init-*
23 | 	$ sudo mv ~/Downloads/bosh-init-* /usr/local/bin/bosh-init
24 | 	
25 | 26 | 1. Check `bosh-init` version to make sure it is properly installed: 27 | 28 |
29 | 	$ bosh-init -v
30 | 	
31 | 32 | 1. Depending on your platform, install the following packages: 33 | 34 | **Ubuntu Trusty** 35 | 36 | <pre class="terminal">
37 | 	$ sudo apt-get install -y build-essential zlibc zlib1g-dev ruby ruby-dev openssl libxslt-dev libxml2-dev libssl-dev libreadline6 libreadline6-dev libyaml-dev libsqlite3-dev sqlite3
38 | 	
39 | 40 | **CentOS** 41 | 42 |
43 | 	$ sudo yum install gcc gcc-c++ ruby ruby-devel mysql-devel postgresql-devel postgresql-libs sqlite-devel libxslt-devel libxml2-devel patch openssl
44 | 	$ gem install yajl-ruby
45 | 	
46 | 47 | **Mac OS X** 48 | 49 | Install Apple Command Line Tools: 50 |
51 | 	$ xcode-select --install
52 | 	
53 | 54 | Use [Homebrew](http://brew.sh) to install OpenSSL: 55 |
56 | 	$ brew install openssl
57 | 	
58 | 59 | 1. Make sure Ruby 2+ is installed: 60 | 61 |
62 | 	$ ruby -v
63 | 	ruby 2.2.3p173 (2015-08-18 revision 51636) [x86_64-darwin14]
64 | 	
65 | 66 | --- 67 | See [Using bosh-init](using-bosh-init.html) for details on how to use it. 68 | -------------------------------------------------------------------------------- /director-access-events.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Access Event Logging 3 | --- 4 | 5 |

Note: This feature is available in bosh-release v256+.

6 | 7 | Director logs all API access events to syslog under the `vcap.bosh.director` topic. 8 | 9 | Here is a log snippet found in `/var/log/syslog` in [Common Event Format (CEF)](https://www.protect724.hpe.com/servlet/JiveServlet/downloadBody/1072-102-7-18874/CommonEventFormat%20v22.pdf): 10 | 11 | ``` 12 | May 13 05:13:34 localhost vcap.bosh.director[16199]: CEF:0|CloudFoundry|BOSH|1.0000.0|director_api|/deployments|7|requestMethod=GET src=127.0.0.1 spt=25556 shost=36ff45a2-51a2-488d-af95-953c43de4cec cs1=10.10.0.36,fe80::80a:99ff:fed6:df7d%eth0 cs1Label=ips cs2=X_BOSH_UPLOAD_REQUEST_TIME=0.000&HOST=127.0.0.1&X_REAL_IP=127.0.0.1&X_FORWARDED_FOR=127.0.0.1&X_FORWARDED_PROTO=https&USER_AGENT=EventMachine HttpClient cs2Label=httpHeaders cs3=none cs3Label=authType cs4=401 cs4Label=responseStatus cs5=Not authorized: '/deployments' cs5Label=statusReason 13 | ``` 14 | 15 | And in a more readable form: 16 | 17 | ``` 18 | May 13 05:13:34 localhost vcap.bosh.director[16199]: 19 | CEF:0 20 | CloudFoundry 21 | BOSH 22 | 1.3232.0 23 | director_api 24 | /deployments 25 | 7 26 | 27 | requestMethod=GET 28 | src=127.0.0.1 29 | spt=25556 30 | shost=36ff45a2-51a2-488d-af95-953c43de4cec 31 | 32 | cs1=10.10.0.36,fe80::80a:99ff:fed6:df7d%eth0 33 | cs1Label=ips 34 | 35 | cs2=X_BOSH_UPLOAD_REQUEST_TIME=0.000&HOST=127.0.0.1&X_REAL_IP=127.0.0.1&X_FORWARDED_FOR=127.0.0.1&X_FORWARDED_PROTO=https&USER_AGENT=EventMachine HttpClient 36 | cs2Label=httpHeaders 37 | 38 | cs3=none 39 | cs3Label=authType 40 | 41 | cs4=401 42 | cs4Label=responseStatus 43 | 44 | cs5=Not authorized: '/deployments' 45 | cs5Label=statusReason 46 | ``` 47 | 48 | --- 49 | ## Enabling Logging 50 | 51 | To enable this feature: 52 | 53 | 1. Add [`director.log_access_events_to_syslog`](https://bosh.io/jobs/director?source=github.com/cloudfoundry/bosh#p=director.log_access_events_to_syslog) to the deployment manifest for the Director: 54 | 55 | ```yaml 56 | properties: 57 | director: 58 | log_access_events_to_syslog: true 59 | ``` 60 | 61 | 1. Optionally colocate [syslog-release's `syslog_forwarder` job](http://bosh.io/jobs/syslog_forwarder?source=github.com/cloudfoundry/syslog-release) with the Director to forward logs to a remote location. 62 | 63 | 1. Redeploy the Director. 64 | 65 | --- 66 | [Back to Table of Contents](index.html#director-config) 67 | -------------------------------------------------------------------------------- /openstack-multiple-networks.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Multi-homed VMs 3 | --- 4 | 5 | <p class="note">

Note: This feature is available with bosh-openstack-cpi v24+.

6 |

Note: This feature requires OpenStack Neutron.

7 | 8 | ### Limitation: This feature requires DHCP to be disabled 9 | 10 | Disabling DHCP means that the network devices on your VMs will not get configuration 11 | such as default gateway, DNS, and MTU. If you require specific values for these settings, 12 | you will need to set them by other means. 13 | 14 | 1. In your Director deployment manifest, set [`properties.openstack.use_dhcp: false`](https://bosh.io/jobs/openstack_cpi?source=github.com/cloudfoundry-incubator/bosh-openstack-cpi-release#p=openstack.use_dhcp). 15 | 16 | This means the BOSH agent will configure the network devices without DHCP. This is a Director-wide setting 17 | and switches off DHCP for all VMs deployed with this Director. 18 | 1. In your Director deployment manifest, set [`properties.openstack.config_drive: cdrom`](https://bosh.io/jobs/openstack_cpi?source=github.com/cloudfoundry-incubator/bosh-openstack-cpi-release#p=openstack.config_drive). 19 | 20 | This means OpenStack will mount a cdrom drive to distribute meta-data and user-data instead of using an HTTP metadata service. Both properties are shown together in the sketch below this list. 21 | 1. In your [BOSH network configuration](networks.html#manual), set `gateway` and `dns` to allow outbound communication. 22 | 1. If you're not using VLAN, but a tunnel mechanism for Neutron networking, you also need to set the MTU for your network devices on *all* VMs: 23 | * GRE Tunnels incur an overhead of 42 bytes, therefore set your MTU to `1458` 24 | * VXLAN Tunnels incur an overhead of 50 bytes, therefore set your MTU to `1450` 25 | <p class="note">

Note: The above numbers assume that you're using an MTU of 1500 for the physical network. If your physical network is set up differently, adapt the MTU values accordingly.

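For reference, the two Director-level settings from steps 1 and 2 end up next to each other in the Director deployment manifest. A minimal sketch showing only the relevant properties named above:

```yaml
properties:
  openstack:
    # Agent configures network devices itself instead of relying on DHCP
    use_dhcp: false
    # distribute meta-data/user-data via a mounted cdrom config drive
    config_drive: cdrom
```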
26 | 27 | Setting the MTU for network devices is currently not possible in the deployment manifest's `networks` section and thus requires manual user interaction. We recommend co-locating the [networking-release](https://github.com/cloudfoundry/networking-release)'s `set_mtu` job using [addons](runtime-config.html#addons). 28 | 29 | --- 30 | [Back to Table of Contents](index.html#cpi-config) 31 | 32 | Next: [Using Light Stemcells](openstack-light-stemcells.html) 33 | 34 | Previous: [Validating self-signed OpenStack endpoints](openstack-self-signed-endpoints.html) 35 | -------------------------------------------------------------------------------- /bosh-cli.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: BOSH Command Line Interface 3 | --- 4 | 5 | The BOSH Command Line Interface (CLI) is used to interact with the Director. The CLI is written in Ruby and is distributed via the `bosh_cli` gem. 6 | 7 | <pre class="terminal">
 8 | $ gem install bosh_cli --no-ri --no-rdoc
 9 | 
10 | 11 |

Note: BOSH CLI requires Ruby 2+

12 | 13 | If gem installation does not succeed, make sure the prerequisites for your OS are met. 14 | 15 | ### Prerequisites on Ubuntu Trusty 16 | 17 | Make sure the following packages are installed: 18 | 19 | <pre class="terminal">
20 | $ sudo apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev zlib1g-dev
21 | 
22 | 23 | Make sure `ruby` and `gem` binaries are on your path before continuing. 24 | 25 | ### Prerequisites on CentOS 26 | 27 | Make sure the following packages are installed: 28 | 29 | <pre class="terminal">
30 | $ sudo yum install gcc ruby ruby-devel mysql-devel postgresql-devel postgresql-libs sqlite-devel libxslt-devel libxml2-devel yajl-ruby
31 | 
32 | 33 | ### Prerequisites on Mac OS X 34 | 35 | You may see an error like this:
ERROR:  While executing gem ... (Gem::FilePermissionError) ↵ You don't have write permissions for the /Library/Ruby/Gems/2.0.0 directory.
36 | 37 | Instead of using the system Ruby, install a separate Ruby for your own use, and switch to that one using a package like RVM, rbenv, or chruby. 38 | 39 | You may see an error like this:
The compiler failed to generate an executable file. (RuntimeError). You have to install development tools first.
40 | 41 | Make sure you have installed Xcode and the command-line developer tools, and agreed to the license. 42 | 43 |
44 | $ xcode-select --install
45 | xcode-select: note: install requested for command line developer tools
46 | 
47 | 48 | A window will pop up saying:
The "xcode-select" command requires the command line developer tools. Would you like to install the tools now?
Choose Install to continue. Choose Get Xcode to install Xcode and the command line developer tools from the App Store. If you have already installed Xcode from the App Store, you can choose Install and it will install the CLI tools. 49 | 50 | If you have successfully installed them, you will see this: 51 | 52 | <pre class="terminal">
53 | $ xcode-select --install
54 | xcode-select: error: command line tools are already installed, use "Software Update" to install updates
55 | 
56 | 57 | To agree to the license: 58 | 59 |
60 | $ sudo xcodebuild -license
61 | 
62 | -------------------------------------------------------------------------------- /deploying.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Deploy 3 | --- 4 | 5 | Once referenced stemcells and releases are uploaded to the Director and the deployment manifest is complete, the Director can successfully make a deployment. The CLI has a single command to create and update a deployment: the [`bosh deploy` command](cli-v2.html#deploy). From the perspective of the Director, the same steps are taken to create or update a deployment. 6 | 7 | To create a Zookeeper deployment from the `zookeeper.yml` deployment manifest, run the deploy command: 8 | 9 | <pre class="terminal">
10 | $ bosh -e vbox -d zookeeper deploy zookeeper.yml
11 | Using environment '192.168.56.6' as '?'
12 | 
13 | Task 1133
14 | 
15 | 08:41:15 | Preparing deployment: Preparing deployment (00:00:00)
16 | 08:41:15 | Preparing package compilation: Finding packages to compile (00:00:00)
17 | 08:41:15 | Creating missing vms: zookeeper/6b7a51c4-1aeb-4cea-a2da-fdac3044bdee (1) (00:00:10)
18 | 08:41:25 | Updating instance zookeeper: zookeeper/3f9980b4-d02f-4754-bb53-0d1458e447ac (0) (canary) (00:00:27)
19 | 08:41:52 | Updating instance zookeeper: zookeeper/b8b577e7-d745-4d06-b2b7-c7cdeb46c78f (4) (canary) (00:00:25)
20 | 08:42:17 | Updating instance zookeeper: zookeeper/5a901538-be10-4d53-a3e9-3e23d3e3a07a (3) (00:00:25)
21 | 08:42:42 | Updating instance zookeeper: zookeeper/c5a3f7e6-4311-43ac-8500-a2337ca3e8a7 (2) (00:00:26)
22 | 08:43:08 | Updating instance zookeeper: zookeeper/6b7a51c4-1aeb-4cea-a2da-fdac3044bdee (1) (00:00:39)
23 | 
24 | Started  Mon Jul 24 08:41:15 UTC 2017
25 | Finished Mon Jul 24 08:43:47 UTC 2017
26 | Duration 00:02:32
27 | 
28 | Task 1133 done
29 | 
30 | Succeeded
31 | 
32 | 33 | After the deploy command completes with either success or failure, you can run a command to list the VMs created for this deployment: 34 | 35 | <pre class="terminal">
36 | $ bosh -e vbox -d zookeeper instances
37 | Using environment '192.168.56.6' as '?'
38 | 
39 | Deployment 'zookeeper'
40 | 
41 | Instance                                          Process State  AZ  IPs
42 | smoke-tests/42e003c1-1c05-453e-a946-c2e77935cff0  -              z1  -
43 | zookeeper/3f9980b4-d02f-4754-bb53-0d1458e447ac    running        z2  10.244.0.2
44 | zookeeper/5a901538-be10-4d53-a3e9-3e23d3e3a07a    -              z1  10.244.0.3
45 | zookeeper/6b7a51c4-1aeb-4cea-a2da-fdac3044bdee    running        z3  10.244.0.6
46 | zookeeper/b8b577e7-d745-4d06-b2b7-c7cdeb46c78f    running        z2  10.244.0.4
47 | zookeeper/c5a3f7e6-4311-43ac-8500-a2337ca3e8a7    -              z1  10.244.0.5
48 | 
49 | 6 instances
50 | 
51 | Succeeded
52 | 
53 | 54 | --- 55 | [Back to Table of Contents](index.html#basic-deploy) 56 | 57 | Previous: [Upload Releases](uploading-releases.html) 58 | -------------------------------------------------------------------------------- /addons-common.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Common Addons 3 | --- 4 | 5 | (See [runtime config](runtime-config.html#addons) for an introduction to addons.) 6 | 7 | ## Syslog forwarding 8 | 9 | Need: Configure syslog on all machines to forward system logs to a remote location. 10 | 11 | ```yaml 12 | releases: 13 | - name: syslog 14 | version: 3 15 | 16 | addons: 17 | - name: logs 18 | jobs: 19 | - name: syslog_forwarder 20 | release: syslog 21 | properties: 22 | syslog: 23 | address: logs4.papertrail.com 24 | transport: tcp 25 | port: 38559 26 | tls_enabled: true 27 | permitted_peer: "*.papertrail.com" 28 | ca_cert: | 29 | -----BEGIN CERTIFICATE----- 30 | MIIClTCCAf4CCQDc6hJtvGB8RjANBgkqhkiG9w0BAQUFADCBjjELMAk... 31 | -----END CERTIFICATE----- 32 | ``` 33 | 34 | See [syslog_forwarder job](https://bosh.io/jobs/syslog_forwarder?source=github.com/cloudfoundry/syslog-release). 35 | 36 | --- 37 | ## Custom SSH login banner 38 | 39 |

Note: This job works with the 3232+ stemcell series due to how sshd is configured.

40 | 41 | Need: Configure custom login banner to comply with organizational regulations. 42 | 43 | ```yaml 44 | releases: 45 | - name: os-conf 46 | version: 3 47 | 48 | addons: 49 | - name: misc 50 | jobs: 51 | - name: login_banner 52 | release: os-conf 53 | properties: 54 | login_banner: 55 | text: | 56 | This computer system is for authorized use only. All activity is logged and 57 | regularly checked by system administrators. Individuals attempting to connect to, 58 | port-scan, deface, hack, or otherwise interfere with any services on this system 59 | will be reported. 60 | ``` 61 | 62 | See [login_banner job](https://bosh.io/jobs/login_banner?source=github.com/cloudfoundry/os-conf-release). 63 | 64 | --- 65 | ## Custom SSH users 66 | 67 |

Warning: This job does not remove users from the VM when a user is removed from the manifest.

68 | 69 | Need: Provide SSH access to all VMs for a third party automation system. 70 | 71 | ```yaml 72 | releases: 73 | - name: os-conf 74 | version: 3 75 | 76 | addons: 77 | - name: misc 78 | jobs: 79 | - name: user_add 80 | release: os-conf 81 | properties: 82 | users: 83 | - name: nessus 84 | public_key: "ssh-rsa AAAAB3NzaC1yc2EAQCyKb5nLZv...oYPkLlOGyAFLk6Id75Xr hostname" 85 | - name: teleport 86 | public_key: "ssh-rsa AAAAB3NzaC1yc2dfgJKkb5nLZv...dkjfLlOGyAFLk6kfbgYG hostname" 87 | ``` 88 | 89 | See [user_add job](https://bosh.io/jobs/user_add?source=github.com/cloudfoundry/os-conf-release). 90 | -------------------------------------------------------------------------------- /post-start.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Post-start Script 3 | --- 4 | 5 | (See [Job Lifecycle](job-lifecycle.html) for an explanation of when post-start scripts run.) 6 | 7 |

Note: This feature is available with bosh-release v255.4+ and only for releases deployed with 3125+ stemcells.

8 | 9 |

Note: Releases that make use of post-start scripts and are deployed on older stemcells or with an older Director may still deploy; however, the post-start script will not be called.

10 | 11 | A release job can have a post-start script that will run after the job is started (specifically, after monit successfully starts a process). This script allows the job to execute any additional commands against a machine and/or persistent data before the release job is considered successfully started. 12 | 13 | --- 14 | ## Job Configuration 15 | 16 | To add a post-start script to a release job: 17 | 18 | 1. Create a script with any name in the templates directory of a release job. 19 | 1. In the `templates` section of the release job spec file, add the script name and the `bin/post-start` directory as a key-value pair. 20 | 21 | Example: 22 | 23 | ```yaml 24 | --- 25 | name: cassandra_node 26 | templates: 27 | post-start.erb: bin/post-start 28 | ``` 29 | 30 | --- 31 | ## Script Implementation 32 | 33 | A post-start script is usually just a regular shell script. Since a post-start script is executed in a similar way to other release job scripts (start, stop, and drain scripts), you can use the job's package dependencies. 34 | 35 | A post-start script should be idempotent. It may be called multiple times after the process is successfully started. 36 | 37 | Unlike a drain script, a post-start script uses an exit code to indicate its success (exit code 0) or failure (any other exit code). 38 | 39 | A post-start script is called every time the job is started (i.e. the ctl script is called) by the Director, which means that it should perform its operations in an idempotent way. 40 | 41 | <p class="note">

Note: Running `monit start` directly on a VM will not trigger post-start scripts.

42 | 43 | Post-start scripts in a single deployment job (typically composed of multiple release jobs) are executed in parallel. 44 | 45 | --- 46 | ## Logs 47 | 48 | You can find logs for each release job's post-start script in the following locations: 49 | 50 | - stdout in `/var/vcap/sys/log/<job-name>/post-start.stdout.log` 51 | - stderr in `/var/vcap/sys/log/<job-name>/post-start.stderr.log` 52 | 53 | Since the post-start script may be called multiple times, new output will be appended to the files above. Standard [log rotation policy](job-logs.html#log-rotation) applies. 54 | 55 | --- 56 | Next: [Post-deploy script](post-deploy.html) 57 | 58 | Previous: [Pre-start script](pre-start.html) 59 | -------------------------------------------------------------------------------- /monitoring.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Monitoring 3 | --- 4 | 5 | BOSH monitors deployed VMs and release jobs' processes on those VMs via the Health Monitor, with the help of the Agent and Monit. 6 | 7 | --- 8 | ## VMs 9 | 10 | [The Health Monitor](bosh-components.html#health-monitor) continuously checks the presence of the deployed VMs. The Agent on each VM produces a heartbeat every minute and sends it to the Health Monitor over [NATS](bosh-components.html#nats). 11 | 12 | The Health Monitor is extended by a set of plugins. Each plugin is given an opportunity to act on each heartbeat, so in cases of failure it can notify external services or perform actions against the Director. 13 | 14 | The Health Monitor includes the following plugins: 15 | 16 | - Event Logger: Logs events to a file 17 | - Resurrector: Recreates VMs that have stopped heartbeating 18 | - Emailer: Sends configurable e-mails on events receipt 19 | - JSON: Sends events over stdin to any executable matching the glob /var/vcap/jobs/*/bin/bosh-monitor/* 20 | - OpenTSDB: Sends events to [OpenTSDB](http://opentsdb.net/) 21 | - Graphite: Sends events to [Graphite](https://graphite.readthedocs.org/en/latest/) 22 | - PagerDuty: Sends events to [PagerDuty.com](http://pagerduty.com) using their API 23 | - DataDog: Sends events to [DataDog.com](http://datadoghq.com) using their API 24 | - AWS CloudWatch: Sends events to [Amazon's CloudWatch](http://aws.amazon.com/cloudwatch/) using their API 25 | 26 | See [Configuring Health Monitor](hm-config.html) for detailed plugin configuration. 27 | 28 | ### Resurrector Plugin 29 | 30 | The Resurrector plugin continuously cross-references VMs expected to be running against the VMs that are sending heartbeats. When the resurrector does not receive heartbeats for a VM for a certain period of time, it will kick off a task on the Director to try to "resurrect" that VM. 31 | 32 | See [Automatic repair with Resurrector](resurrector.html) for details. 33 | 34 | --- 35 | ## Processes on VMs 36 | 37 | Release jobs' process monitoring on each VM is done with the help of [Monit](http://mmonit.com/monit/). Monit continuously monitors the presence of the configured release jobs' processes and restarts processes that are not found. Process restarts, failures, etc. are reported to the Agent, which in turn reports them as alerts to the Health Monitor. Each Health Monitor plugin is given an opportunity to act on each alert. 38 | 39 | --- 40 | ## SSH Events 41 | 42 | The Agent on each VM sends an alert when someone/something tries to log into the system via SSH. Successful and failed attempts are recorded.
43 | 44 | --- 45 | ## Deploy Events 46 | 47 | The Director sends an alert when a deployment starts, successfully completes or errors. 48 | 49 | --- 50 | Next: [Process monitoring with Monit](vm-monit.html) 51 | -------------------------------------------------------------------------------- /compiled-releases.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Compiled Releases 3 | --- 4 | 5 |

Note: This feature is available with bosh-release v210+.

6 | 7 |

Note: CLI v2 is used in the examples.

8 | 9 | Typically release tarballs are distributed with source packages; however, there may be a requirement to use compiled packages in an environment (for example, a production environment) where: 10 | 11 | - compilation is not permitted for security reasons 12 | - access to source packages is not permitted for legal reasons 13 | - exact existing audited binary assets are expected to be used 14 | 15 | Any release can be exported as a compiled release by using the Director and the [bosh export-release](cli-v2.html#export-release) command. 16 | 17 | --- 18 | ## Using export-release command 19 | 20 | To export a release: 21 | 22 | 1. Create an empty deployment (or use an existing one). This deployment will hold compilation VMs if compilation is necessary. 23 | 24 | ```yaml 25 | name: compilation-workspace 26 | 27 | releases: 28 | - name: uaa 29 | version: "45" 30 | 31 | stemcells: 32 | - alias: default 33 | os: ubuntu-trusty 34 | version: latest 35 | 36 | instance_groups: [] 37 | 38 | update: 39 | canaries: 1 40 | max_in_flight: 1 41 | canary_watch_time: 1000-90000 42 | update_watch_time: 1000-90000 43 | ``` 44 | 45 | <p class="note">

Note: This example assumes you are using cloud config, hence no compilation, networks and other sections were defined. If you are not using cloud config you will have to define them.

46 | 47 | 1. Reference the release versions you want to export. 48 | 49 | 1. Deploy. The example manifest above does not allocate any resources when deployed. 50 | 51 | 1. Run the `bosh export-release` command. In our example: `bosh export-release uaa/45 ubuntu-trusty/3197`. If the release is not already compiled, the Director will create the necessary compilation VMs and compile all packages. 52 | 53 | 1. Find the exported release tarball in the current directory. The compiled release tarball can now be imported into any other Director via the `bosh upload-release` command. 54 | 55 | 1. Optionally use the `bosh inspect-release` command to view the associated compiled packages on the Director. In our example: `bosh inspect-release uaa/45`. 56 | 57 | --- 58 | ## Floating stemcells 59 | 60 | Compiled releases are built against a particular stemcell version. The Director allows compiled releases to be installed on any minor version of the major stemcell version that the compiled release was exported against. Unlike the Director, the `bosh create-env` command requires an exact stemcell match. 61 | 62 | For example, UAA release 27 compiled against stemcell version 3233.10 will work on any 3233 stemcell, but the Director will refuse to install it on 3234. 63 | -------------------------------------------------------------------------------- /init-vcloud.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Creating environment on vCloud 3 | --- 4 | 5 |

Note: See Initializing BOSH environment on vCloud for using bosh-init instead of CLI v2. (Not recommended)

6 | 7 | This document shows how to initialize a new [environment](terminology.html#environment) on vCloud. 8 | 9 | 1. Install [CLI v2](./cli-v2.html). 10 | 11 | 1. Use the `bosh create-env` command to deploy the Director. 12 | 13 |
14 |     # Create directory to keep state
15 |     $ mkdir bosh-1 && cd bosh-1
16 | 
17 |     # Clone Director templates
18 |     $ git clone https://github.com/cloudfoundry/bosh-deployment
19 | 
20 |     # Fill below variables (replace example values) and deploy the Director
21 |     $ bosh create-env bosh-deployment/bosh.yml \
22 |         --state=state.json \
23 |         --vars-store=creds.yml \
24 |         -o bosh-deployment/vcloud/cpi.yml \
25 |         -v director_name=bosh-1 \
26 |         -v internal_cidr=10.0.0.0/24 \
27 |         -v internal_gw=10.0.0.1 \
28 |         -v internal_ip=10.0.0.6 \
29 |         -v network_name="VM Network" \
30 |         -v vcloud_url=https://jf629-vcd.vchs.vmware.com \
31 |         -v vcloud_user=root \
32 |         -v vcloud_password=vmware \
33 |         -v vcd_org=VDC-M127910816-4610-275 \
34 |         -v vcd_name=VDC-M127910816-4610-275
35 |     
36 | 37 | To prepare your vCloud environment, find out and/or create any missing resources listed below: 38 | - Configure `vcloud_url` (e.g. 'https://jf629-vcd.vchs.vmware.com') with the URL of the vCloud Director. 39 | - Configure `vcloud_user` (e.g. 'root') and `vcloud_password` (e.g. 'vmware') in your deployment manifest with the vCloud user name and password. BOSH does not require the user to be an admin; however, it does need certain user privileges. 40 | - Configure `network_name` (e.g. 'VM Network') with the name of the vCloud network. The above example uses the `10.0.0.0/24` network, and the Director VM will be placed at `10.0.0.6`. 41 | - Configure `vcd_org` (e.g. 'VDC-M127910816-4610-275') with the name of the vCloud organization. 42 | - Configure `vcd_name` (e.g. 'VDC-M127910816-4610-275') with the name of the virtual datacenter (VDC). 43 | 44 | 1. Connect to the Director. 45 | 46 |
47 |     # Configure local alias
48 |     $ bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
49 | 
50 |     # Log in to the Director
51 |     $ export BOSH_CLIENT=admin
52 |     $ export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
53 | 
54 |     # Query the Director for more info
55 |     $ bosh -e bosh-1 env
56 |     
57 | 58 | 1. Save the deployment state files left in your deployment directory `bosh-1` so you can later update/delete your Director. See [Deployment state](cli-envs.html#deployment-state) for details. 59 | 60 | --- 61 | [Back to Table of Contents](index.html#install) 62 | 63 | Previous: [Create an environment](init.html) 64 | -------------------------------------------------------------------------------- /director-backup.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: How to back up and restore a BOSH Director deployment 3 | --- 4 | 5 | ## Why perform BOSH Director backups and restores? ## 6 | 7 | If you use bosh-init to deploy your BOSH Director, it is useful to back up the deployment state file containing the associated IaaS information (IP, floating IP, persistent disk volume ID). This enables recovery of a lost Director VM from the persistent disk still present in the IaaS. See [bosh-init](using-bosh-init.html#recover-deployment-state). 8 | 9 | Performing regular backups of the BOSH Director is essential to keep your BOSH deployments (CF, services, ...) operable despite a loss of the Director's persistent disk, or an operator error such as accidentally deleting the Director deployment. See a related testimony of such a [not-fun experience]( https://youtu.be/ZQvxfL3Wb7s?list=PLhuMOCWn4P9io8gtd6JSlI9--q7Gw3epW&t=1307) 10 | 11 | BOSH provides built-in commands to export the content of the Director database and restore it on a fresh, empty Director deployment. The backup, however, does not include data that can be recovered from artifact repositories, namely stemcells and releases. The latter need to be re-uploaded manually. 12 | 13 | ## Back up your BOSH Director ## 14 | 15 | Target the BOSH Director deployment that you need to back up: 16 | 17 | `bosh deployment your_bosh_director_manifest` 18 | 19 | Make a backup of your BOSH instance: 20 | 21 | `bosh backup` 22 | 23 | This command will generate a .tar.gz file that contains a dump of the Director's database. 24 | 25 | ## Restore the backup following an outage ## 26 | 27 | We assume that your BOSH Director deployment was deleted, so you need to restore it. 28 | 29 | The first step is to deploy a fresh, empty BOSH Director deployment. 30 | 31 | The second step is to restore the content of the BOSH Director database: 32 | 33 | * Connect to your BOSH instance (using `bosh target https://your_bosh_ip`). 34 | * Use the bosh restore command: `bosh restore yourBackupFile.tar.gz` 35 | 36 | The third step is to manually re-upload the release and stemcell binaries. 37 | 38 | * List the expected stemcells and releases 39 | 40 | `bosh stemcells` 41 | `bosh releases` 42 | 43 | Then upload stemcells/releases from your repositories such as bosh.io, using the `--fix` option so BOSH will fix them in the database (restoring the missing blobs). Without the `--fix` option, BOSH would complain about duplicate stemcells/releases with the same name. 44 | 45 | `bosh upload stemcell https://stemcells_url --fix` 46 | `bosh upload release https://release_url --fix` 47 | 48 | Following these restoration steps, the BOSH Director is now able to manage your previous deployments. 49 | 50 | `bosh cloudcheck` 51 | 52 | `bosh instances --ps` 53 | 54 | You can now safely update your deployments using the usual deploy command.
55 | 56 | `bosh deploy` 57 | -------------------------------------------------------------------------------- /trusted-certs.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Trusted Certificates 3 | --- 4 | 5 |

Note: This feature is available with bosh-release v176+ (1.2992.0) and stemcells v2992+.

6 | 7 | This document describes how to configure the Director to add a set of trusted certificates to all VMs managed by that Director. Configured trusted certificates are added to the default certificate store on each VM and will be automatically seen by the majority of software (e.g. curl). 8 | 9 | --- 10 | ## Configuring Trusted Certificates 11 | 12 | To configure the Director with trusted certificates: 13 | 14 | 1. Change deployment manifest for the Director to include one or more certificates: 15 | 16 | ```yaml 17 | properties: 18 | director: 19 | trusted_certs: | 20 | # Comments are allowed in between certificate boundaries 21 | -----BEGIN CERTIFICATE----- 22 | MIICsjCCAhugAwIBAgIJAMcyGWdRwnFlMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV 23 | BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX 24 | ... 25 | ItuuqKphqhSb6PEcFMzuVpTbN09ko54cHYIIULrSj3lEkoY9KJ1ONzxKjeGMHrOP 26 | KS+vQr1+OCpxozj1qdBzvHgCS0DrtA== 27 | -----END CERTIFICATE----- 28 | # Some other certificate below 29 | -----BEGIN CERTIFICATE----- 30 | MIIB8zCCAVwCCQCLgU6CRfFs5jANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB 31 | VTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0 32 | ... 33 | VhORg7+d5moBrryXFJfeiybtuIEA+1AOwEkdp1MAKBhRZYmeoQXPAieBrCp6l+Ax 34 | BaLg0R513H6KdlpsIOh6Ywa1r/ID0As= 35 | -----END CERTIFICATE----- 36 | ``` 37 | 38 | 1. Redeploy the Director with the updated manifest. 39 | 40 |

Note: Currently only VMs managed by the Director will be updated with the trusted certificates. The Director VM will not have trusted certificates installed.

41 | 42 | 1. Redeploy each deployment to immediately update the deployment's VMs with trusted certificates. Otherwise, trusted certificate changes will be picked up the next time you run `bosh deploy` for that deployment. 43 | 44 |
45 |     $ bosh deployment ~/deployments/cf-mysql.yml
46 |     $ bosh deploy
47 |     ...
48 | 
49 |     $ bosh deployment ~/deployments/cf-rabbitmq.yml
50 |     $ bosh deploy
51 |     ...
52 |     
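If you manage many deployments, redeploying each one by hand gets tedious. Below is a minimal sketch of automating the redeploy with CLI v2 syntax instead (the use of `jq` and the temporary manifest paths are illustrative assumptions, not part of the official workflow):

```shell
# Redeploy every deployment known to the Director so that updated
# trusted certificates reach all managed VMs
for dep in $(bosh deployments --json | jq -r '.Tables[0].Rows[].name'); do
  bosh -d "$dep" manifest > "/tmp/$dep.yml"  # fetch the last deployed manifest
  bosh -n -d "$dep" deploy "/tmp/$dep.yml"   # redeploy non-interactively
done
```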
53 | 54 | ### Configuration Format 55 | 56 | The Director allows you to specify one or more certificates concatenated together in the PEM format. Any text before, between, and after certificate boundaries is ignored when importing the certificates, but may be useful for leaving notes about the certificates' purpose. 57 | 58 | Providing multiple certificates makes downtimeless certificate rotation possible; however, it involves redeploying the Director and all deployments twice -- first to add the new certificate and second to remove the old certificate. 59 | 60 | --- 61 | [Back to Table of Contents](index.html#deployment-config) 62 | -------------------------------------------------------------------------------- /deployment-basics.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Build Deployment Manifest 3 | --- 4 | 5 | (See [What is a Deployment?](deployment.html) for an introduction to deployments.) 6 | 7 | A deployment is a collection of VMs, persistent disks, and other resources. To create a deployment in the Director, it has to be described with a [deployment manifest](terminology.html#manifest). Most deployment manifests look something like this: 8 | 9 | ```yaml 10 | --- 11 | name: zookeeper 12 | 13 | releases: 14 | - name: zookeeper 15 | version: 0.0.5 16 | url: https://bosh.io/d/github.com/cppforlife/zookeeper-release?v=0.0.5 17 | sha1: 65a07b7526f108b0863d76aada7fc29e2c9e2095 18 | 19 | stemcells: 20 | - alias: default 21 | os: ubuntu-trusty 22 | version: latest 23 | 24 | update: 25 | canaries: 2 26 | max_in_flight: 1 27 | canary_watch_time: 5000-60000 28 | update_watch_time: 5000-60000 29 | 30 | instance_groups: 31 | - name: zookeeper 32 | azs: [z1, z2, z3] 33 | instances: 5 34 | jobs: 35 | - name: zookeeper 36 | release: zookeeper 37 | properties: {} 38 | vm_type: default 39 | stemcell: default 40 | persistent_disk: 10240 41 | networks: 42 | - name: default 43 | 44 | - name: smoke-tests 45 | azs: [z1] 46 | lifecycle: errand 47 | instances: 1 48 | jobs: 49 | - name: smoke-tests 50 | release: zookeeper 51 | properties: {} 52 | vm_type: default 53 | stemcell: default 54 | networks: 55 | - name: default 56 | ``` 57 | 58 | (Taken from ) 59 | 60 | Here is how the deployment manifest describes a reasonably complex Zookeeper cluster: 61 | 62 | - Zookeeper source code, configuration files, startup scripts 63 | - include `zookeeper` release version `0.0.5` in the `releases` section 64 | - Operating system image onto which to install software 65 | - include the latest version of the `ubuntu-trusty` stemcell 66 | - Create 5 Zookeeper VMs 67 | - add `zookeeper` [instance group](terminology.html#instance-group) with `instances: 5` 68 | - Spread VMs over multiple availability zones 69 | - add `azs: [z1, z2, z3]` 70 | - Install Zookeeper software onto VMs 71 | - add the `zookeeper` job to this instance group 72 | - Size VMs in the same way 73 | - add `vm_type: default`, which references a VM type from cloud config 74 | - Attach a 10GB [persistent disk](terminology.html#persistent-disk) to each Zookeeper VM 75 | - add `persistent_disk: 10240` to the `zookeeper` instance group 76 | - Place VMs onto some [network](networks.html) 77 | - add `networks: [{name: default}]` to the `zookeeper` instance group 78 | - Provide a way to smoke test the Zookeeper cluster 79 | - add the `smoke-tests` instance group with the `smoke-tests` job from the Zookeeper release 80 | 81 | Refer to the [manifest v2 schema](manifest-v2.html) for a detailed breakdown.
82 | 83 | Once the manifest is complete, the referenced stemcells and releases must be uploaded. 84 | 85 | --- 86 | Next: [Upload Stemcells](uploading-stemcells.html) 87 | 88 | Previous: [Update Cloud Config](update-cloud-config.html) 89 | -------------------------------------------------------------------------------- /pre-start.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Pre-start Script 3 | --- 4 | 5 | (See [Job Lifecycle](job-lifecycle.html) for an explanation of when pre-start scripts run.) 6 | 7 |

Note: This feature is available with bosh-release v206+ (1.3072.0) and only for releases deployed with 3125+ stemcells.

8 | 9 |

Note: Releases that make use of pre-start scripts and are deployed on older stemcells or with an older Director may potentially deploy; however, the pre-start script will not be called.

10 | 11 | A release job can have a pre-start script that runs before the job is started. This script allows the job to prepare the machine and/or persistent data before starting its operation. For example, when writing a release for Cassandra, each node may need to migrate the format of its SSTables. That procedure may be lengthy and should happen before the node can successfully start. 12 | 13 | --- 14 | ## Job Configuration 15 | 16 | To add a pre-start script to a release job: 17 | 18 | 1. Create a script with any name in the templates directory of a release job. 19 | 1. In the `templates` section of the release job spec file, add the script name and the `bin/pre-start` directory as a key-value pair. 20 | 21 | Example: 22 | 23 | ```yaml 24 | --- 25 | name: cassandra_node 26 | templates: 27 | pre-start.erb: bin/pre-start 28 | ``` 29 | 30 | --- 31 | ## Script Implementation 32 | 33 | A pre-start script is usually just a regular shell script. ERB tags may be used for templating. Since the pre-start script is executed in a similar way as other release job scripts (start, stop, and drain scripts), you can use the job's package dependencies. 34 | 35 |

Note: After templating, the pre-start script must have its shebang on the first line.

36 | 37 | The pre-start script should be idempotent. It may be called multiple times before the process is successfully started. 38 | 39 | Unlike a drain script, a pre-start script uses an exit code to indicate its success (exit code 0) or failure (any other exit code). 40 | 41 | The pre-start script is called by the Director every time before the job is started (before the ctl script is called), which means that the pre-start script should perform its operations in an idempotent way. 42 | 43 |
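For illustration, here is a minimal pre-start script sketch in the spirit of the Cassandra example above (the `cassandra_node` job name, the marker file, and the `migrate-sstables` helper are hypothetical, not part of any real release):

```shell
#!/bin/bash
set -e

MARKER=/var/vcap/store/cassandra_node/.sstables-migrated

# Run the lengthy migration only once; later invocations become
# no-ops, which keeps the script idempotent
if [ ! -f "$MARKER" ]; then
  /var/vcap/packages/cassandra/bin/migrate-sstables  # hypothetical helper
  touch "$MARKER"
fi

exit 0  # exit code 0 indicates success
```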

Note: Running `monit start` directly on a VM will not trigger pre-start scripts.

44 | 45 | Pre-start scripts in a single deployment job (typically composed of multiple release jobs) are executed in parallel. 46 | 47 | --- 48 | ## Logs 49 | 50 | You can find logs for each release job's pre-start script in the following locations: 51 | 52 | - stdout in `/var/vcap/sys/log/<job-name>/pre-start.stdout.log` 53 | - stderr in `/var/vcap/sys/log/<job-name>/pre-start.stderr.log` 54 | 55 | Since the pre-start script may be called multiple times, new output will be appended to the files above. Standard [log rotation policy](job-logs.html#log-rotation) applies. 56 | 57 | --- 58 | Next: [Post-start script](post-start.html) 59 | 60 | Previous: [Job lifecycle](job-lifecycle.html) 61 | -------------------------------------------------------------------------------- /director-users.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: User management on the Director 3 | --- 4 | 5 | The Director provides a very simple built-in user management system for authentication of operators and internal services (for example, the Health Monitor). Alternatively, it can integrate with UAA for more advanced use cases. 6 | 7 | --- 8 | ## Default Configuration 9 | 10 |

Note: We are planning to remove this configuration. We recommend configuring the Director as described below in the Preconfigured Users section.

11 | 12 | Once installed, the Director comes without any configured users by default. When there are no configured users, you can use `admin` / `admin` credentials to log in to the Director. 13 | 14 |
15 | $ bosh login admin
16 | 
17 | Enter password: *****
18 | Logged in as `admin'
19 | 
20 | 21 | When the Director is configured with at least one user, default `admin` / `admin` credentials no longer work. To create a new user: 22 | 23 |
24 | $ bosh create user some-operator
25 | 
26 | Enter new password: ********
27 | Verify new password: ********
28 | User `some-operator' has been created
29 | 
30 | 31 | To delete an existing user: 32 | 33 |
34 | $ bosh delete user some-operator
35 | 
36 | Are you sure you would like to delete the user `some-operator'? (type 'yes' to continue): yes
37 | User `some-operator' has been deleted
38 | 
39 | 40 | --- 41 | ## Preconfigured Users 42 | 43 |

Note: This feature is available with bosh-release v177+ (1.2999.0).

44 | 45 | In this configuration, the Director is configured in advance with a list of users. There is no way to add or remove users without redeploying the Director. 46 | 47 | To configure the Director with a list of users: 48 | 49 | 1. Change the deployment manifest for the Director: 50 | 51 | ```yaml 52 | properties: 53 | director: 54 | user_management: 55 | provider: local 56 | local: 57 | users: 58 | - {name: admin, password: admin-password} 59 | - {name: hm, password: hm-password} 60 | ``` 61 | 62 | 1. Redeploy the Director with the updated manifest. 63 | 64 | --- 65 | ## UAA Integration 66 | 67 | [Configure the Director with UAA user management](director-users-uaa.html). 68 | 69 | --- 70 | ## Director Tasks 71 | 72 | When a user initiates a [director task](director-tasks.html), the Director logs the user in the task audit log. 73 | 74 | --- 75 | ## Health Monitor Authentication 76 | 77 | The Health Monitor is configured to use a custom user to query/submit requests to the Director. Since by default the Director does not come with any users, the Health Monitor is not able to successfully communicate with the Director. See the [Automatic repair with Resurrector](resurrector.html) topic for more details. 78 | 79 | --- 80 | [Back to Table of Contents](index.html#director-config) 81 | -------------------------------------------------------------------------------- /post-deploy.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Post-deploy Script 3 | --- 4 | 5 | (See [Job Lifecycle](job-lifecycle.html) for an explanation of when post-deploy scripts run.) 6 | 7 |

Note: This feature is available with bosh-release v255.4+ and only for releases deployed with 3125+ stemcells.

8 | 9 |

Note: Releases that make use of post-deploy scripts and are deployed on older stemcells or with an older Director may potentially deploy; however, the post-deploy script will not be called.

10 | 11 | A release job can have a post-deploy script that runs after all jobs in the deployment have successfully started (and have run their post-start scripts). This script allows the job to execute additional commands against a whole deployment before the deploy is considered finished. 12 | 13 | --- 14 | ## Director Configuration 15 | 16 | Currently, the Director does not run post-deploy scripts by default. Use the [`director.enable_post_deploy` property](https://bosh.io/jobs/director?source=github.com/cloudfoundry/bosh#p=director.enable_post_deploy) to enable them. 17 | 18 | --- 19 | ## Job Configuration 20 | 21 | To add a post-deploy script to a release job: 22 | 23 | 1. Create a script with any name in the templates directory of a release job. 24 | 1. In the `templates` section of the release job spec file, add the script name and the `bin/post-deploy` directory as a key-value pair. 25 | 26 | Example: 27 | 28 | ```yaml 29 | --- 30 | name: cassandra_node 31 | templates: 32 | post-deploy.erb: bin/post-deploy 33 | ``` 34 | 35 | --- 36 | ## Script Implementation 37 | 38 | A post-deploy script is usually just a regular shell script. Since the post-deploy script is executed in a similar way as other release job scripts (start, stop, and drain scripts), you can use the job's package dependencies. 39 | 40 | The post-deploy script should be idempotent. It may be called multiple times after the process is successfully started. 41 | 42 | Unlike a drain script, a post-deploy script uses an exit code to indicate its success (exit code 0) or failure (any other exit code). 43 | 44 | The post-deploy script is called by the Director every time after the job is started (after the ctl script is called), which means that the post-deploy script should perform its operations in an idempotent way. 45 | 46 |
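As an illustration, a post-deploy script might verify that the deployment as a whole responds before the deploy is declared finished. A minimal sketch (the health endpoint URL and the timing values are made-up assumptions):

```shell
#!/bin/bash

# Poll a deployment-wide health endpoint for up to ~150 seconds;
# exit 0 on success (deploy considered finished), non-zero on failure
for i in $(seq 1 30); do
  if curl --fail --silent http://127.0.0.1:8080/health >/dev/null; then
    exit 0
  fi
  sleep 5
done

echo "service did not become healthy in time"
exit 1
```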

Note: Running `monit start` directly on a VM will not trigger post-deploy scripts.

47 | 48 | Post-deploy scripts in a deployment are executed in parallel. 49 | 50 | --- 51 | ## Logs 52 | 53 | You can find logs for each release job's post-deploy script in the following locations: 54 | 55 | - stdout in `/var/vcap/sys/log/<job-name>/post-deploy.stdout.log` 56 | - stderr in `/var/vcap/sys/log/<job-name>/post-deploy.stderr.log` 57 | 58 | Since the post-deploy script may be called multiple times, new output will be appended to the files above. Standard [log rotation policy](job-logs.html#log-rotation) applies. 59 | 60 | --- 61 | Next: [Drain script](drain.html) 62 | 63 | Previous: [Post-start script](post-start.html) 64 | -------------------------------------------------------------------------------- /hm-config.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Configuring Health Monitor 3 | --- 4 | 5 | The sections below show only the minimum configuration options needed to enable each plugin. Add them to the deployment manifest for the Health Monitor. See [health_monitor release job properties](http://bosh.io/jobs/health_monitor?source=github.com/cloudfoundry/bosh) for more details. 6 | 7 | --- 8 | ## Event Logger 9 | 10 | Enabled by default. No way to turn it off. 11 | 12 | --- 13 | ## Resurrector 14 | 15 | Restarts VMs that have stopped heartbeating. See [Automatic repair with Resurrector](resurrector.html) for more details. 16 | 17 | ```yaml 18 | properties: 19 | hm: 20 | resurrector_enabled: true 21 | ``` 22 | 23 | --- 24 | ## Emailer 25 | 26 | Plugin that sends configurable e-mails on event receipt. 27 | 28 | ```yaml 29 | properties: 30 | hm: 31 | email_notifications: true 32 | email_recipients: [email@gmail.com] 33 | smtp: 34 | from: 35 | host: 36 | port: 37 | domain: 38 | tls: 39 | auth: 40 | user: 41 | password: 42 | ``` 43 | 44 | --- 45 | ## JSON 46 | 47 | Enabled by default. 48 | 49 | Plugin that sends alerts and heartbeats as JSON to programs installed on the Director over stdin. The plugin will start and manage a process for each executable matching the glob `/var/vcap/jobs/*/bin/bosh-monitor/*`. 50 | 51 | --- 52 | ## OpenTSDB 53 | 54 | Plugin that forwards alerts and heartbeats to [OpenTSDB](http://opentsdb.net/). 55 | 56 | ```yaml 57 | properties: 58 | hm: 59 | tsdb_enabled: true 60 | tsdb: 61 | address: tsdb.your.org 62 | port: 4242 63 | ``` 64 | 65 | --- 66 | ## Graphite 67 | 68 | Plugin that forwards heartbeats to [Graphite](https://graphite.readthedocs.org/en/latest/). 69 | 70 | ```yaml 71 | properties: 72 | hm: 73 | graphite_enabled: true 74 | graphite: 75 | address: graphite.your.org 76 | port: 2003 77 | ``` 78 | 79 | --- 80 | ## PagerDuty 81 | 82 | Plugin that sends various events to [PagerDuty.com](http://pagerduty.com) using their API. 83 | 84 | ```yaml 85 | properties: 86 | hm: 87 | pagerduty_enabled: 88 | pagerduty: 89 | service_key: 90 | http_proxy: 91 | ``` 92 | 93 | --- 94 | ## DataDog 95 | 96 | Plugin that sends various events to [DataDog.com](http://datadoghq.com) using their API. 97 | 98 | ```yaml 99 | properties: 100 | hm: 101 | datadog_enabled: true 102 | datadog: 103 | api_key: 104 | application_key: 105 | pagerduty_service_name: 106 | ``` 107 | 108 | --- 109 | ## AWS CloudWatch 110 | 111 | Plugin that sends various events to [Amazon's CloudWatch](http://aws.amazon.com/cloudwatch/) using their API.
112 | 113 | ```yaml 114 | properties: 115 | hm: 116 | cloud_watch_enabled: true 117 | aws: 118 | access_key_id: 119 | secret_access_key: 120 | ``` 121 | -------------------------------------------------------------------------------- /aws-iam-users.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Creating IAM Users 3 | --- 4 | 5 | ## Creating new IAM user 6 | 7 | 1. Log into the AWS console: [https://console.aws.amazon.com/console/home](https://console.aws.amazon.com/console/home). 8 | 9 | <%= image_tag("images/deploy-microbosh-to-aws/account-dashboard.png") %> 10 | 11 | 1. Click your account name and select **Security Credentials**. 12 | 13 | <%= image_tag("images/deploy-microbosh-to-aws/security-credentials-menu.png") %> 14 | 15 | 1. If the AWS IAM confirmation box is presented, click **Get Started with IAM Users** to go to the IAM Users management page. Alternatively, go directly to the [users list](https://console.aws.amazon.com/iam/home#users). 16 | 17 | <%= image_tag("images/deploy-microbosh-to-aws/iam-modal.png") %> 18 | 19 | 1. Click the **Create New Users** button. 20 | 21 | <%= image_tag("images/deploy-microbosh-to-aws/list-iam-users.png") %> 22 | 23 | 1. Enter a descriptive name for the new user, make sure that access keys will be generated for each user, and click the **Create** button. 24 | 25 | <%= image_tag("images/deploy-microbosh-to-aws/create-iam-users.png") %> 26 | 27 | 1. Record the **Access Key ID** and **Secret Access Key** for later use. Click the **Close** link to get back to the list of users. 28 | 29 | <%= image_tag("images/deploy-microbosh-to-aws/get-iam-creds.png") %> 30 | 31 | 1. Click the new user in the list of users. 32 | 33 | 1. Click the **Inline Policies** panel and choose to create a new inline policy. 34 | 35 | <%= image_tag("images/deploy-microbosh-to-aws/attach-iam-policy.png") %> 36 | 37 | 1. Choose **Custom Policy**. 38 | 39 | 1. Add a policy configuration for the chosen user and click **Apply Policy**. 40 | 41 | <%= image_tag("images/deploy-microbosh-to-aws/add-iam-inline-policy.png") %> 42 | 43 | For example, the following inline policy for the AWS CPI user allows the needed EC2 actions and full ELB access: 44 | 45 | ```json 46 | { 47 | "Version": "2012-10-17", 48 | "Statement": [ 49 | { 50 | "Action": [ 51 | "ec2:AssociateAddress", 52 | "ec2:AttachVolume", 53 | "ec2:CreateVolume", 54 | "ec2:DeleteSnapshot", 55 | "ec2:DeleteVolume", 56 | "ec2:DescribeAddresses", 57 | "ec2:DescribeImages", 58 | "ec2:DescribeInstances", 59 | "ec2:DescribeRegions", 60 | "ec2:DescribeSecurityGroups", 61 | "ec2:DescribeSnapshots", 62 | "ec2:DescribeSubnets", 63 | "ec2:DescribeVolumes", 64 | "ec2:DetachVolume", 65 | "ec2:CreateSnapshot", 66 | "ec2:CreateTags", 67 | "ec2:RunInstances", 68 | "ec2:TerminateInstances", 69 | "ec2:RegisterImage", 70 | "ec2:DeregisterImage" 71 | ], 72 | "Effect": "Allow", 73 | "Resource": "*" 74 | }, 75 | { 76 | "Effect": "Allow", 77 | "Action": [ "elasticloadbalancing:*" ], 78 | "Resource": [ "*" ] 79 | } 80 | ] 81 | } 82 | ``` 83 | 84 |

Note: It's highly encouraged to set a very restrictive policy to limit unnecessary access.

85 | -------------------------------------------------------------------------------- /cli-global-flags.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: CLI Global Flags 3 | --- 4 | 5 |

Note: Applies to CLI v2.

6 | 7 | ## Help flag 8 | 9 | - `bosh -h` shows the global flags described here and all available commands 10 | - `bosh <command> -h` shows command-specific options 11 | 12 | --- 13 | ## Version flag 14 | 15 | - `-v` flag shows the CLI version. 16 | 17 |

Note: To see the Director version, use the `bosh env` command.

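For example (the `bosh-1` environment alias is illustrative):

```shell
# Print the CLI version
$ bosh -v

# Print the Director version along with other environment details
$ bosh -e bosh-1 env
```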
18 | 19 | --- 20 | ## Environment flags 21 | 22 | - `--environment` (`-e`) flag allows you to specify the Director VM address or an environment alias (`BOSH_ENVIRONMENT` environment variable) 23 | - `--ca-cert` flag allows you to specify the CA certificate used for connecting to the Director and UAA (`BOSH_CA_CERT` environment variable) 24 | 25 | The CLI does not provide a way to skip SSL certificate validation, to encourage secure Director configuration. 26 | 27 | See [CLI environments](cli-envs.html) for details. 28 | 29 | --- 30 | ## Authentication flags 31 | 32 | - `--client` flag allows you to specify the basic auth username or UAA client ID (`BOSH_CLIENT` environment variable) 33 | - `--client-secret` flag allows you to specify the basic auth password or UAA client secret (`BOSH_CLIENT_SECRET` environment variable) 34 | 35 | The CLI does not provide a way to specify UAA user login information, since all non-interactive use (in scripts) should use UAA clients. The `bosh log-in` command allows you to log in interactively as a UAA user. 36 | 37 | --- 38 | ## Output flags 39 | 40 | - `-n` flag affirms any confirmation that typically requires user input (`BOSH_NON_INTERACTIVE=true` environment variable) 41 | - `--json` flag changes output format to JSON 42 | - `--tty` flag forces output to include all decorative text typically visible when command is not redirected 43 | - `--no-color` flag disables colors (enabled by default when command is redirected) 44 | 45 | The CLI makes a distinction between decorative text (table headings) and primary content (such as tables). To make it easy to parse command output with other tools (such as grep), decorative text is automatically hidden when command output is redirected. 46 | 47 | --- 48 | ## Deployment flag 49 | 50 | - `--deployment` (`-d`) flag allows you to specify the deployment for a command (`BOSH_DEPLOYMENT` environment variable) 51 | 52 | Several commands that can operate in a Director and a deployment context (such as the `bosh tasks` command) account for the presence of this flag and filter their output based on the deployment. 53 | 54 | --- 55 | ## SOCKS5 Tunneling 56 | 57 | See [tunneling](cli-tunnel.html) for details. 58 | 59 | --- 60 | ## Logging 61 | 62 | Along with the UI output (stdout) and UI errors (stderr), the CLI can output more verbose logs. 63 | 64 | Logging is disabled by default (`BOSH_LOG_LEVEL` defaults to `none`). 65 | 66 | To enable logging, set the `BOSH_LOG_LEVEL` environment variable to one of the following values: `debug`, `info`, `warn`, `error`, `none` (default). 67 | 68 | Logs are written to stdout (debug & info) and stderr (warn & error) by default. 69 | 70 | To write logs to a file, set the `BOSH_LOG_PATH` environment variable to the path of the file to create and/or append to. 71 | -------------------------------------------------------------------------------- /problems.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: What Problems Does BOSH Solve? 3 | --- 4 | 5 | BOSH allows individual developers and teams to easily version, package and deploy software in a reproducible manner. 6 | 7 | Any software, whether it is a simple static site or a complex multi-component service, will need to be updated and repackaged at some point. This updated software might need to be deployed to a cluster, or it might need to be packaged for end-users to deploy to their own servers. In a lot of cases, the developers who produced the software will be deploying it to their own production environment.
Usually, a team will use a staging, development, or demo environment that is configured similarly to their production environment to verify that updates run as expected. These staging environments are often taxing to build and administer, and maintaining consistency between multiple environments is often painful. 8 | 9 | Developer/operator communities have come far in solving similar situations with tools like Chef, Puppet, and Docker. However, each organization solves these problems in a different way, which usually involves a variety of different, and not necessarily well-integrated, tools. While these tools exist to solve the individual parts of versioning, packaging, and deploying software reproducibly, BOSH was designed to do each of these as a whole. 10 | 11 | BOSH was purposefully constructed to address the four principles of modern [Release Engineering](http://en.wikipedia.org/wiki/Release_engineering) in the following ways: 12 | 13 | > **Identifiability**: Being able to identify all of the source, tools, environment, and other components that make up a particular release. 14 | 15 | BOSH has a concept of a software release which packages up all related source code, binary assets, configuration, etc. This allows users to easily track the contents of a particular release. In addition to releases, BOSH provides a way to capture all Operating System dependencies as one image. 16 | 17 | > **Reproducibility**: The ability to integrate source, third party components, data, and deployment externals of a software system in order to guarantee operational stability. 18 | 19 | The BOSH tool chain provides a centralized server that manages software releases, Operating System images, persistent data, and system configuration. It provides a clear and simple way of operating a deployed system. 20 | 21 | > **Consistency**: The mission to provide a stable framework for development, deployment, audit, and accountability for software components. 22 | 23 | BOSH software release workflows are used throughout the development of the software and when the system needs to be deployed. The BOSH centralized server allows users to see and track changes made to the deployed system. 24 | 25 | > **Agility**: The ongoing research into what are the repercussions of modern software engineering practices on the productivity in the software cycle, i.e. continuous integration. 26 | 27 | The BOSH tool chain integrates well with current best practices of software engineering (including Continuous Delivery) by providing ways to easily create software releases in an automated way and to update complex deployed systems with simple commands. 28 | 29 | --- 30 | Next: [What is a Stemcell?](stemcell.html) 31 | 32 | Previous: [What is BOSH?](about.html) 33 | -------------------------------------------------------------------------------- /understanding-bosh.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Understanding BOSH 3 | --- 4 | 5 | BOSH is an open source tool chain for release engineering, deployment, and 6 | lifecycle management of large-scale distributed services. 7 | 8 | ## Parts of a BOSH Deployment ## 9 | 10 | Every BOSH deployment consists of three parts: a stemcell, a release, and a 11 | manifest. 12 | 13 | ### Stemcell ### 14 | 15 | A stemcell is a VM template. 16 | BOSH clones new VMs from the stemcell to create the VMs needed for a deployment.
17 | A stemcell contains an OS and an embedded BOSH Agent that allows BOSH to control 18 | VMs cloned from the stemcell. 19 | 20 | VMs cloned from a single stemcell can be configured with different CPU, memory, 21 | storage, and network settings, and can have different software packages 22 | installed. 23 | Stemcells are tied to specific cloud infrastructures. 24 | 25 | ### Release ### 26 | 27 | A BOSH release is a collection of source code, configuration files, and startup 28 | scripts, with a version number that identifies these components. 29 | A BOSH release consists of the software packages to be installed and the 30 | processes, or jobs, to run on the VMs in a deployment. 31 | 32 | * A package contains source code and a script for compiling and installing the 33 | package, with optional dependencies on other packages. 34 | * A job is a set of configuration files and scripts to run the binaries from a 35 | package. 36 | 37 | ### Manifest ### 38 | 39 | The BOSH deployment manifest is a YAML file defining the layout and properties 40 | of the deployment. 41 | When a BOSH operator initiates a new deployment using the BOSH CLI, the BOSH 42 | Director receives a version of the deployment manifest and creates a new 43 | deployment using this manifest. 44 | The manifest describes the configuration of the cloud infrastructure, network 45 | architecture, and VM types, including which operating system each VM runs. 46 | 47 | ## Deploying with BOSH ## 48 | 49 | A BOSH deployment creates runnable software on VMs from a static release. 50 | 51 | To deploy with BOSH: 52 | 53 | 1. Upload a stemcell 54 | 1. Upload a release 55 | 1. Set deployment with a manifest 56 | 1. Deploy 57 | 58 | The [stemcell](#stemcell) acts as a template for the new VMs created for the 59 | deployment. 60 | The [manifest](#manifest) defines the values of the parameters needed by the 61 | deployment. 62 | BOSH substitutes the parameters from the manifest into the [release](#release) 63 | and configures the software to run as described in the manifest. 64 | 65 | The separation of a BOSH deployment into a stemcell, release, and manifest lets 66 | you make changes to one aspect of a deployment without having to change the 67 | rest. 68 | 69 | For example: 70 | 71 | * To switch a deployment between clouds: 72 | * Keep the same release 73 | * Use a stemcell specific to the new cloud 74 | * Tweak the manifest 75 | * To scale up an application: 76 | * Keep the same release 77 | * Use the same stemcell 78 | * Change one line in the manifest 79 | * To update or roll back an application: 80 | * Use a newer or older release version 81 | * Use the same stemcell 82 | * Use the same manifest -------------------------------------------------------------------------------- /variable-types.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Variable Types 3 | --- 4 | 5 | (See [Variable Interpolation](cli-int.html) for an introduction.) 6 | 7 | Currently the CLI supports `certificate`, `password`, `rsa`, and `ssh` types. The Director (connected to a config server) may support additional types known by the config server. 8 | 9 | Note that `<value>` indicates the value obtained via the `((var))` variable syntax. 10 | 11 | --- 12 | ## Password 13 | 14 | **<value>** [String]: Password value. When generated, defaults to 20 chars (from `a-z0-9`). 15 | 16 | --- 17 | ## Certificate 18 | 19 | **<value>** [Hash]: Certificate. 20 | 21 | * **ca** [String]: Certificate's CA (PEM encoded).
22 | * **certificate** [String]: Certificate (PEM encoded). 23 | * **private_key** [String]: Private key (PEM encoded). 24 | 25 | Generation options: 26 | 27 | * **common_name** [String, required]: Common name. Example: `foo.com`. 28 | * **alternative_names** [Array, optional]: Subject alternative names. Example: `["foo.com", "*.foo.com"]`. 29 | * **is_ca** [Boolean, required]: Indicates whether this is a CA certificate (root or intermediate). Defaults to `false`. 30 | * **ca** [String, optional]: Specifies the name of a CA certificate to use for making this certificate. Can be specified in conjunction with `is_ca` to produce an intermediate certificate. 31 | * **extended\_key\_usage** [Array, optional]: List of extended key usage. Possible values: `client_auth` and/or `server_auth`. Default: empty. Example: `[client_auth]`. 32 | 33 | Example: 34 | 35 | ```yaml 36 | - name: bosh_ca 37 | type: certificate 38 | options: 39 | is_ca: true 40 | common_name: bosh 41 | - name: mbus_bootstrap_ssl 42 | type: certificate 43 | options: 44 | ca: bosh_ca 45 | common_name: ((internal_ip)) 46 | alternative_names: [((internal_ip))] 47 | ``` 48 | 49 | Example of certificates used for mutual TLS: 50 | 51 | ```yaml 52 | variables: 53 | - name: cockroachdb_ca 54 | type: certificate 55 | options: 56 | is_ca: true 57 | common_name: cockroachdb 58 | - name: cockroachdb_server_ssl 59 | type: certificate 60 | options: 61 | ca: cockroachdb_ca 62 | common_name: node 63 | alternative_names: ["*.cockroachdb.default.cockroachdb.bosh"] 64 | extended_key_usage: 65 | - server_auth 66 | - client_auth 67 | - name: cockroachdb_user_root 68 | type: certificate 69 | options: 70 | ca: cockroachdb_ca 71 | common_name: root 72 | extended_key_usage: 73 | - client_auth 74 | - name: cockroachdb_user_test 75 | type: certificate 76 | options: 77 | ca: cockroachdb_ca 78 | common_name: test 79 | extended_key_usage: 80 | - client_auth 81 | ``` 82 | 83 | --- 84 | ## RSA 85 | 86 | **<value>** [Hash]: RSA key. When generated, defaults to 2048 bits. 87 | 88 | * **private_key** [String]: Private key (PEM encoded). 89 | * **public_key** [String]: Public key (PEM encoded). 90 | 91 | --- 92 | ## SSH 93 | 94 | **<value>** [Hash]: SSH key. When generated, defaults to RSA 2048 bits. 95 | 96 | * **private_key** [String]: Private key (PEM encoded). 97 | * **public_key** [String]: Public key (PEM encoded). 98 | * **public\_key\_fingerprint** [String]: Public key's MD5 fingerprint. Example: `c3:ae:51:ec:cb:a8:09:ac:43:fd:84:dd:11:dd:fe:c7`. 99 | -------------------------------------------------------------------------------- /uploading-releases.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Uploading Releases 3 | --- 4 | 5 |

Note: Document uses CLI v2.

6 | 7 | (See [What is a Release?](release.html) for an introduction to releases.) 8 | 9 | Each deployment can reference one or many releases. For a deploy to succeed, all necessary releases must be uploaded to the Director. 10 | 11 | ## Finding Releases 12 | 13 | Releases are distributed in two ways: as a release tarball or through a source code repository. The [releases section of bosh.io](http://bosh.io/releases) provides a good list of available releases and their tarballs. 14 | 15 | Here are a few popular releases: 16 | 17 | - [cf-release](http://bosh.io/releases/github.com/cloudfoundry/cf-release) provides Cloud Foundry 18 | - [concourse](http://bosh.io/releases/github.com/concourse/concourse) provides a Continuous Integration system called Concourse CI 19 | - [cf-rabbitmq-release](http://bosh.io/releases/github.com/pivotal-cf/cf-rabbitmq-release) provides RabbitMQ 20 | 21 | --- 22 | ## Uploading to the Director 23 | 24 | The CLI provides the [`bosh upload-release` command](cli-v2.html#upload-release). 25 | 26 | - If you have a URL to a release tarball (for example, a URL provided by bosh.io): 27 | 28 |
29 | 	$ bosh -e vbox upload-release https://bosh.io/d/github.com/cppforlife/zookeeper-release?v=0.0.5 --sha1 65a07b7526f108b0863d76aada7fc29e2c9e2095
30 | 	
31 | 32 | Alternatively, if you have a release tarball on your local machine: 33 | 34 |
35 | 	$ bosh -e vbox upload-release ~/Downloads/zookeeper-0.0.5.tgz
36 | 	
37 | 38 | - If you cloned a release Git repository: 39 | 40 | Note that all release repositories have a `releases/` folder that contains release YAML files. These files have all the required information about how to assemble a specific version of a release (provided that the release maintainers produce and commit that version to the repository). You can use the YAML files to either directly upload a release, or to create a release tarball locally and then upload it. 41 | 42 |
43 |   $ git clone https://github.com/cppforlife/zookeeper-release
44 | 	$ cd zookeeper-release/
45 | 	$ bosh -e vbox upload-release
46 | 	
47 | 48 | Alternatively, to build a release tarball locally from a release YAML file: 49 | 50 |
51 | 	$ cd zookeeper-release/
52 | 	$ bosh create-release releases/zookeeper/zookeeper-0.0.5.yml --tarball x.tgz
53 | 	$ bosh -e vbox upload-release x.tgz
54 | 	
55 | 56 | Once the command succeeds, you can view all uploaded releases in the Director: 57 | 58 |
59 | $ bosh -e vbox releases
60 | Using environment '192.168.50.6' as client 'admin'
61 | 
62 | Name       Version            Commit Hash
63 | dns        0+dev.1496791266*  65f3b30+
64 | zookeeper  0.0.5*             b434447
65 | 
66 | (*) Currently deployed
67 | (+) Uncommitted changes
68 | 
69 | 3 releases
70 | 
71 | Succeeded
72 | 
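You can also drill into an uploaded release's jobs and packages with the `bosh inspect-release` command, continuing the example above:

```shell
$ bosh -e vbox inspect-release zookeeper/0.0.5
```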
73 | 74 | --- 75 | ## Deployment Manifest Usage 76 | 77 | To use an uploaded release in your deployment, update the `releases` section in your deployment manifest: 78 | 79 | ```yaml 80 | releases: 81 | - name: zookeeper 82 | version: 0.0.5 83 | ``` 84 | 85 | --- 86 | Next: [Deploy](deploying.html) 87 | 88 | Previous: [Uploading Stemcells](uploading-stemcells.html) 89 | -------------------------------------------------------------------------------- /vsphere-esxi-host-failure.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Recovery from an ESXi Host Failure 3 | --- 4 | 5 |

Note: Do not follow this procedure if vSphere HA is enabled and bosh-vsphere-cpi is v30+; vSphere HA will automatically move all VMs from the failed host to other good hosts.

6 | 7 | This topic describes how to recreate VMs in the event of an ESXi host failure. 8 | The BOSH Resurrector is unable to recreate a VM on a failed ESXi host without 9 | manual intervention. 10 | It cannot recreate a VM until the VM has 11 | been successfully deleted, and it cannot delete the VM because 12 | the ESXi host is unavailable. 13 | The following steps will allow the Resurrector to recreate these VMs on a healthy host. 14 | 15 | 1. Manually remove the failed host from its cluster to force removal of all VMs 16 | 1. select the ESXi host from the cluster: **vCenter → Hosts and Clusters 17 | → _datacenter_ → _cluster_** 18 | 2. right-click the failed ESXi host 19 | 3. select **Remove from Inventory** 20 | 2. Re-upload all stemcells currently in use by the Director 21 | - `bosh stemcells` 22 | 23 | ``` 24 | +------------------------------------------+---------------+---------+-----------------------------------------+ 25 | | Name | OS | Version | CID | 26 | +------------------------------------------+---------------+---------+-----------------------------------------+ 27 | | bosh-vsphere-esxi-hvm-centos-7-go_agent | centos-7 | 3184.1 | sc-bc3d762c-71a1-4e76-ae6d-7d2d4366821b | 28 | | bosh-vsphere-esxi-ubuntu-trusty-go_agent | ubuntu-trusty | 3192 | sc-46509b02-a164-4306-89de-99abdaffe8a8 | 29 | | bosh-vsphere-esxi-ubuntu-trusty-go_agent | ubuntu-trusty | 3202 | sc-86d76a55-5bcb-4c12-9fa7-460edd8f94cf | 30 | | bosh-vsphere-esxi-ubuntu-trusty-go_agent | ubuntu-trusty | 3262.4* | sc-97e9ba2d-6ae0-41d1-beea-082b6635e7cb | 31 | +------------------------------------------+---------------+---------+-----------------------------------------+ 32 | ``` 33 | - re-upload the in-use stemcells (the ones with asterisks ('*') next to their version) with the `--fix` flag, e.g.: 34 | 35 | ``` 36 | bosh upload stemcell https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-trusty-go_agent?v=3262.4 --fix 37 | ``` 38 | 3. Wait for the Resurrector to recreate the VMs. Alternatively, force a recreate using `bosh cck` 39 | and choose the `Recreate` option for each missing VM 40 | 4. Clean-up: after the ESXi host has been recovered and added back to the cluster, 41 | preferably while it's in maintenance mode, delete stemcells and powered-off, stale VMs: 42 | * **vCenter → Hosts and Clusters 43 | → _datacenter_ → _cluster_** 44 | * select the recovered ESXi host 45 | * **Related Objects → Virtual Machines** 46 | * delete stale VMs (VMs whose names match this pattern: _vm-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_) 47 | * delete stale stemcells (VMs whose names match this pattern: _sc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_) 48 | * VMs and stemcells can be deleted by right-clicking on them, selecting **All vCenter Actions → Delete from Disk** 49 | 50 | --- 51 | [Back to Table of Contents](index.html#cpi-config) 52 | 53 | Previous: [vSphere HA](vsphere-ha.html) 54 | 55 | Next: [Recovery from a vSphere Network Partitioning Fault](vsphere-network-partition.html) 56 | -------------------------------------------------------------------------------- /init-azure.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Setting up BOSH environment on Azure 3 | --- 4 | 5 |

Note: See Initializing BOSH environment on Azure for using bosh-init instead of CLI v2. (Not recommended)

6 | 7 | This document shows how to initialize a new [environment](terminology.html#environment) on Microsoft Azure. 8 | 9 | ## Step 1: Prepare an Azure Environment 10 | 11 | If you do not have an Azure account, [create one](https://azure.microsoft.com/en-us/pricing/free-trial/). 12 | 13 | Then follow this [guide](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/blob/master/docs/get-started/create-service-principal.md) to create your Azure service principal. 14 | 15 | We strongly recommend using the Azure template [bosh-setup](https://github.com/Azure/azure-quickstart-templates/tree/master/bosh-setup) to initialize the new environment on Microsoft Azure. 16 | 17 | To prepare your Azure environment, find out and/or create any missing resources in Azure. If you are not familiar with Azure, take a look at the [Creating Azure resources](azure-resources.html) page for more details on how to create and configure the necessary resources. 18 | 19 | --- 20 | ## Step 2: Deploy 21 | 22 | 1. Install [CLI v2](./cli-v2.html). 23 | 24 | 1. Use the `bosh create-env` command to deploy the Director. 25 | 26 |
27 |     # Create directory to keep state
28 |     $ mkdir bosh-1 && cd bosh-1
29 | 
30 |     # Clone Director templates
31 |     $ git clone https://github.com/cloudfoundry/bosh-deployment
32 | 
33 |     # Fill below variables (replace example values) and deploy the Director
34 |     $ bosh create-env bosh-deployment/bosh.yml \
35 |         --state=state.json \
36 |         --vars-store=creds.yml \
37 |         -o bosh-deployment/azure/cpi.yml \
38 |         -v director_name=bosh-1 \
39 |         -v internal_cidr=10.0.0.0/24 \
40 |         -v internal_gw=10.0.0.1 \
41 |         -v internal_ip=10.0.0.6 \
42 |         -v vnet_name=boshnet \
43 |         -v subnet_name=bosh \
44 |         -v subscription_id=3c39a033-c306-4615-a4cb-260418d63879 \
45 |         -v tenant_id=0412d4fa-43d2-414b-b392-25d5ca46561da \
46 |         -v client_id=33e56099-0bde-8z93-a005-89c0f6df7465 \
47 |         -v client_secret=client-secret \
48 |         -v resource_group_name=bosh-res-group \
49 |         -v storage_account_name=boshstore \
50 |         -v default_security_group=nsg-bosh
51 |     
52 | 53 | If running the above commands outside of a connected Azure network, refer to [Exposing environment on a public IP](init-external-ip.html) for additional CLI flags. 54 | 55 | See [Azure CPI errors](azure-cpi.html#errors) for a list of common errors and resolutions. 56 | 57 | 1. Connect to the Director. 58 | 59 |
60 |     # Configure local alias
61 |     $ bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
62 | 
63 |     # Log in to the Director
64 |     $ export BOSH_CLIENT=admin
65 |     $ export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
66 | 
67 |     # Query the Director for more info
68 |     $ bosh -e bosh-1 env
69 |     
70 | 71 | 1. Save the deployment state files left in your deployment directory `bosh-1` so you can later update/delete your Director. See [Deployment state](cli-envs.html#deployment-state) for details. 72 | 73 | --- 74 | [Back to Table of Contents](index.html#install) 75 | 76 | Previous: [Create an environment](init.html) 77 | -------------------------------------------------------------------------------- /release-blobs.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Release Blobs 3 | --- 4 | 5 |

Note: Examples use CLI v2.

6 | 7 | A package may need to reference blobs (binary large objects) in addition to referencing other source files. For example, when building a package for a PostgreSQL server, you may want to include `postgresql-9.6.1.tar.gz` from `https://www.postgresql.org/ftp/source/`. Typically, it's not recommended to check blobs directly into a Git repository because Git cannot efficiently track changes to such files. The CLI provides a way to manage blobs in a reasonable manner with several commands: 8 | 9 |
10 | $ bosh -h|grep blob
11 |   add-blob               Add blob
12 |   blobs                  List blobs
13 |   remove-blob            Remove blob
14 |   sync-blobs             Sync blobs
15 |   upload-blobs           Upload blobs
16 | 
17 | 18 | ## Adding a blob 19 | 20 | A package can reference blobs via the `files` directive in a package spec, just like other source files. 21 | 22 | ```yaml 23 | --- 24 | name: cockroachdb 25 | files: 26 | - cockroach-latest.linux-amd64.tgz 27 | ``` 28 | 29 | Creating a release with the above configuration causes the following error: 30 | 31 |
32 | $ bosh create-release --force
33 | Building a release from directory '/Users/user/workspace/cockroachdb-release':
34 |   - Constructing packages from directory:
35 |       - Reading package from '/Users/user/workspace/cockroachdb-release/packages/cockroachdb':
36 |           Collecting package files:
37 |             Missing files for pattern 'cockroach-latest.linux-amd64.tgz'
38 | 
39 | 40 | The CLI expects to find `cockroach-latest.linux-amd64.tgz` in either the `blobs` or `src` directory. Since it's a blob, it should not be in the `src` directory, but rather added with the following command: 41 | 42 |
43 | $ bosh add-blob ~/Downloads/cockroach-latest.linux-amd64.tgz cockroach-latest.linux-amd64.tgz
44 | 
45 | 46 | The `add-blob` command: 47 | 48 | - copies the file into the `blobs` directory (which should be in `.gitignore`) 49 | - updates `config/blobs.yml` to start tracking the blob 50 | 51 | --- 52 | ## Listing blobs 53 | 54 | To list currently tracked blobs, use the `bosh blobs` command: 55 | 56 |
57 | $ bosh blobs
58 | Path                              Size    Blobstore ID                          SHA1
59 | cockroach-latest.linux-amd64.tgz  15 MiB  (local)                               469004231a9ed1d87de798f12fe2f49cc6ff1d2f
60 | go1.7.4.linux-amd64.tar.gz        80 MiB  7e6431ba-f2c6-4e80-6a16-cd5cd8722b57  2e5baf03d1590e048c84d1d5b4b6f2540efaaea1
61 | 
62 | 2 blobs
63 | 
64 | Succeeded
65 | 
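For reference, the corresponding `config/blobs.yml` entries look roughly like the following sketch (the sizes and IDs mirror the listing above and are illustrative; exact keys may vary between CLI versions):

```yaml
cockroach-latest.linux-amd64.tgz:
  size: 15901724
  object_id: ""  # empty until `bosh upload-blobs` assigns a blobstore ID
  sha: 469004231a9ed1d87de798f12fe2f49cc6ff1d2f
go1.7.4.linux-amd64.tar.gz:
  size: 84184911
  object_id: 7e6431ba-f2c6-4e80-6a16-cd5cd8722b57
  sha: 2e5baf03d1590e048c84d1d5b4b6f2540efaaea1
```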
66 | 67 | Blobs that have not been uploaded to the release blobstore will be marked as `local` until they are uploaded. 68 | 69 | --- 70 | ## Uploading blobs 71 | 72 | Blobs should be saved into the release blobstore before cutting a new final release, so that others can rebuild the release at a future time. 73 | 74 | The `bosh upload-blobs` command: 75 | 76 | - uploads all local blobs to the release blobstore 77 | - updates `config/blobs.yml` with blobstore IDs 78 | 79 | `config/blobs.yml` should be checked into a Git repository. 80 | 81 | --- 82 | ## Removing blobs 83 | 84 | Once a blob is no longer needed by a package, it can be untracked. 85 | 86 | The `bosh remove-blob` command: 87 | 88 | - removes the blob from `config/blobs.yml` 89 | - does NOT remove the blob from the release blobstore, so that new releases can be created from older revisions 90 | -------------------------------------------------------------------------------- /links-properties.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Link Properties 3 | --- 4 | 5 | (See [Links](links.html) for an introduction.) 6 | 7 |

Note: This feature is available with bosh-release v255.5+.

8 | 9 | In addition to sharing basic networking information (name, AZ, IP, etc.), links allow jobs to share arbitrary information via properties. The most common example is sharing a port value. From our previous example, here is a `web` job that communicates with a database: 10 | 11 | ```yaml 12 | name: web 13 | 14 | templates: 15 | config.erb: config/conf 16 | 17 | consumes: 18 | - name: primary_db 19 | type: db 20 | 21 | provides: 22 | - name: incoming 23 | type: http 24 | 25 | properties: {...} 26 | ``` 27 | 28 | Here is an example Postgres job that provides a `conn` link of type `db`, which now also includes the server's port for client connections: 29 | 30 | ```yaml 31 | name: postgres 32 | 33 | templates: {...} 34 | 35 | provides: 36 | - name: conn 37 | type: db 38 | properties: 39 | - port 40 | - adapter 41 | - username 42 | - password 43 | - name 44 | 45 | properties: 46 | port: 47 | description: "Port for Postgres to listen on" 48 | default: 5432 49 | adapter: 50 | description: "Type of this database" 51 | default: postgres 52 | username: 53 | description: "Username" 54 | default: postgres 55 | password: 56 | description: "Password" 57 | name: 58 | description: "Database name" 59 | default: postgres 60 | ``` 61 | 62 | Note that all properties included in the `conn` link are defined by the job itself in the `properties` section. 63 | 64 | And finally, the `web` job can use the port and a few other properties in its ERB templates when configuring how to connect to the database: 65 | 66 | ```ruby 67 | <%%= 68 | 69 | db = link("primary_db") 70 | 71 | result = { 72 | "production" => { 73 | "adapter" => db.p("adapter"), 74 | "username" => db.p("username"), 75 | "password" => db.p("password"), 76 | "host" => db.instances.first.address, 77 | "port" => db.p("port"), 78 | "database" => db.p("name"), 79 | "encoding" => "utf8", 80 | "reconnect" => false, 81 | "pool" => 5 82 | } 83 | } 84 | 85 | JSON.dump(result) 86 | 87 | %> 88 | ``` 89 | 90 | Similarly to how the [`p` template accessor](jobs.html#properties) provides access to the job's top-level properties, the `link("...").p("...")` and `link("...").if_p("...")` accessors work with properties included in the link. 91 | 92 | The `if_p` template accessor becomes very useful when trying to provide backwards compatibility around link properties as their interface changes. For example, if the Postgres job author decides to start including an `encoding` property in the link and the `web` job's author wants to continue to support older links that don't include that information, they can use `db.if_p("encoding") { ... }`. 93 | 94 | And finally, in the above example the operator needs to configure the database password to deploy these two jobs, since the password property doesn't have a default: 95 | 96 | ```yaml 97 | instance_groups: 98 | - name: app 99 | jobs: 100 | - name: web 101 | release: my-app 102 | consumes: 103 | primary_db: {from: data_db} 104 | 105 | - name: data_db 106 | jobs: 107 | - name: postgres 108 | release: postgres 109 | provides: 110 | conn: {as: data_db} 111 | properties: 112 | password: some-password 113 | ``` 114 | 115 | --- 116 | Next: [Manual linking](links-manual.html) 117 | 118 | [Back to Table of Contents](index.html#deployment-config) 119 | -------------------------------------------------------------------------------- /drain.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Drain Script 3 | --- 4 | 5 | (See [Job Lifecycle](job-lifecycle.html) for an explanation of when drain scripts run.)
6 | 7 | A release job can have a drain script that will run when the job is restarted or stopped. This script allows the job to clean up and get into a state where it can be safely stopped. For example, when writing a release for a load balancer, each node can safely stop accepting new connections and drain existing connections before fully stopping. 8 | 9 | --- 10 | ## Job Configuration 11 | 12 | To add a drain script to a release job: 13 | 14 | 1. Create a script with any name in the templates directory of a release job. 15 | 1. In the `templates` section of the release job spec file, add the script name and the `bin/drain` path as a key-value pair. 16 | 17 | Example: 18 | 19 | ```yaml 20 | --- 21 | name: nginx 22 | templates: 23 | drain-web-requests.erb: bin/drain 24 | ``` 25 | 26 |

Note: The drain script from each release job will run if the jobs are deployed on 3093+ stemcells. Previously, only the first release job's drain script ran.

27 | 28 | --- 29 | ## Script Implementation 30 | 31 | A drain script is usually just a regular shell script. Since a drain script is executed in a similar way to other release job scripts (start, stop, pre-start scripts), you can use the job's package dependencies. 32 | 33 | A drain script should be idempotent. It may be called multiple times before or after the process is stopped. 34 | 35 | You must ensure that your drain script exits in one of the following ways: 36 | 37 | - exit with a non-`0` exit code to indicate that the drain script failed 38 | 39 | - exit with a `0` exit code and also print an integer followed by a newline to `stdout` (nothing else must be printed to `stdout`): 40 | 41 | **static draining**: If the drain script prints a zero or a positive integer, BOSH sleeps for that many seconds before continuing. 42 | 43 | **dynamic draining**: If the drain script prints a negative integer, BOSH sleeps for that many seconds (the absolute value), then calls the drain script again. 44 | 45 |

Note: BOSH re-runs a script indefinitely as long as the script exits with an exit code of 0 and outputs a negative integer.

46 | 47 |

Note: It's recommended to use only static draining, as dynamic draining will eventually be deprecated.

48 | 49 | --- 50 | ## Environment Variables 51 | 52 | A drain script can access the following environment variables: 53 | 54 | * `BOSH_JOB_STATE`: JSON description of the current job state 55 | * `BOSH_JOB_NEXT_STATE`: JSON description of the new job state that is being applied 56 | 57 | For example, a drain script can use these variables to determine whether the size of the persistent disk is changing and take an appropriate action. 58 | 59 | --- 60 | ## Logs 61 | 62 | Currently, logs from the drain script are not saved on disk by default, though a release author may choose to do so explicitly. We are planning to eventually make this more consistent with [pre-start script logging](pre-start.html#logs). 63 | 64 | --- 65 | ## Example 66 | 67 |
68 | #!/bin/bash
69 | 
70 | pid_path=/var/vcap/sys/run/worker/worker.pid
71 | 
72 | if [ -f "$pid_path" ]; then
73 |   pid=$(cat "$pid_path")
74 |   kill "$pid"        # process is running; kill it softly
75 |   sleep 10           # wait a bit
76 |   kill -9 "$pid" 2>/dev/null || true # kill it hard (ignore if already gone)
77 |   rm -f "$pid_path"  # remove pid file
78 | fi
78 | fi
79 | 
80 | echo 0 # ok to exit; do not wait for anything
81 | 
82 | exit 0
83 | 
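The example above is a static drain (it always prints `0` once). For illustration, here is a second, hypothetical sketch that also consults the environment variables described earlier; the `"persistent_disk"` key and the 60-second wait are assumptions for illustration, not part of a documented contract:

```bash
#!/bin/bash

# A sketch only: wait longer when the persistent disk size is changing.
# BOSH_JOB_STATE and BOSH_JOB_NEXT_STATE contain JSON job descriptions;
# the "persistent_disk" key used here is an assumption for illustration.
current_disk=$(echo "$BOSH_JOB_STATE" | grep -o '"persistent_disk":[0-9]*')
next_disk=$(echo "$BOSH_JOB_NEXT_STATE" | grep -o '"persistent_disk":[0-9]*')

if [ "$current_disk" != "$next_disk" ]; then
  echo 60 # disk size is changing; give the job extra time to flush data
else
  echo 0  # nothing disk-related is changing; no need to wait
fi

exit 0
```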
84 | 85 | --- 86 | [Back to Table of Contents](index.html#release) 87 | 88 | Previous: [Post-deploy script](post-deploy.html) 89 | -------------------------------------------------------------------------------- /vsphere-network-partition.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Recovery from a vSphere Network Partitioning Fault 3 | --- 4 | 5 |

Note: Do not follow this procedure if vSphere HA is enabled and bosh-vsphere-cpi is v30+; 6 | vSphere HA will automatically recreate VMs that were on the partitioned host.

7 | 8 | This topic describes how to recreate VMs in the event of a network partition 9 | that disrupts the following: 10 | 11 | * the vCenter's ability to communicate with an ESXi host 12 | * the BOSH Director's ability to communicate with the VMs on that host. 13 | 14 | There are two options. 15 | 16 | 1. Power down the ESXi host. Follow the instructions to 17 | [recover from an ESXi host failure](vsphere-esxi-host-failure.html) to recover 18 | your BOSH deployment. 19 | 20 | 2. If you cannot power down your ESXi host, then you must shut down the VMs 21 | running on the partitioned ESXi host: 22 | - Determine which VMs are affected by running `bosh vms --details`; 23 | the output should resemble the following: 24 | 25 | ``` 26 | +------------------------------------------------+--------------------+----+---------+-------------+-----------------------------------------+--------------------------------------+--------------+--------+ 27 | | VM | State | AZ | VM Type | IPs | CID | Agent ID | Resurrection | Ignore | 28 | +------------------------------------------------+--------------------+----+---------+-------------+-----------------------------------------+--------------------------------------+--------------+--------+ 29 | | dummy/0 (4f9b0722-d004-43a6-b258-adf5e2cc5c70) | running | z1 | default | 10.85.57.7 | vm-073648a9-57da-4122-953b-5ccf5b74c563 | 98ee24dd-c7e5-4f4b-8e6f-4f3dfa4cb5b1 | active | false | 30 | | dummy/1 (df4732aa-9f4b-4635-aedb-54278b3fac31) | running | z1 | default | 10.85.57.11 | vm-debbd710-8829-4484-9098-78a4410ed3cc | 4f3491bd-3ab8-4fa7-9930-cf0ec0a56fec | active | false | 31 | | dummy/2 (56957582-ca58-418d-a7e6-ea0151010302) | unresponsive agent | z1 | default | | vm-c2d2a8ac-7afb-4875-9cf3-d69978c9e8c3 | d38569a5-389a-4de6-95a8-0790e8e5ede4 | active | false | 32 | | dummy/3 (60e0b351-6524-4f45-af12-953a47af5a29) | running | z1 | default | 10.85.57.10 | vm-bf3bbeaf-3506-4fe1-9e7e-76e2c26ce5d8 | f98c9763-6518-4305-8f16-b451a36d1b91 | active | false | 33 | | dummy/4 (473a2bf2-7147-41d5-805a-532f27c6f833) | unresponsive agent | z1 | default | | vm-2c520edb-9202-499f-a079-b3468633bd37 | 43ff0019-2af1-4c87-944b-76aa06f97b83 | active | false | 34 | +------------------------------------------------+--------------------+----+---------+-------------+-----------------------------------------+--------------------------------------+--------------+--------+ 35 | ``` 36 | - Connect to the partitioned ESXi host and, using the `CID` from the 37 | previous command, find the Vmids of the VMs, 38 | e.g. 39 | 40 | ``` 41 | esxcli vm process list | grep -A 1 ^vm-c2d2a8ac-7afb-4875-9cf3-d69978c9e8c3 42 | esxcli vm process list | grep -A 1 ^vm-2c520edb-9202-499f-a079-b3468633bd37 43 | # We see that the WorldNumbers (World IDs) are 199401 & 199751, respectively 44 | esxcli vm process kill --type=force --world-id=199401 45 | esxcli vm process kill --type=force --world-id=199751 46 | ``` 47 | - Follow the instructions to [recover from an ESXi host failure](vsphere-esxi-host-failure.html). 
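To pull just the `CID` values of the unresponsive VMs out of a table like the one above, a simple text filter can help (a sketch that depends on the exact column layout shown above):

```bash
# Print the CID column for VMs whose agents are unresponsive
bosh vms --details | grep 'unresponsive agent' | awk -F'|' '{ gsub(/ /, "", $7); print $7 }'
```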
48 | 49 | --- 50 | [Back to Table of Contents](index.html#cpi-config) 51 | 52 | Previous: [Recovery from an ESXi host failure](vsphere-esxi-host-failure.html) 53 | -------------------------------------------------------------------------------- /windows-sample-release.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Sample BOSH Windows Release 3 | --- 4 | 5 | This is a sample BOSH release that can be deployed using a Windows stemcell. It has a single job called `say-hello` that repeatedly prints out a message. 6 | 7 | After creating a deployment with this release and the `say-hello` job you can access the job's standard out with the `bosh logs` command (see documentation on [logs](job-logs.html) for more information). 8 | 9 | --- 10 | ## Release Structure 11 | 12 |
 13 | $ mkdir sample-windows-release
 14 | $ cd sample-windows-release
 15 | $ bosh init-release --git
 16 | $ bosh generate-job say-hello
 17 | 
18 | 19 | ``` 20 | jobs/ 21 | say-hello/ 22 | templates/ 23 | post-deploy.ps1 24 | post-start.ps1 25 | pre-start.ps1 26 | start.ps1 27 | monit 28 | spec 29 | packages/ 30 | ``` 31 | 32 | --- 33 | ### `spec` 34 | 35 | The `spec` file specifies the job name and description. It also contains the templates to render, which may depend on zero or more packages. See the documentation on [job spec files](jobs.html#spec) for more information. 36 | 37 | ```yaml 38 | --- 39 | name: say-hello 40 | 41 | description: "This is a simple job" 42 | 43 | templates: 44 | start.ps1: bin/start.ps1 45 | 46 | packages: [] 47 | ``` 48 | 49 | --- 50 | ### `monit` 51 | 52 | The `monit` file includes zero or more processes to run. Each process specifies an executable as well as any arguments and environment variables. See the documentation on [monit files](jobs.html#monit) for more information. Note, however, that Windows monit files are JSON config files for [Windows service wrapper](https://github.com/kohsuke/winsw), not config files for the monit Unix utility. 53 | 54 | ```json 55 | { 56 | "processes": [ 57 | { 58 | "name": "say-hello", 59 | "executable": "powershell", 60 | "args": [ "/var/vcap/jobs/say-hello/bin/start.ps1" ], 61 | "env": { 62 | "FOO": "BAR" 63 | } 64 | } 65 | ] 66 | } 67 | ``` 68 | 69 | --- 70 | ### `start.ps1` 71 | 72 | The `start.ps1` script executed by the [service-wrapper](https://github.com/kohsuke/winsw) loops indefinitely while printing out a message: 73 | 74 | ```powershell 75 | while ($true) 76 | { 77 | Write-Host "I am executing a BOSH job. FOO=${Env:FOO}" 78 | Start-Sleep 1.0 79 | } 80 | ``` 81 | 82 | --- 83 | ## Creating and Deploying the Sample Release 84 | 85 | If you have a Director with a Windows stemcell uploaded, you can create the release described above with an empty `blobs.yml` and `final.yml`, then try deploying it: 86 | 87 |
 88 | $ cd sample-windows-release
 89 | $ bosh create-release --force
 90 | $ bosh upload-release
 91 | $ bosh -d sample-windows-deployment deploy manifest.yml
 92 | 
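Once the deploy finishes, you can verify the instance and fetch the job's output; a quick sketch using the deployment and instance group names from the manifest below:

```bash
# Confirm the instance is running, then fetch its logs (including the job's stdout)
bosh -d sample-windows-deployment instances
bosh -d sample-windows-deployment logs hello
```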
93 | 94 | For information about deployment basics, see the [Deploy Workflow](basic-workflow.html) documentation. 95 | 96 | Here is a sample manifest. For information on manifest basics, see the [Deployment Manifest](deployment-manifest.html) documentation. 97 | 98 | ```yaml 99 | name: sample-windows-deployment 100 | 101 | releases: 102 | - name: sample-windows-release 103 | version: latest 104 | 105 | stemcells: 106 | - alias: windows 107 | os: windows2012R2 108 | version: latest 109 | 110 | update: 111 | canaries: 1 112 | max_in_flight: 1 113 | canary_watch_time: 30000-300000 114 | update_watch_time: 30000-300000 115 | 116 | instance_groups: 117 | - name: hello 118 | azs: [z1] 119 | instances: 1 120 | jobs: 121 | - name: say-hello 122 | release: sample-windows-release 123 | stemcell: windows 124 | vm_type: default 125 | networks: 126 | - name: default 127 | ``` 128 | -------------------------------------------------------------------------------- /props-common.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Properties - Suggested configurations 3 | --- 4 | 5 | ## TLS configuration 6 | 7 | Following is a _suggested_ set of properties for TLS configuration: 8 | 9 | * **tls** [Hash]: TLS configuration section. 10 | * **enabled** [Boolean, optional]: Enable/disable TLS. Default should be `true`. 11 | * **cert** [Hash]: Value described by [`certificate` variable type](variable-types.html#certificate). Default is `nil`. 12 | * **protocols** [String, optional]: Space-separated list of protocols to support. Example: `TLSv1.2`. 13 | * **ciphers** [String, optional]: OpenSSL-formatted list of ciphers to support. Example: `!DES:!RC4:!3DES:!MD5:!PSK`. 14 | 15 | Example job spec: 16 | 17 | ```yaml 18 | name: app-server 19 | 20 | properties: 21 | tls.enabled: 22 | description: "Enable/disable TLS" 23 | default: true 24 | tls.cert: 25 | type: certificate 26 | description: "Specify certificate" 27 | ... 28 | ``` 29 | 30 | Example manifest usage: 31 | 32 | ```yaml 33 | instance_groups: 34 | - name: app-server 35 | instances: 2 36 | jobs: 37 | - name: app-server 38 | properties: 39 | tls: 40 | cert: ((app-server-tls)) 41 | ... 42 | 43 | variables: 44 | - name: app-server-tls 45 | type: certificate 46 | options: 47 | ... 48 | ``` 49 | 50 | Note that if your job requires multiple TLS configurations (for example, separate client and server TLS configurations), the configuration above would be nested under a particular context. For example: 51 | 52 | ```yaml 53 | name: app-server 54 | 55 | properties: 56 | server.tls.enabled: 57 | description: "Enable/disable TLS" 58 | default: true 59 | server.tls.cert: 60 | type: certificate 61 | description: "Specify server certificate" 62 | 63 | client.tls.enabled: 64 | description: "Enable/disable TLS" 65 | default: true 66 | client.tls.cert: 67 | type: certificate 68 | description: "Specify client certificate" 69 | ... 70 | ``` 71 | 72 | Example manifest usage: 73 | 74 | ```yaml 75 | instance_groups: 76 | - name: app-server 77 | instances: 2 78 | jobs: 79 | - name: app-server 80 | properties: 81 | server: 82 | tls: 83 | cert: ((app-server-tls)) 84 | client: 85 | tls: 86 | cert: ((app-client-tls)) 87 | ... 88 | 89 | variables: 90 | - name: app-server-tls 91 | type: certificate 92 | options: 93 | ... 94 | - name: app-client-tls 95 | type: certificate 96 | options: 97 | ... 
98 | ``` 99 | 100 | --- 101 | ## Environment proxy configuration 102 | 103 | Following is a _suggested_ set of properties for environment proxy configuration: 104 | 105 | * **env** [Hash] 106 | * **http_proxy** [String, optional]: HTTP proxy that software should use. Default: not specified. 107 | * **https_proxy** [String, optional]: HTTPS proxy that software should use. Default: not specified. 108 | * **no_proxy** [String, optional]: List of comma-separated hosts that should skip connecting to the proxy in software. Default: not specified. 109 | 110 | Example job spec: 111 | 112 | ```yaml 113 | name: app-server 114 | 115 | properties: 116 | env.http_proxy: 117 | description: HTTP proxy that the server should use 118 | env.https_proxy: 119 | description: HTTPS proxy that the server should use 120 | env.no_proxy: 121 | description: List of comma-separated hosts that should skip connecting to the proxy in the server 122 | ... 123 | ``` 124 | 125 | Example manifest usage: 126 | 127 | ```yaml 128 | instance_groups: 129 | - name: app-server 130 | instances: 2 131 | jobs: 132 | - name: app-server 133 | properties: 134 | env: 135 | http_proxy: http://10.203.0.1:5187/ 136 | https_proxy: http://10.203.0.1:5187/ 137 | no_proxy: localhost,127.0.0.1 138 | ... 139 | ``` 140 | -------------------------------------------------------------------------------- /init-softlayer.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Creating environment on SoftLayer 3 | --- 4 | 5 | This document shows how to create a new [environment](terminology.html#environment) on SoftLayer. 6 | 7 | ## Step 1: Prepare a SoftLayer Environment 8 | 9 | To prepare your SoftLayer environment: 10 | 11 | * [Create a SoftLayer account](#account) 12 | * [Generate an API Key](#api-key) 13 | * [Access SoftLayer VPN](#vpn) 14 | * [Order VLANs](#vlan) 15 | 16 | --- 17 | ### Create a SoftLayer account 18 | 19 | If you do not have a SoftLayer account, [create one for one month free](https://www.softlayer.com/promo/freeCloud). 20 | 21 | Use the login credentials received by email to log in to the SoftLayer [Customer Portal](https://control.softlayer.com). 22 | 23 | --- 24 | ### Generate an API Key 25 | 26 | API keys are used to securely access the SoftLayer API. Follow [Generate an API Key](http://knowledgelayer.softlayer.com/procedure/generate-api-key) to generate your API key. 27 | 28 | --- 29 | ### Access SoftLayer VPN 30 | 31 | To access the SoftLayer private network, you need to access the SoftLayer VPN. Follow [VPN Access](http://www.softlayer.com/vpn-access) to access the VPN. You can get your VPN password from your [user profile](https://control.softlayer.com/account/user/profile). 32 | 33 | --- 34 | ### Order VLANs 35 | 36 | VLANs provide the ability to partition devices and subnets on the network. To order VLANs, log in to the SoftLayer [Customer Portal](https://control.softlayer.com) and navigate to Network > IP Management > VLANs. Once on the page, click the "Order VLAN" link in the top-right corner. Fill in the pop-up window to order the VLANs as you need. The VLAN IDs are needed in the deployment manifest. 37 | 38 | --- 39 | ## Step 2: Deploy 40 | 41 | 1. Install [CLI v2](./cli-v2.html). 42 | 43 | 1. Establish a VPN connection between your host and SoftLayer. The machine where you run the CLI needs to communicate with the target Director VM over the SoftLayer private network. 44 | 45 | 1. 
Use the `bosh create-env` command to deploy the Director. 46 | 47 |
48 |     # Create directory to keep state
49 |     $ mkdir bosh-1 && cd bosh-1
50 | 
51 |     # Clone Director templates
52 |     $ git clone https://github.com/cloudfoundry/bosh-deployment
53 | 
54 |     # Fill below variables (replace example values) and deploy the Director
55 |     $ sudo bosh create-env bosh-deployment/bosh.yml \
56 |         --state=state.json \
57 |         --vars-store=creds.yml \
58 |         -o bosh-deployment/softlayer/cpi.yml \
59 |         -v director_name=bosh-1 \
60 |         -v internal_cidr=10.0.0.0/24 \
61 |         -v internal_gw=10.0.0.1 \
62 |         -v internal_ip=10.0.0.6 \
63 |         -v sl_datacenter= \
64 |         -v sl_vm_domain= \
65 |         -v sl_vm_name_prefix= \
66 |         -v sl_vlan_public= \
67 |         -v sl_vlan_private= \
68 |         -v sl_username= \
69 |         -v sl_api_key=
70 |     
71 | 72 |

Note: The reason you need to run the `bosh create-env` command with sudo is that it needs to update the `/etc/hosts` file, which requires sufficient permissions.

73 | 74 | 1. Connect to the Director. 75 | 76 |
77 |     # Configure local alias
78 |     $ bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
79 | 
80 |     # Log in to the Director
81 |     $ export BOSH_CLIENT=admin
82 |     $ export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
83 | 
84 |     # Query the Director for more info
85 |     $ bosh -e bosh-1 env
86 |     
87 | 88 | 1. Save the deployment state files left in your deployment directory `bosh-1` so you can later update/delete your Director. See [Deployment state](cli-envs.html#deployment-state) for details. 89 | 90 | --- 91 | [Back to Table of Contents](index.html#install) 92 | 93 | Previous: [Create an environment](init.html) 94 | -------------------------------------------------------------------------------- /warden-cpi.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Warden/Garden CPI 3 | --- 4 | 5 |

Note: Updated for bosh-warden-cpi v28+.

6 | 7 | This topic describes cloud properties for different resources created by the Warden/Garden CPI. 8 | 9 | ## AZs 10 | 11 | Currently the CPI does not support any cloud properties for AZs. 12 | 13 | Example: 14 | 15 | ```yaml 16 | azs: 17 | - name: z1 18 | ``` 19 | 20 | --- 21 | ## Networks 22 | 23 | Currently the CPI does not support any cloud properties for networks. 24 | 25 | Example of a manual network: 26 | 27 | ```yaml 28 | networks: 29 | - name: default 30 | type: manual 31 | subnets: 32 | - range: 10.244.1.0/24 33 | gateway: 10.244.1.0 34 | static: [10.244.1.34] 35 | ``` 36 | 37 |

Note: bosh-warden-cpi v24+ makes it possible to use subnets bigger than /30 as exemplified above. bosh-lite v9000.48.0 uses that newer bosh-warden-cpi.

38 | 39 | Example of a dynamic network: 40 | 41 | ```yaml 42 | networks: 43 | - name: default 44 | type: dynamic 45 | ``` 46 | 47 | The CPI does not support vip networks. 48 | 49 | --- 50 | ## Resource Pools / VM Types 51 | 52 | Schema for `cloud_properties` section: 53 | 54 | * **ports** [Array, optional]: Allows defining port mappings between the host and associated containers. Available in v30+. 55 | * **host** [String, required]: Port or range of ports. Example: `80`. 56 | * **container** [String, optional]: Port or range of ports. Defaults to the port or range defined by `host`. Example: `80`. 57 | * **protocol** [String, optional]: Connection protocol. Defaults to `tcp`. Example: `udp`. 58 | 59 | We may add simple load balancing via iptables for testing if a port is forwarded to multiple containers. 60 | 61 | Example: 62 | 63 | ```yaml 64 | vm_extensions: 65 | - name: external-access 66 | cloud_properties: 67 | ports: 68 | # Forward 80 to 80 tcp 69 | - host: 80 70 | # Forward 443 to 8443 tcp 71 | - host: 443 72 | container: 8443 73 | # Forward 53 to 53 udp 74 | - host: 53 75 | protocol: udp 76 | # Forward 1000-2000 to 1000-2000 tcp 77 | - host: 1000-2000 78 | ``` 79 | 80 | --- 81 | ## Disk Pools 82 | 83 | Currently the CPI does not support any cloud properties for disks. 84 | 85 | Example of 10GB disk: 86 | 87 | ```yaml 88 | disk_pools: 89 | - name: default 90 | disk_size: 10_240 91 | ``` 92 | 93 | --- 94 | ## Global Configuration 95 | 96 | The CPI uses containers to represent VMs and loopback devices to represent disks. Since the CPI can only talk to a single Garden server, it can only manage resources on a single machine. 97 | 98 | Example of a CPI configuration: 99 | 100 | ```yaml 101 | properties: 102 | warden_cpi: 103 | loopback_range: [100, 130] 104 | warden: 105 | connect_network: tcp 106 | connect_address: 127.0.0.1:7777 107 | actions: 108 | stemcells_dir: "/var/vcap/data/cpi/stemcells" 109 | disks_dir: "/var/vcap/store/cpi/disks" 110 | host_ephemeral_bind_mounts_dir: "/var/vcap/data/cpi/ephemeral_bind_mounts_dir" 111 | host_persistent_bind_mounts_dir: "/var/vcap/data/cpi/persistent_bind_mounts_dir" 112 | agent: 113 | mbus: nats://nats:((nats_password))@10.244.8.2:4222 114 | blobstore: 115 | provider: dav 116 | options: 117 | endpoint: http://10.244.8.2:25251 118 | user: agent 119 | password: ((blobstore_agent_password)) 120 | ``` 121 | 122 | --- 123 | ## Example Cloud Config 124 | 125 | See [bosh-deployment](https://github.com/cloudfoundry/bosh-deployment/blob/master/warden/cloud-config.yml). 126 | 127 | --- 128 | ## Notes 129 | 130 | * Garden server does not have a UI; however, you can use the [gaol CLI](https://github.com/xoebus/gaol) to interact with it directly. 131 | 132 | --- 133 | [Back to Table of Contents](index.html#cpi-config) 134 | -------------------------------------------------------------------------------- /virtualbox-cpi.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: VirtualBox CPI 3 | --- 4 | 5 | This topic describes cloud properties for different resources created by the [VirtualBox CPI](https://bosh.io/releases/github.com/cppforlife/bosh-virtualbox-cpi-release). The VirtualBox CPI works with [vSphere ESXi stemcells](https://bosh.io/stemcells/bosh-vsphere-esxi-ubuntu-trusty-go_agent). 6 | 7 | ## AZs 8 | 9 | Currently the CPI does not support any cloud properties for AZs. 
10 | 11 | Example: 12 | 13 | ```yaml 14 | azs: 15 | - name: z1 16 | ``` 17 | 18 | --- 19 | ## Networks 20 | 21 | Schema for `cloud_properties` section used by network subnet: 22 | 23 | * **name** [String, required]: Name of the network. Example: `vboxnet0`. Default: `vboxnet0`. 24 | * **type** [String, optional]: Type of the network. See [`VBoxManage modifyvm` networking settings](https://www.virtualbox.org/manual/ch08.html#idp46691722135120) for valid values. Example: `hostonly`. Default: `hostonly`. 25 | 26 | Example of manual network: 27 | 28 | ```yaml 29 | networks: 30 | - name: private 31 | type: manual 32 | subnets: 33 | - range: 192.168.50.0/24 34 | gateway: 192.168.50.1 35 | dns: [192.168.50.1] 36 | cloud_properties: 37 | name: vboxnet0 38 | ``` 39 | 40 | --- 41 | ## VM Types 42 | 43 | Schema for `cloud_properties` section: 44 | 45 | * **cpus** [Integer, optional]: Number of CPUs. Example: `1`. Default: `1`. 46 | * **memory** [Integer, optional]: RAM in megabytes. Example: `1024`. Default: `512`. 47 | * **ephemeral_disk** [Integer, optional]: Ephemeral disk size in megabytes. Example: `10240`. Default: `5000`. 48 | * **paravirtprovider** [String, optional]: Paravirtual provider type. See [`VBoxManage modifyvm` general settings](https://www.virtualbox.org/manual/ch08.html#idp46691713664256) for valid values. Default: `minimal`. 49 | 50 | Example of a VM type: 51 | 52 | ```yaml 53 | vm_types: 54 | - name: default 55 | cloud_properties: 56 | cpus: 2 57 | memory: 2_048 58 | ephemeral_disk: 4_096 59 | paravirtprovider: minimal 60 | ``` 61 | 62 | --- 63 | ## Disk Types 64 | 65 | Currently the CPI does not support any cloud properties for disks. 66 | 67 | Example of 10GB disk: 68 | 69 | ```yaml 70 | disk_types: 71 | - name: default 72 | disk_size: 10_240 73 | ``` 74 | 75 | --- 76 | ## Global Configuration 77 | 78 | The CPI uses individual VirtualBox VMs and disks. Since the CPI can only talk to a single VirtualBox server it can only manage resources on a single machine. 79 | 80 | Example of a CPI configuration: 81 | 82 | ```yaml 83 | properties: 84 | host: 192.168.50.1 85 | username: ubuntu 86 | private_key: | 87 | -----BEGIN RSA PRIVATE KEY----- 88 | MIIEowIBAAKCAQEAr/c6pUbrq/U+s0dSU+Z6dxrHC7LOGDijv8LYN5cc7alYg+TV 89 | ... 90 | fe5h79YLG+gJDqVQyKJm0nDRCVz0IkM7Nhz8j07PNJzWjee/kcvv 91 | -----END RSA PRIVATE KEY----- 92 | 93 | agent: {mbus: "https://mbus:mbus-password@0.0.0.0:6868"} 94 | 95 | ntp: 96 | - 0.pool.ntp.org 97 | - 1.pool.ntp.org 98 | 99 | blobstore: 100 | provider: local 101 | path: /var/vcap/micro_bosh/data/cache 102 | ``` 103 | 104 | See [virtualbox_cpi job](https://bosh.io/jobs/virtualbox_cpi?source=github.com/cppforlife/bosh-virtualbox-cpi-release) for more details. 
105 | 106 | --- 107 | ## Example Cloud Config 108 | 109 | ```yaml 110 | azs: 111 | - name: z1 112 | - name: z2 113 | 114 | vm_types: 115 | - name: default 116 | 117 | disk_types: 118 | - name: default 119 | disk_size: 3000 120 | 121 | networks: 122 | - name: default 123 | type: manual 124 | subnets: 125 | - range: 192.168.50.0/24 126 | gateway: 192.168.50.1 127 | azs: [z1, z2] 128 | reserved: [192.168.50.6] 129 | dns: [192.168.50.1] 130 | cloud_properties: 131 | name: vboxnet0 132 | 133 | compilation: 134 | workers: 2 135 | reuse_compilation_vms: true 136 | az: z1 137 | vm_type: default 138 | network: default 139 | ``` 140 | 141 | --- 142 | [Back to Table of Contents](index.html#cpi-config) 143 | -------------------------------------------------------------------------------- /migrated-from.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Renaming/migrating instance groups 3 | --- 4 | 5 | Occasionally, it's convenient to rename one or more instance groups as their purpose changes or as better names are found. In most cases it's desirable to maintain existing persistent data by keeping existing persistent disks. 6 | 7 | Previously, the CLI provided the `rename job` command to rename a specific instance group one at a time. That approach worked OK in non-automated, non-frequently updated environments, but it was inconvenient for automated, frequently updated environments. As a replacement, the `migrated_from` directive was added to allow renames to happen in a more systematic way. 8 | 9 | Additionally, the `migrated_from` directive can be used to migrate instance groups to use first class AZs. 10 | 11 | --- 12 | ## Schema 13 | 14 | **migrated_from** [Array, required]: The name and AZ of each instance group that should be used to form the new instance group. 15 | 16 | * **name** [String, required]: Name of an instance group that used to exist in the manifest. 17 | * **az** [String, optional]: Availability zone that was used for the named instance group. This key is optional for instance groups that used first class AZs (via the `azs` key). If a first class AZ was not used, then this key must specify a first class AZ that matches the actual IaaS AZ configuration. 18 | 19 | --- 20 | ## Renaming Instance Groups 21 | 22 | 1. Given the following deployment with instance group `etcd-primary`: 23 | 24 | ```yaml 25 | instance_groups: 26 | - name: etcd-primary 27 | instances: 2 28 | jobs: 29 | - {name: etcd, release: etcd} 30 | vm_type: default 31 | stemcell: default 32 | persistent_disk: 10_240 33 | networks: 34 | - name: default 35 | ``` 36 | 37 | 1. Change the instance group's name to `etcd` and add `migrated_from` with the previous name. 38 | 39 | ```yaml 40 | instance_groups: 41 | - name: etcd 42 | instances: 2 43 | jobs: 44 | - {name: etcd, release: etcd} 45 | vm_type: default 46 | stemcell: default 47 | persistent_disk: 10_240 48 | networks: 49 | - name: default 50 | migrated_from: 51 | - name: etcd-primary 52 | ``` 53 | 54 | 1. Deploy. 55 | 56 | --- 57 | ## Migrating Instance Groups (to first class AZs) 58 | 59 | Before the introduction of first class AZs, each instance group was associated with a resource pool that typically defined some CPI-specific AZ configuration in its `cloud_properties`. Typically there would be multiple instance groups that mostly differed by their name, for example `etcd_z1` and `etcd_z2`. With first class AZs, multiple instance groups typically should be collapsed to simplify the deployment. 60 | 61 | 1. 
Given the following instance groups `etcd_z1` and `etcd_z2` with AZ-specific resource pools and networks: 62 | 63 | ```yaml 64 | instance_groups: 65 | - name: etcd_z1 66 | instances: 2 67 | jobs: 68 | - {name: etcd, release: etcd} 69 | persistent_disk: 10_240 70 | resource_pool: medium_z1 71 | networks: 72 | - name: default_z1 73 | 74 | - name: etcd_z2 75 | instances: 1 76 | jobs: 77 | - {name: etcd, release: etcd} 78 | persistent_disk: 10_240 79 | resource_pool: medium_z2 80 | networks: 81 | - name: default_z2 82 | ``` 83 | 84 | 1. Collapse both instance groups into a single instance group `etcd` and use `migrated_from` with the previous group names. 85 | 86 | ```yaml 87 | instance_groups: 88 | - name: etcd 89 | azs: [z1, z2] 90 | instances: 3 91 | jobs: 92 | - {name: etcd, release: etcd} 93 | vm_type: default 94 | stemcell: default 95 | persistent_disk: 10_240 96 | networks: 97 | - name: default 98 | migrated_from: 99 | - {name: etcd_z1, az: z1} 100 | - {name: etcd_z2, az: z2} 101 | ``` 102 | 103 |

Note that other referenced resources, such as the resource pool and network, should be adjusted to work with AZs.

104 | 105 |

Note: Migration from one AZ to a different AZ is not supported yet.

106 | 107 | 1. Deploy. 108 | -------------------------------------------------------------------------------- /director-certs.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Director SSL Certificate Configuration 3 | --- 4 | 5 |

Note: See [Director SSL Certificate Configuration with OpenSSL](director-certs-openssl.html) if you prefer to generate certs with OpenSSL config.

6 | 7 | Depending on your configuration, there are up to three endpoints to be secured using SSL certificates: the Director, the UAA, and the SAML Service Provider on the UAA. 8 | 9 |

Note: If you are using the UAA for user management, an SSL certificate is mandatory for the Director and the UAA.

10 | 11 |

Note: Unless you are using a configuration server, your SSL certificates will be stored in the Director's database.

12 | 13 | ## Generate SSL certificates 14 | 15 | You can use the CLI v2 `interpolate` command to generate self-signed certificates. Even if you use CLI v2 to generate certificates, you can still continue using CLI v1 with the Director. 16 | 17 | ```yaml 18 | variables: 19 | - name: default_ca 20 | type: certificate 21 | options: 22 | is_ca: true 23 | common_name: bosh_ca 24 | - name: director_ssl 25 | type: certificate 26 | options: 27 | ca: default_ca 28 | common_name: ((internal_ip)) 29 | alternative_names: [((internal_ip))] 30 | - name: uaa_ssl 31 | type: certificate 32 | options: 33 | ca: default_ca 34 | common_name: ((internal_ip)) 35 | alternative_names: [((internal_ip))] 36 | - name: uaa_service_provider_ssl 37 | type: certificate 38 | options: 39 | ca: default_ca 40 | common_name: ((internal_ip)) 41 | alternative_names: [((internal_ip))] 42 | ``` 43 | 44 |
 45 | $ bosh interpolate tpl.yml -v internal_ip=10.244.4.2 --vars-store certs.yml
 46 | $ cat certs.yml
 47 | 
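Individual values can then be extracted from the generated vars store with `--path`, for example:

```bash
# Pull the generated CA certificate and the Director certificate out of certs.yml
bosh int certs.yml --path /director_ssl/ca
bosh int certs.yml --path /director_ssl/certificate
```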
48 | 49 | ## Configure the Director to use certificates 50 | 51 | Update the Director deployment manifest: 52 | 53 | - `director.ssl.key` 54 | - Private key for the Director (content of `bosh int certs.yml --path /director_ssl/private_key`) 55 | - `director.ssl.cert` 56 | - Associated certificate for the Director (content of `bosh int certs.yml --path /director_ssl/certificate`) 57 | - Include all intermediate certificates if necessary 58 | - `hm.director_account.ca_cert` 59 | - CA certificate used by the HM to verify the Director's certificate (content of `bosh int certs.yml --path /director_ssl/ca`) 60 | 61 | Example manifest excerpt: 62 | 63 | ```yaml 64 | ... 65 | jobs: 66 | - name: bosh 67 | properties: 68 | director: 69 | ssl: 70 | key: | 71 | -----BEGIN RSA PRIVATE KEY----- 72 | MII... 73 | -----END RSA PRIVATE KEY----- 74 | cert: | 75 | -----BEGIN CERTIFICATE----- 76 | MII... 77 | -----END CERTIFICATE----- 78 | ... 79 | ``` 80 | 81 |

Note: A `path` to the key or certificate file is not supported.

82 | 83 | If you are using the UAA for user management, additionally put certificates in these properties: 84 | 85 | - `uaa.sslPrivateKey` 86 | - Private key for the UAA (content of `bosh int certs.yml --path /uaa_ssl/private_key`) 87 | - `uaa.sslCertificate` 88 | - Associated certificate for the UAA (content of `bosh int certs.yml --path /uaa_ssl/certificate`) 89 | - Include all intermediate certificates if necessary 90 | - `login.saml.serviceProviderKey` 91 | - Private key for the UAA (content of `bosh int certs.yml --path /uaa_service_provider_ssl/private_key`) 92 | - `login.saml.serviceProviderCertificate` 93 | - Associated certificate for the UAA (content of `bosh int certs.yml --path /uaa_service_provider_ssl/certificate`) 94 | 95 | --- 96 | ## Target the Director 97 | 98 | After you deployed your Director with the above changes, you need to specify `--ca-cert` when targeting the Director: 99 | 100 |
101 | $ bosh --ca-cert <(bosh int certs.yml --path /director_ssl/ca) target 10.244.4.2
102 | 
103 | 104 |

Note: If your certificates are trusted via system installed CA certificates, there is no need to provide `--ca-cert` option.

105 | 106 | --- 107 | [Back to Table of Contents](index.html#install) 108 | -------------------------------------------------------------------------------- /s3-release-blobstore.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Setting up an S3 Release Blobstore 3 | --- 4 | 5 |

Note: Examples require CLI v2.

6 | 7 | This topic is written for release maintainers and describes how to set up an S3 bucket for storing release artifacts. 8 | 9 | ## Creating S3 Bucket 10 | 11 | The S3 bucket is used for storing release blobs and generated final release blobs. It's configured to be readable by everyone. 12 | 13 | - Create an S3 bucket with a descriptive name. For example, choose `redis-blobs` for the `redis-release` release. 14 | 15 | - Under the bucket properties, click `Add bucket policies` and add the following entry to make blobs publicly downloadable. Be sure to change `` to a name you want to call your blobstore bucket. 16 | 17 | ```json 18 | { 19 | "Statement": [{ 20 | "Action": [ "s3:GetObject" ], 21 | "Effect": "Allow", 22 | "Resource": "arn:aws:s3:::/*", 23 | "Principal": { "AWS": ["*"] } 24 | }] 25 | } 26 | ``` 27 | 28 |

Note: S3 buckets have a global namespace. If you create a bucket, that name has been forever consumed for everyone using S3. If you choose to delete that bucket, the name will not be added back to the global pool of names. It is gone forever.

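If you prefer to script the setup, the policy above can also be applied with the AWS CLI; a sketch, assuming your AWS CLI is already configured for the account and the policy JSON is saved to a local `policy.json` (the `redis-blobs` bucket name comes from the earlier example):

```bash
# Apply the public-read bucket policy from a local file
aws s3api put-bucket-policy --bucket redis-blobs --policy file://policy.json
```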
29 | 30 | ## Creating IAM User for the Maintainer 31 | 32 | An IAM user is used to download and upload blobs to a created S3 bucket. 33 | 34 | - Create an AWS IAM user with a name that describes a specific purpose for this user -- uploading release blobs. For example: `redis-blobs-upload`. 35 | 36 | - Make sure to save the credentials provided in the last step of user creation. These will need to be put in the `config/private.yml` file of your BOSH release. This file will look something like: 37 | 38 | ```yaml 39 | --- 40 | blobstore: 41 | options: 42 | access_key_id: 43 | secret_access_key: 44 | ``` 45 | 46 | - Remember to also create/update `config/final.yml`: 47 | 48 | ```yaml 49 | --- 50 | blobstore: 51 | provider: s3 52 | options: 53 | bucket_name: 54 | ``` 55 | 56 |

Note: The .gitignore file in the BOSH release should include config/private.yml. This file should not be committed to the release repo. It is only meant for the release maintainers. config/final.yml, on the other hand, should not be in the .gitignore file, and should be committed to the repository, as it is for users consuming and deploying the release.

57 | 58 | - Attach a _user_ policy that limits the user to read/write permissions on the bucket that was just created: 59 | 60 | ```json 61 | { 62 | "Version": "2012-10-17", 63 | "Statement": [{ 64 | "Effect": "Allow", 65 | "Action": [ "s3:PutObject" ], 66 | "Resource": [ "arn:aws:s3:::/*" ] 67 | }] 68 | } 69 | ``` 70 | 71 |

Note: When you first create a bucket, it might take a little while for Amazon to be able to route requests correctly to the bucket and so downloads may fail with an obscure "Broken Pipe" error. The solution is to wait for some time before trying again. 72 | 73 | ## Setting S3 region 74 | 75 | By default, Amazon S3 buckets resolve to the `us-east-1` (North Virginia) region. If your blobstore bucket resides in a different region, override the region and endpoint settings in `config/final.yml`. For example, a bucket in `eu-west-1` would be as follows: 76 | 77 | ```yaml 78 | --- 79 | blobstore: 80 | provider: s3 81 | options: 82 | bucket_name: 83 | region: eu-west-1 84 | endpoint: https://s3-eu-west-1.amazonaws.com 85 | ``` 86 | 87 | A full list of S3 regions and endpoints is available [here](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region). 88 | 89 | See [S3 CLI Usage](https://github.com/pivotal-golang/s3cli#usage) for additional configuration options. 90 | 91 | ## Usage 92 | 93 | Once the S3 bucket and IAM user are configured with correct access rules, `bosh upload blobs` should succeed and the S3 bucket should contain uploaded blobs. Running `bosh create release --final` will also place additional blobs into the bucket. 94 | -------------------------------------------------------------------------------- /init-vsphere.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Creating environment on vSphere 3 | --- 4 | 5 |

Note: See Initializing BOSH environment on vSphere for using bosh-init instead of CLI v2. (Not recommended)

6 | 7 | This document shows how to set up a new [environment](terminology.html#environment) on vSphere. 8 | 9 | 1. Install [CLI v2](./cli-v2.html). 10 | 11 | 1. Use the `bosh create-env` command to deploy the Director. 12 | 13 |
14 |     # Create directory to keep state
15 |     $ mkdir bosh-1 && cd bosh-1
16 | 
17 |     # Clone Director templates
18 |     $ git clone https://github.com/cloudfoundry/bosh-deployment
19 | 
20 |     # Fill below variables (replace example values) and deploy the Director
21 |     $ bosh create-env bosh-deployment/bosh.yml \
22 |         --state=state.json \
23 |         --vars-store=creds.yml \
24 |         -o bosh-deployment/vsphere/cpi.yml \
25 |         -v director_name=bosh-1 \
26 |         -v internal_cidr=10.0.0.0/24 \
27 |         -v internal_gw=10.0.0.1 \
28 |         -v internal_ip=10.0.0.6 \
29 |         -v network_name="VM Network" \
30 |         -v vcenter_dc=my-dc \
31 |         -v vcenter_ds=datastore0 \
32 |         -v vcenter_ip=192.168.0.10 \
33 |         -v vcenter_user=root \
34 |         -v vcenter_password=vmware \
35 |         -v vcenter_templates=bosh-1-templates \
36 |         -v vcenter_vms=bosh-1-vms \
37 |         -v vcenter_disks=bosh-1-disks \
38 |         -v vcenter_cluster=cluster1
39 |     
40 | 41 | If you want to use Resource Pools, refer to [Deploying BOSH into Resource Pools](init-vsphere-rp.html) for additional CLI flags. 42 | 43 | Use the vSphere Web Client to find out and/or create any missing resources listed below: 44 | - Configure `vcenter_ip` (e.g. '192.168.0.10') with the IP of the vCenter. 45 | - Configure `vcenter_user` (e.g. 'root') and `vcenter_password` (e.g. 'vmware') with the vCenter user name and password. 46 | BOSH does not require the user to be an admin, but it does require the following [privileges](https://github.com/cloudfoundry-incubator/bosh-vsphere-cpi-release/blob/master/docs/required_vcenter_privileges.md). 47 | - Configure `vcenter_dc` (e.g. 'my-dc') with the name of the datacenter the Director will use for VM creation. 48 | - Configure `vcenter_vms` (e.g. 'my-bosh-vms') and `vcenter_templates` (e.g. 'my-bosh-templates') with the name of the folder created to hold VMs and the name of the folder created to hold stemcells. Folders will be automatically created under the chosen datacenter. 49 | - Configure `vcenter_ds` (e.g. 'datastore[1-9]') with a regex matching the names of potential datastores the Director will use for storing VMs and associated persistent disks. 50 | - Configure `vcenter_disks` (e.g. 'my-bosh-disks') with the name of the disks folder. The disk folder will be automatically created in the chosen datastore. 51 | - Configure `vcenter_cluster` (e.g. 'cluster1') with the name of the vSphere cluster. Create the cluster under the chosen datacenter in the Clusters tab. 52 | - Configure `network_name` (e.g. 'VM Network') with the name of the vSphere network. Create the network under the chosen datacenter in the Networks tab. The above example uses the `10.0.0.0/24` network and the Director VM will be placed at `10.0.0.6`. 53 | - [Optional] Configure `vcenter_rp` (e.g. 'my-bosh-rp') with the name of the vSphere resource pool. Create the resource pool under the chosen datacenter in the Clusters tab. 54 | 55 | See [vSphere CPI errors](vsphere-cpi.html#errors) for a list of common errors and resolutions. 56 | 57 | 1. Connect to the Director. 58 | 59 |
60 |     # Configure local alias
61 |     $ bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
62 | 
63 |     # Log in to the Director
64 |     $ export BOSH_CLIENT=admin
65 |     $ export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
66 | 
67 |     # Query the Director for more info
68 |     $ bosh -e bosh-1 env
69 |     
70 | 71 | 1. Save the deployment state files left in your deployment directory `bosh-1` so you can later update/delete your Director. See [Deployment state](cli-envs.html#deployment-state) for details. 72 | 73 | --- 74 | [Back to Table of Contents](index.html#install) 75 | 76 | Previous: [Create an environment](init.html) 77 | -------------------------------------------------------------------------------- /director-users-uaa-perms.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: UAA Permissions 3 | --- 4 | 5 | All UAA users can log into all Directors which can verify the access token. However, user actions will be limited based on the presence of the following scopes in their UAA token: 6 | 7 |

Warning: If you use the same private key to sign tokens on different UAAs, users might obtain a token from one UAA and use it on a Director configured with a different UAA. It is therefore highly recommended to lock down scopes to individual Directors and not re-use the private key used for signing on the UAA.

8 | 9 | ## Anonymous 10 | 11 | Can access: 12 | 13 | - `bosh status`: show general information about targeted Director (authentication is not required) 14 | 15 | --- 16 | ## Full Admin 17 | 18 | Scopes: 19 | 20 | - `bosh.admin`: user has admin access on any Director 21 | - `bosh..admin`: user has admin access on the Director with the corresponding UUID 22 | 23 | Can use all commands on all deployments. 24 | 25 | --- 26 | ## Full Read-only 27 | 28 | Scopes: 29 | 30 | - `bosh.read`: user has read access on any Director 31 | - `bosh..read`: user has read access on the Director with the corresponding UUID 32 | 33 | Cannot modify any resource. 34 | 35 | Can access in read-only capacity: 36 | 37 | - `bosh deployments`: list of *all* deployments and releases/stemcells used 38 | - `bosh releases`: list of *all* releases and their versions 39 | - `bosh stemcells`: list of *all* stemcells and their versions 40 | - `bosh vms`: list of all VMs which includes job names, IPs, vitals, details, etc. 41 | - `bosh tasks`: list of all tasks summaries which includes task descriptions without access to debug logs 42 | 43 | --- 44 | ## Team Admin 45 | 46 |

Note: This feature is available with bosh-release v255.4+.

47 | 48 | The Director has a concept of a team so that a set of users can manage only specific deployments. When a user creates a deployment, the created deployment will be *managed* by the teams that the user is part of. There is currently no way to assign or reassign a deployment's teams. 49 | 50 | Scopes: 51 | 52 | - `bosh.teams..admin`: user has admin access for deployments managed by the team 53 | 54 | Can modify team managed deployments' associated resources: 55 | 56 | - `bosh deploy`: create or update deployment 57 | - `bosh delete deployment`: delete deployment 58 | - `bosh start/stop/recreate`: manage VMs 59 | - `bosh cck`: diagnose deployment problems 60 | - `bosh ssh`: SSH into a VM 61 | - `bosh logs`: fetch logs from a VM 62 | - `bosh run errand`: run an errand 63 | 64 | Can view shared resources: 65 | 66 | - `bosh deployments`: list of team managed deployments and releases/stemcells used 67 | - `bosh releases`: list of *all* releases and their versions 68 | - `bosh stemcells`: list of *all* stemcells and their versions 69 | - `bosh vms`: list of team managed deployments' VMs which includes job names, IPs, vitals, details, etc. 70 | - `bosh tasks`: list of team managed deployments' tasks and their full details 71 | 72 | A team admin cannot upload releases and stemcells. 73 | 74 | --- 75 | ## Stemcell uploader 76 | 77 |

Note: This feature is available with bosh-release v261.2+.

78 | 79 | Scopes: 80 | 81 | - `bosh.stemcells.upload`: user can upload new stemcells 82 | 83 | Note that the CLI will try to list stemcells before uploading a given stemcell; hence, the `bosh upload stemcell` CLI command requires users/clients to have the `bosh.read` scope as well. 84 | 85 | --- 86 | ## Release uploader 87 | 88 |

Note: This feature is available with bosh-release v261.2+.

89 | 90 | Scopes: 91 | 92 | - `bosh.releases.upload`: user can upload new releases 93 | 94 | Note that the CLI will try to list releases before uploading a given release; hence, the `bosh upload release` CLI command requires users/clients to have the `bosh.read` scope as well. 95 | 96 | --- 97 | ## Errors 98 | 99 | ``` 100 | HTTP 401: Not authorized: '/deployments' requires one of the scopes: bosh.admin, bosh.UUID.admin, bosh.read, bosh.UUID.read 101 | ``` 102 | 103 | This error occurs if the user doesn't have the right scopes for the requested command. 104 | 105 | --- 106 | [Back to Table of Contents](index.html#director-config) 107 | -------------------------------------------------------------------------------- /director-certs-openssl.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Director SSL Certificate Configuration with OpenSSL 3 | --- 4 | 5 | Depending on your configuration, there are up to three endpoints to be secured using SSL certificates: the Director, the UAA, and the SAML Service Provider on the UAA. 6 | 7 |

Note: If you are using the UAA for user management, an SSL certificate is mandatory for the Director and the UAA.

8 | 9 |

Note: Unless you are using a configuration server, your SSL certificates will be stored in the Director's database.

10 | 11 | ## Generate SSL certificates (with OpenSSL) 12 | 13 | You can use the following script to generate a root CA certificate and use it to sign three generated SSL certificates: 14 | 15 |
 16 | #!/bin/bash
 17 | 
 18 | set -e
 19 | 
 20 | certs=`dirname $0`/certs
 21 | 
 22 | rm -rf $certs && mkdir -p $certs
 23 | 
 24 | cd $certs
 25 | 
 26 | echo "Generating CA..."
 27 | openssl genrsa -out rootCA.key 2048
 28 | yes "" | openssl req -x509 -new -nodes -key rootCA.key \
 29 |   -out rootCA.pem -days 99999
 30 | 
 31 | function generateCert {
 32 |   name=$1
 33 |   ip=$2
 34 | 
 35 |   cat >openssl-exts.conf <<-EOL
 36 | extensions = san
 37 | [san]
 38 | subjectAltName = IP:${ip}
 39 | EOL
 40 | 
 41 |   echo "Generating private key..."
 42 |   openssl genrsa -out ${name}.key 2048
 43 | 
 44 |   echo "Generating certificate signing request for ${ip}..."
 45 |   # golang requires the cert to have a SAN for the IP
 46 |   openssl req -new -nodes -key ${name}.key \
 47 |     -out ${name}.csr \
 48 |     -subj "/C=US/O=BOSH/CN=${ip}"
 49 | 
 50 |   echo "Generating certificate ${ip}..."
 51 |   openssl x509 -req -in ${name}.csr \
 52 |     -CA rootCA.pem -CAkey rootCA.key -CAcreateserial \
 53 |     -out ${name}.crt -days 99999 \
 54 |     -extfile ./openssl-exts.conf
 55 | 
 56 |   echo "Deleting certificate signing request and config..."
 57 |   rm ${name}.csr
 58 |   rm ./openssl-exts.conf
 59 | }
 60 | 
 61 | generateCert director 10.244.4.2 # <--- Replace with public Director IP
 62 | generateCert uaa-web 10.244.4.2  # <--- Replace with public Director IP
 63 | generateCert uaa-sp 10.244.4.2   # <--- Replace with public Director IP
 64 | 
 65 | echo "Finished..."
 66 | ls -la .
 67 | 
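After running the script, the generated certificates can be sanity-checked against the root CA before deploying; for example:

```bash
# Verify the Director cert chains to the generated CA and carries the expected SAN
openssl verify -CAfile certs/rootCA.pem certs/director.crt
openssl x509 -in certs/director.crt -noout -text | grep -A1 'Subject Alternative Name'
```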
68 | 69 | --- 70 | ## Configure the Director to use certificates 71 | 72 | Update the Director deployment manifest: 73 | 74 | - `director.ssl.key` 75 | - Private key for the Director (e.g. content of `certs/director.key`) 76 | - `director.ssl.cert` 77 | - Associated certificate for the Director (e.g. content of `certs/director.crt`) 78 | - Include all intermediate certificates if necessary 79 | - `hm.director_account.ca_cert` 80 | - CA certificate used by the HM to verify the Director's certificate (e.g. content of `certs/rootCA.pem`) 81 | 82 | Example manifest excerpt: 83 | 84 | ```yaml 85 | ... 86 | jobs: 87 | - name: bosh 88 | properties: 89 | director: 90 | ssl: 91 | key: | 92 | -----BEGIN RSA PRIVATE KEY----- 93 | MII... 94 | -----END RSA PRIVATE KEY----- 95 | cert: | 96 | -----BEGIN CERTIFICATE----- 97 | MII... 98 | -----END CERTIFICATE----- 99 | ... 100 | ``` 101 | 102 |

Note: A `path` to the key or certificate file is not supported.

103 | 104 | If you are using the UAA for user management, additionally put certificates in these properties: 105 | 106 | - `uaa.sslPrivateKey` 107 | - Private key for the UAA (e.g. content of `certs/uaa-web.key`) 108 | - `uaa.sslCertificate` 109 | - Associated certificate for the UAA (e.g. content of `certs/uaa-web.crt`) 110 | - Include all intermediate certificates if necessary 111 | - `login.saml.serviceProviderKey` 112 | - Private key for the UAA (e.g. content of `certs/uaa-sp.key`) 113 | - `login.saml.serviceProviderCertificate` 114 | - Associated certificate for the UAA (e.g. content of `certs/uaa-sp.crt`) 115 | 116 | --- 117 | ## Target the Director 118 | 119 | After you deployed your Director with the above changes, you need to specify `--ca-cert` when targeting the Director: 120 | 121 |
122 | $ bosh --ca-cert certs/rootCA.pem target 10.244.4.2
123 | 
124 | 125 |

Note: If your certificates are trusted via system installed CA certificates, there is no need to provide `--ca-cert` option.

126 | 127 | --- 128 | [Back to Table of Contents](index.html#install) 129 | -------------------------------------------------------------------------------- /windows.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: BOSH on Windows 3 | --- 4 | 5 | BOSH can deploy jobs on Windows VMs. There is open source tooling and documentation available to build [AWS](https://github.com/cloudfoundry-incubator/aws-light-stemcell-builder), [Azure](https://github.com/cloudfoundry-incubator/bosh-windows-stemcell-builder/blob/master/azure-light-stemcell.md), 6 | [vSphere](https://github.com/cloudfoundry-incubator/bosh-windows-stemcell-builder/blob/master/create-manual-vsphere-stemcells.md) and [Openstack](https://github.com/cloudfoundry-incubator/bosh-windows-stemcell-builder/blob/master/create-manual-openstack-stemcells.md) stemcells for Windows. 7 | 8 | In general, Windows BOSH releases work in the same way as standard BOSH releases. The main difference is that the [monit file](create-release.html#monit) is structured differently on Windows than on Linux. Below are specific concerns for jobs on Windows. 9 | 10 | --- 11 | ## Releases 12 | 13 | The structure of a BOSH release for Windows is identical to [Linux BOSH Releases](http://bosh.io/docs/create-release.html). This means the structure of a Windows BOSH release will be: 14 | 15 | - metadata that specifies available configuration options 16 | - ERB configuration files 17 | - a Monit file that describes how to start, stop and monitor processes 18 | - start and stop scripts for each process 19 | - additional hook scripts 20 | 21 | --- 22 | ## Jobs 23 | 24 | The structure of a BOSH job for Windows is similar to the [Standard Linux BOSH Job Lifecycle](http://bosh.io/docs/job-lifecycle.html), only with processes monitored by [Windows Service Wrapper](https://github.com/kohsuke/winsw) instead of monit. 25 | 26 | The monit file for Windows is a JSON config file that describes processes to run: 27 | 28 | ```json 29 | { 30 | "processes": [ 31 | { 32 | "name": "say-hello", 33 | "executable": "powershell", 34 | "args": [ "/var/vcap/jobs/say-hello/bin/start.ps1" ], 35 | "env": { 36 | "FOO": "BAR" 37 | } 38 | } 39 | ] 40 | } 41 | ``` 42 | 43 | The above monit file will execute the file `C:\var\vcap\jobs\say-hello\bin\start.ps1` with the environment variable `FOO` set to `BAR`. The BOSH agent ensures the process is running by executing it within a [service wrapper](https://github.com/kohsuke/winsw). 44 | 45 | Also, note that Pre-Start, Post-Start, Drain, and Post-Deploy scripts (described in the [job lifecycle](http://bosh.io/docs/job-lifecycle.html)) must be PowerShell scripts and end with the `.ps1` extension, i.e., `pre-start.ps1`, `post-start.ps1`, `drain.ps1`, and `post-deploy.ps1`. 46 | 47 | --- 48 | ### Stop Scripts in Jobs 49 | 50 | A release job can have a stop script that will run when the job is restarted or stopped. This script allows the job to clean up and get into a state where it can be safely stopped. 51 | 52 | The stop script replaces the standard mechanism for shutting down a BOSH job. If you use a stop script, [winsw](https://github.com/kohsuke/winsw) will *not* stop your job automatically. Instead, it is the responsibility of the stop script to clean up resources and kill any processes that are part of the job. Winsw will wait for both the stop script and the main job process to exit before reporting to Windows that the service has terminated. 
For more details on how winsw handles a stop script, see the [winsw documentation](https://github.com/kohsuke/winsw/blob/master/doc/xmlConfigFile.md#stopargumentstopexecutable). 53 | 54 | To use a stop script, a change to the job's `monit` and `spec` files must be made. The actual script source is placed in the job's templates directory, e.g. `jobs/job_name/templates`. 55 | 56 | Stdout and stderr are currently not preserved. It is recommended to use the Windows EventLog. 57 | 58 | 59 | ### Monit 60 | 61 | Monit changes can refer to separate scripts for both stop and start actions. 62 | For instance, to use separate scripts for start and stop: 63 | 64 | ```json 65 | { 66 | "processes": [ 67 | { 68 | "name": "say-hello", 69 | "executable": "powershell.exe", 70 | "args": [ "/var/vcap/jobs/say-hello/bin/start.ps1" ], 71 | "stop": { 72 | "executable": "powershell.exe", 73 | "args": [ "/var/vcap/jobs/say-hello/bin/stop.ps1" ] 74 | } 75 | } 76 | ] 77 | } 78 | ``` 79 | 80 | ### Spec 81 | 82 | The `spec` file change is similar to Linux. Here is an example: 83 | 84 | ``` 85 | --- 86 | name: simple-stop-example 87 | templates: 88 | stop.ps1: bin/stop.ps1 89 | ``` 90 | --- 91 | ## Sample BOSH Windows Release 92 | 93 | Please see [the next page](windows-sample-release.html) for a sample BOSH Windows release. 94 | -------------------------------------------------------------------------------- /director-configure-blobstore.html.md.erb: -------------------------------------------------------------------------------- 1 | --- 2 | title: Connecting the Director to an External Blobstore 3 | --- 4 | 5 | The Director stores uploaded releases, configuration files, logs and other data in a blobstore. A default DAV blobstore is sufficient for most BOSH environments; however, a highly-available external blobstore may be desired. 6 | 7 | ## Included DAV (default) 8 | 9 | By default, the Director is configured to use the included DAV blobstore job (see the [Installing BOSH section](index.html#install) for example manifests). Here is how to configure it: 10 | 11 | 1. Add the blobstore release job and make sure that a persistent disk is enabled: 12 | 13 | ```yaml 14 | jobs: 15 | - name: bosh 16 | templates: 17 | - {name: blobstore, release: bosh} 18 | # ... 19 | persistent_disk: 25_000 20 | # ... 21 | networks: 22 | - name: default 23 | static_ips: [10.0.0.6] 24 | ``` 25 | 26 | 1. Configure the blobstore job. The blobstore's address must be reachable by the Agents: 27 | 28 | ```yaml 29 | properties: 30 | blobstore: 31 | provider: dav 32 | address: 10.0.0.6 33 | port: 25250 34 | director: 35 | user: director 36 | password: DIRECTOR-PASSWORD 37 | agent: 38 | user: agent 39 | password: AGENT-PASSWORD 40 | ``` 41 | 42 | The above configuration is used by the Director and the Agents. 43 | 44 | --- 45 | ## S3 46 | 47 | The Director and the Agents can use an S3-compatible blobstore. Here is how to configure it: 48 | 49 | 1. Create a *private* S3 bucket 50 | 51 | 1. Ensure that access to the bucket is protected, as the Director may store sensitive information. 52 | 53 | 1. Modify the deployment manifest for the Director and specify S3 credentials and bucket name: 54 | 55 | ```yaml 56 | properties: 57 | blobstore: 58 | provider: s3 59 | access_key_id: ACCESS-KEY-ID 60 | secret_access_key: SECRET-ACCESS-KEY 61 | bucket_name: test-bosh-bucket 62 | ``` 63 | 64 | 1. 
For an S3-compatible blobstore, you need to additionally specify the host: 65 | 66 | ```yaml 67 | properties: 68 | blobstore: 69 | provider: s3 70 | access_key_id: ACCESS-KEY-ID 71 | secret_access_key: SECRET-ACCESS-KEY 72 | bucket_name: test-bosh-bucket 73 | host: objects.dreamhost.com 74 | ``` 75 | 76 | --- 77 | ## Google Cloud Storage (GCS) 78 | 79 |

Note: Available in bosh release v263+ and Linux stemcells 3450+.

80 | 81 | The Director and the Agents can use GCS as a blobstore. Here is how to configure it: 82 | 83 | 1. [Create a GCS bucket](https://cloud.google.com/storage/docs/creating-buckets). 84 | 85 | 1. Follow the steps on how to create service accounts and configure them with the [minimum set of permissions](google-required-permissions.html#director-with-gcs-blobstore). 86 | 87 | 1. Ensure that access to the bucket is protected, as the Director may store sensitive information. 88 | 89 | 1. Modify deployment manifest for the Director and specify GCS credentials and bucket name: 90 | 91 | ```yaml 92 | properties: 93 | blobstore: 94 | provider: gcs 95 | json_key: | 96 | DIRECTOR-BLOBSTORE-SERVICE-ACCOUNT-FILE 97 | bucket_name: test-bosh-bucket 98 | agent: 99 | blobstore: 100 | json_key: | 101 | AGENT-SERVICE-ACCOUNT-BLOBSTORE-FILE 102 | ``` 103 | 104 | 1. To use [Customer Supplied Encryption Keys](https://cloud.google.com/storage/docs/encryption#customer-supplied) 105 | to encrypt blobstore contents instead of server-side encryption keys, specify `encryption_key`: 106 | 107 | ```yaml 108 | properties: 109 | blobstore: 110 | provider: gcs 111 | json_key: | 112 | DIRECTOR-BLOBSTORE-SERVICE-ACCOUNT-FILE 113 | bucket_name: test-bosh-bucket 114 | encryption_key: BASE64-ENCODED-32-BYTES 115 | agent: 116 | blobstore: 117 | json_key: | 118 | AGENT-SERVICE-ACCOUNT-BLOBSTORE-FILE 119 | ``` 120 | 121 | 1. To use an explicit [Storage Class](https://cloud.google.com/storage/docs/storage-classes) 122 | to store blobstore contents instead of the bucket default, specify `storage_class`: 123 | 124 | ```yaml 125 | properties: 126 | blobstore: 127 | provider: gcs 128 | json_key: | 129 | DIRECTOR-BLOBSTORE-SERVICE-ACCOUNT-FILE 130 | bucket_name: test-bosh-bucket 131 | storage_class: REGIONAL 132 | agent: 133 | blobstore: 134 | json_key: | 135 | AGENT-SERVICE-ACCOUNT-BLOBSTORE-FILE 136 | ``` 137 | --------------------------------------------------------------------------------