├── .gitignore
├── golang
│   ├── .godir
│   ├── Dockerfile
│   ├── build
│   ├── cover
│   ├── test
│   ├── README.md
│   └── godep.md
├── os
│   ├── img
│   │   ├── dev.png
│   │   ├── image.png
│   │   ├── prod.png
│   │   ├── size.png
│   │   ├── small.png
│   │   ├── laptop.png
│   │   ├── settings.png
│   │   ├── template.png
│   │   ├── userdata.png
│   │   ├── vmware-ip.png
│   │   ├── ikoula-login.png
│   │   ├── cloudca-apiinfo.png
│   │   ├── exoscale-size.png
│   │   ├── update-timeline.png
│   │   ├── cloudca-addinstance.png
│   │   ├── cloudca-getapiinfo.png
│   │   ├── ikoula-public-cloud.png
│   │   ├── ikoula-subscriptions.png
│   │   ├── cloudca-addinstance_step1.png
│   │   ├── cloudca-addinstance_step2.png
│   │   ├── cloudca-addinstance_step3.png
│   │   ├── cloudca-addinstance_step4.png
│   │   ├── cloudca-instance_detail.png
│   │   ├── ikoula-instance-deployed.png
│   │   └── ikoula-deploy-instance-menu.png
│   ├── adding-certificate-authorities.md
│   ├── switching-channels.md
│   ├── adding-users.md
│   ├── power-management.md
│   ├── sdk-testing-with-mantle.md
│   ├── scheduling-tasks-with-systemd-timers.md
│   ├── install-debugging-tools.md
│   ├── configuring-dns.md
│   ├── verify-images.md
│   ├── deploy_coreos_libvirt.sh
│   ├── adding-disk-space.md
│   ├── selinux.md
│   ├── sdk-building-development-images.md
│   ├── registry-authentication.md
│   ├── sdk-disk-partitions.md
│   ├── coreos-hardening-guide.md
│   ├── using-systemd-and-udev-rules.md
│   ├── booting-on-eucalyptus.md
│   ├── booting-with-iso.md
│   ├── overview-of-systemctl.md
│   ├── booting-on-cloudstack.md
│   ├── notes-for-distributors.md
│   ├── booting-on-niftycloud-JA_JP.md
│   ├── customize-etcd-unit.md
│   ├── booting-on-packet.md
│   ├── booting-on-ecs.md
│   ├── reading-the-system-log.md
│   ├── using-systemd-drop-in-units.md
│   ├── booting-on-niftycloud.md
│   ├── other-settings.md
│   ├── sdk-modifying-coreos.md
│   ├── booting-on-virtualbox.md
│   ├── customizing-sshd.md
│   ├── sdk-tips-and-tricks.md
│   ├── booting-on-vultr.md
│   ├── mounting-storage.md
│   ├── btrfs-troubleshooting.md
│   └── sdk-building-production-images.md
├── quay-enterprise
│   ├── img
│   │   ├── gear.png
│   │   ├── new-tag.png
│   │   ├── workers.png
│   │   ├── db-setup.png
│   │   ├── superuser.png
│   │   ├── view-app.png
│   │   ├── enable-auth.png
│   │   ├── enable-build.png
│   │   ├── register-app.png
│   │   ├── db-setup-full.png
│   │   ├── enable-trigger.png
│   │   └── container-restart.png
│   ├── tail-gunicorn-logs.sh
│   ├── gzip-registry-logs.sh
│   ├── github-app.md
│   ├── log-debugging.md
│   ├── github-auth.md
│   ├── provision_mysql.sh
│   ├── swift-temp-url.md
│   ├── mysql-container.md
│   ├── github-build.md
│   ├── build-support.md
│   ├── initial-setup.md
│   └── configure-machines.md
├── coreupdate
│   ├── img
│   │   ├── Relationships.png
│   │   ├── coreupdate-group.png
│   │   └── coreupdate-group-default.png
│   ├── configure-machines.md
│   └── update-protocol.md
├── README.md
├── ignition
│   ├── network-configuration.md
│   └── what-is-ignition.md
├── kubernetes
│   └── pods.md
└── flannel
    └── flannel-config.md
/.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | -------------------------------------------------------------------------------- /golang/.godir: -------------------------------------------------------------------------------- 1 | github.com/coreos/docs 2 | -------------------------------------------------------------------------------- /golang/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:onbuild 2 | -------------------------------------------------------------------------------- /os/img/dev.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/dev.png
-------------------------------------------------------------------------------- /os/img/image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/image.png -------------------------------------------------------------------------------- /os/img/prod.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/prod.png -------------------------------------------------------------------------------- /os/img/size.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/size.png -------------------------------------------------------------------------------- /os/img/small.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/small.png -------------------------------------------------------------------------------- /os/img/laptop.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/laptop.png -------------------------------------------------------------------------------- /os/img/settings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/settings.png -------------------------------------------------------------------------------- /os/img/template.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/template.png -------------------------------------------------------------------------------- /os/img/userdata.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/userdata.png -------------------------------------------------------------------------------- /os/img/vmware-ip.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/vmware-ip.png -------------------------------------------------------------------------------- /os/img/ikoula-login.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/ikoula-login.png -------------------------------------------------------------------------------- /os/img/cloudca-apiinfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-apiinfo.png -------------------------------------------------------------------------------- /os/img/exoscale-size.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/exoscale-size.png -------------------------------------------------------------------------------- /os/img/update-timeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/update-timeline.png 
-------------------------------------------------------------------------------- /quay-enterprise/img/gear.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/gear.png -------------------------------------------------------------------------------- /os/img/cloudca-addinstance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-addinstance.png -------------------------------------------------------------------------------- /os/img/cloudca-getapiinfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-getapiinfo.png -------------------------------------------------------------------------------- /os/img/ikoula-public-cloud.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/ikoula-public-cloud.png -------------------------------------------------------------------------------- /os/img/ikoula-subscriptions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/ikoula-subscriptions.png -------------------------------------------------------------------------------- /quay-enterprise/img/new-tag.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/new-tag.png -------------------------------------------------------------------------------- /quay-enterprise/img/workers.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/workers.png -------------------------------------------------------------------------------- /coreupdate/img/Relationships.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/coreupdate/img/Relationships.png -------------------------------------------------------------------------------- /quay-enterprise/img/db-setup.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/db-setup.png -------------------------------------------------------------------------------- /quay-enterprise/img/superuser.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/superuser.png -------------------------------------------------------------------------------- /quay-enterprise/img/view-app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/view-app.png -------------------------------------------------------------------------------- /coreupdate/img/coreupdate-group.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/coreupdate/img/coreupdate-group.png 
-------------------------------------------------------------------------------- /os/img/cloudca-addinstance_step1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-addinstance_step1.png -------------------------------------------------------------------------------- /os/img/cloudca-addinstance_step2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-addinstance_step2.png -------------------------------------------------------------------------------- /os/img/cloudca-addinstance_step3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-addinstance_step3.png -------------------------------------------------------------------------------- /os/img/cloudca-addinstance_step4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-addinstance_step4.png -------------------------------------------------------------------------------- /os/img/cloudca-instance_detail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/cloudca-instance_detail.png -------------------------------------------------------------------------------- /os/img/ikoula-instance-deployed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/ikoula-instance-deployed.png -------------------------------------------------------------------------------- /quay-enterprise/img/enable-auth.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/enable-auth.png -------------------------------------------------------------------------------- /quay-enterprise/img/enable-build.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/enable-build.png -------------------------------------------------------------------------------- /quay-enterprise/img/register-app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/register-app.png -------------------------------------------------------------------------------- /os/img/ikoula-deploy-instance-menu.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/os/img/ikoula-deploy-instance-menu.png -------------------------------------------------------------------------------- /quay-enterprise/img/db-setup-full.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/db-setup-full.png -------------------------------------------------------------------------------- /quay-enterprise/img/enable-trigger.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/enable-trigger.png -------------------------------------------------------------------------------- /quay-enterprise/img/container-restart.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/quay-enterprise/img/container-restart.png -------------------------------------------------------------------------------- /coreupdate/img/coreupdate-group-default.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nelsonjchen/coreos-docs/master/coreupdate/img/coreupdate-group-default.png -------------------------------------------------------------------------------- /quay-enterprise/tail-gunicorn-logs.sh: -------------------------------------------------------------------------------- 1 | CONTAINER_ID=$(docker ps | grep "coreos/registry" | awk '{print $1}') 2 | LOG_LOCATION=$(docker inspect -f '{{ index .Volumes "/var/log" }}' ${CONTAINER_ID}) 3 | tail -f ${LOG_LOCATION}/gunicorn_*/current 4 | -------------------------------------------------------------------------------- /quay-enterprise/gzip-registry-logs.sh: -------------------------------------------------------------------------------- 1 | CONTAINER_ID=$(docker ps | grep "coreos/registry" | awk '{print $1}') 2 | LOG_LOCATION=$(docker inspect -f '{{ index .Volumes "/var/log" }}' ${CONTAINER_ID}) 3 | tar -zcvf registry-logs-$(date +%s).tar.gz ${LOG_LOCATION} 4 | -------------------------------------------------------------------------------- /golang/build: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | PROJ="newproj" 4 | ORG_PATH="github.com/coreos" 5 | REPO_PATH="${ORG_PATH}/${PROJ}" 6 | 7 | if [ ! -h gopath/src/${REPO_PATH} ]; then 8 | mkdir -p gopath/src/${ORG_PATH} 9 | ln -s ../../../.. gopath/src/${REPO_PATH} || exit 255 10 | fi 11 | 12 | export GOBIN=${PWD}/bin 13 | export GOPATH=${PWD}/gopath 14 | 15 | eval $(go env) 16 | 17 | echo "Building ${PROJ}d..." 18 | go build -o bin/${PROJ}d ${REPO_PATH}/${PROJ}d 19 | -------------------------------------------------------------------------------- /golang/cover: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | # 3 | # Generate coverage HTML for a package 4 | # e.g. PKG=./unit ./cover 5 | # 6 | 7 | if [ -z "$PKG" ]; then 8 | echo "cover only works with a single package, sorry" 9 | exit 255 10 | fi 11 | 12 | COVEROUT="coverage" 13 | 14 | if ! [ -d "$COVEROUT" ]; then 15 | mkdir "$COVEROUT" 16 | fi 17 | 18 | # strip out slashes and dots 19 | COVERPKG=${PKG//\//} 20 | COVERPKG=${COVERPKG//./} 21 | 22 | # generate arg for "go test" 23 | export COVER="-coverprofile ${COVEROUT}/${COVERPKG}.out" 24 | 25 | source ./test 26 | 27 | go tool cover -html=${COVEROUT}/${COVERPKG}.out 28 | -------------------------------------------------------------------------------- /quay-enterprise/github-app.md: -------------------------------------------------------------------------------- 1 | # Creating an OAuth Application in GitHub 2 | 3 | - Log into GitHub (Enterprise) 4 | - Visit the applications page under your organization's settings and click "Register New Application". 5 | 6 | 7 | 8 | - Enter your registry's URL as the application URL 9 | 10 | Note: If using public GitHub, the URL entered must be accessible by *your users*. 
It can still be an internal URL. 11 | 12 | - Enter `https://{REGISTRY URL HERE}/oauth2/github/callback` as the Authorization callback URL. 13 | - Create the application 14 | 15 | 16 | 17 | - Note down the `Client ID` and `Client Secret`. 18 | -------------------------------------------------------------------------------- /os/adding-certificate-authorities.md: -------------------------------------------------------------------------------- 1 | # Custom Certificate Authorities 2 | 3 | CoreOS supports custom Certificate Authorities (CAs) in addition to the default list of trusted CAs. Adding your own CA allows you to: 4 | 5 | - Use a corporate wildcard certificate 6 | - Use your own CA to communicate with an installation of CoreUpdate 7 | - Use your own CA to [encrypt communications with a container registry](registry-authentication.md) 8 | 9 | The setup process for any of these use-cases is the same: 10 | 11 | 1. Copy the PEM-encoded certificate authority file (usually with a `.pem` file name extension) to `/etc/ssl/certs` 12 | 13 | 2. Run the `update-ca-certificates` script to update the system bundle of Certificate Authorities. All programs running on the system will now trust the added CA. 14 | 15 | ## More Information 16 | 17 | [Generate Self-Signed Certificates](generate-self-signed-certificates.md) 18 | 19 | [Use an insecure registry behind a firewall](registry-authentication.md#using-a-registry-without-ssl-configured) 20 | 21 | [etcd Security Model]({{site.baseurl}}/etcd/docs/latest/security.html) 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # CoreOS Documentation 2 | 3 | These are publicly accessible versions of the documentation on [coreos.com](http://coreos.com/docs/). 4 | 5 | Check out the [help-wanted tag](https://github.com/coreos/docs/issues?q=is%3Aopen+label%3Ahelp-wanted) if you're not sure what to help out with. 6 | 7 | 1. Fork this repo 8 | 2. Send a Pull Request 9 | 3. We will get your fix up on [coreos.com](http://coreos.com/docs/) 10 | 11 | Thanks for your contributions and fixes! 12 | 13 | ## Translations 14 | 15 | We accept patches for documentation translation. Please send the 16 | documents as a pull request and follow two guidelines: 17 | 18 | 1. Name the files identically to the originals but put them in a 19 | directory that matches the language locale. For example: JA\_JP, 20 | ZH\_CN or KO\_KR. 21 | 22 | 2. Add an explanation about the translated document to the top of the 23 | file: "These documents were localized into Esperanto by Community 24 | Member and last updated on 2014-04-04. If you 25 | find inaccuracies or problems please file an issue on GitHub." 26 | 27 | Thank you for your contributions. 28 | -------------------------------------------------------------------------------- /quay-enterprise/log-debugging.md: -------------------------------------------------------------------------------- 1 | # Enterprise Registry Log Debugging 2 | 3 | ## Personal debugging 4 | 5 | When attempting to debug an issue, one should first consult the logs of the web workers running the Enterprise Registry.
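A quick way to get at those logs from the host is sketched below; it follows the same approach as this repository's `tail-gunicorn-logs.sh`, and assumes the registry container's image name matches "coreos/registry":

```sh
# Find the running registry container and follow its gunicorn worker logs
CONTAINER_ID=$(docker ps | grep "coreos/registry" | awk '{print $1}')
LOG_LOCATION=$(docker inspect -f '{{ index .Volumes "/var/log" }}' ${CONTAINER_ID})
tail -f ${LOG_LOCATION}/gunicorn_*/current
```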
6 | 7 | ## Visit the Management Panel 8 | 9 | Sign in to a super user account and visit `http://yourregistry/superuser` to view the management panel: 10 | 11 | Enterprise Registry Management Panel 12 | 13 | ## View the logs for each service 14 | 15 | - Click the logs tab () 16 | - To view logs for each service, click the service name at the top. The logs will update automatically. 17 | 18 | ## Contacting support 19 | 20 | When contacting support, one should always include a copy of the Enterprise Registry's log directory. 21 | 22 | To download logs, click the " Download All Local Logs (.tar.gz)" link 23 | 24 | ## Shell script to download logs 25 | 26 | The aforementioned operations are also available in script form on [GitHub](https://github.com/coreos/docs/blob/master/quay-enterprise/gzip-registry-logs.sh). 27 | -------------------------------------------------------------------------------- /quay-enterprise/github-auth.md: -------------------------------------------------------------------------------- 1 | # GitHub Authentication 2 | 3 | CoreOS Enterprise Registry supports using GitHub or GitHub Enterprise as an authentication system. 4 | 5 | ## Create an OAuth Application in GitHub 6 | 7 | Follow the instructions at [Create a GitHub Application](github-app.md). 8 | 9 | **NOTE:** This application must be **different** from that used for GitHub Build Triggers. 10 | 11 | ## Visit the Management Panel 12 | 13 | Sign in to a super user account and visit `http://yourregistry/superuser` to view the management panel: 14 | 15 | Enterprise Registry Management Panel 16 | 17 | ## Enable GitHub Authentication 18 | 19 | Enable GitHub Authentication 20 | 21 | - Click the configuration tab () and scroll down to the section entitled GitHub (Enterprise) Authentication. 22 | - Check the "Enable GitHub Authentication" box 23 | - Fill in the credentials from the application created above 24 | - Click "Save Configuration Changes" 25 | - Restart the container (you will be prompted) 26 | -------------------------------------------------------------------------------- /golang/test: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | # 3 | # Run all tests (not including functional) 4 | # ./test 5 | # ./test -v 6 | # 7 | # Run tests for one package 8 | # PKG=./foo ./test 9 | # PKG=bar ./test 10 | # 11 | 12 | # Invoke ./cover for HTML output 13 | COVER=${COVER:-"-cover"} 14 | 15 | source ./build 16 | 17 | TESTABLE="foo bar" 18 | FORMATTABLE="$TESTABLE baz" 19 | 20 | # user has not provided PKG override 21 | if [ -z "$PKG" ]; then 22 | TEST=$TESTABLE 23 | FMT=$FORMATTABLE 24 | 25 | # user has provided PKG override 26 | else 27 | # strip out slashes and dots from PKG=./foo/ 28 | TEST=${PKG//\//} 29 | TEST=${TEST//./} 30 | 31 | # only run gofmt on packages provided by user 32 | FMT="$TEST" 33 | fi 34 | 35 | # split TEST into an array and prepend REPO_PATH to each local package 36 | split=(${TEST// / }) 37 | TEST=${split[@]/#/${REPO_PATH}/} 38 | 39 | echo "Running tests..." 40 | go test ${COVER} $@ ${TEST} 41 | 42 | echo "Checking gofmt..." 43 | fmtRes=$(gofmt -l $FMT) 44 | if [ -n "${fmtRes}" ]; then 45 | echo -e "gofmt checking failed:\n${fmtRes}" 46 | exit 255 47 | fi 48 | 49 | echo "Checking govet..."
50 | vetRes=$(go vet $TEST) 51 | if [ -n "${vetRes}" ]; then 52 | echo -e "govet checking failed:\n${vetRes}" 53 | exit 255 54 | fi 55 | 56 | echo "Success" 57 | -------------------------------------------------------------------------------- /quay-enterprise/provision_mysql.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # A simple shell script to provision a dedicated MySQL container for the CoreOS Enterprise Registry 3 | set -e 4 | 5 | # Edit the following three values to your liking: 6 | MYSQL_USER="coreosuser" 7 | MYSQL_DATABASE="enterpriseregistrydb" 8 | MYSQL_CONTAINER_NAME="mysql" 9 | 10 | # Do not edit these values: 11 | # (creates a 32 char password for the MySQL root user and the Enterprise Registry DB user) 12 | MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q) 13 | MYSQL_PASSWORD=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q) 14 | 15 | echo "Start the Oracle MySQL container:" 16 | # It will provision a blank database for the Enterprise Registry upon first start. 17 | # This initial provisioning can take up to 30 seconds. 18 | 19 | docker \ 20 | run \ 21 | --detach \ 22 | --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} \ 23 | --env MYSQL_USER=${MYSQL_USER} \ 24 | --env MYSQL_PASSWORD=${MYSQL_PASSWORD} \ 25 | --env MYSQL_DATABASE=${MYSQL_DATABASE} \ 26 | --name ${MYSQL_CONTAINER_NAME} \ 27 | --publish 3306:3306 \ 28 | mysql:5.7; 29 | 30 | echo "Sleeping for 10 seconds to allow time for the DB to be provisioned:" 31 | for i in `seq 1 10`; 32 | do 33 | echo "." 34 | sleep 1 35 | done 36 | 37 | echo "Database '${MYSQL_DATABASE}' running." 38 | echo " Username: ${MYSQL_USER}" 39 | echo " Password: ${MYSQL_PASSWORD}" -------------------------------------------------------------------------------- /os/switching-channels.md: -------------------------------------------------------------------------------- 1 | # Switching Release Channels 2 | 3 | CoreOS is released into stable, alpha and beta channels. New features and bug fixes are tested in the alpha channel and are promoted bit-for-bit to the beta channel if no additional bugs are found. 4 | 5 | By design, the CoreOS update engine does not execute downgrades. If you're switching from a channel with a higher CoreOS version than the new channel, your machine won't be updated again until the new channel contains a higher version number. 6 | 7 | ![Update Timeline](img/update-timeline.png) 8 | 9 | ## Create Update Config File 10 | 11 | You can switch machines between channels by creating `/etc/coreos/update.conf`: 12 | 13 | ```ini 14 | GROUP=beta 15 | ``` 16 | 17 | ## Restart Update Engine 18 | 19 | The last step is to restart the update engine in order for it to pick up the changed channel: 20 | 21 | ```sh 22 | sudo systemctl restart update-engine 23 | ``` 24 | 25 | ## Debugging 26 | 27 | After the update engine is restarted, the machine should check for an update within an hour. You can view the update engine log if you'd like to see the requests that are being made to the update service: 28 | 29 | ```sh 30 | journalctl -f -u update-engine 31 | ``` 32 | 33 | For reference, you can find the current version: 34 | 35 | ```sh 36 | cat /etc/os-release 37 | ``` 38 | 39 | ## Release Information 40 | 41 | You can read more about the current releases and channels on the [releases page]({{site.baseurl}}/releases).
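If you provision machines with cloud-config, the same file can be written at boot instead of by hand. A minimal sketch, assuming the beta channel is the target:

```yaml
#cloud-config

write_files:
  - path: /etc/coreos/update.conf
    permissions: 0644
    owner: root
    content: |
      GROUP=beta
```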
42 | -------------------------------------------------------------------------------- /quay-enterprise/swift-temp-url.md: -------------------------------------------------------------------------------- 1 | # Enterprise Registry Swift Direct Download 2 | 3 | ## Direct Download 4 | 5 | The Swift storage engine supports using a feature called [temporary URLs](http://docs.openstack.org/juno/config-reference/content/object-storage-tempurl.html) to allow for faster pulling of images. 6 | 7 | To enable direct download with Swift, please follow these instructions. 8 | 9 | ## Create a Swift temporary URL token 10 | 11 | To enable temporary URLs, first set the `X-Account-Meta-Temp-URL-Key` header on your Object Storage account to an arbitrary string. This string serves as a secret key. For example, to set a key of `somecoolkey` using the swift command-line tool: 12 | 13 | ``` 14 | $ swift post -m "Temp-URL-Key:somecoolkey" 15 | ``` 16 | 17 | ## Visit the Management Panel 18 | 19 | Sign in to a super user account and visit `http://registry.example.com/superuser` to view the management panel: 20 | 21 | Enterprise Registry Management Panel 22 | 23 | ## Go to the settings tab 24 | 25 | - Click the configuration tab () and scroll down to the section entitled Registry Storage. 26 | - Ensure that "OpenStack Storage (Swift)" is selected 27 | 28 | ## Enter the temporary URL key 29 | 30 | Enter the key generated above into the `Temp URL Key` field under the Swift storage engine settings. 31 | 32 | ## Save configuration 33 | 34 | Hit `Save Configuration` to save and validate your configuration. The Swift storage engine will automatically 35 | test that the direct download feature is enabled and working. -------------------------------------------------------------------------------- /ignition/network-configuration.md: -------------------------------------------------------------------------------- 1 | # Network Configuration # 2 | 3 | Configuring networkd with Ignition is a very straightforward task. Because 4 | Ignition runs before networkd starts, configuration is just a matter of writing 5 | the desired config to disk. The Ignition config has a specific section 6 | dedicated to this. 7 | 8 | ## Static Networking ## 9 | 10 | In this example, the network interface with the name "eth0" will be given the 11 | IP address 10.0.1.7. A typical interface will need more configuration and can 12 | use all of the options of a [network unit][network]. 13 | 14 | ```json 15 | { 16 | "networkd": { 17 | "units": [ 18 | { 19 | "name": "00-eth0.network", 20 | "contents": "[Match]\nName=eth0\n\n[Network]\nAddress=10.0.1.7" 21 | } 22 | ] 23 | } 24 | } 25 | ``` 26 | 27 | This configuration will instruct Ignition to create a single network unit named 28 | "00-eth0.network" with the contents: 29 | 30 | ``` 31 | [Match] 32 | Name=eth0 33 | 34 | [Network] 35 | Address=10.0.1.7 36 | ``` 37 | 38 | When the system boots, networkd will read this config and assign the IP address 39 | to eth0. 40 | 41 | [network]: http://www.freedesktop.org/software/systemd/man/systemd.network.html 42 | 43 | ## Bonded NICs ## 44 | 45 | In this example, all of the network interfaces whose names begin with "eth" 46 | will be bonded together to form "bond0". This new interface will then be 47 | configured to use DHCP.
48 | 49 | ```json 50 | { 51 | "networkd": { 52 | "units": [ 53 | { 54 | "name": "00-eth.network", 55 | "contents": "[Match]\nName=eth*\n\n[Network]\nBond=bond0" 56 | }, 57 | { 58 | "name": "10-bond0.netdev", 59 | "contents": "[NetDev]\nName=bond0\nKind=bond" 60 | }, 61 | { 62 | "name": "20-bond0.network", 63 | "contents": "[Match]\nName=bond0\n\n[Network]\nDHCP=true" 64 | } 65 | ] 66 | } 67 | } 68 | ``` 69 | -------------------------------------------------------------------------------- /os/adding-users.md: -------------------------------------------------------------------------------- 1 | # Adding Users 2 | 3 | You can create user accounts on a CoreOS machine manually with `useradd` or via cloud-config when the machine is created. 4 | 5 | ## Add Users via Cloud-Config 6 | 7 | Managing users via cloud-config is preferred because it allows you to use the same configuration across many servers and the cloud-config file can be stored in a repo and versioned. In your cloud-config, you can specify many [different parameters]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/#users) for each user. Here's an example: 8 | 9 | ```yaml 10 | #cloud-config 11 | 12 | users: 13 | - name: elroy 14 | passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm... 15 | groups: 16 | - sudo 17 | - docker 18 | ssh-authorized-keys: 19 | - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq....... 20 | ``` 21 | 22 | Check out the entire [Customize with Cloud-Config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/) guide for the full details. 23 | 24 | ## Add User Manually 25 | 26 | If you'd like to add a user manually, SSH to the machine and use the `useradd` tool. To create the user `user1`, run: 27 | 28 | ```sh 29 | sudo useradd -p "*" -U -m user1 -G sudo 30 | ``` 31 | 32 | The `"*"` creates a user that cannot log in with a password but can log in via SSH key. `-U` creates a group for the user, `-G` adds the user to the existing `sudo` group and `-m` creates a home directory. If you'd like to add a password for the user, run: 33 | 34 | ```sh 35 | $ sudo passwd user1 36 | New password: 37 | Re-enter new password: 38 | passwd: password changed. 39 | ``` 40 | 41 | To assign an SSH key, run: 42 | 43 | ```sh 44 | update-ssh-keys -u user1 user1.pem 45 | ``` 46 | 47 | ## Further Reading 48 | 49 | Read the [full cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/) guide to install users and more.
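A note on the `passwd` field shown earlier: it expects a crypt-style password hash, not a plaintext password. One way to generate a suitable SHA-512 hash is the `mkpasswd` tool (an assumption here: it is available on your workstation, e.g. from the Debian `whois` package):

```sh
# Prompts for a password and prints a $6$... hash for the passwd field
mkpasswd --method=SHA-512 --rounds=4096
```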
50 | -------------------------------------------------------------------------------- /quay-enterprise/mysql-container.md: -------------------------------------------------------------------------------- 1 | # Setting Up a MySQL Docker Container 2 | 3 | If you don't have an existing MySQL system to host the Enterprise Registry database on, then you can run the steps below to create a dedicated MySQL container using the Oracle MySQL verified Docker image from https://registry.hub.docker.com/_/mysql/ 4 | 5 | ```sh 6 | docker pull mysql:5.7 7 | ``` 8 | 9 | Edit these values to your liking: 10 | 11 | ```sh 12 | MYSQL_USER="coreosuser" 13 | MYSQL_DATABASE="enterpriseregistrydb" 14 | MYSQL_CONTAINER_NAME="mysql" 15 | ``` 16 | Do not edit these values: 17 | (creates a 32 char password for the MySQL root user and the Enterprise Registry DB user) 18 | 19 | ```sh 20 | MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q) 21 | MYSQL_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q) 22 | ``` 23 | 24 | Start the MySQL container and create a new DB for the Enterprise Registry: 25 | 26 | ```sh 27 | docker \ 28 | run \ 29 | --detach \ 30 | --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} \ 31 | --env MYSQL_USER=${MYSQL_USER} \ 32 | --env MYSQL_PASSWORD=${MYSQL_PASSWORD} \ 33 | --env MYSQL_DATABASE=${MYSQL_DATABASE} \ 34 | --name ${MYSQL_CONTAINER_NAME} \ 35 | --publish 3306:3306 \ 36 | mysql:5.7; 37 | ``` 38 | 39 | Wait about 30 seconds for the new DB to be created before testing the connection to the DB; the MySQL container will not respond during the initial DB creation process. 40 | 41 | 42 | Alternatively, you can download a simple shell script to perform the steps above: 43 | 44 | ```sh 45 | curl --location https://raw.githubusercontent.com/coreos/docs/master/quay-enterprise/provision_mysql.sh -o /tmp/provision_mysql.sh -# 46 | ``` 47 | Then run: 48 | 49 | ```sh 50 | chmod -c +x /tmp/provision_mysql.sh 51 | /tmp/provision_mysql.sh 52 | ``` 53 | 54 | Note: Using Percona v5.6 for the MySQL container is known to not work at this point in time. 55 | -------------------------------------------------------------------------------- /os/power-management.md: -------------------------------------------------------------------------------- 1 | # Tuning CoreOS Power Management 2 | 3 | ## CPU Governor 4 | 5 | By default, CoreOS uses the "performance" CPU governor, meaning that the CPU 6 | operates at the maximum frequency regardless of load. This is reasonable for 7 | a system that is under constant load or cannot tolerate increased latency. 8 | On the other hand, if the system is idle much of the time and latency is not 9 | a concern, power savings may be desired. 10 | 11 | Several governors are available: 12 | 13 | | Governor | Description | 14 | |--------------------|-------------| 15 | | `performance` | Default. Operate at the maximum frequency |
16 | | `ondemand` | Dynamically scale frequency at 75% cpu load | 17 | | `conservative` | Dynamically scale frequency at 95% cpu load | 18 | | `powersave` | Operate at the minimum frequency | 19 | | `userspace` | Controlled by a userspace application via the `scaling_setspeed` file | 20 | 21 | The "conservative" governor, for example, can be enabled using the following shell commands: 22 | 23 | ```sh 24 | modprobe cpufreq_conservative 25 | echo "conservative" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor > /dev/null 26 | ``` 27 | 28 | This can be configured with [cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos) as well: 29 | 30 | ```yaml 31 | coreos: 32 | units: 33 | - name: cpu-governor.service 34 | command: start 35 | runtime: true 36 | content: | 37 | [Unit] 38 | Description=Enable CPU power saving 39 | 40 | [Service] 41 | Type=oneshot 42 | RemainAfterExit=yes 43 | ExecStart=/usr/sbin/modprobe cpufreq_conservative 44 | ExecStart=/usr/bin/sh -c '/usr/bin/echo "conservative" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor' 45 | ``` 46 | 47 | More information on further tuning each governor is available in the [Kernel Documentation](https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt). 48 | -------------------------------------------------------------------------------- /os/sdk-testing-with-mantle.md: -------------------------------------------------------------------------------- 1 | # Mantle: Gluing CoreOS together 2 | 3 | Mantle is a collection of utilities for the CoreOS SDK. 4 | 5 | ## plume index 6 | 7 | Generate and upload index.html objects to turn a Google Cloud Storage 8 | bucket into a publicly browsable file tree. Useful if you want something 9 | like Apache's directory index for your software download repository. 10 | 11 | ## plume gce cluster launching 12 | 13 | Related commands to launch instances on Google Compute Engine (gce) with 14 | the latest SDK image. SSH keys should be added to the gce project 15 | metadata before launching a cluster. All commands have flags that can 16 | overwrite the default project, bucket, and other settings. `plume help 17 | <command>` can be used to discover all the switches. 18 | 19 | ### plume upload 20 | 21 | Upload latest SDK image to Google Storage and then create a gce image. 22 | Assumes an image packaged with the flag `--format=gce` is present. 23 | Common usage for CoreOS devs using the default bucket and project is: 24 | 25 | `plume upload` 26 | 27 | ### plume list-images 28 | 29 | Print out gce images from a project. Common usage: 30 | 31 | `plume list-images` 32 | 33 | ### plume create-instances 34 | 35 | Launch instances on gce. SSH keys should be added to the metadata 36 | section of the gce developers console. Common usage: 37 | 38 | `plume create-instances -n=3 -image=<image> -config=<cloud-config>` 39 | 40 | ### plume list-instances 41 | 42 | List running gce instances. Common usage: 43 | 44 | `plume list-instances` 45 | 46 | ### plume destroy-instances 47 | 48 | Destroy instances on gce based on prefix or name. Images created with 49 | `create-instances` use a common basename as a prefix that can also be 50 | used to tear down the cluster. Common usage: 51 | 52 | `plume destroy-instances -prefix=$USER` 53 | 54 | ## kola 55 | 56 | Test framework for CoreOS integration testing. Launch groups of related 57 | tests using the latest SDK image on specified platforms (qemu, gce ...)
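A sketch of a typical kola invocation is below; the exact flags are an assumption, so check `kola help` for the switches your build of mantle supports:

```sh
# Run the kola test suite against the latest image on the qemu platform
kola run --platform=qemu
```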
58 | -------------------------------------------------------------------------------- /os/scheduling-tasks-with-systemd-timers.md: -------------------------------------------------------------------------------- 1 | # Scheduling Tasks with systemd Timers 2 | 3 | CoreOS uses systemd timers (a `cron` replacement) to schedule tasks. Here we will show you how you can schedule a periodic job. 4 | 5 | Let's create an alternative for this `crontab` job: 6 | 7 | ``` 8 | */10 * * * * /usr/bin/date >> /tmp/date 9 | ``` 10 | 11 | Timers are paired with service units, so we have to create `/etc/systemd/system/date.service` first: 12 | 13 | ``` 14 | [Unit] 15 | Description=Prints date into /tmp/date file 16 | 17 | [Service] 18 | Type=oneshot 19 | ExecStart=/usr/bin/sh -c '/usr/bin/date >> /tmp/date' 20 | ``` 21 | 22 | Then we have to create a timer unit with the same name but with the `.timer` suffix, `/etc/systemd/system/date.timer`: 23 | 24 | ``` 25 | [Unit] 26 | Description=Run date.service every 10 minutes 27 | 28 | [Timer] 29 | OnCalendar=*:0/10 30 | ``` 31 | 32 | 33 | 34 | This config will run `date.service` every 10 minutes. You can list the active timers on your system using the `systemctl list-timers` command, or include inactive ones with `systemctl list-timers --all`. Run `systemctl start date.timer` to activate the timer. 35 | 36 | You can also create a timer with a different name, e.g. `task.timer`. In this case, you have to specify the service unit name in the `[Timer]` section: 37 | 38 | ``` 39 | Unit=date.service 40 | ``` 41 | 42 | ## Cloud-Config 43 | 44 | Here is an example of how to install systemd timers using `cloud-config`: 45 | 46 | ```yaml 47 | #cloud-config 48 | 49 | coreos: 50 | units: 51 | - name: date.service 52 | content: | 53 | [Unit] 54 | Description=Prints date into /tmp/date file 55 | 56 | [Service] 57 | Type=oneshot 58 | ExecStart=/usr/bin/sh -c '/usr/bin/date >> /tmp/date' 59 | - name: date.timer 60 | command: start 61 | content: | 62 | [Unit] 63 | Description=Run date.service every 10 minutes 64 | 65 | [Timer] 66 | OnCalendar=*:0/10 67 | ``` 68 | 69 | ## Further Reading 70 | 71 | If you're interested in the more general systemd timer features, check out the [full documentation](http://www.freedesktop.org/software/systemd/man/systemd.timer.html). 72 | -------------------------------------------------------------------------------- /quay-enterprise/github-build.md: -------------------------------------------------------------------------------- 1 | # Setup GitHub Build Triggers 2 | 3 | CoreOS Enterprise Registry supports using GitHub or GitHub Enterprise as a trigger for building 4 | images. 5 | 6 | ## Initial Setup 7 | 8 | If you have not yet done so, please [enable build support](build-support.md) in the Enterprise Registry. 9 | 10 | ## Create an OAuth Application in GitHub 11 | 12 | Follow the instructions at [Create a GitHub Application](github-app.md). 13 | 14 | **NOTE:** This application must be **different** from that used for GitHub Authentication. 15 | 16 | ## Visit the Management Panel 17 | 18 | Sign in to a super user account and visit `http://yourregistry/superuser` to view the management panel: 19 | 20 | Enterprise Registry Management Panel 21 | 22 | ## Enable GitHub Triggers 23 | 24 | Enable GitHub Trigger 25 | 26 | - Click the configuration tab () and scroll down to the section entitled GitHub (Enterprise) Build Triggers.
27 | - Check the "Enable GitHub Triggers" box 28 | - Fill in the credentials from the application created above 29 | - Click "Save Configuration Changes" 30 | - Restart the container (you will be prompted) 31 | 32 | ## Tag an Automated Build 33 | 34 | After getting automated builds working, you may want to tag a specific build with a name. By default, the last image pushed to a repository will be tagged as `latest`. 35 | Because tagging is [usually done client side](https://docs.docker.com/userguide/dockerimages/#setting-tags-on-an-image) before an image is pushed, it may not be clear how to tag an image that was built and pushed by GitHub. Luckily, there is an interface for doing so on the repository page. After clicking to select a given build on the build graph, the right side of the page displays tag information which, when clicked, provides a drop-down menu with the option of creating a new tag. 36 | 37 | Create a new tag 38 | 39 | There is currently no ability to automatically tag GitHub-triggered builds. 40 | -------------------------------------------------------------------------------- /os/install-debugging-tools.md: -------------------------------------------------------------------------------- 1 | # Install Debugging Tools 2 | 3 | You can use common debugging tools like tcpdump or strace with Toolbox. Using the filesystem of a specified Docker container, Toolbox launches a container with full system privileges, including access to system PIDs, network interfaces and other global information. Inside of the toolbox, the machine's filesystem is mounted to `/media/root`. 4 | 5 | ## Quick Debugging 6 | 7 | By default, Toolbox uses the stock Fedora Docker container. To start using it, simply run: 8 | 9 | ```sh 10 | /usr/bin/toolbox 11 | ``` 12 | 13 | You're now in the namespace of Fedora and can install any software you'd like via `yum`. For example, if you'd like to use `tcpdump`: 14 | 15 | ```sh 16 | [root@srv-3qy0p ~]# yum install tcpdump 17 | [root@srv-3qy0p ~]# tcpdump -i ens3 18 | tcpdump: verbose output suppressed, use -v or -vv for full protocol decode 19 | listening on ens3, link-type EN10MB (Ethernet), capture size 65535 bytes 20 | ``` 21 | 22 | ### Specify a Custom Docker Image 23 | 24 | Create a `.toolboxrc` in the user's home folder to use a specific Docker image: 25 | 26 | ```sh 27 | $ cat .toolboxrc 28 | TOOLBOX_DOCKER_IMAGE=index.example.com/debug 29 | TOOLBOX_USER=root 30 | $ /usr/bin/toolbox 31 | Pulling repository index.example.com/debug 32 | ...
33 | ``` 34 | 35 | You can also specify this in cloud-config: 36 | 37 | ``` 38 | #cloud-config 39 | write_files: 40 | - path: /home/core/.toolboxrc 41 | owner: core 42 | content: | 43 | TOOLBOX_DOCKER_IMAGE=index.example.com/debug 44 | TOOLBOX_DOCKER_TAG=v1 45 | TOOLBOX_USER=root 46 | ``` 47 | 48 | ## SSH Directly Into A Toolbox 49 | 50 | Advanced users can SSH directly into a toolbox by setting up an `/etc/passwd` entry: 51 | 52 | ```sh 53 | useradd bob -m -p '*' -s /usr/bin/toolbox -U -G sudo,docker 54 | ``` 55 | 56 | To test, SSH as bob: 57 | 58 | ```sh 59 | ssh bob@hostname.example.com 60 | 61 | ______ ____ _____ 62 | / ____/___ ________ / __ \/ ___/ 63 | / / / __ \/ ___/ _ \/ / / /\__ \ 64 | / /___/ /_/ / / / __/ /_/ /___/ / 65 | \____/\____/_/ \___/\____//____/ 66 | [root@srv-3qy0p ~]# yum install emacs 67 | [root@srv-3qy0p ~]# emacs /media/root/etc/systemd/system/docker.service 68 | ``` 69 | -------------------------------------------------------------------------------- /os/configuring-dns.md: -------------------------------------------------------------------------------- 1 | # DNS Configuration 2 | 3 | By default, DNS resolution on CoreOS is handled through `/etc/resolv.conf`, which is a symlink to `/run/systemd/resolve/resolv.conf`. 4 | This file is managed by [systemd-resolved][systemd-resolved]. 5 | Normally, `systemd-resolved` gets DNS IP addresses from [systemd-networkd][systemd-networkd], either via DHCP or static configuration. 6 | DNS IP addresses can also be set via `systemd-resolved`'s [resolved.conf][resolved.conf]. 7 | See [Network configuration with networkd](network-config-with-networkd.md) for more information on `systemd-networkd`. 8 | 9 | ## Using a local DNS cache 10 | 11 | `systemd-resolved` includes a caching DNS resolver. 12 | To use it for DNS resolution and caching, you must enable it via [nsswitch.conf][nsswitch.conf] by adding `resolv` to the `hosts` section. 13 | 14 | Here is an example cloud-config snippet to do that: 15 | 16 | ```yaml 17 | #cloud-config 18 | write_files: 19 | - path: /etc/nsswitch.conf 20 | permissions: 0644 21 | owner: root 22 | content: | 23 | # /etc/nsswitch.conf: 24 | 25 | passwd: files usrfiles 26 | shadow: files usrfiles 27 | group: files usrfiles 28 | 29 | hosts: files usrfiles resolv dns 30 | networks: files usrfiles dns 31 | 32 | services: files usrfiles 33 | protocols: files usrfiles 34 | rpc: files usrfiles 35 | 36 | ethers: files 37 | netmasks: files 38 | netgroup: files 39 | bootparams: files 40 | automount: files 41 | aliases: files 42 | ``` 43 | 44 | Only nss-aware applications can take advantage of the `systemd-resolved` cache. 45 | Notably, this means that statically linked Go programs and programs running within Docker/rkt will use `/etc/resolv.conf` only, and will not use the `systemd-resolved` cache. 46 | 47 | [systemd-resolved]: http://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html 48 | [systemd-networkd]: http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html 49 | [resolved.conf]: http://www.freedesktop.org/software/systemd/man/resolved.conf.html 50 | [nsswitch.conf]: http://man7.org/linux/man-pages/man5/nsswitch.conf.5.html 51 | 52 | -------------------------------------------------------------------------------- /os/verify-images.md: -------------------------------------------------------------------------------- 1 | # Verify CoreOS Images with GPG 2 | 3 | CoreOS publishes new images for each release across a variety of platforms and hosting providers.
Each channel has its own set of images ([stable], [beta], [alpha]) that are posted to our storage site. Along with each image, a signature is generated from the [CoreOS Image Signing Key][signing-key] and posted. 4 | 5 | [signing-key]: {{site.baseurl}}/security/image-signing-key 6 | [stable]: http://stable.release.core-os.net/amd64-usr/current/ 7 | [beta]: http://beta.release.core-os.net/amd64-usr/current/ 8 | [alpha]: http://alpha.release.core-os.net/amd64-usr/current/ 9 | 10 | After downloading your image, you should verify it with the `gpg` tool. First, download the image signing key: 11 | 12 | ```sh 13 | curl -O https://coreos.com/security/image-signing-key/CoreOS_Image_Signing_Key.asc 14 | ``` 15 | 16 | Next, import the public key and verify that the ID matches the website: [CoreOS Image Signing Key][signing-key] 17 | 18 | ```sh 19 | gpg --import --keyid-format LONG CoreOS_Image_Signing_Key.asc 20 | gpg: key 50E0885593D2DCB4: public key "CoreOS Buildbot (Offical Builds) " imported 21 | gpg: Total number processed: 1 22 | gpg: imported: 1 (RSA: 1) 23 | gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model 24 | gpg: depth: 0 valid: 2 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 2u 25 | ``` 26 | 27 | Now we're ready to download an image and its signature, ending in .sig. We're using the QEMU image in this example: 28 | 29 | ```sh 30 | curl -O http://stable.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2 31 | curl -O http://stable.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2.sig 32 | ``` 33 | 34 | Verify the image with the `gpg` tool: 35 | 36 | ```sh 37 | gpg --verify coreos_production_qemu_image.img.bz2.sig 38 | gpg: Signature made Tue Jun 23 09:39:04 2015 CEST using RSA key ID E5676EFC 39 | gpg: Good signature from "CoreOS Buildbot (Offical Builds) " 40 | ``` 41 | 42 | The `Good signature` message indicates that the file signature is valid. Go launch some machines now that we've successfully verified that this CoreOS image isn't corrupt, that it was authored by CoreOS, and wasn't tampered with in transit. 43 | -------------------------------------------------------------------------------- /os/deploy_coreos_libvirt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | usage() { 4 | echo "Usage: $0 %number_of_coreos_nodes%" 5 | } 6 | 7 | if [ "$1" == "" ]; then 8 | echo "Cluster size is empty" 9 | usage 10 | exit 1 11 | fi 12 | 13 | if ! [[ $1 =~ ^[0-9]+$ ]]; then 14 | echo "'$1' is not a number" 15 | usage 16 | exit 1 17 | fi 18 | 19 | LIBVIRT_PATH=/var/lib/libvirt/images/coreos 20 | USER_DATA_TEMPLATE=$LIBVIRT_PATH/user_data 21 | ETCD_DISCOVERY=$(curl -s "https://discovery.etcd.io/new?size=$1") 22 | CHANNEL=stable 23 | RELEASE=current 24 | RAM=1024 25 | CPUs=1 26 | IMG_NAME="coreos_${CHANNEL}_${RELEASE}_qemu_image.img" 27 | 28 | if [ ! -d $LIBVIRT_PATH ]; then 29 | mkdir -p $LIBVIRT_PATH || (echo "Can not create $LIBVIRT_PATH directory" && exit 1) 30 | fi 31 | 32 | if [ ! -f $USER_DATA_TEMPLATE ]; then 33 | echo "$USER_DATA_TEMPLATE template doesn't exist" 34 | exit 1 35 | fi 36 | 37 | for SEQ in $(seq 1 $1); do 38 | COREOS_HOSTNAME="coreos$SEQ" 39 | 40 | if [ ! -d $LIBVIRT_PATH/$COREOS_HOSTNAME/openstack/latest ]; then 41 | mkdir -p $LIBVIRT_PATH/$COREOS_HOSTNAME/openstack/latest || (echo "Can not create $LIBVIRT_PATH/$COREOS_HOSTNAME/openstack/latest directory" && exit 1) 42 | fi 43 |
44 | if [ ! -f $LIBVIRT_PATH/$IMG_NAME ]; then 45 | wget http://${CHANNEL}.release.core-os.net/amd64-usr/${RELEASE}/coreos_production_qemu_image.img.bz2 -O - | bzcat > $LIBVIRT_PATH/$IMG_NAME || (rm -f $LIBVIRT_PATH/$IMG_NAME && echo "Failed to download image" && exit 1) 46 | fi 47 | 48 | if [ ! -f $LIBVIRT_PATH/$COREOS_HOSTNAME.qcow2 ]; then 49 | qemu-img create -f qcow2 -b $LIBVIRT_PATH/$IMG_NAME $LIBVIRT_PATH/$COREOS_HOSTNAME.qcow2 50 | fi 51 | 52 | sed "s#%HOSTNAME%#$COREOS_HOSTNAME#g;s#%DISCOVERY%#$ETCD_DISCOVERY#g" $USER_DATA_TEMPLATE > $LIBVIRT_PATH/$COREOS_HOSTNAME/openstack/latest/user_data 53 | 54 | virt-install --connect qemu:///system --import --name $COREOS_HOSTNAME --ram $RAM --vcpus $CPUs --os-type=linux --os-variant=virtio26 --disk path=$LIBVIRT_PATH/$COREOS_HOSTNAME.qcow2,format=qcow2,bus=virtio --filesystem $LIBVIRT_PATH/$COREOS_HOSTNAME/,config-2,type=mount,mode=squash --vnc --noautoconsole 55 | done 56 | -------------------------------------------------------------------------------- /os/adding-disk-space.md: -------------------------------------------------------------------------------- 1 | # Adding Disk Space to Your CoreOS Machine 2 | 3 | On a CoreOS machine, the operating system itself is mounted as a read-only partition at `/usr`. The root partition provides read-write storage by default and on a fresh install is mostly blank. The default size of this partition depends on the platform, but it is usually between 3GB and 16GB. If more space is required, simply extend the virtual machine's disk image and CoreOS will fix the partition table and resize the root partition to fill the disk on the next boot. 4 | 5 | ## Amazon EC2 6 | 7 | Amazon doesn't support directly resizing volumes; you must take a 8 | snapshot and create a new volume based on that snapshot. Refer to 9 | the AWS EC2 documentation on [expanding EBS volumes][ebs-expand-volume] 10 | for detailed instructions. 11 | 12 | [ebs-expand-volume]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html 13 | 14 | ## QEMU (qemu-img) 15 | 16 | Even if you are not using QEMU itself, the qemu-img tool is the easiest 17 | to use. It will work on raw, qcow2, vmdk, and most other formats. The 18 | command accepts either an absolute size or a relative size by 19 | adding a `+` prefix. Unit suffixes such as `G` or `M` are also supported. 20 | 21 | ```sh 22 | # Increase the disk size by 5GB 23 | qemu-img resize coreos_production_qemu_image.img +5G 24 | ``` 25 | 26 | ## VMware 27 | 28 | The interface available for resizing disks in VMware varies depending on 29 | the product. See this [Knowledge Base article][vmkb1004047] for details. 30 | Most products include a tool called `vmware-vdiskmanager`. The size must 31 | be the absolute disk size; relative sizes are not supported, so be 32 | careful to only increase the size, not shrink it. The unit 33 | suffixes `Gb` and `Mb` are supported. 34 | 35 | ```sh 36 | # Set the disk size to 20GB 37 | vmware-vdiskmanager -x 20Gb coreos_developer_vmware_insecure.vmx 38 | ``` 39 | 40 | [vmkb1004047]: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004047 41 | 42 | ## VirtualBox 43 | 44 | Use qemu-img or vmware-vdiskmanager as described above. VirtualBox does 45 | not support resizing VMDK disk images, only VDI and VHD disks. Meanwhile 46 | VirtualBox only supports using VMDK disk images with the OVF config file 47 | format used for importing/exporting virtual machines.
48 | 49 | If you have no other options, you can try converting the VMDK disk 50 | image to a VDI image and configuring a new virtual machine with it: 51 | 52 | ```sh 53 | VBoxManage clonehd old.vmdk new.vdi --format VDI 54 | VBoxManage modifyhd new.vdi --resize 20480 55 | ``` 56 | -------------------------------------------------------------------------------- /os/selinux.md: -------------------------------------------------------------------------------- 1 | # SELinux on CoreOS 2 | 3 | SELinux is a fine-grained access control mechanism integrated into 4 | CoreOS and rkt. Each container runs in its own independent SELinux context, 5 | increasing isolation between containers and providing another layer of 6 | protection should a container be compromised. 7 | 8 | CoreOS implements SELinux, but currently does not enforce SELinux protections 9 | by default. This allows deployers to verify container operation before enabling 10 | SELinux enforcement. This document covers the process of checking containers 11 | for SELinux policy compatibility, and switching SELinux into `enforcing` mode. 12 | 13 | ## Check a Container's Compatibility with SELinux Policy 14 | 15 | To verify whether the current SELinux policy would inhibit your containers, 16 | enable SELinux logging. In the following set of commands, we delete the rules that suppress this logging by default, and copy the policy store from CoreOS's read-only `/usr` to a writable file system location. 17 | 18 | ```sh 19 | $ rm /etc/audit/rules.d/80-selinux.rules 20 | $ rm /etc/audit/rules.d/99-default.rules 21 | $ rm /etc/audit/rules.d/99-default.rules 22 | $ rm /etc/selinux/mcs 23 | $ cp -a /usr/lib/selinux/mcs /etc/selinux 24 | $ rm /var/lib/selinux 25 | $ cp -a /usr/lib/selinux/policy /var/lib/selinux 26 | $ semodule -DB 27 | $ systemctl restart audit-rules 28 | ``` 29 | 30 | Now run your container. Check the system logs for any messages containing 31 | `avc: denied`. Such messages indicate that an `enforcing` SELinux would prevent 32 | the container from performing the logged operation. Please open an issue at 33 | [coreos/bugs](https://github.com/coreos/bugs/issues), including the full avc 34 | log message. 35 | 36 | ## Enable SELinux Enforcement 37 | 38 | Once satisfied that your container workload is compatible with the SELinux 39 | policy, you can temporarily enable enforcement by running the following command 40 | as root: 41 | 42 | `$ setenforce 1` 43 | 44 | A reboot will reset SELinux to `permissive` mode. 45 | 46 | ### Make SELinux Enforcement Permanent 47 | 48 | To enable SELinux enforcement across reboots, replace the symbolic link 49 | `/etc/selinux/config` with the file it targets, so that the file can be written. 50 | You can use the `readlink` command to dereference the link, as shown in the 51 | following one-liner: 52 | 53 | `$ cp --remove-destination $(readlink -f /etc/selinux/config) /etc/selinux/config` 54 | 55 | Now, edit `/etc/selinux/config` to replace `SELINUX=permissive` with 56 | `SELINUX=enforcing`. 57 | 58 | ## Limitations 59 | 60 | SELinux enforcement is currently incompatible with Btrfs volumes and volumes 61 | that are shared between multiple containers. 62 | -------------------------------------------------------------------------------- /os/sdk-building-development-images.md: -------------------------------------------------------------------------------- 1 | # Building Development Images 2 | 3 | ## Updating Packages on an Image 4 | 5 | Building a new VM image is a time-consuming process.
On development images you
6 | can use `gmerge` to build packages on your workstation and ship them to your
7 | target VM.
8 | 
9 | On your workstation start the dev server inside the SDK chroot:
10 | 
11 | ```sh
12 | start_devserver --port 8080
13 | ```
14 | 
15 | NOTE: This port will need to be Internet accessible if your VM is remote.
16 | 
17 | On your VM, ensure that the `DEVSERVER` setting in
18 | `/etc/coreos/update.conf` points to your workstation's IP/hostname and port, then run `gmerge`:
19 | 
20 | ```sh
21 | gmerge coreos-base/update_engine
22 | ```
23 | 
24 | ### Updating an Image with Update Engine
25 | 
26 | If you want to test that an image you built can successfully upgrade a running
27 | VM, you can use devserver. To specify the version to upgrade to, you can use the
28 | `--image` argument. This should be a newer build than the VM is currently
29 | running; otherwise devserver will answer "no update" to any requests. Here is
30 | an example using the default value:
31 | 
32 | ```sh
33 | start_devserver --image ../build/images/amd64-usr/latest/coreos_developer_image.bin
34 | ```
35 | 
36 | On the target VM ensure that the `SERVER` setting in `/etc/coreos/update.conf`
37 | points to your workstation, for example:
38 | 
39 | ```sh
40 | GROUP=developer
41 | SERVER=http://you.example.com:8080/update
42 | DEVSERVER=http://you.example.com:8080
43 | ```
44 | 
45 | If you modify this file, restart the update engine: `systemctl restart update-engine`
46 | 
47 | On the VM force an immediate update check:
48 | 
49 | ```sh
50 | update_engine_client -update
51 | ```
52 | 
53 | If the update fails you can check the logs of the update engine by running:
54 | 
55 | ```sh
56 | journalctl -u update-engine -o cat
57 | ```
58 | 
59 | If you want to download another update you may need to clear the reboot
60 | pending status:
61 | 
62 | ```sh
63 | update_engine_client -reset_status
64 | ```
65 | 
66 | ## Updating portage-stable ebuilds from Gentoo
67 | 
68 | There is a utility script called `update_ebuilds` that can pull from Gentoo's
69 | CVS tree directly into your local portage-stable tree. Here is an example usage,
70 | bumping go to the latest version:
71 | 
72 | ```sh
73 | ./update_ebuilds --commit dev-lang/go
74 | ```
75 | 
76 | To create a Pull Request after the bump, run:
77 | 
78 | ```sh
79 | cd ~/trunk/src/third_party/portage-stable
80 | git checkout -b 'bump-go'
81 | git push origin bump-go
82 | ```
83 | 
84 | ## Tips and Tricks
85 | 
86 | We've compiled a [list of tips and tricks](/docs/sdk-distributors/sdk/tips-and-tricks) that can make working with the SDK a bit easier.
87 | 
--------------------------------------------------------------------------------
/os/registry-authentication.md:
--------------------------------------------------------------------------------
1 | # Using Authentication for a Registry
2 | 
3 | A JSON file, `.dockercfg`, is generated in your home directory when you run `docker login`. It holds authentication information for a public or private Docker registry. This `.dockercfg` can be reused in other home directories to authenticate. One way to do this is with cloud-config, which is discussed below. If you want to populate these values without running `docker login`, note that the auth token is simply the base64-encoded string `username:password`, as in the sketch below.
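For example, this is a minimal sketch of generating the token by hand; `myuser` and `mypassword` are placeholder credentials:

```sh
# Produce the value for the "auth" field of .dockercfg.
# -n keeps echo from appending a newline, which would corrupt the token.
echo -n "myuser:mypassword" | base64
```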
4 | 
5 | ## The .dockercfg File
6 | 
7 | Here's what an example looks like with credentials for Docker's public index and a private index:
8 | 
9 | 
10 | #### .dockercfg
11 | 
12 | ```json
13 | {
14 |   "quay.io": {
15 |     "auth": "xXxXxXxXxXx=",
16 |     "email": "username@example.com"
17 |   },
18 |   "https://index.docker.io/v1/": {
19 |     "auth": "xXxXxXxXxXx=",
20 |     "email": "username@example.com"
21 |   },
22 |   "https://index.example.com": {
23 |     "auth": "XxXxXxXxXxX=",
24 |     "email": "username@example.com"
25 |   }
26 | }
27 | ```
28 | 
29 | 
30 | The last step is to tell your systemd units to run as the `core` user in order for Docker to use the credentials we just set up. This is done in the service section of the unit:
31 | 
32 | ```ini
33 | [Unit]
34 | Description=My Container
35 | After=docker.service
36 | 
37 | [Service]
38 | User=core
39 | ExecStart=/usr/bin/docker run busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
40 | 
41 | [Install]
42 | WantedBy=multi-user.target
43 | ```
44 | 
45 | ### Cloud-Config
46 | 
47 | Since each machine in your cluster is going to have to pull images, cloud-config is the easiest way to write the config file to disk.
48 | 
49 | ```yaml
50 | #cloud-config
51 | write_files:
52 |   - path: /home/core/.dockercfg
53 |     owner: core:core
54 |     permissions: '0644'
55 |     content: |
56 |       {
57 |         "quay.io": {
58 |           "auth": "xXxXxXxXxXx=",
59 |           "email": "username@example.com"
60 |         },
61 |         "https://index.docker.io/v1/": {
62 |           "auth": "xXxXxXxXxXx=",
63 |           "email": "username@example.com"
64 |         },
65 |         "https://index.example.com": {
66 |           "auth": "XxXxXxXxXxX=",
67 |           "email": "username@example.com"
68 |         }
69 |       }
70 | ```
71 | 
72 | ## Using a Registry Without SSL Configured
73 | 
74 | The default behavior of Docker is to prevent access to registries that aren't using SSL. If you're running a registry behind your firewall without SSL, you need to configure an additional parameter, which whitelists a CIDR range of allowed "insecure" registries.
75 | 
76 | The best way to do this is within your cloud-config:
77 | 
78 | ```yaml
79 | #cloud-config
80 | 
81 | write_files:
82 |   - path: /etc/systemd/system/docker.service.d/50-insecure-registry.conf
83 |     content: |
84 |       [Service]
85 |       Environment='DOCKER_OPTS=--insecure-registry="10.0.1.0/24"'
86 | ```
87 | 
--------------------------------------------------------------------------------
/os/sdk-disk-partitions.md:
--------------------------------------------------------------------------------
1 | # CoreOS Disk Layout
2 | 
3 | CoreOS is designed to be reliably updated via a [continuous stream of updates]({{site.baseurl}}/using-coreos/updates). The operating system has nine different disk partitions and utilizes a subset of them to make each update safe and to enable a rollback to a previous version if anything goes wrong.
4 | 
5 | ## Partition Table
6 | 
7 | | Number | Label | Description | Partition Type |
8 | |:------:|-------|-------------|----------------|
9 | | 1 | EFI-SYSTEM | Contains the bootloader. | VFAT |
10 | | 2 | BIOS-BOOT | This partition is reserved for future use. | (none) |
11 | | 3 | USR-A | One of two active/passive partitions holding CoreOS. | EXT4 |
12 | | 4 | USR-B | One of two active/passive partitions holding CoreOS. | (empty on first boot) |
13 | | 5 | ROOT-C | This partition is reserved for future use. | (none) |
14 | | 6 | OEM | Stores configuration data specific to an [OEM platform][OEM docs] | EXT4 |
15 | | 7 | OEM-CONFIG | Optional storage for an OEM.
| (defined by OEM) |
16 | | 8 | (unused) | This partition is reserved for future use. | (none) |
17 | | 9 | ROOT | Stateful partition for storing persistent data. | EXT4 or BTRFS |
18 | 
19 | For more information, [read more about the disk layout][chromium disk format] used by Chromium and ChromeOS, which inspired the layout used by CoreOS.
20 | 
21 | [OEM docs]: {{site.baseurl}}/docs/sdk-distributors/distributors/notes-for-distributors
22 | [chromium disk format]: http://www.chromium.org/chromium-os/chromiumos-design-docs/disk-format
23 | 
24 | ## Mounted Filesystems
25 | 
26 | CoreOS is divided into two main filesystems, a read-only `/usr` and a stateful read/write `/`.
27 | 
28 | ### Read-Only /usr
29 | 
30 | The `USR-A` or `USR-B` partitions are interchangeable and one of the two is mounted as a read-only filesystem at `/usr`. After an update, CoreOS will re-configure the GPT priority attribute, instructing the bootloader to boot from the passive (newly updated) partition. Here's an example of the priority flags set on an Amazon EC2 machine:
31 | 
32 | ```
33 | $ sudo cgpt show /dev/xvda
34 |        start        size    part  contents
35 |       270336     2097152       3  Label: "USR-A"
36 |                                   Type: Alias for coreos-rootfs
37 |                                   UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
38 |                                   Attr: priority=1 tries=0 successful=1
39 | ```
40 | 
41 | CoreOS images ship with the `USR-B` partition empty to reduce the image filesize. The first CoreOS update will populate it and start the normal active/passive scheme.
42 | 
43 | The OEM partition is also mounted as read-only at `/usr/share/oem`.
44 | 
45 | ### Stateful Root
46 | 
47 | All stateful data, including container images, is stored within the read/write filesystem mounted at `/`. On first boot, the ROOT partition and filesystem will expand to fill any remaining free space at the end of the drive.
48 | 
49 | The data stored on the root partition isn't manipulated by the update process. In return, we do our best to prevent you from modifying the data in /usr.
50 | 
51 | Due to the unique disk layout of CoreOS, an `rm -rf /` is an unsupported but valid operation to do a "factory reset". The machine should boot and operate normally afterwards.
--------------------------------------------------------------------------------
/os/coreos-hardening-guide.md:
--------------------------------------------------------------------------------
1 | # CoreOS Hardening Guide
2 | 
3 | This guide covers the basics of securing a CoreOS Linux instance. CoreOS ships with a very slim network profile; the only service that listens by default is sshd on port 22 on all interfaces. There are also some defaults for local users and services that should be considered.
4 | ## Remote Listening Services
5 | ### Disabling sshd
6 | 
7 | To disable sshd from listening, you can mask and stop the socket:
8 | 
9 | ```
10 | ln -s /dev/null /etc/systemd/system/sshd.socket
11 | systemctl daemon-reload
12 | systemctl stop sshd.socket
13 | ```
14 | 
15 | If you wish to make further customizations see our [customize sshd guide](https://coreos.com/os/docs/latest/customizing-sshd.html).
16 | ## Remote non-Listening Services
17 | ### etcd, fleet, and locksmith
18 | 
19 | If you are using etcd, fleet, or locksmith, they should all be secured and authenticated using TLS. Please see the relevant guides for details.
20 | 
21 | [etcd security guide](https://coreos.com/etcd/docs/2.2.0/security.html)
22 | [fleet TLS configuration](https://coreos.com/os/docs/latest/cloud-config.html#fleet)
23 | [locksmith TLS configuration](https://coreos.com/os/docs/latest/cloud-config.html#locksmith)
24 | 
25 | ## Local Services
26 | ### Local Users
27 | CoreOS has a single default user account called "core". Generally this user is the one that gets ssh keys added to it via cloud-config for administrators to log in. The core user, by default, belongs to the wheel group, which grants sudo access. You can change this by removing the core user from wheel by running this command: `gpasswd -d core wheel`.
28 | ### Docker Daemon
29 | 
30 | The docker daemon is accessible via a unix domain socket at /run/docker.sock. Users in the "docker" group have access to this service, and access to the docker socket grants capabilities similar to sudo. The core user, by default, has access to the docker group. You can change this by removing the core user from docker by running this command: `gpasswd -d core docker`.
31 | ### rkt fetch
32 | 
33 | Users in the "rkt" group have access to the rkt container image store. A user may download new images and place them in the store if they belong to this group. This could be used as an attack vector to insert images that are later executed as root by the rkt container runtime. The core user, by default, has access to the rkt group. You can change this by removing the core user from rkt by running this command: `gpasswd -d core rkt`.
34 | ### fleet API Socket
35 | 
36 | The fleet API allows managing the state of the cluster using JSON over HTTP. By default, CoreOS ships a socket unit for fleet (fleet.socket) which binds to a Unix domain socket, /var/run/fleet.sock. This socket is currently globally writable but will be restricted in a future release. On machines with fleet configured, users with access to this socket have sudo-equivalent access via systemd. To disable fleet completely, run:
37 | 
38 | ```
39 | ln -s /dev/null /etc/systemd/system/fleet.socket
40 | systemctl daemon-reload
41 | systemctl stop fleet.socket
42 | ```
43 | 
44 | To restrict access to fleet.socket to root only, run:
45 | 
46 | ```
47 | mkdir -p /etc/systemd/system/fleet.socket.d
48 | cat << EOM > /etc/systemd/system/fleet.socket.d/10-root-only.conf
49 | [Socket]
50 | SocketMode=0600
51 | SocketUser=root
52 | SocketGroup=root
53 | EOM
54 | systemctl daemon-reload
55 | ```
56 | 
--------------------------------------------------------------------------------
/os/using-systemd-and-udev-rules.md:
--------------------------------------------------------------------------------
1 | # Using systemd and udev Rules
2 | 
3 | In this example we will use a libvirt VM running CoreOS and trigger a systemd unit on a disk attach event. First of all we have to create a systemd unit file, `/etc/systemd/system/device-attach.service`:
4 | 
5 | ```
6 | [Service]
7 | Type=oneshot
8 | ExecStart=/usr/bin/echo 'device has been attached'
9 | ```
10 | 
11 | This unit file will be triggered by our udev rule.
12 | 
13 | Then we have to start `udevadm monitor --environment` to monitor kernel events.
14 | 
15 | Once you've attached a virtio libvirt device (e.g.
`virsh attach-disk coreos /dev/VG/test vdc`), you'll see `udevadm` output similar to this:
16 | 
17 | ```
18 | UDEV [545.954641] add /devices/pci0000:00/0000:00:18.0/virtio4/block/vdb (block)
19 | .ID_FS_TYPE_NEW=
20 | ACTION=add
21 | DEVNAME=/dev/vdb
22 | DEVPATH=/devices/pci0000:00/0000:00:18.0/virtio4/block/vdb
23 | DEVTYPE=disk
24 | ID_FS_TYPE=
25 | MAJOR=254
26 | MINOR=16
27 | SEQNUM=1327
28 | SUBSYSTEM=block
29 | USEC_INITIALIZED=545954447
30 | ```
31 | 
32 | As the output above shows, udev generates an event containing directives (`ACTION=add` and `SUBSYSTEM=block`) which we will use in our rule. The rule should look like this:
33 | 
34 | ```
35 | ACTION=="add", SUBSYSTEM=="block", TAG+="systemd", ENV{SYSTEMD_WANTS}="device-attach.service"
36 | ```
37 | 
38 | This rule means that udev will trigger the `device-attach.service` systemd unit on any block device attachment. Now when we run `virsh attach-disk coreos /dev/VG/test vdc` on the host machine, we should see a `device has been attached` message in the CoreOS node's journal. The process is similar for USB/SAS/SATA device attachment.
39 | 
40 | ## Cloud-Config example
41 | 
42 | To use the unit and udev rule with cloud-config, modify this example as needed:
43 | 
44 | ```yaml
45 | #cloud-config
46 | write_files:
47 |   - path: /etc/udev/rules.d/01-block.rules
48 |     permissions: 0644
49 |     owner: root
50 |     content: |
51 |       ACTION=="add", SUBSYSTEM=="block", TAG+="systemd", ENV{SYSTEMD_WANTS}="device-attach.service"
52 | coreos:
53 |   units:
54 |     - name: device-attach.service
55 |       content: |
56 |         [Unit]
57 |         Description=Notify about attached device
58 | 
59 |         [Service]
60 |         Type=oneshot
61 |         ExecStart=/usr/bin/echo 'device has been attached'
62 | ```
63 | 
64 | ## More systemd Examples
65 | 
66 | For more systemd examples, check out these documents:
67 | 
68 | [Customizing Docker][customizing-docker]
69 | [Customizing the SSH Daemon][customizing-sshd]
70 | [Using systemd Drop-In Units][drop-in]
71 | 
72 | [drop-in]: using-systemd-drop-in-units.html
73 | [customizing-sshd]: customizing-sshd.html#changing-the-sshd-port
74 | [customizing-docker]: customizing-docker.html#using-a-dockercfg-file-for-authentication
75 | [cloud-config]: cloud-config.html
76 | [etcd-discovery]: cluster-discovery.html
77 | [systemd-udev]: using-systemd-and-udev-rules.html
78 | 
79 | ## More Information
80 | systemd.service Docs
81 | systemd.unit Docs
82 | systemd.target Docs
83 | udev Docs
84 | 
--------------------------------------------------------------------------------
/os/booting-on-eucalyptus.md:
--------------------------------------------------------------------------------
1 | # Running CoreOS on Eucalyptus 3.4
2 | 
3 | CoreOS is currently in heavy development and actively being tested.
4 | These instructions will walk you through downloading CoreOS, bundling the image, and running an instance from it.
5 | 
6 | ## Import the Image
7 | 
8 | These steps will download the CoreOS image, uncompress it, convert it from qcow->raw, and then import it into Eucalyptus.
9 | In order to convert the image you will need to install ```qemu-img``` with your favorite package manager.
10 | 
11 | ### Choosing a Channel
12 | 
13 | CoreOS is released into stable, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
14 | 
15 | The channel is selected based on the URL below. Simply replace `alpha` with `beta`.
Read the [release notes]({{site.baseurl}}/releases) for specific features and bug fixes in each channel.
16 | 
17 | ```sh
18 | $ wget -q http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
19 | $ bunzip2 coreos_production_openstack_image.img.bz2
20 | $ qemu-img convert -O raw coreos_production_openstack_image.img coreos_production_openstack_image.raw
21 | $ euca-bundle-image -i coreos_production_openstack_image.raw -r x86_64 -d /var/tmp
22 | 00% |====================================================================================================| 5.33 GB 59.60 MB/s Time: 0:01:35
23 | Wrote manifest bundle/coreos_production_openstack_image.raw.manifest.xml
24 | $ euca-upload-bundle -m /var/tmp/coreos_production_openstack_image.raw.manifest.xml -b coreos-production
25 | Uploaded coreos-production/coreos_production_openstack_image.raw.manifest.xml
26 | $ euca-register coreos-production/coreos_production_openstack_image.raw.manifest.xml --virtualization-type hvm --name "CoreOS-Production"
27 | emi-E4A33D45
28 | ```
29 | 
30 | ## Boot it up
31 | 
32 | Now generate the ssh key that will be injected into the image for the `core`
33 | user and boot it up!
34 | 
35 | ```sh
36 | $ euca-create-keypair coreos > core.pem
37 | $ euca-run-instances emi-E4A33D45 -k coreos -t m1.medium -g default
38 | ...
39 | ```
40 | 
41 | Your first CoreOS instance should now be running. The only thing left to do is
42 | find the IP and SSH in.
43 | 
44 | ```sh
45 | $ euca-describe-instances | grep coreos
46 | RESERVATION r-BCF44206 498025213678 group-1380012085
47 | INSTANCE i-22444094 emi-E4A33D45 euca-10-0-1-61.cloud.home euca-172-16-0-56.cloud.internal running coreos 0
48 | m1.small 2013-10-02T05:32:44.096Z one eki-05573B4A eri-EA7436D2 monitoring-enabled 10.0.1.61 172.16.0.56 instance-store paravirtualized 5046c208-fec1-4a6e-b079-e7cdf6a7db8f_one_1
49 | 
50 | ```
51 | 
52 | Finally, SSH into it; note that the user is `core`:
53 | 
54 | ```sh
55 | $ chmod 400 core.pem
56 | $ ssh -i core.pem core@10.0.1.61
57 | ______ ____ _____
58 | / ____/___ ________ / __ \/ ___/
59 | / / / __ \/ ___/ _ \/ / / /\__ \
60 | / /___/ /_/ / / / __/ /_/ /___/ /
61 | \____/\____/_/ \___/\____//____/
62 | 
63 | core@10-0-0-3 ~ $
64 | ```
65 | 
66 | ## Using CoreOS
67 | 
68 | Now that you have a machine booted it is time to play around.
69 | Check out the [CoreOS Quickstart]({{site.baseurl}}/docs/quickstart) guide or dig into [more specific topics]({{site.baseurl}}/docs).
70 | 
--------------------------------------------------------------------------------
/quay-enterprise/build-support.md:
--------------------------------------------------------------------------------
1 | # Automatically build Dockerfiles with Build Workers
2 | 
3 | CoreOS Enterprise Registry supports building Dockerfiles using a set of worker nodes. Build triggers, such as GitHub webhooks ([Setup Instructions](github-build.md)), can be configured to automatically build new versions of your repositories when new code is committed. This document will walk you through enabling the feature flag and setting up multiple build workers to enable this feature.
4 | 
5 | *Note:* This feature is currently in *beta*, so it may encounter issues every so often. Please report
7 | 8 | ## Visit the Management Panel 9 | 10 | Sign in to a super user account and visit `http://yourregister/superuser` to view the management panel: 11 | 12 | Enterprise Registry Management Panel 13 | 14 | ## Enable Building 15 | 16 | Enable Dockerfile Build 17 | 18 | - Click the configuration tab () and scroll down to the section entitled Dockerfile Build Support. 19 | - Check the "Enable Dockerfile Build" box 20 | - Click "Save Configuration Changes" 21 | - Restart the container (you will be prompted) 22 | 23 | ## Setup the Build Workers 24 | 25 | Enterprise Registry Build Workers 26 | 27 | One or more build workers will communicate with the main registry container to build new containers when triggered. The machines must have Docker installed and must not be used for any other work. The following procedure needs to be done every time a new worker needs to be 28 | added, but it can be automated fairly easily. 29 | 30 | ### Pull the Build Worker Image 31 | 32 | The build worker is currently in beta. To gain access to its repository, please contact support. 33 | Once given access, pull down the latest copy of the image just like any other: 34 | 35 | ```sh 36 | docker pull quay.io/coreos/registry-build-worker:latest 37 | ``` 38 | 39 | ### Run the Build Worker image 40 | 41 | Run this container on each build worker. Since the worker will be orchestrating docker builds, we need to mount in the docker socket. This orchestration will use a large amount of CPU and need to manipulate the docker images on disk — we recommend that dedicated machines be used for this task. 42 | 43 | Use the environment variable `SERVER` to tell the worker how to communicate with the primary Enterprise Registry container. A websocket is used as a data channel, and it was configured when we changed the feature flag above. 44 | 45 | | Security | Websocket Address | 46 | |----------|-------------------| 47 | | Using SSL | ```wss://somehost.com``` | 48 | | Without SSL | ```ws://somehost.com``` | 49 | 50 | Here's what the full command looks like: 51 | 52 | ```sh 53 | docker run --restart on-failure -e SERVER=wss://myenterprise.host -v /var/run/docker.sock:/var/run/docker.sock quay.io/coreos/registry-build-worker:latest 54 | ``` 55 | 56 | When the container starts, each build worker will auto-register with the Enterprise Registry and start building containers once a job triggered and it is assigned to a worker. 57 | 58 | ### Setup GitHub Build (optional) 59 | 60 | If your organization plans to have builds be conducted via pushes to GitHub (or GitHub Enterprise), please continue 61 | with the Setting up GitHub Build. 62 | 63 | -------------------------------------------------------------------------------- /quay-enterprise/initial-setup.md: -------------------------------------------------------------------------------- 1 | # On-Premise Installation 2 | 3 | CoreOS Enterprise Registry requires three components to be running to begin the setup process: 4 | 5 | - A supported database (MySQL, Postgres) 6 | - A Redis instance (for real-time events) 7 | - The Enterprise Registry image 8 | 9 | **NOTE**: Please have the host and port of the database and the Redis instance ready. 10 | 11 | 12 | ## Preparing the Database 13 | 14 | A MySQL RDBMS or Postgres installation with an empty database is required, and a login with full access to said database. The schema will be created the first time the registry image is run. The database install can either be pre-existing or run on CoreOS via a [Docker container](mysql-container.md). 
15 | 
16 | ## Setting up Redis
17 | 
18 | Redis stores data which must be accessed quickly but doesn't necessarily require durability guarantees. If you have an existing Redis instance, make sure to accept incoming connections on port 6379 (or change the port in the setup process) and then feel free to skip this step.
19 | 
20 | To run Redis, simply pull and run the Quay.io Redis image:
21 | 
22 | ```
23 | sudo docker pull quay.io/quay/redis
24 | sudo docker run -d -p 6379:6379 quay.io/quay/redis
25 | ```
26 | 
27 | **NOTE**: This host will have to accept incoming connections on port 6379 from the hosts on which the registry will run.
28 | 
29 | ## Downloading the Enterprise Registry image
30 | 
31 | After signing up you will receive a `.dockercfg` file containing your credentials to the `quay.io/coreos/registry` repository.
32 | Save this file to your CoreOS machine in `/home/core/.dockercfg` and `/root/.dockercfg`.
33 | You should now be able to execute `docker pull quay.io/coreos/registry` to download the container.
34 | 
35 | ## Setting up the Directories
36 | 
37 | CoreOS Enterprise Registry requires a storage directory and a configuration directory:
38 | 
39 | ```
40 | mkdir storage
41 | mkdir config
42 | ```
43 | 
44 | ## Setting up and running the Registry
45 | 
46 | Run the following command, replacing `/local/path/to/the/config/directory` and `/local/path/to/the/storage/directory` with the absolute
47 | paths to the directories created above:
48 | 
49 | ```
50 | sudo docker run --restart=always -p 443:443 -p 80:80 --privileged=true -v /local/path/to/the/config/directory:/conf/stack -v /local/path/to/the/storage/directory:/datastorage -d quay.io/coreos/registry
51 | ```
52 | 
53 | Enterprise Registry Setup Screen
54 | 
55 | Once started, visit http://yourhost/setup, wait for the page to load (it may take a minute or two), and follow the instructions there to set up the Enterprise Registry.
56 | 
57 | **NOTE**: The Enterprise Registry will restart itself a few times during this setup process. If the container does not automatically come
58 | back up, simply run the command above again.
59 | 
60 | Enterprise Registry Restart
61 | 
62 | 
63 | ## Verifying the Registry status
64 | 
65 | Visit the `/health/endtoend` endpoint on the registry hostname and verify that the `code` is `200` and `is_testing` is `false`.
66 | 
67 | 
68 | ## Logging in
69 | 
70 | ### If using database authentication:
71 | 
72 | Once the Enterprise Registry is running, new users can be created by clicking the `Sign Up` button. If e-mail is enabled, the sign up process will require an e-mail confirmation step, after which repositories, organizations and teams can be set up by the user.
73 | 
74 | 
75 | ### If using LDAP authentication:
76 | 
77 | Users should be able to log in to the Enterprise Registry directly with their LDAP username and password.
78 | 
--------------------------------------------------------------------------------
/os/booting-with-iso.md:
--------------------------------------------------------------------------------
1 | # Booting CoreOS from an ISO
2 | 
3 | The latest CoreOS ISOs can be downloaded from the image storage site:
4 | 
5 | 
6 | 
7 | ### Alpha Channel
8 | 
9 | The alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
10 | 
11 | Download Alpha ISO
12 | Browse Storage Site
13 | 
14 | All of the files necessary to verify the image can be found on the storage site.
15 | 
16 | ### Beta Channel
17 | 
18 | The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
19 | 
20 | Download Beta ISO
21 | Browse Storage Site
22 | 
23 | All of the files necessary to verify the image can be found on the storage site.
24 | 
25 | ### Stable Channel
26 | 
27 | The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.
28 | 
29 | Download Stable ISO
30 | Browse Storage Site
31 | 
32 | All of the files necessary to verify the image can be found on the storage site, as in the verification sketch below.
33 | 
34 | 
35 | 
36 | 
37 | 
38 | 
39 | 
40 | 
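For example, verifying a Stable ISO against its detached GPG signature follows the same pattern used for other CoreOS images. This is a sketch: the exact file names live on the storage site, and the [CoreOS Image Signing Key]({{site.baseurl}}/security/image-signing-key) must already be imported into your keyring:

```sh
# Download the ISO and its detached signature, then verify the pair.
# File names are assumptions; confirm them on the storage site first.
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso.sig
gpg --verify coreos_production_iso_image.iso.sig
```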
41 | 
42 | ## Known Limitations
43 | 
44 | 1. The best strategy for providing [cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config) is via [config-drive]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-config-drive).
45 | 
46 | ## Install to Disk
47 | 
48 | The most common use-case for this ISO is to install CoreOS to disk. You can [find those instructions here]({{site.baseurl}}/docs/running-coreos/bare-metal/installing-to-disk).
49 | 
50 | ## Bypass Authentication
51 | 
52 | If you need to bypass authentication in order to install, the kernel option `coreos.autologin` allows you to drop directly to a shell on a given console without prompting for a password. This is useful for troubleshooting, but use it with caution.
53 | 
54 | For any console that doesn't normally get a login prompt by default, be sure to combine it with the `console` option:
55 | 
56 | ```
57 | console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
58 | ```
59 | 
60 | Without any argument it enables access on all consoles. Note that for the VGA console the login prompts are on virtual terminals (`tty1`, `tty2`, etc.), not the VGA console itself (`tty0`).
61 | 
--------------------------------------------------------------------------------
/golang/README.md:
--------------------------------------------------------------------------------
1 | # Go at CoreOS
2 | 
3 | We use Go (golang) a lot at CoreOS, and we've built up a lot of internal knowledge about how best to develop Go projects.
4 | 
5 | This document serves as a best practices and style guide for how to work on new and existing CoreOS projects written in Go.
6 | 
7 | ## Version
8 | 
9 | - Wherever possible, use the [latest official release][go-dl] of Go
10 | - Any software shipped in the CoreOS image should be developed against the [version shipped in the CoreOS image](https://github.com/coreos/portage-stable/tree/master/dev-lang/go)
11 | 
12 | [go-dl]: https://golang.org/dl/
13 | 
14 | ## Style
15 | 
16 | Go style at CoreOS essentially just means following the upstream conventions:
17 | - [Effective Go][effectivego]
18 | - [CodeReviewComments][codereview]
19 | - [Godoc][godoc]
20 | 
21 | It's recommended to set a save hook in your editor of choice that runs `goimports` against your code.
22 | 
23 | [effectivego]: https://golang.org/doc/effective_go.html
24 | [codereview]: https://github.com/golang/go/wiki/CodeReviewComments
25 | [godoc]: http://blog.golang.org/godoc-documenting-go-code
26 | 
27 | ## Tests
28 | 
29 | - Always run [goimports][goimports] (which transitively calls `gofmt`) and `go vet`
30 | - Use [table-driven tests][table-driven] wherever possible ([example][table-driven-example])
31 | - Use [travis][travis] to run unit/integration tests against the project repository ([example][travis-example])
32 | - Use [SemaphoreCI][semaphore] to run functional tests where possible ([example][semaphore-example])
33 | - Use [GoDebug][godebug] `pretty.Compare` to compare objects (structs, maps, slices, etc.)
([example][godebug-compare-example])
34 | 
35 | [godebug]: https://github.com/kylelemons/godebug/
36 | [godebug-compare-example]: https://github.com/coreos-inc/auth/blob/master/functional/db_test.go#L107
37 | [goimports]: https://github.com/bradfitz/goimports
38 | [table-driven]: https://github.com/golang/go/wiki/TableDrivenTests
39 | [table-driven-example]: https://github.com/coreos/etcd/blob/35fddbc5d01f5e88bbc590c60f0b5e3ea8fa141b/raft/raft_paper_test.go#L186
40 | [travis]: https://travis-ci.org/
41 | [travis-example]: https://github.com/coreos/fleet/blob/master/.travis.yml
42 | [semaphore]: https://semaphoreci.com/
43 | [semaphore-example]: https://github.com/coreos/rkt/blob/master/tests/README.md
44 | 
45 | ## Dependencies
46 | 
47 | - Carefully consider adding dependencies to your project: do you really need it?
48 | - Manage third-party dependencies with [godep][godep-guide]
49 | 
50 | [godep-guide]: golang/godep.md
51 | 
52 | ## Shared Code
53 | 
54 | Idiomatic golang generally eschews creating generic utility packages in favour of implementing the necessary code as locally as possible to its use case.
55 | In cases where generic, utility code makes sense, though, move it to `github.com/coreos/pkg`.
56 | Use this repository as a first port of call when the need for generic code seems to arise.
57 | 
58 | ## Docker
59 | 
60 | When creating Docker images from Go projects, use a combination of a `.godir` file and the `golang:onbuild` base image to produce the simplest possible Dockerfile for a Go project.
61 | The `.godir` file must contain the import path of the package being written (i.e. etcd's .godir contains "github.com/coreos/etcd").
62 | 
63 | ## Logging
64 | 
65 | When in need of more sophisticated logging than the [stdlib log package][stdlib-log] provides, use the shared [CoreOS Log package][capnslog] (aka `capnslog`).
66 | 
67 | [stdlib-log]: https://golang.org/pkg/log
68 | [capnslog]: https://github.com/coreos/pkg/tree/master/capnslog
69 | 
70 | ## CLI
71 | 
72 | In anything other than the most basic CLI cases (i.e. where the [stdlib flag package][stdlib-flag] suffices), use [Cobra][cobra] to construct command-line tools.
73 | 
74 | [stdlib-flag]: https://golang.org/pkg/flag
75 | [cobra]: https://github.com/spf13/cobra
76 | 
77 | ## Development Tools
78 | 
79 | - Use `gvm` to manage multiple versions of golang and multiple GOPATHs
80 | 
--------------------------------------------------------------------------------
/os/overview-of-systemctl.md:
--------------------------------------------------------------------------------
1 | # Overview of systemctl
2 | 
3 | `systemctl` is your interface to systemd, the init system used in CoreOS. All processes on a single machine are started and managed by systemd, including your Docker containers. You can learn more in our [Getting Started with systemd]({{site.baseurl}}/docs/launching-containers/launching/getting-started-with-systemd) guide. Let's explore a few helpful `systemctl` commands. You must run all of these commands locally on the CoreOS machine:
4 | 
5 | ## Find the Status of a Container
6 | 
7 | The first step to troubleshooting with `systemctl` is to find the status of the item in question. If you have multiple `Exec` commands in your service file, you can see which one of them is failing and view the exit code.
Here's a failing service that starts a private Docker registry in a container:
8 | 
9 | ```sh
10 | $ sudo systemctl status custom-registry.service
11 | 
12 | custom-registry.service - Custom Registry Service
13 | Loaded: loaded (/media/state/units/custom-registry.service; enabled-runtime)
14 | Active: failed (Result: exit-code) since Sun 2013-12-22 12:40:11 UTC; 35s ago
15 | Process: 10191 ExecStopPost=/usr/bin/etcdctl delete /registry (code=exited, status=0/SUCCESS)
16 | Process: 10172 ExecStartPost=/usr/bin/etcdctl set /registry index.domain.com:5000 (code=exited, status=0/SUCCESS)
17 | Process: 10171 ExecStart=/usr/bin/docker run -rm -p 5555:5000 54.202.26.87:5000/registry /bin/sh /root/boot.sh (code=exited, status=1/FAILURE)
18 | Main PID: 10171 (code=exited, status=1/FAILURE)
19 | CGroup: /system.slice/custom-registry.service
20 | 
21 | Dec 22 12:40:01 localhost etcdctl[10172]: index.domain.com:5000
22 | Dec 22 12:40:01 localhost systemd[1]: Started Custom Registry Service.
23 | Dec 22 12:40:01 localhost docker[10171]: Unable to find image '54.202.26.87:5000/registry' (tag: latest) locally
24 | Dec 22 12:40:11 localhost docker[10171]: 2013/12/22 12:40:11 Invalid Registry endpoint: Get http://index2.domain.com:5000/v1/_ping: dial tcp 54.204.26.2...o timeout
25 | Dec 22 12:40:11 localhost systemd[1]: custom-registry.service: main process exited, code=exited, status=1/FAILURE
26 | Dec 22 12:40:11 localhost etcdctl[10191]: index.domain.com:5000
27 | Dec 22 12:40:11 localhost systemd[1]: Unit custom-registry.service entered failed state.
28 | Hint: Some lines were ellipsized, use -l to show in full.
29 | ```
30 | 
31 | You can see that `Process: 10171 ExecStart=/usr/bin/docker` exited with `status=1/FAILURE`, and the log states that the index we attempted to launch the container from, `54.202.26.87`, wasn't valid, so the container image couldn't be downloaded.
32 | 
33 | ## List Status of All Units
34 | 
35 | Listing all of the processes running on the box is too much information, but you can pipe the output into grep to find the services you're looking for. Here are all service files and their status:
36 | 
37 | ```sh
38 | sudo systemctl list-units | grep .service
39 | ```
40 | 
41 | ## Start or Stop a Service
42 | 
43 | ```sh
44 | sudo systemctl start apache.service
45 | ```
46 | 
47 | ```sh
48 | sudo systemctl stop apache.service
49 | ```
50 | 
51 | ## Kill a Service
52 | 
53 | This will stop the process immediately:
54 | 
55 | ```sh
56 | sudo systemctl kill apache.service
57 | ```
58 | 
59 | ## Restart a Service
60 | 
61 | Restarting a service is as easy as:
62 | 
63 | ```sh
64 | sudo systemctl restart apache.service
65 | ```
66 | 
67 | If you're restarting a service after you changed its service file, you will need to reload all of the service files before your changes take effect:
68 | 
69 | ```sh
70 | sudo systemctl daemon-reload
71 | ```
72 | 
73 | #### More Information
74 | Getting Started with systemd
75 | systemd.service Docs
76 | systemd.unit Docs
77 | 
--------------------------------------------------------------------------------
/os/booting-on-cloudstack.md:
--------------------------------------------------------------------------------
1 | # Running CoreOS on CloudStack
2 | 
3 | This guide explains how to deploy CoreOS with CloudStack. These instructions will walk you through downloading the CoreOS image and running an instance from it.
4 | This document assumes that CloudStack is already installed. Please refer to the CloudStack Install Guide for installation steps.
5 | 
6 | 
7 | ## Register the CoreOS image (Template)
8 | 
9 | After logging in to the CloudStack UI, upload a template as follows:
10 | 
11 | 
12 | 1. In the left navigation bar, click Templates.
13 | 2. Click Register Template.
14 | 3. Provide the following:
15 | 
16 |    - Name and Description. These will be shown in the UI, so choose
17 |      something descriptive.
18 |    - URL. The Management Server will download the file from the
19 |      specified URL, such as http://dl.openvm.eu/cloudstack/coreos/x86_64/coreos_production_cloudstack_image-kvm.qcow2.bz2.
20 |    - Zone. Choose the zone where you want the template to be
21 |      available, or All Zones to make it available throughout
22 |      CloudStack.
23 |    - OS Type. This helps CloudStack and the hypervisor perform
24 |      certain operations and make assumptions that improve the
25 |      performance of the guest.
26 |    - Hypervisor. The supported hypervisors are listed. Select the
27 |      desired one.
28 |    - Format. The format of the template upload file, such as VHD or
29 |      OVA.
30 |    - Extractable. Choose Yes if the template is available for
31 |      extraction. If this option is selected, end users can download a
32 |      full image of a template.
33 |    - Public. Choose Yes to make this template accessible to all
34 |      users of this CloudStack installation. The template will appear in
35 |      the Community Templates list. See "Private and
36 |      Public Templates".
37 |    - Featured. Choose Yes if you would like this template to be
38 |      more prominent for users to select. The template will appear in
39 |      the Featured Templates list. Only an administrator can make a
40 |      template Featured.
41 | 
42 | 
43 | 
44 | 
45 | 
46 | 
47 | 
48 | 
49 | 
50 | 
51 | 
52 | 
53 | 
54 | 
55 | 
56 | Alternatively, the registerTemplate API can also be used.
57 | 
58 | ### CoreOS Templates
59 | 
60 | CoreOS templates created by the Apache CloudStack community are located at http://dl.openvm.eu/cloudstack/coreos/x86_64/
61 | CoreOS templates are currently available for the XenServer, KVM, VMware and HyperV hypervisors.
62 | 
63 | ### Deploy CoreOS Instance
64 | 

To create a VM from a template:

66 |
    67 |
  1. Log in to the CloudStack UI as an administrator or user.

    68 |
  2. 69 |
  3. In the left navigation bar, click Instances.

    70 |
  4. 71 |
  5. Click Add Instance.

    72 |
  6. 73 |
  7. Select a zone.

    74 |
  8. 75 |
  9. Select the CoreOS template registered in the previous step. 76 |

  10. 77 |
  11. Click Submit and your VM will be created and started.

    78 |
  12. 79 |
80 | 
81 | Alternatively, the deployVirtualMachine API can also be used to deploy a CoreOS instance.
82 | 
83 | ### Virtual machine configuration
84 | 
85 | cloud-config can be provided as userdata when deploying a virtual machine. userdata is an optional request parameter for the deployVirtualMachine API.
86 | 
87 | ## Using CoreOS
88 | 
89 | Now that you have a machine booted it is time to play around.
90 | Check out the [CoreOS Quickstart]({{site.baseurl}}/docs/quickstart) guide or dig into [more specific topics]({{site.baseurl}}/docs).
91 | 
--------------------------------------------------------------------------------
/os/notes-for-distributors.md:
--------------------------------------------------------------------------------
1 | # Notes for Distributors
2 | 
3 | ## Importing Images
4 | 
5 | Images of CoreOS alpha releases are hosted at `http://alpha.release.core-os.net/amd64-usr/`. There are directories for releases by version as well as `current` with a copy of the latest version. Similarly, beta releases can be found at `http://beta.release.core-os.net/amd64-usr/`.
6 | 
7 | If you are importing images for use inside of your environment it is recommended that you import from the `current` directory. For example, to grab the alpha OpenStack version of CoreOS you can import `http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2`. There is a `version.txt` file in this directory which you can use to label the image with a version number.
8 | 
9 | It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The GPG signature for each image is a detached `.sig` file that must be passed to `gpg --verify`. For example:
10 | 
11 | ```sh
12 | wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
13 | wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2.sig
14 | gpg --verify coreos_production_openstack_image.img.bz2.sig
15 | ```
16 | 
17 | [signing-key]: {{site.baseurl}}/security/image-signing-key
18 | 
19 | ## Image Customization
20 | 
21 | Customizing a CoreOS image for a specific operating environment is easily done through [cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/), a YAML-based configuration standard that is widely supported. As a provider, you must ensure that your platform makes this data available to CoreOS, where it will be parsed during the boot process.
22 | 
23 | Use cloud-config to handle platform-specific configuration such as custom networking, running an agent on the machine or injecting files onto disk. CoreOS will automatically parse and execute `/usr/share/oem/cloud-config.yml` if it exists. Your cloud-config should create additional units that process user-provided metadata, as described below.
24 | 
25 | ## Handling End-User Cloud-Config Files
26 | 
27 | End-users should be able to provide a cloud-config file to your platform while specifying their VM's parameters. This file should be made available to CoreOS at a known network address, injected directly onto disk or contained within a [config-drive][config-drive-docs]. Below are a few examples of how this process works on a few different providers.
28 | 
29 | [config-drive-docs]: http://docs.openstack.org/user-guide/cli_config_drive.html
30 | 
31 | ### Amazon EC2 Example
32 | 
33 | CoreOS machines running on Amazon EC2 utilize a two-step cloud-config process.
First, a cloud-config file baked into the image runs systemd units that execute scripts to fetch the user-provided SSH key and fetch the [user-provided cloud-config][amazon-cloud-config] from the instance [user-data service][amazon-user-data-doc] on Amazon's internal network. Afterwards, the user-provided cloud-config, specified from either the web console or API, is parsed.
34 | 
35 | You can find the [code for this process on GitHub][amazon-github]. End-user instructions for this process can be found on our [Amazon EC2 docs][amazon-cloud-config].
36 | 
37 | [amazon-github]: https://github.com/coreos/coreos-overlay/tree/master/coreos-base/oem-ec2-compat
38 | [amazon-user-data-doc]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html#instancedata-user-data-retrieval
39 | [amazon-cloud-config]: {{site.baseurl}}/docs/running-coreos/cloud-providers/ec2#cloud-config
40 | 
41 | ### Rackspace Cloud Example
42 | 
43 | Rackspace passes configuration data to a VM by mounting [config-drive][config-drive-docs], a special configuration drive containing machine-specific data, to the machine. Like Amazon EC2, CoreOS images for Rackspace contain a cloud-config file baked into the image that runs units to read from the config-drive. If a user-provided cloud-config file is found, it is parsed.
44 | 
45 | You can find the [code for this process on GitHub][rackspace-github]. End-user instructions for this process can be found on our [Rackspace docs][rackspace-cloud-config].
46 | 
47 | [rackspace-github]: https://github.com/coreos/coreos-overlay/tree/master/coreos-base/oem-rackspace
48 | [rackspace-cloud-config]: {{site.baseurl}}/docs/running-coreos/cloud-providers/rackspace#cloud-config
49 | 
--------------------------------------------------------------------------------
/coreupdate/configure-machines.md:
--------------------------------------------------------------------------------
1 | # Configure Machines to Use CoreUpdate
2 | 
3 | Configuring new or existing CoreOS machines to communicate with a [CoreUpdate](https://coreos.com/products/coreupdate) instance is a simple change to a configuration file.
4 | 
5 | ## New Machines
6 | 
7 | New servers can be configured to communicate with your CoreUpdate installation by using [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config).
8 | 
9 | By default, your installation has a single application, CoreOS, with the identifier `e96281a6-d1af-4bde-9a0a-97b76e56dc57`. This ID is universal and all CoreOS machines are configured to use it. Within the CoreOS application, there are several application groups which have been created to match CoreOS channels with the identifiers `alpha`, `beta`, and `stable`.
10 | 
11 | In addition to the default groups, you may choose to create your own group that is configured to use a specific channel, rate-limit and other settings. Groups that you create will have a unique identifier that is a generated UUID or you may provide a custom string.
12 | 
13 | To place a CoreOS machine in one of these groups, you must configure the update settings via cloud-config or a file on disk.
14 | 
15 | ### Join Preconfigured Group
16 | 
17 | Set the value of `server` to the custom address of your installation and append "/v1/update/". Set `group` to one of the default application groups: `alpha`, `beta`, or `stable`.
18 | 
19 | For example, here is what the Alpha group looks like in CoreUpdate:
20 | 
21 | ![CoreUpdate Group](img/coreupdate-group-default.png)
22 | 
23 | Here's the cloud-config to use:
24 | 
25 | ```
26 | #cloud-config
27 | 
28 | coreos:
29 |   update:
30 |     group: alpha
31 |     server: https://customer.update.core-os.net/v1/update/
32 | ```
33 | 
34 | ### Join Custom Group
35 | 
36 | Set the value of `server` to the custom address of your installation and append "/v1/update/". Set `group` to the unique identifier of your application group.
37 | 
38 | For example, here is what "NYC Production" looks like in CoreUpdate:
39 | 
40 | ![CoreUpdate Group](img/coreupdate-group.png)
41 | 
42 | Here's the cloud-config to use:
43 | 
44 | ```
45 | #cloud-config
46 | 
47 | coreos:
48 |   update:
49 |     group: 0a809ab1-c01c-4a6b-8ac8-6b17cb9bae09
50 |     server: https://customer.update.core-os.net/v1/update/
51 | ```
52 | 
53 | More information can be found in the [cloud-config guide](http://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/#coreos).
54 | 
55 | ## Existing Machines
56 | 
57 | To change the update settings of existing machines, edit `/etc/coreos/update.conf` with your favorite editor and provide the `SERVER=` and `GROUP=` values:
58 | 
59 | ```
60 | GROUP=0a809ab1-c01c-4a6b-8ac8-6b17cb9bae09
61 | SERVER=https://customer.update.core-os.net/v1/update/
62 | ```
63 | 
64 | To apply the changes, run:
65 | 
66 | ```
67 | sudo systemctl restart update-engine
68 | ```
69 | 
70 | In addition to `GROUP=` and `SERVER=`, a few other internal values exist, but are set to defaults. You shouldn't have to modify these.
71 | 
72 | `COREOS_RELEASE_APPID`: the CoreOS app ID, `e96281a6-d1af-4bde-9a0a-97b76e56dc57`
73 | 
74 | `COREOS_RELEASE_VERSION`: defaults to the version of CoreOS you're running
75 | 
76 | `COREOS_RELEASE_BOARD`: defaults to `amd64-usr`
77 | 
78 | ## Viewing Machines in CoreUpdate
79 | 
80 | Each machine should check in about 10 minutes after boot and roughly every hour after that. If you'd like to see it sooner, you can force an update check, which will skip any rate-limiting settings that are configured.
81 | 
82 | ### Force Update in Background
83 | 
84 | ```
85 | $ update_engine_client -check_for_update
86 | [0123/220706:INFO:update_engine_client.cc(245)] Initiating update check and install.
87 | ```
88 | 
89 | ### Force Update in Foreground
90 | 
91 | If you want to see what's going on behind the scenes, you can watch the output in the foreground:
92 | 
93 | ```
94 | $ update_engine_client -update
95 | [0123/222449:INFO:update_engine_client.cc(245)] Initiating update check and install.
96 | [0123/222449:INFO:update_engine_client.cc(250)] Waiting for update to complete.
97 | LAST_CHECKED_TIME=0
98 | PROGRESS=0.000000
99 | CURRENT_OP=UPDATE_STATUS_IDLE
100 | NEW_VERSION=0.0.0.0
101 | NEW_SIZE=0
102 | [0123/222454:ERROR:update_engine_client.cc(189)] Update failed.
103 | ```
104 | 
105 | Be aware that the "failed update" means that there isn't a newer version to install.
106 | 
--------------------------------------------------------------------------------
/os/booting-on-niftycloud-JA_JP.md:
--------------------------------------------------------------------------------
1 | # Booting CoreOS on NIFTY Cloud
2 | 
3 | You must install the [NIFTY Cloud CLI][cli-documentation] in advance. These instructions are also [available in English](../).
4 | 
5 | [cli-documentation]: http://cloud.nifty.com/api/cli/
6 | 
7 | ## Cloud-Config
8 | 
9 | CoreOS allows you to configure machine parameters and launch systemd units on startup via cloud-config. Check [here]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config) for the supported features. cloud-config is intended to bring up a cluster in a minimal, useful state and should not be used for configuration that is not shared across multiple hosts. On NIFTY Cloud, the cloud-config can be edited while the server is running and is applied on the next boot.
10 | 
11 | You can set the cloud-config using the [NIFTY Cloud CLI][cli-documentation].
12 | 
13 | The most common cloud-config looks like this:
14 | 
15 | ```yaml
16 | #cloud-config
17 | 
18 | coreos:
19 |   etcd2:
20 |     # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
21 |     # specify the initial size of your cluster with ?size=X
22 |     discovery: https://discovery.etcd.io/<token>
23 |     # multi-region and multi-cloud deployments need to use $public_ipv4
24 |     advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
25 |     initial-advertise-peer-urls: http://$private_ipv4:2380
26 |     # listen on both the official ports and the legacy ports
27 |     # legacy ports can be omitted if your application doesn't depend on them
28 |     listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
29 |     listen-peer-urls: http://$private_ipv4:2380
30 |   units:
31 |     - name: etcd2.service
32 |       command: start
33 |     - name: fleet.service
34 |       command: start
35 | ```
36 | 
37 | The variables `$private_ipv4` and `$public_ipv4` are supported in cloud-config on NIFTY Cloud.
38 | 
39 | ## Choosing a Channel
40 | 
41 | CoreOS is designed to be [automatically updated]({{site.baseurl}}/using-coreos/updates) on a separate schedule per channel. Although we don't recommend it, this feature can be [disabled]({{site.baseurl}}/docs/cluster-management/debugging/prevent-reboot-after-update). Read the [release notes]({{site.baseurl}}/releases) for specific features and bug fixes in each channel.
42 | 
43 | 
44 | ### Alpha Channel
45 | 
46 | The Alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. The current version is CoreOS {{site.alpha-channel}}.
47 | 
48 | Specify $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID, then launch with the NIFTY Cloud CLI:
49 | 
50 |     nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Alpha {{site.alpha-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST
51 | 
52 | ### Beta Channel
53 | 
54 | The Beta channel consists of promoted Alpha releases. The current version is CoreOS {{site.beta-channel}}.
55 | 
56 | Specify $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID, then launch with the NIFTY Cloud CLI:
57 | 
58 |     nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Beta {{site.beta-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST
59 | 
60 | ### Stable Channel
61 | 
62 | Production clusters should use the Stable channel. Each version of CoreOS is verified in the Beta and Alpha channels before being promoted. The current version is CoreOS {{site.stable-channel}}.
63 | 
64 | Specify $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID, then launch with the NIFTY Cloud CLI:
65 | 
66 |     nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Stable {{site.stable-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST
67 | 
68 | ### Adding Servers
69 | 
70 | To add more servers to the cluster, just launch them with the same cloud-config and an appropriate firewall group.
71 | 
72 | ## SSH
73 | 
74 | You can log in with the following command:
75 | 
76 | ```sh
77 | ssh core@<ip-address> -i <path-to-keyfile>
78 | ```
79 | 
80 | ## Using CoreOS
81 | 
82 | Now that you have a machine booted, it is time to play around.
83 | Check out the [CoreOS Quickstart]({{site.baseurl}}/docs/quickstart) guide or dig into [more specific topics]({{site.baseurl}}/docs).
84 | 
--------------------------------------------------------------------------------
/os/customize-etcd-unit.md:
--------------------------------------------------------------------------------
1 | # Customizing the etcd Unit
2 | 
3 | The etcd systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
4 | 
5 | ## Use Client Certificates
6 | 
7 | etcd supports client certificates as a way to secure communication between clients and the leader, as well as internal traffic between etcd peers in the cluster. Configuring certificates for both scenarios is done through environment variables. We can use a systemd drop-in unit to augment the unit that ships with CoreOS.
8 | 
9 | Please follow these [instructions](generate-self-signed-certificates.md) to learn how to create self-signed certificates and private keys.
10 | 
11 | We need to create our drop-in unit in `/etc/systemd/system/etcd2.service.d/`. If you run `systemctl status etcd2` you can see that CoreOS is already generating a few drop-in units for etcd as part of the OEM and cloudinit processes. To ensure that our drop-in runs after these, we name it `30-certificates.conf` and place it in `/etc/systemd/system/etcd2.service.d/`.
12 | 
13 | #### 30-certificates.conf
14 | 
15 | ```ini
16 | [Service]
17 | # Client Env Vars
18 | Environment=ETCD_CA_FILE=/path/to/CA.pem
19 | Environment=ETCD_CERT_FILE=/path/to/server.crt
20 | Environment=ETCD_KEY_FILE=/path/to/server.key
21 | # Peer Env Vars
22 | Environment=ETCD_PEER_CA_FILE=/path/to/CA.pem
23 | Environment=ETCD_PEER_CERT_FILE=/path/to/peers.crt
24 | Environment=ETCD_PEER_KEY_FILE=/path/to/peers.key
25 | ```
26 | 
27 | You'll have to put these files on disk somewhere. To do this on each of your machines, the easiest way is with cloud-config.
28 | 
29 | ### Cloud-Config
30 | 
31 | Cloud-config has a parameter that will place the contents of a file on disk. We're going to use this to add our drop-in unit as well as the certificate files.
32 | 
33 | ```yaml
34 | #cloud-config
35 | 
36 | write_files:
37 |   - path: /etc/systemd/system/etcd2.service.d/30-certificates.conf
38 |     permissions: 0644
39 |     content: |
40 |       [Service]
41 |       # Client Env Vars
42 |       Environment=ETCD_CA_FILE=/path/to/CA.pem
43 |       Environment=ETCD_CERT_FILE=/path/to/server.crt
44 |       Environment=ETCD_KEY_FILE=/path/to/server.key
45 |       # Peer Env Vars
46 |       Environment=ETCD_PEER_CA_FILE=/path/to/CA.pem
47 |       Environment=ETCD_PEER_CERT_FILE=/path/to/peers.crt
48 |       Environment=ETCD_PEER_KEY_FILE=/path/to/peers.key
49 |   - path: /path/to/CA.pem
50 |     permissions: 0644
51 |     content: |
52 |       -----BEGIN CERTIFICATE-----
53 |       MIIFNDCCAx6gAwIBAgIBATALBgkqhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
54 |       ...snip...
55 |       EtHaxYQRy72yZrte6Ypw57xPRB8sw1DIYjr821Lw05DrLuBYcbyclg==
56 |       -----END CERTIFICATE-----
57 |   - path: /path/to/server.crt
58 |     permissions: 0644
59 |     content: |
60 |       -----BEGIN CERTIFICATE-----
61 |       MIIFWTCCA0OgAwIBAgIBAjALBgkqhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
62 |       DgYDVQQKEwdldGNkLWNhMQswCQYDVQQLEwJDQTAeFw0xNDA1MjEyMTQ0MjhaFw0y
63 |       ...snip...
64 | rdmtCVLOyo2wz/UTzvo7UpuxRrnizBHpytE4u0KgifGp1OOKY+1Lx8XSH7jJIaZB 65 | a3m12FMs3AsSt7mzyZk+bH2WjZLrlUXyrvprI40= 66 | -----END CERTIFICATE----- 67 | - path: /path/to/server.key 68 | permissions: 0644 69 | content: | 70 | -----BEGIN RSA PRIVATE KEY----- 71 | Proc-Type: 4,ENCRYPTED 72 | DEK-Info: DES-EDE3-CBC,069abc493cd8bda6 73 | 74 | TBX9mCqvzNMWZN6YQKR2cFxYISFreNk5Q938s5YClnCWz3B6KfwCZtjMlbdqAakj 75 | ...snip... 76 | mgVh2LBerGMbsdsTQ268sDvHKTdD9MDAunZlQIgO2zotARY02MLV/Q5erASYdCxk 77 | -----END RSA PRIVATE KEY----- 78 | - path: /path/to/peers.crt 79 | permissions: 0644 80 | content: | 81 | -----BEGIN CERTIFICATE----- 82 | VQQLEwJDQTAeFw0xNDA1MjEyMTQ0MjhaFw0yMIIFWTCCA0OgAwIBAgIBAjALBgkq 83 | DgYDVQQKEwdldGNkLWNhMQswCQYDhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw 84 | ...snip... 85 | BHpytE4u0KgifGp1OOKY+1Lx8XSH7jJIaZBrdmtCVLOyo2wz/UTzvo7UpuxRrniz 86 | St7mza3m12FMs3AsyZk+bH2WjZLrlUXyrvprI90= 87 | -----END CERTIFICATE----- 88 | - path: /path/to/peers.key 89 | permissions: 0644 90 | content: | 91 | -----BEGIN RSA PRIVATE KEY----- 92 | Proc-Type: 4,ENCRYPTED 93 | DEK-Info: DES-EDE3-CBC,069abc493cd8bda6 94 | 95 | SFreNk5Q938s5YTBX9mCqvzNMWZN6YQKR2cFxYIClnCWz3B6KfwCZtjMlbdqAakj 96 | ...snip... 97 | DvHKTdD9MDAunZlQIgO2zotmgVh2LBerGMbsdsTQ268sARY02MLV/Q5erASYdCxk 98 | -----END RSA PRIVATE KEY----- 99 | ``` 100 | -------------------------------------------------------------------------------- /ignition/what-is-ignition.md: -------------------------------------------------------------------------------- 1 | # What is Ignition? # 2 | 3 | Ignition is a new provisioning utility designed specifically for CoreOS. At the 4 | most basic level, it is a tool for manipulating disks during early boot. 5 | This includes partitioning disks, formatting partitions, writing files (regular 6 | files, systemd units, networkd units, etc.), and configuring users. On first 7 | boot, Ignition reads its configuration from a source-of-truth (remote URL, 8 | network metadata service, hypervisor bridge, etc.) and applies the configuration. 9 | 10 | A [series of example configs][examples] is provided for reference. 11 | 12 | [examples]: examples.md 13 | 14 | ## Ignition vs coreos-cloudinit ## 15 | 16 | Ignition solves many of the same problems as [coreos-cloudinit][cloudinit] but 17 | in a simpler, more predictable, and more flexible manner. This is achieved with 18 | two major changes: Ignition only runs once and Ignition does not handle 19 | variable substitution. Ignition has also fixed a number of the pain points with 20 | regard to configuration. 21 | 22 | Instead of YAML, Ignition uses JSON for its configuration format. JSON's typing 23 | immediately eliminates problems like "off" being rewritten as "false", the 24 | "#cloud-config" header being stripped because comments *shouldn't* have 25 | meaning, and confusion around whether those file permissions were written in 26 | octal or decimal. Ignition's configuration is also versioned, which allows it to 27 | be improved in the future without having to worry as much about maintaining 28 | backward compatibility. 29 | 30 | [cloudinit]: https://github.com/coreos/coreos-cloudinit 31 | 32 | ### Ignition Only Runs Once ### 33 | 34 | Even though Ignition only runs once, it packs a powerful punch. Because 35 | Ignition runs so early in the boot process (in the initramfs, to be exact), it 36 | is able to repartition disks, format filesystems, create users, and write files 37 | all before the userspace has begun booting.
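To make this concrete, here is a minimal sketch of the kind of JSON config Ignition consumes. The exact schema depends on the config version in use, so treat the field names below as assumptions and consult the [example configs][examples] for the authoritative format:

```json
{
  "ignitionVersion": 1,
  "systemd": {
    "units": [
      {
        "name": "example.service",
        "enable": true,
        "contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
      }
    ]
  }
}
```

Because the format is JSON, the types of `enable` (a boolean) and the unit contents (a string) are unambiguous, which is exactly the property described above.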
38 | 39 | A result of Ignition running so early is that the issue of 40 | [configuring the network][network config] falls away; the network config is 41 | written early enough for networkd to read when it first starts. It also means 42 | systemd services are already written to disk by the time systemd starts. This 43 | results in a simple startup, a faster startup, and the ability to accurately 44 | inspect the unit dependency graphs. 45 | 46 | [network config]: network-configuration.md 47 | 48 | ### No Variable Substitution ### 49 | 50 | Given that Ignition only runs once, it wouldn't make much sense for it to 51 | incorporate dynamic data (e.g. floating IP addresses, compute region, etc.). 52 | This is partly why there is no support for variable substitution. 53 | 54 | Instead of dynamic data, the proper approach is to use Ignition to write static 55 | files and leverage systemd's environment variable expansion to insert dynamic 56 | data. The Ignition config should install a service which fetches the necessary 57 | runtime data and then any services which need this data (e.g. etcd, fleet) can 58 | depend on that aforementioned service and source in its output. The result is 59 | that the data is only collected if and when it is needed. For supported 60 | platforms, CoreOS provides a small utility (`coreos-metadata.service`) to help 61 | fetch this data. 62 | 63 | The lack of variable substitution in Ignition has an added benefit of leveling 64 | the playing field when it comes to compute providers. No longer is the user's 65 | experience crippled because the metadata for their platform isn't supported. It 66 | is possible to write a [custom metadata agent][custom agent] to fetch the 67 | necessary data. 68 | 69 | [custom agent]: examples.md#custom-metadata-agent 70 | 71 | ## Providing Ignition a Config ## 72 | 73 | Ignition can read its config from a number of different locations, although 74 | only one can be used at a time. When running CoreOS on the supported cloud 75 | providers, Ignition will read its config from the instance's userdata. This 76 | means that if Ignition is being used, it will not be possible to use other 77 | tools which also use this userdata (e.g. coreos-cloudinit). Bare metal 78 | installations and PXE boots can use the kernel boot parameters to point 79 | Ignition at the config. 80 | 81 | ## Where is Ignition Supported? ## 82 | 83 | The [full list of supported platforms][supported platforms] is provided and 84 | will be kept up-to-date as development progresses. 85 | 86 | Ignition is under active development. Expect to see support for more images in 87 | the coming months. 88 | 89 | [supported platforms]: https://github.com/coreos/ignition/blob/master/doc/supported-platforms.md 90 | -------------------------------------------------------------------------------- /kubernetes/pods.md: -------------------------------------------------------------------------------- 1 | # Overview of a Pod 2 | 3 | A Kubernetes pod is a group of containers that are deployed together on the same host. If you frequently deploy single containers, you can generally replace the word "pod" with "container" and accurately understand the concept. 4 | 5 | Pods operate at one level higher than individual containers because it's very common to have a group of containers work together to produce an artifact or process a set of work. 6 | 7 | For example, consider this pair of containers: a caching server and a cache "warmer".
You could build these two functions into a single container, but by splitting them into separate containers, each can be tailored to its specific task and shared between different projects/teams. 8 | 9 | Another example is an app server pod that contains three separate containers: the app server itself, a monitoring adapter, and a logging adapter. The logging and monitoring containers could be shared across all projects in your organization. These adapters could provide an abstraction between different cloud monitoring vendors or other destinations. 10 | 11 | Any project requiring logging or monitoring can include these containers in their pods, but not have to worry about the specific logic. All they need to do is send logs from the app server to a known location within the pod. How does that work? Let's walk through it. 12 | 13 | ## Shared Namespaces, Volumes and Secrets 14 | 15 | By design, all of the containers in a pod are connected to facilitate intra-pod communication, ease of management and flexibility for application architectures. If you've ever fought with connecting two raw containers together, the concept of a pod will save you time and is much more powerful. 16 | 17 | ### Shared Network 18 | 19 | All containers share the same network namespace & port space. Communication over localhost is encouraged. Each container can also communicate with any other pod or service within the cluster. 20 | 21 | ### Shared Volumes 22 | 23 | Volumes attached to the pod may be mounted inside of one or more containers. In the logging example above, a volume named `logs` is attached to the pod. The app server would log output to `logs` mounted at `/volumes/logs` and the logging adapter would have a read-only mount to the same volume. If either of these containers needed to be restarted, the log data is preserved instead of being lost. 24 | 25 | There are many types of volumes supported by Kubernetes, including native support for mounting GitHub repos, network disks (EBS, NFS, etc), local machine disks, and a few special volume types, like secrets. 26 | 27 | Here's an example pod: 28 | 29 | ``` 30 | apiVersion: v1 31 | kind: Pod 32 | metadata: 33 |   name: example-app 34 |   labels: 35 |     app: example-app 36 |     version: v1 37 |     role: backend 38 | spec: 39 |   containers: 40 |     - name: java 41 |       image: companyname/java 42 |       ports: 43 |         - containerPort: 443 44 |       volumeMounts: 45 |         - mountPath: /volumes/logs 46 |           name: logs 47 |     - name: logger 48 |       image: companyname/logger:v1.2.3 49 |       ports: 50 |         - containerPort: 9999 51 |       volumeMounts: 52 |         - mountPath: /logs 53 |           name: logs 54 |     - name: monitoring 55 |       image: companyname/monitoring:v4.5.6 56 |       ports: 57 |         - containerPort: 1234 58 |   # the pod-level volume shared by the containers above (emptyDir is one common choice) 59 |   volumes: 60 |     - name: logs 61 |       emptyDir: {} 62 | ``` 63 | 64 | ### Resources 65 | 66 | Resource limits such as CPU and RAM are shared between all containers in the pod. 67 | 68 | ## Creating Pods 69 | 70 | Pods are considered ephemeral "cattle": they won't survive a machine failure and may be terminated for machine maintenance. For high resiliency, pods are managed by a [replication controller][controller-overview], which creates and destroys replicas of pods as needed. Individual pods can also be created outside of a replication controller, but this isn't a common practice. 71 | 72 | [Kubernetes services][service-overview] should always be used to expose pod(s) to the rest of the cluster in order to provide the proper level of abstraction since individual pods will come and go. 73 | 74 | Replication controllers and services use the *pod labels* to select a group of pods that they interact with.
Your pods will typically have labels for the application name, role, environment, version, etc. Each of these can be combined in order to select all pods with a certain role, a certain application, or a more complex query. The label system is extremely flexible by design and experimentation is encouraged to establish the practices that work best for your company or team. 75 | 76 |
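As a concrete sketch of label selection, using the labels from the example pod above, a `kubectl` query might look like:

```sh
# List every pod carrying both the app and role labels from the example above.
kubectl get pods -l app=example-app,role=backend

# A broader query: all backend pods, regardless of application.
kubectl get pods -l role=backend
```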
77 | #### More Information 78 | 79 | Are you familiar with replication controllers and services? See the [Replication Controller overview][controller-overview] and the [Services overview][service-overview]. 80 |
81 | 82 | [controller-overview]: replication-controller.md 83 | [service-overview]: services.md -------------------------------------------------------------------------------- /os/booting-on-packet.md: -------------------------------------------------------------------------------- 1 | # Running CoreOS on Packet 2 | 3 | Packet is a bare metal cloud hosting provider. CoreOS is installable as one of the default operating system options. You can deploy CoreOS servers via the Packet portal or API. 4 | 5 | ## Channels 6 | Currently the Packet OEM is making its way through the 3 CoreOS channels. As it becomes available in a new channel, it will become available on Packet. There are no separate per-channel instructions beyond the normal CoreOS instructions. 7 | 8 | ## Deployment Instructions 9 | The first step in deploying any devices on Packet is to create an account and decide if you'd like to deploy via our portal or API. The portal is appropriate for small clusters of machines that won't change frequently. If you'll be deploying a lot of machines, or expect your workload to change frequently, it is much more efficient to use the API. You can generate an API token through the portal once you've set up an account and payment method. Create an account here: [Packet Account Registration](https://www.packet.net/promo/coreos/) 10 | 11 | ### Projects 12 | Packet has a concept of 'projects': a project groups machines together and defines several other aspects of the service. A project defines who on the team has access to manage the machines in your account. Projects also define your private network; all machines in a given project will automatically share backend network connectivity. The SSH keys of all team members associated with a project will be installed to all newly provisioned machines in a project. All servers need to be in a project, even if there is only one server in that project. 13 | 14 | ### Portal Instructions 15 | Once logged into the portal, you can click the 'Deploy' button, choose CoreOS from the menu of operating systems, and choose which project you want the server to be deployed in. If you choose to enter custom cloud-config, you can click the 'manage' link and add that as well. The SSH key that you associate with your account and any other team member's keys that are on the project will be added to your CoreOS machine once it is provisioned. 16 | 17 | ### API Instructions 18 | If you elect to use the API to provision machines on Packet, you should consider using [one of our language libraries](https://www.packet.net/dev/) to code against. As an example, this is how you would launch a single Type 1 machine in a curl command. [Packet API Documentation](https://www.packet.net/dev/api/) 19 | 20 | ```bash 21 | # Replace the items in angle brackets (<...>) with the appropriate values. 22 | 23 | curl -X POST \ 24 | -H 'Content-Type: application/json' \ 25 | -H 'Accept: application/json' \ 26 | -H 'X-Auth-Token: <api token>' \ 27 | -d '{"hostname": "<hostname>", "plan": "baremetal_1", "facility": "ewr1", "operating_system": "coreos_alpha", "userdata": "<userdata string>"}' \ 28 | https://api.packet.net/projects/<project id>/devices 29 | ``` 30 | 31 | Double quotes in the `<userdata string>` value must be escaped such that the request body is valid JSON. See the Cloud-Config section below for more information about accepted forms of userdata. 32 | 33 | ### Cloud-Config 34 | 35 | CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config.
Jump over to the [docs to learn about the supported features]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config). Cloud-config is intended to bring up a cluster of machines into a minimal useful state and ideally shouldn't be used to configure anything that isn't standard across many hosts. Once a machine is created on Packet, the cloud-config cannot be modified. This example can be used to spin up a minimal cluster. 36 | 37 | ```yaml 38 | #cloud-config 39 | 40 | coreos: 41 |   etcd2: 42 |     # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3 43 |     # specify the initial size of your cluster with ?size=X 44 |     discovery: https://discovery.etcd.io/<token> 45 |     # multi-region and multi-cloud deployments need to use $public_ipv4 46 |     advertise-client-urls: http://$private_ipv4:2379 47 |     initial-advertise-peer-urls: http://$private_ipv4:2380 48 |     listen-client-urls: http://0.0.0.0:2379 49 |     listen-peer-urls: http://$private_ipv4:2380 50 |   units: 51 |     - name: etcd2.service 52 |       command: start 53 |     - name: fleet.service 54 |       command: start 55 | ``` 56 | 57 | ## IP Addresses 58 | 59 | The `$private_ipv4`, `$public_ipv4`, and `$public_ipv6` variables are fully supported in cloud-config on Packet. Packet is fully IPv6 compliant and we encourage you to utilize IPv6 for public connectivity with your running containers. Make sure to read up on [IPv6 and Docker](https://docs.docker.com/articles/networking/#ipv6) if you choose to take advantage of this functionality. 60 | 61 | ## Using CoreOS 62 | 63 | Now that you have a machine booted, it is time to play around. 64 | Check out the [CoreOS Quickstart]({{site.baseurl}}/docs/quickstart) guide or dig into [more specific topics]({{site.baseurl}}/docs). 65 | -------------------------------------------------------------------------------- /os/booting-on-ecs.md: -------------------------------------------------------------------------------- 1 | # Running CoreOS with AWS EC2 Container Service 2 | 3 | [Amazon EC2 Container Service (ECS)](http://aws.amazon.com/ecs/) is a container management service that provides a set of APIs for scheduling container workloads across EC2 clusters. It supports CoreOS with Docker containers. 4 | 5 | Your CoreOS machines communicate with ECS via an agent. The agent interacts with Docker to start new containers and gather information about running containers. 6 | 7 | ## Set Up A New Cluster 8 | 9 | When booting your [CoreOS Machines on EC2]({{site.baseurl}}/docs/running-coreos/cloud-providers/ec2), specify that the ECS agent is started via [cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config). 10 | 11 | Be sure to change `ECS_CLUSTER` to the cluster name you've configured via the ECS CLI or leave it empty for the default.
Here's a full cloud-config example: 12 | 13 | ```yaml 14 | #cloud-config 15 | 16 | coreos: 17 | units: 18 | - 19 | name: amazon-ecs-agent.service 20 | command: start 21 | runtime: true 22 | content: | 23 | [Unit] 24 | Description=AWS ECS Agent 25 | Documentation=https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ 26 | Requires=docker.socket 27 | After=docker.socket 28 | 29 | [Service] 30 | Environment=ECS_CLUSTER=your_cluster_name 31 | Environment=ECS_LOGLEVEL=info 32 | Environment=ECS_VERSION=latest 33 | Restart=on-failure 34 | RestartSec=30 35 | RestartPreventExitStatus=5 36 | SyslogIdentifier=ecs-agent 37 | ExecStartPre=-/bin/mkdir -p /var/log/ecs /var/ecs-data /etc/ecs 38 | ExecStartPre=-/usr/bin/touch /etc/ecs/ecs.config 39 | ExecStartPre=-/usr/bin/docker kill ecs-agent 40 | ExecStartPre=-/usr/bin/docker rm ecs-agent 41 | ExecStartPre=/usr/bin/docker pull amazon/amazon-ecs-agent:${ECS_VERSION} 42 | ExecStart=/usr/bin/docker run --name ecs-agent \ 43 | --env-file=/etc/ecs/ecs.config \ 44 | --volume=/var/run/docker.sock:/var/run/docker.sock \ 45 | --volume=/var/log/ecs:/log \ 46 | --volume=/var/ecs-data:/data \ 47 | --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \ 48 | --volume=/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \ 49 | --publish=127.0.0.1:51678:51678 \ 50 | --env=ECS_LOGFILE=/log/ecs-agent.log \ 51 | --env=ECS_LOGLEVEL=${ECS_LOGLEVEL} \ 52 | --env=ECS_DATADIR=/data \ 53 | --env=ECS_CLUSTER=${ECS_CLUSTER} \ 54 | amazon/amazon-ecs-agent:${ECS_VERSION} 55 | ``` 56 | 57 | The example above pulls the latest official Amazon ECS agent container from the Docker Hub when the machine starts. If you ever need to update the agent, it’s as simple as restarting the amazon-ecs-agent service or the CoreOS machine. 58 | 59 | If you want to configure SSH keys in order to log in, mount disks or configure other options, see the [full cloud-config documentation]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config). 60 | 61 | ## Connect ECS to an Existing Cluster 62 | 63 | Connecting an existing cluster to ECS is simple with [fleet]({{site.baseurl}}/docs/launching-containers/launching/launching-containers-fleet) — the agent can be run as a global unit. The unit looks similar to the example above: 64 | 65 | #### amazon-ecs-agent.service 66 | 67 | ```ini 68 | [Unit] 69 | Description=Amazon ECS Agent 70 | After=docker.service 71 | Requires=docker.service 72 | Requires=network-online.target 73 | After=network-online.target 74 | 75 | [Service] 76 | Environment=ECS_CLUSTER=your_cluster_name 77 | Environment=ECS_LOGLEVEL=warn 78 | ExecStartPre=-/usr/bin/docker kill ecs-agent 79 | ExecStartPre=-/usr/bin/docker rm ecs-agent 80 | ExecStartPre=/usr/bin/docker pull amazon/amazon-ecs-agent 81 | ExecStart=/usr/bin/docker run --name ecs-agent --env=ECS_CLUSTER=${ECS_CLUSTER} --env=ECS_LOGLEVEL=${ECS_LOGLEVEL} --publish=127.0.0.1:51678:51678 --volume=/var/run/docker.sock:/var/run/docker.sock amazon/amazon-ecs-agent 82 | ExecStop=/usr/bin/docker stop ecs-agent 83 | 84 | [X-Fleet] 85 | Global=true 86 | ``` 87 | 88 | Be sure to change `ECS_CLUSTER` to the cluster name you've configured in the AWS console or leave it empty for the default. 89 | 90 | To run this unit on each machine, all you have to do is submit it to the cluster: 91 | 92 | ```sh 93 | $ fleetctl start amazon-ecs-agent.service 94 | Triggered global unit amazon-ecs-agent.service start 95 | ``` 96 | 97 | You should see all of your machines show up in the ECS CLI output. 
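As a quick check, assuming you have the AWS CLI installed and configured, listing the registered container instances should show one entry per connected agent (the cluster name below is a placeholder for your own):

```sh
# Each connected ECS agent registers a container instance in the cluster.
aws ecs list-container-instances --cluster your_cluster_name
```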
98 | 99 | For more information on using ECS, check out the [official Amazon documentation](http://aws.amazon.com/documentation/ecs/). 100 | -------------------------------------------------------------------------------- /coreupdate/update-protocol.md: -------------------------------------------------------------------------------- 1 | # Omaha 2 | 3 | The Omaha protocol is the specification that the update service uses to communicate with updaters running in a CoreOS cluster. The protocol is fairly simple: it specifies sending HTTP POSTs with XML data bodies for various events that happen during the execution of an update. 4 | 5 | ## Update Request 6 | 7 | The update request sends machine metadata and a list of applications that it is responsible for. In most cases, each updater is responsible for a single package. Here's a sketch of a typical request; the attribute values shown are illustrative: 8 | 9 | ``` 10 | <request protocol="3.0"> 11 |   <os platform="CoreOS" version="..."></os> 12 |   <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="1.0.0" track="beta" bootid="{unique-instance-id}"> 13 |     <updatecheck></updatecheck> 14 |   </app> 15 | </request> 16 | ``` 17 | 18 | ### Application Section 19 | 20 | The app section is where the action happens. You can submit multiple applications or application instances in one request, but this isn't standard. 21 | 22 | | Parameter | Description | 23 | |-----------|-------------| 24 | | appid | Matches the id of the group that this instance belongs to in the update service. | 25 | | version | The current semantic version number of the application code. | 26 | | track | The channel that the application is requesting. | 27 | | bootid | The unique identifier assigned to this instance. | 28 | 29 | ## Already Up to Date 30 | 31 | If the application instance is already running the latest version, the response will be short, along these lines: 32 | 33 | ``` 34 | <response protocol="3.0"> 35 |   <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" status="ok"> 36 |     <updatecheck status="noupdate"></updatecheck> 37 |   </app> 38 | </response> 39 | ``` 40 | 41 | As you can see, the response indicates that no update was required for the provided group id and version. 42 | 43 | ## Update Required 44 | 45 | If the application is not up to date, the response contains all of the information needed to execute the update. A sketch: 46 | 47 | ``` 48 | <response protocol="3.0"> 49 |   <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" status="ok"> 50 |     <updatecheck status="ok"> 51 |       <urls> 52 |         <url codebase="https://example.com/packages/1.1.0/"></url> 53 |       </urls> 54 |       <manifest version="1.1.0"> 55 |         <packages> 56 |           <package name="update.gz" size="..." required="true"></package> 57 |         </packages> 58 |         <actions> 59 |           <action event="postinstall" sha256="..." needsadmin="false"></action> 60 |         </actions> 61 |       </manifest> 62 |     </updatecheck> 63 |   </app> 64 | </response> 65 | ``` 66 | 67 | The most important parts of the response are the `codebase`, which points to the location of the package, and the `sha256` which should be checked to make sure the package hasn't been tampered with. 68 | 69 | ## Report Progress, Errors and Completion 70 | 71 | Events are submitted to the update service as the updater passes certain milestones such as starting the download, installing the update and confirming that the update was complete and successful. Events are specified in numerical codes corresponding to the event initiated and the resulting state. You can find a [full list of the event codes](https://code.google.com/p/omaha/wiki/ServerProtocol#event_Element) in Google's documentation. The CoreOS update service implements a subset of these events: 72 | 73 | | Event Description | Event Type | Event Result | 74 | |-------------------|------------|--------------| 75 | | Downloading latest version. | `13` | `1` | 76 | | Update package arrived successfully. | `14` | `1` | 77 | | Updater has processed and applied package. | `3` | `1` | 78 | | Install success. Update completion prevented by instance. | `800` | `1` | 79 | | Instances upgraded to current channel version. | `3` | `2` | 80 | | Instance reported an error during an update step. | `3` | `0` | 81 | 82 | For example, a `3:2` represents a successful update and a successful reboot.
Here's the request and response, again with illustrative values: 83 | 84 | ### Request 85 | 86 | ``` 87 | <request protocol="3.0"> 88 |   <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="1.1.0" track="beta" bootid="{unique-instance-id}"> 89 |     <event eventtype="3" eventresult="2"></event> 90 |   </app> 91 | </request> 92 | ``` 93 | 94 | ### Response 95 | 96 | The protocol dictates that each event should be acknowledged even if no data needs to be returned: 97 | 98 | ``` 99 | <response protocol="3.0"> 100 |   <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" status="ok"> 101 |     <event status="ok"></event> 102 |   </app> 103 | </response> 104 | ``` 105 | 106 | ## Further Reading 107 | 108 | You can read more about the [Omaha tech specs](https://code.google.com/p/omaha/wiki/ServerProtocol) or visit the [project homepage](https://code.google.com/p/omaha/). 109 | -------------------------------------------------------------------------------- /os/reading-the-system-log.md: -------------------------------------------------------------------------------- 1 | # Reading the System Log 2 | 3 | `journalctl` is your interface into a single machine's journal/logging and `fleetctl journal` will fetch the journal for containers started with [fleet](https://coreos.com/using-coreos/clustering/). All service files and Docker containers insert data into the systemd journal. There are a few helpful commands to read the journal: 4 | 5 | ## Read the Entire Journal 6 | 7 | ```sh 8 | $ journalctl 9 | 10 | -- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:28:45 UTC. -- 11 | Dec 22 00:10:21 localhost systemd-journal[33]: Runtime journal is using 184.0K (max 49.9M, leaving 74.8M of free 499.0M, current limit 49.9M). 12 | Dec 22 00:10:21 localhost systemd-journal[33]: Runtime journal is using 188.0K (max 49.9M, leaving 74.8M of free 499.0M, current limit 49.9M). 13 | Dec 22 00:10:21 localhost kernel: Initializing cgroup subsys cpuset 14 | Dec 22 00:10:21 localhost kernel: Initializing cgroup subsys cpu 15 | Dec 22 00:10:21 localhost kernel: Initializing cgroup subsys cpuacct 16 | Dec 22 00:10:21 localhost kernel: Linux version 3.11.7+ (buildbot@10.10.10.10) (gcc version 4.6.3 (Gentoo Hardened 4.6.3 p1.13, pie-0.5.2) 17 | ... 18 | 1000s more lines 19 | ``` 20 | ## Read Entries for a Specific Service 21 | 22 | Read entries generated by a specific unit: 23 | 24 | ```sh 25 | $ journalctl -u apache.service 26 | 27 | -- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:32:52 UTC. -- 28 | Dec 22 12:32:39 localhost systemd[1]: Starting Apache Service... 29 | Dec 22 12:32:39 localhost systemd[1]: Started Apache Service. 30 | Dec 22 12:32:39 localhost docker[9772]: /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted) 31 | Dec 22 12:32:39 localhost docker[9772]: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6 for ServerName 32 | ``` 33 | 34 | Using the `--tunnel` flag ([docs](https://github.com/coreos/fleet/blob/master/Documentation/using-the-client.md#from-an-external-host)), you can remotely read the journal for a specific unit started via [fleet](https://coreos.com/using-coreos/clustering/). This command will figure out which machine the unit is currently running on, fetch the journal and output it: 36 | 37 | ```sh 38 | $ fleetctl --tunnel 10.10.10.10 journal apache.service 39 | 40 | -- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:32:52 UTC. -- 41 | Dec 22 12:32:39 localhost systemd[1]: Starting Apache Service... Dec 22 12:32:39 localhost systemd[1]: Started Apache Service.
42 | Dec 22 12:32:39 localhost docker[9772]: /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted) 43 | Dec 22 12:32:39 localhost docker[9772]: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6 for ServerName 44 | ``` 45 | 46 | ## Read Entries Since Boot 47 | 48 | Reading just the entries since the last boot is an easy way to troubleshoot services that are failing to start properly: 49 | 50 | ```sh 51 | journalctl --boot 52 | ``` 53 | 54 | ## Tail the Journal 55 | 56 | You can tail the entire journal or just a specific service: 57 | 58 | ```sh 59 | journalctl -f 60 | ``` 61 | 62 | ```sh 63 | journalctl -u apache.service -f 64 | ``` 65 | 66 | ## Read Entries with Line Wrapping 67 | 68 | By default `journalctl` passes `FRSXMK` command line options to [`less`](http://linux.die.net/man/1/less). You can override these options by setting a custom [`SYSTEMD_LESS`](http://www.freedesktop.org/software/systemd/man/journalctl.html#%24SYSTEMD_LESS) environment variable with the `S` option omitted: 69 | 70 | ```sh 71 | SYSTEMD_LESS=FRXMK journalctl 72 | ``` 73 | 74 | Read logs without the pager: 75 | 76 | ```sh 77 | journalctl --no-pager 78 | ``` 79 | 80 | # Debugging journald 81 | 82 | If you face problems with journald, you can enable debug mode by following the instructions below. 83 | 84 | ## Enable Debugging Manually 85 | 86 | ```sh 87 | mkdir -p /etc/systemd/system/systemd-journald.service.d/ 88 | ``` 89 | 90 | Create a [Drop-In][drop-ins] `/etc/systemd/system/systemd-journald.service.d/10-debug.conf` with the following content: 91 | 92 | ```ini 93 | [Service] 94 | Environment=SYSTEMD_LOG_LEVEL=debug 95 | ``` 96 | 97 | Then restart the `systemd-journald` service: 98 | 99 | ```sh 100 | systemctl daemon-reload 101 | systemctl restart systemd-journald 102 | dmesg | grep systemd-journald 103 | ``` 104 | 105 | ## Enable debugging through Cloud-Config 106 | 107 | Define the [Drop-In][drop-ins] in [Cloud-Config][cloud-config]: 108 | 109 | ```yaml 110 | #cloud-config 111 | coreos: 112 |   units: 113 |     - name: systemd-journald.service 114 |       drop-ins: 115 |         - name: 10-debug.conf 116 |           content: | 117 |             [Service] 118 |             Environment=SYSTEMD_LOG_LEVEL=debug 119 |       command: restart 120 | ``` 121 | 122 | Then run `coreos-cloudinit` or reboot your CoreOS host to apply the changes. 123 | 124 | [drop-ins]: using-systemd-drop-in-units.md 125 | [cloud-config]: https://github.com/coreos/coreos-cloudinit/blob/master/Documentation/cloud-config.md 126 | 127 | #### More Information 128 | Getting Started with systemd 129 | Network Configuration with networkd 130 | -------------------------------------------------------------------------------- /os/using-systemd-drop-in-units.md: -------------------------------------------------------------------------------- 1 | # Using systemd Drop-In Units 2 | 3 | There are two methods of overriding default CoreOS settings in unit files. The first is to copy the unit file from `/usr/lib64/systemd/system` to `/etc/systemd/system` and modify the chosen settings. The second is to create a directory named `unit.d` (for example, `fleet.service.d`) within `/etc/systemd/system` and place a drop-in file `name.conf` there that only changes the specific settings one is interested in. Note that multiple such drop-in files are read if present. 4 | 5 | The advantage of the first method is that one easily overrides the complete unit; the default CoreOS unit is no longer parsed at all.
It has the disadvantage that improvements to the unit file supplied by CoreOS are not automatically incorporated on updates. 6 | 7 | The advantage of the second method is that one only overrides the settings one specifically wants, and updates to the original CoreOS unit automatically apply. This has the disadvantage that some future CoreOS updates might be incompatible with the local changes, but the risk is much lower. 8 | 9 | Note that for drop-in files, if one wants to remove entries from a setting that is parsed as a list (and is not a dependency), such as `ConditionPathExists=` (or e.g. `ExecStart=` in service units), one needs to first clear the list before re-adding all entries except the one that is to be removed. See below for an example. 10 | 11 | This also applies to user instances of systemd, but with different locations for the unit files. See the section on unit load paths in the [official systemd documentation](http://www.freedesktop.org/software/systemd/man/systemd.unit.html) for further details. 12 | 13 | ## Example: Customizing fleet.service 14 | 15 | Let's review the `/usr/lib64/systemd/system/fleet.service` unit (you can find it using `systemctl list-units | grep fleet`), which has the following contents: 16 | 17 | ``` 18 | [Unit] 19 | Description=fleet daemon 20 | 21 | After=etcd.service 22 | After=etcd2.service 23 | 24 | Wants=fleet.socket 25 | After=fleet.socket 26 | 27 | [Service] 28 | ExecStart=/usr/bin/fleetd 29 | Restart=always 30 | RestartSec=10s 31 | ``` 32 | 33 | Let's walk through increasing the `RestartSec` parameter via both methods: 34 | 35 | ### Override Only a Specific Option 36 | 37 | You can create a drop-in file `/etc/systemd/system/fleet.service.d/10-restart_60s.conf` with the following contents: 38 | 39 | ``` 40 | [Service] 41 | RestartSec=60s 42 | ``` 43 | 44 | Then reload systemd, scanning for new or changed units: 45 | 46 | ```sh 47 | systemctl daemon-reload 48 | ``` 49 | 50 | Then restart the modified service if necessary (in our example we changed only the `RestartSec` option, but if you change environment variables, `ExecStart` or other run options, you have to restart the service): 51 | 52 | ```sh 53 | systemctl restart fleet.service 54 | ``` 55 | 56 | Here is how that could be implemented within `cloud-config`: 57 | 58 | ```yaml 59 | #cloud-config 60 | coreos: 61 |   units: 62 |     - name: fleet.service 63 |       drop-ins: 64 |         - name: 10-restart_60s.conf 65 |           content: | 66 |             [Service] 67 |             RestartSec=60s 68 |       command: start 69 | ``` 70 | 71 | This change is small and targeted. It is the easiest way to tweak a unit's parameters. 72 | 73 | ### Override the Whole Unit File 74 | 75 | Another way is to override the whole systemd unit.
Copy the default unit file `/usr/lib64/systemd/system/fleet.service` to `/etc/systemd/system/fleet.service` and change the chosen settings: 77 | 78 | ``` 79 | [Unit] 80 | Description=fleet daemon 81 | 82 | After=etcd.service 83 | After=etcd2.service 84 | 85 | Wants=fleet.socket 86 | After=fleet.socket 87 | 88 | [Service] 89 | ExecStart=/usr/bin/fleetd 90 | Restart=always 91 | RestartSec=60s 92 | ``` 93 | 94 | `cloud-config` example: 95 | 96 | ```yaml 97 | #cloud-config 98 | 99 | coreos: 100 |   units: 101 |     - name: fleet.service 102 |       command: start 103 |       content: | 104 |         [Unit] 105 |         Description=fleet daemon 106 | 107 |         After=etcd.service 108 |         After=etcd2.service 109 | 110 |         Wants=fleet.socket 111 |         After=fleet.socket 112 | 113 |         [Service] 114 |         ExecStart=/usr/bin/fleetd 115 |         Restart=always 116 |         RestartSec=60s 117 | ``` 118 | 119 | ### List Drop-Ins 120 | 121 | To see all runtime drop-in changes for system units, run the command below: 122 | 123 | ```sh 124 | systemd-delta --type=extended 125 | ``` 126 | 127 | ## More systemd Examples 128 | 129 | For more real-world systemd examples, check out these documents: 130 | 131 | [Customizing Docker]({{site.baseurl}}/os/docs/latest/customizing-docker.html#using-a-dockercfg-file-for-authentication) 132 | [Customizing the SSH Daemon]({{site.baseurl}}/os/docs/latest/customizing-sshd.html#changing-the-sshd-port) 133 | [Using Environment Variables In systemd Units]({{site.baseurl}}/os/docs/latest/using-environment-variables-in-systemd-units.html) 134 | 135 | ## More Information 136 | systemd.service Docs 137 | systemd.unit Docs 138 | systemd.target Docs 139 | -------------------------------------------------------------------------------- /os/booting-on-niftycloud.md: -------------------------------------------------------------------------------- 1 | # Running CoreOS on NIFTY Cloud 2 | 3 | NIFTY Cloud is a Japanese cloud computing provider. These instructions are also [available in Japanese](JA_JP/). Before proceeding, you will need to [install the NIFTY Cloud CLI][cli-documentation]. 4 | 5 | [cli-documentation]: https://translate.google.com/translate?hl=en&sl=ja&tl=en&u=http%3A%2F%2Fcloud.nifty.com%2Fapi%2Fcli%2F 6 | 7 | ## Cloud-Config 8 | 9 | CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config). Cloud-config is intended to bring up a cluster of machines into a minimal useful state and ideally shouldn't be used to configure anything that isn't standard across many hosts. On NIFTY Cloud, the cloud-config can be modified while the instance is running and will be processed the next time the machine boots. 10 | 11 | You can provide cloud-config to CoreOS via the [NIFTY Cloud CLI][cli-documentation].
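The launch commands in the channel sections below pass the file to the instance with the CLI's `-f` flag. As a minimal sketch (here `$IMAGE_ID` stands in for an image id returned by `nifty-describe-images`):

```sh
# Boot an instance with cloud-config.yml supplied as userdata.
nifty-run-instances $IMAGE_ID --key $SSH_KEY_ID -f cloud-config.yml -q POST
```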
12 | 13 | The most common cloud-config for NIFTY Cloud looks like: 14 | 15 | ```yaml 16 | #cloud-config 17 | 18 | coreos: 19 |   etcd2: 20 |     # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3 21 |     # specify the initial size of your cluster with ?size=X 22 |     discovery: https://discovery.etcd.io/<token> 23 |     # multi-region and multi-cloud deployments need to use $public_ipv4 24 |     advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 25 |     initial-advertise-peer-urls: http://$private_ipv4:2380 26 |     # listen on both the official ports and the legacy ports 27 |     # legacy ports can be omitted if your application doesn't depend on them 28 |     listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 29 |     listen-peer-urls: http://$private_ipv4:2380 30 |   units: 31 |     - name: etcd2.service 32 |       command: start 33 |     - name: fleet.service 34 |       command: start 35 | ``` 36 | 37 | The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on NIFTY Cloud. 38 | 39 | ## Choosing a Channel 40 | 41 | CoreOS is designed to be [updated automatically]({{site.baseurl}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.baseurl}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.baseurl}}/releases) for specific features and bug fixes. 42 | 43 |
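A machine's channel is selected by the update group configured on the host. As a brief sketch of the usual CoreOS layout (see the switching-channels guide for the full procedure):

```ini
# /etc/coreos/update.conf: the GROUP value usually matches the channel name
GROUP=beta
```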
44 | ### Alpha Channel 45 | 46 | The alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}. 47 | 48 | Launch via NIFTY Cloud CLI by specifying $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID: 49 | 50 | ```sh 51 | nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Alpha {{site.alpha-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST 52 | ``` 53 | 54 | ### Beta Channel 55 | 56 | The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}. 57 | 58 | Launch via NIFTY Cloud CLI by specifying $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID: 59 | 60 | ```sh 61 | nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Beta {{site.beta-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST 62 | ``` 63 | 64 | ### Stable Channel 65 | 66 | The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}. 67 | 68 | Launch via NIFTY Cloud CLI by specifying $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID: 69 | 70 | ```sh 71 | nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Stable {{site.stable-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST 72 | ```
73 | 74 | ### Adding More Machines 75 | 76 | To add more instances to the cluster, just launch more with the same cloud-config and the appropriate firewall group. 77 | 78 | ## SSH 79 | 80 | You can log in to your CoreOS instances using: 81 | 82 | ```sh 83 | ssh core@<ip> -i <path/to/keyfile> 84 | ``` 85 | 86 | ## Using CoreOS 87 | 88 | Now that you have a machine booted, it is time to play around. 89 | Check out the [CoreOS Quickstart]({{site.baseurl}}/docs/quickstart) guide or dig into [more specific topics]({{site.baseurl}}/docs). 90 | -------------------------------------------------------------------------------- /quay-enterprise/configure-machines.md: -------------------------------------------------------------------------------- 1 | # Configure Machines for Quay Enterprise 2 | 3 | Quay Enterprise allows you to create user accounts and teams (groups of those users) that mirror your existing org chart. A special type of user, a robot account, is designed to be used programmatically by deployment systems and other pieces of software. Robot accounts are usually configured with read-only access to a repository. 4 | 5 | In this guide, we will assume you have the DNS record `registry.example.com` configured to point to your Enterprise Registry. 6 | 7 | ## Credentials 8 | 9 | Each CoreOS machine needs to be configured with the username and password for a robot account in order to deploy your containers. Docker looks for configured credentials in a `.dockercfg` file located within the user's home directory. You can download this file directly from the Quay Enterprise interface. Let's assume you've created a robot account called `myapp+deployment`. 10 | 11 | The `.dockercfg` can be written via [cloud-config](https://coreos.com/os/docs/latest/cloud-config.html) with the `write_files` parameter, or created manually on each machine. 12 | 13 | ### Kubernetes Pull Secret 14 | 15 | If you are using Quay Enterprise in conjunction with a Kubernetes or Tectonic cluster, it's easiest to use the built-in secret distribution method. This method allows you to use different sets of robot accounts on a per-app basis, and also allows them to be updated or rotated at any time across all machines in the cluster. 16 | 17 | An "Image Pull Secret" is a special secret that Kubernetes will use when pulling down the containers in a pod. It is a base64-encoded Docker config file.
Here's an example: 18 | 19 | ```sh 20 | $ cat ~/.dockercfg | base64 21 | eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K 22 | ``` 23 | 24 | ``` 25 | apiVersion: v1 26 | kind: Secret 27 | metadata: 28 | name: myappcreds 29 | data: 30 | .dockercfg: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K 31 | type: kubernetes.io/dockercfg 32 | ``` 33 | 34 | To use this secret, first submit it into the cluster: 35 | 36 | ```sh 37 | $ kubectl create -f /tmp/myappcreds.yaml 38 | secrets/myappcreds 39 | ``` 40 | 41 | #### Reference Pull Secret with RC 42 | 43 | Reference your new secret in a Replication Controller YAML definition: 44 | 45 | ``` 46 | apiVersion: v1 47 | kind: ReplicationController 48 | metadata: 49 | name: myapp 50 | spec: 51 | replicas: 1 52 | selector: 53 | tier: webapp 54 | template: 55 | metadata: 56 | labels: 57 | tier: webapp 58 | spec: 59 | containers: 60 | - name: foo 61 | image: quay.io/coreos/etcd:v2.2.1 62 | ports: 63 | - containerPort: 2380 64 | imagePullSecrets: 65 | - name: myappcreds 66 | ``` 67 | 68 | #### Assign a Default Pull Secret per Namespace 69 | 70 | To use a specific pull secret as the default in a specific namespace, you can create a [Service Account](http://kubernetes.io/v1.1/docs/user-guide/service-accounts.html) that will be available to each pod. This is new in Kubernetes v1.1. 71 | 72 | ### Cloud-Config 73 | 74 | A snippet to configure the credentials via write_files looks like: 75 | 76 | ```yaml 77 | #cloud-config 78 | 79 | write_files: 80 | - path: /root/.dockercfg 81 | permissions: 0644 82 | content: | 83 | { 84 | "https://registry.example.com/v1/": { 85 | "auth": "cm9ic3p1bXNrajYzUFFXSU9HSkhMUEdNMEISt0ZXN0OkdOWEVHWDRaSFhNUVVSMkI1WE9MM1k1S1R1VET0I1RUZWSVg3TFRJV1I3TFhPMUI=", 86 | "email": "" 87 | } 88 | } 89 | ``` 90 | 91 | Each machine booted with this cloud-config should automatically be authenticated with Quay Enterprise. 92 | 93 | 94 | ### Manual Login 95 | 96 | To temporarily login to a Quay Enterprise account on a machine, run `docker login`: 97 | 98 | ```sh 99 | $ docker login registry.example.com 100 | Login against server at https://registry.example.com/v1/ 101 | Username: myapp+deployment 102 | Password: GNXEGX4Y5J63PQWIOGJHLPGM0B5GUDOBZHXMQUR2B5XOL35EFVIX7LTIWR7LXO1B 103 | Email: myemail@example.com 104 | ``` 105 | 106 | ## Test Push or Pull 107 | 108 | Now that your machine is authenticated, try pulling one of your repositories. 
If you haven't pushed a repository into your Enterprise Registry, you will need to tag it with the full name: 109 | 110 | ```sh 111 | $ docker tag bf60637a656c registry.example.com/myapp 112 | $ docker push registry.example.com/myapp 113 | ``` 114 | 115 | If you already have images in your registry, test out a pull: 116 | 117 | ```sh 118 | docker pull registry.example.com/myapp 119 | ``` 120 | 121 | ## Pulling via systemd 122 | 123 | Assuming a .dockercfg is present in /root, the following is an example systemd unit file that pulls a docker image: 124 | 125 | ``` 126 | [Unit] 127 | Description=Hello World 128 | 129 | [Service] 130 | WorkingDirectory=/root 131 | ExecStartPre=-/usr/bin/docker kill hello-world 132 | ExecStartPre=-/usr/bin/docker rm -f hello-world 133 | ExecStartPre=/usr/bin/docker pull quay.io/example/hello-world:latest 134 | ExecStart=/usr/bin/docker run --rm --name hello-world quay.io/example/hello-world:latest 135 | ExecStop=-/usr/bin/docker stop hello-world 136 | ``` 137 | 138 | If the working directory is not set, docker will not be able to discover the .dockercfg file and will not have the credentials to pull private images. 139 | -------------------------------------------------------------------------------- /golang/godep.md: -------------------------------------------------------------------------------- 1 | ## Using Godep 2 | 3 | This is a brief guide describing how we manage third-party dependencies using [`godep`](https://github.com/tools/godep). 4 | We're going to use the [rkt](https://github.com/coreos/rkt) repository as an example. 5 | 6 | The [build script](https://github.com/coreos/rkt/blob/master/build) is crafted to make this transparent to most users (i.e. if you're just building rkt from source, or modifying any of the codebase without changing dependencies, you should have no need to interact with godep). 7 | But occasionally the need arises to either a) add a new dependency or b) update/remove an existing dependency. 8 | At this point, the ramblings below from an experienced Godep victim^Wenthusiast might prove of use... 9 | 10 | ### Update godep 11 | 12 | Step zero is generally to ensure you have the **latest version** of `godep` available in your `PATH`. 13 | 14 | ### Having the right directory layout (i.e. `GOPATH`) 15 | 16 | To work with `godep`, you'll need to have the repository (i.e. `github.com/coreos/rkt`) checked out in a valid `GOPATH`. 17 | If you use the [standard Go workflow](https://golang.org/doc/code.html#Organization), with every package in its proper place in a workspace, this should be no problem. 18 | As an example, if one were obtaining the repository for the first time, one would do the following: 19 | 20 | ``` 21 | $ export GOPATH=/tmp/foo # or any directory you please 22 | $ go get -d github.com/coreos/rkt/... # or 'git clone https://github.com/coreos/rkt $GOPATH/src/github.com/coreos/rkt' 23 | $ cd $GOPATH/src/github.com/coreos/rkt 24 | ``` 25 | 26 | If, however, you instead prefer to manage your source code in directories like `~/src/rkt`, there's a problem: `godep` doesn't like symbolic links (which is what the rkt `build` script [uses to create a self-contained GOPATH](https://github.com/coreos/rkt/blob/master/build#L8)).
27 | Hence, you'll need to work around this with bind mounts, with something like the following: 28 | ``` 29 | $ export GOPATH=/tmp/foo # or any directory you please 30 | $ mkdir -p $GOPATH/src/github.com/coreos/rkt 31 | $ sudo mount --bind ~/src/rkt $GOPATH/src/github.com/coreos/rkt 32 | $ cd $GOPATH/src/github.com/coreos/rkt 33 | ``` 34 | 35 | One benefit of this approach over the single-workspace workflow is that checking out different versions of dependencies in the `GOPATH` (as we are about to do) is guaranteed not to affect any other packages in the `GOPATH`. 36 | (Using [gvm](https://github.com/moovweb/gvm) or other such tomfoolery to manage `GOPATH`s is an exercise left for the reader.) 37 | 38 | ### Restoring the current state of dependencies 39 | 40 | Now that we have a functional `GOPATH`, use `godep` to restore the full set of vendored dependencies to their correct versions. 41 | (What this command does is essentially just loop over the set of dependencies codified in `Godeps/Godeps.json`, using `go get` to retrieve and then `git checkout` (or equivalent) to set each to their correct revision.) 42 | 43 | ``` 44 | $ godep restore # might take a while if it's the first time... 45 | ``` 46 | 47 | At this stage, your path forks, depending on what exactly you want to do: add, update or remove a dependency. 48 | But in _all three cases_, the procedure finishes with the [same save command](#saving-the-set-of-dependencies). 49 | 50 | #### Add a new dependency 51 | 52 | In this case you'll first need to retrieve the dependency you're working with into `GOPATH`. 53 | As a simple example, assuming we're adding `github.com/fizz/buzz`: 54 | ``` 55 | $ go get -d github.com/fizz/buzz 56 | ``` 57 | 58 | Then add your new dependency into `godep`'s purview by simply importing the standard package name in one of your sources: 59 | 60 | ``` 61 | $ vim $GOPATH/src/github.com/coreos/rkt/some/file.go 62 | ... 63 | import "github.com/fizz/buzz" 64 | ... 65 | ``` 66 | 67 | Now, GOTO [saving](#saving-the-set-of-dependencies) 68 | 69 | #### Update an existing dependency 70 | 71 | In this case, assuming we're updating `github.com/foo/bar`: 72 | 73 | ``` 74 | $ cd $GOPATH/src/github.com/foo/bar 75 | $ git pull # or 'go get -d -u github.com/foo/bar/...' 76 | $ git checkout $DESIRED_REVISION 77 | $ cd $GOPATH/src/github.com/coreos/rkt 78 | $ godep update github.com/foo/bar/... 79 | ``` 80 | 81 | Now, GOTO [saving](#saving-the-set-of-dependencies) 82 | 83 | #### Removing an existing dependency 84 | 85 | This is the simplest case of all: simply remove all references to a dependency from the source files. 86 | 87 | Now, GOTO [saving](#saving-the-set-of-dependencies) 88 | 89 | ### Saving the set of dependencies 90 | 91 | Finally, here we are, the magic command, the holy grail, the ultimate conclusion of all `godep` operations. 92 | Provided you have followed the preceding instructions, regardless of whether you are adding/removing/modifying dependencies, this command will cast the necessary spells to solve all of your dependency worries: 93 | ``` 94 | $ godep save -r ./... 95 | ``` 96 | 97 | ## Finishing up 98 | 99 | At this point, you should be good to PR. 100 | As well as a simple sanity check that the code actually builds and tests pass, here are some things to look out for: 101 | - `git status Godeps/` should show only a minimal and relevant change (i.e. only the dependencies you actually intended to touch).
102 | - `git diff Godeps/` should be free of any changes to import paths within the vendored dependencies 103 | - `git diff` should show _all_ third-party import paths prefixed with `Godeps/_workspace` 104 | - If something looks awry, restart, pray to your preferred deity, and try again. 105 | -------------------------------------------------------------------------------- /os/other-settings.md: -------------------------------------------------------------------------------- 1 | # Tips and Other Settings 2 | 3 | ## Loading Kernel Modules 4 | 5 | Most Linux kernel modules get automatically loaded as-needed but there 6 | are some situations where this doesn't work. Problems can arise if 7 | boot-time dependencies are sensitive to exactly when the module 8 | gets loaded. Module auto-loading can be broken altogether if the 9 | operation requiring the module happens inside of a container. `iptables` 10 | and other netfilter features can easily encounter both of these issues. 11 | To force modules to be loaded early during boot, simply list them in a 12 | file under `/etc/modules-load.d`. The file name must end in `.conf`. 13 | 14 | ```sh 15 | echo nf_conntrack > /etc/modules-load.d/nf.conf 16 | ``` 17 | 18 | These files are processed early during the boot sequence. This means 19 | that updating `modules-load.d` via cloud-config will only take effect on 20 | the next boot unless the `systemd-modules-load` service is also 21 | restarted: 22 | 23 | ```yaml 24 | #cloud-config 25 | 26 | write_files: 27 |   - path: /etc/modules-load.d/nf.conf 28 |     content: nf_conntrack 29 | 30 | coreos: 31 |   units: 32 |     - name: systemd-modules-load.service 33 |       command: restart 34 | ``` 35 | 36 | ### Loading Kernel Modules with Options 37 | 38 | This example cloud-config excerpt loads the `dummy` network interface module 39 | with an option specifying the number of interfaces the module should create 40 | when loaded (`numdummies=5`): 41 | 42 | ```yaml 43 | #cloud-config 44 | 45 | write_files: 46 |   - path: /etc/modprobe.d/dummy.conf 47 |     content: options dummy numdummies=5 48 |   - path: /etc/modules-load.d/dummy.conf 49 |     content: dummy 50 | 51 | coreos: 52 |   units: 53 |     - name: systemd-modules-load.service 54 |       command: restart 55 | ``` 56 | 57 | After this cloud-config is processed, the dummy module is loaded into the 58 | kernel, and five dummy interfaces are added to the network stack. 59 | 60 | Further details can be found in the systemd man pages: 61 | [modules-load.d(5)](http://www.freedesktop.org/software/systemd/man/modules-load.d.html) 62 | [systemd-modules-load.service(8)](http://www.freedesktop.org/software/systemd/man/systemd-modules-load.service.html) 63 | [modprobe.d(5)](http://linux.die.net/man/5/modprobe.d) 64 | 65 | ## Tuning sysctl Parameters 66 | 67 | The Linux kernel offers a plethora of knobs under `/proc/sys` to control 68 | the availability of different features and tune performance parameters. 69 | For one-shot changes, values can be written directly to the files under 70 | `/proc/sys` but persistent settings must be written to `/etc/sysctl.d`: 71 | 72 | ```sh 73 | echo net.netfilter.nf_conntrack_max=131072 > /etc/sysctl.d/nf.conf 74 | sysctl --system 75 | ``` 76 | 77 | Some parameters, such as the conntrack one above, are only available 78 | after the module they control has been loaded. To ensure any modules are 79 | loaded in advance, use `modules-load.d` as described above.
A complete 80 | cloud-config using both would look like: 81 | 82 | ```yaml 83 | #cloud-config 84 | 85 | write_files: 86 |   - path: /etc/modules-load.d/nf.conf 87 |     content: | 88 |       nf_conntrack 89 |   - path: /etc/sysctl.d/nf.conf 90 |     content: | 91 |       net.netfilter.nf_conntrack_max=131072 92 | 93 | coreos: 94 |   units: 95 |     - name: systemd-modules-load.service 96 |       command: restart 97 |     - name: systemd-sysctl.service 98 |       command: restart 99 | ``` 100 | 101 | Further details can be found in the systemd man pages: 102 | [sysctl.d(5)](http://www.freedesktop.org/software/systemd/man/sysctl.d.html) 103 | [systemd-sysctl.service(8)](http://www.freedesktop.org/software/systemd/man/systemd-sysctl.service.html) 104 | 105 | ## Adding Custom Kernel Boot Options 106 | 107 | The CoreOS bootloader parses the configuration file `/usr/share/oem/grub.cfg`, 108 | where custom kernel boot options may be set. 109 | 110 | ### Enable CoreOS Autologin 111 | 112 | To log in without a password on every boot, edit `/usr/share/oem/grub.cfg` 113 | to add the line: 114 | 115 | ``` 116 | set linux_append="coreos.autologin=tty1" 117 | ``` 118 | 119 | ### Enable Systemd Debug Logging 120 | 121 | Edit `/usr/share/oem/grub.cfg` to add the following line, enabling systemd's 122 | most verbose `debug`-level logging: 123 | 124 | ``` 125 | set linux_append="systemd.log_level=debug" 126 | ``` 127 | 128 | ### Mask a Systemd Unit 129 | 130 | Completely disable the `systemd-networkd.service` unit by adding this line to 131 | `/usr/share/oem/grub.cfg`: 132 | 133 | ``` 134 | set linux_append="systemd.mask=systemd-networkd.service" 135 | ``` 136 | 137 | ## Adding Custom Messages to MOTD 138 | 139 | When logging in interactively, a brief message (the "Message of the Day", or MOTD) 140 | reports the CoreOS release channel, version, and a list of any services or 141 | systemd units that have failed. Additional text can be added by dropping text 142 | files into `/etc/motd.d`. The directory may need to be created first, and the 143 | drop-in file name must end in `.conf`. CoreOS versions 555.0.0 and greater 144 | support customization of the MOTD. 145 | 146 | ```sh 147 | mkdir -p /etc/motd.d 148 | echo "This machine is dedicated to computing Pi" > /etc/motd.d/pi.conf 149 | ``` 150 | 151 | Or via cloud-config: 152 | 153 | ```yaml 154 | #cloud-config 155 | 156 | write_files: 157 |   - path: /etc/motd.d/pi.conf 158 |     content: This machine is dedicated to computing Pi 159 | ``` 160 | 161 | Note: The MOTD is regenerated by a script once per minute to ensure the list of 162 | failed units remains reasonably current. This behavior may change in the future. 163 | -------------------------------------------------------------------------------- /os/sdk-modifying-coreos.md: -------------------------------------------------------------------------------- 1 | # CoreOS Developer SDK Guide 2 | 3 | These are the instructions for building CoreOS itself. By the end of 4 | the guide you will have built a developer image that you can run under 5 | KVM, and you will have the tools for making changes to the code. 6 | 7 | CoreOS is an open source project. All of the source for CoreOS is 8 | available on [GitHub][github-coreos]. If you find issues with these docs 9 | or the code, please send a pull request. 10 | 11 | You can direct questions to the [IRC channel][irc] or [mailing list][coreos-dev].
12 |
13 | [github-coreos]: https://github.com/coreos/
14 | [irc]: irc://irc.freenode.org:6667/#coreos
15 | [coreos-dev]: https://groups.google.com/forum/#!forum/coreos-dev
16 |
17 | ## Getting Started
18 |
19 | Let's get set up with an SDK chroot and build a bootable image of CoreOS. The
20 | SDK chroot has a full toolchain and isolates the build process from quirks and
21 | differences between host OSes. The SDK must be run on an x86-64 Linux machine;
22 | the distro should not matter (Ubuntu, Fedora, etc.).
23 |
24 | ### Prerequisites
25 |
26 | System requirements to get started:
27 |
28 | - curl
29 | - git
30 | - python2
31 |
32 | You also need a proper git setup:
33 |
34 | ```sh
35 | git config --global user.email "you@example.com"
36 | git config --global user.name "Your Name"
37 | ```
38 |
39 | **NOTE**: Do the git configuration as a normal user and not with sudo.
40 |
41 | ### Install repo
42 |
43 | `repo` helps to manage the collection of git repositories that makes up CoreOS.
44 | Pull down the code and add it to your path:
45 |
46 | ```sh
47 | mkdir ~/bin
48 | export PATH="$PATH:$HOME/bin"
49 | curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
50 | chmod a+x ~/bin/repo
51 | ```
52 |
53 | You may want to add this to your .bashrc or /etc/profile.d/ so that you don't
54 | need to reset your $PATH manually each time you open a new shell.
55 |
56 | ### Bootstrap the SDK chroot
57 |
58 | Create a project directory. This will hold all of your git repos and the SDK
59 | chroot. A few gigs of space will be necessary.
60 |
61 | ```sh
62 | mkdir coreos; cd coreos
63 | ```
64 |
65 | Initialize the .repo directory with the manifest that describes all of the git
66 | repos required to get started.
67 |
68 | ```sh
69 | repo init -u https://github.com/coreos/manifest.git
70 | ```
71 |
72 | Synchronize all of the required git repos from the manifest.
73 |
74 | ```sh
75 | repo sync
76 | ```
77 |
78 | ### Building an image
79 |
80 | Download and enter the SDK chroot, which contains all of the compilers and
81 | tooling.
82 |
83 | ```sh
84 | ./chromite/bin/cros_sdk
85 | ```
86 |
87 | **WARNING:** If you ever need to delete the SDK chroot, use
88 | `./chromite/bin/cros_sdk --delete`. Otherwise, you will delete `/dev`
89 | entries that are bind mounted into the chroot.
90 |
91 | Set up the "core" user's password.
92 |
93 | ```sh
94 | ./set_shared_user_password.sh
95 | ```
96 |
97 | Set up a board root filesystem for the amd64-usr target in /build/amd64-usr:
98 |
99 | ```sh
100 | ./setup_board --default --board=amd64-usr
101 | ```
102 |
103 | Build all of the target binary packages:
104 |
105 | ```sh
106 | ./build_packages
107 | ```
108 |
109 | Build an image based on the built binary packages along with the developer
110 | overlay:
111 |
112 | ```sh
113 | ./build_image dev
114 | ```
115 |
116 | After this finishes, commands for converting the raw bin into
117 | a bootable VM will be printed. Run the `image_to_vm.sh` command.
118 |
119 | ### Booting
120 |
121 | Once you build an image you can launch it with KVM (instructions will
122 | print out after `image_to_vm.sh` runs).
123 |
124 | ## Making Changes
125 |
126 | ### git and repo
127 |
128 | CoreOS is managed by `repo`. It was built for the Android project and makes
129 | managing a large number of git repos easier. From the announcement blog:
130 |
131 | > The repo tool uses an XML-based manifest file describing where the upstream
132 | > repositories are, and how to merge them into a single working checkout.
repo 133 | > will recurse across all the git subtrees and handle uploads, pulls, and other 134 | > needed items. repo has built-in knowledge of topic branches and makes working 135 | > with them an essential part of the workflow. 136 | > -- via the [Google Open Source Blog][repo-blog] 137 | 138 | [repo-blog]: http://google-opensource.blogspot.com/2008/11/gerrit-and-repo-android-source.html 139 | 140 | You can find the full manual for repo by visiting [android.com - Developing][android-repo-git]. 141 | 142 | [android-repo-git]: https://source.android.com/source/developing.html 143 | 144 | ### Updating repo manifests 145 | 146 | The repo manifest for CoreOS lives in a git repository in 147 | `.repo/manifests`. If you need to update the manifest edit `default.xml` 148 | in this directory. 149 | 150 | `repo` uses a branch called 'default' to track the upstream branch you 151 | specify in `repo init`, this defaults to 'origin/master'. Keep this in 152 | mind when making changes, the origin git repository should not have a 153 | 'default' branch. 154 | 155 | ## Building Images 156 | 157 | There are separate workflows for building [production images](/docs/sdk-distributors/sdk/building-production-images) and [development images](/docs/sdk-distributors/sdk/building-development-images). 158 | 159 | ## Tips and Tricks 160 | 161 | We've compiled a [list of tips and tricks](/docs/sdk-distributors/sdk/tips-and-tricks) that can make working with the SDK a bit easier. 162 | 163 | ## Testing Images 164 | 165 | [Mantle](/docs/sdk-distributors/sdk/mantle) is a collection of utilities 166 | used in testing and launching SDK images. 167 | -------------------------------------------------------------------------------- /os/booting-on-virtualbox.md: -------------------------------------------------------------------------------- 1 | # Running CoreOS on VirtualBox 2 | 3 | These instructions will walk you through running CoreOS on Oracle VM VirtualBox. 4 | 5 | ## Building the Virtual Disk 6 | 7 | There is a script that simplify the VDI building. It downloads a bare-metal 8 | image, verifies it with GPG and convert the image to VirtualBox format. 9 | 10 | The script is located at 11 | [GitHub](https://github.com/coreos/scripts/blob/master/contrib/create-coreos-vdi 12 | "create-coreos-vdi"). 13 | The running host must support VirtualBox tools. 14 | 15 | As first step, you must download and make it executable. 16 | 17 | ```sh 18 | wget https://raw.github.com/coreos/scripts/master/contrib/create-coreos-vdi 19 | chmod +x create-coreos-vdi 20 | ``` 21 | 22 | To run the script you can specify a destination location and the CoreOS version. 23 | 24 | ```sh 25 | ./create-coreos-vdi -d /data/VirtualBox/Templates 26 | ``` 27 | 28 | ## Choose a Channel 29 | 30 | Choose a channel to base your disk image on. Specific versions of CoreOS can also be referenced by version number. 31 | 32 |
33 | ### Alpha Channel
34 |
35 | The alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. The current version is CoreOS {{site.alpha-channel}}.
36 |
37 | Create a disk image from this channel by running:
38 |
39 | ```sh
40 | ./create-coreos-vdi -V alpha
41 | ```
42 |
43 | ### Beta Channel
44 |
45 | The beta channel consists of promoted alpha releases. The current version is CoreOS {{site.beta-channel}}.
46 |
47 | Create a disk image from this channel by running:
48 |
49 | ```sh
50 | ./create-coreos-vdi -V beta
51 | ```
52 |
53 | ### Stable Channel
54 |
55 | The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. The current version is CoreOS {{site.stable-channel}}.
56 |
57 | Create a disk image from this channel by running:
58 |
59 | ```sh
60 | ./create-coreos-vdi -V stable
61 | ```
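A channel name is not the only accepted value; the `-V` flag should also accept a specific release in place of the channel name (the version below is only an example):

```sh
./create-coreos-vdi -V 557.2.0
```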
62 |
63 | After the script finishes successfully, the CoreOS image will be available at
64 | the specified destination location, or at the current location if none was
65 | given. The file name will be something like:
66 |
67 | ```
68 | coreos_production_stable.vdi
69 | ```
70 |
71 | ## Creating a Config-Drive
72 |
73 | Cloud-config can be specified by attaching a
74 | [config-drive]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-config-drive/)
75 | with the label `config-2`. This is commonly done through whatever interface
76 | allows for attaching CD-ROMs or new drives.
77 |
78 | Note that the config-drive standard was originally an OpenStack feature, which
79 | is why you'll see strings containing `openstack`. This filepath needs to be
80 | retained, although CoreOS supports config-drive on all platforms.
81 |
82 | For more information on customization that can be done with cloud-config, head
83 | on over to the
84 | [cloud-config guide]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/).
85 |
86 | You need a config-drive to configure at least one SSH key to access the virtual
87 | machine. If you are in a hurry, you can create a basic config-drive with the
88 | following steps.
89 |
90 | ```sh
91 | wget https://raw.github.com/coreos/scripts/master/contrib/create-basic-configdrive
92 | chmod +x create-basic-configdrive
93 | ./create-basic-configdrive -H my_vm01 -S ~/.ssh/id_rsa.pub
94 | ```
95 |
96 | An ISO file named `my_vm01.iso` will be created; it will configure a virtual
97 | machine to accept your SSH key and set its name to my_vm01.
98 |
99 | ## Deploying a New Virtual Machine on VirtualBox
100 |
101 | I recommend using the built image as a base image. Therefore, you should clone the
102 | image for each new virtual machine and resize the clone to the desired size.
103 |
104 | ```sh
105 | VBoxManage clonehd coreos_production_stable.vdi my_vm01.vdi
106 | # Resize virtual disk to 10 GB
107 | VBoxManage modifyhd my_vm01.vdi --resize 10240
108 | ```
109 |
110 | At boot time, CoreOS will detect that the volume size has changed and will resize
111 | the filesystem accordingly.
112 |
113 | Open VirtualBox Manager and go to menu Machine > New. Type the desired machine
114 | name and choose the 'Linux' type and the 'Linux 2.6 / 3.x (64 bit)' version.
115 |
116 | Next, choose the desired memory size. I recommend 1 GB for a smooth experience.
117 |
118 | Next, choose 'Use an existing virtual hard drive file' and find the new cloned
119 | image.
120 |
121 | Click on the 'Create' button to create the virtual machine.
122 |
123 | Next, open the settings of the created virtual machine. Then click on the Storage tab
124 | and load the created config-drive into the CD/DVD drive.
125 |
126 | Click on the 'OK' button and the virtual machine will be ready to be started.
127 |
128 | ## Logging In
129 |
130 | Networking can take a bit of time to come up under VirtualBox and you will need
131 | to know the IP in order to connect. Press enter a few times at the login
132 | prompt and you will see an IP address pop up.
133 |
134 | Now you can log in using your private SSH key.
135 |
136 | ```sh
137 | ssh core@192.168.56.101
138 | ```
139 |
140 | ## Using CoreOS
141 |
142 | Now that you have a machine booted, it is time to play around.
143 | Check out the [CoreOS Quickstart]({{site.baseurl}}/docs/quickstart) guide or dig
144 | into [more specific topics]({{site.baseurl}}/docs).
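As an aside, the GUI steps above can also be scripted for repeatable deployments with `VBoxManage`. The following is only a sketch; it assumes the cloned `my_vm01.vdi` and the `my_vm01.iso` config-drive from the previous sections are in the current directory:

```sh
# Create and register a 64-bit Linux VM
VBoxManage createvm --name my_vm01 --ostype Linux26_64 --register

# Give it 1 GB of memory
VBoxManage modifyvm my_vm01 --memory 1024

# Attach the cloned disk and the config-drive ISO to a SATA controller
VBoxManage storagectl my_vm01 --name SATA --add sata
VBoxManage storageattach my_vm01 --storagectl SATA --port 0 --device 0 --type hdd --medium my_vm01.vdi
VBoxManage storageattach my_vm01 --storagectl SATA --port 1 --device 0 --type dvddrive --medium my_vm01.iso

# Boot it headless
VBoxManage startvm my_vm01 --type headless
```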
145 | -------------------------------------------------------------------------------- /os/customizing-sshd.md: -------------------------------------------------------------------------------- 1 | # Customizing the SSH Daemon 2 | 3 | CoreOS defaults to running an OpenSSH daemon using `systemd` socket activation -- when a client connects to the port configured for SSH, `sshd` is started on the fly for that client using a `systemd` unit derived automatically from a template. In some cases you may want to customize this daemon's authentication methods or other configuration. This guide will show you how to do that at build time using `cloud-config`, and after building by modifying the `systemd` unit file. 4 | 5 | As a practical example, when a client fails to connect by not completing the TCP connection (e.g. because the "client" is actually a TCP port scanner), the MOTD may report failures of `systemd` units (which will be named by the source IP that failed to connect) next time you log in to the CoreOS host. These failures are not themselves harmful, but it is a good general practice to change how SSH listens, either by changing the IP address `sshd` listens to from the default setting (which listens on all configured interfaces), changing the default port, or both. 6 | 7 | ## Customizing sshd with Cloud-Config 8 | 9 | In this example we will disable logins for the `root` user, only allow login for the `core` user and disable password based authentication. For more details on what sections can be added to `/etc/ssh/sshd_config` see the [OpenSSH manual][openssh-manual]. 10 | 11 | [openssh-manual]: http://www.openssh.com/cgi-bin/man.cgi?query=sshd_config 12 | 13 | ```yaml 14 | #cloud-config 15 | 16 | write_files: 17 | - path: /etc/ssh/sshd_config 18 | permissions: 0600 19 | owner: root:root 20 | content: | 21 | # Use most defaults for sshd configuration. 22 | UsePrivilegeSeparation sandbox 23 | Subsystem sftp internal-sftp 24 | 25 | PermitRootLogin no 26 | AllowUsers core 27 | PasswordAuthentication no 28 | ChallengeResponseAuthentication no 29 | ``` 30 | 31 | ### Changing the sshd Port 32 | 33 | CoreOS ships with socket-activated SSH by default. The configuration for this can be found at `/usr/lib/systemd/system/sshd.socket`. We're going to override some of the default settings for this in the cloud-config provided at boot: 34 | 35 | ```yaml 36 | #cloud-config 37 | 38 | coreos: 39 | units: 40 | - name: sshd.socket 41 | command: restart 42 | content: | 43 | [Socket] 44 | ListenStream=2222 45 | Accept=yes 46 | ``` 47 | 48 | `sshd` will now listen only on port 2222 on all interfaces when the system is built. 49 | 50 | ### Further Reading 51 | 52 | Read the [full cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/) guide to install users and more. 53 | 54 | ## Customizing sshd after build 55 | 56 | If you have CoreOS hosts that need configuration changes made after build, log into the CoreOS host as the core user, then: 57 | 58 | ``` 59 | $ sudo cp /usr/lib/systemd/system/sshd.socket /etc/systemd/system/sshd.socket 60 | ``` 61 | 62 | This gives you a copy of the sshd.socket unit that will override the one supplied by CoreOS -- when you make changes to it and `systemd` re-reads its configuration, those changes will be implemented. 63 | 64 | ### Changing the sshd port 65 | 66 | To change how sshd listens, change or add the relevant lines in the `[Socket]` section of `/etc/systemd/system/sshd.socket` (leaving options not specified here intact). 
67 |
68 | To change just the listened-to port (in this example, port 2222):
69 |
70 | ```
71 | [Socket]
72 | ListenStream=2222
73 | ```
74 |
75 | To change the listened-to IP address (in this example, 10.20.30.40):
76 |
77 | ```
78 | [Socket]
79 | ListenStream=10.20.30.40:22
80 | FreeBind=true
81 | ```
82 |
83 | You can specify both an IP and an alternate port in a single ListenStream line; additionally, IPv6 address bindings would be specified as, for example, `[2001:db8::7]:22`. Note two things: first, while a specific IP address is optional, you must always specify the port, even if it is the default SSH port. Second, the `FreeBind` option is used to allow the socket to be bound on addresses that are not actually configured on an interface yet, to avoid issues caused by delays in IP configuration at boot. (This is not needed if you are not specifying an address.)
84 |
85 | Multiple ListenStream lines can be specified, in which case `sshd` will listen on all the specified sockets:
86 |
87 | ```
88 | [Socket]
89 | ListenStream=2222
90 | ListenStream=10.20.30.40:2223
91 | FreeBind=true
92 | ```
93 |
94 | `sshd` will now listen to port 2222 on all configured addresses, and port 2223 on 10.20.30.40.
95 |
96 | Taking the last example, the complete contents of `/etc/systemd/system/sshd.socket` would now be:
97 |
98 | ```
99 | [Unit]
100 | Description=OpenSSH Server Socket
101 | Conflicts=sshd.service
102 |
103 | [Socket]
104 | ListenStream=2222
105 | ListenStream=10.20.30.40:2223
106 | FreeBind=true
107 | Accept=yes
108 |
109 | [Install]
110 | WantedBy=sockets.target
111 | ```
112 |
113 | ### Activating changes
114 |
115 | After the edited file is written to disk, you can activate the new configuration without rebooting by reloading systemd and restarting the socket unit:
116 |
117 | ```
118 | $ sudo systemctl daemon-reload
119 | $ sudo systemctl restart sshd.socket
120 | ```
121 |
122 | We now see that systemd is listening on the new sockets:
123 |
124 | ```
125 | $ systemctl status sshd.socket
126 | ● sshd.socket - OpenSSH Server Socket
127 |    Loaded: loaded (/etc/systemd/system/sshd.socket; disabled; vendor preset: disabled)
128 |    Active: active (listening) since Wed 2015-10-14 21:04:31 UTC; 2min 45s ago
129 |    Listen: [::]:2222 (Stream)
130 |            10.20.30.40:2223 (Stream)
131 |  Accepted: 1; Connected: 0
132 | ...
133 | ```
134 |
135 | And if we attempt to connect to port 22 on our public IP, the connection is rejected, but port 2222 works:
136 |
137 | ```
138 | $ ssh core@[public IP]
139 | ssh: connect to host [public IP] port 22: Connection refused
140 | $ ssh -p 2222 core@[public IP]
141 | Enter passphrase for key '/home/user/.ssh/id_rsa':
142 | ```
143 |
144 | ### Further reading on systemd units
145 |
146 | For more information about configuring CoreOS hosts with `systemd`, see [Getting Started with systemd]({{site.baseurl}}/docs/launching-containers/launching/getting-started-with-systemd/).
--------------------------------------------------------------------------------
/os/sdk-tips-and-tricks.md:
--------------------------------------------------------------------------------
1 | # Tips and Tricks
2 |
3 | ## Finding all open pull requests and issues
4 |
5 | - [CoreOS Issues][issues]
6 | - [CoreOS Pull Requests][pullrequests]
7 |
8 | [issues]: https://github.com/organizations/coreos/dashboard/issues/
9 | [pullrequests]: https://github.com/organizations/coreos/dashboard/pulls/
10 |
11 | ## Searching all repo code
12 |
13 | Using `repo forall` you can search across all of the Git repos at once:
14 |
15 | ```sh
16 | repo forall -c git grep 'CONFIG_EXTRA_FIRMWARE_DIR'
17 | ```
18 |
19 | ## Add new upstream package
20 |
21 | Before making modifications, use `repo start` to create a new branch for the changes.
22 |
23 | To add a new package, fetch the Gentoo package from upstream and add it as a dependency of coreos-base/coreos.
24 |
25 | If any files in the upstream package will be changed, the package can be fetched from upstream Gentoo directly into `src/third_party/coreos-overlay`. It may be necessary to create any missing directories in the path too.
26 |
27 | e.g.
28 |
29 | ```sh
30 | ~/trunk/src/third_party/coreos-overlay $ mkdir -p sys-block/open-iscsi && rsync -av rsync://rsync.gentoo.org/gentoo-portage/sys-block/open-iscsi/ sys-block/open-iscsi/
31 | ```
32 |
33 | The trailing / prevents rsync from creating the directory for the package, so you don't end up with `sys-block/open-iscsi/open-iscsi`.
34 | Remember to add the new files to git.
35 |
36 | If the new package does not need to be modified, it should be placed in `src/third_party/portage-stable`.
37 |
38 | You can use `scripts/update_ebuilds` to fetch packages into `src/third_party/portage-stable` and add the files to git.
39 | You should specify the category and the package name,
40 | e.g.
41 | `./update_ebuilds sys-block/open-iscsi`
42 |
43 | If the package needs to be modified, it must be moved out of `src/third_party/portage-stable` to `src/third_party/coreos-overlay`.
44 |
45 | To include the new package as a dependency of coreos, add it to the end of the RDEPEND environment variable in `coreos-base/coreos/coreos-0.0.1.ebuild`, then increment the revision of coreos by renaming the symlink: `git mv coreos-base/coreos/coreos-0.0.1-r237.ebuild coreos-base/coreos/coreos-0.0.1-r238.ebuild`
46 |
47 | The new package will now be built and installed as part of the normal build flow.
48 |
49 | Add and commit the changes to git using the AngularJS commit format. See [CONTRIBUTING.md].
50 | [CONTRIBUTING.md]: https://github.com/coreos/etcd/blob/master/CONTRIBUTING.md
51 |
52 | Push the changes to your GitHub fork and create a pull request.
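Putting the unmodified-package workflow together, a typical session might look like the following sketch (the branch and package names are only examples, and the paths assume the SDK chroot layout from the SDK guide):

```sh
# start a topic branch across the repo checkout before making changes
repo start add-open-iscsi --all

# fetch the unmodified ebuild into portage-stable and stage it in git
cd ~/trunk/src/scripts
./update_ebuilds sys-block/open-iscsi

# after appending the package to RDEPEND in coreos-0.0.1.ebuild,
# bump the coreos revision and commit
cd ~/trunk/src/third_party/coreos-overlay
git mv coreos-base/coreos/coreos-0.0.1-r237.ebuild coreos-base/coreos/coreos-0.0.1-r238.ebuild
git commit -a
```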
53 |
54 | ### Ebuild Tips
55 |
56 | - Manually merge a package to the chroot to test a build: `emerge-amd64-usr packagename`
57 | - Manually unmerge a package: `emerge-amd64-usr --unmerge packagename`
58 | - Remove a binary package from the cache: `sudo rm /build/amd64-usr/packages/category/packagename-version.tbz2`
59 | - Recreate the chroot prior to a clean rebuild: `./chromite/bin/cros_sdk -r`
60 | - It may be necessary to comment out kernel source checks from the ebuild if the build fails, as CoreOS does not yet provide visibility of the configured kernel source at build time. Usually this is not a problem, but it may lead to warning messages.
61 | - Chromium OS [Portage Build FAQ]
62 | - [Gentoo Development Guide]
63 |
64 |
65 | [Portage Build FAQ]: http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/portage-build-faq
66 | [Gentoo Development Guide]: http://devmanual.gentoo.org/
67 |
68 | ## Caching git https passwords
69 |
70 | Note: You need git 1.7.10 or newer to use the credential helper.
71 |
72 | Turn on the credential helper and git will save your password in memory
73 | for some time:
74 |
75 | ```sh
76 | git config --global credential.helper cache
77 | ```
78 |
79 | Why doesn't CoreOS use SSH in the git remotes? Because we can't do
80 | anonymous clones from GitHub with an SSH URL. In the future we will fix
81 | this.
82 |
83 | ### Base system dependency graph
84 |
85 | Get a view into what the base system will contain and why it will contain those
86 | things with the emerge tree view:
87 |
88 | ```sh
89 | emerge-amd64-usr --emptytree -p -v --tree coreos-base/coreos-dev
90 | ```
91 |
92 | ## SSH Config
93 |
94 | You will be booting lots of VMs with on-the-fly SSH key generation. Add
95 | this to your `$HOME/.ssh/config` to stop the annoying fingerprint warnings.
96 |
97 | ```ini
98 | Host 127.0.0.1
99 |   StrictHostKeyChecking no
100 |   UserKnownHostsFile /dev/null
101 |   User core
102 |   LogLevel QUIET
103 | ```
104 |
105 | ## Hide loop devices from desktop environments
106 |
107 | By default, desktop environments will diligently display any mounted devices,
108 | including loop devices used to construct CoreOS disk images. If the daemon
109 | responsible for this happens to be `udisks`, then you can disable this
110 | behavior with the following udev rule:
111 |
112 | ```sh
113 | echo 'SUBSYSTEM=="block", KERNEL=="ram*|loop*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_PRESENTATION_NOPOLICY}="1"' > /etc/udev/rules.d/85-hide-loop.rules
114 | udevadm control --reload
115 | ```
116 |
117 | ## Leaving developer mode
118 |
119 | Some daemons act differently in "dev mode". For example, update_engine refuses
120 | to auto-update or connect to HTTPS URLs. If you need to test something outside of
121 | dev_mode on a VM, you can do the following:
122 |
123 | ```
124 | mv /root/.dev_mode{,.old}
125 | ```
126 |
127 | If you want to leave developer mode permanently, you can run the following:
128 |
129 | ```
130 | crossystem disable_dev_request=1; reboot
131 | ```
132 |
133 | ## Known Issues
134 |
135 | ### build\_packages fails on coreos-base
136 |
137 | Sometimes coreos-dev or coreos builds will fail in `build_packages` with a
138 | backtrace pointing to `epoll`. This hasn't been tracked down, but running
139 | `build_packages` again should fix it.
The error looks something like this:
140 |
141 | ```
142 | Packages failed:
143 | coreos-base/coreos-dev-0.1.0-r63
144 | coreos-base/coreos-0.0.1-r187
145 | ```
146 |
147 | ## Constants and IDs
148 |
149 | ### CoreOS App ID
150 |
151 | This UUID is used to identify CoreOS to the update service and elsewhere.
152 |
153 | ```
154 | e96281a6-d1af-4bde-9a0a-97b76e56dc57
155 | ```
156 |
157 | ### GPT UUID Types
158 |
159 | - CoreOS Root: 5dfbf5f4-2848-4bac-aa5e-0d9a20b745a6
160 | - CoreOS Reserved: c95dc21a-df0e-4340-8d7b-26cbfa9a03e0
--------------------------------------------------------------------------------
/os/booting-on-vultr.md:
--------------------------------------------------------------------------------
1 | # Running CoreOS on a Vultr VPS
2 |
3 | These instructions will walk you through running a single CoreOS node. This guide assumes:
4 |
5 | * You have an account at [Vultr.com](https://www.vultr.com).
6 | * You have a public + private key combination generated. Here's a helpful guide if you need to generate these keys: [How to set up SSH keys](https://help.github.com/articles/generating-ssh-keys).
7 |
8 | The simplest option to boot up CoreOS is to select the "CoreOS Stable" operating system from Vultr's default offerings. However, most deployments require a custom `cloud-config`, which can only be achieved in Vultr with an iPXE script. The remainder of this article describes this process.
9 |
10 | ## Cloud-Config
11 |
12 | First, you'll need to make a shell script containing your `cloud-config` available at a public URL:
13 |
14 | **cloud-config-bootstrap.sh**
15 |
16 | ```sh
17 | #!/bin/bash
18 |
19 | cat > "cloud-config.yaml" <<EOF
20 | #cloud-config
21 |
22 | ssh_authorized_keys:
23 |   - ssh-rsa AAAAB3Nz... # replace with your public SSH key
24 | EOF
25 |
26 | coreos-cloudinit --from-file=./cloud-config.yaml
27 | ```
28 |
29 | ## Choose a Channel
30 |
31 | Next, create an iPXE startup script that boots CoreOS from your chosen channel and passes the URL of the shell script above as the `cloud-config-url` kernel parameter.
32 |
45 | ### Alpha Channel
46 | The alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. The current version is CoreOS {{site.data.alpha-channel.rackspace-version}}.
47 |
48 | A sample script will look like this:
49 |
50 | ```
51 | #!ipxe
52 |
53 | # Location of your shell script.
54 | set cloud-config-url http://example.com/cloud-config-bootstrap.sh
55 |
56 | set base-url http://alpha.release.core-os.net/amd64-usr/current
57 | kernel ${base-url}/coreos_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
58 | initrd ${base-url}/coreos_production_pxe_image.cpio.gz
59 | boot
60 | ```
61 |
62 | ### Beta Channel
63 | The beta channel consists of promoted alpha releases. The current version is CoreOS {{site.data.beta-channel.rackspace-version}}.
64 |
65 | A sample script will look like this:
66 |
67 | ```
68 | #!ipxe
69 |
70 | # Location of your shell script.
71 | set cloud-config-url http://example.com/cloud-config-bootstrap.sh
72 |
73 | set base-url http://beta.release.core-os.net/amd64-usr/current
74 | kernel ${base-url}/coreos_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
75 | initrd ${base-url}/coreos_production_pxe_image.cpio.gz
76 | boot
77 | ```
78 |
79 | ### Stable Channel
80 | The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. The current version is CoreOS {{site.data.stable-channel.rackspace-version}}.
81 |
82 | A sample script will look like this:
83 |
84 | ```
85 | #!ipxe
86 |
87 | # Location of your shell script.
88 | set cloud-config-url http://example.com/cloud-config-bootstrap.sh
89 | set base-url http://stable.release.core-os.net/amd64-usr/current
90 | kernel ${base-url}/coreos_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
91 | initrd ${base-url}/coreos_production_pxe_image.cpio.gz
92 | boot
93 | ```
94 |
95 |
96 | Go to My Servers > Startup Scripts > Add Startup Script, select type "PXE", and input your script. Be sure to replace the cloud-config-url with that of the shell script you created above.
97 |
98 | Additional reading can be found at [Booting CoreOS with iPXE](http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/) and [Embedded scripts for iPXE](http://ipxe.org/embed).
99 |
100 | ## Create the VPS
101 |
102 | Create a new VPS (any server type and location of your choice), and then:
103 |
104 | 1. For the "Operating System" select "Custom"
105 | 2. Select "iPXE Custom Script" and the script you created above.
106 | 3. Click "Place Order"
107 |
108 | Once you receive the "Subscription Activated" email, the VPS will be ready to use.
109 |
110 | ## Accessing the VPS
111 |
112 | You can now log in to CoreOS using the associated private key on your local computer. You may need to specify its location using ```-i LOCATION```. If you need additional details on how to specify the location of your private key file, see [here](http://www.cyberciti.biz/faq/force-ssh-client-to-use-given-private-key-identity-file/).
113 |
114 | SSH to the IP of your VPS, and specify the "core" user: ```ssh core@IP```
115 |
116 | ```sh
117 | $ ssh core@IP
118 | The authenticity of host 'IP (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
119 | RSA key fingerprint is 99:a5:13:60:07:5d:ac:eb:4b:f2:cb:c9:b2:ab:d7:21.
120 | Are you sure you want to continue connecting (yes/no)? yes
121 | Warning: Permanently added '[IP]' (ED25519) to the list of known hosts.
122 | Enter passphrase for key '/home/user/.ssh/id_rsa':
123 | CoreOS stable (557.2.0)
124 | core@localhost ~ $
125 | ```
126 |
127 | ## Using CoreOS
128 |
129 | Check out the [CoreOS Quickstart]({{site.baseurl}}/docs/quickstart) guide or dig into [more specific topics]({{site.baseurl}}/docs).
--------------------------------------------------------------------------------
/os/mounting-storage.md:
--------------------------------------------------------------------------------
1 | # Mounting Storage
2 |
3 | The [cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config) *mount unit* mechanism is used to attach additional filesystems to CoreOS nodes, whether such storage is provided by an underlying cloud platform, physical disk, SAN, or NAS system. By [systemd convention](http://www.freedesktop.org/software/systemd/man/systemd.mount.html), mount unit names derive from the target mount point, with interior slashes replaced by dashes, and the `.mount` extension appended. A unit mounting onto `/var/www` is thus named `var-www.mount`.
4 |
5 | Mount units name the source filesystem and target mount point, and optionally the filesystem type. Cloud-config writes mount unit files beneath `/etc/systemd/system`. *Systemd* mounts filesystems defined in such units at boot time. The following example mounts an [EC2 ephemeral disk]({{site.baseurl}}/docs/running-coreos/cloud-providers/ec2/#instance-storage) at the node's `/media/ephemeral` directory, and is therefore named `media-ephemeral.mount`:
6 |
7 | ```yaml
8 | #cloud-config
9 |
10 | coreos:
11 |   units:
12 |     - name: media-ephemeral.mount
13 |       command: start
14 |       content: |
15 |         [Mount]
16 |         What=/dev/xvdb
17 |         Where=/media/ephemeral
18 |         Type=ext3
19 | ```
20 |
21 | ## Use Attached Storage for Docker
22 |
23 | Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers.
It's advantageous to use attached storage to expand your capacity for container images. Be aware that some cloud providers treat certain disks as ephemeral and you will lose all Docker images contained on that disk.
24 |
25 | We're going to mount a btrfs device to `/var/lib/docker`, where Docker stores images. We can do this on the fly when the machine starts up with a oneshot unit that formats the drive and another one that runs afterwards to mount it. Be sure to hardcode the correct device or look for a device by label:
26 |
27 | ```yaml
28 | #cloud-config
29 | coreos:
30 |   units:
31 |     - name: format-ephemeral.service
32 |       command: start
33 |       content: |
34 |         [Unit]
35 |         Description=Formats the ephemeral drive
36 |         After=dev-xvdb.device
37 |         Requires=dev-xvdb.device
38 |         [Service]
39 |         Type=oneshot
40 |         RemainAfterExit=yes
41 |         ExecStart=/usr/sbin/wipefs -f /dev/xvdb
42 |         ExecStart=/usr/sbin/mkfs.btrfs -f /dev/xvdb
43 |     - name: var-lib-docker.mount
44 |       command: start
45 |       content: |
46 |         [Unit]
47 |         Description=Mount ephemeral to /var/lib/docker
48 |         Requires=format-ephemeral.service
49 |         After=format-ephemeral.service
50 |         Before=docker.service
51 |         [Mount]
52 |         What=/dev/xvdb
53 |         Where=/var/lib/docker
54 |         Type=btrfs
55 | ```
56 |
57 | Notice that we're starting both units at the same time and using the power of systemd to work out the dependencies for us. In this case, `var-lib-docker.mount` requires `format-ephemeral.service`, ensuring that our storage will always be formatted before it is mounted. Docker will refuse to start otherwise.
58 |
59 | ## Creating and Mounting a btrfs Volume File
60 |
61 | CoreOS [561.0.0](https://coreos.com/releases/#561.0.0) and later are installed with ext4 + overlayfs to provide a layered filesystem for the root partition.
62 | Installations from before this use btrfs for this functionality.
63 | If you'd like to continue using btrfs on newer CoreOS machines, you can do so with two systemd units: one that creates and formats a btrfs volume file and another that mounts it.
64 |
65 | In this example, we are going to mount a new 25GB btrfs volume file to `/var/lib/docker`, and one can verify that Docker is using the btrfs storage driver once the Docker service has started by executing `sudo docker info`.
66 | We recommend allocating **no more than 85%** of the available disk space for a btrfs filesystem, as journald will also require space on the host filesystem.
67 |
68 | ```yaml
69 | #cloud-config
70 | coreos:
71 |   units:
72 |     - name: format-var-lib-docker.service
73 |       command: start
74 |       content: |
75 |         [Unit]
76 |         Before=docker.service var-lib-docker.mount
77 |         ConditionPathExists=!/var/lib/docker.btrfs
78 |         [Service]
79 |         Type=oneshot
80 |         ExecStart=/usr/bin/truncate --size=25G /var/lib/docker.btrfs
81 |         ExecStart=/usr/sbin/mkfs.btrfs /var/lib/docker.btrfs
82 |     - name: var-lib-docker.mount
83 |       enable: true
84 |       content: |
85 |         [Unit]
86 |         Before=docker.service
87 |         After=format-var-lib-docker.service
88 |         Requires=format-var-lib-docker.service
89 |         [Install]
90 |         RequiredBy=docker.service
91 |         [Mount]
92 |         What=/var/lib/docker.btrfs
93 |         Where=/var/lib/docker
94 |         Type=btrfs
95 |         Options=loop,discard
96 | ```
97 |
98 | Note the declaration of `ConditionPathExists=!/var/lib/docker.btrfs`. Without this line, systemd would reformat the btrfs filesystem every time the machine starts.
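Once Docker has started on top of the new mount, it is worth confirming that the btrfs storage driver is actually in use. As mentioned above, `sudo docker info` reports the active driver; the surrounding output varies by Docker version, but it should include a line like this:

```
$ sudo docker info | grep 'Storage Driver'
Storage Driver: btrfs
```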
99 |
100 | ## Mounting NFS Exports
101 |
102 | This cloud-config excerpt enables the NFS host monitor [`rpc.statd(8)`](http://linux.die.net/man/8/rpc.statd), then mounts an NFS export onto the CoreOS node's `/var/www`.
103 |
104 | ```yaml
105 | #cloud-config
106 | coreos:
107 |   units:
108 |     - name: rpc-statd.service
109 |       command: start
110 |       enable: true
111 |     - name: var-www.mount
112 |       command: start
113 |       content: |
114 |         [Mount]
115 |         What=nfs.example.com:/var/www
116 |         Where=/var/www
117 |         Type=nfs
118 | ```
119 |
120 | To declare that another service depends on this mount, name the mount unit in the dependent unit's `After` and `Requires` properties:
121 |
122 | ```ini
123 | [Unit]
124 | After=var-www.mount
125 | Requires=var-www.mount
126 | ```
127 |
128 | If the mount fails, dependent units will not start.
129 |
130 | ## Further Reading
131 |
132 | Check the [`systemd mount` docs](http://www.freedesktop.org/software/systemd/man/systemd.mount.html) to learn about the available options. Examples specific to [EC2]({{site.baseurl}}/docs/running-coreos/cloud-providers/ec2/#instance-storage), [Google Compute Engine]({{site.baseurl}}/docs/running-coreos/cloud-providers/google-compute-engine/#additional-storage) and [Rackspace Cloud]({{site.baseurl}}/docs/running-coreos/cloud-providers/rackspace/#mount-data-disk) can be used as a starting point.
--------------------------------------------------------------------------------
/os/btrfs-troubleshooting.md:
--------------------------------------------------------------------------------
1 | # Working with btrfs and Common Troubleshooting
2 |
3 | btrfs is a copy-on-write filesystem with full support in the upstream Linux kernel and several desirable features. In the past, CoreOS shipped with a btrfs root filesystem to support Docker filesystem requirements at the time. As of version 561.0.0, CoreOS ships with ext4 as the default root filesystem while still supporting Docker. Btrfs is still supported and works with the latest CoreOS releases and Docker, but we recommend using ext4.
4 |
5 | btrfs was marked as experimental for a long time, but it's now fully production-ready and supported by a number of Linux distributions.
6 |
7 | Notable Features of btrfs:
8 |
9 | - Ability to add/remove block devices without interruption
10 | - Ability to balance the filesystem without interruption
11 | - RAID 0, RAID 1, RAID 5, RAID 6 and RAID 10
12 | - Snapshots and file cloning
13 |
14 | This guide won't cover these topics; it's mostly focused on troubleshooting.
15 |
16 | For a more complete troubleshooting experience, let's explore how btrfs works under the hood.
17 |
18 | btrfs stores data in chunks across all of the block devices on the system. The total storage across these devices is shown in the standard output of `df -h`.
19 |
20 | Raw data and filesystem metadata are stored in one or many chunks, typically ~1GiB in size. When RAID is configured, these chunks are replicated instead of individual files.
21 |
22 | A copy-on-write filesystem maintains many changes of a single file, which is helpful for snapshotting and other advanced features, but can lead to fragmentation with some workloads.
23 |
24 | ## No Space Left on Device
25 |
26 | When the filesystem is out of chunks to write data into, `No space left on device` will be reported. This will prevent journal files from being recorded, containers from starting and so on.
27 |
28 | The common reaction to this error is to run `df -h`, and you'll see that there is still some free space. That command isn't measuring the btrfs primitives (chunks, metadata, etc.), which is what really matters.
29 |
30 | Running `sudo btrfs fi show` will give you the btrfs view of how much free space you have. When starting/stopping many Docker containers or doing a large amount of random writes, chunks will become duplicated in an inefficient manner over time.
31 |
32 | Re-balancing the filesystem ([official btrfs docs](https://btrfs.wiki.kernel.org/index.php/Balance_Filters)) will relocate data from empty or near-empty chunks to free up space. This operation can be done without downtime.
33 |
34 | First, let's see how much free space we have:
35 |
36 | ```sh
37 | $ sudo btrfs fi show
38 | Label: 'ROOT' uuid: 82a40c46-557e-4848-ad4d-10c6e36ed5ad
39 |         Total devices 1 FS bytes used 13.44GiB
40 |         devid    1 size 32.68GiB used 32.68GiB path /dev/xvda9
41 |
42 | Btrfs v3.14_pre20140414
43 | ```
44 |
45 | The answer: not a lot. We can re-balance to fix that.
46 |
47 | The re-balance command can be configured to only relocate data in chunks up to a certain percentage used. This will prevent you from moving around a lot of data without a lot of benefit. If your disk is completely full, you may need to delete a few containers to create space for the re-balance operation to work with.
48 |
49 | Let's try to relocate chunks with less than 5% of usage:
50 |
51 | ```sh
52 | $ sudo btrfs fi balance start -dusage=5 /
53 | Done, had to relocate 5 out of 45 chunks
54 | $ sudo btrfs fi show
55 | Label: 'ROOT' uuid: 82a40c46-557e-4848-ad4d-10c6e36ed5ad
56 |         Total devices 1 FS bytes used 13.39GiB
57 |         devid    1 size 32.68GiB used 28.93GiB path /dev/xvda9
58 |
59 | Btrfs v3.14_pre20140414
60 | ```
61 |
62 | The operation took about a minute on a cloud server and gained us roughly 4GiB of space on the filesystem. It's up to you to find out what percentage works best for your workload, the speed of your disks, etc.
63 |
64 | If your balance operation is taking a long time, you can open a new shell and find the status:
65 |
66 | ```
67 | $ sudo btrfs balance status /
68 | Balance on '/' is running
69 | 0 out of about 1 chunks balanced (1 considered), 100% left
70 | ```
71 |
72 | ## Adding a New Physical Disk
73 |
74 | New physical disks can be added to an existing btrfs filesystem. The first step is to have the new block device [mounted on the machine]({{site.baseurl}}/docs/cluster-management/setup/mounting-storage/). Afterwards, let btrfs know about the new device and re-balance the file system. The key step here is re-balancing, which will move the data and metadata across both block devices. Expect this process to take some time:
75 |
76 | ```sh
77 | $ btrfs device add /dev/sdc /
78 | $ btrfs filesystem balance /
79 | ```
80 |
81 | ## Disable Copy-On-Write
82 |
83 | Copy-on-write isn't ideal for workloads that create or modify many small files, such as databases. Without disabling COW, you can heavily fragment the file system as explained above.
84 |
85 | The best strategy for successfully running a database in a container is to disable COW on the directory/volume that is mounted into the container.
86 |
87 | The COW setting is stored as a file attribute and is modified with a utility called `chattr`.
To disable COW for a MySQL container's volume, run:
88 |
89 | ```sh
90 | $ sudo mkdir /var/lib/mysql
91 | $ sudo chattr -R +C /var/lib/mysql
92 | ```
93 |
94 | The directory `/var/lib/mysql` is now ready to be used by a Docker container without COW. Let's break down the command:
95 |
96 | - `-R` indicates that we want to recursively change the file attribute
97 | - `+C` means we want to set the NOCOW attribute on the file/directory
98 |
99 | To verify, we can run:
100 |
101 | ```sh
102 | $ sudo lsattr /var/lib/
103 | ---------------- /var/lib/portage
104 | ---------------- /var/lib/gentoo
105 | ---------------- /var/lib/iptables
106 | ---------------- /var/lib/ip6tables
107 | ---------------- /var/lib/arpd
108 | ---------------- /var/lib/ipset
109 | ---------------- /var/lib/dbus
110 | ---------------- /var/lib/systemd
111 | ---------------- /var/lib/polkit-1
112 | ---------------- /var/lib/dhcpcd
113 | ---------------- /var/lib/ntp
114 | ---------------- /var/lib/nfs
115 | ---------------- /var/lib/etcd
116 | ---------------- /var/lib/docker
117 | ---------------- /var/lib/update_engine
118 | ---------------C /var/lib/mysql
119 | ```
120 |
121 | ### Disable in a Unit File
122 |
123 | Setting the file attributes can be done within a systemd unit using two `ExecStartPre` commands:
124 |
125 | ```ini
126 | ExecStartPre=/usr/bin/mkdir -p /var/lib/mysql
127 | ExecStartPre=/usr/bin/chattr -R +C /var/lib/mysql
128 | ```
129 |
--------------------------------------------------------------------------------
/flannel/flannel-config.md:
--------------------------------------------------------------------------------
1 | # Configuring flannel for Container Networking
2 |
3 | *Note*: flannel is only available in [CoreOS versions 554]({{site.baseurl}}/releases/#554.0.0) and later.
4 |
5 | ## Overview
6 |
7 | With Docker, each container is assigned an IP address that can be used to communicate with other containers on the _same_ host.
8 | For communicating over a network, containers are tied to the IP addresses of the host machines and must rely on port-mapping to reach the desired container.
9 | This makes it difficult for applications running inside containers to advertise their external IP and port, as that information is not available to them.
10 |
11 | flannel solves the problem by giving each container an IP that can be used for container-to-container communication. It uses packet encapsulation
12 | to create a virtual overlay network that spans the whole cluster. More specifically, flannel gives each host an IP subnet (/24 by default) from which the
13 | Docker daemon is able to allocate IPs to the individual containers.
14 |
15 | flannel uses [etcd](https://coreos.com/using-coreos/etcd/) to store mappings between the virtual IP and host addresses. A `flanneld` daemon runs on each
16 | host and is responsible for watching information in etcd and routing the packets.
17 |
18 | ## Configuration
19 |
20 | ### Publishing config to etcd
21 |
22 | flannel looks up its configuration in etcd. Therefore, the first step to getting started with flannel is to publish the configuration to etcd.
23 | By default, flannel looks up its configuration in `/coreos.com/network/config`. At the bare minimum, you must tell flannel an IP range (subnet) that
24 | it should use for the overlay.
Here is an example of the minimum flannel configuration:
25 |
26 | ```json
27 | { "Network": "10.1.0.0/16" }
28 | ```
29 |
30 | Use the `etcdctl` utility to publish the config:
31 |
32 | ```bash
33 | $ etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
34 | ```
35 |
36 | You can put this into a drop-in for flanneld.service via cloud-config:
37 |
38 | ```yaml
39 | #cloud-config
40 |
41 | coreos:
42 |   units:
43 |     - name: flanneld.service
44 |       drop-ins:
45 |         - name: 50-network-config.conf
46 |           content: |
47 |             [Service]
48 |             ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
49 | ```
50 |
51 | This will assign the specified /16 for the entire overlay network. By default, flannel will allocate a /24 to each host. This default, along with the
52 | minimum and maximum subnet IP addresses, is overridable in the config:
53 |
54 | ```json
55 | {
56 |   "Network": "10.1.0.0/16",
57 |   "SubnetLen": 28,
58 |   "SubnetMin": "10.1.10.0",
59 |   "SubnetMax": "10.1.50.0"
60 | }
61 | ```
62 |
63 | This config instructs flannel to allocate /28 subnets to individual hosts and make sure not to issue subnets outside of the 10.1.10.0 - 10.1.50.0 range.
64 |
65 | ### Firewall
66 |
67 | flannel uses UDP port 8285 for sending encapsulated IP packets. Make sure to enable this traffic to pass between the hosts. If you find that you can't ping containers across hosts, this port is probably not open.
68 |
69 | ### Enabling flannel via Cloud-Config
70 |
71 | The last step is to enable `flanneld.service` in the cloud-config by adding the `command: start` directive:
72 |
73 | ```yaml
74 | #cloud-config
75 |
76 | coreos:
77 |   units:
78 |     - name: flanneld.service
79 |       drop-ins:
80 |         - name: 50-network-config.conf
81 |           content: |
82 |             [Service]
83 |             ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
84 |       command: start
85 |
86 |     # Example service running in a Docker container
87 |     - name: redis.service
88 |       content: |
89 |         [Unit]
90 |         Requires=flanneld.service
91 |         After=flanneld.service
92 |
93 |         [Service]
94 |         ExecStart=/usr/bin/docker run redis
95 |         Restart=always
96 |       command: start
97 | ```
98 |
99 | *Important*: If you are starting other units via cloud-config, `flanneld.service` needs to be listed _before_ any services that run Docker containers.
100 | In addition, other units that will run in containers, including those scheduled via fleet, should include `Requires=flanneld.service`, `After=flanneld.service`, and `Restart=always|on-failure` directives.
101 | These directives are necessary because flanneld.service may fail if etcd is not available yet. It will keep restarting, and it is important for Docker-based services to also keep trying until flannel is up.
102 |
103 | *Important*: If you are starting flannel on Vagrant, it should be instructed to use the correct network interface:
104 |
105 | ```yaml
106 | #cloud-config
107 |
108 | coreos:
109 |   flannel:
110 |     interface: $public_ipv4
111 | ```
112 |
113 | ## Under the Hood
114 |
115 | To reduce the CoreOS image size, the flannel daemon is stored in the CoreOS Enterprise Registry as a Docker container and is not shipped in the CoreOS image.
116 | For those users wishing not to use flannel, it helps to keep their installation minimal. When `flanneld.service` is started, it pulls the Docker image
117 | from the registry. There is, however, a chicken-and-egg problem: flannel configures the Docker bridge, but Docker is needed to pull down the image.
118 |
119 | In order to work around this, CoreOS is configured to optionally run a second copy of the Docker daemon, which we call early-docker. The early-docker daemon
120 | is started with `--iptables=false`, and containers that it executes need to run with host networking. This prevents Docker from starting the `docker0` bridge.
121 |
122 | Here is the sequence of events that happens when `flanneld.service` is started, followed by a service that runs a Docker container (e.g. a redis server):
123 |
124 | 1. `early-docker.service` gets started since it is a dependency of `flanneld.service`.
125 | 2. `early-docker.service` launches a Docker daemon on a separate Unix socket, `/var/run/early-docker.sock`.
126 | 3. `flanneld.service` executes `DOCKER_HOST=unix:///var/run/early-docker.sock docker run --net=host quay.io/coreos/flannel:$FLANNEL_VER` (actual invocation is slightly more complex).
127 | 4. flanneld starts and writes out `/run/flannel/subnet.env` with the acquired IP subnet information.
128 | 5. `ExecStartPost` in `flanneld.service` converts information in `/run/flannel/subnet.env` into Docker daemon command line args (such as `--bip` and `--mtu`), storing them in `/run/flannel_docker_opts.env`.
129 | 6. `redis.service` gets started, which invokes `docker run ...`, triggering socket activation of `docker.service`.
130 | 7. `docker.service` sources in `/run/flannel_docker_opts.env`, which contains env variables with command-line options, and starts Docker with them.
131 | 8. `redis.service` runs the Docker redis container.
132 |
133 | If you would like to learn more about these service files, you can check them out like so: `systemctl cat early-docker.service`.
--------------------------------------------------------------------------------
/os/sdk-building-production-images.md:
--------------------------------------------------------------------------------
1 | # Building Production Images
2 |
3 | In general, the automated process should always be used, but in a pinch
4 | putting together a release manually may be necessary. All release
5 | information is tracked in the [manifest][coreos-manifest] git
6 | repository, which is usually organized like so:
7 |
8 | * build-109.xml (previous release manifest)
9 | * build-115.xml (current release manifest)
10 | * master.xml (master branch manifest)
11 | * version.txt (current version information)
12 | * default.xml -> master.xml
13 | * release.xml -> build-115.xml
14 |
15 | [coreos-manifest]: https://github.com/coreos/manifest
16 |
17 | ## Tagging Releases
18 |
19 | The first step of building a release is updating and tagging the release
20 | in the manifest git repository. A typical release off of master involves
21 | the following steps:
22 |
23 | 1. Make sure you are on the master branch: `repo init -b master`
24 | 2. Sync/checkout source, excluding local changes: `repo sync --detach`
25 | 3. In the scripts directory: `./tag_release --push`
26 |
27 | That was far too easy. If you need to do it the hard way, try this:
28 |
29 | 1. Make sure you are on the master branch: `repo init -b master`
30 | 2. Sync/checkout source, excluding local changes: `repo sync --detach`
31 | 3. Switch to the somewhat hidden manifests checkout: `cd .repo/manifests`
32 | 4. Update `version.txt` with the desired version number.
33 |    * COREOS_BUILD is the major version number, and should be the number
34 |      of days since July 1st, 2013. COREOS_BRANCH should start at 0 and
35 |      is incremented for every normal release based on a particular
36 |      COREOS_BUILD version.
COREOS_PATCH is reserved for exceptional
37 |      situations such as emergency manual releases and should normally
38 |      be 0.
39 |    * The complete version string is
40 |      COREOS_BUILD.COREOS_BRANCH.COREOS_PATCH
41 |    * COREOS_SDK_VERSION should be the complete version string of an
42 |      existing build. The `cros_sdk` uses this to pick what SDK tarball
43 |      to use when creating a fresh chroot and provides a fallback set of
44 |      binary packages to use when the current release's packages are
45 |      unavailable. Usually it will be one release behind COREOS_BUILD.
46 | 5. Generate a release manifest: `repo manifest -r -o build-$BUILD.xml`
47 |    where `$BUILD` is the current value of COREOS_BUILD in `version.txt`.
48 | 6. Update `release.xml`: `ln -sf build-$BUILD.xml release.xml`
49 | 7. Commit! `git add build-$BUILD.xml; git commit -a`
50 | 8. Tag! `git tag v$BUILD.$BRANCH.$PATCH`
51 | 9. Push! `git push origin HEAD:master HEAD:dev-channel
52 |    HEAD:build-$BUILD v$BUILD.$BRANCH.$PATCH`
53 |
54 | If a release branch needs to be updated after master has moved on, the
55 | procedure is similar.
56 | Unfortunately, since tagging branched releases (not on master) is a bit
57 | trickier to get right, the `tag_release` script cannot be used.
58 | The automated build will kick off after updating the `dev-channel` branch.
59 |
60 | 1. Check out the release instead of master: `repo init -b build-$BUILD
61 |    -m release.xml`
62 | 2. Sync, cherry-pick, push, and whatever else is required to publish
63 |    the desired changes in the repo-managed projects. If the desired
64 |    changes are already published (such as if you are just updating to a
65 |    later commit from a project's master branch) then this can be
66 |    skipped.
67 | 3. `cd .repo/manifests`
68 | 4. Update `version.txt` as desired. Usually just increment
69 |    COREOS_PATCH.
70 | 5. Update `build-$BUILD.xml` as desired. The output of
71 |    `repo manifest -r` shouldn't be used verbatim this time because it
72 |    won't generate meaningful values for the `upstream` project
73 |    attribute when starting from a release manifest instead of
74 |    `master.xml`, but it can be useful for looking up the git commit to
75 |    update the `revision` attribute to. If the new git commit is on a
76 |    branch other than master, be sure to update the `upstream` attribute
77 |    with the appropriate ref spec for that branch.
78 | 6. If this is the first time this branch has been updated on its own,
79 |    update the `default.xml` link so that checking out this manifest branch
80 |    with repo init but without the `-m` argument works:
81 |    `ln -sf build-$BUILD.xml default.xml`
82 | 7. Commit! `git commit -a`
83 | 8. Tag! `git tag v$BUILD.$BRANCH.$PATCH`
84 | 9. Push! `git push origin HEAD:dev-channel
85 |    HEAD:build-$BUILD v$BUILD.$BRANCH.$PATCH`
86 |
87 | Now you can start building images!
88 | This will build an image that can be run under KVM and uses near-production
89 | values.
90 |
91 | Note: Add `COREOS_OFFICIAL=1` here if you are making a real release. That will
92 | change the version to leave off the build id suffix.
93 |
94 | ```sh
95 | ./build_image prod --group alpha
96 | ```
97 |
98 | The generated production image is bootable as-is by qemu, but for a
99 | larger ROOT partition or VMware images, use `image_to_vm.sh` as
100 | described in the final output of `build_image`.
101 |
102 | ## Automated Builds
103 |
104 | Automated release builds are triggered by pushes to the `dev-channel`
105 | branch in the manifest repository.
106 |
107 | Note: In the future, builds will be triggered by pushing new tags instead
108 | of using the `dev-channel` branch; the branch only exists due to a limitation
109 | of the current buildbot deployment.
110 |
111 | ## Pushing updates into roller
112 |
113 | The automated build host does not have access to production signing keys,
114 | so the final signing and push to roller must be done elsewhere.
115 | The `coreos_production_update.zip` archive provides the tools required to
116 | do this, so a full SDK setup is not needed. This does require gsutil to be
117 | installed and configured.
118 | An update payload signed by the insecure development keys is generated
119 | automatically as `coreos_production_update.gz` and
120 | `coreos_production_update.meta`. If needed, the raw filesystem image used
121 | to generate the payload is `coreos_production_update.bin.bz2`.
122 | As an example, to publish the insecurely signed payload:
123 |
124 | ```sh
125 | URL=http://builds.release.core-os.net/alpha/amd64-usr/321.0.0
126 | cd $(mktemp -d)
127 | gsutil -m cp $URL/coreos_production_update* ./
128 | gpg --verify coreos_production_update.zip.sig
129 | gpg --verify coreos_production_update.gz.sig
130 | gpg --verify coreos_production_update.meta.sig
131 | unzip coreos_production_update.zip
132 | ./core_roller_upload --user <you>@coreos.com --api_key <your-api-key>
133 | ```
134 |
135 | Note: Prefixing the command with a space will avoid recording your API key
136 | in your bash history if `$HISTCONTROL` is `ignorespace` or `ignoreboth`.
137 |
138 | ## Tips and Tricks
139 |
140 | We've compiled a [list of tips and tricks](/docs/sdk-distributors/sdk/tips-and-tricks) that can make working with the SDK a bit easier.
--------------------------------------------------------------------------------