7 |
8 | - Enter your registry's URL as the application URL
9 |
10 | Note: If using public GitHub, the URL entered must be accessible by *your users*. It can still be an internal URL.
11 |
12 | - Enter `https://{REGISTRY URL HERE}/oauth2/github/callback` as the Authorization callback URL.
13 | - Create the application
14 |
15 |
16 |
17 | - Note down the `Client ID` and `Client Secret`.
18 |
--------------------------------------------------------------------------------
/os/adding-certificate-authorities.md:
--------------------------------------------------------------------------------
1 | # Custom Certificate Authorities
2 |
3 | CoreOS supports custom Certificate Authorities (CAs) in addition to the default list of trusted CAs. Adding your own CA allows you to:
4 |
5 | - Use a corporate wildcard certificate
6 | - Use your own CA to communicate with an installation of CoreUpdate
7 | - Use your own CA to [encrypt communications with a container registry](registry-authentication.md)
8 |
9 | The setup process for any of these use-cases is the same:
10 |
11 | 1. Copy the PEM-encoded certificate authority file (usually with a `.pem` file name extension) to `/etc/ssl/certs`
12 |
13 | 2. Run the `update-ca-certificates` script to update the system bundle of Certificate Authorities. All programs running on the system will now trust the added CA.
14 |
15 | ## More Information
16 |
17 | [Generate Self-Signed Certificates](generate-self-signed-certificates.md)
18 |
19 | [Use an insecure registry behind a firewall](registry-authentication.md#using-a-registry-without-ssl-configured)
20 |
21 | [etcd Security Model]({{site.baseurl}}/etcd/docs/latest/security.html)
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # CoreOS Documentation
2 |
3 | These are publicly accessible versions of the documentation on [coreos.com](http://coreos.com/docs/).
4 |
5 | Check out the [help-wanted tag](https://github.com/coreos/docs/issues?q=is%3Aopen+label%3Ahelp-wanted) if you're not sure what to help out with.
6 |
7 | 1. Fork this repo
8 | 2. Send a Pull Request
9 | 3. We will get your fix up on [coreos.com](http://coreos.com/docs/)
10 |
11 | Thanks for your contributions and fixes!
12 |
13 | ## Translations
14 |
15 | We accept patches for documentation translation. Please send the
16 | documents as a pull request and follow two guidelines:
17 |
18 | 1. Name the files identically to the originals but put them in a
19 | directory that matches the language locale. For example: JA\_JP,
20 | ZH\_CN or KO\_KR.
21 |
22 | 2. Add an explanation about the translated document to the top of the
23 | file: "These documents were localized into Esperanto by Community
24 | Member
12 |
13 | ## View the logs for each service
14 |
15 | - Click the logs tab
16 | - To view logs for each service, click the service name at the top. The logs will update automatically.
17 |
18 | ## Contacting support
19 |
20 | When contacting support, always include a copy of the Enterprise Registry's log directory.
21 |
22 | To download logs, click the "Download All Local Logs (.tar.gz)" link.
23 |
24 | ## Shell script to download logs
25 |
26 | The aforementioned operations are also available in script form on [GitHub](https://github.com/coreos/docs/blob/master/quay-enterprise/gzip-registry-logs.sh).
27 |
--------------------------------------------------------------------------------
/quay-enterprise/github-auth.md:
--------------------------------------------------------------------------------
1 | # GitHub Authentication
2 |
3 | CoreOS Enterprise Registry supports using GitHub or GitHub Enterprise as an authentication system.
4 |
5 | ## Create an OAuth Application in GitHub
6 |
7 | Follow the instructions at [Create a GitHub Application](github-app.md).
8 |
9 | **NOTE:** This application must be **different** from that used for GitHub Build Triggers.
10 |
11 | ## Visit the Management Panel
12 |
13 | Sign in to a super user account and visit `http://yourregistry/superuser` to view the management panel:
14 |
15 |
16 |
17 | ## Enable GitHub Authentication
18 |
19 |
20 |
21 | - Click the configuration tab and scroll down to the section entitled GitHub (Enterprise) Authentication.
22 | - Check the "Enable GitHub Authentication" box
23 | - Fill in the credentials from the application created above
24 | - Click "Save Configuration Changes"
25 | - Restart the container (you will be prompted)
26 |
--------------------------------------------------------------------------------
/golang/test:
--------------------------------------------------------------------------------
1 | #!/bin/bash -e
2 | #
3 | # Run all tests (not including functional)
4 | # ./test
5 | # ./test -v
6 | #
7 | # Run tests for one package
8 | # PKG=./foo ./test
9 | # PKG=bar ./test
10 | #
11 |
12 | # Invoke ./cover for HTML output
13 | COVER=${COVER:-"-cover"}
14 |
15 | source ./build
16 |
17 | TESTABLE="foo bar"
18 | FORMATTABLE="$TESTABLE baz"
19 |
20 | # user has not provided PKG override
21 | if [ -z "$PKG" ]; then
22 | TEST=$TESTABLE
23 | FMT=$FORMATTABLE
24 |
25 | # user has provided PKG override
26 | else
27 | # strip out slashes and dots from PKG=./foo/
28 | TEST=${PKG//\//}
29 | TEST=${TEST//./}
30 |
31 | # only run gofmt on packages provided by user
32 | FMT="$TEST"
33 | fi
34 |
35 | # split TEST into an array and prepend REPO_PATH to each local package
36 | split=(${TEST// / })
37 | TEST=${split[@]/#/${REPO_PATH}/}
38 |
39 | echo "Running tests..."
40 | go test ${COVER} "$@" ${TEST}
41 |
42 | echo "Checking gofmt..."
43 | fmtRes=$(gofmt -l $FMT)
44 | if [ -n "${fmtRes}" ]; then
45 | echo -e "gofmt checking failed:\n${fmtRes}"
46 | exit 255
47 | fi
48 |
49 | echo "Checking govet..."
50 | vetRes=$(go vet $TEST)
51 | if [ -n "${vetRes}" ]; then
52 | echo -e "govet checking failed:\n${vetRes}"
53 | exit 255
54 | fi
55 |
56 | echo "Success"
57 |
--------------------------------------------------------------------------------
/quay-enterprise/provision_mysql.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # A simple shell script to provision a dedicated MySQL container for the CoreOS Enterprise Registry
3 | set -e
4 |
5 | # Edit the following three values to your liking:
6 | MYSQL_USER="coreosuser"
7 | MYSQL_DATABASE="enterpriseregistrydb"
8 | MYSQL_CONTAINER_NAME="mysql"
9 |
10 | # Do not edit these values:
11 | # (creates a 32-character password for the MySQL root user and the Enterprise Registry DB user)
12 | MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q)
13 | MYSQL_PASSWORD=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q)
14 |
15 | echo "Start the Oracle MySQL container:"
16 | # It will provision a blank database for the Enterprise Registry upon first start.
17 | # This initial provisioning can take up to 30 seconds.
18 |
19 | docker \
20 | run \
21 | --detach \
22 | --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} \
23 | --env MYSQL_USER=${MYSQL_USER} \
24 | --env MYSQL_PASSWORD=${MYSQL_PASSWORD} \
25 | --env MYSQL_DATABASE=${MYSQL_DATABASE} \
26 | --name ${MYSQL_CONTAINER_NAME} \
27 | --publish 3306:3306 \
28 | mysql:5.7;
29 |
30 | echo "Sleeping for 10 seconds to allow time for the DB to be provisioned:"
31 | for i in $(seq 1 10)
32 | do
33 | echo "."
34 | sleep 1
35 | done
36 |
37 | echo "Database '${MYSQL_DATABASE}' running."
38 | echo " Username: ${MYSQL_USER}"
39 | echo " Password: ${MYSQL_PASSWORD}"
--------------------------------------------------------------------------------
/os/switching-channels.md:
--------------------------------------------------------------------------------
1 | # Switching Release Channels
2 |
3 | CoreOS is released into stable, alpha and beta channels. New features and bug fixes are tested in the alpha channel and are promoted bit-for-bit to the beta channel if no additional bugs are found.
4 |
5 | By design, the CoreOS update engine does not execute downgrades. If you switch from a channel running a higher CoreOS version than the new channel offers, your machine won't be updated again until the new channel reaches a higher version number.
6 |
7 | 
8 |
9 | ## Create Update Config File
10 |
11 | You can switch machines between channels by creating `/etc/coreos/update.conf`:
12 |
13 | ```ini
14 | GROUP=beta
15 | ```
16 |
17 | ## Restart Update Engine
18 |
19 | The last step is to restart the update engine in order for it to pick up the changed channel:
20 |
21 | ```sh
22 | sudo systemctl restart update-engine
23 | ```
24 |
25 | ## Debugging
26 |
27 | After the update engine is restarted, the machine should check for an update within an hour. You can view the update engine log if you'd like to see the requests that are being made to the update service:
28 |
29 | ```sh
30 | journalctl -f -u update-engine
31 | ```
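You can also query the update engine's status directly, assuming the `update_engine_client` tool shipped with stock CoreOS is available on the machine:

```sh
update_engine_client -status
```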
32 |
33 | For reference, you can find the current version:
34 |
35 | ```sh
36 | cat /etc/os-release
37 | ```
38 |
39 | ## Release Information
40 |
41 | You can read more about the current releases and channels on the [releases page]({{site.baseurl}}/releases).
42 |
--------------------------------------------------------------------------------
/quay-enterprise/swift-temp-url.md:
--------------------------------------------------------------------------------
1 | # Enterprise Registry Swift Direct Download
2 |
3 | ## Direct Download
4 |
5 | The Swift storage engine supports using a feature called [temporary URLs](http://docs.openstack.org/juno/config-reference/content/object-storage-tempurl.html) to allow for faster pulling of images.
6 |
7 | To enable direct download with Swift, please follow these instructions.
8 |
9 | ## Create a Swift temporary URL token
10 |
11 | To enable temporary URLs, first set the `X-Account-Meta-Temp-URL-Key` header on your Object Storage account to an arbitrary string. This string serves as a secret key. For example, to set a key of `somecoolkey` using the swift command-line tool:
12 |
13 | ```
14 | $ swift post -m "Temp-URL-Key:somecoolkey"
15 | ```
16 |
17 | ## Visit the Management Panel
18 |
19 | Sign in to a super user account and visit `http://registry.example.com/superuser` to view the management panel:
20 |
21 |
22 |
23 | ## Go to the settings tab
24 |
25 | - Click the configuration tab and scroll down to the section entitled Registry Storage.
26 | - Ensure that "OpenStack Storage (Swift)" is selected
27 |
28 | ## Enter the temporary URL key
29 |
30 | Enter the key generated above into the `Temp URL Key` field under the Swift storage engine settings.
31 |
32 | ## Save configuration
33 |
34 | Hit `Save Configuration` to save and validate your configuration. The Swift storage engine will automatically
35 | test that the direct download feature is enabled and working.
--------------------------------------------------------------------------------
/ignition/network-configuration.md:
--------------------------------------------------------------------------------
1 | # Network Configuration #
2 |
3 | Configuring networkd with Ignition is a very straightforward task. Because
4 | Ignition runs before networkd starts, configuration is just a matter of writing
5 | the desired config to disk. The Ignition config has a specific section
6 | dedicated to this.
7 |
8 | ## Static Networking ##
9 |
10 | In this example, the network interface with the name "eth0" will be given the
11 | IP address 10.0.1.7. A typical interface will need more configuration and can
12 | use all of the options of a [network unit][network].
13 |
14 | ```json
15 | {
16 | "networkd": {
17 | "units": [
18 | {
19 | "name": "00-eth0.network",
20 | "contents": "[Match]\nName=eth0\n\n[Network]\nAddress=10.0.1.7"
21 | }
22 | ]
23 | }
24 | }
25 | ```
26 |
27 | This configuration will instruct Ignition to create a single network unit named
28 | "00-eth0.network" with the contents:
29 |
30 | ```
31 | [Match]
32 | Name=eth0
33 |
34 | [Network]
35 | Address=10.0.1.7
36 | ```
37 |
38 | When the system boots, networkd will read this config and assign the IP address
39 | to eth0.
40 |
41 | [network]: http://www.freedesktop.org/software/systemd/man/systemd.network.html
42 |
43 | ## Bonded NICs ##
44 |
45 | In this example, all of the network interfaces whose names begin with "eth"
46 | will be bonded together to form "bond0". This new interface will then be
47 | configured to use DHCP.
48 |
49 | ```json
50 | {
51 | "networkd": {
52 | "units": [
53 | {
54 | "name": "00-eth.network",
55 | "contents": "[Match]\nName=eth*\n\n[Network]\nBond=bond0"
56 | },
57 | {
58 | "name": "10-bond0.netdev",
59 | "contents": "[NetDev]\nName=bond0\nKind=bond"
60 | },
61 | {
62 | "name": "20-bond0.network",
63 | "contents": "[Match]\nName=bond0\n\n[Network]\nDHCP=true"
64 | }
65 | ]
66 | }
67 | }
68 | ```
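For reference, the three escaped `contents` strings above render to the following unit files (comment lines mark the file names):

```
# 00-eth.network
[Match]
Name=eth*

[Network]
Bond=bond0

# 10-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

# 20-bond0.network
[Match]
Name=bond0

[Network]
DHCP=true
```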
69 |
--------------------------------------------------------------------------------
/os/adding-users.md:
--------------------------------------------------------------------------------
1 | # Adding Users
2 |
3 | You can create user accounts on a CoreOS machine manually with `useradd` or via cloud-config when the machine is created.
4 |
5 | ## Add Users via Cloud-Config
6 |
7 | Managing users via cloud-config is preferred because it allows you to use the same configuration across many servers and the cloud-config file can be stored in a repo and versioned. In your cloud-config, you can specify many [different parameters]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/#users) for each user. Here's an example:
8 |
9 | ```yaml
10 | #cloud-config
11 |
12 | users:
13 | - name: elroy
14 | passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm...
15 | groups:
16 | - sudo
17 | - docker
18 | ssh-authorized-keys:
19 | - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
20 | ```
21 |
22 | Check out the entire [Customize with Cloud-Config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/) guide for the full details.
23 |
24 | ## Add User Manually
25 |
26 | If you'd like to add a user manually, SSH to the machine and use the `useradd` tool. To create the user `user1`, run:
27 |
28 | ```sh
29 | sudo useradd -p "*" -U -m user1 -G sudo
30 | ```
31 |
32 | The `"*"` creates a user that cannot log in with a password but can log in via SSH key. `-U` creates a group for the user, `-G` adds the user to the existing `sudo` group and `-m` creates a home directory. If you'd like to add a password for the user, run:
33 |
34 | ```sh
35 | $ sudo passwd user1
36 | New password:
37 | Re-enter new password:
38 | passwd: password changed.
39 | ```
40 |
41 | To assign an SSH key, run:
42 |
43 | ```sh
44 | update-ssh-keys -u user1 user1.pem
45 | ```
46 |
47 | ## Further Reading
48 |
49 | Read the [full cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/) guide to install users and more.
50 |
--------------------------------------------------------------------------------
/quay-enterprise/mysql-container.md:
--------------------------------------------------------------------------------
1 | # Setting Up a MySQL Docker Container
2 |
3 | If you don't have an existing MySQL system to host the Enterprise Registry database on, you can run the steps below to create a dedicated MySQL container using the verified Oracle MySQL Docker image from https://registry.hub.docker.com/_/mysql/
4 |
5 | ```sh
6 | docker pull mysql:5.7
7 | ```
8 |
9 | Edit these values to your liking:
10 |
11 | ```sh
12 | MYSQL_USER="coreosuser"
13 | MYSQL_DATABASE="enterpriseregistrydb"
14 | MYSQL_CONTAINER_NAME="mysql"
15 | ```
16 | Do not edit these values:
17 | (each creates a 32-character password: one for the MySQL root user and one for the Enterprise Registry DB user)
18 |
19 | ```sh
20 | MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q)
21 | MYSQL_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | sed 1q)
22 | ```
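The password pipeline can be sanity-checked on its own. This sketch (illustrative, not part of the registry setup) reads `/dev/urandom` directly instead of piping from `cat`, forces the C locale so `tr` behaves predictably, and confirms the resulting length:

```shell
# Keep only alphanumeric bytes from the random stream, wrap at 32
# characters, and take the first full line as the password.
PASS=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)

# The result is always exactly 32 alphanumeric characters.
echo "${#PASS}"   # prints 32
```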
23 |
24 | Start the MySQL container and create a new DB for the Enterprise registry:
25 |
26 | ```sh
27 | docker \
28 | run \
29 | --detach \
30 | --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} \
31 | --env MYSQL_USER=${MYSQL_USER} \
32 | --env MYSQL_PASSWORD=${MYSQL_PASSWORD} \
33 | --env MYSQL_DATABASE=${MYSQL_DATABASE} \
34 | --name ${MYSQL_CONTAINER_NAME} \
35 | --publish 3306:3306 \
36 | mysql:5.7;
37 | ```
38 |
39 | Wait about 30 seconds for the new DB to be created before testing the connection to it; the MySQL container will not respond during the initial DB creation process.
40 |
41 |
42 | Alternatively you can download a simple shell script to perform the steps above:
43 |
44 | ```sh
45 | curl --location https://raw.githubusercontent.com/coreos/docs/master/quay-enterprise/provision_mysql.sh -o /tmp/provision_mysql.sh -#
46 | ```
47 | Then run:
48 |
49 | ```sh
50 | chmod -c +x /tmp/provision_mysql.sh
51 | /tmp/provision_mysql.sh
52 | ```
53 |
54 | Note: Using Percona v5.6 for the MySQL container is currently known not to work.
55 |
--------------------------------------------------------------------------------
/os/power-management.md:
--------------------------------------------------------------------------------
1 | # Tuning CoreOS Power Management
2 |
3 | ## CPU Governor
4 |
5 | By default, CoreOS uses the "performance" CPU governor meaning that the CPU
6 | operates at the maximum frequency regardless of load. This is reasonable for
7 | a system that is under constant load or cannot tolerate increased latency.
8 | On the other hand, if the system is idle much of the time and latency is not
9 | a concern, power savings may be desired.
10 |
11 | Several governors are available:
12 |
13 | | Governor | Description |
14 | |--------------------|-------------|
15 | | `performance` | Default. Operate at the maximum frequency |
16 | | `ondemand` | Dynamically scale frequency at 75% CPU load |
17 | | `conservative` | Dynamically scale frequency at 95% CPU load |
18 | | `powersave` | Operate at the minimum frequency |
19 | | `userspace` | Controlled by a userspace application via the `scaling_setspeed` file |
20 |
21 | The "conservative" governor can be enabled instead with the following shell commands:
22 |
23 | ```sh
24 | modprobe cpufreq_conservative
25 | echo "conservative" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor > /dev/null
26 | ```
27 |
28 | This can be configured with [cloud-config]({{site.baseurl}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos) as well:
29 |
30 | ```yaml
31 | coreos:
32 | units:
33 | - name: cpu-governor.service
34 | command: start
35 | runtime: true
36 | content: |
37 | [Unit]
38 | Description=Enable CPU power saving
39 |
40 | [Service]
41 | Type=oneshot
42 | RemainAfterExit=yes
43 | ExecStart=/usr/sbin/modprobe cpufreq_conservative
44 | ExecStart=/usr/bin/sh -c '/usr/bin/echo "conservative" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'
45 | ```
46 |
47 | More information on further tuning each governor is available in the [Kernel Documentation](https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt)
48 |
--------------------------------------------------------------------------------
/os/sdk-testing-with-mantle.md:
--------------------------------------------------------------------------------
1 | # Mantle: Gluing CoreOS together
2 |
3 | Mantle is a collection of utilities for the CoreOS SDK.
4 |
5 | ## plume index
6 |
7 | Generate and upload index.html objects to turn a Google Cloud Storage
8 | bucket into a publicly browsable file tree. Useful if you want something
9 | like Apache's directory index for your software download repository.
10 |
11 | ## plume gce cluster launching
12 |
13 | Related commands to launch instances on Google Compute Engine (GCE) with
14 | the latest SDK image. SSH keys should be added to the GCE project
15 | metadata before launching a cluster. All commands have flags that can
16 | overwrite the default project, bucket, and other settings. `plume help
17 |
21 |
22 | ## Enable GitHub Triggers
23 |
24 |
25 |
26 | - Click the configuration tab and scroll down to the section entitled GitHub (Enterprise) Build Triggers.
27 | - Check the "Enable GitHub Triggers" box
28 | - Fill in the credentials from the application created above
29 | - Click "Save Configuration Changes"
30 | - Restart the container (you will be prompted)
31 |
32 | ## Tag an Automated Build
33 |
34 | After getting automated builds working, it may be desired to tag a specific build with a name. By default, the last image pushed to a repository will be tagged as `latest`.
35 | Because tagging is [usually done client side](https://docs.docker.com/userguide/dockerimages/#setting-tags-on-an-image) before an image is pushed, it may not be clear how to tag an image that was built and pushed by GitHub. Luckily, there is an interface for doing so on the repository page. After clicking to select a given build on the build graph, the right side of the page displays tag information; clicking it opens a drop-down menu with the option of creating a new tag.
36 |
37 |
38 |
39 | There is currently no ability to automatically tag GitHub triggered builds.
40 |
--------------------------------------------------------------------------------
/os/install-debugging-tools.md:
--------------------------------------------------------------------------------
1 | # Install Debugging Tools
2 |
3 | You can use common debugging tools like tcpdump or strace with Toolbox. Using the filesystem of a specified Docker container, Toolbox launches a container with full system privileges, including access to system PIDs, network interfaces and other global information. Inside of the toolbox, the machine's filesystem is mounted to `/media/root`.
4 |
5 | ## Quick Debugging
6 |
7 | By default, Toolbox uses the stock Fedora Docker container. To start using it, simply run:
8 |
9 | ```sh
10 | /usr/bin/toolbox
11 | ```
12 |
13 | You're now in the namespace of Fedora and can install any software you'd like via `yum`. For example, if you'd like to use `tcpdump`:
14 |
15 | ```sh
16 | [root@srv-3qy0p ~]# yum install tcpdump
17 | [root@srv-3qy0p ~]# tcpdump -i ens3
18 | tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
19 | listening on ens3, link-type EN10MB (Ethernet), capture size 65535 bytes
20 | ```
21 |
22 | ### Specify a Custom Docker Image
23 |
24 | Create a `.toolboxrc` in the user's home folder to use a specific Docker image:
25 |
26 | ```sh
27 | $ cat .toolboxrc
28 | TOOLBOX_DOCKER_IMAGE=index.example.com/debug
29 | TOOLBOX_USER=root
30 | $ /usr/bin/toolbox
31 | Pulling repository index.example.com/debug
32 | ...
33 | ```
34 |
35 | You can also specify this in cloud-config:
36 |
37 | ```yaml
38 | #cloud-config
39 | write_files:
40 | - path: /home/core/.toolboxrc
41 | owner: core
42 | content: |
43 | TOOLBOX_DOCKER_IMAGE=index.example.com/debug
44 | TOOLBOX_DOCKER_TAG=v1
45 | TOOLBOX_USER=root
46 | ```
47 |
48 | ## SSH Directly Into A Toolbox
49 |
50 | Advanced users can SSH directly into a toolbox by setting up an `/etc/passwd` entry:
51 |
52 | ```sh
53 | useradd bob -m -p '*' -s /usr/bin/toolbox -U -G sudo,docker
54 | ```
55 |
56 | To test, SSH as bob:
57 |
58 | ```sh
59 | ssh bob@hostname.example.com
60 |
61 | ______ ____ _____
62 | / ____/___ ________ / __ \/ ___/
63 | / / / __ \/ ___/ _ \/ / / /\__ \
64 | / /___/ /_/ / / / __/ /_/ /___/ /
65 | \____/\____/_/ \___/\____//____/
66 | [root@srv-3qy0p ~]# yum install emacs
67 | [root@srv-3qy0p ~]# emacs /media/root/etc/systemd/system/docker.service
68 | ```
69 |
--------------------------------------------------------------------------------
/os/configuring-dns.md:
--------------------------------------------------------------------------------
1 | # DNS Configuration
2 |
3 | By default, DNS resolution on CoreOS is handled through `/etc/resolv.conf`, which is a symlink to `/run/systemd/resolve/resolv.conf`.
4 | This file is managed by [systemd-resolved][systemd-resolved].
5 | Normally, `systemd-resolved` gets DNS IP addresses from [systemd-networkd][systemd-networkd], either via DHCP or static configuration.
6 | DNS IP addresses can also be set via `systemd-resolved`'s [resolved.conf][resolved.conf].
7 | See [Network configuration with networkd](network-config-with-networkd.md) for more information on `systemd-networkd`.
8 |
9 | ## Using a local DNS cache
10 |
11 | `systemd-resolved` includes a caching DNS resolver.
12 | To use it for DNS resolution and caching, you must enable it via [nsswitch.conf][nsswitch.conf] by adding `resolve` to the `hosts` section.
13 |
14 | Here is an example cloud-config snippet to do that:
15 |
16 | ```yaml
17 | #cloud-config
18 | write_files:
19 | - path: /etc/nsswitch.conf
20 | permissions: 0644
21 | owner: root
22 | content: |
23 | # /etc/nsswitch.conf:
24 |
25 | passwd: files usrfiles
26 | shadow: files usrfiles
27 | group: files usrfiles
28 |
29 | hosts: files usrfiles resolve dns
30 | networks: files usrfiles dns
31 |
32 | services: files usrfiles
33 | protocols: files usrfiles
34 | rpc: files usrfiles
35 |
36 | ethers: files
37 | netmasks: files
38 | netgroup: files
39 | bootparams: files
40 | automount: files
41 | aliases: files
42 | ```
43 |
44 | Only nss-aware applications can take advantage of the `systemd-resolved` cache.
45 | Notably, this means that statically linked Go programs and programs running within Docker/rkt will use `/etc/resolv.conf` only, and will not use the `systemd-resolved` cache.
46 |
47 | [systemd-resolved]: http://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html
48 | [systemd-networkd]: http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html
49 | [resolved.conf]: http://www.freedesktop.org/software/systemd/man/resolved.conf.html
50 | [nsswitch.conf]: http://man7.org/linux/man-pages/man5/nsswitch.conf.5.html
51 |
52 |
--------------------------------------------------------------------------------
/os/verify-images.md:
--------------------------------------------------------------------------------
1 | # Verify CoreOS Images with GPG
2 |
3 | CoreOS publishes new images for each release across a variety of platforms and hosting providers. Each channel has its own set of images ([stable], [beta], [alpha]) that are posted to our storage site. Along with each image, a signature is generated from the [CoreOS Image Signing Key][signing-key] and posted.
4 |
5 | [signing-key]: {{site.baseurl}}/security/image-signing-key
6 | [stable]: http://stable.release.core-os.net/amd64-usr/current/
7 | [beta]: http://beta.release.core-os.net/amd64-usr/current/
8 | [alpha]: http://alpha.release.core-os.net/amd64-usr/current/
9 |
10 | After downloading your image, you should verify it with the `gpg` tool. First, download the image signing key:
11 |
12 | ```sh
13 | curl -O https://coreos.com/security/image-signing-key/CoreOS_Image_Signing_Key.asc
14 | ```
15 |
16 | Next, import the public key and verify that the ID matches the website: [CoreOS Image Signing Key][signing-key]
17 |
18 | ```sh
19 | gpg --import --keyid-format LONG CoreOS_Image_Signing_Key.asc
20 | gpg: key 50E0885593D2DCB4: public key "CoreOS Buildbot (Offical Builds)
13 |
14 | ## Enable Building
15 |
16 |
17 |
18 | - Click the configuration tab and scroll down to the section entitled Dockerfile Build Support.
19 | - Check the "Enable Dockerfile Build" box
20 | - Click "Save Configuration Changes"
21 | - Restart the container (you will be prompted)
22 |
23 | ## Setup the Build Workers
24 |
25 |
26 |
27 | One or more build workers will communicate with the main registry container to build new containers when triggered. The machines must have Docker installed and must not be used for any other work. This procedure must be repeated each time a new worker is
28 | added, but it can be automated fairly easily.
29 |
30 | ### Pull the Build Worker Image
31 |
32 | The build worker is currently in beta. To gain access to its repository, please contact support.
33 | Once given access, pull down the latest copy of the image just like any other:
34 |
35 | ```sh
36 | docker pull quay.io/coreos/registry-build-worker:latest
37 | ```
38 |
39 | ### Run the Build Worker image
40 |
41 | Run this container on each build worker. Since the worker will be orchestrating Docker builds, we need to mount in the Docker socket. This orchestration uses a large amount of CPU and needs to manipulate the Docker images on disk, so we recommend dedicated machines for this task.
42 |
43 | Use the environment variable `SERVER` to tell the worker how to communicate with the primary Enterprise Registry container. A websocket is used as a data channel, and it was configured when we changed the feature flag above.
44 |
45 | | Security | Websocket Address |
46 | |----------|-------------------|
47 | | Using SSL | `wss://somehost.com` |
48 | | Without SSL | `ws://somehost.com` |
49 |
50 | Here's what the full command looks like:
51 |
52 | ```sh
53 | docker run --restart on-failure -e SERVER=wss://myenterprise.host -v /var/run/docker.sock:/var/run/docker.sock quay.io/coreos/registry-build-worker:latest
54 | ```
55 |
56 | When the container starts, each build worker will auto-register with the Enterprise Registry and start building containers as soon as a job is triggered and assigned to it.
57 |
58 | ### Setup GitHub Build (optional)
59 |
60 | If your organization plans to conduct builds via pushes to GitHub (or GitHub Enterprise), please continue
61 | with the Setting up GitHub Build guide.
62 |
63 |
--------------------------------------------------------------------------------
/quay-enterprise/initial-setup.md:
--------------------------------------------------------------------------------
1 | # On-Premise Installation
2 |
3 | CoreOS Enterprise Registry requires three components to be running to begin the setup process:
4 |
5 | - A supported database (MySQL, Postgres)
6 | - A Redis instance (for real-time events)
7 | - The Enterprise Registry image
8 |
9 | **NOTE**: Please have the host and port of the database and the Redis instance ready.
10 |
11 |
12 | ## Preparing the Database
13 |
14 | A MySQL or Postgres installation with an empty database is required, along with a login that has full access to that database. The schema will be created the first time the registry image is run. The database can either be pre-existing or run on CoreOS via a [Docker container](mysql-container.md).
15 |
16 | ## Setting up Redis
17 |
18 | Redis stores data which must be accessed quickly but doesn't necessarily require durability guarantees. If you have an existing Redis instance, make sure it accepts incoming connections on port 6379 (or change the port in the setup process) and then feel free to skip this step.
19 |
20 | To run Redis, simply pull and run the Quay.io Redis image:
21 |
22 | ```
23 | sudo docker pull quay.io/quay/redis
24 | sudo docker run -d -p 6379:6379 quay.io/quay/redis
25 | ```
26 |
27 | **NOTE**: This host will have to accept incoming connections on port 6379 from the hosts on which the registry will run.
28 |
29 | ## Downloading the Enterprise Registry image
30 |
31 | After signing up you will receive a `.dockercfg` file containing your credentials to the `quay.io/coreos/registry` repository.
32 | Save this file to your CoreOS machine in `/home/core/.dockercfg` and `/root/.dockercfg`.
33 | You should now be able to execute `docker pull quay.io/coreos/registry` to download the container.
34 |
35 | ## Setting up the Directories
36 |
37 | CoreOS Enterprise Registry requires a storage directory and a configuration directory:
38 |
39 | ```
40 | mkdir storage
41 | mkdir config
42 | ```
43 |
44 | ## Setting up and running the Registry
45 |
46 | Run the following command, replacing `/local/path/to/the/config/directory` and `/local/path/to/the/storage/directory` with the absolute
47 | paths to the directories created above:
48 |
49 | ```
50 | sudo docker run --restart=always -p 443:443 -p 80:80 --privileged=true -v /local/path/to/the/config/directory:/conf/stack -v /local/path/to/the/storage/directory:/datastorage -d quay.io/coreos/registry
51 | ```
52 |
53 |
54 |
55 | Once started, visit `http://yourhost/setup`, wait for the page to load (it may take a minute or two) and follow the instructions there to set up the Enterprise Registry.
56 |
57 | **NOTE**: The Enterprise Registry will restart itself a few times during this setup process. If the container does not automatically come
58 | back up, simply run the command above again.
59 |
60 |
61 |
62 |
63 | ## Verifying the Registry status
64 |
65 | Visit the `/health/endtoend` endpoint on the registry hostname and verify that the `code` is `200` and `is_testing` is `false`.
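One way to script that check (the hostname is a placeholder; the parsing is demonstrated against a captured healthy response, since only the `code` and `is_testing` fields are specified above):

```shell
# Against a live registry (uncomment and substitute your hostname):
# curl -s https://yourhost/health/endtoend
# Checking the two documented fields from a response:
echo '{"code": 200, "is_testing": false}' |
  python3 -c 'import json,sys; d=json.load(sys.stdin); assert d["code"] == 200 and d["is_testing"] is False; print("healthy")'
```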
66 |
67 |
68 | ## Logging in
69 |
70 | ### If using database authentication:
71 |
72 | Once the Enterprise Registry is running, new users can be created by clicking the `Sign Up` button. If e-mail is enabled, the sign-up process will require an e-mail confirmation step, after which the user can set up repositories, organizations and teams.
73 |
74 |
75 | ### If using LDAP authentication:
76 |
77 | Users should be able to log in to the Enterprise Registry directly with their LDAP username and password.
78 |
79 |
--------------------------------------------------------------------------------
/os/booting-with-iso.md:
--------------------------------------------------------------------------------
1 | # Booting CoreOS from an ISO
2 |
3 | The latest CoreOS ISOs can be downloaded from the image storage site:
4 |
The Alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}. All of the files necessary to verify the image can be found on the storage site.

The Beta channel consists of promoted Alpha releases. Current version is CoreOS {{site.beta-channel}}. All of the files necessary to verify the image can be found on the storage site.

The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}. All of the files necessary to verify the image can be found on the storage site.
1. In the left navigation bar, click Templates.
2. Click Register Template.
3. Provide the following:
   - Name and Description. These will be shown in the UI, so choose something descriptive.
   - URL. The Management Server will download the file from the specified URL, such as http://dl.openvm.eu/cloudstack/coreos/x86_64/coreos_production_cloudstack_image-kvm.qcow2.bz2.
   - Zone. Choose the zone where you want the template to be available, or All Zones to make it available throughout CloudStack.
   - OS Type. This helps CloudStack and the hypervisor perform certain operations and make assumptions that improve the performance of the guest.
   - Hypervisor. The supported hypervisors are listed. Select the desired one.
   - Format. The format of the template upload file, such as VHD or OVA.
   - Extractable. Choose Yes if the template is available for extraction. If this option is selected, end users can download a full image of a template.
   - Public. Choose Yes to make this template accessible to all users of this CloudStack installation. The template will appear in the Community Templates list. See “Private and Public Templates”.
   - Featured. Choose Yes if you would like this template to be more prominent for users to select. The template will appear in the Featured Templates list. Only an administrator can make a template Featured.

To create a VM from a template:

1. Log in to the CloudStack UI as an administrator or user.
2. In the left navigation bar, click Instances.
3. Click Add Instance.
4. Select a zone.
5. Select the CoreOS template registered in the previous step.
6. Click Submit and your VM will be created and started.
The Alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.

Launch via NIFTY Cloud CLI by specifying $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID:

    nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Alpha {{site.alpha-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST

The Beta channel consists of promoted Alpha releases. Current version is CoreOS {{site.beta-channel}}.

Launch via NIFTY Cloud CLI by specifying $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID:

    nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Beta {{site.beta-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST

The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.

Launch via NIFTY Cloud CLI by specifying $ZONE, $TYPE, $FW_ID and $SSH_KEY_ID:

    nifty-run-instances $(nifty-describe-images --delimiter ',' --image-name "CoreOS Stable {{site.stable-channel}}" | awk -F',' '{print $2}') --key $SSH_KEY_ID --availability-zone $ZONE --instance-type $TYPE -g $FW_ID -f cloud-config.yml -q POST
The Alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.

Create a disk image from this channel by running:

    ./create-coreos-vdi -V alpha

The Beta channel consists of promoted Alpha releases. Current version is CoreOS {{site.beta-channel}}.

Create a disk image from this channel by running:

    ./create-coreos-vdi -V beta

The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.

Create a disk image from this channel by running:

    ./create-coreos-vdi -V stable
The Alpha channel closely tracks master and is released frequently. The newest versions of Docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.data.alpha-channel.rackspace-version}}.

A sample script will look like this:

    #!ipxe

    # Location of your shell script.
    set cloud-config-url http://example.com/cloud-config-bootstrap.sh

    set base-url http://alpha.release.core-os.net/amd64-usr/current
    kernel ${base-url}/coreos_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
    initrd ${base-url}/coreos_production_pxe_image.cpio.gz
    boot
The Beta channel consists of promoted Alpha releases. Current version is CoreOS {{site.data.beta-channel.rackspace-version}}.

A sample script will look like this:

    #!ipxe

    # Location of your shell script.
    set cloud-config-url http://example.com/cloud-config-bootstrap.sh

    set base-url http://beta.release.core-os.net/amd64-usr/current
    kernel ${base-url}/coreos_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
    initrd ${base-url}/coreos_production_pxe_image.cpio.gz
    boot
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.data.stable-channel.rackspace-version}}.

A sample script will look like this:

    #!ipxe

    # Location of your shell script.
    set cloud-config-url http://example.com/cloud-config-bootstrap.sh

    set base-url http://stable.release.core-os.net/amd64-usr/current
    kernel ${base-url}/coreos_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
    initrd ${base-url}/coreos_production_pxe_image.cpio.gz
    boot