├── .gitignore ├── .history ├── hack │ ├── build_20191015113924.sh │ └── build_20191029105522.sh └── workshops │ └── ansible-for-devops │ ├── 04-source-code-and-image-repos_20191015111359.rst │ ├── 04-source-code-and-image-repos_20191017220426.rst │ ├── 08-configuring-tower_20191015111359.rst │ ├── 08-configuring-tower_20191017215730.rst │ ├── 08-configuring-tower_20191017220628.rst │ ├── 08-configuring-tower_20191017220709.rst │ ├── 08-configuring-tower_20191017220820.rst │ ├── 08-configuring-tower_20191018105207.rst │ ├── 08-configuring-tower_20191018123053.rst │ ├── 08-configuring-tower_20191018135830.rst │ ├── 08-configuring-tower_20191018135906.rst │ ├── 08-configuring-tower_20191018140020.rst │ ├── 08-configuring-tower_20191018150108.rst │ ├── 08-configuring-tower_20191018151630.rst │ ├── 08-configuring-tower_20191018151853.rst │ ├── 08-configuring-tower_20191018151915.rst │ ├── 08-configuring-tower_20191029152839.rst │ └── 08-configuring-tower_20191029163334.rst ├── .travis.yml ├── .vscode └── settings.json ├── Dockerfile ├── LICENSE ├── README.md ├── entrypoint.sh ├── hack ├── README.md ├── build.sh ├── prep-ansible-for-devops.sh ├── prep-better-together.sh └── run.sh ├── requirements.txt └── workshops ├── ansible-for-devops ├── 01-introduction.rst ├── 02-ci-cd-essentials.rst ├── 03-source-code-and-image-repos.rst ├── 04-deploying-sitea.rst ├── 05-deploying-siteb.rst ├── 06-loadbalancing.rst ├── 07-galaxy-motd.rst ├── 08-configuring-tower.rst ├── 09-building-tower-jobs.rst ├── 10-bonus-round.rst ├── 11-cta.rst ├── Makefile ├── _static │ ├── .DS_Store │ └── images │ │ ├── at_add.png │ │ ├── at_browse.png │ │ ├── at_cred_detail.png │ │ ├── at_credentials_button.png │ │ ├── at_gear.png │ │ ├── at_inv_create.png │ │ ├── at_inv_group.png │ │ ├── at_inv_source.png │ │ ├── at_inv_source_button.png │ │ ├── at_job_template.png │ │ ├── at_lic_prompt.png │ │ ├── at_project_detail.png │ │ ├── at_save.png │ │ ├── at_submit.png │ │ ├── at_tm_stdout.png │ │ ├── gogs_config_1.png │ │ ├── gogs_config_2.png │ │ ├── gogs_dash.png │ │ ├── gogs_login.png │ │ ├── gogs_new_repo.png │ │ ├── gogs_plus.png │ │ ├── gogs_register.png │ │ ├── gogs_repo_dash.png │ │ ├── gogs_repo_full.png │ │ ├── gogs_repo_info.png │ │ ├── gogs_save.png │ │ └── tower_install_splash.png ├── conf.py ├── examples │ ├── hosts │ ├── motd.yml │ ├── nginx-lb-deploy.yml │ ├── nginx-lb │ │ ├── Dockerfile │ │ └── etc │ │ │ └── nginx │ │ │ └── conf.d │ │ │ └── default.conf │ ├── roles │ │ ├── apache-simple │ │ │ ├── README.md │ │ │ ├── defaults │ │ │ │ └── main.yml │ │ │ ├── handlers │ │ │ │ └── main.yml │ │ │ ├── meta │ │ │ │ └── main.yml │ │ │ ├── tasks │ │ │ │ └── main.yml │ │ │ ├── templates │ │ │ │ ├── httpd.conf.j2 │ │ │ │ └── index.html.j2 │ │ │ ├── tests │ │ │ │ ├── inventory │ │ │ │ └── test.yml │ │ │ └── vars │ │ │ │ └── main.yml │ │ └── jtyr.motd │ │ │ ├── .gitignore │ │ │ ├── LICENSE │ │ │ ├── README.md │ │ │ ├── defaults │ │ │ └── main.yaml │ │ │ ├── meta │ │ │ ├── .galaxy_install_info │ │ │ └── main.yaml │ │ │ ├── tasks │ │ │ └── main.yaml │ │ │ └── templates │ │ │ └── motd.j2 │ ├── siteb-apache-simple-container-deploy.yml │ ├── siteb-config-build.yml │ └── siteb │ │ ├── Dockerfile │ │ ├── etc │ │ └── httpd │ │ │ └── conf │ │ │ └── httpd.conf │ │ └── var │ │ └── www │ │ └── html │ │ └── index.html ├── images │ ├── Logo-RedHat-D-Color-RGB.png │ └── rh_favicon.png └── index.rst ├── better-together ├── LICENSE ├── Makefile ├── README.md ├── _static │ ├── ansible-essentials.html │ └── openshift_technical_overview.pdf ├── 
ansible-intro.rst ├── ci-cd.rst ├── conf.py ├── container-cgroups.rst ├── container-namespaces.rst ├── container-registry.rst ├── container-selinux.rst ├── images │ ├── Logo-RedHat-D-Color-RGB.png │ ├── ops │ │ ├── ansible-galaxy.png │ │ ├── ansible_indentation_view.png │ │ ├── ansible_overview.png │ │ ├── app-cli.png │ │ ├── app-cli_gui.png │ │ ├── containers_vs_vms.png │ │ ├── evolution.png │ │ ├── images_layer_cake.png │ │ ├── images_registry.png │ │ ├── metrics.jpeg │ │ ├── ocp_add_to_project.png │ │ ├── ocp_app-gui_build.png │ │ ├── ocp_app-gui_wizard.png │ │ ├── ocp_login.png │ │ ├── ocp_networking_node.png │ │ ├── ocp_php_results.png │ │ ├── ocp_project_list.png │ │ ├── ocp_service_catalog.png │ │ └── vm_vs_container.png │ └── rh_favicon.png ├── index.rst ├── integration.rst ├── ocp-deploying-apps.rst ├── ocp-intro.rst ├── ocp-routing.rst └── ocp-sdn.rst └── example-workshop ├── Makefile ├── conf.py ├── contributing.rst ├── docs ├── images ├── Logo-RedHat-D-Color-RGB.png └── rh_favicon.png ├── index.rst ├── intro.rst ├── license.rst └── quickstart.rst /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | .pyc 3 | _build 4 | tags 5 | .quay_creds 6 | -------------------------------------------------------------------------------- /.history/hack/build_20191015113924.sh: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | # Helper script to build container images for each workshop 3 | # if using podman, be sure to install the podman-docker package 4 | 5 | WORKSHOP_NAME=$1 6 | if [[ ${#3} -eq 0 ]];then 7 | QUAY_PROJECT=jduncan 8 | else 9 | QUAY_PROJECT=$3 10 | fi 11 | 12 | 13 | case $2 in 14 | local) 15 | docker build --build-arg workshop_name=$WORKSHOP_NAME \ 16 | -t quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME \ 17 | . 18 | ;; 19 | quay) 20 | # designed to be used by travis-ci, where the docker_* variables are defined 21 | echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin quay.io 22 | docker build --build-arg workshop_name=$WORKSHOP_NAME \ 23 | -t quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME . 24 | docker push quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME 25 | ;; 26 | *) 27 | echo "usage: ./hack/build.sh " 28 | ;; 29 | esac 30 | -------------------------------------------------------------------------------- /.history/hack/build_20191029105522.sh: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | # Helper script to build container images for each workshop 3 | # if using podman, be sure to install the podman-docker package 4 | # also, if using podman, up th 5 | 6 | WORKSHOP_NAME=$1 7 | if [[ ${#3} -eq 0 ]];then 8 | QUAY_PROJECT=jduncan 9 | else 10 | QUAY_PROJECT=$3 11 | fi 12 | 13 | 14 | case $2 in 15 | local) 16 | docker build --build-arg workshop_name=$WORKSHOP_NAME \ 17 | -t quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME \ 18 | . 19 | ;; 20 | quay) 21 | # designed to be used by travis-ci, where the docker_* variables are defined 22 | echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin quay.io 23 | docker build --build-arg workshop_name=$WORKSHOP_NAME \ 24 | -t quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME . 
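# Note: the push below reuses the quay.io session opened by the "docker login"
# above, so it only succeeds after that login does. A hypothetical invocation
# of this helper (the quay org value here is only an example) would be:
#   ./hack/build.sh ansible-for-devops quay my-quay-org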
25 | docker push quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME 26 | ;; 27 | *) 28 | echo "usage: ./hack/build.sh " 29 | ;; 30 | esac 31 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191015111359.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Ajay Chenampara 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | TODO - GOGS creds 28 | 29 | Credentials are utilized by Tower for authentication when launching jobs against machines, synchronizing with inventory sources, and importing project content from a version control system. 30 | 31 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 32 | 33 | - Select the gear icon |Gear button|, then select CREDENTIALS. 34 | - Click on ADD |Add button| 35 | 36 | Use this information to complete the credential form. 37 | 38 | +------------------------+---------------------------------------+ 39 | | NAME | Ansible Workshop Credential | 40 | +========================+=======================================+ 41 | | DESCRIPTION | Credentials for Ansible Workshop | 42 | +------------------------+---------------------------------------+ 43 | | ORGANIZATION | Default | 44 | +------------------------+---------------------------------------+ 45 | | TYPE | Machine | 46 | +------------------------+---------------------------------------+ 47 | | USERNAME | |student_name| | 48 | +------------------------+---------------------------------------+ 49 | | PASSWORD | |student_pass| | 50 | +------------------------+---------------------------------------+ 51 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 52 | +------------------------+---------------------------------------+ 53 | 54 | .. figure:: ./_static/images/at_cred_detail.png 55 | :alt: Adding a Credential 56 | 57 | Adding a Credential 58 | 59 | - Select SAVE |Save button| 60 | 61 | With your credential created, next you'll create a project to point back to your GOGS instance. 62 | 63 | Creating a Project 64 | ``````````````````` 65 | 66 | A Project is a logical collection of Ansible playbooks, represented in Tower. 
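Before creating the Project, you can optionally confirm from your control node that the repository Tower will sync from is reachable. This is only a sanity check, not a lab step; it assumes the same repository URL used in the Project form below, and GOGS may prompt you for your |student_name| / |student_pass| credentials.

.. parsed-literal::

   git ls-remote http://|control_public_ip|/|student_name|/playbook.git

If the command lists one or more refs, Tower will be able to clone the repository using the same URL.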
You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 67 | 68 | - Click on PROJECTS 69 | - Select ADD |Add button| 70 | 71 | Complete the form using the following entries 72 | 73 | ================== =================================================== 74 | NAME Ansible Workshop Project 75 | ================== =================================================== 76 | DESCRIPTION workshop playbooks 77 | ORGANIZATION Default 78 | SCM TYPE Git 79 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 80 | SCM BRANCH 81 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 82 | ================== =================================================== 83 | 84 | .. figure:: ./_static/images/at_project_detail.png 85 | :alt: Defining a Project 86 | 87 | Defining a Project 88 | 89 | - Select SAVE |Save button| 90 | 91 | Creating an Inventory 92 | `````````````````````` 93 | 94 | TODO - make it pull from the project 95 | 96 | An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 97 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 98 | 99 | - Click on INVENTORIES 100 | - Select ADD |Add button| 101 | - Complete the form using the following entries 102 | 103 | +----------------+------------------------------+ 104 | | NAME | Ansible Workshop Inventory | 105 | +================+==============================+ 106 | | DESCRIPTION | Ansible Inventory | 107 | +----------------+------------------------------+ 108 | | ORGANIZATION | Default | 109 | +----------------+------------------------------+ 110 | 111 | .. figure:: ./_static/images/at_inv_create.png 112 | :alt: Create an Inventory 113 | 114 | Creating an Inventory 115 | 116 | - Select SAVE |Save button| 117 | 118 | Look in your ``.ansible.cfg`` file to find the path to your default inventory. This is the inventory we'll import into Tower. Your default inventory is the ``inventory`` parameter. 119 | 120 | .. parsed-literal:: 121 | 122 | $ cat ~/.ansible.cfg 123 | [defaults] 124 | stdout_callback = yaml 125 | connection = smart 126 | timeout = 60 127 | deprecation_warnings = False 128 | host_key_checking = False 129 | retry_files_enabled = False 130 | 131 | inventory = /home/|student_name|/devops-workshop/lab_inventory/hosts 132 | 133 | To import the inventory, we'll use the ``tower-manage`` utility on your control node/Tower server. 134 | 135 | .. parsed-literal:: 136 | 137 | sudo tower-manage inventory_import --source=/home/|student_name|/devops-workshop/lab_inventory/hosts --inventory-name="Ansible Workshop Inventory" 138 | 139 | You should see output similar to the following: 140 | 141 | .. figure:: ./_static/images/at_tm_stdout.png 142 | :alt: Importing an inventory with tower-manage 143 | 144 | Importing an inventory with tower-manage 145 | 146 | Feel free to browse your inventory in Tower. You should now notice that 147 | the inventory has been populated with Groups and that each of those 148 | groups contain hosts. 149 | 150 | .. 
figure:: ./_static/images/at_inv_group.png 151 | :alt: Inventory with Groups 152 | 153 | Inventory with Groups 154 | 155 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 156 | 157 | Creating job templates 158 | ----------------------- 159 | 160 | STIG template 161 | `````````````` 162 | 163 | Prod template 164 | `````````````` 165 | 166 | Dev template 167 | `````````````` 168 | 169 | Load balancer template 170 | ``````````````````````` 171 | 172 | Workflow templates 173 | -------------------- 174 | 175 | Summary 176 | -------- 177 | 178 | .. |Browse button| image:: ./_static/images/at_browse.png 179 | .. |Submit button| image:: ./_static/images/at_submit.png 180 | .. |Gear button| image:: ./_static/images/at_gear.png 181 | .. |Add button| image:: ./_static/images/at_add.png 182 | .. |Save button| image:: ./_static/images/at_save.png 183 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191017215730.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Ajay Chenampara 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | Credentials are utilized by Tower for authentication when launching Ansible jobs involving machines, synchronizing with inventory sources, and importing project content from a version control system. 28 | 29 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 30 | 31 | - Select the key icon |Credentials button| on the left rail. 32 | - Click on ADD |Add button| 33 | 34 | Use this information to complete the credential form. 
35 | 36 | +------------------------+---------------------------------------+ 37 | | NAME | Ansible Workshop Credential | 38 | +========================+=======================================+ 39 | | DESCRIPTION | Credentials for Ansible Workshop | 40 | +------------------------+---------------------------------------+ 41 | | ORGANIZATION | Default | 42 | +------------------------+---------------------------------------+ 43 | | TYPE | Machine | 44 | +------------------------+---------------------------------------+ 45 | | USERNAME | |student_name| | 46 | +------------------------+---------------------------------------+ 47 | | PASSWORD | |student_pass| | 48 | +------------------------+---------------------------------------+ 49 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 50 | +------------------------+---------------------------------------+ 51 | 52 | .. figure:: ./_static/images/at_cred_detail.png 53 | :alt: Adding a Credential 54 | 55 | Adding a Credential 56 | 57 | - Select SAVE |Save button| 58 | 59 | With your credential created, next you'll create a project to point back to your GOGS instance. 60 | 61 | Creating a Project 62 | ``````````````````` 63 | 64 | A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 65 | 66 | - Click on PROJECTS 67 | - Select ADD |Add button| 68 | 69 | Complete the form using the following entries 70 | 71 | ================== =================================================== 72 | NAME Ansible Workshop Project 73 | ================== =================================================== 74 | DESCRIPTION workshop playbooks 75 | ORGANIZATION Default 76 | SCM TYPE Git 77 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 78 | SCM BRANCH 79 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 80 | ================== =================================================== 81 | 82 | .. figure:: ./_static/images/at_project_detail.png 83 | :alt: Defining a Project 84 | 85 | Defining a Project 86 | 87 | - Select SAVE |Save button| 88 | 89 | Creating an Inventory 90 | `````````````````````` 91 | 92 | TODO - make it pull from the project 93 | 94 | An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 95 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 96 | 97 | - Click on INVENTORIES 98 | - Select ADD |Add button| 99 | - Complete the form using the following entries 100 | 101 | +----------------+------------------------------+ 102 | | NAME | Ansible Workshop Inventory | 103 | +================+==============================+ 104 | | DESCRIPTION | Ansible Inventory | 105 | +----------------+------------------------------+ 106 | | ORGANIZATION | Default | 107 | +----------------+------------------------------+ 108 | 109 | .. 
figure:: ./_static/images/at_inv_create.png 110 | :alt: Create an Inventory 111 | 112 | Creating an Inventory 113 | 114 | - Select SAVE |Save button| 115 | 116 | Look in your ``.ansible.cfg`` file to find the path to your default inventory. This is the inventory we'll import into Tower. Your default inventory is the ``inventory`` parameter. 117 | 118 | .. parsed-literal:: 119 | 120 | $ cat ~/.ansible.cfg 121 | [defaults] 122 | stdout_callback = yaml 123 | connection = smart 124 | timeout = 60 125 | deprecation_warnings = False 126 | host_key_checking = False 127 | retry_files_enabled = False 128 | 129 | inventory = /home/|student_name|/devops-workshop/lab_inventory/hosts 130 | 131 | To import the inventory, we'll use the ``tower-manage`` utility on your control node/Tower server. 132 | 133 | .. parsed-literal:: 134 | 135 | sudo tower-manage inventory_import --source=/home/|student_name|/devops-workshop/lab_inventory/hosts --inventory-name="Ansible Workshop Inventory" 136 | 137 | You should see output similar to the following: 138 | 139 | .. figure:: ./_static/images/at_tm_stdout.png 140 | :alt: Importing an inventory with tower-manage 141 | 142 | Importing an inventory with tower-manage 143 | 144 | Feel free to browse your inventory in Tower. You should now notice that 145 | the inventory has been populated with Groups and that each of those 146 | groups contain hosts. 147 | 148 | .. figure:: ./_static/images/at_inv_group.png 149 | :alt: Inventory with Groups 150 | 151 | Inventory with Groups 152 | 153 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 154 | 155 | Creating job templates 156 | ----------------------- 157 | 158 | STIG template 159 | `````````````` 160 | 161 | Prod template 162 | `````````````` 163 | 164 | Dev template 165 | `````````````` 166 | 167 | Load balancer template 168 | ``````````````````````` 169 | 170 | Workflow templates 171 | -------------------- 172 | 173 | Summary 174 | -------- 175 | 176 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 177 | .. |Browse button| image:: ./_static/images/at_browse.png 178 | .. |Submit button| image:: ./_static/images/at_submit.png 179 | .. |Gear button| image:: ./_static/images/at_gear.png 180 | .. |Add button| image:: ./_static/images/at_add.png 181 | .. |Save button| image:: ./_static/images/at_save.png 182 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191017220628.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Ajay Chenampara 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 
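If you'd like to confirm Tower is up before opening the web UI, one optional check is a request to the API ping endpoint, which does not require authentication. This is just a sketch; it assumes ``curl`` is available on your workstation and uses ``-k`` in case the lab instance presents a self-signed certificate.

.. parsed-literal::

   curl -k https://|control_public_ip|/api/v2/ping/

A small JSON document describing the Tower instance indicates it is ready for you to log in.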
11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | Credentials are utilized by Tower for authentication when launching Ansible jobs involving machines, synchronizing with inventory sources, and importing project content from a version control system. 28 | 29 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 30 | 31 | - Select the key icon |Credentials button| on the left rail. 32 | - Click ADD |Add button| on the right side. 33 | 34 | Use this information to complete the credential form. 35 | 36 | +------------------------+---------------------------------------+ 37 | | NAME | Ansible Workshop Credential | 38 | +========================+=======================================+ 39 | | DESCRIPTION | Credentials for Ansible Workshop | 40 | +------------------------+---------------------------------------+ 41 | | ORGANIZATION | Default | 42 | +------------------------+---------------------------------------+ 43 | | TYPE | Machine | 44 | +------------------------+---------------------------------------+ 45 | | USERNAME | |student_name| | 46 | +------------------------+---------------------------------------+ 47 | | PASSWORD | |student_pass| | 48 | +------------------------+---------------------------------------+ 49 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 50 | +------------------------+---------------------------------------+ 51 | 52 | .. figure:: ./_static/images/at_cred_detail.png 53 | :alt: Adding a Credential 54 | 55 | Adding a Credential 56 | 57 | - Select SAVE |Save button| 58 | 59 | With your machine credential created, next create a source control credential to 60 | authenticate to your GOGS instance. Use the following information. 61 | 62 | +------------------------+---------------------------------------+ 63 | | NAME | GOGS Credential | 64 | +------------------------+---------------------------------------+ 65 | | DESCRIPTION | Credential for GOGS git instance | 66 | +------------------------+---------------------------------------+ 67 | | ORGANIZATION | Default | 68 | +------------------------+---------------------------------------+ 69 | | TYPE | Source Control | 70 | +------------------------+---------------------------------------+ 71 | | USERNAME | |student_name| | 72 | +------------------------+---------------------------------------+ 73 | | PASSWORD | |student_pass| | 74 | +------------------------+---------------------------------------+ 75 | 76 | When finished, click SAVE |Save button|. 77 | 78 | These credentials will allow us to configure the rest of Tower. Next, create a 79 | Project that points back to your source code in GOGS. 80 | 81 | Creating a Project 82 | ``````````````````` 83 | 84 | A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 
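As a point of reference before you fill in the form, a Tower Project sourced from SCM simply clones the repository and looks for playbooks inside it. The sketch below shows roughly what the workshop repository might contain; the file names are illustrative (based on the example playbooks and roles used elsewhere in this workshop) and your repository may differ.

.. parsed-literal::

   playbook.git
   ├── hosts
   ├── motd.yml
   ├── nginx-lb-deploy.yml
   └── roles/
       └── apache-simple/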
85 | 86 | - Click on PROJECTS 87 | - Select ADD |Add button| 88 | 89 | Complete the form using the following entries 90 | 91 | ================== =================================================== 92 | NAME Ansible Workshop Project 93 | ================== =================================================== 94 | DESCRIPTION workshop playbooks 95 | ORGANIZATION Default 96 | SCM TYPE Git 97 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 98 | SCM BRANCH 99 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 100 | ================== =================================================== 101 | 102 | .. figure:: ./_static/images/at_project_detail.png 103 | :alt: Defining a Project 104 | 105 | Defining a Project 106 | 107 | - Select SAVE |Save button| 108 | 109 | Creating an Inventory 110 | `````````````````````` 111 | 112 | TODO - make it pull from the project 113 | 114 | An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 115 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 116 | 117 | - Click on INVENTORIES 118 | - Select ADD |Add button| 119 | - Complete the form using the following entries 120 | 121 | +----------------+------------------------------+ 122 | | NAME | Ansible Workshop Inventory | 123 | +================+==============================+ 124 | | DESCRIPTION | Ansible Inventory | 125 | +----------------+------------------------------+ 126 | | ORGANIZATION | Default | 127 | +----------------+------------------------------+ 128 | 129 | .. figure:: ./_static/images/at_inv_create.png 130 | :alt: Create an Inventory 131 | 132 | Creating an Inventory 133 | 134 | - Select SAVE |Save button| 135 | 136 | Look in your ``.ansible.cfg`` file to find the path to your default inventory. This is the inventory we'll import into Tower. Your default inventory is the ``inventory`` parameter. 137 | 138 | .. parsed-literal:: 139 | 140 | $ cat ~/.ansible.cfg 141 | [defaults] 142 | stdout_callback = yaml 143 | connection = smart 144 | timeout = 60 145 | deprecation_warnings = False 146 | host_key_checking = False 147 | retry_files_enabled = False 148 | 149 | inventory = /home/|student_name|/devops-workshop/lab_inventory/hosts 150 | 151 | To import the inventory, we'll use the ``tower-manage`` utility on your control node/Tower server. 152 | 153 | .. parsed-literal:: 154 | 155 | sudo tower-manage inventory_import --source=/home/|student_name|/devops-workshop/lab_inventory/hosts --inventory-name="Ansible Workshop Inventory" 156 | 157 | You should see output similar to the following: 158 | 159 | .. figure:: ./_static/images/at_tm_stdout.png 160 | :alt: Importing an inventory with tower-manage 161 | 162 | Importing an inventory with tower-manage 163 | 164 | Feel free to browse your inventory in Tower. You should now notice that 165 | the inventory has been populated with Groups and that each of those 166 | groups contain hosts. 167 | 168 | .. figure:: ./_static/images/at_inv_group.png 169 | :alt: Inventory with Groups 170 | 171 | Inventory with Groups 172 | 173 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 
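For reference, the file that ``tower-manage`` imported above is a plain INI-style Ansible inventory. The sketch below shows the general shape of such a file; the group and host names are made up for illustration and will not match your lab inventory exactly.

.. parsed-literal::

   $ cat /home/|student_name|/devops-workshop/lab_inventory/hosts
   [webservers]
   node1.example.com
   node2.example.com

   [loadbalancers]
   lb.example.com

Each INI section becomes a Group in Tower, and the entries under it become the Hosts you just browsed.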
174 | 175 | Creating job templates 176 | ----------------------- 177 | 178 | STIG template 179 | `````````````` 180 | 181 | Prod template 182 | `````````````` 183 | 184 | Dev template 185 | `````````````` 186 | 187 | Load balancer template 188 | ``````````````````````` 189 | 190 | Workflow templates 191 | -------------------- 192 | 193 | Summary 194 | -------- 195 | 196 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 197 | .. |Browse button| image:: ./_static/images/at_browse.png 198 | .. |Submit button| image:: ./_static/images/at_submit.png 199 | .. |Gear button| image:: ./_static/images/at_gear.png 200 | .. |Add button| image:: ./_static/images/at_add.png 201 | .. |Save button| image:: ./_static/images/at_save.png 202 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191017220709.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Ajay Chenampara 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | Credentials are utilized by Tower for authentication when launching Ansible jobs involving machines, synchronizing with inventory sources, and importing project content from a version control system. 28 | 29 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 30 | 31 | - Select the key icon |Credentials button| on the left rail. 32 | - Click ADD |Add button| on the right side. 33 | 34 | Use this information to complete the credential form. 
35 | 36 | +------------------------+---------------------------------------+ 37 | | NAME | Ansible Workshop Credential | 38 | +========================+=======================================+ 39 | | DESCRIPTION | Credentials for Ansible Workshop | 40 | +------------------------+---------------------------------------+ 41 | | ORGANIZATION | Default | 42 | +------------------------+---------------------------------------+ 43 | | TYPE | Machine | 44 | +------------------------+---------------------------------------+ 45 | | USERNAME | |student_name| | 46 | +------------------------+---------------------------------------+ 47 | | PASSWORD | |student_pass| | 48 | +------------------------+---------------------------------------+ 49 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 50 | +------------------------+---------------------------------------+ 51 | 52 | .. figure:: ./_static/images/at_cred_detail.png 53 | :alt: Adding a Credential 54 | 55 | Adding a Credential 56 | 57 | - Select SAVE |Save button| 58 | 59 | With your machine credential created, next create a source control credential to 60 | authenticate to your GOGS instance. Use the following information. 61 | 62 | +------------------------+---------------------------------------+ 63 | | NAME | GOGS Credential | 64 | +------------------------+---------------------------------------+ 65 | | DESCRIPTION | Credential for GOGS git instance | 66 | +------------------------+---------------------------------------+ 67 | | ORGANIZATION | Default | 68 | +------------------------+---------------------------------------+ 69 | | TYPE | Source Control | 70 | +------------------------+---------------------------------------+ 71 | | USERNAME | |student_name| | 72 | +------------------------+---------------------------------------+ 73 | | PASSWORD | |student_pass| | 74 | +------------------------+---------------------------------------+ 75 | 76 | When finished, click SAVE |Save button|. 77 | 78 | These credentials will allow us to configure the rest of Tower. Next, create a 79 | Project that points back to your source code in GOGS. 80 | 81 | Creating a Project 82 | ``````````````````` 83 | 84 | A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 85 | 86 | - Click on PROJECTS on the left rail 87 | - Select ADD |Add button| 88 | 89 | Complete the form using the following entries 90 | 91 | ================== =================================================== 92 | NAME Ansible Workshop Project 93 | ================== =================================================== 94 | DESCRIPTION workshop playbooks 95 | ORGANIZATION Default 96 | SCM TYPE Git 97 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 98 | SCM BRANCH 99 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 100 | ================== =================================================== 101 | 102 | .. figure:: ./_static/images/at_project_detail.png 103 | :alt: Defining a Project 104 | 105 | Defining a Project 106 | 107 | - Select SAVE |Save button| 108 | 109 | Creating an Inventory 110 | `````````````````````` 111 | 112 | TODO - make it pull from the project 113 | 114 | An inventory is a collection of hosts against which jobs may be launched. 
Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 115 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 116 | 117 | - Click on INVENTORIES 118 | - Select ADD |Add button| 119 | - Complete the form using the following entries 120 | 121 | +----------------+------------------------------+ 122 | | NAME | Ansible Workshop Inventory | 123 | +================+==============================+ 124 | | DESCRIPTION | Ansible Inventory | 125 | +----------------+------------------------------+ 126 | | ORGANIZATION | Default | 127 | +----------------+------------------------------+ 128 | 129 | .. figure:: ./_static/images/at_inv_create.png 130 | :alt: Create an Inventory 131 | 132 | Creating an Inventory 133 | 134 | - Select SAVE |Save button| 135 | 136 | Look in your ``.ansible.cfg`` file to find the path to your default inventory. This is the inventory we'll import into Tower. Your default inventory is the ``inventory`` parameter. 137 | 138 | .. parsed-literal:: 139 | 140 | $ cat ~/.ansible.cfg 141 | [defaults] 142 | stdout_callback = yaml 143 | connection = smart 144 | timeout = 60 145 | deprecation_warnings = False 146 | host_key_checking = False 147 | retry_files_enabled = False 148 | 149 | inventory = /home/|student_name|/devops-workshop/lab_inventory/hosts 150 | 151 | To import the inventory, we'll use the ``tower-manage`` utility on your control node/Tower server. 152 | 153 | .. parsed-literal:: 154 | 155 | sudo tower-manage inventory_import --source=/home/|student_name|/devops-workshop/lab_inventory/hosts --inventory-name="Ansible Workshop Inventory" 156 | 157 | You should see output similar to the following: 158 | 159 | .. figure:: ./_static/images/at_tm_stdout.png 160 | :alt: Importing an inventory with tower-manage 161 | 162 | Importing an inventory with tower-manage 163 | 164 | Feel free to browse your inventory in Tower. You should now notice that 165 | the inventory has been populated with Groups and that each of those 166 | groups contain hosts. 167 | 168 | .. figure:: ./_static/images/at_inv_group.png 169 | :alt: Inventory with Groups 170 | 171 | Inventory with Groups 172 | 173 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 174 | 175 | Creating job templates 176 | ----------------------- 177 | 178 | STIG template 179 | `````````````` 180 | 181 | Prod template 182 | `````````````` 183 | 184 | Dev template 185 | `````````````` 186 | 187 | Load balancer template 188 | ``````````````````````` 189 | 190 | Workflow templates 191 | -------------------- 192 | 193 | Summary 194 | -------- 195 | 196 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 197 | .. |Browse button| image:: ./_static/images/at_browse.png 198 | .. |Submit button| image:: ./_static/images/at_submit.png 199 | .. |Gear button| image:: ./_static/images/at_gear.png 200 | .. |Add button| image:: ./_static/images/at_add.png 201 | .. |Save button| image:: ./_static/images/at_save.png 202 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191018105207.rst: -------------------------------------------------------------------------------- 1 | .. 
sectionauthor:: Ajay Chenampara 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | Credentials are utilized by Tower for authentication when launching Ansible jobs involving machines, synchronizing with inventory sources, and importing project content from a version control system. 28 | 29 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 30 | 31 | - Select the key icon |Credentials button| on the left rail. 32 | - Click ADD |Add button| on the right side. 33 | 34 | Use this information to complete the credential form. 35 | 36 | +------------------------+---------------------------------------+ 37 | | NAME | Ansible Workshop Credential | 38 | +========================+=======================================+ 39 | | DESCRIPTION | Credentials for Ansible Workshop | 40 | +------------------------+---------------------------------------+ 41 | | ORGANIZATION | Default | 42 | +------------------------+---------------------------------------+ 43 | | TYPE | Machine | 44 | +------------------------+---------------------------------------+ 45 | | USERNAME | |student_name| | 46 | +------------------------+---------------------------------------+ 47 | | PASSWORD | |student_pass| | 48 | +------------------------+---------------------------------------+ 49 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 50 | +------------------------+---------------------------------------+ 51 | 52 | .. figure:: ./_static/images/at_cred_detail.png 53 | :alt: Adding a Credential 54 | 55 | Adding a Credential 56 | 57 | - Select SAVE |Save button| 58 | 59 | With your machine credential created, next create a source control credential to 60 | authenticate to your GOGS instance. Use the following information. 
61 | 62 | +------------------------+---------------------------------------+ 63 | | NAME | GOGS Credential | 64 | +------------------------+---------------------------------------+ 65 | | DESCRIPTION | Credential for GOGS git instance | 66 | +------------------------+---------------------------------------+ 67 | | ORGANIZATION | Default | 68 | +------------------------+---------------------------------------+ 69 | | TYPE | Source Control | 70 | +------------------------+---------------------------------------+ 71 | | USERNAME | |student_name| | 72 | +------------------------+---------------------------------------+ 73 | | PASSWORD | |student_pass| | 74 | +------------------------+---------------------------------------+ 75 | 76 | When finished, click SAVE |Save button|. 77 | 78 | These credentials will allow us to configure the rest of Tower. Next, create a 79 | Project that points back to your source code in GOGS. 80 | 81 | Creating a Project 82 | ``````````````````` 83 | 84 | A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 85 | 86 | - Click on PROJECTS on the left rail 87 | - Select ADD |Add button| 88 | 89 | Complete the form using the following entries 90 | 91 | ================== =================================================== 92 | NAME Ansible Workshop Project 93 | ================== =================================================== 94 | DESCRIPTION workshop playbooks 95 | ORGANIZATION Default 96 | SCM TYPE Git 97 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 98 | SCM BRANCH 99 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 100 | ================== =================================================== 101 | 102 | .. figure:: ./_static/images/at_project_detail.png 103 | :alt: Defining a Project 104 | 105 | Defining a Project 106 | 107 | - Click SAVE |Save button| 108 | 109 | With your GOGS repository set up as a source of Ansibe code for your Tower 110 | instance, next you'll add an Inventory that references the inventory file you've 111 | been maintaining in GOGS today. 112 | 113 | Creating an Inventory 114 | `````````````````````` 115 | 116 | TODO - make it pull from the project 117 | 118 | An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 119 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 120 | 121 | - Click on INVENTORIES 122 | - Select ADD |Add button| 123 | - Complete the form using the following entries 124 | 125 | +----------------+------------------------------+ 126 | | NAME | Ansible Workshop Inventory | 127 | +================+==============================+ 128 | | DESCRIPTION | Ansible Inventory | 129 | +----------------+------------------------------+ 130 | | ORGANIZATION | Default | 131 | +----------------+------------------------------+ 132 | 133 | .. 
figure:: ./_static/images/at_inv_create.png 134 | :alt: Create an Inventory 135 | 136 | Creating an Inventory 137 | 138 | - Click SAVE |Save button| 139 | 140 | Next, click 141 | 142 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 143 | 144 | Creating job templates 145 | ----------------------- 146 | 147 | STIG template 148 | `````````````` 149 | 150 | Prod template 151 | `````````````` 152 | 153 | Dev template 154 | `````````````` 155 | 156 | Load balancer template 157 | ``````````````````````` 158 | 159 | Workflow templates 160 | -------------------- 161 | 162 | Summary 163 | -------- 164 | 165 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 166 | .. |Browse button| image:: ./_static/images/at_browse.png 167 | .. |Submit button| image:: ./_static/images/at_submit.png 168 | .. |Gear button| image:: ./_static/images/at_gear.png 169 | .. |Add button| image:: ./_static/images/at_add.png 170 | .. |Save button| image:: ./_static/images/at_save.png 171 | .. |Source button| image:: ./_static/images/at_inv_source_button.png 172 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191018123053.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Ajay Chenampara 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | Credentials are utilized by Tower for authentication when launching Ansible jobs involving machines, synchronizing with inventory sources, and importing project content from a version control system. 28 | 29 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 30 | 31 | - Select the key icon |Credentials button| on the left rail. 32 | - Click ADD |Add button| on the right side. 33 | 34 | Use this information to complete the credential form. 
35 | 36 | +------------------------+---------------------------------------+ 37 | | NAME | Ansible Workshop Credential | 38 | +========================+=======================================+ 39 | | DESCRIPTION | Credentials for Ansible Workshop | 40 | +------------------------+---------------------------------------+ 41 | | ORGANIZATION | Default | 42 | +------------------------+---------------------------------------+ 43 | | TYPE | Machine | 44 | +------------------------+---------------------------------------+ 45 | | USERNAME | |student_name| | 46 | +------------------------+---------------------------------------+ 47 | | PASSWORD | |student_pass| | 48 | +------------------------+---------------------------------------+ 49 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 50 | +------------------------+---------------------------------------+ 51 | 52 | .. figure:: ./_static/images/at_cred_detail.png 53 | :alt: Adding a Credential 54 | 55 | Adding a Credential 56 | 57 | - Select SAVE |Save button| 58 | 59 | With your machine credential created, next create a source control credential to 60 | authenticate to your GOGS instance. Use the following information. 61 | 62 | +------------------------+---------------------------------------+ 63 | | NAME | GOGS Credential | 64 | +------------------------+---------------------------------------+ 65 | | DESCRIPTION | Credential for GOGS git instance | 66 | +------------------------+---------------------------------------+ 67 | | ORGANIZATION | Default | 68 | +------------------------+---------------------------------------+ 69 | | TYPE | Source Control | 70 | +------------------------+---------------------------------------+ 71 | | USERNAME | |student_name| | 72 | +------------------------+---------------------------------------+ 73 | | PASSWORD | |student_pass| | 74 | +------------------------+---------------------------------------+ 75 | 76 | When finished, click SAVE |Save button|. 77 | 78 | These credentials will allow us to configure the rest of Tower. Next, create a 79 | Project that points back to your source code in GOGS. 80 | 81 | Creating a Project 82 | ``````````````````` 83 | 84 | A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 85 | 86 | - Click on PROJECTS on the left rail 87 | - Select ADD |Add button| 88 | 89 | Complete the form using the following entries 90 | 91 | ================== =================================================== 92 | NAME Ansible Workshop Project 93 | ================== =================================================== 94 | DESCRIPTION workshop playbooks 95 | ORGANIZATION Default 96 | SCM TYPE Git 97 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 98 | SCM BRANCH 99 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 100 | ================== =================================================== 101 | 102 | .. figure:: ./_static/images/at_project_detail.png 103 | :alt: Defining a Project 104 | 105 | Defining a Project 106 | 107 | - Click SAVE |Save button| 108 | 109 | With your GOGS repository set up as a source of Ansibe code for your Tower 110 | instance, next you'll add an Inventory that references the inventory file you've 111 | been maintaining in GOGS today. 
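If you've made changes to that inventory file locally that aren't in GOGS yet, push them before continuing; the inventory source you create below only sees what is committed to the repository. A minimal sketch, run from your local clone of the playbook repository (the file name and branch here are assumptions and may differ in your repo), looks like this:

.. parsed-literal::

   git add hosts
   git commit -m "Update lab inventory"
   git push origin master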
112 | 113 | Creating an Inventory 114 | `````````````````````` 115 | 116 | An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 117 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 118 | 119 | - Click on INVENTORIES 120 | - Select ADD |Add button| 121 | - Complete the form using the following entries 122 | 123 | +----------------+------------------------------+ 124 | | NAME | Ansible Workshop Inventory | 125 | +================+==============================+ 126 | | DESCRIPTION | Ansible Inventory | 127 | +----------------+------------------------------+ 128 | | ORGANIZATION | Default | 129 | +----------------+------------------------------+ 130 | 131 | .. figure:: ./_static/images/at_inv_create.png 132 | :alt: Create an Inventory 133 | 134 | Creating an Inventory 135 | 136 | - Click SAVE |Save button| 137 | 138 | Next, click Sources |Source button| to add a source for your inventory. 139 | 140 | Inventory Sources 141 | ~~~~~~~~~~~~~~~~~~~ 142 | 143 | 144 | 145 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 146 | 147 | Creating job templates 148 | ----------------------- 149 | 150 | STIG template 151 | `````````````` 152 | 153 | Prod template 154 | `````````````` 155 | 156 | Dev template 157 | `````````````` 158 | 159 | Load balancer template 160 | ``````````````````````` 161 | 162 | Workflow templates 163 | -------------------- 164 | 165 | Summary 166 | -------- 167 | 168 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 169 | .. |Browse button| image:: ./_static/images/at_browse.png 170 | .. |Submit button| image:: ./_static/images/at_submit.png 171 | .. |Gear button| image:: ./_static/images/at_gear.png 172 | .. |Add button| image:: ./_static/images/at_add.png 173 | .. |Save button| image:: ./_static/images/at_save.png 174 | .. |Source button| image:: ./_static/images/at_inv_source_button.png 175 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191018151853.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Ajay Chenampara 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. 
Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | Credentials are utilized by Tower for authentication when launching Ansible jobs involving machines, synchronizing with inventory sources, and importing project content from a version control system. 28 | 29 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 30 | 31 | - Select the key icon |Credentials button| on the left rail. 32 | - Click ADD |Add button| on the right side. 33 | 34 | Use this information to complete the credential form. 35 | 36 | +------------------------+---------------------------------------+ 37 | | NAME | Ansible Workshop Credential | 38 | +========================+=======================================+ 39 | | DESCRIPTION | Credentials for Ansible Workshop | 40 | +------------------------+---------------------------------------+ 41 | | ORGANIZATION | Default | 42 | +------------------------+---------------------------------------+ 43 | | TYPE | Machine | 44 | +------------------------+---------------------------------------+ 45 | | USERNAME | |student_name| | 46 | +------------------------+---------------------------------------+ 47 | | PASSWORD | |student_pass| | 48 | +------------------------+---------------------------------------+ 49 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 50 | +------------------------+---------------------------------------+ 51 | 52 | .. figure:: ./_static/images/at_cred_detail.png 53 | :alt: Adding a Credential 54 | 55 | Adding a Credential 56 | 57 | - Select SAVE |Save button| 58 | 59 | With your machine credential created, next create a source control credential to 60 | authenticate to your GOGS instance. Use the following information. 61 | 62 | +------------------------+---------------------------------------+ 63 | | NAME | GOGS Credential | 64 | +------------------------+---------------------------------------+ 65 | | DESCRIPTION | Credential for GOGS git instance | 66 | +------------------------+---------------------------------------+ 67 | | ORGANIZATION | Default | 68 | +------------------------+---------------------------------------+ 69 | | TYPE | Source Control | 70 | +------------------------+---------------------------------------+ 71 | | USERNAME | |student_name| | 72 | +------------------------+---------------------------------------+ 73 | | PASSWORD | |student_pass| | 74 | +------------------------+---------------------------------------+ 75 | 76 | When finished, click SAVE |Save button|. 77 | 78 | These credentials will allow us to configure the rest of Tower. Next, create a 79 | Project that points back to your source code in GOGS. 80 | 81 | Creating a Project 82 | ``````````````````` 83 | 84 | A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 
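Before pointing Tower at a repository, it never hurts to sanity-check the playbooks locally. This is an optional sketch, not a lab step; run it from a checkout of the repository your Project will track, and treat the inventory and playbook names as placeholders:

.. code-block:: bash

   # --syntax-check parses the playbook and reports errors without running any tasks
   $ ansible-playbook --syntax-check -i hosts motd.yml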
85 | 86 | - Click on PROJECTS on the left rail 87 | - Select ADD |Add button| 88 | 89 | Complete the form using the following entries 90 | 91 | ================== =================================================== 92 | NAME Ansible Workshop Project 93 | ================== =================================================== 94 | DESCRIPTION workshop playbooks 95 | ORGANIZATION Default 96 | SCM TYPE Git 97 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 98 | SCM BRANCH 99 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 100 | ================== =================================================== 101 | 102 | .. figure:: ./_static/images/at_project_detail.png 103 | :alt: Defining a Project 104 | 105 | Defining a Project 106 | 107 | - Click SAVE |Save button| 108 | 109 | With your GOGS repository set up as a source of Ansibe code for your Tower 110 | instance, next you'll add an Inventory that references the inventory file you've 111 | been maintaining in GOGS today. 112 | 113 | Creating an Inventory 114 | `````````````````````` 115 | 116 | An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 117 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 118 | 119 | - Click on INVENTORIES 120 | - Select ADD |Add button| 121 | - Complete the form using the following entries 122 | 123 | +----------------+------------------------------+ 124 | | NAME | Ansible Workshop Inventory | 125 | +================+==============================+ 126 | | DESCRIPTION | Ansible Inventory | 127 | +----------------+------------------------------+ 128 | | ORGANIZATION | Default | 129 | +----------------+------------------------------+ 130 | 131 | .. figure:: ./_static/images/at_inv_create.png 132 | :alt: Create an Inventory 133 | 134 | Creating an Inventory 135 | 136 | - Click SAVE |Save button| 137 | 138 | Next, click Sources |Source button| to add a source for your inventory. 139 | 140 | Inventory Sources 141 | ~~~~~~~~~~~~~~~~~~~ 142 | 143 | Inventory sources can come from multiple locations including all of the public 144 | and on-premise cloud and infrastructure providers, Red Hat Satellite, and even 145 | custom scripts. For today's workshop, you'll add a source to your inventory that 146 | references the file in your GOGS repository project. Fill in your inventory 147 | source with the following information. 148 | 149 | ============ ============================================================= 150 | NAME GOGS Source 151 | ============ ============================================================= 152 | DESCRIPTION 153 | CREDENTIAL GOGS Credential 154 | SOURCE Sourced from Project 155 | OPTIONS [x] Overwrite [x] Overwrite Variables [x] Update on Project Change 156 | ============ ============================================================= 157 | 158 | 159 | 160 | .. figure:: ./_static/images/at_inv_source.png 161 | :alt: Adding a source to your inventory 162 | 163 | 164 | 165 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 
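If you'd like to verify the configuration outside the web UI, everything you just created is also visible through Tower's REST API. A rough spot-check from the control node might look like the following, assuming the API is served at the same address as the UI and you use the ``admin`` credentials noted at the top of this lab:

.. parsed-literal::

   # Each call returns JSON with a "count" field and a "results" list
   $ curl -sk -u admin:|student_pass| \https://|control_public_ip|/api/v2/projects/
   $ curl -sk -u admin:|student_pass| \https://|control_public_ip|/api/v2/inventories/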
166 | 167 | Creating job templates 168 | ----------------------- 169 | 170 | STIG template 171 | `````````````` 172 | 173 | Prod template 174 | `````````````` 175 | 176 | Dev template 177 | `````````````` 178 | 179 | Load balancer template 180 | ``````````````````````` 181 | 182 | Workflow templates 183 | -------------------- 184 | 185 | Summary 186 | -------- 187 | 188 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 189 | .. |Browse button| image:: ./_static/images/at_browse.png 190 | .. |Submit button| image:: ./_static/images/at_submit.png 191 | .. |Gear button| image:: ./_static/images/at_gear.png 192 | .. |Add button| image:: ./_static/images/at_add.png 193 | .. |Save button| image:: ./_static/images/at_save.png 194 | .. |Source button| image:: ./_static/images/at_inv_source_button.png 195 | -------------------------------------------------------------------------------- /.history/workshops/ansible-for-devops/08-configuring-tower_20191029152839.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Jamie Duncan 2 | .. _docs admin: cloudguy@redhat.com 3 | 4 | ================================================== 5 | Configuring Ansible Tower 6 | ================================================== 7 | 8 | In this lab will you'll be working with Ansible Tower to make it how we interface with our playbooks, roles, and infrastructure for the rest of the workshop. We'll configure Tower with your inventory, credentials, and tell it how to interface with GOGS to manage playbooks and roles. 9 | 10 | Your control node already has Tower deployed at \https://|control_public_ip|. We've also pre-applied a license key to your tower instance, so it's ready to be configured. Your Tower username is ``admin`` and your Tower password is |student_pass|. 11 | 12 | Configuring Ansible Tower 13 | -------------------------- 14 | 15 | There are a number of constructs in the Ansible Tower UI that enable multi-tenancy, notifications, scheduling, etc. Today we're going to focus on a few of the key constructs that are essential to any workflow. 16 | 17 | - Credentials 18 | - Projects 19 | - Inventory 20 | - Job Template 21 | 22 | Let's start with adding a Credential. 23 | 24 | Creating Credentials 25 | `````````````````````` 26 | 27 | Credentials are utilized by Tower for authentication when launching Ansible jobs involving machines, synchronizing with inventory sources, and importing project content from a version control system. 28 | 29 | There are many `types of credentials `__ including machine, network, and various cloud providers. In this workshop, we'll create a *machine* credential. 30 | 31 | - Select the key icon |Credentials button| on the left rail. 32 | - Click ADD |Add button| on the right side. 33 | 34 | Use this information to complete the credential form. 
35 | 36 | +------------------------+---------------------------------------+ 37 | | NAME | Ansible Workshop Credential | 38 | +========================+=======================================+ 39 | | DESCRIPTION | Credentials for Ansible Workshop | 40 | +------------------------+---------------------------------------+ 41 | | ORGANIZATION | Default | 42 | +------------------------+---------------------------------------+ 43 | | TYPE | Machine | 44 | +------------------------+---------------------------------------+ 45 | | USERNAME | |student_name| | 46 | +------------------------+---------------------------------------+ 47 | | PASSWORD | |student_pass| | 48 | +------------------------+---------------------------------------+ 49 | | PRIVILEGE ESCALATION | Sudo (This is the default) | 50 | +------------------------+---------------------------------------+ 51 | 52 | .. figure:: ./_static/images/at_cred_detail.png 53 | :alt: Adding a Credential 54 | 55 | Adding a Credential 56 | 57 | Click SAVE |Save button| to save each new Credential. 58 | 59 | With your machine credential created, next create a source control credential to 60 | authenticate to your GOGS instance. Use the following information. 61 | 62 | +------------------------+---------------------------------------+ 63 | | NAME | GOGS Credential | 64 | +------------------------+---------------------------------------+ 65 | | DESCRIPTION | Credential for GOGS git instance | 66 | +------------------------+---------------------------------------+ 67 | | ORGANIZATION | Default | 68 | +------------------------+---------------------------------------+ 69 | | TYPE | Source Control | 70 | +------------------------+---------------------------------------+ 71 | | USERNAME | |student_name| | 72 | +------------------------+---------------------------------------+ 73 | | PASSWORD | |student_pass| | 74 | +------------------------+---------------------------------------+ 75 | 76 | When finished, click SAVE |Save button|. 77 | 78 | These credentials will allow us to configure the rest of Tower. Next, create a 79 | Project that points back to your source code in GOGS. 80 | 81 | Creating a Project 82 | ``````````````````` 83 | 84 | A Project is a logical collection of Ansible playbooks, represented in Tower. You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your Tower server, or by placing your playbooks into a source code management (SCM) system supported by Tower, including Git, Subversion, and Mercurial. 85 | 86 | - Click on PROJECTS on the left rail 87 | - Select ADD |Add button| 88 | 89 | Complete the form using the following entries 90 | 91 | ================== =================================================== 92 | NAME Ansible Workshop Project 93 | ================== =================================================== 94 | DESCRIPTION workshop playbooks 95 | ORGANIZATION Default 96 | SCM TYPE Git 97 | SCM URL \http://|control_public_ip|/|student_name|/playbook.git 98 | SCM BRANCH 99 | SCM UPDATE OPTIONS [x] Clean [x] Delete on Update [x] Update on Launch 100 | ================== =================================================== 101 | 102 | .. figure:: ./_static/images/at_project_detail.png 103 | :alt: Defining a Project 104 | 105 | Defining a Project 106 | 107 | Click SAVE |Save button| to save your project. 
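Saving an SCM-backed Project normally kicks off an initial repository sync, which you can watch from the Projects view. If you're curious, the same information is available from the REST API; a rough sketch, using the ``admin`` credentials noted at the top of this lab:

.. parsed-literal::

   # Lists recent SCM syncs along with their status (pending, running, successful, failed)
   $ curl -sk -u admin:|student_pass| \https://|control_public_ip|/api/v2/project_updates/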
108 | 109 | With your GOGS repository set up as a source of Ansibe code for your Tower 110 | instance, next you'll add an Inventory that references the inventory file you've 111 | been maintaining in GOGS today. 112 | 113 | Creating an Inventory 114 | `````````````````````` 115 | 116 | An inventory is a collection of hosts against which jobs may be launched. Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by entering host names into Tower, or from one of Ansible Tower’s supported cloud providers. 117 | An Inventory can also be imported into Tower using the ``tower-manage`` command and this is how we are going to add an inventory for this workshop. 118 | 119 | - Click on INVENTORIES 120 | - Select ADD |Add button| 121 | - Complete the form using the following entries 122 | 123 | +----------------+------------------------------+ 124 | | NAME | Ansible Workshop Inventory | 125 | +================+==============================+ 126 | | DESCRIPTION | Ansible Inventory | 127 | +----------------+------------------------------+ 128 | | ORGANIZATION | Default | 129 | +----------------+------------------------------+ 130 | 131 | .. figure:: ./_static/images/at_inv_create.png 132 | :alt: Create an Inventory 133 | 134 | Creating an Inventory 135 | 136 | Click SAVE |Save button| to save your new inventory. 137 | 138 | Next, click Sources |Source button| to add a source for your inventory. 139 | 140 | Inventory Sources 141 | ~~~~~~~~~~~~~~~~~~~ 142 | 143 | Inventory sources can come from multiple locations including all of the public 144 | and on-premise cloud and infrastructure providers, Red Hat Satellite, and even 145 | custom scripts. For today's workshop, you'll add a source to your inventory that 146 | references the file in your GOGS repository project. Fill in your inventory 147 | source with the following information. 148 | 149 | ============ =================================================================== 150 | NAME GOGS Source 151 | ============ =================================================================== 152 | DESCRIPTION 153 | CREDENTIAL GOGS Credential 154 | SOURCE Sourced from Project 155 | OPTIONS [x] Overwrite [x] Overwrite Variables [x] Update on Project Change 156 | ============ =================================================================== 157 | 158 | .. figure:: ./_static/images/at_inv_source.png 159 | :alt: Adding a source to your inventory 160 | 161 | 162 | Ansible Tower is now configured with everything we need to continue building out our infrastructure-as-code environment in today's workshop! 163 | 164 | Creating job templates 165 | ----------------------- 166 | 167 | Ansible Tower Job Templates are where everything comes together to get work 168 | done. 169 | 170 | Creating a STIG Job Template 171 | ````````````````````````````` 172 | 173 | Summary 174 | -------- 175 | 176 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 177 | .. |Browse button| image:: ./_static/images/at_browse.png 178 | .. |Submit button| image:: ./_static/images/at_submit.png 179 | .. |Gear button| image:: ./_static/images/at_gear.png 180 | .. |Add button| image:: ./_static/images/at_add.png 181 | .. |Save button| image:: ./_static/images/at_save.png 182 | .. 
|Source button| image:: ./_static/images/at_inv_source_button.png 183 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | dist: bionic 2 | language: minimal 3 | env: 4 | - WORKSHOP_NAME=example-workshop 5 | - WORKSHOP_NAME=ansible-for-devops 6 | - WORKSHOP_NAME=better-together 7 | stages: 8 | - test 9 | - name: deploy 10 | if: branch = master 11 | before_script: 12 | # https://podman.io/getting-started/installation#ubuntu 13 | - . /etc/os-release 14 | - echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list 15 | - curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add - 16 | - sudo apt-get update -qq 17 | - sudo apt-get -qq -y install podman slirp4netns 18 | script: hack/run.sh ${WORKSHOP_NAME} local --project=${QUAY_REPOSITORY:-creynold} --test 19 | jobs: 20 | include: 21 | - stage: deploy 22 | script: bash hack/build.sh ${WORKSHOP_NAME} quay ${QUAY_REPOSITORY:-creynold} 23 | env: WORKSHOP_NAME=ansible-for-devops 24 | - stage: deploy 25 | script: bash hack/build.sh ${WORKSHOP_NAME} quay ${QUAY_REPOSITORY:-creynold} 26 | env: WORKSHOP_NAME=better-together 27 | -------------------------------------------------------------------------------- /.vscode/settings.json: -------------------------------------------------------------------------------- 1 | { 2 | "rewrap.wrappingColumn": 80, 3 | "restructuredtext.confPath": "" 4 | } -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM centos:latest 2 | LABEL maintainer="creynold@redhat.com" 3 | 4 | ARG workshop_name=example-workshop 5 | ENV WORKSHOP_NAME=$workshop_name \ 6 | STUDENT_NAME='example student' \ 7 | BASTION_HOST='bastion.example.com' \ 8 | MASTER_URL='ocp.example.com' \ 9 | APP_DOMAIN='apps.example.com' 10 | 11 | RUN mkdir -p /opt/docs && \ 12 | chmod -R u+x /opt/docs && \ 13 | chgrp -R 0 /opt/docs && \ 14 | chmod -R g=u /opt/docs && \ 15 | yum -y install epel-release && \ 16 | yum -y install python3-devel python3-setuptools python3-pip make && \ 17 | yum -y update && \ 18 | yum -y clean all --enablerepo='*' 19 | 20 | COPY requirements.txt entrypoint.sh workshops/$workshop_name /opt/docs/ 21 | WORKDIR /opt/docs 22 | RUN pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org --no-cache-dir --upgrade pip setuptools && \ 23 | pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org --no-cache-dir -r requirements.txt 24 | 25 | EXPOSE 8080 26 | CMD ["/opt/docs/entrypoint.sh"] 27 | USER 10001 28 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Workshop Operator Lab Guide 2 | 3 | The goal of this project is to be self-documenting. We maintain most of the documentation in one of the lab guide projects. Getting it to work means you have the prerequisites and requirements taken care of. 
4 | 5 | ## Prerequisites 6 | 7 | * docker or podman on the host to view the documentation 8 | * sudo access 9 | 10 | ## Deploying a workshop 11 | 12 | There are currently 3 workshops in this repository: 13 | 14 | * example-workshop: The documentation for more advanced features as well as contributing to this project. 15 | * better-together: The Ops lab guide for the Better Together: Ansible and OpenShift workshop series. 16 | * ansible-for-devops: The revamped, container-based Ansible Essentials workshop content. 17 | 18 | ### Starting a workshop: 19 | 20 | ``` 21 | $ sudo hack/run.sh start 22 | ``` 23 | 24 | And that's it. The workshop guide your specified is running on your server on port 8080. 25 | 26 | ### Running multiple workshops at once, or running on a port other than 8080: 27 | 28 | ``` 29 | $ sudo hack/run.sh start 30 | ``` 31 | 32 | *note: only one copy of a single workshop works right now. You can run multiple different workshops on different ports, but only one of each.* 33 | 34 | ### Stopping a workshop: 35 | 36 | ``` 37 | $ sudo hack/run.sh stop 38 | ``` 39 | -------------------------------------------------------------------------------- /entrypoint.sh: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | # Helper script to automate the build and deployment of Sphinx content for a specific workshop 3 | 4 | make dirhtml 5 | cp -r _build/dirhtml/_static . 6 | cd _build/dirhtml && python3 -m http.server 8080 7 | -------------------------------------------------------------------------------- /hack/README.md: -------------------------------------------------------------------------------- 1 | # Containerized Lab Guide Hack Scripts 2 | 3 | ## build.sh 4 | 5 | This script builds out a container image for a specific lab guide. It takes up to 3 parameters. 6 | 7 | 1. Name of the workshop to build. This needs to correspond with the directory name for your workshop in the `workshops` directory. 8 | 2. Build target - `local` or `quay`. 9 | * `local` - the container is built and available in the host's container cache. 10 | * `quay` - it's built, cached, and also pushed to quay.io. Requirements for quay would be to be logged in to your runtime to authenticate against quay.io. The images are named using `quay.io//operator-workshop-lab-guide-`. 11 | 3. (optional) - Quay namespace tag. By default, this is `creynold` (makes it easier for the author). You can specify a different quay namespace using this parameter. 12 | 13 | ## run.sh 14 | 15 | This script runs containers in various environments. It's useful for testing, QA, or even running lab guides in environments that lack OpenShift or kubernetes. It takes up to 3 parameters. 16 | 17 | 1. Name of the workshop to build. This needs to correspond with the directory name for your workshop in the `workshops` directory. 18 | 2. Build target - `local`, `start`, or `stop` 19 | * `local` - Runs the named workshop, using only the local container cache. This is useful when testing out a new build that hasn't been pushed to quay.io yet. 20 | * `start` - Runs the named workshop, but pulls the latest image from quay.io and uses that as the image to run the container from 21 | * `stop` - Stops the specified workshop 22 | 3. (optional) - container host port. If not specified, the lab guide serves on port 8080. This is useful if you want to run multiple lab guides simultaneously or if you need to not use the standard port. 
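As a concrete illustration, here are a few invocations that line up with the parameters described above. The quay namespace is a placeholder, and `run.sh` prints its full flag list in its own usage message:

```
# Build the ansible-for-devops guide into the local container cache
hack/build.sh ansible-for-devops local

# Build the same guide and push it to your own quay.io namespace
hack/build.sh ansible-for-devops quay my-quay-namespace

# Build and run the guide locally on an alternate host port
hack/run.sh ansible-for-devops local --port=8081
```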
23 | 24 | ### Dynamic content 25 | 26 | One of the most interesting things about this project is its ability to take environment variables in a container and add them dynamically to the Sphinx content in your project. To do this easily, use a `prep` script in concert with `conf.py` in your Sphinx project. 27 | 28 | #### Prep script 29 | 30 | Prep scripts live in the `hack` directory, using the `prep-.sh` name format. The goal of this script is to account for any dynamic content you want in your lab guide, an add that to an environment file that's read into docker. For OpenShift-based deployments, these variables are handled by the deployment or deploymentConfig resources. The example below accounts for dynamic content in a linklight-based environment, as well as a standalone testing environment. It's all simple bash with some tests and conditionals. The goal is to have an environment file that is used in `run.sh` when starting up the container. 31 | 32 | ``` 33 | #! /usr/bin/env bash 34 | 35 | ENV_FILE=$1 36 | 37 | echo "Preparing environment variables for $WORKSHOP_NAME lab guide" 38 | 39 | STUDENT_NAME=$(grep student /etc/passwd | awk -F':' '{ print $1 }') 40 | if [[ ${#STUDENT_NAME} -eq 0 ]];then 41 | STUDENT_NAME=student1 42 | fi 43 | INVENTORY_FILE=/home/$STUDENT_NAME/devops-workshop/lab_inventory/hosts 44 | if [ -f $INVENTORY_FILE ];then 45 | CONTROL_PRIVATE_IP=$(grep 'ansible ansible_host' $INVENTORY_FILE | awk -F'=' '{ print $2 }') 46 | STUDENT_PASS=$(grep ansible_ssh_pass $INVENTORY_FILE | awk -F= '{ print $2 }') 47 | CONTROL_PUBLIC_IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4) 48 | NODE_1_IP=$(grep node1 $INVENTORY_FILE | awk -F'=' '{ print $2 }') 49 | NODE_2_IP=$(grep node2 $INVENTORY_FILE | awk -F'=' '{ print $2 }') 50 | NODE_3_IP=$(grep node3 $INVENTORY_FILE | awk -F'=' '{ print $2 }') 51 | NODE_4_IP=$(grep node4 $INVENTORY_FILE | awk -F'=' '{ print $2 }') 52 | else 53 | CONTROL_PRIVATE_IP=10.0.0.1 54 | STUDENT_PASS=redhat01 55 | CONTROL_PUBLIC_IP=192.168.0.1 56 | NODE_1_IP=10.0.0.2 57 | NODE_2_IP=10.0.0.3 58 | NODE_3_IP=10.0.0.4 59 | NODE_4_IP=10.0.0.5 60 | fi 61 | 62 | echo "CONTROL_PRIVATE_IP="$CONTROL_PRIVATE_IP >> $ENV_FILE 63 | echo "CONTROL_PUBLIC_IP="$CONTROL_PUBLIC_IP >> $ENV_FILE 64 | echo "STUDENT_NAME="$STUDENT_NAME >> $ENV_FILE 65 | echo "STUDENT_PASS="$STUDENT_PASS >> $ENV_FILE 66 | echo "NODE_1_IP="$NODE_1_IP >> $ENV_FILE 67 | echo "NODE_2_IP="$NODE_2_IP >> $ENV_FILE 68 | echo "NODE_3_IP="$NODE_3_IP >> $ENV_FILE 69 | echo "NODE_4_IP="$NODE_4_IP >> $ENV_FILE 70 | ``` 71 | 72 | #### conf.py 73 | 74 | `conf.py` is the primary configuration file for sphinx-docs. To take the environment variables and make them sphinx subsitutions, we use the `rst_prolog` parameter. This takes the environment variables and makes them available as substitutions in your sphinx project using [their standard process](http://docutils.sourceforge.net/docs/ref/rst/directives.html#replacement-text). 75 | 76 | ``` 77 | rst_prolog = """ 78 | .. |workshop_name_clean| replace:: %s 79 | .. |workshop_name| replace:: %s 80 | .. |student_name| replace:: %s 81 | .. |student_pass| replace:: %s 82 | .. |control_public_ip| replace:: %s 83 | .. |node_1_ip| replace:: %s 84 | .. |node_2_ip| replace:: %s 85 | .. |node_3_ip| replace:: %s 86 | .. 
|node_4_ip| replace:: %s 87 | """ % (project_clean, 88 | os.environ['WORKSHOP_NAME'], 89 | os.environ['STUDENT_NAME'], 90 | os.environ['STUDENT_PASS'], 91 | os.environ['CONTROL_PUBLIC_IP'], 92 | os.environ['NODE_1_IP'], 93 | os.environ['NODE_2_IP'], 94 | os.environ['NODE_3_IP'], 95 | os.environ['NODE_4_IP'], 96 | ) 97 | ``` 98 | 99 | #### how this works 100 | 101 | Sphinx is a static content generator. So dynamic content isn't possible. But this project actually takes those variables and renders the text within the container before starting to serve it. That means any time you redeploy a container to reflect a change, the content is re-generated and is therefore (sorta') dynamic. 102 | -------------------------------------------------------------------------------- /hack/build.sh: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | # Helper script to build container images for each workshop 3 | 4 | WORKSHOP_NAME=$1 5 | LOCATION=${2:local} 6 | QUAY_PROJECT=${3:-creynold} 7 | 8 | cd $(dirname $(realpath $0))/.. 9 | if [ -f .quay_creds ]; then 10 | LOCATION=quay 11 | . .quay_creds 12 | fi 13 | 14 | case $LOCATION in 15 | local) 16 | podman build --build-arg workshop_name=$WORKSHOP_NAME \ 17 | -t quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME . 18 | ;; 19 | quay) 20 | echo Logging in to quay.io 21 | # designed to be used by travis-ci, where the docker_* variables are defined 22 | echo "$DOCKER_PASSWORD" | podman login -u "$DOCKER_USERNAME" --password-stdin quay.io || exit 1 23 | 24 | echo Building image 25 | podman build --build-arg workshop_name=$WORKSHOP_NAME \ 26 | -t quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME . || exit 2 27 | echo Pushing image 28 | podman push quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME || exit 3 29 | ;; 30 | *) 31 | echo "usage: ./hack/build.sh WORKSHOP_NAME [local|quay] [QUAY_PROJECT]" 32 | ;; 33 | esac 34 | -------------------------------------------------------------------------------- /hack/prep-ansible-for-devops.sh: -------------------------------------------------------------------------------- 1 | #! 
/usr/bin/env bash 2 | 3 | ENV_FILE=$1 4 | STUDENT_NAME=${2:-student1} 5 | 6 | echo "Preparing environment variables for ansible-for-devops lab guide" 7 | 8 | INVENTORY_FILE=/home/$STUDENT_NAME/devops-workshop/lab_inventory/hosts 9 | if [ -f $INVENTORY_FILE ]; then 10 | CONTROL_PRIVATE_IP=$(grep 'ansible ansible_host' $INVENTORY_FILE | awk -F'=' '{ print $2 }') 11 | STUDENT_PASS=$(grep ansible_ssh_pass $INVENTORY_FILE | awk -F= '{ print $2 }') 12 | CONTROL_PUBLIC_IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4) 13 | NODE_1_IP=$(grep dev_web1 $INVENTORY_FILE | head -1 | awk -F'=' '{ print $2 }') 14 | NODE_2_IP=$(grep dev_web2 $INVENTORY_FILE | head -1 | awk -F'=' '{ print $2 }') 15 | NODE_3_IP=$(grep prod_web1 $INVENTORY_FILE | head -1 | awk -F'=' '{ print $2 }') 16 | NODE_4_IP=$(grep prod_web2 $INVENTORY_FILE | head -1 | awk -F'=' '{ print $2 }') 17 | else 18 | CONTROL_PRIVATE_IP=10.0.0.1 19 | STUDENT_PASS=redhat01 20 | CONTROL_PUBLIC_IP=192.168.0.1 21 | NODE_1_IP=10.0.0.2 22 | NODE_2_IP=10.0.0.3 23 | NODE_3_IP=10.0.0.4 24 | NODE_4_IP=10.0.0.5 25 | fi 26 | 27 | echo "CONTROL_PRIVATE_IP="$CONTROL_PRIVATE_IP >> $ENV_FILE 28 | echo "CONTROL_PUBLIC_IP="$CONTROL_PUBLIC_IP >> $ENV_FILE 29 | echo "STUDENT_NAME="$STUDENT_NAME >> $ENV_FILE 30 | echo "STUDENT_PASS="$STUDENT_PASS >> $ENV_FILE 31 | echo "NODE_1_IP="$NODE_1_IP >> $ENV_FILE 32 | echo "NODE_2_IP="$NODE_2_IP >> $ENV_FILE 33 | echo "NODE_3_IP="$NODE_3_IP >> $ENV_FILE 34 | echo "NODE_4_IP="$NODE_4_IP >> $ENV_FILE 35 | -------------------------------------------------------------------------------- /hack/prep-better-together.sh: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | 3 | ENV_FILE=$1 4 | STUDENT_NAME=${2:-student1} 5 | echo "Preparing environment variables for $WORKSHOP_NAME lab guide" 6 | 7 | STUDENT_PASS=redhat01 8 | OPENSHIFT_VER=3.11 9 | CONTROL_PUBLIC_IP=192.168.10.1 10 | LAB_DOMAIN=apps.example.com 11 | 12 | echo "OPENSHIFT_VER="$OPENSHIFT_VER >> $ENV_FILE 13 | echo "CONTROL_PUBLIC_IP="$CONTROL_PUBLIC_IP >> $ENV_FILE 14 | echo "STUDENT_NAME="$STUDENT_NAME >> $ENV_FILE 15 | echo "STUDENT_PASS="$STUDENT_PASS >> $ENV_FILE 16 | echo "LAB_DOMAIN="$LAB_DOMAIN >> $ENV_FILE 17 | -------------------------------------------------------------------------------- /hack/run.sh: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | # Helper function to run the newly built containers in various locations 3 | 4 | print_usage() { 5 | echo "Usage: ./run.sh WORKSHOP_NAME (setup|local|start|stop|remove) [(-p |--port=)PORT]" >&2 6 | echo " [--project=QUAY_PROJECT_NAME] [(-e |--env-file=)ENV_FILE_PATH] [--test]" >&2 7 | } 8 | 9 | cd $(dirname $(realpath $0))/.. 
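# The first two positional arguments are the workshop name and the run type
# (setup|local|start|stop|remove); anything left after them is parsed below as
# optional flags: --project=, -p/--port=, -e/--env-file=, and --test.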
10 | 11 | WORKSHOP_NAME=$1 12 | shift 13 | RUN_TYPE=$1 14 | shift 15 | 16 | # defaults 17 | QUAY_PROJECT=creynold 18 | PORT=8888 19 | ENV_FILE=/tmp/env.list 20 | do_tests='' 21 | 22 | # If STUDENT_NAME is unset, check /etc/passwd for a student, otherwise default 23 | # to student1 24 | export STUDENT_NAME=$(_PASSWD_STUDENT=$(awk -F: '/student/{ print $1 }' /etc/passwd); 25 | echo ${STUDENT_NAME:-${_PASSWD_STUDENT:-student1}}) 26 | 27 | while [ $# -gt 0 ]; do 28 | case "$1" in 29 | --project=*) 30 | QUAY_PROJECT=$(echo "$1" | cut -d= -f2-) 31 | ;; 32 | -p|--port=*) 33 | if [ "$1" = '-p' ]; then 34 | shift 35 | PORT="$1" 36 | else 37 | PORT=$(echo "$1" | cut -d= -f2-) 38 | fi 39 | ;; 40 | -e|--env-file=*) 41 | if [ "$1" = '-e' ]; then 42 | shift 43 | ENV_FILE="$1" 44 | else 45 | ENV_FILE=$(echo "$1" | cut -d= -f2-) 46 | fi 47 | ;; 48 | --test) 49 | do_tests=true 50 | ;; 51 | *) 52 | print_usage 53 | exit 1 54 | ;; 55 | esac; shift 56 | done 57 | 58 | # Relies on options 59 | CONTAINER_IMAGE=quay.io/$QUAY_PROJECT/operator-workshop-lab-guide-$WORKSHOP_NAME 60 | 61 | # Create the podman run env.list file 62 | setup_local() { 63 | stop_local 64 | 65 | echo "Creating the environment variables file at $ENV_FILE" 66 | echo "WORKSHOP_NAME=$WORKSHOP_NAME" > $ENV_FILE 67 | ENV_PREP_SCRIPT=prep-$WORKSHOP_NAME.sh 68 | if [ -f hack/$ENV_PREP_SCRIPT ]; then 69 | echo Running prep script 70 | hack/$ENV_PREP_SCRIPT $ENV_FILE $STUDENT_NAME 71 | fi 72 | 73 | remove_container 74 | 75 | echo Defining podman container from $CONTAINER_IMAGE 76 | podman create --env-file $ENV_FILE --name $WORKSHOP_NAME -p $PORT:8080 $CONTAINER_IMAGE 77 | 78 | echo Dropping systemd unit file 79 | mkdir -p $HOME/.config/systemd/user 80 | cat << EOF > $HOME/.config/systemd/user/$WORKSHOP_NAME.service 81 | [Unit] 82 | Description=$WORKSHOP_NAME Lab Guide container 83 | Wants=network.target multi-user.target 84 | After=network.target 85 | 86 | [Service] 87 | Restart=always 88 | ExecStart=/usr/bin/podman start -a $WORKSHOP_NAME 89 | ExecStop=/usr/bin/podman stop -t 2 $WORKSHOP_NAME 90 | 91 | [Install] 92 | WantedBy=multi-user.target 93 | EOF 94 | systemctl --user daemon-reload 95 | 96 | echo Enabling systemd unit 97 | systemctl --user enable $WORKSHOP_NAME.service 98 | } 99 | 100 | stop_local() { 101 | if container_installed; then 102 | echo Stopping $WORKSHOP_NAME container 103 | systemctl --user stop $WORKSHOP_NAME.service 104 | fi 105 | } 106 | 107 | container_installed() { 108 | systemctl --user | grep -qF "$WORKSHOP_NAME Lab Guide container" 109 | return $? 
110 | } 111 | 112 | start_local() { 113 | stop_local 114 | 115 | echo Running dedicated local build 116 | hack/build.sh $WORKSHOP_NAME local $QUAY_PROJECT || exit 1 117 | 118 | echo Ensuring setup complete 119 | container_installed || setup_local 120 | 121 | echo Starting systemd container for $WORKSHOP_NAME 122 | systemctl --user start $WORKSHOP_NAME 123 | } 124 | 125 | start() { 126 | stop_local 127 | 128 | echo Pulling image $CONTAINER_IMAGE 129 | podman pull $CONTAINER_IMAGE || exit 1 130 | 131 | echo Ensuring setup complete 132 | container_installed || setup_local 133 | 134 | echo Starting systemd container for $WORKSHOP_NAME 135 | systemctl --user start $WORKSHOP_NAME 136 | } 137 | 138 | remove_container() { 139 | existing_container=$(podman ps -a -f name=$WORKSHOP_NAME --format='{{ $.ID }}') 140 | if [ "$existing_container" ]; then 141 | echo Removing existing $WORKSHOP_NAME container ID $existing_container 142 | podman rm -f $existing_container 143 | fi 144 | } 145 | 146 | remove_local() { 147 | rm -f $ENV_FILE 148 | remove_container 149 | if container_installed; then 150 | systemctl --user stop $WORKSHOP_NAME.service 151 | systemctl --user disable $WORKSHOP_NAME.service 152 | rm -f $HOME/.config/systemd/user/$WORKSHOP_NAME.service 153 | systemctl --user daemon-reload 154 | systemctl --user reset-failed 155 | fi 156 | } 157 | 158 | testable='' 159 | case $RUN_TYPE in 160 | setup) 161 | setup_local 162 | ;; 163 | start) 164 | start 165 | testable=true 166 | ;; 167 | local) 168 | start_local 169 | testable=true 170 | ;; 171 | stop) 172 | stop_local 173 | ;; 174 | remove) 175 | remove_local 176 | ;; 177 | *) 178 | print_usage 179 | exit 1 180 | ;; 181 | esac 182 | 183 | if [ -n "$do_tests" -a -n "$testable" ]; then 184 | count=0 185 | step=5 186 | max_failures=12 187 | while ! curl 127.0.0.1:$PORT &>/dev/null; do 188 | (( count++ )) 189 | if [ $count -ge $max_failures ]; then 190 | echo "Failed attempt $count of $max_failures.... dumping logs" >&2 191 | echo 192 | echo systemctl --user status $WORKSHOP_NAME: 193 | systemctl --user status $WORKSHOP_NAME | sed 's/^/ /' 194 | echo journalctl --user -u $WORKSHOP_NAME: 195 | journalctl --user -u $WORKSHOP_NAME | sed 's/^/ /' 196 | exit 13 197 | fi 198 | echo "Failed attempt $count of $max_failures.... retrying in $step" >&2 199 | sleep $step 200 | done 201 | curl localhost:8888 |& grep -F '' 202 | stop_local 203 | fi 204 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | Sphinx 2 | sphinx_rtd_theme 3 | sphinx_autobuild 4 | livereload 5 | recommonmark 6 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/01-introduction.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. _docs admin: creynold@redhat.com 3 | 4 | ======================== 5 | Introduction 6 | ======================== 7 | 8 | Thanks for making the time attending today's workshop! We hope it provides value for you. All of the workshop content is deployed in :aws:`AWS<ec2>`. 
To participate in the workshop you'll need a laptop with the following capabilities: 9 | 10 | - Browse to public IP addresses without interference 11 | - SSH to public IP addresses without interference 12 | - Github account :github:`github.com<join>` 13 | - Quay.io account :quay:`quay.io<signin>` 14 | 15 | 16 | Workshop design 17 | ---------------- 18 | 19 | Today's workshop is designed as a series of guided labs. We'll start off today with some introductory information before we move into lab #1. 20 | 21 | .. admonition:: But I already know about Ansible! 22 | 23 | We'll try to keep the intro as short as possible. We want to get into the fun stuff in the labs as quickly as possible! But we have to ensure we're all using the same vocabulary to talk about the goals we have for today. 24 | 25 | Custom lab environment 26 | ----------------------- 27 | 28 | Your environment was custom built for you. Your lab environment is: 29 | 30 | :Username: |student_name| 31 | :Password: |student_pass| 32 | 33 | =========== ========================== ============================= 34 | Node name Purpose IP Address 35 | =========== ========================== ============================= 36 | control Primary Host/Tower Server |control_public_ip| (public) 37 | node1 Site A server 1 |node_1_ip| 38 | node2 Site A server 2 |node_2_ip| 39 | node3 Site B server 1 |node_3_ip| 40 | node4 Site B server 2 |node_4_ip| 41 | =========== ========================== ============================= 42 | 43 | This should be all of the information you need to manage your environment for the rest of the workshop. If a dependency is needed on a host that's not already there, we'll write some Ansible to take care of it. In the next section, we'll take care of level-setting everyone around the CI/CD concepts that we'll be using throughout the labs. Thanks again for joining us! 44 | 45 | Success Criteria 46 | ''''''''''''''''' 47 | 48 | We want the lab today to be a fair analogue of what you may be doing in your own environments today with containers. By the end of today's workshop our goal is to: 49 | 50 | - Deploy the tools we need to use containers and Ansible effectively in our infrastructure, including: 51 | 52 | * A :quay:`quay.io<repository>` repository to house your custom container images 53 | * :tower:`Ansible Tower<>` to provide a central point of automation 54 | 55 | 56 | - Deploy to Site B using Ansible playbooks and traditional software RPM packages. Site B is a cluster of two RHEL instances on :aws:`AWS<ec2>` and serving a custom website with ``httpd``. 57 | - Deploy to Site A using Ansible playbooks. Site A will be a cluster of two Amazon instances as well, each running ``httpd`` in containers running a custom website. 58 | - Deploy and configure Ansible Tower 59 | - Configure a containerized ``nginx`` load balancer to balance traffic for your development and production clusters 60 | - Tie all of these resources together in :tower:`Ansible Tower<>` using a CI/CD workflow 61 | 62 | .. admonition:: Copy and Paste 63 | 64 | Don't do it. The formatting in this workshop is correct to display in the browser, but not for pasting into vi/vim. 65 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/02-ci-cd-essentials.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. 
_docs admin: creynold@redhat.com 3 | 4 | ================== 5 | CI/CD essentials 6 | ================== 7 | 8 | Overview 9 | '''''''''' 10 | 11 | Our goal for today's lab is to help you build out a real-world example of how Ansible can be used to provide the fundamental concepts around :ci_def:`Continuous Integration<>` and :cd_def:`Continuous Delivery <>` (CI/CD). To help us examine the fundamental concepts more closely, we're not going to abstract the workflow using a CI/CD platform like :jenkins:`Jenkins<>`. We're building out the steps manually today using Ansible. Our ultimate goal is to make these workflows available on-demand using :tower:`Ansible Tower<>`, using Ansible best practices. 12 | 13 | Additionally, all our workloads in this lab are deployed using :rh_containers:`Linux containers<>`. If this isn't something you've done before, don't worry. The examples are all copy/paste friendly, and the goal of a workshop like this one is to give you confidence to use tools that you haven't used before. 14 | 15 | Common vocabulary 16 | '''''''''''''''''''''' 17 | 18 | Let's take a few minutes to level set around some of the vocabulary we're using throughout today's lab. 19 | 20 | Continuous Integration 21 | ``````````````````````` 22 | 23 | Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. (:ci_def:`source<>`) 24 | 25 | Continuous Delivery 26 | ````````````````````` 27 | 28 | Continuous delivery (CD or CDE) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time and, when releasing the software, doing so manually. (:cd_def:`source<>`) 29 | 30 | Containers 31 | ``````````` 32 | 33 | Containers are applications deployed by a container runtime (``docker`` in today's lab) that are formatted using the :github:`OCI container image format<opencontainers/image-spec>`. To expose a few more of the internals, we're going to manage these containers directly instead of using a container application platform like :ocp:`OpenShift<>`. 34 | 35 | Ansible 36 | ```````` 37 | 38 | Ansible is the main focus of our work today. Ansible is an automation tool written in :python:`Python<>`. We'll be interacting with Ansible using playbooks that are written using :yaml:`YAML<>`. 39 | 40 | Ansible Tower 41 | `````````````` 42 | 43 | Ansible Tower (Tower) is how you use Ansible at scale. Tower is a mature web application that provides a user interface, RBAC, and API in front of your Ansible workloads. For teams larger than a few people who are managing more than a few servers, Ansible Tower is the best way to use Ansible. 44 | 45 | Getting Started 46 | ''''''''''''''''' 47 | 48 | Ready? Let's start building out our next-generation infrastructure! 49 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/03-source-code-and-image-repos.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. 
_docs admin: creynold@redhat.com 3 | 4 | ======================================================= 5 | Inventories, Source code and container repositories 6 | ======================================================= 7 | 8 | Overview 9 | --------- 10 | 11 | In this lab we're using a Github account :github:`github.com<>` to provide version control for the playbooks and roles we'll create. Additionally, we'll use a Quay.io account :quay:`quay.io<>` registry to house our container images. Our tasks for this lab are to: 12 | 13 | 1. Set up a group in your inventory file 14 | 2. Configure Github and confirm it's functioning properly 15 | 3. Configure Quay.io and confirm it's functioning properly 16 | 17 | 18 | Let's get started. 19 | 20 | Adding inventory groups 21 | ------------------------ 22 | 23 | You need to add a ``nodes`` group to your inventory located at ``~/ansible-for-devops-workshop/hosts``. First let's create the hosts file. 24 | 25 | .. code-block:: bash 26 | 27 | $ mkdir ~/ansible-for-devops-workshop/ 28 | $ vim ~/ansible-for-devops-workshop/hosts 29 | 30 | Add your ``nodes`` group to ``~/ansible-for-devops-workshop/hosts``. 31 | 32 | .. parsed-literal:: 33 | 34 | [nodes] 35 | |node_1_ip| 36 | |node_2_ip| 37 | |node_3_ip| 38 | |node_4_ip| 39 | 40 | 41 | Creating a Github repository 42 | `````````````````````````````` 43 | 44 | To create our initial Github repository, we'll first create one in the Github UI, then add our content so far to it and push it to the remote. 45 | 46 | The first step is to create a new repository in the Github UI :github:`github.com<>`. After logging in, click |plus sign| on the top right side of the screen. 47 | 48 | Adding a new repository to Github 49 | ``````````````````````````````````` 50 | 51 | This takes you to the new repository wizard. We only need to fill out a name (``ansible-for-devops-workshop``) and description for our repository. You can keep the repository public. 52 | 53 | After this is filled out, click the Create Repository button. This will create your new repository and take you to its dashboard page. You should see the output below. 54 | 55 | .. code-block:: bash 56 | 57 | echo "# ansible-for-devops-workshop" >> README.md 58 | git init 59 | git add README.md 60 | git commit -m "first commit" 61 | git remote add origin https://github.com/|student_name|/ansible-for-devops-workshop.git 62 | git push -u origin master 63 | 64 | With this complete, we'll use the CLI to turn our existing files into a git repository with our new Github repository set as its origin. 65 | 66 | Creating a git repository from existing files 67 | `````````````````````````````````````````````` 68 | 69 | On your control node, ``cd`` to your ``ansible-for-devops-workshop`` directory. 70 | 71 | .. code-block:: bash 72 | 73 | cd ~/ansible-for-devops-workshop 74 | 75 | From this directory, follow the directions from the Github repository creation dashboard output in the section above. The only small changes we'll make to those instructions are to add all of the existing files instead of just the example ``README.md``, and to configure a username and email for your commit. 76 | 77 | .. admonition:: What about README.md 78 | 79 | The ``README.md`` file is optional for this workshop, but is definitely a best practice in general. 80 | 81 | .. parsed-literal:: 82 | $ echo "# ansible-for-devops-workshop" >> README.md 83 | $ git init 84 | Initialized empty Git repository in /home/|student_name|/ansible-for-devops-workshop/.git/ 85 | $ git add .
86 | $ git config --global user.email "GITHUB EMAIL" 87 | $ git config --global user.name |student_name| 88 | $ git commit -m "first commit" 89 | [master (root-commit) 9b28318] first commit 90 | 2 files changed, 6 insertions(+) 91 | create mode 100644 README.md 92 | create mode 100644 hosts 93 | $ git remote add origin https://github.com/|student_name|/ansible-for-devops-workshop.git 94 | $ git push -u origin master 95 | Enumerating objects: 4, done. 96 | Counting objects: 100% (4/4), done. 97 | Delta compression using up to 12 threads 98 | Compressing objects: 100% (2/2), done. 99 | Writing objects: 100% (4/4), 307 bytes | 307.00 KiB/s, done. 100 | Total 4 (delta 0), reused 0 (delta 0) 101 | To https://github.com/|student_name|/ansible-for-devops-workshop.git 102 | * [new branch] master -> master 103 | Branch 'master' set up to track remote branch 'master' from 'origin'. 104 | 105 | You'll be prompted for the GitHub username and password you set up when you registered. And that's it. To confirm your work was successful, reload your repository dashboard in Github and you should see the files that were just committed. 106 | 107 | 108 | Creating your container registry 109 | -------------------------------- 110 | 111 | You'll need to log into Quay.io :quay:`quay.io<>` to create the repository that we are going to use today. At the top right of the screen click the |plus sign| and select ``New Repository``. 112 | 113 | On the ``Create New Repository`` page, we will need to make a few changes. First let's give the repository a name. We are going to use ``ansible-for-devops-siteb``. 114 | 115 | Next we will select the ``Public`` radio button. This will allow anyone to see and pull from the repository. Also, it is free :) 116 | 117 | For the final selection, choose ``(Empty Repository)``, then click the ``Create Public Repository`` button. 118 | 119 | And now your system is ready for the rest of the workshop! 120 | 121 | Summary 122 | -------- 123 | 124 | This lab sets up and configures your environment to work with the fundamental building blocks that make DevOps possible. Everything you do should be in source control. Additionally, containers are the building blocks of modern infrastructure, as you will see in the upcoming labs. 125 | 126 | .. |plus sign| image:: _static/images/gogs_plus.png 127 | .. |save button| image:: _static/images/gogs_save.png 128 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/06-loadbalancing.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. _docs admin: creynold@redhat.com 3 | 4 | ================================= 5 | Load-balancing your clusters 6 | ================================= 7 | 8 | In this exercise we're going to deploy an :nginx:`nginx<>` reverse proxy and load balancer. This proxy will take incoming HTTP requests over port 8081 9 | and forward them to one of the 4 web servers that we have deployed. Like most of your other services in today's workshop, you'll be containerizing nginx. 10 | 11 | 12 | Modifying your inventory 13 | `````````````````````````` 14 | 15 | Like your other services, your load balancer needs a group in your Ansible inventory at ``~/ansible-for-devops-workshop/hosts``. Add a group named ``lb`` with the IP address of your control node. 16 | 17 | ..
parsed-literal:: 18 | 19 | [nodes] 20 | |node_1_ip| 21 | |node_2_ip| 22 | |node_3_ip| 23 | |node_4_ip| 24 | 25 | [sitea] 26 | |node_3_ip| 27 | |node_4_ip| 28 | 29 | [siteb] 30 | |node_1_ip| 31 | |node_2_ip| 32 | 33 | [lb] 34 | |control_public_ip| 35 | 36 | With your inventory group created, it's time to move on to customizing your nginx configuration and building a custom container image. 37 | 38 | Creating the nginx container image 39 | ----------------------------------- 40 | 41 | Nginx has a built-in module that acts as a reverse proxy load balancer named ``proxy_pass``. You'll need to configure that module as well as an ``upstream`` server group for both site-a and site-b in your ``default.conf`` configuration file at ``/etc/nginx/conf.d/default.conf``. 42 | 43 | Setting up nginx.conf 44 | ``````````````````````` 45 | 46 | The first step to create your customized nginx container is to create the proper configuration file. 47 | 48 | Create a directory named ``~/ansible-for-devops-workshop/nginx-lb`` on your control node. Inside that directory create a file named ``default.conf`` with the following contents: 49 | 50 | .. code-block:: bash 51 | 52 | $ cd ~/ansible-for-devops-workshop 53 | $ mkdir -p ~/ansible-for-devops-workshop/nginx-lb/etc/nginx/conf.d/ 54 | $ vim nginx-lb/etc/nginx/conf.d/default.conf 55 | 56 | .. parsed-literal:: 57 | 58 | upstream backend { 59 | server |node_1_ip|:8080; 60 | server |node_2_ip|:8080; 61 | server |node_3_ip|:8080; 62 | server |node_4_ip|:8080; 63 | } 64 | 65 | server { 66 | listen 8081; 67 | location / { 68 | proxy_pass http://backend; 69 | } 70 | } 71 | 72 | This configuration creates an ``nginx`` loadbalancer that passes requests on |control_public_ip| for ``/`` between your site-a and site-b nodes. 73 | 74 | Next, create a :dockerfile:`Dockerfile<>` to build an ``nginx`` container image that includes the custom configuration. 75 | 76 | Creating the nginx Dockerfile 77 | `````````````````````````````` 78 | 79 | Create a file at ``~/ansible-for-devops-workshop/nginx-lb/Dockerfile`` with the following contents. 80 | 81 | .. code-block:: bash 82 | 83 | $ cd ~/ansible-for-devops-workshop 84 | $ vim ~/ansible-for-devops-workshop/nginx-lb/Dockerfile 85 | 86 | .. parsed-literal:: 87 | 88 | FROM nginx 89 | USER root 90 | MAINTAINER |student_name| 91 | RUN rm /etc/nginx/conf.d/default.conf 92 | COPY ./etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf 93 | 94 | With all of the artifacts created, go ahead and commit them to your git repository. Next you'll create the build pipeline and write the Ansible playbooks to deploy your nginx load balancer on your control node. 95 | 96 | Creating the nginx container repo, build trigger and image 97 | `````````````````````````````````````````````````````````` 98 | 99 | You'll need to log into Quay.io :quay:`quay.io<>` to create the repository that we are going to use today. At the top right of the screen click the |plus sign| and select ``New Repository``. 100 | 101 | In the ``Create New Repository`` page, we will need to make a few changes. First let's give the repository a name. We are going to use ``ansible-for-devops-nginx-lb``. 102 | 103 | Next we will select the ``Public`` radio button. This will allow anyone to see and pull from the repository. Also it is free :) 104 | 105 | For the next selection we will ``Link to Github Repository Push`` and select ``Create Public Repository`` button. 
106 | 107 | Select the Organization that you created and select Continue. Next we will select the ``ansible-for-devops-workshop`` repository and hit Continue. 108 | 109 | For the Trigger, leave the ``default`` option and hit Continue. 110 | 111 | On the ``Select Dockerfile`` page, click the dropdown arrow, select ``/nginx-lb/Dockerfile``, and select Continue. 112 | 113 | For the context, you must select ``/nginx-lb``. If you don't, the build will fail. The context is the path within the repository that the build starts from when it references files. 114 | 115 | We will not be selecting a robot account, so hit Continue and the "Ready to Go" step will appear. From there we can select ``Continue``, and this will complete the Build Trigger. 116 | 117 | Back on the build page, click the gear icon next to your newly created Build Trigger. Select ``Run Trigger Now``, choose ``master`` for Branch/Tag, then hit ``Start Build``. 118 | 119 | In the ``Build History`` above you will now see a build that is running. Click the ``Build ID`` and watch your container being built! 120 | 121 | Your custom nginx-lb image is now in your :quay:`quay.io<>` container registry. Your next playbook will deploy your nginx-lb to your control node. 122 | 123 | Deploying your nginx load balancer 124 | ----------------------------------- 125 | 126 | Now we can create a playbook that will deploy the load balancer. 127 | 128 | Create a playbook named ``~/ansible-for-devops-workshop/nginx-lb-deploy.yml`` with the following content. 129 | 130 | .. code-block:: bash 131 | 132 | $ cd ~/ansible-for-devops-workshop 133 | $ vim ~/ansible-for-devops-workshop/nginx-lb-deploy.yml 134 | 135 | 136 | .. code-block:: yaml 137 | 138 | --- 139 | - name: deploy nginx load balancer 140 | hosts: lb 141 | become: yes 142 | 143 | tasks: 144 | - name: install docker prerequisites 145 | pip: 146 | name: docker 147 | 148 | - name: launch nginx-lb container on lb nodes 149 | docker_container: 150 | name: nginx-lb 151 | image: quay.io/|student_name|/ansible-for-devops-nginx-lb 152 | ports: 153 | - "8081:8081" 154 | restart_policy: always 155 | pull: yes 156 | 157 | We are NOT going to run the playbook yet; this will be done in Ansible Tower. 158 | 159 | Summary 160 | -------- 161 | 162 | This lab completes the build of your websites and your load balancer configuration. In the next lab you'll pull a MOTD banner from Ansible Galaxy and get it ready to be deployed by Ansible Tower. 163 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/07-galaxy-motd.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. _docs admin: creynold@redhat.com 3 | 4 | ======================= 5 | Ansible Galaxy and MOTD 6 | ======================= 7 | Overview 8 | ````````` 9 | In this exercise we are going to get an MOTD banner from :ansible_galaxy:`galaxy.ansible.com<jtyr/motd>` and add it to your Github repository. 10 | 11 | 12 | What is Ansible Galaxy? 13 | ``````````````````````` 14 | Jump-start your automation project with great content from the Ansible community. Galaxy provides pre-packaged units of work known to Ansible as roles. 15 | Roles can be dropped into Ansible playbooks and immediately put to work. You'll find roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day.
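If you want to explore Galaxy from the command line before installing anything, the ``ansible-galaxy`` tool can also search for and describe roles. For example (output omitted):

.. code-block:: bash

   # Search Galaxy for roles matching a keyword
   $ ansible-galaxy search motd

   # Show the metadata for a specific role
   $ ansible-galaxy info jtyr.motd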
16 | 17 | Getting content from Ansible Galaxy 18 | ``````````````````````````````````` 19 | By default when you install a role from Ansible Galaxy it is stored in ``/etc/ansible/roles``, which is available to all users system-wide. In this case we 20 | want to install the roles into our own ``~/ansible-for-devops-workshop/roles`` folder so we can easily add it to our Github repo. 21 | 22 | We are going to use the motd role from :ansible_galaxy:`galaxy.ansible.com<jtyr/motd>`, so let's go ahead and get hold of it. 23 | 24 | .. code-block:: bash 25 | 26 | $ cd ~/ansible-for-devops-workshop 27 | $ ansible-galaxy install --roles-path ~/ansible-for-devops-workshop/roles/ jtyr.motd 28 | 29 | Now we can create a playbook that will use our new role. 30 | 31 | .. code-block:: bash 32 | 33 | $ vim motd.yml 34 | 35 | .. code-block:: yaml 36 | 37 | --- 38 | - name: Deploy site web infrastructure 39 | hosts: all 40 | become: yes 41 | 42 | roles: 43 | - jtyr.motd 44 | 45 | 46 | We are NOT going to run the playbook yet; we will save this for Ansible Tower. 47 | 48 | Summary 49 | -------- 50 | 51 | This lab pulled the MOTD role from Ansible Galaxy and created a playbook that uses it. In the next lab we will get Ansible Tower configured and then start running playbooks! 52 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/09-building-tower-jobs.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. _docs admin: creynold@redhat.com 3 | 4 | ================================================== 5 | Building and Running an Ansible Tower Job 6 | ================================================== 7 | 8 | In this lab you'll build a job template and run it via an Ansible Tower job. 9 | 10 | Setting the MOTD 11 | ---------------- 12 | 13 | On the left hand side, select Templates. Then select the ADD |Add button| on the top right side and select job template. 14 | 15 | Use this information to complete the Job Template form. 16 | 17 | +------------------------+---------------------------------------+ 18 | | NAME | MOTD Banner | 19 | +========================+=======================================+ 20 | | DESCRIPTION | Set the MOTD Banner for all hosts | 21 | +------------------------+---------------------------------------+ 22 | | JOB TYPE | Run | 23 | +------------------------+---------------------------------------+ 24 | | INVENTORY | Ansible Workshop Inventory | 25 | +------------------------+---------------------------------------+ 26 | | PROJECT | Github | 27 | +------------------------+---------------------------------------+ 28 | | PLAYBOOK | motd.yml | 29 | +------------------------+---------------------------------------+ 30 | | CREDENTIALS | Ansible Workshop Credential | 31 | +------------------------+---------------------------------------+ 32 | | OPTIONS | Enable Privilege Escalation | 33 | +------------------------+---------------------------------------+ 34 | 35 | Select the Launch button or the Rocket Icon at the bottom of the page, then watch the Ansible output. 36 | 37 | You can verify that the MOTD has changed by logging out of your ssh session and logging back in. You will see something similar to the output below: 38 | 39 | ..
parsed-literal:: 40 | _ _ _ _ 41 | / \ _ __ ___(_) |__ | | ___ 42 | / _ \ | '_ \/ __| | '_ \| |/ _ \ 43 | / ___ \| | | \__ \ | |_) | | __/ 44 | /_/ \_\_| |_|___/_|_.__/|_|\___| 45 | FQDN: ansible 46 | Distro: RedHat 7.7 Maipo 47 | Virtual: YES 48 | 49 | CPUs: 2 50 | RAM: 3.8GB 51 | 52 | 53 | 54 | Configuring A Job Template For Site-A 55 | ------------------------------------- 56 | 57 | On the left hand side, select Templates. Then select the ADD |Add button| on the top right side and select job template. 58 | 59 | Use this information to complete the Job Template form. 60 | 61 | +------------------------+---------------------------------------+ 62 | | NAME | Ansible Workshop Website SiteA | 63 | +========================+=======================================+ 64 | | DESCRIPTION | Deploy the Ansible Workshop Website | 65 | +------------------------+---------------------------------------+ 66 | | JOB TYPE | Run | 67 | +------------------------+---------------------------------------+ 68 | | INVENTORY | Ansible Workshop Inventory | 69 | +------------------------+---------------------------------------+ 70 | | PROJECT | Github | 71 | +------------------------+---------------------------------------+ 72 | | PLAYBOOK | sitea-deploy.yml | 73 | +------------------------+---------------------------------------+ 74 | | CREDENTIALS | Ansible Workshop Credential | 75 | +------------------------+---------------------------------------+ 76 | | TAGS | rpm | 77 | +------------------------+---------------------------------------+ 78 | | OPTIONS | Enable Privilege Escalation | 79 | +------------------------+---------------------------------------+ 80 | 81 | Select the Launch button or the Rocket Icon at the bottom of the page, then watch the Ansible output. 82 | 83 | 84 | To confirm your playbook performed properly, use the ``curl`` command to access each Site-A server on port 8080. 85 | 86 | .. parsed-literal:: 87 | 88 | $ curl \http://|node_1_ip|:8080 89 | $ curl \http://|node_2_ip|:8080 90 | 91 | Configuring A Job Template For Site-B 92 | ------------------------------------- 93 | 94 | On the left hand side, select Templates. Then select the ADD |Add button| on the top right side and select job template. 95 | 96 | Use this information to complete the Job Template form. 97 | 98 | +------------------------+------------------------------------------+ 99 | | NAME | Site-B Apache Simple Container Deploy | 100 | +========================+==========================================+ 101 | | DESCRIPTION | Deploy Site-B Website | 102 | +------------------------+------------------------------------------+ 103 | | JOB TYPE | Run | 104 | +------------------------+------------------------------------------+ 105 | | INVENTORY | Ansible Workshop Inventory | 106 | +------------------------+------------------------------------------+ 107 | | PROJECT | Github | 108 | +------------------------+------------------------------------------+ 109 | | PLAYBOOK | siteb-apache-simple-container-deploy.yml | 110 | +------------------------+------------------------------------------+ 111 | | CREDENTIALS | Ansible Workshop Credential | 112 | +------------------------+------------------------------------------+ 113 | | OPTIONS | Enable Privilege Escalation | 114 | +------------------------+------------------------------------------+ 115 | 116 | Select the Launch button or the Rocket Icon at the bottom of the page, then watch the Ansible output. 
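Because Site-B is deployed as a container rather than an RPM-installed service, you can also confirm the container itself is running. This is an optional check; it assumes you can SSH to one of your Site-B nodes as in the earlier labs.

.. code-block:: bash

   # run on a Site-B node after the job completes
   $ sudo docker ps --filter name=apache-simple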
117 | 118 | 119 | To confirm your playbook performed properly, use the ``curl`` command to access each Site-B server on port 8080. 120 | 121 | .. parsed-literal:: 122 | 123 | $ curl \http://|node_3_ip|:8080 124 | $ curl \http://|node_4_ip|:8080 125 | 126 | 127 | 128 | Configuring A Job Template For NGINX LoadBalancer 129 | ------------------------------------------------- 130 | 131 | On the left hand side, select Templates. Then select the ADD |Add button| on the top right side and select job template. 132 | 133 | Use this information to complete the Job Template form. 134 | 135 | +------------------------+------------------------------------------+ 136 | | NAME | Deploy NGINX LoadBalancer | 137 | +========================+==========================================+ 138 | | DESCRIPTION | Deploy NGINX LoadBalancer | 139 | +------------------------+------------------------------------------+ 140 | | JOB TYPE | Run | 141 | +------------------------+------------------------------------------+ 142 | | INVENTORY | Ansible Workshop Inventory | 143 | +------------------------+------------------------------------------+ 144 | | PROJECT | Github | 145 | +------------------------+------------------------------------------+ 146 | | PLAYBOOK | nginx-lb-deploy.yml | 147 | +------------------------+------------------------------------------+ 148 | | CREDENTIALS | Ansible Workshop Credential | 149 | +------------------------+------------------------------------------+ 150 | | OPTIONS | Enable Privilege Escalation | 151 | +------------------------+------------------------------------------+ 152 | 153 | Select the Launch button or the Rocket Icon at the bottom of the page, then watch the Ansible output. 154 | 155 | 156 | After a successful completion, confirm your load balancer is deployed by hitting the load balancer URL and port with curl or in a web browser. 157 | 158 | .. parsed-literal:: 159 | 160 | $ curl \http://|control_public_ip|:8081 161 | 162 | Summary 163 | -------- 164 | 165 | Ansible Tower is how Ansible is consumed at enterprise scale. It provides an 166 | API, a database that is a single source of truth, and the ability to deploy in a 167 | highly-available mesh across your entire infrastructure. For any team managing 168 | production environments, Ansible Tower is a vital tool. 169 | 170 | .. |Credentials button| image:: ./_static/images/at_credentials_button.png 171 | .. |Browse button| image:: ./_static/images/at_browse.png 172 | .. |Submit button| image:: ./_static/images/at_submit.png 173 | .. |Gear button| image:: ./_static/images/at_gear.png 174 | .. |Add button| image:: ./_static/images/at_add.png 175 | .. |Save button| image:: ./_static/images/at_save.png 176 | .. |Source button| image:: ./_static/images/at_inv_source_button.png 177 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/10-bonus-round.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. _docs admin: creynold@redhat.com 3 | 4 | ================================================== 5 | Bonus Round 6 | ================================================== 7 | 8 | In this lab you will NOT be guided, but will instead be given tasks that need to be implemented. 9 | 10 | Setting the NTP 11 | --------------- 12 | 13 | Find a role on Ansible Galaxy that sets NTP to ``time.google.com``. 14 | 15 | Load this into Ansible Tower and successfully run the playbook.
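If you get stuck, the pattern is the same one you used for the motd role: install a role from Galaxy into your repository's ``roles`` directory, commit it, and point a small playbook at it. The sketch below is only an illustration; the role name is a placeholder for whichever Galaxy role you choose, configured to use ``time.google.com``.

.. code-block:: yaml

   ---
   - name: Configure NTP on all nodes
     hosts: nodes
     become: yes

     roles:
       # placeholder - substitute the Galaxy role you selected
       - your_namespace.ntp_role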
16 | 17 | Remove Nano 18 | ----------- 19 | 20 | Your security team has indicated that Nano is a threat and should no longer be installed. 21 | 22 | Either create a playbook or find a role on Ansible Galaxy that removes Nano from all nodes. 23 | 24 | Load this into Ansible Tower and successfully run the playbook. 25 | 26 | Update the Webpage 27 | ------------------- 28 | 29 | Update the ``index.html`` from earlier to have the text "Winter is Coming". 30 | 31 | Have Ansible Tower deploy this change into both Site-A and Site-B. 32 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/11-cta.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Chris Reynolds <creynold@redhat.com> 2 | .. _docs admin: creynold@redhat.com 3 | 4 | ======================= 5 | Conclusions 6 | ======================= 7 | 8 | We have reached the end of our journey. What did you think? You have created several playbooks and containers, built an infrastructure, and deployed it via Ansible Tower. 9 | Where do you go from here? Continue to write more playbooks and contribute to the community. This is just the start of your automation journey. 10 | 11 | Call to action 12 | `````````````````` 13 | Take what you have created today and build on it. You now have playbooks in your own Github repository. You now have containers that you can run, build and modify to suit your 14 | day-to-day needs. Contribute to Ansible Galaxy and share the roles that you have created! 15 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SOURCEDIR = . 8 | BUILDDIR = _build 9 | LINKCHECKDIR = _build/linkcheck 10 | 11 | 12 | # Put it first so that "make" without argument is like "make help". 13 | help: 14 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 15 | 16 | .PHONY: help Makefile 17 | 18 | # Catch-all target: route all unknown targets to Sphinx using the new 19 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
20 | %: Makefile 21 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 22 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/.DS_Store -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_add.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_add.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_browse.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_browse.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_cred_detail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_cred_detail.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_credentials_button.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_credentials_button.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_gear.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_gear.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_inv_create.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_inv_create.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_inv_group.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_inv_group.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_inv_source.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_inv_source.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_inv_source_button.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_inv_source_button.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_job_template.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_job_template.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_lic_prompt.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_lic_prompt.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_project_detail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_project_detail.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_save.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_save.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_submit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_submit.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/at_tm_stdout.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/at_tm_stdout.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_config_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_config_1.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_config_2.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_config_2.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_dash.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_dash.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_login.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_login.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_new_repo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_new_repo.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_plus.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_register.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_register.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_repo_dash.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_repo_dash.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_repo_full.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_repo_full.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/gogs_repo_info.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_repo_info.png -------------------------------------------------------------------------------- 
/workshops/ansible-for-devops/_static/images/gogs_save.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/gogs_save.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/_static/images/tower_install_splash.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/_static/images/tower_install_splash.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/hosts: -------------------------------------------------------------------------------- 1 | [nodes] 2 | 172.16.217.29 3 | 172.16.203.160 4 | 172.16.252.242 5 | 172.16.235.36 6 | 7 | [sitea] 8 | 172.16.217.29 9 | 172.16.203.160 10 | 11 | [siteb] 12 | 172.16.252.242 13 | 172.16.235.36 14 | 15 | [lb] 16 | 54.84.23.28 17 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/motd.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Deploy site web infrastructure 3 | hosts: all 4 | become: yes 5 | 6 | roles: 7 | - jtyr.motd 8 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/nginx-lb-deploy.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: deploy nginx load balancer 3 | hosts: lb 4 | become: yes 5 | 6 | tasks: 7 | - name: install docker preqequisities 8 | pip: 9 | name: docker 10 | 11 | - name: launch nginx-lb container on lb nodes 12 | docker_container: 13 | name: nginx-lb 14 | image: quay.io/creynold/ansible-for-devops-nginx-lb 15 | ports: 16 | - "8081:8081" 17 | restart_policy: always 18 | pull: yes 19 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/nginx-lb/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx 2 | USER root 3 | MAINTAINER creynold@redhat.com 4 | RUN rm /etc/nginx/conf.d/default.conf 5 | COPY ./etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf 6 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/nginx-lb/etc/nginx/conf.d/default.conf: -------------------------------------------------------------------------------- 1 | upstream backend { 2 | server 172.16.217.29:8080; 3 | server 172.16.203.160:8080; 4 | server 172.16.252.242:8080; 5 | server 172.16.235.36:8080; 6 | } 7 | 8 | server { 9 | listen 8081; 10 | location / { 11 | proxy_pass http://backend; 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. 
For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for apache-simple 3 | apache_test_message: This is a test message 4 | apache_max_keep_alive_requests: 115 5 | apache_webserver_port: 8080 6 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for apache-simple 3 | - name: restart-apache-service 4 | service: 5 | name: httpd 6 | state: restarted 7 | enabled: yes 8 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.4 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 
27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 53 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for apache 3 | - name: Ensure httpd packages are present 4 | yum: 5 | name: "{{ item }}" 6 | state: present 7 | with_items: "{{ httpd_packages }}" 8 | notify: restart-apache-service 9 | tags: 10 | - rpm 11 | 12 | - name: Ensure latest httpd.conf file is present for RPM 13 | template: 14 | src: httpd.conf.j2 15 | dest: /etc/httpd/conf/httpd.conf 16 | notify: restart-apache-service 17 | tags: 18 | - rpm 19 | 20 | - name: Ensure latest httpd.conf file is present for Container 21 | template: 22 | src: httpd.conf.j2 23 | dest: /home/student1/ansible-for-devops-workshop/siteb/etc/httpd/conf/httpd.conf 24 | tags: 25 | - container 26 | 27 | - name: Ensure latest index.html file is present for RPM 28 | template: 29 | src: index.html.j2 30 | dest: /var/www/html/index.html 31 | tags: 32 | - rpm 33 | 34 | - name: Ensure latest index.html file is present for Container 35 | template: 36 | src: index.html.j2 37 | dest: /home/student1/ansible-for-devops-workshop/siteb/var/www/html/index.html 38 | tags: 39 | - container 40 | 41 | - name: Ensure httpd service is started and enabled 42 | service: 43 | name: httpd 44 | state: started 45 | enabled: yes 46 | tags: 47 | - rpm 48 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/templates/index.html.j2: -------------------------------------------------------------------------------- 1 | <html lang="en"> 2 | <head> 3 | <meta charset="utf-8"> 4 | <title>Ansible: Automation for Everyone 5 | 6 | 30 | 31 | 32 |
33 | 34 |

{{ apache_test_message }}

35 |
36 |
{{ inventory_hostname }}
Red Hat Ansible
37 | 38 | 39 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - apache-simple -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/apache-simple/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for apache-simple 3 | httpd_packages: 4 | - httpd 5 | - mod_wsgi 6 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/.gitignore: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/examples/roles/jtyr.motd/.gitignore -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) Jiri Tyr 2014-2019 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so, 10 | subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 17 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 18 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 19 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 20 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 21 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/README.md: -------------------------------------------------------------------------------- 1 | motd 2 | ==== 3 | 4 | This role creates the Message Of The Day (motd) file with some additional 5 | information about the distro and the hardware. 6 | 7 | The configuration of the role is done in such way that it should not be necessary 8 | to change the role for any kind of configuration. All can be done either by 9 | changing role parameters or by declaring completely new configuration as a 10 | variable. That makes this role absolutely universal. See the examples below for 11 | more details. 12 | 13 | Please report any issues or send PR. 
14 | 15 | 16 | Example 17 | ------- 18 | 19 | ```yaml 20 | # This is a playlist example 21 | - host: myhost 22 | # Load the role to produce the /etc/motd file 23 | roles: 24 | - motd 25 | ``` 26 | 27 | This playbook produces the `/etc/motd' file looking like this: 28 | 29 | ``` 30 | [root@localhost ~]# cat /etc/motd 31 | _ _ _ _ 32 | / \ _ __ ___(_) |__ | | ___ 33 | / _ \ | '_ \/ __| | '_ \| |/ _ \ 34 | / ___ \| | | \__ \ | |_) | | __/ 35 | /_/ \_\_| |_|___/_|_.__/|_|\___| 36 | 37 | FQDN: localhost.localdomain 38 | Distro: CentOS 6.6 Final 39 | Virtual: YES 40 | 41 | CPUs: 1 42 | RAM: 0.49GB 43 | 44 | ``` 45 | 46 | 47 | Role variables 48 | -------------- 49 | 50 | ```yaml 51 | # Default ASCII art shown at the beginning of the motd 52 | motd_ascii_art: |-2 53 | _ _ _ _ 54 | / \ _ __ ___(_) |__ | | ___ 55 | / _ \ | '_ \/ __| | '_ \| |/ _ \ 56 | / ___ \| | | \__ \ | |_) | | __/ 57 | /_/ \_\_| |_|___/_|_.__/|_|\___| 58 | 59 | # Whether to hide the Virtual info 60 | motd_hide_virtual: no 61 | 62 | # Whether to add extra new line behind the Virtual info 63 | motd_virtual_newline: yes 64 | 65 | # Number of initial space 66 | motd_initial_spaces: 1 67 | 68 | # Indent size 69 | motd_indent: "{{ 7 if motd_hide_virtual else 8 }}" 70 | 71 | # Default information to show under the ASCII art 72 | motd_info__default: 73 | - FQDN: "{{ ansible_facts.fqdn }}" 74 | - Distro: "{{ ansible_facts.distribution }} {{ ansible_facts.distribution_version }} {{ ansible_facts.distribution_release }}" 75 | - "{{ 76 | { '': ' ' if motd_virtual_newline else '' } 77 | if motd_hide_virtual 78 | else 79 | { 'Virtual': ('YES\n' if motd_virtual_newline else 'YES') if ansible_facts.virtualization_role == 'guest' else ('NO\n' if motd_virtual_newline else 'NO') } }}" 80 | - CPUs: "{{ ansible_facts.processor_vcpus }}" 81 | - RAM: "{{ (ansible_facts.memtotal_mb / 1000) | round(1) }}GB" 82 | 83 | # Custom information to show under the ASCII art 84 | motd_info__custom: [] 85 | 86 | # Final information to show under the ASCII art 87 | motd_info: "{{ 88 | motd_info__default + 89 | motd_info__custom 90 | }}" 91 | ``` 92 | 93 | 94 | License 95 | ------- 96 | 97 | MIT 98 | 99 | 100 | Author 101 | ------ 102 | 103 | Jiri Tyr 104 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/defaults/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | # Default ASCII art shown at the beginning of the motd 4 | motd_ascii_art: |-2 5 | _ _ _ _ 6 | / \ _ __ ___(_) |__ | | ___ 7 | / _ \ | '_ \/ __| | '_ \| |/ _ \ 8 | / ___ \| | | \__ \ | |_) | | __/ 9 | /_/ \_\_| |_|___/_|_.__/|_|\___| 10 | 11 | # Whether to hide the Virtual info 12 | motd_hide_virtual: no 13 | 14 | # Whether to add extra new line behind the Virtual info 15 | motd_virtual_newline: yes 16 | 17 | # Number of initial space 18 | motd_initial_spaces: 1 19 | 20 | # Indent size 21 | motd_indent: "{{ 7 if motd_hide_virtual else 8 }}" 22 | 23 | # Default information to show under the ASCII art 24 | motd_info__default: 25 | - FQDN: "{{ ansible_facts.fqdn }}" 26 | - Distro: "{{ ansible_facts.distribution }} {{ ansible_facts.distribution_version }} {{ ansible_facts.distribution_release }}" 27 | - "{{ 28 | { '': ' ' if motd_virtual_newline else '' } 29 | if motd_hide_virtual 30 | else 31 | { 'Virtual': ('YES\n' if motd_virtual_newline else 'YES') if ansible_facts.virtualization_role == 'guest' else ('NO\n' if motd_virtual_newline else 'NO') } }}" 32 | - CPUs: "{{ 
ansible_facts.processor_vcpus }}" 33 | - RAM: "{{ (ansible_facts.memtotal_mb / 1000) | round(1) }}GB" 34 | 35 | # Custom information to show under the ASCII art 36 | motd_info__custom: [] 37 | 38 | # Final information to show under the ASCII art 39 | motd_info: "{{ 40 | motd_info__default + 41 | motd_info__custom 42 | }}" 43 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/meta/.galaxy_install_info: -------------------------------------------------------------------------------- 1 | install_date: Sun Dec 15 18:51:02 2019 2 | version: v1.5.0 3 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/meta/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | galaxy_info: 4 | author: Jiri Tyr 5 | description: Customizable Message Of The Day (motd) 6 | license: MIT 7 | min_ansible_version: 2.8 8 | platforms: 9 | - name: Debian 10 | versions: 11 | - all 12 | - name: EL 13 | versions: 14 | - all 15 | galaxy_tags: 16 | - system 17 | dependencies: [] 18 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create MOTD file 4 | template: 5 | src: motd.j2 6 | dest: /etc/motd 7 | tags: 8 | - motd_config 9 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/roles/jtyr.motd/templates/motd.j2: -------------------------------------------------------------------------------- 1 | {{ motd_ascii_art }} 2 | {% for item in motd_info -%} 3 | {%- for key, value in item.items() %} 4 | {%- if key != '' -%} 5 | {{ ' ' * (motd_initial_spaces | int) }}{{ key }}: {{ (' ' * (motd_indent | int - motd_initial_spaces | int - (key | length))) if (motd_initial_spaces | int + (key | length)) < motd_indent | int else '' }}{{ value }}{{ '\n' }} 6 | {%- else -%} 7 | {{ '\n' if motd_virtual_newline else '' }} 8 | {%- endif %} 9 | {%- endfor %} 10 | {%- endfor %} 11 | 12 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/siteb-apache-simple-container-deploy.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Deploy Site-B 3 | hosts: siteb 4 | become: yes 5 | 6 | 7 | tasks: 8 | - name: install docker preqequisities 9 | pip: 10 | name: docker 11 | 12 | 13 | - name: launch the apache-simple container on the site-b nodes 14 | docker_container: 15 | name: apache-simple 16 | image: quay.io/creynold/ansible-for-devops-siteb 17 | ports: 18 | - "8080:8080" 19 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/siteb-config-build.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Deploy site web infrastructure 3 | hosts: localhost 4 | become: yes 5 | 6 | roles: 7 | - apache-simple 8 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/siteb/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM registry.access.redhat.com/rhscl/httpd-24-rhel7 2 | USER root 3 | MAINTAINER creynold@redhat.com 
4 | ADD ./etc/httpd/conf/httpd.conf /etc/httpd/conf 5 | ADD ./var/www/html/index.html /var/www/html/ 6 | RUN chown -R apache:apache /var/www/html 7 | EXPOSE 8080 8 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/examples/siteb/var/www/html/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Ansible: Automation for Everyone 5 | 6 | 30 | 31 | 32 |
33 | 34 |

This is a test message

35 |
36 |
localhost
Red Hat Ansible
37 | 38 | 39 | -------------------------------------------------------------------------------- /workshops/ansible-for-devops/images/Logo-RedHat-D-Color-RGB.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/images/Logo-RedHat-D-Color-RGB.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/images/rh_favicon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/ansible-for-devops/images/rh_favicon.png -------------------------------------------------------------------------------- /workshops/ansible-for-devops/index.rst: -------------------------------------------------------------------------------- 1 | .. Ansible for DevOps documentation master file, created by 2 | sphinx-quickstart on Tue Apr 23 11:23:20 2019. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `doctree` directive. 5 | 6 | .. sectionauthor:: Chris Reynolds 7 | 8 | ============================================================== 9 | |workshop_name_clean| Lab Guide 10 | ============================================================== 11 | 12 | Before getting started 13 | ---------------------- 14 | 15 | We encourage you to take today's lab guide back to your own environments for re-use and even your modify it to fit your needs. With that in mind we have two ways for you to obtain the content. 16 | 17 | - *Source Code*: The source for today's lab guide (and others) is located on :github:`GitHub`. 18 | - *Container Image*: Your lab guide is deployed and running inside a container. The container image is available on :quay_image:`quay.io`. 19 | 20 | License 21 | --------- 22 | 23 | This lab guide is released under the :license:`Creative Commons 4.0 By Attribution<>` license. Feel free to re-use it in any way you see fit. Just cite your source. 24 | 25 | Questions issues and improvements 26 | ----------------------------------- 27 | 28 | Feel free to file a :github:`GitHub issue` for any improvements or issues that may come to mind while you're working through the labs today! 29 | 30 | .. toctree:: 31 | :maxdepth: 2 32 | :numbered: 33 | :Caption: Index 34 | :name: mastertoc 35 | 36 | 01-introduction 37 | 02-ci-cd-essentials 38 | 03-source-code-and-image-repos 39 | 04-deploying-sitea 40 | 05-deploying-siteb 41 | 06-loadbalancing 42 | 07-galaxy-motd 43 | 08-configuring-tower 44 | 09-building-tower-jobs 45 | 10-bonus-round 46 | 11-cta 47 | -------------------------------------------------------------------------------- /workshops/better-together/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SOURCEDIR = . 8 | BUILDDIR = _build 9 | 10 | # Put it first so that "make" without argument is like "make help". 11 | help: 12 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 13 | 14 | .PHONY: help Makefile 15 | 16 | # Catch-all target: route all unknown targets to Sphinx using the new 17 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
18 | %: Makefile 19 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) -------------------------------------------------------------------------------- /workshops/better-together/README.md: -------------------------------------------------------------------------------- 1 | # lab-guide-better-together- 2 | Workshop Operator Lab Guide for Better Together 3 | 4 | ## Running the lab guide container 5 | 6 | Note: systems need to have docker running. 7 | To start the lab in a local environment from the most current quay.io image: 8 | 9 | `hack/run.sh better-together start` 10 | 11 | To start from a local image: 12 | 13 | `hack/run.sh better-together local` 14 | 15 | To build locally: 16 | 17 | `hack/build.sh better-together local` 18 | -------------------------------------------------------------------------------- /workshops/better-together/_static/openshift_technical_overview.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/_static/openshift_technical_overview.pdf -------------------------------------------------------------------------------- /workshops/better-together/ci-cd.rst: -------------------------------------------------------------------------------- 1 | Lab 8 - A real world CI/CD scenario 2 | ==================================== 3 | 4 | In the final section of our workshop, we'll take everything we've been 5 | discussing and put it into practice with a large, complex, real-world 6 | workflow. In your cluster, you'll create multiple projects and use a 7 | Jenkins pipeline workflow that: 8 | 9 | - Checks out source code from a git server within OpenShift 10 | - Builds a Java application and archives it in Nexus 11 | - Runs unit tests for the application 12 | - Runs code analysis using SonarQube 13 | - Builds a WildFly container image 14 | - Deploys the app into a dev project and runs integration tests 15 | - Adds a manual approval step in the OpenShift UI to confirm before it 16 | promotes the application to the stage project 17 | 18 | .. important:: 19 | 20 | For this lab, SSH to your master node and escalate to your root user so you can interact with OpenShift via the ``oc`` command line client. 21 | 22 | This is a complete analog to a modern CI/CD workflow, implemented 100% 23 | within OpenShift. First, we'll need to create some projects for your 24 | CI/CD workflow to use. The content can be found on Github at 25 | https://github.com/siamaksade/openshift-cd-demo. This content has been 26 | downloaded already to your OpenShift control node at 27 | ``/root/cicd-demo``. 28 | 29 | Creating a CI/CD workflow manually 30 | '''''''''''''''''''''''''''''''''''''''' 31 | 32 | Creating the needed projects 33 | '''''''''''''''''''''''''''''''''''' 34 | 35 | On your control node, execute the following code: 36 | 37 | :: 38 | 39 | oc new-project dev --display-name="Tasks - Dev" 40 | oc new-project stage --display-name="Tasks - Stage" 41 | oc new-project cicd --display-name="CI/CD" 42 | 43 | This will create three projects in your OpenShift cluster.
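Before looking at what each project is for, you can optionally confirm all three were created by listing the projects in your cluster:

::

    oc get projects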
44 | 45 | - **Tasks - Dev:** This will house your dev team's development 46 | deployment 47 | - **Tasks - Stage:** This will be your dev team's Stage deployment 48 | - **CI/CD:** This project houses all of your CI/CD tools 49 | 50 | Giving your CI/CD project the proper permissions 51 | '''''''''''''''''''''''''''''''''''''''''''''''''''''''' 52 | 53 | Next, you need to give the CI/CD project permission to execute tasks in 54 | the Dev and Stage projects. 55 | 56 | :: 57 | 58 | oc policy add-role-to-group edit system:serviceaccounts:cicd -n dev 59 | oc policy add-role-to-group edit system:serviceaccounts:cicd -n stage 60 | 61 | Deploying your workflow 62 | '''''''''''''''''''''''''''' 63 | With your projects created, you're ready to deploy the demo and trigger 64 | the workflow. 65 | 66 | :: 67 | 68 | oc new-app -n cicd -f cicd-template.yaml --param=DEPLOY_CHE=true 69 | 70 | This process doesn't take much time for a single application, but it 71 | doesn't scale well, it's not repeatable, and it relies on the person 72 | executing it knowing the commands and the specific information about 73 | the situation. In the next section, we'll accomplish the same thing with 74 | a simple Ansible playbook, executed from your bastion host. 75 | 76 | Automating application deployment with Ansible 77 | '''''''''''''''''''''''''''''''''''''''''''''''''''' 78 | 79 | There are modules for ``oc``. However, lots of interactions with OpenShift 80 | that you'll find in OpenShift playbooks still use the ``command`` 81 | module. There is minimal risk here because the ``oc`` command itself is 82 | idempotent. 83 | 84 | A playbook that would create the entire CI/CD workflow could look as 85 | follows: 86 | 87 | :: 88 | 89 | --- 90 | - name: Deploy OpenShift CI/CD Project and begin workflow 91 | hosts: masters 92 | become: yes 93 | 94 | tasks: 95 | - name: Create Tasks project 96 | command: oc new-project dev --display-name="Tasks - Dev" 97 | - name: Create Stage project 98 | command: oc new-project stage --display-name="Tasks - Stage" 99 | - name: Create CI/CD project 100 | command: oc new-project cicd --display-name="CI/CD" 101 | - name: Set serviceaccount status for CI/CD project for dev and stage projects 102 | command: oc policy add-role-to-group edit system:serviceaccounts:cicd -n {{ item }} 103 | with_items: 104 | - dev 105 | - stage 106 | - name: Start application deployment to trigger CI/CD workflow 107 | command: oc new-app -n cicd -f cicd-template.yaml --param=DEPLOY_CHE=true 108 | 109 | This playbook is relatively simple, with a single ``with_items`` loop. 110 | What sort of additional enhancements can you think of to make this 111 | playbook more powerful to deploy workflows inside OpenShift? 112 | 113 | Summary 114 | ''''''''''''' 115 | 116 | This project takes multiple complex developer tools and integrates them into 117 | a single automated workflow. Applications are built, tested, deployed, 118 | and then humans can verify everything passes to their satisfaction before 119 | the final button is pushed to promote the application to the next level. 120 | Every one of those tools is running in a container inside your OpenShift 121 | cluster. 122 | -------------------------------------------------------------------------------- /workshops/better-together/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # Configuration file for the Sphinx documentation builder. 4 | # 5 | # This file does only contain a selection of the most common options.
For a 6 | # full list see the documentation: 7 | # http://www.sphinx-doc.org/en/master/config 8 | 9 | # -- Path setup -------------------------------------------------------------- 10 | 11 | # If extensions (or modules to document with autodoc) are in another directory, 12 | # add these directories to sys.path here. If the directory is relative to the 13 | # documentation root, use os.path.abspath to make it absolute, like shown here. 14 | # 15 | # import os 16 | # import sys 17 | # sys.path.insert(0, os.path.abspath('.')) 18 | 19 | 20 | # -- Project information ----------------------------------------------------- 21 | import os 22 | project = u'Better Together: OpenShift and Ansible Workshop Ops Lab Guide' 23 | copyright = u'2019, Red Hat' 24 | author = u'Jamie Duncan' 25 | 26 | # The short X.Y version 27 | version = u'' 28 | # The full version, including alpha/beta/rc tags 29 | release = u'1.1' 30 | html_logo = "images/Logo-RedHat-D-Color-RGB.png" 31 | html_favicon = "images/rh_favicon.png" 32 | rst_prolog = """ 33 | .. |student_name| replace:: %s 34 | .. |student_pass| replace:: %s 35 | .. |control_public_ip| replace:: %s 36 | .. |openshift_ver| replace:: %s 37 | .. |lab_domain| replace:: %s 38 | .. |ssh_command| replace:: %s@%s 39 | """ % (os.environ['STUDENT_NAME'], 40 | os.environ['STUDENT_PASS'], 41 | os.environ['CONTROL_PUBLIC_IP'], 42 | os.environ['OPENSHIFT_VER'], 43 | os.environ['LAB_DOMAIN'], 44 | os.environ['STUDENT_NAME'], 45 | os.environ['CONTROL_PUBLIC_IP'] 46 | ) 47 | 48 | # -- General configuration --------------------------------------------------- 49 | 50 | # If your documentation needs a minimal Sphinx version, state it here. 51 | # 52 | # needs_sphinx = '1.0' 53 | 54 | # Add any Sphinx extension module names here, as strings. They can be 55 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 56 | # ones. 57 | extensions = [ 58 | 'sphinx.ext.todo', 59 | 'sphinx.ext.ifconfig', 60 | 'sphinx.ext.githubpages', 61 | ] 62 | 63 | # Add any paths that contain templates here, relative to this directory. 64 | templates_path = ['_templates'] 65 | 66 | # The suffix(es) of source filenames. 67 | # You can specify multiple suffix as a list of string: 68 | # 69 | # source_suffix = ['.rst', '.md'] 70 | source_suffix = '.rst' 71 | 72 | # The master toctree document. 73 | master_doc = 'index' 74 | 75 | # The language for content autogenerated by Sphinx. Refer to documentation 76 | # for a list of supported languages. 77 | # 78 | # This is also used if you do content translation via gettext catalogs. 79 | # Usually you set "language" from the command line for these cases. 80 | language = None 81 | 82 | # List of patterns, relative to source directory, that match files and 83 | # directories to ignore when looking for source files. 84 | # This pattern also affects html_static_path and html_extra_path. 85 | exclude_patterns = [] 86 | 87 | # The name of the Pygments (syntax highlighting) style to use. 88 | pygments_style = None 89 | 90 | 91 | # -- Options for HTML output ------------------------------------------------- 92 | 93 | # The theme to use for HTML and HTML Help pages. See the documentation for 94 | # a list of builtin themes. 95 | # 96 | html_theme = 'sphinx_rtd_theme' 97 | 98 | # Theme options are theme-specific and customize the look and feel of a theme 99 | # further. For a list of options available for each theme, see the 100 | # documentation. 
101 | # 102 | html_theme_options = { 103 | 'style_nav_header_background': '#dddddd', 104 | 'style_external_links': True, 105 | 'logo_only': True, 106 | 'prev_next_buttons_location': 'both', 107 | } 108 | 109 | # Add any paths that contain custom static files (such as style sheets) here, 110 | # relative to this directory. They are copied after the builtin static files, 111 | # so a file named "default.css" will overwrite the builtin "default.css". 112 | html_static_path = ['_static'] 113 | 114 | # Custom sidebar templates, must be a dictionary that maps document names 115 | # to template names. 116 | # 117 | # The default sidebars (for documents that don't match any pattern) are 118 | # defined by theme itself. Builtin themes are using these templates by 119 | # default: ``['localtoc.html', 'relations.html', 'sourcelink.html', 120 | # 'searchbox.html']``. 121 | # 122 | # html_sidebars = {} 123 | 124 | 125 | # -- Options for HTMLHelp output --------------------------------------------- 126 | 127 | # Output file base name for HTML help builder. 128 | htmlhelp_basename = 'BetterTogetherOpenShiftandAnsibleWorkshopdoc' 129 | 130 | 131 | # -- Options for LaTeX output ------------------------------------------------ 132 | 133 | latex_elements = { 134 | # The paper size ('letterpaper' or 'a4paper'). 135 | # 136 | # 'papersize': 'letterpaper', 137 | 138 | # The font size ('10pt', '11pt' or '12pt'). 139 | # 140 | # 'pointsize': '10pt', 141 | 142 | # Additional stuff for the LaTeX preamble. 143 | # 144 | # 'preamble': '', 145 | 146 | # Latex figure (float) alignment 147 | # 148 | # 'figure_align': 'htbp', 149 | } 150 | 151 | # Grouping the document tree into LaTeX files. List of tuples 152 | # (source start file, target name, title, 153 | # author, documentclass [howto, manual, or own class]). 154 | latex_documents = [ 155 | (master_doc, 'BetterTogetherOpenShiftandAnsibleWorkshop.tex', u'Better Together: OpenShift and Ansible Workshop Documentation', 156 | u'Jamie Duncan', 'manual'), 157 | ] 158 | 159 | 160 | # -- Options for manual page output ------------------------------------------ 161 | 162 | # One entry per manual page. List of tuples 163 | # (source start file, name, description, authors, manual section). 164 | man_pages = [ 165 | (master_doc, 'bettertogetheropenshiftandansibleworkshop', u'Better Together: OpenShift and Ansible Workshop Documentation', 166 | [author], 1) 167 | ] 168 | 169 | 170 | # -- Options for Texinfo output ---------------------------------------------- 171 | 172 | # Grouping the document tree into Texinfo files. List of tuples 173 | # (source start file, target name, title, author, 174 | # dir menu entry, description, category) 175 | texinfo_documents = [ 176 | (master_doc, 'BetterTogetherOpenShiftandAnsibleWorkshop', u'Better Together: OpenShift and Ansible Workshop Documentation', 177 | author, 'BetterTogetherOpenShiftandAnsibleWorkshop', 'One line description of project.', 178 | 'Miscellaneous'), 179 | ] 180 | 181 | 182 | # -- Options for Epub output ------------------------------------------------- 183 | 184 | # Bibliographic Dublin Core info. 185 | epub_title = project 186 | 187 | # The unique identifier of the text. This can be a ISBN number 188 | # or the project homepage. 189 | # 190 | # epub_identifier = '' 191 | 192 | # A unique identification for the text. 193 | # 194 | # epub_uid = '' 195 | 196 | # A list of files that should not be packed into the epub file. 
197 | epub_exclude_files = ['search.html'] 198 | 199 | 200 | # -- Extension configuration ------------------------------------------------- 201 | 202 | # -- Options for todo extension ---------------------------------------------- 203 | 204 | # If true, `todo` and `todoList` produce output, else they produce nothing. 205 | todo_include_todos = True 206 | -------------------------------------------------------------------------------- /workshops/better-together/container-cgroups.rst: -------------------------------------------------------------------------------- 1 | Lab 4 - Quotas and Limits 2 | ===================================================== 3 | 4 | In this lab we'll investigate how the Linux kernel uses Control Groups to prevent containers from consuming more than their fair share of host resources. 5 | 6 | .. important:: 7 | 8 | For this lab, SSH to your master node and escalate to your root user so you can interact with OpenShift via the ``oc`` command line client. 9 | 10 | What are kernel control groups? 11 | '''''''''''''''''''''''''''''''' 12 | 13 | Kernel Control Groups are how containers (and other things like VMs and 14 | even clever sysadmins) limit the resources available to a given process. 15 | Nothing fixes bad code. But with control groups in place, it can become 16 | restarting a single service when it crashes instead of restarting the 17 | entire server. 18 | 19 | In OpenShift, control groups are used to deploy resource limit and 20 | requests. Let's set up some limits for applications in a new project 21 | that we'll call ``image-uploader``. Let's create a new project using our 22 | control node. 23 | 24 | .. admonition:: Why do I need to be the root user on the control node? 25 | 26 | When it deploys, OpenShift places a special certificate in 27 | ``/root/.kube/config``. This certificate allows you to access OpenShift 28 | as full admin without needing to log in. 29 | 30 | Creating projects 31 | '''''''''''''''''' 32 | 33 | Applications deployed in OpenShift are separated into *projects*. 34 | Projects are used not only as logical separators, but also as a 35 | reference for RBAC and networking policies that we'll discuss later. To 36 | create a new project, use the ``oc new-project`` command. 37 | 38 | :: 39 | 40 | # oc new-project image-uploader --display-name="Image Uploader Project" 41 | Now using project "image-uploader" on server "https://ip-172-16-129-11.ec2.internal:443". 42 | 43 | You can add applications to this project with the 'new-app' command. For example, try: 44 | 45 | oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git 46 | 47 | to build a new example application in Ruby. 48 | 49 | We'll use this project for multiple examples. Before we actually deploy 50 | an application into it, we want to set up project limits and requests. 51 | 52 | Limits and Requests 53 | '''''''''''''''''''' 54 | 55 | 56 | OpenShift Limits are per-project maximums for various objects like 57 | number of containers, storage requests, etc. Requests for a project are 58 | default values for resource allocation if no other values are requested. 59 | We'll work with this more in a while, but in the meantime, think of 60 | Requests as a lower bound for resources in a project, while Limits are 61 | the upper bound. 62 | 63 | Creating Limits and Requests for a project 64 | ''''''''''''''''''''''''''''''''''''''''''' 65 | 66 | 67 | The first thing we'll create for the Image Uploader project is a 68 | collection of Limits. 
This is done, like most things in OpenShift, by 69 | creating a YAML file and having OpenShift process it. On your control 70 | node, create a file named ``/root/core-resource-limits.yaml``. It should 71 | contain the following content. 72 | 73 | :: 74 | 75 | apiVersion: "v1" 76 | kind: "LimitRange" 77 | metadata: 78 | name: "core-resource-limits" 79 | spec: 80 | limits: 81 | - type: "Pod" 82 | max: 83 | cpu: "2" 84 | memory: "1Gi" 85 | min: 86 | cpu: "100m" 87 | memory: "4Mi" 88 | - type: "Container" 89 | max: 90 | cpu: "2" 91 | memory: "1Gi" 92 | min: 93 | cpu: "100m" 94 | memory: "4Mi" 95 | default: 96 | cpu: "300m" 97 | memory: "200Mi" 98 | defaultRequest: 99 | cpu: "200m" 100 | memory: "100Mi" 101 | maxLimitRequestRatio: 102 | cpu: "10" 103 | 104 | After your file is created, have it processed and added to the 105 | configuration for the ``image-uploader`` project. 106 | 107 | :: 108 | 109 | # oc create -f core-resource-limits.yaml -n image-uploader 110 | limitrange "core-resource-limits" created 111 | 112 | To confirm your limits have been applied, run the ``oc get limitrange`` 113 | command. 114 | 115 | :: 116 | 117 | # oc get limitrange 118 | NAME AGE 119 | core-resource-limits 2m 120 | 121 | The ``limitrange`` you just created applies to any applications deployed 122 | in the ``image-uploader`` project. Next, you're going to create resource 123 | limits for the entire project. Create a file named 124 | ``/root/compute-resources.yaml`` on your control node. It should contain 125 | the following content. 126 | 127 | :: 128 | 129 | apiVersion: v1 130 | kind: ResourceQuota 131 | metadata: 132 | name: compute-resources 133 | spec: 134 | hard: 135 | pods: "10" 136 | requests.cpu: "2" 137 | requests.memory: 2Gi 138 | limits.cpu: "3" 139 | limits.memory: 3Gi 140 | scopes: 141 | - NotTerminating 142 | 143 | Once it's created, apply the limits to the ``image-uploader`` project. 144 | 145 | :: 146 | 147 | # oc create -f compute-resources.yaml -n image-uploader 148 | resourcequota "compute-resources" created 149 | 150 | Next, confirm the limits were applied using ``oc get``. 151 | 152 | :: 153 | 154 | # oc get resourcequota -n image-uploader 155 | NAME AGE 156 | compute-resources 1m 157 | 158 | We're almost done! So far we've defined resource limits for both apps 159 | and the entire ``image-uploader`` project. These are controlled under 160 | the covers by control groups in the Linux kernel. But to be safe, we 161 | also need to define limits on the number of Kubernetes objects that can 162 | be deployed in the ``image-uploader`` project. To do this, create a file 163 | named ``/root/core-object-counts.yaml`` with the following content. 164 | 165 | :: 166 | 167 | apiVersion: v1 168 | kind: ResourceQuota 169 | metadata: 170 | name: core-object-counts 171 | spec: 172 | hard: 173 | configmaps: "10" 174 | persistentvolumeclaims: "5" 175 | resourcequotas: "5" 176 | replicationcontrollers: "20" 177 | secrets: "50" 178 | services: "10" 179 | openshift.io/imagestreams: "10" 180 | 181 | Once created, apply these controls to your ``image-uploader`` project. 182 | 183 | :: 184 | 185 | # oc create -f core-object-counts.yaml -n image-uploader 186 | resourcequota "core-object-counts" created 187 | 188 | If you re-run ``oc get resourcequota``, you'll see both quotas applied 189 | to your ``image-uploader`` project.
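Since this workshop is all about pairing OpenShift with Ansible, it's worth noting that the three ``oc create`` commands above could also be wrapped in a short play, in the same style as the CI/CD playbook elsewhere in this guide. The following is only a sketch, and it assumes the three YAML files you just wrote still exist in ``/root`` on the node where you run it:

::

    ---
    - name: Apply limits and quotas to the image-uploader project
      hosts: masters
      become: yes

      tasks:
        - name: Create the LimitRange and ResourceQuota objects
          command: oc create -f /root/{{ item }} -n image-uploader
          with_items:
            - core-resource-limits.yaml
            - compute-resources.yaml
            - core-object-counts.yaml

However you apply them, the re-run looks like this: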
190 | 191 | :: 192 | 193 | # oc get resourcequota -n image-uploader 194 | NAME AGE 195 | compute-resources 9m 196 | core-object-counts 1m 197 | 198 | Summary 199 | '''''''' 200 | 201 | The resource guardrails provided by control groups inside OpenShift are 202 | invaluable to an Ops team. We can't run around looking at every 203 | container that comes or goes. We have to be able to programmatically define 204 | flexible quotas and requests for our developers. All of this information 205 | is available in the OpenShift web interface, so your devs have no excuse 206 | for not knowing what they're using and how much they have left. 207 | -------------------------------------------------------------------------------- /workshops/better-together/container-selinux.rst: -------------------------------------------------------------------------------- 1 | Discussion - Protection with SELinux 2 | ======================================= 3 | 4 | SELinux has a polarizing reputation in the Linux community. Inside an 5 | OpenShift cluster, it is 100% required. We're not going to get into the 6 | specifics due to time constraints, but we wanted to carve out a time for 7 | you to ask any specific questions and to give a few highlights about 8 | SELinux in OpenShift. 9 | 10 | SELinux in OpenShift 11 | ''''''''''''''''''''' 12 | 13 | 1. **SELinux must be in enforcing mode in OpenShift.** This is 14 | *non-negotiable*. SELinux prevents containers from being able to 15 | communicate with the host in undesirable ways, as well as limiting 16 | cross-container resource access. 17 | 2. **SELinux requires no configuration out of the box.** You *can* 18 | customize SELinux in OpenShift, but it's not required at all. 19 | 3. **By default, OpenShift acts at the project level.** In order to 20 | provide easier communication between apps deployed in the same 21 | project, they share the same SELinux context by default. The 22 | assumption is that applications in the same project will be related 23 | and have a consistent need to share resources and easily communicate. 24 | 25 | Summary 26 | ''''''''''''''' 27 | 28 | Namespaces, CGroups, and SELinux. We know that's a lot of firehose to point at you in a single lab. These concepts are the fundamental building blocks of the container revolution that our industry is currently working through. They're also all components in the Linux kernel. That's great for speed and security. But none of those things have any way of knowing what's going on in the kernel on another server. 29 | 30 | *Real power comes when you can orchestrate containers for your applications across multiple servers.
That's where OpenShift and kubernetes step in.* 31 | -------------------------------------------------------------------------------- /workshops/better-together/images/Logo-RedHat-D-Color-RGB.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/Logo-RedHat-D-Color-RGB.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ansible-galaxy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ansible-galaxy.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ansible_indentation_view.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ansible_indentation_view.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ansible_overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ansible_overview.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/app-cli.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/app-cli.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/app-cli_gui.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/app-cli_gui.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/containers_vs_vms.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/containers_vs_vms.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/evolution.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/evolution.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/images_layer_cake.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/images_layer_cake.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/images_registry.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/images_registry.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/metrics.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/metrics.jpeg -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_add_to_project.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_add_to_project.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_app-gui_build.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_app-gui_build.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_app-gui_wizard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_app-gui_wizard.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_login.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_login.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_networking_node.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_networking_node.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_php_results.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_php_results.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_project_list.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_project_list.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/ocp_service_catalog.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/ocp_service_catalog.png -------------------------------------------------------------------------------- /workshops/better-together/images/ops/vm_vs_container.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/ops/vm_vs_container.png -------------------------------------------------------------------------------- /workshops/better-together/images/rh_favicon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/better-together/images/rh_favicon.png -------------------------------------------------------------------------------- /workshops/better-together/index.rst: -------------------------------------------------------------------------------- 1 | .. Workshop Lab Guide documentation master file, created by 2 | sphinx-quickstart on Thu Mar 28 23:19:41 2019. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | .. sectionauthor:: Jamie Duncan 7 | 8 | ============================================================== 9 | Better Together: OpenShift and Ansible Workshop Ops Lab Guide 10 | ============================================================== 11 | 12 | Welcome 13 | ======== 14 | 15 | Thank you for joining us today at the Better Together: OpenShift and Ansible Workshop. As a reminder, here's the login information you'll use for the rest of today's lab. 16 | 17 | - SSH Username: |student_name| 18 | - OpenShift Username: admin and |student_name| 19 | - Password: |student_pass| 20 | - Control Node: |control_public_ip| 21 | 22 | Introduction and getting started 23 | ================================= 24 | 25 | Thank you for taking the time to come and work with us today! Let's get started. 26 | 27 | Links and resources 28 | ----------------------------- 29 | 30 | - `Ansible Essentials slides `__ 31 | - `OpenShift Technical overview slides `__ 32 | - `CI/CD Pipeline Example `__ 33 | - `PuTTY for Windows `__ 34 | - `Ansible Docs `__ 35 | - `Ansible Module Index `__ 36 | - `OpenShift Docs `__ 37 | - `OC Command line client `__ 38 | - `Lab Guide Container Image `__ 39 | - `Lab Guide Source Code `__ 40 | - `Workshop Deploy Playbooks `__ 41 | 42 | .. 
toctree:: 43 | :maxdepth: 2 44 | :numbered: 45 | :Caption: Index 46 | :name: mastertoc 47 | 48 | ansible-intro 49 | ocp-intro 50 | integration 51 | container-registry 52 | container-namespaces 53 | container-cgroups 54 | container-selinux 55 | ocp-deploying-apps 56 | ocp-sdn 57 | ocp-routing 58 | ci-cd 59 | -------------------------------------------------------------------------------- /workshops/better-together/integration.rst: -------------------------------------------------------------------------------- 1 | Discussion - OpenShift and Ansible Together 2 | ======================================================= 3 | 4 | Deploying and managing an OpenShift cluster is controlled by Ansible. 5 | The 6 | `openshift-ansible `__ project is used to deploy and scale OpenShift clusters, as well as enable new features like `OpenShift Container Storage `__. 7 | 8 | Deploying OpenShift 9 | ''''''''''''''''''''''''' 10 | 11 | Your entire OpenShift cluster was deployed using Ansible. The inventory 12 | used to deploy your cluster is on your bastion host at the default 13 | inventory location for Ansible, ``/etc/ansible/hosts``. To deploy an 14 | OpenShift cluster on RHEL 7, after registering it and subscribing it to 15 | the proper repositories, two Ansible playbooks need to be run: 16 | 17 | :: 18 | 19 | ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml 20 | ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml 21 | 22 | The deployment process takes 30-40 minutes to complete, depending on the size of your cluster. To save that time, we've got you covered and have already deployed your OpenShift cluster. In fact, all lab environments were provisioned using `a single Ansible playbook that incorporates the playbooks that deploy OpenShift `__. 23 | 24 | Operations and Lifecycle management 25 | '''''''''''''''''''''''''''''''''''' 26 | 27 | In addition to deploying OpenShift, Ansible is used to modify your 28 | existing cluster. These playbooks are also located in 29 | ``/usr/share/ansible/openshift-ansible/``. They can do things like: 30 | 31 | - Add a node to your cluster 32 | - Deploy OpenShift Container Storage (OCS) 33 | - Deploy metrics or log aggregation to an existing cluster 34 | - Deploy `Cloudforms `__ in your OpenShift cluster 35 | - Other operations like re-deploying encryption certificates 36 | 37 | Taking a look at the playbook options available from 38 | ``openshift-ansible``, you see: 39 | 40 | :: 41 | 42 | $ ls /usr/share/ansible/openshift-ansible/playbooks/ 43 | adhoc/ 44 | common/ 45 | openshift-autoheal/ 46 | openshift-grafana/ 47 | openshift-master/ 48 | openshift-node/ 49 | openshift-web-console/ 50 | roles/ 51 | aws/ 52 | container-runtime/ 53 | openshift-checks/ 54 | openshift-hosted/ 55 | openshift-metrics/ 56 | openshift-node-problem-detector/ 57 | openstack/ 58 | azure/ 59 | deploy_cluster.yml 60 | openshift-descheduler 61 | openshift-loadbalancer 62 | openshift-monitor-availability 63 | openshift-prometheus 64 | prerequisites.yml 65 | byo/ 66 | gcp/ 67 | openshift-etcd/ 68 | openshift-logging/ 69 | openshift-monitoring/ 70 | openshift-provisioners/ 71 | README.md 72 | cluster-operator/ 73 | init/ 74 | openshift-glusterfs/ 75 | openshift-management/ 76 | openshift-nfs/ 77 | openshift-service-catalog/ 78 | redeploy-certificates.yml 79 | 80 | .. admonition:: Why not do this right now? 81 | 82 | If you want to add something to your cluster, please feel free.
Because this process is simply running ``ansible-playbook``, we're not going to ask everyone to watch ansible output scroll down their screen for 10 or more minutes. 83 | 84 | Summary 85 | ''''''''' 86 | 87 | We've talked about Ansible fundamentals, and we've discussed OpenShift architecture. 88 | 89 | This section has been about the deeper relationship between OpenShift 90 | and Ansible. All major lifecycle events are handled using Ansible. 91 | 92 | Next, we'll take a look at an OpenShift deployment that provides 93 | everything you need to create and test a full CI/CD workflow in 94 | OpenShift. 95 | -------------------------------------------------------------------------------- /workshops/better-together/ocp-intro.rst: -------------------------------------------------------------------------------- 1 | Discussion - Container Fundamentals 2 | ==================================== 3 | 4 | In this section we'll discuss the fundamental components that make up 5 | OpenShift. Any Ops-centric discussion of an application platform like 6 | OpenShift needs to start with containers; both technically and from a 7 | perspective of value. We'll start off talking about why containers are 8 | the best solution today to deliver your applications. 9 | 10 | .. note:: 11 | For more in-depth information on this topic, please take a look at the `OpenShift Technical 12 | Overview `__. 13 | 14 | What exactly *is* a container? 15 | ''''''''''''''''''''''''''''''''''''' 16 | 17 | There are t-shirts out there that say "Containers are Linux". Well. They're 18 | not wrong. The components that isolate and protect applications running 19 | in containers are unique to the Linux kernel. Some of them have been 20 | around for years, or even decades. In this section we'll investigate 21 | them in more depth. 22 | 23 | Comparing VM and Container resource usage 24 | ''''''''''''''''''''''''''''''''''''''''''''''''' 25 | 26 | In this section we're going to investigate how containers use your 27 | datacenter resources more efficiently. 28 | 29 | .. figure:: images/ops/containers_vs_vms.png 30 | :alt: containers compared to virtual machines 31 | 32 | Container host resources compared to VM hypervisor resources 33 | 34 | First, we'll focus on storage. 35 | 36 | Storage resource consumption 37 | ````````````````````````````` 38 | 39 | Compared to a 40GB disk image, the RHEL 7 container base image is 40 | `approximately 41 | 72MB `__. 42 | It's a widely accepted rule of thumb that container images shouldn't 43 | exceed 1GB in size. If your application takes more than 1GB of code and 44 | libraries to run, you likely need to re-think your plan of attack. 45 | 46 | Instead of each new instance of an application consuming 40GB+ on your 47 | storage resources, they consume a couple hundred megabytes. Your next 48 | storage purchase just got a whole lot more interesting. 49 | 50 | CPU and RAM resource consumption 51 | ````````````````````````````````` 52 | 53 | It's not just storage where containers help save resources. We'll 54 | analyze this in more depth in the next section, but we want to get the 55 | idea into your head for that part of our investigation now. Containers 56 | are smaller than a full VM because containers don't each run their own 57 | Linux kernel. All containers on a host share a single kernel. That means 58 | a container just needs the application it needs to execute and its 59 | dependencies. You can think of containers as a "portable userspace" for 60 | your applications.
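To make that resource discussion concrete, here is a minimal, illustrative pod definition of the kind you'll work with later in this workshop. The image name and the exact request values are hypothetical; the point is that the units are millicores and mebibytes rather than whole CPUs and gigabytes:

::

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-cli-example
    spec:
      containers:
        - name: app-cli
          image: registry.example.com/app-cli:latest
          resources:
            requests:
              cpu: 100m        # one tenth of a CPU core
              memory: 64Mi     # a few dozen MiB, not multiple GiB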
61 | 62 | Because each container doesn't require its own kernel, we also measure 63 | startup time in milliseconds! This gives us a whole new way to think 64 | about scalability and High Availability! 65 | 66 | .. figure:: images/ops/metrics.jpeg 67 | :alt: Metrics resource consumption 68 | 69 | The registry-console application consumes less than 3MB of RAM! 70 | 71 | If we were deploying this application in a VM we would spend multiple 72 | gigabytes of our RAM just so we could give the application the handful 73 | of MegaBytes it needs to run properly. 74 | 75 | .. note:: 76 | 77 | Think about your traditional virtualization platform, or your workflow 78 | to deploy instances to your public cloud of choice for a moment. How big 79 | is your default VMDK for your root OS disk? How much extra storage do 80 | you add to your EC2 instance, just to handle the 'unknown' situations? 81 | Is it 40GB? 60GB? 82 | 83 | **This phenomenon is known as *Worst Case Scenario Provisioning*.** In 84 | the industry, we've done it for years because we consider each VM a 85 | unique creature that is hard to alter once created. Even with more 86 | optimized workflows in the public cloud, we hold on to IP addresses, and 87 | their associated resources, as if they're immutable once created. It's 88 | easier to overspend on compute resources for most of us than to change 89 | an IP address in our IPAM once we've deployed a system. 90 | 91 | The same is true for CPU consumption. In OpenShift, we measure and 92 | allocate CPUs by the *millicore*, or thousandth of a core. Instead of 93 | multiple CPUs, applications can be given the small fractions of a CPU 94 | they need to get their job done. 95 | 96 | Containers are an evolution, not an end state 97 | '''''''''''''''''''''''''''''''''''''''''''''' 98 | 99 | At their heart, containers are the next evolution in how we isolate 100 | processes on a Linux system. This evolution started when we created the 101 | first computers. They evolved from ENIAC, through mainframes, into the 102 | server revolution all the way through virtual machines (VMs) and now 103 | into containers. 104 | 105 | .. figure:: images/ops/evolution.png 106 | 107 | More efficient application isolation (we'll get into how that works in 108 | the next section) provides an Ops team a few key advantages that we'll 109 | discuss next. 110 | 111 | Summary 112 | ''''''''''''''' 113 | 114 | Containers aren't just a tool to help developers create applications 115 | more quickly. Although they are revolutionary in how they do that. 116 | Containers aren't just marketing hype. Although there's certainly a lot of 117 | that going on right now, too. 118 | 119 | For an Ops team, containers take any application and deploy it more 120 | efficiently in our infrastructure. Instead of measuring each application 121 | in GB used and full CPUs allocated, we measure containers using MB of 122 | storage and RAM and we allocate thousandths of CPUs. 123 | 124 | OpenShift deployed into your existing datacenter gives you back 125 | resources. For customers deep into their transformation with OpenShift, 126 | an exponential increase in resource density isn't uncommon. 127 | -------------------------------------------------------------------------------- /workshops/better-together/ocp-routing.rst: -------------------------------------------------------------------------------- 1 | Lab 7 - Routing layer 2 | ======================= 3 | 4 | .. 
important:: 5 | 6 | For this lab, SSH to your master node and escalate to your root user so you can interact with OpenShift via the ``oc`` command line client. 7 | 8 | The routing layer integrated with OpenShift uses HAProxy by default. It maps the publicly available route you assign an 9 | application and maps it back to the corresponding pods in your cluster. 10 | Each time an application or route is updated (created, retired, scaled 11 | up or down), the configuration in HAProxy is updated by OpenShift. 12 | HAProxy runs in a pod in the default project on your infrastructure 13 | node. 14 | 15 | .. note:: Other routing options 16 | 17 | OpenShift uses a plugin framework for its routing layer. The default router for OpenShift 3.11 is HAProxy, but OpenShift also ships with an F5 router plugin. Additionally, there are cloud-provider specific and third-party router plugins. 18 | 19 | OpenShift 4 transitioned to `nginx `__ as the default router. 20 | 21 | Inspecting HAProxy in OpenShift 22 | ''''''''''''''''''''''''''''''''' 23 | 24 | :: 25 | 26 | # oc project default 27 | # oc get pods 28 | NAME READY STATUS RESTARTS AGE 29 | docker-registry-1-77rmv 1/1 Running 0 2d 30 | registry-console-1-n7kbk 1/1 Running 0 2d 31 | router-1-mwb89 1/1 Running 0 2d <--- router pod running HAProxy 32 | 33 | If you know the name of a pod, you can us ``oc rsh`` to connect to it 34 | remotely. This is doing some fun magic using ``ssh`` and ``nsenter`` 35 | under the covers to provide a connection to the proper node inside the 36 | proper namespaces for the pod. Looking in the ``haproxy.config`` file 37 | for references to ``app-cli`` gives displays your router configuration 38 | for that application. ``Ctrl-D`` will exit out of your ``rsh`` session. 39 | 40 | :: 41 | 42 | # oc rsh router-1-mwb89 43 | sh-4.2$ grep app-cli haproxy.config 44 | backend be_http:image-uploader:app-cli 45 | server pod:app-cli-1-tthhf:app-cli:10.130.0.36:8080 10.130.0.36:8080 cookie 91b8f12aa1ca5b82e34e730715b58254 weight 256 check inter 5000ms 46 | server pod:app-cli-1-bgt75:app-cli:10.130.0.41:8080 10.130.0.41:8080 cookie 0f411f181edfdfb13c0c0d1b562f5efd weight 256 check inter 5000ms 47 | server pod:app-cli-1-26fgz:app-cli:10.131.0.6:8080 10.131.0.6:8080 cookie 67b4bd5bb54b037c5b37c8acadcfe833 weight 256 check inter 5000ms 48 | 49 | If you use the ``oc get pods`` command for the Image Uploader project 50 | and limit its output for the app-cli application, you can see the IP 51 | addresses in HAProxy match the pods for the application. 52 | 53 | :: 54 | 55 | # oc get pods -l app=app-cli -n image-uploader -o wide 56 | NAME READY STATUS RESTARTS AGE IP NODE 57 | app-cli-1-26fgz 1/1 Running 0 3h 10.131.0.6 ip-172-16-50-98.ec2.internal 58 | app-cli-1-bgt75 1/1 Running 0 3h 10.130.0.41 ip-172-16-245-111.ec2.internal 59 | app-cli-1-tthhf 1/1 Running 0 3h 10.130.0.36 ip-172-16-245-111.ec2.internal 60 | 61 | To confirm HAProxy is automatically updated, let's scale ``app-cli`` 62 | back down to 1 pod and re-check the router configuration. 63 | 64 | :: 65 | 66 | # oc scale dc/app-cli -n image-uploader --replicas=1 67 | deploymentconfig.apps.openshift.io "app-cli" scaled 68 | 69 | oc get pods -l app=app-cli -n image-uploader -o wide 70 | 71 | NAME READY STATUS RESTARTS AGE IP NODE app-cli-1-tthhf 1/1 Running 0 3h 72 | 10.130.0.36 ip-172-16-245-111.ec2.internal 73 | 74 | Instead of having to connect to a shell session inside the router pod, you can use ``oc exec`` to run a single command in a pod and get the output. 
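If you wanted to fold that kind of spot-check into your automation, a rough Ansible sketch might look like the play below. It's illustrative only; the router pod name will be different in your cluster, and the same ``command``-module pattern from earlier in this workshop is reused here:

::

    ---
    - name: Check the HAProxy backends for app-cli
      hosts: masters
      become: yes

      tasks:
        - name: Grep the router pod's haproxy.config for app-cli
          command: oc exec router-1-mwb89 grep app-cli haproxy.config
          register: haproxy_backends

        - name: Show the matching backend lines
          debug:
            var: haproxy_backends.stdout_lines

Run by hand on the master, the same check and its output look like this: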
75 | 76 | :: 77 | 78 | oc exec router-1-mwb89 grep app-cli haproxy.config 79 | 80 | backend be_http:image-uploader:app-cli server 81 | pod:app-cli-1-tthhf:app-cli:10.130.0.36:8080 10.130.0.36:8080 cookie 82 | 91b8f12aa1ca5b82e34e730715b58254 weight 256 check inter 5000ms 83 | 84 | We were able to confirm that our HAProxy configuration updates 85 | automatically when applications are updated. 86 | 87 | Summary 88 | ''''''''''''' 89 | 90 | We know this is a mountain of content. Our goal is to present you with 91 | information that will be helpful as you sink your teeth into your own 92 | OpenShift clusters in your own infrastructure. These are some of the 93 | fundamental tools and tricks that are going to be helpful as you begin 94 | this journey. 95 | -------------------------------------------------------------------------------- /workshops/better-together/ocp-sdn.rst: -------------------------------------------------------------------------------- 1 | Lab 6 - OpenShift SDN 2 | ======================= 3 | 4 | Overview 5 | ''''''''' 6 | OpenShift uses a complex software-defined network solution using `Open 7 | vSwitch (OVS) `__ that creates multiple 8 | interfaces for each container and routes traffic through VXLANs to other 9 | nodes in the cluster or through a TUN interface to route out of your 10 | cluster and into other networks. 11 | 12 | .. important:: 13 | 14 | For this lab, SSH to your master node and escalate to your root user so you can interact with OpenShift via the ``oc`` command line client. 15 | 16 | Inspecting the SDN 17 | ''''''''''''''''''' 18 | 19 | At a fundamental level, OpenShift creates an OVS bridge and attaches a 20 | TUN and VXLAN interface. The VXLAN interface routes requests between 21 | nodes on the cluster, and the TUN interface routes traffic off of the 22 | cluster using the node's default gateway. Each container also creates a 23 | ``veth`` interface that is linked to the ``eth0`` interface in a 24 | specific container using `kernel interface 25 | linking `__. 26 | You can see this on your nodes by running the ``ovs-vsctl list-br`` 27 | command. 28 | 29 | .. code-block:: bash 30 | 31 | # ovs-vsctl list-br br0 32 | 33 | This lists the OVS bridges on the host. To see the interfaces within the 34 | bridge, run the following command. Here you can see the ``vxlan``, 35 | ``tun``, and ``veth`` interfaces within the bridge. 36 | 37 | .. code-block:: bash 38 | 39 | # ovs-vsctl list-ifaces br0 tun0 40 | veth1903c0e4 41 | veth2daf2599 42 | veth6afcc070 43 | veth77b9379f 44 | veth9406531e 45 | veth97389395 46 | vxlan0 47 | 48 | Logically, the networking configuration on an OpenShift node looks like 49 | the graphic below. 50 | 51 | .. figure:: images/ops/ocp_networking_node.png 52 | :alt: 53 | 54 | Summary 55 | '''''''' 56 | 57 | When networking isn't working as you expect, and you've already ruled 58 | out DNS (for the 5th time), keep this architecture in mind as you are 59 | troubleshooting your cluster. There's no magic involved; only 60 | technologies that have proven themselves for years in production. 61 | -------------------------------------------------------------------------------- /workshops/example-workshop/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SOURCEDIR = .
8 | BUILDDIR = _build 9 | 10 | # Put it first so that "make" without argument is like "make help". 11 | help: 12 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 13 | 14 | .PHONY: help Makefile 15 | 16 | # Catch-all target: route all unknown targets to Sphinx using the new 17 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 18 | %: Makefile 19 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) -------------------------------------------------------------------------------- /workshops/example-workshop/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # Configuration file for the Sphinx documentation builder. 4 | # 5 | # This file does only contain a selection of the most common options. For a 6 | # full list see the documentation: 7 | # http://www.sphinx-doc.org/en/master/config 8 | 9 | # -- Path setup -------------------------------------------------------------- 10 | 11 | # If extensions (or modules to document with autodoc) are in another directory, 12 | # add these directories to sys.path here. If the directory is relative to the 13 | # documentation root, use os.path.abspath to make it absolute, like shown here. 14 | # 15 | # import os 16 | # import sys 17 | # sys.path.insert(0, os.path.abspath('.')) 18 | 19 | extensions = [ 20 | 'sphinx.ext.extlinks', 21 | ] 22 | 23 | # -- Project information ----------------------------------------------------- 24 | 25 | import os 26 | 27 | project_clean = os.environ['WORKSHOP_NAME'].title().replace('_', ' ').replace('-',' ') 28 | 29 | project = project_clean 30 | copyright = u'2019, Red Hat, Inc.' 31 | author = u'Red Hat, Inc. North America Public Sector Emerging Technology Team' 32 | 33 | # The short X.Y version 34 | version = u'0.1' 35 | # The full version, including alpha/beta/rc tags 36 | release = u'0.1alpha1' 37 | html_logo = "images/Logo-RedHat-D-Color-RGB.png" 38 | html_favicon = "images/rh_favicon.png" 39 | rst_prolog = """ 40 | .. |workshop_name_clean| replace:: %s 41 | .. |workshop_name| replace:: %s 42 | .. |student_name| replace:: %s 43 | .. |bastion_host| replace:: %s 44 | .. |master_url| replace:: %s 45 | .. |app_domain| replace:: %s 46 | .. |github_url| replace:: https://github.com/jduncan-rva/workshop-operator-lab-guide 47 | """ % (project_clean, 48 | os.environ['WORKSHOP_NAME'], 49 | os.environ['STUDENT_NAME'], 50 | os.environ['BASTION_HOST'], 51 | os.environ['MASTER_URL'], 52 | os.environ['APP_DOMAIN'], 53 | ) 54 | 55 | 56 | html_theme_options = { 57 | 'style_nav_header_background': '#dddddd', 58 | 'style_external_links': True, 59 | 'logo_only': True, 60 | 'prev_next_buttons_location': 'both', 61 | } 62 | 63 | 64 | extlinks = { 65 | 'github_url': ('https://github.com/jduncan-rva/%s', 'GitHub '), 66 | } 67 | 68 | # -- General configuration --------------------------------------------------- 69 | 70 | # If your documentation needs a minimal Sphinx version, state it here. 71 | # 72 | # needs_sphinx = '1.0' 73 | 74 | # Add any Sphinx extension module names here, as strings. They can be 75 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 76 | # ones. 77 | source_suffix = { 78 | '.rst': 'restructuredtext', 79 | '.md': 'markdown', 80 | } 81 | 82 | # Add any paths that contain templates here, relative to this directory. 83 | templates_path = ['_templates'] 84 | 85 | # The master toctree document. 
86 | master_doc = 'index' 87 | 88 | # The language for content autogenerated by Sphinx. Refer to documentation 89 | # for a list of supported languages. 90 | # 91 | # This is also used if you do content translation via gettext catalogs. 92 | # Usually you set "language" from the command line for these cases. 93 | language = None 94 | 95 | # List of patterns, relative to source directory, that match files and 96 | # directories to ignore when looking for source files. 97 | # This pattern also affects html_static_path and html_extra_path. 98 | exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store'] 99 | 100 | # The name of the Pygments (syntax highlighting) style to use. 101 | # pygments_style = sphinx 102 | 103 | 104 | # -- Options for HTML output ------------------------------------------------- 105 | 106 | # The theme to use for HTML and HTML Help pages. See the documentation for 107 | # a list of builtin themes. 108 | # 109 | html_theme = "sphinx_rtd_theme" 110 | 111 | # Theme options are theme-specific and customize the look and feel of a theme 112 | # further. For a list of options available for each theme, see the 113 | # documentation. 114 | # 115 | # html_theme_options = {} 116 | 117 | # Add any paths that contain custom static files (such as style sheets) here, 118 | # relative to this directory. They are copied after the builtin static files, 119 | # so a file named "default.css" will overwrite the builtin "default.css". 120 | html_static_path = ['_static'] 121 | 122 | # Custom sidebar templates, must be a dictionary that maps document names 123 | # to template names. 124 | # 125 | # The default sidebars (for documents that don't match any pattern) are 126 | # defined by theme itself. Builtin themes are using these templates by 127 | # default: ``['localtoc.html', 'relations.html', 'sourcelink.html', 128 | # 'searchbox.html']``. 129 | # 130 | # html_sidebars = {} 131 | 132 | 133 | # -- Options for HTMLHelp output --------------------------------------------- 134 | 135 | # Output file base name for HTML help builder. 136 | htmlhelp_basename = 'WorkshopLabGuidedoc' 137 | 138 | 139 | # -- Options for LaTeX output ------------------------------------------------ 140 | 141 | latex_elements = { 142 | # The paper size ('letterpaper' or 'a4paper'). 143 | # 144 | # 'papersize': 'letterpaper', 145 | 146 | # The font size ('10pt', '11pt' or '12pt'). 147 | # 148 | # 'pointsize': '10pt', 149 | 150 | # Additional stuff for the LaTeX preamble. 151 | # 152 | # 'preamble': '', 153 | 154 | # Latex figure (float) alignment 155 | # 156 | # 'figure_align': 'htbp', 157 | } 158 | 159 | # Grouping the document tree into LaTeX files. List of tuples 160 | # (source start file, target name, title, 161 | # author, documentclass [howto, manual, or own class]). 162 | latex_documents = [ 163 | (master_doc, 'WorkshopLabGuide.tex', u'Workshop Lab Guide Documentation', 164 | u'Red Hat Public Sector', 'manual'), 165 | ] 166 | 167 | 168 | # -- Options for manual page output ------------------------------------------ 169 | 170 | # One entry per manual page. List of tuples 171 | # (source start file, name, description, authors, manual section). 172 | man_pages = [ 173 | (master_doc, 'workshoplabguide', u'Workshop Lab Guide Documentation', 174 | [author], 1) 175 | ] 176 | 177 | 178 | # -- Options for Texinfo output ---------------------------------------------- 179 | 180 | # Grouping the document tree into Texinfo files. 
List of tuples 181 | # (source start file, target name, title, author, 182 | # dir menu entry, description, category) 183 | texinfo_documents = [ 184 | (master_doc, 'WorkshopLabGuide', u'Workshop Lab Guide Documentation', 185 | author, 'WorkshopLabGuide', 'One line description of project.', 186 | 'Miscellaneous'), 187 | ] 188 | 189 | 190 | # -- Options for Epub output ------------------------------------------------- 191 | 192 | # Bibliographic Dublin Core info. 193 | epub_title = project 194 | 195 | # The unique identifier of the text. This can be a ISBN number 196 | # or the project homepage. 197 | # 198 | # epub_identifier = '' 199 | 200 | # A unique identification for the text. 201 | # 202 | # epub_uid = '' 203 | 204 | # A list of files that should not be packed into the epub file. 205 | epub_exclude_files = ['search.html'] 206 | -------------------------------------------------------------------------------- /workshops/example-workshop/contributing.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Jamie Duncan 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ============= 5 | Contributing 6 | ============= 7 | 8 | Most of our workshops take very basic environments and wrap them around a lab guide to provide a good learning experience for our customers. The typical contribution to this project will be to create a new lab guide, or port over an existing one. For more complex workshops, you may need to extend or modify the actual `Workshop Operator `__. Contributions to the workshop operator are covered in its documentation, and out of scope for this document. 9 | 10 | Documentation Format 11 | --------------------- 12 | 13 | This lab guide engine uses `Sphinx __ to render the HTML that's presented to each user. The default documentation format we use is `RestructuredText `__. 14 | 15 | .. admonition:: What about MarkDown? 16 | 17 | If you choose, you can also use MarkDown (`CommonMark format `__ to render your lab guides. Just make sure your files are saved with a ``.md`` file extension and go for it! Your initial page should be named ``index.md``, and then follow the same conventions as this example_workshop. 18 | 19 | Adding a workshop 20 | ------------------ 21 | 22 | To contribute a new workshop, we follow the standard fork and branch methodology of git. 23 | 24 | Fork the repository 25 | ```````````````````` 26 | 27 | To add an new workshop, begin by forking :github_url:`Github project`. 28 | 29 | .. code-block:: bash 30 | 31 | $ git clone git@github.com:jduncan-rva/workshop-operator-lab-guide.git 32 | 33 | Create your new workshop directory 34 | ``````````````````````````````````` 35 | 36 | After changing to that direction, create a new directory in ``./workshops``. 37 | 38 | .. code-block:: bash 39 | 40 | $ mkdir -p ./workshops/my_workshop 41 | $ ll workshops/ 42 | total 0 43 | drwxr-xr-x 5 jduncan wheel 160 Apr 3 15:40 . 44 | drwxr-xr-x 11 jduncan wheel 352 Apr 3 15:40 .. 45 | drwxr-xr-x 7 jduncan wheel 224 Apr 2 10:25 dcmetromap 46 | drwxr-xr-x 10 jduncan wheel 320 Apr 3 10:06 example_workshop 47 | drwxr-xr-x 2 jduncan wheel 64 Apr 3 15:40 my_workshop 48 | 49 | Add your project to the CI/CD Workflow 50 | ``````````````````````````````````````` 51 | 52 | We have a functional CI/CD workflow in place to build container images for all projects in the lab guide engine. To have yours built, add a new line to ``.travis.yml`` in the ``env`` section. For example: 53 | 54 | .. 
code-block:: yaml 55 | 56 | env: 57 | - WORKSHOP_NAME=example_workshop 58 | - WORKSHOP_NAME=dcmetromap 59 | - WORKSHOP_NAME=new_workshop 60 | 61 | Once that is done, open up a Pull Request against :github_url:`Github `. 62 | -------------------------------------------------------------------------------- /workshops/example-workshop/docs: -------------------------------------------------------------------------------- 1 | docs -------------------------------------------------------------------------------- /workshops/example-workshop/images/Logo-RedHat-D-Color-RGB.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/example-workshop/images/Logo-RedHat-D-Color-RGB.png -------------------------------------------------------------------------------- /workshops/example-workshop/images/rh_favicon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rhcreynold/workshop-operator-lab-guide/f22dc821e7e25daf392e6ff1c98b330fa859bc3f/workshops/example-workshop/images/rh_favicon.png -------------------------------------------------------------------------------- /workshops/example-workshop/index.rst: -------------------------------------------------------------------------------- 1 | .. Workshop Lab Guide documentation master file, created by 2 | sphinx-quickstart on Thu Mar 28 23:19:41 2019. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | .. sectionauthor:: Jamie Duncan 7 | 8 | ============================================================== 9 | |workshop_name_clean| Lab Guide 10 | ============================================================== 11 | 12 | Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Fusce ut placerat orci nulla pellentesque dignissim enim sit amet. Odio ut sem nulla pharetra diam sit amet nisl suscipit. Turpis nunc eget lorem dolor sed viverra ipsum. Quam elementum pulvinar etiam non quam lacus suspendisse. Mauris nunc congue nisi vitae suscipit tellus mauris a. 13 | 14 | Dolor sit amet consectetur adipiscing elit ut. Ultrices eros in cursus turpis massa. Aenean pharetra magna ac placerat. Mattis nunc sed blandit libero volutpat. Turpis egestas maecenas pharetra convallis posuere morbi leo urna molestie. Ut diam quam nulla porttitor massa id neque aliquam vestibulum. Lacus sed turpis tincidunt id. 15 | 16 | .. toctree:: 17 | :maxdepth: 2 18 | :numbered: 19 | :Caption: Index 20 | :name: mastertoc 21 | 22 | intro 23 | quickstart 24 | contributing 25 | license 26 | -------------------------------------------------------------------------------- /workshops/example-workshop/intro.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Jamie Duncan 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ============= 5 | Introduction 6 | ============= 7 | 8 | Important links 9 | ---------------- 10 | - `RestructuredText in Sphinx Getting started `__ 11 | - `Workshop Operator Source `__ 12 | 13 | Default Variables 14 | ------------------ 15 | 16 | You can define more variables to fit your specific workshop in ``conf.py`` in the `rst_prolog `__ section. 
17 | 18 | ====================== ============== 19 | Environment Variable Default Value 20 | ====================== ============== 21 | WORKSHOP_NAME |workshop_name| 22 | STUDENT_NAME |student_name| 23 | BASTION_HOST |bastion_host| 24 | MASTER_URL |master_url| 25 | APP_DOMAIN |app_domain| 26 | ====================== ============== 27 | -------------------------------------------------------------------------------- /workshops/example-workshop/license.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Jamie Duncan 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | =============== 5 | Licensing 6 | =============== 7 | 8 | You should really release your content under some sort of collaborative `Creative Commons `__, perhaps? 9 | -------------------------------------------------------------------------------- /workshops/example-workshop/quickstart.rst: -------------------------------------------------------------------------------- 1 | .. sectionauthor:: Jamie Duncan 2 | .. _docs admin: jduncan@redhat.com 3 | 4 | ============= 5 | Quickstart 6 | ============= 7 | 8 | The basics of working with the lab guide project, and spinning up a local copy for customer demos and smaller workshops. The first step is to always check out a local copy of the :github_url:`Github project `. We'll be using the ``example_workshop`` project (which is also this howto guide). 9 | 10 | .. code-block:: bash 11 | 12 | $ git clone git@github.com:jduncan-rva/workshop-operator-lab-guide.git 13 | 14 | Requirements 15 | ````````````` 16 | 17 | As much as we love buildah and podman, we understand that this will often be run on Mac laptops. For that reasons, for right now, we're using Docker as the default container runtime. 18 | 19 | .. admonition:: What about buildah?! 20 | 21 | All of the automation will of course work on Linux as well, and converting it to use podman should be trivial if anything needs to be done at all. 22 | 23 | Building a local copy of your workshop guide 24 | ````````````````````````````````````````````` 25 | 26 | In the root directory of the repository is ``build.sh``. This script makes it easy to build a container image that houses your project's lab guide. The usuage is simple: 27 | 28 | .. code-block:: bash 29 | 30 | $ ./build.sh example_workshop local 31 | 32 | The ``local`` directive tells the script to build a local container image. 33 | 34 | .. admonition:: Other build targets 35 | 36 | ``build.sh`` also has a ``quay`` build target that is used as part of the CI/CD workflow. It requires a few additional variables: 37 | 38 | * ``QUAY_PROEJCT`` - The quay.io project you want to upload to. This variable in ``build.sh`` is also used for the local build tag. 39 | * ``DOCKER_USERNAME`` - Your quay.io username 40 | * ``DOCKER_PASSWORD`` - Your quay.io password 41 | 42 | For travis-ci, the usernames and passwords are encrypted and aren't visible in the job output. 43 | 44 | Running a local copy of your lab guide 45 | ``````````````````````````````````````` 46 | 47 | In the root directory of the repository is ``run.sh``. This is used to start and stop your lab guide container. 48 | 49 | .. 
code-block:: bash 50 | :caption: 'This command starts your lab guide locally' 51 | 52 | $ ./run.sh example_workshop local 53 | No lab guides are currently running 54 | Preparing new lab guide for example_workshop 55 | example_workshop is running as container ID 05b2fd52306d and is avaiable at http://localhost:8080 56 | 57 | Stopping a local copy of your lab guide 58 | ```````````````````````````````````````` 59 | 60 | The same ``run.sh`` script is used to cleanly stop your lab guides. 61 | 62 | .. code-block:: bash 63 | :caption: 'This command stops your lab guide locally' 64 | 65 | $ ./run.sh example_workshop stop 66 | Stopping example_workshop container ID a0fcb61cc934 67 | Lab Guide for example_workshop stopped 68 | 69 | Do you have to use the helper scripts? 70 | ``````````````````````````````````````` 71 | 72 | You can of course do all of this manually using your container runtime commands. The helper scripts allow you to create and run multiple labs guides simultaneously when you need to right on your laptop. They also clean up after themselves nicely. 73 | --------------------------------------------------------------------------------