├── .gitignore
├── ABOUT.md
├── README.md
├── compact.yml
├── docker-compose.yml
├── node
│   ├── Dockerfile
│   ├── README.md
│   └── config.json
└── slave
    ├── .dockerignore
    ├── Dockerfile
    ├── README.md
    ├── gitconfig
    └── ssh
        └── .gitkeep

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
authorized_keys
*.iml
.idea
id_rsa*

--------------------------------------------------------------------------------
/ABOUT.md:
--------------------------------------------------------------------------------
Running Continuous Integration on a Shoestring with Docker and Fig
==================================================================

One of the things I love about Continuous Delivery (CD) is the "Show, don't Tell" aspect of the process. While we can
often convince a customer or coworker of the 'right thing to do', some people are harder to sell, and nothing beats a
demonstration.

The downside of Continuous Delivery is that, on the face of it, it uses a lot of hardware: to someone who doesn't know
the system, it looks like multiple copies of multiple servers all doing nominally the same thing. Cloud services are
great for proving out the system due to the low monthly outlay, but not all organizations allow them. Maybe it's a
billing issue, or concern about your source getting stolen, or, in an older company, a longstanding IT policy. If a
manager believes in the system, they may be willing to stick their neck out and get paperwork signed or policies
changed. But how do you get them on board in the first place? This chicken-and-egg problem has been bothering me for a
while now, and Docker helps a lot with this situation.

# Jenkins in a Box

The thing I wanted to know was: could I get a CI server and all of its dependencies into a set of Docker containers?
It turns out not only is the answer 'yes', but most of the work has already been done for us. You just have to wire
the right things together.

## Why start here?

The Big Ask for hardware starts with the build environment.

Continuous Delivery didn't always exist as a term. Before that it was just a concept. You start with a repeatable
build. You automate compiling the code. You automate testing the code. You set up a build server so you know if it's
safe to pull down trunk/master in the morning. You start enforcing clean builds of trunk/master. You automate
packaging the code. Then you automate archiving the packages. One day you wake up and realize you have a self-service
system where QA can pull new versions onto their test systems, and from there it's a short leap to capturing
configuration and doing the same thing in staging and production.

But halfway through this process, you needed to do UI testing. For web apps that means Selenium. PhantomJS is a good
starting point, but there are many things that only break on Firefox or Chrome. Running a browser in a VM without a
video card takes some special knowledge that not everybody has. And when the tests break you can't always reproduce
them locally. Sooner or later you need to watch the build server run the tests to get a clue why things aren't
working. Nothing substitutes for pixels. Sauce Labs can solve this for you, but we're trying to start small.
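As it happens, the SeleniumHQ 'debug' image builds run a VNC server next to the browser, so watching a build is just a
matter of pointing a VNC client at the right port. A taste of what's coming (the port matches the compose files later
in this repository; the double colon tells most clients, TigerVNC for example, that 5950 is a port rather than a
display number):

    # Watch the Firefox node run its tests (VNC password: 'secret')
    vncviewer 127.0.0.1::5950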
## The Plan

Most of what you need is out there; we just have to stitch it together. The Jenkins team maintains Docker images.
SeleniumHQ has their own as well, which can run Firefox and Chrome in a headless environment. They also have 'debug'
builds with support for VNC connections, which we'll be using. What we need is a Fig script to connect them to each
other, and the Jenkins slaves need our development toolchain.

We need:
1. A Jenkins instance
1. A Selenium Grid (hub) to dole out browsers
1. Selenium 'nodes' which can run browsers
1. A Jenkins slave that can see the Selenium Grid
1. SSH keys on the slave so that Jenkins can talk to it

### Caveats

Rather than modifying the Jenkins image, I opted to build a custom Jenkins slave. Personally, I prefer not to run
slaves on the Jenkins box. First, the hardware budget for the two is very different. Slaves are IO, memory, and CPU
bound. The filesystem can be deleted between builds with few repercussions. The Jenkins server is a different beast.
It needs to be backed up, it uses a lot of disk space for artifacts (build statistics and test reports, even if you
store your binaries in a system of record), and it needs some bandwidth. There are many ways for a bad build to take
out the entire server, and I would rather not even have to worry about it.

Also, it's probable you already have a Jenkins server, and it's easy enough to tweak this demo code to use it with
your existing server without impacting your current operations.

## docker-compose to the rescue

docker-compose (formerly Fig) is a great Docker tool for wiring up a bunch of services to each other. Since I know a
lot of people who like to poke at the build environment, I opted to write a docker-compose file where all of the ports
are wired to fixed port numbers on the host operating system.

You'll need to install docker-compose, of course (it's not part of the Docker install, or at least not yet). You'll
also need to create a ~/jenkins_home directory, which will contain all of the configuration for Jenkins, generate an
SSH key for Jenkins, and copy it into `authorized_keys` for the slave (see the [README.md](README.md) if you need help
with that step). Then you can just type in two magic little words:

    docker-compose up

And after a few minutes of downloading and building images, you'll have a Jenkins environment running in a box.

You'll have the following running (substitute 192.168.59.103 if you're running boot2docker):
1. Jenkins on http://127.0.0.1:8080
1. A Jenkins slave listening for SSH connections at slave:2222
1. A virtual desktop running Firefox tests, listening on 127.0.0.1:5950
1. A virtual desktop running Chrome tests, listening on 127.0.0.1:5960
1. A Selenium hub listening on hub:4444 (behaving similarly to selenium-standalone)
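If you want to double-check that everything came up, docker-compose can tell you without leaving the terminal. A
minimal sanity check, using only standard subcommands:

    # List all five containers along with their port mappings
    docker-compose ps

    # Watch the Jenkins log until it reports that it is fully up and running
    docker-compose logs jenkins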
## Further Improvements

If that's not already cool enough for you, there are some more steps I'll leave as an exercise for the reader.

### Go smaller: Single node

On small projects, it's not uncommon to run the integration tests sequentially, with a single browser open at a time,
to avoid any concurrent modification issues resulting in false build failures.

I did an experiment where I took the SeleniumHQ Chrome debug image, dropped Firefox onto it as well, and changed the
configuration to offer both browsers. [compact.yml](compact.yml) runs this combined image instead of the two separate
nodes in the normal example. This means only one copy of `X11` and `xvfb` is running, and you only need one VNC
session to see everything. The trouble with this is ongoing maintenance. I've done my best to create the minimum
configuration possible, but there's always a possibility that a new SeleniumHQ release won't be compatible. For this
reason I'd say this setup should only be used for Phase 1 of a project, and eliminating the custom image should be a
priority ASAP.

    docker-compose --file=compact.yml build
    docker-compose --file=compact.yml up

This version of the system peaked at a little under 4 GB of RAM. With developer-grade machines frequently having
16 GB of RAM or more, this becomes something you could actually run on someone's desktop for a while. Or you could
split it and run it on 2 machines.

### Go bigger: Parallel tests

One of the big reasons people run Selenium Grid is to run tests in parallel. One cool thing you can do with
docker-compose is tell it "I want you to run 4 copies of this image" by using the `docker-compose scale` command, and
it will spool them up. The tradeoff is that at present it doesn't have a way to deal with fixed port numbers (there's
no support for port ranges), so you have to take out the port mappings (e.g. "5950:5900" becomes "5900"). The
consequence is that every time you restart docker-compose, the ports tend to change. But watching a parallel test run
over VNC would be challenging to say the least, so you might opt not to run VNC at all; in that case you can save some
resources by using the non-debug images.
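As a sketch, assuming you've already replaced the fixed mappings with plain container ports as described above:

    # Spool up four Firefox nodes behind the hub
    docker-compose scale firefox=4

    # Ask docker-compose which host port a given node's VNC session landed on
    docker-compose port --index=2 firefox 5900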
# Examples and Further reading

[docker-compose.yml](docker-compose.yml)

[compact.yml](compact.yml)

[Selenium HQ Docker](https://github.com/SeleniumHQ/docker-selenium)

[Jenkins images in the Docker Registry](https://registry.hub.docker.com/_/jenkins/)

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
Intro
=====

This is an example of using Docker images to host an entire build server infrastructure.

## Fig File overview

We create a build slave image that has Node and Maven installed.

We spool up a copy of the Jenkins Docker image.

We spool up a Selenium environment using the Selenium Docker images:
- 'nodes' for Chrome and Firefox, with VNC access enabled (password: 'secret')
- a Selenium 'hub' to manage the nodes

A more in-depth overview can be found in [the About file](ABOUT.md).

Configuration
=============

In order for Jenkins to be able to use a slave, the public key for Jenkins needs to be configured on the machine.

As the user that will run the Fig scripts:

* Create a directory called `~/jenkins_home/.ssh`
* Generate an SSH key
  - `ssh-keygen -t rsa -C 'jenkins@example.com'`
  - When prompted, save the files to `~/jenkins_home/.ssh/id_rsa`
* Copy `~/jenkins_home/.ssh/id_rsa.pub` to `slave/ssh/authorized_keys`
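Here are those same steps as a single shell session (a sketch, assuming you run it from the repository root and accept
an empty passphrase):

    mkdir -p ~/jenkins_home/.ssh
    ssh-keygen -t rsa -C 'jenkins@example.com' -f ~/jenkins_home/.ssh/id_rsa
    cp ~/jenkins_home/.ssh/id_rsa.pub slave/ssh/authorized_keys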
Now when the image gets built, the slave will trust incoming SSH connections from Jenkins.

## Jenkins config

Because we're using a stock Jenkins Docker image, none of your command-line build tools will be available to jobs
running on 'master'. You'll need to disable the executors on master and configure new slaves instead.

### Add the Docker Slaves

The Docker slave responds to SSH requests on port 2222. If you're running boot2docker, it will be accessible at
`192.168.59.103`.

### Git

Remember that Git is not installed on Jenkins by default. Go in and add the 'Git Plugin', then restart Jenkins.

### Boot2docker notes

If you want your Jenkins to be visible on the network, it's a good idea to shut down boot2docker, use the VirtualBox
administration console to turn on the 3rd ethernet interface, and set it to the default (bridged). This will cause the
host computer to have two IP addresses, one of which routes straight to the VM. This will save you from having to set
up port forwards for everything.

--------------------------------------------------------------------------------
/compact.yml:
--------------------------------------------------------------------------------
# Selenium Hub, which parcels out browsers to test runners
hub:
  image: 'selenium/hub:2.45.0'
  expose:
    - 4444
# In Selenium parlance a 'node' hosts the browser
node:
  build: 'node'
  ports:
    - 5900
  links:
    - hub
# Jenkins slave. SSH endpoint with some CLI tools installed
slave:
  build: 'slave'
  expose:
    - '22'
  links:
    - hub
# The Jenkins server. See README for details on the SSH keys for talking to the slave
jenkins:
  image: 'jenkins:1.596.2'
  ports:
    - '8080:8080'
  volumes:
    - ~/jenkins_home:/var/jenkins_home
  links:
    - slave

--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
# Selenium Hub, which parcels out browsers to test runners
hub:
  image: 'selenium/hub:2.45.0'
  expose:
    - 4444
# Firefox image with VNC on port 5950
firefox:
  image: "selenium/node-firefox-debug:2.45.0"
  ports:
    - "5950:5900"
  links:
    - hub
# Chrome image with VNC on port 5960
chrome:
  image: "selenium/node-chrome-debug:2.45.0"
  ports:
    - "5960:5900"
  links:
    - hub
# Jenkins slave. SSH endpoint with some CLI tools installed
slave:
  build: 'slave'
  expose:
    - 22
  links:
    - hub
# The Jenkins server. See README for details on the SSH keys for talking to the slave
jenkins:
  image: 'jenkins:1.596.2'
  ports:
    - 8080:8080
  volumes:
    - ~/jenkins_home:/var/jenkins_home
  links:
    - slave

--------------------------------------------------------------------------------
/node/Dockerfile:
--------------------------------------------------------------------------------
# Keep this base image in step with the hub version pinned in the compose files
FROM selenium/node-chrome-debug:2.45.0

USER root

#=========
# Firefox
#=========
RUN apt-get update -qqy \
  && apt-get -qqy --no-install-recommends install \
    firefox \
  && rm -rf /var/lib/apt/lists/*

#========================
# Selenium Configuration
#========================
COPY config.json /opt/selenium/config.json

EXPOSE 5900

--------------------------------------------------------------------------------
/node/README.md:
--------------------------------------------------------------------------------
# Selenium Grid Node - Firefox, Chrome, Debug

A Selenium node configured to run Firefox and Chrome, with remote debugging over VNC.

An amalgam of the very useful [docker images maintained by SeleniumHQ](https://github.com/SeleniumHQ/docker-selenium/).

Where the SeleniumHQ images are intended to demonstrate large-scale, parallel WebDriver testing, this image is
intended as a demonstration of a small-scale Continuous Integration environment running on a single headless VM.

## Rationale

In order to run Selenium tests on multiple user agents, Selenium spins up multiple instances of xvfb and, if you want
the debug version, x11vnc as well. In small-scale testing it is common to run UI tests sequentially; as your tests
scale up, you would typically scale your testing infrastructure to match.
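To try this combined node outside of docker-compose, you can hang it off a hand-started hub. A sketch, run from the
repository root (the container name and image tag here are made up; the node discovers the hub through the environment
variables the link injects):

    # Start a hub, then build the combined node and attach it
    docker run -d --name hub selenium/hub:2.45.0
    docker build -t node-combo node/
    docker run -d --link hub:hub -p 5900:5900 node-combo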
--------------------------------------------------------------------------------
/node/config.json:
--------------------------------------------------------------------------------
{
  "capabilities": [
    {
      "browserName": "*firefox",
      "maxInstances": 1,
      "seleniumProtocol": "Selenium"
    },
    {
      "browserName": "firefox",
      "maxInstances": 1,
      "seleniumProtocol": "WebDriver"
    },
    {
      "browserName": "*googlechrome",
      "maxInstances": 1,
      "seleniumProtocol": "Selenium"
    },
    {
      "browserName": "chrome",
      "maxInstances": 1,
      "seleniumProtocol": "WebDriver"
    }
  ],
  "configuration": {
    "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
    "maxSession": 1,
    "port": 5555,
    "register": true,
    "registerCycle": 5000
  }
}

--------------------------------------------------------------------------------
/slave/.dockerignore:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dev9com/docker-jenkins/ac0ce0174d0f22d773c24cb1921af86165080ee5/slave/.dockerignore

--------------------------------------------------------------------------------
/slave/Dockerfile:
--------------------------------------------------------------------------------
# Version: 0.0.3 4-Dec-2014
FROM ubuntu:latest

# Update the apt package index
RUN apt-get update

# Install Java, Git, Maven, Node.js, and npm
RUN apt-get install -y --no-install-recommends openjdk-7-jdk git maven nodejs nodejs-legacy npm
RUN apt-get clean

# Install an SSH server for the Jenkins slave
RUN apt-get install -y openssh-server
RUN mkdir -p /var/run/sshd

# jenkins user config
RUN useradd -m --shell /bin/bash jenkins

ADD gitconfig /home/jenkins/.gitconfig
ADD ssh /home/jenkins/.ssh/
RUN chmod 600 /home/jenkins/.ssh/*

RUN chown -R jenkins:jenkins /home/jenkins

# Standard SSH port
EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]

--------------------------------------------------------------------------------
/slave/README.md:
--------------------------------------------------------------------------------
Intro
=====

This is a Docker image configured to run as a Jenkins slave.

It installs JDK 7 and the latest Node.js and Maven versions on Ubuntu.

It can be run as part of the 'Jenkins in a Box' Fig configuration (see the [parent Readme](../README.md)) or run
independently.

Configuration
=============

In order for Jenkins to be able to use a slave, the public key for Jenkins needs to be configured on the machine:

* Log into the Jenkins server
* Find or create the 'jenkins' user public key `/var/jenkins_home/.ssh/id_rsa.pub`
* Copy it to `ssh/authorized_keys`
* Build the image
* Run the image
* Configure a new node in Jenkins, with the SSH port of the running slave image

### Building the image

    docker build -t my_jenkins_slave:latest .

### Run the image

    docker run -p 2222:22 -dt my_jenkins_slave:latest

Add port 2222 (or whatever you'd like it to be) to the Advanced->Port field in the Node configuration.
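Before pointing Jenkins at the slave, it's worth smoke-testing the SSH setup by hand. A minimal check, assuming the
key was generated as described in the parent README (boot2docker users should substitute 192.168.59.103 for
localhost):

    # Should print 'jenkins' without prompting for a password
    ssh -i ~/jenkins_home/.ssh/id_rsa -p 2222 jenkins@localhost whoami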
--------------------------------------------------------------------------------
/slave/gitconfig:
--------------------------------------------------------------------------------
[url "https://bitbucket.com/"]
    insteadOf = git@bitbucket.com:

--------------------------------------------------------------------------------
/slave/ssh/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dev9com/docker-jenkins/ac0ce0174d0f22d773c24cb1921af86165080ee5/slave/ssh/.gitkeep
--------------------------------------------------------------------------------