├── chapter1
│   ├── Dockerfile
│   ├── img
│   │   ├── ps.png
│   │   ├── run.png
│   │   ├── exec.png
│   │   └── image.png
│   └── README.md
├── chapter4
│   ├── env_file
│   ├── img
│   │   └── ipython.png
│   ├── jupyter
│   │   ├── requirements.txt
│   │   ├── Dockerfile
│   │   └── example.ipynb
│   ├── init-tables.sh
│   ├── docker-compose.yml
│   └── README.md
├── chapter3
│   ├── img
│   │   ├── psql.png
│   │   ├── select-empty.png
│   │   └── select_content.png
│   ├── Dockerfile
│   └── README.md
├── chapter2
│   ├── img
│   │   ├── notebook.png
│   │   └── notebook-server.png
│   ├── requirements.txt
│   ├── Dockerfile
│   └── README.md
└── README.md

/chapter1/Dockerfile:
--------------------------------------------------------------------------------
FROM alpine:3.7

CMD ["echo", "hello world!"]

--------------------------------------------------------------------------------
/chapter4/env_file:
--------------------------------------------------------------------------------
POSTGRES_PASSWORD=safe
POSTGRES_USER=me
POSTGRES_DB=study

--------------------------------------------------------------------------------
/chapter1/img/ps.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter1/img/ps.png

--------------------------------------------------------------------------------
/chapter1/img/run.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter1/img/run.png

--------------------------------------------------------------------------------
/chapter1/img/exec.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter1/img/exec.png

--------------------------------------------------------------------------------
/chapter1/img/image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter1/img/image.png

--------------------------------------------------------------------------------
/chapter3/img/psql.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter3/img/psql.png

--------------------------------------------------------------------------------
/chapter2/img/notebook.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter2/img/notebook.png

--------------------------------------------------------------------------------
/chapter4/img/ipython.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter4/img/ipython.png

--------------------------------------------------------------------------------
/chapter3/Dockerfile:
--------------------------------------------------------------------------------
FROM postgres:9.6.5

ENV POSTGRES_PASSWORD safe
ENV POSTGRES_USER me
ENV POSTGRES_DB study

--------------------------------------------------------------------------------
/chapter3/img/select-empty.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter3/img/select-empty.png
--------------------------------------------------------------------------------
/chapter2/img/notebook-server.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter2/img/notebook-server.png

--------------------------------------------------------------------------------
/chapter3/img/select_content.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/InsightDataScience/docker-workshop/master/chapter3/img/select_content.png

--------------------------------------------------------------------------------
/chapter2/requirements.txt:
--------------------------------------------------------------------------------
numpy==1.15.1
pandas==0.23.4
matplotlib==2.2.3
prompt-toolkit==2.0.4
ipython==7.0.0
jupyter==1.0.0

--------------------------------------------------------------------------------
/chapter4/jupyter/requirements.txt:
--------------------------------------------------------------------------------
numpy==1.15.1
pandas==0.23.4
matplotlib==2.2.3
prompt-toolkit==2.0.4
ipython==7.0.0
jupyter==1.0.0
psycopg2==2.7.5
sqlalchemy==1.3.0

--------------------------------------------------------------------------------
/chapter4/init-tables.sh:
--------------------------------------------------------------------------------
#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username me --dbname study <<-EOSQL
CREATE TABLE participants (name VARCHAR(255), age int, score int);
SELECT * FROM participants;
INSERT INTO participants (name, age, score) VALUES ('Ronda', 33, 8), ('Jack', 44, 12);
EOSQL

--------------------------------------------------------------------------------
/chapter2/Dockerfile:
--------------------------------------------------------------------------------
FROM python:3.7

RUN apt-get update

# Copy file of required Python packages
COPY requirements.txt .

# Switch to directory of jupyter notebooks
WORKDIR /jupyter_notebooks

# Install Python packages
RUN pip install -r ../requirements.txt

# Start jupyter-notebook server
CMD jupyter-notebook --no-browser --ip="*" --allow-root --NotebookApp.allow_remote_access=True

--------------------------------------------------------------------------------
/chapter4/jupyter/Dockerfile:
--------------------------------------------------------------------------------
FROM python:3.7

RUN apt-get update

# Copy file of required Python packages
COPY requirements.txt .

# Switch to directory of jupyter notebooks
WORKDIR /jupyter_notebooks

# Copy example notebook
COPY example.ipynb .

# Install Python packages
RUN pip install -r ../requirements.txt

# Start jupyter-notebook server
CMD jupyter-notebook --no-browser --ip="*" --allow-root --NotebookApp.allow_remote_access=True

--------------------------------------------------------------------------------
/chapter4/docker-compose.yml:
--------------------------------------------------------------------------------
version: '3'
services:
  db:
    image: "postgres:9.6.5"
    volumes:
      - "dbdata2:/var/lib/postgresql/data"
      - ./init-tables.sh:/docker-entrypoint-initdb.d/init-tables.sh
    env_file:
      - env_file
    networks:
      - nw
  jupyter-server:
    build: ./jupyter
    volumes:
      - "notebooks2:/jupyter_notebooks"
    ports:
      - 8888:8888
    env_file:
      - env_file
    networks:
      - nw
    depends_on:
      - db
networks:
  nw:
    driver: bridge
volumes:
  dbdata2:
  notebooks2:

--------------------------------------------------------------------------------
/chapter4/jupyter/example.ipynb:
--------------------------------------------------------------------------------
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import os\n",
    "from sqlalchemy import create_engine"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "engine = create_engine('postgresql://'+os.environ['POSTGRES_USER']+':'+os.environ['POSTGRES_PASSWORD']+'@'+\"db\"+':'+'5432'+'/'+os.environ['POSTGRES_DB'],echo=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = pd.read_sql('SELECT * FROM participants', engine)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data.head()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Docker Workshop

This repository contains the content of and directions for Insight Data Science's Docker workshop.
The goal of this workshop is to give participants an introduction to Docker and to help them get started building containers.
To do so, we will start with the basics and end with the deployment of a system consisting of a
Jupyter notebook server and a PostgreSQL database.

## What is Docker?

[Docker](https://www.docker.com) is a popular containerization platform. Containerization is a lightweight alternative to virtualization that has gained widespread popularity due to its flexibility and ease of use.
Docker in particular has emerged as the most popular containerization framework. It is a powerful tool for creating controlled environments that behave identically on any system they run on.

In combination with orchestration tools such as [Kubernetes](https://www.kubernetes.io), Docker also makes it easy to build dynamic microservice architectures.

## Prerequisites

Before starting the workshop, please [install Docker](https://www.docker.com/get-started). Note that as Docker is based on Linux features such as cgroups, the installation and usage of Docker on Windows can be cumbersome.

## Overview

The content of this workshop is divided into four chapters:

- [Chapter 1](./chapter1/README.md): Starting with the very basics of how to use Docker by building a container that runs Linux.
- [Chapter 2](./chapter2/README.md): Learning more about Docker by building a container that hosts a Jupyter notebook service.
- [Chapter 3](./chapter3/README.md): Learning how to use Docker to set up a database.
- [Chapter 4](./chapter4/README.md): Learning how to use docker-compose to orchestrate the deployment of multiple containers. In particular, we will deploy the two containers from Chapters 2 and 3 and connect them so that our Jupyter notebooks can access the database.

## Insight DevOps Engineering Fellowship

Interested in getting hands-on experience with tools like Terraform, Chef, Kubernetes, Prometheus, etc. on Amazon Web Services and transitioning to DevOps engineering? The [Insight DevOps Engineering Fellows Program](https://www.insightdevops.com) is a free 7-week professional training program where you can build and maintain cutting edge distributed platforms and transition to a career in DevOps engineering.

--------------------------------------------------------------------------------
/chapter3/README.md:
--------------------------------------------------------------------------------
# Chapter 3: Databases and binding volumes

In this chapter, we are creating a container hosting a PostgreSQL database and binding a volume to the storage of the database to make it persistent. We will then use `docker exec` to log into the database and create a table, as well as insert some entries into it.

We will start again by investigating the Dockerfile. Compared to the last chapter, this Dockerfile looks quite simple:

    FROM postgres:9.6.5

We are using [this](https://hub.docker.com/_/postgres/) base image. This image comes with a fully configured PostgreSQL installation. All we have to do is provide our username, password and database name as environment variables using:
```
ENV POSTGRES_PASSWORD safe
ENV POSTGRES_USER me
ENV POSTGRES_DB study
```

We are now all ready to build and run the container!

## Building and running the container

As we are now already experienced with building and running container images, the `build` and `run` commands should not shock us anymore:
```
docker build -t db .
docker run -d -v dbdata:/var/lib/postgresql/data --name db db
```
Most interestingly, note that PostgreSQL stores its data in `/var/lib/postgresql/data`; this is why we create and mount a volume `dbdata` to it, making our database persistent.
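If you are curious where that volume actually lives on your machine, you can inspect it (an optional sanity check; the mount point shown will differ from system to system):
```
# Show metadata for the named volume, including its mount point on the host.
docker volume inspect dbdata
```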
Our database is running in detached mode (we passed the `-d` option to `docker run`), but we want to log in to create a table and insert some entries!

This is of course where `docker exec` comes in handy again! Using the command

    docker exec -it db psql -U me study

we log into our database.
The option `-it` essentially executes the command in an interactive mode where our terminal is bound to the container. We should now be logged into the database!

![psql](./img/psql.png)

Execute the following SQL command to create a table participants:

    CREATE TABLE participants (name VARCHAR(255), age INT, score INT);

Of course, the table is currently empty, as we can verify with:

    SELECT * FROM participants;

![empty](./img/select-empty.png)

So, let us insert two entries via

    INSERT INTO participants (name, age, score) VALUES ('Lucy', 33, 8), ('Jack', 44, 12);

and check again:

    SELECT * FROM participants;

We should now see both entries!

![content](./img/select_content.png)

We end our database session using

    \q

To make sure our setup is actually persistent, let us stop our container

    docker stop db

and then restart it using

    docker start db

If we reenter the database and display the content of the table participants

    docker exec -it db psql -U me study
    SELECT * FROM participants;

we should still see both entries!
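For an even stronger persistence check, the sketch below removes the container entirely and recreates it from the image; the table survives because it lives in the `dbdata` volume, not in the container. (This is a sketch, assuming the image was built and tagged `db` as above; the `sleep` is a crude way of waiting for PostgreSQL to accept connections.)
```
# Remove the container completely - the named volume survives.
docker stop db
docker rm db

# Start a brand-new container attached to the same volume.
docker run -d -v dbdata:/var/lib/postgresql/data --name db db
sleep 5

# The table and both entries should still be there.
docker exec db psql -U me study -c 'SELECT * FROM participants;'
```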
This is the end of Chapter 3: we have successfully set up a database using Docker! To finish this workshop strongly, let us combine what we have learned in Chapters 2 and 3 in [Chapter 4](../chapter4/README.md)!

--------------------------------------------------------------------------------
/chapter1/README.md:
--------------------------------------------------------------------------------
# Chapter 1: First Docker Container!

Let us get started with the "Hello World!" of Docker. In this directory, take a closer look at the Dockerfile.
This is where we specify what our container will look like. A Dockerfile is a recipe to produce a Docker image. A Docker image is a file that can be used to spin up containers with a predefined configuration.

Our particular example of a Dockerfile is quite minimal - just two lines. The first line
```
FROM alpine:3.7
```
tells us which base Docker image we are using. The image `alpine:3.7` is a minimalistic Linux system. We are not modifying this base image at all but instead just specify a command we want to run when spinning up a container with this image:
```
CMD ["echo", "hello world!"]
```

## Building and Running Containers

Now that we have a Dockerfile, we can use it to create a Docker image via

    docker build -t hello-world .

This creates an image tagged `hello-world`. The `.` specifies that the image we are building is based on the Dockerfile in the current directory. We can convince ourselves that the image was built successfully by using the command:

    docker image ls

The output should look like this screenshot:
![imagels](./img/image.png)

which displays all current images. Now that we have an image, we can start a container based on it.
The command `docker run` starts a new container:

    docker run --name hello-world hello-world

The `--name` option allows us to name the container `hello-world`. The second `hello-world` directs Docker to use the image tagged `hello-world` to spin up the container. Our terminal should print `hello world!` as we start the container:

![run](./img/run.png)

Let us now see the list of running, active containers via

    docker ps

It is empty!

This is because our container did not have anything to do after printing `hello world!`. We can get more information if we add the `-a` option:

    docker ps -a

Now we can see that the container has an `exited` status.
In order to remove a (stopped) container, we can use the `rm` command:

    docker rm hello-world

Let us now use `docker run` again but add one option and one parameter:

    docker run -d -it --name hello-world hello-world /bin/sh

The option `-d` activates detached mode, a mode where the container runs in the background. The option `-it` runs the container in an interactive mode. The command after the image name, `/bin/sh`, directs `docker run` to execute it after creating the container. This particular command starts a shell process that waits for our input; it runs until we close it. Hence, when executing

    docker ps

we now see a running container!

![ps](./img/ps.png)

If we want to execute a command in a running container, we can use `docker exec`:

    docker exec hello-world ls

The command executes `ls` in the container `hello-world`. This shows us the file system of our container.
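By the way, `docker exec` can also give us an interactive shell inside the running container - a handy trick for exploring (this assumes the `hello-world` container from above is still running):
```
# Attach an interactive shell to the running container.
docker exec -it hello-world /bin/sh
# Explore the file system, then type `exit` to leave.
# The container itself keeps running afterwards.
```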
That was a lot of information! An important command we did not touch is `docker logs`; we encourage you to try that command yourself before moving on to the second chapter.

Continue now to [Chapter 2](../chapter2/README.md)!

--------------------------------------------------------------------------------
/chapter2/README.md:
--------------------------------------------------------------------------------
# Chapter 2: A Jupyter-Notebook Server!
We will now write a Dockerfile that generates an image that is fully configured to host a Jupyter notebook server.

Let us again check out the Dockerfile. There seems to be much more going on now! Let us take it step-by-step.

    FROM python:3.7

This one is easy: We start with a system that has Python 3.7 preinstalled. We now run `apt-get update` to make sure our system has updated package lists for everything we want to add to it.

    RUN apt-get update

We now use the `COPY` command: It allows us to copy files from our local machine into the Docker image.
In this case, we copy over `requirements.txt`. This text file contains all Python modules that we need to run a Jupyter notebook server (and it also contains `pandas` and `numpy`).

    COPY requirements.txt .

We now switch the working directory to the folder `/jupyter_notebooks` (this folder will be created by the `WORKDIR` command).
This is the folder where we want to save our notebooks.

    WORKDIR /jupyter_notebooks

The following command now installs all modules from `requirements.txt`.

    RUN pip install -r ../requirements.txt

Note that we have to go to the parent folder first before we find `requirements.txt`.
It would have been cleaner to run `pip install` before changing the working directory. We wrote this version for educational purposes.

Lastly, we want to actually start the jupyter-notebook server when we spin up a container.

    CMD jupyter-notebook --no-browser --ip="*" --allow-root --NotebookApp.allow_remote_access=True

## Running and storing

Building the image is as easy as it was in Chapter 1:

    docker build -t jupyter .

The run command will be trickier - let's dive right into it:

    docker run -p 8888:8888 -v notebooks:/jupyter_notebooks --name jupyter jupyter

We need the option `-p 8888:8888` to connect container port 8888 to port 8888 on our machine. The option `-v notebooks:/jupyter_notebooks` mounts the volume `notebooks` to the folder `/jupyter_notebooks`. What is a volume, you ask? In order to persist files created in containers, Docker can create volumes. These volumes can then be attached to one or more containers. If a running container stops, the content of the volume is stored on your machine's hard drive and reloaded when you start the container again. You can think of a volume as a network hard drive that you can attach to many different machines.
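As an aside, named volumes are managed by Docker itself. If you would rather keep the notebooks in a directory of your choice, a bind mount works too; here is a sketch (the local `./notebooks` folder is our own choice of path, and Docker will create it on the host if it is missing):
```
# Mount the local ./notebooks directory instead of a named volume.
# If a container named jupyter already exists, remove it first.
docker run -p 8888:8888 -v "$(pwd)/notebooks:/jupyter_notebooks" --name jupyter jupyter
```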
Go ahead into the notebook-server (follow the instructions printed on the terminal) and make a notebook. Note that our terminal is actively showing the log of our container. This is useful for debugging, but how do we stop the container from running? Let us use the web frontend of the jupyter server to stop the server and regain control over our terminal by clicking the quit button.

![server](./img/notebook-server.png)

Alternatively, we could have used

    docker stop jupyter

to stop the jupyter server.
Use

    docker rm jupyter

to remove the container. If you now rebuild and restart the container, you will notice that the notebook you just created is still there!

In order to test this, use the command

    docker run -d -p 8888:8888 -v notebooks:/jupyter_notebooks --name jupyter jupyter

We are now running it in detached mode, so we do not lose control over the terminal. However, we also do not see the prompt that gives us the address and the authorization token of the jupyter server. We can get it via

    docker logs jupyter

which accesses the logs of the jupyter container. We can now log into the notebook server again and verify that our notebook still exists.

![notebook](./img/notebook.png)

Let us stop the container using

    docker stop jupyter

We can also verify that our volume `notebooks` was created successfully via

    docker volume ls

and we can remove it via

    docker volume rm notebooks

This is the end of Chapter 2! We will now continue with building a container hosting a database in [Chapter 3](../chapter3/README.md)!

--------------------------------------------------------------------------------
/chapter4/README.md:
--------------------------------------------------------------------------------
# Chapter 4: Putting it all together via docker-compose!

While it is useful to run Docker containers by themselves, the real power of Docker is only unleashed once we build and orchestrate a whole platform consisting of multiple containers using container orchestration tools.
One of the best solutions for this is [Kubernetes](https://kubernetes.io/), but for the purpose of this workshop, it will suffice to use `docker-compose`, an orchestration tool that ships with our installation of Docker (if you are using Linux, you might have to install it separately).

Most of the files in the `jupyter` folder are directly taken from Chapter 2. However, we also now have a `docker-compose.yml` file!
Here `yml` stands for YAML, which originally stood for "Yet Another Markup Language" and now stands for "YAML Ain't Markup Language"; it is a format commonly used for configuration files.
Before looking at the rest of the repository, let us dive into the `docker-compose` file:

We first declare that we are using version 3 of the compose file format:

    version: '3'

We now define `services`. Each service uses an image to build one or more containers. We will only build one container per service, but you can imagine that having multiple containers running the same service can help serve millions of users!
The first service we define is a database service:
```
services:
  db:
    image: "postgres:9.6.5"
    volumes:
      - "dbdata2:/var/lib/postgresql/data"
      - ./init-tables.sh:/docker-entrypoint-initdb.d/init-tables.sh
    env_file:
      - env_file
    networks:
      - nw
```
This service uses the same image that we used in Chapter 3. The declaration of the first volume is also analogous to what we declared in Chapter 3. The second volume mounts the script `init-tables.sh` in the `chapter4` folder into the folder `docker-entrypoint-initdb.d`. This is a feature of the `postgres` image: any script in that folder will be used to set up the database when it is first started. We are using this feature to set up a table and insert two entries just like we did by hand in Chapter 3. Feel free to look into `init-tables.sh` to see that the commands are basically the same.
`docker-compose` allows us to declare a bunch of environment variables via a file - we have named this file `env_file` and it is similar to what we defined back in Chapter 3.
Lastly, we attach the `db` service to the network `nw`. More on this later! Let us first move on to the jupyter service.
```
jupyter-server:
  build: ./jupyter
  volumes:
    - "notebooks2:/jupyter_notebooks"
  ports:
    - 8888:8888
  env_file:
    - env_file
  networks:
    - nw
  depends_on:
    - db
```
Rather than using an image for this service, we give `docker-compose` a build path. This path leads to a `Dockerfile` nearly identical to the one we used in Chapter 2 (it additionally copies in an example notebook) and will allow `docker-compose` to create a jupyter server. The `ports` key ensures what the option `-p 8888:8888` ensured in `docker run`: that we can access the server from outside the container. To make it easy for us to connect from a jupyter notebook to the database service, we load the same environment variables. We also attach ourselves to the same network. Lastly, we declare that this service depends on the service `db`. This means that we will only start this service once the `db` service has started.

Let us now finally talk about networking! Now that we want to build a platform of containers, we need to declare networks and attach containers to them. The snippet
```
networks:
  nw:
    driver: bridge
```
declares the network `nw`. The `driver: bridge` option is the standard choice for networking containers on a single machine. Read [here](https://docs.docker.com/compose/compose-file/#networks) for more details.
As we attached both services to the same network, they will be able to find each other just by their name (in this case `db` and `jupyter-server` respectively).
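Once the platform is up (see the commands in the next section), you can convince yourself of this name resolution with a small sketch, run from the `chapter4` folder; we use plain Python here since it is already installed in the jupyter image:
```
# Resolve the service name "db" from inside the jupyter-server container.
docker-compose exec jupyter-server python -c "import socket; print(socket.gethostbyname('db'))"
```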
Lastly, we declare the volumes that we are using for persistent data storage:
```
volumes:
  dbdata2:
  notebooks2:
```

## Building and Running the Platform

While there was a lot of new material in Chapter 4, building and running looks surprisingly familiar:
```
docker-compose build
docker-compose up
```

Those two commands set up both our services and establish the network `nw`. Note that we added an example notebook in our `jupyter-server` service. Enter the server as usual and open it. You will find some commands that load the content of the database into a pandas DataFrame.

![dataframe](./img/ipython.png)

We have a working platform!

Note that we can connect to the database using `user@db:5432`; this is because, in our network `nw`, the database server can be found via its name `db`.

This completes our tutorial; we hope you enjoyed it! If you are interested in containerization technologies and concepts such as Infrastructure as Code, continuous integration and deployment, and scalable infrastructures, be sure to check out the [DevOps Engineering program](https://www.insightdevops.com) at Insight Data Science!

--------------------------------------------------------------------------------