├── .gitignore ├── .gitmodules ├── LICENSE ├── Makefile ├── README.md ├── Vagrantfile ├── docs ├── Makefile ├── README.md ├── conf.py ├── index.rst ├── linux │ ├── .buildinfo │ ├── _sources │ │ └── index.rst.txt │ ├── _static │ │ ├── alabaster.css │ │ ├── basic.css │ │ ├── custom.css │ │ ├── doctools.js │ │ ├── documentation_options.js │ │ ├── file.png │ │ ├── language_data.js │ │ ├── minus.png │ │ ├── plus.png │ │ ├── pygments.css │ │ ├── searchtools.js │ │ └── sphinx_highlight.js │ ├── genindex.html │ ├── index.html │ ├── objects.inv │ ├── py-modindex.html │ ├── search.html │ └── searchindex.js ├── prep-processes.md ├── prep-users.md └── prep.md ├── levels ├── 00_fork_exec │ ├── README.md │ └── rd.py ├── 01_chroot_image │ ├── README.md │ └── rd.py ├── 02_mount_ns │ ├── README.md │ └── rd.py ├── 03_pivot_root │ ├── README.md │ ├── breakout.py │ └── rd.py ├── 04_overlay │ ├── README.md │ └── rd.py ├── 05_uts_namespace │ ├── README.md │ └── rd.py ├── 06_pid_namespace │ ├── README.md │ └── rd.py ├── 07_net_namespace │ ├── README.md │ └── rd.py ├── 08_cpu_cgroup │ ├── README.md │ └── rd.py ├── 09_memory_cgorup │ ├── README.md │ └── rd.py ├── 10_setuid │ ├── README.md │ └── rd.py └── cleanup.sh ├── linux.c ├── packer ├── bootstrap.sh ├── on_boot.sh ├── rubber-docker.json └── vimrc ├── pyproject.toml ├── requirements.txt ├── setup.py └── slides ├── images ├── docker-architecture.svg ├── thats-all-folks.jpg └── there-is-no-container.jpg └── workshop.html /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a 
python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | 55 | # Sphinx documentation 56 | docs/_build/ 57 | 58 | # PyBuilder 59 | target/ 60 | 61 | #Ipython Notebook 62 | .ipynb_checkpoints 63 | 64 | #PyCharm 65 | .idea/ 66 | 67 | # Vagrant 68 | .vagrant 69 | -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Fewbytes/rubber-docker/2b3fca3ae4e6de1d3288edbdf3935399cb3c324c/.gitmodules -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2016 Fewbytes 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | PIP=pip 2 | 3 | .PHONY: wheel clean clean-docs build install docs build-tool 4 | 5 | build: build-tool 6 | python -m build 7 | 8 | install: 9 | $(PIP) install . 10 | 11 | docs: 12 | $(PIP) freeze | grep -q Sphinx || $(PIP) install Sphinx 13 | $(MAKE) -C docs html 14 | 15 | clean-docs: 16 | $(MAKE) -C docs clean 17 | 18 | CLEAN_LIST=build rubber_docker.egg-info $(wildcard *.whl) $(wildcard rubber-docker-*.tar.gz) 19 | clean: clean-docs 20 | rm -rf build $(CLEAN_LIST) 21 | 22 | build-tool: 23 | $(PIP) freeze |grep -q build || $(PIP) install build -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Docker From Scratch Workshop 2 | 3 | 4 | ## Preparatory Talk 5 | [The preparatory talk](https://docs.google.com/presentation/d/10vFQfEUvpf7qYyksNqiy-bAxcy-bvF0OnUElCOtTTRc/edit?usp=sharing) 6 | covers all the basics you'll need for this workshop, including: 7 | - Linux syscalls and glibc wrappers 8 | - chroot vs pivot_root 9 | - namespaces 10 | - cgroups 11 | - capabilities 12 | - and more 13 | 14 | ## The Workshop 15 | Use [the provided slides](https://github.com/Fewbytes/rubber-docker/tree/master/slides) while advancing through the levels, adding more features to your 
container. 16 | Remember to go over each level's README, and if things get rough - 17 | you can always find the solution for level N in the level N+1 skeleton. 18 | 19 | ## The linux python module 20 | Not all the necessary system calls are exposed in Python's standard library. 21 | In addition, we want to preserve the semantics of the system calls and use them as if we were writing C. 22 | We therefore wrote a Python module called *linux* (take a look at [linux.c](linux.c)) which exposes the relevant system calls. 23 | Have a look at the [module documentation](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html) for more info. 24 | 25 | ## Quickstart 26 | There are currently three options for starting the workshop by yourself: 27 | 1. We created a public AMI with the required configuration and utilities 28 | already installed: 29 | | Region | AMI | 30 | |--------|-----| 31 | | eu-central-1 | `ami-041c4af571b01d0f8` | 32 | | il-central-1 | `ami-036406540dcc4a690` | 33 | | us-east-1 | `ami-0cb446a6fd2678063` | 34 | | us-west-1 | `ami-0defd345b84194d79` | 35 | 1. We provide a [packer template](https://www.packer.io/) so you can create 36 | your own AMI. 37 | 1. We have a [Vagrantfile](https://www.vagrantup.com/) for you to run using 38 | your favorite virtual machine hypervisor (NOTE: not yet fully tested). 39 | 40 | The workshop material is checked out at `/workshop` on the instance: 41 | - `/workshop/rubber-docker` - this repository, where you do all the work 42 | - `/workshop/images` - images for containers, already populated with ubuntu and busybox images 43 | 44 | Before starting the workshop, go over the prep docs in the `docs` folder. 45 | 46 | Start the workshop at `/workshop/rubber-docker/levels/00_fork_exec`. 
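Level 00 starts from the classic fork/exec pattern that every later level builds on. Here is a minimal sketch using only Python's standard library (the function name `contain` is illustrative, not the workshop's actual `rd.py` code):

```python
import os

def contain(cmd, args):
    """Fork a child, exec the requested command in it, and wait.

    This is the bare skeleton that later levels extend with
    chroot, mount namespaces, cgroups, and so on.
    """
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the command.
        # execvp searches PATH and only returns on error.
        os.execvp(cmd, [cmd] + args)
    # Parent: block until the child exits, then decode its status.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)  # Python 3.9+

if __name__ == "__main__":
    contain("echo", ["hello from the child"])
```

The raw `fork`/`exec` calls are used deliberately instead of `subprocess`, because later levels need to run setup code (unshare, chroot, mounts) in the child between `fork()` and `exec()`.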
47 | 48 | ## Dev environment 49 | If you need to build and install the `linux` module: 50 | 51 | ```sh 52 | make install 53 | ``` 54 | 55 | If you want a distributable wheel package: 56 | ```sh 57 | make build 58 | ``` 59 | 60 | 61 | # PR stuff 62 | This workshop has been publicly given in many places since February 2016. 63 | 64 | - Opstalk meetup, Tel-Aviv, February 2016 65 | - DevOps Sydney meetup, Sydney, June 2016 66 | - DevOpsDays Amsterdam, Amsterdam, June 2016 67 | - SRECon EU, Dublin, July 2016 68 | - Sela Developer Practice, Tel-Aviv, June 2016 69 | - SRECon US, Santa Clara, March 2018 70 | - DevOpsDays Kiel, Kiel, May 2018 71 | 72 | # FAQ 73 | ### Why did you create this? 74 | Because we feel the only way to truly understand something is to build it from scratch - and Linux containers are a much-hyped and poorly understood technology. 75 | 76 | ### Can I use this repository to conduct my own public/private workshop? 77 | Of course! If you do, please consider letting us know on Twitter (@nukemberg and @nocoot) and of course send feedback. 78 | 79 | ### This workshop doesn't cover seccomp/user containers/whatever 80 | True - there is no way we can cover the entire feature set of a real container engine. We tried to concentrate on the things we believe are important for understanding how containers work. 81 | 82 | ### I found a bug! 83 | See Contributions below. 84 | 85 | 86 | # Contributions 87 | Contributions are welcome! If you find a bug or something to improve, feel free to open an issue or a pull request. Please note that the entire repository is under the MIT license, and your contribution will be under that license. 88 | 89 | # Sponsors 90 | We'd like to thank our friends at [Strigo.io](http://strigo.io/) for kindly providing their platform, and allowing us to deliver this and other workshops without worrying about infrastructure. 91 | If you plan to deliver this workshop yourself, we highly encourage you to [contact them](mailto:contact@strigo.io). 
92 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # All Vagrant configuration is done below. The "2" in Vagrant.configure 5 | # configures the configuration version (we support older styles for 6 | # backwards compatibility). Please don't change it unless you know what 7 | # you're doing. 8 | Vagrant.configure("2") do |config| 9 | 10 | config.vm.box = "ubuntu/xenial64" 11 | 12 | config.vm.provider "virtualbox" do |vb| 13 | vb.memory = "512" 14 | end 15 | 16 | config.vm.provision "shell", inline: <<-SHELL 17 | grep -q `hostname` /etc/hosts || echo 127.0.0.1 `hostname` |sudo tee -a /etc/hosts 18 | sudo bash /vagrant/packer/bootstrap.sh 19 | sudo bash /etc/rc.local 20 | SHELL 21 | end 22 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # Internal variables. 11 | PAPEROPT_a4 = -D latex_paper_size=a4 12 | PAPEROPT_letter = -D latex_paper_size=letter 13 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 14 | # the i18n builder cannot share the environment and doctrees with the others 15 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
16 | 17 | .PHONY: help 18 | help: 19 | @echo "Please use \`make <target>' where <target> is one of" 20 | @echo " html to make standalone HTML files" 21 | @echo " dirhtml to make HTML files named index.html in directories" 22 | @echo " singlehtml to make a single large HTML file" 23 | @echo " pickle to make pickle files" 24 | @echo " json to make JSON files" 25 | @echo " htmlhelp to make HTML files and a HTML help project" 26 | @echo " qthelp to make HTML files and a qthelp project" 27 | @echo " applehelp to make an Apple Help Book" 28 | @echo " devhelp to make HTML files and a Devhelp project" 29 | @echo " epub to make an epub" 30 | @echo " epub3 to make an epub3" 31 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 32 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 33 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 34 | @echo " text to make text files" 35 | @echo " man to make manual pages" 36 | @echo " texinfo to make Texinfo files" 37 | @echo " info to make Texinfo files and run them through makeinfo" 38 | @echo " gettext to make PO message catalogs" 39 | @echo " changes to make an overview of all changed/added/deprecated items" 40 | @echo " xml to make Docutils-native XML files" 41 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 42 | @echo " linkcheck to check all external links for integrity" 43 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 44 | @echo " coverage to run coverage check of the documentation (if enabled)" 45 | @echo " dummy to check syntax errors of document sources" 46 | 47 | .PHONY: clean 48 | clean: 49 | rm -rf $(BUILDDIR)/* 50 | 51 | .PHONY: html 52 | html: 53 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 54 | @echo 55 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 
56 | 57 | .PHONY: dirhtml 58 | dirhtml: 59 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 60 | @echo 61 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 62 | 63 | .PHONY: singlehtml 64 | singlehtml: 65 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 66 | @echo 67 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 68 | 69 | .PHONY: pickle 70 | pickle: 71 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 72 | @echo 73 | @echo "Build finished; now you can process the pickle files." 74 | 75 | .PHONY: json 76 | json: 77 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 78 | @echo 79 | @echo "Build finished; now you can process the JSON files." 80 | 81 | .PHONY: htmlhelp 82 | htmlhelp: 83 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 84 | @echo 85 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 86 | ".hhp project file in $(BUILDDIR)/htmlhelp." 87 | 88 | .PHONY: qthelp 89 | qthelp: 90 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 91 | @echo 92 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 93 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 94 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/RubberDocker.qhcp" 95 | @echo "To view the help file:" 96 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/RubberDocker.qhc" 97 | 98 | .PHONY: applehelp 99 | applehelp: 100 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp 101 | @echo 102 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." 103 | @echo "N.B. You won't be able to view it unless you put it in" \ 104 | "~/Library/Documentation/Help or install it in your application" \ 105 | "bundle." 106 | 107 | .PHONY: devhelp 108 | devhelp: 109 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 110 | @echo 111 | @echo "Build finished." 
112 | @echo "To view the help file:" 113 | @echo "# mkdir -p $$HOME/.local/share/devhelp/RubberDocker" 114 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/RubberDocker" 115 | @echo "# devhelp" 116 | 117 | .PHONY: epub 118 | epub: 119 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 120 | @echo 121 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 122 | 123 | .PHONY: epub3 124 | epub3: 125 | $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3 126 | @echo 127 | @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3." 128 | 129 | .PHONY: latex 130 | latex: 131 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 132 | @echo 133 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 134 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 135 | "(use \`make latexpdf' here to do that automatically)." 136 | 137 | .PHONY: latexpdf 138 | latexpdf: 139 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 140 | @echo "Running LaTeX files through pdflatex..." 141 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 142 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 143 | 144 | .PHONY: latexpdfja 145 | latexpdfja: 146 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 147 | @echo "Running LaTeX files through platex and dvipdfmx..." 148 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 149 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 150 | 151 | .PHONY: text 152 | text: 153 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 154 | @echo 155 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 156 | 157 | .PHONY: man 158 | man: 159 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 160 | @echo 161 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 162 | 163 | .PHONY: texinfo 164 | texinfo: 165 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 166 | @echo 167 | @echo "Build finished. 
The Texinfo files are in $(BUILDDIR)/texinfo." 168 | @echo "Run \`make' in that directory to run these through makeinfo" \ 169 | "(use \`make info' here to do that automatically)." 170 | 171 | .PHONY: info 172 | info: 173 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 174 | @echo "Running Texinfo files through makeinfo..." 175 | make -C $(BUILDDIR)/texinfo info 176 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 177 | 178 | .PHONY: gettext 179 | gettext: 180 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 181 | @echo 182 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 183 | 184 | .PHONY: changes 185 | changes: 186 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 187 | @echo 188 | @echo "The overview file is in $(BUILDDIR)/changes." 189 | 190 | .PHONY: linkcheck 191 | linkcheck: 192 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 193 | @echo 194 | @echo "Link check complete; look for any errors in the above output " \ 195 | "or in $(BUILDDIR)/linkcheck/output.txt." 196 | 197 | .PHONY: doctest 198 | doctest: 199 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 200 | @echo "Testing of doctests in the sources finished, look at the " \ 201 | "results in $(BUILDDIR)/doctest/output.txt." 202 | 203 | .PHONY: coverage 204 | coverage: 205 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage 206 | @echo "Testing of coverage in the sources finished, look at the " \ 207 | "results in $(BUILDDIR)/coverage/python.txt." 208 | 209 | .PHONY: xml 210 | xml: 211 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 212 | @echo 213 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 214 | 215 | .PHONY: pseudoxml 216 | pseudoxml: 217 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 218 | @echo 219 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 
220 | 221 | .PHONY: dummy 222 | dummy: 223 | $(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy 224 | @echo 225 | @echo "Build finished. Dummy builder generates no files." 226 | -------------------------------------------------------------------------------- /docs/README.md: -------------------------------------------------------------------------------- 1 | # rubber-docker docs 2 | 3 | This folder contains the preparatory and background docs for the workshop. 4 | 5 | Before taking the workshop, please read through the preparatory docs starting with [prep.md](prep.md) 6 | 7 | During the workshop, please use the [linux module documentation](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html) for additional syscall wrappers -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # Rubber Docker documentation build configuration file, created by 4 | # sphinx-quickstart on Sat Jun 18 21:18:39 2016. 5 | # 6 | # This file is execfile()d with the current directory set to its 7 | # containing dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 14 | 15 | # If extensions (or modules to document with autodoc) are in another directory, 16 | # add these directories to sys.path here. If the directory is relative to the 17 | # documentation root, use os.path.abspath to make it absolute, like shown here. 18 | # 19 | import os 20 | import sys 21 | # sys.path.insert(0, os.path.abspath('.')) 22 | sys.path.insert(0, os.path.abspath("..")) 23 | 24 | # -- General configuration ------------------------------------------------ 25 | 26 | # If your documentation needs a minimal Sphinx version, state it here. 
27 | # 28 | # needs_sphinx = '1.0' 29 | 30 | # Add any Sphinx extension module names here, as strings. They can be 31 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 32 | # ones. 33 | extensions = [ 34 | 'sphinx.ext.autodoc', 35 | ] 36 | 37 | # Add any paths that contain templates here, relative to this directory. 38 | templates_path = ['_templates'] 39 | 40 | # The suffix(es) of source filenames. 41 | # You can specify multiple suffix as a list of string: 42 | # 43 | # source_suffix = ['.rst', '.md'] 44 | source_suffix = '.rst' 45 | 46 | # The encoding of source files. 47 | # 48 | # source_encoding = 'utf-8-sig' 49 | 50 | # The master toctree document. 51 | master_doc = 'index' 52 | 53 | # General information about the project. 54 | project = u'Rubber Docker' 55 | copyright = u'2016, Avishai Ish-Shalom, Nati Cohen' 56 | author = u'Avishai Ish-Shalom, Nati Cohen' 57 | 58 | # The version info for the project you're documenting, acts as replacement for 59 | # |version| and |release|, also used in various other places throughout the 60 | # built documents. 61 | # 62 | # The short X.Y version. 63 | version = u'0.1' 64 | # The full version, including alpha/beta/rc tags. 65 | release = u'0.1' 66 | 67 | # The language for content autogenerated by Sphinx. Refer to documentation 68 | # for a list of supported languages. 69 | # 70 | # This is also used if you do content translation via gettext catalogs. 71 | # Usually you set "language" from the command line for these cases. 72 | language = "en" 73 | 74 | # There are two options for replacing |today|: either, you set today to some 75 | # non-false value, then it is used: 76 | # 77 | # today = '' 78 | # 79 | # Else, today_fmt is used as the format for a strftime call. 80 | # 81 | # today_fmt = '%B %d, %Y' 82 | 83 | # List of patterns, relative to source directory, that match files and 84 | # directories to ignore when looking for source files. 
85 | # This patterns also effect to html_static_path and html_extra_path 86 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 87 | 88 | # The reST default role (used for this markup: `text`) to use for all 89 | # documents. 90 | # 91 | # default_role = None 92 | 93 | # If true, '()' will be appended to :func: etc. cross-reference text. 94 | # 95 | # add_function_parentheses = True 96 | 97 | # If true, the current module name will be prepended to all description 98 | # unit titles (such as .. function::). 99 | # 100 | # add_module_names = True 101 | 102 | # If true, sectionauthor and moduleauthor directives will be shown in the 103 | # output. They are ignored by default. 104 | # 105 | # show_authors = False 106 | 107 | # The name of the Pygments (syntax highlighting) style to use. 108 | pygments_style = 'sphinx' 109 | 110 | # A list of ignored prefixes for module index sorting. 111 | # modindex_common_prefix = [] 112 | 113 | # If true, keep warnings as "system message" paragraphs in the built documents. 114 | # keep_warnings = False 115 | 116 | # If true, `todo` and `todoList` produce output, else they produce nothing. 117 | todo_include_todos = False 118 | 119 | 120 | # -- Options for HTML output ---------------------------------------------- 121 | 122 | # The theme to use for HTML and HTML Help pages. See the documentation for 123 | # a list of builtin themes. 124 | # 125 | html_theme = 'alabaster' 126 | 127 | # Theme options are theme-specific and customize the look and feel of a theme 128 | # further. For a list of options available for each theme, see the 129 | # documentation. 130 | # 131 | # html_theme_options = {} 132 | 133 | # Add any paths that contain custom themes here, relative to this directory. 134 | # html_theme_path = [] 135 | 136 | # The name for this set of Sphinx documents. 137 | # "<project> v<release> documentation" by default. 138 | # 139 | # html_title = u'Rubber Docker v0.1' 140 | 141 | # A shorter title for the navigation bar. 
Default is the same as html_title. 142 | # 143 | # html_short_title = None 144 | 145 | # The name of an image file (relative to this directory) to place at the top 146 | # of the sidebar. 147 | # 148 | # html_logo = None 149 | 150 | # The name of an image file (relative to this directory) to use as a favicon of 151 | # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 152 | # pixels large. 153 | # 154 | # html_favicon = None 155 | 156 | # Add any paths that contain custom static files (such as style sheets) here, 157 | # relative to this directory. They are copied after the builtin static files, 158 | # so a file named "default.css" will overwrite the builtin "default.css". 159 | html_static_path = [] 160 | 161 | # Add any extra paths that contain custom files (such as robots.txt or 162 | # .htaccess) here, relative to this directory. These files are copied 163 | # directly to the root of the documentation. 164 | # 165 | # html_extra_path = [] 166 | 167 | # If not None, a 'Last updated on:' timestamp is inserted at every page 168 | # bottom, using the given strftime format. 169 | # The empty string is equivalent to '%b %d, %Y'. 170 | # 171 | # html_last_updated_fmt = None 172 | 173 | # If true, SmartyPants will be used to convert quotes and dashes to 174 | # typographically correct entities. 175 | # 176 | # html_use_smartypants = True 177 | 178 | # Custom sidebar templates, maps document names to template names. 179 | # 180 | # html_sidebars = {} 181 | 182 | # Additional templates that should be rendered to pages, maps page names to 183 | # template names. 184 | # 185 | # html_additional_pages = {} 186 | 187 | # If false, no module index is generated. 188 | # 189 | # html_domain_indices = True 190 | 191 | # If false, no index is generated. 192 | # 193 | # html_use_index = True 194 | 195 | # If true, the index is split into individual pages for each letter. 
196 | # 197 | # html_split_index = False 198 | 199 | # If true, links to the reST sources are added to the pages. 200 | # 201 | # html_show_sourcelink = True 202 | 203 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 204 | # 205 | # html_show_sphinx = True 206 | 207 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 208 | # 209 | # html_show_copyright = True 210 | 211 | # If true, an OpenSearch description file will be output, and all pages will 212 | # contain a <link> tag referring to it. The value of this option must be the 213 | # base URL from which the finished HTML is served. 214 | # 215 | # html_use_opensearch = '' 216 | 217 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 218 | # html_file_suffix = None 219 | 220 | # Language to be used for generating the HTML full-text search index. 221 | # Sphinx supports the following languages: 222 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' 223 | # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh' 224 | # 225 | # html_search_language = 'en' 226 | 227 | # A dictionary with options for the search language support, empty by default. 228 | # 'ja' uses this config value. 229 | # 'zh' user can custom change `jieba` dictionary path. 230 | # 231 | # html_search_options = {'type': 'default'} 232 | 233 | # The name of a javascript file (relative to the configuration directory) that 234 | # implements a search results scorer. If empty, the default will be used. 235 | # 236 | # html_search_scorer = 'scorer.js' 237 | 238 | # Output file base name for HTML help builder. 239 | htmlhelp_basename = 'RubberDockerdoc' 240 | 241 | # -- Options for LaTeX output --------------------------------------------- 242 | 243 | latex_elements = { 244 | # The paper size ('letterpaper' or 'a4paper'). 245 | # 246 | # 'papersize': 'letterpaper', 247 | 248 | # The font size ('10pt', '11pt' or '12pt'). 
249 | # 250 | # 'pointsize': '10pt', 251 | 252 | # Additional stuff for the LaTeX preamble. 253 | # 254 | # 'preamble': '', 255 | 256 | # Latex figure (float) alignment 257 | # 258 | # 'figure_align': 'htbp', 259 | } 260 | 261 | # Grouping the document tree into LaTeX files. List of tuples 262 | # (source start file, target name, title, 263 | # author, documentclass [howto, manual, or own class]). 264 | latex_documents = [ 265 | (master_doc, 'RubberDocker.tex', u'Rubber Docker Documentation', 266 | u'Avishai Ish-Shalom, Nati Cohen', 'manual'), 267 | ] 268 | 269 | # The name of an image file (relative to this directory) to place at the top of 270 | # the title page. 271 | # 272 | # latex_logo = None 273 | 274 | # For "manual" documents, if this is true, then toplevel headings are parts, 275 | # not chapters. 276 | # 277 | # latex_use_parts = False 278 | 279 | # If true, show page references after internal links. 280 | # 281 | # latex_show_pagerefs = False 282 | 283 | # If true, show URL addresses after external links. 284 | # 285 | # latex_show_urls = False 286 | 287 | # Documents to append as an appendix to all manuals. 288 | # 289 | # latex_appendices = [] 290 | 291 | # If false, no module index is generated. 292 | # 293 | # latex_domain_indices = True 294 | 295 | 296 | # -- Options for manual page output --------------------------------------- 297 | 298 | # One entry per manual page. List of tuples 299 | # (source start file, name, description, authors, manual section). 300 | man_pages = [ 301 | (master_doc, 'rubberdocker', u'Rubber Docker Documentation', 302 | [author], 1) 303 | ] 304 | 305 | # If true, show URL addresses after external links. 306 | # 307 | # man_show_urls = False 308 | 309 | 310 | # -- Options for Texinfo output ------------------------------------------- 311 | 312 | # Grouping the document tree into Texinfo files. 
List of tuples 313 | # (source start file, target name, title, author, 314 | # dir menu entry, description, category) 315 | texinfo_documents = [ 316 | (master_doc, 'RubberDocker', u'Rubber Docker Documentation', 317 | author, 'RubberDocker', 'One line description of project.', 318 | 'Miscellaneous'), 319 | ] 320 | 321 | # Documents to append as an appendix to all manuals. 322 | # 323 | # texinfo_appendices = [] 324 | 325 | # If false, no module index is generated. 326 | # 327 | # texinfo_domain_indices = True 328 | 329 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 330 | # 331 | # texinfo_show_urls = 'footnote' 332 | 333 | # If true, do not generate a @detailmenu in the "Top" node's menu. 334 | # 335 | # texinfo_no_detailmenu = False 336 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | Docker from Scratch workshop 2 | ============================ 3 | 4 | .. automodule:: linux 5 | :members: 6 | -------------------------------------------------------------------------------- /docs/linux/.buildinfo: -------------------------------------------------------------------------------- 1 | # Sphinx build info version 1 2 | # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. 3 | config: 9192838a7f90bcbccb3e5df098ecfde8 4 | tags: 645f666f9bcd5a90fca523b33c5a78b7 5 | -------------------------------------------------------------------------------- /docs/linux/_sources/index.rst.txt: -------------------------------------------------------------------------------- 1 | Docker from Scratch workshop 2 | ============================ 3 | 4 | .. 
automodule:: linux 5 | :members: 6 | -------------------------------------------------------------------------------- /docs/linux/_static/alabaster.css: -------------------------------------------------------------------------------- 1 | @import url("basic.css"); 2 | 3 | /* -- page layout ----------------------------------------------------------- */ 4 | 5 | body { 6 | font-family: Georgia, serif; 7 | font-size: 17px; 8 | background-color: #fff; 9 | color: #000; 10 | margin: 0; 11 | padding: 0; 12 | } 13 | 14 | 15 | div.document { 16 | width: 940px; 17 | margin: 30px auto 0 auto; 18 | } 19 | 20 | div.documentwrapper { 21 | float: left; 22 | width: 100%; 23 | } 24 | 25 | div.bodywrapper { 26 | margin: 0 0 0 220px; 27 | } 28 | 29 | div.sphinxsidebar { 30 | width: 220px; 31 | font-size: 14px; 32 | line-height: 1.5; 33 | } 34 | 35 | hr { 36 | border: 1px solid #B1B4B6; 37 | } 38 | 39 | div.body { 40 | background-color: #fff; 41 | color: #3E4349; 42 | padding: 0 30px 0 30px; 43 | } 44 | 45 | div.body > .section { 46 | text-align: left; 47 | } 48 | 49 | div.footer { 50 | width: 940px; 51 | margin: 20px auto 30px auto; 52 | font-size: 14px; 53 | color: #888; 54 | text-align: right; 55 | } 56 | 57 | div.footer a { 58 | color: #888; 59 | } 60 | 61 | p.caption { 62 | font-family: inherit; 63 | font-size: inherit; 64 | } 65 | 66 | 67 | div.relations { 68 | display: none; 69 | } 70 | 71 | 72 | div.sphinxsidebar { 73 | max-height: 100%; 74 | overflow-y: auto; 75 | } 76 | 77 | div.sphinxsidebar a { 78 | color: #444; 79 | text-decoration: none; 80 | border-bottom: 1px dotted #999; 81 | } 82 | 83 | div.sphinxsidebar a:hover { 84 | border-bottom: 1px solid #999; 85 | } 86 | 87 | div.sphinxsidebarwrapper { 88 | padding: 18px 10px; 89 | } 90 | 91 | div.sphinxsidebarwrapper p.logo { 92 | padding: 0; 93 | margin: -10px 0 0 0px; 94 | text-align: center; 95 | } 96 | 97 | div.sphinxsidebarwrapper h1.logo { 98 | margin-top: -10px; 99 | text-align: center; 100 | margin-bottom: 5px; 101 | 
text-align: left; 102 | } 103 | 104 | div.sphinxsidebarwrapper h1.logo-name { 105 | margin-top: 0px; 106 | } 107 | 108 | div.sphinxsidebarwrapper p.blurb { 109 | margin-top: 0; 110 | font-style: normal; 111 | } 112 | 113 | div.sphinxsidebar h3, 114 | div.sphinxsidebar h4 { 115 | font-family: Georgia, serif; 116 | color: #444; 117 | font-size: 24px; 118 | font-weight: normal; 119 | margin: 0 0 5px 0; 120 | padding: 0; 121 | } 122 | 123 | div.sphinxsidebar h4 { 124 | font-size: 20px; 125 | } 126 | 127 | div.sphinxsidebar h3 a { 128 | color: #444; 129 | } 130 | 131 | div.sphinxsidebar p.logo a, 132 | div.sphinxsidebar h3 a, 133 | div.sphinxsidebar p.logo a:hover, 134 | div.sphinxsidebar h3 a:hover { 135 | border: none; 136 | } 137 | 138 | div.sphinxsidebar p { 139 | color: #555; 140 | margin: 10px 0; 141 | } 142 | 143 | div.sphinxsidebar ul { 144 | margin: 10px 0; 145 | padding: 0; 146 | color: #000; 147 | } 148 | 149 | div.sphinxsidebar ul li.toctree-l1 > a { 150 | font-size: 120%; 151 | } 152 | 153 | div.sphinxsidebar ul li.toctree-l2 > a { 154 | font-size: 110%; 155 | } 156 | 157 | div.sphinxsidebar input { 158 | border: 1px solid #CCC; 159 | font-family: Georgia, serif; 160 | font-size: 1em; 161 | } 162 | 163 | div.sphinxsidebar #searchbox input[type="text"] { 164 | width: 160px; 165 | } 166 | 167 | div.sphinxsidebar .search > div { 168 | display: table-cell; 169 | } 170 | 171 | div.sphinxsidebar hr { 172 | border: none; 173 | height: 1px; 174 | color: #AAA; 175 | background: #AAA; 176 | 177 | text-align: left; 178 | margin-left: 0; 179 | width: 50%; 180 | } 181 | 182 | div.sphinxsidebar .badge { 183 | border-bottom: none; 184 | } 185 | 186 | div.sphinxsidebar .badge:hover { 187 | border-bottom: none; 188 | } 189 | 190 | /* To address an issue with donation coming after search */ 191 | div.sphinxsidebar h3.donation { 192 | margin-top: 10px; 193 | } 194 | 195 | /* -- body styles ----------------------------------------------------------- */ 196 | 197 | a { 198 | 
color: #004B6B; 199 | text-decoration: underline; 200 | } 201 | 202 | a:hover { 203 | color: #6D4100; 204 | text-decoration: underline; 205 | } 206 | 207 | div.body h1, 208 | div.body h2, 209 | div.body h3, 210 | div.body h4, 211 | div.body h5, 212 | div.body h6 { 213 | font-family: Georgia, serif; 214 | font-weight: normal; 215 | margin: 30px 0px 10px 0px; 216 | padding: 0; 217 | } 218 | 219 | div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; } 220 | div.body h2 { font-size: 180%; } 221 | div.body h3 { font-size: 150%; } 222 | div.body h4 { font-size: 130%; } 223 | div.body h5 { font-size: 100%; } 224 | div.body h6 { font-size: 100%; } 225 | 226 | a.headerlink { 227 | color: #DDD; 228 | padding: 0 4px; 229 | text-decoration: none; 230 | } 231 | 232 | a.headerlink:hover { 233 | color: #444; 234 | background: #EAEAEA; 235 | } 236 | 237 | div.body p, div.body dd, div.body li { 238 | line-height: 1.4em; 239 | } 240 | 241 | div.admonition { 242 | margin: 20px 0px; 243 | padding: 10px 30px; 244 | background-color: #EEE; 245 | border: 1px solid #CCC; 246 | } 247 | 248 | div.admonition tt.xref, div.admonition code.xref, div.admonition a tt { 249 | background-color: #FBFBFB; 250 | border-bottom: 1px solid #fafafa; 251 | } 252 | 253 | div.admonition p.admonition-title { 254 | font-family: Georgia, serif; 255 | font-weight: normal; 256 | font-size: 24px; 257 | margin: 0 0 10px 0; 258 | padding: 0; 259 | line-height: 1; 260 | } 261 | 262 | div.admonition p.last { 263 | margin-bottom: 0; 264 | } 265 | 266 | div.highlight { 267 | background-color: #fff; 268 | } 269 | 270 | dt:target, .highlight { 271 | background: #FAF3E8; 272 | } 273 | 274 | div.warning { 275 | background-color: #FCC; 276 | border: 1px solid #FAA; 277 | } 278 | 279 | div.danger { 280 | background-color: #FCC; 281 | border: 1px solid #FAA; 282 | -moz-box-shadow: 2px 2px 4px #D52C2C; 283 | -webkit-box-shadow: 2px 2px 4px #D52C2C; 284 | box-shadow: 2px 2px 4px #D52C2C; 285 | } 286 | 287 | div.error { 
288 | background-color: #FCC; 289 | border: 1px solid #FAA; 290 | -moz-box-shadow: 2px 2px 4px #D52C2C; 291 | -webkit-box-shadow: 2px 2px 4px #D52C2C; 292 | box-shadow: 2px 2px 4px #D52C2C; 293 | } 294 | 295 | div.caution { 296 | background-color: #FCC; 297 | border: 1px solid #FAA; 298 | } 299 | 300 | div.attention { 301 | background-color: #FCC; 302 | border: 1px solid #FAA; 303 | } 304 | 305 | div.important { 306 | background-color: #EEE; 307 | border: 1px solid #CCC; 308 | } 309 | 310 | div.note { 311 | background-color: #EEE; 312 | border: 1px solid #CCC; 313 | } 314 | 315 | div.tip { 316 | background-color: #EEE; 317 | border: 1px solid #CCC; 318 | } 319 | 320 | div.hint { 321 | background-color: #EEE; 322 | border: 1px solid #CCC; 323 | } 324 | 325 | div.seealso { 326 | background-color: #EEE; 327 | border: 1px solid #CCC; 328 | } 329 | 330 | div.topic { 331 | background-color: #EEE; 332 | } 333 | 334 | p.admonition-title { 335 | display: inline; 336 | } 337 | 338 | p.admonition-title:after { 339 | content: ":"; 340 | } 341 | 342 | pre, tt, code { 343 | font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; 344 | font-size: 0.9em; 345 | } 346 | 347 | .hll { 348 | background-color: #FFC; 349 | margin: 0 -12px; 350 | padding: 0 12px; 351 | display: block; 352 | } 353 | 354 | img.screenshot { 355 | } 356 | 357 | tt.descname, tt.descclassname, code.descname, code.descclassname { 358 | font-size: 0.95em; 359 | } 360 | 361 | tt.descname, code.descname { 362 | padding-right: 0.08em; 363 | } 364 | 365 | img.screenshot { 366 | -moz-box-shadow: 2px 2px 4px #EEE; 367 | -webkit-box-shadow: 2px 2px 4px #EEE; 368 | box-shadow: 2px 2px 4px #EEE; 369 | } 370 | 371 | table.docutils { 372 | border: 1px solid #888; 373 | -moz-box-shadow: 2px 2px 4px #EEE; 374 | -webkit-box-shadow: 2px 2px 4px #EEE; 375 | box-shadow: 2px 2px 4px #EEE; 376 | } 377 | 378 | table.docutils td, table.docutils th { 379 | border: 1px solid #888; 380 | padding: 
0.25em 0.7em; 381 | } 382 | 383 | table.field-list, table.footnote { 384 | border: none; 385 | -moz-box-shadow: none; 386 | -webkit-box-shadow: none; 387 | box-shadow: none; 388 | } 389 | 390 | table.footnote { 391 | margin: 15px 0; 392 | width: 100%; 393 | border: 1px solid #EEE; 394 | background: #FDFDFD; 395 | font-size: 0.9em; 396 | } 397 | 398 | table.footnote + table.footnote { 399 | margin-top: -15px; 400 | border-top: none; 401 | } 402 | 403 | table.field-list th { 404 | padding: 0 0.8em 0 0; 405 | } 406 | 407 | table.field-list td { 408 | padding: 0; 409 | } 410 | 411 | table.field-list p { 412 | margin-bottom: 0.8em; 413 | } 414 | 415 | /* Cloned from 416 | * https://github.com/sphinx-doc/sphinx/commit/ef60dbfce09286b20b7385333d63a60321784e68 417 | */ 418 | .field-name { 419 | -moz-hyphens: manual; 420 | -ms-hyphens: manual; 421 | -webkit-hyphens: manual; 422 | hyphens: manual; 423 | } 424 | 425 | table.footnote td.label { 426 | width: .1px; 427 | padding: 0.3em 0 0.3em 0.5em; 428 | } 429 | 430 | table.footnote td { 431 | padding: 0.3em 0.5em; 432 | } 433 | 434 | dl { 435 | margin-left: 0; 436 | margin-right: 0; 437 | margin-top: 0; 438 | padding: 0; 439 | } 440 | 441 | dl dd { 442 | margin-left: 30px; 443 | } 444 | 445 | blockquote { 446 | margin: 0 0 0 30px; 447 | padding: 0; 448 | } 449 | 450 | ul, ol { 451 | /* Matches the 30px from the narrow-screen "li > ul" selector below */ 452 | margin: 10px 0 10px 30px; 453 | padding: 0; 454 | } 455 | 456 | pre { 457 | background: #EEE; 458 | padding: 7px 30px; 459 | margin: 15px 0px; 460 | line-height: 1.3em; 461 | } 462 | 463 | div.viewcode-block:target { 464 | background: #ffd; 465 | } 466 | 467 | dl pre, blockquote pre, li pre { 468 | margin-left: 0; 469 | padding-left: 30px; 470 | } 471 | 472 | tt, code { 473 | background-color: #ecf0f3; 474 | color: #222; 475 | /* padding: 1px 2px; */ 476 | } 477 | 478 | tt.xref, code.xref, a tt { 479 | background-color: #FBFBFB; 480 | border-bottom: 1px solid #fff; 481 | 
} 482 | 483 | a.reference { 484 | text-decoration: none; 485 | border-bottom: 1px dotted #004B6B; 486 | } 487 | 488 | /* Don't put an underline on images */ 489 | a.image-reference, a.image-reference:hover { 490 | border-bottom: none; 491 | } 492 | 493 | a.reference:hover { 494 | border-bottom: 1px solid #6D4100; 495 | } 496 | 497 | a.footnote-reference { 498 | text-decoration: none; 499 | font-size: 0.7em; 500 | vertical-align: top; 501 | border-bottom: 1px dotted #004B6B; 502 | } 503 | 504 | a.footnote-reference:hover { 505 | border-bottom: 1px solid #6D4100; 506 | } 507 | 508 | a:hover tt, a:hover code { 509 | background: #EEE; 510 | } 511 | 512 | 513 | @media screen and (max-width: 870px) { 514 | 515 | div.sphinxsidebar { 516 | display: none; 517 | } 518 | 519 | div.document { 520 | width: 100%; 521 | 522 | } 523 | 524 | div.documentwrapper { 525 | margin-left: 0; 526 | margin-top: 0; 527 | margin-right: 0; 528 | margin-bottom: 0; 529 | } 530 | 531 | div.bodywrapper { 532 | margin-top: 0; 533 | margin-right: 0; 534 | margin-bottom: 0; 535 | margin-left: 0; 536 | } 537 | 538 | ul { 539 | margin-left: 0; 540 | } 541 | 542 | li > ul { 543 | /* Matches the 30px from the "ul, ol" selector above */ 544 | margin-left: 30px; 545 | } 546 | 547 | .document { 548 | width: auto; 549 | } 550 | 551 | .footer { 552 | width: auto; 553 | } 554 | 555 | .bodywrapper { 556 | margin: 0; 557 | } 558 | 559 | .footer { 560 | width: auto; 561 | } 562 | 563 | .github { 564 | display: none; 565 | } 566 | 567 | 568 | 569 | } 570 | 571 | 572 | 573 | @media screen and (max-width: 875px) { 574 | 575 | body { 576 | margin: 0; 577 | padding: 20px 30px; 578 | } 579 | 580 | div.documentwrapper { 581 | float: none; 582 | background: #fff; 583 | } 584 | 585 | div.sphinxsidebar { 586 | display: block; 587 | float: none; 588 | width: 102.5%; 589 | margin: 50px -30px -20px -30px; 590 | padding: 10px 20px; 591 | background: #333; 592 | color: #FFF; 593 | } 594 | 595 | div.sphinxsidebar h3, 
div.sphinxsidebar h4, div.sphinxsidebar p, 596 | div.sphinxsidebar h3 a { 597 | color: #fff; 598 | } 599 | 600 | div.sphinxsidebar a { 601 | color: #AAA; 602 | } 603 | 604 | div.sphinxsidebar p.logo { 605 | display: none; 606 | } 607 | 608 | div.document { 609 | width: 100%; 610 | margin: 0; 611 | } 612 | 613 | div.footer { 614 | display: none; 615 | } 616 | 617 | div.bodywrapper { 618 | margin: 0; 619 | } 620 | 621 | div.body { 622 | min-height: 0; 623 | padding: 0; 624 | } 625 | 626 | .rtd_doc_footer { 627 | display: none; 628 | } 629 | 630 | .document { 631 | width: auto; 632 | } 633 | 634 | .footer { 635 | width: auto; 636 | } 637 | 638 | .footer { 639 | width: auto; 640 | } 641 | 642 | .github { 643 | display: none; 644 | } 645 | } 646 | 647 | 648 | /* misc. */ 649 | 650 | .revsys-inline { 651 | display: none!important; 652 | } 653 | 654 | /* Hide ugly table cell borders in ..bibliography:: directive output */ 655 | table.docutils.citation, table.docutils.citation td, table.docutils.citation th { 656 | border: none; 657 | /* Below needed in some edge cases; if not applied, bottom shadows appear */ 658 | -moz-box-shadow: none; 659 | -webkit-box-shadow: none; 660 | box-shadow: none; 661 | } 662 | 663 | 664 | /* relbar */ 665 | 666 | .related { 667 | line-height: 30px; 668 | width: 100%; 669 | font-size: 0.9rem; 670 | } 671 | 672 | .related.top { 673 | border-bottom: 1px solid #EEE; 674 | margin-bottom: 20px; 675 | } 676 | 677 | .related.bottom { 678 | border-top: 1px solid #EEE; 679 | } 680 | 681 | .related ul { 682 | padding: 0; 683 | margin: 0; 684 | list-style: none; 685 | } 686 | 687 | .related li { 688 | display: inline; 689 | } 690 | 691 | nav#rellinks { 692 | float: right; 693 | } 694 | 695 | nav#rellinks li+li:before { 696 | content: "|"; 697 | } 698 | 699 | nav#breadcrumbs li+li:before { 700 | content: "\00BB"; 701 | } 702 | 703 | /* Hide certain items when printing */ 704 | @media print { 705 | div.related { 706 | display: none; 707 | } 708 | } 
-------------------------------------------------------------------------------- /docs/linux/_static/custom.css: -------------------------------------------------------------------------------- 1 | /* This file intentionally left blank. */ 2 | -------------------------------------------------------------------------------- /docs/linux/_static/doctools.js: -------------------------------------------------------------------------------- 1 | /* 2 | * doctools.js 3 | * ~~~~~~~~~~~ 4 | * 5 | * Base JavaScript utilities for all Sphinx HTML documentation. 6 | * 7 | * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. 8 | * :license: BSD, see LICENSE for details. 9 | * 10 | */ 11 | "use strict"; 12 | 13 | const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ 14 | "TEXTAREA", 15 | "INPUT", 16 | "SELECT", 17 | "BUTTON", 18 | ]); 19 | 20 | const _ready = (callback) => { 21 | if (document.readyState !== "loading") { 22 | callback(); 23 | } else { 24 | document.addEventListener("DOMContentLoaded", callback); 25 | } 26 | }; 27 | 28 | /** 29 | * Small JavaScript module for the documentation. 30 | */ 31 | const Documentation = { 32 | init: () => { 33 | Documentation.initDomainIndexTable(); 34 | Documentation.initOnKeyListeners(); 35 | }, 36 | 37 | /** 38 | * i18n support 39 | */ 40 | TRANSLATIONS: {}, 41 | PLURAL_EXPR: (n) => (n === 1 ? 
0 : 1), 42 | LOCALE: "unknown", 43 | 44 | // gettext and ngettext don't access this so that the functions 45 | // can safely bound to a different name (_ = Documentation.gettext) 46 | gettext: (string) => { 47 | const translated = Documentation.TRANSLATIONS[string]; 48 | switch (typeof translated) { 49 | case "undefined": 50 | return string; // no translation 51 | case "string": 52 | return translated; // translation exists 53 | default: 54 | return translated[0]; // (singular, plural) translation tuple exists 55 | } 56 | }, 57 | 58 | ngettext: (singular, plural, n) => { 59 | const translated = Documentation.TRANSLATIONS[singular]; 60 | if (typeof translated !== "undefined") 61 | return translated[Documentation.PLURAL_EXPR(n)]; 62 | return n === 1 ? singular : plural; 63 | }, 64 | 65 | addTranslations: (catalog) => { 66 | Object.assign(Documentation.TRANSLATIONS, catalog.messages); 67 | Documentation.PLURAL_EXPR = new Function( 68 | "n", 69 | `return (${catalog.plural_expr})` 70 | ); 71 | Documentation.LOCALE = catalog.locale; 72 | }, 73 | 74 | /** 75 | * helper function to focus on search bar 76 | */ 77 | focusSearchBar: () => { 78 | document.querySelectorAll("input[name=q]")[0]?.focus(); 79 | }, 80 | 81 | /** 82 | * Initialise the domain index toggle buttons 83 | */ 84 | initDomainIndexTable: () => { 85 | const toggler = (el) => { 86 | const idNumber = el.id.substr(7); 87 | const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); 88 | if (el.src.substr(-9) === "minus.png") { 89 | el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; 90 | toggledRows.forEach((el) => (el.style.display = "none")); 91 | } else { 92 | el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; 93 | toggledRows.forEach((el) => (el.style.display = "")); 94 | } 95 | }; 96 | 97 | const togglerElements = document.querySelectorAll("img.toggler"); 98 | togglerElements.forEach((el) => 99 | el.addEventListener("click", (event) => toggler(event.currentTarget)) 100 | ); 101 | 
togglerElements.forEach((el) => (el.style.display = "")); 102 | if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); 103 | }, 104 | 105 | initOnKeyListeners: () => { 106 | // only install a listener if it is really needed 107 | if ( 108 | !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && 109 | !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS 110 | ) 111 | return; 112 | 113 | document.addEventListener("keydown", (event) => { 114 | // bail for input elements 115 | if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; 116 | // bail with special keys 117 | if (event.altKey || event.ctrlKey || event.metaKey) return; 118 | 119 | if (!event.shiftKey) { 120 | switch (event.key) { 121 | case "ArrowLeft": 122 | if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; 123 | 124 | const prevLink = document.querySelector('link[rel="prev"]'); 125 | if (prevLink && prevLink.href) { 126 | window.location.href = prevLink.href; 127 | event.preventDefault(); 128 | } 129 | break; 130 | case "ArrowRight": 131 | if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; 132 | 133 | const nextLink = document.querySelector('link[rel="next"]'); 134 | if (nextLink && nextLink.href) { 135 | window.location.href = nextLink.href; 136 | event.preventDefault(); 137 | } 138 | break; 139 | } 140 | } 141 | 142 | // some keyboard layouts may need Shift to get / 143 | switch (event.key) { 144 | case "/": 145 | if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; 146 | Documentation.focusSearchBar(); 147 | event.preventDefault(); 148 | } 149 | }); 150 | }, 151 | }; 152 | 153 | // quick alias for translations 154 | const _ = Documentation.gettext; 155 | 156 | _ready(Documentation.init); 157 | -------------------------------------------------------------------------------- /docs/linux/_static/documentation_options.js: -------------------------------------------------------------------------------- 1 | const DOCUMENTATION_OPTIONS = { 2 | VERSION: '0.1', 3 | 
LANGUAGE: 'en', 4 | COLLAPSE_INDEX: false, 5 | BUILDER: 'html', 6 | FILE_SUFFIX: '.html', 7 | LINK_SUFFIX: '.html', 8 | HAS_SOURCE: true, 9 | SOURCELINK_SUFFIX: '.txt', 10 | NAVIGATION_WITH_KEYS: false, 11 | SHOW_SEARCH_SUMMARY: true, 12 | ENABLE_SEARCH_SHORTCUTS: true, 13 | }; -------------------------------------------------------------------------------- /docs/linux/_static/file.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Fewbytes/rubber-docker/2b3fca3ae4e6de1d3288edbdf3935399cb3c324c/docs/linux/_static/file.png -------------------------------------------------------------------------------- /docs/linux/_static/language_data.js: -------------------------------------------------------------------------------- 1 | /* 2 | * language_data.js 3 | * ~~~~~~~~~~~~~~~~ 4 | * 5 | * This script contains the language-specific data used by searchtools.js, 6 | * namely the list of stopwords, stemmer, scorer and splitter. 7 | * 8 | * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. 9 | * :license: BSD, see LICENSE for details. 
10 | * 11 | */ 12 | 13 | var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"]; 14 | 15 | 16 | /* Non-minified version is copied as a separate JS file, if available */ 17 | 18 | /** 19 | * Porter Stemmer 20 | */ 21 | var Stemmer = function() { 22 | 23 | var step2list = { 24 | ational: 'ate', 25 | tional: 'tion', 26 | enci: 'ence', 27 | anci: 'ance', 28 | izer: 'ize', 29 | bli: 'ble', 30 | alli: 'al', 31 | entli: 'ent', 32 | eli: 'e', 33 | ousli: 'ous', 34 | ization: 'ize', 35 | ation: 'ate', 36 | ator: 'ate', 37 | alism: 'al', 38 | iveness: 'ive', 39 | fulness: 'ful', 40 | ousness: 'ous', 41 | aliti: 'al', 42 | iviti: 'ive', 43 | biliti: 'ble', 44 | logi: 'log' 45 | }; 46 | 47 | var step3list = { 48 | icate: 'ic', 49 | ative: '', 50 | alize: 'al', 51 | iciti: 'ic', 52 | ical: 'ic', 53 | ful: '', 54 | ness: '' 55 | }; 56 | 57 | var c = "[^aeiou]"; // consonant 58 | var v = "[aeiouy]"; // vowel 59 | var C = c + "[^aeiouy]*"; // consonant sequence 60 | var V = v + "[aeiou]*"; // vowel sequence 61 | 62 | var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0 63 | var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 64 | var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 65 | var s_v = "^(" + C + ")?" 
+ v; // vowel in stem 66 | 67 | this.stemWord = function (w) { 68 | var stem; 69 | var suffix; 70 | var firstch; 71 | var origword = w; 72 | 73 | if (w.length < 3) 74 | return w; 75 | 76 | var re; 77 | var re2; 78 | var re3; 79 | var re4; 80 | 81 | firstch = w.substr(0,1); 82 | if (firstch == "y") 83 | w = firstch.toUpperCase() + w.substr(1); 84 | 85 | // Step 1a 86 | re = /^(.+?)(ss|i)es$/; 87 | re2 = /^(.+?)([^s])s$/; 88 | 89 | if (re.test(w)) 90 | w = w.replace(re,"$1$2"); 91 | else if (re2.test(w)) 92 | w = w.replace(re2,"$1$2"); 93 | 94 | // Step 1b 95 | re = /^(.+?)eed$/; 96 | re2 = /^(.+?)(ed|ing)$/; 97 | if (re.test(w)) { 98 | var fp = re.exec(w); 99 | re = new RegExp(mgr0); 100 | if (re.test(fp[1])) { 101 | re = /.$/; 102 | w = w.replace(re,""); 103 | } 104 | } 105 | else if (re2.test(w)) { 106 | var fp = re2.exec(w); 107 | stem = fp[1]; 108 | re2 = new RegExp(s_v); 109 | if (re2.test(stem)) { 110 | w = stem; 111 | re2 = /(at|bl|iz)$/; 112 | re3 = new RegExp("([^aeiouylsz])\\1$"); 113 | re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); 114 | if (re2.test(w)) 115 | w = w + "e"; 116 | else if (re3.test(w)) { 117 | re = /.$/; 118 | w = w.replace(re,""); 119 | } 120 | else if (re4.test(w)) 121 | w = w + "e"; 122 | } 123 | } 124 | 125 | // Step 1c 126 | re = /^(.+?)y$/; 127 | if (re.test(w)) { 128 | var fp = re.exec(w); 129 | stem = fp[1]; 130 | re = new RegExp(s_v); 131 | if (re.test(stem)) 132 | w = stem + "i"; 133 | } 134 | 135 | // Step 2 136 | re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; 137 | if (re.test(w)) { 138 | var fp = re.exec(w); 139 | stem = fp[1]; 140 | suffix = fp[2]; 141 | re = new RegExp(mgr0); 142 | if (re.test(stem)) 143 | w = stem + step2list[suffix]; 144 | } 145 | 146 | // Step 3 147 | re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; 148 | if (re.test(w)) { 149 | var fp = re.exec(w); 150 | stem = fp[1]; 151 | suffix = fp[2]; 152 | re = 
new RegExp(mgr0); 153 | if (re.test(stem)) 154 | w = stem + step3list[suffix]; 155 | } 156 | 157 | // Step 4 158 | re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; 159 | re2 = /^(.+?)(s|t)(ion)$/; 160 | if (re.test(w)) { 161 | var fp = re.exec(w); 162 | stem = fp[1]; 163 | re = new RegExp(mgr1); 164 | if (re.test(stem)) 165 | w = stem; 166 | } 167 | else if (re2.test(w)) { 168 | var fp = re2.exec(w); 169 | stem = fp[1] + fp[2]; 170 | re2 = new RegExp(mgr1); 171 | if (re2.test(stem)) 172 | w = stem; 173 | } 174 | 175 | // Step 5 176 | re = /^(.+?)e$/; 177 | if (re.test(w)) { 178 | var fp = re.exec(w); 179 | stem = fp[1]; 180 | re = new RegExp(mgr1); 181 | re2 = new RegExp(meq1); 182 | re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); 183 | if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) 184 | w = stem; 185 | } 186 | re = /ll$/; 187 | re2 = new RegExp(mgr1); 188 | if (re.test(w) && re2.test(w)) { 189 | re = /.$/; 190 | w = w.replace(re,""); 191 | } 192 | 193 | // and turn initial Y back to y 194 | if (firstch == "y") 195 | w = firstch.toLowerCase() + w.substr(1); 196 | return w; 197 | } 198 | } 199 | 200 | -------------------------------------------------------------------------------- /docs/linux/_static/minus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Fewbytes/rubber-docker/2b3fca3ae4e6de1d3288edbdf3935399cb3c324c/docs/linux/_static/minus.png -------------------------------------------------------------------------------- /docs/linux/_static/plus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Fewbytes/rubber-docker/2b3fca3ae4e6de1d3288edbdf3935399cb3c324c/docs/linux/_static/plus.png -------------------------------------------------------------------------------- /docs/linux/_static/pygments.css: 
-------------------------------------------------------------------------------- 1 | pre { line-height: 125%; } 2 | td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } 3 | span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } 4 | td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } 5 | span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } 6 | .highlight .hll { background-color: #ffffcc } 7 | .highlight { background: #eeffcc; } 8 | .highlight .c { color: #408090; font-style: italic } /* Comment */ 9 | .highlight .err { border: 1px solid #FF0000 } /* Error */ 10 | .highlight .k { color: #007020; font-weight: bold } /* Keyword */ 11 | .highlight .o { color: #666666 } /* Operator */ 12 | .highlight .ch { color: #408090; font-style: italic } /* Comment.Hashbang */ 13 | .highlight .cm { color: #408090; font-style: italic } /* Comment.Multiline */ 14 | .highlight .cp { color: #007020 } /* Comment.Preproc */ 15 | .highlight .cpf { color: #408090; font-style: italic } /* Comment.PreprocFile */ 16 | .highlight .c1 { color: #408090; font-style: italic } /* Comment.Single */ 17 | .highlight .cs { color: #408090; background-color: #fff0f0 } /* Comment.Special */ 18 | .highlight .gd { color: #A00000 } /* Generic.Deleted */ 19 | .highlight .ge { font-style: italic } /* Generic.Emph */ 20 | .highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ 21 | .highlight .gr { color: #FF0000 } /* Generic.Error */ 22 | .highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ 23 | .highlight .gi { color: #00A000 } /* Generic.Inserted */ 24 | .highlight .go { color: #333333 } /* Generic.Output */ 25 | .highlight .gp { color: #c65d09; font-weight: bold } /* Generic.Prompt */ 26 | .highlight .gs { font-weight: bold } /* Generic.Strong */ 27 | 
.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ 28 | .highlight .gt { color: #0044DD } /* Generic.Traceback */ 29 | .highlight .kc { color: #007020; font-weight: bold } /* Keyword.Constant */ 30 | .highlight .kd { color: #007020; font-weight: bold } /* Keyword.Declaration */ 31 | .highlight .kn { color: #007020; font-weight: bold } /* Keyword.Namespace */ 32 | .highlight .kp { color: #007020 } /* Keyword.Pseudo */ 33 | .highlight .kr { color: #007020; font-weight: bold } /* Keyword.Reserved */ 34 | .highlight .kt { color: #902000 } /* Keyword.Type */ 35 | .highlight .m { color: #208050 } /* Literal.Number */ 36 | .highlight .s { color: #4070a0 } /* Literal.String */ 37 | .highlight .na { color: #4070a0 } /* Name.Attribute */ 38 | .highlight .nb { color: #007020 } /* Name.Builtin */ 39 | .highlight .nc { color: #0e84b5; font-weight: bold } /* Name.Class */ 40 | .highlight .no { color: #60add5 } /* Name.Constant */ 41 | .highlight .nd { color: #555555; font-weight: bold } /* Name.Decorator */ 42 | .highlight .ni { color: #d55537; font-weight: bold } /* Name.Entity */ 43 | .highlight .ne { color: #007020 } /* Name.Exception */ 44 | .highlight .nf { color: #06287e } /* Name.Function */ 45 | .highlight .nl { color: #002070; font-weight: bold } /* Name.Label */ 46 | .highlight .nn { color: #0e84b5; font-weight: bold } /* Name.Namespace */ 47 | .highlight .nt { color: #062873; font-weight: bold } /* Name.Tag */ 48 | .highlight .nv { color: #bb60d5 } /* Name.Variable */ 49 | .highlight .ow { color: #007020; font-weight: bold } /* Operator.Word */ 50 | .highlight .w { color: #bbbbbb } /* Text.Whitespace */ 51 | .highlight .mb { color: #208050 } /* Literal.Number.Bin */ 52 | .highlight .mf { color: #208050 } /* Literal.Number.Float */ 53 | .highlight .mh { color: #208050 } /* Literal.Number.Hex */ 54 | .highlight .mi { color: #208050 } /* Literal.Number.Integer */ 55 | .highlight .mo { color: #208050 } /* Literal.Number.Oct */ 56 | .highlight 
.sa { color: #4070a0 } /* Literal.String.Affix */ 57 | .highlight .sb { color: #4070a0 } /* Literal.String.Backtick */ 58 | .highlight .sc { color: #4070a0 } /* Literal.String.Char */ 59 | .highlight .dl { color: #4070a0 } /* Literal.String.Delimiter */ 60 | .highlight .sd { color: #4070a0; font-style: italic } /* Literal.String.Doc */ 61 | .highlight .s2 { color: #4070a0 } /* Literal.String.Double */ 62 | .highlight .se { color: #4070a0; font-weight: bold } /* Literal.String.Escape */ 63 | .highlight .sh { color: #4070a0 } /* Literal.String.Heredoc */ 64 | .highlight .si { color: #70a0d0; font-style: italic } /* Literal.String.Interpol */ 65 | .highlight .sx { color: #c65d09 } /* Literal.String.Other */ 66 | .highlight .sr { color: #235388 } /* Literal.String.Regex */ 67 | .highlight .s1 { color: #4070a0 } /* Literal.String.Single */ 68 | .highlight .ss { color: #517918 } /* Literal.String.Symbol */ 69 | .highlight .bp { color: #007020 } /* Name.Builtin.Pseudo */ 70 | .highlight .fm { color: #06287e } /* Name.Function.Magic */ 71 | .highlight .vc { color: #bb60d5 } /* Name.Variable.Class */ 72 | .highlight .vg { color: #bb60d5 } /* Name.Variable.Global */ 73 | .highlight .vi { color: #bb60d5 } /* Name.Variable.Instance */ 74 | .highlight .vm { color: #bb60d5 } /* Name.Variable.Magic */ 75 | .highlight .il { color: #208050 } /* Literal.Number.Integer.Long */ -------------------------------------------------------------------------------- /docs/linux/_static/sphinx_highlight.js: -------------------------------------------------------------------------------- 1 | /* Highlighting utilities for Sphinx HTML documentation. */ 2 | "use strict"; 3 | 4 | const SPHINX_HIGHLIGHT_ENABLED = true 5 | 6 | /** 7 | * highlight a given string on a node by wrapping it in 8 | * span elements with the given class name. 
9 | */ 10 | const _highlight = (node, addItems, text, className) => { 11 | if (node.nodeType === Node.TEXT_NODE) { 12 | const val = node.nodeValue; 13 | const parent = node.parentNode; 14 | const pos = val.toLowerCase().indexOf(text); 15 | if ( 16 | pos >= 0 && 17 | !parent.classList.contains(className) && 18 | !parent.classList.contains("nohighlight") 19 | ) { 20 | let span; 21 | 22 | const closestNode = parent.closest("body, svg, foreignObject"); 23 | const isInSVG = closestNode && closestNode.matches("svg"); 24 | if (isInSVG) { 25 | span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); 26 | } else { 27 | span = document.createElement("span"); 28 | span.classList.add(className); 29 | } 30 | 31 | span.appendChild(document.createTextNode(val.substr(pos, text.length))); 32 | const rest = document.createTextNode(val.substr(pos + text.length)); 33 | parent.insertBefore( 34 | span, 35 | parent.insertBefore( 36 | rest, 37 | node.nextSibling 38 | ) 39 | ); 40 | node.nodeValue = val.substr(0, pos); 41 | /* There may be more occurrences of search term in this node. So call this 42 | * function recursively on the remaining fragment. 
43 | */ 44 | _highlight(rest, addItems, text, className); 45 | 46 | if (isInSVG) { 47 | const rect = document.createElementNS( 48 | "http://www.w3.org/2000/svg", 49 | "rect" 50 | ); 51 | const bbox = parent.getBBox(); 52 | rect.x.baseVal.value = bbox.x; 53 | rect.y.baseVal.value = bbox.y; 54 | rect.width.baseVal.value = bbox.width; 55 | rect.height.baseVal.value = bbox.height; 56 | rect.setAttribute("class", className); 57 | addItems.push({ parent: parent, target: rect }); 58 | } 59 | } 60 | } else if (node.matches && !node.matches("button, select, textarea")) { 61 | node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); 62 | } 63 | }; 64 | const _highlightText = (thisNode, text, className) => { 65 | let addItems = []; 66 | _highlight(thisNode, addItems, text, className); 67 | addItems.forEach((obj) => 68 | obj.parent.insertAdjacentElement("beforebegin", obj.target) 69 | ); 70 | }; 71 | 72 | /** 73 | * Small JavaScript module for the documentation. 74 | */ 75 | const SphinxHighlight = { 76 | 77 | /** 78 | * highlight the search words provided in localstorage in the text 79 | */ 80 | highlightSearchWords: () => { 81 | if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight 82 | 83 | // get and clear terms from localstorage 84 | const url = new URL(window.location); 85 | const highlight = 86 | localStorage.getItem("sphinx_highlight_terms") 87 | || url.searchParams.get("highlight") 88 | || ""; 89 | localStorage.removeItem("sphinx_highlight_terms") 90 | url.searchParams.delete("highlight"); 91 | window.history.replaceState({}, "", url); 92 | 93 | // get individual terms from highlight string 94 | const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); 95 | if (terms.length === 0) return; // nothing to do 96 | 97 | // There should never be more than one element matching "div.body" 98 | const divBody = document.querySelectorAll("div.body"); 99 | const body = divBody.length ? 
divBody[0] : document.querySelector("body"); 100 | window.setTimeout(() => { 101 | terms.forEach((term) => _highlightText(body, term, "highlighted")); 102 | }, 10); 103 | 104 | const searchBox = document.getElementById("searchbox"); 105 | if (searchBox === null) return; 106 | searchBox.appendChild( 107 | document 108 | .createRange() 109 | .createContextualFragment( 110 | '" 114 | ) 115 | ); 116 | }, 117 | 118 | /** 119 | * helper function to hide the search marks again 120 | */ 121 | hideSearchWords: () => { 122 | document 123 | .querySelectorAll("#searchbox .highlight-link") 124 | .forEach((el) => el.remove()); 125 | document 126 | .querySelectorAll("span.highlighted") 127 | .forEach((el) => el.classList.remove("highlighted")); 128 | localStorage.removeItem("sphinx_highlight_terms") 129 | }, 130 | 131 | initEscapeListener: () => { 132 | // only install a listener if it is really needed 133 | if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; 134 | 135 | document.addEventListener("keydown", (event) => { 136 | // bail for input elements 137 | if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; 138 | // bail with special keys 139 | if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; 140 | if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { 141 | SphinxHighlight.hideSearchWords(); 142 | event.preventDefault(); 143 | } 144 | }); 145 | }, 146 | }; 147 | 148 | _ready(() => { 149 | /* Do not call highlightSearchWords() when we are on the search page. 150 | * It will highlight words from the *previous* search query. 
151 | */ 152 | if (typeof Search === "undefined") SphinxHighlight.highlightSearchWords(); 153 | SphinxHighlight.initEscapeListener(); 154 | }); 155 | -------------------------------------------------------------------------------- /docs/linux/genindex.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Index — Rubber Docker 0.1 documentation 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
26 |
27 |
28 | 29 | 30 |
31 | 32 | 33 |

Index

34 | 35 |
36 | C 37 | | L 38 | | M 39 | | P 40 | | S 41 | | U 42 | 43 |
44 |

C

45 | 46 | 50 |
51 | 52 |

L

53 | 54 | 63 |
    55 |
  • 56 | linux 57 | 58 |
  • 62 |
64 | 65 |

M

66 | 67 | 76 | 80 |
    68 |
  • 69 | module 70 | 71 |
  • 75 |
81 | 82 |

P

83 | 84 | 88 |
89 | 90 |

S

91 | 92 | 96 | 100 |
101 | 102 |

U

103 | 104 | 108 | 114 |
115 | 116 | 117 | 118 |
119 | 120 |
121 |
122 | 162 |
163 |
164 | 172 | 173 | 174 | 175 | 176 | 177 | -------------------------------------------------------------------------------- /docs/linux/objects.inv: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Fewbytes/rubber-docker/2b3fca3ae4e6de1d3288edbdf3935399cb3c324c/docs/linux/objects.inv -------------------------------------------------------------------------------- /docs/linux/py-modindex.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Python Module Index — Rubber Docker 0.1 documentation 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 27 | 28 | 29 | 30 | 31 | 32 |
33 |
34 |
35 | 36 | 37 |
38 | 39 | 40 |

Python Module Index

41 | 42 |
43 | l 44 |
45 | 46 | 47 | 48 | 50 | 51 | 52 | 55 |
 
49 | l
53 | linux 54 |
56 | 57 | 58 |
59 | 60 |
61 |
62 | 102 |
103 |
104 | 112 | 113 | 114 | 115 | 116 | 117 | -------------------------------------------------------------------------------- /docs/linux/search.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Search — Rubber Docker 0.1 documentation 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 |
33 |
34 |
35 | 36 | 37 |
38 | 39 |

Search

40 | 41 | 49 | 50 | 51 |

52 | Searching for multiple words only shows matches that contain 53 | all words. 54 |

55 | 56 | 57 |
58 | 59 | 60 | 61 |
62 | 63 | 64 |
65 | 66 | 67 |
68 | 69 |
70 |
71 | 101 |
102 |
103 | 111 | 112 | 113 | 114 | 115 | 116 | -------------------------------------------------------------------------------- /docs/linux/searchindex.js: -------------------------------------------------------------------------------- 1 | Search.setIndex({"alltitles": {"Docker from Scratch workshop": [[0, null]], "linux": [[0, "linux"]]}, "docnames": ["index"], "envversion": {"sphinx": 62, "sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2}, "filenames": ["index.rst"], "indexentries": {"clone() (in module linux)": [[0, "id0", false], [0, "linux.clone", false]], "linux": [[0, "module-linux", false]], "module": [[0, "module-linux", false]], "mount() (in module linux)": [[0, "id1", false], [0, "linux.mount", false]], "pivot_root() (in module linux)": [[0, "id2", false], [0, "linux.pivot_root", false]], "sethostname() (in module linux)": [[0, "id3", false], [0, "linux.sethostname", false]], "setns() (in module linux)": [[0, "id4", false], [0, "linux.setns", false]], "umount() (in module linux)": [[0, "id5", false], [0, "linux.umount", false]], "umount2() (in module linux)": [[0, "id6", false], [0, "linux.umount2", false]], "unshare() (in module linux)": [[0, "id7", false], [0, "linux.unshare", false]]}, "objects": {"": [[0, 0, 0, "-", "linux"]], "linux": [[0, 1, 1, "id0", "clone"], [0, 1, 1, "id1", "mount"], [0, 1, 1, "id2", "pivot_root"], [0, 1, 1, "id3", "sethostname"], [0, 1, 1, "id4", "setns"], [0, 1, 1, "id5", "umount"], [0, 1, 1, "id6", "umount2"], [0, 1, 1, "id7", "unshare"]]}, "objnames": {"0": ["py", "module", "Python module"], "1": ["py", "function", "Python function"]}, "objtypes": {"0": "py:module", "1": "py:function"}, "terms": {"0": 0, "2": 0, "For": 0, "No": 0, "On": 0, "The": 0, "ad": 0, "addit": 0, "allow": 0, "ani": 0, "appli": 0, "ar": 
0, "argument": 0, "aspect": 0, "associ": 0, "attach": 0, "behavior": 0, "being": 0, "below": 0, "between": 0, "c": 0, "call": 0, "callabl": 0, "callback": 0, "callback_arg": 0, "can": 0, "case": 0, "chang": 0, "child": 0, "clone": 0, "clone_newipc": 0, "clone_newn": 0, "clone_newnet": 0, "clone_newpid": 0, "clone_newut": 0, "combin": 0, "contain": 0, "context": 0, "control": 0, "creat": 0, "current": 0, "descriptor": 0, "differ": 0, "directori": 0, "disassoci": 0, "domainnam": 0, "dure": 0, "e": 0, "etc": 0, "execut": 0, "extens": 0, "fail": 0, "fd": 0, "file": 0, "filesystem": 0, "filesystemtyp": 0, "flag": 0, "follow": 0, "fork": 0, "function": 0, "hostnam": 0, "i": 0, "id": 0, "implement": 0, "int": 0, "ipc": 0, "join": 0, "kernel": 0, "like": 0, "mai": 0, "manipul": 0, "might": 0, "miss": 0, "mnt_detach": 0, "modul": 0, "most": 0, "mount": 0, "mountflag": 0, "mountopt": 0, "mountpoint": 0, "move": 0, "ms_privat": 0, "ms_rec": 0, "multipl": 0, "must": 0, "namespac": 0, "need": 0, "network": 0, "new": 0, "new_root": 0, "none": 0, "nonzero": 0, "note": 0, "nstype": 0, "number": 0, "o": 0, "one": 0, "oper": 0, "option": 0, "other": 0, "paramet": 0, "part": 0, "pass": 0, "pid": 0, "pivot_root": 0, "point": 0, "process": 0, "put_old": 0, "python": 0, "rais": 0, "reassoci": 0, "refer": 0, "remov": 0, "restrict": 0, "return": 0, "root": 0, "runtimeerror": 0, "same": 0, "see": 0, "set": 0, "sethostnam": 0, "setn": 0, "share": 0, "should": 0, "simpl": 0, "sourc": 0, "specifi": 0, "str": 0, "string": 0, "success": 0, "support": 0, "syscal": 0, "system": 0, "target": 0, "thei": 0, "thi": 0, "thread": 0, "topmost": 0, "tupl": 0, "type": 0, "umount": 0, "umount2": 0, "underneath": 0, "unmount": 0, "unshar": 0, "us": 0, "ut": 0, "valu": 0, "want": 0, "what": 0, "which": 0, "wrapper": 0, "yield": 0, "you": 0}, "titles": ["Docker from Scratch workshop"], "titleterms": {"docker": 0, "from": 0, "linux": 0, "scratch": 0, "workshop": 0}}) 
-------------------------------------------------------------------------------- /docs/prep-processes.md: -------------------------------------------------------------------------------- 1 | # Linux processes 2 | 3 | A _process_ is an operating system concept describing a task with a separate memory space and resources. 4 | In Linux, _processes_ are created using the `clone()` system call, which clones an existing process to create a new one. 5 | The `clone()` call accepts various flags which tell Linux what resources to share/copy with the original process. 6 | Usually, the `clone()` system call is not used directly; instead we use POSIX calls (like `fork()`) which are implemented in _glibc_ (userspace). 7 | In fact, most of the Linux and POSIX interfaces we use are implemented in _glibc_, not in the kernel. 8 | 9 | The `fork()` call we know and love creates a _process_ by calling `clone()` with a bunch of flags. Threads are created using the `pthread_create()` call. 10 | 11 | Under the hood, both threads and processes are tasks and are represented by a struct called (surprise surprise) *task_struct*. *task_struct* has about 170 fields and is around 1 KB in size. Some notable fields are: _*user_, _pid_, _tgid_, _*files_, _*fs_, _*nsproxy_ 12 | - *fs_struct* _*fs_ holds information on the current root, working directory, umask, etc. 13 | - the *pid* struct maps processes to one or more tasks 14 | 15 | ## Processes - fork & exec 16 | 17 | Traditionally, \*nix systems created new processes by issuing the following calls in order: 18 | 1. *fork()* - duplicate the current process; the VM is copy-on-write 19 | 1. *exec()* - replace text/data/bss/stack with the new program image 20 | 21 | After calling *exec()*, the new process image starts executing from its entrypoint (the main function) and the new command line arguments (argv) are passed to it.
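As a minimal sketch (the same pattern the workshop builds on later), the fork-exec-wait sequence looks like this in Python; error handling is omitted for brevity:

```python
import os

pid = os.fork()  # under the hood: clone(), with a copy-on-write VM
if pid == 0:
    # Child: replace this process image with /bin/echo.
    # execv() never returns on success (it raises OSError on failure).
    os.execv('/bin/echo', ['/bin/echo', 'hello from the child'])
else:
    # Parent: reap the child so it doesn't linger as a zombie.
    _, status = os.waitpid(pid, 0)
    print('child {} exited with status {}'.format(pid, os.WEXITSTATUS(status)))
```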
22 | 23 | - glibc's *fork()* and *pthread_create()* both call the clone() syscall 24 | - *clone()* creates a new *task_struct* from the parent and 25 | controls resource sharing via flags (e.g. share VM, share/copy fds) 26 | -------------------------------------------------------------------------------- /docs/prep-users.md: -------------------------------------------------------------------------------- 1 | # Users 2 | 3 | From the kernel's PoV, a user is an `int` parameter in various structs. 4 | A process (*task_struct*) has several uid fields: *ruid*, *suid*, *euid*, *fsuid*. 5 | 6 | There is no need to "add" or "create" new users - since a "user" is just an int parameter, we can simply assign any value to it. There are only two types of users: 7 | - uid 0, aka _root_ 8 | - everyone else 9 | 10 | More on that in the _capabilities_ section. 11 | 12 | User names are a userspace feature of which the kernel is completely oblivious. Largely implemented in `libnss` and `glibc`, user names are a mapping from human-friendly names to uid numbers, managed in `/etc/passwd` and `/etc/shadow`. 13 | Commands like `useradd` manipulate `/etc/passwd` and `/etc/shadow`. 14 | If there's no matching entry for a uid in `/etc/passwd`, everything still works; we just won't have the mapping to a human-friendly name. E.g., try the following: 15 | 16 | ``` 17 | touch /tmp/test 18 | sudo chown 29311 /tmp/test 19 | ls -lh /tmp/test 20 | ``` 21 | 22 | ## User permissions 23 | The kernel uses uid (and gid) numbers to decide if a process is permitted to perform certain actions. For example, if a process is trying to `open()` a file, the *fsuid* of the process is compared with the owner uid of the file (and its permission mask). 24 | 25 | But how does a process change its uid fields? When a process is cloned it inherits its uid fields from the parent, and can then call `setuid()` or similar to change them. A process can only change its *ruid* (real uid) if it is currently uid 0 (root).
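The uid fields can be inspected (and, with enough privilege, changed) from Python. The sketch below is illustrative; uid 65534 is just an arbitrary example value (it is commonly, but not necessarily, mapped to `nobody`):

```python
import os

# Every process carries a real, effective and saved uid; the kernel
# also tracks fsuid, which follows the effective uid by default.
ruid, euid, suid = os.getresuid()
print('ruid={} euid={} suid={}'.format(ruid, euid, suid))

# Only root (euid 0) may assign an arbitrary uid - and note that the
# uid does not have to exist in /etc/passwd; it is just an int.
if euid == 0:
    os.setuid(65534)
    print('now running as uid', os.getuid())
```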
26 | 27 | There are no identity checks (NFS, I'm looking at you) 28 | -------------------------------------------------------------------------------- /docs/prep.md: -------------------------------------------------------------------------------- 1 | # The basics of Linux containers 2 | 3 | ## What is a Linux container? 4 | 5 | A container, or "O/S virtualization" as it is sometimes known, refers to an isolated group of processes in an O/S. Let's take postgres as an example service. 6 | 7 | Suppose we have postgres running in a container. 8 | Postgres spawns a process for every connection it holds, so all the processes of a postgres instance must have access to the same resources. 9 | But we don't want other processes (e.g. Apache) to have access to - or even be able to see - the resources (e.g. memory) postgres is using, and we would also like to limit the amount of resources postgres is allowed to consume. 10 | 11 | In addition, we would also like to abstract each postgres instance's view of the O/S so it doesn't concern itself with the peculiarities of the specific host it is running on. 12 | For example, postgres stores its data in `/var/lib/postgres` and we would like to preserve that regardless of how many postgres instances are running on that host. 13 | 14 | So to sum up, this is what we want from a container: 15 | - isolation 16 | - abstraction 17 | - resource constraints 18 | 19 | Traditionally, sysadmins used users and filesystem permissions for isolation. 20 | Abstraction was done using `chroot`, and resource constraints were managed using `rlimit`. 21 | This was far from satisfactory, as evidenced by the growing popularity of virtual machines. 22 | To make things more manageable, we want the kernel to provide a mechanism which will achieve the above. 23 | 24 | Unfortunately, such a mechanism does not exist in Linux. 25 | Instead, we have a few independent mechanisms which we can orchestrate together to achieve various levels of isolation, abstraction and resource constraints.
26 | We have: 27 | - namespaces 28 | - cgroups 29 | - chroot/pivot_root 30 | - seccomp 31 | - AppArmor/SELinux 32 | 33 | Thus, a "Linux container" is not a well-defined entity. 34 | From the kernel perspective, there is no such thing as a container, just a bunch of processes with namespaces, cgroups and so on. 35 | 36 | To understand how these mechanisms work, it's a good idea to revisit how the relevant Linux primitives work: 37 | - [Processes](prep-processes.md) 38 | - [Users](prep-users.md) 39 | - [Mounts](prep-mounts.md) 40 | - [chroot/pivot_root](prep-chroot.md) 41 | - [Memory management](prep-memory.md) 42 | 43 | After going over the primitives, let's see how the new mechanisms work: 44 | - [Namespaces](prep-namespaces.md) 45 | - [cgroups](prep-cgroups.md) 46 | - [seccomp](prep-seccomp.md) 47 | - [capabilities](prep-capabilities.md) 48 | -------------------------------------------------------------------------------- /levels/00_fork_exec/README.md: -------------------------------------------------------------------------------- 1 | # Level 00: fork & exec 2 | 3 | In this level we will get to know `rd.py` - the skeleton of our rubber-docker. 4 | `rd.py` already implements one CLI command - *run* - using the *click* Python module. It should be used like this: 5 | 6 | ``` 7 | $ python3 rd.py run /bin/echo "Hello Docker" 8 | ``` 9 | 10 | Right now the skeleton doesn't actually do much, so at a bare minimum we need to make it run our executable. 11 | This can be achieved using Linux's *fork-exec* mechanism. 12 | 13 | ## fork-exec 14 | In the \*nix family of operating systems, new processes are spawned using *fork-exec* - 15 | the *fork()* call and the *exec()* call work together to create the new process. 16 | 17 | *fork()* is the name of the system call (actually a *libc* call in Linux) that the parent process uses to "divide" itself ("fork") into two identical processes.
18 | After calling *fork()*, a complete copy of the executing program is made into the new process. 19 | This new process (the "child" of the "parent") has a new PID. 20 | The *fork()* function returns the child's PID to the parent, while it returns 0 to the child, allowing the two otherwise identical processes to tell each other apart. 21 | Following the return from the *fork()* call, execution proceeds from the same point in the program and then (usually) diverges based on the return value. 22 | 23 | In some cases the two processes are made to continue running the same binary, but often one (usually the child) switches to running another binary executable using the *exec()* system call (actually a family of calls). 24 | When the child process calls *exec()*, all data in the original program is lost, and it is replaced with a running copy of the new program. 25 | 26 | Finally, the parent must invoke the *wait()* call (or one of its variants) in order to collect the child's exit status and allow the system to release the resources associated with the child. If a wait is not performed, the terminated child remains in a defunct (AKA "zombie") state.
27 | 28 | ### Example 29 | 30 | ```python 31 | pid = os.fork() 32 | if pid == 0: 33 | # This is the child 34 | os.execv('/bin/echo', ['/bin/echo', 'Hello Docker']) 35 | # Unreachable, since exec() replaced this program image 36 | else: 37 | # This is the parent 38 | waited_pid, status = os.waitpid(pid, 0) # wait for the child to finish 39 | ``` 40 | 41 | ## Further reading 42 | - [os.fork()](https://docs.python.org/3/library/os.html#os.fork) 43 | - [os.execv()](https://docs.python.org/3/library/os.html#os.execv) 44 | - [os.waitpid()](https://docs.python.org/3/library/os.html#os.waitpid) 45 | - Python [Lists](https://docs.python.org/3/tutorial/introduction.html#lists) 46 | - [Wikipedia - Fork-exec](https://en.wikipedia.org/wiki/Fork%E2%80%93exec) 47 | 48 | ## How to check your work 49 | 50 | ``` 51 | $ python3 rd.py run /bin/echo "Hello Docker" 52 | Hello Docker 53 | 54 | 3620 exited with status 0 55 | ``` 56 | 57 | ## Bonus 58 | 59 | Can you use another exec variant to manipulate your child process's environment variables (similar to [docker run -e](https://docs.docker.com/engine/reference/run/#env-environment-variables))? 60 | -------------------------------------------------------------------------------- /levels/00_fork_exec/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 0: Starting a new process. 3 | 4 | Goal: We want to start a new Linux process using the fork & exec model. 5 | 6 | Note: At this level we don't care about containment yet.
7 | 8 | Usage: 9 | running: 10 | rd.py run /bin/sh 11 | will: 12 | - fork a new process which will exec '/bin/sh' 13 | - while the parent waits for it to finish 14 | """ 15 | 16 | 17 | 18 | import click 19 | import os 20 | import traceback 21 | 22 | 23 | @click.group() 24 | def cli(): 25 | pass 26 | 27 | 28 | def contain(command): 29 | # TODO: exec command, note the difference between the exec flavours 30 | # https://docs.python.org/3/library/os.html#os.execv 31 | # NOTE: command is an array (the first element is path/file, and the entire 32 | # array is exec's args) 33 | 34 | os._exit(0) # TODO: remove this after adding exec 35 | 36 | 37 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 38 | @click.argument('Command', required=True, nargs=-1) 39 | def run(command): 40 | # TODO: replace this with fork() 41 | # (https://docs.python.org/3/library/os.html#os.fork) 42 | pid = 0 43 | if pid == 0: 44 | # This is the child, we'll try to do some containment here 45 | try: 46 | contain(command) 47 | except Exception: 48 | traceback.print_exc() 49 | os._exit(1) # something went wrong in contain() 50 | 51 | # This is the parent, pid contains the PID of the forked process 52 | # wait for the forked child and fetch the exit status 53 | _, status = os.waitpid(pid, 0) 54 | print('{} exited with status {}'.format(pid, status)) 55 | 56 | 57 | if __name__ == '__main__': 58 | cli() 59 | -------------------------------------------------------------------------------- /levels/01_chroot_image/README.md: -------------------------------------------------------------------------------- 1 | # Level 01: chroot 2 | 3 | "Jail" a process so it doesn't see the rest of the file system. 4 | 5 | To exec a process in a chroot we need a few things: 6 | 1. Choose a new root directory for the process 7 | 1. with our target binary 8 | 1. with any other dependency (proc? sys? dev?) 9 | 1.
Chroot into it using Python's [os.chroot](https://docs.python.org/3/library/os.html#os.chroot) 10 | 11 | To help you get there quickly, we implemented *create_container_root()*, which extracts the pre-downloaded images (ubuntu OR busybox) and returns a path. 12 | 13 | If we want tools like `ps` to work properly, we need to mount the special filesystems like `/proc`, `/sys` and `/dev` inside the new root. 14 | This can be done using the [linux python module](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html) which exposes the [mount()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.mount) syscall: 15 | 16 | ```python 17 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 18 | ``` 19 | The semantics of the *mount()* syscall have been preserved; to learn more about it read `man 2 mount`. 20 | 21 | From within the chroot, have a look at `/proc/mounts`. Does it look different from `/proc/mounts` outside the chroot? 22 | Remember, we are not using a mount namespace yet! 23 | 24 | (*answer*: [linux/fs/proc_namespace.c on Github](https://github.com/torvalds/linux/blob/33caf82acf4dc420bf0f0136b886f7b27ecf90c5/fs/proc_namespace.c#L110)) 25 | 26 | ## Cleaning up 27 | 28 | You might notice upon completing this level that you have many unused entries in */proc/mounts* and many unused extracted images in */workshop/containers*. You can use [our cleanup script](../cleanup.sh) to remove them quickly.
29 | ```bash 30 | /workshop/rubber-docker/levels/cleanup.sh 31 | ``` 32 | 33 | ## Relevant Documentation 34 | 35 | [chroot manpage](http://man7.org/linux/man-pages/man2/chroot.2.html) 36 | 37 | ## How to check your work 38 | 39 | Without calling `chroot` (_wrong_): 40 | ```shell 41 | $ python3 rd.py run -i ubuntu -- /bin/ls -l /workshop/rubber-docker/levels/ 42 | total 44 43 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 00_fork_exec 44 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 01_chroot_image 45 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 02_mount_ns 46 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 03_pivot_root 47 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 04_overlay 48 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 05_uts_namespace 49 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 06_pid_namespace 50 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 07_net_namespace 51 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 08_cpu_cgroup 52 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 09_memory_cgorup 53 | drwxr-xr-x 2 ubuntu ubuntu 4096 Jun 20 21:37 10_setuid 54 | 1620 exited with status 0 55 | ``` 56 | 57 | With `chroot` and an extracted image (_good_): 58 | ```shell 59 | $ python3 rd.py run -i ubuntu -- /bin/ls -l /workshop/rubber-docker/levels/ 60 | Created a new root fs for our container: /workshop/containers/1739af4b-3849-4e88-ae65-dc98264a0e69/rootfs 61 | /bin/ls: cannot access /workshop/rubber-docker/levels/: No such file or directory 62 | 1656 exited with status 512 63 | ``` 64 | -------------------------------------------------------------------------------- /levels/01_chroot_image/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 1: Chrooting into an image. 3 | 4 | Goal: Let's get some filesystem isolation with good ol' chroot. 
5 | 6 | Usage: 7 | running: 8 | rd.py run -i ubuntu /bin/sh 9 | will: 10 | fork a new child process that will: 11 | - unpack an ubuntu image into a new directory 12 | - chroot() into that directory 13 | - exec '/bin/sh' 14 | while the parent waits for it to finish. 15 | """ 16 | 17 | 18 | import os 19 | import tarfile 20 | import uuid 21 | 22 | import click 23 | import traceback 24 | 25 | import linux 26 | 27 | 28 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 29 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 30 | 31 | 32 | def _get_container_path(container_id, container_dir, *subdir_names): 33 | return os.path.join(container_dir, container_id, *subdir_names) 34 | 35 | 36 | def create_container_root(image_name, image_dir, container_id, container_dir): 37 | """Create a container root by extracting an image into a new directory. 38 | 39 | Usage: 40 | new_root = create_container_root( 41 | image_name, image_dir, container_id, container_dir) 42 | 43 | @param image_name: the image name to extract 44 | @param image_dir: the directory to look up image tarballs in 45 | @param container_id: the unique container id 46 | @param container_dir: the base directory of newly generated container 47 | directories 48 | @return: new container root directory 49 | @rtype: str 50 | """ 51 | image_path = _get_image_path(image_name, image_dir) 52 | container_root = _get_container_path(container_id, container_dir, 'rootfs') 53 | 54 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 55 | 56 | if not os.path.exists(container_root): 57 | os.makedirs(container_root) 58 | 59 | with tarfile.open(image_path) as t: 60 | # Fun fact: tar files may contain *nix devices!
*facepalm* 61 | members = [m for m in t.getmembers() 62 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 63 | t.extractall(container_root, members=members) 64 | 65 | return container_root 66 | 67 | 68 | @click.group() 69 | def cli(): 70 | pass 71 | 72 | 73 | def contain(command, image_name, image_dir, container_id, container_dir): 74 | # TODO: would you like to do something before chrooting? 75 | # print('Created a new root fs for our container: {}'.format(new_root)) 76 | 77 | # TODO: chroot into new_root 78 | # TODO: something after chrooting? (HINT: try running: python3 rd.py run -i ubuntu -- /bin/sh) 79 | 80 | os.execvp(command[0], command) 81 | 82 | 83 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 84 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 85 | @click.option('--image-dir', help='Images directory', 86 | default='/workshop/images') 87 | @click.option('--container-dir', help='Containers directory', 88 | default='/workshop/containers') 89 | @click.argument('Command', required=True, nargs=-1) 90 | def run(image_name, image_dir, container_dir, command): 91 | container_id = str(uuid.uuid4()) 92 | pid = os.fork() 93 | if pid == 0: 94 | # This is the child, we'll try to do some containment here 95 | try: 96 | contain(command, image_name, image_dir, container_id, 97 | container_dir) 98 | except Exception: 99 | traceback.print_exc() 100 | os._exit(1) # something went wrong in contain() 101 | 102 | # This is the parent, pid contains the PID of the forked process 103 | # wait for the forked child, fetch the exit status 104 | _, status = os.waitpid(pid, 0) 105 | print('{} exited with status {}'.format(pid, status)) 106 | 107 | 108 | if __name__ == '__main__': 109 | cli() 110 | -------------------------------------------------------------------------------- /levels/02_mount_ns/README.md: -------------------------------------------------------------------------------- 1 | # Level 02: mount namespace 2 | 3 | Let's 
add a mount namespace using the [unshare()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.unshare) call. 4 | Mount namespaces essentially work like bind mounts - operations in a mount namespace will be propagated to other namespaces *unless* we make the parent mount (/ in our case) a *private* mount (or similar). 5 | For this reason, we need to change / to a private mount. 6 | This is done using the [mount()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.mount) syscall with the `MS_PRIVATE` and `MS_REC` flags (why do we need `MS_REC`?). 7 | 8 | Python doesn't have the mount syscall exposed; use the `linux` module provided in this repo instead. 9 | 10 | **Fun Fact**: The Linux kernel documentation says private mounts are the default, but are they? 11 | 12 | Also, it's time to create device nodes in our container root using [os.mknod()](https://docs.python.org/3/library/os.html#os.mknod): 13 | 14 | ```python 15 | os.mknod(os.path.join(dev_path, device), 0o666 | stat.S_IFCHR, os.makedev(major, minor)) 16 | ``` 17 | 18 | Look at the host's `/dev` and think about which devices you might need, note their major/minor numbers (using `ls -l`), and create them inside the container.
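Putting both steps together, the relevant parts of this level might look roughly like the sketch below. It uses the workshop's `linux` module (imported lazily so the sketch can be loaded outside the workshop VM); the device table is an assumption based on the usual major/minor numbers found in a host's `/dev`, and both functions need root to actually run:

```python
import os
import stat

# name -> (major, minor); the customary numbers for the basic Linux
# character devices (verify against your own host with `ls -l /dev`)
DEVICES = {'null': (1, 3), 'zero': (1, 5), 'random': (1, 8),
           'urandom': (1, 9), 'tty': (5, 0), 'console': (5, 1)}


def setup_mount_ns():
    # Imported here so the sketch can be read and partially run
    # without the workshop's C extension installed.
    import linux
    # Detach from the parent's mount namespace...
    linux.unshare(linux.CLONE_NEWNS)
    # ...and make / recursively private, so our mounts/umounts
    # don't propagate back to the host's namespace.
    linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None)


def create_device_nodes(dev_path):
    # 0o666 makes the nodes world read/writable; S_IFCHR marks
    # them as character devices.
    for device, (major, minor) in DEVICES.items():
        os.mknod(os.path.join(dev_path, device),
                 0o666 | stat.S_IFCHR, os.makedev(major, minor))
```

Whether `linux.mount()` accepts `None` for the source/fstype/options arguments is an assumption about the wrapper; if it insists on strings, empty strings serve the same purpose here.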
19 | 20 | ## Relevant Documentation 21 | 22 | - [man 2 mount](http://man7.org/linux/man-pages/man2/mount.2.html) 23 | - [Kernel docs - shared mounts](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt) 24 | - [man 7 namespaces](http://man7.org/linux/man-pages/man7/namespaces.7.html) 25 | 26 | ## How to check your work 27 | 28 | ### Mount namespace 29 | Verify your new forked process is in a different mount namespace: 30 | ```bash 31 | $ ls -lh /proc/self/ns/mnt 32 | lrwxrwxrwx 1 root root 0 Mar 18 04:13 /proc/self/ns/mnt -> mnt:[4026531840] 33 | $ sudo python3 rd.py run -i ubuntu /bin/bash 34 | root@ip-172-31-31-83:/# ls -lh /proc/self/ns/mnt 35 | lrwxrwxrwx 1 root root 0 Mar 18 04:13 /proc/self/ns/mnt -> mnt:[4026532139] 36 | ``` 37 | 38 | Create a new mount inside the container, and make sure it's invisible from the outside: 39 | ```bash 40 | $ sudo python3 rd.py run -i ubuntu /bin/bash 41 | root@ip-172-31-31-83:/# mkdir /mnt/moo 42 | root@ip-172-31-31-83:/# mount -t tmpfs tmpfs /mnt/moo 43 | 44 | # Keep that contained process running, and open another terminal 45 | $ grep moo /proc/mounts 46 | $ 47 | ``` 48 | 49 | ### Device nodes 50 | We love throwing stuff into /dev/null; what if it were a regular file?
51 | ```bash 52 | # Without a null device node 53 | $ sudo python rd.py run -i ubuntu /bin/bash 54 | root@ip-172-31-31-83:/# find / > /dev/null 55 | root@ip-172-31-31-83:/# ls -lh /dev/null 56 | -rw-r--r-- 1 root root 2.2M Jun 21 16:40 /dev/null 57 | 58 | # With a null device node 59 | $ sudo python rd.py run -i ubuntu /bin/bash 60 | Created a new root fs for our container: /workshop/containers/6aeb472a-94da-42f3-a004-f5809367327b/rootfs 61 | root@ip-172-31-31-83:/# find / > /dev/null 62 | root@ip-172-31-31-83:/# ls -lh /dev/null 63 | crw-r--r-- 1 root root 1, 3 Jun 21 16:44 /dev/null 64 | ``` 65 | 66 | ## Cleanup 67 | Don't forget to remove the containers and mounts using [cleanup.sh](../cleanup.sh) 68 | -------------------------------------------------------------------------------- /levels/02_mount_ns/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 2: Adding mount namespace. 3 | 4 | Goal: Separate our mount table from the other processes. 
5 | 6 | Usage: 7 | running: 8 | rd.py run -i ubuntu /bin/sh 9 | will: 10 | - fork a new chrooted process in a new mount namespace 11 | """ 12 | 13 | 14 | 15 | import linux 16 | import tarfile 17 | import uuid 18 | 19 | import click 20 | import os 21 | import traceback 22 | 23 | 24 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 25 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 26 | 27 | 28 | def _get_container_path(container_id, container_dir, *subdir_names): 29 | return os.path.join(container_dir, container_id, *subdir_names) 30 | 31 | 32 | def create_container_root(image_name, image_dir, container_id, container_dir): 33 | image_path = _get_image_path(image_name, image_dir) 34 | container_root = _get_container_path(container_id, container_dir, 'rootfs') 35 | 36 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 37 | 38 | if not os.path.exists(container_root): 39 | os.makedirs(container_root) 40 | 41 | with tarfile.open(image_path) as t: 42 | # Fun fact: tar files may contain *nix devices! *facepalm* 43 | members = [m for m in t.getmembers() 44 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 45 | t.extractall(container_root, members=members) 46 | 47 | return container_root 48 | 49 | 50 | @click.group() 51 | def cli(): 52 | pass 53 | 54 | 55 | def contain(command, image_name, image_dir, container_id, container_dir): 56 | new_root = create_container_root( 57 | image_name, image_dir, container_id, container_dir) 58 | print('Created a new root fs for our container: {}'.format(new_root)) 59 | 60 | # TODO: time to say goodbye to the old mount namespace, 61 | # see "man 2 unshare" to get some help 62 | # HINT 1: there is no os.unshare(), time to use the linux module we made 63 | # just for you! 64 | # HINT 2: the linux module includes both functions and constants! 65 | # e.g. linux.CLONE_NEWNS 66 | 67 | # TODO: remember shared subtrees? 
68 | # (https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt) 69 | # Make / a private mount to avoid littering our host mount table. 70 | 71 | # Create mounts (/proc, /sys, /dev) under new_root 72 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 73 | linux.mount('sysfs', os.path.join(new_root, 'sys'), 'sysfs', 0, '') 74 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 75 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 76 | # Add some basic devices 77 | devpts_path = os.path.join(new_root, 'dev', 'pts') 78 | if not os.path.exists(devpts_path): 79 | os.makedirs(devpts_path) 80 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 81 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 82 | os.symlink('/proc/self/fd/%d' % i, os.path.join(new_root, 'dev', dev)) 83 | 84 | # TODO: add more devices (e.g. null, zero, random, urandom) using os.mknod. 85 | 86 | os.chroot(new_root) 87 | 88 | os.chdir('/') 89 | 90 | os.execvp(command[0], command) 91 | 92 | 93 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 94 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 95 | @click.option('--image-dir', help='Images directory', 96 | default='/workshop/images') 97 | @click.option('--container-dir', help='Containers directory', 98 | default='/workshop/containers') 99 | @click.argument('Command', required=True, nargs=-1) 100 | def run(image_name, image_dir, container_dir, command): 101 | container_id = str(uuid.uuid4()) 102 | 103 | pid = os.fork() 104 | if pid == 0: 105 | # This is the child, we'll try to do some containment here 106 | try: 107 | contain(command, image_name, image_dir, container_id, container_dir) 108 | except Exception: 109 | traceback.print_exc() 110 | os._exit(1) # something went wrong in contain() 111 | 112 | # This is the parent, pid contains the PID of the forked process 113 | # wait for the forked child, fetch the exit status 114 | _, status = os.waitpid(pid, 0) 
115 | print('{} exited with status {}'.format(pid, status)) 116 | 117 | 118 | if __name__ == '__main__': 119 | cli() 120 | -------------------------------------------------------------------------------- /levels/03_pivot_root/README.md: -------------------------------------------------------------------------------- 1 | # Level 03: pivot_root 2 | 3 | After successfully jailing a process with [chroot()](https://docs.python.org/3/library/os.html#os.chroot), let's escape from this jail: copy the [breakout.py](breakout.py) script into the chroot and run it! 4 | 5 | ```bash 6 | sudo python rd.py run -i ubuntu /bin/bash 7 | 8 | # Check that you are inside chroot 9 | ls / 10 | 11 | # Escape! 12 | python breakout.py 13 | 14 | # Aaaaand we're out :) 15 | ls / 16 | ``` 17 | 18 | 19 | Ok, obviously [chroot()](https://docs.python.org/3/library/os.html#os.chroot) isn't good enough. Let's try using [pivot_root()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.pivot_root). 20 | Remember that [pivot_root()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.pivot_root) affects **all** processes in the mount namespace - luckily, we are already using mount namespaces. 21 | 22 | Because mount namespaces internally use the bind-mount propagation mechanism, by default our (sub)mounts will be visible to the parent mount (and mount namespace). 23 | To avoid that, we need to make the root mount private immediately after moving to the new mount namespace - and this needs to be done *recursively* for all submounts, otherwise we will end up unmounting important things like `/dev/pts` :) 24 | 25 | After using [pivot_root()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.pivot_root), we need to [umount2()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.umount2) the `old_root`.
26 | We need `umount2` and not `umount` because we have to pass a flag to the call; specifically, `MNT_DETACH` for a lazy detach. See `man 2 umount` for details. 27 | 28 | ## Relevant Documentation 29 | 30 | - [man 2 pivot_root](http://man7.org/linux/man-pages/man2/pivot_root.2.html) 31 | - [man 2 umount2](http://man7.org/linux/man-pages/man2/umount2.2.html) 32 | 33 | ## How to check your work 34 | 35 | Within the container, you should see a new *rootfs* device; however, this step will actually fail: 36 | 37 | ``` 38 | $ sudo python rd.py run -i ubuntu /bin/bash 39 | Created a new root fs for our container: /workshop/containers/f793960b-64bd-4c21-9a7f-da1b0fbe9aad/rootfs 40 | Traceback (most recent call last): 41 | File "rd.py", line 126, in <module> 42 | cli() 43 | File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 700, in __call__ 44 | return self.main(*args, **kwargs) 45 | File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 680, in main 46 | rv = self.invoke(ctx) 47 | File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 1027, in invoke 48 | return _process_result(sub_ctx.command.invoke(sub_ctx)) 49 | File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 873, in invoke 50 | return ctx.invoke(self.callback, **ctx.params) 51 | File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 508, in invoke 52 | return callback(*args, **kwargs) 53 | File "rd.py", line 118, in run 54 | contain(command, image_name, image_dir, container_id, container_dir) 55 | File "rd.py", line 98, in contain 56 | linux.pivot_root(new_root, old_root) 57 | RuntimeError: (16, 'Device or resource busy') 58 | 10766 exited with status 256 59 | ``` 60 | 61 | The reason this step fails is that [pivot_root(new_root, put_old)](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.pivot_root) requires *new_root* to be a different filesystem than the old root.
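As an aside, the `10766 exited with status 256` line above shows the raw status word returned by `os.waitpid()`, not an exit code; the exit code lives in the high byte and is decoded with the `os.W*` helpers:

```python
import os
import signal

status = 256                         # raw wait status printed by rd.py
assert os.WIFEXITED(status)          # the child exited normally (not signalled)
assert os.WEXITSTATUS(status) == 1   # ...with exit code 1, from os._exit(1)

# A child killed by a signal is encoded differently, e.g. SIGKILL (9):
assert not os.WIFEXITED(9)
assert os.WIFSIGNALED(9) and os.WTERMSIG(9) == signal.SIGKILL
```

On Python 3.9+ `os.waitstatus_to_exitcode(256)` performs the same decoding in one call.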
62 | This will be resolved in step 04 when we mount an overlay filesystem as *new_root*. 63 | 64 | To circumvent that for now, we can copy the image files to a [tmpfs](https://en.wikipedia.org/wiki/Tmpfs) mount: 65 | ```python 66 | # ... 67 | # TODO: uncomment (why?) 68 | linux.mount('tmpfs', container_root, 'tmpfs', 0, None) 69 | # ... 70 | ``` 71 | 72 | Repeat the `breakout.py` exercise and see if you can still escape :) 73 | -------------------------------------------------------------------------------- /levels/03_pivot_root/breakout.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | """Docker From Scratch Workshop - Breakout script. 3 | """ 4 | 5 | import os 6 | 7 | # Create a directory and chroot() to it, deliberately without chdir()ing into it 8 | os.makedirs('.foo') 9 | os.chroot('.foo') 10 | 11 | # Our working directory still refers to a directory outside the (new) chroot, 12 | # so chdir to a directory far above it. 13 | # The kernel automatically collapses any extra ../ components at / 14 | os.chdir('../../../../../../../../') 15 | 16 | # finally chroot to the old (topmost) root 17 | os.chroot('.') 18 | 19 | # now we can exec a shell in the host 20 | os.execv('/bin/bash', ['/bin/bash']) 21 | -------------------------------------------------------------------------------- /levels/03_pivot_root/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 3: Switching from chroot to pivot_root. 3 | 4 | Goal: Use pivot_root instead of chroot, and umount old_root.
5 | 6 | Usage: 7 | running: 8 | rd.py run -i ubuntu /bin/sh 9 | will: 10 | - fork a new process in a new mount namespace with a new root 11 | - make sure that you can't easily escape 12 | """ 13 | 14 | 15 | 16 | import linux 17 | import tarfile 18 | import uuid 19 | 20 | import click 21 | import os 22 | import stat 23 | import traceback 24 | 25 | 26 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 27 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 28 | 29 | 30 | def _get_container_path(container_id, container_dir, *subdir_names): 31 | return os.path.join(container_dir, container_id, *subdir_names) 32 | 33 | 34 | def create_container_root(image_name, image_dir, container_id, container_dir): 35 | image_path = _get_image_path(image_name, image_dir) 36 | container_root = _get_container_path(container_id, container_dir, 'rootfs') 37 | 38 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 39 | 40 | if not os.path.exists(container_root): 41 | os.makedirs(container_root) 42 | 43 | # TODO: uncomment (why?) 44 | # linux.mount('tmpfs', container_root, 'tmpfs', 0, None) 45 | 46 | with tarfile.open(image_path) as t: 47 | # Fun fact: tar files may contain *nix devices! 
*facepalm* 48 | members = [m for m in t.getmembers() 49 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 50 | t.extractall(container_root, members=members) 51 | 52 | return container_root 53 | 54 | 55 | @click.group() 56 | def cli(): 57 | pass 58 | 59 | 60 | def makedev(dev_path): 61 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 62 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 63 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 64 | # Add extra devices 65 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 66 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 67 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 68 | 'full': (stat.S_IFCHR, 1, 7)} 69 | for device, (dev_type, major, minor) in DEVICES.items(): 70 | os.mknod(os.path.join(dev_path, device), 71 | 0o666 | dev_type, os.makedev(major, minor)) 72 | 73 | 74 | def contain(command, image_name, image_dir, container_id, container_dir): 75 | try: 76 | linux.unshare(linux.CLONE_NEWNS) # create a new mount namespace 77 | except RuntimeError as e: 78 | if getattr(e, 'args', '') == (1, 'Operation not permitted'): 79 | print('Error: Use of CLONE_NEWNS with unshare(2) requires the ' 80 | 'CAP_SYS_ADMIN capability (i.e. you probably want to retry ' 81 | 'this with sudo)') 82 | raise e 83 | 84 | # TODO: we added MS_REC here. wanna guess why? 
85 | linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None) 86 | 87 | new_root = create_container_root( 88 | image_name, image_dir, container_id, container_dir) 89 | print('Created a new root fs for our container: {}'.format(new_root)) 90 | 91 | # Create mounts (/proc, /sys, /dev) under new_root 92 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 93 | linux.mount('sysfs', os.path.join(new_root, 'sys'), 'sysfs', 0, '') 94 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 95 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 96 | 97 | # Add some basic devices 98 | devpts_path = os.path.join(new_root, 'dev', 'pts') 99 | if not os.path.exists(devpts_path): 100 | os.makedirs(devpts_path) 101 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 102 | 103 | makedev(os.path.join(new_root, 'dev')) 104 | 105 | os.chroot(new_root) # TODO: replace with pivot_root 106 | 107 | os.chdir('/') 108 | 109 | # TODO: umount2 old root (HINT: see MNT_DETACH in man 2 umount) 110 | 111 | os.execvp(command[0], command) 112 | 113 | 114 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 115 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 116 | @click.option('--image-dir', help='Images directory', 117 | default='/workshop/images') 118 | @click.option('--container-dir', help='Containers directory', 119 | default='/workshop/containers') 120 | @click.argument('Command', required=True, nargs=-1) 121 | def run(image_name, image_dir, container_dir, command): 122 | container_id = str(uuid.uuid4()) 123 | 124 | pid = os.fork() 125 | if pid == 0: 126 | # This is the child, we'll try to do some containment here 127 | try: 128 | contain(command, image_name, image_dir, container_id, 129 | container_dir) 130 | except Exception: 131 | traceback.print_exc() 132 | os._exit(1) # something went wrong in contain() 133 | 134 | # This is the parent, pid contains the PID of the forked process 135 | # wait for the forked 
child, fetch the exit status 136 | _, status = os.waitpid(pid, 0) 137 | print('{} exited with status {}'.format(pid, status)) 138 | 139 | 140 | if __name__ == '__main__': 141 | cli() 142 | -------------------------------------------------------------------------------- /levels/04_overlay/README.md: -------------------------------------------------------------------------------- 1 | # Level 04: overlay CoW filesystem 2 | 3 | So far we have unpacked the image for every container, which is slow - and we want fast startup times for our containers. 4 | It would also be nice if each container didn't take up so much space (~180MB in minimal Ubuntu's case). 5 | 6 | In this level, we will add overlayfs. 7 | A secondary win is that `pivot_root()` will now work, since our new root will be a mountpoint! 8 | 9 | What we want to do is extract the image to an *image_root* directory (if it's not extracted already), and then create the following: 10 | - a *container_dir* with a mount directory for overlayfs 11 | - a directory for the writable branch (*upperdir*) 12 | - a directory for the *workdir* 13 | 14 | ## Exercises 15 | 16 | After implementing this step, try a few things to see how overlayfs behaves: 17 | - write a file using `dd` inside the container and see if you can fill the host drive. 18 | - write a large file (say 1GB) to the image directory, then open it for (non-truncating) writing in the container, perhaps using this python code: `open('big_file', 'r+')`. How much time does the open operation take? Why? 19 | - do some file operations (write files, move files, delete files) in the container, then have a look at the `upperdir` (using `ls -la`).
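The three directories above come together in the overlayfs mount options string. A sketch with illustrative paths (`cow_rw`/`cow_workdir` match the directory names used in the later levels' solution code; `<id>` stands for a container ID):

```python
# Illustrative paths; in rd.py these come from _get_container_path()
image_root = '/workshop/images/ubuntu/rootfs'            # shared read-only lowerdir
cow_rw = '/workshop/containers/<id>/cow_rw'              # per-container upperdir
cow_workdir = '/workshop/containers/<id>/cow_workdir'    # overlayfs scratch area
mountpoint = '/workshop/containers/<id>/rootfs'

options = 'lowerdir={},upperdir={},workdir={}'.format(
    image_root, cow_rw, cow_workdir)
assert options == ('lowerdir=/workshop/images/ubuntu/rootfs,'
                   'upperdir=/workshop/containers/<id>/cow_rw,'
                   'workdir=/workshop/containers/<id>/cow_workdir')

# The mount itself needs root (sketch, via the workshop's linux module):
# linux.mount('overlay', mountpoint, 'overlay', linux.MS_NODEV, options)
```

Writes go to the upperdir, the image stays pristine in the lowerdir, and the workdir is overlayfs's private staging area (it must be on the same filesystem as the upperdir).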
20 | 21 | ## Relevant Documentation 22 | 23 | - [OverlayFS - Kernel archives](https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt) 24 | 25 | ## How to check your work 26 | 27 | ``` 28 | $ time sudo python rd.py run -i ubuntu /bin/bash -- -c true 29 | Created a new root fs for our container: /workshop/containers/7a3393a1-df94-4c44-a935-700ec52c2607/rootfs 30 | 11191 exited with status 0 31 | 32 | real 0m3.475s 33 | user 0m1.492s 34 | sys 0m1.260s 35 | 36 | $ time sudo python rd.py run -i ubuntu /bin/bash -- -c true 37 | Created a new root fs for our container: /workshop/containers/98282744-82bd-4c70-bbf9-028e8c92f995/rootfs 38 | 11196 exited with status 0 39 | 40 | real 0m0.162s 41 | user 0m0.088s 42 | sys 0m0.032s 43 | ``` 44 | Observe that the second launch of a container from the same image takes almost no time, because we don't need to extract the image again. 45 | -------------------------------------------------------------------------------- /levels/04_overlay/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 4: Add overlay FS. 3 | 4 | Goal: Instead of re-extracting the image, use it as a read-only layer 5 | (lowerdir), and create a copy-on-write layer for changes (upperdir). 6 | 7 | HINT: Don't forget that overlay fs also requires a workdir.
8 | 9 | Read more on overlay FS here: 10 | https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt 11 | """ 12 | 13 | 14 | 15 | import linux 16 | import tarfile 17 | import uuid 18 | 19 | import click 20 | import os 21 | import stat 22 | import traceback 23 | 24 | 25 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 26 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 27 | 28 | 29 | def _get_container_path(container_id, container_dir, *subdir_names): 30 | return os.path.join(container_dir, container_id, *subdir_names) 31 | 32 | 33 | def create_container_root(image_name, image_dir, container_id, container_dir): 34 | image_path = _get_image_path(image_name, image_dir) 35 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 36 | 37 | # TODO: Instead of creating the container_root and extracting to it, 38 | # create an images_root. 39 | # keep only one rootfs per image and re-use it 40 | container_root = _get_container_path(container_id, container_dir, 'rootfs') 41 | 42 | if not os.path.exists(container_root): 43 | os.makedirs(container_root) 44 | with tarfile.open(image_path) as t: 45 | # Fun fact: tar files may contain *nix devices! 
*facepalm* 46 | members = [m for m in t.getmembers() 47 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 48 | t.extractall(container_root, members=members) 49 | 50 | # TODO: create directories for copy-on-write (uppperdir), overlay workdir, 51 | # and a mount point 52 | 53 | # TODO: mount the overlay (HINT: use the MS_NODEV flag to mount) 54 | 55 | return container_root # return the mountpoint for the mounted overlayfs 56 | 57 | 58 | @click.group() 59 | def cli(): 60 | pass 61 | 62 | 63 | def makedev(dev_path): 64 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 65 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 66 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 67 | # Add extra devices 68 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 69 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 70 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 71 | 'full': (stat.S_IFCHR, 1, 7)} 72 | for device, (dev_type, major, minor) in DEVICES.items(): 73 | os.mknod(os.path.join(dev_path, device), 74 | 0o666 | dev_type, os.makedev(major, minor)) 75 | 76 | 77 | def _create_mounts(new_root): 78 | # Create mounts (/proc, /sys, /dev) under new_root 79 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 80 | linux.mount('sysfs', os.path.join(new_root, 'sys'), 'sysfs', 0, '') 81 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 82 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 83 | 84 | # Add some basic devices 85 | devpts_path = os.path.join(new_root, 'dev', 'pts') 86 | if not os.path.exists(devpts_path): 87 | os.makedirs(devpts_path) 88 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 89 | 90 | makedev(os.path.join(new_root, 'dev')) 91 | 92 | 93 | def contain(command, image_name, image_dir, container_id, container_dir): 94 | linux.unshare(linux.CLONE_NEWNS) # create a new mount namespace 95 | 96 | linux.mount(None, '/', None, linux.MS_PRIVATE 
| linux.MS_REC, None) 97 | 98 | new_root = create_container_root( 99 | image_name, image_dir, container_id, container_dir) 100 | print('Created a new root fs for our container: {}'.format(new_root)) 101 | 102 | _create_mounts(new_root) 103 | 104 | old_root = os.path.join(new_root, 'old_root') 105 | os.makedirs(old_root) 106 | linux.pivot_root(new_root, old_root) 107 | 108 | os.chdir('/') 109 | 110 | linux.umount2('/old_root', linux.MNT_DETACH) # umount old root 111 | os.rmdir('/old_root') # rmdir the old_root dir 112 | os.execvp(command[0], command) 113 | 114 | 115 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 116 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 117 | @click.option('--image-dir', help='Images directory', 118 | default='/workshop/images') 119 | @click.option('--container-dir', help='Containers directory', 120 | default='/workshop/containers') 121 | @click.argument('Command', required=True, nargs=-1) 122 | def run(image_name, image_dir, container_dir, command): 123 | container_id = str(uuid.uuid4()) 124 | 125 | pid = os.fork() 126 | if pid == 0: 127 | # This is the child, we'll try to do some containment here 128 | try: 129 | contain(command, image_name, image_dir, container_id, 130 | container_dir) 131 | except Exception: 132 | traceback.print_exc() 133 | os._exit(1) # something went wrong in contain() 134 | 135 | # This is the parent, pid contains the PID of the forked process 136 | # wait for the forked child, fetch the exit status 137 | _, status = os.waitpid(pid, 0) 138 | print('{} exited with status {}'.format(pid, status)) 139 | 140 | 141 | if __name__ == '__main__': 142 | cli() 143 | -------------------------------------------------------------------------------- /levels/05_uts_namespace/README.md: -------------------------------------------------------------------------------- 1 | # Level 05: UTS namespace 2 | 3 | The UTS namespace allows per-container hostnames. 
4 | After moving to a new UTS namespace, you can change the hostname without affecting the hostname of the machine. 5 | 6 | Use the [sethostname()](https://rawgit.com/Fewbytes/rubber-docker/master/docs/linux/index.html#linux.sethostname) call provided by our `linux` module. 7 | 8 | ## How to check your work 9 | 10 | The hostname within the container should be different from that outside. 11 | Specifically, we want the hostname to be the container ID. 12 | 13 | ``` 14 | $ sudo python rd.py run -i ubuntu /bin/bash -- -c hostname 15 | 0c96ccc-ee60-11e5-b7ff-600308a39608 16 | 11196 exited with status 0 17 | $ hostname -f 18 | vagrant-willy-amd64 19 | ``` 20 | -------------------------------------------------------------------------------- /levels/05_uts_namespace/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 5: Add UTS namespace. 3 | 4 | Goal: Have your own private hostname! 5 | """ 6 | 7 | 8 | 9 | import linux 10 | import tarfile 11 | import uuid 12 | 13 | import click 14 | import os 15 | import stat 16 | import traceback 17 | 18 | 19 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 20 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 21 | 22 | 23 | def _get_container_path(container_id, base_path, *subdir_names): 24 | return os.path.join(base_path, container_id, *subdir_names) 25 | 26 | 27 | def create_container_root(image_name, image_dir, container_id, container_dir): 28 | image_path = _get_image_path(image_name, image_dir) 29 | image_root = os.path.join(image_dir, image_name, 'rootfs') 30 | 31 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 32 | 33 | if not os.path.exists(image_root): 34 | os.makedirs(image_root) 35 | with tarfile.open(image_path) as t: 36 | # Fun fact: tar files may contain *nix devices! 
*facepalm* 37 | members = [m for m in t.getmembers() 38 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 39 | t.extractall(image_root, members=members) 40 | 41 | # Create directories for copy-on-write (uppperdir), overlay workdir, 42 | # and a mount point 43 | container_cow_rw = _get_container_path( 44 | container_id, container_dir, 'cow_rw') 45 | container_cow_workdir = _get_container_path( 46 | container_id, container_dir, 'cow_workdir') 47 | container_rootfs = _get_container_path( 48 | container_id, container_dir, 'rootfs') 49 | for d in (container_cow_rw, container_cow_workdir, container_rootfs): 50 | if not os.path.exists(d): 51 | os.makedirs(d) 52 | 53 | # Mount the overlay (HINT: use the MS_NODEV flag to mount) 54 | linux.mount( 55 | 'overlay', container_rootfs, 'overlay', linux.MS_NODEV, 56 | "lowerdir={image_root},upperdir={cow_rw},workdir={cow_workdir}".format( 57 | image_root=image_root, 58 | cow_rw=container_cow_rw, 59 | cow_workdir=container_cow_workdir)) 60 | 61 | return container_rootfs # return the mountpoint for the overlayfs 62 | 63 | 64 | @click.group() 65 | def cli(): 66 | pass 67 | 68 | 69 | def makedev(dev_path): 70 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 71 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 72 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 73 | # Add extra devices 74 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 75 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 76 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 77 | 'full': (stat.S_IFCHR, 1, 7)} 78 | for device, (dev_type, major, minor) in DEVICES.items(): 79 | os.mknod(os.path.join(dev_path, device), 80 | 0o666 | dev_type, os.makedev(major, minor)) 81 | 82 | 83 | def _create_mounts(new_root): 84 | # Create mounts (/proc, /sys, /dev) under new_root 85 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 86 | linux.mount('sysfs', 
os.path.join(new_root, 'sys'), 'sysfs', 0, '') 87 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 88 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 89 | 90 | # Add some basic devices 91 | devpts_path = os.path.join(new_root, 'dev', 'pts') 92 | if not os.path.exists(devpts_path): 93 | os.makedirs(devpts_path) 94 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 95 | 96 | makedev(os.path.join(new_root, 'dev')) 97 | 98 | 99 | def contain(command, image_name, image_dir, container_id, container_dir): 100 | linux.unshare(linux.CLONE_NEWNS) # create a new mount namespace 101 | # TODO: switch to a new UTS namespace, change hostname to container_id 102 | # HINT: use linux.sethostname() 103 | 104 | linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None) 105 | 106 | new_root = create_container_root( 107 | image_name, image_dir, container_id, container_dir) 108 | print('Created a new root fs for our container: {}'.format(new_root)) 109 | 110 | _create_mounts(new_root) 111 | 112 | old_root = os.path.join(new_root, 'old_root') 113 | os.makedirs(old_root) 114 | linux.pivot_root(new_root, old_root) 115 | 116 | os.chdir('/') 117 | 118 | linux.umount2('/old_root', linux.MNT_DETACH) # umount old root 119 | os.rmdir('/old_root') # rmdir the old_root dir 120 | 121 | os.execvp(command[0], command) 122 | 123 | 124 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 125 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 126 | @click.option('--image-dir', help='Images directory', 127 | default='/workshop/images') 128 | @click.option('--container-dir', help='Containers directory', 129 | default='/workshop/containers') 130 | @click.argument('Command', required=True, nargs=-1) 131 | def run(image_name, image_dir, container_dir, command): 132 | container_id = str(uuid.uuid4()) 133 | 134 | pid = os.fork() 135 | if pid == 0: 136 | # This is the child, we'll try to do some containment here 137 | try: 138 | 
contain(command, image_name, image_dir, container_id, container_dir) 139 | except Exception: 140 | traceback.print_exc() 141 | os._exit(1) # something went wrong in contain() 142 | 143 | # This is the parent, pid contains the PID of the forked process 144 | # wait for the forked child, fetch the exit status 145 | _, status = os.waitpid(pid, 0) 146 | print('{} exited with status {}'.format(pid, status)) 147 | 148 | 149 | if __name__ == '__main__': 150 | cli() 151 | -------------------------------------------------------------------------------- /levels/06_pid_namespace/README.md: -------------------------------------------------------------------------------- 1 | # Level 06: PID namespace 2 | 3 | The PID namespace is a little different from the other namespaces: `unshare()` doesn't move the current process to a new namespace; only its future children will be in the new PID namespace. 4 | We have 2 options: 5 | 1. Use `unshare()` before we fork 6 | 1. Use `clone()` instead of `fork()` and pass the `CLONE_NEWPID` flag 7 | 8 | The `clone()` exposed by the `linux` module mirrors the `libc` API (because it's simpler): 9 | ```python 10 | linux.clone(python_callable, flags, callable_args_tuple) # --> returns pid of new process 11 | ``` 12 | 13 | ## Exercises 14 | - Try using the PID namespace without the `/proc` mount, or while bind-mounting the original `/proc` mount. How do tools like `ps` behave in this case? 15 | - Try `kill -9 $$` from within the container, with and without a PID namespace ($$ evaluates to the current PID). Is there a difference? Why? 16 | - Try generating zombies within the container.
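The zombie exercise can be rehearsed on the host without any namespaces: a child that has exited but hasn't been reaped shows up as state `Z` in `/proc` (plain `os.fork()`, no root needed):

```python
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(0)          # child exits immediately...

time.sleep(0.5)          # ...but the parent delays reaping it
with open('/proc/%d/stat' % pid) as f:
    # /proc/<pid>/stat is "pid (comm) state ..."; grab the state field
    state = f.read().rsplit(')', 1)[1].split()[0]
assert state == 'Z'      # the kernel keeps a zombie entry until we wait()

os.waitpid(pid, 0)       # reaping makes the zombie disappear
assert not os.path.exists('/proc/%d' % pid)
```

Inside a PID namespace your contained process runs as PID 1, so it inherits init's job of reaping orphaned children; that's what makes this exercise interesting in the container.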
17 | 18 | ## Relevant Documentation 19 | 20 | - [man 7 pid_namespaces](http://man7.org/linux/man-pages/man7/pid_namespaces.7.html) 21 | - [Namespaces in operation part 3](https://lwn.net/Articles/531419/) 22 | 23 | ## How to check your work 24 | 25 | Various process listing commands and the */proc* filesystem should show only container processes: 26 | ``` 27 | $ sudo python rd.py run -i ubuntu /bin/bash 28 | Created a new root fs for our container: /workshop/containers/a4725e53-b164-4b60-ab6f-8ee527258f71/rootfs 29 | root@a4725e53-b164-4b60-ab6f-8ee527258f71:/# ps 30 | PID TTY TIME CMD 31 | 1 pts/0 00:00:00 bash 32 | 11 pts/0 00:00:00 ps 33 | root@a4725e53-b164-4b60-ab6f-8ee527258f71:/# ls /proc | grep '[0-9]' 34 | 1 35 | 12 36 | 13 37 | ``` 38 | -------------------------------------------------------------------------------- /levels/06_pid_namespace/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 6: Add PID namespace. 
3 | 4 | Goal: Have your new process start with PID 1 :) 5 | """ 6 | 7 | 8 | 9 | import linux 10 | import tarfile 11 | import uuid 12 | 13 | import click 14 | import os 15 | import stat 16 | import traceback 17 | 18 | 19 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 20 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 21 | 22 | 23 | def _get_container_path(container_id, base_path, *subdir_names): 24 | return os.path.join(base_path, container_id, *subdir_names) 25 | 26 | 27 | def create_container_root(image_name, image_dir, container_id, container_dir): 28 | image_path = _get_image_path(image_name, image_dir) 29 | image_root = os.path.join(image_dir, image_name, 'rootfs') 30 | 31 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 32 | 33 | if not os.path.exists(image_root): 34 | os.makedirs(image_root) 35 | with tarfile.open(image_path) as t: 36 | # Fun fact: tar files may contain *nix devices! *facepalm* 37 | members = [m for m in t.getmembers() 38 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 39 | t.extractall(image_root, members=members) 40 | 41 | # Create directories for copy-on-write (uppperdir), overlay workdir, 42 | # and a mount point 43 | container_cow_rw = _get_container_path( 44 | container_id, container_dir, 'cow_rw') 45 | container_cow_workdir = _get_container_path( 46 | container_id, container_dir, 'cow_workdir') 47 | container_rootfs = _get_container_path( 48 | container_id, container_dir, 'rootfs') 49 | for d in (container_cow_rw, container_cow_workdir, container_rootfs): 50 | if not os.path.exists(d): 51 | os.makedirs(d) 52 | 53 | # Mount the overlay (HINT: use the MS_NODEV flag to mount) 54 | linux.mount( 55 | 'overlay', container_rootfs, 'overlay', linux.MS_NODEV, 56 | "lowerdir={image_root},upperdir={cow_rw},workdir={cow_workdir}".format( 57 | image_root=image_root, 58 | cow_rw=container_cow_rw, 59 | cow_workdir=container_cow_workdir)) 60 | 61 | return 
container_rootfs # return the mountpoint for the overlayfs 62 | 63 | 64 | @click.group() 65 | def cli(): 66 | pass 67 | 68 | 69 | def makedev(dev_path): 70 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 71 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 72 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 73 | # Add extra devices 74 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 75 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 76 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 77 | 'full': (stat.S_IFCHR, 1, 7)} 78 | for device, (dev_type, major, minor) in DEVICES.items(): 79 | os.mknod(os.path.join(dev_path, device), 80 | 0o666 | dev_type, os.makedev(major, minor)) 81 | 82 | 83 | def _create_mounts(new_root): 84 | # Create mounts (/proc, /sys, /dev) under new_root 85 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 86 | linux.mount('sysfs', os.path.join(new_root, 'sys'), 'sysfs', 0, '') 87 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 88 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 89 | 90 | # Add some basic devices 91 | devpts_path = os.path.join(new_root, 'dev', 'pts') 92 | if not os.path.exists(devpts_path): 93 | os.makedirs(devpts_path) 94 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 95 | 96 | makedev(os.path.join(new_root, 'dev')) 97 | 98 | 99 | def contain(command, image_name, image_dir, container_id, container_dir): 100 | linux.unshare(linux.CLONE_NEWNS) # create a new mount namespace 101 | linux.unshare(linux.CLONE_NEWUTS) # switch to a new UTS namespace 102 | linux.sethostname(container_id) # change hostname to container_id 103 | 104 | linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None) 105 | 106 | new_root = create_container_root( 107 | image_name, image_dir, container_id, container_dir) 108 | print('Created a new root fs for our container: {}'.format(new_root)) 109 | 110 | 
_create_mounts(new_root) 111 | 112 | old_root = os.path.join(new_root, 'old_root') 113 | os.makedirs(old_root) 114 | linux.pivot_root(new_root, old_root) 115 | 116 | os.chdir('/') 117 | 118 | linux.umount2('/old_root', linux.MNT_DETACH) # umount old root 119 | os.rmdir('/old_root') # rmdir the old_root dir 120 | 121 | os.execvp(command[0], command) 122 | 123 | 124 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 125 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 126 | @click.option('--image-dir', help='Images directory', 127 | default='/workshop/images') 128 | @click.option('--container-dir', help='Containers directory', 129 | default='/workshop/containers') 130 | @click.argument('Command', required=True, nargs=-1) 131 | def run(image_name, image_dir, container_dir, command): 132 | container_id = str(uuid.uuid4()) 133 | 134 | # TODO: Switching to a new PID namespace (using unshare) would only affect 135 | # the children of a process (because we can't change the PID of a 136 | # running process), so we'll have to unshare here OR replace 137 | # os.fork() with linux.clone() 138 | 139 | pid = os.fork() 140 | if pid == 0: 141 | # This is the child, we'll try to do some containment here 142 | try: 143 | contain(command, image_name, image_dir, container_id, 144 | container_dir) 145 | except Exception: 146 | traceback.print_exc() 147 | os._exit(1) # something went wrong in contain() 148 | 149 | # This is the parent, pid contains the PID of the forked process 150 | # wait for the forked child, fetch the exit status 151 | _, status = os.waitpid(pid, 0) 152 | print('{} exited with status {}'.format(pid, status)) 153 | 154 | 155 | if __name__ == '__main__': 156 | cli() 157 | -------------------------------------------------------------------------------- /levels/07_net_namespace/README.md: -------------------------------------------------------------------------------- 1 | # Level 07: network namespace 2 | 3 | Move to a new 
network namespace so that the container doesn't have access to the host NICs. 4 | After implementing this, you can check with `ip link` or `ifconfig` that you don't see the host NICs anymore. 5 | 6 | Bonus: The iproute2 toolchain also allows fiddling with network namespaces. 7 | Have a look at the `ip netns` commands. 8 | To make it work with the namespaces we generate using syscalls, we need to link a file in `/var/run/netns` to the network namespace file descriptor from `/proc/<pid>/ns/`. 9 | 10 | ## Relevant Documentation 11 | 12 | - [Namespaces in operation - network namespace](https://lwn.net/Articles/580893/) 13 | - [man ip-netns](http://man7.org/linux/man-pages/man8/ip-netns.8.html) 14 | 15 | ## How to check your work 16 | Run the container and use `ip link` or `ifconfig` to browse the available NICs. 17 | You should see only `lo` (if using `ip link`) or no NICs (if using `ifconfig`). 18 | ``` 19 | $ sudo python rd.py run -i ubuntu /bin/bash 20 | Created a new root fs for our container: /workshop/containers/9feb3d2d-725b-4c36-8c4d-0c586766f6f6/rootfs 21 | root@9feb3d2d-725b-4c36-8c4d-0c586766f6f6:/# ip a 22 | 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default 23 | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 24 | root@9feb3d2d-725b-4c36-8c4d-0c586766f6f6:/# ifconfig 25 | ``` 26 | 27 | ## Bonus round 28 | The `ip netns` command allows you to manipulate network namespaces, e.g. to create a `veth` pair and assign one of the pair to your new network namespace. 29 | The `veth` pair is somewhat like a patch cable - packets sent on one `veth` NIC will appear on the other member of the pair. 30 | You can use the `veth` to connect your container to a bridge/vswitch (like Docker does) or a routing table. 31 | `ip netns` uses the `netlink` kernel API and you can use it directly with the `pyroute2` module. 32 | Alternatively, it may be easier to start by running `ip netns` commands.
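To make the linking trick concrete, here is a minimal sketch (the `netns_dir` parameter is not part of the workshop code; it exists only so the sketch can be tried without touching the real `/var/run/netns`):

```python
import os

def expose_netns(pid, name, netns_dir='/var/run/netns'):
    """Expose a process's network namespace to `ip netns` by symlinking
    <netns_dir>/<name> -> /proc/<pid>/ns/net."""
    if not os.path.exists(netns_dir):
        os.makedirs(netns_dir)
    link = os.path.join(netns_dir, name)
    os.symlink('/proc/%d/ns/net' % pid, link)
    return link
```

Once the link exists, `ip netns exec <name> ip link` should run inside that namespace.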
Note that `ip netns` requires the network namespace file descriptor to reside in `/var/run/netns`; you can symlink `/proc/<pid>/ns/net` there to get `ip netns` to work. 34 | -------------------------------------------------------------------------------- /levels/07_net_namespace/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 7: Add network namespace. 3 | 4 | Goal: Have your own NICs. 5 | """ 6 | 7 | 8 | 9 | import linux 10 | import tarfile 11 | import uuid 12 | 13 | import click 14 | import os 15 | import stat 16 | 17 | 18 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 19 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 20 | 21 | 22 | def _get_container_path(container_id, base_path, *subdir_names): 23 | return os.path.join(base_path, container_id, *subdir_names) 24 | 25 | 26 | def create_container_root(image_name, image_dir, container_id, container_dir): 27 | image_path = _get_image_path(image_name, image_dir) 28 | image_root = os.path.join(image_dir, image_name, 'rootfs') 29 | 30 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 31 | 32 | if not os.path.exists(image_root): 33 | os.makedirs(image_root) 34 | with tarfile.open(image_path) as t: 35 | # Fun fact: tar files may contain *nix devices!
*facepalm* 36 | members = [m for m in t.getmembers() 37 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 38 | t.extractall(image_root, members=members) 39 | 40 | # Create directories for copy-on-write (upperdir), overlay workdir, 41 | # and a mount point 42 | container_cow_rw = _get_container_path( 43 | container_id, container_dir, 'cow_rw') 44 | container_cow_workdir = _get_container_path( 45 | container_id, container_dir, 'cow_workdir') 46 | container_rootfs = _get_container_path( 47 | container_id, container_dir, 'rootfs') 48 | for d in (container_cow_rw, container_cow_workdir, container_rootfs): 49 | if not os.path.exists(d): 50 | os.makedirs(d) 51 | 52 | # Mount the overlay (HINT: use the MS_NODEV flag to mount) 53 | linux.mount( 54 | 'overlay', container_rootfs, 'overlay', linux.MS_NODEV, 55 | "lowerdir={image_root},upperdir={cow_rw},workdir={cow_workdir}".format( 56 | image_root=image_root, 57 | cow_rw=container_cow_rw, 58 | cow_workdir=container_cow_workdir)) 59 | 60 | return container_rootfs # return the mountpoint for the overlayfs 61 | 62 | 63 | @click.group() 64 | def cli(): 65 | pass 66 | 67 | 68 | def makedev(dev_path): 69 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 70 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 71 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 72 | # Add extra devices 73 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 74 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 75 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 76 | 'full': (stat.S_IFCHR, 1, 7)} 77 | for device, (dev_type, major, minor) in DEVICES.items(): 78 | os.mknod(os.path.join(dev_path, device), 79 | 0o666 | dev_type, os.makedev(major, minor)) 80 | 81 | 82 | def _create_mounts(new_root): 83 | # Create mounts (/proc, /sys, /dev) under new_root 84 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 85 | linux.mount('sysfs',
os.path.join(new_root, 'sys'), 'sysfs', 0, '') 86 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 87 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 88 | 89 | # Add some basic devices 90 | devpts_path = os.path.join(new_root, 'dev', 'pts') 91 | if not os.path.exists(devpts_path): 92 | os.makedirs(devpts_path) 93 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 94 | 95 | makedev(os.path.join(new_root, 'dev')) 96 | 97 | 98 | def contain(command, image_name, image_dir, container_id, container_dir): 99 | linux.sethostname(container_id) # change hostname to container_id 100 | 101 | linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None) 102 | 103 | new_root = create_container_root( 104 | image_name, image_dir, container_id, container_dir) 105 | print('Created a new root fs for our container: {}'.format(new_root)) 106 | 107 | _create_mounts(new_root) 108 | 109 | old_root = os.path.join(new_root, 'old_root') 110 | os.makedirs(old_root) 111 | linux.pivot_root(new_root, old_root) 112 | 113 | os.chdir('/') 114 | 115 | linux.umount2('/old_root', linux.MNT_DETACH) # umount old root 116 | os.rmdir('/old_root') # rmdir the old_root dir 117 | 118 | os.execvp(command[0], command) 119 | 120 | 121 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 122 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 123 | @click.option('--image-dir', help='Images directory', 124 | default='/workshop/images') 125 | @click.option('--container-dir', help='Containers directory', 126 | default='/workshop/containers') 127 | @click.argument('Command', required=True, nargs=-1) 128 | def run(image_name, image_dir, container_dir, command): 129 | container_id = str(uuid.uuid4()) 130 | 131 | # TODO: switch to a new NET namespace 132 | # linux.clone(callback, flags, callback_args) is modeled after the Glibc 133 | # version. 
see: "man 2 clone" 134 | flags = linux.CLONE_NEWPID | linux.CLONE_NEWNS | linux.CLONE_NEWUTS 135 | callback_args = (command, image_name, image_dir, container_id, 136 | container_dir) 137 | pid = linux.clone(contain, flags, callback_args) 138 | 139 | # This is the parent, pid contains the PID of the forked process 140 | # wait for the forked child, fetch the exit status 141 | _, status = os.waitpid(pid, 0) 142 | print('{} exited with status {}'.format(pid, status)) 143 | 144 | 145 | if __name__ == '__main__': 146 | cli() 147 | -------------------------------------------------------------------------------- /levels/08_cpu_cgroup/README.md: -------------------------------------------------------------------------------- 1 | # Level 08: CPU CGroup 2 | 3 | In this level we add our first cgroup: 4 | - First, we create a top cgroup for all containers and a subgroup for every container, something like `/sys/fs/cgroup/cpu/rubber_docker/<container_id>`. 5 | - Then we move the contained process to the group by writing its pid to the `tasks` file of the group. 6 | - Finally, we set the limits of the group by writing the number of allocated shares to `cpu.shares`. 7 | 8 | ## Relevant Documentation 9 | - [Kernel docs, scheduler design](https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt) - section 7 10 | - [Kernel docs, CPU accounting controller](https://www.kernel.org/doc/Documentation/cgroup-v1/cpuacct.txt) 11 | 12 | 13 | ## Exercises 14 | - Run a container with 200 cpu shares and then generate cpu load inside the container (using the `stress` tool). How much cpu usage does the host show? Why? 15 | - Run two containers with different share allocations, generate cpu load in both and observe cpu usage using `top` on the host. 16 | - What is the interaction between cgroup limits and `nice` priorities and priority classes (e.g. RT scheduler)?
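The cgroup setup described at the top of this level can be sketched as follows (a sketch only: the `cgroup_base` parameter is not part of the workshop code and exists so the sketch can be dry-run against a scratch directory; on a real host it is `/sys/fs/cgroup/cpu` and requires root):

```python
import os

def add_to_cpu_cgroup(container_id, cpu_shares,
                      cgroup_base='/sys/fs/cgroup/cpu'):
    # Step 1: create the per-container group under a top 'rubber_docker' group
    cgroup_dir = os.path.join(cgroup_base, 'rubber_docker', container_id)
    if not os.path.exists(cgroup_dir):
        os.makedirs(cgroup_dir)
    # Step 2: move the calling process into the group
    with open(os.path.join(cgroup_dir, 'tasks'), 'w') as f:
        f.write(str(os.getpid()))
    # Step 3: set the relative CPU weight
    if cpu_shares:
        with open(os.path.join(cgroup_dir, 'cpu.shares'), 'w') as f:
            f.write(str(cpu_shares))
    return cgroup_dir
```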
17 | 18 | ## How to check your work 19 | Look at the content of `/proc/self/cgroup` from within a container to verify it is in a new cpu cgroup: 20 | ``` 21 | $ sudo python rd.py run -i ubuntu /bin/bash 22 | Created a new root fs for our container: /workshop/containers/57f02a16-4515-4068-b097-b241b66e4987/rootfs 23 | root@57f02a16-4515-4068-b097-b241b66e4987:/# cat /proc/self/cgroup 24 | 10:hugetlb:/user.slice/user-1000.slice/session-2.scope 25 | 9:blkio:/user.slice/user-1000.slice/session-2.scope 26 | 8:net_cls,net_prio:/user.slice/user-1000.slice/session-2.scope 27 | 7:cpu,cpuacct:/rubber_docker/57f02a16-4515-4068-b097-b241b66e4987 28 | 6:perf_event:/user.slice/user-1000.slice/session-2.scope 29 | 5:devices:/user.slice/user-1000.slice/session-2.scope 30 | 4:cpuset:/user.slice/user-1000.slice/session-2.scope 31 | 3:memory:/user.slice/user-1000.slice/session-2.scope 32 | 2:freezer:/user.slice/user-1000.slice/session-2.scope 33 | 1:name=systemd:/user.slice/user-1000.slice/session-2.scope 34 | 35 | root@57f02a16-4515-4068-b097-b241b66e4987:/# grep 57f02a16-4515-4068-b097-b241b66e4987 /proc/self/cgroup 36 | 7:cpu,cpuacct:/rubber_docker/57f02a16-4515-4068-b097-b241b66e4987 37 | ``` 38 | 39 | Alternatively, you can take a look from the host at `/sys/fs/cgroup/cpu`: 40 | ``` 41 | $ cat /sys/fs/cgroup/cpu/rubber_docker/57f02a16-4515-4068-b097-b241b66e4987/tasks 42 | 5386 43 | ``` 44 | -------------------------------------------------------------------------------- /levels/08_cpu_cgroup/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 8: Add CPU Control group. 3 | 4 | Goal: prevent your container from starving host processes of CPU time.
5 | """ 6 | 7 | 8 | 9 | import linux 10 | import tarfile 11 | import uuid 12 | 13 | import click 14 | import os 15 | import stat 16 | 17 | 18 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 19 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 20 | 21 | 22 | def _get_container_path(container_id, base_path, *subdir_names): 23 | return os.path.join(base_path, container_id, *subdir_names) 24 | 25 | 26 | def create_container_root(image_name, image_dir, container_id, container_dir): 27 | image_path = _get_image_path(image_name, image_dir) 28 | image_root = os.path.join(image_dir, image_name, 'rootfs') 29 | 30 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 31 | 32 | if not os.path.exists(image_root): 33 | os.makedirs(image_root) 34 | with tarfile.open(image_path) as t: 35 | # Fun fact: tar files may contain *nix devices! *facepalm* 36 | members = [m for m in t.getmembers() 37 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 38 | t.extractall(image_root, members=members) 39 | 40 | # Create directories for copy-on-write (upperdir), overlay workdir, 41 | # and a mount point 42 | container_cow_rw = _get_container_path( 43 | container_id, container_dir, 'cow_rw') 44 | container_cow_workdir = _get_container_path( 45 | container_id, container_dir, 'cow_workdir') 46 | container_rootfs = _get_container_path( 47 | container_id, container_dir, 'rootfs') 48 | for d in (container_cow_rw, container_cow_workdir, container_rootfs): 49 | if not os.path.exists(d): 50 | os.makedirs(d) 51 | 52 | # Mount the overlay (HINT: use the MS_NODEV flag to mount) 53 | linux.mount( 54 | 'overlay', container_rootfs, 'overlay', linux.MS_NODEV, 55 | "lowerdir={image_root},upperdir={cow_rw},workdir={cow_workdir}".format( 56 | image_root=image_root, 57 | cow_rw=container_cow_rw, 58 | cow_workdir=container_cow_workdir)) 59 | 60 | return container_rootfs # return the mountpoint for the overlayfs 61 | 62 | 63 | @click.group() 64
| def cli(): 65 | pass 66 | 67 | 68 | def makedev(dev_path): 69 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 70 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 71 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 72 | # Add extra devices 73 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 74 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 75 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 76 | 'full': (stat.S_IFCHR, 1, 7)} 77 | for device, (dev_type, major, minor) in DEVICES.items(): 78 | os.mknod(os.path.join(dev_path, device), 79 | 0o666 | dev_type, os.makedev(major, minor)) 80 | 81 | 82 | def _create_mounts(new_root): 83 | # Create mounts (/proc, /sys, /dev) under new_root 84 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 85 | linux.mount('sysfs', os.path.join(new_root, 'sys'), 'sysfs', 0, '') 86 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 87 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 88 | 89 | # Add some basic devices 90 | devpts_path = os.path.join(new_root, 'dev', 'pts') 91 | if not os.path.exists(devpts_path): 92 | os.makedirs(devpts_path) 93 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 94 | 95 | makedev(os.path.join(new_root, 'dev')) 96 | 97 | 98 | def contain(command, image_name, image_dir, container_id, container_dir, 99 | cpu_shares): 100 | # TODO: insert the container into a new cpu cgroup named: 101 | # 'rubber_docker/container_id' 102 | 103 | # TODO: if (cpu_shares != 0) => set the 'cpu.shares' in our cpu cgroup 104 | 105 | linux.sethostname(container_id) # change hostname to container_id 106 | 107 | linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None) 108 | 109 | new_root = create_container_root( 110 | image_name, image_dir, container_id, container_dir) 111 | print('Created a new root fs for our container: {}'.format(new_root)) 112 | 113 | _create_mounts(new_root) 114 | 115 |
old_root = os.path.join(new_root, 'old_root') 116 | os.makedirs(old_root) 117 | linux.pivot_root(new_root, old_root) 118 | 119 | os.chdir('/') 120 | 121 | linux.umount2('/old_root', linux.MNT_DETACH) # umount old root 122 | os.rmdir('/old_root') # rmdir the old_root dir 123 | 124 | os.execvp(command[0], command) 125 | 126 | 127 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 128 | @click.option('--cpu-shares', help='CPU shares (relative weight)', default=0) 129 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 130 | @click.option('--image-dir', help='Images directory', 131 | default='/workshop/images') 132 | @click.option('--container-dir', help='Containers directory', 133 | default='/workshop/containers') 134 | @click.argument('Command', required=True, nargs=-1) 135 | def run(cpu_shares, image_name, image_dir, container_dir, command): 136 | container_id = str(uuid.uuid4()) 137 | 138 | # linux.clone(callback, flags, callback_args) is modeled after the Glibc 139 | # version. see: "man 2 clone" 140 | flags = (linux.CLONE_NEWPID | linux.CLONE_NEWNS | linux.CLONE_NEWUTS | 141 | linux.CLONE_NEWNET) 142 | callback_args = (command, image_name, image_dir, container_id, 143 | container_dir, cpu_shares) 144 | pid = linux.clone(contain, flags, callback_args) 145 | 146 | # This is the parent, pid contains the PID of the forked process 147 | # wait for the forked child, fetch the exit status 148 | _, status = os.waitpid(pid, 0) 149 | print('{} exited with status {}'.format(pid, status)) 150 | 151 | 152 | if __name__ == '__main__': 153 | cli() 154 | -------------------------------------------------------------------------------- /levels/09_memory_cgorup/README.md: -------------------------------------------------------------------------------- 1 | # Level 09: Memory CGroup 2 | 3 | In this level we limit the memory usage of the container. 4 | Create a directory inside the memory cgroup fs (like we did in the cpu cgroup) per container. 
5 | Move the process to the cgroup by writing the pid to the `tasks` file and then set up the limits by writing to the following files: 6 | - `memory.limit_in_bytes` - either a number of bytes or a value with a unit suffix, e.g. `1g` 7 | - `memory.memsw.limit_in_bytes` - either a number of bytes or a value with a unit suffix, e.g. `1g` 8 | 9 | ## Exercises 10 | 11 | After setting the limits, run a container with the `stress` tool and observe what happens when your container goes over the allotted limit. 12 | Explore the behavior of the container: 13 | - Watch when the container goes over `memory.limit_in_bytes`. 14 | - Watch when the container goes over `memory.memsw.limit_in_bytes`. 15 | - Watch `memory.kmem.usage_in_bytes`; is all kernel memory use accounted for? 16 | - Try the infamous `while true; do mkdir t; cd t; done` DoS attack from within the container. Does it succeed in DoSing the host? 17 | - Where is `tcp` socket buffer memory accounted? Where is `udp` memory accounted? 18 | - Explore the behavior of the memory cgroup with different OOM control options (the `memory.oom_control` file).
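The per-container memory cgroup setup described above can be sketched much like the cpu cgroup from the previous level (a sketch only: the `cgroup_base` parameter is not part of the workshop code and exists so the sketch can be dry-run against a scratch directory instead of `/sys/fs/cgroup/memory`; the kernel accepts suffixed values such as `128m` in these files as-is):

```python
import os

def setup_memory_cgroup(container_id, memory=None, memory_swap=None,
                        cgroup_base='/sys/fs/cgroup/memory'):
    cgroup_dir = os.path.join(cgroup_base, 'rubber_docker', container_id)
    if not os.path.exists(cgroup_dir):
        os.makedirs(cgroup_dir)
    # Move the calling process into the group
    with open(os.path.join(cgroup_dir, 'tasks'), 'w') as f:
        f.write(str(os.getpid()))
    # On a real host, memory.limit_in_bytes must be set before
    # memory.memsw.limit_in_bytes (memsw can never be below memory)
    if memory is not None:
        with open(os.path.join(cgroup_dir,
                               'memory.limit_in_bytes'), 'w') as f:
            f.write(str(memory))
    if memory_swap is not None:
        with open(os.path.join(cgroup_dir,
                               'memory.memsw.limit_in_bytes'), 'w') as f:
            f.write(str(memory_swap))
    return cgroup_dir
```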
19 | 20 | ## Relevant Documentation 21 | - [Kernel docs, memory cgroup](https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt) 22 | 23 | 24 | ## How to check your work 25 | From the container: 26 | ``` 27 | $ sudo python rd.py run -i ubuntu --memory 128m --memory-swap 150m /bin/bash 28 | Created a new root fs for our container: /workshop/containers/1e9b16b3-3ea3-4cad-84e1-f623ba4deada/rootfs 29 | root@1e9b16b3-3ea3-4cad-84e1-f623ba4deada:/# cat /proc/self/cgroup 30 | 10:hugetlb:/user.slice/user-1000.slice/session-2.scope 31 | 9:blkio:/user.slice/user-1000.slice/session-2.scope 32 | 8:net_cls,net_prio:/user.slice/user-1000.slice/session-2.scope 33 | 7:cpu,cpuacct:/rubber_docker/1e9b16b3-3ea3-4cad-84e1-f623ba4deada 34 | 6:perf_event:/user.slice/user-1000.slice/session-2.scope 35 | 5:devices:/user.slice/user-1000.slice/session-2.scope 36 | 4:cpuset:/user.slice/user-1000.slice/session-2.scope 37 | 3:memory:/rubber_docker/1e9b16b3-3ea3-4cad-84e1-f623ba4deada 38 | 2:freezer:/user.slice/user-1000.slice/session-2.scope 39 | 1:name=systemd:/user.slice/user-1000.slice/session-2.scope 40 | ``` 41 | 42 | From the host: 43 | ``` 44 | $ cat /sys/fs/cgroup/memory/rubber_docker/1e9b16b3-3ea3-4cad-84e1-f623ba4deada/memory.limit_in_bytes 45 | 134217728 46 | $ cat /sys/fs/cgroup/memory/rubber_docker/1e9b16b3-3ea3-4cad-84e1-f623ba4deada/memory.memsw.limit_in_bytes 47 | 157286400 48 | ``` 49 | 50 | ## Bonus round 51 | Read about and use the following control files: 52 | - `memory.oom_control` 53 | - `memory.swappiness` 54 | - `memory.kmem.limit_in_bytes` 55 | - `memory.kmem.tcp.limit_in_bytes` 56 | - `memory.soft_limit_in_bytes` 57 | -------------------------------------------------------------------------------- /levels/09_memory_cgorup/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 9: Add Memory Control group.
3 | 4 | Goal: prevent your container from eating all the RAM. 5 | """ 6 | 7 | 8 | 9 | import linux 10 | import tarfile 11 | import uuid 12 | 13 | import click 14 | import os 15 | import stat 16 | 17 | 18 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 19 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 20 | 21 | 22 | def _get_container_path(container_id, base_path, *subdir_names): 23 | return os.path.join(base_path, container_id, *subdir_names) 24 | 25 | 26 | def create_container_root(image_name, image_dir, container_id, container_dir): 27 | image_path = _get_image_path(image_name, image_dir) 28 | image_root = os.path.join(image_dir, image_name, 'rootfs') 29 | 30 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 31 | 32 | if not os.path.exists(image_root): 33 | os.makedirs(image_root) 34 | with tarfile.open(image_path) as t: 35 | # Fun fact: tar files may contain *nix devices! *facepalm* 36 | members = [m for m in t.getmembers() 37 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 38 | t.extractall(image_root, members=members) 39 | 40 | # Create directories for copy-on-write (upperdir), overlay workdir, 41 | # and a mount point 42 | container_cow_rw = _get_container_path( 43 | container_id, container_dir, 'cow_rw') 44 | container_cow_workdir = _get_container_path( 45 | container_id, container_dir, 'cow_workdir') 46 | container_rootfs = _get_container_path( 47 | container_id, container_dir, 'rootfs') 48 | for d in (container_cow_rw, container_cow_workdir, container_rootfs): 49 | if not os.path.exists(d): 50 | os.makedirs(d) 51 | 52 | # Mount the overlay (HINT: use the MS_NODEV flag to mount) 53 | linux.mount( 54 | 'overlay', container_rootfs, 'overlay', linux.MS_NODEV, 55 | "lowerdir={image_root},upperdir={cow_rw},workdir={cow_workdir}".format( 56 | image_root=image_root, 57 | cow_rw=container_cow_rw, 58 | cow_workdir=container_cow_workdir)) 59 | 60 | return container_rootfs # return the
mountpoint for the overlayfs 61 | 62 | 63 | @click.group() 64 | def cli(): 65 | pass 66 | 67 | 68 | def makedev(dev_path): 69 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 70 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 71 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 72 | # Add extra devices 73 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 74 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 75 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 76 | 'full': (stat.S_IFCHR, 1, 7)} 77 | for device, (dev_type, major, minor) in DEVICES.items(): 78 | os.mknod(os.path.join(dev_path, device), 79 | 0o666 | dev_type, os.makedev(major, minor)) 80 | 81 | 82 | def _create_mounts(new_root): 83 | # Create mounts (/proc, /sys, /dev) under new_root 84 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 85 | linux.mount('sysfs', os.path.join(new_root, 'sys'), 'sysfs', 0, '') 86 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 87 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 88 | 89 | # Add some basic devices 90 | devpts_path = os.path.join(new_root, 'dev', 'pts') 91 | if not os.path.exists(devpts_path): 92 | os.makedirs(devpts_path) 93 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 94 | 95 | makedev(os.path.join(new_root, 'dev')) 96 | 97 | 98 | def _setup_cpu_cgroup(container_id, cpu_shares): 99 | CPU_CGROUP_BASEDIR = '/sys/fs/cgroup/cpu' 100 | container_cpu_cgroup_dir = os.path.join( 101 | CPU_CGROUP_BASEDIR, 'rubber_docker', container_id) 102 | 103 | # Insert the container into a new cpu cgroup named 'rubber_docker/container_id' 104 | if not os.path.exists(container_cpu_cgroup_dir): 105 | os.makedirs(container_cpu_cgroup_dir) 106 | tasks_file = os.path.join(container_cpu_cgroup_dir, 'tasks') 107 | open(tasks_file, 'w').write(str(os.getpid())) 108 | 109 | # If (cpu_shares != 0) => set the 'cpu.shares' in our cpu cgroup 110 | if
cpu_shares: 111 | cpu_shares_file = os.path.join(container_cpu_cgroup_dir, 'cpu.shares') 112 | open(cpu_shares_file, 'w').write(str(cpu_shares)) 113 | 114 | 115 | def contain(command, image_name, image_dir, container_id, container_dir, 116 | cpu_shares, memory, memory_swap): 117 | _setup_cpu_cgroup(container_id, cpu_shares) 118 | 119 | # TODO: similarly to the CPU cgroup, add Memory cgroup support here 120 | # setup memory -> memory.limit_in_bytes, 121 | # memory_swap -> memory.memsw.limit_in_bytes if they are not None 122 | 123 | linux.sethostname(container_id) # Change hostname to container_id 124 | 125 | linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None) 126 | 127 | new_root = create_container_root( 128 | image_name, image_dir, container_id, container_dir) 129 | print('Created a new root fs for our container: {}'.format(new_root)) 130 | 131 | _create_mounts(new_root) 132 | 133 | old_root = os.path.join(new_root, 'old_root') 134 | os.makedirs(old_root) 135 | linux.pivot_root(new_root, old_root) 136 | 137 | os.chdir('/') 138 | 139 | linux.umount2('/old_root', linux.MNT_DETACH) # umount old root 140 | os.rmdir('/old_root') # rmdir the old_root dir 141 | 142 | os.execvp(command[0], command) 143 | 144 | 145 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 146 | @click.option('--memory', 147 | help='Memory limit in bytes.' 148 | ' Use suffixes to represent larger units (k, m, g)', 149 | default=None) 150 | @click.option('--memory-swap', 151 | help='A positive integer equal to memory plus swap.'
152 | ' Specify -1 to enable unlimited swap.', 153 | default=None) 154 | @click.option('--cpu-shares', help='CPU shares (relative weight)', default=0) 155 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 156 | @click.option('--image-dir', help='Images directory', 157 | default='/workshop/images') 158 | @click.option('--container-dir', help='Containers directory', 159 | default='/workshop/containers') 160 | @click.argument('Command', required=True, nargs=-1) 161 | def run(memory, memory_swap, cpu_shares, image_name, image_dir, container_dir, 162 | command): 163 | container_id = str(uuid.uuid4()) 164 | 165 | # linux.clone(callback, flags, callback_args) is modeled after the Glibc 166 | # version. see: "man 2 clone" 167 | flags = (linux.CLONE_NEWPID | linux.CLONE_NEWNS | linux.CLONE_NEWUTS | 168 | linux.CLONE_NEWNET) 169 | callback_args = (command, image_name, image_dir, container_id, 170 | container_dir, cpu_shares, memory, memory_swap) 171 | pid = linux.clone(contain, flags, callback_args) 172 | 173 | # This is the parent, pid contains the PID of the forked process 174 | # Wait for the forked child, fetch the exit status 175 | _, status = os.waitpid(pid, 0) 176 | print('{} exited with status {}'.format(pid, status)) 177 | 178 | 179 | if __name__ == '__main__': 180 | cli() 181 | -------------------------------------------------------------------------------- /levels/10_setuid/README.md: -------------------------------------------------------------------------------- 1 | # Level 10: setuid 2 | 3 | In this level we implement functionality similar to `docker run -u UID` - the ability to run processes as a non-root user. 4 | In order to run the contained process as a different user, use the _setuid_ and _setgid_ system calls. 5 | These system calls must be called before we _exec_, but after we do all the tasks that require root privileges. 6 | 7 | ## Exercises 8 | - Use a uid of an existing username (e.g. 
1000) and play around with the container's _/etc/passwd_ file. 9 | - Create some files inside the container and observe the owner uid outside the container. How does that affect shared volumes between containers? 10 | 11 | ## Relevant Documentation 12 | 13 | - [man 2 setuid](http://man7.org/linux/man-pages/man2/setuid.2.html) 14 | 15 | ## How to check your work 16 | ``` 17 | $ sudo python rd.py run -i ubuntu --user 2014:222 /bin/bash 18 | Created a new root fs for our container: /workshop/containers/1e9b16b3-3ea3-4cad-84e1-f623ba4deada/rootfs 19 | root@1e9b16b3-3ea3-4cad-84e1-f623ba4deada:/# id 20 | 21 | ``` 22 | -------------------------------------------------------------------------------- /levels/10_setuid/rd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Docker From Scratch Workshop - Level 10: Exec non-root containers. 3 | 4 | Goal: allow specifying uid and gid for the container to run as a non-root user. 5 | """ 6 | 7 | 8 | 9 | 10 | import linux 11 | import tarfile 12 | import uuid 13 | 14 | import click 15 | import os 16 | import stat 17 | 18 | 19 | def _get_image_path(image_name, image_dir, image_suffix='tar'): 20 | return os.path.join(image_dir, os.extsep.join([image_name, image_suffix])) 21 | 22 | 23 | def _get_container_path(container_id, base_path, *subdir_names): 24 | return os.path.join(base_path, container_id, *subdir_names) 25 | 26 | 27 | def create_container_root(image_name, image_dir, container_id, container_dir): 28 | image_path = _get_image_path(image_name, image_dir) 29 | image_root = os.path.join(image_dir, image_name, 'rootfs') 30 | 31 | assert os.path.exists(image_path), "unable to locate image %s" % image_name 32 | 33 | if not os.path.exists(image_root): 34 | os.makedirs(image_root) 35 | with tarfile.open(image_path) as t: 36 | # Fun fact: tar files may contain *nix devices!
*facepalm* 37 | members = [m for m in t.getmembers() 38 | if m.type not in (tarfile.CHRTYPE, tarfile.BLKTYPE)] 39 | t.extractall(image_root, members=members) 40 | 41 | # Create directories for copy-on-write (upperdir), overlay workdir, 42 | # and a mount point 43 | container_cow_rw = _get_container_path( 44 | container_id, container_dir, 'cow_rw') 45 | container_cow_workdir = _get_container_path( 46 | container_id, container_dir, 'cow_workdir') 47 | container_rootfs = _get_container_path( 48 | container_id, container_dir, 'rootfs') 49 | for d in (container_cow_rw, container_cow_workdir, container_rootfs): 50 | if not os.path.exists(d): 51 | os.makedirs(d) 52 | 53 | # mount the overlay (HINT: use the MS_NODEV flag to mount) 54 | linux.mount( 55 | 'overlay', container_rootfs, 'overlay', linux.MS_NODEV, 56 | "lowerdir={image_root},upperdir={cow_rw},workdir={cow_workdir}".format( 57 | image_root=image_root, 58 | cow_rw=container_cow_rw, 59 | cow_workdir=container_cow_workdir)) 60 | 61 | return container_rootfs # return the mountpoint for the overlayfs 62 | 63 | 64 | @click.group() 65 | def cli(): 66 | pass 67 | 68 | 69 | def makedev(dev_path): 70 | for i, dev in enumerate(['stdin', 'stdout', 'stderr']): 71 | os.symlink('/proc/self/fd/%d' % i, os.path.join(dev_path, dev)) 72 | os.symlink('/proc/self/fd', os.path.join(dev_path, 'fd')) 73 | # Add extra devices 74 | DEVICES = {'null': (stat.S_IFCHR, 1, 3), 'zero': (stat.S_IFCHR, 1, 5), 75 | 'random': (stat.S_IFCHR, 1, 8), 'urandom': (stat.S_IFCHR, 1, 9), 76 | 'console': (stat.S_IFCHR, 136, 1), 'tty': (stat.S_IFCHR, 5, 0), 77 | 'full': (stat.S_IFCHR, 1, 7)} 78 | for device, (dev_type, major, minor) in DEVICES.items(): 79 | os.mknod(os.path.join(dev_path, device), 80 | 0o666 | dev_type, os.makedev(major, minor)) 81 | 82 | 83 | def _create_mounts(new_root): 84 | # Create mounts (/proc, /sys, /dev) under new_root 85 | linux.mount('proc', os.path.join(new_root, 'proc'), 'proc', 0, '') 86 | linux.mount('sysfs', 
os.path.join(new_root, 'sys'), 'sysfs', 0, '') 87 | linux.mount('tmpfs', os.path.join(new_root, 'dev'), 'tmpfs', 88 | linux.MS_NOSUID | linux.MS_STRICTATIME, 'mode=755') 89 | 90 | # Add some basic devices 91 | devpts_path = os.path.join(new_root, 'dev', 'pts') 92 | if not os.path.exists(devpts_path): 93 | os.makedirs(devpts_path) 94 | linux.mount('devpts', devpts_path, 'devpts', 0, '') 95 | 96 | makedev(os.path.join(new_root, 'dev')) 97 | 98 | 99 | def _setup_cpu_cgroup(container_id, cpu_shares): 100 | CPU_CGROUP_BASEDIR = '/sys/fs/cgroup/cpu' 101 | container_cpu_cgroup_dir = os.path.join( 102 | CPU_CGROUP_BASEDIR, 'rubber_docker', container_id) 103 | 104 | # Insert the container to new cpu cgroup named 'rubber_docker/container_id' 105 | if not os.path.exists(container_cpu_cgroup_dir): 106 | os.makedirs(container_cpu_cgroup_dir) 107 | tasks_file = os.path.join(container_cpu_cgroup_dir, 'tasks') 108 | open(tasks_file, 'w').write(str(os.getpid())) 109 | 110 | # If (cpu_shares != 0) => set the 'cpu.shares' in our cpu cgroup 111 | if cpu_shares: 112 | cpu_shares_file = os.path.join(container_cpu_cgroup_dir, 'cpu.shares') 113 | open(cpu_shares_file, 'w').write(str(cpu_shares)) 114 | 115 | 116 | def _setup_memory_cgroup(container_id, memory, memory_swap): 117 | MEMORY_CGROUP_BASEDIR = '/sys/fs/cgroup/memory' 118 | container_mem_cgroup_dir = os.path.join( 119 | MEMORY_CGROUP_BASEDIR, 'rubber_docker', container_id) 120 | 121 | # Insert the container to new memory cgroup named 'rubber_docker/container_id' 122 | if not os.path.exists(container_mem_cgroup_dir): 123 | os.makedirs(container_mem_cgroup_dir) 124 | tasks_file = os.path.join(container_mem_cgroup_dir, 'tasks') 125 | open(tasks_file, 'w').write(str(os.getpid())) 126 | 127 | if memory is not None: 128 | mem_limit_in_bytes_file = os.path.join( 129 | container_mem_cgroup_dir, 'memory.limit_in_bytes') 130 | open(mem_limit_in_bytes_file, 'w').write(str(memory)) 131 | if memory_swap is not None: 132 | 
memsw_limit_in_bytes_file = os.path.join( 133 | container_mem_cgroup_dir, 'memory.memsw.limit_in_bytes') 134 | open(memsw_limit_in_bytes_file, 'w').write(str(memory_swap)) 135 | 136 | 137 | def contain(command, image_name, image_dir, container_id, container_dir, 138 | cpu_shares, memory, memory_swap, user): 139 | _setup_cpu_cgroup(container_id, cpu_shares) 140 | _setup_memory_cgroup(container_id, memory, memory_swap) 141 | 142 | linux.sethostname(container_id) # change hostname to container_id 143 | 144 | linux.mount(None, '/', None, linux.MS_PRIVATE | linux.MS_REC, None) 145 | 146 | new_root = create_container_root( 147 | image_name, image_dir, container_id, container_dir) 148 | print('Created a new root fs for our container: {}'.format(new_root)) 149 | 150 | _create_mounts(new_root) 151 | 152 | old_root = os.path.join(new_root, 'old_root') 153 | os.makedirs(old_root) 154 | linux.pivot_root(new_root, old_root) 155 | 156 | os.chdir('/') 157 | 158 | linux.umount2('/old_root', linux.MNT_DETACH) # umount old root 159 | os.rmdir('/old_root') # rmdir the old_root dir 160 | 161 | # TODO: if user is set, drop privileges using os.setuid() 162 | # (and optionally os.setgid()). 163 | 164 | os.execvp(command[0], command) 165 | 166 | 167 | @cli.command(context_settings=dict(ignore_unknown_options=True,)) 168 | @click.option('--memory', 169 | help='Memory limit in bytes.' 170 | ' Use suffixes to represent larger units (k, m, g)', 171 | default=None) 172 | @click.option('--memory-swap', 173 | help='A positive integer equal to memory plus swap.' 
174 | ' Specify -1 to enable unlimited swap.', 175 | default=None) 176 | @click.option('--cpu-shares', help='CPU shares (relative weight)', default=0) 177 | @click.option('--user', help='UID (format: <uid>[:<gid>])', default='') 178 | @click.option('--image-name', '-i', help='Image name', default='ubuntu') 179 | @click.option('--image-dir', help='Images directory', 180 | default='/workshop/images') 181 | @click.option('--container-dir', help='Containers directory', 182 | default='/workshop/containers') 183 | @click.argument('Command', required=True, nargs=-1) 184 | def run(memory, memory_swap, cpu_shares, user, image_name, image_dir, 185 | container_dir, command): 186 | container_id = str(uuid.uuid4()) 187 | 188 | # linux.clone(callback, flags, callback_args) is modeled after the Glibc 189 | # version. see: "man 2 clone" 190 | flags = (linux.CLONE_NEWPID | linux.CLONE_NEWNS | linux.CLONE_NEWUTS | 191 | linux.CLONE_NEWNET) 192 | callback_args = (command, image_name, image_dir, container_id, 193 | container_dir, cpu_shares, memory, memory_swap, user) 194 | pid = linux.clone(contain, flags, callback_args) 195 | 196 | # This is the parent, pid contains the PID of the forked process 197 | # Wait for the forked child, fetch the exit status 198 | _, status = os.waitpid(pid, 0) 199 | print('{} exited with status {}'.format(pid, status)) 200 | 201 | 202 | if __name__ == '__main__': 203 | cli() 204 | -------------------------------------------------------------------------------- /levels/cleanup.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # 3 | # The dirtiest cleanup script 4 | # 5 | 6 | # don't interfere with umount 7 | pushd / 8 | 9 | # umount stuff 10 | while $(grep -q workshop /proc/mounts); do 11 | sudo umount $(grep workshop /proc/mounts | shuf | head -n1 | cut -f2 -d' ') 2>/dev/null 12 | done 13 | 14 | # remove stuff 15 | sudo rm -rf /workshop/containers/* 16 | 17 | popd 18 | 
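The TODO near the end of `contain` in level 10's rd.py can be completed along these lines. This is a minimal sketch, not the workshop's reference solution, and `parse_user`/`drop_privileges` are our own names. The ordering is the important part: the gid and supplementary groups must be changed while the process is still root, because after `setuid` it no longer has the privilege to change them.

```python
import os


def parse_user(user):
    """Split a 'uid[:gid]' string into numeric (uid, gid); gid may be None."""
    uid_str, _, gid_str = user.partition(':')
    return int(uid_str), (int(gid_str) if gid_str else None)


def drop_privileges(user):
    """Switch to a non-root uid/gid.

    Call this after all root-only setup (cgroups, mounts, pivot_root)
    and immediately before os.execvp.
    """
    uid, gid = parse_user(user)
    if gid is not None:
        os.setgroups([gid])  # drop supplementary groups while still root
        os.setgid(gid)       # gid first...
    os.setuid(uid)           # ...then uid; after this, root is gone


if __name__ == '__main__':
    print(parse_user('2014:222'))  # the uid:gid pair from the README example
```

In `contain` this would be guarded by `if user:` so that the default empty string keeps the container running as root.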
-------------------------------------------------------------------------------- /linux.c: -------------------------------------------------------------------------------- 1 | #define _GNU_SOURCE 2 | #include <Python.h> 3 | #include <sched.h> 4 | #include <signal.h> 5 | #include <sys/mount.h> 6 | #include <sys/syscall.h> 7 | #include <unistd.h> 8 | 9 | #define STACK_SIZE 32768 10 | 11 | #define LINUX_MODULE_DOC "linux\n"\ 12 | "=====\n"\ 13 | "The linux module is a simple Python c extension, containing syscall wrappers "\ 14 | "missing from the Python os module. You will need to use these system calls "\ 15 | "to implement different aspects of process containment during the workshop." 16 | 17 | #define PIVOT_ROOT_DOC ".. py:function:: pivot_root(new_root, put_old)\n"\ 18 | "\n"\ 19 | "change the root filesystem\n"\ 20 | "\n"\ 21 | ":param str new_root: New root file system\n"\ 22 | ":param str put_old: Directory to move the current process root file system to\n"\ 23 | ":return: None\n"\ 24 | ":raises RuntimeError: if pivot_root fails\n"\ 25 | "\n"\ 26 | "**NOTE:** The following restrictions apply to `new_root` and `put_old`:\n"\ 27 | "\n"\ 28 | "* They must be directories.\n"\ 29 | "* `new_root` and put_old must not be on the same filesystem as the current root.\n"\ 30 | "* `new_root` must be a mountpoint.\n"\ 31 | "* `put_old` must be underneath `new_root`, that is, adding a nonzero number\n"\ 32 | " of /.. to the string pointed to by `put_old` must yield the same directory as\n"\ 33 | " `new_root`.\n"\ 34 | "* No other filesystem may be mounted on `put_old`.\n" 35 | 36 | static PyObject * 37 | pivot_root(PyObject *self, PyObject *args) { 38 | const char *put_old, *new_root; 39 | 40 | if (!PyArg_ParseTuple(args, "ss", &new_root, &put_old)) 41 | return NULL; 42 | 43 | if (syscall(SYS_pivot_root, new_root, put_old) == -1) { 44 | PyErr_SetFromErrno(PyExc_RuntimeError); 45 | return NULL; 46 | } else { 47 | Py_INCREF(Py_None); 48 | return Py_None; 49 | } 50 | } 51 | 52 | #define MOUNT_DOC ".. 
py:function:: mount(source, target, filesystemtype, mountflags, mountopts)\n"\ 53 | "\n"\ 54 | "mount filesystem\n"\ 55 | "\n"\ 56 | ":param str source: filesystem to attach (can be ``None``)\n"\ 57 | ":param str target: directory being attached to, or manipulated (in case of flag change)\n"\ 58 | ":param str filesystemtype: filesystem supported by the kernel (can be ``None``)\n"\ 59 | ":param int mountflags: any combination (using ``|``) of mount flags supported by mount(2).\n"\ 60 | " For the workshop you are most likely to use ``0`` (i.e. no flags), \n"\ 61 | " or a combination of: ``linux.MS_REC``, ``linux.MS_PRIVATE``\n"\ 62 | ":param str mountopts: options passed to the specified filesystem (can be ``None``)\n"\ 63 | ":return: None\n"\ 64 | ":raises RuntimeError: if mount fails\n"\ 65 | "\n" 66 | 67 | static PyObject * 68 | _mount(PyObject *self, PyObject *args) { 69 | const char *source, *target, *filesystemtype, *mountopts; 70 | unsigned long mountflags; 71 | 72 | if (!PyArg_ParseTuple(args, "zszkz", &source, &target, &filesystemtype, &mountflags, &mountopts)) { 73 | return NULL; 74 | } 75 | 76 | if (mount(source, target, filesystemtype, mountflags, mountopts) == -1) { 77 | PyErr_SetFromErrno(PyExc_RuntimeError); 78 | return NULL; 79 | } else { 80 | Py_INCREF(Py_None); 81 | return Py_None; 82 | } 83 | } 84 | 85 | #define UMOUNT_DOC ".. 
py:function:: umount(target)\n"\ 86 | "\n"\ 87 | "unmount filesystem\n"\ 88 | "\n"\ 89 | ":param str target: the (topmost) filesystem this directory is mounted on will be removed\n"\ 90 | ":return: None\n"\ 91 | ":raises RuntimeError: if umount fails\n"\ 92 | "\n" 93 | 94 | static PyObject * 95 | _umount(PyObject *self, PyObject *args) { 96 | const char *target; 97 | 98 | if (!PyArg_ParseTuple(args, "s", &target)) { 99 | return NULL; 100 | } 101 | 102 | if (umount(target) == -1) { 103 | PyErr_SetFromErrno(PyExc_RuntimeError); 104 | return NULL; 105 | } else { 106 | Py_INCREF(Py_None); 107 | return Py_None; 108 | } 109 | } 110 | 111 | #define UMOUNT2_DOC ".. py:function:: umount2(target, flags)\n"\ 112 | "\n"\ 113 | "unmount filesystem but allows additional `flags` controlling the behavior of the operation\n"\ 114 | "\n"\ 115 | ":param str target: the (topmost) filesystem this directory is mounted on will be removed\n"\ 116 | ":param int flags: control the behavior of the operation. You can combine multiple flags\n"\ 117 | " using ``|``. For the workshop you are most likely to use\n"\ 118 | " ``linux.MNT_DETACH``\n"\ 119 | ":return: None\n"\ 120 | ":raises RuntimeError: if umount2 fails\n"\ 121 | "\n" 122 | 123 | static PyObject * 124 | _umount2(PyObject *self, PyObject *args) { 125 | const char *target; 126 | int flags; 127 | 128 | if (!PyArg_ParseTuple(args, "si", &target, &flags)) { 129 | return NULL; 130 | } 131 | 132 | if (umount2(target, flags) == -1) { 133 | PyErr_SetFromErrno(PyExc_RuntimeError); 134 | return NULL; 135 | } else { 136 | Py_INCREF(Py_None); 137 | return Py_None; 138 | } 139 | } 140 | 141 | #define UNSHARE_DOC ".. py:function:: unshare(flags)\n"\ 142 | "\n"\ 143 | "disassociate parts of the process execution context\n"\ 144 | "\n"\ 145 | ":param int flags: which parts of the execution context should be unshared. You can\n"\ 146 | " combine multiple flags using ``|``. 
See below for flags you might want\n"\ 147 | " to use in this workshop\n"\ 148 | ":return: None\n"\ 149 | ":raises RuntimeError: if unshare fails\n"\ 150 | "\n"\ 151 | "Useful flags:\n"\ 152 | "\n"\ 153 | "* ``linux.CLONE_NEWNS`` - Unshare the mount namespace\n"\ 154 | "* ``linux.CLONE_NEWUTS`` - Unshare the UTS namespace (hostname, domainname, etc)\n"\ 155 | "* ``linux.CLONE_NEWNET`` - Unshare the network namespace\n"\ 156 | "* ``linux.CLONE_NEWPID`` - Unshare the PID namespace\n"\ 157 | 158 | static PyObject * 159 | _unshare(PyObject *self, PyObject *args) { 160 | int clone_flags; 161 | 162 | if (!PyArg_ParseTuple(args, "i", &clone_flags)) 163 | return NULL; 164 | 165 | if (unshare(clone_flags) == -1) { 166 | PyErr_SetFromErrno(PyExc_RuntimeError); 167 | return NULL; 168 | } else { 169 | Py_INCREF(Py_None); 170 | return Py_None; 171 | } 172 | } 173 | 174 | #define SETNS_DOC ".. py:function:: setns(fd, nstype)\n"\ 175 | "\n"\ 176 | "reassociate process with a namespace\n"\ 177 | "\n"\ 178 | ":param int fd: file descriptor referring to a namespace to associate with\n"\ 179 | ":param int nstype: one of the following: ``0`` (Allow any type of namespace to be joined),\n"\ 180 | " ``CLONE_NEWIPC`` (join IPC namespace), ``CLONE_NEWNET`` (join network \n"\ 181 | " namespace), or ``CLONE_NEWUTS`` (join UTS namespace)\n"\ 182 | ":return: None\n"\ 183 | ":raises RuntimeError: if setns fails\n"\ 184 | "\n"\ 185 | 186 | static PyObject * 187 | _setns(PyObject *self, PyObject *args) { 188 | int fd, nstype; 189 | 190 | if (!PyArg_ParseTuple(args, "ii", &fd, &nstype)) 191 | return NULL; 192 | 193 | if (setns(fd, nstype) == -1) { 194 | PyErr_SetFromErrno(PyExc_RuntimeError); 195 | return NULL; 196 | } else { 197 | Py_INCREF(Py_None); 198 | return Py_None; 199 | } 200 | } 201 | 202 | struct py_clone_args { 203 | PyObject *callback; 204 | PyObject *callback_args; 205 | }; 206 | 207 | static int clone_callback(void *args) { 208 | PyObject *result; 209 | struct py_clone_args 
*call_args = (struct py_clone_args *)args; 210 | 211 | if ((result = PyObject_CallObject(call_args->callback, call_args->callback_args)) == NULL) { 212 | PyErr_Print(); 213 | return -1; 214 | } else { 215 | Py_DECREF(result); 216 | } 217 | return 0; 218 | } 219 | 220 | #define CLONE_DOC ".. py:function:: clone(callback, flags, callback_args)\n"\ 221 | "\n"\ 222 | "create a child process\n"\ 223 | "\n"\ 224 | ":param Callable callback: python function to be executed by the forked child\n"\ 225 | ":param int flags: combination (using ``|``) of flags specifying what should be shared\n"\ 226 | " between the calling process and the child process. See below.\n"\ 227 | ":param tuple callback_args: tuple of arguments for the callback function\n"\ 228 | ":return: On success, the thread ID of the child process\n"\ 229 | ":raises RuntimeError: if clone fails\n"\ 230 | "\n"\ 231 | "\n"\ 232 | "Useful flags:\n"\ 233 | "\n"\ 234 | "* ``linux.CLONE_NEWNS`` - Unshare the mount namespace\n"\ 235 | "* ``linux.CLONE_NEWUTS`` - Unshare the UTS namespace (hostname, domainname, etc)\n"\ 236 | "* ``linux.CLONE_NEWNET`` - Unshare the network namespace\n"\ 237 | "* ``linux.CLONE_NEWPID`` - Unshare the PID namespace\n"\ 238 | 239 | static PyObject * 240 | _clone(PyObject *self, PyObject *args) { 241 | PyObject *callback, *callback_args; 242 | void *child_stack; 243 | int flags; 244 | pid_t child_pid; 245 | 246 | child_stack = malloc(STACK_SIZE); 247 | 248 | if (!PyArg_ParseTuple(args, "OiO", &callback, &flags, &callback_args)) 249 | return NULL; 250 | 251 | if (!PyCallable_Check(callback)) { 252 | PyErr_SetString(PyExc_TypeError, "parameter must be callable"); 253 | return NULL; 254 | } 255 | 256 | struct py_clone_args call_args; 257 | call_args.callback = callback; 258 | call_args.callback_args = callback_args; 259 | 260 | if ((child_pid = clone(&clone_callback, child_stack + STACK_SIZE, flags | SIGCHLD, &call_args)) == -1) { 261 | PyErr_SetFromErrno(PyExc_RuntimeError); 262 | return 
NULL; 263 | } else { 264 | return Py_BuildValue("i", child_pid); 265 | } 266 | } 267 | 268 | #define SETHOSTNAME_DOC ".. py:function:: sethostname(hostname)\n"\ 269 | "\n"\ 270 | "set the system hostname\n"\ 271 | "\n"\ 272 | ":param str hostname: new hostname value\n"\ 273 | ":return: None\n"\ 274 | ":raises RuntimeError: if sethostname fails\n"\ 275 | "\n"\ 276 | 277 | static PyObject * 278 | _sethostname(PyObject *self, PyObject *args) { 279 | const char *hostname; 280 | 281 | if (!PyArg_ParseTuple(args, "s", &hostname)) 282 | return NULL; 283 | 284 | if (sethostname(hostname, strlen(hostname)) == -1) { 285 | PyErr_SetFromErrno(PyExc_RuntimeError); 286 | return NULL; 287 | } 288 | 289 | Py_INCREF(Py_None); 290 | return Py_None; 291 | } 292 | 293 | static PyMethodDef LinuxMethods[] = { 294 | {"pivot_root", pivot_root, METH_VARARGS, PIVOT_ROOT_DOC}, 295 | {"unshare", _unshare, METH_VARARGS, UNSHARE_DOC}, 296 | {"setns", _setns, METH_VARARGS, SETNS_DOC}, 297 | {"clone", _clone, METH_VARARGS, CLONE_DOC}, 298 | {"sethostname", _sethostname, METH_VARARGS, SETHOSTNAME_DOC}, 299 | {"mount", _mount, METH_VARARGS, MOUNT_DOC}, 300 | {"umount", _umount, METH_VARARGS, UMOUNT_DOC}, 301 | {"umount2", _umount2, METH_VARARGS, UMOUNT2_DOC}, 302 | {NULL, NULL, 0, NULL} /* Sentinel */ 303 | }; 304 | 305 | static struct PyModuleDef linuxmodule = { 306 | PyModuleDef_HEAD_INIT, 307 | "linux", 308 | LINUX_MODULE_DOC, 309 | -1, 310 | LinuxMethods 311 | }; 312 | 313 | PyMODINIT_FUNC 314 | PyInit_linux(void) 315 | { 316 | PyObject *module = PyModule_Create(&linuxmodule); 317 | 318 | 319 | // clone constants 320 | PyModule_AddIntConstant(module, "CLONE_NEWNS", CLONE_NEWNS); // mount namespace 321 | PyModule_AddIntConstant(module, "CLONE_NEWUTS", CLONE_NEWUTS); // UTS (hostname) namespace 322 | PyModule_AddIntConstant(module, "CLONE_NEWPID", CLONE_NEWPID); // PID namespace 323 | PyModule_AddIntConstant(module, "CLONE_NEWUSER", CLONE_NEWUSER); // users namespace 324 | 
PyModule_AddIntConstant(module, "CLONE_NEWIPC", CLONE_NEWIPC); // IPC namespace 325 | PyModule_AddIntConstant(module, "CLONE_NEWNET", CLONE_NEWNET); // network namespace 326 | PyModule_AddIntConstant(module, "CLONE_THREAD", CLONE_THREAD); 327 | 328 | // mount constants 329 | PyModule_AddIntConstant(module, "MS_RDONLY", MS_RDONLY); /* Mount read-only. */ 330 | PyModule_AddIntConstant(module, "MS_NOSUID", MS_NOSUID); /* Ignore suid and sgid bits. */ 331 | PyModule_AddIntConstant(module, "MS_NODEV", MS_NODEV); /* Disallow access to device special files. */ 332 | PyModule_AddIntConstant(module, "MS_NOEXEC", MS_NOEXEC); /* Disallow program execution. */ 333 | PyModule_AddIntConstant(module, "MS_SYNCHRONOUS", MS_SYNCHRONOUS); /* Writes are synced at once. */ 334 | PyModule_AddIntConstant(module, "MS_REMOUNT", MS_REMOUNT); /* Alter flags of a mounted FS. */ 335 | PyModule_AddIntConstant(module, "MS_MANDLOCK", MS_MANDLOCK); /* Allow mandatory locks on an FS. */ 336 | PyModule_AddIntConstant(module, "MS_DIRSYNC", MS_DIRSYNC); /* Directory modifications are synchronous. */ 337 | PyModule_AddIntConstant(module, "MS_NOATIME", MS_NOATIME); /* Do not update access times. */ 338 | PyModule_AddIntConstant(module, "MS_NODIRATIME", MS_NODIRATIME); /* Do not update directory access times. */ 339 | PyModule_AddIntConstant(module, "MS_BIND", MS_BIND); /* Bind directory at different place. */ 340 | PyModule_AddIntConstant(module, "MS_MOVE", MS_MOVE); 341 | PyModule_AddIntConstant(module, "MS_REC", MS_REC); /* Recursive loopback */ 342 | PyModule_AddIntConstant(module, "MS_SILENT", MS_SILENT); 343 | PyModule_AddIntConstant(module, "MS_POSIXACL", MS_POSIXACL); /* VFS does not apply the umask. */ 344 | PyModule_AddIntConstant(module, "MS_UNBINDABLE", MS_UNBINDABLE); /* Change to unbindable. */ 345 | PyModule_AddIntConstant(module, "MS_PRIVATE", MS_PRIVATE); /* Change to private. */ 346 | PyModule_AddIntConstant(module, "MS_SLAVE", MS_SLAVE); /* Change to slave. 
*/ 347 | PyModule_AddIntConstant(module, "MS_SHARED", MS_SHARED); /* Change to shared. */ 348 | PyModule_AddIntConstant(module, "MS_RELATIME", MS_RELATIME); /* Update atime relative to mtime/ctime. */ 349 | PyModule_AddIntConstant(module, "MS_KERNMOUNT", MS_KERNMOUNT); /* This is a kern_mount call. */ 350 | PyModule_AddIntConstant(module, "MS_I_VERSION", MS_I_VERSION); /* Update inode I_version field. */ 351 | PyModule_AddIntConstant(module, "MS_STRICTATIME", MS_STRICTATIME); /* Always perform atime updates. */ 352 | PyModule_AddIntConstant(module, "MS_ACTIVE", MS_ACTIVE); 353 | PyModule_AddIntConstant(module, "MS_NOUSER", MS_NOUSER); 354 | PyModule_AddIntConstant(module, "MNT_DETACH", MNT_DETACH); /* Just detach from the tree. */ 355 | PyModule_AddIntConstant(module, "MS_MGC_VAL", MS_MGC_VAL); 356 | 357 | return module; 358 | } 359 | -------------------------------------------------------------------------------- /packer/bootstrap.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | function export_image() { 5 | image_name=$1 6 | export_name=$2 7 | shift; shift 8 | CONTAINER_ID=$(docker run -d $image_name "$@") 9 | docker wait $CONTAINER_ID 10 | docker export -o $export_name.tar $CONTAINER_ID 11 | docker rm $CONTAINER_ID 12 | } 13 | 14 | if [ $(id -u) -ne 0 ]; then 15 | echo "You must run this script as root. Attempting to sudo" 1>&2 16 | exec sudo -H -n bash $0 $@ 17 | fi 18 | 19 | # Wait for cloud-init 20 | sleep 10 21 | 22 | # Install packages 23 | export DEBIAN_FRONTEND=noninteractive 24 | export DEBIAN_PRIORITY=critical 25 | install -m 0755 -d /etc/apt/keyrings 26 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc 27 | chmod a+r /etc/apt/keyrings/docker.asc 28 | 29 | # Add the repository to Apt sources: 30 | echo \ 31 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ 32 | $(. 
/etc/os-release && echo "$VERSION_CODENAME") stable" | \ 33 | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 34 | apt update 35 | apt install -y docker-ce stress python3-dev build-essential cmake htop ipython3 python3-pip python3-click git 36 | 37 | # Include the memory and memsw cgroups for cgroups v1 on old images; not needed for ubuntu 24.04 cgroups v2 38 | # sed -i.bak 's|^kernel.*$|\0 cgroup_enable=memory swapaccount=1|' /boot/grub/menu.lst 39 | # sed -i -r 's|GRUB_CMDLINE_LINUX="(.*)"|GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1"|' /etc/default/grub 40 | # update-grub 41 | 42 | # Configure Docker to use overlayfs 43 | cat - > /etc/docker/daemon.json <<'EOF' 44 | { 45 | "storage-driver": "overlay2" 46 | } 47 | EOF 48 | # restart docker (to use overlay) 49 | systemctl restart docker 50 | 51 | usermod -G docker -a ubuntu 52 | 53 | # Clone git repo 54 | mkdir /workshop 55 | git clone https://github.com/Fewbytes/rubber-docker.git /workshop/rubber-docker 56 | 57 | # Fetch images 58 | mkdir -p /workshop/images 59 | pushd /workshop/images 60 | export_image ubuntu:noble ubuntu-export /bin/bash -c 'apt-get update && apt-get install -y python3 stress' 61 | export_image busybox busybox /bin/true 62 | cp /workshop/rubber-docker/levels/03_pivot_root/breakout.py ./ 63 | chmod +x breakout.py 64 | tar cf ubuntu.tar breakout.py 65 | tar Af ubuntu.tar ubuntu-export.tar 66 | rm breakout.py ubuntu-export.tar 67 | popd 68 | 69 | # On boot, pull the repo and build the C extension 70 | cat > /etc/rc.local <<'EOF' 71 | #!/bin/bash 72 | 73 | # Allow the git commands from rc.local on a root owned directory 74 | export HOME=/root 75 | git config --global --add safe.directory /workshop/rubber-docker 76 | 77 | # Pull latest version of rubber-docker, install requirements & build the C extension 78 | if [[ -d /workshop/rubber-docker ]]; then 79 | pushd /workshop/rubber-docker 80 | git pull && pip install --break-system-packages . 
81 | # [[ -f requirements.txt ]] && pip install -r requirements.txt 82 | popd 83 | fi 84 | 85 | # This will allow us to change rc.local stuff without regenerating the AMI 86 | /workshop/rubber-docker/packer/on_boot.sh 87 | 88 | EOF 89 | 90 | # Setup motd 91 | cat > /etc/motd <<'EOF' 92 | Welcome to the "Docker From Scratch" workshop! 93 | 94 | Workshop material is in /workshop 95 | Workshop code is checked out in /workshop/rubber-docker 96 | 97 | Hint: you probably want to work as root. 98 | 99 | Don't forget to have fun and break things :) 100 | EOF 101 | 102 | # setup vim 103 | sudo -H -u ubuntu bash -e <<'EOS' 104 | cd ~ 105 | git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim 106 | cp /tmp/vimrc ~/.vimrc 107 | echo "Installing plugins using Vundle" 108 | echo | echo | vim +PluginInstall +qall &>/dev/null 109 | echo "Vundle done" 110 | python3 ~/.vim/bundle/YouCompleteMe/install.py 111 | EOS -------------------------------------------------------------------------------- /packer/on_boot.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | mkdir -p /workshop/containers 4 | 5 | chown ubuntu:ubuntu -R /workshop 6 | 7 | if [[ $(cat /proc/swaps | wc -l) -le 1 && ! 
-f /swap.img ]]; then 8 | dd if=/dev/zero of=/swap.img count=1024 bs=1M 9 | chmod 0600 /swap.img 10 | mkswap /swap.img 11 | swapon /swap.img 12 | fi 13 | 14 | -------------------------------------------------------------------------------- /packer/rubber-docker.json: -------------------------------------------------------------------------------- 1 | { 2 | "variables": { 3 | "aws_access_key": "{{ env `AWS_ACCESS_KEY_ID` }}", 4 | "aws_secret_key": "{{ env `AWS_SECRET_ACCESS_KEY` }}", 5 | "aws_region": "eu-central-1", 6 | "source_ami": "ami-0e872aee57663ae2d" 7 | }, 8 | "builders": [ 9 | { 10 | "type": "amazon-ebs", 11 | "access_key": "{{ user `aws_access_key` }}", 12 | "secret_key": "{{ user `aws_secret_key` }}", 13 | "region": "{{ user `aws_region` }}", 14 | "source_ami": "{{ user `source_ami` }}", 15 | "instance_type": "t2.medium", 16 | "associate_public_ip_address": true, 17 | "subnet_id": "{{ user `subnet_id` }}", 18 | "ssh_username": "ubuntu", 19 | "ami_name": "rubber-docker-{{timestamp}}", 20 | "ami_groups": [ 21 | "all" 22 | ], 23 | "ami_regions": [ 24 | "il-central-1", 25 | "us-east-1", 26 | "us-west-1" 27 | ] 28 | } 29 | ], 30 | "provisioners": [ 31 | { 32 | "type": "file", 33 | "source": "vimrc", 34 | "destination": "/tmp/vimrc" 35 | }, 36 | { 37 | "type": "shell", 38 | "script": "bootstrap.sh" 39 | } 40 | ] 41 | } 42 | -------------------------------------------------------------------------------- /packer/vimrc: -------------------------------------------------------------------------------- 1 | set nocompatible " be iMproved, required 2 | filetype plugin indent on " required 3 | 4 | " set the runtime path to include Vundle and initialize 5 | set rtp+=~/.vim/bundle/Vundle.vim 6 | call vundle#begin() 7 | " alternatively, pass a path where Vundle should install plugins 8 | "call vundle#begin('~/some/path/here') 9 | 10 | " let Vundle manage Vundle, required 11 | Plugin 'VundleVim/Vundle.vim' 12 | 13 | " The following are examples of different formats 
supported. 14 | " Keep Plugin commands between vundle#begin/end. 15 | " plugin on GitHub repo 16 | Plugin 'valloric/YouCompleteMe' 17 | Plugin 'vim-scripts/indentpython.vim' 18 | 19 | " All of your Plugins must be added before the following line 20 | call vundle#end() " required 21 | filetype plugin indent on " required 22 | " To ignore plugin indent changes, instead use: 23 | "filetype plugin on 24 | " 25 | " Brief help 26 | " :PluginList - lists configured plugins 27 | " :PluginInstall - installs plugins; append `!` to update or just :PluginUpdate 28 | " :PluginSearch foo - searches for foo; append `!` to refresh local cache 29 | " :PluginClean - confirms removal of unused plugins; append `!` to auto-approve removal 30 | " 31 | " see :h vundle for more details or wiki for FAQ 32 | " Put your non-Plugin stuff after this line 33 | set bg=dark 34 | syntax on 35 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["setuptools >= 61.0"] 3 | build-backend = "setuptools.build_meta" 4 | 5 | [project] 6 | name = "rubber-docker" 7 | version = "1.0.0" 8 | dependencies = [ 9 | "click", 10 | ] 11 | requires-python = ">=3.8" 12 | authors = [ 13 | {name = "Avishai Ish-Shalom", email = "avishai.ishshalom@gmail.com"}, 14 | {name = "Nati Cohen", email = "nocoot@gmail.com"}, 15 | ] 16 | maintainers = [ 17 | {name = "Avishai Ish-Shalom", email = "avishai.ishshalom@gmail.com"}, 18 | {name = "Nati Cohen", email = "nocoot@gmail.com"}, 19 | ] 20 | description = "Docker from scratch" 21 | readme = "README.md" 22 | license = {file = "LICENSE"} 23 | keywords = ["docker", "containers"] 24 | classifiers = [ 25 | "Development Status :: 4 - Beta", 26 | "Programming Language :: Python" 27 | ] 28 | 29 | [project.urls] 30 | Homepage = "https://github.com/Fewbytes/rubber-docker" 31 | Documentation = 
"hhttps://github.com/Fewbytes/rubber-docker" 32 | Repository = "https://github.com/Fewbytes/rubber-docker.git" 33 | "Bug Tracker" = "https://github.com/Fewbytes/rubber-docker/issues" 34 | 35 | [tool.setuptools] 36 | packages = ["linux"] 37 | package-dir = {"linux" = "."} -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | click -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup, Extension 2 | 3 | setup(ext_modules=[Extension("linux", sources=["linux.c"])]) 4 | -------------------------------------------------------------------------------- /slides/images/thats-all-folks.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Fewbytes/rubber-docker/2b3fca3ae4e6de1d3288edbdf3935399cb3c324c/slides/images/thats-all-folks.jpg -------------------------------------------------------------------------------- /slides/images/there-is-no-container.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Fewbytes/rubber-docker/2b3fca3ae4e6de1d3288edbdf3935399cb3c324c/slides/images/there-is-no-container.jpg -------------------------------------------------------------------------------- /slides/workshop.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Containers from Scratch workshop 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 33 | 34 | 39 | 40 | 43 | 44 | 45 | 46 | 47 |
48 | 49 | 50 |
51 |
52 |

Containers from Scratch

53 |

Nati Cohen (@nocoot)
Avishai Ish-Shalom (@nukemberg)

54 |
55 | 56 |
57 |
58 |

Workshop goal

59 |

Understand how containers work in terms of kernel constructs: 60 |

    61 |
  • Namespaces
  • 62 |
  • CGroups
  • 63 |
  • CoW filesystem
  • 64 |
65 |

66 |
67 |
68 |

"Container" is not a kernel primitive

69 | 70 |

It is a combination of various kernel mechanisms

71 |
72 |
73 |

Docker high-level architecture

74 |
75 | 76 |
77 |

We're building part of libcontainer + daemon + cli

78 |
79 |
80 |
81 |
82 |

Let's build!

83 |
84 |

Workshop material is in /workshop

85 |

86 |

 87 | 									cd /workshop/rubber-docker/levels/00_fork_exec
 88 | 								
89 | Try to run rd.py 90 |
 91 | 									python rd.py run /bin/echo "Hello Docker"
 92 | 								
93 |

94 |
95 |
96 |
97 |

fork & exec a process

98 |
99 |
100 |
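Level 00 is just this: fork, exec in the child, wait in the parent. A minimal sketch of what `rd.py run` boils down to (the `run` function name here is illustrative, not necessarily the workshop's):

```python
import os
import sys

def run(cmd_args):
    """Fork, exec the command in the child, wait for it in the parent."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested command.
        try:
            os.execvp(cmd_args[0], cmd_args)
        except OSError:
            sys.exit(127)  # conventional "command not found" exit status
    # Parent: reap the child and report how it exited.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    run(["/bin/echo", "Hello Docker"])
```

Everything that follows in the workshop is refinements of this loop: isolate the child a bit more at each level before the `execvp`.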

chroot

101 |

Maybe unpack an image too?

102 |
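A sketch of the chroot level, assuming an image already extracted into a rootfs directory (`./rootfs` here is a hypothetical path); `chroot(2)` needs root:

```python
import os

def contain(rootfs, cmd_args):
    """chroot into the extracted image rootfs, then exec (needs root)."""
    os.chroot(rootfs)
    os.chdir("/")  # don't leave the cwd pointing outside the new root
    os.execvp(cmd_args[0], cmd_args)

if __name__ == "__main__" and os.geteuid() == 0:
    pid = os.fork()
    if pid == 0:
        contain("./rootfs", ["/bin/sh"])
    os.waitpid(pid, 0)
```

Note the `os.chdir("/")`: without it the process keeps a working directory outside the chroot, which is one of the classic breakout vectors (see `breakout.py` in level 03).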
103 |
104 |
105 |
106 |

Namespaces

107 |

The namespaces API:

108 |
    109 |
  • clone
  • 110 |
  • unshare
  • 111 |
  • setns
  • 112 |
113 |
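The workshop ships a small `linux` C extension wrapping these syscalls; as a self-contained stand-in, `unshare` and `setns` can be reached through `ctypes` (flag values copied from `<sched.h>`):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

# Namespace flags from <sched.h>
CLONE_NEWNS  = 0x00020000  # mount namespace
CLONE_NEWUTS = 0x04000000  # UTS (hostname) namespace
CLONE_NEWPID = 0x20000000  # PID namespace

def unshare(flags):
    """Move the calling process into new namespaces (needs root)."""
    if libc.unshare(flags) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

def setns(fd, nstype=0):
    """Join an existing namespace via an fd on /proc/<pid>/ns/*."""
    if libc.setns(fd, nstype) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
```

`clone` is the third entry point: it creates the new namespaces and the new process in one call, whereas `unshare` changes the calling process and `setns` joins a namespace that already exists.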

114 |
115 |
116 |

The mount namespace

117 |
118 |
119 |

pivot_root

120 |

121 |

    122 |
  • Don't forget to recursively remount the original root filesystem as private (why?)
  • 123 |
  • Finally, remove the old root directory
  • 124 |
125 |

126 |

Doesn't work? new_root must be a mount!

127 |
128 |
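The bullet points above can be sketched like this via `ctypes` (needs root, and assumes the new root is already a mount point; `pivot_root(2)` has no glibc wrapper, and 155 is its x86_64 syscall number — an assumption about the target architecture):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

# Mount flags from <sys/mount.h>
MS_REC     = 0x4000
MS_PRIVATE = 1 << 18
MNT_DETACH = 2

def pivot_into(new_root):
    """pivot_root into new_root, then detach and remove the old root."""
    # Recursively make the old root private so our later umounts
    # don't propagate back to the host's mount namespace.
    libc.mount(b"none", b"/", None, MS_REC | MS_PRIVATE, None)
    old_root = os.path.join(new_root, "old_root")
    os.makedirs(old_root, exist_ok=True)
    # 155 == SYS_pivot_root on x86_64.
    if libc.syscall(155, new_root.encode(), old_root.encode()) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    os.chdir("/")
    # Lazily detach the old root and remove its mount point.
    libc.umount2(b"/old_root", MNT_DETACH)
    os.rmdir("/old_root")
```

The private remount answers the "(why?)": with shared propagation, unmounting the old root inside the container would unmount it on the host too.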
129 |

CoW FTW!

130 |

131 |

Mount an overlay filesystem as the rootfs

132 |
    133 |
  • Use the extracted image rootfs dir as lowerdir
  • 134 |
  • Create an upperdir and workdir per container
  • 135 |
136 |

137 |

Now we can pivot_root into the CoW mount

138 |
139 |
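A sketch of the overlay setup (needs root; the `cow_rw`/`cow_work`/`rootfs` directory names are illustrative, not prescribed):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

def overlay_options(lowerdir, upperdir, workdir):
    """Build the overlayfs mount options string."""
    return f"lowerdir={lowerdir},upperdir={upperdir},workdir={workdir}"

def create_container_root(image_root, container_dir):
    """Mount an overlayfs: shared read-only image as lowerdir,
    per-container upperdir (writes) and workdir (overlayfs scratch)."""
    upperdir = os.path.join(container_dir, "cow_rw")
    workdir = os.path.join(container_dir, "cow_work")
    rootfs = os.path.join(container_dir, "rootfs")
    for d in (upperdir, workdir, rootfs):
        os.makedirs(d, exist_ok=True)
    opts = overlay_options(image_root, upperdir, workdir)
    if libc.mount(b"overlay", rootfs.encode(), b"overlay", 0, opts.encode()) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return rootfs
```

Because the result is a mount point, it also satisfies the pivot_root requirement from the previous level: the returned `rootfs` can be pivoted into directly.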
140 |

UTS namespace and hostname

141 |
142 |
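A sketch of the UTS level (needs root): unshare into a new UTS namespace first, so the hostname change is invisible to the host and to other containers.

```python
import ctypes
import os
import socket

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWUTS = 0x04000000  # from <sched.h>

def isolated_hostname(name):
    """Enter a fresh UTS namespace, then set the hostname inside it."""
    if libc.unshare(CLONE_NEWUTS) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    socket.sethostname(name)

if __name__ == "__main__" and os.geteuid() == 0:
    isolated_hostname("rubber-docker")
    print(socket.gethostname())
```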
143 |

PID namespace

144 |

unshare CLONE_NEWPID before forking a new process

145 |

Can also be done via clone

146 |
147 |
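The unshare-then-fork order matters because `unshare(CLONE_NEWPID)` only affects *children* of the caller, not the caller itself. A sketch (needs root):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWPID = 0x20000000  # from <sched.h>

def run_in_pid_ns(cmd_args):
    """unshare the PID namespace, then fork: the child becomes PID 1."""
    if libc.unshare(CLONE_NEWPID) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    pid = os.fork()
    if pid == 0:
        # In here os.getpid() returns 1; remount /proc so ps agrees.
        os.execvp(cmd_args[0], cmd_args)
    os.waitpid(pid, 0)
```

The `clone` variant folds both steps into one call by passing `CLONE_NEWPID` in the clone flags.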
148 |

Network namespace

149 |
150 |
151 |
152 |
153 |

CGroups

154 |
155 |
156 |

The CPU controller

157 |

Open a new cpu cgroup for each container. Set cpu.shares

158 |

Questions

159 |
    160 |
  • Are processes always throttled?
  • 161 |
  • How much is a share?
  • 162 |
  • How does this limit relate to scheduling priorities and classes?
  • 163 |
164 |

165 |
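A cgroup-v1 sketch of the CPU level (needs root; the `rubber_docker` hierarchy path is a hypothetical choice, and assumes the v1 `cpu` controller is mounted at the usual location):

```python
import os

CGROUP_BASE = "/sys/fs/cgroup/cpu/rubber_docker"  # hypothetical hierarchy path

def cpu_cgroup_dir(container_id):
    """Per-container cgroup directory under the cpu controller."""
    return os.path.join(CGROUP_BASE, container_id)

def assign_cpu_shares(container_id, shares):
    """Create a cpu cgroup, set cpu.shares, and move this process into it.
    Shares are relative weights (default 1024), so processes are only
    throttled when CPUs are contended -- one answer to the questions above."""
    cgdir = cpu_cgroup_dir(container_id)
    os.makedirs(cgdir, exist_ok=True)
    with open(os.path.join(cgdir, "cpu.shares"), "w") as f:
        f.write(str(shares))
    # Writing our pid to tasks moves us (and future children) into the cgroup.
    with open(os.path.join(cgdir, "tasks"), "w") as f:
        f.write(str(os.getpid()))
```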
166 |
167 |

The Memory controller

168 |

Open a new memory cgroup for each container. Configure the following:

169 |
    170 |
  • memory.limit_in_bytes
  • 171 |
  • memory.memsw.limit_in_bytes
  • 172 |
  • memory.kmem.limit_in_bytes
  • 173 |
  • memory.oom_control
  • 174 |
175 |

Run stress and observe how the container behaves with(out) the OOM killer

176 |
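A cgroup-v1 sketch covering the knobs listed above (needs root; same hypothetical `rubber_docker` hierarchy as the CPU level). Order matters: `memory.limit_in_bytes` must be set before `memory.memsw.limit_in_bytes`, since memsw (memory + swap) may not be lower than the memory limit.

```python
import os

MEMORY_BASE = "/sys/fs/cgroup/memory/rubber_docker"  # hypothetical hierarchy path

def setup_memory_cgroup(container_id, limit_bytes, disable_oom_killer=False):
    """Create a per-container memory cgroup and configure its limits."""
    cgdir = os.path.join(MEMORY_BASE, container_id)
    os.makedirs(cgdir, exist_ok=True)
    settings = {
        "memory.limit_in_bytes": str(limit_bytes),
        # memsw = memory + swap; without it the container just swaps
        # past the memory limit instead of hitting it.
        "memory.memsw.limit_in_bytes": str(limit_bytes),
        "memory.kmem.limit_in_bytes": str(limit_bytes),
        # Writing 1 disables the OOM killer: over-limit processes
        # block on allocation instead of being killed.
        "memory.oom_control": "1" if disable_oom_killer else "0",
    }
    for knob, value in settings.items():
        with open(os.path.join(cgdir, knob), "w") as f:
            f.write(value)
    with open(os.path.join(cgdir, "tasks"), "w") as f:
        f.write(str(os.getpid()))
    return cgdir
```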

177 |
178 |
179 |
180 |
181 |

Bonus round!

182 |
183 |
184 |

exec

185 |

docker exec like functionality

186 |

Write the pid to a file, then use its namespace references with setns

187 |
188 |
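A sketch of the exec bonus (needs root): the files under `/proc/<pid>/ns/` are handles to a live container's namespaces, and `setns` joins them one by one. The function names are illustrative.

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

def ns_paths(pid, namespaces=("uts", "pid", "mnt")):
    """Namespace handle files for a running container's init pid."""
    return [f"/proc/{pid}/ns/{ns}" for ns in namespaces]

def enter(pid, cmd_args):
    """docker exec-style: join the target's namespaces, then fork+exec.
    Join mnt last: afterwards the host's /proc may no longer be visible.
    Joining a pid namespace only affects children, hence the fork."""
    for path in ns_paths(pid):
        fd = os.open(path, os.O_RDONLY)
        if libc.setns(fd, 0) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
        os.close(fd)
    child = os.fork()
    if child == 0:
        os.execvp(cmd_args[0], cmd_args)
    os.waitpid(child, 0)
```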
189 |

run -d (background)

190 |
191 |
192 |

Capabilities

193 |
194 |
195 |

Volumes

196 |

mount bind

197 |
198 |
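A bind-mount sketch for volumes (needs root): done against the container rootfs *before* pivot_root, so the host directory shows up inside the container. The same inodes appear in both trees; nothing is copied.

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
MS_BIND = 4096  # from <sys/mount.h>

def bind_volume(host_dir, rootfs, container_path):
    """Bind-mount host_dir at container_path inside the container rootfs."""
    target = os.path.join(rootfs, container_path.lstrip("/"))
    os.makedirs(target, exist_ok=True)
    if libc.mount(host_dir.encode(), target.encode(), None, MS_BIND, None) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
```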
199 |

Devices whitelist controller

200 |
201 |
202 |

Internal NIC & port mappings

203 |
204 |
205 |

setuid

206 |
207 |
208 |
209 | 210 |

Workshop materials: github.com/Fewbytes/rubber-docker
211 | Preparation talk slides: Google docs

212 |
213 |
214 | 215 |
216 | 217 | 218 | 219 | 243 | 244 | 245 | 246 | --------------------------------------------------------------------------------