├── .dockerignore ├── .gitignore ├── CHANGES.md ├── CONTRIBUTING.md ├── Dockerfile ├── LICENSE ├── MAINTAINERS ├── MANIFEST.in ├── README.md ├── ROADMAP.md ├── SWARM.md ├── bin └── docker-compose ├── compose ├── __init__.py ├── cli │ ├── __init__.py │ ├── colors.py │ ├── command.py │ ├── docker_client.py │ ├── docopt_command.py │ ├── errors.py │ ├── formatter.py │ ├── log_printer.py │ ├── main.py │ ├── multiplexer.py │ ├── utils.py │ └── verbose_proxy.py ├── config.py ├── container.py ├── progress_stream.py ├── project.py └── service.py ├── contrib └── completion │ └── bash │ └── docker-compose ├── docs ├── Dockerfile ├── cli.md ├── completion.md ├── django.md ├── env.md ├── index.md ├── install.md ├── mkdocs.yml ├── rails.md ├── wordpress.md └── yml.md ├── requirements-dev.txt ├── requirements.txt ├── script ├── .validate ├── build-linux ├── build-linux-inner ├── build-osx ├── ci ├── clean ├── dev ├── dind ├── docs ├── shell ├── test ├── test-versions ├── validate-dco └── wrapdocker ├── setup.py ├── tests ├── __init__.py ├── fixtures │ ├── UpperCaseDir │ │ └── docker-compose.yml │ ├── build-ctx │ │ └── Dockerfile │ ├── build-path │ │ └── docker-compose.yml │ ├── commands-composefile │ │ └── docker-compose.yml │ ├── dockerfile-with-volume │ │ └── Dockerfile │ ├── dockerfile_with_entrypoint │ │ ├── Dockerfile │ │ └── docker-compose.yml │ ├── env-file │ │ ├── docker-compose.yml │ │ └── test.env │ ├── env │ │ ├── one.env │ │ ├── resolve.env │ │ └── two.env │ ├── environment-composefile │ │ └── docker-compose.yml │ ├── extends │ │ ├── circle-1.yml │ │ ├── circle-2.yml │ │ ├── common.yml │ │ ├── docker-compose.yml │ │ ├── nested-intermediate.yml │ │ └── nested.yml │ ├── links-composefile │ │ └── docker-compose.yml │ ├── longer-filename-composefile │ │ └── docker-compose.yaml │ ├── multiple-composefiles │ │ ├── compose2.yml │ │ └── docker-compose.yml │ ├── no-composefile │ │ └── .gitignore │ ├── ports-composefile │ │ └── docker-compose.yml │ ├── simple-composefile │ │ 
└── docker-compose.yml │ ├── simple-dockerfile │ │ ├── Dockerfile │ │ └── docker-compose.yml │ ├── user-composefile │ │ └── docker-compose.yml │ └── volume-path │ │ ├── common │ │ └── services.yml │ │ └── docker-compose.yml ├── integration │ ├── __init__.py │ ├── cli_test.py │ ├── project_test.py │ ├── service_test.py │ └── testcases.py └── unit │ ├── __init__.py │ ├── cli │ ├── __init__.py │ ├── docker_client_test.py │ └── verbose_proxy_test.py │ ├── cli_test.py │ ├── config_test.py │ ├── container_test.py │ ├── log_printer_test.py │ ├── progress_stream_test.py │ ├── project_test.py │ ├── service_test.py │ ├── sort_service_test.py │ └── split_buffer_test.py ├── tox.ini └── wercker.yml /.dockerignore: -------------------------------------------------------------------------------- 1 | .git 2 | venv 3 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.egg-info 2 | *.pyc 3 | .tox 4 | /build 5 | /dist 6 | /docs/_site 7 | /venv 8 | docker-compose.spec 9 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Compose 2 | 3 | Compose is a part of the Docker project, and follows the same rules and principles. Take a read of [Docker's contributing guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md) to get an overview. 
4 | 5 | ## TL;DR 6 | 7 | Pull requests will need: 8 | 9 | - Tests 10 | - Documentation 11 | - [To be signed off](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work) 12 | - A logical series of [well written commits](https://github.com/alphagov/styleguides/blob/master/git.md) 13 | 14 | ## Development environment 15 | 16 | If you're looking to contribute to Compose 17 | but you're new to the project or maybe even to Python, here are the steps 18 | that should get you started. 19 | 20 | 1. Fork [https://github.com/docker/compose](https://github.com/docker/compose) to your username. 21 | 1. Clone your forked repository locally `git clone git@github.com:yourusername/compose.git`. 22 | 1. Enter the local directory `cd compose`. 23 | 1. Set up a development environment by running `python setup.py develop`. This will install the dependencies and set up a symlink from your `docker-compose` executable to the checkout of the repository. When you now run `docker-compose` from anywhere on your machine, it will run your development version of Compose. 24 | 25 | ## Running the test suite 26 | 27 | Use the test script to run linting checks and then the full test suite: 28 | 29 | $ script/test 30 | 31 | Tests are run against a Docker daemon inside a container, so that we can test against multiple Docker versions.
By default they'll run against only the latest Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run against all supported versions: 32 | 33 | $ DOCKER_VERSIONS=all script/test 34 | 35 | Arguments to `script/test` are passed through to the `nosetests` executable, so you can specify a test directory, file, module, class or method: 36 | 37 | $ script/test tests/unit 38 | $ script/test tests/unit/cli_test.py 39 | $ script/test tests.integration.service_test 40 | $ script/test tests.integration.service_test:ServiceTest.test_containers 41 | 42 | ## Building binaries 43 | 44 | Linux: 45 | 46 | $ script/build-linux 47 | 48 | OS X: 49 | 50 | $ script/build-osx 51 | 52 | Note that this only works on Mountain Lion, not Mavericks, due to a [bug in PyInstaller](http://www.pyinstaller.org/ticket/807). 53 | 54 | ## Release process 55 | 56 | 1. Open pull request that: 57 | 58 | - Updates the version in `compose/__init__.py` 59 | - Updates the binary URL in `docs/install.md` 60 | - Updates the script URL in `docs/completion.md` 61 | - Adds release notes to `CHANGES.md` 62 | 63 | 2. Create unpublished GitHub release with release notes 64 | 65 | 3. Build Linux version on any Docker host with `script/build-linux` and attach to release 66 | 67 | 4. Build OS X version on Mountain Lion with `script/build-osx` and attach to release as `docker-compose-Darwin-x86_64` and `docker-compose-Linux-x86_64`. 68 | 69 | 5. Publish GitHub release, creating tag 70 | 71 | 6. Update website with `script/deploy-docs` 72 | 73 | 7. 
Upload PyPi package 74 | 75 | $ git checkout $VERSION 76 | $ python setup.py sdist upload 77 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM resin/rpi-raspbian:wheezy 2 | 3 | RUN set -ex; \ 4 | apt-get update -qq; \ 5 | apt-get install -y \ 6 | python \ 7 | python-pip \ 8 | python-dev \ 9 | git \ 10 | apt-transport-https \ 11 | ca-certificates \ 12 | curl \ 13 | lxc \ 14 | iptables \ 15 | ; \ 16 | rm -rf /var/lib/apt/lists/* 17 | 18 | ENV ALL_DOCKER_VERSIONS 1.6.0 19 | 20 | RUN set -ex; \ 21 | curl -L http://assets.hypriot.com/docker-hypriot_1.6.0-1_armhf.deb -o docker-hypriot_1.6.0-1_armhf.deb; \ 22 | dpkg -x docker-hypriot_1.6.0-1_armhf.deb /tmp/docker || true; \ 23 | mv /tmp/docker/usr/bin/docker /usr/local/bin/docker; \ 24 | rm -rf /tmp/docker 25 | 26 | RUN useradd -d /home/user -m -s /bin/bash user 27 | WORKDIR /code/ 28 | 29 | ADD requirements.txt /code/ 30 | RUN pip install -r requirements.txt 31 | 32 | ADD requirements-dev.txt /code/ 33 | RUN pip install -r requirements-dev.txt 34 | 35 | RUN apt-get install -qy wget && \ 36 | cd /tmp && \ 37 | wget -q https://pypi.python.org/packages/source/P/PyInstaller/PyInstaller-2.1.tar.gz && \ 38 | tar xzf PyInstaller-2.1.tar.gz && \ 39 | cd PyInstaller-2.1/bootloader && \ 40 | python ./waf configure --no-lsb build install && \ 41 | ln -s /tmp/PyInstaller-2.1/PyInstaller/bootloader/Linux-32bit-arm /usr/local/lib/python2.7/dist-packages/PyInstaller/bootloader/Linux-32bit-arm 42 | 43 | ADD . 
/code/ 44 | RUN python setup.py install 45 | 46 | RUN chown -R user /code/ 47 | 48 | ENTRYPOINT ["/usr/local/bin/docker-compose"] 49 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 
35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | Copyright 2014 Docker, Inc. 180 | 181 | Licensed under the Apache License, Version 2.0 (the "License"); 182 | you may not use this file except in compliance with the License. 
183 | You may obtain a copy of the License at 184 | 185 | http://www.apache.org/licenses/LICENSE-2.0 186 | 187 | Unless required by applicable law or agreed to in writing, software 188 | distributed under the License is distributed on an "AS IS" BASIS, 189 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 190 | See the License for the specific language governing permissions and 191 | limitations under the License. 192 | -------------------------------------------------------------------------------- /MAINTAINERS: -------------------------------------------------------------------------------- 1 | Aanand Prasad (@aanand) 2 | Ben Firshman (@bfirsh) 3 | Daniel Nephin (@dnephin) 4 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include Dockerfile 2 | include LICENSE 3 | include requirements.txt 4 | include requirements-dev.txt 5 | include tox.ini 6 | include *.md 7 | include contrib/completion/bash/docker-compose 8 | recursive-include tests * 9 | global-exclude *.pyc 10 | global-exclude *.pyo 11 | global-exclude *.un~ 12 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Docker Compose 2 | ============== 3 | *(Previously known as Fig)* 4 | 5 | Compose is a tool for defining and running complex applications with Docker. 6 | With Compose, you define a multi-container application in a single file, then 7 | spin your application up in a single command which does everything that needs to 8 | be done to get it running. 9 | 10 | Compose is great for development environments, staging servers, and CI. We don't 11 | recommend that you use it in production yet. 12 | 13 | Using Compose is basically a three-step process. 
14 | 15 | First, you define your app's environment with a `Dockerfile` so it can be 16 | reproduced anywhere: 17 | 18 | ```Dockerfile 19 | FROM hypriot/rpi-python:latest 20 | WORKDIR /code 21 | ADD requirements.txt /code/ 22 | RUN pip install -r requirements.txt 23 | ADD . /code 24 | CMD python app.py 25 | ``` 26 | 27 | Next, you define the services that make up your app in `docker-compose.yml` so 28 | they can be run together in an isolated environment: 29 | 30 | ```yaml 31 | web: 32 | build: . 33 | links: 34 | - db 35 | ports: 36 | - "8000:8000" 37 | db: 38 | image: hypriot/rpi-redis 39 | ``` 40 | 41 | Lastly, run `docker-compose up` and Compose will start and run your entire app. 42 | 43 | Compose has commands for managing the whole lifecycle of your application: 44 | 45 | * Start, stop and rebuild services 46 | * View the status of running services 47 | * Stream the log output of running services 48 | * Run a one-off command on a service 49 | 50 | Installation and documentation 51 | ------------------------------ 52 | 53 | **Update** Newer versions are built with the [hypriot/arm-compose](https://github.com/hypriot/arm-compose) repo and uploaded to Hypriot's Schatzkiste. Please have a look at the [installation steps](https://github.com/hypriot/arm-compose#installation) over there. 54 | 55 | - Full documentation is available on [Docker's website](http://docs.docker.com/compose/). 56 | - Hop into #docker-compose on Freenode if you have any questions. 57 | 58 | Contributing 59 | ------------ 60 | 61 | [![Build Status](http://jenkins.dockerproject.com/buildStatus/icon?job=Compose Master)](http://jenkins.dockerproject.com/job/Compose%20Master/) 62 | 63 | Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md). 
64 | -------------------------------------------------------------------------------- /ROADMAP.md: -------------------------------------------------------------------------------- 1 | # Roadmap 2 | 3 | ## More than just development environments 4 | 5 | Over time we will extend Compose's remit to cover test, staging and production environments. This is not a simple task, and will take many incremental improvements such as: 6 | 7 | - Compose’s brute-force “delete and recreate everything” approach is great for dev and testing, but it is not sufficient for production environments. You should be able to define a "desired" state that Compose will intelligently converge to. 8 | - It should be possible to partially modify the config file for different environments (dev/test/staging/prod), passing in e.g. custom ports or volume mount paths. ([#426](https://github.com/docker/fig/issues/426)) 9 | - Compose should recommend a technique for zero-downtime deploys. 10 | 11 | ## Integration with Swarm 12 | 13 | Compose should integrate really well with Swarm so you can take an application you've developed on your laptop and run it on a Swarm cluster. 14 | 15 | The current state of integration is documented in [SWARM.md](SWARM.md). 16 | 17 | ## Applications spanning multiple teams 18 | 19 | Compose works well for applications that are in a single repository and depend on services that are hosted on Docker Hub. If your application depends on another application within your organisation, Compose doesn't work as well. 20 | 21 | There are several ideas about how this could work, such as [including external files](https://github.com/docker/fig/issues/318). 22 | 23 | ## An even better tool for development environments 24 | 25 | Compose is a great tool for development environments, but it could be even better.
For example: 26 | 27 | - [Compose could watch your code and automatically kick off builds when something changes.](https://github.com/docker/fig/issues/184) 28 | - It should be possible to define hostnames for containers which work from the host machine, e.g. “mywebcontainer.local”. This is needed by apps comprising multiple web services which generate links to one another (e.g. a frontend website and a separate admin webapp) 29 | -------------------------------------------------------------------------------- /SWARM.md: -------------------------------------------------------------------------------- 1 | Docker Compose/Swarm integration 2 | ================================ 3 | 4 | Eventually, Compose and Swarm aim to have full integration, meaning you can point a Compose app at a Swarm cluster and have it all just work as if you were using a single Docker host. 5 | 6 | However, the current extent of integration is minimal: Compose can create containers on a Swarm cluster, but the majority of Compose apps won’t work out of the box unless all containers are scheduled on one host, defeating much of the purpose of using Swarm in the first place. 7 | 8 | Still, Compose and Swarm can be useful in a “batch processing” scenario (where a large number of containers need to be spun up and down to do independent computation) or a “shared cluster” scenario (where multiple teams want to deploy apps on a cluster without worrying about where to put them). 9 | 10 | A number of things need to happen before full integration is achieved, which are documented below. 11 | 12 | Re-deploying containers with `docker-compose up` 13 | ------------------------------------------------ 14 | 15 | Repeated invocations of `docker-compose up` will not work reliably when used against a Swarm cluster because of an under-the-hood design problem; [this will be fixed](https://github.com/docker/fig/pull/972) in the next version of Compose. 
For now, containers must be completely removed and re-created: 16 | 17 | $ docker-compose kill 18 | $ docker-compose rm --force 19 | $ docker-compose up 20 | 21 | Links and networking 22 | -------------------- 23 | 24 | The primary thing stopping multi-container apps from working seamlessly on Swarm is getting them to talk to one another: enabling private communication between containers on different hosts hasn’t been solved in a non-hacky way. 25 | 26 | Long-term, networking is [getting overhauled](https://github.com/docker/docker/issues/9983) in such a way that it’ll fit the multi-host model much better. For now, containers on different hosts cannot be linked. In the next version of Compose, linked services will be automatically scheduled on the same host; for now, this must be done manually (see “Co-scheduling containers” below). 27 | 28 | `volumes_from` and `net: container` 29 | ----------------------------------- 30 | 31 | For containers to share volumes or a network namespace, they must be scheduled on the same host - this is, after all, inherent to how both volumes and network namespaces work. In the next version of Compose, this co-scheduling will be automatic whenever `volumes_from` or `net: "container:..."` is specified; for now, containers which share volumes or a network namespace must be co-scheduled manually (see “Co-scheduling containers” below). 32 | 33 | Co-scheduling containers 34 | ------------------------ 35 | 36 | For now, containers can be manually scheduled on the same host using Swarm’s [affinity filters](https://github.com/docker/swarm/blob/master/scheduler/filter/README.md#affinity-filter). 
Here’s a simple example: 37 | 38 | ```yaml 39 | web: 40 | image: my-web-image 41 | links: ["db"] 42 | environment: 43 | - "affinity:container==myproject_db_*" 44 | db: 45 | image: postgres 46 | ``` 47 | 48 | Here, we express an affinity filter on all web containers, saying that each one must run alongside a container whose name begins with `myproject_db_`. 49 | 50 | - `myproject` is the common prefix Compose gives to all containers in your project, which is either generated from the name of the current directory or specified with `-p` or the `COMPOSE_PROJECT_NAME` environment variable. 51 | - `*` is a wildcard, which works just like filename wildcards in a Unix shell. 52 | -------------------------------------------------------------------------------- /bin/docker-compose: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | from compose.cli.main import main 3 | main() 4 | -------------------------------------------------------------------------------- /compose/__init__.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from .service import Service # noqa:flake8 3 | 4 | __version__ = '1.2.0' 5 | -------------------------------------------------------------------------------- /compose/cli/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hypriot/compose/7b5276774168bbe98209be7ce3ea05fd2a2131ea/compose/cli/__init__.py -------------------------------------------------------------------------------- /compose/cli/colors.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | NAMES = [ 3 | 'grey', 4 | 'red', 5 | 'green', 6 | 'yellow', 7 | 'blue', 8 | 'magenta', 9 | 'cyan', 10 | 'white' 11 | ] 12 | 13 | 14 | def get_pairs(): 15 | for i, name in enumerate(NAMES): 16 |
yield(name, str(30 + i)) 17 | yield('intense_' + name, str(30 + i) + ';1') 18 | 19 | 20 | def ansi(code): 21 | return '\033[{0}m'.format(code) 22 | 23 | 24 | def ansi_color(code, s): 25 | return '{0}{1}{2}'.format(ansi(code), s, ansi(0)) 26 | 27 | 28 | def make_color_fn(code): 29 | return lambda s: ansi_color(code, s) 30 | 31 | 32 | for (name, code) in get_pairs(): 33 | globals()[name] = make_color_fn(code) 34 | 35 | 36 | def rainbow(): 37 | cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue', 38 | 'intense_cyan', 'intense_yellow', 'intense_green', 39 | 'intense_magenta', 'intense_red', 'intense_blue'] 40 | 41 | for c in cs: 42 | yield globals()[c] 43 | -------------------------------------------------------------------------------- /compose/cli/command.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | from requests.exceptions import ConnectionError, SSLError 4 | import logging 5 | import os 6 | import re 7 | import six 8 | 9 | from .. import config 10 | from ..project import Project 11 | from ..service import ConfigError 12 | from .docopt_command import DocoptCommand 13 | from .utils import call_silently, is_mac, is_ubuntu 14 | from .docker_client import docker_client 15 | from . import verbose_proxy 16 | from . import errors 17 | from .. import __version__ 18 | 19 | log = logging.getLogger(__name__) 20 | 21 | 22 | class Command(DocoptCommand): 23 | base_dir = '.' 
24 | 25 | def dispatch(self, *args, **kwargs): 26 | try: 27 | super(Command, self).dispatch(*args, **kwargs) 28 | except SSLError as e: 29 | raise errors.UserError('SSL error: %s' % e) 30 | except ConnectionError: 31 | if call_silently(['which', 'docker']) != 0: 32 | if is_mac(): 33 | raise errors.DockerNotFoundMac() 34 | elif is_ubuntu(): 35 | raise errors.DockerNotFoundUbuntu() 36 | else: 37 | raise errors.DockerNotFoundGeneric() 38 | elif call_silently(['which', 'boot2docker']) == 0: 39 | raise errors.ConnectionErrorBoot2Docker() 40 | else: 41 | raise errors.ConnectionErrorGeneric(self.get_client().base_url) 42 | 43 | def perform_command(self, options, handler, command_options): 44 | if options['COMMAND'] == 'help': 45 | # Skip looking up the compose file. 46 | handler(None, command_options) 47 | return 48 | 49 | if 'FIG_FILE' in os.environ: 50 | log.warn('The FIG_FILE environment variable is deprecated.') 51 | log.warn('Please use COMPOSE_FILE instead.') 52 | 53 | explicit_config_path = options.get('--file') or os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE') 54 | project = self.get_project( 55 | self.get_config_path(explicit_config_path), 56 | project_name=options.get('--project-name'), 57 | verbose=options.get('--verbose')) 58 | 59 | handler(project, command_options) 60 | 61 | def get_client(self, verbose=False): 62 | client = docker_client() 63 | if verbose: 64 | version_info = six.iteritems(client.version()) 65 | log.info("Compose version %s", __version__) 66 | log.info("Docker base_url: %s", client.base_url) 67 | log.info("Docker version: %s", 68 | ", ".join("%s=%s" % item for item in version_info)) 69 | return verbose_proxy.VerboseProxy('docker', client) 70 | return client 71 | 72 | def get_project(self, config_path, project_name=None, verbose=False): 73 | try: 74 | return Project.from_dicts( 75 | self.get_project_name(config_path, project_name), 76 | config.load(config_path), 77 | self.get_client(verbose=verbose)) 78 | except ConfigError as 
e: 79 | raise errors.UserError(six.text_type(e)) 80 | 81 | def get_project_name(self, config_path, project_name=None): 82 | def normalize_name(name): 83 | return re.sub(r'[^a-z0-9]', '', name.lower()) 84 | 85 | if 'FIG_PROJECT_NAME' in os.environ: 86 | log.warn('The FIG_PROJECT_NAME environment variable is deprecated.') 87 | log.warn('Please use COMPOSE_PROJECT_NAME instead.') 88 | 89 | project_name = project_name or os.environ.get('COMPOSE_PROJECT_NAME') or os.environ.get('FIG_PROJECT_NAME') 90 | if project_name is not None: 91 | return normalize_name(project_name) 92 | 93 | project = os.path.basename(os.path.dirname(os.path.abspath(config_path))) 94 | if project: 95 | return normalize_name(project) 96 | 97 | return 'default' 98 | 99 | def get_config_path(self, file_path=None): 100 | if file_path: 101 | return os.path.join(self.base_dir, file_path) 102 | 103 | supported_filenames = [ 104 | 'docker-compose.yml', 105 | 'docker-compose.yaml', 106 | 'fig.yml', 107 | 'fig.yaml', 108 | ] 109 | 110 | def expand(filename): 111 | return os.path.join(self.base_dir, filename) 112 | 113 | candidates = [filename for filename in supported_filenames if os.path.exists(expand(filename))] 114 | 115 | if len(candidates) == 0: 116 | raise errors.ComposeFileNotFound(supported_filenames) 117 | 118 | winner = candidates[0] 119 | 120 | if len(candidates) > 1: 121 | log.warning("Found multiple config files with supported names: %s", ", ".join(candidates)) 122 | log.warning("Using %s\n", winner) 123 | 124 | if winner == 'docker-compose.yaml': 125 | log.warning("Please be aware that .yml is the expected extension " 126 | "in most cases, and using .yaml can cause compatibility " 127 | "issues in future.\n") 128 | 129 | if winner.startswith("fig."): 130 | log.warning("%s is deprecated and will not be supported in future. 
" 131 | "Please rename your config file to docker-compose.yml\n" % winner) 132 | 133 | return expand(winner) 134 | -------------------------------------------------------------------------------- /compose/cli/docker_client.py: -------------------------------------------------------------------------------- 1 | from docker import Client 2 | from docker import tls 3 | import ssl 4 | import os 5 | 6 | 7 | def docker_client(): 8 | """ 9 | Returns a docker-py client configured using environment variables 10 | according to the same logic as the official Docker client. 11 | """ 12 | cert_path = os.environ.get('DOCKER_CERT_PATH', '') 13 | if cert_path == '': 14 | cert_path = os.path.join(os.environ.get('HOME', ''), '.docker') 15 | 16 | base_url = os.environ.get('DOCKER_HOST') 17 | tls_config = None 18 | 19 | if os.environ.get('DOCKER_TLS_VERIFY', '') != '': 20 | parts = base_url.split('://', 1) 21 | base_url = '%s://%s' % ('https', parts[1]) 22 | 23 | client_cert = (os.path.join(cert_path, 'cert.pem'), os.path.join(cert_path, 'key.pem')) 24 | ca_cert = os.path.join(cert_path, 'ca.pem') 25 | 26 | tls_config = tls.TLSConfig( 27 | ssl_version=ssl.PROTOCOL_TLSv1, 28 | verify=True, 29 | assert_hostname=False, 30 | client_cert=client_cert, 31 | ca_cert=ca_cert, 32 | ) 33 | 34 | timeout = int(os.environ.get('DOCKER_CLIENT_TIMEOUT', 60)) 35 | return Client(base_url=base_url, tls=tls_config, version='1.15', timeout=timeout) 36 | -------------------------------------------------------------------------------- /compose/cli/docopt_command.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | import sys 4 | 5 | from inspect import getdoc 6 | from docopt import docopt, DocoptExit 7 | 8 | 9 | def docopt_full_help(docstring, *args, **kwargs): 10 | try: 11 | return docopt(docstring, *args, **kwargs) 12 | except DocoptExit: 13 | raise SystemExit(docstring) 14 | 15 | 16 | 
class DocoptCommand(object): 17 | def docopt_options(self): 18 | return {'options_first': True} 19 | 20 | def sys_dispatch(self): 21 | self.dispatch(sys.argv[1:], None) 22 | 23 | def dispatch(self, argv, global_options): 24 | self.perform_command(*self.parse(argv, global_options)) 25 | 26 | def perform_command(self, options, handler, command_options): 27 | handler(command_options) 28 | 29 | def parse(self, argv, global_options): 30 | options = docopt_full_help(getdoc(self), argv, **self.docopt_options()) 31 | command = options['COMMAND'] 32 | 33 | if command is None: 34 | raise SystemExit(getdoc(self)) 35 | 36 | if not hasattr(self, command): 37 | raise NoSuchCommand(command, self) 38 | 39 | handler = getattr(self, command) 40 | docstring = getdoc(handler) 41 | 42 | if docstring is None: 43 | raise NoSuchCommand(command, self) 44 | 45 | command_options = docopt_full_help(docstring, options['ARGS'], options_first=True) 46 | return options, handler, command_options 47 | 48 | 49 | class NoSuchCommand(Exception): 50 | def __init__(self, command, supercommand): 51 | super(NoSuchCommand, self).__init__("No such command: %s" % command) 52 | 53 | self.command = command 54 | self.supercommand = supercommand 55 | -------------------------------------------------------------------------------- /compose/cli/errors.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from textwrap import dedent 3 | 4 | 5 | class UserError(Exception): 6 | def __init__(self, msg): 7 | self.msg = dedent(msg).strip() 8 | 9 | def __unicode__(self): 10 | return self.msg 11 | 12 | __str__ = __unicode__ 13 | 14 | 15 | class DockerNotFoundMac(UserError): 16 | def __init__(self): 17 | super(DockerNotFoundMac, self).__init__(""" 18 | Couldn't connect to Docker daemon. 
You might need to install docker-osx: 19 | 20 | https://github.com/noplay/docker-osx 21 | """) 22 | 23 | 24 | class DockerNotFoundUbuntu(UserError): 25 | def __init__(self): 26 | super(DockerNotFoundUbuntu, self).__init__(""" 27 | Couldn't connect to Docker daemon. You might need to install Docker: 28 | 29 | http://docs.docker.io/en/latest/installation/ubuntulinux/ 30 | """) 31 | 32 | 33 | class DockerNotFoundGeneric(UserError): 34 | def __init__(self): 35 | super(DockerNotFoundGeneric, self).__init__(""" 36 | Couldn't connect to Docker daemon. You might need to install Docker: 37 | 38 | http://docs.docker.io/en/latest/installation/ 39 | """) 40 | 41 | 42 | class ConnectionErrorBoot2Docker(UserError): 43 | def __init__(self): 44 | super(ConnectionErrorBoot2Docker, self).__init__(""" 45 | Couldn't connect to Docker daemon - you might need to run `boot2docker up`. 46 | """) 47 | 48 | 49 | class ConnectionErrorGeneric(UserError): 50 | def __init__(self, url): 51 | super(ConnectionErrorGeneric, self).__init__(""" 52 | Couldn't connect to Docker daemon at %s - is it running? 53 | 54 | If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable. 55 | """ % url) 56 | 57 | 58 | class ComposeFileNotFound(UserError): 59 | def __init__(self, supported_filenames): 60 | super(ComposeFileNotFound, self).__init__(""" 61 | Can't find a suitable configuration file. Are you in the right directory? 
62 | 63 | Supported filenames: %s 64 | """ % ", ".join(supported_filenames)) 65 | -------------------------------------------------------------------------------- /compose/cli/formatter.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | import os 4 | import texttable 5 | 6 | 7 | def get_tty_width(): 8 | tty_size = os.popen('stty size', 'r').read().split() 9 | if len(tty_size) != 2: 10 | return 80 11 | _, width = tty_size 12 | return int(width) 13 | 14 | 15 | class Formatter(object): 16 | def table(self, headers, rows): 17 | table = texttable.Texttable(max_width=get_tty_width()) 18 | table.set_cols_dtype(['t' for h in headers]) 19 | table.add_rows([headers] + rows) 20 | table.set_deco(table.HEADER) 21 | table.set_chars(['-', '|', '+', '-']) 22 | 23 | return table.draw() 24 | -------------------------------------------------------------------------------- /compose/cli/log_printer.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | import sys 4 | 5 | from itertools import cycle 6 | 7 | from .multiplexer import Multiplexer, STOP 8 | from . 
import colors 9 | from .utils import split_buffer 10 | 11 | 12 | class LogPrinter(object): 13 | def __init__(self, containers, attach_params=None, output=sys.stdout, monochrome=False): 14 | self.containers = containers 15 | self.attach_params = attach_params or {} 16 | self.prefix_width = self._calculate_prefix_width(containers) 17 | self.generators = self._make_log_generators(monochrome) 18 | self.output = output 19 | 20 | def run(self): 21 | mux = Multiplexer(self.generators) 22 | for line in mux.loop(): 23 | self.output.write(line) 24 | 25 | def _calculate_prefix_width(self, containers): 26 | """ 27 | Calculate the maximum width of container names so we can make the log 28 | prefixes line up like so: 29 | 30 | db_1 | Listening 31 | web_1 | Listening 32 | """ 33 | prefix_width = 0 34 | for container in containers: 35 | prefix_width = max(prefix_width, len(container.name_without_project)) 36 | return prefix_width 37 | 38 | def _make_log_generators(self, monochrome): 39 | color_fns = cycle(colors.rainbow()) 40 | generators = [] 41 | 42 | def no_color(text): 43 | return text 44 | 45 | for container in self.containers: 46 | if monochrome: 47 | color_fn = no_color 48 | else: 49 | color_fn = next(color_fns) 50 | generators.append(self._make_log_generator(container, color_fn)) 51 | 52 | return generators 53 | 54 | def _make_log_generator(self, container, color_fn): 55 | prefix = color_fn(self._generate_prefix(container)).encode('utf-8') 56 | # Attach to container before log printer starts running 57 | line_generator = split_buffer(self._attach(container), '\n') 58 | 59 | for line in line_generator: 60 | yield prefix + line 61 | 62 | exit_code = container.wait() 63 | yield color_fn("%s exited with code %s\n" % (container.name, exit_code)) 64 | yield STOP 65 | 66 | def _generate_prefix(self, container): 67 | """ 68 | Generate the prefix for a log line without colour 69 | """ 70 | name = container.name_without_project 71 | padding = ' ' * (self.prefix_width - len(name)) 72 
| return ''.join([name, padding, ' | ']) 73 | 74 | def _attach(self, container): 75 | params = { 76 | 'stdout': True, 77 | 'stderr': True, 78 | 'stream': True, 79 | } 80 | params.update(self.attach_params) 81 | params = dict((name, 1 if value else 0) for (name, value) in list(params.items())) 82 | return container.attach(**params) 83 | -------------------------------------------------------------------------------- /compose/cli/multiplexer.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from threading import Thread 3 | 4 | try: 5 | from Queue import Queue, Empty 6 | except ImportError: 7 | from queue import Queue, Empty # Python 3.x 8 | 9 | 10 | # Yield STOP from an input generator to stop the 11 | # top-level loop without processing any more input. 12 | STOP = object() 13 | 14 | 15 | class Multiplexer(object): 16 | def __init__(self, generators): 17 | self.generators = generators 18 | self.queue = Queue() 19 | 20 | def loop(self): 21 | self._init_readers() 22 | 23 | while True: 24 | try: 25 | item = self.queue.get(timeout=0.1) 26 | if item is STOP: 27 | break 28 | else: 29 | yield item 30 | except Empty: 31 | pass 32 | 33 | def _init_readers(self): 34 | for generator in self.generators: 35 | t = Thread(target=_enqueue_output, args=(generator, self.queue)) 36 | t.daemon = True 37 | t.start() 38 | 39 | 40 | def _enqueue_output(generator, queue): 41 | for item in generator: 42 | queue.put(item) 43 | -------------------------------------------------------------------------------- /compose/cli/utils.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | from __future__ import division 4 | import datetime 5 | import os 6 | import subprocess 7 | import platform 8 | 9 | 10 | def yesno(prompt, default=None): 11 | """ 12 | Prompt the user for a yes or no. 
13 | 14 | Can optionally specify a default value, which will only be 15 | used if they enter a blank line. 16 | 17 | Unrecognised input (anything other than "y", "n", "yes", 18 | "no" or "") will return None. 19 | """ 20 | answer = raw_input(prompt).strip().lower() 21 | 22 | if answer == "y" or answer == "yes": 23 | return True 24 | elif answer == "n" or answer == "no": 25 | return False 26 | elif answer == "": 27 | return default 28 | else: 29 | return None 30 | 31 | 32 | # http://stackoverflow.com/a/5164027 33 | def prettydate(d): 34 | diff = datetime.datetime.utcnow() - d 35 | s = diff.seconds 36 | if diff.days > 7 or diff.days < 0: 37 | return d.strftime('%d %b %y') 38 | elif diff.days == 1: 39 | return '1 day ago' 40 | elif diff.days > 1: 41 | return '{0} days ago'.format(diff.days) 42 | elif s <= 1: 43 | return 'just now' 44 | elif s < 60: 45 | return '{0} seconds ago'.format(s) 46 | elif s < 120: 47 | return '1 minute ago' 48 | elif s < 3600: 49 | return '{0} minutes ago'.format(s / 60) 50 | elif s < 7200: 51 | return '1 hour ago' 52 | else: 53 | return '{0} hours ago'.format(s / 3600) 54 | 55 | 56 | def mkdir(path, permissions=0o700): 57 | if not os.path.exists(path): 58 | os.mkdir(path) 59 | 60 | os.chmod(path, permissions) 61 | 62 | return path 63 | 64 | 65 | def split_buffer(reader, separator): 66 | """ 67 | Given a generator which yields strings and a separator string, 68 | joins all input, splits on the separator and yields each chunk. 69 | 70 | Unlike string.split(), each chunk includes the trailing 71 | separator, except for the last one if none was found on the end 72 | of the input. 
73 | """ 74 | buffered = str('') 75 | separator = str(separator) 76 | 77 | for data in reader: 78 | buffered += data 79 | while True: 80 | index = buffered.find(separator) 81 | if index == -1: 82 | break 83 | yield buffered[:index + 1] 84 | buffered = buffered[index + 1:] 85 | 86 | if len(buffered) > 0: 87 | yield buffered 88 | 89 | 90 | def call_silently(*args, **kwargs): 91 | """ 92 | Like subprocess.call(), but redirects stdout and stderr to /dev/null. 93 | """ 94 | with open(os.devnull, 'w') as shutup: 95 | return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs) 96 | 97 | 98 | def is_mac(): 99 | return platform.system() == 'Darwin' 100 | 101 | 102 | def is_ubuntu(): 103 | return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu' 104 | -------------------------------------------------------------------------------- /compose/cli/verbose_proxy.py: -------------------------------------------------------------------------------- 1 | 2 | import functools 3 | from itertools import chain 4 | import logging 5 | import pprint 6 | 7 | import six 8 | 9 | 10 | def format_call(args, kwargs): 11 | args = (repr(a) for a in args) 12 | kwargs = ("{0!s}={1!r}".format(*item) for item in six.iteritems(kwargs)) 13 | return "({0})".format(", ".join(chain(args, kwargs))) 14 | 15 | 16 | def format_return(result, max_lines): 17 | if isinstance(result, (list, tuple, set)): 18 | return "({0} with {1} items)".format(type(result).__name__, len(result)) 19 | 20 | if result: 21 | lines = pprint.pformat(result).split('\n') 22 | extra = '\n...' if len(lines) > max_lines else '' 23 | return '\n'.join(lines[:max_lines]) + extra 24 | 25 | return result 26 | 27 | 28 | class VerboseProxy(object): 29 | """Proxy all function calls to another class and log method name, arguments 30 | and return values for each call. 
31 | """ 32 | 33 | def __init__(self, obj_name, obj, log_name=None, max_lines=10): 34 | self.obj_name = obj_name 35 | self.obj = obj 36 | self.max_lines = max_lines 37 | self.log = logging.getLogger(log_name or __name__) 38 | 39 | def __getattr__(self, name): 40 | attr = getattr(self.obj, name) 41 | 42 | if not six.callable(attr): 43 | return attr 44 | 45 | return functools.partial(self.proxy_callable, name) 46 | 47 | def proxy_callable(self, call_name, *args, **kwargs): 48 | self.log.info("%s %s <- %s", 49 | self.obj_name, 50 | call_name, 51 | format_call(args, kwargs)) 52 | 53 | result = getattr(self.obj, call_name)(*args, **kwargs) 54 | self.log.info("%s %s -> %s", 55 | self.obj_name, 56 | call_name, 57 | format_return(result, self.max_lines)) 58 | return result 59 | -------------------------------------------------------------------------------- /compose/config.py: -------------------------------------------------------------------------------- 1 | import os 2 | import yaml 3 | import six 4 | 5 | 6 | DOCKER_CONFIG_KEYS = [ 7 | 'cap_add', 8 | 'cap_drop', 9 | 'cpu_shares', 10 | 'command', 11 | 'detach', 12 | 'dns', 13 | 'dns_search', 14 | 'domainname', 15 | 'entrypoint', 16 | 'env_file', 17 | 'environment', 18 | 'hostname', 19 | 'image', 20 | 'links', 21 | 'mem_limit', 22 | 'net', 23 | 'ports', 24 | 'privileged', 25 | 'restart', 26 | 'stdin_open', 27 | 'tty', 28 | 'user', 29 | 'volumes', 30 | 'volumes_from', 31 | 'working_dir', 32 | ] 33 | 34 | ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [ 35 | 'build', 36 | 'expose', 37 | 'external_links', 38 | 'name', 39 | ] 40 | 41 | DOCKER_CONFIG_HINTS = { 42 | 'cpu_share' : 'cpu_shares', 43 | 'link' : 'links', 44 | 'port' : 'ports', 45 | 'privilege' : 'privileged', 46 | 'priviliged': 'privileged', 47 | 'privilige' : 'privileged', 48 | 'volume' : 'volumes', 49 | 'workdir' : 'working_dir', 50 | } 51 | 52 | 53 | def load(filename): 54 | working_dir = os.path.dirname(filename) 55 | return from_dictionary(load_yaml(filename), 
working_dir=working_dir, filename=filename) 56 | 57 | 58 | def from_dictionary(dictionary, working_dir=None, filename=None): 59 | service_dicts = [] 60 | 61 | for service_name, service_dict in list(dictionary.items()): 62 | if not isinstance(service_dict, dict): 63 | raise ConfigurationError('Service "%s" doesn\'t have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.' % service_name) 64 | loader = ServiceLoader(working_dir=working_dir, filename=filename) 65 | service_dict = loader.make_service_dict(service_name, service_dict) 66 | service_dicts.append(service_dict) 67 | 68 | return service_dicts 69 | 70 | 71 | def make_service_dict(name, service_dict, working_dir=None): 72 | return ServiceLoader(working_dir=working_dir).make_service_dict(name, service_dict) 73 | 74 | 75 | class ServiceLoader(object): 76 | def __init__(self, working_dir, filename=None, already_seen=None): 77 | self.working_dir = working_dir 78 | self.filename = filename 79 | self.already_seen = already_seen or [] 80 | 81 | def make_service_dict(self, name, service_dict): 82 | if self.signature(name) in self.already_seen: 83 | raise CircularReference(self.already_seen) 84 | 85 | service_dict = service_dict.copy() 86 | service_dict['name'] = name 87 | service_dict = resolve_environment(service_dict, working_dir=self.working_dir) 88 | service_dict = self.resolve_extends(service_dict) 89 | return process_container_options(service_dict, working_dir=self.working_dir) 90 | 91 | def resolve_extends(self, service_dict): 92 | if 'extends' not in service_dict: 93 | return service_dict 94 | 95 | extends_options = process_extends_options(service_dict['name'], service_dict['extends']) 96 | 97 | if self.working_dir is None: 98 | raise Exception("No working_dir passed to ServiceLoader()") 99 | 100 | other_config_path = expand_path(self.working_dir, extends_options['file']) 101 | other_working_dir = os.path.dirname(other_config_path) 102 | 
other_already_seen = self.already_seen + [self.signature(service_dict['name'])] 103 | other_loader = ServiceLoader( 104 | working_dir=other_working_dir, 105 | filename=other_config_path, 106 | already_seen=other_already_seen, 107 | ) 108 | 109 | other_config = load_yaml(other_config_path) 110 | other_service_dict = other_config[extends_options['service']] 111 | other_service_dict = other_loader.make_service_dict( 112 | service_dict['name'], 113 | other_service_dict, 114 | ) 115 | validate_extended_service_dict( 116 | other_service_dict, 117 | filename=other_config_path, 118 | service=extends_options['service'], 119 | ) 120 | 121 | return merge_service_dicts(other_service_dict, service_dict) 122 | 123 | def signature(self, name): 124 | return (self.filename, name) 125 | 126 | 127 | def process_extends_options(service_name, extends_options): 128 | error_prefix = "Invalid 'extends' configuration for %s:" % service_name 129 | 130 | if not isinstance(extends_options, dict): 131 | raise ConfigurationError("%s must be a dictionary" % error_prefix) 132 | 133 | if 'service' not in extends_options: 134 | raise ConfigurationError( 135 | "%s you need to specify a service, e.g. 
'service: web'" % error_prefix 136 | ) 137 | 138 | for k, _ in extends_options.items(): 139 | if k not in ['file', 'service']: 140 | raise ConfigurationError( 141 | "%s unsupported configuration option '%s'" % (error_prefix, k) 142 | ) 143 | 144 | return extends_options 145 | 146 | 147 | def validate_extended_service_dict(service_dict, filename, service): 148 | error_prefix = "Cannot extend service '%s' in %s:" % (service, filename) 149 | 150 | if 'links' in service_dict: 151 | raise ConfigurationError("%s services with 'links' cannot be extended" % error_prefix) 152 | 153 | if 'volumes_from' in service_dict: 154 | raise ConfigurationError("%s services with 'volumes_from' cannot be extended" % error_prefix) 155 | 156 | if 'net' in service_dict: 157 | if get_service_name_from_net(service_dict['net']) is not None: 158 | raise ConfigurationError("%s services with 'net: container' cannot be extended" % error_prefix) 159 | 160 | 161 | def process_container_options(service_dict, working_dir=None): 162 | for k in service_dict: 163 | if k not in ALLOWED_KEYS: 164 | msg = "Unsupported config option for %s service: '%s'" % (service_dict['name'], k) 165 | if k in DOCKER_CONFIG_HINTS: 166 | msg += " (did you mean '%s'?)" % DOCKER_CONFIG_HINTS[k] 167 | raise ConfigurationError(msg) 168 | 169 | service_dict = service_dict.copy() 170 | 171 | if 'volumes' in service_dict: 172 | service_dict['volumes'] = resolve_host_paths(service_dict['volumes'], working_dir=working_dir) 173 | 174 | if 'build' in service_dict: 175 | service_dict['build'] = resolve_build_path(service_dict['build'], working_dir=working_dir) 176 | 177 | return service_dict 178 | 179 | 180 | def merge_service_dicts(base, override): 181 | d = base.copy() 182 | 183 | if 'environment' in base or 'environment' in override: 184 | d['environment'] = merge_environment( 185 | base.get('environment'), 186 | override.get('environment'), 187 | ) 188 | 189 | if 'volumes' in base or 'volumes' in override: 190 | d['volumes'] = 
merge_volumes( 191 | base.get('volumes'), 192 | override.get('volumes'), 193 | ) 194 | 195 | if 'image' in override and 'build' in d: 196 | del d['build'] 197 | 198 | if 'build' in override and 'image' in d: 199 | del d['image'] 200 | 201 | list_keys = ['ports', 'expose', 'external_links'] 202 | 203 | for key in list_keys: 204 | if key in base or key in override: 205 | d[key] = base.get(key, []) + override.get(key, []) 206 | 207 | list_or_string_keys = ['dns', 'dns_search'] 208 | 209 | for key in list_or_string_keys: 210 | if key in base or key in override: 211 | d[key] = to_list(base.get(key)) + to_list(override.get(key)) 212 | 213 | already_merged_keys = ['environment', 'volumes'] + list_keys + list_or_string_keys 214 | 215 | for k in set(ALLOWED_KEYS) - set(already_merged_keys): 216 | if k in override: 217 | d[k] = override[k] 218 | 219 | return d 220 | 221 | 222 | def merge_environment(base, override): 223 | env = parse_environment(base) 224 | env.update(parse_environment(override)) 225 | return env 226 | 227 | 228 | def parse_links(links): 229 | return dict(parse_link(l) for l in links) 230 | 231 | 232 | def parse_link(link): 233 | if ':' in link: 234 | source, alias = link.split(':', 1) 235 | return (alias, source) 236 | else: 237 | return (link, link) 238 | 239 | 240 | def get_env_files(options, working_dir=None): 241 | if 'env_file' not in options: 242 | return {} 243 | 244 | if working_dir is None: 245 | raise Exception("No working_dir passed to get_env_files()") 246 | 247 | env_files = options.get('env_file', []) 248 | if not isinstance(env_files, list): 249 | env_files = [env_files] 250 | 251 | return [expand_path(working_dir, path) for path in env_files] 252 | 253 | 254 | def resolve_environment(service_dict, working_dir=None): 255 | service_dict = service_dict.copy() 256 | 257 | if 'environment' not in service_dict and 'env_file' not in service_dict: 258 | return service_dict 259 | 260 | env = {} 261 | 262 | if 'env_file' in service_dict: 263 | for f 
in get_env_files(service_dict, working_dir=working_dir): 264 | env.update(env_vars_from_file(f)) 265 | del service_dict['env_file'] 266 | 267 | env.update(parse_environment(service_dict.get('environment'))) 268 | env = dict(resolve_env_var(k, v) for k, v in six.iteritems(env)) 269 | 270 | service_dict['environment'] = env 271 | return service_dict 272 | 273 | 274 | def parse_environment(environment): 275 | if not environment: 276 | return {} 277 | 278 | if isinstance(environment, list): 279 | return dict(split_env(e) for e in environment) 280 | 281 | if isinstance(environment, dict): 282 | return environment 283 | 284 | raise ConfigurationError( 285 | "environment \"%s\" must be a list or mapping," % 286 | environment 287 | ) 288 | 289 | 290 | def split_env(env): 291 | if '=' in env: 292 | return env.split('=', 1) 293 | else: 294 | return env, None 295 | 296 | 297 | def resolve_env_var(key, val): 298 | if val is not None: 299 | return key, val 300 | elif key in os.environ: 301 | return key, os.environ[key] 302 | else: 303 | return key, '' 304 | 305 | 306 | def env_vars_from_file(filename): 307 | """ 308 | Read in a line delimited file of environment variables. 
309 | """ 310 | if not os.path.exists(filename): 311 | raise ConfigurationError("Couldn't find env file: %s" % filename) 312 | env = {} 313 | for line in open(filename, 'r'): 314 | line = line.strip() 315 | if line and not line.startswith('#'): 316 | k, v = split_env(line) 317 | env[k] = v 318 | return env 319 | 320 | 321 | def resolve_host_paths(volumes, working_dir=None): 322 | if working_dir is None: 323 | raise Exception("No working_dir passed to resolve_host_paths()") 324 | 325 | return [resolve_host_path(v, working_dir) for v in volumes] 326 | 327 | 328 | def resolve_host_path(volume, working_dir): 329 | container_path, host_path = split_volume(volume) 330 | if host_path is not None: 331 | host_path = os.path.expanduser(host_path) 332 | host_path = os.path.expandvars(host_path) 333 | return "%s:%s" % (expand_path(working_dir, host_path), container_path) 334 | else: 335 | return container_path 336 | 337 | 338 | def resolve_build_path(build_path, working_dir=None): 339 | if working_dir is None: 340 | raise Exception("No working_dir passed to resolve_build_path") 341 | 342 | _path = expand_path(working_dir, build_path) 343 | if not os.path.exists(_path) or not os.access(_path, os.R_OK): 344 | raise ConfigurationError("build path %s either does not exist or is not accessible." 
% _path) 345 | else: 346 | return _path 347 | 348 | 349 | def merge_volumes(base, override): 350 | d = dict_from_volumes(base) 351 | d.update(dict_from_volumes(override)) 352 | return volumes_from_dict(d) 353 | 354 | 355 | def dict_from_volumes(volumes): 356 | if volumes: 357 | return dict(split_volume(v) for v in volumes) 358 | else: 359 | return {} 360 | 361 | 362 | def volumes_from_dict(d): 363 | return [join_volume(v) for v in d.items()] 364 | 365 | 366 | def split_volume(string): 367 | if ':' in string: 368 | (host, container) = string.split(':', 1) 369 | return (container, host) 370 | else: 371 | return (string, None) 372 | 373 | 374 | def join_volume(pair): 375 | (container, host) = pair 376 | if host is None: 377 | return container 378 | else: 379 | return ":".join((host, container)) 380 | 381 | 382 | def expand_path(working_dir, path): 383 | return os.path.abspath(os.path.join(working_dir, path)) 384 | 385 | 386 | def to_list(value): 387 | if value is None: 388 | return [] 389 | elif isinstance(value, six.string_types): 390 | return [value] 391 | else: 392 | return value 393 | 394 | 395 | def get_service_name_from_net(net_config): 396 | if not net_config: 397 | return 398 | 399 | if not net_config.startswith('container:'): 400 | return 401 | 402 | _, net_name = net_config.split(':', 1) 403 | return net_name 404 | 405 | 406 | def load_yaml(filename): 407 | try: 408 | with open(filename, 'r') as fh: 409 | return yaml.safe_load(fh) 410 | except IOError as e: 411 | raise ConfigurationError(six.text_type(e)) 412 | 413 | 414 | class ConfigurationError(Exception): 415 | def __init__(self, msg): 416 | self.msg = msg 417 | 418 | def __str__(self): 419 | return self.msg 420 | 421 | 422 | class CircularReference(ConfigurationError): 423 | def __init__(self, trail): 424 | self.trail = trail 425 | 426 | @property 427 | def msg(self): 428 | lines = [ 429 | "{} in {}".format(service_name, filename) 430 | for (filename, service_name) in self.trail 431 | ] 432 | return 
"Circular reference:\n {}".format("\n extends ".join(lines)) 433 | -------------------------------------------------------------------------------- /compose/container.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | 4 | import six 5 | from functools import reduce 6 | 7 | 8 | class Container(object): 9 | """ 10 | Represents a Docker container, constructed from the output of 11 | GET /containers/:id:/json. 12 | """ 13 | def __init__(self, client, dictionary, has_been_inspected=False): 14 | self.client = client 15 | self.dictionary = dictionary 16 | self.has_been_inspected = has_been_inspected 17 | 18 | @classmethod 19 | def from_ps(cls, client, dictionary, **kwargs): 20 | """ 21 | Construct a container object from the output of GET /containers/json. 22 | """ 23 | new_dictionary = { 24 | 'Id': dictionary['Id'], 25 | 'Image': dictionary['Image'], 26 | 'Name': '/' + get_container_name(dictionary), 27 | } 28 | return cls(client, new_dictionary, **kwargs) 29 | 30 | @classmethod 31 | def from_id(cls, client, id): 32 | return cls(client, client.inspect_container(id)) 33 | 34 | @classmethod 35 | def create(cls, client, **options): 36 | response = client.create_container(**options) 37 | return cls.from_id(client, response['Id']) 38 | 39 | @property 40 | def id(self): 41 | return self.dictionary['Id'] 42 | 43 | @property 44 | def image(self): 45 | return self.dictionary['Image'] 46 | 47 | @property 48 | def short_id(self): 49 | return self.id[:10] 50 | 51 | @property 52 | def name(self): 53 | return self.dictionary['Name'][1:] 54 | 55 | @property 56 | def name_without_project(self): 57 | return '_'.join(self.dictionary['Name'].split('_')[1:]) 58 | 59 | @property 60 | def number(self): 61 | try: 62 | return int(self.name.split('_')[-1]) 63 | except ValueError: 64 | return None 65 | 66 | @property 67 | def ports(self): 68 | 
self.inspect_if_not_inspected() 69 | return self.get('NetworkSettings.Ports') or {} 70 | 71 | @property 72 | def human_readable_ports(self): 73 | def format_port(private, public): 74 | if not public: 75 | return private 76 | return '{HostIp}:{HostPort}->{private}'.format( 77 | private=private, **public[0]) 78 | 79 | return ', '.join(format_port(*item) 80 | for item in sorted(six.iteritems(self.ports))) 81 | 82 | @property 83 | def human_readable_state(self): 84 | if self.is_running: 85 | return 'Ghost' if self.get('State.Ghost') else 'Up' 86 | else: 87 | return 'Exit %s' % self.get('State.ExitCode') 88 | 89 | @property 90 | def human_readable_command(self): 91 | entrypoint = self.get('Config.Entrypoint') or [] 92 | cmd = self.get('Config.Cmd') or [] 93 | return ' '.join(entrypoint + cmd) 94 | 95 | @property 96 | def environment(self): 97 | return dict(var.split("=", 1) for var in self.get('Config.Env') or []) 98 | 99 | @property 100 | def is_running(self): 101 | return self.get('State.Running') 102 | 103 | def get(self, key): 104 | """Return a value from the container or None if the value is not set. 
105 | 106 | :param key: a string using dotted notation for nested dictionary 107 | lookups 108 | """ 109 | self.inspect_if_not_inspected() 110 | 111 | def get_value(dictionary, key): 112 | return (dictionary or {}).get(key) 113 | 114 | return reduce(get_value, key.split('.'), self.dictionary) 115 | 116 | def get_local_port(self, port, protocol='tcp'): 117 | port = self.ports.get("%s/%s" % (port, protocol)) 118 | return "{HostIp}:{HostPort}".format(**port[0]) if port else None 119 | 120 | def start(self, **options): 121 | return self.client.start(self.id, **options) 122 | 123 | def stop(self, **options): 124 | return self.client.stop(self.id, **options) 125 | 126 | def kill(self, **options): 127 | return self.client.kill(self.id, **options) 128 | 129 | def restart(self): 130 | return self.client.restart(self.id) 131 | 132 | def remove(self, **options): 133 | return self.client.remove_container(self.id, **options) 134 | 135 | def inspect_if_not_inspected(self): 136 | if not self.has_been_inspected: 137 | self.inspect() 138 | 139 | def wait(self): 140 | return self.client.wait(self.id) 141 | 142 | def logs(self, *args, **kwargs): 143 | return self.client.logs(self.id, *args, **kwargs) 144 | 145 | def inspect(self): 146 | self.dictionary = self.client.inspect_container(self.id) 147 | self.has_been_inspected = True 148 | return self.dictionary 149 | 150 | def links(self): 151 | links = [] 152 | for container in self.client.containers(): 153 | for name in container['Names']: 154 | bits = name.split('/') 155 | if len(bits) > 2 and bits[1] == self.name: 156 | links.append(bits[2]) 157 | return links 158 | 159 | def attach(self, *args, **kwargs): 160 | return self.client.attach(self.id, *args, **kwargs) 161 | 162 | def attach_socket(self, **kwargs): 163 | return self.client.attach_socket(self.id, **kwargs) 164 | 165 | def __repr__(self): 166 | return '<Container: %s>' % self.name 167 | 168 | def __eq__(self, other): 169 | if type(self) != type(other): 170 | return False 171 | return
self.id == other.id 172 | 173 | 174 | def get_container_name(container): 175 | if not container.get('Name') and not container.get('Names'): 176 | return None 177 | # inspect 178 | if 'Name' in container: 179 | return container['Name'] 180 | # ps 181 | shortest_name = min(container['Names'], key=lambda n: len(n.split('/'))) 182 | return shortest_name.split('/')[-1] 183 | -------------------------------------------------------------------------------- /compose/progress_stream.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import codecs 4 | 5 | 6 | class StreamOutputError(Exception): 7 | pass 8 | 9 | 10 | def stream_output(output, stream): 11 | is_terminal = hasattr(stream, 'fileno') and os.isatty(stream.fileno()) 12 | stream = codecs.getwriter('utf-8')(stream) 13 | all_events = [] 14 | lines = {} 15 | diff = 0 16 | 17 | for chunk in output: 18 | event = json.loads(chunk) 19 | all_events.append(event) 20 | 21 | if 'progress' in event or 'progressDetail' in event: 22 | image_id = event.get('id') 23 | if not image_id: 24 | continue 25 | 26 | if image_id in lines: 27 | diff = len(lines) - lines[image_id] 28 | else: 29 | lines[image_id] = len(lines) 30 | stream.write("\n") 31 | diff = 0 32 | 33 | if is_terminal: 34 | # move cursor up `diff` rows 35 | stream.write("%c[%dA" % (27, diff)) 36 | 37 | print_output_event(event, stream, is_terminal) 38 | 39 | if 'id' in event and is_terminal: 40 | # move cursor back down 41 | stream.write("%c[%dB" % (27, diff)) 42 | 43 | stream.flush() 44 | 45 | return all_events 46 | 47 | 48 | def print_output_event(event, stream, is_terminal): 49 | if 'errorDetail' in event: 50 | raise StreamOutputError(event['errorDetail']['message']) 51 | 52 | terminator = '' 53 | 54 | if is_terminal and 'stream' not in event: 55 | # erase current line 56 | stream.write("%c[2K\r" % 27) 57 | terminator = "\r" 58 | pass 59 | elif 'progressDetail' in event: 60 | return 61 | 62 | if 'time' in 
event: 63 | stream.write("[%s] " % event['time']) 64 | 65 | if 'id' in event: 66 | stream.write("%s: " % event['id']) 67 | 68 | if 'from' in event: 69 | stream.write("(from %s) " % event['from']) 70 | 71 | status = event.get('status', '') 72 | 73 | if 'progress' in event: 74 | stream.write("%s %s%s" % (status, event['progress'], terminator)) 75 | elif 'progressDetail' in event: 76 | detail = event['progressDetail'] 77 | if 'current' in detail: 78 | percentage = float(detail['current']) / float(detail['total']) * 100 79 | stream.write('%s (%.1f%%)%s' % (status, percentage, terminator)) 80 | else: 81 | stream.write('%s%s' % (status, terminator)) 82 | elif 'stream' in event: 83 | stream.write("%s%s" % (event['stream'], terminator)) 84 | else: 85 | stream.write("%s%s\n" % (status, terminator)) 86 | -------------------------------------------------------------------------------- /compose/project.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | import logging 4 | 5 | from functools import reduce 6 | from .config import get_service_name_from_net, ConfigurationError 7 | from .service import Service 8 | from .container import Container 9 | from docker.errors import APIError 10 | 11 | log = logging.getLogger(__name__) 12 | 13 | 14 | def sort_service_dicts(services): 15 | # Topological sort (Cormen/Tarjan algorithm). 
16 | unmarked = services[:] 17 | temporary_marked = set() 18 | sorted_services = [] 19 | 20 | def get_service_names(links): 21 | return [link.split(':')[0] for link in links] 22 | 23 | def get_service_dependents(service_dict, services): 24 | name = service_dict['name'] 25 | return [ 26 | service for service in services 27 | if (name in get_service_names(service.get('links', [])) or 28 | name in service.get('volumes_from', []) or 29 | name == get_service_name_from_net(service.get('net'))) 30 | ] 31 | 32 | def visit(n): 33 | if n['name'] in temporary_marked: 34 | if n['name'] in get_service_names(n.get('links', [])): 35 | raise DependencyError('A service can not link to itself: %s' % n['name']) 36 | if n['name'] in n.get('volumes_from', []): 37 | raise DependencyError('A service can not mount itself as volume: %s' % n['name']) 38 | else: 39 | raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked)) 40 | if n in unmarked: 41 | temporary_marked.add(n['name']) 42 | for m in get_service_dependents(n, services): 43 | visit(m) 44 | temporary_marked.remove(n['name']) 45 | unmarked.remove(n) 46 | sorted_services.insert(0, n) 47 | 48 | while unmarked: 49 | visit(unmarked[-1]) 50 | 51 | return sorted_services 52 | 53 | 54 | class Project(object): 55 | """ 56 | A collection of services. 57 | """ 58 | def __init__(self, name, services, client): 59 | self.name = name 60 | self.services = services 61 | self.client = client 62 | 63 | @classmethod 64 | def from_dicts(cls, name, service_dicts, client): 65 | """ 66 | Construct a ServiceCollection from a list of dicts representing services. 
67 | """ 68 | project = cls(name, [], client) 69 | for service_dict in sort_service_dicts(service_dicts): 70 | links = project.get_links(service_dict) 71 | volumes_from = project.get_volumes_from(service_dict) 72 | net = project.get_net(service_dict) 73 | 74 | project.services.append(Service(client=client, project=name, links=links, net=net, 75 | volumes_from=volumes_from, **service_dict)) 76 | return project 77 | 78 | def get_service(self, name): 79 | """ 80 | Retrieve a service by name. Raises NoSuchService 81 | if the named service does not exist. 82 | """ 83 | for service in self.services: 84 | if service.name == name: 85 | return service 86 | 87 | raise NoSuchService(name) 88 | 89 | def get_services(self, service_names=None, include_deps=False): 90 | """ 91 | Returns a list of this project's services filtered 92 | by the provided list of names, or all services if service_names is None 93 | or []. 94 | 95 | If include_deps is specified, returns a list including the dependencies for 96 | service_names, in order of dependency. 97 | 98 | Preserves the original order of self.services where possible, 99 | reordering as needed to resolve dependencies. 100 | 101 | Raises NoSuchService if any of the named services do not exist. 
102 | """ 103 | if service_names is None or len(service_names) == 0: 104 | return self.get_services( 105 | service_names=[s.name for s in self.services], 106 | include_deps=include_deps 107 | ) 108 | else: 109 | unsorted = [self.get_service(name) for name in service_names] 110 | services = [s for s in self.services if s in unsorted] 111 | 112 | if include_deps: 113 | services = reduce(self._inject_deps, services, []) 114 | 115 | uniques = [] 116 | [uniques.append(s) for s in services if s not in uniques] 117 | return uniques 118 | 119 | def get_links(self, service_dict): 120 | links = [] 121 | if 'links' in service_dict: 122 | for link in service_dict.get('links', []): 123 | if ':' in link: 124 | service_name, link_name = link.split(':', 1) 125 | else: 126 | service_name, link_name = link, None 127 | try: 128 | links.append((self.get_service(service_name), link_name)) 129 | except NoSuchService: 130 | raise ConfigurationError('Service "%s" has a link to service "%s" which does not exist.' % (service_dict['name'], service_name)) 131 | del service_dict['links'] 132 | return links 133 | 134 | def get_volumes_from(self, service_dict): 135 | volumes_from = [] 136 | if 'volumes_from' in service_dict: 137 | for volume_name in service_dict.get('volumes_from', []): 138 | try: 139 | service = self.get_service(volume_name) 140 | volumes_from.append(service) 141 | except NoSuchService: 142 | try: 143 | container = Container.from_id(self.client, volume_name) 144 | volumes_from.append(container) 145 | except APIError: 146 | raise ConfigurationError('Service "%s" mounts volumes from "%s", which is not the name of a service or container.' 
% (service_dict['name'], volume_name)) 147 | del service_dict['volumes_from'] 148 | return volumes_from 149 | 150 | def get_net(self, service_dict): 151 | if 'net' in service_dict: 152 | net_name = get_service_name_from_net(service_dict.get('net')) 153 | 154 | if net_name: 155 | try: 156 | net = self.get_service(net_name) 157 | except NoSuchService: 158 | try: 159 | net = Container.from_id(self.client, net_name) 160 | except APIError: 161 | raise ConfigurationError('Service "%s" is trying to use the network of "%s", which is not the name of a service or container.' % (service_dict['name'], net_name)) 162 | else: 163 | net = service_dict['net'] 164 | 165 | del service_dict['net'] 166 | 167 | else: 168 | net = 'bridge' 169 | 170 | return net 171 | 172 | def start(self, service_names=None, **options): 173 | for service in self.get_services(service_names): 174 | service.start(**options) 175 | 176 | def stop(self, service_names=None, **options): 177 | for service in reversed(self.get_services(service_names)): 178 | service.stop(**options) 179 | 180 | def kill(self, service_names=None, **options): 181 | for service in reversed(self.get_services(service_names)): 182 | service.kill(**options) 183 | 184 | def restart(self, service_names=None, **options): 185 | for service in self.get_services(service_names): 186 | service.restart(**options) 187 | 188 | def build(self, service_names=None, no_cache=False): 189 | for service in self.get_services(service_names): 190 | if service.can_be_built(): 191 | service.build(no_cache) 192 | else: 193 | log.info('%s uses an image, skipping' % service.name) 194 | 195 | def up(self, 196 | service_names=None, 197 | start_deps=True, 198 | recreate=True, 199 | insecure_registry=False, 200 | detach=False, 201 | do_build=True): 202 | running_containers = [] 203 | for service in self.get_services(service_names, include_deps=start_deps): 204 | if recreate: 205 | for (_, container) in service.recreate_containers( 206 |
insecure_registry=insecure_registry, 207 | detach=detach, 208 | do_build=do_build): 209 | running_containers.append(container) 210 | else: 211 | for container in service.start_or_create_containers( 212 | insecure_registry=insecure_registry, 213 | detach=detach, 214 | do_build=do_build): 215 | running_containers.append(container) 216 | 217 | return running_containers 218 | 219 | def pull(self, service_names=None, insecure_registry=False): 220 | for service in self.get_services(service_names, include_deps=True): 221 | service.pull(insecure_registry=insecure_registry) 222 | 223 | def remove_stopped(self, service_names=None, **options): 224 | for service in self.get_services(service_names): 225 | service.remove_stopped(**options) 226 | 227 | def containers(self, service_names=None, stopped=False, one_off=False): 228 | return [Container.from_ps(self.client, container) 229 | for container in self.client.containers(all=stopped) 230 | for service in self.get_services(service_names) 231 | if service.has_container(container, one_off=one_off)] 232 | 233 | def _inject_deps(self, acc, service): 234 | net_name = service.get_net_name() 235 | dep_names = (service.get_linked_names() + 236 | service.get_volumes_from_names() + 237 | ([net_name] if net_name else [])) 238 | 239 | if len(dep_names) > 0: 240 | dep_services = self.get_services( 241 | service_names=list(set(dep_names)), 242 | include_deps=True 243 | ) 244 | else: 245 | dep_services = [] 246 | 247 | dep_services.append(service) 248 | return acc + dep_services 249 | 250 | 251 | class NoSuchService(Exception): 252 | def __init__(self, name): 253 | self.name = name 254 | self.msg = "No such service: %s" % self.name 255 | 256 | def __str__(self): 257 | return self.msg 258 | 259 | 260 | class DependencyError(ConfigurationError): 261 | pass 262 | -------------------------------------------------------------------------------- /contrib/completion/bash/docker-compose: 
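The dependency ordering performed by `sort_service_dicts` in `compose/project.py` above can be sketched in isolation. This is an illustrative simplification (the names `sort_services` and `deps` are mine, not upstream's): it orders services so that anything a service links to or mounts volumes from is placed first, but it omits the `net:` handling and the self-reference/cycle detection (`DependencyError`) that the real function performs.

```python
# Simplified sketch of Compose's service dependency ordering.
# Assumption: services are plain dicts with optional 'links' and
# 'volumes_from' keys, as in the real service dicts.

def sort_services(services):
    sorted_services, visited = [], set()

    def deps(svc):
        # links are written "service:alias"; only the service part matters
        link_names = [link.split(':')[0] for link in svc.get('links', [])]
        return link_names + svc.get('volumes_from', [])

    def visit(svc):
        if svc['name'] in visited:
            return
        visited.add(svc['name'])
        for other in services:
            if other['name'] in deps(svc):
                visit(other)          # dependencies first
        sorted_services.append(svc)   # then the service itself

    for svc in services:
        visit(svc)
    return sorted_services

services = [
    {'name': 'web', 'links': ['db:database'], 'volumes_from': ['data']},
    {'name': 'db'},
    {'name': 'data'},
]
print([s['name'] for s in sort_services(services)])  # → ['db', 'data', 'web']
```

The depth-first traversal appends a service only after everything it depends on has been appended, which is why `db` and `data` end up created before `web`.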
-------------------------------------------------------------------------------- 1 | #!bash 2 | # 3 | # bash completion for docker-compose 4 | # 5 | # This work is based on the completion for the docker command. 6 | # 7 | # This script provides completion of: 8 | # - commands and their options 9 | # - service names 10 | # - filepaths 11 | # 12 | # To enable the completions either: 13 | # - place this file in /etc/bash_completion.d 14 | # or 15 | # - copy this file to e.g. ~/.docker-compose-completion.sh and add the line 16 | # below to your .bashrc after bash completion features are loaded 17 | # . ~/.docker-compose-completion.sh 18 | 19 | 20 | # For compatibility reasons, Compose, and therefore its completion, supports several 21 | # stack composition files as listed here, in descending priority. 22 | # Support for these filenames might be dropped in some future version. 23 | __docker-compose_compose_file() { 24 | local file 25 | for file in docker-compose.y{,a}ml fig.y{,a}ml ; do 26 | [ -e $file ] && { 27 | echo $file 28 | return 29 | } 30 | done 31 | echo docker-compose.yml 32 | } 33 | 34 | # Extracts all service names from the compose file. 35 | ___docker-compose_all_services_in_compose_file() { 36 | awk -F: '/^[a-zA-Z0-9]/{print $1}' "${compose_file:-$(__docker-compose_compose_file)}" 2>/dev/null 37 | } 38 | 39 | # All services, even those without an existing container 40 | __docker-compose_services_all() { 41 | COMPREPLY=( $(compgen -W "$(___docker-compose_all_services_in_compose_file)" -- "$cur") ) 42 | } 43 | 44 | # All services that have an entry with the given key in their compose_file section 45 | ___docker-compose_services_with_key() { 46 | # flatten sections to one line, then filter lines containing the key and return section name.
47 | awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' "${compose_file:-$(__docker-compose_compose_file)}" | awk -F: -v key=": +$1:" '$0 ~ key {print $1}' 48 | } 49 | 50 | # All services that are defined by a Dockerfile reference 51 | __docker-compose_services_from_build() { 52 | COMPREPLY=( $(compgen -W "$(___docker-compose_services_with_key build)" -- "$cur") ) 53 | } 54 | 55 | # All services that are defined by an image 56 | __docker-compose_services_from_image() { 57 | COMPREPLY=( $(compgen -W "$(___docker-compose_services_with_key image)" -- "$cur") ) 58 | } 59 | 60 | # The services for which containers have been created, optionally filtered 61 | # by a boolean expression passed in as argument. 62 | __docker-compose_services_with() { 63 | local containers names 64 | containers="$(docker-compose 2>/dev/null ${compose_file:+-f $compose_file} ${compose_project:+-p $compose_project} ps -q)" 65 | names=( $(docker 2>/dev/null inspect --format "{{if ${1:-true}}} {{ .Name }} {{end}}" $containers) ) 66 | names=( ${names[@]%_*} ) # strip trailing numbers 67 | names=( ${names[@]#*_} ) # strip project name 68 | COMPREPLY=( $(compgen -W "${names[*]}" -- "$cur") ) 69 | } 70 | 71 | # The services for which at least one running container exists 72 | __docker-compose_services_running() { 73 | __docker-compose_services_with '.State.Running' 74 | } 75 | 76 | # The services for which at least one stopped container exists 77 | __docker-compose_services_stopped() { 78 | __docker-compose_services_with 'not .State.Running' 79 | } 80 | 81 | 82 | _docker-compose_build() { 83 | case "$cur" in 84 | -*) 85 | COMPREPLY=( $( compgen -W "--no-cache" -- "$cur" ) ) 86 | ;; 87 | *) 88 | __docker-compose_services_from_build 89 | ;; 90 | esac 91 | } 92 | 93 | 94 | _docker-compose_docker-compose() { 95 | case "$prev" in 96 | --file|-f) 97 | _filedir y?(a)ml 98 | return 99 | ;; 100 | --project-name|-p) 101 | return 102 | ;; 103 | esac 104 | 105 | case "$cur" in 106 | -*) 107 | COMPREPLY=( $( 
compgen -W "--help -h --verbose --version --file -f --project-name -p" -- "$cur" ) ) 108 | ;; 109 | *) 110 | COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) ) 111 | ;; 112 | esac 113 | } 114 | 115 | 116 | _docker-compose_help() { 117 | COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) ) 118 | } 119 | 120 | 121 | _docker-compose_kill() { 122 | case "$prev" in 123 | -s) 124 | COMPREPLY=( $( compgen -W "SIGHUP SIGINT SIGKILL SIGUSR1 SIGUSR2" -- "$(echo $cur | tr '[:lower:]' '[:upper:]')" ) ) 125 | return 126 | ;; 127 | esac 128 | 129 | case "$cur" in 130 | -*) 131 | COMPREPLY=( $( compgen -W "-s" -- "$cur" ) ) 132 | ;; 133 | *) 134 | __docker-compose_services_running 135 | ;; 136 | esac 137 | } 138 | 139 | 140 | _docker-compose_logs() { 141 | case "$cur" in 142 | -*) 143 | COMPREPLY=( $( compgen -W "--no-color" -- "$cur" ) ) 144 | ;; 145 | *) 146 | __docker-compose_services_all 147 | ;; 148 | esac 149 | } 150 | 151 | 152 | _docker-compose_port() { 153 | case "$prev" in 154 | --protocol) 155 | COMPREPLY=( $( compgen -W "tcp udp" -- "$cur" ) ) 156 | return; 157 | ;; 158 | --index) 159 | return; 160 | ;; 161 | esac 162 | 163 | case "$cur" in 164 | -*) 165 | COMPREPLY=( $( compgen -W "--protocol --index" -- "$cur" ) ) 166 | ;; 167 | *) 168 | __docker-compose_services_all 169 | ;; 170 | esac 171 | } 172 | 173 | 174 | _docker-compose_ps() { 175 | case "$cur" in 176 | -*) 177 | COMPREPLY=( $( compgen -W "-q" -- "$cur" ) ) 178 | ;; 179 | *) 180 | __docker-compose_services_all 181 | ;; 182 | esac 183 | } 184 | 185 | 186 | _docker-compose_pull() { 187 | case "$cur" in 188 | -*) 189 | COMPREPLY=( $( compgen -W "--allow-insecure-ssl" -- "$cur" ) ) 190 | ;; 191 | *) 192 | __docker-compose_services_from_image 193 | ;; 194 | esac 195 | } 196 | 197 | 198 | _docker-compose_restart() { 199 | case "$prev" in 200 | -t | --timeout) 201 | return 202 | ;; 203 | esac 204 | 205 | case "$cur" in 206 | -*) 207 | COMPREPLY=( $( compgen -W "-t --timeout" -- "$cur" ) ) 208 | ;; 209 | 
*) 210 | __docker-compose_services_running 211 | ;; 212 | esac 213 | } 214 | 215 | 216 | _docker-compose_rm() { 217 | case "$cur" in 218 | -*) 219 | COMPREPLY=( $( compgen -W "--force -f -v" -- "$cur" ) ) 220 | ;; 221 | *) 222 | __docker-compose_services_stopped 223 | ;; 224 | esac 225 | } 226 | 227 | 228 | _docker-compose_run() { 229 | case "$prev" in 230 | -e) 231 | COMPREPLY=( $( compgen -e -- "$cur" ) ) 232 | compopt -o nospace 233 | return 234 | ;; 235 | --entrypoint|--user|-u) 236 | return 237 | ;; 238 | esac 239 | 240 | case "$cur" in 241 | -*) 242 | COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --entrypoint -e --no-deps --rm --service-ports -T --user -u" -- "$cur" ) ) 243 | ;; 244 | *) 245 | __docker-compose_services_all 246 | ;; 247 | esac 248 | } 249 | 250 | 251 | _docker-compose_scale() { 252 | case "$prev" in 253 | =) 254 | COMPREPLY=("$cur") 255 | ;; 256 | *) 257 | COMPREPLY=( $(compgen -S "=" -W "$(___docker-compose_all_services_in_compose_file)" -- "$cur") ) 258 | compopt -o nospace 259 | ;; 260 | esac 261 | } 262 | 263 | 264 | _docker-compose_start() { 265 | __docker-compose_services_stopped 266 | } 267 | 268 | 269 | _docker-compose_stop() { 270 | case "$prev" in 271 | -t | --timeout) 272 | return 273 | ;; 274 | esac 275 | 276 | case "$cur" in 277 | -*) 278 | COMPREPLY=( $( compgen -W "-t --timeout" -- "$cur" ) ) 279 | ;; 280 | *) 281 | __docker-compose_services_running 282 | ;; 283 | esac 284 | } 285 | 286 | 287 | _docker-compose_up() { 288 | case "$prev" in 289 | -t | --timeout) 290 | return 291 | ;; 292 | esac 293 | 294 | case "$cur" in 295 | -*) 296 | COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --no-build --no-color --no-deps --no-recreate -t --timeout" -- "$cur" ) ) 297 | ;; 298 | *) 299 | __docker-compose_services_all 300 | ;; 301 | esac 302 | } 303 | 304 | 305 | _docker-compose() { 306 | local commands=( 307 | build 308 | help 309 | kill 310 | logs 311 | port 312 | ps 313 | pull 314 | restart 315 | rm 316 | run 317 | scale 318 | 
start 319 | stop 320 | up 321 | ) 322 | 323 | COMPREPLY=() 324 | local cur prev words cword 325 | _get_comp_words_by_ref -n : cur prev words cword 326 | 327 | # search subcommand and invoke its handler. 328 | # special treatment of some top-level options 329 | local command='docker-compose' 330 | local counter=1 331 | local compose_file compose_project 332 | while [ $counter -lt $cword ]; do 333 | case "${words[$counter]}" in 334 | -f|--file) 335 | (( counter++ )) 336 | compose_file="${words[$counter]}" 337 | ;; 338 | -p|--project-name) 339 | (( counter++ )) 340 | compose_project="${words[$counter]}" 341 | ;; 342 | -*) 343 | ;; 344 | *) 345 | command="${words[$counter]}" 346 | break 347 | ;; 348 | esac 349 | (( counter++ )) 350 | done 351 | 352 | local completions_func=_docker-compose_${command} 353 | declare -F $completions_func >/dev/null && $completions_func 354 | 355 | return 0 356 | } 357 | 358 | complete -F _docker-compose docker-compose 359 | -------------------------------------------------------------------------------- /docs/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM docs/base:latest 2 | MAINTAINER Sven Dowideit (@SvenDowideit) 3 | 4 | # to get the git info for this repo 5 | COPY . 
/src 6 | 7 | # Reset the /docs dir so we can replace the theme meta with the new repo's git info 8 | RUN git reset --hard 9 | 10 | RUN grep "__version" /src/compose/__init__.py | sed "s/.*'\(.*\)'/\1/" > /docs/VERSION 11 | COPY docs/* /docs/sources/compose/ 12 | COPY docs/mkdocs.yml /docs/mkdocs-compose.yml 13 | 14 | # Then build everything together, ready for mkdocs 15 | RUN /docs/build.sh 16 | -------------------------------------------------------------------------------- /docs/cli.md: -------------------------------------------------------------------------------- 1 | page_title: Compose CLI reference 2 | page_description: Compose CLI reference 3 | page_keywords: fig, composition, compose, docker, orchestration, cli, reference 4 | 5 | 6 | # CLI reference 7 | 8 | Most Docker Compose commands are run against one or more services. If 9 | the service is not specified, the command will apply to all services. 10 | 11 | For full usage information, run `docker-compose [COMMAND] --help`. 12 | 13 | ## Commands 14 | 15 | ### build 16 | 17 | Builds or rebuilds services. 18 | 19 | Services are built once and then tagged as `project_service`, e.g., 20 | `composetest_db`. If you change a service's Dockerfile or the contents of its 21 | build directory, run `docker-compose build` to rebuild it. 22 | 23 | ### help 24 | 25 | Displays help and usage instructions for a command. 26 | 27 | ### kill 28 | 29 | Forces running containers to stop by sending a `SIGKILL` signal. Optionally, the 30 | signal to send can be specified, for example: 31 | 32 | $ docker-compose kill -s SIGINT 33 | 34 | ### logs 35 | 36 | Displays log output from services. 37 | 38 | ### port 39 | 40 | Prints the public port for a port binding. 41 | 42 | ### ps 43 | 44 | Lists containers. 45 | 46 | ### pull 47 | 48 | Pulls service images. 49 | 50 | ### rm 51 | 52 | Removes stopped service containers. 53 | 54 | 55 | ### run 56 | 57 | Runs a one-off command on a service.
58 | 59 | For example, 60 | 61 | $ docker-compose run web python manage.py shell 62 | 63 | will start the `web` service and then run `manage.py shell` in python. 64 | Note that by default, linked services will also be started, unless they are 65 | already running. 66 | 67 | One-off commands are started in new containers with the same configuration as a 68 | normal container for that service, so volumes, links, etc will all be created as 69 | expected. When using `run`, there are two differences from bringing up a 70 | container normally: 71 | 72 | 1. the command will be overridden with the one specified. So, if you run 73 | `docker-compose run web bash`, the container's web command (which could default 74 | to, e.g., `python app.py`) will be overridden to `bash` 75 | 76 | 2. by default no ports will be created in case they collide with already opened 77 | ports. 78 | 79 | Links are also created between one-off commands and the other containers which 80 | are part of that service. So, for example, you could run: 81 | 82 | $ docker-compose run db psql -h db -U docker 83 | 84 | This would open up an interactive PostgreSQL shell for the linked `db` container 85 | (which would get created or started as needed). 86 | 87 | If you do not want linked containers to start when running the one-off command, 88 | specify the `--no-deps` flag: 89 | 90 | $ docker-compose run --no-deps web python manage.py shell 91 | 92 | Similarly, if you do want the service's ports to be created and mapped to the 93 | host, specify the `--service-ports` flag: 94 | $ docker-compose run --service-ports web python manage.py shell 95 | 96 | ### scale 97 | 98 | Sets the number of containers to run for a service. 99 | 100 | Numbers are specified as arguments in the form `service=num`. For example: 101 | 102 | $ docker-compose scale web=2 worker=3 103 | 104 | ### start 105 | 106 | Starts existing containers for a service. 107 | 108 | ### stop 109 | 110 | Stops running containers without removing them. 
They can be started again with 111 | `docker-compose start`. 112 | 113 | ### up 114 | 115 | Builds, (re)creates, starts, and attaches to containers for a service. 116 | 117 | Linked services will be started, unless they are already running. 118 | 119 | By default, `docker-compose up` will aggregate the output of each container and, 120 | when it exits, all containers will be stopped. Running `docker-compose up -d` 121 | will start the containers in the background and leave them running. 122 | 123 | By default, if there are existing containers for a service, `docker-compose up` will stop and recreate them (preserving mounted volumes with [volumes-from]), so that changes in `docker-compose.yml` are picked up. If you do not want containers stopped and recreated, use `docker-compose up --no-recreate`. This will still start any stopped containers, if needed. 124 | 125 | [volumes-from]: http://docs.docker.io/en/latest/use/working_with_volumes/ 126 | 127 | ## Options 128 | 129 | ### --verbose 130 | 131 | Shows more output 132 | 133 | ### --version 134 | 135 | Prints version and exits 136 | 137 | ### -f, --file FILE 138 | 139 | Specifies an alternate Compose yaml file (default: `docker-compose.yml`) 140 | 141 | ### -p, --project-name NAME 142 | 143 | Specifies an alternate project name (default: current directory name) 144 | 145 | 146 | ## Environment Variables 147 | 148 | Several environment variables are available for you to configure Compose's behaviour. 149 | 150 | Variables starting with `DOCKER_` are the same as those used to configure the 151 | Docker command-line client. If you're using boot2docker, `$(boot2docker shellinit)` 152 | will set them to their correct values. 153 | 154 | ### COMPOSE\_PROJECT\_NAME 155 | 156 | Sets the project name, which is prepended to the name of every container started by Compose. Defaults to the `basename` of the current working directory. 157 | 158 | ### COMPOSE\_FILE 159 | 160 | Sets the path to the `docker-compose.yml` to use.
Defaults to `docker-compose.yml` in the current working directory. 161 | 162 | ### DOCKER\_HOST 163 | 164 | Sets the URL of the Docker daemon. As with the Docker client, defaults to `unix:///var/run/docker.sock`. 165 | 166 | ### DOCKER\_TLS\_VERIFY 167 | 168 | When set to anything other than an empty string, enables TLS communication with 169 | the daemon. 170 | 171 | ### DOCKER\_CERT\_PATH 172 | 173 | Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TLS verification. Defaults to `~/.docker`. 174 | 175 | ## Compose documentation 176 | 177 | - [Installing Compose](install.md) 178 | - [User guide](index.md) 179 | - [Yaml file reference](yml.md) 180 | - [Compose environment variables](env.md) 181 | - [Compose command line completion](completion.md) 182 | -------------------------------------------------------------------------------- /docs/completion.md: -------------------------------------------------------------------------------- 1 | --- 2 | layout: default 3 | title: Command Completion 4 | --- 5 | 6 | Command Completion 7 | ================== 8 | 9 | Compose comes with [command completion](http://en.wikipedia.org/wiki/Command-line_completion) 10 | for the bash shell. 11 | 12 | Installing Command Completion 13 | ----------------------------- 14 | 15 | Make sure bash completion is installed. On most current Linux distributions (aside from minimal installs), bash completion should already be available. 16 | On a Mac, install it with `brew install bash-completion`. 17 | 18 | Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g. 19 | 20 | curl -L https://raw.githubusercontent.com/docker/compose/1.2.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose 21 | 22 | Completion will be available upon next login.
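The completions are driven by the service names in your compose file: the script simply treats every top-level key of the YAML as a service name. Roughly the following extraction, shown here in Python purely for illustration (`service_names` is a hypothetical helper, not part of the completion script):

```python
import re

def service_names(compose_text):
    """Mimic the completion script's line-based filter: every line that
    starts with an alphanumeric character is a top-level service key."""
    return [line.split(':')[0]
            for line in compose_text.splitlines()
            if re.match(r'^[a-zA-Z0-9]', line)]

sample = """db:
  image: postgres
web:
  build: .
  links:
    - db
"""
print(service_names(sample))  # → ['db', 'web']
```

Real YAML parsing would be more robust, but this mirrors the simple filter the bash script applies to the compose file.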
23 | 24 | Available completions 25 | --------------------- 26 | Depending on what you typed on the command line so far, it will complete 27 | 28 | - available docker-compose commands 29 | - options that are available for a particular command 30 | - service names that make sense in a given context (e.g. services with running or stopped instances or services based on images vs. services based on Dockerfiles). For `docker-compose scale`, completed service names will automatically have "=" appended. 31 | - arguments for selected options, e.g. `docker-compose kill -s` will complete some signals like SIGHUP and SIGUSR1. 32 | 33 | Enjoy working with Compose faster and with fewer typos! 34 | 35 | ## Compose documentation 36 | 37 | - [Installing Compose](install.md) 38 | - [User guide](index.md) 39 | - [Command line reference](cli.md) 40 | - [Yaml file reference](yml.md) 41 | - [Compose environment variables](env.md) 42 | -------------------------------------------------------------------------------- /docs/django.md: -------------------------------------------------------------------------------- 1 | page_title: Quickstart Guide: Compose and Django 2 | page_description: Getting started with Docker Compose and Django 3 | page_keywords: documentation, docs, docker, compose, orchestration, containers, 4 | django 5 | 6 | 7 | ## Getting started with Compose and Django 8 | 9 | 10 | This Quick-start Guide will demonstrate how to use Compose to set up and run a 11 | simple Django/PostgreSQL app. Before starting, you'll need to have 12 | [Compose installed](install.md). 13 | 14 | ### Define the project 15 | 16 | Start by setting up the three files you'll need to build the app. First, since 17 | your app is going to run inside a Docker container containing all of its 18 | dependencies, you'll need to define exactly what needs to be included in the 19 | container. This is done using a file called `Dockerfile`. To begin with, the
To begin with, the 20 | Dockerfile consists of: 21 | 22 | FROM python:2.7 23 | ENV PYTHONUNBUFFERED 1 24 | RUN mkdir /code 25 | WORKDIR /code 26 | ADD requirements.txt /code/ 27 | RUN pip install -r requirements.txt 28 | ADD . /code/ 29 | 30 | This Dockerfile will define an image that is used to build a container that 31 | includes your application and has Python installed alongside all of your Python 32 | dependencies. For more information on how to write Dockerfiles, see the 33 | [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/). 34 | 35 | Second, you'll define your Python dependencies in a file called 36 | `requirements.txt`: 37 | 38 | Django 39 | psycopg2 40 | 41 | Finally, this is all tied together with a file called `docker-compose.yml`. It 42 | describes the services that comprise your app (here, a web server and database), 43 | which Docker images they use, how they link together, what volumes will be 44 | mounted inside the containers, and what ports they expose. 45 | 46 | db: 47 | image: postgres 48 | web: 49 | build: . 50 | command: python manage.py runserver 0.0.0.0:8000 51 | volumes: 52 | - .:/code 53 | ports: 54 | - "8000:8000" 55 | links: 56 | - db 57 | 58 | See the [`docker-compose.yml` reference](yml.md) for more information on how 59 | this file works. 60 | 61 | ### Build the project 62 | 63 | You can now start a Django project with `docker-compose run`: 64 | 65 | $ docker-compose run web django-admin.py startproject composeexample . 66 | 67 | First, Compose will build an image for the `web` service using the `Dockerfile`. 68 | It will then run `django-admin.py startproject composeexample .` inside a 69 | container built using that image.
70 | 71 | This will generate a Django app inside the current directory: 72 | 73 | $ ls 74 | Dockerfile docker-compose.yml composeexample manage.py requirements.txt 75 | 76 | ### Connect the database 77 | 78 | Now you need to set up the database connection. Replace the `DATABASES = ...` 79 | definition in `composeexample/settings.py` so that it reads: 80 | 81 | DATABASES = { 82 | 'default': { 83 | 'ENGINE': 'django.db.backends.postgresql_psycopg2', 84 | 'NAME': 'postgres', 85 | 'USER': 'postgres', 86 | 'HOST': 'db', 87 | 'PORT': 5432, 88 | } 89 | } 90 | 91 | These settings are determined by the 92 | [postgres](https://registry.hub.docker.com/_/postgres/) Docker image specified 93 | in `docker-compose.yml`. 94 | 95 | Then, run `docker-compose up`: 96 | 97 | Recreating myapp_db_1... 98 | Recreating myapp_web_1... 99 | Attaching to myapp_db_1, myapp_web_1 100 | myapp_db_1 | 101 | myapp_db_1 | PostgreSQL stand-alone backend 9.1.11 102 | myapp_db_1 | 2014-01-27 12:17:03 UTC LOG: database system is ready to accept connections 103 | myapp_db_1 | 2014-01-27 12:17:03 UTC LOG: autovacuum launcher started 104 | myapp_web_1 | Validating models... 105 | myapp_web_1 | 106 | myapp_web_1 | 0 errors found 107 | myapp_web_1 | January 27, 2014 - 12:12:40 108 | myapp_web_1 | Django version 1.6.1, using settings 'composeexample.settings' 109 | myapp_web_1 | Starting development server at http://0.0.0.0:8000/ 110 | myapp_web_1 | Quit the server with CONTROL-C. 111 | 112 | Your Django app should now be running at port 8000 on your Docker daemon host (if 113 | you're using Boot2docker, `boot2docker ip` will tell you its address). 114 | 115 | You can also run management commands with Docker.
To set up your database, for 116 | example, run `docker-compose up` and in another terminal run: 117 | 118 | $ docker-compose run web python manage.py syncdb 119 | 120 | ## More Compose documentation 121 | 122 | - [Installing Compose](install.md) 123 | - [User guide](index.md) 124 | - [Command line reference](cli.md) 125 | - [Yaml file reference](yml.md) 126 | - [Compose environment variables](env.md) 127 | - [Compose command line completion](completion.md) 128 | -------------------------------------------------------------------------------- /docs/env.md: -------------------------------------------------------------------------------- 1 | --- 2 | layout: default 3 | title: Compose environment variables reference 4 | --- 5 | 6 | Environment variables reference 7 | =============================== 8 | 9 | **Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](yml.md#links) for details. 10 | 11 | Compose uses [Docker links] to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container. 12 | 13 | To see what environment variables are available to a service, run `docker-compose run SERVICE env`. 14 | 15 | name\_PORT
16 | Full URL, e.g. `DB_PORT=tcp://172.17.0.5:5432` 17 | 18 | name\_PORT\_num\_protocol
19 | Full URL, e.g. `DB_PORT_5432_TCP=tcp://172.17.0.5:5432` 20 | 21 | name\_PORT\_num\_protocol\_ADDR
22 | Container's IP address, e.g. `DB_PORT_5432_TCP_ADDR=172.17.0.5` 23 | 24 | name\_PORT\_num\_protocol\_PORT
25 | Exposed port number, e.g. `DB_PORT_5432_TCP_PORT=5432` 26 | 27 | name\_PORT\_num\_protocol\_PROTO
28 | Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp` 29 | 30 | name\_NAME
31 | Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1` 32 | 33 | [Docker links]: http://docs.docker.com/userguide/dockerlinks/ 34 | 35 | ## Compose documentation 36 | 37 | - [Installing Compose](install.md) 38 | - [User guide](index.md) 39 | - [Command line reference](cli.md) 40 | - [Yaml file reference](yml.md) 41 | - [Compose command line completion](completion.md) 42 | -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | page_title: Compose: Multi-container orchestration for Docker 2 | page_description: Introduction and Overview of Compose 3 | page_keywords: documentation, docs, docker, compose, orchestration, containers 4 | 5 | 6 | # Docker Compose 7 | 8 | Compose is a tool for defining and running complex applications with Docker. 9 | With Compose, you define a multi-container application in a single file, then 10 | spin your application up in a single command which does everything that needs to 11 | be done to get it running. 12 | 13 | Compose is great for development environments, staging servers, and CI. We don't 14 | recommend that you use it in production yet. 15 | 16 | Using Compose is basically a three-step process. 17 | 18 | First, you define your app's environment with a `Dockerfile` so it can be 19 | reproduced anywhere: 20 | 21 | ```Dockerfile 22 | FROM python:2.7 23 | WORKDIR /code 24 | ADD requirements.txt /code/ 25 | RUN pip install -r requirements.txt 26 | ADD . /code 27 | CMD python app.py 28 | ``` 29 | 30 | Next, you define the services that make up your app in `docker-compose.yml` so 31 | they can be run together in an isolated environment: 32 | 33 | ```yaml 34 | web: 35 | build: . 36 | links: 37 | - db 38 | ports: 39 | - "8000:8000" 40 | db: 41 | image: postgres 42 | ``` 43 | 44 | Lastly, run `docker-compose up` and Compose will start and run your entire app. 
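When Compose starts your app, it names each container after the project (by default the current directory name), the service, and an instance number — that's where names like `composetest_web_1` in the quick-start output further down come from. A toy sketch of that naming convention (illustrative only, not Compose internals):

```python
def container_name(project, service, number=1):
    # Compose-style container name: <project>_<service>_<number>,
    # e.g. project "composetest" + service "web" -> composetest_web_1
    return '{0}_{1}_{2}'.format(project, service, number)

print(container_name('composetest', 'web'))    # composetest_web_1
print(container_name('composetest', 'redis'))  # composetest_redis_1
```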
45 | 46 | Compose has commands for managing the whole lifecycle of your application: 47 | 48 | * Start, stop and rebuild services 49 | * View the status of running services 50 | * Stream the log output of running services 51 | * Run a one-off command on a service 52 | 53 | ## Compose documentation 54 | 55 | - [Installing Compose](install.md) 56 | - [Command line reference](cli.md) 57 | - [Yaml file reference](yml.md) 58 | - [Compose environment variables](env.md) 59 | - [Compose command line completion](completion.md) 60 | 61 | ## Quick start 62 | 63 | Let's get started with a walkthrough of getting a simple Python web app running 64 | on Compose. It assumes a little knowledge of Python, but the concepts 65 | demonstrated here should be understandable even if you're not familiar with 66 | Python. 67 | 68 | ### Installation and set-up 69 | 70 | First, [install Docker and Compose](install.md). 71 | 72 | Next, you'll want to make a directory for the project: 73 | 74 | $ mkdir composetest 75 | $ cd composetest 76 | 77 | Inside this directory, create `app.py`, a simple web app that uses the Flask 78 | framework and increments a value in Redis: 79 | 80 | ```python 81 | from flask import Flask 82 | from redis import Redis 83 | import os 84 | app = Flask(__name__) 85 | redis = Redis(host='redis', port=6379) 86 | 87 | @app.route('/') 88 | def hello(): 89 | redis.incr('hits') 90 | return 'Hello World! I have been seen %s times.' % redis.get('hits') 91 | 92 | if __name__ == "__main__": 93 | app.run(host="0.0.0.0", debug=True) 94 | ``` 95 | 96 | Next, define the Python dependencies in a file called `requirements.txt`: 97 | 98 | flask 99 | redis 100 | 101 | ### Create a Docker image 102 | 103 | Now, create a Docker image containing all of your app's dependencies. You 104 | specify how to build the image using a file called 105 | [`Dockerfile`](http://docs.docker.com/reference/builder/): 106 | 107 | FROM python:2.7 108 | ADD . 
/code 109 | WORKDIR /code 110 | RUN pip install -r requirements.txt 111 | 112 | This tells Docker to include Python, your code, and your Python dependencies in 113 | a Docker image. For more information on how to write Dockerfiles, see the 114 | [Docker user 115 | guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) 116 | and the 117 | [Dockerfile reference](http://docs.docker.com/reference/builder/). 118 | 119 | ### Define services 120 | 121 | Next, define a set of services using `docker-compose.yml`: 122 | 123 | web: 124 | build: . 125 | command: python app.py 126 | ports: 127 | - "5000:5000" 128 | volumes: 129 | - .:/code 130 | links: 131 | - redis 132 | redis: 133 | image: redis 134 | 135 | This defines two services: 136 | 137 | - `web`, which is built from the `Dockerfile` in the current directory. It also 138 | says to run the command `python app.py` inside the image, forward the exposed 139 | port 5000 on the container to port 5000 on the host machine, connect up the 140 | Redis service, and mount the current directory inside the container so we can 141 | work on code without having to rebuild the image. 142 | - `redis`, which uses the public image 143 | [redis](https://registry.hub.docker.com/_/redis/), which gets pulled from the 144 | Docker Hub registry. 145 | 146 | ### Build and run your app with Compose 147 | 148 | Now, when you run `docker-compose up`, Compose will pull a Redis image, build an 149 | image for your code, and start everything up: 150 | 151 | $ docker-compose up 152 | Pulling image redis... 153 | Building web... 154 | Starting composetest_redis_1... 155 | Starting composetest_web_1... 156 | redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3 157 | web_1 | * Running on http://0.0.0.0:5000/ 158 | 159 | The web app should now be listening on port 5000 on your Docker daemon host (if 160 | you're using Boot2docker, `boot2docker ip` will tell you its address). 
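Incidentally, the hit-counter logic in `app.py` can be exercised without Docker or a running Redis server by substituting an in-memory stand-in for the Redis client — a testing sketch (the `FakeRedis` class here is a hypothetical stub, not part of the `redis` package):

```python
# Sketch: unit-test the view logic from app.py with a minimal
# in-memory stand-in for the Redis client.
class FakeRedis(object):
    def __init__(self):
        self.store = {}

    def incr(self, key):
        # Increment the counter, creating it at 0 if missing.
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

    def get(self, key):
        return self.store.get(key)

redis = FakeRedis()

def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

print(hello())  # Hello World! I have been seen 1 times.
print(hello())  # Hello World! I have been seen 2 times.
```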
161 | 162 | If you want to run your services in the background, you can pass the `-d` flag 163 | (for detached mode) to `docker-compose up` and use `docker-compose ps` to see what 164 | is currently running: 165 | 166 | $ docker-compose up -d 167 | Starting composetest_redis_1... 168 | Starting composetest_web_1... 169 | $ docker-compose ps 170 | Name Command State Ports 171 | ------------------------------------------------------------------- 172 | composetest_redis_1 /usr/local/bin/run Up 173 | composetest_web_1 /bin/sh -c python app.py Up 5000->5000/tcp 174 | 175 | The `docker-compose run` command allows you to run one-off commands for your 176 | services. For example, to see what environment variables are available to the 177 | `web` service: 178 | 179 | $ docker-compose run web env 180 | 181 | Run `docker-compose --help` to see other available commands. 182 | 183 | If you started Compose with `docker-compose up -d`, you'll probably want to stop 184 | your services once you've finished with them: 185 | 186 | $ docker-compose stop 187 | 188 | At this point, you have seen the basics of how Compose works. 189 | 190 | - Next, try the quick start guide for [Django](django.md), 191 | [Rails](rails.md), or [Wordpress](wordpress.md). 192 | - See the reference guides for complete details on the [commands](cli.md), the 193 | [configuration file](yml.md) and [environment variables](env.md). 194 | -------------------------------------------------------------------------------- /docs/install.md: -------------------------------------------------------------------------------- 1 | page_title: Installing Compose 2 | page_description: How to install Docker Compose 3 | page_keywords: compose, orchestration, install, installation, docker, documentation 4 | 5 | 6 | ## Installing Compose 7 | 8 | To install Compose, you'll need to install Docker first. You'll then install 9 | Compose with a `curl` command.
10 | 11 | ### Install Docker 12 | 13 | First, install Docker version 1.3 or greater: 14 | 15 | - [Instructions for Mac OS X](http://docs.docker.com/installation/mac/) 16 | - [Instructions for Ubuntu](http://docs.docker.com/installation/ubuntulinux/) 17 | - [Instructions for other systems](http://docs.docker.com/installation/) 18 | 19 | ### Install Compose 20 | 21 | To install Compose, run the following commands: 22 | 23 | curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose 24 | chmod +x /usr/local/bin/docker-compose 25 | 26 | Optionally, you can also install [command completion](completion.md) for the 27 | bash shell. 28 | 29 | Compose is available for OS X and 64-bit Linux. If you're on another platform, 30 | Compose can also be installed as a Python package: 31 | 32 | $ sudo pip install -U docker-compose 33 | 34 | No further steps are required; Compose should now be successfully installed. 35 | You can test the installation by running `docker-compose --version`. 
36 | 37 | ## Compose documentation 38 | 39 | - [User guide](index.md) 40 | - [Command line reference](cli.md) 41 | - [Yaml file reference](yml.md) 42 | - [Compose environment variables](env.md) 43 | - [Compose command line completion](completion.md) 44 | -------------------------------------------------------------------------------- /docs/mkdocs.yml: -------------------------------------------------------------------------------- 1 | 2 | - ['compose/index.md', 'User Guide', 'Docker Compose' ] 3 | - ['compose/install.md', 'Installation', 'Docker Compose'] 4 | - ['compose/cli.md', 'Reference', 'Compose command line'] 5 | - ['compose/yml.md', 'Reference', 'Compose yml'] 6 | - ['compose/env.md', 'Reference', 'Compose ENV variables'] 7 | - ['compose/completion.md', 'Reference', 'Compose commandline completion'] 8 | - ['compose/django.md', 'Examples', 'Getting started with Compose and Django'] 9 | - ['compose/rails.md', 'Examples', 'Getting started with Compose and Rails'] 10 | - ['compose/wordpress.md', 'Examples', 'Getting started with Compose and Wordpress'] 11 | -------------------------------------------------------------------------------- /docs/rails.md: -------------------------------------------------------------------------------- 1 | page_title: Quickstart Guide: Compose and Rails 2 | page_description: Getting started with Docker Compose and Rails 3 | page_keywords: documentation, docs, docker, compose, orchestration, containers, 4 | rails 5 | 6 | 7 | ## Getting started with Compose and Rails 8 | 9 | This Quickstart guide will show you how to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md). 10 | 11 | ### Define the project 12 | 13 | Start by setting up the three files you'll need to build the app. First, since 14 | your app is going to run inside a Docker container containing all of its 15 | dependencies, you'll need to define exactly what needs to be included in the 16 | container. 
This is done using a file called `Dockerfile`. To begin with, the 17 | Dockerfile consists of: 18 | 19 | FROM ruby:2.2.0 20 | RUN apt-get update -qq && apt-get install -y build-essential libpq-dev 21 | RUN mkdir /myapp 22 | WORKDIR /myapp 23 | ADD Gemfile /myapp/Gemfile 24 | RUN bundle install 25 | ADD . /myapp 26 | 27 | That'll put your application code inside an image that will build a container with Ruby, Bundler and all your dependencies inside it. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/). 28 | 29 | Next, create a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`. 30 | 31 | source 'https://rubygems.org' 32 | gem 'rails', '4.2.0' 33 | 34 | Finally, `docker-compose.yml` is where the magic happens. This file describes the services that comprise your app (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration needed to link them together and expose the web app's port. 35 | 36 | db: 37 | image: postgres 38 | ports: 39 | - "5432" 40 | web: 41 | build: . 42 | command: bundle exec rails s -p 3000 -b '0.0.0.0' 43 | volumes: 44 | - .:/myapp 45 | ports: 46 | - "3000:3000" 47 | links: 48 | - db 49 | 50 | ### Build the project 51 | 52 | With those three files in place, you can now generate the Rails skeleton app 53 | using `docker-compose run`: 54 | 55 | $ docker-compose run web rails new . --force --database=postgresql --skip-bundle 56 | 57 | First, Compose will build the image for the `web` service using the 58 | `Dockerfile`. Then it'll run `rails new` inside a new container, using that 59 | image. 
Once it's done, you should have generated a fresh app: 60 | 61 | $ ls 62 | Dockerfile app docker-compose.yml tmp 63 | Gemfile bin lib vendor 64 | Gemfile.lock config log 65 | README.rdoc config.ru public 66 | Rakefile db test 67 | 68 | Uncomment the line in your new `Gemfile` which loads `therubyracer`, so you've 69 | got a Javascript runtime: 70 | 71 | gem 'therubyracer', platforms: :ruby 72 | 73 | Now that you've got a new `Gemfile`, you need to build the image again. (This, 74 | and changes to the Dockerfile itself, should be the only times you'll need to 75 | rebuild.) 76 | 77 | $ docker-compose build 78 | 79 | ### Connect the database 80 | 81 | The app is now bootable, but you're not quite there yet. By default, Rails 82 | expects a database to be running on `localhost` - so you need to point it at the 83 | `db` container instead. You also need to change the database and username to 84 | align with the defaults set by the `postgres` image. 85 | 86 | Open up your newly-generated `database.yml` file. Replace its contents with the 87 | following: 88 | 89 | development: &default 90 | adapter: postgresql 91 | encoding: unicode 92 | database: postgres 93 | pool: 5 94 | username: postgres 95 | password: 96 | host: db 97 | 98 | test: 99 | <<: *default 100 | database: myapp_test 101 | 102 | You can now boot the app with: 103 | 104 | $ docker-compose up 105 | 106 | If all's well, you should see some PostgreSQL output, and then—after a few 107 | seconds—the familiar refrain: 108 | 109 | myapp_web_1 | [2014-01-17 17:16:29] INFO WEBrick 1.3.1 110 | myapp_web_1 | [2014-01-17 17:16:29] INFO ruby 2.2.0 (2014-12-25) [x86_64-linux-gnu] 111 | myapp_web_1 | [2014-01-17 17:16:29] INFO WEBrick::HTTPServer#start: pid=1 port=3000 112 | 113 | Finally, you need to create the database. In another terminal, run: 114 | 115 | $ docker-compose run web rake db:create 116 | 117 | That's it. 
Your app should now be running on port 3000 on your Docker daemon host (if 118 | you're using Boot2docker, `boot2docker ip` will tell you its address). 119 | 120 | ## More Compose documentation 121 | 122 | - [Installing Compose](install.md) 123 | - [User guide](index.md) 124 | - [Command line reference](cli.md) 125 | - [Yaml file reference](yml.md) 126 | - [Compose environment variables](env.md) 127 | - [Compose command line completion](completion.md) 128 | -------------------------------------------------------------------------------- /docs/wordpress.md: -------------------------------------------------------------------------------- 1 | page_title: Quickstart Guide: Compose and Wordpress 2 | page_description: Getting started with Docker Compose and Wordpress 3 | page_keywords: documentation, docs, docker, compose, orchestration, containers, 4 | wordpress 5 | 6 | ## Getting started with Compose and Wordpress 7 | 8 | You can use Compose to easily run Wordpress in an isolated environment built 9 | with Docker containers. 10 | 11 | ### Define the project 12 | 13 | First, [install Compose](install.md) and then download Wordpress into the 14 | current directory: 15 | 16 | $ curl https://wordpress.org/latest.tar.gz | tar -xvzf - 17 | 18 | This will create a directory called `wordpress`. If you wish, you can rename it 19 | to the name of your project. 20 | 21 | Next, inside that directory, create a `Dockerfile`, a file that defines what 22 | environment your app is going to run in. For more information on how to write 23 | Dockerfiles, see the 24 | [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the 25 | [Dockerfile reference](http://docs.docker.com/reference/builder/). In this case, 26 | your Dockerfile should be: 27 | 28 | ``` 29 | FROM orchardup/php5 30 | ADD . /code 31 | ``` 32 | 33 | This tells Docker how to build an image defining a container that contains PHP 34 | and Wordpress.
35 | 36 | Next you'll create a `docker-compose.yml` file that will start your web service 37 | and a separate MySQL instance: 38 | 39 | ``` 40 | web: 41 | build: . 42 | command: php -S 0.0.0.0:8000 -t /code 43 | ports: 44 | - "8000:8000" 45 | links: 46 | - db 47 | volumes: 48 | - .:/code 49 | db: 50 | image: orchardup/mysql 51 | environment: 52 | MYSQL_DATABASE: wordpress 53 | ``` 54 | 55 | Two supporting files are needed to get this working - first, `wp-config.php` is 56 | the standard Wordpress config file with a single change to point the database 57 | configuration at the `db` container: 58 | 59 | ``` 60 | 51 | ### links 52 | 53 | Link to containers in another service. Either specify both the service name and 54 | the link alias (`SERVICE:ALIAS`), or just the service name (which will also be 55 | used for the alias). 56 | 57 | ``` 58 | links: 59 | - db 60 | - db:database 61 | - redis 62 | ``` 63 | 64 | An entry with the alias' name will be created in `/etc/hosts` inside containers 65 | for this service, e.g: 66 | 67 | ``` 68 | 172.17.2.186 db 69 | 172.17.2.186 database 70 | 172.17.2.187 redis 71 | ``` 72 | 73 | Environment variables will also be created - see the [environment variable 74 | reference](env.md) for details. 75 | 76 | ### external_links 77 | 78 | Link to containers started outside this `docker-compose.yml` or even outside 79 | of Compose, especially for containers that provide shared or common services. 80 | `external_links` follow semantics similar to `links` when specifying both the 81 | container name and the link alias (`CONTAINER:ALIAS`). 82 | 83 | ``` 84 | external_links: 85 | - redis_1 86 | - project_db_1:mysql 87 | - project_db_1:postgresql 88 | ``` 89 | 90 | ### ports 91 | 92 | Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container 93 | port (a random host port will be chosen). 
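The quoting advice in the note that follows exists because YAML 1.1 reads unquoted colon-separated numbers as base-60 (sexagesimal) integers. A pure-Python illustration of that arithmetic (this demonstrates the YAML spec's rule; it is not Compose code):

```python
def sexagesimal(token):
    """Interpret 'xx:yy' the way YAML 1.1 interprets an unquoted
    colon-separated number: as a base-60 (sexagesimal) integer."""
    value = 0
    for part in token.split(':'):
        value = value * 60 + int(part)
    return value

# An unquoted port mapping like 22:34 silently becomes one number:
print(sexagesimal('22:34'))  # 1354, not the mapping "22:34"
```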
94 | 95 | > **Note:** When mapping ports in the `HOST:CONTAINER` format, you may experience 96 | > erroneous results when using a container port lower than 60, because YAML will 97 | > parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason, 98 | > we recommend always explicitly specifying your port mappings as strings. 99 | 100 | ``` 101 | ports: 102 | - "3000" 103 | - "8000:8000" 104 | - "49100:22" 105 | - "127.0.0.1:8001:8001" 106 | ``` 107 | 108 | ### expose 109 | 110 | Expose ports without publishing them to the host machine - they'll only be 111 | accessible to linked services. Only the internal port can be specified. 112 | 113 | ``` 114 | expose: 115 | - "3000" 116 | - "8000" 117 | ``` 118 | 119 | ### volumes 120 | 121 | Mount paths as volumes, optionally specifying a path on the host machine 122 | (`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`). 123 | 124 | ``` 125 | volumes: 126 | - /var/lib/mysql 127 | - cache/:/tmp/cache 128 | - ~/configs:/etc/configs/:ro 129 | ``` 130 | 131 | ### volumes_from 132 | 133 | Mount all of the volumes from another service or container. 134 | 135 | ``` 136 | volumes_from: 137 | - service_name 138 | - container_name 139 | ``` 140 | 141 | ### environment 142 | 143 | Add environment variables. You can use either an array or a dictionary. 144 | 145 | Environment variables with only a key are resolved to their values on the 146 | machine Compose is running on, which can be helpful for secret or host-specific values. 147 | 148 | ``` 149 | environment: 150 | RACK_ENV: development 151 | SESSION_SECRET: 152 | 153 | environment: 154 | - RACK_ENV=development 155 | - SESSION_SECRET 156 | ``` 157 | 158 | ### env_file 159 | 160 | Add environment variables from a file. Can be a single value or a list. 161 | 162 | If you have specified a Compose file with `docker-compose -f FILE`, paths in 163 | `env_file` are relative to the directory that file is in. 
164 | 165 | Environment variables specified in `environment` override these values. 166 | 167 | ``` 168 | env_file: .env 169 | 170 | env_file: 171 | - ./common.env 172 | - ./apps/web.env 173 | - /opt/secrets.env 174 | ``` 175 | 176 | ``` 177 | RACK_ENV: development 178 | ``` 179 | 180 | ### extends 181 | 182 | Extend another service, in the current file or another, optionally overriding 183 | configuration. 184 | 185 | Here's a simple example. Suppose we have 2 files - **common.yml** and 186 | **development.yml**. We can use `extends` to define a service in 187 | **development.yml** which uses configuration defined in **common.yml**: 188 | 189 | **common.yml** 190 | 191 | ``` 192 | webapp: 193 | build: ./webapp 194 | environment: 195 | - DEBUG=false 196 | - SEND_EMAILS=false 197 | ``` 198 | 199 | **development.yml** 200 | 201 | ``` 202 | web: 203 | extends: 204 | file: common.yml 205 | service: webapp 206 | ports: 207 | - "8000:8000" 208 | links: 209 | - db 210 | environment: 211 | - DEBUG=true 212 | db: 213 | image: postgres 214 | ``` 215 | 216 | Here, the `web` service in **development.yml** inherits the configuration of 217 | the `webapp` service in **common.yml** - the `build` and `environment` keys - 218 | and adds `ports` and `links` configuration. It overrides one of the defined 219 | environment variables (DEBUG) with a new value, and the other one 220 | (SEND_EMAILS) is left untouched. It's exactly as if you defined `web` like 221 | this: 222 | 223 | ```yaml 224 | web: 225 | build: ./webapp 226 | ports: 227 | - "8000:8000" 228 | links: 229 | - db 230 | environment: 231 | - DEBUG=true 232 | - SEND_EMAILS=false 233 | ``` 234 | 235 | The `extends` option is great for sharing configuration between different 236 | apps, or for configuring the same app differently for different environments. 
237 | You could write a new file for a staging environment, **staging.yml**, which 238 | binds to a different port and doesn't turn on debugging: 239 | 240 | ``` 241 | web: 242 | extends: 243 | file: common.yml 244 | service: webapp 245 | ports: 246 | - "80:8000" 247 | links: 248 | - db 249 | db: 250 | image: postgres 251 | ``` 252 | 253 | > **Note:** When you extend a service, `links` and `volumes_from` 254 | > configuration options are **not** inherited - you will have to define 255 | > those manually each time you extend it. 256 | 257 | ### net 258 | 259 | Networking mode. Use the same values as the docker client `--net` parameter. 260 | 261 | ``` 262 | net: "bridge" 263 | net: "none" 264 | net: "container:[name or id]" 265 | net: "host" 266 | ``` 267 | 268 | ### dns 269 | 270 | Custom DNS servers. Can be a single value or a list. 271 | 272 | ``` 273 | dns: 8.8.8.8 274 | dns: 275 | - 8.8.8.8 276 | - 9.9.9.9 277 | ``` 278 | 279 | ### cap_add, cap_drop 280 | 281 | Add or drop container capabilities. 282 | See `man 7 capabilities` for a full list. 283 | 284 | ``` 285 | cap_add: 286 | - ALL 287 | 288 | cap_drop: 289 | - NET_ADMIN 290 | - SYS_ADMIN 291 | ``` 292 | 293 | ### dns_search 294 | 295 | Custom DNS search domains. Can be a single value or a list. 296 | 297 | ``` 298 | dns_search: example.com 299 | dns_search: 300 | - dc1.example.com 301 | - dc2.example.com 302 | ``` 303 | 304 | ### working\_dir, entrypoint, user, hostname, domainname, mem\_limit, privileged, restart, stdin\_open, tty, cpu\_shares 305 | 306 | Each of these is a single value, analogous to its 307 | [docker run](https://docs.docker.com/reference/run/) counterpart. 
308 | 309 | ``` 310 | cpu_shares: 73 311 | 312 | working_dir: /code 313 | entrypoint: /code/entrypoint.sh 314 | user: postgresql 315 | 316 | hostname: foo 317 | domainname: foo.com 318 | 319 | mem_limit: 1000000000 320 | privileged: true 321 | 322 | restart: always 323 | 324 | stdin_open: true 325 | tty: true 326 | ``` 327 | 328 | ## Compose documentation 329 | 330 | - [Installing Compose](install.md) 331 | - [User guide](index.md) 332 | - [Command line reference](cli.md) 333 | - [Compose environment variables](env.md) 334 | - [Compose command line completion](completion.md) 335 | -------------------------------------------------------------------------------- /requirements-dev.txt: -------------------------------------------------------------------------------- 1 | mock >= 1.0.1 2 | nose==1.3.4 3 | git+https://github.com/pyinstaller/pyinstaller.git@12e40471c77f588ea5be352f7219c873ddaae056#egg=pyinstaller 4 | unittest2==0.8.0 5 | flake8==2.3.0 6 | pep8==1.6.1 7 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | PyYAML==3.10 2 | docker-py==1.0.0 3 | dockerpty==0.3.2 4 | docopt==0.6.1 5 | requests==2.2.1 6 | six==1.7.3 7 | texttable==0.8.2 8 | websocket-client==0.11.0 9 | -------------------------------------------------------------------------------- /script/.validate: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$VALIDATE_UPSTREAM" ]; then 4 | # this is kind of an expensive check, so let's not do this twice if we 5 | # are running more than one validate bundlescript 6 | 7 | VALIDATE_REPO='https://github.com/docker/fig.git' 8 | VALIDATE_BRANCH='master' 9 | 10 | if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then 11 | VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git" 12 | VALIDATE_BRANCH="${TRAVIS_BRANCH}" 13 | fi 14 | 15 | 
VALIDATE_HEAD="$(git rev-parse --verify HEAD)" 16 | 17 | git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH" 18 | VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)" 19 | 20 | VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD" 21 | VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD" 22 | 23 | validate_diff() { 24 | if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then 25 | git diff "$VALIDATE_COMMIT_DIFF" "$@" 26 | fi 27 | } 28 | validate_log() { 29 | if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then 30 | git log "$VALIDATE_COMMIT_LOG" "$@" 31 | fi 32 | } 33 | fi 34 | -------------------------------------------------------------------------------- /script/build-linux: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -ex 4 | 5 | TAG="docker-compose" 6 | docker build -t "$TAG" . 7 | docker run \ 8 | --rm \ 9 | --user=user \ 10 | --volume="$(pwd):/code" \ 11 | --entrypoint="script/build-linux-inner" \ 12 | "$TAG" 13 | -------------------------------------------------------------------------------- /script/build-linux-inner: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -ex 4 | 5 | mkdir -p `pwd`/dist 6 | chmod 777 `pwd`/dist 7 | 8 | pyinstaller -F bin/docker-compose 9 | mv dist/docker-compose dist/docker-compose-`uname -s`-`uname -m` 10 | dist/docker-compose-`uname -s`-`uname -m` --version 11 | -------------------------------------------------------------------------------- /script/build-osx: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -ex 3 | rm -rf venv 4 | virtualenv venv 5 | venv/bin/pip install -r requirements.txt 6 | venv/bin/pip install -r requirements-dev.txt 7 | venv/bin/pip install . 
8 | venv/bin/pyinstaller -F bin/docker-compose 9 | mv dist/docker-compose dist/docker-compose-Darwin-x86_64 10 | dist/docker-compose-Darwin-x86_64 --version 11 | -------------------------------------------------------------------------------- /script/ci: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # This should be run inside a container built from the Dockerfile 3 | # at the root of the repo: 4 | # 5 | # $ TAG="docker-compose:$(git rev-parse --short HEAD)" 6 | # $ docker build -t "$TAG" . 7 | # $ docker run --rm --volume="/var/run/docker.sock:/var/run/docker.sock" --volume="$(pwd)/.git:/code/.git" -e "TAG=$TAG" --entrypoint="script/ci" "$TAG" 8 | 9 | set -e 10 | 11 | >&2 echo "Validating DCO" 12 | script/validate-dco 13 | 14 | export DOCKER_VERSIONS=all 15 | . script/test-versions 16 | 17 | >&2 echo "Building Linux binary" 18 | su -c script/build-linux-inner user 19 | -------------------------------------------------------------------------------- /script/clean: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | find . -type f -name '*.pyc' -delete 3 | rm -rf docs/_site build dist docker-compose.egg-info 4 | -------------------------------------------------------------------------------- /script/dev: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # This is a script for running Compose inside a Docker container. It's handy for 3 | # development. 4 | # 5 | # $ ln -s `pwd`/script/dev /usr/local/bin/docker-compose 6 | # $ cd /a/compose/project 7 | # $ docker-compose up 8 | # 9 | 10 | set -e 11 | 12 | # Follow symbolic links 13 | if [ -h "$0" ]; then 14 | DIR=$(readlink "$0") 15 | else 16 | DIR=$0 17 | fi 18 | DIR="$(dirname "$DIR")"/.. 
19 | 20 | docker build -t docker-compose $DIR 21 | exec docker run -i -t -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:`pwd` -w `pwd` docker-compose "$@" 22 | -------------------------------------------------------------------------------- /script/dind: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | # DinD: a wrapper script which allows docker to be run inside a docker container. 5 | # Original version by Jerome Petazzoni 6 | # See the blog post: http://blog.docker.com/2013/09/docker-can-now-run-within-docker/ 7 | # 8 | # This script should be executed inside a docker container in privileged mode 9 | # ('docker run --privileged', introduced in docker 0.6). 10 | 11 | # Usage: dind CMD [ARG...] 12 | 13 | # apparmor sucks and Docker needs to know that it's in a container (c) @tianon 14 | export container=docker 15 | 16 | # First, make sure that cgroups are mounted correctly. 17 | CGROUP=/cgroup 18 | 19 | mkdir -p "$CGROUP" 20 | 21 | if ! mountpoint -q "$CGROUP"; then 22 | mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || { 23 | echo >&2 'Could not make a tmpfs mount. Did you use --privileged?' 24 | exit 1 25 | } 26 | fi 27 | 28 | if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then 29 | mount -t securityfs none /sys/kernel/security || { 30 | echo >&2 'Could not mount /sys/kernel/security.' 31 | echo >&2 'AppArmor detection and --privileged mode might break.' 32 | } 33 | fi 34 | 35 | # Mount the cgroup hierarchies exactly as they are in the parent system. 36 | for SUBSYS in $(cut -d: -f2 /proc/1/cgroup); do 37 | mkdir -p "$CGROUP/$SUBSYS" 38 | if ! mountpoint -q $CGROUP/$SUBSYS; then 39 | mount -n -t cgroup -o "$SUBSYS" cgroup "$CGROUP/$SUBSYS" 40 | fi 41 | 42 | # The two following sections address a bug which manifests itself 43 | # by a cryptic "lxc-start: no ns_cgroup option specified" when 44 | # trying to start containers within a container. 
45 | # The bug seems to appear when the cgroup hierarchies are not 46 | # mounted on the exact same directories in the host, and in the 47 | # container. 48 | 49 | # Named, control-less cgroups are mounted with "-o name=foo" 50 | # (and appear as such under /proc/<pid>/cgroup) but are usually 51 | # mounted on a directory named "foo" (without the "name=" prefix). 52 | # Systemd and OpenRC (and possibly others) both create such a 53 | # cgroup. To avoid the aforementioned bug, we symlink "foo" to 54 | # "name=foo". This shouldn't have any adverse effect. 55 | name="${SUBSYS#name=}" 56 | if [ "$name" != "$SUBSYS" ]; then 57 | ln -s "$SUBSYS" "$CGROUP/$name" 58 | fi 59 | 60 | # Likewise, on at least one system, it has been reported that 61 | # systemd would mount the CPU and CPU accounting controllers 62 | # (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu" 63 | # but on a directory called "cpu,cpuacct" (note the inversion 64 | # in the order of the groups). This tries to work around it. 65 | if [ "$SUBSYS" = 'cpuacct,cpu' ]; then 66 | ln -s "$SUBSYS" "$CGROUP/cpu,cpuacct" 67 | fi 68 | done 69 | 70 | # Note: as I write those lines, the LXC userland tools cannot set up 71 | # a "sub-container" properly if the "devices" cgroup is not in its 72 | # own hierarchy. Let's detect this and issue a warning. 73 | if ! grep -q :devices: /proc/1/cgroup; then 74 | echo >&2 'WARNING: the "devices" cgroup should be in its own hierarchy.' 75 | fi 76 | if ! grep -qw devices /proc/1/cgroup; then 77 | echo >&2 'WARNING: it looks like the "devices" cgroup is not mounted.' 78 | fi 79 | 80 | # Mount /tmp 81 | mount -t tmpfs none /tmp 82 | 83 | if [ $# -gt 0 ]; then 84 | exec "$@" 85 | fi 86 | 87 | echo >&2 'ERROR: No command specified.' 88 | echo >&2 'You probably want to run hack/make.sh, or maybe a shell?' 
89 | -------------------------------------------------------------------------------- /script/docs: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | set -ex 3 | 4 | # import the existing docs build cmds from docker/docker 5 | DOCSPORT=8000 6 | GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null) 7 | DOCKER_DOCS_IMAGE="compose-docs$GIT_BRANCH" 8 | DOCKER_RUN_DOCS="docker run --rm -it -e NOCACHE" 9 | 10 | docker build -t "$DOCKER_DOCS_IMAGE" -f docs/Dockerfile . 11 | $DOCKER_RUN_DOCS -p $DOCSPORT:8000 "$DOCKER_DOCS_IMAGE" mkdocs serve 12 | -------------------------------------------------------------------------------- /script/shell: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | set -ex 3 | docker build -t docker-compose . 4 | exec docker run -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:/code -ti --rm --entrypoint bash docker-compose 5 | -------------------------------------------------------------------------------- /script/test: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # See CONTRIBUTING.md for usage. 3 | 4 | set -ex 5 | 6 | TAG="docker-compose:$(git rev-parse --short HEAD)" 7 | 8 | docker build -t "$TAG" . 9 | docker run \ 10 | --rm \ 11 | --volume="/var/run/docker.sock:/var/run/docker.sock" \ 12 | --volume="$(pwd):/code" \ 13 | -e DOCKER_VERSIONS \ 14 | -e "TAG=$TAG" \ 15 | --entrypoint="script/test-versions" \ 16 | "$TAG" \ 17 | "$@" 18 | -------------------------------------------------------------------------------- /script/test-versions: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # This should be run inside a container built from the Dockerfile 3 | # at the root of the repo - script/test will do it automatically. 
4 | 5 | set -e 6 | 7 | >&2 echo "Running lint checks" 8 | flake8 compose 9 | 10 | if [ "$DOCKER_VERSIONS" == "" ]; then 11 | DOCKER_VERSIONS="1.5.0" 12 | elif [ "$DOCKER_VERSIONS" == "all" ]; then 13 | DOCKER_VERSIONS="$ALL_DOCKER_VERSIONS" 14 | fi 15 | 16 | for version in $DOCKER_VERSIONS; do 17 | >&2 echo "Running tests against Docker $version" 18 | docker run \ 19 | --rm \ 20 | --privileged \ 21 | --volume="/var/lib/docker" \ 22 | -e "DOCKER_VERSION=$version" \ 23 | --entrypoint="script/dind" \ 24 | "$TAG" \ 25 | script/wrapdocker nosetests "$@" 26 | done 27 | -------------------------------------------------------------------------------- /script/validate-dco: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | source "$(dirname "$BASH_SOURCE")/.validate" 6 | 7 | adds=$(validate_diff --numstat | awk '{ s += $1 } END { print s }') 8 | dels=$(validate_diff --numstat | awk '{ s += $2 } END { print s }') 9 | notDocs="$(validate_diff --numstat | awk '$3 !~ /^docs\// { print $3 }')" 10 | 11 | : ${adds:=0} 12 | : ${dels:=0} 13 | 14 | # "Username may only contain alphanumeric characters or dashes and cannot begin with a dash" 15 | githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+' 16 | 17 | # https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work 18 | dcoPrefix='Signed-off-by:' 19 | dcoRegex="^(Docker-DCO-1.1-)?$dcoPrefix ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$" 20 | 21 | check_dco() { 22 | grep -qE "$dcoRegex" 23 | } 24 | 25 | if [ $adds -eq 0 -a $dels -eq 0 ]; then 26 | echo '0 adds, 0 deletions; nothing to validate! :)' 27 | elif [ -z "$notDocs" -a $adds -le 1 -a $dels -le 1 ]; then 28 | echo 'Congratulations! DCO small-patch-exception material!' 
29 | else 30 | commits=( $(validate_log --format='format:%H%n') ) 31 | badCommits=() 32 | for commit in "${commits[@]}"; do 33 | if [ -z "$(git log -1 --format='format:' --name-status "$commit")" ]; then 34 | # no content (ie, Merge commit, etc) 35 | continue 36 | fi 37 | if ! git log -1 --format='format:%B' "$commit" | check_dco; then 38 | badCommits+=( "$commit" ) 39 | fi 40 | done 41 | if [ ${#badCommits[@]} -eq 0 ]; then 42 | echo "Congratulations! All commits are properly signed with the DCO!" 43 | else 44 | { 45 | echo "These commits do not have a proper '$dcoPrefix' marker:" 46 | for commit in "${badCommits[@]}"; do 47 | echo " - $commit" 48 | done 49 | echo 50 | echo 'Please amend each commit to include a properly formatted DCO marker.' 51 | echo 52 | echo 'Visit the following URL for information about the Docker DCO:' 53 | echo ' https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work' 54 | echo 55 | } >&2 56 | false 57 | fi 58 | fi 59 | -------------------------------------------------------------------------------- /script/wrapdocker: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ "$DOCKER_VERSION" == "" ]; then 4 | DOCKER_VERSION="1.5.0" 5 | fi 6 | 7 | ln -fs "/usr/local/bin/docker-$DOCKER_VERSION" "/usr/local/bin/docker" 8 | 9 | # If a pidfile is still around (for example after a container restart), 10 | # delete it so that docker can start. 11 | rm -rf /var/run/docker.pid 12 | docker -d $DOCKER_DAEMON_ARGS &>/var/log/docker.log & 13 | 14 | >&2 echo "Waiting for Docker to start..." 15 | while ! 
docker ps &>/dev/null; do 16 | sleep 1 17 | done 18 | 19 | >&2 echo ">" "$@" 20 | exec "$@" 21 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | from __future__ import unicode_literals 4 | from __future__ import absolute_import 5 | from setuptools import setup, find_packages 6 | import codecs 7 | import os 8 | import re 9 | import sys 10 | 11 | 12 | def read(*parts): 13 | path = os.path.join(os.path.dirname(__file__), *parts) 14 | with codecs.open(path, encoding='utf-8') as fobj: 15 | return fobj.read() 16 | 17 | 18 | def find_version(*file_paths): 19 | version_file = read(*file_paths) 20 | version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", 21 | version_file, re.M) 22 | if version_match: 23 | return version_match.group(1) 24 | raise RuntimeError("Unable to find version string.") 25 | 26 | 27 | install_requires = [ 28 | 'docopt >= 0.6.1, < 0.7', 29 | 'PyYAML >= 3.10, < 4', 30 | 'requests >= 2.2.1, < 2.6', 31 | 'texttable >= 0.8.1, < 0.9', 32 | 'websocket-client >= 0.11.0, < 1.0', 33 | 'docker-py >= 1.0.0, < 1.2', 34 | 'dockerpty >= 0.3.2, < 0.4', 35 | 'six >= 1.3.0, < 2', 36 | ] 37 | 38 | tests_require = [ 39 | 'mock >= 1.0.1', 40 | 'nose', 41 | 'pyinstaller', 42 | 'flake8', 43 | ] 44 | 45 | 46 | if sys.version_info < (2, 7): 47 | tests_require.append('unittest2') 48 | 49 | 50 | setup( 51 | name='docker-compose', 52 | version=find_version("compose", "__init__.py"), 53 | description='Multi-container orchestration for Docker', 54 | url='https://www.docker.com/', 55 | author='Docker, Inc.', 56 | license='Apache License 2.0', 57 | packages=find_packages(exclude=[ 'tests.*', 'tests' ]), 58 | include_package_data=True, 59 | test_suite='nose.collector', 60 | install_requires=install_requires, 61 | tests_require=tests_require, 62 | entry_points=""" 63 | [console_scripts] 64 | 
docker-compose=compose.cli.main:main 65 | """, 66 | ) 67 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | import sys 2 | 3 | if sys.version_info >= (2,7): 4 | import unittest 5 | else: 6 | import unittest2 as unittest 7 | 8 | -------------------------------------------------------------------------------- /tests/fixtures/UpperCaseDir/docker-compose.yml: -------------------------------------------------------------------------------- 1 | simple: 2 | image: busybox:latest 3 | command: /bin/sleep 300 4 | another: 5 | image: busybox:latest 6 | command: /bin/sleep 300 7 | -------------------------------------------------------------------------------- /tests/fixtures/build-ctx/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM busybox:latest 2 | CMD echo "success" 3 | -------------------------------------------------------------------------------- /tests/fixtures/build-path/docker-compose.yml: -------------------------------------------------------------------------------- 1 | foo: 2 | build: ../build-ctx/ 3 | -------------------------------------------------------------------------------- /tests/fixtures/commands-composefile/docker-compose.yml: -------------------------------------------------------------------------------- 1 | implicit: 2 | image: composetest_test 3 | explicit: 4 | image: composetest_test 5 | command: [ "/bin/true" ] 6 | -------------------------------------------------------------------------------- /tests/fixtures/dockerfile-with-volume/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM busybox 2 | VOLUME /data 3 | CMD sleep 3000 4 | -------------------------------------------------------------------------------- /tests/fixtures/dockerfile_with_entrypoint/Dockerfile: 
-------------------------------------------------------------------------------- 1 | FROM busybox:latest 2 | ENTRYPOINT echo "From prebuilt entrypoint" 3 | -------------------------------------------------------------------------------- /tests/fixtures/dockerfile_with_entrypoint/docker-compose.yml: -------------------------------------------------------------------------------- 1 | service: 2 | build: . 3 | -------------------------------------------------------------------------------- /tests/fixtures/env-file/docker-compose.yml: -------------------------------------------------------------------------------- 1 | web: 2 | image: busybox 3 | command: /bin/true 4 | env_file: ./test.env 5 | -------------------------------------------------------------------------------- /tests/fixtures/env-file/test.env: -------------------------------------------------------------------------------- 1 | FOO=1 -------------------------------------------------------------------------------- /tests/fixtures/env/one.env: -------------------------------------------------------------------------------- 1 | # Keep the blank lines and comments in this file, please 2 | 3 | ONE=2 4 | TWO=1 5 | 6 | # (thanks) 7 | 8 | THREE=3 9 | 10 | FOO=bar 11 | # FOO=somethingelse 12 | -------------------------------------------------------------------------------- /tests/fixtures/env/resolve.env: -------------------------------------------------------------------------------- 1 | FILE_DEF=F1 2 | FILE_DEF_EMPTY= 3 | ENV_DEF 4 | NO_DEF 5 | -------------------------------------------------------------------------------- /tests/fixtures/env/two.env: -------------------------------------------------------------------------------- 1 | FOO=baz 2 | DOO=dah 3 | -------------------------------------------------------------------------------- /tests/fixtures/environment-composefile/docker-compose.yml: -------------------------------------------------------------------------------- 1 | service: 2 | image: 
busybox:latest 3 | command: sleep 5 4 | 5 | environment: 6 | foo: bar 7 | hello: world 8 | -------------------------------------------------------------------------------- /tests/fixtures/extends/circle-1.yml: -------------------------------------------------------------------------------- 1 | foo: 2 | image: busybox 3 | bar: 4 | image: busybox 5 | web: 6 | extends: 7 | file: circle-2.yml 8 | service: web 9 | baz: 10 | image: busybox 11 | quux: 12 | image: busybox 13 | -------------------------------------------------------------------------------- /tests/fixtures/extends/circle-2.yml: -------------------------------------------------------------------------------- 1 | foo: 2 | image: busybox 3 | bar: 4 | image: busybox 5 | web: 6 | extends: 7 | file: circle-1.yml 8 | service: web 9 | baz: 10 | image: busybox 11 | quux: 12 | image: busybox 13 | -------------------------------------------------------------------------------- /tests/fixtures/extends/common.yml: -------------------------------------------------------------------------------- 1 | web: 2 | image: busybox 3 | command: /bin/true 4 | environment: 5 | - FOO=1 6 | - BAR=1 7 | -------------------------------------------------------------------------------- /tests/fixtures/extends/docker-compose.yml: -------------------------------------------------------------------------------- 1 | myweb: 2 | extends: 3 | file: common.yml 4 | service: web 5 | command: sleep 300 6 | links: 7 | - "mydb:db" 8 | environment: 9 | # leave FOO alone 10 | # override BAR 11 | BAR: "2" 12 | # add BAZ 13 | BAZ: "2" 14 | mydb: 15 | image: busybox 16 | command: sleep 300 17 | -------------------------------------------------------------------------------- /tests/fixtures/extends/nested-intermediate.yml: -------------------------------------------------------------------------------- 1 | webintermediate: 2 | extends: 3 | file: common.yml 4 | service: web 5 | environment: 6 | - "FOO=2" 7 | 
-------------------------------------------------------------------------------- /tests/fixtures/extends/nested.yml: -------------------------------------------------------------------------------- 1 | myweb: 2 | extends: 3 | file: nested-intermediate.yml 4 | service: webintermediate 5 | environment: 6 | - "BAR=2" 7 | -------------------------------------------------------------------------------- /tests/fixtures/links-composefile/docker-compose.yml: -------------------------------------------------------------------------------- 1 | db: 2 | image: busybox:latest 3 | command: /bin/sleep 300 4 | web: 5 | image: busybox:latest 6 | command: /bin/sleep 300 7 | links: 8 | - db:db 9 | console: 10 | image: busybox:latest 11 | command: /bin/sleep 300 12 | -------------------------------------------------------------------------------- /tests/fixtures/longer-filename-composefile/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | definedinyamlnotyml: 2 | image: busybox:latest 3 | command: /bin/sleep 300 -------------------------------------------------------------------------------- /tests/fixtures/multiple-composefiles/compose2.yml: -------------------------------------------------------------------------------- 1 | yetanother: 2 | image: busybox:latest 3 | command: /bin/sleep 300 4 | -------------------------------------------------------------------------------- /tests/fixtures/multiple-composefiles/docker-compose.yml: -------------------------------------------------------------------------------- 1 | simple: 2 | image: busybox:latest 3 | command: /bin/sleep 300 4 | another: 5 | image: busybox:latest 6 | command: /bin/sleep 300 7 | -------------------------------------------------------------------------------- /tests/fixtures/no-composefile/.gitignore: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/hypriot/compose/7b5276774168bbe98209be7ce3ea05fd2a2131ea/tests/fixtures/no-composefile/.gitignore -------------------------------------------------------------------------------- /tests/fixtures/ports-composefile/docker-compose.yml: -------------------------------------------------------------------------------- 1 | 2 | simple: 3 | image: busybox:latest 4 | command: /bin/sleep 300 5 | ports: 6 | - '3000' 7 | - '49152:3001' 8 | -------------------------------------------------------------------------------- /tests/fixtures/simple-composefile/docker-compose.yml: -------------------------------------------------------------------------------- 1 | simple: 2 | image: busybox:latest 3 | command: /bin/sleep 300 4 | another: 5 | image: busybox:latest 6 | command: /bin/sleep 300 7 | -------------------------------------------------------------------------------- /tests/fixtures/simple-dockerfile/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM busybox:latest 2 | CMD echo "success" 3 | -------------------------------------------------------------------------------- /tests/fixtures/simple-dockerfile/docker-compose.yml: -------------------------------------------------------------------------------- 1 | simple: 2 | build: . 
3 | -------------------------------------------------------------------------------- /tests/fixtures/user-composefile/docker-compose.yml: -------------------------------------------------------------------------------- 1 | service: 2 | image: busybox:latest 3 | user: notauser 4 | command: id 5 | -------------------------------------------------------------------------------- /tests/fixtures/volume-path/common/services.yml: -------------------------------------------------------------------------------- 1 | db: 2 | image: busybox 3 | volumes: 4 | - ./foo:/foo 5 | - ./bar:/bar 6 | -------------------------------------------------------------------------------- /tests/fixtures/volume-path/docker-compose.yml: -------------------------------------------------------------------------------- 1 | db: 2 | extends: 3 | file: common/services.yml 4 | service: db 5 | volumes: 6 | - ./bar:/bar 7 | -------------------------------------------------------------------------------- /tests/integration/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hypriot/compose/7b5276774168bbe98209be7ce3ea05fd2a2131ea/tests/integration/__init__.py -------------------------------------------------------------------------------- /tests/integration/project_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from compose import config 3 | from compose.project import Project 4 | from compose.container import Container 5 | from .testcases import DockerClientTestCase 6 | 7 | 8 | class ProjectTest(DockerClientTestCase): 9 | def test_volumes_from_service(self): 10 | service_dicts = config.from_dictionary({ 11 | 'data': { 12 | 'image': 'busybox:latest', 13 | 'volumes': ['/var/data'], 14 | }, 15 | 'db': { 16 | 'image': 'busybox:latest', 17 | 'volumes_from': ['data'], 18 | }, 19 | }, working_dir='.') 20 | project = Project.from_dicts( 21 | 
name='composetest', 22 | service_dicts=service_dicts, 23 | client=self.client, 24 | ) 25 | db = project.get_service('db') 26 | data = project.get_service('data') 27 | self.assertEqual(db.volumes_from, [data]) 28 | 29 | def test_volumes_from_container(self): 30 | data_container = Container.create( 31 | self.client, 32 | image='busybox:latest', 33 | volumes=['/var/data'], 34 | name='composetest_data_container', 35 | ) 36 | project = Project.from_dicts( 37 | name='composetest', 38 | service_dicts=config.from_dictionary({ 39 | 'db': { 40 | 'image': 'busybox:latest', 41 | 'volumes_from': ['composetest_data_container'], 42 | }, 43 | }), 44 | client=self.client, 45 | ) 46 | db = project.get_service('db') 47 | self.assertEqual(db.volumes_from, [data_container]) 48 | 49 | project.kill() 50 | project.remove_stopped() 51 | 52 | def test_net_from_service(self): 53 | project = Project.from_dicts( 54 | name='composetest', 55 | service_dicts=config.from_dictionary({ 56 | 'net': { 57 | 'image': 'busybox:latest', 58 | 'command': ["/bin/sleep", "300"] 59 | }, 60 | 'web': { 61 | 'image': 'busybox:latest', 62 | 'net': 'container:net', 63 | 'command': ["/bin/sleep", "300"] 64 | }, 65 | }), 66 | client=self.client, 67 | ) 68 | 69 | project.up() 70 | 71 | web = project.get_service('web') 72 | net = project.get_service('net') 73 | self.assertEqual(web._get_net(), 'container:'+net.containers()[0].id) 74 | 75 | project.kill() 76 | project.remove_stopped() 77 | 78 | def test_net_from_container(self): 79 | net_container = Container.create( 80 | self.client, 81 | image='busybox:latest', 82 | name='composetest_net_container', 83 | command='/bin/sleep 300' 84 | ) 85 | net_container.start() 86 | 87 | project = Project.from_dicts( 88 | name='composetest', 89 | service_dicts=config.from_dictionary({ 90 | 'web': { 91 | 'image': 'busybox:latest', 92 | 'net': 'container:composetest_net_container' 93 | }, 94 | }), 95 | client=self.client, 96 | ) 97 | 98 | project.up() 99 | 100 | web = 
project.get_service('web') 101 | self.assertEqual(web._get_net(), 'container:'+net_container.id) 102 | 103 | project.kill() 104 | project.remove_stopped() 105 | 106 | def test_start_stop_kill_remove(self): 107 | web = self.create_service('web') 108 | db = self.create_service('db') 109 | project = Project('composetest', [web, db], self.client) 110 | 111 | project.start() 112 | 113 | self.assertEqual(len(web.containers()), 0) 114 | self.assertEqual(len(db.containers()), 0) 115 | 116 | web_container_1 = web.create_container() 117 | web_container_2 = web.create_container() 118 | db_container = db.create_container() 119 | 120 | project.start(service_names=['web']) 121 | self.assertEqual(set(c.name for c in project.containers()), set([web_container_1.name, web_container_2.name])) 122 | 123 | project.start() 124 | self.assertEqual(set(c.name for c in project.containers()), set([web_container_1.name, web_container_2.name, db_container.name])) 125 | 126 | project.stop(service_names=['web'], timeout=1) 127 | self.assertEqual(set(c.name for c in project.containers()), set([db_container.name])) 128 | 129 | project.kill(service_names=['db']) 130 | self.assertEqual(len(project.containers()), 0) 131 | self.assertEqual(len(project.containers(stopped=True)), 3) 132 | 133 | project.remove_stopped(service_names=['web']) 134 | self.assertEqual(len(project.containers(stopped=True)), 1) 135 | 136 | project.remove_stopped() 137 | self.assertEqual(len(project.containers(stopped=True)), 0) 138 | 139 | def test_project_up(self): 140 | web = self.create_service('web') 141 | db = self.create_service('db', volumes=['/var/db']) 142 | project = Project('composetest', [web, db], self.client) 143 | project.start() 144 | self.assertEqual(len(project.containers()), 0) 145 | 146 | project.up(['db']) 147 | self.assertEqual(len(project.containers()), 1) 148 | self.assertEqual(len(db.containers()), 1) 149 | self.assertEqual(len(web.containers()), 0) 150 | 151 | project.kill() 152 | 
project.remove_stopped() 153 | 154 | def test_project_up_recreates_containers(self): 155 | web = self.create_service('web') 156 | db = self.create_service('db', volumes=['/etc']) 157 | project = Project('composetest', [web, db], self.client) 158 | project.start() 159 | self.assertEqual(len(project.containers()), 0) 160 | 161 | project.up(['db']) 162 | self.assertEqual(len(project.containers()), 1) 163 | old_db_id = project.containers()[0].id 164 | db_volume_path = project.containers()[0].get('Volumes./etc') 165 | 166 | project.up() 167 | self.assertEqual(len(project.containers()), 2) 168 | 169 | db_container = [c for c in project.containers() if 'db' in c.name][0] 170 | self.assertNotEqual(db_container.id, old_db_id) 171 | self.assertEqual(db_container.get('Volumes./etc'), db_volume_path) 172 | 173 | project.kill() 174 | project.remove_stopped() 175 | 176 | def test_project_up_with_no_recreate_running(self): 177 | web = self.create_service('web') 178 | db = self.create_service('db', volumes=['/var/db']) 179 | project = Project('composetest', [web, db], self.client) 180 | project.start() 181 | self.assertEqual(len(project.containers()), 0) 182 | 183 | project.up(['db']) 184 | self.assertEqual(len(project.containers()), 1) 185 | old_db_id = project.containers()[0].id 186 | db_volume_path = project.containers()[0].inspect()['Volumes']['/var/db'] 187 | 188 | project.up(recreate=False) 189 | self.assertEqual(len(project.containers()), 2) 190 | 191 | db_container = [c for c in project.containers() if 'db' in c.name][0] 192 | self.assertEqual(db_container.id, old_db_id) 193 | self.assertEqual(db_container.inspect()['Volumes']['/var/db'], 194 | db_volume_path) 195 | 196 | project.kill() 197 | project.remove_stopped() 198 | 199 | def test_project_up_with_no_recreate_stopped(self): 200 | web = self.create_service('web') 201 | db = self.create_service('db', volumes=['/var/db']) 202 | project = Project('composetest', [web, db], self.client) 203 | project.start() 204 | 
self.assertEqual(len(project.containers()), 0) 205 | 206 | project.up(['db']) 207 | project.stop() 208 | 209 | old_containers = project.containers(stopped=True) 210 | 211 | self.assertEqual(len(old_containers), 1) 212 | old_db_id = old_containers[0].id 213 | db_volume_path = old_containers[0].inspect()['Volumes']['/var/db'] 214 | 215 | project.up(recreate=False) 216 | 217 | new_containers = project.containers(stopped=True) 218 | self.assertEqual(len(new_containers), 2) 219 | 220 | db_container = [c for c in new_containers if 'db' in c.name][0] 221 | self.assertEqual(db_container.id, old_db_id) 222 | self.assertEqual(db_container.inspect()['Volumes']['/var/db'], 223 | db_volume_path) 224 | 225 | project.kill() 226 | project.remove_stopped() 227 | 228 | def test_project_up_without_all_services(self): 229 | console = self.create_service('console') 230 | db = self.create_service('db') 231 | project = Project('composetest', [console, db], self.client) 232 | project.start() 233 | self.assertEqual(len(project.containers()), 0) 234 | 235 | project.up() 236 | self.assertEqual(len(project.containers()), 2) 237 | self.assertEqual(len(db.containers()), 1) 238 | self.assertEqual(len(console.containers()), 1) 239 | 240 | project.kill() 241 | project.remove_stopped() 242 | 243 | def test_project_up_starts_links(self): 244 | console = self.create_service('console') 245 | db = self.create_service('db', volumes=['/var/db']) 246 | web = self.create_service('web', links=[(db, 'db')]) 247 | 248 | project = Project('composetest', [web, db, console], self.client) 249 | project.start() 250 | self.assertEqual(len(project.containers()), 0) 251 | 252 | project.up(['web']) 253 | self.assertEqual(len(project.containers()), 2) 254 | self.assertEqual(len(web.containers()), 1) 255 | self.assertEqual(len(db.containers()), 1) 256 | self.assertEqual(len(console.containers()), 0) 257 | 258 | project.kill() 259 | project.remove_stopped() 260 | 261 | def test_project_up_starts_depends(self): 262 | 
project = Project.from_dicts( 263 | name='composetest', 264 | service_dicts=config.from_dictionary({ 265 | 'console': { 266 | 'image': 'busybox:latest', 267 | 'command': ["/bin/sleep", "300"], 268 | }, 269 | 'data' : { 270 | 'image': 'busybox:latest', 271 | 'command': ["/bin/sleep", "300"] 272 | }, 273 | 'db': { 274 | 'image': 'busybox:latest', 275 | 'command': ["/bin/sleep", "300"], 276 | 'volumes_from': ['data'], 277 | }, 278 | 'web': { 279 | 'image': 'busybox:latest', 280 | 'command': ["/bin/sleep", "300"], 281 | 'links': ['db'], 282 | }, 283 | }), 284 | client=self.client, 285 | ) 286 | project.start() 287 | self.assertEqual(len(project.containers()), 0) 288 | 289 | project.up(['web']) 290 | self.assertEqual(len(project.containers()), 3) 291 | self.assertEqual(len(project.get_service('web').containers()), 1) 292 | self.assertEqual(len(project.get_service('db').containers()), 1) 293 | self.assertEqual(len(project.get_service('data').containers()), 1) 294 | self.assertEqual(len(project.get_service('console').containers()), 0) 295 | 296 | project.kill() 297 | project.remove_stopped() 298 | 299 | def test_project_up_with_no_deps(self): 300 | project = Project.from_dicts( 301 | name='composetest', 302 | service_dicts=config.from_dictionary({ 303 | 'console': { 304 | 'image': 'busybox:latest', 305 | 'command': ["/bin/sleep", "300"], 306 | }, 307 | 'data' : { 308 | 'image': 'busybox:latest', 309 | 'command': ["/bin/sleep", "300"] 310 | }, 311 | 'db': { 312 | 'image': 'busybox:latest', 313 | 'command': ["/bin/sleep", "300"], 314 | 'volumes_from': ['data'], 315 | }, 316 | 'web': { 317 | 'image': 'busybox:latest', 318 | 'command': ["/bin/sleep", "300"], 319 | 'links': ['db'], 320 | }, 321 | }), 322 | client=self.client, 323 | ) 324 | project.start() 325 | self.assertEqual(len(project.containers()), 0) 326 | 327 | project.up(['db'], start_deps=False) 328 | self.assertEqual(len(project.containers(stopped=True)), 2) 329 | 
self.assertEqual(len(project.get_service('web').containers()), 0) 330 | self.assertEqual(len(project.get_service('db').containers()), 1) 331 | self.assertEqual(len(project.get_service('data').containers()), 0) 332 | self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1) 333 | self.assertEqual(len(project.get_service('console').containers()), 0) 334 | 335 | project.kill() 336 | project.remove_stopped() 337 | 338 | def test_unscale_after_restart(self): 339 | web = self.create_service('web') 340 | project = Project('composetest', [web], self.client) 341 | 342 | project.start() 343 | 344 | service = project.get_service('web') 345 | service.scale(1) 346 | self.assertEqual(len(service.containers()), 1) 347 | service.scale(3) 348 | self.assertEqual(len(service.containers()), 3) 349 | project.up() 350 | service = project.get_service('web') 351 | self.assertEqual(len(service.containers()), 3) 352 | service.scale(1) 353 | self.assertEqual(len(service.containers()), 1) 354 | project.up() 355 | service = project.get_service('web') 356 | self.assertEqual(len(service.containers()), 1) 357 | # does scale=0 make any sense? after recreating, at least 1 container is running 358 | service.scale(0) 359 | project.up() 360 | service = project.get_service('web') 361 | self.assertEqual(len(service.containers()), 1) 362 | project.kill() 363 | project.remove_stopped() 364 | -------------------------------------------------------------------------------- /tests/integration/testcases.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | from compose.service import Service 4 | from compose.config import make_service_dict 5 | from compose.cli.docker_client import docker_client 6 | from compose.progress_stream import stream_output 7 | from ..
import unittest 8 | 9 | 10 | class DockerClientTestCase(unittest.TestCase): 11 | @classmethod 12 | def setUpClass(cls): 13 | cls.client = docker_client() 14 | 15 | def setUp(self): 16 | for c in self.client.containers(all=True): 17 | if c['Names'] and 'composetest' in c['Names'][0]: 18 | self.client.kill(c['Id']) 19 | self.client.remove_container(c['Id']) 20 | for i in self.client.images(): 21 | if isinstance(i.get('Tag'), basestring) and 'composetest' in i['Tag']: 22 | self.client.remove_image(i) 23 | 24 | def create_service(self, name, **kwargs): 25 | kwargs['image'] = "busybox:latest" 26 | 27 | if 'command' not in kwargs: 28 | kwargs['command'] = ["/bin/sleep", "300"] 29 | 30 | return Service( 31 | project='composetest', 32 | client=self.client, 33 | **make_service_dict(name, kwargs, working_dir='.') 34 | ) 35 | 36 | def check_build(self, *args, **kwargs): 37 | build_output = self.client.build(*args, **kwargs) 38 | stream_output(build_output, open('/dev/null', 'w')) 39 | -------------------------------------------------------------------------------- /tests/unit/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hypriot/compose/7b5276774168bbe98209be7ce3ea05fd2a2131ea/tests/unit/__init__.py -------------------------------------------------------------------------------- /tests/unit/cli/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hypriot/compose/7b5276774168bbe98209be7ce3ea05fd2a2131ea/tests/unit/cli/__init__.py -------------------------------------------------------------------------------- /tests/unit/cli/docker_client_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | import os 4 | 5 | import mock 6 | from tests import unittest 7 | 8 | from compose.cli import docker_client 9 
| 10 | 11 | class DockerClientTestCase(unittest.TestCase): 12 | 13 | def test_docker_client_no_home(self): 14 | with mock.patch.dict(os.environ): 15 | del os.environ['HOME'] 16 | docker_client.docker_client() 17 | 18 | def test_docker_client_with_custom_timeout(self): 19 | with mock.patch.dict(os.environ): 20 | os.environ['DOCKER_CLIENT_TIMEOUT'] = timeout = "300" 21 | client = docker_client.docker_client() 22 | self.assertEqual(client.timeout, int(timeout)) 23 | -------------------------------------------------------------------------------- /tests/unit/cli/verbose_proxy_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | from tests import unittest 4 | 5 | from compose.cli import verbose_proxy 6 | 7 | 8 | class VerboseProxyTestCase(unittest.TestCase): 9 | 10 | def test_format_call(self): 11 | expected = "(u'arg1', True, key=u'value')" 12 | actual = verbose_proxy.format_call( 13 | ("arg1", True), 14 | {'key': 'value'}) 15 | 16 | self.assertEqual(expected, actual) 17 | 18 | def test_format_return_sequence(self): 19 | expected = "(list with 10 items)" 20 | actual = verbose_proxy.format_return(list(range(10)), 2) 21 | self.assertEqual(expected, actual) 22 | 23 | def test_format_return(self): 24 | expected = "{u'Id': u'ok'}" 25 | actual = verbose_proxy.format_return({'Id': 'ok'}, 2) 26 | self.assertEqual(expected, actual) 27 | 28 | def test_format_return_no_result(self): 29 | actual = verbose_proxy.format_return(None, 2) 30 | self.assertEqual(None, actual) 31 | -------------------------------------------------------------------------------- /tests/unit/cli_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | import logging 4 | import os 5 | import tempfile 6 | import shutil 7 | from .. 
import unittest 8 | 9 | import docker 10 | import mock 11 | from six import StringIO 12 | 13 | from compose.cli import main 14 | from compose.cli.main import TopLevelCommand 15 | from compose.cli.errors import ComposeFileNotFound 16 | from compose.service import Service 17 | 18 | 19 | class CLITestCase(unittest.TestCase): 20 | def test_default_project_name(self): 21 | cwd = os.getcwd() 22 | 23 | try: 24 | os.chdir('tests/fixtures/simple-composefile') 25 | command = TopLevelCommand() 26 | project_name = command.get_project_name(command.get_config_path()) 27 | self.assertEquals('simplecomposefile', project_name) 28 | finally: 29 | os.chdir(cwd) 30 | 31 | def test_project_name_with_explicit_base_dir(self): 32 | command = TopLevelCommand() 33 | command.base_dir = 'tests/fixtures/simple-composefile' 34 | project_name = command.get_project_name(command.get_config_path()) 35 | self.assertEquals('simplecomposefile', project_name) 36 | 37 | def test_project_name_with_explicit_uppercase_base_dir(self): 38 | command = TopLevelCommand() 39 | command.base_dir = 'tests/fixtures/UpperCaseDir' 40 | project_name = command.get_project_name(command.get_config_path()) 41 | self.assertEquals('uppercasedir', project_name) 42 | 43 | def test_project_name_with_explicit_project_name(self): 44 | command = TopLevelCommand() 45 | name = 'explicit-project-name' 46 | project_name = command.get_project_name(None, project_name=name) 47 | self.assertEquals('explicitprojectname', project_name) 48 | 49 | def test_project_name_from_environment_old_var(self): 50 | command = TopLevelCommand() 51 | name = 'namefromenv' 52 | with mock.patch.dict(os.environ): 53 | os.environ['FIG_PROJECT_NAME'] = name 54 | project_name = command.get_project_name(None) 55 | self.assertEquals(project_name, name) 56 | 57 | def test_project_name_from_environment_new_var(self): 58 | command = TopLevelCommand() 59 | name = 'namefromenv' 60 | with mock.patch.dict(os.environ): 61 | os.environ['COMPOSE_PROJECT_NAME'] = name 62 | 
project_name = command.get_project_name(None) 63 | self.assertEquals(project_name, name) 64 | 65 | def test_filename_check(self): 66 | self.assertEqual('docker-compose.yml', get_config_filename_for_files([ 67 | 'docker-compose.yml', 68 | 'docker-compose.yaml', 69 | 'fig.yml', 70 | 'fig.yaml', 71 | ])) 72 | 73 | self.assertEqual('docker-compose.yaml', get_config_filename_for_files([ 74 | 'docker-compose.yaml', 75 | 'fig.yml', 76 | 'fig.yaml', 77 | ])) 78 | 79 | self.assertEqual('fig.yml', get_config_filename_for_files([ 80 | 'fig.yml', 81 | 'fig.yaml', 82 | ])) 83 | 84 | self.assertEqual('fig.yaml', get_config_filename_for_files([ 85 | 'fig.yaml', 86 | ])) 87 | 88 | self.assertRaises(ComposeFileNotFound, lambda: get_config_filename_for_files([])) 89 | 90 | def test_get_project(self): 91 | command = TopLevelCommand() 92 | command.base_dir = 'tests/fixtures/longer-filename-composefile' 93 | project = command.get_project(command.get_config_path()) 94 | self.assertEqual(project.name, 'longerfilenamecomposefile') 95 | self.assertTrue(project.client) 96 | self.assertTrue(project.services) 97 | 98 | def test_help(self): 99 | command = TopLevelCommand() 100 | with self.assertRaises(SystemExit): 101 | command.dispatch(['-h'], None) 102 | 103 | def test_setup_logging(self): 104 | main.setup_logging() 105 | self.assertEqual(logging.getLogger().level, logging.DEBUG) 106 | self.assertEqual(logging.getLogger('requests').propagate, False) 107 | 108 | @mock.patch('compose.cli.main.dockerpty', autospec=True) 109 | def test_run_with_environment_merged_with_options_list(self, mock_dockerpty): 110 | command = TopLevelCommand() 111 | mock_client = mock.create_autospec(docker.Client) 112 | mock_project = mock.Mock() 113 | mock_project.get_service.return_value = Service( 114 | 'service', 115 | client=mock_client, 116 | environment=['FOO=ONE', 'BAR=TWO'], 117 | image='someimage') 118 | 119 | command.run(mock_project, { 120 | 'SERVICE': 'service', 121 | 'COMMAND': None, 122 | '-e': 
['BAR=NEW', 'OTHER=THREE'], 123 | '--user': None, 124 | '--no-deps': None, 125 | '--allow-insecure-ssl': None, 126 | '-d': True, 127 | '-T': None, 128 | '--entrypoint': None, 129 | '--service-ports': None, 130 | '--rm': None, 131 | }) 132 | 133 | _, _, call_kwargs = mock_client.create_container.mock_calls[0] 134 | self.assertEqual( 135 | call_kwargs['environment'], 136 | {'FOO': 'ONE', 'BAR': 'NEW', 'OTHER': 'THREE'}) 137 | 138 | 139 | def get_config_filename_for_files(filenames): 140 | project_dir = tempfile.mkdtemp() 141 | try: 142 | make_files(project_dir, filenames) 143 | command = TopLevelCommand() 144 | command.base_dir = project_dir 145 | return os.path.basename(command.get_config_path()) 146 | finally: 147 | shutil.rmtree(project_dir) 148 | 149 | 150 | def make_files(dirname, filenames): 151 | for fname in filenames: 152 | with open(os.path.join(dirname, fname), 'w') as f: 153 | f.write('') 154 | 155 | -------------------------------------------------------------------------------- /tests/unit/container_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from .. 
import unittest 3 | 4 | import mock 5 | import docker 6 | 7 | from compose.container import Container 8 | 9 | 10 | class ContainerTest(unittest.TestCase): 11 | 12 | 13 | def setUp(self): 14 | self.container_dict = { 15 | "Id": "abc", 16 | "Image": "busybox:latest", 17 | "Command": "sleep 300", 18 | "Created": 1387384730, 19 | "Status": "Up 8 seconds", 20 | "Ports": None, 21 | "SizeRw": 0, 22 | "SizeRootFs": 0, 23 | "Names": ["/composetest_db_1", "/composetest_web_1/db"], 24 | "NetworkSettings": { 25 | "Ports": {}, 26 | }, 27 | } 28 | 29 | def test_from_ps(self): 30 | container = Container.from_ps(None, 31 | self.container_dict, 32 | has_been_inspected=True) 33 | self.assertEqual(container.dictionary, { 34 | "Id": "abc", 35 | "Image":"busybox:latest", 36 | "Name": "/composetest_db_1", 37 | }) 38 | 39 | def test_from_ps_prefixed(self): 40 | self.container_dict['Names'] = ['/swarm-host-1' + n for n in self.container_dict['Names']] 41 | 42 | container = Container.from_ps(None, 43 | self.container_dict, 44 | has_been_inspected=True) 45 | self.assertEqual(container.dictionary, { 46 | "Id": "abc", 47 | "Image":"busybox:latest", 48 | "Name": "/composetest_db_1", 49 | }) 50 | 51 | def test_environment(self): 52 | container = Container(None, { 53 | 'Id': 'abc', 54 | 'Config': { 55 | 'Env': [ 56 | 'FOO=BAR', 57 | 'BAZ=DOGE', 58 | ] 59 | } 60 | }, has_been_inspected=True) 61 | self.assertEqual(container.environment, { 62 | 'FOO': 'BAR', 63 | 'BAZ': 'DOGE', 64 | }) 65 | 66 | def test_number(self): 67 | container = Container.from_ps(None, 68 | self.container_dict, 69 | has_been_inspected=True) 70 | self.assertEqual(container.number, 1) 71 | 72 | def test_name(self): 73 | container = Container.from_ps(None, 74 | self.container_dict, 75 | has_been_inspected=True) 76 | self.assertEqual(container.name, "composetest_db_1") 77 | 78 | def test_name_without_project(self): 79 | container = Container.from_ps(None, 80 | self.container_dict, 81 | has_been_inspected=True) 82 | 
self.assertEqual(container.name_without_project, "db_1") 83 | 84 | def test_inspect_if_not_inspected(self): 85 | mock_client = mock.create_autospec(docker.Client) 86 | container = Container(mock_client, dict(Id="the_id")) 87 | 88 | container.inspect_if_not_inspected() 89 | mock_client.inspect_container.assert_called_once_with("the_id") 90 | self.assertEqual(container.dictionary, 91 | mock_client.inspect_container.return_value) 92 | self.assertTrue(container.has_been_inspected) 93 | 94 | container.inspect_if_not_inspected() 95 | self.assertEqual(mock_client.inspect_container.call_count, 1) 96 | 97 | def test_human_readable_ports_none(self): 98 | container = Container(None, self.container_dict, has_been_inspected=True) 99 | self.assertEqual(container.human_readable_ports, '') 100 | 101 | def test_human_readable_ports_public_and_private(self): 102 | self.container_dict['NetworkSettings']['Ports'].update({ 103 | "45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ], 104 | "45453/tcp": [], 105 | }) 106 | container = Container(None, self.container_dict, has_been_inspected=True) 107 | 108 | expected = "45453/tcp, 0.0.0.0:49197->45454/tcp" 109 | self.assertEqual(container.human_readable_ports, expected) 110 | 111 | def test_get_local_port(self): 112 | self.container_dict['NetworkSettings']['Ports'].update({ 113 | "45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ], 114 | }) 115 | container = Container(None, self.container_dict, has_been_inspected=True) 116 | 117 | self.assertEqual( 118 | container.get_local_port(45454, protocol='tcp'), 119 | '0.0.0.0:49197') 120 | 121 | def test_get(self): 122 | container = Container(None, { 123 | "Status":"Up 8 seconds", 124 | "HostConfig": { 125 | "VolumesFrom": ["volume_id",] 126 | }, 127 | }, has_been_inspected=True) 128 | 129 | self.assertEqual(container.get('Status'), "Up 8 seconds") 130 | self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id",]) 131 | 
self.assertEqual(container.get('Foo.Bar.DoesNotExist'), None) 132 | -------------------------------------------------------------------------------- /tests/unit/log_printer_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | import os 4 | 5 | from compose.cli.log_printer import LogPrinter 6 | from .. import unittest 7 | 8 | 9 | class LogPrinterTest(unittest.TestCase): 10 | def get_default_output(self, monochrome=False): 11 | def reader(*args, **kwargs): 12 | yield "hello\nworld" 13 | 14 | container = MockContainer(reader) 15 | output = run_log_printer([container], monochrome=monochrome) 16 | return output 17 | 18 | def test_single_container(self): 19 | output = self.get_default_output() 20 | 21 | self.assertIn('hello', output) 22 | self.assertIn('world', output) 23 | 24 | def test_monochrome(self): 25 | output = self.get_default_output(monochrome=True) 26 | self.assertNotIn('\033[', output) 27 | 28 | def test_polychrome(self): 29 | output = self.get_default_output() 30 | self.assertIn('\033[', output) 31 | 32 | def test_unicode(self): 33 | glyph = u'\u2022'.encode('utf-8') 34 | 35 | def reader(*args, **kwargs): 36 | yield glyph + b'\n' 37 | 38 | container = MockContainer(reader) 39 | output = run_log_printer([container]) 40 | 41 | self.assertIn(glyph, output) 42 | 43 | 44 | def run_log_printer(containers, monochrome=False): 45 | r, w = os.pipe() 46 | reader, writer = os.fdopen(r, 'r'), os.fdopen(w, 'w') 47 | printer = LogPrinter(containers, output=writer, monochrome=monochrome) 48 | printer.run() 49 | writer.close() 50 | return reader.read() 51 | 52 | 53 | class MockContainer(object): 54 | def __init__(self, reader): 55 | self._reader = reader 56 | 57 | @property 58 | def name(self): 59 | return 'myapp_web_1' 60 | 61 | @property 62 | def name_without_project(self): 63 | return 'web_1' 64 | 65 | def attach(self, *args, **kwargs): 66 | 
return self._reader() 67 | 68 | def wait(self, *args, **kwargs): 69 | return 0 70 | -------------------------------------------------------------------------------- /tests/unit/progress_stream_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | from tests import unittest 4 | 5 | import mock 6 | from six import StringIO 7 | 8 | from compose import progress_stream 9 | 10 | 11 | class ProgressStreamTestCase(unittest.TestCase): 12 | 13 | def test_stream_output(self): 14 | output = [ 15 | '{"status": "Downloading", "progressDetail": {"current": ' 16 | '31019763, "start": 1413653874, "total": 62763875}, ' 17 | '"progress": "..."}', 18 | ] 19 | events = progress_stream.stream_output(output, StringIO()) 20 | self.assertEqual(len(events), 1) 21 | -------------------------------------------------------------------------------- /tests/unit/project_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from .. 
import unittest 3 | from compose.service import Service 4 | from compose.project import Project 5 | from compose.container import Container 6 | from compose import config 7 | 8 | import mock 9 | import docker 10 | 11 | class ProjectTest(unittest.TestCase): 12 | def test_from_dict(self): 13 | project = Project.from_dicts('composetest', [ 14 | { 15 | 'name': 'web', 16 | 'image': 'busybox:latest' 17 | }, 18 | { 19 | 'name': 'db', 20 | 'image': 'busybox:latest' 21 | }, 22 | ], None) 23 | self.assertEqual(len(project.services), 2) 24 | self.assertEqual(project.get_service('web').name, 'web') 25 | self.assertEqual(project.get_service('web').options['image'], 'busybox:latest') 26 | self.assertEqual(project.get_service('db').name, 'db') 27 | self.assertEqual(project.get_service('db').options['image'], 'busybox:latest') 28 | 29 | def test_from_dict_sorts_in_dependency_order(self): 30 | project = Project.from_dicts('composetest', [ 31 | { 32 | 'name': 'web', 33 | 'image': 'busybox:latest', 34 | 'links': ['db'], 35 | }, 36 | { 37 | 'name': 'db', 38 | 'image': 'busybox:latest', 39 | 'volumes_from': ['volume'] 40 | }, 41 | { 42 | 'name': 'volume', 43 | 'image': 'busybox:latest', 44 | 'volumes': ['/tmp'], 45 | } 46 | ], None) 47 | 48 | self.assertEqual(project.services[0].name, 'volume') 49 | self.assertEqual(project.services[1].name, 'db') 50 | self.assertEqual(project.services[2].name, 'web') 51 | 52 | def test_from_config(self): 53 | dicts = config.from_dictionary({ 54 | 'web': { 55 | 'image': 'busybox:latest', 56 | }, 57 | 'db': { 58 | 'image': 'busybox:latest', 59 | }, 60 | }) 61 | project = Project.from_dicts('composetest', dicts, None) 62 | self.assertEqual(len(project.services), 2) 63 | self.assertEqual(project.get_service('web').name, 'web') 64 | self.assertEqual(project.get_service('web').options['image'], 'busybox:latest') 65 | self.assertEqual(project.get_service('db').name, 'db') 66 | self.assertEqual(project.get_service('db').options['image'], 'busybox:latest') 67 
| 68 | def test_get_service(self): 69 | web = Service( 70 | project='composetest', 71 | name='web', 72 | client=None, 73 | image="busybox:latest", 74 | ) 75 | project = Project('test', [web], None) 76 | self.assertEqual(project.get_service('web'), web) 77 | 78 | def test_get_services_returns_all_services_without_args(self): 79 | web = Service( 80 | project='composetest', 81 | name='web', 82 | ) 83 | console = Service( 84 | project='composetest', 85 | name='console', 86 | ) 87 | project = Project('test', [web, console], None) 88 | self.assertEqual(project.get_services(), [web, console]) 89 | 90 | def test_get_services_returns_listed_services_with_args(self): 91 | web = Service( 92 | project='composetest', 93 | name='web', 94 | ) 95 | console = Service( 96 | project='composetest', 97 | name='console', 98 | ) 99 | project = Project('test', [web, console], None) 100 | self.assertEqual(project.get_services(['console']), [console]) 101 | 102 | def test_get_services_with_include_links(self): 103 | db = Service( 104 | project='composetest', 105 | name='db', 106 | ) 107 | web = Service( 108 | project='composetest', 109 | name='web', 110 | links=[(db, 'database')] 111 | ) 112 | cache = Service( 113 | project='composetest', 114 | name='cache' 115 | ) 116 | console = Service( 117 | project='composetest', 118 | name='console', 119 | links=[(web, 'web')] 120 | ) 121 | project = Project('test', [web, db, cache, console], None) 122 | self.assertEqual( 123 | project.get_services(['console'], include_deps=True), 124 | [db, web, console] 125 | ) 126 | 127 | def test_get_services_removes_duplicates_following_links(self): 128 | db = Service( 129 | project='composetest', 130 | name='db', 131 | ) 132 | web = Service( 133 | project='composetest', 134 | name='web', 135 | links=[(db, 'database')] 136 | ) 137 | project = Project('test', [web, db], None) 138 | self.assertEqual( 139 | project.get_services(['web', 'db'], include_deps=True), 140 | [db, web] 141 | ) 142 | 143 | def 
test_use_volumes_from_container(self): 144 | container_id = 'aabbccddee' 145 | container_dict = dict(Name='aaa', Id=container_id) 146 | mock_client = mock.create_autospec(docker.Client) 147 | mock_client.inspect_container.return_value = container_dict 148 | project = Project.from_dicts('test', [ 149 | { 150 | 'name': 'test', 151 | 'image': 'busybox:latest', 152 | 'volumes_from': ['aaa'] 153 | } 154 | ], mock_client) 155 | self.assertEqual(project.get_service('test')._get_volumes_from(), [container_id]) 156 | 157 | def test_use_volumes_from_service_no_container(self): 158 | container_name = 'test_vol_1' 159 | mock_client = mock.create_autospec(docker.Client) 160 | mock_client.containers.return_value = [ 161 | { 162 | "Name": container_name, 163 | "Names": [container_name], 164 | "Id": container_name, 165 | "Image": 'busybox:latest' 166 | } 167 | ] 168 | project = Project.from_dicts('test', [ 169 | { 170 | 'name': 'vol', 171 | 'image': 'busybox:latest' 172 | }, 173 | { 174 | 'name': 'test', 175 | 'image': 'busybox:latest', 176 | 'volumes_from': ['vol'] 177 | } 178 | ], mock_client) 179 | self.assertEqual(project.get_service('test')._get_volumes_from(), [container_name]) 180 | 181 | @mock.patch.object(Service, 'containers') 182 | def test_use_volumes_from_service_container(self, mock_return): 183 | container_ids = ['aabbccddee', '12345'] 184 | mock_return.return_value = [ 185 | mock.Mock(id=container_id, spec=Container) 186 | for container_id in container_ids] 187 | 188 | project = Project.from_dicts('test', [ 189 | { 190 | 'name': 'vol', 191 | 'image': 'busybox:latest' 192 | }, 193 | { 194 | 'name': 'test', 195 | 'image': 'busybox:latest', 196 | 'volumes_from': ['vol'] 197 | } 198 | ], None) 199 | self.assertEqual(project.get_service('test')._get_volumes_from(), container_ids) 200 | 201 | def test_use_net_from_container(self): 202 | container_id = 'aabbccddee' 203 | container_dict = dict(Name='aaa', Id=container_id) 204 | mock_client = 
mock.create_autospec(docker.Client) 205 | mock_client.inspect_container.return_value = container_dict 206 | project = Project.from_dicts('test', [ 207 | { 208 | 'name': 'test', 209 | 'image': 'busybox:latest', 210 | 'net': 'container:aaa' 211 | } 212 | ], mock_client) 213 | service = project.get_service('test') 214 | self.assertEqual(service._get_net(), 'container:'+container_id) 215 | 216 | def test_use_net_from_service(self): 217 | container_name = 'test_aaa_1' 218 | mock_client = mock.create_autospec(docker.Client) 219 | mock_client.containers.return_value = [ 220 | { 221 | "Name": container_name, 222 | "Names": [container_name], 223 | "Id": container_name, 224 | "Image": 'busybox:latest' 225 | } 226 | ] 227 | project = Project.from_dicts('test', [ 228 | { 229 | 'name': 'aaa', 230 | 'image': 'busybox:latest' 231 | }, 232 | { 233 | 'name': 'test', 234 | 'image': 'busybox:latest', 235 | 'net': 'container:aaa' 236 | } 237 | ], mock_client) 238 | 239 | service = project.get_service('test') 240 | self.assertEqual(service._get_net(), 'container:'+container_name) 241 | -------------------------------------------------------------------------------- /tests/unit/service_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | 4 | from .. 
import unittest 5 | import mock 6 | 7 | import docker 8 | from requests import Response 9 | 10 | from compose import Service 11 | from compose.container import Container 12 | from compose.service import ( 13 | APIError, 14 | ConfigError, 15 | build_port_bindings, 16 | build_volume_binding, 17 | get_container_name, 18 | parse_repository_tag, 19 | parse_volume_spec, 20 | split_port, 21 | ) 22 | 23 | 24 | class ServiceTest(unittest.TestCase): 25 | 26 | def setUp(self): 27 | self.mock_client = mock.create_autospec(docker.Client) 28 | 29 | def test_name_validations(self): 30 | self.assertRaises(ConfigError, lambda: Service(name='')) 31 | 32 | self.assertRaises(ConfigError, lambda: Service(name=' ')) 33 | self.assertRaises(ConfigError, lambda: Service(name='/')) 34 | self.assertRaises(ConfigError, lambda: Service(name='!')) 35 | self.assertRaises(ConfigError, lambda: Service(name='\xe2')) 36 | self.assertRaises(ConfigError, lambda: Service(name='_')) 37 | self.assertRaises(ConfigError, lambda: Service(name='____')) 38 | self.assertRaises(ConfigError, lambda: Service(name='foo_bar')) 39 | self.assertRaises(ConfigError, lambda: Service(name='__foo_bar__')) 40 | 41 | Service('a') 42 | Service('foo') 43 | 44 | def test_project_validation(self): 45 | self.assertRaises(ConfigError, lambda: Service(name='foo', project='_')) 46 | Service(name='foo', project='bar') 47 | 48 | def test_get_container_name(self): 49 | self.assertIsNone(get_container_name({})) 50 | self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1') 51 | self.assertEqual(get_container_name({'Names': ['/myproject_db_1', '/myproject_web_1/db']}), 'myproject_db_1') 52 | self.assertEqual(get_container_name({'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']}), 'myproject_db_1') 53 | 54 | def test_containers(self): 55 | service = Service('db', client=self.mock_client, project='myproject') 56 | 57 | self.mock_client.containers.return_value = [] 58 | 
self.assertEqual(service.containers(), []) 59 | 60 | self.mock_client.containers.return_value = [ 61 | {'Image': 'busybox', 'Id': 'OUT_1', 'Names': ['/myproject', '/foo/bar']}, 62 | {'Image': 'busybox', 'Id': 'OUT_2', 'Names': ['/myproject_db']}, 63 | {'Image': 'busybox', 'Id': 'OUT_3', 'Names': ['/db_1']}, 64 | {'Image': 'busybox', 'Id': 'IN_1', 'Names': ['/myproject_db_1', '/myproject_web_1/db']}, 65 | ] 66 | self.assertEqual([c.id for c in service.containers()], ['IN_1']) 67 | 68 | def test_containers_prefixed(self): 69 | service = Service('db', client=self.mock_client, project='myproject') 70 | 71 | self.mock_client.containers.return_value = [ 72 | {'Image': 'busybox', 'Id': 'OUT_1', 'Names': ['/swarm-host-1/myproject', '/swarm-host-1/foo/bar']}, 73 | {'Image': 'busybox', 'Id': 'OUT_2', 'Names': ['/swarm-host-1/myproject_db']}, 74 | {'Image': 'busybox', 'Id': 'OUT_3', 'Names': ['/swarm-host-1/db_1']}, 75 | {'Image': 'busybox', 'Id': 'IN_1', 'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']}, 76 | ] 77 | self.assertEqual([c.id for c in service.containers()], ['IN_1']) 78 | 79 | def test_get_volumes_from_container(self): 80 | container_id = 'aabbccddee' 81 | service = Service( 82 | 'test', 83 | volumes_from=[mock.Mock(id=container_id, spec=Container)]) 84 | 85 | self.assertEqual(service._get_volumes_from(), [container_id]) 86 | 87 | def test_get_volumes_from_intermediate_container(self): 88 | container_id = 'aabbccddee' 89 | service = Service('test') 90 | container = mock.Mock(id=container_id, spec=Container) 91 | 92 | self.assertEqual(service._get_volumes_from(container), [container_id]) 93 | 94 | def test_get_volumes_from_service_container_exists(self): 95 | container_ids = ['aabbccddee', '12345'] 96 | from_service = mock.create_autospec(Service) 97 | from_service.containers.return_value = [ 98 | mock.Mock(id=container_id, spec=Container) 99 | for container_id in container_ids 100 | ] 101 | service = Service('test', 
volumes_from=[from_service]) 102 | 103 | self.assertEqual(service._get_volumes_from(), container_ids) 104 | 105 | def test_get_volumes_from_service_no_container(self): 106 | container_id = 'abababab' 107 | from_service = mock.create_autospec(Service) 108 | from_service.containers.return_value = [] 109 | from_service.create_container.return_value = mock.Mock( 110 | id=container_id, 111 | spec=Container) 112 | service = Service('test', volumes_from=[from_service]) 113 | 114 | self.assertEqual(service._get_volumes_from(), [container_id]) 115 | from_service.create_container.assert_called_once_with() 116 | 117 | def test_split_port_with_host_ip(self): 118 | internal_port, external_port = split_port("127.0.0.1:1000:2000") 119 | self.assertEqual(internal_port, "2000") 120 | self.assertEqual(external_port, ("127.0.0.1", "1000")) 121 | 122 | def test_split_port_with_protocol(self): 123 | internal_port, external_port = split_port("127.0.0.1:1000:2000/udp") 124 | self.assertEqual(internal_port, "2000/udp") 125 | self.assertEqual(external_port, ("127.0.0.1", "1000")) 126 | 127 | def test_split_port_with_host_ip_no_port(self): 128 | internal_port, external_port = split_port("127.0.0.1::2000") 129 | self.assertEqual(internal_port, "2000") 130 | self.assertEqual(external_port, ("127.0.0.1", None)) 131 | 132 | def test_split_port_with_host_port(self): 133 | internal_port, external_port = split_port("1000:2000") 134 | self.assertEqual(internal_port, "2000") 135 | self.assertEqual(external_port, "1000") 136 | 137 | def test_split_port_no_host_port(self): 138 | internal_port, external_port = split_port("2000") 139 | self.assertEqual(internal_port, "2000") 140 | self.assertEqual(external_port, None) 141 | 142 | def test_split_port_invalid(self): 143 | with self.assertRaises(ConfigError): 144 | split_port("0.0.0.0:1000:2000:tcp") 145 | 146 | def test_build_port_bindings_with_one_port(self): 147 | port_bindings = build_port_bindings(["127.0.0.1:1000:1000"]) 148 | 
self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000")]) 149 | 150 | def test_build_port_bindings_with_matching_internal_ports(self): 151 | port_bindings = build_port_bindings(["127.0.0.1:1000:1000","127.0.0.1:2000:1000"]) 152 | self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000"),("127.0.0.1","2000")]) 153 | 154 | def test_build_port_bindings_with_nonmatching_internal_ports(self): 155 | port_bindings = build_port_bindings(["127.0.0.1:1000:1000","127.0.0.1:2000:2000"]) 156 | self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000")]) 157 | self.assertEqual(port_bindings["2000"],[("127.0.0.1","2000")]) 158 | 159 | def test_split_domainname_none(self): 160 | service = Service('foo', hostname='name', client=self.mock_client) 161 | self.mock_client.containers.return_value = [] 162 | opts = service._get_container_create_options({'image': 'foo'}) 163 | self.assertEqual(opts['hostname'], 'name', 'hostname') 164 | self.assertFalse('domainname' in opts, 'domainname') 165 | 166 | def test_split_domainname_fqdn(self): 167 | service = Service('foo', 168 | hostname='name.domain.tld', 169 | client=self.mock_client) 170 | self.mock_client.containers.return_value = [] 171 | opts = service._get_container_create_options({'image': 'foo'}) 172 | self.assertEqual(opts['hostname'], 'name', 'hostname') 173 | self.assertEqual(opts['domainname'], 'domain.tld', 'domainname') 174 | 175 | def test_split_domainname_both(self): 176 | service = Service('foo', 177 | hostname='name', 178 | domainname='domain.tld', 179 | client=self.mock_client) 180 | self.mock_client.containers.return_value = [] 181 | opts = service._get_container_create_options({'image': 'foo'}) 182 | self.assertEqual(opts['hostname'], 'name', 'hostname') 183 | self.assertEqual(opts['domainname'], 'domain.tld', 'domainname') 184 | 185 | def test_split_domainname_weird(self): 186 | service = Service('foo', 187 | hostname='name.sub', 188 | domainname='domain.tld', 189 | client=self.mock_client) 190 | 
self.mock_client.containers.return_value = [] 191 | opts = service._get_container_create_options({'image': 'foo'}) 192 | self.assertEqual(opts['hostname'], 'name.sub', 'hostname') 193 | self.assertEqual(opts['domainname'], 'domain.tld', 'domainname') 194 | 195 | def test_get_container_not_found(self): 196 | self.mock_client.containers.return_value = [] 197 | service = Service('foo', client=self.mock_client) 198 | 199 | self.assertRaises(ValueError, service.get_container) 200 | 201 | @mock.patch('compose.service.Container', autospec=True) 202 | def test_get_container(self, mock_container_class): 203 | container_dict = dict(Name='default_foo_2') 204 | self.mock_client.containers.return_value = [container_dict] 205 | service = Service('foo', client=self.mock_client) 206 | 207 | container = service.get_container(number=2) 208 | self.assertEqual(container, mock_container_class.from_ps.return_value) 209 | mock_container_class.from_ps.assert_called_once_with( 210 | self.mock_client, container_dict) 211 | 212 | @mock.patch('compose.service.log', autospec=True) 213 | def test_pull_image(self, mock_log): 214 | service = Service('foo', client=self.mock_client, image='someimage:sometag') 215 | service.pull(insecure_registry=True) 216 | self.mock_client.pull.assert_called_once_with('someimage:sometag', insecure_registry=True) 217 | mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...') 218 | 219 | @mock.patch('compose.service.Container', autospec=True) 220 | @mock.patch('compose.service.log', autospec=True) 221 | def test_create_container_from_insecure_registry( 222 | self, 223 | mock_log, 224 | mock_container): 225 | service = Service('foo', client=self.mock_client, image='someimage:sometag') 226 | mock_response = mock.Mock(Response) 227 | mock_response.status_code = 404 228 | mock_response.reason = "Not Found" 229 | mock_container.create.side_effect = APIError( 230 | 'Mock error', mock_response, "No such image") 231 | 232 | # We expect the APIError 
because our service requires a 233 | # non-existent image. 234 | with self.assertRaises(APIError): 235 | service.create_container(insecure_registry=True) 236 | 237 | self.mock_client.pull.assert_called_once_with( 238 | 'someimage:sometag', 239 | insecure_registry=True, 240 | stream=True) 241 | mock_log.info.assert_called_once_with( 242 | 'Pulling image someimage:sometag...') 243 | 244 | def test_parse_repository_tag(self): 245 | self.assertEqual(parse_repository_tag("root"), ("root", "")) 246 | self.assertEqual(parse_repository_tag("root:tag"), ("root", "tag")) 247 | self.assertEqual(parse_repository_tag("user/repo"), ("user/repo", "")) 248 | self.assertEqual(parse_repository_tag("user/repo:tag"), ("user/repo", "tag")) 249 | self.assertEqual(parse_repository_tag("url:5000/repo"), ("url:5000/repo", "")) 250 | self.assertEqual(parse_repository_tag("url:5000/repo:tag"), ("url:5000/repo", "tag")) 251 | 252 | def test_latest_is_used_when_tag_is_not_specified(self): 253 | service = Service('foo', client=self.mock_client, image='someimage') 254 | Container.create = mock.Mock() 255 | service.create_container() 256 | self.assertEqual(Container.create.call_args[1]['image'], 'someimage:latest') 257 | 258 | def test_create_container_with_build(self): 259 | self.mock_client.images.return_value = [] 260 | service = Service('foo', client=self.mock_client, build='.') 261 | service.build = mock.create_autospec(service.build) 262 | service.create_container(do_build=True) 263 | 264 | self.mock_client.images.assert_called_once_with(name=service.full_name) 265 | service.build.assert_called_once_with() 266 | 267 | def test_create_container_no_build(self): 268 | self.mock_client.images.return_value = [] 269 | service = Service('foo', client=self.mock_client, build='.') 270 | service.create_container(do_build=False) 271 | 272 | self.assertFalse(self.mock_client.images.called) 273 | self.assertFalse(self.mock_client.build.called) 274 | 275 | 276 | class 
ServiceVolumesTest(unittest.TestCase): 277 | 278 | def test_parse_volume_spec_only_one_path(self): 279 | spec = parse_volume_spec('/the/volume') 280 | self.assertEqual(spec, (None, '/the/volume', 'rw')) 281 | 282 | def test_parse_volume_spec_internal_and_external(self): 283 | spec = parse_volume_spec('external:internal') 284 | self.assertEqual(spec, ('external', 'internal', 'rw')) 285 | 286 | def test_parse_volume_spec_with_mode(self): 287 | spec = parse_volume_spec('external:internal:ro') 288 | self.assertEqual(spec, ('external', 'internal', 'ro')) 289 | 290 | def test_parse_volume_spec_too_many_parts(self): 291 | with self.assertRaises(ConfigError): 292 | parse_volume_spec('one:two:three:four') 293 | 294 | def test_parse_volume_bad_mode(self): 295 | with self.assertRaises(ConfigError): 296 | parse_volume_spec('one:two:notrw') 297 | 298 | def test_build_volume_binding(self): 299 | binding = build_volume_binding(parse_volume_spec('/outside:/inside')) 300 | self.assertEqual( 301 | binding, 302 | ('/outside', dict(bind='/inside', ro=False))) 303 | -------------------------------------------------------------------------------- /tests/unit/sort_service_test.py: -------------------------------------------------------------------------------- 1 | from compose.project import sort_service_dicts, DependencyError 2 | from ..
import unittest 3 | 4 | 5 | class SortServiceTest(unittest.TestCase): 6 | def test_sort_service_dicts_1(self): 7 | services = [ 8 | { 9 | 'links': ['redis'], 10 | 'name': 'web' 11 | }, 12 | { 13 | 'name': 'grunt' 14 | }, 15 | { 16 | 'name': 'redis' 17 | } 18 | ] 19 | 20 | sorted_services = sort_service_dicts(services) 21 | self.assertEqual(len(sorted_services), 3) 22 | self.assertEqual(sorted_services[0]['name'], 'grunt') 23 | self.assertEqual(sorted_services[1]['name'], 'redis') 24 | self.assertEqual(sorted_services[2]['name'], 'web') 25 | 26 | def test_sort_service_dicts_2(self): 27 | services = [ 28 | { 29 | 'links': ['redis', 'postgres'], 30 | 'name': 'web' 31 | }, 32 | { 33 | 'name': 'postgres', 34 | 'links': ['redis'] 35 | }, 36 | { 37 | 'name': 'redis' 38 | } 39 | ] 40 | 41 | sorted_services = sort_service_dicts(services) 42 | self.assertEqual(len(sorted_services), 3) 43 | self.assertEqual(sorted_services[0]['name'], 'redis') 44 | self.assertEqual(sorted_services[1]['name'], 'postgres') 45 | self.assertEqual(sorted_services[2]['name'], 'web') 46 | 47 | def test_sort_service_dicts_3(self): 48 | services = [ 49 | { 50 | 'name': 'child' 51 | }, 52 | { 53 | 'name': 'parent', 54 | 'links': ['child'] 55 | }, 56 | { 57 | 'links': ['parent'], 58 | 'name': 'grandparent' 59 | }, 60 | ] 61 | 62 | sorted_services = sort_service_dicts(services) 63 | self.assertEqual(len(sorted_services), 3) 64 | self.assertEqual(sorted_services[0]['name'], 'child') 65 | self.assertEqual(sorted_services[1]['name'], 'parent') 66 | self.assertEqual(sorted_services[2]['name'], 'grandparent') 67 | 68 | def test_sort_service_dicts_4(self): 69 | services = [ 70 | { 71 | 'name': 'child' 72 | }, 73 | { 74 | 'name': 'parent', 75 | 'volumes_from': ['child'] 76 | }, 77 | { 78 | 'links': ['parent'], 79 | 'name': 'grandparent' 80 | }, 81 | ] 82 | 83 | sorted_services = sort_service_dicts(services) 84 | self.assertEqual(len(sorted_services), 3) 85 | self.assertEqual(sorted_services[0]['name'], 'child') 
86 | self.assertEqual(sorted_services[1]['name'], 'parent') 87 | self.assertEqual(sorted_services[2]['name'], 'grandparent') 88 | 89 | def test_sort_service_dicts_5(self): 90 | services = [ 91 | { 92 | 'links': ['parent'], 93 | 'name': 'grandparent' 94 | }, 95 | { 96 | 'name': 'parent', 97 | 'net': 'container:child' 98 | }, 99 | { 100 | 'name': 'child' 101 | } 102 | ] 103 | 104 | sorted_services = sort_service_dicts(services) 105 | self.assertEqual(len(sorted_services), 3) 106 | self.assertEqual(sorted_services[0]['name'], 'child') 107 | self.assertEqual(sorted_services[1]['name'], 'parent') 108 | self.assertEqual(sorted_services[2]['name'], 'grandparent') 109 | 110 | def test_sort_service_dicts_6(self): 111 | services = [ 112 | { 113 | 'links': ['parent'], 114 | 'name': 'grandparent' 115 | }, 116 | { 117 | 'name': 'parent', 118 | 'volumes_from': ['child'] 119 | }, 120 | { 121 | 'name': 'child' 122 | } 123 | ] 124 | 125 | sorted_services = sort_service_dicts(services) 126 | self.assertEqual(len(sorted_services), 3) 127 | self.assertEqual(sorted_services[0]['name'], 'child') 128 | self.assertEqual(sorted_services[1]['name'], 'parent') 129 | self.assertEqual(sorted_services[2]['name'], 'grandparent') 130 | 131 | def test_sort_service_dicts_7(self): 132 | services = [ 133 | { 134 | 'net': 'container:three', 135 | 'name': 'four' 136 | }, 137 | { 138 | 'links': ['two'], 139 | 'name': 'three' 140 | }, 141 | { 142 | 'name': 'two', 143 | 'volumes_from': ['one'] 144 | }, 145 | { 146 | 'name': 'one' 147 | } 148 | ] 149 | 150 | sorted_services = sort_service_dicts(services) 151 | self.assertEqual(len(sorted_services), 4) 152 | self.assertEqual(sorted_services[0]['name'], 'one') 153 | self.assertEqual(sorted_services[1]['name'], 'two') 154 | self.assertEqual(sorted_services[2]['name'], 'three') 155 | self.assertEqual(sorted_services[3]['name'], 'four') 156 | 157 | def test_sort_service_dicts_circular_imports(self): 158 | services = [ 159 | { 160 | 'links': ['redis'], 161 | 
'name': 'web' 162 | }, 163 | { 164 | 'name': 'redis', 165 | 'links': ['web'] 166 | }, 167 | ] 168 | 169 | try: 170 | sort_service_dicts(services) 171 | except DependencyError as e: 172 | self.assertIn('redis', e.msg) 173 | self.assertIn('web', e.msg) 174 | else: 175 | self.fail('Should have raised a DependencyError') 176 | 177 | def test_sort_service_dicts_circular_imports_2(self): 178 | services = [ 179 | { 180 | 'links': ['postgres', 'redis'], 181 | 'name': 'web' 182 | }, 183 | { 184 | 'name': 'redis', 185 | 'links': ['web'] 186 | }, 187 | { 188 | 'name': 'postgres' 189 | } 190 | ] 191 | 192 | try: 193 | sort_service_dicts(services) 194 | except DependencyError as e: 195 | self.assertIn('redis', e.msg) 196 | self.assertIn('web', e.msg) 197 | else: 198 | self.fail('Should have raised a DependencyError') 199 | 200 | def test_sort_service_dicts_circular_imports_3(self): 201 | services = [ 202 | { 203 | 'links': ['b'], 204 | 'name': 'a' 205 | }, 206 | { 207 | 'name': 'b', 208 | 'links': ['c'] 209 | }, 210 | { 211 | 'name': 'c', 212 | 'links': ['a'] 213 | } 214 | ] 215 | 216 | try: 217 | sort_service_dicts(services) 218 | except DependencyError as e: 219 | self.assertIn('a', e.msg) 220 | self.assertIn('b', e.msg) 221 | else: 222 | self.fail('Should have raised a DependencyError') 223 | 224 | def test_sort_service_dicts_self_imports(self): 225 | services = [ 226 | { 227 | 'links': ['web'], 228 | 'name': 'web' 229 | }, 230 | ] 231 | 232 | try: 233 | sort_service_dicts(services) 234 | except DependencyError as e: 235 | self.assertIn('web', e.msg) 236 | else: 237 | self.fail('Should have raised a DependencyError') 238 | -------------------------------------------------------------------------------- /tests/unit/split_buffer_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from __future__ import absolute_import 3 | from compose.cli.utils import split_buffer 4 | from ..
import unittest 5 | 6 | class SplitBufferTest(unittest.TestCase): 7 | def test_single_line_chunks(self): 8 | def reader(): 9 | yield b'abc\n' 10 | yield b'def\n' 11 | yield b'ghi\n' 12 | 13 | self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi\n']) 14 | 15 | def test_no_end_separator(self): 16 | def reader(): 17 | yield b'abc\n' 18 | yield b'def\n' 19 | yield b'ghi' 20 | 21 | self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi']) 22 | 23 | def test_multiple_line_chunk(self): 24 | def reader(): 25 | yield b'abc\ndef\nghi' 26 | 27 | self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi']) 28 | 29 | def test_chunked_line(self): 30 | def reader(): 31 | yield b'a' 32 | yield b'b' 33 | yield b'c' 34 | yield b'\n' 35 | yield b'd' 36 | 37 | self.assert_produces(reader, [b'abc\n', b'd']) 38 | 39 | def test_preserves_unicode_sequences_within_lines(self): 40 | string = u"a\u2022c\n".encode('utf-8') 41 | 42 | def reader(): 43 | yield string 44 | 45 | self.assert_produces(reader, [string]) 46 | 47 | def assert_produces(self, reader, expectations): 48 | split = split_buffer(reader(), b'\n') 49 | 50 | for (actual, expected) in zip(split, expectations): 51 | self.assertEqual(type(actual), type(expected)) 52 | self.assertEqual(actual, expected) 53 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py26,py27 3 | 4 | [testenv] 5 | usedevelop=True 6 | deps = 7 | -rrequirements.txt 8 | -rrequirements-dev.txt 9 | commands = 10 | nosetests -v {posargs} 11 | flake8 compose 12 | 13 | [flake8] 14 | # ignore line-length for now 15 | ignore = E501,E203 16 | exclude = compose/packages 17 | -------------------------------------------------------------------------------- /wercker.yml: -------------------------------------------------------------------------------- 1 | box: wercker-labs/docker 2 | build: 3 | steps: 4 | - script: 5 | name: 
validate DCO 6 | code: script/validate-dco 7 | - script: 8 | name: run tests 9 | code: script/test 10 | - script: 11 | name: build binary 12 | code: script/build-linux 13 | --------------------------------------------------------------------------------
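The `test_split_port_*` and `test_build_port_bindings_*` cases in `tests/unit/service_test.py` above pin down the contract of compose's port-parsing helpers. As a minimal sketch of that contract (a hypothetical reimplementation derived only from the test expectations, not the actual `compose.service` source), the behaviour can be expressed as:

```python
# Hypothetical sketch of the split_port / build_port_bindings contract
# exercised by the unit tests above -- not the real compose.service code.

class ConfigError(Exception):
    pass


def split_port(port):
    """Split "[host_ip:[host_port]:]internal_port[/proto]" into
    (internal_port, external_binding) per the test cases."""
    parts = str(port).split(':')
    if len(parts) == 1:                   # "2000" -> no external binding
        return parts[0], None
    if len(parts) == 2:                   # "1000:2000" -> host port only
        return parts[1], parts[0]
    if len(parts) == 3:                   # "127.0.0.1:1000:2000[/udp]"
        host_ip, host_port, internal = parts
        # An empty host port ("127.0.0.1::2000") becomes None.
        return internal, (host_ip, host_port or None)
    raise ConfigError('Invalid port specification: "%s"' % port)


def build_port_bindings(ports):
    """Group external bindings by internal port, preserving input order."""
    bindings = {}
    for port in ports:
        internal, external = split_port(port)
        bindings.setdefault(internal, []).append(external)
    return bindings
```

For example, `build_port_bindings(["127.0.0.1:1000:1000", "127.0.0.1:2000:1000"])` collects both host bindings under the single internal port `"1000"`, which is exactly what `test_build_port_bindings_with_matching_internal_ports` asserts.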
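The `sort_service_dicts` tests in `tests/unit/sort_service_test.py` above describe a dependency-ordering pass: a service must come after anything it references via `links`, `volumes_from`, or a `net: container:NAME` setting, and cycles (including self-links) raise a `DependencyError` naming the offending services. A depth-first sketch of that ordering follows; it is a hypothetical illustration, not `compose.project`'s implementation, and its exact tie-breaking order for independent services may differ from the real one.

```python
# Hypothetical DFS-based dependency sort matching the properties the
# sort_service_dicts tests check -- not the real compose.project code.

class DependencyError(Exception):
    def __init__(self, msg):
        super(DependencyError, self).__init__(msg)
        self.msg = msg


def get_dependencies(service):
    """Names this service depends on: links, volumes_from, net container."""
    net = service.get('net', '')
    net_dep = [net[len('container:'):]] if net.startswith('container:') else []
    return service.get('links', []) + service.get('volumes_from', []) + net_dep


def sort_service_dicts(services):
    by_name = {s['name']: s for s in services}
    done = set()       # permanently visited
    on_path = set()    # on the current DFS path (cycle detection)
    ordered = []

    def visit(service):
        name = service['name']
        if name in done:
            return
        if name in on_path:
            raise DependencyError(
                'Circular dependency between %s' % ' and '.join(sorted(on_path)))
        on_path.add(name)
        for dep in get_dependencies(service):
            if dep == name:
                raise DependencyError('A service can not link to itself: %s' % name)
            if dep in by_name:                 # external containers are skipped
                visit(by_name[dep])
        on_path.discard(name)
        done.add(name)
        ordered.append(service)                # deps were appended first

    for service in services:
        visit(service)
    return ordered
```

Under this sketch, every dependency lands earlier in the result than its dependent, which is the invariant the ordering tests ultimately rely on.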