├── README.md
├── concepts
│   ├── factors.md
│   ├── terms.md
│   └── topology.md
├── design
│   ├── filesystem.md
│   ├── images.md
│   ├── supervisor.md
│   └── wheels.md
└── instructions
    ├── axle-containers.md
    └── hub-containers.md

/README.md:
--------------------------------------------------------------------------------

# Radial Documentation

## **PENDING UPDATES**

Radial was originally conceived very early in the Docker story, and it has been an alpha-quality experiment to try to solve some of the gaps in the Docker ecosystem. Many important features were not yet implemented, the ecosystem was exploding with many different ways to use containers, and best practices were being stumbled upon slowly over time. Many things have happened since Radial's inception that have made the Docker ecosystem a little more mature:

* Kubernetes has come on the scene and dominated the way orchestration is handled at scale
* The Docker daemon has matured considerably and allows for much more control of the processes running in the containers
* The orchestration and networking picture for Docker has a clearer trajectory with the introduction of machine, compose, and swarm
* Tooling around Docker has also matured and there is less and less reinventing of common tasks that must be done

So with all this considered, I've revisited what Radial is and what it should be in relation to all these changes, and I now have a better idea of what Radial can contribute to the bigger picture. In general, the focus is on a simple implementation of container best practices that is orchestration agnostic and can scale both large and small for the average user who does not want to make custom Dockerfiles in order to use a program. Some of the upcoming changes include:

* Pulling Supervisor from the containers.
  * Radial will strictly uphold the one-process-per-container principle, as it is much easier now to send signals, enter container processes, and link containers to each other. There is less and less evidence in my experience to warrant ever having multiple processes in a container.
* Standardizing setup and entrypoint scripts.
  * Drop-in setup and running of final binaries
* Simpler log management
  * Make use of the logging driver in Docker 1.6
* Simpler hub scripts
  * Using the new env-file
* Spec
  * Define a spec to promote standardization and interoperability

And many more. Check out the beta branches of each repo to see the latest and greatest. Comments are always welcome. Thanks!

## What is Radial?

Radial is a [Docker](http://docker.io) container topology strategy for managing app stacks on a per-app basis. It was created to help me understand, record, and put into practice the intersection between [Twelve-Factor App](http://12factor.net) best practices and Docker features. The results are my own topology strategies for manually implementing Docker containers in a predictable, intuitive, and flexible way. I am documenting and organizing it here for all to critique, use, and make suggestions to. It should be a good primer for enthusiasts like myself and professionals alike to start to understand how the different features of Docker meet the needs of hosting processes in a server cluster.

## What Can I Do With It?
Radial is a container topology strategy that is outlined in this documentation and is implemented in the suite of accompanying Docker [images](https://index.docker.io/u/radial/) and Dockerfiles that this project supplies. Since Radial deals entirely with how you spread your application stack across containers, it is intended to be compatible with all orchestration, PAAS, and cluster management tools without requiring or locking you into any single one of them. If you like to use these images manually, they should be able to scale up with you into complex cluster use. Hopefully, there are some good ideas in here for you to use or replicate. Better still, hopefully these images themselves will serve you well in all your uses.

## What's The Strategy?

In short, I use the analogy of an "axle", a "hub", and a "spoke" to help conceptually link together Docker containers into different classes based on what we need the entire stack (the "wheel") to be doing. Axle containers are volume containers, hub containers manage application configuration, and spoke containers are our application containers. I've created specialty Docker images streamlined for each of these classes; they are available on the [Docker Index](https://index.docker.io/u/radial).

These images use tools such as busybox to keep container size down, git to clone and combine configuration, and Supervisor to manage all of our processes and glue everything together.

Radial makes configuration a first-class citizen by stripping it out of the application container and putting it in its own container (named the "hub"), which is later mounted into the application container (the "spoke"). This allows interchangeable and modular application containers for many use cases.

## The Documentation

### Concepts

* [The Topology](/concepts/topology.md)
* [The Factors](/concepts/factors.md)
* [Terms](/concepts/terms.md)

### Design

* [File System Structure](/design/filesystem.md)
* [Wheels](/design/wheels.md)
* [Radial Images](/design/images.md)
* [Supervisor](/design/supervisor.md)

### Instructions

* [Axle Containers](/instructions/axle-containers.md)
* Hub Containers
* Spoke Containers

## Help

These documents are works in progress as I put their suggestions into practice. Updates are to come as suggestions and testing come in. General help and all other discussions are welcome over at the [forum](https://groups.google.com/forum/?pli=1#!forum/radial-docker-topology). For any bugs, please use the GitHub bug tracker for the appropriate code repository.

--------------------------------------------------------------------------------
/concepts/factors.md:
--------------------------------------------------------------------------------

# The Factors

With inspiration from [The Twelve-Factor App](http://12factor.net), I explore how Radial helps to fulfil certain 12-factor principles using Docker features.

## Codebase

>One codebase tracked in revision control, many deploys

Docker images make this a breeze. Dockerfiles reproduce a build on any machine and sharing them is straightforward across machines.
Where Radial adds to this 12 | principal, is that it specifies a container solely for the application itself, 13 | separate from any configuration or environment information to make this more 14 | modular. This is the spoke container. 15 | 16 | ## Configuration 17 | 18 | >Store config in the environment 19 | 20 | The typical interpretation of this factor is with regards to configuration that 21 | varies in between deploys (staging, production, developer environments, etc). 22 | This can be things such as credentials to external services, and per-deploy 23 | values that are what make a staging deploy different from a production one for 24 | example. 12-factor states these values should live in environment variables, not 25 | in the code itself. 26 | 27 | Radial wheels take this concept a bit further. As stated in "Codebase", since we 28 | separate out ALL configuration from the application container, we have a little 29 | more freedom for combining configuration and code because everything is designed 30 | to be modular. Radial uses a hub container to store application configuration, 31 | and since our fundamental "unit" for deploying apps is the wheel (multiple 32 | containers) instead of the single container, we can now use the wheel repository 33 | as a method to store this deploy specific information. Programs like [fig][fig] 34 | allow us to think in terms of multiple containers coordinated together. It is in 35 | a fig.yml, or some similar orchestration configuration, that we can specify our 36 | remaining environment configuration. The spirit of this 12-factor principle is 37 | still kept; our application source code is completely separate from any and all 38 | instance and configuration settings. 39 | 40 | [fig]:http://www.fig.sh 41 | 42 | ## Backing Services 43 | 44 | >Treat backing services as attached resources 45 | 46 | Because of spoke containers, use of an application can be ubiquitous across any 47 | use case. Again, because application configuration in the hub container is 48 | separate from the application in the spoke container, one can easily use the 49 | same Docker image for any situation. No need to make an entirely separate image 50 | just because you want to change configuration settings. This isn't specific to 51 | this 12-factor principle, but this is especially useful when dealing with 52 | database services, which usually a little more complex to setup. 53 | 54 | ## Build-Release-Run 55 | 56 | >Strictly separate build and run stages 57 | 58 | Docker really does force you to separate these stages. Build is represented by a 59 | `docker build` command, release is a `docker create` command with environment 60 | variables to input the configuration, and you run it immediately with `docker 61 | run`. 62 | 63 | The component of this principle that Radial helps with is the "release" stage; 64 | the combining of application with configuration. Because configuration and 65 | application code are represented by their own containers, all it takes is a 66 | simple `--volumes-from` commands to link up a spoke container to it's 67 | configuration in the hub. This combination can be very quick to implement 68 | because we don't require a build step just to change configuration. Already have 69 | a working configuration that you want to test with a new version of the binary? 70 | Use the existing hub container image to test it out. 
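To make this concrete, a hypothetical release step might look like the following sketch (the image tags and container names are purely illustrative):

```
# The hub container already exists and holds this wheel's configuration
docker pull example/myapp-spoke:2.0

# "Release" is just combining the new spoke image with the existing hub
docker stop myapp && docker rm myapp
docker run -d --name myapp --volumes-from hub example/myapp-spoke:2.0

# Rolling back is the same operation with the previous image tag
docker stop myapp && docker rm myapp
docker run -d --name myapp --volumes-from hub example/myapp-spoke:1.9
```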
This modularity makes the various combinations easy to manage because it encourages smaller images that can easily be pieced back together in case of rollback.

## Concurrency

While concurrency is more about designing a stateless app than it is about how you organize your containers, I feel that Radial's modular design helps encourage this. Because configuration is stored in the hub container, and because volume sharing is done predictably (the hub does `--volumes-from` the axles, spokes do `--volumes-from` the hub), spoke containers can be spun up and scaled on an existing wheel and simply attached to it. This can give a simple wheel more "features" such as service registration or application monitoring, or even run multiple identical instances of already existing spoke containers.

## Disposability

>Maximize robustness with fast startup and graceful shutdown

Radial uses small and modular images for quick startup, [Supervisord][supervisor] for in-container process management of one or more running processes within the container itself, and carefully designed entrypoint scripts to facilitate proper transfer of UNIX signals from the docker daemon and to enforce proper behavior of container start, restart, and shutdown procedures.

[supervisor]:http://supervisord.org

## Dev/prod parity

>Keep development, staging, and production as similar as possible

The easier it is to reuse the same core images for development and production, the closer the development environment will be to production. Many more factors go into supporting this principle of course, but as far as spreading an application stack across Docker containers goes, if you can use the same web server and database images on your development machine as on your production machines, only making small configuration changes in your hub container as needed, we are one step closer to keeping this 12-factor principle.

## Logs

>Treat logs as event streams

As far as the application is concerned, writing to STDOUT or STDERR is mandatory in order to work with [Supervisord][supervisor]. So this principle is preserved; the application itself doesn't bother with log files. However, Supervisor itself does in fact write these logs to files, and our end goal is still to make our log events a _stream_, not just files. The Radial solution is to gather all logs for the wheel in a single spot, the `/log` directory. Within it, unique folders are created that allow any number of spokes to write logs there without name collisions. The idea is simple: if we can make all the logs available in a predictable location, we can use whatever logging program we want later to turn these log files into a stream. The log harvesting itself is a feature for its own spoke container, and Radial doesn't make that decision for you by baking log management into its containers.

## Admin Processes

>Run admin/management tasks as one-off processes

Radial spoke containers allow for a standard daemon mode as well as a custom entrypoint-like command mode that allows one to specify an alternate behavior for that spoke container.
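As a sketch, a one-off admin task attaches to an existing wheel just like any other spoke would; the image name, command, and paths below are illustrative only:

```
# Run a one-off backup using the same hub (and therefore the same
# configuration and sockets) as the long-running database spoke
docker run --rm --volumes-from hub example/postgresql-spoke \
    pg_dump --host /run/postgresql --file /data/backups/mydb.sql mydb
```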
This feature allows one to use the same spoke 133 | container to run a database (say postgresql) and manage it directly using 134 | it's management program such as `psql`. This keeps the environment between the 135 | admin process and the app identical. 136 | 137 | To further simplify this task, Radial organizes all Supervisord unix sockets as 138 | well as any other application sockets in the hub container in a shared `/run` 139 | directory for easy inter-spoke communication. This has the added benefit of also 140 | allowing access to application management via the unix sockets without using 141 | built-in SSH or opening any holes within the spoke container itself. One-off 142 | admin/management tasks are added onto a wheel just like any other spoke 143 | container would. Everything it needs will be available via the hub. 144 | -------------------------------------------------------------------------------- /concepts/terms.md: -------------------------------------------------------------------------------- 1 | # Terms 2 | 3 | Here I'll clarify some terms that have been used loosely thus far in the context 4 | of the Radial container topology: 5 | 6 | **Stack|Application Stack**: _One or more specific pieces of software that work 7 | in tandem to deliver a coherent result._ 8 | 9 | In Radial, the Wheel is most analogous to an application stack in its scope, but 10 | it is not limited to a stack alone. A Wheel can allow for common configuration 11 | profiles, sharing of unix sockets, logs, cache etc., all while maintaining 12 | separation of each application in the stack due to their being in their own 13 | application containers (spokes). Typically, this implies that a Wheel must 14 | reside on a single host but this isn't necessarily always true. 15 | 16 | **Service**: _An abstract software functionality that can be made up of one or 17 | more application stacks. A stack can be sufficient by itself to deliver a 18 | service, or a service may be made up of many stacks._ 19 | 20 | A single database on a single host can be used for multiple purposes; it by 21 | itself can be considered a service. The components of a LAMP stack on a single 22 | host by themselves are transparent, yet altogether, provide a web service. 23 | Variations of core/monitoring/api/master/worker stacks across all nodes in a 24 | server cluster could provide a distributed file system or computation engine. 25 | That is a single service made up of many different heterogeneous stacks on 26 | multiple hosts. 27 | 28 | **Wheel**: _An instance of one or more optional axle container(s), a mandatory 29 | hub container, and at least one spoke container._ 30 | 31 | A Wheel is a flexible entity. Since it is a modular collection of cooperating 32 | containers, a Wheel can be your atomic unit for both stacks and services in any 33 | combination you can think up. When designed right, spoke containers are 34 | interchangeable because any and all configuration is stored in the hub 35 | container, separate from the spoke. So the `radial/postgresql` container used 36 | by itself, is the same used in the LAMP stack, is the same used as some part of 37 | a distributed database service across many hosts etc. The goal of Radial is to 38 | eliminate Dockerfile explosion all because of slightly different use-cases of 39 | similar services, all while not forfeiting Docker or web best practices. 
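As a sketch of that reuse (the hub and container names here are illustrative), the same spoke image is simply attached to whichever hub holds the configuration for a given wheel:

```
# Wheel A: a standalone database with its own hub
docker run -d --name db-a --volumes-from hub-a radial/postgresql

# Wheel B: the same image, unchanged, serving as the database of a LAMP stack
docker run -d --name db-b --volumes-from hub-b radial/postgresql
```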
40 | 41 | 42 | -------------------------------------------------------------------------------- /concepts/topology.md: -------------------------------------------------------------------------------- 1 | # The Axle-Hub-Spoke Topology 2 | 3 | Before many of the Docker container orchestration services became popular and 4 | mature, manual lifecycle tasks of a cluster of containers was a bit of a mess. I 5 | wanted to streamline my understanding of application stack components and fit 6 | them to Docker features intuitively and systematically. Even now with tools like 7 | [Fig][fig], [Kubernetes][kube], and others, I'm finding these topology 8 | principals very easy to transfer no matter the orchestration tool. 9 | 10 | One challenge I frequently came across early on was related to inconsistency in 11 | the Docker Index regarding how each repository's config/data/log logic was 12 | carried out in each of the containers. The stated goal of the Docker Index is 13 | portability and sharing, and while the sharing part is built-in, the portability 14 | part was lacking to me. There were very few images I could literally `docker 15 | pull` and feel confident that it was setup the way I wanted and the image would 16 | behave in a predictable manner. I found myself reconstructing the image almost 17 | every single time I wanted to use someone else's image due to my specific 18 | configuration needs. Unless individuals used strategies similar to those used in 19 | the Radial Topology, the entire "image sharing" part of the Docker index seemed 20 | a bit unfulfilled to me. 21 | 22 | Thus, for Radial, I've used the imagery of a wheel to help conceptually keep 23 | things simple. Just like a real wheel, the main parts of a Radial "wheel" are 24 | specialized containers each labeled an "axle", a "hub", and a "spoke" container 25 | that each serves a specific purposes in our topology. Typifying containers this 26 | way allows us to manage our configuration files, our data/code, and our logs in 27 | an easy to remember way. With this axle-hub-spoke strategy, we have a 28 | predictable method for the ways that: 29 | 30 | * application code is retrieved and/or run while remaining portable 31 | * configuration is loaded/stored, edited, and implemented 32 | * everything is isolated, tested, run, and disposed of 33 | * volumes get created, shared and persisted across containers 34 | * logs are stored and collected 35 | * admin processes are carried out and how those changes are kept 36 | 37 | [kube]:http://googlecloudplatform.github.io/kubernetes 38 | [fig]:http://www.fig.sh 39 | 40 | 41 | ## The Axle Container 42 | 43 | All wheels rotate on an axle, and if our cluster is made up of various "wheels" 44 | (application components, stacks, or what 45 | [Kubernetes][kube] would call a 46 | "pod"), each with their respective hubs and spokes, then there is a fundamental 47 | starting point, the axle container, that must be created first to allow for some 48 | persistent containers and services. Specifically, axle containers: 49 | 50 | * are used for persisting volumes that one or multiple wheels will use for 51 | support or other persistent storage, but that have nothing to do with any one 52 | particular wheel's configuration, or application binary. One common example is 53 | log storage. Another is a data set that could be created by one container, 54 | collated by other, and displayed/hosted by a third on a website. 55 | * can include bind-mount volumes unique to specific hosts. 
An example of this would be a music/video library that needs to be available to some desktop workstation as well as served or managed through some process in the cluster.

## The Hub Container

Hub containers are a type of volume container with four purposes:

* Managing, storing, and versioning configuration and static files.
* Creating/storing/persisting all new volumes needed for its respective spoke container (application or running process) in a typical [volume container](http://crosbymichael.com/advanced-docker-volumes.html) fashion.
* Combining exposed volumes from all other relevant axle containers that might be needed for this particular wheel. Two examples are a log container or a specific media library.
* Being a standardized location for unix sockets and other inter-spoke communication.

The idea is that the hub is the gathering point for all the complexity of your wheel as it relates to your stack: the volumes, the local bind mounts, the configuration, etc., so that your spoke containers are free to worry only about the code and the running process. This completely separates the configuration from the application itself, allowing further modularity in managing everything.

## The Spoke Container

Spoke containers are your applications and workers. They should, by design, always assume that:

* the configuration normally stored in `/etc` is stored under `/config`
* the data and other "stuff" that applications tend to generate or need to access (cache, helper folders, things normally found in a user's home directory) is in `/data`
* the location of the logs will be `/log`

All a spoke container needs to do is be run with `--volumes-from` its respective hub container, and all of the above will be accessible to it. A spoke container's purpose is to manage, store, version, and run the app/code/process. It should:

* Be completely configuration agnostic. As a rule, configuration cannot be bundled with the spoke container. It must be managed separately and stored in `/config`, as mentioned previously, in the hub container for this Wheel.
* Be modular. The spoke container image for a singular database service running 3 containers (an axle for database storage, a hub for configuration, and a spoke for the database binary) can be the same spoke container used as part of a 6-container LAMP stack (an axle for database storage, an axle for log storage, and a hub for configuration and for hosting sockets between the web server, database, and web app spoke containers).
* Blissfully assume the standardized locations for configuration, data, and logs via its use of `--volumes-from` the hub container.

Designing spoke containers this way encourages safer application containers because it doesn't attempt to make any user-friendly/configuration/security trade-offs.

## Wheels

With the above responsibilities delegated amongst the three types of containers, our application stack can be fully modularized. Altogether, an instance of any combination of axles, a single hub, and any number of spoke containers can be referred to as a "wheel" and [packaged all together](/design/wheels.md) as such.
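Put together, a minimal wheel might be brought up by hand roughly like this (the hub and spoke image names are illustrative; any orchestration tool can do the same thing declaratively):

```
# Axle: a bare volume container for logs
docker run --name logs --volume /log radial/axle-base IDLE

# Hub: combines the axle's volumes with this wheel's configuration
docker run -d --name hub --volumes-from logs example/myapp-hub

# Spoke: the application, seeing /config, /data, and /log through the hub
docker run -d --name myapp --volumes-from hub example/myapp-spoke
```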
At 121 | the moment, wheels designed to be an atomic unit that is located on a single 122 | host. 123 | -------------------------------------------------------------------------------- /design/filesystem.md: -------------------------------------------------------------------------------- 1 | # Config, Data, Logs, Run 2 | 3 | The [Filesystem Hierarchy Standard][fhs] provides a great deal of flexibility 4 | regarding where and how to organize our file system. On top of that, many of our 5 | individual preferences are heavily influenced by the distributions we use day to 6 | day. This complicates more than it helps however when it comes to the type of 7 | distributed and multi-layered filesystem that emerges when we start to combine 8 | Docker containers using volumes. Some helpful usage patterns have emerged in the 9 | community, and for Radial, I incorporate some of those practices along with some 10 | additional items. 11 | 12 | * All configuration that would normally be in `/etc` is stored in `/config`. 13 | This includes internal application configuration as well as any other 14 | instance/deploy configuration (Supervisor) that could later get inserted at 15 | build time. 16 | * All static data that is persisted in typical [volume 17 | container](http://crosbymichael.com/advanced-docker-volumes.html) fashion 18 | should find it's way to a sub-directory of `/data`. This directory is also a 19 | catch-all for caching and other runtime data that could typically be thrown 20 | somewhere in a users `/home` directory. Effectively, this directory is a 21 | combination of `/mnt`, `/opt`, `/home`, and `/var`. 22 | * All logs are written to `/log` in a sub-folder that is named after the 23 | container ID. Since `/log` is typically it's own axle container, this 24 | directory could potentially collect the logs of many wheels across a single 25 | host, or even across an entire small cluster of hosts. Thus, the naming of 26 | each folder is the same as the spoke containers short ID to prevent 27 | collisions. 28 | * `/run` is used just like one would expect on typical systems for the location 29 | of socket files and other runtime data. 30 | 31 | In more detail: 32 | 33 | [fhs]: http://www.pathname.com/fhs/ 34 | 35 | ## Why `/config`? 36 | 37 | * We want to avoid editing configuration once our container is running like the 38 | plague. It's not easily reproducible and promotes run-away container 39 | modification, which we don't want. 40 | * Relying on the standard location for application configuration doesn't work 41 | well when we want modular containers. Things can get lost when doing complex 42 | volume sharing and mounting in multiple containers when the configuration 43 | paths are not standardized from the onset. 44 | * It's pretty standard for programs to support a custom configuration path at 45 | runtime so we leverage this common option to our advantage and always point 46 | our binaries configuration to it's own location in `/config`. 47 | 48 | ## Why `/data`? 49 | 50 | Similar to point 2 above; with many different services and programs using 51 | different default paths for their data directories, and many of those same 52 | services using different locations even between operating system installations, 53 | we want to know that there is a 100% guaranteed match and consistency between 54 | our persistent volume storage hub container and our application spoke container. 
55 | We want piece of mind that as long as we specify either at run time or in our 56 | configuration that the data directory resides at `/data` we are good to go with 57 | no questions asked. 58 | 59 | ## Why `/log`? 60 | 61 | If we've already created an axle container for our logs at `/log`, we'll use 62 | `--volumes-from` our already-running "logs" container to make it available now 63 | to our hub container and by association to our application spoke container 64 | (itself, running with `--volumes-from` the hub container). Keeping with the now 65 | established pattern, we can also specify the path (and non-conflicting name) for 66 | our log file at run time for our app and have our logs now available in one 67 | location for processing using whatever method we choose. 68 | 69 | ## Why `/run`? 70 | 71 | Since containers don't run anything by default, this space is largely unused in 72 | containers. Radial will use this folder however to share unix sockets between 73 | various spokes. This is also the location of each spokes Supervisor socketfile. 74 | This allows one to control all inner processes of a wheel from one place. 75 | -------------------------------------------------------------------------------- /design/images.md: -------------------------------------------------------------------------------- 1 | # Radial Images 2 | 3 | Here I'd like to dive more technically into the helper images available and how 4 | they work in the topology. 5 | 6 | ## Busyboxplus 7 | 8 | I've created a suite of lightweight [busybox images][busyboxplus] specifically 9 | for their role as Hub and Axle containers: a super small base image, and a git 10 | and cURL flavor. 11 | 12 | The base Busyboxplus base image, available via `docker pull 13 | radial/busyboxplus:base` is Internet enabled and minimal in size (1.27mb). It is 14 | the default image for Axle containers since they are completely inert and don't 15 | need to do anything. Internet access was free in these containers size-wise so 16 | I figured I'd include it. 17 | 18 | The git flavor is the default flavor for Hub containers. Weighing in at 12.86mb, 19 | it has a minimal version of git installed for basic 20 | extraction/merging/committing. It's not meant for interactive git usage on the 21 | development side, but is more for cloning and merging configuration on the 22 | deployment side. Git already depends on cURL so this image also comes with a 23 | full cURL installation as well. You can access this image via `docker pull 24 | radial/busyboxplus:git`. 25 | 26 | The cURL flavor, which is slightly smaller (4.23mb) is exactly as it sounds: the 27 | base image plus cURL compiled from scratch for busybox without git. It is 28 | available via `docker pull radial/busyboxplus:curl`. It's not used in any of the 29 | Axle, Hub, or Spoke containers by default but is included as an alternate to the 30 | git flavor because of cURL's incredible utility in so many ways even without 31 | git. 32 | 33 | If you are familiar with buildroot, you can see how these images were compiled 34 | [here](https://github.com/radial/core-busyboxplus) 35 | 36 | [busyboxplus]: https://index.docker.io/u/radial/busyboxplus/ 37 | 38 | ## Core Distro 39 | 40 | The [core-distro images](https://index.docker.io/u/radial/distro/) are the fully 41 | featured operating system for our actual applications. Because I'm most familiar 42 | with Ubuntu, this is what I've chosen as the main operating system. 
It consists of the official Docker Ubuntu image with a couple of modifications:

* The `/etc/apt/sources.list` mirrors are changed from "archive" to each of the Amazon EC2 data centers.
  * Downloads from the default "archive" location can be very slow at times and it's hard to guess the speed or distance from other institutions and mirrors. So using Amazon EC2 from the start is a pretty safe way to go.
* Each Dockerfile also sets the appropriate timezone data for each data center region selection, as well as the 'en_US.UTF-8' locale and American English language for all images regardless of region.
  * The idea is to optimize the mirrors and timezones for all the regions, but unify them all by language. This is because you speak just one language to your computer, but you'll deploy your apps all over the world and will want the timezones to work properly and the mirrors to be fast.

Source Dockerfiles can be found [here](https://github.com/radial/core-distro).

## Axle-Base

The [Axle-Base image](https://index.docker.io/u/radial/axle-base/) is just a wrapper for the base Busyboxplus image, renamed to keep the naming scheme consistent. Its sole purpose is housing shared volumes. There should never be a need to enter or modify this container directly once run, so it pretty much does nothing.

Dockerfile source is available [here](https://github.com/radial/imagebase-axle).

## Hub-Base

The [hub-base image](https://index.docker.io/u/radial/hub-base/) uses the git flavor of Busyboxplus as its OS core. This is meant to allow it to combine both a [skeleton configuration](https://github.com/radial/config-supervisor) of [Supervisor](/radial/supervisor) as well as any additional configuration you need for your application. Both of these configurations can be acquired statically and captured in Docker's version control by building images using this image as a base, or dynamically at runtime inside an exposed volume.

The method for achieving this is defining environment variables inside a special "build-env" file that is sourced statically at build time, or defining these environment variables dynamically at run time. These variables are the git repositories and branches that contain the configuration to clone and combine using git.

The hub-base image is also responsible for modifying the file ownership and permissions of these configuration files once combined with git. This is also done using environment variables.

Further details on how to use this image can be found in [its repository](https://github.com/radial/imagebase-hub).

## Spoke-Base

Finally, we have the [spoke-base image][spoke]. It is based on the [core-distro images](https://github.com/radial/core-distro) and adds Supervisor to it for use in your final Spoke container running your application. Much of its automation is abstracted away in the default Supervisor configuration and in the Spoke's standard entrypoint script. This allows your Dockerfiles to remain very simple when you use this image as a base. Check out [its repository][spoke] for more details.
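As a sketch of that simplicity, a hypothetical spoke Dockerfile might contain little more than the package installation and the spoke's name (the image name, package, and `SPOKE_NAME` value here are illustrative; see the spoke-base repository for the actual contract):

```
FROM radial/spoke-base
ENV SPOKE_NAME myapp
RUN apt-get update && apt-get install -y myapp
```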
102 | 103 | [spoke]:https://github.com/radial/imagebase-spoke 104 | -------------------------------------------------------------------------------- /design/supervisor.md: -------------------------------------------------------------------------------- 1 | # Process Management 2 | 3 | [Supervisor](http://supervisord.org) is a common choice for process management 4 | inside Docker containers. Many already have experience with it and it is quite 5 | simple to setup and use, thus it was included here as well for the Radial 6 | Topology. 7 | 8 | ## Configuration 9 | 10 | The [Supervisor skeleton][super_skel] is a default configuration for how 11 | Supervisor will be run in our Spoke container. It should suffice for most of 12 | your needs, but it is also configurable in case you would like to make 13 | modifications. This is done by creating your own `supervisord.conf` and making 14 | it available in a git repository with the same folder structure as the template. 15 | Depending on if you build your hub dynamically or statically, you select this 16 | repository by setting the environment variables 'SUPERVISOR_REPO' and 17 | 'SUPERVISOR_BRANCH' in either your hub Dockerfile, your `docker run` arguments, 18 | or by setting it in your `hub/build-env` file in your Wheel repository for 19 | sourcing during build. In either case, git will clone and pull the designated 20 | branch at the specified repository location. To find out more about which 21 | features of git are supported by the Hub container, check out the repository for 22 | the [hub-base image](https://github.com/radial/imagebase-hub). 23 | 24 | In case you would like to use your own Supervisor configuration, there are some 25 | options you must be mindful of not to change to make sure that the rest of the 26 | Wheel will continue to work properly. 27 | 28 | ## Folder Structure 29 | 30 | ``` 31 | supervisor 32 | ├── conf.d 33 | │   └── sshd.ini 34 | └── supervisord.conf 35 | ``` 36 | The 'supervisor' folder needs to be in the root of your repository and the 37 | folder naming and structure can't deviate from the above tree. Note that the 38 | supervisor folder sits directly in your `/config` directory in the root of your 39 | Hub container. 40 | 41 | ## Subprocess init Files 42 | 43 | It is a Docker best-practice is to keep the number of processes in a container 44 | to a minimum. However, multiple processes are easily supported. By default, each 45 | spoke starts up a group identified by the `$SPOKE_NAME` variable in the spoke 46 | Dockerfile. You can have any number of subprocesses configurations split amongst 47 | any number of files. They all get put in the `/conf/supervisor/conf.d` folder 48 | when building your hub container. As long as you properly group them, the entire 49 | group will get started. See 50 | [this](https://github.com/radial/wheel-log.io/blob/master/hub/config/supervisor/conf.d/logio.ini) 51 | wheel's configuration for an example. 52 | 53 | **Note**: use of the "autostart" feature is discouraged when you have multiple 54 | spokes attached to your hub container. All the Supervisor subprocess 55 | configuration files will be detected by all the other Supervisor daemons in 56 | every other spoke. So to keep from starting every process automatically in every 57 | spoke, we instead resort to having each spoke manually launch it's own 58 | respective subprocess. Again, this is done via a properly configured subprocess 59 | group and use of `$SPOKE_NAME`. 
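As a sketch of what one of these grouped subprocess files might look like (the program name and command are illustrative; the wheel linked above shows a real one):

```
; myapp.ini -- placed in /config/supervisor/conf.d by the hub.
; The group name should match the $SPOKE_NAME of the spoke meant to run it.
[group:myapp]
programs=myapp

[program:myapp]
command=/usr/bin/myapp --config /config/myapp.conf
autostart=false
autorestart=true
```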
60 | 61 | ## Logging 62 | 63 | By default, the Spoke container, when run, will create a folder in `/log` named 64 | as the containers ID. It gets it from the '$HOSTNAME' variable that is set in 65 | the docker container by default. Luckily, Supervisor has access to this variable 66 | as well, so the [Supervisor skeleton][super_skel] by default will dump all the 67 | logs into each spokes unique folder. This is what allows us to use the same axle 68 | container to store the logs from all our spoke containers without the worry of 69 | name collisions. We then have only one place to look for viewing and backup our 70 | logs. 71 | 72 | [super_skel]: https://github.com/radial/config-supervisor 73 | 74 | ## Supervisor Sockets 75 | 76 | The Supervisor daemon can be controlled with the `supervisorctl` command via a 77 | unix socket located in `/run/supervisor/$HOSTNAME`. When built, all Supervisor 78 | daemon sockets are found in the `/run/supervisor` directory within their 79 | respective `$HOSTNAME` folder. The `/run` directory is being shared by the hub 80 | container. Future plans are in the works to expand the role of these sockets, 81 | the `/run/supervisor` directory, and other parts of Supervisor in general for 82 | automatic monitoring, resource reporting, and even service discovery. They will 83 | be add-on spoke services just like any other. 84 | -------------------------------------------------------------------------------- /design/wheels.md: -------------------------------------------------------------------------------- 1 | # The Wheel 2 | 3 | The Wheel is the standard model for packaging your entire stack. You can see an 4 | example repository [here](https://github.com/radial/template-wheel) to get a 5 | more complete sense of how such a thing would work. While some components of the 6 | Wheel repository are works-in-progress and subject to change, it's still worth 7 | explaining here to get a sense of what I hoped to achieve with it. 8 | 9 | Note that what is described here is a Wheel repository. It is a single 10 | repository that can be used in conjunction with container orchestration to 11 | produce the "running" Wheel, which is a collection of interchangeable containers. 12 | 13 | ## File structure 14 | 15 | ``` 16 | . 17 | ├── axle 18 | ├── hub 19 | │   ├── Dockerfile 20 | │   ├── build-env 21 | │   ├── config 22 | │   │   ├── someprogram.conf 23 | │   │   └── supervisor 24 | │   │   └── conf.d 25 | │   │   └── someprogram.ini 26 | │   ├── data 27 | │   │   ├── media 28 | │   │   │   └── dataset 29 | │   │   └── src 30 | │   │   └── someprogram.sh 31 | │   └── log 32 | │   └── logfile 33 | └── spoke 34 | └── Dockerfile 35 | ``` 36 | 37 | While the structure of the 'axle' and 'spoke' folders is technically not 38 | mandatory, it is suggested to keep it like this for compatibility with future 39 | planned features. 40 | 41 | The folder structure of the 'hub' folder however IS mandatory. Deviating will 42 | break the 'hub-base' and 'spoke-base' images. Application configuration sits in 43 | `hub/config` and the accompanying Supervisor '.ini' file goes in 44 | `hub/config/supervisor/conf.d`. 45 | 46 | ## Configuration Branch 47 | 48 | Your application has a repository for it's code. Your deployment code and 49 | configuration however, as it pertains to using Radial, is stored in the 50 | Wheel repository. 
The 'hub-base' image has the ability to `ADD` your configuration (whatever files you put in your `hub/config` and `hub/config/supervisor/conf.d` folders) as well as pull it from a repository. If pulling from a repository is your method of choice, it is suggested that you make a separate branch in your Wheel repository with just the contents of your `hub/config` folder and all of its subfolders and files. The 'hub-base' image contains logic to pull the 'config' branch of your Wheel repository and merge it with the [skeleton configuration][config-supervisor] used by Supervisor.

The folder structure of your 'config' branch would be as follows:

```
.
├── someprogram.conf
└── supervisor
    └── conf.d
        └── someprogram.ini
```

The two files 'someprogram.conf' and 'someprogram.ini' demonstrate the needs of a typical application, but this folder structure can easily support more complicated situations.

[config-supervisor]: https://github.com/radial/config-supervisor

## Fig

Since Radial makes liberal use of containers and "separates concerns" to an extreme degree, a basic orchestration tool is needed to help manage the building, linking, and deploying of all the containers. Other tools can surely be used for this, but for the sake of simplicity, Radial uses [Fig][fig] for now for demonstration and testing.

[fig]: http://www.fig.sh

## Dynamic vs. Static Modes

Packaged with each wheel are example `fig.yml` and `fig-dynamic.yml` files. They demonstrate the two ways that you can produce images.

### Axles

Axle containers are volume containers, so their use can vary depending on need. Typically, volume containers are dynamic, meant to store the data of some other container, perhaps one running a database. They could also be used statically to house some type of static data set, source code, or media.

### Hubs

Hub containers allow for any combination of `docker build`, `docker run` (without building), and configuration branches stored in `$WHEEL_REPO` environment variables. In order of most "static" to most "ephemeral", you could:

1. Manually create and store the configuration files in a wheel repository, then use `docker build` to capture the files in an image. This image can be transported and stored with the configuration files always maintained within.
2. Use `docker build` to create a static hub container, the same as point 1, but if you don't have access to the actual configuration files and they happen to be stored already in a wheel repository "config" branch, you can specify these `$WHEEL_REPO` repositories in a `/hub/build-env` file that gets sourced on `docker build`, thereby cloning the configuration into the hub container and storing it statically all the same.
3. For a quicker and more dynamic mode, you can simply run the `radial/hub-base` image without building, but specify `$WHEEL_REPO` environment variables at run time and the hub will clone them, as sketched after this list. **This mode is ephemeral**: as soon as you remove this running container, the configuration is gone as well. This should not be a problem, however, since you already have your configuration stored safely in version control; the hub simply acts as a volume container for your configuration in this situation.
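A sketch of that third, fully dynamic mode (the repository URL and axle name are illustrative):

```
docker run -d --name hub \
    -e WHEEL_REPO=https://github.com/example/wheel-myapp.git \
    -e WHEEL_BRANCH=config \
    --volumes-from logs \
    radial/hub-base
```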
### Spokes

After a spoke container is initially built, it is meant to be run dynamically from then on. A spoke container captures a single build of a binary and will eventually run it; it does nothing else. The configuration for this spoke is stored in the hub container, thereby never requiring any customization of the spoke container itself.

--------------------------------------------------------------------------------
/instructions/axle-containers.md:
--------------------------------------------------------------------------------

# Axle Containers

As mentioned before, Axle containers are for hosting regular or bind-mounted volumes. This can be host-specific persistent data, or any other type of [volume container](http://crosbymichael.com/advanced-docker-volumes.html).

By default, Radial uses such a container to store the Supervisor log output of each spoke container. Rather than make a decision on which log management tool one should use, the logs are instead simply collated into a single location. It's the job of some other spoke container to harvest/serve/analyze these logs later if one so chooses.

Let's create our above example and make a "logs" Axle container. If we choose to make a permanent image that we can reuse, the Dockerfile would possibly contain:

```
FROM radial/axle-base
VOLUME ["/log"]
CMD ["IDLE"]
```

Note that the program "IDLE" is the equivalent of `tail -f /dev/null`, meant solely to keep the container from exiting. This makes container management a bit more intuitive regarding volume containers.

It could be built via:

`docker build -t logs .`

and run via:

`docker run --name logs logs`

Building this container however doesn't have any real benefit, so usually it can just be run directly from the command line with:

`docker run --name logs --volume /log radial/axle-base IDLE`

--------------------------------------------------------------------------------
/instructions/hub-containers.md:
--------------------------------------------------------------------------------

# Hub Containers

Hub containers are the big workhorses of the Radial topology. They are the gathering point for all the complexity of your wheel.

## Configuration

The big idea with 12-factor and configuration is that it is:

1. stored separately from application code
2. kept in version control
3. only mixed with application code at deploy time
4. transported via environment variables

The Radial topology addresses each point accordingly:

1. Configuration has its own container (the hub).
2. We use git to manage the various versions of configuration and can access it remotely or directly using a wheel repository.
3. Wheels are modular components, and various versions of configuration can easily be combined with spoke containers depending on need.
4. Environment variables are used for everything.

## Static Mode

*TODO*

* building
* config
* build-env file
* local files

## Dynamic Mode

*TODO*

* running
* config
* env vars

### Tunables

Tunable environment variables; modify at runtime. Italics are defaults.
- **$SUPERVISOR_REPO**: [_https://github.com/radial/config-supervisor.git_] Repository location for the default Supervisor daemon configuration.
- **$SUPERVISOR_BRANCH**: [_master_] Repository default branch.
- **[$WHEEL_REPO[_APP1]...]**: Additional repositories to download and merge with the default SUPERVISOR_REPO repository.
- **[$WHEEL_BRANCH[_APP1]...]**: Branches to pull for the given WHEEL_REPO repositories.
- **$UPDATES**: [_False_|True] Update configuration from the selected WHEEL_REPO repositories (if any) on container restart.
- **$PERMISSIONS_DEFAULT_DIR**: [_"755"_] Default (recursive) directory permissions for /config, /data, and /log.
- **$PERMISSIONS_DEFAULT_FILE**: [_"644"_] Default (recursive) file permissions for files contained in /config, /data, and /log.
- **$PERMISSIONS_EXCEPTIONS**: [_empty_] A single string, separated by spaces, containing a list of files/directories to chmod/chown.
  - The format for a single entry: `{<path>}{:<mode>}[:<user>][:<group>]`
  - These values, separated by ':', are passed directly into `chown` and `chmod`, so things like `/config/*` work for directory contents, and numeric user and group ids work as well.
  - Some examples:
    - `/config/supervisor/conf.d/*:700`
    - `/config/supervisor/supervisord.conf:700:root:root`
    - `/config/supervisor/myprogram.conf:777:myprogramuser`

--------------------------------------------------------------------------------