├── AdoptingContainersInTheEnterprise.md
├── CONTENTS.md
├── CrossingTheOceanWithContainers.md
├── DockerFuelsARethinkingOfTheOperatingSystem.md
├── DockerOrchestrationTools.md
├── FromMonolithToMicroservices.md
├── README.md
└── resource
    ├── AdoptingContainersInTheEnterprise
    │   ├── MotivationsToConsiderDocker.png
    │   └── PaaSMostLikelyToBeAutomated.png
    ├── CrossingTheOceanWithContainers
    │   └── PercentOfOrganizationsPlanningToAddressNeeds.png
    ├── DockerFuelsARethinkingOfTheOperatingSystem
    │   ├── ComparingContainerOperatingSystems.png
    │   └── ForMicroOperatingSystemsSizeMatters.png
    ├── DockerOrchestrationTools
    │   ├── ComparingContainerOrchestrationTools.png
    │   ├── FleetArch.png
    │   ├── FlockerArch.png
    │   ├── KubernetesArch.png
    │   ├── MesosArch.png
    │   └── SwarmArch.png
    ├── FromMonolithToMicroservices
    │   ├── ScalingWithMicroservices.png
    │   └── TheScaleCubeAndMicroservices.png
    ├── sponsorlogos2.png
    └── tnslogo.png

--------------------------------------------------------------------------------
/AdoptingContainersInTheEnterprise.md:
--------------------------------------------------------------------------------

# Adopting Containers in the Enterprise
by Vivek Juneja

---

Disruptive new technologies and practices often follow an exhaustive adoption process in large enterprises. The process typically begins with elaborate presentations and discussions, followed by various proofs of concept. When adoption does occur, it usually starts within small teams building non-critical systems, mostly for development and test workloads. By the time the technology reaches production, a significant amount of time and opportunity has passed.

Containers, and the notion of containerizing enterprise workloads, are going through this process right now in a great many enterprises, fueled by the promise that containers will address the varied issues that plague developer productivity and application delivery in the enterprise.
## Enterprise Motivations

By providing abstraction around the workload, and making it portable, containers become the foundation for supporting a variety of more efficient development processes and application architectures.

Specifically, enterprise development teams and IT organizations see in Docker and containers a means of addressing:

1. Process inefficiencies: Different teams within the enterprise often have their own standards for developing and releasing software. Outsourced development, which is unfortunately often accompanied by limited governance and transparency, exacerbates this problem. Many enterprises are exploring containers as a way to standardize the software release process across teams.

2. Legacy applications: Legacy applications and systems are commonplace in the enterprise. These systems create constant operational and maintenance issues. Project teams struggle to allocate resources to operate on-demand test infrastructure, and as a result frequently forgo testing in favor of releasing new functionality on time. Teams working on legacy applications are attracted to containers for their ability to help make more efficient use of infrastructure.

3. Collaboration: Development teams working on large projects have to coordinate software releases, a process that often takes a long time. Teams working on continuously maintained and developed legacy applications, for example, appreciate the ability of containers to help them synchronize software releases more easily.

![Motivations to Consider Docker](resource/AdoptingContainersInTheEnterprise/MotivationsToConsiderDocker.png)

A development approach based on container images also helps encourage closer interaction between the development and operations teams, thereby encouraging DevOps.

4. Dev/prod parity: Production, development and test environments often have parity issues, drifting apart in ways that lead to constant “runs-on-my-machine” problems. Containers help ensure a consistent runtime environment across various infrastructures, a feature enterprise development teams find extremely useful.

5. VM sprawl: As enterprises adopt virtual machines, VM sprawl often sets in, leading to less efficient utilization of the infrastructure. Enterprises often respond by establishing strict governance rules, like approvals and workflows, to limit the sprawl. These rules, however, reverse the benefits of elasticity and self-service for developers. Because the overhead associated with containers is so much lower than that of VMs, many enterprises are excited about containers as a way to mitigate sprawl.

6. Cloud-native applications: Microservices and other cloud-native application architectures require a different view of infrastructure than is traditionally assumed. In containers, enterprise development teams see an opportunity to more easily build cloud-native applications and take advantage of emerging trends.

## Container Adoption Strategies

When virtual machines started becoming popular inside the enterprise, a major selling point to the IT department was the opportunity to consolidate underutilized infrastructure to reduce its operational footprint and cost. Adoption of containers and the ecosystem surrounding them, however, is being driven by agility, not cost reduction.

There are a handful of strategies that enterprises commonly take when evaluating and adopting container technologies:

1. Go after low-hanging fruit: When development teams have to struggle to get test environments provisioned and operational, they lose a lot of time, and frustration mounts. Solving infrastructure availability crunches in development and testing is often a good driver for making the leap to containers.
   For projects that don’t require dedicated instances, reusing IT infrastructure for test environments deployed as linked containers is a great way to get started, and can provide important operational knowledge of the technology.

2. Update build and deploy processes: Build and deployment infrastructure needs to be modified for the enterprise to take full advantage of modern infrastructure. To deploy to public and private clouds, some enterprises deploy the end application as a set of machine images, reducing the time required to set up newly provisioned, on-demand infrastructure. The same practice can be extended to container images.

   In a container-based deployment model, the build step can generate the new container image from a pre-existing base image for the environment. The deployment step can then take this image and run it on any infrastructure supporting the chosen container technology.

3. Go “container first”: Adopting a container-first approach for all new projects is another common way to drive adoption. This means that all new projects must build and release software with containers, unless there are specific reasons why they cannot.

   Going container first encourages development teams to consider containers a first-class element of their application topology and spurs the development of container-native applications. In addition, ramping up new development teams with pre-containerized project environments is far easier than with traditional approaches. This helps teams get started quickly, without the complexity of transitioning existing project processes.

4. Standardize base images: It’s important for operations teams to formalize and release standard container images that all projects can use. These customized base images can be hosted on a private registry used by development teams when building projects.
   Changes to the standardized base images could mean new releases on the registry, which can then be transparently used in the development process. Enterprises are accustomed to hosting private repositories for firewall rules and other digital IT assets, so this should not be a strange adoption step for them.

![PaaS Most Likely to Be Automated](resource/AdoptingContainersInTheEnterprise/PaaSMostLikelyToBeAutomated.png)

## Key Issues Faced

Enterprises face two key challenges when incorporating containers into their overall IT strategy: [security and a lack of mature tools](http://www.theregister.co.uk/2015/01/12/docker_security_immature_but_not_scary_says_gartner/).

While web-scale companies like Google and Twitter have been using containers in production for years, it’s still early days for both open source and proprietary products. The issue of tool maturity is largely one of time and experience. With more adoption and demand, the tools will inevitably stabilize and improve.

> Enterprise IT organizations are looking to containers to help them achieve more modern application architectures and development processes, in the process allowing them to innovate more quickly.

Security, on the other hand, has been a dominant focus among early adopters. While a concern for all firms using containers in production, it is an even greater worry for companies running production workloads in containers in multi-tenant environments.

The issue at hand is one of container isolation. To date, containers have not demonstrated the strong, well-tested ability to isolate disparate workloads that has come to be assumed of the various virtual machine hypervisors.

Last year, Canonical introduced [LXD](http://www.ubuntu.com/cloud/tools/lxd), which implements ideas around stronger container security and makes them available via Ubuntu and OpenStack integration.
New projects like [Hyper](https://hyper.sh/) — which uses a minimalist Linux kernel, called the HyperKernel, to load and run containers — are also emerging as interesting alternatives to containers alone. And VMware has been promoting traditional virtualization [as a complement to containers](http://blogs.vmware.com/cto/vmware-containers-containers-without-compromise/) for this very reason.

## Containers in Practice

As organizations continue to drive agility and innovation, they naturally aim to eliminate inefficiencies, such as resource constraints and process bottlenecks. At the same time, they are turning to modern application architecture styles like microservices, and to development processes like continuous delivery, which together allow developers to iterate much more quickly.

Last year, the chief architect of [ING spoke at DockerCon Europe](https://blog.docker.com/2014/12/dockercon-europe-keynote-continuous-delivery-in-the-enterprise-by-henk-kolk-ing/) about how the bank is using Docker to enable continuous delivery. [BBC News also highlighted](https://blog.docker.com/2014/12/dockercon-eu-enterprise-ci-problems-and-our-solutions-by-simon-thulbourne/) how it architected its continuous integration solution around Docker, and shared the caveats and compromises it had to deal with in doing so. The presentations by these two large companies are a sign of how quickly the enterprise is getting serious about container technologies.

By taking one or more of the approaches outlined in this article, enterprises new to the container game can take their first steps down the path of enlightenment, knowing that they’re following a trail that has been trod many times before.

---
Copyright © 2015, The New Stack. All rights reserved.
--------------------------------------------------------------------------------
/CONTENTS.md:
--------------------------------------------------------------------------------

# Contents

Each of the chapters in this preview comes from a book in The New Stack's Docker and Container Ebook Series.

**Book 1: The Guide to the Docker & Container Ecosystem**

* [Crossing the Ocean With Containers](CrossingTheOceanWithContainers.md), *by Jeff Sussna*
* [Docker Fuels A Rethinking of the Operating System](DockerFuelsARethinkingOfTheOperatingSystem.md), *by Susan Hall and Sam Charrington*
* [Adopting Containers in the Enterprise](AdoptingContainersInTheEnterprise.md), *by Vivek Juneja*

**Book 2: Application Development & Microservices With Docker & Containers**

* [From Monolith to Microservices: Finding the Secret Sauce](FromMonolithToMicroservices.md), *by Vivek Juneja*

**Book 3: Automation & Orchestration With Docker & Containers**

* [Docker Orchestration Tools](DockerOrchestrationTools.md), *by Janakiram MSV*

--------------------------------------------------------------------------------
/CrossingTheOceanWithContainers.md:
--------------------------------------------------------------------------------

# Crossing the Ocean With Containers
By Jeff Sussna

---

When cloud computing first made its appearance, most people viewed it as a cost-reduction convenience. Soon, though, many organizations began to recognize its power to transform IT on a deeper level. Cloud offered a vision of infrastructure as a dynamic, adaptable resource that IT could use to power 21st-century business imperatives for agility and responsiveness.
Terms such as “cloud-native” and “[cattle not pets](http://thenewstack.io/pets-and-cattle-symbolize-servers-so-what-does-that-make-containers-chickens/)” expressed the understanding that cloud-based IT required a fundamental mindset shift, away from treating infrastructure components as large, expensive, specialized, handcrafted, and slow to change.

> Docker has captured the industry’s imagination with breathtaking speed.

Containers are taking this transformation to the next level. Docker has captured the industry’s imagination with breathtaking speed. It began in similar fashion to cloud, seeming to provide a more convenient solution to existing packaging and deployment problems. In reality, though, containers point the way towards an even more profound mindset shift than cloud.

While cloud computing changed how we manage “machines,” it didn’t change the basic things we managed. Containers, on the other hand, promise a world that transcends our attachment to traditional servers and operating systems altogether. They truly shift the emphasis to applications and application components. One might claim that containers, in combination with the microservices software pattern, represent the fruition of the object-oriented, component-based vision for application architecture.

In a testament to the rapidity of Docker’s ascent, the conversation has quickly shifted to its readiness for production enterprise use. Blog posts chronicling experiences running [Docker in production](http://thenewstack.io/now-in-beta-rancher-labs-runs-docker-natively-in-production/) duel with others detailing [the ways in which it’s not yet viable](http://thenewstack.io/docker-production-environment-four-recent-examples-two-thumbs-no-go/). This binary argument misses the nature of technology adoption. The fact that a craft has proven itself seaworthy doesn’t obviate the need to figure out how to navigate the ocean with it.
Just as was the case with cloud computing, containers pose as many questions as they answer. These questions arise on multiple levels: architectural, operational, organizational, and conceptual.

Containers make many things possible, without necessarily accomplishing any of them by themselves. Almost immediately after the excitement of recognizing the power of containers, one begins the more laborious process of figuring out how to use them for practical purposes. Immediate issues include questions such as:

* How do containers communicate across operating system and network boundaries?

* What’s the best way to configure them and manage their lifecycles?

* How do you monitor them?

* How do you actually compose them into larger systems, and how do you manage those composite systems?

Various answers to these questions have begun to emerge. Packaging tools such as Packer bridge configuration automation with immutable infrastructure. Cluster management systems such as Kubernetes layer replication, health maintenance, and network management on top of raw containers. Platform-as-a-Service offerings such as Cloud Foundry and OpenShift are embracing containers within their own architectural models.

![Percent of Organizations Planning to Address IT Needs](resource/CrossingTheOceanWithContainers/PercentOfOrganizationsPlanningToAddressNeeds.png)

These higher-order systems answer some of the initial questions that arise while trying to deploy containers. They also, though, raise new questions of their own. Now, instead of asking how to manage and compose containers, one has to ask how to manage and compose the container management, deployment, and operations toolchain.

This process is a recursive one. At the moment, we can’t know where it will end.
What does it mean, for example, to run [Kubernetes on top of Mesos](http://thenewstack.io/mesosphere-now-includes-kubernetes-for-managing-clustered-containers/)? Contemplating that question involves understanding and interrelating no fewer than three unfamiliar technologies and operating models.

More importantly, though, organizations are just beginning to contemplate how to integrate the container model into their enterprise architectures, organizations, and conceptual frameworks. This process will be a journey of its own. It will consist of a combination of adaptation and transformation. The precise path and destination of that journey are both unknown, and will depend to a large degree on each organization’s individual history, capabilities, and style.

> Deep technical change is a complex process. It can’t be predicted or linearly planned.

Implementing it requires the same lean and agile techniques we use for product development. The question, “Is Docker ready for the enterprise?” is the wrong question. A better question would be, “How are containers likely to perturb our organization and our ways of doing things?” Answering that question requires conducting experiments and learning from feedback. It also goes far beyond purely technical concerns.

Adopting a transformative technology such as cloud or containers impacts every aspect of IT. When computing resources pop into and out of existence by the minute instead of the year, and in the hundreds of thousands instead of the hundreds, traditional management methods no longer suffice. Both the configuration management system and the configuration and change management processes need retooling.

New tools and processes, however, are necessary but not sufficient. Technical staff don’t just need retraining to use the new tools; they also need to learn new ways of thinking about what systems are and how to solve problems with them.
Making Docker enterprise-ready involves not just making it technically robust and secure, but also figuring out what it implies for staffing, hiring, and training. The constraint that gates a company’s ability to absorb change often isn’t a new technology itself, but rather the ability to hire people who can comprehend the implications of that technology, and who can operate it based on that understanding.

Ultimately, the impact of containers will reach even beyond IT, and play a part in transforming the entire nature of the enterprise. The value of microservices and containers lies in how they enable smaller, faster, more frequent change. In order to take full advantage of this capability, IT organizations will need to restructure themselves socially as well as architecturally. This cascading transformation process will in turn apply to the enterprise as a whole, as it strives to take advantage of its new capabilities for responsive digital service.

Just as container management systems present new sets of questions, so too do new organizational structures. If a company decides to adopt Holacracy as part of its mission to improve agility, it will have to navigate that adoption process. Just as with technological change, effective social and structural change happens through experimentation, failure, and adaptation.

In thinking about enterprise adoption of Docker or any other container technology, we need to understand it for what it is: a trigger for a much larger, more complex, and long-lasting process. We need to cast our gaze beyond containers themselves, towards the socio-technical systems they are just beginning to perturb.

> We need to apply everything we’ve learned about navigating change and uncertainty, and step beyond the binary success/failure conceptual model of adoption.

In this way, containers are no different from DevOps, or Lean, or any other organizational transformation.
---
Copyright © 2015, The New Stack. All rights reserved.

--------------------------------------------------------------------------------
/DockerFuelsARethinkingOfTheOperatingSystem.md:
--------------------------------------------------------------------------------

# Docker Fuels A Rethinking of the Operating System
by Susan Hall and Sam Charrington

Since the launch of Docker, there’s been an explosion of new container-centric operating systems, including [CoreOS](https://coreos.com/), Ubuntu [Snappy](http://developer.ubuntu.com/en/snappy/), [RancherOS](http://rancher.com/rancher-os/), Red Hat’s [Atomic Host](http://www.redhat.com/en/about/press-releases/red-hat-launches-red-hat-enterprise-linux-7-atomic-host-advances-linux-containers-enterprise), VMware’s recently announced [Photon](http://blogs.vmware.com/cloudnative/introducing-photon/), and Microsoft’s [Nano Server](https://msdn.microsoft.com/en-us/library/mt126167.aspx).

Of course, you can run Linux Containers (LXC) on any Linux distribution, and most other major operating systems now have some sort of comparable technology. But what sets these new container-centric operating systems apart is that they are much lighter weight than a traditional Linux distribution.

“These traditional Linux distros have just gobs and gobs of packages,” said Kit Colbert, VMware’s vice president and CTO of Cloud-Native Apps. “They’ve got 4, 6 GB of stuff in there — the application can’t see any of that. With a Java app, you’ve got to have a JRE [Java Runtime Environment] inside the container for that app to run. It doesn’t need anything outside the container to run. So why do you have 4 or 6 GB of stuff that you just don’t need anymore?”

At the same time, the containers have to run somewhere, and that host runs on an operating system as well. Hence the rise of these new container-centric “micro OSes.”

The idea of a minimalist operating system isn’t new.
Stripped-down operating systems have long been embedded in electronic systems, ranging from traffic lights to digital video recorders. And minimal operating systems, often based on Linux, that can boot from a CD or even floppy disks have been around since the 1990s. But these operating systems were typically designed to run on only a single node.

> Today’s micro OSes are designed for a world in which, thanks to containers, the entire data center is treated as one giant operating system that spans hundreds or even thousands of nodes.

This is leading to new thinking about operating systems. In this chapter we take a look at the major players and their visions for the future of the operating system.

![For Micro Operating Systems, Size Matters](resource/DockerFuelsARethinkingOfTheOperatingSystem/ForMicroOperatingSystemsSizeMatters.png)

## CoreOS

Rivals give props to CoreOS for pioneering the micro OS even before Docker came on the scene. The company recently [added $12 million](http://thenewstack.io/coreos-tightens-fit-with-kubernetes-raises-12m-from-google-ventures/) to its coffers with investment from Google Ventures, and unveiled a technology called [Tectonic](http://tectonic.com/blog), combining its CoreOS portfolio and Kubernetes — Google’s open source project for managing containerized applications.

“The container OS is about building the ideal environment for running your application when it’s OK to change things,” said CoreOS CEO Alex Polvi. “In the traditional Linux server environment, it’s hard to change things because applications are just so fragile.”

With CoreOS, the operating system is treated more like a web browser, such as Chrome, that is automatically updated as new components are released. Cryptographic signatures help ensure the validity of updates and the integrity of the system as a whole.
While other lean OSes are aimed at a particular flavor of technology, CoreOS aims to be the general-purpose choice. The company officially supports numerous deployment options, and many more community-supported options are available.

“One of the things we’ve figured out is how to build the OS and run it in all these different environments,” Polvi said. “We’re very focused on building CoreOS for production-ready environments, and it feels like all these other [new OSes] are a long way off from that.”

## RancherOS

RancherOS, consisting of just the kernel and Docker itself, is one of the smallest micro OSes, weighing in at around 22 MB, co-founder and CEO Sheng Liang said, compared to about 300 MB for VMware’s Photon.

While RancherOS also grew out of frustrations similar to those experienced by CoreOS, the company took a different approach in developing its operating system, said Liang. To develop its Docker-optimized micro OS, the company sought to build the minimal technology required to run the Docker daemon on a Linux kernel. To achieve this, it first decided to eliminate systemd, the service-management system built into most Linux distributions, and instead use Docker itself to boot the system.

Liang said that systemd is often in direct conflict with Docker. Since containers are often created outside of systemd’s view, it often tries to kill them, but winds up only killing the client while the container keeps running. He said that RancherOS worked for a long time on finding a way to bridge the camps behind the two technologies, but is no longer sure there’s any way to solve the problem.

“The way the industry’s going right now, both the systemd and Docker communities are on a bit of a collision course,” Liang said.
“Both see themselves as the ever-expanding center of the universe, and it’s hard [for either] to listen to another master.”

## Ubuntu Snappy

Canonical boasts that Ubuntu is the most popular Linux distro for containers, with over seven times more Docker containers running on Ubuntu than on any other OS.

Snappy is “a very tiny, thin operating system,” said Dustin Kirkland, Ubuntu Cloud Solutions product manager and strategist at Canonical. Snappy Ubuntu Core is the result of applying lessons that Canonical learned in its efforts to create a tiny-yet-robust operating system for mobile devices. To support carriers’ and users’ needs for reliable system and application updates, the company developed the Snappy technology, which uses “transactional, image-based delta updates” for the system and applications, transmitting only differences to keep downloads small and ensure that upgrades can always be rolled back.

To enhance the security of mobile devices, Canonical created a containment mechanism that isolates each application running on the device. Canonical contends that this same capability offers a level of isolation beyond that available using Docker alone, but few details are available.

In addition to Snappy, Canonical has unveiled a second element of its vision for a containerized world in [LXD, a lightweight hypervisor](http://thenewstack.io/latest-ubuntu-adds-lxd-0-7-hypervisor-rendering-desktops-an-endangered-species/) based on LXC, the same Linux feature that made Docker possible. LXD was originally designed as a mechanism to incorporate LXC-based containers into OpenStack clouds, but its launch caused some to ask whether Ubuntu intends to replace Docker with the technology.

According to Kirkland, however, Docker and LXD are complementary technologies. He adds that Canonical continues to recommend Docker for packaging and running applications. But outsiders aren’t necessarily convinced.
Canonical’s real motivation is control of the entire stack, said Janakiram MSV, principal analyst at Janakiram & Associates. “Snappy will be their CoreOS, and LXD will become an alternate native hypervisor,” he said. “Then they have more VM-like containers. They also have Juju Charms with its own orchestration and provisioning story for the lifecycle of containers.”

![Comparing Container Operating Systems](resource/DockerFuelsARethinkingOfTheOperatingSystem/ComparingContainerOperatingSystems.png)

## VMware Photon

The meteoric rise of Docker, itself essentially a virtualization technology, has caused many to anticipate the VMware response, even, if not especially, after the two companies [announced a partnership](http://thenewstack.io/amid-container-vs-vm-hype-vmware-draws-docker-closer/) back in August of last year. Its first response finally came earlier this year, with the announcement of its lightweight OS, Photon.

The changing relationship between applications and infrastructure is of key importance to VMware, said Colbert. “When you look at that split, what used to be an operating system used to have some app stuff and some infrastructure stuff,” he said. “Now the app stuff is inside the container; the infrastructure is outside of there. Photon is kind of the infrastructure portion of the Linux OS. That’s why we want to build that into [its hypervisor] ESX.”

CoreOS will support Photon. Photon is really targeted to the VMware product line, according to CoreOS’s Polvi. “To run a container in a server environment, you need Linux, which means VMware now needs to manage above the hypervisor, actually into a Linux OS,” he said. “This does not mean that VMware does not allow you to run CoreOS or Red Hat or any other operating system. It means they need a specialized one to add containers to VMware’s product.
They needed an extension of their hypervisor into an operating system.”

Janakiram sees [VMware as being forced to bring out its own OS](http://www.forbes.com/sites/janakirammsv/2015/04/24/containers-and-microservices-force-vmware-to-ship-a-linux-distribution/) out of fear that the shift from hypervisors to containers will erode its existing business.

> “VMware also knows there’s more money to be made in the management layer of containers. They want to repeat the magic of the v-family of products with containers.”

## Red Hat Atomic Host

Red Hat launched RHEL 7 Atomic Host in March. It has [fewer than 200 binaries](http://www.theplatform.net/2015/04/14/thin-is-in-for-operating-systems-thanks-to-docker/), compared to the 6,500 in the full RHEL 7 release.

Mark Coggin, senior director of product marketing for Red Hat Enterprise Linux, called it a “happy medium” in the world of lean OSes: lighter than a traditional OS, but not as small as some of its competitors. He cautions against more “extreme” approaches, such as that of RancherOS, saying that it could lead to more complexity, because it requires running additional system containers alongside the application container.

“We’ve said, ‘We think these are the core system services that need to be in the host platform that will address 80 to 90 percent of use cases for container applications,’” he said. “If a customer needs something beyond that — like system activity data collection, sys logging, identity — we have containerized versions of those as well, but we put much of what we think is necessary in the host platform.”

He said Atomic Host brings to the table a base derived from an enterprise-grade operating system rigorously tested by Red Hat’s engineering team.
According to Red Hat, this is important because there are cross-dependencies that might not be readily apparent, say between the version of the Dynamic Host Configuration Protocol (DHCP) console or the version of Secure Shell (SSH) that you need inside your container, and the kernel. 75 | 76 | “We’ve already gone through the work of ensuring that the version of SSH or DHCP that you’re using is going to work with the version inside RHEL Atomic Host,” Coggin said. 77 | 78 | Red Hat is also working with independent software vendors (ISVs) to certify that the containers they build are based on images that are known, tested and safe. 79 | 80 | ## Microsoft Nano Server 81 | 82 | Nano Server, Microsoft’s lightweight OS, [targets two use cases](http://thenewstack.io/microsofts-lightweight-os-and-its-deep-linux-connection/), according to The New Stack’s Scott M. Fulton III. The first is focused on supporting cloud infrastructure, such as clustered hypervisors and clustered storage. The second focuses on supporting cloud-native applications. In this latter role, Nano Server will be suited to a new class of apps that get developed and deployed on Azure within a new Azure-based development environment — outside of the conventional client-based Visual Studio. 83 | 84 | It’s these new apps which will serve as Windows developers’ entryway to the world of containers. 85 | 86 | Developers writing for Nano Server will be guaranteed compatibility with pre-existing Windows Server installations, because Nano Server is effectively a subset of Windows Server. There may be a significant adjustment period, however, until developers become accustomed to the concept of microservices. Windows developers are used to having large libraries of pre-existing functionality available to their code in a global scope. Apps written to Nano Server run on a physical host, a virtual host or in a container. 
Two [types of containers](http://thenewstack.io/docker-for-windows-is-on-its-way/) work on both Windows Server and Nano Server: the same Docker containers developed for Linux, and a type developed by Microsoft for its own hypervisor platform, called Hyper-V Containers. 87 | 88 | “These provide additional isolation,” [explained Jeffrey Snover](http://thenewstack.io/microsofts-lightweight-os-and-its-deep-linux-connection/), Microsoft distinguished engineer and lead architect, in The New Stack. “They’re really used for things like multi-tenant services, or multi-tenant platform-as-a-service, where you’re going to be running code that might be malicious that you don’t trust.” 89 | 90 | The concept draws inspiration from Drawbridge, a containerization system developed by Microsoft Research, mainly for purposes of process isolation and sandboxing untrusted apps that could crash the system. 91 | 92 | ## The Swirling Market 93 | 94 | It’s not surprising to see so many vendors jockeying for position. Two camps are forming, Janakiram said: Red Hat, Docker and their allies on one side; and VMware, CoreOS and their allies on the other. “The more Red Hat goes closer to Docker, VMware will go farther.” 95 | 96 | Google, he said, is on the fence. “Google is smiling because they know how to do containers very well,” he said. 97 | 98 | “Kubernetes will work with rkt and they’re working to make it work with any other container. But they want to make money on the public cloud; they want to run containers on the Google cloud. But they don’t mind giving away some of their innovation on orchestration.” 99 | 100 | Microsoft, while closely aligned with Docker, has an uphill battle ahead of it as it works to overcome incompatibilities between Windows and Linux. “In 24 months the dust will settle, and we’ll get to see who’s the winner,” Janakiram said. “Who’s still relevant in the market.” 101 | 102 | --- 103 | Copyright © 2015, The New Stack. All rights reserved. 
104 | -------------------------------------------------------------------------------- /DockerOrchestrationTools.md: -------------------------------------------------------------------------------- 1 | # Docker Orchestration Tools 2 | By Janakiram MS 3 | 4 | --- 5 | 6 | The Docker platform contains many tools to manage the lifecycle of a container. The Docker Command Line Interface (CLI), for example, supports the following container activities: 7 | 8 | * Pulling a repository from the registry. 9 | * Running the container and optionally attaching a terminal to it. 10 | * Committing the container to a new image. 11 | * Uploading the image to the registry. 12 | * Terminating a running container. 13 | 14 | While the CLI meets the needs of managing one container on one host, it falls short when it comes to managing multiple containers deployed on multiple hosts. To go beyond the management of individual containers, we must turn to orchestration tools. 15 | 16 | >Orchestration tools extend lifecycle management capabilities to complex, multi-container workloads deployed on a cluster of machines. 17 | 18 | By abstracting the host infrastructure, orchestration tools allow users to treat the entire cluster as a single deployment target. 19 | 20 | ## Baseline Features 21 | The process of orchestration typically consists of scheduling, cluster management and health monitoring functions. A handful of core features have come to characterize user expectations for modern container orchestration tools. 22 | 23 | ### Declarative Configuration 24 | Orchestration tools provide an option for DevOps teams to declare the configuration in a standard format, such as YAML or JSON. These definitions carry crucial information about the repositories, tags, ports, volumes and logs supporting the workload. The tools are idempotent with respect to the system configuration, which means that applying the same configuration multiple times will always have the same result.
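As an illustration of such declarative definitions, here is a hypothetical two-container workload in the docker-compose YAML format of the time; the service names, image tags, ports and paths are invented for the example:

```yaml
# Hypothetical declarative workload definition (docker-compose v1 style).
# Applying it repeatedly converges to the same result: the idempotency
# described above.
web:
  image: example/web:1.2        # repository and tag
  ports:
    - "80:8080"                 # host:container port mapping
  volumes:
    - /var/log/web:/logs        # host volume for logs
  links:
    - cache
cache:
  image: redis:2.8
```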
25 | 26 | ### Rules and Constraints 27 | Workloads often have special requirements for host placement. For example, it’s pointless to provision the master and slave database containers on the same host; doing so defeats the purpose of replication. Similarly, it may be a good idea to place an in-memory cache on the same host as the web server. Orchestration tools support mechanisms for defining the affinity and constraints of container placement. 28 | 29 | ### Provisioning 30 | Provisioning, or scheduling, deals with negotiating the placement of containers within the cluster and launching them. This process involves selecting an appropriate host based on the configuration and the rules and constraints that it defines. Apart from a container-provisioning API, orchestration tools will invoke the infrastructure APIs specific to the host environment. 31 | 32 | ### Discovery 33 | In a distributed deployment consisting of containers running on multiple hosts, container discovery becomes critical. Web servers need to dynamically discover the database servers, and load balancers need to discover and register web servers. Orchestration tools provide, or expect, a distributed key-value store, a lightweight DNS or some other mechanism to enable the discovery of containers. 34 | 35 | ### Health Monitoring 36 | Since orchestration tools are aware of the desired configuration of the system, they are uniquely able to track and monitor the health of the system’s containers and hosts. In the event of host failure, the tools can relocate the container. Similarly, when a container crashes, orchestration tools can launch a replacement. Orchestration tools ensure that the deployment always matches the desired state declared by the developer or operator.
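The desired-state reconciliation described above can be sketched in a few lines of Python; the container names and the `launch` callback are hypothetical stand-ins for a real orchestrator’s scheduling actions:

```python
# Minimal sketch of an orchestrator's convergence loop: compare the
# declared (desired) state with the observed state and repair the
# difference. All names here are illustrative, not a real API.

def converge(desired, running, launch):
    """Launch a replacement for every declared container that is
    not currently observed running; return the repaired state."""
    repaired = set(running)
    for name in desired:
        if name not in repaired:
            launch(name)          # e.g., schedule onto a healthy host
            repaired.add(name)
    return repaired

desired = {"web-1", "web-2", "db-1"}
running = {"web-1"}               # web-2 and db-1 have crashed
launched = []
state = converge(desired, running, launched.append)
# state now matches the declared configuration again
```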
37 | 38 | ## Container Orchestration Ecosystem 39 | With the explosive popularity of Docker and the early limitations of its built-in tooling, a number of orchestration tools have been developed to meet the needs of those seeking to deploy sophisticated workloads to clustered environments. They include: 40 | 41 | * [BOSH](https://github.com/cf-platform-eng/docker-boshrelease): Originally created as a deployment tool for Cloud Foundry, BOSH has been updated to deploy and orchestrate persistent Docker containers across many virtual machines (VMs) and multiple infrastructure-as-a-service (IaaS) providers. 42 | 43 | * [Centurion](https://github.com/newrelic/centurion): Built by the monitoring company New Relic, Centurion takes containers from a Docker registry and runs them on a cluster of hosts with the correct environment variables, host volume mappings and port mappings. 44 | 45 | * [Clocker](https://github.com/brooklyncentral/clocker): Clocker is a Docker orchestration engine based on Apache Brooklyn, an open source framework for modeling, monitoring and managing applications through autonomic blueprints. 46 | 47 | * [Crane](https://github.com/michaelsauter/crane): Crane orchestrates Docker containers based on declarative configuration files in the form of YAML or JSON. 48 | 49 | * [Consul](http://www.consul.io/): Consul is a tool for discovering and configuring services. Its service discovery layer uses DNS or HTTP to find all service providers with availability. 50 | 51 | * [Decking](http://decking.io/): Based on Node.js, Decking aims to simplify the creation, organization and management of clusters of Docker containers. 52 | 53 | * [Deimos](https://github.com/mesosphere/deimos): Deimos is a Docker plugin for Apache Mesos that provides external containerization of clusters. 54 | 55 | * [Docker Swarm](https://docs.docker.com/swarm/): A native tool from Docker, Inc., Swarm aims to simplify the deployment of containers on a cluster. 
56 | 57 | * [Dockerize](https://github.com/jwilder/dockerize): Dockerize simplifies the “Dockerization” of applications by automatically generating configuration files using templates. 58 | 59 | * [Fleet](https://github.com/coreos/fleet): An extension of systemd that uses etcd to operate at the cluster level instead of the machine level. 60 | 61 | * [Flocker](https://clusterhq.com/): Flocker from ClusterHQ is a data volume manager and multi-host Docker cluster management tool. 62 | 63 | * [Geard](https://github.com/openshift/geard): Red Hat, borrowing terminology from its OpenShift PaaS offering, created Geard, an orchestration engine to run Docker containers in production. 64 | 65 | * [Kubernetes](http://kubernetes.io/): Google, which claims to launch over two billion containers each day, grabbed the attention of the industry in releasing Kubernetes. 66 | 67 | * [Magnum](https://wiki.openstack.org/wiki/Magnum): Part of the OpenStack project, Magnum provides an API for managing application containers, which have different lifecycle and operational requirements than Nova instances. 68 | 69 | * [Maestro](https://github.com/signalfuse/maestro-ng): MaestroNG is a command-line utility developed by SignalFx to automatically manage and orchestrate deployments by bringing up a set of service instance containers on a set of target host machines. 70 | 71 | * [Marathon](https://mesosphere.github.io/marathon/): Marathon is a cluster-wide init and control system for services in cgroups or Docker containers running on top of a Mesos cluster. 72 | 73 | * [Shipper](https://github.com/mailgun/shipper): Built by the Mailgun team at Rackspace, Shipper is a fabric for Docker and a tool for orchestrating Docker containers. 74 | 75 | * [Shipyard](http://shipyard-project.com/): Shipyard enables multi-host, Docker cluster management. It uses the Citadel toolkit for cluster resourcing and scheduling.
76 | 77 | ## A Closer Look at the Popular Choices 78 | 79 | ### Docker Swarm 80 | [Docker Swarm](https://github.com/docker/swarm) was announced at DockerCon Europe in November 2014 as the official orchestration tool from Docker, Inc. The objective of Swarm is to use the same Docker API that works with the core Docker Engine. Instead of targeting an API endpoint representing one Docker Engine, Swarm transparently deals with an endpoint associated with a pool of Docker Engines. The key advantage of this approach is that the existing tools and APIs will continue to work with a cluster in the same way they work with a single instance. 81 | 82 | ![Docker Swarm: Swap, plug and play](resource/DockerOrchestrationTools/SwarmArch.png) 83 | 84 | Docker Swarm comes with several built-in scheduling strategies, giving users the ability to guide container placement so as to maximize or minimize the spread of containers across the cluster. Random placement is supported as well. 85 | 86 | Docker, Inc. seeks to follow the principle of "batteries included but removable," meaning that while it currently ships with only a handful of simple scheduling backends, in the future it may support additional backends through a pluggable interface. Based on the scale and complexity of a given use case, developers or operations staff might choose to plug in an appropriate backend. 87 | 88 | Docker Swarm supports constraints and affinities to determine the placement of containers on specific hosts. Constraints define requirements to select a subset of nodes that should be considered for scheduling. They can be based on attributes like storage type, geographic location, environment and kernel version. Affinity defines requirements to collocate containers on hosts. 89 | 90 | For discovering containers on each host, Swarm uses a pluggable backend architecture that works with a simple hosted discovery service, static files, lists of IPs, etcd, Consul and ZooKeeper.
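Swarm’s strategies for maximizing or minimizing spread can be pictured as opposite rankings over candidate hosts. The sketch below illustrates the idea only; it is not Swarm’s actual scheduler, and the host names and counts are invented:

```python
import random

# Illustrative host ranking in the spirit of Swarm's spread, binpack
# and random strategies (not Swarm's real code). Each host is a
# (name, running_container_count) pair.

def pick_host(hosts, strategy):
    if strategy == "spread":       # maximize spread: least-loaded host
        return min(hosts, key=lambda h: h[1])[0]
    if strategy == "binpack":      # minimize spread: fill hosts first
        return max(hosts, key=lambda h: h[1])[0]
    if strategy == "random":
        return random.choice(hosts)[0]
    raise ValueError("unknown strategy: " + strategy)

hosts = [("node-1", 4), ("node-2", 1), ("node-3", 7)]
pick_host(hosts, "spread")    # node-2, the emptiest host
pick_host(hosts, "binpack")   # node-3, the fullest host
```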
91 | 92 | Swarm supports basic health monitoring, which prevents provisioning containers on faulty hosts. 93 | 94 | ### Kubernetes 95 | [Kubernetes](http://kubernetes.io/) is one of the most popular orchestration tools of the container ecosystem. Coming from Google – a company that claims to deal with two billion containers every day – Kubernetes enjoys unique credibility. Within a few weeks of the announcement that Google had open-sourced the project, CoreOS, Red Hat, Microsoft, IBM, HP and Mesosphere pledged their support for it. 96 | 97 | ![Kubernetes: The every man's Borg](resource/DockerOrchestrationTools/KubernetesArch.png) 98 | 99 | Kubernetes’ architecture is based on a master server with multiple minions. The command line tool, called kubecfg, connects to the API endpoint of the master to manage and orchestrate the minions. Below is the definition of each component that runs within the Kubernetes environment. 100 | 101 | * Master: The server that runs the Kubernetes management processes, including the API service, replication controller and scheduler. 102 | 103 | * Minion: The host that runs the Kubelet service and the Docker Engine. Minions receive commands from the master. 104 | 105 | * Kubelet: The node level manager in Kubernetes. Runs on a Minion. 106 | 107 | * Pod: The collection of containers deployed on the same minion. 108 | 109 | * Replication controller: Defines the number of pods or containers that need to be running. 110 | 111 | * Service: A definition that allows the discovery of services/ports published by each container, along with the external proxy used for communications. 112 | 113 | * Kubecfg: The command line interface that talks to the master to manage a Kubernetes deployment. 114 | 115 | The service definition, along with the rules and constraints, is described in a JSON file. For service discovery, Kubernetes provides a stable IP address and DNS name that corresponds to a dynamic set of pods. 
When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube-proxy) running on the source machine to one of the corresponding backend containers. 116 | 117 | Kubernetes supports user-implemented application health checks. These checks are performed by the Kubelet running on each minion to ensure that the application is operating correctly. Currently, Kubernetes supports three types of health checks: 118 | 119 | * HTTP health check: The Kubelet will call a web endpoint. If the response code is between 200 and 399, it is considered a success. 120 | 121 | * Container exec: The Kubelet will execute a command within the container. If it returns “OK,” it is considered a success. 122 | 123 | * TCP socket: The Kubelet will attempt to open a socket to the container and establish a connection. If the connection is made, it is considered healthy. 124 | 125 | ### Apache Mesos 126 | [Apache Mesos](http://mesos.apache.org/) is an open source cluster manager that simplifies the complexity of running tasks on a shared pool of servers. Originally designed to support high-performance computing workloads, Mesos added support for Docker in the 0.20.0 release. With the 0.20.1 patch release, some of its limitations in managing containers were fixed. 127 | 128 | ![Apache Mesos: The Swiss Army Knife of cluster managers](resource/DockerOrchestrationTools/MesosArch.png) 129 | 130 | A typical Mesos cluster consists of one or more servers running the mesos-master and a cluster of servers running the mesos-slave component. Each slave is registered with the master to offer resources. The master interacts with deployed frameworks to delegate tasks to slaves. Below is an overview of Mesos’ architecture: 131 | 132 | * Master daemon: The mesos-master service runs on a master node and manages slave daemons. 133 | 134 | * Slave daemon: The mesos-slave service runs on each slave node to run tasks that belong to a framework.
135 | 136 | * Framework: An application definition consisting of a scheduler that registers with the master to receive resource offers, along with one or more executors to launch tasks on the slaves. 137 | 138 | * Offer: The list of a slave node’s resources. Each slave node sends offers to the master, and the master provides offers to registered application frameworks. 139 | 140 | * Task: The unit of work scheduled by a framework to be executed on a slave node. 141 | 142 | * Apache ZooKeeper: The software used to coordinate the collection of master nodes. 143 | 144 | Unlike other tools, Mesos ensures high availability of the master nodes using Apache ZooKeeper, which replicates the masters to form a quorum. A high availability deployment requires at least three master nodes. All nodes in the system, including masters and slaves, communicate with ZooKeeper to determine which master is the current ‘leading master.’ The leader performs health checks on all the slaves and proactively deactivates any that fail. 145 | 146 | When Mesos is used in conjunction with Marathon, service discovery can be enabled based on the HAProxy TCP/HTTP load balancer, along with an assistant script that uses Marathon’s REST API to periodically regenerate an HAProxy configuration file. Alternately, Mesos-DNS, a DNS-based service discovery mechanism, has recently been released in beta. 147 | 148 | ### Fleet 149 | [Fleet](https://github.com/coreos/fleet) was introduced as a distributed init system for CoreOS. It unites [systemd](http://coreos.com/using-coreos/systemd), the standard Linux init system for handling the lifecycle of OS processes, and [etcd](https://github.com/coreos/etcd), a distributed key-value store for shared configuration and service discovery, into one distributed init system.
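A fleet unit is an ordinary systemd unit file, optionally extended with an `[X-Fleet]` section carrying cluster-level scheduling hints. The unit below is a hypothetical example; the unit name, image and conflict rule are invented for illustration:

```ini
# myapp@.service — a hypothetical fleet unit wrapping a Docker container.
[Unit]
Description=MyApp web container
After=docker.service
Requires=docker.service

[Service]
# %i expands to the instance suffix, e.g., myapp@1.service
ExecStartPre=-/usr/bin/docker rm -f myapp-%i
ExecStart=/usr/bin/docker run --rm --name myapp-%i example/myapp
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
# Never place two instances of this unit on the same machine.
Conflicts=myapp@*.service
```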
150 | 151 | ![Fleet: Taking Systemd up to 11](resource/DockerOrchestrationTools/FleetArch.png) 152 | 153 | Every machine in a Fleet cluster runs the ‘fleetd’ daemon, which encapsulates two roles — the engine and the agent. The engine is responsible for scheduling decisions while the agent executes the systemd units. An election process is used to promote a single node’s fleetd instance to master, thus activating its engine capabilities. 154 | 155 | Fleet supports the following operations: 156 | 157 | * Deploying Docker containers on any host within a cluster. 158 | 159 | * Distributing services across a cluster using container- and machine-level affinity and anti-affinity. 160 | 161 | * Maintaining multiple instances of a service and re-scheduling these on machine failure. 162 | 163 | * Discovering machines running in the cluster. 164 | 165 | Fleet can run two types of processes, defined as systemd units: 166 | 167 | 1. Standard units are long-running processes that are typically scheduled to a single machine. If that machine fails, the unit will be automatically migrated and started on a new machine. 168 | 169 | 1. Global units are designed to run on all machines within the cluster. These are used to support common services like monitoring agents or components of higher-level orchestration systems, like Kubernetes, Mesos or OpenStack. 170 | 171 | ### Flocker 172 | Developed by ClusterHQ, [Flocker](https://clusterhq.com/flocker/introduction/) attempts to bring the portability that is currently possible with stateless containers to the world of stateful containers running, for example, persistent database services. Flocker moves containers, along with their data, between multiple hosts within a cluster, or even across multiple cloud providers. 173 | 174 | [Flocker](https://github.com/ClusterHQ/flocker) is essentially a volume and container management system built on top of the [ZFS](https://en.wikipedia.org/wiki/Zfs) file system. 
It aims to bring to Docker environments the live migration capabilities that helped make virtual machines attractive to the enterprise. 175 | 176 | ![Flocker: Apps and data flocking together](resource/DockerOrchestrationTools/FlockerArch.png) 177 | 178 | Flocker has two key components: 179 | 180 | * A network proxy that routes requests for services to an appropriate host running a specific container. 181 | 182 | * A ZFS volume that serves as the storage backend. 183 | 184 | When a container moves across hosts, the ZFS data volume is snapshotted first and then copied to the target host. A handoff process is initiated between the source container and the replica before the replica becomes active. There will be a brief pause during the handoff process, which may result in downtime, but ClusterHQ is working towards a seamless migration that avoids downtime altogether. 185 | 186 | Flocker uses a Fig file to declare the cluster. This file contains the Docker images, links and ports that are exposed. Another file is used to define the deployment topology and constraints, and the placement of containers across available hosts. A tool called flocker-deploy is used to submit the Fig file and the deployment definition to the Flocker cluster. 187 | 188 | Each Flocker host runs a control service and a convergence agent. The control service exposes an HTTP API that is used to modify the desired configuration of the cluster. Convergence agents are responsible for modifying the cluster state to match the desired configuration submitted via the control service. 189 | 190 | An alternative approach for using Flocker with Docker recently became available with ClusterHQ’s release of a [Docker plugin for Flocker](http://thenewstack.io/the-real-docker-ecosystem-launches-with-plugins/) at DockerCon 2015. In contrast to the architecture described above, the new plugin allows Flocker to be controlled by Docker, rather than Flocker controlling Docker.
While imposing a distinct — and more experimental — architecture, the plugin approach allows for easier integration with other Docker ecosystem tools, like Docker Swarm and Docker Compose. 191 | 192 | Though Flocker is still evolving, it has set out to solve a key problem faced by container users. 193 | 194 | ![Comparing Container Orchestration Tools](resource/DockerOrchestrationTools/ComparingContainerOrchestrationTools.png) 195 | 196 | ## Summary 197 | The Docker ecosystem is growing rapidly. From major infrastructure companies to PaaS vendors to early-stage startups, everyone is clamoring to stake out their place in the ecosystem. There are many contributors working on container orchestration tools, as these are essential for deploying real-world applications, and thus, driving the adoption of Docker. We attempted to highlight some of the key contributors building orchestration tools. By no means is this an exhaustive list, but it does cover the key players that are positively impacting the Docker ecosystem. 198 | 199 | --- 200 | Copyright © 2015, The New Stack. All rights reserved. 201 | -------------------------------------------------------------------------------- /FromMonolithToMicroservices.md: -------------------------------------------------------------------------------- 1 | # From Monolith to Microservices 2 | by Vivek Juneja 3 | 4 | --- 5 | 6 | You’re locked in a daily struggle with a large enterprise system that’s been chugging away since the dawn of IT. The development team doesn’t want to get fired, and so is fearful of making changes or adding new code to the monolith. 7 | 8 | The most mundane of changes is discussed ad nauseam, in an effort to ensure that the business’ operations are not compromised. New developers spend months learning the system’s codebase before they can even begin to work on it. 9 | 10 | When things go wrong, operations blames development and development blames QA. Project managers blame the budget and everyone else.
The business loses confidence in IT and begins to look for outsourcers to replace the internal team. 11 | 12 | This scenario — immortalized in the DevOps novel “[The Phoenix Project](http://itrevolution.com/books/phoenix-project-devops-book/)” —  is faced every day by enterprise IT practitioners. 13 | 14 | Unless you’ve been living under a rock, you’ve heard of how microservices can turn this scenario on its head, enabling a new, more agile world in which developers and operations teams work hand in hand to deliver small, loosely coupled bundles of software quickly and safely. Microservices-based architectures, rather than being something wholly new, are based on the software development and delivery practices of modern web-scale companies like Google, Amazon, Netflix and Twitter. However, the [plateau of productivity ](https://en.wikipedia.org/wiki/Hype_cycle)with this technology is far, far away — at least for most main street enterprises. 15 | 16 | For those new to microservices, there are plenty of resources and war stories on the Web about how companies have embraced the approach over time. Most of this information is scattered across blog posts, lectures and seminars, however, making it difficult to make sense of the big picture. 17 | 18 | While a one-size-fits-all approach to adopting microservices cannot exist, it is helpful to examine the base principles that have guided successful adoption efforts. 19 | 20 | ## Adopting Microservices 21 | One common approach for teams adopting microservices is to identify existing functionality in the monolith that is both non-critical and fairly loosely coupled with the rest of the application. For example, in an e-commerce system, events and promotions are often ideal candidates for a microservices proof-of-concept. Alternately, more sophisticated teams can simply mandate that all new functionality must be developed as a microservice. 
22 | 23 | In each of these scenarios, the key challenge is to design and develop the integration between the existing monolith and the new microservices. When a part of the monolith is re-designed using microservices, a common practice is to introduce glue code to help it to talk to the new services. 24 | 25 | Michael Feathers, in the book “Working Effectively With Legacy Code,” introduced the concept of [seams](http://www.informit.com/articles/article.aspx?p=359417&seqNum=2). A seam is a place where the program’s behavior can be altered without editing. Using the concept of a seam, we can identify places in the monolith where a change can be introduced that will help it to interact with the new microservices. 26 | 27 | Building a feature proxy that “fools” the monolith and instead calls the new microservices is another possible approach. An [API gateway](http://thenewstack.io/?s=api+gateway) can help combine many individual service calls into one coarse-grained service, and in so doing reduce the cost of integrating with the monolith. 28 | 29 | The main idea is to slowly replace functionality in the monolith with microservices, while minimizing the changes that must be added to the monolith to support this transition. This is important in order to reduce the cost of maintaining the monolith and minimize the impact of the migration. 30 | 31 | ## Microservices Architectural Patterns 32 | A number of architectural patterns exist that can be leveraged to build a solid microservices implementation strategy. I will present some common ones that may be evaluated for your own projects. 33 | 34 | In their book “[The Art of Scalability](http://theartofscalability.com/),” Martin Abbott and Michael Fisher elaborated on the concept of the “[scale cube](http://microservices.io/articles/scalecube.html),” illustrating various ways to achieve scalability in a software system.
The microservices pattern maps to the Y-axis of the cube, wherein functional decomposition is used to scale the system. Each service can then be further scaled by cloning (X-axis) or sharding (Z-axis). 35 | 36 | ![The Scale Cube and Microservices: 3 Dimensions to Scaling](resource/FromMonolithToMicroservices/TheScaleCubeAndMicroservices.png) 37 | 38 | Alistair Cockburn introduced the “ports and adapters” pattern, also called [hexagonal architecture](http://alistair.cockburn.us/Hexagonal+architecture), in the context of building applications that can be tested in isolation. However, it has been increasingly used for building reusable microservices-based systems, [as advocated by](https://skillsmatter.com/skillscasts/5280-hexagonal-microservices) James Gardner and Vlad Mettler. A hexagonal architecture is an implementation of a pattern called [bounded context](http://martinfowler.com/bliki/BoundedContext.html), wherein the capabilities related to a specific business domain are insulated from any outside changes or effects. 39 | 40 | Examples abound of these principles being put to practice by enterprises migrating to microservices. Click Travel open sourced their [Cheddar framework](https://github.com/ClickTravel/Cheddar), which captures these ideas in an easy-to-use project template for Java developers building applications for Amazon Web Services. SoundCloud, after a failed attempt at a big-bang refactoring of their application, [based their microservices migration](http://www.infoq.com/news/2014/06/soundcloud-microservices) on the use of the bounded context pattern to identify cohesive feature sets which did not couple with the rest of domain. 41 | 42 | One challenge faced by teams new to microservices is dealing with distributed transactions spanning multiple independent services. In a monolith this is easy, since state changes are typically persisted to a common data model shared by all parts of the application. This is not the case, however, with microservices. 
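The contrast can be made concrete with a small sketch: in the monolith, one unit of work covers both state changes, something that disappears once each service owns its own store. All names below are illustrative:

```python
# Why distributed transactions get hard: in a monolith, placing an
# order updates orders and inventory against one shared data model,
# so both changes commit (or fail) together. Split into separate
# services with separate stores, that single commit no longer exists
# and consistency must be coordinated some other way.

class MonolithStore:
    """Shared data model: one unit of work covers both state changes."""
    def __init__(self):
        self.orders = []
        self.stock = {"widget": 5}

    def place_order(self, sku, qty):
        if self.stock.get(sku, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[sku] -= qty          # change 1: inventory
        self.orders.append((sku, qty))  # change 2: orders, same transaction

store = MonolithStore()
store.place_order("widget", 2)
# With separate order and inventory services, these two writes happen in
# different processes against different stores, and either one can fail
# after the other has succeeded.
```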
43 | 44 | Having each microservice manage its own state and data introduces architectural and operational complexity when handling distributed transactions. Good design practices, such as domain-driven design, help mitigate some of this complexity by inherently limiting shared state. 45 | 46 | Event-oriented patterns such as [event sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) or command query responsibility segregation ([CQRS](http://martinfowler.com/bliki/CQRS.html)) can help teams ensure data consistency in a distributed microservices environment. With event sourcing and CQRS, the state changes needed to support distributed transactions can be propagated as events (event sourcing) or commands (CQRS). Each microservice that participates in a given transaction can then subscribe to the appropriate event. 47 | 48 | This pattern can be extended to support [compensating](https://en.wikipedia.org/wiki/Compensating_transaction) operations by the microservice when dealing with eventual consistency. Chris Richardson [presented an implementation of this](http://www.slideshare.net/chris.e.richardson/building-and-deploying-microservices-with-event-sourcing-cqrs-and-docker-hacksummit-2014) in his talk at hack.summit() 2014 and shared example code [via GitHub](https://github.com/cer/event-sourcing-examples). Also worth exploring is Fred George’s [notion of ‘streams and rapids,’](https://vimeo.com/79866979) which uses asynchronous services and a high speed messaging bus to connect the microservices in an application. 49 | 50 | ![Scaling With Microservices](resource/FromMonolithToMicroservices/ScalingWithMicroservices.png) 51 | 52 | While architectures such as these are promising, it is important to remember that, during the transition from monolith to a collection of microservices, both systems will exist in parallel.
To reduce the development and operational costs of the migration, the architectural and integration patterns employed by the microservices must be appropriate to the monolith’s architecture. 53 | 54 | ## Architectural & Implementation Considerations 55 | 56 | ### Domain Modeling 57 | Domain modeling is at the heart of designing coherent and loosely coupled microservices. The goal is to ensure that each of your application’s microservices is adequately isolated from runtime side effects of, and insulated from changes in the implementation of, the other microservices in the system. 58 | 59 | Isolating and insulating microservices also helps ensure their reusability. For example, consider a promotions service that can be extracted from a monolithic e-commerce system. This service could be used by a variety of clients: mobile web, iOS or Android apps. In order for this to work predictably, the domain of “promotions,” including its state entities and logic, needs to be insulated from other domains in the system, like “products,” “customers,” “orders,” etc. This means the promotions service must not be polluted with cross-domain logic or entities. 60 | 61 | Proper domain modeling also helps avoid the pitfall of modeling the system along technological or organizational boundaries, resulting in data services, business logic and presentation logic each implemented as separate services. 62 | 63 | >All roads to microservices pass through domain modeling. 64 | 65 | Sam Newman discusses these principles in his book “[Building Microservices](http://info.thoughtworks.com/building-microservices-book).” Vaughn Vernon focuses on this area even more deeply in “[Implementing Domain-Driven Design](https://vaughnvernon.co/?page_id=168).” 66 | 67 | ### Service Size 68 | Service size is a widely debated and confusing topic in the microservices community. The overarching goal when determining the right size for a microservice is not to make a monolith out of it.
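The insulation described above for the promotions domain can be sketched minimally. All names are hypothetical; the key move is that other domains are referenced only by identifier, never by their entities:

```java
// Hypothetical sketch of a "promotions" bounded context in an
// e-commerce system, insulated from the other domains.
public class PromotionsContext {

    // Other domains appear only as opaque identifiers. The promotions
    // service never imports a Product or Customer entity, so changes
    // to those domains cannot leak into this one.
    static class Promotion {
        final String productId;  // reference by ID, not by entity
        final int percentOff;

        Promotion(String productId, int percentOff) {
            this.productId = productId;
            this.percentOff = percentOff;
        }
    }

    // All promotion logic lives inside the boundary.
    static long discountedPriceCents(long basePriceCents, Promotion promo) {
        return basePriceCents * (100 - promo.percentOff) / 100;
    }

    public static void main(String[] args) {
        Promotion promo = new Promotion("sku-42", 20);
        System.out.println(discountedPriceCents(10_000, promo)); // 8000
    }
}
```

If the product domain later restructures its entities, this context is untouched: the contract between the two domains is just the `productId` string.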
69 | 70 | The “[Single Responsibility Principle](http://www.objectmentor.com/resources/articles/srp.pdf)” is a driving force when considering the right service size in a microservices system. Some practitioners advocate as small a service size as possible for independent operation and testing. Building microservices in the spirit of [Unix](http://www.infoq.com/presentations/Micro-Services) utilities also leads to small codebases that are easy to maintain and upgrade. 71 | 72 | Architects must be particularly careful when modeling large domains, like “products” in an e-commerce system, as these are potential monoliths, prone to large variations in their definition. There could be various types of products, each with its own business logic. Encapsulating all of this in a single service can become overwhelming; the way to approach it is to draw further boundaries inside the product domain and split out additional services. 73 | 74 | Another consideration is the idea of replaceability. If the time it takes to replace a particular microservice with a new implementation or technology is too long (relative to the cycle time of the project), then the service is a candidate for further decomposition. 75 | 76 | ### Testing 77 | Let’s look at some operational aspects of having the monolith progressively transformed into a microservices-based system. Testability is a common issue: during the course of developing the microservices, teams will need to perform integration testing of the services with the monolith. The idea, of course, is to ensure that the business operations spanning the pre-existing monolith and the new microservices do not fail. 78 | 79 | One option here is to have the monolith provide some consumer-driven contracts that can be translated into test cases for the new microservices. This approach helps ensure that the microservice always has access to the expectations of the monolith in the form of automated tests.
The monolith’s developers would provide a spec containing sample monolith requests and expected microservice responses. This spec is then used to create relevant mocks at the monolith end, and as the basis for an automated test suite that is run before integrating the microservices with the monolith. [Pact](https://github.com/DiUS/pact-jvm), a consumer-driven contract testing library, is a good reference for this approach. 80 | 81 | Creating a reusable test environment that can deploy a test copy of the entire monolith — and making it available, on-demand, to the microservices teams — is also useful. This eliminates potential roadblocks for those teams and improves the feedback loop for the project as a whole. A common way of accomplishing this is to package the entire monolith as a set of Docker containers orchestrated through an automation tool like [Docker Compose](https://docs.docker.com/compose/). This deploys a test infrastructure of the monolith quickly and gives the team the ability to perform integration tests locally. 82 | 83 | ### Service Discovery 84 | A service may need to know about other services when accomplishing a business function. A [service discovery](http://en.wikipedia.org/wiki/Service_discovery) system enables this, wherein each service refers to an external registry holding the endpoints of the other services. This can be implemented through environment variables when dealing with a small number of services; [etcd](http://thenewstack.io/about-etcd-the-distributed-key-value-store-used-for-kubernetes-googles-cluster-container-manager/), [Consul](https://www.consul.io/) and [Apache ZooKeeper](https://zookeeper.apache.org/) are examples of more sophisticated systems commonly used for service discovery. 85 | 86 | ### Deployment 87 | Each microservice should be self-deployable, either on a runtime container or by embedding a container in itself.
For example, a JVM-based microservice could embed a [Tomcat container](https://en.wikipedia.org/wiki/Apache_Tomcat) in itself, reducing the need for a standalone web application server. 88 | 89 | At any point in time, there could be a number of microservices of the same type (i.e., X-axis scaling as per [the scale cube](https://devcentral.f5.com/Portals/0/Users/038/38/38/x-axis-scaling.png)) to allow for more reliable handling of requests. Most implementations include a software load balancer that can also act as a service registry, such as [Netflix Eureka](https://github.com/Netflix/eureka/wiki/Eureka-at-a-glance). This implementation allows for failover and transparent balancing of requests as well. 90 | 91 | ### Build and Release Pipeline 92 | Some considerations when implementing microservices, such as having a continuous integration and deployment pipeline, are common to most modern systems. The notable caveat in a microservices-based system is having an on-demand, exclusive build and release pipeline for each microservice. This reduces the cost of building and releasing the application as a whole. We do not need to build the monolith when a microservice gets updated. Instead, we only build the changed microservice and release it to the end system. 93 | 94 | Release practices also need to include the concept of [rolling upgrades](http://docs.ansible.com/guide_rolling_upgrade.html) or [blue-green deployment](http://martinfowler.com/bliki/BlueGreenDeployment.html). This means that, at any point in time in a new build and release cycle, there can be concurrent versions of the same microservice running in the production environment. A percentage of the active user load can be routed to the new microservice version to test its operation, before slowly phasing out the old version. This helps to ensure that a failed change in a microservice does not cripple the monolith. In case of failure, the active load can be routed back to the old version of the same service.
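The traffic-shifting step described above can be sketched as a deterministic router. Names and percentages here are hypothetical, and a real deployment would usually do this at the load balancer rather than in application code:

```java
// Hypothetical sketch of routing a fixed percentage of user traffic
// to a new microservice version during a rolling upgrade.
public class CanaryRouter {

    private final int percentToNew; // 0..100

    CanaryRouter(int percentToNew) {
        this.percentToNew = percentToNew;
    }

    // Deterministic bucketing: the same user always lands on the same
    // version, so a session is never bounced between old and new.
    String versionFor(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < percentToNew ? "v2" : "v1";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(10); // 10% to the new version
        System.out.println(router.versionFor("user-1001"));
    }
}
```

Rolling back a failed change then amounts to setting the percentage back to zero, which sends all active load to the old version again.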
95 | 96 | ### Feature Flags 97 | One other common pattern is to allow for [feature flags](http://en.wikipedia.org/wiki/Feature_toggle). A feature flag, like a configuration parameter, can be added to the monolith to allow toggling a feature on or off. Implementing this pattern in the monolith would allow us to trigger the use of the relevant microservice for the feature when the flag is turned on. This enables easy A/B testing of features migrated from the monolith to microservices. If the monolith version of a feature and the new microservice replicating that feature can co-exist in the production environment, a traffic routing implementation along with the feature flag can help the delivery teams more rapidly build the end system. 98 | 99 | ## Developer Productivity During Microservices Adoption 100 | 101 | Monolithic architectures are attractive in that they allow quick turnaround of new business features on a tight schedule — when the overall system is still small. However, this becomes a development and operations nightmare as the system grows. 102 | 103 | >If working with your monolith was always as painful as it is now, you probably wouldn’t have it. Rather, systems become monoliths because adding onto the monolith is easy at first. 104 | 105 | Empowering developers to choose a “microservices first” approach when building a new feature or system is complicated and has many moving parts. Doing so demands strong discipline around architecture and automation, which in turn helps create an environment that allows teams to quickly and cleanly build microservices. 106 | 107 | One approach to building out this developer infrastructure is to create a standard boilerplate project that encapsulates key principles of microservice design, including project structure; test automation; integration with instrumentation and monitoring infrastructures; patterns, like circuit breakers and timeouts; API frameworks; and documentation hooks, among others.
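Of the patterns such a boilerplate project might bundle, the circuit breaker is easy to sketch. This is a hypothetical, minimal illustration, not a production implementation; libraries such as Netflix Hystrix provide hardened versions of the same idea:

```java
import java.util.function.Supplier;

// Hypothetical sketch of a circuit breaker guarding a remote call.
public class CircuitBreakerSketch {

    static class CircuitBreaker {
        private final int failureThreshold;
        private int consecutiveFailures = 0;

        CircuitBreaker(int failureThreshold) {
            this.failureThreshold = failureThreshold;
        }

        boolean isOpen() {
            return consecutiveFailures >= failureThreshold;
        }

        // Call the remote dependency; once too many calls in a row have
        // failed, short-circuit to the fallback instead of calling again.
        // (A production breaker would also "half-open" after a timeout.)
        String call(Supplier<String> remote, String fallback) {
            if (isOpen()) {
                return fallback; // fail fast, protect the caller
            }
            try {
                String result = remote.get();
                consecutiveFailures = 0; // success closes the breaker
                return result;
            } catch (RuntimeException e) {
                consecutiveFailures++;
                return fallback;
            }
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("down"); };
        breaker.call(failing, "cached");
        breaker.call(failing, "cached");
        System.out.println(breaker.isOpen()); // true
    }
}
```

Baking a guard like this into the template means every new service degrades gracefully when a dependency fails, instead of each team rediscovering the pattern on its own.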
108 | 109 | Project templates allow teams to focus less on scaffolding and glue code, and more on building business functionality in a distributed microservices-based environment. Projects like [Dropwizard](http://www.dropwizard.io/), [Spring Boot](https://github.com/spring-projects/spring-boot) and Netflix [Karyon](https://github.com/Netflix/karyon) are interesting approaches to solving this. Making the right choice among them depends on the architecture and developer skill level. 110 | 111 | ## Monitoring and Operations 112 | Co-existing monoliths and microservices require comprehensive monitoring of performance, systems and resources. This is more pronounced if a particular feature from the monolith is replicated through a microservice. Collecting statistics for performance and load will allow the monolithic implementation and the microservices-based one replacing it to be compared. This will enable better visibility into the gains that the new implementation brings to the system, and improve confidence in pursuing further migration. 113 | 114 | ## Organizational Considerations 115 | The most challenging aspects of moving from monoliths to microservices are the organizational changes required, such as building service teams that own all aspects of their microservices. This requires creating multi-disciplinary units that include developers, testers and operations staff, among others. The idea is to embrace more collective code ownership and care for software craftsmanship. 116 | 117 | Most of the ideas shared in this article have either been practiced or have delivered results in organizations of all sizes. However, this is not a one-size-fits-all paradigm. Hence, it's important to keep an eye on evolving patterns and adoption war stories. As more organizations move from monoliths to microservices, we will have more to learn along our journey. 118 | 119 | --- 120 | Copyright © 2015, The New Stack. All rights reserved.
121 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [](http://thenewstack.io) 2 | 3 | # The Docker & Container Ebook Series 4 | 5 | Welcome to this special Github preview edition of The New Stack's Docker & Container Ebook Series. 6 | 7 | Thanks in advance for taking the time to review a few [select chapters](CONTENTS.md) from our upcoming series of ebooks about Docker and the container ecosystem. 8 | 9 | The five-book series aims to map the vast and rapidly expanding oceans of the container universe, and will cover: 10 | 11 | * Understanding the Docker & Container Ecosystem 12 | 13 | * Application Development and Microservices 14 | 15 | * Automation and Orchestration 16 | 17 | * Networking, Security and Storage 18 | 19 | * Management and Monitoring 20 | 21 | This preview [contains sample chapters](CONTENTS.md) from across the series--three from the first book, and one each from the second and third. 22 | 23 | We are hoping you’ll be so kind as to share your suggestions or feedback with us. Feel free to send us a pull request, or share them directly with us [by email](mailto:ebook-feedback@thenewstack.io). 24 | 25 | We would be remiss if we didn't thank our series sponsors: 26 | 27 | ![Sponsor Logos](resource/sponsorlogos2.png) 28 | 29 | We're deeply grateful for their support for this project. 30 | 31 | Enjoy! 
32 | 33 | Sam Charrington [\[t\]](http://twitter.com/samcharrington), Alex Williams [\[t\]](http://twitter.com/alexwilliams) & The New Stack staff 34 | 35 | 36 | -------------------------------------------------------------------------------- /resource/AdoptingContainersInTheEnterprise/MotivationsToConsiderDocker.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/AdoptingContainersInTheEnterprise/MotivationsToConsiderDocker.png -------------------------------------------------------------------------------- /resource/AdoptingContainersInTheEnterprise/PaaSMostLikelyToBeAutomated.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/AdoptingContainersInTheEnterprise/PaaSMostLikelyToBeAutomated.png -------------------------------------------------------------------------------- /resource/CrossingTheOceanWithContainers/PercentOfOrganizationsPlanningToAddressNeeds.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/CrossingTheOceanWithContainers/PercentOfOrganizationsPlanningToAddressNeeds.png -------------------------------------------------------------------------------- /resource/DockerFuelsARethinkingOfTheOperatingSystem/ComparingContainerOperatingSystems.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerFuelsARethinkingOfTheOperatingSystem/ComparingContainerOperatingSystems.png 
-------------------------------------------------------------------------------- /resource/DockerFuelsARethinkingOfTheOperatingSystem/ForMicroOperatingSystemsSizeMatters.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerFuelsARethinkingOfTheOperatingSystem/ForMicroOperatingSystemsSizeMatters.png -------------------------------------------------------------------------------- /resource/DockerOrchestrationTools/ComparingContainerOrchestrationTools.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerOrchestrationTools/ComparingContainerOrchestrationTools.png -------------------------------------------------------------------------------- /resource/DockerOrchestrationTools/FleetArch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerOrchestrationTools/FleetArch.png -------------------------------------------------------------------------------- /resource/DockerOrchestrationTools/FlockerArch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerOrchestrationTools/FlockerArch.png -------------------------------------------------------------------------------- /resource/DockerOrchestrationTools/KubernetesArch.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerOrchestrationTools/KubernetesArch.png -------------------------------------------------------------------------------- /resource/DockerOrchestrationTools/MesosArch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerOrchestrationTools/MesosArch.png -------------------------------------------------------------------------------- /resource/DockerOrchestrationTools/SwarmArch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/DockerOrchestrationTools/SwarmArch.png -------------------------------------------------------------------------------- /resource/FromMonolithToMicroservices/ScalingWithMicroservices.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/FromMonolithToMicroservices/ScalingWithMicroservices.png -------------------------------------------------------------------------------- /resource/FromMonolithToMicroservices/TheScaleCubeAndMicroservices.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/FromMonolithToMicroservices/TheScaleCubeAndMicroservices.png -------------------------------------------------------------------------------- /resource/sponsorlogos2.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/sponsorlogos2.png -------------------------------------------------------------------------------- /resource/tnslogo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thenewstack/docker-and-containers-ebooks/2d21d30fbed39b2c8454a40cfc7cc52f97e4f616/resource/tnslogo.png --------------------------------------------------------------------------------