What is Docker Swarm?
Docker Swarm is the native solution for creating, defining, monitoring and maintaining Docker clusters. When you step past the boundaries of single-server, multi-container applications, Docker Swarm is the right tool for the job. Discover what Docker Swarm is and what it can do in just a few minutes, and leverage the capabilities of Docker's native clustering solution.
Containers, developers and applications
Containers have been a breath of fresh air in the world of software development. This scenario used to play out all the time:
- Bob the developer creates his fancy new application; everything works fine.
- The client is uber-happy about the new application: bring it NOW!
- Bob transfers the app to its new environment, and it doesn't work.
- Now Bob has to spend hours trying to get his new app to work.
I have seen this scene too many times in my work as a developer to joke about it. Application environments are hellish, and when the solution is composed of multiple applications, languages and databases, it is just a few more meters to hell itself.
Thanks to containers, and especially Docker, developers can define application environments in easy-to-read files and build images from those files. This lets developers (and sysadmins) produce identical environments on both the developer machine and the server. No more Bob/client situations! But here comes another problem…
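Such an environment file is a Dockerfile. As a hypothetical sketch (the Node.js base image, port, and `server.js` entry point are illustrative assumptions, not from any particular project):

```dockerfile
# Hypothetical Dockerfile for a small Node.js app.
# The same file produces the same image on Bob's laptop and on the client's server.
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install --omit=dev
# Copy the application code
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building it with `docker build -t myapp .` yields an identical environment wherever the image runs.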
Why Docker Swarm?
So you’ve probably been playing with Docker and containers for a while, and deployed a few times, maybe on a few servers. But you soon realize this is time consuming. Deploying your application on three servers may seem easy, but think about deploying it on one thousand servers! Not so excited anymore?
The problem with Docker on its own is that it gives you just the bare minimum to get running. Need to connect containers? There are Docker networks. Need to store permanent data? There are Docker volumes. Now let me flip the board: what if you need to connect two containers on two different machines? There's your problem — those primitives are scoped to a single engine.
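To make the limitation concrete, here is a sketch of the single-engine building blocks mentioned above (the names `my-bridge` and `my-data` are placeholders):

```shell
# These work on ONE Docker engine only:
docker network create my-bridge    # connect containers on this host
docker volume create my-data       # persist data on this host

# A bridge network and a local volume cannot span machines.
# Connecting containers across hosts needs an overlay network,
# which is exactly what Swarm provides.
```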
Docker Swarm is the native solution for orchestrating multiple Docker Engines as a single cluster (called a swarm) running many containers. It is by far the easiest way to deploy a cluster of containers, and it has been included in Docker since release 1.12.0. With Docker Swarm you can easily create a cluster of machines composed of managers (nodes that perform orchestration) and workers (nodes that run the workloads), or nodes that act as both. And if you have worked with Docker Compose, you can easily deploy your Compose files to a swarm. Swarm also offers many features including, but not limited to, load balancing and scaling.
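Creating such a cluster takes only a few commands. A sketch, assuming two machines on the same network (the IP address, token placeholder, and stack name `myapp` are illustrative):

```shell
# On the first machine (it becomes a manager):
docker swarm init --advertise-addr 192.168.1.10

# `swarm init` prints a join command with a token; run it on each worker:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager: deploy an existing Compose file as a stack
docker stack deploy -c docker-compose.yml myapp

# Inspect the cluster and its services
docker node ls
docker service ls
```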
Docker Swarm vs Kubernetes
One of the most common comparisons you will find on the Internet lately is Docker Swarm vs Kubernetes. Kubernetes won, full stop. But that doesn’t mean Swarm is now completely useless. Kubernetes is complex, and managing a Kubernetes cluster can be difficult without tools such as Rancher or distributions like OpenShift. The features and maturity of Kubernetes make it a great fit for almost any kind of deployment, with one exception: small deployments will still work fine on Kubernetes, but the simplicity and ease of use of Swarm are just the best for that kind of scenario.
Directly from the docs page:
- Cluster management integrated with Docker Engine: Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services. You don’t need additional orchestration software to create or manage a swarm.
- Decentralized design: Instead of handling differentiation between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine. This means you can build an entire swarm from a single disk image.
- Declarative service model: Docker Engine uses a declarative approach to let you define the desired state of the various services in your application stack. For example, you might describe an application composed of a web front end service with message queueing services and a database backend.
- Scaling: For each service, you can declare the number of tasks you want to run. When you scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
- Desired state reconciliation: The swarm manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state. For example, if you set up a service to run 10 replicas of a container, and a worker machine hosting two of those replicas crashes, the manager creates two new replicas to replace the replicas that crashed. The swarm manager assigns the new replicas to workers that are running and available.
- Multi-host networking: You can specify an overlay network for your services. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application.
- Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balances running containers. You can query every container running in the swarm through a DNS server embedded in the swarm.
- Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
- Secure by default: Each node in the swarm enforces TLS mutual authentication and encryption to secure communications between itself and all other nodes. You have the option to use self-signed root certificates or certificates from a custom root CA.
- Rolling updates: At rollout time you can apply service updates to nodes incrementally. The swarm manager lets you control the delay between service deployment to different sets of nodes. If anything goes wrong, you can roll back to a previous version of the service.
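Several of the features above map directly onto `docker service` commands. A hedged sketch, using `nginx` and the names `webnet`/`web` purely as examples:

```shell
# Multi-host networking: create an overlay network for the service
docker network create -d overlay webnet

# Declarative model: desired state is 3 replicas, published on port 80
docker service create --name web --replicas 3 -p 80:80 --network webnet nginx

# Scaling / desired state reconciliation: raise the target to 10 tasks;
# the manager adds tasks (or replaces crashed ones) to match it
docker service scale web=10

# Rolling update: swap the image two tasks at a time, 10s apart
docker service update --update-parallelism 2 --update-delay 10s \
  --image nginx:1.25 web

# Roll back to the previous service spec if anything goes wrong
docker service rollback web
```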