Developers need not worry about the underlying infrastructure; all they have to focus on is the services running on it and the stack they write.
You do not have to worry about where your code is running, which leads to faster rollouts, faster releases, and faster deployments. Even rollbacks become a piece of cake once Docker is part of your infrastructure.
If there is any change in your service, all you have to do is change the YAML (YAML Ain't Markup Language) file and you will have a completely new service running in minutes. Docker was built for scalability and high availability.
It is very easy to load balance your services in Docker, and to scale up and scale down as your requirements change.
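As a sketch of the YAML-driven workflow described above, a minimal, hypothetical `docker-compose.yml` for a single web service might look like the following (the service name and image are assumptions for illustration):

```yaml
# docker-compose.yml (v1 syntax) -- hypothetical "web" service
web:
  image: nginx:latest   # change the image tag here to roll out a new version
  ports:
    - "80"              # container port only, so several instances can run side by side
```

Editing any field and re-running `docker-compose up -d` recreates the service with the new definition, which is the "completely new service in minutes" workflow described above.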
The most basic application demoed by Docker is the following cats-versus-dogs voting application, a polyglot app.
Each part of this application would be written and maintained by a different team, and Docker is what ties the pieces together.
The above are the components required to get the Docker application up and running.
Docker Swarm is a Docker cluster manager: you run your ordinary docker commands against it, and they are executed across the whole cluster instead of on a single machine.
The following is the Docker Swarm architecture:
Containers provide an elegant solution for those looking to design and deploy applications at scale. While Docker provides the actual containerizing technology, many other projects assist in developing the tools needed for appropriate bootstrapping and communication in the deployment environment.
One of the core technologies that many Docker environments rely on is service discovery. Service discovery allows an application or component to discover information about their environment and neighbors. This is usually implemented as a distributed key-value store, which can also serve as a more general location to dictate configuration details. Configuring a service discovery tool allows you to separate your runtime configuration from the actual container, which allows you to reuse the same image in a number of environments.
The basic idea behind service discovery is that any new instance of an application should be able to programmatically identify the details of its current environment. This is required in order for the new instance to be able to “plug in” to the existing application environment without manual intervention. Service discovery tools are generally implemented as a globally accessible registry that stores information about the instances or services that are currently operating. Most of the time, in order to make this configuration fault tolerant and scalable, the registry is distributed among the available hosts in the infrastructure.
While the primary purpose of service discovery platforms is to serve connection details to link components together, they can be used more generally to store any type of configuration. Many deployments leverage this ability by writing their configuration data to the discovery tool. If the containers are configured so that they know to look for these details, they can modify their behavior based on what they find.
How Does Service Discovery Work?
Each service discovery tool provides an API that components can use to set or retrieve data. Because of this, for each component, the service discovery address must either be hard-coded into the application/container itself, or provided as an option at runtime. Typically the discovery service is implemented as a key-value store accessible using standard HTTP methods.
The way a service discovery portal works is that each service, as it comes online, registers itself with the discovery tool. It records whatever information a related component might need in order to consume the service it provides. For instance, a MySQL database may register the IP address and port where the daemon is running, and optionally the username and credentials needed to sign in.
When a consumer of that service comes online, it is able to query the service discovery registry for information at a predefined endpoint. It can then interact with the components it needs based on the information it finds. One good example of this is a load balancer. It can find every backend server that it needs to feed traffic to by querying the service discovery portal and adjusting its configuration accordingly.
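To make the register-then-query flow concrete, here is a hedged sketch against etcd's v2 key-value HTTP API. It assumes an etcd instance listening on 127.0.0.1:2379; the key path and the MySQL address are made up for illustration:

```shell
# a MySQL service registers its connection details as it comes online
curl -s -X PUT http://127.0.0.1:2379/v2/keys/services/mysql \
     -d value='{"host": "10.0.1.5", "port": 3306}'

# a consumer (e.g. a load balancer) later queries the same predefined endpoint
curl -s http://127.0.0.1:2379/v2/keys/services/mysql
```

The consumer parses the returned value and connects using whatever details it finds there, rather than relying on configuration baked into its own container.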
This takes the configuration details out of the containers themselves. One of the benefits of this is that it makes the component containers more flexible and less bound to a specific configuration. Another benefit is that it makes it simple to make your components react to new instances of a related service, allowing dynamic reconfiguration.
Now that we’ve discussed some of the general features of service discovery tools and globally distributed key-value stores, we can mention a few of the projects that relate to these concepts.
Some of the most common service discovery tools are:
- etcd: This tool was created by the makers of CoreOS to provide service discovery and globally distributed configuration to both containers and the host systems themselves. It implements an HTTP API and has a command line client available on each host machine.
- consul: This service discovery platform has many advanced features that make it stand out including configurable health checks, ACL functionality, HAProxy configuration, etc.
- zookeeper: This example is a bit older than the previous two, providing a more mature platform at the expense of some newer features.
Some other projects that expand basic service discovery are:
- crypt: Crypt allows components to protect the information they write using public key encryption. The components that are meant to read the data can be given the decryption key. All other parties will be unable to read the data.
- confd: Confd is a project aimed at allowing dynamic reconfiguration of arbitrary applications based on changes in the service discovery portal. The system involves a tool to watch relevant endpoints for changes, a templating system to build new configuration files based on the information gathered, and the ability to reload affected applications.
- vulcand: Vulcand serves as a load balancer for groups of components. It is etcd aware and modifies its configuration based on changes detected in the store.
- marathon: While marathon is mainly a scheduler (covered later), it also implements a basic ability to reload HAProxy when changes are made to the available services it should be balancing between.
- frontrunner: This project hooks into marathon to provide a more robust solution for updating HAProxy.
- synapse: This project introduces an embedded HAProxy instance that can route traffic to components.
- nerve: Nerve is used in conjunction with synapse to provide health checks for individual component instances. If the component becomes unavailable, nerve updates synapse to bring the component out of rotation.
The command above is used to create a Consul machine droplet on DigitalOcean.
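The command itself is not reproduced here; a typical invocation of this kind, assuming Docker Machine's `digitalocean` driver, an exported `DIGITALOCEAN_ACCESS_TOKEN`, and the `progrium/consul` image that older Swarm tutorials used, might look like:

```shell
# provision a droplet and point the local Docker client at it
docker-machine create -d digitalocean consul-machine
eval "$(docker-machine env consul-machine)"

# run a single-node Consul server on that droplet
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap
```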
Use the above command to create the Docker Swarm master, which will attach itself to the Consul instance.
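Again, the exact command is not shown; a sketch of a classic Swarm master creation, assuming the Consul host created earlier is named `consul-machine`, might be:

```shell
docker-machine create -d digitalocean \
  --swarm --swarm-master \
  --swarm-discovery "consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt "cluster-store=consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt "cluster-advertise=eth0:2376" \
  swarm-master
```

The `--swarm-discovery` flag is what points the new master at Consul for node discovery; the `--engine-opt` settings let the engines share cluster state through the same store.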
In Docker Swarm you can define your scheduling strategies in a very fine-grained way.
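For example, classic Docker Swarm ships with `spread`, `binpack`, and `random` scheduling strategies, and supports filters such as node constraints. A hedged sketch (the Consul address, node name, and image are made up):

```shell
# pick the binpack strategy when starting the Swarm manager
swarm manage --strategy binpack consul://consul-host:8500

# use a constraint filter to pin a container to a particular node
docker run -d -e constraint:node==web-1 nginx
```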
To scale up, all you have to type is `docker-compose scale <your-service-name>=<number-of-instances>` and you are done.
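For instance, assuming a service named `web` defined in your `docker-compose.yml`:

```shell
docker-compose scale web=5   # run five instances of the "web" service
docker-compose ps            # list the running containers to verify
```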
Auto-scaling will require a monitoring service to be plugged in.