Like any software architecture, the microservice style has characteristics that bear directly on essential development, operation and maintenance concerns, such as availability, reliability, scalability and independence. For the deployment and instantiation of microservices, there are mature patterns regarding the resources used (instances and hosts) and the frequency of deployment.
Instances and Servers
Some service deployment patterns can be used in the context of microservices, such as: multiple instances of a microservice per server, a single instance of a microservice per virtual machine (VM), and a single instance of a microservice per container.
Before discussing these patterns, it is worth mentioning some microservice deployment requirements:
- Microservices must be deployed and “removed” independently;
- Microservices must be created and deployed quickly, reliably and cost-effectively;
- It must be possible to scale each microservice (create new instances) independently, as a given microservice may be more requested than others;
- A microservice with an error should not affect other microservices.
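The third requirement, independent scaling, can be pictured with a minimal sketch. The service names and replica counts below are purely illustrative, not taken from any real system:

```python
# Minimal sketch: each microservice keeps its own replica count,
# so scaling one service out never touches the others.
replicas = {"orders": 2, "payments": 2, "catalog": 2}

def scale(service: str, count: int) -> None:
    """Set the desired number of instances for a single service."""
    replicas[service] = count

# "orders" is receiving more requests than the others,
# so only it is scaled out.
scale("orders", 6)
print(replicas)  # {'orders': 6, 'payments': 2, 'catalog': 2}
```

This per-service knob is exactly what a monolith lacks: there, handling more "orders" traffic means replicating the entire application.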
The services of an application built on the microservice architecture can be developed in different languages and frameworks, and at any given time it may be necessary to run multiple instances of a microservice. This brings challenges that must be taken into account when adopting the architecture, such as keeping the application stable and available while new versions of microservices or APIs are released and old ones are removed.
Based on these considerations, let’s analyze the deployment patterns mentioned above.
Multiple instances of microservice per server
This is the most traditional deployment method, inherited from monolithic applications. One or more instances of the microservice run on a single server, either within the same process or in a group of separate processes. This approach has benefits and disadvantages:
Benefits:
- Resources are used efficiently, as the instances share the same server and the same operating system.
- The deployment of a service instance is relatively quick, since you simply copy it to the server and start it.
Disadvantages:
- There is little or no isolation between the instances, and it is not possible to limit the resources used by each one. An instance may spike in CPU usage or allocate a large share of the server's memory, reducing the resources available to the other instances.
- The team deploying the microservices needs to know the technical details of how to deploy them. As microservices can use different technologies, the complexity of this process and the risk of errors can increase considerably.
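The first disadvantage can be illustrated with a toy model of a shared server. All class names and memory figures below are made up for illustration; the point is that the only limit is the shared pool, never a per-instance quota:

```python
# Sketch of the "multiple instances per server" pattern: instances
# draw from one shared memory pool, so nothing prevents one instance
# from starving the others.
class Server:
    def __init__(self, memory_mb: int):
        self.free_mb = memory_mb
        self.instances = []

    def start_instance(self, name: str, memory_mb: int) -> bool:
        # No per-instance limit: the only check is the shared pool.
        if memory_mb > self.free_mb:
            return False  # deployment fails for any later instance
        self.free_mb -= memory_mb
        self.instances.append(name)
        return True

server = Server(memory_mb=4096)
server.start_instance("orders-v1", 512)
# A misbehaving instance grabs most of the remaining memory...
server.start_instance("reports-v1", 3400)
# ...so an unrelated service can no longer be deployed.
print(server.start_instance("payments-v1", 512))  # False
```

The VM and container patterns below exist largely to close this gap, by attaching a resource budget to each instance.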
A single instance of a microservice per virtual machine (VM)
In this approach, each microservice is packaged as a virtual machine (VM) image, and each instance is started from that image.
Benefits:
- Each instance of the microservice runs in isolation with a fixed amount of CPU and memory, so instances cannot consume the resources allocated to others.
- The virtual machine encapsulates the technical details of the microservice, which makes deployment simpler and more reliable.
- The virtual machine API can be used as the microservice API.
- It is possible to use the cloud infrastructure to deploy the instances and take advantage of features such as load balancing and auto-scaling.
Disadvantages:
- Resources are used inefficiently, as each microservice instance consumes an entire virtual machine, including its own operating system.
- The deployment of a new microservice instance is slow, especially due to the size of the virtual machine image created for each instance.
A single instance of a microservice per container
Firstly, let’s understand what a container is.
“Containers provide a standard way to package your application’s code, configurations and dependencies into a single object. Containers share an operating system installed on the server and run as resource-isolated processes. This allows for quick, reliable, and consistent deployments, regardless of the environment.” (AWS description)
In a way, containers behave like virtual machines. However, containers do not need to replicate an entire operating system, only the individual components they need to operate, which gives a significant performance boost and reduces the size of the application. Furthermore, they do not need to pre-allocate the resources required to run, such as memory, which allows the available resources to be shared dynamically. It is even possible to control the amount of memory and CPU that each instance of a given container can use.
Docker is currently the most popular container platform; however, there are other options, such as LXD, Windows Containers and Linux-VServer.
In the container model for deploying microservices, each microservice runs in its own container: the microservice is packaged as an independent container image, and instantiating the microservice amounts to instantiating a container. Instances are commonly managed through container orchestrators, such as Kubernetes. The orchestrator also makes it easy to place each microservice instance according to the resources it needs, as well as to group or distribute instances according to a failure-isolation strategy (failure zones).
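The placement role of an orchestrator can be sketched in a few lines. This is a toy scheduler only, with made-up node names and capacities; real orchestrators such as Kubernetes use far richer scheduling criteria (affinity, taints, zones):

```python
# Toy scheduler in the spirit of a container orchestrator: each
# container declares the resources it needs, and the scheduler
# places it on the first node with enough spare capacity.
nodes = {
    "node-a": {"cpu": 4.0, "mem": 8192},
    "node-b": {"cpu": 2.0, "mem": 4096},
}

def schedule(container: str, cpu: float, mem: int):
    """Return the first node that fits the container, reserving its resources."""
    for name, cap in nodes.items():
        if cap["cpu"] >= cpu and cap["mem"] >= mem:
            cap["cpu"] -= cpu
            cap["mem"] -= mem
            return name
    return None  # no capacity left; a real orchestrator would queue or scale

print(schedule("orders", cpu=2.5, mem=1024))    # node-a
print(schedule("payments", cpu=2.0, mem=2048))  # node-a no longer fits -> node-b
```

Because every container carries an explicit resource request, the "noisy neighbor" problem of the multiple-instances-per-server pattern is contained by construction.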
Benefits:
- Containers are a lightweight and fast technology, which makes images quick to build and start;
- It isolates microservice instances;
- It encapsulates the specific technological details of the deployment of each microservice;
- The container API can be used as the microservice API.
- There are container orchestration solutions such as Kubernetes, as well as managed offerings on cloud infrastructure, such as Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS) and Amazon Elastic Kubernetes Service (EKS).
Disadvantages:
- Container security can be a concern, as containers share the host server's operating system kernel with one another.
This approach has been the preferred pattern for deploying microservices, and many tools are available to automate the tasks involved.
In addition to choosing the microservice deployment strategy, it is important to ensure continuous delivery, which is one of the main features of microservice-based applications.
Continuous Delivery
The idea behind continuous delivery is to get software into production frequently (possibly several times a day), safely and reliably. For this to happen, the recurring testing, approval and production deployment process must be automated.
In continuous delivery (or continuous deployment), software development cycles are much faster and each update is smaller, which means a much lower risk to system stability. Moreover, testing each change is much easier and, in case of error, it is possible to identify the problem and revert the code to the previous version quickly. Minor changes to functionality require only minor tests, which are easier to add to the automated test suite. From the developers’ point of view, continuous testing and feedback make the software easier for the team to evolve. In business terms, this means that the company’s processes can evolve quickly, which is an important competitive advantage.
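The gatekeeping role of an automated pipeline can be sketched as follows. The stage names and version labels are hypothetical; the point is that a failure at any stage keeps the previous version in production:

```python
# Sketch of a continuous delivery pipeline: every change runs the same
# automated stages, and a failure at any stage stops the release.
def deliver(version: str, stages: dict, current: str) -> str:
    """Run each stage in order; return the version that ends up in production."""
    for name, passed in stages.items():
        if not passed:
            print(f"{version}: stage '{name}' failed, keeping {current}")
            return current  # release aborted, previous version stays live
    return version          # all stages green, new version goes live

production = "v1.4"
# A small change with all automated checks passing ships straight through.
production = deliver("v1.5", {"build": True, "tests": True, "acceptance": True}, production)
print(production)  # v1.5
# A failing acceptance check never reaches production.
production = deliver("v1.6", {"build": True, "tests": True, "acceptance": False}, production)
print(production)  # v1.5
```

In a real pipeline each "stage" is a job in a CI/CD tool, but the control flow — ship only when every automated check passes — is the same.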
There are some strategies for running a new version of a microservice-based application without taking down the previous version, while ensuring that the application will never be unavailable (zero downtime) in a new deployment:
- Rolling: progressively update the service instances to the new version, gradually replacing the old version of the service. The old version is deactivated once the new one is fully operational.
- Canary: add a new service instance to test the reliability of the new version before completely replacing the old one. This instance coexists with the previous version’s instances, and both are available during the evaluation period. A portion of customer requests can be routed directly to the instance running the new version to verify that it works correctly. Once correct execution is confirmed, the rolling strategy takes over, gradually replacing the old version’s instances with the new ones.
- Blue-Green: bring up a parallel environment (green) running the new version of the software and, when everything is tested and ready to operate, progressively redirect all user traffic from the current environment (blue) to the new environment (green).
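The rolling strategy's zero-downtime property can be made concrete with a small simulation. The pool size and version labels are illustrative:

```python
# Toy rolling update: replace instances one at a time, so at every
# step the service keeps its full set of running instances and
# capacity never drops during the rollout.
instances = ["v1", "v1", "v1", "v1"]

def rolling_update(pool: list, new_version: str) -> None:
    for i in range(len(pool)):
        pool[i] = new_version        # start one new instance, retire one old
        assert len(pool) == 4        # full capacity at every step: no downtime
        print(pool)

rolling_update(instances, "v2")
# ['v2', 'v1', 'v1', 'v1'] -> ... -> ['v2', 'v2', 'v2', 'v2']
```

Canary differs only in pausing after the first replacement to observe real traffic; blue-green replaces the whole pool in a separate environment and then switches traffic over.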
Choosing a canary or blue-green strategy reduces the impact of errors in the new version when it is made available. These strategies are therefore better suited to microservice-based applications.