Microservices Deployment Design Patterns
This is the 12th post in a series on microservices architecture.
Deployment involves two interrelated concepts:
- Deployment process — The steps that developers and DevOps engineers must follow to get software into production.
- Deployment architecture — The structure of the environment in which the software runs.
The production environment enables developers to configure and manage their services, the deployment pipeline to deploy new versions of services, and users to access the functionality those services provide.
Production environments must have four key capabilities:
- Service management interface — Provides developers with the ability to create, update, and configure services.
- Runtime service management — Ensures that the desired number of service instances is running at all times.
- Monitoring — Gives developers insight into what their services are doing, including log files and metrics.
- Request routing — Routes requests from users to the appropriate services.
There are four main options for deploying services in a production environment:
- Deploying services as language-specific packages, such as Java JAR or WAR files.
- Deploying services as virtual machines, which encapsulate the service’s technology stack in a VM image.
- Deploying services as containers, which are lighter-weight than virtual machines.
- Deploying services with a serverless platform, such as AWS Lambda.
Using the language-specific packaging pattern for deployment
The service can be deployed using the language-specific package pattern: the artifact deployed to production and managed by the service runtime is the service’s language-specific package. For a Java application, that package is either a JAR file or a WAR file.
To deploy it, you first install the necessary runtime, which for a Java application is the JRE. If the package is a WAR file, you also need a web container such as Apache Tomcat or Jetty. After configuring the machine, you copy the package onto it and start the service. Each service instance runs as a JVM process.
Your deployment pipeline should deploy the service to production automatically: it builds an executable JAR or WAR file and then invokes the production environment’s service management interface to deploy the new version.
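As a rough sketch, and assuming a host that is already provisioned, the pipeline’s final steps might look like this (the host name, paths, and JAR name are hypothetical):

```bash
# Build the executable JAR (for example, with Maven).
mvn clean package

# Copy the package to a host that already has a JRE installed.
scp target/order-service.jar deploy@prod-host-1:/opt/order-service/

# Start the service as a JVM process on that host.
ssh deploy@prod-host-1 'nohup java -jar /opt/order-service/order-service.jar > service.log 2>&1 &'
```

In practice, the service would run under an init system such as systemd rather than nohup, so that it restarts on failure.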
Benefits of the service as a language-specific package pattern
- Fast deployment — Deploying is as simple as copying the service to a host and starting it.
- Efficient resource utilization — Resources are used efficiently, especially when multiple service instances run on the same machine, or even within the same process, such as a shared Apache Tomcat.
Drawbacks of the service as a language-specific package pattern
- Absence of an encapsulated technology stack — The DevOps team must know exactly how to deploy each service, and each service may require a different version of the runtime.
- No way to constrain the resources a service instance consumes — A process can potentially consume all of a machine’s CPU or memory, starving other service instances and the operating system of resources.
- Lack of isolation when running multiple service instances on the same machine — A misbehaving service instance can impact other instances on the same machine, making the application unreliable.
- Automating the placement of service instances is difficult — Each machine has a fixed amount of CPU and memory, and each service instance consumes some of those resources. Instances must be assigned to machines in a way that uses the machines efficiently without overloading them.
Using a virtual machine pattern for deployment
This time, suppose you deploy the service on Amazon Elastic Compute Cloud (EC2), which provides virtual machines. You could create and configure an EC2 instance and copy the executable JAR or WAR file onto it, but that approach has the drawbacks discussed in the previous section.
The better, more modern approach is to package the service as an Amazon Machine Image (AMI). Every service instance is then created from that AMI, and the EC2 instances are usually managed by an AWS Auto Scaling group, which ensures that the desired number of healthy instances is always running.
The service’s deployment pipeline builds the VM image: a VM image builder creates an image containing the service’s code along with whatever software is required to run it.
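Here is a hedged sketch of what that can look like with the AWS CLI; all names and IDs below are placeholders, and a tool such as Packer typically automates the image-building step:

```bash
# Create an AMI from an instance already provisioned with the JRE
# and the service's code (instance ID and names are hypothetical).
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "order-service-1.0.0"

# Create a launch template that boots instances from that AMI.
aws ec2 create-launch-template \
    --launch-template-name order-service \
    --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t3.small"}'

# Let an Auto Scaling group keep the desired number of healthy
# instances running at all times.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name order-service-asg \
    --launch-template "LaunchTemplateName=order-service,Version=\$Latest" \
    --min-size 2 --max-size 4 --desired-capacity 2 \
    --availability-zones us-east-1a us-east-1b
```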
Benefits of deploying services as virtual machines
- Technology stack is encapsulated in a VM image — The VM image contains the service as well as all of its dependencies.
- Isolated service instances — All service instances run in complete isolation, and each virtual machine has a fixed amount of CPU and memory, so other services cannot steal resources.
- Utilizes mature cloud infrastructure — Cloud infrastructure such as AWS is mature, highly automated, and flexible.
Drawbacks of deploying services as virtual machines
- Inefficient use of resources — Each service instance carries the overhead of an entire virtual machine, including its operating system.
- Deployments are relatively slow — VM images are large, so building them takes time, and booting the operating system inside the VM adds further delay.
- Overhead of system administration — Patching the operating system and runtime is your responsibility.
Deployment using the service as a container pattern
Containers are a modern, lightweight deployment mechanism built on operating-system-level virtualization. A container is a standard unit of software that packages code along with all of its dependencies so that the application runs reliably and quickly in any computing environment.
When a process runs in a container, it’s as if it were running on its own machine. A container usually has its own IP address, which eliminates port conflicts, and each container has its own root filesystem. Container runtimes use operating system mechanisms to isolate containers from one another. The most popular container runtime is Docker.
You can specify a container’s CPU and memory limits when creating it, and the container runtime enforces those limits to prevent containers from hogging the machine’s resources.
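For example, Docker exposes these limits as flags when a container is created (the image and container names are hypothetical):

```bash
# Run a container capped at one CPU and 512 MB of memory; the runtime
# enforces the caps so the container can't starve its neighbors.
docker run -d --name order-service \
    --cpus 1.0 --memory 512m \
    -p 8080:8080 \
    order-service:1.0.0
```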
To deploy a service as a container, you must package it as a container image: a filesystem image containing the application and any software required to run the service.
The first step is to write a Dockerfile, which describes how to build a Docker container image: it contains instructions for installing software and configuring the container, as well as the shell command to run when the container is created. Once the Dockerfile is written, you can build the image.
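As an illustration, a Dockerfile for a Java service packaged as an executable JAR might look like this sketch (the base image and file names are assumptions):

```dockerfile
# Start from a base image that provides the Java runtime.
FROM eclipse-temurin:17-jre

# Copy the service's executable JAR into the image.
COPY target/order-service.jar /app/order-service.jar

# Document the port the service listens on.
EXPOSE 8080

# The command to run when a container is created from this image.
CMD ["java", "-jar", "/app/order-service.jar"]
```

You would then build the image with a command such as `docker build -t order-service:1.0.0 .`.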
At the end of the build process, the newly built image is pushed to a Docker registry, which is to container images what a Maven repository is to Java libraries. Once your service has been packaged as a container image, the container infrastructure can pull the image from the registry onto a production server and create one or more containers from it.
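The tail end of such a pipeline might look like the following (the registry host and image name are hypothetical):

```bash
# Tag the locally built image with the registry's address...
docker tag order-service:1.0.0 registry.example.com/order-service:1.0.0

# ...and push it so that production servers can pull it.
docker push registry.example.com/order-service:1.0.0
```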
Benefits of deploying services as containers
- Encapsulation of the technology stack: the API for managing your services becomes the container API.
- Each service instance is isolated.
- The resources of each service instance are constrained.
- Containers are lighter than virtual machines.
Drawbacks of deploying services as containers
- It is your responsibility to administer the container images.
- You need to patch the operating system and runtime.
The best way to orchestrate Docker containers is to use Kubernetes, a framework that automates the deployment, scaling, and management of containers across a cluster of machines.
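For instance, a minimal Kubernetes Deployment manifest (with hypothetical names) asks the cluster to keep a fixed number of replicas of a container image running, with per-container resource limits:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3                    # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:1.0.0
          ports:
            - containerPort: 8080
          resources:
            limits:              # per-container resource constraints
              cpu: "1"
              memory: 512Mi
```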
Using the serverless deployment pattern
The language-specific packaging, service as a virtual machine, and service as a container patterns are quite different, but they share some characteristics:
- All three patterns require you to pre-provision some computing resources, either physical machines, virtual machines, or containers.
- Even on deployment platforms that support autoscaling, which adjusts the number of VMs or containers based on load, you pay for VMs and containers even when they are idle.
- In addition, you are responsible for system administration.
Serverless deployment sidesteps these issues. All of the main public clouds offer serverless options: Google Cloud has Cloud Functions, AWS has AWS Lambda, and Azure has Azure Functions.
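With AWS Lambda, for example, you upload your code and the platform provisions instances and runs the code in response to events; you never manage the underlying machines. A minimal Java handler, built on the aws-lambda-java-core library, might look like this sketch (the class name and payload shape are assumptions):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Lambda provisions instances of this handler on demand and invokes
// handleRequest once per event; scaling is handled by the platform.
public class GetOrderHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        String orderId = event.getOrDefault("orderId", "unknown");
        return "Order " + orderId;
    }
}
```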
Benefits of using serverless deployment functions
Using serverless deployment, such as AWS Lambda, has several advantages:
- Integrates with a number of AWS services
- Reduces the number of system administration tasks
- Elasticity to handle dynamic load
- Usage-based pricing
Drawbacks of using serverless deployment functions
- Long-tail latency — AWS Lambda dynamically executes your code, so some requests have high latency due to the time it takes AWS to provision an instance and for the application to start.
- Limited event/request-based programming — AWS Lambda isn’t intended to be used to deploy long-running services, like those that consume messages from a third-party message broker.
Serverless deployments aren’t a good fit for every service because of long-tail latencies and the requirement to use an event-based programming model. Use them if they are a good fit.
You should choose the most lightweight deployment pattern that supports your service’s requirements.
Evaluate the options in the following order
- Serverless
- Containers
- Virtual Machines
- Language-specific packages