Deploying Microservices on EKS: A Beginner's Guide (Part 2)

Last time, we deployed our blog as microservices in a Kubernetes cluster. You may have noticed that client-service was the single entry point into the cluster and was also responsible for all the routing. This requirement forced us to use an Express server to expose a few routes in addition to serving static files for the React app.

What's The Problem?

Every time a new microservice is created and needs to be called from client-service, at least two new container images need to be built and deployed to the cluster. We can avoid this by using an Nginx server container instead of Express for serving static files and routing. The nginx.conf where all the routing information lives can be mounted separately (read: as a ConfigMap).

This way, only the mounted file needs to be updated whenever the routing changes. It still doesn't solve the problem completely, though, as the pods need to be restarted for the changes to take effect.
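As a rough sketch of the idea (the ConfigMap name, upstream service names and ports below are illustrative assumptions, not files from the actual repository), the routing could live in a ConfigMap that is mounted into the Nginx container as its server configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: blog-nginx-conf        # hypothetical name
data:
  default.conf: |
    server {
      listen 8080;

      # Serve the static React build
      location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
      }

      # Proxy API calls to the backend microservices (assumed names and ports)
      location /user {
        proxy_pass http://user-service:80;
      }
      location /post {
        proxy_pass http://post-service:80;
      }
    }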

It would be great if we could decouple all the routing logic from the application. And not just routing: what if we want to implement tracing across the services, or mTLS? It would be nice to add these features transparently, without changing the application code.

Service Mesh

A service mesh does exactly the above and more. It transparently takes over inter-service communication and adds many features to the service network. Without a service mesh, every application would need to implement its own set of libraries across languages and frameworks, and make sure those libraries are compatible with each other. That consumes a lot of developer time and integration testing.

A service mesh is a unified infrastructure layer which provides observability, traffic management, fault tolerance and security to existing services.

Istio: A Performant Distributed Mesh

Istio is the product of a joint effort by Lyft, Google and IBM. It's completely open source and built on Envoy proxy, a CNCF graduated project. It follows the sidecar pattern, adding an Envoy network proxy container to running pods, while service discovery and other control plane functions stay centralised. For details on the Istio architecture, have a look here. Although Istio isn't considered the easiest mesh to deploy, let's do exactly that and explore some of its capabilities on our routing problem.

Configuring the Environment

We can use the same setup as before, with the addition of the Istio CLI tool:

  1. istioctl: We can download this binary by running the following command:
curl -L https://istio.io/downloadIstio | sh -

The directory istio-<version> will be created, and you'll need to copy istioctl from istio-<version>/bin to somewhere in your PATH.
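For example, you could copy the binary to a directory that is already on your PATH and check that it works (the exact version in the directory name will differ):

cd istio-<version>
sudo cp bin/istioctl /usr/local/bin/
# Verify the binary is reachable; this also tries to query the cluster for a control plane version
istioctl version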

All Hands On Deck

Let's start by deploying all the deployments and services from our previous project.

git clone https://github.com/maytanthegeek/kubernetes-microservices
cd kubernetes-microservices
kubectl apply -f kubernetes/

You may notice that service-client.yml is not of type: LoadBalancer. This is because we will later offload the responsibility of ingress from the internet to Istio.
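For reference, the client Service then looks roughly like the sketch below. The selector label and port values are assumptions based on the ports used later in this post; check the actual manifest in the repository.

apiVersion: v1
kind: Service
metadata:
  name: client-service
spec:
  type: ClusterIP          # no LoadBalancer; Istio will handle ingress
  selector:
    app: client            # assumed label from the client deployment
  ports:
  - port: 8080
    targetPort: 8080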

Note: All the manifest files are available in the accompanying repository.

Now let's install Istio in our cluster. It provides three methods of installation:

  1. Using istioctl to deploy the manifests for the various Istio components.
  2. Using the Helm chart to install the components.
  3. Using the Istio operator, which installs and updates the required components on our behalf.

We will use the operator method as it's convenient and easy to maintain. Start by initializing the operator CRDs and creating the required namespaces in the cluster:

istioctl operator init
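Once the init finishes, you can confirm that the operator pod is running before moving on (by default the operator is installed into the istio-operator namespace):

kubectl get pods -n istio-operator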

Now we are ready to create our operator manifest file istio-operator-profile.yml.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: demo

The only spec we have added is profile: demo, which ensures that all the Istio components are installed for our evaluation. You can choose other profiles as well. Apply this manifest with kubectl:

kubectl apply -f istio-operator-profile.yml

If you check the istio-system namespace for pods, you'll see istiod, istio-ingressgateway and istio-egressgateway being created. They handle service discovery, ingress of traffic from outside the cluster and egress of traffic leaving the cluster, respectively.
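You can check on them and wait until all three report Running:

kubectl get pods -n istio-system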

Istio is up and running in our cluster and ready to show its magic. We will first create a Gateway resource, which allows istio-ingressgateway to send traffic into the namespace where our services live.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: blog-gateway
  namespace: blog
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - blog.maytan.me

This allows all traffic for the host blog.maytan.me over HTTP (port 80) to be accepted by istio-ingressgateway for this namespace. Go ahead and apply this file.

Note: The domain in the hosts array needs to point to the load balancer created right after the Istio installation.
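On EKS, the istio-ingressgateway Service is exposed through an ELB whose hostname you can point a CNAME record at. You can look it up like this:

kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'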

What about routing? We haven't seen any routing rules yet. To achieve that, we will create another Istio resource called VirtualService. It is a kind of wrapper on top of Kubernetes Services which adds layer-7 routing capabilities. Here's the virtual-service.yml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: client
  namespace: blog
spec:
  hosts:
  - "*"
  gateways:
  - blog-gateway
  http:
  - match:
    - uri:
        prefix: /user
    route:
    - destination:
        port:
          number: 80
        host: user-service
  - match:
    - uri:
        prefix: /post
    route:
    - destination:
        port:
          number: 80
        host: post-service
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 8080
        host: client-service

There are a few things to notice here:

  1. The gateways array contains the name of the gateway we created earlier.
  2. For every destination service, we provide a match condition. The match condition can use regex, exact or prefix matches on URIs, headers, etc.
  3. For each destination we define a port and a host. By default, the host can simply be the exact name of your Kubernetes service. You can also do interesting things like weight-based routing across multiple destinations.
  4. Traffic policies for a destination host don't live here, though. They go into a separate DestinationRule, which points to pods via the service and can define multiple subsets of the service (for example, prod/stage versions of the same service); a sketch follows this list. Here's a great example to learn more.
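As a rough illustration (the subset names and version labels below are made up and not part of our project), a DestinationRule for user-service with two subsets could look like this:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user-service
  namespace: blog
spec:
  host: user-service
  subsets:
  - name: stable           # selects pods labelled version: v1
    labels:
      version: v1
  - name: canary           # selects pods labelled version: v2
    labels:
      version: v2

A VirtualService destination can then reference a specific subset (host: user-service, subset: canary) to split or shift traffic between versions.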

There's one last step remaining before our applications can start using the service mesh. Remember how we talked about a sidecar container being placed alongside the application container in each pod? If you look at our pods for client, user and post, they are pretty lonely right now. Follow these steps so that Istio can inject its sidecars into the pods.

# Label the namespace so that Istio knows to use it for injection.
kubectl label namespace default istio-injection=enabled

# Restart all the pods so that Istio can start injection
kubectl rollout restart deployment -n default
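After the restart, every pod should report two containers, the application plus the injected Envoy sidecar:

# READY should show 2/2 for each pod once injection has happened
kubectl get pods -n default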

Conclusion

At this point we can visit http://blog.maytan.me and see our monumental blog.

Take this as a light introduction to service meshes, especially Istio. For us, it has opened up many opportunities and increased observability tenfold. There are many more useful features that we haven't explored; in fact, this is just a fraction of what Istio can do. Do check out the Tasks section of their documentation for really interesting real-world use cases.

Remember, I'm just starting out and would appreciate honest feedback. In the next part of this series, we will explore CI/CD for Kubernetes and what GitOps is. Ciao!