Deploying Microservices on EKS: A Beginner’s Guide (Part 1)


Our core services at Smart Joules are packed into a monolithic application that has grown steadily over the last two years. Overall, eight developers have worked on it at different points in time. Recently we started to realize what a mammoth of a Node.js application it has grown into.

The Challenge

The problems started with delayed deployments to production due to the increased time it took to run tests. Code reviews took longer because we had to ensure that a modification to a model or service didn’t affect any controller other than the intended one. The biggest challenge of all was onboarding a new developer to this huge codebase, let alone getting any useful output from them within the first few months. We also speculated that scaling the whole application would be a challenge and require huge resources (the application was hosted on AWS Elastic Beanstalk as a Docker application).

We then decided that it was time we started breaking the application into microservices. As work started on this, we also needed a deployment strategy for those individual microservices, along with routing and individual scaling.

Kubernetes Enters the Picture

After consulting with some advisors, we decided to use AWS’s managed Kubernetes service, EKS, to run our microservices. Kubernetes is very well supported by the community, and logging and networking are easy to achieve now that service meshes are coming into the picture. I would like to note here that we didn’t find any significant difference between EKS and other managed Kubernetes services like AKS and GKE; since all our other applications and data are hosted on AWS, we decided to go with EKS.

Basic Concepts

  1. Pods: The basic unit in Kubernetes, responsible for running your application. It essentially runs your Docker images inside.
  2. ReplicaSet: Maintains the desired number of identical Pods running at any given time. When a ReplicaSet is created, it may create new Pods or kill existing ones as needed.
  3. Deployment: Provides a declarative way to create and update ReplicaSets. This is where the desired state of a ReplicaSet is defined.
  4. Service: An abstract declaration used to expose a Pod or a set of identical Pods as a network service.
  5. ServiceAccount: An identity for Pods to interact with the Kubernetes API. It can be specified in a Deployment if already created.
  6. HorizontalPodAutoscaler: Automatically scales the number of Pods in a ReplicaSet or Deployment between a specified minimum and maximum, based on a condition on some metric (CPU utilization or a custom metric).
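To make that last concept concrete, here is a minimal sketch of a HorizontalPodAutoscaler manifest using the stable autoscaling/v1 API. The target Deployment name my-app is a hypothetical example, not something from this article:

```yaml
# Hypothetical HPA: keeps a Deployment named "my-app" between 2 and 10
# replicas, scaling on average CPU utilization across its Pods.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # scale out above 50% average CPU
```

Note that CPU-based autoscaling requires resource requests to be set on the target Pods and the metrics server to be running in the cluster.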

Configuring the Environment

I am assuming you already have an AWS account and the AWS CLI set up with your IAM credentials. If you don’t want to follow along with AWS, you can use Minikube instead.

  1. kubectl: The CLI tool to manage applications, authentication, networking, and scaling on your cluster. Just follow the official install instructions for your OS.
  2. eksctl: A tool made by AWS and Weaveworks to create and manage EKS clusters and their nodes. It is far handier than manoeuvring through the AWS console. Install it using the official instructions.
  3. helm: Helm is to Kubernetes what pip is to Python. It’s a package manager for Kubernetes that can deploy hundreds of applications hosted on Helm Hub in minutes.
  4. minikube: Minikube is a great tool for testing your apps against Kubernetes locally. It creates a Kubernetes cluster inside a virtual machine using your choice of hypervisor and configures kubectl to use that cluster. Follow the official docs for installation.

All Hands on Deck

I originally learned to deploy my first cluster from an AWS workshop, but I now feel those instructions are a bit outdated. I have also included some practices that I use when deploying at scale.

Let’s start with launching the cluster with 3 nodes:

eksctl create cluster --name=my-first-cluster --nodes=3 --managed --alb-ingress-access

This will launch the cluster in the region us-west-2 (eksctl’s default). Use --region to change it. This process may take 15–20 minutes to complete. Once the cluster is up and running, check that kubectl is configured to use the new cluster:

kubectl cluster-info

It should return information similar to this:

Output of kubectl cluster-info

We will start with deploying a very handy tool here, Kubernetes Dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

After the successful deployment message, access the dashboard via:

kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 10443:443 &

The dashboard will be then available at https://localhost:10443/. You can log in with the token option after getting your access token by running:

aws eks get-token --cluster-name my-first-cluster
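Note that get-token prints a JSON ExecCredential document rather than the bare token. As a sketch (assuming python3 is available on your machine), you can pull out just the token field like this:

```shell
# The bearer token lives at .status.token in the JSON that
# `aws eks get-token` prints. Assumes the aws CLI is configured.
aws eks get-token --cluster-name my-first-cluster \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["status"]["token"])'
```

Paste the printed token into the dashboard’s token login field.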

Keep it running and let’s move to our real application.

Deploying an Application

This application has 3 separate microservices:

  1. The client frontend to show posts.
  2. The user service to list users.
  3. The posts service to get posts and comments on them.

The Docker images for these services are already available on Docker Hub, so we can focus on deployment only. The source is available at https://github.com/maytanthegeek/kubernetes-microservices/

We will start by writing a deployment manifest for client microservice:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: client-service
  template:
    metadata:
      labels:
        app: client-service
    spec:
      containers:
      - name: client-service
        image: maytanthegeek/microservice-client:1.0.0
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "50m"
        ports:
        - containerPort: 80

Notice that the template section is actually the spec for the Pods, and replicas: 3 creates a ReplicaSet of 3 Pods. So we can see how a Deployment encapsulates these together.

All three Pods will expose the client application on port 80, but we cannot yet access the application from the internet or from outside the cluster. For that, we will create a Service.

apiVersion: v1
kind: Service
metadata:
  name: client-service  
spec:
  type: LoadBalancer
  selector:
    app: client-service
  ports:
  - port: 8080
    targetPort: 80

As the client should be exposed to the internet, we have used the Service type: LoadBalancer. There are other Service types as well for different use cases:

  1. ClusterIP (default type): Only accessible within the cluster. No need to mention the type explicitly.
  2. NodePort: Exposes the service to the internet via a port on the node(s) on which the Pod(s) are running.
  3. LoadBalancer: Exposes the service to the internet via the cloud provider’s load balancer service. In AWS, this is the Elastic Load Balancer.
  4. ExternalName: Maps a Service to a DNS name, for example a service in a different namespace or an external database. This uses a CNAME record directly instead of any proxying.
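As a sketch of that last type, here is a hypothetical ExternalName Service (the name external-db and the target DNS name are made up for illustration):

```yaml
# Hypothetical ExternalName Service: Pods in this namespace can reach
# "external-db" by name; cluster DNS answers with a CNAME record
# pointing at db.example.com, with no proxying involved.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
```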

Create similar deployment and service manifests for the user and posts services by referring to the files at https://github.com/maytanthegeek/kubernetes-microservices/tree/master/kubernetes

Note: Do not deploy service-client.yml from the above repository, as it doesn’t have type: LoadBalancer and won’t be accessible on the internet.

We can now start deploying these services using kubectl. As I have put all the manifests in the kubernetes directory, I can use the command:

kubectl apply -f kubernetes/

If you want to deploy individual manifests, you can use:

kubectl apply -f path/to/file.yml

After this, you can go ahead and check if the microservices were deployed successfully by running:

kubectl get all

Your result should look like this:

Output of kubectl get all

Remember we left the Kubernetes Dashboard running. If you open that tab, you’ll be able to see the same information in the default namespace.

Kubernetes Dashboard

Kubernetes Dashboard is a pretty nice GUI for visualizing your cluster. You can use almost every feature that kubectl offers from the dashboard as well.

Viewing the Application

Now that the application is deployed, how can we view it in the browser?

After a while, when the load balancer assigned to client-service becomes active, you will see a URL instead of <pending> in the output of kubectl get all, or just by running:

kubectl get service client-service

Output of kubectl get service client-service

Visit this URL in a browser and you’ll see our application working. I also opened the network tab to see the different requests going to the application.

Running application

Conclusion

So that was it. Deploying microservices in the cloud can be a pain when you are just starting out. Choosing good tools and learning resources can take you miles on that journey. That is the primary reason we dived into Kubernetes.

Scaling individual Pods, networking, and integrating monitoring become very easy. Kubernetes does most of the heavy lifting for us (though not everything, and only if you take some well-thought-out steps).

You can delete all services created in this article from the cluster by using:

kubectl delete -f kubernetes/

or delete the whole cluster by running:

eksctl delete cluster --name=my-first-cluster

What's Next?

You may have noticed that all the requests are currently going to the client service only. What if we wanted to expose other services to the world as well? We could, of course, use the LoadBalancer type for all of them, but is there a better way? Can we also make service-to-service communication more streamlined?

There are service meshes like Istio and Linkerd that are specifically focused on enhancing service-to-service communication without requiring much modification to existing services. In Part 2 of this series, we’ll go through this in detail and deploy the Istio service mesh in our cluster.