A step-by-step guide: Dockerizing and deploying a MEAN stack application to a Kubernetes cluster

Pradeep Raj
Jun 3, 2021 · 14 min read


Here I want to share my experience learning Docker and Kubernetes and deploying my applications to them. So if you are a MEAN stack (or front-end or back-end) developer and want to know how to deploy your application into a Kubernetes cluster (hereafter k8s), this blog is for you.
By the end of this blog, you will have hands-on experience with the same.

The application we are going to deploy in k8s

All the code used in this blog is available on my GitHub page. You can refer to it here šŸ‘‰ Frontend app and Backend app.

So here is the plan:

  • Overview of the application
  • Short intro about Docker
  • Dockerizing our frontend and backend applications
  • Short intro to k8s
  • Creating k8s components for our application
  • Deploying our app into the k8s cluster using Minikube

Feel free to jump directly to the section you care more about!

Application overview:

In this blog, Iā€™ve used a MEAN stack note-taking application. But you can easily deploy any Angular/React/Vue/JS or other front-end application to k8s by following the steps below.

Like any other MEAN stack application, this one has three layers, and adjacent layers communicate directly with each other.

Three layers of the application

As linked above, both the frontend and backend applications have their own Git repositories. The app performs basic CRUD operations. The frontend and backend apps have their own TCP ports and communicate with each other over HTTP. I wonā€™t go deeper into the application itself here; you can refer to the repos to learn more about it.

What is Docker?

Before knowing what it is, letā€™s see what problem Docker solves.
Letā€™s go through a scenario we all could have come across. Assume you are developing a MEAN stack application as part of your project. You push your code commits constantly to your teamā€™s git repository. The application was working absolutely fine. One fine day you are about to demo your application to the clients, and… boom… your laptop crashes due to some strange hardware failure.
So as not to disappoint your clients, you are handed your managerā€™s laptop, and you pull the code from the git repository. When you try to run the application, it starts showing error messages you have never seen on your machine. And the demo ends with one big statement:
ā€œIt was working fine on my machine!ā€

Moral of the story: if you had known Docker before, your clients would not have been disappointed that day, because Docker solves exactly this common problem.

Docker helps you package all your code, dependencies, system requirements, environment variables, etc. into a portable image. This image can run on any machine that has Docker installed. And when it runs, it runs exactly as it did on your development machine. Just as we constantly push our codebase to git repositories, docker images can be pushed to Docker registries. There are many docker registries out there; the most popular is DockerHub.

To execute a docker image, you have to install Docker Desktop on your machine. Once it is installed, you can run any image from DockerHub or any image you created locally. A running instance of a docker image is called a ā€œcontainerā€. The container has its own filesystem, networking, etc. Itā€™s like a virtual machine, but itā€™s not. We can run as many containers as our machine is capable of, and every container is isolated from the others.
Letā€™s say you have created an image of your Express API server (donā€™t worry, weā€™ll soon see how to create one) that listens on port 3000. You can start a container from that image, and the container starts to listen on its port 3000. While creating the container you can map any port of your PC (the ā€œhost machineā€ in docker terms) to the containerā€™s exposed port. Whenever you interact with the host port, the request is forwarded to the containerā€™s port.

Itā€™s like running multiple isolated instances of an operating system inside your machine, but containers are far lighter than virtual machines.
Here Iā€™ve just tried to give you a taste of Docker. To learn more, please refer here.

Dockerizing the apps:

Make sure you have installed the Docker Desktop application successfully and that it is running. Also create an account on DockerHub and note down your username.

Okay, now is the time to get your hands dirty. Letā€™s see how to create a docker image for your application.
You have probably heard of package.json: we declare all our application dependencies there. Similarly, docker has a file where we list everything our application needs. It is called the Dockerfile.
Letā€™s start with a simple one. Here is the Dockerfile for our express application. Letā€™s keep the Dockerfile in the root folder of the application.
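The original gist is not visible in this copy; based on the steps described next, a minimal sketch of such a Dockerfile could look like this (the entry file name server.js is an assumption; check the repo for the actual one):

```
# Base image: NodeJS runtime on lightweight Alpine Linux
FROM node:alpine

# All subsequent commands run inside /app
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the rest of the application source code
COPY . .

# The express app listens on port 3000
EXPOSE 3000

# Start the application inside the container (entry file is an assumption)
CMD ["node", "server.js"]
```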

To create an image, we have to pick a base image. There are thousands of images available in DockerHub. As we are going to build a docker image for an express application, Iā€™ve chosen a node:alpine image. Compared to other Linux distributions, Alpine is lighter. Choosing the lighter base image will reduce the size of our final image. As the next steps, Iā€™m copying my local files into my image and running the npm install command. The last command will start the application inside the container.
Docker has its own CLI. To create an image out of this Dockerfile, we have to run the following command from the root folder of the application, where the Dockerfile lives. (Donā€™t miss the dot at the end of the command!)
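The command follows this pattern (the image name notes-api is illustrative; substitute your own DockerHub username):

```
docker build -t <your-dockerhub-username>/notes-api .
```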

Itā€™s a best practice to include your DockerHub username when tagging the image. By this time you should have created your first image successfully.
CongratsšŸŽ‰. Now letā€™s start a container from the newly created image. Just type this command and hit enter.
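Something along these lines (same illustrative image name as above):

```
# -d runs the container in daemon (detached) mode;
# -p maps host port 3000 to container port 3000
docker run -d -p 3000:3000 <your-dockerhub-username>/notes-api
```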

This command starts your express application in a docker container in daemon mode. You may notice that we are also mapping a host port to the container port, so requests are forwarded to the container.
Note: this express application is hardcoded to run on port 3000, so we cannot change the container port (the one on the right side of the :). But we can set the host port to any available port on your machine.
Now you can open your browser or Postman client and start interacting with your express API server.
Creating docker images and working with containers is simple and straightforward. Isnā€™t it?
Let me share some docker commands that will help you.
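The original list is not visible in this copy; these are the everyday Docker commands such a cheat sheet usually covers:

```
docker ps                          # list running containers
docker ps -a                       # list all containers, including stopped ones
docker images                      # list local images
docker logs <container-id>         # view a container's logs
docker stop <container-id>         # stop a running container
docker rm <container-id>           # remove a stopped container
docker rmi <image-id>              # remove a local image
docker exec -it <container-id> sh  # open a shell inside a running container
```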

Okay, letā€™s roll up our sleeves and look into the Dockerfile for the frontend application as well.
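Again, the gist itself is not shown here; a sketch of the two-stage Dockerfile described below might look like this (the dist path and nginx.conf location are assumptions; check the repo for the actual file):

```
# Stage 1: build the Angular app using the NodeJS toolchain
FROM node:alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the compiled dist folder with a plain web server
FROM nginx:alpine
# The nginx config lives in the src folder of the repo (path assumed)
COPY src/nginx.conf /etc/nginx/conf.d/default.conf
# Copy only the compiled output from the first stage (dist path assumed)
COPY --from=build /app/dist/notes-app /usr/share/nginx/html
EXPOSE 80
```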

This may look longer, but itā€™s as simple as the previous one.
You may notice there are two stages in this Dockerfile. The reason is that to run ā€œnpm installā€ and ā€œng buildā€ you need a docker image with the NodeJS runtime, but to host the dist folder you donā€™t need NodeJS at all. You just need a web server.
Hence the two stages in the Dockerfile.
If you notice, the second stage of the Dockerfile copies the ā€œdistā€ output from the first stage.
This practice drastically reduces the size of your docker image, because the final image carries no NodeJS-related dependencies.

To create the image and start the container, we have to run the same set of commands.
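With illustrative names again:

```
docker build -t <your-dockerhub-username>/notes-ui .
docker run -d -p 80:80 <your-dockerhub-username>/notes-ui
```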

Now your front-end application is running on port 80 of your machine. If both the frontend and backend applications are running as containers on the right ports, they can interact with each other.
Even if your machine doesnā€™t have NodeJS and the Angular CLI installed, you can still run this MEAN stack application just by installing Docker Desktop and pulling the right images from DockerHub.

KudosšŸ„³šŸ„³šŸ„³ You just dockerized a MEAN stack application.
But our goal is far higher. We have to deploy these docker images into a k8s cluster. Grab some coffee, letā€™s do it.

What is Kubernetes?

As usual, before jumping straight to the ā€œwhatā€, letā€™s look at the problem k8s solves.
Nowadays project teams are moving from large monolithic applications to microservice-based applications. This helps teams work independently and reduces dependencies between them.
Every team wants to run its microservice as a container. And to prevent bottlenecks and meet high demand, some containers are replicated into multiple instances. So even for a simple web application, we can easily have around 50 containers running at any point in time. During peak hours, it might go even higher.
Can you imagine how tough it would be to manually manage each and every container and make them all work together?
Wouldnā€™t it be nice to have a tool that automates this and makes containers work together seamlessly?
The answer is KubernetesāŽˆ. That is why k8s is called a container orchestrator.
In the Kubernetes world, these containers run inside nodes. A node can run multiple containers (or pods, in k8s terms; a pod is like a container with some abstraction around it). A k8s cluster can have multiple nodes.
There are two types of nodes:

  • Master nodes šŸ§ (acts like a brain ā€” managerial work)
  • Worker nodes šŸ’Ŗ(acts like a muscle ā€” fieldwork)

The worker nodes are where the pods (the containers) run, and the master node manages these worker nodes. If demand surges, the master node brings new worker nodes into the cluster and removes them when they are no longer needed.
K8s has lots of components: service, ingress, controller manager, cloud controller manager, etc. We cannot learn everything in a single article.
But what we will learn here are the absolutely necessary pieces for deploying your application into a k8s cluster. Are you ready?

Creating necessary k8s components:

As I told you earlier, there are lots of components in k8s. But many of them are beyond the scope of this blog. Letā€™s discuss the following k8s components now:

  • Deployments
  • Internal services (ClusterIP)
  • External services (LoadBalancer, NodePort)
  • ConfigMaps
  • Secrets

Creating and working with these components is way easier than it sounds. Before creating them, letā€™s look at each one briefly.

A Deployment is where we specify the following items:

  • Which image to use to build the container
  • How many replicas of the container to create, to balance the load
  • Which port the container listens on
  • Which environment variables to set (and their values) so the application running inside the container can access them, and more

To make a pod (container) interact with another pod, we need something called a Service. For interaction within the k8s cluster we can create an internal service (aka ClusterIP). But to expose a pod outside the cluster, we need a LoadBalancer (or NodePort) service.
Letā€™s say our MongoDB pod and Express server pod are in the same cluster. Then an internal service satisfies their need to interact with each other. But if an external client wants to access the Express server (like a front-end application in the browser), then we have to expose our Express pod via a LoadBalancer service.

Q: But why canā€™t we directly expose the pod's IP address to make two or more pods interact with each other?

A: Pods are ephemeral (i.e., short-lived). Since pods are created and destroyed based on demand, we cannot depend on a podā€™s IP address. That is why we link all the active pods to a k8s service and expose the serviceā€™s address for interaction.

While developing applications, we ideally donā€™t store application secrets and server endpoints in the source code. They are either saved as environment variables or stored in a local file. Kubernetes provides two components for this.
Using ConfigMaps, one can store configurations that are not confidential but vary from environment to environment, like DB_URL, API endpoints, etc. ConfigMaps can be consumed inside deployments seamlessly. But the order of creation matters: if a deployment consumes a configuration, the corresponding ConfigMap must be created before the deployment starts.
The same applies to Secrets. Secrets are used to store DB passwords, API access keys, etc. They also have to be created beforehand so they can be consumed inside deployments.

All these components are written as YAML files. Letā€™s start writing them.
Our application has three entities: Mongo, Express, and Angular. Letā€™s start with Mongo.
For Mongo, letā€™s begin with the Secret, because the order REALLY matters.

You should already have Docker Desktop installed on your machine. To proceed further, you have to install Minikube on your PC.
Minikube allows you to set up a k8s cluster on your personal machine, which has only limited resources.

To start a local k8s cluster in your machine, execute the following minikube command:
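The command is simply:

```
minikube start
```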

This command spins up a local, single-node cluster, where the one node plays both the master and worker roles. It may take a while to set up the local k8s cluster. Once done, follow the next steps.

Now you have a cluster running.
K8s provides a command-line tool called kubectl. It helps us to interact with a k8s cluster and provide instructions to it. We use kubectl commands to create and modify the k8s components.
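The original gist is not visible in this copy; a minimal sketch of such a Secret manifest, with illustrative names and placeholder base64 values, saved for example as mongo-secret.yaml, might be:

```
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=   # base64 of "username" (placeholder)
  mongo-root-password: cGFzc3dvcmQ=   # base64 of "password" (placeholder)
```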

This is how the Secret YAML file looks. Now letā€™s see how we use it in our deployment file.
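The original file is in the repo; a sketch matching the description below (names are illustrative), saved for example as mongo-deployment-service.yaml, could be:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          # Credentials are read from the Secret created earlier
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  # Matches the pods labelled app: mongodb above
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
```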

The above YAML file has both the deployment and the service. As you may know, a YAML file can hold multiple configurations separated by ā€œ---ā€ (three hyphens). In the deploymentā€™s env section, you can see how we assign the secret values to environment variables. When the MongoDB container is running, it will have access to these environment variables and secrets.
Below that you can see the service component. It has a name and a selector; the selector refers to the deployment for which this service is created. The above MongoDB deployment can be accessed via this service. If the express server pod (which we create in the next step) wants to connect to MongoDB, it can simply refer to this service.
MongoDBā€™s default port is 27017, so we use the same port here to keep things simple.

Now the MongoDB setup for the k8s cluster is almost done. Since our deployment depends upon the secret, letā€™s deploy the secret first using the following command.
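Assuming the Secret was saved as mongo-secret.yaml:

```
kubectl apply -f mongo-secret.yaml
```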

Note: The order really matters.

The above command adds the secret to the k8s cluster. Now letā€™s add the mongo deployment and service to the cluster.
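Again assuming the file name from the sketch above:

```
kubectl apply -f mongo-deployment-service.yaml
```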

Great. Itā€™s time to switch to Express. Here as well, we follow the same process we did for Mongo.
Since we are going to consume the MongoDB service inside the express container, we will supply its name via a ConfigMap.
Instead of creating a ConfigMap, we could also hardcode the cluster IP. But then, if you ever wanted to change it, you would have to change it in all the places it is used, which is error-prone. Hence we create a ConfigMap: it acts as a single source of truth.
Letā€™s create it first.
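A sketch of such a ConfigMap (the key name is illustrative; the value must be the mongo service name from the previous step), saved for example as mongo-configmap.yaml:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  # Pods can reach MongoDB through this service name
  database_url: mongodb-service
```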

You can see that we just need to specify the name of the mongo service we created before. Nothing more; k8s takes care of the rest. Now that the ConfigMap is ready, letā€™s create the express deployment and service. The service we created for mongo is an internal service, but for express we have to create an external one, because it will be consumed by the angular application from the browser client, which lives outside the k8s cluster.
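Here is a sketch of what express-deployment-service.yaml might contain (image name and labels are illustrative):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-deployment
  labels:
    app: express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: express
  template:
    metadata:
      labels:
        app: express
    spec:
      containers:
        - name: express
          # The image we built and pushed to DockerHub earlier
          image: <your-dockerhub-username>/notes-api
          ports:
            - containerPort: 3000
          env:
            # The MongoDB service name comes from the ConfigMap
            - name: DB_URL
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: express-service
spec:
  # LoadBalancer exposes this service outside the cluster
  type: LoadBalancer
  selector:
    app: express
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
```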

In the image field of the container spec, you can see the name of the Express image we created earlier. K8s will not look for local images; rather, it checks the DockerHub registry. This means that before running the command below, the images we created locally must be pushed to DockerHub (you will need a DockerHub account for this). Make sure you are logged in to Docker Desktop with those credentials.

Here is the command to push the local image to DockerHub:
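With the illustrative image name used above:

```
docker push <your-dockerhub-username>/notes-api
```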

In the env section of the express-deployment-service.yaml file, you can see how we consume the ConfigMap and assign it to an environment variable.
Do refer to the db.js file in the repository to see how these values are consumed inside our express application.

Again, the order matters. Letā€™s first add the ConfigMap to the cluster, followed by the deployment and service.
Letā€™s execute these commands.
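With the illustrative file names used above:

```
kubectl apply -f mongo-configmap.yaml
kubectl apply -f express-deployment-service.yaml
```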

By now, you have two pods running, one for MongoDB and one for the Express server, and each has its own service.
If you canā€™t wait till the next step to taste the power of k8s, Iā€™ll help you do that. To access the Express API directly from your browser or Postman client, you need the address of the Express API service living inside the k8s cluster. Minikube helps you get that.
If you type ā€œkubectl get servicesā€, you should see output like the below.
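The exact names, IPs, and ages will differ on your machine; illustrative output looks roughly like this:

```
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
express-service   LoadBalancer   10.104.32.77    <pending>     3000:31234/TCP   2m
kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP          15m
mongodb-service   ClusterIP      10.101.154.90   <none>        27017/TCP        8m
```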

Now copy the name of the Express API service and run the following command.

ā€œminikube service api_service_name_hereā€

Minikube assigns an address for you to interact with the Express server. You can hit that address from your local machine, make HTTP requests, and perform CRUD operations, which will be reflected in the MongoDB instance inside the k8s cluster. Isnā€™t that great?

Now we are at the last stage: deploying our front end to the k8s cluster.
As discussed earlier, itā€™s better to push the image to DockerHub beforehand, so do push the angular application image to DockerHub.
Here are our deployment and service YAML for the angular application.
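A sketch under the same assumptions (illustrative image name and labels), saved for example as angular-deployment-service.yaml:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
  labels:
    app: angular
spec:
  replicas: 1
  selector:
    matchLabels:
      app: angular
  template:
    metadata:
      labels:
        app: angular
    spec:
      containers:
        - name: angular
          # The frontend image pushed to DockerHub earlier
          image: <your-dockerhub-username>/notes-ui
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: angular-service
spec:
  # Exposed externally so the browser can reach it
  type: LoadBalancer
  selector:
    app: angular
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```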

You can see how we refer to the angular image in the container spec.
And in the service section, the service type is set to LoadBalancer (it could be NodePort as well; refer to the k8s docs for the difference between them). This allows us to access the service from an external network.
The following command helps us to add this deployment and service to the k8s cluster.
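Assuming the YAML above was saved as angular-deployment-service.yaml:

```
kubectl apply -f angular-deployment-service.yaml
```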

And we are done. Our k8s cluster now has all three parts of our application (MongoDB, Express API, Angular app) running in it.
Let me share some commands that will help you troubleshoot and diagnose problems while working with the k8s cluster locally.
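Some handy kubectl/minikube commands for local diagnosis:

```
kubectl get pods                    # list pods and their status
kubectl get all                     # list every component in the cluster
kubectl describe pod <pod-name>     # detailed state and recent events of a pod
kubectl logs <pod-name>             # application logs from a pod
kubectl exec -it <pod-name> -- sh   # open a shell inside a pod
kubectl delete -f <file>.yaml       # remove the components created from a file
minikube dashboard                  # open the k8s web dashboard in a browser
```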

If you have not noticed already, there is one minor change you may have to make in your front-end application. Inside the src folder, there is an Nginx configuration file, where you have to mention the API path prefix that you use in your Angular services.
Say all the API URLs in the notes application are prefixed with /notes, like these:
* GET /notes/all
* PUT /notes/edit/123
* DELETE /notes/delete

You can find that prefix in the proxy section of the config file, shown below. So if you use a different naming convention in your app, donā€™t forget to update the prefix there.
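The actual file is in the repo; the relevant part is a location block that forwards /notes calls to the Express service, roughly like this (service name and port taken from the sketches above):

```
server {
    listen 80;

    # Serve the compiled Angular app
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }

    # Forward API calls prefixed with /notes to the Express service
    location /notes {
        proxy_pass http://express-service:3000;
    }
}
```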

As the last step, letā€™s use a minikube command to get the endpoint URL for the angular application.
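Using the illustrative service name from the sketch above:

```
minikube service angular-service
```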

Your browser will open automatically; otherwise, copy the endpoint URL, open the Angular application, and start playing with it!!!

Hey, CongratsšŸŽ‰šŸŽ‰šŸ„³šŸ„³ You have achieved a great milestone.

I agree, it was a pretty long article. But you nailed it.
Now itā€™s your turn to dockerize and deploy your own web applications into a k8s cluster. I recommend first running the notes application we discussed in this blog on your local machine. Once you are able to do that, you can try it with other applications.
As always post your queries in the comments section if you have any.

Happy coding!
