Deploying a Node app onto AWS Fargate

Fargate is a great AWS product for deploying your backend applications. It combines the flexibility and scalability of containers with the ease of use of a hosted solution, so you don’t face such a steep learning curve to host your own application on a tried and proven architecture.

Fargate is essentially a hosted solution for running Docker containers inside a cluster. You only need to define the basic characteristics of your cluster, i.e. how big you want it to be, and the auto-scaling rules if you want them. Afterwards, Fargate takes care of the rest. When you push a new Docker image to your repository and trigger a new deployment, Fargate spins up new instances with the new image, checks they are healthy, and then spins down the instances running the old version of your app. Pretty neat!

By the end of this project you will have learned:

  • How to Dockerize an application
  • How to set up AWS Fargate
  • How to deploy your app onto AWS Fargate with a load balancer

Pre-requisites:

  • Have a backend app already built
  • Docker
  • An AWS account
  • The AWS CLI tool installed on your computer

Dockerizing your app

Since Fargate runs a cluster of Docker containers containing your app, we first need to embed our app inside a Docker container, also known as “Dockerizing” our app. Doing this is extremely simple: we add a Dockerfile to our project directory which tells Docker how to build our app inside the container.

# Base image: Linux with Node 14 pre-installed
FROM node:14

# All app code lives in /app inside the container
WORKDIR /app

# Tells npm to skip devDependencies
ENV NODE_ENV=production

# Copy the compiled JavaScript and the package manifests
COPY dist .
COPY package*.json ./

# Install production dependencies, plus pm2 as the process manager
RUN npm install
RUN npm install pm2 -g

# The port the app listens on for inbound traffic
EXPOSE 3001

# Start the app under pm2, one worker per CPU
CMD ["pm2-runtime", "app.js", "-i", "0"]

Let’s run through what’s happening here:

First, we need a base image. That is, an operating system and a set of pre-installed libraries we can use to host our application. In this case I chose node:14, a standard Linux image with Node 14 already installed on it.

Next, we set /app as the working directory, where all of our app code will live.

Then we set the NODE_ENV environment variable to “production”. This makes npm install only the project dependencies, not the development dependencies.
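For instance, given a hypothetical package.json like the one below (the package names and versions are just for illustration), npm install under NODE_ENV=production would install express but skip typescript and nodemon:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1"
  },
  "devDependencies": {
    "typescript": "^4.4.3",
    "nodemon": "^2.0.12"
  }
}
```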

Then, we copy all of our compiled JavaScript code into the /app directory in the container.

After that, we copy the package.json and package-lock.json files as well.

Once those are in the container, we will proceed to npm install the entire project.

After that’s done, we install another library globally: pm2. pm2 is a process manager we will use to run our application; it makes sure the process running our application is always alive, restarting it automatically regardless of what failures happen in the system.

Then, we expose port 3001 on the container, through which it will receive traffic.

And finally, we will run the pm2-runtime command to start the application on the container.

Creating our container repository on AWS

In your AWS Console, search for Amazon Container Services and go to Amazon ECR. You’ll need to create a new private repository. Give it any name; mine will be called littl.link

Once you create it, you’ll be able to see the “push commands”. These are the commands you’ll paste into your terminal or PowerShell to build your image and push it to the repository. From there, we’ll set up AWS Fargate to run containers from the images in this repository, spinning up new instances within the cluster each time we deploy an updated version of our application.

Just copy, paste and run each command sequentially. Here’s what each one does:

aws ecr get-login-password --region eu-north-1 | docker login --username AWS --password-stdin <your-aws-account-id>.dkr.ecr.eu-north-1.amazonaws.com

You just created a private ECR repository, so naturally it requires authentication. The sweet thing is that we can use the AWS CLI to log in for us automatically.

docker build -t littl.link .

Builds your Docker image. The first time you execute this, it can take a couple of minutes to complete. Subsequent builds run a lot faster, as Docker caches the layers that haven’t changed and doesn’t re-build them.

docker tag littl.link:latest <your-aws-account-id>.dkr.ecr.eu-north-1.amazonaws.com/littl.link:latest

Simply tags your Docker image with the URL of your repository.

docker push <your-aws-account-id>.dkr.ecr.eu-north-1.amazonaws.com/littl.link:latest

Pushes your Docker image to the private repository you created earlier on AWS. Once it completes, refresh your repository inside ECR and you should see a new image available there.

Setting up Fargate

Now that we have our image available on AWS, we can move ahead with setting up Fargate, which will use the image we just pushed to run containers on a cluster.

Cluster

Inside Elastic Container Service, go to Clusters and create a new one. You simply need to assign it a name, and that’s it. Make sure you don’t select “Create a new VPC for this cluster”.

A cluster is simply a way to group the compute resources that will run our application, so there’s not much configuring to be done here. In ECS, these compute resources are called “Tasks”. Each task is essentially a compute resource running the container which runs our app. You can add as many tasks to your cluster as you want, but bear in mind that each task costs money. AWS Fargate is extremely cost effective, so small personal projects cost very little to run, but keep in mind not to go crazy with the resources you add to your cluster, especially when you’re just starting out, learning and playing around.

Task Definition

Next, we need to set up the task definition, or in other words, how we want our tasks to behave and what we want them to run inside of our cluster.

Pretty straightforward. Just give your task definition a name, choose ecsTaskExecutionRole for the Task execution role, and select the smallest Task memory and Task CPU available.

Next, we need to configure the container we want the task definition to launch. Under the Container definitions section, click on Add container.

Here we need to give the container a name and, more importantly, add the URI of the container repository we created before in the “Image” field. This is how the task definition knows which image to launch in the container, so we must point it at the image we’ll be pushing to our repository. To do that, go back to your container repository and copy the URI of the repository:

Going back to the container setup, after we add the URI, also make sure to add 3001 in the port mappings and select TCP. This needs to match the port we exposed in the Dockerfile, as well as the port we tell the server to listen on for inbound traffic in app.ts.

Leave the rest of the fields default / empty (yes, there’s a lot of them) and click add.

As for the rest of configurations for the task definition, also leave them blank or with their default values and click create.

Service

After we create a task definition, we need to link it to the cluster. The way to do this is to enter the cluster settings and add a new service.

When setting up the service, there are a couple of tricky steps you need to make sure you set up correctly for your container to work properly.

In the first section, make sure you:

  • Select Launch Type as Fargate
  • Select your task definition
  • Select your cluster
  • Add a name for the service
  • Initially set Number of tasks as 1 (you want the minimum number of compute resources for learning and testing Fargate)

Then come the settings that deal with the network configuration and load balancing, which is where it gets tricky, so make sure you follow these steps:

  • Select all subnets available to you
  • Edit the default security group and make sure you create a new one

This next step is extremely important to get right: when creating a new security group, select Custom TCP for inbound rules and set the port to 3001. This needs to map to the exposed port on your container. For source, select anywhere.

Application Load Balancer

Next, back on the service setup page, select Application Load Balancer as the load balancer type. You will see you don’t have any load balancer in the dropdown select, so you’ll need to click on the link provided next to the select to open a new tab and create a load balancer.

When creating a new load balancer, make sure you select Application Load Balancer.

Give it a name, and select Internet-facing and IPv4.

Under security groups, make sure to create a new one.

Add a name (the name cannot start with “sg”) and add an inbound rule for Custom TCP, and port 80. Source again needs to be Anywhere.

This time around we set port 80 because we’re setting the inbound rules for the load balancer rather than the container. The load balancer will be our internet-facing piece of infrastructure, so it needs to listen on the standard ports the web runs on, which is port 80 for HTTP traffic.

Then, for the load balancer security group setting, remove the default security group and add the one you just created.

Next we need to set up the target group. A target group is how you make the load balancer aware of the compute resources, or tasks, hosting your application, so it can forward traffic to them. It’s called a target group because your cluster can be made up of multiple tasks to which the load balancer can send traffic.

Create a new Target Group:

Make sure you:

  • Set its type as IP addresses
  • Give it a name
  • Set the protocol and port to HTTP 80
  • IP address type to IPv4
  • Protocol version HTTP1

Your load balancer will periodically evaluate the health of your tasks by sending a request to a specified endpoint. If the request returns an OK response, it knows the given task is healthy. Otherwise, it tells the service in the cluster to shut the task down and start a new one to replace it.

These settings are important in order for your cluster to work properly. The important part is to set the Health check path to an endpoint that returns a status code of 200. In my case I set it to the root path of the application, so all I need to do is to insert a forward slash.

After you create the load balancer, go back to the service setup, refresh the list of load balancers, and select the one you just created.

Back to setting up the service

Now there’s one last thing we need to set up.

We need to specify which container we want to load balance, or in other words, which container to be added to the target group for the load balancer we just created.

  • Production listener port: 80:HTTP – This is the port the load balancer will be listening on for inbound traffic
  • Target group: select the target group you just created
  • The rest should be automatically filled in

That’s it! Go ahead and click next and create your service.

Go back to your cluster, and look at the Tasks tab. After a while you should see that there is a working task inside the cluster serving your app. If you go back to the load balancer, you will see a DNS name set up. This is the URL you can use to access the load balancer and have it forward you to the app. Try it!

Copy and paste the URL into your browser.

Alright, the last thing we need to do is to add a neat domain name that forwards traffic to the load balancer, so we don’t have such a long and ugly URL for our app.
