Deploying networked containers to an AWS EC2 instance

Jaaq Jorissen · 6 min read · Nov 22, 2017

In my previous post I talked about how to set up local networked containers with Docker, using Redis as a caching layer for your Node.js server. In this post, we’ll go through all the steps needed to get your killer app up and running on AWS.

We’ll be using AWS’s Docker registry, the Elastic Container Registry, which is managed from the Elastic Container Service (ECS) console. But instead of using Task Definitions and Load Balancers to scale and automatically deploy your container, we will do everything manually.

As this is quite a lengthy process, the post will be divided in the following main topics:

  • Pushing your container to ECS (Docker Registry)
  • Creating your EC2 instance (VPS)
  • Setting up Docker on your EC2 instance
  • Configuring Elastic IP (Static IP for your VPS)

(This post assumes you have a Docker container running a Node.js image that needs a Redis caching layer; see the previous post on how to set this up.)

Pushing your container to ECS

Go to the AWS console, select ‘Services’ in the navigation bar and navigate to Elastic Container Service. You’ll need to create a repository where your container images reside, so hit “Create repository”, give it an appropriate name, e.g. tutorial, and hit “Next step”.

Before continuing, you will need to install the aws-cli, which is necessary to interact with AWS from the command line. After installing the aws-cli, make sure you have set up your security credentials by following the guide here, and then run

$ aws configure

Next up, you’ll need to authenticate with the Docker registry, and the first aws command will return the command needed to do just that. Copy and paste the output of that command, then follow the remaining steps to push your container to your repository.
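If you don’t have the console’s push instructions open, the flow looks roughly like this. This is a sketch: the exact commands, region and repository URI are listed on your repository’s “View push commands” page, and <URI> is a placeholder for that URI.

$ aws ecr get-login --region us-west-2   (prints a docker login command; run its output)
$ docker tag tutorial:latest <URI>:latest
$ docker push <URI>:latest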

If all is well, you just pushed your first container to ECS!

Creating your EC2 instance

Log into your AWS Console, go to the EC2 Console, and hit the blue “Launch Instance” button.

On the next page you will have to choose an Amazon Machine Image (AMI) that will run on your EC2 instance. We’ll go with the Amazon Linux AMI, as it’s pretty lightweight and comes with some handy pre-installed tools like the aws-cli.

On the next page you will be prompted to select an Instance Type. If you’re eligible for the free tier, select t2.micro, otherwise t2.nano will suffice and is very inexpensive. Click the grey “Next: Configure Instance Details” button to continue and keep hitting “Next” until you reach “Configure Security Group”.

Security Groups control the rules for network traffic to your instance. By default all ports are blocked, but since we want to SSH into our instance and reach it using our browser, we need to open up 2 ports, so add 2 rules:

SSH:  TCP / Port 22
HTTP: TCP / Port 80
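For reference, once the group exists these two rules can also be added from the aws-cli. Here sg-xxxxxxxx is a placeholder for your security group id, and 0.0.0.0/0 opens the port to the whole internet:

$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0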

Give it an appropriate name, e.g. default-access-group, hit the blue “Review and Launch” button and launch your instance on the next page. You’ll be prompted for a Key Pair; this is needed to SSH into your EC2 instance, so create a new pair and download the .pem file.

Make sure to keep this file safe, as it is only issued once!

Hit “Launch Instances” and you will be redirected to the EC2 Instances page. Sit back, relax, and watch as your first EC2 instance is being fired up.

Setting up your EC2 instance

Once your EC2 instance is running, select it and copy the Public DNS (IPv4) address at the bottom of the page, under the Description tab.

We will now SSH into the instance using the .pem file that was issued to you earlier. Navigate to the folder where you saved the file, make sure you have read permissions on it, and SSH as follows:

$ chmod 400 my-ec2-key-pair.pem
$ ssh -i my-ec2-key-pair.pem ec2-user@<INSERT PUBLIC DNS ADDRESS>

Congratulations! You have SSH’d in to your freshly created EC2 instance.

Next up, we will install Docker, pull in our image and set up a script that starts our containers whenever the instance boots.

We’ll install and start Docker, after which we’ll add the ec2-user to the docker group. Exit and SSH in again for the group change to take effect.

$ sudo yum update -y
$ sudo yum install -y docker
$ sudo service docker start
$ sudo usermod -a -G docker ec2-user
$ exit

After SSH-ing into your EC2 instance again, it’s time to pull in the images we need, fire them up and make sure they start every time the instance reboots. Let’s start by pulling in our images. If you don’t remember the repository URI for the image you pushed, go to

AWS Console > Elastic Container Services > Repositories

then click the repository you created earlier and copy the URI.

$ docker pull redis
$ docker pull <URI> (e.g. xxx.us-west-2.amazonaws.com/tutorial)
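If you don’t have the previous post at hand, the network and Redis steps from it looked roughly like this; a sketch assuming the same names, myNetwork and redis.networked, that the rest of this post uses:

$ docker network create myNetwork
$ docker run --name redis.networked -d --network myNetwork redis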

From here, repeat the exact same steps from my previous post. The only differences are that your image is now called by the <URI> you copied above instead of nodejs, and that port 80 of the instance is mapped to port 3001 inside your container, so the run command becomes:

$ docker run --name node.networked -d -p 80:3001 -e REDIS_URL=redis://redis.networked:6379 --network myNetwork <URI>

Both containers should now be up and running. You can double-check by running $ docker ps, and then navigating to the Public DNS address you copied earlier in your browser.
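Filtered down with a format string, docker ps should show both containers with the port mappings from the commands above; the output will look something like this:

$ docker ps --format '{{.Names}}  {{.Ports}}'
node.networked   0.0.0.0:80->3001/tcp
redis.networked  6379/tcp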

Now we have one more thing to take care of: restarting your containers when your EC2 instance reboots. There is a powerful, not very well documented tool called cloud-init that gives you control over a few important steps in the lifecycle of an instance, including

  • scripts-per-once
  • scripts-per-boot
  • scripts-per-instance

This is different from User-Data which only runs on the initial creation of an instance. For more information, refer to this post.

In this case, we want to run our commands every boot. Let’s navigate to the scripts-per-boot folder and create our bash script.

$ cd /var/lib/cloud/scripts/per-boot
$ sudo vim start-docker.sh

If you are unfamiliar with VIM: press i to enter insert mode, paste the following lines, press ESC, type :wq! and hit ENTER.

#!/bin/bash
docker start redis.networked
docker start node.networked

As you might have guessed, every script in this folder is executed on boot of the EC2 instance, provided the file is executable. Let’s make it so:

$ sudo chmod +x start-docker.sh
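Before rebooting, you can sanity-check the script by running it once by hand; docker start prints each container’s name as it starts:

$ sudo /var/lib/cloud/scripts/per-boot/start-docker.sh
redis.networked
node.networked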

Now you can navigate to the EC2 Dashboard, right-click your instance and select Instance State > Reboot. After it has rebooted, navigate to the Public DNS address and see your application up and running!

Configuring Elastic IP

As you stop and start instances, their public IP can change. Our final step is assigning a static IP to our instance. AWS makes this very straightforward: navigate to your AWS console and the EC2 Dashboard.

In the menu on the left, you’ll find Elastic IPs under Network & Security.

Click the blue “Allocate new address”, confirm the allocation and then find the newly created IP address in the list. Right-click and select “Associate address”. On the next page leave the Resource type on Instance, select your instance in the dropdown list and select the Private IP that’s available to you. Finally, click the blue “Associate” button and you’re all set.
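If you prefer the command line, the same two steps look roughly like this; the instance id and allocation id are placeholders, and allocate-address prints the AllocationId you pass to associate-address:

$ aws ec2 allocate-address
$ aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx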

Please note that you can now also SSH directly to this IP address instead of using the Public IP found under your EC2 Instances.

Conclusion

That’s it! You’ve created an EC2 instance, pushed your Docker image to the ECS repository, configured Docker on your VPS, and you can access it over a public, static IP address.

This is the most manual way to deploy a basic application, and it offers no load balancing or automatic scaling of new instances that load a set of pre-configured containers. Still, I think it’s important to get your hands dirty and set things up yourself first.

In my next post I will explore the topics mentioned above, as well as how to use them in conjunction with Docker Compose.

Thanks for reading!
