Developers working in cloud computing often struggle with configuration issues: the service works in the lab, then won't run as expected in production. The Amazon EC2 Container Service (ECS) offers a hosted, managed and fully programmable API for running Docker containers on EC2. Developers choose the size and number of servers in the Docker fleet, and Amazon places containers onto those instances.
Amazon Web Services (AWS) Elastic Compute Cloud (EC2) enables users to spin up VMs either programmatically or automatically, depending on the configuration. EC2 users define an instance type, region and availability zone (AZ) in which the VM will reside. EC2 instances can scale up, but the power with EC2 comes when you add the ability to scale out -- dividing services across multiple servers in multiple AZs and regions.
EC2's pricing, however, makes it more economical to run fewer, larger instances. Businesses that want to isolate services completely, so that one service can't affect another, turn to Docker.
Docker is designed for microservices and cloud portability. Microservices help prevent issues that develop with one service from affecting another service. For example, if your PDF archiver process goes rogue and exhausts your CPU, you don't want it to neutralize your search indexer. The classic fix would be to spin up each service on its own EC2 instance; however, two m3.large instances are more expensive than one m3.xlarge instance. Amazon rewards you for using larger instances versus more instances.
But Docker isn't specific to AWS or the cloud; IT teams can run Docker on anything from a public cloud to bare metal. The software sits on top of the VM as an abstraction layer, without the overhead of another full virtualization layer. This means companies that want to experiment with other public cloud providers won't have to rewrite code. If your primary functionality is baked into Amazon Machine Images (AMIs) that you build over time, you'll not only have to maintain scripts to update those AMIs for new security patches, you'll also have to rebuild the whole system for a new cloud provider such as Google Compute Engine.
Here are some steps to follow when setting up and running Amazon EC2 Container Service.
Getting started with Amazon ECS
EC2 Container Service has a console that comprises everything needed to get started with Docker and AWS. The first step is to create a cluster and add instances to the Docker pool. These instances can be added to an Auto Scaling Group (ASG), so healthy instances are always available, and so there's enough extra power behind the Docker system to keep your containers running.
To get started, visit the Amazon ECS Console and create a cluster, which is a top-level grouping for Docker instances. You can use separate clusters to isolate test-and-development environments within the same AWS account.
Once you create a cluster, launch instances in it. I recommend using an Auto Scaling Group, even if you want to have a fixed number of instances running in your cluster. Setting up an Auto Scaling Group provides a safeguard; if a server dies it will automatically be replaced.
In the console, create a new launch configuration, choosing the latest ECS-optimized AMI from the AWS Marketplace.
Click Continue and choose an instance type to launch. I recommend using a C4 or M3 instance type, large or higher. Don't choose a T2 or T1 instance type; these are designed for burst capacity and may have trouble keeping up with long-running processes.
Fill out the rest of the details and follow the prompts to set up the ASG. You may want to configure your ASG to run in a virtual private cloud (VPC); either way, make sure it is set up to run on at least two different subnets in two different availability zones.
Once your ASG spins up EC2 instances for the cluster, you'll see them appear in your ECS console (Figure 1).
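Under the hood, instances launched from the ECS-optimized AMI join a cluster based on the ECS agent's configuration file, which is typically written by the launch configuration's user data. A minimal sketch ("my-cluster" is a placeholder name):

```
# /etc/ecs/ecs.config on each container instance
# Without this setting, the agent registers with the "default" cluster.
ECS_CLUSTER=my-cluster
```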
Creating task definitions
A task is one or more Docker containers running together for one service or microservice. Tasks can be used for simple workloads such as a Redis server or complex workloads such as an entire WordPress stack with linked containers for a database, memcached and a Web server. It can be useful to link containers to help ensure they work cohesively.
When configuring a container in your task definition, define a container name, an image, CPU units, memory, optional ports to map, optional environment variables, an optional override command and links. You must specify how much memory (in MB) to reserve for each container, as well as how many CPU units (each CPU core provides 1,024 CPU units). These reservations keep servers from being overloaded with too many containers and prevent one container from hogging resources and slowing down the others. The cluster homepage shows available CPU units and RAM; if either is running low, launch additional servers in your cluster before you launch more containers.
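Behind the console form, a task definition is a JSON document. A minimal sketch for a single-container task; the family, image, ports and environment values below are placeholders, not a definition from the article:

```json
{
  "family": "pdf-archiver",
  "containerDefinitions": [
    {
      "name": "pdf-archiver",
      "image": "myrepo/pdf-archiver:latest",
      "cpu": 512,
      "memory": 1024,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 80 }
      ],
      "environment": [
        { "name": "LOG_LEVEL", "value": "info" }
      ]
    }
  ]
}
```

Linked containers would appear as additional entries in `containerDefinitions`, connected by a `links` list.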
Creating a service
Once you've created a task definition, it's time to create a service, which is the ECS version of a simplified Auto Scaling Group. A service ensures a certain number of task instances are running, and lets developers tie them into an Elastic Load Balancer. Assigning an IAM role to a service helps compartmentalize: individual Docker containers get their own identity and access management roles, which gives a single container nearly the same power as an individual EC2 instance.
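The same service can be described as input to the ECS CreateService API. A hedged sketch of that request shape; the cluster, service, task definition and load balancer names are all placeholders:

```json
{
  "cluster": "my-cluster",
  "serviceName": "pdf-archiver-service",
  "taskDefinition": "pdf-archiver:1",
  "desiredCount": 2,
  "role": "ecsServiceRole",
  "loadBalancers": [
    {
      "loadBalancerName": "pdf-archiver-elb",
      "containerName": "pdf-archiver",
      "containerPort": 8080
    }
  ]
}
```

The `role` is what lets ECS register and deregister tasks with the load balancer on your behalf.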
You may want to start with a private Docker Registry, which can create a private repository of Docker images to pull into the system. A private Docker repository will appear as a configurable Docker container with support for storing the repository in Simple Storage Service.
Figure 2 shows an example task definition that I use to host my own Docker registry.
Figure 3 shows the related service.
My Docker registry's internal load balancer proxies Port 80 to Port 5000. Because it's only reachable from within my VPC, I don't need any custom authentication or SSL support. I can set up images to pull from that Docker repository, and I have a dedicated Docker instance that I use to build and push images to it.
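For readers without access to the figures, a registry task definition along these general lines would work; this is a sketch assuming the stock registry:2 image and its S3 storage driver, with placeholder bucket and region values, not the author's exact definition:

```json
{
  "family": "docker-registry",
  "containerDefinitions": [
    {
      "name": "registry",
      "image": "registry:2",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 5000, "hostPort": 5000 }
      ],
      "environment": [
        { "name": "REGISTRY_STORAGE", "value": "s3" },
        { "name": "REGISTRY_STORAGE_S3_REGION", "value": "us-east-1" },
        { "name": "REGISTRY_STORAGE_S3_BUCKET", "value": "my-registry-bucket" }
      ]
    }
  ]
}
```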
Running blue/green updates
Similar to Elastic Beanstalk, Amazon EC2 Container Service handles updates by spinning up extra tasks, switching over load balancers and terminating old containers after the new containers are ready. This typically eliminates downtime, so IT teams can push out updates during normal business hours without affecting customers.
To do a blue/green deployment, create a new revision of your task definition -- probably pointing to a new version of your image -- and then update the service to use that revision. ECS saves all task revisions, so you can easily revert to an older one if something goes wrong with a deployment.
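From the AWS CLI, the update is two calls; a hedged sketch, where the cluster, service, family and file names are placeholders:

```shell
# Register a new revision of the task definition (returns pdf-archiver:2)
aws ecs register-task-definition --cli-input-json file://pdf-archiver.json

# Point the service at the new revision; ECS starts new tasks,
# swaps the load balancer over and drains the old tasks
aws ecs update-service --cluster my-cluster \
    --service pdf-archiver-service \
    --task-definition pdf-archiver:2
```

Rolling back is the same `update-service` call with the previous revision number.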
ECS vs. Beanstalk for Docker containers
Running Docker on Amazon instances is not a new concept. For a Web-based application with regular deployments, Elastic Beanstalk with Docker is the best option. But Beanstalk isn't necessarily ideal for running the several microservices behind an application. If you have a Web application that drives other back-end systems, such as video transcription extraction, automated image facial detection or game analytics, you may need to use both ECS and Elastic Beanstalk with Docker.
It can be best to have microservices on ECS read from SQS queues and then scale the services to run multiple copies across your fleet. For example, I have a fleet of four m3.xlarge instances running 10 copies of a PDF extraction tool, two copies of a full-text indexing system, four copies of a delivery system and a few miscellaneous tasks, such as updating our Geckoboard dashboards and synchronizing data to RDS for analysis in Tableau.
To add more PDF extraction processes, I can go into that service and increase the number of running tasks. If I need more space, I can either bump up the server size or increase the size of my pool. If I did this on Beanstalk, I would need at least 10 different instances to run 10 copies of my PDF extraction tool. With ECS I can take advantage of larger instances that run at a lower cost compared to running several small instances.
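The packing advantage is easy to quantify. A rough sketch, assuming 1,024 CPU units per vCPU and the published m3.xlarge specs (4 vCPUs, 15 GB RAM); the per-task reservations here are illustrative numbers, not the author's real figures:

```python
# Rough bin-packing check: do the reserved CPU units and memory for
# all running tasks fit on a fleet of m3.xlarge instances?
CPU_UNITS_PER_VCPU = 1024
INSTANCE_CPU = 4 * CPU_UNITS_PER_VCPU   # m3.xlarge: 4 vCPUs
INSTANCE_MEM_MB = 15 * 1024             # m3.xlarge: 15 GB RAM

# (copies, cpu_units_each, memory_mb_each) -- made-up reservations
tasks = [
    (10, 256, 512),    # PDF extraction
    (2, 512, 1024),    # full-text indexing
    (4, 256, 256),     # delivery system
]

need_cpu = sum(n * cpu for n, cpu, _ in tasks)
need_mem = sum(n * mem for n, _, mem in tasks)

fleet = 4
fits = need_cpu <= fleet * INSTANCE_CPU and need_mem <= fleet * INSTANCE_MEM_MB
print(need_cpu, fleet * INSTANCE_CPU)   # reserved vs. available CPU units
print(need_mem, fleet * INSTANCE_MEM_MB)
print(fits)
```

With these numbers the whole workload reserves 4,608 of 16,384 available CPU units, leaving room to dial up task counts without adding instances.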
ECS makes the most sense if you're using Docker on AWS. With Amazon doing the majority of the work, you can easily manage instances while using familiar elements such as Elastic Load Balancing and Auto Scaling, or simply run a fixed number of instances for your containers.