
Cost to run EC2 instances vs. Docker containers

Fans of Docker claim it has many advantages for application deployment. But how does this translate to AWS? We do a trial run to compare costs.

As Docker excitement builds, advocates of the technology tout the benefits of running Docker containers on AWS.

But to truly understand how Docker works in relation to Amazon EC2 instances, including cost comparisons and capabilities, it's helpful to do a trial run.

This experiment looks at the price of Docker containers under several configurations on a system with four applications, each of which requires 6 GB of memory. All four applications run, more or less, at the same time, though each app has usage spikes independent of one another.

Crunch the container cost numbers

The table below shows how you could run those four applications several different ways. The first column describes a prototypical configuration with multiple EC2 instances; the second column describes a single large instance using Docker. And the third column describes that same large instance running without Docker.

Each application could run on its own m3.large machine in a single-tenancy configuration. Alternatively, all four applications could run on one m3.2xlarge, with that machine hosting four Docker containers. Or all four applications could share resources and run natively on the same m3.2xlarge at the OS level, with no containers.
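As a back-of-the-envelope check on that comparison, here is a short Python sketch. The hourly rates are assumptions based on historical US on-demand pricing (a reader comment below cites $0.560 per hour for four m3.large instances, which implies $0.140 each); check current AWS pricing before relying on these figures.

```python
# Assumed historical US on-demand rates -- not current AWS pricing.
M3_LARGE_HOURLY = 0.140    # 2 vCPUs, 7.5 GiB RAM
M3_2XLARGE_HOURLY = 0.560  # 8 vCPUs, 30 GiB RAM

APPS = 4

# Option 1: single tenancy, one m3.large per application.
single_tenancy = APPS * M3_LARGE_HOURLY

# Options 2 and 3: all four apps consolidated on one m3.2xlarge,
# whether as Docker containers or running natively on the OS.
consolidated = M3_2XLARGE_HOURLY

print(f"4 x m3.large:   ${single_tenancy:.3f}/hr")
print(f"1 x m3.2xlarge: ${consolidated:.3f}/hr")
```

Under these assumed rates the two approaches cost the same per hour, which is why the comparison below turns on resource utilization rather than raw price.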

[Table: Docker container costs vs. EC2 instances — comparing instance and container costs]

Comparing Docker containers with a non-Docker configuration on the larger machine, the two are equivalent except in burst mode. The larger machine has eight CPUs, which, in the Docker configuration, are rigidly allocated among the containers; in the native configuration, any app can use all eight CPUs as needed. It's unlikely that all four applications would hit peak CPU demand at exactly the same time, so more CPUs would be available to each application in the non-Docker configuration.

A similar scenario exists with disk storage. The larger machine has a total of 160 GiB of local disk, which, in the Docker configuration, must be rigidly partitioned among the containers. In a native configuration, each application can use a dynamic amount of disk. But one application might use more than its share of disk, leaving less disk space available for the other apps.
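The rigid-versus-dynamic allocation argument above reduces to simple arithmetic, sketched below. The instance specs (eight CPUs, 160 GiB of local disk) come from the article; the even four-way split is the assumed Docker partitioning in this experiment, not something Docker itself requires.

```python
# Per-app resource math for the consolidated m3.2xlarge.
VCPUS = 8       # vCPUs on the m3.2xlarge
DISK_GIB = 160  # total local disk, GiB
APPS = 4

# Rigid Docker-style partition: each container gets a fixed slice,
# even while the other containers sit idle.
per_app_cpus = VCPUS / APPS    # 2 vCPUs per container
per_app_disk = DISK_GIB / APPS # 40 GiB per container

# Native (shared) configuration: any one app can burst into unused
# capacity, at the risk of crowding out its neighbors.
burst_cpus_available = VCPUS  # up to all 8 vCPUs if the others idle
shared_disk = DISK_GIB        # dynamic, first come, first served

print(per_app_cpus, per_app_disk)  # 2.0 40.0
```

The trade-off in the text follows directly: the rigid split caps each app at a quarter of the machine during its usage spikes, while the shared configuration lets a spiking app borrow idle capacity but offers no protection against a neighbor doing the same.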

Docker hype vs. reality

So, if Docker containers aren't necessarily cheaper or more powerful than direct use of EC2 instances, then why do they get so much attention?

One area where Docker stands out is in deploying demonstration systems. When testing or doing a prototype deployment of a complex multi-application system, containers can be useful because they let you launch the whole thing from a single Docker image. In a prototype environment, factors such as long-term operating costs or efficient use of resources matter less than the need to deploy the system quickly.

On the other hand, AWS has specific technologies designed to ease the burden of deploying complex systems. Elastic Beanstalk, for example, creates a simple configuration to deploy a standard, complete web application.

Whatever your position on Docker, enterprise IT should watch this space carefully. Large companies, like Google and Microsoft, recently endorsed Docker-like technology. Others are sure to follow.


Join the conversation


How do you reduce costs for containers?
A minor correction, in the column "Single non-Docker instance" the "Disk available per application" should be "160 GB shared".

Your comment about "But one application might use more than its share of disk, leaving less disk space available for the other apps.": If any one app needs the disk space then, assuming it is functioning correctly, it needs the disk space. If you run out of disk either reduce your disk requirements or increase the allocated disk space.

Running multiple Docker containers seems to increase resource fragmentation. If what you want is to get an environment up and running without the need to consider the optimal use of hardware resources, then it seems like a viable option.
The total cost for the 4 x m3.large instances per hour should be $0.560 (not $0.2240), if considering on-demand pricing in US regions.
Good tests. Would love to see a couple others:

1 - How does this compare on a cloud that has a more dedicated Docker offering (e.g., DigitalOcean)?

2 - Can you create a test that measures the speed of using Docker vs. AWS instances (or any VMs on a Cloud)? Speed and agility of dealing with modern apps is more of the focus of why people are excited about Docker.
This seems to demonstrate that the cost of Docker is more nuanced, especially in CPU usage, where idle CPU capacity would not go to waste even if demand exceeded a container's limit.
It appears the comparison is between 4 smaller instances (m3.large) vs. 1 larger instance (m3.2xlarge). Docker can be clustered using Swarm, and it could well run on 4 smaller m3.large instances at lower CPU utilization (as it has the capability of running tens if not hundreds of containers without per-VM OS overhead).