Cost of running Docker containers vs. AWS instances

Fans of Docker claim it has many advantages for application deployment. But how does this translate to AWS? We do a trial run to compare costs.

As Docker excitement builds, advocates of the technology tout the benefits of running Docker containers on AWS....

But to truly understand how Docker works in relation to AWS, including cost comparisons and capabilities, it's helpful to do a trial run.

This experiment looks at the price of Docker containers under several configurations on a system with four applications, each of which requires 6 GB of memory. All four applications run more or less at the same time, though each app's usage spikes independently of the others.

Table 1 shows several ways you could run those four applications. The first column describes a prototypical multiple-instance configuration; the second column describes a single large instance using Docker; and the third column describes that same large instance running without Docker.

 

                                | Multiple AWS instances | Single Docker instance | Single non-Docker instance
Instance type                   | M3.large               | M3.2xlarge             | M3.2xlarge
Number of instances             | 4                      | 1                      | 1
Total memory                    | 30 GB                  | 30 GB                  | 30 GB
vCPU count                      | 8                      | 8                      | 8
Total cost per hour             | $0.2240                | $0.560                 | $0.560
Burst CPU count per application | 2                      | 2                      | 8
Disk available per application  | 80 GB                  | 40 GB fixed            | 40 GB shared

Table 1

Each application could run on an M3.large machine in a single-tenancy configuration. Alternatively, all four applications could run on an M3.2xlarge, with that machine hosting four Docker containers. Or the four applications could share resources and run natively on the same M3.2xlarge at the OS level.
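As a point of reference, the two footprints could be launched with the AWS command-line interface along these lines; the AMI ID and key name below are placeholders, not values from this experiment.

# Multiple-instance configuration: four m3.large machines (placeholder AMI and key name)
aws ec2 run-instances --image-id ami-12345678 --instance-type m3.large --count 4 --key-name my-key

# Single-instance configurations: one m3.2xlarge, run with or without Docker on top
aws ec2 run-instances --image-id ami-12345678 --instance-type m3.2xlarge --count 1 --key-name my-key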

The four-instance configuration costs less than half as much as the other two options and provides twice the local disk storage per application. Comparing the Docker and non-Docker configurations on the larger machine, the two are equivalent except for burst capacity.

The larger machine has eight vCPUs, which in the Docker configuration are rigidly allocated, two per container. In the non-Docker configuration, any app can use all eight CPUs as needed. It's unlikely that all four applications would have the exact same CPU requirements at the exact same time, so more CPUs would be available to each application in the non-Docker configuration.
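To make that rigid allocation concrete, the Docker configuration might pin each container to two vCPUs and 6 GB along these lines; the image and container names are hypothetical, and the exact flag names vary by Docker version (older releases used --cpuset rather than --cpuset-cpus).

# Hypothetical image names; each container gets a fixed 6 GB and two pinned vCPUs
docker run -d --name app1 -m 6g --cpuset-cpus="0,1" app1-image
docker run -d --name app2 -m 6g --cpuset-cpus="2,3" app2-image
docker run -d --name app3 -m 6g --cpuset-cpus="4,5" app3-image
docker run -d --name app4 -m 6g --cpuset-cpus="6,7" app4-image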

A similar scenario exists when looking at disk storage. The larger machine has a total of 160 GB of local disk, which in the Docker configuration must be rigidly partitioned, 40 GB per container. In a native configuration, each application can use a dynamic amount of disk, but one application might use more than its share, leaving less disk space available for the other apps.
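One rough way to approximate that fixed 40 GB-per-container split is to carve the local SSD into loopback-mounted files and hand each one to a container as a volume; the paths and image name here are examples only.

# Reserve a fixed 40 GB slice of the instance's local disk for one app (example paths)
fallocate -l 40G /mnt/app1.img
mkfs.ext4 -F /mnt/app1.img
mkdir -p /mnt/volumes/app1
mount -o loop /mnt/app1.img /mnt/volumes/app1

# Hand that slice to the container as its data volume
docker run -d --name app1 -v /mnt/volumes/app1:/data app1-image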

So if Docker containers are more expensive and less powerful than direct use of AWS instances, why are they getting so much attention?

One area where Docker stands out is in deploying demonstration systems. When testing or doing a prototype deployment of a complex multi-application system, Docker can be useful because it enables you to launch a single image. In a prototype environment, configuration factors, such as long-term operating costs or effective use of resources, are less important than the need to deploy the system quickly.
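In that scenario, the entire demo workflow can be as small as building an image and starting it; the image name and port below are hypothetical.

# Build the demo image from the project's Dockerfile and launch the whole stack from it
docker build -t demo-stack .
docker run -d -p 8080:8080 demo-stack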

On the other hand, AWS has specific technologies designed to ease the burden of deploying complex systems. Elastic Beanstalk, for example, provides a simple, preconfigured way to deploy a standard, complete Web application.
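For comparison, a typical Elastic Beanstalk workflow with the awsebcli tool looks roughly like the following; the application name, environment name and platform are placeholders, and the details depend on your stack.

# Placeholder names; assumes a deployable application in the current directory
pip install awsebcli
eb init demo-app --platform tomcat --region us-east-1
eb create demo-env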

Whatever your position on Docker, enterprise IT should watch this space carefully. Large companies like Google and Microsoft recently endorsed Docker-like technology. Others are sure to follow.  

About the author:
Brian Tarbox has been doing mission-critical programming since he created a timing-and-scoring program for the Head of the Connecticut Regatta back in 1981. Though primarily an Amazon Java programmer, Brian is a firm believer that engineers should be polylingual and use the best language for the problem. Brian holds patents in the fields of UX and VideoOnDemand with several more in process. His Log4JFugue open source project won the 2010 Duke's Choice award for the most innovative use of Java; he also won a JavaOne best speaker Rock Star award as well as a Most Innovative Use of Jira award from Atlassian in 2010. Brian has published several dozen technical papers and is a regular speaker at local Meetups.

Next Steps

Docker security considerations

Is application containerization really the future?

This was last published in October 2014


Join the conversation

5 comments


A minor correction, in the column "Single non-Docker instance" the "Disk available per application" should be "160 GB shared".

Your comment about "But one application might use more than its share of disk, leaving less disk space available for the other apps.": If any one app needs the disk space then, assuming it is functioning correctly, it needs the disk space. If you run out of disk either reduce your disk requirements or increase the allocated disk space.

Running multiple docker containers seems to increase resource fragmentation. If what you want is to get an environment up and running without the need to consider the optimal use of hardware resources then it seems like a viable option.
The total cost for the 4 X m3.large instances per hours should be $0.560 (not $0.2240), if considering the on-demand in US regions.
Good tests. Would love to see a couple others:

1 - How does this compare on a Cloud that has a more dedicated Docker offering (eg. Digital Ocean)?

2 - Can you create a test that measures the speed of using Docker vs. AWS instances (or any VMs on a Cloud)? Speed and agility of dealing with modern apps is more of the focus of why people are excited about Docker.
https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/ seems to demonstrate that the cost of docker is more nuanced especially in the CPU usage where idle CPU capacity would not go wasted even if it exceeded the container's limit.
It appears the comparison is between 4 smaller (M3.large) instances vs. 1 larger (M3.2xlarge) instance. Docker can be clustered using Swarm, and it could well run on 4 smaller M3.large instances at lower CPU utilization (as it has the capability of running tens if not hundreds of instances without the OS overhead).