
Demystifying the AWS and Docker containers hype

Despite IT pros' mixed feelings on Docker, there are some advantages to running the containers on AWS EC2 instances. We cut through the hype.

For many, AWS is synonymous with virtualization -- spin up an EC2 instance and you get a virtual machine running on a hypervisor inside Amazon's machine farm. But that's not the only way to do it; alternatives, such as Docker, are becoming increasingly popular. While some organizations run Docker on their own hardware, others advocate for a hybrid approach of running Docker containers on EC2 instances. We'll explore what this means.

Docker does not virtualize an entire machine in the way Amazon Web Services (AWS) does; instead, it virtualizes an environment within a physical or virtual machine (VM). For example, you can run several Docker containers, each launched from its own image, on a single machine. Docker also allows you to configure how much CPU, memory and disk each container receives. This enables enterprises to run multiple applications on a single machine, isolated from one another.
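To make that concrete, here is a minimal sketch using the Docker SDK for Python (which postdates this article and is shown only for illustration): two isolated containers start on one machine, each with its own memory cap. The image names and limits are placeholder assumptions, not recommendations.

import docker

# Connect to the local Docker daemon; assumes Docker is installed and running.
client = docker.from_env()

# Two containers, each launched from its own image, sharing one machine
# but isolated from each other. Image names and limits are placeholders.
web = client.containers.run(
    "nginx:latest",
    detach=True,
    name="web",
    mem_limit="2g",        # cap this container at 2 GB of memory
)
worker = client.containers.run(
    "python:3",
    ["python", "--version"],
    detach=True,
    name="worker",
    mem_limit="1g",        # cap this container at 1 GB of memory
)

print([c.name for c in client.containers.list(all=True)])

Under the hood, Docker enforces caps like these through Linux cgroups, which it configures for each container.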

Even though this may sound beneficial, feelings on Docker are mixed. Some admins believe a container that allows you to run multiple programs in isolation from one another on a single machine is a revolutionary concept. Others believe this technology was invented decades ago, but back then it was called an operating system (OS).

Are Docker containers just an OS by another name? 

A modern multitasking OS allows multiple applications to run in isolation from one another on the same physical or virtual machine. In fact, some might argue that running another layer of software on top of the OS to isolate applications from one another duplicates that effort. In addition, Docker doesn't allow multiple applications to share resources in the same way that a modern OS does.

For example, an OS running on hardware with 16 GB of memory could run multiple applications, each requiring 12 GB of memory; the OS would shuffle that memory among the applications as each needed it. Similarly, on a native OS, each application could use bursts of 90% of the CPU, and as long as the apps didn't all demand the CPU at the same time, the OS would let each one use all of the available CPU.

By contrast, in a Docker configuration, IT teams decide what percentage of the base machine's memory and CPU goes to each application. For example, on a 16 GB machine, each of two applications would get no more than 8 GB of memory, minus the amount needed for the OS and Docker, and no more than 50% of the CPU. If one of the two applications is idle, the other cannot use the first application's share of the CPU. In this sense, some would call Docker a poor man's limited-process switching system.
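As a rough sketch of that split, again using the Docker SDK for Python (illustrative only), the hypothetical 16 GB machine might be carved up like this; the image names and exact numbers are assumptions:

import docker

client = docker.from_env()

# Split a hypothetical 16 GB machine between two applications: each
# container gets a hard memory cap just under 8 GB (leaving headroom for
# the OS and Docker) and a hard CPU quota of roughly half of one CPU.
for name, image in [("app-a", "example/app-a"), ("app-b", "example/app-b")]:
    client.containers.run(
        image,
        detach=True,
        name=name,
        mem_limit="7g",        # hard memory cap
        cpu_period=100000,     # scheduler period in microseconds
        cpu_quota=50000,       # 50% of one CPU core; with a hard quota,
                               # idle time is not handed to the other container
    )

A deployment could instead use relative cpu_shares, which only matter when containers compete for CPU; the hard quota above matches the fixed-percentage scenario described here.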

Behind the Docker excitement  

One advantage of using Docker, rather than simply letting multiple applications share a machine at the OS-level, is that the technology can present a façade of the disk to each application. For example, you could have multiple apps, each of which believes it has exclusive access to the root directory on the machine.
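A small sketch of that façade, again with the Docker SDK for Python (the base image and paths are arbitrary choices): two containers each write a file at the top of what they see as their own root filesystem, and neither sees the other's file.

import docker

client = docker.from_env()

# Each container gets its own root filesystem, so both can write to the
# same path without colliding or even seeing each other's files.
for name in ("first", "second"):
    output = client.containers.run(
        "ubuntu:14.04",
        ["bash", "-c",
         "echo hello from {0} > /data.txt && cat /data.txt".format(name)],
        remove=True,       # discard the container (and its filesystem) afterwards
    )
    print(name, output)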

The process of configuring Docker containers is another plus. Once a Docker container is configured, you can deploy it in a variety of places with a large degree of confidence that it will run the same way in each environment, because the application always sees the same environment. This is similar to using a baked Amazon Machine Image (AMI) in AWS: IT teams can deploy the system on different-sized hardware that looks identical to the underlying application.
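The AMI analogy can be sketched with boto3, the AWS SDK for Python (which also postdates this article): launch the same baked AMI on two different instance sizes and the application sees an identical environment on both. The AMI ID and instance types below are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Launch the same baked AMI on two different instance sizes.
# The AMI ID is a placeholder; substitute your own baked image.
for instance_type in ("m3.medium", "m3.xlarge"):
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )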

A third approach, and perhaps the more typical AWS deployment, would be to put each application on its own smaller instance. While this completely isolates applications from one another, many argue it is wasteful because you are paying for multiple copies of the OS. On the other hand, some argue that because Docker doesn't allow apps to share memory, admins must over-configure (i.e., waste) memory in a Docker configuration.
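For comparison, here is a minimal boto3 sketch of the one-application-per-instance approach (with or without Docker on each instance); the AMI IDs and instance type are placeholders:

import boto3

ec2 = boto3.client("ec2")

# One small instance per application. Each instance boots its own copy of
# the OS, which is where the extra cost in this approach comes from.
apps = {"web": "ami-xxxxxxxx", "worker": "ami-yyyyyyyy"}   # placeholder AMIs
for app_name, ami in apps.items():
    print("launching instance for", app_name)
    ec2.run_instances(
        ImageId=ami,
        InstanceType="t2.small",   # a smaller instance type, chosen arbitrarily
        MinCount=1,
        MaxCount=1,
    )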

We’ll look at these options in more detail, breaking down the costs in the next tip.  In the meantime, you now hopefully know enough about Docker to start thinking about how it might, or might not, make sense in your own AWS configuration.

About the author:
Brian Tarbox has been doing mission-critical programming since he created a timing-and-scoring program for the Head of the Connecticut Regatta back in 1981. Though primarily an Amazon Java programmer, Brian is a firm believer that engineers should be polylingual and use the best language for the problem. Brian holds patents in the fields of UX and VideoOnDemand with several more in process. His Log4JFugue open source project won the 2010 Duke's Choice award for the most innovative use of Java; he also won a JavaOne best speaker Rock Star award as well as a Most Innovative Use of Jira award from Atlassian in 2010. Brian has published several dozen technical papers and is a regular speaker at local Meetups.

Next Steps

Is Docker jeopardizing the future of virtualization?

Docker app containers could improve cloud portability, app development

This was last published in October 2014



Join the conversation

2 comments


A couple of other things:

- Within AWS, the Docker service isn't running "bare metal" containers; rather, it's an AMI that has Docker installed as an application, so it's Docker running in a VM.

- AWS runs the container resource management framework behind the scenes, and it's believed to be using Apache Mesos (not Google Kubernetes). This will probably become a pluggable service in the future.

It makes sense to me; I'd use Docker in AWS to speed up the development and deployment of builds for, say, testing a website. I guess in theory you could use it for production too, say to roll out deploys.