Maximize Docker portability on AWS

One benefit of Docker containers is their portability, but several common configuration mistakes can limit automation and the ability to move them between cloud environments.

When AWS first released the Amazon EC2 Container Service, some IT professionals immediately noticed the main advantage that Docker could bring to cloud computing: portability. With applications running inside containers, developers could build an application once and deploy it on any vendor that supports Docker -- or even on local systems.

AWS and Google both support Docker containers running directly within their cloud platforms. But simply using Docker doesn't immediately make an application portable to other services. To take full advantage of Docker portability, developers must take several steps.

Separate components and automatically build them

Some developers set up a local Docker server and manually build a custom image, installing all of the software into that single image. This is a mistake: a container built by hand on a local development system isn't very portable. Manual builds take longer to reproduce, and a small change to one component forces an entirely new manual build.

Instead, create Dockerfiles with automated build instructions. A Dockerfile is a template for how to build a single component of an application stack. Docker containers can be automatically built from a Dockerfile on any vendor platform that supports it.
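For example, a minimal Dockerfile for a Node.js-based application server component might look like the following sketch; the base image, file names and port are illustrative assumptions, not requirements:

    # Illustrative Dockerfile for an application server component
    FROM node:6

    # Install dependencies first so this layer can be cached between builds
    WORKDIR /usr/src/app
    COPY package.json .
    RUN npm install --production

    # Add the application code and document the port the server listens on
    COPY . .
    EXPOSE 8080

    CMD ["node", "server.js"]

Any environment that can run docker build -- a laptop, a build server or a cloud container service -- produces the same image from this file.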

Developers also must split components into the smallest logical parts possible. For example, an application that requires a MongoDB database should build the MongoDB container from its own Dockerfile and then link it to the application server. This also enables DevOps teams to reconfigure the application to use cloud-hosted services, such as a managed Memcached, when possible.
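As a rough sketch, with image names and ports chosen only for illustration, the database and the application server then run as separate, linked containers:

    # Run MongoDB as its own container, from its own image
    docker run -d --name mongo mongo:3.2

    # Build the application server from its Dockerfile and link it to the database
    docker build -t myapp .
    docker run -d --name app --link mongo:mongo -p 80:8080 myapp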

Link between containers

Most applications require more than one component -- or layer -- to run. For example, a simple LAMP stack may require a MySQL back end and an Apache server running PHP. It's common for database apps to have multiple Docker containers to split the load and permissions, allowing developers to string together different systems to build one application. Docker containers are cheap, so keeping components separate makes sense. A typical application may include:

  • Application server
  • Memcached
  • MySQL, or NoSQL databases like MongoDB or Redis

In this case, there would be a Docker container for each component, and Docker links would expose ports from each of the subcomponents to the application server, as sketched below. AWS also offers hosted Memcached, MySQL and Redis-compatible services, and developers may find a NoSQL store such as DynamoDB more efficient than running MongoDB -- but that is where portability issues arise.
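To make the wiring concrete, here is a sketch using the Docker command line; the image names, versions, ports and password are placeholders chosen for illustration:

    # One container per component
    docker run -d --name memcached memcached:1.4
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=example mysql:5.6

    # Linking exposes each subcomponent's port to the application server and
    # injects address variables such as MYSQL_PORT_3306_TCP_ADDR into it
    docker run -d --name app \
        --link memcached:memcached \
        --link mysql:mysql \
        -p 80:8080 myapp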

Avoid vendor-specific databases and services

Applications that need maximum Docker portability should not rely on vendor-specific services, such as DynamoDB. Using a service like Amazon ElastiCache instead of a Docker-based Memcached server does work, but make sure the application server component still uses environment variables, like those Docker links set, to identify the location of the Memcached server. That way, when developers need to migrate a container to another cloud or an on-premises data center, it's easy to stand up a Memcached Docker container later and link the two together.
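One way to preserve that flexibility is to have the application read the cache location only from environment variables. The variable names below (MEMCACHED_HOST, MEMCACHED_PORT) are hypothetical examples; on AWS they would point at the ElastiCache endpoint, and anywhere else they would point at a linked Memcached container:

    # On AWS: point the application at a hosted ElastiCache endpoint (placeholder)
    docker run -d -e MEMCACHED_HOST=<elasticache-endpoint> -e MEMCACHED_PORT=11211 myapp

    # Elsewhere: run Memcached in its own container and pass its address the same way
    docker run -d --name memcached memcached:1.4
    docker run -d --link memcached:memcached \
        -e MEMCACHED_HOST=memcached -e MEMCACHED_PORT=11211 myapp

Because the image itself never changes, switching between the hosted service and a self-managed container is a matter of changing two variables.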

Some services, such as Memcached, Redis and Elasticsearch, are vendor-independent. If a service can also run within a Docker container, it's fine to use the cloud-hosted version -- as long as the service is configured so that switching back doesn't require rewriting the application layer. Before using any vendor-specific service, check that an equivalent replacement is available as a Docker container.

Keep containers small

Micro-containers help developers run an application with the smallest available footprint. For example, a Go application server can be as small as 5 megabytes. Larger containers, or those with self-contained Linux distributions, take longer to start up and require more memory to run. Splitting containers into multiple components works, but make sure it doesn't negatively affect performance or scalability too much.
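For instance, a statically compiled Go binary can ship in an image that contains nothing but the binary itself. This sketch assumes the binary was built ahead of time with CGO disabled (for example, CGO_ENABLED=0 go build -o app-server):

    # Start from an empty image and add only the compiled Go binary
    FROM scratch
    COPY app-server /app-server
    EXPOSE 8080
    ENTRYPOINT ["/app-server"]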

Split containers shouldn't use shared resources for important operations, such as a mounted file system from the host, as this will negatively affect Docker portability.

Don't use a vendor-specific OS or software

While the Amazon Linux OS runs applications on Elastic Compute Cloud (EC2) instances, it isn't a suitable OS for Docker. Not only is it specific to AWS, it's specific to EC2 instances -- and it won't build in a Dockerfile.

Additionally, don't use vendor-specific software such as the AWS Command Line Interface, or rely on endpoints that only exist in AWS, such as for EC2 Tags or AWS credentials. Instead, make sure all configuration options can be passed through environment variables or Docker tags.
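In practice, that means any value a container on AWS might otherwise pull from instance metadata or the AWS CLI -- a database endpoint, a credential, a feature flag -- should instead be supplied at run time. The variable names here are hypothetical:

    # The same image runs anywhere; only the runtime configuration changes
    docker run -d \
        -e DB_HOST=<database-endpoint> \
        -e CACHE_HOST=<cache-endpoint> \
        -e LOG_LEVEL=info \
        myapp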

Configure load balancing

Each cloud vendor has load balancing options specific to its platform. But developers should make sure their DNS provider of choice allows them to quickly switch between load-balancing endpoints or supports failover and round-robin DNS. For example, if an application runs in both Google Cloud and AWS, each provider should have its own load balancer, and the DNS service should route traffic between the two.

Amazon Route 53 supports health checks and latency-based routing, and the service doesn't lock in developers. Other offerings with similar features are available if an enterprise wants to move off AWS entirely.

Test the portability

Have you looked at your backup processes only to realize that they haven't been working for the past year? A more troubling scenario is to need those backups and realize there's no way to restore them. It's a common issue, and the only way to be safe is to routinely verify backups.

The same is true when building portable Docker containers; developers must verify portability. When working on a Docker container, test it in multiple locations -- and test it in an environment with no access to a specific cloud vendor, such as AWS, to expose hidden dependencies.

Regularly run portability tests to ensure the application doesn't regress. If possible, run Docker containers on multiple cloud vendors from day one, and verify that production traffic can be served from any location.
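A portability test doesn't need to be elaborate. A scripted build-and-smoke-test that runs identically on a laptop, a build server and each cloud account will catch most regressions; this minimal sketch assumes the application exposes a health endpoint at /health:

    # Build and run the stack exactly as production would
    docker build -t myapp .
    docker run -d --name app -p 8080:8080 myapp

    # Give the container a moment to start, then fail the check
    # if the application can't serve traffic on its own
    sleep 5
    curl --fail http://localhost:8080/health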

Even if the application is designed to run locally and only use AWS for burst capacity, make sure you have at least one instance of the entire application stack running and continuously being tested on AWS. This helps ensure that AWS can handle the production workload, and it lets a DevOps team know that burst capacity will work as needed. Think of this as a Docker version of Chaos Monkey.
