

Match container automation tools to AWS workload needs

Container automation and scheduling enables IT teams to build apps that are scalable and portable. Here are four tools for controlling Amazon ECS resources.

Containers allow IT teams to automatically configure lightweight, isolated runtime environments and run dedicated code, but those teams face challenges when it comes to managing and provisioning container clusters in the cloud, especially with container automation. Several native AWS tools, as well as third-party utilities, help IT teams control EC2 Container Service resources. Developers must explore all of these container services and know how they differ across workloads.

AWS provides four basic approaches to container automation, which is also called orchestration. The Amazon EC2 Container Service (ECS) includes the service scheduler for long-running tasks, load balancer integration, the RunTask action for batch jobs and the StartTask action for custom integration.

Amazon ECS enables developers to work with information about the state and performance of resources to help schedule a collection of tasks. Developers can use ECS to schedule a service, which can include multiple instances of one or more tasks. ECS automates the placement of services, associated tasks for the services and the load-balancing algorithms that work with them.
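Every task a scheduler places starts from a task definition. A minimal sketch of one, expressed as the parameters that could be passed to boto3's ecs.register_task_definition, might look like the following; the family name, container name, image and resource values are all hypothetical placeholders.

```python
# Hypothetical ECS task definition, as passed to
# boto3's ecs.register_task_definition(**task_definition).
task_definition = {
    "family": "web-app",  # hypothetical task family name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "memory": 256,  # hard memory limit in MiB
            "cpu": 128,     # CPU units (1,024 units = one vCPU)
            "portMappings": [{"containerPort": 80}],
        }
    ],
}
```

Services and one-off tasks alike then refer to this definition by family name and revision, such as "web-app:1".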

Use ECS scheduler for lengthy tasks

The service scheduler is optimized for applications that need to run for long periods of time. These applications could be composed of multiple tasks related to the application logic that run on separate containers. The service scheduler enables developers to specify the number of instances of a task, or to scale different tasks independently of each other based on a desired metric -- CPU, memory usage or another characteristic of the underlying container. For example, a web application interface might need to scale at twice the rate of a database connector for a given type of load.

If one or more instances of a particular task fail or stop, the service scheduler restarts or relaunches instances until it reaches the desired number. This approach works best with applications that store data outside of the running tasks. A service can only be deleted when it has no running tasks; otherwise, a developer must first update the desired task count to zero.
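A service that the scheduler maintains can be sketched as the parameters to boto3's ecs.create_service; the cluster, service and task definition names below are hypothetical.

```python
# Hypothetical service definition, as passed to boto3's
# ecs.create_service(**service). The scheduler relaunches tasks
# as needed to keep desiredCount of them running.
service = {
    "cluster": "demo-cluster",       # hypothetical cluster name
    "serviceName": "web-service",    # hypothetical service name
    "taskDefinition": "web-app:1",   # family:revision of a registered task definition
    "desiredCount": 3,               # scheduler maintains three running tasks
}
```

To delete such a service, a developer would first call ecs.update_service with desiredCount set to zero, then ecs.delete_service.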

Developers can apply Auto Scaling to run more tasks in response to predetermined CloudWatch alarms related to CPU and memory metrics, for example. This allows IT teams to automatically scale services up and down, maintaining performance while managing costs. It requires the Application Auto Scaling service, configured CloudWatch alarms and the appropriate AWS Identity and Access Management (IAM) permissions.
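The Application Auto Scaling setup can be sketched as two parameter sets: one registers the service's desired count as a scalable target, and one attaches a scaling policy that a CloudWatch alarm triggers. These would be passed to boto3's application-autoscaling client via register_scalable_target and put_scaling_policy; the cluster, service, role ARN and capacity numbers are hypothetical.

```python
# Register the ECS service's desired count as a scalable target
# (boto3 application-autoscaling client, register_scalable_target).
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/demo-cluster/web-service",  # hypothetical cluster/service
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 2,
    "MaxCapacity": 10,
    # Hypothetical IAM role that grants Application Auto Scaling
    # permission to adjust the service.
    "RoleARN": "arn:aws:iam::123456789012:role/ecsAutoscaleRole",
}

# Step-scaling policy to attach via put_scaling_policy; a CloudWatch
# alarm on CPU or memory would then invoke it.
scaling_policy = {
    "PolicyName": "scale-on-cpu",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/demo-cluster/web-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "StepScaling",
    "StepScalingPolicyConfiguration": {
        "AdjustmentType": "ChangeInCapacity",
        "StepAdjustments": [
            # Above the alarm threshold, add one task.
            {"MetricIntervalLowerBound": 0, "ScalingAdjustment": 1}
        ],
    },
}
```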

Increase granularity with application load balancing

The Amazon ECS scheduler works in conjunction with a load balancer to distribute traffic across multiple instances of a service. A service definition can launch multiple tasks behind a load balancer simultaneously. Developers can use a load balancer for a set of tasks running in a single service; the developer configures the tasks to work with AWS' Classic Load Balancer or Application Load Balancer tools.

The Application Load Balancer supports dynamic port mapping, in which each task listens on a separate, dynamically assigned host port. This enables several copies of a task to run on the same container instance. The Classic Load Balancer requires a fixed relationship between the load balancer port and the container instance port, so tasks have to run on separate container instances.
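The difference shows up in two small configuration fragments: a host port of 0 in the task definition requests a dynamic port, and the service definition then points the Application Load Balancer's target group at the container port. The target group ARN and container name below are hypothetical.

```python
# In the task definition: a hostPort of 0 asks ECS to assign an
# ephemeral host port, so several copies of the task can share
# one container instance behind an Application Load Balancer.
port_mapping = {"containerPort": 80, "hostPort": 0}

# In the service definition passed to ecs.create_service: tie the
# container to a (hypothetical) Application Load Balancer target group.
load_balancers = [
    {
        "targetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "targetgroup/web/0123456789abcdef"  # hypothetical target group
        ),
        "containerName": "web",
        "containerPort": 80,
    }
]
```

With the Classic Load Balancer, by contrast, the hostPort must be a fixed value that matches the load balancer's instance port.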

Use RunTask for batch jobs

Developers can manually run tasks to test out a new service during development. They can also use this approach for occasional batch jobs, such as data transformation, data analytics or machine learning algorithms that run and stop on their own.

To manually run tasks from the ECS console, select the RunTask option. This action randomly distributes tasks across a cluster, which minimizes the chance that a single instance receives a disproportionate share of the workload.

Previously run tasks start immediately. New tasks require developers to configure IAM permissions, and settings such as commands and environment variables can be adjusted through container overrides. These tasks can shut down when a batch job completes or when the StopTask API stops them from outside the application.
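A batch invocation can be sketched as the parameters to boto3's ecs.run_task, including a container override that passes the job its input; the cluster, task definition, container name and environment variable are hypothetical.

```python
# Hypothetical batch job launch, as passed to boto3's
# ecs.run_task(**run_task_params). RunTask places the tasks
# across the cluster for you.
run_task_params = {
    "cluster": "demo-cluster",          # hypothetical cluster name
    "taskDefinition": "nightly-etl:1",  # hypothetical batch task definition
    "count": 2,                         # launch two copies of the task
    "overrides": {
        "containerOverrides": [
            {
                "name": "etl",  # container to override within the task
                # Per-run input supplied via an environment variable.
                "environment": [{"name": "JOB_DATE", "value": "2017-03-01"}],
            }
        ]
    },
}
```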

Use the StartTask API for app integration

The StartTask API enables enterprises to write custom container automation schedules or to use third-party schedulers, such as Apache Mesos or Kubernetes. With this approach, developers can specify where to place new tasks. This is a good option when the enterprise wants to use the same scheduler on premises and across multiple cloud platforms.

Access the StartTask action through the AWS Command Line Interface, the AWS software development kits or the ECS API. The ECS List and Describe actions retrieve the current state of a cluster of containers. This information can drive a feedback loop for provisioning or shutting down containers.
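The shape of such a custom scheduler can be sketched in a few lines: list the cluster's container instances, apply whatever placement logic the scheduler implements, and call StartTask against the chosen instance. The sketch below assumes a boto3 ECS client is passed in; pick_instance is a hypothetical stand-in for real placement logic.

```python
def pick_instance(instances):
    # Placeholder placement strategy for illustration only:
    # a real scheduler would weigh capacity, affinity and so on.
    return instances[0]

def schedule(ecs, cluster, task_def):
    """Place one copy of task_def on an instance chosen by custom logic.

    ecs is expected to behave like a boto3 ECS client.
    """
    # List/Describe actions supply the cluster state that drives placement.
    instances = ecs.list_container_instances(cluster=cluster)["containerInstanceArns"]
    target = pick_instance(instances)
    # Unlike RunTask, StartTask lets the caller name the exact instance.
    return ecs.start_task(
        cluster=cluster,
        taskDefinition=task_def,
        containerInstances=[target],
    )
```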

Amazon released the ECS Scheduler Driver, an open source proof of concept tool that illustrates how to integrate ECS with the Mesos framework. Mesos supports a variety of schedulers, including Marathon for longer-running applications, Chronos for batch jobs and Apache Aurora for several deployment scenarios.

This API also allows companies to use existing schedulers with ECS. Coursera, an online educational technology company, developed Iguazú to schedule batch jobs related to course development. More recently, the company started using the ECS StartTask API to bring existing tools to AWS.

Next Steps

AWS Blox adds to container orchestration market

Provision and automate tasks in Amazon ECS

Manage Docker container configurations
