Containers are becoming a fundamental element of many cloud-native development projects. A container ecosystem includes a container orchestrator, which efficiently schedules container instances onto shared compute resources. Enterprises now have several orchestrator options to control workloads, including Amazon ECS, Azure Container Service, Docker Swarm and Diego, among other commercial and open source orchestrators.
Kubernetes, an open source project spawned by Google, could become the de facto standard for container orchestration. Kubernetes quickly became one of the most active DevOps projects on GitHub, and web searches for it have exploded over the past two years. A recent survey of OpenStack users found that Kubernetes is the most popular tool for managing OpenStack applications.
By releasing Kubernetes as open source, Google hoped to cultivate an ecosystem of developers who would contribute code and incorporate it into their DevOps toolchains. That strategy would make Kubernetes an essential element of many platform as a service (PaaS) stacks and public cloud services. Since then, nearly all significant cloud platforms, including Azure, IBM Bluemix, OpenStack, VMware and Google Cloud Platform, have embraced Kubernetes.
Kubernetes' Federated Services feature makes it easy to perform cross-cluster, multicloud service discovery and deployments. Kubernetes clusters running in different clouds, such as AWS, Google and private clouds, can register with a common Federation API server. From there, clients can automatically find the closest instance of the container service they need using domain name system (DNS) records managed by Federated Services.
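A federated service is defined like an ordinary Kubernetes Service but created against the Federation API server, which then propagates it to every registered cluster and manages the corresponding DNS records. The manifest below is a minimal sketch; the service name, namespace and port are illustrative assumptions, not values from a real deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  # Hypothetical service; submitted to the Federation API server rather
  # than to an individual cluster, e.g.:
  #   kubectl --context=federation-cluster create -f service.yaml
  name: web-frontend
  namespace: default
  labels:
    app: web-frontend
spec:
  selector:
    app: web-frontend
  type: LoadBalancer
  ports:
    - port: 80
```

Once created, clients in any member cloud can resolve the service's federated DNS name and are directed to the nearest healthy instance.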
Kubernetes deploys groups of containers as pods on cluster nodes using bin-packing optimization, which solves the problem of placing objects of different sizes into the smallest possible number of fixed-size bins. Containers in a pod share a network configuration -- IP address and available ports -- and a Linux namespace. Kubernetes manages each pod's lifecycle, including starting, stopping, restarting and moving it within a cluster. Kubernetes can scale applications in response to manual commands, issued through its management interface or command-line interface (CLI), or automatically through an autoscaler. Kubernetes also supports persistent volumes that are mounted to a pod during deployment and can be provisioned manually or dynamically. And the Kubernetes software itself is scalable; it can handle clusters of up to 2,000 nodes with 120,000 total containers.
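The real Kubernetes scheduler weighs many factors beyond resource fit, but the core bin-packing idea can be illustrated with a first-fit-decreasing sketch in Python. The pod sizes and node capacity below are made-up numbers, not Kubernetes defaults:

```python
def first_fit_decreasing(pod_requests, node_capacity):
    """Place pods (by resource request) onto as few nodes as possible.

    A toy illustration of bin packing: each node is a "bin" with fixed
    capacity, and each pod is an "item" with a resource request. Pods
    are placed largest-first onto the first node with room to spare.
    """
    nodes = []  # each node is a list of the pod requests placed on it
    for request in sorted(pod_requests, reverse=True):
        for node in nodes:
            if sum(node) + request <= node_capacity:
                node.append(request)  # fits on an existing node
                break
        else:
            nodes.append([request])  # no room anywhere: open a new node
    return nodes

# Pods requesting memory in GiB, nodes with 8 GiB allocatable each
placement = first_fit_decreasing([5, 4, 3, 2, 2], node_capacity=8)
print(placement)  # five pods packed onto two nodes
```

First-fit-decreasing is a classic heuristic for this NP-hard problem; production schedulers trade some packing density for constraints such as affinity rules and spread across failure domains.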
AWS responds with Blox
AWS is missing from the aforementioned list of public cloud providers that have adopted Kubernetes. And while AWS users can spin up Kubernetes clusters using Elastic Compute Cloud (EC2) instances, the Amazon EC2 Container Service (ECS) uses a proprietary cluster manager. In addition, ECS' black-box nature is problematic for businesses that want to move container workloads between cloud platforms or maintain a hybrid cloud deployment, as some of the automation and management scripts won't seamlessly transfer from Kubernetes to ECS.
Late in 2016, AWS released a separate open source orchestration system called AWS Blox, possibly to placate large enterprises that are building hybrid container infrastructure with AWS and need the same container orchestrator on public and private clouds. While it is open source and has the support of Netflix -- already an ECS adopter -- Blox is only available on AWS.
Blox is less cluster management software than a container scheduling framework. The service exploits existing ECS event streams, currently used for monitoring, to simplify the development of custom ECS schedulers.
Blox provides a REST API that exposes two components: the cluster-state-service, which collects and stores cluster state, and a daemon scheduler, which launches one copy of a task on every node in a cluster. The scheduler monitors for new nodes that join the cluster and automatically places the task on them; for example, it can ensure external log and monitoring agents run on every node to collect data about ECS applications. Blox also includes a CLI that supports the full set of ECS features, including some that Blox itself doesn't yet support, such as using Docker Compose to define and deploy multi-container applications.
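The daemon scheduler's core behavior -- keep exactly one copy of a task on every node, including nodes that join later -- can be sketched in a few lines of Python. The class and method names below are illustrative, not the Blox API; a real scheduler would reconcile against the ECS event stream rather than an in-memory set:

```python
class DaemonScheduler:
    """Toy sketch of daemon scheduling: one task instance per cluster node.

    Reconciles desired state (the task running on every node) against the
    observed set of nodes, the way a real scheduler would react to
    cluster-state events. Names are hypothetical, not Blox's API.
    """

    def __init__(self, task):
        self.task = task
        self.placements = {}  # node_id -> task placed on that node

    def reconcile(self, nodes):
        """Given the current set of node IDs, place or remove tasks."""
        for node in nodes:
            if node not in self.placements:
                self.placements[node] = self.task  # node joined: start task
        for node in list(self.placements):
            if node not in nodes:
                del self.placements[node]  # node left: drop its placement
        return self.placements

scheduler = DaemonScheduler(task="log-agent")
scheduler.reconcile({"node-1", "node-2"})
scheduler.reconcile({"node-1", "node-2", "node-3"})  # node-3 joins
print(sorted(scheduler.placements))  # the agent runs on all three nodes
```

The same reconcile loop handles both scale-up and node loss, which is why event-driven state tracking (the cluster-state-service's job) is the foundation Blox builds schedulers on.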
For some developers, AWS Blox sounds more like an open source toolkit than a polished container orchestrator. According to the project README, "The scheduler can be used as a reference for how to use the cluster-state-service to build custom scheduling logic, and we plan to add additional scheduling capabilities for different use cases." AWS recommends experimenting with Blox on a local Docker installation, not ECS itself, with the hope that users will eventually create an ecosystem of DevOps tools around ECS.
Since its unveiling, the Blox project has been quite subdued, with three to six code commits per week, comprising about 20,000 to 40,000 lines of new code per week spanning two releases. There are currently seven features on the Blox roadmap, including a web user interface, support for multiple accounts and high availability features such as redundant schedulers, automatic instance restart and a redundant data store.
For the near term, AWS Blox will be a curiosity for the vast majority of AWS users who deploy containers. But its limitations could push enterprises to use native ECS scheduling and cluster management software features instead.
ECS is ideal for IT teams that want to deploy workloads on AWS and aren't worried about infrastructure lock-in. These teams can focus on maximizing their use of higher-level AWS data, machine learning and application services, and integrate those services with containerized applications. In the long term, Blox may evolve into a general-purpose cluster manager that other cloud services support, but it's more likely that Blox will become a cluster manager for private cloud technology stacks.
Non-AWS or multicloud users should look to Kubernetes or other cross-cloud orchestration software, like Docker Swarm or Mesos Marathon. And developers using Google Container Engine have no reason to deviate from the default Kubernetes engine; they could incorporate it into their private cloud stacks of choice when building hybrid container infrastructure. Keep in mind: Kubernetes requires additional expertise to run. But improvements to the Kubernetes installation process and management interface could ease the learning curve.