
Ooyala achieves multi-cloud management with DIY elbow grease

An AWS early adopter branched out into Azure and OpenStack while avoiding "sticky" shared services and incorporating open source automation tools.

One early AWS adopter says sticking to the lowest common denominator of cloud services and independent tools for automation has helped it achieve stability and optimize performance in an increasingly multi-cloud world.

Like The Weather Company, Ooyala Inc., a video processing service with customers such as ESPN and Bloomberg, has joined a growing number of companies branching out beyond just one cloud.


Ooyala began using Amazon Web Services (AWS) in 2007, but about two years ago added an OpenStack-based private cloud to handle high-I/O workloads and, within the last year, engaged with Microsoft Azure to host some workloads in locations where AWS does not yet have data centers, such as Illinois and Texas.

To keep workloads performing well and highly available across multiple clouds, the company avoids services that can't be replicated on other platforms, said Ilan Rabinovitch, engineering manager of infrastructure and reliability at Ooyala. This extends even to low-level services such as Elastic Load Balancing and Elastic Block Store (EBS); the company built its own load balancers based on open source tools such as NGINX and HAProxy, and has stuck with ephemeral instance storage and the Simple Storage Service (S3), rather than use shared block storage behind its Elastic Compute Cloud (EC2) workloads, Rabinovitch said.
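Ooyala has not published its load balancer configuration; purely as an illustration, a minimal HAProxy setup of the kind described (hostnames, addresses and ports here are hypothetical) works the same whether the backend nodes run in AWS, Azure or OpenStack:

```
# haproxy.cfg -- minimal sketch; addresses and ports are hypothetical
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app_nodes

backend app_nodes
    balance roundrobin
    # Nodes may live in any cloud; health checks let any one be
    # terminated without manual intervention.
    server node1 10.0.1.10:8080 check
    server node2 10.0.1.11:8080 check
```

Because the proxy itself is open source, the same configuration can be deployed unchanged on every platform, which is the portability argument Rabinovitch makes against provider-specific load balancers.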

"One of the things we learned early on is that in EC2, we could build more stable and performant infrastructure if we were relying on fewer shared resources," Rabinovitch said. "Over the years we've found that services like EBS tended to be the source of various performance issues and outages." Two years ago, Rabinovitch penned a public blog outlining the specific issues Ooyala had experienced with EBS.

While this picture has changed drastically in the last 18 months with the addition of solid-state drive-backed options for EBS, Rabinovitch said, he still feels it's good practice in multi-cloud management to design applications for ephemeral underlying resources.

Even as AWS offers new shared-storage options such as the Elastic File System, customers interested in multi-cloud management or advanced automation should design applications with infrastructure failure in mind, Rabinovitch said.

"You want to be able to build your workload such that any given node can be terminated without any further investigation or interruption of your staff to keep you online," Rabinovitch said.

Chef provisioning is one of the main tools in Ooyala's toolbox for doing this, according to Rabinovitch, though multi-cloud configuration management automation can also be achieved with Chef competitors such as Puppet and Ansible.

"As long as you have any configuration management tool, you're much better off than if you do not," Rabinovitch said. "It's about that thought process around codifying your infrastructure and treating it like any other piece of software that your organization might develop."

That means that things such as unit and integration tests are just as important, if not more important, for infrastructure code as they are for a Web application or some other code that an organization might produce, according to Rabinovitch.

"You want to have the same level of quality and the same level of engineering into that code," he said.

While it's possible to be agnostic about different cloud platforms to a certain extent, some lock-in is unavoidable.

"The most sticky service most cloud providers offer is storing your data, and data gravity means it takes a lot of momentum to move away from a given service," he said.

One way AWS services have improved and should continue to improve, according to Rabinovitch, is by easing the transition between EC2 Classic and Virtual Private Cloud (VPC).

With VPC, new EC2 users that have no infrastructure can start up a new account and be off to the races, Rabinovitch said.

However, "if you are a large organization that has been in EC2 Classic for many years it's a more difficult and time-intensive process," he said.

Utilities such as ClassicLink, introduced by AWS in January, have made hooking EC2 Classic resources into VPCs much easier, but Amazon should continue to reduce the 'friction' involved in moving resources to VPC as well, Rabinovitch said.

Amazon did not comment for this story.

Beth Pariseau is senior news writer for SearchAWS. Write to her at [email protected] or follow @PariseauTT on Twitter.

Next Steps

Ooyala embraces OpenStack for high-IO workloads

Cloud migration tools target multi-cloud world

What's driving the multi-cloud engine?

Dig Deeper on AWS case studies and startups