The public cloud can be a critical component of maximizing a DevOps strategy, but the transition is not without its challenges.
AWS offers scalability and API-enabled automation that make it one of the best platforms for a DevOps model, but the learning curve can be steep -- especially for enterprises with legacy systems or those that neglect the ops side of the equation. The platform also has gaps in its native tooling that third-party tools can fill, but navigating those decisions amid a shifting dynamic in technology and culture isn't simple, either.
Logicworks, a New York-based cloud management provider and AWS partner, helps businesses move to DevOps on AWS. The company's own transition from web hosting to cloud computing, shifting from siloed working groups to DevOps, took nearly two years to implement, noted Jason McKay, senior vice president and CTO. That was partly due to waiting for customer demand for newer services, and partly to getting the right staff in place and building internal expertise on the platform.
"And it doesn't end," McKay said. "We're still constantly adding new functionality."
Because AWS is such a big player in the market, users don't have to worry about a lack of tooling as they may with other platforms, said Theo Kim, head of DevOps at GoPro Inc., in San Francisco.
"If you're making a DevOps tool and you don't have AWS support, whether it's native or a plug-in, you're kind of screwed," Kim said. "I would say that it's much more difficult to do DevOps in Azure, simply because they don't have the integration."
DevOps can be an amorphous term, but most tools that fall under the umbrella of the deployment model can be found on AWS. That includes native tooling, such as CloudFormation, OpsWorks, CodePipeline, CodeDeploy and CodeCommit. There are popular third-party tools integrated with the platform, such as Puppet, Chef, Salt, Ansible and Jenkins.
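To make the infrastructure-as-code side of that tooling concrete, here is a minimal sketch of a CloudFormation template built as a Python dict and emitted as JSON. The logical ID and the single S3 bucket resource are illustrative choices, not drawn from any real deployment.

```python
import json

def minimal_template(bucket_logical_id="DeployArtifacts"):
    """Build a minimal CloudFormation template as a Python dict.

    The logical ID and resource here are illustrative only.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Minimal sketch: one S3 bucket for build artifacts",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "VersioningConfiguration": {"Status": "Enabled"}
                },
            }
        },
    }

if __name__ == "__main__":
    # Print the template; the JSON could then be passed to
    # `aws cloudformation deploy` or checked into version control.
    print(json.dumps(minimal_template(), indent=2))
```

Keeping templates in version control is what lets the same pipeline tools (CodePipeline, Jenkins) review and promote infrastructure changes the way they promote application code.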
AWS is well-suited for DevOps, but there can be a downside to having a thousand ways to do something if it's not handled properly, McKay said.
"I wouldn't change that [flexibility] for the world, but I do think it's important that organizations get some internal consistency when deploying apps," he added.
Services such as CodePipeline and CodeDeploy are examples of how AWS expanded capabilities around automation and delivery pipelines, but often, organizations rely on their own tools to reduce lock-in, observed Donnie Berkholz, research director at 451 Research.
"As these cloud services move up the stack, we see fewer and fewer alternatives that are created by a specific cloud provider, whether public or private," Berkholz said. "Instead, users are left rolling their own."
AWS is still missing full-featured monitoring and alerting, as well as collaboration functionality such as live chat services that continue to grow in popularity, he added.
Scale and third-party DevOps demands
Cloud Elements Inc., an API integration provider in Denver, recently migrated all of its workloads to AWS. The real draw was Amazon Virtual Private Cloud, though the company also uses Elastic Compute Cloud (EC2), Auto Scaling, Lambda, Simple Storage Service, Glacier, CloudWatch and Route 53. Cloud Elements turned to HashiCorp tools to tie those services together and extend them to other platforms for its customers.
One of the problems with doing at-scale DevOps on AWS is the upper limits Amazon puts on services, said Rocky Madden, DevOps engineer at Cloud Elements.
"Once you get into some of these outer fringe areas, it becomes very cumbersome to the point where you almost have to look at alternatives," Madden said.
Cloud Elements looked at higher-level features, such as EC2 Container Service or Lambda, but neither met the company's needs. The container service encountered instance bottlenecks because each application required its own underlying servers, while Lambda can run up huge bills at scale and caps both the size of compiled code per function and the number of simultaneous requests. The company eventually scrapped plans to have all requests handled by Lambda functions and turned to Kubernetes and Docker Swarm to run containers on top of AWS.
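When a workload does bump into a concurrency cap like Lambda's, the service throttles further invocations, and a common client-side mitigation is retry with exponential backoff and jitter. The sketch below uses a hypothetical `ThrottledError` and a generic `invoke` callable as stand-ins; with boto3 this would correspond to catching a `ClientError` whose error code indicates throttling.

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for a service-side throttling response."""

def invoke_with_backoff(invoke, payload, max_attempts=5, base_delay=0.5):
    """Retry a throttled call with exponential backoff and jitter.

    `invoke` is any callable that raises ThrottledError when the
    service rejects the request for exceeding its rate limits.
    """
    for attempt in range(max_attempts):
        try:
            return invoke(payload)
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # Give up after the final attempt.
            # Sleep 0.5s, 1s, 2s, ... plus jitter so many clients
            # retrying at once don't all hit the service together.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Backoff smooths over transient throttling, but it cannot raise the underlying quota; sustained traffic beyond the cap is the point at which teams, like Cloud Elements, start evaluating alternatives.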
Security is another area where AWS lacks the native capabilities to meet the DevOps model, according to Kim.
The issue isn't specific to AWS, but translating requirements imposed in a data center to the cloud can be difficult, Kim said. For example, in a more traditional setting, an IT pro sets up a firewall to meet security requirements, but there is no hardware for users to touch in AWS, and a single point of failure defeats the purpose of the system architecture, he explained.
This is "something we're still trying to scramble and find tools for," he said. "Right now, a lot of the tools and systems out there were designed for on premises."
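In practice, the hardware firewall's role on AWS is filled by API-configured controls such as security groups. As a hedged sketch, the helper below builds one entry of the `IpPermissions` structure that boto3's `ec2.authorize_security_group_ingress` call accepts; the CIDR blocks and security group ID are hypothetical.

```python
def ingress_rule(port, cidr, protocol="tcp", description=""):
    """Build one entry for the IpPermissions list accepted by
    boto3's ec2.authorize_security_group_ingress."""
    return {
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr, "Description": description}],
    }

# Example rules: HTTPS from anywhere, SSH only from an internal range.
rules = [
    ingress_rule(443, "0.0.0.0/0", description="public HTTPS"),
    ingress_rule(22, "10.0.0.0/16", description="SSH from inside the VPC"),
]

# With real credentials, the rules would be applied like so
# (the group ID below is a placeholder):
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0", IpPermissions=rules)
```

Because rules like these live in code rather than in a box in a rack, they can be reviewed and deployed through the same pipeline as application changes, which is the DevOps-friendly upside of losing the physical firewall.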
Changing to DevOps on AWS
For cloud-native startups, many of the perceived staples of DevOps are baked into their strategy to build applications and grow their business. But the shift isn't simple for large enterprises with legacy systems and more traditional delivery models.
Large companies tend to start their journey to the public cloud with the lift-and-shift approach, or view it as a cheaper alternative to doing the same processes they run in their own data centers. Only when they look at higher-level services and use AWS as a strategic architecture and operations platform does DevOps enter the conversation, said Stephen Elliot, research vice president at IDC.
When companies do make the shift to DevOps on AWS, it's important to have a clear plan for where their applications will reside and how continuous integration and automation will work. At the same time, they need to be mindful of security and compliance requirements, as well as the complex billing structure on AWS, Elliot said.
"It's not a one-step or two-step process; it's multiple steps," he said.
Recognizing a company's core competency is also important when deciding if DevOps is the right route, Madden said. If operations are not a strength, then a platform as a service offering, such as Heroku, might be the way to go. Companies should also be mindful that, at a certain scale, they may outgrow those tools, and managing the bare bones of an AWS environment isn't as simple as just standing up an instance, he added.
"We're not an ops company, but we definitely have ops needs, so having engineers contribute to that makes sense for us," Madden said. "There's definitely a skill set involved there that might not be appreciated at first by developers."
Trevor Jones is a news writer with TechTarget's data center and virtualization media group. Contact him at [email protected]