Implementing a new IT approach can be difficult, even before getting involved with new software and services.
The AWS Partner Network offers managed services providers ready to help integrate Amazon cloud services, but users working in hybrid cloud environments need to strike a delicate balance between dev, test and production workloads across multiple environments. AWS tools aren't always ideal for the task at hand -- Amazon API management and DevOps tools, for example, might not meet the needs of hybrid customers looking to seamlessly transition from on premises to the cloud.
Chris Riley, a partner at HKM Consulting LLC in Boston, an AWS partner and managed services provider specializing in full-stack development, helps a mix of enterprises -- from commercial clients to government-based businesses and universities -- onboard Amazon API management and other cloud technologies, as well as Docker and continuous integration (CI) methodologies. He is also a proponent of the Open API Initiative, an open source effort based on Swagger that seeks to standardize the use of REST APIs.
SearchAWS spoke with Riley about Amazon API management, how AWS could improve its offerings and how HKM's clients approach the DevOps movement.
What are the biggest challenges that AWS users face when tying services together with APIs?
Riley: The governance piece is an important component. People still leverage existing solutions from Oracle, Apigee or WSO2 to help manage that. And then, how do you secure that? Those tools come with prebuilt solutions around OAuth 2, as well as SAML and Shibboleth.
The credentialing and security aspects are a very decent portion of what we do with clients, trying to effectively use what they have. On the higher-education side, they use Shibboleth quite often, which is based on SAML 2. On the commercial side, we're seeing an uptick in OAuth 2, a simpler protocol that's very consistent with what a lot of the API platforms are using today, whether it's Facebook or Pinterest; they're all using some variant of OAuth to communicate identity and the access capabilities for that particular user.
What we are seeing is an uptick in desire, both on the university side and commercial side, to act almost as if they are like a Facebook or Pinterest, where they have some type of portal where developers can easily find information and, from there, be able to access the resources that they need. That includes the security component, the REST API documentation -- we use a lot of Swagger for that -- and quick ways to onboard to take advantage of those capabilities.
That's an area where I see Amazon being a little bit behind. API Gateway provides some core capabilities, but I definitely envision that they'll increase its capabilities and footprint over time.
What features should AWS add to the Amazon API Gateway? Where could it improve with API management?
Riley: Right now, you're restricted to using the Amazon security module, so I think opening that up and making it pluggable for on-premises security infrastructure, as well as in the cloud, would be a big step. Security is one area where they are weak and need more focus.
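API Gateway's Lambda custom authorizers go partway toward the pluggable security Riley describes: a function receives the caller's token and returns an IAM policy that allows or denies the invocation. Below is a minimal sketch; the token check is a hard-coded placeholder standing in for real OAuth 2 or SAML validation, and the principal names are hypothetical.

```python
# Minimal API Gateway Lambda custom authorizer sketch.
# The token check is a placeholder; a real authorizer would validate an
# OAuth 2 or SAML assertion against an identity provider.

def build_policy(principal_id, effect, method_arn):
    """Return the IAM policy document shape API Gateway expects back."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }

def handler(event, context):
    token = event.get("authorizationToken", "")
    # Placeholder check -- swap in real token introspection here.
    if token == "allow-me":
        return build_policy("user|demo", "Allow", event["methodArn"])
    return build_policy("anonymous", "Deny", event["methodArn"])
```

Because the authorizer is just a function, the same pattern can call out to an on-premises identity provider, which is one way to bridge the hybrid gap Riley points to.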
A second [area] would be monitoring. They have CloudWatch and some basic metrics there, but I think if you want to look at it from an API-consumption perspective and utilization perspective ... API User X has used this API 15 times this week, for this period of time. Going deeper into the consumption and utilization, specifically at the API level, would be helpful. Some of the other platforms provide dashboards and monitoring specifically around API consumption.
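The per-consumer view Riley describes ("API User X has used this API 15 times this week") can be approximated today by aggregating access-log records yourself. A plain-Python sketch, assuming hypothetical log entries with user, path and timestamp fields such as might be parsed from gateway access logs:

```python
from collections import Counter
from datetime import datetime, timedelta

def usage_by_user(log_entries, api_path, since):
    """Count calls to one API path, per user, since a cutoff time.

    `log_entries` is assumed to be a list of dicts with hypothetical
    'user', 'path' and 'timestamp' (datetime) fields.
    """
    counts = Counter()
    for entry in log_entries:
        if entry["path"] == api_path and entry["timestamp"] >= since:
            counts[entry["user"]] += 1
    return counts

# Example: calls to /orders in the past week (all data is made up).
now = datetime(2016, 6, 1)
logs = [
    {"user": "user-x", "path": "/orders", "timestamp": now - timedelta(days=2)},
    {"user": "user-x", "path": "/orders", "timestamp": now - timedelta(days=3)},
    {"user": "user-y", "path": "/orders", "timestamp": now - timedelta(days=30)},
]
weekly = usage_by_user(logs, "/orders", now - timedelta(days=7))
```

This is the kind of consumption dashboard other API platforms bundle in; with AWS alone, you assemble it from logs and CloudWatch metrics.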
With routing, we've seen a lot of stuff that happens between the initial receipt and the initial send. In some cases, people stand up older services based on SOAP protocol. They want to go in and do some more detailed manipulation of the data. [AWS] has some capabilities now, but that's an area that could potentially be improved over time. In that case, you get into what's called the enterprise service bus category, where people are doing a heavy amount of logic, and I don't think API Gateway is focused on that. They want to be as lightweight as possible.
Where do you see the DevOps movement, in general, going in the next five years?
Riley: For small organizations, DevOps is much simpler because a huge component of DevOps is based around roles and processes. So, if you're an established software organization with 1,000 developers, it's really tough to change what they do. I've seen a large amount of cost put into Agile -- trying to get teams off a Waterfall [model] and move toward Agile methodology to make themselves leaner, more efficient, more effective and more tied into the business.
The next phase we're in right now, at least on the enterprise side, would be continuous integration to begin with and continuous delivery [CD], if it makes sense; in some cases, it doesn't. Some organizations release on a quarterly basis; that's what they're familiar with, and that's what their customers are familiar with. For them, delivering on a quarterly basis makes sense. We're seeing good penetration of [CI] across the board, for the most part, when an organization is really dependent on it and has everyone aligned and leveraging it.
We have seen an uptick in infrastructure automation. I think Docker has shaken a lot of what was going on with Puppet, Chef, Ansible, Salt and those other platforms that are out there and had made good names for themselves, but they're very complex and focused on an infrastructure-oriented user, like a system administrator. Docker has given the power of what the runtime is going to be back to the developer. With that switch, handing off a Docker container, organizations are now talking about using Kubernetes, Mesos, ECS [EC2 Container Service] and platforms like that.
Rather than knowing the guts of what's going on inside, the DevOps person is more about the distribution, healthy execution and monitoring. So, I'm starting to see more on that side in relation to the switch in DevOps roles. I think that will continue five years out with containers, and whatever comes after containers will continue to be focused on a specific capability, and [enterprises will have] a person who manages that through its lifecycle.
What is AWS' role in the DevOps space?
Riley: Most shops that we've gone to, whether they're startups or even enterprise organizations, are moving to host their infrastructure [on AWS] and trying to run on demand, taking advantage of [AWS] Spot Instances. Although you can do all that interaction through the console, you're using the CLI [command-line interface] to spin things up and do things in an automated way. That's a huge DevOps component.
If the client's not doing that, they're running Kubernetes and Docker in their local environment or have their own private cloud. But we've seen the push where large organizations are saying, 'It just makes more sense to let Amazon manage our infrastructure.' And because of that, they want to be automated.
Some people are old school, but organizations are saying, 'Well, if we're going to do this, let's do it right. Let's automate.' So when they deploy things, they're spinning up more instances, and to do that, they're using the Amazon CLI. So, [AWS has] some pretty powerful tools. Everything is software-based, so you can configure load balancers, IP addresses, DNS [domain name system], all that stuff. Whether it's a QA [quality assurance] environment or production, you have the ability to do what you need to do there.
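The automated spin-up Riley mentions typically boils down to scripted run-instances calls, via the AWS CLI or an SDK such as boto3. A hedged sketch with the parameter-building split out so it can stand alone; the AMI ID, instance type and tag values are placeholders, not real values:

```python
# Sketch of automated instance provisioning parameters for boto3's
# ec2.run_instances. All concrete values below are placeholders.

def run_instances_params(ami_id, instance_type, count, env_tag):
    """Build the keyword arguments for ec2.run_instances, tagging the
    instances with the target environment (e.g. a QA environment)."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": env_tag}],
        }],
    }

params = run_instances_params("ami-12345678", "t2.micro", 2, "qa")
# With AWS credentials configured, you would then run:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

Keeping the parameter construction as plain data is what makes the 'everything is software-based' point concrete: the same dict can be reviewed, versioned and reused across environments.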
We have not leveraged any of the build automation tools that they have. We're doing everything via Bamboo or Jenkins. We like the capabilities and power within those tools to manage and monitor. Typically, what we do is develop locally, then if everything looks good, we push to the cloud.
Are AWS users opting for CI and CD services over open source tools? Or are they being used in tandem?
Riley: Most of the customers I've been working with on the university side or on the commercial side are in situations where they have a lot of infrastructure in-house. So, if they're going to do something as a hybrid cloud scenario, a lot of them still leverage, run and do builds locally. At a West Coast university I visited recently, they were using Bamboo to do all their builds locally. If things were good, they would make a Docker image and run that locally. If that was good, the option was moving it up to Amazon Web Services or Azure, whichever deployment target they wanted. It's a little easier to stand up a CI instance locally and hook into things locally than to try to leverage Amazon tooling and wire it in that way.
We're still in this hybrid world with most of our clients. There are a few that are pure Amazon-only and have no infrastructure -- those are more on the startup side than established clients with hundreds of developers.