AWS has yet again drawn back from its one-size-fits-all approach to load balancing to address modern development methods for increasingly diverse and fragmented applications.
In just over a year, the cloud provider has released two layer-specific load balancers to replace its Elastic Load Balancing feature. Application Load Balancer, unveiled last summer, adds more granularity to routing at the application layer. And last month, AWS introduced Network Load Balancer to route TCP traffic at the transport level to targets such as containers, IP addresses and Elastic Compute Cloud (EC2) instances. Network Load Balancer can balance millions of requests per second without a warmup period -- a boon for customers who receive volatile spikes in traffic, according to the company.
AWS Network Load Balancer preserves client IP addresses, which eliminates workarounds and allows an IT team to apply firewall rules directly to those addresses. It also supports one static IP per availability zone to help corral addresses as resources scale up, and a team can assign an Elastic IP per availability zone for more control over domain name system (DNS) records.
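As a rough illustration of the static-IP feature, the sketch below builds the parameters a team might pass to the `create_load_balancer` call in AWS's elbv2 API (for example, via boto3) to pin one Elastic IP per availability zone subnet. The subnet and allocation IDs are placeholders, not real resources.

```python
def build_nlb_request(name, subnet_to_eip):
    """Build create_load_balancer parameters for a Network Load
    Balancer with one static (Elastic) IP per AZ subnet.

    subnet_to_eip maps a subnet ID to an Elastic IP allocation ID.
    """
    return {
        "Name": name,
        "Type": "network",  # "application" would create an ALB instead
        "Scheme": "internet-facing",
        "SubnetMappings": [
            {"SubnetId": subnet, "AllocationId": eip}
            for subnet, eip in subnet_to_eip.items()
        ],
    }

# Placeholder IDs; a real call would be:
#   boto3.client("elbv2").create_load_balancer(**params)
params = build_nlb_request(
    "demo-nlb",
    {"subnet-aaaa1111": "eipalloc-0abc", "subnet-bbbb2222": "eipalloc-0def"},
)
```

Because each availability zone gets a known, fixed address, downstream firewall whitelists and DNS records do not churn as the fleet behind the balancer scales.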
IT shops need load balancing to provide low latency and extremely high throughput for applications that run across instances, containers and servers, said Brad Casemore, an analyst at IDC. Features around static IP addresses per availability zone also cater to enterprises that may have particular whitelists they have to enforce on the firewall, to cite one example.
As AWS moves up the stack to entice more customers, the cloud provider is working to improve resources at the plumbing level. Modern workloads that require higher performance and new approaches to development outstrip the Classic Load Balancer's capabilities, and workarounds such as X-Forwarded-For headers and proxy protocols frustrate users and hinder adoption, Casemore said. That's a problem for AWS, which strives to reduce areas of friction for its customers.
"How much friction does a customer have to go through in order to get application availability from instances running on AWS' cloud?" Casemore said. "TCPs are rife in the enterprise world, and these sorts of features and functionality become more important and take away some of that friction when customers are moving to the cloud."
AWS recommends the Network and Application load-balancing options when working with EC2 instances inside a Virtual Private Cloud (VPC), which is now the default launch environment. Apps that run on EC2-Classic instances can continue to use the Classic Load Balancer.
Adapting to app development advances
Microservices and container-based applications have changed what customers require from AWS load balancing. Monitoring takes on greater importance in microservices workloads, for example, so IT teams can respond quickly when an app component crashes. AWS Network Load Balancer conducts health checks on both the network and the application, and it pushes traffic only to healthy targets. Additionally, AWS customers can integrate Amazon Route 53 for DNS failover into another availability zone.
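The application-level probing described above lives on the target group. Below is a minimal sketch of the health-check settings one might pass to the elbv2 `create_target_group` call (e.g., via boto3); the VPC ID, path and thresholds are illustrative placeholders, not recommendations.

```python
def build_target_group_request(name, vpc_id):
    """Build create_target_group parameters pairing TCP load
    balancing with an HTTP health check on the application."""
    return {
        "Name": name,
        "Protocol": "TCP",  # NLB balances at the transport layer
        "Port": 80,
        "VpcId": vpc_id,
        # Even though the NLB forwards raw TCP, it can probe an HTTP
        # endpoint, checking the application as well as the network.
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPath": "/health",
        "HealthyThresholdCount": 3,
        "UnhealthyThresholdCount": 3,
    }

tg_params = build_target_group_request("demo-targets", "vpc-0123")
```

If every target in a zone fails these checks, Route 53 health-checked DNS records can then shift traffic to a balancer in another zone.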
"Certain customers looking at high-value applications that they want to move to the cloud are going to want to make sure they have an application continuity and disaster recovery option," Casemore said.
Chris Riley, partner at cPrime, a consultancy specializing in Agile transformations in Foster City, Calif., started using Application Load Balancer shortly after its release, although his clients have yet to need additional routing at the transport layer. The service is handy for URL-based routing or container-based workloads, as containers can appear on different ports when scheduled through EC2 Container Service, he said. And customers can load-balance to multiple ports on the same instance and to on-premises targets, a priority among some hybrid cloud customers.
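The URL-based routing Riley mentions is expressed as listener rules on the Application Load Balancer. The sketch below shows the shape of a path-based rule as one might pass it to the elbv2 `create_rule` call (e.g., via boto3); the ARNs are truncated placeholders.

```python
def build_path_rule(listener_arn, path_pattern, target_group_arn, priority):
    """Build create_rule parameters that forward requests matching a
    URL path pattern to a specific target group (e.g., one service's
    containers, wherever the scheduler placed them)."""
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,  # lower numbers are evaluated first
        "Conditions": [{"Field": "path-pattern", "Values": [path_pattern]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

# Placeholder ARNs for illustration only.
rule = build_path_rule(
    "arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/demo/abc",
    "/api/*",
    "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/api/def",
    priority=10,
)
```

Because each rule targets a group rather than fixed instance ports, containers registered on dynamic ports by the scheduler are picked up automatically.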
It's easy enough to replace load balancers to reduce downtime for latency-sensitive workloads and large amounts of traffic, Riley said, but not all of his clients want to shift to containers and REST APIs. Simpler workloads, such as traditional web apps, might not need the new load balancer's capabilities, at least for now. And AWS' insistence on using layer-specific load balancers for VPC-based instances means cPrime will work with the Network Load Balancer in the future.
AWS' load-balancing enhancements could convert users who sought services and software from businesses like F5, Barracuda or Cisco, and open source tools, such as NGINX and HAProxy -- much as its Application Load Balancer could pull business away from application delivery controller vendors. Nevertheless, some customers want greater integration between the application- and network-specific load balancers beyond an API that currently links the two, and they want to extend that control all the way up the stack, Casemore said.
"Right now, they have to do that through conjoining the two, but that's a little pricier. Some would like to see the features melded," he said.
David Carty is the site editor for SearchAWS. Contact him at firstname.lastname@example.org.