

Native tools prove useful in AWS performance testing

A host of tools exist to test application performance in AWS. One company explains how it used AWS Lambda and other native tools to reduce costs and keep QA and engineering happy.

AWS performance testing is sometimes an afterthought in software development. It can be difficult to reliably simulate the load that a collection of applications and their APIs endure. That means software engineers and QA don't necessarily know the performance characteristics of their code until after it goes live. Developers tend to focus on application logic, which is easy to test during development. But they might not be aware of the performance implications of the code until it runs on live or simulated infrastructure.

To load test an application, quality assurance (QA) taxes the app with many simultaneous requests, which can be a challenge to generate from a single server. You can use AWS Lambda to build a better performance stack than existing commercial tools. Lambda enables QA to spin up chains of functions that simulate loads from thousands of users. Engineers can set up a master Lambda function that spawns however many slave functions the test requires, and the master can gradually add more slaves to increase the load.

A master controller eases Lambda performance tests compared with orchestrating the individual functions by hand. Engineers can also link one master controller to other masters, further improving scalability by layering the fan-out. For example, one controller Lambda function could bring up 10 more controller functions, which in turn bring up 1,000 slave functions to run the test.
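The fan-out pattern described above can be sketched in Python. This is an illustrative sketch, not Accusoft's implementation: the function name `perf-test-worker`, the payload shape, and the fan-out factor are all assumptions.

```python
import json

WORKERS_PER_CONTROLLER = 100  # hypothetical fan-out factor


def build_worker_payloads(total_workers, target_url):
    """Split the requested load into one payload per slave function."""
    return [
        {"worker_id": i, "target": target_url}
        for i in range(total_workers)
    ]


def handler(event, context):
    """Controller Lambda: spawn the slave functions that generate load."""
    import boto3  # deferred import so the module loads without the AWS SDK

    client = boto3.client("lambda")
    payloads = build_worker_payloads(event["total_workers"], event["target"])
    for payload in payloads:
        # The "Event" invocation type is asynchronous, so the controller
        # is not blocked while the slaves run their share of the test.
        client.invoke(
            FunctionName="perf-test-worker",  # hypothetical function name
            InvocationType="Event",
            Payload=json.dumps(payload),
        )
    return {"spawned": len(payloads)}
```

A second tier of controllers would follow the same pattern, with each controller invoking more controllers instead of workers.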

Accusoft Corp., a document processing and viewing service based in Tampa, Fla., discovered a unique use for Lambda.

"It is one of the few things we had QA and engineering buy into," said Michael Pardue, senior software engineer at Accusoft. When Pardue tried to use existing tooling, his team had issues generating the load they wanted. So he performed a half-day proof of concept using Lambda.

"I put something together that let me tackle [AWS] performance testing quickly, so it seemed worth following that path. I figured if it was that easy, then I might as well scale it up."

In this iteration, Accusoft aimed to generate a certain level of concurrent requests. Pardue experimented with driving up to 10,000 concurrent calls against a given API, but that approach didn't yield consistent data. It's a much better practice to ramp up requests gradually.

Lambda limitations, such as function timeouts after five minutes, create problems. IT teams can circumvent them by chaining each Lambda function to call the next iteration of itself. This chain enables the performance test to run for an indefinite length of time.

"It was kind of scary the first time I ran it," Pardue said. "I was worried about what might happen if there was an endless loop."

Fortunately, testing wasn't an expensive endeavor for Accusoft. Despite performing a million requests per day, its total Lambda bill was about $9 per month.

CloudFormation manages AWS performance testing


While QA can interact with Lambda functions via API calls, Pardue said AWS CloudFormation templates simplify library setup and updates. CloudFormation templates allow developers and testers to specify a collection of cloud services to invoke for live applications or performance testing. Developers can reuse or adjust these collections of services for different use cases or types of tests. With this approach to AWS performance testing, engineers can associate a group of Lambda functions for different types of performance tests on different applications.
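A stack like the one described above might be declared along these lines. This is an illustrative CloudFormation sketch, not Accusoft's template: the function names, code locations, runtime, and role are all placeholders.

```yaml
# One stack groups the controller and worker functions for a given test.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  PerfTestController:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: perf-test-controller   # placeholder name
      Runtime: python3.9
      Handler: controller.handler
      Role: !GetAtt PerfTestRole.Arn
      Timeout: 300
      Code:
        S3Bucket: my-perf-test-bucket      # placeholder bucket
        S3Key: controller.zip
  PerfTestWorker:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: perf-test-worker       # placeholder name
      Runtime: python3.9
      Handler: worker.handler
      Role: !GetAtt PerfTestRole.Arn
      Timeout: 300
      Code:
        S3Bucket: my-perf-test-bucket
        S3Key: worker.zip
  PerfTestRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: {Service: lambda.amazonaws.com}
            Action: sts:AssumeRole
      Policies:
        - PolicyName: allow-invoke         # controllers must invoke workers
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: lambda:InvokeFunction
                Resource: "*"
```

Deploying a variant of this template per application or test type keeps each group of functions associated with its performance testing stack.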

"CloudFormation gives you a declarative mechanism for what tests are associated with a performance testing stack," Pardue said. And that can better organize a project. With this approach, QA executes either a single testing load or an orchestrated set of testing loads. A single testing load is useful to answer questions, such as what type of server makeup is required to handle a particular application load. An orchestrated set of testing loads identifies failure points or areas where system integrity fails.

Simplify results aggregation

Log aggregation is the biggest challenge to ensuring the system works effectively. At first, Pardue did not link Lambda functions in a chain; he also collected results locally before aggregating them. Amazon CloudWatch proved a better tool to query the Lambda log streams and aggregate the data. Pardue then post-processed the resulting logs to answer the engineers' queries or generate the graphs they wanted.

For example, an engineer might want to know how much load the application can handle before performance suffers. One server might be able to handle 90 requests per minute, which could be the same regardless of the number of concurrent users. This log processing makes it easier to calculate scaling that enables a specified performance level on the live system.
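The kind of post-processing described above can be sketched as a small script that buckets per-request log records into requests per minute. The log line format here is an assumption; the real format depends on what the worker functions log.

```python
from collections import Counter
from datetime import datetime


def requests_per_minute(log_lines):
    """Count successful requests per minute from worker log lines.

    Assumes each line looks like "2017-06-01T12:00:03Z OK 145ms";
    this format is illustrative, not what Accusoft's workers emit.
    """
    buckets = Counter()
    for line in log_lines:
        timestamp, status, _latency = line.split()
        if status != "OK":
            continue  # count only successful requests toward throughput
        minute = datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%SZ").strftime("%H:%M")
        buckets[minute] += 1
    return dict(buckets)


lines = [
    "2017-06-01T12:00:03Z OK 145ms",
    "2017-06-01T12:00:41Z OK 150ms",
    "2017-06-01T12:00:59Z ERROR 0ms",
    "2017-06-01T12:01:02Z OK 149ms",
]
print(requests_per_minute(lines))  # {'12:00': 2, '12:01': 1}
```

From a table like this, engineers can read off the sustained request rate a server handles before latency or error rates climb, and size the live fleet accordingly.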

A best practice is to set up data aggregation, which involves developing a scalable way to get data out of the system for processing. Teams can use that data to decide how to execute Lambda functions.

Next Steps

Learn how to set up and configure Lambda functions

What QA options are available for AWS testing?

AWS performance testing and monitoring can optimize your service
