Back when developers built and deployed enterprise applications on premises, the arrival of cloud-based resources seemed like a breath of fresh air. Developers had to plan how an application would scale, accounting for more users, larger data sets and heavier load, which meant thinking well ahead of the wall the application would soon hit. They had to make sure enough hardware was on order, that it had a place in the data center, and that it was tested and in a good operational state.
AWS, like other public clouds, offers auto scaling. Think of it like renting a Prius that automatically converts into a Corvette when the gas pedal is slammed down, then back into a Prius when traffic slows.
Driven by Amazon CloudWatch metrics, AWS Auto Scaling lets users automatically scale the number of Amazon EC2 instances up or down within defined parameters. This means the number of EC2 instances in use increases automatically during demand spikes.
What's more, it automatically reduces the number of instances when the spikes subside. This feature is most useful for applications whose load varies widely.
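The idea behind scaling to a defined parameter can be sketched with a toy proportional rule: grow or shrink the fleet so that per-instance load approaches a target. AWS's actual target-tracking algorithms are more involved; this simplified function only illustrates the arithmetic, and all numbers are made up for the example.

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, min_size: int, max_size: int) -> int:
    """Simplified proportional scaling rule (NOT the real AWS algorithm):
    size the fleet so the per-instance metric approaches the target,
    clamped to the group's min and max sizes."""
    raw = current_capacity * (metric_value / target_value)
    return max(min_size, min(max_size, math.ceil(raw)))

# CPU spikes to 90% against a 50% target: the fleet grows.
print(desired_capacity(4, 90.0, 50.0, 2, 20))   # -> 8
# Load subsides to 20%: the fleet shrinks back, but never below min_size.
print(desired_capacity(8, 20.0, 50.0, 2, 20))   # -> 4
```

The clamp to a minimum and maximum is the part worth copying: it keeps a noisy metric from scaling a fleet to zero or to an unbounded bill.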
Beyond the auto scaling features of AWS, there are some tricks of the trade that those who build on the AWS public cloud platform should consider.
First, design the infrastructure by delegating tasks to independent services. This means using service-oriented architecture as a foundation: decompose the application, and technology instances such as a database, into sets of services that are bound together to form the application. Once things are separated into services, they can be scaled independently, which gives users far more options for scaling the application.
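One payoff of decomposition is that each service can scale on its own signal. The sketch below is hypothetical: the service names, metrics and sizes are illustrative, not AWS defaults.

```python
# Hypothetical per-service scaling settings after decomposition.
# Each service would get its own fleet and its own scaling rule.
SERVICE_SCALING = {
    "web-frontend": {"metric": "CPUUtilization", "target": 50.0, "min": 2, "max": 20},
    "order-worker": {"metric": "backlog_per_instance", "target": 100.0, "min": 1, "max": 50},
}

# The web tier and the worker tier scale on different signals, at
# different rates, without affecting one another.
for name, rule in SERVICE_SCALING.items():
    print(name, "scales on", rule["metric"])
```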
Then, make sure the application and its components are stateless. This provides more options for scaling the application, and is a good idea in general. Moreover, make sure the application is idempotent, so that retried requests or duplicate messages do no extra harm.
Next, as covered above, place the application in an Auto Scaling group, and the group will adjust the number of EC2 instances required to meet an increasing or declining load. Also, use Elastic Load Balancing to make sure each instance is utilized as evenly as possible, and turn sticky sessions off; sticky sessions pin a user to a single instance, which works against even distribution.
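In code, this step amounts to creating the group and disabling target group stickiness. The sketch below only builds the parameter dictionaries one might pass to boto3's `create_auto_scaling_group` and `modify_target_group_attributes` calls; the names, ARN and sizes are placeholders, and no AWS call is made.

```python
# Placeholder parameters for autoscaling.create_auto_scaling_group.
# Every value below is an illustrative stand-in, not a real resource.
asg_params = {
    "AutoScalingGroupName": "web-asg",
    "MinSize": 2,
    "MaxSize": 20,
    "DesiredCapacity": 2,
    "TargetGroupARNs": ["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"],
    "LaunchTemplate": {"LaunchTemplateName": "web-template", "Version": "$Latest"},
}

# Placeholder parameters for elbv2.modify_target_group_attributes:
# sticky sessions off, so the load balancer spreads requests evenly.
stickiness_off = {
    "TargetGroupArn": asg_params["TargetGroupARNs"][0],
    "Attributes": [{"Key": "stickiness.enabled", "Value": "false"}],
}

print(stickiness_off["Attributes"][0])
```

In a live environment these dictionaries would be unpacked into the corresponding boto3 client calls, with real ARNs and a real launch template.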
After that, leverage caches where it makes sense. Data-intensive services typically benefit the most; exactly where and when depends on the application.
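The mechanics look the same whether the cache is a shared service such as ElastiCache or something local. This runnable sketch uses a tiny in-process TTL cache purely as a stand-in; the "database" query is simulated.

```python
import time

class TTLCache:
    """Tiny in-process cache with a time-to-live; a stand-in for a shared
    cache service. Entries older than ttl_seconds are treated as misses."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]
        return None

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

cache = TTLCache(ttl_seconds=60)
calls = 0  # counts how often the slow path actually runs

def expensive_query(user_id: str) -> dict:
    global calls
    cached = cache.get(user_id)
    if cached is not None:
        return cached
    calls += 1                        # simulated trip to the database
    result = {"user": user_id}
    cache.put(user_id, result)
    return result

expensive_query("u1")
expensive_query("u1")                 # served from cache
print(calls)                          # -> 1
```

For a data-intensive service, that second call is the win: the database sees one query instead of two, which is often cheaper than adding instances.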
Finally, test and profile. This means testing the application on AWS, determining where the bottlenecks occur, and then designing around them. Doing so lets users decide how to best use AWS, and even design the application with scaling on AWS in mind. AWS can certainly accommodate the scaling needs of most applications; in some instances, though, it's necessary to design the application to take full advantage of the platform's scalability features.
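Profiling itself needs no special tooling to get started. A minimal sketch with Python's standard `cProfile` module, using an invented `serialize` function as the hot path under test:

```python
import cProfile
import io
import pstats

def serialize(rows):
    # Stand-in for a hot path discovered under load testing.
    return ",".join(str(r) for r in rows)

profiler = cProfile.Profile()
profiler.enable()
serialize(range(100_000))
profiler.disable()

# Summarize the most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Run this against real workloads on real instance types; a bottleneck found here is exactly the place where a cache, a decomposed service or a different scaling rule pays off.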