While creating a new application that will reside on Amazon Web Services, a developer must adapt and evolve beyond traditional methodologies and techniques to take advantage of the power and flexibility offered by Amazon's robust cloud infrastructure. Cloud-based applications require forethought as to how they will be deployed and how they will scale in response to demand.
I have identified these key areas as "must haves" for an application to be successful and economical while running on the AWS platform:
- Elasticity
- Loose coupling
- Planning for failure
- Portability
- Remote logging and debugging capabilities
While not an exhaustive list, ensuring your application has these key components will give you a solid foundation to build upon.
Elasticity
Amazon Web Services (AWS) provides all of the tools developers need to make an application elastic, through the AWS Management Console and the many different APIs. These tools aid in the configuration and management of auto scaling groups. Both scale-up and scale-out models are supported and encouraged.
Auto scaling groups function by monitoring metrics provided by Amazon CloudWatch and taking action based on pre-defined thresholds, or via a set schedule. Auto scaling groups can then launch new instances to scale out, or terminate instances to scale in, based on demand.
Elastic load balancers (ELBs) should also be used in conjunction with auto scaling groups. An auto scaling group configuration can define an ELB that a newly launched instance should automatically attach to. The application then inherently gains the health checks, fault tolerance and load balancing capabilities that ELBs provide.
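The threshold-driven behavior described above can be sketched as the parameters one might pass to boto3's `put_scaling_policy` and `put_metric_alarm` calls. The group, policy, and alarm names below are illustrative assumptions; the actual AWS calls are shown but not executed.

```python
# Sketch: a simple scale-out policy driven by a CloudWatch CPU alarm.
# Group/policy/alarm names are illustrative assumptions.

def scale_out_policy(group_name):
    """Parameters for autoscaling.put_scaling_policy: add one instance."""
    return {
        "AutoScalingGroupName": group_name,
        "PolicyName": group_name + "-scale-out",
        "AdjustmentType": "ChangeInCapacity",
        "ScalingAdjustment": 1,   # launch one more instance
        "Cooldown": 300,          # seconds to wait between scaling actions
    }

def high_cpu_alarm(group_name, threshold=70.0):
    """Parameters for cloudwatch.put_metric_alarm: fire when the group's
    average CPU stays above the threshold for two 5-minute periods."""
    return {
        "AlarmName": group_name + "-high-cpu",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": group_name}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# With boto3 (not executed here), the policy ARN returned by
# put_scaling_policy would go into the alarm's AlarmActions:
#   boto3.client("autoscaling").put_scaling_policy(**scale_out_policy("web-asg"))
#   boto3.client("cloudwatch").put_metric_alarm(**high_cpu_alarm("web-asg"))
```

A matching scale-in policy would use a `ScalingAdjustment` of -1 tied to a low-CPU alarm.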
Loose coupling
While developing an application, it is important to build its components with a very loosely coupled approach. Common components include a Web layer, a database layer and a middleware layer. Design the components of the application so they are stateless and can be simply thrown away or terminated. Try to avoid any dependencies on a single instance. This will allow auto scaling groups to work their magic without any manual intervention.
Developers should take advantage of other services on the AWS platform, such as Simple Queue Service for queuing, Relational Database Service for databases, Simple Notification Service for notifications, and Simple Workflow Service for workflows. These services help alleviate traditional dependencies and prevent the re-invention of the wheel.
Planning for failure
Failure is unavoidable, and in a large enough application it is always present in some way, shape or form. To keep an application running on AWS highly available, developers should take advantage of the following techniques:
- Use multiple availability zones. An availability zone can be equated to a separate physical data center. By launching instances into multiple availability zones, an application can survive the loss of an entire zone. ELBs can direct traffic for the application across these zones.
- Configure the auto scaling group's minimum setting. This tells AWS to ensure there are a set minimum number of instances running for the application at all times.
Pro-tip: Use auto scaling groups for everything, even single standalone instances. By setting the group's minimum to one (1), AWS will launch a replacement if the single instance dies.
- Make use of AWS' multiple regions across the world. By placing the application's presence in more than one region, its chances of surviving a major outage increase dramatically.
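The multi-availability-zone and minimum-size points above can be sketched as the parameters for boto3's `create_auto_scaling_group` call. The group name, launch configuration, and zone list are illustrative assumptions, and the call itself is shown but not executed:

```python
def resilient_group_params(name, launch_config, zones,
                           min_size=2, max_size=6, elb_names=None):
    """Parameters for autoscaling.create_auto_scaling_group spanning several
    availability zones; AWS keeps at least min_size instances alive."""
    params = {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config,
        "AvailabilityZones": list(zones),  # survive the loss of one zone
        "MinSize": min_size,               # replacements launch automatically
        "MaxSize": max_size,
    }
    if elb_names:
        # Newly launched instances attach to these ELBs automatically.
        params["LoadBalancerNames"] = list(elb_names)
    return params

# The "pro-tip" pattern: a standalone instance that self-heals.
standalone = resilient_group_params(
    "scheduler", "scheduler-lc", ["us-east-1a", "us-east-1b"],
    min_size=1, max_size=1)

# With boto3 (not executed here):
#   boto3.client("autoscaling").create_auto_scaling_group(**standalone)
```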
The adage "If you plan for failure, nothing will ever fail" is especially true when deploying an application on AWS.
Portability
An application architected for the AWS platform should be fully portable. In this case, portability refers to the ability to quickly re-launch the application from scratch via nothing but code. A real-world example of this would be choosing to launch your application into multiple regions.
The application and its AWS configurations should be entirely defined in code. Whether a developer chooses to utilize a configuration management system such as OpsWorks or Puppet, or chooses to write their own deployment scripts, this concept applies.
The promise and power of AWS come in the form of two words: "software" and "defined." By defining the application and its supporting infrastructure in code, it can be reused anywhere at any time. Having the entire stack defined in code also mitigates operational risk by reducing human interaction, which ensures consistency and enables end-to-end testing, rapid deployment and increased agility.
By making use of a version control repository (such as Git or Subversion), developers gain the ability to track and roll back changes for the stack as a whole.
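As one illustration of a stack defined entirely in code, the snippet below assembles a minimal CloudFormation template as a plain Python dictionary. The resource name, AMI ID, and instance type are assumptions; the point is that the same artifact can be checked into version control and launched in any region.

```python
import json

def minimal_template(ami_id, instance_type="t2.micro"):
    """A tiny CloudFormation template -- one EC2 instance -- defined as data.
    Resource and parameter names are illustrative assumptions."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {"ImageId": ami_id,
                               "InstanceType": instance_type},
            }
        },
    }

# Serialize for check-in to Git/Subversion; with boto3 (not executed here)
# the same string can launch the stack in any region:
#   boto3.client("cloudformation").create_stack(
#       StackName="web", TemplateBody=template_json)
template_json = json.dumps(minimal_template("ami-12345678"), indent=2)
```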
Remote logging and debugging capabilities
Traditionally, when an application encounters errors, a developer would log onto the server hosting it and attempt to diagnose the problem.
In the highly dynamic world of AWS, it is a recommended practice to abandon that traditional way of thinking and treat an instance as a throw-away container. Logs should be centralized in one location, whether via a traditional syslog collector or the new CloudWatch Logs agent feature, which sends log data directly to CloudWatch Logs from within EC2 instances. With everything in one place, a developer can be sure to have all of the information needed to diagnose and remedy a problem.
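As a sketch of the CloudWatch Logs agent approach, the fragment below shows the general shape of an agent configuration that ships an application log off the instance. The file path, log group name, and timestamp format are illustrative assumptions; adjust them to match your application.

```ini
; Sketch of a CloudWatch Logs agent configuration (awslogs.conf).
; Paths and names below are illustrative assumptions.
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/myapp/app.log]
file = /var/log/myapp/app.log
log_group_name = myapp
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
```

Because the stream name is keyed to the instance ID, logs survive and remain attributable even after the instance that produced them is terminated.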
It is important to choose development tools that provide remote debugging capabilities. This becomes imperative due to the stateless nature of the instances the application is deployed on. Most development environments today have support for AWS in one manner or another -- usually via plug-ins. Developers should check for this capability in the tools they use daily. These features help to make everyone's job much easier.
About the author:
Timothy J. Patterson is a Cloud and Virtualization Systems Engineer, currently working at ProQuest out of Ann Arbor, Michigan. Focused primarily on Amazon Web Services and VMware technologies, Tim is a 2014 vExpert and is currently in possession of the VCAP5-DCA, VCAP5-DCD, and VCP5-DCV technical certifications from VMware, as well as the AWS Certified Solutions Architect – Professional level certification from Amazon. Additionally, Tim holds a Bachelor of Science degree from Saginaw Valley State University in Computer Information Systems. Tim recently became vDM002, winning the second season of the Virtual Design Master competition.