
AWS managed services improve performance, cost efficiency

In this book excerpt from Effective DevOps with AWS, learn how to drive performance improvements and reduce costs with data management services.

Data management can be cumbersome for IT teams that need to deploy and manage applications in the cloud. And, when not done properly, performance suffers, and costs increase.

AWS managed services, such as Amazon ElastiCache, along with Amazon CloudFront, can ease data management practices and significantly improve application performance. Additionally, these offerings can help AWS users decrease their costs.

In this excerpt from Effective DevOps with AWS by Nathaniel Felsen, a staff software engineer at Tesla, the author examines how AWS managed services can play an integral role in application scalability, alleviate database strain and lower latency across the platform.

Editor's Note: In the previous sections of this chapter excerpt, the author describes an architecture that consists of an Elastic Load Balancer directing traffic to several EC2 instances in an Auto Scaling group, backed by a SQL database. This architecture is intended to increase scalability and avoid a single point of failure.

Improving performance and cost saving

That new architecture design will typically let you scale your service to hundreds of thousands of users. The next logical step will be to look at how to improve performance and lower costs.

As mentioned previously, one of the most compelling reasons to use AWS is the number of managed services that can be used in conjunction with your application. A lot of those services are geared toward very specific needs. For instance, if you do a lot of image transcoding, you may look into Elastic Transcoder, or if you need a reliable system to send emails, you could look at SES [Amazon Simple Email Service], but some other services are more ubiquitous. In the previous section, we talked about adding more computing resources to handle more traffic. With our current model, as average CPU utilization gets higher, we add more EC2 instances, use bigger instance types, or add more read replicas to the data tier. This solution is easy to implement and works great at first, but over time, it becomes somewhat expensive. There are a number of ways we can improve the situation and make our application work smarter by reusing previously computed data through a caching layer. AWS has a service called ElastiCache that comes in handy in those situations.

ElastiCache

ElastiCache is a managed service that lets you create in-memory key-value stores. At a very high level, you create a cluster and then update your application to use it. The changes are fairly simple. Let's imagine a phonebook application. Whenever we want to retrieve the address of a business, we need to access our database to retrieve that information. For popular businesses such as banks or post offices, it is likely that the exact same query will be run very frequently. Instead of always reaching out to our database, we can update our application logic to check the cache first and, if the information isn't cached, retrieve it from the database and then cache the result:

Sub get_address(name)
    address = cache.get(name, "address")
    If address Is Empty Then // cache miss
        address = db.query("SELECT address FROM businesses WHERE name = ?", name)
        cache.set(name, "address", address)
    End If
    Return address
End Sub

With that system, the first time a business is looked up, we end up accessing our database, but after that, the data is added to ElastiCache, and subsequent calls are quicker to execute and don't require accessing our database.
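To make that pseudocode concrete, here is a minimal Python sketch of the same cache-aside pattern, assuming a Redis-based ElastiCache cluster. The endpoint name and the db.query helper are placeholders for your own infrastructure and data-access layer, not part of the book's example:

import redis

# Hypothetical ElastiCache Redis endpoint; replace it with the primary
# endpoint shown in the AWS console for your cluster.
cache = redis.Redis(
    host="my-cluster.abc123.use1.cache.amazonaws.com",
    port=6379,
    decode_responses=True,
)

def get_address(name, db):
    """Cache-aside lookup: try ElastiCache first, fall back to the database."""
    address = cache.hget(name, "address")
    if address is None:  # cache miss
        # db.query stands in for whatever data-access layer you already use.
        address = db.query(
            "SELECT address FROM businesses WHERE name = %s", (name,)
        )
        cache.hset(name, "address", address)
        cache.expire(name, 3600)  # keep cached entries around for an hour
    return address

The expiration is a design choice rather than a requirement: without it, stale addresses would live in the cache until they are explicitly invalidated or evicted.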


ElastiCache currently supports two of the most well-known open source projects in the category, Redis and Memcached. In recent years, Redis has become the more attractive option, as it supports more data types, can be configured to be highly available and can store bigger keys than Memcached. In some specific scenarios, Memcached might still make sense, as it has more efficient internal memory management and is multithreaded.

By making very minimal changes to your application, you will be able to rely on ElastiCache to take some pressure off the database and lower overall latency, as accessing keys in ElastiCache is much faster than querying any well-known database. In addition to that speed benefit, using ElastiCache will allow us to scale down the number of RDS [Relational Database Service] read replicas we need, saving us a bit of money.

There are a number of ways to take advantage of this service. You can use it to store database query results, as we saw in our example, but you can also integrate it higher in your stack and store computed results, HTML snippets, images, or even counters.
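As a small illustration of the counter and HTML-snippet use cases mentioned above, here is a hedged Python sketch; the endpoint, key names and TTL are made-up values, and the functions are illustrative helpers rather than anything prescribed by the book:

import redis

# Hypothetical endpoint; in practice this is the same cluster as before.
cache = redis.Redis(
    host="my-cluster.abc123.use1.cache.amazonaws.com",
    port=6379,
    decode_responses=True,
)

def record_page_view(page_id):
    # INCR is atomic, so many application servers can share one counter
    # without ever touching the relational database.
    return cache.incr(f"pageviews:{page_id}")

def cache_html_fragment(fragment_id, html, ttl_seconds=300):
    # Store a rendered HTML snippet so subsequent requests skip templating.
    cache.set(f"fragment:{fragment_id}", html, ex=ttl_seconds)

def get_html_fragment(fragment_id):
    # Returns None on a cache miss, in which case the caller re-renders.
    return cache.get(f"fragment:{fragment_id}")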

You can read more about it at http://amzn.to/2gKWCKe.

As we just saw, that caching strategy works well for certain types of dynamic content, and it also helps with static content. For completely static content, we can do even better than ElastiCache by moving our caching layer closer to our users through a content delivery network (CDN) such as CloudFront.

CloudFront

Our application is currently hosted in the US-East-1 region, which is physically located in Northern Virginia. Any visitor anywhere in the world needs to connect to and retrieve data from the eastern United States to use our application. As we all know, speed matters. Amazon published figures a few years ago showing that 100 milliseconds of latency on its e-commerce site could result in a 1% loss of potential sales. Consider users trying to open our application from Australia: just to establish a TCP connection, the three-way handshake (SYN, SYN-ACK, ACK) means exchanging packets with an endpoint roughly 10,000 miles (16,000 km) away.

Even at the speed of light (186,000 miles per second), that exchange adds over 100 ms of latency, and all of it happens before the first byte of data is transferred. To improve the user experience, one solution is to take advantage of CloudFront, the CDN from AWS. By uploading all static assets such as HTML, CSS, images, and client-side JavaScript to an S3 bucket and adding a CloudFront distribution in front of it, we accomplish two goals at once:

  • We first make the application much faster to load for users, as they now download assets from data centers near their physical location instead of from Northern Virginia.
    With a single origin in US-East-1, all users retrieve the application from that one region.
  • Transferring data over HTTP is a very common and well-understood need, and services such as CloudFront are much better suited to that task than our application. By making this change, we will see fewer requests hitting our application, taking some load off our EC2 instances.
    CloudFront distributes the load across multiple edge endpoints, removing stress from the EC2 instances.
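To make the static-asset side of this concrete, here is a minimal boto3 sketch of uploading a directory of static files to the S3 bucket that a CloudFront distribution will use as its origin. The bucket name, directory layout and cache lifetime are placeholders; the Cache-Control header is what lets the edge locations serve repeat requests without going back to US-East-1:

import mimetypes
import os

import boto3

s3 = boto3.client("s3")
BUCKET = "helloworld-static-assets"  # hypothetical bucket fronted by CloudFront

def upload_static_assets(local_dir):
    """Upload static files to S3 with content types and long cache lifetimes."""
    for root, _dirs, files in os.walk(local_dir):
        for filename in files:
            path = os.path.join(root, filename)
            key = os.path.relpath(path, local_dir)
            content_type, _ = mimetypes.guess_type(path)
            s3.upload_file(
                path,
                BUCKET,
                key,
                ExtraArgs={
                    "ContentType": content_type or "application/octet-stream",
                    # A day at the edge; tune this to how often assets change.
                    "CacheControl": "public, max-age=86400",
                },
            )

The CloudFront distribution itself then simply points at this bucket as its origin, whether you create it through the console, the CLI or a CloudFormation template.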

You can read more about CloudFront at http://amzn.to/2gvlylO.

After adding ElastiCache and CloudFront to your application, your infrastructure may look like this:

Revamped architecture with these AWS managed services creates a more scalable, more available application.

With this approach of relying on AWS managed services to complement our monolithic application, we can, with very few changes to our application, scale our current stack even further, improve the user experience, and in certain cases even save money. The next steps in scaling our application will require changing the logic of certain parts of the application itself.

Packt Publishing is offering a special offer on Effective DevOps with AWS by Nathaniel Felsen for SearchAWS readers. Follow this link for a free download of the rest of this chapter, and use the code ORTTA50 at checkout to save 50% on the recommended e-book retail price until March 31, 2018.
