The primary use case for AWS CloudFront lies in optimizing the delivery of static content to users. But Amazon has raised the bar for content distribution networks with features for accelerating dynamic content as well. This makes the service useful beyond video delivery, for applications such as speeding the distribution of user-generated and personalized content. Amazon also charges the same for distributing dynamic content as it does for static content; these costs start on par with AWS S3 but can drop as volume grows.
For example, TwitPic was one of the early adopters of CloudFront for improving the performance of its image sharing application. The service allows users to post images to Twitter and other social media platforms. "We used CloudFront to deliver user-uploaded images to our 50 million registered users. We served over 50 billion monthly API calls using CloudFront," said Steve Corona, formerly CTO of TwitPic and currently Principal Architect at Bigcommerce.
Leveraging CloudFront allowed the small startup to scale without having to build a large IT department. The company would have otherwise faced a much greater challenge in growing cost-effectively. Corona explained, "We were delivering petabytes of traffic per month. Originally we did this using Amazon S3 because, well, we didn't know what we were doing. It turned out CloudFront was actually cheaper for the amount of traffic that we were serving. To top that, it was much faster than serving data straight out of S3."
Under the hood
A typical website includes a mix of static and dynamic content. Static content includes assets such as images and style sheets that are the same for all users and are best cached at the edges of the content distribution network (CDN). Dynamic content includes information that changes frequently or is personalized based on user preferences, behavior, location or other factors. Even so, much of this so-called dynamic content can still be cached for varying periods.
Dynamically generating parts of a website increases network latency, the number of connections to the origin server and the CPU load on that server. To address this, CloudFront provides a framework for processing query string parameters and cookies, making it possible to deliver an entire website without separating out static and dynamic content or managing multiple domains.
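The idea of folding query strings into the cache decision can be sketched as follows. This is an illustrative model, not CloudFront's actual implementation: the whitelisted parameter names are assumptions, and the point is that only forwarded parameters create distinct cached variants, so tracking parameters don't fragment the cache.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Hypothetical whitelist: only these query parameters produce distinct
# cached variants of an object.
FORWARDED_PARAMS = {"lang", "page"}

def cache_key(url):
    """Build a cache key from the path plus whitelisted, sorted query params."""
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query) if k in FORWARDED_PARAMS]
    params.sort()  # normalize ordering so ?a=1&b=2 and ?b=2&a=1 share a key
    return parts.path + ("?" + urlencode(params) if params else "")

# A tracking parameter that is not forwarded does not create a new variant:
cache_key("/articles?page=2&utm_source=mail")  # -> "/articles?page=2"
```

Sorting the parameters means two URLs that differ only in parameter order map to one cached copy, which keeps the hit rate up.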
CloudFront also allows application developers to configure time-to-live (TTL) values for each file by setting Cache-Control headers on files saved on the origin server. CloudFront uses these headers to determine how frequently an edge location retrieves an updated file from the origin server. When files change frequently, the developer should set a short TTL, which can be as low as zero seconds.
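An origin might choose those headers per content type along these lines. The specific TTL values and the five-minute default are assumptions for illustration; the mechanism shown (a `max-age` directive, or `no-cache` for a zero TTL) is standard HTTP caching.

```python
# Hypothetical TTLs per content type; an edge cache honors the origin's
# Cache-Control header when deciding how long to keep a file.
TTL_SECONDS = {
    "image/png": 86400,     # static images: cache for a day
    "text/css": 86400,
    "application/json": 0,  # frequently changing data: TTL of zero
}

def cache_control_header(content_type):
    ttl = TTL_SECONDS.get(content_type, 300)  # assumed default of 5 minutes
    if ttl == 0:
        return "no-cache"   # zero TTL: check with the origin on every request
    return f"max-age={ttl}"

cache_control_header("image/png")  # -> "max-age=86400"
```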
Cache behavior basics
It is prudent to review the cache settings in use to ensure that users get fresh data. CloudFront uses a variant of the Least Recently Used (LRU) caching algorithm. After content has been replicated in the CloudFront cache, users can pull data from the cache rather than directly from the origin server. If the TTL setting for the content is too short, users will effectively pull content directly from the origin server. On the other hand, if the TTL is set for a longer duration, it's important to ensure that the application is configured to send an invalidation request when content changes before the TTL expires.
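How LRU eviction and TTL expiry interact can be shown with a toy cache. This is a minimal sketch of the general idea, not CloudFront's implementation: entries expire on their own TTL, and when capacity is exceeded the least recently used entry is evicted first.

```python
import time
from collections import OrderedDict

class TTLCache:
    """Toy LRU cache with per-entry TTLs -- a sketch of the eviction idea."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> (value, expires_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        if key not in self.entries:
            return None                      # miss: caller fetches from origin
        value, expires_at = self.entries[key]
        if now >= expires_at:
            del self.entries[key]            # TTL elapsed: treat as a miss
            return None
        self.entries.move_to_end(key)        # mark as recently used
        return value

    def put(self, key, value, ttl, now=None):
        now = time.time() if now is None else now
        self.entries[key] = (value, now + ttl)
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

A very short TTL makes every `get` behave like a miss, which is exactly the "users effectively pull from the origin" situation described above.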
For example, the Star Media Group, Canada's largest online daily newspaper, uses CloudFront to deliver its content. With more than 3.3 million monthly active users, scale and performance are two important parts of the design. At the same time, different types of content need to be set to expire with different frequencies.
To address this challenge, the Star Media IT team set up 19 different cache behaviors for different content types. They also established design guidelines to limit the number of query strings and to use client-side cookies rather than server-side cookies. This makes the content more cacheable while reducing the number of variations of content to cache.
This approach also makes it possible to cache what is traditionally considered dynamic content. For example, popular search results are cached for two minutes, which speeds delivery of popular searches and reduces the load on the origin server, since it no longer has to regenerate results for popular queries on every request. This approach balanced caching with the need to keep search results current and ultimately led to a 50% improvement in response time.
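A set of per-content-type cache behaviors like Star Media's can be modeled as an ordered list of path patterns, where the first match wins and a catch-all default comes last. The patterns and TTL values below are assumptions for illustration (only the two-minute search TTL comes from the example above), and real CloudFront behaviors are configured in the distribution, not in application code.

```python
import fnmatch

# Hypothetical cache behaviors in priority order: first match wins,
# with a default behavior last.
BEHAVIORS = [
    ("/search*", 120),   # popular search results: cache for two minutes
    ("*.css", 86400),    # style sheets: cache for a day
    ("*.jpg", 86400),
    ("*", 300),          # default behavior for everything else
]

def ttl_for(path):
    """Return the TTL for the first behavior whose pattern matches the path."""
    for pattern, ttl in BEHAVIORS:
        if fnmatch.fnmatch(path, pattern):
            return ttl
    return 0

ttl_for("/search/popular")  # -> 120
```

Ordering matters: because matching stops at the first hit, more specific patterns must appear before the catch-all.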
Keeping it fresh
Data such as election results, sports scores and stock quotes can be cached for short periods ranging from a second to a minute, gaining the benefit of caching without sending out stale data, said Nihar Bihani, principal product manager at Amazon Web Services. For example, in 2012, National Public Radio delivered election results using CloudFront and a TTL setting of 30 seconds.
In other cases, content is generated for a particular geography, which makes it easier to cache it for short periods in a given region. For example, Earth Networks, which developed the popular Weather Bug application, uses CloudFront to cache weather data. Since users in a geographic area tend to hit the same edge server, they can pull in weather updates for their region without going back to the origin server for each request.
Bihani said this kind of personalization based on location is possible using specially crafted cookies and query strings, in which data about the user is recorded and becomes part of the cache key. He noted, "It is dynamic in the sense that it is personalized for each user, but at the same time the data can be cached, and so everyone in San Francisco can get a cached copy rather than going to the Weather Bug origin server."
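The cookie-in-the-cache-key idea Bihani describes can be sketched in a few lines. The cookie name and key format here are assumptions for illustration; the point is that including the location value in the key lets everyone in one city share a single cached entry while users elsewhere get their own.

```python
def weather_cache_key(path, cookies):
    """Build a cache key that varies only on a hypothetical 'city' cookie."""
    city = cookies.get("city", "unknown")
    return f"{path}|city={city}"

# Two San Francisco users map to the same cached entry; other cookies
# (such as a session ID) deliberately play no part in the key:
weather_cache_key("/weather", {"city": "SF", "session": "abc"})
# -> "/weather|city=SF"
```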
The value of a cacheless CDN
The main benefit of a CDN lies in replicating content closer to users. However, even when the TTL is set at zero, website performance can still improve if CloudFront is used as a proxy that optimizes the connection to the origin server rather than as a cache, Bihani said. In these instances, the edge servers maintain a single, optimized Transmission Control Protocol (TCP) connection to the back-end server that can be shared across multiple users, reducing the number of hops a packet must take to reach the origin server.
When the first user requests content through the edge, there is some overhead and delay in setting up the connection with the origin server, but subsequent requests through the edge server are able to use the TCP connection that has already been established between the origin server and CloudFront edge servers, thus reducing delays in retrieving content.
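The benefit of that reused connection can be shown with a toy model. This deliberately simulates rather than opens real connections: only the first request through the "edge" pays the connection-setup cost, and later requests ride the established connection.

```python
class OriginPool:
    """Toy model of an edge server's persistent connection to the origin:
    the expensive setup happens once, then requests share the connection."""

    def __init__(self):
        self.handshakes = 0   # count of connection setups to the origin
        self.connected = False

    def request(self, path):
        if not self.connected:
            self.handshakes += 1  # first request pays the setup cost
            self.connected = True
        return f"response for {path}"

pool = OriginPool()
for path in ("/a", "/b", "/c"):
    pool.request(path)
pool.handshakes  # -> 1: three requests, one connection setup
```

Without the shared pool, each user would perform its own setup with the distant origin, which is where the latency savings for far-away users come from.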
Bihani explained, "The further away from the origin server, the more packet drops that happen on the Internet. Every time a packet drops, we have to retransmit. With CloudFront, not only does it use the persistent connection, but those are also optimized for performance." On a long connection -- such as a user in Singapore retrieving content from an origin server in Virginia -- using CloudFront as a proxy can cut latency in half. A user in New York would see some improvement, but not as much.