Caching involves identifying frequently used data and moving it into memory. Data in memory can be accessed much faster than data stored on disk, so caching can dramatically improve system and application performance. Amazon ElastiCache, the AWS cache service, adds another piece to the puzzle, providing memory-based storage for critical pieces of data so cloud users don't have to rely on slower storage resources, such as Amazon Simple Storage Service.
Cloud workload architects can connect a workload to Amazon ElastiCache to store important content, such as database query results or other computational output. The AWS cache service also supports a multitude of read-heavy applications -- such as messaging, media sharing and compute-intensive tasks -- and is useful in big data projects.
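A common way to store query results in ElastiCache is the cache-aside pattern: check the cache first and fall back to the database only on a miss. The Python sketch below assumes a Redis-compatible client; in practice, `cache` would be something like `redis.Redis(host=<node endpoint>, port=6379)`, and the key name and TTL shown are illustrative, not prescribed by the service.

```python
import json

def cached_query(key, cache, run_query, ttl=300):
    """Cache-aside lookup: return the cached result if present;
    otherwise run the query, cache the result with a TTL, return it."""
    hit = cache.get(key)
    if hit is not None:
        # Cache hit -- the database is never touched.
        return json.loads(hit)
    result = run_query()
    # setex stores the value with an expiration, bounding staleness.
    cache.setex(key, ttl, json.dumps(result))
    return result
```

On a hit, the backing query never runs; the TTL controls how stale a cached result is allowed to become before the next miss refreshes it.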
AWS handles all of the management, monitoring and operational issues involved with ElastiCache. This means cloud users can easily add a cache layer to their workloads without worrying about how it will perform or having to optimize cache behavior. Caching does not replace resource scaling, but it complements scaling and load balancing; the resulting performance boost may reduce the amount of scaling and associated load balancing needed to maintain cloud workload performance as traffic demands vary over time.
Amazon ElastiCache uses independent cache nodes, which provide predetermined quantities of network-attached memory. Each node runs a Memcached- or Redis-compliant caching service and has its own Domain Name System name and port, which a workload uses to connect to and integrate with that node. ElastiCache nodes are typically billed by the hour -- on-demand cache nodes -- or purchased for long-term use -- reserved cache nodes.
The AWS cache service currently supports 17 types of cache nodes in three standard classes -- t2, m3 and m4 -- and one memory-optimized class -- r3. Each node type has a unique amount of memory, one or more virtual CPUs (vCPUs) and varied levels of network access. Nodes range from a micro-sized instance -- cache.t2.micro with 1 vCPU, 550 MB and modest network performance -- to an extra-large cache instance -- cache.m4.10xlarge with 40 vCPUs, over 154 GB of memory and 10 GbE network performance.
Developers can deploy cache nodes individually, but a single node failure can then disrupt any application that relies on that resource. In practice, developers deploy ElastiCache nodes in clusters to boost resiliency and increase cache capacity. Developers can launch cache clusters through the AWS Management Console, command-line tools or ElastiCache APIs by specifying the identifier, type and quantity of nodes.
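Through the API, those same three choices -- identifier, node type and node count -- map onto the `create_cache_cluster` call in the boto3 SDK. A minimal sketch, assuming a `boto3.client('elasticache')` client; the cluster name and sizing below are illustrative, not recommendations:

```python
def launch_cluster(client, cluster_id, node_type, engine, num_nodes):
    """Launch an ElastiCache cluster, specifying the identifier,
    node type, cache engine and number of nodes."""
    return client.create_cache_cluster(
        CacheClusterId=cluster_id,
        CacheNodeType=node_type,
        Engine=engine,          # "redis" or "memcached"
        NumCacheNodes=num_nodes,
    )
```

For example, `launch_cluster(boto3.client('elasticache'), 'demo-cache', 'cache.t2.micro', 'redis', 1)` would request a single micro-sized Redis node.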
Cache parameter groups help developers configure clusters, enabling them to define the engine and other cache system settings. When developers do not specify custom parameters, AWS applies defaults optimized for the memory and compute resources of the cluster's node type. It is possible, however, to tailor parameters to optimize cache performance for the task at hand. Developers can review parameters through the same methods they use to launch clusters.
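Tailoring parameters via the API takes two boto3 calls: create a custom parameter group, then override the settings of interest. A sketch, assuming an `elasticache` client; the group name, family and parameter values below are illustrative (`maxmemory-policy` is a Redis engine parameter controlling eviction behavior):

```python
def tune_parameter_group(client, group_name, family, overrides):
    """Create a custom cache parameter group, then override selected
    engine parameters; anything not listed keeps the AWS default."""
    client.create_cache_parameter_group(
        CacheParameterGroupName=group_name,
        CacheParameterGroupFamily=family,   # e.g. "redis3.2"
        Description="custom cache tuning",
    )
    return client.modify_cache_parameter_group(
        CacheParameterGroupName=group_name,
        ParameterNameValues=[
            {"ParameterName": name, "ParameterValue": value}
            for name, value in overrides.items()
        ],
    )
```

For instance, `tune_parameter_group(client, "my-redis-params", "redis3.2", {"maxmemory-policy": "allkeys-lru"})` would make the cache evict the least recently used keys when memory fills, which suits a pure cache-aside workload.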