
How to optimize memory for AWS database applications

Taking advantage of SSD technologies for database applications can be challenging. Here are a few tips for tuning database applications on AWS.

Architecting database applications to take advantage of improvements in solid-state disk (SSD) technologies can be challenging, argues Brian Bulkowski, CTO of Aerospike, a NoSQL database technology company. Of particular note have been significant improvements in the ability to access data in parallel. These improvements now allow SSDs to achieve performance similar to random access memory (RAM) for many types of database applications at about one-eighth the cost, he said.

Over the last several years, SSD performance has grown by leaps and bounds while costs have dropped, compared to both traditional rotational disk drives and RAM. But taking advantage of these improvements requires making sense of the storage characteristics of AWS instance sizes, understanding application characteristics and using the right programming language.

Making sense of AWS options

AWS IaaS EC2 instances can be provisioned with different levels of storage:

a) Memory, which corresponds to RAM in a traditional physical computer

b) Instance storage, also known as ephemeral storage, which corresponds to disk size in a traditional physical computer

c) Flexible, persistent supplementary storage, such as EBS and S3, which loosely corresponds to network storage attached to a physical computer

Amazon now offers SSD as the default option for instance (ephemeral) storage and General Purpose (SSD) as the default EBS volume type, although older instance types don't default to SSD. The additional benefit of EBS is that the storage can persist after the database server itself has been decommissioned.

AWS also offers SSD storage as the default option for Amazon DynamoDB, and as an option for Amazon RDS and Amazon Redshift. These managed services can reduce development overhead for database applications. But when an enterprise needs to implement a different database, there can be wide variations in its ability to take advantage of the parallel nature of SSDs, Bulkowski said.

The physics of parallel data storage

Physical computers are typically built from three main types of storage. RAM sits on the motherboard next to the CPU, provides the highest performance, costs the most and loses its contents when the computer is powered down. SSDs and rotational disks are supplemental storage connected to the computer through onboard PCIe, SCSI or SATA interfaces, or externally via eSATA or Fibre Channel.

Rotational disks use a single read/write head assembly that can read one stream of data at a time across multiple physical platters. This works well when the data can be read sequentially, such as reading large media files like audio or video, and also works well for some kinds of database analytics applications, such as those built on Hadoop. However, rotational disk performance suffers when the head has to move across multiple sections of the platters to retrieve the data.
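To make the difference concrete, here is a minimal C sketch, not from the article, that times a sequential pass and a random pass of 4 KB reads against the same file. The file path and read counts are placeholder assumptions, and the OS page cache can mask the gap, so a fair test uses a file much larger than RAM.

```c
/*
 * Minimal sketch: time sequential vs. random 4 KB reads on one file.
 * On a rotational disk the random pass is far slower (head seeks);
 * on an SSD the two passes come out much closer.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK  4096   /* bytes per read */
#define NREADS 10000  /* reads per pass */

static double timed_pass(int fd, off_t filesize, int sequential)
{
    char buf[BLOCK];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < NREADS; i++) {
        off_t off = sequential
            ? (off_t)i * BLOCK % filesize                      /* next block */
            : (off_t)(rand() % (filesize / BLOCK)) * BLOCK;    /* random block */
        if (pread(fd, buf, BLOCK, off) < 0) { perror("pread"); exit(1); }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int fd = open("/data/testfile", O_RDONLY);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }
    off_t size = lseek(fd, 0, SEEK_END);
    printf("sequential: %.2fs\n", timed_pass(fd, size, 1));
    printf("random:     %.2fs\n", timed_pass(fd, size, 0));
    close(fd);
    return 0;
}
```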

In contrast, flash memory drives are physically constructed from hundreds or thousands of blocks spread over many chips that can each be accessed randomly without affecting the performance of accessing data from other blocks. Flash drives do have two bottlenecks. The first is the storage controller between the computer processor and the bank of individual chips. The second is that random data cannot be read from different blocks of an individual chip simultaneously.

Most database engines today don't take advantage of the ability to access random bits of data from flash drives, Bulkowski argued. As a result, the database is simply slower, or, if the access pattern can be cached, requires more RAM to achieve the same benefit. While RAM is faster than flash, it tends to cost about ten times as much for a given amount of storage capacity. At a physical level, he noted, RAM has much higher I/O capacity than SSDs, but costs about three to four times as much to operate, owing to its larger power requirements. These differences are reflected in the pricing of the various machine instance types available from Amazon Web Services.

Writing to the queue

The key to exploiting this parallel access across many chips lies in writing programs around a feature called queue depth. Increasing the queue depth allows the application to read or write data on different individual chips in an SSD in parallel, which has the net effect of improving database performance.

If the queue depth gets too large, the likelihood of trying to access different bits of data on the same individual chip grows, which hurts performance. Hence, Bulkowski argued, the best queue depth for most applications is only 32 to 64 concurrent requests per drive, even if the drive supports more. By optimizing the queue depth the database application uses to access the SSD, the application can achieve better performance with less need for the more expensive RAM, as the sketch below illustrates.
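The following C sketch is my own illustration, not Aerospike code: it uses the Linux libaio interface to keep 32 reads in flight at once, in line with the 32-to-64 sweet spot cited above. The device path is a placeholder; compile with -laio, and note that O_DIRECT requires aligned buffers.

```c
#define _GNU_SOURCE  /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define QUEUE_DEPTH 32   /* the per-drive sweet spot the article cites */
#define BLOCK       4096

int main(void)
{
    /* Placeholder device; a large file opened with O_DIRECT also works. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(QUEUE_DEPTH, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    struct iocb cbs[QUEUE_DEPTH], *cbp[QUEUE_DEPTH];
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        void *buf;
        if (posix_memalign(&buf, BLOCK, BLOCK)) return 1;  /* O_DIRECT needs alignment */
        /* Scatter offsets so requests land on different flash chips. */
        io_prep_pread(&cbs[i], fd, buf, BLOCK, (long long)(rand() % 100000) * BLOCK);
        cbp[i] = &cbs[i];
    }

    /* Submit all 32 reads at once; the kernel and the SSD controller
     * service them in parallel instead of one at a time. */
    if (io_submit(ctx, QUEUE_DEPTH, cbp) < 0) { fprintf(stderr, "io_submit failed\n"); return 1; }

    struct io_event events[QUEUE_DEPTH];
    int done = io_getevents(ctx, QUEUE_DEPTH, QUEUE_DEPTH, events, NULL);
    printf("%d reads completed\n", done);

    io_destroy(ctx);
    close(fd);
    return 0;
}
```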

At the application layer, developers need to think about how to implement the application to queue up requests to the storage system to be processed in parallel. But there are a lot of pitfalls in getting good parallelism in software, Bulkowski said. He noted that it is difficult to implement good parallelism using programming languages like JavaScript, Ruby and Python, since these languages don't have good support for implementing multiple threads; Java and C# make it a little easier.

C and C++ are the best languages for implementing highly parallel system code, as they expose core operating system functionality most directly. For example, mutexes (mutual exclusion locks), available through libraries such as POSIX threads, let developers safely coordinate many threads issuing low-level system calls in parallel. Another alternative is to start with a commercial database with built-in SSD storage optimizations, such as Aerospike.
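As a small illustration of that pattern, here is a hedged pthreads sketch, again my own example: several threads issue independent pread() calls, which is what actually builds queue depth at the drive, while a mutex protects the one piece of shared state. The file path and thread counts are placeholders.

```c
/* Parallel readers with a mutex-guarded shared tally. Link with -lpthread. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS 8
#define BLOCK    4096
#define READS_PER_THREAD 1000

static int fd;
static off_t nblocks;
static long total_bytes;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *reader(void *arg)
{
    char buf[BLOCK];
    unsigned seed = (unsigned)(long)arg;  /* per-thread RNG seed */
    long bytes = 0;
    for (int i = 0; i < READS_PER_THREAD; i++) {
        off_t off = (off_t)(rand_r(&seed) % nblocks) * BLOCK;
        ssize_t n = pread(fd, buf, BLOCK, off);  /* no shared file offset */
        if (n > 0) bytes += n;
    }
    pthread_mutex_lock(&lock);   /* mutual exclusion around shared state */
    total_bytes += bytes;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    fd = open("/data/testfile", O_RDONLY);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }
    nblocks = lseek(fd, 0, SEEK_END) / BLOCK;

    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, reader, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("read %ld bytes across %d threads\n", total_bytes, NTHREADS);
    return 0;
}
```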

Choosing the right architecture for the application

Not all database applications can make use of the ability of flash storage to access random data in parallel. Databases used for processing Web requests from a number of simultaneous users tend to see the best improvement from flash storage, Bulkowski said.

In contrast, analytics applications like Hadoop are parallel in some sense, but generally access data from the storage drives in large streams. For example, crunching a month's worth of user logs to identify user behavior tends to pull the data sequentially, and thus will not see as much benefit from moving to SSD. Between these two extremes are real-time analytics applications that involve a mix of random seeks and streaming of data.

Bulkowski suggested that one way of taking advantage of the cost differential between the tiers is to provision the database to read data from ephemeral storage for the best performance, backed by data stored on EBS for persistence. This approach provides the best blend of price and performance on AWS, he said.
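One way to picture that split is the toy C sketch below, an illustration of the idea rather than a recommended design: every write is appended to both a file on the instance's ephemeral SSD and a file on an EBS volume, but only the EBS copy is fsync'd, since the ephemeral copy is disposable and can be rebuilt from EBS after an instance is lost. Both mount points are placeholder assumptions.

```c
/* Toy two-tier append: fast reads from instance storage, durability on EBS. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define FAST_PATH    "/mnt/ephemeral/data.log"  /* placeholder instance-store mount */
#define DURABLE_PATH "/mnt/ebs/data.log"        /* placeholder EBS mount */

static int fast_fd, durable_fd;

/* Every record lands on both tiers; only the EBS copy is fsync'd,
 * because the ephemeral copy can be rebuilt from EBS. */
static int tiered_append(const char *rec, size_t len)
{
    if (write(fast_fd, rec, len) < 0) return -1;
    if (write(durable_fd, rec, len) < 0) return -1;
    return fsync(durable_fd);
}

int main(void)
{
    fast_fd = open(FAST_PATH, O_WRONLY | O_CREAT | O_APPEND, 0644);
    durable_fd = open(DURABLE_PATH, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fast_fd < 0 || durable_fd < 0) { perror("open"); return 1; }

    const char *rec = "user:42 score:17\n";
    if (tiered_append(rec, strlen(rec)) < 0) { perror("append"); return 1; }

    /* Reads would be served from FAST_PATH; after an instance loss, the
     * ephemeral copy is rebuilt from DURABLE_PATH before serving traffic. */
    return 0;
}
```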

Background processes need to be considered too

There are other subtle characteristics of database applications the enterprise architect should also consider. Robert Treat, CEO of OmniTI, a Web architecture consultancy, said that understanding how the database software makes use of RAM, and how it flushes data to disks, is extremely important for optimizing SSD usage. It's also important to assess the different ways a database will need to talk to the file system. The most obvious are heavy read loads where lots of back ends are competing for I/O. But other processes can include journaling systems, log file generation and the type of compaction processes required for background maintenance.

To help find the right balance, Treat recommended benchmarking using real-world deployments backed by strong metrics. This can help the enterprise determine how best to deploy and tune systems for SSDs. Between RAM and SSD, though, the biggest factor is understanding the size of the working set of data.

Configuring the right mix of SSD and RAM capacity involves more permutations as the database grows more complex. Treat said that a traditional database system, with a single primary and a number of secondary servers for failover, is straightforward to configure except at the disk level. A distributed database system, on the other hand, has more variability based on the number of nodes, the amount of RAM and the network setup.

Treat recommended: "In most cases though, if you focus on the technical strengths and operability of the database systems as the driver of hardware choice, the number of comparative systems you need to look at should be relatively small."

This was last published in January 2015
