
DynamoDB pricing option makes it a go-to for serverless apps

DynamoDB had its drawbacks, but the addition of on-demand billing and ACID transactions makes it a solid database option for serverless applications.

Amazon DynamoDB has long been a go-to database for AWS Lambda applications, and now, two new features help address some former drawbacks of the service.

First, because of the DynamoDB pricing model, a user had to plan for expected usage and provision capacity accordingly. Amazon tried to address this with Auto Scaling for provisioned throughput, but it took too long to scale and didn't handle occasional traffic spikes well. Additionally, a user couldn't reduce throughput to zero when the database wasn't in use. For these reasons, DynamoDB couldn't truly be considered serverless.

That all changed with DynamoDB on-demand. The feature, which users can enable on existing production tables with no downtime, adds flexibility through on-demand billing and high throughput without provisioning.

Get on-demand on demand

This latest DynamoDB pricing model removes the need for capacity planning. There's still a charge for storage, but instead of hourly read/write rates, there's a per-request fee of $1.25 per million write operations and $0.25 per million read operations.

For example, I had a table called APIKeys that needed to stay at a minimum capacity of 100 read operations per second because we'd often see a flood of requests come in at the same time. That total capacity largely went unused, but if there was a spike of more than 150 API calls at once, some of those requests would be delayed or fail. We also needed 100 write operations per second available, since we tracked the last login time for any API key. Our monthly cost was around $60 just for API key access, and we finished well below 1 million read/write requests per month. However, with on-demand pricing, our new estimated cost for the APIKeys table is $1.50/month -- a 97% savings.

However, if we had a consistent 100 read/write operations per second, we would expect to pay about $390 per month with on-demand DynamoDB pricing. As a result, users should still opt for the traditional provisioned option for applications with constant and predictable activity.
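The break-even math above can be sketched in a few lines. This is a rough estimate only, assuming the per-request rates quoted above and a 30-day month; actual AWS bills depend on region and rounding.

```python
# Rough on-demand cost model, assuming the article's rates:
# $1.25 per million writes, $0.25 per million reads.
WRITE_RATE = 1.25 / 1_000_000   # dollars per write request
READ_RATE = 0.25 / 1_000_000    # dollars per read request

SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 in a 30-day month

def on_demand_monthly_cost(reads_per_sec, writes_per_sec):
    """Estimated monthly on-demand bill for a steady request rate."""
    reads = reads_per_sec * SECONDS_PER_MONTH
    writes = writes_per_sec * SECONDS_PER_MONTH
    return reads * READ_RATE + writes * WRITE_RATE

# A constant 100 reads/sec plus 100 writes/sec works out to roughly
# $389/month, which is why steady, predictable workloads are still
# better served by provisioned capacity.
print(round(on_demand_monthly_cost(100, 100), 2))
```

A spiky workload that totals only a few million requests a month, by contrast, stays in the single digits of dollars, which is the APIKeys scenario above.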

How to estimate your on-demand bill

You can estimate your monthly DynamoDB on-demand pricing via the CloudWatch console. Click on Metrics > DynamoDB > Table Metrics. In the Search box, type ConsumedWriteCapacity, then select all of the associated metrics. Under the Graphed Metrics tab, choose sum for the statistic and 30 days for the time period.

[Screenshot: Set a time frame for write capacity unit counts.]

At the top of the graph, choose a time range of four weeks, and set the graph type to number. This will generate counts for the write capacity units that have been consumed over the last four weeks for each individual table.

When I did this, it showed a consumed write capacity of about 5 million operations per four weeks, which means I'd pay about $6.25 per month in write units. To see what your read capacity costs would be, search instead for ConsumedReadCapacity, and perform these same steps.
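The conversion from consumed units to dollars can also be scripted. Below is a minimal sketch; the 5 million write units are the example figure above, and in practice the counts would come from CloudWatch's GetMetricStatistics API (the full metric names are ConsumedWriteCapacityUnits and ConsumedReadCapacityUnits).

```python
# Turn four-week consumed-capacity sums -- the "sum" statistic shown in
# the CloudWatch graph -- into an estimated monthly on-demand bill.
# The unit counts below are this article's example figures; real numbers
# would be pulled from CloudWatch for each table.

def estimate_on_demand_bill(write_units, read_units):
    """Monthly estimate at $1.25/million writes and $0.25/million reads."""
    return (write_units / 1_000_000) * 1.25 + (read_units / 1_000_000) * 0.25

# ~5 million consumed write units over four weeks -> $6.25 in write costs.
print(estimate_on_demand_bill(5_000_000, 0))
```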

For our workload, this turned out to be a $10/month estimate in DynamoDB costs instead of the $600/month we currently pay for provisioned throughput.

ACID transactions added to DynamoDB

A second big complaint against NoSQL databases like DynamoDB is the lack of transaction support. More specifically, these services can't roll back a set of changes when only one operation fails -- a capability that's especially important for financial transactions, which are completed only after delivery of a product is confirmed. If delivery fails, the entire transaction must be rolled back to issue a refund or cancel the charge.

Amazon addressed this issue with ACID transactions for DynamoDB. These transactions enable developers to issue conditional write operations across multiple items and tables at once.

Transaction operations allow for two basic types of actions to prevent multiple processes from reading or updating an item at the same time. Developers can use TransactWriteItems with condition statements to tell DynamoDB to execute either all of a group of write operations or none of them. If the transaction succeeds, all the writes are committed and a success message is returned. If it fails, none of the items are changed, and the user must retry the transaction.
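As a sketch of what such an all-or-nothing write might look like, the function below builds a TransactWriteItems payload in DynamoDB's low-level item format. The Accounts and Orders tables, their keys, and the Quota attribute are hypothetical; a boto3 client would submit the result via client.transact_write_items(TransactItems=...).

```python
# Build an all-or-nothing transaction that debits an account's quota and
# records an order. Table and attribute names are hypothetical examples.

def build_charge_transaction(account_id, order_id, cost):
    """Request payload pairing a conditional Update with a Put."""
    return [
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": account_id}},
                # If the account can't cover the charge, the condition
                # fails and DynamoDB writes neither item.
                "ConditionExpression": "Quota >= :cost",
                "UpdateExpression": "SET Quota = Quota - :cost",
                "ExpressionAttributeValues": {":cost": {"N": str(cost)}},
            }
        },
        {
            "Put": {
                "TableName": "Orders",
                "Item": {
                    "OrderId": {"S": order_id},
                    "AccountId": {"S": account_id},
                    "Cost": {"N": str(cost)},
                },
            }
        },
    ]

items = build_charge_transaction("acct-42", "order-7", 5)
```

Keeping the payload construction in a plain function like this makes the transaction's contents easy to unit test without touching AWS.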

Additionally, the TransactGetItems operation can be used for read-item consistency; it returns an error if any transaction is actively being processed for an item in question. For example, if your application charges a user account, you can use a TransactGetItems call to read the account's remaining quota first and confirm a charge isn't already in flight.
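The quota check described above might be expressed as follows; again, the Accounts table and its key are hypothetical, and a boto3 client would send the payload via client.transact_get_items(TransactItems=...), which errors out if another transaction is in flight for the item.

```python
# Build a TransactGetItems payload for a transactionally consistent read
# of one account's remaining quota. Names are hypothetical examples.

def build_quota_check(account_id):
    """Request payload reading only the Quota attribute of one item."""
    return [
        {
            "Get": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": account_id}},
                "ProjectionExpression": "Quota",
            }
        }
    ]

payload = build_quota_check("acct-42")
```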

Transactions are available by default on all single-region tables, but they are not fully ACID-compliant across global tables. This means applications that need this guarantee should stick to DynamoDB tables in one region to ensure consistent read/write operations.

Also, there doesn't appear to be any way to manually roll back an entire transaction at this time. DynamoDB doesn't lock items, so items touched by a pending transaction can still be written to by other operations before the transaction completes. For example, if a user downloads information, which decreases their quota in a transaction, while someone else purchases additional quota at the same time, the transactional write would fail, and DynamoDB would not update any of the items in the transaction.
