Advanced Amazon Web Services users prefer self-managed databases running on Amazon's Elastic Compute Cloud to its Database as a Service offerings -- at least for now.
Presentations given at the inaugural meeting of the AWS Super Users Online Meetup Group last week focused on how the 'super users' run databases on AWS. A majority of the speakers said they run self-managed databases such as Cassandra and MySQL on the Elastic Compute Cloud (EC2) rather than using Amazon's Database as a Service (DBaaS) platforms, such as the Relational Database Service (RDS) and DynamoDB.
However, some of the IT pros who presented at the Meetup also had experience with DBaaS, and some remained on the fence between self-managed and DBaaS options moving forward.
To RDS and back again
Edmodo Inc., a San Mateo, Calif.-based company that offers an online social learning platform for education, learned a lot of lessons when it moved its MySQL operations from self-managed instances on EC2 to RDS, according to Jack Murgia, director of operations for the company.
"We learned a lot more when we decided to move back out of RDS," he added.
In the spring of 2011, Edmodo served around 2 million users and got a major investment from venture capital firms, which it used to hire Murgia to work with a 10-person developer team.
"Basically we had one database when I walked through the door, a master and a slave," Murgia said. Both ran on EC2. When Murgia came in, there wasn't a database administrator on staff.
Along came Amazon RDS, which offered a chance to take MySQL management off the busy startup's plate. In the fall of 2011, the company completed its migration to RDS.
"We were able, with the click of a few buttons, to create development and [quality assurance] environments," Murgia said of RDS. "It was also easy, as the load increased that fall, [to] throw in read replicas with a few clicks and changes to [domain name system] records."
But the RDS deployment ran into snags when it came to multi-availability-zone failover.
"What we found was that multi-AZ failover was a failure most of the time," Murgia said. "Sometimes even on a planned failover we would find that replication would break, and the only option at that point was to bring up new replicas."
The primary database had eight replicas, and each new replica took about an hour to build, which meant up to a full day of downtime before Edmodo could serve customers again. So the company regrouped and moved to standalone masters on RDS, planning to build new replicas if something failed. RDS began offering a service-level agreement (SLA) in June 2013, which also kept Edmodo looking for ways to stay on the service.
But as Edmodo continued to grow, it brought in DBAs from an outsourcing firm and hired more system administrators between 2011 and 2013. At that point, with the skills in-house to run a self-managed database on EC2, Edmodo moved off RDS in favor of a self-managed MySQL environment.
"We had our hands tied with the 'black box'" of RDS, Murgia said. If Edmodo managed its own MySQL replication, the IT team could promote a replica to master, point the other replicas at the new master and get back up and running. Without that kind of control over the infrastructure, recovering the database was a frustrating exercise.
"It's a tradeoff you have to make," Murgia said. "Perhaps you don't have the skills, perhaps you're a small startup, so it's fine, but as you start to gain those skills and start to raise your standards of performance and availability, that's going to become an issue."
Weighing Cassandra and DynamoDB for NoSQL
Another presentation at the Super Users group Meetup was given by IT pros from Stackdriver, a Boston-based company that offers AWS Monitoring as a Service. The company reached an inflection point with its Cassandra cluster, and is considering two alternatives: expanding the existing cluster or redeploying on Amazon's DynamoDB DBaaS.
"We have a very write-heavy workload that [involves] billions of data points, and Cassandra has good support for that kind of write-heavy workload," said Joey Imbasciano, cloud platform engineer with Stackdriver. "The design pattern for modeling time series data in Cassandra is also pretty well-known, so we knew we weren't going to be blazing any trails there."
Another appealing feature of Cassandra was the ability to delete data programmatically, which would keep the database at a manageable size without requiring manual intervention. Stackdriver also considered MySQL and RDS, but felt NoSQL was a better fit for its data set. The company also took a long look at DynamoDB before deploying Cassandra about 18 months ago.
"At the time … the vendor lock-in was something we were really trying to avoid," Imbasciano said. "Also, we did a little bit of cost estimation and figured the cost of using Dynamo at the time was going to be quite a bit higher."
Stackdriver started with a three-node Cassandra ring; today the ring has grown to 36 nodes, and as growth continues, the company is looking at DynamoDB again.
"The benefits are obvious," said Patrick Eaton, an architect at Stackdriver who co-presented with Imbasciano. "The tuning is automatic. The upgrades are automatic. Amazon has full-time support people taking care of things. They can scale it up as big as you want to."
"Also, we've seen historically that AWS has aggressively cut prices, so it's likely that for a constant workload, our prices will actually get cheaper over time," Eaton added.
Still, a Dynamo deployment for the company's time-series data will be somewhat more expensive at first than continuing with Cassandra, according to Stackdriver's current estimates.
"The cost model is pretty complicated … it's based on these abstract quantities they call 'write units' and 'read units,' which are a combination of request rate and data size and consistency model," Eaton said. "It makes it hard to estimate what the service is going to cost you until you get pretty far into a prototyping phase."
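The pricing units Eaton describes can be sketched in a few lines. This is a rough rendering of DynamoDB's documented provisioned-throughput arithmetic at the time (one write unit per 1 KB written per second; one read unit per 4 KB read per second, halved for eventually consistent reads); the request rates and item sizes below are purely illustrative:

```python
import math

def write_units(writes_per_sec, item_size_kb):
    """One provisioned write unit covers one write/sec of an item up to
    1 KB; larger items consume one unit per 1 KB, rounded up."""
    return writes_per_sec * math.ceil(item_size_kb)

def read_units(reads_per_sec, item_size_kb, strongly_consistent=True):
    """One provisioned read unit covers one strongly consistent read/sec
    of an item up to 4 KB; eventually consistent reads cost half."""
    units = reads_per_sec * math.ceil(item_size_kb / 4)
    return units if strongly_consistent else math.ceil(units / 2)

# Same request rate, very different provisioned cost once item size rounds up
print(write_units(1_000, 0.5))   # 1,000 units
print(write_units(1_000, 2.5))   # 3,000 units

# Relaxing the consistency model halves the read bill
print(read_units(1_000, 3, strongly_consistent=False))  # 500 units
```

The rounding on item size and the consistency-model discount are exactly what make a paper estimate diverge from a real prototype's bill.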
By Stackdriver's calculations, ongoing Cassandra management takes about a quarter of one engineer's time, which it prices at about $3,000 a month. The primary cluster costs $12,500 per month, and a smaller cluster for alerting works out to about $1,300 a month in the current Cassandra deployment.
With Dynamo, Stackdriver estimates the primary cluster at about $22,000 a month for storage and writes alone. The alerting cluster, on the other hand, is estimated at about $600 on DynamoDB -- less than half its current price.
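Stackdriver's own monthly estimates make the tradeoff concrete. A back-of-the-envelope comparison, using the rounded figures quoted above rather than exact pricing:

```python
# Stackdriver's quoted monthly estimates (rounded figures from the talk)
cassandra_primary = 12_500    # primary Cassandra cluster
cassandra_alerting = 1_300    # smaller alerting cluster
ops_share = 3_000             # ~1/4 of an engineer's time spent on Cassandra

dynamo_primary = 22_000       # DynamoDB estimate, storage and writes alone
dynamo_alerting = 600         # DynamoDB estimate for the alerting workload

# The big, write-heavy workload costs more on Dynamo even after counting
# the human cost of running Cassandra
print(dynamo_primary - (cassandra_primary + ops_share))   # 6,500 more per month

# The small alerting workload is cheaper on Dynamo
print(cassandra_alerting - dynamo_alerting)               # 700 less per month
```

The arithmetic illustrates Eaton's point below: the same service is a premium for one workload and a savings for another.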
"The savings or the cost really depends on the type of workload," Eaton said. "You can't just compare these alternatives in blanket statements."
There are also steep discounts for customers willing to reserve capacity up front -- Eaton estimated the savings at 53% for one year, and 76% for three years.
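Applied to the $22,000 primary-cluster estimate, Eaton's reservation discounts sketch out as follows. Note the assumption: the discount is applied flat to the whole monthly bill, although in practice reserved capacity covers provisioned throughput rather than storage:

```python
dynamo_primary_monthly = 22_000   # on-demand estimate, storage and writes

# Eaton's estimated discounts for reserving capacity up front
one_year = 0.53
three_year = 0.76

# Assumption: discount applied to the entire bill for a rough floor
print(round(dynamo_primary_monthly * (1 - one_year)))    # 10,340 per month
print(round(dynamo_primary_monthly * (1 - three_year)))  # 5,280 per month
```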
Then again, "we have no operational experience with this system," Eaton said. "There are going to be surprises, there are going to be gotchas, and [if we move to Dynamo] we're just going to have to endure all those growing pains all over again."
Amazon representatives had not commented as of press time.