LAS VEGAS -- Amazon's new Redshift data warehouse service may leave traditional offerings in the dust with competitive pricing, provided enterprises can overcome data migration challenges, experts said.
"We're hoping it'll be faster performance-wise," said Ivan Jurado, chief technology architect and general manager for marketing analytics company M-Sights, which moved its entire infrastructure to Amazon Web Services (AWS) just a few months ago.
So far, the company, which has about five terabytes of data to analyze at any given time, has been running queries in AWS-hosted instances of SQL Server, but this environment is quickly eating up memory, Jurado said.
"Right now we're already at 37 gigabytes of memory -- we can't get much bigger."
However, enterprises with large existing data warehouses on-site won't be able to spin up Redshift as easily, according to consultants at the show.
"It will be good for smaller guys and those who can start from scratch, maybe, but if you have a large investment in a data warehouse, you're not going to move it all to Amazon -- you already have sunk costs into that infrastructure," said Giedrius Praspaliauskas, senior systems architect for a consulting company based on the West Coast.
Some enterprises might go for Redshift depending on where they are in their procurement cycle; if they're coming up to the end of an existing deployment's life, it's more likely they'll go for cloud-based data warehousing, one analyst said.
"You also have to look at the skills of your workforce," said Tony Witherspoon, senior solutions consultant for a consulting company on the East Coast. "I definitely wouldn't start with data warehousing as an entry point to the cloud -- you have to pick the low-hanging fruit first."
AWS also dropped the price of Amazon Simple Storage Service (S3) by about 25% across the board -- a move that may help boost Redshift, some observers said.
Redshift is available as a preview from AWS and comes in two configurations: one built on 16 terabyte (TB) nodes and one on 2 TB nodes. Amazon claims the platform can scale automatically to petabytes in size and will compete with IT's "old guard" vendors on price.
The 2 TB nodes start at 85 cents per hour, while two 16 TB nodes with 128 GB of RAM each run $3.65 per hour, or about $32,000 per year, according to a keynote presentation Wednesday morning by Andrew Jassy, senior vice president of AWS.
Typically, enterprises pay between $19,000 and $25,000 per terabyte of data per year with traditional data warehouses, Jassy said, citing statistics gathered by analyst firm ITG in June 2011. Amazon's data warehouse can run for as little as $1,000 per terabyte per year, Jassy said.
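Jassy's numbers can be sanity-checked with back-of-the-envelope arithmetic (a sketch assuming the per-hour rate and 32 TB capacity quoted above, not official AWS pricing):

```python
# Rough check of the keynote's pricing math (figures as quoted in
# the article; assumed to cover the two-node, 32 TB configuration).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

hourly_rate = 3.65                              # $/hour for the pair of 16 TB nodes
annual_cost = hourly_rate * HOURS_PER_YEAR      # about $31,974 -- roughly $32,000/year
cost_per_tb = annual_cost / 32                  # about $999 per TB per year

print(round(annual_cost))  # 31974
print(round(cost_per_tb))  # 999
```

The result lines up with the keynote claim: just under $1,000 per terabyte per year, versus the $19,000 to $25,000 Jassy cited for traditional data warehouses.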
Amazon also claims Redshift can offer ten times the performance of traditional data warehouses, citing internal tests.
"They haven't released the methodology for the benchmarks, so it's hard to make a comparison a DBA is going to take very seriously," said Carl Brooks, analyst with 451 Research based in Boston. "But the claimed cost delta is so big that it will be attractive anyway. This is the exact same value prop as EC2."