LAS VEGAS -- Amazon previewed a new Data Pipeline service and made available new EC2 instance types this week, but enterprise IT pros said there's still room for improvement with its existing offerings.
Specifically, attendees at the Amazon Web Services (AWS) re:Invent conference here said they're looking for live migration of Elastic Compute Cloud (EC2) instances between availability zones and regions for disaster avoidance.
"Right now, moving workloads around to avoid outages is pretty cumbersome and manual," said Jason Deck, senior operations engineer for Atlanta, Ga.-based LogicBlox. "I want to know if it's something they're working on."
Netflix CEO Reed Hastings brought up the topic of live migration within Amazon's cloud during a keynote presentation this week.
"VMware does this today with vMotion and has done this for many years, so it is technically possible," Hastings said. "But it's extremely demanding to be able to do that at scale."
Attendees at the show also said they hope to see further service automation, including automated selection of instances for different workloads, a capability Hastings also called for.
"We're really still in the very primitive first assembly-language phase of cloud computing," Hastings said. "When you have to pick individual instance types … you know something's wrong."
Over the next five to ten years, Hastings expects EC2 to evolve so that it selects instances based on the type of workload being deployed and moves workloads among instance types as they scale, using live migration.
IT pros looking for more automation also envision an automated "lights out" function for EC2, so that they don't have to manually shut off their inactive EC2 instances.
"Why don't they have a timer to turn the lights on and off automatically?" said Matt Lipinski, architect for Reed Elsevier Technology Services, based in Miamisburg, Ohio. "Why not have a checkbox for shutting down instances as a group or individually outside of business hours?"
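AWS offered no such checkbox at the time, but the "lights out" behavior Lipinski describes can be approximated with a scheduled script. The sketch below is illustrative only: the `auto-stop` tag name and the 8 a.m.–6 p.m. business window are assumptions, and the selected IDs would still need to be passed to an actual EC2 stop-instances API call.

```python
from datetime import time

# Hypothetical schedule for illustration; adjust to local business hours.
BUSINESS_START = time(8, 0)
BUSINESS_END = time(18, 0)

def outside_business_hours(now):
    """Return True when `now` (a datetime.time) falls outside 08:00-18:00."""
    return not (BUSINESS_START <= now < BUSINESS_END)

def instances_to_stop(instances, now):
    """Pick running instances opted in via a (hypothetical) auto-stop tag."""
    if not outside_business_hours(now):
        return []
    return [i["id"] for i in instances
            if i["state"] == "running"
            and i["tags"].get("auto-stop") == "yes"]

# A cron job could run this hourly and feed the result to EC2's
# stop-instances call, approximating the "checkbox" Lipinski wants.
```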
Other wish list items cited by IT pros at the conference include geographic expansion of Amazon's regions to more parts of Asia and Russia, more cities supporting the Direct Connect service, simplification of provisioning and operational workflows in Amazon's Virtual Private Cloud, and more reference architectures and documentation.
Data Pipeline service automates data analytics workflows
Amazon.com CTO Werner Vogels revealed during his keynote this week that the company is preparing a new workflow orchestration service for data analytics called Data Pipeline. The service can be used to set up data movement policies among data repositories, including Amazon Simple Storage Service (S3), Elastic Block Store, Amazon's new Redshift data warehouse, and on-premises data stores.
"It doesn't come within a country mile of replacing real [Business Intelligence] job management," said Carl Brooks, analyst with 451 Research, based in Boston, Mass. "But it works for a significant portion of the AWS audience, i.e., giant Web properties and online services with lots of raw data to process."
This could prove useful for aggregating and storing log files in the cloud, Lipinski said.
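In essence, a pipeline definition expresses scheduled data movement between repositories, with an optional transformation along the way. The toy model below illustrates that idea only; the store names, step tuples, and `run_pipeline` function are invented for this sketch and are not the actual Data Pipeline API.

```python
# Toy model of a data-movement policy: each step copies records from a
# source store to a destination store, applying a transform in between.
def run_pipeline(steps, stores):
    """Apply each (source, transform, destination) step in order."""
    for source, transform, destination in steps:
        stores[destination] = [transform(record) for record in stores[source]]
    return stores

# Illustrative use, in the spirit of Lipinski's log-aggregation scenario:
# parse raw log lines from an S3-like store into a warehouse-like store.
stores = {"s3-logs": ["10 OK", "11 ERR"], "warehouse": []}
steps = [("s3-logs", str.split, "warehouse")]
run_pipeline(steps, stores)
```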
EC2 Cluster High Memory and High Storage instances
The two new instance types Vogels unveiled target workloads at opposite ends of the spectrum, he said.
The Cluster High Memory instance, also known as cr1.8xlarge, will offer 240 GB of RAM and two 120 GB solid state drives (SSDs) for high performance.
The High Storage instance, also known as hs1.8xlarge, packs in 117 GB of RAM and 24 hard drives for a total capacity of 48 TB. The High Storage instance is suited for highly distributed Elastic MapReduce workloads, Vogels said.