AWS Snowball Edge extends EC2 on premises

AWS Snowball portable data transfer devices gain EC2 functionality, as the public cloud vendor seeks more ways for customers to use these boxes in their private facilities.

For the first time, AWS users can get a full instance delivered directly to them, as the public cloud provider further stretches its footprint beyond its own facilities.

AWS now puts EC2 instances inside its Snowball devices to better meet users' demand for hybrid and edge deployments that eventually connect back to the cloud. These portable, ruggedized devices offer 100 TB of local storage and can be used to collect data, regardless of whether they have an internet connection, before being shipped back to AWS to upload to its servers.

This addition to AWS Snowball Edge expands the cloud provider's integration of Greengrass software from last year. That earlier move focused primarily on IoT deployments and delivered a stripped-down version of AWS software that relied on Lambda functions. The EC2 integration, though tied to a new form factor, gives users the same compute functionality found in AWS' cloud VMs and broadens potential uses without the need to rewrite code into Lambda functions.

EC2 instances running outside AWS data centers would have been unfathomable just a few years ago, when AWS executives scoffed at the notion of hybrid deployments and argued everything would be in the public cloud in short order. AWS has shown it is willing to revise its strategy -- to eliminate a perceived weakness in its collection of services, or enter markets it thinks are vulnerable. But its ultimate goal is unchanged: to move as many workloads as possible to its cloud.

AWS realizes its strategy must address on-premises or colocated deployments, but it's unlikely to do anything to help customers move applications from the cloud to private data centers, said Tony Iams, a Gartner analyst.

"They're going to proceed very cautiously, because they don't want to distract from their core business, which is to move workloads to the cloud and take advantage of the economies of scale when things are hosted in their data centers," Iams said.

This proprietary VM packed into the Snowball devices only nominally runs on a customer's site, Iams said, and the extent of the integration of these AWS Snowball Edge devices with other on-premises deployments, such as VMware, is unclear. For now, uses will probably be limited to preprocessing or data compression.

"All of this is about recognizing that there are disconnected and semiconnected use cases," said Mark Ryland, chief solutions architect for the worldwide public sector at AWS. "There [are] all these hybrid use cases that are going to continue for years."

Each AWS Snowball Edge device supports up to 24 vCPUs and 32 GB of memory, as well as an S3-compatible endpoint. Users can test Amazon Machine Images in the cloud and preload them onto the device, but these Snowballs aren't likely to act as a rack of servers inside a private data center; AWS charges a daily, per-device fee for jobs that last more than 10 days.
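For illustration, here is a minimal sketch of how a workload might write to that S3-compatible endpoint with boto3. The endpoint address, port and credentials are placeholders; in practice, they come from the Snowball client after the device is unlocked on the local network.

```python
import boto3

# Placeholder endpoint and credentials -- on a real device, these come from
# the Snowball client once the device is unlocked on the local network.
snowball = boto3.client(
    's3',
    endpoint_url='https://192.0.2.10:8443',        # local S3-compatible endpoint
    aws_access_key_id='SNOWBALL_ACCESS_KEY',
    aws_secret_access_key='SNOWBALL_SECRET_KEY',
    verify=False,  # assumes the device's self-signed certificate isn't installed
)

# Write locally collected data to the device as if it were S3 in the cloud;
# the objects land in the real S3 once the device is shipped back to AWS.
with open('sensor-logs.csv', 'rb') as data:
    snowball.put_object(Bucket='field-data', Key='sensor-logs/day1.csv', Body=data)
```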

Beta testers have asked for additional features, such as detachable Elastic Block Store (EBS) volumes, Ryland said. Those capabilities may be added, along with potential Virtual Private Cloud-style functionality around networking. Ryland described the Snowball roadmap as a "steady progression," and said the addition of more services to these devices will require more CPU and more memory, too.

A move in that direction would push AWS Snowball Edge farther from its origins as a transfer device and closer to competing with on-premises services, such as Microsoft Azure Stack, which is effectively a scaled-down version of Microsoft's public cloud that can be deployed inside a customer's data center.

[Photo: Amazon CTO Werner Vogels speaks to the crowd during the AWS Summit keynote address.]

SageMaker, S3 and EC2 get more speed

Amazon SageMaker received some updates, as well. The managed service for machine learning can now run high-throughput batch jobs on data sets larger than 5 GB, so users can quickly run data, such as billing inventory or product sales, against existing models.
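As a rough sketch of what such a job looks like, the boto3 call below starts a batch transform job against a previously created SageMaker model; the job, model and bucket names here are hypothetical.

```python
import boto3

sm = boto3.client('sagemaker')

# Hypothetical names and paths; assumes a model called 'sales-model' was
# already created in SageMaker from a trained model artifact.
sm.create_transform_job(
    TransformJobName='product-sales-batch',
    ModelName='sales-model',
    TransformInput={
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'S3Prefix',
                'S3Uri': 's3://example-bucket/product-sales/',
            }
        },
        'ContentType': 'text/csv',
        'SplitType': 'Line',   # split large input files record by record
    },
    TransformOutput={'S3OutputPath': 's3://example-bucket/predictions/'},
    TransformResources={'InstanceType': 'ml.m4.xlarge', 'InstanceCount': 1},
)
```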

AWS also added a pipe-input mode to stream data directly from S3 to TensorFlow containers, rather than load data through Amazon EBS volumes. This feature can accelerate training, improve throughput and reduce disk space usage, and additional frameworks will be added in the future, according to AWS.
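In boto3 terms, Pipe mode is a training-job setting rather than a separate API. A sketch with hypothetical job name, container image, role ARN and bucket paths might look like this:

```python
import boto3

sm = boto3.client('sagemaker')

# Hypothetical job name, container image and role ARN. The key setting is
# TrainingInputMode='Pipe', which streams data from S3 into the container
# instead of copying it onto an attached EBS volume first.
sm.create_training_job(
    TrainingJobName='tf-pipe-mode-demo',
    AlgorithmSpecification={
        'TrainingImage': '123456789012.dkr.ecr.us-east-1.amazonaws.com/sagemaker-tensorflow:latest',
        'TrainingInputMode': 'Pipe',
    },
    RoleArn='arn:aws:iam::123456789012:role/SageMakerRole',
    InputDataConfig=[{
        'ChannelName': 'training',
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'S3Prefix',
                'S3Uri': 's3://example-bucket/training-data/',
                'S3DataDistributionType': 'FullyReplicated',
            }
        },
    }],
    OutputDataConfig={'S3OutputPath': 's3://example-bucket/model-output/'},
    ResourceConfig={'InstanceType': 'ml.p3.2xlarge', 'InstanceCount': 1, 'VolumeSizeInGB': 50},
    StoppingCondition={'MaxRuntimeInSeconds': 86400},
)
```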

Other updates include increased performance in S3, which now supports up to 3,500 requests per second to add data and 5,500 requests per second for retrievals. There are also three new instance types on the way: the compute-intensive Z1d and the memory-optimized R5 and R5d. All three will eventually have bare-metal options, as well.

And, finally, Bring Your Own IP -- currently in preview -- is an Amazon Virtual Private Cloud feature that links publicly routable IP addresses to AWS resources, so users can move existing applications to the cloud without having to change their IP addresses.
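Because the feature is in preview, the details may change, but a rough boto3 sketch of the expected flow -- provision the address range with proof of ownership, advertise it, then allocate addresses from it -- looks like this, with placeholder CIDR, signature and pool ID:

```python
import boto3

ec2 = boto3.client('ec2')

# Placeholder CIDR and authorization values; proving ownership of the range
# requires a signed authorization message in practice.
ec2.provision_byoip_cidr(
    Cidr='203.0.113.0/24',
    CidrAuthorizationContext={
        'Message': 'ownership-proof-message',
        'Signature': 'base64-encoded-signature',
    },
)

# Once provisioned, start advertising the range from AWS ...
ec2.advertise_byoip_cidr(Cidr='203.0.113.0/24')

# ... and allocate an Elastic IP address out of the brought-in pool.
ec2.allocate_address(Domain='vpc', PublicIpv4Pool='ipv4pool-ec2-0123456789abcdef0')
```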

Editorial director Margie Semilof and SearchAWS site editor David Carty contributed to this report.
