
AWS bandwidth upgrade opens door to more data movement

A bandwidth jolt to AWS networking services gives customers faster data transfer between services and enables workflows that might otherwise have been prohibitively slow.

Boosted connection speeds between AWS' most popular services open the door to easier data movement across the platform and further raise the networking bar among public cloud providers.

Customers could see dramatic increases in AWS bandwidth to speed the transfer of files between Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), as well as between EC2 instances. The upgrade builds on the faster connection speeds within EC2 clusters that AWS added last year with its Elastic Network Adapter (ENA), and it reflects a broader trend among the major cloud vendors as they race to improve their networking capabilities.

Traffic between EC2 and S3 will see a fivefold improvement, with up to 25 Gbps of AWS bandwidth, which could make the platform more attractive for backup and continuity, as well as for applications that require large data transfers. Transfers between EC2 instances in the same region now support up to 5 Gbps for single point-to-point connections and 25 Gbps across multiple connections.


To take advantage of the improved AWS bandwidth speeds, users must enable the ENA, AWS' next-generation network interface, and run current-generation instances with the latest Amazon Machine Images. ENA is built into some newer instance types, such as the X1, but customers otherwise must update their systems if they want to hit these speeds.
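For illustration, here is a minimal sketch of that check-and-enable workflow using boto3, the AWS SDK for Python. The instance ID is a placeholder, and this assumes the instance's AMI and drivers already include ENA support, since the attribute flips a flag rather than installing anything:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder instance ID

# Check whether enhanced networking via ENA is already enabled.
attr = ec2.describe_instance_attribute(
    InstanceId=instance_id, Attribute="enaSupport"
)
if not attr.get("EnaSupport", {}).get("Value"):
    # The instance must be stopped before the attribute can change,
    # and its AMI must include the ENA driver for this to take effect.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, EnaSupport={"Value": True}
    )
    ec2.start_instances(InstanceIds=[instance_id])
```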

Over the past year, the major cloud providers all addressed deficiencies in their network capabilities to improve connections to and through their data centers. Networking hasn't been a major inhibitor to adoption, but in some scenarios, workloads have remained on premises or locked into AWS because of migration constraints, said Joe Emison, founder and CTO of BuildFax, an AWS customer in Austin, Texas.


For example, it would previously have taken weeks to move 10 TB of images out of an S3 bucket for someone else to work on, but transfers can now be sped up to a degree that no longer makes such a move prohibitive, he said.
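Some rough arithmetic shows the scale of the change. The figures below are a sketch at theoretical line rates only; the 5 Gbps baseline is implied by the article's "fivefold" improvement, and sustained real-world throughput runs well below line rate, which is why such jobs historically stretched to weeks:

```python
# Illustrative transfer times for 10 TB at theoretical line rates.
# Real-world throughput depends on flow counts, request parallelism
# and instance size, and is typically well below these ceilings.
ten_tb_bits = 10 * 10**12 * 8  # 10 TB in bits (decimal units)

for label, gbps in [("5 Gbps (implied old EC2-S3 ceiling)", 5),
                    ("25 Gbps (new EC2-S3 ceiling)", 25)]:
    hours = ten_tb_bits / (gbps * 10**9) / 3600
    print(f"{label}: about {hours:.1f} hours")
```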

"The public cloud has been able to handle 95% of the use cases for a while," Emison said. "Does this add another percentage point? Yeah, it probably adds one and addresses one of the most visibly obvious things that have been consistently painful to a lot of people."

Another potential example is loading large files into an application on Amazon Elastic MapReduce. The improved AWS bandwidth, combined with the updated pricing model and the use of transient clusters that shut down automatically, gives users more processing power in less time for a more cost-efficient process, said Adam Book, chief engineer at Relus Cloud, an AWS consultancy in Peachtree Corners, Ga.
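A minimal boto3 sketch of such a transient cluster follows; the names, S3 path, instance types and EMR release are placeholders, and `KeepJobFlowAliveWhenNoSteps=False` is the setting that makes the cluster terminate itself once its steps finish:

```python
import boto3

emr = boto3.client("emr")

# Transient cluster: runs one step, then terminates itself, so the
# bill covers only the processing time. Values are illustrative.
emr.run_job_flow(
    Name="transient-etl",
    ReleaseLabel="emr-5.8.0",
    Instances={
        "MasterInstanceType": "m4.xlarge",
        "SlaveInstanceType": "m4.xlarge",
        "InstanceCount": 4,
        "KeepJobFlowAliveWhenNoSteps": False,  # auto-terminate when done
    },
    Steps=[{
        "Name": "process-large-files",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```

Because the cluster bills only while it runs, faster loads from S3 translate directly into less cluster time and lower cost.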

This could also attract large on-premises databases to AWS, said Dan Conde, an analyst at Enterprise Strategy Group Inc., based in Milford, Mass. Customers can either move these workloads to Amazon Relational Database Service or recreate a more familiar setup that connects EC2 and S3, with S3 serving as the rapid-access data store. In that scenario, a corporation might be more willing to move a database to AWS because of the improved network speeds.

Once a company commits to AWS and becomes more comfortable with the platform, it may determine it's better to move to some native database options, because connection speeds become a nonissue when everything is handled within Amazon's network.

These updates are in line with AWS' strategy to offer a la carte services so customers can pick and choose how they want to build their environments. However, in some ways, Amazon still lags behind the capabilities of Google Cloud Platform (GCP), Conde said.

Google owns the network that carries GCP, as well as the rest of its portfolio of services, which gives it an advantage in extending data across regions, Conde said. If a multinational needs to share data between Brazil and Canada, for example, AWS users' traffic may have to traverse the public internet, whereas GCP customers would remain exclusively on Google's fiber and get faster, more reliable connections.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].
