
AWS, Google cloud performance beat by unlikely competitor

Cloud service providers go head-to-head in independent benchmark tests of network throughput and latency -- and the results may surprise you.

Anecdotal reports about cloud performance are common, but benchmark results based on dozens of tests of network throughput and latency offer more powerful insights.

Google and Amazon Web Services (AWS) have each claimed network performance supremacy in the cloud. Both credit software-defined networking architectures, as well as privately owned dark fiber between data centers, for their strong performance. Customers and partners have also reported solid network performance on both platforms.

"We overprovisioned our compute and memory resources when we first moved to AWS," said Kevin Felichko, CTO of an e-commerce website based in Frederick, Md. "Since then, we have scaled down with no noticeable performance impact."

While Amazon's networks can be difficult to join across regions, one CTO who uses both services said Google stands out in the network field because of a design that sets network boundaries around "projects" rather than regions.


ElasticBox is a cloud management software company whose product runs on both Google and Amazon. With Amazon's region-based approach, ElasticBox CTO Alberto Maestro and his team had to set up a router in a colocation facility to join regions, but they found the connections unstable and the performance variable. Google's project-based approach eliminated the need for the separately managed router and delivered more consistency across large global distances.

Manageability is also a point in Google's favor, Maestro said.

"I don't have a single engineer in my organization that understands the topology of the Amazon network," he said. "Google is much simpler."

That said, AWS offers more flexibility, such as in defining policy rules by individual subnets, a more granular approach than what's available with Google, Maestro said.

And the cloud performance winner is…

Which of these services -- Google or Amazon -- outperforms the rest of the market?

The answer is neither.

CenturyLink Cloud, formerly Tier 3, received the top prize for network throughput in tests performed in October by CloudHarmony Inc., an independent benchmark tester of cloud performance based in Laguna Beach, Calif.

CenturyLink remains a dark horse in the cloud computing race -- the Tier 3 customer base last year was between 2,500 and 3,000 customers. But in 10 tests of its 8-vCPU, 16 GB memory hyperscale instance, the mean downlink was 18.919 gigabits per second (Gbps), about 8 Gbps above the highest results from AWS and Google.

CenturyLink had six of the top 10 network throughput results for this round of testing, and at $0.33 per hour, its 8-vCPU, 16 GB memory hyperscale instance was also among the cheaper servers tested.

CenturyLink's outlier results could be because its virtual machines (VMs) share the same host, according to Jason Read, founder of CloudHarmony. Amazon and Google VMs may be throttled to provide consistent performance across shared hosts.

CenturyLink, however, disputes this notion.

"CenturyLink Cloud servers are normally distributed across hardware clusters in our data centers, and then continually optimized to balance overall workloads across all clusters," said David Shacochis, vice president of cloud platform at CenturyLink. "The likelihood that a particular benchmark workload gets confined to a single physical server, while possible, is mathematically remote."

Variability in network performance was also observed during the benchmark testing, and Read said Amazon's services had the lowest variability -- in other words, the most consistency in network performance between tests.
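The variability CloudHarmony measured can be made concrete as a coefficient of variation over repeated runs: the standard deviation of the throughput samples divided by their mean, where a lower value means more consistent performance. The sketch below uses hypothetical throughput samples, not CloudHarmony's actual data, to show how such a statistic is computed.

```python
from statistics import mean, stdev

def summarize(runs_gbps):
    """Return (mean, coefficient of variation) for a list of throughput runs."""
    m = mean(runs_gbps)
    cv = stdev(runs_gbps) / m  # lower CV = more consistent performance
    return m, cv

# Hypothetical throughput samples (Gbps) from repeated tests of two providers.
# A steady provider clusters tightly; a bursty one swings widely.
steady = [10.3, 10.2, 10.4, 10.3, 10.2]
bursty = [18.9, 12.1, 20.5, 15.0, 19.2]

m_steady, cv_steady = summarize(steady)
m_bursty, cv_bursty = summarize(bursty)
```

By this measure the second provider could post a higher mean while still being the less predictable choice, which is why consistency is reported separately from raw throughput.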

Network Throughput Top 10 Results

Google vs. Amazon: Who's price-performance king?

Considered on their own, Google's and Amazon's results show a largely even race in network throughput, though price-performance comparisons tell a different story. Of the top 10 network throughput results, Google's n1-highcpu-8 instance came in fifth, at 10.791 Gbps, and Amazon's c3.8xlarge ranked sixth, at 10.319 Gbps.

But while Google's n1-highcpu-8 edged out Amazon's c3.8xlarge instance, the c3.8xlarge, with 32 vCPUs and 60 GB memory, starts at $1.68 per hour for on-demand pricing, as opposed to the $0.24 per hour price tag on Google's 8-vCPU, 7.2 GB memory instance (using Google's typical price per hour).

Amazon also has two other instance types that came close to 10 Gbps throughput in benchmark tests -- the r3.8xlarge and i2.8xlarge, each of which has 32 vCPUs and 244 GB memory and performed at 9.835 Gbps and 9.623 Gbps, respectively. On-demand prices for these servers are set at $2.80 and $6.82 per hour.

By comparison, Google's n1-highmem-16 instance, with 16 vCPUs and 104 GB memory, performed at 8.521 Gbps and costs $0.90 per hour; its n1-highcpu-16 instance, with 16 vCPUs and 14.4 GB memory, performed at 8.271 Gbps and costs $0.48 per hour.

One exception to this price-performance pattern was Amazon's t2.micro entry-level instance, which offered 4.229 Gbps performance despite its single vCPU and 1 GB of memory. This server is also priced at $0.01 per hour.
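One way to make these price-performance comparisons systematic is to normalize throughput by hourly cost. The sketch below ranks several of the instances discussed above by Gbps per dollar-hour, using the throughput and on-demand price figures reported in this article (instance labels abbreviated for readability):

```python
# Throughput (Gbps) and on-demand price ($/hr) as reported in the article.
instances = {
    "CenturyLink 8-vCPU hyperscale": (18.919, 0.33),
    "Google n1-highcpu-8":           (10.791, 0.24),
    "AWS c3.8xlarge":                (10.319, 1.68),
    "AWS r3.8xlarge":                (9.835, 2.80),
    "Google n1-highmem-16":          (8.521, 0.90),
}

def gbps_per_dollar(gbps, price_per_hour):
    """Throughput per dollar-hour: higher means better price-performance."""
    return gbps / price_per_hour

# Rank instances from best to worst price-performance.
ranked = sorted(instances.items(),
                key=lambda kv: gbps_per_dollar(*kv[1]),
                reverse=True)
for name, (gbps, price) in ranked:
    print(f"{name}: {gbps_per_dollar(gbps, price):.1f} Gbps per $/hr")
```

On this metric Google's n1-highcpu-8 comfortably beats Amazon's c3.8xlarge despite its slightly lower raw throughput, while CenturyLink's hyperscale instance leads the field.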

AWS vs. Google Throughput Results

Some AWS customers anecdotally report latency lags, particularly in the US-East region.

"We receive a complaint from our internal users about once every two weeks which can be attributed solely to latency," Felichko said. "Maybe once a month … we have a customer that complains about slow loading times of our website."

In CloudHarmony's benchmark tests, Amazon's c3.8xlarge and r3.8xlarge instances tied for lowest latency, at 0.09 milliseconds each. However, AWS also had the two worst latencies of the more than 90 instance types tested, at 7.65 milliseconds for the m3.medium instance and 17.30 milliseconds for the t1.micro instance.

CenturyLink's 2 vCPU, 4 GB memory hyperscale instance trailed Amazon's highest-performing servers with 0.10 milliseconds latency. Google's latencies, meanwhile, were in the middle of the pack, with results hovering around 0.5 to 0.6 milliseconds.
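The latency figures above come from CloudHarmony's own test harness. As a rough illustration of how inter-instance latency can be sampled, the hypothetical sketch below times TCP handshakes to a host and averages the results; real benchmark suites typically use ICMP or UDP probes and far larger sample counts, so this is an approximation, not CloudHarmony's method.

```python
import socket
import time
from statistics import mean

def tcp_rtt_ms(host, port=443, samples=5):
    """Estimate round-trip latency (ms) by timing TCP handshakes.

    A TCP connect costs roughly one round trip, making it a crude but
    dependency-free proxy for ping when ICMP is unavailable.
    """
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close immediately
        rtts.append((time.perf_counter() - start) * 1000)
    return mean(rtts)
```

Running this between two instances in the same region would be expected to yield sub-millisecond averages on the platforms discussed here, with cross-region probes considerably higher.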

The best -- and worst -- of the rest

Microsoft's Azure platform ranked just behind Google, Amazon and CenturyLink in network throughput tests -- its d14 instance, with 16 vCPU and 112 GB of memory, performed at 9.452 Gbps and is priced at $2.61 per hour.

Rackspace Hosting's network throughput came in the middle of the pack, with its highest-performing instance, the io120 with 32 vCPUs and 120 GB memory, clocking in at 6.866 Gbps. IBM's SoftLayer also performed well at smaller instance sizes, with 3.32 Gbps performance on its 2 vCPU, 4 GB instance, priced at $0.12 per hour.

Rackspace, IBM, Azure, Amazon and Google declined to comment publicly for this article.

Full network performance results will be posted later this month.

Beth Pariseau is senior news writer for SearchAWS. Write to her or follow @PariseauTT on Twitter.



Have you experienced issues with Google, Azure or AWS network performance?
One of the biggest struggles with cloud virtualization is the variability of response times, and that variability can make for synchronization and timing issues. Since my primary role with virtualization in the cloud is the parallelization of automated tests and setting up multiple machines to share services, yes, at times, there can be a performance hit or a lag that can cause failures that at other times would not. 
Hi Michael, thanks very much for your thoughtful reply. Is there any particular vendor you've experienced this variability with? Or does it apply across the board in your experience?
Call me surprised!