
Tailor AWS storage options to enterprise data needs

How IT teams store data in the cloud can affect performance, costs and security. Identify your needs to choose the right AWS storage service.

It takes a solid understanding of all AWS storage options to choose the proper service to meet your enterprise's application and performance requirements. IT teams must identify the features of a storage service that will help strike an appropriate balance between performance and cost.

When choosing a storage option, look at your organization's storage needs from multiple perspectives. Specifically, look at the following:

  • Storage size plays a vital role in performance. For block storage options across multiple cloud providers, IOPS or throughput typically scales linearly with the size of the disk or volume.
  • Each cloud provider has multiple storage options based on access frequency. Data that is accessed infrequently can move to a cheaper storage class while retaining the same durability and security.
  • Developers who want predictable, consistent, low-latency performance for applications can customize storage options to fit those requirements.
  • Data availability is an important consideration. An organization willing to compromise on availability should be able to find a lower-cost storage service.

Diving into AWS storage options

AWS storage options can be divided into three classes, each with different capabilities:

  • Block device storage (Amazon Elastic Block Store)
  • Object storage (Amazon Simple Storage Service)
  • Archival storage (Amazon Glacier)

Amazon Elastic Block Store (EBS) is a low-latency, persistent storage offering. Each volume is automatically replicated within its availability zone, so the failure of a single component doesn't lead to data loss.


Amazon EBS volumes, which attach to Elastic Compute Cloud (EC2) instances, are available with solid-state drives (SSD) and hard disk drives (HDD). AWS offers two EBS volume types for each type of drive.

Benefits of SSD volumes include low latency, durability and consistent I/O performance. SSD volume performance is measured in IOPS. AWS offers EBS SSD volumes in two types:

EBS Provisioned IOPS SSD. These volumes improve performance and consistency for applications that are highly sensitive to latency. An EBS Provisioned IOPS SSD volume lets developers set a preferred IOPS level and is designed to deliver within 10% of that level 99.9% of the time. The size of an EBS Provisioned IOPS volume can range from 4 GB to 16 TB, with up to 20,000 IOPS per volume; each gigabyte of the volume supports around 30 provisioned IOPS. EBS Provisioned IOPS SSD volumes typically suit I/O-intensive databases and critical business applications.

EBS General Purpose SSD. These volumes, referred to as GP2, fit the majority of IT use cases. Each gigabyte of storage provides a baseline of 3 IOPS, and GP2 volume sizes can range from 1 GB to 16 TB. Each GP2 volume can also burst up to 3,000 IOPS for up to 30 minutes, which ensures fast boot cycles. EBS General Purpose SSD volumes are suitable for a system boot or root volume, low-latency interactive apps and transactional workloads.
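
To make those volume parameters concrete, here is a minimal boto3 sketch -- assuming Python, the boto3 SDK and preconfigured AWS credentials; the availability zone, sizes and IOPS values are placeholders -- that creates one volume of each SSD type:

    import boto3

    ec2 = boto3.client("ec2")

    # Provisioned IOPS SSD (io1): size and IOPS are set explicitly.
    piops_volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # placeholder zone
        Size=500,                       # GB
        VolumeType="io1",
        Iops=15000,                     # stays within the ~30 IOPS-per-GB ratio
    )

    # General Purpose SSD (gp2): baseline IOPS is derived from the size,
    # so no Iops parameter is passed.
    gp2_volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,                       # GB; roughly 300 baseline IOPS
        VolumeType="gp2",
    )

    print(piops_volume["VolumeId"], gp2_volume["VolumeId"])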

HDD volumes, meanwhile, are low-cost magnetic volumes optimized for throughput rather than latency. Performance is measured in megabytes per second (MBps) rather than IOPS.

Throughput Optimized HDD. These low-cost magnetic volumes are designed for frequently accessed, sequential I/O workloads. The volume size can vary from 500 GB to 16 TB with a maximum throughput of 500 MBps. EBS Throughput Optimized HDD volumes are suitable for big data tasks, log processing and data warehouses.

Cold HDD. These volumes are designed to handle sequential data that doesn't need to be instantly available. They are useful when performance is not a big concern; developers cannot use Cold HDD volumes as bootable drives. Volume size varies from 500 GB to 16 TB with a maximum throughput of 250 MBps. EBS Cold HDD volumes suit infrequently accessed data.
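
The HDD types use the same API as the SSD types, just with different volume type identifiers and no IOPS setting. A brief sketch, again with placeholder zone and sizes:

    import boto3

    ec2 = boto3.client("ec2")

    # Throughput Optimized HDD (st1) for frequently accessed sequential data.
    st1_volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=2000,          # GB; both HDD types start at 500 GB
        VolumeType="st1",
    )

    # Cold HDD (sc1) for infrequently accessed sequential data.
    sc1_volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=2000,
        VolumeType="sc1",
    )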

Amazon Simple Storage Service (S3) is a highly available, durable, secure object storage service. Developers can access it via API calls from anywhere in the world. There is no upfront storage commitment, and S3 integrates with various AWS tools. Its features include cross-region replication, lifecycle rules, event notifications, versioning, encryption and flexible storage options.

Amazon offers several storage classes within S3.

Amazon S3 Standard provides availability, durability and security with low-latency delivery for frequently accessed objects. It is designed for 99.999999999% durability and 99.99% availability of objects over a given year, and it supports data encryption in transit and at rest. S3 Standard storage can be used for static websites, distribution of rich content -- such as video -- and big data analytics.

Amazon S3 Standard Infrequent Access provides the same performance, durability and security as S3 Standard, but at a lower cost for data that is accessed less frequently. The storage class is applied per object, so Standard Infrequent Access objects can live in the same bucket as S3 Standard objects. Developers typically use S3 Standard Infrequent Access for backups and data stored for disaster recovery.
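
Because the storage class is chosen per object, moving a backup to Standard Infrequent Access is a one-parameter change at upload time. A minimal boto3 sketch, with placeholder bucket and key names:

    import boto3

    s3 = boto3.client("s3")

    # Frequently accessed object -> S3 Standard (the default storage class).
    s3.put_object(
        Bucket="example-bucket",
        Key="reports/latest.csv",
        Body=b"frequently read data",
    )

    # Backup copy that is rarely read -> S3 Standard Infrequent Access.
    s3.put_object(
        Bucket="example-bucket",
        Key="backups/2016-10-01.csv",
        Body=b"rarely read data",
        StorageClass="STANDARD_IA",
    )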

Amazon Glacier is a low-cost cold storage option that provides durable, secure, long-term storage. It's well suited for archiving and backup.

It's best to deploy Glacier when an enterprise can tolerate long data retrieval times. The service works well with Amazon S3, where lifecycle policies can automatically transition aging objects into Glacier.

Typical uses for Amazon Glacier include long-term backups and logs. It can store compliance-related data, which, in certain cases, needs to be kept for years.
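
As a rough illustration of that S3-to-Glacier lifecycle integration, the following boto3 sketch -- the bucket name, prefix and day counts are examples, not recommendations -- transitions aging log objects to Glacier and expires them once a retention period ends:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    # Move objects to Glacier 90 days after creation.
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    # Delete them after roughly seven years of retention.
                    "Expiration": {"Days": 2555},
                }
            ]
        },
    )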


Comparing cloud data storage options

AWS isn't alone in cloud storage. Rival public cloud providers Microsoft Azure and Google Cloud Platform (GCP) offer multiple storage options.

Azure's Blob service, which is available in Zone Redundant Storage, Locally Redundant Storage and Cool Blob options, has 99.9% availability. GCP's Cloud Storage service offers Standard and Durable Reduced Availability object storage options and also lists 99.9% availability.

Azure Disks is Microsoft's SSD block storage service. Customers pay based on what they use, regardless of the size provisioned. Azure gives users the ability to map shared drives. Virtual machine size determines how many disks can be attached to a server. GCP's Persistent Disk block storage service includes SSD and HDD options with a maximum disk size of 64 TB.

For archive storage, Amazon Glacier is a low-cost option at $0.007 per GB per month, with a retrieval time of four hours. GCP's Nearline service costs $0.01 per GB per month and has a retrieval time of seconds. Azure does not have a comparable offering in the archival storage category.
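
For a back-of-the-envelope comparison at the per-gigabyte rates quoted above -- storage only, ignoring retrieval and request fees, with an arbitrary example data volume:

    ARCHIVE_GB = 10_000       # example: 10 TB of archived data

    glacier_rate = 0.007      # USD per GB per month (Amazon Glacier)
    nearline_rate = 0.01      # USD per GB per month (GCP Nearline)

    print(f"Glacier:  ${ARCHIVE_GB * glacier_rate:,.2f} per month")
    print(f"Nearline: ${ARCHIVE_GB * nearline_rate:,.2f} per month")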

Next Steps

Words to go: AWS storage

Improve storage performance with these tricks

The cost of cold storage in AWS

This was last published in October 2016

