
Maximize Amazon S3 reliability with shrewd choices

IT organizations moving to AWS might have data protection concerns. However, with replication across regions and versioning available, savvy S3 users get increased data reliability.

Data resilience is critical to business continuity and regulatory compliance. A move to the public cloud from on-premises computing should prompt renewed concern over reliability and availability.

Public cloud providers, such as AWS, routinely integrate numerous storage resilience features as a default part of the service. IT organizations that store data in Amazon S3 buckets can take steps to further boost that reliability. Review the features already built into AWS, such as replication and tiers of availability. Then, consider strategies that maximize data resilience, such as backups.

And don't forget to protect data from well-meaning users. Amazon S3 reliability is ultimately only as good as your weakest IT practice.

Basic S3 availability and durability

Amazon S3 is the flagship object storage service from AWS, and its default level of reliability should suit IT's requirements for many everyday workloads.

Native data resilience in Amazon S3 starts with replication. AWS replicates the objects in an S3 bucket across storage devices in a minimum of three availability zones within the selected region. Each availability zone is physically separated from the others, which helps ensure S3 reliability in the event of a storage device failure or a facility-wide problem, such as a fire. This default availability applies to the S3 Standard, S3 Standard-Infrequent Access (IA) and archival S3 Glacier storage classes. The S3 One Zone-IA storage class provides a lower level of reliability, as it stores objects redundantly only within a single availability zone.

AWS codifies Amazon S3 availability in its service-level agreement (SLA) with users:

  • S3 Standard storage at 99.99% availability
  • S3 Standard-IA storage at 99.9% availability
  • S3 One Zone-IA storage at 99.5% availability

AWS customers receive a storage credit if the availability percentage falls below the minimum commitment for that class of S3 bucket in the SLA.
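To put those percentages in concrete terms, the quick calculation below converts each SLA tier into the maximum downtime it permits in a 30-day month. This is back-of-the-envelope arithmetic, not how AWS itself computes SLA credits.

```python
# Convert S3 SLA availability tiers into allowed downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

sla_tiers = {
    "S3 Standard": 99.99,
    "S3 Standard-IA": 99.9,
    "S3 One Zone-IA": 99.5,
}

for tier, availability in sla_tiers.items():
    downtime = MINUTES_PER_MONTH * (1 - availability / 100)
    print(f"{tier}: up to {downtime:.1f} minutes of downtime per month")

# Output:
# S3 Standard: up to 4.3 minutes of downtime per month
# S3 Standard-IA: up to 43.2 minutes of downtime per month
# S3 One Zone-IA: up to 216.0 minutes of downtime per month
```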

AWS also claims durability levels for objects as part of its native S3 reliability features. Durability measures the probability that a stored object will not be lost over a given year. Amazon S3 Standard, S3 Standard-IA, S3 One Zone-IA and S3 Glacier all provide 99.999999999% object durability over a given year, according to the cloud provider.
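To put 11 nines in perspective, the sketch below estimates the expected annual loss for a hypothetical bucket of 10 million objects, which works out to roughly one lost object every 10,000 years.

```python
# Estimate expected annual object loss at 99.999999999% (11 nines) durability.
durability = 0.99999999999
objects_stored = 10_000_000  # hypothetical bucket size

expected_loss_per_year = objects_stored * (1 - durability)
print(f"Expected loss: {expected_loss_per_year:.4f} objects per year")
# Expected loss: 0.0001 objects per year -- about one object every 10,000 years
```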

Enhance S3 reliability

Standard S3 reliability levels might meet the needs of an enterprise without further work. In cases where workload or business requirements demand a greater level of data resilience, IT organizations can apply strategies that bolster the Amazon S3 bucket's safety.

Upgrade the service. S3 storage is available in several classes with the different availability levels delineated above. The difference between 99.9% and 99.99% uptime amounts to almost 40 minutes of potential downtime every month. Generally, use IA storage for lower-priority IT workloads that can tolerate the lower availability. If workload requirements change, evaluate a move to a service class with better availability.
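As a sketch of what such a move looks like with the boto3 SDK, the snippet below copies an object onto itself with a new storage class; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key; substitute your own.
bucket = "example-reports-bucket"
key = "quarterly/report.csv"

# Copy the object onto itself with a new storage class to move it
# from S3 Standard-IA to S3 Standard for higher availability.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="STANDARD",
    MetadataDirective="COPY",
)
```

For bulk moves in the other direction, S3 lifecycle rules can transition objects to cheaper classes automatically on a schedule.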

Make data copies. The most broadly accepted way to enhance data protection is to keep more copies of the data in more places. AWS provides native backup and replication services to achieve this goal.

The AWS Backup service centralizes and automates the backup process and governs data retention across many AWS services, such as Amazon Elastic File System, DynamoDB, Elastic Block Store and Relational Database Service. Policies drive backup behavior, and monitoring and reporting are available on backup activities.
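The boto3 sketch below outlines one way to set up such a policy: a vault, a daily plan with a retention lifecycle and a resource assignment. The vault name, schedule, role ARN and resource ARN are all hypothetical placeholders.

```python
import boto3

backup = boto3.client("backup")

# Hypothetical vault, schedule and retention; adjust to your own policy.
backup.create_backup_vault(BackupVaultName="example-vault")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-35-day-retention",
        "Rules": [
            {
                "RuleName": "daily-5am-utc",
                "TargetBackupVaultName": "example-vault",
                "ScheduleExpression": "cron(0 5 ? * * *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Assign resources, such as an EBS volume, to the plan by ARN.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "example-volumes",
        "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:ec2:us-east-1:123456789012:volume/vol-0abc123"],
    },
)
```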

Use replication instead of backup to achieve similar protection for data that must remain immediately accessible. S3 objects are replicated across availability zones within the same region by default, except in the One Zone-IA class. In addition, S3 supports cross-region replication (CRR), which asynchronously copies objects in an S3 bucket to a bucket in a different region. CRR can satisfy business or compliance rules that dictate how far apart data copies must be stored. It also helps serve users in geographically distant regions with replicated data sets.
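A minimal CRR setup might look like the boto3 sketch below. The bucket names and IAM role ARN are hypothetical, both buckets must already have versioning enabled, and the role must grant S3 replication permissions on the source and destination.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical source bucket, destination bucket and replication role.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # empty prefix covers all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::example-destination-bucket",
                    # Optionally land replicas in a cheaper storage class.
                    "StorageClass": "STANDARD_IA",
                },
            }
        ],
    },
)
```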

Apply versions. Users pose a far greater threat to business data than infrastructure failure or physical disaster. For example, they may inadvertently overwrite or delete an object in an S3 bucket, which could have devastating impacts. Rather than go through a time-consuming and often tricky backup restoration to recover user-deleted data, consider S3 versioning. This process preserves every version of every object in an S3 bucket and tracks changes each time a PUT, COPY, POST or DELETE operation is performed. Default GET operations return the latest version of the S3 object, but the administrator can recover older, protected versions on demand. Use data lifecycle rules to delete unneeded versions so this practice doesn't balloon storage costs and data retention management efforts.
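A minimal sketch of that setup in boto3, assuming a hypothetical bucket name and a 90-day retention window for old versions:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-versioned-bucket"  # hypothetical name

# Turn on versioning so overwrites and deletes preserve prior versions.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Expire noncurrent versions after 90 days to keep storage costs in check.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
            }
        ]
    },
)
```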

The IT organization has three obligations for data protection, and they apply to Amazon S3 reliability as well as to any other storage mechanism:

  1. The level of protection must meet the needs of the workload and the business. Overbuilding storage resilience is expensive, and underbuilding it can cost much more.
  2. Test any data protection strategy that is put into place. Don't expect policies and automation to save the day until they're proven effective.
  3. Review and update data protection strategies periodically. The protection that works today could be inadequate tomorrow.
