Amazon Simple Storage Service (Amazon S3) is a scalable, high-speed, web-based cloud storage service. The service is designed for online backup and archiving of data and applications on Amazon Web Services. Amazon S3 was designed with a minimal feature set and created to make web-scale computing easier for developers.
Amazon S3 features
S3 provides 99.999999999% durability for objects stored in the service and supports multiple security and compliance certifications. An administrator can also link S3 to other AWS security and monitoring services, including CloudTrail, CloudWatch and Macie. There's also an extensive partner network of vendors that link their services directly to S3.
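The eleven-nines durability figure is easier to grasp as an expected loss rate. As a rough back-of-the-envelope calculation (assuming, for illustration, an independent annual loss probability per object):

```python
# Back-of-the-envelope durability math (illustrative only).
# 99.999999999% durability implies an annual object-loss
# probability of about 1e-11 per object.
durability = 0.99999999999
annual_loss_prob = 1 - durability  # ~1e-11

# Storing 10 million objects, the expected number lost per year:
objects = 10_000_000
expected_losses_per_year = objects * annual_loss_prob  # ~0.0001

# Equivalently, roughly one object lost every 10,000 years.
years_per_expected_loss = 1 / expected_losses_per_year
print(round(years_per_expected_loss))
```

This matches the way AWS itself frames the number: a customer storing 10 million objects can expect to lose a single object about once every 10,000 years.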
Data can be transferred to S3 over the public internet via access to S3 APIs. There's also Amazon S3 Transfer Acceleration for faster movement over long distances, as well as AWS Direct Connect for a private, consistent connection between S3 and an enterprise's own data center. An administrator can also use AWS Snowball, a physical transfer device, to ship large amounts of data from an enterprise data center directly to AWS, which will then upload it to S3.
In addition, users can integrate other AWS services with S3. For example, an analyst can query data directly on S3 either with Amazon Athena for ad hoc queries or with Amazon Redshift Spectrum for more complex analyses.
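As a sketch of what an ad hoc query looks like in practice, the snippet below builds the parameters that boto3's Athena `start_query_execution` call accepts. The table, database and results-bucket names are made up for illustration, and no request is actually sent:

```python
# Hypothetical sketch of an ad hoc Athena query over data in S3,
# expressed as the parameter dict boto3's start_query_execution
# accepts. Table, database and bucket names are invented.
query_params = {
    # The SQL runs directly against files stored in S3:
    "QueryString": "SELECT status, COUNT(*) FROM access_logs "
                   "GROUP BY status",
    "QueryExecutionContext": {"Database": "weblogs"},
    "ResultConfiguration": {
        # Athena writes its query results back to an S3 location:
        "OutputLocation": "s3://example-query-results/"
    },
}

# With valid AWS credentials, this would be submitted as:
# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(**query_params)
```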
How Amazon S3 works
Amazon S3 is an object storage service, which differs from other types of cloud storage, such as block and file storage. Each object is stored as a file along with its metadata and is assigned a unique identifier -- in S3, the object key. Applications use this identifier to retrieve an object, typically through a REST API; this contrasts with file and block storage, where data is accessed through a file system hierarchy or raw volumes rather than by key.
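Because access is key-based and RESTful, every object can be addressed by a URL. The snippet below builds a virtual-hosted-style S3 URL, the standard addressing scheme; the bucket name, region and key are invented for illustration:

```python
# Sketch of how an S3 object is addressed over HTTP.
bucket = "example-bucket"     # hypothetical bucket name
region = "us-east-1"
key = "backups/2017/db.dump"  # the object's key (its identifier)

# Virtual-hosted-style URL: the bucket is part of the hostname,
# the key is the path.
url = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
print(url)

# A GET request to this URL (with valid credentials, or a public
# ACL) retrieves the object; PUT uploads it, DELETE removes it.
```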
The S3 cloud storage service gives a subscriber access to the same systems that Amazon uses to run its own websites. S3 enables customers to upload, store and download practically any file or object that is up to five terabytes (TB) in size -- with the largest single upload capped at five gigabytes (GB).
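Objects larger than the single-upload cap must be sent through S3's multipart upload API, which splits the object into parts (of at least 5 MB each, up to 10,000 parts per object). A minimal sketch of the arithmetic involved in choosing a part size:

```python
import math

# Illustrative sketch: how many parts a multipart upload needs.
# S3 caps a single PUT at 5 GB; multipart uploads allow up to
# 10,000 parts of 5 MB to 5 GB each (the last part may be smaller).
MB = 1024 ** 2
TB = 1024 ** 4

def parts_needed(object_size: int, part_size: int) -> int:
    """Number of parts for a multipart upload at a given part size."""
    return math.ceil(object_size / part_size)

# A 5 TB object (the S3 maximum) in 100 MB parts:
five_tb = 5 * TB
print(parts_needed(five_tb, 100 * MB))  # exceeds the 10,000-part limit
```

For a 5 TB object, 100 MB parts would require more than 50,000 parts, so in practice the part size must be at least 5 TB / 10,000 (roughly 525 MiB) to stay within the limit.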
Amazon S3 storage classes
Amazon S3 comes in three storage classes: S3 Standard, S3 Infrequent Access and Amazon Glacier. S3 Standard is suitable for frequently accessed data that needs to be delivered with low latency and high throughput. S3 Standard targets applications, dynamic websites, content distribution and big data workloads.
S3 Infrequent Access offers a lower storage price for data that is needed less often, but that must be quickly accessible. This tier can be used for backups, disaster recovery and long-term data storage.
Amazon Glacier is the least expensive storage option in S3, but it is strictly designed for archival storage because it takes longer to access the data. Glacier offers retrieval options that range from minutes to hours.
A user can also implement lifecycle management policies to curate data and move it to the most appropriate tier over time.
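As a sketch of what such a policy looks like, the snippet below builds a lifecycle rule that moves objects to the infrequent-access tier after 30 days and to Glacier after 90, expressed as the configuration dict that boto3's `put_bucket_lifecycle_configuration` call accepts. The rule name, key prefix and bucket are made up, and no request is sent here:

```python
# Hypothetical lifecycle rule: transition objects under "logs/"
# to Standard-IA after 30 days and to Glacier after 90 days.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",       # invented rule name
            "Filter": {"Prefix": "logs/"},  # invented key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Applying it would look like this (requires AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration=lifecycle_config)
```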
Working with buckets
Amazon does not impose a limit on the number of items that a subscriber can store; however, buckets themselves have limitations: bucket names must be globally unique, and each bucket exists within a particular region of the cloud, chosen when the bucket is created. An AWS customer can use an Amazon S3 API to upload objects to a particular bucket, and can configure and manage buckets through the AWS Management Console, the command-line interface or the SDKs.
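A minimal sketch of bucket creation, shown as the parameters boto3's `create_bucket` call accepts. The bucket name is invented and no request is sent:

```python
# Hypothetical bucket creation parameters for boto3's create_bucket.
create_params = {
    "Bucket": "example-bucket-us-west-2",  # names must be globally unique
    # Outside us-east-1, the bucket's region is set via a
    # location constraint at creation time:
    "CreateBucketConfiguration": {"LocationConstraint": "us-west-2"},
}

# With credentials, this would be submitted as:
# import boto3
# boto3.client("s3").create_bucket(**create_params)
```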
Protecting your data
S3 buckets are kept private by default, but an admin can choose to make them publicly accessible. A user can also encrypt data prior to storage. Access rights can be granted to individual users, who must then present valid AWS credentials to download or access a file in S3.
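As an illustration, the snippet below builds the request parameters for an upload with server-side encryption, as boto3's `put_object` call accepts them. The bucket name, key and contents are invented, and no request is sent:

```python
# Hypothetical upload parameters for boto3's put_object, with
# S3-managed server-side encryption enabled.
put_params = {
    "Bucket": "example-bucket",        # invented bucket name
    "Key": "reports/q3.pdf",           # invented object key
    "Body": b"...file bytes...",       # placeholder contents
    "ServerSideEncryption": "AES256",  # SSE with S3-managed keys
    "ACL": "private",                  # the default; shown for clarity
}

# With credentials, the upload would be submitted as:
# import boto3
# boto3.client("s3").put_object(**put_params)
```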
When a user stores data in S3, Amazon tracks the usage for billing purposes, but it does not otherwise access the data unless required to do so by law.