
How much will AWS cold storage cost our enterprise?

Moving data to the cloud has its challenges, especially when it comes to cold storage. What should our enterprise expect in terms of cost and time spent?

Moving terabytes of data to or from cloud services can incur significant time and expense. However, there are several archival options for AWS cold storage that let enterprises move data without undue strain on resources.

A dedicated high-speed network service, such as AWS Direct Connect, establishes a private connection between a data center and an AWS cloud facility. The Ethernet link bypasses the public internet, avoiding the bottlenecks and congestion that often reduce bandwidth and lower data transfer performance. AWS Direct Connect is available at 1 Gigabit Ethernet (GbE) or 10 GbE speeds; IT teams can aggregate multiple links for additional bandwidth.
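For teams that provision through the AWS SDK, the connection request itself is small. The boto3 sketch below asks for a 1 Gbps connection; the location code and connection name are placeholders, and the physical cross-connect and virtual interfaces still have to be arranged separately.

```python
import boto3

# Minimal sketch: request a 1 Gbps Direct Connect connection.
# The location code and connection name are placeholders; the
# physical cross-connect and virtual interface setup follow later.
dx = boto3.client("directconnect", region_name="us-east-1")

response = dx.create_connection(
    location="EqDC2",                       # example Direct Connect location code
    bandwidth="1Gbps",                      # "1Gbps" or "10Gbps"
    connectionName="dc-to-aws-archive-link",
)
print(response["connectionId"], response["connectionState"])
```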

The AWS Import/Export Snowball is another way to transfer huge volumes of data, but it uses a portable out-of-band storage appliance. An administrator requests a large transfer job, and AWS ships the enterprise a storage appliance. The admin then connects the appliance to the network and transfers the desired data, which is encrypted. The company ships the device back to AWS, where the data is moved to storage using Amazon's internal network.
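Snowball jobs can also be requested programmatically. The boto3 sketch below is illustrative only: the bucket ARN, address ID and IAM role are placeholders, and it assumes the shipping address was created beforehand and the role grants Snowball access to the target bucket.

```python
import boto3

# Rough sketch of requesting a Snowball import job.
# Bucket ARN, address ID and role ARN below are placeholders.
snowball = boto3.client("snowball", region_name="us-east-1")

job = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-archive-bucket"}]},
    Description="Bulk archive import",
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
)
print(job["JobId"])
```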


The process sounds old school, but this data migration method can actually be faster and less expensive than moving data over the internet, depending on a company's network connection speed and the amount of data it needs to move. AWS Import/Export Snowball can be a good option for data volumes as low as 60 TB over a 1 Gigabit Ethernet connection, as the rough calculation below suggests. The appliance can also be a better alternative in the other direction, when restoring massive backups from cloud storage to the local data center.
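As a back-of-the-envelope check, assuming a fully saturated link and ignoring protocol overhead, 60 TB over a single 1 GbE connection already takes more than five days:

```python
# Back-of-envelope transfer-time estimate: 60 TB over a 1 GbE link,
# assuming the link is fully saturated and ignoring protocol overhead.
data_tb = 60
link_gbps = 1.0

bits_to_move = data_tb * 1e12 * 8           # terabytes -> bits
seconds = bits_to_move / (link_gbps * 1e9)  # bits / (bits per second)
print(f"{seconds / 86400:.1f} days")        # roughly 5.6 days
```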

Neither AWS Direct Connect nor AWS Import/Export Snowball moves data directly into Amazon Glacier. Data first transfers to Amazon Simple Storage Service (S3), and admins then move it into Glacier. There are other options to accelerate data transfers between an enterprise and AWS, such as Amazon Kinesis Firehose, which handles multiple streaming data sources; Amazon S3 Transfer Acceleration, for recurring storage jobs with incremental changes, usually over long distances; and storage gateways that cache data locally. But AWS Direct Connect and AWS Import/Export Snowball are probably the most desirable options for use with AWS cold storage.
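One common way to handle the S3-to-Glacier step is an S3 lifecycle rule that transitions objects to the Glacier storage class. The sketch below is a minimal example; the bucket name, prefix and 30-day timing are placeholders.

```python
import boto3

# Minimal sketch: lifecycle rule that moves objects under an example
# prefix to the GLACIER storage class 30 days after they land in S3.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": "cold/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```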

There are also some cost considerations for using AWS cold storage. For example, each object archived to Glacier carries added overhead for index and metadata -- roughly 8 KB retained in S3 plus 32 KB in Glacier, or about 40 KB per object in total. While that isn't much, the total can add up for a large number of objects. When an IT team plans to retain those objects in Glacier for months -- or even years -- that per-object metadata can add unexpected cost to the archives. It can be helpful to compress many small data objects into a single consolidated .tar or .zip file before uploading and moving them to AWS cold storage.
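To illustrate the consolidation approach, here is a minimal Python sketch that bundles a directory of small files into one compressed tarball and uploads it; the local path, bucket name and key are placeholders.

```python
import tarfile
import boto3

# Sketch: bundle many small files into one tar.gz before archiving,
# so per-object metadata overhead is paid once rather than per file.
with tarfile.open("archive-2017-01.tar.gz", "w:gz") as tar:
    tar.add("small_files/", arcname="small_files")

boto3.client("s3").upload_file(
    "archive-2017-01.tar.gz",
    "example-archive-bucket",
    "cold/archive-2017-01.tar.gz",
)
```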

There is an early deletion fee if an admin deletes or changes an object within three months of creating the archive -- no charge is incurred for changes made after three months. There are also per-object charges for moving data into Glacier, so transitioning a large number of objects can be expensive. IT teams can incur further costs when restoring data from Glacier to S3; AWS has traditionally priced restorations on the peak retrieval rate, measured in GB per hour.
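When data does need to come back, the restore is initiated through the S3 API. The sketch below is a minimal example that keeps the retrieved copy available for seven days; the bucket and key are placeholders, and the retrieval tier affects both speed and cost.

```python
import boto3

# Sketch: initiate a Glacier restore via the S3 API and keep the
# restored copy in S3 for 7 days. Bucket and key are placeholders.
s3 = boto3.client("s3")

s3.restore_object(
    Bucket="example-archive-bucket",
    Key="cold/archive-2017-01.tar.gz",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```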

Next Steps

Amazon Glacier has cold storage competition

Essential guide to AWS data management

Sling big data to and from the cloud



3 comments

What tricks can reduce cloud backup costs?
Some people have gotten some pretty unpleasant, expensive surprises by taking a lot of data out of Glacier at a time. https://itknowledgeexchange.techtarget.com/storage-disaster-recovery/user-finds-amazon-glacier-expensive-roach-motel-data/
Well, Mr. Bigelow is right to point out the small ways in which using AWS S3 and Glacier can add up to big bucks. I suspect that many public cloud storage users get sucked in by the low monthly cost per GB. So the data just keeps getting shipped out to AWS S3, and after a few years those tens of terabytes could be hundreds of terabytes, and that small monthly charge is now a much larger monthly bill that you will have to keep paying until you delete everything or bite the bullet and bring all of your cold and old data back to an on-premises object-based storage cluster, which is where you should have put it in the first place.

Storing all of your cold and old unstructured data in a public storage cloud is a mi$take. Data is "sticky," and public cloud storage providers know this. They expect to keep taking your money forever to store your cold and old unstructured data, backups, etc. Pay the ransom, get it back and keep it in your own private object-based storage cluster.

If you use public cloud infrastructure to host and run applications, then that is where your hot or transactional data needs to be stored. Everything else should be kept in a local object-based storage cluster on your premises or in a colo site, which is under your control. You can achieve the magic "penny per GB per month" internally without all of the add-on charges for "touching" your data.
