When a disaster recovery plan involves the availability of multiple AWS sites actively running -- or capable of running -- the same application, IT architects must carefully consider the integrity of multiple data sources in the cloud.
Simple backup scenarios are usually forgiving: a data source such as a database can be replicated to another local or remote (cloud) volume to produce a recent copy of the data. The biggest challenges here are bandwidth and retention. Lean bandwidth and the latency of long replication distances can stretch the recovery point objectives (RPOs) and recovery time objectives (RTOs) a backup effort can achieve. Technologies such as data deduplication and differential backups help mitigate those RPO and RTO challenges. These are well-worn ideas in traditional on-premises data centers, and they apply to backups in the cloud as well.
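A rough calculation shows how bandwidth bounds the achievable RPO. The sketch below is illustrative only -- the function name and the sample figures are assumptions, not AWS limits -- but it captures the tradeoff the paragraph describes: if change data accumulates faster than the link can ship it, the replica falls progressively further behind.

```python
def min_rpo_seconds(daily_change_gb: float, link_mbps: float,
                    dedup_ratio: float = 1.0) -> float:
    """Estimate replica lag (in seconds of change data) after one day.

    daily_change_gb: data churned per day at the primary site
    link_mbps: usable replication bandwidth, megabits per second
    dedup_ratio: e.g. 3.0 means dedup cuts transfer volume to one third
    """
    # convert daily churn to an average megabits-per-second rate
    change_mbps = (daily_change_gb * 8 * 1024) / 86_400
    effective_mbps = change_mbps / dedup_ratio
    if effective_mbps <= link_mbps:
        # the link keeps up; RPO is bounded by latency, not bandwidth
        return 0.0
    # backlog accumulated over one day, expressed as seconds of change data
    backlog_mbits = (effective_mbps - link_mbps) * 86_400
    return backlog_mbits / effective_mbps

# 500 GB/day of change over a 100 Mbit/s link with 3:1 dedup keeps up;
# 2 TB/day over a 20 Mbit/s link with 2:1 dedup does not.
ok_lag = min_rpo_seconds(500, 100, dedup_ratio=3.0)
bad_lag = min_rpo_seconds(2000, 20, dedup_ratio=2.0)
```

The same arithmetic explains why deduplication and differential backups matter: they shrink `effective_mbps` rather than requiring a fatter link.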
The real problems arise not in replicating data, but when multiple sites must share data processing. In Amazon Web Services (AWS), this can include a multisite strategy involving local and public cloud redundancy, as well as duplicating a deployment across multiple availability zones and regions. Different traffic routed to different application instances will inevitably produce different data in each data store or database. When an outage occurs and all traffic is routed to the alternate site, any differences in data can result in serious errors and disruptions to the business.
There must be a way to synchronize data between redundant AWS sites in real time. The typical approach is to define one of the redundant data stores or databases as a primary store, and have both AWS sites use that same primary data source. Changes to the primary data store are then mirrored -- immediately replicated -- to a secondary data source running at the redundant site. This works well in AWS, where Elastic Compute Cloud (EC2) instances can easily mirror or replicate data. When one site experiences an outage, the redundant site can rely on a current version of the data store or database.
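The primary/secondary pattern described above can be sketched in a few lines. This is a minimal in-memory model -- the class names are illustrative, not an AWS API -- showing both sites writing through one primary, which mirrors each change to the secondary immediately so the redundant site always holds a current copy.

```python
class Store:
    """A bare key-value data store standing in for a database."""
    def __init__(self):
        self.data = {}

class PrimaryStore(Store):
    """The single primary: applies each write locally, then mirrors it
    synchronously to the secondary at the redundant site."""
    def __init__(self, secondary: Store):
        super().__init__()
        self.secondary = secondary

    def put(self, key, value):
        self.data[key] = value
        # immediate replication -- the secondary never lags the primary
        self.secondary.data[key] = value

secondary = Store()
primary = PrimaryStore(secondary)

# both AWS sites route writes through the same primary
primary.put("order-1", "shipped")
primary.put("order-2", "pending")

# after a failover, the secondary already matches the primary
assert secondary.data == primary.data
```

In practice the mirroring step is handled by the database or storage layer (for example, a replica in another availability zone) rather than application code, but the invariant is the same: one writable primary, and a secondary that is never allowed to drift.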
This approach has several direct benefits. First, EC2 instances that rely on a single primary data source won't experience data desynchronization -- sometimes referred to as "split brain" -- which could otherwise result in erroneous or invalid data being read from a secondary data store before it has synchronized. Second, if trouble occurs at the primary site, the secondary data store and its EC2 instances can step in and take over load processing without data loss. There are numerous options for AWS storage replication, depending on storage volume and performance requirements. For example, AWS users can configure EC2 instances to mirror or replicate data from local data centers to AWS, or between AWS availability zones.
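The failover behavior amounts to a simple routing rule: use the primary while it is healthy, and fall back to the mirrored secondary otherwise. The sketch below is a bare illustration of that rule; the health-check callable is an assumption standing in for a real monitor (such as a Route 53 health check), and the endpoint names are hypothetical.

```python
def pick_endpoint(primary: str, secondary: str, primary_healthy) -> str:
    """Return the data-store endpoint traffic should use right now.

    primary_healthy: a zero-argument callable reporting the primary's
    status, standing in for an external health-check service.
    """
    return primary if primary_healthy() else secondary

# normal operation: all traffic uses the primary
endpoint = pick_endpoint("db-primary", "db-secondary", lambda: True)

# primary outage: traffic shifts to the current mirror, with no data
# loss because the secondary was kept synchronized
failover = pick_endpoint("db-primary", "db-secondary", lambda: False)
```

Because the secondary is a continuously mirrored copy, the switch changes only where reads and writes are routed, not what data is available.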