
What are the pros and cons of AWS traffic policies?

Our enterprise plans to implement traffic routing policies. While we understand our options, what are the benefits and drawbacks of Amazon Route 53 Traffic Flow rules?

Configuring traffic policies within a public cloud environment takes practice and planning. The Amazon Route 53 domain name service lets developers direct end users to specific applications, while Amazon Route 53 Traffic Flow customizes how developers direct that traffic. Route 53 Traffic Flow includes four traffic policies -- weighted, latency, geolocation and failover -- and each has its own strengths and weaknesses.

The weighted traffic rule has clear advantages for enterprises provisioning new infrastructure. This rule routes a portion of total AWS traffic to the test environment while sending the rest to the existing infrastructure. Assigning different weights to new and existing record sets helps developers verify the functionality of new infrastructure without disrupting routed traffic.

Each record set is assigned a numerical weight, and Route 53 distributes traffic in proportion to each weight's share of the total. When a domain has many record sets, this makes it difficult to see the overall picture of routed AWS traffic, and adding a new record set forces the developer to adjust the weights of the other record sets to preserve the intended proportions. The weighted rule also can't be combined with the latency rule for a particular domain or subdomain.
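As a rough sketch, a weighted rule is a group of record sets that share a name and type but carry different Weight values, submitted through Route 53's ChangeResourceRecordSets API (via the AWS CLI or an SDK). The payload below is built as a plain dictionary to show the shape; the domain name, IP addresses and set identifiers are placeholders, and the 90/10 split is just an example of routing a small slice of traffic to a test environment.

```python
# Sketch: weighted record sets splitting traffic between existing and
# test infrastructure. Domain, IPs and identifiers are placeholders.

def weighted_record(set_id, ip, weight):
    """Build one weighted A record set in the shape Route 53's
    ChangeResourceRecordSets API expects."""
    return {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": set_id,  # must be unique within the group
        "Weight": weight,         # 0-255; 0 stops traffic to this set
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }

records = [
    weighted_record("existing", "203.0.113.10", 90),
    weighted_record("test", "203.0.113.20", 10),
]

# Route 53 sends each set a fraction of traffic equal to its weight
# divided by the sum of all weights -- so adding a record set changes
# every other set's effective share unless weights are readjusted.
total = sum(r["Weight"] for r in records)
for r in records:
    print(f"{r['SetIdentifier']}: {r['Weight'] / total:.0%} of traffic")
# prints: existing: 90% of traffic
#         test: 10% of traffic
```

Note how the proportional math explains the drawback above: appending a third record set with weight 50 would silently shrink the existing set's share from 90% to 60% until the other weights are rebalanced.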


The latency rule is beneficial when fast response times are crucial, such as for internet banking apps that rely on a real-time data processing infrastructure. But latency-based routing can differ from expectations when calculating the lowest reported latency: it measures latency between the end user and the AWS region, rather than between the end user and the hosted web server or application. Moreover, the latency rule doesn't account for actual query latency -- the processing time the server spends connecting to back-end nodes to respond to the request.
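In configuration terms, latency routing is expressed by attaching an AWS Region to each record set; Route 53 then answers DNS queries with the set whose region has the lowest measured latency from the requester. A minimal sketch of that payload shape, with placeholder domain and IPs:

```python
# Sketch: latency-routed record sets. Domain and IPs are placeholders.

def latency_record(set_id, region, ip):
    """Build one latency-routed A record set; Route 53 answers with
    the record whose Region reports the lowest latency to the user."""
    return {
        "Name": "bank.example.com",
        "Type": "A",
        "SetIdentifier": set_id,
        "Region": region,  # AWS region hosting this endpoint
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }

records = [
    latency_record("us", "us-east-1", "203.0.113.30"),
    latency_record("eu", "eu-west-1", "203.0.113.40"),
]
# Caveat from the text: the decision is based on network latency to the
# region only -- back-end processing time inside the application never
# enters the calculation.
```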

The geolocation rule routes end-user requests based on where in the world they originate. However, when requests originate from a location that sits between two geographic regions -- and one region is slower than the other -- latency could increase, because one region will receive more traffic than the others.
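A geolocation rule maps each record set to a continent or country via the GeoLocation field, with a wildcard country code acting as the default for users who don't match any configured location. A sketch with placeholder values:

```python
# Sketch: geolocation record sets. Domain and IPs are placeholders.

def geo_record(set_id, geo, ip):
    """Build one geolocation A record set. `geo` uses Route 53's
    GeoLocation keys, e.g. {"ContinentCode": "EU"}; the wildcard
    {"CountryCode": "*"} marks the default location."""
    return {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": set_id,
        "GeoLocation": geo,
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }

records = [
    geo_record("europe", {"ContinentCode": "EU"}, "203.0.113.50"),
    geo_record("default", {"CountryCode": "*"}, "203.0.113.60"),
]
```

The default record matters for the drawback described above: users whose location falls between configured regions, or can't be mapped at all, land on whichever endpoint the wildcard points to, regardless of which region would actually serve them fastest.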

The failover rule can be associated with the weighted and latency rules. It operates based on health checks, which run at 30-second intervals by default (a 10-second fast interval is also available), and routes traffic away from an unhealthy endpoint to a healthy region once consecutive checks fail; total detection time depends on the check interval, the failure threshold and the record's time-to-live. Tying a failover policy to the weighted rule might not be the best option, because the healthy record sets won't absorb the failed traffic immediately -- it is redistributed proportionately across the remaining weights.

Associating a health check with the latency rule could take a region completely out of service, because traffic will pass to a healthy region regardless of latency; one region's traffic will then increase despite the geolocation or latency rules. A failover policy can also serve as a static backup by keeping record sets explicitly designated as failover, meaning a record set is only used when the other record sets go out of commission.

Amazon Route 53 translates domain names into IP addresses, and Route 53 Traffic Flow helps IT teams set up policies to route application and website traffic according to their needs. Whether choosing a weighted, latency, geolocation or failover policy, it's important to know the ins and outs of each. Read more about Amazon Route 53 Traffic Flow and its associated policies.

Latency and geolocation rules rarely fail, which is fortunate, because there is little you can do about it when they do. A better approach is to use subdomains for the end user: you can achieve latency- or geolocation-based routing by pointing the whole volume of traffic to a particular subdomain.

The failover rule could also result in cascading failure, similar to a loop in a health check. There are primary and secondary failover options, and adding a health check to both leads to cascading failures. Only one record set acts as the primary; the secondary set is brought into service once failover occurs. Therefore, when the primary record is healthy, there is no need to perform a health check on the secondary record.
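The primary/secondary pattern above can be sketched as two record sets whose Failover field marks their role, with a HealthCheckId attached only to the primary so the secondary behaves as a static backup. Domain, IPs and the health check ID are placeholders:

```python
# Sketch: failover record pair. Domain, IPs and the health check ID
# are placeholders.

def failover_record(set_id, role, ip, health_check_id=None):
    """Build one failover A record set; role is "PRIMARY" or
    "SECONDARY". A health check is attached only when supplied."""
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

# Health check on the primary only: the secondary is a static backup
# served whenever the primary's check fails, so checking it too adds
# nothing while the primary is healthy and risks cascading failovers.
primary = failover_record("main", "PRIMARY", "203.0.113.70",
                          health_check_id="hc-id-placeholder")
secondary = failover_record("backup", "SECONDARY", "203.0.113.80")
```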

The failover policy can be combined with all three other Amazon Route 53 traffic policies. Combining them helps build a centralized routing setup for DNS queries.

Next Steps

Set up AWS redundancy and DR

Prevent workload disruptions with an AWS DR plan

Replicate data across different AWS regions
