
AWS development tools shore up apps

AWS has a diverse customer base that deploys a breadth of apps. Certain methods such as IAM can help secure an app regardless of its content or target audience.

AWS provides a collection of useful services for application developers and IT administrators across many industries.

AWS enables startup companies to offer web-scale applications with no upfront investment -- depending on the desired configuration -- and Fortune 500 companies can migrate existing, expensive on-premises infrastructure to a managed cloud environment.

With companies of all sizes and capacities building applications in AWS, IT teams need to consider a variety of security and networking methods to protect their apps' performance. AWS development tools control who can access applications, where new applications are spun up and how apps are routed within the AWS environment.

The cloud provider handles access management through AWS access keys. The root credentials -- effectively super-user keys -- should never be used in an automated script. AWS Identity and Access Management (IAM) credentials define access for individual applications, scripts and tasks. IAM credentials come in two forms: roles, which grant services and workloads permissions, and users, which represent developers, operators and other IT professionals who need higher-level access to the AWS Management Console or AWS development tools. Create a separate IAM role or user for each task that requires access, so that each credential set is isolated to exactly the permissions it needs.

For example, if a project consists of two parts, one that allows users to submit files to Simple Storage Service (S3) and another that analyzes those files and writes metadata to Amazon DynamoDB, create an IAM role for the writer process and another for the worker process. The writer process should only be allowed to access the specific S3 bucket with write permissions. The worker process needs read permissions on the same S3 bucket as well as write permissions on the DynamoDB table it populates.
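The least-privilege split described above can be sketched as two IAM policy documents. This is a minimal illustration, not a complete deployment; the bucket name, table ARN and account ID are hypothetical placeholders, and the exact actions your processes need may differ.

```python
import json

# Hypothetical resource names -- substitute your own bucket and table.
BUCKET_ARN = "arn:aws:s3:::example-upload-bucket"
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/example-metadata"

# Policy for the writer role: write-only access to the one S3 bucket.
writer_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": f"{BUCKET_ARN}/*",
    }],
}

# Policy for the worker role: read the same bucket, write the DynamoDB table.
worker_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"{BUCKET_ARN}/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
            "Resource": TABLE_ARN,
        },
    ],
}

print(json.dumps(writer_policy, indent=2))
```

Note that neither policy grants EC2 or account-level permissions, which is exactly what limits the blast radius if one credential set leaks.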

It's best to start with the fewest permissions possible and add more as needs grow. That way, exposure of one set of credentials doesn't cause cloud-wide issues. If credentials for the writer process are exposed but permissions are properly configured, an intruder can't terminate all of an enterprise's Elastic Compute Cloud (EC2) instances -- a lesson at least one business learned the hard way.

All official AWS SDKs support IAM roles assigned to EC2 instances or EC2 Container Service tasks by default; those SDKs do not require roles to be hard-coded in any config files. When working locally or anywhere outside of the AWS ecosystem, AWS credentials may be passed into scripts by setting up AWS profiles in the ~/.aws/credentials file.
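The ~/.aws/credentials file is a simple INI file with one section per profile. A minimal sketch might look like the following; the key values and the "staging" profile name are placeholders, not real credentials:

```ini
; ~/.aws/credentials -- placeholder values, not real keys
[default]
aws_access_key_id = AKIA...EXAMPLE
aws_secret_access_key = wJal...EXAMPLEKEY

[staging]
aws_access_key_id = AKIA...EXAMPLE2
aws_secret_access_key = wJal...EXAMPLEKEY2
```

The SDKs pick up the default profile automatically; a named profile is typically selected with the AWS_PROFILE environment variable or the SDK's profile option.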

Choosing the right AWS region


AWS is split into multiple geographic regions. Within each region are multiple data centers called availability zones (AZs). Within a given region, it's important to balance a workload across multiple AZs and allow for failover from one to another.

A common mistake is to treat regions like AZs. Although AWS provides a fast connection between regions, it's not nearly as fast as the connections between AZs. A back end with servers in US-West-1 that uses DynamoDB in US-East-1 will experience significant delays, as each connection to DynamoDB passes over the open internet instead of AWS' internal network.

An IT team should provision resources in close geographic proximity to the majority of its end users, even though U.S. regions are the least expensive. If users are spread across multiple geographic regions, set up a copy of the entire application stack in each region, and synchronize back-end databases regularly with AWS development tools such as Lambda.

After choosing a region, high-availability applications need redundant instances in multiple AZs within that region. Amazon's service-level agreement states that any single AZ can go down without affecting its uptime statistics. For example, if US-East-1a goes down but every other AZ is working, Amazon counts that as a performance issue, not region unavailability. According to Amazon, region unavailability occurs only when more than one AZ in which you run instances, within the same region, becomes unavailable. To mitigate this, make sure systems automatically fail over to instances in other AZs when one goes down. Using AWS Elastic Load Balancing with instances in multiple AZs is one solution.
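The failover behavior a load balancer provides can be sketched in a few lines: traffic is only routed to instances in AZs that pass health checks. This is a simplified model, not the ELB implementation; the instance IDs and zone names are hypothetical.

```python
# Hypothetical instances spread across three AZs in one region.
instances = {
    "us-east-1a": ["i-0aaa1", "i-0aaa2"],
    "us-east-1b": ["i-0bbb1"],
    "us-east-1c": ["i-0ccc1"],
}

def healthy_targets(instances, unhealthy_zones):
    """Return instances eligible for traffic, skipping failed AZs."""
    return [
        instance
        for zone, zone_instances in instances.items()
        if zone not in unhealthy_zones
        for instance in zone_instances
    ]

# With us-east-1a down, traffic fails over to the remaining zones.
targets = healthy_targets(instances, unhealthy_zones={"us-east-1a"})
print(targets)
```

Because the application spans three AZs, the simulated outage of us-east-1a leaves two zones of capacity serving traffic.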

Some services, such as AWS Lambda and DynamoDB, are zone-independent. While developers can choose a region for these AWS development tools, the cloud provider automatically chooses the best zone and handles outages.

Benefits of Route 53 DNS

Amazon Route 53 is AWS' DNS service. Companies can purchase and manage domains directly through the AWS Management Console, or purchase domains from a third-party provider and simply point the name servers at Route 53.

Route 53 provides DevOps teams with several advantages when hosting within AWS. Specifically, Route 53 integrates directly with S3, allowing teams to point a custom domain name at an S3 endpoint. It also integrates directly with Amazon CloudFront and indirectly with AWS Elastic Beanstalk and Amazon API Gateway.

Perhaps the most useful feature of Route 53 is its support for both failover- and geolocation-based routing. Failover routing lets DevOps teams maintain high availability even during catastrophic failures at AWS data centers, such as an entire region outage. It also enables blue/green deployments with automated rollbacks if a new release goes wrong.

Geolocation-based routing gives end users low-latency access to the full application stack regardless of their location. It works by specifying a record set to direct users to based on their physical locations. This policy can be combined with failover-based routing to direct users to the closest regional endpoint, with a failover rule that routes them to another region if the closest one isn't working.
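The combined geolocation-plus-failover policy can be modeled as a small resolution function: pick the endpoint for the user's region, and fall back to another healthy region when that endpoint fails its health check. This is a sketch of the routing logic, not Route 53 itself; the region codes and endpoint names are hypothetical.

```python
# Hypothetical geolocation routing table: region code -> regional endpoint.
geo_routes = {
    "NA": "na.example.com",
    "EU": "eu.example.com",
    "AS": "as.example.com",
}
DEFAULT_ENDPOINT = "na.example.com"

def resolve(user_location, healthy_endpoints):
    """Pick the geographically matched endpoint if healthy, else fail over."""
    primary = geo_routes.get(user_location, DEFAULT_ENDPOINT)
    if primary in healthy_endpoints:
        return primary
    # Failover: first healthy endpoint in a fixed preference order.
    for endpoint in geo_routes.values():
        if endpoint in healthy_endpoints:
            return endpoint
    raise RuntimeError("no healthy endpoints")

# A European user is routed elsewhere while eu.example.com is unhealthy.
print(resolve("EU", healthy_endpoints={"na.example.com", "as.example.com"}))
```

In Route 53 terms, the health check determines whether the geolocation record answers, and the failover record supplies the alternative.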


Using VPC

AWS launched Amazon Virtual Private Cloud (VPC) in 2009. The service lets DevOps staff manage networks within AWS that are specific to their application stacks. Teams can configure an isolated part of the AWS network that connects to the outside world either through AWS or through a virtual private network directly to a company office.

With VPC, DevOps teams get basic network controls, such as configuring subnets for direct external access or setting up routing rules that only allow traffic to specific destinations. The service also lets teams configure cloud-based network address translation (NAT) gateways, which can be assigned an Elastic IP address and are useful when an application must access an API that restricts access by IP address. A NAT gateway allows multiple back-end servers -- or Lambda functions -- to reach an outside resource through the same IP address.

All newly created AWS accounts come with a default VPC. It works for most application stacks and is much easier than configuring a custom VPC. Note that each VPC can have multiple subnets, but each subnet exists in only one AZ. Because AWS states that any given AZ may go down without impacting total service availability, high-availability applications must span multiple AZs. An application hosted in a single AZ cannot expect consistent uptime.
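Because each subnet lives in exactly one AZ, spanning a VPC across zones means carving its CIDR block into one subnet per AZ. A quick sketch with Python's standard ipaddress module shows the arithmetic; the /16 block and zone names are hypothetical examples.

```python
import ipaddress

# Hypothetical VPC CIDR block.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into /24 subnets and assign one per availability zone.
subnets = list(vpc_cidr.subnets(new_prefix=24))
az_subnets = {
    "us-east-1a": subnets[0],
    "us-east-1b": subnets[1],
    "us-east-1c": subnets[2],
}

for zone, subnet in az_subnets.items():
    print(zone, subnet)
```

Each zone gets a non-overlapping /24, so instances in any AZ can be addressed within the VPC while the application stays resilient to a single-zone outage.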


This was last published in September 2016

