
Use AWS Data Pipeline, other tools to stop instances

Developers can take different approaches to automatically spin up and shut down EC2 instances, either on a schedule or in response to load.

With AWS, in theory, enterprises only have to pay for the resources they use. In practice, however, IT teams need to explicitly tell AWS to shut down unnecessary instances. Otherwise, it's like leaving the car engine running all night without going anywhere.

Developers can automatically provision servers that are only used during certain hours and then turn them off. They can also launch a server for testing or for a data-processing scenario and then turn it off when the process is complete. There are several straightforward approaches baked into AWS to handle both of these scenarios. In addition, various scripts and third-party tools can make it easier to manage the process.

Some starting points for this include Auto Scaling, AWS Data Pipeline and CloudWatch alarm actions. Each of the tools can run on its own without the need for a separate server or dependence on an on-premises service. Developers can also create scripts to manage provisioning and decommissioning of Elastic Compute Cloud (EC2) instances that run on a basic Linux EC2 instance, a developer's workstation or a basic Linux server.

Auto Scaling EC2 instances to zero

AWS Auto Scaling makes it easy to grow a group of servers in response to traffic and then shrink the server cluster as loads decrease. It is possible to scale a group down to zero instances on a schedule and then start it up to one instance at certain times.

To run an instance on a recurring schedule, developers create an Auto Scaling launch configuration and group, sized for the minimum instance runtime needed. A user data script can run at shutdown to back up instance data before the instance stops. Developers then create a recurring scheduled action that lowers the group's capacity to zero, which terminates the instance. It is also important to configure Auto Scaling so it doesn't automatically replace unhealthy instances; otherwise, it will start a new instance as soon as it shuts down the server.
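The recurring scheduled actions described above can be sketched with the AWS CLI. The group name, times and sizes below are hypothetical, and the commands assume the AWS CLI is configured with credentials that have Auto Scaling permissions:

```shell
# Sketch: scale the group "nightly-batch" down to zero at 8 p.m. UTC
# and back up to one instance at 6 a.m. UTC, every day.

aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name nightly-batch \
    --scheduled-action-name scale-to-zero \
    --recurrence "0 20 * * *" \
    --min-size 0 --max-size 0 --desired-capacity 0

aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name nightly-batch \
    --scheduled-action-name scale-to-one \
    --recurrence "0 6 * * *" \
    --min-size 1 --max-size 1 --desired-capacity 1
```

Setting min, max and desired capacity all to zero guarantees the group cannot relaunch an instance until the morning action raises the limits again.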

Run CLI commands with AWS Data Pipeline

Setting up AWS Data Pipeline enables IT teams to run AWS CLI commands on a schedule. It also automatically writes logs to a Simple Storage Service (S3) bucket, and an AWS Identity and Access Management (IAM) role can be associated with the pipeline's processes to reduce the need for key management.

Smaller enterprises can take advantage of the free tier to keep costs to a minimum. Otherwise, a daily activity costs about $1 per month, with prices rising with pipeline complexity.

IT teams specify the business logic in a pipeline definition syntax file and then activate the pipeline; to change the behavior, developers edit the file and activate it again. The Task Runner application polls AWS Data Pipeline for tasks and runs them. The pipeline can be managed via the AWS Management Console, AWS CLI, AWS SDKs or the Query API. The Query API provides the most flexibility, but it requires developers to write code to handle low-level details, such as generating a hash to sign requests and handling errors.
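As a rough sketch of the pipeline definition syntax, the following hypothetical definition runs a stop-instances CLI command once a day via a ShellCommandActivity; the instance ID, bucket, schedule and object names are placeholders:

```json
{
  "objects": [
    {
      "id": "Default",
      "name": "Default",
      "scheduleType": "cron",
      "schedule": { "ref": "DailySchedule" },
      "pipelineLogUri": "s3://my-bucket/pipeline-logs/",
      "role": "DataPipelineDefaultRole",
      "resourceRole": "DataPipelineDefaultResourceRole"
    },
    {
      "id": "DailySchedule",
      "name": "DailySchedule",
      "type": "Schedule",
      "period": "1 day",
      "startDateTime": "2016-08-01T20:00:00"
    },
    {
      "id": "WorkerResource",
      "name": "WorkerResource",
      "type": "Ec2Resource",
      "instanceType": "t1.micro",
      "terminateAfter": "15 minutes"
    },
    {
      "id": "StopInstanceActivity",
      "name": "StopInstanceActivity",
      "type": "ShellCommandActivity",
      "runsOn": { "ref": "WorkerResource" },
      "command": "aws ec2 stop-instances --instance-ids i-0123456789abcdef0"
    }
  ]
}
```

The short `terminateAfter` value matters here: the small worker instance the pipeline launches to run the command is itself shut down minutes later, so the cleanup job doesn't become its own idle-instance problem.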

Set up CloudWatch alarm actions

Developers can configure Amazon CloudWatch alarm actions that automatically stop, terminate or recover an EC2 instance. A stopped instance essentially hibernates and can resume from where it left off; a terminated instance is permanently deleted, which may result in lost data. Alarm actions can shut down an instance when CPU utilization drops below a certain threshold, which can be a good option for shutting down completed data-processing or test jobs.
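The idle-CPU shutdown described above can be sketched with a single AWS CLI call. The alarm name, instance ID and thresholds are hypothetical, and the command assumes configured AWS credentials with CloudWatch and EC2 permissions:

```shell
# Sketch: stop an instance when its average CPU utilization stays below
# 10 percent for three consecutive 5-minute evaluation periods.

aws cloudwatch put-metric-alarm \
    --alarm-name stop-idle-batch-instance \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average \
    --comparison-operator LessThanThreshold \
    --threshold 10 \
    --period 300 \
    --evaluation-periods 3 \
    --alarm-actions arn:aws:automate:us-east-1:ec2:stop
```

Swapping the `ec2:stop` suffix of the action ARN for `ec2:terminate` would terminate the instance instead, which suits throwaway test servers but not instances whose data must survive.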

The actions are specified using alarm action Amazon Resource Names (ARNs). Default ARNs are not associated with security credentials, but developers can use the EC2ActionsAccess IAM role to launch actions securely with the same credentials as the developer. The EC2ActionsAccess role must be set up through the console.

It's not currently possible to use alarm actions with EC2-Classic instances, instances that aren't launched in an Amazon Virtual Private Cloud, Dedicated Instances or instances that use instance store volumes.

Developers also can use scripts or applications to manage EC2 instance provisioning. These can be set up to run on a minimal EC2 instance, an enterprise workstation or an on-premises server. Some useful free tools include the EC2 API tools, AutomatiCloud and Bitnami Cloud Tools for AWS. Commercial scheduling tools include ParkMyCloud and Skeddly.
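For the simplest script-based approach, a pair of cron entries on a small Linux server calling the AWS CLI is often enough. This is a minimal sketch; the instance ID and times are hypothetical, and it assumes the AWS CLI is installed with credentials available to the cron user:

```
# Hypothetical crontab fragment: stop a development instance at 7 p.m.
# and start it again at 8 a.m., Monday through Friday.
0 19 * * 1-5  aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
0 8  * * 1-5  aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

The trade-off versus the AWS-native options above is that the machine running cron must itself stay up, so this fits best when an always-on server already exists.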

Next Steps

What happens when an Auto Scaling group runs amok?

Apply these ELB tricks in your AWS environment

Basics for using Amazon EC2 instances

This was last published in August 2016
