The transition of corporate infrastructure into software-defined private clouds has transformed IT automation from a goal into a business imperative. Enterprise IT now needs to focus on DevOps and related automation tools that allow infrastructure to be managed as code rather than as unique physical instances.
Public cloud services such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud, which run on commodity servers, would be impossible to operate without extreme automation. But using the same set of tools across several sources of infrastructure is challenging, because each cloud platform has its own management console and interfaces. And most enterprises now use both internal systems and infrastructure as a service (IaaS).
AWS encapsulates its infrastructure automation tools and exposes them through APIs, and although it offers management and application orchestration services such as CloudFormation and OpsWorks, those tools only work within AWS. More than half of Fortune 500 companies currently use the supported version of Chef, the DevOps automation package, on their internal infrastructure. And because many of these organizations also use AWS, they need a way to integrate infrastructure automation between the two.
Deploying Chef on premises or in the cloud
A Chef deployment encompasses four elements:
- Server -- Acts as the central hub for one or more application environments
- Workstations -- Where developers write configuration recipes
- Nodes -- The machines that run a particular application
- Analytics -- An optional element that logs, audits and reports on Chef server activity
Organizations that have already made the DevOps transition to infrastructure as code will most likely have all four Chef elements installed on premises. For them, the goal is adding AWS nodes to an existing workload pool. In contrast, enterprises starting out with infrastructure automation will need a Chef server. There are three options for this:
- Self-managed using a pre-packaged download and private server
- Self-managed on AWS using either an Amazon Machine Image (AMI) from the AWS Marketplace or a manual install of open source Chef onto Elastic Compute Cloud (EC2)
- Software as a service using the hosted Chef service
This tip focuses on the pure AWS infrastructure automation option, in which all four Chef elements run as EC2 instances; developer workstations could also run on separate PCs. In this workflow, developers use pre-packaged cookbooks and custom code to build configuration recipes, which they upload to the Chef server. The server then directs the Chef client to deploy and configure cloud-resident nodes.
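As a minimal sketch of that workflow, a developer on a workstation might create and upload a recipe along these lines. The `webserver` cookbook name and the nginx recipe are illustrative assumptions, not details from this tip:

```shell
# Illustrative only: the "webserver" cookbook and its nginx recipe
# are hypothetical examples.
mkdir -p cookbooks/webserver/recipes

# A minimal Chef recipe declares desired state in Chef's Ruby DSL:
# install the nginx package and keep the service enabled and running.
cat > cookbooks/webserver/recipes/default.rb <<'EOF'
package 'nginx'

service 'nginx' do
  action [:enable, :start]
end
EOF

# From a configured workstation (knife.rb pointing at the Chef server),
# the cookbook would then be pushed to the server:
# knife cookbook upload webserver
```

The recipe describes the end state rather than the commands to reach it, which is what lets the same cookbook converge nodes on premises and on EC2 alike.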
Connecting Chef with AWS
Organizations with an existing Chef deployment can access and control EC2 nodes in a few ways. The best option, particularly for those with multiple cloud workloads that might be spread across different availability zones, is to use a virtual private cloud (VPC). Paired with a VPN connection, a VPC provides a private, encrypted link between a private data center and AWS resources. The EC2 nodes sit on a private subnet within the VPC, so the Chef server can reach them as if they were any other internal server.
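A hedged sketch of that setup with the AWS CLI follows; it assumes configured credentials, and the CIDR blocks and the VPC ID are placeholders:

```shell
# Sketch only: requires configured AWS CLI credentials.
# CIDR ranges and the VPC ID below are placeholders.

# Create the VPC and a private subnet for the Chef-managed nodes.
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 10.0.1.0/24

# A virtual private gateway, attached to the VPC and paired with a
# VPN connection from the data center, gives the on-premises Chef
# server encrypted access to that subnet.
aws ec2 create-vpn-gateway --type ipsec.1
```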
Another option is to access EC2 instances over SSH using the Chef Knife command-line interface. Knife can manage nodes, cookbooks, recipes, user roles and Chef client installations. Controlling EC2 instances requires an administrator to install the Knife EC2 plug-in on Chef workstations and open an SSH port in the AWS configuration. Once configured, developers can start, stop and list EC2 instances; configure and run new instances as Chef nodes; and apply Chef recipes to one or more nodes.
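That Knife workflow can be sketched as below. The AMI ID, key pair name, instance flavor and run list are placeholder assumptions, and the exact flags vary by knife-ec2 version:

```shell
# Install the EC2 plug-in on the Chef workstation.
chef gem install knife-ec2

# Launch an EC2 instance and bootstrap it as a Chef node over SSH,
# applying a run list. All identifiers are placeholders.
knife ec2 server create \
  -I ami-0abcdef12 \
  -f t2.micro \
  -x ubuntu \
  -S my-keypair \
  -r 'recipe[webserver]'

# List managed instances; delete them when no longer needed.
knife ec2 server list
# knife ec2 server delete i-0123456789abcdef0
```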
Putting recipes to use
AWS OpsWorks, an application management service based on Chef, is fully compatible with Chef recipes. Therefore, you can apply Chef recipes to any EC2 instance, helping with AWS infrastructure automation. However, it doesn't provide the flexibility of hosted or self-managed Chef to control resources across clouds. Cloud admins must turn to Chef server for that.
The most convenient option for running Chef server is a pre-packaged AMI from the AWS Marketplace, which takes care of porting and installation details and comes fully supported. Of course, convenience and support come at a price: the pre-packaged AMI is offered at a 25% markup over the base EC2 rate for the Chef server instance. Alternatively, admins can download open source Chef and install it on either Ubuntu or Red Hat servers.
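For the self-managed route on an Ubuntu EC2 instance, the install reduces to a few commands. This is a sketch: the package filename is a wildcard for whatever release is downloaded, and the user and organization values are examples:

```shell
# Sketch of a self-managed Chef server install on Ubuntu.
# Download the current chef-server-core .deb first; the wildcard
# stands in for the release filename.
sudo dpkg -i chef-server-core_*.deb
sudo chef-server-ctl reconfigure

# Create an admin user and an organization; all values are examples.
sudo chef-server-ctl user-create admin Admin User admin@example.com \
  'password' --filename admin.pem
sudo chef-server-ctl org-create myorg 'My Organization' \
  --association_user admin --filename myorg-validator.pem
```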
Running Chef server on AWS allows IT teams to manage individual EC2 instances or machine clusters; it also exploits existing cookbook recipes to manage other AWS resources, including security groups, Elastic Load Balancers and Elastic Block Store volumes. It's even possible to integrate Chef with CloudFormation to manage and update Auto Scaling groups.
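As one hedged example of driving AWS resources from a cookbook, the community `aws` cookbook exposes resources such as `aws_ebs_volume`. The cookbook name, device and volume size below are illustrative, and real use requires AWS credentials available to the node:

```shell
# Illustrative recipe fragment using the community "aws" cookbook's
# aws_ebs_volume resource. Names, device and size are examples only.
mkdir -p cookbooks/storage/recipes
cat > cookbooks/storage/recipes/default.rb <<'EOF'
# Create a 50 GB EBS volume and attach it to this node.
aws_ebs_volume 'data_volume' do
  size 50
  device '/dev/sdf'
  action [:create, :attach]
end
EOF
```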
Integrating Chef with AWS is relatively easy and extends Chef's capabilities into the public cloud, but Chef isn't the only configuration management tool. Organizations moving into IaaS should also evaluate Ansible, Puppet and SaltStack. Each tool works with the major IaaS vendors and can provide a common platform for consistent application and system configuration, deployment, and lifecycle management.