A serverless architecture consists of infrastructure resources managed by a cloud provider. This enables developers to focus more on actual code and less on administration details.
And while AWS Lambda is by far the most popular option for this approach, there are other tools on the platform that can be used to deploy a serverless architecture on AWS. In this tip, we'll explain how to use other AWS compute, storage and networking offerings to deploy a serverless application.
AWS Fargate
AWS Fargate is the serverless compute alternative to AWS Lambda. This operational mode within Amazon Elastic Container Service (ECS) provides compute resources for Docker containers, without the need to manage EC2 instances. Users configure the required CPU and memory capacity, and Fargate launches and manages the containers.
Fargate has a few advantages over AWS Lambda, such as higher portability. Unlike Lambda, code in Docker containers doesn't need to follow a specific pattern. And unlike Lambda functions, Fargate containers can run for any period of time without timeout restrictions.
Lambda functions have their own advantages over Fargate containers, such as potentially lower cost, higher code modularity, simpler scalability and typically a more agile software delivery model. A Fargate versus AWS Lambda choice will depend on your application requirements, but Fargate is definitely a good option for a serverless architecture on AWS.
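As a sketch of how a Fargate launch looks in practice, the function below builds the parameters you would pass to the ECS `run_task` API (for example via boto3's `ecs.run_task(**params)`). The cluster, task definition and subnet names are hypothetical placeholders.

```python
def fargate_run_task_params(cluster, task_definition, subnets):
    """Build parameters for an ECS run_task call using the Fargate launch type."""
    return {
        "cluster": cluster,
        "taskDefinition": task_definition,
        "launchType": "FARGATE",  # no EC2 instances to provision or manage
        "count": 1,
        "networkConfiguration": {
            # Fargate tasks require awsvpc networking
            "awsvpcConfiguration": {
                "subnets": subnets,
                "assignPublicIp": "ENABLED",
            }
        },
    }

params = fargate_run_task_params("demo-cluster", "demo-task:1", ["subnet-12345"])
```

Note there is no instance type anywhere in the request; CPU and memory are set on the task definition itself, and AWS places the container for you.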
Amazon API Gateway
With API Gateway, developers can create API front ends and expose application functionality to external systems and client applications, all without managing any of the underlying infrastructure. Even though integrating API Gateway with Lambda functions is a common serverless pattern, you can also use HTTP integrations to connect API Gateway with other back-end components, such as Application Load Balancers, ECS Fargate containers or even direct calls to AWS service APIs.
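An HTTP integration of this kind can be declared through API Gateway's OpenAPI extension. The fragment below, expressed as a Python dict for illustration, proxies all requests for a path straight to a back end such as an Application Load Balancer; the load balancer hostname is a hypothetical placeholder.

```python
# An x-amazon-apigateway-integration block of type http_proxy forwards the
# request unmodified to the target URI -- no Lambda function involved.
integration = {
    "x-amazon-apigateway-integration": {
        "type": "http_proxy",
        "httpMethod": "ANY",
        "uri": "http://my-alb-123.us-east-1.elb.amazonaws.com/{proxy}",
    }
}
```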
Amazon DynamoDB and Aurora Serverless
There are two database options for serverless architectures on AWS: Amazon DynamoDB for a NoSQL implementation, or Amazon Aurora Serverless for a relational database.
With DynamoDB, you can create a NoSQL table and immediately insert, update, delete or read records from it, without launching any servers. The only capacity settings are read/write capacity units, but you can choose the on-demand option, which automatically allocates read/write capacity based on the current load.
Located in the Amazon Relational Database Service (RDS) console, Aurora Serverless enables you to create a relational database and assign a minimum and maximum allocation of CPU and memory, without launching any RDS instances. In addition, you can configure Auto Scaling behavior to scale compute capacity up or down, and pause it entirely during periods of inactivity. The end result is a serverless relational database.
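The on-demand DynamoDB option mentioned above shows up as a single billing-mode flag at table creation. The helper below builds the parameters for a `create_table` call (for example boto3's `dynamodb.create_table(**table_params)`); the table and key names are hypothetical.

```python
def on_demand_table_params(table_name, hash_key):
    """Parameters for a DynamoDB table in on-demand (pay-per-request) mode."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": hash_key, "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": hash_key, "KeyType": "HASH"}],
        # PAY_PER_REQUEST means there are no read/write capacity units to size;
        # capacity is allocated automatically based on traffic.
        "BillingMode": "PAY_PER_REQUEST",
    }

table_params = on_demand_table_params("orders", "order_id")
```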
AWS Glue
If you're developing an application that requires data transformation, you might need AWS Glue, a serverless extract, transform, load (ETL) service. With AWS Glue, you define data sources and targets in S3 -- called Data Catalogs -- as well as transformation logic -- called jobs -- based on your application requirements. Then, you can schedule and trigger these jobs as needed.
Data Catalogs and jobs in AWS Glue are delivered to developers in a serverless way, without the developer having to manage infrastructure operations. If you were considering provisioning EC2 instances to catalog and transform data used by your applications, AWS Glue would handle this task for you.
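Triggering one of these jobs from application code is a single API call. The sketch below builds the parameters for Glue's `start_job_run` operation (as exposed by boto3's `glue.start_job_run(**run_params)`); the job name and S3 paths are hypothetical, and passing custom arguments with the `--key` convention is how Glue jobs receive them.

```python
def glue_start_job_run_params(job_name, source_path, target_path):
    """Parameters for starting a Glue ETL job run with custom job arguments."""
    return {
        "JobName": job_name,
        # Glue passes job arguments to the script using the --key convention
        "Arguments": {
            "--source_path": source_path,
            "--target_path": target_path,
        },
    }

run_params = glue_start_job_run_params(
    "nightly-etl", "s3://raw-data-bucket/input/", "s3://curated-data-bucket/output/"
)
```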
Amazon Athena
Amazon Athena is also a serverless product that doesn't require users to configure or manage any of the infrastructure required to analyze data. This is a significant advantage, given that many use cases for Athena involve large data sets, which would otherwise require the additional complexity of launching and managing multiple servers in the cloud.
Athena enables developers to analyze data stored in S3, using SQL syntax. This means applications can extract meaningful information from logs or any structured data set stored in S3. You can submit queries using the Athena API, and applications can even fetch results from previous query executions. To use Athena, create virtual tables with a CREATE EXTERNAL TABLE statement, which defines the field structure as well as a location in S3.
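A minimal sketch of both steps follows: a virtual table defined over tab-delimited logs in S3, and the parameters for Athena's `start_query_execution` API call against it. The bucket names and columns are hypothetical.

```python
# A virtual table: no data is loaded; Athena reads the files in the S3
# location at query time.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS access_logs (
    request_time string,
    status int,
    uri string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
LOCATION 's3://my-log-bucket/access-logs/'
"""

# Parameters for athena.start_query_execution(); Athena writes query
# results to the S3 output location, where applications can fetch them later.
query_params = {
    "QueryString": "SELECT status, count(*) AS hits FROM access_logs GROUP BY status",
    "ResultConfiguration": {"OutputLocation": "s3://my-athena-results/"},
}
```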
Amazon Kinesis
Amazon Kinesis is a real-time data ingestion and processing service. Using Kinesis, applications can receive, transform, analyze or store large amounts of data in real time, without requiring the user to manage infrastructure. When Kinesis receives data, it can store the data in S3 with Kinesis Firehose or analyze data in real time using Kinesis Data Analytics.
You can build a serverless real-time data processing app by integrating Kinesis with other serverless architecture services. You can configure Kinesis as an API Gateway integration, which means that a Kinesis stream can be exposed to client applications. Kinesis integrates with AWS Lambda to process incoming records, but you can also implement Fargate containers as consumers of Kinesis streams.
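From a producer's point of view, writing to a stream is one API call. The helper below builds the parameters for Kinesis' `put_record` operation (for example boto3's `kinesis.put_record(**record_params)`); the stream name and payload are hypothetical.

```python
import json

def kinesis_put_record_params(stream_name, payload, partition_key):
    """Parameters for writing one record to a Kinesis data stream."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps(payload).encode("utf-8"),  # Kinesis record data is bytes
        "PartitionKey": partition_key,  # records with the same key land on the same shard
    }

record_params = kinesis_put_record_params("clickstream", {"page": "/home"}, "user-42")
```

Consumers on the other end, whether Lambda functions or Fargate containers, receive these records in shard order per partition key.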
Amazon SQS
Released in 2006, Amazon Simple Queue Service (SQS) is a managed serverless message queuing mechanism. Developers create a queue and can immediately send messages to it. Once messages arrive at a queue, they can be consumed by other components by polling the queue for new messages. You don't have to manage any infrastructure, and even though SQS integrates with AWS Lambda, you can also launch Fargate containers to poll an SQS queue.
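A Fargate-style consumer is essentially a polling loop around the `receive_message` and `delete_message` calls. The sketch below accepts any client object with those two methods, so it runs here against a small in-memory stand-in instead of a real boto3 SQS client; the queue URL and message contents are hypothetical.

```python
def drain_queue(sqs_client, queue_url, handler):
    """Long-poll a queue, hand each message body to handler, then acknowledge it."""
    while True:
        resp = sqs_client.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling cuts down on empty responses
        )
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            handler(msg["Body"])
            # Deleting the message acknowledges it so SQS won't redeliver it
            sqs_client.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )

# In-memory stand-in so the sketch runs without AWS credentials.
class FakeSQS:
    def __init__(self, batches):
        self.batches = list(batches)
        self.deleted = []

    def receive_message(self, **kwargs):
        return {"Messages": self.batches.pop(0)} if self.batches else {}

    def delete_message(self, **kwargs):
        self.deleted.append(kwargs["ReceiptHandle"])

seen = []
fake = FakeSQS([[{"Body": "order-1", "ReceiptHandle": "rh-1"}]])
drain_queue(fake, "https://example-queue-url", seen.append)
```

In a real container the loop would run indefinitely rather than breaking on an empty response.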
Amazon SNS
Developers can use Amazon Simple Notification Service (SNS) to route messages between application components, without managing any servers. You can create an SNS topic and immediately be ready to subscribe components to it and send messages to that topic. Developers integrate components by sending messages to a topic and having subscribed components take action.
For example, CloudWatch Alarms and CloudWatch Events can send messages to a topic and trigger application responses. You can also configure applications to send messages to SNS. All of this can be done on virtually any scale.
Amazon S3
One of the first services to launch on the platform, Amazon S3 also operates in a serverless fashion. With S3, developers can store any number of files, up to 5 TB each, without managing or configuring any infrastructure.
It's not hard to find a serverless S3 example. Many of the serverless components described above integrate with S3, such as Athena, Kinesis and Glue. Any component with permissions to call S3 APIs, such as Fargate containers, can also interact with S3. Developers can also configure an API Gateway endpoint as a proxy to S3, which enables client applications to access and update objects in an S3 bucket without using AWS Lambda.
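One practical detail that makes those integrations work well is the object key layout. The helper below builds Hive-style date-partitioned keys, a hypothetical but common convention that Athena and Glue can use to prune partitions when querying; the prefix and file name are placeholders.

```python
from datetime import datetime, timezone

def partitioned_key(prefix, ts, name):
    """Build an S3 key with Hive-style date partitions (year=/month=/day=)."""
    return f"{prefix}/year={ts.year}/month={ts.month:02d}/day={ts.day:02d}/{name}"

key = partitioned_key(
    "logs", datetime(2020, 5, 17, tzinfo=timezone.utc), "app.log.gz"
)
```

With keys shaped this way, an Athena query filtered on a date range only scans the matching partitions instead of the whole bucket.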
AWS Step Functions
Sooner or later, most applications require some sort of asynchronous processing and job orchestration. AWS Step Functions enables developers to create state machines that perform user-defined workflows. Each step in a workflow can execute custom logic, evaluate conditions, send data or run tasks using a number of services on the platform.
Users can achieve serverless orchestration with AWS Step Functions by integrating with DynamoDB, Fargate, SNS, SQS and AWS Glue.
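As a sketch of what such a workflow looks like, the definition below is a hypothetical two-step state machine in Amazon States Language: run a Fargate task and wait for it to finish, then publish a completion message to an SNS topic. The cluster, task definition and topic ARN are placeholders; the `arn:aws:states:::...` resource ARNs are the Step Functions service-integration pattern.

```python
import json

definition = {
    "StartAt": "ProcessJob",
    "States": {
        "ProcessJob": {
            "Type": "Task",
            # The .sync suffix makes Step Functions wait for the task to finish
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {
                "Cluster": "demo-cluster",
                "TaskDefinition": "demo-task:1",
                "LaunchType": "FARGATE",
            },
            "Next": "NotifyDone",
        },
        "NotifyDone": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:job-events",
                "Message": "job complete",
            },
            "End": True,
        },
    },
}

# State machine definitions are submitted to Step Functions as JSON.
state_machine_json = json.dumps(definition)
```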