AWS may have been the first cloud provider to embrace serverless computing with AWS Lambda, but it isn't the only provider with a compelling serverless platform.
All three major cloud providers -- AWS, Microsoft and Google -- have FaaS offerings. There is also a growing number of third-party services, such as Webtask, but cloud users often turn to their cloud provider's service first, because it integrates more tightly with the rest of that provider's platform.
Let's compare AWS Lambda, Microsoft Azure Functions and Google Cloud Functions and review three key aspects of the cloud providers' serverless platforms -- language support, integrations and pricing.
Language support varies by platform
Every FaaS platform runs a single function of code, so the provider must support the specific language and version your function needs. You can't run Go on a Java runtime, and if your function requires Node 11 but the environment only offers Node 6, you will run into problems. Developers therefore need to choose a provider that supports their programming language.
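To make the single-function model concrete, here is a minimal sketch of a handler in Python; the handler signature follows Lambda's Python convention, but the event shape and handler name are hypothetical:

```python
import json

def handler(event, context):
    # The platform invokes this function once per event; "context" carries
    # runtime metadata such as the request ID and remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The entire deployable unit is this one function; everything else -- scaling, routing, process lifecycle -- belongs to the platform.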
Azure Functions supports a wide variety of languages out of the box and was the first of the three to natively support C#. It also supports Node.js, Java and Python. An older version of Azure Functions offers experimental support for several additional languages, but developers shouldn't rely on those experimental runtimes in production environments.
Google Cloud Functions is the most limited in terms of language options: it only supports Node.js, Python and Go. It is also usually slower to adopt new language versions. It does not currently support Node 10, which both Azure Functions and AWS Lambda do.
AWS Lambda natively supports Node.js, Python, Java, Ruby, C#, Go and PowerShell. Additionally, AWS gave developers more flexibility when it added custom runtimes. Now, developers can bring their own runtimes to the service platform, which expands the options to support almost any language. Just like Amazon Machine Images for EC2, Lambda's custom runtimes can be published through Lambda Layers, which enables open source communities to provide their own support for languages. This way, developers can use Node 11, C++ or even Bash on Lambda.
Developers can also use Lambda Layers to release other commonly used packages that must be built natively for Lambda functions, such as binary builds of FFmpeg, SQLite or Puppeteer. One of the most creative uses of Layers comes from IOpipe, which debugs and monitors serverless functions. IOpipe releases its monitoring support via Layers, so users can add monitoring to a function simply by including the Lambda Layer or referencing it in an AWS Serverless Application Model config file.
Integrations connect functions to events
Without integrations, a FaaS platform is no different from a traditional server. Serverless functions need to be connected to event sources, which trigger the function when something happens. For example, the most basic type of event is an HTTP event, which fires when a user visits a specific URL.
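As a sketch of the HTTP case, here is a Python handler that responds to an API Gateway proxy-style event; the field names follow that event format, but the handler itself is hypothetical:

```python
import json

def http_handler(event, context):
    # An HTTP-triggered function receives the request details in the
    # event object rather than reading from a socket; API Gateway's
    # proxy format puts the path and method at the top level.
    path = event.get("path", "/")
    method = event.get("httpMethod", "GET")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"path": path, "method": method}),
    }
```

Note that the function never listens on a port; the trigger delivers each request as a plain data structure and expects a response object back.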
In Azure Functions, these integrations are called bindings and triggers. All bindings and triggers are declared alongside the function code, in a function.json file, and more can be added by updating the file and reuploading the function. Azure supports a wide variety of events, including changes made in almost any of its cloud databases, as well as a cron-like system called timer triggers. Azure also supports both input and output bindings, which make it easier to perform tasks without repetitive code, such as sending messages via Twilio or emails via SendGrid.
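As a rough illustration, a function.json for an HTTP-triggered function that also writes to a storage queue might look like the following; the queue name and connection setting are placeholders:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "outputQueueItem",
      "queueName": "processed-items",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

The function code then reads from and writes to these named bindings without containing any queue or HTTP plumbing itself.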
In Google Cloud Functions, developers invoke functions through the popular Firebase framework or via direct HTTP invocations. Additionally, developers can invoke Cloud Functions from Pub/Sub topics, Cloud Storage changes or Stackdriver Logging. Each function supports only one type of invocation at a time, so a function can only serve a specific use case. For this reason, developers who want to trigger a Google Cloud Function from multiple sources should consider the Pub/Sub integration and publish to that topic whenever the function should run.
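The Pub/Sub pattern can be sketched in Python; Cloud Functions delivers the message payload base64-encoded in the event's data field, and the handler name here is hypothetical:

```python
import base64

def pubsub_handler(event, context):
    # A Pub/Sub-triggered background function; the message body arrives
    # base64-encoded under event["data"]. Returning the decoded payload
    # here is only for testability -- the platform ignores return values.
    payload = ""
    if "data" in event:
        payload = base64.b64decode(event["data"]).decode("utf-8")
    print(f"Received message: {payload}")
    return payload
```

Any system that can publish to the topic -- another function, a cron job, an application server -- can then trigger this one function, which is what makes Pub/Sub a useful catch-all integration point.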
Unlike its competitors, AWS Lambda can add multiple event sources to a single function. This means developers can decouple the development process from the integration sources. Developers can add new event sources as needed after they've already deployed a function.
For example, I use a function called indexRecordToAlgolia that takes a DynamoDB record and indexes it in the Algolia search platform. The function listens to a DynamoDB stream, and when records are written or updated, it converts them to the proper format and inserts them into Algolia. Because the event source is attached separately from the code, these actions run on their own and developers don't have to change any code or upload the function again. This is a great way to decouple the system that compiles records, such as users, from the system that indexes those records into a search environment.
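A rough Python sketch of this pattern follows; the function and helper names are hypothetical, and the Algolia client call is replaced by simply collecting the converted records:

```python
def to_plain_dict(image):
    # Convert a DynamoDB attribute-value map (e.g. {"S": "Jane"}) into a
    # plain dict; only string and number types are handled in this sketch.
    result = {}
    for key, attr in image.items():
        if "S" in attr:
            result[key] = attr["S"]
        elif "N" in attr:
            result[key] = float(attr["N"])
    return result

def index_records(event, context):
    # A DynamoDB Streams handler in the spirit of indexRecordToAlgolia:
    # pick up inserted or modified records and convert them for indexing.
    # A real implementation would push each record to the search index.
    indexed = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            image = record["dynamodb"].get("NewImage", {})
            indexed.append(to_plain_dict(image))
    return indexed
```

The handler knows nothing about who writes to the table; the stream attachment is configured outside the code, which is exactly the decoupling described above.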
Lambda also supports a wide variety of integrations beyond DynamoDB, including Kinesis, API Gateway and Amazon Simple Queue Service. These integrations mean developers write less bootstrapping code and can get right to the business logic. For example, through API Gateway, developers can trigger Lambda functions not only via HTTP endpoints, but also via WebSocket endpoints. API Gateway handles all of the complex translations and invokes Lambda functions that don't need to know any of the implementation details.
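The WebSocket case can be sketched as follows; the handler name is hypothetical, and the event fields follow API Gateway's WebSocket integration format:

```python
def websocket_handler(event, context):
    # A Lambda behind an API Gateway WebSocket API. API Gateway puts the
    # route and connection ID into requestContext, so the function never
    # touches the socket itself.
    ctx = event["requestContext"]
    route = ctx["routeKey"]  # "$connect", "$disconnect", or a custom route
    if route in ("$connect", "$disconnect"):
        return {"statusCode": 200}
    # Custom routes receive the client's message in event["body"];
    # ctx["connectionId"] identifies the client for sending replies.
    return {"statusCode": 200, "body": event.get("body", "")}
```

The function handles connection lifecycle and messages as ordinary events, while API Gateway owns the long-lived socket connections.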
Costs are difficult to compare
Serverless platforms have a reputation for being so cheap they're practically free. That's because FaaS platforms only charge for actual executions of code, plus a small cost for code storage. That said, there are meaningful variations in the vendors' pricing models.
Azure Functions has the most complex pricing. Azure charges per execution, as well as for the amount of memory used and the time the function takes to complete. As a result, you don't need to preprovision memory or CPU for your function, but the cost of each execution is harder to predict. Similar to Lambda, Azure Functions bills a minimum of 128 MB of memory and 100 ms of execution time per invocation. Azure Functions can run for a maximum of 10 minutes per invocation.
Google Cloud Functions also charges per execution and for execution time. Unlike Azure Functions, however, Google Cloud Functions requires you to provision your function up front and charges based on those allocated resources, rather than the memory or CPU actually used. Developers choose from five function sizes, with rates based on corresponding levels of CPU and memory. Google Cloud Functions also bills compute time in 100 ms increments. Cloud Functions have a maximum runtime of 540 seconds, or nine minutes, per invocation.
AWS Lambda charges a small flat fee per request, plus execution time billed in 100 ms increments. Similar to Google Cloud Functions, Lambda enables developers to allocate memory, from 128 MB up to a maximum of 3008 MB. Like Google and Azure, Lambda scales CPU allocation with memory, but AWS is not as forthcoming about the specific CPU values as Google Cloud Functions or Azure Functions. Users simply specify the amount of memory they want for their Lambda function, and the service allocates CPU proportionally. Lambda functions can run for a maximum of 15 minutes per invocation.
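To make duration-based billing concrete, here is a rough cost estimator in Python. The rates used are AWS's published on-demand Lambda prices at the time of writing ($0.20 per million requests and $0.0000166667 per GB-second) and vary by region and free-tier status, so treat this as an approximation:

```python
GB_SECOND_RATE = 0.0000166667    # USD per GB-second (assumed on-demand rate)
REQUEST_RATE = 0.20 / 1_000_000  # USD per request (assumed on-demand rate)

def lambda_cost(invocations, duration_ms, memory_mb):
    # Duration is billed in 100 ms increments, rounded up.
    billed_ms = -(-duration_ms // 100) * 100
    # GB-seconds scale with both allocated memory and billed duration.
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    return invocations * REQUEST_RATE + gb_seconds * GB_SECOND_RATE

# e.g. one million 120 ms invocations at 128 MB bills as 200 ms each,
# landing well under a dollar for the whole month.
monthly = lambda_cost(1_000_000, 120, 128)
```

The rounding up to 100 ms is worth noting: a function that consistently runs 110 ms costs nearly twice as much as one that runs 95 ms at the same memory size.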