Rather than requiring customers to write their own APIs to support business workloads in the cloud, providers such as Amazon Web Services (AWS) let customers create custom APIs through the Amazon API Gateway service. Those custom APIs can then connect new applications to workloads running in the AWS public cloud. And while APIs are a direct and reliable way to access workloads or services running in a cloud environment, API traffic can pose performance challenges for the cloud provider.
The point of providing a cloud-based service and API functionality is to allow thousands -- maybe even millions -- of third-party software deployments to access the same service. For example, a business might release an Android app to millions of smartphone users, and that app relies on back-end services in the cloud that are accessed through APIs. The result is API call traffic that is unpredictable and has the potential to overwhelm the cloud service -- causing poor app performance and leading to unhappy customers.
The solution is to cache and throttle API calls.
API throttling limits the rate at which API calls are processed. Amazon API Gateway allows users to set standard and burst throttling rates on a per-second basis. For example, a business setting up an API could set 1,500 calls per second as the standard throttling rate and 3,000 calls per second as the burst rate. Any API calls received over these limits produce an error, but the software development kit (SDK) in the endpoint app can detect the error and retry the call, so users might only see a brief delay in the app's response. AWS tracks all of the API calls and bills regularly based on the total number of calls and the amount of data moved out of the cloud.
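The retry behavior described above is typically implemented as exponential backoff with jitter. The sketch below is a simplified, hypothetical illustration of that pattern -- not the actual AWS SDK code -- using a stand-in exception for the 429 "Too Many Requests" error that API Gateway returns when a rate limit is exceeded:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for the 429 'Too Many Requests' error from a throttled API."""

def call_with_backoff(api_call, max_retries=5, base_delay=0.05):
    """Retry a throttled API call with exponential backoff plus jitter,
    similar in spirit to what the AWS SDKs do automatically."""
    for attempt in range(max_retries):
        try:
            return api_call()
        except ThrottledError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Wait base_delay * 2^attempt seconds, plus random jitter,
            # before retrying so the service can recover.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Demo: a fake endpoint that rejects its first two calls, then succeeds.
calls = {"n": 0}

def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottledError("429 Too Many Requests")
    return {"status": "ok"}

result = call_with_backoff(flaky_endpoint)
```

Because the backoff delay grows with each failed attempt, a brief burst of throttle errors translates into a short pause for the end user rather than a hard failure.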
Caching is another means of managing API compute overhead. The idea is that many API requests have the same parameters and return the same results to the applications that call them. This redundant behavior wastes valuable compute power that the cloud service could better spend processing requests with unique parameters and results. When a cache -- sometimes called an API result cache -- is used to store the results of API calls, subsequent identical calls can be fulfilled from the cache. Caching usually returns results faster than waiting for the service to process the API call, and it avoids API throttling limits and mitigates the cost of API calls.
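The result-cache idea can be sketched in a few lines. This is a minimal, hypothetical illustration -- not how API Gateway implements its cache -- where responses are keyed by the call's parameters and expire after a time-to-live (TTL):

```python
import time

class ApiResultCache:
    """Minimal API result cache: responses are keyed by call parameters
    and expire after a time-to-live (TTL)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry timestamp, cached result)

    def get_or_call(self, fetch, **params):
        key = tuple(sorted(params.items()))
        entry = self._store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]            # cache hit: skip the real API call
        result = fetch(**params)       # cache miss: invoke the real API
        self._store[key] = (time.time() + self.ttl, result)
        return result

# Demo: count how often the underlying "API" is actually invoked.
backend_calls = {"n": 0}

def fetch_price(item):
    backend_calls["n"] += 1
    return {"item": item, "price": 9.99}

cache = ApiResultCache(ttl_seconds=60)
first = cache.get_or_call(fetch_price, item="widget")
second = cache.get_or_call(fetch_price, item="widget")  # served from cache
```

The second identical call never reaches the back end, which is exactly the compute savings the article describes.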
Amazon API Gateway users can enable a cache, select the desired cache size in gigabytes and set other preferences such as the time-to-live (TTL) of the cached data. However, API result caching is an additional Amazon cloud service feature, and it carries an additional monthly cost.
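As a rough illustration, stage-level caching can be configured through the AWS CLI. The command below is a sketch with placeholder IDs and example values (a 0.5 GB cache, 300-second TTL); check the current `aws apigateway update-stage` documentation for the exact patch paths supported in your region:

```shell
# Enable a 0.5 GB cache cluster on the "prod" stage and set a 300 s TTL.
# The rest-api-id and stage name below are placeholders.
aws apigateway update-stage \
    --rest-api-id abc123 \
    --stage-name prod \
    --patch-operations \
        op=replace,path=/cacheClusterEnabled,value=true \
        op=replace,path=/cacheClusterSize,value=0.5 \
        op=replace,path='/*/*/caching/ttlInSeconds',value=300
```

The cache cluster is billed hourly by size, which is the additional monthly cost the article mentions.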
Amazon API Gateway is a public cloud service that third-party software developers and service providers can use to programmatically access AWS compute services such as EC2 and Lambda. But like other AWS products, its pricing follows a pay-per-call model. It's a convenience, but strict cost reporting is essential, and the API itself should be thoughtfully designed to minimize the number of calls and keep costs down.