Amazon SageMaker is a service that enables a developer to build, train and deploy machine learning models for predictive or analytical applications in the Amazon Web Services (AWS) public cloud.
Machine learning offers a variety of benefits for enterprises, such as advanced analytics for customer data or back-end security threat detection, but these models can be difficult for IT professionals to build and deploy without prior expertise. Amazon SageMaker aims to address this challenge with built-in, commonly used machine learning algorithms, along with other tools, to simplify and accelerate the process.
How Amazon SageMaker works
Amazon SageMaker supports Jupyter notebooks, open source web-based documents that developers use to create and share live code. For SageMaker users, these notebooks come preloaded with the drivers, packages and libraries for common deep learning platforms and frameworks. A developer can launch a prebuilt notebook, which AWS supplies for a variety of applications and use cases, and then customize it for the data set and schema to be trained on. Developers can also use custom-built algorithms written in one of the supported ML frameworks, or any code that has been packaged as a Docker container image. SageMaker can pull data from Amazon Simple Storage Service (S3), and there is no practical limit to the size of the data set.
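The Docker-based custom-algorithm path relies on a simple filesystem contract inside the training container: SageMaker mounts each S3 data channel under a conventional input directory and uploads whatever the code writes to a conventional model directory. The sketch below is a hypothetical helper illustrating those paths; "train" is just an example channel name.

```python
# Conventional paths SageMaker uses inside a custom training container.
# Each S3 data channel is mounted under INPUT_BASE, and anything written
# to MODEL_DIR is uploaded back to S3 as the model artifact.
INPUT_BASE = "/opt/ml/input/data"
MODEL_DIR = "/opt/ml/model"

def train_data_path(channel="train"):
    """Resolve the in-container path for a named data channel."""
    return f"{INPUT_BASE}/{channel}"

print(train_data_path())  # -> /opt/ml/input/data/train
```

A custom container's training script typically reads its input from these channel paths and saves its trained model under the model directory before exiting.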
To get started, a developer logs into the SageMaker console and launches a notebook instance. SageMaker provides a variety of built-in algorithms, such as linear regression and image classification, or the developer can import his or her own algorithm.
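Besides the console, a notebook instance can be launched programmatically through the low-level API (boto3's sagemaker client exposes a create_notebook_instance call). A minimal sketch follows; the instance name and IAM role ARN are placeholders, and the actual API call is shown commented out because it requires AWS credentials.

```python
# Illustrative parameters for launching a SageMaker notebook instance.
# The name and role ARN below are placeholders, not real resources.
request = {
    "NotebookInstanceName": "demo-notebook",                    # placeholder name
    "InstanceType": "ml.t3.medium",                             # a small notebook instance type
    "RoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
}

# With credentials configured, the request would be submitted as:
# import boto3
# boto3.client("sagemaker").create_notebook_instance(**request)
```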
Then, when the developer is ready to train the model, he or she specifies the location of the data in S3 and the preferred instance type, then initiates the training process. SageMaker can also apply automatic model tuning, which searches over the algorithm's hyperparameters, the settings fixed before training begins, to find the combination that best optimizes the model.
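Automatic model tuning is driven by a declared objective metric and a set of hyperparameter ranges to search. The sketch below shows one such configuration in the shape used by the CreateHyperParameterTuningJob API; the metric name, parameter name and range are illustrative placeholders, not a recommendation.

```python
# Hedged sketch of an automatic-model-tuning configuration: the service
# searches the declared hyperparameter ranges for the values that best
# optimize the objective metric. All names and ranges are illustrative.
tuning_config = {
    "Strategy": "Bayesian",                      # search strategy
    "HyperParameterTuningJobObjective": {
        "Type": "Minimize",
        "MetricName": "validation:rmse",         # placeholder metric
    },
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            {"Name": "learning_rate",            # placeholder hyperparameter
             "MinValue": "0.01",
             "MaxValue": "0.2"},
        ]
    },
}
```

In practice this configuration would be passed, together with the training job definition, when creating a tuning job; SageMaker then launches multiple training runs with different hyperparameter values and keeps the best-performing model.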
When the model is ready to deploy, the service automatically operates and scales cloud infrastructure, using a set of SageMaker instance types that include several with GPU accelerators and that have been optimized for ML workloads. SageMaker deploys across multiple availability zones, performs health checks, applies security patches, sets up Auto Scaling and establishes secure HTTPS endpoints to connect to an app. A developer can track production performance and trigger alarms on changes via Amazon CloudWatch metrics.
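That monitoring works by putting CloudWatch alarms on the metrics SageMaker publishes under the AWS/SageMaker namespace, such as ModelLatency for a deployed endpoint. The sketch below builds one such alarm definition; the endpoint name, alarm name and threshold are placeholders, and the API call itself is commented out because it requires AWS credentials.

```python
# Illustrative CloudWatch alarm on a SageMaker endpoint's ModelLatency
# metric (reported in microseconds). Names and the threshold are placeholders.
alarm = {
    "AlarmName": "demo-endpoint-latency-alarm",     # placeholder
    "Namespace": "AWS/SageMaker",                   # SageMaker's metric namespace
    "MetricName": "ModelLatency",
    "Dimensions": [{"Name": "EndpointName", "Value": "demo-endpoint"}],
    "Statistic": "Average",
    "Period": 300,                                  # evaluate over 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 500000.0,                          # 500 ms, expressed in microseconds
    "ComparisonOperator": "GreaterThanThreshold",
}

# With credentials configured, the alarm would be created as:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```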
SageMaker security and pricing
Optionally, Amazon SageMaker encrypts models both in transit and at rest through the AWS Key Management Service (KMS), and API requests to the service are executed over a Secure Sockets Layer (SSL) connection. Additionally, SageMaker stores code in ML storage volumes, which are protected by security groups and can be encrypted at rest.
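Encryption at rest with a customer managed key is enabled by passing a KMS key identifier alongside the S3 output location when a training job is created. The sketch below shows that configuration fragment; the bucket name and key ARN are placeholders.

```python
# Hedged sketch: encrypting a training job's model artifacts at rest with a
# customer managed KMS key. The S3 path and key ARN are placeholders.
output_config = {
    "S3OutputPath": "s3://example-bucket/model-artifacts/",
    "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
}

# Passed as the OutputDataConfig of a create_training_job request, this
# tells SageMaker to encrypt the artifacts it writes to S3 with that key.
```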
AWS charges each SageMaker user for the compute, storage and data processing resources used to build, train, deploy and log machine learning models and predictions, along with the S3 charges to hold the data sets used for training and ongoing predictions.