AWS Data Pipeline helps customers manage and automate data-driven workflows. The service is useful for customers who have data spread across many AWS services and want to consolidate its processing under a single point of configuration and management. For example, a data scientist could define a job in Data Pipeline that reads log data from a storage service such as Amazon S3 every hour and transfers it to a relational database for later analysis. While the service is particularly suited to workflows already running on AWS, Data Pipeline can also connect to a customer's on-premises data sources, as well as third-party data sources.
Using a Data Pipeline template, customers can read data from a source, process it and automatically transfer the result to another system or service. Developers can access Data Pipeline through the AWS Management Console, the AWS Command Line Interface (AWS CLI) or the service APIs. Pricing depends on usage and can be free for low-frequency activities.
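A pipeline of the kind described above is expressed as a JSON definition that names data nodes, an activity and a schedule. The sketch below is a minimal, illustrative example of the hourly log-transfer scenario; the bucket path, table name and IDs are placeholders, not values from any real account.

```json
{
  "objects": [
    {
      "id": "HourlySchedule",
      "type": "Schedule",
      "period": "1 hour",
      "startDateTime": "2016-01-01T00:00:00"
    },
    {
      "id": "LogInput",
      "type": "S3DataNode",
      "schedule": { "ref": "HourlySchedule" },
      "filePath": "s3://example-bucket/logs/#{format(@scheduledStartTime, 'YYYY-MM-dd-HH')}.csv"
    },
    {
      "id": "AnalysisTable",
      "type": "SqlDataNode",
      "schedule": { "ref": "HourlySchedule" },
      "table": "log_events"
    },
    {
      "id": "CopyLogsToDatabase",
      "type": "CopyActivity",
      "schedule": { "ref": "HourlySchedule" },
      "input": { "ref": "LogInput" },
      "output": { "ref": "AnalysisTable" },
      "runsOn": { "ref": "WorkerInstance" }
    },
    {
      "id": "WorkerInstance",
      "type": "Ec2Resource",
      "schedule": { "ref": "HourlySchedule" },
      "instanceType": "t1.micro"
    }
  ]
}
```

A definition like this can be uploaded with the AWS CLI (`aws datapipeline put-pipeline-definition`) or pasted into the console's pipeline editor; Data Pipeline then provisions the EC2 worker and runs the copy on the defined schedule.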