Data Pipeline Automation: Dynamic Intelligence, Not Static Code Gen

Join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guest Sean Knapp of Ascend, a new company focused on autonomous data pipelines.

In this webinar, you will discover:

  • How data pipeline orchestration and multi-cloud strategies intersect
  • Why data lineage and data transformation are essential to dynamic data movement
  • Why scaling and integrating today’s cloud and on-premises data technologies requires a mix of automation and data engineering expertise

Why Attend:

Data pipelines are a reality for most organizations. While we work hard to bring compute to the data, to virtualize, and to federate, sometimes data has to move to an optimized platform. And while schema-on-read has its advantages for exploratory analytics, pipeline-driven schema-on-write is a reality for production data warehouses, data lakes, and other BI repositories.

But data pipelines can be operationally brittle, and automation approaches to date have produced a generation of unsophisticated code and triggers whose management and maintenance, especially at scale, is no easier than that of manually crafted pipelines.

It doesn’t have to be that way. With advances in machine learning and the industry’s decades of experience in pipeline development and orchestration, we can take pipeline automation into the realm of intelligent systems. The implications are significant: data-driven agility, and an end to skepticism about data pipelines’ utility and necessity.

Request Free!