The term "Observability" was first coined in the 1960s by Rudolf Kálmán, an Electrical Engineer and mathematician to describe how well a system can be measured by its outputs. "Data Observability" as a term has picked up usage over the last few years and in an article featured on Forbes about 2 years ago, here is how it was defined and is still apt for the most part. "Data Observability" is a set of tools to track the health of enterprise data systems and identify and troubleshoot problems when things go wrong. Data Observability provides continuous, holistic, and cross-sectional visibility into complex data systems, such as the analytics and AI applications that companies would like to use to guide their businesses and personalize customer experiences. It synthesizes signals across infrastructure, application, and data layers to provide a comprehensive understanding of individual components, data pipelines, and system performance. The traditional monitoring tools typically provide information about single applications, system components, or, perhaps entire systems. They provide little value, however to Data Engineers, Data scientists, CDOs who are trying to resolve issues, increase scale and data utilization and optimize performance of inter-connected data systems, potentially in a hybrid cloud environment built to support Data analysis and AI at scale.
In this presentation, Hari plans to cover the key pillars of "Data Observability": the "what" and "why" of Data Observability, along with glimpses of "how" it could be addressed, drawing on real-world examples and showing how it can affect business decisions and outcomes. In addition, he will cover the Data Observability product/platform landscape, spanning both open-source and closed-source systems.