Executive business intelligence (BI) reports can be incomplete, inconsistent, or inaccurate, which is a critical concern for an executive team trying to make informed business decisions. When issues arise, it falls to the IT department to figure out what the problem is, where it occurred, and how to fix it. This is not a trivial task.
Take the following scenario: a CEO receives two reports supposedly built from the same set of data, but each report shows different results. Which report is correct? If your organization has experienced this, then you know what happens next – the data discovery fire drill.
A flurry of activity takes place, suspending all other top priorities. A special team is quickly assembled to delve into each report. They review the data sources, ETL processes, and data marts in an effort to trace the events that affected the data. Fire drills like this can consume days, if not weeks, of effort just to locate the error.
In this case, it turned out that an update to one ETL process had been applied to only one of the two reports. Multiply the number of data discovery fire drills by the number of data quality concerns in any executive BI report, and the costs mount quickly.
Data can arrive from multiple systems at the same time, often occurring rapidly and in parallel. In some cases, the ETL load itself may generate new data. Through all of this, IT still has to answer two fundamental questions: where did this data come from, and how did it get here?
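One lightweight way to make those two questions answerable is to attach a lineage record to every dataset as it moves through the pipeline. The sketch below is a minimal illustration in Python; the `LineageRecord` structure and the stage names are hypothetical examples, not features of any particular tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Provenance metadata carried alongside a dataset."""
    source_system: str                         # where did this data come from?
    steps: list = field(default_factory=list)  # how did it get here?

    def add_step(self, process: str) -> None:
        # Record each ETL process, with a timestamp, as the data passes through.
        self.steps.append((process, datetime.now(timezone.utc).isoformat()))

# Hypothetical flow: order data extracted, transformed, and landed in a mart.
lineage = LineageRecord(source_system="crm_orders")
lineage.add_step("etl_extract_v2")
lineage.add_step("currency_normalization")
lineage.add_step("load_sales_mart")

print(lineage.source_system)          # answers: where did this data come from?
print([p for p, _ in lineage.steps])  # answers: how did it get here?
```

With records like this in place, the fire-drill question "which ETL update touched this report?" becomes a lookup rather than a week of forensics.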
As the volume of data rapidly increases, BI data environments are becoming more complex. To manage this complexity, organizations invest in a multitude of elaborate and expensive tools. But despite this investment, IT is still overwhelmed trying to track the vast collection of data within their BI environment. Is more technology the answer?
Perhaps the better question is: how can we prevent these data discovery fires in the future?
We believe it’s possible to prevent data discovery fires, and that starts with proper data governance and a strong data lineage capability.
Why is data governance important?
In a modern, data-driven business, where organizations are essentially production lines of information, data governance is responsible for the health and maintenance of that production line.
It’s the enabling factor of the enterprise data management suite that ensures data quality, giving organizations greater trust in their data. It ensures that any data created is properly stored, tagged, and assigned the context needed to prevent corruption or loss as it moves through the production line – greatly enhancing data discovery.
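As a concrete illustration of "stored, tagged, and assigned context", a governance layer might require every dataset to carry a minimum set of tags before it is allowed to move on. The sketch below is a hedged example in Python; the tag names are assumptions for illustration, not a standard.

```python
# Hypothetical minimum tag set a governance policy might enforce.
REQUIRED_TAGS = {"owner", "classification", "source_system"}

def governance_check(dataset_tags: dict) -> list:
    """Return the required governance tags missing from a dataset."""
    return sorted(REQUIRED_TAGS - dataset_tags.keys())

# A fully tagged dataset passes; an under-tagged one is flagged at the gate.
tagged = {"owner": "finance_bi", "classification": "internal",
          "source_system": "crm_orders"}
untagged = {"owner": "finance_bi"}

print(governance_check(tagged))    # → []
print(governance_check(untagged))  # → ['classification', 'source_system']
```

Gating each pipeline stage on a check like this is one way untagged data gets caught before it reaches an executive report.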
Alongside improving data quality, aiding regulatory compliance, and making practices like tracing data lineage easier, sound data governance also helps organizations be proactive with their data, using it to drive revenue. They can make better decisions faster and reduce the likelihood of costly mistakes and data breaches that would eat into their bottom lines.
For more information about how data governance supports executive business intelligence and the rest of the enterprise data management suite, click here.