In a previous post, Avoiding the Master Data Game of Telephone, I discussed an unfortunate real-world scenario in which a customer had, over time, found itself without a reliable source of truth for core operating data.
In an environment where users have truly lost a sense of trust in the data they rely upon, data quality initiatives must be made highly visible to the rank-and-file. Without a strategic push to restore trust in data, systemic and organizational changes around information governance will often fail to reap dividends, and governance actors will be perceived as overhead rather than drivers of efficiency.
Persistent Data Validation
One way to “prove” effective data governance is via what I refer to as persistent data validation. Many companies exercise a great deal of discipline validating data as it is migrated into new systems, then promptly lose sight of ongoing maintenance needs after go-live. Through passive data governance tools like dspMonitor, governance organizations can earn the trust of users through regularly distributed validation reports with clear definitions and business drivers.
Persistent data validation seeks to demonstrate data quality in a practical, tactile way. Users understand exactly what is being tested and why; they’re not merely told that their data quality concerns are being addressed – they can see specific reports and metrics move in the right direction. In multi-system landscapes, they can see concretely that basic information like addresses and phone numbers is consistently maintained across systems and can review the individual tests used to verify this information.
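To make this concrete, here is a minimal sketch of what one such cross-system consistency test might look like. The data, system names (ERP/CRM), and field choices are hypothetical illustrations, not output from dspMonitor or any specific tool; the point is that each finding is a specific, explainable check a user can inspect.

```python
# Illustrative cross-system consistency check (hypothetical data and systems).
import re

def normalize_phone(raw: str) -> str:
    """Strip non-digits so '312-555-0100' and '(312) 555 0100' compare equal."""
    return re.sub(r"\D", "", raw)

def consistency_report(erp: dict, crm: dict) -> list:
    """Return one finding per customer whose phone number differs between systems."""
    findings = []
    for customer_id, erp_phone in erp.items():
        crm_phone = crm.get(customer_id)
        if crm_phone is None:
            findings.append((customer_id, "missing in CRM"))
        elif normalize_phone(erp_phone) != normalize_phone(crm_phone):
            findings.append((customer_id, "phone mismatch"))
    return findings

# Hypothetical extracts from two systems of record.
erp_phones = {"C001": "312-555-0100", "C002": "(312) 555-0199"}
crm_phones = {"C001": "(312) 555 0100", "C002": "312-555-0911"}

report = consistency_report(erp_phones, crm_phones)
# C001 matches after normalization; C002 is flagged as a mismatch.
```

A report built from tests like this is easy to publish on a schedule, and each line item maps to a rule the business can read and debate – which is exactly the kind of visibility that rebuilds trust.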
This also has the inverse effect of providing governance of the governance organization itself: by publishing its metrics and methodology, other stakeholders can easily chart the performance of the information governance initiative. For this reason, broader exposure of data quality reports is sometimes met with resistance, especially in fledgling information governance organizations. However, it’s crucial for bridging the trust gap, particularly with a user base that has a history of being burned by poor data quality.
Broader distribution of data quality metrics can also help “encourage” underperforming segments to devote more effort to master data maintenance. Transparency – to the extent advisable based on privacy, regulatory and strategic requirements, of course – is the name of the game when it comes to persistent data validation. This in turn helps build the sense that data is not something to be withheld or shielded within the organization, but is instead a collaborative and empowering shared resource.
Rebuilding confidence in master data is not an easy task for most large organizations. The temptation in establishing information governance is to set the goalpost at improved data quality metrics. In practice, users need more than just assurance that data quality has improved before they’ll walk away from ingrained habits and inefficiencies driven by mistrust in data. A robust persistent data validation strategy can go a long way in establishing the perception that information governance is worthwhile and that data has become truly business ready.
About the Author
Nate LaFerle brings extensive enterprise data management experience through large-scale global data strategy and migration engagements in retail, manufacturing, finance, pharmaceuticals and the public sector. Based in Chicago, he currently leads a multi-year BackOffice engagement at a Fortune 500 life sciences organization.