Do you listen to the Rock Bottom Data Feed? If you don’t, you should. It’s a serious show about data that doesn’t take itself too seriously; more of a radio program than a podcast.

In his monologue for the recent episode “More Fuss About Tony Mazzarella,” my co-host, John Ladley, talked about how governance is embedded in the cockpit of a commercial airliner. There are procedures, checklists, and cross-validation, and it all happens without anybody watching over the pilots’ shoulders. It’s just what they do.

The exact opposite is usually true when it comes to corporate data. You’re doing well if you have procedures and checklists. A few companies do cross-validation, but governance almost never happens without constant oversight.

Listening to the show got me thinking about why governance would be pervasive in one setting and absent in the other. I’m sure we could come up with several reasons, but this one is at the top of my list:

In the cockpit, there is a bright, straight line between the failure to do appropriate governance and the potentially fatal consequences.

Every pilot recognizes that the procedures are designed to prevent catastrophe, and every pilot understands that deviation from those procedures increases the probability of catastrophe. As the saying goes, “aviation regulations are written in blood”: nearly every safety rule and checklist procedure was created in response to an accident in which someone died or was severely injured. The causes and consequences are very clearly linked.

There are two ways to learn something: the easy way and the hard way. As a species, we seem to have an innate aversion to learning things the easy way. Pain is a more effective motivator, like the corporate management beatings I talked about last week. So, the hard way it is. And aviation isn’t the only domain where safety was learned the hard way.

When I was in high school, I watched an HBO documentary about the 1942 fire at the Cocoanut Grove nightclub. It made a big impression on me. To this day I notice the swinging doors that flank revolving doors, the direction that exit doors swing, exit signs, and panic bars. Bright, straight lines between governance and consequence. Interestingly, many of the regulations broadly mandated as a result of that fire were proposed, and in some cases locally enacted, following fires at the Iroquois Theater (1903), the Rhoads Opera House (1908), and the Rhythm Club (1940). Regrettably, it took the deaths of 492 people in a major city, massive media coverage, and failures that were impossible to ignore to create a shared, undeniable narrative that enabled reforms to be both codified and widely adopted.

So, how does all this apply to data? 

Think about it this way: Putting regular unleaded gasoline into a diesel engine will cause severe, often catastrophic damage. The error is apparent and the consequences are observed almost immediately. 

It has been said that applications are the engine of business but that data is the fuel, and many companies are using bad fuel.

Anyone who has worked with corporate data for any length of time has no shortage of examples of problems that were caused by poorly understood or poor-quality data: missed development deadlines, QA passing code that later failed in production because test data didn’t reflect real-world edge cases, syntax or semantic mismatches between interfaces, incorrect analytical conclusions, and biased or hallucinating AI models.

Unlike with the misfueled diesel engine, the connection between the data and a problem is usually far less apparent and far less immediate.

We have no shared narrative that says, “This is what bad data does.” Instead of a unifying moment, we get localized fixes, industry-specific regulations, and continued skepticism about whether data governance is really necessary. Neither the benefits nor the consequences are well understood.

We are living in the pre-Cocoanut Grove era of fire safety. It’s 1941. Lots of signals. No shared narrative. No collective urgency. And we haven’t yet experienced that painful turning-point event.

When we raise the alarm, all we hear back is “consequences schmonsequences.” [If you’re familiar with the 1957 Looney Tunes cartoon Ali Baba Bunny, you probably heard Daffy Duck speak that line in your head as you started reading this article.] Continuing: “The data is good enough. After all, we’re running the company on it. Analytics have a margin of error anyway. I can’t imagine any real serious consequences and besides, the cost and distraction don’t justify the attention.” Sound familiar?

The links between the lack of data understanding, poor data quality, and negative business outcomes must be made visible so that they can be internalized…by everybody.

We don’t want hovering governance committees. It would be better if everybody understood the need, the benefits, and the consequences, but that’s not a realistic expectation. At least not in the short term (or even the medium term).

I often watch the television show Air Disasters. Each episode is like a puzzle where the goal is to figure out why the disaster happened so that it can be prevented in the future. The same should be done with data.

Start treating data-related incidents like accident investigations. 

Every missed deadline, bad decision, broken dashboard, or failed model should trigger the same questions: What data was used? Where did it originate? What assumptions were wrong? 

Understand the incidents and learn from them. Identify the root causes. Measure the impact. Recommend changes. It would be nice to have executive-level support for these efforts, but don’t wait. Document what you can when you can.
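To make the idea concrete, here is a minimal sketch of what a data-incident record might look like, written in Python. Everything in it — the class name, the field names, and the example incident — is hypothetical illustration, not a standard or a tool; the fields simply mirror the investigation questions above: what data was used, where it originated, what assumptions were wrong, plus root causes, impact, and recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class DataIncident:
    """A lightweight 'accident investigation' record for a data incident."""
    summary: str                    # what went wrong (missed deadline, broken dashboard, ...)
    data_used: list[str]            # What data was used?
    origin: str                     # Where did it originate?
    wrong_assumptions: list[str]    # What assumptions were wrong?
    root_causes: list[str] = field(default_factory=list)
    impact: str = "unquantified"    # measured cost, hours lost, decisions affected
    recommendations: list[str] = field(default_factory=list)

    def report(self) -> str:
        """Render a short postmortem-style summary for sharing with stakeholders."""
        return "\n".join([
            f"INCIDENT: {self.summary}",
            f"  Data used: {', '.join(self.data_used)} (origin: {self.origin})",
            f"  Wrong assumptions: {'; '.join(self.wrong_assumptions) or 'none identified'}",
            f"  Root causes: {'; '.join(self.root_causes) or 'under investigation'}",
            f"  Impact: {self.impact}",
            f"  Recommendations: {'; '.join(self.recommendations) or 'pending'}",
        ])

# Hypothetical example: a dashboard broken by an upstream change
incident = DataIncident(
    summary="Revenue dashboard double-counted orders",
    data_used=["orders_daily extract"],
    origin="ERP nightly export",
    wrong_assumptions=["order_id is unique per row"],
    root_causes=["upstream schema change introduced split shipments"],
    impact="two days of misleading figures; one pricing decision revisited",
    recommendations=["add uniqueness check on order_id",
                     "alert on upstream schema changes"],
)
print(incident.report())
```

Even a plain document or spreadsheet with these same columns would serve; the point is that each incident gets the same structured questions asked of it, so the answers accumulate into evidence.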

Having this information makes the risks more tangible and makes it more likely that executives will take notice and take action.

Aviation didn’t wait for perfect governance. The industry built it one investigation at a time.