Beware the Data Governance Ides of March


Morte di Cesare (Death of Caesar) by Vincenzo Camuccini, 1798

Today is the Ides of March (March 15), which back in 44 BC was definitely not a good day to be Julius Caesar, who was literally stabbed in the back by Roman senators during a meeting in the Theatre of Pompey (as depicted above).  The assassination, spearheaded by Brutus and Cassius, was a failed attempt to restore the Roman Republic; instead, it triggered a series of civil wars that ultimately led to the establishment of the Roman Empire under Caesar’s heir Octavius (aka Caesar Augustus).

“Beware the Ides of March” is the famously dramatized warning from William Shakespeare’s play Julius Caesar, which has me pondering whether a data governance program implementation has an Ides of March (albeit a less dramatic one—hopefully).

Hybrid Approach (starting Top-Down) is currently leading my unscientific poll about the best way to approach data governance, acknowledging that executive sponsorship and a data governance board will be required for the top-down-driven activities of funding, policy making and enforcement, decision rights, and arbitration of conflicting business priorities, as well as organizational politics.

The definition of data governance policies illustrates the intersection of business, data, and technical knowledge spread throughout the organization, revealing how interconnected and interdependent the organization is.  The policies provide a framework for communication and collaboration among business, data, and technical stakeholders, and establish an enterprise-wide understanding of the roles, responsibilities, and accountability required to support the organization’s daily business activities.

The process of defining data governance policies resembles the communication and collaboration of the Roman Republic, but the process of implementing and enforcing data governance policies resembles the command and control of the Roman Empire.

In this transition of power, from policy definition to policy implementation and enforcement, lies the greatest challenge for a data governance program.  Even though no executive sponsor is the Data Governance Emperor (not even Caesar CEO) and the data governance board is not the Data Governance Senate, a heavy-handed top-down approach to data governance can make policy compliance feel like imperial rule and policy enforcement feel like martial law.  Although a series of enterprise civil wars is unlikely to result, the data governance program is likely to fail without the support of a strong and stable bottom-up foundation.

The enforcement of data governance policies is often confused with traditional management notions of command and control, but the enduring success of data governance requires an organizational culture that embodies communication and collaboration, which is mostly facilitated by bottom-up-driven activities led by the example of data stewards and other peer-level change agents.

“Beware the Data Governance Ides of March” is my dramatized warning about relying too much on the top-down approach to implementing data governance—and especially if your organization has any data stewards named Brutus or Cassius.

Plato’s Data

Plato’s Cave is a famous allegory from philosophy that describes a fictional scenario where people mistake an illusion for reality.

The allegory describes a group of people who have lived their whole lives as prisoners chained motionless in a dark cave, forced to face a blank wall.  Behind the prisoners is a large fire.  In front of the fire are puppeteers who project shadows onto the cave wall, acting out little plays, which include mimicking voices and sound effects that echo off the cave walls.  These shadows and echoes are only projections, partial reflections of a reality created by the puppeteers.  However, this illusion represents the only reality the prisoners have ever known, and so to them the shadows are real sights and the echoes are real sounds.

When one of the prisoners is freed and permitted to turn around and see the source of the shadows and echoes, he rejects reality as an illusion.  The prisoner is then dragged out of the cave into the sunlight, out into the bright, painful light of the real world, which he also rejects as an illusion.  How could these sights and sounds be real to him when all he has ever known is the cave?

But eventually the prisoner acclimates to the real world, realizing that the real illusion was the shadows and echoes in the cave.

Unfortunately, this is when he’s returned to his imprisonment in the cave.  Can you imagine how painful the rest of his life will be, once again being forced to watch the shadows and listen to the echoes — except now he knows that they are not real?

Plato’s Cinema

A modern update on the allegory is something we could call Plato’s Cinema, where a group of people live their whole lives as prisoners chained motionless in a dark cinema, forced to face a blank screen.  Behind the audience is a large movie projector.

Please stop reading for a moment and try to imagine if everything you ever knew was based entirely on the movies you watched.

Now imagine you are one of the prisoners, and you did not get to choose the movies, but instead were forced to watch whatever the projectionist chose to show you.  Although the fictional characters and stories of these movies are only projections, partial reflections of a reality created by the movie producers, this illusion would represent the only reality you have ever known, and so to you the characters would be real people and the stories would be real events.

If you were freed from this cinema prison, permitted to turn around and see the projector, wouldn’t you reject it as an illusion?  If you were dragged out of the cinema into the sunlight, out into the bright, painful light of the real world, wouldn’t you also reject reality as an illusion?  How could these sights and sounds be real to you when all you have ever known is the cinema?

Let’s say that you eventually acclimated to the real world, realizing that the real illusion was the projections on the movie screen.

However, now let’s imagine that you are then returned to your imprisonment in the dark cinema.  Can you imagine how painful the rest of your life would be, once again being forced to watch the movies — except now you know that they are not real?

Plato’s Data

Whether it’s an abstract description of real-world entities (i.e., “master data”) or an abstract description of real-world interactions (i.e., “transaction data”) among entities, data is an abstract description of reality — let’s call this the allegory of Plato’s Data.

We often act as if we are being forced to face our computer screen, upon which data tells us a story about the real world that is just as enticing as the flickering shadows on the wall of Plato’s Cave, or the mesmerizing movies projected in Plato’s Cinema.

Data shapes our perception of the real world, but sometimes we forget that data is only a partial reflection of reality.

I am sure that it sounds silly to point out something so obvious, but imagine if, before you were freed, the other prisoners, in either the cave or the cinema, tried to convince you that the shadows or the movies weren’t real.  Or imagine you’re the prisoner returning to either the cave or the cinema.  How would you convince other prisoners that you’ve seen the true nature of reality?

A common question about Plato’s Cave is whether it’s crueler to show the prisoner the real world, or to return the prisoner to the cave after he has seen it.  Much like the illusions of the cave and the cinema, data makes more sense the more we believe it is real.

However, with data, neither breaking the illusion nor returning ourselves to it is cruel, but is instead a necessary practice because it’s important to occasionally remind ourselves that data and the real world are not the same thing.

So Long 2011, and Thanks for All the . . .

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

Don’t Panic!  Welcome to the mostly harmless OCDQ Radio 2011 Year in Review episode.  During this approximately 42-minute episode, I recap the data-related highlights of 2011 in a series of sometimes serious, sometimes funny segments, as well as make wacky and wildly inaccurate data-related predictions about 2012.

Special thanks to my guests Jarrett Goldfedder, who discusses Big Data, Nicola Askham, who discusses Data Governance, and Daragh O Brien, who discusses Data Privacy.  Additional thanks to Rich Murnane and Dylan Jones.  And Deep Thanks to that frood Douglas Adams, who always knew where his towel was, and who wrote The Hitchhiker’s Guide to the Galaxy.

 


Information Overload Revisited

This blog post is sponsored by the Enterprise CIO Forum and HP.

Information Overload is a term invoked regularly during discussions about the data deluge of the Information Age, which has created a 24 hours a day, 7 days a week, 365 days a year, world-wide whirlwind of constant information flow, where the very air we breathe is teeming with digital data streams — continually inundating us with new, and new types of, information.

Information overload generally refers to how too much information can overwhelm our ability to understand an issue, and can even disable our decision making regarding that issue (this latter aspect is generally referred to as Analysis Paralysis).

But we often forget that the term is over 40 years old.  It was popularized by Alvin Toffler in his bestselling book Future Shock, which was published in 1970, back when the Internet was still in its infancy, and long before the Internet’s progeny would give birth to the clouds contributing to the present, potentially perpetual, forecast for data precipitation.

A related term that has become big in the data management industry is Big Data.  As Gartner Research explains, although the term acknowledges the exponential growth, availability, and use of information in today’s data-rich landscape, big data is about more than just data volume.  Data variety (i.e., structured, semi-structured, and unstructured data, as well as other types, such as the sensor data emanating from the Internet of Things) and data velocity (i.e., how fast data is being produced and how fast it must be processed to meet demand) are also key characteristics of the big challenges of big data.

John Dodge and Bob Gourley recently discussed big data on Enterprise CIO Forum Radio, where Gourley explained that big data is essentially “the data that your enterprise is not currently able to do analysis over.”  This point resonates with a similar one made by Bill Laberis, who recently discussed new global research where half of the companies polled responded that they cannot effectively deal with analyzing the rising tide of data available to them.

Most of the big angst about big data comes from this fear that organizations are not tapping the potential business value of all that data not currently being included in their analytics and decision making.  This reminds me of psychologist Herbert Simon, who won the 1978 Nobel Prize in Economics for his pioneering research on decision making, which included comparing and contrasting the decision-making strategies of maximizing and satisficing (a term that combines satisfying with sufficing).

Simon explained that a maximizer is like a perfectionist who considers all the data they can find because they need to be assured that their decision was the best that could be made.  This creates a psychologically daunting task, especially as the amount of available data constantly increases (again, note that this observation was made over 40 years ago).  The alternative is to be a satisficer, someone who attempts to meet criteria for adequacy rather than identify an optimal solution, an approach that is especially useful when time is a critical factor, such as with the real-time decision making demanded by a constantly changing business world.
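As a rough illustration of the contrast between these two strategies (a sketch of my own, not anything Simon prescribed, and the option list, scoring function, and adequacy threshold are all invented for the example), the difference fits in a few lines of Python:

```python
# Illustrative sketch only: contrasting maximizing and satisficing.
# The data, scoring function, and "good enough" threshold are hypothetical.

def maximize(options, score):
    """Perfectionist strategy: examine every option and return the best one."""
    best, best_score = None, float("-inf")
    for option in options:
        s = score(option)
        if s > best_score:
            best, best_score = option, s
    return best

def satisfice(options, score, good_enough):
    """Adequacy strategy: return the first option that clears the threshold."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing adequate found; revisit the threshold or gather more data

quotes = [
    {"supplier": "A", "value": 62},
    {"supplier": "B", "value": 81},
    {"supplier": "C", "value": 90},
]

print(maximize(quotes, lambda q: q["value"]))       # scans all three options, returns supplier C
print(satisfice(quotes, lambda q: q["value"], 75))  # stops at supplier B, the first good-enough option
```

The maximizer’s effort grows with the amount of available data, while the satisficer’s effort is bounded by how quickly an adequate option turns up, which is the heart of the trade-off Simon described.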

Big data strategies will also have to compare and contrast maximizing and satisficing.  Maximizers, if driven by their angst about all that data they are not analyzing, might succumb to information overload.  Satisficers, if driven by information optimization, might sufficiently integrate just enough of big data into their business analytics in a way that satisfies specific business needs.

As big data forces us to revisit information overload, it may be useful for us to remember that originally the primary concern was not about the increasing amount of information, but instead the increasing access to information.  As Clay Shirky succinctly stated, “It’s not information overload, it’s filter failure.”  So, to harness the business value of big data, we will need better filters, which may ultimately be what distinguishes information overload from information optimization.
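To make Shirky’s point a bit more concrete, here is a minimal sketch of a filter in this sense; the record fields, the topic, and the relevance threshold are assumptions invented purely for illustration:

```python
# Illustrative sketch only: "filter failure" means the problem is not the volume of
# incoming information, but the lack of a filter tuned to a specific business need.
# The fields and threshold below are hypothetical.

incoming = [
    {"source": "web",    "topic": "billing",   "relevance": 0.92},
    {"source": "sensor", "topic": "telemetry", "relevance": 0.15},
    {"source": "crm",    "topic": "billing",   "relevance": 0.74},
]

def business_filter(records, topic, min_relevance=0.5):
    """Keep only the records relevant to one specific business question."""
    return [r for r in records if r["topic"] == topic and r["relevance"] >= min_relevance]

print(business_filter(incoming, topic="billing"))  # two of the three records survive the filter
```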

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

The Data Encryption Keeper

The Cloud Security Paradox

The Good, the Bad, and the Secure

Securing your Digital Fortress

Shadow IT and the New Prometheus

Are Cloud Providers the Bounty Hunters of IT?

The Diderot Effect of New Technology

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

A Sadie Hawkins Dance of Business Transformation

Are Applications the La Brea Tar Pits for Data?

Why does the sun never set on legacy applications?

The Partly Cloudy CIO

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT

You only get a Return from something you actually Invest in

In my previous post, I took a slightly controversial stance on a popular three-word phrase — Root Cause Analysis.  In this post, it’s another popular three-word phrase — Return on Investment (most commonly abbreviated as the acronym ROI).

What is the ROI of purchasing a data quality tool or launching a data governance program?

Zero.  Zip.  Zilch.  Intet.  Ingenting.  Rien.  Nada.  Nothing.  Nichts.  Niets.  Null.  Niente.  Bupkis.

There is No Such Thing as the ROI of purchasing a data quality tool or launching a data governance program.

Before you hire “The Butcher” to eliminate me for being The Man Who Knew Too Little about ROI, please allow me to explain.

Returns only come from Investments

Although you likely purchased a data quality tool because you have business-critical data quality problems, simply purchasing a tool is not an investment (unless you believe in Magic Beans) since the tool itself is not a solution.

You use tools to build, test, implement, and maintain solutions.  For example, I spent several hundred dollars on new power tools last year for a home improvement project.  However, I haven’t received any return on my home improvement investment for a simple reason — I still haven’t even taken most of the tools out of their packaging yet.  In other words, I barely even started my home improvement project.  It is precisely because I haven’t invested any time and effort that I haven’t seen any returns.  And it certainly isn’t going to help me (although it would help Home Depot) if I believed buying even more new tools was the answer.

Although you likely launched a data governance program because you have complex issues involving the intersection of data, business processes, technology, and people, simply launching a data governance program is not an investment since it does not conjure the three most important letters.

Data is only an Asset if Data is a Currency

In his book UnMarketing, Scott Stratten discusses this within the context of the ROI of social media (a commonly misunderstood aspect of social media strategy), but his insight is just as applicable to any discussion of ROI.  “Think of it this way: You wouldn’t open a business bank account and ask to withdraw $5,000 before depositing anything. The banker would think you are a loony.”

Yet, as Stratten explained, people do this all the time in social media by failing to build up what is known as social currency.  “You’ve got to invest in something before withdrawing. Investing your social currency means giving your time, your knowledge, and your efforts to that channel before trying to withdraw monetary currency.”

The same logic applies perfectly to data quality and data governance, where we could say it’s the failure to build up what I will call data currency.  You’ve got to invest in data before you can ever consider data an asset to your organization.  Investing your data currency means giving your time, your knowledge, and your efforts to data quality and data governance before trying to withdraw monetary currency (i.e., before trying to calculate the ROI of a data quality tool or a data governance program).

If you actually want to get a return on your investment, then actually invest in your data.  Invest in doing the hard daily work of continuously improving your data quality and putting into practice your data governance principles, policies, and procedures.

Data is only an asset if data is a currency.  Invest in your data currency, and you will eventually get a return on your investment.

You only get a return from something you actually invest in.

Related Posts

Can Enterprise-Class Solutions Ever Deliver ROI?

Do you believe in Magic (Quadrants)?

Which came first, the Data Quality Tool or the Business Need?

What Data Quality Technology Wants

A Farscape Analogy for Data Quality

The Data Quality Wager

“Some is not a number and soon is not a time”

The Dumb and Dumber Guide to Data Quality

There is No Such Thing as a Root Cause

Root cause analysis.  Most people within the industry, myself included, often discuss the importance of determining the root cause of data governance and data quality issues.  However, the complex cause and effect relationships underlying an issue mean that when an issue is encountered, you are often only seeing one of the numerous effects of its root cause (or causes).

In my post The Root! The Root! The Root Cause is on Fire!, I poked fun at those resistant to root cause analysis with the lyrics:

The Root! The Root! The Root Cause is on Fire!
We don’t want to determine why, just let the Root Cause burn.
Burn, Root Cause, Burn!

However, I think that the time is long overdue for even me to admit the truth — There is No Such Thing as a Root Cause.

Before you charge at me with torches and pitchforks for having an Abby Normal brain, please allow me to explain.

 

Defect Prevention, Mouse Traps, and Spam Filters

Some advocates of defect prevention claim that zero defects is not only a useful motivation, but also an attainable goal.  In my post The Asymptote of Data Quality, I quoted Daniel Pink’s book Drive: The Surprising Truth About What Motivates Us:

“Mastery is an asymptote.  You can approach it.  You can home in on it.  You can get really, really, really close to it.  But you can never touch it.  Mastery is impossible to realize fully.

The mastery asymptote is a source of frustration.  Why reach for something you can never fully attain?

But it’s also a source of allure.  Why not reach for it?  The joy is in the pursuit more than the realization.

In the end, mastery attracts precisely because mastery eludes.”

The mastery of defect prevention is sometimes distorted into a belief in data perfection, into a belief that we can build not just a better mousetrap, but a mousetrap that catches all the mice, or that by placing a mousetrap in our garage, which prevents mice from entering via the garage, we somehow also prevent mice from finding another way into our house.

Obviously, we can’t catch all the mice.  However, that doesn’t mean we should let the mice be like Pinky and the Brain:

Pinky: “Gee, Brain, what do you want to do tonight?”

The Brain: “The same thing we do every night, Pinky — Try to take over the world!”

My point is that defect prevention is not the same thing as defect elimination.  Defects evolve.  An excellent example of this is spam.  Even conservative estimates indicate almost 80% of all e-mail sent world-wide is spam.  A similar percentage of blog comments are spam, and spam-generating bots are quite prevalent on Twitter and other micro-blogging and social networking services.  The inconvenient truth is that as we build better and better spam filters, spammers create better and better spam.

Just as mousetraps don’t eliminate mice and spam filters don’t eliminate spam, defect prevention doesn’t eliminate defects.

However, mousetraps, spam filters, and defect prevention are essential proactive best practices.

 

There are No Lines of Causation — Only Loops of Correlation

There are no root causes, only strong correlations.  And correlations are strengthened by continuous monitoring.  Believing there are root causes means believing that continuous monitoring, and by extension continuous improvement, has an end point.  I call this the defect elimination fallacy, which I parodied in song in my post Imagining the Future of Data Quality.

Knowing there are only strong correlations means knowing continuous improvement is an infinite feedback loop.  A practical example of this reality comes from data-driven decision making, where:

  1. Better Business Performance is often correlated with
  2. Better Decisions, which, in turn, are often correlated with
  3. Better Data, which is precisely why Better Decisions with Better Data is foundational to Business Success — however . . .

This does not mean that we can draw straight lines of causation between (3) and (1), (3) and (2), or (2) and (1).

Despite our preference for simplicity over complexity, if bad data was the root cause of bad decisions and/or bad business performance, then no organization would ever be profitable, and if good data was the root cause of good decisions and/or good business performance, then every organization would always be profitable.  Even if good data was a root cause, not just a correlation, and even when data perfection is temporarily achieved, the effects would still be ephemeral because not only do defects evolve, but so does the business world.  This evolution requires an endless revolution of continuous monitoring and improvement.

Many organizations implement data quality thresholds to close the feedback loop evaluating the effectiveness of their data management and data governance, but few implement decision quality thresholds to close the feedback loop evaluating the effectiveness of their data-driven decision making.
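As a hedged illustration of what closing both feedback loops might look like (the metrics, thresholds, and field names below are assumptions made up for this sketch, not a prescription):

```python
# Illustrative sketch only: a data quality threshold on the inputs and a decision
# quality threshold on the business results, both feeding the same improvement loop.
# All metrics, thresholds, and field names are hypothetical.

DATA_QUALITY_THRESHOLD = 0.95      # share of records passing a basic validation check
DECISION_QUALITY_THRESHOLD = 0.80  # share of decisions whose results met their target

def data_quality_score(records):
    """Fraction of records containing the fields the decision process depends on."""
    valid = [r for r in records if r.get("customer_id") and r.get("amount") is not None]
    return len(valid) / len(records) if records else 0.0

def decision_quality_score(decisions):
    """Fraction of past decisions whose measured business result met its target."""
    met = [d for d in decisions if d["result"] >= d["target"]]
    return len(met) / len(decisions) if decisions else 0.0

def review_feedback_loops(records, decisions):
    actions = []
    if data_quality_score(records) < DATA_QUALITY_THRESHOLD:
        actions.append("Investigate the data defects feeding the decision process.")
    if decision_quality_score(decisions) < DECISION_QUALITY_THRESHOLD:
        actions.append("Review the decision process itself, not just its data.")
    return actions or ["Keep monitoring; these are correlations, so the loop never closes for good."]
```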

The quality of a decision is determined by the business results it produces, not the person who made the decision, the quality of the data used to support the decision, or even the decision-making technique.  Of course, the reality is that business results are often not immediate and may sometimes be contingent upon the complex interplay of multiple decisions.

Even though evaluating decision quality only establishes a correlation, and not a causation, between the decision execution and its business results, it is still essential to continuously monitor data-driven decision making.

Although the business world will never be totally predictable, we can not turn a blind eye to the need for data-driven decision making best practices, nor to the reality that no best practice can eliminate the potential for poor data quality and decision quality, or the potential for poor business results despite better data quality and decision quality.  Central to continuous improvement is the importance of closing the feedback loops that make data-driven decisions more transparent through better monitoring, allowing the organization to learn from its decision-making mistakes and make adjustments when necessary.

We need to connect the dots of better business performance, better decisions, and better data by drawing loops of correlation.

 

Decision-Data Feedback Loop

Continuous improvement enables better decisions with better data, which drives better business performance — as long as you never stop looping the Decision-Data Feedback Loop, and start accepting that there is no such thing as a root cause.

I discuss this, and other aspects of data-driven decision making, in my DataFlux white paper, which is available for download (registration required) using the following link: Decision-Driven Data Management

 

Related Posts

The Root! The Root! The Root Cause is on Fire!

Bayesian Data-Driven Decision Making

The Role of Data Quality Monitoring in Data Governance

The Circle of Quality

Oughtn’t you audit?

The Dichotomy Paradox, Data Quality and Zero Defects

The Asymptote of Data Quality

To Our Data Perfectionists

Imagining the Future of Data Quality

What going to the Dentist taught me about Data Quality

DQ-Tip: “There is No Such Thing as Data Accuracy...”

The HedgeFoxian Hypothesis

No Datum is an Island of Serendip

Continuing a series of blog posts inspired by the highly recommended book Where Good Ideas Come From by Steven Johnson, in this blog post I want to discuss the important role that serendipity plays in data — and, by extension, business success.

Let’s start with a brief etymology lesson.  The origin of the word serendipity, which is commonly defined as a “happy accident” or “pleasant surprise,” can be traced to the Persian fairy tale The Three Princes of Serendip, whose heroes were always making discoveries of things they were not in quest of, either by accident or by sagacity (i.e., the ability to link together apparently innocuous facts to come to a valuable conclusion).  Serendip was an old name for the island nation now known as Sri Lanka.

“Serendipity,” Johnson explained, “is not just about embracing random encounters for the sheer exhilaration of it.  Serendipity is built out of happy accidents, to be sure, but what makes them happy is the fact that the discovery you’ve made is meaningful to you.  It completes a hunch, or opens up a door in the adjacent possible that you had overlooked.  Serendipitous discoveries often involve exchanges across traditional disciplines.  Serendipity needs unlikely collisions and discoveries, but it also needs something to anchor those discoveries.  The challenge, of course, is how to create environments that foster these serendipitous connections.”

 

No Datum is an Island of Serendip

“No man is an island, entire of itself; every man is a piece of the continent, a part of the main.”

These famous words were written by the poet John Donne, the meaning of which is generally regarded to be that human beings do not thrive when isolated from others.  Likewise, data does not thrive in isolation.  However, many organizations persist in data isolation, in data silos created when separate business units see power in the hoarding of data, not in the sharing of data.

But no business unit is an island, entire of itself; every business unit is a piece of the organization, a part of the enterprise.

Likewise, no datum is an Island of Serendip.  Data thrives through the connections, collisions, and combinations that collectively unleash serendipity.  When data is exchanged across organizational boundaries, and shared with the entire enterprise, it enables the interdisciplinary discoveries required for making business success more than just a happy accident or pleasant surprise.

Our organizations need to create collaborative environments that foster serendipitous connections bringing all of our business units and people together around our shared data assets.  We need to transcend our organizational boundaries, reduce our data silos, and gather our enterprise’s heroes together on the Data Island of Serendip — our United Nation of Business Success.

 

Related Posts

Data Governance and the Adjacent Possible

The Three Most Important Letters in Data Governance

The Stakeholder’s Dilemma

The Data Cold War

Turning Data Silos into Glass Houses

The Good Data

DQ-BE: Single Version of the Time

My Own Private Data

Sharing Data

Are you Building Bridges or Digging Moats?

The Collaborative Culture of Data Governance

The Interconnected User Interface

The Speed of Decision

In a previous post, I used the Large Hadron Collider as a metaphor for big data and big analytics, where the creative destruction caused by high-velocity collisions of large volumes of varying data attempts to reveal elementary particles of business intelligence.

Since recent scientific experiments have sparked discussion about the possibility of exceeding the speed of light, in this blog post I examine whether it’s possible to exceed the speed of decision (i.e., the constraints that time puts on data-driven decision making).

 

Is Decision Speed more important than Data Quality?

In my blog post Thaler’s Apples and Data Quality Oranges, I explained how time-inconsistent data quality preferences within business intelligence reflect the reality that with the speed at which things change these days, more near-real-time operational business decisions are required, which sometimes makes decision speed more important than data quality.

Even though advancements in computational power, network bandwidth, parallel processing frameworks (e.g., MapReduce), scalable and distributed models (e.g., cloud computing), and other techniques (e.g., in-memory computing) are making real-time data-driven decisions more technologically possible than ever before, as I explained in my blog post Satisficing Data Quality, data-driven decision making often has to contend with the practical trade-offs between correct answers and timely answers.

Although we can’t afford to completely sacrifice data quality for faster business decisions, and obviously high quality data is preferable to poor quality data, less than perfect data quality can not be used as an excuse to delay making a critical decision.
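One way to picture good-enough data for fast-enough decisions is a decision loop that keeps refining its data only until the clock runs out.  The sketch below is purely illustrative; the deadline, the quality estimate, and the refine/decide helpers are hypothetical stand-ins:

```python
import time

# Illustrative sketch only: refine the data supporting a decision until either the
# quality is good enough or the decision deadline arrives, then decide anyway.
# refine_data, estimate_quality, and decide are hypothetical stand-ins.

def decide_within(deadline_seconds, data, refine_data, estimate_quality, decide, good_enough=0.9):
    deadline = time.monotonic() + deadline_seconds
    while time.monotonic() < deadline and estimate_quality(data) < good_enough:
        data = refine_data(data)  # e.g., standardize, de-duplicate, enrich
    return decide(data)           # act on the best data available in time

# Toy usage: each refinement pass nudges a quality score upward.
result = decide_within(
    deadline_seconds=0.05,
    data={"quality": 0.6},
    refine_data=lambda d: {"quality": min(1.0, d["quality"] + 0.01)},
    estimate_quality=lambda d: d["quality"],
    decide=lambda d: f"decision made with data quality {d['quality']:.2f}",
)
print(result)
```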

 

Is Decision Speed more important than Decision Quality?

The increasing demand for real-time data-driven decisions is not only requiring us to re-evaluate our data quality thresholds.  In my blog post The Circle of Quality, I explained the connection between data quality and decision quality, and how result quality trumps them both because an organization’s success is measured by the quality of the business results it produces.

Again, with the speed at which the business world now changes, the reality is that the fear of making a mistake can not be used as an excuse to delay making a critical decision, which sometimes makes decision speed more important than decision quality.

“Fail faster” has long been hailed as the mantra of business innovation.  It’s not because failure is a laudable business goal, but instead because the faster you can identify your mistakes, the faster you can correct your mistakes.  Of course this requires that you are actually willing to admit you made a mistake.

(As an aside, I often wonder what’s more difficult for an organization to admit: poor data quality or poor decision quality?)

Although good decisions are obviously preferable to bad decisions, we have to acknowledge the fragility of our knowledge and accept that mistake-driven learning is an essential element of efficient and effective data-driven decision making.

Although the speed of decision is not the same type of constant as the speed of light, in our constantly changing business world, the speed of decision represents the constant demand for good-enough data for fast-enough decisions.

 

Related Posts

The Big Data Collider

A Decision Needle in a Data Haystack

The Data-Decision Symphony

Thaler’s Apples and Data Quality Oranges

Satisficing Data Quality

Data Confabulation in Business Intelligence

The Data that Supported the Decision

Data Psychedelicatessen

OCDQ Radio - Big Data and Big Analytics

OCDQ Radio - Good-Enough Data for Fast-Enough Decisions

Data, Information, and Knowledge Management

Data In, Decision Out

The Real Data Value is Business Insight

Is your data complete and accurate, but useless to your business?

The Circle of Quality

The Three Most Important Letters in Data Governance

In his book I Is an Other: The Secret Life of Metaphor and How It Shapes the Way We See the World, James Geary included several examples of the psychological concept of priming.  “Our metaphors prime how we think and act.  This kind of associative priming goes on all the time.  In one study, researchers showed participants pictures of objects characteristic of a business setting: briefcases, boardroom tables, a fountain pen, men’s and women’s suits.  Another group saw pictures of objects—a kite, sheet music, a toothbrush, a telephone—not characteristic of any particular setting.”

“Both groups then had to interpret an ambiguous social situation, which could be described in several different ways.  Those primed by pictures of business-related objects consistently interpreted the situation as more competitive than those who looked at pictures of kites and toothbrushes.”

“This group’s competitive frame of mind asserted itself in a word completion task as well.  Asked to complete fragments such as wa_, _ight, and co_p__tive, the business primes produced words like war, fight, and competitive more often than the control group, eschewing equally plausible alternatives like was, light, and cooperative.”

Communication, collaboration, and change management are arguably the three most critical aspects for implementing a new data governance program successfully.  Since all three aspects are people-centric, we should pay careful attention to how we are priming people to think and act within the context of data governance principles, policies, and procedures.  We could simplify this down to whether we are fostering an environment that primes people for cooperation—or primes people for competition.

Since there are only three letters of difference between the words cooperative and competitive, we could say that these are the three most important letters in data governance.
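For the curious, a quick positional comparison of the two eleven-letter words confirms the count (a throwaway check, nothing more):

```python
# Compare "cooperative" and "competitive" letter by letter; both are eleven letters long.
a, b = "cooperative", "competitive"
differences = [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]
print(differences)       # [(2, 'o', 'm'), (5, 'r', 't'), (6, 'a', 'i')]
print(len(differences))  # 3
```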

 

Related Posts

Data Governance and the Adjacent Possible

Turning Data Silos into Glass Houses

Aristotle, Data Governance, and Lead Rulers

OCDQ Radio - The Blue Box of Information Quality

The Stakeholder’s Dilemma

The Prince of Data Governance

Beware the Data Governance Ides of March

Jack Bauer and Enforcing Data Governance Policies

Council Data Governance

The Data Governance Oratorio

OCDQ Radio - Data Governance Star Wars

Data Governance Star Wars: Balancing Bureaucracy And Agility

A Tale of Two G’s

The People Platform

The Collaborative Culture of Data Governance

Data Governance and the Adjacent Possible

I am reading the book Where Good Ideas Come From by Steven Johnson, which examines recurring patterns in the history of innovation.  The first pattern Johnson writes about is called the Adjacent Possible, which is a term coined by Stuart Kauffman, and is described as “a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.  Yet it is not an infinite space, or a totally open playing field.  The strange and beautiful truth about the adjacent possible is that its boundaries grow as you explore those boundaries.”

Exploring the adjacent possible is like exploring “a house that magically expands with each door you open.  You begin in a room with four doors, each leading to a new room that you haven’t visited yet.  Those four rooms are the adjacent possible.  But once you open any one of those doors and stroll into that room, three new doors appear, each leading to a brand-new room that you couldn’t have reached from your original starting point.  Keep opening new doors and eventually you’ll have built a palace.”

If it ain’t broke, bricolage it

“If it ain’t broke, don’t fix it” is a common defense of the status quo, which often encourages an environment that stifles innovation and the acceptance of new ideas.  The status quo is like staying in the same familiar and comfortable room and choosing to keep all four of its doors closed.

The change management efforts of data governance often don’t talk about opening one of those existing doors.  Instead, they often broadcast the counter-productive message that everything is so broken it can’t be fixed, that we need to destroy our existing house and rebuild it from scratch with brand new rooms — and probably with one of those open floor plans without any doors.

Should it really be surprising when this approach to change management is so strongly resisted?

The term bricolage can be defined as making creative and resourceful use of whatever materials are at hand regardless of their original purpose, stringing old parts together to form something radically new, transforming the present into the near future.

“Good ideas are not conjured out of thin air,” explains Johnson, “they are built out of a collection of existing parts.”

The primary reason that the change management efforts of data governance are resisted is that they rely almost exclusively on negative methods—they emphasize broken business and technical processes, as well as bad data-related employee behaviors.

Although these problems exist and are the root cause of some of the organization’s failures, there are also unheralded processes and employees that prevented other problems from happening, which are the root cause of some of the organization’s successes.

It’s important to demonstrate that some data governance policies reflect existing best practices, which helps reduce resistance to change, and so a far more productive change management mantra for data governance is: “If it ain’t broke, bricolage it.”

Data Governance and the Adjacent Possible

As Johnson explains, “in our work lives, in our creative pursuits, in the organizations that employ us, in the communities we inhabit—in all these different environments, we are surrounded by potential new ways of breaking out of our standard routines.”

“The trick is to figure out ways to explore the edges of possibility that surround you.”

Most data governance maturity models describe an organization’s evolution through a series of stages intended to measure its capability and maturity, tendency toward being reactive or proactive, and inclination to be project-oriented or program-oriented.

Johnson suggests that “one way to think about the path of evolution is as a continual exploration of the adjacent possible.”

Perhaps we need to think about the path of data governance evolution as a continual exploration of the adjacent possible, as a never-ending journey which begins by opening that first door, building a palatial data governance program one room at a time.

 


The Cloud Security Paradox

This blog post is sponsored by the Enterprise CIO Forum and HP.

Nowadays it seems like any discussion about enterprise security inevitably becomes a discussion about cloud security.  Last week, as I was listening to John Dodge and Bob Gourley discuss recent top cloud security tweets on Enterprise CIO Forum Radio, the story that caught my attention was the Network World article by Christine Burns, part of a six-part series on cloud computing, which had a provocative title declaring that public cloud security remains Mission Impossible.

“Cloud security vendors and cloud services providers have a long way to go,” Burns wrote, “before enterprise customers will be able to find a comfort zone in the public cloud, or even in a public/private hybrid deployment.”  Although I agree with Burns, and I highly recommend reading her entire excellent article, I have always been puzzled by debates over cloud security.

A common opinion is that cloud-based solutions are fundamentally less secure than on-premises solutions.  Some critics even suggest cloud-based solutions can never be secure.  I don’t agree with either opinion because to me it’s all a matter of perspective.

Let’s imagine that I am a cloud-based service provider selling solutions leveraging my own on-premises resources, meaning that I own and operate all of the technology infrastructure within the walls of my one corporate office.  Let’s also imagine that in addition to the public cloud solution that I sell to my customers, I have built a private cloud solution for some of my employees (e.g., salespeople in the field), and that I also have other on-premises systems (e.g., accounting) not connected to any cloud.

Since all of my solutions are leveraging the exact same technology infrastructure, if it is impossible to secure my public cloud, then it logically follows that not only is it impossible to secure my private cloud, but it is also impossible to secure my on-premises systems as well.  Therefore, all of my security must be Mission Impossible.  I refer to this as the Cloud Security Paradox.

Some of you will argue that my scenario was oversimplified, since most cloud-based solutions, whether public or private, may include technology infrastructure that is not under my control, and may be accessed using devices that are not under my control.

Although those are valid security concerns, they are not limited to—nor were they created by—cloud computing, because with the prevalence of smart phones and other mobile devices, those security concerns exist for entirely on-premises solutions as well.

In my opinion, cloud-based versus on-premises, public cloud versus private cloud, and customer access versus employee access are all oversimplified arguments.  Regardless of the implementation strategy, your technology infrastructure, and especially your data, need to be secured wherever they are, however they are accessed, and with the appropriate levels of control over who can access what.

Fundamentally, the real problem is a lack of well-defined, well-implemented, and well-enforced security practices.  As Burns rightfully points out, a significant challenge with cloud-based solutions is that “public cloud providers are notoriously unwilling to provide good levels of visibility into their underlying security practices.”

However, when the cost savings and convenience of cloud-based solutions are accepted without a detailed security assessment, that is not a fundamental flaw of cloud computing—that is simply a bad business decision.

Let’s stop blaming poor enterprise security practices on the adoption of cloud computing.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

The Good, the Bad, and the Secure

Securing your Digital Fortress

Shadow IT and the New Prometheus

Are Cloud Providers the Bounty Hunters of IT?

The Diderot Effect of New Technology

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

A Sadie Hawkins Dance of Business Transformation

Are Applications the La Brea Tar Pits for Data?

Why does the sun never set on legacy applications?

The Partly Cloudy CIO

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT