DQ-View: Is Data Quality the Sun?

Data Quality (DQ) View is an OCDQ regular segment.  Each DQ-View is a brief video discussion of a key data quality concept.


This recent tweet by Dylan Jones of Data Quality Pro succinctly expresses a vitally important truth about the data quality profession.

Although few would debate the necessity of skill, some might doubt the need for passion.  Therefore, in this new DQ-View segment, I want to discuss why data quality initiatives require passionate data professionals.

 

DQ-View: Is Data Quality the Sun?

 

If you are having trouble viewing this video, then you can watch it on Vimeo by clicking on this link: DQ-View on Vimeo

 

Related Posts

Data Gazers

Finding Data Quality

Oh, the Data You’ll Show!

Data Rock Stars: The Rolling Forecasts

The Second Law of Data Quality

The General Theory of Data Quality

DQ-Tip: “Start where you are...”

Sneezing Data Quality

Is your data complete and accurate, but useless to your business?

Ensuring that complete and accurate data is being used to make critical daily business decisions is perhaps the primary reason why data quality is so vitally important to the success of your organization. 

However, this effort can sometimes take on a life of its own, where achieving complete and accurate data is allowed to become the raison d'être of your data management strategy—in other words, you start managing data for the sake of managing data.

When this phantom menace clouds your judgment, your data might be complete and accurate—but useless to your business.

Completeness and Accuracy

How much data is necessary to make an effective business decision?  Having complete (i.e., all available) data seems obviously preferable to incomplete data.  However, with data volumes always burgeoning, the unavoidable fact is that sometimes having more data only adds confusion instead of clarity, thereby becoming a distraction instead of helping you make a better decision.

Returning to my original question, how much data is really necessary to make an effective business decision? 

Accuracy, thanks to substantial assistance from my readers, was defined in a previous post as comprising both the correctness of a data value within a limited context, such as verification by an authoritative reference (i.e., validity), and the correctness of a valid data value within an extensive context including other data as well as business processes (i.e., accuracy). 
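To make that distinction concrete, here is a minimal sketch in Python (the birth date field and the dates below are illustrative assumptions on my part, not examples from the previous post): a value can pass a validity check against a limited context, such as the rules of the calendar, and still fail an accuracy check within the more extensive context of the business process using it.

from datetime import date, datetime

def is_valid_birth_date(value):
    # Validity: correctness within a limited context (here, the rules of the calendar).
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

def is_accurate_birth_date(value, as_of=None):
    # Accuracy: a valid value must also be correct within a more extensive context
    # (a birth date cannot be in the future).
    if not is_valid_birth_date(value):
        return False
    as_of = as_of or date.today()
    return datetime.strptime(value, "%Y-%m-%d").date() <= as_of

print(is_valid_birth_date("1985-02-30"))     # False: not a real calendar date
print(is_valid_birth_date("2090-01-15"))     # True: a perfectly valid date...
print(is_accurate_birth_date("2090-01-15"))  # False: ...but not accurate as a birth date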

Although accurate data is obviously preferable to inaccurate data, less than perfect data quality cannot be used as an excuse to delay making a critical business decision.  When it comes to the quality of the data being used to make these business decisions, you can’t always get the data you want, but if you try sometimes, you just might find, you get the business insight you need.

Data-driven Solutions for Business Problems

Obviously, there are even more dimensions of data quality beyond completeness and accuracy. 

However, although it’s about more than just improving your data, data quality can be misperceived to be an activity performed just for the sake of the data, when, in fact, data quality is an enterprise-wide initiative performed for the sake of implementing data-driven solutions for business problems, enabling better business decisions, and delivering optimal business performance.

In order to accomplish these objectives, data has to be not only complete and accurate (along with whatever other dimensions you wish to add to your definition of data quality), but most important of all, data has to be useful to the business.

Perhaps the most common definition for data quality is “fitness for the purpose of use.” 

The missing word, which makes this definition both incomplete and inaccurate, puns intended, is “business.”  In other words, data quality is “fitness for the purpose of business use.”  How complete and how accurate (and however else) the data needs to be is determined by its business use—or uses since, in the vast majority of cases, data has multiple business uses.

Data, data everywhere

With silos replicating data and new data being created daily, managing all of the data is becoming impractical, and because we are so busy trying to manage all of it, no one stops to evaluate usage or business relevance.

The fifth of the Five New Ideas From 2010 MIT Information Quality Industry Symposium, a recent blog post written by Mark Goloboy, was that “60-90% of operational data is valueless.”

“I won’t say worthless,” Goloboy clarified, “since there is some operational necessity to the transactional systems that created it, but valueless from an analytic perspective.  Data only has value, and is only worth passing through to the Data Warehouse if it can be directly used for analysis and reporting.  No news on that front, but it’s been more of the focus since the proliferation of data has started an increasing trend in storage spend.”

In his recent blog post Are You Afraid to Say Goodbye to Your Data?, Dylan Jones discussed the critical importance of designing an archive strategy for data, as opposed to the default position many organizations take, where burgeoning data volumes are allowed to proliferate because, in large part, no one wants to delete (or, at the very least, archive) any of the existing data. 

This often results in the data that the organization truly needs for continued success getting stuck in the long line of data waiting to be managed, and in many cases, behind data for which the organization no longer has any business use (and perhaps never even had the chance to use when the data was actually needed to make critical business decisions).

“When identifying data in scope for a migration,” Dylan advised, “I typically start from the premise that ALL data is out of scope unless someone can justify its existence.  This forces the emphasis back on the business to justify their use of the data.”

Data Memorioso

Funes el memorioso is a short story by Jorge Luis Borges, which describes a young man named Ireneo Funes who, as a result of a horseback riding accident, has lost his ability to forget.  Although Funes has a tremendous memory, he is so lost in the details of everything he knows that he is unable to convert the information into knowledge and unable, as a result, to grow in wisdom.

In Spanish, the word memorioso means “having a vast memory.”  When Data Memorioso is your data management strategy, your organization becomes so lost in all of the data it manages that it is unable to convert data into business insight and unable, as a result, to survive and thrive in today’s highly competitive and rapidly evolving marketplace.

In their great book Made to Stick: Why Some Ideas Survive and Others Die, Chip Heath and Dan Heath explained that “an accurate but useless idea is still useless.  If a message can’t be used to make predictions or decisions, it is without value, no matter how accurate or comprehensive it is.”  I believe that this is also true for your data and your organization’s business uses for it.

Is your data complete and accurate, but useless to your business?

DQ-View: Designated Asker of Stupid Questions

Data Quality (DQ) View is an OCDQ regular segment.  Each DQ-View is a brief video discussion of a key data quality concept.

Effective communication improves everyone’s understanding of data quality, establishes a tangible business context, and helps prioritize critical data issues.  Therefore, as the first video in my new DQ-View segment, I want to discuss a critical role that far too often is missing from data quality initiatives—Designated Asker of Stupid Questions.

 

DQ-View: Designated Asker of Stupid Questions

 

If you are having trouble viewing this video, then you can watch it on Vimeo by clicking on this link: DQ-View on Vimeo

 

Related Posts

The Importance of Envelopes

The Point of View Paradox

The Balancing Act of Awareness

Shut Your Mouth

Podcast: Open Your Ears

Hailing Frequencies Open

The Game of Darts – An Allegory

Podcast: Business Technology and Human-Speak

Not So Strange Case of Dr. Technology and Mr. Business

The Acronymicon

Podcast: Stand-Up Data Quality (Second Edition)

Last December, while experimenting with using podcasts and videos to add more variety and more personality to my blogging, I recorded a podcast called Stand-Up Data Quality, in which I discussed using humor to enliven a niche topic such as data quality, and revisited some of the stand-up comedy aspects of some of my favorite written-down blog posts from 2009.

In this brief (approximately 10 minutes) OCDQ Podcast, I share some more of my data quality humor:

You can also download this podcast (MP3 file) by clicking on this link: Stand-Up Data Quality (Second Edition)

 

Related Posts

Wednesday Word: June 23, 2010 – Referential Narcissisity

The Five Worst Elevator Pitches for Data Quality

Data Quality Mad Libs (Part 1)

Data Quality Mad Libs (Part 2)

Podcast: Stand-Up Data Quality (First Edition)

Data Quality: The Reality Show?

Data Quality and the Cupertino Effect

The Cupertino Effect can occur when you accept the suggestion of a spellchecker program, which was attempting to assist you with a misspelled word (or what it “thinks” is a misspelling because it cannot find an exact match for the word in its dictionary). 

Although the suggestion (or, in most cases, each word in the list of possible words suggested) is indeed spelled correctly, it might not be the word you were trying to spell, and in some cases, by accepting the suggestion, you create a contextually inappropriate result.

It’s called the “Cupertino” effect because with older programs the word “cooperation” was only listed in the spellchecking dictionary in hyphenated form (i.e., “co-operation”), making the spellchecker suggest “Cupertino” (i.e., the California city and home of the worldwide headquarters of Apple, Inc., thereby essentially guaranteeing its presence in all spellchecking dictionaries).

By accepting the suggestion of a spellchecker program (and if there’s only one suggested word listed, don’t we always accept it?), a sentence where we intended to write something like:

“Cooperation is vital to our mutual success.”

Becomes instead:

“Cupertino is vital to our mutual success.”

And then confusion ensues (or hilarity—or both).
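As a toy illustration of how this happens (my own sketch, not the actual algorithm used by any real spellchecker, and note that my contrived dictionary produces “Corporation” rather than “Cupertino”), consider a checker that blindly replaces an unrecognized word with its closest dictionary match:

import difflib

# A deliberately incomplete dictionary: the word I intended ("cooperation") is missing.
dictionary = ["corporation", "cupertino", "cappuccino"]

def naive_autocorrect(word, dictionary):
    # Blindly accept the single closest dictionary match for any unrecognized word.
    if word.lower() in dictionary:
        return word
    suggestions = difflib.get_close_matches(word.lower(), dictionary, n=1, cutoff=0.6)
    return suggestions[0].capitalize() if suggestions else word

intended = "Cooperation is vital to our mutual success."
corrected = intended.replace("Cooperation", naive_autocorrect("Cooperation", dictionary))
print(corrected)  # "Corporation is vital to our mutual success." -- well spelled, contextually wrong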

Beyond being a data quality issue for unstructured data (e.g., documents, e-mail messages, blog posts, etc.), the Cupertino Effect reminded me of the accuracy versus context debate.

 

“Data quality is primarily about context not accuracy...”

This Data Quality (DQ) Tip from last September sparked a nice little debate in the comments section.  The complete DQ-Tip was:

“Data quality is primarily about context not accuracy. 

Accuracy is part of the equation, but only a very small portion.”

Therefore, the key point wasn’t that accuracy isn’t important, but simply to emphasize that context is more important. 

In her fantastic book Executing Data Quality Projects, Danette McGilvray defines accuracy as “a measure of the correctness of the content of the data (which requires an authoritative source of reference to be identified and accessible).”

Returning to the Cupertino Effect for a moment, the spellchecking dictionary provides an identified, accessible, and somewhat authoritative source of reference—and “Cupertino” is correct data content for representing the name of a city in California. 

However, absent a context within which to evaluate accuracy, how can we determine the correctness of the content of the data?

 

The Free-Form Effect

Let’s use a different example.  A common root cause of poor data quality for structured data is free-form text fields.

Regardless of how well the metadata description is written or how well the user interface is designed, if a free-form text field is provided, then you will essentially be allowed to enter whatever you want for the content of the data (i.e., the data value).

For example, a free-form text field is provided for entering the Country associated with your postal address.

Therefore, you could enter data values such as:

Brazil
United States of America
Portugal
United States
República Federativa do Brasil
USA
Canada
Federative Republic of Brazil
Mexico
República Portuguesa
U.S.A.
Portuguese Republic

However, you could also enter data values such as:

Gondor
Gnarnia
Rohan
Citizen of the World
The Land of Oz
The Island of Sodor
Berzerkistan
Lilliput
Brobdingnag
Teletubbyland
Poketopia
Florin

The first list contains real countries, but a lack of standard values introduces needless variations. The second list contains fictional countries, which people like me enter into free-form fields to either prove a point or simply to amuse myself (well okay—both).

The most common solution is to provide a drop-down box of standard values, such as those provided by an identified, accessible, and authoritative source of reference—the ISO 3166 standard country codes.

Problem solved—right?  Maybe—but maybe not. 

Yes, I could now choose BR, US, PT, CA, MX (the ISO 3166 alpha-2 codes for Brazil, United States, Portugal, Canada, Mexico), which are the valid and standardized country code values for the countries from my first list above—and I would not be able to find any of my fictional countries listed in the new drop-down box.

However, I could also choose DO, RE, ME, FI, SO, LA, TT, DE (Dominican Republic, Réunion, Montenegro, Finland, Somalia, Lao People’s Democratic Republic, Trinidad and Tobago, Germany), all of which are valid and standardized country code values, yet all of them are contextually invalid for my postal address.
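Here is a minimal sketch of that difference (the reference data is abbreviated and the context check is a deliberately naive stand-in for real address verification, not how any particular tool works): a drop-down backed by ISO 3166 guarantees validity, but only a check against the rest of the record can catch a code that is valid yet contextually wrong.

# Abbreviated ISO 3166-1 alpha-2 reference data (code -> name), for illustration only.
ISO_3166_ALPHA2 = {"BR": "Brazil", "US": "United States", "PT": "Portugal",
                   "CA": "Canada", "MX": "Mexico", "DE": "Germany", "FI": "Finland"}

def is_valid_country_code(code):
    # Validity: the value appears in the authoritative reference.
    return code in ISO_3166_ALPHA2

def is_accurate_country_code(code, postal_address):
    # Accuracy (sketch): the valid value must also agree with the rest of the record.
    # Here the "context" is just a naive check against the address text.
    return is_valid_country_code(code) and ISO_3166_ALPHA2[code].lower() in postal_address.lower()

address = "123 Main Street, Boston, MA 02110, United States"
print(is_valid_country_code("Gondor"))          # False: fails validity outright
print(is_valid_country_code("DE"))              # True: a valid, standardized code...
print(is_accurate_country_code("DE", address))  # False: ...but contextually invalid for this address
print(is_accurate_country_code("US", address))  # True: valid and accurate in context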

 

Accuracy: With or Without Context?

Accuracy is only one of the many dimensions of data quality—and you may have a completely different definition for it. 

Paraphrasing Danette McGilvray, accuracy is a measure of the validity of data values, as verified by an authoritative reference. 

My question is what about context?  Or more specifically, should accuracy be defined as a measure of the validity of data values, as verified by an authoritative reference, and within a specific context?

Please note that I am only trying to define the accuracy dimension of data quality, and not data quality itself.

Therefore, please resist the urge to respond with “fitness for the purpose of use,” since even if you want to argue that “context” is just another word meaning “use,” then next we will have to argue over the meaning of the word “fitness,” and before you know it, we will be arguing over the meaning of the word “meaning.”

Please accurately share your thoughts (with or without context) about accuracy and context—by posting a comment below.

The 2010 Data Quality Blogging All-Stars

The 2010 Major League Baseball (MLB) All-Star Game is being held tonight (July 13) at Angel Stadium in Anaheim, California.

For those readers who are not baseball fans, the All-Star Game is an annual exhibition held in mid-July that showcases the players with (for the most part) the best statistical performances during the first half of the MLB season.

Last summer, I began my own annual exhibition of showcasing the bloggers whose posts I have personally most enjoyed reading during the first half of the data quality blogging season. 

Therefore, this post provides links to stellar data quality blog posts that were published between January 1 and June 30 of 2010.  My definition of a “data quality blog post” also includes Data Governance, Master Data Management, and Business Intelligence. 

Please Note: There is no implied ranking in the order that bloggers or blogs are listed, other than that Individual Blog All-Stars are listed first, followed by Vendor Blog All-Stars, and the blog posts are listed in reverse chronological order by publication date.

 

Henrik Liliendahl Sørensen

From Liliendahl on Data Quality:

 

Dylan Jones

From Data Quality Pro:

 

Julian Schwarzenbach

From Data and Process Advantage Blog:

 

Rich Murnane

From Rich Murnane's Blog:

 

Phil Wright

From Data Factotum:

 

Initiate – an IBM Company

From Mastering Data Management:

 

Baseline Consulting

From their three blogs: Inside the Biz with Jill Dyché, Inside IT with Evan Levy, and In the Field with our Experts:

 

DataFlux – a SAS Company

From Community of Experts:

 

Related Posts

Recently Read: May 15, 2010

Recently Read: March 22, 2010

Recently Read: March 6, 2010

Recently Read: January 23, 2010

The 2009 Data Quality Blogging All-Stars

 

Additional Resources

From the IAIDQ, read the 2010 issues of the Blog Carnival for Information/Data Quality:

The Diffusion of Data Governance

Marty Moseley of Initiate recently blogged Are We There Yet? Results of the Data Governance Survey, and the blog post includes a link to the survey, which is freely available—no registration required.

The Initiate survey says that although data governance dates back to the late 1980s, it is experiencing a resurgence because of initiatives such as business intelligence, data quality, and master data management—as well as the universal need to make better data-driven business decisions “in less time than ever before, often culling data from more structured and unstructured sources, with more transparency required.”

Winston Chen of Kalido recently blogged A Brief History of Data Governance, which provides a brief overview of three distinct eras in data management: Application Era (1960-1990), Enterprise Repository Era (1990-2010), and Policy Era (2010-?).

As I commented on Winston’s post, I began my career at the tail-end of the Application Era, and my career has been about a 50/50 split between applications and enterprise repositories, since history does not move forward at the same pace for all organizations, including software vendors—by which I mean that my professional experience was influenced more by working for vendors selling application-based solutions than it was by working with clients who were, let’s just say, less than progressive.

Diffusion of innovations (illustrated above) is a theory developed by Everett Rogers for describing the five stages and the rate at which innovations (e.g., new ideas or technology) spread through markets (or “cultures”), starting with the Innovators and the Early Adopters, then progressing through the Early Majority and the Late Majority, and finally ending with the Laggards.

Therefore, the exact starting points of the three eras Winston described in his post can easily be debated because progress can be painfully slow until a significant percentage of the Early Majority begins to embrace the innovation—thereby causing the so-called Tipping Point where progress begins to accelerate enough for the mainstream to take it seriously. 

Please Note: I am not talking about crossing “The Chasm”—which as Geoffrey A. Moore rightfully discusses, is the critical, but much earlier, phenomenon occurring when enough of the Early Adopters have embraced the innovation so that the beginning of the Early Majority becomes an almost certainty—but true mainstream adoption of the innovation is still far from guaranteed.

The tipping point that I am describing occurs within the Early Majority and before the top of the adoption curve is reached. 

Achieving 16% market share (or “cultural awareness”) is where the Early Majority begins—and only after successfully crossing the chasm (which I approximate occurs somewhere around 8% market share).  However,  the difference between a fad and a true innovation occurs somewhere around 25% market share—and this is the tipping point that I am describing.

The Late Majority (and the top of the adoption curve) doesn’t begin until 50% market share, and it’s all downhill from there, meaning that the necessary momentum has been achieved to almost guarantee that the innovation will be fully adopted.
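Using the rough thresholds described above (my own summary of the approximate numbers in this post, not Rogers’ formal model), the progression can be sketched as a simple lookup:

def adoption_stage(market_share):
    # Map market share (0.0 - 1.0) to an approximate diffusion stage,
    # using the rough thresholds described in this post.
    if market_share < 0.08:
        return "Innovators / Early Adopters (before crossing the chasm, ~8%)"
    if market_share < 0.16:
        return "Early Adopters (chasm crossed, Early Majority not yet begun)"
    if market_share < 0.25:
        return "Early Majority (before the fad-versus-innovation tipping point, ~25%)"
    if market_share < 0.50:
        return "Early Majority (tipping point passed, momentum building)"
    return "Late Majority and beyond (top of the adoption curve, ~50%+)"

for share in (0.05, 0.12, 0.20, 0.30, 0.60):
    print(f"{share:.0%}: {adoption_stage(share)}")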

For example, it could be argued that master data management (MDM) reached its tipping point in late 2009, and with the wave of acquisitions in early 2010, MDM stepped firmly on the gas pedal of the Early Majority, and we are perhaps just beginning to see the start of MDM’s Late Majority.

It is much harder to estimate where we are within the diffusion of data governance.  Of course, corporate cultural awareness always plays a significant role in determining the adoption of new ideas and the market share of emerging technologies.

The Initiate survey concludes that “the state of data governance initiatives is still rather immature in most organizations” and reveals “a surprising lack of perceived executive interest in data governance initiatives.”

Rob Karel of Forrester Research recently blogged about how Data Governance Remains Immature, but he is “optimistic that we might finally see some real momentum building for data governance to be embraced as a legitimate competency.”

“It will likely be a number of years before best practices outnumber worst practices,” as Rob concludes, “but any momentum in data governance adoption is good momentum!”

From my perspective, data governance is still in the Early Adopter phase.  Perhaps 2011 will be “The Year of Data Governance” in much the same way that some have declared 2010 to be “The Year of MDM.”

In other words, it may be another six to twelve months before we can claim that the Early Majority has truly embraced not just the idea of data governance, but has also realistically begun the journey toward making it happen.

 

What Say You?

Please share your thoughts about the diffusion of data governance, as well as your overall perspectives on data governance.

 

Related Posts

MacGyver: Data Governance and Duct Tape

The Prince of Data Governance

Jack Bauer and Enforcing Data Governance Policies

 

Follow OCDQ

If you enjoyed this blog post, then please subscribe to OCDQ via my RSS feed, my E-mail updates, or Google Reader.

You can also follow OCDQ on Twitter, fan the Facebook page for OCDQ, and connect with me on LinkedIn.


New Time Human Business

The song “Old Time Rock and Roll” by Bob Seger was perhaps immortalized by that famous scene in the film Risky Business, which has itself become immortalized by its many parodies including the television commercials for the video game Guitar Hero.

As I recently blogged about in my post The Great Rift, the real risky business in the new economy of the 21st century is when organizations prioritize the value of things over the value of people.

Since here in the United States, we are preparing for a long holiday weekend in celebration of the Fourth of July, and also because I am (as usual) in a musical state of mind, I wrote my own parody song called “New Time Human Business.”

 

New Time Human Business

Just get that Old School way of doing business off your mind,
And listen to me sing about the Human Side of Business, because it’s time.
Today’s business world ain’t got no damn soul,
I like how the New Time Human Business rolls!

Don’t try to take my message to an executive boardroom,
You’ll find they stopped listening to their people a long time before.
I don’t know how they manage to even get their fat heads through the door,
I like how the New Time Human Business rolls!

I’ve always liked how the Human Side of Business rolls,
That kind of business just soothes the soul.
I reminisce about the days of old,
When “Mom and Pop” knew the real business goal,
Relationship, rapport, and trust—yeah, that’s what sold!

Today’s business world ain’t got no damn soul,
I like how the New Time Human Business rolls!

Won’t go to a Big Business Rally to hear them toot their own horn,
I’d rather hear real people sing some classic blues or funky old soul.
There’s only one sure way to get me to listen to your goals,
Start singing like how the New Time Human Business rolls!

Call me a rebel, call me a dreamer, call me what you will,
Say I’m an idiot, say doing business this way, I’ll never pay my damn bills.
But today’s business world ain’t got no damn soul,
I like how the New Time Human Business rolls!

I’ve always liked how the Human Side of Business rolls,
That kind of business just soothes the soul.
I reminisce about the days of old,
When “Mom and Pop” knew the real business goal,
Relationship, rapport, and trust—yeah, that’s what sold!

Today’s business world ain’t got no damn soul,
I like how the New Time Human Business rolls!

Do you believe in Magic (Quadrants)?


If you follow Data Quality on Twitter like I do, then you are probably already well aware that the 2010 Gartner Magic Quadrant for Data Quality Tools was released this week (surprisingly, it did not qualify as a Twitter trending topic).

The five vendors that were selected as the “data quality market leaders” were SAS DataFlux, IBM, Informatica, SAP Business Objects, and Trillium.

Disclosure: I am a former IBM employee, former IBM Information Champion, and I blog for the Data Roundtable, which is sponsored by SAS.

Please let me stress that I have the highest respect for both Ted Friedman and Andy Bitterer, as well as their in-depth knowledge of the data quality industry and their insightful analysis of the market for data quality tools.

In this blog post, I simply want to encourage a good-natured debate, and not about the Gartner Magic Quadrant specifically, but rather about market research in general.  Gartner is used as the example because they are perhaps the most well-known and the source most commonly cited by data quality vendors during the sales cycle—and obviously, especially by the “leading vendors.”

I would like to debate how much of an impact market research really has on a prospect’s decision to purchase a data quality tool.

Let’s agree to keep this to a very informal debate about how research can affect both the perception and the reality of the market.

Therefore—for the love of all high quality data everywhere—please, oh please, data quality vendors, do NOT send me your quarterly sales figures, or have your PR firm mercilessly spam either my comments section or my e-mail inbox with all the marketing collateral “proving” how Supercalifragilisticexpialidocious your data quality tool is—I said please, so play nice.

 

The OCDQ View on OOBE-DQ

In a previous post, I used the term OOBE-DQ to refer to the out-of-box-experience (OOBE) provided by data quality (DQ) tools, which usually becomes a debate between “ease of use” and “powerful functionality” after you ignore the Magic Beans sales pitch that guarantees you the data quality tool is both remarkably easy to use and incredibly powerful.

However, the data quality market continues to evolve away from esoteric technical tools and toward business-empowering suites providing robust functionality with easier-to-use, role-based interfaces that are tailored to the specific needs of different users, such as business analysts, data stewards, application developers, and system administrators.

The major players are still the large vendors who have innovated (mostly via acquisition and consolidation) enterprise application development platforms with integrated (to varying degrees) components, which provide not only data quality functionality, but also data integration and master data management (MDM).

Many of these vendors also offer service-oriented deployments delivering the same functionality within more loosely coupled technical architectures, which includes leveraging real-time services to prevent (or at least greatly minimize) poor data quality at the multiple points of origin within the data ecosystem.

Many vendors are also beginning to provide better built-in reporting and data visualization capabilities, which is helping to make the correlation between poor data quality and suboptimal business processes more tangible, especially for executive management.

It must be noted that many vendors (including the “market leaders”) continue to struggle with their International OOBE-DQ. 

Many (if not most) data quality tools are strongest in their native country or their native language, but their OOBE-DQ declines significantly when they travel abroad.  Especially outside of the United States, smaller vendors with local linguistic and cultural expertise built into their data quality tools have continued to remain fiercely competitive with the larger vendors.

Market research certainly has a role to play in making a purchasing decision, and perhaps most notably as an aid in comparing and contrasting features and benefits, which of course, always have to be evaluated against your specific requirements, including both your current and future needs. 

Now let’s shift our focus to examining some of the inherent challenges of evaluating market research, perception, and reality.

 

Confirmation Bias

First of all, I realize that this debate will suffer from a considerable—and completely understandable—confirmation bias.

If you are a customer, employee, or consultant for one of the “High Five” (not an “official” Gartner Magic Quadrant term for the Leaders), then obviously you have a vested interest in getting inebriated on your own Kool-Aid (as noted in my disclosure above, I used to get drunk on the yummy Big Blue Kool-Aid).  Now, this doesn’t mean that you are a “yes man” (or a “yes woman”).  It simply means it is logical for you to claim that market research, market perception, and market reality are in perfect alignment.

Likewise, if you are a customer, employee, or consultant for one of the “It Isn’t Easy Being Niche-y” (rather surprisingly, not an “official” Gartner Magic Quadrant term for the Niche Players), then obviously you have a somewhat vested interest in claiming that market research is from Mars, market perception is from Venus, and market reality is really no better than reality television.

And, if you are a customer, employee, or consultant for one of the “We are on the outside looking in, flipping both Gartner and their Magic Quadrant the bird for excluding us” (I think that you can figure out on your own whether or not that is an “official” Gartner Magic Quadrant term), then obviously you have a vested interest in saying that market research can “Kiss My ASCII!”

My only point is that your opinion of market research will obviously be influenced by what it says about your data quality tool. 

Therefore, should it really surprise anyone when, during the sales cycle, one of the High Five uses the Truly Awesome Syllogism:

“Well, of course, we say our data quality tool is awesome.
However, the Gartner Magic Quadrant also says our data quality tool is awesome.
Therefore, our data quality tool is Truly Awesome.”

Okay, so technically, that’s not even a syllogism—but who said any form of logical argument is ever used during a sales cycle?

On a more serious note, and to stop having too much fun at Gartner’s expense, they do advise against simply selecting vendors in their “Leaders quadrant” and instead always advise to select the vendor that is the better match for your specific requirements.

 

Features and Benefits: The Game Nobody Wins

As noted earlier, a features and benefits comparison is not only the most common technique used by prospects, but it is also the most common—if not the only—way that the vendors themselves position their so-called “competitive differentiation.”

The problem with this approach—and not just for data quality tools—is that there are far more similarities than differences to be found when comparing features and benefits. 

Practically every single data quality tool on the market today will include functionality for data profiling, data quality assessment, data standardization, data matching, data consolidation, data integration, data enrichment, and data quality monitoring.

Therefore, running down a checklist of features is like playing a game of Buzzword Bingo, or constantly playing Musical Chairs, but without removing any of the chairs in between rounds—in other words, the Features Game almost always ends in a tie.

So then next we play the Benefits Game, which is usually equally pointless because it comes down to silly arguments such as “our data matching engine is better than yours.”  This is the data quality tool vendor equivalent of:

Vendor D: “My Dad can beat up your Dad!”

Vendor Q: “Nah-huh!”

Vendor D: “Yah-huh!”

Vendor Q: “NAH-HUH!”

Vendor D: “YAH-HUH!”

Vendor Q: “NAH-HUH!”

Vendor D: “Yah-huh!  Stamp it!  No Erasies!  Quitsies!”

Vendor Q: “No fair!  You can’t do that!”

After both vendors have returned from their “timeout,” a slightly more mature approach is to run a vendor “bake-off” where the dueling data quality tools participate in a head-to-head competition processing a copy of the same data provided by the prospect. 

However, a bake-off often produces misleading results because the vendors—and not the prospect—perform the competition, making it mostly about vendor expertise, not OOBE-DQ.  Also, the data used rarely exemplifies the prospect’s data challenges.

If competitive differentiation based on features and benefits is a game that nobody wins, then what is the alternative?

 

The Golden Circle


I recently read the book Start with Why by Simon Sinek, which explains that “people don’t buy WHAT you do, they buy WHY you do it.” 

The illustration shows what Simon Sinek calls The Golden Circle.

WHY is your purpose—your driving motivation for action. 

HOW is your principles—specific actions that are taken to realize your Why. 

WHAT is your results—tangible ways in which you bring your Why to life. 

It’s a circle when viewed from above, but in reality it forms a megaphone for broadcasting your message to the marketplace. 

When you rely only on the approach of attempting to differentiate your data quality tool by discussing its features and benefits, you are focusing on only your WHAT, and absent your WHY and HOW, you sound just like everyone else to the marketplace.

When, as is often the case, nobody wins the Features and Benefits Game, a data quality tool sounds more like a commodity, which will focus the marketplace’s attention on aspects such as your price—and not on aspects such as your value.

Due to the considerable length of this blog post, I have been forced to greatly oversimplify the message of this book, which a future blog post will discuss in more detail.  I highly recommend the book (and no, I am not an affiliate).

At the very least, consider this question:

If there truly was one data quality tool on the market today that, without question, had the very best features and benefits, then why wouldn’t everyone simply buy that one? 

Of course your data quality tool has solid features and benefits—just like every other data quality tool does.

I believe that the hardest thing for our industry to accept is—the best technology hardly ever wins the sale. 

As most of the best salespeople will tell you, what wins the sale is when a relationship is formed between vendor and customer, a strategic partnership built upon a solid foundation of rapport, respect, and trust.

And that has more to do with WHY you would make a great partner—and less to do with WHAT your data quality tool does.

 

Do you believe in Magic (Quadrants)?


How much of an impact do you think market research has on the purchasing decision of a data quality tool?  How much do you think research affects both the perception and the reality of the data quality tool market?  How much do you think the features and benefits of a data quality tool affect the purchasing decision?

All perspectives on this debate are welcome without bias.  Therefore, please post a comment below.

PLEASE NOTE

Comments advertising your products and services (or bashing competitors) will not be approved.

 

 

The Great Rift

I recently read a great article about social collaboration in the enterprise by Julie Hunt, which includes the excellent insight:

“Most enterprises have failed to engender a ‘collaboration culture’ based on real human interaction.  The executive management of many companies does not even understand what a ‘collaboration culture’ is.  Frankly, executive management of many companies is hard put to authentically value employees—these companies want to de-humanize employees with such terms as ‘resources’ and ‘human capital’, and think that it is enough if they sling around a few ‘mission statements’ claiming that they ‘value’ employees.”

Even though the article was specifically discussing the reason why companies struggle to effectively use social media in business, it reminded me of the reason that many enterprise initiatives struggle—if not fail—to live up to their rather lofty expectations.

The most common root cause for the failure of enterprise initiatives is what I like to refer to as The Great Rift.

 

The Great Rift

In astronomy, the Great Rift—also known as the Dark Rift—is a series of overlapping and non-luminous molecular dust clouds, which appear to create a dark divide in the otherwise bright band of stars and other luminous objects comprising our Milky Way.

Within the intergalactic empires of the business world, The Great Rift is a metaphor for the dark divide separating how most of these organizations would list and prioritize their corporate assets:

Please note that a list of things is on the left side of The Great Rift and on the right side is a list of people. 

Although the order of importance given to the items within each of these lists is debatable, I would argue what is not debatable is that the list of things is what most organizations prioritize as their most important corporate assets.

It is precisely this prioritization of the value of things over the value of people that creates and sustains The Great Rift.

Of course, the message delivered by corporate mission statements, employee rallies, and customer conferences would lead you to believe the exact opposite is true—and in fairness, some organizations do prioritize the value of people over the value of things.

However, the harsh reality of the business world is that the message “we value our people” is often only a Machiavellian illusion.

I believe that as long as The Great Rift exists, then no enterprise initiative can be successful—or remain successful for very long. 

The enterprise-wide communication and collaboration that is so critical to achieving and sustaining success on initiatives such as Master Data Management (MDM) and Data Governance, can definitely not escape the effects of The Great Rift. 

Eventually, The Great Rift becomes the enterprise equivalent of a black hole, where not even the light shining from your very brightest stars will be able to escape its gravitational pull.

“Returning to the human side of business won’t happen magically,” Julie Hunt concluded her article.  “It will take real work and real commitment, from the executive level through all levels of management and employee departments.”

I wholeheartedly agree with Julie and will therefore conclude this blog post by paraphrasing the lyrics from “Yellow” by Coldplay into a song I am simply calling “People” because repairing The Great Rift and “returning to the human side of business” can only be accomplished by acknowledging that every organization’s truly most important corporate asset is—their people.

Rumor has it that The Rolling Forecasts might even add the song to their playlist for the Data Rock Star World Tour 2010.

 

People

Look at your people
Look how they shine for you
And in everything they do
Yeah, they’re all stars

They came along 
They wrote a song for you
About all the Things they do
And it was called People

So they each took their turn 
And sung about all the things they’ve done
And it was all for you

Your business
Oh yeah, your technology and your data too
They turned it all into something beautiful
Did you know they did it for you?
They did it all for you

Now what are you going to do for them?

They crossed The Great Rift
They jumped across for you 
Because all the things you do
Are all done by your people

Look at your stars
Look how they shine
And in everything they do
Look how they shine for you

They crossed the line
The imaginary line drawn by you
Oh what a wonderful thing to do
And it was all for you

Your business
Oh yeah, your technology and your data too
They turned it all into something beautiful
Did you know they did it for you?
They did it all for you

Now what are you going to do for them?

Look at your people, they’re your stars, it’s true
Look how they shine
And in everything they do
Look how they shine for you

Look at your people
Look at your stars
Look how they shine
And in everything they do
Look how they shine for you

Now what are you going to do for them?

Twitter, Meaningful Conversations, and #FollowFriday

In social media, one of the most common features of social networking services is allowing users to share brief status updates.  Twitter is currently built on only this feature and uses status updates (referred to as tweets) that are limited to a maximum of 140 characters, which creates a rather pithy platform that many people argue is incompatible with meaningful communication.

Although I use Twitter for a variety of reasons, one of them is sharing quotes that I find thought-provoking.  For example:

 

This George Santayana quote was shared by James Geary, whom I follow on Twitter because he uses his account to provide the “recommended daily dose of aphorisms.”  My re-tweet (i.e., “forwarding” of another user’s status update) triggered the following meaningful conversation with Augusto Albeghi, the founder of StraySoft who is known as @Stray__Cat on Twitter:

 

Now of course, I realize that what exactly constitutes a “meaningful conversation” is debatable regardless of the format.

Therefore, let me first provide my definition, which consists of the following three simple requirements:

  1. At least two people discussing a topic, which is of interest to all parties involved
  2. Allowing all parties involved to have an equal chance to speak (or otherwise share their thoughts)
  3. Attentively listening to the current speaker—as opposed to merely waiting for your turn to speak

Next, let’s examine why Twitter’s format can be somewhat advantageous to satisfying these requirements:

  1. Although many (if not most) tweets are not necessarily attempting to start a conversation, at the very least they do provide a possible topic for any interested parties
  2. Everyone involved has an equal chance to speak, but time lags and multiple simultaneous speakers can occur, which in all fairness can happen in any other format
  3. Tweets provide somewhat of a running transcript (again, time lags can occur) for the conversation, making it easier to “listen” to the other speaker (or speakers)

Now, let’s address the most common objection to Twitter being used as a conversation medium:

“How can you have a meaningful conversation when constrained to only 140 characters at a time?”

I admit to being a long-winded talker or, as a favorite (canceled) television show would say, “conversationally anal-retentive.”  In the past (slightly less now), I was also known for e-mail messages even Leo Tolstoy would declare to be far too long.

However, I wholeheartedly agree with Jennifer Blanchard, who explained how Twitter makes you a better writer.  When forced to be concise, you have to focus on exactly what you want to say, using as few words as possible.

I call this reduction of your message to its bare essence—the power of pith.  In order to engage in truly meaningful conversations, this is a required skill we all must master, and not just for tweeting—but Twitter does provide a great practice environment.

 

At least that’s my 140 characters worth on this common debate—well okay, it’s more like my 5,000 characters worth.

 

Great folks to follow on Twitter

Since this blog post was published on a Friday, which for Twitter users like me means it’s FollowFriday, I would like to conclude by providing a brief list of some great folks to follow on Twitter. 

Although by no means a comprehensive list, and listed in no particular order whatsoever, here are some great tweeps, and especially if you are interested in Data Quality, Data Governance, Master Data Management, and Business Intelligence:

 

PLEASE NOTE: No offense is intended to any of my tweeps not listed above.  However, if you feel that I have made a glaring omission of an obviously Twitterific Tweep, then please feel free to post a comment below and add them to the list.  Thanks!

I hope that everyone has a great FollowFriday and an even greater weekend.  See you all around the Twittersphere.

 

Related Posts

Wordless Wednesday: June 16, 2010

Data Rock Stars: The Rolling Forecasts

The Fellowship of #FollowFriday

Social Karma (Part 7)

The Wisdom of the Social Media Crowd

The Twitter Clockwork is NOT Orange

Video: Twitter #FollowFriday – January 15, 2010

Video: Twitter Search Tutorial

Live-Tweeting: Data Governance

Brevity is the Soul of Social Media

If you tweet away, I will follow

Tweet 2001: A Social Media Odyssey

MacGyver: Data Governance and Duct Tape

One of my favorite 1980s television shows was MacGyver, which starred Richard Dean Anderson as an extremely intelligent and endlessly resourceful secret agent, known for his practical application of scientific knowledge and inventive use of common items.

While I was thinking about the role of both data stewards and data cleansing within a successful data governance program, the two things that immediately came to mind were MacGyver, and that other equally versatile metaphor—duct tape.

I decided to combine these two excellent metaphors by envisioning MacGyver as a data steward and duct tape as data cleansing.

 

Data Steward: The MacGyver of Data Governance

Since “always prepared for adventure” was one of the show’s taglines, I think MacGyver would make an excellent data steward.

The fact that the activities associated with the role can vary greatly almost qualifies “data steward” as a MacGyverism.  Your particular circumstances, and especially the unique corporate culture of your organization, will determine the responsibilities of your data stewardship function, but the general principles of data stewardship, as defined by Jill Dyché, include the following:

  • Stewardship is the practice of managing or looking after the well being of something.
  • Data is an asset owned by the enterprise.
  • Data stewards do not necessarily “own” the data assigned to them.
  • Data stewards care for data assets on behalf of the enterprise.

Just like MacGyver’s trusted sidekick—his Swiss Army knife—the most common trait of a data steward may be versatility. 

I am not suggesting that a data steward is a jack of all trades, but master of none.  However, a data steward often has a rather HedgeFoxian personality, thereby possessing the versatility necessary to integrate disparate disciplines into practical solutions.

In her excellent article Data Stewardship Strategy, Jill Dyché outlined six tried-and-true techniques that can help you avoid some common mistakes and successfully establish a data stewardship function within your organization.  The second technique provides a few examples of typical data stewardship activities, which often include assessing and correcting data quality issues.

 

Data Cleansing: The Duct Tape of Data Quality

About poor data quality, MacGyver says, “if I had some duct tape, I could fix that.”  (Okay—so he says that about everything.)

Data cleansing is the duct tape of data quality.

Proactive defect prevention is highly recommended (even though it is impossible to truly prevent every problem before it happens) because the more control enforced where data originates, the better the overall quality of enterprise information will be. 

However, when poor data quality negatively impacts decision-critical information, the organization may legitimately prioritize a reactive short-term response—where the only remediation will be finding and fixing the immediate problems. 

Of course, remediation limited to data cleansing alone will neither identify nor address the burning root cause of those problems. 

Effectively balancing the demands of a triage mentality with the best practice of implementing defect prevention wherever possible will often create a very challenging situation for data stewards to contend with on a daily basis.  However, as MacGyver says:

“When it comes down to me against a situation, I don’t like the situation to win.”

Therefore, although comprehensive data remediation will require combining reactive and proactive approaches to data quality, data stewards need to always keep plenty of duct tape on hand (i.e., put data cleansing tools to good use whenever necessary).

 

The Data Governance Foundation

In the television series, MacGyver eventually left the clandestine service and went to work for the Phoenix Foundation.

Similarly, in the world of data quality, many data stewards don’t formally receive that specific title until they go to work helping to establish your organization’s overall Data Governance Foundation.

Although it may be what the function is initially known for, as Jill Dyché explains, “data stewardship is bigger than data quality.”

“Data stewards establish themselves as adept at executing new data governance policies and consequently, vital to ongoing information management, they become ambassadors on data’s behalf, proselytizing the concept of data as a corporate asset.”

Of course, you must remember that many of the specifics of the data stewardship function will be determined by your unique corporate culture and where your organization currently is in terms of its overall data governance maturity.

Although not an easy mission to undertake, the evolving role of a data steward is of vital importance to data governance.

The primary focus of data governance is the strategic alignment of people throughout the organization through the definition, and enforcement, of policies in relation to data access, data sharing, data quality, and effective data usage, all for the purposes of supporting critical business decisions and enabling optimal business performance. 

I know that sounds like a daunting challenge (and it definitely is) but always remember the wise words of Angus MacGyver:

“Brace yourself.  This could be fun.”

Related Posts

The Prince of Data Governance

Jack Bauer and Enforcing Data Governance Policies

The Circle of Quality

A Tale of Two Q’s

Live-Tweeting: Data Governance

 

Follow OCDQ

If you enjoyed this blog post, then please subscribe to OCDQ via my RSS feed, my E-mail updates, or Google Reader.

You can also follow OCDQ on Twitter, fan the Facebook page for OCDQ, and connect with me on LinkedIn.


Wednesday Word: June 23, 2010

Wednesday Word is an OCDQ regular segment intended to provide an occasional alternative to my Wordless Wednesday posts.  Wednesday Word provides a word (or words) of the day, including both my definition and an example of recommended usage.

 

Referential Narcissisity

Definition – When referential integrity is enforced, a relational database table’s foreign key columns must only contain data values from their parent table’s primary key column, but referential narcissisity occurs when a table’s foreign key columns refuse to acknowledge data values from their alleged parent table—especially when the parent table was created by another DBA.

Example – The following scene is set on the eighth floor of the Nemesis Corporation, where within the vast cubicle farm of the data architecture group, Bob, a Business Analyst struggling with an ad hoc report, seeks the assistance of Doug, a Senior DBA.

Bob: “Excuse me, Doug.  I don’t mean to bother you, I know you are a very busy and important man, but I am trying to join the Sales Transaction table to the Customer Master table using Customer Key, and my queries always return zero rows.”

Doug: “That is because although Doug created the Sales Transaction table, the Customer Master table was created by Craig.  Doug’s tables do not acknowledge any foreign key relationships with Craig’s tables.  Doug is superior to Craig in every way.  Doug’s Kung Fu is the best—and until Craig publicly acknowledges this, your joins will not return any rows.”

Bob: “Uh, why do you keep referring to yourself in the third person?”

Doug: “Doug is bored with this conversation now.  Be gone from my sight, lowly business analyst.  You should be happy that Doug even acknowledged your presence at all.” 
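For contrast with Doug’s referential narcissisity, here is a minimal sketch of referential integrity actually being enforced (using Python’s built-in sqlite3 module; the table and column names are simplified stand-ins for the ones in the scene above):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces foreign keys when this is on

conn.execute("CREATE TABLE customer_master (customer_key INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE sales_transaction (
                    transaction_id INTEGER PRIMARY KEY,
                    customer_key INTEGER REFERENCES customer_master(customer_key),
                    amount REAL)""")

conn.execute("INSERT INTO customer_master VALUES (1, 'Bob')")
conn.execute("INSERT INTO sales_transaction VALUES (100, 1, 250.00)")  # accepted: parent key exists

try:
    conn.execute("INSERT INTO sales_transaction VALUES (101, 99, 125.00)")  # no such customer_key
except sqlite3.IntegrityError as e:
    print("Rejected by referential integrity:", e)  # FOREIGN KEY constraint failed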

 

Related Posts

Wednesday Word: June 9, 2010 – C.O.E.R.C.E.

Wednesday Word: April 28, 2010 – Antidisillusionmentarianism

Wednesday Word: April 21, 2010 – Enterpricification

Wednesday Word: April 7, 2010 – Vendor Asskisstic

Promoting Poor Data Quality

A few months ago, during an e-mail correspondence with one of my blog readers from Brazil (I’ll let him decide if he wishes to remain anonymous or identify himself in the comments section), I was asked the following intriguing question:

“Who profits from poor data quality?”

The specific choice of verb (i.e., “profits”) may have been a linguistic issue, by which I mean that since I don’t know Portuguese, our correspondence had to be conducted in English. 

Please don’t misunderstand me—his writing was perfectly understandable. 

As I discussed in my blog post Can Social Media become a Universal Translator?, my native language is English, and like many people from the United States, it is the only language I am fluent in.  My friends from Great Britain would most likely point out that I am only fluent in the American “version” of the English language, but that’s a topic for another day—and another blog post.

When anyone communicates in another language—and especially in writing—not every word may be exactly right. 

For example: Muito obrigado por sua pergunta!

Hopefully (and with help from Google Translate), I just wrote “thank you for your question” in Portuguese.

My point is that I believe he was asking why poor data quality continues to persist as an extremely prevalent issue, especially when its detrimental effects on effective business decisions have become painfully obvious given the recent global financial crisis.

However, being mentally stuck on my literal interpretation of the word “profit” has delayed my blog post response—until now.

 

Promoting Poor Data Quality

In economics, the term “flight to quality” describes the aftermath of a financial crisis (e.g., a stock market crash) when people become highly risk-averse and move their money into safer, more reliable investments.  A similar “flight to data quality” often occurs in the aftermath of an event when poor data quality negatively impacted decision-critical enterprise information. 

The recent recession provides many examples of the financial aspect of this negative impact.  Therefore, even companies that may not have viewed poor data quality as a major risk—and a huge cost greatly decreasing their profits—are doing so now.

However, the retail industry has always been known for its paper thin profit margins, which are due, in large part, to often being forced into the highly competitive game of pricing.  Although dropping the price is the easiest way to sell just about any product, it is also virtually impossible to sustain this rather effective, but short-term, tactic as a viable long-term business strategy. 

Therefore, a common approach used to compete on price without risking too much on profit is to promote sales using a rebate, which I believe is a business strategy intentionally promoting poor data quality for the purposes of increasing profits.

 

You break it, you slip it—either way—you buy it, we profit

The most common form of a rebate is a mail-in rebate.  The basic premise is simple.  Instead of reducing the in-store price of a product, it is sold at full price, but a rebate form is provided that the customer can fill out and mail to the product’s manufacturer, which will then mail a rebate check to the customer—usually within a few business weeks after approving the rebate form. 

For example, you could purchase a new mobile phone for $250 with a $125 mail-in rebate, which would make the “sale price” only $125—which is what the store will advertise as the actual sale price with “after a $125 mail-in rebate” written in small print.

Two key statistics significantly impact the profitability of these types of rebate programs: breakage and slippage.

Breakage is the percentage of customers who, for reasons I will get to in a moment, fail to take advantage of the rebate, and therefore end up paying full price for the product.  Returning to my example, the mobile phone that would have cost $125 if you received the $125 mail-in rebate instead becomes exactly what you paid for it—$250 (plus applicable taxes, of course).

Slippage is the percentage of customers who either don’t mail in the rebate form at all, or don’t cash the rebate check they receive.  The former is the most common “slip,” while the latter is usually caused by failing to cash the rebate check before it expires, which typically happens 30 to 90 days after it is processed, regardless of when it is actually received.

Breakage, and the most common form of slippage, are generally the result of making the rebate process intentionally complex. 

Rebate forms often require you to provide a significant amount of information, both about yourself and the product, as well as attach several “proofs of purchase” such as a copy of the receipt and the barcode cut out of the product’s package. 

Data entry errors are perhaps the most commonly cited root cause of poor data quality. 

Rebates seem designed to guarantee data entry errors, since the complexity of the process makes it easy for the customer to fill out the rebate form incorrectly. 

In this particular situation, the manufacturer is hyper-vigilant about data quality and for an excellent reason—poor data quality will either delay or void the customer’s rebate. 

Additionally, the fine print of the rebate form can include other “terms and conditions” voiding the rebate—even if the form is filled out perfectly.  A common example is the limitation of “only one rebate per postal address.”  This sounds reasonable, right? 

Well, one major electronics manufacturer used this disclaimer to disqualify all customers who lived in multiple unit dwellings, such as an apartment building, where another customer “at the same postal address” had already applied for a rebate.

 

Conclusion

Statistics vary by product and region, but estimates show that breakage and slippage combine on average to result in 40% of retail customers paying full price when making a purchasing decision based on a promotional price requiring a mail-in rebate.
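
To put that estimate into perspective, here is a minimal sketch in Python that computes the average price customers actually end up paying.  The function and its inputs are purely illustrative, using the mobile phone example above and the 40% figure as assumptions rather than actual retail data:

def expected_realized_price(full_price, rebate, no_rebate_rate):
    """Average price paid per customer.

    no_rebate_rate is the fraction of customers who never receive the
    rebate (breakage plus slippage combined).
    """
    discounted_price = full_price - rebate
    return (no_rebate_rate * full_price
            + (1 - no_rebate_rate) * discounted_price)

# Illustrative inputs: $250 phone, $125 mail-in rebate, 40% breakage plus slippage
average_paid = expected_realized_price(full_price=250.00, rebate=125.00, no_rebate_rate=0.40)
print("Advertised price after rebate: $125.00")
print(f"Average price actually paid:   ${average_paid:.2f}")  # prints $175.00

Under those assumptions, the advertised $125 “sale price” works out to an average realized price of $175 per phone, which is $50 more than the promotion implies.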

So who profits from poor data quality?  Apparently, the retail industry does—sometimes. 

Poor data quality (and poor information quality in the case of intentionally confusing fine print) definitely has a role to play with mail-in rebates—and it’s a supporting role that can definitely lead to increased profits. 

Of course, the long-term risks and costs associated with alienating the marketplace with gimmicky promotions take their toll. 

In fact, the major electronics manufacturer mentioned above was substantially fined in the United States and forced to pay hundreds of thousands of dollars’ worth of denied mail-in rebates to customers.

Therefore, poor data quality, much like crime, doesn’t pay—at least not for very long.

I am not trying to demonize the retail industry. 

Excluding criminal acts of intentional fraud, such as identity theft and money laundering, this was the best example I could think of that allowed me to respond to a reader’s request—without using the far more complex example of the mortgage crisis.

 

What Say You?

Can you think of any other examples of the possible benefits—intentional or accidental—derived from poor data quality?

The Prince of Data Governance

[Image: Niccolò Machiavelli]

The difference between politics and policies was explained in the recent blog post A New Dimension in Data Governance Directives: Politics by Jarrett Goldfedder, who also discussed the need to consider the political influences involved, as they can often have a far greater impact on our data governance policies than many choose to recognize.

I definitely agree, especially since the unique corporate culture of every organization carries with it the intricacies and complexities of politics that Niccolò Machiavelli (pictured) wrote about in his book The Prince.

Even though it was written in the early 16th century, the book remains a great, albeit generally regarded as satirical, view of politics.

The Prince provides a classic study of the acquisition, expansion, and effective use of political power, where the ends always justify the means.

An example of a Machiavellian aspect of the politics of data governance is when a primary stakeholder, while always maintaining the illusion of compliance, only truly complies with policies when doing so suits their own personal agenda, or benefits the interests of the business unit that they represent on the data governance board.

 

Creating Accountability

In her excellent comment on my recent blog post Jack Bauer and Enforcing Data Governance Policies, Kelle O'Neal provided a link to Creating Accountability, a great article by Nancy Raulston, which explains that there is a significant difference between increasing accountability (e.g., for compliance with data governance policies) and simply getting everyone to do what they’re told (especially if you have considered resorting to a Jack Bauer approach to enforcing data governance policies).

Raulston shares her high-level thoughts about the key aspects of alignment with vision and goals, achieving clarity on actions and priorities, establishing ownership of processes and responsibilities, the structure of meetings, and the critical role of active and direct communication—all of which are necessary to create true accountability.

“Accountability does not come from every single person getting every single action item done on time,” explains Raulston.  “It arises as groups actively manage the process of making progress, raising and resolving issues, actively negotiating commitments, and providing direct feedback to team members whose behavior is impeding the team.”

Obviously, this is often easier said than done.  However, as Raulston concludes, “ultimate success comes from each person being willing to honestly engage in the process, believing that the improved probability of success outweighs any momentary discomfort from occasionally having to admit to not having gotten something done.”  Or perhaps more important, occasionally having to be comfortable with not having gotten what would suit their personal agenda, or benefit the interests of their group.

 

The Art of the Possible

“Right now, our only choice,” as Goldfedder concluded his post, “is to hope that the leaders in charge of the final decisions can put their own political goals aside for the sake of the principles and policies they have been entrusted to uphold and protect.”

Although I agree, and also acknowledge that the politics of data governance will always make it as much art as science, I cannot help but be reminded of the famous words of Otto von Bismarck:

“Politics is the art of the possible.”

The politics of data governance are extremely challenging, and yes, at times rather Machiavellian in their nature. 

Although it is by no means an easy endeavor for you or your organization to undertake, achieving a successful and sustainable data governance program is not impossible. 

Politics may be The Prince of Data Governance, but as long as Communication and Collaboration reign as King and Queen, then Data Governance is the Art of the Possible.

 

Please share your thoughts about the politics of data governance, as well as your overall perspectives on data governance.

 
