On March 13, 2009, I launched this blog and, just a month away from its 5th anniversary, this is its 500th post. For following Obsessive-Compulsive Data Quality for 5 years and 500 posts, I offer 5 words: Thank you all very much.
Welcome to the 400th Obsessive-Compulsive Data Quality (OCDQ) blog post! I am commemorating this milestone with the 13th entry in my ongoing series for expressing gratitude to my readers for their truly commendable comments on my blog posts.
“Your concern is well-founded. Knowing how few businesses make really good use of the small data they’ve had around all along, it’s easy to imagine that they won’t do any better with bigger data sets.
I wrote some hints for those wading into the big data mire in my post, Better than Brute Force: Big Data Analytics Tips. But the truth is that many organizations won’t take advantage of the ideas that you are presenting, or my tips, especially as the datasets grow larger. That’s partly because they have no history with scientific methods, and partly because the data science movement is driving employers to search for individuals with heroically large skill sets.
Since few, if any, people truly meet these expectations, those hired will have real human limitations, and most often they will be people who know much more about data storage and manipulation than data analysis and applications.”
“The comparison between scientific inquiry and business decision making is a very interesting and important one. Successfully serving a customer and boosting competitiveness and revenue does require some (hopefully unique) insights into customer needs. Where do those insights come from?
Additionally, scientists also never stop questioning and improving upon fundamental truths, which I also interpret as not accepting conventional wisdom — obviously an important trait of business managers.
I recently read commentary that gave high praise to the manager utilizing the scientific method in his or her decision-making process. The author was not a technologist, but rather none other than Peter Drucker, in writings from decades ago.
I blogged about Drucker’s commentary, data science, the scientific method vs. business decision making, and I’d value your and others’ input: Business Managers Can Learn a Lot from Data Scientists.”
“I would argue that listening to not only customers but also business partners is very important (and not only in retail but in any business). I always say that, even if as an organization you are not active in the social world, assume that your customers, suppliers, employees, competitors are active in the social world and they will talk about you (as a company), your people, products, etc.
So it is extremely important to tune in to those conversations and evaluate their impact on your business. A dear friend of mine ventured into the restaurant business a few years back. He experienced a bit of a slowdown in his business after a great start. He started surveying his customers and brought in food critics to evaluate whether the food was a problem, but he could not figure out what was going on. I stumbled upon Yelp.com and noticed that his restaurant’s rating had dropped and there had recently been some complaints about service and cleanliness (nothing major though).
This happened because he had turnover in his front desk staff. He addressed those issues and reached out to customers who had a bad experience (some of them were frequent visitors). They went back, commented, and gave newer ratings to his business, which helped him turn the corner.
This was a big learning moment for me about the power of social media and the need for monitoring it.”
“Our organization is starting to develop data governance processes and one of the processes we have deliberately designed is to get to the root cause of data quality issues.
We’ve designed it so that the errors that are reported also include the userid and the system where the data was generated. Errors are then filtered by function and the business steward responsible for that function is the one who is responsible for determining and addressing the root cause (which of course may require escalation to solve).
The business steward for the functional area has the most at stake in the data and is typically the most knowledgeable as to the process or system that may be triggering the error. We have yet to test this as we are currently in the process of deploying a pilot stewardship program.
However, we are very confident that it will help us uncover many of the causes of the data quality problems and, with lots of PLAN, DO, CHECK, and ACT, our goal is to continuously improve so that our need for stewardship is eventually reduced (many years away, no doubt).”
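The error-routing process this commenter describes could be sketched as follows. This is a minimal illustration only; the field names, steward mapping, and error records are assumptions, not the commenter’s actual schema:

```python
from collections import defaultdict

# Hypothetical error records; the field names are assumptions for
# illustration, not the commenter's actual schema.
errors = [
    {"userid": "u101", "system": "CRM", "function": "sales", "issue": "missing postal code"},
    {"userid": "u102", "system": "ERP", "function": "finance", "issue": "duplicate invoice"},
    {"userid": "u103", "system": "CRM", "function": "sales", "issue": "invalid email"},
]

# Assumed mapping of business functions to their responsible stewards.
stewards = {"sales": "Alice", "finance": "Bob"}

def route_errors(errors, stewards):
    """Filter reported errors by business function and batch them for the
    steward responsible for that function (escalation is not modeled)."""
    routed = defaultdict(list)
    for error in errors:
        steward = stewards.get(error["function"], "unassigned")
        routed[steward].append(error)
    return dict(routed)

routing = route_errors(errors, stewards)
```

Because each error carries the userid and originating system, the responsible steward receives enough context to start tracing the root cause.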
“I can’t imagine what it would be like to use this iPad I now own if I were out of network for an hour. Supposedly the coolest thing to own and a breakthrough innovation of this decade, as some put it, it’s nothing but a dumb terminal if I do not have 3G or Wi-Fi connectivity.
Putting most of my documents, notes, to-do’s, and blogs bookmarked for reading later (e.g., Instapaper) in the cloud, I am sure to avoid duplicating data and to eliminate installing redundant applications.
(Oops! I mean the apps! :) )
With cloud-based MDM and Data Quality tools starting to emerge, I can’t wait to explore and utilize the advantages this return of dumb terminals brings to our enterprise information management field.”
“The fact is that companies have always done predictive marketing, they’re just getting smarter at it.
I remember living as a student in a fairly downtrodden area where, because of post code analytics, I was bombarded with letterbox mail advertising crisis loans to consolidate debts and so on. When I got my first job and moved to a new area, all of a sudden I was getting offers for loans to buy a bigger car. The companies were clearly analyzing my wealth based on post code lifestyle data.
Fast forward and companies can do way more as you say.
Teresa Cottam (Global Telecoms Analyst) has cited the big telcos as a major driver in all this; they now consider themselves data companies, so they will start to offer more services to vendors to track our engagement across the entire communications infrastructure (Read more here: http://bit.ly/xKkuX6).
I’ve just picked up a shiny new Mac this weekend after retiring my long suffering relationship with Windows so it will be interesting to see what ads I get served!”
And please check out all of the commendable comments received on the blog post: Data Quality and Chicken Little Syndrome.
Thank You for Your Comments and Your Readership
You are Awesome — which is why receiving your comments has been the most rewarding aspect of my blogging experience over the last 400 posts. Even if you have never posted a comment, you are still awesome — feel free to tell everyone I said so.
This entry in the series highlighted commendable comments on blog posts published between April 2012 and June 2012.
Since there have been so many commendable comments, please don’t be offended if one of your comments wasn’t featured.
Please continue commenting and stay tuned for future entries in the series.
Thank you for reading the Obsessive-Compulsive Data Quality blog. Your readership is deeply appreciated.
So, absolutely without question, there is no better way to commemorate this milestone than to also make this the 12th entry in my ongoing series for expressing my gratitude to my readers for their truly commendable comments on my blog posts.
“I think this helps illustrate that one size does not fit all.
You can’t take a singular approach to how you design for big data. It’s all about identifying relevance and understanding that relevance can change over time.
There are certain situations where it makes sense to leverage all of the data, and now with high performance computing capabilities that include in-memory, in-DB, and grid, it’s possible to build and deploy rich models using all data in a short amount of time. Not only can you leverage rich models, but you can deploy a large number of models that leverage many variables so that you get optimal results.
On the other hand, there are situations where you need to filter out the extraneous information and the more intelligent you can be about identifying the relevant information the better.
The traditional approach is to grab the data, cleanse it, and land it somewhere before processing or analyzing the data. We suggest that you leverage analytics up front to determine what data is relevant as it streams in, with relevance based on your organizational knowledge or context. That helps you determine what data should be acted upon immediately, where it should be stored, etc.
And, of course, there are considerations about using visual analytic techniques to help you determine relevance and guide your analysis, but that’s an entire subject just on its own!”
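The “analytics up front” idea the commenter describes, scoring data for relevance as it streams in and routing it before landing it anywhere, could be sketched like this. The relevance rule is a deliberately trivial stand-in for real organizational context, and all the names are illustrative:

```python
# Score each record for relevance as it streams in, and route it before
# landing it anywhere. Keyword matching is a toy stand-in for the
# organizational knowledge or context the commenter mentions.

def relevance(record, keywords):
    """Toy relevance score: how many context keywords the record mentions."""
    return sum(1 for keyword in keywords if keyword in record)

def route(stream, keywords, threshold=1):
    """Decide, per record, whether to act on it now or archive it."""
    act_now, archive = [], []
    for record in stream:
        if relevance(record, keywords) >= threshold:
            act_now.append(record)
        else:
            archive.append(record)
    return act_now, archive

stream = ["outage reported in region-7", "routine heartbeat", "customer churn spike"]
act_now, archive = route(stream, keywords=["outage", "churn"])
```

A real implementation would replace the keyword rule with trained models or business rules, but the shape is the same: relevance is decided in-stream, not after the data has already been landed and cleansed.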
On Data Governance Frameworks are like Jigsaw Puzzles, Gabriel Marcan commented:
“I agree (and like) the jigsaw puzzles metaphor. I would like to make an observation though:
Can you really construct Data Governance one piece at a time?
I would argue you need to put together sets of pieces simultaneously, and to ensure early value, you might want to piece together the interesting / easy pieces first.
Hold on, that sounds like the typical jigsaw strategy anyway . . . :-)”
On Data Governance Frameworks are like Jigsaw Puzzles, Doug Newdick commented:
“I think that there are a number of more general lessons here.
In particular, the description of the issues with data governance sounds very much like the issues with enterprise architecture. In general, there are very few eureka moments in solving the business and IT issues plaguing enterprises. These solutions are usually 10% inspiration, 90% perspiration in my experience. What looks like genius or a sudden breakthrough is usually the result of a lot of hard work.
I also think that there is a wider Myth of the Framework at play too.
The myth is that if we just select the right framework then everything else will fall into place. In reality, the selection of the framework is just the start of the real work that produces the results. Frameworks don’t solve your problems, people solve your problems by the application of brain-power and sweat.
All frameworks do is take care of some of the heavy-lifting, i.e., the mundane foundational research and thinking activity that is not specific to your situation.
Unfortunately, the myth of the framework is why many organizations think that choosing TOGAF will immediately solve their IT issues, and are then disappointed when this doesn’t happen, when a more sensible approach might have garnered better long-term success.”
“I agree with everything you’ve said, but there’s a much uglier truth about data quality that should also be discussed — the business benefit of NOT having a data quality program.
The unfortunate reality is that in a tight market, the last thing many decision makers want to be made public (internally or externally) is the truth.
In a company with data quality principles ingrained in day-to-day processes, and reporting handled independently, it becomes much harder to hide or reinterpret your falling market share. Without these principles though, you’ll probably be able to pick your version of the truth from a stack of half a dozen, then spend your strategy meeting discussing which one is right instead of what you’re going to do about it.
What we’re talking about here is the difference between a Politician — who will smile at the camera and proudly announce 0.1% growth was a fantastic result given X, Y, and Z factors — and a Statistician who will endeavor to describe reality with minimal personal bias.
And the larger the organization, the more internal politics plays a part. I believe a lot of the reluctance in investing in data quality initiatives could be traced back to this fear of being held truly accountable, regardless of it being in the best interests of the organization. To build a data quality-centric culture, the change must be driven from the CEO down if it’s to succeed.”
“The question: ‘Is Data Quality a Journey or a Destination?’ suggests that it is one or the other.
I agree with another comment that data quality is neither . . . or, I suppose, it could be both (the journey is the destination and the destination is the journey; they are one and the same).
The quality of data (or anything for that matter) is something we experience.
Quality only radiates when someone is in the act of experiencing the data, and usually only when it is someone that matters. This radiation decays over time, ranging from seconds or less to years or more.
The only problem with viewing data quality as radiation is that radiation can be measured by an instrument, but there is no such instrument to measure data quality.
We tend to confuse data qualities (which can be measured) and data quality (which cannot).
In the words of someone whose name I cannot recall: ‘Quality is not job one. Being totally %@^#&$*% amazing is job one.’ The only thing I disagree with here is that being amazing is characterized as a ‘job.’
Data quality is not something we ‘do’ to data. It’s not a business initiative or project or job. It’s not a discipline. We need to distinguish between the pursuit (journey) of being amazing and actually being amazing (destination — but certainly not a final one). To be amazing requires someone to be amazed. We want data to be continuously amazing . . . to someone that matters, i.e., someone who uses and values the data a whole lot for an end that makes a material difference.
Come to think of it, the only prerequisite for data quality is being alive because that is the only way to experience it. If you come across some data and have an amazed reaction to it and can make a difference using it, you cannot help but experience great data quality. So if you are amazing people all the time with your data, then you are doing your data quality job very well.”
“Nicely delineated argument, Jim. Successfully starting a data quality program seems to be a balance between getting started somewhere and determining where best to start. The data quality problem is like a two-edged sword without a handle that is inflicting the ‘death of a thousand cuts’.
Data quality is indeed difficult to get ‘a handle on’.”
And since they generated so much great banter, please check out all of the commendable comments received by the blog posts There is No Such Thing as a Root Cause and You only get a Return from something you actually Invest in.
Thank You for Three Awesome Years
You are Awesome — which is why receiving your comments has been the most rewarding aspect of my blogging experience over the last three years. Even if you have never posted a comment, you are still awesome — feel free to tell everyone I said so.
This entry in the series highlighted commendable comments on blog posts published between December 2011 and March 2012.
Since there have been so many commendable comments, please don’t be offended if one of your comments wasn’t featured.
Please continue commenting and stay tuned for future entries in the series.
Thank you for reading the Obsessive-Compulsive Data Quality blog for the last three years. Your readership is deeply appreciated.
My selections were based on a pseudo-scientific, quasi-statistical combination of page views, comments, and re-tweets (as well as choosing a few of my personal favorites). Instead of ordering the posts chronologically, I decided to organize them by theme.
The Metadata Trilogy
Although it has an incredibly important role to play in data quality and its related disciplines, I don’t write about metadata very often. But the reader feedback that I received led me to write three blog posts about metadata in the span of a few weeks:
- The Metadata Crisis — There is a running debate within many organizations over the meaning of commonly used terms, which complicates what on the surface seem like straightforward business questions.
- The Metadata Continuum — There is a continuum, where at one end we have the uniformity of controlled vocabularies, and at the other end we have the flexibility of chaotic folksonomies. However, both flexibility and uniformity provide value.
- You Say Potato and I Say Tater Tot — The demarcations of the borders between metadata, data, and information are important, but sometimes difficult to discern. In this post, I offer an explanation about these demarcations using potatoes.
The Data Governance Star Wars (one less than a) Trilogy
In June, Rob Karel of Forrester Research and I used a Star Wars themed blog mock debate to take on one of data governance’s biggest challenges — how to balance bureaucracy and business agility. Gwen Thomas of the Data Governance Institute joined Rob and me to continue the discussion during a special, extended, and Star Wars themed episode of OCDQ Radio:
- Data Governance Star Wars: Balancing Bureaucracy and Agility — In character as OCDQ-Wan, I argue in favor of business agility and explain that Collaboration is the Data Governance Force.
- Data Governance Star Wars on OCDQ Radio — In Part 1, Rob Karel and I discuss our blog mock debate, which is followed by a brief Star Wars themed intermission, and then in Part 2, Gwen Thomas joins us to provide her excellent insights.
Although not Star Wars themed, here are some additional Best OCDQ Blog Posts of 2011 on the topic of data governance:
- The Three Most Important Letters in Data Governance — There are only three letters of difference between the words cooperative and competitive, which we could say are the three most important letters in data governance.
- Data Governance and the Adjacent Possible — It’s important to demonstrate that some data governance policies reflect existing best practices, which helps reduce resistance to change, and therefore I advise: “If it ain’t broke, bricolage it.”
- Aristotle, Data Governance, and Lead Rulers — Well-constructed data governance policies are like lead rulers — flexible rules that empower us with an understanding of the principle of the policy, and how to enforce it in a particular context.
- The Stakeholder’s Dilemma — There will be times when sacrifices for the long-term greater good will require that stakeholders either contribute more resources during the current phase, or receive fewer benefits from its deliverables.
- Beware the Data Governance Ides of March — My dramatized warning about relying too much on the top-down approach to implementing data governance — and especially if your organization has any data stewards named Brutus or Cassius.
- Data Governance and the Buttered Cat Paradox — The fearless felines of the buttered-toast-paratrooper brigade ponder how to approach data governance — top-down or bottom-up. See the follow-up post: Zig-Zag-Diagonal Data Governance.
In June, I launched OCDQ Radio, which is a vendor-neutral podcast about data quality and the audio complement to this blog, providing me with a platform for recorded discussions with the great folks working in the data management industry. So far, there have been 21 episodes of OCDQ Radio, including 22 guests from 7 countries. Here are a few of the most popular episodes:
- So Long 2011, and Thanks for All the . . . — The OCDQ Radio 2011 Year in Review, featuring Jarrett Goldfedder, who discusses Big Data, Nicola Askham, who discusses Data Governance, and Daragh O Brien, who discusses Data Privacy.
- The Fall Back Recap Show — A look back at the Best of OCDQ Radio, including discussions about Data, Information, Business-IT Collaboration, Change Management, Big Analytics, Data Governance, and the Data Revolution.
- Big Data and Big Analytics — Special Guests Jill Dyché and Dan Soceanu discuss big trends in Business Intelligence, including Cloud, Collaboration, and Big Data, the last of which led to a discussion about Big Analytics.
- Organizing for Data Quality — Guest Tom Redman (aka the “Data Doc”) discusses how your organization should approach data quality, including his call to action for your role in the data revolution.
- Making EIM Work for Business — Guest John Ladley discusses his book Making EIM Work for Business, exploring what makes information management, not just useful, but valuable to the enterprise.
- The Blue Box of Information Quality — Guest Daragh O Brien on why Information Quality is bigger on the inside, using stories as an analytical tool and change management technique, and why we must never forget that “people are cool.”
- Master Data Management in Practice — Guests Dalton Cervo and Mark Allen discuss their book MDM in Practice, and how to properly prepare for a new MDM program.
- Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.
- Good-Enough Data for Fast-Enough Decisions — Guest Julie Hunt discusses Data Quality and Business Intelligence, including the speed versus quality debate of near-real-time decision making, and the future of predictive analytics.
- Social Media Strategy — Guest Crysta Anderson of IBM Initiate explains social media strategy and content marketing, including three recommended practices: (1) Listen intently, (2) Communicate succinctly, and (3) Have fun.
The Best of the Rest
- Plato’s Data — Data shapes our perception of the real world, but sometimes we forget that data is only a partial reflection of reality. This theme was also discussed on the OCDQ Radio episode Redefining Data Quality with Peter Perera.
- There is No Such Thing as a Root Cause — There are no root causes, only strong correlations. And correlations are strengthened by continuous monitoring. This post received excellent comments, including great banter with Martin Doyle.
- You only get a Return from something you actually Invest in — Invest in doing the hard daily work of continuously improving your data quality and putting into practice your data governance principles, policies, and procedures.
- The Dichotomy Paradox, Data Quality and Zero Defects — Has your data quality practice become motionless by trying to prove that Zero Defects is more than just theoretically possible?
- The Data Quality Wager — Inspired by Gordon Hamilton, my rendering of Pascal’s Wager in a data quality context.
- DQ-View: Talking about Data — DQ-View video discussion about how data professionals should talk about data when invited to participate in business discussions within their organizations.
- The Speed of Decision — Examines the constraints that time puts on data-driven decision making, pondering whether decision speed is more important than data quality and decision quality.
- The Data Cold War — Examines how Google and Facebook have performed the Master Data Management Magic Trick and socialized data (“Information wants to be free!”) in order to capitalize data as a true corporate asset.
- A Farscape Analogy for Data Quality — Ponders whether data is not viewed as an asset because data has so thoroughly pervaded the enterprise that data has become invisible to those who are so dependent upon its quality.
- No Datum is an Island of Serendip — Our organizations need to create collaborative environments that foster serendipitous connections bringing all of our business units and people together around our shared data assets.
Thank You for Reading OCDQ Blog in 2011
In 2011, the Obsessive-Compulsive Data Quality (OCDQ) blog published 112 posts, which received 130,000 total page views, averaging 350 page views and 150 unique visitors a day.
Thank you for reading OCDQ Blog in 2011. Your readership was deeply appreciated.
This Thursday is Thanksgiving Day, which in the United States is a holiday with a long, varied, and debated history. However, the most consistent themes remain family and friends gathering together to share a large meal and express their gratitude.
This is the eleventh entry in my ongoing series for expressing my gratitude to my readers for their commendable comments on my blog posts. Receiving comments is the most rewarding aspect of my blogging experience because not only do comments greatly improve the quality of my blog, comments also help me better appreciate the difference between what I know and what I only think I know. Which is why, although I am truly grateful to all of my readers, I am most grateful to my commenting readers.
“Recently got to listen in on a ‘cooperate or not’ discussion. (Not my clients.) What struck me was that the people advocating cooperation were big-picture people (from architecture and process) while those who just wanted what they wanted were more concerned about their own short-term gains than about system health. No surprise, right?
But what was interesting was that they were clearly looking after their own careers, and not their silos’ interests. I think we who help focus and frame the Stakeholder’s Dilemma situations need to be better prepared to address the individual people involved, and not just the organizational roles they represent.”
“As always, an intriguing post. Especially where you draw a parallel between Data Governance and Knowledge Management (wisdom management?). We sometimes portray data management (current term) as ‘well managed data administration’ (term from the 70s-80s). As for the debate on ‘data’ and ‘information’, I prefer to see everything written, drawn, and/or stored on paper or in digital format as data with various levels of informational value, depending on the amount and quality of metadata surrounding the data item and the accessibility and usefulness (quality) of that item.
For example, 12024561414 is a number with low informational value. I could add metadata, for instance: ‘Phone number’, that makes it potentially known as a phone number. Rather than let you find out whose number it is we could add more information value and add more metadata like: ‘White House Switchboard’. Accessibility could be enhanced by improving formatting like: (1) 202-456-1414.
What I am trying to say with this example is that data items should be placed on a rising scale of informational value rather than be put on steps or firm levels of informational value. So the Information Hierarchy provided by Professor Larson does not work very well for me. It could work only if for all data items the exact information value was determined for every probable context. This model is useful for communication purposes.”
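The commenter’s phone-number example could be sketched as a rising scale of informational value. This is a toy illustration only: the scoring function is an invented stand-in, not a real metric of informational value:

```python
# The same raw datum gains informational value as metadata and formatting
# are layered on. The score is an illustrative stand-in, not a real metric.

datum = "12024561414"

enrichments = [
    {"metadata": None, "formatted": "12024561414"},
    {"metadata": "Phone number", "formatted": "12024561414"},
    {"metadata": "White House Switchboard", "formatted": "(1) 202-456-1414"},
]

def informational_value(item):
    """Toy scoring: each layer of metadata or accessibility adds value."""
    score = 0
    if item["metadata"] is not None:
        score += 1
    if item["formatted"] != datum:  # improved formatting aids accessibility
        score += 1
    return score

values = [informational_value(item) for item in enrichments]
# The items land at increasing points on a scale (0, 1, 2) rather than on
# the fixed levels of an information hierarchy.
```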
“‘erised stra ehru oyt ube cafru oyt on wohsi.’
To all Harry Potter fans this translates to: ‘I show not your face but your heart’s desire.’
It refers to The Mirror of Erised. It does not reflect reality but what you desire. (Erised is Desired spelled backwards.) Often data will cast a reflection of what people want to see.
‘Dumbledore cautions Harry that the mirror gives neither knowledge nor truth and that men have wasted away before it, entranced by what they see.’ How many systems are really Mirrors of Erised?”
“Because the prisoners in the cave are chained and unable to turn their heads to see what goes on behind them, they perceive the shadows as reality. They perceive imperfect reflections of truth and reality.
Bringing the allegory to modern times, this serves as a good reminder that companies MUST embrace data quality for an accurate and REAL view of customers, business initiatives, prospects, and so on. Continuing to view half-truths based on possibly faulty data and information means you are just lost in a dark cave!
I also like the comparison to the Mirror of Erised. One of my favorite movies is the Matrix, in which there are also a lot of parallelisms to Plato’s Cave Allegory. As Morpheus says to Neo: ‘That you are a slave, Neo. Like everyone else you were born into bondage. Into a prison that you cannot taste or see or touch. A prison for your mind.’ Once Neo escapes the Matrix, he discovers that his whole life was based on shadows of the truth.
Plato, Harry Potter, and Morpheus — I’d love to hear a discussion between the three of them in a cave!”
“It is true that data is only a reflection of reality but that is also true of anything that we perceive with our senses. When the prisoners in the cave turn around, what they perceive with their eyes in the visible spectrum is only a very narrow slice of what is actually there. Even the ‘solid’ objects they see, and can indeed touch, are actually composed of 99% empty space.
The questions that need to be asked and answered about the essence of data quality are far less esoteric than many would have us believe. They can be very simple, without being simplistic. Indeed simplicity can be seen as a cornerstone of true data quality. If you cannot identify the underlying simplicity that lies at the heart of data quality you can never achieve it. Simple questions are the most powerful. Questions like, ‘In our world (i.e., the enterprise in question) what is it that we need to know about (for example) a Sale that will enable us to operate successfully and meet all of our goals and objectives?’ If the enterprise cannot answer such simple questions then it is in trouble. Making the questions more complicated will not take the enterprise any closer to where it needs to be. Rather it will completely obscure the goal.
Data quality is rather like a ‘magic trick’ done by a magician. Until you know how it is done, it appears to be an unfathomable mystery. Once you find out that it is merely an illusion, the reality is absolutely simple and, in fact, rather mundane. But perhaps that is why so many practitioners perpetuate the illusion. It is not for self-gain. They just don’t want to tell the world that, when it comes to data quality, there is no Tooth Fairy, no Easter Bunny, and no Santa Claus. It’s sad, but true. Data quality is boringly simple!”
“Actually, I would go substantially further. Data was originally no more than a representation of the real world, and if validation was required, the real world was the ‘authoritative source’ — but that is clearly no longer the case. Data is in fact the new reality!
Data is now used to track everything; if the data is wrong, the real-world item disappears. It may really have been destroyed or it may simply be lost, but it does not matter: if the data does not provide evidence of its existence, then it does not exist. If you doubt this, just think of money: how much you have is not based on any physical object but on data.
By the way the theoretical definition I use for data is as follows:
Datum — a disruption in a continuum.
The practical definition I use for data is as follows:
Data — elements into which information is transformed so that it can be stored or moved.”
“We can see that there’s a trench between those who think adjacent means out of scope and those who think it means opportunity. Great leaders know that good stories make for better governance for an organization that needs to adapt and evolve, but stay true to its mission. Built from, but not about, real facts, good fictions are broadly true without being specifically true, and therefore they carry well to adjacent business processes where their truths can be applied to making improvements.
On the other hand, if it weren’t for nonfiction — accounts of real markets and processes — there would be nothing for the POSSIBLE to be adjacent TO. Managers often have trouble with this because they feel called to manage the facts, and call anything else an airy-fairy waste of time.
So a data governance program needs to assert whether its purpose is to fix the status quo only, or to fix the status quo in order to create agility to move into new areas when needed. Each of these should have its own business case and related budgets and thresholds (tolerances) in the project plan. And it needs to choose its sponsorship and data quality players accordingly.”
“I’ve been working on a definitive solution for the data / information / metadata / attributes / properties knot for a while now and I think I have it figured out.
I read your blog entitled The Semantic Future of MDM and we share the same philosophy even while we differ a bit on the details. Here goes. It’s all information. Good, bad, reliable or not, the argument whether data is information or vice versa is not helpful. The reason data seems different from information is that it has too much ambiguity when it is out of context. Data is like a quantum wave: it has many possibilities, one of which is ‘collapsed’ into reality when you add context. Metadata is not a type of data, any more than attributes, properties or associations are a type of information. These are simply conventions to indicate the role that information is playing in a given circumstance.
Your Michelle Davis example is a good illustration: Without context, that string could be any number of individuals, so I consider it data. Give it a unique identifier and classify it as a digital representation in the class of Person, however, and we have information. If I then have Michelle add attributes to her personal record — like sex, age, etc. — and assuming that these are likewise identified and classed — now Michelle is part of a set, or relation. Note that it is bad practice — and consequently the cause of many information management headaches — to use data instead of information. Ambiguity kills. Now, if I were to use Michelle’s name in a Subject Matter Expert field as proof of the validity of a digital asset, or in the Author field as an attribute, her information does not *become* metadata or an attribute: it is still information. It is merely being used differently.
In other words, in my world while the terms ‘data’ and ‘information’ are classified as concepts, the terms ‘metadata’, ‘attribute’ and ‘property’ are classified as roles to which instances of those concepts (well, one of them anyway) can be put, i.e., they are fit for purpose. This separation of the identity and class of the string from the purpose to which it is being assigned has produced very solid results for me.”
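As an aside, this separation of identity and class from role can be sketched in a few lines of code. The following is purely my own hypothetical illustration (the class and role names are assumptions, not anything from the comment above): the same piece of information is assigned to two different roles without ever becoming a different kind of thing.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Information:
    """A contextualized value: an identifier and a class turn ambiguous data into information."""
    identifier: str   # unique identifier supplying context
    cls: str          # the class, e.g., "Person"
    value: str        # the underlying string, e.g., "Michelle Davis"

@dataclass(frozen=True)
class RoleAssignment:
    """A role the information plays in a given circumstance (e.g., metadata, attribute)."""
    info: Information
    role: str         # e.g., "Author" or "Subject Matter Expert"

# Without the identifier and class, "Michelle Davis" is just data.
michelle = Information(identifier="person:42", cls="Person", value="Michelle Davis")

# The same information playing two different roles:
as_author = RoleAssignment(info=michelle, role="Author")
as_sme = RoleAssignment(info=michelle, role="Subject Matter Expert")

# It is the same information in both cases; only the role differs.
assert as_author.info is as_sme.info
```

The design choice mirrors the comment: ‘metadata’ and ‘attribute’ live on the role assignment, never on the information itself.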
Thank You For Your Comments
Thank you very much for your comments and for sharing your perspectives with our collablogaunity. This entry in the series highlighted commendable comments on OCDQ Blog posts published between July and November of 2011.
Since there have been so many commendable comments, please don’t be offended if one of your comments wasn’t featured.
Please keep on commenting and stay tuned for future entries in the series.
Thank you for reading the Obsessive-Compulsive Data Quality (OCDQ) blog. Your readership is deeply appreciated.
Welcome to the 300th Obsessive-Compulsive Data Quality (OCDQ) blog post!
You might have been expecting a blog post inspired by the movie 300, but since I already did that with Spartan Data Quality, instead I decided to commemorate this milestone with the 10th entry in my ongoing series for expressing my gratitude to my readers for their truly commendable comments on my blog posts.
“This has been one of my pet peeves for a long time. Shared version of truth or the reference version of truth is so much better, friendly and non-dictative (if such a word exists) than single version of truth.
I truly believe that starting a discussion with Single Version of the Truth with business stakeholders is a nonstarter. There will always be a need for a multifaceted view and possibly multiple aspects of the truth.
A very common term/example I have come across is the usage of the term revenue. Unfortunately, there is no single version of revenue across the organization (and for valid reasons). From a Sales Management perspective, they like to look at sales revenue (sales bookings), which is the business on which they are compensated; financial folks want to look at financial revenue, which is the revenue they capture in the books; and marketing possibly wants to look at marketing revenue (sales revenue before the discount), which is the revenue marketing uses to justify their budgets. So if you ever asked a group of people what the revenue of the organization is, you would get three different answers. And these three answers will each be accurate in the context of three different groups.”
“I think this is going to dominate the data management realm in the coming years. We are not only met with drastically increasing volumes of data, but also increasing velocity and variety of data.
The dilemma is between making good decisions and making fast decisions: whether decisions based on business intelligence findings should wait for the quality of the underlying data to be assured, thus risking the decision being made too late. If data quality could always be made optimal by solving issues at the root, we wouldn’t have that dilemma.
The challenge is whether we are able to have optimal data all the time when dealing with extreme data, which is data of great variety moving at high velocity and coming in huge volumes.”
“I definitely agree and think you are burrowing into the real core of what makes or breaks EDM and MDM type initiatives -- it's the people.
Business models, processes, data, and technology all provide fixed forms of enablement or constraint. And where in the past these dynamics have been very compartmentalized throughout a company’s business model and systems architecture, with EDM and MDM involving more integrated functions and shared data, people become more of the x-factor in the equation. This demands data governance as the facilitating process that drives the collaborative, cross-functional, decision-making dynamics needed for successful EDM and MDM. Of course, the dilemma is that in a governance model people can still make bad decisions that inhibit others from working effectively.
So in terms of the people platform and data governance, there needs to be the correct focus on what are the right roles and good decisions made that can enable people to interact effectively.”
“Our organization has taken the Hybrid Approach (starting Bottom-Up) and it works well for two reasons: (1) the worker bee rock stars are all aligned and ready to hit the ground running, and (2) the ‘Top’ can sit back and let the ‘aligned’ worker bees get on with it.
Of course, this approach is sometimes (painfully) slow, but with the ground-level rock stars already aligned, there is less resistance implementing the policies, and the Top’s heavy hand is needed much less frequently, but I voted for Hybrid Approach (starting Top-Down) because I have less than stellar patience for the long and scenic route.”
“Too many companies get paralyzed thinking about how to do this and implement it. (Along with the overwhelmed feeling that it is too much time/effort/money to fix it.) But I think your poll needs another option to vote on, specifically: ‘Whatever works for the company/culture/organization’ since not all solutions will work for every organization.
In some organizations, which are highly structured, rigid, and controlled, there wouldn’t be the freedom at the grass-roots level to start something like this, and it might be frowned upon by upper-level management. In other organizations that foster grass-roots efforts, it could work.
However, no matter which way you can get it started and working, you need to have buy-in and commitment at all levels to keep it going and make it effective.”
“Deming puts a lot of energy into his arguments in 'Out of the Crisis' that the short-term mindset of the executives, and by extension the directors, is a large part of the problem.
Jackanapes, a lovely under-used term, might be a bit strong when the executives are really just doing what they are paid for. In North America, we get what the directors measure! In fact, one quandary is that a proactive executive who invests in data quality is building the long-term value of their company, but is also setting it up to be acquired by somebody who recognizes that the ‘under the radar’ improvements are making the prize valuable.
Deming says on p.100: 'Fear of unfriendly takeover may be the single most important obstacle to constancy of purpose. There is also, besides the unfriendly takeover, the equally devastating leveraged buyout. Either way, the conqueror demands dividends, with vicious consequences on the vanquished.'”
“It always makes me smile when people attempt to put a percentage value on their data quality as though it were something as tangible and measurable as the fat content of your milk.
In order to make such a measurement, one would need to know where 100% of the defects lie. If they knew that, they would be able to resolve the defects and achieve 100% quality. In reality, you cannot and do not know where each defect is or how many there are.
Even though tools such as profilers will tell you, for example, that 95% of your US address records have a valid state, there is still no way to measure how many of these valid states are applicable to the real-world entity on the ground. Mr. Smith may be registered to an existing and valid address in the database, but if he moved last week, there’s a data quality issue that won’t be discovered until one attempts to contact him.
The same applies when people say they have removed 95% of duplicates from their data. If they can measure it then they know where the other 5% of duplicates are and they can remove them.
But back to the point: you may not achieve 100% quality. In fact, we know you never will. But aiming for that target means that you're aiming in the right direction. As long as your goal is to get close to perfection and not to achieve it, I don't see the problem.”
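The gap this commenter describes, between what a profiler can measure and what is actually true in the world, can be made concrete with a small sketch. This is my own hypothetical illustration (the field names, state list, and records are all invented for the example): a profiling check computes a tidy validity percentage, but nothing in the data reveals whether a syntactically valid value is still accurate.

```python
# Hypothetical profiling check: share of records with a syntactically valid US state code.
VALID_STATES = {"CA", "NY", "TX", "IL"}  # deliberately abbreviated list, for illustration only

records = [
    {"name": "Smith", "state": "CA"},
    {"name": "Jones", "state": "NY"},
    {"name": "Brown", "state": "ZZ"},   # invalid code: a profiler CAN catch this
    {"name": "Davis", "state": "TX"},   # valid code, but suppose Davis moved last week;
]                                       # no profiler can see that from the data alone

valid = sum(1 for r in records if r["state"] in VALID_STATES)
validity_pct = 100 * valid / len(records)
print(f"State-code validity: {validity_pct:.0f}%")  # measures validity, says nothing about accuracy
```

The point is the comment’s point: the 75% here is a measure of validity, not of real-world accuracy, and the two can diverge invisibly.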
“A curious question to my Rebellious friend OCDQ-Wan, while data governance agility is a wonderful goal, and maybe a great place to start your efforts, is it sustainable?
Your agile Rebellion is like any start-up: decisions must be made quickly, you must do a lot with limited resources, everyone plays multiple roles willingly, and your objective is very targeted and specific. For example, to fire a photon torpedo into a small thermal exhaust port - only 2 meters wide - connected directly to the main reactor of the Death Star. Let's say you 'win' that market objective. What next?
The Rebellion defeats the Galactic Empire, leaving a market leadership vacuum. The Rebellion begins to set up a new form of government to serve all (aka grow existing market and expand into new markets) and must grow larger, with more layers of management, in order to scale. (aka enterprise data governance supporting all LOBs, geographies, and business functions).
At some point this Rebellion becomes a new Bureaucracy - maybe with a different name and legacy, but with similar results. Don't forget, the Galactic Empire started as a mini-rebellion itself spearheaded by the agile Palpatine!”
You Are Awesome
Thank you very much for sharing your perspectives with our collablogaunity. This entry in the series highlighted the commendable comments received on OCDQ Blog posts published between January and June of 2011.
Please keep on commenting and stay tuned for future entries in the series.
By the way, even if you have never posted a comment on my blog, you are still awesome — feel free to tell everyone I said so.
Thank you for reading the Obsessive-Compulsive Data Quality (OCDQ) blog. Your readership is deeply appreciated.
Effectively using social media within a business context is more art than science, which is why properly planning and executing a social media strategy is essential for organizations as well as individual professionals.
On this episode, I discuss social media strategy and content marketing with Crysta Anderson, a Social Media Strategist for IBM, who manages IBM InfoSphere’s social media presence, including the Mastering Data Management blog, the @IBMInitiate and @IBM_InfoSphere Twitter accounts, LinkedIn and other platforms.
Crysta Anderson also serves as a social media subject matter expert for IBM’s Information Management division.
Under Crysta’s execution, IBM Initiate has received numerous social media awards, including “Best Corporate Blog” from the Chicago Business Marketing Association, Marketing Sherpa’s 2010 Viral and Social Marketing Hall of Fame, and BtoB Magazine’s list of “Most Successful Online Social Networking Initiatives.”
Crysta graduated from the University of Chicago with a BA in Political Science and is currently pursuing a Master’s in Integrated Marketing Communications at Northwestern University’s Medill School. Learn more about Crysta Anderson on LinkedIn.
Social Media Strategy
Additional listening options:
If you are having trouble viewing this video, then you can watch it on Vimeo by clicking on this link: OCDQ on Vimeo
Thank you for reading my many musings on data quality and its related disciplines, and for tolerating my various references, from Adventures in Data Profiling to Social Karma, Shakespeare to Dr. Seuss, The Pirates of Penzance to The Rolling Stones, from The Three Musketeers to The Three Tweets, Dante Alighieri to Dumb and Dumber, Jack Bauer to Captain Jack Sparrow, Finding Data Quality to Discovering What Data Quality Technology Wants, and from Schrödinger’s Cat to the Buttered Cat.
Thank you for reading Obsessive-Compulsive Data Quality for the last two years. Your readership is deeply appreciated.
Today is February 14 — Valentine’s Day — the annual celebration of enduring romance, where true love is publicly judged according to your willingness to purchase chocolate, roses, and extremely expensive jewelry, and privately judged in ways that nobody (and please, trust me when I say nobody) wants to see you post on Twitter, Facebook, Flickr, YouTube, or your blog.
This is the ninth entry in my ongoing series for expressing my true love to my readers for their truly commendable comments on my blog posts. Receiving comments is the most rewarding aspect of my blogging experience. Although I love all of my readers, I love my commenting readers most of all.
“I sometimes compare our profession with that of dentists. Dentists are also believed to advocate for good habits around your teeth, but are making money when these good habits aren’t followed.
So when 4 out of 5 dentists recommend a certain toothpaste, it is probably no good :-)
Seriously though, I take the amount of money spent on data quality tools as a sign that organizations believe there are issues best solved with technology. Of course these tools aren’t magic.
Data quality tools only solve a certain part of your data and information related challenges. On the other hand, the few problems they do solve may be solved very well and cannot be solved by any other line of products or in any practical way by humans in any quantity or quality.”
“I think that the expectations of clients from their data quality vendors have grown tremendously over the past few years. This is, of course, in line with most everything in the Web 2.0 cloud world that has become point-and-click, on-demand response.
In the olden days of 2002, I remember clients asking for vendors to adjust data only to the point where dashboard statistics could be presented on a clean Java user interface. I have noticed that some clients today want the software to not just run customizable reports, but to extract any form of data from any type of database, to perform advanced ETL and calculations with minimal user effort, and to be easy to use. It’s almost like telling your dentist to fix your crooked teeth with no anesthesia, no braces, no pain, during a single office visit.
Of course, the reality today does not match the expectation, but data quality vendors and architects may need to step up their game to remain cutting edge.”
“This immediately reminded me of the practice of Kaizen in the manufacturing industry. The idea being that continued small improvements yield large improvements in productivity when compounded.
For years now, many of the thought leaders have preached that projects from business intelligence to data quality to MDM to data governance, and so on, start small and that by starting small and focused, they will yield larger benefits when all of the small projects are compounded.
But the one thing that I have not seen it tied back to is the successes that were found in the leaders of the various industries that have adopted the Kaizen philosophy.
Data quality practitioners need to recognize that their success lies in the fundamentals of Kaizen: quality, effort, participation, willingness to change, and communication. The fundamentals put people and process before technology. In other words, technology may help eliminate the problem, but it is the people and process that allow that elimination to occur.”
“Subtle, but immensely important, because implementing a coordinated series of small, easily trained habits can add up to a comprehensive data quality program.
In my first data quality role, we identified about ten core habits that everyone on the team should adopt, and the results were astounding. No need for big programs, expensive technology, change management, and endless communication: just simple, achievable habits that, importantly, were focused on the workers.
To make habits work they need the WIIFM (What’s In It For Me) factor.”
“Interesting concept about using data for the wrong purpose. I think that data, if it is the ‘true’ data, can be used for any business decision as long as it is interpreted the right way.
One problem is that data may have a margin of error associated with it and this must be understood in order to properly use it to make decisions. Another issue is that the underlying definitions may be different.
For example, an organization may use the term ‘customer’ when it means different things. The marketing department may have a list of ‘customers’ that includes leads and prospects, but the operational department may only call them ‘customers’ when they are generating revenue.
Each department’s data and interpretation of it is correct for their own purpose, but you cannot mix the data or use it in the ‘other’ department to make decisions.
If all the data is correct, the definitions and the rules around capturing it are fully understood, then you should be able to use it to make any business decision.
But when it gets misinterpreted and twisted to suit some business decision that it may not be suited for, then you are crossing over to the Dark Side.”
“My continuous struggle is the chaos of data electronically submitted by many, many sources, with different levels of quality and many different formats, while maintaining the history of classification, correction, language translation, where-used, and a multitude of other ‘data transactions’ that translate this data into usable information for multi-business use and reporting. This is my definition of Master Data Management.
I chuckled at the description of the ‘rigid business processes’ and I added ‘software products’ to the concept, since the software industry must understand the fluidity of the change of data to address the challenges of Master Data Management, Data Governance, and Data Cleansing.”
“I read: ‘Collaboration is the key to business success. This essential collaboration has to be based on people, and not on rigid business processes . . .’
And I think: Collaboration is the key to any success. This must have been true since the time man hunted the Mammoth. When collaborating, it went a lot better to catch the bugger.
And I agree that the collaboration has to be based on people, and not on rigid business processes. That is as opposed to based on rigid people, and not on flexible business processes. All the truths are in the adjectives.
I don’t mean to bash, Jim, I think there is a lot of truth here and you point to the exact relationship between collaboration as a requirement and Data Governance as a prerequisite. It’s just me getting a little tired of Gartner saying things of the sort that ‘in order to achieve success, people should work together. . .’
I have a word in mind that starts with ‘du’ and ends with ‘h’ :-)”
“Quality is a result of people’s work, their responsibility, improvement initiatives, etc. I think it is more about the company culture and its possible regulation by government. Setting up a ‘new’ (information quality) culture is the most complicated part, because of its influence on every single employee. It is about a well-balanced information value chain and quality processes at every ‘gemba’ where information is created.
Confidence in the information is necessary because we make many decisions based on it. Sometimes we do better or worse than before. We should store/use as much accurate information as possible.
All stewardship or governance frameworks should help companies change their culture, define quality measures (the most important being accuracy), establish a cost of poor quality system (allowing them to monitor the impacts of poor quality information), and cover other necessary things. Only then would we be able to trust corporate information and make decisions.
A small remark on technology only. Data quality technology is a good tool for helping you analyze the ‘technical’ quality of data – patterns, business rules, frequencies, NULL or NOT NULL values, etc. Many technology companies narrow information quality into an area of massive cleansing (scrap/rework) activities. They can correct some errors, but in general this leads to higher validity, not information accuracy. If cleansing is implemented as a regular part of the ETL processes, then the company institutionalizes massive correction, which is only a cost-adding activity, and I am sure it is not the right place to change data contents – we increase data inconsistency within information systems.
Every quality management system (for example TQM, TIQM, Six Sigma, Kaizen) focuses on improvement at the place where errors occur – gemba. All those systems require: leaders, measures, trained people, and simply – adequate culture.
Technology can be a good assistant (helper), but a bad master.”
“In a sense, I would say that the current definitions and approaches of/towards data quality might very well not be able to avoid the Dustbin of History.
In the world of phones and PDAs, quality of information about environments, current fashions/trends, locations, and current moods of the customer might be more important than a single view of customer or de-duped customers. Given the pace at which consumers’ habits are changing, it might be the quality of information about the environment in which the transaction is likely to happen that is more important than the quality of the post-transaction data itself . . . Just a thought.”
“So true, so true, so true.
I see this a lot. Great projects or initiatives start off, collaboration is expected across organizations, and there is initial interest, with big meetings and events to jump-start the Calumet. But now what? When the events no longer happen, the funding to fly everyone to the same city to bond, share, and explore together dries up.
Here is what we have seen work. After the initial kick off, have small events, focus groups, and let the Calumet grow organically. Sometimes after a big powwow, folks assume others are taking care of the communication / collaboration, but with a small venue, it slowly grows.
Success breeds success and folks want to be part of that, so when the focus group achieves, the growth happens. This cycle is then repeated, hopefully.
While it is important for folks to come together at the kick off to see the big picture, it is the small rolling waves of success that will pick up momentum, and people will want to join the effort to collaborate versus waiting for others to pick up the ball and run.
Thanks for posting, good topic. Now where is my small focus group? :-)”
You Are Awesome
Thank you very much for sharing your perspectives with our collablogaunity. This entry in the series highlighted the commendable comments received on OCDQ Blog posts published in October, November, and December of 2010.
Please keep on commenting and stay tuned for future entries in the series.
By the way, even if you have never posted a comment on my blog, you are still awesome — feel free to tell everyone I said so.
The question that I get asked most frequently about blogging is:
“Is there a simple formula for writing effective blog posts?”
And the only honest answer is:
“NO! There is NOT a simple formula for writing effective blog posts.”
Well, okay . . . according to conventional blogging wisdom . . . maybe there is one simple formula:
This slide is from my social media presentation, which you can download by clicking on this link: Social Karma Presentation
The Two U’s
The first aspect of conventional blogging wisdom is to follow the Two U’s:
- Useful – Focus on your reader and provide them assistance with a specific problem
- Unique – Capture your reader’s attention and share your perspective in your own voice
Blogging is all about you. No, not you meaning me, the blogger — you meaning you, the reader.
To be useful, blogging has to be all about the reader. If you write only for yourself, then you will also be your only reader.
Useful blog posts often provide “infotainment” — a combination of information and entertainment — that, when it’s done well, can turn readers into raving fans. Just don’t forget—your blog content has to be informative and entertaining to your readers.
One important aspect of being unique is writing effective titles. Most potential readers scan titles to determine if they will click and read more. There is a delicate balance between effective titles and “baiting” – which will only alienate potential readers.
If you write a compelling title that makes your readers click through to an interesting post, then “You Rock!” However, if you write a “Shock and Awe” title followed by “Aw Shucks” content, then “You Suck!”
Your blog content also has to be unique—your topic, position, voice, or a combination of all three.
Consider the following when striving to write unique blog posts:
- The easiest way to produce unique content is to let your blogging style reflect your personality
- Don’t be afraid to express your opinion—even on subjects where it seems like “everything has already been said”
- Your opinion is unique—because it is your opinion
- An opinion—as long as it is respectfully given—is never wrong
- Consistency in both style and message is important; however, it’s okay to vary your style and/or change your opinion
The Three C’s
The second aspect of conventional blogging wisdom is to follow the Three C’s:
- Clear – Get to the point and stay on point
- Concise – No longer than absolutely necessary
- Consumable – Formatted to be easily read on a computer screen
Clear blog posts typically have a single theme or one primary topic to communicate. Don’t run off on tangents, especially ones not related to the point you are trying to make. If you have several legitimate sub-topics to cover, then consider creating a series.
Concise doesn’t necessarily mean “write really short blog posts.” There is no specific word count to target. Being concise simply means taking out anything that doesn’t need to be included. Editing is the hardest part of writing, but also the most important.
Consumable content is essential when people are reading on a computer screen.
Densely packed text attacks the eyes, which doesn’t encourage anyone to keep reading.
Consumable blog posts effectively use techniques such as the following:
- Providing an introduction and/or a conclusion
- Using section headings (in a larger size or different font or both)
- Varying the lengths of both sentences and paragraphs
- Highlighting key words or phrases using bold or italics—but don’t underline—people will think it’s a link and click on it
- Making or summarizing keys points in a short sentence or a short paragraph
- Making or summarizing key points using numbered or bulleted lists
As a general rule, the longer (although still both clear and concise) the blog post, the more consumable you need to make it.
If writing is not your thing, and you’re podcasting or video blogging or using some combination of all three (and that’s another way to be unique), I still think the conventional blogging wisdom applies. Of course, you are free to ignore it, since blogging is definitely more art than science.
However, I recommend that you first learn and practice the conventional blogging wisdom.
After all, it’s always more fun to break the rules when you actually know what the rules are.
If one of your New Year’s Resolutions is to start a blog, please be forewarned that the blogosphere has a real zombie problem.
No, not that kind of zombie.
“Zombie” is a slang term used to describe a blog that has stopped publishing new posts. In other words, the blog has joined the Blogosphere of the Living Dead, which is comprised of blogs that still have a valid URL, but desperately crave new “Posts!”
It’s Not Personal—Zombies are Professional
If you’re considering starting a personal blog (especially one about “real zombies”), then please stop reading—and start blogging.
However, if you’re considering starting a professional blog, then please continue reading. By a “professional blog” I do not mean a blog that makes money. I simply mean a blog that’s part of the social media strategy for your organization or a blog that helps advance your professional career—which, yes, may also directly or (far more likely, if at all) indirectly make you money.
If you are seriously considering starting a professional blog, before you do anything else, complete the 20-10-5 plan.
The 20-10-5 Plan
- Brainstorm 20 high level ideas for blog posts
- Write 10 rough drafts based on those ideas
- Finish 5 ready-to-publish posts from those drafts
If you are unable to complete this simple plan, then seriously reconsider starting a professional blog.
Please Note: I will add the caveat that if writing is not your thing, and you’re planning on podcasting or video blogging instead, I still adamantly believe you must complete the 20-10-5 plan. In essence, the plan is simply a challenge to see if you can create five pieces of ready-to-publish content—BEFORE you launch your professional blog, since IMHO—if you can’t, then don’t.
Recommended Next Steps
If you completed the 20-10-5 plan, then after you launch your blog, consider the following recommendations:
- Do not post more than once a week
- Maintain an editorial calendar and schedule your future posts
- Finish more ready-to-publish posts (you’re good until Week 6 because of the 20-10-5 plan)
Yes, you’ll be tempted to start posting more than once a week. Yes, you’ll be eager to share your brilliance with the blogosphere.
However, just like many new things, blogging is really fun—when it’s new.
So let’s run the numbers:
- Posting once a week = 52 blog posts a year
- Posting twice a week = 104 blog posts a year
- Posting five times a week (basically once every weekday) = 260 blog posts a year
I am not trying to harsh your mellow. I am simply saying that you need to pace yourself—especially at the beginning.
I am not a Zombie—or a Social Media Expert
I am not a “social media expert.” In fact, until late 2008, I wasn’t even interested enough to ask people what they meant when I heard them talking about “social media.” I started blogging, tweeting, and using other social media in early 2009.
Do I practice what I preach? Check my archives.
My blog was started in March 2009. I published 5-8 posts per month (1-2 posts per week) for each of the first five months, and then I gradually increased my posting frequency. Now, almost two years later, I have published 236 posts on this blog, which is an overall average of 10 posts per month (2-3 posts per week), without ever posting fewer than 5 times in one month.
So if you do decide to become a blogger, please don’t become a zombie in 2011—wait until the Zombie Apocalypse of 2012 :-)
This Thursday is Thanksgiving Day, which is a United States holiday with a long and varied history. The most consistent themes remain family and friends gathering together to share a large meal and express their gratitude.
This is the eighth entry in my ongoing series for expressing my gratitude to my readers for their truly commendable comments on my blog posts. Receiving comments is the most rewarding aspect of my blogging experience. Although I am truly grateful to all of my readers, I am most grateful to my commenting readers.
“Being a lover of both music and data, it struck all the right notes!
I think the analogy is a very good one—when I think about data as music, I think about a company’s business intelligence architecture as being a bit like a very good concert hall, stage, and instruments. All very lovely for listening to music—but without the score itself (the data), there is nothing to play.
And while certainly a real live concert hall is fantastic for enjoying Bach, I’m enjoying some Bach right now on my laptop—and the MUSIC is really the key.
Companies very often focus on building fantastic concert halls (made with all the best and biggest data warehouse appliances, ETL servers, web servers, visualization tools, portals, etc.) but forget that the point was to make a decision—and to base it on data from the real world. Focusing on the quality of your data, and on the decision at hand, can often let you make wonderful music—and if your budget or schedule doesn’t allow for a concert hall, you might be able to get there regardless.”
“I used to get incredibly frustrated with the data denial aspect of our profession. Having delivered countless data quality assessments, I’ve never found an organization that did not have pockets of extremely poor data quality, but as you say, at the outset, no-one wants to believe this.
Like you, I’ve seen the natural defense mechanisms. Some managers do fear the fallout, and I’ve even had quite senior directors bury our research and quickly cut any further activity when issues have been discovered; fortunately, that was an isolated case.
In the majority of cases, though, I think that many senior figures are genuinely shocked when they see their data quality assessments for the first time. I think the big problem is that because so many of the scrap and rework processes and people common to every organization are institutionalized, the majority of issues are actually hidden.
This is one of the issues I have with the big shock announcements we often see in conference presentations (I’m as guilty as hell of these, so call me a hypocrite) where one single error wipes millions off a share price or sends a spacecraft hurtling into Mars.
Most managers don’t experience this cataclysm, so it’s hard for them to relate to: it implies their data needs to be perfect, and since they believe that’s unattainable, they lose interest.
Far better to use anecdotes like the one cited in this blog to demonstrate how simple improvements can change lives and the bottom line in a limited time span.”
“Yes, quality is in the eye of the beholder. Data quality metrics must be calculated within the context of a data consumer. This context is missing in most software tools on the market.
Another important metric is what I call the Materiality Metric.
In your example, 50% of customer data is inaccurate. It’d be helpful if we knew which 50%. Are they the customers that generate the most revenue and profits, or are they dormant customers? Are they test records that were never purged from the system? We can calculate the materiality metric by aggregating a relevant business metric for those bad records.
For example, 85% of the year-to-date revenue is associated with those 50% bad customer records.
Now we know this is serious!”
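The materiality calculation described in the comment above can be sketched in a few lines. This is a minimal illustration, not anything from the original comment: the record fields, the revenue figures, and the `address_valid` flag are all invented for the example.

```python
# Materiality metric sketch: instead of just counting bad records,
# weight them by a business metric (here, year-to-date revenue).

def materiality(records, is_bad, metric="ytd_revenue"):
    """Share of a business metric tied to records flagged as bad."""
    total = sum(r[metric] for r in records)
    bad = sum(r[metric] for r in records if is_bad(r))
    return bad / total if total else 0.0

# Hypothetical customer records with a simple quality flag.
customers = [
    {"id": 1, "ytd_revenue": 500_000, "address_valid": False},
    {"id": 2, "ytd_revenue": 350_000, "address_valid": False},
    {"id": 3, "ytd_revenue": 100_000, "address_valid": True},
    {"id": 4, "ytd_revenue":  50_000, "address_valid": True},
]

bad_rate = sum(not c["address_valid"] for c in customers) / len(customers)
revenue_at_risk = materiality(customers, lambda c: not c["address_valid"])

print(f"{bad_rate:.0%} of records are bad")        # 50% of records are bad
print(f"{revenue_at_risk:.0%} of revenue at risk") # 85% of revenue at risk
```

Half the records are bad, but they carry 85% of the revenue, which is exactly the point of the metric: the simple error rate understates (or sometimes overstates) how much the business should care.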
“I am constantly amazed at the number of folks I meet who are paralyzed about advanced analytics, saying that ‘we have to fix/clean/integrate all our data before we can do that.’
They don’t know if the data would even be relevant, haven’t considered getting the data from an external source, and haven’t checked to see if the analytic techniques being considered could handle the bad or incomplete data automatically! Lots of the techniques used in data mining were invented when data was hard to come by and very ‘dirty’, so they are actually pretty good at coping. Unless someone thinks about the decision they want to improve, and the analytics they will need to do so, I don’t see how they can say their data is too dirty or too inconsistent to be used.”
“Early in my career, I answered a typical job interview question ‘What are your strengths?’ with:
‘I can bring Business and IT together to deliver results.’
My interviewer wryly poo-poo’d my answer with ‘Business and IT work together well already,’ insinuating that such barriers may have existed in the past, but were now long gone. I didn’t get that particular job, but in the years since I have seen this barrier in action (I can attest that my interviewer was wrong).
What is required for Business Intelligence success is to have smart business people and smart IT people working together collaboratively. Too many times one side or the other says ‘that’s not my job’ and enormous potential is left unrealized.”
“It amazes me (ok, not really... it makes me cynical and want to rant...) how often Business and IT SAY they are collaborating, but it’s obvious they have varying views and perspectives on what collaboration is and what the expected outcomes should be. Business may think collaboration means working together for a solution, while IT may think it means IT does the dirty work so Business doesn’t have to.
Either way, why don’t they just start the whole process by having an (honest and open) chat about expectations, one that INCLUDES what collaboration means and how they will work together.
And hopefully, (here’s where I start to rant because OMG it’s Collaboration 101) that includes agreement not to use language such as BUSINESS and IT, but rather start to use language like WE.”
“Just a couple of days ago I had this conversation about the curse of IT in general:
When it works no-one notices or gives credit; it’s only when it’s broken we hear about it.
A typical example is government IT over here in the UK. Some projects have worked well; others have been spectacular failures. Guess which we hear about? We review failure mercilessly but sometimes forget to do the same with success so we can document and repeat the good stuff too!
I find the best case studies are the balanced ones that say: this is what we wanted to do, this is how we did it, these are the benefits. Plus this is what I’d do differently next time (lessons learned).
Maybe in those lessons learned we should also make a big effort to document the positive learnings and not just take these for granted. Yes these do come out in ‘best practices’ but again, best practices never get the profile of disaster stories...
I wonder if much of the gloom is self-fulfilling almost, and therefore quite unhealthy. So we say it’s difficult, the failure rate is high, etc. – commonly known as covering your butt. Then when something goes wrong you can point back to the low expectations you created in the first place.
But maybe, the fact we have low expectations means we don’t go in with the right attitude?
The self-defeating outcome is that many large organizations are fearful of getting to grips with their data problems. So lots of projects we should be doing to improve things are put on hold because of the perceived risk, disruption, and cost – and things then just get worse, making the problem harder to resolve.
Data quality professionals surely don’t want to be seen as, in effect, undertakers to the doomed project: necessary, yes, but surrounded by the unmistakable smell of death that makes others uncomfortable.
Sure the nature of your work is often to focus on the broken, but quite apart from anything else, isn’t it always better to be cheerful?”
“They say that sport coaches never teach the negative, or to double the double negative, they never say ‘don’t do that.’ I read somewhere, maybe Daniel Siegel’s stuff, that when the human brain processes the statement ‘don’t do that’ it drops the ‘don’t,’ which leaves it thinking ‘do that.’
Data quality is a complex and multi-splendiforous area with many variables intermingled, but our task as Data Quality Evangelists would be more pleasant if we were helping people rise to the level of the positive expectations, rather than our being codependent in their sinking to the level of the negative expectation.”
DQ-Tip: “There is no such thing as data accuracy...” sparked an excellent debate between Graham Rhind and Peter Benson, the Project Leader of ISO 8000, the international standard for data quality. Their debate included the differences and interdependencies that exist between data and information, as well as between data quality and information quality.
Thanks for giving your comments
Thank you very much for giving your comments and sharing your perspectives with our collablogaunity.
This entry in the series highlighted commendable comments on OCDQ Blog posts published in August and September of 2010.
Please keep on commenting and stay tuned for future entries in the series.
Blogging has made the digital version of my world much smaller and allowed my writing to reach a much larger audience than would otherwise be possible. Although I am truly grateful to all of my readers, I am most grateful to my commenting readers.
Since its inception over a year ago, this has been an ongoing series for expressing my gratitude to my readers for their truly commendable comments, which greatly improve the quality of my blog posts.
“To be literate, a person of letters, means one must occasionally write letters by hand.
The connection between brain and hand cannot be overlooked as a key component of learning. It is by the very fact that it is labor intensive and requires thought that we are able to learn concepts and carry thought into action.
One key feels the same as another, and if the keyboard is changed, then even the positioning of fingers while typing will have no significance. My bread and butter is computers, but all in the name of communication, understanding, and the resolution of problems plaguing people and organizations.
And yet, I will never be too far into a computer to neglect to write a note or letter to a loved one. While I don’t journal, and some say that writing a blog is like journaling online, I love mixing and matching, even searching for the perfect word or turn of phrase.
Although a certain number of simians may recreate something legible on machines, it will not be Shakespeare, or literature of the level that inspires and moves.
The pen is mightier than the sword—from as earthshaking as the downfall of nations to as simple as my having gotten jobs after handwriting simple thank you notes.
Unfortunately, it may go the way of the sword and be kept in glass cases instead of employed in its noblest and most dangerous task—wielded by masters of mind and purpose.”
“Politics and self-interest are rarely addressed factors in principles of data governance, yet are such a strong component during some high-profile implementations that data governance truly does need to be treated as an art rather than a science.
Data teams should have principles and policies to follow, but these can be easily overshadowed by decisions made from a few executives promoting their own agendas. Somehow, built into the existing theories of data governance, we should consider how to handle these political influences using some measure of accountability that all team members—stakeholders included—need to have.”
“Data Governance enforcement is a combination of straightforward and logical activities that when implemented correctly will help you achieve compliance, and ensure the success of your program. I would emphasize that they ALL (Documentation, Communication, Metrics, Remediation, Refinement) need to be part of your overall program, as doing one or a few without the others will lead to increased risk of failure.
My favorite? Tough to choose. The metrics are key, as are the documentation, remediation and refinement. But to me they all depend upon good communications. If you don’t communicate your policies, metrics, risks, issues, challenges, work underway, etc., you will fail! I have seen instances where policies have been established, yet they weren’t followed for the simple fact that people were unaware they existed.”
“This sparks an episode I had a few years ago with an engineering services company in the UK.
I ran a management workshop showing a lot of the issues we had uncovered. As we were walking through a dashboard of all the findings, one of the directors shouted out that the 20% completeness stat for a piece of engineering installation data was wrong; she had received no reports of missing data.
I drilled into the raw data and sure enough we found that 80% of the data was incomplete.
She was furious and demanded that site visits be carried out and engineers should be incentivized (i.e., punished!) in order to maintain this information.
What was interesting is that the data went back many years so I posed the question:
‘Has your decision-making ability been impeded by this lack of information?’
What followed was a lengthy debate, but the outcome was NO, it had little effect on operations or strategic decision making.
The company could have invested considerable amounts of time and money in maintaining this information but the benefits would have been marginal.
One of the most important dimensions to add to any data quality assessment is USEFULNESS; I use it as a weight to reduce the impact of the other dimensions. To extend your debate further, data may be hopelessly inaccurate and incomplete, but if it’s of no use, then let’s take it out of the equation.”
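The USEFULNESS weighting described in the comment above can be sketched as follows. This is a hypothetical illustration of the idea, not the commenter’s actual method: the data sets, dimension scores, and weights are all invented.

```python
# Sketch: scale the data quality "gap" by a USEFULNESS weight, so that
# rarely used data sets don't dominate the remediation backlog.

def issue_priority(dimension_scores, usefulness):
    """Quality gap (1 - average dimension score) scaled by usefulness (0..1)."""
    gap = 1 - sum(dimension_scores.values()) / len(dimension_scores)
    return gap * usefulness

# Engineering installation data: badly incomplete, but rarely used.
install = {"completeness": 0.20, "accuracy": 0.60}
# Customer billing data: similar raw scores, but used every day.
billing = {"completeness": 0.25, "accuracy": 0.55}

print(round(issue_priority(install, usefulness=0.1), 2))  # low priority
print(round(issue_priority(billing, usefulness=0.9), 2))  # fix this first
```

Both data sets have roughly the same raw quality gap, but weighting by usefulness puts the heavily used billing data far ahead in the queue, which mirrors the anecdote: the 80% incomplete installation data simply wasn’t worth fixing.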
“Data Quality dimensions that track a data set’s significance to the Business, such as Relevance or Impact, could help keep the care and feeding efforts for each data set in proportion to its importance to the Business.
I think you are suggesting that the Business’s strategic/tactical objectives should be used to self-assess and even prune data quality management efforts, in order to keep them aligned with the Business rather than letting them have an independent life of their own.
I wonder if all business activities could use a self-assessment metric built into their processing so that they can realign to reality. In the low levels of biology this is sometimes referred to as a ‘suicide gene’ that lets a cell decide when it is no longer needed. Suicide is such a strong term though; maybe it could be called an ‘annual review to realign efforts to organizational goals’ gene.”
“A particularly nasty problem in data management is that data created for one purpose gets used for another. Often, the people who use the data don't have a choice. It’s the only data available!
And when the same piece of data is used for multiple purposes, it gets even tougher. As you said, completeness and accuracy have a context: the same piece of data could be good for one purpose and useless for another.
A major goal of data governance is to define and enforce policies that align how data is created with how data is used. And if conflicts arise—and they surely will—there’s a mechanism for resolving them.”
“I usually separate those out by saying that validity is a binary measurement of whether or not a value is correct or incorrect within a certain context, whereas accuracy is a measurement of the valid value’s ‘correctness’ within the context of the other data surrounding it and/or the processes operating upon it.
So, validity answers the question: ‘Is ZW a valid country code?’ and the answer would (currently) be ‘Yes, on the African continent, or perhaps on planet Earth.’
Accuracy answers the question: ‘Is it 2.5 degrees Celsius today in Redding, California?’
To which the answer would measure several things: is 2.5 degrees Celsius a valid temperature for Redding, CA? (yes it is), is it probable this time of year? (no, it has never been nearly that cold on this date), and are there any weather anomalies noted that might recommend that 2.5C is valid for Redding today? (no, there are not). So even though 2.5C is a valid air temperature, Redding, CA is a valid city and state combination, and 2.5C is valid for Redding in some parts of the year, that temperature has never been seen in Redding on July 15th and therefore it is probably not accurate.
Another ‘accuracy’ use case is one I’ve run into before: Is it accurate that Customer A purchased $15,049.00 in <product> on order 123 on <this date>?
To answer this, you may look at the average order size for this product (in quantity and overall price), the average order sizes from Customer A (in quantity ordered and monetary value), any promotions that offer such pricing deals, etc.
Given that the normal credit card charges for this customer are in the $50.00 to $150.00 range, and that the products ordered are on average $10.00 to $30.00, and that even the best customers normally do not order more than $200, and that there has never been a single order from this type of customer for this amount, then it is highly unlikely that a purchase of this size is accurate.”
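The validity-versus-accuracy distinction drawn in the comment above can be sketched in code. This is a simplified illustration of the idea, not the commenter’s implementation: the country-code domain and the historical temperature range are invented examples.

```python
# Sketch of validity vs. accuracy as described in the comment above.
# Validity is a binary, context-free domain check; accuracy asks whether
# a valid value is plausible given its surrounding context.

VALID_COUNTRY_CODES = {"US", "GB", "FR", "ZW"}  # illustrative subset

def is_valid_country(code):
    """Validity: is the value inside its domain of allowed values?"""
    return code in VALID_COUNTRY_CODES

# Accuracy needs context, such as historically observed temperatures
# for a given city and date (values invented for illustration).
HISTORICAL_RANGE_C = {("Redding, CA", "07-15"): (15.0, 46.0)}

def is_plausible_temp(city, month_day, temp_c):
    """Accuracy proxy: is the value inside the historically observed range?"""
    low, high = HISTORICAL_RANGE_C[(city, month_day)]
    return low <= temp_c <= high

print(is_valid_country("ZW"))                          # True
print(is_plausible_temp("Redding, CA", "07-15", 2.5))  # False
```

2.5°C is a perfectly valid air temperature, and Redding is a valid city, yet the combination fails the plausibility check: a valid value that is almost certainly not accurate, which is the comment’s point.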
“I believe Magic Quadrants (MQ) are a tool that clients of Gartner, and anyone else who can get their hands on them, use as one data point in their decision-making process.
Analytic reports, like any other data point, are as useful or dangerous as the user wants/needs them to be. From a buyer’s perspective, a MQ can be used for lots of things:
1. To validate a market
2. To identify vendors in the marketplace
3. To identify minimum qualifications in terms of features and functionality
4. To identify trends
5. To determine a company’s viability
6. To justify one’s choice of a vendor
7. To justify value of a purchase
8. Worst case scenario: to defend one’s choice after a failed selection
9. To demonstrate business value of a technology
I also believe they use the analysts, Ted and Andy in this instance, as a sounding board to validate what they believe or have learned from other data points, e.g., references, white papers, demos, friends, colleagues, etc.
In the final analysis though, I know that clients usually make their selection based on many things, the MQ included. One of the most important decision points is the relationship they have with a vendor or the one they believe they are going to be able to develop with a new vendor—and no MQ is going to tell you that.”
Thank you all for your comments. Your feedback is greatly appreciated—and truly is the best part of my blogging experience.
This entry in the series highlighted commendable comments on OCDQ Blog posts published in May, June, and July of 2010.
Please keep on commenting and stay tuned for future entries in the series.
Welcome to the Obsessive-Compulsive Data Quality (OCDQ) Blog Bicentennial Celebration!
Well, okay, technically a bicentennial is the 200th anniversary of something, and I haven’t been blogging for two hundred years.
On March 13, 2009, I officially launched this blog. Earlier this year, I published my 100th blog post. Thanks to my prolific pace, facilitated by a copious amount of free time due to a rather slow consulting year, this is officially the 200th OCDQ Blog post!
So I decided to rummage through my statistics and archives, and assemble a retrospective of how this all came to pass. Enjoy!
OCDQ Blog Numerology
The following table breaks down the OCDQ Blog statistics by month (clicking on the month link will take you to its blog archive), with subtotals by year, and overall totals for number of blog posts, unique visitors, and page views. The most popular blog post for each month was determined using a pseudo-scientific quasi-statistical combination of page views, comments, and re-tweets.
* Since this is the third one published in September 2010, it is officially the 200th OCDQ Blog post!
Some of my favorites
In addition to the most popular OCDQ Blog posts listed above by month, the following are some of my personal favorites:
- All I Really Need To Know About Data Quality I Learned In Kindergarten — Inspired by Robert Fulghum’s classic book, this blog post explains how show and tell, the five second rule and other kindergarten lessons are essential to data quality success.
- The Three Musketeers of Data Quality — Although people, process, and technology are all necessary for data quality success, people are the most important of all. So, who exactly are some of the most important people on your data quality project?
- Fantasy League Data Quality — This blog post attempted to explain best practices in action for master data management, data warehousing, business intelligence, and data quality using . . . fantasy league baseball and football.
- Blog-Bout: “Risk” versus “Monopoly” — A “blog-bout” is a good-natured debate between two bloggers. Phil Simon and I debated which board game is the better metaphor for an Information Technology (IT) project: “Risk” or “Monopoly.”
- Collablogaunity — Mashing together the words collaboration, blog, and community, I created the term collablogaunity (which is pronounced “Call a Blog a Unity”) to explain some recommended blogging best practices.
- Do you enjoy writing? — A literally handwritten blog post about the art of painting with letters and words—aka writing.
- MacGyver: Data Governance and Duct Tape — This allegedly Emmy Award nominated blog post explains data stewardship, data quality, data cleansing, defect prevention, and data governance—all with help from both MacGyver and Jill Dyché.
- The Importance of Envelopes — No, this was not a blog post about postal address data quality. Instead, I used envelopes as a metaphor for effective communication, explaining that the way we deliver our message is as important as our message.
- Dilbert, Data Quality, Rabbits, and #FollowFriday — This blog post revealed a truth that all data quality experts know well: All data quality issues are caused by rabbits—either a cartoon rabbit named Roger, or an invisible rabbit named Harvey.
- Finding Data Quality — With lots of help from the movie Finding Nemo, this blog post explains that although it is often discussed only in relation to other enterprise information initiatives, eventually you’ll be finding data quality everywhere.
Find your favorites
Find your favorites by browsing OCDQ Blog content using the following links:
- Best of OCDQ — Periodically updated listings, organized by topic, of the best OCDQ Blog posts of all time
- Popular Content — Adventures in Data Profiling, Identifying Duplicate Customers, Social Karma, and Downloads
- Best OCDQ Blog Posts of 2010 — “Best of 2010” blog posts based on a combination of page views, comments, and re-tweets
- Best OCDQ Blog Posts of 2009 — “Best of 2009” blog posts based on a combination of page views, comments, and re-tweets
- OCDQ Blog Archives by Month — Browse by Month, e.g., July 2009, May 2010, October 2009, January 2010, August 2009
- OCDQ Blog Archives by Category — Browse by Category, e.g., Debates, Videos, Podcasts, Social Media, Random Thoughts
- OCDQ Blog Archives by Tag — Browse by Tag, e.g., Business-IT Collaboration, Wednesday Word, Recently Read, DQ-Song
So far, OCDQ Blog has received over 900 comments, which is an average of 50 comments per month, and 5 comments per post.
Although a fair percentage of the total number of comments are my responses, Commendable Comments is my ongoing series (next entry coming later this month) that celebrates the truly commendable comments that I regularly receive from my readers.
Thank you very much to everyone who reads OCDQ Blog. Whether you comment or not, your readership is deeply appreciated.
Photo via Flickr (Creative Commons License) by: macwagen
I have always wanted to see my name in lights. However, this photo (of the Harris Theater on Liberty Avenue in downtown Pittsburgh, Pennsylvania) is probably the closest that I will ever come to such a luminous achievement.
In this blog post, I will simply shine the bright stage lights upon the reasoning behind my somewhat theatrical blogging style.
Regular readers know (and perhaps all too well) that I have a proclivity for using metaphors in my blogging.
Most often, I employ conceptual metaphors in an attempt to explain data quality (and its related disciplines): I provide context for a key concept by casting it within a situation that (hopefully) my readers can more easily relate to, so that they can (hopefully) later use the conceptual metaphor to draw meaningful parallels to their own experiences.
Sometimes I weave metaphors into the very tapestry of the fine written-woven fabric that is my blogging style (such as with that admittedly terrible example). Other times, the metaphor provides the conceptual framework for a blog post. Some of my many examples of this technique include equating data quality with going to the dentist, having a bad cold, or fantasy league baseball.
However, by far my most challenging metaphors—not only for me to write, but also for my readers to understand—are the ones I use when I blog either a story or a song (well, technically lyrics, since—and believe me, you should be very thankful for this—I don’t sing).
Both my story posts and my song posts (please see below for links) are actually allegories, since they are extended metaphors for which I usually don’t include any supporting commentary, hoping instead that they illustrate their point without explanation.
Even before the evolution of written language, storytelling played an integral role in every human culture. Listening to stories and retelling them to others continues to be the predominant means of expressing our emotions and ideas—even if nowadays we get most of our stories from television, movies, or the Internet, and less from reading books or having in-person conversations.
And, of course, both before and after the evolution of written language, music played a vital role in the human experience, and without doubt will continue to provide us with additional stories through instrumental, lyrical, and theatrical performances.
I also believe that one of the best aspects of the present social media revolution is that it’s reinvigorating the story culture of our evolutionary past, providing us with more immediate and expanded access to our collective knowledge, experience, and wisdom.
Last summer, metaphor maven James Geary recorded the following fantastic TED Talk video, in which he explains how we all use metaphors to compare what we know to what we don’t know, and he quotes the sage wisdom of Albert Einstein:
“Combinatory play seems to be the essential feature in productive thought.”
If you are having trouble viewing this video, then you can watch it on TED by clicking on this link: Metaphorically Speaking
Whether you blog or not, you use metaphors, stories, and sometimes songs, to help you make sense of the world around you.
The very act of thinking is a form of storytelling. Your brain tries to compare what you already know, or more precisely, what you think you already know, with the new information you are constantly receiving. Especially nowadays, when the very air you breathe is teeming with digital data streams, you are being continually inundated with new information.
Your brain’s combinatory play experiments with bridging your neural pathways with different metaphors, until eventually it finds the right metaphor and your cognitive dissonance falls away in a flash of insight that brings a new depth of understanding and helps you discover a new way to rule the world—metaphorically speaking of course.
Related (Story) Posts
Related (Song) Posts
Related (Blogging) Posts