MDM, Assets, Locations, and the TARDIS

Henrik Liliendahl Sørensen, as usual, is facilitating excellent discussion around master data management (MDM) concepts via his blog.  Two of his recent posts, Multi-Entity MDM vs. Multi-Domain MDM and The Real Estate Domain, have both received great commentary.  So, in case you missed them, be sure to read those posts, and join in their comment discussions/debates.

A few of the concepts discussed and debated reminded me of the OCDQ Radio episode Demystifying Master Data Management, during which guest John Owens explained the three types of data (Transaction, Domain, Master), the four master data entities (Party, Product, Location, Asset), and what is perhaps the most important concept of all, the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).
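To make the Party-Role Relationship a little more concrete, here is a minimal sketch of how it might be modeled (the class and attribute names below are my own illustrative choices, not anything John prescribed):

```python
from dataclasses import dataclass, field

@dataclass
class Party:
    """A real-world person or organization, recorded once as master data."""
    party_id: str
    name: str
    roles: set = field(default_factory=set)  # e.g., {"Customer", "Supplier", "Employee"}

# Customer and Supplier are roles the same Party plays,
# not separate master data entities.
acme = Party(party_id="P-1001", name="Acme Corporation")
acme.roles.update({"Customer", "Supplier"})
```

The point is that Customer, Supplier, and Employee describe relationships a single Party has with the enterprise, which is why modeling them as roles of the Party avoids recording the same real-world person or organization multiple times.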

Henrik’s second post touched on Location and Asset, which come up far less often in MDM discussions than Party and Product do, and arguably with good reason.  This reminded me of the science fiction metaphor I used during my podcast with John, a metaphor meant to help explain the difference, and the relationship, between an Asset and a Location.

Location is often over-identified with postal address, which is actually just one means of referring to a location.  A location can also be referred to by its geographic coordinates, either absolute (e.g., latitude and longitude) or relative (e.g., 7 miles northeast of the intersection of Route 66 and Route 54).
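As a rough sketch of that idea, a Location record might treat each of these reference methods as an optional attribute of the same entity (the field names and example values here are purely illustrative):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Location:
    """One location, which can be referred to in more than one way."""
    location_id: str
    postal_address: Optional[str] = None               # just one means of reference
    coordinates: Optional[Tuple[float, float]] = None  # absolute: (latitude, longitude)
    relative_reference: Optional[str] = None           # relative to some known point

# The same location, referred to in two different ways:
site = Location(
    location_id="L-0042",
    postal_address="123 Main Street, Anytown",
    relative_reference="7 miles northeast of the intersection of Route 66 and Route 54",
)
```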

Asset refers to a resource owned or controlled by an enterprise and capable of producing business value.  Assets are often over-identified with their location, especially real estate assets such as a manufacturing plant or an office building, since they are essentially immovable assets always at a particular location.

However, many assets are movable, such as the equipment used to manufacture products, or the technology used to support employee activities.  These assets are not always at a particular location (e.g., laptops and smartphones used by employees) and can also be dependent on other, non-co-located sub-assets (e.g., replacement parts needed to repair broken equipment).
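Sketching the Asset side the same way, an Asset might carry an optional link to its current Location, plus links to sub-assets that are not necessarily co-located with it (again, the names are illustrative assumptions, not a prescribed model):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Asset:
    """A resource owned or controlled by the enterprise."""
    asset_id: str
    description: str
    movable: bool = True
    current_location_id: Optional[str] = None  # may be unknown for movable assets
    sub_assets: List["Asset"] = field(default_factory=list)

# An immovable asset at a fixed location, with a non-co-located sub-asset:
press = Asset("A-0007", "Stamping press", movable=False, current_location_id="L-0042")
spare_motor = Asset("A-0007-M1", "Replacement motor", current_location_id="L-0099")
press.sub_assets.append(spare_motor)
```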

In Doctor Who, a brilliant British science fiction television program celebrating its 50th anniversary this year, the TARDIS, which stands for Time and Relative Dimension in Space, is the time machine and spaceship the Doctor and his companions travel in.

The TARDIS is arguably the Doctor’s most important asset, but its location changes frequently, both during and across episodes.

So, in MDM, we could say that Location is a time and relative dimension in space where we would currently find an Asset.
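In data terms, one way to capture that is to treat the Asset-to-Location link as a time-stamped history rather than a single static attribute, so that “where is this asset?” is always answered as of a point in time.  A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class AssetLocationRecord:
    """One observation of where an asset was, beginning at a given moment."""
    asset_id: str
    location_id: str
    effective_from: datetime

def location_as_of(history: List[AssetLocationRecord], asset_id: str,
                   as_of: datetime) -> Optional[str]:
    """Return the location the asset occupied as of the given moment, if known."""
    relevant = [r for r in history
                if r.asset_id == asset_id and r.effective_from <= as_of]
    if not relevant:
        return None
    return max(relevant, key=lambda r: r.effective_from).location_id
```

Which is just a drier way of saying the TARDIS keeps materializing somewhere new.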

 

Related Posts

OCDQ Radio - Demystifying Master Data Management

OCDQ Radio - Master Data Management in Practice

OCDQ Radio - The Art of Data Matching

Plato’s Data

Once Upon a Time in the Data

The Data Cold War

DQ-BE: Single Version of the Time

The Data Outhouse

Fantasy League Data Quality

OCDQ Radio - The Blue Box of Information Quality

Choosing Your First Master Data Domain

Lycanthropy, Silver Bullets, and Master Data Management

Voyage of the Golden Records

The Quest for the Golden Copy

How Social can MDM get?

Will Social MDM be the New Spam?

More Thoughts about Social MDM

Is Social MDM going the Wrong Way?

The Semantic Future of MDM

Small Data and VRM

Data Quality: Quo Vadimus?

Over the past week, an excellent meme has been making its way around the data quality blogosphere.  It all started, as many of the best data quality blogging memes do, with a post written by Henrik Liliendahl Sørensen.

In Turning a Blind Eye to Data Quality, Henrik blogged about how, as data quality practitioners, we are often amazed by the inconvenient truth that our organizations can grow into successful businesses even while turning a blind eye to data quality, ignoring data quality issues and not following the data quality best practices that we advocate.

“The evidence about how poor data quality is costing enterprises huge sums of money has been out there for a long time,” Henrik explained.  “But business successes are made over and over again despite bad data.  There may be casualties, but the business goals are met anyway.  So, poor data quality is just something that makes the fight harder, not impossible.”

As data quality practitioners, we often don’t effectively sell the business benefits of data quality, but instead talk only about the negative consequences of not investing in data quality, which, as Henrik explained, is usually why business leaders turn a blind eye to data quality challenges.  Henrik concluded with the recommendation that when we are talking with business leaders, we need to focus on “smaller, but tangible, wins where data quality improvement and business efficiency goes hand in hand.”

 

Is Data Quality a Journey or a Destination?

Henrik’s blog post received excellent comments, which included a debate about whether data quality is a journey or a destination.

Garry Ure responded with his blog post Destination Unknown, in which he explained how “historically the quest for data quality was likened to a journey to convey the concept that you need to continue to work in order to maintain quality.”  But Garry also noted that sometimes when an organization does successfully ingrain data quality practices into day-to-day business operations, it can make it seem like data quality is a destination that the organization has finally reached.

Garry concluded that data quality is “just one destination of many on a long and somewhat recursive journey.  I think the point is that there is no final destination, instead the journey becomes smoother, quicker, and more pleasant for those traveling.”

Bryan Larkin responded to Garry with the blog post Data Quality: Destinations Known, in which Bryan explained, “data quality should be a series of destinations where short journeys occur on the way to those destinations.  The reason is simple.  If we make it about one big destination or one big journey, we are not aligning our efforts with business goals.”

In order to do this, Bryan recommends that “we must identify specific projects that have tangible business benefits (directly to the bottom line — at least to begin with) that are quickly realized.  This means we are looking at less of a smooth journey and more of a sprint to a destination — to tackle a specific problem and show results in a short amount of time.  Most likely we’ll have a series of these sprints to destinations with little time to enjoy the journey.”

“While comprehensive data quality initiatives,” Bryan concluded, “are things we as practitioners want to see — in fact we build our world view around such — most enterprises (not all, mind you) are less interested in big initiatives and more interested in finite, specific, short projects that show results.  If we can get a series of these lined up, we can think of them more in terms of an overall comprehensive plan if we like — even a journey.  But most functional business staff will think of them in terms of the specific projects that affect them.”

The Latin phrase Quo Vadimus? translates into English as “Where are we going?”  When I ponder where data quality is going, and whether data quality is a journey or a destination, I am reminded of the words of T.S. Eliot:

“We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.”

We must not cease from exploring new ways to continuously improve our data quality and continuously put into practice our data governance principles, policies, and procedures, and the end of all our exploring will be to arrive where we began and to know, perhaps for the first time, the value of high-quality data to our enterprise’s continuing journey toward business success.

The Art of Data Matching

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

On this episode of OCDQ Radio, I am joined by Henrik Liliendahl Sørensen for a discussion about the Art of Data Matching.

Henrik is a data quality and master data management (MDM) professional who also works in data architecture.  Henrik has worked for 30 years in the IT business, across a wide range of business areas such as government, insurance, manufacturing, membership, healthcare, public transportation, and more.

Henrik’s current engagements include working as practice manager at Omikron Data Quality, a data quality tool maker with headquarters in Germany, and as data quality specialist at Stibo Systems, a master data management vendor with headquarters in Denmark.  Henrik is also a charter member of the IAIDQ, and the creator of the LinkedIn Group for Data Matching for people interested in data quality and thrilled by automated data matching, deduplication, and identity resolution.

Henrik is one of the most prolific and popular data quality bloggers, regularly sharing his excellent insights about data quality, data matching, MDM, data architecture, data governance, diversity in data quality, and many other data management topics.

Popular OCDQ Radio Episodes

Clicking on an episode title will take you to that episode’s blog post:

  • Demystifying Data Science — Guest Melinda Thielbar, a Ph.D. Statistician, discusses what a data scientist does and provides a straightforward explanation of key concepts such as signal-to-noise ratio, uncertainty, and correlation.
  • Data Quality and Big Data — Guest Tom Redman (aka the “Data Doc”) discusses Data Quality and Big Data, including if data quality matters less in larger data sets, and if statistical outliers represent business insights or data quality issues.
  • Demystifying Master Data Management — Guest John Owens explains the three types of data (Transaction, Domain, Master), the four master data entities (Party, Product, Location, Asset), and the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).
  • Data Governance Star Wars — Special Guests Rob Karel and Gwen Thomas joined this extended, and Star Wars themed, discussion about how to balance bureaucracy and business agility during the execution of data governance programs.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.
  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

#FollowFriday Spotlight: @hlsdk

FollowFriday Spotlight is a regular OCDQ segment highlighting someone you should follow—and not just on Fridays on Twitter.

Henrik Liliendahl Sørensen is a data quality and master data management (MDM) professional with over 30 years of experience in the information technology (IT) business working within a large range of business areas, such as government, insurance, manufacturing, membership, healthcare, and public transportation.

For more details about what Henrik has been, and is, working on, check out his My Been Done List and 2011 To Do List.

Henrik is also a charter member of the IAIDQ, and the creator of the LinkedIn Group for Data Matching for people interested in data quality and thrilled by automated data matching, deduplication, and identity resolution.

Henrik is one of the most prolific and popular data quality bloggers, regularly sharing his excellent insights about data quality, data matching, MDM, data architecture, data governance, diversity in data quality, and many other data management topics.

So check out Liliendahl on Data Quality for great blog posts written by Henrik Liliendahl Sørensen, such as these popular posts:

 

Related Posts

Delivering Data Happiness

#FollowFriday Spotlight: @DataQualityPro

#FollowFriday and Re-Tweet-Worthiness

#FollowFriday and The Three Tweets

Dilbert, Data Quality, Rabbits, and #FollowFriday

Twitter, Meaningful Conversations, and #FollowFriday

The Fellowship of #FollowFriday

Social Karma (Part 7) – Twitter

Delivering Data Happiness

Recently, a happiness meme has been making its way around the data quality blogosphere.

Its origins have been traced to a lovely day in Denmark when Henrik Liliendahl Sørensen, with help from The Muppet Show, asked “Why do you watch it?” referring to the typically negative spin in the data quality blogosphere, where it seems we are:

“Always describing how bad data is everywhere.

Bashing executives who don’t get it.

Telling about all the hard obstacles ahead. Explaining you don’t have to boil the ocean but might get success by settling for warming up a nice little drop of water.

Despite really wanting to tell a lot of success stories, being the funny Fozzie Bear on the stage, well, I am afraid I also have been spending most of my time on the balcony with Statler and Waldorf.

So, from this day forward: More success stories.”

In his recent blog posts, The Ugly Duckling and Data Quality Tools: The Cygnets in Information Quality, Henrik has been sharing more success stories, or to phrase it in an even happier way: delivering data happiness.

 

Delivering Data Happiness

I am reading the great book Delivering Happiness: A Path to Profits, Passion, and Purpose by Tony Hsieh, the CEO of Zappos.

Obviously, the book’s title inspired the title of this blog post. 

One of the Zappos core values is “build a positive team and family spirit,” and I have been thinking about how that applies to data quality improvements, which are often pursued as one of the many aspects of a data governance program.

Most data governance maturity models describe an organization’s evolution through a series of stages intended to measure its capability and maturity, tendency toward being reactive or proactive, and inclination to be project-oriented or program-oriented.

Most data governance programs are started by organizations that are confronted with a painfully obvious need for improvement.

The primary reason that the change management efforts of data governance are resisted is that they rely almost exclusively on negative methods—they emphasize broken business and technical processes, as well as bad data-related employee behaviors.

Although these problems exist and are the root cause of some of the organization’s failures, there are also unheralded processes and employees that have prevented other problems from happening, and they are the root cause of some of the organization’s successes.

“The best team members,” writes Hsieh while explaining the Zappos core values, “take initiative when they notice issues so that the team and the company can succeed.” 

“The best team members take ownership of issues and collaborate with other team members whenever challenges arise.” 

“The best team members have a positive influence on one another and everyone they encounter.  They strive to eliminate any kind of cynicism and negative interactions.”

The change management efforts of data governance and other enterprise information initiatives often make it sound like no such employees (i.e., “best team members”) currently exist anywhere within an organization. 

The blogosphere, as well as critically acclaimed books and expert presentations at major industry conferences, often seems to be in unanimous and unambiguous agreement about the message being broadcast:

“Everything your organization is currently doing regarding data management is totally wrong!”

Sadly, that isn’t much of an exaggeration.  But I am not trying to accuse anyone of using Machiavellian sales tactics to sell solutions to non-existent problems—poor data quality and immature data governance are costly realities for many organizations.

Nor am I trying to oversimplify the many real complexities involved when implementing enterprise information initiatives.

However, most of these initiatives focus exclusively on developing new solutions and best practices, failing to even acknowledge the possible presence of existing solutions and best practices.

The success of all enterprise information initiatives requires the kind of enterprise-wide collaboration that is facilitated by the “best team members.”  But where, exactly, do the best team members come from?  Should it really be surprising when an enterprise information initiative can’t find any of them while using exclusively negative methods, focusing only on what is currently wrong?

As Gordon Hamilton commented on my previous post, we need to be “helping people rise to the level of the positive expectations, rather than our being codependent in their sinking to the level of the negative expectations.”

We really need to start using more positive methods for fostering change.

Let’s begin by first acknowledging the best team members who are currently delivering data happiness to our organizations.

 

Related Posts

Why isn’t our data quality worse?

The Road of Collaboration

Common Change

Finding Data Quality

Declaration of Data Governance

The Balancing Act of Awareness

Podcast: Business Technology and Human-Speak

“I can make glass tubes”