Data is a Game Changer


Nowadays we hear a lot of chatter, rather reminiscent of the boisterous bluster of sports talk radio debates, about the potential of big data and its related technologies to enable predictive and real-time analytics.  By leveraging an infrastructure provided by the symbiotic relationship of cloud and mobile, the promise goes, these technologies will serve up better business performance and an enhanced customer experience.

Sports have always provided great fodder for the data-obsessed, with their treasure troves of statistical data dissecting yesterday’s games down to the most minute detail, data that experts and amateurs alike call upon to predict tomorrow’s games and to analyze the play-by-play of today’s games in real time.  Arguably, it was the bestselling book Moneyball by Michael Lewis, later adapted into a popular movie starring Brad Pitt, that brought data obsession to the masses, further fueling the hype and the overuse of sports metaphors, such as the claim that data can be a game changer for businesses in any industry and of any size.

The Future is Now Playing on Center Court

That is why it is so refreshing to see a tangible, real-world case study for big data analytics delivered with the force of an Andy Murray two-handed backhand, as over the next two weeks the United States Tennis Association (USTA) welcomes hundreds of thousands of spectators to New York City’s Flushing Meadows for the 2013 U.S. Open tennis tournament.  Both the fans in the stands and the millions more around the world will visit USOpen.org, via the web or mobile apps, to follow the action, watch live-streamed tennis matches, and get scores, stats, and the latest highlights and news, thanks to IBM technologies.

Before, during, and after each match, predictive and real-time analytics drive IBM’s SlamTracker tool.  Before matches, IBM analyzes 41 million data points collected from eight years of Grand Slam play, including head-to-head matches, similar player types, and playing surfaces.  SlamTracker uses this data to create engaging and compelling tools for digital audiences, which identify key actions players must take to enhance their chances of winning, and give fans player information, match statistics, social sentiment, and more.

The infrastructure that supports the U.S. Open’s digital presence is hosted on an IBM SmartCloud.  This flexible, scalable environment, managed by IBM Analytics, lets the USTA ensure continuous availability of their digital platforms throughout the tournament and year-round.  The USTA and IBM give fans the ability to experience the matches from anywhere, with any device via a mobile-friendly site and engaging apps for multiple mobile platforms.  Together these innovations make the U.S. Open experience immediate and intimate for fans sitting in the stands or on another continent.

Better Service, More Winners, and Fewer Unforced Errors

In tennis, a service (also known as a serve) is a shot to start a point.  In business, a service is a shot to start a point of positive customer interaction, whether that’s a point of sale or an opportunity to serve a customer’s need (e.g., resolving a complaint).

In tennis, a winner is a shot not reached by your opponent, which wins you a point.  In business, a winner is a differentiator not reached by your competitor, which wins your business a sale when it makes a customer choose your product or service.

In tennis, an unforced error is a failure to complete a service or return a shot, which cannot be attributed to any factor other than poor judgement or execution by the player.  In business, an unforced error is a failure to service a customer or get a return on an investment, which cannot be attributed to any factor other than poor decision making or execution by the organization.

Properly supported by enabling technologies, businesses of all sizes, and across all industries, can capture and analyze data to uncover hidden patterns and trends that can help them achieve better service, more winners, and fewer unforced errors.

How Can Data Change Your Game?

Whether it’s on the court, in the stands, on the customer-facing front lines, in the dashboards used by executive management, or behind the scenes of a growing midsize business, data is a game changer.  How can data change your game?


The Stone Wars of Root Cause Analysis


“As a single stone causes concentric ripples in a pond,” Martin Doyle commented on my blog post There is No Such Thing as a Root Cause, “there will always be one root cause event creating the data quality wave.  There may be interference after the root cause event which may look like a root cause, creating eddies of side effects and confusion, but I believe there will always be one root cause.  Work backwards from the data quality side effects to the root cause and the data quality ripples will be eliminated.”

Martin Doyle and I continued our congenial blog comment banter on my podcast episode The Johari Window of Data Quality, but in this blog post I wanted to focus on the stone-throwing metaphor for root cause analysis.

Let’s begin with the concept of a single stone causing the concentric ripples in a pond.  Is the stone really the root cause?  Who threw the stone?  Why did that particular person choose to throw that specific stone?  How did the stone come to be alongside the pond?  Which path did the stone-thrower take to get to the pond?  What happened to the stone-thrower earlier in the day that made them want to go to the pond, and once there, pick up a stone and throw it in the pond?

My point is that while root cause analysis is important to data quality improvement, too often we can get carried away riding the ripples of what we believe to be the root cause of poor data quality.  Adding to the complexity is the fact there’s hardly ever just one stone.  Many stones get thrown into our data ponds, and trying to un-ripple their poor quality effects can lead us to false conclusions because causation is non-linear in nature.  Causation is a complex network of many interrelated causes and effects, so some of what appear to be the effects of the root cause you have isolated may, in fact, be the effects of other causes.

As Laura Sebastian-Coleman explains, data quality assessments are often “a quest to find a single criminal—The Root Cause—rather than to understand the process that creates the data and the factors that contribute to data issues and discrepancies.”  Those approaching data quality this way, “start hunting for the one thing that will explain all the problems.  Their goal is to slay the root cause and live happily ever after.  Their intentions are good.  And slaying root causes—such as poor process design—can bring about improvement.  But many data problems are symptoms of a lack of knowledge about the data and the processes that create it.  You cannot slay a lack of knowledge.  The only way to solve a knowledge problem is to build knowledge of the data.”

Believing that you have found and eliminated the root cause of all your data quality problems is like believing that after you have removed the stones from your pond (i.e., data cleansing), you can stop the stone-throwers by building a high stone-deflecting wall around your pond (i.e., defect prevention).  However, there will always be stones (i.e., data quality issues) and there will always be stone-throwers (i.e., people and processes) that will find a way to throw a stone in your pond.

In our recent podcast Measuring Data Quality for Ongoing Improvement, Laura Sebastian-Coleman and I discussed how, although root cause is used as a singular noun, just as data is used as a singular noun, we should really talk about root causes, since, just as data analysis is not the analysis of a single datum, root cause analysis should not be viewed as the analysis of a single root cause.

The bottom line, or, if you prefer, the ripple at the bottom of the pond, is that the Stone Wars of Root Cause Analysis will never end because data quality is a journey, not a destination.  After all, that’s why it’s called ongoing data quality improvement.

Measuring Data Quality for Ongoing Improvement

OCDQ Radio is an audio podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

Listen to Laura Sebastian-Coleman, author of the book Measuring Data Quality for Ongoing Improvement: A Data Quality Assessment Framework, and me discuss how bringing together a better understanding of what is represented in data, and how it is represented, with the expectations for its use can improve the overall quality of data.  Our discussion also includes avoiding two common mistakes made when starting a data quality project, and defining five dimensions of data quality.

Laura Sebastian-Coleman has worked on data quality in large health care data warehouses since 2003.  She has implemented data quality metrics and reporting, launched and facilitated a data quality community, contributed to data consumer training programs, and has led efforts to establish data standards and to manage metadata.  In 2009, she led a group of analysts in developing the original Data Quality Assessment Framework (DQAF), which is the basis for her book.

Laura Sebastian-Coleman has delivered papers at MIT’s Information Quality Conferences and at conferences sponsored by the International Association for Information and Data Quality (IAIDQ) and the Data Governance Organization (DGO).  She holds IQCP (Information Quality Certified Professional) designation from IAIDQ, a Certificate in Information Quality from MIT, a B.A. in English and History from Franklin & Marshall College, and a Ph.D. in English Literature from the University of Rochester.

Popular OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Demystifying Data Science — Guest Melinda Thielbar, a Ph.D. Statistician, discusses what a data scientist does and provides a straightforward explanation of key concepts such as signal-to-noise ratio, uncertainty, and correlation.
  • Data Quality and Big Data — Guest Tom Redman (aka the “Data Doc”) discusses Data Quality and Big Data, including whether data quality matters less in larger data sets, and whether statistical outliers represent business insights or data quality issues.
  • Demystifying Master Data Management — Guest John Owens explains the three types of data (Transaction, Domain, Master), the four master data entities (Party, Product, Location, Asset), and the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).
  • Data Governance Star Wars — Special Guests Rob Karel and Gwen Thomas joined this extended, and Star Wars themed, discussion about how to balance bureaucracy and business agility during the execution of data governance programs.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.
  • Data Profiling Early and Often — Guest James Standen discusses data profiling concepts and practices, and how bad data is often misunderstood and can be coaxed away from the dark side if you know how to approach it.
  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

The Symbiotic Relationship of Cloud and Mobile

“Although many people are calling for a cloud revolution in which everyone simultaneously migrates their systems to the cloud,” David Linthicum recently blogged, “that’s not going to happen.  While there will be no mass migration, there will be many one-off cloud migration projects that improve the functionality of systems, as well as cloud-based deployments of new systems.”

“This means that,” Linthicum predicted, “cloud computing’s growth will follow the same patterns of adoption we saw for the PC and the Web.  We won’t notice many of the changes as they occur, but the changes will indeed come.”  Perhaps the biggest driver of the cloud-based changes to come is the way many of us are using the cloud today — as a way to synchronize data across our multiple devices, the vast majority of which nowadays are mobile devices.

John Mason recently blogged about the symbiotic relationship between the cloud and mobile devices, which “not only expands the reach of small and midsize businesses, it levels the playing field too, helping them compete in a quickly changing business environment.  Cloud-based applications help businesses stay mobile, agile, and responsive without sacrificing security or reliability, and even the smallest of companies can provide their customers with fast, around-the-clock access to important data.”

The age of the mobile device is upon us, thanks mainly to the cloud-based applications floating above us, which enable a mobile-app-portal-to-the-cloud computing model.  That model is well supported by the widespread availability of high-speed network connectivity, since, no matter where we are, it seems like a Wi-Fi or mobile broadband network is always available.

As more and more small and midsize businesses continue to leverage the symbiotic relationship between the cloud and mobile to build relationships with customers and rethink how work works, they are enabling the future of the collaborative economy.


Caffeinated Thoughts on Technology for Midsize Businesses

If you are having trouble viewing this video, then you can watch it on Vimeo via this link: vimeo.com/71338997

The following links are to the resources featured in or related to the content of this video:

  • Get Bold with Your Social Media: http://goo.gl/PCQ11 (Sandy Carter Book Review by Debbie Laskey)

DQ-BE: The Time Traveling Gift Card

Data Quality By Example (DQ-BE) is an OCDQ regular segment that provides examples of data quality key concepts.

As an avid reader, I tend to redeem most of my American Express Membership Rewards points for Barnes & Noble gift cards to buy new books for my Nook.  As a data quality expert, I tend to notice when something is amiss with data.  As shown above, for example, my recent gift card was apparently issued on — and only available for use until — January 1, 1900.

At first, I thought I might have encountered the time traveling gift card.  However, I doubted the gift card would be accepted as legal tender in 1900.  Then I thought my gift card was actually worth $1,410 (what $50 in 1900 would be worth today), which would allow me to buy a lot more books — as long as Barnes & Noble would overlook the fact that the gift card expired 113 years ago.

Fortunately, I was able to use the gift card to purchase $50 worth of books in 2013.

So, I guess the moral of this story is that sometimes poor data quality does pay.  However, it probably never pays to display your poor data quality to someone who runs an obsessive-compulsive data quality blog with a series about data quality by example.
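
For what it’s worth, a time traveling date like this is usually a sentinel or default value (such as 1900-01-01 or 1970-01-01) that leaks out of a system when the real issue or expiration date is missing.  A simple profiling check can flag such records before they ever reach a customer.  The sketch below is a minimal, hypothetical Python example; the field names, sentinel list, and checks are my own assumptions rather than anyone’s actual gift card data model:

    from datetime import date

    # Common sentinel/default dates that often signal a missing real value
    SENTINEL_DATES = {date(1900, 1, 1), date(1970, 1, 1), date(9999, 12, 31)}

    def check_gift_card_dates(issued_on: date, expires_on: date, today: date) -> list:
        """Return a list of data quality issues found in a gift card's dates."""
        issues = []
        if issued_on in SENTINEL_DATES or expires_on in SENTINEL_DATES:
            issues.append("sentinel/default date detected")
        if issued_on > today:
            issues.append("issued in the future")
        if expires_on < today:
            issues.append("already expired")
        if expires_on < issued_on:
            issues.append("expires before it was issued")
        return issues

    # The gift card from this post: issued on, and expiring, January 1, 1900
    print(check_gift_card_dates(date(1900, 1, 1), date(1900, 1, 1), today=date(2013, 1, 1)))
    # ['sentinel/default date detected', 'already expired']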

 

What examples (good or poor) of data quality have you encountered in your time travels?

 

Related Posts

DQ-BE: Invitation to Duplication

DQ-BE: Dear Valued Customer

DQ-BE: Single Version of the Time

DQ-BE: Data Quality Airlines

Retroactive Data Quality

Sometimes Worse Data Quality is Better

Data Quality, 50023

DQ-IRL (Data Quality in Real Life)

The Seven Year Glitch

When Poor Data Quality Calls

Data has an Expiration Date

Sometimes it’s Okay to be Shallow

A Big Data Platform for Midsize Businesses

If you’re having trouble viewing this video, watch it on Vimeo via this link: A Big Data Platform for Midsize Businesses

The following links are to the infographics featured in this video, as well as links to other related resources:

  • Webcast Replay: Why Big Data Matters to the Midmarket: http://goo.gl/A1WYZ (No Registration Required)
  • IBM’s 2012 Big Data Study with Feedback from People who saw Results: http://goo.gl/MmRAv (Registration Required)
  • Participate in IBM’s 2013 Business Value Survey on Analytics and Big Data: http://goo.gl/zKSPM (Registration Required)

Too Big to Ignore

OCDQ Radio is an audio podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

During this episode, Phil Simon shares his sage advice for getting started with big data, including the importance of having a data-oriented mindset, why ambitious long-term goals should give way to more reasonable and attainable short-term objectives, and the need to always remember that big data is just another means toward solving business problems.

Phil Simon is a sought-after speaker and the author of five management books, most recently Too Big to Ignore: The Business Case for Big Data.  A recognized technology expert, he consults with companies on how to optimize their use of technology.  His contributions have been featured on NBC, CNBC, ABC News, Inc. magazine, BusinessWeek, Huffington Post, Globe and Mail, Fast Company, Forbes, the New York Times, ReadWriteWeb, and many other media outlets.


Cloud Benefits for Midsize Businesses

If you’re having trouble viewing this video, watch it on Vimeo via this link: Cloud Benefits for Midsize Businesses on Vimeo

The following links are to the infographics and eBook featured in this video, as well as other related resources:


DQ-Tip: “An information centric organization...”

Data Quality (DQ) Tips is an OCDQ regular segment.  Each DQ-Tip is a clear and concise data quality pearl of wisdom.

“An information centric organization is an organization driven from high-quality, complete, and timely information that is relevant to its goals.”

This DQ-Tip is from the new book Patterns of Information Management by Mandy Chessell and Harald Smith.

“An organization exists for a purpose,” Chessell and Smith explained.  “It has targets to achieve and long-term aspirations.  An organization needs to make good use of its information to achieve its goals.”  In order to do this, they recommend that you define an information strategy that lays out why, what, and how your organization will manage its information:

  • Why — The business imperatives that drive the need to be information centric, which helps focus information management efforts on the activities that deliver value to the organization.
  • What — The type of information that you must manage to deliver on those business imperatives, which includes the subject areas to cover, which attributes within each subject area need to be managed, the valid values for those attributes, and the information management policies (such as retention and protection) that the organization wants to implement.
  • How — The information management principles that provide the general rules for how information is to be managed by the information systems and the people using them along with how information flows between them.

Developing an information strategy, according to Chessell and Smith, “creates a set of objectives for the organization, which guides the investment in information management technology and related solutions that support the business.  Starting with the business imperatives ensures the information management strategy is aligned with the needs of the organization, making it easier to demonstrate its relevance and value.”

Chessell and Smith also noted that “technology alone is not sufficient to ensure the quality, consistency, and flexibility of an organization’s information.  Classify the people connected to the organization according to their information needs and skills, provide common channels of communication and knowledge sharing about information, and user interfaces and reports through which they can access the information as appropriate.”

Chessell and Smith explained that the attitudes and skills of the organization’s people are what enable the right behaviors in everyday operations, which is a major determinant of the success of an information management program.

 

Related Posts

DQ-Tip: “The quality of information is directly related to...”

DQ-Tip: “Undisputable fact about the value and use of data...”

DQ-Tip: “Data quality tools do not solve data quality problems...”

DQ-Tip: “There is no such thing as data accuracy...”

DQ-Tip: “Data quality is primarily about context not accuracy...”

DQ-Tip: “There is no point in monitoring data quality...”

DQ-Tip: “Don't pass bad data on to the next person...”

DQ-Tip: “...Go talk with the people using the data”

DQ-Tip: “Data quality is about more than just improving your data...”

DQ-Tip: “Start where you are...”

Sometimes Worse Data Quality is Better

This blog post continues a theme from three previous posts, which discussed when it’s okay to call data quality as good as it needs to get, the occasional times when perfect data quality is necessary, and the costs and profits of poor data quality.  Here I want to provide three examples of when the world of consumer electronics proved that sometimes worse data quality is better.

 

When the Betamax Bet on Video Busted

While it seems like a long time ago in a galaxy far, far away, during the 1970s and 1980s a videotape format war waged between Betamax and VHS.  Betamax was widely believed to provide superior video data quality.

But a blank Betamax tape allowed users to record up to two hours of high-quality video, whereas a VHS tape allowed users to record up to four hours of slightly lower quality video.  Consumers consistently chose quantity over quality, especially since lower quality also meant a lower price.  Betamax tapes and machines remained more expensive, a bet that consumers would be willing to pay a premium for higher-quality video.

The VHS victory demonstrated how people often choose quantity over quality, so it doesn’t always pay to have better data quality.

 

When Lossless Lost to Lossy Audio

Much to the dismay of those working in the data quality profession, most people do not care about the quality of their data unless it becomes bad enough for them to pay attention to — and complain about.

An excellent example is bitrate, which refers to the number of bits — or the amount of data — that are processed over a certain amount of time.  In his article Does Bitrate Really Make a Difference In My Music?, Whitson Gordon examined the common debate about lossless versus lossy audio formats.

Using the example of ripping a track from a CD to a hard drive, a lossless format means the track is not compressed to the point where any of its data is lost, retaining, for all intents and purposes, the same audio data quality as the original CD track.

By contrast, a lossy format compresses the track so that it takes up less space by intentionally deleting some of its data, reducing audio data quality.  Audiophiles often claim that anything other than vinyl records sounds lousy because those formats are so lossy.

However, like truth, beauty, and art, data quality can be said to be in the eyes — or the ears — of the beholder.  So, if your favorite music sounds fine to you in MP3 format, then not only do you no longer need vinyl records, audio tapes, or CDs, but you also will not pay more attention to (or pay more money for) audio data quality.
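
To put the lossless-versus-lossy tradeoff in concrete terms, here is a rough back-of-the-envelope calculation of how bitrate translates into file size.  The CD and MP3 bitrates are standard figures; the four-minute track length and the FLAC compression ratio are illustrative assumptions on my part:

    # Rough file-size estimates for one track at different bitrates.
    # CD audio: 44,100 samples/sec x 16 bits/sample x 2 channels = 1,411,200 bits/sec.
    def size_in_mb(bitrate_kbps: float, minutes: float) -> float:
        bits = bitrate_kbps * 1000 * minutes * 60   # total bits for the track
        return bits / 8 / (1024 * 1024)             # bits -> bytes -> mebibytes

    track_minutes = 4  # assumed track length
    for label, kbps in [("Uncompressed CD audio", 1411.2),
                        ("Lossless FLAC (assuming ~60% of CD size)", 1411.2 * 0.6),
                        ("Lossy MP3 at 320 kbps", 320),
                        ("Lossy MP3 at 128 kbps", 128)]:
        print(f"{label}: ~{size_in_mb(kbps, track_minutes):.1f} MB")

    # Roughly: CD ~40 MB, FLAC ~24 MB, 320 kbps MP3 ~9 MB, 128 kbps MP3 ~3.7 MB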

 

When Digital Killed the Photograph Star

The Eastman Kodak Company, commonly known as Kodak, which was founded by George Eastman in 1888 and dominated the photography industry for most of the 20th century, filed for bankruptcy in January 2012.  The primary reason was that Kodak, which had previously pioneered innovations like celluloid film and color photography, failed to embrace the industry’s transition to digital photography, despite the fact that Kodak invented some of the core technology used in today’s digital cameras.

Why?  Because Kodak believed that the data quality of digital photographs would be generally unacceptable to consumers as a replacement for film photographs.  In much the same way that Betamax assumed consumers wanted higher-quality video, Kodak assumed consumers would always want to use higher-quality photographs to capture their “Kodak moments.”

In fairness to Kodak, mobile devices are causing a massive — and rapid — disruption to many well-established business models, creating a brave new digital world, and obviously not just for photography.  However, when digital killed the photograph star, it proved, once again, that sometimes worse data quality is better.

  

Related Posts

Data Quality and the OK Plateau

When Poor Data Quality Kills

The Costs and Profits of Poor Data Quality

Promoting Poor Data Quality

Data Quality and the Cupertino Effect

The Data Quality Wager

How Data Cleansing Saves Lives

The Dichotomy Paradox, Data Quality and Zero Defects

Data Quality and Miracle Exceptions

Data Quality: Quo Vadimus?

The Seventh Law of Data Quality

A Tale of Two Q’s

Paleolithic Rhythm and Data Quality

Groundhog Data Quality Day

Data Quality and The Middle Way

Stop Poor Data Quality STOP

When Poor Data Quality Calls

Freudian Data Quality

Predictably Poor Data Quality

Satisficing Data Quality