Big Data and the Infinite Inbox

Occasionally it’s necessary to temper the unchecked enthusiasm accompanying the peak of inflated expectations associated with any hype cycle.  This may be especially true for big data, and especially now since, as Svetlana Sicular of Gartner recently blogged, big data is falling into the trough of disillusionment and “to minimize the depth of the fall, companies must be at a high enough level of analytical and enterprise information management maturity combined with organizational support of innovation.”

I fear the fall may feel bottomless for those who fell hard for the hype and believe the Big Data Psychic capable of making better, if not clairvoyant, predictions.  In fact, “our predictions may be more prone to failure in the era of big data,” explained Nate Silver in his book The Signal and the Noise: Why So Many Predictions Fail but Some Don’t.  “There isn’t any more truth in the world than there was before the Internet.  Most of the data is just noise, as most of the universe is filled with empty space.”

Proposing the 3Ss (Small, Slow, Sure) as a counterpoint to the 3Vs (Volume, Velocity, Variety), Stephen Few recently blogged about the slow data movement.  “Data is growing in volume, as it always has, but only a small amount of it is useful.  Data is being generated and transmitted at an increasing velocity, but the race is not necessarily for the swift; slow and steady will win the information race.  Data is branching out in ever-greater variety, but only a few of these new choices are sure.”

Big data requires us to revisit information overload, a term that was originally about not the increasing amount of information, but the increasing access to it.  As Clay Shirky stated, “It’s not information overload, it’s filter failure.”

As Silver noted, the Internet (like the printing press before it) was a watershed moment in our increased access to information, but its data deluge didn’t increase the amount of truth in the world.  And in today’s world, where many of us strive on a daily basis to prevent email filter failure and achieve what Merlin Mann called Inbox Zero, I find unfiltered enthusiasm about big data to be rather ironic, since big data is essentially enabling the data-driven decision making equivalent of the Infinite Inbox.

Imagine logging into your email every morning and discovering: You currently have (∞) Unread Messages.

However, most of it would probably be spam, which you obviously wouldn’t have any trouble quickly filtering (after all, infinity minus spam must be a back-of-the-napkin calculation), allowing you to read only the truly useful messages.  Right?
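To make the irony concrete, here’s a minimal Python sketch of the Infinite Inbox, with made-up messages and a made-up spam rule.  Even a perfect spam filter doesn’t rescue you, because infinity minus spam is still infinity; all any of us can ever do is read a finite prefix of the stream.

```python
from itertools import count, islice

def infinite_inbox():
    """A hypothetical unbounded message stream -- the Infinite Inbox."""
    for i in count(1):
        # Pretend nine out of every ten messages are spam.
        yield {"id": i, "spam": i % 10 != 0}

def useful_messages(inbox):
    """A perfect filter: keep only the non-spam messages."""
    return (msg for msg in inbox if not msg["spam"])

# The filtered stream is still unbounded, so we can only ever
# consume a finite prefix of it.
for msg in islice(useful_messages(infinite_inbox()), 5):
    print(msg["id"])  # prints 10, 20, 30, 40, 50
```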

 

Related Posts

HoardaBytes and the Big Data Lebowski

OCDQ Radio - Data Quality and Big Data

Open MIKE Podcast — Episode 05: Defining Big Data

Will Big Data be Blinded by Data Science?

Data Silence

Magic Elephants, Data Psychics, and Invisible Gorillas

The Graystone Effects of Big Data

Information Overload Revisited

Exercise Better Data Management

A Tale of Two Datas

A Statistically Significant Resolution for 2013

It’s Not about being Data-Driven

Big Data, Sporks, and Decision Frames

Big Data: Structure and Quality

Darth Vader, Big Data, and Predictive Analytics

Big Data, Predictive Analytics, and the Ideal Chronicler

The Big Data Theory

Swimming in Big Data

What Magic Tricks teach us about Data Science

What Mozart for Babies teaches us about Data Science

Open MIKE Podcast — Episode 11

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for Enterprise Information Management, which provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space.  For more information, click on this link: openmethodology.org/wiki/What_is_MIKE2.0

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE2.0 Wiki Articles, Blog Posts, and Discussion Forums.

 

Episode 11: Information Maturity Model

If you’re having trouble viewing this video, you can watch it on Vimeo by clicking on this link: Open MIKE Podcast on Vimeo

 

MIKE2.0 Content Featured in or Related to this Podcast

Information Maturity Model: openmethodology.org/wiki/Information_Maturity_Model

Reactive Data Governance: openmethodology.org/wiki/Reactive_Data_Governance_Organisation

Proactive Data Governance: openmethodology.org/wiki/Proactive_Data_Governance_Organisation

Managed Data Governance: openmethodology.org/wiki/Managed_Data_Governance_Organisation

Optimal Data Governance: openmethodology.org/wiki/Optimal_Data_Governance_Organisation

 

Previous Episodes of the Open MIKE Podcast

Clicking on the link will take you to the episode’s blog post:

Episode 01: Information Management Principles

Episode 02: Information Governance and Distributing Power

Episode 03: Data Quality Improvement and Data Investigation

Episode 04: Metadata Management

Episode 05: Defining Big Data

Episode 06: Getting to Know NoSQL

Episode 07: Guiding Principles for Open Semantic Enterprise

Episode 08: Information Lifecycle Management

Episode 09: Enterprise Data Management Strategy

Episode 10: Information Maturity QuickScan

You can also find the videos and blog post summaries for every episode of the Open MIKE Podcast at: ocdqblog.com/MIKE

MDM, Assets, Locations, and the TARDIS

Henrik Liliendahl Sørensen, as usual, is facilitating excellent discussion around master data management (MDM) concepts via his blog.  Two of his recent posts, Multi-Entity MDM vs. Multi-Domain MDM and The Real Estate Domain, have both received great commentary.  So, in case you missed them, be sure to read those posts, and join in their comment discussions/debates.

A few of the concepts discussed and debated reminded me of the OCDQ Radio episode Demystifying Master Data Management, during which guest John Owens explained the three types of data (Transaction, Domain, Master) and the four master data entities (Party, Product, Location, Asset), as well as perhaps the most important concept of all, the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).

Henrik’s second post touched on Location and Asset, which come up far less often in MDM discussions than Party and Product do, arguably with good reason.  This reminded me of the science fiction metaphor I used during my podcast with John in an attempt to help explain the difference and relationship between an Asset and a Location.

Location is often over-identified with postal address, which is actually just one means of referring to a location.  A location can also be referred to by its geographic coordinates, either absolute (e.g., latitude and longitude) or relative (e.g., 7 miles northeast of the intersection of Route 66 and Route 54).

Asset refers to a resource owned or controlled by an enterprise and capable of producing business value.  Assets are often over-identified with their location, especially real estate assets such as a manufacturing plant or an office building, since they are essentially immovable assets always at a particular location.

However, many assets are movable, such as the equipment used to manufacture products, or the technology used to support employee activities.  These assets are not always at a particular location (e.g., laptops and smartphones used by employees) and can also be dependent on other, non-co-located, sub-assets (e.g., replacement parts needed to repair broken equipment).

In Doctor Who, a brilliant British science fiction television program celebrating its 50th anniversary this year, the TARDIS, which stands for Time and Relative Dimension in Space, is the time machine and spaceship the Doctor and his companions travel in.

The TARDIS is arguably the Doctor’s most important asset, but its location changes frequently, both during and across episodes.

So, in MDM, we could say that Location is a time and relative dimension in space where we would currently find an Asset.
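For illustration, here’s a minimal Python sketch of that idea, using hypothetical classes (these aren’t from any MDM product): an Asset keeps a time-stamped history of Location sightings, and its current location is simply its most recent sighting.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Location:
    """A place, which can be referred to in more than one way."""
    location_id: str
    postal_address: str | None = None               # one means of reference
    coordinates: tuple[float, float] | None = None  # absolute (latitude, longitude)

@dataclass
class Asset:
    """A resource owned or controlled by an enterprise; possibly movable."""
    asset_id: str
    name: str
    sightings: list[tuple[datetime, Location]] = field(default_factory=list)

    def moves_to(self, location: Location) -> None:
        self.sightings.append((datetime.now(), location))

    def current_location(self) -> Location | None:
        # Location as a time and relative dimension in space:
        # where we would currently find the Asset.
        return self.sightings[-1][1] if self.sightings else None

tardis = Asset("A1", "TARDIS")
tardis.moves_to(Location("L1", postal_address="76 Totter's Lane, London"))
tardis.moves_to(Location("L2", coordinates=(51.4816, -0.0076)))
print(tardis.current_location().location_id)  # L2
```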

 

Related Posts

OCDQ Radio - Demystifying Master Data Management

OCDQ Radio - Master Data Management in Practice

OCDQ Radio - The Art of Data Matching

Plato’s Data

Once Upon a Time in the Data

The Data Cold War

DQ-BE: Single Version of the Time

The Data Outhouse

Fantasy League Data Quality

OCDQ Radio - The Blue Box of Information Quality

Choosing Your First Master Data Domain

Lycanthropy, Silver Bullets, and Master Data Management

Voyage of the Golden Records

The Quest for the Golden Copy

How Social can MDM get?

Will Social MDM be the New Spam?

More Thoughts about Social MDM

Is Social MDM going the Wrong Way?

The Semantic Future of MDM

Small Data and VRM

Popeye, Spinach, and Data Quality

As a kid, one of my favorite cartoons was Popeye the Sailor, who was empowered by eating spinach to take on many daunting challenges, such as battling his brawny nemesis Bluto for the affections of his love interest Olive Oyl, whom Bluto often kidnapped.

I am reading the book The Half-life of Facts: Why Everything We Know Has an Expiration Date by Samuel Arbesman, who, while examining how a novel fact, even a wrong one, spreads and persists, explained that one of the strangest examples of the spread of an error is related to Popeye the Sailor.  “Popeye, with his odd accent and improbable forearms, used spinach to great effect, a sort of anti-Kryptonite.  It gave him his strength, and perhaps his distinctive speaking style.  But why did Popeye eat so much spinach?  What was the reason for his obsession with such a strange food?”

The truth begins over fifty years before the comic strip made its debut.  “Back in 1870,” Arbesman explained, “Erich von Wolf, a German chemist, examined the amount of iron within spinach, among many other green vegetables.  In recording his findings, von Wolf accidentally misplaced a decimal point when transcribing data from his notebook, changing the iron content in spinach by an order of magnitude.  While there are actually only 3.5 milligrams of iron in a 100-gram serving of spinach, the accepted fact became 35 milligrams.  Once this incorrect number was printed, spinach’s nutritional value became legendary.  So when Popeye was created, studio executives recommended he eat spinach for his strength, due to its vaunted health properties, and apparently Popeye helped increase American consumption of spinach by a third!”

“This error was eventually corrected in 1937,” Arbesman continued, “when someone rechecked the numbers.  But the damage had been done.  It spread and spread, and only recently has gone by the wayside, no doubt helped by Popeye’s relative obscurity today.  But the error was so widespread, that the British Medical Journal published an article discussing this spinach incident in 1981, trying its best to finally debunk the issue.”

“Ultimately, the reason these errors spread,” Arbesman concluded, “is because it’s a lot easier to spread the first thing you find, or the fact that sounds correct, than to delve deeply into the literature in search of the correct fact.”

What “spinach” has your organization been falsely consuming because of a data quality issue that was not immediately obvious, and which may have led to a long, and perhaps ongoing, history of data-driven decisions based on poor quality data?

Popeye said “I yam what I yam!”  Your organization yams what your data yams, so you had better make damn sure it’s correct.
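For what it’s worth, the spinach error is exactly the kind of mistake a simple plausibility check can catch, since a misplaced decimal point inflates a value by a factor of ten.  Here’s a minimal Python sketch; the expected range is illustrative, not a nutritional reference.

```python
def plausibility_check(value_mg, expected_range=(1.0, 7.0)):
    """Flag values outside an expected range, including the classic
    misplaced-decimal error, which inflates a value tenfold."""
    low, high = expected_range
    if value_mg > high and low <= value_mg / 10 <= high:
        return "suspect: possible misplaced decimal point"
    if not low <= value_mg <= high:
        return "suspect: outside expected range"
    return "plausible"

print(plausibility_check(3.5))   # plausible
print(plausibility_check(35.0))  # suspect: possible misplaced decimal point
```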

 

Related Posts

The Family Circus and Data Quality

Can Data Quality avoid the Dustbin of History?

Retroactive Data Quality

Spartan Data Quality

Pirates of the Computer: The Curse of the Poor Data Quality

The Tooth Fairy of Data Quality

The Dumb and Dumber Guide to Data Quality

Darth Data

Occurred, a data defect has . . .

The Data Quality Placebo

Data Quality is People!

DQ-View: The Five Stages of Data Quality

DQ-BE: Data Quality Airlines

Wednesday Word: Quality-ish

The Five Worst Elevator Pitches for Data Quality

Shining a Social Light on Data Quality

The Poor Data Quality Jar

Data Quality and #FollowFriday the 13th

Dilbert, Data Quality, Rabbits, and #FollowFriday

Data Love Song Mashup

Open Source Business Intelligence

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

During this episode, I discuss open source business intelligence (OSBI) with Lyndsay Wise, author of the insightful new book Using Open Source Platforms for Business Intelligence: Avoid Pitfalls and Maximize ROI.

Lyndsay Wise is the President and Founder of WiseAnalytics, an independent analyst firm and consultancy specializing in business intelligence for small and mid-sized organizations.  For more than ten years, she has assisted clients in business systems analysis, software selection, and implementation of enterprise applications.

She conducts regular research studies, consults, writes articles, and speaks about how to implement a successful business intelligence approach and improve the value of business intelligence within organizations.

Related OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

Data Quality and Anton’s Syndrome

In his book Incognito: The Secret Lives of the Brain, David Eagleman discussed aspects of a bizarre, and rare, brain disorder called Anton’s Syndrome in which a stroke renders a person blind — but the person denies their blindness.

“Those with Anton’s Syndrome truly believe they are not blind,” Eagleman explained.  “It is only after bumping into enough furniture and walls that they begin to feel that something is amiss.  They are experiencing what they take to be vision, but it is all internally generated.  The external data is not getting to the right places because of the stroke, and so their reality is simply that which is generated by the brain, with little attachment to the real world.  In this sense, what they experience is no different from dreaming, drug trips, or hallucinations.”

Data quality practitioners often complain that business leaders are blind to the importance of data quality to business success, or that they deny data quality issues exist in their organization.  As much as we wish it weren’t so, often it isn’t until business leaders bump into enough of the negative effects of poor data quality that they begin to feel that something is amiss.  However, as much as we would like to, we can’t really attribute their denial to drug-induced hallucinations.

Sometimes an illusion-of-quality effect is created when data is excessively filtered and cleansed before it reaches business leaders.  This can result from a perception filter for data quality issues, created as a natural self-defense mechanism by the people responsible for the business processes and technology surrounding data, since no one wants to be blamed for causing, or failing to fix, data quality issues.  Unfortunately, this might really leave the organization’s data with little attachment to the real world.

In fairness, sometimes it’s also the blind leading the blind, because data quality practitioners often suffer from business blindness, presenting data quality issues without providing business context and without relating data quality metrics in a tangible manner to how the business uses data to support a business process, accomplish a business objective, or make a business decision.

A lot of the disconnect between business leaders, who believe they are not blind to data quality, and data quality practitioners, who believe they are not blind to business context, comes from a crisis of perception.  Each side in this debate believes they have a complete vision, but it’s only after bumping into each other enough times that they begin to envision the organizational blindness caused when data quality is not properly measured within a business context and continually monitored.
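To show what measuring data quality within a business context might look like, here’s a minimal Python sketch with made-up orders.  The same underlying facts can be reported as a bare percentage or as a business impact, and the latter is far harder for business leaders to be blind to.

```python
orders = [
    {"order_id": 1, "ship_address": "100 Main St", "amount": 250.00},
    {"order_id": 2, "ship_address": None,          "amount": 900.00},
    {"order_id": 3, "ship_address": "7 Elm Ave",   "amount": 125.00},
]

# A bare data quality metric: shipping address completeness.
complete = [o for o in orders if o["ship_address"]]
completeness = len(complete) / len(orders)
print(f"Address completeness: {completeness:.0%}")  # 67%

# The same metric within a business context: orders that cannot
# ship, and the revenue at risk because of it.
blocked = [o for o in orders if not o["ship_address"]]
revenue_at_risk = sum(o["amount"] for o in blocked)
print(f"Orders blocked from shipping: {len(blocked)}")  # 1
print(f"Revenue at risk: ${revenue_at_risk:,.2f}")      # $900.00
```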

 

Related Posts

Data Quality and Chicken Little Syndrome

Data Quality and Miracle Exceptions

Data Quality: Quo Vadimus?

Availability Bias and Data Quality Improvement

Finding Data Quality

“Some is not a number and soon is not a time”

The Data Quality of Dorian Gray

The Data Quality Wager

DQ-View: The Five Stages of Data Quality

Data Quality and the Bystander Effect

Data Quality and the Q Test

Why isn’t our data quality worse?

The Illusion-of-Quality Effect

Perception Filters and Data Quality

WYSIWYG and WYSIATI

Predictably Poor Data Quality

Data Psychedelicatessen

Data Geeks and Business Blindness

The Real Data Value is Business Insight

Is your data accurate, but useless to your business?

Data Quality Measurement Matters

Data Myopia and Business Relativity

Data and its Relationships with Quality

Plato’s Data

An Enterprise Resolution

This blog post is sponsored by the Enterprise CIO Forum and HP.

Since just before Christmas I posted An Enterprise Carol, I decided just before New Year’s to post An Enterprise Resolution.

In her article The Irrational Allure of the Next Big Thing, Karla Starr examined why people value potential over achievement in books, sports, and politics.  However, her findings apply equally well to technology and the enterprise’s relationship with IT.

“Subjectivity and hype,” Starr explained, “make people particularly prone to falling for Next Best Thing-ism.”

“Our collective willingness to jump on the bandwagon,” Starr continued, “seems at odds with one of psychology’s most robust findings: We are averse to uncertainty.  But as it turns out, our reaction to incomplete information depends on our interpretation of the scant data we do have.  Uncertainty is a sort of amplifier, intensifying our response whether it’s positive or negative.  As long as we react positively to the little information shown, we’re actually attracted to uncertainty.  It’s curiosity rather than knowledge that leads to increased cognitive engagement.  If the only information at hand is positive, your mind is going to fill in the gaps with other positive details.  A whiff of positive information is all we need to set our minds aflutter.”

In his book Thinking, Fast and Slow, Daniel Kahneman explained “when people are favorably disposed toward a technology, they rate it as offering large benefits and imposing little risk; when they dislike a technology, they can think only of its disadvantages, and few advantages come to mind.  People who receive a message extolling the benefits of a technology also change their beliefs about its risks.  Good technologies have few costs in the imaginary world we inhabit, bad technologies have no benefits, and all decisions are easy.  In the real world of course, we often face painful tradeoffs between benefits and costs.”

In his book What Technology Wants, Kevin Kelly explained that technology has a social dimension beyond the mere functionality it provides.  “We adopt new technologies largely because of what they do for us, but also in part because of what they mean to us.  Often we refuse to adopt technology for the same reason: because of how the avoidance reinforces or shapes our identity.”

So, in 2013, as the big data hype cycle comes down from the peak of inflated expectations, as the painful tradeoffs between the benefits and costs of cloud computing are faced, and as IT consumerization continues to reshape the identity of the IT function, let’s make an enterprise resolution to deal with these realities before we go off chasing the next best thing.  Happy New Year!

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

An Enterprise Carol

Why does the sun never set on legacy applications?

Are Applications the La Brea Tar Pits for Data?

The Diffusion of the Consumerization of IT

Serving IT with a Side of Hash Browns

The Cloud is shifting our Center of Gravity

A Swift Kick in the AAS

Sometimes all you Need is a Hammer

Shadow IT and the New Prometheus

The IT Consumerization Conundrum

The Diderot Effect of New Technology

More Tethered by the Untethered Enterprise?

The Return of the Dumb Terminal

Magic Elephants, Data Psychics, and Invisible Gorillas

Big Data el Memorioso

Information Overload Revisited

The Limitations of Historical Analysis

OCDQ Radio - The Evolution of Enterprise Security

Enterprise Security and Social Engineering

Can the Enterprise really be Secured?

Open MIKE Podcast — Episode 10

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for Enterprise Information Management, which provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space.  For more information, click on this link: openmethodology.org/wiki/What_is_MIKE2.0

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE2.0 Wiki Articles, Blog Posts, and Discussion Forums.

 

Episode 10: Information Maturity QuickScan

If you’re having trouble viewing this video, you can watch it on Vimeo by clicking on this link: Open MIKE Podcast on Vimeo

 

MIKE2.0 Content Featured in or Related to this Podcast

Information Maturity (IM) QuickScan: openmethodology.org/wiki/Information_Maturity_QuickScan

IM QuickScan Template Documents: openmethodology.org/wiki/QuickScan_MS_Office_survey

Information Maturity Model: openmethodology.org/wiki/Information_Maturity_Model

 

Previous Episodes of the Open MIKE Podcast

Clicking on the link will take you to the episode’s blog post:

Episode 01: Information Management Principles

Episode 02: Information Governance and Distributing Power

Episode 03: Data Quality Improvement and Data Investigation

Episode 04: Metadata Management

Episode 05: Defining Big Data

Episode 06: Getting to Know NoSQL

Episode 07: Guiding Principles for Open Semantic Enterprise

Episode 08: Information Lifecycle Management

Episode 09: Enterprise Data Management Strategy

You can also find the videos and blog post summaries for every episode of the Open MIKE Podcast at: ocdqblog.com/MIKE

Big Data is not just for Big Businesses

“It is widely assumed that big data, which imbues a sense of grandiosity, is only for those large enterprises with enormous amounts of data and the dedicated IT staff to tackle it,” opens the recent article Big data: Why it matters to the midmarket.

Much of the noise generated these days about the big business potential of big data certainly seems to contain very little signal directed at small and midsize businesses.  Although it’s true that big businesses generate more data, faster, and in more varieties, a considerable amount of big data is externally generated, much of which is freely available for use by businesses of all sizes.

The easiest example is the poster child for leveraging big data — Google Search.  But there’s also a growing number of open data sources (e.g., weather data) and social data sources (e.g., Twitter), and, since more of the world is becoming directly digitized, more businesses are now using more data no matter how big they are.  Additionally, as Phil Simon wrote about in The New Small, the free and open source software, as-a-service, cloud, mobile, and social technology trends driving the consumerization of IT are enabling small and midsize businesses to, among other things, use more data and be more competitive with big businesses.

“Each minute of every day, information is produced about the activities of your business, your customers, and your industry,” explained Sarita Harbour in her recent blog post Harnessing Big Data: Giving Midsize Business a Competitive Edge.  “Hidden within this enormous amount of data are trends, patterns, and indicators that, if extracted and identified, can yield important information to make your business more efficient and more competitive, and ultimately, it can make you more money.”
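As a toy illustration of extracting such an indicator, here’s a minimal Python sketch with made-up numbers: a week of daily high temperatures alongside iced-coffee sales, the kind of small pairing of external and internal data any midsize business could assemble.  (Correlation isn’t causation, of course, but it can tell you where to look.)

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data a small business already has on hand:
# daily high temperature (deg F) and iced-coffee sales (units).
temps = [61, 65, 72, 78, 83, 85, 79]
sales = [90, 98, 120, 135, 160, 170, 140]

r = correlation(temps, sales)
print(f"Temperature vs. sales correlation: {r:.2f}")  # strongly positive here

if r > 0.7:
    print("Worth watching the weather data")
```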

However, the biggest driver of the misperception about big data is its over-identification with data volume, which is why, earlier this year in his blog post It’s time for a new definition of big data, Robert Hillard used several examples to explain that big data refers more to big complexity than big volume.  While acknowledging that complex datasets tend to grow rapidly, thus making big data voluminous, his wonderfully pithy conclusion was that “big data can be very small and not all large datasets are big.”

Therefore, by extension we could say that the businesses using big data can be small, or mid-sized, and not all the businesses using big data are big.  But, of course, that’s not quite pithy enough.  So let’s simply say that big data is not just for big businesses.

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Related Posts

Will Big Data be Blinded by Data Science?

Big Data Lessons from Orbitz

The Graystone Effects of Big Data

Word of Mouth has become Word of Data

Information Asymmetry versus Empowered Customers

Talking Business about the Weather

Magic Elephants, Data Psychics, and Invisible Gorillas

Open MIKE Podcast — Episode 05: Defining Big Data

Open MIKE Podcast — Episode 06: Getting to Know NoSQL

OCDQ Radio - Data Quality and Big Data

HoardaBytes and the Big Data Lebowski

Sometimes it’s Okay to be Shallow

How Predictable Are You?

The Wisdom of Crowds, Friends, and Experts

Exercise Better Data Management

A Tale of Two Datas

Darth Vader, Big Data, and Predictive Analytics

The Big Data Theory

Data Management: The Next Generation

Big Data: Structure and Quality

An Enterprise Carol

This blog post is sponsored by the Enterprise CIO Forum and HP.

Since ‘tis the season for reflecting on the past year and predicting the year ahead, while pondering this post my mind wandered to the reflections and predictions provided by the ghosts of A Christmas Carol by Charles Dickens.  So, I decided to let the spirit of Jacob Marley revisit my previous Enterprise CIO Forum posts to bring you the Ghosts of Enterprise Past, Present, and Future.

 

The Ghost of Enterprise Past

Legacy applications have a way of haunting the enterprise long after they should have been sunset.  The reason that most of them do not go gentle into that good night, but instead rage against the dying of their light, is that some users continue using some of the functionality they provide, as well as the data trapped in those applications, to support the enterprise’s daily business activities.

This freaky feature fracture (i.e., technology supporting business needs being splintered across new and legacy applications) leaves many IT departments overburdened with maintaining a lot of technology and data that’s not being used all that much.

The Ghost of Enterprise Past warns us that IT can’t enable the enterprise’s future if it’s stuck still supporting its past.

 

The Ghost of Enterprise Present

While IT was busy battling the Ghost of Enterprise Past, a familiar, but fainter, specter suddenly became empowered by the diffusion of the consumerization of IT.  The rapid ascent of the cloud and mobility, spirited by service-oriented solutions that were more focused on the user experience, promised to quickly deliver only the functionality required right now to support the speed and agility requirements driving the enterprise’s business needs in the present moment.

Gifted by this New Prometheus, Shadow IT emerged from the shadows as the Ghost of Enterprise Present, with business-driven and decentralized IT solutions becoming more commonplace, as well as begrudgingly accepted by IT leaders.

All of which creates quite the IT Conundrum, forming yet another front in the war against Business-IT collaboration.  Although, in the short-term, the consumerization of IT usually better services the technology needs of the enterprise, in the long-term, if it’s not integrated into a cohesive strategy, it creates a complex web of IT that entangles the enterprise much more than it enables it.

And with the enterprise becoming much more of a conceptual, rather than a physical, entity due to the cloud and mobile devices enabling us to take the enterprise with us wherever we go, the evolution of enterprise security is now facing far more daunting challenges than the external security threats we focused on in the past.  This more open business environment is here to stay, and it requires a modern data security model, despite the fact that such a model could become the weakest link in enterprise security.

The Ghost of Enterprise Present asks many questions, but none more frightening than: Can the enterprise really be secured?

 

The Ghost of Enterprise Future

Of course, the T in IT wasn’t the only apparition previously invisible outside of the IT department to recently break through the veil in a big way.  The I in IT also had its coming-out party this year since, as many predicted, 2012 was the year of Big Data.

Although neither the I nor the T is magic, instead of sugar plums, Data Psychics and Magic Elephants appear to be dancing in everyone’s heads this holiday season.  In other words, the predictive power of big data and the technological wizardry of Hadoop (as well as other NoSQL techniques) seem to be on the wish list of every enterprise for the foreseeable future.

However, despite its unquestionable potential, as its hype starts to settle down, the sobering realities of big data analytics will begin to sink in.  Data’s value comes from data’s usefulness.  If all we do is hoard data, then we’ll become so lost in the details that we’ll be unable to connect enough of the dots to discover meaningful patterns and convert big data into useful information that enables the enterprise to take action, make better decisions, or otherwise support its business activities.

Big data will force us to revisit information overload as we are occasionally confronted with the limitations of historical analysis, and blindsided by how our biases and preconceptions could silence the signal and amplify the noise, which will also force us to realize that data quality still matters in big data and that bigger data needs better data management.

As the Ghost of Enterprise Future, big data may haunt us with more questions than the many answers it will no doubt provide.

 

“Bah, Humbug!”

I realize that this post lacks the happy ending of A Christmas Carol.  To paraphrase Dickens, I endeavored in this ghostly little post to raise the ghosts of a few ideas, not to put my readers out of humor with themselves, with each other, or with the season, but simply to give them thoughts to consider about how to keep the Enterprise well in the new year.  Happy Holidays Everyone!

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Why does the sun never set on legacy applications?

Are Applications the La Brea Tar Pits for Data?

The Diffusion of the Consumerization of IT

The Cloud is shifting our Center of Gravity

More Tethered by the Untethered Enterprise?

A Swift Kick in the AAS

The UX Factor

Sometimes all you Need is a Hammer

Shadow IT and the New Prometheus

The IT Consumerization Conundrum

OCDQ Radio - The Evolution of Enterprise Security

The Cloud Security Paradox

The Good, the Bad, and the Secure

The Weakest Link in Enterprise Security

Can the Enterprise really be Secured?

Magic Elephants, Data Psychics, and Invisible Gorillas

Big Data el Memorioso

Information Overload Revisited

The Limitations of Historical Analysis

Data Silence

Open MIKE Podcast — Episode 09

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for Enterprise Information Management, which provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space.  For more information, click on this link: openmethodology.org/wiki/What_is_MIKE2.0

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE2.0 Wiki Articles, Blog Posts, and Discussion Forums.

 

Episode 09: Enterprise Data Management Strategy

If you’re having trouble viewing this video, you can watch it on Vimeo by clicking on this link: Open MIKE Podcast on Vimeo

 

MIKE2.0 Content Featured in or Related to this Podcast

Enterprise Data Management Strategy: openmethodology.org/wiki/Enterprise_Data_Management_Strategy_Solution_Offering

Executive Overview on EDM Strategy: openmethodology.org/w/images/6/6c/Executive_Overview_on_EDM_Strategy.pdf

You can also find the videos and blog post summaries for every episode of the Open MIKE Podcast at: ocdqblog.com/MIKE

Devising a Mobile Device Strategy

As I previously blogged in The Age of the Mobile Device, the disruptiveness of mobile devices to existing business models is difficult to overstate.  Mobile was also cited as one of the complementary technology forces, along with social and cloud, in the recent Harvard Business Review blog post by R “Ray” Wang about new business models being enabled by big data.

Since their disruptiveness to existing IT models is also difficult to overstate, this post ponders the Bring Your Own Device (BYOD) trend that’s forcing businesses of all sizes to devise a mobile device strategy.  BYOD is often not about bringing your own device to the office, but about bringing your own device with you wherever you go (even though the downside of this untethered enterprise may be that our always precarious work-life balance surrenders to the pervasive work-is-life feeling mobile devices can enable).

In his recent InformationWeek article, BYOD: Why Mobile Device Management Isn’t Enough, Michael Davis observed that too many IT departments are not devising a mobile device strategy, but instead “they’re merely scrambling to meet pressure from the CEO on down to offer BYOD options or increase mobile app access.”  Davis also noted that when IT creates BYOD policies, they often fail to acknowledge that mobile devices have to be managed differently, partially because they are not owned by the company.

An alternative to BYOD, which Brian Proffitt recently blogged about, is Corporate Owned, Personally Enabled (COPE). “Plenty of IT departments see BYOD as a demon to be exorcised from the cubicle farms,” Proffitt explained, “or an opportunity to dump the responsibility for hardware upkeep on their internal customers.  The idea behind BYOD is to let end users choose the devices, programs, and services that best meet their personal and business needs, with access, support, and security supplied by the company IT department — often with subsidies for device purchases.”  Whereas the idea behind COPE is “the organization buys the device and still owns it, but the employee is allowed, within reason, to install the applications they want on the device.”

Whether you opt for BYOD or COPE, Information Management recently highlighted 5 Trouble Spots to consider.  These included assuming that mobile device security is already taken care of by in-house security initiatives, data integration disconnects with on-premises data essentially turning mobile devices into mobile data silos, and the mixing of personal and business data, which complicates, among other things, remotely wiping the data on a mobile device in the event of a theft or security violation.  That last trouble spot is why, as Davis concluded, managing the company data on the device is more important than managing the device itself.
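That conclusion is easy to sketch in code.  Here’s a toy Python model (not a real mobile device management API) of a selective wipe that removes only the company’s data and leaves the employee’s personal data untouched:

```python
# Toy model of a device's storage, with personal and corporate data mixed.
device_storage = {
    "personal/photos/beach.jpg": b"...",
    "personal/contacts.vcf":     b"...",
    "corp/sales_forecast.xlsx":  b"...",
    "corp/customer_list.csv":    b"...",
}

def selective_wipe(storage: dict, corporate_prefix: str = "corp/") -> int:
    """Remove only company data: manage the data on the device,
    rather than the device itself."""
    corporate_keys = [k for k in storage if k.startswith(corporate_prefix)]
    for key in corporate_keys:
        del storage[key]
    return len(corporate_keys)

wiped = selective_wipe(device_storage)
print(f"Wiped {wiped} corporate files")  # Wiped 2 corporate files
print(sorted(device_storage))            # only personal/ items remain
```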

With the complex business and IT challenges involved, how is your midsize business devising a mobile device strategy?

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Related Posts

The Age of the Mobile Device

The Return of the Dumb Terminal

More Tethered by the Untethered Enterprise?

OCDQ Radio - Social Media for Midsize Businesses

Social Media Marketing: From Monologues to Dialogues

Social Business is more than Social Marketing

The Cloud is shifting our Center of Gravity

Barriers to Cloud Adoption

OCDQ Radio - Cloud Computing for Midsize Businesses

Cloud Computing is the New Nimbyism

The Cloud Security Paradox

OCDQ Radio - The Evolution of Enterprise Security

The Graystone Effects of Big Data

Big Data Lessons from Orbitz

Will Big Data be Blinded by Data Science?

The Wisdom of Crowds, Friends, and Experts

I recently finished reading the TED Book by Jim Hornthal, A Haystack Full of Needles, which included an overview of the different predictive approaches taken by recommendation engines, one of the most common forms of data-driven decision making in the era of big data, increasingly provided by websites, social networks, and mobile apps.

These recommendation engines primarily employ one of three techniques, choosing to base their data-driven recommendations on the “wisdom” provided by either crowds, friends, or experts.

 

The Wisdom of Crowds

In his book The Wisdom of Crowds, James Surowiecki explained that the four conditions characterizing wise crowds are diversity of opinion, independent thinking, decentralization, and aggregation.  Amazon is a great example of a recommendation engine using this approach by assuming that a sufficiently large population of buyers is a good proxy for your purchasing decisions.

For example, Amazon tells you that people who bought James Surowiecki’s bestselling book also bought Thinking, Fast and Slow by Daniel Kahneman, Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business by Jeff Howe, and Wikinomics: How Mass Collaboration Changes Everything by Don Tapscott.  However, Amazon neither provides nor possesses knowledge of why people bought all four of these books, nor any qualification of the subject matter expertise of these readers.

These concerns, which we could think of as potential data quality issues, would be exacerbated within a small amount of transaction data, where the eclectic tastes and idiosyncrasies of individual readers would not help us decide what books to buy.  Within a large amount of transaction data, however, we achieve the Wisdom of Crowds effect: taken in aggregate, we receive a general sense of what books we might like to read based on what a diverse group of readers collectively makes popular.

As I blogged about in my post Sometimes it’s Okay to be Shallow, sometimes the aggregated, general sentiment of a large group of unknown, unqualified strangers will be sufficient to effectively make certain decisions.
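A minimal sketch of this approach, in Python with made-up purchase histories, is a simple co-occurrence count: recommend whatever was most often bought alongside the item in question, in aggregate and with no knowledge of why.

```python
from collections import Counter

# Hypothetical purchase histories (one basket per buyer).
purchases = [
    ["The Wisdom of Crowds", "Thinking, Fast and Slow"],
    ["The Wisdom of Crowds", "Crowdsourcing", "Wikinomics"],
    ["The Wisdom of Crowds", "Thinking, Fast and Slow", "Wikinomics"],
    ["Moby-Dick"],
]

def also_bought(item, baskets):
    """Items most often co-purchased with `item`, most popular first."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(b for b in basket if b != item)
    return counts.most_common()

print(also_bought("The Wisdom of Crowds", purchases))
# [('Thinking, Fast and Slow', 2), ('Wikinomics', 2), ('Crowdsourcing', 1)]
```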

 

The Wisdom of Friends

Although the influence of our friends and family is the oldest form of data-driven decision making, historically this influence was delivered by word of mouth, which required you to either be there to hear those influential words when they were spoken, or to know a large enough network of people who could eventually pass those words along to you.

But the rise of social networking services, such as Twitter and Facebook, has transformed word of mouth into word of data by transcribing our words into short bursts of social data, such as status updates, online reviews, and blog posts.

Facebook “Likes” are a great example of a recommendation engine that uses the Wisdom of Friends, where our decision to buy a book, see a movie, or listen to a song might be based on whether or not our friends like it.  Of course, “friends” is used in a very loose sense in a social network, and not just on Facebook, since it combines strong connections, such as actual friends and family, with weak connections, such as acquaintances, friends of friends, and total strangers from the periphery of our social network.

Social influence has never ended with the people we know well, as Nicholas Christakis and James Fowler explained in their book Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives.  But the hyper-connected world enabled by the Internet, and further facilitated by mobile devices, has strengthened the social influence of weak connections, and these friends form a smaller crowd whose wisdom is involved in more of our decisions than we may even be aware of.
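A minimal Python sketch of this approach, with a made-up social graph, weights each “like” by tie strength, so close friends count for more than the weak connections at the periphery of the network:

```python
# Hypothetical tie strengths: 1.0 for close friends and family,
# 0.2 for weak connections such as acquaintances.
ties = {"alice": 1.0, "bob": 1.0, "carol": 0.2, "dave": 0.2}

likes = {
    "alice": ["Connected"],
    "bob":   ["Connected", "The Signal and the Noise"],
    "carol": ["The Signal and the Noise"],
    "dave":  ["The Signal and the Noise"],
}

def friend_score(item):
    """Sum of the tie strengths of everyone who likes the item."""
    return sum(w for friend, w in ties.items() if item in likes.get(friend, []))

for book in ["Connected", "The Signal and the Noise"]:
    print(f"{book}: {friend_score(book):.1f}")
# Connected: 2.0                 (two strong ties)
# The Signal and the Noise: 1.4  (one strong tie plus two weak ties)
```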

 

The Wisdom of Experts

Since it’s more common to associate wisdom with expertise, Pandora is a great example of a recommendation engine that uses the Wisdom of Experts.  Pandora used a team of musicologists (professional musicians and scholars with advanced degrees in music theory) to deconstruct more than 800,000 songs into 450 musical elements that make up each performance, including qualities of melody, harmony, rhythm, form, composition, and lyrics, as part of what Pandora calls the Music Genome Project.

As Pandora explains, their methodology uses precisely defined terminology, a consistent frame of reference, redundant analysis, and ongoing quality control to ensure that data integrity remains reliably high, believing that delivering a great radio experience to each and every listener requires an incredibly broad and deep understanding of music.

Essentially, experts form the smallest crowd of wisdom.  Of course, experts are not always right.  At the very least, experts are not right about every one of their predictions.  Nor do experts always agree with each other, which is why I imagine that one of the most challenging aspects of the Music Genome Project is getting music experts to consistently apply precisely the same methodology.

Pandora also acknowledges that each individual has a unique relationship with music (i.e., no one else has tastes exactly like yours), and allows you to “Thumbs Up” or “Thumbs Down” songs without affecting other users, producing more personalized results than either the popularity predicted by the Wisdom of Crowds or the similarity predicted by the Wisdom of Friends.
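A minimal Python sketch of this approach scores each song on a handful of expert-coded attributes (the attributes and scores below are made up, standing in for the Music Genome Project’s hand-coded elements) and recommends by similarity to a seed song:

```python
import math

# Hypothetical expert-scored attributes per song, each on a 0..1 scale
# (e.g., qualities of melody, rhythm, and lyrics).
songs = {
    "Song A": [0.9, 0.2, 0.7],
    "Song B": [0.8, 0.3, 0.6],
    "Song C": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

seed = songs["Song A"]
ranked = sorted(
    ((cosine(seed, vec), name) for name, vec in songs.items() if name != "Song A"),
    reverse=True,
)
for score, name in ranked:
    print(f"{name}: {score:.2f}")  # Song B ranks well above Song C
```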

 

The Future of Wisdom

It’s interesting to note that the Wisdom of Experts is the only one of these approaches that relies on what data management and business intelligence professionals would consider a rigorous approach to data quality and decision quality best practices.  But this is also why the Wisdom of Experts is the most time-consuming and expensive approach to data-driven decision making.

In the past, the Wisdom of Crowds and Friends was ignored in data-driven decision making for the simple reason that this potential wisdom wasn’t digitized.  But now, in the era of big data, not only are crowds and friends digitized, but technological advancements combined with cost-effective options via open source (data and software) and cloud computing make these approaches quicker and cheaper than the Wisdom of Experts.  And despite the potential data quality and decision quality issues, the Wisdom of Crowds and/or Friends is proving itself a viable option for more categories of data-driven decision making.

I predict that the future of wisdom will increasingly become an amalgamation of experts, friends, and crowds, with the data and techniques from all three potential sources of wisdom often acknowledged as contributors to data-driven decision making.
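One simple form such an amalgamation might take, sketched below in Python, is a weighted blend of the three signals, with each signal normalized to a common scale; the weights are illustrative, not a recommendation.

```python
def blended_score(expert, friends, crowd, weights=(0.5, 0.3, 0.2)):
    """Blend the three wisdom signals (each normalized to 0..1)."""
    w_expert, w_friends, w_crowd = weights
    return w_expert * expert + w_friends * friends + w_crowd * crowd

# An item the experts rate highly, some friends like,
# and the crowd has made moderately popular:
print(f"{blended_score(expert=0.9, friends=0.6, crowd=0.4):.2f}")  # 0.71
```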

 

Related Posts

Sometimes it’s Okay to be Shallow

Word of Mouth has become Word of Data

The Wisdom of the Social Media Crowd

Data Management: The Next Generation

Exercise Better Data Management

Darth Vader, Big Data, and Predictive Analytics

Data-Driven Intuition

The Big Data Theory

Finding a Needle in a Needle Stack

Big Data, Predictive Analytics, and the Ideal Chronicler

The Limitations of Historical Analysis

Magic Elephants, Data Psychics, and Invisible Gorillas

OCDQ Radio - Data Quality and Big Data

Big Data: Structure and Quality

HoardaBytes and the Big Data Lebowski

The Data-Decision Symphony

OCDQ Radio - Decision Management Systems

A Tale of Two Datas

Open MIKE Podcast — Episode 08

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for Enterprise Information Management, which provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space.  For more information, click on this link: openmethodology.org/wiki/What_is_MIKE2.0

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE2.0 Wiki Articles, Blog Posts, and Discussion Forums.

 

Episode 08: Information Lifecycle Management

If you’re having trouble viewing this video, you can watch it on Vimeo by clicking on this link: Open MIKE Podcast on Vimeo

 

MIKE2.0 Content Featured in or Related to this Podcast

Information Asset Management: openmethodology.org/wiki/Information_Asset_Management_Offering_Group

Information Lifecycle Management: openmethodology.org/wiki/Information_Lifecycle_Management_Solution_Offering

You can also find the videos and blog post summaries for every episode of the Open MIKE Podcast at: ocdqblog.com/MIKE