Big Data Lessons from Orbitz

One of the week’s interesting technology stories was On Orbitz, Mac Users Steered to Pricier Hotels, an article by Dana Mattioli in The Wall Street Journal, about how online travel company Orbitz used data mining to discover significant spending differences between their Mac and PC customers (who were identified by the operating system of the computer used to book reservations).

Orbitz discovered that Mac users are 40% more likely to book a four- or five-star hotel, and tend to stay in more expensive rooms, spending on average $20 to $30 more a night on hotels.  Based on this discovery, Orbitz has been experimenting with showing different hotel offers to Mac and PC visitors, ranking the more expensive hotels on the first page of search results for Mac users.

This Orbitz story is interesting because I think it provides two important lessons about big data for businesses of all sizes.

The first lesson is, as Mattioli reported, “the sort of targeting undertaken by Orbitz is likely to become more commonplace as online retailers scramble to identify new ways in which people’s browsing data can be used to boost online sales.  Orbitz lost $37 million in 2011 and its stock has fallen by more than 74% since its 2007 IPO.  The effort underscores how retailers are becoming bigger users of so-called predictive analytics, crunching reams of data to guess the future shopping habits of customers.  The goal is to tailor offerings to people believed to have the highest lifetime value to the retailer.”

The second lesson is a good example of how word of mouth has become word of data.  Shortly after the article was published, Orbitz became a trending topic on Twitter — but not in a way that the company would have hoped.  A lot of negative sentiment was expressed by Mac users claiming that they would no longer use Orbitz since they charged Mac users more than PC users.

However, this commonly expressed misunderstanding was clarified by an Orbitz spokesperson in the article, who explained that Orbitz is not charging Mac users more money for the same hotels; it is simply setting the default search ranking to show Mac users the more expensive hotels first.  Mac users can always re-sort the results by ascending price to see the same less expensive hotels that appear in the default ranking shown to PC users.  Orbitz is attempting to offer a customized (albeit generalized, not personalized) user experience, but some users see it as gaming the system against them.
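To make that distinction concrete, here is a minimal, hypothetical sketch of the behavior as reported: the same hotel inventory and prices for everyone, with only the default ordering varying by operating system.  The hotel data and the ranking rule below are made-up illustrations, not Orbitz’s actual logic.

```python
# Hypothetical illustration of the reported behavior: identical inventory and
# prices for all users; only the *default* sort order differs by operating system.
# Hotel names, prices, and the ranking rule are made up.

hotels = [
    {"name": "Budget Inn", "stars": 2, "price": 89},
    {"name": "Midtown Suites", "stars": 3, "price": 139},
    {"name": "Harbor Grand", "stars": 4, "price": 219},
    {"name": "The Regent", "stars": 5, "price": 289},
]

def default_results(hotels, user_os):
    """Return the same hotels for everyone; only the default ordering differs."""
    if user_os == "Mac":
        # Reported behavior: pricier, higher-star properties ranked first.
        return sorted(hotels, key=lambda h: (h["stars"], h["price"]), reverse=True)
    # PC default: least expensive first.
    return sorted(hotels, key=lambda h: h["price"])

mac_view = default_results(hotels, "Mac")
pc_view = default_results(hotels, "PC")

# Re-sorting the Mac results by ascending price yields exactly the PC view:
# same hotels, same prices, different default ranking.
assert sorted(mac_view, key=lambda h: h["price"]) == pc_view
```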

This Orbitz story provides two lessons about the brave new business world brought to us by big data and data science, where more companies are using predictive analytics to discover business insights, and more customers are empowering themselves with data.

Business has always resembled a battlefield.  But nowadays, data is the weapon of choice for companies and customers alike, since, in our increasingly data-constructed world, big data is no longer just for big companies, and everyone is a data geek now.

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

The Return of the Dumb Terminal

This blog post is sponsored by the Enterprise CIO Forum and HP.

In his book What Technology Wants, Kevin Kelly observed “computers are becoming ever more general-purpose machines as they swallow more and more functions.  Entire occupations and their workers’ tools have been subsumed by the contraptions of computation and networks.  You can no longer tell what a person does by looking at their workplace, because 90 percent of employees are using the same tool — a personal computer.  Is that the desk of the CEO, the accountant, the designer, or the receptionist?  This is amplified by cloud computing, where the actual work is done on the net as a whole and the tool at hand merely becomes a portal to the work.  All portals have become the simplest possible window — a flat screen of some size.”

Although I am an advocate for cloud computing and cloud-based services, sometimes I can’t help but wonder if cloud computing is turning our personal computers back into that simplest of all possible windows that we called the dumb terminal.

Twenty years ago, at the beginning of my IT career, when I was a mainframe production support specialist, my employer gave me a dumb terminal to take home for connecting to the mainframe via my dial-up modem.  Since I used it late at night when dealing with nightly production issues, the aptly nicknamed green machine (its entirely text-based display used bright green characters) would make my small apartment eerily glow green, which convinced my roommate and my neighbors that I was some kind of mad scientist performing unsanctioned midnight experiments with radioactive materials.

The dumb terminal was so called because, when not connected to the mainframe, it was essentially a giant paperweight, since it provided no offline functionality.  Nowadays, our terminals (smartphones, tablets, and laptops) are smarter and provide varying degrees of offline functionality, but in some sense, with more functionality moving to the cloud, they get dumbed back down whenever they’re not connected to the web or a mobile network, because most of what we really need is online.

It can even be argued that smartphones and tablets were actually designed to be dumb terminals because they intentionally offer limited offline data storage and computing power, and are mostly based on a mobile-app-portal-to-the-cloud computing model, which is well-supported by the widespread availability of high-speed network connectivity options (broadband, mobile, Wi-Fi).

Laptops (and the dwindling number of desktops) are the last bastions of offline data storage and computing power.  Moving more of those applications and data to the cloud would help eliminate redundant applications and duplicated data, and make it easier to use the right technology for a specific business problem.  And if most of our personal computers were dumb terminals, then our smart people could concentrate more on the user experience aspects of business-enabling information technology.

Perhaps the return of the dumb terminal is a smart idea after all.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

A Swift Kick in the AAS

The UX Factor

The Partly Cloudy CIO

Are Cloud Providers the Bounty Hunters of IT?

The Cloud Security Paradox

Sometimes all you Need is a Hammer

Why does the sun never set on legacy applications?

Are Applications the La Brea Tar Pits for Data?

The Diffusion of the Consumerization of IT

More Tethered by the Untethered Enterprise?

The Family Circus and Data Quality


Like many young intellectuals, the only part of the Sunday newspaper I read growing up was the color comics section, and one of my favorite comic strips was The Family Circus created by cartoonist Bil Keane.  One of the recurring themes of the comic strip was a set of invisible gremlins that the children used to shift blame for any misdeeds, including Ida Know, Not Me, and Nobody.

Although I no longer read any section of the newspaper on any day of the week, this Sunday morning I have been contemplating how this same set of invisible gremlins is used by many people throughout most organizations to shift blame for any incidents where poor data quality negatively impacts business activities, especially since, when investigating the root cause, you often find that Ida Know owns the data, Not Me is accountable for data governance, and Nobody takes responsibility for data quality.

The Graystone Effects of Big Data

As a big data geek and a big fan of science fiction, I was intrigued by Zoe Graystone, the central character of the science fiction television show Caprica, which was a spin-off prequel of the re-imagined Battlestar Galactica television show.

Zoe Graystone was a teenage computer programming genius who created a virtual reality avatar of herself based on all of the available data about her own life, leveraging roughly 100 terabytes of personal data from numerous databases.  This allowed her avatar to access data from her medical files, DNA profiles, genetic typing, CAT scans, synaptic records, psychological evaluations, school records, emails, text messages, phone calls, audio and video recordings, security camera footage, talent shows, sports, restaurant bills, shopping receipts, online search history, music lists, movie tickets, and television shows.  The avatar transformed that big data into personality and memory, and believably mimicked the real Zoe Graystone within a virtual reality environment.

The best science fiction reveals just how thin the line is that separates imagination from reality.  Over thirty years ago, around the time of the original Battlestar Galactica television show, virtual reality avatars based on massive amounts of personal data would likely have been dismissed as pure fantasy.  But nowadays, during the era of big data and data science, the idea of Zoe Graystone creating a virtual reality avatar of herself doesn’t sound so far-fetched, nor is it pure data science fiction.

“On Facebook,” Ellis Hamburger recently blogged, “you’re the sum of all your interactions and photos with others.  Foursquare began its life as a way to see what your friends are up to, but it has quickly evolved into a life-logging tool / artificial intelligence that knows you like an old friend does.”

Facebook and Foursquare are just two social media examples of our increasingly data-constructed world, which is creating a virtual reality environment where our data has become our avatar and our digital mouths are speaking volumes about us.

Big data and real data science are enabling people and businesses of all sizes to put this virtual reality environment to good use, such as customers empowering themselves with data and companies using predictive analytics to discover business insights.

I refer to the positive aspects of Big Data as the Zoe Graystone Effect.

But there are also negative aspects to the virtual reality created by our big data avatars.  For example, in his recent blog post Rethinking Privacy in an Era of Big Data, Quentin Hardy explained “by triangulating different sets of data (you are suddenly asking lots of people on LinkedIn for endorsements on you as a worker, and on Foursquare you seem to be checking in at midday near a competitor’s location), people can now conclude things about you (you’re probably interviewing for a job there).”

On the Caprica television show, Daniel Graystone (Zoe’s father) used Zoe’s avatar as the basis for an operating system for a race of sentient machines known as Cylons, which ultimately led to the Cylon Wars and the destruction of most of humanity.  A far less dramatic example from the real world, which I explained in my blog post The Data Cold War, is how companies like Google use the virtual reality created by our big data avatars against us by selling our personal data (albeit indirectly) to advertisers.

I refer to the negative aspects of Big Data as the Daniel Graystone Effect.

How have your personal life and your business activities been affected by the Graystone Effects of Big Data?

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Sometimes all you Need is a Hammer

This blog post is sponsored by the Enterprise CIO Forum and HP.

“If all you have is a hammer, everything looks like a nail” is a popular phrase, also known as the law of the instrument, which describes an over-reliance on a familiar tool, as opposed to using “the right tool for the job.”  In information technology (IT), the law of the instrument is often invoked to justify the need to purchase the right technology to solve a specific business problem.

However, within the IT industry, it has become increasingly difficult over the years to buy the right tool for the job since many leading vendors make it nearly impossible to buy just an individual tool.  Instead, vendors want you to buy their entire tool box, filled with many tools for which you have no immediate need, and some tools which you have no idea why you would ever need.

It’d be like going to a hardware store to buy just a hammer, only to have the store refuse to sell you the hammer without also selling you a 10-piece set of screwdrivers, a 4-piece set of pliers, an 18-piece set of wrenches, and an industrial-strength nail gun.

My point is that many new IT innovations originate from small, entrepreneurial vendors, which tend to be specialists with a very narrow focus that can provide a great source of rapid innovation.  This is in sharp contrast to the large, enterprise-class vendors, which tend to innovate via acquisition and consolidation, embedding tools and other technology components within generalized IT platforms, allowing these mega-vendors to offer end-to-end solutions and the convenience of one-vendor IT shopping.

But the consumerization of IT, driven by the unrelenting trends of cloud computing, SaaS, and mobility, is fostering a return to specialization, a return to being able to buy only the information technology that you currently need — the right tool for the job, and often at the right price, precisely because it’s almost always more cost-effective to buy only what you need right now.

I am not trying to criticize traditional IT vendors that remain off-premises-resistant by exclusively selling on-premises solutions, which the vendors positively call enterprise-class solutions, but which their customers often come to negatively call legacy applications.

I understand the economics of the IT industry.  Vendors can make more money with fewer customers by selling on-premises IT platforms with six-or-seven-figure licenses plus five-figure annual maintenance fees, as opposed to selling cloud-based services with three-or-four-figure pay-as-you-go-cancel-anytime monthly subscriptions.  The former is the big-ticket business model of the vendorization of IT.  The latter is the big-volume business model of the consumerization of IT.  Essentially, this is a paradigm shift that makes IT more of a consumer-driven marketplace, and less of the vendor-driven marketplace it has historically been.

Although it remains true that if all you have is a hammer, everything looks like a nail, sometimes all you need is a hammer.  And when all you need is a hammer, you shouldn’t get nailed by vendors selling you more information technology than you need.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Can Enterprise-Class Solutions Ever Deliver ROI?

Why does the sun never set on legacy applications?

The Diffusion of the Consumerization of IT

The IT Consumerization Conundrum

The UX Factor

A Swift Kick in the AAS

Shadow IT and the New Prometheus

The Cloud Security Paradox

Are Cloud Providers the Bounty Hunters of IT?

The Partly Cloudy CIO

Data Quality and the Bystander Effect

In his recent Harvard Business Review blog post Break the Bad Data Habit, Tom Redman cautioned against correcting data quality issues without providing feedback to where the data originated.  “At a minimum,” Redman explained, “others using the erred data may not spot the error.  There is no telling where it might turn up or who might be victimized.”  And correcting bad data without providing feedback to its source also denies the organization an opportunity to get to the bottom of the problem.

“And failure to provide feedback,” Redman continued, “is but the proximate cause.  The deeper root issue is misplaced accountability — or failure to recognize that accountability for data is needed at all.  People and departments must continue to seek out and correct errors.  They must also provide feedback and communicate requirements to their data sources.”

In his blog post The Secret to an Effective Data Quality Feedback Loop, Dylan Jones responded to Redman’s blog post with some excellent insights regarding data quality feedback loops and how they can help improve your data quality initiatives.

I definitely agree with Redman and Jones about the need for feedback loops, but I have found, more often than not, that no feedback at all is provided on data quality issues because of the assumption that data quality is someone else’s responsibility.
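For readers who like to see what closing the loop might look like in practice, here is a minimal sketch: the local correction still happens, but it is always paired with a report back to the data’s source.  The record layout, the correction rule, and the report_to_source function are hypothetical illustrations, not a prescription from Redman or Jones.

```python
# Minimal sketch of a data quality feedback loop: correct the error for our own
# use, but also report it back to the data's source so the root cause can be fixed.
# The record layout, the correction rule, and report_to_source() are hypothetical.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dq_feedback")

def report_to_source(source_system, record_id, field, bad_value, corrected_value):
    """Stand-in for opening a ticket or messaging the upstream data owner."""
    log.info(
        "DQ issue for %s record %s: field %r was %r, corrected to %r",
        source_system, record_id, field, bad_value, corrected_value,
    )

def correct_with_feedback(record, source_system):
    """Fix what we can for our own use, but never silently."""
    country = record.get("country")
    if country and country != country.strip():   # stray whitespace breaks joins downstream
        corrected = country.strip()
        report_to_source(source_system, record["id"], "country", country, corrected)
        record["country"] = corrected
    return record

correct_with_feedback({"id": 42, "country": "USA "}, source_system="CRM")
```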

This general lack of accountability for data quality issues is similar to what is known in psychology as the Bystander Effect, which refers to people often not offering assistance to the victim in an emergency situation when other people are present.  Apparently, the mere presence of other bystanders greatly decreases intervention, and the greater the number of bystanders, the less likely it is that any one of them will help.  Psychologists believe that the reason this happens is that as the number of bystanders increases, any given bystander is less likely to interpret the incident as a problem, and less likely to assume responsibility for taking action.

In my experience, the most common reason that data quality issues are often neither reported nor corrected is that most people throughout the enterprise act like data quality bystanders, which makes them less likely to interpret bad data as a problem, or, at the very least, as their responsibility.  But the enterprise’s data quality is perhaps most negatively affected by this bystander effect, which may make it the worst bad data habit that the enterprise needs to break.

Word of Mouth has become Word of Data

In a previous post about overcoming information asymmetry, I discussed one of the ways that customers are changing the balance of power in the retail industry.  During last week’s Mid-Market Smarter Commerce Tweet Chat, the first question was:

Why does measuring social media matter for the retail industry today?

My response was: Word of Mouth has become Word of Data.  In this blog post, I want to explain what I meant by that.

Historically, information reached customers in one of two ways, either through advertising or word of mouth.  The latter was usually words coming from the influential mouths of family and friends, but sometimes from strangers with relevant experience or expertise.  Either way, those words were considered more credible than advertising based on the assumption that the mouths saying them didn’t stand to gain anything personally from sharing their opinions about a company, product, or service.

The biggest challenge facing word of mouth was that you either had to be there to hear the words when they were spoken, or you needed a large enough network of people who could pass those words along to you.  The latter was like playing the children’s game of broken telephone, since relying on verbally transmitted information about any subject, and perhaps especially about a purchasing decision, was dubious when that information arrived via one or more intermediaries.

But the rise of social networking services, like Twitter, Facebook, and Google Plus, has changed the game, especially now that our broken telephones have been replaced with smartphones.  Not only is our social network larger (albeit still mostly comprised of intermediate connections), but, more important, our conversations are essentially being transcribed — our words no longer just leave our mouths, but are also exchanged in short bursts of social data via tweets, status updates, online reviews, and blog posts.

And it could be argued that our social data has a more active social life than we do, since all of our data interacts with the data from other users within and across our social networks, participating in conversations that keep on going long after we have logged out.  Influential tweets get re-tweeted.  Meaningful status updates and blog posts receive comments.  Votes determine which online reviews are most helpful.  This ongoing conversation enriches the information customers have available to them.

Although listening to customers has always been important, gathering customer feedback used to be a challenge.  But nowadays, customers provide their feedback to retailers, and share their experiences with other customers, via social media.  Word of mouth has become word of data.  The digital mouths of customers speak volumes.  The voice of the customer has become empowered by social media, changing the balance of power in the retail industry, and putting customers in control of the conversation.

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

More Tethered by the Untethered Enterprise?

This blog post is sponsored by the Enterprise CIO Forum and HP.

A new term I have been hearing more frequently lately is the Untethered Enterprise.  Like many new terms, definitions vary, but for me at least, it conjures up images of cutting the cords and wires that tether the enterprise to a specific physical location, and tether the business activities of its employees to specific time frames during specific days of the week.

There was a time, not too long ago, when the hard-wired desk phones and Ethernet-cabled desktop PCs used by employees between the hours of 9 AM and 5 PM, Monday through Friday, within the office spaces of the organization were how, when, and where the vast majority of the enterprise’s business activities were conducted.

Then came the first generation of mobile phones — the ones that only made phone calls.  And laptop computers, which initially supplemented desktop PCs, but typically only for those employees with a job requiring them to regularly work outside the office, such as traveling salespeople.  Eventually, laptops became the primary work computer with docking stations allowing them to connect to keyboards and monitors while working in the office, and providing most employees with the option of taking their work home with them.  Then the next generations of mobile phones brought text messaging, e-mail, and as Wi-Fi networks became more prevalent, full Internet access, which completed the education of the mobile phone, graduating it to a smartphone.

These smartphones are now supplemented by either a laptop or a tablet, or sometimes both.  These devices are either provided by the enterprise or, with the burgeoning Bring Your Own Device (BYOD) movement, employees are allowed to use their personal smartphones, laptops, and tablets for business purposes.  Either way, enabled by the growing availability of cloud-based services, many employees of most organizations are now capable of conducting business anywhere at any time.  And beyond a capability, some enterprises foster the expectation that their employees demonstrate a willingness to conduct business anywhere at any time.

I acknowledge its potential for increasing productivity and better supporting the demands of today’s fast-paced business world, but I can’t help but wonder whether the enterprise and its employees will feel more tethered by the untethered enterprise.  When we can no longer unplug because there’s nothing left to unplug, our always precarious work-life balance seems to surrender to the pervasive work-is-life feeling enabled by the untethered enterprise.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

The Diffusion of the Consumerization of IT

Serving IT with a Side of Hash Browns

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

The UX Factor

A Swift Kick in the AAS

Shadow IT and the New Prometheus

The Diderot Effect of New Technology

Are Cloud Providers the Bounty Hunters of IT?

The IT Pendulum and the Federated Future of IT

Information Asymmetry versus Empowered Customers

Information asymmetry is a term from economics describing how one party involved in a transaction typically has more or better information than the other party.  Perhaps the easiest example of information asymmetry is retail sales, where historically the retailer has always had more or better information than the customer about a product that is about to be purchased.

Generally speaking, information asymmetry is advantageous for the retailer, allowing them to manipulate the customer into purchasing products that benefit the retailer’s goals (e.g., maximizing profit margins or unloading excess inventory) more than the customer’s goals (e.g., paying a fair price or buying the product that best suits their needs).  I don’t mean to demonize the retail industry, but for a long time, I’m pretty sure its unofficial motto was: “An uninformed customer is the best customer.”

Let’s consider the example of purchasing a high-definition television (HDTV) since it demonstrates how information asymmetry is not always about holding back useful information, but also bombarding customers with useless information.  In this example, it’s about bombarding customers with useless technical jargon, such as refresh rate, resolution, and contrast ratio.

To an uninformed customer, it certainly sounds like it makes sense that the HDTV with a 240Hz refresh rate, 1080p resolution, and 2,000,000:1 contrast ratio is better than the one with a 120Hz refresh rate, 720p resolution, and 1,000,000:1 contrast ratio.

After all, 240 > 120, 1080 > 720, and 2,000,000 > 1,000,000, right?  Yes — but what do any of those numbers actually mean?

The reality is that refresh rate, resolution, and contrast ratio are just three examples of useless HDTV specifications because they essentially provide no meaningful information about the video quality of the television.  This information is advantageous to only one party involved in the transaction — the retailer — since it appears to justify the higher price of an allegedly better product.

But nowadays fewer customers are falling for these tricks.  Performing a quick Internet search, either before going shopping or on their mobile phone while at the store, is balancing out some of the information asymmetry in retail sales and empowering customers to make better purchasing decisions.  With the increasing availability of broadband Internet and mobile connectivity, today’s empowered customer arrives at the retail front lines armed and ready to do battle with information asymmetry.

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

The Data Quality Placebo

Inspired by a recent Boing Boing blog post

Are you suffering from persistent and annoying data quality issues?  Or are you suffering from the persistence of data quality tool vendors and consultants annoying you with sales pitches about how you must be suffering from persistent data quality issues?

Either way, the Data Division of Prescott Pharmaceuticals (trusted makers of gastroflux, datamine, selectium, and qualitol) is proud to present the perfect solution to all of your real and/or imaginary data quality issues — The Data Quality Placebo.

Simply take two capsules (made with an easy-to-swallow coating) every morning and you will be guaranteed to experience:

“Zero Defects with Zero Side Effects”™

(Legal Disclaimer: Zero Defects with Zero Side Effects may be the result of Zero Testing, which itself is probably just a side effect of The Prescott Promise: “We can promise you that we will never test any of our products on animals because . . . we never test any of our products.”)

How Data Cleansing Saves Lives

When it comes to data quality best practices, it’s often argued, and sometimes quite vehemently, that proactive defect prevention is far superior to reactive data cleansing.  Advocates of defect prevention sometimes admit that data cleansing is a necessary evil.  However, at least in my experience, most of the time they conveniently, and ironically, cleanse (i.e., drop) the word necessary.

Therefore, I thought I would share a story about how data cleansing saves lives, which I read about in the highly recommended book Space Chronicles: Facing the Ultimate Frontier by Neil deGrasse Tyson.  “Soon after the Hubble Space Telescope was launched in April 1990, NASA engineers realized that the telescope’s primary mirror—which gathers and reflects the light from celestial objects into its cameras and spectrographs—had been ground to an incorrect shape.  In other words, the two-billion dollar telescope was producing fuzzy images.  That was bad.  As if to make lemonade out of lemons, though, computer algorithms came to the rescue.  Investigators at the Space Telescope Science Institute in Baltimore, Maryland, developed a range of clever and innovative image-processing techniques to compensate for some of Hubble’s shortcomings.”

In other words, since it would be three years before Hubble’s faulty optics could be repaired during a 1993 space shuttle mission, data cleansing allowed astrophysicists to make good use of Hubble despite the bad data quality of its early images.
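The actual Hubble algorithms were far more sophisticated, but a minimal sketch of one standard deblurring technique, Richardson-Lucy deconvolution, gives a feel for what cleansing a blurry image involves.  The point-spread function, the synthetic star field, and the iteration count below are illustrative assumptions, not the real parameters used by the Space Telescope Science Institute.

```python
# A minimal sketch of Richardson-Lucy deconvolution, one standard technique for
# recovering detail from an image blurred by a known point-spread function (PSF).
# The PSF, the synthetic star field, and the iteration count are illustrative only.

import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, psf, iterations=30):
    """Iteratively estimate the unblurred image from a blurry observation."""
    estimate = np.full_like(observed, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred_estimate = convolve2d(estimate, psf, mode="same", boundary="symm")
        ratio = observed / (blurred_estimate + 1e-12)   # avoid division by zero
        estimate *= convolve2d(ratio, psf_mirror, mode="same", boundary="symm")
    return estimate

# Toy example: blur a synthetic "star field" with a small kernel, then deblur it.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0   # 20 point sources
psf = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0        # normalized 5x5 blur kernel
blurry = convolve2d(truth, psf, mode="same", boundary="symm")
restored = richardson_lucy(blurry, psf)   # sharper estimate of the original scene
```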

So, data cleansing algorithms saved Hubble’s fuzzy images — but how did this data cleansing actually save lives?

“Turns out,” Tyson explained, “maximizing the amount of information that could be extracted from a blurry astronomical image is technically identical to maximizing the amount of information that can be extracted from a mammogram.  Soon the new techniques came into common use for detecting early signs of breast cancer.”

“But that’s only part of the story.  In 1997, for Hubble’s second servicing mission, shuttle astronauts swapped in a brand-new, high-resolution digital detector—designed to the demanding specifications of astrophysicists whose careers are based on being able to see small, dim things in the cosmos.  That technology is now incorporated in a minimally invasive, low-cost system for doing breast biopsies, the next stage after mammograms in the early diagnosis of cancer.”

Even though defect prevention was eventually implemented to prevent data quality issues in Hubble’s images of outer space, those interim data cleansing algorithms are still being used today to help save countless human lives here on Earth.

So, at least in this particular instance, we have to admit that data cleansing is a necessary good.

The Diffusion of the Consumerization of IT

This blog post is sponsored by the Enterprise CIO Forum and HP.

On a previous post about the consumerization of IT, Paul Calento commented: “Clearly, it’s time to move IT out of a discrete, defined department and out into the field, even more than already.  Likewise, solutions used to power an organization need to do the same thing.  Problem is, though, that it’s easy to say that embedding IT makes sense (it does), but there’s little experience with managing it (like reporting and measurement).  Services integration is a goal, but cross-department, cross-business-unit integration remains a thorn in the side of many attempts.”

Embedding IT does make sense, and not only is it easier said than done, let alone done well, but part of the problem within many organizations is that IT became partially self-embedded within some business units while the IT department was resisting the consumerization of IT, treating it like a fad and not an innovation.  And now those business units are resisting the efforts of the redefined IT department because they fear losing the IT capabilities that consumerization has already given them.

This growing IT challenge brings to mind the Diffusion of Innovations theory developed by Everett Rogers to describe how innovations (e.g., new ideas or technology trends) spread within cultures, such as organizations, across five categories of adopters, starting with the Innovators and Early Adopters, progressing through the Early and Late Majority, and trailed by the Laggards.

A related concept called Crossing the Chasm was developed by Geoffrey Moore to describe the critical phenomenon occurring when enough of the Early Adopters have embraced the innovation so that the beginning of the Early Majority becomes an almost certainty even though mainstream adoption of the innovation is still far from guaranteed.

From my perspective, traditional IT departments are just now crossing the chasm of the diffusion of the consumerization of IT, and are conflicting with the business units that crossed the chasm long ago with their direct adoption of cloud computing, SaaS, and mobility solutions not provided by the IT department.  This divergence, caused by the IT department and some business units being on different sides of the chasm, has damaged, perhaps irreparably, some aspects of the IT-Business partnership.

The longer this divergence lasts, the more difficult it will be for an IT department that has finally crossed the chasm to redefine its role and remain a relevant partner to those business units that, perhaps for the first time in the organization’s history, were ahead of the information technology adoption curve.  Additionally, even communication and collaboration across business units is negatively affected when different business units cross the IT consumerization chasm at different times, which often, as Paul Calento noted, complicates the organization’s attempts to integrate cross-business-unit IT services.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Serving IT with a Side of Hash Browns

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

The UX Factor

A Swift Kick in the AAS

Shadow IT and the New Prometheus

The Diderot Effect of New Technology

Are Cloud Providers the Bounty Hunters of IT?

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT

Data Quality and the Q Test

In psychology, there’s something known as the Q Test, which asks you to use one of your fingers to trace an upper case letter Q on your forehead.  Before reading this blog post any further, please stop and perform the Q Test on your forehead right now.

 

Essentially, there are only two ways you can complete the Q Test, which are differentiated by how you trace the tail of the Q.  Most people start by tracing a letter O, and then complete the Q by tracing its tail either toward their right eye or toward their left eye.

If you trace the tail of the Q toward your right eye, you’re imagining what a letter Q would look like from your perspective.  But if you trace the tail of the Q toward your left eye, you’re imagining what it would look like from the perspective of another person.

Basically, the point of the Q Test is to determine whether or not you have a natural tendency to consider the perspective of others.

Although considering the perspective of others is a positive in most circumstances, if you traced the letter Q with its tail toward your left eye, psychologists say that you failed the Q Test because it reveals a negative — you’re a good liar.  The reason is that you have to be good at considering the perspective of others in order to be good at deceiving them with a believable lie.

So, as I now consider your perspective, dear reader, I bet you’re wondering: What does the Q Test have to do with data quality?

Like truth, beauty, and art, data quality can be said to be in the eyes of the beholder, or when data quality is defined, as it most often is, as fitness for the purpose of use — the eyes of the user.  But since most data has both multiple uses and users, data fit for the purpose of one use or user may not be fit for the purpose of other uses and users.  However, these multiple perspectives are considered irrelevant from the perspective of an individual user, who just needs quality data fit for the purpose of their own use.
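A tiny, made-up example illustrates the point: the same customer record can pass the quality rules of one use while failing the quality rules of another, so no single user’s perspective defines its quality.  The record and the rules below are hypothetical.

```python
# A tiny, made-up illustration of "fitness for the purpose of use": the same
# customer record passes one use's quality rules and fails another's.

record = {"name": "Pat Smith", "email": "pat.smith@example.com", "birth_date": None}

def fit_for_email_campaign(r):
    # Marketing only needs a deliverable-looking email address.
    return "@" in (r.get("email") or "")

def fit_for_age_analytics(r):
    # The analytics team needs a birth date to segment customers by age.
    return r.get("birth_date") is not None

print(fit_for_email_campaign(record))   # True  -> quality data for marketing
print(fit_for_age_analytics(record))    # False -> bad data for analytics
```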

The good news is that when it comes to data quality, most of us pass the Q Test, which means we’re not good liars.  The bad news is that since most of us pass the Q Test, we’re often only concerned about our own perspective about data quality, which is why so many organizations struggle to define data quality standards.

At the next discussion about your organization’s data quality standards, try inviting the participants to perform the Q Test.

 

Related Posts

The Point of View Paradox

You Say Potato and I Say Tater Tot

Data Myopia and Business Relativity

Beyond a “Single Version of the Truth”

DQ-BE: Single Version of the Time

Data and the Liar’s Paradox

The Fourth Law of Data Quality

Plato’s Data

Once Upon a Time in the Data

The Idea of Order in Data

Hell is other people’s data

Song of My Data

 

Related OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Redefining Data Quality — Guest Peter Perera discusses his proposed redefinition of data quality, as well as his perspective on the relationship of data quality to master data management and data governance.
  • Organizing for Data Quality — Guest Tom Redman (aka the “Data Doc”) discusses how your organization should approach data quality, including his call to action for your role in the data revolution.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.
  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

Two Flaws in the “Fail Faster” Philosophy

There are many who advocate that the key to success, especially with innovation, is what’s known as the “fail faster” philosophy, which says that not only should we embrace new ideas and try new things without being overly concerned with failure, but, more importantly, we should effectively fail as efficiently as possible in order to expedite learning valuable lessons from our failure.

However, I have often experienced what I see as two fundamental flaws in the “fail faster” philosophy:

  1. It requires that you define failure
  2. It requires that you admit when you have failed

Most people — myself included — often fail both of these requirements.  Most people do not define failure, but instead assume that they will be successful (even though they conveniently do not define success either).  But even when people define failure, they often refuse to admit when they have failed.  In the face of failure, most people either redefine failure or extend the deadline (perhaps we should call it the fail line?) for when they will have to admit that they have failed.

We are often regaled with stories of persistence in spite of repeated failure, such as Thomas Edison’s famous remark:

“Many of life’s failures are people who did not realize how close they were to success when they gave up.”

Edison also remarked that he didn’t invent one way to make a lightbulb, but instead invented more than 1,000 ways not to make a lightbulb.  Each of those failed prototypes for a commercially viable lightbulb was instructive and absolutely essential to his eventual success.  But what if Edison had refused to define and admit failure?  How would he have known when to abandon one prototype and try another?  How would he have been able to learn valuable lessons from his repeated failure?

Josh Linkner recently blogged about failure being the dirty little secret of so-called overnight success, citing several examples, including Rovio (makers of the Angry Birds video game), Dyson vacuum cleaners, and WD-40.

Although these are definitely inspiring success stories, my concern is that often the only failure stories we hear are about people and companies that became famous for eventually succeeding.  In other words, we often hear eventually successful stories, and we almost never hear, or simply choose to ignore, the more common, and perhaps more useful, cautionary tales of abject failure.

It seems we have become so obsessed with telling stories that we have relegated both failure and success to the genre of fiction, which I fear is preventing us from learning any fact-based, and therefore truly valuable, lessons about failure and success.

 

Related Posts

The Winning Curve

Persistence

Mistake Driven Learning

The Fragility of Knowledge

The Wisdom of Failure