Cloud Computing for Midsize Businesses

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

During this episode, Ed Abrams and I discuss cloud computing for midsize businesses, and, more specifically, aspects of the recently launched IBM global initiatives to help Managed Service Providers (MSPs) deliver cloud-based service offerings.

Ed Abrams is the Vice President of Marketing, IBM Midmarket.  In this role, Ed is responsible for leading a diverse team that supports IBM’s business objectives with small and midsize businesses by developing, planning, and executing offerings and go-to-market strategies designed to help midsize businesses grow.  Ed works closely and collaboratively with sales and channel teams and agency partners to deliver high-quality and effective marketing strategies, offerings, and campaigns.

 


This podcast was sponsored by the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Related Posts

Cloud Computing is the New Nimbyism

Lightning Strikes the Cloud

The Cloud Security Paradox

The Cloud is shifting our Center of Gravity

Are Cloud Providers the Bounty Hunters of IT?

The Partly Cloudy CIO

OCDQ Radio - Saving Private Data

OCDQ Radio - Big Data and Big Analytics

The Return of the Dumb Terminal

A Swift Kick in the AAS

Sometimes all you Need is a Hammer

Shadow IT and the New Prometheus

Data Quality and Miracle Exceptions

“Reading superhero comic books with the benefit of a Ph.D. in physics,” James Kakalios explained in The Physics of Superheroes, “I have found many examples of the correct description and application of physics concepts.  Of course, the use of superpowers themselves involves direct violations of the known laws of physics, requiring a deliberate and willful suspension of disbelief.”

“However, many comics need only a single miracle exception — one extraordinary thing you have to buy into — and the rest that follows as the hero and the villain square off would be consistent with the principles of science.”

 

“Data Quality is all about . . .”

It is essential to foster a marketplace of ideas about data quality in which a diversity of viewpoints is freely shared without bias, where everyone is invited to get involved in discussions and debates and have an opportunity to hear what others have to offer.

However, one of my biggest pet peeves about the data quality industry is that when I listen to analysts, vendors, consultants, and other practitioners discuss data quality challenges, I am often required to make a miracle exception for data quality.  In other words, I am given one extraordinary thing I have to buy into in order to be willing to buy their solution to all of my data quality problems.

These superhero comic book style stories usually open with a miracle exception telling me that “data quality is all about . . .”

Sometimes, the miracle exception is purchasing technology from the right magic quadrant.  Other times, the miracle exception is either following a comprehensive framework, or following the right methodology from the right expert within the right discipline (e.g., data modeling, business process management, information quality management, agile development, data governance, etc.).

But I am especially irritated by individuals who bash vendors for selling allegedly only reactive data cleansing tools while selling their own allegedly only proactive defect prevention methodology.  They act as if we could avoid cleaning up the existing data quality issues, or as if we could shut down and restart our organizations so that, before another single datum is created or business activity is executed, everyone could learn how to “do things the right way” and “the data will always be entered right, the first time, every time.”

Although these and other miracle exceptions do correctly describe the application of data quality concepts in isolation, by doing so, they also oversimplify the multifaceted complexity of data quality, requiring a deliberate and willful suspension of disbelief.

Miracle exceptions certainly make for more entertaining stories and more effective sales pitches, but oversimplifying complexity for the purposes of explaining your approach, or, even worse and sadly more common, preaching at people that your approach definitively solves their data quality problems, is nothing less than applying the principle of deus ex machina to data quality.

 

Data Quality and deus ex machina

Deus ex machina is a plot device whereby a seemingly unsolvable problem is suddenly and abruptly solved with the contrived and unexpected intervention of some new event, character, ability, or object.

This technique is often used in the marketing of data quality software and services, where the problem of poor data quality can seemingly be solved by a new event (e.g., creating a data governance council), a new character (e.g., hiring an expert consultant), a new ability (e.g., aligning data quality metrics with business insight), or a new object (e.g., purchasing a new data quality tool).

Now, don’t get me wrong.  I do believe various technologies and methodologies from numerous disciplines, as well as several core principles (e.g., communication, collaboration, and change management) are all important variables in the data quality equation, but I don’t believe that any particular variable can be taken in isolation and deified as the God Particle of data quality physics.

 

Data Quality is Not about One Extraordinary Thing

Data quality isn’t all about technology, nor is it all about methodology.  And data quality isn’t all about data cleansing, nor is it all about defect prevention.  Data quality is not about only one thing — no matter how extraordinary any one of its things may seem.

Battling the dark forces of poor data quality doesn’t require any superpowers, but it does require doing the hard daily work of continuously improving your data quality.  Data quality does not have a miracle exception, so please stop believing in one.

And for the love of high-quality data everywhere, please stop trying to sell us one.

 

Related Posts

Data Quality: Quo Vadimus?

The Dichotomy Paradox, Data Quality and Zero Defects

Finding Data Quality

Data Governance Frameworks are like Jigsaw Puzzles

Data Quality Industry: Problem Solvers or Enablers?

Do you believe in Magic (Quadrants)?

Which came first, the Data Quality Tool or the Business Need?

What Data Quality Technology Wants

OCDQ Radio - The Johari Window of Data Quality

OCDQ Radio - Redefining Data Quality

OCDQ Radio - The Blue Box of Information Quality

OCDQ Radio - Studying Data Quality

OCDQ Radio - Organizing for Data Quality

Alternatives to Enterprise Data Quality Tools

The recent analysis by Andy Bitterer of Gartner Research (and ANALYSTerical) about the acquisition of the open source data quality tool DataCleaner by the enterprise data quality vendor Human Inference prompted a Twitter conversation about alternatives to enterprise data quality tools.

Since enterprise data quality tools can be cost-prohibitive, more prospective customers are exploring free and/or open source alternatives, such as the Talend Open Profiler, licensed under the GNU General Public License (GPL), or non-open source, but entirely free alternatives, such as the Ataccama DQ Analyzer.  And, as Andy noted in his analysis, both of these tools offer an easy transition to the vendors’ full-fledged commercial data quality tools, which provide more than just data profiling functionality.

As Henrik Liliendahl Sørensen explained, in his blog post Data Quality Tools Revealed, data profiling is the technically easiest part of data quality, which explains the tool diversity, and early adoption of free and/or open source alternatives.

And there are also other non-open source alternatives that are more affordable than enterprise data quality tools, such as Datamartist, which combines data profiling and data migration capabilities into an easy-to-use desktop application.
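That data profiling is the technically easiest part of data quality is easy to see in code.  The following is a minimal sketch of the kind of column summary (null counts, distinct values) that tools like Talend Open Profiler or DataCleaner automate at enterprise scale; the sample customer records and field names are hypothetical:

```python
# Toy column profiler: summarizes nulls and distinct values per column.
# Uses only the standard library; real profilers add pattern analysis,
# frequency distributions, and cross-column dependency checks.

def profile(rows):
    """Profile a list of dicts; return per-column summary statistics."""
    columns = {}
    for row in rows:
        for name, value in row.items():
            stats = columns.setdefault(
                name, {"count": 0, "nulls": 0, "values": set()}
            )
            stats["count"] += 1
            if value is None or str(value).strip() == "":
                stats["nulls"] += 1      # treat empty/blank as null
            else:
                stats["values"].add(value)
    return {
        name: {
            "count": s["count"],
            "nulls": s["nulls"],
            "distinct": len(s["values"]),
        }
        for name, s in columns.items()
    }

customers = [
    {"name": "Alice", "postcode": "02134"},
    {"name": "Bob", "postcode": ""},
    {"name": "Alice", "postcode": "02134"},
]
report = profile(customers)
print(report["postcode"])  # {'count': 3, 'nulls': 1, 'distinct': 1}
```

The core idea is the same at any scale: summarize the data you actually have before deciding how to fix it.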

My point is neither to discourage the purchase of enterprise data quality tools, nor to promote their alternatives—and this blog post is certainly not an endorsement—paid or otherwise—of the alternative data quality tools I have mentioned simply as examples.

My point is that many new technology innovations originate from small entrepreneurial ventures, which tend to be specialists with a narrow focus that can provide a great source of rapid innovation.  This is in contrast to the data management industry trend of innovation via acquisition and consolidation, embedding data quality technology within data management platforms that also provide data integration and master data management (MDM) functionality, allowing the mega-vendors to offer end-to-end solutions and the convenience of one-vendor information technology shopping.

However, most software licenses for these enterprise data management platforms start in the six figures.  On top of the licensing, you have to add the annual maintenance fees, which are usually in the five figures.  Add to the total cost of the solution the professional services needed for training and consulting for installation, configuration, application development, testing, and production implementation—and you have another six figure annual investment.

Debates about free and/or open source software usually focus on the robustness of functionality and the intellectual property of source code.  However, I think the real reason more prospective customers are exploring these alternatives to enterprise data quality tools is the free aspect—not the open source aspect.

In other words—and once again I am only using it as an example—I might download Talend Open Profiler because I want data profiling functionality at an affordable price—but not because I want the opportunity to customize its source code.

I believe the “try it before you buy it” aspect of free and/or open source software is what’s important to prospective customers.

Therefore, enterprise data quality vendors, instead of acquiring an open source tool as Human Inference did with DataCleaner, how about offering a free (with limited functionality) or trial version of your enterprise data quality tool as an alternative option?

 

Related Posts

Do you believe in Magic (Quadrants)?

Can Enterprise-Class Solutions Ever Deliver ROI?

Which came first, the Data Quality Tool or the Business Need?

Selling the Business Benefits of Data Quality

What Data Quality Technology Wants

The People Platform

Platforms are popular in enterprise data management.  Most of the time, the term is used to describe a technology platform, an integrated suite of tools that enables the organization to manage its data in support of its business processes.

Other times the term is used to describe a methodology platform, an integrated set of best practices that enables the organization to manage its data as a corporate asset in order to achieve superior business performance.

Data governance is an example of a methodology platform, where one of its central concepts is the definition, implementation, and enforcement of policies, which govern the interactions between business processes, data, technology, and people.

But many rightfully lament the misleading term “data governance” because it appears to put the emphasis on data, arguing that since business needs come first in every organization, data governance should be formalized as a business process, and therefore mature organizations should view data governance as business process management.

However, successful enterprise data management is about much more than data, business processes, or enabling technology.

Business process management, data quality management, and technology management are all people-driven activities: people, empowered by high quality data and enabled by technology, optimize business processes for superior business performance.

Data governance policies illustrate the intersection of business, data, and technical knowledge, which is spread throughout the enterprise, transcending any artificial boundaries imposed by an organizational chart, where different departments or different business functions appear as if they were independent of the rest of the organization.

Data governance policies reveal how truly interconnected and interdependent the organization is, and how everything that happens within the organization happens as a result of the interactions occurring among its people.

Michael Fauscette defines people-centricity as “our current social and business progression past the industrial society’s focus on business, technology, and process.  Not that business or technology or process go away, but instead they become supporting structures that facilitate new ways of collaborating and interacting with customers, suppliers, partners, and employees.”

In short, Fauscette believes people are becoming the new enterprise platform—and not just for data management.

I agree, but I would argue that people have always been—and always will be—the only successful enterprise platform.

 

Related Posts

The Collaborative Culture of Data Governance

Data Governance and the Social Enterprise

Connect Four and Data Governance

What Data Quality Technology Wants

Data and Process Transparency

The Business versus IT—Tear down this wall!

Collaboration isn’t Brain Surgery

Trust is not a checklist

Quality and Governance are Beyond the Data

Data Transcendentalism

Podcast: Data Governance is Mission Possible

Video: Declaration of Data Governance

What Data Quality Technology Wants

Last month’s unscientific data quality poll noted that viewpoints about the role of data quality technology (i.e., what data quality technology wants) are generally split between two opposing perspectives:

  1. Technology enables a data quality process, but doesn’t obviate the need for people (e.g., data stewards) to remain actively involved and be held accountable for maintaining the quality of data.
  2. Technology automates a data quality process, and a well-designed and properly implemented technical solution obviates the need for people to be actively involved after its implementation.

 

Commendable Comments

Henrik Liliendahl Sørensen voted for enable, but commented that he likes to say it “enables by automating the time-consuming parts,” an excellent point which he further elaborated on in two of his recent blog posts: Automation and Technology and Maturity.

Garnie Bolling commented that he believes people will always be part of the process, especially since data quality has so many dimensions and trends, and although automated systems can deal with what he called fundamental data characteristics, an automated system cannot change with trends or the ongoing evolution of data.

Frank Harland commented that automation can and has to take over the tedious bits of work (e.g., he wouldn’t want to type in all those queries that can be automated by data profiling tools), but to get data right, we have to get processes right, get data architecture right, get culture and KPIs right, and get a lot of the “right” people to do all the hard work that has to be done.

Chris Jackson commented that what an organization really needs is quality data processes not data quality processes, and once the focus is on treating the data properly rather than catching and remediating poor data, you can have a meaningful debate about the relative importance of well-trained and motivated staff vs. systems that encourage good data behavior vs. replacing fallible people with standard automated process steps.

Alexa Wackernagel commented that when it comes to discussions about data migration and data quality with clients, she often gets the requirement—or better to call it the dream—of automated processes, but the reality is that data handling needs easily accessible technology to enable data quality.

Thanks to everyone who voted and special thanks to everyone who commented.  As always, your feedback is greatly appreciated.

 

What Data Quality Technology Wants: Enable and Automate


“Data Quality Powers—Activate!”


“I’m sorry, Defect.  I’m afraid I can’t allow that.”

I have to admit that my poll question was flawed (as my friend HAL would say, “It can only be attributable to human error”).

Posing the question in an either/or context made it difficult for the important role of automation within data quality processes to garner many votes.  I agree with the comments above that the role of data quality technology is to both enable and automate.

As the Wonder Twins demonstrate, data quality technology enables Zan (i.e., technical people), Jayna (i.e., business people), and Gleek (i.e., data space monkeys, er, I mean, data people) to activate one of their most important powers—collaboration.

In addition to the examples described in the comments above, data quality technology automates proactive defect prevention by providing real-time services, which greatly minimize poor data quality at the multiple points of origin within the data ecosystem.  Although it is impossible to prevent every problem before it happens, the more control enforced where data originates, the better the overall enterprise data quality will be—or as my friend HAL would say:

“Putting data quality technology to its fullest possible use is all any corporate entity can ever hope to do.”

Related Posts

What Does Data Quality Technology Want?

DQ-Tip: “Data quality tools do not solve data quality problems...”

Which came first, the Data Quality Tool or the Business Need?

Data Quality Industry: Problem Solvers or Enablers?

Data Quality Magic

The Tooth Fairy of Data Quality

Data Quality is not a Magic Trick

Do you believe in Magic (Quadrants)?

Pirates of the Computer: The Curse of the Poor Data Quality

What Does Data Quality Technology Want?

During a recent Radiolab podcast, Kevin Kelly, author of the book What Technology Wants, used the analogy of how a flower leans toward sunlight because it “wants” the sunlight, to describe what the interweaving web of evolving technical innovations (what he refers to as the super-organism of technology) is leaning toward—in other words, what technology wants.

The other Radiolab guest was Steven Johnson, author of the book Where Good Ideas Come From, who somewhat dispelled the traditional notion of the eureka effect by explaining that the evolution of ideas, like all evolution, stumbles its way toward the next good idea, which inevitably leads to a significant breakthrough, such as what happens with innovations in technology.

Listening to this thought-provoking podcast made me ponder the question: What does data quality technology want?

In a previous post, I used the term OOBE-DQ to refer to the out-of-box-experience (OOBE) provided by data quality (DQ) tools, which usually becomes a debate between “ease of use” and “powerful functionality” after you ignore the Magic Beans sales pitch that guarantees you the data quality tool is both remarkably easy to use and incredibly powerful.

The data quality market continues to evolve away from esoteric technical tools and stumble its way toward the next good idea: business-empowering suites that provide robust functionality through increasingly role-based user interfaces tailored to the specific needs of different users.  Of course, many vendors would love to claim sole responsibility for what they would call significant innovations in data quality technology, instead of what are simply by-products of an evolving market.

The deployment of data quality functionality within and across organizations also continues to evolve, as data cleansing activities are being complemented by real-time defect prevention services used to greatly minimize poor data quality at the multiple points of origin within the enterprise data ecosystem.
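As a rough sketch of what such a real-time defect prevention service might do, consider validating records at the point of entry and flagging defects before they spread downstream; the validation rules and field names below are hypothetical examples, not any particular vendor’s implementation:

```python
# Minimal point-of-origin validation sketch: each field has a rule,
# and a record is checked as it is entered rather than cleansed later.
import re

RULES = {
    # Hypothetical example rules; real services would load these from
    # governed, business-defined policies.
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "postcode": lambda v: re.fullmatch(r"\d{5}", v) is not None,
}

def validate(record):
    """Return a list of (field, value) pairs that fail their rule."""
    return [
        (field, record.get(field))
        for field, rule in RULES.items()
        if field in record and not rule(record[field])
    ]

defects = validate({"email": "alice@example.com", "postcode": "0213"})
print(defects)  # [('postcode', '0213')]
```

A service like this cannot catch every defect, but checking data where it originates is far cheaper than remediating it after it has propagated across the enterprise.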

However, viewpoints about the role of data quality technology generally remain split between two opposing perspectives:

  1. Technology enables a data quality process, but doesn’t obviate the need for people (e.g., data stewards) to remain actively involved and be held accountable for maintaining the quality of data.
  2. Technology automates a data quality process, and a well-designed and properly implemented technical solution obviates the need for people to be actively involved after its implementation.

Do you think that continuing advancements and innovations in data quality technology will obviate the need for people to be actively involved in data quality processes?  In the future, will we have high quality data because our technology essentially wants it and therefore leans our organizations toward high quality data?  Let’s conduct another unscientific data quality poll:

 

Additionally, please feel free to post a comment below and explain your vote or simply share your opinions and experiences.

 

Related Posts

DQ-Tip: “Data quality tools do not solve data quality problems...”

Which came first, the Data Quality Tool or the Business Need?

Data Quality Industry: Problem Solvers or Enablers?

Data Quality Magic

The Tooth Fairy of Data Quality

Data Quality is not a Magic Trick

Do you believe in Magic (Quadrants)?

Pirates of the Computer: The Curse of the Poor Data Quality

Can Enterprise-Class Solutions Ever Deliver ROI?

The information technology industry has a great fondness for enterprise-class solutions and TLAs (two or three letter acronyms): ERP (Enterprise Resource Planning), DW (Data Warehousing), BI (Business Intelligence), MDM (Master Data Management), DG (Data Governance), DQ (Data Quality), CDI (Customer Data Integration), CRM (Customer Relationship Management), PIM (Product Information Management), BPM (Business Process Management), etc. — and new TLAs are surely coming soon.

But there is one TLA to rule them all, one TLA to fund them, one TLA to bring them all and to the business bind them—ROI.

 

Enterpri$e-Cla$$ $olution$

All enterprise-class solutions have one thing in common—they require a significant investment and total cost of ownership.

Most enterprise software/system licenses start in the six figures.  Due in large part to vendor consolidation, many are embedded within a consolidated enterprise application development platform with seamlessly integrated components offering an end-to-end solution that pushes the license well into seven figures. 

On top of the licensing, you have to add the annual maintenance fees, which are usually in the five figures—sometimes more.

Add to the total cost of the solution the professional services needed for training and consulting for installation, configuration, application development, testing, and production implementation, and you have another six figure annual investment.

With such a significant investment and total cost of ownership required, can enterprise-class solutions ever deliver ROI?

 

Should I refinance my mortgage?

As a quick (but relevant) tangent, let's use a simple analogy from the world of personal finance.

Similar to most homeowners, I get offers to refinance my mortgage all the time.  A common example is an offer that states I can reduce my monthly payments by $200 by refinancing.  Sounds great: $200 a month is an annual cost reduction of $2,400.

However, this great deal includes $3000 in refinancing costs.  Although I start paying $200 less a month immediately, I do not really start saving any money for 15 months, when the monthly “savings” break even with the $3000 in refinancing costs. 

Of course, saying only 15 months is ignoring possible tax implications as well as lost interest or returns that I could have earned since the $3000 likely came from either a savings or an investment account.

Additionally, refinancing might not be a good idea if I plan to sell the house in less than 15 months.  The $3000 could instead be invested in finishing my basement or repairing minor damages, which could help increase its value and therefore its sales price.
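The simple break-even arithmetic of the refinancing offer can be written out as follows (ignoring, as noted above, taxes and the lost interest on the $3,000):

```python
import math

monthly_savings = 200     # reduced monthly payment from the refinance
refinancing_cost = 3000   # up-front cost of refinancing

# Months until cumulative monthly savings cover the up-front cost.
break_even_months = math.ceil(refinancing_cost / monthly_savings)
print(break_even_months)  # 15
```

Only after month 15 does the refinance start producing real savings, which is why selling the house sooner than that would turn the "great deal" into a loss.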

How does this analogy relate to enterprise-class solutions?

 

The Business Justification Paradox

Focusing solely on the technical features and ignoring the business benefits of an enterprise-class solution isn’t going to convince either the organization's executive management or its shareholders that the solution is required.

Therefore, emphasis has to be placed on the need to make the business justification, where true ROI can only be achieved through tangible business impacts, such as mitigated risks, reduced costs, or increased revenues.

However, a legitimate business justification for any enterprise-class solution is often relatively easy to make.

The business justification paradox is that although an enterprise-class solution definitely has the long-term future potential to reduce costs, mitigate risks, and increase revenues, in the immediate future (and current fiscal year), it will only increase costs, decrease revenues, and therefore potentially increase risks.

In the mortgage analogy, the break-even point on the opportunity cost of refinancing can be precisely calculated.  Is it even possible to accurately estimate the break-even point on the opportunity cost of implementing an enterprise-class solution?

Furthermore, true ROI obviously has to be at least estimated to exceed simply breaking even on the investment.
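To make the paradox concrete, here is a back-of-the-envelope break-even sketch using the rough cost ranges from this post; every figure, including the estimated annual benefit, is a hypothetical assumption rather than a vendor quote:

```python
# Hypothetical figures matching the rough ranges discussed above.
license_cost = 250_000        # one-time license, "six figures"
annual_maintenance = 40_000   # "five figures" per year
annual_services = 150_000     # training/consulting, "six figures" per year
annual_benefit = 300_000      # estimated yearly value (the hard part)

# Assumes the estimated benefit exceeds the recurring costs;
# otherwise there is no break-even point at all.
cumulative = -license_cost
years = 0
while cumulative < 0:
    years += 1
    cumulative += annual_benefit - annual_maintenance - annual_services

print(years)  # 3
```

Even with a generous $300,000 estimated annual benefit, this investment does not break even until year three, and the annual benefit is precisely the number that is hardest to estimate accurately.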

Given the reality that the longer an initiative takes, the more likely its funding will either be reduced or completely cut, many advocate an agile methodology, which targets iterative cycles quickly delivering small, but tangible value.  However, the up-front costs of enterprise licenses and incremental costs of the ongoing efforts and maintenance still loom large on the balance sheet.

Even with “creative” accounting practices, the unquestionably real short-term “ROI high” of following an agile approach could still leave you “chasing the dragon” in search of at least breaking even on your enterprise-class solution's total cost of ownership.

 

A Call for Debate

My point in this blog post was neither to make the argument that organizations should not invest in enterprise-class solutions, nor to berate organizations for evaluating such possible investments using short-term thinking limited to the current fiscal year.

I am simply trying to encourage an open, honest, and healthy debate about the true ROI of enterprise-class solutions.

I am tired of hearing over-simplifications about how all you need to do is make a valid business justification, as well as attempting to decipher the mystical ROI and total cost of ownership calculations provided by vendors and industry analysts.

I am also tired of being told how emerging industry trends like open source, cloud computing, and software as a service (SaaS) are “less expensive” than traditional approaches.  Perhaps that is true, but can they deliver enterprise-class solutions and ROI?

This blog post is a call for debate.  Please post a comment.  All viewpoints are welcome.

Subterranean Computing

Cloud computing continues to receive significant industry buzz and endorsements from many industry luminaries:

  • Tim O'Reilly of O'Reilly Media calls cloud computing “the platform for all computing.”
  • Connor MacLeod of the Clan MacLeod says “there can be only one—and that one is cloud computing.”  
  • Marc Benioff of SalesForce.com refers to companies in the “anti-cloud crowd” as “innovationless.”
  • Lando Calrissian of Cloud City calls anyone not using cloud computing a “slimy, double-crossing, no-good swindler.”

Therefore, I was happy to hear a cogent alternative viewpoint from a member of the “anti-cloud crowd” when I recently interviewed Sidd Finch, the Founder and President of the New York based startup company Kremvax, which recently secured another $4.1 billion in venture capital to pursue an intriguing alternative to cloud computing called Subterranean Computing.

 

The Truth about Cloud Computing

Mr. Finch began the interview by discussing some of the common criticisms of cloud computing, which include issues such as data privacy, data protection, and data security.  However, I was most intrigued by the new research Mr. Finch cited from Professor Nat Tate of the College of Nephology at the University of Southern North Dakota at Hoople.

According to Professor Tate, here is the truth about cloud computing:

  • Cloud computing's viability depends greatly on the type of cloud, not public or private, but rather cirrus, stratus, or cumulus.
  • Cirrus clouds are not good for data privacy concerns because they tend to be wispy and therefore completely transparent.
  • Stratus clouds are not good for data protection concerns because “data drizzling” occurs frequently and without warning. 
  • Cumulus clouds are not good for data security concerns because “fair weather clouds” disperse at the first sign of trouble. 

 

The Underlying Premise of Subterranean Computing

Later in the interview, Mr. Finch described the underlying premise of subterranean computing:

“Instead of beaming your data up into the cloud, bury your data down underground.”  

According to Mr. Finch, here are the basic value propositions of subterranean computing:

  • Subterranean computing's viability is limited only to your imagination (but real money is required, and preferably cash).
  • Data privacy is not a concern because your data gets buried in its own completely (not virtually) private hole in the ground.
  • Data protection is not a concern because once it is buried, your data will never be used again for any purpose whatsoever.
  • Data security is not a concern, but for an additional fee, we bury your data where nobody will ever find it (we know a guy).

 

Brown is the new Green

Environmentally sustainable computing (i.e., “Green IT”) is another buzzworthy industry trend these days.  Reduce your carbon footprint, utilize electricity more efficiently, evaluate alternative power sources, and leverage recyclable materials. 

All great ideas.  But according to Mr. Finch, subterranean computing takes it to the next level by running entirely on geothermal power, a sustainable and renewable energy source, as well as converting your databases into Composting Data Stores (CDS).

In subterranean computing, your data is buried deep underground, where CDS can draw the very minimal amount of power it requires directly from the heat emanating from the Earth's core.  The CDS biodegradable data format (BDF) also minimizes your data storage requirements by automatically composting old data, which creates the raw material used to store your new data.

In the words of Kremvax customer and award-eligible environmentalist Isaac Bickerstaff: “brown is the new green.” 

Bickerstaff is the Lord Mayor of the English village of Spiggot, which has “gone subterranean” with its computing infrastructure.

 

Conclusion

So which new industry trend will your organization be implementing this year: cloud computing or subterranean computing? 

Well, before you make your final decision, please be advised that Industry Analyst Lirpa Sloof has recently reported rumors are circulating that Larry Ellison of Oracle is planning on announcing the first Cloud-Subterranean hybrid computing platform at the Oracle OpenWorld 2010 conference, which is also rumored to be changing its venue from San Francisco to Spiggot.

But whenever you're evaluating new technology, remember the wise words from Subterranean Homesick Blues by Bob Dylan:

“You don’t need a weatherman to know which way the wind blows.”

The Twitter Clockwork is NOT Orange

Recently, a Twitter-related tête à tête à tête involving David Carr of The New York Times, Nick Bilton of The New York Times, and George Packer of The New Yorker temporarily made both the Blogosphere all abuzz and the Twitterverse all atwitter.

This was simply another entry in the deeply polarizing debate between those for (Carr and Bilton) and against (Packer) Twitter.

 

A new decade of debate begins

On January 1, 2010, David Carr published his thoughts in the article Why Twitter Will Endure:

“By carefully curating the people you follow, Twitter becomes an always-on data stream from really bright people in their respective fields, whose tweets are often full of links to incredibly vital, timely information.”

. . . 

“Nearly a year in, I’ve come to understand that the real value of the service is listening to a wired collective voice.”

. . .

“At first, Twitter can be overwhelming, but think of it as a river of data rushing past that I dip a cup into every once in a while. Much of what I need to know is in that cup . . . I almost always learn about it first on Twitter.”

. . .

“All those riches do not come at zero cost: If you think e-mail and surfing can make time disappear, wait until you get ahold of Twitter, or more likely, it gets ahold of you.  There is always something more interesting on Twitter than whatever you happen to be working on.”

Carr goes on to quote Clay Shirky, author of the book Here Comes Everybody:

“It will be hard to wait out Twitter because it is lightweight, endlessly useful and gets better as more people use it.  Brands are using it, institutions are using it, and it is becoming a place where a lot of important conversations are being held.”

 

The most frightening picture of the future

On January 29, 2010, in his blog post Stop the World, George Packer declared that “the most frightening picture of the future that I’ve read thus far in the new decade has nothing to do with terrorism or banking or the world’s water reserves.”

What was the most frightening picture of the future that Packer had read less than a month into the new decade? 

The aforementioned article by David Carr—no, I am not kidding.

“Every time I hear about Twitter,” wrote Packer, “I want to yell Stop!  The notion of sending and getting brief updates to and from dozens or thousands of people every few minutes is an image from information hell.  I’m told that Twitter is a river into which I can dip my cup whenever I want.  But that supposes we’re all kneeling on the banks.  In fact, if you’re at all like me, you’re trying to keep your footing out in midstream, with the water level always dangerously close to your nostrils.  Twitter sounds less like sipping than drowning.”

Packer, who admits that he has, in fact, never even used Twitter, then continued with a crack addiction analogy:

“Who doesn’t want to be taken out of the boredom or sameness or pain of the present at any given moment?  That’s what drugs are for, and that’s why people become addicted to them. 

Carr himself was once a crack addict (he wrote about it in The Night of the Gun).  Twitter is crack for media addicts. 

It scares me, not because I’m morally superior to it, but because I don’t think I could handle it.  I’m afraid I’d end up letting my son go hungry.”

 

“Call me a digital crack dealer”

On February 3, 2010, in his blog post, The Twitter Train Has Left the Station, Nick Bilton responded:

“Call me a digital crack dealer, but here’s why Twitter is a vital part of the information economy—and why Mr. Packer and other doubters ought to at least give it a Tweet:

Hundreds of thousands of people now rely on Twitter every day for their business.  Food trucks and restaurants around the world tell patrons about daily food specials.  Corporations use the service to handle customer service issues.  Starbucks, Dell, Ford, JetBlue and many more companies use Twitter to offer discounts and coupons to their customers.  Public relations firms, ad agencies, schools, the State Department—even President Obama—use Twitter and other social networks to share information.”

. . .

“Most importantly, Twitter is transforming the nature of news, the industry from which Mr. Packer reaps his paycheck.  The news media are going through their most robust transformation since the dawn of the printing press, in large part due to the Internet and services like Twitter.  After this metamorphosis takes place, everyone will benefit from the information moving swiftly around the globe.”

Bilton concludes his post with a train analogy:

“Ironically, Mr. Packer notes how much he treasures his Amtrak rides in the quiet car of the train, with his laptop closed and cellphone turned off.  As I’ve found in previous research, when trains were a new technology 150 years ago, some journalists and intellectuals worried about the destruction that the railroads would bring to society.  One news article at the time warned that trains would ‘blight crops with their smoke, terrorize livestock … and people could asphyxiate’ if they traveled on them.

I wonder if, 150 years ago, Mr. Packer would be riding the train at all, or if he would have stayed home, afraid to engage in an evolving society and demanding that the trains be stopped.”

 

Our apparent appetite for our own destruction

On February 4, 2010, in his blog post Neither Luddite nor Biltonite, George Packer responded:

“It’s true that I hadn’t used Twitter (not consciously, anyway—my editors inform me that this blog has for some time had an automated Twitter feed).  I haven’t used crack, either, but—as a Bilton reader pointed out—you don’t need to do the drug to understand the effects.”

. . .

“Just about everyone I know complains about the same thing when they’re being honest—including, maybe especially, people whose business is reading and writing.  They mourn the loss of books and the loss of time for books.  It’s no less true of me, which is why I’m trying to place a few limits on the flood of information that I allow into my head.”

. . .

“There’s no way for readers to be online, surfing, e-mailing, posting, tweeting, reading tweets, and soon enough doing the thing that will come after Twitter, without paying a high price in available time, attention span, reading comprehension, and experience of the immediately surrounding world.  The Internet and the devices it’s spawned are systematically changing our intellectual activities with breathtaking speed, and more profoundly than over the past seven centuries combined.  It shouldn’t be an act of heresy to ask about the trade-offs that come with this revolution.”

. . .

“The response to my post tells me that techno-worship is a triumphalist and intolerant cult that doesn’t like to be asked questions.  If a Luddite is someone who fears and hates all technological change, a Biltonite is someone who celebrates all technological change: because we can, we must.  I’d like to think that in 1860 I would have been an early train passenger, but I’d also like to think that in 1960 I’d have urged my wife to go off Thalidomide.”

. . .

“American newspapers and magazines will continue to die by the dozen.  The economic basis for reporting (as opposed to information-sharing, posting, and Tweeting) will continue to erode.  You have to be a truly hard-core techno-worshipper to call this robust.  Any journalist who cheerleads uncritically for Twitter is essentially asking for his own destruction.”

. . .

“It’s true that Bilton will have news updates within seconds that reach me after minutes or hours or even days. 

It’s a trade-off I can live with.”

Packer concludes his post by quoting the end of G. B. Trudeau's book My Shorts R Bunching. Thoughts?:

“The time you spend reading this tweet is gone, lost forever, carrying you closer to death.  Am trying not to abuse the privilege.”

 

The Twitter Clockwork is NOT Orange

A Clockwork Orange

The primary propaganda used by the anti-Twitter lunatic fringe is comparing the microblogging and social networking service to that disturbing scene (pictured above) from the movie A Clockwork Orange, where you are confined within a straitjacket, your head strapped into a restraining chair preventing you from looking away, your eyes clamped open—and you are forced to stare endlessly into the abyss of the cultural apocalypse that the Twitterverse is apparently supposed to represent.

You can feel free to call me a Biltonite, because I obviously agree far more with Bilton and Carr—and not with Packer.

Of course, I recommend you read all four of the articles/posts I linked to and selectively quoted above.  Especially Carr's article, which was far more balanced than either my quotes or Packer's posts reflect. 

 

Social Media Will Endure

We continue to witness the decline of print media and the corresponding evolution of social media.  I completely understand why Packer, and others with a vested interest in print media, want to believe social media is a revolution that must be put down. 

Hence the outrageous exaggerations Packer uses when comparing Twitter with drug abuse (crack cocaine) and the truly offensive remark of comparing Twitter with one of the worst medical tragedies in modern history (Thalidomide). 

I believe the primary reason social media will endure, beyond our increasing interest in exchanging what has traditionally been only a broadcast medium (print media) for a conversational medium, is that it is enabling our communication to return to the more direct and immediate forms of information sharing that existed even before the evolution of written language.

Social media is an evolution and not a revolution being forced upon society by unrelenting technological advancements and techno-worship.  In many ways, social media is not a new concept at all—technology has simply finally caught up with us.

Humans have always been “social” by our very nature.  We have always thrived on connection, conversation, and community. 

Social media is rapidly evolving.  Therefore, specific services like Twitter may be replaced (or Twitter may continue to evolve). 

However, the essence of social media will endure—but the same can't be said of Packerites (neo-Luddites like George Packer).

 

What Say You?

Please share your thoughts on this debate by posting a comment below. 

Or you can share your thoughts with me on Twitter—which reminds me, it's time for me to be strapped back into the chair . . .

Can Social Media become a Universal Translator?

I have always been a huge fan of science fiction, mostly television and movies, but also a few select books as well. 

Second only to FTL (faster-than-light space travel, e.g., warp drive, hyperdrive, the Infinite Improbability Drive, or “Ludicrous Speed”), the most common technology found in almost all science fiction is the universal translator, which somehow manages to instantly translate all communication into the native language of the user.

Without question, and especially for television and movies, a universal translator serves as a useful plot device in science fiction. 

It saves valuable time otherwise spent explaining how people (especially from completely different planets) are able to communicate without knowing each other's language.  The time saved can therefore be dedicated to far cooler things, such as laser guns, lightsabers, space battles, and massive explosions—in other words, the truly scientific parts of science fiction.

Just a few of my universal translator (and science fiction) favorites include the following:

  • The Babel Fish – a small yellow leech-like fish from The Hitchhiker's Guide to the Galaxy book series by Douglas Adams, which, after it is inserted into your ear, simultaneously translates from one spoken language to another.

     

  • Translator Microbes – bacteria injected into your body on the Farscape television series, which, after colonizing your brain stem, translate spoken language and then pass the translation along to the rest of your brain.

     

  • The Universal Translator – a linguacode matrix on Star Trek, first used in the late 22nd century for the instant translation of Earth languages, which removed language barriers and helped Earth’s disparate cultures come to terms of universal peace.

 

It's a Small (Digital) World

Many of my social media blog posts have included some form of the following paragraph:

Rapid advancements in technology, coupled with the meteoric rise of the Internet and social media (blogs, Twitter, Facebook, LinkedIn, etc.), have created an amazing medium that is enabling people separated by vast distances and disparate cultures to come together, communicate, and collaborate in ways few would have thought possible less than a decade ago.

It's a really good paragraph (it must be since I just used it yet again!).  However, apparently channeling science fiction's useful plot device, I waxed poetic while ignoring the still very present communication challenge of language translation.

My native language is English, and like many people from the United States, it is the only language I am fluent in.  Blogging has made the digital version of my world much smaller and allowed my writing to reach parts of the world it wouldn’t otherwise have been able to reach—places where English is not the primary language.

 

What language do you blog in?

I have to admit that despite my professional experience, which has included some international commerce, I am often oblivious to how great a challenge non-English speakers face in the business world.  Blogging is certainly no exception.

On ProBlogger, Darren Rowse recently posted Bloggers from Non English Speaking Backgrounds, a follow-up to a recent newsletter survey about the challenges facing bloggers going into 2010.  Quite a few of the responses came from bloggers for whom English is not their first language, and they cited two primary challenges:

  1. Not knowing which language they should blog in – Should they blog in their primary language and reach a potentially smaller readership, or should they blog in English where their readership could be larger, but where they have challenges with writing well?

     

  2. Feeling isolated from other bloggers – Some bloggers felt that they were not taken as seriously by bloggers in other parts of the world and therefore found networking difficult.  

Since Darren Rowse (who is based in Australia) is also fluent only in English, he requested that his readers comment on his post and share their perspectives on these common challenges.  The last time I checked, the post had over 170 comments.

One of the most telling things for me is that this discussion wasn't limited to blogging in the business world. 

For me personally, I would have no choice but to blog in my primary language.  Just as an example, if Spanish were the primary language of the business blogging world, then I would have to either settle for a smaller readership or simply not blog at all. 

Despite my four academic years with the language, just about the only complete sentence I can say in Spanish today is:

¿Dónde está el baño?

I can (pretend to) speak Danish

Some of you are probably thinking: What about computer software and online services for language translation?

I have always used Yahoo! Babel Fish (and did so long before it was purchased by Yahoo).  It is far from the most robust online translation service, but the science fiction reference in its name (see above) is likely the reason I frequent that particular website.

Many have told me that Google Language Tools is probably the most advanced (and free) online language translation service currently available.  However, no tool can make it as easy as science fiction—at least no current (free or otherwise) tool.

I recently used these tools to say Tak for din kommentar (“Thanks for your comment” in Danish) to Henrik Liliendahl Sørensen, for the excellent comment he left on one of my recent social media blog posts, which inspired me to write this blog post.

Not bad Danish for a non-native speaker, huh?  Well, to be completely honest—that was the final translation provided by Henrik after my initial attempt (although close) was not quite correct.    

And just to name one of the current options for blog translation, Wibiya is a free service allowing you to integrate applications and widgets into a customized web-based toolbar for your blog.  One of those applications is Translation, which is powered by Google Translate, and allows your blog readers to translate any page on your website into their native language with just a single click of their mouse.  If you would like to view an example of a blog using this feature, then please visit: Phil Simon's Blog.

 

What makes language translation so difficult?

Although the current online language translation services are helpful, they are far from perfect.

The most common challenge is what is referred to as round-trip translation, where an intermediate (pivot) language is used in the process of translation (most often the primary language of the translator).

As a simplistic example, let’s pretend I wanted to translate the earlier Danish phrase into Spanish. 

I would begin with a Danish to English translation (back to my primary language as a starting point), then an English to Spanish translation, and finally a Spanish to Danish translation (for verification purposes):

Tak for din kommentar –> Thank you for your comment (Danish –> English)

Thank you for your comment –> Gracias por tu comentario (English –> Spanish)

Gracias por tu comentario –> Tak for din kommentar (Spanish –> Danish)
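The three-step pivot process above can be sketched in code.  This is only a toy illustration: the phrase tables below are hard-coded stand-ins invented for this post, not calls to any real translation service.

```python
# Toy pivot ("round-trip") translation: Danish -> English -> Spanish -> Danish.
# The phrase tables are hard-coded stand-ins for a real translation service.
DA_TO_EN = {"Tak for din kommentar": "Thank you for your comment"}
EN_TO_ES = {"Thank you for your comment": "Gracias por tu comentario"}
ES_TO_DA = {"Gracias por tu comentario": "Tak for din kommentar"}

def round_trip(phrase: str) -> str:
    """Translate through English and Spanish, then back to Danish."""
    english = DA_TO_EN[phrase]   # Danish -> English (back to a familiar language)
    spanish = EN_TO_ES[english]  # English -> Spanish (the target language)
    return ES_TO_DA[spanish]     # Spanish -> Danish (for verification)

print(round_trip("Tak for din kommentar"))  # -> Tak for din kommentar
```

With a short, common phrase the round trip comes back intact; as the next example shows, longer text is not so lucky.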

Now, let’s try a more complex example by translating my favorite social media paragraph from English to Spanish to Danish, where for the sake of this example, we will pretend Spanish is my primary language.

Original paragraph in English:

Rapid advancements in technology, coupled with the meteoric rise of the Internet and social media (blogs, Twitter, Facebook, LinkedIn, etc.) has created an amazing medium that is enabling people separated by vast distances and disparate cultures to come together, communicate, and collaborate in ways few would have thought possible less than a decade ago.

English –> Spanish:

Los rápidos avances en tecnología, junto con el meteórico ascenso de Internet y medios de comunicación social (blogs, Twitter, Facebook, LinkedIn, etc) ha creado un medio sorprendente que es posible que las personas separadas por enormes distancias y culturas diferentes se reúnan, se comunican, y colaborar en cuantas formas hubiera creído posible menos de una década atrás.  

Spanish –> Danish:

Hurtige fremskridt inden for teknologi, kombineret med den rivende anledning af internettet og sociale medier (blogs, Twitter, Facebook, LinkedIn, osv.) har skabt et miljø, der kan overraske folk adskilt af store afstande og forskellige kulturer mødes, kommunikere og samarbejde på måder få troede muligt mindre end et årti siden.

Danish –> English:

Rapid advances in technology, coupled with the meteoric rise of the Internet and social media (blogs, Twitter, Facebook, LinkedIn, etc.) has created an environment that may surprise people separated by great distances and different cultures meet, communicate and collaborate in ways few thought possible less than a decade ago.

The differences are relatively minor:

  1. “advancements” –> “advances”
  2. “amazing medium that is enabling people” –> “environment that may surprise people”
  3. “separated by vast distances and disparate cultures” –> “separated by great distances and different cultures”
  4. “come together, communicate, and collaborate” –> “meet, communicate and collaborate”
  5. “in ways few would have thought possible” –> “in ways few thought possible”

However, #2 (“enabling” –> “surprise”) and, to a lesser extent, #4 (“come together” –> “meet”) have not only lessened the dramatic effect of my original words, but may also leave the overall message open to different interpretations.

Therefore, it is easy to imagine the challenges inherent in translating entire blog posts or websites.
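Word drift like the five differences listed above can also be surfaced automatically.  As a sketch, Python's standard difflib module can compare the original and round-tripped phrasing word by word (the two strings below are excerpts from the paragraphs in this post):

```python
import difflib

original = ("amazing medium that is enabling people separated by vast distances "
            "and disparate cultures to come together, communicate, and collaborate "
            "in ways few would have thought possible")
round_tripped = ("environment that may surprise people separated by great distances "
                 "and different cultures meet, communicate and collaborate "
                 "in ways few thought possible")

# ndiff marks words only in the original with "- " and words only in the
# round-tripped version with "+ "; unchanged words are left unmarked.
diff = difflib.ndiff(original.split(), round_tripped.split())
changes = [token for token in diff if token.startswith(("- ", "+ "))]
print(changes)
```

Running this flags exactly the kind of substitutions noted above, such as “amazing” dropping out and “environment” appearing in its place.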

 

Can Social Media become a Universal Translator?

Will the rapid evolution of social media technology, combined with its widespread adoption for communication and collaboration, be able to deliver on science fiction’s promise of a universal translator?

Although we still don’t have warp drive or lightsabers, we do have some of the other seemingly impossible technologies from science fiction—just compare that mobile device you carry around with you to the communicator and tricorder from the original Star Trek television show.

Therefore, I remain hopeful that a universal translator is in our not too distant future.

OOBE-DQ, Where Are You?

Scooby-Doo, Where Are You!

Enterprise software is often viewed as a commercial off-the-shelf (COTS) product, which, in theory, is supposed to provide significant advantages over bespoke, in-house solutions.  In this blog post, I want to discuss your expectations about the out-of-box-experience (OOBE) provided by data quality (DQ) software, or as I prefer to phrase this question:

OOBE-DQ, Where Are You?

Common DQ Software Features

There are many DQ software vendors to choose from, and all of them offer viable solutions driven by impressive technology.  Many of these vendors have very similar approaches to DQ, and therefore provide similar technology with common features, including the following (Please Note: some vendors have a suite of related products collectively providing these features):

  • Data Profiling
  • Data Quality Assessment
  • Data Standardization
  • Data Matching
  • Data Consolidation
  • Data Integration
  • Data Quality Monitoring
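To make the first couple of features concrete, here is a minimal data profiling sketch.  It is invented for this post (not based on any vendor's product) and simply counts missing and distinct values per field, which also happens to surface the kind of case drift that data standardization would later fix:

```python
# A few sample customer records; None marks a missing value.
records = [
    {"name": "Ada",  "city": "London", "phone": None},
    {"name": "Ada",  "city": "london", "phone": "555-0100"},
    {"name": None,   "city": "Boston", "phone": "555-0101"},
]

def profile(records, field):
    """Return (null count, set of distinct non-null values) for one field."""
    values = [r[field] for r in records]
    nulls = sum(1 for v in values if v is None)
    distinct = set(v for v in values if v is not None)
    return nulls, distinct

nulls, distinct = profile(records, "phone")
print(nulls, sorted(distinct))  # -> 1 ['555-0100', '555-0101']

nulls, distinct = profile(records, "city")
print(nulls, sorted(distinct))  # -> 0 ['Boston', 'London', 'london']
```

Note how “London” and “london” count as two distinct values: profiling surfaces the problem, and standardization (and later matching and consolidation) would resolve it.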

A common aspect of OOBE-DQ is the “ease of use” vs. “powerful functionality” debate—ignoring the Magic Beans phenomenon, where the Machiavellian salesperson guarantees you their software is both remarkably easy to use and incredibly powerful.

 

So just how easy is your Ease of Use?

Brainiac

“Ease of use” can be difficult to quantify since it needs to take into account several aspects:

— Installation and configuration
— Integration within a suite of related products (or connectivity to other products)
— Intuitiveness of the user interface(s)
— Documentation and context sensitive help screens
— Ability to effectively support a multiple user environment
— Whether performed tasks are aligned with different types of users

There are obviously other aspects, some of which may vary depending on your DQ initiative, your specific industry, or your organizational structure.  However, the bottom line is that the DQ software hopefully doesn't require your users to be as smart as Brainiac (pictured above) in order to figure out how to use it both effectively and efficiently.

 

DQ Powers—Activate!

The Wonder Twins with Gleek - Art by Alex Ross

Ease of use is obviously a very important aspect of OOBE-DQ.  However, as Duke Ellington taught us, it don't mean a thing, if it ain't got that swing—in other words, if it's easy to use but can't do anything, what good is it?  Therefore, powerful functionality is also important.

“Powerful functionality” can be rather subjective, but probably needs to at least include these aspects:

— Fast processing speed
— Scalable architecture
— Batch and near real-time execution modes
— Pre-built functionality for common tasks
— Customizable and reusable components

Once again, there are obviously other aspects, especially depending on the specifics of your situation.  However, in my opinion, one of the most important aspects of DQ functionality is how it helps (as pictured above) enable Zan (i.e., technical stakeholders) and Jayna (i.e., business stakeholders) to activate their most important power—collaboration.  And of course, sometimes even the Wonder Twins needed the help of their pet space monkey Gleek (i.e., data quality consultants).

 

OOBE-DQ, Where Are You?

Where are you in the OOBE-DQ debate?  In other words, what are your expectations when evaluating the out-of-box-experience (OOBE) provided by data quality (DQ) software?

Where do you stand in the “ease of use” vs. “powerful functionality” debate? 

Are there situations where the prioritization of ease of use makes a lack of robust functionality more acceptable? 

Are there situations where the prioritization of powerful functionality makes a required expertise more acceptable?

Please share your thoughts by posting a comment below.

 

Follow OCDQ

If you enjoyed this blog post, then please subscribe to OCDQ via my RSS feed or my E-mail updates.

You can also follow OCDQ on Twitter, fan the Facebook page for OCDQ, and connect with me on LinkedIn.