Devising a Mobile Device Strategy

As I previously blogged in The Age of the Mobile Device, the disruptiveness of mobile devices to existing business models is difficult to overstate.  Mobile was also cited as one of the complementary technology forces, along with social and cloud, in the recent Harvard Business Review blog post by R “Ray” Wang about new business models being enabled by big data.

Since their disruptiveness to existing IT models is also difficult to overstate, this post ponders the Bring Your Own Device (BYOD) trend that’s forcing businesses of all sizes to devise a mobile device strategy.  BYOD is often not about bringing your own device to the office, but about bringing your own device with you wherever you go (even though the downside of this untethered enterprise may be that our always precarious work-life balance surrenders to the pervasive work-is-life feeling mobile devices can enable).

In his recent InformationWeek article, BYOD: Why Mobile Device Management Isn’t Enough, Michael Davis observed that too many IT departments are not devising a mobile device strategy, but instead “they’re merely scrambling to meet pressure from the CEO on down to offer BYOD options or increase mobile app access.”  Davis also noted that when IT creates BYOD policies, it often fails to acknowledge that mobile devices have to be managed differently, partly because they are not owned by the company.

An alternative to BYOD, which Brian Proffitt recently blogged about, is Corporate Owned, Personally Enabled (COPE). “Plenty of IT departments see BYOD as a demon to be exorcised from the cubicle farms,” Proffitt explained, “or an opportunity to dump the responsibility for hardware upkeep on their internal customers.  The idea behind BYOD is to let end users choose the devices, programs, and services that best meet their personal and business needs, with access, support, and security supplied by the company IT department — often with subsidies for device purchases.”  The idea behind COPE, by contrast, is that “the organization buys the device and still owns it, but the employee is allowed, within reason, to install the applications they want on the device.”

Whether you opt for BYOD or COPE, Information Management recently highlighted 5 Trouble Spots to consider.  These include assuming that mobile device security is already taken care of by in-house security initiatives, data integration disconnects with on-premises data that essentially turn mobile devices into mobile data silos, and the mingling of personal and business data, which complicates, among other things, remotely wiping the data on a mobile device in the event of a theft or security violation.  That last trouble spot is why, as Davis concluded, managing the company data on the device is more important than managing the device itself.

With the complex business and IT challenges involved, how is your midsize business devising a mobile device strategy?

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Related Posts

The Age of the Mobile Device

The Return of the Dumb Terminal

More Tethered by the Untethered Enterprise?

OCDQ Radio - Social Media for Midsize Businesses

Social Media Marketing: From Monologues to Dialogues

Social Business is more than Social Marketing

The Cloud is shifting our Center of Gravity

Barriers to Cloud Adoption

OCDQ Radio - Cloud Computing for Midsize Businesses

Cloud Computing is the New Nimbyism

The Cloud Security Paradox

OCDQ Radio - The Evolution of Enterprise Security

The Graystone Effects of Big Data

Big Data Lessons from Orbitz

Will Big Data be Blinded by Data Science?

The Wisdom of Crowds, Friends, and Experts

I recently finished reading the TED Book by Jim Hornthal, A Haystack Full of Needles, which included an overview of the different predictive approaches behind one of the most common forms of data-driven decision making in the era of big data: the recommendation engines increasingly provided by websites, social networks, and mobile apps.

These recommendation engines primarily employ one of three techniques, choosing to base their data-driven recommendations on the “wisdom” provided by either crowds, friends, or experts.

 

The Wisdom of Crowds

In his book The Wisdom of Crowds, James Surowiecki explained that the four conditions characterizing wise crowds are diversity of opinion, independent thinking, decentralization, and aggregation.  Amazon is a great example of a recommendation engine using this approach by assuming that a sufficiently large population of buyers is a good proxy for your purchasing decisions.

For example, Amazon tells you that people who bought James Surowiecki’s bestselling book also bought Thinking, Fast and Slow by Daniel Kahneman, Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business by Jeff Howe, and Wikinomics: How Mass Collaboration Changes Everything by Don Tapscott.  However, Amazon neither provides nor possesses knowledge of why people bought all four of these books, nor any qualification of the subject matter expertise of those readers.

These concerns, which we could think of as potential data quality issues, would be exacerbated in a small amount of transaction data, where the eclectic tastes and idiosyncrasies of individual readers would not help us decide what books to buy.  Within a large amount of transaction data, however, we achieve the Wisdom of Crowds effect: taken in aggregate, the data gives us a general sense of what books we might like to read based on what a diverse group of readers collectively makes popular.
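
To make that aggregation idea a little more concrete, here is a minimal sketch of “people who bought this also bought” co-purchase counting.  This is not Amazon’s actual algorithm (which is proprietary and far more sophisticated); the toy orders below are invented purely for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy purchase history: each order is the set of books one customer bought.
# The orders are invented purely for illustration.
orders = [
    {"The Wisdom of Crowds", "Thinking, Fast and Slow"},
    {"The Wisdom of Crowds", "Crowdsourcing", "Wikinomics"},
    {"The Wisdom of Crowds", "Thinking, Fast and Slow", "Wikinomics"},
    {"Thinking, Fast and Slow", "Crowdsourcing"},
]

# Count how often each pair of books appears in the same order.
co_purchases = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        co_purchases[pair] += 1

def also_bought(book, top_n=3):
    """Rank the books most often bought together with the given book."""
    scores = Counter()
    for (a, b), count in co_purchases.items():
        if book in (a, b):
            scores[b if a == book else a] += count
    return scores.most_common(top_n)

print(also_bought("The Wisdom of Crowds"))
# e.g. [('Thinking, Fast and Slow', 2), ('Wikinomics', 2), ('Crowdsourcing', 1)]
```

Even in this tiny example, no single buyer’s idiosyncrasies decide the ranking; it is what the group of buyers collectively makes popular that rises to the top.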

As I blogged about in my post Sometimes it’s Okay to be Shallow, sometimes the aggregated, general sentiment of a large group of unknown, unqualified strangers will be sufficient to effectively make certain decisions.

 

The Wisdom of Friends

Although the influence of our friends and family is the oldest form of data-driven decision making, historically this influence was delivered by word of mouth, which required you either to be there to hear those influential words when they were spoken, or to have a large enough network of people you knew that those words would eventually be passed along to you.

But the rise of social networking services, such as Twitter and Facebook, has transformed word of mouth into word of data by transcribing our words into short bursts of social data, such as status updates, online reviews, and blog posts.

Facebook “Likes” are a great example of a recommendation engine that uses the Wisdom of Friends, where our decision to buy a book, see a movie, or listen to a song might be based on whether or not our friends like it.  Of course, “friends” is used in a very loose sense in a social network, and not just on Facebook, since it combines strong connections, such as actual friends and family, with weak connections, such as acquaintances, friends of friends, and total strangers from the periphery of our social network.

Social influence has never ended with the people we know well, as Nicholas Christakis and James Fowler explained in their book Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives.  But the hyper-connected world enabled by the Internet, and further facilitated by mobile devices, has strengthened the social influence of weak connections, and these friends form a smaller crowd whose wisdom is involved in more of our decisions than we may even be aware of.
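
A minimal sketch of how a friends-based recommendation might weigh those strong and weak connections differently is shown below.  The names, tie strengths, and “Likes” are all invented for illustration, and real social networks obviously use far richer signals.

```python
# Toy social graph: tie strength is a made-up weight (1.0 for strong
# connections, 0.2 for weak ones); names and "Likes" are invented.
connections = {
    "Alice": 1.0,   # close friend
    "Bob":   1.0,   # family
    "Carol": 0.2,   # acquaintance
    "Dave":  0.2,   # friend of a friend
}

likes = {
    "Alice": {"Connected", "The Wisdom of Crowds"},
    "Bob":   {"Connected"},
    "Carol": {"Wikinomics"},
    "Dave":  {"Connected", "Wikinomics"},
}

def friend_recommendations(top_n=3):
    """Score items by the tie-strength-weighted number of connections who like them."""
    scores = {}
    for friend, weight in connections.items():
        for item in likes.get(friend, set()):
            scores[item] = scores.get(item, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(friend_recommendations())
# e.g. [('Connected', 2.2), ('The Wisdom of Crowds', 1.0), ('Wikinomics', 0.4)]
```

Notice that even the weak connections nudge the ranking, which is exactly how the periphery of our social network quietly participates in more of our decisions than we may realize.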

 

The Wisdom of Experts

Since it’s more common to associate wisdom with expertise, Pandora is a great example of a recommendation engine that uses the Wisdom of Experts.  Pandora used a team of musicologists (professional musicians and scholars with advanced degrees in music theory) to deconstruct more than 800,000 songs into 450 musical elements that make up each performance, including qualities of melody, harmony, rhythm, form, composition, and lyrics, as part of what Pandora calls the Music Genome Project.

As Pandora explains, their methodology uses precisely defined terminology, a consistent frame of reference, redundant analysis, and ongoing quality control to ensure that data integrity remains reliably high, believing that delivering a great radio experience to each and every listener requires an incredibly broad and deep understanding of music.

Essentially, experts form the smallest crowd of wisdom.  Of course, experts are not always right.  At the very least, experts are not right about every one of their predictions.  Nor do experts always agree with each other, which is why I imagine that one of the most challenging aspects of the Music Genome Project is getting music experts to consistently apply precisely the same methodology.

Pandora also acknowledges that each individual has a unique relationship with music (i.e., no one else has tastes exactly like yours), and allows you to “Thumbs Up” or “Thumbs Down” songs without affecting other users, producing more personalized results than either the popularity predicted by the Wisdom of Crowds or the similarity predicted by the Wisdom of Friends.
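
Here is a minimal sketch of that expert-driven approach: score candidate songs by their similarity to a seed song across expert-coded attributes, then honor a listener’s “Thumbs Down” without affecting anyone else.  The handful of attributes and their values are placeholders I invented; the real Music Genome Project scores each song on roughly 450 expert-assessed attributes, all of which are proprietary.

```python
# Invented expert-coded attributes on a 0-to-1 scale; the real Music Genome
# Project uses roughly 450 such attributes per song (these are placeholders).
songs = {
    "Song A": {"tempo": 0.8, "acoustic": 0.2, "minor_key": 0.1},
    "Song B": {"tempo": 0.7, "acoustic": 0.3, "minor_key": 0.2},
    "Song C": {"tempo": 0.2, "acoustic": 0.9, "minor_key": 0.8},
}

def similarity(a, b):
    """Simple inverse-distance similarity over the expert-coded attributes."""
    distance = sum((songs[a][k] - songs[b][k]) ** 2 for k in songs[a]) ** 0.5
    return 1.0 / (1.0 + distance)

def personalized_station(seed, thumbs_down=frozenset()):
    """Rank songs by similarity to the seed, honoring this listener's thumbs-downs."""
    candidates = [s for s in songs if s != seed and s not in thumbs_down]
    return sorted(candidates, key=lambda s: similarity(seed, s), reverse=True)

print(personalized_station("Song A"))                          # ['Song B', 'Song C']
print(personalized_station("Song A", thumbs_down={"Song B"}))  # ['Song C']
```

The expensive part is not the similarity arithmetic; it is the expert labor of consistently coding every song, which is why this is the most time-consuming approach.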

 

The Future of Wisdom

It’s interesting to note that the Wisdom of Experts is the only one of these approaches that relies on what data management and business intelligence professionals would consider a rigorous approach to data quality and decision quality best practices.  But this is also why the Wisdom of Experts is the most time-consuming and expensive approach to data-driven decision making.

In the past, the Wisdom of Crowds and Friends was ignored in data-driven decision making for the simple reason that this potential wisdom wasn’t digitized.  But now, in the era of big data, not only are crowds and friends digitized, but technological advancements combined with cost-effective options via open source (data and software) and cloud computing make these approaches quicker and cheaper than the Wisdom of Experts.  And despite the potential data quality and decision quality issues, the Wisdom of Crowds and/or Friends is proving itself a viable option for more categories of data-driven decision making.

I predict that the future of wisdom will increasingly become an amalgamation of experts, friends, and crowds, with the data and techniques from all three potential sources of wisdom often acknowledged as contributors to data-driven decision making.
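
One way to picture that amalgamation, purely as a sketch assuming the crowd, friend, and expert scores have already been computed separately and normalized, is a simple weighted blend; the weights below are arbitrary placeholders that in practice would need to be learned or tuned for each category of decision.

```python
def blended_score(crowd, friends, experts, weights=(0.4, 0.3, 0.3)):
    """Blend crowd, friend, and expert scores (each assumed normalized to 0-1)
    into one recommendation score; the weights are arbitrary placeholders."""
    w_crowd, w_friends, w_experts = weights
    return w_crowd * crowd + w_friends * friends + w_experts * experts

# An item popular with the crowd, moderately liked by friends, rated middling by experts:
print(round(blended_score(crowd=0.9, friends=0.5, experts=0.6), 2))  # 0.69
```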

 

Related Posts

Sometimes it’s Okay to be Shallow

Word of Mouth has become Word of Data

The Wisdom of the Social Media Crowd

Data Management: The Next Generation

Exercise Better Data Management

Darth Vader, Big Data, and Predictive Analytics

Data-Driven Intuition

The Big Data Theory

Finding a Needle in a Needle Stack

Big Data, Predictive Analytics, and the Ideal Chronicler

The Limitations of Historical Analysis

Magic Elephants, Data Psychics, and Invisible Gorillas

OCDQ Radio - Data Quality and Big Data

Big Data: Structure and Quality

HoardaBytes and the Big Data Lebowski

The Data-Decision Symphony

OCDQ Radio - Decision Management Systems

A Tale of Two Datas

Open MIKE Podcast — Episode 08

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for Enterprise Information Management, which provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space.  For more information, click on this link: openmethodology.org/wiki/What_is_MIKE2.0

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

 

Episode 08: Information Lifecycle Management

If you’re having trouble viewing this video, you can watch it on Vimeo by clicking on this link: Open MIKE Podcast on Vimeo

 

MIKE2.0 Content Featured in or Related to this Podcast

Information Asset Management: openmethodology.org/wiki/Information_Asset_Management_Offering_Group

Information Lifecycle Management: openmethodology.org/wiki/Information_Lifecycle_Management_Solution_Offering

You can also find the videos and blog post summaries for every episode of the Open MIKE Podcast at: ocdqblog.com/MIKE

Social Business is more than Social Marketing

Although much of the early business use of social media was largely focused on broadcasting marketing messages at customers, social media transformed word of mouth into word of data and empowered customers to add their voice to marketing messages, forcing marketing to evolve from monologues to dialogues.  But is the business potential of social media limited to marketing?

During the MidMarket IBM Social Business #Futurecast, a panel discussion from earlier this month, Ed Brill, author of the forthcoming book Opting In: Lessons in Social Business from a Fortune 500 Product Manager, defined the term social business as “an organization that engages employees in a socially-enabled process that brings together how employees interact with each other, partners, customers, and the marketplace.  It’s about bringing all the right people, both internally and externally, together in a conversation to solve problems, be innovative and responsive, and better understand marketplace dynamics.”

“Most midsize businesses today,” Laurie McCabe commented, “are still grappling with how to supplement traditional applications and tools with some of the newer social business tools.  Up until now, the focus has been on integrating social media into a lot of marketing communications, and we haven’t yet seen the integration of social media into other business processes.”

“Midsize businesses understand,” Handly Cameron remarked, “how important it is to get into social media, but they’re usually so focused on daily operations that they think that a social business is simply one that uses social media, and therefore they cite the fact that they created Twitter and Facebook accounts as proof that they are a social business, but again, they are focusing on external uses of social media and not internal uses such as improving employee collaboration.”

Collaboration was a common theme throughout the panel discussion.  Brill said a social business is one that has undergone the cultural transformation required to embrace the fact that it is a good idea to share knowledge.  McCabe remarked that the leadership of a social business rewards employees for sharing knowledge, not for hoarding knowledge.  She also emphasized the importance of culture before tools since simply giving individuals social tools will not automatically create a collaborative culture.

Cameron also noted how the widespread adoption of cloud computing and mobile devices is helping to drive the adoption of social tools for collaboration, and helping to break down a lot of the traditional boundaries to knowledge sharing, especially as more organizations are becoming less bounded by the physical proximity of their employees, partners, and customers.

From my perspective, even though marketing might have been how social media got in the front door of many organizations, social media has always been about knowledge sharing and collaboration.  And with mobile, cloud, and social technologies so integrated into our personal and professional lives, life and business are both more social and collaborative than ever before.  So, even if collaboration isn’t in the genes of your organization, it’s no longer possible to put the collaboration genie back in the bottle.

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Related Posts

Social Media Marketing: From Monologues to Dialogues

OCDQ Radio - Social Media for Midsize Businesses

Word of Mouth has become Word of Data

Information Asymmetry versus Empowered Customers

OCDQ Radio - Social Media Strategy

The Challenging Gift of Social Media

Listening and Broadcasting

Quality is more important than Quantity

Demystifying Social Media

Social Karma

The Limitations of Historical Analysis

This blog post is sponsored by the Enterprise CIO Forum and HP.

“Those who cannot remember the past are condemned to repeat it,” wrote George Santayana in the early 20th century, cautioning us against forgetting the lessons of history.  But with the arrival of the era of big data and the dawn of the data scientist in the early 21st century, it seems like we no longer have to worry about this problem, since not only is big data allowing us to digitize history, but data science is also building us sophisticated statistical models with which we can analyze history in order to predict the future.

However, “every model is based on historical assumptions and perceptual biases,” Daniel Rasmus blogged. “Regardless of the sophistication of the science, we often create models that help us see what we want to see, using data selected as a good indicator of such a perception.”  Although perceptual bias is a form of the data silence I previously blogged about, even absent such a bias, there are limitations to what we can predict about the future based on our analysis of the past.

“We must remember that all data is historical,” Rasmus continued. “There is no data from or about the future.  Future context changes cannot be built into a model because they cannot be anticipated.”  Rasmus used the example that no models of retail supply chains in 1962 could have predicted the disruption eventually caused by that year’s debut of a small retailer in Arkansas called Wal-Mart.  And no models of retail supply chains in 1995 could have predicted the disruption eventually caused by that year’s debut of an online retailer called Amazon.  “Not only must we remember that all data is historical,” Rasmus explained, “but we must also remember that at some point historical data becomes irrelevant when the context changes.”

As I previously blogged, despite what its name implies, predictive analytics can’t predict what’s going to happen with certainty, but it can predict some of the possible things that could happen with a certain probability.  Another important distinction is that “there is a difference between being uncertain about the future and the future itself being uncertain,” Duncan Watts explained in his book Everything is Obvious (Once You Know the Answer).  “The former is really just a lack of information — something we don’t know — whereas the latter implies that the information is, in principle, unknowable.  The former is an orderly universe, where if we just try hard enough, if we’re just smart enough, we can predict the future.  The latter is an essentially random world, where the best we can ever hope for is to express our predictions of various outcomes as probabilities.”
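
As a minimal sketch of that distinction, consider the difference between a point prediction (the single most frequent past outcome) and a probabilistic prediction that expresses every outcome as a probability; the historical outcomes below are invented for illustration, and of course any such estimate still assumes the future resembles the past.

```python
from collections import Counter

# Invented history of a recurring business outcome (e.g., quarterly demand bands).
history = ["low", "medium", "medium", "high", "medium", "low", "medium", "high"]
counts = Counter(history)

# A point prediction picks the single most frequent past outcome...
print("point prediction:", counts.most_common(1)[0][0])   # medium

# ...whereas a probabilistic prediction expresses every possible outcome as a
# probability, which is the best we can hope for when the future itself is uncertain.
total = sum(counts.values())
probabilities = {outcome: count / total for outcome, count in counts.items()}
print("probabilities:", probabilities)   # {'low': 0.25, 'medium': 0.5, 'high': 0.25}
```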

“When we look back to the past,” Watts explained, “we do not wish that we had predicted what the search market share for Google would be in 1999.  Instead we would end up wishing we’d been able to predict on the day of Google’s IPO that within a few years its stock price would peak above $500, because then we could have invested in it and become rich.  If our prediction does not somehow help to bring about larger results, then it is of little interest or value to us.  We care about things that matter, yet it is precisely these larger, more significant predictions about the future that pose the greatest difficulties.”

Although we should heed Santayana’s caution and try to learn history’s lessons in order to factor into our predictions about the future what was relevant from the past, as Watts cautioned, there will be many times when “what is relevant can’t be known until later, and this fundamental relevance problem can’t be eliminated simply by having more information or a smarter algorithm.”

Although big data and data science can certainly help enterprises learn from the past in order to predict some probable futures, the future does not always resemble the past.  So, remember the past, but also remember the limitations of historical analysis.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Data Silence

Magic Elephants, Data Psychics, and Invisible Gorillas

OCDQ Radio - Data Quality and Big Data

Big Data: Structure and Quality

WYSIWYG and WYSIATI

Will Big Data be Blinded by Data Science?

Big Data el Memorioso

Information Overload Revisited

HoardaBytes and the Big Data Lebowski

The Data-Decision Symphony

OCDQ Radio - Decision Management Systems

The Big Data Theory

Finding a Needle in a Needle Stack

Darth Vader, Big Data, and Predictive Analytics

Data-Driven Intuition

A Tale of Two Datas

Data Silence

This blog post is sponsored by the Enterprise CIO Forum and HP.

In the era of big data, information optimization is becoming a major topic of discussion.  But when some people discuss the big potential of big data analytics under the umbrella term of data science, they make it sound like since we have access to all the data we would ever need, all we have to do is ask the Data Psychic the right question and then listen intently to the answer.

However, in his recent blog post Silence Isn’t Always Golden, Bradley S. Fordham, PhD explained that “listening to what the data does not say is often as important as listening to what it does.  There can be various types of silences in data that we must get past to take the right actions.”  Fordham described these data silences as various potential gaps in our analysis.

One data silence is syntactic gaps, which occur when a proportionately small amount of data in a very large data set “will not parse (be converted from raw data into meaningful observations with semantics or meaning) in the standard way.  A common response is to ignore them under the assumption there are too few to really matter.  The problem is that oftentimes these items fail to parse for similar reasons and therefore bear relationships to each other.  So, even though it may only be .1% of the overall population, it is a coherent sub-population that could be telling us something if we took the time to fix the syntactic problems.”

This data silence reminded me of my podcast discussion with Thomas C. Redman, PhD about big data and data quality, during which we discussed how some people erroneously assume that data quality issues can be ignored in larger data sets.
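
As a minimal sketch of why even that small fraction deserves attention, consider profiling the records that fail to parse instead of silently dropping them.  The raw records below are invented, but they show how parse failures can share a common cause (here, European date and decimal formats) and therefore form a coherent sub-population.

```python
# Invented raw sales records: most parse cleanly, a few do not.
raw_records = [
    "2012-11-01,1499.00,USD",
    "2012-11-02,899.00,USD",
    "01/11/2012,1.299,00,EUR",   # European date and decimal separators
    "2012-11-03,749.00,USD",
    "02/11/2012,2.450,00,EUR",   # same European formats again
]

parsed, failures = [], []
for record in raw_records:
    fields = record.split(",")
    # The "standard way": ISO date, then amount, then currency code.
    if len(fields) == 3 and fields[0].count("-") == 2:
        parsed.append(fields)
    else:
        failures.append(record)

# Instead of ignoring the failures, profile them: do they share a cause?
european_style = sum(1 for r in failures if "/" in r)
print(f"{len(failures)} of {len(raw_records)} records failed to parse")
print(f"{european_style} of those failures use European date/decimal formats")
```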

Another data silence is inferential gaps, which occur when an inference is based on only one variable in a data set.  The example Fordham uses is from a data set showing that 41% of the cars sold during the first quarter of the year were blue, from which we might be tempted to infer that customers bought more blue cars because they preferred blue.  However, by looking at additional variables in the data set and noticing that “70% of the blue cars sold were from the previous model year, it is likely they were discounted to clear them off the lots, thereby inflating the proportion of blue cars sold.  So, maybe blue wasn’t so popular after all.”
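
Here is a minimal sketch of that inferential gap, using invented sales records roughly consistent with Fordham’s figures: the single-variable view suggests blue was popular, while a second variable (model year) tells a different story.

```python
# Invented car-sales records consistent with the quoted figures: each tuple
# is (color, model_year), and 2011 stands in for the previous model year.
sales = (
    [("blue", 2011)] * 287 + [("blue", 2012)] * 123 +   # 410 blue cars sold
    [("red", 2012)] * 300 + [("white", 2012)] * 290     # 590 other cars sold
)

blue = [s for s in sales if s[0] == "blue"]

# Looking at only one variable suggests customers preferred blue...
print(f"blue share of all sales: {len(blue) / len(sales):.0%}")   # 41%

# ...but a second variable shows most blue cars sold were prior-model-year
# stock, likely discounted to clear them off the lots.
prior_year = sum(1 for _, year in blue if year == 2011)
print(f"blue cars from the previous model year: {prior_year / len(blue):.0%}")  # 70%
```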

Another data silence Fordham described using the same data set is gaps in field of view.  “At first glance, knowing everything on the window sticker of every car sold in the first quarter seems to provide a great set of data to understand what customers wanted and therefore were buying.  At least it did until we got a sinking feeling in our stomachs because we realized that this data only considers what the auto manufacturer actually built.  That field of view is too limited to answer the important customer desire and motivation questions being asked.  We need to break the silence around all the things customers wanted that were not built.”

This data silence reminded me of WYSIATI, which is an acronym coined by Daniel Kahneman to describe how the data you are looking at can greatly influence you to jump to the comforting, but false, conclusion that “what you see is all there is,” thereby preventing you from expanding your field of view to notice what data might be missing from your analysis.

As Fordham concluded, “we need to be careful to listen to all the relevant data, especially the data that is silent within our current analyses.  Applying that discipline will help avoid many costly mistakes that companies make by taking the wrong actions from data even with the best of techniques and intentions.”

Therefore, in order for your enterprise to leverage big data analytics for business success, you not only need to adopt a mindset that embraces the principles of data science, you also need to make sure that your ears are set to listen for data silence.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Magic Elephants, Data Psychics, and Invisible Gorillas

OCDQ Radio - Data Quality and Big Data

Big Data: Structure and Quality

WYSIWYG and WYSIATI

Will Big Data be Blinded by Data Science?

Big Data el Memorioso

Information Overload Revisited

HoardaBytes and the Big Data Lebowski

The Data-Decision Symphony

OCDQ Radio - Decision Management Systems

The Big Data Theory

Finding a Needle in a Needle Stack

Darth Vader, Big Data, and Predictive Analytics

Data-Driven Intuition

A Tale of Two Datas

Barriers to Cloud Adoption

I previously blogged about leveraging the cloud for application development, noting that as the cloud computing market matures, we are seeing an increasing number of robust infrastructure as a service (IaaS) and platform as a service (PaaS) offerings that can accelerate new application development, as well as facilitate the migration of existing applications and data to the cloud.

A recent LinkedIn discussion about cloud computing asked whether small and midsize businesses (SMB) are embracing all that the cloud has to offer and, if not, then what are the most common barriers to cloud adoption.

“There is a lot of skepticism,” Sabharinath Bala noted, “about hosting apps and data in the cloud.  Not all SMBs are confident about cloud-based apps due to reasons ranging from data privacy and security to federal regulations.  I’ve seen quite a few SMBs embracing the cloud by hosting internal apps (payroll, HCM, etc.) in the cloud first and then moving on to apps that contain client confidential data.  In most cases, this is more of an exercise to build confidence about data security and privacy issues.”

Concern about data security and privacy issues is understandably the most commonly cited barrier to migrating applications, and the often sensitive data they contain, to the cloud.  This is why, as Steve O’Donnell commented, “commodity applications such as email, document management, and communications are being migrated first.  However, extremely critical applications such as CRM, ERP, and salesforce management are being adopted quickly as these really appeal to mobile workers.”

I have previously blogged about mobile devices being the biggest driver for cloud adoption, since almost all mobile applications are based on a mobile-app-portal-to-the-cloud computing model.  Therefore, since mobile devices cannot be leveraged to their fullest potential without the cloud, it is not surprising to see a high correlation between cloud adoption and mobile enablement.

Nor is it surprising to see that “the S in SMB is adopting the cloud faster than the M,” as Karthik Balachandran observed, “partially because the cloud has given smaller businesses access to IT assets that they did not have before.  But, larger businesses still enjoy returns from their traditional IT investments.  Call it legacy drag?”

Legacy drag is certainly a real concern, but another reason smaller firms may be migrating faster is because, as Karen Harrison commented, “companies with larger IT departments also feel a sense of loyalty to the people they have, and that also contributes to the lag.  In today’s economy, many companies don’t want to lay off workers who have been with them a long time.”

But lacking some of these legacy challenges facing larger businesses doesn’t necessarily mean that SMBs have an easier path to the cloud.  Although “there is no reason for your average SMB to not leverage what is available in the cloud to the fullest,” noted Fred McClimans, “realistically, this is not a technology issue, but rather a behavioral issue that goes well beyond the cloud: we’ve been conditioned to think that we have to physically own something to control it, keep it safe, or treat it as an asset.  Rather than focusing on owning assets, we need to get businesses to begin to think about leveraging assets.  And just like feeling comfortable with cloud-based applications, this is an educational/comfort issue.”

What other barriers to cloud adoption have you encountered in your organization?

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Related Posts

Leveraging the Cloud for Application Development

OCDQ Radio - Cloud Computing for Midsize Businesses

A Swift Kick in the AAS

The Age of the Mobile Device

Cloud Computing is the New Nimbyism

Lightning Strikes the Cloud

The Cloud Security Paradox

The Cloud is shifting our Center of Gravity

Are Cloud Providers the Bounty Hunters of IT?

The Partly Cloudy CIO

Open MIKE Podcast — Episode 06

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for Enterprise Information Management, which provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space.  For more information, click on this link: openmethodology.org/wiki/What_is_MIKE2.0

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

 

Episode 06: Getting to Know NoSQL

If you’re having trouble viewing this video, you can watch it on Vimeo by clicking on this link: Open MIKE Podcast on Vimeo

 

MIKE2.0 Content Featured in or Related to this Podcast

Big Data Solution Offering: openmethodology.org/wiki/Big_Data_Solution_Offering

Preparing for NoSQL: openmethodology.org/wiki/Preparing_for_NoSQL

Hadoop and the Enterprise Debates: openmethodology.org/wiki/Hadoop_and_the_Enterprise_Debates

Big Data Definition: openmethodology.org/wiki/Big_Data_Definition

Big Sensor Data: openmethodology.org/wiki/Big_sensor_data

You can also find the videos and blog post summaries for every episode of the Open MIKE Podcast at: ocdqblog.com/MIKE

 

Related Posts

Data Management: The Next Generation

Is DW before BI going Bye-Bye?

Our Increasingly Data-Constructed World

Dot Collectors and Dot Connectors

HoardaBytes and the Big Data Lebowski

OCDQ Radio - Data Quality and Big Data

Exercise Better Data Management

A Tale of Two Datas

Big Data Lessons from Orbitz

The Graystone Effects of Big Data

Will Big Data be Blinded by Data Science?

Magic Elephants, Data Psychics, and Invisible Gorillas

Big Data el Memorioso

Information Overload Revisited

Finding a Needle in a Needle Stack

Darth Vader, Big Data, and Predictive Analytics

Swimming in Big Data

The Big Data Theory

Big Data: Structure and Quality

Sometimes it’s Okay to be Shallow

The Evolution of Enterprise Security

This podcast episode is sponsored by the Enterprise CIO Forum and HP.

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

During this episode, Bill Laberis and I discuss the necessary evolution of enterprise security in the era of cloud computing and mobile devices.  Our discussion includes public, private, and hybrid clouds, leveraging existing security best practices, defining BYOD (Bring Your Own Device) policies, mobile device management, and striking a balance between convenience and security.

Bill Laberis is the Editorial Director of the Enterprise CIO Forum, in which capacity he oversees the content of both its US and international websites.  He is also Editorial Director and Social Media Manager in the IDG Custom Solutions Group, working closely with clients to create highly individualized custom content programs that leverage the wide range of media capabilities, including print, online, multimedia, and custom events.

Bill Laberis was editor-in-chief of Computerworld from 1986-1996, has been a frequent speaker and keynoter, and has written for various business publications including The Wall Street Journal.  He has been closely following the IT sector for 30 years.

Leveraging the Cloud for Application Development

For most of my career, I developed on-premises applications for the clients of enterprise software vendors.  Most of the time, effort, and money spent during the initial phases of those projects was allocated to setting up the application environments.

After the client purchased any missing (or upgraded existing) components of the required on-premises technology infrastructure, software had to be installed, hardware had to be configured, and disk space had to be allocated.  Then separate environments had to be established for development, testing, staging, and production, while setting up user accounts with the appropriate levels of access and security for each environment.  Finally, source control and provisioning procedures had to be implemented.

Therefore, a significant amount of time, effort, and money was expended before application development even began.  Of course, resources also had to be allocated to maintain these complex environments throughout the entire application lifecycle.

As the cloud computing market matures, we are seeing an increasing number of robust infrastructure as a service (IaaS) and platform as a service (PaaS) offerings, which can accelerate application development, especially for midsize businesses.

“The cloud offers immense advantages,” Steve Garone recently blogged, “in terms of agility and flexibility, making it easier, and if automation is employed, almost transparent to make assets available in real time when and where needed.  These advantages are valuable for midsize businesses because the resources and expertise needed to implement a fully automated cloud-based solution may not exist within a smaller IT staff used to managing a less complex environment.”  Nevertheless, Garone recommends a close examination of not just the benefits, but also the costs and, most important, the ROI associated with cloud-based solutions.

Leveraging the cloud for application development does have clear advantages.  However, application development environments are still complex to manage.  Even though most of that complexity will be conveniently concealed by the cloud, it will still exist.

Carefully investigate the security, scalability, and reliability of cloud service providers.  IaaS and PaaS have matured enough to be viable options for application development, but don’t allow the chance to jump-start your development cloud your judgment.

 

This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.

 

Related Posts

OCDQ Radio - Cloud Computing for Midsize Businesses

A Swift Kick in the AAS

Cloud Computing is the New Nimbyism

Lightning Strikes the Cloud

The Cloud Security Paradox

Can the Enterprise really be Secured?

The Cloud is shifting our Center of Gravity

Are Cloud Providers the Bounty Hunters of IT?

The Partly Cloudy CIO

OCDQ Radio - Saving Private Data

Availability Bias and Data Quality Improvement

The availability heuristic is a mental shortcut that occurs when people make judgments based on the ease with which examples come to mind.  Although this heuristic can be beneficial, such as when it helps us recall examples of a dangerous activity to avoid, sometimes it leads to availability bias, where we’re affected more strongly by the ease of retrieval than by the content retrieved.

In his thought-provoking book Thinking, Fast and Slow, Daniel Kahneman explained how availability bias works by recounting an experiment where different groups of college students were asked to rate a course they had taken the previous semester by listing ways to improve the course — while varying the number of improvements that different groups were required to list.

Counterintuitively, students in the group required to list more necessary improvements gave the course a higher rating, whereas students in the group required to list fewer necessary improvements gave the course a lower rating.

According to Kahneman, the extra cognitive effort expended by the students required to list more improvements biased them into believing it was difficult to list necessary improvements, leading them to conclude that the course didn’t need much improvement, and conversely, the little cognitive effort expended by the students required to list few improvements biased them into concluding, since it was so easy to list necessary improvements, that the course obviously needed improvement.

This is counterintuitive because you’d think that the students would rate the course based on an assessment of the information retrieved from their memory, regardless of how easy that information was to retrieve.  It would have made more sense for the course to be rated higher for needing fewer improvements, but availability bias led the students to the opposite conclusion.

Availability bias can also affect an organization’s discussions about the need for data quality improvement.

If you asked stakeholders to rate the organization’s data quality by listing business-impacting incidents of poor data quality, would they reach a different conclusion if you asked them to list one incident versus asking them to list at least ten incidents?

In my experience, an event where poor data quality negatively impacted the organization, such as a regulatory compliance failure, is often easily dismissed by stakeholders as an isolated incident to be corrected by a one-time data cleansing project.

But would forcing stakeholders to list ten business-impacting incidents of poor data quality make them concede that data quality improvement should be supported by an ongoing program?  Or would the extra cognitive effort bias them into concluding, since it was so difficult to list ten incidents, that the organization’s data quality doesn’t really need much improvement?

I think that the availability heuristic helps explain why most organizations easily approve reactive data cleansing projects, and availability bias helps explain why most organizations usually resist proactively initiating a data quality improvement program.

 

Related Posts

DQ-View: The Five Stages of Data Quality

Data Quality: Quo Vadimus?

Data Quality and Chicken Little Syndrome

The Data Quality Wager

You only get a Return from something you actually Invest in

“Some is not a number and soon is not a time”

Why isn’t our data quality worse?

Data Quality and the Bystander Effect

Data Quality and the Q Test

Perception Filters and Data Quality

Predictably Poor Data Quality

WYSIWYG and WYSIATI

 

Related OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Organizing for Data Quality — Guest Tom Redman (aka the “Data Doc”) discusses how your organization should approach data quality, including his call to action for your role in the data revolution.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.
  • Redefining Data Quality — Guest Peter Perera discusses his proposed redefinition of data quality, as well as his perspective on the relationship of data quality to master data management and data governance.
  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

Open MIKE Podcast — Episode 05

Method for an Integrated Knowledge Environment (MIKE2.0) is an open source delivery framework for Enterprise Information Management, which provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space.  For more information, click on this link: openmethodology.org/wiki/What_is_MIKE2.0

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

 

Episode 05: Defining Big Data

If you’re having trouble viewing this video, you can watch it on Vimeo by clicking on this link: Open MIKE Podcast on Vimeo

 

MIKE2.0 Content Featured in or Related to this Podcast

Big Data Definition: openmethodology.org/wiki/Big_Data_Definition

Big Sensor Data: openmethodology.org/wiki/Big_sensor_data

Hadoop and the Enterprise Debates: openmethodology.org/wiki/Hadoop_and_the_Enterprise_Debates

Preparing for NoSQL: openmethodology.org/wiki/Preparing_for_NoSQL

Big Data Solution Offering: openmethodology.org/wiki/Big_Data_Solution_Offering

You can also find the videos and blog post summaries for every episode of the Open MIKE Podcast at: ocdqblog.com/MIKE

 

Related Posts

Our Increasingly Data-Constructed World

Dot Collectors and Dot Connectors

HoardaBytes and the Big Data Lebowski

OCDQ Radio - Data Quality and Big Data

Exercise Better Data Management

A Tale of Two Datas

Big Data Lessons from Orbitz

The Graystone Effects of Big Data

Will Big Data be Blinded by Data Science?

Magic Elephants, Data Psychics, and Invisible Gorillas

Big Data el Memorioso

Information Overload Revisited

Finding a Needle in a Needle Stack

Darth Vader, Big Data, and Predictive Analytics

Why Can’t We Predict the Weather?

Swimming in Big Data

The Big Data Theory

Big Data: Structure and Quality

Sometimes it’s Okay to be Shallow

Small Data and VRM

Can the Enterprise really be Secured?

This blog post is sponsored by the Enterprise CIO Forum and HP.

Over the last two months, I have been blogging a lot about how enterprise security has become an even more important, and more complex, topic of discussion than it already was.  The days of the perimeter fence model being sufficient are long gone, and social media is helping social engineering more effectively attack the weakest links in an otherwise sound security model.

With the consumerization of IT allowing Shadow IT to emerge from the shadows and the cloud and mobile devices enabling the untethering of the enterprise from the physical boundaries that historically defined where the enterprise stopped and the outside world began, I have been more frequently pondering the question: Can the enterprise really be secured?

The cloud presents the conundrum of relying on non-enterprise resources for some aspects of enterprise security.  However, “one advantage of the cloud,” Judy Redman recently blogged, “is that it drives the organization to take a more comprehensive, and effective, approach to risk governance.”  Redman’s post includes four recommended best practices for stronger cloud security.

With the growing popularity of the mobile-app-portal-to-the-cloud business model, more enterprises are embracing mobile app development for deploying services to better support both their customers and their employees.  “Mobile apps,” John Jeremiah recently blogged, “are increasingly dependent on cloud services that the apps team didn’t build, the organization doesn’t own, and the ops team doesn’t even know about.”  Jeremiah’s post includes four things to consider for stronger mobile security.

Although it is essential for every enterprise to have a well-articulated security strategy, “it is important to understand that strategy is not policy,” John Burke recently blogged.  “Security strategy links corporate strategy overall to specific security policies; policies implement strategy.”  Burke’s post includes five concrete steps to take to build a security strategy and implement security policies.

With the very notion of an enterprise increasingly becoming more of a conceptual entity than a physical entity, enterprise security is becoming a bit of a misnomer.  However, the underlying concepts of enterprise security still need to be put into practice, and even more so now that, since the enterprise has no physical boundaries, the enterprise is everywhere, which means that everyone (employees, partners, suppliers, service providers, customers) will have to work together for “the enterprise” to really be secured.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Enterprise Security and Social Engineering

The Weakest Link in Enterprise Security

Enterprise Security is on Red Alert

Securing your Digital Fortress

The Good, the Bad, and the Secure

The Data Encryption Keeper

The Cloud Security Paradox

The Cloud is shifting our Center of Gravity

Are Cloud Providers the Bounty Hunters of IT?

The Return of the Dumb Terminal

More Tethered by the Untethered Enterprise?

A Swift Kick in the AAS

Sometimes all you Need is a Hammer

Shadow IT and the New Prometheus

The Diffusion of the Consumerization of IT

Cloud Computing for Midsize Businesses

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

During this episode, Ed Abrams and I discuss cloud computing for midsize businesses, and, more specifically, we discuss aspects of the recently launched IBM global initiatives to help Managed Service Providers (MSP) deliver cloud-based service offerings.

Ed Abrams is the Vice President of Marketing, IBM Midmarket.  In this role, Ed is responsible for leading a diverse team that supports IBM’s business objectives with small and midsize businesses by developing, planning, and executing offerings and go-to-market strategies designed to help midsize businesses grow.  Ed also works closely and collaboratively with sales and channel teams and agency partners to deliver high-quality and effective marketing strategies, offerings, and campaigns.