The Data Encryption Keeper

This blog post is sponsored by the Enterprise CIO Forum and HP.

Since next week is Halloween, and Rafal Los recently blogged about how most enterprise security discussions are FUD-filled (i.e., filled with Fear, Uncertainty, and Doubt) horror stories, I decided to use Tales from the Crypt as the theme for this blog post.

 

Tales from the Encrypted

One frightening consequence of the unrelenting trend of the consumerization of IT, especially cloud computing and mobility, is that not all of the organization’s data is stored within its on-premises technology infrastructure, or accessed using devices under its control.  With an increasing percentage of enterprise data constantly in motion as a moving target in a sometimes horrifyingly hyper-connected world, data protection and data privacy are legitimate concerns and increasingly complex challenges.

Cryptography has a long history that predates the Information Age, but data encryption via cryptographic computer algorithms has played a key (sorry, I couldn’t resist the pun) role in the history of securing the organization’s data.  But instead of trying to fight the future of business being enabled by cloud and mobile technologies like it was the Zombie Data-pocalypse, we need a modern data security model that can remain good for business, but ghoulish for the gremlins, goblins, and goons of cyber crime.

Although some rightfully emphasize the need for stronger authentication to minimize cloud breaches, data encryption is often overlooked, especially the question of who should be responsible for it.  Most cloud providers use vendor-side encryption models, meaning that their customers transfer unencrypted data to the cloud, where the cloud vendor then becomes responsible for encrypting it.

 

The Data Encryption Keeper

However, as Richard Jarvis commented on my previous post, “it’s only a matter of time before there’s a highly public breakdown in the vendor-side encryption model.  Long term, I expect to see an increase in premium, client-side encryption services targeted at corporate clients.  To me, this will offer the best of both worlds, and will benefit both cloud vendors and their clients.”

I have to admit that in my own security assessments of cloud computing solutions, I verified that the cloud vendor was using strong data encryption methods, but I never considered that the responsibility for cloud data encryption might be misplaced.

So perhaps one way to prevent the cloud from becoming a haunted house for data is to pay more attention to who is cast to play the role of the Data Encryption Keeper.  And perhaps the casting call for this data security role should stay on-premises.
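
To make the distinction concrete, here is a minimal sketch of the client-side model Jarvis describes, written in Python with the widely used cryptography package.  The upload_to_cloud function is a hypothetical placeholder for whatever upload API a cloud provider exposes; the point is simply that the data is encrypted, and the key is kept, on-premises before anything leaves the building.

    # Minimal sketch of client-side (on-premises) encryption before a cloud upload.
    # Assumes the third-party "cryptography" package; upload_to_cloud() is a
    # hypothetical placeholder for a cloud provider's upload API.
    from cryptography.fernet import Fernet

    def upload_to_cloud(name: str, payload: bytes) -> None:
        # Placeholder: in practice this would call the provider's SDK or REST API.
        print(f"uploading {len(payload)} encrypted bytes as {name}")

    # The key is generated and stored on-premises; the cloud vendor never sees it.
    key = Fernet.generate_key()
    keeper = Fernet(key)

    sensitive_record = b"customer: Jane Doe, card: 4111-1111-1111-1111"
    ciphertext = keeper.encrypt(sensitive_record)   # encrypt before it leaves the building
    upload_to_cloud("record-0001", ciphertext)

    # Later, only the on-premises key holder can read the data back.
    assert keeper.decrypt(ciphertext) == sensitive_record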

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

The Cloud Security Paradox

The Good, the Bad, and the Secure

Securing your Digital Fortress

Shadow IT and the New Prometheus

Are Cloud Providers the Bounty Hunters of IT?

The Diderot Effect of New Technology

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

A Sadie Hawkins Dance of Business Transformation

Are Applications the La Brea Tar Pits for Data?

Why does the sun never set on legacy applications?

The Partly Cloudy CIO

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT

Data Governance and the Adjacent Possible

I am reading the book Where Good Ideas Come From by Steven Johnson, which examines recurring patterns in the history of innovation.  The first pattern Johnson writes about is called the Adjacent Possible, which is a term coined by Stuart Kauffman, and is described as “a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.  Yet it is not an infinite space, or a totally open playing field.  The strange and beautiful truth about the adjacent possible is that its boundaries grow as you explore those boundaries.”

Exploring the adjacent possible is like exploring “a house that magically expands with each door you open.  You begin in a room with four doors, each leading to a new room that you haven’t visited yet.  Those four rooms are the adjacent possible.  But once you open any one of those doors and stroll into that room, three new doors appear, each leading to a brand-new room that you couldn’t have reached from your original starting point.  Keep opening new doors and eventually you’ll have built a palace.”

If it ain’t broke, bricolage it

“If it ain’t broke, don’t fix it” is a common defense of the status quo, which often encourages an environment that stifles innovation and the acceptance of new ideas.  The status quo is like staying in the same familiar and comfortable room and choosing to keep all four of its doors closed.

The change management efforts of data governance often don’t talk about opening one of those existing doors.  Instead, they often broadcast the counter-productive message that everything is so broken we can’t fix it, that we need to destroy our existing house and rebuild it from scratch with brand-new rooms, probably with one of those open floor plans without any doors.

Should it really be surprising when this approach to change management is so strongly resisted?

The term bricolage can be defined as making creative and resourceful use of whatever materials are at hand regardless of their original purpose, stringing old parts together to form something radically new, transforming the present into the near future.

“Good ideas are not conjured out of thin air,” explains Johnson, “they are built out of a collection of existing parts.”

The primary reason the change management efforts of data governance are resisted is that they rely almost exclusively on negative methods: they emphasize broken business and technical processes, as well as bad data-related employee behaviors.

Although these problems exist and are the root cause of some of the organization’s failures, there are also unheralded processes and employees that prevented other problems from happening, which are the root cause of some of the organization’s successes.

It’s important to demonstrate that some data governance policies reflect existing best practices, which helps reduce resistance to change, and so a far more productive change management mantra for data governance is: “If it ain’t broke, bricolage it.”

Data Governance and the Adjacent Possible

As Johnson explains, “in our work lives, in our creative pursuits, in the organizations that employ us, in the communities we inhabit—in all these different environments, we are surrounded by potential new ways of breaking out of our standard routines.”

“The trick is to figure out ways to explore the edges of possibility that surround you.”

Most data governance maturity models describe an organization’s evolution through a series of stages intended to measure its capability and maturity, tendency toward being reactive or proactive, and inclination to be project-oriented or program-oriented.

Johnson suggests that “one way to think about the path of evolution is as a continual exploration of the adjacent possible.”

Perhaps we need to think about the path of data governance evolution as a continual exploration of the adjacent possible, as a never-ending journey which begins by opening that first door, building a palatial data governance program one room at a time.

 


The Cloud Security Paradox

This blog post is sponsored by the Enterprise CIO Forum and HP.

Nowadays it seems like any discussion about enterprise security inevitably becomes a discussion about cloud security.  Last week, as I was listening to John Dodge and Bob Gourley discuss recent top cloud security tweets on Enterprise CIO Forum Radio, the story that caught my attention was the Network World article by Christine Burns, part of a six-part series on cloud computing, which had a provocative title declaring that public cloud security remains Mission Impossible.

“Cloud security vendors and cloud services providers have a long way to go,” Burns wrote, “before enterprise customers will be able to find a comfort zone in the public cloud, or even in a public/private hybrid deployment.”  Although I agree with Burns, and I highly recommend reading her entire excellent article, I have always been puzzled by debates over cloud security.

A common opinion is that cloud-based solutions are fundamentally less secure than on-premises solutions.  Some critics even suggest cloud-based solutions can never be secure.  I don’t agree with either opinion because to me it’s all a matter of perspective.

Let’s imagine that I am a cloud-based service provider selling solutions leveraging my own on-premises resources, meaning that I own and operate all of the technology infrastructure within the walls of my one corporate office.  Let’s also imagine that in addition to the public cloud solution that I sell to my customers, I have built a private cloud solution for some of my employees (e.g., salespeople in the field), and that I also have other on-premises systems (e.g., accounting) not connected to any cloud.

Since all of my solutions are leveraging the exact same technology infrastructure, if it is impossible to secure my public cloud, then it logically follows that it is also impossible to secure both my private cloud and my on-premises systems.  Therefore, all of my security must be Mission Impossible.  I refer to this as the Cloud Security Paradox.

Some of you will argue that my scenario was oversimplified, since most cloud-based solutions, whether public or private, may include technology infrastructure that is not under my control, and may be accessed using devices that are not under my control.

Although those are valid security concerns, they are not limited to—nor were they created by—cloud computing, because with the prevalence of smart phones and other mobile devices, those security concerns exist for entirely on-premises solutions as well.

In my opinion, cloud-based versus on-premises, public cloud versus private cloud, and customer access versus employee access are all oversimplified arguments.  Regardless of the implementation strategy, your technology infrastructure, and especially your data, needs to be secured wherever it is, however it is accessed, and with the appropriate levels of control over who can access what.

Fundamentally, the real problem is a lack of well-defined, well-implemented, and well-enforced security practices.  As Burns rightfully points out, a significant challenge with cloud-based solutions is that “public cloud providers are notoriously unwilling to provide good levels of visibility into their underlying security practices.”

However, when the cost savings and convenience of cloud-based solutions are accepted without a detailed security assessment, that is not a fundamental flaw of cloud computing—that is simply a bad business decision.

Let’s stop blaming poor enterprise security practices on the adoption of cloud computing.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

The Good, the Bad, and the Secure

Securing your Digital Fortress

Shadow IT and the New Prometheus

Are Cloud Providers the Bounty Hunters of IT?

The Diderot Effect of New Technology

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

A Sadie Hawkins Dance of Business Transformation

Are Applications the La Brea Tar Pits for Data?

Why does the sun never set on legacy applications?

The Partly Cloudy CIO

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT

Turning Data Silos into Glass Houses

Although data silos are denounced as inherently bad because they complicate the coordination of enterprise-wide business activities, they are often used to support some of those same activities, so whether data silos are good or bad is a matter of perspective.  For example, data silos are bad when different business units are redundantly storing and maintaining their own private copies of the same data, but data silos are good when they are used to protect sensitive data that should not be shared.

Providing the organization with a single system of record, a single version of the truth, a single view, a golden copy, or a consolidated repository of trusted data has long been the anti-data-silo siren song of enterprise data warehousing (EDW), and more recently, of master data management (MDM).  Although these initiatives can provide significant business value, somewhat ironically, many data silos start with EDW or MDM data that was replicated and customized in order to satisfy the particular needs of an operational project or tactical initiative.  This customized data either becomes obsolete after its project or initiative concludes, or it continues to be used because it satisfies a business need that EDW and MDM do not.

One of the early goals of a new data governance program should be to provide the organization with a substantially improved view of how it is using its data — including data silos — to support its operational, tactical, and strategic business activities.

Data governance can help the organization catalog existing data sources, build a matrix of data usage and related business processes and technology, identify potential external reference sources to use for data enrichment, as well as help define the metrics that meaningfully measure data quality using business-relevant terminology.

The transparency provided by this combined analysis of the existing data, business, and technology landscape offers a more comprehensive overview of enterprise data management problems, which will help the organization better evaluate any existing data and technology re-use and redundancies, as well as whether investing in new technology will be necessary.

Data governance can help topple data silos by first turning them into glass houses through transparency, empowering the organization to start throwing stones at those glass houses that must be eliminated.  And when data silos are allowed to persist, they should remain glass houses, clearly illustrating whether they have business-justified reasons for continued use.

 

Related Posts

Data and Process Transparency

The Good Data

The Data Outhouse

Time Silos

Sharing Data

Single Version of the Truth

Beyond a “Single Version of the Truth”

The Quest for the Golden Copy

The Idea of Order in Data

Hell is other people’s data

Shadow IT and the New Prometheus

This blog post is sponsored by the Enterprise CIO Forum and HP.

Over twenty-five years ago, business-enabling information technology was still in its nascent phase.  The Internet was still coming of age and the World Wide Web didn’t exist yet.  The personal computer revolution had only recently started and it was still far from going mainstream.  And the few mobile phones that existed back then were simply phones—not mini-supercomputers.

Back in those dark ages, most organizations had a centralized IT department, which selected, implemented, and controlled the technology used to support business activities.  Since information technology was a brave new world and the organization was so dependent on its magic (Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”), the IT department was allowed to dictate that everyone used the same type of computer, loaded with the same standard applications, which provided, for the most part, the same general information technology solution for a myriad of business problems.

Shadow IT was the term used to describe business-driven information technology solutions not under the jurisdiction of IT, which were therefore operated in the shadows by business users who had to carefully conceal their use of non-IT-sanctioned technology.

Returning to the light of the present day, it’s difficult to imagine life—both personal and professional—without the Internet and the World Wide Web, personal computers, and the growing prevalence of smart phones, tablet PCs, and other mobile devices.

In Greek mythology, Prometheus stole fire from the gods and gave it to us mere mortals.  And once humans could command fire, we combined it with our use of other enabling tools, thus learning that we were capable of taking control of our own destiny.

The consumer-driven trends of cloud computing, SaaS, and mobility are the New Prometheus stealing the fire of technology from the IT department and giving it directly to business users, thus enabling them to take control of their own IT destiny.

The consumerization of IT has demystified business-enabling information technology.  The fire stolen by the New Prometheus is allowing business-driven, business-function-specific, and decentralized IT solutions to finally step out of the shadows.

Although IT Delivery remains a strategic discipline for the organization, the tactical and operational execution of that strategy needs to be decentralized and embedded within the business functions of the organization.  Communication and collaboration are more important than ever, but the centralization and generalization of information technology is a thing of the past.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

The Good, the Bad, and the Secure

Securing your Digital Fortress

Are Cloud Providers the Bounty Hunters of IT?

The Diderot Effect of New Technology

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

A Sadie Hawkins Dance of Business Transformation

Are Applications the La Brea Tar Pits for Data?

Why does the sun never set on legacy applications?

The Partly Cloudy CIO

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT

Information Quality Certified Professional

Information Quality Certified Professional (IQCP) is the new certification program from the IAIDQ.  The application deadline for the next certification exam is October 25, 2011.  For more information about IQCP certification, please refer to the following links:

 

Taking the first IQCP exam

A Guest Post written by Gordon Hamilton

I can still remember how galvanized I was by the first email mentions of the IQCP certification and its inaugural examination.  I’d been a member of the IAIDQ for the past year and I saw the first mailings in early February 2011.  It’s funny but my memory of the sequence of events was that I filled out the application for the examination that first night, but going back through my emails I see that I attended several IAIDQ Webinars and followed quite a few discussions on LinkedIn before I finally applied and paid for the exam in mid-March (I still got the early bird discount).

Looking back now, I am wondering why I was so excited about the chance to become certified in data quality.  I know that I had been considering the CBIP and CBAP, from TDWI and IIBA respectively, for more than a year, going so far as to purchase study materials and take some sample exams.  Both the CBIP and CBAP designations fit where my career had been for 20+ years, but the subject areas were now tangential to my focus on information and data quality.

The IQCP certification fit exactly where I hoped my career trajectory was now taking me, so it really did galvanize me to action.

I had been a software and database developer for 20+ years when I caught a bad case of Deming-god worship while contracting at Microsoft in the early 2000s, and it only got worse as I started reading books by Olson, Redman, English, Loshin, John Morris, and Maydanchik on how data quality dovetailed with development methodologies of folks like Kimball and Inmon, which in turn dovetailed with the Lean Six Sigma methods.  I was on the slippery slope to choosing data quality as a career because those gurus of Data Quality, and Quality in general, were explaining, and I was finally starting to understand, why data warehouse projects failed so often, and why the business was often underwhelmed by the information product.

I had 3+ months to study and the resource center on the IAIDQ website had a list of recommended books and articles.  I finally had to live up to my moniker on Twitter of DQStudent.  I already had many of the books recommended by IAIDQ at home but hadn’t read them all yet, so while I waited for Amazon and AbeBooks to send me the books I thought were crucial, I began reading Deming, English, and Loshin.

Of all the books that began arriving on my doorstep, the most memorable was Journey to Data Quality by Richard Wang et al.

That book created a powerful image in my head of the information product “manufactured” by every organization.  That image of the “information product” made the suggestions by the data quality gurus much clearer.  They were showing how to apply quality techniques to the manufacture of Business Intelligence.  The image gave me a framework upon which to hang the other knowledge I was gathering about data quality, so it was easier to keep pushing through the books and articles because each new piece could fit somewhere in that manufacturing process.

I slept well the night before the exam, and gave myself plenty of time to make it to the Castle exam site that afternoon.  I took along several books on data quality, but hardly glanced at them.  Instead I grabbed a quick lunch and then a strong coffee to carry me through the 3-hour exam.  At 50 questions per hour, I was very conscious of how long each question was taking me, and every 10 questions or so I would check to see if I was going to run into time trouble.  It was obvious after 20 questions that I had plenty of time, so I began to get into a groove, finishing the exam 30 minutes early, leaving plenty of time to review any questionable answers.

I found the exam eminently fair with no tricky question constructions at all, so I didn’t seem to fall into the over-thinking trap that I sometimes do.  Even better, the exam wasn’t the type that drilled deeper and deeper into my knowledge gaps when I missed a question.  Even though I felt confident that I had passed, I’ve got to tell you that the 6 weeks the IAIDQ took to determine the passing threshold on this inaugural exam and send out passing notifications were the longest 6 weeks I have spent in a long time.  Now that the passing mark is established, they swear that the notifications will be sent out much faster.

I still feel a warm glow as I think back on achieving IQCP certification.  I am proud to say that I am a data quality consultant and I have the certificate proving the depth and breadth of my knowledge.

Gordon Hamilton is a Data Quality, Data Warehouse, and IQCP certified professional, whose 30 years’ experience in the information business encompasses many industries, including government, legal, healthcare, insurance and financial.

 

Related Posts

Studying Data Quality

The Blue Box of Information Quality

Data, Information, and Knowledge Management

Are you turning Ugly Data into Cute Information?

The Dichotomy Paradox, Data Quality and Zero Defects

The Data Quality Wager

Aristotle, Data Governance, and Lead Rulers

Data governance requires the coordination of a complex combination of factors, including executive sponsorship, funding, decision rights, arbitration of conflicting priorities, policy definition, policy implementation, data quality remediation, data stewardship, business process optimization, technology enablement, and, perhaps most notably, policy enforcement.

But sometimes this emphasis on enforcing policies makes data governance sound like it’s all about rules.

In their book Practical Wisdom, Barry Schwartz and Kenneth Sharpe use the Nicomachean Ethics of Aristotle as a guide to explain that although rules are important, what is more important is “knowing the proper thing to aim at in any practice, wanting to aim at it, having the skill to figure out how to achieve it in a particular context, and then doing it.”

Aristotle observed the practical wisdom of the craftsmen of his day, including carpenters, shoemakers, blacksmiths, and masons, noting how “their work was not governed by systematically applying rules or following rigid procedures.  The materials they worked with were too irregular, and each task posed new problems.”

“Aristotle was particularly fascinated with how masons used rulers.  A normal straight-edge ruler was of little use to the masons who were carving round columns from slabs of stone and needed to measure the circumference of the columns.”

Unless you bend the ruler.

“Which is exactly what the masons did.  They fashioned a flexible ruler out of lead, a forerunner of today’s tape measure.  For Aristotle, knowing how to bend the rule to fit the circumstance was exactly what practical wisdom was all about.”

Although there’s a tendency to ignore the existing practical wisdom of the organization, successful data governance is not about systematically applying rules or following rigid procedures, precisely because the dynamic challenges faced, and overcome daily, by business analysts, data stewards, technical architects, and others exemplify today’s constantly changing business world.

But this doesn’t mean that effective data governance policies can’t be implemented.  It simply means that instead of focusing on who should lead the way (i.e., top-down or bottom-up), we should focus on what the rules of data governance are made of.

Well-constructed data governance policies are like lead rulers—flexible rules that empower us with an understanding of the principle of the policy, and trust us to figure out how best to enforce the policy in a particular context, how to bend the rule to fit the circumstance.  Aristotle knew this was exactly what practical wisdom was all about—data governance needs practical wisdom.

“Tighter rules and regulations, however necessary, are pale substitutes for wisdom,” concluded Schwartz and Sharpe.  “We need rules to protect us from disaster.  But at the same time, rules without wisdom are blind and at best guarantee mediocrity.”

The Good, the Bad, and the Secure

This blog post is sponsored by the Enterprise CIO Forum and HP.

A previous post examined the data aspects of enterprise security, which requires addressing both outside-in and inside-out risks.

Most organizations tend to both overemphasize and oversimplify outside-in data security using a perimeter fence model, which, as Doug Newdick commented, “implicitly treats all of your information system assets as equivalent from a security and risk perspective, when that is clearly not true.”  Different security levels are necessary for different assets, and therefore a security zone model makes more sense, where you focus more on securing specific data or applications, and less on securing the perimeter.

“I think that these sorts of models will become more prevalent,” Newdick concluded, “as we face the proliferation of different devices and platforms in the enterprise, and the sort of Bring Your Own Device approaches that many organizations are examining.  If you don’t own or manage your perimeter, securing the data or application itself becomes more important.”

Although there’s also a growing recognition that inside-out data security needs to be improved, “it’s critical that organizations recognize the internal threat can’t be solved solely via policy and process,” commented Richard Jarvis, who recommended an increase in the internal use of two-factor authentication, as well as the physical separation of storage so highly confidential data is more tightly restricted within a dedicated hardware infrastructure.
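
As a rough illustration of the internal two-factor step Jarvis recommends, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238, the mechanism behind most authenticator apps) using only Python’s standard library.  The shared secret and the surrounding verify flow are illustrative assumptions, not a production implementation.

    # Minimal sketch of a time-based one-time password (TOTP, RFC 6238), the
    # mechanism behind most two-factor authenticator apps.  The shared secret
    # and verify() flow are illustrative assumptions, not production code.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval          # 30-second time step
        msg = struct.pack(">Q", counter)                # counter as big-endian 64-bit integer
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def verify(secret_b32: str, submitted_code: str) -> bool:
        # Second factor: something the employee has (the device holding the secret),
        # checked in addition to the usual password.
        return hmac.compare_digest(totp(secret_b32), submitted_code)

    shared_secret = "JBSWY3DPEHPK3PXP"   # example base32 secret provisioned to a device
    print(totp(shared_secret), verify(shared_secret, totp(shared_secret)))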

As Rafal Los recently blogged, the costs of cyber crime continue to rise.  Although the fear of a cloud security breach is the most commonly expressed concern, Judy Redman recently blogged about how cyber crime doesn’t only happen in the cloud.  With the growing prevalence of smart phones, tablet PCs, and other mobile devices, data security in our hyper-connected world requires, as John Dodge recently blogged, that organizations also institute best practices for mobile device security.

Cloud, social, and mobile technologies “make business and our life more enriched,” commented Pearl Zhu, “but on the other hand, this open environment makes the business environment more vulnerable from the security perspective.”  In other words, this open environment, which some have described as a multi-dimensional attack space, is good for business, but bad for security.

Most organizations already spend a fistful of dollars on enterprise security, but they may need to budget for a few dollars more because the digital age is about the good, the bad, and the secure.  In other words, we have to take the good with the bad in the more open business environment enabled by cloud, mobile, and social technologies, which requires a modern data security model that can protect us from the bad without being overprotective to the point of inhibiting the good.

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Securing your Digital Fortress

Are Cloud Providers the Bounty Hunters of IT?

The Diderot Effect of New Technology

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

A Sadie Hawkins Dance of Business Transformation

Are Applications the La Brea Tar Pits for Data?

Why does the sun never set on legacy applications?

The Partly Cloudy CIO

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT

The Fall Back Recap Show

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

On this episode, I celebrate the autumnal equinox by falling back to look at the Best of OCDQ Radio, including discussions about Data, Information, Business-IT Collaboration, Change Management, Big Analytics, Data Governance, and the Data Revolution.

Thank you for listening to OCDQ Radio.  Your listenership is deeply appreciated.

Special thanks to all OCDQ Radio guests.  If you missed any of their great appearances, check out the full episode list below.

Popular OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Demystifying Data Science — Guest Melinda Thielbar, a Ph.D. Statistician, discusses what a data scientist does and provides a straightforward explanation of key concepts such as signal-to-noise ratio, uncertainty, and correlation.
  • Data Quality and Big Data — Guest Tom Redman (aka the “Data Doc”) discusses Data Quality and Big Data, including if data quality matters less in larger data sets, and if statistical outliers represent business insights or data quality issues.
  • Demystifying Master Data Management — Guest John Owens explains the three types of data (Transaction, Domain, Master), the four master data entities (Party, Product, Location, Asset), and the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).
  • Data Governance Star Wars — Special Guests Rob Karel and Gwen Thomas joined this extended, and Star Wars themed, discussion about how to balance bureaucracy and business agility during the execution of data governance programs.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.
  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

The Blue Box of Information Quality

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

On this episode, Daragh O Brien and I discuss the Blue Box of Information Quality, which is much bigger on the inside, as well as using stories as an analytical tool and change management technique, and why we must never forget that “people are cool.”

Daragh O Brien is one of Ireland’s leading Information Quality and Governance practitioners.  After being born at a young age, Daragh has amassed a wealth of experience in quality information driven business change, from CRM Single View of Customer to Regulatory Compliance, to Governance and the taming of information assets to benefit the bottom line, manage risk, and ensure customer satisfaction.  Daragh O Brien is the Managing Director of Castlebridge Associates, one of Ireland’s leading consulting and training companies in the information quality and information governance space.

Daragh O Brien is a founding member and former Director of Publicity for the IAIDQ, which he is still actively involved with.  He was a member of the team that helped develop the Information Quality Certified Professional (IQCP) certification and he recently became the first person in Ireland to achieve this prestigious certification.

In 2008, Daragh O Brien was awarded a Fellowship of the Irish Computer Society for his work in developing and promoting standards of professionalism in Information Management and Governance.

Daragh O Brien is a regular conference presenter, trainer, blogger, and author with two industry reports published by Ark Group, the most recent of which is The Data Strategy and Governance Toolkit.

Popular OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Demystifying Data Science — Guest Melinda Thielbar, a Ph.D. Statistician, discusses what a data scientist does and provides a straightforward explanation of key concepts such as signal-to-noise ratio, uncertainty, and correlation.
  • Data Quality and Big Data — Guest Tom Redman (aka the “Data Doc”) discusses Data Quality and Big Data, including if data quality matters less in larger data sets, and if statistical outliers represent business insights or data quality issues.
  • Demystifying Master Data Management — Guest John Owens explains the three types of data (Transaction, Domain, Master), the four master data entities (Party, Product, Location, Asset), and the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).
  • Data Governance Star Wars — Special Guests Rob Karel and Gwen Thomas joined this extended, and Star Wars themed, discussion about how to balance bureaucracy and business agility during the execution of data governance programs.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.
  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

Securing your Digital Fortress

This blog post is sponsored by the Enterprise CIO Forum and HP.

Although its cyber-security plot oversimplifies some technology aspects of data encryption, the Dan Brown novel Digital Fortress is an enjoyable read.  The digital fortress of the novel was a computer program thought capable of creating an unbreakable data encryption algorithm, but it’s later discovered the program is capable of infiltrating and dismantling any data security protocol.

The data aspects of enterprise security are becoming increasingly prevalent topics of discussion within many organizations, which are pondering how secure their digital fortress actually is: in other words, whether their data assets are truly secure.

Most organizations focus almost exclusively on preventing external security threats, using a data security model similar to building security, where security guards make sure that only people with valid security badges are allowed to enter the building.  However, once you get past the security desk, you have mostly unrestricted access to all areas inside the building.

As Bryan Casey recently blogged, the data security equivalent is referred to as “Tootsie Pop security,” the practice of having a hard, crunchy security exterior but a soft security interior.  In other words, once you enter a valid user name and password, or as a hacker you obtain or create one, you have mostly unrestricted access to all databases inside the organization.
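
By way of contrast, here is a minimal sketch of the alternative to Tootsie Pop security: a harder interior where every data access is authorized per resource, so a valid login alone does not grant access to every database.  The roles, databases, and grants below are hypothetical examples, not a specific product’s access model.

    # Sketch of a "hard interior": every data access is authorized per resource,
    # rather than trusting anyone who made it past the login prompt.
    # The roles, databases, and grants below are hypothetical examples.
    GRANTS = {
        "sales_rep":  {"crm_db": "read"},
        "accountant": {"accounting_db": "read_write"},
        "dba":        {"crm_db": "read_write", "accounting_db": "read_write"},
    }

    def authorize(role: str, database: str, action: str) -> bool:
        allowed = GRANTS.get(role, {}).get(database)
        if allowed is None:
            return False                      # authenticated, but not authorized here
        return action == "read" or allowed == "read_write"

    # A valid login as a sales rep no longer implies access to every database.
    print(authorize("sales_rep", "crm_db", "read"))         # True
    print(authorize("sales_rep", "accounting_db", "read"))  # False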

Although hacking is a real concern, this external focus could cause companies to turn a blind eye to internal security threats.

“I think the real risk is not the outside threat in,” explained Joseph Spagnoletti, “it’s more the inside threat out.”  As more data is available to more people within the organization, and with more ways to disseminate data more quickly, data security risks can be inadvertently created when sharing data outside of the organization, perhaps in the name of customer service or marketing.

A commonly cited additional example of an inside-out threat is cloud security, especially the use of public or community clouds for collaboration and social networking.  The cloud complicates data security in the sense that not all of the organization’s data is stored within its physical fortresses of buildings and on-premises computer hardware and software.

However, it must be noted that mobility is likely an even greater inside-out data security threat than cloud computing.  Laptops have long been the primary antagonist in the off-premises data security story, but with the growing prevalence of smart phones, tablet PCs, and other mobile devices, the digital fortress is now constantly in motion, a moving target in a hyper-connected world.

So how do organizations institute effective data security protocols in the digital age?  Can the digital fortress truly be secured?

“The key to data security, and really all security,” Bryan Casey concluded, “is the ability to affect outcomes.  It’s not enough to know what’s happening, or even what’s happening right now.  You need to know what’s happening right now and what actions you can take to protect yourself and your organization.”

What actions are you taking to protect yourself and your organization?  How are you securing your digital fortress?

This blog post is sponsored by the Enterprise CIO Forum and HP.

 

Related Posts

Are Cloud Providers the Bounty Hunters of IT?

The Diderot Effect of New Technology

The IT Consumerization Conundrum

The IT Prime Directive of Business First Contact

A Sadie Hawkins Dance of Business Transformation

Are Applications the La Brea Tar Pits for Data?

Why does the sun never set on legacy applications?

The Partly Cloudy CIO

The IT Pendulum and the Federated Future of IT

Suburban Flight, Technology Sprawl, and Garage IT

Good-Enough Data for Fast-Enough Decisions

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

On this episode, Julie Hunt and I discuss the intersection of data quality and business intelligence, especially the strategy of good-enough data for fast-enough decisions, a necessity for surviving and thriving in the constantly changing business world.

Julie Hunt is an accomplished software industry analyst and business technology strategist, providing market and competitive insights for software vendors.  Julie Hunt has the unique perspective of a hybrid, which means she has extensive experience in the technology, business, and customer/people-oriented aspects of creating, marketing and selling software.  Working in the B2B software industry for more than 25 years, she has hands-on experience for multiple solution spaces including data integration, business intelligence, analytics, content management, and collaboration.  She is also a member of the Boulder BI Brain Trust.

Julie Hunt regularly shares her insights about the software industry on Twitter as well as via her highly recommended blog.

Popular OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Demystifying Data Science — Guest Melinda Thielbar, a Ph.D. Statistician, discusses what a data scientist does and provides a straightforward explanation of key concepts such as signal-to-noise ratio, uncertainty, and correlation.
  • Data Quality and Big Data — Guest Tom Redman (aka the “Data Doc”) discusses Data Quality and Big Data, including if data quality matters less in larger data sets, and if statistical outliers represent business insights or data quality issues.
  • Demystifying Master Data Management — Guest John Owens explains the three types of data (Transaction, Domain, Master), the four master data entities (Party, Product, Location, Asset), and the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).
  • Data Governance Star Wars — Special Guests Rob Karel and Gwen Thomas joined this extended, and Star Wars themed, discussion about how to balance bureaucracy and business agility during the execution of data governance programs.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.
  • Studying Data Quality — Guest Gordon Hamilton discusses the key concepts from recommended data quality books, including those which he has implemented in his career as a data quality practitioner.

DQ-Tip: “The quality of information is directly related to...”

Data Quality (DQ) Tips is an OCDQ regular segment.  Each DQ-Tip is a clear and concise data quality pearl of wisdom.

“The quality of information is directly related to the value it produces in its application.”

This DQ-Tip is from the excellent book Entity Resolution and Information Quality by John Talburt.

The relationship between data and information, and by extension data quality and information quality, is acknowledged and explored in the book’s second chapter, which includes a brief history of information theory, as well as the origins of many of the phrases frequently used throughout the data/information quality industry, e.g., fitness for use and information product.

Talburt explains that the problem with the fitness-for-use definition for the quality of an information product (IP) is that it “assumes that the expectations of an IP user and the value produced by the IP in its application are both well understood.”

Different users often have different applications for data and information, requiring possibly different versions of the IP, each with a different relative value to the user.  This is why Talburt believes that the quality of information is best defined, not as fitness for use, but instead as the degree to which the information creates value for a user in a particular application.  This allows us to measure the business-driven value of information quality with technology-enabled metrics, which are truly relevant to users.

Talburt believes that casting information quality in terms of business value is essential to gaining management’s endorsement of information quality practices within an organization, and he recommends three keys to success with information quality:

  1. Always relate information quality to business value
  2. Give stakeholders a way to talk about information quality—the vocabulary and concepts
  3. Show them a way to get started on improving information quality—and a vision for sustaining it

 

Related Posts

The Real Data Value is Business Insight

Is your data complete and accurate, but useless to your business?

The Fourth Law of Data Quality

The Role of Data Quality Monitoring in Data Governance

Data Quality Measurement Matters

Studying Data Quality

DQ-Tip: “Undisputable fact about the value and use of data...”

DQ-Tip: “Data quality tools do not solve data quality problems...”

DQ-Tip: “There is no such thing as data accuracy...”

DQ-Tip: “Data quality is primarily about context not accuracy...”

DQ-Tip: “There is no point in monitoring data quality...”

DQ-Tip: “Don't pass bad data on to the next person...”

DQ-Tip: “...Go talk with the people using the data”

DQ-Tip: “Data quality is about more than just improving your data...”

DQ-Tip: “Start where you are...”

Studying Data Quality

OCDQ Radio is a vendor-neutral podcast about data quality and its related disciplines, produced and hosted by Jim Harris.

On this episode, Gordon Hamilton and I discuss data quality key concepts, including those which we have studied in some of our favorite data quality books, and more important, those which we have implemented in our careers as data quality practitioners.

Gordon Hamilton is a Data Quality and Data Warehouse professional, whose 30 years’ experience in the information business encompasses many industries, including government, legal, healthcare, insurance and financial.  Gordon was most recently engaged in the healthcare industry in British Columbia, Canada, where he continues to advise several health care authorities on data quality and business intelligence platform issues.

Gordon Hamilton’s passion is to bring together:

  • Exposure of business rules through data profiling as recommended by Ralph Kimball.

  • Monitoring business rules in the EQTL (Extract-Quality-Transform-Load) pipeline leading into the data warehouse.

  • Managing the business rule violations through systemic and specific solutions within the statistical process control framework of Shewhart/Deming (a minimal sketch of this idea follows the list).

  • Researching how to sustain data quality metrics as the “fit for purpose” definitions change faster than the information product process can easily adapt.
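
As a rough sketch of how the second and third items above might fit together, the Python fragment below checks one business rule in a load, computes its violation rate, and flags the load when that rate falls outside Shewhart-style three-sigma control limits.  The rule, the sample data, and the historical rates are illustrative assumptions, not anything from Gordon’s actual pipelines.

    # Sketch of monitoring one business rule in an EQTL pipeline with Shewhart-style
    # statistical process control: compute the violation rate per load and flag loads
    # outside three-sigma limits.  Rule, columns, and sample data are illustrative.
    from statistics import mean

    def violation_rate(rows):
        """Business rule (assumed): unit_price must be greater than zero."""
        violations = sum(1 for row in rows if row.get("unit_price", 0) <= 0)
        return violations / len(rows)

    # Violation rate observed for each recent nightly load (illustrative history).
    history = [0.021, 0.018, 0.025, 0.019, 0.022, 0.020, 0.017, 0.023]
    p_bar = mean(history)                        # average defect proportion
    n = 10_000                                   # assumed rows per load
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5     # p-chart standard deviation
    upper, lower = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)

    latest_load = [{"unit_price": 9.99}, {"unit_price": 0.0}, {"unit_price": 4.50}]
    rate = violation_rate(latest_load)
    if not (lower <= rate <= upper):
        print(f"Rule out of control: rate {rate:.3f} outside [{lower:.3f}, {upper:.3f}]")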

Gordon Hamilton’s moniker of DQStudent on Twitter hints at his plan to dovetail his Lean Six Sigma skills and experience with the data quality foundations to improve the manufacture of the “information product” in today’s organizations.  Gordon is a member of IAIDQ, TDWI, and ASQ, as well as an enthusiastic reader of anything pertaining to data.

Gordon Hamilton recently became an Information Quality Certified Professional (IQCP), via the IAIDQ certification program.

Recommended Data Quality Books

By no means a comprehensive list, and listed in no particular order whatsoever, the following books were either discussed during this OCDQ Radio episode, or are otherwise recommended for anyone looking to study data quality and its related disciplines:

Popular OCDQ Radio Episodes

Clicking on the link will take you to the episode’s blog post:

  • Demystifying Data Science — Guest Melinda Thielbar, a Ph.D. Statistician, discusses what a data scientist does and provides a straightforward explanation of key concepts such as signal-to-noise ratio, uncertainty, and correlation.
  • Data Quality and Big Data — Guest Tom Redman (aka the “Data Doc”) discusses Data Quality and Big Data, including if data quality matters less in larger data sets, and if statistical outliers represent business insights or data quality issues.
  • Demystifying Master Data Management — Guest John Owens explains the three types of data (Transaction, Domain, Master), the four master data entities (Party, Product, Location, Asset), and the Party-Role Relationship, which is where we find many of the terms commonly used to describe the Party master data entity (e.g., Customer, Supplier, Employee).
  • Data Governance Star Wars — Special Guests Rob Karel and Gwen Thomas joined this extended, and Star Wars themed, discussion about how to balance bureaucracy and business agility during the execution of data governance programs.
  • The Johari Window of Data Quality — Guest Martin Doyle discusses helping people better understand their data and assess its business impacts, not just the negative impacts of bad data quality, but also the positive impacts of good data quality.