Game theory models a strategic situation as a game in which an individual player’s success depends on the choices made by the other players involved in the game. One excellent example is the game known as The Prisoner’s Dilemma, which is deliberately designed to demonstrate why two people might not cooperate—even if it is in both of their best interests to do so.
Here is the classic scenario. Two criminal suspects are arrested, but the police have insufficient evidence for a conviction. So they separate the prisoners and offer each the same deal. If one testifies for the prosecution against the other (i.e., defects) and the other remains silent (i.e., cooperates), the defector goes free and the silent accomplice receives the full one-year sentence. If both remain silent, both prisoners are sentenced to only one month in jail for a minor charge. If each betrays the other, each receives a three-month sentence. Each prisoner must choose to betray the other or to remain silent.
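The four outcomes above can be written out as a small payoff table. Here is a minimal sketch in Python (the dictionary layout and function name are just illustrative, not part of any standard formulation), with sentences expressed in months, so lower is better for the prisoner:

```python
# Payoff matrix for the classic scenario: keys are (prisoner_a, prisoner_b)
# choices, values are (a_months, b_months) jail sentences.
SENTENCES = {
    ("defect", "silent"): (0, 12),   # defector goes free; accomplice serves the full year
    ("silent", "defect"): (12, 0),
    ("silent", "silent"): (1, 1),    # minor charge: one month each
    ("defect", "defect"): (3, 3),    # mutual betrayal: three months each
}

def sentence(a_choice, b_choice):
    """Return the (a, b) sentences in months for one pair of choices."""
    return SENTENCES[(a_choice, b_choice)]
```

Note the dilemma encoded in the table: whatever the other prisoner does, defecting always yields a shorter sentence for you, yet mutual silence beats mutual betrayal.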
If you have ever regularly watched a police procedural television series, such as Law & Order, then you have seen many dramatizations of the prisoner’s dilemma, including several sample outcomes of when the prisoners make different choices.
The Iterated Prisoner’s Dilemma
In iterated versions of the prisoner’s dilemma, players remember the previous actions of their opponent and change their strategy accordingly. In many fields of study, these variations are considered fundamental to understanding cooperation and trust.
Here is an economics scenario with two players and a banker. Each player holds a set of two cards, one printed with the word Cooperate (as in, with each other), the other printed with the word Defect. Each player puts one card face-down in front of the banker; laying the cards face down prevents either player from knowing the other's selection in advance. At the end of each turn, the banker turns over both cards and gives out the payments, which can vary, but one example is as follows.
If both players cooperate, they are each awarded $5. If both players defect, they are each penalized $1. But if one player defects while the other player cooperates, the defector is awarded $10, while the cooperator neither wins nor loses any money.
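Those payouts can be sketched as a simple function. This is just one way to encode the example amounts above (the function name and card labels are my own, not from any standard library):

```python
def payoff(my_card, their_card):
    """Dollar payoff to one player for a single turn, using the example amounts."""
    if my_card == "cooperate" and their_card == "cooperate":
        return 5    # mutual cooperation: $5 each
    if my_card == "defect" and their_card == "defect":
        return -1   # mutual defection: lose $1 each
    # Mixed outcome: the defector wins $10, the cooperator breaks even.
    return 10 if my_card == "defect" else 0
```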
Therefore, the safest play is to always cooperate, since you would never lose any money—and if your opponent always cooperates, then you can both win on every turn. However, although defecting creates the possibility of losing a small amount of money, it also creates the possibility of winning twice as much money.
It is the iterated nature of this version of the prisoner’s dilemma that makes it so interesting for those studying human behavior.
For example, if you were playing against me, and I defected on the first two turns while you cooperated, I would have won $20 while you would have won nothing. So what would you do on the third turn? Let’s say that you choose to defect.
But if I defected yet again, although we would both lose $1, overall I would still be +$19 while you would be -$1. And what if I continued defecting? This would actually be an understandable strategy for me—if I were only playing for money—since you would have to defect 19 more times in a row before I broke even, by which time you would have also lost $20. And if instead, you start cooperating again in order to stop your losses, I could win a lot of money—at the expense of losing your trust.
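The arithmetic in that scenario can be replayed turn by turn. A minimal sketch, assuming the example payouts given earlier: I defect on every turn, while you cooperate twice and then defect from turn three onward.

```python
def payoff(my_card, their_card):
    """Dollar payoff to one player for a single turn, per the example payouts."""
    if my_card == "cooperate" and their_card == "cooperate":
        return 5
    if my_card == "defect" and their_card == "defect":
        return -1
    return 10 if my_card == "defect" else 0

# I defect on all 22 turns; you cooperate on turns 1-2, then defect on turns 3-22.
my_moves   = ["defect"] * 22
your_moves = ["cooperate", "cooperate"] + ["defect"] * 20

me = you = 0
for mine, yours in zip(my_moves, your_moves):
    me  += payoff(mine, yours)
    you += payoff(yours, mine)

# After turn 3, the running totals are me = +19, you = -1; nineteen further
# mutual defections (through turn 22) bring me to $0 and you to -$20.
print(me, you)  # prints: 0 -20
```

This confirms the break-even claim: starting from +$19, each mutual defection costs me $1, so it takes exactly 19 more of them to erase my lead, while pushing you from -$1 down to -$20.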
Although the iterated prisoner’s dilemma is designed so that, over the long-term, cooperating players generally do better than non-cooperating players, in the short-term, the best result for an individual player is to defect while their opponent cooperates.
The Stakeholder’s Dilemma
Organizations embarking on an enterprise-wide initiative, such as data quality, master data management, or data governance, play a version of the iterated prisoner’s dilemma, which I refer to as The Stakeholder’s Dilemma.
These initiatives often bring together key stakeholders from all around the organization, representing each business unit or business function, and perhaps stakeholders representing data and technology as well. These stakeholders usually form a committee or council, which is responsible for certain top-down aspects of the initiative, such as funding and strategic planning.
Of course, it is unrealistic to expect every stakeholder to cooperate equally at all times. The realities of the fiscal calendar, conflicting interests, and changing business priorities will mean that during any particular turn in the game (i.e., the current phase of the initiative), the amount of resources (money, time, people) allocated to the effort by a particular stakeholder will vary.
There will be times when sacrifices for the long-term greater good of the initiative will require that cooperating stakeholders either contribute more resources during the current phase, or receive fewer benefits from its deliverables, than defecting stakeholders.
As with the iterated prisoner’s dilemma, the challenge is what happens during the next turn (i.e., the next phase of the initiative).
If the same stakeholders repeatedly defect, then will the other stakeholders continue to cooperate? Or will the spirit of trust, cooperation, and collaboration necessary for the continuing success of the ongoing initiative be irreparably damaged?
There are many, and often complex, reasons why enterprise-wide initiatives fail, but failing to play the stakeholder’s dilemma well is one very common reason—and it is also a reason why many future enterprise-wide initiatives will fail to garner support.
How well does your organization play The Stakeholder’s Dilemma?