I set an all-time personal record this past week: my MacBook was dormant for five consecutive days. I dedicate this triumph to the delightful friends with whom I spent New Year’s. Indeed, I had the pleasure of celebrating with friends from Digital Democracy, The Fletcher School and The Global Justice Center on a Caribbean island for some much-needed time off.
We all brought some good reading along, and I was finally able to enjoy a number of books on my list. One of these, Dan Ariely’s “Predictably Irrational,” was recommended to me by Erik Hersman, and I’m really glad it was. MIT Professor Ariely specializes in behavioral economics, and his book gently discredits mainstream economics: far from being rational agents, we are remarkably irrational in our decision-making, and predictably so.
Ariely draws on a number of social experiments to explicate his thesis.
For social scientists, experiments are like microscopes or strobe lights. They help us slow human behavior to a frame-by-frame narration of events, isolate individual forces, and examine those forces carefully and in more detail. They let us test directly and unambiguously what makes us tick.
In a series of fascinating experiments, Ariely seeks to understand what factors influence our decisions to be honest, especially when we can get away with dishonesty. In one experiment, participants complete a very simple math exercise. When done, the first set of participants (the control group) is asked to hand in their answers for independent grading, while the second set is given the answers and asked to report their own scores. At no point does the latter group hand in its answers; hence the temptation to cheat.
Prior to the math exercise, some students are asked to list the names of 10 books they read in high school, while others are asked to write down as many of the Ten Commandments as they can recall. Ariely wanted to know whether this would have any effect on the honesty of the participants reporting their own scores. The statistically significant results surprised even him: “The students who had been asked to recall the Ten Commandments had not cheated at all.”
In fact, they averaged the same score as the control group, which could not cheat. In contrast, participants who were asked to list their 10 high school books and then self-report their scores did cheat: they claimed scores that were 33% higher than those of the control group.
What especially impressed me about the experiment [...] was that the students who could remember only one or two commandments were as affected by them as the students who remembered nearly all ten. This indicated that it was not the Commandments themselves that encouraged honesty, but the mere contemplation of a moral benchmark of some kind.
Ariely carried out a follow-up experiment in which he asked some of his MIT students to sign an honor code instead of listing the Commandments. The results were identical. What’s more, “the effect of signing a statement about an honor code is particularly amazing when we take into account that MIT doesn’t even have an honor code.”
In short, we are far more likely to be honest when reminded of morality, especially when temptation strikes. Ariely thus concludes that the act of taking an oath can make all the difference.
I’m intrigued by this finding and its potential application to crowdsourcing crisis information, e.g., Ushahidi’s work in the DRC. Could some version of an honor code be introduced in the self-reporting process? Could the Ushahidi team create a control group to determine the impact on data quality? Even if impact were difficult to establish, would introducing an honor code still make sense given Ariely’s findings on basic behavioral psychology?
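To make the control-group idea concrete: if Ushahidi could show an honor-code prompt to a random half of reporters and later verify a sample of reports, a simple two-proportion z-test would tell us whether the prompt measurably improved accuracy. A minimal sketch, with entirely hypothetical counts (nothing here reflects actual Ushahidi data or tooling):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: returns (z statistic, two-sided p-value).
    x = reports later verified as accurate, n = total reports submitted."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled accuracy under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 400 reports filed after an honor-code prompt (340 accurate)
# vs. 400 reports filed without one (300 accurate).
z, p = two_proportion_z(340, 400, 300, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the usual 0.05 threshold would suggest the honor-code prompt is associated with more accurate reporting, echoing Ariely's classroom result at field scale.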