In the spring of 2012, the Duke economist and behavioral theorist Dan Ariely published a trade book entitled The (Honest) Truth About Dishonesty: How We Lie to Everyone—Especially Ourselves, a fascinating account of multiple experiments in which he and a series of colleagues tested the willingness of people to cheat in a variety of situations.
To make their experiments work, Ariely and his colleagues had to figure out how to create environments that would allow or even induce people to cheat. The researchers did that in manifold and ingenious ways. Typically, they first designed a task to establish a baseline level of dishonesty in an average sample of adults (the control condition). Then they modified that task in various ways (the experimental conditions) to see whether those changes would increase or reduce the amount of cheating.
So, for example, they offered people small sums of money in return for solving mathematical problems; the amount of money increased with the number of problems solved correctly. In the control condition, the subjects would simply return their tests and receive the money from one of the researchers. In one of the experimental conditions, by contrast, the subjects might be asked to score their own tests, shred their response sheets, and then report verbally to the researcher how many problems they had answered correctly. As you might expect, the number of “correct” problems reported by the test-takers increased in that experiment.
Ariely draws an important conclusion from the experiments: Under the right conditions, most people are willing to cheat a little bit. He calls that the “fudge factor,” and uses it to explain a wide variety of lab experiments as well as real-world situations in which unethical behavior seemed to spiral out of control—such as the 2008 collapse of financial markets in the wake of countless unethical moves made by investment bankers, auditors, lawyers, and more.