Is Behavioral Science A Pack Of Lies?

Researchers at the website Data Colada are on a mission to clean up psychological science. In a recent series of posts,[1] they alleged that four studies involving Harvard Business School professor Francesca Gino used falsified data to produce positive findings. In an ironic twist, some of the experiments were on dishonesty and ethics.

The accusations have led some to question whether the field of behavioral science can be trusted. Given that governments and regulators across the globe use behavioral science as the basis for policy and rule setting, such accusations have serious implications for healthcare, financial markets, law enforcement, and the judiciary.

It is worth taking a closer look at Data Colada’s claims and the weaknesses in behavioral science generally. We need to ask whether we should be concerned about the application of psychology to risk and compliance.

Data Colada’s Claims About Behavioral Science

In August 2021, the Data Colada team disclosed that they believed data in an influential behavioral science paper was falsified. The landmark 2012 study was undertaken by Lisa L. Shu, Nina Mazar, Francesca Gino, Dan Ariely, and Max H. Bazerman.

There were three studies reported in the paper. The third study used data from a car insurance company to conclude that signing an honesty declaration at the front of a claim form makes individuals less likely to lie about the mileage they drove. However, the base mileage data used to judge dishonesty was suspicious.

One would expect the mileage driven by a large number of people to be extremely varied. A small number of people won’t drive much, a small number will drive vast distances, and most will drive an average distance.

But what the Data Colada team discovered was that the base data showed a uniform pattern of mileage with a sharp cut-off at 50,000 miles. In the 2012 study, roughly equal numbers of people appeared to have driven each distance across the whole range, and no one appeared to have driven beyond 50,000 miles.
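The intuition behind that red flag can be illustrated with a quick simulation. This is a sketch with invented parameters, not Data Colada’s actual analysis: it compares a plausible, skewed mileage distribution against a uniform one with a hard cut-off.

```python
import random
import statistics

random.seed(0)
N = 10_000

# Plausible self-reported mileage: most drivers cluster around an average,
# with a long right tail. A lognormal distribution is one simple stand-in;
# its parameters here are invented purely for illustration.
plausible = [random.lognormvariate(9.2, 0.6) for _ in range(N)]

# The suspicious pattern from the paper: every distance up to 50,000 miles
# equally likely, and nothing at all beyond the cut-off.
suspicious = [random.uniform(0, 50_000) for _ in range(N)]

def summarize(name, miles):
    """Print a few percentiles so the shapes can be compared at a glance."""
    miles = sorted(miles)
    p10, p50, p90 = miles[N // 10], miles[N // 2], miles[9 * N // 10]
    print(f"{name}: 10th pct {p10:8,.0f}  median {p50:8,.0f}  "
          f"90th pct {p90:8,.0f}  max {max(miles):8,.0f}")

summarize("plausible ", plausible)
summarize("suspicious", suspicious)
```

In the skewed sample, the mean sits well above the median and the tail runs far beyond the 90th percentile; in the uniform sample, the percentiles are evenly spaced and the data stops dead at 50,000 miles, which is exactly what should not happen with real drivers.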

This simply didn’t make sense. The reviewers identified other indicators of falsification too, such as suspiciously precise reported mileages (e.g., 1,743 miles rather than 1,750; people tend to round mileage when they self-report) and changing fonts, indicating data had been copied poorly.
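The rounding point lends itself to a simple forensic check: honest self-reports are dominated by round numbers, so the share of entries ending in zero is a crude signal. The figures below are invented for illustration, not taken from the study.

```python
def rounding_rate(mileages):
    """Fraction of reported mileages that are exact multiples of 10."""
    return sum(m % 10 == 0 for m in mileages) / len(mileages)

# Invented examples: self-reported odometer readings tend to be rounded,
# while fabricated "realistic-looking" figures often are not.
honest_looking = [1750, 12000, 8300, 9990, 4520, 31000, 640, 15000]
too_precise = [1743, 12087, 8311, 9998, 4523, 31042, 641, 15009]

print(rounding_rate(honest_looking))  # every entry is a round number
print(rounding_rate(too_precise))     # none are
```

A real analysis would compare such rates against a baseline from known-honest data, but even this crude ratio makes the anomaly visible.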

Behavioral Science And Red Flags

The latest Data Colada analysis identifies similar red flags in the data used in papers co-authored by Francesca Gino. They claim the following.

  • Revisiting the Shu et al. paper, Study 1 showed signs that data indicating a neutral result was deliberately manipulated to conclude that signing at the top of the form encouraged honesty. The Data Colada team used a little-known feature of Excel to show that data had been moved manually to drive the positive finding. The manipulation had, this time, been undertaken by a different researcher. In their words: ‘Two different people independently faked data for two different studies in a paper about dishonesty.’

  • In another study, Gino et al. (2015) hypothesized that students forced to argue against their own beliefs would feel a little ‘grubby’ and, as a consequence, would view cleaning products more favorably (in order to reduce that uncomfortable feeling). Once again, Data Colada found suspicious data. The subjects of the experiment were asked which year of their studies they were in. Some 20 respondents gave the impossible answer ‘Harvard.’ On its own, this is a red flag for data manipulation. But these 20 responses wholly accounted for the positive result of the experiment, suggesting the data had been fabricated.

  • Gino and Wiltermuth (2014) show similar manipulation. In this paper, 13 data entries were moved from a negative result to a positive result to support the hypothesis that cheating was correlated with creativity.

  • Gino et al. (2020) also show evidence of tampering. Despite subjects offering a score indicating that they found networking events stressful and distressing, verbal comments recorded in the same form by the same subjects seemed universally positive. The Data Colada team concluded that an experimenter had changed the scores but not the words. Only the scores were used in the experiment; the words were ignored. Once again, the scores with the mismatched words accounted for the positive result. Without them, there would be no result at all.
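The pattern described in the Gino et al. (2015) bullet above suggests a general sensitivity check: drop the anomalous rows and see whether the effect survives. Here is a minimal sketch with fabricated numbers (not the study’s actual data), where a handful of impossible responses carry the entire effect.

```python
import statistics

# Fabricated rows: (answer to "what year are you in?", condition, product rating)
rows = (
    [("2", "control", 4.0)] * 5
    + [("3", "treatment", 4.0)] * 5        # valid treatment responses: no effect
    + [("Harvard", "treatment", 7.0)] * 3  # impossible answers carrying the effect
)

def effect(data):
    """Mean treatment rating minus mean control rating."""
    treat = [r for _, c, r in data if c == "treatment"]
    ctrl = [r for _, c, r in data if c == "control"]
    return statistics.mean(treat) - statistics.mean(ctrl)

# Keep only rows with a sensible answer to the year question.
valid = [row for row in rows if row[0] != "Harvard"]

print(f"effect with all rows:   {effect(rows):.3f}")   # positive
print(f"effect with valid rows: {effect(valid):.3f}")  # disappears
```

If removing a few suspect rows makes a finding vanish entirely, the finding rests on those rows, which is precisely the argument Data Colada makes.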

Data Colada is at pains to make clear that their accusations are aimed solely at Francesca Gino and Dan Ariely, not their co-researchers. Francesca Gino is on leave from Harvard University, and all articles mentioned above have either been retracted or have retraction requests associated with them.

Psychology

Psychology as a discipline has faced a number of methodological challenges throughout its history. Sir Cyril Burt, for instance, was an eminent psychologist whose experiments were used to promote the theory that intelligence was heritable. After his death, it was claimed that he not only falsified data from identical twins who never existed, but also invented two co-researchers he claimed had worked with him.

Although his data and methods were later declared untrustworthy, Burt’s work had been used as justification for the UK’s ‘11-plus’ examination, a selection test whose results were used to channel children into academic or technical schooling around the age of 11.[2]

The Failings Of Psychological Science

Psychological science has also been criticized for other failings, which can be grouped under the following broad headings.

  • Cultural bias. Psychology is primarily based on experiments involving White European subjects with little regard for other cultural contexts.

  • The ‘replication crisis’. The findings of some studies have been impossible to repeat.

  • Publication bias. Practitioners are under pressure to publish often, and only those with positive results reach the light of day.

  • i-frame, not s-frame. It has been suggested that ‘nudges’ often produce only small effects and that the science’s focus on changing individual behavior is less effective than policies aimed at changing societal rules.

  • Oversimplification. Concepts such as ‘system 1’ (instinctive, subconscious thinking) vs. ‘system 2’ (cognitive, conscious reasoning) are an overly simplified characterization of the full complexity of human cognition.

Keeping An Open Mind

There can be no doubt that psychology has had its instances of fraud, misrepresentation, and over-generalization. Yet the same is true of other, more ‘concrete’ sciences. When genetic scientists at University College London were found to have fabricated results over many years, no one demanded a rejection of the entire field of genetics.[3] Science is an inherently social process, subject to the same human biases, flaws, and predispositions as any other human activity.

This is a reason to be cautious and thorough when applying scientific findings, but not a reason to reject them outright. Careful consideration and much experimentation are required when deploying interventions. There are many successful examples of behavioral science being applied to significant human problems. That the science is sometimes problematic should not lead us to reject the positive solutions it can offer.

Written by Paul Eccleson

This article was first published by the International Compliance Association (ICA), the leading professional body for the global regulatory and financial crime compliance community. For more information on the benefits of becoming an ICA member, including access to the ICA’s complete content library of articles, videos, podcasts, blogs, and e-books, visit: Become an ICA Member – Application Form (int-comp.org)

References:

[1] See http://datacolada.org/ for details of the studies and more on the claims.

[2] The Wikipedia entry for Cyril Burt is a good starting point for understanding the controversy.

[3] Ian Sample, ‘Top geneticist “should resign” over his team’s laboratory fraud’, The Guardian, 1 February 2020.

