When we evaluate and compare a range of data points — whether that data is related to health outcomes, head counts, or menu prices — we tend to neglect the relative strength of the evidence and treat it as simply binary, according to research published in Psychological Science, a journal of the Association for Psychological Science.
“People show a strong tendency to dichotomize data distributions and ignore differences in the degree to which instances differ from an explicit or inferred midpoint,” says psychological scientist Matthew Fisher of Carnegie Mellon University, first author on the research. “This tendency is remarkably widespread across a diverse range of information formats and content domains, and our research is the first to demonstrate this general tendency.”
In a series of six studies, Fisher and coauthor Frank C. Keil of Yale University examined how people tend to reduce a continuous range of data points into just two categories.
“Especially in the Internet age, people have access to an overwhelming amount of information,” says Fisher. “We have been interested in how people make sense of all the data at their fingertips.”
Fisher and Keil hypothesized that people would implicitly create an “imbalance score,” tallying the difference between the number of data points that fall on one side of a given boundary and the number that fall on the other. If people are evaluating data from different studies investigating the relationship between caffeine and health, for example, they would quickly categorize each result as either showing an effect or not, regardless of the relative strength of the evidence.
In one online study, Fisher and Keil randomly assigned a total of 605 participants to consider a specific topic drawn from one of four domains: scientific reports, eyewitness testimonies, social judgments, or consumer reviews. Participants saw a series of 17 claims about the relationship between two variables, such as taking a certain medication and experiencing feelings of hunger (e.g., “One group of scientists found that the new medication makes feeling hungry 2 times more likely,” “One group of scientists found that the new medication makes feeling hungry 4 times less likely”).
After viewing the claims, participants then summarized the evidence, choosing the rating that best captured their overall impression.
As hypothesized, the imbalance score — the number of strong and weak negative evidence claims subtracted from the number of strong and weak positive evidence claims — was associated with participants’ summary judgments. Their summary judgments were also influenced by the first piece of evidence they saw.
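To make the arithmetic concrete, the short Python sketch below computes an imbalance score from a list of signed effect values. It is only an illustration of the tally described above, not the researchers’ materials; the input format (one signed number per claim, with magnitudes deliberately ignored) is an assumption.

```python
from typing import Iterable

def imbalance_score(effects: Iterable[float]) -> int:
    """Count claims on each side of the zero midpoint and return the difference.

    `effects` holds one signed value per claim (e.g., +2 for "2 times more
    likely", -4 for "4 times less likely"). Magnitudes are ignored on purpose,
    which is the simplification the binary bias implies.
    """
    positives = sum(1 for e in effects if e > 0)
    negatives = sum(1 for e in effects if e < 0)
    return positives - negatives

# Example: three weak positive claims outweigh two strong negative ones,
# even though the negative evidence is larger in magnitude.
print(imbalance_score([1.2, 1.1, 1.3, -6.0, -5.0]))  # -> 1
```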
Further evidence for the impact of imbalance score on participants’ estimates emerged in two additional online studies, in which people saw data presented in various forms, including vertical and horizontal bar charts, pie charts, verbal descriptions with or without percentages, and dot plots.
The binary bias even appeared in the context of real-world decision making: Participants seemed to collapse data into two categories whether they were evaluating menu prices or determining which factories had higher carbon dioxide output. In both of these domains, participants’ judgments were influenced by the imbalance score implied by the data.
“We were surprised by the pervasiveness of the effect across contexts and content domains,” says Fisher. “The binary bias influenced how people interpret sequences of information and a wide variety of graphical displays.”
The fact that the bias is so pervasive suggests that it is not due to a specific feature of data visualization or statistical information but is instead a general cognitive illusion. Fisher and Keil suspect that this distortion may serve as a mental shortcut that allows us to process large amounts of information relatively efficiently.
“Our work suggests the bias is a basic processing mechanism which is applied across many contexts, including health, financial and public-policy decisions,” the researchers conclude.