Professor Chris Chartier speaking with students at the 2026 URCA Symposium

AU Department of Psychology contributes to major study that landed on the cover of the journal Nature

Published on May 12, 2026
Psychology

ASHLAND, Ohio – A massive research project on the trustworthiness of scientific studies, with several members of the Ashland University Department of Psychology as co-authors, landed on the cover of the journal Nature on April 1.

The years-long project, Systematizing Confidence in Open Research and Evidence (SCORE), brought together 865 collaborators across many fields of social science, including most of AU’s psychology department. Christopher Chartier, Ph.D., professor of psychology, took on a central role from the outset. Diane Bonfiglio, Ph.D.; Mitchell Metzger, Ph.D.; Kathleen Schmidt, a former research scientist at AU; and alumna Savannah Lewis ’20 also made significant contributions and are listed as co-authors.

Findings from SCORE, which was designed to improve the assessment of scientific credibility in the social and behavioral sciences, were published in Nature as a collection of three papers. Together they offer new empirical evidence on the reproducibility, robustness and replicability of research in those fields, as well as the predictability of replicability.

Chartier, who summed it up as a “big health check on the scientific community,” was invited to join the project because he founded the award-winning Psychological Science Accelerator and subsequently worked with the Center for Open Science, a non-profit organization with a mission to increase the openness, integrity and trustworthiness of scientific research.

“One of the main things I was involved in was kind of pairing research laboratories from the (PSA) network with available and required replications, particularly in psychology,” explained Chartier. “I was kind of like a team builder, coordinator of that aspect of the project.”

Psychology professor Chris Chartier during a 2026 class

Chartier also served as a team leader for the paper focused on replicability, a core principle of science. For that paper, titled “Investigating the replicability of the social and behavioral sciences,” a team of 292 researchers from around the world tested whether findings from 164 papers published in well-known journals could be replicated successfully when tested again with new data.

“The main goal was just to have a solid estimate of replicability in the social and behavioral sciences to understand what is the current situation in this arm of the scientific enterprise,” explained Chartier. “The reason we care is that replicability is a fundamental principle of science, if somebody publishes a finding, we, at a baseline, expect some reasonable percentage that an independent team can go look for that same thing.”

In one specific example, Chartier and Lewis examined a published study on consumer decision making that looked at how Amazon ratings are evaluated, weighing the impact of rating quality against the quantity of ratings.

“We did successfully replicate their finding that both have an additive impact on hypothetical consumer decisions,” said Chartier. “Ours was one of the successful replications that said both quantity and … quality or value of rating judgments have a measurable independent impact on people’s likelihood to buy the product, overall evaluation of the product. So, a straightforward marketing with social psych kind of study.”


Even though Chartier’s primary focus is psychology, he is excited that the project’s broad scope touches other disciplines.

“It wasn’t constrained to just psychology. You’ve got sociology, political science, education, behavioral economics, experimental philosophy, many fields that are similar to psychology in substance and how people do and publish their work,” he noted.

Thus, the impact of these papers could eventually be felt throughout society.

“The reason replication matters is ideally our scientific findings are used by everyday people to improve their lives … by practitioners providing therapy, by policy makers deciding how to structure educational and economic systems, and when they do that they take scientific finding as truth because it’s the best we got,” Chartier said.

The paper revealed that roughly 50% of the findings replicated successfully. While Chartier acknowledged that some people may remain skeptical of scientific findings, he said the implications of the study point to uncertainty, not failure, and to a need for more cautious confidence in research claims.

“Turns out the answer is somewhere in the middle, and so what it suggests is that science is valid and probably our best source of truth in the universe. But, it’s far from perfect and we shouldn’t take every published finding at face value,” he summarized. “It means that it should not and cannot be ignored. It’s our best source of evidence, but it means you have to remain critical and evaluate all the evidence for a finding anytime you’re trying to understand something.”

About SCORE
Systematizing Confidence in Open Research and Evidence (SCORE) was a large-scale, multi-method research initiative designed to improve the assessment of scientific credibility in the social and behavioral sciences. Recognizing that evaluating the trustworthiness of research claims is essential but resource-intensive, SCORE aimed to develop scalable, accurate tools for estimating credibility. The program combined expert judgments, machine learning approaches, and empirical assessments of repeatability—including reproducibility, robustness, and replicability—to validate credibility indicators. In addition to its primary scientific goals, SCORE produced openly accessible datasets, algorithms, and evidence that offer unprecedented insight into the state of research credibility. The project began in 2019 and the primary outcomes and outputs were reported and shared in 2026.