News — Wellesley, Mass. – New research by Wellesley College professor Jeremy Wilmer and colleagues has produced an accurate and inclusive test of mindreading, or theory of mind.

For the past two decades, the most common scientific test of mindreading has asked people to guess what others are thinking just by looking at their eyes. As of February 2024, this test, called the Reading the Mind in the Eyes Test (RMET), had been cited thousands of times, making it one of the most influential scientific tools in all of the social sciences.

But unfortunately, says Wilmer, the RMET had major issues with inclusivity. “The eyes in the test were all cut out of British tabloids circa 1995,” he explains. “So the eyes are the eyes of white celebrities, and the words available to describe their expressions are deeply gender stereotyped.” (For example, he notes, the correct answers for female eyes are disproportionately sexual, such as “fantasizing,” while those for male eyes are disproportionately assertive, such as “insisting.”)

The limitations of this influential test were obvious—and frustrating to people asked to participate in research using the standard RMET. Wilmer’s coauthor, Laura Germine, explains, “In our mental health studies, many participants from communities of color told us that taking this test, as a measure of their social abilities, was offensive—the original test overtly centers the emotional experiences and expressions of White people as the standard. We wanted to address this, both to advance science and to improve equity in mental health research.”

So the research team developed a new test, the Multiracial Reading the Mind in the Eyes Test (MRMET), as a racially inclusive and gender-balanced replacement for the existing standard. The MRMET was created in collaboration with a diverse community of Boston actors, recruited through Company One Theatre, a theater organization devoted to justice, equity & social change.

The MRMET is introduced in a newly published research paper.

The paper establishes the quality of the new MRMET through a massive study of 60,000 participants, recruited via TestMyBrain.org, a citizen-science project where people contribute to science by testing themselves. The study found that the researchers’ inclusive new MRMET was as good as or better than the existing RMET across a wide array of quality metrics. For example, the MRMET predicted autism spectrum symptoms more accurately than the RMET did.

According to Wilmer, the original RMET creators had good intentions. They likely never expected the test to be used as widely as it has been, even 20 years later. For their original purposes (testing in Cambridge, UK, in the 1990s), it was reasonable. “But,” says Wilmer, “for a test with such widespread use, we can do better!”

“The science of how we perceive faces has long operated on the implicit assumption that an effective way to achieve scientific control is to study homogeneous faces,” Wilmer says. “But our work clearly shows that there is no necessary value to such homogeneity. In fact, our work shows that one can simultaneously achieve both inclusion and scientific excellence.”

Two of Wilmer’s former students at Wellesley were co-authors on the study. “This project would not have been possible without the vision and insight of Jasmine Kaduthodil and Ally Kim,” Wilmer says.

The researchers thus present the MRMET as a high-quality, inclusive, normed and validated alternative to the RMET, and as a case in point that inclusivity in psychological tests is an achievable aim.

Wilmer hopes, too, that this new test can provide a model for inclusivity in other scientific domains. “Through this work, we provide a case study of how a sustained, careful scientific effort can tear down a barrier to inclusion that existed within science itself.”

The MRMET is currently being disseminated under an open (CC BY-SA) license by the Human Variation Lab at Wellesley College and the nonprofit 501(c)(3) Many Brains Project.

###