News — UCI cognitive scientist Megan Peters shares her work to unlock mysteries of the mind
How do our brains take in complex information from the world around us to help us make decisions? And what happens when there’s a mismatch between how well your brain thinks it’s performing this function and how well it’s actually doing? In this episode of the UCI School of Social Sciences Experts On series, cognitive scientist Megan Peters takes a deep dive into metacognition, our ability to monitor our own cognitive processing.
“Our brains are fantastically powerful information processing systems. They take in information from the world around us through our eyes, ears, and other senses, and they process or transform that sensory information into rich internal representations — representations that we can then use to make useful decisions, to navigate effectively without running into things, and ultimately, to stay alive. And interestingly, our brains also can tell us when they’re doing a good job with all this processing, through a process called metacognition, or our ability to monitor our own cognitive processing.
My name is Megan Peters, and I’m an associate professor in the Department of Cognitive Sciences at UC Irvine. I’m also a Fellow in the Canadian Institute for Advanced Research Brain, Mind, & Consciousness program, and I am president and chair of the board at Neuromatch.
My research seeks to understand metacognition — how it works in the brain, and how it works at a computational or algorithmic level — and it also seeks to understand what this metacognitive processing might have to do with the conscious experiences we have of our environments, of each other, and of ourselves. So in our research group, we use a combination of behavioral experiments with humans, brain imaging (like MRI scans), and computational approaches like mathematical modeling and machine learning or artificial intelligence, to try to unravel these mysteries.
I think my favorite overall line of research right now has to do with cases where our brains’ self-monitoring sometimes seems to go wrong. So what I mean is, sometimes your brain “metacognitively” computes how well it thinks you’re doing at this “sensory information processing” task, but this ends up being completely different from how well you’re actually doing. Imagine it this way: you’re driving down a foggy road at night. You probably can’t see very well, and you’d hope that your brain would also be able to tell you, “I can’t see super well right now, I should probably slow down.” And most of the time, your brain does this self-monitoring correctly, and you do slow down. But sometimes, under certain conditions or with certain kinds of visual information, your brain miscalculates, and it erroneously tells you, “Actually you can see just fine right now!”
So this is a sort of “metacognitive illusion”: your brain is telling you “you’re doing great, you can see very clearly!” when in reality, the quality of the information that it’s receiving, and the processing it’s doing, is really poor, really bad — in essence, that means that you can feel totally confident in your abilities to accurately process the world around you, when in fact you’re interpreting the world totally incorrectly.
Now normally, in everyday life, this doesn’t happen, of course. But we can create conditions in the lab where this happens very robustly, which helps us understand when and how it might happen in the real world, too, and what the consequences might be. So this is fascinating both because it is a powerful tool for studying how your brain constructs that metacognitive feeling of confidence, and also because — in theory — it means that your subjective, conscious feeling of confidence might be doing something really different from just automatically or directly reading out how reliably your brain is processing information. And that could eventually provide a better way to investigate how our so-called phenomenological or conscious experiences can arise from activity patterns in the brain at all.”
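To make the idea of a “metacognitive illusion” concrete, the sketch below is a minimal, illustrative simulation of how confidence and accuracy can come apart. It is not the model used in Peters’ lab; it simply assumes a textbook signal-detection-style observer whose decisions are driven by genuinely noisy sensory evidence (the fog), but whose confidence is computed under a mistakenly optimistic estimate of that noise, so confidence stays high while accuracy drops. All parameter values are arbitrary choices for illustration.

```python
# Illustrative toy model of a "metacognitive illusion" (not the lab's actual model).
# An observer decides which of two categories (+1 or -1) produced a noisy sensory
# signal, but computes confidence under a wrong, overly optimistic noise estimate.

import numpy as np

rng = np.random.default_rng(0)

n_trials = 100_000
signal = rng.choice([-1.0, 1.0], size=n_trials)  # true stimulus category

true_noise = 2.0      # actual sensory noise, e.g. driving through fog at night
assumed_noise = 0.5   # noise level the confidence computation wrongly assumes

# Internal evidence the observer actually receives
evidence = signal + rng.normal(0.0, true_noise, size=n_trials)

# Decision: pick the category matching the sign of the evidence
choice = np.sign(evidence)
accuracy = np.mean(choice == signal)

# Confidence: posterior probability the choice is correct, but computed with the
# assumed (too small) noise:  p(correct | x) = 1 / (1 + exp(-2|x| / sigma^2))
confidence = 1.0 / (1.0 + np.exp(-2.0 * np.abs(evidence) / assumed_noise**2))

print(f"actual accuracy: {accuracy:.2f}")           # roughly 0.69 with these settings
print(f"mean confidence: {confidence.mean():.2f}")  # close to 1.0: strong overconfidence
```

With these (hypothetical) numbers, the simulated observer is correct on only about 69% of trials yet reports near-certain confidence on most of them, mirroring the “you can see just fine” illusion described in the transcript.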