Verified: A Conversation about Science, Trust, and Pandemics with Mike Caulfield


Welcome to Episode 4 of the Second Breakfast podcast. At the outset, I should apologize for the delay in getting this edited and out the door. Indeed, our guest today, Mike Caulfield, and I recorded this back in January.

Mike has worked for the past few years at the University of Washington's Center for an Informed Public, where he has studied misinformation and, so importantly, helped challenge the ways in which we have historically taught students how to verify things they find on the Internet. He is the author, along with Sam Wineburg, of the book Verified, which came out late last year. And he's a good, good friend — I would say one of the people whose work I have found to be the most intellectually stimulating, the most generative for my own.

Speaking of generative, Mike and Sam's book doesn't really touch too much on the flood of "generative AI" content that we're starting to see take over a lot of digital sites. But I think a lot of the things that they talk about with regards to digital literacy still apply — and they apply because their work isn't simply about technology but about a variety of intellectual and institutional and psychological behaviors and practices that we aren't that adept at interrogating. Or, in some cases, we're too good at interrogating — we won't, for example, believe anything we read in the Daily Mail, even though, as the old adage goes, even a broken clock is right twice a day.

So in this conversation, Mike and I chat about misinformation, digital literacy, and why he still pays for a Peloton subscription. We do not, I should note, talk about second breakfasts, as Mike confesses to having skipped first breakfast.


Mike: So this is second breakfast. I missed breakfast, so this is good.

Audrey: Yes. Yeah. Well, I don't know. I don't know what I'm doing. I'm still trying to figure it out. But I know that, for me, I think through things when I write; I often don't even know what I think until I sit down and start to make it into an argument. And so for now, this is what I'm doing.

I wanted to talk with you about the book because, you know, as I'm entering this new space, this new field where I have no expertise, I'm recognizing just how well greased the slippery slope of the wellness-to-misinformation pipeline seems to be. And so, yeah, I thought you would be a perfect person to talk to about how one navigates this in any field, but particularly around our own health.

Mike: Yeah, so the book is Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online, which is a very sort of booky subtitle. But the underlying model that informs the book is a little more subtle and nuanced than that. The question people have is: how can you make decisions about things where you don't have perfect knowledge, where you may not have domain expertise, but you still have to make decisions? Some of those decisions are just decisions about what to believe, or what to say you believe, and some of those decisions are about things you're going to do.

We live in this world; we have imperfect information; we have personal flaws; we've got to make these decisions anyway. And even if we think we're not making decisions, we are making decisions on what we believe or what we don't believe and we're doing this relatively constantly. So the book talks about methods to do that.

I think the core of the book is something that hasn't been fully understood in misinformation studies. When we fact check things online, we aren't really fact checking the relationship of 18 facts to the world. What's actually happening is we're seeing something. It's either compelling to us or it's not compelling to us. It feels right. It feels like proof of something. It feels like something that might change our life. Or it feels like it's garbage. And then we're getting bigger context. And what we're actually fact checking is our reaction to the thing. A lot of these models kind of throw away the initial reaction.

What we're trying to get at is, you see a thing, and you already have an opinion about it. We're trying to get people to go out, get a little more information, come back, and see if their opinion has changed. And we show some techniques people can use to do that in 90 seconds.

Audrey: I think the importance of that emotional reaction was a really interesting insight in the book — the way in which we do have these immediate responses to things. It should be a moment to notice — what is it that has you so worked up?

What we've sort of been trained to do online is share it with everybody. And what you say is that if you're feeling super excited — either agitated or elated about something you come across — then that's actually not the moment to hit the share button, right?

Mike: Yeah, yeah, it's not the moment to hit the share button. At the same time, it's incredibly valuable that you feel something. One of the things that we find is that people can't really process information they have no feeling about. We've done this with students, where we give students something on, say, Hunter Biden and Burisma, and ask, hey, you know, is this true? And what are you going to find? You're going to find that students don't understand it; it's just full of nonsense words. Like, what's Burisma?

And so this is really useful, because if we don't have the emotion, it's very hard for us to understand why something might be meaningful to us; our emotions kind of tell us that. We're reacting to something, and in some way the thing in front of us is either confirming that we were right after all, that we've always been right and everybody has been wrong to persecute us for our ideas; or it's telling us, oh, this thing will change my life, whatever this supplement is, or something like that. Or maybe we deride the thing. We're like, oh, this is complete nonsense, there's nothing to this at all. And once we have that emotion, it really does form the rest of the experience.

So one of the things that I want people to understand is how valuable those emotions are. We don't really want a bunch of people who look at something — for example, something that seems to show injustice at the border — and respond like Spock: let me identify the eight data points here. We want people to have the emotion, and then we want people to ask whether that emotional reaction — hope, aspiration, dismissiveness, whatever it is you feel — was appropriate.

It's the same in the health realm and the health technology realm: you see this thing and you feel like, well, that will solve my problem. You feel something about it. But then the question is, do you have that same sense of relief once you actually know how the thing works?

Audrey: One of the things I think about is how all of this is tied up in a lack of, or a loss of, trust in the people, in the institutions that were, I think, supposed to be the keepers of society, really: the keepers of knowledge, the keepers of scientific knowledge. We've lost trust in almost all institutions.

That makes it incredibly challenging, then, to navigate these things. There was even that old ad — "I'm not a doctor, but I play one on TV" — that idea that just being an actor who played a doctor made you a valuable spokesperson for health.

But today, we don't trust institutional players any longer. And yet we have to figure out where expertise lies, between the actors and the doctors. I mean, we do have to trust experts, don't we?

Mike: Well. Kind of. I mean, it depends what you want to do, right? You have to at least know what the consensus is.

One of the problems, in terms of a capitalist information economy, is that you ideally want a source that brings domain expertise and a lot of knowledge to an issue. You want that person to have strong incentives to, we use the phrase, "take care with the truth." It's not just to tell the truth, but to actually be careful with it.

And you want someone ideally that maybe shares your values, right? Because the decisions that you're making are made not just in a vacuum, they're made through a set of values. And the problem is getting someone that has all three of those things is really hard.

Take the oil industry and the safety of fracking, for example. A lot of the geological engineers who could tell you, hey, how dangerous is this? They work for an oil company, right? So you have the expertise, but you have this conflict: they don't necessarily have the same incentives to be careful with the truth. I'm not saying they would lie; I'm just saying they might not take that care.

But sometimes the things people believe are just bullshit, and sometimes it's very toxic bullshit. Still, look at people who feel like they're excluded from this sort of college-educated crowd, who feel they live in a different environment and so forth. Their worry is that they can't believe that someone with domain expertise would understand their values, and so they value someone who speaks through those values. I know that sounds very abstract… but some people want a messenger who seems to come from the same sort of social class or environment as they do, and it's not a meaningless request. For example, if you're a Black person in America and there's a medical question, like the COVID vaccine, you would like someone who actually understands where you came from, shares some of your values, and brings their expertise, someone who brings those things together. Then, if they say get the vaccine, it means something.

We can often get two of the three — we can get the expertise and the care with the truth. Or we can get the expertise and the values. But we can't always get all three together, and I think people find that frustrating.

On the other hand, I do think that people misunderstand how much experts tend to disagree. It's very hard to get experts to agree, so when you actually do find them converging on something, they're not always right, but it does tend to indicate more than people think.

Like, you look at the moon landing. My favorite explanation of why the moon landing can't be a hoax is that at the time we landed on the moon, there was a Soviet satellite orbiting the moon, right? And if we had not been on the moon and were faking it, I think the Soviet Union would have chimed in at that point and said so.


Audrey: Yeah, I think so too. I mean, I think it's also challenging because the science and technology change. And so there's always that aha moment that someone was wrong in the past about X, Y, or Z. I think a lot about Fauci, who's a really interesting figure in this way, because the science changes, and of course culture changes too.

When people have this long tenure, it is likely they have not always been right. And someone like Fauci, I think, fits several of those things that you were talking about. He has a different kind of track record for different populations. As a figure, he holds a different meaning, a different cultural meaning for different populations. And that can be leveraged to talk about his lack of trustworthiness.

Mike: I mean, if the activity of science never produced any change of belief, we wouldn't pursue it, right? So it does; over time things change, and that's a piece of it. In Fauci's history, of course, there is some bias in the 80s that comes into play. Society changes. I'm not giving Fauci a free pass on that early stuff at all.

And then, during COVID, we saw all of this evolving in real time, on a minute-by-minute basis: science, minute by minute, at a level that we had never done before. And I think it really screwed some people up. It's hard enough seeing science year by year, but seeing it minute by minute? That's not how you're supposed to consume it. That's not even how scientists consume it, you know? I think that eroded a lot of trust. Things are very messy before you come to conclusions. You get into a domain and you realize there's just a lot of stuff coming at things from 18 different directions, with a lot of different theoretical and methodological lenses, and it all conflicts. It takes a long time until you start to get some things that actually hold no matter what sort of lens you're using, or that actually integrate with other understandings.

I have a friend who looks at the supposed replication crisis, and one of the things he often points out is that very often it's not a replication crisis. What's actually happening is that someone tests something one way, then someone changes something inadvertently, like they run the test a little differently, and they get a different result. And what's actually happening is that slowly, over time, you start to narrow in on the result. But the tendency is to interpret all of this as a crisis, a replication crisis. Part of what we get out of these replications when they fail is that we delve into the details and start to pry apart where we might be getting these effects. But it's hard to get people to see that, right? People really like the idea of one study, built for all time; someone else does the same study, the results aren't the same, and we throw up our hands and go "everyone's a liar."

Being in an academic discipline, you know the cacophony of this. But for people who are being exposed to it now, in real time, I understand why they're really doubtful about things.

Audrey: There's our lack of knowledge around the science of science. And then there's also a communication problem. And then as you said, there's the problem of capitalism, which really does steer conversations in certain ways and make us think about solutions and "problem solved," even, in certain kinds of ways.

Mike: Solutionism is still a good way of looking at a lot of this. It's certainly the lens that people sometimes unintentionally adopt. Solutionism sits on top of these complex social problems, which don't have a single solution; it's often used to address things that aren't really solvable. And there's also this assumption that you find the thing that works and that thing is unchanging, right? As if you bring a new thing, a new solution, into the environment, and then time just stops, everything freezes around it, and nothing happens.

I remember back when we were looking at some of these early education things, where all these little studies would show that if the professor opens a lecture with something like this, you know, the students do this much better, and so forth. And I don't doubt that effect existed, but it's sort of like a curiosity-gap headline. Like, yeah, you do something really shocking at the beginning of the lecture, and the students that time are gonna pay attention to it, right? But three weeks later, it's kind of run its course. Like the introduction of the iPad. Students are so engaged with them! And yeah, of course they are; it's a new iPad.

Audrey: Yeah, and then after like three months of doing digital worksheets, the kids are like, wow, this sucks.

Mike: Yeah, because the world doesn't stand still. People will develop new methods of cheating around it, right? Your thing that supposedly captures students' attention, yeah, great, that will work for a little bit, but it will eventually be non-novel.

And this is part of the solutionism problem, of course — things are so much more complex, and this framework reduces everything to "solution." But part of the problem with the idea of a solution, too, is that a solution is sort of a period at the end of a sentence rather than a comma.

I think that perception also influences people's trust, because they feel like they're being told that something's a solution, period. Masking is a great example of that. There's all the evidence in the world that early on in the pandemic, masking did quite a bit to slow the spread. But the environment where you've introduced masking changes. Mask discipline slides, so what masking means changes, right? The virus adapts over time; it becomes more transmissible, a little less affected by the measure. And then people's immunity, the baseline denominator of people's vulnerability, shifts after people either get infected or get vaccinated.

And then you do a study and you say, hey look at this, the masking isn't making much of a difference.

Well, yeah, because people have had vaccinations, this and a whole bunch of other measures. People have had the illness. Mask discipline is a little bit different than it was. And the virus is a different virus, spreading in this new environment.

You introduce a solution, the environment changes around the solution and sometimes in reaction to the solution. And then people say "oh that solution never worked."

Audrey: Yeah, and I think about "solutions journalism" as something that philanthropic funders have pushed for a while. And I think it comes with this sort of relentless optimism, too, which people find very jarring against the way they experience the world.

Mike: I do worry about this information environment and democracy and democratic discourse. But I also think it's notable, when we look at something like the pandemic, which was unprecedented. You know, when they first said, look, let's try… people call it a lockdown, but let's try selective isolation here, let's try some social distancing. When they first were talking about that at the very beginning, most of the experts said you're not gonna get people to do it at all.

The belief was that people would not do it at all. The same with masking initially. You know, there's no use asking people to mask, because you're not going to get people to change their behavior in this massive way overnight.

A lot of the early discussions were very dismal about the ability of people to change. You had a new vaccine and completely new technology. We were told, look, it's gonna take years to develop this vaccine. And then suddenly someone shows up with it nine months later, and people were suspicious of that.

But in the end, people did change. People got the vaccine. People talk about this as a massive communication failure. But maybe it's not.

Audrey: I do think that technology is really encouraging us to become more and more individualized, in some ways flattening culture and disrupting social cohesion and community by privileging personalization at the expense of democracy. So I was pleased to see that there was still a great deal of awareness about social responsibility, about reciprocity, a willingness to think through what things meant for the group.

Mike: Yes, but it is not infinite, and that's just us as humans. Humans are actually really good at the acute and the immediate. Like, when something first happens, everybody's reaching out, everyone's calling, everyone's talking. Six months later, they don't even remember that you're suffering.


Audrey: Oh yeah, that hits home. That hits home. Yeah.

Mike: Yeah, it's the same with grief. It's just the way we're built as humans: we really deal with the acute and the immediate, and then the cognitive salience of what people are going through leaves us. So it's the same thing with the pandemic. I think we're set up to deal with these sorts of crises. It's just hard to sustain it. It requires a certain intensity of thought and focus to do these things over the long term. That's hard, as it is with anything in life, really.

Audrey: So I want to make sure that we talk about a few more things before you have to go. I remember that you were an early adopter of Peloton.

Mike: I was an early adopter of Peloton. I got in the best shape of my life with Peloton — I'm gonna sound like an advertisement — probably better than I was in my 20s. It really worked for me, and then we moved. And this was pre-pandemic; during the pandemic I got very into it.

And it was interesting to me because I'm broadly dismissive of things like that, and yet it was working. It was working really well for a long, long time. And this is one of those things: a solution that kind of works. And then when we moved, the place I had carved out for it in my life just wasn't there anymore, and it's hard to re-carve that out. I'm looking at it; it's sitting right over here. I pay what I call the Peloton indulgence. You know, like in the Catholic Church: you sinned, and then you paid for indulgences. So I'm still paying 47 bucks a month to Peloton to cover my sins.

I think there's something about the fact that Peloton worked so well that is also keeping me from re-engaging with it, if that makes sense. Meaning, I know that if I start, I'm gonna end up going all in; I'm gonna become a different person, the way I became a different person, temporarily, using it. There's something mental that wants to just put off being a different person. Whereas something that was maybe a little less consuming might, I don't know, pull me in more. But yeah, it's sitting right there. I pay for my sins every month and hope I'll get back, I guess.

Audrey: I get that, though. Having become a runner, for example: I'm not ready to train for a marathon, not because of the training itself, but because I don't know if I'm ready to be that person.

Mike: Yeah, well, it's the same thing with quitting smoking. I smoked for so long. And yeah, part of the issue was that it's hard to quit because nicotine's really addictive. But part of the issue was, okay, if I really apply myself to this, I'm gonna become a different person. Am I ready to be that different person?

You spend a lot of time when you're a smoker hating nonsmokers, and then you've got to become a nonsmoker, so it's a little bit of a difficult identity transition there. So there is something about that. I do think the most effective things for health tend to be the ones where you actually conceptualize your life a little differently. And the thing is, there's more than you can fit into a life. You like a lot of things about yourself, and bringing some other stuff in means you're gonna push some other stuff out.


Audrey: Exactly. Wow, thank you so much for this, Mike. It's been great to talk to you.

Thank you for reading and listening to Second Breakfast. I'll have the next episode out in a week or so. Please consider becoming a paid subscriber, so that maybe I can hire a person who actually knows how to make a podcast to help me with this.