Echo chambers as trust manipulators

My analysis of echo chambers as trust-manipulators is now available in two different versions! First, there was Escaping the Echo Chamber, the short version written for a general audience. And now, fresh off the presses, there’s Echo Chambers and Epistemic Bubbles, the long scholarly version written for philosophers and social scientists, full of citations and more careful versions of all the arguments.

In Escaping the Echo Chamber (published in Aeon Magazine), I claim that the whole discussion about this stuff has been confusing two very different social phenomena. An epistemic bubble is a structure that limits what you see. When all your friends on Facebook share your politics, and you don’t get exposed to the other side’s arguments, that’s just a bubble. An echo chamber, on the other hand, is a structure that manipulates trust. Members of echo chambers are taught to distrust everybody on the outside. An echo chamber functions more like a cult. It isolates its members, not by restricting their access to the outside world, but by alienating them from it.

In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined.

This is crucial, because you have to know the disease to pick the right cure. Epistemic bubbles can be broken by simple exposure. But echo chambers cannot: their members have been prepared to resist exposure to outside evidence, and their trust in insiders has been radically inflated.

Crucially, this thing that people are calling “post-truth” – where people just ignore the outside evidence? Epistemic bubbles can’t explain that. Only echo chamber effects can explain it. And if that’s what’s actually going on, then the solution isn’t just to wave “the evidence” or “the facts” in an echo chamber member’s face. They’ve been given a basis for rejecting such outside evidence as corrupted, malignant. The only way to fix an echo chamber is by repairing the broken trust at its root.
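
To make the difference concrete, here’s a minimal toy model of belief updating. This is entirely my own illustration, not anything from either paper: a believer is a number that gets nudged toward each incoming signal, weighted by how much the believer trusts the signal’s source. Exposure repairs the bubble, because trust is intact; it does nothing to the echo chamber, because trust in outsiders has been zeroed out.

```python
# A toy model (my own sketch, not from the papers): belief updating with
# per-source trust weights. The truth is at 1.0; the group's view is at 0.1.

def update(belief, signals, trust, lr=0.5):
    """Nudge belief toward each signal, weighted by trust in its source."""
    for source, value in signals:
        belief += lr * trust[source] * (value - belief)
    return belief

truth, group_view = 1.0, 0.1
inside = [("inside", group_view)] * 3   # insiders echo the group's view back
outside = [("outside", truth)]          # an outside voice reports the truth

# Epistemic bubble: outside voices are simply absent, but trust in them is
# intact, so a single exposure moves the believer toward the truth.
bubble = group_view
for _ in range(20):
    bubble = update(bubble, inside, {"inside": 1.0, "outside": 1.0})
bubble = update(bubble, outside, {"inside": 1.0, "outside": 1.0})

# Echo chamber: outside voices are heard all along, but trust in them has
# been destroyed, so exposure changes nothing.
chamber = group_view
for _ in range(20):
    chamber = update(chamber, inside + outside,
                     {"inside": 1.0, "outside": 0.0})

print(f"bubble after exposure:  {bubble:.2f}")   # 0.55, moving toward 1.0
print(f"chamber after exposure: {chamber:.2f}")  # 0.10, stuck where it began
```

The point of the toy model is just this: “more exposure” only helps when the trust weights are healthy. The echo chamber’s damage lives in the weights, not in the feed.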

In Echo Chambers and Epistemic Bubbles (published in Episteme), I offer extended versions of all of the above arguments. This is the scholarly director’s cut. The definitions are more carefully fleshed out (and, admittedly, much longer and uglier and less memorable). The arguments are laid out in more detail, with citations. There’s also an extended discussion of the social science literature, where I point out a lot of places where people have conflated these two concepts. I target a number of recent papers that claim to have disproved the existence of echo chambers and epistemic bubbles, and point out that they’ve studied only exposure, not distrust. Finally, there’s a much longer discussion of who’s responsible for the beliefs of echo chamber members. I take on Quassim Cassam’s story about epistemic vice and laziness in conspiracy theorists. His view is, basically, that conspiracy theorists are just lazy and corrupt. I argue the opposite: the echo-chamber story shows how such a person could be blameless, because they were caught in a bad social network.

If you’re really interested in going all the way down the rabbit-hole, my analysis here is based on some earlier work. In Cognitive Islands and Runaway Echo Chambers, I analyze those domains where you need the help of experts, but you can only find experts by exercising your own abilities. This opens the door to a harmful sort of runaway bootstrapping, where people with bad beliefs use them to pick bad experts, and this only compounds their error. In Expertise and the Fragmentation of Intellectual Autonomy, I lay out the case for why we have to trust in experts, and why perfect intellectual autonomy is no longer possible, given the massive sprawl of scientific knowledge.

Cognitive islands and runaway echo chambers

My new paper, Cognitive Islands and Runaway Echo Chambers, is out in Synthese. (For those without institutional access, here’s the pre-print for free.)

What it’s about, in a nutshell: In some areas of intellectual life, you need to already be an expert to find the other experts. This opens a door to a horrible possibility: if you misunderstand things and use that misunderstanding to pick out who you trust, then that trust will simply compound your misunderstanding. Morally flawed people will pick morally flawed advisors and gurus, and bootstrap themselves into being worse people. But we have to trust. So we might just be screwed.

The long version: For some kinds of experts, there’s an easy test: you can tell a good mechanic because they can fix your car. You don’t need to know anything about cars yourself to sort the real mechanics from the posers. Call these the obvious cognitive domains: a total novice has some hope of figuring out who the right advisors and teachers are. But in other cognitive domains, you already have to be an expert to recognize the experts, and no other kind of expertise will do: you need to share an expert’s expertise to recognize them as the real thing. Call these cognitive islands. On a cognitive island, the novice has no idea who to trust. Plausible candidates for cognitive islands include the moral and the aesthetic domains, and maybe even philosophy and economics.

Some people think that being on a cognitive island makes it impossible to use experts: only novices need the help of experts, and on an island, novices have no way to find them. I think this kind of pessimism is wrong, and if we look at how we actually trust each other, we’ll see why. All the time, we use our own expertise to find other experts who can fill in our gaps: our blind spots, our biases. We need others to help us triangulate, to tell when we’re reasoning well and when we’ve made mistakes.

The real problem for cognitive islands isn’t that we can’t use other experts at all. It’s that there’s no safety net. If your own understanding is flawed, there’s no test for that. If other experts are flawed, there’s no independent check. You can only figure out who to trust by applying your own abilities. But we have to trust each other – we have to use each other to corroborate and check on our own thinking. And this means that if you’re deeply flawed, those flaws will simply compound themselves through expert selection. KKK members will pick racist advisors, who will corroborate their racism.

This leads to a kind of epistemic trap, which I call runaway personal echo chambers. On a cognitive island, the only way to figure out who to trust is to use your own abilities. So if you start out with deep problems in your understanding, you’ll just bootstrap yourself into something worse. And it doesn’t seem like there’s any way out.
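
A tiny simulation makes the trap concrete. This is purely my own sketch, not a model from the paper; the advisor positions, the similarity-based trust rule, and all the numbers are invented for illustration. The agent can only weight advisors by how close they are to its own current view, so a small initial error washes out, while a large one recruits exactly the advisors who will make it worse.

```python
import math

# Toy simulation (my own sketch, not the paper's model): on a cognitive
# island, an agent can only weight advisors by similarity to its own view.
# The truth sits at 0.0; most advisors are roughly right, a few are way off.

advisors = [-0.2, -0.1, 0.0, 0.1, 0.2, 1.4, 1.5, 1.6]  # last three: bad gurus

def step(belief, temp=0.15, lr=0.5):
    """Trust each advisor by closeness to one's own view, then move toward
    the trust-weighted average of their positions."""
    weights = [math.exp(-abs(a - belief) / temp) for a in advisors]
    target = sum(w * a for w, a in zip(weights, advisors)) / sum(weights)
    return belief + lr * (target - belief)

for start in (0.3, 0.9):  # a mildly flawed agent vs. a deeply flawed one
    b = start
    for _ in range(25):
        b = step(b)
    print(f"start={start:.1f} -> settles at {b:.2f}")
# The mild flaw washes out (settles near 0.0); the deep flaw compounds
# (settles near 1.5), because the flawed view selected the flawed advisors.
```

The asymmetry is the whole point: there’s no safety net in the model, because the same flawed view that needs correcting is doing the advisor selection.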