How Scientific is Scientific Polarization?

As Joe Biden cleared 270 last week, some people remarked on how different the narrative would’ve been had the votes been counted in a different order.

The idea that order shouldn’t affect your final take is a classic criterion of rationality, sometimes called commutativity: whatever order the evidence comes in, your final opinion should be the same so long as the total evidence is the same in the end.
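To see why strict conditionalization satisfies this criterion, write $P_E(\cdot)$ for the result of updating $P$ on $E$. Updating on $E_1$ and then on $E_2$ amounts to conditioning on their conjunction, which doesn’t care about order:

$$
P_{E_1}(H \mid E_2) \;=\; \frac{P(H \wedge E_1 \wedge E_2)}{P(E_1 \wedge E_2)} \;=\; P_{E_2}(H \mid E_1).
$$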

This post is about how O’Connor & Weatherall’s model of “scientific polarization” runs afoul of this constraint. In their model, divergent opinions arise from a shared body of evidence, despite everyone involved being rational. Just how rational is the question we’ll take up here.

The Model

Here’s the quick version of O’Connor & Weatherall’s model. You can find the gory details in my previous post, but we won’t need them here.

A community of medical doctors is faced with a novel treatment for some condition. Currently, patients with this condition have a .5 chance of recovering. The new treatment either increases that chance or decreases it. In actual fact it increases the chance of recovery, but our doctors don’t know that yet.

Some doctors start out more skeptical of the new treatment, others more optimistic. Those with credence > .5 try the new treatment on their patients, and share the results with the others. Everybody then updates their credence in the new treatment. The cycle of experimentation, sharing, and updating then repeats.
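To make the updating step concrete, here’s a minimal sketch in Python. The specific numbers are my own choices for illustration, not O’Connor & Weatherall’s: I’ll suppose the treatment’s recovery chance is .6 if it’s better and .4 if it’s worse, and that a report is a count of recoveries among a colleague’s patients.

```python
EPS = 0.1  # hypothetical effect size: recovery chance is 0.5 +/- EPS

def bayes_update(credence, successes, trials):
    """Fully-trusting Bayesian update on a report of `successes`
    recoveries among `trials` patients given the new treatment."""
    p_better, p_worse = 0.5 + EPS, 0.5 - EPS
    k, n = successes, trials
    # Likelihood of the report under each hypothesis (the binomial
    # coefficient is common to both, so it cancels and is omitted).
    like_better = p_better ** k * (1 - p_better) ** (n - k)
    like_worse = p_worse ** k * (1 - p_worse) ** (n - k)
    numer = credence * like_better
    return numer / (numer + (1 - credence) * like_worse)
```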

Crucially though, our doctors don’t fully trust one another’s results. If a doctor has a very different opinion about the new treatment than her colleague, she won’t fully trust that colleague’s data. She may even discount them entirely, if their credences differ enough.

As things develop, this medical community is apt to split. Some doctors learn the truth about the new treatment’s superiority, while others remain skeptical, even coming to completely disregard the results reported by their colleagues. This won’t always happen, but it’s the likely outcome given certain assumptions. Crucial for us here: doctors must entirely discount one another’s data once their credences differ enough, by .5 let’s say, just for concreteness.
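In code, a discounting rule in this spirit might look as follows. The linear trust function is one simple choice, not necessarily the model’s exact one (the gory details are in the previous post); with m = 2, trust bottoms out at precisely the .5 gap.

```python
def trusted_update(credence, colleague_credence, successes, trials, m=2):
    """Jeffrey-style update that discounts a colleague's report in
    proportion to the difference of opinion."""
    d = abs(credence - colleague_credence)
    trust = max(0.0, 1 - m * d)  # full trust at d = 0, none at d >= 1/m
    full = bayes_update(credence, successes, trials)
    # Move toward the fully-trusting posterior in proportion to trust;
    # a wholly untrusted report leaves the credence unchanged.
    return trust * full + (1 - trust) * credence
```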

The Problem

This way of evaluating evidence depends on the order in which the evidence arrives. Here’s an extreme example to make the point vivid.

Suppose Dr. Hibbert has credence .501 in the new treatment’s benefits, and his colleagues Nick and Zoidberg are both at 1.0. Nick and Zoidberg each have a report to share with Hibbert, containing bad news about the new treatment. Nick found that it failed in all but 1 of his 10 patients, while Zoidberg found that it failed in all 10 of his. Whose report should Dr. Hibbert update on first?

If he listens to Nick first, he’ll fall below .5 and ignore Zoidberg’s report as a result. His difference of opinion with Zoidberg will be so large that Hibbert will come to discount him entirely. But if he listens to Zoidberg first, he’ll ignore Nick then, for the same reason.

So Hibbert can only really listen to one of them. And since their reports differ, he’ll end up with a different credence depending on who he listens to. Zoidberg’s report is slightly more discouraging, so Hibbert will end up more skeptical of the new treatment if he listens to Zoidberg first than if he listens to Nick first.
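We can check this with the sketch from above. Since the story has Hibbert taking the first report at face value and ignoring the second outright, I’ll use an all-or-nothing version of the trust rule: full trust below the .5 gap, none at or above it.

```python
def cutoff_update(credence, colleague_credence, successes, trials):
    """All-or-nothing variant: fully trust the report if credences
    differ by less than 0.5, ignore it entirely otherwise."""
    if abs(credence - colleague_credence) >= 0.5:
        return credence  # report ignored outright
    return bayes_update(credence, successes, trials)

hibbert, nick, zoidberg = 0.501, 1.0, 1.0

# Nick first: his report moves Hibbert to ~.038, after which
# Zoidberg (still at 1.0) is more than .5 away and gets ignored.
c = cutoff_update(hibbert, nick, successes=1, trials=10)
print(cutoff_update(c, zoidberg, successes=0, trials=10))  # ~0.038

# Zoidberg first: now it's Nick who gets ignored.
c = cutoff_update(hibbert, zoidberg, successes=0, trials=10)
print(cutoff_update(c, nick, successes=1, trials=10))      # ~0.017
```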

Can It Be Fixed?

This problem isn’t an artifact of the particulars of O’Connor & Weatherall’s model. It’s in the nature of the project: any polarization model of the same broad kind must have the same bug.

Polarization happens because skeptical agents come to ignore their optimistic colleagues at some point. Otherwise, skeptics would eventually be drawn to the truth. As long as they’re still willing to give some credence to the experimental results, they’ll eventually see that those results favour optimism about the new treatment.

But even if our agents never ignored one another completely, we’d still have this problem. Suppose all three of our characters start with the same credence, and that Nick has one success to report while Zoidberg has one failure. Intuitively, once Hibbert hears them both out, he should end up right back where he started, no matter who he listens to first.

But if he listens to Nick first, his credence will move away from Zoidberg’s. So when he gets to Zoidberg’s report, it’ll carry less weight than Nick’s did, and he’ll end up more confident than he started. In the reverse order, he’ll end up less confident.
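The graded trust rule from earlier shows this directly. Say all three start at .7 (again my number, purely for illustration), Nick reports one success in one patient, and Zoidberg one failure:

```python
start = 0.7  # hypothetical shared starting credence

# Nick first: full trust moves Hibbert to ~.778, so Zoidberg's report
# is then discounted, and Hibbert ends up above where he started.
c = trusted_update(start, 0.7, successes=1, trials=1)
print(trusted_update(c, 0.7, successes=0, trials=1))  # ~0.712

# Zoidberg first: the mirror image, ending below the starting point.
c = trusted_update(start, 0.7, successes=0, trials=1)
print(trusted_update(c, 0.7, successes=1, trials=1))  # ~0.683
```

Notice that the fully-trusting updates would cancel exactly, since one success and one failure are equal and opposite pieces of evidence here. It’s the shrunken trust weight on the second report that breaks the symmetry.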

Does It Matter?

It seems like polarization can’t be fully scientific if it’s driven by mistrust based on difference of opinion. But that doesn’t make the model worthless, or even uninteresting. O’Connor & Weatherall are already clear that their agents aren’t meant to be “rational with a capital ‘R’” anyway.

Quite plausibly, real people behave something like the agents in this model a lot of the time. The model might be capturing a very real phenomenon, even if it’s an irrational one. We just have to take the “scientific” in “scientific polarization” with the right amount of salt.
