The classic “Lockean” thesis about full and partial belief says full belief is rational iff strong partial belief is rational. Hannes Leitgeb’s “Humean” thesis proposes a subtler connection. $ \newcommand\p{Pr} \newcommand{\B}{\mathbf{B}} \newcommand{\given}{\mid} $
- **The Humean Thesis.** For a rational agent whose full beliefs are given by the set $\mathbf{B}$, and whose credences by the probability function $\p$: $B \in \mathbf{B}$ iff $\p(B \given A) > t$ for all $A$ consistent with $\mathbf{B}$.
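Here's a quick sanity check of the thesis in Python, in a toy finite model of my own (three worlds, propositions as sets of worlds; nothing here is Leitgeb's formalism, and the numbers are made up):

```python
from itertools import chain, combinations

def powerset(worlds):
    """All propositions over a finite set of worlds, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(worlds, r) for r in range(len(worlds) + 1))]

def humean(beliefs, P, t):
    """True iff: B is believed <=> Pr(B | A) > t for every A consistent with the beliefs."""
    worlds = frozenset(P)
    pr = lambda X: sum(P[w] for w in X)
    strongest = frozenset.intersection(worlds, *beliefs)  # conjunction of all beliefs
    # candidate defeaters: positive-probability propositions consistent with the belief set
    As = [A for A in powerset(worlds) if A & strongest and pr(A) > 0]
    return all((B in beliefs) == all(pr(B & A) / pr(A) > t for A in As)
               for B in powerset(worlds))

# Believing "not world 2" (plus its consequence, the tautology) passes at t = 0.75.
P = {0: 0.5, 1: 0.4, 2: 0.1}
beliefs = {frozenset({0, 1}), frozenset({0, 1, 2})}
print(humean(beliefs, P, t=0.75))  # True
```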
Notice that we can instead think of this as a coherentist theory of justification. Suppose we replace credence with “evidential” probability (think: Carnap, Williamson). Then we get a theory of justification where beliefs aren’t justified in isolation. It’s not enough for a belief to be highly probable in its own right; it has to be part of a larger body that underwrites that high probability.
Flipping things around, the coherentist theory of justification from my last wacky post doubles as an even wackier theory of full belief. The Humean view is roughly that a belief is justified iff its fellows secure its high probability. Now the “Super-Humean” view says a belief is justified to the extent its fellows secure its high centrality.
(Last time we explored one fun way of measuring centrality, drawing on coherentism for inspiration, and network theory for the math. But network theory offers many other ways of measuring centrality, which could be slotted in here to provide alternative theories of full and partial belief.)
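For instance, here's a hypothetical two-belief graph in Python, with stock measures from the networkx library swapped in (the weights are made up):

```python
import networkx as nx

# Arrows run from supporting beliefs to the beliefs they support,
# weighted by conditional probabilities; TOP plays the role of the tautology.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("B1", "B2", 0.9), ("B2", "B1", 0.8),
    ("B1", "TOP", 1.0), ("B2", "TOP", 1.0),
])

print(nx.pagerank(G, weight="weight"))  # the "Google" measure from last time
print(nx.closeness_centrality(G))       # one off-the-shelf alternative
print(nx.degree_centrality(G))          # another
```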
Like Leitgeb’s Humean view, the Super-Humean view has a holistic character. Instead of evaluating full beliefs just by looking at your credences, we also have to look at what else you believe.
Another parallel: both theories have a permissive quality. Leitgeb presents examples where more than one set $\B$ fits with a given credence function $\p$, on the Humean view. And the same will be true on the Super-Humean view.[^1]
But there are interesting differences. We can evaluate beliefs individually on the Super-Humean account, even though our method of evaluation is holistic. True, a belief’s justification depends on what else you believe. But your beliefs don’t all stand or fall together; some can come out justified even though others come out unjustified.
Strictly speaking, some beliefs come out highly justified even though others come out hardly justified. That's because evaluations are graded on the Super-Humean view, differing again from the Humean view: each belief is assigned a degree of justification.
One nice thing about the Super-Humean view, then, is that it allows for “non-ideal” theorizing. We can study non-ideal agents, and discern more justified beliefs from lesser ones.
“But does it handle the lottery and preface paradoxes?” is the question we always ask about a theory of full belief. As is so often the case, the answer is “yes, but…”.
Consider a lottery of $100$ tickets with one to be selected at random as the winner. If you believe of each ticket that it will lose, we have a network of $101$ nodes: $L_1$ through $L_{100}$, plus the tautology node $\top$. How strong are the connections between these nodes? Assuming we take $L_3$ through $L_{100}$ as givens in determining the weight of the $L_2 \rightarrow L_1$ arrow, it gets weight $0$ since $$\p(L_1 \given L_2 \wedge L_3 \wedge \ldots \wedge L_{100}) = 0.$$ And likewise for all the other arrows,[^2] except those pointing to the $\top$ node (they always get weight $1$). All the $L_i$ beliefs thus come out with rock-bottom justification compared to $\top$, i.e. you aren’t justified in believing these lottery propositions.
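In code, for those who like to check the arithmetic (the uniform $100$-ticket model, with worlds as possible winners):

```python
from fractions import Fraction

# Worlds are the possible winners 1..100; L_i ("ticket i loses")
# holds at every world except world i.
worlds = range(1, 101)
P = {w: Fraction(1, 100) for w in worlds}
L = {i: frozenset(w for w in worlds if w != i) for i in worlds}

# Weight of the L_2 -> L_1 arrow, taking L_3 through L_100 as given:
given = frozenset.intersection(*(L[i] for i in range(2, 101)))  # just world 1
weight = sum(P[w] for w in L[1] & given) / sum(P[w] for w in given)
print(weight)  # 0: conditional on every other ticket losing, ticket 1 must win
```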
Contrast that with a preface case, where you believe each of $100$ claims you’ve researched, $C_1$ through $C_{100}$. These claims are positively correlated though, or at least independent.[^3] So $$\p(C_1 \given C_2 \wedge C_3 \wedge \ldots \wedge C_{100}) \approx 1,$$ and likewise for the other $C_i$. The belief-graph here is thus tightly connected, and the $C_i$ nodes will score high on centrality compared to $\top$. So you’re highly justified in your beliefs in the preface case.
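Here's a sketch with made-up numbers, where a hidden "reliable researcher" variable induces the positive correlation ($10$ claims instead of $100$, to keep the state space small):

```python
from itertools import product

N = 10  # number of claims

def pr(event):
    """Pr of an event (a predicate on truth-value patterns for the N claims),
    mixing over a hidden reliability variable that correlates the claims."""
    total = 0.0
    for prior, p_claim in [(0.9, 0.99), (0.1, 0.7)]:  # reliable vs. not
        for pattern in product([True, False], repeat=N):
            if event(pattern):
                w = prior
                for true in pattern:
                    w *= p_claim if true else 1 - p_claim
                total += w
    return total

rest = lambda p: all(p[1:])  # C_2 and ... and C_N
print(pr(lambda p: p[0] and rest(p)) / pr(rest))  # ~0.99: the C_2 -> C_1 weight
print(pr(lambda p: p[0]))                         # ~0.96: C_1's unconditional probability
```

Conditioning on the other claims actually raises $C_1$'s probability here, since their truth testifies to your reliability. So the preface arrows stay strong.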
So far so good, at least if you think—as I tend to—that lottery beliefs should come out unjustified, while preface beliefs should come out justified. What’s the “but…” then? I see two issues (at least).
First, we had to assume that all your remaining beliefs are taken as given in assessing the weight of a connection like $L_2 \rightarrow L_1$. That worked out well here. But it doesn’t always have such good results, as Juan Comesaña noted about our treatment of the Tweety case last time.
We could go all the way to the other extreme, of course, and just evaluate the $L_2 \rightarrow L_1$ connection in isolation by looking at $\p(L_1 \given L_2)$. But that seems too extreme, since it means ignoring the agent’s other beliefs altogether.
What we want is something in between, it seems. We want the agent’s other beliefs to “get in the way” enough that they substantially weaken the connections in the lottery graph. But we don’t want them to be taken entirely for granted. Exactly how to achieve the right balance here is something I’m not sure about.
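Just to make the problem concrete, here's one naive in-between rule, which I'm not endorsing: average the conditional probability over how much of the background we hold fixed. In the lottery, holding $k$ of the other $98$ beliefs fixed gives the $L_2 \rightarrow L_1$ arrow strength $(98-k)/(99-k)$, so:

```python
from statistics import mean

# Average the L_2 -> L_1 strength over k = 0..98 background beliefs held fixed.
weights = [(98 - k) / (99 - k) for k in range(99)]
print(mean(weights))  # ~0.95
```

That illustrates the difficulty: unless we lean heavily toward holding (almost) everything fixed, the lottery arrows barely weaken.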
Second issue: what if you only adopt a few lottery beliefs, just $L_1$ and $L_2$ for example? Then we can’t exploit the “collective defeat” that drove our treatment of the lottery.
You might respond that this is a fine result, since isolated lottery beliefs are actually justified. It’s only when you apply the same logic to all the tickets that your justification is undercut. But I find this unsatisfying.
Maybe a student encountering the paradox for the first time is justified in believing their ticket will lose. But it should be enough to defeat that justification that they merely realize they could believe the same thing about all the other tickets, for identical reasons. Even if they don’t go on to form those beliefs, they should drop the one belief they had about their own ticket.
This is one way in which Leitgeb’s Humean theory seems superior to me. On the Humean view, which beliefs are rational depends on how the space of possibilities is partitioned (see Leitgeb 2014). And the partition is determined by the context—how the subject frames the situation in their mind. (At least, that’s how I understand Leitgeb here.) So just realizing the symmetry of the lottery paradox is enough to defeat justification, on the Humean view.
A third issue: imagine we’ll flip a coin of unknown bias $10$ times, and suppose the probabilities obey Laplace’s Rule of Succession (a.k.a. Carnap’s $\mathfrak{m}^*$ confirmation function). Then, if you believe each flip will land heads, your beliefs will all come out highly justified, i.e. highly central in your web of $10$ beliefs. But they’d have the same justification if you instead believed each flip will land tails.
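To check the arithmetic: with a uniform prior on the bias, $\p(\text{first } n \text{ flips all land heads}) = 1/(n+1)$, so every arrow in the heads-graph is strong.

```python
from fractions import Fraction

def pr_all_heads(n):
    """Pr(first n flips all land heads), with a uniform prior on the coin's bias."""
    return Fraction(1, n + 1)

# Weight of the H_2 -> H_1 arrow, taking H_3 through H_10 as given:
print(pr_all_heads(10) / pr_all_heads(9))  # 10/11, about 0.91
# By symmetry, the all-tails graph gets exactly the same weights.
```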
Permissivism aside, this might seem a pretty bad result on its own. Even if our theory fixed which way you should go, say heads instead of tails, that would be pretty weird. Shouldn’t you wait for at least a few flips before forming any such beliefs?
The problem is that we haven’t required your beliefs to be inherently probable, only that they render one another probable. The Lockean and Humean theories have such a threshold requirement built in, but we can build it into our theory too. We can just stipulate that a full belief should be highly probable, as well as highly central in the network of all your beliefs.
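As a sketch (the thresholds here are placeholders to be stipulated, not anything the theory fixes):

```python
def justified(B, prob, centrality, t=0.9, c=0.5):
    """A full belief must clear two bars: high probability and high centrality."""
    return prob[B] > t and centrality[B] > c
```

In the coin example this does the trick: each heads-belief has probability $1/2$, so it fails the probability bar no matter how central it is.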
[^2]: More carefully, each arrow gets the minimum possible weight. If we use the “Google hack” from last time, this is some small positive number $\epsilon$ instead of $0$.

[^3]: Notice we’re borrowing the crux of Pollock’s (1994) classic treatment of the lottery and preface paradoxes. We’re just plugging his observation into a different formal framework.