by Brian Tomasik
First written: 2 May 2009; last update: 19 Jan. 2015
What kinds of "epistemological black swans" might our circle of thinkers be missing? How can we tease apart common mistakes in popular beliefs versus things that ordinary people actually are seeing correctly and that analytical people in my group are missing? It may be that improving society's wisdom in general is a robust approach in the face of this quandary.
As Robert Burton's book On Being Certain argues, feeling 99.9% sure of something is not necessarily reason to assign it 99.9% probability, and a number of studies bear this out. When you ask people to give 90% subjective confidence intervals for some quantity, the true value falls outside the interval about 50% of the time.
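To see how intervals that feel like 90% intervals can cover the truth only half the time, here is a minimal simulation. The numbers (a standard-normal quantity, an overconfidence factor of 2.5 that shrinks the interval) are illustrative assumptions, not figures from the studies:

```python
import random

random.seed(0)

def coverage(n_trials=10_000, overconfidence=2.5):
    """Simulate a forecaster whose '90% intervals' are too narrow.

    The true quantity is a standard normal draw; a calibrated 90%
    interval would be roughly +/-1.645, but the forecaster shrinks
    the half-width by the (assumed) overconfidence factor.
    """
    half_width = 1.645 / overconfidence
    hits = sum(
        1 for _ in range(n_trials)
        if -half_width <= random.gauss(0, 1) <= half_width
    )
    return hits / n_trials

print(coverage())  # far below the claimed 90% -- roughly 50% here
```

With an overconfidence factor of 1 (honestly calibrated intervals), the same function returns about 0.90.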
It's easy to underestimate the likelihood of rare events, especially those that fall outside your conventional framework of thinking. Nassim Taleb calls such events "black swans"; they embody meta-level uncertainty about whether your model itself might be wrong.
It can be helpful to look at a situation from multiple levels of abstraction. For instance, there are many times when I've written a computer program that I'm sure will run correctly: I've checked and double-checked every line of code and found them completely free of errors. But when I run the program, it crashes, and I have no idea why.
Brains can fail
I am a primate running patchwork cognitive algorithms on relatively fragile wetware. We know that brains fail at relatively high rates: about 19% of the US population has a mental illness of some sort, with a small fraction of these cases involving serious insanity or delusion. In addition, some people simply lack certain normal abilities, such as the roughly 7% of males who are colorblind.
I and many of my associates have extraordinarily strange beliefs. Many of these are weird facts -- e.g., that an exact copy of me exists within a radius of 10^10^29 meters. But others are logical conclusions (e.g., that libertarian free will is incoherent) and methodological notions (e.g., that Occam's razor makes the parallelism solution to the mind-body problem astronomically improbable). These latter kinds of beliefs theoretically involve certainty or near certainty.
But given my understanding of the frailty of human beliefs in general -- to say nothing of the tempting possibility that correct knowledge is out of the question, or that all of these statements are entirely meaningless -- should I assign nonzero probability to the possibility that I'm wrong about these conceptual matters?
Taking assumptions as given?
One answer is to say "no": We all start with assumptions, and I'm making the assumptions that I'm making. This is one possible attitude toward things like Bayes' theorem and Occam's razor. In the same way that my impulse to prevent suffering is ultimately something that I want to do, "just because," so my faith in math and Bayesian epistemology could be simply something the collection of atoms in my brain has chosen to have, and that's that. (I wonder: Is there any sense in which it would be possible to assign probability less than 1 to the Bayesian framework itself? Prima facie, this would be simply incoherent.)
But what about other, less foundational conclusions, like the incoherence of libertarian free will? It's not obvious to me that the negation of this conclusion would contradict my epistemological framework, since my position on the issue may stem from lack of imagination (I can't conceive of anything other than determinism or random behavior) rather than clear logical contradiction. On this point itself I'm uncertain -- maybe libertarian free will is logically impossible. But I'm not smart enough to be sure. And even if I felt sure, I very well might be mistaken, or even -- as suggested in the first paragraph -- completely insane.
Probabilities over logical reasoning
Can probability be used to capture uncertainties of this type? In practice, the answer is clearly yes. When I do math homework problems, my probability of making an algebra mistake is not only nonzero but fairly high. And it's not incoherent to reason about errors of this type. For instance, if I do a utility calculation involving a complex algebraic formula, I may be uncertain as to whether I've made a sign error, in which case the answer would be negated. It's perfectly reasonable for me to assign, say, 90% probability to having done the computation correctly and 10% to having made the sign error and then multiply these by their corresponding utility-values-if-correct-computation. There's no mystery here: I'm just assigning probabilities over the conceptually unproblematic hypotheses "Brian got the right answer" vs. "Brian made a sign error."
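The sign-error reasoning above is just expected utility over the two hypotheses. A tiny sketch, with the 90%/10% split and a utility of 100 as illustrative numbers:

```python
def expected_utility(p_correct, utility_if_correct):
    """Expected utility when the only plausible error is a sign flip.

    With probability p_correct the computed utility is right; with the
    remaining probability a sign error negated it.
    """
    p_sign_error = 1 - p_correct
    return (p_correct * utility_if_correct
            + p_sign_error * (-utility_if_correct))

# 90% chance the algebra was right, 10% chance of a sign error:
print(expected_utility(0.9, 100.0))  # approximately 80.0
```

Note how a mere 10% chance of a sign error cuts the expected utility by 20%, since the error doesn't just shrink the answer but flips it.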
In practice, of course, it's rarely useful to apply this sort of reasoning, because the number of wrong math answers is, needless to say, infinite. (Still, it might be useful to study the distribution of correct and incorrect answers that occur in practice. This reminds me of the suggestion by a friend that mathematicians might study the rates at which conjectures of certain types turn out to be true, in order to better estimate probabilities of theorems they can't actually prove. Indeed, statistical techniques have been used within the domain of automated theorem proving.) When someone objects to a rationalist's conclusion about such and such on the grounds that "Your cognitive algorithm might be flawed," the rationalist can usually reply, "Well, maybe, sure. But what am I going to do about it? Which element of the huge space of alternatives am I going to pick instead?"
Perhaps one answer to that question could be "Beliefs that fellow humans, running their own cognitive algorithms, have arrived at." After all, those people are primates trying to make sense of their environment just as you are, and it isn't inconceivable that you're wrong and they're actually right. This suggests some degree of philosophical majoritarianism. Obviously we need to weight different people's beliefs by the probability that their cognitive algorithms are sound, but we should remember that those weights are themselves assigned by our own, possibly flawed, cognitive algorithms -- a residual circularity.
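One simple way to formalize this kind of weighted majoritarianism is a linear opinion pool: average everyone's probability for a claim, weighted by your trust in their cognitive algorithms. The specific probabilities and weights below are made-up illustrations:

```python
def pooled_probability(judgments):
    """Linear opinion pool over peers' probability judgments.

    `judgments` is a list of (probability, weight) pairs, where each
    weight encodes trust in that person's cognitive algorithm. Note
    the circularity flagged in the text: the weights are themselves
    produced by our own, possibly flawed, cognition.
    """
    total_weight = sum(w for _, w in judgments)
    return sum(p * w for p, w in judgments) / total_weight

# My own estimate (0.95, weighted heavily) pooled with two dissenters:
print(pooled_probability([(0.95, 0.6), (0.2, 0.2), (0.5, 0.2)]))
```

Even with most of the weight on my own judgment, the dissenters pull the pooled probability noticeably below my private 0.95.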
Beliefs as evidence
For most beliefs, it seems we can incorporate peer disagreement just by treating it as Bayesian evidence. That other people believe something is a fact about the world that our hypotheses need to explain. In many cases the simplest explanation of why people believe X is that they correctly saw or deduced that X was the case. For instance, a likely hypothesis for why your husband said your wallet was on the kitchen table is that your wallet is indeed on the kitchen table. In other cases, people's beliefs are wrong, but the theory gives an account of why -- such as that people believe they were abducted by aliens because of sleep paralysis, or that we perceive only three dimensions of space and one of time because our universe's other seven dimensions are so small.
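The wallet example is an ordinary Bayesian update: the husband's report is evidence whose strength depends on how reliably he reports correctly. The numbers below (a 50% prior, a 90% true-positive rate, a 5% false-positive rate) are hypothetical:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """P(wallet on table | husband says so), by Bayes' theorem."""
    numerator = prior * p_report_if_true
    denominator = numerator + (1 - prior) * p_report_if_false
    return numerator / denominator

# Hypothetical numbers: wallet is on the table half the time a priori;
# he reports it there 90% of the time when it is, 5% when it isn't.
print(posterior(0.5, 0.9, 0.05))  # approximately 0.947
```

A mostly reliable report lifts a 50/50 prior to roughly 95% confidence, which is why "he correctly saw it" is usually the best explanation of his belief.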
One can imagine inventing a crazy theory and then adding the stipulation: "If this theory is true, you won't be able to tell that it is, and in fact, most people will think the theory is crazy." Given this specification, the crazy theory does predict well the evidence we observe -- including the evidence derived from other people's non-belief in the theory. However, unless the theory itself offers a natural account of why we don't believe the theory, adding a stipulation that people will tend to think the theory is crazy even when it's true should penalize the prior probability of the theory due to its greater complexity and arbitrariness.
When it comes to foundational questions about our epistemological framework itself, including whether to use Bayesian reasoning at all, we can't just incorporate other people's beliefs as Bayesian evidence. In such cases, we might adopt a rougher heuristic to give some weight to the belief frameworks of others without assuming full Bayesian probability when doing so.
Ultimate epistemological justification?
Ultimately I think skepticism is right in a certain sense: We can't be certain of anything, even logical truths. I can't see how to ultimately justify principles like induction and Occam's razor. Even if we can see why they seem to work well, we need those very principles before that observation can have normative force: An inductive argument for induction is circular, and Occam's razor is needed to justify preferring the simple principle "simple theories work well" over some weirdly complex alternative.
One standard reply to the problem of induction is that it asks for the impossible: It demands a deductive justification of induction. But deduction and induction are just different modes of reasoning, so asking one to justify the other is like asking a stool to fly you to Europe. The stool can be useful even if it doesn't carry out intercontinental voyages.
This reply raises a further question: Who says deduction is the gold standard either? And how do we justify deduction itself? Ultimately we reach a point where there is no finite, non-circular justification.
In 2007, I told a friend that I couldn't ultimately justify using Bayesian probability. My friend replied: "Your brain is Bayesian. That's how brains are built." At the time I found this wholly unsatisfactory, but now I see a kind of wisdom to it. There is no airtight justification for why I use certain cognitive processes. At some point, I have to just say "that's how my brain does it" and accept that. Of course, humans can engage in meta-cognition and meta-meta-cognition, learning about cognitive biases and the nature of general intelligence. But at some point, all of this work rests on certain assumptions (like rules of inference and basic concepts about the world).
I can take comfort in the thought that evolution has optimized intelligent agents to act successfully in their environments. As another friend told me, it's nigh impossible to survive without using Occam's razor. But there remains the problem of proving that the reasoning processes produced by evolution are actually valid in an absolute sense, rather than just working well enough for organisms to manipulate their surroundings. And it's not even clear in what "absolute sense" things can be true, or whether that concept means anything beyond being a functional neural representation in human brains.
There's a sense in which everything is inductive, even the rules of logic. How do we know that modus ponens preserves truth? Rules of inference and probability are essentially hypotheses that we test by making deductions and observing the success of the conclusions they produce. They feel obviously true because of our evolutionarily and/or culturally shaped cognitive intuitions, but as we know well from other domains, unshakeable intuitions (e.g., that the geometry of space is Euclidean) can turn out to be wrong empirically. Plausibly at least some of our logical axioms are genetically hard-wired into our brains, but they were still "empirically" discovered by evolutionary experimentation.
We can and should reason further about these questions. But ultimately everything we can think is constrained by physical and logical limits on cognition. At some fundamental level or another, the justification is merely our being a certain kind of computation.