The three kinds of uncertainty

Septentrionalium Terrarum – Mercator's Atlas, pt III

There are three kinds of uncertainty – three ways you can fail to know something.

These are epistemic, aleatoric, and logical uncertainty.


Epistemic uncertainty

What have I got in my pocket?

If you could in principle know a fact, but you do not at present have the relevant information, you are epistemically uncertain about this fact. Epistemic uncertainty is often easily resolved by experiment or observation, and is in many ways the most well-behaved of the types of uncertainty.

Suppose you are playing poker against a friend, and are wondering what she has in her hand. You could find out by peeking, or, if she is feeling particularly generous, asking. The state of her hand is fixed, fully determined already, and you merely lack access to it. The world is already in a particular state, but you do not know which one.

Aleatoric uncertainty

Heads or tails?

If there is no way, even in principle, to answer a question ahead of time – if neither thinking nor experimentation will tell you – then you are in a state of aleatoric uncertainty.

Typically, “truly random” outcomes are considered to be the source of such uncertainty – dice rolls, shuffled decks, or even the observation of quantum states.

Now, suppose you have won the previous hand, and your friend is feeling confident – she offers you a bet – not only will she win the next hand, but she bets that you will have a red card in your hand. In this case, before reshuffling the deck, you are aleatorically uncertain about the colour of your to-be-drawn cards. No amount of peeking or asking will help you, for the shuffling will thoroughly mix the cards. You are uncertain, and gathering information cannot make you less so.

Logical uncertainty

What’s the forty-third digit of π?

If you already have all the information needed to determine the truth of a proposition, but the proposition is complex enough that determining it requires reasoning, you have logical uncertainty about it. This sort of uncertainty is particularly interesting, given that it sits somewhat outside the typical analysis of probability theory – it might be the case that you believe some mathematical statement A to be true with probability 90%, while you believe statement B to be unlikely, true with probability 10%. (One might find this allocation of probability to logical statements somewhat strange – I might motivate it by asking instead, “What is the chance that the forty-third digit of π is a 9?”. The answer “ten percent!” is eminently reasonable.)

These credences may be perfectly reasonable, given what you know about relevantly similar mathematical statements – maybe all the A-like statements you know of are true, and all the B-like ones are false. If, however, it turns out that A ↔ B, then in some sense you ought to equate p(A) = p(B) – but it may take you a very long time to figure out that A ↔ B!

This is the essence of logical uncertainty.
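Indeed, logical uncertainty is exactly the kind that dissolves under computation. A minimal sketch in Python, using Machin's formula with scaled-integer arithmetic (the function names here are my own, not from any library):

```python
def arctan_inv(x: int, prec: int) -> int:
    """arctan(1/x) via its Taylor series, as an integer scaled by 10**(prec + 10)."""
    one = 10 ** (prec + 10)          # 10 guard digits absorb truncation error
    total = term = one // x          # k = 0 term: 1/x
    n, x2 = 1, x * x
    while term:
        term //= x2                  # next odd power of 1/x
        n += 2
        if (n // 2) % 2:             # signs alternate: +, -, +, -, ...
            total -= term // n
        else:
            total += term // n
    return total

def pi_digits(prec: int) -> str:
    """Digits of pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return str(pi // 10 ** 10)       # drop the guard digits: "3" then the decimals

digits = pi_digits(50)
print(digits)                            # 31415926535...
print("forty-third digit:", digits[43])  # index 0 is the leading 3
```

Before running this you might assign a 10% credence to “the forty-third digit is a 9”; after running it, your credence collapses to 0 or 1 – no new fact about the world was observed, only reasoning was performed.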

Self-locating uncertainty

Who do you think you are?

I said three, but…

Suppose that you are the unwilling participant in a curious experiment – a powerful archmage decides she likes you, and elects to cast true simulacrum on you.

True Simulacrum
9th-level conjuration
Casting Time: 1 minute
Range: Touch
Components: V, S, M (5.662 × 10¹⁸ joules of energy and powdered ruby worth 1,500 gp, which are consumed by the spell)
Duration: 9.6 × 10²⁹ years


You shape the universal wave-function so as to create a factor-duplicate of a creature.

The archmage enjoys messing with you, and so she blindfolds you ahead of time. You hear her speak – “Dulaku oerin oerin sedo surutai, oerin DUP a al ixuv” – and you know that you are now two. You feel her place something into your hands, and the archmage tells you this:

“I have given the version of you on the North side of the room a blue token, and to the one on the South side of the room, I have graciously gifted a red token. The one of you who first correctly guesses the colour of your token will go free; the other I intend to keep – but be careful – guess wrongly, and I am imprisoning you both!”

Now, consider your situation, still blindfolded. You know every fact about the state of the room. You know that there are two of you. You know that one is at the north of the room, and that the other is at the south. You know where the tokens are. You can answer every question about the state of the room with absolute confidence. And yet. What is the chance you have the blue token? It must be 50%. You do not know.

This is self-locating uncertainty. As it is a sort of uncertainty irreducible by prior cognition or observation, this is actually a sub-type of aleatoric uncertainty (I did say three!) – no amount of prior experience could help you here, since having watched a thousand duplications would tell you nothing about which copy you are in this one. Epistemic uncertainty yields to evidence; self-locating uncertainty does not.
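The irreducibility is easy to see in a toy simulation – a sketch, assuming only that you are equally likely to “be” either copy (the setup and names here are my own invention):

```python
import random

def run_trials(strategy, trials: int = 100_000, seed: int = 0) -> float:
    """Simulate the archmage's bet: you wake as one of two copies, chosen
    uniformly.  North holds the blue token, South the red.  `strategy` is
    any guess function -- it may "know" every fact about the room, but it
    has no indexical information about which copy it is."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        my_side = rng.choice(["north", "south"])
        my_token = "blue" if my_side == "north" else "red"
        if strategy() == my_token:
            wins += 1
    return wins / trials

# Every fixed strategy hovers around 0.5 -- more knowledge of the room
# cannot help, because nothing distinguishes the two copies from inside.
print(run_trials(lambda: "blue"))
print(run_trials(lambda: "red"))
```

Note that the guess functions take no arguments at all: handing them the complete state of the room would change nothing, which is precisely the point.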

Terrifyingly, this sort of uncertainty appears in the real world – if you subscribe to the many-worlds interpretation of quantum mechanics, then this is the sort of uncertainty you have about observations of quantum states – not “will the electron be spin-up or spin-down?”, but “am I the version of me in the spin-up world, or the version of me in the spin-down world?”.

This also crops up in any theories that posit sufficiently large worlds such that “you” will be multiply-instantiated – if, for example, the universe is infinite in size, then there will be identical copies of you elsewhere, simply by chance!

In the Tegmark Level IV multiverse, all logically non-contradictory structures are instantiated – a truly staggering level of physical profligacy. (An interesting note – in such a universe, epistemic uncertainty in fact collapses into aleatoric uncertainty – there is no longer a notion of “what is actually the case?”, instead only “which state of affairs is the one that I am within?”)

It’s tremendous fun that the world is so strange, and that we might have to contend with such obstacles to our understanding.

If you wish to read more about this, I might recommend the following:

Farewell!


Appendix: Why probabilities?

Beliefs-as-probabilities can be constructed (no, extracted!) by considering a person’s behaviour when offered the chance to participate in a bet (also often called a lottery in this discipline).

If one is inclined to bet more confidently on the truth or falsity of a proposition, that indicates that their credence in the proposition is closer to 100% or 0%. Consider the following (very simple) bet:

If X is the case, I pay you £100.
If X is not the case, you pay me £100.

Now, if you believe that X is more-likely-than-not to be the case, this is a bet you should expect to win money by accepting. If X is, for example, “The sun will rise tomorrow”, then you should be pleased to accept my money by taking this bet. If, however, X is something like “This 20-sided die will come up 17 when I roll it”, then you would likely not take this bet. This operationalises the intuitive statement “It is more likely that the sun will rise tomorrow than that a fair 20-sided die will come up 17”.
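That intuition is just an expected-value calculation – a quick sketch, using the stakes and credences from the example above:

```python
def expected_value(credence: float, win: float = 100.0, lose: float = 100.0) -> float:
    """Expected payoff of the simple bet: receive `win` if X turns out
    to be the case, pay `lose` otherwise, given your credence in X."""
    return credence * win - (1 - credence) * lose

# "The sun will rise tomorrow" -- near-certain, so the bet looks great:
print(expected_value(0.999999))   # approximately +99.9998
# "This d20 comes up 17" -- credence 1/20, a predictable loss:
print(expected_value(1 / 20))     # -90.0
```

The break-even point sits exactly at a credence of 50%, which is why willingness to accept the bet reveals on which side of 50% your credence lies.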

The astute reader might comment that all we have established is that some outcomes are likelier than others – why must we ascribe numbers to these things? The answer is to appeal to Dutch books – combinations of bets such that, if a reasoner does not operate as if they have numeric probabilities that stand in proper relation to each other, the reasoner will find these bets appealing, despite the fact that such bets predictably lose the reasoner money in expectation. (This is not, in the strictest sense, sufficient to prove much of anything about the behaviour of real reasoners. The move that plays most by the rules of decision theory is probably to cite the fact that humans seem to have utility ∝ log(money). One can also point out that it may be possible to escape money-pumps even when one cannot help but act irrationally, by maximising utility over plans rather than actions – see SEP Decision Theory § 6.1, “Was Ulysses rational?”.)

Dutch book arguments point at and name a particular sort of illness that your mind can be afflicted with, and in so doing allow you to see that it is undesirable to have.
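To make this concrete, here is a sketch of one standard sure-loss construction, under the assumption that an agent will buy any ticket priced at or below what their credence says it is worth:

```python
def dutch_book_loss(credence_A: float, credence_not_A: float, stake: float = 1.0) -> float:
    """A bookie sells two tickets: each pays `stake` if its proposition
    holds, priced at credence * stake (the most the agent considers fair).
    Since exactly one of A and not-A holds, exactly one ticket pays out."""
    price_paid = (credence_A + credence_not_A) * stake
    payout = stake                   # guaranteed, whichever way A turns out
    return payout - price_paid       # negative => guaranteed loss

# Incoherent credences: p(A) = 0.6 and p(not-A) = 0.6, summing past 1.
print(dutch_book_loss(0.6, 0.6))     # approximately -0.20, no matter what
# Coherent credences break even at worst:
print(dutch_book_loss(0.6, 0.4))     # 0.0
```

The agent judges each purchase fair in isolation, yet the pair together is a certain loss – exactly the “illness” the Dutch book argument names.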

Appendix: Logical induction

It turns out that it is possible to formalise the assignment of probabilities to logical statements in a manner that satisfies many desiderata via an algorithm called Garrabrant induction, which is inspired by a relaxation of the Dutch book argument.

A bounded system, like you or me, cannot know all logical facts immediately, and as such is almost always going to be vulnerable to clever Dutch books that make use of inscrutable logical facts – but one can consider instead the set of “efficiently computable” Dutch books – to these, Garrabrant induction is robust.