We know very little on grounds of absolute certainty; at least, very little that is interesting or useful. This observation partly motivates my gravitation toward applications of probability theory to epistemology. To get a better grip on such claims, it is important to learn the basics of probability theory and confirmation theory. If a belief is held with less than certainty, then there are some grounds on which that belief is made probable. We may talk about some theory “T” being probable given one’s background knowledge “k” (where k could mean what everyone believes, or the set of all tautologies). To symbolize the probability of “T” given “k”, we write: P(T|k) = ?
So far, so good. But what happens when we have multiple pieces of evidence (A & B & C & D) that probabilistically support T? As many of you know, when you calculate the probability of a conjunction, you typically multiply a series of numbers that range between 0 and 1. If that is the case, then adding evidence, even highly probable evidence, will actually diminish what one can confidently conclude. Consider four pieces of evidence such that A = .9, B = .8, C = .8, and D = .7 (note that every piece of evidence is very good). Multiplying these probabilities to get a lower bound for P(T|k), given evidence A, B, C, and D, yields .9 × .8 × .8 × .7, which is a little over .4. This is hardly an impressive conclusion to reach with such powerful evidence. Has something gone wrong, or are we stuck with the problem of dwindling probabilities? I suggest that something has gone terribly wrong.
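The arithmetic behind the dwindling-probabilities worry can be sketched in a few lines of Python. This is only an illustration of the multiplication step described above: the evidence labels and their probabilities come from the example in the text, and the calculation assumes (as the naive approach does) that the pieces of evidence combine by simple multiplication, as if independent.

```python
from math import prod

# Hypothetical probabilities for the four pieces of evidence from the example
evidence = {"A": 0.9, "B": 0.8, "C": 0.8, "D": 0.7}

# The naive lower bound: multiply the individual probabilities together,
# treating the conjunction as a product of independent terms
lower_bound = prod(evidence.values())

print(f"Naive lower bound for P(T|k): {lower_bound:.4f}")
```

Running this prints a value just over 0.4, despite each individual probability being 0.7 or higher, which is exactly the counterintuitive result at issue.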