
Sufficient condition of comparability

We now return briefly to the problem of the relation between conditioned probability and absolute probability. We saw that if p is a probability measure then there is complete symmetry between the two concepts, since it is easy to define a conditioned probability measure in terms of an absolute probability measure and vice versa. In contrast, it is possible to define a comparative absolute probability starting from a comparative conditioned probability, but the converse is impossible. For this reason it has been maintained (Fine, 1973, pp. 30-1) that the former (absolute probability) is an epistemologically primary concept with respect to the latter. Here, in our opinion, there is a confusion between the logical and the epistemological levels. The fact that absolute comparative probability is logically simpler does not mean that it is also epistemologically simpler. It seems to me that in general an absolute probability is none other than an elliptically expressed conditioned probability, in which the evidence is left implicit. As far as we know, among the scholars of comparative probability only Koopman (1940) considers conditioned probabilities as primary, in accordance with the general perspective of Keynes (TP, ch. 1, s. 3).
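For the quantitative case referred to here, the symmetry can be sketched as follows (a standard sketch, assuming p is a probability measure, t is any tautology and p(b) > 0); in the comparative case only the second move, from conditioned to absolute, remains available:

```latex
% Quantitative case: conditioned and absolute probability are interdefinable
% (assuming p is a probability measure, t any tautology, p(b) > 0).
p(a/b) = \frac{p(a \wedge b)}{p(b)}, \qquad p(a) = p(a/t)
% Comparative case: only the second direction survives, e.g. "a is at least as
% probable as b" (absolutely) may be defined by p(a/t) \geq p(b/t), whereas the
% conditioned ordering cannot be recovered from the absolute one.
```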

We now look again at the problem of comparability. As mentioned above, all standard axiomatizations of comparative probability presuppose comparability between all sentences of the language considered.9 Moreover, most of these approaches concern the problem of compatibility between qualitative axioms and the introduction of additivity.10 We mentioned above that the comparability criteria proposed by Keynes are too strict since, when relative frequencies with a certain weight are available, a comparison between the corresponding probabilities can be affirmed. For this reason it is impossible to establish a priori a necessary and sufficient principle of comparability. But if we leave aside relative frequencies, comparability could be defined in the following way:

Sufficient principle of comparability: at least one of p(a/c) ≥ p(b/c) and p(b/c) ≥ p(a/c) holds; and at least one of p(a/b) ≥ p(a/c) and p(a/c) ≥ p(a/b) holds.

That is, given certain evidence c, it is always possible to compare the probabilities of two different hypotheses a and b; and, given a certain hypothesis a, it is always possible to compare the probabilities ascribed to it by two different evidences b and c. But, if the evidence and/or the hypothesis are not the same, it is not certain that a comparative evaluation of the probabilities can be provided. Note that this is a sufficient principle of comparability - that is, all probabilities of this kind must be comparable - but it does not exclude the possibility that there are other probabilities which are comparable.

Let us call 'homogeneous' two probabilities which have either the same hypothesis or the same evidence; otherwise they are 'inhomogeneous'. Then we can paraphrase the sufficient principle of comparability by saying that homogeneous probabilities are always comparable; but the same does not generally hold for inhomogeneous probabilities. For instance, the probability that Caesar conquered Gaul given the content of De bello gallico is greater than the probability that Caesar conquered Gaul given the content of De bello civili (provided that De bello gallico is unknown); moreover, the probability that Caesar conquered Gaul given the content of De bello gallico is greater than the probability that Brutus plotted against Caesar given the content of De bello gallico (provided that Suetonius's Life of Caesar is unknown).
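Put schematically, in the notation used above, the definition and the principle can be restated as follows (this is only a restatement, under the assumption that comparability means that at least one of the two weak inequalities holds):

```latex
% Homogeneity: p(a/b) and p(c/d) are homogeneous iff a = c (same hypothesis)
% or b = d (same evidence); otherwise they are inhomogeneous.
% Sufficient principle of comparability, restated: homogeneous probabilities
% are always comparable.
a = c \;\lor\; b = d \;\Longrightarrow\; p(a/b) \geq p(c/d) \;\lor\; p(c/d) \geq p(a/b)
```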

The proposed principle seems to be too strong, since, via transitivity, it imposes comparability between inhomogeneous probabilities as well. Indeed, it might be that:

p(a/c) ≥ p(b/c) and p(b/c) ≥ p(b/d).

By the transitivity of the probability relation, it follows that p(a/c) ≥ p(b/d). It is clear that the two inhomogeneous probabilities p(a/c) and p(b/d) have a sort of middle term, that is, p(b/c); but it is easy to imagine a long chain of such intermediations, which would imply the comparability of the majority of pairs of probabilities.
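To see how such a chain of intermediations might run, here is a schematic example (a sketch only; the further letters e and f stand for additional hypotheses and evidences introduced for illustration, and the intermediate comparisons are assumed to happen to line up as shown):

```latex
% Each adjacent pair is homogeneous (it shares either its hypothesis or its
% evidence), so each comparison is guaranteed by the sufficient principle:
p(a/c) \;\geq\; p(b/c) \;\geq\; p(b/d) \;\geq\; p(e/d) \;\geq\; p(e/f)
% By transitivity the inhomogeneous extremes become comparable: p(a/c) >= p(e/f).
```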

In the formal systems of comparative probability the following proposition holds:

if p(a/b) > p(a/-b), then p(a/b) > 0,

where by '0' we mean the probability of any contradiction, whereas by '1' we mean the probability of any logical truth.

Indeed, if the hypothesis holds, the thesis follows from p(a/-b) ≥ 0. In other words, if b favours a, then b is relevant for a. The converse does not hold, as shown by the following counter-example. Even if the probability of arriving at school on time when leaving home at 8 a.m. is greater than 0, it is not always true that this probability is greater than the probability of arriving on time when not leaving at 8 a.m.; indeed, it is better to leave at 7.50 a.m. Neither does it hold that:

if p(a/b) = p(a/-b), then p(a/b) = 0;

in other words, that if b is indifferent for a, then it is also irrelevant for a. For instance, we may arrive at a road junction such that the probability of falling down a slope is positive and equal on both routes. Neither, a fortiori, does the converse of this proposition hold.
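With purely hypothetical figures (invented only to make the two counter-examples above quantitative):

```latex
% School: a = arriving on time, b = leaving home at 8 a.m.
% b is relevant for a, yet unfavourable to a:
p(a/b) = 0.6 > 0, \qquad p(a/{-}b) = 0.9 > p(a/b)
% Road junction: a = falling down the slope, b = taking the first route.
% b is indifferent for a, yet not irrelevant:
p(a/b) = p(a/{-}b) = 0.2 > 0
```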

In sum, six epistemologically different degrees of cognition can be identified:

p(a/b) = 0 impossibility (not only logical impossibility),

p(a/b) < p(a/-b) unfavourable evidence,

p(a/b) > 0 relevant evidence,

p(a/b) > p(a/-b) favourable evidence,

p(a/b) > p(-a/b) probable hypothesis,

p(a/b) = 1 certitude (not only logical truth).

Note that if b is relevant for a, this does not exclude that b is unfavourable evidence for a. Moreover, if b is unfavourable evidence for a, a might even be impossible. Finally, if a is a probable hypothesis with respect to the evidence b, it is not certain that b is favourable to a.
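The following sketch makes these independences concrete in the quantitative case (a stronger assumption than the comparative setting of the text); the function and the figures are purely illustrative:

```python
# A minimal numerical sketch. Quantitative probabilities are assumed, so
# p(-a/b) = 1 - p(a/b); the figures are hypothetical and only illustrate the
# logical independence of the six degrees discussed above.

def degrees(p_a_given_b: float, p_a_given_not_b: float) -> list[str]:
    """Return which of the six labels apply to hypothesis a and evidence b."""
    labels = []
    if p_a_given_b == 0:
        labels.append("impossibility")
    if p_a_given_b < p_a_given_not_b:
        labels.append("unfavourable evidence")
    if p_a_given_b > 0:
        labels.append("relevant evidence")
    if p_a_given_b > p_a_given_not_b:
        labels.append("favourable evidence")
    if p_a_given_b > 1 - p_a_given_b:      # i.e. p(a/b) > p(-a/b)
        labels.append("probable hypothesis")
    if p_a_given_b == 1:
        labels.append("certitude")
    return labels

# b relevant for a, yet unfavourable (the school example, hypothetical figures):
print(degrees(0.6, 0.9))  # ['unfavourable evidence', 'relevant evidence', 'probable hypothesis']
# b unfavourable for a, and a impossible given b:
print(degrees(0.0, 0.3))  # ['impossibility', 'unfavourable evidence']
# a probable given b, yet b not favourable (here indifferent):
print(degrees(0.8, 0.8))  # ['relevant evidence', 'probable hypothesis']
```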

 