
On Dummett's “Proof-Theoretic Justifications of Logical Laws”

Abstract. This paper deals with Michael Dummett's attempts at a proof-theoretic justification of the laws of (intuitionistic) logic, pointing to several critical problems inherent in this approach. It discusses in particular the role played by “boundary rules” in Dummett's semantics. For a revised approach based on schematic validity, it is shown that the rules of intuitionistic logic can indeed be justified, but it is argued that a schematic conception of validity is problematic for Dummett's philosophy of logic.

Can logical laws be justified? Of course, the question can be answered, trivially, in the affirmative: a logical law can be justified by deriving it from other logical laws. But the question is meant to ask something deeper, something like: can the logical laws be justified, all of them? Or, at least, can the logical laws be justified on the basis of some small fragment of them, a fragment deductively weaker than the whole?

To this question it seems plausible that the answer is negative. Early analytic philosophers might have argued that since the logical laws provide the canons of justification, it does not even make sense to seek to justify them. (This view is, I take it, near to the surface, if not completely explicit, in Frege. It is the cornerstone of Carnap's thought, when he takes the specification of a linguistic framework—including all the logical laws—as a precondition for any rational inquiry or debate at all.) This philosophical view is supported by, or mirrored in, an obvious technical point: any justification would involve a deductive argument; this argument would use logical laws, so that the justification would presuppose what it is supposed to justify. Thus it would be circular, and not a justification at all. This is well illustrated by soundness proofs for deductive systems: ordinarily, in showing soundness of a particular axiom or rule, one uses logical reasoning that is the direct analogue in the metalanguage of that very axiom or rule.

Nonetheless, as Michael Dummett has long urged (see, e.g., [1]), a negative answer might be too quick.

It might be proposed, for example, that it is the meanings of our words that have, as upshots, the acceptability of the logical laws; might not an account of those meanings therefore be able to play the role of supporting, or even fully justifying, logical laws? To put a finer point on it, the suggestion is that logical laws are true by dint of the meanings of the words in them—specifically the meanings of the logical particles; and hence one might be able to find justifications of those laws simply by unfolding what the meanings of the logical particles are. The hope is that this might be done without invoking the full panoply of logical laws that use those particles, so as to obtain noncircular justifications.

In an odd sense, the idea goes back to Wittgenstein's discovery of truth-functional analysis: for the validity of the truth-functional laws follows at once from the stipulation of the truth-functions that the connectives represent. (I say “in an odd sense”, since for Wittgenstein the logical laws have no content, and it is surely odd to speak of justifying something without content: what is there to justify?) But it should be noted that a strong assumption underlies Wittgenstein's procedure, namely his notion of propositions as bipolar—possibly true, possibly false, and determinately either one or the other. That is a highly suspect assumption, at least to those like Dummett who wish to question classical two-valued logic. So perhaps the question should be rephrased as: can we find noncircular justifications of logical laws by unfolding the meanings of the logical particles, without making strong meaning-theoretic assumptions?

Gerhard Gentzen's work in proof theory in the 1930s proved to be suggestive in this regard. Gentzen had developed logical systems in which the role of each connective was isolated, so that each basic inference rule was “about” one and only one connective. Indeed, he showed that two sorts of rules for each connective suffice. One sort allows for the introduction of the connective, and one for its elimination. In the context of a system for natural deduction (rather than in Gentzen's sequent calculus), the rule of ∧-introduction is that which licenses the inference of A ∧ B from premises A and B; the rules of ∧-elimination license the inference of A from A ∧ B and of B from A ∧ B. The rule of →-introduction is the rule of discharge of premises: if B has been deduced from premises including A, then we may infer A → B while striking A from the list of premises. The rule of →-elimination is just modus ponens, licensing the inference of B from A and A → B. Gentzen suggested that introduction rules have much the same status as definitions: they fix the meaning of the connectives they introduce, at least in part. That is, an introduction rule for a connective gives the conditions under which a statement with that connective as its main connective can be inferred. Those conditions can be thought of as simply stipulated, and once stipulated, as constitutive of the meaning of the connective.
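For reference, these four rules may be set out in the usual natural-deduction notation (square brackets mark an assumption discharged by the inference; the rule labels are conventional abbreviations, not Dummett's):

\[
\frac{A \qquad B}{A \land B}\;(\land\mathrm{I})
\qquad
\frac{A \land B}{A}\;(\land\mathrm{E})
\qquad
\frac{A \land B}{B}\;(\land\mathrm{E})
\]

\[
\frac{\begin{matrix}[A]\\ \vdots\\ B\end{matrix}}{A \to B}\;({\to}\mathrm{I})
\qquad
\frac{A \qquad A \to B}{B}\;({\to}\mathrm{E})
\]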

With respect to the project of justifying logical laws on the basis of the meaning of the logical particles, if we accept this view of introduction rules, then clearly those rules stand in no further need of justification. As Dummett puts it, they are “self-justifying”. The question then is whether such self-justifying rules can be used to endow further logical rules with justification, in particular, rules beyond those that amount to iterated use of introduction rules.

In Chaps. 11–13 of The Logical Basis of Metaphysics [3], Michael Dummett formulates a method for providing what he argues are just such justifications. The introduction rules for the connectives are taken as furnishing the canonical means of establishing sentences whose main connective is one of those the rules introduce. Dummett's method then seeks to show, of an inference, that any canonical argument for the premises of the inference can be transformed into a canonical argument for the conclusion. Dummett's claim is that if this can be shown, the inference is justified.

The clearest illustrative case is an inference by an elimination rule, say, an inference from F ∧ G to G. A canonical argument for the premise F ∧ G would end in an application of the rule of ∧-introduction, that is, would end in an inference of F ∧ G from F and G. But then the argument already contains a canonical argument for G. Thus, the inference is justified, since we can transform the given argument for F ∧ G into a canonical argument for G simply by extracting the subargument for G. The basic idea here stems from Gentzen's [6] technique of normalization of proofs, which he devised to prove his cut-elimination theorem. Dummett's use of the technique as a justificatory procedure is inspired by a similar proposal of Dag Prawitz from the early 1970s (especially in [7]), although there are differences in formulation and in scope.
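Schematically, writing 𝒟₁ for the subargument ending in F and 𝒟₂ for the subargument ending in G (the 𝒟-labels are introduced here only for illustration), the transformation discards the detour through ∧-introduction followed by ∧-elimination:

\[
\dfrac{\dfrac{\begin{matrix}\mathcal{D}_1 & \mathcal{D}_2\\[2pt] F & G\end{matrix}}{F \land G}\;(\land\mathrm{I})}{G}\;(\land\mathrm{E})
\qquad\leadsto\qquad
\begin{matrix}\mathcal{D}_2\\[2pt] G\end{matrix}
\]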

This method of justifying logical laws is important to Dummett for several reasons. First, it provides a sense in which logical inferences can be justified, in a way that is clearly noncircular, and so stills the doubt I mentioned at the start as to whether any such program could make sense. Moreover, although the method presupposes the self-justifying nature of introduction rules, and so relies on a view of the meaning-endowing nature of those rules, the method need not invoke a full-fledged and comprehensive theory of meaning, as Dummett's better-known arguments criticizing classical logic and supporting intuitionistic logic do. Since we seem to be no closer to obtaining a comprehensive Dummettian theory of meaning for natural language than we were when Dummett formulated his meaning-theoretic program 25 years ago, this avoidance of invoking such a theory makes the method more credible and presumably less open to controversy.

As it turns out, or so Dummett asserts, the method provides justification for intuitionistic logic but not for classical logic, at least not for the classical laws about negation. Thus it gives important support to his position that intuitionistic logic is preferable to classical. Indeed, it exhibits a virtue of intuitionistic logic—justifiability on the basis of laws that merely express the meaning of the connectives—that classical logic fails to have: “[Intuitionistic logic's] logical constants can be understood, and its logical laws acknowledged, without appeal to any semantic theory and with only a very general meaning-theoretical background.” [3, p. 300] The failure of this method for the laws of classical negation thus allows an invidious distinction to be made.

In this paper I investigate Dummett's method, as it applies to sentential logic.1 I shall show that, even in this restricted domain, Dummett's method won't do: it provides “justifications” for obviously invalid inferences. I shall consider how to repair the damage, and analyze the question of whether the repair restores confidence in the philosophical framework underlying Dummett's claim that his method does indeed justify. The results are, I think, suggestive of some overlooked, and possibly deep, difficulties in Dummett's overarching project of marrying intuitionism and a verificationistic theory of meaning.

 