
Some Remarks on Proof-Theoretic Semantics

Abstract This is a tripartite work. The first part is a brief discussion of what it is to be a logical constant, rejecting a view that allows a particular self-referential "constant" • to be such a thing in favour of a view that leads to strong normalisation results. The second part is a commentary on the flattened version of Modus Ponens, and its relationship with rules of type theory. The third part is a commentary on work (joint with Nissim Francez) on "general elimination rules" and harmony, with a retraction of one of the main ideas of that work, i.e. the use of "flattened" general elimination rules for situations with discharge of assumptions. We begin with some general background on general elimination rules.

Keywords General elimination rules • Harmony • Strong normalisation

Background on General Elimination Rules

Standard natural deduction rules for Int (intuitionistic predicate logic) in the style of Gentzen [9] and Prawitz [24] are presumed to be familiar. The theory of cut-elimination for sequent calculus rules is very clear: whether a derivation in a sequent calculus is cut-free or not is easily defined, according to the presence or absence of instances of the Cut rule. For natural deduction, normality is a less clear concept: there are several inequivalent definitions (including variations such as "full normality") in the literature. For implicational logic it is easy; but rules such as the elimination rule for disjunction cause minor problems with the notion of "maximal formula occurrence" (should one include the permutative conversions or not?), and more problems when minor premisses have vacuous discharge of assumptions.

One proposed solution, albeit partial, is the uniform use of general elimination rules, i.e. GE-rules. These can be motivated in terms of Prawitz's inversion principle [1]: “the conclusion obtained by an elimination does not state anything more than what must have already been obtained if the major premiss of the elimination was inferred by an introduction” [25, p. 246]. Normality is now the simple idea [39] that the major premiss of each elimination step should be an assumption; see also [13, 36].
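To make the inversion principle concrete: in term notation (our notation, for illustration; it is not used in the paper), the detour conversions for disjunction say that a ∨-elimination whose major premiss was just inferred by an introduction concludes nothing that was not already available:

$$
\mathsf{case}(\mathsf{inl}(a);\, x.c_1;\, y.c_2) \;\rightsquigarrow\; c_1[a/x]
\qquad
\mathsf{case}(\mathsf{inr}(b);\, x.c_1;\, y.c_2) \;\rightsquigarrow\; c_2[b/y]
$$

Normality in the sense just defined excludes not only such introduction-elimination detours but also elimination steps whose major premiss is the conclusion of another elimination.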

The standard elimination rules for disjunction, absurdity and existential quantification are already GE rules:

$$
\frac{A \vee B \quad \begin{matrix} [A] \\ \vdots \\ C \end{matrix} \quad \begin{matrix} [B] \\ \vdots \\ C \end{matrix}}{C}
\qquad
\frac{\bot}{C}
\qquad
\frac{\exists x A \quad \begin{matrix} [A] \\ \vdots \\ C \end{matrix}}{C}
$$

(with the usual side conditions on variables in the last rule). A GE-rule for conjunction,

$$
\frac{A \wedge B \quad \begin{matrix} [A,\,B] \\ \vdots \\ C \end{matrix}}{C}
$$

has been proposed by various authors, notably Prawitz [26, 27], Martin-Löf [15] and Schroeder-Heister [30], inspired in part by type theory (where conjunction is a special case of the Σ-type constructor, with A ∧ B =def Σ(A, B) whenever B(x) is independent of x) and (perhaps) in part by linear logic [10] (where conjunction appears in two flavours: multiplicative ⊗ and additive &).
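Under the Curry–Howard correspondence, the GE-rule for conjunction is case analysis on a pair rather than projection. The following Lean 4 sketch (the name `andGE` and its formulation are ours, purely for illustration) makes both the rule and its detour conversion explicit:

```lean
-- GE-rule for conjunction: from a proof of A ∧ B and a deduction of C
-- from the assumptions A and B (the premiss `k`), conclude C.
def andGE {A B C : Prop} (maj : A ∧ B) (k : A → B → C) : C :=
  match maj with
  | ⟨a, b⟩ => k a b

-- The detour conversion required by the inversion principle holds
-- definitionally: eliminating a conjunction that was just introduced
-- returns what the introduction already had.
example {A B C : Prop} (a : A) (b : B) (k : A → B → C) :
    andGE ⟨a, b⟩ k = k a b := rfl
```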

To this one can add GE-rules for implication[2] and universal quantification:

$$
\frac{A \supset B \quad A \quad \begin{matrix} [B] \\ \vdots \\ C \end{matrix}}{C}
\qquad
\frac{\forall x A \quad \begin{matrix} [A[t/x]] \\ \vdots \\ C \end{matrix}}{C}
$$

Rules of the first kind are conveniently called "flattened" [29] (in comparison with Schroeder-Heister's "higher-level" rules, for which see [30, 32]). López-Escobar [13] distinguishes between the premiss A of ⊃GE as a "minor" premiss and that of C (assuming B) as a "transfer" premiss[3].
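The flattened ⊃GE has a direct Curry–Howard reading as a combinator; here is a small Lean 4 sketch (names ours, not the paper's), with López-Escobar's terminology labelling the arguments:

```lean
-- Flattened GE-rule for implication:
--   f : A → B  is the major premiss,
--   a : A      is the "minor" premiss,
--   k : B → C  is the "transfer" premiss, a deduction of C
--              discharging the assumption B.
def impGE {A B C : Prop} (f : A → B) (a : A) (k : B → C) : C :=
  k (f a)

-- Ordinary Modus Ponens is the special case with a trivial transfer
-- premiss (take C := B and k := id).
example {A B : Prop} (f : A → B) (a : A) : B := impGE f a id
```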

One thus has a calculus of rules in natural deduction style for Int; such calculi, and their normalisation results, have been studied by von Plato [39], by López-Escobar [13] and by Tennant [36]. With the definition (given above) that a deduction is normal iff the major premiss of every elimination step is an assumption, the main results are:

1. Weak Normalisation (WN): every deduction can be replaced by a normal deduction of the same conclusion from the same assumptions [13, 20, 36, 39].

2. Strong Normalisation (SN), for the implicational fragment: an obvious set of rules for reducing non-normal deductions is strongly normalising, i.e. every reduction sequence terminates [12, Sect. 6], [13, 36, 37].

3. SN, for the full language: a straightforward extension of the proof of [37] for implication[4]; also, the proofs for implication "directly carry over" [12] to a system with conjunctions and disjunctions. An argument (using the ordinary elimination rule for implication) is given in [35] for the rules for implication and existential quantification, with the virtue of illustrating in detail how to handle GE rules where the Tait–Martin-Löf method of induction on types familiar from [11] is not available. See also [13].

4. Some straightforward arguments for normalisation (by induction on the structure of the deduction) [40].

5. A 1-1 correspondence with intuitionistic sequent calculus derivations [20, 39].

6. Some interpolation properties [17].

7. Extension of the normalisation results to classical logic [41].

Despite the above results, there are some disadvantages:

1. Poor generalisation of the GE rule for implication to the type-theoretic constant Π, of which ⊃ can be treated as a special case [15]: details below in Sect. 3, and see the sketch after this list.

2. Too many deductions, as in sequent calculus. Focused (aka "permutation-free") sequent calculi [5, 6] have advantages. Sequent calculus has (for each derivable sequent) rather too many derivations, in comparison to natural deduction, since derivations often have many permutations, each of which is, when translated to ordinary natural deduction, replaced by an identity of deductions. The GE-rules have the same feature, which interferes with rather than assists in root-first proof search.

3. (For some complex constants, if one adopts the methodology beyond the basic intuitionistic ones) a “disharmonious mess” [4]: details below in Sect. 4.4.

4. No SN results (yet, in general) for GE-rules for arbitrarily complex constants.
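For disadvantage 1, the point can be previewed with a small Lean 4 sketch (again illustrative, not the paper's formulation): ⊃ is the degenerate case of Π in which the codomain does not depend on the bound variable, and the flattened ⊃GE above offers no obvious analogue once that dependency is present (Sect. 3 has the details).

```lean
-- A → B is definitionally the dependent product ∀ _ : A, B, i.e. the
-- special case of Π in which B does not depend on the bound variable.
example (A B : Prop) : (A → B) = ∀ _ : A, B := rfl
```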

  • [1] See [19, 31] for discussions of this principle, including its antecedents in the work of Lorenzen.
  • [2] Reference [3] has an early occurrence of this rule.
  • [3] On the other hand, Francez and Dyckhoff [8] call A the "support" and, more in line with tradition than López-Escobar [13], the remaining premiss the "minor premiss".
 