
Proof-Theoretic Semantics Beyond Logic

Proof-theoretic semantics has been occupied almost exclusively with logical reasoning and, in particular, with the meaning of logical constants. Even though the way we can acquire knowledge logically is extremely interesting, this does not and should not form the central preoccupation of proof-theoretic semantics. The methods used in proof-theoretic semantics extend beyond logic, often in such a way that their application in logic is nothing but a special case of these more general methods.

What is most interesting is the handling of reasoning with information that is incorporated into sentences which, from the viewpoint of logic, are called 'atomic'. A special way of providing such information, as long as we are not yet talking about empirical knowledge, is by definitions. By defining terms, we introduce claims into our reasoning system that hold in virtue of the definition. In mathematics the most prominent example is that of inductive definitions. Now definitional reasoning itself obeys certain principles that we find otherwise in proof-theoretic semantics. As an inductive definition can be viewed as a set of rules the heads of which contain the definiendum (for example, an atomic formula containing a predicate to be defined), it is only natural to consider inductive clauses as kinds of introduction rules, suggesting a straightforward extension of principles of proof-theoretic semantics to the atomic case. A particular challenge here comes from logic programming, where we consider inductive definitions of a certain kind, called 'definite-clause programs', and use them not only for descriptive but also for computational purposes. In the context of dealing with negation, we even have the idea of inverting clauses in a certain sense. Principles such as the 'completion' of logic programs or the 'closed-world assumption' (which logic programming borrowed from Artificial Intelligence research) are strongly related to principles generating elimination rules from introduction rules and, thus, to the idea of harmony between these rules.
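To make this connection concrete, the following is a minimal sketch in Prolog (the predicate even/1 and the queries are illustrative assumptions, not taken from the text) of how a definite-clause program relates to its completion and to the closed-world assumption:

    % A definite-clause program defining even/1 inductively.
    even(0).
    even(s(s(X))) :- even(X).

    % Clark's completion reads these clauses as a biconditional:
    %   even(Y) <-> ( Y = 0  or  exists X ( Y = s(s(X)) and even(X) ) )
    % The 'only if' half is what inversion adds to the clauses as given.

    % The closed-world assumption licenses negative conclusions from
    % underivability; in Prolog it appears as negation as failure:
    % ?- \+ even(s(0)).   % succeeds, since even(s(0)) is not derivable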

Definitional Reflection

In what follows, we sketch the idea of definitional reflection, which employs the idea of clausal definitions as a powerful paradigm to extend proof-theoretic semantics beyond the specific realm of logic. It is related to earlier approaches developed by Lorenzen [38, 39], who based logic (and also arithmetic and analysis) on a general theory of admissible rules using a sophisticated inversion principle (he coined the term 'inversion principle' and was the first to formulate it in a precise way). It is also related to Martin-Löf's [40] idea of iterated inductive definitions, which gives introduction and elimination rules for inductively defined atomic sentences. Moreover, it is inspired by ideas in logic programming, where programs can be read as inductive definitions and where, in the attempt to provide a satisfactory interpretation of negation, ideas that correspond to the inversion of rules have been considered (see Denecker et al. [5], Hallnäs and Schroeder-Heister [31]). We take definitional reflection as a specific example of how proof-theoretic semantics can be extended beyond logic, and we claim that such an extension is quite useful. Other extensions beyond logic are briefly mentioned at the end of this section.

A particular advantage that distinguishes definitional reflection from the approaches of Lorenzen and Martin-Löf, and makes it more similar to what has been done in logic programming, is the idea that the meaning assignment by means of a clausal or inductive definition can be partial, which means in particular that definitions need not be well-founded. In logic programming this has been common from the very beginning. For example, a clause such as

p ⇐ ¬p

which defines p by its own negation, and related circular clauses have been standard examples for decades in the discussion of normal logic programs and the treatment of negation (see, e.g., Gelfond and Lifschitz [24], Gelder et al. [23]). Within mainstream proof-theoretic semantics, such circular definitions have only recently garnered attention, in particular within the discussion of paradoxes, mostly without awareness of logic programming semantics and the developments there. The idea of definitional reflection can be used to smoothly incorporate partial meaning and non-well-founded definitions. We consider definitional reflection as an example of how to move beyond logic and, with it, beyond the totality and well-foundedness assumptions of the proof-theoretic semantics of logic.
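Rendered as a normal logic program, this circular clause reads as follows (a minimal sketch; what the query yields depends on which semantics one adopts):

    % p defined by its own negation, via negation as failure:
    p :- \+ p.

    % Under SLDNF resolution the query ?- p. does not terminate:
    % p calls \+ p, which calls p again. The well-founded semantics
    % assigns p the truth value 'undefined', and the program has no
    % stable model at all, which is why it is a standard test case
    % in the discussion of negation.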

As definitional reflection is a local approach not based on the placeholder view of assumptions, we formulate it in a sequent-style framework. A definition is a list of clauses. A clause has the form

a ⇐ B

where the head a is an atomic formula ('atom'). In the simplest case, the body B is a list of atoms b1, ..., bm, in which case a definition looks like a definite logic program. We often consider an extended case where B may also contain structural implication '⇒' (an illustration is sketched after the next paragraph), and sometimes even structural universal implication, which essentially is handled by restricting substitution. Given a definition D, the list of clauses with a head starting with the predicate P is called the definition of P. In the propositional case where atoms are just propositional letters, we speak of the definition of a having the form

      ⎧ a ⇐ B1
Da    ⎨    ⋮
      ⎩ a ⇐ Bn

However, it should be clear that the definition of P or of a is normally just a particular part of a definition D, which contains clauses for other expressions as well. It should also be clear that this definition D cannot always be split up into separate definitions of its predicates or propositional letters. So 'definition of a' or 'of P' is a mode of speech. What is always meant is the list of clauses for a predicate or propositional letter within a definition D.
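As an illustration of a body containing structural implication, consider the following clause, a standard example from the definitional-reflection literature (it is not part of the running discussion here):

r ⇐ (r ⇒ ⊥)

where ⊥ is an atom for which no clause is given. This clause defines r by its own (structural) negation and is a typical vehicle for the paradox discussions mentioned above.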

Syntactically, a clause resembles an introduction rule. However, in the theory of definitional reflection we separate the definition, which is incorporated in the set of clauses, from the inference rules, which put it into practice. So, instead of different introduction rules which define different expressions, we have a general schema that applies to a given definition. Separating the specific definition from the inference schema, which works for arbitrary definitions, gives us wider flexibility. We need not consider introduction rules to be basic and other rules to be derived from them. Instead we can speak of certain inference principles that determine the inferential meaning of a clausal definition and which are of equal standing. There is a pair of inference principles that put a definition into action; they are in harmony with each other, without one of them being privileged. As we are working in a sequent-style framework, we have inferential principles for introducing the defined constant on the right and on the left of the turnstile, that is, in the assertion and in the assumption position. For simplicity we consider the case of a propositional definition D, which has no predicates, functions, individual variables or constants, and in which the bodies of clauses are just lists of propositional letters. Suppose Da (as above) is the definition of a (within D), and the Bi have the form 'bi1, ..., biki', as in propositional logic programming. Then the right-introduction rules for a are

         Γ ⊢ bi1   ...   Γ ⊢ biki
(⊢ a)   ──────────────────────────
                  Γ ⊢ a

or, in short,

  Γ ⊢ Bi
 ─────────   (1 ≤ i ≤ n),
  Γ ⊢ a

and the left-introduction rule for a is

         Γ, B1 ⊢ C   ...   Γ, Bn ⊢ C
(a ⊢)   ─────────────────────────────
                 Γ, a ⊢ C

If we talk generically about these rules, that is, without mentioning a specific a but just the definition D, we also write (⊢ D) and (D ⊢). The right-introduction rule expresses reasoning 'along' the clauses. It is also called definitional closure, by which is meant 'closure under the definition'. The intuitive meaning of the left-introduction rule is the following: everything that follows from every possible definiens of a follows from a itself. This rule is called the principle of definitional reflection, as it reflects upon the definition as a whole. If B1, ..., Bn exhaust all possible conditions to generate a according to the given definition, and if each of these conditions entails the very same conclusion, then a itself entails this conclusion.

This principle, which gives the whole approach its name, extracts deductive consequences of a from a definition in which only the defining conditions of a are given. If the clausal definition D is viewed as an inductive definition, definitional reflection can be viewed as being based on the extremal clause of D: nothing else beyond the clauses given in D defines a. To give a very simple example, consider the following definition:

child_of_tom ⇐ anna

child_of_tom ⇐ robert

Then one instance of the principle of definitional reflection with respect to this definition is

 anna ⊢ tall    robert ⊢ tall
──────────────────────────────
     child_of_tom ⊢ tall

Therefore, if we know anna ⊢ tall and robert ⊢ tall, we can infer child_of_tom ⊢ tall.
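To see how the two rules can be put to work mechanically, here is a small sketch of a sequent prover for propositional definitional reflection in Prolog. The predicates prove/2 and clause_/2 are our own names (clause_ avoids the built-in clause/2), and representing the premisses anna ⊢ tall and robert ⊢ tall as clauses for tall is a modeling choice, not part of the example above:

    % The definition D, one clause_(Head, Body) fact per clause.
    clause_(child_of_tom, [anna]).
    clause_(child_of_tom, [robert]).
    clause_(tall, [anna]).      % stands in for 'anna |- tall'
    clause_(tall, [robert]).    % stands in for 'robert |- tall'

    % prove(Gamma, C): the sequent Gamma |- C is derivable.

    % Axiom: the conclusion is among the assumptions.
    prove(Gamma, C) :-
        member(C, Gamma).

    % Definitional closure (|- D): use a clause C <= Body and derive
    % every atom of Body from Gamma.
    prove(Gamma, C) :-
        clause_(C, Body),
        forall(member(B, Body), prove(Gamma, B)).

    % Definitional reflection (D |-): replace an assumption A by the
    % body of each of its defining clauses; C must follow in every case.
    % (We demand at least one defining clause here; in the full theory
    %  an assumption with an empty definition entails anything, which
    %  is how an undefined atom like a falsum constant behaves.)
    prove(Gamma, C) :-
        select(A, Gamma, Rest),
        findall(Body, clause_(A, Body), [B1|Bs]),
        forall(member(Body, [B1|Bs]),
               ( append(Body, Rest, Gamma1), prove(Gamma1, C) )).

The query ?- prove([child_of_tom], tall). succeeds exactly along the lines of the figure above: reflection replaces child_of_tom by anna and by robert, and closure together with the axiom finishes each branch.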

Since definitional reflection depends on the definition as a whole, taking all definientia of a into account, it is non-monotonic with respect to D. If D is extended with an additional clause

a ⇐ Bn+1

for a, then previous applications of the (D ⊢) rule may no longer remain valid. In the present example, if we add the clause

child_of_tom ⇐ john

we can no longer infer child_of_tom ⊢ tall, unless we also know john ⊢ tall.
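In the toy prover sketched above, this non-monotonicity is directly observable (again under the same illustrative assumptions):

    % Adding a third clause for child_of_tom ...
    clause_(child_of_tom, [john]).

    % ... invalidates the earlier derivation:
    % ?- prove([child_of_tom], tall).   % now fails: john |- tall is underivable
    % It succeeds again once clause_(tall, [john]) is added as well.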

Note that due to the definitional reading of clauses, which gives rise to inversion, the sign '⇐' expresses more than just implication, in contradistinction to the structural implication '⇒' that may occur in the body of a clause. To do justice to this fact, one might instead use ':-' as in PROLOG, or ':=' to express that we are dealing with some sort of definitional equality.

In standard logic programming one has, on the declarative side, only what corresponds to definitional closure. Definitional reflection leads to powerful extensions of logic programming (due to computation procedures based on this principle) that lie beyond the scope of the present discussion.

 