# Advances in Proof-Theoretic Semantics

## Proof-Theoretic Semantics Beyond Logic

Proof-theoretic semantics has been occupied almost exclusively with logical reasoning and, in particular, with the meaning of logical constants. Even though the way we can acquire knowledge logically is extremely interesting, this is not and should not be the central preoccupation of proof-theoretic semantics. The methods used in proof-theoretic semantics extend beyond logic, often in such a way that their application in logic is nothing but a special case of these more general methods. What is most interesting is the handling of reasoning with information that is incorporated into sentences which, from the viewpoint of logic, are called 'atomic'. A special way of providing such information, as long as we are not yet talking about empirical knowledge, is by definitions. By defining terms, we introduce claims into our reasoning system that hold in virtue of the definition. In mathematics the most prominent example is inductive definitions. Now definitional reasoning itself obeys certain principles that we find otherwise in proof-theoretic semantics, since an inductive definition can be viewed as a set of rules the heads of which contain the expression being defined.

## Definitional Reflection

In what follows, we sketch the idea of definitional reflection. A particular advantage that distinguishes definitional reflection from the approaches of Lorenzen and Martin-Löf, and makes it more similar to what has been done in logic programming, is the idea that the meaning assignment by means of a clausal or inductive definition can be partial, which means in particular that definitions need not be well-founded. In logic programming this has been common from the very beginning: circular clauses, in which the atom being defined recurs in its own defining condition, are admitted from the start.

As definitional reflection is a local approach not based on the placeholder view of assumptions, we formulate it in a sequent-style framework. A clause has the form

    a ⇐ B

where the head a is an atom and, in the simplest case, the body B is a list of atoms. More generally, the body may also contain structural implication '⇒', and sometimes even structural universal implication, which essentially is handled by restricting substitution. Given a definition D, the list of clauses whose heads share the same defined expression a,

    a ⇐ B₁
    ⋮
    a ⇐ Bₙ

constitutes the definition of a. However, it should be clear that the definition of a is always relative to the definition D as a whole, since D may contain further clauses for a.

Syntactically, a clause resembles an introduction rule. However, in the theory of definitional reflection we separate the definition, which is incorporated in the set of clauses, from the inference rules, which put it into practice. So, instead of different introduction rules which define different expressions, we have a general schema that applies to a given definition. Separating the specific definition from the inference schema for arbitrary definitions gives us wider flexibility: we need not consider introduction rules to be basic and other rules to be derived from them. Instead we can speak of certain inference principles that determine the inferential meaning of a clausal definition and which are of equal standing. There is a pair of inference principles that put a definition into action; they are in harmony with each other, without one of them being privileged. As we are working in a sequent-style framework, we have inferential principles for introducing the defined constant on the right and on the left of the turnstile, that is, in assertion position and in assumption position. For simplicity we consider the case of a propositional definition D, which contains no predicates, functions, individual variables or constants, and in which the bodies of clauses are just lists of propositional letters. Suppose the definition of the atom a in D consists of the clauses a ⇐ B₁, …, a ⇐ Bₙ. Then the right-introduction rules for a are

    Γ ⊢ Bᵢ
    ───────   (for any i)
    Γ ⊢ a

and the left-introduction rule for a is

    Γ, B₁ ⊢ C   …   Γ, Bₙ ⊢ C
    ──────────────────────────
            Γ, a ⊢ C

If we talk generically about these rules, that is, without mentioning a specific atom, the right-introduction principle expresses reasoning 'along' the clauses. It is also called definitional closure. The left-introduction principle is called definitional reflection. This principle, which gives the whole approach its name, extracts deductive consequences from the definition as a whole: whatever follows from every defining condition of a follows from a itself. Consider the definition:

    child_of_tom ⇐ anna
    child_of_tom ⇐ robert

Then one instance of the principle of definitional reflection with respect to this definition is

    anna ⊢ tall   robert ⊢ tall
    ────────────────────────────
        child_of_tom ⊢ tall

Therefore, if we know anna ⊢ tall and robert ⊢ tall, we can infer child_of_tom ⊢ tall. Since definitional reflection depends on the definition as a whole, if we extend the definition by the clause

    child_of_tom ⇐ john

we can no longer infer child_of_tom ⊢ tall, except when we also know john ⊢ tall. Note that due to the definitional reading of clauses, which gives rise to inversion, the sign '⇐' expresses more than just implication, in contradistinction to the structural implication '⇒' that may occur in the body of a clause. To do justice to this fact, one might instead use ':-' as in PROLOG, or ':=' to express that we are dealing with some sort of definitional equality. In standard logic programming one has, on the declarative side, only what corresponds to definitional closure. Definitional reflection leads to powerful extensions of logic programming (due to computation procedures based on this principle) that lie beyond the scope of the present discussion.
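To make the dependence of definitional reflection on the definition as a whole concrete, here is a minimal executable sketch in Python. It is not from the text: the function name `reflects`, the encoding of atoms as strings and of known sequents as pairs are our own illustrative choices; bodies are restricted to single atoms, and the definition is assumed well-founded (a circular clause would make the recursion loop).

```python
# A minimal sketch of reasoning by definitional reflection.
# Atoms are strings; a definition maps each defined atom to the list
# of its clause bodies, where for simplicity every body is one atom.

def reflects(definition, known, a, c):
    """Check the sequent  a |- c.

    `known` is a set of primitive sequents (x, y) taken as given,
    e.g. ("anna", "tall") for  anna |- tall.  By definitional
    reflection, a |- c holds if b |- c holds for *every* clause
    a <= b in the definition.
    """
    if a == c or (a, c) in known:          # identity, or given knowledge
        return True
    if a in definition:                    # definitional reflection step
        return all(reflects(definition, known, b, c)
                   for b in definition[a])
    return False

# The example definition: anna and robert are the children of tom.
D = {"child_of_tom": ["anna", "robert"]}
known = {("anna", "tall"), ("robert", "tall")}
print(reflects(D, known, "child_of_tom", "tall"))        # True

# Reflection depends on the whole definition: adding the clause
# child_of_tom <= john blocks the inference until john |- tall is known.
D2 = {"child_of_tom": ["anna", "robert", "john"]}
print(reflects(D2, known, "child_of_tom", "tall"))       # False
print(reflects(D2, known | {("john", "tall")},
               "child_of_tom", "tall"))                  # True
```

Definitional closure would be the converse direction, deriving child_of_tom from anna alone by reasoning along a single clause; that is the direction standard logic programming implements declaratively.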


Academic library - free online college e textbooks - info{at}ebrary.net - © 2014 - 2019