Read [28] has, following a suggestion of Schroeder-Heister (with Appendix B of Prawitz's [26] and Ekman's Paradox¹ [7] in mind), proposed as a logical constant a nullary operator • (aka R, for “Russell”) with the single (but impure) introduction rule

    [•]
     ⋮
     ⊥
    ───── •I
      •

in which the premiss is a derivation of ⊥ from the (discharged) assumption of •.

The GE-rule justified by this (along the same lines as for implication) is then

                 [⊥]
                  ⋮
    •      •      C
    ─────────────── •GE
           C

which, given the usual ⊥E rule (the minor derivation of C from ⊥ is already supplied by it) and the unnecessary duplication of premisses (the two occurrences of • collapse into one), can be simplified to

     •
    ─── •E
     ⊥

So, by this •E rule, the premiss of the •I rule is deducible, hence • is deducible, hence ⊥ is deducible. There is however a weakness (other than just that it leads to inconsistency) in the alleged justification of • as a logical constant: it is a circularity. We follow Martin-Löf [15, 16] and Dummett [2] in accepting that we understand a proposition when we understand what it means to have a canonical proof of it, i.e. what forms a canonical proof can take. In the case of •, there is a circularity: the introduction rule gives us a canonical proof only once we have a proof of ⊥ from the assumption of •, i.e. have a method for transforming arbitrary proofs of • into proofs of ⊥. The reference here to “arbitrary proofs of •” is the circularity.
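As an illustrative sketch (ours, not the paper's): TypeScript imposes no positivity restriction on recursive type aliases, so • can be rendered directly as the recursive type μX.(X → ⊥), with ⊥ rendered as `never`. The names `Bullet`, `bulletE`, `bulletI` and `falsum` are our own. The derivation of ⊥ typechecks, but its reduction never terminates:

```typescript
// • rendered as the recursive type μX.(X → ⊥), with ⊥ as `never`
type Bullet = (b: Bullet) => never;

// •E: from a proof of •, a proof of ⊥, by self-application
const bulletE = (b: Bullet): never => b(b);

// •I: a method taking proofs of • to proofs of ⊥ is exactly the
// premiss of •I, so bulletE itself serves as a "proof" of •
const bulletI: Bullet = bulletE;

// hence "⊥ is deducible" — but evaluating this never yields a value:
// the self-application reduces to itself forever (at run time, a
// stack overflow)
const falsum = (): never => bulletE(bulletI);
```

Running `falsum()` loops: `bulletE(bulletI)` reduces to `bulletI(bulletI)`, which is `bulletE(bulletI)` again. This is Ekman-style non-normalisation made concrete.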

There are similar ideas about type formers, and it is instructive to consider another case, an apparent circularity: the formation rule (in [15]) for the type N of natural numbers. That is a type that we understand when we know what its canonical elements are; these are 0 and, when we have an element n of N, the term s(n). The reference back to “an element n of N” looks like a circularity of the same kind; but it is rather different—we don't need to grasp all elements of N to construct a canonical element by means of the rule, just one of them, namely n.
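The point can be made concrete in a sketch (again ours, with our own names): the successor constructor consumes just one previously constructed element, so each canonical element of N is built without any circular appeal to the whole type.

```typescript
// N = μX.(1 + X): a canonical element is 0, or s(n) for a single
// already-constructed element n — no grasp of all of N is required
type Nat = { tag: "zero" } | { tag: "succ"; pred: Nat };

const zero: Nat = { tag: "zero" };
const succ = (n: Nat): Nat => ({ tag: "succ", pred: n });

// building s(s(0)) uses only the one element available at each step
const two: Nat = succ(succ(zero));

// structural recursion over canonical elements is well founded
const toNumber = (n: Nat): number =>
  n.tag === "zero" ? 0 : 1 + toNumber(n.pred);
```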

A formal treatment of this issue has long been available in the type theory literature, e.g. Mendler [18], Luo [14], Coq [1]. We will try to give a simplified version of the ideas. With the convention that propositions are interpreted as types (of their proofs), we take type theory as a generalisation of logic, with ideas and restrictions in the former being applicable to the latter. The simplest recursive case (N) has just been considered and the recursion explained as harmless (despite Dummett's reservations expressed as his “complexity condition” [2]). What about more general definitions?

The definition of the type N can be expressed as saying that N is the least fixed point of the operator ΦN =def λX.(1 + X), i.e. N =def μX.(1 + X). Similarly, the type of lists of natural numbers is μL.(1 + N × L), and the type of binary trees with leaves in A and node labels in B is μT.(A + (T × B × T)).

A unary operator definition Φ =def λX. ... is said to be positive iff the only occurrences of the type variable X in the body ... are positive, where an occurrence of X in the expression A → B is positive (resp. negative) iff it is a positive (resp. negative) occurrence in B or a negative (resp. positive) occurrence in A; a variable occurs positively in itself, and occurs positively (resp. negatively) in A + B and in A × B just where it occurs positively (resp. negatively) in A or in B. A definition of a type as the least fixed point of an operator is then positive iff the operator definition is positive.
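The positivity condition just stated is directly mechanisable. The following sketch (our own; the type `Ty` and the function names are hypothetical, not from the literature) checks whether the distinguished variable X occurs only positively in a body built from →, + and ×:

```typescript
// type expressions over a single distinguished variable X
type Ty =
  | { tag: "X" }                          // the bound variable X
  | { tag: "const"; name: string }        // 1, ⊥, A, B, N, ... (X-free)
  | { tag: "arrow"; dom: Ty; cod: Ty }    // A → B
  | { tag: "sum"; left: Ty; right: Ty }   // A + B
  | { tag: "prod"; left: Ty; right: Ty }; // A × B

// onlyPos(t): every occurrence of X in t is positive
function onlyPos(t: Ty): boolean {
  switch (t.tag) {
    case "X": return true;       // X occurs positively in itself
    case "const": return true;   // vacuously: X does not occur
    case "arrow":                // polarity flips in the domain of →
      return onlyNeg(t.dom) && onlyPos(t.cod);
    case "sum":
    case "prod":
      return onlyPos(t.left) && onlyPos(t.right);
  }
}

// onlyNeg(t): every occurrence of X in t is negative
function onlyNeg(t: Ty): boolean {
  switch (t.tag) {
    case "X": return false;
    case "const": return true;
    case "arrow":
      return onlyPos(t.dom) && onlyNeg(t.cod);
    case "sum":
    case "prod":
      return onlyNeg(t.left) && onlyNeg(t.right);
  }
}

const X: Ty = { tag: "X" };
// the body of ΦN = λX.(1 + X) …
const phiN: Ty = { tag: "sum", left: { tag: "const", name: "1" }, right: X };
// … and the body of Read's μX.(X → ⊥)
const phiBullet: Ty = { tag: "arrow", dom: X, cod: { tag: "const", name: "⊥" } };
```

Here `onlyPos(phiN)` holds, so μX.(1 + X) is a positive definition, while `onlyPos(phiBullet)` fails: X occurs in the domain of the arrow, hence negatively.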

Read's •, then, is defined as μX.(X → ⊥). This is not a positive definition; the negativity of the occurrence of X in the body X → ⊥ is a symptom of the circular idea that • can be grasped once we already have a full grasp of what the proofs of • might be.

In practice, a stronger requirement is imposed, that the definition be strictly positive, i.e. the only occurrences of the type variable X in the body ... are strictly positive, where an occurrence of X in the expression A → B is strictly positive iff it is a strictly positive occurrence in B; a variable occurs strictly positively in itself, and occurs strictly positively in A + B and in A × B just where it occurs strictly positively in A or in B. A definition of a type as the least fixed point of an operator is then strictly positive iff the operator definition is strictly positive.
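Strict positivity can be sketched the same way (again our own illustration, with hypothetical names): X may not occur in the domain of an arrow at all, so a body like (X → A) → B, which is positive in X since the two polarity flips cancel, is nevertheless rejected:

```typescript
// type expressions over a single distinguished variable X
type Ty =
  | { tag: "X" }
  | { tag: "const"; name: string }
  | { tag: "arrow"; dom: Ty; cod: Ty }
  | { tag: "sum"; left: Ty; right: Ty }
  | { tag: "prod"; left: Ty; right: Ty };

// occurs(t): X occurs somewhere in t
function occurs(t: Ty): boolean {
  switch (t.tag) {
    case "X": return true;
    case "const": return false;
    case "arrow": return occurs(t.dom) || occurs(t.cod);
    case "sum":
    case "prod": return occurs(t.left) || occurs(t.right);
  }
}

// strictlyPos(t): every occurrence of X in t is strictly positive,
// i.e. X never appears in the domain of an arrow
function strictlyPos(t: Ty): boolean {
  switch (t.tag) {
    case "X": return true;
    case "const": return true;
    case "arrow": return !occurs(t.dom) && strictlyPos(t.cod);
    case "sum":
    case "prod": return strictlyPos(t.left) && strictlyPos(t.right);
  }
}

const X: Ty = { tag: "X" };
const A: Ty = { tag: "const", name: "A" };
const B: Ty = { tag: "const", name: "B" };
// 1 + X (the body of ΦN) is strictly positive …
const phiN: Ty = { tag: "sum", left: { tag: "const", name: "1" }, right: X };
// … but (X → A) → B, though positive, is not strictly positive
const tricky: Ty = {
  tag: "arrow",
  dom: { tag: "arrow", dom: X, cod: A },
  cod: B,
};
```

The stricter test is what proof assistants in the Coq and Agda tradition actually apply to inductive definitions.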

With such definitions, it can be shown that strong normalisation (of a suitable set of reductions) holds [18, Chap. 3]; similar accounts appear in [1, 14].