
Discriminative irregularity

The Zipfian structure of the primary linguistic input suggests a system-external explanation for the prevalence of interdependencies and regular patterns in general. Morphological models do not need to treat implicational structure and regularity as normative. They are merely prerequisites for generalization from the sparse, biased language sample that learners encounter.
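The sparsity of a Zipfian sample can be made concrete with a small simulation (a hypothetical sketch, not part of the original text; all parameter values are illustrative): when word types follow a power-law frequency distribution, even a sizeable token sample fails to cover the full type inventory, so learners must generalize to the forms they never encounter.

```python
import random

def zipf_sample(n_types=1000, n_tokens=5000, s=1.0, seed=0):
    """Draw n_tokens from a Zipfian distribution over n_types word types.

    Type ranks r = 1..n_types receive probability proportional to 1/r**s.
    Returns the number of distinct types actually encountered in the sample.
    """
    rng = random.Random(seed)
    weights = [1.0 / (r ** s) for r in range(1, n_types + 1)]
    sample = rng.choices(range(n_types), weights=weights, k=n_tokens)
    return len(set(sample))

# Even with five tokens per type on average, coverage stays well short
# of the full inventory: many low-frequency types are never seen.
seen = zipf_sample()
print(f"types encountered: {seen} of 1000")
```

The gap between types encountered and the full inventory is the "sparse, biased language sample" referred to above.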

A learning-based approach raises similar questions about the system-external motivation for irregular patterns. This is where discriminative learning models offer insight into the perseverance of patterns that are often treated as a kind of functionless residue or outright noise in the linguistic system. Viewed from a discriminative perspective, irregular formations are not noise, but on the contrary, are well-discriminated and, correspondingly, highly informative forms. This is particularly clear in the case of suppletion, though many deviations from regular patterns will tend to enhance the discriminability of forms and aid communicative efficiency. In English, for example, a regular preterite form such as walked is much less clearly discriminated from the present/base form walk than suppletive went is from go. Precisely this point is demonstrated in the learning model in Ramscar et al. (2013a). Indeed, given the existence of a separate participle form gone (and the near-obsolescence of the historical source wend), the suppletive preterite went comes as close to the ‘one meaning-one form’ ideal as any verb form in English.[1] Once established as well-discriminated exponents of specific properties, such irregulars will also function as attractors that enhance the salience of regular contrasts. By highlighting communicative contrasts that are distinctive in a language, irregulars can clarify the oppositions between less well discriminated regulars.
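The walked/walk versus went/go contrast can be illustrated with edit distance as a crude proxy for form discriminability (a minimal sketch, not from the source; a discriminative learning model such as that of Ramscar et al. would of course use richer cues than string distance):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Distance between base and preterite, normalized by the longer form,
# as a rough index of how well the two forms are discriminated.
for base, pret in [("walk", "walked"), ("go", "went")]:
    d = levenshtein(base, pret)
    print(base, pret, round(d / max(len(base), len(pret)), 2))
```

On this crude measure the suppletive pair go/went is maximally distinct (normalized distance 1.0), while the regular pair walk/walked shares most of its material (0.33), in line with the discriminability argument above.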

Recognizing the function that irregulars perform does not justify inverting the usual classification and treating irregulars as communicatively optimal. Instead, irregulars can be seen to enjoy a kind of ‘herd immunity’ within a morphological system. Any increase in the salience and discriminability of irregulars is offset by a commensurate reduction in generalizability. Given the Zipfian structure of the input, irregulars can only function in a system that is either small enough to be acquired from directly encountered forms, or regular enough to allow speakers to deduce the shape of unencountered forms.

From a learning-based perspective, neither regular nor irregular patterns are normative; they merely serve different, broadly complementary, functions in a system. The coexistence of regular and irregular patterns reflects an opposition between a pair of communicative pressures, one that enhances discriminability and another that promotes generalizability. Different languages can be expected to reach different states of equilibrium between these pressures, leading to different proportions of regular and irregular patterns.

This kind of dynamic view contrasts starkly with approaches that adopt a purely formal notion of ‘scientific compactness’, ‘optimality’, ‘canonicity’, ‘language perfection’, etc. On all of these accounts, irregularity comes out as a defective property. Models that incorporate a diachronic dimension may be able to trace the sources of these defects and identify the distributional factors (whether measured in terms of type or token frequencies) that contribute to their survival in the face of regularizing pressure. However, without taking the communicative function of language into consideration, it is difficult to perceive the function that irregulars serve once they enter a system.

  • [1] This ideal is often described in terms of ‘transparency’ in models of Natural Morphology (Mayerthaler 1981; Wurzel 1984; Dressler 1987). See also Bybee (1985: 208ff.) for critical discussion.
 