Sub-symbolic implementations of frames

The need for a sufficiently flexible implementation of frames has driven some researchers to explore the adequacy of sub-symbolic processing in neural networks (McClelland 1986). The propensity of these networks to display (a) graceful degradation, viz. arriving at a best guess given imperfect information, (b) spontaneous generalization, that is, accommodating inputs that do not conform to previously instantiated schemas, and (c) compromise solutions to mutual constraint satisfaction problems is compatible with the flexibility people show in their interpretation of jokes.

The early promise of this approach can be seen in a model proposed by Rumelhart and colleagues (Rumelhart 1986) that classifies rooms in a house based on their contents (e.g. whether they have beds, chairs, refrigerators, and so on). Units in the network represent semantic micro-features, and the weights between units encode correlations between those micro-features. The network is set up with excitatory weights between micro-features that co-occur and inhibitory weights between features that do not. If the network has learned a high correlation among the activations of stove, refrigerator, and counter, then activating the stove unit leads the network (using a gradient descent algorithm) to activate the correlated micro-features (e.g. refrigerator, counter) until it settles into a kitchen frame.
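The following sketch illustrates this kind of settling behavior in a toy constraint-satisfaction network. The feature set, weight values, and update rule are simplified assumptions introduced for illustration; they are not taken from the model of Rumelhart and colleagues.

```python
# A minimal sketch of a constraint-satisfaction network of the kind described
# above; the feature list, weight values, and settling rule are illustrative
# assumptions, not a reconstruction of Rumelhart and colleagues' model.
import numpy as np

features = ["stove", "refrigerator", "counter", "bed", "dresser", "bathtub"]
idx = {f: i for i, f in enumerate(features)}
n = len(features)

# Symmetric weights: excitatory between micro-features that co-occur in the
# same room, inhibitory between micro-features that do not.
W = np.full((n, n), -0.5)
for group in [("stove", "refrigerator", "counter"),  # kitchen
              ("bed", "dresser"),                     # bedroom
              ("bathtub",)]:                          # bathroom
    for a in group:
        for b in group:
            if a != b:
                W[idx[a], idx[b]] = 1.0
np.fill_diagonal(W, 0.0)

def settle(clamped, steps=50):
    """Clamp the given micro-features on, then repeatedly update activations
    toward a state that satisfies the weighted constraints."""
    act = np.zeros(n)
    for f in clamped:
        act[idx[f]] = 1.0
    for _ in range(steps):
        act = 1.0 / (1.0 + np.exp(-(W @ act)))  # squash net input into (0, 1)
        for f in clamped:
            act[idx[f]] = 1.0                   # keep the observed evidence on
    return {f: round(float(act[idx[f]]), 2) for f in features}

# Activating 'stove' pulls its correlated micro-features toward a kitchen frame.
print(settle(["stove"]))
```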

While the model by Rumelhart and colleagues reveals the flexibility of probabilistic approaches, it is incapable of representing the information needed to get the jokes discussed above. This is because it contains no mechanisms for generating the high-level inferences that relate frames to one another. A more sophisticated model by St. John is able to use co-occurrence frequencies in its input to infer default information, and to modify its predictions about upcoming events in a way that is sensitive to context (St. John 1992). However, St. John’s model is limited in much the same way as symbolic implementations: information that deviates too much from stored frames cannot be accommodated. Because it is unable to compute the relationships between different higher-level representations, St. John’s (1992) model is incapable of combining information from different scripts in any sensible way.

Lange and Dyer (1989) propose a structured connectionist model called ROBIN (role-binding network) that explicitly attempts to capture inferential revisions in frame-shifting. ROBIN uses connections between nodes to encode semantic knowledge represented in a frame-like data structure. Each frame has one or more slots, and slots have constraints on the type of fillers to which they can be bound. The relationships between frames are represented by excitatory and inhibitory connections between nodes and by pathways between corresponding slots. Once initial role assignments have been made, ROBIN propagates evidential activation values in order to compute inferences from the information the programmers have given it.
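A rough sense of this data structure can be conveyed with the following sketch, which represents frames with typed slots and weighted pathways to related frames. The class definitions and type labels are illustrative assumptions, not Lange and Dyer's actual encoding; the frame names anticipate the Transfer-Inside example discussed below.

```python
# A minimal sketch of the frame/slot structure described above, with typed
# filler constraints and weighted pathways between related frames; the class
# definitions and values are illustrative assumptions, not ROBIN's encoding.
from dataclasses import dataclass, field

@dataclass
class Slot:
    name: str
    allowed_types: set                               # constraint on permissible fillers
    candidates: dict = field(default_factory=dict)   # candidate filler -> evidential activation

@dataclass
class Frame:
    name: str
    slots: dict                                      # slot name -> Slot
    links: list = field(default_factory=list)        # (related frame, weight, slot-to-slot pathway)

# Transfer-Inside(Actor, Object, Location) is linked to Inside-of(Object, Location),
# so that filling the first frame's slots can drive inferences in the second.
inside_of = Frame("Inside-of", {
    "object":   Slot("object",   {"physical-object"}),
    "location": Slot("location", {"container"}),
})
transfer_inside = Frame("Transfer-Inside", {
    "actor":    Slot("actor",    {"animate"}),
    "object":   Slot("object",   {"physical-object"}),
    "location": Slot("location", {"container"}),
})
transfer_inside.links.append(
    (inside_of, 1.0, {"object": "object", "location": "location"})
)
```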

Inference is understood as resulting from the spread of activation across the connections between related frames and competing slot-fillers. For example, connections between frames for Transfer-Inside and Inside-of allow the system to ‘infer’ Inside-of (Pizza, Oven) from Transfer-Inside (Seana, Pizza, Oven). In this model, frame selection is entirely a matter of spreading activation. Because each slot has a number of binding nodes, all of the meanings of an ambiguous word can serve as candidate bindings. Candidate bindings can be activated simultaneously, and the binding node with the greatest evidential activation eventually wins out. Because multiple frames are activated in parallel, contextual information can further activate an already highly activated node (or set of nodes), thus confirming an initial interpretation. Alternatively, contextual information can activate a previously less-active interpretation, thus implementing frame-shifting.
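The following sketch illustrates this competition between candidate bindings in miniature. The activation values, normalization step, and meaning labels are assumptions adopted for illustration rather than details of ROBIN.

```python
# A minimal, self-contained sketch of the competition between candidate
# bindings; the activation values, update rule, and meaning labels are
# assumptions introduced for illustration, not taken from ROBIN.

# Candidate bindings for an ambiguous word, each with an initial
# evidential activation from the frame it fits in the joke's setup.
candidates = {"meaning-A": 0.6, "meaning-B": 0.3}

def spread(candidates, evidence, rate=0.5):
    """Add contextual evidence to each candidate's activation, then
    normalize so the bindings compete for the same slot."""
    updated = {m: a + rate * evidence.get(m, 0.0) for m, a in candidates.items()}
    total = sum(updated.values())
    return {m: a / total for m, a in updated.items()}

# Early context confirms the initially dominant interpretation...
candidates = spread(candidates, {"meaning-A": 0.4})
print(max(candidates, key=candidates.get))   # meaning-A remains the winner

# ...but a punch line that strongly activates the competing binding lets the
# previously less-active interpretation win out: frame-shifting.
candidates = spread(candidates, {"meaning-B": 2.0})
print(max(candidates, key=candidates.get))   # meaning-B now wins
```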

The neurally inspired architecture of these models offers important advances over traditional, symbolic implementations of frames, including probabilistic representations, parallel activation, and the use of spreading activation mechanisms. However, none of these models has the capacity to creatively combine frames, to draw inferences that require an understanding of the relationships between frames, or to construct novel frames in response to contextual demands. While sub-symbolic implementations of frames thus represent an improvement over traditional frame-based models, they share many of the same limitations.

 