
Hard times

Chains had thus been born as a notational appliance, and, living thereafter in the state of this original sin, were liable to accusations of theoretical depravity in the minimalist period: ‘the notation soon took on a life of its own and like Geppetto’s famous puppet, chains were converted from convenient fictions to flesh and blood theoretical constructs,’ as the charge might go (Hornstein (1998: 99); see also the discussion in Gärtner (2002), who remarks that ‘no claim (...) should be made that the concept of chains is particularly simple, minimal, or well-adapted to a purely derivationalist perspective on CHL’ (Gärtner 2002: 90)). Given the reluctance to allow movement into θ-positions and the insistence on the inability of chains to enter into thematically relevant configurations in Chomsky (1995) (where a configurational theory of θ-assignment is proposed, whereas chains are ‘in no configuration at all’ (Chomsky 1995: 287)) and the gradual move away from representational towards derivational concepts, it was only natural to find chains of A-movement in particular suspected of being entirely dispensable (as in Lasnik (1999), who discusses the importance of the stance on θ-theoretic relations and its import for the theoretical status of chains, a relationship discussed also by Rodrigues (2004); see also Kiguchi (2002) for a further development of this approach to chains), the status of A-bar chains being slightly less shaky due to notorious reconstruction effects. Chains thus came to appear as mere ‘terminological conveniences’:

Consider the status of chains and the operation multiple Merge, constructing chains. It is not clear that these are more than terminological conveniences. No operations of L apply to chains. Principles holding of chains can be expressed directly in terms of occurrences (e.g., the uniformity condition on bar level), as can interpretive operations referring to chains: for example, principles of θ-role assignment and surface interpretation discussed earlier, or conditions on reconstruction. (Chomsky 2001: 40)

The debate about the status and interpretive relevance of copies left by successive cyclic applications of internal merge is a continuation of closely intertwined debates about cyclicity as it affects consecutive displacement of NPs and its possible semantic consequences, going back to the early days of cycle-based derivations and trace theory; Lightfoot (1976) summarizes the issue as follows:

... we may distinguish between two views of trace theory: the “pluralist” view (...) says that traces (a) play a crucial role in the syntax and (b) turn out to yield exactly the right information at surface structure to support semantic interpretation; the “exclusively semantic” view says that (...) the theory is motivated only by the requirement of surface structure semantic interpretation. Tied in with these two views of trace theory is the following question: does a moved NP always leave a trace or is a trace left only on the first movement, i.e. only in the original, deep structure position? It is sometimes assumed that a trace is left only on the first movement. If one adopts the exclusively semantic view of trace theory, there is no reason to have an NP leave a trace at intermediate stages of the derivation, because presumably such positions never play a role in semantic interpretation. (Lightfoot 1976: 560-561)

The gradual unification of movement operations under the heading of Move α and of various locality effects under the umbrella of cyclic domains of application for the movement rule generalized the issue and forced adoption of the stance advocated in Lightfoot (1976) cited above—successive cyclic movement began to leave traces at every intermediate site, with their possible elimination via deletion mechanisms being only a highly restricted option. What is telling in the quote above is the fact that the ‘exclusively semantic view’ does not require that intermediate positions be present for semantic interpretation. This reflects the fact that what seemed particularly pressing in the development of the interpretive semantics approach was the establishment of a connection between the deep structure position, relevant for thematic interpretation, and the surface structure position as attained by movement rules—a connection allowing one to ‘loosely think of a transformational grammar from a semantic point of view as a mapping of a structure of thematic relations onto a kind of “logical form”’ (Chomsky 1975a: 39). The latter was supposed to provide scopal information if the notion was applicable, as it was in cases of A-bar movement; otherwise, the concept of ‘semantic interpretation’ in effect required no more and no less than ‘to take θ-role assignment as an A-movement reconstruction effect’, as Lasnik (1999: 207) summarized the much later minimalist view. This restrictive notion of semantic relevance has been repeatedly articulated as oriented towards explanation of the ‘duality of semantics’, correlated in current minimalist theory with the two kinds of the merge operation:

At the semantic interface, the two types of Merge correlate well with the duality of semantics that has been studied within generative grammar for almost forty years, at first in terms of “deep and surface structure interpretation” (and of course with much earlier roots). To a large extent, EM yields generalized argument structure (θ-roles, the “cartographic” hierarchies, and similar properties); and IM yields discourse-related properties such as old information and specificity, along with scopal effects. (Chomsky 2008: 140)

Such characterizations of the consequences of the displacement operation, ultimately going back to the Extended Standard Theory period, have become a common conceptualization of the difference between the two basic types of interpretive reflexes of the operation merge (see recently e.g. Chomsky (2007: 10), Chomsky (2013b: 64), Chomsky (2014: 13), Chomsky (2015c: 100)). The focus on scope-related properties in the case of internal merge is tightly coupled with an attempt to link internal merge—which, operating at the phase level, adds an occurrence of a displaced object at the edge of the phase—and the properties of phasal heads, C in particular (it is C that is standardly connected with various scope-related phenomena, including quantifier-based and wh-related ones; the phase-heading v* acquires many analogous properties on approaches which seek to unify phases in this respect and posit an extended left-edge area at the level of the vP phase as well, as in Belletti (2004, 2005) and related work), and thus to find more theoretically satisfying foundations for the A/A-bar distinction than a merely descriptive taxonomy:

CI clearly permits interpretation of quantification in some manner. Language should provide such a device if expressive potential is to be adequately utilized. (...) The most familiar notation is operator-variable constructions. But that device virtually comes free, given EM and IM expressing the duality of semantics at CI (...). In the simplest case, the copy merged to the edge by IM is the operator taking scope over the copy that had previously been merged by EM, the latter understood as the variable; the full structure of the two copies provides the interpretation as a restricted variable, hence yields the options for reconstruction along lines that have been pursued very productively in recent years. These considerations take us a step towards establishing the A/A'-distinction as a property of language with a principled explanation in terms of SMT. (Chomsky 2007: 11-12)
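
The mechanism Chomsky sketches here can be made concrete with a standard wh-question of the kind routinely used in this literature (the example and notation below are purely illustrative and are not drawn from the passages quoted in this chapter): the copy internally merged at the phase edge is read as the operator, while the lower copy, with its full internal structure, supplies the restriction on the variable it is mapped onto, which is also what makes reconstruction options available.

\[
[\text{which book}]\ \ \mathrm{C}\ \ [\text{John read}\ [\text{which book}]]
\;\Longrightarrow\;
(\text{which}\ x : x\ \text{a book})\ [\text{John read}\ x]
\]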

The line of approach exemplified at length in the quote above has had—besides stimulating much research into the syntactic encoding of operator-variable relationships and so-called reconstruction effects—a detrimental effect on the conceptualization of chains, on which, in effect, only those links which are either required for the establishment of θ-properties or necessary for an account of scope-related phenomena have been taken as indeed receiving an interpretation in the C-I component, in the latter case with an almost exclusive focus on occurrences directly involved in operator-variable relationships, so that the higher occurrence scopes as an operator over the lower one, interpreted as a variable—a stance expressed explicitly in the quote from Chomsky (2007) above. Although ‘under the trace theory of movement rules (...) a surface structure is in some respects similar to a logical formula with bound variables’ (Chomsky 1975a: 39), this did not mean that the A-movement case and the case of intermediate chain links were treated as having a status equal to the A-bar case—on the contrary, the operator-variable dependency had to be separated out in terms of logical form for an interpretive relationship closely corresponding to that found in first-order logic to be established, the need for separation leading to the sorting of traces into entirely distinct groups, only traces left by A-bar movement (required to be A-free and characterized as belonging to the class of R-expressions) being understood as variables in the strict sense (see further the full discussion in Chomsky (1986b) and the standard references in Chomsky (1986b: 214 n. 95)). This partiality has remained part and parcel of the treatment of displacement phenomena; it is not merely a reflex of earlier theoretical proposals, but seems to be connected to a specific take on the autonomy of syntax thesis, on which syntactic properties and processes are not only independent of—in particular, not driven by—semantic properties and requirements (‘we don’t want to seat interpretive motivations in the driver’s seat of our syntactic car’, as Lasnik and Uriagereka (2005: 154) put it), but may and should remain understood and analyzed exclusively in syntactic terms, with the possibility that they are, partly or entirely, relevant only for the syntactic computation—a point discussed already with respect to features in particular in section 1.3.3, to which we now return insofar as it is relevant for the general interpretive properties of chains.
Features driving the computational process in narrow syntax were, and for the most part still are, understood as being confined to the syntactic space only, standing in need of elimination before a syntactic object is sent off to the interfaces, and otherwise leading to a crash; their role is restricted to setting the syntactic engine in motion—recall the Chomsky-Richards deduction of feature inheritance from the timing of feature valuation and transfer (see Chomsky (2007) and Richards (2007), and Richards (2011, 2012a,b) for a discussion of some consequences of the approach) and the retreat from a characterization of phases in interface-related terms (in particular, in C-I-related terms), as in Chomsky (2000a), where phases are understood primarily as syntactic objects ‘relatively independent in terms of interface properties’, where ‘perhaps the simplest and most principled choice is to take SO to be the closest syntactic counterpart to a proposition: either a verb phrase in which all θ-roles are assigned or a full clause including tense and force’ (Chomsky (2000a: 106); see also Chomsky (2004a: 124)), towards a purely formal understanding of phases, with two closely related main properties: first, they are definable in terms of operations on uninterpretable formal features—‘the size of phases is in part determined by uninterpretable features’ (Chomsky (2008: 154); see already Chomsky (2001) and further Chomsky (2005: 17), Chomsky (2007: 17-20), Chomsky (2012a: 6)); second, the very existence and size of phases is motivated by third factor requirements of computational efficiency: ‘Phases should, presumably, be as small as possible, to minimize computation after Transfer and to capture as fully as possible the cyclic/compositional character of mappings to the interface’ (Chomsky 2008: 155). Considerations of this kind are thought to bear directly on the problem of uninterpretable features:

A fourth conclusion is a suggestion about another curious phenomenon: the fact that languages have unvalued features, assigned values in certain structural positions. These features mark phases, a particular execution of strict cyclicity, well-motivated on grounds of computational efficiency; and it may be that their only motivation is to do so. (Chomsky 2015b: 5)

The same point is developed at greater length in Chomsky (2015a):

...an interesting question arises as to why language would have unvalued features. Since they’re unvalued, they’re not doing anything. What are they there for? My suspicion is that they’re probably there to identify phases. If you’re going to have an efficient computational system, it’s going to have to be successive cyclic somehow—strict cyclic even—so strict cyclicity is a pretty important principle computationally. It means that once you’ve computed something, you can forget about it. So it saves a lot of computation. But strict cyclicity requires some version of phase theory. Then the question becomes, “What are the phases?” And they seem to be connected to valuation of unvalued features. That would mean that the basic phases are v*P and CP, which looks right from other points of view, and that’s where the unvalued features are valued. (Chomsky 2015a: 81)

This move does not, to be sure, eliminate the possibility that phases give rise to interpretive units which are ‘relatively independent in terms of interface properties’—it just does not make such independence the aim or the driving force behind syntactic computation, just as it does not make semantic considerations the rationale for syntactic derivations; and this is to be expected both on general minimalist grounds, which deny that such teleological properties are even in view of the syntactic engine, and for reasons related to Darwin’s Problem: minimal computation principles are neither language-specific nor do they require specific evolutionary explanations (see a recent summary of the issues in Chomsky (2016a), as well as the discussion in Gallego (2010, 2012)).

 