Yang’s (2016) Tolerance Principle describes with remarkable precision how many exceptions the mechanisms of child language acquisition can tolerate while still inducing a productive rule, and, as I pointed out in a previous post, it is a notable advance in the long-standing controversy over the amount of data necessary for the acquisition of language. This post (an abbreviated reworking of a recent paper) addresses a different but related issue: the amount of data on language variation that a linguist needs in order to develop a theory of language. My point is that the two main types of current linguistic theory (roughly, functionalism and formalism) correspond to different scientific methods (the inductive and the deductive, respectively), and that the two types of theory therefore have different “tolerance thresholds” regarding the sparseness of data.
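Since part of the argument below turns on how this threshold scales with the amount of data, it is worth recalling the principle itself in its standard formulation: a rule applying to N items remains productive provided its exceptions do not exceed θ_N = N / ln N. Here is a minimal illustrative sketch (the function name is my own, not Yang’s notation):

```python
import math

def tolerance_threshold(n: int) -> float:
    """Yang's tolerance threshold: a rule over n items remains
    productive if the number of exceptions does not exceed n / ln(n)."""
    return n / math.log(n)

# The tolerable *proportion* of exceptions is theta_N / n = 1 / ln(n),
# which grows as n shrinks: a learner facing a small set of items can
# tolerate relatively more exceptions than one facing a large set.
for n in (10, 100, 1000):
    theta = tolerance_threshold(n)
    print(n, round(theta, 1), round(theta / n, 2))
# 10   ->  4.3 exceptions tolerated (43% of items)
# 100  -> 21.7 (22%)
# 1000 -> 144.8 (14%)
```

Note how the proportion 1/ln N rises as N falls; this is the precise sense in which the learning mechanism is more tolerant of exceptions when the data are sparse, a point I return to at the end of this post.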
In fact, the relationship between data and theory in science differs depending on whether an inductive or a deductive methodology is employed. Following Dougherty’s (1976) characterisation, we can say that in an inductive model there is a certain set of procedures and operations with which the scientist uses the data to develop a theory. The theory is derived from the data by inductive processes. If the methodology is followed correctly, the scientist will arrive at an empirically motivated theory to describe the phenomena under consideration. In this model, then, “the empirical motivation for accepting (or rejecting) a theory stems from the data which give rise to the theory, i.e. the data which played a role in its discovery. In this view, the discovery of a theory and the justification of a theory are a single process; discovery and justification cannot be distinguished” (Dougherty 1976: 5). In the deductive model, by contrast, there is no set of procedures and operations with which the scientist works on the data to discover a theory. Rather, in this model the theory is a product of human creativity: a conjecture advanced as a possible explanation of the phenomena under investigation. According to Dougherty, “the means by which a theory is arrived at are irrelevant in determining its empirical adequacy. The theory derives its total empirical motivation from the comparison of the consequences deduced from the theory with observable experimental phenomena. In this view, the discovery of a theory and the justification of a theory are two different processes” (Dougherty 1976: 5).
The history of modern science is a clear illustration of the primacy, in the realm of the natural sciences, of the deductive method, generally known as the hypothetico-deductive model. Chomsky’s naturalistic conception of language implies the adoption of the hypothetico-deductive method for linguistic theory.
But many opponents of the Chomskyan conception of linguistic theory conceive of the study of language inductively, mostly as the economic systematisation of a collection of linguistic facts. However, the object of study of Chomskyan linguistic theory is not human languages, but the faculty of language (FL). Of course, no one speaks FL: people either speak a specific language or they do not speak at all. FL determines part of the structure of languages, and therefore languages must be studied in order to discover the structure and properties of FL, but languages are not the ultimate object of study. As Chomsky pointed out, specific languages reflect historical facts (for example the Norman Conquest or a Basque substratum) that cannot be regarded as properties of FL. And for this reason, the inductive model is simply insufficient to discover the truth about FL. There is, of course, an inductive phase in all hypothetico-deductive theories, and therefore, to a large extent, inductive linguistic theories (such as those developed by Greenberg and many others) are useful for deductive linguistic theory, but these two types of theories do not have the same goals, nor the same objects of study.
Of course, the differences that these two main traditions show in the way they approach the issue of the diversity of languages are not ultimately based on different conceptions of science; rather, the different conceptions of science are inspired by different conceptions of the object of study. From a generativist point of view, language is conceived of as a natural phenomenon, and languages are understood as particular environmentally conditioned (and historically modified) manifestations of that phenomenon. That is, we proceed deductively from language to languages. One of the clearest examples of this procedure is parametric theory: from common design principles, the various emerging systems respond to variations in development processes that have systematic implications, just as happens in the development of natural organisms. In contrast, from a functionalist point of view, we proceed inductively from languages to language. This model implies that languages exist in themselves and that language is a secondary concept induced from the descriptive generalisations obtained from the study of languages. Authors in the broad area of functionalism (and also so-called cognitive linguistics) favour an inductive model of linguistic theory for one clear reason: they do not consider FL to be a legitimate object of study. In general, such authors conceive of languages as cultural objects or institutions that are not the instantiation of a biologically determined faculty of language, but are objects that must be studied in themselves and for themselves.
In view of these two models of linguistic theory it is easier to understand why there has been controversy in our discipline regarding the question of how many languages we need in order to formulate a theory of the faculty of language. In strictly logical terms, this question has only two answers: (i) a sufficient number of languages or (ii) all languages (the possible answer ‘none’ is not acceptable, since we would no longer be in the field of empirical science). And, again in strictly logical terms, the deductive model would have to choose (i) as a response, and the inductive model should choose (ii). However, it is clear that answer (ii) is unworkable, since studying all languages is impossible: thousands (perhaps tens of thousands) of languages have become extinct without a trace, and many of those that remain are undocumented. Therefore, the truly relevant question is what is meant by ‘a sufficient number’ for each of the models. Given the impossibility of option (ii), the inductive approach has developed protocols to determine representative samples, as in the case of typological studies (usually in the direction of maximising both genealogical and areal diversity). But we should not ignore the fact that any selection will be arbitrary and incomplete (and, therefore, potentially destructive to the inductive model). By the logic of the deductive method, if it is not possible to consider all languages, then it is not necessary to study more than one, so the answer to the question could be: the more the better, but at least one.
Perhaps this is the reason why Chomsky has argued that, theoretically, FL could be studied from a single language:
“I have not hesitated to propose a general principle of linguistic structure on the basis of observation of a single language […] The inference is legitimate, assuming that humans are not specifically adapted to learn one rather than another human language […] Assuming that the genetically determined language faculty is a common human possession, we may conclude that a principle of language is universal if we are led to postulate it as a ‘precondition’ for the acquisition of a single language. To test such a conclusion, we will naturally want to investigate other languages in comparable detail. We may find that our inference is refuted by such investigation” (Chomsky, 1979: 48).
Note that although Chomsky admits that it will be necessary to investigate other languages (in comparable detail) to confirm or falsify hypotheses, in fact (and again speaking theoretically) this would not be necessary if we were able to distinguish, in the study of a specific language, those of its elements which derive from the environment from those which emerge from the organism itself (and which are, therefore, ‘a precondition for the acquisition’). But, of course, we have no way of doing this directly, and hence, for such an objective, the consideration of language diversity is essential as a means of refining the theory. Verifying the formal properties in which languages (or dialects) differ has a very direct bearing on what aspects of language are not fixed by nature.
In any case, there is one important point to note here: whereas it is clear that the consideration of language diversity is crucial for the development of a theory of FL, this does not imply that we should accept, as functionalists do (e.g. Comrie 1981), that the theory of language must be inductive. Comrie finds unacceptable the idea that the study of a single language can serve to discover universal properties of language, and defends Greenberg’s view that, in order to establish something as universal in language, it is necessary to consider a wide variety of languages. Comrie recognises the coherence of Chomsky’s position, and makes a useful comparison with other sciences:
“[I]f one wanted to study the chemical properties of iron, then presumably one would concentrate on analysing a single sample of iron, rather than on analysing vast numbers of pieces of iron, still less attempting to obtain a representative sample of the world’s iron. This simply reflects our knowledge (based, presumably, on experience) that all instances of a given substance are homogeneous with respect to their chemical properties” (1981: 6).
According to Comrie, this assumption of uniformity cannot be applied in the study of linguistic universals. He rejects the comparison with iron as being inadequate, and proposes another one, which is very symptomatic:
“On the other hand, if one wanted to study human behaviour under stress, then presumably one would not concentrate on analysing the behaviour of just a single individual, since we know from experience that different people behave differently under similar conditions of stress, i.e. if one wanted to make generalizations about over-all tendencies in human behaviour under stress it would be necessary to work with a representative sample of individuals” (Comrie 1981: 6).
Which example best fits linguistic theory, that of the study of iron or that of the study of human behaviour under stress? It seems that the choice depends on the way in which the object of study is conceived: the faculty of language (“a real object of the natural world”) or languages themselves (its manifestations). In fact, they are not incompatible conceptions, but complementary ones.
Comrie justifies his preference by assuming that if what “we want to find out in work on language universals is the range of variation found across languages and the limits placed on this variation, it would be a serious methodological error to build into our research programme aprioristic assumptions about the range of variation” (Comrie, 1981: 6). Yet we must note here that the goal of Chomskyan linguistic theory is not to discover the range of variation found across languages and the limits placed on this variation, since it is not an inductive approach. As Chomsky himself pointed out, any theory of Universal Grammar (UG) must meet two conditions:
“On the one hand it must be compatible with the diversity of existing (indeed, possible) grammars. At the same time, UG must be sufficiently constrained and restrictive in the options it permits so as to account for the fact that each of these grammars develops in the mind on the basis of quite limited evidence” (Chomsky, 1981: 3).
The assumption of uniformity is not therefore a methodological error, but one of the factors that must constrain the form of a hypothetico-deductive theory of language. Comrie explicitly assumes in the excerpt quoted above that the goal of typologists is to discover “the range of variation found across languages and the limits placed on this variation”, that is, an inductive study from a set of facts; whereas Chomsky’s stance implies that variation is of interest only as a source of empirical testing for the theory of FL, in a deductive sense.
An inductive theory is essentially determined by the data from which it is obtained: the more detailed the description of linguistic variation, the more complex the theory becomes. A deductive theory, by contrast, is by definition less dependent on the data, although obviously it must have empirical support. As a consequence, inductive models tend to emphasise diversity to the detriment of language uniformity, whereas deductive models, such as generative grammar, tend to consider linguistic diversity as superficial and largely confined to the components of language externalisation.
Yang’s equation shows that the mechanisms of child language acquisition seem to be designed to optimise learning under limited exposure to data, since the smaller the amount of data in the learner’s linguistic experience, the greater the tolerance of exceptions in the induction of productive rules. In a curiously analogous way, a deductive syntactic theory is better able to rise above sparse data on linguistic variation in its search for the invariant principles of the human faculty of language, its primary object of study.