Commentators have already got hung up on whether English became simplified before or after spreading, but this misses the impact of the article: there is an alternative approach to linguistics which looks at the differences between languages and recognises social factors as the primary source of linguistic change. Furthermore, these ideas are testable using statistics and genetic methods. It’s a pity the article didn’t mention the possibility of experimental approaches, including Gareth Roberts’ work on emerging linguistic diversity and work on cultural transmission using the Pictionary paradigm (Simon Garrod, Nick Fay, Bruno Galantucci, see here and here).
David Robson (2011). Power of Babel: Why one language isn’t enough. New Scientist, 2842. Online.
How many languages do you speak? This is actually a difficult question, because there’s no such thing as a language, as I argue in this video.
This is a video of a talk I gave as part of the Edinburgh University Linguistics & English Language Society’s Soap Vox lecture series. I argue that ‘languages’ are not discrete, monolithic, static entities – they are fuzzy, emergent, complex, dynamic, context-sensitive categories. I don’t think anyone would actually disagree with this, yet some models of language change and evolution still include representations of a ‘language’ where the learner must ‘pick’ a language to speak, rather than picking variants and allowing higher-level categories like languages to emerge.
In this lecture I argue that languages shouldn’t be modelled as discrete, unchanging things by demonstrating that there’s no consistent, valid way of measuring the number of languages that a person speaks.
The slides aren’t always in view (it improves as the lecture goes on), but I’ll try and write this up as a series of posts soon.
A paper by Gell-Mann & Ruhlen in PNAS this week conducts a phylogenetic analysis of word order in languages and concludes that SOV is the most likely word order of the ancestral language. The main conclusions from the analysis are:
(i) The word order in the ancestral language was SOV.
(ii) Except for cases of diffusion, the direction of syntactic change, when it occurs, has been for the most part SOV > SVO and, beyond that, SVO > VSO/VOS with a subsequent reversion to SVO occurring occasionally. Reversion to SOV occurs only through diffusion.
(iii) Diffusion, although important, is not the dominant process in the evolution of word order.
(iv) The two extremely rare word orders (OVS and OSV) derive directly from SOV.
This analysis agrees with Luke Maurits‘ work on function and Uniform Information Density (blogged about here).
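As a rough illustration of the directional claims above (not the authors’ actual phylogenetic method), here is a minimal sketch that treats word order change within a lineage as a Markov chain whose allowed transitions follow conclusions (i), (ii) and (iv). The transition probabilities are invented for illustration, and diffusion is ignored entirely.

```python
import random

# Toy Markov chain over basic word orders, loosely encoding the directional
# claims above (SOV > SVO, SVO > VSO/VOS with occasional reversion to SVO,
# OVS/OSV deriving directly from SOV). The probabilities are invented for
# illustration and diffusion is ignored.
TRANSITIONS = {
    "SOV": {"SOV": 0.90, "SVO": 0.09, "OVS": 0.005, "OSV": 0.005},
    "SVO": {"SVO": 0.92, "VSO": 0.04, "VOS": 0.04},
    "VSO": {"VSO": 0.95, "SVO": 0.05},  # occasional reversion to SVO
    "VOS": {"VOS": 0.95, "SVO": 0.05},
    "OVS": {"OVS": 1.0},                # no way back without diffusion
    "OSV": {"OSV": 1.0},
}

def step(order):
    """Sample the next word order given the current one."""
    r = random.random()
    cumulative = 0.0
    for next_order, p in TRANSITIONS[order].items():
        cumulative += p
        if r < cumulative:
            return next_order
    return order

def simulate(start="SOV", generations=100):
    """Follow one lineage's word order for a number of generations."""
    order = start
    for _ in range(generations):
        order = step(order)
    return order

if __name__ == "__main__":
    random.seed(1)
    finals = [simulate() for _ in range(1000)]
    # Distribution of word orders across 1000 lineages after 100 generations,
    # all starting from an SOV ancestor.
    for order in TRANSITIONS:
        print(order, finals.count(order))
```

Running it just prints the distribution of word orders across 1,000 simulated lineages after 100 generations from an SOV ancestor, under those made-up rates.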
Being someone who likes to welcome new academic blogs on the scene, particularly ones with a linguistic tilt, I urge you to go over, visit, read and maybe even leave a comment at A Rare Bite of Linguistics. It’s only one post old, but its topic of language change and grammaticalisation fits in nicely with this blog’s overarching themes. As some of you might know, I wrote a bit about grammaticalisation at the start of this year, so the work is especially useful to lay folk such as myself. The post is the first of two reporting the author’s findings from her MA project, which focused on the grammatical status of certainly in collocation with modal verbs. In the author’s own words:
My hypothesis is that the adverb is not fully grammaticalised even though it might show signatures of grammaticalisation.
Following Noël (2007), Bybee (2003) and Hopper and Traugott (2003) grammaticalisation affects a construction primarily and a single word secondarily; I suggest that, for modal synergy, a structural unit is formed of a modal verb and an adjacent modal adverb in mid-position, e.g. would certainly, must certainly etc. Mid-position is the ‘natural habitat’ of the modal particle and if there is grammaticalisation of certainly into a modal particle, this is consequently where we would expect to find it. Moreover, if this were a grammatical unit/construction consisting of two grammatical constituents, the grammaticality would lie in the bondedness (syntagmatic restriction) of the two elements, and the semantic and paradigmatic restrictions which are said to be part of grammaticalisation (cf. Lehmann’s parameters): we would expect an abstract meaning and perhaps reduced phonological properties (which I cannot test), paradigmaticity, low paradigmatic variability and high cohesion with modal verbs in general. Scope is a contested parameter and it seems that in this case too, we will deal with increased scope. Lastly, as Bybee (2003) indicated, frequency plays a staple role in the propagation of an item to becoming grammaticalised (see also Croft 2000).
It’s at quite a high level, but she does provide good, comprehensive definitions of what she’s studying and, more importantly, a fleshed out understanding of grammaticalisation theory and the processes underpinning it.
Lexicons from around 20% of the extant languages spoken by hunter-gatherer societies were coded for etymology (available in the supplementary material). The levels of borrowed words were compared with the languages of agriculturalist and urban societies taken from the World Loanword Database. The study focussed on three locations: Northern Australia, northwest Amazonia, and California and the Great Basin.
In opposition to some previous hypotheses, hunter-gatherer societies did not borrow significantly more words than agricultural societies in any of the regions studied.
The rates of borrowing were universally low, with most languages not borrowing more than 10% of their basic vocabulary. The mean rate for hunter-gatherer societies was 6.38%, while the mean for agriculturalist societies was 5.15%. This difference is actually significant overall, but not within particular regions. Therefore, the authors claim, “individual area variation is more important than any general tendencies of HG or AG languages”.
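To get a feel for the kind of comparison being made (the paper’s own statistics are more careful and region-aware), here is a minimal sketch of a permutation test on mean borrowing rates between the two groups; the rates below are invented placeholders, not the coded data from the paper.

```python
import random

# Hypothetical borrowing rates (% of basic vocabulary borrowed) for
# hunter-gatherer (HG) and agriculturalist (AG) languages. These values
# are invented placeholders, not data from Bowern et al. (2011).
hg_rates = [2.1, 4.5, 6.0, 7.2, 9.8, 3.3, 5.5, 12.0]
ag_rates = [1.8, 3.9, 5.1, 6.4, 2.7, 4.8, 7.5, 8.9]

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(mean(perm_a) - mean(perm_b)) >= observed:
            count += 1
    return observed, count / n_perm

if __name__ == "__main__":
    diff, p = permutation_test(hg_rates, ag_rates)
    print(f"difference in means: {diff:.2f} percentage points, p = {p:.3f}")
```

With real data you would also want to run the same comparison within each region, which is where the paper finds the overall difference disappears.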
Interestingly, in some regions, mobility, population size and population density were significant factors. Mobile populations and low-density populations had significantly lower borrowing rates, while smaller populations borrowed proportionately more words. This may be in line with the theory of linguistic carrying capacity as discussed by Wintz (see here and here). The level of exogamy was a significant factor in Australia.
The study concludes that phylogenetic analyses are a valid form of linguistic analysis because the level of horizontal transmission is low. That is, languages are tree-like enough for phylogenetic assumptions to be valid:
“While it is important to identify the occasional aberrant cases of high borrowing, our results support the idea that lexical evolution is largely tree-like, and justify the continued application of quantitative phylogenetic methods to examine linguistic evolution at the level of the lexicon. As is the case with biological evolution, it will be important to test the fit of trees produced by these methods to the data used to reconstruct them. However, one advantage linguists have over biologists is that they can use the methods we have described to identify borrowed lexical items and remove them from the dataset. For this reason, it has been proposed that, in cases of short to medium time depth (e.g., hundreds to several thousand years), linguistic data are superior to genetic data for reconstructing human prehistory “
Excellent – linguistics beats biology for a change!
However, while the level of horizontal transmission might not be a problem in this analysis, there may be a problem in the paths of borrowing. If a language borrows relatively few words, but those words come from many different languages and have taken many paths through previous generations, there may be a subtle effect of horizontal transmission that is being masked. The authors acknowledge that they did not address the direction of transmission in a quantitative way.
A while ago, I did a study of English etymology, trying to quantify the level of horizontal transmission through time (description here). The graph for English doesn’t look tree-like to me; perhaps the dynamics of borrowing work differently for languages with a high level of contact:
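For anyone curious about the sort of calculation behind a graph like that, here is a minimal sketch of computing a time-resolved borrowing profile; the word list, source labels and centuries are invented examples, not the data used in my study.

```python
from collections import defaultdict

# Hypothetical etymology-coded entries: (word, source language, century of
# first attestation). Invented examples, not the dataset behind the graph.
lexicon = [
    ("house", "native", 9),
    ("sky", "Old Norse", 13),
    ("beef", "French", 13),
    ("algebra", "Arabic", 16),
    ("piano", "Italian", 18),
    ("robot", "Czech", 20),
]

def borrowing_by_century(entries):
    """Proportion of new words per century that are borrowed (non-native)."""
    totals = defaultdict(int)
    borrowed = defaultdict(int)
    for _word, source, century in entries:
        totals[century] += 1
        if source != "native":
            borrowed[century] += 1
    return {c: borrowed[c] / totals[c] for c in sorted(totals)}

if __name__ == "__main__":
    for century, proportion in borrowing_by_century(lexicon).items():
        print(f"{century}th century: {proportion:.0%} borrowed")
```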
Claire Bowern, Patience Epps, Russell Gray, Jane Hill, Keith Hunley, Patrick McConvell, & Jason Zentz (2011). Does Lateral Transmission Obscure Inheritance in Hunter-Gatherer Languages? PLoS ONE, 6(9). doi:10.1371/journal.pone.0025195
Stephen Fry has embarked on a series of documentaries about language, beginning with the evolution of language, which he calls ‘the final frontier’ of human understanding. The typical documentary hype is all here: Steven Pinker sits in a gigantic fish tank with bits of taxidermied brain lying around like sandwiches; Michael Tomasello appears to live in a tropical primate enclosure; Fry conducts his studies from a medieval study complete with quills, a CGI tree of languages and a talking parrot.
Despite this, it was actually a coherent and comprehensive review of topics in the field: language versus communication in animals, physiological constraints on language, creativity and the desire to share information, the pragmatic origins of language, FoxP2 and the poverty of the stimulus. Bilingualism is even added to this canon of interesting ways to approach the origins of language, somewhat tempered by Fry’s question “wouldn’t it be better if everybody spoke Esperanto?”.
Mercifully, Fry seems to be actually interested rather than trying to build up the conspiracy plot format endemic in other science documentaries. There are some odd diversions to a Klingon version of Hamlet, a trip to a German Christmas market and a slightly awkward re-enactment of a feral child case, but all in all the message is not objectionable: There is a graded difference between non-human and human communication, it’s partly genetic and partly cultural and languages continually change under pressures to be learned and to express new ideas. There are also welcome additions of the original Wug test and, of course, Fry & Laurie’s seminal sketch about language.
Overall, I’d say it was the second best documentary the BBC have made about the origins of language.
Having had several months off, I thought I’d kick things off by looking at a topic that’s garnered considerable interest in evolutionary theory, known as degeneracy. As a concept, degeneracy is a well-known characteristic of biological systems, found in the genetic code (many different nucleotide sequences encode a polypeptide) and immune responses (populations of antibodies and other antigen-recognition molecules can take on multiple functions), among many others (cf. Edelman & Gally, 2001). More recently, degeneracy has come to be appreciated as applying to a wider range of phenomena, with Paul Mason (2010) offering the following value-free, scientific definition:
Degeneracy is observed in a system if there are components that are structurally different (nonisomorphic) and functionally similar (isofunctional) with respect to context.
A pressing concern in evolutionary research is how increasingly complex forms “are able to evolve without sacrificing robustness or the propensity for future beneficial adaptations” (Whitacre & Bender, 2010). One common solution is to refer to redundancy: duplicate elements that have a structure-to-function ratio of one-to-one (Mason, 2010). Nature does redundancy well, as exemplified by the human body: we have two eyes, two lungs, two kidneys, and so on. Still, even with redundant components, selection in biological systems would result in a situation where competitive elimination leads to the eventual extinction of redundant variants (ibid).
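To make the distinction concrete, consider the genetic code example mentioned above: structurally different codons (not duplicate copies of one codon, as simple redundancy would have it) map onto the same amino acid. Below is a minimal sketch using a hand-picked subset of the standard codon table.

```python
from collections import defaultdict

# A hand-picked subset of the standard genetic code: structurally different
# codons (nonisomorphic) that encode the same amino acid (isofunctional).
CODON_TABLE = {
    "UUA": "Leucine", "UUG": "Leucine", "CUU": "Leucine",
    "CUC": "Leucine", "CUA": "Leucine", "CUG": "Leucine",
    "GCU": "Alanine", "GCC": "Alanine", "GCA": "Alanine", "GCG": "Alanine",
    "AUG": "Methionine",
}

def degeneracy(mapping):
    """Group structures by the function they perform."""
    by_function = defaultdict(list)
    for structure, function in mapping.items():
        by_function[function].append(structure)
    return by_function

if __name__ == "__main__":
    for amino_acid, codons in degeneracy(CODON_TABLE).items():
        print(f"{amino_acid}: {len(codons)} structurally different codons -> same function")
```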
Does your social network determine how rational you can be? When trying to co-ordinate with a number of other people on a cultural feature, the locally rational thing to do is to go with the majority. However, in certain situations it might make sense to choose the minority feature. This means that learning multiple features might be rational in some situations, even if there is a pressure against redundancy. I’m interested in whether there are situations in which it is rational to be bilingual and whether bilingualism is stable over long periods of time. Previous models suggest that bilingualism is not stable (e.g. Castello et al. 2007), and therefore an irrational strategy (at least not a primary one), but these were based on locally rational learners.
This week we had a lecture from Simon DeDeo on system-wide timescales in the behaviour of macaques. He talked about spin glasses and rationality, which got me thinking. A spin glass is a kind of magnetised material where the ‘spin’ or magnetism (plus or minus) of the molecules does not reach a consensus, but flips about chaotically. This happens when the structure of the material creates ‘frustrated’ triangles, where a molecule is trying to co-ordinate with other molecules that have opposing spins, making it difficult to resolve the tensions. Long chains of interconnected frustrated triangles can cause system-wide flips on the order of hours or days and are difficult to study both in models (e.g. the Ising model) and in the real world.
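To make ‘frustration’ concrete, here is a minimal sketch (not DeDeo’s model) of a single frustrated triangle: three spins whose couplings cannot all be satisfied at once, so no configuration reaches the fully satisfied energy.

```python
from itertools import product

# A single frustrated triangle: three spins (+1 or -1) with couplings that
# cannot all be satisfied simultaneously. A coupling of +1 means the two
# spins "want" to agree; -1 means they want to disagree. One disagreeing
# bond on a triangle of otherwise agreeing bonds is enough to frustrate it.
COUPLINGS = {(0, 1): 1, (1, 2): 1, (0, 2): -1}

def energy(spins):
    """Ising-style energy: each satisfied bond lowers the energy by 1."""
    return -sum(J * spins[i] * spins[j] for (i, j), J in COUPLINGS.items())

if __name__ == "__main__":
    best = min(energy(s) for s in product([-1, 1], repeat=3))
    for spins in product([-1, 1], repeat=3):
        marker = " <- ground state" if energy(spins) == best else ""
        print(spins, "energy:", energy(spins), marker)
    # The minimum energy is -1, not -3: whichever way the spins are set,
    # at least one of the three bonds is unsatisfied. That is frustration.
```

Chaining many such triangles together is what produces the system-wide instability described above.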
EDIT: Since writing this post, I have discovered a major flaw in the conclusion, which is described here.
One of the problems with large-scale statistical analyses of linguistic typologies is the temporal resolution of the data. Because we typically only have single measurements for each population, we can’t see the dynamics of the system. A correlation between two variables that exists now may be an accident of more complex dynamics. For instance, Lupyan & Dale (2010) find a statistically significant correlation between a linguistic population’s size and its morphological complexity. One hypothesis is that the languages of larger populations are adapting to adult learners as they come into contact with other languages. Hay & Bauer (2007) also link demography with phonemic diversity. However, it’s not clear how robust these relationships are over time, because of a lack of data on these variables in the past.
To test this, a benchmark is needed. One method is to use careful statistical controls, such as controlling for the area in which the language is spoken, the density of the population, and so on. However, these data also tend to be synchronic. Another method is to compare the results against the predictions of a simple model. Here, I propose a simple model based on a dynamic where cultural variants in small populations change more rapidly than those in large populations. This models the stochastic nature of small samples (see the introduction of Atkinson, 2011 for a brief review of this idea). This model tests whether chaotic dynamics lead to periods of apparent correlation between variables. Source code for this model is available at the bottom.
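The actual source code is linked at the bottom of the post; purely as an independent sketch of the dynamic described above, the following toy simulation lets a cultural variable drift with noise inversely related to population size and then measures the correlation between population size and that variable at a single synchronic snapshot.

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def simulate_snapshot(n_pops=50, generations=500, seed=None):
    """Let a cultural variable drift, with more noise in smaller populations,
    then return the correlation between population size and that variable at
    a single point in time (as in a synchronic typological sample)."""
    rng = random.Random(seed)
    sizes = [rng.randint(100, 100000) for _ in range(n_pops)]
    complexity = [0.0] * n_pops  # arbitrary starting value for every population
    for _ in range(generations):
        for i, size in enumerate(sizes):
            # Smaller populations take larger random steps (stronger drift).
            complexity[i] += rng.gauss(0, 1.0 / math.sqrt(size))
    return pearson(sizes, complexity)

if __name__ == "__main__":
    correlations = [simulate_snapshot(seed=s) for s in range(200)]
    strong = sum(abs(r) > 0.3 for r in correlations)
    print(f"{strong} of {len(correlations)} runs show |r| > 0.3 from drift alone")
```

The point is simply that snapshots of such a drifting system can occasionally throw up apparently strong correlations on their own, which is the kind of benchmark a real synchronic correlation needs to beat.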
In my previous post on linguistic replicators and major transitions, I mentioned grammaticalisation as a process that might inform us about the contentive-functional split in the lexicon. Naturally, it makes sense that grammaticalisation might offer insights into other transitions in linguistics, and, thanks to an informative comment from a regular reader, I was directed to a book chapter by Heine & Kuteva (2007): The Genesis of Grammar: On combining nouns. I might dedicate a post to the paper in the future, but, as with many previous claims, this probably won’t happen. So instead, here is the abstract and a table of the authors’ hypothesised grammatical innovations:
That it is possible to propose a reconstruction of how grammar evolved in human languages is argued for by Heine and Kuteva (2007). Using observations made within the framework of grammaticalization theory, these authors hypothesize that time-stable entities denoting concrete referential concepts, commonly referred to as ‘nouns’, must have been among the first items distinguished by early humans in linguistic discourse. Based on crosslinguistic findings on grammatical change, this chapter presents a scenario of how nouns may have contributed to introducing linguistic complexity in language evolution.