Population Size and Rates of Language Change

In previous posts, I’ve looked at the relationship between cultural evolution and demography (see here, here and here). As such, it makes sense to see if such methods are applicable to language, which is, after all, a cultural product. So, having spent the last few days looking over the literature on language and demography, I found the following paper on population size and language change (free download). In it, the authors, Søren Wichmann and Eric Holman, use lexical data from WALS to test for an effect of the number of speakers on the rate of language change. Their general findings argue against a strong influence of population size; instead, they favour a model in which changes propagate at a local level through a network where individuals have different degrees of connectivity. Here is the abstract:

Previous empirical studies of population size and language change have produced equivocal results. We therefore address the question with a new set of lexical data from nearly one-half of the world’s languages. We first show that relative population sizes of modern languages can be extrapolated to ancestral languages, albeit with diminishing accuracy, up to several thousand years into the past. We then test for an effect of population against the null hypothesis that the ultrametric inequality is satisfied by lexical distances among triples of related languages. The test shows mainly negligible effects of population, the exception being an apparently faster rate of change in the larger of two closely related variants. A possible explanation for the exception may be the influence on emerging standard (or cross-regional) variants from speakers who shift from different dialects to the standard. Our results strongly indicate that the sizes of speaker populations do not in and of themselves determine rates of language change. Comparison of this empirical finding with previously published computer simulations suggests that the most plausible model for language change is one in which changes propagate on a local level in a type of network in which the individuals have different degrees of connectivity.
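To unpack the null hypothesis in the abstract: if all languages change at the same rate, then the lexical distances among any triple of related languages should satisfy the ultrametric inequality, i.e. each distance is no greater than the larger of the other two (equivalently, the two largest distances in the triple are equal). A faster-changing member of the triple shows up as a violation. Here is a minimal sketch of that check (my own illustration, not the authors’ actual test statistic, which is applied across many triples with appropriate tolerances):

```python
def is_ultrametric(d_ab, d_ac, d_bc, tol=0.0):
    """Check the ultrametric inequality for one triple of languages A, B, C.

    Each pairwise lexical distance must be <= the max of the other two
    (within an optional tolerance `tol` for noisy distance estimates).
    """
    return (d_ab <= max(d_ac, d_bc) + tol and
            d_ac <= max(d_ab, d_bc) + tol and
            d_bc <= max(d_ab, d_ac) + tol)


# Equal rates of change: the two distances to the outgroup match.
print(is_ultrametric(0.5, 0.5, 0.2))   # consistent with equal rates
# One language changed faster: its distance to the outgroup is inflated.
print(is_ultrametric(0.7, 0.5, 0.2))   # violation
```

Under this framing, finding that population size does not predict such violations is what licenses the paper’s conclusion that speaker numbers alone do not determine rates of change.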

As I’m in the middle of several other things at the moment, I don’t really have time to provide a thorough review of this paper. Having said that, I agree with their claim that population size is unlikely to account for rates of language change. I reckon their results would be stronger if they also factored in population density: populations that are large and dense should change faster than those that are large and dispersed. The main point is that population size and population density together influence the degree of social interconnectivity. Nettle (1999), for instance, argues that “spreading an innovation over a tribe of 500 people is much easier and takes much less time than spreading one over five million people.” This is fairly reasonable if we are looking at the spread of a single innovation within each of these populations. However, if those 500 people are spread across a large distance, then their transmission chain is going to be stretched, effectively lowering the rate of transmission. The same applies for a population of five million individuals packed into a small area: arguably, given the right conditions, a population of five million could show greater levels of interconnectivity than one of 500. I think it’s this aspect, the level of social interconnectivity, which may be more relevant to the rate of language change (other things to test for include writing systems/literacy and inter-language contact).

Language About Language

How is it, then, that we can talk about talking? If you are willing to assume the existence of basic perceptual and cognitive capacities, a relatively simple answer follows immediately. The sounds of talk are, after all, sounds like any other sounds. We can perceive them in the same way we perceive the sound of a waterfall or a bird’s song, a thunderclap or the rustling of leaves in the wind, a cricket’s chirp or the breaking of waves on a beach. All are things we can hear, easily and naturally, and so it is with the sound of the human voice.

Roman Jakobson famously theorized that language has six functions: referential, emotive, poetic, conative, phatic, and metalingual. The metalingual function is the one we’re interested in: our capacity to speak about speech. Jakobson talked of the metalingual function as an orientation toward the language code, which seems just a bit grand. For I’m led to believe that many languages lack terms for explicitly talking about the ‘code.’ Thus, in The Singer of Tales (Atheneum 1973, orig. Harvard 1960), Albert Lord attests (p. 25):

Man without writing thinks in terms of sound groups and not in words, and the two do not necessarily coincide. When asked what a word is, he will reply that he does not know, or he will give a sound group which may vary in length from what we call a word to an entire line of poetry, or even an entire song. [Remember, Lord is writing about oral narrative.] The word for “word” means an “utterance.” When the singer is pressed then to say what a line is, he, whose chief claim to fame is that he traffics in lines of poetry, will be entirely baffled by the question; or he will say that since he has been dictating and has seen his utterances being written down, he has discovered what a line is, although he did not know it as such before, because he had never gone to school.

While I’m willing to entertain doubts about the full generality of this statement – “man without writing” – I assume that it is an accurate report about the Yugoslavian peasants among whom Milman Parry and Albert Lord conducted their fieldwork, and that it also applies to other preliterate peoples, though not necessarily to all.

Given those caveats, the paragraph is worth re-reading. Before doing so, recall how casually we have come to see language as a window on the workings of the mind in the Chomskyian and post-Chomskyian eras. If that is the case, then what can one see through a window that lacks even a word for words, that fails to distinguish between words and utterances? And what of the poets who don’t know what a line is? The lack of such knowledge does not stand in the way of the poeticizing, any more than the lack of knowledge of generative grammar precludes the ability to talk intelligently on a vast range of subjects.

Continue reading “Language About Language”

Some Links #14: Can Robots create their own language?

Can Robots create their own language? Sean already mentioned this in the comments for a previous post. But as I’m a big fan of Luc Steels’ work this video may as well go on the front page:

Speaking in Tones: Music and Language Partner in the Brain. The first of two really good articles in Scientific American. As you can guess from the title, this article looks at current research into the links between music and language, such as the overlap in brain circuitry, how prosodic qualities of speech are vital in language development, and how the way a person hears a set of musical notes may be affected by their native language. Sadly, the article is behind a paywall, so unless you have a subscription you’ll only get to read the first few paragraphs, plus the one I’m about to quote:

In a 2007 investigation neuroscientists Patrick Wong and Nina Kraus, along with their colleagues at Northwestern University, exposed English speakers to Mandarin speech sounds and measured the electrical responses in the auditory brain stem using electrodes placed on the scalp. The responses to Mandarin were stronger among participants who had received musical training — and the earlier they had begun training and the longer they had continued training, the stronger the activity in these brain areas.

Carried to extremes: How quirks of perception drive the evolution of species. In the second good article, which by the way is free to view, Ramachandran and Ramachandran propose another mechanism of evolution in regards to perception:

Our hypothesis involves the unintended consequences of aesthetic and perceptual laws that evolved to help creatures quickly identify what in their surroundings is useful (food and potential mates) and what constitutes a threat (environment dangers and predators). We believe that these laws indirectly drive many aspects of the evolution of animals’ shape, size and coloration.

It’s important to note that they are not arguing against natural selection; rather, they are simply offering an additional force that guides the evolution of a species. It’s quite interesting, even if I’m not completely convinced by their hypothesis — but my criticisms can wait until they publish an actual academic paper on the subject.

A robotic model of the human vocal tract? Talking Brains links to the Anthropomorphic Talking Robot developed at Waseda University. Apparently it can produce some vowels. Here is a picture of the device (which looks like some sort of battle drone):

Battle Drone or Model Vocal Tract?

Y Chromosome II: What is its structure? Be sure to check out the new contributor over at GNXP, Kele Cable, and her article on the structure of the Y Chromosome. I found this sentence particularly amusing:

As you can see in Figure 1, the Y chromosome (on the right) is puny and diminutive. It really is kind of pathetic once you look at it.

Scientopia. A cool collection of bloggers have banded together to form Scientopia. With plenty of articles having already appeared it all looks very promising. In truth, it’s probably not going to be as successful as ScienceBlogs, largely because it doesn’t pay contributors and, well, nothing is ever going to be as big as ScienceBlogs was at its peak. This new ecology of the science blogosphere is well articulated in a long post by Bora over at A Blog Around the Clock.

Language Evolution and Tetris!

Hello, people of the Blogosphere!

Why not take some time out from your dedicated reading to do a little language evolution experiment!  And all you have to do is play Tetris!

The Evolution of Tetris

… and learn an alien language.  It takes no more than 10 minutes.

The instructions and game are here:

http://blake.ppls.ed.ac.uk/~s0451342/tetris/Tetris_Experiment.htm

Due to me being a terrible programmer, it’ll probably crash or do some weird things.  But it’s all in the name of pseudo-science!

P.S. – users of the latest Firefox will need to update java.

Language evolution in the laboratory

When talking about language evolution there’s always resistance from people exclaiming: ‘but how do we know?’, ‘surely all of this is conjecture!’ and, because of this, ‘what’s the point?’

Thomas Scott-Phillips and Simon Kirby have written a new article (in press) in Trends in Cognitive Sciences which reviews some of the techniques currently used to study language evolution through laboratory experiments.

The Problem of language evolution

The problem of language evolution encompasses not only the need to explain biologically how language came about, but also how language came to be the way it is today through processes of cultural evolution. Because of this, potential ambiguity arises when using the term ‘language evolution’. To resolve this ambiguity, the authors put forward the following:

Language evolution researchers are interested in the processes that led to a qualitative change from a non-linguistic state to a linguistic one. In other words, language evolution is concerned with the emergence of language

Continue reading “Language evolution in the laboratory”

Experiments in Communication pt 1: Artificial Language Learning and Constructed Communication Systems

ResearchBlogging.orgMuch of recent research in linguistics has involved the use of experimentation to directly test hypotheses by comparing and contrasting real-world data with that of laboratory results and computer simulations. In a previous post I looked at how humans, non-human primates, and even non-human animals are all capable of high-fidelity cultural transmission. Yet, to apply this framework to human language, another set of experimental literature needs to be considered, namely: artificial language learning and constructed communication systems.

Continue reading “Experiments in Communication pt 1: Artificial Language Learning and Constructed Communication Systems”

Language Evolved due to an “animal connection”?

A new hypothesis of language evolution: language evolved due to an “animal connection”, according to Pat Shipman:

Next, the need to communicate that knowledge about the behavior of prey animals and other predators drove the development of symbols and language around 200,000 years ago, Shipman suggests.

For evidence, Shipman pointed to the early symbolic representations of prehistoric cave paintings and other artwork that often feature animals in a good amount of detail. By contrast, she added that crucial survival information about making fires and shelters or finding edible plants and water sources was lacking.

“All these things that ought to be important daily information are not there or are there in a really cursory, minority role,” Shipman noted. “What that conversation is about are animals.”

Of course, much evidence is missing, because “words don’t fossilize,” Shipman said. She added that language may have arisen many times independently and died out before large enough groups of people could keep it alive.

Nothing but wild conjecture as usual, but still interesting.

Original article here.

Chomsky Chats About Language Evolution

If you go to this page at Linguistic Inquiry (house organ of the Chomsky school), you’ll find this blurb:

Episode 3: Samuel Jay Keyser, Editor-in-Chief of Linguistic Inquiry, has shared a campus with Noam Chomsky for some 40-odd years via MIT’s Department of Linguistics and Philosophy. The two colleagues recently sat down in Mr. Chomsky’s office to discuss ideas on language evolution and the human capacity for understanding the complexities of the universe. The unedited conversation was recorded on September 11, 2009.

I’ve neither listened to the podcast nor read the transcript, but both are linked here. Who knows, maybe you will. FWIW, I was strongly influenced by Chomsky in my undergraduate years, but the lack of a semantic theory was troublesome. Yes, there was so-called generative semantics, but that didn’t look like semantics to me; it looked like syntax.

Then I found Syd Lamb’s stuff on stratificational grammar & that looked VERY interesting. Why? For one thing, the diagrams were intriguing. For another, Lamb used the same formal constructs for phonology, morphology, syntax and (what little) semantics (he had). That elegance appealed to me. Still does, & I’ve figured out how to package a very robust semantics into Lamb’s diagrammatic notation. But that’s another story.

Some Links #13: Universal Grammar Haters

Universal Grammar haters. Mark Liberman takes umbrage with claims that Ewa Dabrowska’s recent work challenges the concept of a biologically evolved substrate for language. Put simply: it doesn’t. What her experiments suggest is that there are considerable differences in native language attainment. As some of you will probably know, I’m not necessarily a big fan of most UG conceptions; however, there are plenty of papers that directly deal with such issues, and Dabrowska’s is not one of them. In Liberman’s own words:

In support of this view, let me offer another analogy. Suppose we find that deaf people are somewhat more likely than hearing people to remember the individual facial characteristics of a stranger they pass on the street. This would be an interesting result, but would we spin it to the world as a challenge to the widely-held theory that there’s an evolutionary substrate for the development of human face-recognition abilities?

Remote control neurons. I remember reading about optogenetics a while back. It’s a clever technique that enables neural manipulation through the use of light-activated channels and enzymes. Kevin Mitchell over at GNXP classic refers to a new approach where neurons are activated using a radio-frequency magnetic field. The advantage of this new approach is straightforward: magnetic fields pass through brains far more easily than light. It means the new approach is a lot less invasive, with no need to insert micro-optical fibres or light-emitting diodes. Cool stuff.

Motor imagery enhances object recognition. Neurophilosophy has an article about a study showing that motor simulations may enhance the recognition of tools:

According to these results, then, the simple action of squeezing the ball not only slowed down the participants’ naming of tools, but also slightly reduced their accuracy in naming them correctly. This occurred, the authors say, because squeezing the ball involves the same motor circuits needed for generating the simulation, so it interferes with the brain’s ability to generate the mental image of reaching out and grasping the tool. This in turn slows identification of the tools, because their functionality is an integral component of our conceptualization of them. There is other evidence that parallel motor simulations can interfere with movements, and with each other: when reaching for a pencil, people have a larger grip aperture if a hammer is also present than if the pencil is by itself.

On the Origin of Science Writers. If you fancy yourself as a science writer, then Ed Yong, of Not Exactly Rocket Science, wants to read your story. As expected, he’s had a fairly large response (97 comments at the time of writing), which includes some of my favourite science journalists and bloggers. It’s already a useful resource, full of fascinating stories and bits of advice from a diverse set of individuals.

Some thoughts about science blog aggregation. Although ScienceBlogs is still hanging about, many people, including myself, are looking for an alternative to the network. Dave Munger points to FriendFeed as one potential solution, having set up a feed for all the anthropology posts coming in from Research Blogging. Also, in the comments, Christina Pikas mentioned Nature Blogs, which, I’m ashamed to say, I hadn’t come across before.

Physicists get linguist envy?

So I wrote a post a couple of weeks ago on my Hungarian friend’s blog about, amongst other things, why some linguists have physics envy, but I’ve just read a New Scientist article in which it seems physicists can have linguistics envy too!

Murray Gell-Mann, a Nobel Prize-winning physicist (who proposed quarks), has taken it upon himself to try to work out the origins of human language:

Another pet project is an attempt to trace the majority of human languages back to a common root. Since the 19th century, linguists have been comparing languages to infer their common ancestry, but in most cases, Gell-Mann says, this kind of analysis loses the trail 6000 or 7000 years back. He says most linguists insist it is impossible to follow the trail any further into the past and – this is what truly rankles with him – “absurdly, they don’t even want to try”.

Gell-Mann heads SFI’s Evolution of Human Languages (EHL) programme. The EHL linguists say they can go even further back by classifying language families into superfamilies and even into a super-superfamily. “What we’ve found,” Gell-Mann explains, “is tentative evidence for a situation in which a huge fraction of all human languages are descended from one spoken 20,000 years ago, towards the end of the last ice age.” The team does not claim to account for all languages, though, and remains agnostic about whether they can eventually do so. “All of this just comes from following the data,” he says.

I love that attempting to trace the majority of human languages back to a common root can be described as a ‘pet project’.

If anyone’s interested here’s a paper he wrote on the subject:

Murray Gell-Mann, Ilia Peiros, George Starostin. Distant Language Relationship: The Current Perspective.