A new issue of the Journal of Language Evolution has just appeared, including a paper by Peeter Tinits, Jonas Nölle, and myself on the influence of usage context on the emergence of overspecification. (It actually appeared online a couple of weeks ago, and an earlier version was included in last year's Evolang proceedings.) Some of the volunteers who participated in our experiment were actually recruited via Replicated Typo – thanks to everyone who helped us out! Without you, this study wouldn't have been possible.
I hope that I'll find time to write a bit more about this paper in the near future, especially about its development, which might itself qualify as an interesting example of cultural evolution. Even though the paper just reports on a tiny experimental case study addressing a fairly specific phenomenon, we discovered in the process of writing that each of the three authors had quite different ideas of how language works, which made the write-up process much more challenging than expected (but arguably also more interesting).
For now, however, I’ll just link to the paper and quote our abstract:
This article investigates the influence of contextual pressures on the evolution of overspecification, i.e. the degree to which communicatively irrelevant meaning dimensions are specified, in an iterated learning setup. To this end, we combine two lines of research: In artificial language learning studies, it has been shown that (miniature) languages adapt to their contexts of use. In experimental pragmatics, it has been shown that referential overspecification in natural language is more likely to occur in contexts in which the communicatively relevant feature dimensions are harder to discern. We test whether similar functional pressures can promote the cumulative growth of referential overspecification in iterated artificial language learning. Participants were trained on an artificial language which they then used to refer to objects. The output of each participant was used as input for the next participant. The initial language was designed such that it did not show any overspecification, but it allowed for overspecification to emerge in 16 out of 32 usage contexts. Between conditions, we manipulated the referential context in which the target items appear, so that the relative visuospatial complexity of the scene would make the communicatively relevant feature dimensions more difficult to discern in one of them. The artificial languages became overspecified more quickly and to a significantly higher degree in this condition, indicating that the trend toward overspecification was stronger in these contexts, as suggested by experimental pragmatics research. These results add further support to the hypothesis that linguistic conventions can be partly determined by usage context and show that experimental pragmatics can be fruitfully combined with artificial language learning to offer valuable insights into the mechanisms involved in the evolution of linguistic phenomena.
In addition to our article, there are a number of other papers in the new JoLE issue that are well worth a read, including another Iterated Learning paper by Clay Beckner, Janet Pierrehumbert, and Jennifer Hay, who have conducted a follow-up on the seminal Kirby, Cornish & Smith (2008) study. Apart from presenting highly relevant findings, they also make some very interesting methodological points.