The natural approach has always been: Is [language] well designed for use, understood typically as use for communication? I think that’s the wrong question. The use of language for communication might turn out to be a kind of epiphenomenon… If you want to make sure that we never misunderstand one another, for that purpose language is not well designed, because you have such properties as ambiguity. If we want to have the property that the things that we usually would like to say come out short and simple, well, it probably doesn’t have that property (Chomsky, 2002: 107).
The paper itself argues against Chomsky’s position by claiming that ambiguity allows for more efficient communication systems. First of all, looking at ambiguity from the perspective of coding theory, Piantadosi et al. argue that any good communication system will leave out information already present in the context (assuming the context is informative about the intended meaning). Their other point, which they test through a corpus analysis of English, Dutch and German, is that as long as the context can resolve some ambiguities, ambiguity will be used to make communication easier. In short, ambiguity emerges from tradeoffs between ease of production and ease of comprehension, with communication systems favouring hearer inference over speaker effort:
The essential asymmetry is: inference is cheap, articulation expensive, and thus the design requirements are for a system that maximizes inference. (Hence … linguistic coding is to be thought of less like definitive content and more like interpretive clue.) (Levinson, 2000: 29).
If this asymmetry exists, and hearers are good at disambiguating in context, then a direct result of such a tradeoff should be that linguistic units which require less effort should be more ambiguous. This is what they found in their corpus analysis of word length, word frequency and phonotactic probability:
We tested predictions of this theory, showing that words and syllables which are more efficient are preferentially re-used in language through ambiguity, allowing for greater ease overall. Our regression on homophones, polysemous words, and syllables – though similar – are theoretically and statistically independent. We therefore interpret positive results in each as strong evidence for the view that ambiguity exists for reasons of communicative efficiency (Piantadosi et al., 2012: 288).
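The prediction at work here, that low-effort units should carry more meanings, can be illustrated with a toy calculation. The miniature lexicon and sense counts below are invented for illustration; they are not Piantadosi et al.’s data:

```python
# Toy illustration: shorter (lower-effort) words carry more senses.
# The lexicon and sense counts are invented, not Piantadosi et al.'s data.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient, from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# word -> number of distinct senses (made-up numbers)
toy_lexicon = {
    "run": 8, "set": 9, "bank": 5, "light": 6,
    "purple": 2, "ambiguity": 1, "phonotactic": 1, "comprehension": 1,
}

lengths = [len(word) for word in toy_lexicon]
senses = list(toy_lexicon.values())
r = pearson_r(lengths, senses)
print(f"correlation between word length and sense count: r = {r:.2f}")
```

On this toy lexicon the correlation comes out strongly negative, which is the direction of the effect Piantadosi et al. report for real corpora.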
At some point, I’d like to offer a more comprehensive overview of this paper, but this will have to wait until I’ve read more of the literature. Until then, here are some graphs of the results from their paper:
Chater et al. (2009) used a computational model to show that biological adaptations for language are impossible because language changes too rapidly through cultural evolution for natural selection to act upon it.
This new paper, Baronchelli et al. (2012), uses similar models to make two arguments. First, if language changes quickly, “neutral genes” are selected for, because biological evolution cannot act upon linguistic features that are too much of a “moving target”. Second, if language changes slowly enough for linguistic features to become encoded in the genome, then two isolated subpopulations who originally spoke the same language will, after they linguistically diverge (which they inevitably will), also diverge biologically through genetic assimilation.
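The first result, that a fast-changing language leaves genes nothing stable to adapt to, can be sketched with a minimal toy simulation. The model below is my own illustration, not the one Baronchelli et al. actually use: each agent carries a gene value whose fitness depends on how closely it matches the current state of the language, and the language drifts at some rate per generation.

```python
# Toy "moving target" model (my own sketch, not Baronchelli et al.'s).
import random

def simulate(drift_rate, generations=300, pop_size=100, seed=1):
    """Mean population fitness, time-averaged over the last 100 generations."""
    rng = random.Random(seed)
    language = 0.5                      # current state of the linguistic feature
    genes = [rng.random() for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        # culture: the language drifts at the given rate
        language = min(1.0, max(0.0, language + rng.uniform(-drift_rate, drift_rate)))
        # biology: offspring are sampled in proportion to gene-language fit
        fitnesses = [1.0 - abs(g - language) for g in genes]
        genes = [min(1.0, max(0.0, g + rng.gauss(0, 0.01)))   # small mutation
                 for g in rng.choices(genes, weights=fitnesses, k=pop_size)]
        history.append(sum(1.0 - abs(g - language) for g in genes) / pop_size)
    return sum(history[-100:]) / 100

slow = simulate(drift_rate=0.002)   # language nearly fixed: genes can track it
fast = simulate(drift_rate=0.3)     # language is a moving target
print(f"mean fitness with slow-changing language: {slow:.2f}")
print(f"mean fitness with fast-changing language: {fast:.2f}")
```

Under these settings the time-averaged fitness comes out clearly higher when the language changes slowly, since only then can selection keep up; with fast drift no particular gene value is consistently favoured, which is the sense in which “neutral” genes win.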
The paper argues that because we observe so much diversity in the world’s languages, and yet children can acquire any language they are immersed in, only the model which supports the selection of “neutral genes” is plausible. It follows that a hypothesis in which domain-general cognitive abilities facilitate language is much more plausible than one positing a biologically specified, special-purpose language system.
A Prometheus scenario:
Baronchelli et al. (2012) use the results of their models to argue against what they call a “Prometheus” scenario. This is a scenario in which “a single mutation (or very few) gave rise to the language faculty in an early human ancestor, whose descendants then dispersed across the globe.”
I wonder whether “Prometheus scenario” is an established term in this context, because I can’t find much by googling it. It seems an odd term to use, given that Prometheus was the Titan who “stole” fire and other cultural tools from the gods for use by humans. Since Prometheus was a Titan, he couldn’t pass his genes on to humans; rather, the beginning and proliferation of fire and civilisation happened through a process of learning and cultural transmission. I know this is just meant to be an analogy, and presumably the Promethean aspect of it alludes to it happening suddenly, but I can’t help but feel that the term “Prometheus scenario” better fits the hypothesis that language is the result of cultural evolution acting upon domain-general processes, rather than one which posits a genetically defined language faculty in early humans.
References.
Baronchelli, A., Chater, N., Pastor-Satorras, R., & Christiansen, M. H. (2012). The biological origin of linguistic diversity. PLoS ONE, 7(10). PMID: 23118922
Chater, N., Reali, F., & Christiansen, M. H. (2009). Restrictions on biological adaptation in language evolution. Proceedings of the National Academy of Sciences, 106(4), 1015–1020.
Lately, there has been a string of news articles about animals imitating human speech sounds. First, there was an account of NOC, a nine-year-old beluga whale who was recorded making unusually low, clipped bursts of noise. Then, today, news was reported from the University of Vienna of an Asian elephant named Koshik using his trunk to imitate Korean words. Koshik does attempt to match both the pitch and timbre of the human voice, though the researchers doubt there is any meaning to his phrases beyond an attempt at social affiliation.
The more interesting aspect of NOC’s speech is that, unlike dolphins trained to imitate human noises or computer-generated whistles, his is the first recorded spontaneous imitation of human speech. Moreover, the marine animals previously studied were raised primarily in captivity, whereas NOC is not only a wild beluga whale, but his speech was also recorded in the wild. The study, published in Current Biology, can be found here (only the abstract is available for free).
Sam Ridgway, Donald Carder, Michelle Jeffries, Mark Todd. Current Biology, 23 October 2012, 22(20), pp. R860–R861.
Angela S. Stoeger, Daniel Mietchen, Sukhun Oh, Shermin de Silva, Christian T. Herbst, Soowhan Kwon, W. Tecumseh Fitch. “An Asian Elephant Imitates Human Speech.” Current Biology, 2012; DOI:10.1016/j.cub.2012.09.022
For some years now, Simon Garrod and Nicolas Fay, among others, have been looking at the emergence of symbolic graphical signs out of iconic ones, using communication experiments which simulate the repeated use of a sign.
Garrod et al. (2007) use a ‘pictionary’-style paradigm in which participants graphically depict one of 16 concepts without using words, so that their partner can identify it. This process is repeated to see whether repeated usage comes to rely on the shared memory of the representation rather than the representation itself, to the point where an iconic depiction of an item could become an arbitrary, symbolic one.
Garrod et al. (2007) showed that simple repetition is not enough for an arbitrary system to emerge: feedback and interaction between communicators are required. The amount of interaction afforded to participants affected the emergence of signs through a process of grounding. The signs that emerged from this interaction were indeed arbitrary, as participants not directly involved in the interaction had trouble interpreting them.
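A crude way to picture the mechanism (my own toy sketch, not Garrod et al.’s actual task or model): with feedback, a pair can afford to drop detail from a sign on every round, because the shared memory built up in earlier rounds carries the meaning; an outsider without that interaction history is left facing an arbitrary residue.

```python
# Toy sketch: an iconic sign erodes into an arbitrary one through
# repeated, grounded interaction (my illustration, not Garrod et al.'s model).

def interact(sign, shared_memory, concept):
    """One round: simplify the sign, then ground the reduced form via feedback."""
    if len(sign) > 2:
        sign = sign[: len(sign) // 2]   # drop detail the shared memory now carries
    shared_memory[sign] = concept       # partner's feedback grounds the new form
    return sign

concept = "theatre"
sign = "building+stage+curtains+actors"  # initially iconic: depicts its parts
shared_memory = {}
for _ in range(6):
    sign = interact(sign, shared_memory, concept)

print("final sign:", sign)
print("pair still understands it:", shared_memory[sign] == concept)
print("outsider can read it off:", concept in sign)
```

The final sign is a short fragment that no outsider could decode from its form alone, while the pair’s shared memory still recovers the concept.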
The experimental evidence then shows that icons do indeed evolve into symbols as a consequence of the shared memory of the representation rather than the representation itself. Which is all well and good, but can this process be seen in the real world? YES!
I was talking to a friend on Skype and he started typing repeated right round brackets:
))))))))
At first I just thought he had some problem with keys sticking on his keyboard, but after he did it two or three times I finally asked. He explained that they were smilies. Upon further questioning, it seems that it has become a norm in Russian internet chat for emoticons to lose their eyes – presumably through the same process as Garrod et al. (2007) showed above.
They have also created an intensification system based on this slightly more arbitrary symbol, whereby the more brackets repeated, the happier or sadder you are. Among those in the UK and America, the need to intensify an emoticon has stayed well within the realms of iconicity, with : D meaning “very happy” and D: meaning “oh God, WHHHHHYYYYY”. Japan has a completely different emoticon system altogether, which focusses on the eyes: ^_^ meaning happy and u_u meaning sad. Some have argued that this is because in Japan people tend to look to the eyes for emotional cues, whereas Americans tend to look to the mouth, as backed up by SCIENCE.
I’d be interested to see if norms have been established in other countries, either iconic or not.
Refs
Garrod, S., Fay, N., Lee, J., Oberlander, J., & Macleod, T. (2007). Foundations of representation: Where might graphical symbol systems come from? Cognitive Science, 31(6), 961–987. PMID: 21635324
Yuki, M., Maddux, W., & Masuda, T. (2007). Are the windows to the soul the same in the East and West? Cultural differences in using the eyes and mouth as cues to recognize emotions in Japan and the United States. Journal of Experimental Social Psychology, 43(2), 303–311. DOI: 10.1016/j.jesp.2006.02.004
Mice can learn vocalisations! A new article released today in PLOS ONE by Gustavo Arriaga, Eric Zhou and Erich Jarvis shows that mice share some of the same mechanisms used to learn vocal patterning in songbirds and humans.
Very few animals have the capacity for vocal learning. This ability allows species to modify the sequence and pitch of the sounds that make up songs or speech. So far, only three groups of birds – parrots, hummingbirds and songbirds – and some mammalian species – humans, whales, dolphins, sea lions, bats and elephants – have demonstrated vocal learning. The ability has yet to be found even in non-human primates.
This study looks at the ultrasonic vocalizations known as mouse ‘song’ and provides evidence that mice can change at least one acoustic feature of these vocalizations based on their social exposure.
Two mice were put together and over time learned to match the pitch of their songs to one another. The paper suggests this is a limited form of vocal learning.
The paper also shows evidence that the mice can control their vocal motor neurons. In the press release, Erich Jarvis states, “This is an exciting find, as the presence of direct forebrain control over the vocal neurons may be one of the most critical aspects in the human evolution of speech.”
While this vocal learning in mice seems to be much more primitive than in songbirds or humans, it may reveal some of the intermediate steps in the process by which vocalization evolved in advanced vocal learners like songbirds and humans.
Following some great work over at FeministPhilosophers to raise awareness of the prevalence of all-male conference events in Philosophy, an interdisciplinary action for gender equity at scholarly conferences has been doing the rounds over the last four days. It was proposed by Dan Sperber and Virginia Valian, who have also compiled an accompanying Q & A that is very informative indeed, especially for those who may not have thought about such issues before. The sentiment of this action is certainly commendable and it’s heartening to see this conversation being opened in the research community. In the spirit of continuing this conversation, I have made critical comments elsewhere that I’m more or less cross-posting here.
The commitment is summarised thus:
Commitment to gender equity at scholarly conferences
Across the disciplines, disproportionately more men than women participate in scholarly conferences – as keynote or plenary speakers, as symposiasts, or as panelists. This, we believe, is the outcome of widespread and generally unintended bias. It is unfair, it hinders advancement in scholarship, and it is especially discouraging to junior scholars. Overcoming such bias involves not just awareness but positive action.
We therefore undertake to make our participation in conferences – whether as an organizer, sponsor, or invited speaker – conditional on the invitation of women and men speakers in a fair and balanced manner.
So, we can understand this action as a distributed boycott of conferences that individuals believe have been unfair in their approach to inviting female speakers. There is a guideline in the accompanying Q & A on how signatories can establish whether a conference has been organised fairly, which outlines various considerations you can make as an attendee and as an organiser. The problem is that there is little to compel anyone signing this to actually make good on conditional conference participation, particularly since the bias is (as noted in the Q & A) unintended and unconscious, even among those who personally endeavour to act against it. The consequences of public accountability are at best unclear, unless signatories are also committing themselves to monitor the conduct of their fellow signatories. The fact that people generally held these sentiments before they signed the commitment, and that simply holding them doesn’t seem to have made any difference to how conferences are organised, ought to give us pause.
This may seem a little unfair of me, but at least part of my cynicism stems from the fact that female representation in political parties and government positions is notoriously difficult to improve with non-binding good intentions alone. At Edinburgh University’s inaugural Chrystal Macmillan lecture last year, Prof Pippa Norris showed that even voluntarily enacted quotas for a minimum number of female representatives are not enough to improve equity in political parties. The only measure that proves effective is an additional penalty of non-registration for those parties that do not meet the requirements. This is the case worldwide.
Related to the problems inherent in grassroots strategies of action, I also find myself wondering how it could benefit female scholars (individually but also at large), to make such a commitment. Surely the point here is that their representation is already under par. This is an especially important concern when we ask ourselves who is more likely to actually participate in such an action; despite the fact that this commitment is intended for everyone, I suspected that the one area where women might be overrepresented is on the signature list. As of today, this is certainly true (see pie chart, right; updated chart here).
It is worth pausing to consider exactly why this is problematic. It is not only that there exist fewer opportunities for high-profile female academics to speak than there should be – though that is an important issue. A more pervasive reason why fewer female speakers is a problem is that the resultant academic environment is hostile to other female academics – particularly junior attendees who, realistically, do not have as much luxury in limiting their participation. For female academics to consign themselves to only “fair” conferences seems to then work somewhat against the intended positive action, since even fewer women end up being represented than there currently are. Female junior attendance, I would bet, will largely remain the same since they cannot professionally afford to restrict themselves. The result is that they are attending conferences with even less female representation than there would have otherwise been, and encountering a more hostile and male-dominated environment.
A further point of concern is that, as is fairly typical of feminist campaigns, there seems to be a bit of a trend for male academics to lose interest over a relatively short space of time (see table, left). Is it reasonable, then, to expect that a majority-female abstention will ignite structural change to remedy this situation? I am inclined to believe that it isn’t. The idea that women should opt out of speaking at conferences in order to pressure them into organisational change is questionable precisely because their contribution is already valued less than that of their male counterparts. Given this bias, withheld participation by women may have much less impact on conferences than desired, particularly at those events which are often currently all-male anyway. If we still want to claim that a boycott is a desirable means of effecting change in this instance (and I’m not entirely convinced that it is), I’d venture that it would be significantly more effective if it comprised a male majority. An additional improvement would be to compile a list of conferences with a poor track record for a focused boycott that people could commit to, rather than relying on their subjective assessment. This would be an improvement not least because leaving the onus on the individual to decide how to behave under the obligation of this commitment (combined with the lack of a concrete goal or measure of success) makes the chances of material change rather slim indeed.
Given what we already know about women’s political representation, I believe a more effective goal is to implement change at an explicitly organisational level. As an example off the top of my head, petitioning for a requirement that established conferences declare their level of complicity with a set of fairness provisions might be more promising. This allows others to judge fairness more transparently (and less subjectively) while simultaneously giving high visibility to this issue as a matter of course. This kind of approach strikes me as somewhat more hopeful in making fair representation a standard consideration of conference organisers, both now and in the future. One barrier to this is that there isn’t, to my knowledge, a central body for the registration of academic conferences or an ombudsman-type overseer that could enforce such a requirement. Given that the academy has proven itself unable to make equity provisions, perhaps one should be instated. At any rate, this is still by no means enough; if we can learn anything from the political sphere it’s that there has to be a material downside to non-compliance beyond disapproval (or lack of votes) from the constituency.
That this conversation has been opened and circulated around the interdisciplinary research community is a very positive step in the right direction. Further thinking on how we can make material changes to structural inequity is both crucial and timely; any and all discussion on this is a Good Thing. I know I’m not alone in hoping that signing this commitment is not the beginning and end of the research community’s action toward gender equity.
Stefan L. Frank, Rens Bod and Morten H. Christiansen
Abstract: It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science.
Published online before print September 12, 2012. Proceedings of the Royal Society B. doi: 10.1098/rspb.2012.1741
This post continues my summary of Jim Hurford’s discussion of two contrasting extreme positions on language evolution in his plenary talk at the Poznan Linguistic Meeting. Here’s the summary of these two positions from my last post:
Position A:
(1) There was a single biological mutation which (2) created a new unique cognitive domain, which then (3) immediately enabled the unlimited command of complex structures via the computational operation of merge. (4) This domain is used primarily for advanced private thought and only derivatively for public communication. (5) It was not promoted by natural selection.
Position B:
(1) There were many cumulative mutations which (2) allowed the expanding interactions of pre-existing cognitive domains, creating a new domain which, however, is not characterized by principles unique to language. This then (3) gradually enabled the command of successively more complex structures. Also, on this view, this capacity was (4) used primarily for public communication, and only derivatively for advanced private thought, and was (5) promoted by natural selection.
Hurford criticized the position that the biological changes enabling language primarily evolved for private thought, because this would imply that the first species in the Homo lineage to develop the capacity for unlimited combinatorial private thought (i.e. “merge”) consisted of non-social, isolated but clever hominids. This, as Hurford rightly points out, is quite unrealistic given everything we know about human evolution regarding, for example, competition, group size, neocortex size and tactical deception. In fact, there is very strong evidence that what characterizes humans most is the exact opposite of what the “merge developed in the service of enhancing private thought” position would predict: we have the largest group size of any primate, the largest neocortex (which has been linked to the demands of navigating a complex social world) and the most pronounced capacity for tactical deception.
In his talk, Hurford asked “What is wrong, and what is right, about current theories of language, in the light of evolution?” (you can find the abstract here).
Hurford presented two extreme positions on the evolution of language (which nevertheless are advocated by quite a number of evolutionary linguists) and then discussed what kinds of evidence and lines of reasoning support or seem to go against these positions.
Extreme position A, which basically is the Chomskyan position of Generative Grammar, holds that:
(1) There was a single biological mutation which (2) created a new unique cognitive domain, which then (3) immediately enabled the unlimited command of complex structures via the computational operation of merge. Further, according to this extreme position, (4) this domain is used primarily for advanced private thought and only derivatively for public communication and lastly (5) it was not promoted by natural selection.
On the other end of the spectrum there is extreme position B, which holds that:
(1) There were many cumulative mutations which (2) allowed the expanding interactions of pre-existing cognitive domains, creating a new domain which, however, is not characterized by principles unique to language. This then (3) gradually enabled the command of successively more complex structures. Also, on this view, this capacity was (4) used primarily for public communication, and only derivatively for advanced private thought, and was (5) promoted by natural selection.
Hurford then went on to discuss which of these individual points were more likely to capture what actually happened in the evolution of language.
He first looked at the debate over the role of natural selection in the evolution of language. In Generative Grammar there is a biological, neurological mechanism or computational apparatus, called Universal Grammar (UG) by Chomsky, which determines which languages human infants can possibly acquire. In earlier generative paradigms, like the Government & Binding approach of the 1980s, UG was thought to be extremely complex. What is more, some of these factors and structures seemed extremely arbitrary. Thus, from this perspective, it seemed inconceivable that they could have been selected for by natural selection. This is illustrated quite nicely in a famous quote by David Lightfoot:
“Subjacency has many virtues, but I am not sure that it could have increased the chances of having fruitful sex” (Lightfoot 1991: 69).
We’ll come back to our talk later on and talk about it in a bit more detail but for the time being here’s our abstract:
Cognitive Linguistics is a school of modern linguistic theory and practice that sees language as an integral part of cognition and tries to explain linguistic phenomena in relation to general cognitive capacities (e.g. Evans, 2012; Geeraerts & Cuyckens, 2007). In this talk, we argue that there is a wealth of relevant research and theorizing in Cognitive Linguistics that can make important contributions to the study of the evolution of language and cognition. This is in line with recent developments in the field, which have attempted to apply key insights from Cognitive Linguistics on the nature of language and its relation to cognition and culture to the question of language evolution and change (cf. e.g. Evans, 2012; Pleyer, 2012; Sinha, 2009; Tomasello, 2008).
We illustrate this proposal in relation to the three timescales that have a bearing on explicating the structure and evolution of language (Kirby, 2012):
The ontogenetic timescale of individuals acquiring language
The glossogenetic timescale of historical language change
The phylogenetic timescale of the evolution of the species