In Search of the Wild Replicator


The key to the treasure is the treasure.
– John Barth

In view of Sean’s post about Andrew Smith’s take on linguistic replicators, I’ve decided to repost this rather longish note from New Savanna. I’d originally posted it in the summer of 2010 as part of a run-up to a post on cultural evolution for the National Humanities Center (USA); I’ve collected those notes into a downloadable PDF. Among other things, the notes deal with William Croft’s notions (at least as they existed in 2000) and suggest that we’ll find language replicators on the emic side of the emic/etic distinction.

I’ve also appended some remarks I made to John Lawler in the subsequent discussion at New Savanna.

* * * * *
There’s been a fair amount of work done on language from an evolutionary point of view, which is not surprising, as historical linguistics has well-developed treatments of language lineages and taxonomy, the “stuff” of large-scale evolutionary investigation. While this work is directly relevant to a consideration of cultural evolution, I will not be reviewing or discussing it here, for it doesn’t deal with the theoretical issues that most concern me in these posts, namely a conceptualization of the genetic and phenotypic entities of culture. This literature is empirically oriented in a way that doesn’t depend on such matters.

The Arbitrariness of the Sign

In particular, I want to deal with the arbitrariness of the sign. Given my approach to memes, that arbitrariness would appear to eliminate the possibility that word meanings could have memetic status. For, as you may recall, I’ve defined memes to be perceptual properties – albeit sometimes very complex and abstract ones – of physical things and events. Memes can be defined over speech sounds, language gestures, or printed words, but not over the meanings of words. Note that by “meaning” I mean the mental or neural event that is the meaning of the word, what Saussure called the signified. I don’t mean the referent of the word, which, in many cases, but by no means all, would have perceptible physical properties. I mean the meaning, the mental event. In this conception, it would seem that meaning cannot be memetic.

That seems right to me. Language is different from music and drawing and painting and sculpture and dance; it plays a different role in human society and culture. On that basis one would expect it to come out fundamentally different on a memetic analysis.

This, of course, leaves us with a problem. If word meaning is not memetic, then how is it that we can use language to communicate, and very effectively over a wide range of cases? Not only language, of course, but everything that depends on language. Continue reading “In Search of the Wild Replicator”

EvoLang coverage: Boeckx on integrating biolinguistics and cultural evolution

Cedric Boeckx gave a remarkable plenary which tried to pull together the fields of cultural language evolution and biolinguistics, with surprising concessions on either side.  Boeckx started from a relatively uncontroversial part of Chomsky’s claim:  That aspects of language can be studied scientifically as part of biology.  He noted, however, that Luria was confident in 1976 that ‘within a few years’ linguists would be interfacing with and contributing to findings from biology.  Yet formal syntax has failed to carry out this biological commitment, and Boeckx wonders why linguists don’t have more to say about, for instance, the recent developments in the study of FOXP2.

Boeckx outlined his own position as minimalist, in the sense that a fully specified UG is not plausible.  We need to realise that biology is complex, and move beyond the classical model of Broca’s and Wernicke’s areas as dedicated centers of language.  Also, Boeckx urged the audience to forget about the FLN/FLB distinction, since from a biological viewpoint this view is misleading:  Genes build neural structures, not behaviour (although linguists should note the richness of the range of aspects now thought to be part of FLB).

Instead, Boeckx suggests that the subject of study should be a set of formal properties.  Boeckx suggested the following, while emphasising that the particular terms were not important and it is just the concepts that he would focus on:

  • An edge property:  This removes selectional restrictions on concepts in different domains and makes it possible to combine them.  For example, humans can pull together concepts from very different domains.  Also, lexical items have the property of being able to combine with other lexical items.
  • Set formation or Merge:  The ability to combine lexical items.
  • Cyclic transfer:  Elements are combined at different levels before being passed to other operations.  This allows recursion.

These constitute a minimal specification of universal grammar for which we might realistically find biological explanations.  Boeckx sees no problem with the idea that we share some of these abilities with animals.  An even bigger concession is that he believes that the particular structures of language (e.g. word order or pro-drop) can be explained by cultural evolution, i.e. grammaticalisation.  The minimal specification amounts to weak biases; beyond that, we need a cultural explanation.
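To make the set-formation idea a little more concrete, here is a deliberately naive sketch of Merge as recursive, unordered set formation (this is my own illustration, not anything Boeckx presented; the example sentence and function names are invented):

```python
# A toy illustration of Merge as recursive set formation.
# Lexical items are plain strings; merge(a, b) simply forms the unordered set {a, b}.

def merge(a, b):
    """Combine two syntactic objects into an unordered set."""
    return frozenset([a, b])

# Build {she, {saw, {the, dog}}} step by step (invented example sentence).
dp = merge("the", "dog")   # {the, dog}
vp = merge("saw", dp)      # {saw, {the, dog}}
s = merge("she", vp)       # {she, {saw, {the, dog}}}

def depth(obj):
    """Depth of embedding: unbounded recursion falls out of the representation."""
    if isinstance(obj, frozenset):
        return 1 + max(depth(x) for x in obj)
    return 0

print(s)
print("embedding depth:", depth(s))
```

The only point of the toy is that unbounded hierarchical embedding falls out of repeatedly applying a single combination operation, which is why such a minimal specification looks biologically tractable.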

Boeckx went on to suggest how the biological underpinnings of the minimal specification might be approached.  He promoted the concept of the ‘Global workspace’ as used by Dehaene and colleagues.  This approach suggests that cross-modular computation is the key to human cognition.  It focuses on distributed networks of neurons with long-distance connections which allow different modules of the brain to interact.  Humans are particularly good at integrating concepts across perceptual modalities or time.  Boeckx suggests that this ability is the biological basis for the edge property.  It allows different perceptions to be treated in such a way that they can be combined.  I was put in mind of synaesthesia and the work of Chrissy Cuskley on synaesthesia and language evolution.

Boeckx went on to suggest that the thalamus could act as a regulator of information exchange in this global workspace and cited some studies showing that it is sensitive to syntax and semantics, but not phonology.  The thalamus is ideally placed – right at the center of the brain.  Boeckx also suggested that humans have evolved to have a more regularly spherical brain, facilitating this workspace by placing the thalamus equidistantly from all brain areas (suggesting that earlier ancestors of modern humans had a more elongated brain).  However, he was skeptical that we could ever know if this was an adaptation for language.

This integrative approach is in close alignment with proponents of cultural evolution such as Simon Kirby, who sees the structure of language as emerging from cultural transmission, but biology as providing the platform for cultural transmission.  Boeckx’s approach differs a great deal from that of Massimo Piattelli-Palmarini, whose talk essentially told cultural evolutionists that they were wrong and should stop researching explanations that could not be true.  However, one commenter wondered if Boeckx’s concessions were a dangerous form of moderate liberalism – these arguments might leave both the cultural camp and the formalist camp believing that there is no conflict, and actually lead to further isolation.  However, I welcome this impressive synthesis and hope that it’ll raise the profile of cultural transmission in the evolution of language.

Evolang Coverage: Luke McCrohon on horizontal transfer

Luke McCrohon suggests that tools from evolutionary biology can be applied to linguistic borrowing between languages.  McCrohon correctly points out that the descent of lexicons is far from tree-like, and there is a great deal of horizontal transfer (see also my post on analysing an etymology dictionary). Although it’s mainly nouns that are borrowed into a language, any feature can potentially be borrowed, according to Thomason & Kaufman (1988).  However, we tend to observe hierarchies of borrowing such that some types of words are borrowed more frequently than others.  For instance, Haugen notes that nouns are more likely to be borrowed than verbs, which are in turn more likely to be borrowed than prepositions.  McCrohon links this with a similar observation in biological evolution that certain types of genes are more likely to be borrowed.  Informational genes (that provide the basis for functions) are less likely to be borrowed than operational genes (that modify other functions).  Jain et al.’s (1999) complexity hypothesis suggests that, while all genes have the same probability of being copied, simpler genes are more likely to be copied faithfully since they have fewer constraints on the precise form they must take to be effective.

McCrohon argues that, in a similar way, the linguistic borrowing hierarchy might be explained by increasing constraints on how a word can be used.  For instance, most nouns can be substituted by other nouns, while prepositions are highly restricted by context or domain.  Language-internal change might also be affected by these restrictions: even if a more effective form than an existing one is available, removing the existing form might have knock-on consequences for the whole system.  This inter-connectedness could have implications for how languages are likely to change.
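To illustrate the logic, here is a toy simulation (my own sketch, not McCrohon’s model; the word classes, constraint counts and probabilities are all invented): every word class is equally likely to be selected for borrowing, but integration succeeds only if none of the word’s usage constraints blocks it.

```python
import random

# Toy sketch: uniform selection for borrowing, but success falls with
# the number of constraints on how the borrowed word can be used.
# Constraint counts and probabilities below are invented for illustration.
CONSTRAINTS = {"noun": 1, "verb": 3, "preposition": 6}
P_FAIL_PER_CONSTRAINT = 0.4   # assumed chance that each constraint blocks integration

def attempt_borrowing(rng):
    word_class = rng.choice(list(CONSTRAINTS))          # uniform selection
    n = CONSTRAINTS[word_class]
    success = all(rng.random() > P_FAIL_PER_CONSTRAINT for _ in range(n))
    return word_class, success

rng = random.Random(0)
borrowed = {c: 0 for c in CONSTRAINTS}
for _ in range(100_000):
    word_class, success = attempt_borrowing(rng)
    if success:
        borrowed[word_class] += 1

print(borrowed)   # nouns >> verbs >> prepositions, despite uniform selection
```

The output reproduces a borrowing hierarchy purely from differential integration, which is the distinction the next paragraph draws between selection and successful borrowing.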

Furthermore, this model might predict that words are equally likely to be selected for borrowing, but only certain types have a good likelihood of being successfully borrowed.  However, a commenter wondered about words that are borrowed to fill conceptual gaps such as new technologies.  Still, an interesting analogy between problems in biology and problems in linguistics.  And McCrohon is confident that his studies will also have something to give back to the biology community by studying how this problem applies to linguistics.

Evolang coverage: Andrew Smith: Linguistic replicators are not observable, nor replicators

Andrew Smith asks what Darwinian linguistic replicators are.  He starts with Croft’s conception of the lingueme.  Croft says that linguemes are external manifestations: utterances, including their full context.  However, this might mean that they are not observable, since we can’t observe the full context of an utterance, nor the speaker’s intention.  Furthermore, this ignores the fact that meanings are different for each hearer, so linguemes cannot be observed on the hearer’s side either.  Nikolas Ritt’s conceptualisation of the lingueme suggests that it is an entirely internal entity.  However, this means that we can’t observe the lingueme at all.  Furthermore, it ignores the fact that language is ostensive and inferential.  Smith advocates a view stronger than Mufwene’s position that meanings are re-constructed in the minds of hearers:  Hearers build their own knowledge and infer the meaning of speakers – a far remove from replicating anything in the speaker’s mind.  So the lingueme does not replicate faithfully.  In fact, we should not expect the lingueme to replicate faithfully, but to lie at the opposite end of the continuum from replicators.

Smith concluded with the paradox that linguemes must contain some aspect of meaning, but meaning is individual and not observable.

Monica Tamariz asked whether linguistic replicators needed to have an aspect of meaning.  Alternatively, Tamariz argued you could have replication of forms without replication of meaning.  Smith disagreed, seeing a pairing of form and meaning as an essential part of a linguistic replicator.

It was pointed out that some priming effects demonstrate that people can re-create speakers’ individual voices in their minds – would this count as faithful replication?  Smith replied that we shouldn’t expect linguistic replicators to be faithfully transmitted.

Luke McCrohon later suggested that perhaps you could have replication of a communicative event – that is, a lingueme comprising the speaker’s intention, the hearer’s inference and the external form together.

This was a refreshing and no-nonsense take on the linguistic replicator question.  But whatever the right answer, it demonstrates that evolutionary linguists are still struggling to reconcile language in the individual and language in the environment.  Nowhere is this clearer than in models, where typically addressing one aspect compromises the other.

So, what is it then, this Grammaticalization?

ResearchBlogging.org

A century ago Antoine Meillet, in his work L’évolution des Formes Grammaticales, coined the term grammaticalization to describe the process through which linguistic forms evolve from a lexical to a grammatical status. Even though knowledge of this process is found in earlier works by French and British philosophers (e.g. Condillac, 1746; Tooke, 1857), as well as in the publications of a long list of nineteenth-century linguists beginning with Franz Bopp (1816) (cf. Heine, 2003), it was Meillet’s term that would come to characterise what is now a whole field of study in historical language change. At first glance, the concept of grammaticalization might seem fairly straightforward, yet in the intervening hundred years it has undergone numerous revisions and developments, with many of these issues being brought to the fore at a conference I recently attended in Berlin (yes, we are interested in conferences other than Evolang).

One of the stated aims of the conference was to refine the notion of grammaticalization (click here for the website). I’m not 100% sure this was achieved and, following an excellent talk by Graeme Trousdale, I was even less sure of whether we should keep using the term. We’ll come back to this in a moment. For now, many linguists will probably agree that one of the most prominent developments is found in the expansion of Meillet’s definition by Kuryłowicz (1965): “[…] grammaticalization is that subset of linguistic changes whereby lexical material in highly constrained pragmatic and morphosyntactic contexts becomes grammatical, and grammatical material becomes more grammatical […]” (Traugott, 1996: 183 [my emphasis]). Under this new definition, grammaticalization takes into account the gradual nature of diachronic change in language, with there being a continuum of various degrees of grammatical status (Winter-Froemel, 2012).

A widely used example of grammaticalization is the development of the periphrastic future be going to. In the English of Shakespeare’s time, be going to had its literal meaning of a subject travelling to a location in order to do something, with the subject position only allowing for a noun phrase denoting “an animate, mobile entity, and the verb following the phrase would have to be a dynamic verb” (Bybee, 2003: 605). Indeed, there were several movement verbs that could be substituted in the following constructional schema:

(1)        [[movement verb + Progressive] + purpose clause (to + infinitive)]

            E.g.,     I am going to see the king

                        I am travelling to see the king

                        I am riding to see the king

However, of the above examples, it was only the construction with go in it that underwent grammaticalization, so that the motion verb (go) and the purpose clause (to + infinitive) came to express intentionality and future possibility. Of course, these changes did not happen abruptly; rather, they gradually evolved over time, with one prediction being that there was a stage of ambiguity where both meanings coexisted (see Hopper’s concept of layering). We might conceive of this as hidden variation due to the inferential capacities entailed in the transmission from speakers to hearers. At some point be going to came to be used in constructions with an unambiguous meaning (e.g., I’m going to stay at home; The tree is going to lose its leaves etc.), which led to an unmasking of this hidden variation within the speech community. This unmasking further opens up the possibility for these two meanings to become structurally untangled, as demonstrated in the contracted form of be plus the reduced gonna [gʌnə]. Below is a diagrammatic representation of these changes:

Continue reading “So, what is it then, this Grammaticalization?”

Evolang coverage: Brain activity during the emergence of a grounded communication game

Takeshi Konno, Junya Morita and Takashi Hashimoto talked about an integrative approach to the emergence of symbolic communication.  The talk included details of a hybrid model of cognition for communication that involved a context-free grammar to handle denotation and a neural network to handle connotation.  However, the most interesting work was an analysis of the different brain areas used at different stages of the evolution of a communication system.  They used an experimental paradigm similar to Galantucci (2005), in which two human players played a coordination game using computer terminals.  On the screen, players were placed in one of four coloured rooms, but were unable to see their partner in another room.  The aim was to move once (or not move) so as to end up in the same room as your partner.  Before moving, players were allowed to communicate once by sending their partner a sequence of two abstract shapes.  The idea was to set up a communication system whereby, for instance, a square followed by a circle might mean ‘move into the green room’.
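To make the paradigm concrete, here is a minimal sketch of the game’s message space and a successful round (my reconstruction from the description above, not the authors’ code; the room colours, shape vocabulary and convention are invented):

```python
from itertools import product

# Toy reconstruction of the coordination game's message space (invented names).
ROOMS = ["red", "green", "blue", "yellow"]
SHAPES = ["square", "circle", "triangle"]

# Two-shape sequences: 3 ** 2 = 9 possible messages, more than enough to name
# four rooms and still leave signals free for later pragmatic uses.
MESSAGES = list(product(SHAPES, repeat=2))

# One possible shared convention: the first four messages each name a room,
# e.g. (square, circle) -> "move to the green room".
convention = dict(zip(MESSAGES, ROOMS))

def play_round(target_room):
    """Sender encodes a proposed meeting room; receiver decodes and moves there."""
    message = next(m for m, room in convention.items() if room == target_room)
    receiver_goes = convention[message]   # receiver decodes with the same convention
    return receiver_goes == target_room   # coordination succeeds if they match

print(len(MESSAGES))          # 9
print(play_round("green"))    # True under the shared convention
```

The interesting part of the study, of course, is how the players converge on such a convention in the first place, and then extend it pragmatically, as described below.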

Konno et al. observed an evolution in the communication system:  First, the establishment of common ground (which shapes meant which colour).  Next, a symbolic system emerged with a semantics and a syntax; at this stage, players were sending messages simultaneously.  Finally, role division (pragmatics) emerged to handle situations where the move suggested by one player was impossible for the other to reach in a single move.  One player would make a suggestion, and the second player would either modify it or confirm it by sending back the same signal.  Konno et al. note the emergence of the possibility of the same signal meaning different things.

Interestingly, a recent experiment used EEG scans of participants’ brain activity as they played.  Konno et al. observed activity in Wernicke’s area at the semantic and syntactic stage, but also increased activation in Broca’s area, the right frontal cortex and the medial frontal area during the pragmatic stage of the evolution of the system.  Although this finding was not covered in a lot of detail, and the implications were not fully fleshed out, it’s an intriguing result, and may usher in a new series of brain-scanning versions of other communication game paradigms.  Do participants at a later stage of an iterated learning paradigm use different brain areas from those used in the initial stages of the evolution of the language?

Evolang abstract:

Konno, T., Morita, J. and Hashimoto, T. (2012) “How is pragmatic grounding formed in the symbolic communication systems?”, Proceedings of Evolang 9, Campus Plaza Kyoto, abstract.

Galantucci, B. (2005). An experimental study of the emergence of human communication systems. Cognitive Science, 29(5), 737-767.

Evolang coverage: Animal Communication and the Evolution of Language

Are there more differences or more similarities between human language and other animal communication systems? And what exactly does it tell us if we find precursors and convergent evolution of aspects similar to human language? These were some of the key questions at this year’s Evolang’s Animal Communication and Language Evolution Workshop (proceedings for all workshops here).

As Johan Bolhuis pointed out, ever since Darwin (1871), comparing apes and humans has seemed like the most logical thing to do when trying to find out more about the evolution of traits presumed to be special to humans. Apes, and especially chimpanzees, so the reasoning goes, are after all our closest relatives and serve as the best models for the capacities of our prelinguistic hominid ancestors. The comparative aspects of language have gained new attention since the controversial Hauser, Chomsky and Fitch (2002) paper in Science. For example, their claim that the capacity for producing and understanding recursive embedding of a certain kind is uniquely human was taken up by some researchers (including Hauser and Fitch themselves) who looked for syntactic abilities in other animals. More recently, songbirds have also become a centre of attention in the animal communication literature, with pretty much everything about them being quite controversial, however.

What is important here, according to the second workshop organizer, Kazuo Okanoya, is that when doing research and theorizing we should not treat humans as a special case, but as being on a continuum with other animals. And this also holds for language. In explaining language evolution, we don’t want to speak of a sudden burst that gave us something wholly different from anything else in the animal kingdom, but rather of a continuous transition and emergence of language. For this, it is important to study other animals in closer detail if we are to arrive at a continuous explanation of language emergence. Granted, humans are special. But simply saying they are special isn’t scientific. We need to detail in what ways humans are special.

Regarding the central question of whether there are more differences or similarities between language and animal communication, and what exactly these similarities and differences are, opinions of course differ. After the first speaker didn’t turn up, Irene Pepperberg gave an impromptu talk on her work with parrots. Taking the example of a complex exclusion task, she argued that symbol-trained animals can do things other animals simply cannot, and that this might be tied to the complex cognitive processing that occurs during language (and vocal) learning. She also stressed that birds can serve as good models for the evolution of some aspects underlying language because they developed vocal learning capacities broadly similar to those of humans, in a process referred to as parallel evolution, convergence, or analogy. Responding to another prevalent criticism – that animals like Alex and Kanzi are simply exceptional and unique – Pepperberg countered that not every human is a Picasso or a Beethoven either: what Picasso and Beethoven show us is what humans can be capable of, and the same holds for Alex and Kanzi and other animals. No one would argue that animals have language in the sense that humans do. But the fact that they have the brain structures and cognitive capacities to allow more complicated vocal learning and cognitive processing means we can use them as a model of how these processes might have got started. There is still much work to be done, especially on questions such as what animals like parrots actually need and use these complex vocal and cognitive capacities for in the wild.

Whereas Dominic Mitchell argued in his talk that there is indeed a discontinuity between animal communication and human language, with reference to animal signaling theory (e.g. Krebs & Dawkins 1984), Ramon Ferrer-i-Cancho, after him, focused more on the similarities. Specifically, he showed quite convincingly that statistical patterns in language, like Zipf’s law, the law of brevity (the law that more frequent words are shorter), and the Menzerath-Altmann law (the longer the word, the shorter its syllables), can also be found in the communicative behaviours of other animals. Zipf’s law for word frequencies, for example, can also be observed in the whistles of bottlenose dolphins. A criticism of Zipf’s law in the Chomskyan tradition holds that it applies just as well to random typing and rolling dice, but Ferrer-i-Cancho showed that this is simply not the case by plotting the actual distributions of random typing and dice rolling, which turn out to be quite different from the power-law distribution of Zipf’s law if you look at them in any detail. The law that more frequent words are shorter can also be found in chickadee calls, Formosan macaques and common marmosets. There is some controversy over whether this law really holds for all of these species, especially common marmosets, but Ferrer-i-Cancho presented a reanalysis of this criticism in which he showed that there are no “true exceptions” to the law. He proposes an information-theoretic explanation for these kinds of behavioural universals, in which communicative solutions converge on a local optimum of differing communicative demands. He also proposes that considerations like this should lead us to change our perspective on, and concepts of, universals quite radically, and that instead of looking only for linguistic universals we should also look for universals of communicative behaviour and universal principles beyond human language, such as cognitive effort minimization and mean code length minimization.
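To get a feel for the random-typing comparison, here is a quick toy (my own, not Ferrer-i-Cancho’s analysis; the alphabet size and space probability are arbitrary): a “monkey” types letters and spaces at random, and the resulting rank-frequency list shows the tell-tale plateaus of equally frequent same-length words rather than the smooth power-law decay Zipf’s law describes.

```python
import random
from collections import Counter

# Toy "random typing" generator: letters a-e, space hit with probability 0.2.
# "Words" are simply the chunks between spaces.
rng = random.Random(1)
letters = "abcde"

def monkey_text(n_chars, p_space=0.2):
    chars = [" " if rng.random() < p_space else rng.choice(letters)
             for _ in range(n_chars)]
    return "".join(chars).split()

words = monkey_text(200_000)
ranked = Counter(words).most_common()

# Under Zipf's law, frequency falls smoothly as roughly 1/rank. Random typing
# instead gives plateaus: all words of the same length are about equally
# frequent, so the rank-frequency curve is step-like once you look in detail.
for rank in [1, 5, 10, 50, 100, 500]:
    if rank <= len(ranked):
        word, freq = ranked[rank - 1]
        print(f"rank {rank:>4}: {word!r:>8}  freq {freq}")
```

Plotting the full curve on log-log axes (e.g. with matplotlib) makes the stepwise structure obvious, which is the detail the random-typing objection glosses over.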

Returning to birds, Johan Bolhuis picked up the issue of similarities and differences again and showed that there is in fact a staggering number of similarities between birds and humans. For example, songbirds also learn their songs from a tutor (most often their father) and make almost perfect copies of those songs. As Hauser, Chomsky and Fitch (2002) had already pointed out, this signal copying seems not to be present in apes and monkeys. But the similarities go even further than that: songbirds “babble” before they can sing properly (a period called ‘subsong’) and they also have a sensitive period for learning. And there are not only behavioural but also neural similarities. In fact, songbirds seem to have a neural organization broadly similar to the human separation between Broca’s area (mostly concerned with production, although this simple view is of course not the whole story, as James, for example, has shown) and Wernicke’s area (mostly concerned with understanding). There seem to be regions that are exclusively activated when the animal hears songs (a kind of Wernicke-type region) and regions that show neuronal activation when the animal sings, something which is called the ‘song system’. Interestingly, this activation is also related to how much the animal has learned about the particular song it is hearing: the better it knows the song, the more activation there is. This suggests that these regions might be related to song memory. In lesion studies, where the regions involved in listening to a known song were damaged, recognition of the songs was indeed impaired but not wholly wiped out, while song production was completely unimpaired, mirroring the results from patients with lesions to either Broca’s or Wernicke’s area. Zebra finches also show some degree of lateralization, in that there is stronger activation in the left hemisphere when they hear a song they know, but not when the song is unfamiliar. Although FOXP2 is not a “language gene”, which can’t be stressed enough, it is interesting that songbirds in which the bird FOXP2 gene was “knocked out” show incomplete learning of the tutor’s songs.

Overall, Bolhuis concludes that what we can learn from looking at birdsong is that there are three significant factors involved in the evolution of language:

  • Homology in the neural and genetic mechanisms, due to our shared evolutionary past with birds.
  • Convergence or parallel evolution of auditory-vocal learning.
  • And, lastly, specialisations – specifically human language syntax, which, as Bolhuis argued in a paper with Bob Berwick and Kazuo Okanoya, is still vastly different in complexity and hierarchical embedding from anything in songbird vocal behaviour.

This focus on syntactic ability stems of course from a generativist perspective on these issues, and future research, especially from new and up-and-coming linguistic schools like Cognitive Linguistics and Construction Grammar (cf. Hurford 2012), is sure to shed more light on how exactly human language works, what kinds of elements and constructions it is made of, how these compare to what is found in animals, and whether there really is a single unitary thing like the fabled “syntactic ability” of humans (cf. e.g. work by Ewa Dabrowska).


Evolang coverage: Bart de Boer on Fact-free science

This is written at 1am after a sake and sushi reception.  I have to praise the organisation of the conference so far!

Kicking off the workshop on Constructive approaches to Language Evolution (proceedings for all workshops downloadable here), Bart de Boer talked about the dangers of fact-free science.  Maynard Smith recognised a certain kind of science that does not refer to outside phenomena, but merely concentrates on exploring models already established in the sub-field.  Constructive approaches and the Artificial Life approach were always susceptible to this criticism, and de Boer recognises that the initial enthusiasm for constructive models has waned while the skepticism has remained.  However, de Boer suggested that Maynard Smith’s point should be taken as a friendly warning to researchers in language evolution rather than a criticism, since Maynard Smith himself was subject to these kinds of criticism in the field of mathematical modelling.  de Boer emphasised that research should never lose sight of the research questions that motivated previous studies, and encouraged modellers to ask whether they were answering questions that other researchers were asking.

de Boer also talked about ‘Cargo cult science’ – a name derived from pre-industrial cults whose members believed that emulating the technologically advanced societies they came into contact with would maintain the flow of new goods – a practice that goes through the motions of doing science but doesn’t actually produce results.  For instance, a model shouldn’t just explain the data it was built on, but should be expandable to explain other phenomena.

de Boer questioned whether the Iterated Learning Model experimental paradigms were guilty of this kind of cottage-industry science, wondering whether they study language evolution or how humans play certain types of games.  However, he did concede that it was a relatively new paradigm and that at least it got modellers running experiments.  I asked whether this was a little unfair on the ILM, since part of the motivation of the ILM studies was to counter claims made in that pinnacle of fact-free science, formalist nativism.  That is, the ILM showed that you don’t need strong innate biases to get strong language universals in populations.  de Boer answered, quite sensibly, that these points had already been made with the computational models, but, more importantly, that there was no point in trying to convince those kinds of researchers – the real audience for researchers of cultural evolution should be biologists.  de Boer pointed out that the most prestigious work on language evolution (in terms of journal prestige and citations) is largely by biologists, not linguists (e.g. Nowak).  And to convince them, we need fact-free science.

It was a pity, then, that some interesting modelling work by Reiji Suzuki and Takaya Arita (Reconsidering language evolution from coevolution of learning and niche construction using a concept of dynamic fitness landscape, also in the workshop proceedings) seemed to be suffering from this malady.  To start with, as Thom Scott-Phillips pointed out, the title doesn’t make sense, since niche construction is essentially a type of coevolution.  Suzuki described a model where individuals could affect each other’s linguistic inventories either directly, through communication, or indirectly, by contributing linguistic elements to a pool of linguistic resources, like an animal altering its adaptive landscape (e.g. beavers building dams).  Each individual had a phenotype space defined by several innate properties:  First, an initial phenotype.  Second, a learning variable whereby an individual could bring its phenotype closer to the peak of the adaptive landscape.  Finally, a niche construction parameter by which individuals could pull the adaptive peak closer to or further away from their phenotype.  Individuals inherited these parameters like genes.
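To show the kind of setup this describes, here is a minimal agent-based sketch (with invented update rules and parameter values – not Suzuki and Arita’s actual implementation): learning pulls a developed phenotype towards a shared adaptive peak, niche construction pulls the peak towards the agent’s phenotype, and all three innate parameters are inherited with mutation.

```python
import random

# Caricature of the model described above: one-dimensional phenotypes,
# a single shared adaptive peak, and three heritable parameters per agent.
rng = random.Random(42)
POP, GENS, MUT = 100, 200, 0.05
peak = 0.0   # shared adaptive peak, modified by niche construction

def fitness(phenotype, peak):
    return -abs(phenotype - peak)   # closer to the peak = fitter

agents = [{"pheno": rng.uniform(-1, 1),
           "learn": rng.uniform(0, 1),
           "niche": rng.uniform(-0.1, 0.1)} for _ in range(POP)]

for g in range(GENS):
    # development: learning moves each phenotype towards the peak,
    # niche construction moves the peak a little towards each developed phenotype
    for a in agents:
        a["dev"] = a["pheno"] + a["learn"] * (peak - a["pheno"])
        peak += a["niche"] * (a["dev"] - peak) / POP
    # selection: the fitter half reproduces, passing on all three parameters with mutation
    agents.sort(key=lambda a: fitness(a["dev"], peak), reverse=True)
    children = []
    for parent in agents[:POP // 2]:
        for _ in range(2):
            child = {k: parent[k] + rng.gauss(0, MUT)
                     for k in ("pheno", "learn", "niche")}
            child["learn"] = min(1.0, max(0.0, child["learn"]))  # keep learning rate in [0, 1]
            children.append(child)
    agents = children

print(f"final peak: {peak:.3f}, mean learning parameter: "
      f"{sum(a['learn'] for a in agents) / POP:.3f}")
```

A single-peak toy like this obviously cannot show the multi-peak cycling reported in the talk; it is only meant to make the three heritable parameters and their roles concrete.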

A circular dynamic emerged in which the population cycled through having many adaptive peaks, which increased the learning parameter, which led to a single adaptive peak, which lowered the importance of learning, which finally pulled the single adaptive peak apart into many adaptive peaks, which increased the importance of learning again, and so on.  While this was happening, the fitness of the agents was being ratcheted up by a series of steep increases – essentially the Baldwin effect being repeatedly applied.  This was the first of a number of presentations about the Baldwin effect and coevolution (a talk by Bill Thompson and a poster by Vanessa Ferdinand).

While this is an interesting dynamic, when I asked how the concept of a shared environment or the ability to modify the adaptive landscape applied to language, there was not a clear answer.  I suspect that the distinction between individual interactions and modifying the external environment, which works well for animals building nests or dams, does not work so well for spoken language, because linguistic signals don’t persist in the environment.  However, the problem of how to represent the language of a community alongside individual behaviour is not an easy one to solve.  Suzuki suggested that perhaps the model can be related to an earlier stage of language evolution, but we’ll have to wait for a better description of how this model can answer the questions that researchers in language evolution ask.

Evolang Previews: Cognitive Construal, Mental Spaces, and the Evolution of Language and Cognition

Evolang is busy this year – 4 parallel sessions and over 50 posters. We’ll be posting a series of previews to help you decide what to go and see. If you’d like to post a preview of your work, get in touch and we’ll give you a guest slot.

Michael Pleyer – Cognitive Construal, Mental Spaces, and the Evolution of Language and Cognition. Poster Session 1, 17:20-19:20, “Hall” (2F), 14th March.

Perspective-taking and -setting in language, cognition and interaction is crucial to the creation of meaning and to how people share knowledge and experiences. As I’ve already written about on this blog (e.g. here, here, here), it probably also played an important part in the story of how human language and cognition came to be. In my poster presentation I argue that a particular school of linguistic thought, Cognitive Linguistics (e.g. Croft & Cruse 2004; Evans & Green 2006; Geeraerts & Cuyckens 2007; Ungerer & Schmid 2006), has quite a lot to say about the structure and cognitive foundations of perspective-taking and -setting in language.

Therefore, an interdisciplinary dialogue between Cognitive Linguistics and research on the evolution of language might prove highly profitable. To illustrate this point, I offer an example of one potential candidate for such an interdisciplinary dialogue, so-called Blending Theory (e.g. Fauconnier & Turner 2002), which, I argue, can serve as a useful model for the kind of representational apparatus that needed to evolve in the human lineage to support linguistic interaction. In this post I will not say much about Blending Theory (go see my poster for that 😉 or browse here), but I want to elaborate a bit on Cognitive Linguistics and why it is a promising school of thought for language evolution research, something which I also elaborate on in my proceedings paper.

So what is Cognitive Linguistics?

Evans & Green (2006: 50) define Cognitive Linguistics as

“the study of language in a way that is compatible with what is known about the human mind, treating language as reflecting and revealing the mind.”

Cognitive Linguistics sees language as tightly integrated with human cognition. What is more, a core assumption of Cognitive Linguistics is that principles inherent in language can be seen as instantiations of more general principles of human cognition. This means that language is seen as drawing on mechanisms and principles that are not language-specific but general to cognition, like conceptualisation, categorization, entrenchment, routinization, and so forth.

From the point of view of the speaker, the most important function of language is that it expresses conceptualizations, i.e. mental representations. From the point of view of the hearer, linguistic utterances then serve as prompts for the dynamic construction of a mental representation. Crucially, this process of constructing a mental representation is fundamentally tied to human cognition and our knowledge of the world around us. Continue reading “Evolang Previews: Cognitive Construal, Mental Spaces, and the Evolution of Language and Cognition”

Conference session on Theory and evidence in language evolution research

The 43rd Poznań Linguistic Meeting is holding a thematic session on Theory and evidence in language evolution research.  The call is still open, but the deadline is the 15th March.  From the conference description:

The aims of the session can be summarised as follows:

  • to assess the present range of available evidence and to discuss the status of the new sources of evidence
  • to assess the role of theoretical syntheses and holistic scenarios of language emergence and evolution
  • to identify the ways in which linguistic methodologies can be made relevant to answering the ‘origins’ type questions
  • to identify the limitations of linguistic methodologies alone and thus directions of interdisciplinary collaboration
  • to bridge the gap between conceptions of evidence in biology and linguistics