In an effort to update this blog regularly, I’ve decided to take the lazy route and post up a list of abstracts. I’ll only do this once a week, but it’s a useful resource (for me at least), and it will usually be an indicator of which articles I’m going to write about in the near future.
Neuroscience has greatly improved our understanding of the brain basis of abstract lexical and semantic processes. The neuronal devices underlying words and concepts are distributed neuronal assemblies reaching into sensory and motor systems of the cortex and, at the cognitive level, information binding in such widely dispersed circuits is mirrored by the sensorimotor grounding of form and meaning of symbols. Recent years have seen the emergence of evidence for similar brain embodiment of syntax. Neurophysiological studies have accumulated support for the linguistic notion of abstract combinatorial rules manifest as functionally discrete neuronal assemblies. Concepts immanent to the theory of abstract automata could be grounded in observations from modern neuroscience, so that it became possible to model abstract pushdown storage – which is critical for building linguistic tree structure representations – as ordered dynamics of memory circuits in the brain. At the same time, neurocomputational research showed how sequence detectors already known from animal brains can be neuronally linked so that they merge into larger functionally discrete units, thereby underpinning abstract rule representations that syntactically bind lexicosemantic classes of morphemes and words into larger meaningful constituents. Specific predictions of brain-based grammar models could be confirmed by neurophysiological and brain imaging experiments using MEG, EEG and fMRI. Neuroscience and neurocomputational research offering perspectives on understanding abstract linguistic mechanisms in terms of neuronal circuits and their interactions therefore point out programmatic new directions for future theory-guided experimental investigation of the brain basis of grammar.
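The mention of pushdown storage is worth unpacking: in automata theory a pushdown store is just a stack, and it is exactly what lets a recognizer keep track of open constituents and hence handle nested, tree-like structure. As a toy illustration (the textbook automaton, not the paper’s neural model; the bracket symbols are my own stand-in for embedded constituents):

    # Toy sketch of classical pushdown storage, the mechanism the abstract
    # grounds in memory circuits: a stack tracks open constituents, which is
    # what licenses nested (tree-like) structure.
    def accepts_nested(tokens, open_sym="[", close_sym="]"):
        """Accept sequences whose open/close symbols nest properly, e.g. '[[][]]'."""
        stack = []
        for t in tokens:
            if t == open_sym:
                stack.append(t)      # push: a constituent has been opened
            elif t == close_sym:
                if not stack:        # nothing left to close -> ill-formed
                    return False
                stack.pop()          # pop: the innermost open constituent closes
        return not stack             # well-formed only if everything was closed

    print(accepts_nested("[[][]]"))  # True
    print(accepts_nested("[[]"))     # False

A finite-state device with no such store cannot do this for arbitrarily deep embedding, which is why the pushdown result matters for syntax.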
Rich Memory and Distributed Phonology (Port)
It is claimed here that experimental evidence about human speech processing and the richness of memory for linguistic material supports a distributed view of language where every speaker creates an idiosyncratic perspective on the linguistic conventions of the community. In such a system, words are not spelled out in speakers’ memory from uniform letter-like units (whether phones or phonemes), but rather from the rich auditory patterns of speech plus any coupled visual, somatosensory and motor patterns. The evidence is strong that people actually employ high-dimensional, spectro-temporal auditory patterns to support speech production, speech perception and linguistic memory in real time. Abstract phonology (with its phonemes, distinctive features, syllable types, etc.) is actually a kind of social institution – a loose inventory of patterns that evolves over historical time in each human community as a structure with many symmetries and regularities in the community corpus. Linguistics studies the phonological (and grammatical) patterns of various communities of speakers. But linguists should not expect the descriptions they make to be explicitly represented in any individual speaker’s mind, much less in every mind in the community. The alphabet is actually a technology that has imposed itself on our understanding of language.
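To make the “rich memory” idea concrete, here is a toy exemplar-style sketch (my own illustration, not Port’s model, with made-up three-dimensional feature vectors standing in for real spectro-temporal patterns): every heard token is stored, and a new token is recognized by similarity to stored exemplars rather than by matching a string of phonemes.

    # Toy exemplar-memory sketch: classify an incoming token by its nearest
    # stored auditory exemplar. Feature vectors here are invented; real
    # spectro-temporal representations would be far higher-dimensional.
    import math

    exemplars = [
        ("cat", [0.9, 0.1, 0.3]),
        ("cat", [0.8, 0.2, 0.4]),
        ("cap", [0.7, 0.6, 0.2]),
    ]

    def recognize(token_features):
        """Return the label of the nearest stored exemplar."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(exemplars, key=lambda ex: dist(ex[1], token_features))[0]

    print(recognize([0.85, 0.15, 0.35]))  # 'cat'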
We largely agree with the points raised by Evans and Levinson (2009, henceforth, E&L) about linguistic diversity, both regarding spoken languages and signed languages. Here we want to raise three main issues: (1) we expand on E&L’s concern that the metalanguage used in cross-linguistic description may work well in the analyses of some languages, but not so well in others (we pick up on their example of “classifiers” in signed languages); (2) we discuss E&L’s claim that sign languages lack pronouns; and (3) we join E&L in highlighting the importance of sign languages when considering linguistic diversity and for understanding the emergence of new languages. We conclude by stressing the need for all linguists to consider the multimodal nature of language (including gesture and “paralinguistic” characteristics such as intonation and prosody) rather than just the classic linguistic characteristics which are the exclusive focus of much work in mainstream approaches to the study of language.
The Shape and Tempo of Language Evolution (Greenhill et al)
There are approximately 7,000 languages spoken in the world today. This diversity reflects the legacy of thousands of years of cultural evolution. How far back we can trace this history depends largely on the rate at which the different components of language evolve. Rates of lexical evolution are widely thought to impose an upper limit of 6,000–10,000 years on reliably identifying language relationships. In contrast, it has been argued that certain structural elements of language are much more stable. Just as biologists use highly conserved genes to uncover the deepest branches in the tree of life, highly stable linguistic features hold the promise of identifying deep relationships between the world’s languages. Here, we present the first global network of languages based on this typological information. We evaluate the relative evolutionary rates of both typological and lexical features in the Austronesian and Indo-European language families. The first indications are that typological features evolve at similar rates to basic vocabulary but their evolution is substantially less tree-like. Our results suggest that, while rates of vocabulary change are correlated between the two language families, the rates of evolution of typological features and structural subtypes show no consistent relationship across families.
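For a sense of what such a typological network is built from: each language is coded as a vector of structural features, and pairwise distances between those vectors feed a network method such as NeighborNet. A toy sketch with invented languages and feature values (not the paper’s data or method):

    # Toy sketch: pairwise Hamming-style distances over invented typological
    # feature vectors, the kind of matrix a language network is computed from.
    langs = {
        "LangA": [1, 0, 1, 1, 0],
        "LangB": [1, 0, 1, 0, 0],
        "LangC": [0, 1, 0, 0, 1],
    }

    def distance(a, b):
        """Proportion of structural features on which two languages differ."""
        return sum(x != y for x, y in zip(a, b)) / len(a)

    for l1 in langs:
        for l2 in langs:
            if l1 < l2:
                print(l1, l2, round(distance(langs[l1], langs[l2]), 2))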
Working Memory Capacity and the Evolution of Modern Cognitive Potential (Haidle)
Tool use provides the main database for tracking behavioral developments in the archaeological record, and thus in human evolution. Working-memory capacity and modern cognitive potential, however, are not simple and obvious characters of tool behavior. Coded in cognigrams, which allow a direct comparison, animal and human tool use can be examined for specific aspects of working-memory capacity. Detailed studies of the tool behavior of wasps, sea otters, bottlenose dolphins, and chimpanzees are presented and compared with the manufacture and use of Oldowan tools and Lower Paleolithic spears. Although this shows a wide range of problem-solution distances, problem solving in animals seems to be restricted to problem complexes for which a solution can be found in spatial and temporal vicinity. In human evolution, the complexity of tool behavior increases with regard to the number of active foci managed at a time in an action, the number and diversity of operational steps in a problem-solution complex, and the spatial and temporal frame in which solutions are sought. The results suggest a gradual development of the different aspects of a complex capacity rather than a late introduction of a closed phenomenon with only different facets.
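As a very loose illustration of the quantities being compared (and emphatically not Haidle’s cognigram notation), a problem-solution complex can be thought of as a chain of steps, each with its own set of foci to be held in mind; the counts below, on a simplified and invented spear-making scenario, correspond roughly to the dimensions of complexity the abstract lists.

    # Loose illustration of complexity counts over a simplified, invented
    # spear-making chain: number of operational steps, number of distinct
    # foci overall, and the maximum number of foci active at one time.
    spear_making = [
        {"step": "select raw wood",       "foci": {"future spear", "tree"}},
        {"step": "detach branch",         "foci": {"future spear", "tree", "cutting tool"}},
        {"step": "shape and sharpen tip", "foci": {"future spear", "cutting tool"}},
        {"step": "hunt with spear",       "foci": {"spear", "prey"}},
    ]

    n_steps = len(spear_making)
    all_foci = set().union(*(s["foci"] for s in spear_making))
    max_parallel_foci = max(len(s["foci"]) for s in spear_making)

    print(n_steps, len(all_foci), max_parallel_foci)  # 4 steps, 5 foci, 3 at once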
The phonology paper sounds most interesting.
Independently of that, you might want to take a look at my most recent cultural evolution post, which is about language and is more or less based on the ideas of William Croft:
http://new-savanna.blogspot.com/2010/06/cultural-evolution-8-language-games-1.html