James and I are presenting a poster at Digital HSS’s Digital Scholarship conference on the nomothetic approach. Here’s a sneak peek at the current draft of the poster, although who knows how much it’ll change by the time I print it tomorrow! Some of the themes are referenced in this post. We’ll also be giving a talk about this topic at EvoLang in the workshop on constructive approaches to language evolution.
23rd Feb, The Business School, Buccleuch Place, University of Edinburgh. I’ll be there to answer questions during lunch.
Winters, J. & Roberts, S. (2012) Proving anything is possible in the dataverse: Limitations of the nomothetic approach to social science. Presented at Digital Scholarship: a day of ideas, Digital HSS, University of Edinburgh.
Why do you blog about your research? Why do you read other blogs? Does blogging improve your employability? Are there hidden advantages to blogging?
How would you convince an undergraduate to start blogging?
I’m giving a talk during Edinburgh’s ominous ‘Innovative Learning Week’ on how and why to blog about your research (more details here). One of the key messages I hoped to convey was that blogging helps your research by crowd-sourcing criticism: If you put something up on the web, someone might help you.
So there I was trying to come up with reasons about why you should blog, when I realised: I could put the talk online and see if anyone helps. Insight fail.
So, why do you blog? Has it helped your career?
So far, my main source of facts about the question above has been Geißler et al. 2011, who survey geoscience bloggers. They find, in line with the general blogging community, that the majority of bloggers are male, and about half are from the USA. Graduate students and university faculty make up the largest proportion, with freelancers and industry bloggers coming next. There are proportionately few undergraduates who blog.
The most frequently stated reasons for writing for a blog are to share knowledge, to popularise the field, to have fun and to improve writing abilities.
Here are their results for the sources of inspiration and perception of blogging:
Readers might remember discussion of Stephen Fry’s language documentaries. Most memorable for me were shots of Stephen walking along sandy beaches, waxing lyrical about language. This new documentary about animal intelligence shares some of these elements (sandy beaches, far-flung destinations), but crucially, Liz Bonnin is more than an enthusiastic observer – she is not just an engaging television presenter, but a Real Scientist (!!), with a Bachelor’s in Biochemistry and a Master’s in Wild Animal Biology.
It’s a jam-packed schedule: Eleven plenary talks, four parallel sessions of ordinary talks (only 15 minutes + 5 for questions) and over 50 posters. There are also five workshops the day before the main conference.
Evolution of word frequency distribution based on prediction dynamics
Constructive knowledge: Nomothetic approaches to language evolution
An evolutionary game model of building a language convention in a language contact situation
Reconsidering language evolution from coevolution of learning and niche construction using a concept of dynamic fitness landscape
Language diversity in the naming game on adaptive weighted networks
Synthetic modeling of cultural language evolution
A simple model on the evolution process of herbivore-induced plant volatiles
Hybrid approach for combining multiple levels of abstraction
From signs’ life cycle regularities to mathematical modelling of language evolution: explaining the mechanism for the formation of words’ synchronous polysemy and frequency of use distributions
I’m intrigued to find out what “herbivore-induced plant volatiles” can teach us about Language Evolution.
There are a few talks by members of Replicated Typo:
Talks
The Evolution of Morphological Agreement – Richard Littauer
Constructive knowledge: Nomothetic approaches to language evolution – Sean Roberts and James Winters
Posters
Cognitive Construal, Mental Spaces and the Evolution of Language and Cognition – Michael Pleyer
Re-Dating the Loss of Laryngeal Air Sacs in Homo sapiens – Richard Littauer (an extension of this work)
A Bottom-Up Approach to Language Evolution – Sean Roberts
Chris Manning and Dan Jurafsky are running a free online 8-week course on Natural Language Processing to students worldwide, January 23rd – March 18th 2012:
For those of you who know students or colleagues who might be looking for an introduction to NLP next quarter, encourage them to join us and the 40,000 students who have already registered for the course!
Students have access to screencast lecture videos, are given quiz questions, review exams and programming assignments in Java or Python, receive regular feedback on progress, and can participate in a discussion forum.
The course covers a broad range of topics in natural language processing at the advanced undergraduate or introductory graduate level, including word and sentence tokenization, text classification and sentiment analysis, spelling correction, information extraction, parsing, meaning extraction, and question answering. We will also introduce the underlying theory from probability, statistics, and machine learning that is crucial for the field, and cover fundamental algorithms like n-gram language modeling, Naive Bayes and maxent classifiers, sequence models, probabilistic dependency and constituent parsing, and vector-space models of meaning.
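To give a flavour of the kind of thing the course covers, here’s a minimal sketch of n-gram language modeling, one of the fundamental algorithms listed above. The toy corpus and function names are my own, not from the course; this is a maximum-likelihood bigram model with no smoothing:

```python
from collections import Counter

# A hypothetical toy corpus, just to illustrate the counting.
corpus = "the cat sat on the mat the cat ran".split()

# Count bigrams, and occurrences of each word as a bigram's first element.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def bigram_prob(w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("the", "cat"))  # 2 of the 3 occurrences of "the" precede "cat"
```

A real system would add smoothing (so unseen bigrams don’t get probability zero), which is exactly the sort of refinement a course like this works through.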
Commentators have already got hung up on whether English became simplified before or after spreading, but this misses the impact of the article: there is an alternative approach to linguistics which looks at the differences between languages and recognises social factors as the primary source of linguistic change. Furthermore, these ideas are testable using statistics and genetic methods. It’s a pity the article didn’t mention the possibility of experimental approaches, including Gareth Roberts’ work on emerging linguistic diversity and work on cultural transmission using the Pictionary paradigm (Simon Garrod, Nick Fay, Bruno Galantucci, see here and here).
David Robson (2011). Power of Babel: Why one language isn’t enough. New Scientist, 2842. Online.
Replicated Typo 2.0 has reached 100,000 hits! The most popular search term that leads visitors here is ‘What makes humans unique?’ and part of the answer has to be our ability to transmit our culture. But as we’ve shown on this blog, culturally transmitted features can be highly correlated with each other. This fact is a source of both frustration and fascination, so I’ve roped together some of my favourite investigations of cultural correlations into a correlation super-chain. In addition, there’s a whole new spurious correlation at the end of the article!
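One reason spurious correlations between culturally transmitted features are so easy to find is that historically related traits drift together: two completely independent time series can still correlate strongly. Here’s a small sketch of that statistical point (my own illustration, not one of the chain’s actual correlations), using two independent random walks:

```python
import random

random.seed(1)

def random_walk(n):
    """An accumulating series: each step adds Gaussian noise to the last value."""
    x, walk = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        walk.append(x)
    return walk

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

a, b = random_walk(500), random_walk(500)
print(round(pearson(a, b), 2))  # often far from 0, despite total independence
```

Because each walk wanders away from its mean rather than hovering around it, chance trends produce large correlations, which is why cross-cultural comparisons need controls for shared history.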
Noam Chomsky gave a lecture on the poverty of the stimulus at UCL responding to topics such as language evolution and artificial language learning experiments. From about 89 minutes in he discusses iterated learning and language evolution, saying the conclusions derive from “serious illusions about evolution”:
Chomsky’s criticism of iterated learning experiments (see posts here and here) is based on two points. First, the emergence of structure is more to do with the intelligence of the modern humans taking part in the experiment than with a realistic language-evolution scenario. He suggests that structure would not emerge in a series of computer programs without human intelligence. As a colleague pointed out, however, the first iterated learning experiments used computational models of this kind. Secondly, he suggests that the view of evolution employed in the explanation of these systems is a pop-psychology, gradual hill-climbing one. In fact, Chomsky claims, the evolution of traits such as language or eyes derives from single, frozen accidents. That is, evolution moves in leaps and bounds rather than small steps (Jim Hurford recently gave a lecture entitled ‘Reconciling linguistic jerks and biological creeps’ on this topic). Why else would humans be the only species with language?
Geoffrey Pullum counters this last point by asking why an innately specified UG would emerge so rapidly, but then freeze for tens of thousands of years, when (borrowing Philip Lieberman’s point) traits such as lactose tolerance have emerged in the human genome within two thousand years. Chomsky gives some examples of traits that have developed rapidly, but then only changed marginally.
I don’t think that proponents of iterated learning paradigms would have a problem with a sudden emergence of a capacity for advanced linguistic communication. Although there is a continuity between human and non-human communication systems, we have some tricks that other animals don’t (see Michael’s post here). However, the evolution of the structure of language after these mutations could owe a huge amount to processes of cultural transmission. The universals we see in the world’s languages, then, would be an amplification of weak biological biases.
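The amplification idea can be sketched in code. This is a deliberately minimal toy, not a reproduction of the actual iterated learning experiments or their Bayesian models: each generation learns a variant from a handful of noisy observations of the previous generation, and a weak prior bias is consulted only when the evidence is ambiguous. All names and parameters here are my own invention:

```python
import random

random.seed(0)

def learn(data, bias=0.55):
    """Pick a variant from observed data. The weak prior preference for
    'A' (bias) only comes into play when the evidence is a tie."""
    a = data.count("A")
    if a * 2 == len(data):                       # ambiguous evidence
        return "A" if random.random() < bias else "B"
    return "A" if a * 2 > len(data) else "B"

def chain(generations=50, utterances=4, noise=0.3):
    """A transmission chain: each learner observes noisy productions of
    the previous learner's variant, then becomes the next teacher."""
    variant = "B"                                # start away from the bias
    for _ in range(generations):
        data = [variant if random.random() > noise
                else ("A" if variant == "B" else "B")
                for _ in range(utterances)]
        variant = learn(data)
    return variant
```

Running many chains and counting how often they end on ‘A’ shows the population-level outcome shifted toward the biased variant, even though the bias only acts on ambiguous data: repeated transmission gives a weak individual-level preference many chances to apply.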
However, Chomsky seems disillusioned with the whole field of what he calls ‘the evolution of communication’. At least we didn’t get it as bad as exemplar theory, which he dismisses as “so outlandish it’s not worth thinking about”.
[Edit: I originally attributed Mark Liberman instead of Philip Lieberman. Now I’ve made this error in both directions!]
Last week we had a lecture from Anvita Abbi on rare linguistic structures in Great Andamanese – a language spoken in the Andaman Islands. The indigenous populations of the Andaman Islands lived in isolation for tens of thousands of years until the 19th Century, but still exhibit some common features of south-east Asian languages such as retroflex consonants. This could be evidence for the migration route of humans from India to Australia. Indeed, recent genetic research has shown that the Andamanese are descendants of the first human migration from Africa in the Palaeolithic, though Abbi suggested that the linguistic evidence is also a strong marker of human migration and an “important repository of our shared human history and civilization”.
Although the similarities are fascinating for studies of cultural evolution, the rarity of some structures in Great Andamanese is even more intriguing.
After passing my final exams I feel that I can relax a bit and have the time to read a book again. So instead of reading a book that I need to read purely for ‘academic reasons’, I thought I’d pick one I’d thoroughly enjoy: James Hurford’s “The Origins of Grammar”, which clocks in at a whopping 808 pages.
I’m still reading the first chapter (which you can read for free here) but I thought I’d share some of his analyses of “Animal Syntax.”
Hurford’s general conclusion is that despite what you sometimes read in the popular press,
“No non-human has any semantically compositional syntax, where the form of the syntactic combination determines how the meanings of the parts combine to make the meaning of the whole.”
The crucial notion here is that of compositionality. Hurford argues that we can find animal calls and songs that are combinatorial, that is songs and calls in which elements are put together according to some kind of rule or pattern. But what we do not find, he argues, are the kinds of putting things together where the elements put together each have a specified meaning and the whole song, call or communicative assembly “means something which is a reflection of the meanings of the parts.”
To illustrate this, Hurford cites the call system of putty-nosed monkeys (Arnold and Zuberbühler 2006). These monkeys have only two different call signals in their repertoire, a ‘pyow’-sound that ‘means’, roughly, ‘LEOPARD’; and a ‘hack’ sound that ‘means’, roughly, ‘EAGLE’.
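Hurford’s distinction can be made concrete with a toy sketch (my own illustration, not his, and the “MOVE” combination is a hypothetical holistic signal): a merely combinatorial system allows sequences of calls, but the sequence’s meaning is fixed and unanalysable, whereas a compositional system derives the meaning of the whole from the meanings of the parts plus the rule of combination:

```python
lexicon = {"pyow": "LEOPARD", "hack": "EAGLE"}

def combinatorial_meaning(call_sequence):
    """Combinatorial but not compositional: the sequence as a whole has
    a fixed holistic meaning, unrelated to the meanings of its parts."""
    holistic = {("pyow", "hack"): "MOVE"}     # hypothetical holistic signal
    return holistic.get(tuple(call_sequence))

def compositional_meaning(call_sequence):
    """Compositional: the meaning of the whole is built from the parts
    plus the rule of combination -- here, simple conjunction."""
    return " AND ".join(lexicon[c] for c in call_sequence)

print(combinatorial_meaning(["pyow", "hack"]))  # MOVE
print(compositional_meaning(["pyow", "hack"]))  # LEOPARD AND EAGLE
```

In the first function, knowing what ‘pyow’ and ‘hack’ mean tells you nothing about what the combination means; in the second, it tells you everything. Hurford’s claim is that only the first kind of system is attested in non-human communication.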