Where I’m at on cultural evolution, some quick remarks

I don’t know.

Some notes to myself.

1. Cultural Analogs to Genes and Phenotypes

I’ve spent a fair amount of time off and on over the last two decades hacking away at identifying cultural analogs to biological genes and phenotypes. In the past few years that effort has taken the form of an examination of Dan Dennett’s work. I more or less like the current conceptual configuration, where I’ve got Cultural Beings as an analog to phenotypes and coordinators as analogs to genes. As far as I can tell – and I AM biased, of course – it’s the best such scheme going.

And it just lies there. So what? I don’t see that it allows me to explain anything that can’t otherwise be explained. Nor does it have obvious empirical consequences that one could test in obvious ways. It seems to me mostly a formal exercise at this point. In that respect it is no different from any version of memetics, or from Sperber’s cultural attractor theory. These are all formal exercises with little explanatory value that I can see.

That’s got to change. But how? I note that dealing with words as evolutionary objects seems somewhat different from treating literary works (or musical works and performances, works of visual art, etc.) as evolutionary objects.

Issues: Design, Human Communication

2. Cultural Direction

Perhaps the most interesting work I’ve done in the past year has been my work on Matt Jockers’ Macroanalysis and, just recently, on Underwood and Sellers’ paper on 19th century poetry. In the case of Jockers’ work on the novel, he’d done a study of influence which I’ve reconceptualized as a demonstration that the literary system has a direction. In the case of Underwood and Sellers, they found themselves looking at directionality, but they hadn’t been looking for it. Their problem was to ward off the conceptual ‘threat’ of Whig historicism; they want to see if they can accept the directionality without committing themselves to Whiggishness, and I’ve spent some time arguing that they need not worry.

What excites me is that two independent studies have come up with what look like demonstrations of historical direction. I take this as an indication of the causal structure of the underlying historical process, which encompasses thousands upon thousands of people interacting with and through thousands of texts over the course of a century. What shows up in the texts can be thought of as a manifestation of Geist, and so these studies are about the apparent direction of Geist.

Evolang 11: Call for papers

The next Evolution of Language Conference will take place in New Orleans on March 21–24, 2016. The call for papers is now open.

The deadline for submissions is September 4th.  See the call for papers for more details.

This year there are some notable changes, including double-blind reviewing, electronic proceedings, and the possibility of adding supplementary materials.

I’m looking forward to it already!

What the Songbird Said Radio Programme

BBC Radio 4 has a new radio programme about songbirds and human language, including contributions from Simon Fisher, Katie Slocombe and Johan Bolhuis, among others.

You can listen here:

http://bbc.in/1KAO2Cq

And here’s the synopsis:

Could birdsong tell us something about the evolution of human language? Language is arguably the single thing that most defines what it is to be human and unique as a species. But its origins – and its apparent sudden emergence around a hundred thousand years ago – remain mysterious and perplexing to researchers. Could something called vocal learning provide a vital clue as to how language might have evolved? The ability to learn and imitate sounds – vocal learning – is something that humans share with only a few other species, most notably songbirds. Charles Darwin noticed this similarity as far back as 1871 in The Descent of Man, and in the last couple of decades research has uncovered a whole host of similarities in the way humans and songbirds perceive and process speech and song. But just how useful are animal models of vocal communication in understanding how human language might have evolved? And why is it that there seem to be parallels with songbirds but little evidence that our closest primate relatives, chimps and bonobos, share at least some of our linguistic abilities?

Computational Construction Grammar and Constructional Change

—————————-
Call For Participation
—————————-

Computational Construction Grammar and Constructional Change
Annual Conference of the Linguistic Society of Belgium
8 June 2015, Vrije Universiteit Brussel, Belgium

http://ai.vub.ac.be/bkl-2015

After several decades in scientific purgatory, language evolution has reclaimed its place as one of the most important branches in linguistics, and it is increasingly recognised as one of the most crucial sources of evidence for understanding human cognition. This renewed interest is accompanied by exciting breakthroughs in the science of language. Historical linguists can now couple their expertise to powerful methods for retrieving and documenting which changes have taken place. At the same time, construction grammar is increasingly being embraced in all areas of linguistics as a fruitful way of making sense of all these empirical observations. Construction grammar has also enthused formal and computational linguists, who have developed sophisticated tools for exploring issues in language processing and learning, and how new forms of grammar may emerge in speech populations.

Separately, linguists and computational linguists can therefore explain which changes take place in language and how these changes are possible. When working together, however, they can also address the question of why language evolves over time and how it emerged in the first place. This year, the BKL-CBL conference therefore brings together top researchers from both fields to put evidence and methods from both perspectives on the table, and to take up the challenge of uniting these efforts.

————————
Invited Speakers
————————
The conference features presentations by five keynote speakers.
* Graeme Trousdale (University of Edinburgh)
* Luc Steels (VUB/ IBE Barcelona)
* Kristin Davidse (University of Leuven)
* Peter Petré (University of Lille)
* Arie Verhagen (University of Leiden)

————————
Poster Presentations
————————
We are still accepting 500-word abstracts for poster presentations. All presentations must represent original, unpublished work not currently under review elsewhere. Work presented at the conference can be selected as a contribution for a special issue of the Belgian Journal of Linguistics (Summer 2016).

————————
Important dates
————————
* Abstract Submission: 29 May 2015
* Notification of acceptance: 1 June 2015
* Conference: 8 June 2015

————————
Introductory tutorial on Fluid Construction Grammar
————————
Learn how to write your own operational grammars in Fluid Construction Grammar in our tutorial on 7 and 9 June. The tutorial is practically oriented and mainly consists of hands-on exercises. Participation is free but registration is required.

————————
Organising Committee
————————
* Katrien Beuls, Vrije Universiteit Brussel, Belgium
* Remi van Trijp, Sony Computer Science Laboratories, Paris, France

Follow-up on Dennett and Mental Software

This is a follow-up to a previous post, Dennett’s WRONG: the Mind is NOT Software for the Brain. In that post I agreed with Tecumseh Fitch [1] that the hardware/software distinction for digital computers is not valid for mind/brain. Dennett wants to retain the distinction [2], however, and I argued against that. Here are some further clarifications and considerations.

1. Technical Usage vs. Redescription

I asserted that Dennett’s desire to talk of mental software (or whatever) has no technical justification. All he wants is a different way of describing the same mental/neural processes that we’re investigating.

What did I mean?

Dennett used the term “virtual machine”, which has a technical, if a bit diffuse, meaning in computing. But little or none of that technical meaning carries over to Dennett’s use when he talks of, for example, “the long-division virtual machine [or] the French-speaking virtual machine”. There’s no suggestion in Dennett that a technical knowledge of the digital technique would give us insight into neural processes. So his usage is just a technical label without technical content.

2. Substrate Neutrality

Dennett has emphasized the substrate neutrality of computational and informatic processes. Practical issues of fabrication and operation aside, a computational process will produce the same result regardless of whether it is implemented in silicon, vacuum tubes, or gears and levers. I have no problem with this.

As I see it, taken only this far we’re talking about humans designing and fabricating devices and systems. The human designers and fabricators have a “transcendental” relationship to their devices. They can see and manipulate them whole, top to bottom, inside and out.

But of course, Dennett wants this to extend to neural tissue as well. Once we know the proper computational processes to implement, we should be able to implement a conscious intelligent mind in digital technology that will not be meaningfully different from a human mind/brain. The question here, it seems to me, is: is this possible in principle?

Dennett has recently come to the view that living neural tissue has properties lacking in digital technology [3, 4, 5]. What does that do to substrate neutrality?

Dennett’s WRONG: the Mind is NOT Software for the Brain

And he more or less knows it; but he wants to have his cake and eat it too. It’s a little late in the game to be learning new tricks.

I don’t know just when people started casually talking about the brain as a computer and the mind as software, but it’s been going on for a long time. But it’s one thing to use such language in casual conversation. It’s something else to take it as a serious way of investigating mind and brain. Back in the 1950s and 1960s, when computers and digital computing were still new and the territory – both computers and the brain – relatively unexplored, one could reasonably proceed on the assumption that brains are digital computers. But an opposed assumption – that brains cannot possibly be computers – was also plausible.

The second assumption strikes me as being beside the point for those of us who find computational ideas essential to thinking about the mind, for we can proceed without the somewhat stronger assumption that the mind/brain is just a digital computer. It seems to me that the sell-by date on that one is now past.

The major problem is that living neural tissue is quite different from silicon and metal. Silicon and metal passively take on the impress of purposes and processes humans program into them. Neural tissue is a bit trickier. As for Dennett, no one championed the computational mind more vigorously than he did, but now he’s trying to rethink his views, and that’s interesting to watch.

The Living Brain

In 2014 Tecumseh Fitch published an article in which he laid out a computational framework for “cognitive biology” [1]. In that article he pointed out why the software/hardware distinction doesn’t really work for brains (p. 314):

Neurons are living cells – complex self-modifying arrangements of living matter – while silicon transistors are etched and fixed. This means that applying the “software/hardware” distinction to the nervous system is misleading. The fact that neurons change their form, and that such change is at the heart of learning and plasticity, makes the term “neural hardware” particularly inappropriate. The mind is not a program running on the hardware of the brain. The mind is constituted by the ever-changing living tissue of the brain, made up of a class of complex cells, each one different in ways that matter, and that are specialized to process information.

Yes, though I’m just a little antsy about that last phrase – “specialized to process information” – as it suggests that these cells “process” information in the way that clerks process paperwork: moving it around, stamping it, denying it, approving it, amending it, and so forth. But we’ll leave that alone.

One consequence of the fact that the nervous system is made of living tissue is that it is very difficult to undo what has been learned into the detailed micro-structure of this tissue. It’s easy to wipe a hunk of code or data from a digital computer without damaging the hardware, but it’s almost impossible to do something like that with a mind/brain. How do you remove a person’s knowledge of Chinese history, or their ability to speak Basque, and nothing else, and do so without physical harm? It’s impossible.

ICPhS Phonetic Evolution Meeting. 12/8/2015 in Glasgow

At this year’s International Congress of Phonetic Sciences in Glasgow, there is a special interest satellite meeting on the evolution of phonetic capabilities.

Title: The Evolution of Phonetic Capabilities: Causes, Constraints and Consequences

Date: Wednesday 12th August 2015

Time: 13.30 – 18.30

Place: Glasgow SECC, Boisdale 1

Registration is £10, and can be completed through the ICPhS registration page under “Registration only with no accommodation”.

If you would like to register only for this meeting, without registering for the main ICPhS conference, you can do so by emailing contact@icphs2015.info

For any other queries, contact hannah@ai.vub.ac.be

About:

In recent years, there has been a resurgence in research in the evolution of language and speech. New techniques in computational and mathematical modelling, experimental paradigms, brain and vocal tract imaging, corpus analysis and animal studies, as well as new archeological evidence, have allowed us to address questions relevant to the evolution of our phonetic capabilities. The workshop will focus on recent work addressing the emergence of our phonetic capabilities, with a special focus on the interaction between biological and cultural evolution.

Program:

The Evolution of Phonetic Capabilities: Causes, Constraints and Consequences

Wednesday 12th August – Glasgow SECC, Boisdale 1

13.50 – 14.00 Welcome
14.00 – 14.20 Introduction – Hannah Little
14.20 – 14.50 Laryngeal Articulatory Function and Speech Origins – John H. Esling, Allison Benner & Scott R. Moisik
14.50 – 15.20 Anatomical biasing and clicks: Preliminary biomechanical modeling – Scott R. Moisik & Dan Dediu
15.20 – 15.50 Exploring potential climate effects on the evolution of human sound systems – Seán G. Roberts, Caleb Everett & Damián Blasi
15.50 – 16.20 Coffee Break
16.20 – 16.50 General purpose cognitive processing constraints and phonotactic properties of the vocabulary – Padraic Monaghan & Willem H. Zuidema
16.50 – 17.20 Simulating the interaction of functional pressure, redundancy and category variation in phonetic systems – Bodo Winter & Andy Wedel
17.20 – 17.50 Universality in Cultural Transmission – Bill Thompson
17.50 – 18.30 Discussion Panel – Chaired by Bart de Boer

Has Dennett Undercut His Own Position on Words as Memes?

Early in 2013 Dan Dennett had an interview posted at John Brockman’s Edge site, The Normal Well-Tempered Mind. He opened by announcing that he’d made a mistake early in his career, that he had opted for a conception of the brain-as-computer that was too simple. He’s now trying to revamp his sense of what the computational brain is like. He said a bit about that in that interview, and a bit more in a presentation he gave later in the year: If brains are computers, what kind of computers are they? He made some remarks in that presentation that undermine his position on words as memes, though he doesn’t seem to realize that.

Here’s the abstract of that talk:

Our default concepts of what computers are (and hence what a brain would be if it was a computer) include many clearly inapplicable properties (e.g., powered by electricity, silicon-based, coded in binary), but other properties are no less optional, but not often recognized: Our familiar computers are composed of millions of basic elements that are almost perfectly alike – flipflops, registers, or-gates – and hyper-reliable. Control is accomplished by top-down signals that dictate what happens next. All subassemblies can be designed with the presupposition that they will get the energy they need when they need it (to each according to its need, from each according to its ability). None of these is plausibly mirrored in cerebral computers, which are composed of billions of elements (neurons, astrocytes, …) that are no-two-alike, engaged in semi-autonomous, potentially anarchic or even subversive projects, and hence controllable only by something akin to bargaining and political coalition-forming. A computer composed of such enterprising elements must have an architecture quite unlike the architectures that have so far been devised for AI, which are too orderly, too bureaucratic, too efficient.

While there’s nothing in that abstract that seems to undercut his position on memes, and he affirmed that position toward the end of the talk, we need to look at some of the details.

The Material Mind is a Living Thing

The details concern Terrence Deacon’s recent book, Incomplete Nature: How Mind Emerged from Matter (2013). Rather than quote from Dennett’s remarks in the talk, I’ll quote from his review, “Aching Voids and Making Voids” (The Quarterly Review of Biology, Vol. 88, No. 4, December 2013, pp. 321-324). The following passage may be a bit cryptic, but short of reading the relevant chapters in Deacon’s book (which I’ve not done) and providing summaries, there’s not much I can do, though Dennett says a bit more both in his review and in the video.

Here’s the passage:

But if we are going to have a proper account of information that matters, which has a role to play in getting work done at every level, we cannot just discard the sender and receiver, two homunculi whose agreement on the code defines what is to count as information for some purpose. Something has to play the roles of these missing signal-choosers and signal-interpreters. Many—myself included—have insisted that computers themselves can serve as adequate stand-ins. Just as a vending machine can fill in for a sales clerk in many simplified environments, so a computer can fill in for a general purpose message-interpreter. But one of the shortcomings of this computational perspective, according to Deacon, is that by divorcing information processing from thermodynamics, we restrict our theories to basically parasitical systems, artifacts that depend on a user for their energy, for their structure maintenance, for their interpretation, and for their raison d’être.

In the case of words the signal choosers and interpreters are human beings and the problem is precisely that they have to agree on “what is to count as information for some purpose.” By talking of words as memes, and of memes as agents, Dennett sweeps that problem under the conceptual rug.

Causality in linguistics: Nodes and edges in causal graphs

This coming week I’ll be at the Causality in the Language Sciences conference.  One of the topics of discussion will be how to integrate theories of causality into linguistic work.  Bayesian Causal Graphs are a core approach to causality, and seem like a useful framework for thinking about linguistic problems.  However, it’s not entirely clear whether all questions in linguistics can be represented using causal graphs.  In this post, I’ll discuss some possible uses of Bayesian Causal Graphs, and test the fit of some actual data to some causal structures.  (And please forgive my basic understanding of causality theory!)

Causal graphs are composed of states (nodes) connected by edges.  A change or activation of one state causes a change in another.  States and causes can be categorical and absolute, or statistical and complex in their relations.  Causal graphs are often introduced with the following kind of structure, taken from Pearl’s seminal book on Causality.  The season causes it to rain (in winter) and causes the sprinkler to come on (in summer).  Both the sprinkler being on and rain independently cause the grass to be wet.  If the grass is wet, the grass becomes slippery:

[Figure: Pearl’s sprinkler causal graph – season → rain and sprinkler; rain and sprinkler → wet grass; wet grass → slippery]

This example is easy to understand because each state is binary and (in this simple world) each causal effect is immediate and direct.  However, finding a similar example for linguistics is tricky.  Linguists may simply not agree on what the nodes are or what the edges represent.
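To make the structure concrete, the sprinkler network above can be sketched as a tiny forward-sampling simulation in Python. Only the graph structure comes from Pearl’s example; the conditional probabilities here are invented for illustration, and the wet/slippery links are made deterministic for simplicity:

```python
import random

def sample_sprinkler_world(season, rng=random):
    """Sample one state of the sprinkler network given the season (the root node).
    The probabilities are made-up illustrative numbers, not Pearl's."""
    p_rain = 0.7 if season == "winter" else 0.1       # season -> rain
    p_sprinkler = 0.1 if season == "winter" else 0.6  # season -> sprinkler
    rain = rng.random() < p_rain
    sprinkler = rng.random() < p_sprinkler
    wet = rain or sprinkler   # rain and sprinkler independently cause wet grass
    slippery = wet            # wet grass -> slippery
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet, "slippery": slippery}

# Estimate P(wet grass | season = summer) by forward sampling.
# Analytically it is 1 - (1 - 0.1)(1 - 0.6) = 0.64 under these toy numbers.
random.seed(0)
samples = [sample_sprinkler_world("summer") for _ in range(10000)]
p_wet_summer = sum(s["wet"] for s in samples) / len(samples)
```

Because each node depends only on its parents, sampling in this topological order (season, then rain and sprinkler, then wet, then slippery) is all it takes to simulate the joint distribution; the question for linguistics is whether the nodes and edges can be pinned down as cleanly as this.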


Q: Why is the Dawkins Meme Idea so Popular?

A: Because it is daft.

I believe there are two answers to that question. For most people it’s convenient. That requires one explanation, which I’ll run through first.

For some people, however, memetics is more than convenient. Some, including Dawkins himself and his philosophical acolyte, Dan Dennett, use it as a way of explaining religion. In that role the meme idea is attractive because it is, or has evolved into, an egregiously bad idea, one almost as irrational as the religious ideas whose popularity it is supposed to explain away. By analogy to an argument Dawkins himself has made about religion, that makes memetics the perfect vehicle for the affirmation of materialist faith.

But I don’t want to go there yet. Let’s work into it.

Ordinary Memetics

When Dawkins first proposed the idea in The Selfish Gene (1976), it wasn’t a bad idea—nor even a new one. Ted Cloak, among others, got there first, but not with the catchy name. Having worked hard to conceptualize the gene as a replicator, Dawkins was looking for another set of examples, and coined the term “meme” as a replicator for culture. The word, and the idea, caught on and soon talk of memes was flying all over the place.

I suspect that the spread of computer technology is partially responsible for the cultural climate in which the meme idea found a home. Computers ‘level’ everything into bits: words, pictures, videos, numbers, computer programs of all kinds, simulations of explosions, traffic flow, moon landings, everything becomes bits: bits, bits, and more bits. The meme concept simply ‘levels’ all of culture—songs, recipes, costumes, paintings, hazing rituals, etc.—into the uniform substance of memes.

What is culture? Memes.

Simple and useful. As long as you don’t try to push it very far.


In his original exposition Dawkins was a bit equivocal as to whether memes existed inside the brain or outside in the external world. Thus at one point he refers to one copy of ‘Auld Lang Syne’ existing in his brain and other copies existing in a song book (p. 194). This issue became a matter of debate among the relatively small community of thinkers who were attempting to develop an intellectually rigorous theory of cultural evolution. Some of us were known as externalists while others were internalists.

I was and am an externalist. I’ve stated my position at length in two papers published in the mid-1990s (Culture as an Evolutionary Arena and Culture’s Evolutionary Landscape) and more recently in an extensive series of notes, The Evolution of Human Culture. I have no intention of rehearsing those arguments here. But I want to make one point, and that is about human language.
