Cultural Evolution and Oral Tradition: ‘Information transfer’ at the micro scale

It’s clear that one problem I have with Dennett’s memetics is this: his conception of the face-to-face mechanisms of cultural evolution – like the transfer of information from one computer to another – seems rather thin, unrealistically so. I tend to think that meaning is something arrived at through negotiation, whereas Dennett writes as though one-shot, one-way ‘information transfer’ is sufficient to the process.

I want to present some passages from David Rubin, Memory in Oral Tradition: The Cognitive Psychology of Epic, Ballads, and Counting-out Rhymes (Oxford UP 1995) that I think merit close consideration. These are passages about oral epic and so are relevant to thinking about folktales, myth and such, stories that are held in memory and delivered to an audience without benefit of written prompt. One thing we need to keep in mind is that, in oral culture, the notion of faithful repetition is not the same as it is in literate culture. In the literate world repetition means word-for-word. In oral cultures it does not. A faithful recounting of a story is one where the same characters are involved in the same (major) incidents in (pretty much) the same order. Word-for-word recounting is not required; in fact, such a notion is all but meaningless. With no written (or otherwise recorded) verification, how do you tell?

This passage illustrates that nicely (pp. 137-138):

Avdo Medjedovic was the best singer recorded by Lord and Parry. An example of his learning a new song provides insights into what it is that the poetic-language learner must learn about his genre (Lord, 1960; Lord & Bynum, 1974). A singer sang a song of 2,294 lines that Avdo Medjedovic had never heard before. When the song was finished, Avdo Medjedovic was asked if he could sing the same song. He did, only now the song was 6,313 lines long. The basic story line remained the same, but, to use Lord’s description, “the song lengthened, the ornamentation and richness accumulated, and the human touches of character, touches that distinguish Avdo Medjedovic from other singers, imparted a depth of feeling that had been missing” (p. 78). Avdo Medjedovic’s song retold the same story in his own words, much as subjects in a psychology experiment would retell a story from a genre with which they were familiar, but Avdo Medjedovic’s own words were poetic language and his story was a song of high artistic quality. Although the particular words changed, the words added were all traditional; and so the stability of the tradition, if not the stability of the words of a particular telling of a story, was ensured.

Several aspects of this feat are of interest. First, the song was composed without preparation and sung at great speed. There was no time for preparation before the 6,313 lines were sung, and once the song began, the rhythm allowed little time for Avdo Medjedovic to stop and collect his thoughts. Such a feat implies a well-organized memory and the equivalent of an efficient set of rules for production. Second, the song expanded yet remained traditional in style, demonstrating that more than a particular song was being recalled. Rather, rules or parts drawn from other songs were being used. Third, although Avdo Medjedovic was creative by any standards, he was not trying to create a novel song; he believed that he was telling a true story just the way he had heard it, though perhaps a little better. To do otherwise would be to distort history.

So, an expert listens to a story that runs to 2,294 lines and then immediately repeats it back, embellished to 6,313 lines. Would he be able to do the same thing the next day, or ten days or a year later? Probably.
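To make the oral-culture criterion concrete in a toy way – same characters, same major incidents, in roughly the same order, with the wording free to vary – here is a minimal sketch in Python. The event representation, the field names, and the thresholds are all my own inventions for illustration; nothing here comes from Rubin or Lord.

from difflib import SequenceMatcher

def skeleton(telling):
    """Reduce a telling to its ordered (character, incident) pairs."""
    return [(event["character"], event["incident"]) for event in telling]

def faithful(original, retelling, min_overlap=0.8):
    """'Faithful' in the oral-culture sense: the same major incidents,
    involving the same characters, in roughly the same order."""
    a, b = skeleton(original), skeleton(retelling)
    shared = len(set(a) & set(b)) / max(len(set(a)), 1)
    order_similarity = SequenceMatcher(None, a, b).ratio()
    return shared >= min_overlap and order_similarity >= 0.5

original = [
    {"character": "hero", "incident": "departs for war"},
    {"character": "hero", "incident": "besieges the city"},
    {"character": "hero", "incident": "returns home"},
]
# A retelling may be far longer and worded entirely differently; only the
# skeleton of characters and incidents gets checked.
retelling = original + [{"character": "hero", "incident": "feasts with his kin"}]
print(faithful(original, retelling))  # True: same story, embellished

The point of the toy is only that ‘faithful’ here is a structural comparison over characters and incidents, not a string comparison over words – which is exactly why word-for-word fidelity is beside the point in an oral setting.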

Where I’m at on cultural evolution, some quick remarks

I don’t know.

Some notes to myself.

1. Cultural Analogs to Genes and Phenotypes

I’ve spent a fair amount of time, off and on over the last two decades, hacking away at identifying cultural analogs to biological genes and phenotypes. In the past few years that effort has taken the form of an examination of Dan Dennett’s work. I more or less like the current conceptual configuration, where I’ve got Cultural Beings as analogs to phenotypes and coordinators as analogs to genes. As far as I can tell – and I AM biased, of course – it’s the best such scheme going.

And it just lies there. So what? I don’t see that it allows me to explain anything that can’t otherwise be explained. Nor does it have empirical consequences that one could test in any obvious way. It seems to me mostly a formal exercise at this point. In that respect it is no different from any version of memetics, nor from Sperber’s cultural attractor theory. These are all formal exercises with little explanatory value that I can see.

That’s got to change. But how? I note that dealing with words as evolutionary objects seems somewhat different from treating literary works (or musical works and performances, works of visual art, etc.) as evolutionary objects.

Issues: Design, Human Communication

2. Cultural Direction

Perhaps the most interesting work I’ve done in the past year has been my work on Matt Jockers’ Macroanalysis and, just recently, on Underwood and Sellers’ paper on 19th century poetry. In the case of Jockers’ work on the novel, he’d done a study of influence which I’ve reconceptualized as a demonstration that the literary system has a direction. In the case of Underwood and Sellers, they found themselves looking at directionality, but they hadn’t been looking for it. Their problem was to ward off the conceptual ‘threat’ of Whig historicism; they want to see whether they can accept the directionality without committing themselves to Whiggishness, and I’ve spent some time arguing that they need not worry.

What excites me is that two independent studies have come up with what look like demonstrations of historical direction. I take this as an indication of the causal structure of the underlying historical process, which encompasses thousands upon thousands of people interacting with and through thousands of texts over the course of a century. What shows up in the texts can be thought of as a manifestation of Geist, and so these studies are about the apparent direction of Geist.

Evolang 11: Call for papers

The next Evolution of Language Conference will take place in New Orleans on March 21–24, 2016. The call for papers is now open.

The deadline for submissions is September 4th.  See the call for papers for more details.

This year there are some notable changes, including double-blind reviewing, electronic proceedings, and the possibility of adding supplementary materials.

I’m looking forward to it already!

Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015

Here I’m just thinking out loud. I want to play around a bit.

Conrad’s Heart of Darkness is well within the 1820-1919 time span covered by Underwood and Sellers in How Quickly Do Literary Standards Change?, while Austen’s Pride and Prejudice, published in 1813, is a bit before. And both are novels, while Underwood and Sellers wrote about poetry. But these are incidental matters. My purpose is to think about literary history and the direction of cultural change, which is front and center in their inquiry. But I want to think about that topic in a hypothetical mode that is quite different from their mode of inquiry.

So, how likely is it that a book like Heart of Darkness would have been published in the second decade of the 19th century, when Pride and Prejudice was published? A lot, obviously, hangs on that word “like”. For the purposes of this post, likeness means similarity in the sense that Matt Jockers defined in Chapter 9 of Macroanalysis. For all I know, such a book may well have been published; if so, I’d like to see it. But I’m going to proceed on the assumption that such a book doesn’t exist.
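To give that notion of likeness a bit of operational content, here is a minimal sketch of the kind of computation I have in mind: each book becomes a vector of stylistic and thematic features, and likeness is closeness in that feature space. The feature values, the function name, and the choice of a plain Euclidean distance are placeholders of my own, not Jockers’ actual pipeline.

import numpy as np

def likeness_ranking(features, titles, query, k=2):
    """Rank books by distance to the query book in a shared feature space."""
    X = np.asarray(features, dtype=float)
    # Standardize each feature so no single one dominates the distance.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    q = X[titles.index(query)]
    distances = np.linalg.norm(X - q, axis=1)
    order = np.argsort(distances)
    return [(titles[i], round(float(distances[i]), 3)) for i in order[1:k + 1]]

titles = ["Pride and Prejudice", "Heart of Darkness", "Hypothetical 1813 novel"]
features = [                 # e.g. topic proportions and stylistic rates, per book
    [0.40, 0.05, 0.20],
    [0.05, 0.45, 0.10],
    [0.10, 0.40, 0.12],
]
print(likeness_ranking(features, titles, "Heart of Darkness"))

On made-up numbers like these the ranking is meaningless; the sketch is only meant to show that “a book like Heart of Darkness” can be cashed out as “a book that lands near Heart of Darkness in some well-defined feature space.”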

The question I’m asking is whether the literary system operated in such a way that such a book was very unlikely to be written. If so, what changed so that the literary system was able to produce such a book almost a century later?

What characteristics of Heart of Darkness would have made it unlikely/impossible to publish such a book in 1813? For one thing, it involved a steamship, and steamships didn’t exist at that time. This strikes me as a superficial matter given the existence of ships of all kinds and their extensive use for transport on rivers, canals, lakes, and oceans.

Another superficial impediment is the fact that Heart is set in the Belgian Congo, and the Congo wasn’t colonized until the last quarter of the century. European colonialism was quite extensive by that time, and much of it was quite brutal. So far as I know, the British novel in the early 19th century did not concern itself with the brutality of colonialism. Why not? Correlatively, the British novel of the time was very much interested in courtship and marriage, topics not central to Heart, though not entirely absent from it either.

The world is a rich and complicated affair, bursting with stories of all kinds. But some kinds of stories are more salient in a given tradition than others. What determines the salience of a given story, and what drives changes in salience over time? What happened to make colonial brutality so salient at the turn of the 20th century?

Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking

In the middle of their most interesting and challenging paper, How Quickly Do Literary Standards Change?, Underwood and Sellers have two paragraphs in which they raise the specter of Whig history and banish it. In the process they take some gratuitous swipes at Darwin and Lamarck and, by implication, at the idea that evolutionary thinking can be of benefit to literary history. I find these two paragraphs confused and confusing and so feel a need to comment on them.

Here’s what I’m doing: First, I present those two paragraphs in full, without interruption. That’s so you can get a sense of how their thought hangs together. Second, and the bulk of this post, I repeat those two paragraphs, in full, but this time with inserted commentary. Finally, I conclude with some remarks on evolutionary thinking in the study of culture.

Beware of Whig History

By this point in their text Underwood and Sellers have presented their evidence and their basic, albeit unexpected, finding that change in English-language poetry from 1820 to 1919 is continuous and in the direction of standards implicit in the choices made by 14 selective periodicals. They’ve even offered a generalization that they think may well extend beyond the period they’ve examined (p. 19): “Diachronic change across any given period tends to recapitulate the period’s synchronic axis of distinction.” While I may get around to discussing that hypothesis – which I like – in another post, we can set it aside for the moment.

I’m interested in two paragraphs they write in the course of showing how difficult it will be to tease a causal model out of their evidence. Those paragraphs are about Whig history. Here they are in full and without interruption (pp. 20-21):

Nor do we actually need a causal explanation of this phenomenon to see that it could have far-reaching consequences for literary history. The model we’ve presented here already suggests that some things we’ve tended to describe as rejections of tradition — modernist insistence on the concrete image, for instance — might better be explained as continuations of a long-term trend, guided by established standards. Of course, stable long-term trends also raise the specter of Whig history. If it’s true that diachronic trends parallel synchronic principles of judgment, then literary historians are confronted with material that has already, so to speak, made a teleological argument about itself. It could become tempting to draw Lamarckian inferences — as if Keats’s sensuous precision and disillusionment had been trying to become Swinburne all along.

We hope readers will remain wary of metaphors that present historically contingent standards as an impersonal process of adaptation. We don’t see any evidence yet for analogies to either Darwin or Lamarck, and we’ve insisted on the difficulty of tracing causality exactly to forestall those analogies. On the other hand, literary history is not a blank canvas that acquires historical self-consciousness only when retrospective observers touch a brush to it. It’s already full of historical observers. Writing and reviewing are evaluative activities already informed by ideas about “where we’ve been” and “where we ought to be headed.” If individual writers are already historical agents, then perhaps the system of interaction between writers, readers, and reviewers also tends to establish a resonance between (implicit, collective) evaluative opinions and directions of change. If that turns out to be true, we would still be free to reject a Whiggish interpretation, by refusing to endorse the standards that happen to have guided a trend. We may even be able to use predictive models to show how the actual path of literary history swerved away from a straight line. (It’s possible to extrapolate a model of nineteenth-century reception into the twentieth, for instance, and then describe how actual twentieth-century reception diverged from those predictions.) But we can’t strike a blow against Whig history simply by averting our eyes from continuity. The evidence we’re seeing here suggests that literary-historical trends do turn out to be relatively coherent over long timelines.

I agree with those last two sentences. It’s how Underwood and Sellers get there that has me a bit puzzled.

Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction

I’ve read and been thinking about Underwood and Sellers 2015, How Quickly Do Literary Standards Change?, both the blog post and the working paper. I’ve got a good many thoughts about their work and its relation to the superficially quite different work that Matt Jockers did on influence in chapter nine of Macroanalysis. I am, however, somewhat reluctant to embark on what might become another series of long-form posts, which I’m likely to need in order to sort out the intuitions and half-thoughts that are buzzing about in my mind.

What to do?

I figure that at the least I can just get it out there, quick and crude, without a lot of explanation. Think of it as a mark in the sand. More detailed explanations and explorations can come later.

19th Century Literary Culture has a Direction

My central thought is this: Both Jockers on influence and Underwood and Sellers on literary standards are looking at the same thing: long-term change in 19th Century literary culture has a direction – where that culture is understood to include readers, writers, reviewers, publishers and the interactions among them. Underwood and Sellers weren’t looking for such a direction, but have (perhaps somewhat reluctantly) come to realize that that’s what they’ve stumbled upon. Jockers seems a bit puzzled by the model of influence he built (pp. 167-168); but in any event, he doesn’t recognize it as a model of directional change. That interpretation of his model is my own.

When I say “direction” what do I mean?

That’s a very tricky question. In their full paper Underwood and Sellers devote two long paragraphs (pp. 20-21) to warding off the spectre of Whig history – the horror! the horror! In the Whiggish view, history has a direction, and that direction is a progression from primitive barbarism to the wonders of (current Western) civilization. When they talk of direction, THAT’s not what Underwood and Sellers mean.

But just what DO they mean? Here’s a figure from their work:

[Figure: 19C Direction]

Notice that we’re depicting time along the X-axis (horizontal), from roughly 1820 at the left to 1920 on the right. Each dot in the graph, regardless of color (red, gray) or shape (triangle, circle), represents a volume of poetry, and its position on the X-axis is the volume’s publication date.

But what about the Y-axis (vertical)? That’s tricky, so let us set that aside for a moment. The thing to pay attention to is the overall relation of these volumes of poetry to that axis. Notice that as we move from left to right, the volumes seem to drift upward along the Y-axis, a drift that’s easily seen in the trend line. That upward drift is the direction that Underwood and Sellers are talking about. That upward drift was not at all what they were expecting.

Drifting in Space

But what does the upward drift represent? What’s it about? It represents movement in some space, and that space represents poetic diction or language. What we see along the Y-axis is a one-dimensional reduction or projection of a space that in fact has 3200 dimensions. Now, that’s not how Underwood and Sellers characterize the Y-axis. That’s my reinterpretation of that axis. I may or may not get around to writing a post in which I explain why that’s a reasonable interpretation.
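For readers who want something concrete to hang that on, here is a toy version of the kind of projection I have in mind: high-dimensional word-frequency vectors reduced to a single score, which is then plotted against publication year. This is emphatically not Underwood and Sellers’ model – the data below are synthetic, the projection is an off-the-shelf principal component, and a drift is built in so the trend line has something to find.

import numpy as np

rng = np.random.default_rng(0)
n_volumes, n_features = 200, 3200           # ~3200 word-frequency features
years = rng.integers(1820, 1920, n_volumes)

X = rng.normal(size=(n_volumes, n_features))
drift = (years - 1820) / 25.0               # exaggerated, purely for illustration
X[:, :100] += drift[:, None]                # drift spread across 100 features

# One-dimensional reduction: project every volume onto the first principal component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]                          # each volume's position on the Y-axis
if np.corrcoef(years, score)[0, 1] < 0:     # the sign of a PC is arbitrary
    score = -score

# The "upward drift": a linear trend of score against year.
slope, _ = np.polyfit(years, score, 1)
print(f"trend: {slope:.3f} score units per year")

The design point is simply that a single Y-axis can stand in for movement through a very high-dimensional space of diction, which is all my reinterpretation requires.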

What the Songbird Said Radio Programme

BBC Radio 4 has a new radio programme about songbirds and human language, including contributions from Simon Fisher, Katie Slocombe and Johan Bolhuis, among others.

You can listen here:

http://bbc.in/1KAO2Cq

And here’s the synopsis:

Could birdsong tell us something about the evolution of human language? Language is arguably the single thing that most defines what it is to be human and unique as a species. But its origins – and its apparent sudden emergence around a hundred thousand years ago – remains mysterious and perplexing to researchers. But could something called vocal learning provide a vital clue as to how language might have evolved? The ability to learn and imitate sounds – vocal learning – is something that humans share with only a few other species, most notably, songbirds. Charles Darwin noticed this similarity as far back as 1871 in the Descent of Man and in the last couple of decades, research has uncovered a whole host of similarities in the way humans and songbirds perceive and process speech and song. But just how useful are animal models of vocal communication in understanding how human language might have evolved? Why is it that there seem to be parallels with songbirds but little evidence that our closest primate relatives, chimps and bonobos, share at least some of our linguistic abilities?

Computational Construction Grammar and Constructional Change

—————————-
Call For Participation
—————————-

Computational Construction Grammar and Constructional Change
Annual Conference of the Linguistic Society of Belgium
8 June 2015, Vrije Universiteit Brussel, Belgium

http://ai.vub.ac.be/bkl-2015

After several decades in scientific purgatory, language evolution has reclaimed its place as one of the most important branches in linguistics, and it is increasingly recognised as one of the most crucial sources of evidence for understanding human cognition. This renewed interest is accompanied by exciting breakthroughs in the science of language. Historical linguists can now couple their expertise to powerful methods for retrieving and documenting which changes have taken place. At the same time, construction grammar is increasingly being embraced in all areas of linguistics as a fruitful way of making sense of all these empirical observations. Construction grammar has also enthused formal and computational linguists, who have developed sophisticated tools for exploring issues in language processing and learning, and how new forms of grammar may emerge in speech populations.

Separately, linguists and computational linguists can therefore explain which changes take place in language and how these changes are possible. When working together, however, they can also address the question of why language evolves over time and how it emerged in the first place. This year, the BKL-CBL conference therefore brings together top researchers from both fields to put evidence and methods from both perspectives on the table, and to take up the challenge of uniting these efforts.
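As a concrete, if generic, illustration of the kind of model alluded to above – conventions emerging in a population through nothing but repeated local interactions – here is a minimal Steels-style naming game in Python. It is a textbook toy of my own, not any particular system presented at the conference.

import random

random.seed(42)
n_agents = 20
agents = [set() for _ in range(n_agents)]    # each agent's inventory of names

def interaction():
    speaker, hearer = random.sample(range(n_agents), 2)
    if not agents[speaker]:                  # invent a name if none is known yet
        agents[speaker].add(f"word{random.randrange(10_000)}")
    name = random.choice(sorted(agents[speaker]))
    if name in agents[hearer]:               # success: both collapse to this name
        agents[speaker] = {name}
        agents[hearer] = {name}
    else:                                    # failure: the hearer adopts the name
        agents[hearer].add(name)

for _ in range(5000):
    interaction()

distinct = {name for inventory in agents for name in inventory}
print(f"distinct names after 5000 games: {len(distinct)}")   # typically 1

The population typically converges on a single shared name without any global coordination, which is the basic dynamic behind more sophisticated models of emergent grammar.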

————————
Invited Speakers
————————
The conference features presentations by five keynote speakers:
* Graeme Trousdale (University of Edinburgh)
* Luc Steels (VUB/ IBE Barcelona)
* Kristin Davidse (University of Leuven)
* Peter Petré (University of Lille)
* Arie Verhagen (University of Leiden)

————————
Poster Presentations
————————
We are still accepting 500-word abstracts for poster presentations. All presentations must represent original, unpublished work not currently under review elsewhere. Work presented at the conference can be selected as a contribution for a special issue of the Belgian Journal of Linguistics (Summer 2016).

————————
Important dates
————————
* Abstract Submission: 29 May 2015
* Notification of acceptance: 1 June 2015
* Conference: 8 June 2015

————————
Introductory tutorial on Fluid Construction Grammar
————————
Learn how to write your own operational grammars in Fluid Construction Grammar in our tutorial on 7 and 9 June. The tutorial is practically oriented and mainly consists of hands-on exercises. Participation is free but registration is required.

————————
Organising Committee
————————
* Katrien Beuls, Vrije Universiteit Brussel, Belgium
* Remi van Trijp, Sony Computer Science Laboratories, Paris, France

Follow-up on Dennett and Mental Software

This is a follow-up to a previous post, Dennett’s WRONG: the Mind is NOT Software for the Brain. In that post I agreed with Tecumseh Fitch [1] that the hardware/software distinction for digital computers is not valid for the mind/brain. Dennett wants to retain the distinction [2], however, and I argued against that. Here are some further clarifications and considerations.

1. Technical Usage vs. Redescription

I asserted that Dennett’s desire to talk of mental software (or whatever) has no technical justification. All he wants is a different way of describing the same mental/neural processes that we’re investigating.

What did I mean?

Dennett used the term “virtual machine”, which has a technical, if somewhat diffuse, meaning in computing. But little or none of that technical meaning carries over to Dennett’s use when he talks of, for example, “the long-division virtual machine [or] the French-speaking virtual machine”. There’s no suggestion in Dennett that technical knowledge of the digital technique would give us insight into neural processes. So his usage is just a technical label without technical content.
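For contrast, here is a minimal sketch of what “virtual machine” does mean technically: a program running on the host machine that implements a different machine, with its own instruction set and its own state. The tiny stack machine below, and its instruction set, are my own invention; the JVM or CPython’s bytecode interpreter are the same idea writ large.

def run(program):
    """Interpret (opcode, argument) pairs on a simple stack machine."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "DIV":                    # a "long-division machine", of sorts
            b, a = stack.pop(), stack.pop()
            stack.append(a // b)
        elif op == "PRINT":
            print(stack[-1])
    return stack

# A program for the virtual machine: compute (20 + 22) // 6 and print it.
run([("PUSH", 20), ("PUSH", 22), ("ADD", None),
     ("PUSH", 6), ("DIV", None), ("PRINT", None)])

Knowing how that layering actually works – the host instruction set, the interpreter loop, the virtual machine’s own state – is precisely the technical content that does not carry over to “the French-speaking virtual machine”.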

2. Substrate Neutrality

Dennett has emphasized the substrate neutrality of computational and informatic processes. Practical issues of fabrication and operation aside, a computational process will produce the same result regardless of whether it is implemented in silicon, vacuum tubes, or gears and levers. I have no problem with this.
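Here is a toy way of putting the point – mine, not Dennett’s: the same abstract computation implemented on two very different ‘substrates’, the host machine’s built-in arithmetic and bare logic gates of the kind one could in principle build from vacuum tubes or gears and levers, yields identical results.

def add_builtin(a, b):
    return a + b

def add_gates(a, b, width=16):
    """Ripple-carry adder built from AND/OR/XOR operations on individual bits."""
    result, carry = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                              # sum bit
        carry = (x & y) | (x & carry) | (y & carry)    # carry bit
        result |= s << i
    return result

for a, b in [(3, 4), (255, 1), (1234, 4321)]:
    assert add_builtin(a, b) == add_gates(a, b)
print("identical results on both 'substrates'")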

As I see it, taken only this far we’re talking about humans designing and fabricating devices and systems. The human designers and fabricators have a “transcendental” relationship to their devices. They can see and manipulate them whole, top to bottom, inside and out.

But of course, Dennett wants this to extend to neural tissue as well. Once we know the proper computational processes to implement, we should be able to implement a conscious, intelligent mind in digital technology that will not be meaningfully different from a human mind/brain. The question here, it seems to me, is whether that is possible even in principle.

Dennett has recently come to the view that living neural tissue has properties lacking in digital technology [3, 4, 5]. What does that do to substrate neutrality?

Dennett’s WRONG: the Mind is NOT Software for the Brain

And he more or less knows it; but he wants to have his cake and eat it too. It’s a little late in the game to be learning new tricks.

I don’t know just when people started casually talking about the brain as a computer and the mind as software, but it’s been going on for a long time. But it’s one thing to use such language in casual conversation. It’s something else to take it as a serious way of investigating mind and brain. Back in the 1950s and 1960s, when computers and digital computing were still new and the territory – both computers and the brain – relatively unexplored, one could reasonably proceed on the assumption that brains are digital computers. But an opposed assumption – that brains cannot possibly be computers – was also plausible.

The second assumption strikes me as being beside the point for those of us who find computational ideas essential to thinking about the mind, for we can proceed without the somewhat stronger assumption that the mind/brain is just a digital computer. It seems to me that the sell-by date on that one is now past.

The major problem is that living neural tissue is quite different from silicon and metal. Silicon and metal passively take on the impress of purposes and processes humans program into them. Neural tissue is a bit trickier. As for Dennett, no one championed the computational mind more vigorously than he did, but now he’s trying to rethink his views, and that’s interesting to watch.

The Living Brain

In 2014 Tecumseh Fitch published an article in which he laid out a computational framework for “cognitive biology” [1]. In that article he pointed out why the software/hardware distinction doesn’t really work for brains (p. 314):

Neurons are living cells – complex self-modifying arrangements of living matter – while silicon transistors are etched and fixed. This means that applying the “software/hardware” distinction to the nervous system is misleading. The fact that neurons change their form, and that such change is at the heart of learning and plasticity, makes the term “neural hardware” particularly inappropriate. The mind is not a program running on the hardware of the brain. The mind is constituted by the ever-changing living tissue of the brain, made up of a class of complex cells, each one different in ways that matter, and that are specialized to process information.

Yes, though I’m just a little antsy about that last phrase – “specialized to process information” – as it suggests that these cells “process” information in the way that clerks process paperwork: moving it around, stamping it, denying it, approving it, amending it, and so forth. But we’ll leave that alone.

One consequence of the fact that the nervous system is made of living tissue is that it is very difficult to undo what has been learned and woven into the detailed micro-structure of this tissue. It’s easy to wipe a hunk of code or data from a digital computer without damaging the hardware, but it’s almost impossible to do something like that with a mind/brain. How do you remove a person’s knowledge of Chinese history, or their ability to speak Basque, and nothing else, and do so without physical harm? It’s impossible.