Languages have structure on two levels: the level at which small meaningless building blocks (phonemes) make up bigger meaningful building blocks (morphemes), and the level at which these meaningful building blocks make up even bigger meaningful structures (words, sentences, utterances). This was identified back in the 1960s as one of Hockett’s design features of language, known as “duality of patterning”, and in most of linguistics these different levels of structure are referred to as “phonology” and “(morpho)syntax”.
However, in recent years these contrasting levels of structure have increasingly been discussed in the context of language evolution: in artificial language learning experiments and experimental semiotics, where a proxy for language is used and it therefore doesn’t make sense to talk about phonological or morphosyntactic structure, and in work on animal communication, where terms that pertain specifically to human language likewise don’t apply. Instead, terms such as “combinatorial” and “compositional” structure are used, sometimes contrastively and sometimes conflated to mean the same thing.
In the introduction to a recent special issue of Language and Cognition on new perspectives on duality of patterning, Bart de Boer, Wendy Sandler and Simon Kirby helpfully outline their preferred use of terminology:
Duality of patterning (Hockett, 1960) is the property of human language that enables combinatorial structure on two distinct levels: meaningless sounds can be combined into meaningful morphemes and words, which themselves could be combined further. We will refer to recombination at the first level as combinatorial structure, while recombination at the second level will be called compositional structure.
You will notice that they initially call both levels of structure “combinatorial”, and both arguably are. My point in this blog post isn’t necessarily that only structure on the first level should be called combinatorial, but that any work talking about combinatorial structure should establish what its terminology means.
A recent paper by Scott-Phillips and Blythe (2013), entitled “Why is combinatorial communication rare in the natural world, and why is language an exception to this trend?”, presents an agent-based model to show how limited the conditions are under which combinatorial communication can emerge. Obviously, in order to do this they need to define what they mean by combinatorial communication, and they present this figure by way of explanation:
They explain:
In a combinatorial communication system, two (or more) holistic signals (A and B in this figure) are combined to form a third, composite signal (A + B), which has a different effect (Z) to the sum of the two individual signals (X + Y). This figure illustrates the simplest combinatorial communication system possible. Applied to the putty-nosed monkey system, the symbols in this figure are: a, presence of eagles; b, presence of leopards; c, absence of food; A, ‘pyow’; B, ‘hack’ call; C = A + B ‘pyow–hack’; X, climb down; Y, climb up; Z ≠ X + Y, move to a new location. Combinatorial communication is rare in nature: many systems have a signal C = A + B with an effect Z = X + Y; very few have a signal C = A + B with an effect Z ≠ X + Y.
In this example, the building blocks that make up C, namely A and B, are arguably meaningful because they act as signals in their own right. Therefore, if C had a meaning which was a combination of the meanings of A and B, this system would be compositional using de Boer, Sandler and Kirby’s definition (this isn’t represented in the figure above). However, if the meaning of C is not a combination of the meanings of A and B, then A and B are arguably meaningless building blocks (their individual expressions just happen to have meaning, in the same way that the individual phoneme /a/ is an indefinite determiner in English but doesn’t carry that meaning when it is used in the word “cat”). In this case, the system would be combinatorial (as defined by the figure above, as well as under de Boer, Sandler and Kirby’s definition). So far so good; it looks like we are in agreement.
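To make the contrast concrete, here is a minimal Python sketch (my own toy illustration, not anything from either paper) of the two possible readings of the composite call: one where the effect of C is simply the combination of the effects of A and B (compositional in de Boer, Sandler and Kirby’s sense), and one where it isn’t (combinatorial, as in the pyow–hack case).

```python
# Toy illustration (not from either paper) of the two readings of the composite signal C = A + B.

# Individual signals and their effects, as in the putty-nosed monkey example:
# 'pyow' (A) -> climb down (X), 'hack' (B) -> climb up (Y).
individual_effects = {
    "pyow": "climb down",
    "hack": "climb up",
}

def compositional_effect(signal_parts):
    """The effect of the composite is built from the effects of its parts: Z = X + Y."""
    return " then ".join(individual_effects[part] for part in signal_parts)

# A holistic (combinatorial) system instead stores a separate effect for the
# composite signal, unrelated to the effects of its parts: Z != X + Y.
holistic_effects = {("pyow", "hack"): "move to a new location"}

composite = ("pyow", "hack")
print(compositional_effect(composite))  # climb down then climb up  (Z = X + Y)
print(holistic_effects[composite])      # move to a new location    (Z != X + Y)
```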
However, later in their paper Scott-Phillips and Blythe go on to argue:
Coded ‘combinatorial’ signals are in a sense not really combinatorial at all. After all, there is no ‘combining’ going on. There is really just a third holistic signal, which happens to be comprised of the same pieces as other existing holistic signals. Indeed, the most recent experimental results suggest that the putty-nosed monkeys interpret the ‘combinatorial’ pyow–hack calls in exactly this idiomatic way, rather than as the product of two component parts of meaning. By contrast, the ostensive creation of new composite signals is clearly combinatorial: the meaning of the new, composite signal is in part (but only in part) a function of the meanings of the component pieces.
The argument they are giving here is that unless the meaning of C is a combination of A and B (or compositional as defined above), then it is not really a combinatorial signal.
Scott-Phillips and Blythe definitely know, and demonstrate, that there is a difference between the two levels of structure, but they conflate them both under one term, “combinatorial”, which makes it harder to see that there is a very clear difference. Changing what they mean by “combinatorial” between the introduction of their paper and their discussion also confuses their argument.
Perhaps we should all agree to adopt the terminology proposed by de Boer, Sandler and Kirby, but given the absence of a consensus on the matter, at the very least I think exactly what is meant by combinatorial (or compositional) needs to be established at the beginning of every paper using these terms.
References
de Boer, B., Sandler, W., & Kirby, S. (2012). New perspectives on duality of patterning: Introduction to the special issue. Language and Cognition, 4(4).
Hockett, C. (1960). The origin of speech. Scientific American, 203, 88–111.
Scott-Phillips, T. C., & Blythe, R. A. (2013). Why is combinatorial communication rare in the natural world, and why is language an exception to this trend? Journal of The Royal Society Interface, 10(88), 20130520.
De Boer et al.’s usage lines up with the way the two terms have been used in the theoretical literature for a long time: we can talk about the combinatorial mechanisms underlying phonology, morphology, syntax, etc., but compositionality has a narrower use stemming from Frege (‘The Principle of Compositionality’), i.e. the meaning of a complex expression should be a function of the meanings of the parts comprising that expression plus the way they are combined.
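Roughly, in symbols, the principle is often put something like m(σ(e₁, …, eₙ)) = f_σ(m(e₁), …, m(eₙ)): the meaning m of a complex expression built by a syntactic operation σ is given by a meaning operation f_σ applied to the meanings of its parts, so the mode of combination itself contributes to the result.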
Hi Hannah. Interesting write-up, and thanks for bringing the paper to people’s attention.
What we wanted in the recent paper was a general term to describe what happens when things are combined together. Whether that combining is of meaningless units into meaningful ones, or of meaningful units into higher-level ones, is not something we were concerned with. The paper’s really about something even more basic than these issues: the very idea of sticking two previously distinct things together. We needed a label for that, and ‘combinatorial’ seems reasonable and intuitive. Perhaps, following de Boer et al., we could have adopted ‘recombination’, but ‘combinatorial’ seems the more basic term to me. After all, combining things is what’s going on here! But either way, there’s probably scope for somebody to review all these different terms (as you know, these certainly aren’t the only papers on the topic, or the only terms in use), and lay out how they could and should be used.
You say, about our comments in the discussion, that “The argument they are giving here is that unless the meaning of C is a combination of A and B (or compositional as defined above), then it is not really a combinatorial signal”. This is a slight over-interpretation. We don’t make such an argument. Instead, what we were pointing to in that passage was that, in a system of communication based on the code model, where we do see combinatorial signals, they are not combined *by the agents themselves*. That’s what we mean by “in a sense not really combinatorial at all”. The ‘combining’, such as it is, is done by natural selection creating a third signal, which just happens “to be comprised of the same pieces as other existing holistic signals”. This isn’t the case in a system based on the ostensive-inferential model of communication, as language is. Here, the agents themselves are actually combining things. This is the point we were trying to make in the passage you quote.
We couldn’t expand too much on this point in that paper, because it’s not a paper about ostension and inference, but rather about a model of the evolution of combinatorial signals in a code-model system. But, as you probably know (please excuse the plug), I’m writing a book about the origins of language at the moment, in which the code/ostension distinction is critical, and I explain these points in detail there. The book will be published next year by Palgrave Macmillan.
Moving on, this seems a good point to mention something about the de Boer et al. definition of ‘compositional’ that I don’t quite understand. I’ve always understood ‘compositional’ to mean that the meaning of the unit in question is a function not only of the meanings of the component parts, but also of the way in which they are combined. The meaning of ‘boathouse’ is not simply the meaning of boat plus the meaning of house, and we can see that clearly when we compare it to ‘houseboat’, which is something different. The specific way the units are combined is part of what determines the meaning of the composite (!) expression. But the de Boer et al. definition seems to leave that aspect out. As such, if two or more meaningful units are combined, and the resulting meaning is invariant to the way the units are combined, then this is compositionality under the de Boer et al. definition, but not under the traditional definition. Here’s an example: “Can you not…” vs “Can not you…”. Since the meaning of both constructions is the same, this meaning is a function only of the component parts, and not of the way that they are combined (ok, “Can” has to go at the front, but I think we can put that aside). This makes it compositional under the de Boer et al. definition, but not under the traditional definition. I suspect I’ve actually misunderstood something here, and would welcome comments.
Manfred Krifka has an interesting discussion of different types of compositionality based on their complexity (Milestones in the evolution of semantically interpreted language).
Actually, I should have added: the fact that the definition of compositionality typically includes the requirement that meaning is in part a function of the way the component parts are combined is one reason why we didn’t want to include it in our definition. We make no difference in our model between A+B and B+A – but in a compositional system, these are different.
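To spell that difference out with a toy sketch (my own, purely illustrative – not the model itself): a meaning function that ignores how the parts are combined treats A+B and B+A identically, whereas one that is compositional in the traditional sense does not (compare James’s ‘boathouse’ vs ‘houseboat’).

```python
# Toy sketch (purely illustrative) of order-insensitive vs order-sensitive meaning.

part_meanings = {"boat": "BOAT", "house": "HOUSE"}

def order_insensitive_meaning(parts):
    """Meaning depends only on which parts occur, not on how they are combined,
    so A+B and B+A come out the same."""
    return frozenset(part_meanings[p] for p in parts)

def order_sensitive_meaning(parts):
    """Traditional compositionality: meaning is a function of the parts AND the
    way they are combined (here, an English-like modifier + head rule)."""
    modifier, head = parts
    return f"a {part_meanings[head]} for {part_meanings[modifier]}s"

print(order_insensitive_meaning(["boat", "house"]) ==
      order_insensitive_meaning(["house", "boat"]))   # True: A+B = B+A
print(order_sensitive_meaning(["boat", "house"]))     # a HOUSE for BOATs ('boathouse')
print(order_sensitive_meaning(["house", "boat"]))     # a BOAT for HOUSEs ('houseboat')
```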
This is great! Thanks for posting this James.
You’re right, the traditional notion of compositionality crucially includes the notion that the meaning of a whole is a function of the meaning of its parts and *the way they are combined* – something like this is necessary if we want to capture the strict correspondence between structure and meaning: “The dog chased the cat” doesn’t mean the same thing as “the cat chased the dog”. It seems a bit misleading to use the term “compositionality” for something weaker than what it’s standardly used to describe, as de Boer et al. do. De Boer et al.’s notion seems to correspond to “bottom-level” or “weak” compositionality in the sense of Pagin & Westerstahl (2010): http://people.su.se/~ppagin/papers/pwcompass1e.pdf. The article looks like it’s worth a read, incidentally – they tease apart different conceivable levels of compositionality in a careful fashion.
The authors point out that if a language has any kind of non-trivial syntax, weak compositionality won’t serve the language user very well. A quote: “The meaning operation R(a) that corresponds to a complex syntactic operation (a) cannot be predicted from its build-up out of simpler syntactic operations and their corresponding meaning operations. Hence, there will be infinitely many complex syntactic operations whose semantic significance must be learned one by one.”
Hi Thom,
Thanks for the thorough reply. You and Patrick are perhaps right that compositionality is the wrong word to use, but a weaker term for structure that arises from the combination of meaningful elements definitely should exist. I also see what you mean about combination in the code model, but it raises the question of whether we interpret words made up from combinations of the same phonemes holistically, as the monkeys do – and I think we do; it’s just that the process from which the combinatorial structure arises is cultural evolution rather than biological.
I also feel like the distinction between the different layers of structure is still necessary, especially when you’re making claims that combinatorial structure can come about in ostensive communication as the result of deduction or calculation, because this makes a lot of sense if you’re talking about meaningful elements being recombined, but becomes more difficult to imagine and argue (though not impossible) if you’re talking about the recombination of meaningless elements.
Hannah – a definition for the weaker notion of non-structure-dependent compositionality does exist; see the Pagin & Westerstahl (2010) paper I link to in my reply to Thom (they call it ‘bottom-level compositionality’).
Did you mean to imply that we understand ALL words holistically and that all combinatorial morphological structure is epiphenomenal, by the way? That can’t be the case, surely. Morphology (at least a subset of it) is productive in the sense that we can understand words we’ve never come across before; for example, if I say I’m suffering from ‘stapler-less-ness’ you know exactly what I mean.
Thanks, Patrick – interesting that they still use the word compositionality.
Of course I didn’t mean to imply that we understand morphologically complex words holistically. I lumped morphology in with syntax (morphosyntax) in my original post for a reason: it stands apart from phonological structure in that morphologically complex words are made up from meaningful elements. What I was talking about, as I said in my comment, is words made up from different combinations of the same phonemes, i.e. we don’t interpret feel, flee and leaf as combinations of [l], [i] and [f], because these are meaningless building blocks: phonemes.
Aaah, OK, now I understand what you were getting at. Thanks for clarifying, that makes a lot more sense.
Hi Hannah, I am an AI researcher at Mila and I recently got interested in iterated learning. Your post really helped me understand what is going on with these terms in the linguistic literature. Thanks a lot.