It wasn’t a matter of deep and well-thought-out principle. It was simpler than that: Chomsky’s approach to linguistics didn’t have the tools I was looking for. Let me explain.
* * * * *
Dan Everett has kicked off two discussions on Facebook about Chomsky. This one takes Christina Behme’s recent review article, A ‘Galilean’ science of language, as its starting point. And this one is about nativism, sparked by Vyv Evans’ The Language Myth.
* * * * *
I learned about Chomsky during my second year as an undergraduate at Johns Hopkins. I took a course in psycholinguistics taught by James Deese, known for his empirical work on word associations. We read and wrote summaries of classic articles, including Lees’s review of Syntactic Structures and Chomsky’s review of Skinner’s Verbal Behavior. My summary of one of them, I forget which, prompted Deese to remark that it was an “unnecessarily original” recasting of the argument.
That’s how I worked. I tried to get inside the author’s argument and then to restate it in my own words.
In any event I was hooked. But Hopkins didn’t have any courses in linguistics, let alone a linguistics department. So I had to pursue Chomsky and related thinkers on my own. Which I did over the next few years. I read Aspects, Syntactic Structures, The Sound Pattern of English (well, more like I read at that one), Lenneberg’s superb book on biological foundations (with an appendix by Chomsky), found my way to generative semantics, and other stuff. By the time I headed off to graduate school in English at the State University of New York at Buffalo I was mostly interested in that other stuff.
I became interested in Chomsky because I was interested in language. While I was interested in language as such, I was a bit more interested in literature and much of my interest in linguistics followed from that. Literature is made of language, hence some knowledge of linguistics should be useful. Trouble is, it was semantics that I needed. Chomsky had no semantics and generative semantics looked more like syntax.
So that other stuff looked more promising. Somehow I’d found my way to Syd Lamb’s stratificational linguistics. I liked it for the diagrams, as I think diagrammatically, and for the elegance. Lamb used the same vocabulary of structural elements to deal with phonology, morphology, and syntax. That made sense to me. And the semantics work within his system actually looked like semantics, rather than souped-up syntax, though there wasn’t enough of it.
A remark in Karl Pribram’s Languages of the Brain led me to Ross Quillian’s early work on semantic nets. That’s more like it. By the time I headed off to Buffalo I’d pretty much abandoned generative grammar, mostly because it didn’t have an interesting semantics.
But shortly before I left Hopkins I read an article in Linguistic Inquiry that went like this:
The early GG days were exciting and we were thinking of doing grammars of whole languages. Then things got tough and started falling apart. Now we’ve broken into several schools of thought and we’ve pretty much given up on doing complete grammars. Now we write about fragments of grammar with little immediate prospect of assembling complete grammars.
Sounds like the end of the GG program. But no, it’s kept chugging along for the last four decades, spawning schools of thought, though none of them, so far as I’m aware, has approached a complete grammar of a real language. What keeps the bunny ticking?
As for Buffalo, I found my way to David Hays in the Linguistics department. He had little use for GG, though he’d been happy enough to hire a generative grammarian or two into the department when he was chair (he’d founded the department). And his objections were well thought out – he referred me to Hockett’s The State of the Art (1970). We discussed them from time to time. But mostly we talked semantics and cognition, and eventually the brain and cultural evolution.
It’s been a while since I’ve swum in those waters. But about ten years ago I produced a bunch of notes on what I called attractor nets. I took Lamb’s notation and used it to reconstruct the cognitive networks Hays and his students had developed. This effort was motivated in part by Walter Freeman’s neurobiology; that’s where the attractors come from. I treated the cortex as a bunch of neurofunctional areas (NFAs) variously linked to subcortical structures and to one another. Each NFA is a dynamical system and, as such, has an attractor landscape, as it’s sometimes called. Under load each NFA is driven to one of its attractors.
That’s where Lamb’s notation comes in. Since an NFA can only be in one attractor state at a time, the attractors in its landscape are related to one another through logical OR. The nodes in Lamb’s networks are logical operators over the edges on their input side. So each NFA is, collectively, a logical OR over its attractors. And where an NFA receives major inputs from downstream NFAs, its attractors will be logical ANDs over those input NFAs.
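To make that OR/AND picture concrete, here is a minimal toy sketch of my own devising; it is not taken from the notes listed below, and the class names and the little phonology/lexicon example are purely illustrative:

```java
// Toy sketch (not from the attractor-net notes): Lamb-style OR/AND nodes
// over neurofunctional areas (NFAs). All names are illustrative only.
import java.util.List;

class Attractor {
    // One basin in an NFA's landscape. It is available only when every
    // input NFA feeding it has settled: a logical AND over its input NFAs.
    final String name;
    final List<NFA> inputs;
    Attractor(String name, List<NFA> inputs) { this.name = name; this.inputs = inputs; }
    boolean available() { return inputs.stream().allMatch(n -> n.settled() != null); }
}

class NFA {
    // A neurofunctional area: under load it settles into exactly one of
    // its attractors, a logical OR over the landscape (one state at a time).
    final String name;
    final List<Attractor> attractors;
    NFA(String name, List<Attractor> attractors) { this.name = name; this.attractors = attractors; }
    Attractor settled() {
        // Taking the first available attractor stands in for "driven to
        // one of its attractors under load."
        return attractors.stream().filter(Attractor::available).findFirst().orElse(null);
    }
}

public class AttractorNetSketch {
    public static void main(String[] args) {
        // A 'phonology' NFA, plus a 'lexicon' NFA whose CAT attractor
        // requires the phonological NFA to have settled first.
        NFA phon = new NFA("phonology", List.of(new Attractor("/kat/", List.of())));
        NFA lex  = new NFA("lexicon",   List.of(new Attractor("CAT", List.of(phon))));
        System.out.println(lex.settled().name);  // prints: CAT
    }
}
```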
It’s complicated, and I never really pushed it to the wall, or at any rate, as far as I could. But I’ve produced some interesting, if somewhat cryptic, notes:
- Attractor Nets 2011: Diagrams for a New Theory of Mind
- Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic and Dynamics in Relational Networks
The first is a set of heavily notated diagrams while the second is a set of notes that includes lots of diagrams. The first contains diagrams that aren’t in the second.
These are a long way from Chomskian linguistics. But that linguistics is one of the things that set me on the path that led to those notes.
Abandoning empirical studies of languages for developing more logical theory, and still being unable to generate a single sentence or a single “phrase”…
Today we have powerful computers and formulas for trying out automatic text generation and similar applications. That used to be the linguist’s role, but it no longer is. If you look at markets such as search engines, CAT software, and online dictionaries, the work belongs to developers, not linguists.
This is almost comically misguided. I guess you think everything interesting that can be done in analysing language amounts to running some stats on big corpora. Try actually learning something about linguistics before making these kinds of sweeping statements.
Dude, it’s true that Chomsky has a good model of sentence generation. However, I’m trying to understand how I can implement this model on a computer. I created a bunch of phrase and morphological classes (in Java). The only way I could think of was serializing a bunch of sentences, sending them to another computer over sockets, and trying to create a Turing environment based on Chomskyan theory.
However, when I say classes, I mean “classes” in the sense that a computer can analyze a (let’s call it) “query” via a serialized object representing the “deep structure”.
Then I thought, what about a skeleton that generates a surface structure? But how could another computer figure out that, for instance, a verb has an aspect function when there is simply nothing on it (a zero morpheme)? This would also require a complete auxiliary language for the computer.
Don’t worry, I do not believe the things I said above anymore, but I still have tons of questions on it. Forgive me.
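For concreteness only, here is a hypothetical sketch (not the commenter’s actual code, and not an endorsement of the approach) of what “serializing a deep structure and sending it to another computer by sockets” could look like in Java; the class names, host, and port are all made up:

```java
// Hypothetical illustration: a Serializable "deep structure" written to
// an ObjectOutputStream over a socket. The receiving program would read
// it back with an ObjectInputStream.
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;
import java.util.List;

// A stand-in "deep structure": a labeled node with children.
class DeepStructure implements Serializable {
    private static final long serialVersionUID = 1L;
    final String label;                 // e.g. "S", "NP", "V[+aspect:0]"
    final List<DeepStructure> children; // empty for terminal nodes
    DeepStructure(String label, List<DeepStructure> children) {
        this.label = label;
        this.children = children;
    }
}

public class SendDeepStructure {
    public static void main(String[] args) throws IOException {
        // A zero morpheme has to be made explicit somewhere, e.g. as a
        // feature on the label rather than as a surface form.
        DeepStructure sentence = new DeepStructure("S", List.of(
            new DeepStructure("NP", List.of(new DeepStructure("the cat", List.of()))),
            new DeepStructure("V[+aspect:0]", List.of(new DeepStructure("sleep", List.of())))));

        // "localhost" and 9090 are placeholders for the other machine.
        try (Socket socket = new Socket("localhost", 9090);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(sentence);
        }
    }
}
```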