In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.
And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.
Digital Humanities
Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):
To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.
Just so, the way of the world.
Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:
A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.
How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.
And there we have it.
A decade later, however, the term was being used to different effect by critics of DH. As Kirschenbaum noted at the end of “What Is Digital Humanities and What’s It Doing in English Departments?” (ADE Bulletin #150, 2010):
Digital humanities, which began as a term of consensus among a relatively small group of researchers, is now backed on a growing number of campuses by a level of funding, infrastructure, and administrative commitments that would have been unthinkable even a decade ago. Even more recently, I would argue, the network effects of blogs and Twitter at a moment when the academy itself is facing massive and often wrenching changes linked both to new technologies and the changing political and economic landscape has led to the construction of “digital humanities” as a free-floating signifier, one that increasingly serves to focus the anxiety and even outrage of individual scholars over their own lack of agency amid the turmoil in their institutions and profession. This is manifested in the intensity of debates around open-access publishing, where faculty members increasingly demand the right to retain ownership of their own scholarship—meaning, their own labor—and disseminate it freely to an audience apart from or parallel with more traditional structures of academic publishing, which in turn are perceived as outgrowths of dysfunctional and outmoded practices surrounding peer review, tenure, and promotion […].
And that, more or less, is where we are today. The recent ABG critique can be read, at least in part, as yet another use of “digital humanities” as a “free-floating signifier [that] serves to focus the anxiety and even outrage of individual scholars over their own lack of agency amid the turmoil in their institutions and profession.”
Of course, the use of computers in humanities research is much older than these debates. The standard history leads back to Roberto Busa and the Index Thomisticus in the early 1950s. At roughly the same time another enterprise got started, one that could plausibly be grandfathered into the history of DH.
Computational Linguistics
That enterprise is translation from one language to another. As that is certainly an activity undertaken by humanists, one could reasonably regard attempts to do it with digital computers as falling within the scope of DH, though, as far as I know, DHers generally do not. Nor was the early work in machine translation (MT), as it was and is called, undertaken toward humanistic ends. It was undertaken for practical purposes, in the United States, with funding from the federal government. The object was to translate Russian technical texts into English.
The early work was promising – early work in such ventures is always promising, as no one really knows what’s going on – but the gap between promises and working technology grew so wide that the paymasters cut off funding in the mid-1960s and threw the enterprise into a tailspin. David Hays was one of the researchers involved in these efforts. He headed the RAND Corporation’s work in MT and was one of the authors of the report that resulted in the defunding. Now, that’s not quite what the report recommended, but that’s what happened.
Hays and others feared that defunding was likely and took steps to survive it. They created a professional society and coined a new term for what they regarded as a new enterprise. No, practical MT was not around any corner foreseeable in the early 1960s, but opportunities for deep and fundamental research abounded. What should we call this new/reborn enterprise?
According to Martin Kay, the name was chosen at a meeting held in Hays’s office at RAND. Kay was on Hays’s team there and attended that meeting. In time he became one of the Grand Old Men of the discipline and received a lifetime achievement award in 2005. Here’s what he said about that meeting when he accepted the award:
Now anybody who competes for research grants knows that, while substance and competence play a significant role, the most important thing to have is a name, and we did not have one for the exciting new scientific enterprise we were about to engage upon. To be sure, the association and the committee antedated that report, but we had inside information and we were ready. I use the word “we” loosely. I was precocious, but very junior, so that my role was no more than that of a fly on the wall. However, I was indeed present at a meeting in the office of David Hays at Rand when the name “computational linguistics” was settled upon. I remember who was there but, in the interests of avoiding embarrassment, I will abstain from mentioning their names. As I recall, four proposals were put forward, namely
- Computational Linguistics
- Mechanolinguistics
- Automatic Language Data Processing
- Natural Language Processing
One can see why “mechanolinguistics” was rejected; it sounds like some mechanical toy, Legos for language if you will. Kay tells us that the last two were rejected “because it was felt that they did not sufficiently stress the scientific nature of the proposed enterprise.” That makes sense. Who wants to fund mere “processing”? Note, moreover, that a similar consideration was in play at the birth of DH. Science was not the issue in that case, of course. But in both cases we’re dealing with intellectual seriousness, with the creation of knowledge–scientific knowledge in one case, humanistic knowledge in the other.
But, while “natural language processing” was rejected in favor of “computational linguistics” back in the early 1960s, the intellectual world had changed considerably by the mid-1990s. As Kay went on to note: “The term ‘Natural Language Processing’ is now very popular and, if you look at the proceedings of this conference, you may well wonder whether the question of what we call ourselves and our association should not be revisited.” The matter is still very much in play.
Both “computational linguistics” (CL) and “natural language processing” (NLP) have entries in Wikipedia. If you examine the Talk pages for those entries, you’ll see that the editors concerned with them have been considering merging the two for the last decade: CL Talk, NLP Talk. But what name should we give to the merged entry? And what happened to bring this situation about?
What happened is that the intellectual methods that came along with the name “computational linguistics” proved brittle and gave way to somewhat different methods, methods based on statistics and machine learning. Here’s what Martin Kay said about the difference:
Computational linguistics is not natural language processing. Computational linguistics is trying to do what linguists do in a computational manner, not trying to process texts, by whatever methods, for practical purposes. Natural Language Processing, on the other hand, is motivated by engineering concerns. I suspect that nobody would care about building probabilistic models of language unless it was thought that they would serve some practical end. There is nothing unworthy in such an enterprise. But ALPAC’s conclusions are as true today as they were in the 1960’s—good engineering requires good science. If one’s view of language is that it is a probability distribution over strings of letters or sounds, one turns one’s back on the scientific achievements of the ages and foreswears the opportunity that computers offer to carry that enterprise forward.
The crucial point, it seems to me, comes down to this: Researchers in computational linguistics have had to create theories and models of language and language processes: phonetics, phonology, morphology, syntax, semantics, pragmatics, and cognition. These models have to be painstakingly “hand-coded” into computer programs. This work could reasonably be seen as an investigation of human psychology. Researchers in natural language processing, in contrast, created statistical models of machine learning and then fed those models large bodies of language data. Depending on the nature of that data, the machine would learn to recognize human speech, to translate from one language to another, or to answer circumscribed questions and perform circumscribed tasks. None of these systems performs at full human level, but they do remarkably well, well enough for many practical purposes.
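To make the contrast concrete, here is a toy sketch of the two styles. It is my own illustration, not drawn from any system discussed above: a hand-coded agreement rule stands in for the “computational linguistics” approach, and a bigram model trained on a three-sentence corpus stands in for the statistical one. Both are drastically simplified.

```python
from collections import Counter, defaultdict

# Style 1: hand-coded knowledge, "computational linguistics" in miniature.
# The linguist's rule (a crude subject-verb agreement check) is written
# directly into the program.
SINGULAR_SUBJECTS = {"the cat", "the dog"}
PLURAL_SUBJECTS = {"the cats", "the dogs"}

def agrees(subject: str, verb: str) -> bool:
    """Return True if subject and verb agree in number (toy rule)."""
    if subject in SINGULAR_SUBJECTS:
        return verb.endswith("s")       # "the cat sleeps"
    if subject in PLURAL_SUBJECTS:
        return not verb.endswith("s")   # "the cats sleep"
    return False                        # unknown subject: the rule is silent

# Style 2: statistical learning, "natural language processing" in miniature.
# Nothing about grammar is written down; the program counts which words
# follow which in the training data and turns the counts into probabilities.
def train_bigrams(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return {prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
            for prev, nxts in counts.items()}

def sentence_probability(model, sentence):
    prob = 1.0
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, nxt in zip(words, words[1:]):
        prob *= model.get(prev, {}).get(nxt, 0.0)
    return prob

corpus = ["the cat sleeps", "the cats sleep", "the dog sleeps"]
model = train_bigrams(corpus)
print(agrees("the cat", "sleeps"))                    # True: the rule fires
print(sentence_probability(model, "the cat sleeps"))  # > 0: pattern was seen
print(sentence_probability(model, "the cat sleep"))   # 0.0: pattern never seen
```

The rule answers only the questions its author anticipated; the bigram model answers anything, with whatever reliability its data permits, which is roughly why the second style came to dominate practical work.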
In particular they perform much better than any of the systems created under the regime that had coined the term “computational linguistics” and preferred it to “natural language processing” because that term “did not sufficiently stress the scientific nature of the proposed enterprise.” The transition from one regime to the other happened during the 1980s. In practical terms, statistical processing has won out over scientific knowledge.
The issues that history presents to us are as deep and profound as any before us. I am deeply sympathetic to Martin Kay’s plea on behalf of “the scientific achievements of the ages” but I can no more ignore the real and practical achievements of statistical NLP than Martin Kay did (you might want to read his piece through to the end). What the future will bring in a decade, two decades, five or six decades, that’s anyone’s guess. We don’t know.
What does this imply for DH?
The only thing I can be sure of is that the future is opaque. I will note a couple of things, however. When Kirschenbaum was reviewing the discussions that took place at the University of Virginia, he mentioned that John Sowa took part in them and introduced knowledge representation into the discussions. Sowa is a mathematician and computer scientist who spent his career at IBM in various capacities, mostly as a researcher. Knowledge representation is one of those pesky terms that means many things to many people but is most centrally concerned with the representation of human knowledge in forms accessible to computers. As such, knowledge representation was central to (classical) computational linguistics and artificial intelligence. I see precious little of (classical) knowledge representation in digital humanities, though how one constructs a database or organizes a dataset can reasonably be seen as a problem of knowledge representation. And for that matter, so can data visualization.
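For concreteness, here is a minimal sketch of what “classical” knowledge representation looks like: facts stored as explicit subject-relation-object triples plus a small inference rule. This is my own toy illustration of the general idea, not Sowa’s conceptual graphs or anything from the Virginia discussions, and the facts themselves are invented for the example.

```python
# A toy semantic network: human knowledge encoded as explicit triples.
FACTS = {
    ("Hamlet", "written_by", "Shakespeare"),
    ("Hamlet", "is_a", "tragedy"),
    ("tragedy", "is_a", "play"),
}

def holds(subject: str, relation: str, obj: str) -> bool:
    """Check whether a fact holds, following is_a links transitively."""
    if (subject, relation, obj) in FACTS:
        return True
    if relation == "is_a":
        # Hamlet is_a tragedy, and tragedy is_a play, so Hamlet is_a play.
        return any(holds(mid, "is_a", obj)
                   for s, r, mid in FACTS
                   if s == subject and r == "is_a")
    return False

print(holds("Hamlet", "is_a", "play"))           # True, by inference
print(holds("Hamlet", "written_by", "Marlowe"))  # False
```

Everything such a system “knows” is there because someone wrote it down, which is what hand-built databases and curated datasets have in common with this older tradition.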
Moreover, the machine learning techniques that are so important in computational criticism belong to, or are closely related to, the large family of statistical techniques subsumed under natural language processing. That is to say, there is a historical line of development that starts with those early efforts at machine translation and continues in today’s statistical NLP, though obviously other lines of development feed into and have come to dominate that stream. If you want to understand how we arrived at our current situation–for a large range of values of “we” and “current situation”–you need to think about machine translation and computational linguistics.
And as for neoliberalism, the term hadn’t been coined when machine translation was born. But, as I’ve noted, machine translation, and thus computational linguistics, was born deep in the Cold War and in intimate contact with military purposes. David Hays tells me that during his early years at RAND half of RAND’s budget came from a single Air Force contract and that the work order for that contract was simple: to do work for the good of the nation. Does anyone get a grant with such generous terms these days?
The world is rich and complicated. Sometimes reality does not compute.