
Can Multilingualism be Simulated?

I propose to consider the question “Can multilingualism be simulated?” The term “multilingualism” is often used to mark a human social and existential condition produced especially by experiences of migration and displacement, but also by special intensities of education. To the extent that it stands in contrast with “monolingualism,” which marks the state-managed sovereignty of a nationalized standard or written dialect, “multilingualism” is also often used to mark the violation of de jure or de facto state-managed codes for public (and certain forms of private) communication, including those employed in and for the regulation of both labor and education. If “multilingualism” is thus often imagined as a litmus test for what we might call the humanity of a state exercising its monopolies of both knowledge and force, it is worth asking whether multilingualism can be simulated, now that the spoken and written production of the state-managed code itself can be simulated by software.
The fact is that multilingualism has long been simulated, in this way, in and as the unintended and unwanted mark of failure in efforts to computerize human communication. Electronic computers as we know them today were devised for applications in cryptanalysis and ballistics during the Second World War. In the postwar period, the U.S.-Soviet arms race encouraged attention to a broader cultural application of computing, in a vision of computers as fully autonomous and fully automatic translators of human writing and speech in natural languages. In the United States, the imagination of human language successfully manipulable by an electronic computer was embraced by some prominent postwar mathematicians and engineers, contested by others, and regarded with caution or dismay by most humanists and writers and many journalists. Debate over the technical and ethical limits of computing was widespread and energetic, both in the academic world and in the U.S. literary and journalistic public spheres, and literature and literary language had a surprisingly prominent place in this debate, as the last frontier for the power of computation and its ultimate test.

A peak of optimism around 1959 was registered by Émile Delavenay, a scholar of D. H. Lawrence and head of UNESCO's Department of Documents and Publications, in a slim volume entitled An Introduction to Machine Translation. Although computers could only process data and could not use human language, Delavenay argued, their ability to perform logical operations allowed them to mimic a limited range of human mental processes with greater-than-human speed and flexibility. As the difference between scientific and literary prose was at bottom a difference of degree, not of kind, there was no reason to see literary prose as a barrier to the machine-assisted general logical classification of knowledge.
At the same time, acknowledging his own ostentation in light of the profoundly limited and disappointing technical achievements to date, Delavenay made lemons into lemonade, observing that incomplete or partial output including untranslated words might be reimagined as preserving the “local color” of the source — in a French machine translation of a novel composed in Hindi, for example.
As a trace of the breakdown of a technocratic dream of managing linguistic confusion, this residual, rather than spontaneous, multilingualism might be thought of as the dangerous supplement or remainder of the Turing “test” for artificial intelligence, which strongly implies linguistic competence in a national standard. Recent proposals for steganographic techniques deliberately employing this “noise” in MT output suggest a certain resignation to its durability.

Technology needs: at most, a means of projecting a few slides, either directly from my own USB flash drive or via Internet access to a server at my own institution. However, I may choose not to use any visual aids at all.
Addresses white paper question “Can multilingualism be simulated?”
Citations of relevant published work:
Lennon, Brian. “The Antinomy of Multilingual U.S. Literature.” Comparative American Studies 6.3 (September 2008): 203-224.
Lennon, Brian. “Translation Being Between.” In In Babel’s Shadow: Multilingual Literatures, Monolingual States. Minneapolis: University of Minnesota Press, 2010. 55-92.
Lennon, Brian. “Afterword: Unicode and Totality.” In In Babel’s Shadow: Multilingual Literatures, Monolingual States. Minneapolis: University of Minnesota Press, 2010. 167-173.
