FREE SHIPPING ON ALL DOMESTIC ORDERS $35+

The Atoms of Language: The Mind's Hidden Rules of Grammar

Availability:
in stock, ready to be shipped
Price: $18.99
Whether all human languages are fundamentally the same or different has been a subject of debate for ages. This problem has deep philosophical implications: If languages are all the same, it implies a fundamental commonality-and thus the mutual intelligibility-of human thought. We are now on the verge of answering this question. Using a twenty-year-old theory proposed by the world's greatest living linguist, Noam Chomsky, researchers have found that the similarities among languages are more profound than the differences. Languages whose grammars seem completely incompatible may in fact be structurally almost identical, except for a difference in one simple rule. The discovery of these rules and how they may vary promises to yield a linguistic equivalent of the Periodic Table of the Elements: a single framework by which we can understand the fundamental structure of all human language. This is a landmark breakthrough, both within linguistics, which will thereby become a full-fledged science for the first time, and in our understanding of the human mind.

ISBN-13: 9780465005222

Media Type: Paperback

Publisher: Basic Books

Publication Date: 10-08-2002

Pages: 288

Product Dimensions: 5.00(w) x 7.95(h) x 1.40(d)

Mark C. Baker is a professor in the Department of Linguistics and the Center for Cognitive Science at Rutgers University. He lives in Camden, New Jersey.

Read an Excerpt

Chapter One


The Code Talker Paradox


Deep mysteries of language are illustrated by an incident that occurred in 1943, when the Japanese military was firmly entrenched around the Bismarck Archipelago. American pilots had nicknamed the harbor of Rabaul "Dead End" because so many of them were shot down by antiaircraft guns placed in the surrounding hills. It became apparent that the Japanese could easily decode Allied messages and thus were forewarned about the time and place of each attack.

    The Marine Corps responded by calling in one of their most effective secret weapons: eleven Navajo Indians. These were members of the famous Code Talkers, whose native language was the one cipher the Japanese cryptographers were never able to break. The Navajos quickly provided secure communications, and the area was soon taken with minimal further losses. Such incidents were repeated throughout the Pacific theater in World War II. Years after the end of the war, a U.S. president commended the Navajo Code Talkers with the following words: "Their resourcefulness, tenacity, integrity and courage saved the lives of countless men and women and sped the realization of peace for war-torn lands." But it was not only their resourcefulness, tenacity, integrity, and courage that made possible their remarkable contribution: It was also their language.

    This incident vividly illustrates the fundamental puzzle of linguistics. On the one hand, Navajo must be extremely different from English (and Japanese), or the men listening to the Code Talkers' transmissions would eventually have been able to figure out what they were saying. On the other hand, Navajo must be extremely similar to English (and Japanese), or the Code Talkers could not have transmitted with precision the messages formulated by their English-speaking commanders. Navajo was effective as a code because it had both of these properties. But this seems like a contradiction: How can two languages be simultaneously so similar and so different? This paradox has beset the comparative study of human languages for centuries. Linguists are beginning to understand how the paradox can be dissolved, making it possible for the first time to chart out precisely the ways in which human languages can differ from one another and the ways in which they are all the same.


* * *


Let us first consider more carefully the evidence that languages can be radically different. The Japanese readily solved the various artificial codes dreamed up by Allied cryptographers. Translating a message from English to Navajo evidently involves transforming it in ways that are more far-reaching than could be imagined by the most clever engineers or mathematicians of that era. This seems more remarkable if one knows something about the codes in use in World War II, which were markedly more sophisticated than any used before that time. In this respect, an ordinary human language goes far beyond the bounds of what can reasonably be called a code. If the differences between Navajo and English were only a matter of replacing words like man with Navajo-sounding vocabulary like hastiin, or putting the words in a slightly different order, decoding Navajo would not have been so difficult. It is clear that the characteristics one might expect to see emphasized in the first few pages of a grammar book barely scratch the surface of the complexity of a truly foreign language.

    Other signs of the complexity and diversity of human languages are closer to our everyday experience. Consider, for example, your personal computer. It is vastly smaller and more powerful than anything the inventors of the computer imagined back in the 1950s. Nevertheless, it falls far short of the early computer scientists' expectations in its ability to speak English. Since the beginning of the computer age, founders of artificial intelligence such as Alan Turing and Marvin Minsky foresaw a time when people and computers would interact in a natural human language, just as two people might talk to each other on a telephone. This expectation was communicated vividly to the world at large through the 1968 movie 2001: A Space Odyssey, in which the computer HAL understood and spoke grammatically perfect (if somewhat condescending) English. Indeed, natural language was not even considered one of the "hard" problems of computer engineering in the 1960s; the academic leaders thought that it would more or less take care of itself once people got around to it. Thirty-five years and billions of research dollars later, their confidence has proved unwarranted. It is now 2001, and though HAL's switches and indicator lights look hopelessly out-of-date, his language skills are still in the indefinite future. Progress is being made: We only recently achieved the pleasure of listening to weather reports and phone solicitations generated by computers. But computer-generated speech still sounds quite strange, and one would not mistake it for the human-generated variety for long. Moreover, these systems are incapable of improvising away from their set scripts concerning barometric pressures and the advantages of a new vacuum cleaner.

    This poor record contrasts with scientists' much greater success in programming computers to play chess. Another of HAL's accomplishments in 2001 was beating the human crew members at chess—a prediction that has turned out to be entirely realistic. We usually think of chess as a quintessentially intellectual activity that can be mastered only by the best and brightest. Any ordinary person, in contrast, can talk your ear off in understandable English without necessarily being regarded as intelligent for doing so. Yet although computer programs can now beat the best chess players in the world, no artificial system exists that can match an average five-year-old at speaking and understanding English. The ability to speak and understand a human language is thus much more complex in objective terms than the tasks we usually consider to require great intelligence. We simply tend to take language for granted because it comes so quickly and automatically to us. Just as Navajo proved harder than other codes during World War II, so English proves harder than the Nimzowitsch variation of the French defense in chess.

    The experience of computer science confirms not only that human languages are extremely complex but that they differ in their complexities. Another major goal of artificial intelligence since the 1960s has been machine translation—the creation of systems that will take a text in one language and render the same text in another language. In this domain the ideal is set not by HAL but by Star Trek: All crew members have a "universal translator" implanted in their ears that miraculously transforms the very first alien sentence it hears into perfect English. Again, real machine translation projects have proven more difficult. Some programs can take on tasks like converting the English abstracts of engineering articles into Japanese, providing a working draft of a historical text translated from German into English, or translating a page on the World Wide Web. But the products of these systems are very rough and used only in situations where an imperfect aid is desired. Indeed, sometimes they make embarrassingly funny mistakes. Harvey Newquist reports an apocryphal story about an early English-Russian system that translated the biblical quotation "The spirit is willing, but the flesh is weak" into Russian as "The vodka is strong, but the meat is spoiled." Performance has improved since the 1960s, but not as much as one might imagine. Here is a quotation from the biblical book of Ecclesiastes (9:11 RSV):


Again I saw that under the sun the race is not to the swift, nor the battle to the strong, nor bread to the wise, nor riches to the intelligent, nor favor to the men of skill, but time and chance happen to them all.

Table of Contents

Preface ix
1 The Code Talker Paradox 1
2 The Discovery of Atoms 19
3 Samples Versus Recipes 51
4 Baking a Polysynthetic Language 85
5 Alloys and Compounds 123
6 Toward a Periodic Table of Languages 157
7 Why Parameters? 199
Notes 235
Glossary 245
Map 250
References 253
Index 263