The Cold War origins of Google Translate


The ease with which we can now translate web pages between different languages has surprising origins in the power struggles between Russia and the US.

There are times when I feel as if I’m truly living in the future. It happened to me most recently when, browsing an online newspaper archive, I came across a 1954 article in the Los Angeles Times about the dawning age of language translation by computers. The short Associated Press article about an IBM computer, trumpeted as the first computer capable of translation between different languages, ended with an example of its skills. 

The reporter, in suitably sensational language, explained that “the brain” was fed a sentence in Russian to translate into English. “Lights flash, there is a subdued clinking and clanking, and in 10 seconds you’ve got the translation,” the article said.

Curious to see if it had got it right, I copied the Russian text from the story and opened a new tab in my browser. A quick copy and paste into Google Translate confirmed the translation - no need for a supercomputer or access to the laboratories of a computing giant. And had the original quote been in any of the 63 languages supported by Google, the process would have been just as quick.

Welcome to the future.

My discovery got me hunting for the origins of this technology that we now take for granted. And, as I discovered, our interest in multilingual machines and trouble-free translation goes back much further than the 1950s. In fact, you have to go back to 1629, when French philosopher and mathematician René Descartes proposed a series of universal symbols into which any language could be converted. His idea was seemingly never capitalised on. In 1933, patents were filed independently in France and Russia for devices that used different mechanical means of translating languages via paper tape. But, as is so often the case, war was the catalyst for serious effort in the field.

Fear and loathing

Electromechanical cipher machines used during WWII, such as the German Enigma, inspired scientists after the war to dive headfirst into the bold new era of computer translation machines. One of the field’s early proponents was American scientist Warren Weaver, director of the Natural Sciences Division of the Rockefeller Foundation. In 1946 he read a report by English physicist Andrew D. Booth which inspired him to believe that machine translation was just around the corner. In the following years, his colleagues encouraged him to elaborate on his ideas, resulting in his 1949 memorandum “Translation”. The document, said to be the single most influential publication in the early days of machine translation, outlined a series of ambitious goals for the field, despite appearing at a time when few people knew what computers might be capable of.

The note, which recognised the need for a “tremendous amount of work in the logical structures of languages before one would be ready for any mechanization”, was circulated to about 200 of his friends (many of whom were US government policymakers) and is said to have inspired virtually all serious research into the subject in the 1950s.

But Weaver’s memo was not the only driver for this burgeoning field. What really kick-started research was Cold War fear and the US desire to easily read and translate Russian technical papers.

In the mid-1950s roughly 50% of scientific papers published around the world were in English. The average paper cost about $6 to translate (around $50, adjusted for inflation), and translation of highly technical papers required that the human translator be intimately familiar with the material. The enormous amount of time and high cost of translating those papers presented a problem for Americans obsessed with being at the forefront of new technological developments - and beating the Russians.

As a 1958 issue of Popular Science explained, these papers could contain clues to “H-power, interplanetary flight, more powerful batteries, longer-wearing tires.”

“The trouble is: Too few scientists and engineers read foreign languages. What we need is a machine to read one language and type in another: an automatic translator. We’re trying to build - not one, but several,” it read.  

One of those early machines was the IBM computer mentioned in that Associated Press article that sparked my interest. The all-purpose computer, described as “the most advanced, most flexible high-speed computer in the world” when it debuted in 1952, was programmed to carry out translations. Two years later it was ready. Although it had a vocabulary of only 250 words and six grammar rules, that was enough for the media of the time to be suitably impressed. The 25 January 1954 issue of Chemical and Engineering News ran a story on the “enormous step” the machine had made “toward establishing intercultural communication”.

“A mathematical computer, IBM 701... has been converted into an electronic language translator, called the electronic brain. The brain's first language feat has been in translating Russian scientific literature, chemistry and engineering included, into English. A typist who doesn’t have the know-how in any language can work the machine,” it read.

‘No future’

The machine dominated coverage in the popular science and engineering press. An article in the October 1958 issue of Mechanix Illustrated explained how the “giant brain” worked.

“First, every word in a sizable English dictionary is listed on tape under a code number,” it said. “The Russian, French or German equivalents for each word are given the same number. Then, to translate from Russian to English, for example, a tape with the Russian code numbers is fed into the machine, which matches the numbers and prints out the English.”
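The scheme Mechanix Illustrated describes is, in essence, a word-for-word dictionary lookup keyed by shared code numbers. A minimal sketch of the idea, using a tiny invented vocabulary (the word lists and German "equivalents" here are purely illustrative, not the machine's actual data):

```python
# Toy version of the tape-and-code-number scheme: every English word
# is listed under a code number, and the target language's words are
# keyed by the same numbers. Vocabulary is invented for illustration.
EN = {"the": 1, "cat": 2, "sleeps": 3}
DE = {1: "der", 2: "Katze", 3: "schläft"}  # hypothetical German "equivalents"

def encode(sentence, dictionary):
    """Turn a sentence into the code numbers punched on the 'tape'."""
    return [dictionary[word] for word in sentence.lower().split()]

def decode(codes, dictionary):
    """Match each code number to the target language's word."""
    return " ".join(dictionary[c] for c in codes)

tape = encode("The cat sleeps", EN)
print(decode(tape, DE))  # word-for-word output: "der Katze schläft"
```

The output illustrates exactly the limitation the article goes on to note: correct German would be “die Katze schläft”, but a pure number-matching lookup has no grammar, so idioms and inflections come out mangled.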

The reporter goes on to describe an experiment in which the computer was asked to translate the English saying “Out of sight, out of mind” into Russian. “The result was startling: ‘Invisible and insane’,” the article says. “Newer computers are much more sophisticated, and while human editing to rearrange awkward word sequences is still needed, the computer can make hundreds of rough translations in a day.”

Around this time, we also begin to see the idea of machine translation cross into popular culture. As we looked at previously, Sunday comic strips like Closer Than We Think were born of the Cold War and the fear that Americans would lose the scientific and technological battles as surely as they might lose a nuclear one. The 21 August 1960 edition of the Closer Than We Think strip pointedly ignored the obvious military reasons that the American government wanted to develop such machines:

“In the world of tomorrow, you'll be able to talk in English and be understood by a Japanese who knows only his own tongue, or by an Ottoman Turk who's acquainted with his own language and no other,” it says. The picture shows what seems to be a foreign dignitary paying a visit to the White House. He has just stepped off his “vertiplane” which has landed in the garden and is shaking hands with a very formal gentleman carrying the “translator box”. The cartoon was inspired by a machine being developed by the US Air Force, the text explains. “Right now it operates at only 40 words per minute and is bulky and complicated. But miniaturization, combined with magnetic tape, suggests far more dramatic possibilities for the future - a translating box that might listen to one vernacular and instantly relay a verbal translation. Any language would then be usable anywhere, universally!”

However, as so often happens, the reality failed to live up to the glossy images and hype. By the mid-1960s there was frustration in the United States about the future of machine translation. Then, in 1966, came the hammer blow. The influential Automatic Language Processing Advisory Committee (Alpac) published a report on the state of the field, and particularly on the success – or lack thereof – of the analysis and scanning of Russian-language documents for US military use. Its conclusion was damning: “We do not have useful machine translation [and] there is no immediate or predictable prospect of useful machine translation.” The committee effectively recommended a halt to the various research programmes and a return to human translators. It was not until the 1980s, when cheap computing power became available, that research began again in earnest.

Just think what Google Translate could be now if that research had never stopped.
