Many people turn to computers to translate text in other languages, and this can be a quick and effective way to convey information. Travelers often rely on their phones or apps to communicate with locals in another language. While results from this technology are often fast, they are not always reliable. Machine translation has grown rapidly in less than a century, but it still has room to develop.
The roots of machine translation date back to 1933, when the Soviet scientist Peter Troyanskii presented a new technology to the Academy of Sciences of the USSR. The machine consisted of a typewriter, cards in four different languages, and a camera used to select and print words when translating from one language to another. The operator typed in a word, found the matching card, took a photo, and entered certain linguistic information about the word on the typewriter. The typewriter encoded one feature at a time, and the tape and film together produced a set of frames containing the words and their characteristics.
The next major step occurred in 1954, when the Georgetown-IBM experiment publicly demonstrated machine translation for the first time, rendering 60 Russian sentences into English. The experiment used a vocabulary of 250 words and just six grammar rules, and it was limited to carefully chosen sentences with no ambiguity. Even so, it was enough to stimulate funding for machine translation research in the United States and other countries around the world.
The next era relied on bilingual dictionaries that offered one or more equivalent words for each entry, but researchers were disappointed to find that semantic barriers limited these systems. A 1966 report (the ALPAC report) concluded that machine translation was slower, less accurate, and twice as expensive as human translation.
By the 1980s, many countries had developed their own translation systems. By the 1990s, some systems were based on grammatical rules, while others translated with little or no regard for context. Research was also conducted on speech translation, which combined speech recognition with translation models. By 2000, the dominant form was statistical machine translation, which breaks text into segments and uses statistical evidence to select the most likely translation for each one. Today, the main approaches continue to be rule-based and statistics-based.
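The core idea of statistical machine translation described above can be sketched in a few lines: split the source into segments, then pick each segment's most probable translation from a table of observed phrase pairs. The phrases and probabilities below are invented purely for illustration; real systems learn them from millions of sentence pairs.

```python
# Toy phrase table: for each source segment, candidate translations
# with (made-up) probabilities learned from parallel text.
phrase_table = {
    "la casa": {"the house": 0.7, "the home": 0.3},
    "es grande": {"is big": 0.6, "is large": 0.4},
}

def translate(segments):
    # For each segment, choose the candidate with the highest probability.
    return " ".join(
        max(phrase_table[seg], key=phrase_table[seg].get) for seg in segments
    )

print(translate(["la casa", "es grande"]))  # -> the house is big
```

Real statistical systems also weigh a language model and reordering costs, but segment-by-segment selection by statistical evidence is the heart of the approach.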
While the technology involved in machine translation has certainly advanced since its early roots, it still has pitfalls.
Some research has found that machine translation renders up to two-thirds of sampled sentences incorrectly.
Several studies have pitted machine translation against human translation. While machine translation has made great strides, researchers at the University of Zurich concluded that human translation is more adequate and fluent than machine translation when an entire document is considered. In another comparison, Google Translate scored 28/60 on a sample where human translators scored 49/60. Ultimately, these experiments found that machine translation often produces awkwardly worded translations that are not reliable.
Critics also note that machines lack instinct when translating documents. Professional translators are more likely to grasp nuance and figurative language, drawing on their linguistic and cultural background as well as their personal knowledge. Professional translation services can provide accurate translations of important texts and complete website translations, along with localization services and cultural consulting grounded in a strong understanding of the local language and culture.
Drop us a note and one of our experts will set up a time to discuss the ways Dynamic Language can help your business go further, faster.