AI translations: brief history, developments & future trends

This article explores the evolution of AI translations, from early machine translation systems to more advanced neural machine translation models. It also discusses the benefits, limitations, latest developments, and future trends in the field, including the need for human oversight to ensure accuracy and cultural sensitivity.

Communicating with people who speak foreign languages can be a rewarding experience, broadening one's perspective and understanding of other cultures. It becomes a challenge, however, when one does not speak the language of one's conversation partner and, for whatever reason, cannot have a human translator assist. In such instances, having a portable AI (Artificial Intelligence) translation device within reach can be beneficial, as it helps remove the language barrier to a certain degree. To recognize its full potential, however, one needs to understand its underlying concept, the latest developments, and the future trends in the field.

 

Brief history of AI translations

The history of AI translation can be traced through several stages. The initial concept of machine translation (MT) was proposed by the American scientist, mathematician, and science administrator Warren Weaver in 1949. Weaver believed it was possible to employ modern computers to translate between human languages. Since that proposal, machine translation has remained one of the most daunting tasks in natural language processing and artificial intelligence, with generations of researchers dedicating themselves to realizing the aspiration. In terms of methodology, machine translation approaches fall into two primary categories: rule-based and data-driven.

Of these, rule-based methods were preferred prior to the 2000s. Bilingual linguistic experts designed rules for analyzing the source language, transforming the source into the target language, and generating the target-language output. Because creating such systems was subjective and labor-intensive, rule-based software was difficult to scale and quite fragile whenever the rules failed to cover unseen phenomena in a language. Data-driven methods, by contrast, aim to teach computers how to translate using extensive collections of parallel sentence pairs (a parallel corpus) produced by human translators. This approach to machine translation has passed through three periods, beginning in the mid-1980s. In the first period, it was proposed that systems translate sentences by retrieving similar examples from sentence pairs prepared by humans. This was succeeded by a second period in the early 1990s, when statistical machine translation was conceived. The idea is to have the system automatically learn word- and phrase-level translation rules from parallel corpora, combined with probabilistic models that estimate which translation is most likely. This allowed users to translate words and sentences with noticeably improved quality.

However, because integrating multiple manually designed components (the language model, translation model, and reordering model) was so complex, this type of artificial intelligence translation could not fully exploit large-scale parallel corpora, and translation quality remained unsatisfactory. For roughly a decade, little further progress was made in the field until it was proposed to integrate deep learning into machine translation. Following the introduction of such methods, neural machine translation based on deep neural networks has developed at a rapid pace. In 2016, for instance, extensive experiments on different language pairs demonstrated that translation systems using deep neural networks were approaching human-level translation quality.

As the science of artificial intelligence translation has evolved, so have the devices that house the translation systems. In the beginning, the machines running machine translation software were bulky and difficult to carry. As advances in this branch of artificial intelligence and computer science accumulated, the devices storing the translation software improved as well. One example is the appearance of electronic dictionaries, which serve the same purpose as their printed counterparts but run on batteries and contain far more extensive databases, employing statistical machine translation. Although inaccurate at times, such devices could be quite helpful for understanding a written text such as a road sign, or for speaking with a foreigner. They rely on speech recognition, text-to-speech conversion, and probabilistic models.

The working principle of such devices was relatively straightforward. The user would select the category of the sentence they needed interpreted, then input the phrase or word by voice or keyboard. Using statistical (probabilistic) models, the system would generate a translation from the source language into the target language and display the result on screen or read it aloud. Because of the statistical models involved, however, such electronic dictionaries could produce unsatisfactory results, including inaccuracies that could be problematic. The devices were also uncomfortable to transport, as they tended to be relatively heavy and easily damaged. Owing to these factors, together with the rapid progress in deep neural networks, electronic dictionaries progressively became outdated and irrelevant.
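
As a rough illustration only, the lookup-and-rank logic of such a phrase-based device can be sketched in a few lines of Python. The phrase table, probabilities, and the translate function below are invented for the example, not taken from any real device:

```python
# A minimal sketch of a phrase-based electronic dictionary lookup.
# The phrase table and probabilities are made-up toy values; a real
# device would ship a large statistical model learned from a corpus.

phrase_table = {
    "where is the station": [
        ("wo ist der bahnhof", 0.82),       # candidate, estimated probability
        ("wo befindet sich der bahnhof", 0.11),
    ],
    "thank you": [
        ("danke", 0.90),
        ("vielen dank", 0.08),
    ],
}

def translate(source: str) -> str:
    """Return the most probable stored translation for a known phrase."""
    candidates = phrase_table.get(source.lower().strip())
    if not candidates:
        return "<no translation found>"
    # Pick the candidate the probabilistic model rates as most likely.
    best, _score = max(candidates, key=lambda pair: pair[1])
    return best

print(translate("Where is the station"))  # -> "wo ist der bahnhof"
```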

While the first prototypes tended to be inaccurate in their output and bulky as devices, present-day models are compact and highly sophisticated. Current models come in the form of portable tablets, hand-held units similar to a mobile phone, and other formats, so one can travel to many destinations carrying the translator in a handbag or even a pocket.

 

Latest developments in Artificial Intelligence translation

The development of AI translation technology, particularly with regard to output accuracy, continues in the present. Universities and companies have conducted many experiments and research projects aimed at increasing the precision of the software. One example is enabling such systems to translate into morphologically rich languages. Previously, this was handled either by pre-processing words into sub-word units or by performing translation at the character level. The former relies on word segmentation algorithms optimized using corpus-level statistics, without regard to the translation task; in the latter case, the machine learns directly from the translation data but requires deep architectures to be effective. The most recent and highly promising improvement here is to translate words by modeling word formation through a hierarchical latent variable model that mimics the morphological inflection process of natural languages. Words are generated one character at a time by composing two latent representations: a continuous representation, which aims to capture lexical semantics, and a set of approximately discrete features, which capture the morpho-syntactic function shared between two or more surface forms. The proposed model achieves improved accuracy when translating into three morphologically rich languages and demonstrates better generalization in low- and mid-resource settings.
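
To make the sub-word idea concrete, here is a minimal sketch of that pre-processing step, assuming a toy, hand-picked sub-word vocabulary; real systems (for example, byte-pair encoding) learn their vocabulary from corpus statistics instead:

```python
# Toy greedy longest-match segmentation into sub-word units.
# The vocabulary below is invented for illustration; real systems
# learn merges from corpus-level frequency statistics.

subword_vocab = {"un", "translat", "able", "ness", "es", "s", "e"}

def segment(word: str) -> list[str]:
    """Split a word into known sub-word units, longest match first."""
    units, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest span first
            piece = word[i:j]
            if piece in subword_vocab or j - i == 1:
                units.append(piece)         # unknown single chars pass through
                i = j
                break
    return units

print(segment("untranslatableness"))  # -> ['un', 'translat', 'able', 'ness']
```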

In addition to the project above on improving output quality, another experiment worth mentioning is the introduction of Data Diversification: a simple strategy for boosting machine translation performance by diversifying the training data. The system uses the predictions of multiple forward and backward models and merges them with the original dataset on which the final neural machine translation model is trained. According to the reported results, the method can be applied to all machine translation models and requires neither extra monolingual data, such as that acquired through back-translation, nor additional computation and parameters, as model ensembles do. In the experiments, the method achieved BLEU scores of 30.7 and 43.7 on the WMT'14 English-German and English-French tasks respectively. It likewise demonstrated substantial quality improvements on eight other tasks (four IWSLT tasks covering the same language pairs as the WMT'14 ones, and four low-resource tasks: English-Nepali and English-Sinhala). These results suggest that the approach is more efficient than knowledge distillation and dual learning.
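
A schematic sketch of the procedure might look as follows. The train and translate functions are trivial hypothetical stand-ins; in a real setup each would be a full neural machine translation training or inference run:

```python
# Schematic sketch of Data Diversification. `train` and `translate`
# are hypothetical stand-ins; each would really be a full NMT
# training / inference pipeline.

def train(src, tgt):
    # Stand-in: a "model" is just the pairing it was trained on.
    return dict(zip(src, tgt))

def translate(model, sentences):
    # Stand-in: look up the memorised translation.
    return [model.get(s, s) for s in sentences]

def diversify(pairs, rounds=3):
    src = [s for s, _ in pairs]
    tgt = [t for _, t in pairs]
    augmented = list(pairs)
    for _ in range(rounds):
        fwd = train(src, tgt)                       # forward model: src -> tgt
        bwd = train(tgt, src)                       # backward model: tgt -> src
        augmented += zip(src, translate(fwd, src))  # synthetic targets
        augmented += zip(translate(bwd, tgt), tgt)  # synthetic sources
    return augmented                                # final model trains on this

toy = [("hello", "hallo"), ("world", "welt")]
print(len(diversify(toy)))  # original pairs plus the synthetic ones
```

In the actual method, the diversity comes from training several differently initialized models whose predictions vary, which the deterministic stubs above cannot capture.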

A third interesting research project that can help improve neural machine translation systems proposes a new framework for unsupervised machine translation based on a reference language. In this framework, the reference language shares a parallel corpus only with the source language, which provides a clear enough signal to support the reconstruction training of the translation system through a reference-agreement mechanism. The reported experiments show that the method improves the quality of the texts generated by the unsupervised neural machine translation system compared to a strong baseline that uses only one auxiliary language.



Future trends for AI translations

Neural machine translation is a relatively young and rapidly developing area of artificial intelligence, computer science, and language processing. Given its speed of development and the high interest shown by the scientific and business communities, there are quite a few future trends worth observing, exploring, and studying in this discipline.

 

Improving the efficiency of the system

These include, for instance, analyzing and identifying new ways to improve the efficiency of neural machine translation inference while maintaining high accuracy. Such improvements can help avoid the quality degradation inflicted by non-autoregressive neural machine translation systems. Areas where further refinement is possible include the word ordering of the decoder input, among others. In this regard, many researchers consider the potential of synchronous bidirectional decoding to be worthy of deeper investigation. Moreover, several research teams have begun to design decoding algorithms that consume input information in free order, with experiments showing promising results for studying the nature of human language generation.
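
The efficiency concern can be illustrated with a deliberately simplified sketch: an autoregressive decoder emits tokens one at a time, each step waiting on the previous one, whereas a non-autoregressive decoder predicts all positions in parallel, which is faster but is where quality degradation tends to appear. The predict functions below are hypothetical placeholders for a trained decoder network:

```python
# Highly simplified contrast between autoregressive and
# non-autoregressive decoding. The predict_* functions are
# hypothetical placeholders for a trained decoder network.

def predict_next(prefix):
    # Stand-in: pretend the model extends the prefix by one token.
    return f"tok{len(prefix)}"

def predict_all(length):
    # Stand-in: pretend the model fills every position at once.
    return [f"tok{i}" for i in range(length)]

def decode_autoregressive(max_len=5):
    out = []
    for _ in range(max_len):      # sequential: step i waits for step i-1
        out.append(predict_next(out))
    return out

def decode_non_autoregressive(max_len=5):
    return predict_all(max_len)   # parallel: all positions at once

print(decode_autoregressive())      # slower, conditions on its own output
print(decode_non_autoregressive())  # faster, but positions are predicted independently
```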

 

Improving low-resource translation output text/speech quality

Another future trend that is likely to remain a long-standing point of contention is low-resource translation. One reason this topic is likely to occupy the scientific and business communities for many years is that many natural human languages lack large amounts of annotated bilingual data. There is accordingly high interest in building multilingual neural machine translation systems, with many open questions remaining. One example is how to deal with the data imbalance issue that such systems frequently face; another is how to build a good, incremental multilingual model that can accommodate newly added languages.
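
One widely used remedy for the imbalance problem, included here purely as an illustrative assumption rather than something the article prescribes, is temperature-based sampling over language pairs, which up-samples low-resource pairs by flattening the data distribution. A minimal sketch with invented corpus sizes:

```python
# Temperature-based sampling over language pairs: a common remedy for
# data imbalance in multilingual NMT (an illustrative technique, not
# one named by the article). The corpus sizes below are invented.

corpus_sizes = {"en-fr": 40_000_000, "en-de": 4_500_000, "en-ne": 560_000}

def sampling_probs(sizes, temperature=5.0):
    """p_l proportional to (size_l / total) ** (1 / T); T > 1 flattens."""
    total = sum(sizes.values())
    weights = {l: (n / total) ** (1.0 / temperature) for l, n in sizes.items()}
    norm = sum(weights.values())
    return {l: w / norm for l, w in weights.items()}

print(sampling_probs(corpus_sizes))        # low-resource pairs are up-sampled
print(sampling_probs(corpus_sizes, 1.0))   # T=1 reproduces the raw proportions
```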

 

Making semi-supervised machine translation systems easier to build 

Semi-supervised machine translation models are also a frequently discussed topic, as they are highly practical in real applications, but the back-translation algorithm at the core of such systems tends to be quite time-consuming. Because of this, many teams are exploring alternative designs that would allow an efficient semi-supervised neural machine translation model to be built and deployed easily. Additionally, deep integration of pre-training methods into neural machine translation has the potential to improve both unsupervised and semi-supervised frameworks.
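
For orientation, the core back-translation step can be sketched as follows. Here, reverse_model is a hypothetical stand-in for a trained target-to-source model; a real pipeline would run it over millions of monolingual sentences, which is exactly the cost the paragraph describes:

```python
# Minimal sketch of the back-translation step at the core of
# semi-supervised NMT. `reverse_model` is a hypothetical stand-in
# for a trained target->source translation model.

def reverse_model(sentence):
    # Stand-in: a real system would run target->source inference here.
    return f"<synthetic source for: {sentence}>"

def back_translate(monolingual_target):
    """Pair each target sentence with a machine-generated source."""
    return [(reverse_model(t), t) for t in monolingual_target]

mono = ["Der Zug hat Verspätung.", "Das Wetter ist schön."]
for src, tgt in back_translate(mono):
    print(src, "->", tgt)   # synthetic pairs to mix into the training data
```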

 

Further research on when to use various modalities

Within multimodal neural machine translation, there is the problem of how and when to make full use of the various modalities. For instance, image-text translation is applicable only to image captions. Likewise, in speech translation, the end-to-end framework cannot yet perform on par with the cascaded approach in many situations, particularly when training data is limited.
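
The contrast between the two speech-translation architectures can be sketched roughly as below; all functions are hypothetical stand-ins for trained models:

```python
# Sketch of the two speech-translation architectures contrasted above.
# All functions are hypothetical stand-ins for trained models.

def asr(audio):
    return "transcribed text"            # speech -> source-language text

def mt(text):
    return f"translation of '{text}'"    # source text -> target text

def cascaded_speech_translation(audio):
    # Two separate models chained together: ASR errors propagate to MT,
    # but each component can train on abundant single-task data.
    return mt(asr(audio))

def end_to_end_speech_translation(audio):
    # One model mapping audio directly to target text; it needs scarce
    # speech-to-translation data, which is why it can lag behind the
    # cascade when training data is limited.
    return "direct translation"          # stand-in for one joint model

print(cascaded_speech_translation(b"\x00\x01"))
print(end_to_end_speech_translation(b"\x00\x01"))
```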

 

Removal of simultaneous translation issues

When it comes to simultaneous translation, many issues still need attention, such as the unexplored problems of repetition and correction in speech, as well as combining translation with summarization, which could help audiences grasp the essential details of a speaker's speech at low latency.

 

Integrating background information such as history & environment

Quite often, machine translation is not only about interpreting text, speech, or images. It is also strongly tied to culture, history, and environment: vital background information for which a novel neural machine translation model needs to be created, so that it can generate translations consistent with that background.

 

Artificial Intelligence translations – a means of improving communication

Devices using artificial intelligence translation trace their development to 1949, when the American mathematician Warren Weaver proposed using machines to help remove the language barrier by teaching them to translate from one language into another. This led to a new branch of artificial intelligence, computer science, and language processing with three distinct periods, the third of which is currently in progress. As time has progressed, the technology behind such translation devices has evolved, improving the sophistication and accuracy of the texts and speech they output and making them a suitable means of reducing or removing the language barrier in many situations, such as during a business meeting. Some of the latest developments include new methods such as Data Diversification, further refinement of translation quality through the deeper use of context, and experiments in improving output when the target language is morphologically rich. Many future trends are also likely to be developed or explored in the coming years, including the refinement of neural machine translation inference, domain adaptation, and the integration of prior knowledge such as bilingual lexicons and history.

16.11.2020
