Is the rise of AI translation machines leading to a showdown for the human translator? Find out in this captivating article that delves into the age-old debate of humans versus machines in the field of translation.
Artificial intelligence can be beneficial in many aspects of everyday life: it can assist with tasks such as machine assembly, traffic control and medical diagnosis, among others. According to critics, however, it also threatens to replace humans in long-established professions, leaving many without permanent work. One such profession is that of the translator or interpreter, owing to the steady progress made in teaching artificial intelligence to translate sentences and texts from one language into another with accuracy approaching that of a human. But are translators truly at risk of becoming obsolete, or can the technology in fact serve as an aid?
What does AI Translation mean & how does it work?
AI translation (artificial intelligence translation) is a form of machine translation that renders sentences or texts from a source language into a target language using an approach called neural machine translation. A deep neural network converts the sequence of words forming a sentence in the source language into a sequence of words forming a sentence in the target language. Deep neural networks are computational models built from a large array of specialized layers, with varieties including recurrent layers, pointer networks, convolutional layers and others. Such architectures can solve specific classes of tasks while learning from comparatively modest amounts of data. These models are the result of decades of refinement of the earliest neural networks: Warren McCulloch and Walter Pitts proposed a formal model of the neuron in a joint study in the early 1940s, and the American psychologist Frank Rosenblatt later built on their work to implement a neural network model on a computer. Such networks are trained with the backpropagation-of-error algorithm, which consists of two steps. First, the network predicts the target value, and the error is calculated as the difference between the predicted and the true value. Second, the weights of the network are adjusted so as to reduce that error. For the network to learn to predict the required values correctly, large amounts of data must be fed through this cycle.
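The predict-then-adjust cycle described above can be sketched for a single artificial neuron with one weight. The toy data (points on the line y = 2x) and the learning rate are illustrative assumptions, not part of any production system; real networks repeat this same loop across millions of weights.

```python
# Minimal sketch of the two-step training cycle for one linear neuron.
# The data (y = 2x) and the learning rate are illustrative assumptions.

def train(samples, epochs=100, lr=0.1):
    w = 0.0  # single weight, initialized arbitrarily
    for _ in range(epochs):
        for x, y_true in samples:
            y_pred = w * x            # step 1: predict the target value
            error = y_pred - y_true   # compare prediction to the true value
            w -= lr * error * x       # step 2: adjust the weight to reduce the error
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(samples)
# after training, w has converged toward 2.0, the slope of the toy data
```

The same principle scales up: with more weights the adjustment in step 2 is computed layer by layer, which is what "backpropagation" refers to.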
Deep neural networks are also used in the field of automatic text processing. The field originated as a blend of computational/mathematical linguistics and machine learning, and one of its primary problems is how to represent texts in computer memory while preserving their structure and semantics. Two essentially different solutions have been used for this problem: a linear-algebraic vector model and a probabilistic language model. The vector model works on the principle of turning each text into a vector of word frequencies, which can then be fed to traditional machine-learning methods such as support vector machines or hierarchical clustering. However, this representation discards the order of words. The language model, by contrast, helps answer two questions: what is the probability of a given word sequence, and which words are most likely to follow it?
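The vector model's key limitation, the loss of word order, is easy to demonstrate. The two sentences below are a hypothetical toy example; any two reorderings of the same words would behave identically.

```python
# Sketch: turning texts into word-frequency vectors (the "vector model").
# Word order is lost: both sentences below map to the same vector.
from collections import Counter

def to_vector(text, vocabulary):
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["the", "cat", "chased", "dog"]
v1 = to_vector("the dog chased the cat", vocab)
v2 = to_vector("the cat chased the dog", vocab)
# v1 == v2 -- the vector model cannot tell who chased whom
```

This is exactly why the language model, which retains information about sequence, is needed alongside it.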
Before the development of neural networks, the parameters of such language models were estimated using the apparatus of Markov chains. These models were imperfect, however, because they remember only a comparatively small and fixed number of preceding words. Apart from the problem of text representation, there is also the problem of representing word meaning. Cognitive and traditional linguistics presume that a person understands the meaning of a word through its context, that is, the neighboring words and their meanings. On this basis, linguists have concluded that word meanings can be represented by vectors: context vectors help determine which words occur next to which in the data. Such representations, in combination with deep learning, aid in translating texts. There is one primary difference between the traditional and deep machine-learning approaches to translation: how the feature space describing basic elements such as words is defined. Traditional machine learning relies on complex linguistic resources such as specialized databases or sentiment dictionaries; compiling such large data banks demands significant dedication and can take up to a decade. Deep learning, in contrast, seldom requires such specialized external resources, but it encounters a different problem: the extensive amounts of data needed for learning must be marked up. To that end, annotators manually label, in every new document, the words the neural network will then learn to extract.
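The language model's question, which word is most likely to follow a given one, can be sketched as a first-order Markov (bigram) model. The three-sentence corpus below is purely illustrative; real systems estimate these counts from millions of sentences, and neural models replace the fixed-window counts entirely.

```python
# Sketch of a bigram (first-order Markov chain) language model on a toy corpus.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the cat ate the fish",
]

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def most_likely_next(word):
    """Which word is most likely to follow `word` in this corpus?"""
    return follows[word].most_common(1)[0][0]
```

Because the model conditions on only one preceding word, it illustrates the fixed-memory limitation described above: it can never use context further back than its window allows.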
At present, many projects in the field use neural networks. Among the primary architectures are convolutional neural networks and recurrent neural networks, particularly two-layer recurrent networks built from long short-term memory (LSTM) cells. Such a structure permits reading an input sentence while retaining the meaning of each individual word, as well as that of the sentence itself, in separate vectors. On this basis the network can decide, for instance, whether a certain word is the name of a city or a person, and it can learn to generate a translation into another language. Quite often this type of architecture employs the attention mechanism: a simple superstructure added to the recurrent layers so that the system learns to focus on the important elements of a sentence. Using such a mechanism, the system will focus first on the subject and the predicate, and only then on optional elements such as the object, the complement or the modifier. Very often, neural networks also include a special type of operator known as a "briquette", which maps several vectors together; several convolutions are then applied, with the output being a series of scalars. Such networks, particularly the convolutional variants, search for stable n-grams (sequences of n words significant for the task at hand), ranging from terms to names to fixed collocations. N-grams were widely used before the development of neural networks, but it is their combination with convolution, a method and term borrowed from computer vision, that has brought the increase in translation quality. Very often such networks perform more than one classification task: for instance, the system may need to conduct a thematic classification, followed or accompanied by paraphrase detection and extraction of relations between words.
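The attention mechanism described above can be sketched as a softmax over relevance scores followed by a weighted average of word vectors. The two-dimensional vectors and the hand-set scores are illustrative assumptions; in a real network both are learned.

```python
# Sketch of the attention mechanism: softmax turns relevance scores into
# weights, and the context vector is the weighted average of word vectors.
# The vectors and scores are hand-set for illustration; real systems learn both.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(vectors, scores):
    weights = softmax(scores)
    dim = len(vectors[0])
    # Weighted average: words with higher scores dominate the result.
    return [sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(dim)]

# Toy sentence: the subject and predicate get high scores, a modifier a low one.
word_vectors = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # "cats", "sleep", "often"
scores = [2.0, 2.0, -1.0]  # attention focuses on the first two words

context = attend(word_vectors, scores)
```

The weights always sum to one, so the context vector stays in the same space as the word vectors while emphasizing whatever the scores single out.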
For example, a neural network of this type may be given two sentences: "I have a cat which is black and white" and "My cat is black and white". The system must decide whether both sentences have the same meaning. To do so, it analyzes the meaning of the sentences as well as of the separate words, beginning with essential elements such as "I", "have", "My", "cat", and so on, then runs several algorithms to determine, based on certain parameters, whether the meanings of the sentences match. The same network can also determine whether two or more objects stand in a specific relation (for example: in "A horse is a type of animal found on farms", is "horse" related to "animal", and what kind of relation is it?).
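A very crude, non-neural version of this paraphrase decision can be sketched with word overlap. A real network compares learned sentence vectors instead; the stopword list and the 0.5 threshold below are illustrative assumptions.

```python
# Crude sketch of a paraphrase check via word overlap (Jaccard similarity).
# A neural system compares learned sentence vectors; the stopword list and
# threshold here are illustrative assumptions.

STOPWORDS = {"i", "my", "a", "is", "have", "which", "and"}

def content_words(sentence):
    return {w.strip(".,").lower() for w in sentence.split()} - STOPWORDS

def looks_like_paraphrase(s1, s2, threshold=0.5):
    a, b = content_words(s1), content_words(s2)
    jaccard = len(a & b) / len(a | b)  # shared words / all words
    return jaccard >= threshold

r = looks_like_paraphrase("I have a cat which is black and white",
                          "My cat is black and white")
# r is True: both sentences share the content words {cat, black, white}
```

The neural approach wins precisely where this sketch fails: sentences that share no words but mean the same thing, or share all words but mean different things.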
Quite often, these networks are combined in a layered arrangement with the following division of labor: the main task is handled by the recurrent layer; the convolutional layer plays an intermediate role between the main-task layer and the input words; and a character-level layer analyzes the input words character by character, taking word modification and formation into account. This allows parts of speech to be identified without external morphological processors such as large compiled databases. It should be noted that much of the recent improvement in machine-translation quality has come from the many research projects dedicated to adapting such architectures to text data: until recent years these networks were seldom used for texts, because a slight change to a vector does not correspond to a different discrete feature such as another letter or word. Examples of such projects include the incorporation of a variational autoencoder, which aids in morphological analysis tasks and permits finding regularities in word formation. Other projects have enhanced generative adversarial networks in their ability to generate texts of a certain genre or style, helping to create personalized assistants capable of communicating using one's favorite words and jokes.
Apart from the previously mentioned inflexibility of vectors, there is also the issue of noise. Social-network data suffers from accidental and intentional misprints by users, and such misprints are not always correctable. Beyond these errors, data obtained from social networks contains a mixture of different languages, unusual punctuation and pseudo-graphics. Speech-processing tasks, on the other hand, face the problem of separating the main speaker's voice from background music, other dialogue and nonessential noise. In such compromised data it can be quite challenging for a neural network to locate regularities; the problem can be counterbalanced by training the system on larger amounts of marked-up data.
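One standard way the correctable class of misprints can be handled is by matching each unknown word against a vocabulary using edit distance. The tiny vocabulary below is an illustrative assumption; real pipelines use large lexicons and context-aware models.

```python
# Sketch: correcting accidental misprints via Levenshtein edit distance
# against a known vocabulary. The tiny vocabulary is an illustrative
# assumption; real pipelines use large lexicons and context.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def correct(word, vocabulary, max_distance=2):
    best = min(vocabulary, key=lambda v: edit_distance(word, v))
    return best if edit_distance(word, best) <= max_distance else word

vocab = ["accidental", "intentional", "translation"]
fixed = correct("accidential", vocab)
# fixed == "accidental" (one inserted letter away)
```

Intentional distortions, mixed languages and pseudo-graphics resist this kind of fix, which is why larger amounts of marked-up training data remain the main remedy.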
Is AI translation a threat or an aid to human translators?
AI systems using deep learning and neural networks have been considered, almost since their conception, a threat to many human professions. This applies to a certain degree to human translators as well, the primary concern being that artificial intelligence could replace such professionals in translating text or speech. However, AI translation machines may pose less of a threat than they offer help.
Some fear AI translation is capable of replacing human interpreters due to several contributing factors. For instance, such systems can translate text or speech in less time than human translators, since the devices need only moments to locate and select suitable words and connect them into coherent sentences. Because translation takes less time, clients receive their documents earlier than if they had employed human translators. In addition, such machines can provide a relatively high rate of consistency, particularly when translating documents from the same field or in the same style each time they are used. This consistency makes it less challenging for companies to deliver high-quality texts to their clients while avoiding the errors and mistakes that reduce that quality.
Apart from the reasons above, AI translation is considered a threat to human translation professionals because it permits quicker turnaround and revenue, and allows companies to make significant savings. Machines can translate a text and customize it without hours of paid work, reducing the costs incurred, especially costly translator fees. Additionally, because such systems can learn from their previous errors using different parameters, they adapt and expand their databases relatively quickly, allowing AI translation machines to rapidly accumulate the kind of knowledge an experienced translator has acquired through training and years of practicing the art and profession of translation.
Yet AI translation devices can equally be employed as beneficial systems that increase productivity. As stated previously, neural translation machines can produce an interpretation of a text or speech more quickly than their human counterparts, helping professionals complete more projects in less time. Such systems can be further trained to assist professionals by providing a database used during translation, particularly for texts containing terms that can be challenging to render in another language, through integration into computer-assisted translation (CAT) tools. Similarly, the technology can aid in translating low-resource languages such as Romanian or Irish, by training the systems on databases of words and phrases from those languages; alternatively, AI translation devices can handle high-resource languages while human interpreters concentrate on low-resource ones. Furthermore, such machines are useful where the tone of voice is neutral or polite, so that intonation need not be conveyed. And because these systems can be integrated into portable devices such as a tablet, earpiece or mobile phone, they can be used at any time.
AI translation devices currently provide relatively high quality when translating clearly spoken speech, or text that does not contain idiomatic phrases or figures of speech. At their current level of development and sophistication, they can be employed in several everyday situations of this kind.
AI translation devices can assist humans with communication but cannot replace them
AI translation devices have improved significantly in recent years at delivering good-quality translations of both text and speech. This is due to neural networks that use a large array of specialized layers, along with other nonlinear functions, to analyze sentences and texts and to learn both from the current input and from previously seen data. Because such systems can learn from previous information and errors, and because output quality has improved while speed and consistency have been retained, some consider them a threat to professional human translators.
However, while neural translation systems do have such advantages, they cannot replace human translators, and are unlikely to be able to for at least a decade or more. The devices have several disadvantages that render them unable to fully replace an experienced human interpreter. Although they can translate texts with comparatively high quality, they do not always achieve the same excellence in fluency and natural-sounding output. If such a machine were to translate a literary text or the speech of an orator, the generated interpretation would likely sound less natural and fluent than one produced by an experienced human professional. This is because AI translation systems struggle to comprehend cultural differences and nuances, as well as figures of speech and idiomatic phrases, which quite often form the essence of a text or speech. Similarly, neural networks do not factor in tone of voice, which can be detected even in written texts. Because of these disadvantages, AI translation cannot replace human translators, particularly in interpreting orators such as politicians or in translating fiction.
While far from the level of sophistication at which they could replace human translators, such machines can benefit those professionals by increasing their productivity. They can also aid everyday communication, in settings such as restaurants, hotels and business meetings.