Deep Learning is based on models composed of multiple layers of neural networks. In recent years, its application to Natural Language Processing has had a very significant impact, providing, among other things, great improvements in machine translation. A key point of neural translation is that it constitutes a full end-to-end system. Another important advantage is that it allows attention mechanisms to be incorporated as an effective way of modelling long-distance dependencies and word reordering during translation.
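The attention idea mentioned above can be sketched very compactly: at each decoding step, the decoder scores every source position against its current state and builds a weighted "context" summary of the source. The following is a minimal, hypothetical dot-product attention sketch (the function names, dimensions, and random inputs are illustrative, not taken from any particular system):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dot_product_attention(decoder_state, encoder_states):
    """Score each source position against the current decoder state,
    normalize the scores into attention weights, and return the
    weighted context vector over the source."""
    scores = encoder_states @ decoder_state   # one score per source position
    weights = softmax(scores)                 # attention distribution (sums to 1)
    context = weights @ encoder_states        # weighted summary of the source
    return context, weights

# illustrative inputs: 5 source positions, hidden size 8
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 8))
decoder_state = rng.normal(size=8)
context, weights = dot_product_attention(decoder_state, encoder_states)
```

Because the weights form a distribution over source positions, the decoder can focus on distant or reordered source words at each step, which is what makes attention effective for long-distance dependencies and reordering.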
Despite these benefits and generally better results, recent studies show that neural translation tends to introduce errors that other approaches, such as classical statistical systems, do not produce. Neural translation lacks a mechanism to keep track of which source words have already been translated and which still need to be, and this can lead to "over-translation" or "under-translation". It is also known that neural translators tend to generate fluent phrases in the target language that do not always reflect the original meaning of the source sentence. Finally, neural models have trouble memorizing the translations of very rare words. Statistical machine translation systems, on the other hand, have a mechanism that ensures every source word is used in the translation once and only once.
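One common remedy for the tracking problem described above is a coverage vector: the attention mass assigned to each source word is accumulated across decoding steps, so words that keep attracting attention signal possible over-translation and words that never receive attention signal possible under-translation. A minimal sketch, with hypothetical hand-picked attention distributions purely for illustration:

```python
import numpy as np

def update_coverage(coverage, attention_weights):
    """Accumulate the attention mass placed on each source position."""
    return coverage + attention_weights

# hypothetical attention distributions over 4 source words
# at 3 successive decoding steps (illustrative values)
steps = [
    np.array([0.7, 0.2, 0.1, 0.0]),
    np.array([0.6, 0.3, 0.1, 0.0]),
    np.array([0.1, 0.1, 0.7, 0.1]),
]

coverage = np.zeros(4)
for weights in steps:
    coverage = update_coverage(coverage, weights)

# coverage[0] well above 1: source word 0 risks being over-translated;
# coverage[3] near 0: source word 3 risks being left untranslated.
```

A statistical system enforces the "once and only once" constraint exactly; the coverage vector is the soft, differentiable counterpart that can be fed back into the neural model's attention.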
The goal of this project is to help the neural system overcome the problems mentioned above by considering all the possibilities for combining statistical machine translation models with neural models.