About
Why is machine translation between English and Portuguese significantly better than machine translation between Dutch and Spanish? Why do speech recognizers work better in German than in Finnish? In both cases, the main problem is the insufficient amount of labelled training data. Although the world is multimodal and highly multilingual, speech and language technology is not keeping up with the demand in all languages. We need better learning methods that exploit advances in a few modalities and languages for the benefit of the others.
This project addresses the low-resource problem and the cost of the conventional approach to multilingual machine translation, which requires a separate system for every translation pair. LUNAR proposes to jointly learn a multilingual and multimodal model that builds upon a lifelong universal language representation. This model will compensate for the lack of supervised data and significantly increase the system's capacity to generalize from training data, given the unusually wide variety of resources employed. It will also reduce the number of required translation systems from quadratic to linear in the number of languages, while allowing incremental adaptation to new languages and data.
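As a rough back-of-the-envelope illustration of the quadratic-to-linear reduction (a sketch under an assumed counting: one dedicated model per directed language pair versus one encoder and one decoder per language, all sharing a common representation):

```python
# With N languages, building one system per directed translation pair
# needs N * (N - 1) models, i.e. a quadratic number of systems.
def pairwise_systems(n_languages: int) -> int:
    return n_languages * (n_languages - 1)

# With a shared universal representation, each language only needs one
# encoder and one decoder, i.e. a linear number of modules.
def shared_representation_modules(n_languages: int) -> int:
    return 2 * n_languages

for n in (5, 10, 24):
    print(n, pairwise_systems(n), shared_representation_modules(n))
# 5  -> 20 pairwise models vs. 10 shared modules
# 10 -> 90 vs. 20
# 24 -> 552 vs. 48  (e.g. the 24 official EU languages)
```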
The high-risk/high-gain aspect lies in automatically training a universal language representation with specifically designed deep learning algorithms. LUNAR will employ an encoder-decoder architecture. The encoder compresses the input into a lower-dimensional abstraction, which becomes the proposed universal language representation; from this abstraction, the decoder generates the output. The internal architecture of the encoder and decoder will be explicitly designed for learning this universal representation, which will be integrated as an objective of the architecture.
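A minimal, hypothetical sketch of this idea (not the LUNAR architecture itself): language-specific encoders map input sentences into one shared, fixed-size latent vector, and language-specific decoders generate output from that vector. All module choices below (GRU layers, dimensions, vocabulary sizes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM = 256  # dimensionality of the shared representation (assumed)

class Encoder(nn.Module):
    """Encodes token ids of one language into the shared latent space."""
    def __init__(self, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 128)
        self.rnn = nn.GRU(128, LATENT_DIM, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        _, hidden = self.rnn(self.embed(tokens))
        return hidden[-1]                       # (batch, LATENT_DIM) shared vector

class Decoder(nn.Module):
    """Generates token logits of one language from the shared latent vector."""
    def __init__(self, vocab_size: int):
        super().__init__()
        self.rnn = nn.GRU(LATENT_DIM, LATENT_DIM, batch_first=True)
        self.out = nn.Linear(LATENT_DIM, vocab_size)

    def forward(self, latent, max_len: int = 20):
        # Feed the latent vector at every step; a real system would decode
        # autoregressively, typically with attention.
        steps = latent.unsqueeze(1).repeat(1, max_len, 1)
        outputs, _ = self.rnn(steps, latent.unsqueeze(0).contiguous())
        return self.out(outputs)                # (batch, max_len, vocab_size)

# One encoder and one decoder per language (toy vocabularies of 10,000 tokens).
languages = ["en", "pt", "nl", "es"]
encoders = {lang: Encoder(vocab_size=10_000) for lang in languages}
decoders = {lang: Decoder(vocab_size=10_000) for lang in languages}

src = torch.randint(0, 10_000, (2, 15))        # dummy Dutch batch of token ids
latent = encoders["nl"](src)                   # shared "universal" representation
logits = decoders["es"](latent)                # Spanish output logits
```

Because every encoder and decoder meets in the same latent space, any source language can be paired with any target language without training a dedicated model for that pair.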
LUNAR will impact the highly multidisciplinary communities of specialists in computer science, mathematics, engineering and linguistics who work on natural language understanding and on natural language and speech processing applications.