Posted on 2020-05-01, authored by Alberto Mario Bellini
In this work, we introduce a new architecture for Visual Question Answering (VQA), an open research problem at the intersection of the NLP and Computer Vision communities.
In recent years, with the advent of Deep Learning and the exponential growth of computing power, researchers have devised effective solutions to the problem.
However, most related work shares a common limitation: the set of possible answers is usually restricted to a fixed pool of candidates, which constrains the expressiveness of such models.
We describe an architecture that leverages state-of-the-art language models, such as the Transformer, to generate open-ended answers. Our contribution to the scientific community thus lies in a new approach that allows VQA systems to produce unconstrained answers.
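To make the distinction concrete, the sketch below (ours, not code from this thesis; all class names, dimensions, and hyperparameters are illustrative assumptions) contrasts the conventional classification-style VQA head, which can only select among a fixed answer list, with a Transformer-decoder head that generates the answer token by token over the full vocabulary:

```python
# A minimal sketch in PyTorch contrasting the two answer strategies.
# Module names, dimensions, and vocabulary sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ClassifierVQA(nn.Module):
    """Conventional VQA head: picks one answer from a fixed candidate set."""
    def __init__(self, fused_dim=512, num_answers=3000):
        super().__init__()
        # Scores over a closed list of candidate answers.
        self.head = nn.Linear(fused_dim, num_answers)

    def forward(self, fused_features):
        # argmax over the output indexes into the fixed answer set.
        return self.head(fused_features)

class GenerativeVQA(nn.Module):
    """Open-ended head: a Transformer decoder emits the answer token by token."""
    def __init__(self, fused_dim=512, vocab_size=30000, num_layers=2, num_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, fused_dim)
        layer = nn.TransformerDecoderLayer(
            d_model=fused_dim, nhead=num_heads, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        # Distribution over the full vocabulary, so any word sequence is
        # reachable, not just a preset list of answers.
        self.proj = nn.Linear(fused_dim, vocab_size)

    def forward(self, answer_tokens, fused_features):
        # fused_features: (batch, seq, dim) joint image-question representation.
        tgt = self.embed(answer_tokens)
        out = self.decoder(tgt, memory=fused_features)
        return self.proj(out)

# Toy usage: a batch of 4 fused image-question representations.
fused = torch.randn(4, 10, 512)
logits = GenerativeVQA()(torch.randint(0, 30000, (4, 5)), fused)
print(logits.shape)  # torch.Size([4, 5, 30000])
```

In the generative variant, decoding proceeds autoregressively at inference time, so the answer length and wording are unconstrained; the classifier, by contrast, can never produce an answer outside its predefined candidate set.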
First, we introduce the necessary background as well as the key computational models for processing text and images. Finally, we show that our architecture compares favorably with other VQA models, setting a new baseline for future work.