ABSTRACT
Recognized by law, the Brazilian Sign Language (LIBRAS) is the
second official language of Brazil and, according to the IBGE (Brazilian
Institute of Geography and Statistics), Brazil has a large hearing-impaired
community, with approximately nine million deaf people. Nevertheless, most
of the hearing community cannot communicate in or understand this language.
For this reason, LIBRAS interpreters are essential to promote the inclusion
of people with this type of disability in the wider community. An
alternative solution to this problem is to use artificial neural network
methods for LIBRAS recognition and translation. In this work, a process
for LIBRAS recognition and translation is presented, using videos as input
and a convolutional-recurrent neural network known as ConvLSTM. This type
of network receives the sequence of frames from a video and analyzes it,
frame by frame, to determine whether the video belongs to a specific class.
The analysis is done in two steps: first, each frame is processed by the
network's convolutional layer; its output is then passed to the network's
recurrent layer. In the current version of the implementation, data
collection has been carried out, the convolutional-recurrent neural network
has been trained, and it is possible to recognize whether or not a given
LIBRAS video represents a specific sentence in this language.
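The abstract does not specify an implementation framework; as a minimal
illustrative sketch (not the authors' actual architecture), a ConvLSTM
video classifier of this kind can be expressed in Keras. The frame count,
resolution, and number of sentence classes below are placeholder
assumptions, since none are given here:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder dimensions: 30 frames of 64x64 RGB per clip and 10 sentence
# classes; the actual dataset parameters are not stated in the abstract.
NUM_FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 64, 64, 3
NUM_CLASSES = 10

model = models.Sequential([
    # Convolutional-recurrent layer: convolves each frame while carrying
    # a recurrent (LSTM) state across the frame sequence.
    layers.ConvLSTM2D(filters=32, kernel_size=(3, 3), padding="same",
                      return_sequences=False,
                      input_shape=(NUM_FRAMES, HEIGHT, WIDTH, CHANNELS)),
    layers.BatchNormalization(),
    # Collapse the spatial feature map left by the final recurrent step.
    layers.GlobalAveragePooling2D(),
    # One probability per LIBRAS sentence class.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training such a model on labeled clips would then let it score whether a
new video matches a given sentence, mirroring the two-step (convolutional,
then recurrent) analysis described above.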