Domain Adaptation and Transfer Learning methods enhance Deep Learning Models used in Inner Speech Based Brain Computer Interfaces
Keywords:
Convolutional Neural Network, Transfer Learning, Domain Adaptation, Deep Learning

Abstract
Brain computer interfaces are useful devices that can partially restore communication for severely compromised patients. Although advances in deep learning have significantly improved brain pattern recognition, a large amount of data is required to train these deep architectures. In recent years, the inner speech paradigm has drawn much attention, as it can potentially allow natural control of different devices. However, as of the date of this publication, only a small amount of data is available for this paradigm. In this work we show that, by means of transfer learning and domain adaptation methods, it is possible to make the most of the scarce data, enhancing the training process of a deep learning architecture used in brain computer interfaces.
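The abstract does not detail the method, but the general idea can be sketched as follows: a convolutional network is first trained on a larger EEG dataset and is then adapted to the small inner speech set by fine-tuning only part of its parameters. The toy architecture, layer sizes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Minimal transfer-learning sketch (illustrative only; the actual architecture
# and datasets used in the paper are not described on this page).
import torch
import torch.nn as nn

class SmallEEGCNN(nn.Module):
    """Toy 1-D CNN over EEG (channels x time), standing in for a deep BCI model."""
    def __init__(self, n_channels=128, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=25, padding=12),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.classifier = nn.Linear(32 * 16, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        h = self.features(x).flatten(1)
        return self.classifier(h)

def fine_tune(model, target_loader, epochs=10, lr=1e-4):
    """Adapt a source-pretrained model to scarce target (inner speech) data:
    freeze the feature extractor and retrain only the classification head."""
    for p in model.features.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in target_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```

In this sketch the model would first be trained end to end on a larger source EEG dataset; `fine_tune` then reuses the learned feature extractor and updates only the final layer on the small inner speech set, which is one common way to exploit transfer learning when target data are scarce.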
License
Copyright (c) 2022 Luciano Ivan Zablocki, Agustín Nicolás Mendoza, Nicolás Nieto

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Under these terms, the material may be shared (copied and redistributed in any medium or format) and adapted (remixed, transformed, and built upon to create new work), provided that a) the authorship and the original source of publication are cited (journal and URL of the work), b) the material is not used for commercial purposes, and c) the same license terms are maintained.