Modeling Multimodal Multitasking in a Smart House



Document title: Modeling Multimodal Multitasking in a Smart House
Journal: Polibits
Database: PERIÓDICA
System number: 000368178
ISSN: 1870-9044
Authors: 1, 1, 1, 1
Institutions: 1 Universidad de Sevilla, Sevilla, Spain
Year:
Period: Jan-Jun
Issue: 39
Country: Mexico
Language: English
Document type: Article
Approach: Experimental, applied
English abstract: This paper belongs to an ongoing series of papers presented at different conferences illustrating the results obtained from the analysis of the MIMUS corpus. This corpus is the result of a number of Wizard-of-Oz (WoZ) experiments conducted at the University of Seville as part of the TALK Project. The main objective of the MIMUS corpus was to gather information about different users and their performance, preferences and usage of a multimodal multilingual natural dialogue system in the Smart Home scenario. The focus group is composed of wheelchair-bound users. In previous papers, the corpus and all relevant information related to it have been analyzed in depth. In this paper, we focus on multimodal multitasking during the experiments, that is, on modeling how users may perform more than one task in parallel. These results may help us appreciate the importance of discriminating complementary from independent simultaneous events in multimodal systems. This gains more relevance when we take into account the likelihood of the co-occurrence of these events, and the fact that humans tend to multitask when they are sufficiently comfortable with the tools they are handling.
Disciplines: Computer science
Keywords: Computer science,
Data processing,
Smart house,
Multitasking,
Multimodal experiments
Full text: Available (see HTML)