Modeling Multimodal Multitasking in a Smart House



Document title: Modeling Multimodal Multitasking in a Smart House
Journal: Polibits
Database: PERIÓDICA
System number: 000368178
ISSN: 1870-9044
Authors: 1, 1, 1, 1
Institutions: 1 Universidad de Sevilla, Sevilla, Spain
Year:
Season: Jan-Jun
Number: 39
Country: Mexico
Language: English
Document type: Article
Approach: Experimental, applied
English abstract: This paper belongs to an ongoing series of papers, presented at different conferences, that illustrate the results obtained from the analysis of the MIMUS corpus. This corpus is the result of a number of WoZ experiments conducted at the University of Seville as part of the TALK Project. The main objective of the MIMUS corpus was to gather information about different users and their performance, preferences, and usage of a multimodal multilingual natural dialogue system in the Smart Home scenario. The focus group is composed of wheelchair-bound users. In previous papers, the corpus and all relevant information related to it have been analyzed in depth. In this paper, we focus on multimodal multitasking during the experiments, that is, on modeling how users may perform more than one task in parallel. These results may help us envision the importance of discriminating complementary vs. independent simultaneous events in multimodal systems. This gains more relevance when we take into account the likelihood of the co-occurrence of these events, and the fact that humans tend to multitask when they are sufficiently comfortable with the tools they are handling.
Disciplines: Computer science
Keywords: Computer science, Data processing, Smart house, Multitasking, Multimodal experiments
Full text: Available (HTML)