Journal: | Journal of Applied Research and Technology |
Database: | PERIÓDICA |
System number: | 000424107 |
ISSN: | 1665-6423 |
Authors: | Woodward, Alexander (1); Chan, Yuk Hin (2); Gong, Rui (2); Nguyen, Minh (2); Gee, Trevor (2); Delmas, Patrice (2); Gimel’farb, Georgy (2); Marquez Flores, Jorge Alberto (3) |
Institutions: | (1) University of Tokyo, The Graduate School of Arts and Sciences, Tokyo, Japan; (2) University of Auckland, Department of Computer Science, Auckland, Central Auckland, New Zealand; (3) Universidad Nacional Autónoma de México, Centro de Ciencias Aplicadas y Desarrollo Tecnológico, Mexico City, Mexico |
Year: | 2017 |
Season: | Feb |
Volume: | 15 |
Number: | 1 |
Country: | Mexico |
Language: | English |
Document type: | Article |
Approach: | Applied, descriptive |
English abstract: | This work presents a robust, low-cost framework for real-time marker-based 3-D human expression modeling using off-the-shelf stereo web-cameras and inexpensive adhesive markers applied to the face. The system has low computational requirements, runs on standard hardware, and is portable with minimal set-up time and no training. It does not require a controlled lab environment (lighting or set-up) and is robust under varying conditions, e.g., illumination, facial hair, or skin-tone variation. Stereo web-cameras perform 3-D marker tracking to obtain rigid head motion and the non-rigid motion of expressions. Tracked markers are then mapped onto a 3-D face model with a virtual muscle animation system. Muscle inverse kinematics update muscle contraction parameters based on marker motion in order to create a virtual character’s expression performance. The parametrization of the muscle-based animation encodes a face performance with little bandwidth. Additionally, a radial basis function mapping approach was used to easily remap motion capture data to any face model. In this way the automated creation of a personalized 3-D face model and animation system from 3-D data is elucidated. The expressive power of the system and its ability to recognize new expressions was evaluated on a group of test subjects with respect to the six universally recognized facial expressions. Results show that the use of an abstract muscle definition reduces the effect of potential noise in the motion capture data and allows the seamless animation of any virtual anthropomorphic face model with data acquired through human face performance. |
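The radial basis function (RBF) remapping mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a Gaussian kernel and hypothetical landmark arrays `src` (marker positions on the capture subject) and `dst` (corresponding positions on the target face model); the fitted map then transfers any captured point into the target model's space.

```python
import numpy as np

def rbf_weights(src, dst, eps=1.0):
    """Fit Gaussian-RBF interpolation weights mapping src landmarks to dst."""
    # Pairwise distances between source landmarks
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = np.exp(-(eps * d) ** 2)          # kernel matrix
    return np.linalg.solve(K, dst)       # weights W so that K @ W = dst

def rbf_map(points, src, W, eps=1.0):
    """Map arbitrary 3-D points through the fitted RBF deformation."""
    d = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ W

# Toy example: four landmarks mapped onto a uniformly scaled target face
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dst = 2.0 * src
W = rbf_weights(src, dst)
mapped = rbf_map(src, src, W)            # landmarks map exactly onto dst
```

Because RBF interpolation is exact at the landmark points, any marker configuration defined on one face is reproduced on the target model, while points in between are deformed smoothly.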
Disciplines: | Computer science |
Keywords: | Data processing, Facial expression, Gesture recognition, Motion markers, Stereo vision |
Full text: | Full text (View HTML) Full text (View PDF) |