Journal: | Computación y sistemas |
Database: | |
System number: | 000560373 |
ISSN: | 1405-5546 |
Authors: | Vu, Tu1; Bui, Xuan2; Than, Khoat1; Ichise, Ryutaro3 |
Institutions: | 1Hanoi University of Science and Technology, Hanoi, Vietnam; 2Thai Nguyen University of Information and Communication Technology, Thai Nguyen, Vietnam; 3National Institute of Informatics, Tokyo, Japan |
Year: | 2018 |
Period: | Oct-Dec |
Volume: | 22 |
Issue: | 4 |
Pages: | 1317-1327 |
Country: | Mexico |
Language: | English |
Document type: | Article |
English abstract: | The estimation of the posterior distribution is the core problem in topic models; unfortunately, it is intractable. Approximation and sampling methods have been proposed to solve it, but most of them lack any clear theoretical guarantee on either quality or rate of convergence. Online Maximum a Posteriori Estimation (OPE) is an alternative approach with explicit guarantees on quality and convergence rate, in which we cast the estimation of the posterior distribution as a non-convex optimization problem. In this paper, we propose a more general and flexible version of OPE, namely Generalized Online Maximum a Posteriori Estimation (G-OPE), which not only enhances the flexibility of OPE in different real-world situations but also preserves the key theoretical advantages of OPE over state-of-the-art methods. We employ G-OPE to infer individual documents within large text corpora. The experimental and theoretical results show that our new approach performs better than OPE and other state-of-the-art methods. |
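The abstract describes casting posterior estimation in topic models as a non-convex optimization problem solved by an online MAP scheme. The exact G-OPE update is given in the paper itself; below is only a minimal, hedged sketch of a generic OPE-style stochastic Frank-Wolfe iteration, assuming the standard LDA MAP objective f(θ) = Σ_j d_j log(θ·β_·j) + (α−1) Σ_k log θ_k maximized over the topic simplex. The function name `ope_infer` and the 1/(t+1) step size are illustrative choices, not the paper's specification.

```python
import numpy as np

def ope_infer(d, beta, alpha=0.1, iters=100, seed=0):
    """Sketch of OPE-style MAP estimation of a document's topic mixture theta.

    Assumed objective (standard LDA MAP, not necessarily the paper's exact form):
      f(theta) = sum_j d_j * log(theta @ beta[:, j]) + (alpha - 1) * sum_k log theta_k,
    maximized over the simplex. alpha < 1 makes the problem non-convex.
    d: word-count vector of shape (V,); beta: topic-word matrix (K, V), rows sum to 1.
    """
    rng = np.random.default_rng(seed)
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)       # start at the simplex centre
    counts = np.zeros(2)              # how often each objective part was drawn
    for t in range(1, iters + 1):
        pick = rng.integers(2)        # uniformly pick likelihood or prior part
        counts[pick] += 1
        # gradients of the two parts at the current theta
        g_lik = beta @ (d / (theta @ beta))   # likelihood part
        g_pri = (alpha - 1.0) / theta         # log-prior part
        # gradient of the running average of the parts drawn so far
        grad = (counts[0] * g_lik + counts[1] * g_pri) / t
        vertex = np.argmax(grad)      # Frank-Wolfe linear subproblem on the simplex
        step = 1.0 / (t + 1)          # 1/(t+1) keeps theta strictly interior
        theta = theta * (1.0 - step)
        theta[vertex] += step
    return theta
```

Because each update mixes the current point with a simplex vertex, θ remains a valid probability vector at every iteration, which is why Frank-Wolfe-style steps suit this constrained problem.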
Disciplines: | Computer science |
Keywords: | Artificial intelligence |
Keywords (English): | Topic models, Posterior inference, Online MAP estimation, Large-scale learning, Non-convex optimization, Artificial intelligence |
Full text: | Full text (View HTML) Full text (View PDF) |