Journal: | Computación y Sistemas |
Database: | |
System number: | 000560373 |
ISSN: | 1405-5546 |
Authors: | Vu, Tu (1); Bui, Xuan (1); Than, Khoat (1); Ichise, Ryutaro (3) |
Institutions: | 1 Hanoi University of Science and Technology, Hanoi, Vietnam; 2 Thai Nguyen University of Information and Communication Technology, Vietnam; 3 National Institute of Informatics, Tokyo, Japan |
Year: | 2018 |
Season: | Oct-Dec |
Volume: | 22 |
Number: | 4 |
Pages: | 1317-1327 |
Country: | Mexico |
Language: | English |
English abstract | Estimating the posterior distribution is the core problem in topic models, but it is unfortunately intractable. Approximation and sampling methods have been proposed to solve it; however, most of them lack a clear theoretical guarantee on either quality or rate of convergence. Online Maximum a Posteriori Estimation (OPE) is an alternative approach with explicit guarantees on quality and convergence rate, in which the estimation of the posterior distribution is cast as a non-convex optimization problem. In this paper, we propose a more general and flexible version of OPE, namely Generalized Online Maximum a Posteriori Estimation (G-OPE), which not only enhances the flexibility of OPE in different real-world situations but also preserves the key theoretical advantages of OPE over state-of-the-art methods. We employ G-OPE to infer the posterior for individual documents within large text corpora. The experimental and theoretical results show that our new approach performs better than OPE and other state-of-the-art methods. |
Keyword: | Topic models, Posterior inference, Online MAP estimation, Large-scale learning, Non-convex optimization |
Full text: | Full text (View HTML); Full text (View PDF) |
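The abstract casts per-document posterior inference as non-convex optimization over the topic simplex. The sketch below illustrates the general OPE family of algorithms it refers to, assuming the standard OPE formulation (stochastic Frank-Wolfe steps that randomly alternate between the likelihood and prior parts of the MAP objective); the Bernoulli pick probability `p` stands in for the kind of generalization G-OPE introduces. Function names and parameters here are hypothetical illustrations, not taken from the paper itself.

```python
import numpy as np

def g_ope_infer(doc_counts, beta, alpha=0.1, p=0.5, iters=200, seed=0):
    """Sketch of OPE/G-OPE-style posterior inference for one document.

    Approximately maximizes, over the probability simplex,
        f(theta) = sum_j d_j * log(theta @ beta[:, j])
                 + (alpha - 1) * sum_k log(theta_k),
    using stochastic Frank-Wolfe steps. At each step the likelihood
    part is sampled with probability p (p = 0.5 mimics plain OPE's
    uniform choice; a tunable p is our assumption about how G-OPE
    generalizes it). Assumes beta has strictly positive entries.
    """
    rng = np.random.default_rng(seed)
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)      # start at the simplex center
    picks_lik = 0.0                  # how often the likelihood part was picked
    # start at t = 2 so theta stays strictly positive (log/1/theta are finite)
    for t in range(2, iters + 2):
        picks_lik += rng.random() < p
        w_lik = picks_lik / (t - 1)  # empirical weight of the likelihood part
        w_pri = 1.0 - w_lik
        # gradient of the averaged stochastic surrogate F_t at theta
        grad = (w_lik * beta @ (doc_counts / (theta @ beta))
                + w_pri * (alpha - 1.0) / theta)
        i = int(np.argmax(grad))     # Frank-Wolfe: best simplex vertex e_i
        theta *= 1.0 - 1.0 / t       # theta += (e_i - theta) / t
        theta[i] += 1.0 / t
    return theta
```

Because each update is a convex combination of the current point and a simplex vertex, the iterate remains a valid topic-proportion vector at every step, which is what makes this style of update attractive for streaming inference over large corpora.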