Journal: | Computación y Sistemas |
Database: | |
System number: | 000560436 |
ISSN: | 1405-5546 |
Authors: | Ali Batita, Mohamed¹; Ayadi, Rami²; Zrigui, Mounir¹ |
Institutions: | ¹University of Monastir, Faculty of Sciences, Monastir, Tunisia; ²University of Gabes, Higher Institute of Computer of Medenine, Gabes, Tunisia |
Year: | 2019 |
Period: | Jul-Sep |
Volume: | 23 |
Issue: | 3 |
Pages: | 935-942 |
Country: | Mexico |
Language: | English |
Document type: | Article |
English abstract: | Arabic WordNet is an important resource for many natural language processing tasks. However, it suffers from many problems. In this paper, we address the problem of unseen relationships between words in Arabic WordNet. More precisely, we focus on learning new relationships 'automatically' in Arabic WordNet from existing ones. Using the Neural Tensor Network, we investigate how it can be an advantageous technique for filling the relationship gaps between Arabic WordNet words. With minimal resources, this model delivers meaningful results. The critical component is how to represent the entities of Arabic WordNet. For that, we use AraVec, a set of pre-trained distributed word representations for the Arabic language. We show how much it helps to use these vectors for initialization. We evaluated the model using a number of tests, which reveal that semantically initialized vectors provide considerably greater accuracy than randomly initialized ones. |
Disciplines: | Computer science |
Subject terms: | Artificial intelligence |
Keywords: | Arabic WordNet, Natural language processing, Neural tensor network, AraVec, Word representation, Word embedding, Artificial intelligence |
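The abstract describes scoring candidate relationships between Arabic WordNet entities with a Neural Tensor Network initialized from AraVec embeddings. A minimal NumPy sketch of the standard NTN score, g(e1, R, e2) = uᵀ tanh(e1ᵀ W e2 + V[e1; e2] + b), follows; all dimensions, variable names, and the use of `tanh` here are illustrative assumptions from the general NTN formulation, not details taken from the paper.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Neural Tensor Network score for a candidate triple (e1, R, e2).

    e1, e2 : (d,)      entity embeddings (e.g. pre-trained AraVec vectors)
    W      : (d, d, k) relation-specific bilinear tensor (k slices)
    V      : (k, 2d)   weights of the standard linear layer
    b      : (k,)      bias
    u      : (k,)      output weights combining the k hidden units
    """
    # Bilinear term: e1^T W[:, :, m] e2 for each tensor slice m.
    bilinear = np.einsum('i,ijk,j->k', e1, W, e2)
    # Linear term over the concatenated entity pair.
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))
```

A higher score indicates the model considers the relationship more plausible; in the setting the abstract sketches, initializing `e1` and `e2` from AraVec rather than at random is what yields the reported accuracy gains.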