Journal: | Computación y Sistemas |
Database: | |
System number: | 000560802 |
ISSN: | 1405-5546 |
Authors: | Laatar, Rim1 Aloulou, Chafik1 Belguith, Lamia Hadrich1 |
Institutions: | 1Université de Sfax, Sfax, Tunisia |
Year: | 2023 |
Period: | Apr-Jun |
Volume: | 27 |
Issue: | 2 |
Pages: | 379-388 |
Country: | Mexico |
Language: | English |
Document type: | Article |
English abstract: | Word Sense Disambiguation (WSD) aims to determine the correct meaning of words that can have multiple interpretations. Recently, contextualized word embeddings, which give different representations of the same word in different contexts, have been shown to have a tremendous impact on several natural language processing tasks, including question answering, semantic analysis, and even word sense disambiguation. This paper reports on experiments with different stacks of word embeddings and evaluates their usefulness for Arabic word sense disambiguation. Word embeddings remain at the core of NLP development, with several key language models, such as FastText, ELMo, BERT, and Flair, introduced in the last few years. It is worth pointing out that the Arabic language can be divided into three major historical periods: old Arabic, middle-age Arabic, and modern Arabic. Modern Arabic, in particular, has attracted the greatest attention from researchers. The main goal of our work is to disambiguate Arabic words according to the historical period in which they appeared. To perform the WSD task, we propose a method that deploys stacked embedding models. The experimental evaluation demonstrates that stacked embeddings outperform previously proposed methods for Arabic WSD. (An illustrative embedding-stacking sketch follows this record.) |
Disciplines: | Computer science, Literature and linguistics |
Keywords: | Artificial intelligence, Applied linguistics |
Full text: | Full text (View HTML) Full text (View PDF) |
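
The abstract describes stacking word embeddings (FastText, ELMo, BERT, Flair) to obtain context-sensitive representations for Arabic WSD. The sketch below is only an illustration of that general idea using the Flair library's StackedEmbeddings: the specific models (Arabic FastText, multilingual BERT) and the nearest-gloss cosine-similarity step are assumptions for demonstration, not the authors' exact configuration.

```python
# Illustrative sketch: stack a static and a contextual embedding model with Flair,
# then disambiguate a target word by comparing its in-context vector to
# mean-pooled embeddings of candidate sense glosses (an assumed matching step).
import torch
from flair.data import Sentence
from flair.embeddings import (
    WordEmbeddings,
    TransformerWordEmbeddings,
    StackedEmbeddings,
)

# Stack Arabic FastText vectors with contextual multilingual BERT layers
# (illustrative model choices, not those reported in the paper).
stacked = StackedEmbeddings([
    WordEmbeddings("ar"),
    TransformerWordEmbeddings("bert-base-multilingual-cased"),
])


def embed_token(text: str, index: int) -> torch.Tensor:
    """Return the stacked embedding of the token at `index` in `text`."""
    sentence = Sentence(text)
    stacked.embed(sentence)
    return sentence[index].embedding


def disambiguate(context: str, index: int, sense_glosses: dict[str, str]) -> str:
    """Pick the sense whose gloss embedding is closest to the word in context."""
    target = embed_token(context, index)
    scores = {}
    for sense, gloss in sense_glosses.items():
        gloss_sentence = Sentence(gloss)
        stacked.embed(gloss_sentence)
        # Mean-pool the gloss tokens into a single vector.
        gloss_vec = torch.stack([t.embedding for t in gloss_sentence]).mean(dim=0)
        scores[sense] = torch.cosine_similarity(target, gloss_vec, dim=0).item()
    return max(scores, key=scores.get)
```

A call such as `disambiguate(context_sentence, target_index, glosses)` would return the highest-scoring sense label; the gloss dictionary, like the model choices, stands in for whatever sense inventory (e.g. period-specific Arabic dictionaries) the method is applied to.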