Journal: | Computación y sistemas |
Database: | PERIÓDICA |
System number: | 000423289 |
ISSN: | 1405-5546 |
Authors: | Verma, Rakesh¹; Lee, Daniel¹ |
Institutions: | ¹University of Houston, Computer Science Department, Houston, Texas, United States of America |
Year: | 2017 |
Period: | Oct-Dec |
Volume: | 21 |
Issue: | 4 |
Country: | Mexico |
Language: | English |
Document type: | Article |
Approach: | Applied, descriptive |
English abstract: | Due to its promise to alleviate information overload, text summarization has attracted the attention of many researchers. However, it has remained a serious challenge. Here, we first prove empirical limits on the recall (and F1-scores) of extractive summarizers on the DUC datasets under ROUGE evaluation for both the single-document and multi-document summarization tasks. Next, we define the concept of compressibility of a document and present a new model of summarization, which generalizes existing models in the literature and integrates several dimensions of the summarization problem, viz., abstractive versus extractive, single versus multi-document, and syntactic versus semantic. Finally, we examine some new and some existing single-document summarization algorithms in a single framework and compare them with state-of-the-art summarizers on DUC data. |
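For context on the evaluation metric the abstract refers to, ROUGE-1 recall measures the fraction of reference-summary unigrams that a candidate summary covers. The following is a minimal illustrative sketch of that idea only, not the authors' code or the official ROUGE toolkit:

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of reference unigrams also present in the candidate
    (clipped counts, as in standard ROUGE-N recall)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clip each word's count by its count in the candidate, then sum.
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0

# Example: candidate covers 3 of the 6 reference tokens -> recall 0.5
print(rouge1_recall("the cat sat", "the cat sat on the mat"))
```

An empirical upper bound on extractive recall, as studied in the article, would come from scoring the best possible sentence selection from a source document against the reference summaries with a metric of this kind.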
Disciplines: | Computer science, Literature and linguistics |
Keywords (Spanish): | Lingüística aplicada, Procesamiento de lenguaje natural, Resumen de texto automático, Integración extractiva, Heurística |
Keywords: | Applied linguistics, Natural language processing, Automatic summarization, Extractive summarization, Heuristics |
Full text: | Full text (view HTML); Full text (view PDF) |