Powered by OpenAIRE graph
Publication · Conference object · Article · Preprint · 2018 · Embargo end date: 01 Jan 2018

Guided Neural Language Generation for Abstractive Summarization using Abstract Meaning Representation

Hardy; Andreas Vlachos
Open Access

Recent work on abstractive summarization has made progress with neural encoder-decoder architectures. However, such models often struggle because they lack explicit semantic modeling of the source document and its summary. In this paper, we extend previous work on abstractive summarization using Abstract Meaning Representation (AMR) with a neural language generation stage, which we guide using the source document. We demonstrate that this guidance improves summarization results by 7.4 and 10.5 points in ROUGE-2, using gold-standard AMR parses and parses obtained from an off-the-shelf parser, respectively. We also find that summarization performance using the latter is 2 ROUGE-2 points higher than that of a well-established neural encoder-decoder approach trained on a larger dataset. Code is available at \url{}
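The abstract reports gains measured in ROUGE-2, which scores the bigram overlap between a generated summary and a reference summary. As a rough illustration only (this is not the authors' evaluation code, which would use the official ROUGE toolkit with stemming and multiple references), a minimal single-reference ROUGE-2 F1 can be sketched as:

```python
from collections import Counter

def bigrams(tokens):
    """Return a multiset of adjacent token pairs."""
    return Counter(zip(tokens, tokens[1:]))

def rouge2_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-2 F1: clipped bigram overlap between two summaries.

    Whitespace tokenization, single reference, no stemming -- a sketch
    of the metric, not a replacement for the official scorer.
    """
    cand = bigrams(candidate.split())
    ref = bigrams(reference.split())
    if not cand or not ref:
        return 0.0
    # Counter intersection clips each bigram's count at the smaller side.
    overlap = sum((cand & ref).values())
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge2_f1("the cat sat on the mat", "the cat sat on a mat")` shares three of five bigrams with its reference, giving precision and recall of 0.6 each.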

Comment: Accepted at EMNLP 2018

Subjects by Vocabulary

Microsoft Academic Graph classification: Meaning (linguistics); Artificial intelligence; Parsing; Representation (mathematics); Computer science; Automatic summarization; Natural language processing


Computation and Language (cs.CL), FOS: Computer and information sciences, Computer Science - Computation and Language

Funded by
Scalable Understanding of Multilingual Media
  • Funder: European Commission (EC)
  • Project Code: 688139
  • Funding stream: H2020 | RIA
Related to Research communities
Digital Humanities and Cultural Heritage
Download from
Conference object
License: cc-by
Data sources: UnpayWall