Advanced Search: Digital Humanities and Cultural Heritage > Publications > Research software > Other research products > Article > European Commission > EC | H2020 > TransModal > EU
Publication (Article, 2021, Netherlands)
Publisher: MIT Press - Journals
Funded by: EC | SUMMA; EC | TransModal
Authors: Liu, J.; Cohen, S.B.; Lapata, M.; Bos, Johan
Abstract: We consider the task of crosslingual semantic parsing in the style of Discourse Representation Theory (DRT), where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and use it to obtain semantic resources in multiple languages following two learning schemes. The Many-to-One approach translates non-English text to English and then runs a relatively accurate English parser on the translated text, while the One-to-Many approach translates gold-standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.
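The two transfer schemes can be sketched as a simple pipeline. This is an illustrative toy, not the authors' implementation: `translate`, `english_parser`, and the DRS strings below are invented stand-ins for a real MT system and DRT parser.

```python
def translate(text, src, tgt):
    # Stand-in for a machine translation system (hypothetical).
    return f"<{src}->{tgt}> {text}"

def english_parser(text):
    # Stand-in for the relatively accurate English DRT parser.
    return {"input": text, "meaning": "DRS(...)"}

def many_to_one(text, src_lang):
    """Translate non-English text to English, then parse it."""
    return english_parser(translate(text, src_lang, "en"))

def one_to_many(gold_english_pairs, tgt_lang):
    """Translate gold English sentences, carrying their meanings over to
    create silver training data for a target-language parser."""
    return [(translate(sent, "en", tgt_lang), meaning)
            for sent, meaning in gold_english_pairs]

silver = one_to_many([("Tom sleeps.", "DRS(sleep(tom))")], "de")
```

In the real setup, the silver pairs produced by `one_to_many` would then train one parser per target language.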
Source: Computational Linguistics (other literature type / article, 2021, peer-reviewed; license: CC BY-NC-ND)
Access route: gold. Citations: 1. Popularity: average; influence: average; impulse: average (powered by BIP!)
For further information contact us at helpdesk@openaire.eu
Publication (Article / conference object, 2021, United Kingdom)
Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | TransModal; UKRI | UKRI Centre for Doctoral Training in Natural Language Processing
Authors: Hosking, Tom; Lapata, Mirella
Abstract: We propose a method for generating paraphrases of English questions that retain the original intent but use a different surface form. Our model combines a careful choice of training objective with a principled information bottleneck to induce a latent encoding space that disentangles meaning and form. We train an encoder-decoder model to reconstruct a question from a paraphrase with the same meaning and an exemplar with the same surface form, leading to separated encoding spaces. We use a Vector-Quantized Variational Autoencoder to represent the surface form as a set of discrete latent variables, allowing us to use a classifier to select a different surface form at test time. Crucially, our method does not require access to an external source of target exemplars. Extensive experiments and a human evaluation show that we are able to generate paraphrases with a better tradeoff between semantic preservation and syntactic novelty than previous methods. (Comment: ACL 2021)
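The discrete surface-form codes rest on vector quantization: a continuous encoding is snapped to its nearest codebook vector. A minimal sketch, with an invented codebook size rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 discrete codes of dimension 4 (illustrative)

def quantize(z):
    """Return (index, vector) of the codebook entry nearest to encoding z."""
    dists = np.linalg.norm(codebook - z, axis=1)   # distance to every code
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

idx, code = quantize(rng.normal(size=4))
```

The index is the discrete latent variable; a classifier over such indices can then pick a different surface form at test time.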
Sources: arXiv.org e-Print Archive (other literature type / preprint, 2021); Edinburgh Research Explorer (contribution for newspaper or weekly magazine, 2021)
Access routes: green, hybrid. Citations: 5. Popularity: top 10%; influence: average; impulse: average (powered by BIP!)
Publication (Article, 2021)
Publisher: MIT Press - Journals
Funded by: EC | TransModal; UKRI | UKRI Centre for Doctoral Training in Natural Language Processing
Authors: Jain, Parag; Lapata, Mirella
Abstract: We present a memory-based model for context-dependent semantic parsing. Previous approaches focus on enabling the decoder to copy or modify the parse from the previous utterance, assuming there is a dependency between the current and previous parses. In this work, we propose to represent contextual information using an external memory. We learn a context memory controller that manages the memory by maintaining the cumulative meaning of sequential user utterances. We evaluate our approach on three semantic parsing benchmarks. Experimental results show that our model can better process context-dependent information and demonstrates improved performance without using task-specific decoders.
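The idea of an external memory accumulating utterance meaning can be sketched with a toy slot memory. The similarity-based, gated update below is an illustrative stand-in for the paper's learned controller:

```python
import numpy as np

class ContextMemory:
    """Toy external memory accumulating utterance meaning across turns."""

    def __init__(self, slots, dim):
        self.memory = np.zeros((slots, dim))

    def update(self, utterance_vec, gate=0.5):
        # Write to the slot most similar to the new utterance, blending
        # old and new content (a simple cumulative update rule).
        scores = self.memory @ utterance_vec
        slot = int(np.argmax(scores))
        self.memory[slot] = gate * self.memory[slot] + (1 - gate) * utterance_vec
        return slot

mem = ContextMemory(slots=4, dim=3)
mem.update(np.array([1.0, 0.0, 0.0]))   # first utterance
mem.update(np.array([0.0, 1.0, 0.0]))   # second utterance, blended in
```

A decoder would then attend over `mem.memory` instead of copying the previous parse.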
Sources: arXiv.org e-Print Archive (other literature type / preprint, 2021); Transactions of the Association for Computational Linguistics (article, 2021, peer-reviewed; license: CC BY; via Crossref and UnpayWall); https://doi.org/10.48550/arxiv... (article, 2021; license: arXiv non-exclusive distribution; via Datacite)
Access routes: green, gold. Citations: 2. Popularity: average; influence: average; impulse: average (powered by BIP!)
Publication (Conference object / article, 2021)
Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | TransModal; EC | BroadSem; NWO | Scaling Semantic Parsing to Unrestricted Domains
Authors: Wang, Bailin; Lapata, Mirella; Titov, Ivan
Abstract: Semantic parsing aims at translating natural language (NL) utterances into machine-interpretable programs, which can be executed against a real-world environment. The expensive annotation of utterance-program pairs has long been acknowledged as a major bottleneck for the deployment of contemporary neural models to real-life applications. In this work, we focus on the task of semi-supervised learning, where a limited amount of annotated data is available together with many unlabeled NL utterances. Based on the observation that programs corresponding to NL utterances must always be executable, we propose to encourage a parser to generate executable programs for unlabeled utterances. Due to the large search space of executable programs, conventional methods that use approximations based on beam search, such as self-training and top-k marginal likelihood training, do not perform as well. Instead, we view the problem of learning from executions from the perspective of posterior regularization and propose a set of new training objectives. Experimental results on Overnight and GeoQuery show that our new objectives outperform conventional methods, bridging the gap between semi-supervised and supervised learning. (Comment: NAACL 2021 camera-ready)
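The core signal, keeping only candidate programs that execute without error, can be sketched with a toy arithmetic "executor". The beam and programs below are invented for illustration and are much simpler than the paper's setting:

```python
def executable(program, env):
    """Return True if the toy program evaluates without error in env."""
    try:
        eval(program, {"__builtins__": {}}, env)   # toy executor
        return True
    except Exception:
        return False

def filter_beam(candidates, env):
    """Keep only executable candidates as targets for self-training."""
    return [p for p in candidates if executable(p, env)]

beam = ["price + 2", "price +", "undefined_var * 3"]
print(filter_beam(beam, {"price": 10}))   # only the first survives
```

The paper goes beyond this hard filter, weighting executable programs through posterior-regularization objectives rather than simple beam pruning.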
Sources: arXiv.org e-Print Archive (other literature type / preprint, 2021)
Access routes: green, hybrid. Citations: 0. Popularity: average; influence: average; impulse: average (powered by BIP!)
Publication (Article / conference object, 2020, United Kingdom)
Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | TransModal
Authors: Papalampidi, Pinelopi; Keller, Frank; Frermann, Lea; Lapata, Mirella
Abstract: Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront. As a result, such models are position-biased and often perform a smart selection of sentences from the beginning of the document. When summarizing long narratives, which have complex structure and present information piecemeal, simple position heuristics are not sufficient. In this paper, we propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models. We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays (i.e., extract an optimal sequence of scenes). Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode and improve summarization performance over general extractive algorithms, leading to more complete and diverse summaries. (Comment: accepted at ACL 2020)
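One way to use inferred turning points for extraction is to pick the best-scoring scene near each one. A hedged sketch with made-up scene scores and positions, not the paper's latent inference procedure:

```python
def extract_summary(scene_scores, turning_points, window=1):
    """Pick the top-scoring scene within a window around each turning point."""
    chosen = []
    for tp in turning_points:
        candidates = range(max(0, tp - window),
                           min(len(scene_scores), tp + window + 1))
        best = max(candidates, key=lambda i: scene_scores[i])
        chosen.append(best)
    return sorted(set(chosen))

scores = [0.1, 0.9, 0.2, 0.4, 0.8, 0.3]          # invented scene scores
print(extract_summary(scores, turning_points=[1, 4]))
```

In the paper the turning-point positions themselves are latent and learned jointly with the summarizer; here they are simply given.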
Sources: Edinburgh Research Explorer (contribution for newspaper or weekly magazine, 2020); arXiv.org e-Print Archive (other literature type / preprint, 2020); https://doi.org/10.18653/v1/20... (conference object, 2020, peer-reviewed); https://doi.org/10.48550/arxiv... (article, 2020; license: arXiv non-exclusive distribution; via Datacite)
Access routes: green, hybrid. Citations: 6. Popularity: top 10%; influence: average; impulse: average (powered by BIP!)
Publication (Conference object / article, 2020)
Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | SUMMA; EC | TransModal
Authors: Liu, Jiangming; Gardner, Matt; Cohen, Shay B.; Lapata, Mirella
Abstract: Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box transformers. We present a middle ground between these two extremes: a compositional model reminiscent of neural module networks that can perform chained logical reasoning. This model first finds relevant sentences in the context and then chains them together using neural modules. Our model gives significant performance improvements (up to 29% relative error reduction when combined with a reranker) on ROPES, a recently introduced complex reasoning dataset. (Comment: accepted by EMNLP 2020)
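Chaining module outputs can be illustrated with a toy "find the effect" lookup applied repeatedly. The facts and the module are invented; the real model operates over learned sentence representations rather than symbols:

```python
facts = {"A": "B", "B": "C"}   # toy causal links: A causes B, B causes C

def chain(start, steps):
    """Apply a 'find the effect' module repeatedly, feeding each
    module's output to the next (compositional chaining)."""
    node = start
    for _ in range(steps):
        node = facts.get(node)
        if node is None:
            break
    return node

print(chain("A", 2))   # two reasoning hops: A -> B -> C, prints C
```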
Sources: https://www.aclweb.o... ; arXiv.org e-Print Archive (other literature type / preprint, 2020); https://doi.org/10.48550/arxiv... (article, 2020; license: arXiv non-exclusive distribution; via Datacite)
Access routes: green, hybrid. Citations: 3. Popularity: top 10%; influence: average; impulse: average (powered by BIP!)
Publication (Article / conference object, 2020)
Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | TransModal
Authors: Sherborne, Tom; Xu, Yumo; Lapata, Mirella
Abstract: Recent progress in semantic parsing scarcely considers languages other than English, but professional translation can be prohibitively expensive. We adapt a semantic parser trained on a single language, such as English, to new languages and multiple domains with minimal annotation. We ask whether machine translation is an adequate substitute for training data, and extend this to investigate bootstrapping using joint training with English, paraphrasing, and multilingual pre-trained models. We develop a Transformer-based parser that combines paraphrases by ensembling attention over multiple encoders, and present new versions of ATIS and Overnight in German and Chinese for evaluation. Experimental results indicate that MT can approximate training data in a new language for accurate parsing when augmented with paraphrasing through multiple MT engines. For cases where MT is inadequate, our approach achieves parsing accuracy within 2% of complete translation while using only 50% of the training data. (Comment: camera-ready for EMNLP 2020 Findings)
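Combining evidence from several MT paraphrases can be sketched as averaging the parser's output distributions. The paper ensembles attention over multiple encoders, so this mean over final distributions is a simplification, with invented numbers:

```python
import numpy as np

def ensemble(distributions):
    """Average per-paraphrase probability distributions over parse actions."""
    return np.stack(distributions).mean(axis=0)

p1 = np.array([0.7, 0.2, 0.1])   # parse-action probs given MT engine 1's paraphrase
p2 = np.array([0.5, 0.4, 0.1])   # ... given MT engine 2's paraphrase
combined = ensemble([p1, p2])    # still a valid distribution
```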
Sources: arXiv.org e-Print Archive (other literature type / preprint, 2020); https://doi.org/10.48550/arxiv... (article, 2020; license: arXiv non-exclusive distribution; via Datacite); https://doi.org/10.18653/v1/20... (conference object, 2020, peer-reviewed)
Access routes: green, hybrid. Citations: 4. Popularity: top 10%; influence: average; impulse: average (powered by BIP!)
Publication (Article / conference object, 2020)
Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | TransModal; EC | BroadSem; NWO | Scaling Semantic Parsing to Unrestricted Domains
Authors: Bražinskas, Arthur; Lapata, Mirella; Titov, Ivan
Abstract: Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents, such as user reviews of a product. The task is practically important and has attracted a lot of attention. However, due to the high cost of summary production, datasets large enough for training supervised models are lacking. Instead, the task has traditionally been approached with extractive methods that learn to select text fragments in an unsupervised or weakly supervised way. Recently, it has been shown that abstractive summaries, potentially more fluent and better at reflecting conflicting information, can also be produced in an unsupervised fashion. However, these models, not being exposed to actual summaries, fail to capture their essential properties. In this work, we show that even a handful of summaries is sufficient to bootstrap generation of summary text with all expected properties, such as writing style, informativeness, fluency, and sentiment preservation. We start by training a conditional Transformer language model to generate a new product review given other available reviews of the product. The model is also conditioned on review properties that are directly related to summaries; the properties are derived from reviews with no manual effort. In the second stage, we fine-tune a plug-in module that learns to predict property values on a handful of summaries. This lets us switch the generator to summarization mode. We show on Amazon and Yelp datasets that our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation. (Comment: EMNLP 2020)
Sources: https://www.aclweb.o... ; arXiv.org e-Print Archive (other literature type / preprint, 2020); https://doi.org/10.18653/v1/20... (conference object, 2020, peer-reviewed); https://doi.org/10.48550/arxiv... (article, 2020; license: arXiv non-exclusive distribution; via Datacite)
Access routes: green, hybrid. Citations: 19. Popularity: top 10%; influence: top 10%; impulse: top 10% (powered by BIP!)
Publication (Conference object / article, 2020)
Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | TransModal
Authors: Amplayo, Reinald Kim; Lapata, Mirella
Abstract: The supervised training of high-capacity models on large datasets containing hundreds of thousands of document-summary pairs is critical to the recent success of deep learning techniques for abstractive summarization. Unfortunately, in most domains (other than news) such training data is not available and cannot be easily sourced. In this paper we enable the use of supervised learning for the setting where only documents are available (e.g., product or business reviews) without ground-truth summaries. We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof which we treat as pseudo-review input. We introduce several linguistically motivated noise generation functions and a summarization model which learns to denoise the input and generate the original review. At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise. Extensive automatic and human evaluation shows that our model brings substantial improvements over both abstractive and extractive baselines. (Comment: ACL 2020)
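The synthetic-pair construction (sample a review as a pseudo-summary, corrupt copies of it as pseudo-input) can be sketched as follows. The single word-drop noise function is illustrative only, where the paper uses several linguistically motivated ones:

```python
import random

def word_drop(text, p, rng):
    """Drop each word with probability p, keeping at least one word."""
    words = text.split()
    kept = [w for w in words if rng.random() > p] or words[:1]
    return " ".join(kept)

def make_pair(reviews, rng, n_noisy=3, p=0.3):
    """Sample a review as the pseudo-summary; noisy copies become input."""
    summary = rng.choice(reviews)
    noisy = [word_drop(summary, p, rng) for _ in range(n_noisy)]
    return noisy, summary

rng = random.Random(0)
inputs, target = make_pair(["great battery life and screen"], rng)
```

A summarizer trained to map `inputs` back to `target` learns to denoise; at test time it receives genuine reviews instead.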
Access routes: Green, hybrid. Citations: 25. Popularity: Top 10%. Influence: Top 10%. Impulse: Top 10%. (Metrics powered by BIP!)
Publication (Article, Conference object), 2020. Publisher: Association for Computational Linguistics (ACL). Funded by: EC | BroadSem; NWO | Scaling Semantic Parsing to Unrestricted Domains; EC | TransModal.
Authors: Bražinskas, Arthur; Lapata, Mirella; Titov, Ivan.
Abstract: Opinion summarization is the task of automatically creating summaries that reflect subjective information expressed in multiple documents, such as product reviews. While most previous work has focused on the extractive setting, i.e., selecting fragments from input reviews to produce a summary, we let the model generate novel sentences and hence produce abstractive summaries. Recent progress in summarization has seen the development of supervised models which rely on large quantities of document-summary pairs. Since such training data is expensive to acquire, we instead consider the unsupervised setting; in other words, we do not use any summaries in training. We define a generative model for a review collection which capitalizes on the intuition that, when generating a new review given a set of other reviews of a product, we should be able to control the "amount of novelty" going into the new review or, equivalently, vary the extent to which it deviates from the input. At test time, when generating summaries, we force the novelty to be minimal and produce a text reflecting consensus opinions. We capture this intuition with a hierarchical variational autoencoder model. Both individual reviews and the products they correspond to are associated with stochastic latent codes, and the review generator ("decoder") has direct access to the text of input reviews through the pointer-generator mechanism.
Experiments on Amazon and Yelp datasets show that setting the review's latent code to its mean at test time allows the model to produce fluent and coherent summaries reflecting common opinions. (ACL 2020)
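The "latent code to its mean" trick above can be illustrated with a toy sampler. This is a sketch of the general idea, not the paper's model: a `novelty` scale of 1 gives ordinary reparameterized sampling, and a scale of 0 collapses the sample onto the mean, which is what the summary-generation mode does (the scalar `novelty` knob and function names are illustrative assumptions):

```python
import random

def sample_latent(mu, sigma, novelty=1.0, rng=None):
    # Reparameterized Gaussian sample z = mu + novelty * sigma * eps.
    # novelty=0.0 returns the mean exactly, minimizing deviation
    # from the consensus of the input reviews.
    rng = rng or random.Random(0)
    return [m + novelty * s * rng.gauss(0.0, 1.0)
            for m, s in zip(mu, sigma)]
```

At training time the model would sample with full novelty; at summarization time the decoder is fed the mean code instead.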
Access routes: Green, hybrid. Citations: 36. Popularity: Top 1%. Influence: Top 10%. Impulse: Top 10%. (Metrics powered by BIP!)
Publication (Article), 2021, Netherlands. Publisher: MIT Press - Journals. Journal: Computational Linguistics. License: CC BY-NC-ND. Funded by: EC | SUMMA; EC | TransModal.
Authors: Liu, J.; Cohen, S.B.; Lapata, M.; Bos, Johan.
Abstract: We consider the task of crosslingual semantic parsing in the style of Discourse Representation Theory (DRT), where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and use it to obtain semantic resources in multiple languages following two learning schemes. The Many-to-One approach translates non-English text to English and then runs a relatively accurate English parser on the translated text, while the One-to-Many approach translates gold-standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.
Access route: gold. Citations: 1. Popularity: Average. Influence: Average. Impulse: Average. (Metrics powered by BIP!)
Publication (Article, Conference object, Contribution for newspaper or weekly magazine), 2021, United Kingdom. Publisher: Association for Computational Linguistics (ACL). Funded by: EC | TransModal; UKRI | UKRI Centre for Doctoral Training in Natural Language Processing.
Authors: Hosking, Tom; Lapata, Mirella.
Abstract: We propose a method for generating paraphrases of English questions that retain the original intent but use a different surface form. Our model combines a careful choice of training objective with a principled information bottleneck to induce a latent encoding space that disentangles meaning and form. We train an encoder-decoder model to reconstruct a question from a paraphrase with the same meaning and an exemplar with the same surface form, leading to separated encoding spaces. We use a Vector-Quantized Variational Autoencoder to represent the surface form as a set of discrete latent variables, allowing us to use a classifier to select a different surface form at test time. Crucially, our method does not require access to an external source of target exemplars. Extensive experiments and a human evaluation show that we are able to generate paraphrases with a better tradeoff between semantic preservation and syntactic novelty compared to previous methods. (ACL 2021)
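The core operation of the Vector-Quantized Variational Autoencoder mentioned in the abstract is a nearest-neighbour lookup: the continuous encoding of surface form is replaced by the closest entry of a discrete codebook. A minimal sketch (the paper uses a set of such latent variables plus straight-through gradient estimation during training, which is omitted here):

```python
def quantize(vector, codebook):
    # Replace a continuous encoding with its nearest codebook entry,
    # yielding a discrete latent variable (the returned index).
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: sqdist(vector, codebook[i]))
    return idx, codebook[idx]
```

Because the surface form becomes a discrete index, a classifier can then pick a *different* index at test time to change the syntactic form while the meaning encoding is left untouched.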
Access routes: Green, hybrid. Citations: 5. Popularity: Top 10%. Influence: Average. Impulse: Average. (Metrics powered by BIP!)
Publication (Article), 2021. Publisher: MIT Press - Journals. Funded by: EC | TransModal; UKRI | UKRI Centre for Doctoral Training in Natural Language Processing.
Authors: Jain, Parag; Lapata, Mirella.
Abstract: We present a memory-based model for context-dependent semantic parsing. Previous approaches focus on enabling the decoder to copy or modify the parse from the previous utterance, assuming there is a dependency between the current and previous parses. In this work, we propose to represent contextual information using an external memory. We learn a context memory controller that manages the memory by maintaining the cumulative meaning of sequential user utterances. We evaluate our approach on three semantic parsing benchmarks. Experimental results show that our model can better process context-dependent information and demonstrates improved performance without using task-specific decoders.
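The external-memory idea above can be sketched as a gated update: the controller blends the running memory with the encoding of the newest utterance, so the memory accumulates the meaning of the dialogue so far. This is a toy illustration under stated assumptions (a single scalar gate standing in for the learned controller), not the paper's architecture:

```python
def update_memory(memory, utterance_vec, gate):
    # Convex blend of old memory and new utterance encoding:
    # gate=0 keeps the memory unchanged, gate=1 overwrites it.
    # A learned controller would predict `gate` from both inputs.
    assert 0.0 <= gate <= 1.0
    return [(1 - gate) * m + gate * u
            for m, u in zip(memory, utterance_vec)]
```

The decoder would then attend over this memory instead of copying from the previous parse, which is what removes the need for task-specific decoders.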
Access routes: Green, gold. Citations: 2. Popularity: Average. Influence: Average. Impulse: Average. (Metrics powered by BIP!)
Publication (Conference object, Article), 2021. Publisher: Association for Computational Linguistics (ACL). Funded by: EC | TransModal; EC | BroadSem; NWO | Scaling Semantic Parsing to Unrestricted Domains.
Authors: Wang, Bailin; Lapata, Mirella; Titov, Ivan.
Abstract: Semantic parsing aims at translating natural language (NL) utterances into machine-interpretable programs, which can be executed against a real-world environment. The expensive annotation of utterance-program pairs has long been acknowledged as a major bottleneck for the deployment of contemporary neural models in real-life applications. In this work, we focus on the task of semi-supervised learning, where a limited amount of annotated data is available together with many unlabeled NL utterances. Based on the observation that programs which correspond to NL utterances must always be executable, we propose to encourage a parser to generate executable programs for unlabeled utterances. Because of the large search space of executable programs, conventional methods that use approximations based on beam search, such as self-training and top-k marginal likelihood training, do not perform as well. Instead, we view the problem of learning from executions from the perspective of posterior regularization and propose a set of new training objectives. Experimental results on Overnight and GeoQuery show that our new objectives outperform conventional methods, bridging the gap between semi-supervised and supervised learning. (NAACL 2021, camera ready)
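The executability signal that drives this line of work can be sketched as a simple filter over beam candidates: only programs that run without error against the environment survive. This is the raw signal only; the paper's contribution is to fold it into posterior-regularization training objectives rather than a hard filter, and `execute` here is a hypothetical stand-in for the real executor:

```python
def executable_candidates(beam, execute):
    # Keep only beam candidates whose program executes without
    # raising; these provide the supervision signal for
    # unlabeled utterances.
    keep = []
    for program in beam:
        try:
            execute(program)
        except Exception:
            continue
        keep.append(program)
    return keep
```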
Access routes: Green, hybrid. Citations: 0. Popularity: Average. Influence: Average. Impulse: Average. (Metrics powered by BIP!)
Publication (Contribution for newspaper or weekly magazine, Article, Conference object), 2020, United Kingdom. Publisher: Association for Computational Linguistics (ACL). Funded by: EC | TransModal.
Authors: Papalampidi, Pinelopi; Keller, Frank; Frermann, Lea; Lapata, Mirella.
Abstract: Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront. As a result, such models are biased toward position and often perform a smart selection of sentences from the beginning of the document. When summarizing long narratives, which have complex structure and present information piecemeal, simple position heuristics are not sufficient. In this paper, we propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models. We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays (i.e., extract an optimal sequence of scenes). Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode and improve summarization performance over general extractive algorithms, leading to more complete and diverse summaries. (ACL 2020)
Access routes: Green, hybrid. Citations: 6. Popularity: Top 10%. Influence: Average. Impulse: Average. (Metrics powered by BIP!)
Publication (Conference object, Article), 2020. Publisher: Association for Computational Linguistics (ACL). Funded by: EC | SUMMA; EC | TransModal.
Authors: Liu, Jiangming; Gardner, Matt; Cohen, Shay B.; Lapata, Mirella.
Abstract: Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box transformers. We present a middle ground between these two extremes: a compositional model reminiscent of neural module networks that can perform chained logical reasoning. This model first finds relevant sentences in the context and then chains them together using neural modules. Our model gives significant performance improvements (up to 29% relative error reduction when combined with a reranker) on ROPES, a recently introduced complex reasoning dataset. (EMNLP 2020)
Access routes: Green, hybrid. Citations: 3. Popularity: Top 10%. Influence: Average. Impulse: Average. (Metrics powered by BIP!)
Publication (Article, Conference object), 2020. Publisher: Association for Computational Linguistics (ACL). Funded by: EC | TransModal.
Authors: Sherborne, Tom; Xu, Yumo; Lapata, Mirella.
Abstract: Recent progress in semantic parsing scarcely considers languages other than English, but professional translation can be prohibitively expensive. We adapt a semantic parser trained on a single language, such as English, to new languages and multiple domains with minimal annotation. We ask whether machine translation is an adequate substitute for training data, and extend this to investigate bootstrapping using joint training with English, paraphrasing, and multilingual pre-trained models. We develop a Transformer-based parser combining paraphrases by ensembling attention over multiple encoders, and present new versions of ATIS and Overnight in German and Chinese for evaluation. Experimental results indicate that MT can approximate training data in a new language for accurate parsing when augmented with paraphrasing through multiple MT engines. For cases where MT is inadequate, we also find that our approach achieves parsing accuracy within 2% of complete translation using only 50% of the training data. (EMNLP 2020 Findings, camera ready)
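The multi-encoder combination described in the abstract can be illustrated in miniature: the decoder's context vector is a weighted combination of per-encoder context vectors, one encoder per paraphrase or MT engine. Uniform weights are used below for simplicity; the paper learns the combination by ensembling attention, so this is a sketch of the shape of the operation, not the authors' method:

```python
def ensemble_contexts(contexts, weights=None):
    # Combine one context vector per encoder into a single vector
    # by weighted averaging; all names here are illustrative.
    n = len(contexts)
    weights = weights or [1.0 / n] * n
    dim = len(contexts[0])
    return [sum(w * c[d] for w, c in zip(weights, contexts))
            for d in range(dim)]
```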
Access routes: Green, hybrid. Citations: 4. Popularity: Top 10%. Influence: Average. Impulse: Average. (Metrics powered by BIP!)
Publication (Article, Conference object), 2020. Publisher: Association for Computational Linguistics (ACL). Funded by: EC | TransModal; EC | BroadSem; NWO | Scaling Semantic Parsing to Unrestricted Domains.
Authors: Bražinskas, Arthur; Lapata, Mirella; Titov, Ivan.
Abstract: Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents, such as user reviews of a product. The task is practically important and has attracted a lot of attention. However, due to the high cost of summary production, datasets large enough for training supervised models are lacking. Instead, the task has traditionally been approached with extractive methods that learn to select text fragments in an unsupervised or weakly supervised way. Recently, it has been shown that abstractive summaries, potentially more fluent and better at reflecting conflicting information, can also be produced in an unsupervised fashion. However, these models, not being exposed to actual summaries, fail to capture their essential properties. In this work, we show that even a handful of summaries is sufficient to bootstrap generation of summary text with all the expected properties, such as writing style, informativeness, fluency, and sentiment preservation. We start by training a conditional Transformer language model to generate a new product review given other available reviews of the product. The model is also conditioned on review properties that are directly related to summaries; the properties are derived from reviews with no manual effort. In the second stage, we fine-tune a plug-in module that learns to predict property values on a handful of summaries. This lets us switch the generator to summarization mode.
We show on Amazon and Yelp datasets that our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation. Comment: EMNLP 2020
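The two-stage idea above (condition a generator on properties derived from reviews with no manual effort, then fine-tune a plug-in to predict summary-like property values) can be illustrated with a minimal, hypothetical sketch. The property set below (length and lexical overlap with the other reviews) is an assumption for illustration only; the paper's actual properties and Transformer model are not reproduced here.

```python
def review_properties(review: str, others: list[str]) -> dict:
    """Derive conditioning properties from raw reviews, with no manual labels.
    Here: word count and lexical overlap with the other reviews of the product."""
    tokens = set(review.lower().split())
    other_tokens = set(w for r in others for w in r.lower().split())
    overlap = len(tokens & other_tokens) / max(len(tokens), 1)
    return {"length": len(review.split()), "overlap": round(overlap, 2)}


reviews = [
    "Great battery life and a sharp screen.",
    "The battery lasts all day, screen is great.",
    "Battery life is great but the speakers are weak.",
]
# One property dict per review, computed against the remaining reviews.
props = [review_properties(r, reviews[:i] + reviews[i + 1:])
         for i, r in enumerate(reviews)]

# A conditional generator p(review | other reviews, properties) would be trained
# on such (review, properties) pairs; a plug-in module fine-tuned on a handful of
# gold summaries then predicts summary-like property values (e.g., high overlap),
# switching the generator to summarization mode.
```

The point of the sketch is that the conditioning signal comes for free from the review corpus itself; only the small plug-in ever sees gold summaries.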
Data sources: arXiv.org e-Print Archive (preprint, 2020); https://doi.org/10.18653/v1/20... (conference object, 2020, peer-reviewed); https://doi.org/10.48550/arxiv... (article, 2020, arXiv Non-Exclusive Distribution license).
Access routes: Green, hybrid. 19 citations. BIP! metrics: popularity, influence, and impulse all Top 10%.
Publication: Conference object / Article (2020). Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | TransModal
Authors: Amplayo, Reinald Kim; Lapata, Mirella

The supervised training of high-capacity models on large datasets containing hundreds of thousands of document-summary pairs is critical to the recent success of deep learning techniques for abstractive summarization. Unfortunately, in most domains (other than news) such training data is not available and cannot be easily sourced. In this paper we enable the use of supervised learning in the setting where only documents are available (e.g., product or business reviews) without ground-truth summaries. We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof, which we treat as pseudo-review input. We introduce several linguistically motivated noise generation functions and a summarization model that learns to denoise the input and generate the original review. At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise. Extensive automatic and human evaluation shows that our model brings substantial improvements over both abstractive and extractive baselines. Comment: ACL 2020
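The synthetic-dataset construction described above (sample a review, pretend it is the summary, generate noisy versions as pseudo-input) can be sketched as follows. The single noise function shown, replacing a sentence with one drawn from other reviews, is a hypothetical stand-in; the paper defines several linguistically motivated noise functions not reproduced here.

```python
import random


def noisy_versions(review: str, corpus: list[str], n: int = 2, seed: int = 0) -> list[str]:
    """Create pseudo-input 'reviews' for a sampled review-as-summary by swapping
    one of its sentences with a sentence drawn from the other reviews."""
    rng = random.Random(seed)
    # Pool of replacement sentences taken from every other review in the corpus.
    pool = [s for r in corpus if r != review for s in r.split(". ")]
    out = []
    for _ in range(n):
        sents = review.split(". ")
        i = rng.randrange(len(sents))
        sents[i] = rng.choice(pool)
        out.append(". ".join(sents))
    return out


corpus = [
    "The pizza was fresh. Service was quick",
    "Great pasta. The staff were friendly",
    "Cozy place. The pizza crust was perfect",
]
summary = corpus[0]                      # sampled review, treated as the summary
pseudo_inputs = noisy_versions(summary, corpus)
# Training pairs (pseudo_inputs -> summary) let a supervised summarizer learn to
# denoise; at test time it treats non-consensus opinions in real reviews as noise.
```

Each (noisy inputs, original review) pair plays the role of a (documents, summary) pair, which is what makes ordinary supervised training applicable without any gold summaries.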
Data sources: arXiv.org e-Print Archive (preprint, 2020); https://doi.org/10.18653/v1/20... (conference object, 2020, peer-reviewed); https://doi.org/10.48550/arxiv... (article, 2020, arXiv Non-Exclusive Distribution license).
Access routes: Green, hybrid. 25 citations. BIP! metrics: popularity, influence, and impulse all Top 10%.
Publication: Article / Conference object (2020). Publisher: Association for Computational Linguistics (ACL)
Funded by: EC | BroadSem; NWO | Scaling Semantic Parsing to Unrestricted Domains; EC | TransModal
Authors: Bražinskas, Arthur; Lapata, Mirella; Titov, Ivan

Opinion summarization is the task of automatically creating summaries that reflect subjective information expressed in multiple documents, such as product reviews. While the majority of previous work has focused on the extractive setting, i.e., selecting fragments from input reviews to produce a summary, we let the model generate novel sentences and hence produce abstractive summaries. Recent progress in summarization has seen the development of supervised models that rely on large quantities of document-summary pairs. Since such training data is expensive to acquire, we instead consider the unsupervised setting; in other words, we do not use any summaries in training. We define a generative model for a review collection which capitalizes on the intuition that, when generating a new review given a set of other reviews of a product, we should be able to control the "amount of novelty" going into the new review or, equivalently, vary the extent to which it deviates from the input. At test time, when generating summaries, we force the novelty to be minimal and produce a text reflecting consensus opinions. We capture this intuition by defining a hierarchical variational autoencoder model. Both individual reviews and the products they correspond to are associated with stochastic latent codes, and the review generator ("decoder") has direct access to the text of input reviews through the pointer-generator mechanism. Experiments on Amazon and Yelp datasets show that setting the review's latent code to its mean at test time allows the model to produce fluent and coherent summaries reflecting common opinions. Comment: ACL 2020
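The test-time trick above, setting the review's latent code to its mean to force minimal novelty, can be illustrated with a toy numeric sketch. The 2-d codes below are purely hypothetical stand-ins; the actual model is a hierarchical VAE whose decoder also attends to the input reviews via a pointer-generator mechanism, none of which is reproduced here.

```python
def mean_code(codes: list[list[float]]) -> list[float]:
    """Average per-review latent codes into a single consensus code.
    Decoding from this mean forces minimal novelty, i.e., the decoder is
    steered toward opinions shared across the input reviews."""
    n = len(codes)
    return [sum(dim) / n for dim in zip(*codes)]


# Hypothetical 2-d latent codes for three reviews of one product.
review_codes = [[0.9, 0.1], [1.1, -0.1], [1.0, 0.0]]
consensus = mean_code(review_codes)  # fed to the decoder instead of a sampled code
```

During training the model samples codes stochastically (controlling the "amount of novelty"); at summarization time the mean collapses that stochasticity, which is why the output reflects consensus rather than any single review's idiosyncrasies.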
Data sources: arXiv.org e-Print Archive (preprint, 2019); https://doi.org/10.18653/v1/20... (conference object, 2020, peer-reviewed); https://doi.org/10.48550/arxiv... (article, 2019, arXiv Non-Exclusive Distribution license).
Access routes: Green, hybrid. 36 citations. BIP! metrics: popularity Top 1%; influence and impulse Top 10%.