- home
- Advanced Search
Filters (Clear All):
- Digital Humanities and Cultural Heritage
- 2017-2021
- Publications
- Preprint
- VIRTA
Publication · Conference object / Article · 2021 · France
Publisher: IEEE · Funded by: EC | TAILOR
Authors: Alsaidi, Safa; Decker, Amandine; Lay, Puthineath; Marquer, Esteban; Murena, Pierre-Alexandre; Couceiro, Miguel

Abstract: Analogical proportions are statements of the form "A is to B as C is to D" that are used for several reasoning and classification tasks in artificial intelligence and natural language processing (NLP). For instance, there are analogy-based approaches to semantics as well as to morphology. Symbolic approaches have been developed to solve or detect analogies between character strings, e.g., the axiomatic approach and one based on Kolmogorov complexity. In this paper, we propose a deep learning approach to detect morphological analogies, for instance, with reinflection or conjugation. We present empirical results showing that our framework is competitive with the above-mentioned state-of-the-art symbolic approaches. We also explore its transferability across languages empirically, which highlights interesting similarities between them.

Comment: Submitted and accepted at the 8th IEEE International Conference on Data Science and Advanced Analytics (DSAA).

Data sources and versions:
- arXiv.org e-Print Archive — Preprint, 2021
- Crossref — Conference object, 2021, peer-reviewed; License: IEEE Copyright; DOI: https://doi.org/10.1109/dsaa53316.2021.9564186
- HAL (INRIA CCSD archive; Hal-Diderot) — Conference object, 2021; License: CC BY
- Full text: https://hal.inria.fr/hal-03328841/document ; https://hal.inria.fr/hal-03313556/document
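To make the notion of an analogical proportion between character strings concrete, here is a minimal, purely illustrative sketch. It is not the paper's neural method, nor the axiomatic or Kolmogorov-complexity approaches it compares against; it only checks whether a naive suffix-rewriting rule derived from the pair (A, B) also maps C to D, which suffices for regular morphology like "walk : walked :: jump : jumped".

```python
def suffix_rule(a: str, b: str):
    """Derive a (strip, add) suffix rule turning a into b.

    Example: ("walk", "walked") -> ("", "ed"), since nothing is
    stripped from the shared stem and "ed" is appended.
    """
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[i:], b[i:]


def solves_analogy(a: str, b: str, c: str, d: str) -> bool:
    """Check "a is to b as c is to d" under the naive suffix rule."""
    strip, add = suffix_rule(a, b)
    if not c.endswith(strip):
        return False
    # Apply the same transformation to c and compare with d.
    return c[: len(c) - len(strip)] + add == d
```

Irregular forms (e.g., "walk : walked :: sing : sang") fail under this rule, which is one motivation for the learned and symbolic approaches the abstract mentions.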
Access routes: Green open access · 1 citation · Popularity: average · Influence: average · Impulse: average (metrics powered by BIP!)
Publication · Article / Conference object · 2017 · Finland
Publisher: IEEE
Authors: Tavakoli, Hamed R.; Shetty, Rakshith; Borji, Ali; Laaksonen, Jorma

Abstract: To bridge the gap between humans and machines in image understanding and description, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up, saliency-based visual attention and object referrals in scene descriptions. We investigate the properties of human-written and machine-generated descriptions. We then propose a saliency-boosted image captioning model in order to investigate the benefits of low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions; (2) the better a captioning model performs, the better its attention agreement with human descriptions; (3) the proposed saliency-boosted model does not improve significantly over its baseline on the MS COCO database, indicating that explicit bottom-up boosting does not help when the task is well learnt and tuned on a dataset; and (4) better generalization is, however, observed for the saliency-boosted model on unseen data.

Comment: To appear in ICCV 2017.

Data sources and versions:
- arXiv.org e-Print Archive — Preprint, 2017
- Aaltodoc Publication Archive — Article, 2017, peer-reviewed
- Datacite — Article, 2017; License: arXiv Non-Exclusive Distribution; DOI: https://doi.org/10.48550/arxiv...
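Finding (1) above, that humans mention more salient objects earlier, is the kind of claim one can quantify with a rank correlation between per-object saliency scores and mention positions. The sketch below is a toy illustration, not code or data from the paper: the object names, saliency values, and mention order are invented, and Kendall's tau is implemented naively in pure Python.

```python
def kendall_tau(x, y):
    """Naive O(n^2) Kendall rank correlation between two score lists."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)


# Hypothetical scene: objects with made-up saliency scores and the
# position at which each is mentioned in a caption.
objects = ["dog", "ball", "tree", "fence"]
saliency = [0.9, 0.7, 0.4, 0.2]    # higher = more salient
mention_order = [1, 2, 3, 4]       # 1 = mentioned first

# Earlier mention should pair with higher saliency, so correlate
# saliency against negated mention positions; tau near 1 indicates
# strong saliency/mention-order agreement.
tau = kendall_tau(saliency, [-p for p in mention_order])
```

In this contrived example the rankings agree perfectly, so tau is 1.0; real captions would give intermediate values.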
Access routes: Green open access · 36 citations · Popularity: top 10% · Influence: top 10% · Impulse: top 10% (metrics powered by BIP!)
For further information contact us at helpdesk@openaire.eu