Advanced Search — Filters: Digital Humanities and Cultural Heritage · Publications · Other literature type
Publication · Article · 2023 · Publisher: MDPI AG
Authors: Set, Beatha
This paper draws on conceptualisations of language as heteroglossic practices to examine how an experienced bilingual science teacher navigates the monoglossic ideology embodied in the official Namibian Language in Education Policy (LiEP) within a linguistically constrained Namibian bilingual context. The paper aims to support recent research that challenges monolingual and monoglossic language practices, which tend to ignore the linguistic resources that children bring to the classroom. Data were collected from a classroom and include video and audio recordings of lessons, field notes, and photographs; they were analysed through socio-cultural discourse analysis and fine-grained multimodal analytical methods. The findings illustrate moments where the science teacher, although constrained by the English-monolingual policy, mediated learners' access to science learning by harnessing the linguistic resources that the learners bring to the classroom. There were also moments where the teacher worked flexibly across languages, discourses, and modes to interrupt the monoglossic ideology embodied in the LiEP. The use of rich heteroglossic practices clearly enhanced science meaning-making despite learners' limited proficiency in English. The findings highlight the need to support learners from linguistically diverse backgrounds through a deliberately inclusive language policy that harnesses the heteroglossic nature of communicative practices and prepares teachers for a multilingual reality.
Sources: Languages (Other literature type · Article · 2023 · Peer-reviewed · License: CC BY · Full-Text: http://www.mdpi.com/2226-471X/8/2/131/pdf)
Access: gold · 0 citations · popularity: Average · influence: Average · impulse: Average
For further information contact us at helpdesk@openaire.eu
Publication · Part of book or chapter of book · Article · Conference object · 2023 · France · Publisher: Springer Nature Switzerland
Authors: Chen, Xiaofei; He, Yuting; Xue, Cheng; Ge, Rongjun; Li, Shuo; Yang, Guanyu
Foundation models based on pre-training technology have significantly advanced artificial intelligence from theoretical to practical applications, making computer-aided diagnosis feasible for widespread use. Medical contrastive vision-language pre-training, which does not require human annotations, is an effective approach for guiding representation learning using the description information in diagnostic reports. However, the effectiveness of pre-training is limited by large-scale semantic overlap and semantic shifting in the medical field. To address these issues, we propose the Knowledge-Boosting Contrastive Vision-Language Pre-training framework (KoBo), which integrates clinical knowledge into the learning of vision-language semantic consistency. The framework uses an unbiased, open-set, sample-wise knowledge representation to measure negative-sample noise and to supplement the correspondence between vision-language mutual information and clinical knowledge. Extensive experiments validate the effect of our framework on eight tasks, including classification, segmentation, retrieval, and semantic relatedness, achieving comparable or better performance in zero-shot and few-shot settings. Our code is available at https://github.com/ChenXiaoFei-CS/KoBo.
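As context for the contrastive pre-training objective the abstract refers to, the sketch below shows a generic CLIP-style symmetric InfoNCE loss over a batch of paired image/report embeddings. This is a minimal NumPy illustration of contrastive vision-language alignment in general, not the paper's KoBo objective; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Generic CLIP-style symmetric InfoNCE loss (not the exact KoBo loss).

    img_emb, txt_emb: (B, d) arrays; row i of each is a matched pair.
    """
    # L2-normalise so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (B, B) similarity matrix
    labels = np.arange(len(logits))           # matched pairs sit on the diagonal

    def xent(l):
        # Cross-entropy of each row against its diagonal "correct" column,
        # with the usual max-subtraction for numerical stability.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimising this loss pulls each image embedding toward its own report and away from the other reports in the batch; KoBo's contribution, per the abstract, is to correct this objective for clinically equivalent "negative" reports.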
Sources: arXiv.org e-Print Archive (Preprint · 2023); https://doi.org/10.1007/978-3-031-43907-0_39 (Part of book or chapter of book · 2023 · Peer-reviewed · License: Springer Nature TDM)
Access: Green · 0 citations · popularity: Average · influence: Average · impulse: Average
Publication · Article · 2022 · Publisher: Elsevier BV
Authors: Lu-Jing Huang; Tao Wang
We give some relationships between the first Dirichlet eigenvalues and the exit time moments for general symmetric Markov processes. As applications, we present some examples, including symmetric diffusions and $\alpha$-stable processes, and provide estimates of their first Dirichlet eigenvalues and exit time moments. (Comment: 11 pages)
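As general background for readers unfamiliar with this line of work (stated here as classical context, not as a result taken from this article), one standard relationship of this kind bounds the first Dirichlet eigenvalue of a region by the supremum of the first exit time moment:

```latex
% Classical background bound (not the paper's theorem).
% Let \tau_D be the exit time from D and u(x) = \mathbb{E}_x[\tau_D].
% The Green operator G satisfies Gf(x) = \mathbb{E}_x\!\int_0^{\tau_D} f(X_s)\,ds,
% so u = G\mathbf{1} and \|G\|_{\infty\to\infty} = \sup_x u(x), while for a
% symmetric process the L^2 spectral radius of G is 1/\lambda_1(D).
% Since the L^2 norm of G is dominated by its L^\infty norm, this yields
\[
  \frac{1}{\lambda_1(D)} \;\le\; \sup_{x \in D} \mathbb{E}_x[\tau_D],
  \qquad\text{equivalently}\qquad
  \lambda_1(D)\,\sup_{x \in D} \mathbb{E}_x[\tau_D] \;\ge\; 1 .
\]
```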
Sources: arXiv.org e-Print Archive (Preprint · 2022); Statistics & Probability Letters (Article · 2023 · Peer-reviewed · License: Elsevier TDM); Datacite (Article · 2022 · License: arXiv Non-Exclusive Distribution)
Access: Green, bronze · 0 citations · popularity: Average · influence: Average · impulse: Average
Publication · Article · Conference object · 2021 · Publisher: ACM
Authors: Kong, Junsheng; Li, Weizhao; Liu, Zeyi; Liao, Ben; Qiu, Jiezhong; Hsieh, Chang-Yu; Cai, Yi; Zhang, Shengyu
The notion of word embedding plays a fundamental role in natural language processing (NLP). However, pre-training word embeddings for a very large-scale vocabulary is computationally challenging for most existing methods. In this work, we show that with merely a small fraction of contexts (Q-contexts) that are typical in the whole corpus, and their mutual information with words, one can construct high-quality word embeddings with negligible errors. Mutual information between contexts and words can be encoded canonically as a sampling state; thus, Q-contexts can be constructed quickly. Furthermore, we present an efficient and effective WEQ method, which is capable of extracting word embeddings directly from these typical contexts. In practical scenarios, our algorithm runs 11 to 13 times faster than well-established methods. Comparing with well-known methods such as matrix factorization, word2vec, GloVe, and fastText, we demonstrate that our method achieves comparable performance on a variety of downstream NLP tasks while maintaining run-time and resource advantages over all these baselines. (Comment: Accepted by CIKM 2021)
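For readers unfamiliar with the matrix-factorization baseline the abstract compares against, the sketch below builds word embeddings by truncated SVD of a positive pointwise mutual information (PPMI) co-occurrence matrix. This is the classical PPMI-SVD construction, not the paper's WEQ method; the function name, dimension, and square-root singular-value weighting are illustrative assumptions.

```python
import numpy as np

def ppmi_embeddings(cooc, dim):
    """Word embeddings via truncated SVD of the PPMI matrix.

    cooc: (V, C) word-by-context co-occurrence counts.
    Returns a (V, dim) embedding matrix.
    """
    total = cooc.sum()
    pw = cooc.sum(axis=1, keepdims=True) / total   # word marginals p(w)
    pc = cooc.sum(axis=0, keepdims=True) / total   # context marginals p(c)
    with np.errstate(divide="ignore", invalid="ignore"):
        # PMI(w, c) = log p(w, c) / (p(w) p(c)); zero counts give -inf.
        pmi = np.log((cooc / total) / (pw * pc))
    ppmi = np.maximum(pmi, 0.0)                    # clip negatives (PPMI)
    ppmi[~np.isfinite(ppmi)] = 0.0                 # guard rows/cols with no counts
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    # Symmetric weighting: scale singular vectors by sqrt of singular values.
    return u[:, :dim] * np.sqrt(s[:dim])
```

Words that occur in the same contexts end up with similar rows, which is the behaviour WEQ aims to reproduce from only a small sample of typical contexts.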
Sources: arXiv.org e-Print Archive (Preprint · 2021)
Access: Green · 0 citations · popularity: Average · influence: Average · impulse: Average
Publication · Article · 2021 · France · Publisher: MIT Press · Funded by: EC | COMPRISE
Authors: David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; Daniel D'souza; Julia Kreutzer; Constantine Lignos; Chester Palen-Michel; Happy Buzaaba; Shruti Rijhwani; Sebastian Ruder; Stephen Mayhew; Israel Abebe Azime; Shamsuddeen Hassan Muhammad; Chris Chinenye Emezue; Joyce Nakatumba-Nabende; Perez Ogayo; Aremu Anuoluwapo; Catherine Gitau; Derguene Mbaye; Jesujoba O. Alabi; Seid Muhie Yimam; Tajuddeen R. Gwadabe; Ignatius Ezeani; Rubungo Andre Niyongabo; Jonathan Mukiibi; Verrah Otiende; Iroro Orife; Davis David; Samba Ngom; Tosin P. Adewumi; Paul Rayson; Mofetoluwa Adeyemi; Gerald Muriuki; Emmanuel Anebi; Chiamaka Chukwuneke; Nkiruka Odu; Eric Peter Wairagala; Samuel Oyerinde; Clemencia Siro; Tobius Saul Bateesa; Temilola Oloyede; Yvonne Wambui; Victor Akinode; Deborah Nabagereka; Maurice Katusiime; Ayodele Awokoya; Mouhamadane Mboup; Dibora Gebreyohannes; Henok Tilaye; Kelechi Nwaike; Degaga Wolde; Abdoulaye Faye; Blessing Sibanda; Orevaoghene Ahia; Bonaventure F. P. Dossou; Kelechi Ogueji; Thierno Ibrahima Diop; Abdoulaye Diallo; Adewale Akinfaderin; Tendai Marengereke; Salomey Osei
We take a step towards addressing the underrepresentation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
Sources: Transactions of the Association for Computational Linguistics (Article · 2021 · Peer-reviewed · License: CC BY); arXiv.org e-Print Archive (Preprint · 2021); Hal-Diderot (Article · 2021 · Full-Text: https://hal.inria.fr/hal-03350962/document)
Access: Green, gold · 14 citations · popularity: Top 10% · influence: Average · impulse: Top 10%
Publication · Other literature type · Article · 2020 · Publisher: Copernicus GmbH
Authors: W. Hua; Y. Qiao; M. Hou
Abstract. Laser scanning and photogrammetry are each useful techniques for the digital documentation of cultural heritage sites. However, either technique on its own is of limited use when cultural heritage such as the Great Wall stands in harsh geographical conditions. The Great Wall is usually built on ridges with cliffs on both sides, so constructing scaffolding is very difficult; the three-dimensional (3D) data obtained from traditional terrestrial laser scanning alone are therefore incomplete. Likewise, because an unmanned aerial vehicle (UAV) cannot enter an enemy tower, UAV photogrammetry alone cannot capture the 3D structure inside the tower. To explore effective methods for completely collecting 3D data of cultural heritage in a harsh geographical environment, this study establishes a 3D model and the associated digital documentation of the No. 15 enemy tower of the New Guangwu Great Wall using a combination of terrestrial laser scanning and UAV photogrammetry. The paper proposes an integrated data collection method and reduces the layout of image control points using RTK-UAV technology, which improves work efficiency and reduces work risks. The internal structure of the enemy tower was captured by laser scanning and the external structure by UAV photogrammetry, and the two datasets were fused using the ICP algorithm. The result is complete, high-quality 3D digital documentation of the enemy tower; the data can be displayed digitally and help heritage experts with the Great Wall's restoration. This study demonstrates the potential of integrating terrestrial laser scanning and UAV photogrammetry for the 3D digital documentation of cultural heritage sites.
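The data-fusion step above relies on ICP (iterative closest point) registration. The sketch below is a textbook point-to-point ICP with a Kabsch rigid-transform solver, intended only to illustrate the idea, not the authors' pipeline; it assumes roughly pre-aligned, fully overlapping point clouds and uses brute-force nearest-neighbour search for self-containment.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Fix the sign so R is a proper rotation (no reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching with
    rigid re-alignment. A textbook sketch, not the authors' pipeline."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbour in dst for every point in cur.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    # Overall rigid transform taking the original src onto dst.
    return best_rigid_transform(src, cur)
```

In practice, scan-to-photogrammetry fusion also needs scale handling, outlier rejection, and a coarse initial alignment (e.g. from the RTK-UAV control points mentioned above) before ICP refinement.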
Sources: DOAJ; ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Article · 2020 · Peer-reviewed · License: CC BY)
Access: gold · 4 citations · popularity: Top 10% · influence: Average · impulse: Average
Publication · Conference object · Article · 2020 · United Kingdom · Publisher: Association for Computational Linguistics (ACL)
Authors: Nekoto, Wilhelmina; Marivate, Vukosi; Matsila, Tshinondiwa; Fasubaa, Timi; Kolawole, Tajudeen; Fagbohungbe, Taiwo; Akinola, Solomon Oluwole; Muhammad, Shamsuddeen Hassan; Kabongo, Salomon; Osei, Salomey; Freshia, Sackey; Niyongabo, Rubungo Andre; Macharm, Ricky; Ogayo, Perez; Ahia, Orevaoghene; Meressa, Musie; Adeyemi, Mofe; Mokgesi-Selinga, Masabata; Okegbemi, Lawrence; Martinus, Laura Jane; Tajudeen, Kolawole; Degila, Kevin; Ogueji, Kelechi; Siminyu, Kathleen; Kreutzer, Julia; Webster, Jason; Ali, Jamiil Toure; Abbott, Jade; Orife, Iroro; Ezeani, Ignatius; Dangana, Idris Abdulkabir; Kamper, Herman; Elsahar, Hady; Duru, Goodness; Kioko, Ghollah; Murhabazi, Espoir; van Biljon, Elan; Whitenack, Daniel; Onyefuluchi, Christopher; Emezue, Chris; Dossou, Bonaventure; Sibanda, Blessing; Bassey, Blessing Itoro; Olabiyi, Ayodele; Ramkilowan, Arshath; Öktem, Alp; Akinfaderin, Adewale; Bashir, Abdallah
Research in NLP lacks geographic diversity, and the question of how NLP can be scaled to low-resourced languages has not yet been adequately solved. "Low-resourced"-ness is a complex problem that goes beyond data availability and reflects systemic problems in society. In this paper, we focus on the task of machine translation (MT), which plays a crucial role in information accessibility and communication worldwide. Despite immense improvements in MT over the past decade, MT remains centered around a few high-resourced languages. As MT researchers cannot solve the problem of low-resourcedness alone, we propose participatory research as a means to involve all the agents required in the MT development process. We demonstrate the feasibility and scalability of participatory research with a case study on MT for African languages. Its implementation leads to a collection of novel translation datasets and MT benchmarks for over 30 languages, with human evaluations for a third of them, and enables participants without formal training to make a unique scientific contribution. Benchmarks, models, data, code, and evaluation results are released at https://github.com/masakhane-io/masakhane-mt. (Comment: Findings of EMNLP 2020; updated benchmarks)
Sources: Lancaster EPrints; arXiv.org e-Print Archive (Preprint · 2020); Datacite (Article · 2020 · License: arXiv Non-Exclusive Distribution)
Access: Green, hybrid · 19 citations · popularity: Top 10% · influence: Top 10% · impulse: Top 10% · 2 views · 10 downloads
Publication · Conference object · Article · 2019 · Publisher: IEEE
Authors: Li, Yehao; Yao, Ting; Pan, Yingwei; Chao, Hongyang; Mei, Tao
Image captioning has received significant attention, with remarkable improvements from recent advances. Nevertheless, images in the wild encapsulate rich knowledge and cannot be sufficiently described by models built on image-caption pairs containing only in-domain objects. In this paper, we propose to address the problem by augmenting standard deep captioning architectures with object learners. Specifically, we present Long Short-Term Memory with Pointing (LSTM-P), a new architecture that facilitates vocabulary expansion and produces novel objects via a pointing mechanism. Technically, object learners are first pre-trained on available object recognition data. Pointing in LSTM-P then balances the probability between generating a word through the LSTM and copying a word from the recognized objects at each time step in the decoder stage. Furthermore, our captioning encourages global coverage of objects in the sentence. Extensive experiments are conducted on both the held-out COCO image captioning and ImageNet datasets for describing novel objects, and superior results are reported in comparison to state-of-the-art approaches. Most remarkably, we obtain an average F1 score of 60.9% on the held-out COCO dataset. (Comment: CVPR 2019)
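The pointing mechanism described above gates between generating a word from the decoder vocabulary and copying a word from the recognized objects. The sketch below shows one decoding step of a generic copy/generate mixture; it illustrates the mechanism in spirit, not the exact LSTM-P formulation, and the function name, uniform copy distribution, and gate parameterisation are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointing_step(gen_logits, copy_gate_logit, object_ids):
    """One decoding step of a copy/generate mixture (generic sketch,
    not the exact LSTM-P formulation):

        p(w) = (1 - g) * p_gen(w) + g * p_copy(w)

    gen_logits:      (V,) decoder logits over the vocabulary
    copy_gate_logit: scalar logit for the copy gate g
    object_ids:      vocabulary indices of objects recognized in the image
    """
    p_gen = softmax(gen_logits)
    g = 1.0 / (1.0 + np.exp(-copy_gate_logit))     # sigmoid copy gate
    p_copy = np.zeros_like(p_gen)
    p_copy[object_ids] = 1.0 / len(object_ids)     # copy mass on detected objects
    return (1.0 - g) * p_gen + g * p_copy
```

When the gate opens (g near 1), probability mass shifts onto the detected object words, which is how such models emit objects never seen in the image-caption training pairs.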
Sources: arXiv.org e-Print Archive (Preprint · 2019); https://doi.org/10.1109/cvpr.2019.01278 (Conference object · 2019 · Peer-reviewed · License: IEEE Copyright); Datacite (Article · 2019 · License: arXiv Non-Exclusive Distribution)
Access: Green · 33 citations · popularity: Top 10% · influence: Top 10% · impulse: Top 10%
Publication · Article · 2018 · Publisher: IOP Publishing
Authors: Dmitry Gets; Artur Ishteev; Tatiana Liashenko; Danila Saranin; Sergey V. Makarov; Anvar A. Zakhidov
Organic-inorganic halide perovskites have recently emerged as a promising material for highly efficient light-emitting diodes (LEDs) and solar cells (SCs). Although the efficiencies of both perovskite SCs and LEDs are already among the best, developing a dual-functional perovskite device capable of working in both regimes with high efficiency remains challenging. Here we demonstrate that a dual-functional device based on the mixed-halide perovskite CH3NH3PbBr2I can be switched from SC to LED with a low threshold voltage Vth < 2 V by exposure to sunlight at open circuit (Voc) or at a small bias voltage Vpol ~ 1-2 V. Such photo-poling creates an in-situ p-i-n junction via the migration of methylammonium (CH3NH3+, MA+) and I-/Br- ions to the interfaces, lowering charge-injection barriers and self-balancing injection currents in the perovskite LED. We show that before photo-poling the electroluminescence (EL) is highly unstable in the LED regime, whereas after photo-poling the stabilized EL exhibits unusual dynamics, increasing with time and poling-cycle number, while Vth and the injection current decrease over successive cycling runs. Additionally, photo-induced and current-induced halide segregation accumulates with cycling, which is beneficial for the LED, increasing its efficiency and brightness, but reversibly degrades photovoltaic (PV) performance, which can easily be recovered. (Comment: 15 pages; 4 figures; 1 table)
Sources: arXiv.org e-Print Archive (preprint, 2018); Journal of Physics: Conference Series (article, 2018, peer-reviewed, CC BY, via Crossref); Datacite (arXiv record).
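For readers unfamiliar with the SC-mode quantities the abstract mentions, the ideal single-diode solar-cell model gives the textbook relation between photocurrent and open-circuit voltage. This is standard device physics for illustration only, not code or parameters from the paper; the numeric values in the demo are invented.

```python
# Ideal single-diode model of an illuminated solar cell:
#   I(V) = Iph - I0 * (exp(qV / nkT) - 1)
# and the open-circuit voltage where the net current vanishes:
#   Voc = (nkT/q) * ln(Iph / I0 + 1)
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def diode_current(v, i_ph, i_0, n=1.5, t=300.0):
    """Net current (A) of an illuminated diode at voltage v (V)."""
    vt = n * K_B * t / Q_E              # ideality-scaled thermal voltage
    return i_ph - i_0 * math.expm1(v / vt)

def open_circuit_voltage(i_ph, i_0, n=1.5, t=300.0):
    """Voltage at which diode_current(...) == 0."""
    vt = n * K_B * t / Q_E
    return vt * math.log(i_ph / i_0 + 1.0)

# Demo with invented parameters: 20 mA photocurrent, 1 pA saturation current.
voc = open_circuit_voltage(20e-3, 1e-12)
```

By construction, plugging `voc` back into `diode_current` returns (numerically) zero current.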
Conference object / Article, 2018. Publisher: IEEE.
Authors: Li, Yehao; Yao, Ting; Pan, Yingwei; Chao, Hongyang; Mei, Tao
Automatically describing a video in natural language is regarded as a fundamental challenge in computer vision. The problem is far from trivial, especially when a video contains multiple events worth mentioning, as often happens in real videos. A natural question is how to temporally localize and then describe these events, a task known as "dense video captioning." In this paper, we present a novel framework for dense video captioning that unifies the localization of temporal event proposals with the sentence generation for each proposal, jointly training them in an end-to-end manner. To combine these two worlds, we integrate a new design, descriptiveness regression, into a single-shot detection structure to infer the descriptive complexity of each detected proposal via sentence generation; this in turn adjusts the temporal location of each event proposal. Our model differs from existing dense video captioning methods in that we propose a joint, global optimization of detection and captioning, and the framework uniquely capitalizes on an attribute-augmented video captioning architecture. Extensive experiments on the ActivityNet Captions dataset show clear improvements over state-of-the-art techniques. More remarkably, we obtain a new record METEOR of 12.96% on the ActivityNet Captions official test set. (CVPR 2018 Spotlight; Rank 1 in the ActivityNet Captions Challenge 2017)
Sources: arXiv.org e-Print Archive (preprint, 2018); Datacite (arXiv record).
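When predicted event proposals are matched against ground-truth segments in dense video captioning benchmarks, the standard overlap measure is temporal intersection-over-union. The helper below is a generic utility sketch of that measure, not code from the paper.

```python
# Temporal IoU between two (start, end) segments, e.g. in seconds.
def temporal_iou(a, b):
    """Overlap of segments a and b divided by their union length."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0
```

A proposal covering seconds 0-10 and a ground-truth event covering 5-15 overlap for 5 s out of a 15 s union, giving an IoU of 1/3.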
Part of book or chapter of book / Article / Conference object, 2023 (France). Publisher: Springer Nature Switzerland.
Authors: Chen, Xiaofei; He, Yuting; Xue, Cheng; Ge, Rongjun; Li, Shuo; Yang, Guanyu
Foundation models based on pre-training have significantly advanced artificial intelligence from theory into practical applications and have made computer-aided diagnosis feasible for widespread use. Medical contrastive vision-language pre-training, which requires no human annotation, is an effective approach to guiding representation learning with the descriptive information in diagnostic reports. However, the effectiveness of such pre-training is limited by large-scale semantic overlap and semantic shifting in the medical field. To address these issues, we propose the Knowledge-Boosting Contrastive Vision-Language Pre-training framework (KoBo), which integrates clinical knowledge into the learning of vision-language semantic consistency. The framework uses an unbiased, open-set, sample-wise knowledge representation to measure negative-sample noise and to supplement the correspondence between vision-language mutual information and clinical knowledge. Extensive experiments validate the framework on eight tasks spanning classification, segmentation, retrieval, and semantic relatedness, achieving comparable or better performance in zero-shot and few-shot settings. Our code is available at https://github.com/ChenXiaoFei-CS/KoBo.
Sources: arXiv.org e-Print Archive (preprint, 2023); Springer book chapter (2023, peer-reviewed, via Crossref).
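The contrastive vision-language pre-training that KoBo builds on is typically driven by a symmetric InfoNCE objective over paired image and text embeddings. The sketch below shows only that generic baseline loss (CLIP-style); KoBo's knowledge-boosting terms are not reproduced here, and all names are illustrative.

```python
# Symmetric InfoNCE over a batch of paired image/text embeddings: matching
# pairs sit on the diagonal of the similarity matrix and serve as targets
# for a cross-entropy in both directions.
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Return the mean of image->text and text->image contrastive losses."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # scaled cosine similarities
    n = logits.shape[0]

    def xent(rowwise_logits):
        # cross-entropy with the matching pair (diagonal) as the target
        z = rowwise_logits - rowwise_logits.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

# Sanity check on random embeddings: correctly paired batches score a lower
# loss than a deliberately shuffled pairing.
rng = np.random.default_rng(1)
e = rng.normal(size=(8, 16))
loss_matched = info_nce(e, e)
loss_shuffled = info_nce(e, np.roll(e, 1, axis=0))
```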
Article, 2022. Publisher: Elsevier BV.
Authors: Lu-Jing Huang; Tao Wang
We give some relationships between the first Dirichlet eigenvalues and the exit-time moments of general symmetric Markov processes. As applications, we present examples, including symmetric diffusions and $\alpha$-stable processes, and provide estimates of their first Dirichlet eigenvalues and exit-time moments. (11 pages)
Sources: arXiv.org e-Print Archive (preprint, 2022); Statistics & Probability Letters (article, 2023, peer-reviewed, via Crossref); Datacite (arXiv record).
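To make the kind of relationship in this abstract concrete, here is one classical link between the first Dirichlet eigenvalue and the first exit-time moment, stated for illustration from standard spectral theory rather than quoted from the paper. Write $\tau_D$ for the exit time from a domain $D$, $p_t^D$ for the killed transition density, and $\varphi_1 \ge 0$ for the ground-state eigenfunction with eigenvalue $\lambda_1$:

```latex
P_x(\tau_D > t) \;=\; \int_D p_t^D(x,y)\,dy
  \;\ge\; \frac{1}{\lVert \varphi_1 \rVert_\infty} \int_D p_t^D(x,y)\,\varphi_1(y)\,dy
  \;=\; e^{-\lambda_1 t}\, \frac{\varphi_1(x)}{\lVert \varphi_1 \rVert_\infty},
\qquad\text{so}\qquad
\mathbb{E}_x[\tau_D] \;=\; \int_0^\infty P_x(\tau_D > t)\,dt
  \;\ge\; \frac{\varphi_1(x)}{\lambda_1\,\lVert \varphi_1 \rVert_\infty}.
```

Taking the supremum over $x$ at a maximizer of $\varphi_1$ gives $\sup_{x \in D} \mathbb{E}_x[\tau_D] \ge 1/\lambda_1$: the first exit-time moment controls the eigenvalue from above.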
Article / Conference object, 2021. Publisher: ACM.
Authors: Kong, Junsheng; Li, Weizhao; Liu, Zeyi; Liao, Ben; Qiu, Jiezhong; Hsieh, Chang-Yu; Cai, Yi; Zhang, Shengyu
Word embeddings play a fundamental role in natural language processing (NLP). However, pre-training word embeddings for a very large vocabulary is computationally challenging for most existing methods. In this work, we show that with merely a small fraction of contexts (Q-contexts) that are typical of the whole corpus, together with their mutual information with words, one can construct high-quality word embeddings with negligible error. Mutual information between contexts and words can be encoded canonically as a sampling state, so Q-contexts can be constructed quickly. Furthermore, we present WEQ, an efficient and effective method that extracts word embeddings directly from these typical contexts. In practical scenarios, our algorithm runs 11-13 times faster than well-established methods. Comparing against well-known methods such as matrix factorization, word2vec, GloVe, and fastText, we demonstrate that our method achieves comparable performance on a variety of downstream NLP tasks while maintaining run-time and resource advantages over all these baselines. (Accepted by CIKM 2021)
Sources: arXiv.org e-Print Archive (preprint, 2021).
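For context, the matrix-factorization family of baselines this abstract compares against derives word vectors by factorizing a word-context association matrix. The sketch below shows the classical PPMI + truncated-SVD recipe on a toy corpus; it is a generic baseline for illustration, not the paper's WEQ algorithm.

```python
# Count-based word embeddings: build a co-occurrence matrix, convert it to
# positive pointwise mutual information (PPMI), then take a low-rank SVD.
import numpy as np

def ppmi_svd_embeddings(sentences, dim=2, window=2):
    """sentences: list of token lists; returns (vocab, dim-sized vectors)."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    counts[idx[w], idx[s[j]]] += 1.0
    total = counts.sum()
    pw = counts.sum(axis=1, keepdims=True) / total       # P(word)
    pc = counts.sum(axis=0, keepdims=True) / total       # P(context)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (pw * pc))
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    return vocab, u[:, :dim] * np.sqrt(s[:dim])          # rank-dim factors

# Toy corpus, invented for the demo.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
vocab, emb = ppmi_svd_embeddings(corpus, dim=2)
```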
Article, 2021 (France). Publisher: MIT Press. Funded by: EC | COMPRISE.
Authors: David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; Daniel D'souza; Julia Kreutzer; Constantine Lignos; Chester Palen-Michel; Happy Buzaaba; Shruti Rijhwani; Sebastian Ruder; Stephen Mayhew; Israel Abebe Azime; Shamsuddeen Hassan Muhammad; Chris Chinenye Emezue; Joyce Nakatumba-Nabende; Perez Ogayo; Aremu Anuoluwapo; Catherine Gitau; Derguene Mbaye; Jesujoba O. Alabi; Seid Muhie Yimam; Tajuddeen R. Gwadabe; Ignatius Ezeani; Rubungo Andre Niyongabo; Jonathan Mukiibi; Verrah Otiende; Iroro Orife; Davis David; Samba Ngom; Tosin P. Adewumi; Paul Rayson; Mofetoluwa Adeyemi; Gerald Muriuki; Emmanuel Anebi; Chiamaka Chukwuneke; Nkiruka Odu; Eric Peter Wairagala; Samuel Oyerinde; Clemencia Siro; Tobius Saul Bateesa; Temilola Oloyede; Yvonne Wambui; Victor Akinode; Deborah Nabagereka; Maurice Katusiime; Ayodele Awokoya; Mouhamadane Mboup; Dibora Gebreyohannes; Henok Tilaye; Kelechi Nwaike; Degaga Wolde; Abdoulaye Faye; Blessing Sibanda; Orevaoghene Ahia; Bonaventure F. P. Dossou; Kelechi Ogueji; Thierno Ibrahima Diop; Abdoulaye Diallo; Adewale Akinfaderin; Tendai Marengereke; Salomey Osei
We take a step towards addressing the under-representation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer-learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
Sources: Transactions of the Association for Computational Linguistics (article, 2021, peer-reviewed, CC BY); arXiv.org e-Print Archive (preprint, 2021); Hal-Diderot (article, 2021; full text: https://hal.inria.fr/hal-03350962/document).
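NER datasets of this kind are commonly distributed in a two-column CoNLL-style layout: one token and one BIO tag per line, with a blank line between sentences. The reader below handles that generic layout; the sample text and names in it are invented for illustration, not taken from the dataset.

```python
# Minimal reader for two-column CoNLL-style NER data.
def read_conll(text):
    """Return a list of (tokens, tags) pairs, one per sentence."""
    sentences, tokens, tags = [], [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:                      # blank line ends a sentence
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        token, tag = line.split()         # two whitespace-separated columns
        tokens.append(token)
        tags.append(tag)
    if tokens:                            # flush the final sentence
        sentences.append((tokens, tags))
    return sentences

sample = "Jide B-PER\nlives O\nin O\nLagos B-LOC\n\nHello O\n"
parsed = read_conll(sample)
```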
Other literature type / Article, 2020. Publisher: Copernicus GmbH.
Authors: W. Hua; Y. Qiao; M. Hou
Abstract. Laser scanning and photogrammetry are useful individual techniques for the digital documentation of cultural heritage sites. However, each technique on its own is of limited use when heritage such as the Great Wall lies in harsh geographical conditions. The Great Wall is usually built on ridges with cliffs on both sides, so constructing scaffolding is very difficult; consequently, the three-dimensional (3D) data obtained from traditional terrestrial laser scanning alone are incomplete. Likewise, because an unmanned aerial vehicle (UAV) cannot enter an enemy tower, UAV photogrammetry alone misses the tower's interior 3D structure. To explore effective methods for completely capturing the 3D data of cultural heritage in a harsh geographical environment, this study establishes a 3D model and the associated digital documentation of the No. 15 enemy tower of the New Guangwu Great Wall using a combination of terrestrial laser scanning and UAV photogrammetry. The paper proposes an integrated data-collection method and reduces the layout of image control points using RTK-UAV technology, which improves work efficiency and reduces risk. The interior structure of the enemy tower was captured by laser scanning, the exterior by UAV photogrammetry, and the two datasets were fused using the ICP algorithm. The result is complete, high-quality 3D digital documentation of the Great Wall enemy tower; the data can be displayed digitally and can help heritage experts with the Great Wall's restoration. This study demonstrates the potential of integrating terrestrial laser scanning and UAV photogrammetry for the 3D digital documentation of cultural heritage sites.
Sources: DOAJ; ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (article, 2020, peer-reviewed, CC BY, via Crossref).
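The ICP fusion the record describes alternates between matching points across the two clouds and computing the best rigid motion for the current matches. The sketch below implements only that second step (the Kabsch/Procrustes solution with known correspondences), which is the geometric core of ICP; a full ICP loop would re-estimate correspondences between such steps.

```python
# Best rigid transform (rotation R, translation t) mapping src onto dst,
# assuming point i of src corresponds to point i of dst.
import numpy as np

def rigid_align(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate and shift a random cloud, then recover the motion.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = cloud @ R_true.T + t_true
R, t = rigid_align(cloud, moved)
```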
Conference object / Article, 2020 (United Kingdom). Publisher: Association for Computational Linguistics (ACL).
Authors: Nekoto, Wilhelmina; Marivate, Vukosi; Matsila, Tshinondiwa; Fasubaa, Timi; Kolawole, Tajudeen; Fagbohungbe, Taiwo; Akinola, Solomon Oluwole; Muhammad, Shamsuddeen Hassan; Kabongo, Salomon; Osei, Salomey; Freshia, Sackey; Niyongabo, Rubungo Andre; Macharm, Ricky; Ogayo, Perez; Ahia, Orevaoghene; Meressa, Musie; Adeyemi, Mofe; Mokgesi-Selinga, Masabata; Okegbemi, Lawrence; Martinus, Laura Jane; Tajudeen, Kolawole; Degila, Kevin; Ogueji, Kelechi; Siminyu, Kathleen; Kreutzer, Julia; Webster, Jason; Ali, Jamiil Toure; Abbott, Jade; Orife, Iroro; Ezeani, Ignatius; Dangana, Idris Abdulkabir; Kamper, Herman; Elsahar, Hady; Duru, Goodness; Kioko, Ghollah; Murhabazi, Espoir; van Biljon, Elan; Whitenack, Daniel; Onyefuluchi, Christopher; Emezue, Chris; Dossou, Bonaventure; Sibanda, Blessing; Bassey, Blessing Itoro; Olabiyi, Ayodele; Ramkilowan, Arshath; Öktem, Alp; Akinfaderin, Adewale; Bashir, Abdallah
Research in NLP lacks geographic diversity, and the question of how NLP can be scaled to low-resourced languages has not yet been adequately solved. "Low-resourced"-ness is a complex problem that goes beyond data availability and reflects systemic problems in society. In this paper, we focus on the task of machine translation (MT), which plays a crucial role in information accessibility and communication worldwide. Despite immense improvements over the past decade, MT remains centered on a few high-resourced languages. As MT researchers cannot solve the problem of low-resourcedness alone, we propose participatory research as a means of involving all the agents required in the MT development process, and we demonstrate its feasibility and scalability with a case study on MT for African languages. Its implementation leads to a collection of novel translation datasets and MT benchmarks for over 30 languages, with human evaluations for a third of them, and enables participants without formal training to make a unique scientific contribution. Benchmarks, models, data, code, and evaluation results are released at https://github.com/masakhane-io/masakhane-mt. (Findings of EMNLP 2020; updated benchmarks)
Sources: Lancaster EPrints; arXiv.org e-Print Archive (preprint, 2020); Datacite (arXiv record).
Publication: Conference object / Article, 2019. Publisher: IEEE. Authors: Li, Yehao; Yao, Ting; Pan, Yingwei; Chao, Hongyang; Mei, Tao.
Image captioning has received significant attention, with remarkable recent improvements. Nevertheless, images in the wild encapsulate rich knowledge and cannot be sufficiently described by models built on image-caption pairs containing only in-domain objects. In this paper, we propose to address the problem by augmenting standard deep captioning architectures with object learners. Specifically, we present Long Short-Term Memory with Pointing (LSTM-P), a new architecture that facilitates vocabulary expansion and produces novel objects via a pointing mechanism. Technically, object learners are first pre-trained on available object recognition data. Pointing in LSTM-P then balances, at each time step of the decoding stage, the probability of generating a word through the LSTM against copying a word from the recognized objects. Furthermore, our captioning encourages global coverage of objects in the sentence. Extensive experiments are conducted on both the held-out COCO image captioning and ImageNet datasets for describing novel objects, and superior results are reported compared with state-of-the-art approaches. Most remarkably, we obtain an average F1 score of 60.9% on the held-out COCO dataset. Comment: CVPR 2019
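The copy-versus-generate balance that the LSTM-P abstract describes can be sketched as a simple mixture of two distributions. The sketch below is illustrative only: the function name, the scalar `p_gen`, and the toy probabilities are assumptions for exposition, not the paper's implementation (which computes these quantities inside the decoder).

```python
import numpy as np

def pointing_mix(p_gen, vocab_probs, copy_probs):
    """Blend a decoder's vocabulary distribution with a copy
    distribution over recognized objects, pointing-style.
    p_gen in [0, 1] is the probability of generating from the
    vocabulary rather than copying a recognized object word."""
    vocab_probs = np.asarray(vocab_probs, dtype=float)
    copy_probs = np.asarray(copy_probs, dtype=float)
    mixed = p_gen * vocab_probs + (1.0 - p_gen) * copy_probs
    return mixed / mixed.sum()  # renormalize for numerical safety

# Toy example: a word seen only by the object recognizer (index 2)
# gets probability mass even though the vocabulary model assigns none.
mixed = pointing_mix(0.7, [0.5, 0.5, 0.0], [0.0, 0.2, 0.8])
```

With `p_gen = 1.0` the mixture reduces to the vocabulary distribution; lowering `p_gen` shifts mass toward the recognized-object words, which is how a pointing decoder can emit objects outside its caption-training vocabulary.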
Sources: arXiv.org e-Print Archive (preprint, 2019); Crossref (conference object, 2019, peer-reviewed; License: IEEE Copyright); Datacite (article, 2019; License: arXiv Non-Exclusive Distribution).
Publication: Article, 2018. Publisher: IOP Publishing. Authors: Dmitry Gets; Artur Ishteev; Tatiana Liashenko; Danila Saranin; Sergey V. Makarov; Anvar A. Zakhidov.
Organic-inorganic halide perovskites have recently emerged as a promising material for highly efficient light-emitting diodes (LEDs) and solar cells (SCs). Although the efficiencies of both perovskite SCs and LEDs are already among the best, developing a dual-function perovskite device that works in both regimes with high efficiency remains challenging. Here we demonstrate that a dual-function device based on the mixed-halide perovskite CH3NH3PbBr2I can be switched from SC to LED with a low threshold voltage Vth < 2 V by exposure to sunlight at open circuit (Voc) or at a small bias voltage of Vpol ~ 1 - 2 V. Such photo-poling creates an in-situ p-i-n junction via migration of methylammonium (CH3NH3+, MA+) and I-/Br- ions to the interfaces, lowering charge-injection barriers and self-balancing injection currents in the perovskite LED. We show that before photo-poling the electroluminescence (EL) is highly unstable in the LED regime, whereas after photo-poling the stabilized EL exhibits unusual dynamics, increasing with time and poling-cycle number, while Vth and the injection current decrease over cycling runs. Additionally, photo-induced and current-induced halide segregation accumulates with cycling; this is beneficial for the LED, increasing its efficiency and brightness, but reversibly degrades photovoltaic (PV) performance, which can be easily recovered. 15 pages; 4 figures; 1 table
Sources: arXiv.org e-Print Archive (preprint, 2018); Journal of Physics: Conference Series (article, 2018, peer-reviewed; License: CC BY); Datacite (article, 2018; License: arXiv Non-Exclusive Distribution).
Publication: Conference object / Article, 2018. Publisher: IEEE. Authors: Li, Yehao; Yao, Ting; Pan, Yingwei; Chao, Hongyang; Mei, Tao.
Automatically describing a video in natural language is regarded as a fundamental challenge in computer vision. The problem is nevertheless non-trivial, especially when a video contains multiple events worthy of mention, as often happens in real videos. A valid question is how to temporally localize and then describe events, which is known as "dense video captioning." In this paper, we present a novel framework for dense video captioning that unifies the localization of temporal event proposals and the sentence generation for each proposal by jointly training them in an end-to-end manner. To combine these two worlds, we integrate a new design, namely descriptiveness regression, into a single-shot detection structure to infer the descriptive complexity of each detected proposal via sentence generation. This in turn adjusts the temporal location of each event proposal. Our model differs from existing dense video captioning methods in that we propose a joint and global optimization of detection and captioning, and the framework uniquely capitalizes on an attribute-augmented video captioning architecture. Extensive experiments are conducted on the ActivityNet Captions dataset, and our framework shows clear improvements over state-of-the-art techniques. More remarkably, we obtain a new record: a METEOR of 12.96% on the ActivityNet Captions official test set. CVPR 2018 Spotlight, Rank 1 in ActivityNet Captions Challenge 2017
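The idea of descriptiveness regression adjusting temporal proposals can be sketched schematically: each detected proposal carries a detection confidence plus regressed quantities (a "worth describing" score and boundary adjustments), and refinement applies the offsets and combines the scores for ranking. All names, fields, and the combination rule below are illustrative assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    start: float                # proposal start time, seconds
    end: float                  # proposal end time, seconds
    event_score: float          # detector confidence
    descriptiveness: float      # regressed "worth describing" score
    center_offset: float = 0.0  # regressed shift of the temporal center (hypothetical form)
    length_scale: float = 1.0   # regressed rescaling of the duration (hypothetical form)

def refine(p: Proposal):
    """Apply the regressed offsets to a proposal's temporal location,
    then fuse detection confidence and descriptiveness into one rank
    score. Returns (new_start, new_end, rank_score)."""
    center = 0.5 * (p.start + p.end) + p.center_offset
    half = 0.5 * (p.end - p.start) * p.length_scale
    score = p.event_score * p.descriptiveness
    return center - half, center + half, score
```

In this toy form, a proposal with high detector confidence but low predicted descriptiveness is ranked below one whose content is easier to caption, which mirrors the abstract's point that describability feeds back into proposal selection.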
Sources: arXiv.org e-Print Archive (preprint, 2018); Datacite (article, 2018; License: arXiv Non-Exclusive Distribution).