Software · 2020 · Embargo end date: 29 Dec 2020

Slovenian RoBERTa contextual embeddings model: SloBERTa 1.0

Ulčar, Matej; Robnik-Šikonja, Marko
Open Access
  • Published: 29 Dec 2020
  • Publisher: Faculty of Computer and Information Science, University of Ljubljana
The monolingual Slovene RoBERTa (A Robustly Optimized Bidirectional Encoder Representations from Transformers) model is a state-of-the-art model that represents words/tokens as contextually dependent word embeddings, used for various NLP tasks. Word embeddings can be extracted for every word occurrence and then used to train a model for an end task, but typically the whole RoBERTa model is fine-tuned end-to-end. The SloBERTa model is closely related to the French CamemBERT model. The corpora used for training the model contain 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data prep...
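As the abstract notes, contextual embeddings can be extracted per word occurrence, or the full model can be fine-tuned. A minimal sketch of the extraction path using the Hugging Face Transformers library is shown below; the hub model id `EMBEDDIA/sloberta`, the example sentence, and the `mean_pool` helper are illustrative assumptions, not part of this record.

```python
# Sketch: extracting contextual embeddings from SloBERTa with the
# Hugging Face Transformers library. The model id "EMBEDDIA/sloberta"
# is an assumption; substitute the id of the release you are using.
import torch
from transformers import AutoModel, AutoTokenizer


def mean_pool(last_hidden_state, attention_mask):
    """Average token embeddings into one sentence vector, skipping padding."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    return (last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")
    model = AutoModel.from_pretrained("EMBEDDIA/sloberta")
    # Hypothetical Slovene example sentence.
    batch = tokenizer(["Ljubljana je glavno mesto Slovenije."],
                      return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**batch)
    # out.last_hidden_state holds one contextual vector per subword token;
    # pooling them yields a fixed-size sentence representation.
    sentence_vec = mean_pool(out.last_hidden_state, batch["attention_mask"])
    print(sentence_vec.shape)
```

For an end task, the more common route described in the abstract is to fine-tune the whole model end-to-end (e.g. via `AutoModelForSequenceClassification`) rather than freeze the extracted vectors.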
Funded by
Cross-Lingual Embeddings for Less-Represented Languages in European News Media
  • Funder: European Commission (EC)
  • Project Code: 825153
  • Funding stream: H2020 | RIA
Digital Humanities and Cultural Heritage