plantl-gob-es / lm-spanish

Official source for Spanish Language Models and resources made at BSC-TeMU within the "Plan de las Tecnologías del Lenguaje" (Plan-TL).

License: Apache License 2.0

Python 100.00%
language-model embeddings corpora benchmarks transformers nlp

lm-spanish's Introduction

Spanish Language Models 💃🏻

A repository part of the MarIA project.

Corpora 📃

Corpora | Number of documents | Number of tokens | Size (GB)
BNE     | 201,080,084         | 135,733,450,668  | 570

Models 🤖

  • new ✨ Ǎguila-7B: https://huggingface.co/projecte-aina/aguila-7b

    A 7B-parameter LLM trained on a mixture of Spanish, Catalan and English data, adding up to a total of 26B tokens. It uses the Falcon-7B model, a state-of-the-art English language model openly released by the Technology Innovation Institute, as a starting point. Read more here

  • RoBERTa-base BNE: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne

  • RoBERTa-large BNE: https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne

    Transformer-based masked language models for the Spanish language. They are based on the RoBERTa base and large models and have been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019.

  • longformer-base-4096-bne-es: https://huggingface.co/PlanTL-GOB-ES/longformer-base-4096-bne-es

    The Longformer version of the roberta-base-bne masked language model for the Spanish language. This model allows larger contexts (up to 4096 tokens) to be processed as input without the need for additional aggregation strategies. The pretraining started from the roberta-base-bne checkpoint and continued with MLM on both short and long documents in Spanish (a short usage sketch is included in the examples below).

  • GPT2-base BNE: https://huggingface.co/PlanTL-GOB-ES/gpt2-base-bne

  • GPT2-large BNE: https://huggingface.co/PlanTL-GOB-ES/gpt2-large-bne

    Transformer-based models for the Spanish language. They are based on the GPT-2 model and have been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019 (a text-generation sketch is included in the examples below).

See the results achieved on several tasks below.

Usage example ⚗️

For the RoBERTa-base

from pprint import pprint

from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline

# Load the BNE RoBERTa-base tokenizer and masked language model from the Hub
tokenizer_hf = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
model = AutoModelForMaskedLM.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
model.eval()

# Build a fill-mask pipeline and print the top predictions for the masked token
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = "¡Hola <mask>!"
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])

For the RoBERTa-large

from pprint import pprint

from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline

# Load the BNE RoBERTa-large tokenizer and masked language model from the Hub
tokenizer_hf = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-large-bne')
model = AutoModelForMaskedLM.from_pretrained('PlanTL-GOB-ES/roberta-large-bne')
model.eval()

# Build a fill-mask pipeline and print the top predictions for the masked token
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = "¡Hola <mask>!"
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])
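
For the GPT2-base BNE, a minimal text-generation sketch (the prompt and generation parameters are illustrative, not taken from the model card)

from transformers import pipeline

# Build a text-generation pipeline with the BNE GPT-2 base model
generator = pipeline('text-generation', model='PlanTL-GOB-ES/gpt2-base-bne')

# Sample two continuations for a short Spanish prompt
outputs = generator('La inteligencia artificial', max_length=30, do_sample=True, num_return_sequences=2)
for out in outputs:
    print(out['generated_text'])

For the Longformer, a minimal sketch showing that inputs of up to 4096 tokens are accepted (the sample text is illustrative)

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/longformer-base-4096-bne-es')
model = AutoModelForMaskedLM.from_pretrained('PlanTL-GOB-ES/longformer-base-4096-bne-es')
model.eval()

# Encode a long document; the tokenizer caps the sequence at 4096 tokens
long_text = 'Texto largo de ejemplo. ' * 800
inputs = tokenizer(long_text, truncation=True, max_length=4096, return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)
print(inputs['input_ids'].shape, outputs.logits.shape)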

Fine-tuned models 🧗🏼‍♀️🏇🏼🤽🏼‍♀️🏌🏼‍♂️🏄🏼‍♀️

For a complete list, refer to https://huggingface.co/PlanTL-GOB-ES
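
As an illustration, a minimal sketch using one of the fine-tuned checkpoints for named entity recognition (the model id below is an assumption based on the organization's naming scheme; check the Hub listing for the exact names):

from transformers import pipeline

# Hedged example: token classification with a fine-tuned NER checkpoint
ner = pipeline('token-classification',
               model='PlanTL-GOB-ES/roberta-base-bne-capitel-ner',
               aggregation_strategy='simple')
print(ner('El Barcelona Supercomputing Center está en Barcelona.'))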

Other Spanish Language Models 👩‍👧‍👦

Domain-specific language models:

Word embeddings 🔤

spaCy models

Datasets 🗂️

For a complete list, refer to https://huggingface.co/PlanTL-GOB-ES
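
For example, a minimal sketch loading one of the datasets with the datasets library (the dataset id is an assumption based on the organization's naming scheme; depending on your datasets version, script-based datasets may also require trust_remote_code=True):

from datasets import load_dataset

# Hedged example: load the SQAC question-answering dataset from the Hub
sqac = load_dataset('PlanTL-GOB-ES/SQAC')
print(sqac)                      # available splits and number of examples
print(sqac['train'][0].keys())   # fields of a single example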

EvalES: The Spanish Evaluation Benchmark 🏆

The EvalES benchmark consists of 10 tasks: Named Entity Recognition and Classification (CoNLL-NERC and CAPITEL-NERC), Part-of-Speech Tagging (UD-POS and CAPITEL-POS), Text Classification (MLDoc), Paraphrase Identification (PAWS-X), Semantic Textual Similarity (STS), Question Answering (SQAC), Textual Entailment (XNLI) and Intent Classification (MASSIVE).

Results ✅

Dataset      | Metric   | RoBERTa-b | RoBERTa-l | BETO*     | mBERT  | BERTIN** | Electricidad***
MLDoc        | F1       | 0.9664    | 0.9702    | 0.9714🔥  | 0.9617 | 0.9668   | 0.9565
CoNLL-NERC   | F1       | 0.8851🔥  | 0.8823    | 0.8759    | 0.8691 | 0.8835   | 0.7954
CAPITEL-NERC | F1       | 0.8960    | 0.9051🔥  | 0.8772    | 0.8810 | 0.8856   | 0.8035
PAWS-X       | F1       | 0.9020    | 0.9150🔥  | 0.8930    | 0.9000 | 0.8965   | 0.9045
UD-POS       | F1       | 0.9907🔥  | 0.9904    | 0.9900    | 0.9886 | 0.9898   | 0.9818
CAPITEL-POS  | F1       | 0.9846    | 0.9856🔥  | 0.9836    | 0.9839 | 0.9847   | 0.9816
SQAC         | F1       | 0.7923    | 0.8202🔥  | 0.7923    | 0.7562 | 0.7678   | 0.7383
STS          | Combined | 0.8533🔥  | 0.8411    | 0.8159    | 0.8164 | 0.7945   | 0.8063
XNLI         | Accuracy | 0.8016    | 0.8263🔥  | 0.8130    | 0.7876 | 0.7890   | 0.7878
MASSIVE      | Accuracy | 0.8605    | 0.8722    | 0.8732🔥  | 0.8504 | 0.8500   | 0.8517

* A model based on the BERT architecture.

** A model based on the RoBERTa architecture.

*** A model based on the ELECTRA architecture.

For more information, refer to https://benchmark.plantl.bsc.es/

Demos

Cite 📣

@article{gutierrezfandino2022,
	author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
	title = {MarIA: Spanish Language Models},
	journal = {Procesamiento del Lenguaje Natural},
	volume = {68},
	number = {0},
	year = {2022},
	issn = {1989-7553},
	url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
	pages = {39--60}
}

Contact 📧

📋 We are interested in (1) extending our corpora to train larger models and (2) training/evaluating the models on other tasks.

For questions regarding this work, contact [email protected]

Disclaimer

The models published in this repository are intended for a generalist purpose and are made available to third parties. These models may have biases and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including those regarding the use of artificial intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.


lm-spanish's People

Contributors

asier-gutierrez, gonzalez-agirre, joanllop, mmarimon


lm-spanish's Issues

Ǎguila dataset availability

Hi! Thanks for all the work!

I would like to ask if the dataset used for the training of the aguila-7b model is available for testing/downloading.

Potential issues with HF GPT2 Models

Hello,

I am using the GPT-2 models available on HF and running into a few issues. Firstly, there seems to be an issue with the tokenizer. Trying to calculate perplexity using the evaluate module, as follows:

from evaluate import load

# Load the perplexity metric and score a sample sentence with the BNE GPT-2 model
perplexity = load("perplexity", module_type="metric")
results = perplexity.compute(predictions=["Hola, como estas?"], model_id="PlanTL-GOB-ES/gpt2-base-bne", device="cpu")

Gives the following error:

 ...
  File "/ikerlariak/aormazabal024/PhD/Poetry-Generation/demo/poetry-env-traganarru/lib/python3.8/site-packages/torch/nn/functional.py", line 2199, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

This seems to be related to the special tokens <pad>, <s>, </s> and <unk> not being properly set (even though they are used by the evaluate module), as the only special token added in the tokenizer is <|endoftext|>. One can manually fix it for the local snapshot:

# Manually register the special tokens and save the patched tokenizer to the local snapshot
tokenizer.pad_token = '<pad>'
tokenizer.bos_token = '</s>'
tokenizer.eos_token = '</s>'
tokenizer.unk_token = '<unk>'
tokenizer.save_pretrained('[snapshot-path]')

However, even after fixing this, I am getting quite high perplexities compared to the 10-13 reported in the paper for all the sentences I am trying (assuming per-word perplexity is reported). Is it possible there was an issue when converting from fairseq to HF, and are the original fairseq models available somewhere to compare against? Or maybe I am making a mistake when calculating the perplexity: was there any tokenization done to the text apart from BPE (e.g. replacing newlines with a special token, which is pretty standard in fairseq)?

GPT-2 status and GPT-J-6B

I would like to ask about the status of the GPT-2 model. Will it arrive on Hugging Face soon?

I would also like to ask whether you intend to train GPT-J-6B. Training this model would be impossible for most people due to its hardware requirements, but you have MareNostrum, the dataset and the previous GPT-2 version.

Dataset Availability

Hi!

Thanks for the work, I love it. I was wondering if the final clean corpus is available somewhere.

Tokenizer rare subwords

Hi, first of all, many thanks for this amazing work on Spanish 😍.

from transformers import AutoModelForMaskedLM
from transformers import AutoTokenizer

model_checkpoint = "PlanTL-GOB-ES/roberta-large-bne"
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

# Count the vocabulary entries that contain the character 'Ġ'
rare_tokens_vocab = [word for word in tokenizer.vocab if 'Ġ' in word]
print(len(rare_tokens_vocab))
# 37695 (out of a vocabulary of 50262)

Almost 75% of the tokens contain the "Ġ" character. It is really strange! Is it due to dirty text in the corpus, or what is the reason for so many tokens with "Ġ"?
