LSX-UniWue / SuperGLEBer
German Language Understanding Evaluation Benchmark @NAACL24
☆18 · Updated 2 months ago
Alternatives and similar repositories for SuperGLEBer
Users interested in SuperGLEBer are comparing it to the libraries listed below.
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning · ☆30 · Updated 2 years ago
- [EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" · ☆34 · Updated 5 months ago
- Software for transferring pre-trained English models to foreign languages · ☆19 · Updated 2 years ago
- German Text Embedding Clustering Benchmark · ☆18 · Updated last year
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models · ☆85 · Updated last year
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation" · ☆21 · Updated last year
- A repository containing the code for translating popular LLM benchmarks to German · ☆31 · Updated 2 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers · ☆60 · Updated last year
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling · ☆63 · Updated last year
- Label shift estimation for transfer difficulty with Familiarity · ☆10 · Updated 9 months ago
- ☆65 · Updated 2 years ago
- One-stop shop for running and fine-tuning transformer-based language models for retrieval · ☆60 · Updated 2 weeks ago
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ · ☆24 · Updated 2 years ago
- Simple-to-use scoring function for arbitrarily tokenized texts · ☆47 · Updated 9 months ago
- Official implementation of "GPT or BERT: why not both?" · ☆62 · Updated 4 months ago
- ☆40 · Updated last week
- Code for the SaGe subword tokenizer (EACL 2023) · ☆27 · Updated last year
- GLADIS: A General and Large Acronym Disambiguation Benchmark (EACL 23) · ☆18 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; should work with any Hugging Face text dataset · ☆96 · Updated 2 years ago
- ☆15 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings · ☆22 · Updated 5 months ago
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings · ☆44 · Updated last year
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers · ☆54 · Updated 4 months ago
- A Python package to run inference with Hugging Face language and vision-language checkpoints, wrapping many convenient features · ☆28 · Updated last year
- ☆72 · Updated 2 years ago
- Code for Zero-Shot Tokenizer Transfer · ☆142 · Updated 10 months ago
- ☆27 · Updated 9 months ago
- My NER Experiments with ModernBERT and Ettin · ☆25 · Updated 4 months ago
- KIND: an Italian Multi-Domain Dataset for Named Entity Recognition · ☆15 · Updated 2 years ago
- Implementation of the paper "Fine-Tuning Transformers: Vocabulary Transfer" (https://arxiv.org/pdf/2112.14569.pdf) · ☆20 · Updated 3 years ago