gautierdag / tokenizer-bench
Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation"
☆19 · Updated last year
Alternatives and similar repositories for tokenizer-bench
Users interested in tokenizer-bench are comparing it to the repositories listed below.
- ☆12 · Updated 7 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆21 · Updated 2 weeks ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 2 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Code for SaGe subword tokenizer (EACL 2023) ☆25 · Updated 7 months ago
- Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https… ☆44 · Updated 11 months ago
- ☆100 · Updated 2 years ago
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- LTG-Bert ☆33 · Updated last year
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆72 · Updated last year
- ☆46 · Updated 3 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- ☆51 · Updated 2 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP'2021 ☆29 · Updated 2 years ago
- ☆29 · Updated 3 years ago
- Query-focused summarization data ☆42 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- ☆72 · Updated 2 years ago
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators ☆24 · Updated last year
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆17 · Updated last year
- 🌏 Modular retrievers for zero-shot multilingual IR. ☆28 · Updated last year
- SeqScore: Scoring for named entity recognition and other sequence labeling tasks ☆23 · Updated 4 months ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆82 · Updated 10 months ago
- ☆14 · Updated last month
- ☆18 · Updated 11 months ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago