kensho-technologies / pathpiece
PathPiece tokenizer
☆13 · Updated last year
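PathPiece's core idea is to segment text into the fewest possible tokens from a fixed vocabulary, treating tokenization as a shortest-path problem over character positions. The sketch below is a minimal, illustrative dynamic-programming version of that idea; the function name, `max_len` parameter, and toy vocabulary are assumptions for the example, not PathPiece's actual API.

```python
# Hedged sketch: minimum-token segmentation via dynamic programming,
# the shortest-path idea behind PathPiece. Names here are illustrative.

def shortest_path_tokenize(text, vocab, max_len=8):
    n = len(text)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = fewest tokens covering text[:i]
    back = [0] * (n + 1)     # back[i] = start index of the last token
    best[0] = 0
    for i in range(1, n + 1):
        # Try every candidate token ending at position i.
        for j in range(max(0, i - max_len), i):
            if text[j:i] in vocab and best[j] + 1 < best[i]:
                best[i] = best[j] + 1
                back[i] = j
    if best[n] == INF:
        return None  # no full segmentation possible with this vocab
    # Recover the token sequence by walking the backpointers.
    tokens, i = [], n
    while i > 0:
        tokens.append(text[back[i]:i])
        i = back[i]
    return tokens[::-1]

vocab = {"un", "believ", "able", "u", "n", "b", "e", "l", "i", "v", "a"}
print(shortest_path_tokenize("unbelievable", vocab))  # ['un', 'believ', 'able']
```

Because the objective is token count rather than merge order, this differs from greedy BPE decoding: the DP always finds a segmentation with the minimum number of tokens when one exists.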
Alternatives and similar repositories for pathpiece
Users interested in pathpiece are comparing it to the libraries listed below.
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆18 · Updated 2 years ago
- ☆13 · Updated 11 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 5 months ago
- ☆39 · Updated last year
- Do Multilingual Language Models Think Better in English? ☆42 · Updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- ACL22 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost ☆42 · Updated 2 years ago
- ☆15 · Updated 2 months ago
- ☆14 · Updated last year
- ☆101 · Updated 2 years ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆31 · Updated last year
- Code for the SaGe subword tokenizer (EACL 2023) ☆27 · Updated last year
- [EMNLP'23] Official code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" ☆34 · Updated 5 months ago
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- Official code release for "SuperBPE: Space Travel for Language Models" ☆76 · Updated last week
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- Simple and scalable tools for data-driven pretraining data selection. ☆29 · Updated 5 months ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆85 · Updated last year
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation" ☆21 · Updated last year
- ☆10 · Updated 3 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated last year
- Hugging Face RoBERTa with Flash Attention 2 ☆24 · Updated 2 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https… ☆44 · Updated last year
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆53 · Updated 3 months ago
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆47 · Updated 9 months ago