bminixhofer / tokenkit
A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers.
☆61 · Updated 6 months ago
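For context, "transferring across tokenizers" typically means re-initializing a model's embedding matrix for a new vocabulary. The sketch below shows one common heuristic (averaging the source-tokenizer sub-piece embeddings of each new token); it is an illustration only, not tokenkit's actual API, and every name in it is made up.

```python
# Illustrative only: a common embedding-transfer heuristic, NOT tokenkit's API.
# Each new-vocabulary token is initialized as the mean of the embeddings its
# string maps to under the *source* tokenizer.
import torch

def transfer_embeddings(src_emb: torch.Tensor,       # (src_vocab, hidden)
                        src_encode,                   # str -> list[int] under the source tokenizer
                        tgt_vocab: dict[str, int]) -> torch.Tensor:
    tgt_emb = torch.zeros(len(tgt_vocab), src_emb.shape[1])
    for token, idx in tgt_vocab.items():
        piece_ids = src_encode(token)                 # sub-pieces under the old vocabulary
        if piece_ids:
            tgt_emb[idx] = src_emb[piece_ids].mean(dim=0)
    return tgt_emb
```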
Alternatives and similar repositories for tokenkit
Users interested in tokenkit are comparing it to the libraries listed below.
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated last year
- ☆48 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models ☆95 · Updated 10 months ago
- Synthetic Data Generation for Evaluation ☆13 · Updated 11 months ago
- code for training & evaluating Contextual Document Embedding models ☆202 · Updated 8 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 5 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆114 · Updated 8 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆74 · Updated 8 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- ☆38 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 3 months ago
- Code for the paper "Fishing for Magikarp" ☆178 · Updated 8 months ago
- ☆59 · Updated last year
- Official code for the NeurIPS 2025 paper "RAT: Bridging RNN Efficiency and Attention Accuracy in Language Modeling" (https://arxiv.org/abs/25…) ☆23 · Updated last month
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 6 months ago
- This is the official repository for Inheritune. ☆120 · Updated 11 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 2 months ago
- Official code release for "SuperBPE: Space Travel for Language Models" ☆80 · Updated last week
- ☆77 · Updated last year
- ☆59 · Updated 2 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Large language models (LLMs) made easy, EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆78 · Updated last year
- Prune transformer layers ☆74 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- minimal pytorch implementation of bm25 (with sparse tensors; see the scoring sketch after this list) ☆104 · Updated 2 months ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆67 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- BPE modification that implements removal of intermediate tokens during tokenizer training. ☆25 · Updated last year
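For reference, the BM25 entry above boils down to a short scoring formula. Below is a minimal, self-contained PyTorch sketch using dense tensors for clarity (the listed repo uses sparse tensors); all function and variable names are illustrative, not that repo's API.

```python
# Minimal BM25 scoring sketch (dense tensors for clarity; the listed repo
# uses sparse tensors). Names here are illustrative, not the repo's API.
import torch

def bm25_scores(tf: torch.Tensor, query_ids: list[int],
                k1: float = 1.5, b: float = 0.75) -> torch.Tensor:
    """tf: (num_docs, vocab_size) term-frequency counts per document."""
    num_docs = tf.shape[0]
    doc_len = tf.sum(dim=1, keepdim=True)        # |d| for each document
    avgdl = doc_len.mean()                       # average document length
    df = (tf > 0).sum(dim=0).float()             # document frequency per term
    idf = torch.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
    # term-frequency saturation (k1) and length normalization (b)
    denom = tf + k1 * (1 - b + b * doc_len / avgdl)
    weights = idf * tf * (k1 + 1) / denom        # (num_docs, vocab_size)
    return weights[:, query_ids].sum(dim=1)      # one score per document

# toy corpus: 3 documents over a 4-term vocabulary
tf = torch.tensor([[2., 1., 0., 0.],
                   [0., 1., 3., 0.],
                   [1., 0., 0., 2.]])
print(bm25_scores(tf, query_ids=[0, 2]))         # ranks documents for the query
```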