bminixhofer / tokenkit
A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers.
☆49 · Updated 4 months ago
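For context, a common baseline for cross-tokenizer transfer (a generic heuristic, not necessarily the method tokenkit implements) is to initialize the new tokenizer's embedding matrix by re-encoding each new token with the old tokenizer and averaging the corresponding old embeddings. A minimal PyTorch sketch assuming Hugging Face-style tokenizers; `transfer_embeddings` and its arguments are hypothetical names:

```python
import torch

def transfer_embeddings(old_emb, old_tok, new_tok, dim):
    """Heuristic cross-tokenizer embedding transfer (hypothetical helper):
    each new-vocabulary token is re-encoded with the old tokenizer and
    initialized as the mean of the resulting old-token embeddings."""
    new_vocab = new_tok.get_vocab()  # maps token string -> id
    new_emb = torch.empty(len(new_vocab), dim)
    torch.nn.init.normal_(new_emb, std=0.02)  # fallback for unmappable tokens
    for token, new_id in new_vocab.items():
        # Recover the surface string, then segment it with the old tokenizer.
        text = new_tok.convert_tokens_to_string([token])
        old_ids = old_tok(text, add_special_tokens=False).input_ids
        if old_ids:
            new_emb[new_id] = old_emb[old_ids].mean(dim=0)
    return new_emb
```

Methods like those in tokenkit and the Zero-Shot Tokenizer Transfer work below go beyond this averaging heuristic, but it is the usual starting point.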
Alternatives and similar repositories for tokenkit
Users interested in tokenkit are comparing it to the repositories listed below.
- Code for Zero-Shot Tokenizer Transfer ☆141 · Updated 10 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆110 · Updated 3 weeks ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 7 months ago
- Official code release for "SuperBPE: Space Travel for Language Models" ☆75 · Updated 2 weeks ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 3 months ago
- 🚢 Data Toolkit for Sailor Language Models ☆94 · Updated 8 months ago
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆44 · Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆66 · Updated last month
- Minimal PyTorch implementation of BM25 with sparse tensors (see the sketch after this list) ☆104 · Updated 3 weeks ago
- BPE modification that removes intermediate tokens during tokenizer training. ☆25 · Updated 11 months ago
- Official repository for Inheritune. ☆115 · Updated 9 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multi-task learning ☆188 · Updated 4 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last month
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆53 · Updated 3 months ago
- Official implementation of "GPT or BERT: why not both?" ☆62 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Source code for "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Code for training and evaluating Contextual Document Embedding models ☆200 · Updated 6 months ago
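The BM25 entry above refers here. The sparse-tensor idea, sketched generically (this is not that repository's actual code; all names are hypothetical), is to precompute per-document BM25 term weights in a sparse matrix once, so that scoring a query against every document is a single sparse matrix-vector product:

```python
import math
import torch

def build_bm25_matrix(docs, k1=1.5, b=0.75):
    """Precompute a sparse |docs| x |vocab| matrix of BM25 term weights."""
    vocab, df, entries, doc_lens = {}, {}, [], []
    for i, doc in enumerate(docs):
        terms = doc.lower().split()
        doc_lens.append(len(terms))
        counts = {}
        for t in terms:
            counts[t] = counts.get(t, 0) + 1
        for t, tf in counts.items():
            j = vocab.setdefault(t, len(vocab))
            df[j] = df.get(j, 0) + 1  # document frequency per term id
            entries.append((i, j, tf))
    n, avgdl = len(docs), sum(doc_lens) / len(docs)
    idx, vals = [], []
    for i, j, tf in entries:
        idf = math.log(1 + (n - df[j] + 0.5) / (df[j] + 0.5))
        # Standard BM25 weight for term j in document i.
        w = idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_lens[i] / avgdl))
        idx.append([i, j])
        vals.append(w)
    m = torch.sparse_coo_tensor(torch.tensor(idx).T, torch.tensor(vals),
                                (n, len(vocab))).coalesce()
    return m, vocab

def bm25_scores(matrix, vocab, query):
    """Score all documents against a query with one sparse mat-vec."""
    q = torch.zeros(len(vocab))
    for t in query.lower().split():
        if t in vocab:
            q[vocab[t]] += 1.0
    return torch.sparse.mm(matrix, q.unsqueeze(1)).squeeze(1)
```

For example, `bm25_scores(*build_bm25_matrix(corpus), "tokenizer transfer")` returns one score per document; the index build is the only pass over the corpus.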