jkallini / mrt5
Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆53 · Updated 4 months ago
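A minimal conceptual sketch of the idea named in the paper title, assuming a PyTorch setup: an early encoder layer scores each byte position, low-scoring positions are dropped, and the remaining layers run over the shorter sequence. Everything here is illustrative, not the repository's actual API; in particular, the fixed `keep_ratio` top-k rule is a simplification of the paper's trained deletion gate, and the `DeleteGate` name and shapes are assumptions.

```python
# Conceptual sketch only (not the mrt5 repo's API): drop byte positions after
# an early encoder layer so later layers process a shorter sequence.
import torch
import torch.nn as nn

class DeleteGate(nn.Module):
    """Scores each byte position; low-scoring positions are dropped (illustrative)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor, keep_ratio: float = 0.5):
        # hidden_states: (batch, seq_len, hidden_size)
        scores = self.scorer(hidden_states).squeeze(-1)             # (batch, seq_len)
        k = max(1, int(hidden_states.size(1) * keep_ratio))         # positions that survive
        kept = scores.topk(k, dim=-1).indices.sort(dim=-1).values   # keep original byte order
        idx = kept.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        return hidden_states.gather(1, idx), kept                   # shortened sequence + kept indices

# Run a few full-length encoder layers, apply the gate, then feed the shorter
# sequence to the remaining layers.
gate = DeleteGate(hidden_size=512)
x = torch.randn(2, 128, 512)                 # 2 sequences of 128 byte embeddings
shortened, kept = gate(x, keep_ratio=0.4)
print(shortened.shape)                       # torch.Size([2, 51, 512])
```

The practical payoff of this kind of gating is that attention in the later layers scales with the shortened length rather than the raw byte length.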
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the repositories listed below.
- ☆91 · Updated last year
- ☆57 · Updated last month
- Official implementation of "BERTs are Generative In-Context Learners" · ☆32 · Updated 10 months ago
- Code for Zero-Shot Tokenizer Transfer · ☆142 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆61 · Updated last year
- MEXMA: Token-level objectives improve sentence representations · ☆42 · Updated last year
- ☆82 · Updated last year
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" · ☆62 · Updated last year
- ☆59 · Updated 2 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers · ☆75 · Updated 7 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. · ☆67 · Updated last year
- Fork of Flame repo for training of some new stuff in development · ☆19 · Updated last month
- A repository for research on medium-sized language models. · ☆77 · Updated last year
- MatFormer repo · ☆70 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention · ☆113 · Updated 3 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. · ☆62 · Updated 7 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" · ☆86 · Updated 4 months ago
- GoldFinch and other hybrid transformer components · ☆45 · Updated last year
- State-of-the-art paired encoder and decoder models (17M-1B params) · ☆58 · Updated 6 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning · ☆64 · Updated last year
- ☆68 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… · ☆63 · Updated 4 months ago
- EvaByte: Efficient Byte-level Language Models at Scale · ☆115 · Updated 9 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… · ☆58 · Updated last week
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… · ☆59 · Updated 10 months ago
- Official implementation of "GPT or BERT: why not both?" · ☆61 · Updated 6 months ago
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention" · ☆54 · Updated last year
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models · ☆122 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆89 · Updated last year
- ☆37 · Updated last year