jkallini / mrt5
Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆43 · Updated 3 months ago
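The paper's core idea is to learn which byte-level tokens can be deleted inside the encoder so the remaining layers run over a shorter sequence. As a rough illustration only, not the repository's code, the PyTorch sketch below shows the general shape of such a learned delete gate; the class name, the 0.5 threshold, and the shapes are all illustrative assumptions.

```python
# Minimal sketch of a learned "delete gate" (illustrative, not MrT5's implementation):
# a linear scorer assigns each byte-token state a keep probability, and tokens
# below a threshold are dropped, shortening the sequence for later layers.
import torch
import torch.nn as nn

class DeleteGate(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)  # per-token keep/delete logit

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (seq_len, d_model) for a single sequence
        keep_prob = torch.sigmoid(self.scorer(hidden)).squeeze(-1)  # (seq_len,)
        keep_mask = keep_prob > 0.5   # hard drop; 0.5 is an assumed threshold
        return hidden[keep_mask]      # shorter sequence of surviving tokens

gate = DeleteGate(d_model=64)
x = torch.randn(100, 64)        # 100 byte-token hidden states
y = gate(x)
print(x.shape, "->", y.shape)   # roughly half survive with an untrained gate
```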
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the repositories listed below.
- ☆83 · Updated 11 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆65 · Updated last year
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆60 · Updated 8 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆70 · Updated last month
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode…" ☆49 · Updated 3 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆40 · Updated last month
- ☆81 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 11 months ago
- ☆56 · Updated 3 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- MEXMA: Token-level objectives improve sentence representations ☆41 · Updated 7 months ago
- ☆48 · Updated 11 months ago
- Code for Zero-Shot Tokenizer Transfer ☆135 · Updated 6 months ago
- This is the official repository for Inheritune. ☆112 · Updated 5 months ago
- Official implementation of "BERTs are Generative In-Context Learners" ☆31 · Updated 4 months ago
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆29 · Updated 6 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 4 months ago
- Implementations of attention with the softpick function, naive and FlashAttention-2 ☆81 · Updated 3 months ago
- ☆51 · Updated 6 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆56 · Updated last week
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆143 · Updated 2 months ago
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention" ☆53 · Updated 11 months ago
- Official code release for "SuperBPE: Space Travel for Language Models" ☆62 · Updated 3 weeks ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 10 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Updated last year
- GoldFinch and other hybrid transformer components ☆46 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆127 · Updated 11 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 8 months ago