jkallini / mrt5
Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆51 · Updated last month
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the repositories listed below.
- ☆57 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆60 · Updated last year
- ☆53 · Updated 9 months ago
- ☆81 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" · ☆32 · Updated 8 months ago
- ☆87 · Updated last year
- MEXMA: Token-level objectives improve sentence representations · ☆42 · Updated 10 months ago
- Code for Zero-Shot Tokenizer Transfer · ☆140 · Updated 10 months ago
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" · ☆61 · Updated 11 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers · ☆66 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers · ☆73 · Updated 4 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers · ☆49 · Updated 4 months ago
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… · ☆32 · Updated 9 months ago
- Official implementation of "GPT or BERT: why not both?" · ☆62 · Updated 3 months ago
- A repository for research on medium-sized language models · ☆78 · Updated last year
- ☆48 · Updated last year
- Fork of the Flame repo for training some new work in development · ☆19 · Updated this week
- A fast implementation of T5/UL2 in PyTorch using Flash Attention · ☆110 · Updated 2 weeks ago
- Official repository for Inheritune · ☆115 · Updated 9 months ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … · ☆114 · Updated last year
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" · ☆44 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ☆131 · Updated 2 weeks ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" · ☆91 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning · ☆65 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… · ☆56 · Updated 3 weeks ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" · ☆101 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale · ☆110 · Updated 6 months ago
- State-of-the-art paired encoder and decoder models (17M-1B params) · ☆53 · Updated 3 months ago
- ☆69 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆38 · Updated 5 months ago