jkallini / mrt5
Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆51 · Updated last month
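MrT5's central idea, per the paper, is a learned delete gate placed at an early encoder layer of a byte-level (ByT5-style) model: the gate scores each byte token and removes uninformative ones, so the remaining layers run over a shorter sequence. Below is a minimal, hypothetical PyTorch sketch of that gating step; the `DeleteGate` module, its linear scorer, and the fixed 0.5 threshold are illustrative assumptions, not the repository's actual API or training procedure.

```python
import torch
import torch.nn as nn

class DeleteGate(nn.Module):
    """Illustrative sketch of a learned token-deletion gate (not MrT5's actual code).

    Scores each byte-level hidden state and keeps only tokens whose
    keep-probability exceeds a threshold, shortening the sequence
    before the remaining encoder layers run.
    """

    def __init__(self, hidden_size: int, threshold: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)  # per-token keep logit (assumed form)
        self.threshold = threshold

    def forward(self, hidden: torch.Tensor):
        # hidden: (batch, seq_len, hidden_size)
        keep_prob = torch.sigmoid(self.scorer(hidden)).squeeze(-1)  # (batch, seq_len)
        keep_mask = keep_prob > self.threshold                      # hard gate at inference
        # Gather surviving tokens per batch element (ragged; pad in practice).
        shortened = [h[m] for h, m in zip(hidden, keep_mask)]
        return shortened, keep_mask

# Example: gate 512 byte tokens down to the kept subset.
gate = DeleteGate(hidden_size=64)
x = torch.randn(2, 512, 64)
kept, mask = gate(x)
print([t.shape for t in kept])  # two shortened sequences, e.g. roughly half the tokens each
```

This shows only hard, inference-time selection; training such a gate end-to-end typically requires a soft relaxation and a regularizer on the deletion rate, both of which the sketch omits.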
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the libraries listed below.
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆44 · Updated last year
- MEXMA: Token-level objectives improve sentence representations ☆41 · Updated 9 months ago
- ☆81 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆138 · Updated 9 months ago
- ☆57 · Updated 3 weeks ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆73 · Updated 4 months ago
- ☆86 · Updated last year
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆61 · Updated 11 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆90 · Updated last year
- ☆52 · Updated 9 months ago
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 7 months ago
- Official implementation of "GPT or BERT: why not both?" ☆61 · Updated 2 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- ☆48 · Updated last year
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆47 · Updated 3 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆65 · Updated last year
- MatFormer repo ☆64 · Updated 10 months ago
- ☆26 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆56 · Updated last week
- ☆55 · Updated 11 months ago
- The official repository for Inheritune. ☆115 · Updated 8 months ago
- Fork of the Flame repo for training new work in development ☆18 · Updated 2 weeks ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆85 · Updated last month
- ☆80 · Updated last week
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆32 · Updated 8 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆99 · Updated last year
- Official implementation of the ECCV24 paper POA ☆24 · Updated last year