jkallini / mrt5
Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆45 · Updated 5 months ago
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the libraries listed below.
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆61 · Updated 9 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆65 · Updated last year
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆146 · Updated 3 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆46 · Updated 2 months ago
- Code for Zero-Shot Tokenizer Transfer ☆137 · Updated 8 months ago
- MatFormer repo ☆62 · Updated 9 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆88 · Updated last year
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆29 · Updated 7 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆70 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 8 months ago
- MEXMA: Token-level objectives improve sentence representations ☆41 · Updated 8 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆84 · Updated this week
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆116 · Updated last month
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆82 · Updated 10 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆109 · Updated 4 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- This is the official repository for Inheritune. ☆113 · Updated 7 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 11 months ago
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆42 · Updated 11 months ago
- Fork of Flame repo for training of some new stuff in development ☆17 · Updated last week