jkallini / mrt5
Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆40 · Updated last month
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the libraries listed below.
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 8 months ago
- ☆78 · Updated 8 months ago
- ☆25 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆27 · Updated 2 months ago
- ☆48 · Updated 6 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆32 · Updated 3 weeks ago
- Aioli: A unified optimization framework for language model data mixing ☆25 · Updated 3 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆83 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆42 · Updated last month
- ☆56 · Updated last week
- ☆50 · Updated last year
- ☆33 · Updated 10 months ago
- Code, results and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family. ☆29 · Updated last month
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆108 · Updated 2 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆63 · Updated last year
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆23 · Updated 3 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆30 · Updated 2 months ago
- Experiments for efforts to train a new and improved T5 ☆77 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- ☆40 · Updated last year
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆32 · Updated 7 months ago
- ☆47 · Updated 8 months ago
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆59 · Updated 5 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 3 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- ☆50 · Updated 11 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆60 · Updated 6 months ago