jkallini / mrt5
Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆43 · Updated 3 months ago
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the repositories listed below.
- ☆56 · Updated 2 months ago
- ☆82 · Updated 10 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆64 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆59 · Updated 7 months ago
- Code for Zero-Shot Tokenizer Transfer ☆133 · Updated 6 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- MatFormer repo ☆47 · Updated 7 months ago
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆29 · Updated 5 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 9 months ago
- ☆48 · Updated 5 months ago
- ☆81 · Updated last year
- BPE modification that removes intermediate tokens during tokenizer training. ☆24 · Updated 7 months ago
- ☆69 · Updated last month
- Official implementation of "BERTs are Generative In-Context Learners" ☆30 · Updated 4 months ago
- GoldFinch and other hybrid transformer components ☆46 · Updated 11 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆117 · Updated this week
- ☆68 · Updated 11 months ago
- Implementations of attention with the softpick function, naive and FlashAttention-2 ☆80 · Updated 2 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆68 · Updated 3 weeks ago
- This is the official repository for Inheritune. ☆112 · Updated 5 months ago
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆46 · Updated 3 months ago
- PyTorch implementation of models from the Zamba2 series. ☆184 · Updated 5 months ago
- MEXMA: Token-level objectives improve sentence representations ☆41 · Updated 6 months ago
- ☆52 · Updated 8 months ago
- ☆48 · Updated 10 months ago
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … ☆113 · Updated last year
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆44 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Experiments from efforts to train a new and improved T5 ☆76 · Updated last year