Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models."
☆58 · Sep 25, 2025 · Updated 7 months ago
Alternatives and similar repositories for mrt5
Users interested in mrt5 are comparing it to the repositories listed below.
- Digital texts in Prakrit ☆10 · Sep 14, 2025 · Updated 7 months ago
- Linear Attention for Efficient Bidirectional Sequence Modeling ☆16 · May 13, 2025 · Updated 11 months ago
- ☆11 · Nov 18, 2024 · Updated last year
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆32 · Feb 7, 2025 · Updated last year
- [NeurIPS 2024] Image Understanding Makes for A Good Tokenizer for Image Generation ☆22 · Dec 17, 2024 · Updated last year
- Code for the paper "Greed is All You Need: An Evaluation of Tokenizer Inference Methods" ☆13 · Nov 26, 2024 · Updated last year
- This repo contains VPR models that have been fine-tuned for indoor usage. ☆16 · May 15, 2024 · Updated last year
- Official implementation of "Data Mixture Inference: What do BPE tokenizers reveal about their training data?" ☆18 · May 15, 2025 · Updated 11 months ago
- Official repository of the paper "JIST: Joint Image and Sequence Training for Sequential Visual Place Recognition" ☆24 · Dec 15, 2023 · Updated 2 years ago
- Official repository for BMVC 2022 paper: Global Proxy-based Hard Mining for Visual Place Recognition ☆18 · Mar 7, 2023 · Updated 3 years ago
- ☆18 · Jun 12, 2023 · Updated 2 years ago
- Datapunt open panorama project ☆14 · May 6, 2024 · Updated last year
- Tool to perform paired evaluation of automatic systems ☆13 · Oct 20, 2021 · Updated 4 years ago
- ☆17 · Mar 20, 2025 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆91 · Sep 12, 2025 · Updated 7 months ago
- Implementation of MambaFormer in Pytorch ++ Zeta from the paper: "Can Mamba Learn How to Learn? A Comparative Study on In-Context Learnin… ☆21 · Apr 20, 2026 · Updated 2 weeks ago
- The official repository for "CharacterBERT and Self-Teaching for Improving the Robustness of Dense Retrievers on Queries with Typos", SIGI… ☆16 · May 4, 2022 · Updated 4 years ago
- [CVPR'25] MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization ☆48 · Jul 22, 2025 · Updated 9 months ago
- Large language model Mistral for DNA ☆22 · Sep 12, 2025 · Updated 7 months ago
- triple-encoders is a library for contextualizing distributed Sentence Transformers representations. ☆15 · Sep 3, 2024 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Oct 20, 2022 · Updated 3 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Sep 18, 2025 · Updated 7 months ago
- SCT: An Efficient Self-Supervised Cross-View Training For Sentence Embedding (TACL) ☆16 · Jul 27, 2024 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆76 · Apr 15, 2024 · Updated 2 years ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆68 · Apr 24, 2024 · Updated 2 years ago
- Visual Place Recognition ☆31 · Nov 25, 2025 · Updated 5 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆29 · Apr 17, 2024 · Updated 2 years ago
- Convert Transkribus PAGE-XML to standard PAGE-XML ☆12 · Dec 10, 2025 · Updated 4 months ago
- A system for Named Entity Disambiguation based on Random Walks and Learning to Rank. ☆19 · Feb 26, 2022 · Updated 4 years ago
- Named entity annotation tool ☆28 · Jul 6, 2023 · Updated 2 years ago
- "Graph Convolutions Enrich the Self-Attention in Transformers!" NeurIPS 2024 ☆27 · Mar 19, 2025 · Updated last year
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Jul 13, 2022 · Updated 3 years ago
- ☆98 · Jul 4, 2025 · Updated 10 months ago
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆98 · Aug 18, 2023 · Updated 2 years ago
- Code for the paper "On the Expressivity Role of LayerNorm in Transformers' Attention" (Findings of ACL'2023) ☆59 · Sep 27, 2024 · Updated last year
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022: Findings) (stay tuned & more will be updated) ☆22 · Oct 17, 2022 · Updated 3 years ago
- Dataset and Baselines for "You are here! Finding position and orientation on a 2D map from a single image: The Flatlandia localization pr… ☆11 · Sep 15, 2023 · Updated 2 years ago
- Character-level conversion between Hebrew text and Latin transliteration using deep learning - a demonstration of seq2seq training. ☆14 · Jun 27, 2023 · Updated 2 years ago
- [ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆92 · Oct 15, 2024 · Updated last year