CLAIRE-Labo / RAT
Official code for the NeurIPS 2025 paper "RAT: Bridging RNN Efficiency and Attention Accuracy in Language Modeling" (https://arxiv.org/abs/2507.04416)
☆23 · Updated last month
Alternatives and similar repositories for RAT
Users interested in RAT are comparing it to the libraries listed below.
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆87 · Updated last year
- Official code release for "SuperBPE: Space Travel for Language Models" ☆77 · Updated this week
- Simple and efficient pytorch-native transformer training and inference (batched) ☆79 · Updated last year
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆59 · Updated 6 months ago
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated 11 months ago
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆68 · Updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 9 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆114 · Updated 8 months ago
- Minimum Description Length probing for neural network representations ☆20 · Updated 11 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆78 · Updated last year
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 5 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆41 · Updated 2 weeks ago
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated 11 months ago
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆54 · Updated 5 months ago
- some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- ☆53 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆76 · Updated last year
- Replicating O1 inference-time scaling laws ☆91 · Updated last year
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- A repository for research on medium sized language models. ☆77 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆90 · Updated 5 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Universal Reasoning Model ☆113 · Updated 2 weeks ago
- A challenging aggregation benchmark for long-context models ☆21 · Updated 2 months ago
- ☆57 · Updated last year
- ☆91 · Updated last year
- Code and training scripts for FlexOlmo ☆120 · Updated this week