Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff"
☆248 · Updated Jun 6, 2025
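The paper behind this repo studies linear attention, which replaces softmax attention's quadratic-cost lookup with a fixed-size recurrent state, trading some recall precision for throughput. A minimal NumPy sketch of causal linear attention illustrates the idea; the elementwise `exp` feature map here is an illustrative assumption, not the Taylor feature map used by Based:

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Causal linear attention computed as a running recurrence.

    Instead of softmax(Q K^T) V (quadratic in sequence length), a
    feature map phi lets attention be maintained as a fixed-size state
    S = sum_t phi(k_t) v_t^T, so each new token costs O(d * d_v).
    phi = exp is a generic illustrative choice, not Based's feature map.
    """
    T, d = Q.shape
    phi = np.exp                      # hypothetical feature map
    S = np.zeros((d, V.shape[1]))     # running sum of phi(k_t) v_t^T
    z = np.zeros(d)                   # running normalizer sum of phi(k_t)
    out = np.zeros_like(V)
    for t in range(T):
        q, k, v = phi(Q[t]), phi(K[t]), V[t]
        S += np.outer(k, v)           # update state with current token
        z += k
        out[t] = (q @ S) / (q @ z + 1e-9)  # normalized readout
    return out
```

Because the state `S` has constant size, decoding throughput no longer degrades with context length, which is the throughput side of the recall-throughput tradeoff the paper's title refers to.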
Alternatives and similar repositories for based
Users interested in based are comparing it to the libraries listed below.
- Understand and test language model architectures on synthetic tasks. ☆254 · Updated this week
- ☆58 · Updated Jul 9, 2024
- A MAD laboratory to improve AI architecture designs 🧪 ☆138 · Updated Dec 17, 2024
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Updated Jul 29, 2024
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" ☆169 · Updated Jan 16, 2025
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated May 25, 2024
- ☆36 · Updated Feb 26, 2024
- ☆19 · Updated Dec 4, 2025
- train with kittens! ☆63 · Updated Oct 25, 2024
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated Feb 8, 2024
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-Context Retrieval" ☆27 · Updated Apr 17, 2024
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- Accelerated First Order Parallel Associative Scan ☆194 · Updated Jan 7, 2026
- Annotated version of the Mamba paper ☆497 · Updated Feb 27, 2024
- Here we will test various linear attention designs. ☆62 · Updated Apr 25, 2024
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- Open weights language model from Google DeepMind, based on Griffin. ☆663 · Updated Feb 6, 2026
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated Dec 28, 2024
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆98 · Updated Dec 5, 2024
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,428 · Updated this week
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆343 · Updated Dec 28, 2024
- Long Context Extension and Generalization in LLMs ☆63 · Updated Sep 21, 2024
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated Jun 6, 2024
- Layer-condensed KV cache w/ 10× larger batch size, fewer params, and less computation. Dramatic speed-up with better task performance… ☆156 · Updated Apr 7, 2025
- Reference implementation of the Megalodon 7B model ☆528 · Updated May 17, 2025
- ☆53 · Updated May 20, 2024
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆949 · Updated Nov 16, 2025
- ☆124 · Updated May 28, 2024
- Triton implementation of bi-directional (non-causal) linear attention ☆68 · Updated this week
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆170 · Updated Jan 30, 2025
- A Triton kernel for incorporating bi-directionality in Mamba2 ☆78 · Updated Dec 18, 2024
- A repository for research on medium-sized language models. ☆78 · Updated May 23, 2024
- Triton implementation of the HyperAttention algorithm ☆48 · Updated Dec 11, 2023
- ☆51 · Updated Jan 28, 2024
- ☆17 · Updated Dec 19, 2024
- ☆35 · Updated Nov 22, 2024
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆250 · Updated Jan 31, 2025
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆237 · Updated Oct 14, 2025