Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff"
☆251 · Updated Jun 6, 2025
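As background for the list below: "Based"-style models replace softmax attention with linear attention, whose causal form can be computed with a constant-size recurrent state instead of an n × n score matrix. The following is an illustrative sketch only, not code from any repository listed here; `np.exp` stands in as a generic positive feature map (the Based paper itself approximates the exponential kernel, roughly via a truncated Taylor expansion).

```python
import numpy as np

def linear_attention(q, k, v, feature_map=np.exp):
    """Causal linear attention in O(n * d * d_v) time and O(d * d_v) memory.

    Softmax attention materializes an n x n score matrix; linear attention
    instead maintains a running (d x d_v) state, which is why models in this
    family trade some recall quality for much higher decoding throughput.
    """
    q, k = feature_map(q), feature_map(k)        # map to positive features
    state = np.zeros((q.shape[1], v.shape[1]))   # running sum of outer(k_i, v_i)
    z = np.zeros(q.shape[1])                     # running sum of k_i (normalizer)
    out = np.empty_like(v)
    for i in range(q.shape[0]):                  # one token at a time, as in decoding
        state += np.outer(k[i], v[i])
        z += k[i]
        out[i] = (q[i] @ state) / (q[i] @ z + 1e-6)
    return out
```

The per-token loop mirrors autoregressive decoding: each step touches only the fixed-size `state` and `z`, so cost per token is independent of sequence length, unlike softmax attention with a growing KV cache.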
Alternatives and similar repositories for based
Users interested in based are comparing it to the libraries listed below.
- Understand and test language model architectures on synthetic tasks. ☆264 · Updated Mar 22, 2026
- ☆58 · Updated Jul 9, 2024
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Updated Jul 29, 2024
- A MAD laboratory to improve AI architecture designs 🧪 ☆141 · Updated Dec 17, 2024
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" ☆169 · Updated Jan 16, 2025
- train with kittens! ☆64 · Updated Oct 25, 2024
- Here we will test various linear attention designs. ☆62 · Updated Apr 25, 2024
- 🚀 Efficient implementations for emerging model architectures ☆4,823 · Updated this week
- ☆19 · Updated Dec 4, 2025
- Accelerated First Order Parallel Associative Scan ☆197 · Updated Jan 7, 2026
- ☆51 · Updated Jan 28, 2024
- Sequence modeling with Mega. ☆303 · Updated Jan 28, 2023
- Some preliminary explorations of Mamba's context scaling. ☆219 · Updated Feb 8, 2024
- This repo contains code for the paper "Can Foundation Models Help Us Achieve Perfect Secrecy?" ☆24 · Updated Feb 9, 2023
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated May 25, 2024
- ☆36 · Updated Feb 26, 2024
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆561 · Updated Dec 28, 2024
- Open weights language model from Google DeepMind, based on Griffin. ☆666 · Updated Feb 6, 2026
- ☆54 · Updated May 20, 2024
- HGRN2: Gated Linear RNNs with State Expansion ☆57 · Updated Aug 20, 2024
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- Annotated version of the Mamba paper ☆500 · Updated Feb 27, 2024
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated Apr 17, 2024
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆98 · Updated Dec 5, 2024
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆350 · Updated Dec 28, 2024
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆253 · Updated Jan 31, 2025
- Triton implementation of the HyperAttention algorithm ☆48 · Updated Dec 11, 2023
- ☆107 · Updated Mar 9, 2024
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆170 · Updated Jan 30, 2025
- [COLM'25] A Controlled Study on Long Context Extension and Generalization in LLMs ☆64 · Updated Mar 9, 2026
- Reference implementation of the Megalodon 7B model ☆527 · Updated May 17, 2025
- Triton implementation of bi-directional (non-causal) linear attention ☆73 · Updated Mar 1, 2026
- ☆35 · Updated Nov 22, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated May 20, 2024
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆955 · Updated Nov 16, 2025
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆548 · Updated May 16, 2025
- ☆124 · Updated May 28, 2024
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated Apr 26, 2024