An open-weights language model from Google DeepMind, based on Griffin.
☆663 · Feb 6, 2026 · Updated 3 weeks ago
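For context on what the listed repositories implement: Griffin-style models replace part of the attention stack with gated linear recurrences. A minimal illustrative sketch in plain Python (not DeepMind's implementation; the function name and the simplified form h_t = a_t * h_{t-1} + b_t * x_t are assumptions for illustration):

```python
def linear_recurrence(a, b, x):
    """Run the diagonal linear recurrence h_t = a[t]*h[t-1] + b[t]*x[t], with h_0 = 0.

    a, b, x are equal-length lists of per-timestep scalar gates and inputs;
    real models apply this elementwise over a hidden dimension and compute
    the scan in parallel rather than with a Python loop.
    """
    h = 0.0
    states = []
    for a_t, b_t, x_t in zip(a, b, x):
        h = a_t * h + b_t * x_t  # decay previous state, add gated input
        states.append(h)
    return states

# With a_t = 1 and b_t = 1 the recurrence reduces to a running sum.
print(linear_recurrence([1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 2.0, 3.0]))  # → [1.0, 3.0, 6.0]
```

Because the recurrence is associative in (a_t, b_t * x_t), it can also be evaluated with a parallel associative scan, which is what several of the repositories below accelerate.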
Alternatives and similar repositories for recurrentgemma
Users interested in recurrentgemma are comparing it to the libraries listed below.
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆248 · Jun 6, 2025 · Updated 8 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Aug 20, 2024 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Apr 26, 2024 · Updated last year
- Here we will test various linear attention designs. ☆62 · Apr 25, 2024 · Updated last year
- A JAX research toolkit for building, editing, and visualizing neural networks. ☆1,869 · Jun 22, 2025 · Updated 8 months ago
- Accelerated First Order Parallel Associative Scan ☆194 · Jan 7, 2026 · Updated last month
- train with kittens! ☆63 · Oct 25, 2024 · Updated last year
- A simple, performant and scalable Jax LLM! ☆2,148 · Updated this week
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆170 · Jan 30, 2025 · Updated last year
- Reference implementation of Megalodon 7B model ☆528 · May 17, 2025 · Updated 9 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · May 25, 2024 · Updated last year
- Jax implementation of "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆15 · May 10, 2024 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimentation… ☆549 · Updated this week
- ☆35 · Nov 22, 2024 · Updated last year
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆138 · Dec 17, 2024 · Updated last year
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Jul 29, 2024 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,921 · Mar 8, 2024 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆595 · Aug 12, 2025 · Updated 6 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆952 · Nov 16, 2025 · Updated 3 months ago
- ☆58 · Jul 9, 2024 · Updated last year
- Tile primitives for speedy kernels ☆3,183 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,428 · Updated this week
- A repository for research on medium sized language models. ☆78 · May 23, 2024 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Aug 14, 2024 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆254 · Updated this week
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆375 · Jun 11, 2024 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Sequence Modeling ☆67 · Apr 24, 2024 · Updated last year
- ☆36 · Feb 26, 2024 · Updated 2 years ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆988 · Jul 23, 2024 · Updated last year
- Some preliminary explorations of Mamba's context scaling. ☆218 · Feb 8, 2024 · Updated 2 years ago
- seqax = sequence modeling + JAX ☆171 · Jul 23, 2025 · Updated 7 months ago
- Minimalistic large language model 3D-parallelism training ☆2,569 · Feb 19, 2026 · Updated last week
- Schedule-Free Optimization in PyTorch ☆2,256 · May 21, 2025 · Updated 9 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆693 · Jan 26, 2026 · Updated last month
- Official repository of the xLSTM. ☆2,112 · Nov 4, 2025 · Updated 3 months ago