google-deepmind / recurrentgemma
Open weights language model from Google DeepMind, based on Griffin.
☆650 · Updated 3 months ago
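The listing below doesn't explain how the model works, so as a rough pointer to the architecture behind recurrentgemma, here is a minimal NumPy sketch of a gated linear recurrence in the spirit of Griffin's RG-LRU layer. It is illustrative only: the function and parameter names (`rg_lru_sketch`, `W_a`, `W_x`, `Lambda`, the constant `c`) are assumptions for this sketch, not the repository's actual JAX/PyTorch API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rg_lru_sketch(x, W_a, W_x, Lambda, c=8.0):
    """Toy gated linear recurrence in the spirit of Griffin's RG-LRU.

    x:      (seq_len, dim) input sequence
    W_a:    (dim, dim) recurrence-gate projection   (assumed name)
    W_x:    (dim, dim) input-gate projection        (assumed name)
    Lambda: (dim,) learnable decay logits           (assumed name)
    """
    seq_len, dim = x.shape
    a_base = sigmoid(Lambda)               # per-channel decay in (0, 1)
    h = np.zeros(dim)
    outputs = np.zeros_like(x)
    for t in range(seq_len):
        r_t = sigmoid(x[t] @ W_a)          # recurrence gate
        i_t = sigmoid(x[t] @ W_x)          # input gate
        a_t = a_base ** (c * r_t)          # input-dependent decay
        # sqrt(1 - a_t^2) keeps the hidden state's scale roughly constant
        h = a_t * h + np.sqrt(1.0 - a_t**2) * (i_t * x[t])
        outputs[t] = h
    return outputs

# Tiny smoke test with random weights
rng = np.random.default_rng(0)
dim, seq_len = 4, 6
x = rng.normal(size=(seq_len, dim))
W_a = rng.normal(size=(dim, dim)) * 0.1
W_x = rng.normal(size=(dim, dim)) * 0.1
Lambda = rng.normal(size=dim)
print(rg_lru_sketch(x, W_a, W_x, Lambda).shape)  # (6, 4)
```

The input-dependent decay `a_t` combined with the normalising square-root term is what lets a Griffin-style block carry information over long contexts with a fixed-size recurrent state instead of attention over the full sequence.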
Alternatives and similar repositories for recurrentgemma
Users that are interested in recurrentgemma are comparing it to the libraries listed below
- a small code base for training large models ☆310 · Updated 4 months ago
- Annotated version of the Mamba paper ☆489 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆908 · Updated 4 months ago
- Visualize the intermediate output of Mistral 7B ☆368 · Updated 7 months ago
- ☆307 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆164 · Updated 2 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆622 · Updated 5 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆322 · Updated 10 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆373 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆559 · Updated 8 months ago
- ☆279 · Updated last year
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆563 · Updated last year
- A pure NumPy implementation of Mamba. ☆224 · Updated last year
- ☆195 · Updated last week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆537 · Updated 3 months ago
- The repository for the code of the UltraFastBERT paper ☆519 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆651 · Updated last week
- Reference implementation of Megalodon 7B model ☆523 · Updated 3 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆402 · Updated 9 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆575 · Updated last month
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆929 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆184 · Updated 7 months ago
- Normalized Transformer (nGPT) ☆188 · Updated 9 months ago
- Long context evaluation for large language models ☆220 · Updated 6 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆308 · Updated this week
- For optimization algorithm research and development. ☆534 · Updated last week
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆212 · Updated 9 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 3 months ago
- Understand and test language model architectures on synthetic tasks. ☆224 · Updated last month