google-deepmind / recurrentgemma
Open weights language model from Google DeepMind, based on Griffin.
☆663 · Updated this week
Alternatives and similar repositories for recurrentgemma
Users interested in recurrentgemma are comparing it to the repositories listed below.
- A small code base for training large models ☆322 · Updated 9 months ago
- Visualize the intermediate output of Mistral 7B ☆384 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆944 · Updated 2 months ago
- Annotated version of the Mamba paper ☆495 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆355 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆628 · Updated 10 months ago
- Code for the UltraFastBERT paper ☆518 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆347 · Updated last year
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping" ☆375 · Updated last year
- A JAX research toolkit for building, editing, and visualizing neural networks ☆1,860 · Updated 7 months ago
- Reference implementation of the Megalodon 7B model ☆529 · Updated 8 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆693 · Updated last week
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆575 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆175 · Updated 7 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆595 · Updated 5 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆549 · Updated 8 months ago
- For optimization algorithm research and development ☆558 · Updated 3 weeks ago
- Accelerate and optimize model performance with streamlined training and serving options in JAX ☆336 · Updated this week
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… ☆547 · Updated 3 weeks ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆939 · Updated last year
- JAX implementation of the Llama 2 model ☆216 · Updated 2 years ago
- PyTorch implementation of models from the Zamba2 series ☆186 · Updated last year
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- A pure NumPy implementation of Mamba ☆222 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆371 · Updated last year
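The last entry above describes the memory-layer idea: a query selects a few entries from a large trainable key/value table, so parameters scale with the table while per-token compute scales with the number of entries retrieved. As a rough illustration only (not that repository's actual code), a minimal sparse key-value lookup can be sketched in NumPy; all names, shapes, and the top-k-then-softmax scheme here are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of a sparse key-value memory lookup: score every key,
# keep only the top-k, and mix the corresponding values. The table size
# (n_keys) adds parameters, but the value mixing touches just k entries.

rng = np.random.default_rng(0)
d, n_keys, k = 16, 1024, 4                 # embedding dim, table size, top-k

keys = rng.standard_normal((n_keys, d))    # trainable keys (frozen here)
values = rng.standard_normal((n_keys, d))  # trainable values (frozen here)

def memory_lookup(query: np.ndarray) -> np.ndarray:
    """Sparse lookup: score all keys, softmax over the k best, mix values."""
    scores = keys @ query                   # (n_keys,) similarity scores
    top = np.argpartition(scores, -k)[-k:]  # indices of the k best keys
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                            # softmax over just k scores
    return w @ values[top]                  # (d,) weighted mix of k values

out = memory_lookup(rng.standard_normal(d))
print(out.shape)  # (16,)
```

In practice (e.g. product-key memories), the table is factored so even the scoring step avoids touching all `n_keys` keys; this sketch keeps the dense scoring for clarity.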