google-deepmind / recurrentgemma
Open weights language model from Google DeepMind, based on Griffin.
☆647 · Updated 2 months ago
Alternatives and similar repositories for recurrentgemma
Users interested in recurrentgemma are comparing it to the libraries listed below.
- A small code base for training large models ☆309 · Updated 3 months ago
- Annotated version of the Mamba paper ☆487 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… (see the SAE sketch after this list) ☆622 · Updated 5 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆904 · Updated 3 months ago
- Visualize the intermediate output of Mistral 7B ☆368 · Updated 7 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping" ☆373 · Updated last year
- ☆275 · Updated last year
- Code repository for the UltraFastBERT paper ☆517 · Updated last year
- A pure NumPy implementation of Mamba ☆224 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆318 · Updated 10 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆557 · Updated 7 months ago
- Reference implementation of the Megalodon 7B model ☆524 · Updated 3 months ago
- ☆307 · Updated last year
- ☆194 · Updated 2 weeks ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- Fast bare-bones BPE for modern tokenizer training (see the BPE merge sketch after this list) ☆164 · Updated 2 months ago
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆560 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆643 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆568 · Updated last week
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆526 · Updated last week
- Scalable and Performant Data Loading ☆291 · Updated last week
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆928 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆536 · Updated 3 months ago
- A JAX research toolkit for building, editing, and visualizing neural networks ☆1,811 · Updated 2 months ago
- ☆526 · Updated last year
- [ICML 2024] CLLMs: Consistency Large Language Models ☆400 · Updated 9 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆186 · Updated 9 months ago
- For optimization algorithm research and development ☆530 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the memory-layer sketch below) ☆344 · Updated 8 months ago
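
For readers comparing the interpretability entries above: a minimal sketch of the sparse-autoencoder idea behind the SAE pipeline entry, not that repo's actual code. The dimensions and the L1 coefficient are made-up placeholders; a real pipeline trains on activations captured from the LLM rather than random tensors.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: an overcomplete dictionary trained to reconstruct
    residual-stream activations under an L1 sparsity penalty."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))   # sparse feature activations
        x_hat = self.decoder(f)           # reconstruction
        return x_hat, f

# Hypothetical shapes; stand-in for activations captured from an LLM.
sae = SparseAutoencoder(d_model=2048, d_hidden=16384)
acts = torch.randn(64, 2048)
x_hat, f = sae(acts)
loss = ((x_hat - acts) ** 2).mean() + 1e-3 * f.abs().mean()
loss.backward()
```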
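Likewise, a toy illustration of the merge loop that BPE tokenizer training performs, assuming nothing about the bare-bones BPE repo's actual API: count adjacent symbol pairs, merge the most frequent pair, repeat. The sample corpus and merge count are invented.

```python
from collections import Counter

def train_bpe(words: list[str], num_merges: int):
    """Toy BPE trainer. Real trainers batch the counting and operate
    over byte sequences, but the core loop is the same."""
    corpus = [list(w) for w in words]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols in corpus:
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        for i, symbols in enumerate(corpus):
            out, j = [], 0
            while j < len(symbols):
                if j + 1 < len(symbols) and (symbols[j], symbols[j + 1]) == best:
                    out.append(merged)     # apply the merge in place
                    j += 2
                else:
                    out.append(symbols[j])
                    j += 1
            corpus[i] = out
    return merges

print(train_bpe(["lower", "lowest", "newer", "wider"], num_merges=5))
```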
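And a minimal sketch of the memory-layer mechanism described in the last entry: a large trainable key-value table where each token reads only its top-k slots, so the extra parameters add little per-token compute. Scoring every key, as done here for simplicity, is the naive part; the actual approach factorizes the keys (product keys) to avoid it. All sizes are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Toy trainable key-value memory: a large parameter table whose
    per-token output mixes only the top-k selected slots."""
    def __init__(self, d_model: int, num_slots: int, k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Embedding(num_slots, d_model)  # the "extra parameters"
        self.query = nn.Linear(d_model, d_model)
        self.k = k

    def forward(self, x):                       # x: (batch, d_model)
        q = self.query(x)
        scores = q @ self.keys.t()              # naive full-key scoring
        top, idx = scores.topk(self.k, dim=-1)  # select k slots per token
        weights = F.softmax(top, dim=-1)
        vals = self.values(idx)                 # (batch, k, d_model)
        return x + (weights.unsqueeze(-1) * vals).sum(dim=1)

mem = MemoryLayer(d_model=512, num_slots=65536, k=4)
out = mem(torch.randn(8, 512))
print(out.shape)  # torch.Size([8, 512])
```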