google-deepmind / recurrentgemma
Open weights language model from Google DeepMind, based on Griffin.
☆614 · Updated 6 months ago
Alternatives and similar repositories for recurrentgemma:
Users interested in recurrentgemma are comparing it to the libraries listed below.
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆505 · Updated 2 months ago
- Annotated version of the Mamba paper ☆469 · Updated 10 months ago
- A small codebase for training large models ☆283 · Updated 3 weeks ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆831 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆492 · Updated 2 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆541 · Updated 2 weeks ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆534 · Updated this week
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆273 · Updated 2 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆644 · Updated this week
- For optimization algorithm research and development. ☆484 · Updated this week
- [ICML 2024] CLLMs: Consistency Large Language Models ☆368 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆219 · Updated last month
- Code for the UltraFastBERT paper ☆513 · Updated 9 months ago
- A JAX research toolkit for building, editing, and visualizing neural networks. ☆1,714 · Updated last month
- Helpful tools and examples for working with flex-attention ☆583 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (a minimal sketch of the idea follows this list). ☆277 · Updated last month
- Code repository for Black Mamba ☆234 · Updated 11 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆732 · Updated this week
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimentation. ☆470 · Updated this week
- Puzzles for exploring transformers ☆331 · Updated last year
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆681 · Updated 2 weeks ago
- Minimalistic large language model 3D-parallelism training ☆1,386 · Updated this week
- Large Context Attention ☆670 · Updated 5 months ago
- Puzzles for learning Triton ☆1,300 · Updated last month
- PyTorch implementation of models from the Zamba2 series. ☆166 · Updated last month
- Visualize the intermediate output of Mistral 7B ☆333 · Updated 11 months ago
- Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead ☆210 · Updated last week
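
The memory-layers entry above describes a trainable key-value lookup that adds parameters without adding much compute. Below is a minimal PyTorch sketch of that idea, not the listed repository's actual implementation: the class name `KeyValueMemoryLayer` and the `num_slots`/`top_k` parameters are illustrative, and the dense scoring step is a simplification (real memory layers typically use a product-key decomposition so they never score every slot).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValueMemoryLayer(nn.Module):
    """Toy sparse key-value memory (illustrative; not the official implementation).

    A large trainable bank of (key, value) pairs adds parameters to the model,
    but each token reads from only its top-k slots, so per-token compute stays
    small relative to the parameter count.
    """
    def __init__(self, d_model: int, num_slots: int = 4096, top_k: int = 8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * d_model**-0.5)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * d_model**-0.5)
        self.query_proj = nn.Linear(d_model, d_model)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                       # (batch, seq, d_model)
        # Simplification: score every slot densely. Real memory layers use a
        # product-key decomposition to find the top-k without a full scan.
        scores = q @ self.keys.T                     # (batch, seq, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)      # attention over k slots only
        gathered = self.values[top_idx]              # (batch, seq, k, d_model)
        return x + (weights.unsqueeze(-1) * gathered).sum(dim=-2)

# Smoke test: shapes are preserved, as for a residual feed-forward block.
layer = KeyValueMemoryLayer(d_model=64)
out = layer(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```

Only the k gathered value rows enter the weighted sum, which is why capacity can grow via `num_slots` while the per-token arithmetic stays close to that of a standard feed-forward block.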