test-time-training / ttt-lm-jax
Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
☆453 · Nov 2, 2025 · Updated 3 months ago
Alternatives and similar repositories for ttt-lm-jax
Users interested in ttt-lm-jax are comparing it to the libraries listed below.
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ☆1,320 · Jul 14, 2024 · Updated last year
- Inference Speed Benchmark for Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ☆83 · Jul 14, 2024 · Updated last year
- ☆44 · Nov 1, 2025 · Updated 3 months ago
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- Official Code Repository for the paper "Key-value memory in the brain" · ☆31 · Feb 25, 2025 · Updated 11 months ago
- ☆58 · Jul 9, 2024 · Updated last year
- Official PyTorch Implementation of the Longhorn Deep State Space Model · ☆56 · Dec 4, 2024 · Updated last year
- Some preliminary explorations of Mamba's context scaling. · ☆218 · Feb 8, 2024 · Updated 2 years ago
- PyTorch implementation of models from the Zamba2 series. · ☆186 · Jan 23, 2025 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆248 · Jun 6, 2025 · Updated 8 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,379 · Updated this week
- ☆19 · Dec 4, 2025 · Updated 2 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ☆18 · Mar 15, 2024 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · May 25, 2024 · Updated last year
- FlashRNN — omitted; FlashRNN: Fast RNN Kernels with I/O Awareness · ☆174 · Oct 20, 2025 · Updated 3 months ago
- ☆53 · May 20, 2024 · Updated last year
- Here we will test various linear attention designs. · ☆62 · Apr 25, 2024 · Updated last year
- Mamba SSM architecture · ☆17,186 · Jan 12, 2026 · Updated last month
- ☆27 · Jul 28, 2025 · Updated 6 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" · ☆169 · Jan 30, 2025 · Updated last year
- ☆106 · Mar 9, 2024 · Updated last year
- Minimal Mamba-2 implementation in PyTorch · ☆243 · Jun 17, 2024 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. · ☆2,918 · Mar 8, 2024 · Updated last year
- JAX implementation of "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" · ☆15 · May 10, 2024 · Updated last year
- ☆16 · Dec 19, 2024 · Updated last year
- train with kittens! · ☆63 · Oct 25, 2024 · Updated last year
- Pretraining and inference code for a large-scale depth-recurrent language model · ☆859 · Dec 29, 2025 · Updated last month
- [NeurIPS 2023 spotlight] Official implementation of HGRN from the NeurIPS 2023 paper "Hierarchically Gated Recurrent Neural Network for Se…" · ☆66 · Apr 24, 2024 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Jun 6, 2024 · Updated last year
- ☆90 · Nov 16, 2023 · Updated 2 years ago
- [NeurIPS 2024] Simple and Effective Masked Diffusion Language Model · ☆619 · Sep 29, 2025 · Updated 4 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆132 · Dec 3, 2024 · Updated last year
- Dreamer on JAX · ☆16 · Jan 19, 2022 · Updated 4 years ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆56 · Aug 20, 2024 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 · ☆138 · Dec 17, 2024 · Updated last year
- ☆131 · May 29, 2025 · Updated 8 months ago
- Open weights language model from Google DeepMind, based on Griffin. · ☆663 · Feb 6, 2026 · Updated last week
- ☆20 · May 30, 2024 · Updated last year