test-time-training / ttt-lm-jax
Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
⭐448 · Updated 2 months ago
Alternatives and similar repositories for ttt-lm-jax
Users interested in ttt-lm-jax are comparing it to the libraries listed below.
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ⭐584 · Updated 11 months ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ⭐1,311 · Updated last year
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ⭐445 · Updated last week
- ⭐659 · Updated 9 months ago
- ⭐304 · Updated 9 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ⭐236 · Updated 3 months ago
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ⭐793 · Updated 5 months ago
- Implementation of ST-Moe, the latest incarnation of MoE after years of research at Brain, in Pytorch ⭐377 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ⭐343 · Updated 9 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ⭐433 · Updated 4 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ⭐170 · Updated 11 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ⭐293 · Updated 7 months ago
- Some preliminary explorations of Mamba's context scaling. ⭐217 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ⭐399 · Updated 4 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ⭐451 · Updated 8 months ago
- Reading list for research topics in state-space models ⭐341 · Updated 7 months ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ⭐204 · Updated last week
- Collection of papers on state-space models ⭐615 · Updated 2 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ⭐231 · Updated 3 months ago
- ⭐207 · Updated last week
- H-Net: Hierarchical Network with Dynamic Chunking ⭐808 · Updated 2 months ago
- Normalized Transformer (nGPT) ⭐197 · Updated last year
- Awesome list of papers that extend Mamba to various applications. ⭐138 · Updated 7 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ⭐119 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ⭐339 · Updated 11 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ⭐294 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ⭐549 · Updated 8 months ago
- Inference Speed Benchmark for Learning to (Learn at Test Time): RNNs with Expressive Hidden States ⭐80 · Updated last year
- Official PyTorch implementation for ICLR2025 paper "Scaling up Masked Diffusion Models on Text" ⭐360 · Updated last year
- Annotated version of the Mamba paper ⭐495 · Updated last year