test-time-training / ttt-lm-jax
Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
⭐413 · Updated 11 months ago
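In TTT layers, the hidden state is itself a small model whose weights are updated by a step of gradient descent on a self-supervised loss at every token, and the updated state answers the current query. As a rough illustration only (not this repo's actual API; `ttt_linear_step`, the `x_k`/`x_v`/`x_q` projections, and the plain squared-error loss are all assumptions here), a TTT-Linear-style inner loop in JAX might look like:

```python
import jax
import jax.numpy as jnp

def ttt_linear_step(W, token, lr=1.0):
    """One hypothetical TTT-Linear step: the hidden state W is the weight
    matrix of a linear model; take one gradient step on a per-token
    self-supervised loss, then answer the query with the updated W."""
    x_k, x_v, x_q = token  # assumed "training view", "label view", query; each (d,)

    # Self-supervised reconstruction loss for this token.
    loss = lambda W: 0.5 * jnp.sum((W @ x_k - x_v) ** 2)

    # "Learning at test time": one inner-loop gradient step on the hidden state.
    W = W - lr * jax.grad(loss)(W)

    # The output for this token comes from the freshly updated hidden state.
    return W, W @ x_q

def ttt_linear(x_k, x_v, x_q):
    """Run the inner loop over a sequence like an RNN, via jax.lax.scan.
    Each input has shape (T, d); returns per-token outputs of shape (T, d)."""
    d = x_k.shape[-1]
    W0 = jnp.zeros((d, d))
    _, outputs = jax.lax.scan(ttt_linear_step, W0, (x_k, x_v, x_q))
    return outputs
```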
Alternatives and similar repositories for ttt-lm-jax
Users interested in ttt-lm-jax are comparing it to the libraries listed below.
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ⭐1,227 · Updated 11 months ago
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ⭐563 · Updated 5 months ago
- Muon is an optimizer for hidden layers in neural networks ⭐988 · Updated this week
- ⭐572 · Updated 2 months ago
- Some preliminary explorations of Mamba's context scaling. ⭐214 · Updated last year
- The official implementation of TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ⭐376 · Updated last week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ⭐427 · Updated last month
- ⭐288 · Updated 2 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ⭐667 · Updated last month
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ⭐302 · Updated 3 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ⭐286 · Updated last month
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ⭐345 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ⭐222 · Updated 2 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ⭐181 · Updated 3 months ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ⭐172 · Updated 3 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ⭐526 · Updated last month
- When it comes to optimizers, it's always better to be safe than sorry ⭐246 · Updated 3 months ago
- Reading list for research topics in state-space models ⭐302 · Updated last month
- Normalized Transformer (nGPT) ⭐184 · Updated 7 months ago
- ⭐191 · Updated this week
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" ⭐290 · Updated last year
- Official Implementation for the paper "d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning" ⭐235 · Updated last week
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ⭐726 · Updated this week
- Collection of papers on state-space models ⭐594 · Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ⭐720 · Updated 3 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ⭐322 · Updated 4 months ago
- Notes on the Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ⭐169 · Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ⭐224 · Updated last year
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) ⭐600 · Updated last year
- ⭐195 · Updated last year