test-time-training / ttt-lm-jax
Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
⭐ 405 · Updated 8 months ago
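For orientation, here is a minimal, hypothetical JAX sketch of the test-time-training idea behind this repo: the hidden state is the weight matrix of a small inner model, updated by one gradient step on a self-supervised loss at every token. The corruption scheme, inner learning rate, and function names below are illustrative assumptions, not the repository's API.

```python
import jax
import jax.numpy as jnp

def inner_loss(W, x):
    # Self-supervised objective: reconstruct the token from a corrupted view.
    # The corruption (simple scaling) is a stand-in assumption; the paper
    # uses learned low-rank projections for the two views.
    x_corrupted = 0.9 * x
    pred = x_corrupted @ W          # linear inner model f(x; W)
    return jnp.mean((pred - x) ** 2)

def ttt_step(W, x):
    # One token of a TTT layer: take one gradient step on the inner loss,
    # then emit an output using the updated hidden state (the weights W).
    inner_lr = 0.1                  # illustrative inner-loop learning rate
    W_new = W - inner_lr * jax.grad(inner_loss)(W, x)
    return W_new, x @ W_new

def ttt_layer(W0, tokens):
    # Scan over the sequence, threading the inner weights as recurrent state.
    return jax.lax.scan(ttt_step, W0, tokens)

d = 16
tokens = jax.random.normal(jax.random.PRNGKey(0), (8, d))
W_final, outputs = ttt_layer(jnp.zeros((d, d)), tokens)
print(outputs.shape)  # (8, 16)
```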
Alternatives and similar repositories for ttt-lm-jax:
Users interested in ttt-lm-jax are comparing it to the libraries listed below.
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ⭐ 1,170 · Updated 9 months ago
- [ICLR2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ⭐ 547 · Updated 2 months ago
- ⭐ 262 · Updated last month
- Some preliminary explorations of Mamba's context scaling. ⭐ 212 · Updated last year
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ⭐ 588 · Updated 3 weeks ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ⭐ 404 · Updated this week
- Muon optimizer: +>30% sample efficiency with <3% wallclock overhead ⭐ 560 · Updated 3 weeks ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ⭐ 166 · Updated last week
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ⭐ 282 · Updated 2 weeks ago
- When it comes to optimizers, it's always better to be safe than sorry ⭐ 217 · Updated 2 weeks ago
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ⭐ 519 · Updated this week
- Helpful tools and examples for working with flex-attention ⭐ 720 · Updated this week
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ⭐ 212 · Updated last week
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) ⭐ 357 · Updated last week
- Reading list for research topics in state-space models ⭐ 277 · Updated last week
- Notes on the Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ⭐ 162 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ⭐ 279 · Updated 3 weeks ago
- Inference Speed Benchmark for Learning to (Learn at Test Time): RNNs with Expressive Hidden States ⭐ 66 · Updated 9 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" ⭐ 288 · Updated 11 months ago
- Annotated version of the Mamba paper ⭐ 481 · Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ⭐ 218 · Updated 10 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ⭐ 327 · Updated 10 months ago
- Normalized Transformer (nGPT) ⭐ 167 · Updated 4 months ago
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ⭐ 510 · Updated 5 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ⭐ 276 · Updated last month
- ⭐ 516 · Updated this week
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ⭐ 1,202 · Updated 4 months ago
- ⭐ 182 · Updated this week
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ⭐ 407 · Updated 3 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ⭐ 243 · Updated last year