test-time-training / ttt-lm-jax
Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
☆366 · Updated 3 months ago
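For orientation, the mechanism the title names — the RNN's hidden state is itself a small model whose weights take one gradient step on a self-supervised loss for every incoming token — can be sketched in a few lines of JAX. This is a hedged toy, not this repo's API: the function name `ttt_linear_scan`, the plain reconstruction loss, and the linear inner model are illustrative simplifications (the paper's inner loss uses learned projection views).

```python
import jax
import jax.numpy as jnp

def ttt_linear_scan(tokens, W0, lr=0.1):
    """Toy TTT layer: the hidden state W is a linear model trained online.

    tokens: (T, d) input sequence; W0: (d, d) initial hidden-state weights.
    """
    def step(W, x):
        # Inner self-supervised loss (toy choice): reconstruct the token
        # from its own projection through W.
        loss_fn = lambda W_: jnp.mean((x @ W_ - x) ** 2)
        W_new = W - lr * jax.grad(loss_fn)(W)  # "learn" at test time
        y = x @ W_new                          # output uses the updated state
        return W_new, y

    W_final, ys = jax.lax.scan(step, W0, tokens)
    return ys, W_final

# Usage on random data:
x = jax.random.normal(jax.random.PRNGKey(0), (8, 16))  # 8 tokens, dim 16
ys, W = ttt_linear_scan(x, jnp.zeros((16, 16)))
```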
Related projects
Alternatives and complementary repositories for ttt-lm-jax
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,040 · Updated 4 months ago
- Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆335 · Updated last week
- Some preliminary explorations of Mamba's context scaling ☆191 · Updated 9 months ago
- Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆257 · Updated 3 months ago
- Reading list for research topics in state-space models ☆241 · Updated 2 weeks ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆174 · Updated this week
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆246 · Updated 6 months ago
- Official PyTorch implementation of "The Hidden Attention of Mamba Models" ☆200 · Updated 5 months ago
- PyTorch implementation of "Jamba: A Hybrid Transformer-Mamba Language Model" ☆137 · Updated last week
- Collection of papers on state-space models ☆556 · Updated 2 weeks ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆169 · Updated last week
- Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" ☆615 · Updated last week
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆280 · Updated 6 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆725 · Updated this week
- Implementation of Autoregressive Diffusion in PyTorch ☆300 · Updated 2 weeks ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆476 · Updated 3 weeks ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton (a generic JAX sketch of the underlying mechanism follows this list) ☆1,339 · Updated this week
- When do we not need larger vision models? ☆336 · Updated this week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆328 · Updated 3 weeks ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NVIDIA ☆256 · Updated last week
- Annotated version of the Mamba paper ☆457 · Updated 8 months ago
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆633 · Updated last month
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) ☆404 · Updated 8 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆229 · Updated 9 months ago
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆329 · Updated 3 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214 · Updated 3 months ago
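As flagged at the linear-attention entry above, several items in this list (Based, TransNormerLLM, the PyTorch/Triton linear-attention kernels) share one mechanism: replacing the O(T²) softmax attention matrix with a running d×d state updated once per token. Below is a generic textbook-style sketch in JAX, assuming the elu+1 feature map of Katharopoulos et al. (2020); it is not any listed repo's actual code.

```python
import jax
import jax.numpy as jnp

def causal_linear_attention(q, k, v):
    """Causal linear attention with an O(d^2) recurrent state.

    q, k, v: (T, d). The feature map phi(x) = elu(x) + 1 keeps scores positive.
    """
    phi = lambda x: jax.nn.elu(x) + 1.0
    q, k = phi(q), phi(k)

    def step(carry, qkv):
        S, z = carry                        # S: running sum of k v^T; z: sum of k
        q_t, k_t, v_t = qkv
        S = S + jnp.outer(k_t, v_t)
        z = z + k_t
        out = (q_t @ S) / (q_t @ z + 1e-6)  # normalized attention output
        return (S, z), out

    d = q.shape[-1]
    _, ys = jax.lax.scan(step, (jnp.zeros((d, d)), jnp.zeros(d)), (q, k, v))
    return ys
```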