test-time-training / ttt-lm-pytorch
Official PyTorch implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States"
☆1,294 · Updated last year
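The repository implements the paper's TTT layers, in which the RNN hidden state is itself a small model whose weights are updated by a self-supervised gradient step at every token. For orientation only, here is a minimal sketch of the TTT-Linear idea; this is not the repository's actual API, and the projection names and toy loop are illustrative assumptions:

```python
# Minimal sketch of the test-time-training (TTT-Linear) idea: the "hidden
# state" is a weight matrix W updated by one SGD step on a self-supervised
# reconstruction loss at every token. Not the ttt-lm-pytorch API; all names
# here are illustrative.
import torch

def ttt_linear_step(W, x, theta_k, theta_v, theta_q, lr=0.1):
    """One token step. W: (d, d) hidden state; x: (d,) input token."""
    k = theta_k @ x                 # inner-loop training input
    v = theta_v @ x                 # inner-loop reconstruction target
    err = W @ k - v                 # residual of the inner model on this token
    grad = torch.outer(err, k)      # grad of 0.5 * ||W k - v||^2 w.r.t. W
    W = W - lr * grad               # hidden-state update = one SGD step
    return W, W @ (theta_q @ x)     # output rule: apply the *updated* model

d = 16
W = torch.zeros(d, d)
theta_k, theta_v, theta_q = (torch.randn(d, d) / d**0.5 for _ in range(3))
for _ in range(8):                  # scan over a toy sequence
    W, y = ttt_linear_step(W, torch.randn(d), theta_k, theta_v, theta_q)
```

Because each update is a single gradient step, the layer scans a sequence in linear time, which is the paper's central contrast with self-attention.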
Alternatives and similar repositories for ttt-lm-pytorch
Users interested in ttt-lm-pytorch are comparing it to the libraries listed below:
- Official JAX implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆438 · Updated last month
- Collection of papers on state-space models ☆613 · Updated last month
- [ICLR 2025 Spotlight🔥] Official implementation of "TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters" ☆579 · Updated 10 months ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX ☆1,387 · Updated last year
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,288 · Updated 3 weeks ago
- Code release for DynamicTanh (DyT) ☆1,028 · Updated 8 months ago
- Awesome papers related to Mamba ☆1,383 · Updated last year
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆790 · Updated 4 months ago
- Muon is an optimizer for hidden layers in neural networks ☆2,132 · Updated last month
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ☆1,748 · Updated last week
- [ICML 2024 (Oral)] Official PyTorch implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆897 · Updated last year
- [Mamba-Survey-2024] Paper list for state-space models/Mamba and their applications ☆747 · Updated 6 months ago
- PyTorch implementation of FractalGen (https://arxiv.org/abs/2502.17437) ☆1,209 · Updated 10 months ago
- Official implementation of [NeurIPS 2025 Oral] "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink…" ☆673 · Updated last week
- PyTorch implementation of MAR+DiffLoss (https://arxiv.org/abs/2406.11838) ☆1,825 · Updated last year
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆438 · Updated last week
- Build high-performance AI models with modular building blocks ☆574 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,089 · Updated last week
- Notes on the Mamba and S4 models ("Mamba: Linear-Time Sequence Modeling with Selective State Spaces") ☆175 · Updated last year
- ☆647 · Updated 8 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆838 · Updated 2 years ago
- Reading list for research topics in state-space models ☆338 · Updated 6 months ago
- Implementation of rotary embeddings, from the RoFormer paper, in PyTorch ☆785 · Updated 5 months ago
- A collection of AWESOME things about mixture-of-experts ☆1,244 · Updated last year
- H-Net: Hierarchical Network with Dynamic Chunking ☆797 · Updated last month
- A framework of small-scale large multimodal models ☆940 · Updated 8 months ago
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,210 · Updated 2 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,212 · Updated last year
- A curated collection of papers, tutorials, videos, and other valuable resources related to Mamba ☆679 · Updated 4 months ago
- Next-Token Prediction is All You Need ☆2,271 · Updated last month