test-time-training / ttt-lm-pytorch
Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
⭐ 1,227 · Updated last year
Alternatives and similar repositories for ttt-lm-pytorch
Users interested in ttt-lm-pytorch are comparing it to the repositories listed below.
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ⭐ 416 · Updated 11 months ago
- [ICLR 2025 Spotlight 🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ⭐ 563 · Updated 5 months ago
- Collection of papers on state-space models ⭐ 594 · Updated 2 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ⭐ 1,169 · Updated 3 weeks ago
- Code release for DynamicTanh (DyT); a minimal sketch of the layer follows this list ⭐ 978 · Updated 3 months ago
- Awesome papers related to Mamba ⭐ 1,366 · Updated 8 months ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX ⭐ 1,279 · Updated 7 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ⭐ 2,900 · Updated this week
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ⭐ 667 · Updated last month
- Muon is an optimizer for hidden layers in neural networks ⭐ 1,092 · Updated this week
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ⭐ 1,402 · Updated last month
- PyTorch implementation of FractalGen (https://arxiv.org/abs/2502.17437) ⭐ 1,143 · Updated 4 months ago
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ⭐ 811 · Updated 9 months ago
- PyTorch implementation of MAR + DiffLoss (https://arxiv.org/abs/2406.11838) ⭐ 1,649 · Updated 9 months ago
- [Mamba-Survey-2024] Paper list for state-space models/Mamba and their applications ⭐ 719 · Updated 2 weeks ago
- ⭐ 572 · Updated 3 months ago
- Code for CRATE (Coding RAte reduction TransformEr) ⭐ 1,233 · Updated 8 months ago
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ⭐ 708 · Updated last week
- A collection of AWESOME things about mixture-of-experts ⭐ 1,159 · Updated 7 months ago
- Notes on Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ⭐ 169 · Updated last year
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ⭐ 776 · Updated last year
- Build high-performance AI models with modular building blocks ⭐ 533 · Updated this week
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ⭐ 1,193 · Updated last year
- Reading list for research topics in state-space models ⭐ 306 · Updated last month
- Next-Token Prediction is All You Need ⭐ 2,166 · Updated 3 months ago
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ⭐ 726 · Updated this week
- [ICLR 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ⭐ 1,587 · Updated this week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ⭐ 1,575 · Updated 8 months ago
- The official implementation of TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ⭐ 376 · Updated last week
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ⭐ 345 · Updated last year
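
Of the entries above, DynamicTanh (DyT) is compact enough to sketch inline. It replaces a LayerNorm with an elementwise DyT(x) = γ · tanh(αx) + β, where α is a learnable scalar and γ, β are per-channel affine parameters. The module name, constructor signature, and `init_alpha` default below are illustrative assumptions based on the paper's formula, not the linked repo's exact API:

```python
import torch
import torch.nn as nn

class DyT(nn.Module):
    """Minimal sketch of Dynamic Tanh (DyT): a normalization-free
    stand-in for LayerNorm, DyT(x) = gamma * tanh(alpha * x) + beta.
    Signature and default are assumptions, not the official API."""

    def __init__(self, dim: int, init_alpha: float = 0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((1,), init_alpha))  # learnable scalar
        self.gamma = nn.Parameter(torch.ones(dim))               # per-channel scale
        self.beta = nn.Parameter(torch.zeros(dim))               # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squash activations elementwise, then apply the affine transform.
        return self.gamma * torch.tanh(self.alpha * x) + self.beta

# Drop-in usage where a LayerNorm over the last dimension would go:
x = torch.randn(2, 16, 512)  # (batch, tokens, channels)
y = DyT(512)(x)              # output has the same shape as x
```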