test-time-training / ttt-lm-pytorch
Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
⭐1,243 · Updated last year
Alternatives and similar repositories for ttt-lm-pytorch
Users interested in ttt-lm-pytorch are comparing it to the libraries listed below
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ⭐421 · Updated last year
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ⭐569 · Updated 6 months ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ⭐1,314 · Updated 8 months ago
- Collection of papers on state-space models ⭐596 · Updated 3 months ago
- Awesome Papers related to Mamba. ⭐1,372 · Updated 10 months ago
- Pytorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI ⭐1,193 · Updated 2 months ago
- Muon is an optimizer for hidden layers in neural networks ⭐1,595 · Updated last month
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ⭐725 · Updated last week
- [ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ⭐832 · Updated 10 months ago
- Code release for DynamicTanh (DyT) ⭐1,004 · Updated 4 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ⭐3,066 · Updated this week
- Unofficial implementation of Titans, SOTA memory for transformers, in Pytorch ⭐1,448 · Updated 2 months ago
- PyTorch implementation of FractalGen https://arxiv.org/abs/2502.17437 ⭐1,154 · Updated 6 months ago
- Build high-performance AI models with modular building blocks ⭐545 · Updated last week
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ⭐796 · Updated last year
- A collection of AWESOME things about mixture-of-experts ⭐1,192 · Updated 8 months ago
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ⭐2,849 · Updated last year
- Notes on the Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ⭐169 · Updated last year
- A Framework of Small-scale Large Multimodal Models ⭐881 · Updated 4 months ago
- ⭐599 · Updated 4 months ago
- The official implementation of TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ⭐381 · Updated this week
- PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838 ⭐1,711 · Updated 11 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ⭐741 · Updated last month
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ⭐782 · Updated last month
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ⭐1,843 · Updated last year
- Next-Token Prediction is All You Need ⭐2,183 · Updated 5 months ago
- Reading list for research topics in state-space models ⭐319 · Updated 2 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ⭐1,581 · Updated last year
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ⭐822 · Updated 5 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ⭐1,589 · Updated 9 months ago