lucidrains / titans-pytorch
Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch
☆1,384 · Updated 3 weeks ago
Alternatives and similar repositories for titans-pytorch
Users who are interested in titans-pytorch are comparing it to the libraries listed below.
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,212 · Updated 11 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆657 · Updated 2 weeks ago
- Official PyTorch implementation for "Large Language Diffusion Models" ☆2,378 · Updated last week
- Muon: An optimizer for hidden layers in neural networks ☆897 · Updated 2 weeks ago
- Code for BLT research paper ☆1,686 · Updated last month
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,106 · Updated 4 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton ☆2,753 · Updated this week
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆562 · Updated 4 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆883 · Updated last month
- Continuous Thought Machines, because thought takes time and reasoning is a process. ☆1,026 · Updated 3 weeks ago
- A simple and efficient Mamba implementation in pure PyTorch and MLX. ☆1,261 · Updated 6 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,162 · Updated last week
- Build high-performance AI models with modular building blocks ☆529 · Updated this week
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ☆703 · Updated 2 months ago
- Code release for DynamicTanh (DyT) ☆954 · Updated 2 months ago
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,136 · Updated last week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,162 · Updated 5 months ago
- Muon is Scalable for LLM Training ☆1,081 · Updated 2 months ago
- Official Repo for Open-Reasoner-Zero ☆1,969 · Updated 3 weeks ago
- Recipes to scale inference-time compute of open models ☆1,097 · Updated last month
- Dream 7B, a large diffusion language model ☆774 · Updated last week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆702 · Updated 3 months ago
- TTRL: Test-Time Reinforcement Learning ☆650 · Updated 2 weeks ago
- Helpful tools and examples for working with flex-attention ☆846 · Updated this week
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆375 · Updated last week
- Schedule-Free Optimization in PyTorch ☆2,180 · Updated last month
- Witness the aha moment of VLM with less than $3. ☆3,785 · Updated last month
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,803 · Updated 2 months ago
- An open-source RL system from ByteDance Seed and Tsinghua AIR ☆1,364 · Updated last month
- NanoGPT (124M) in 3 minutes ☆2,699 · Updated last week