ddidacus / llama-titans
Adaptation of titans-pytorch to Llama models on Hugging Face
☆21 · Updated 8 months ago
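For context, the repo's core idea is to pair a Hugging Face Llama model with a Titans-style neural long-term memory module. The sketch below is a minimal, hypothetical illustration of that wiring, not llama-titans' actual code: the MemoryAugmentedLayer class and the static-MLP memory stand-in are assumptions made for illustration.

```python
# Hypothetical sketch: bolting a Titans-style memory onto a HF Llama model.
# Class names and the plain-MLP memory stand-in are illustrative only.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

class MemoryAugmentedLayer(nn.Module):
    """Wraps one Llama decoder layer and adds a gated memory read path."""
    def __init__(self, layer: nn.Module, dim: int):
        super().__init__()
        self.layer = layer
        # Stand-in for titans-pytorch's neural memory (an MLP whose weights
        # the Titans paper updates online at test time).
        self.memory = nn.Sequential(
            nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        self.gate = nn.Linear(dim, 1)

    def forward(self, hidden_states, *args, **kwargs):
        out = self.layer(hidden_states, *args, **kwargs)
        # Decoder layers return a tuple in most transformers versions.
        h = out[0] if isinstance(out, tuple) else out
        g = torch.sigmoid(self.gate(h))  # per-token read gate
        h = h + g * self.memory(h)       # gated residual memory read
        return (h,) + out[1:] if isinstance(out, tuple) else h

# Any HF Llama checkpoint works here; this one requires gated access.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
dim = model.config.hidden_size
# Swap the memory-augmented wrapper into the first decoder block.
model.model.layers[0] = MemoryAugmentedLayer(model.model.layers[0], dim)
```

In the actual Titans formulation the memory module is itself trained at inference time via a surprise-based update rule; the static MLP above only shows where such a module would sit in the forward pass.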
Alternatives and similar repositories for llama-titans
Users interested in llama-titans are comparing it to the libraries listed below.
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year
- Official PyTorch implementation of the Longhorn Deep State Space Model ☆55 · Updated 11 months ago
- ☆33 · Updated last year
- Parallelizing non-linear sequential models over the sequence length ☆54 · Updated 4 months ago
- Official repo of the paper LM2 ☆46 · Updated 8 months ago
- ☆32 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆31 · Updated 6 months ago
- ☆86 · Updated last year
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆24 · Updated 8 months ago
- Official code repository for the paper "Key-value memory in the brain" ☆29 · Updated 8 months ago
- ☆48 · Updated last year
- ☆34 · Updated 7 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆45 · Updated last year
- Official implementation of the transformer (TF) architecture suggested in the paper "Looped Transformers as Programmable Computers… ☆27 · Updated 2 years ago
- Stick-breaking attention ☆61 · Updated 4 months ago
- ☆46 · Updated 2 weeks ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆129 · Updated last week
- Mixture of A Million Experts ☆48 · Updated last year
- A State-Space Model with Rational Transfer Function Representation ☆82 · Updated last year
- Official repository for the paper "Exploring the Promise and Limits of Real-Time Recurrent Learning" (ICLR 2024) ☆13 · Updated 4 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆131 · Updated this week
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆231 · Updated 3 weeks ago
- Evaluating the Mamba architecture on the Othello game ☆48 · Updated last year
- ☆34 · Updated last year
- Some preliminary explorations of Mamba's context scaling ☆216 · Updated last year
- [COLM 2025] Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆132 · Updated 2 months ago
- 📄 Small Batch Size Training for Language Models ☆63 · Updated last month
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated last year
- ☆53 · Updated last year