test-time-training / ttt-lm-kernels
Inference Speed Benchmark for Learning to (Learn at Test Time): RNNs with Expressive Hidden States
☆74 · Updated last year
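For context on what these kernels benchmark: in the TTT paper, each layer's hidden state is itself a small model that is trained on a self-supervised loss as the sequence streams in. Below is a minimal PyTorch sketch of that inner loop for a TTT-Linear-style layer; the function name, learning rate, and the use of the raw token as query/key/value are illustrative assumptions, not the repo's actual fused kernels.

```python
import torch

def ttt_linear_step(W, k, v, q, lr=0.1):
    """One token update; W (d, d) is the layer's hidden state."""
    err = k @ W - v            # (1, d) reconstruction error
    grad = 2.0 * k.T @ err     # closed-form dL/dW for L = ||k @ W - v||^2
    W = W - lr * grad          # gradient-descent update of the hidden state
    return W, q @ W            # emit the output using the *updated* state

d = 16
W = torch.zeros(d, d)
for _ in range(8):             # toy sequence, processed token by token
    x = torch.randn(1, d)
    # The real model derives k, v, q from learned projections of x;
    # identity projections are used here for brevity.
    W, z = ttt_linear_step(W, x, x, x)
```

The point of the repo's kernels is making this sequential inner loop fast at inference time, which is what the speed benchmark measures.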
Alternatives and similar repositories for ttt-lm-kernels
Users interested in ttt-lm-kernels are comparing it to the repositories listed below.
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆132 · Updated last week
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆98 · Updated 10 months ago
- Some preliminary explorations of Mamba's context scaling. ☆216 · Updated last year
- ☆253 · Updated 5 months ago
- ☆96 · Updated 8 months ago
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆95 · Updated 11 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆103 · Updated 5 months ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆57 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆231 · Updated last month
- ☆106 · Updated last month
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- ☆35 · Updated 8 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆109 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆55 · Updated 11 months ago
- A repository for DenseSSMs ☆89 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆92 · Updated last year
- ☆105 · Updated last year
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models ☆45 · Updated 3 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆186 · Updated last week
- Stick-breaking attention ☆61 · Updated 4 months ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆206 · Updated 5 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆367 · Updated 2 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆131 · Updated 2 weeks ago