jxiw / M1
M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models
☆45 · Updated 4 months ago
Alternatives and similar repositories for M1
Users interested in M1 are comparing it to the libraries listed below.
- ☆96 · Updated 8 months ago
- ☆61 · Updated 4 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆82 · Updated 4 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆124 · Updated 4 months ago
- ☆106 · Updated 2 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆98 · Updated 10 months ago
- The official implementation of [NeurIPS 2025 Oral] Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink… ☆101 · Updated last month
- ☆101 · Updated 2 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆132 · Updated 2 weeks ago
- ☆120 · Updated 5 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆135 · Updated 4 months ago
- Flash-Linear-Attention models beyond language ☆20 · Updated 2 months ago
- ☆253 · Updated 5 months ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆117 · Updated 5 months ago
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆150 · Updated 4 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆245 · Updated 4 months ago
- 🔥 A minimal training framework for scaling FLA models ☆291 · Updated 2 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year
- Stick-breaking attention ☆61 · Updated 4 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆186 · Updated last week
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆105 · Updated last month
- TraceRL & TraDo-8B: Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models ☆307 · Updated 3 weeks ago
- Geometric-Mean Policy Optimization ☆90 · Updated this week
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆176 · Updated 2 months ago
- ☆55 · Updated 5 months ago
- ☆66 · Updated 4 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆102 · Updated last month
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆206 · Updated 5 months ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆231 · Updated last month