OpenSparseLLMs / Linearization
☆61 · Updated 4 months ago
Alternatives and similar repositories for Linearization
Users interested in Linearization are comparing it to the repositories listed below.
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆99 · Updated 11 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆126 · Updated 5 months ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆121 · Updated 6 months ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆68 · Updated 4 months ago
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆61 · Updated 4 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆256 · Updated 4 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆82 · Updated 4 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆105 · Updated last month
- Paper list, tutorial, and nano code snippets for diffusion large language models ☆133 · Updated 5 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆185 · Updated 2 weeks ago
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models ☆45 · Updated 4 months ago
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆41 · Updated last year
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆59 · Updated 2 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆142 · Updated 4 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Updated last year
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆27 · Updated last week
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ☆73 · Updated last week
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized LLMs ☆15 · Updated last year
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆99 · Updated 5 months ago
- The official implementation for [NeurIPS 2025 Oral] Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink… ☆273 · Updated 2 months ago