wdlctc / headinfer
☆55 · Updated 3 months ago
Alternatives and similar repositories for headinfer
Users interested in headinfer are comparing it to the libraries listed below.
- Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆99 · Updated this week
- A collection of tricks and tools to speed up transformer models ☆170 · Updated 2 months ago
- QuIP quantization ☆57 · Updated last year
- A repository aimed at pruning DeepSeek V3, R1, and R1-Zero to a usable size ☆67 · Updated 4 months ago
- ☆61 · Updated 5 months ago
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆85 · Updated 3 months ago
- KV cache compression for high-throughput LLM inference ☆136 · Updated 6 months ago
- Repository hosting code and materials on speeding up LLM inference using token merging ☆36 · Updated last month
- ☆149 · Updated 2 months ago
- Self-host LLMs with LMDeploy and BentoML ☆22 · Updated last month
- Work in progress. ☆72 · Updated 2 months ago
- RWKV-7: Surpassing GPT ☆94 · Updated 9 months ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quantization, and Unsloth ☆167 · Updated this week
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆47 · Updated 4 months ago
- Simple extension on top of vLLM to help you speed up reasoning models without training ☆181 · Updated 3 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆145 · Updated this week
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 4 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆125 · Updated 8 months ago
- Official implementation of APB (ACL 2025 Main, Oral) ☆31 · Updated 6 months ago
- Linear Attention Sequence Parallelism (LASP) ☆86 · Updated last year
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆59 · Updated 10 months ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated last week
- ☆54 · Updated 2 months ago
- ☆41 · Updated 4 months ago
- LLM inference on consumer devices ☆124 · Updated 5 months ago
- ☆52 · Updated 2 months ago
- This repository contains the code for the paper "SirLLM: Streaming Infinite Retentive LLM" ☆59 · Updated last year
- Cascade Speculative Drafting ☆29 · Updated last year
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆153 · Updated 4 months ago
- ☆86 · Updated 7 months ago