wdlctc / headinfer
☆52 · Updated 2 months ago
Alternatives and similar repositories for headinfer
Users interested in headinfer are comparing it to the libraries listed below:
- Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3); a generic sketch of the idea follows this list ☆91 · Updated last week
- ☆139 · Updated 3 weeks ago
- QuIP quantization ☆54 · Updated last year
- A simple extension on vLLM to help you speed up reasoning models without training. ☆166 · Updated last month
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆139 · Updated this week
- ☆59 · Updated 3 months ago
- A collection of tricks and tools to speed up transformer models ☆170 · Updated last month
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆160 · Updated this week
- ☆79 · Updated 8 months ago
- Work in progress. ☆70 · Updated 2 weeks ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆121 · Updated 7 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 2 months ago
- ☆41 · Updated 3 weeks ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆81 · Updated last month
- KV cache compression for high-throughput LLM inference ☆132 · Updated 5 months ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆122 · Updated this week
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆277 · Updated last month
- ☆64 · Updated last month
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆128 · Updated 7 months ago
- ☆51 · Updated 8 months ago
- RWKV-7: Surpassing GPT ☆92 · Updated 8 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆179 · Updated 3 weeks ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆59 · Updated 8 months ago
- LLM Inference on consumer devices ☆120 · Updated 4 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆49 · Updated last year
- ☆80 · Updated 6 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- ☆47 · Updated last month
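For readers skimming the first entry above, here is a minimal sketch of what query-agnostic KV cache eviction generally looks like. This is a generic PyTorch illustration, not the linked repository's actual algorithm: the scoring rule (per-position key norm), the token budget, and the function name `evict_kv_cache` are all assumptions chosen for brevity.

```python
import torch

def evict_kv_cache(keys: torch.Tensor,
                   values: torch.Tensor,
                   budget: int):
    """Generic query-agnostic KV cache eviction sketch (NOT any repo's method).

    keys, values: [batch, heads, seq_len, head_dim].
    Returns caches compressed to at most `budget` positions.
    """
    seq_len = keys.shape[2]
    if seq_len <= budget:
        return keys, values
    # Query-agnostic importance score: per-position key norm, averaged
    # over heads so every head shares one eviction mask. No query is
    # consulted, which is what makes the eviction "query-agnostic".
    scores = keys.norm(dim=-1).mean(dim=1)          # [batch, seq_len]
    keep = scores.topk(budget, dim=-1).indices      # [batch, budget]
    keep, _ = keep.sort(dim=-1)                     # preserve token order
    # Broadcast the kept indices across heads and head_dim, then gather.
    idx = keep[:, None, :, None].expand(-1, keys.shape[1], -1, keys.shape[-1])
    return keys.gather(2, idx), values.gather(2, idx)

# Example: compress a 1024-token cache to a 256-token budget.
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)
k2, v2 = evict_kv_cache(k, v, budget=256)
print(k2.shape)  # torch.Size([1, 8, 256, 64])
```

Because the importance score depends only on the cached keys, eviction can run once per decoding step without recomputing attention against the current query, which is where the memory and latency savings claimed by such methods come from.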