DeepAuto-AI / hip-attention
Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton.
☆147 · Updated this week
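The one-line description is dense, so here is a minimal, hypothetical PyTorch sketch of the general idea behind sub-quadratic sparse attention: score keys coarsely at the block level, then run exact attention only over each query's top-k blocks. Every function name and parameter below is an illustrative assumption, not hip-attention's actual API; the real project implements hierarchical block selection as OpenAI Triton kernels.

```python
# Hypothetical sketch of block top-k sparse attention -- NOT hip-attention's API.
# Each query attends only to the keys in its top-k scoring blocks, instead of
# scoring all n keys (O(n^2)).
import torch
import torch.nn.functional as F

def block_topk_attention(q, k, v, block_size=64, top_k=4):
    # q, k, v: (batch, seq_len, dim); seq_len must be divisible by block_size
    b, n, d = q.shape
    nb = n // block_size

    # Coarse scores: one mean-pooled key per block -> (b, n, nb)
    k_blocks = k.view(b, nb, block_size, d).mean(dim=2)
    coarse = q @ k_blocks.transpose(1, 2)

    # Each query keeps only its top-k blocks
    idx = coarse.topk(top_k, dim=-1).indices            # (b, n, top_k)

    # Gather the selected key/value blocks per query
    k_b = k.view(b, nb, block_size, d)
    v_b = v.view(b, nb, block_size, d)
    batch = torch.arange(b, device=q.device).view(b, 1, 1)
    k_sel = k_b[batch, idx].reshape(b, n, top_k * block_size, d)
    v_sel = v_b[batch, idx].reshape(b, n, top_k * block_size, d)

    # Exact attention, but only over the selected keys:
    # O(n * top_k * block_size) instead of O(n^2)
    scores = (q.unsqueeze(2) @ k_sel.transpose(2, 3)).squeeze(2) / d**0.5
    attn = F.softmax(scores, dim=-1)
    return (attn.unsqueeze(2) @ v_sel).squeeze(2)
```

Note that the coarse scoring pass above still touches every block summary (O(n²/block_size)); hierarchical approaches in this family refine the selection coarsest-first to push the total cost below quadratic, which is the role the description's "sub-quadratic" claim refers to.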
Alternatives and similar repositories for hip-attention
Users interested in hip-attention are comparing it to the repositories listed below.
- Work in progress. ☆72 · Updated 2 months ago
- ☆38 · Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 5 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆200 · Updated last year
- ☆150 · Updated 2 months ago
- Efficient LLM Inference over Long Sequences ☆391 · Updated 2 months ago
- ☆128 · Updated last year
- ☆56 · Updated 2 months ago
- ☆92 · Updated 3 weeks ago
- ☆202 · Updated 9 months ago
- Official implementation of "Training LLMs with MXFP4" ☆89 · Updated 4 months ago
- Evaluation framework for training-free sparse attention in LLMs ☆93 · Updated 2 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆245 · Updated 7 months ago
- A fork of SGLang for hip-attention integration; see hip-attention for details ☆17 · Updated 3 weeks ago
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆88 · Updated 3 months ago
- Tree Attention: Topology-Aware Decoding for Long-Context Attention on GPU Clusters ☆129 · Updated 9 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding" (ACL 2024) ☆334 · Updated 4 months ago
- KV cache compression for high-throughput LLM inference ☆136 · Updated 7 months ago
- Code and materials for speeding up LLM inference via token merging ☆36 · Updated last month
- PyTorch implementation of models from the Zamba2 series ☆185 · Updated 7 months ago
- PB-LLM: Partially Binarized Large Language Models ☆153 · Updated last year
- Query-agnostic KV cache eviction: 3–4× memory reduction and 2× latency decrease (Qwen3/2.5, Gemma3, LLaMA3) ☆102 · Updated 2 weeks ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆302 · Updated 3 months ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆60 · Updated 10 months ago
- ☆86 · Updated 8 months ago
- QuIP quantization ☆59 · Updated last year
- ☆80 · Updated 10 months ago
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speedup with better task performance… ☆155 · Updated 5 months ago
- [CoLM'25] Official implementation of "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆146 · Updated 2 months ago
- ☆69 · Updated last year