DeepAuto-AI / hip-attention
Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton.
☆147 · Updated last week
Alternatives and similar repositories for hip-attention
Users interested in hip-attention are comparing it to the repositories listed below.
- Work in progress. ☆74 · Updated 3 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year
- ☆202 · Updated 10 months ago
- Official implementation for Training LLMs with MXFP4 ☆100 · Updated 6 months ago
- Efficient LLM Inference over Long Sequences ☆390 · Updated 4 months ago
- ☆60 · Updated 4 months ago
- ☆38 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆248 · Updated 8 months ago
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆125 · Updated last week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- ☆127 · Updated last year
- ☆102 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆344 · Updated 5 months ago
- PB-LLM: Partially Binarized Large Language Models ☆156 · Updated last year
- ☆152 · Updated 4 months ago
- ☆85 · Updated 9 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆102 · Updated 2 weeks ago
- Repo hosting code and materials for speeding up LLM inference via token merging. ☆36 · Updated 2 weeks ago
- Lightweight toolkit for training and fine-tuning 1.58-bit language models. ☆92 · Updated 5 months ago
- ☆201 · Updated 10 months ago
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆179 · Updated 9 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 9 months ago
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- A fork of SGLang with hip-attention integration. Please refer to hip-attention for details. ☆18 · Updated last week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆99 · Updated last year
- ☆80 · Updated 11 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆308 · Updated 5 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆130 · Updated 10 months ago
- Load compute kernels from the Hub ☆304 · Updated last week