DeepAuto-AI / sglang
This is a fork of SGLang for hip-attention integration. Please refer to the hip-attention repository for details.
☆12 · Updated this week
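For context on what this fork serves: since it tracks upstream SGLang, the standard SGLang entry points should still apply. Below is a minimal sketch, assuming the upstream offline Engine API (sglang.Engine, generate, shutdown); the model path and prompts are placeholders, and any HiP-Attention-specific switch is deliberately left out because it is documented in the hip-attention repository, not assumed here.

```python
# Minimal sketch, assuming upstream SGLang's offline Engine API.
# The model path and prompts are placeholders; how the HiP-Attention fork is
# enabled (flag, env var, or backend name) is not assumed here; see the
# hip-attention repository for the actual integration switches.
import sglang as sgl


def main():
    # Any HF-format causal LM path or name should work; this one is illustrative.
    llm = sgl.Engine(model_path="meta-llama/Llama-3.1-8B-Instruct")

    prompts = [
        "Explain sub-quadratic attention in one sentence.",
        "Why does long-context inference stress the KV cache?",
    ]
    sampling_params = {"temperature": 0.0, "max_new_tokens": 64}

    # generate() accepts a list of prompts and returns one result dict per prompt.
    outputs = llm.generate(prompts, sampling_params)
    for prompt, out in zip(prompts, outputs):
        print(prompt, "->", out["text"].strip())

    llm.shutdown()


if __name__ == "__main__":
    main()
```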
Alternatives and similar repositories for sglang:
Users interested in sglang are comparing it to the libraries listed below.
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆128 · Updated this week
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆40 · Updated last year
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆115 · Updated 5 months ago
- Work in progress. ☆58 · Updated 3 weeks ago
- Code for data-aware compression of DeepSeek models ☆21 · Updated 3 weeks ago
- ☆50 · Updated 6 months ago
- ☆126 · Updated 2 months ago
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆25 · Updated 3 weeks ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆73 · Updated 8 months ago
- ☆45 · Updated this week
- ☆44 · Updated last year
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆18 · Updated 9 months ago
- QuIP quantization ☆52 · Updated last year
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆100 · Updated 2 weeks ago
- ☆77 · Updated 3 months ago
- ☆37 · Updated 6 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆161 · Updated 9 months ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆15 · Updated 2 months ago
- An extension to the GaLore paper, to perform Natural Gradient Descent in low rank subspace ☆16 · Updated 6 months ago
- Transformers components but in Triton ☆32 · Updated last month
- ☆48 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆59 · Updated 6 months ago
- The official implementation of paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆45 · Updated 6 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆103 · Updated last month
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 8 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆117 · Updated last year
- ☆125 · Updated last year
- ☆131 · Updated last month
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆45 · Updated 2 weeks ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 10 months ago