mdy666 / mdy_triton
☆88 · Updated this week
Alternatives and similar repositories for mdy_triton:
Users interested in mdy_triton are comparing it to the libraries listed below.
- [ICLR 2025] PEARL: parallel speculative decoding with adaptive draft length ☆52 · Updated this week
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆67 · Updated last month
- Multi-Candidate Speculative Decoding ☆34 · Updated 10 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆159 · Updated last month
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆51 · Updated last month
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆78 · Updated last week
- ☆120 · Updated last week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆253 · Updated 3 months ago
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆171 · Updated 3 weeks ago
- Awesome list for LLM quantization ☆182 · Updated 2 months ago
- Implementations of several LLM KV Cache sparsity methods ☆30 · Updated 9 months ago
- ☆229 · Updated 10 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆157 · Updated 8 months ago
- SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆16 · Updated 5 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆105 · Updated 9 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆144 · Updated 8 months ago
- ☆52 · Updated 11 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆43 · Updated last year
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆47 · Updated 11 months ago
- 16-fold memory access reduction with nearly no loss ☆80 · Updated 2 weeks ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆331 · Updated this week
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆14 · Updated 5 months ago
- ☆63 · Updated 2 months ago
- All-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆75 · Updated 3 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆74 · Updated 3 weeks ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆64 · Updated 10 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆238 · Updated this week