mdy666 / mdy_triton
☆98 Updated this week
Alternatives and similar repositories for mdy_triton:
Users interested in mdy_triton are comparing it to the libraries listed below.
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆66 Updated 2 months ago
- [ICLR 2025] PEARL: parallel speculative decoding with adaptive draft length ☆59 Updated last week
- ☆125 Updated 2 weeks ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆84 Updated this week
- Multi-Candidate Speculative Decoding ☆34 Updated 11 months ago
- Implementations of several LLM KV cache sparsity methods ☆30 Updated 9 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆107 Updated 10 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆164 Updated last month
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆52 Updated last month
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆173 Updated last month
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆147 Updated 5 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆80 Updated last month
- ☆231 Updated 10 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆259 Updated 4 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆346 Updated last week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆164 Updated last month
- Awesome list for LLM quantization ☆186 Updated 2 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆47 Updated 11 months ago
- Awesome list for LLM pruning. ☆212 Updated 3 months ago
- Puzzles for learning Triton, play it with minimal environment configuration! ☆262 Updated 3 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆241 Updated this week
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization ☆110 Updated 2 months ago
- 16-fold memory access reduction with nearly no loss ☆81 Updated this week
- Official PyTorch implementation of IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact ☆42 Updated 9 months ago
- LLM theoretical performance analysis tools supporting params, FLOPs, memory, and latency analysis ☆80 Updated 2 months ago