Qcompiler/MIXQ
MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction
☆83 · Updated 4 months ago
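The title summarizes the idea: activation channels with dynamic outliers are kept in high precision while the rest are quantized to low-bit integers, with the outlier set predicted online. Below is a minimal NumPy sketch of that outlier/normal split applied to a matmul; the function names, the fixed magnitude threshold, and the per-tensor symmetric INT8 scheme are illustrative assumptions, not the repository's actual kernels, prediction logic, or API.

```python
import numpy as np

# Illustrative sketch of outlier-aware mixed-precision matmul.
# Not MIXQ's actual implementation: MIXQ predicts outlier channels
# online; a static magnitude threshold stands in for that here.

def outlier_mask(x, threshold=6.0):
    """Mark input channels whose max magnitude exceeds the threshold."""
    return np.abs(x).max(axis=0) > threshold

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization; returns (q, scale)."""
    scale = max(np.abs(x).max(initial=0.0), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def mixed_precision_matmul(a, w, threshold=6.0):
    """Compute a @ w: INT8 path for 'normal' channels, FP16 for outliers."""
    mask = outlier_mask(a, threshold)
    a_q, s_a = quantize_int8(a[:, ~mask])
    w_q, s_w = quantize_int8(w[~mask, :])
    # INT8 path: integer matmul, then rescale back to float
    low = a_q.astype(np.int32) @ w_q.astype(np.int32) * (s_a * s_w)
    # FP16 path: outlier channels kept at higher precision
    high = a[:, mask].astype(np.float16) @ w[mask, :].astype(np.float16)
    return low + high.astype(np.float64)

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 64)); a[:, 3] *= 20.0   # inject one outlier channel
w = rng.normal(size=(64, 8))
# Error vs. the full-precision result stays small despite INT8 weights/activations
print(np.abs(mixed_precision_matmul(a, w) - a @ w).max())
```

The point of the split is that a single outlier channel would otherwise blow up the per-tensor INT8 scale and crush the resolution of every other channel; routing it through FP16 keeps the integer path well-scaled.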
Alternatives and similar repositories for MIXQ:
Users interested in MIXQ are comparing it to the libraries listed below.
- Supports mixed-precision inference with vLLM ☆80 · Updated 2 months ago
- ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆153 · Updated 4 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆242 · Updated 6 months ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ☆218 · Updated 5 months ago
- Mixed-precision inference with TensorRT-LLM ☆79 · Updated 5 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆189 · Updated last week
- ☆107 · Updated 4 years ago
- SQuant [ICLR 2022] ☆131 · Updated 2 years ago
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆43 · Updated 8 months ago
- Official implementation of "Accel-GNN: High-Performance GPU Accelerator Design for Graph Neural Networks" ☆49 · Updated last week
- APOLLO: SGD-like Memory, AdamW-level Performance ☆195 · Updated 3 weeks ago
- Train an LLM from scratch with DeepSpeed, going through pretraining and SFT stages, to verify the model's ability to learn knowledge, understand language, and answer questions ☆147 · Updated 8 months ago
- Build CUDA Neural Network From Scratch ☆16 · Updated 7 months ago
- A higher-performance OpenAI LLM service than vLLM serve: a pure C++ high-performance OpenAI LLM service implemented with GPRS+TensorRT-LLM+… ☆123 · Updated this week
- A framework to prune LLMs to any size and any config. ☆89 · Updated last year
- A deployment, monitoring, and autoscaling service for serverless LLM serving ☆151 · Updated 3 weeks ago
- Unified KV Cache Compression Methods for Auto-Regressive Models ☆956 · Updated 2 months ago
- Official implementation of "MaxK-GNN: Extremely Fast GPU Kernel Design for Accelerating Graph Neural Networks Training" ☆37 · Updated last year
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization ☆112 · Updated this week
- This tool (enhance_long) aims to enhance LLaMA-2's long-context extrapolation capability at the lowest cost, preferably without … ☆45 · Updated last year
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response ☆40 · Updated 3 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs ☆108 · Updated 2 weeks ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache ☆23 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks ☆93 · Updated last year
- Code repository of "Evaluating Quantized Large Language Models" ☆121 · Updated 6 months ago
- ☆125 · Updated 3 weeks ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking" ☆47 · Updated 8 months ago
- ☆39 · Updated 9 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆260 · Updated 4 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training ☆168 · Updated last month