ilur98 / DGQ
Official code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM"
☆13 · Updated last year
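For orientation, below is a minimal sketch of the two-level ("dual-grained") weight quantization idea in PyTorch: INT4 weights with per-group scales that are themselves quantized to INT8 under a coarse per-channel floating-point scale. This is an illustrative sketch only, not the repository's implementation; the function names, group size, and rounding choices are assumptions.

```python
# Illustrative sketch of dual-grained weight quantization (NOT the official DGQ code).
# Fine-grained INT4 group scales are themselves quantized to INT8 under a coarse
# per-channel FP scale, so most of the dequantization stays in integer arithmetic.
import torch

def dual_grained_quantize(w: torch.Tensor, group_size: int = 128):
    """Quantize a [out_ch, in_ch] weight matrix to INT4 with two-level scales."""
    out_ch, in_ch = w.shape
    assert in_ch % group_size == 0
    groups = w.reshape(out_ch, in_ch // group_size, group_size)

    # Fine-grained level: one absmax scale per group, targeting INT4 range [-8, 7].
    fine_scale = groups.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / 7.0
    q4 = torch.clamp(torch.round(groups / fine_scale), -8, 7)

    # Coarse-grained level: one FP scale per output channel, chosen so the
    # per-group scales themselves fit into INT8 range [0, 255].
    coarse_scale = fine_scale.amax(dim=1, keepdim=True) / 255.0
    q_scale8 = torch.clamp(torch.round(fine_scale / coarse_scale), 0, 255)

    return q4, q_scale8, coarse_scale

def dual_grained_dequantize(q4, q_scale8, coarse_scale):
    # w_hat ≈ q4 * (q_scale8 * coarse_scale), broadcast over groups.
    groups = q4 * q_scale8 * coarse_scale
    return groups.reshape(groups.shape[0], -1)

w = torch.randn(64, 256)
q4, s8, cs = dual_grained_quantize(w)
w_hat = dual_grained_dequantize(q4, s8, cs)
print((w - w_hat).abs().mean())  # small reconstruction error
```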
Alternatives and similar repositories for DGQ:
Users interested in DGQ are comparing it with the repositories listed below.
- [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆22 · Updated 10 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆22 · Updated 7 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆28 · Updated 7 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models ☆21 · Updated 9 months ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆46 · Updated 6 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆38 · Updated 10 months ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆27 · Updated 3 months ago
- Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆14 · Updated 9 months ago
- IntLLaMA: a fast and lightweight quantization solution for LLaMA ☆18 · Updated last year
- AFPQ code implementation ☆19 · Updated last year
- BESA is a differentiable weight pruning technique for large language models ☆14 · Updated 10 months ago
- LLM Inference with Microscaling Format ☆16 · Updated 2 months ago
- ACL 2023 ☆38 · Updated last year
- PyTorch code for Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆36 · Updated 4 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆76 · Updated last month
- A 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆44 · Updated last year
- Repository for CPU kernel generation for LLM inference ☆25 · Updated last year
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆27 · Updated 3 weeks ago
- Triton implementation of bidirectional (non-causal) linear attention (a minimal PyTorch sketch follows after this list) ☆35 · Updated last week
- Low-Rank Llama Custom Training ☆21 · Updated 9 months ago
- Is gradient information useful for pruning LLMs? ☆41 · Updated 8 months ago
- SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆27 · Updated 5 months ago
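The bidirectional linear attention entry above refers to an optimized Triton kernel; as a rough reference for what that computes, here is a plain PyTorch sketch of non-causal linear attention using the common elu(x)+1 feature map. This is a generic textbook formulation under stated assumptions, not that repository's kernel or API.

```python
# Minimal sketch of non-causal (bidirectional) linear attention.
# Replacing softmax(QK^T)V with phi(Q)(phi(K)^T V) reduces cost from
# O(N^2 * d) to O(N * d^2). The feature map phi(x) = elu(x) + 1 is one
# common choice, assumed here for illustration.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """q, k, v: [batch, heads, seq, dim] -> output of the same shape."""
    q = F.elu(q) + 1.0  # positive feature map phi(q)
    k = F.elu(k) + 1.0  # positive feature map phi(k)
    # Aggregate keys and values once over the sequence dimension.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)   # [b, h, d, d]
    k_sum = k.sum(dim=2)                         # [b, h, d]
    # Numerator and normalizer for every query position.
    num = torch.einsum("bhnd,bhde->bhne", q, kv)
    den = torch.einsum("bhnd,bhd->bhn", q, k_sum).unsqueeze(-1)
    return num / (den + eps)

q = torch.randn(2, 4, 128, 32)
k = torch.randn(2, 4, 128, 32)
v = torch.randn(2, 4, 128, 32)
out = linear_attention(q, k, v)  # [2, 4, 128, 32]
```

Because no causal mask is applied, the key/value aggregation is computed once and shared by all query positions, which is what makes the bidirectional case simpler and cheaper than the autoregressive one.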