[ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models"
☆39 · Mar 11, 2024 · Updated 2 years ago
Alternatives and similar repositories for QLLM
Users that are interested in QLLM are comparing it to the libraries listed below.
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆25 · Mar 29, 2024 · Updated 2 years ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆31 · Mar 12, 2024 · Updated 2 years ago
- Official code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM" ☆14 · Dec 27, 2023 · Updated 2 years ago
- Structured Binary Neural Networks for Image Recognition ☆16 · Oct 12, 2022 · Updated 3 years ago
- Benchmarking attention mechanisms in Vision Transformers. ☆20 · Oct 10, 2022 · Updated 3 years ago
- ☆21 · Feb 5, 2024 · Updated 2 years ago
- Structured Binary Neural Networks for Image Recognition ☆18 · Nov 18, 2021 · Updated 4 years ago
- [ICLR 2025] Official PyTorch implementation of the paper "T-Stitch: Accelerating Sampling in Pre-trained Diffusion Models with Trajectory Stitching" ☆107 · Feb 26, 2024 · Updated 2 years ago
- The official implementation of "BiViT: Extremely Compressed Binary Vision Transformers" ☆16 · Jun 18, 2023 · Updated 2 years ago
- ☆16 · Sep 12, 2023 · Updated 2 years ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆418 · Aug 13, 2024 · Updated last year
- [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling" ☆33 · Dec 30, 2021 · Updated 4 years ago
- ☆18 · Oct 4, 2022 · Updated 3 years ago
- [CVPR 2025] APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers ☆42 · Apr 7, 2025 · Updated last year
- A collection of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com) ☆73 · Oct 7, 2021 · Updated 4 years ago
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆24 · Mar 16, 2025 · Updated last year
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆74 · Nov 15, 2022 · Updated 3 years ago
- ☆13 · Jun 16, 2024 · Updated last year
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models" ☆69 · Jun 4, 2024 · Updated last year
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) ☆18 · Jun 22, 2022 · Updated 3 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning" ☆130 · Jul 11, 2023 · Updated 2 years ago
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆89 · Jul 28, 2025 · Updated 9 months ago
- Reorder-based post-training quantization for large language models ☆199 · May 17, 2023 · Updated 2 years ago
- A tool for model sparsification based on torch.fx ☆13 · Jun 3, 2024 · Updated last year
- ☆21 · Feb 11, 2022 · Updated 4 years ago
- [NeurIPS 2024] The official implementation of "ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification" ☆31 · Mar 30, 2025 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆387 · Nov 20, 2025 · Updated 5 months ago
- A collection of object-compositional modeling with implicit neural representations. ☆59 · Aug 12, 2023 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆337 · Jul 2, 2024 · Updated last year
- Code for "High-Capacity Expert Binary Networks" (ICLR 2021) ☆27 · Dec 3, 2021 · Updated 4 years ago
- ☆12 · Sep 7, 2024 · Updated last year
- ☆19 · Nov 6, 2023 · Updated 2 years ago
- [ACL 2024] A novel QAT with self-distillation framework to enhance ultra-low-bit LLMs. ☆137 · May 16, 2024 · Updated last year
- Code for the accepted NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · May 8, 2020 · Updated 5 years ago
- Neural Network Quantization with Fractional Bit-widths ☆11 · Feb 19, 2021 · Updated 5 years ago
- ☆23 · Nov 26, 2024 · Updated last year
- ACL 2023 ☆39 · Jun 6, 2023 · Updated 2 years ago
- [AAAI 2022] This is the official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers" ☆97 · Jun 19, 2022 · Updated 3 years ago
- DNN quantization with outlier channel splitting (ICML '19) ☆114 · Mar 21, 2020 · Updated 6 years ago