[ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models"
☆31, updated Mar 12, 2024
Alternatives and similar repositories for QLLM
Users that are interested in QLLM are comparing it to the libraries listed below
- ☆13, updated Jun 22, 2025
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" (☆39, updated Mar 11, 2024)
- The official implementation of BiViT: Extremely Compressed Binary Vision Transformers (☆16, updated Jun 18, 2023)
- Benchmarking attention mechanisms in Vision Transformers (☆20, updated Oct 10, 2022)
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models (☆25, updated Mar 29, 2024)
- Efficient GPU kernels for mixed-precision Vision Transformers in Triton (☆18, updated Sep 18, 2025)
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" (☆74, updated Nov 15, 2022)
- FMS Model Optimizer is a framework for developing reduced-precision neural network models (☆21, updated this week)
- Collections of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com) (☆73, updated Oct 7, 2021)
- ☆52, updated Nov 5, 2024
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… (☆51, updated Oct 21, 2023)
- The code for "Joint Neural Architecture Search and Quantization" (☆14, updated Apr 10, 2019)
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" (☆38, updated Aug 20, 2024)
- Source code of the paper "Robust Quantization: One Model to Rule Them All" (☆41, updated Mar 24, 2023)
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) (☆18, updated Jun 22, 2022)
- Neural Network Quantization With Fractional Bit-widths (☆11, updated Feb 19, 2021)
- BitSplit post-training quantization (☆50, updated Dec 20, 2021)
- ☆11, updated Sep 7, 2024
- Official implementation of "Generative Low-bitwidth Data Free Quantization" (GDFQ) (☆55, updated Jul 23, 2023)
- A common template for PyTorch projects, easy to extend and modify for new projects (☆13, updated Dec 13, 2022)
- BigBang-Proton is an LLM pretrained on cross-scale, cross-structure, cross-discipline real-world scientific tasks to construct a scienti… (☆22, updated Nov 8, 2025)
- Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs (☆13, updated Apr 3, 2025)
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network (☆74, updated Nov 16, 2020)
- Collections of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com) (☆45, updated Aug 19, 2021)
- Optimizing Deep Convolutional Neural Network with Ternarized Weights and High Accuracy (☆16, updated Jan 27, 2019)
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" (☆72, updated Jul 8, 2025)
- Structured Binary Neural Networks for Image Recognition (☆16, updated Oct 12, 2022)
- Implementation of Direct Preference Optimization (☆17, updated Jul 17, 2023)
- Official code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM" (☆14, updated Dec 27, 2023)
- Implementation of "Towards Effective Low-bitwidth Convolutional Neural Networks" (☆41, updated Sep 17, 2018)
- ☆17, updated Jul 22, 2022
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… (☆68, updated Jun 4, 2024)
- [ECCV 2024] SparseRefine: Sparse Refinement for Efficient High-Resolution Semantic Segmentation (☆14, updated Jan 10, 2025)
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric (☆60, updated Mar 23, 2023)
- ☆19, updated Aug 26, 2021
- Unofficial implementation of "Scalable-Softmax Is Superior for Attention" (☆20, updated May 30, 2025)
- Code for the paper "Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex" (☆21, updated Jul 25, 2020)
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆172, updated Nov 26, 2025)
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning" (☆129, updated Jul 11, 2023)
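Most of the repositories listed above revolve around low-bitwidth weight quantization. As a rough orientation for readers new to the topic, here is a minimal sketch of the common baseline these methods build on: symmetric, per-output-channel uniform quantization in PyTorch. This is an illustrative example only, not the algorithm of QLLM or any specific repository above; the function name `quantize_weight` is made up for this sketch.

```python
import torch

def quantize_weight(w: torch.Tensor, n_bits: int = 4):
    """Symmetric per-output-channel uniform quantization (illustrative sketch).

    Each row of `w` is mapped to integers in [-2^(b-1), 2^(b-1) - 1]
    using one scale per row, then dequantized back to floats.
    """
    qmax = 2 ** (n_bits - 1) - 1  # e.g. 7 for 4-bit
    # One scale per output channel (row), from the max magnitude in that row.
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    # Round to the nearest integer grid point and clamp to the representable range.
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale, q, scale  # dequantized weights, integer codes, scales

w = torch.randn(8, 16)
w_deq, q, scale = quantize_weight(w, n_bits=4)
```

Methods such as QLLM, Outlier Suppression+, and PD-Quant differ mainly in how they go beyond this baseline, e.g. by handling activation outliers or choosing scales to minimize prediction error rather than simple max-magnitude rounding error.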