clevercool / SQuant — Links
SQuant [ICLR 2022] · ☆130 · Updated 2 years ago
Alternatives and similar repositories for SQuant
Users interested in SQuant are comparing it to the repositories listed below.
- ☆105 · Updated 4 years ago
- Pruning Filter in Filter (NeurIPS 2020) · ☆148 · Updated last year
- QAT (quantization-aware training) for classification with MQBench · ☆28 · Updated 3 years ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations · ☆232 · Updated 11 months ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction · ☆93 · Updated 10 months ago
- [NeurIPS 2022] "Advancing Model Pruning via Bi-level Optimization" by Yihua Zhang*, Yuguang Yao*, Parikshit Ram, Pu Zhao, Tianlong Chen, Min… · ☆117 · Updated 2 years ago
- Support mixed-precision inference with vLLM · ☆84 · Updated 2 months ago
- Mixed-precision inference with TensorRT-LLM · ☆80 · Updated 10 months ago
- ☆96 · Updated 4 years ago
- [ICLR 2025 🔥] SVD-LLM & [NAACL 2025 🔥] SVD-LLM V2 · ☆244 · Updated 3 weeks ago
- Official implementation of "Accel-GNN: High-Performance GPU Accelerator Design for Graph Neural Networks" · ☆51 · Updated 5 months ago
- Official implementation of "MaxK-GNN: Extremely Fast GPU Kernel Design for Accelerating Graph Neural Networks Training" · ☆39 · Updated last year
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache · ☆42 · Updated last year
- A PyTorch implementation of the CVPR 2021 paper "RSG: A Simple but Effective Module for Learning Imbalanced Datasets" · ☆107 · Updated 3 years ago
- KFunca: A minimalist, high-performance GPU-based automatic differentiation framework · ☆28 · Updated last month
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" · ☆34 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆256 · Updated 4 months ago
- Build CUDA Neural Network From Scratch · ☆21 · Updated last year
- Official implementation of the paper "SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training" · ☆40 · Updated 9 months ago
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. · ☆57 · Updated 2 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization · ☆95 · Updated 3 years ago
- Code for an ICML 2021 submission · ☆34 · Updated 4 years ago
- Explainable Person Re-Identification with Attribute-guided Metric Distillation · ☆99 · Updated 3 years ago
- [ICML 2023] Official implementation of our accepted ICML 2023 paper "BiBench: Benchmarking and Analyzing Network Binar…" · ☆56 · Updated last year
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs · ☆71 · Updated 2 months ago
- Post-training sparsity-aware quantization · ☆34 · Updated 2 years ago
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs · ☆184 · Updated last month
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ☆262 · Updated last year
- Official implementation of "Towards Efficient Visual Adaption via Structural Re-parameterization" · ☆184 · Updated last year
- ☆19 · Updated 4 years ago