[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
☆23 · Mar 15, 2024 · Updated last year
Alternatives and similar repositories for smoothquantplus
Users interested in smoothquantplus are comparing it to the repositories listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆12 · Nov 14, 2025 · Updated 3 months ago
- ☆30 · Jul 22, 2024 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs ☆111 · Apr 7, 2025 · Updated 10 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Dec 13, 2023 · Updated 2 years ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆54 · Mar 27, 2024 · Updated last year
- ☆85 · Jan 23, 2025 · Updated last year
- MUA-RL: Multi-Turn User-Interacting Agent Reinforcement Learning for Agentic Tool Use ☆57 · Nov 5, 2025 · Updated 3 months ago
- ☆21 · Feb 5, 2024 · Updated 2 years ago
- A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction" ☆22 · May 29, 2024 · Updated last year
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆220 · Dec 15, 2023 · Updated 2 years ago
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆27 · Feb 26, 2024 · Updated 2 years ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆251 · Mar 15, 2024 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Mar 6, 2024 · Updated last year
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to ONNX/ONNX Runtime ☆184 · Apr 2, 2025 · Updated 11 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Feb 22, 2026 · Updated last week
- ☆129 · Jan 22, 2024 · Updated 2 years ago
- Official implementation of the ICLR 2024 paper AffineQuant ☆28 · Mar 30, 2024 · Updated last year
- Convert PaddleOCR to TorchOCR: ppocr-v3, ppocr-v4, ONNX, OpenVINO ☆33 · Aug 16, 2023 · Updated 2 years ago
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆28 · Dec 6, 2023 · Updated 2 years ago
- Model optimizer used in Adlik ☆42 · May 23, 2023 · Updated 2 years ago
- Patches for Hugging Face Transformers to save memory ☆34 · Jun 2, 2025 · Updated 9 months ago
- Vision-Language Models Toolbox: your all-in-one solution for multimodal research and experimentation ☆12 · Feb 16, 2025 · Updated last year
- OAuth authentication plugin for personal coding assistance with ChatGPT Plus/Pro subscriptions - uses OpenAI's official authentication me… ☆23 · Feb 3, 2026 · Updated last month
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆39 · Mar 11, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Nov 26, 2025 · Updated 3 months ago
- ☆38 · Jun 14, 2023 · Updated 2 years ago
- Repository used for publishing staged-rollout (canary) test releases ☆60 · Updated this week
- ☆13 · Updated this week
- Open-source Human Feedback Library ☆11 · Oct 25, 2023 · Updated 2 years ago
- ☆22 · Dec 23, 2025 · Updated 2 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆41 · Aug 4, 2023 · Updated 2 years ago
- Baidu QA dataset with 1 million question-answer pairs ☆45 · Nov 30, 2023 · Updated 2 years ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Feb 29, 2024 · Updated 2 years ago
- nanodet INT8 quantization; measured inference at 2 ms per frame! ☆36 · Apr 23, 2021 · Updated 4 years ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆43 · Oct 20, 2023 · Updated 2 years ago
- A simplified flash-attention implementation using CUTLASS, intended for teaching ☆58 · Aug 12, 2024 · Updated last year
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆277 · Aug 31, 2024 · Updated last year
- EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation ☆27 · Jul 30, 2025 · Updated 7 months ago
- A Python word segmenter for Vietnamese ☆10 · Nov 14, 2019 · Updated 6 years ago
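Most of the repositories above revolve around SmoothQuant-style post-training quantization. The core trick, migrating activation outliers into the weights via per-channel scaling, can be sketched in a few lines of NumPy. This is a hedged illustration of the paper's smoothing transform only, not code from any listed repository; the shapes, the injected outlier, and the `alpha = 0.5` migration strength are assumptions chosen for demonstration.

```python
import numpy as np

# Sketch of the SmoothQuant smoothing transform (illustrative, not any
# repo's actual code). Per-channel scales s_j move activation outliers
# into the weights so both tensors become easier to quantize to INT8.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))   # activations: (tokens, channels)
X[:, 3] *= 50.0                # inject an outlier channel, as LLMs exhibit
W = rng.normal(size=(16, 4))   # weights: (channels, out_features)

alpha = 0.5  # assumed migration strength (the paper's default)
s = (np.abs(X).max(axis=0) ** alpha) / (np.abs(W).max(axis=1) ** (1 - alpha))

X_smooth = X / s               # X' = X diag(s)^-1
W_smooth = W * s[:, None]      # W' = diag(s) W

# The layer output is mathematically unchanged...
assert np.allclose(X @ W, X_smooth @ W_smooth)
# ...while the activation outlier range shrinks substantially:
print(np.abs(X).max(), "->", np.abs(X_smooth).max())
```

Because `X @ W == (X diag(s)^-1)(diag(s) W)`, the transform is lossless in exact arithmetic; the gain is that both `X_smooth` and `W_smooth` have flatter per-channel ranges, which reduces quantization error when each is later rounded to low-bit integers.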