intel / neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime (a minimal usage sketch follows below)
☆2,533 · Updated last week
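As a rough orientation for the entries below, here is a minimal sketch of post-training static INT8 quantization with neural-compressor, assuming the 2.x `PostTrainingQuantConfig`/`quantization.fit` API (the 3.x releases expose different entry points); the toy model and calibration loader are stand-ins for a real workload.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

# Toy FP32 model and calibration data; replace with a real model/loader.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4)
)
calib = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.zeros(64, dtype=torch.long)),
    batch_size=8,
)

# Static PTQ: observers collect activation ranges from the calibration
# loader, then weights and activations are converted to INT8.
conf = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(model=model, conf=conf, calib_dataloader=calib)
q_model.save("./int8_model")
```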
Alternatives and similar repositories for neural-compressor
Users interested in neural-compressor often compare it to the libraries listed below.
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,167 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆2,954 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (a pure-PyTorch sketch of the smoothing step follows this list) ☆1,562 · Updated last year
- A Python package extending official PyTorch to easily obtain extra performance on Intel platforms ☆1,993 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆2,531 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (a group-wise INT4 weight sketch follows this list) ☆3,362 · Updated 4 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆513 · Updated this week
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. … ☆1,578 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,109 · Updated this week
- ONNX Optimizer ☆780 · Updated last month
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,504 · Updated last week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,221 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆6,355 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,079 · Updated 5 months ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆3,188 · Updated 2 weeks ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,894 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,191 · Updated last week
- A PyTorch quantization backend for Optimum ☆1,011 · Updated last week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,067 · Updated last year
- Reference implementations of MLPerf® inference benchmarks ☆1,495 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,585 · Updated last year
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,842 · Updated 2 weeks ago
- The Triton TensorRT-LLM Backend ☆910 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,487 · Updated this week
- Sparsity-aware deep learning inference runtime for CPUs ☆3,161 · Updated 6 months ago
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,695 · Updated last month
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,688 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,276 · Updated 6 months ago
- Convert ONNX models to PyTorch. ☆710 · Updated last month
- TinyChatEngine: On-Device LLM Inference Library ☆929 · Updated last year
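As referenced in the SmoothQuant entry above, here is a minimal pure-PyTorch sketch of its smoothing step: per-channel scales migrate activation outliers into the weights so both tensors quantize well to INT8. The function name, toy shapes, and stand-in calibration statistics are illustrative assumptions, not the repo's API.

```python
import torch

def smoothquant_scales(act_absmax: torch.Tensor, weight: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    """Per-input-channel scales s_j = max|X_j|^alpha / max|W_j|^(1-alpha).

    act_absmax: calibrated per-channel activation abs-max, shape [in_features]
    weight:     linear layer weight, shape [out_features, in_features]
    """
    w_absmax = weight.abs().amax(dim=0)   # abs-max per input channel
    s = act_absmax.pow(alpha) / w_absmax.pow(1 - alpha)
    return s.clamp(min=1e-5)

# X' = X / s and W' = W * s leave X @ W.T mathematically unchanged,
# but flatten activation outliers before INT8 quantization.
W = torch.randn(8, 16)
act_stats = torch.rand(16) * 10           # stand-in calibration stats
s = smoothquant_scales(act_stats, W)
W_smoothed = W * s                        # fold the scales into the weights
x = torch.randn(4, 16)
assert torch.allclose(x @ W.T, (x / s) @ W_smoothed.T, atol=1e-4)
```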
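The AWQ, GPTQ, and AutoAWQ entries above all store weights in a group-wise low-bit format. The sketch below, again a hedged illustration rather than any repo's code, shows symmetric INT4 quantization with one scale per group; real kernels additionally pack two 4-bit values per byte instead of storing int8.

```python
import torch

def quantize_int4_groupwise(w: torch.Tensor, group_size: int = 128):
    """Symmetric 4-bit weight quantization with a per-group scale."""
    out_f, in_f = w.shape
    assert in_f % group_size == 0
    g = w.reshape(out_f, in_f // group_size, group_size)
    # Map each group's abs-max to 7 so values land in the int4 range [-8, 7].
    scale = g.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(g / scale), -8, 7).to(torch.int8)
    return q.reshape(out_f, in_f), scale.squeeze(-1)

def dequantize_int4_groupwise(q: torch.Tensor, scale: torch.Tensor,
                              group_size: int = 128) -> torch.Tensor:
    out_f, in_f = q.shape
    g = q.reshape(out_f, in_f // group_size, group_size).float()
    return (g * scale.unsqueeze(-1)).reshape(out_f, in_f)

w = torch.randn(32, 256)
q, s = quantize_int4_groupwise(w)
w_hat = dequantize_int4_groupwise(q, s)
print((w_hat - w).abs().mean())  # mean reconstruction error of the format
```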