NART (NART is not A RunTime) is a deep learning inference framework.
☆37 · Mar 2, 2023 · Updated 3 years ago
Alternatives and similar repositories for NART
Users interested in NART are comparing it to the libraries listed below.
- Offline Quantization Tools for Deploy. ☆142 · Dec 28, 2023 · Updated 2 years ago
- Built upon Megatron-Deepspeed and the HuggingFace Trainer, EasyLLM has reorganized the code logic with a focus on usability. While enhancing … ☆49 · Sep 18, 2024 · Updated last year
- ☆11 · Jan 10, 2025 · Updated last year
- ☆17 · Nov 29, 2023 · Updated 2 years ago
- ☆10 · Aug 4, 2020 · Updated 5 years ago
- Model Quantization Benchmark ☆861 · Apr 20, 2025 · Updated 10 months ago
- ☆21 · Feb 11, 2022 · Updated 4 years ago
- A tool for model sparsification based on torch.fx ☆13 · Jun 3, 2024 · Updated last year
- ☆38 · Oct 12, 2024 · Updated last year
- ☆13 · Jun 16, 2024 · Updated last year
- Source code of the paper "Robust Quantization: One Model to Rule Them All" ☆41 · Mar 24, 2023 · Updated 2 years ago
- [ICML 2025] This is the official PyTorch implementation of "OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniv…" ☆27 · Jun 16, 2025 · Updated 9 months ago
- A toolkit for developers to simplify the transformation of nn.Module instances; it now corresponds to torch.fx. ☆13 · Apr 7, 2023 · Updated 2 years ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generative models. ☆688 · Mar 11, 2026 · Updated last week
- ☆37 · Aug 5, 2022 · Updated 3 years ago
- United Perception ☆436 · Dec 5, 2022 · Updated 3 years ago
- [AAAI 2023] Efficient and Accurate Models towards Practical Deep Learning Baseline ☆13 · Nov 29, 2022 · Updated 3 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Dec 13, 2023 · Updated 2 years ago
- A primitive library for neural networks ☆1,367 · Nov 24, 2024 · Updated last year
- ☆38 · Jul 25, 2022 · Updated 3 years ago
- Inference of quantization-aware trained networks using TensorRT ☆84 · Jan 27, 2023 · Updated 3 years ago
- ☆19 · Mar 16, 2022 · Updated 4 years ago
- Improving Post-Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Jun 10, 2021 · Updated 4 years ago
- This code implements the NICE paper. ☆20 · Oct 1, 2018 · Updated 7 years ago
- ONNX Command-Line Toolbox ☆36 · Oct 11, 2024 · Updated last year
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Jul 7, 2022 · Updated 3 years ago
- High Performance FP8 GEMM Kernels for SM89 and later GPUs. ☆20 · Jan 24, 2025 · Updated last year
- A model compression and acceleration toolbox based on PyTorch. ☆331 · Jan 12, 2024 · Updated 2 years ago
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Jul 21, 2023 · Updated 2 years ago
- ☆41 · Mar 31, 2022 · Updated 3 years ago
- Model Quantization Benchmark ☆18 · Sep 30, 2025 · Updated 5 months ago
- This project is the official implementation of our accepted IEEE TPAMI paper "Diverse Sample Generation: Pushing the Limit of Data-free Qu…" ☆15 · Feb 26, 2023 · Updated 3 years ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆39 · Mar 11, 2024 · Updated 2 years ago
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms. ☆515 · Oct 30, 2024 · Updated last year
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti…" ☆51 · Oct 21, 2023 · Updated 2 years ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,945 · Updated this week
- Official Code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM" ☆14 · Dec 27, 2023 · Updated 2 years ago
- A collection of research papers on low-precision training methods ☆64 · May 10, 2025 · Updated 10 months ago
- ComfyUI custom node for lightx2v ☆79 · Updated this week