neuralmagic / sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
☆370 · Updated 4 months ago
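The "sparsification recipes" paired with each model are declarative configuration files consumed by training libraries such as SparseML. A minimal sketch of what such a recipe can look like (the modifier names follow SparseML's recipe format, but the hyperparameter values here are illustrative placeholders, not copied from any specific Zoo model):

```yaml
# Illustrative sparsification recipe (placeholder values, not from a real Zoo model).
# Modifiers declare *what* to change during training; the training loop applies them.
version: 1.1.0

modifiers:
  # Train for 100 epochs overall.
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: 100.0

  # Gradual magnitude pruning: ramp sparsity from 5% to 90% between epochs 5 and 40.
  - !GMPruningModifier
    params: __ALL_PRUNABLE__
    init_sparsity: 0.05
    final_sparsity: 0.9
    start_epoch: 5.0
    end_epoch: 40.0
    update_frequency: 0.5
```

Because the recipe travels with the model, the same pruning schedule can be re-applied when fine-tuning the model on new data.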
Related projects
Alternatives and complementary repositories for sparsezoo
- ML model optimization product to accelerate inference. ☆320 · Updated 7 months ago
- Top-level directory for documentation and general content ☆120 · Updated 4 months ago
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,071 · Updated 3 months ago
- Sparsity-aware deep learning inference runtime for CPUs ☆3,028 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆253 · Updated last month
- Prune a model while fine-tuning or training. ☆394 · Updated 2 years ago
- Library for 8-bit optimizers and quantization routines. ☆714 · Updated 2 years ago
- FasterAI: Prune and Distill your models with FastAI and PyTorch ☆243 · Updated 3 weeks ago
- Implementation of a Transformer, but completely in Triton ☆249 · Updated 2 years ago
- Fast sparse deep learning on CPUs ☆51 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆211 · Updated 3 months ago
- Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. Th… ☆328 · Updated this week
- A library to analyze PyTorch traces. ☆307 · Updated this week
- Actively maintained ONNX Optimizer ☆647 · Updated 8 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,535 · Updated 9 months ago
- Pipeline Parallelism for PyTorch ☆726 · Updated 3 months ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆234 · Updated last year
- A GPU performance profiling tool for PyTorch models ☆495 · Updated 3 years ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆409 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,011 · Updated 7 months ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆652 · Updated last week
- Accelerate PyTorch models with ONNX Runtime ☆356 · Updated 2 months ago
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆567 · Updated this week
- A PyTorch quantization backend for Optimum ☆824 · Updated last week
- A library for researching neural network compression and acceleration methods. ☆136 · Updated 2 months ago
- ☆267 · Updated this week
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆338 · Updated this week
- GPU implementation of a fast generalized ANS (asymmetric numeral system) entropy encoder and decoder, with extensions for lossless compre… ☆317 · Updated last week
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆175 · Updated last week
- ☆236 · Updated 3 months ago