neuralmagic / sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
☆ 392 · Updated 4 months ago
Alternatives and similar repositories for sparsezoo
Users interested in sparsezoo are comparing it to the libraries listed below.
- ML model optimization product to accelerate inference. ☆ 326 · Updated 4 months ago
- Top-level directory for documentation and general content. ☆ 120 · Updated 4 months ago
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models. ☆ 2,146 · Updated 4 months ago
- Sparsity-aware deep learning inference runtime for CPUs. ☆ 3,154 · Updated 4 months ago
- An open-source, efficient deep learning framework/compiler, written in Python. ☆ 731 · Updated last month
- Accelerate PyTorch models with ONNX Runtime. ☆ 364 · Updated 7 months ago
- Prune a model while fine-tuning or training. ☆ 405 · Updated 3 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆ 1,062 · Updated last year
- Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆ 330 · Updated last week
- Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. Th… ☆ 418 · Updated this week
- Library for 8-bit optimizers and quantization routines. ☆ 780 · Updated 3 years ago
- Implementation of a Transformer, but completely in Triton. ☆ 275 · Updated 3 years ago
- FasterAI: Prune and distill your models with FastAI and PyTorch. ☆ 249 · Updated 3 months ago
- GPU implementation of a fast generalized ANS (asymmetric numeral system) entropy encoder and decoder, with extensions for lossless compre… ☆ 353 · Updated 3 months ago
- A research library for PyTorch-based neural network pruning, compression, and more. ☆ 163 · Updated 2 years ago
- ☆ 253 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆ 265 · Updated 11 months ago
- Easily benchmark PyTorch model FLOPs, latency, throughput, allocated GPU memory, and energy consumption. ☆ 107 · Updated 2 years ago
- ☆ 331 · Updated 3 weeks ago
- Fast sparse deep learning on CPUs. ☆ 56 · Updated 3 years ago
- Accelerate your Neural Architecture Search (NAS) through fast, reproducible, and modular research. ☆ 480 · Updated 11 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆ 400 · Updated this week
- Curated list of awesome material on optimization techniques to make artificial intelligence faster and more efficient 🚀 ☆ 119 · Updated 2 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆ 1,582 · Updated last year
- ONNX Optimizer. ☆ 760 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference. ☆ 1,085 · Updated last week
- Examples for using ONNX Runtime for model training. ☆ 348 · Updated 11 months ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆ 236 · Updated 2 years ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools. ☆ 498 · Updated this week
- An Open-Source Library for Training Binarized Neural Networks. ☆ 718 · Updated last year
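Most of the entries above revolve around sparsification: zeroing out low-importance weights so a sparsity-aware runtime can skip them. As a rough illustration of the shared core idea — not the API of any library listed here (the function name `magnitude_prune` is hypothetical) — a minimal sketch of unstructured magnitude pruning in plain Python:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning).

    weights:  flat list of floats
    sparsity: fraction in [0, 1] of weights to set to zero
    """
    k = int(len(weights) * sparsity)  # how many weights to zero
    # Indices of the k weights with the smallest absolute value
    smallest = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k]
    pruned = list(weights)
    for i in smallest:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = magnitude_prune(w, 0.5)
print(pruned)  # the three smallest-magnitude entries become 0.0
```

Production libraries in this list typically apply such masks gradually over training steps (per a recipe) and combine them with quantization, but the selection criterion — smallest absolute magnitude first — is the same starting point.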