neuralmagic / sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
☆388 · Updated 6 months ago
Alternatives and similar repositories for sparsezoo
Users interested in sparsezoo are comparing it to the libraries listed below.
- ML model optimization product to accelerate inference. ☆324 · Updated 6 months ago
- Top-level directory for documentation and general content ☆121 · Updated 6 months ago
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,146 · Updated 6 months ago
- Sparsity-aware deep learning inference runtime for CPUs ☆3,159 · Updated 6 months ago
- Prune a model while finetuning or training. ☆404 · Updated 3 years ago
- Accelerate PyTorch models with ONNX Runtime ☆367 · Updated 9 months ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆737 · Updated 3 months ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆427 · Updated this week
- A research library for PyTorch-based neural network pruning, compression, and more. ☆163 · Updated 3 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last year
- A library for researching neural network compression and acceleration methods. ☆140 · Updated 3 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆412 · Updated last week
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆237 · Updated 2 years ago
- ML model training for edge devices ☆167 · Updated 2 years ago
- Highly optimized inference engine for Binarized Neural Networks ☆251 · Updated last week
- Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆334 · Updated last week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,068 · Updated last year
- Curated list of awesome material on optimization techniques to make artificial intelligence faster and more efficient 🚀 ☆119 · Updated 2 years ago
- An Open-Source Library for Training Binarized Neural Networks ☆723 · Updated last year
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆500 · Updated this week
- Blazing-fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆515 · Updated this week
- Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research. ☆482 · Updated 2 weeks ago
- PyTorch interface for the IPU ☆181 · Updated 2 years ago
- Fast sparse deep learning on CPUs ☆56 · Updated 3 years ago
- Implementation of a Transformer, but completely in Triton ☆277 · Updated 3 years ago
- Library for 8-bit optimizers and quantization routines. ☆779 · Updated 3 years ago
- The Triton backend for the ONNX Runtime. ☆168 · Updated last week
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆201 · Updated last week
- ☆252 · Updated last year