neuralmagic / sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
☆387 · Updated 8 months ago
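As a rough illustration of what "matching sparsification recipes" looks like in practice, the sketch below fetches a sparse model stub with the SparseZoo Python client. The stub string and the `download()`/`path` calls are assumptions based on the project's documented usage and may differ between SparseZoo versions.

```python
# Minimal sketch, assuming `pip install sparsezoo`.
# NOTE: the stub and attribute names below are assumptions; check the
# SparseZoo docs / sparsezoo.neuralmagic.com for stubs valid in your version.
from sparsezoo import Model

# Hypothetical stub for a heavily pruned ResNet-50 classifier.
stub = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95-none"

model = Model(stub)
model.download()   # pull model files (ONNX, framework checkpoints, recipes) locally
print(model.path)  # local directory containing the downloaded artifacts
```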
Alternatives and similar repositories for sparsezoo
Users interested in sparsezoo are comparing it to the libraries listed below.
- ML model optimization product to accelerate inference. ☆325 · Updated 8 months ago
- Top-level directory for documentation and general content ☆120 · Updated 8 months ago
- Prune a model while finetuning or training (illustrated in the sketch after this list). ☆406 · Updated 3 years ago
- Accelerate PyTorch models with ONNX Runtime ☆367 · Updated this week
- Library for 8-bit optimizers and quantization routines. ☆780 · Updated 3 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆237 · Updated 2 years ago
- An open-source efficient deep learning framework/compiler, written in python. ☆740 · Updated 5 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 months ago
- FasterAI: Prune and Distill your models with FastAI and PyTorch ☆252 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,072 · Updated last year
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆420 · Updated this week
- Implementation of a Transformer, but completely in Triton ☆279 · Updated 3 years ago
- Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. Th… ☆432 · Updated this week
- ONNX Optimizer ☆795 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆327 · Updated 4 months ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆87 · Updated last year
- Curated list of awesome material on optimization techniques to make artificial intelligence faster and more efficient 🚀 ☆119 · Updated 2 years ago
- Recipes are a standard, well supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆340 · Updated this week
- Highly optimized inference engine for Binarized Neural Networks ☆251 · Updated 2 weeks ago
- Fast sparse deep learning on CPUs ☆56 · Updated 3 years ago
- Easily benchmark PyTorch model FLOPs, latency, throughput, allocated gpu memory and energy consumption ☆109 · Updated 2 years ago
- Common utilities for ONNX converters ☆294 · Updated last month
- Training material for IPU users: tutorials, feature examples, simple applications ☆88 · Updated 2 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,586 · Updated last week
- ML model training for edge devices ☆168 · Updated 2 years ago
- An Open-Source Library for Training Binarized Neural Networks ☆726 · Updated last year
- ☆208 · Updated 4 years ago
- Lite Inference Toolkit (LIT) for PyTorch ☆160 · Updated 4 years ago
- A research library for pytorch-based neural network pruning, compression, and more. ☆162 · Updated 3 years ago
- A code generator from ONNX to PyTorch code ☆142 · Updated 3 years ago
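Several of the entries above center on pruning while finetuning or training. As a neutral sketch of the underlying idea, the example below uses PyTorch's built-in `torch.nn.utils.prune` utilities rather than any of the listed libraries; the layer sizes and 90% sparsity target are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; layers and the 90% sparsity target are arbitrary for illustration.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Apply unstructured L1 (magnitude) pruning to every Linear weight.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# The mask is applied via a forward pre-hook, so training/finetuning can
# continue while the pruned weights stay at zero.
x = torch.randn(4, 128)
loss = model(x).sum()
loss.backward()

# Fold the mask into the weight tensors to make the pruning permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
total = sum(m.weight.numel() for m in linears)
zeros = sum((m.weight == 0).sum().item() for m in linears)
print(f"sparsity: {zeros / total:.2%}")
```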