neuralmagic / sparsify
ML model optimization product to accelerate inference.
☆324 · Updated 7 months ago
Alternatives and similar repositories for sparsify
Users interested in sparsify are comparing it to the libraries listed below.
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆387 · Updated 7 months ago
- Top-level directory for documentation and general content ☆120 · Updated 7 months ago
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,145 · Updated 7 months ago
- Sparsity-aware deep learning inference runtime for CPUs ☆3,160 · Updated 7 months ago
- FasterAI: Prune and Distill your models with FastAI and PyTorch ☆252 · Updated this week
- Accelerate PyTorch models with ONNX Runtime ☆368 · Updated 3 weeks ago
- Curated list of awesome material on optimization techniques to make artificial intelligence faster and more efficient 🚀 ☆119 · Updated 2 years ago
- Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆336 · Updated this week
- An open-source efficient deep learning framework/compiler, written in Python. ☆739 · Updated 4 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,071 · Updated last year
- Library for 8-bit optimizers and quantization routines. ☆781 · Updated 3 years ago
- Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research. ☆483 · Updated last month
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆245 · Updated 3 weeks ago
- IDE for PyTorch and its ecosystem ☆393 · Updated last year
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆237 · Updated 2 years ago
- Lite Inference Toolkit (LIT) for PyTorch ☆160 · Updated 4 years ago
- Implementation of a Transformer, but completely in Triton ☆278 · Updated 3 years ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆415 · Updated this week
- Examples for using ONNX Runtime for model training. ☆358 · Updated last year
- TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237. ☆88 · Updated 4 years ago
- Prune a model while finetuning or training. ☆405 · Updated 3 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆214 · Updated 8 months ago
- The merlin dataloader lets you rapidly load tabular data for training deep learning models with TensorFlow, PyTorch, or JAX ☆422 · Updated last year
- Scailable ONNX Python tools ☆98 · Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,585 · Updated last year
- An alternative to convolution in neural networks ☆258 · Updated last year
- The Triton backend for the ONNX Runtime. ☆170 · Updated this week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆502 · Updated last week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆411 · Updated this week
- Lightning HPO & Training Studio App ☆19 · Updated 2 years ago