neuralmagic / sparseml
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
☆2,144 · Updated 8 months ago
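As a rough illustration of the "few lines of code" claim above, here is a minimal sketch of applying a sparsification recipe to an existing PyTorch training loop via sparseml's ScheduledModifierManager. The recipe file name, toy model, and steps_per_epoch value are placeholders, and the exact import path may vary between sparseml versions.

```python
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

# Placeholder model and optimizer standing in for an existing training setup.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Load a pruning/quantization recipe (YAML) and wrap the optimizer so the
# scheduled modifiers run alongside the normal training steps.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")  # hypothetical recipe path
optimizer = manager.modify(model, optimizer, steps_per_epoch=100)

# ... run the usual training loop here ...

manager.finalize(model)  # clean up modifier hooks, leaving the sparsified model
```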
Alternatives and similar repositories for sparseml
Users interested in sparseml are comparing it to the libraries listed below.
- Sparsity-aware deep learning inference runtime for CPUs ☆3,161 · Updated 8 months ago
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆387 · Updated 8 months ago
- Top-level directory for documentation and general content ☆120 · Updated 8 months ago
- ML model optimization product to accelerate inference. ☆325 · Updated 8 months ago
- Your PyTorch AI Factory - Flash enables you to easily configure and run complex AI recipes for over 15 tasks across 7 data domains ☆1,736 · Updated 2 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,587 · Updated last week
- D2Go is a toolkit for efficient deep learning ☆851 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,688 · Updated last year
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,577 · Updated last week
- FFCV: Fast Forward Computer Vision (and other ML workloads!) ☆2,990 · Updated last year
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. ☆1,247 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,072 · Updated last year
- Convert ONNX models to PyTorch. ☆725 · Updated 3 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,939 · Updated this week
- Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research. ☆484 · Updated 2 months ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,702 · Updated 3 weeks ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,123 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,397 · Updated 9 months ago
- Library for 8-bit optimizers and quantization routines. ☆780 · Updated 3 years ago
- Prune a model while finetuning or training. ☆406 · Updated 3 years ago
- The WeightWatcher tool for predicting the accuracy of Deep Neural Networks ☆1,711 · Updated last month
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,277 · Updated 3 weeks ago
- ONNX Optimizer ☆795 · Updated last week
- A coding-free framework built on PyTorch for reproducible deep learning studies. PyTorch Ecosystem. 🏆26 knowledge distillation methods p… ☆1,591 · Updated last month
- PyTorch native quantization and sparsity for training and inference ☆2,657 · Updated last week
- An Open-Source Library for Training Binarized Neural Networks ☆724 · Updated last year
- Examples for using ONNX Runtime for model training. ☆361 · Updated last year
- maximal update parametrization (µP) ☆1,672 · Updated last year
- MADGRAD Optimization Method ☆804 · Updated last year
- An open-source efficient deep learning framework/compiler, written in Python. ☆740 · Updated 5 months ago