neuralmagic / deepsparse
Sparsity-aware deep learning inference runtime for CPUs
☆3,158 · Updated 7 months ago
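A minimal sketch of what running a model through DeepSparse looks like, assuming `pip install deepsparse` and an ONNX model exported elsewhere; the model path and input shape below are placeholders, and helper names may differ slightly between releases:

```python
import numpy as np
from deepsparse import compile_model

# Placeholder path to a (preferably sparsified) ONNX model.
model_path = "model.onnx"

# Compile the model for the local CPU and run one batch.
engine = compile_model(model_path, batch_size=1)
inputs = [np.random.rand(1, 3, 224, 224).astype(np.float32)]  # assumed input shape
outputs = engine.run(inputs)
print(outputs[0].shape)
```

The engine delivers its largest speedups on models that have been pruned or quantized with sparsification recipes such as those listed below.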
Alternatives and similar repositories for deepsparse
Users interested in deepsparse are comparing it to the libraries listed below.
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,143 · Updated 7 months ago
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆387 · Updated 7 months ago
- Top-level directory for documentation and general content ☆120 · Updated 7 months ago
- ML model optimization product to accelerate inference. ☆325 · Updated 7 months ago
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,570 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… (see the Optimum sketch after this list) ☆3,250 · Updated 3 weeks ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,586 · Updated last year
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,173 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated last year
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,695 · Updated this week
- PyTorch native quantization and sparsity for training and inference (see the torchao sketch after this list) ☆2,617 · Updated last week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,432 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆833 · Updated 5 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,088 · Updated 6 months ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,115 · Updated this week
- ☆1,025 · Updated 2 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,071 · Updated last year
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,537 · Updated this week
- Inference Llama 2 in one file of pure 🔥 ☆2,115 · Updated last month
- PyTorch extensions for high performance and large scale training. ☆3,393 · Updated 8 months ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,361 · Updated last year
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,892 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,232 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,081 · Updated last week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,920 · Updated this week
- This repository contains the official implementation of the research paper, "FastViT: A Fast Hybrid Vision Transformer using Structural R… ☆1,981 · Updated 2 years ago
- Transformer related optimization, including BERT, GPT ☆6,382 · Updated last year
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!) ☆1,299 · Updated last year
- A pytorch quantization backend for optimum ☆1,021 · Updated last month
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,245 · Updated last year
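As referenced in the 🤗 Optimum entry above, a short sketch of exporting a Transformers checkpoint to ONNX Runtime through Optimum and querying it with the usual pipeline API; the checkpoint name is only an example:

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

# Example checkpoint; any sequence-classification model on the Hub should work.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch weights to ONNX at load time.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Sparse inference on CPUs can be surprisingly fast."))
```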
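And for the torchao entry, a hedged sketch of its one-call weight-only INT8 quantization; the toy model is illustrative and the exact entry points have moved between torchao releases, so treat the names as indicative:

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

# Toy model; in practice this would be a Transformer or CNN.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).eval()

# Replace Linear weights with INT8 variants in place.
quantize_(model, int8_weight_only())

with torch.no_grad():
    out = model(torch.randn(4, 1024))
print(out.shape)
```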