neuralmagic / deepsparse
Sparsity-aware deep learning inference runtime for CPUs
☆3,152 · Updated 3 weeks ago
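For context, this is roughly how DeepSparse is invoked; a minimal sketch assuming its `Pipeline` API, with a hypothetical SparseZoo stub (substitute a real stub or a local ONNX path):

```python
# Minimal DeepSparse sketch: run a sparse ONNX model through a task pipeline.
# "zoo:example/sparse-model" is a hypothetical stub; use a real SparseZoo stub
# or the path to a local ONNX file.
from deepsparse import Pipeline

sentiment = Pipeline.create(
    task="sentiment-analysis",
    model_path="zoo:example/sparse-model",
)
print(sentiment(["Sparse inference on CPUs can be surprisingly fast."]))
```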
Alternatives and similar repositories for deepsparse
Users interested in deepsparse are comparing it to the libraries listed below.
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,142 · Updated 3 weeks ago
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆392 · Updated 3 weeks ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆2,950 · Updated this week
- ML model optimization product to accelerate inference. ☆325 · Updated 3 weeks ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,169 · Updated 8 months ago
- Top-level directory for documentation and general content ☆122 · Updated 3 weeks ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,434 · Updated this week
- ☆1,027 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch (see the sketch after this list). ☆7,150 · Updated this week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,688 · Updated 8 months ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,647 · Updated 2 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,572 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,020 · Updated 2 months ago
- Thunder gives you PyTorch models superpowers for training and inference. Unlock out-of-the-box optimizations for performance, memory and … ☆1,367 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,051 · Updated last year
- Transformer related optimization, including BERT, GPT ☆6,211 · Updated last year
- ☆980 · Updated 4 months ago
- PyTorch extensions for high performance and large scale training. ☆3,331 · Updated last month
- Python bindings for the Transformer models implemented in C/C++ using GGML library. ☆1,867 · Updated last year
- Simple, safe way to store and distribute tensors ☆3,311 · Updated last week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,836 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,081 · Updated last week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,280 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆801 · Updated 4 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,131 · Updated last year
- Fast inference engine for Transformer models ☆3,867 · Updated 2 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,549 · Updated 11 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,860 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,193 · Updated last month
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,507 · Updated this week
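As a concrete point of comparison, the k-bit quantization library above (bitsandbytes) is most commonly used through its 🤗 Transformers integration; a minimal sketch, assuming `transformers`, `bitsandbytes`, and a CUDA-capable GPU, with `facebook/opt-350m` purely as an example model:

```python
# Load a causal LM with its linear layers quantized to 4-bit at load time
# via the transformers + bitsandbytes integration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4-bit (NF4 by default)
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",                   # example model; any Hub causal LM works
    quantization_config=quant_config,
    device_map="auto",
)
```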