neuralmagic / deepsparse
Sparsity-aware deep learning inference runtime for CPUs
☆3,158 · Updated 3 months ago
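For context, DeepSparse is typically driven from Python. Below is a minimal sketch of its Pipeline interface; the task and model path are illustrative placeholders, not shipped defaults.

```python
# Minimal sketch of CPU inference with DeepSparse (pip install deepsparse).
# The model path below is a hypothetical placeholder; point it at a sparse
# ONNX model, e.g. one exported with SparseML.
from deepsparse import Pipeline

# Compile the sparse ONNX model for this CPU and wrap it in a task pipeline.
sentiment = Pipeline.create(
    task="sentiment-analysis",
    model_path="./model.onnx",  # placeholder path
)

# Run inference; sparsity-aware kernels execute entirely on the CPU.
print(sentiment("DeepSparse runs transformer inference on commodity CPUs."))
```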
Alternatives and similar repositories for deepsparse
Users interested in deepsparse are comparing it to the libraries listed below.
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,147 · Updated 3 months ago
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆391 · Updated 3 months ago
- Top-level directory for documentation and general content ☆120 · Updated 3 months ago
- ML model optimization product to accelerate inference. ☆326 · Updated 3 months ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,170 · Updated 11 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,492 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆3,075 · Updated last week
- PyTorch native quantization and sparsity for training and inference ☆2,341 · Updated last week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,582 · Updated last year
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,675 · Updated 3 weeks ago
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,410 · Updated last week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated 10 months ago
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,877 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,053 · Updated 2 months ago
- ☆1,029 · Updated last year
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,106 · Updated this week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,180 · Updated last year
- ☆1,002 · Updated 7 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,584 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,246 · Updated 2 months ago
- Simple, safe way to store and distribute tensors ☆3,446 · Updated last week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,315 · Updated last month
- A PyTorch quantization backend for Optimum ☆987 · Updated 3 weeks ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,857 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆816 · Updated last month
- PyTorch extensions for high performance and large scale training. ☆3,369 · Updated 4 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆489 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,898 · Updated last year
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,062 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,300 · Updated last year