neuralmagic / deepsparse
Sparsity-aware deep learning inference runtime for CPUs
☆3,077 · Updated 5 months ago
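To ground the listing, here is a minimal sketch of running a model through deepsparse, assuming the Pipeline API; the task name and SparseZoo model stub below are illustrative placeholders rather than verified values.

```python
# Minimal deepsparse inference sketch (assumes `pip install deepsparse`).
from deepsparse import Pipeline

# Hypothetical SparseZoo stub for a pruned + quantized sentiment model;
# the exact stub string is an assumption, not a verified SparseZoo entry.
MODEL_STUB = (
    "zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/"
    "sst2/pruned90_quant-none"
)

# Pipeline.create wires tokenization and post-processing around the
# sparsity-aware engine that executes the ONNX model on CPU.
pipeline = Pipeline.create(task="sentiment_analysis", model_path=MODEL_STUB)
print(pipeline("Sparse inference on CPUs can be surprisingly fast."))
```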
Alternatives and similar repositories for deepsparse:
Users interested in deepsparse are comparing it to the libraries listed below.
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models ☆2,089 · Updated 5 months ago
- Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes ☆376 · Updated 5 months ago
- ML model optimization product to accelerate inference. ☆322 · Updated 9 months ago
- Top-level directory for documentation and general content ☆120 · Updated last month
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM, and Sentence Transformers with easy-to-use hardware optimization… ☆2,667 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch (see the 4-bit loading sketch after this list). ☆6,522 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,669 · Updated last week
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,585 · Updated last month
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,549 · Updated 11 months ago
- Transformer-related optimization, including BERT, GPT ☆5,981 · Updated 9 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,298 · Updated this week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆1,998 · Updated 9 months ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,152 · Updated 3 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,942 · Updated last month
- Foundation Architecture for (M)LLMs ☆3,038 · Updated 9 months ago
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆2,847 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,827 · Updated 11 months ago
- PyTorch extensions for high-performance and large-scale training. ☆3,232 · Updated this week
- 4-bit quantization of LLaMA using GPTQ ☆3,026 · Updated 6 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆2,516 · Updated 3 weeks ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,885 · Updated 2 weeks ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,016 · Updated 9 months ago
- PyTorch-native quantization and sparsity for training and inference ☆1,753 · Updated this week
- Efficient, scalable, and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,671 · Updated 2 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,647 · Updated this week
- FFCV: Fast Forward Computer Vision (and other ML workloads!) ☆2,880 · Updated 7 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,339 · Updated last month
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (a usage sketch follows below). ☆4,620 · Updated this week
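As a concrete instance of the k-bit quantization entry above (bitsandbytes-style 4-bit loading through the 🤗 Transformers integration), here is a minimal sketch; the model name is a placeholder, and the config assumes a transformers version that ships BitsAndBytesConfig.

```python
# Minimal 4-bit model loading via bitsandbytes + Transformers.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize linear weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",                    # placeholder; any causal LM works
    quantization_config=quant_config,
    device_map="auto",                      # needs the accelerate package
)
```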
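The final entry (a GPTQ-based package) is typically driven along these lines; a minimal sketch assuming the AutoGPTQ-style API, with a placeholder model and a single calibration sentence standing in for a real calibration set.

```python
# Minimal GPTQ quantization sketch in the style of AutoGPTQ's basic usage.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

model_id = "facebook/opt-125m"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A real run needs a representative calibration set, not one sentence.
examples = [tokenizer("GPTQ calibrates 4-bit weights on sample text.")]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(examples)               # run GPTQ calibration passes
model.save_quantized("opt-125m-4bit")  # write quantized weights to disk
```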