huggingface / optimum-benchmark
🏎️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
⭐327 · Updated 4 months ago
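For context, here is a minimal sketch of a latency/memory benchmark using optimum-benchmark's Python API as documented in its README; class names, options, and defaults may differ across releases, so treat this as an illustrative example rather than a canonical recipe.

```python
# Hedged sketch based on optimum-benchmark's documented Python API;
# the classes and options below may vary between versions.
from optimum_benchmark import (
    Benchmark,
    BenchmarkConfig,
    InferenceConfig,
    ProcessConfig,
    PyTorchConfig,
)
from optimum_benchmark.logging_utils import setup_logging

setup_logging(level="INFO")

if __name__ == "__main__":
    # Run the benchmark in an isolated process.
    launcher_config = ProcessConfig()
    # Measure inference latency and memory footprint.
    scenario_config = InferenceConfig(latency=True, memory=True)
    # PyTorch backend; no_weights=True benchmarks with randomly
    # initialized weights instead of downloading the real checkpoint.
    backend_config = PyTorchConfig(model="gpt2", device="cpu", no_weights=True)
    benchmark_config = BenchmarkConfig(
        name="pytorch_gpt2",
        launcher=launcher_config,
        scenario=scenario_config,
        backend=backend_config,
    )
    benchmark_report = Benchmark.launch(benchmark_config)
    print(benchmark_report)  # per-phase latency/memory metrics
```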
Alternatives and similar repositories for optimum-benchmark
Users interested in optimum-benchmark are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs (vLLM's tagline; see the hedged usage sketch after this list) · ⭐267 · Updated last month
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) · ⭐205 · Updated last week
- Easy and Efficient Quantization for Transformers · ⭐202 · Updated 7 months ago
- An innovative library for efficient LLM inference via low-bit quantization · ⭐352 · Updated last year
- No description provided · ⭐206 · Updated 8 months ago
- No description provided · ⭐328 · Updated last week
- Inference server benchmarking tool · ⭐142 · Updated 4 months ago
- Comparison of Language Model Inference Engines · ⭐239 · Updated last year
- GPTQ inference Triton kernel · ⭐322 · Updated 2 years ago
- Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components · ⭐219 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) · ⭐910 · Updated last month
- A safetensors extension to efficiently store sparse quantized tensors on disk · ⭐237 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference · ⭐379 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens · ⭐992 · Updated last year
- Provides end-to-end model development pipelines for LLMs and multimodal models that can be launched on-prem or cloud-native · ⭐510 · Updated 9 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" · ⭐394 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ⭐713 · Updated last year
- No description provided · ⭐61 · Updated last year
- Code for compression methods for transformers, accompanying our publications · ⭐455 · Updated last year
- No description provided · ⭐125 · Updated last year
- OpenAI-compatible API for the TensorRT-LLM Triton backend · ⭐220 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM · ⭐205 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools · ⭐531 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs · ⭐943 · Updated 3 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs · ⭐93 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai · ⭐86 · Updated 2 weeks ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ⭐402 · Updated last year
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… · ⭐839 · Updated this week
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… · ⭐280 · Updated 2 months ago
- No description provided · ⭐133 · Updated this week
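Several entries above carry vLLM's tagline (the engine itself or forks and plugins built around it). As a hedged illustration, a minimal offline-generation example with vLLM's documented Python API looks roughly like this; the model name and sampling parameters are placeholders, and forked repositories may expose different options.

```python
# Hedged sketch of vLLM's offline inference API; the model name and
# sampling parameters are illustrative placeholders.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Loads the model and sets up vLLM's paged KV-cache management.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```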