huggingface / optimum-benchmark
🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
☆320 · Updated 2 months ago
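As a rough illustration of what optimum-benchmark does, here is a minimal sketch of benchmarking a small model through its Python API. The class names follow the project's README, but exact signatures and available options vary between versions, so treat the details as assumptions rather than a definitive usage guide:

```python
# Minimal sketch of an inference benchmark with optimum-benchmark's Python API.
# Class names follow the project's README; exact signatures may differ between
# versions, so treat them as assumptions.
from optimum_benchmark import (
    Benchmark,
    BenchmarkConfig,
    InferenceConfig,
    ProcessConfig,
    PyTorchConfig,
)
from optimum_benchmark.logging_utils import setup_logging

setup_logging(level="INFO")

if __name__ == "__main__":
    benchmark_config = BenchmarkConfig(
        name="pytorch_gpt2",
        launcher=ProcessConfig(),  # run the benchmark in an isolated process
        scenario=InferenceConfig(latency=True, memory=True),  # what to measure
        backend=PyTorchConfig(model="gpt2", device="cpu", no_weights=True),
    )
    report = Benchmark.launch(benchmark_config)
    print(report.to_dict())
```

The same configuration can also be expressed as Hydra/YAML files and driven from the CLI; the Python form above is just the most compact way to show the backend/scenario/launcher split that the description refers to.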
Alternatives and similar repositories for optimum-benchmark
Users interested in optimum-benchmark are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM sketch after this list) ☆267 · Updated last year
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆201 · Updated last week
- Easy and Efficient Quantization for Transformers ☆203 · Updated 5 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- Comparison of Language Model Inference Engines ☆236 · Updated 11 months ago
- ☆205 · Updated 6 months ago
- ☆317 · Updated last week
- For releasing code related to compression methods for transformers, accompanying our publications ☆452 · Updated 10 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆954 · Updated last year
- GPTQ inference Triton kernel ☆315 · Updated 2 years ago
- Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆217 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆894 · Updated last month
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆278 · Updated 2 years ago
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated last week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆218 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆390 · Updated last year
- Applied AI experiments and examples for PyTorch ☆307 · Updated 3 months ago
- Advanced quantization toolkit for LLMs and VLMs. Native support for WOQ, MXFP4, NVFP4, GGUF, Adaptive Bits and seamless integration with … ☆735 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆513 · Updated this week
- Provides end-to-end model development pipelines for LLMs and multimodal models that can be launched on-prem or cloud-native ☆509 · Updated 7 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆391 · Updated last year
- ☆219 · Updated 10 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆708 · Updated last year
- ☆58 · Updated last year
- A high-performance inference system for large language models, designed for production environments ☆486 · Updated 3 weeks ago
- Inference server benchmarking tool ☆130 · Updated 2 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- ☆122 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated last week
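For contrast with the benchmarking tools above, the vLLM-style entries in the list refer to a serving engine whose offline batch API looks roughly like the following. This is a minimal sketch based on vLLM's quickstart; the model name and sampling values are placeholders:

```python
# Minimal sketch of offline batch inference with vLLM's Python API.
# The model name and sampling values here are arbitrary placeholders.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # loads the model and allocates the KV cache
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each output carries the original prompt and one or more completions.
    print(output.prompt, "->", output.outputs[0].text)
```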