huggingface / optimum-benchmark
🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.
⭐307 · Updated 2 months ago
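For context, a minimal sketch of what a run with optimum-benchmark can look like, based on the Python API shown in the project's README; the class names, arguments, and defaults below are assumptions and may differ between versions:

```python
# Minimal sketch of an optimum-benchmark run (PyTorch backend, inference scenario).
# Based on the Python API advertised in the project's README; exact class names
# and arguments are assumptions and may vary across versions.
from optimum_benchmark import (
    Benchmark,
    BenchmarkConfig,
    InferenceConfig,
    ProcessConfig,
    PyTorchConfig,
)

if __name__ == "__main__":
    config = BenchmarkConfig(
        name="pytorch_gpt2",                                  # arbitrary run name
        launcher=ProcessConfig(),                             # run in an isolated process
        scenario=InferenceConfig(latency=True, memory=True),  # track latency and memory
        backend=PyTorchConfig(model="gpt2", device="cpu"),    # model and device under test
    )
    report = Benchmark.launch(config)  # executes the benchmark and returns a report
    print(report)
```

The same benchmark can also be driven from the command line via Hydra config files, which is the workflow the repository's examples use.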
Alternatives and similar repositories for optimum-benchmark
Users interested in optimum-benchmark are comparing it to the libraries listed below
- A high-throughput and memory-efficient inference and serving engine for LLMs ⭐265 · Updated 9 months ago
- Easy and Efficient Quantization for Transformers ⭐198 · Updated last month
- ⭐195 · Updated 2 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ⭐191 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ⭐349 · Updated 11 months ago
- GPTQ inference Triton kernel ⭐303 · Updated 2 years ago
- Official implementation of Half-Quadratic Quantization (HQQ) ⭐855 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ⭐868 · Updated 10 months ago
- ⭐280 · Updated this week
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ⭐551 · Updated this week
- For releasing code related to compression methods for transformers, accompanying our publications ⭐437 · Updated 6 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ⭐362 · Updated 11 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ⭐376 · Updated last year
- OpenAI compatible API for TensorRT LLM triton backend ⭐209 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ⭐206 · Updated this week
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ⭐697 · Updated 11 months ago
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. ⭐507 · Updated 3 months ago
- Comparison of Language Model Inference Engines ⭐222 · Updated 7 months ago
- ⭐120 · Updated last year
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to onnx/onnx-runtime ⭐175 · Updated 4 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ⭐730 · Updated 4 months ago
- ⭐549 · Updated 9 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ⭐141 · Updated this week
- Inference server benchmarking tool ⭐87 · Updated 3 months ago
- Official PyTorch implementation of QA-LoRA ⭐138 · Updated last year
- ⭐58 · Updated 10 months ago
- Fast low-bit matmul kernels in Triton ⭐338 · Updated this week
- A family of compressed models obtained via pruning and knowledge distillation ⭐347 · Updated 8 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ⭐87 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ⭐438 · Updated this week