huggingface / optimum-amd
AMD related optimizations for transformer models
☆74 · Updated 5 months ago
Alternatives and similar repositories for optimum-amd:
Users interested in optimum-amd are comparing it to the libraries listed below.
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Load compute kernels from the Hub ☆115 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆98 · Updated this week
- ☆68 · Updated 3 weeks ago
- ☆122 · Updated 3 weeks ago
- Inference server benchmarking tool ☆49 · Updated 2 weeks ago
- Fast and memory-efficient exact attention ☆168 · Updated this week
- vLLM performance dashboard ☆27 · Updated 11 months ago
- Google TPU optimizations for transformer models ☆107 · Updated 2 months ago
- ☆70 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton ☆288 · Updated this week
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to ONNX/ONNX Runtime ☆166 · Updated 2 weeks ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆72 · Updated 7 months ago
- [EMNLP Findings 2024] MobileQuant: Mobile-friendly Quantization for On-device Language Models ☆56 · Updated 6 months ago
- ☆30 · Updated this week
- Easy and Efficient Quantization for Transformers ☆196 · Updated 2 months ago
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Repository for CPU kernel generation for LLM inference ☆25 · Updated last year
- High-speed GEMV kernels with up to a 2.7x speedup over the PyTorch baseline ☆105 · Updated 9 months ago
- QuIP quantization ☆51 · Updated last year
- ☆118 · Updated 11 months ago
- PB-LLM: Partially Binarized Large Language Models ☆151 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 6 months ago
- Ahead-of-Time (AOT) Triton math library ☆57 · Updated this week
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs ☆82 · Updated last month
- Python package for rocm-smi-lib ☆20 · Updated 6 months ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆183 · Updated this week
- ☆207 · Updated 2 months ago
- Work in progress. ☆56 · Updated last week
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆113 · Updated 4 months ago