huggingface / optimum-amd
AMD related optimizations for transformer models
☆90 · Updated last month
Alternatives and similar repositories for optimum-amd
Users interested in optimum-amd are comparing it to the libraries listed below.
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆90 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆265 · Updated 11 months ago
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 3 months ago
- ☆218 · Updated 8 months ago
- Fast and memory-efficient exact attention ☆191 · Updated this week
- Repository of model demos using TT-Buda ☆62 · Updated 6 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆315 · Updated 2 weeks ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆167 · Updated this week
- ☆78 · Updated 10 months ago
- Google TPU optimizations for transformers models ☆120 · Updated 8 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to onnx/onnx-runtime ☆180 · Updated 6 months ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 3 months ago
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆198 · Updated this week
- ☆120 · Updated last year
- ☆165 · Updated last week
- ☆100 · Updated last month
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 3 months ago
- ☆152 · Updated 3 months ago
- Development repository for the Triton language and compiler ☆133 · Updated this week
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools ☆40 · Updated 2 months ago
- ☆72 · Updated 6 months ago
- KV cache compression for high-throughput LLM inference ☆141 · Updated 8 months ago
- High-speed and easy-to-use LLM serving framework for local deployment ☆122 · Updated 2 months ago
- ☆200 · Updated 5 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆498 · Updated this week
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 8 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on GeForce GPU on Windows inste… ☆125 · Updated last year