huggingface / optimum-amd
AMD related optimizations for transformer models
☆96 · Updated 2 months ago
Alternatives and similar repositories for optimum-amd
Users interested in optimum-amd are comparing it to the libraries listed below.
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- Fast and memory-efficient exact attention ☆205 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 3 weeks ago
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- ☆219 · Updated 11 months ago
- No-code CLI designed for accelerating ONNX workflows ☆222 · Updated 6 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆325 · Updated 3 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆225 · Updated last week
- ☆78 · Updated last year
- ☆171 · Updated 3 weeks ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime ☆184 · Updated 8 months ago
- ☆120 · Updated last year
- ☆113 · Updated last month
- ☆159 · Updated 6 months ago
- Development repository for the Triton language and compiler ☆138 · Updated last week
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆73 · Updated 10 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 6 months ago
- GPTQ inference Triton kernel ☆317 · Updated 2 years ago
- Advanced quantization toolkit for LLMs and VLMs. Support for WOQ, MXFP4, NVFP4, GGUF, Adaptive Schemes and seamless integration with Tra… ☆785 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆279 · Updated 2 years ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆203 · Updated last week
- ☆207 · Updated 7 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆34 · Updated this week
- Easy and Efficient Quantization for Transformers ☆202 · Updated 6 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆212 · Updated 2 months ago
- 👷 Build compute kernels ☆196 · Updated last week
- AI Tensor Engine for ROCm ☆327 · Updated this week
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆253 · Updated last year
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆522 · Updated this week
- Official implementation for "Training LLMs with MXFP4" ☆116 · Updated 8 months ago