ROCm / xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
☆34Updated last week
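xformers exposes these building blocks through a composable ops API. A minimal sketch of its memory-efficient attention entry point, assuming the upstream facebookresearch/xformers `xformers.ops` interface (the ROCm fork is expected to mirror it, but check the fork's docs):

```python
# Sketch of xformers' memory-efficient attention, assuming the upstream
# xformers.ops API; shapes and defaults taken from facebookresearch/xformers.
import torch
import xformers.ops as xops

# xformers expects (batch, seq_len, num_heads, head_dim) tensors.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Dispatches to the fastest available kernel for the hardware;
# attn_bias selects masking (None = full bidirectional attention),
# p is the attention dropout probability.
out = xops.memory_efficient_attention(q, k, v, attn_bias=None, p=0.0)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```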
Alternatives and similar repositories for xformers
Users interested in xformers compare it to the libraries listed below.
- 8-bit CUDA functions for PyTorch ☆70 · Updated 4 months ago
- AMD-related optimizations for transformer models ☆97 · Updated 3 months ago
- Fast and memory-efficient exact attention ☆214 · Updated this week (see the attention sketch after this list)
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 11 months ago
- A simple Flash Attention v2 implementation for ROCm (RDNA3 GPUs, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆51 · Updated last year
- ☆79 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆114 · Updated this week
- ☆71 · Updated 7 months ago
- Development repository for the Triton language and compiler ☆140 · Updated this week
- ☆163 · Updated 7 months ago
- ☆18 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆74 · Updated last year
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters ☆56 · Updated last year
- Ahead-of-Time (AOT) Triton math library ☆88 · Updated 2 weeks ago
- QuIP quantization ☆61 · Updated last year
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆50 · Updated 2 weeks ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Updated last year
- ☆46 · Updated 8 months ago
- PyTorch half-precision GEMM lib with fused optional bias + optional ReLU/GELU ☆78 · Updated last year
- Lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs ☆27 · Updated last year
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆113 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆244 · Updated this week
- ☆137 · Updated last week
- ☆64 · Updated 8 months ago
- ☆119 · Updated last month
- Use safetensors with ONNX 🤗 ☆87 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- ☆71 · Updated 10 months ago
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆18 · Updated last year
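The "fast and memory-efficient exact attention" entry above is a FlashAttention port. A hedged sketch of the common calling convention, assuming the upstream Dao-AILab/flash-attention `flash_attn_func` API (ROCm forks generally keep this signature, but verify against the fork), with PyTorch's built-in fused attention as a portable fallback:

```python
# Sketch of FlashAttention-style usage, assuming the upstream
# Dao-AILab/flash-attention API; verify the ROCm fork before relying on it.
import torch

# (batch, seq_len, num_heads, head_dim) layout, fp16 on GPU.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

try:
    from flash_attn import flash_attn_func
    # Exact attention computed tile-by-tile, never materializing
    # the full seq_len x seq_len score matrix.
    out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
except ImportError:
    # Fallback: PyTorch's fused SDPA, which expects
    # (batch, num_heads, seq_len, head_dim) instead.
    out = torch.nn.functional.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2),
        is_causal=True,
    ).transpose(1, 2)

print(out.shape)  # torch.Size([2, 1024, 8, 64])
```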