ROCm / bitsandbytes
8-bit CUDA functions for PyTorch
☆48 · Updated 2 months ago
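For context, bitsandbytes is typically used as a drop-in replacement for stock PyTorch optimizers and linear layers. Below is a minimal sketch, assuming this ROCm port mirrors the upstream bitsandbytes Python API (`bnb.optim.Adam8bit`, `bnb.nn.Linear8bitLt`); exact availability may differ in the HIP build.

```python
# Minimal sketch: 8-bit optimizer state and an 8-bit linear layer via bitsandbytes.
# Assumes the ROCm port exposes the same Python API as upstream bitsandbytes;
# on AMD GPUs, torch's "cuda" device maps to the HIP/ROCm backend.
import torch
import torch.nn as nn
import bitsandbytes as bnb

# --- 8-bit optimizer: drop-in replacement for torch.optim.Adam ---
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)  # optimizer state kept in 8 bits

x = torch.randn(8, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()

# --- 8-bit linear layer (LLM.int8()-style) for memory-efficient inference ---
lin8 = bnb.nn.Linear8bitLt(4096, 1024, has_fp16_weights=False).cuda()
with torch.no_grad():
    y = lin8(torch.randn(8, 4096, device="cuda", dtype=torch.float16))
```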
Alternatives and similar repositories for bitsandbytes:
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆171 · Updated this week
- AMD related optimizations for transformer models ☆75 · Updated 5 months ago
- AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆11 · Updated 10 months ago
- Development repository for the Triton language and compiler ☆118 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆74 · Updated this week
- a simple Flash Attention v2 implementation with ROCM (RDNA3 GPU, roc wmma), mainly used for stable diffusion (ComfyUI) in Windows ZLUDA en… ☆38 · Updated 7 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆29 · Updated this week
- 8-bit CUDA functions for PyTorch, ported to HIP for use in AMD GPUs ☆49 · Updated 2 years ago
- AMD SMI ☆61 · Updated this week
- AI Tensor Engine for ROCm ☆168 · Updated this week
- 8-bit CUDA functions for PyTorch Rocm compatible ☆39 · Updated last year
- ☆29 · Updated this week
- oneCCL Bindings for Pytorch* ☆94 · Updated last week
- Ahead of Time (AOT) Triton Math Library ☆57 · Updated last week
- hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditiona… ☆90 · Updated this week
- ☆126 · Updated 3 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆62 · Updated last month
- OpenAI Triton backend for Intel® GPUs ☆179 · Updated this week
- ☆68 · Updated 3 weeks ago
- ☆38 · Updated this week
- An efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆248 · Updated 5 months ago
- ☆20 · Updated 3 weeks ago
- ☆22 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆18 · Updated this week
- ☆46 · Updated last week
- Explore training for quantized models ☆17 · Updated 3 months ago
- A collection of examples for the ROCm software stack ☆203 · Updated last week
- ☆118 · Updated last year
- High-speed GEMV kernels, at most 2.7x speedup compared to pytorch baseline. ☆106 · Updated 9 months ago
- ☆20 · Updated 2 weeks ago