ROCm / bitsandbytes
8-bit CUDA functions for PyTorch
☆66 · Updated last month
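For context, a minimal sketch (not taken from this page) of what "8-bit CUDA functions for PyTorch" means in practice, assuming a working bitsandbytes install; the ROCm fork is intended to expose the same Python API as the upstream CUDA package:

```python
# Sketch: 8-bit optimizer from bitsandbytes with a plain PyTorch model.
# Assumes bitsandbytes is installed and a GPU (CUDA or ROCm/HIP) is visible.
import torch
import torch.nn as nn
import bitsandbytes as bnb

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda()

# Adam8bit stores optimizer state in 8-bit, reducing optimizer memory
# compared to the standard 32-bit Adam state.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

x = torch.randn(16, 512, device="cuda")
loss = model(x).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```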
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆194 · Updated last week
- Development repository for the Triton language and compiler ☆136 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆107 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆12 · Updated last year
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆514 · Updated this week
- AMD-related optimizations for transformer models ☆93 · Updated 2 weeks ago
- AI Tensor Engine for ROCm ☆292 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆80 · Updated last week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆114 · Updated this week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆24 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for stable diffusion (ComfyUI) in Windows ZLUDA en… ☆48 · Updated last year
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs ☆104 · Updated 6 months ago
- ☆126 · Updated this week
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- ☆152 · Updated 4 months ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆147 · Updated this week
- ☆409 · Updated 6 months ago
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆372 · Updated last year
- AMD SMI ☆91 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆215 · Updated this week
- A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch ☆23 · Updated 3 weeks ago
- AMD's graph optimization engine ☆262 · Updated this week
- Fast low-bit matmul kernels in Triton ☆385 · Updated last week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆703 · Updated 2 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆266 · Updated 3 months ago
- ☆51 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆183 · Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆212 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 4 months ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆388 · Updated this week
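Many of the entries above are ROCm ports that reuse PyTorch's CUDA API over HIP. A minimal sketch (an assumption, not from this page) of how to check which backend an installed PyTorch build targets before trying these libraries:

```python
# Sketch: report whether the installed torch build is ROCm/HIP, CUDA, or CPU-only.
import torch

if torch.version.hip is not None:
    print("ROCm/HIP build:", torch.version.hip)
elif torch.version.cuda is not None:
    print("CUDA build:", torch.version.cuda)
else:
    print("CPU-only build")

# Only query the device name if a GPU is actually visible.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```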