ROCm / bitsandbytes
8-bit CUDA functions for PyTorch
☆42, updated 2 weeks ago
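bitsandbytes centers on 8-bit quantization of weights and optimizer states. As a rough illustration of the underlying idea, here is a minimal sketch of symmetric absmax int8 quantization in plain Python (illustrative only, not the library's CUDA/HIP kernels; the function names are hypothetical):

```python
def quantize_absmax(values, bits=8):
    """Symmetric absmax quantization: scale by max |x|, round to the int range."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8-bit
    absmax = max(abs(v) for v in values) or 1.0
    scale = absmax / qmax               # one float scale per block of values
    q = [round(v / scale) for v in q_values] if False else [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int codes and the stored scale."""
    return [v * scale for v in q]

vals = [0.1, -0.5, 1.0, 0.25]
q, s = quantize_absmax(vals)            # q holds ints in [-127, 127]
approx = dequantize(q, s)               # close to vals, within one scale step
```

The real library applies this blockwise (one scale per small block of values) on the GPU, which bounds the quantization error per block; the sketch above uses a single scale for the whole list.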
Alternatives and similar repositories for bitsandbytes:
Users interested in bitsandbytes also compare it to the libraries listed below.
- Fast and memory-efficient exact attention (☆152, updated this week)
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… (☆11, updated 7 months ago)
- Development repository for the Triton language and compiler (☆104, updated this week)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (☆221, updated last week)
- A safetensors extension to efficiently store sparse quantized tensors on disk (☆66, updated this week)
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs (☆46, updated last year)
- AMD-related optimizations for transformer models (☆64, updated 2 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆57, updated this week)
- Fast low-bit matmul kernels in Triton (☆199, updated last week)
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) (☆232, updated 3 months ago)
- Extensible collectives library in Triton (☆77, updated 4 months ago)
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… (☆35, updated 5 months ago)
- Hackable and optimized Transformers building blocks, supporting composable construction (☆22, updated this week)
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs (☆89, updated 5 months ago)
- 8-bit CUDA functions for PyTorch, ROCm compatible (☆39, updated 10 months ago)
- Boosting 4-bit inference kernels with 2:4 sparsity
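The last entry refers to 2:4 structured sparsity, where exactly two of every four consecutive weights are zero, a pattern that hardware sparse-matmul units can exploit. A plain-Python sketch of the pruning pattern (illustrative only; the helper name is an assumption, not that repository's API):

```python
def prune_2_4(block):
    """Zero the 2 smallest-magnitude values in each group of 4 (2:4 sparsity)."""
    out = list(block)
    for i in range(0, len(out), 4):
        group = out[i:i + 4]
        # indices of the smallest-magnitude entries, keeping the largest two
        drop = sorted(range(len(group)), key=lambda j: abs(group[j]))[:len(group) - 2]
        for j in drop:
            out[i + j] = 0.0
    return out

prune_2_4([0.9, -0.1, 0.4, 0.05])   # -> [0.9, 0.0, 0.4, 0.0]
```

Kernels built for this pattern store only the two surviving values per group plus a small index, halving memory traffic while keeping a dense-like compute layout.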