ROCm / bitsandbytes
8-bit CUDA functions for PyTorch
☆68 · Updated 2 months ago
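Since most of the alternatives below are quantization or kernel libraries, a quick illustration of what these 8-bit functions are for may help. This is a minimal sketch assuming the ROCm port keeps the upstream bitsandbytes Python API (`bnb.optim.Adam8bit` is upstream's name; treat its availability in this fork as an assumption, not documented fact):

```python
# Minimal sketch: swapping in bitsandbytes' 8-bit Adam optimizer.
# Assumes the upstream bitsandbytes Python API; illustrative only.
import torch
import bitsandbytes as bnb

# On PyTorch ROCm builds, HIP devices are also exposed as "cuda".
model = torch.nn.Linear(1024, 1024).cuda()

# Drop-in replacement for torch.optim.Adam; optimizer state is stored
# in 8-bit, which substantially reduces optimizer memory for large models.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```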
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention · ☆202 · Updated this week
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm · ☆613 · Updated this week
- Development repository for the Triton language and compiler (a minimal kernel sketch follows this list) · ☆137 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆111 · Updated this week
- AI Tensor Engine for ROCm · ☆311 · Updated this week
- ☆158 · Updated 5 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction · ☆33 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆214 · Updated last week
- AMD-related optimizations for transformer models · ☆96 · Updated last month
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPUs, rocWMMA), mainly used for Stable Diffusion (ComfyUI) on Windows ZLUDA en… · ☆48 · Updated last year
- Ahead-of-Time (AOT) Triton Math Library · ☆84 · Updated 3 weeks ago
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more · ☆24 · Updated this week
- DLPrimitives/OpenCL out-of-tree backend for PyTorch · ☆378 · Updated 2 weeks ago
- Fast low-bit matmul kernels in Triton · ☆402 · Updated 2 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆272 · Updated 4 months ago
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo · ☆114 · Updated this week
- OpenAI Triton backend for Intel® GPUs · ☆222 · Updated this week
- ☆100 · Updated 2 months ago
- Advanced quantization toolkit for LLMs and VLMs. Native support for WOQ, MXFP4, NVFP4, GGUF, Adaptive Schemes and seamless integration wi… · ☆753 · Updated this week
- ☆418 · Updated 8 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs · ☆378 · Updated 7 months ago
- GPTQ inference Triton kernel · ☆316 · Updated 2 years ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment · ☆723 · Updated 4 months ago
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… · ☆12 · Updated last year
- Deep Learning Primitives and Mini-Framework for OpenCL · ☆205 · Updated last year
- AMD's graph optimization engine · ☆267 · Updated this week
- ☆130 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs · ☆588 · Updated this week
- ☆51 · Updated last week
- LLM training in simple, raw C/HIP for AMD GPUs · ☆55 · Updated last year
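Several entries above (the Triton compiler itself, the AOT Triton math library, the low-bit matmul kernels, the GPTQ inference kernel) target Triton's Python programming model. For reference, here is a minimal Triton kernel sketch: a vector add, far simpler than a production low-bit matmul, assuming a GPU-enabled `triton` install (CUDA or ROCm):

```python
# Minimal Triton kernel sketch: elementwise vector add.
# Illustrates the programming model only; not from any repo listed above.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)              # one program per BLOCK-sized chunk
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements              # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)       # enough programs to cover all elements
add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)
```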