ROCm / bitsandbytes
8-bit CUDA functions for PyTorch
☆53 · Updated this week
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆174 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆11 · Updated last year
- Development repository for the Triton language and compiler ☆125 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for stable diffusion (ComfyUI) in Windows ZLUDA en… ☆43 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆84 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆66 · Updated last week
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆50 · Updated 2 years ago
- ☆137 · Updated this week
- AI Tensor Engine for ROCm ☆208 · Updated this week
- Ongoing research training transformer models at scale ☆23 · Updated 2 weeks ago
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm ☆177 · Updated this week
- ☆25 · Updated this week
- AMD-related optimizations for transformer models ☆79 · Updated 7 months ago
- Explore training for quantized models ☆18 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆129 · Updated this week
- oneCCL Bindings for Pytorch* ☆97 · Updated 2 months ago
- ☆38 · Updated this week
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆109 · Updated 11 months ago
- Fast low-bit matmul kernels in Triton ☆322 · Updated last week
- Deep Learning Primitives and Mini-Framework for OpenCL ☆197 · Updated 9 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆31 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆252 · Updated 7 months ago
- ☆68 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆408 · Updated last week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆24 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆191 · Updated this week
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo ☆106 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- AMD SMI ☆71 · Updated this week