ROCm / bitsandbytes
8-bit CUDA functions for PyTorch
☆52 · Updated 2 weeks ago
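For context, bitsandbytes provides drop-in 8-bit building blocks for PyTorch. Below is a minimal sketch of typical usage, assuming a build of bitsandbytes with ROCm/HIP support is installed; the module and class names follow the upstream bitsandbytes API (ROCm devices are still addressed as "cuda" in PyTorch).

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

# A small fp16 model on the GPU (ROCm GPUs appear as "cuda" devices in PyTorch).
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
).half().to("cuda")

# 8-bit Adam stores optimizer state in 8 bits, roughly quartering optimizer memory.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

# One dummy training step to show the usual loop shape.
x = torch.randn(8, 1024, dtype=torch.float16, device="cuda")
loss = model(x).float().pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```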
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆174 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) on Windows ZLUDA en… ☆41 · Updated 8 months ago
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆40 · Updated last year
- Development repository for the Triton language and compiler ☆120 · Updated this week
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆49 · Updated 2 years ago
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code, specialized for FP16 TensorCore (N… ☆11 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆77 · Updated this week
- AMD-related optimizations for transformer models ☆75 · Updated 6 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆30 · Updated this week
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆99 · Updated 2 weeks ago
- Deep Learning Primitives and Mini-Framework for OpenCL ☆195 · Updated 8 months ago
- A collection of examples for the ROCm software stack ☆208 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆86 · Updated this week
- AMD SMI ☆65 · Updated this week
- ☆313 · Updated last month
- AI Tensor Engine for ROCm ☆190 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆184 · Updated this week
- Build scripts for ROCm ☆186 · Updated last year
- hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditiona… ☆94 · Updated this week
- ☆60 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆71 · Updated 3 months ago
- ☆32 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆248 · Updated 6 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆351 · Updated this week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆23 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆63 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆109 · Updated this week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆66 · Updated this week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆395 · Updated this week
- Next generation BLAS implementation for ROCm platform ☆368 · Updated this week