ROCm / bitsandbytes
8-bit CUDA functions for PyTorch
☆69, updated 3 months ago
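The core building block behind 8-bit functions like those in bitsandbytes is absmax quantization: scale a float tensor so its largest magnitude maps to the int8 extreme, round, and keep the scale for dequantization. The sketch below is a conceptual, pure-Python illustration of symmetric absmax int8 quantization only — it is not bitsandbytes' actual API, and it omits the blockwise scaling and outlier handling the library uses.

```python
def quantize_absmax(values):
    """Symmetric absmax quantization of floats into the int8 range.

    The largest magnitude maps to +/-127; everything else scales linearly.
    Assumes at least one nonzero value (real implementations guard scale == 0).
    """
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale


def dequantize_absmax(quantized, scale):
    """Recover an approximation of the original floats."""
    return [q * scale for q in quantized]


x = [0.5, -1.0, 0.25, 2.0]
q, scale = quantize_absmax(x)
x_hat = dequantize_absmax(q, scale)
# reconstruction error is bounded by half a quantization step
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(x, x_hat))
```

In practice libraries quantize in small blocks (each with its own scale) so a single outlier does not crush the precision of the whole tensor.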
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention (☆205, updated last week)
- Development repository for the Triton language and compiler (☆138, updated last week)
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPUs, roc wmma), mainly used for Stable Diffusion (ComfyUI) on Windows ZLUDA en… (☆50, updated last year)
- AI Tensor Engine for ROCm (☆327, updated this week)
- The HIP Environment and ROCm Kit, a lightweight open-source build system for HIP and ROCm (☆681, updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆113, updated this week)
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code, specialized for FP16 TensorCore (N… (☆12, updated last year)
- Hackable and optimized Transformers building blocks, supporting composable construction (☆34, updated last week)
- AMD-related optimizations for transformer models (☆96, updated 2 months ago)
- DLPrimitives/OpenCL out-of-tree backend for PyTorch (☆382, updated last month)
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs (☆717, updated 2 weeks ago)
- OpenAI Triton backend for Intel® GPUs (☆223, updated this week)
- Ahead-of-Time (AOT) Triton Math Library (☆84, updated 2 weeks ago)
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo (☆113, updated this week)
- A safetensors extension to efficiently store sparse quantized tensors on disk (☆225, updated last week)
- ☆159, updated 6 months ago
- Deep Learning Primitives and Mini-Framework for OpenCL (☆205, updated last year)
- BitBLAS is a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment (☆735, updated 4 months ago)
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) (☆277, updated 5 months ago)
- Fast low-bit matmul kernels in Triton (☆413, updated last week)
- NVIDIA Linux open GPU with P2P support (☆101, updated 3 weeks ago)
- ☆54, updated last week
- Advanced quantization toolkit for LLMs and VLMs, with support for WOQ, MXFP4, NVFP4, GGUF, adaptive schemes, and seamless integration with Tra… (☆785, updated this week)
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more (☆24, updated last week)
- ☆420, updated 8 months ago
- Composable Kernel: a performance-portable programming model for machine learning tensor operators (☆500, updated this week)
- ☆28, updated 3 months ago
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs (☆94, updated this week)
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… (☆63, updated 6 months ago)
- ☆63, updated this week