agrocylo / bitsandbytes-rocm
8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs
☆51 · Updated 2 years ago
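As a rough illustration of what this library is used for, below is a minimal sketch of loading a causal language model with 8-bit weights through the Hugging Face transformers integration of bitsandbytes. It assumes a HIP-enabled bitsandbytes build (such as this fork) is installed in place of the upstream CUDA package on a ROCm system; the model name is only an example.

```python
# Hypothetical usage sketch: 8-bit weight loading via the bitsandbytes
# integration in Hugging Face transformers. On a ROCm machine this assumes
# a HIP-compatible bitsandbytes build is installed; the model is an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-1.3b"  # example model; any causal LM works
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # weights are quantized to 8 bits at load time
    device_map="auto",                 # place layers on the available GPU(s)
)

inputs = tokenizer("Hello from an 8-bit model", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```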
Alternatives and similar repositories for bitsandbytes-rocm
Users interested in bitsandbytes-rocm are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch, ROCm-compatible ☆41 · Updated last year
- 4 bits quantization of LLMs using GPTQ ☆49 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch ☆68 · Updated last month
- ☆534 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated 2 weeks ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated 2 years ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- AMD related optimizations for transformer models ☆95 · Updated last month
- 4 bits quantization of LLaMa using GPTQ ☆130 · Updated 2 years ago
- Merge Transformers language models by use of gradient parameters. ☆208 · Updated last year
- Fast and memory-efficient exact attention ☆200 · Updated last month
- A simple converter which converts pytorch bin files to safetensor, intended to be used for LLM conversion. ☆72 · Updated last year
- ☆156 · Updated 2 years ago
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆129 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- Wheels for llama-cpp-python compiled with cuBLAS support ☆97 · Updated last year
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependencies ☆313 · Updated last year
- Inference on CPU code for LLaMA models ☆137 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion. ☆308 · Updated 2 years ago
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- Linux based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆104 · Updated 6 months ago
- Efficient 3bit/4bit quantization of LLaMA models ☆19 · Updated 2 years ago
- ☆403 · Updated 2 years ago
- Make PyTorch models at least run on APUs. ☆57 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated 2 years ago
- LLM that combines the principles of wizardLM and vicunaLM ☆716 · Updated 2 years ago
- ☆52 · Updated last year
- C/C++ implementation of PygmalionAI/pygmalion-6b ☆56 · Updated 2 years ago