agrocylo / bitsandbytes-rocm
8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs
☆53 · Updated 2 years ago
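The port aims to be a drop-in replacement for upstream bitsandbytes on ROCm builds of PyTorch. The following is a minimal usage sketch, assuming the fork keeps the upstream package name and API (`bnb.nn.Linear8bitLt` for LLM.int8() layers and `bnb.optim.Adam8bit` for 8-bit optimizer state); treat it as illustrative rather than this fork's documented interface.

```python
# A minimal sketch, assuming the ROCm port mirrors the upstream bitsandbytes API.
import torch
import torch.nn as nn
import bitsandbytes as bnb

# 1) LLM.int8() inference: Linear8bitLt is a drop-in replacement for nn.Linear
#    that stores weights in 8 bits. On ROCm builds of PyTorch, HIP devices are
#    still addressed through the "cuda" device string.
layer = bnb.nn.Linear8bitLt(4096, 4096, bias=False, has_fp16_weights=False)
layer.load_state_dict(nn.Linear(4096, 4096, bias=False).state_dict())
layer = layer.to("cuda")  # weights are quantized when moved to the GPU
x = torch.randn(1, 4096, dtype=torch.float16, device="cuda")
y = layer(x)

# 2) 8-bit optimizer: keeps Adam state in int8 to reduce optimizer memory.
model = nn.Linear(4096, 4096).to("cuda")
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
loss = model(torch.randn(8, 4096, device="cuda")).pow(2).mean()
loss.backward()
optimizer.step()
```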
Alternatives and similar repositories for bitsandbytes-rocm
Users interested in bitsandbytes-rocm are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- Wheels for llama-cpp-python compiled with cuBLAS support ☆102 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated 2 years ago
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- CPU inference code for LLaMA models ☆137 · Updated 2 years ago
- ☆535 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- ☆36 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch ☆70 · Updated 4 months ago
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- A simple converter that converts PyTorch .bin files to safetensors, intended for LLM conversion. ☆72 · Updated 2 years ago
- A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion. ☆310 · Updated 2 years ago
- Merge Transformers language models using gradient parameters. ☆213 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ, ported to HIP for use on AMD GPUs. ☆32 · Updated 2 years ago
- DEPRECATED! ☆50 · Updated last year
- C++ implementation for 💫StarCoder ☆459 · Updated 2 years ago
- ☆404 · Updated 2 years ago
- ☆156 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated 2 years ago
- Web UI for ExLlamaV2 ☆513 · Updated last year
- A finetuning pipeline for instruct-tuning Raven 14bn using QLoRA 4-bit and the Ditty finetuning library ☆28 · Updated last year
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT… ☆475 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆214 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆217 · Updated this week
- Text WebUI extension to add clever Notebooks to Chat mode ☆146 · Updated 6 months ago