agrocylo / bitsandbytes-rocm
8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs
☆52 · Updated 2 years ago
Alternatives and similar repositories for bitsandbytes-rocm
Users interested in bitsandbytes-rocm are comparing it to the libraries listed below:
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- ☆37 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- 8-bit CUDA functions for PyTorch ☆69 · Updated 3 months ago
- A more memory-efficient rewrite of the HF Transformers implementation of LLaMA for use with quantized weights. ☆64 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆313 · Updated last year
- Wheels for llama-cpp-python compiled with cuBLAS support ☆99 · Updated last year
- A finetuning pipeline for instruct-tuning Raven 14bn using QLoRA 4-bit and the Ditty finetuning library ☆28 · Updated last year
- ☆404 · Updated 2 years ago
- ☆156 · Updated 2 years ago
- ☆535 · Updated 2 years ago
- CPU inference code for LLaMA models ☆137 · Updated 2 years ago
- Web UI for ExLlamaV2 ☆514 · Updated 11 months ago
- 4-bit quantization of LLaMA using GPTQ, ported to HIP for use on AMD GPUs. ☆32 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆109 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated last year
- Merge Transformers language models by using gradient parameters. ☆211 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆124 · Updated 2 years ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆1,103 · Updated 2 weeks ago
- A prompt/context management system ☆169 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated 2 years ago
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000-series GPUs ☆107 · Updated 8 months ago
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated 2 years ago
- An extension for oobabooga/text-generation-webui that enables the LLM to search the web ☆274 · Updated last month
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last month
- A simple converter that converts PyTorch .bin files to safetensors, intended for LLM conversion. ☆72 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago