agrocylo / bitsandbytes-rocm
8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs
☆51 · Updated 2 years ago
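The "8-bit functions" these libraries provide center on absmax int8 quantization: tensors are scaled by their absolute maximum, rounded to the int8 range, and dequantized on the fly. The following is a minimal pure-Python sketch of that idea for illustration only; the actual bitsandbytes/ROCm ports implement it in fused CUDA/HIP kernels, and the function names here are hypothetical, not the library's API.

```python
# Illustrative sketch of absmax int8 quantization (the core idea behind
# 8-bit tensor functions). Function names are hypothetical, not the
# bitsandbytes API; the real kernels run fused on the GPU.

def quantize_absmax(values):
    """Map floats into int8 codes in [-127, 127], scaled by the absmax."""
    absmax = max(abs(v) for v in values)
    scale = absmax / 127.0 if absmax else 1.0
    return [round(v / scale) for v in values], scale

def dequantize_absmax(codes, scale):
    """Recover approximate floats from the int8 codes."""
    return [c * scale for c in codes]

if __name__ == "__main__":
    x = [0.5, -1.25, 3.0, -0.01]
    codes, scale = quantize_absmax(x)
    approx = dequantize_absmax(codes, scale)
    print(codes)  # the largest-magnitude value maps to +/-127
    print(max(abs(a - b) for a, b in zip(x, approx)))  # small round-trip error
```

The round-trip error is bounded by half a quantization step (scale / 2), which is why values with magnitude far below the absmax, like the -0.01 above, lose the most relative precision.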
Alternatives and similar repositories for bitsandbytes-rocm
Users interested in bitsandbytes-rocm are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- A torchless, C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated last year
- Wheels for llama-cpp-python compiled with cuBLAS support ☆99 · Updated last year
- ☆535 · Updated 2 years ago
- Inference on CPU code for LLaMA models ☆137 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch ☆69 · Updated 2 months ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated 2 weeks ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- Merge Transformers language models using gradient parameters. ☆209 · Updated last year
- Web UI for ExLlamaV2 ☆514 · Updated 10 months ago
- ☆157 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- A finetuning pipeline for instruct-tuning Raven 14bn using QLoRA 4-bit and the Ditty finetuning library ☆28 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago
- A simple converter which converts PyTorch bin files to safetensors, intended to be used for LLM conversion. ☆72 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆109 · Updated 2 years ago
- A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion. ☆309 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- C++ implementation for 💫StarCoder ☆457 · Updated 2 years ago
- A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation t… ☆539 · Updated 2 years ago
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆68 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆202 · Updated this week
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch in Windows 10 ☆68 · Updated 2 years ago
- NVIDIA Linux open GPU with P2P support ☆94 · Updated last week
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- ☆48 · Updated 2 years ago
- Simple Python library/structure to ablate features in LLMs which are supported by TransformerLens ☆539 · Updated last year