agrocylo / bitsandbytes-rocm
8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs
☆51 · Updated 2 years ago
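The core technique behind libraries like bitsandbytes is storing weights as 8-bit integers and rescaling them on the fly. A minimal dependency-free sketch of the absmax quantization idea (illustrative only, not the library's actual CUDA/HIP implementation):

```python
# Illustrative sketch of absmax 8-bit quantization, the basic idea behind
# 8-bit weight storage in libraries like bitsandbytes. Not the library's code.

def quantize_absmax(values):
    """Map floats to int8 range [-127, 127], scaled by the absolute maximum."""
    absmax = max(abs(v) for v in values) or 1.0
    scale = 127.0 / absmax
    return [round(v * scale) for v in values], absmax

def dequantize_absmax(qvalues, absmax):
    """Recover approximate floats from the int8 codes and the stored absmax."""
    return [q * absmax / 127.0 for q in qvalues]

weights = [0.12, -0.5, 0.33, 1.0, -0.98]
q, absmax = quantize_absmax(weights)
recovered = dequantize_absmax(q, absmax)
```

Each value is stored in one byte plus a single shared scale factor, which is why 8-bit loading roughly halves memory versus fp16; the real library refines this with blockwise scales and outlier handling.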
Alternatives and similar repositories for bitsandbytes-rocm
Users interested in bitsandbytes-rocm are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- 8-bit CUDA functions for PyTorch ☆66 · Updated last month
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights ☆63 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated 2 years ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆212 · Updated this week
- An unsupervised model merging algorithm for Transformers-based language models ☆106 · Updated last year
- ☆37 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆313 · Updated last year
- Wheels for llama-cpp-python compiled with cuBLAS support ☆97 · Updated last year
- ☆42 · Updated 2 years ago
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆90 · Updated this week
- ☆403 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆108 · Updated 2 years ago
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆67 · Updated last year
- A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion ☆308 · Updated 2 years ago
- NVIDIA Linux open GPU with P2P support ☆66 · Updated 2 weeks ago
- Simple Python library/structure to ablate features in LLMs which are supported by TransformerLens ☆518 · Updated last year
- ☆534 · Updated last year
- Merge Transformers language models by use of gradient parameters ☆207 · Updated last year
- ☆234 · Updated 2 years ago
- Web UI for ExLlamaV2 ☆511 · Updated 8 months ago
- A finetuning pipeline for instruct-tuning Raven 14B using QLoRA 4-bit and the Ditty finetuning library ☆28 · Updated last year
- ☆157 · Updated 2 years ago
- Inference-on-CPU code for LLaMA models ☆137 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆194 · Updated last week
- Make abliterated models with transformers, easy and fast ☆90 · Updated 6 months ago
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers, QLoRA ☆123 · Updated 2 years ago
- The official API server for Exllama. OAI compatible, lightweight, and fast ☆1,068 · Updated last week