WapaMario63 / GPTQ-for-LLaMa-ROCm
4-bit quantization of LLaMA using GPTQ, ported to HIP for use on AMD GPUs.
☆32 · Updated 2 years ago
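To give a sense of what the repository's core operation looks like, here is a minimal Python sketch of group-wise 4-bit round-to-nearest weight quantization. This is only a conceptual illustration with an assumed group size and layout: GPTQ itself selects quantized values using second-order (Hessian-based) error compensation, and the ROCm port's contribution is HIP kernels for running the resulting 4-bit weights on AMD GPUs; neither of those is reproduced here.

```python
# Minimal sketch of group-wise 4-bit round-to-nearest quantization.
# Illustrates what "4-bit quantization" means at the tensor level; it is NOT
# the GPTQ algorithm (which adds Hessian-based error compensation) and not
# this repository's HIP kernel code.
import numpy as np

def quantize_4bit_groupwise(w: np.ndarray, group_size: int = 128):
    """Quantize each row of `w` in groups of `group_size` columns to 4-bit
    codes (0..15) with a per-group scale and zero point."""
    rows, cols = w.shape
    assert cols % group_size == 0, "columns must be divisible by group_size"
    g = w.reshape(rows, cols // group_size, group_size)
    w_min = g.min(axis=-1, keepdims=True)
    w_max = g.max(axis=-1, keepdims=True)
    scale = (w_max - w_min) / 15.0             # 4 bits -> 16 levels
    scale = np.where(scale == 0, 1e-8, scale)  # avoid division by zero
    zero = np.round(-w_min / scale)
    q = np.clip(np.round(g / scale + zero), 0, 15).astype(np.uint8)
    return q, scale, zero

def dequantize_4bit_groupwise(q, scale, zero):
    """Reconstruct an approximate float matrix from the 4-bit codes."""
    g = (q.astype(np.float32) - zero) * scale
    return g.reshape(q.shape[0], -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 256)).astype(np.float32)  # toy weight matrix
    q, scale, zero = quantize_4bit_groupwise(w, group_size=128)
    w_hat = dequantize_4bit_groupwise(q, scale, zero)
    print("mean abs reconstruction error:", np.abs(w - w_hat).mean())
```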
Alternatives and similar repositories for GPTQ-for-LLaMa-ROCm
Users interested in GPTQ-for-LLaMa-ROCm are comparing it to the libraries listed below:
- DEPRECATED! ☆50 · Updated last year
- A fork of textgen that keeps features such as ExLlama and old GPTQ. ☆22 · Updated last year
- Web UI for ExLlamaV2 ☆511 · Updated 8 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆51 · Updated 2 years ago
- A guide to using the Tesla P40 GPU ☆135 · Updated 11 months ago
- Lord of LLMs ☆294 · Updated last month
- An autonomous AI agent extension for Oobabooga's web UI ☆173 · Updated 2 years ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆212 · Updated this week
- A Gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion. ☆308 · Updated 2 years ago
- A prompt/context management system ☆170 · Updated 2 years ago
- ☆534 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆108 · Updated 2 years ago
- ☆157 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- A simple Gradio WebUI for loading/unloading models and LoRAs in tabbyAPI. ☆20 · Updated 11 months ago
- Dolphin System Messages ☆353 · Updated 8 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated 2 years ago
- KoboldAI is generative AI software optimized for fictional use, but capable of much more! ☆417 · Updated 9 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- LLM that combines the principles of WizardLM and VicunaLM ☆716 · Updated 2 years ago
- Visual Studio Code extension for WizardCoder ☆148 · Updated 2 years ago
- A simple web UI for stable-diffusion.cpp ☆47 · Updated last week
- An extension for oobabooga/text-generation-webui that enables the LLM to search the web ☆268 · Updated this week
- BabyAGI to run with locally hosted models using the API from https://github.com/oobabooga/text-generation-webui ☆87 · Updated 2 years ago
- A multimodal, function-calling-powered LLM web UI. ☆216 · Updated last year
- TheBloke's Dockerfiles ☆306 · Updated last year
- A KoboldAI-like memory extension for oobabooga's text-generation-webui ☆107 · Updated last year
- Wheels for llama-cpp-python compiled with cuBLAS support ☆97 · Updated last year
- ☆29 · Updated last year