broncotc / bitsandbytes-rocm
☆36 · Updated 2 years ago

Alternatives and similar repositories for bitsandbytes-rocm

Users interested in bitsandbytes-rocm are comparing it to the libraries listed below:
- ☆156 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion. ☆310 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated 2 years ago
- C/C++ implementation of PygmalionAI/pygmalion-6b ☆56 · Updated 2 years ago
- KoboldAI is generative AI software optimized for fictional use, but capable of much more! ☆421 · Updated last year
- ☆535 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆53 · Updated 2 years ago
- Discord bot that uses KoboldAI. Supports tavern cards and JSON files. ☆66 · Updated 2 years ago
- ☆404 · Updated 2 years ago
- Prototype UI for chatting with the Pygmalion models. ☆235 · Updated 2 years ago
- A prompt/context management system ☆169 · Updated 2 years ago
- A fork that installs and runs on PyTorch CPU-only ☆217 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆313 · Updated 2 years ago
- Generate Large Language Model text in a container. ☆20 · Updated 2 years ago
- This repo turns your PC into an AI Horde worker node ☆286 · Updated 3 months ago
- Text WebUI extension to add clever Notebooks to Chat mode ☆145 · Updated 5 months ago
- Where we keep our notes about model training runs. ☆16 · Updated 2 years ago
- Oobabooga extension for Bark TTS ☆120 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆109 · Updated 2 years ago
- XTTSv2 Extension for oobabooga text-generation-webui ☆156 · Updated 2 years ago
- A KoboldAI-like memory extension for oobabooga's text-generation-webui ☆108 · Updated last year
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backend ☆412 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated 2 years ago
- A Discord bot which talks to Large Language Model AIs running on oobabooga's text-generation-webui ☆103 · Updated last year
- Inference code for LLaMA models ☆189 · Updated 2 years ago
- CPU inference code for LLaMA models ☆137 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- SoTA Transformers with a C backend for fast inference on your CPU. ☆311 · Updated 2 years ago