broncotc / bitsandbytes-rocm
☆36 · updated 2 years ago
Alternatives and similar repositories for bitsandbytes-rocm
Users interested in bitsandbytes-rocm are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch, ROCm-compatible ☆41 · updated last year
- ☆156 · updated 2 years ago
- A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion. ☆310 · updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · updated 2 years ago
- Prototype UI for chatting with the Pygmalion models. ☆235 · updated 2 years ago
- Generate large language model text in a container. ☆20 · updated 2 years ago
- C/C++ implementation of PygmalionAI/pygmalion-6b ☆55 · updated 2 years ago
- KoboldAI is generative AI software optimized for fictional use, but capable of much more! ☆422 · updated last year
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆53 · updated 2 years ago
- A prompt/context management system ☆168 · updated 2 years ago
- ☆404 · updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆125 · updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆249 · updated 2 years ago
- ChatGPT-like web UI for RWKVstic ☆100 · updated 2 years ago
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for 2000-token context, 3.5 GB for 1000-token context). Model load… ☆113 · updated 4 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆109 · updated 2 years ago
- Discord bot that uses KoboldAI. Supports Tavern cards and JSON files. ☆66 · updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆313 · updated 2 years ago
- ☆535 · updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆147 · updated 2 years ago
- Oobabooga extension for Bark TTS ☆120 · updated 2 years ago
- Inference code for LLaMA models ☆189 · updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ ☆131 · updated 2 years ago
- A KoboldAI-like memory extension for oobabooga's text-generation-webui ☆108 · updated last year
- SoTA Transformers with C backend for fast inference on your CPU. ☆311 · updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · updated 2 years ago
- Efficient 3-bit/4-bit quantization of LLaMA models ☆18 · updated 2 years ago
- A Discord bot which talks to large language model AIs running on oobabooga's text-generation-webui ☆103 · updated last year
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000-series GPUs ☆109 · updated 9 months ago
- Text WebUI extension to add clever Notebooks to Chat mode ☆146 · updated 6 months ago