blackcon / VicunaWithGUI
This project provides a web UI for Vicuna-13B (using llama-cpp-python and chatbot-ui).
☆46 · Updated 2 years ago
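For context, a minimal sketch of the Python side of such a setup, assuming llama-cpp-python is installed and a quantized Vicuna-13B model file is available locally (the model path below is hypothetical); a frontend like chatbot-ui would talk to a small server wrapping calls like this:

```python
# Minimal sketch: load a quantized Vicuna-13B model with llama-cpp-python
# and generate one completion. The model_path is a hypothetical local file;
# point it at whatever quantized Vicuna weights you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/vicuna-13b.q4_0.gguf",  # hypothetical filename
    n_ctx=2048,  # context window size
)

# Vicuna-style models expect a simple USER/ASSISTANT chat format.
prompt = "USER: What is llama.cpp?\nASSISTANT:"
out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```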
Alternatives and similar repositories for VicunaWithGUI
Users interested in VicunaWithGUI are comparing it to the repositories listed below.
- A simple LangChain-like implementation based on sentence embeddings + a local knowledge base, with Vicuna (FastChat) serving as the LLM. Suppo… ☆95 · Updated 2 years ago
- minichatgpt - To Train ChatGPT In 5 Minutes ☆169 · Updated 2 years ago
- A voice chatbot based on GPT4All and talkGPT, running on your local PC! ☆152 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware with shareGPT data ☆125 · Updated 2 years ago
- Host a GPTQ model using AutoGPTQ as an API that is compatible with the text generation UI API. ☆91 · Updated 2 years ago
- Gradio UI for the RWKV LLM ☆29 · Updated 2 years ago
- CodeAssist is an advanced code completion tool that provides high-quality code completions for Python, Java, C++, and more. ☆58 · Updated 3 months ago
- ☆82 · Updated 2 years ago
- A web UI for Llama_index. Allows ChatGPT to access your own database. ☆36 · Updated 2 years ago
- Visual Studio Code extension for WizardCoder ☆149 · Updated 2 years ago
- An experimental open-source attempt to make GPT-4 fully autonomous. ☆98 · Updated 2 years ago
- An open-source LLM tool for extracting repeatable tasks from your conversations and saving them into a customized skill library for retr… ☆129 · Updated 2 years ago
- Python bindings for llama.cpp ☆66 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights to work like Stanford Alpaca. ☆52 · Updated 2 years ago
- Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder ☆44 · Updated last year
- ☆33 · Updated 2 years ago
- HuggingChat-like UI in Gradio ☆70 · Updated 2 years ago
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated 2 years ago
- Langport is a language model inference service. ☆95 · Updated last year
- ☆12 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Fine-tune any model on HF in less than 30 seconds ☆56 · Updated 2 months ago
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio ☆38 · Updated 2 years ago
- LocalAGI: Locally run AGI powered by LLaMA, ChatGLM, and more. ☆81 · Updated 2 years ago
- Example of Alpaca-LoRA with llama index. ☆31 · Updated 2 years ago
- ☆137 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year