bublint / ue5-llama-lora
A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools.
☆482 · Updated last year
Related projects:
- LLM that combines the principles of WizardLM and VicunaLM (☆712, updated last year)
- A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion (☆305, updated last year)
- An autonomous LLM agent that runs on Wizcoder-15B (☆338, updated 11 months ago)
- Harnessing the Memory Power of the Camelids (☆145, updated 11 months ago)
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… (☆594, updated last year)
- Load local LLMs effortlessly in a Jupyter notebook for testing alongside LangChain or other agents. Contains Oobabooga and Kobol… (☆210, updated last year)
- fastLLaMa: an experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… (☆408, updated last year)
- Uses Auto-GPT with llama.cpp (☆381, updated 5 months ago)
- An autonomous AI agent extension for Oobabooga's web UI (☆175, updated last year)
- A prompt/context management system (☆163, updated last year)
- Falcon LLM ggml framework with CPU and GPU support (☆245, updated 7 months ago)
- Self-evaluating interview for AI coders (☆517, updated last week)
- BabyAGI adapted to run with locally hosted models using the API from https://github.com/oobabooga/text-generation-webui (☆90, updated last year)
- LLaMa retrieval plugin script using OpenAI's retrieval plugin (☆326, updated last year)
- C++ implementation for 💫StarCoder (☆443, updated last year)
- Officially supported Python bindings for llama.cpp + gpt4all (☆1,024, updated last year)
- Web UI for ExLlamaV2 (☆420, updated 3 weeks ago)
- Tune any FALCON in 4-bit (☆469, updated last year)
- Customizable implementation of the self-instruct paper (☆1,004, updated 6 months ago)
- TheBloke's Dockerfiles (☆296, updated 6 months ago)
- A showcase of how to run a model locally and offline, free of OpenAI dependencies (☆227, updated 2 months ago)
- Simple UI for LLM model finetuning (☆2,046, updated 8 months ago)
- An easy way to host your own AI API and expose alternative models, while remaining compatible with "open" AI clients (☆327, updated 2 months ago)
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab. + A Gradio ChatGPT… (☆436, updated last year)
- Supercharge Open-Source AI Models (☆348, updated last year)