Aspartame-e951 / apiserver.py
A small standalone Flask (Python) server for llama.cpp that acts like a KoboldAI API.
☆14, updated 2 years ago
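apiserver.py itself is a short Flask script; a minimal sketch of the same idea (not the author's actual code) might look like the following, assuming the llama-cpp-python bindings as the backend and the KoboldAI-style `/api/v1/generate` request/response convention:

```python
# Minimal sketch of a KoboldAI-style Flask shim in front of a local model.
# Assumptions (not taken from the upstream repo): llama-cpp-python as the
# backend and the KoboldAI /api/v1/generate request/response format.
from flask import Flask, jsonify, request
from llama_cpp import Llama

app = Flask(__name__)

# Hypothetical model path; point this at any local GGUF model file.
llm = Llama(model_path="./models/model.gguf")


@app.route("/api/v1/model", methods=["GET"])
def model_info():
    # KoboldAI clients poll this endpoint to see which model is loaded.
    return jsonify({"result": "llama.cpp"})


@app.route("/api/v1/generate", methods=["POST"])
def generate():
    # KoboldAI-style payload: a prompt plus common sampling parameters.
    data = request.get_json(force=True)
    out = llm(
        data.get("prompt", ""),
        max_tokens=int(data.get("max_length", 80)),
        temperature=float(data.get("temperature", 0.7)),
        top_p=float(data.get("top_p", 0.9)),
    )
    # KoboldAI clients expect the completion under results[0].text.
    return jsonify({"results": [{"text": out["choices"][0]["text"]}]})


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```

With a sketch like this running, a KoboldAI-style client can be pointed at http://127.0.0.1:5000 and POST prompts to /api/v1/generate.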
Alternatives and similar repositories for apiserver.py
Users interested in apiserver.py are comparing it to the libraries listed below.
- Creates a LangChain agent that uses the WebUI's API and Wikipedia (☆73, updated 2 years ago)
- Experimental sampler to make LLMs more creative (☆31, updated 2 years ago)
- GPT-2 small trained on phi-like data (☆68, updated last year)
- A prompt/context management system (☆168, updated 2 years ago)
- ☆27, updated 2 years ago
- oobabooga/text-generation-webui implementation of wafflecomposite's langchain-ask-pdf-local (☆72, updated 2 years ago)
- Dynamic parameter modulation for oobabooga's text-generation-webui that adjusts generation parameters to better mirror user affect (☆36, updated 2 years ago)
- Train Llama LoRAs easily (☆31, updated 2 years ago)
- A KoboldAI-like memory extension for oobabooga's text-generation-webui (☆108, updated last year)
- 5X faster, 60% less memory QLoRA finetuning (☆21, updated last year)
- ☆74, updated 2 years ago
- Porting BabyAGI to Oobabooga (☆31, updated 2 years ago)
- Harnessing the Memory Power of the Camelids (☆147, updated 2 years ago)
- A repository to store helpful information and emerging insights in regard to LLMs (☆21, updated 2 years ago)
- Experimental LLM inference UX to aid in creative writing (☆128, updated last year)
- An extension for oobabooga/text-generation-webui (☆36, updated 2 years ago)
- 4-bit quantization of SantaCoder using GPTQ (☆51, updated 2 years ago)
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio (☆38, updated 2 years ago)
- Host a GPTQ model using AutoGPTQ as an API that is compatible with the text-generation-webui API (☆90, updated 2 years ago)
- Locally running LLM with internet access (☆97, updated 7 months ago)
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… (☆53, updated last year)
- An OpenAI-like LLaMA inference API (☆113, updated 2 years ago)
- ☆12, updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, QLoRA (☆125, updated 2 years ago)
- Simple and fast server for GPTQ-quantized LLaMA inference (☆24, updated 2 years ago)
- Let's create synthetic textbooks together :) (☆76, updated 2 years ago)
- ☆68, updated last year
- ☆40, updated last year
- An unsupervised model merging algorithm for Transformers-based language models (☆108, updated last year)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (☆38, updated 2 years ago)