dcSpark-AI / open-LLM-server
Run local LLMs via HTTP API in a single command (Windows/Mac/Linux)
☆61 · Updated 2 years ago
Alternatives and similar repositories for open-LLM-server
Users interested in open-LLM-server are comparing it to the libraries listed below.
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year
- An Autonomous LLM Agent that runs on Wizcoder-15B ☆334 · Updated 10 months ago
- Local LLM ReAct Agent with Guidance ☆158 · Updated 2 years ago
- Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside LangChain or other agents. Contains Oobabooga and Kobol… ☆214 · Updated 2 years ago
- Real-time Fallacy Detection using OpenAI Whisper and ChatGPT/LLaMA/Mistral ☆115 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆124 · Updated 2 years ago
- CrustAGI is a Task-driven Autonomous Agent experiment written in Rust ☆44 · Updated 2 years ago
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆75 · Updated 2 years ago
- Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer ☆290 · Updated 2 years ago
- Experimental LLM Inference UX to aid in creative writing ☆120 · Updated 8 months ago
- Like System Requirements Lab, but for LLMs ☆30 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- ☆217 · Updated 2 years ago
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. ☆62 · Updated last year
- A fast batching API to serve LLMs ☆187 · Updated last year
- OpenAI-compatible API for serving the LLaMA-2 model ☆218 · Updated last year
- TheBloke's Dockerfiles ☆307 · Updated last year
- A prompt/context management system ☆170 · Updated 2 years ago
- BabyAGI to run with GPT4All ☆248 · Updated 2 years ago
- An autonomous AI agent extension for Oobabooga's web UI ☆176 · Updated last year
- Lord of LLMs ☆294 · Updated last month
- ☆168 · Updated 2 years ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ☆314 · Updated last year
- Oobabooga text-generation-webui implementation of wafflecomposite's langchain-ask-pdf-local ☆71 · Updated 2 years ago
- Run inference on the replit-3B code-instruct model using the CPU ☆158 · Updated 2 years ago
- Edge full-stack LLM platform. Written in Rust ☆381 · Updated last year
- An OpenAI-like LLaMA inference API ☆113 · Updated last year
- Run any Large Language Model behind a unified API ☆171 · Updated last year
- ☆276 · Updated 2 years ago
- An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2 ☆441 · Updated last year