dcSpark-AI / open-LLM-server
Run local LLMs via an HTTP API in a single command (Windows/Mac/Linux)
☆61 · Updated 2 years ago
Alternatives and similar repositories for open-LLM-server
Users interested in open-LLM-server are comparing it to the repositories listed below.
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects ☆62 · Updated 2 years ago
- Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer ☆289 · Updated 2 years ago
- OpenAI-compatible API for serving the LLAMA-2 model ☆218 · Updated 2 years ago
- Harnessing the Memory Power of the Camelids ☆147 · Updated 2 years ago
- An Autonomous LLM Agent that runs on Wizcoder-15B ☆333 · Updated last year
- Load local LLMs effortlessly in a Jupyter notebook for testing alongside LangChain or other agents. Contains Oobabooga and Kobol… ☆212 · Updated 2 years ago
- Real-time Fallacy Detection using OpenAI Whisper and ChatGPT/LLaMA/Mistral ☆117 · Updated last year
- Local LLM ReAct Agent with Guidance ☆158 · Updated 2 years ago
- CrustAGI is a task-driven autonomous agent experiment written in Rust ☆45 · Updated 2 years ago
- Run inference on the replit-3B code-instruct model using the CPU ☆159 · Updated 2 years ago
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- An easy way to host your own AI API and expose alternative models, while remaining compatible with "open" AI clients ☆332 · Updated last year
- 💭 Build autonomous agents, retrieval-augmented generation (RAG) processes, and language-model-powered chat applications ☆302 · Updated 5 months ago
- A LangChain-based tool that allows agents to dynamically create, use, store, and retrieve tools to solve real-world problems ☆127 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- Prompt-Promptor is a Python library for automatically generating prompts using LLMs ☆76 · Updated 2 years ago
- An OpenAI API-compatible REST server for llama ☆208 · Updated 8 months ago
- ☆215 · Updated 2 years ago
- ☆275 · Updated 2 years ago
- A simple and clear way of hosting llama.cpp as a private HTTP API using Rust ☆27 · Updated last year
- AI stack for interacting with LLMs, Stable Diffusion, Whisper, xTTS, and many other AI models ☆163 · Updated last year
- Build robust LLM applications with true composability 🔗 ☆420 · Updated last year
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ☆316 · Updated last year
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ☆323 · Updated 2 years ago
- A prompt/context management system ☆170 · Updated 2 years ago
- Locally run an instruction-tuned chat-style LLM ☆38 · Updated 2 years ago
- Edge full-stack LLM platform, written in Rust ☆380 · Updated last year
- A fast batching API to serve LLMs ☆188 · Updated last year
- Like System Requirements Lab, but for LLMs ☆31 · Updated 2 years ago
- BabyAGI running with GPT4All ☆248 · Updated 2 years ago
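Several of the servers above (and open-LLM-server itself) expose an OpenAI-compatible HTTP API, which is what makes them interchangeable from a client's point of view. As a minimal sketch, a client might build a chat-completion request like the one below; the base URL, port, and model name are assumptions and vary by server, so check each project's README before use.

```python
import json
import urllib.request

# Hypothetical local endpoint; the actual host/port depend on the server used.
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,  # many local servers ignore or loosely match this field
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Hello, local LLM!")
print(req.full_url)  # → http://localhost:8000/v1/chat/completions
# Sending the request requires a running server:
#     with urllib.request.urlopen(req) as resp:
#         print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape matches the OpenAI Chat Completions format, the same client code can usually be pointed at any of the OpenAI-compatible servers listed above by changing only the base URL.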