dcSpark-AI / open-LLM-server
Run local LLMs via HTTP API in a single command (Windows/Mac/Linux)
☆61 · Updated 2 years ago
Alternatives and similar repositories for open-LLM-server
Users interested in open-LLM-server are comparing it to the libraries listed below.
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects ☆62 · Updated 2 years ago
- CrustAGI is a task-driven autonomous agent experiment written in Rust ☆45 · Updated 2 years ago
- Harnessing the Memory Power of the Camelids ☆147 · Updated 2 years ago
- Local LLM ReAct Agent with Guidance ☆158 · Updated 2 years ago
- A LangChain-based tool that lets agents dynamically create, use, store, and retrieve tools to solve real-world problems ☆126 · Updated 2 years ago
- An autonomous LLM agent that runs on Wizcoder-15B ☆334 · Updated last year
- A prompt/context management system ☆171 · Updated 2 years ago
- An OpenAI-compatible API for serving the LLaMA-2 model ☆218 · Updated 2 years ago
- An OpenAI-API-compatible REST server for llama. ☆208 · Updated 9 months ago
- Load local LLMs effortlessly in a Jupyter notebook for testing alongside LangChain or other agents. Contains Oobabooga and Kobol… ☆213 · Updated 2 years ago
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆76 · Updated 2 years ago
- BabyAGI adapted to run with GPT4All ☆249 · Updated 2 years ago
- An autonomous AI agent extension for Oobabooga's web UI ☆174 · Updated 2 years ago
- Prompt-Promptor is a Python library for automatically generating prompts using LLMs ☆76 · Updated 2 years ago
- Run any Large Language Model behind a unified API ☆170 · Updated 2 years ago
- Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer ☆291 · Updated 2 years ago
- Real-time fallacy detection using OpenAI Whisper and ChatGPT/LLaMA/Mistral ☆117 · Updated last year
- Run inference on the replit-3B code-instruct model using a CPU ☆160 · Updated 2 years ago
- SmartGPT is an implementation of a dynamic prompting system, inspired by [AI Explained](https://www.youtube.com/@ai-explained-) on YouTub… ☆66 · Updated 6 months ago
- ☆168 · Updated 2 years ago
- A Haystack and Mistral 7B RAG implementation built on a completely open-source stack ☆79 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- ☆216 · Updated 2 years ago
- Implements a local LLM selector that picks from your locally installed Ollama LLMs for a specific user query ☆103 · Updated 2 years ago
- TheBloke's Dockerfiles ☆308 · Updated last year
- A chat interface that uses the REMO memory system with LangFlow ☆124 · Updated 2 years ago
- ☆276 · Updated 2 years ago
- Edge full-stack LLM platform, written in Rust ☆381 · Updated last year
- Experimental LLM inference UX to aid in creative writing ☆127 · Updated 11 months ago
- A simple and clear way of hosting llama.cpp as a private HTTP API using Rust ☆27 · Updated last year
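Several of the projects above (open-LLM-server itself, plus the OpenAI-compatible llama servers) expose local LLMs over an HTTP completions API. As a rough illustration of what talking to such a server looks like, here is a minimal client sketch; the port, path, and model name are assumptions in the style of OpenAI-compatible servers, not taken from any specific repo's docs:

```python
import json
from urllib import request

# Hypothetical local endpoint; the real host/port/path depend on which
# server from the list you actually run and how you configure it.
URL = "http://localhost:8000/v1/completions"

payload = {
    "model": "llama-2-7b",  # placeholder model name
    "prompt": "Name one Rust web framework.",
    "max_tokens": 32,
    "temperature": 0.7,
}

req = request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once a server is actually listening locally:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])

print(req.get_method(), req.full_url)
```

Because these servers imitate the OpenAI request/response shape, the same client code can usually be pointed at any of them by changing only the base URL.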