iaalm / llama-api-server
An OpenAI API-compatible REST server for LLaMA.
☆208 · Updated 5 months ago
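The server's main draw is that existing OpenAI clients can talk to it unchanged. As a minimal sketch (not taken from the project's documentation), the official `openai` Python client can be pointed at a locally running instance by overriding its base URL; the host, port, model name, and API key below are placeholder assumptions.

```python
# Minimal sketch, assuming a llama-api-server instance is running locally on
# port 8000 and exposes standard OpenAI-style /v1 routes. Host, port, model
# name, and API key are placeholders -- check the server's own configuration.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",  # assumed local server address
    api_key="not-needed-locally",         # many self-hosted servers ignore the key
)

response = client.completions.create(
    model="llama-7b",                     # placeholder model identifier
    prompt="Explain what an OpenAI-compatible API server does.",
    max_tokens=64,
)
print(response.choices[0].text)
```

Because the wire format matches OpenAI's, the same pattern applies to any tool in the list below that advertises OpenAI compatibility: only the base URL changes.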
Alternatives and similar repositories for llama-api-server
Users interested in llama-api-server are comparing it to the repositories listed below:
- An OpenAI-like LLaMA inference API ☆112 · Updated last year
- An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients. ☆332 · Updated last year
- TheBloke's Dockerfiles ☆305 · Updated last year
- Visual Studio Code extension for WizardCoder ☆149 · Updated 2 years ago
- C++ implementation for 💫StarCoder ☆456 · Updated last year
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year
- An Autonomous LLM Agent that runs on WizardCoder-15B ☆334 · Updated 9 months ago
- ☆275 · Updated 2 years ago
- Provide a way to use the GPTQ LLaMA model as an API ☆43 · Updated 2 years ago
- Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside Langchain or other agents. Contains Oobabooga and Kobol… ☆214 · Updated 2 years ago
- Local LLM ReAct Agent with Guidance ☆158 · Updated 2 years ago
- Run any Large Language Model behind a unified API ☆170 · Updated last year
- Run Alpaca LLM in LangChain ☆216 · Updated last year
- Host the GPTQ model using AutoGPTQ as an API that is compatible with text generation UI API. ☆91 · Updated 2 years ago
- 💬 Chatbot web app + HTTP and Websocket endpoints for LLM inference with the Petals client ☆314 · Updated last year
- Use local llama LLM or openai to chat, discuss/summarize your documents, youtube videos, and so on. ☆152 · Updated 7 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- A fast batching API to serve LLM models ☆185 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆128 · Updated 2 years ago
- ☆38 · Updated 2 years ago
- ☆168 · Updated 2 years ago
- Locally run an Instruction-Tuned Chat-Style LLM ☆38 · Updated 2 years ago
- Run inference on replit-3B code instruct model using CPU ☆157 · Updated 2 years ago
- Extend the original llama.cpp repo to support redpajama model. ☆118 · Updated 11 months ago
- A command-line interface to generate textual and conversational datasets with LLMs. ☆301 · Updated last year
- LLaMA Cog template ☆306 · Updated last year
- starcoder server for huggingface-vscode custom endpoint ☆172 · Updated last year
- LLaMa retrieval plugin script using OpenAI's retrieval plugin ☆323 · Updated 2 years ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated 2 years ago
- An open source UI for OpenChat models ☆284 · Updated last year