shawwn / openai-server
OpenAI API webserver
☆190 · Updated 4 years ago
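Since openai-server exposes an OpenAI-style HTTP API, a client can talk to it much like it would talk to api.openai.com. The sketch below is a minimal, hypothetical example that assumes the server is running locally and follows the standard `/v1/completions` request/response shape; the host, port, and model name are placeholders, not values documented by this repo.

```python
import requests

# Assumptions: the server listens on localhost:8000 and implements the
# standard OpenAI completions route. Adjust BASE_URL and the model id
# to match your actual deployment.
BASE_URL = "http://127.0.0.1:8000/v1"

resp = requests.post(
    f"{BASE_URL}/completions",
    json={
        "model": "your-local-model",  # hypothetical model id
        "prompt": "Hello, world",
        "max_tokens": 16,
    },
    timeout=60,
)
resp.raise_for_status()

# Assumes the OpenAI-style response format: {"choices": [{"text": ...}]}
print(resp.json()["choices"][0]["text"])
```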
Alternatives and similar repositories for openai-server
Users interested in openai-server are comparing it to the libraries listed below:
- Inference code for LLaMA models ☆189 · Updated 2 years ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- LLaMa retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated 2 years ago
- Drop-in replacement for OpenAI, but with Open models. ☆153 · Updated 2 years ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ☆317 · Updated last year
- ☆404 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- The code we currently use to fine-tune models. ☆117 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated 2 years ago
- howdoi.ai ☆257 · Updated 2 years ago
- SoTA Transformers with C-backend for fast inference on your CPU. ☆308 · Updated 2 years ago
- An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients. ☆332 · Updated last year
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backend ☆412 · Updated 2 years ago
- LLaMA Cog template ☆304 · Updated last year
- A Simple Discord Bot for the Alpaca LLM ☆99 · Updated 2 years ago
- Extends the original llama.cpp repo to support the RedPajama model. ☆118 · Updated last year
- Prompt programming with FMs. ☆444 · Updated last year
- Run inference on MPT-30B using CPU ☆576 · Updated 2 years ago
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model load… ☆113 · Updated 3 years ago
- Harnessing the Memory Power of the Camelids ☆147 · Updated 2 years ago
- ☆457 · Updated 2 years ago
- Embeddings-focused small version of the Llama NLP model ☆107 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- A Discord bot that roleplays! ☆151 · Updated 2 years ago
- A chatbot that does what you ask, like opening a Google search or posting a Tweet. ☆331 · Updated 5 months ago
- A template to run LLaMA in Cog ☆66 · Updated 2 years ago
- React app implementing OpenAI and Google APIs to re-create the behavior of the Toolformer paper. ☆233 · Updated 2 years ago