shawwn / openai-server
OpenAI API webserver
☆184 · Updated 3 years ago
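openai-server exposes an OpenAI-compatible HTTP API, so clients written against OpenAI's completions endpoint can be pointed at it instead. A minimal sketch of such a client, assuming the server speaks the standard `/v1/completions` protocol (the base URL and model name below are placeholders, not taken from the repository):

```python
import json
import urllib.request


def build_completion_request(prompt, model="gpt-j-6b", max_tokens=64, temperature=0.7):
    """Build the JSON body for an OpenAI-style /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(base_url, prompt, **kwargs):
    """POST a prompt to an OpenAI-compatible completions endpoint and
    return the first generated text choice."""
    body = json.dumps(build_completion_request(prompt, **kwargs)).encode("utf-8")
    req = urllib.request.Request(
        base_url.rstrip("/") + "/v1/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]


# Usage (assumes a local server is running):
# print(complete("http://localhost:8000", "Hello, world"))
```

Because the request shape matches OpenAI's, the same code works against any of the self-hosted API servers in the list below that implement that protocol.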
Alternatives and similar repositories for openai-server:
Users interested in openai-server are comparing it to the libraries listed below.
- LLaMa retrieval plugin script using OpenAI's retrieval plugin · ☆324 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper · ☆119 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts · ☆111 · Updated last year
- A GPT-J API to use with python3 to generate text, blogs, code, and more · ☆206 · Updated 2 years ago
- howdoi.ai · ☆256 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA · ☆123 · Updated last year
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… · ☆408 · Updated last year
- A simple Discord bot for the Alpaca LLM · ☆101 · Updated last year
- Instruct-tuning LLaMA on consumer hardware · ☆66 · Updated last year
- SoTA Transformers with C backend for fast inference on your CPU · ☆311 · Updated last year
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model load… · ☆115 · Updated 3 years ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client · ☆309 · Updated 10 months ago
- A tiny implementation of an autonomous agent powered by LLMs (OpenAI GPT-4) · ☆443 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI · ☆120 · Updated last year
- ☆128 · Updated 2 years ago
- Command-line script for running inference with models such as MPT-7B-Chat · ☆101 · Updated last year
- Inference code for LLaMA models · ☆188 · Updated 2 years ago
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit · ☆31 · Updated last year
- A Discord bot that roleplays! · ☆147 · Updated last year
- Command-line script for running inference with models such as falcon-7b-instruct · ☆76 · Updated last year
- ☆84 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support · ☆246 · Updated last year
- Embeddings-focused small version of the Llama NLP model · ☆103 · Updated last year
- Run the Alpaca LLM in LangChain · ☆218 · Updated last year
- Inference code for Facebook LLaMA models with Wrapyfi support · ☆130 · Updated last year
- ☆407 · Updated last year
- Yet Another LLaMA/ALPACA Discord Bot · ☆71 · Updated last year
- The code we currently use to fine-tune models · ☆113 · Updated 10 months ago
- A client/server for LLaMA (Large Language Model Meta AI) that can run ANYWHERE · ☆60 · Updated last year
- Instruct-tune LLaMA on consumer hardware · ☆362 · Updated last year