yoziru / nextjs-vllm-ui
Fully-featured, beautiful web interface for vLLM - built with NextJS.
☆166 · Updated 3 weeks ago
Alternatives and similar repositories for nextjs-vllm-ui
Users interested in nextjs-vllm-ui often compare it to the libraries listed below.
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆390 · Updated 2 weeks ago
- Automatically quantize GGUF models ☆219 · Updated 2 weeks ago
- A fast batching API for serving LLMs ☆189 · Updated last year
- An OpenAI-compatible API for chat with image input and questions about the images, i.e. multimodal. ☆267 · Updated 10 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- ☆51 · Updated 10 months ago
- ☆210 · Updated 4 months ago
- ☆108 · Updated 4 months ago
- ☆134 · Updated 3 weeks ago
- This is the Mixture-of-Agents (MoA) concept, adapted from the original work by TogetherAI. My version is tailored for local model usage a… ☆117 · Updated last year
- Gradio-based tool to run open-source LLMs directly from Hugging Face ☆96 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆118 · Updated last year
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … ☆192 · Updated last year
- Link your Ollama models to LM Studio ☆150 · Updated last year
- An extension for oobabooga/text-generation-webui that enables the LLM to search the web ☆275 · Updated last month
- Ollama chat client in Vue, everything you need for a private text RPG in the browser ☆136 · Updated last year
- An extension that lets the AI take the wheel, allowing it to use the mouse and keyboard, recognize UI elements, and prompt itself :3...no… ☆127 · Updated last year
- Unsloth Studio ☆122 · Updated 9 months ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … ☆610 · Updated 10 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆86 · Updated this week
- LLM inference in C/C++ ☆104 · Updated 3 weeks ago
- An OpenAI API compatible image generation server for the FLUX.1 family of models from Black Forest Labs ☆59 · Updated last year
- A pipeline parallel training script for LLMs. ☆165 · Updated 8 months ago
- Aggregates compute from spare GPU capacity ☆183 · Updated this week
- Distributed inference for MLX LLMs ☆99 · Updated last year
- Docker Compose setup to run vLLM on Windows ☆113 · Updated 2 years ago
- Sparse inference for transformer-based LLMs ☆216 · Updated 4 months ago
- Open-source LLM UI, compatible with all local LLM providers. ☆177 · Updated last year
- ☆127 · Updated last year
- A multimodal, function-calling-powered LLM web UI. ☆217 · Updated last year