gpustack / llama-box
LM inference server implementation based on *.cpp.
☆286 · Updated 2 months ago
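As an LM inference server built on the *.cpp projects, llama-box typically serves an OpenAI-compatible HTTP API. The sketch below shows how one might query such an endpoint; the listen address and model name are assumptions for illustration and depend on how the server is launched.

```python
# Minimal sketch: query an OpenAI-compatible chat endpoint such as the one
# served by a llama.cpp-based inference server (e.g. llama-box).
# The base URL and model name below are assumptions, not fixed defaults.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # assumed server address

payload = {
    "model": "qwen2.5-7b-instruct",  # hypothetical model identifier
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# Print the assistant's reply from the standard OpenAI response shape.
print(body["choices"][0]["message"]["content"])
```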
Alternatives and similar repositories for llama-box
Users interested in llama-box are comparing it to the repositories listed below.
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆211 · Updated 2 months ago
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆165 · Updated 3 months ago
- Run DeepSeek-R1 GGUFs on KTransformers. ☆254 · Updated 7 months ago
- xllamacpp - a Python wrapper of llama.cpp. ☆62 · Updated this week
- Library for model distillation. ☆153 · Updated last month
- ☆93 · Updated 3 months ago
- Automatically quantize GGUF models. ☆214 · Updated last week
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆138 · Updated 2 months ago
- Port of Facebook's LLaMA model in C/C++. ☆63 · Updated 6 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆727 · Updated this week
- ☆105 · Updated last month
- The LLM API Benchmark Tool is a flexible Go-based utility designed to measure and analyze the performance of OpenAI-compatible API endpoints… ☆49 · Updated 2 weeks ago
- Open-source text embedding models with an OpenAI-compatible API. ☆160 · Updated last year
- An OpenAI-compatible API for chat with image input and questions about the images (i.e. multimodal). ☆262 · Updated 7 months ago
- A proxy server for multiple Ollama instances with key security. ☆515 · Updated 2 weeks ago
- Self-hosted Hugging Face mirror service. ☆200 · Updated 3 months ago
- LLM inference in C/C++. ☆103 · Updated 2 months ago
- Fully-featured, beautiful web interface for vLLM, built with NextJS. ☆159 · Updated 5 months ago
- Cook up amazing multimodal AI applications effortlessly with MiniCPM-o. ☆211 · Updated 2 weeks ago
- CPU inference for the DeepSeek family of large language models in C++. ☆314 · Updated 3 weeks ago
- ☆53 · Updated this week
- The Fastest Way to Fine-Tune LLMs Locally. ☆323 · Updated 7 months ago
- InferX: Inference as a Service platform. ☆137 · Updated this week
- Docker Compose setup to run vLLM on Windows. ☆103 · Updated last year
- ☆206 · Updated last month
- Efficient visual programming for AI language models. ☆361 · Updated 5 months ago
- A memory framework for large language models and agents. ☆183 · Updated 10 months ago
- Download models from the Ollama library, without Ollama. ☆104 · Updated 11 months ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60. ☆307 · Updated 3 weeks ago
- Service for testing out the new Qwen2.5-Omni model. ☆61 · Updated 6 months ago