gpustack / llama-box
LM inference server implementation based on *.cpp.
☆ 290 · Updated 3 months ago
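For orientation, llama-box serves OpenAI-compatible endpoints, so it can be exercised with a plain HTTP POST. The sketch below is illustrative only: the host, port, and model name are assumptions, not values taken from this page.

```python
# Minimal sketch: querying a local llama-box server through its
# OpenAI-compatible chat endpoint. Host, port, and model name are
# placeholder assumptions for illustration.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "qwen2.5-7b-instruct",  # hypothetical model name
        "messages": [
            {"role": "user", "content": "Summarize what an LM inference server does."}
        ],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```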
Alternatives and similar repositories for llama-box
Users interested in llama-box are comparing it to the libraries listed below.
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆ 215 · Updated 3 months ago
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆ 172 · Updated 4 months ago
- Run DeepSeek-R1 GGUFs on KTransformers. ☆ 255 · Updated 8 months ago
- Library for model distillation. ☆ 156 · Updated 2 months ago
- xllamacpp - a Python wrapper of llama.cpp. ☆ 65 · Updated this week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆ 746 · Updated this week
- ☆ 93 · Updated 4 months ago
- ☆ 107 · Updated last month
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆ 147 · Updated 2 months ago
- Automatically quantize GGUF models. ☆ 214 · Updated 3 weeks ago
- Port of Facebook's LLaMA model in C/C++. ☆ 64 · Updated 6 months ago
- Open Source Text Embedding Models with an OpenAI-compatible API (see the embeddings sketch after this list). ☆ 160 · Updated last year
- Self-hosted Hugging Face mirror service. ☆ 205 · Updated 4 months ago
- Cook up amazing multimodal AI applications effortlessly with MiniCPM-o. ☆ 224 · Updated last week
- LLM inference in C/C++. ☆ 102 · Updated last week
- An OpenAI-compatible API for chat with image input and questions about the images, i.e. multimodal. ☆ 265 · Updated 8 months ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60. ☆ 327 · Updated last month
- The LLM API Benchmark Tool is a flexible Go-based utility designed to measure and analyze the performance of OpenAI-compatible API endpoi… ☆ 56 · Updated 2 weeks ago
- Fully-featured, beautiful web interface for vLLM, built with NextJS. ☆ 159 · Updated 6 months ago
- A proxy server for multiple Ollama instances with key-based security. ☆ 527 · Updated last week
- ☆ 54 · Updated this week
- InferX: Inference-as-a-Service platform. ☆ 138 · Updated this week
- CPU inference for the DeepSeek family of large language models in C++. ☆ 313 · Updated last month
- ☆ 365 · Updated this week
- ☆ 106 · Updated 2 months ago
- The Fastest Way to Fine-Tune LLMs Locally. ☆ 325 · Updated 8 months ago
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆ 145 · Updated 4 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆ 79 · Updated last year
- No-code CLI designed for accelerating ONNX workflows. ☆ 216 · Updated 5 months ago
- A third-party component library based on Gradio. Integrates Ant Design, Ant Design X, Monaco Editor and more advanced components to help… ☆ 129 · Updated last week
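The text-embedding entry above advertises an OpenAI-compatible API, so it can typically be queried like the standard embeddings endpoint. This is a minimal sketch under that assumption; the base URL and model name are placeholders, not documented values from that project.

```python
# Minimal sketch: calling an OpenAI-compatible /v1/embeddings endpoint,
# as exposed by the text-embedding server listed above. Base URL and
# model name are assumptions for illustration only.
import requests

resp = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "model": "all-MiniLM-L6-v2",  # hypothetical embedding model
        "input": ["llama-box is an LM inference server."],
    },
    timeout=30,
)
resp.raise_for_status()
vector = resp.json()["data"][0]["embedding"]
print(len(vector))  # dimensionality of the returned embedding
```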