gpustack / llama-box
LM inference server implementation based on the *.cpp projects.
☆131 · Updated this week
Alternatives and similar repositories for llama-box:
Users interested in llama-box are comparing it to the libraries listed below.
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆63 · Updated last month
- Review/check GGUF files and estimate memory usage and maximum tokens per second. ☆127 · Updated last week
- Automatically quantize GGUF models. ☆160 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆130 · Updated 8 months ago
- Open-source text embedding models with an OpenAI-compatible API. ☆147 · Updated 8 months ago
- ☆136 · Updated 3 weeks ago
- Self-hosted Hugging Face mirror service. ☆136 · Updated this week
- Uses the latest GraphRAG interface, with a local Ollama instance providing the LLM backend; supports installation via pip. ☆142 · Updated 5 months ago
- GLM Series Edge Models. ☆130 · Updated 3 weeks ago
- Get up and running with Llama 3, Mistral, Gemma, and other large language models. ☆26 · Updated this week
- ☆59 · Updated 10 months ago
- A pipeline-parallel training script for LLMs. ☆128 · Updated last month
- An OpenAI-compatible API for chat with image input and questions about the images, i.e., multimodal. ☆231 · Updated last week
- Unlock Unlimited Potential! Share Your GPU Power Across Your Local Network! ☆48 · Updated 8 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆237 · Updated this week
- The Level-Navi Agent, a framework that requires no training and utilizes large language models for deep query understanding and precise s… ☆66 · Updated 2 months ago
- llama.cpp fork with additional SOTA quants and improved performance. ☆202 · Updated this week
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆237 · Updated 11 months ago
- ☆107 · Updated 11 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆63 · Updated 11 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend. ☆200 · Updated 7 months ago
- Deploys a light and full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆41 · Updated 7 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models. ☆250 · Updated 5 months ago
- Evaluating and unaligning Chinese LLM censorship. ☆57 · Updated 5 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024. ☆54 · Updated 3 months ago
- Mixture-of-Experts (MoE) Language Model. ☆185 · Updated 6 months ago
- gpt_server is an open-source framework for production-grade deployment of LLMs or embedding models. ☆158 · Updated this week