gpustack / gguf-parser-go
Review/Check GGUF files and estimate the memory usage and maximum tokens per second.
☆161 · Updated this week
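gguf-parser-go derives its estimates from GGUF metadata alone, without loading the tensor data. As a rough illustration (a minimal sketch in plain Go, not the library's own API; `model.gguf` is a placeholder path), the fixed header that such a parser reads first, for GGUF format versions 2 and 3, looks like this:

```go
// Minimal sketch: read the fixed GGUF header (magic "GGUF", uint32 version,
// uint64 tensor count, uint64 metadata key/value count). A full parser then
// walks the metadata and tensor infos to estimate memory usage.
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("model.gguf") // placeholder file path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Magic bytes: must be "GGUF".
	var magic [4]byte
	if _, err := io.ReadFull(f, magic[:]); err != nil {
		panic(err)
	}
	if string(magic[:]) != "GGUF" {
		panic("not a GGUF file")
	}

	// Little-endian fixed-size fields (GGUF v2/v3 layout).
	var hdr struct {
		Version     uint32
		TensorCount uint64
		MetaKVCount uint64
	}
	if err := binary.Read(f, binary.LittleEndian, &hdr); err != nil {
		panic(err)
	}
	fmt.Printf("GGUF v%d: %d tensors, %d metadata key/value pairs\n",
		hdr.Version, hdr.TensorCount, hdr.MetaKVCount)
}
```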
Alternatives and similar repositories for gguf-parser-go
Users interested in gguf-parser-go are comparing it to the repositories listed below.
- LM inference server implementation based on *.cpp. ☆185 · Updated this week
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆109 · Updated 3 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance ☆439 · Updated this week
- An OpenAI API-compatible server for chat with image input and questions about the images, i.e. multimodal. ☆253 · Updated 2 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆583 · Updated last week
- LLM inference in C/C++ ☆76 · Updated this week
- Automatically quantize GGUF models ☆174 · Updated last week
- ☆202 · Updated 3 weeks ago
- Open Source Text Embedding Models with OpenAI Compatible API ☆153 · Updated 9 months ago
- xllamacpp - a Python wrapper of llama.cpp ☆36 · Updated last week
- ☆88 · Updated 2 months ago
- Run DeepSeek-R1 GGUFs on KTransformers ☆226 · Updated 2 months ago
- Comparison of Language Model Inference Engines ☆217 · Updated 4 months ago
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆537 · Updated this week
- Fully-featured, beautiful web interface for vLLM - built with NextJS. ☆135 · Updated last month
- VSCode AI coding assistant powered by self-hosted llama.cpp endpoint. ☆180 · Updated 3 months ago
- AI Studio is an independent app for utilizing LLMs. ☆259 · Updated last week
- Lightweight inference server for OpenVINO ☆165 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆341 · Updated this week
- Model swapping for llama.cpp (or any local OpenAI-compatible server) ☆745 · Updated this week
- ☆89 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated 10 months ago
- Moxin is a family of fully open-source and reproducible LLMs ☆92 · Updated 2 weeks ago
- ggml implementation of embedding models including SentenceTransformer and BGE ☆56 · Updated last year
- Port of Facebook's LLaMA model in C/C++ ☆52 · Updated 2 weeks ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆558 · Updated 2 months ago
- ☆59 · Updated last year
- Download models from the Ollama library, without Ollama ☆72 · Updated 5 months ago
- Service for testing out the new Qwen2.5 omni model ☆48 · Updated last week
- A real-time speech-to-speech chatbot powered by Whisper Small, Llama 3.2, and Kokoro-82M. ☆224 · Updated 3 months ago