gpustack / gguf-parser-go
Review/Check GGUF files and estimate the memory usage and maximum tokens per second.
☆173 · Updated this week
Alternatives and similar repositories for gguf-parser-go
Users interested in gguf-parser-go are comparing it to the repositories listed below.
- LM inference server implementation based on *.cpp. ☆203 · Updated this week
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆118 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆519 · Updated this week
- Automatically quantize GGUF models ☆181 · Updated this week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆614 · Updated last week
- xllamacpp - a Python wrapper of llama.cpp ☆40 · Updated last week
- ☆88 · Updated 2 months ago
- An OpenAI-compatible API for chat with image input and questions about the images, i.e. multimodal. ☆255 · Updated 3 months ago
- Run DeepSeek-R1 GGUFs on KTransformers ☆231 · Updated 3 months ago
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆767 · Updated last week
- ☆202 · Updated 2 weeks ago
- Gemma 2 optimized for your local machine. ☆370 · Updated 9 months ago
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆90 · Updated 2 weeks ago
- Download models from the Ollama library, without Ollama ☆84 · Updated 6 months ago
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆590 · Updated last week
- Open Source Text Embedding Models with OpenAI Compatible API ☆153 · Updated 10 months ago
- ggml implementation of embedding models including SentenceTransformer and BGE ☆58 · Updated last year
- Library for model distillation ☆142 · Updated 3 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆182 · Updated 4 months ago
- Lightweight inference server for OpenVINO ☆176 · Updated last week
- Efficient visual programming for AI language models ☆362 · Updated 3 weeks ago
- A fast batching API to serve LLMs ☆181 · Updated last year
- LLM inference in C/C++ ☆21 · Updated 2 months ago
- A real-time speech-to-speech chatbot powered by Whisper Small, Llama 3.2, and Kokoro-82M. ☆227 · Updated 4 months ago
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines ☆134 · Updated this week
- The Mixture-of-Agents (MoA) concept, adapted from the original work by TogetherAI; this version is tailored for local model usage a… ☆116 · Updated 11 months ago
- LLM inference in C/C++ ☆77 · Updated 3 weeks ago
- Comparison of Language Model Inference Engines ☆217 · Updated 5 months ago
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆405 · Updated last week
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆153 · Updated last year