gpustack / gguf-parser-go
Review/Check GGUF files and estimate the memory usage and maximum tokens per second.
☆189 · Updated 2 weeks ago
Alternatives and similar repositories for gguf-parser-go
Users interested in gguf-parser-go are comparing it to the libraries listed below.
- LM inference server implementation based on *.cpp. ☆248 · Updated this week
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆146 · Updated 3 weeks ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆668 · Updated this week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆381 · Updated this week
- A proxy server for multiple Ollama instances with key security. ☆470 · Updated last week
- Run DeepSeek-R1 GGUFs on KTransformers. ☆246 · Updated 5 months ago
- Download models from the Ollama library, without Ollama. ☆90 · Updated 8 months ago
- Automatically quantize GGUF models. ☆190 · Updated this week
- Library for model distillation. ☆148 · Updated 5 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆183 · Updated 6 months ago
- 🏗️ Fine-tune, build, and deploy open-source LLMs easily! ☆466 · Updated this week
- LLM inference in C/C++. ☆98 · Updated last week
- Comparison of Language Model Inference Engines. ☆225 · Updated 7 months ago
- ☆207 · Updated 2 weeks ago
- The LLM API Benchmark Tool is a flexible Go-based utility designed to measure and analyze the performance of OpenAI-compatible API endpoi… ☆36 · Updated 5 months ago
- ☆95 · Updated 7 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs. ☆461 · Updated last week
- ☆91 · Updated last month
- Code execution utilities for Open WebUI & Ollama. ☆290 · Updated 8 months ago
- ☆104 · Updated this week
- An OpenAI API-compatible API for chat with image input and questions about the images, aka multimodal. ☆260 · Updated 5 months ago
- llama.cpp fork with additional SOTA quants and improved performance. ☆964 · Updated last week
- Model Context Protocol servers for Milvus. ☆162 · Updated 2 months ago
- Fully-featured, beautiful web interface for vLLM, built with NextJS. ☆149 · Updated 2 months ago
- InferX is an Inference Function-as-a-Service platform. ☆119 · Updated 2 weeks ago
- Minimal Linux OS with a Model Context Protocol (MCP) gateway to expose local capabilities to LLMs. ☆260 · Updated last month
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆713 · Updated last week
- LLM Benchmark for Throughput via Ollama (Local LLMs). ☆269 · Updated last month
- Run AI-generated code in isolated sandboxes. ☆90 · Updated 6 months ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM …