gpustack / gguf-parser-go
Review/Check GGUF files and estimate the memory usage and maximum tokens per second.
☆240 · Updated last month
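For context, every GGUF file opens with a small fixed little-endian header (the magic bytes "GGUF", a version, a tensor count, and a metadata key-value count), which is the first thing a checker like this has to read before it can estimate anything. Below is a minimal, self-contained Go sketch of just that header read; the struct and field names are this sketch's own, not gguf-parser-go's API, which goes much further (full metadata, tensor layout, and the memory and tokens-per-second estimation).

```go
// Illustrative only: reads the fixed GGUF header from a local file.
// The real gguf-parser-go library exposes a much richer parsing and
// estimation API; nothing here mirrors its actual types or functions.
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"
)

// GGUF files begin with this fixed little-endian header per the GGUF spec.
type ggufHeader struct {
	Magic       [4]byte // must be "GGUF"
	Version     uint32  // 3 in the current spec
	TensorCount uint64  // number of tensors in the file
	MetadataKV  uint64  // number of metadata key-value pairs
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: ggufhdr <file.gguf>")
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var h ggufHeader
	if err := binary.Read(f, binary.LittleEndian, &h); err != nil {
		log.Fatal(err)
	}
	if string(h.Magic[:]) != "GGUF" {
		log.Fatalf("not a GGUF file (magic %q)", h.Magic)
	}
	fmt.Printf("GGUF v%d: %d tensors, %d metadata key-value pairs\n",
		h.Version, h.TensorCount, h.MetadataKV)
}
```

The metadata key-value pairs that follow this header (architecture, context length, quantization type, and so on) are what make memory-usage estimates possible without loading the model itself.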
Alternatives and similar repositories for gguf-parser-go
Users interested in gguf-parser-go are comparing it to the libraries listed below.
- LM inference server implementation based on *.cpp. ☆295 · Updated 2 months ago
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆192 · Updated last month
- Download models from the Ollama library, without Ollama ☆123 · Updated last year
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆798 · Updated last week
- Run DeepSeek-R1 GGUFs on KTransformers ☆261 · Updated 11 months ago
- Automatically quantize GGUF models ☆219 · Updated last month
- A proxy server for multiple ollama instances with key security ☆582 · Updated this week
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆169 · Updated 5 months ago
- Library for model distillation ☆161 · Updated 5 months ago
- ☆94 · Updated 7 months ago
- The LLM API Benchmark Tool is a flexible Go-based utility designed to measure and analyze the performance of OpenAI-compatible API endpoi… ☆68 · Updated 3 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆183 · Updated last year
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆232 · Updated 2 months ago
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- InferX: Inference as a Service Platform ☆156 · Updated this week
- ☆108 · Updated 2 weeks ago
- Evaluate and enhance your LLM deployments for real-world inference needs ☆843 · Updated this week
- Docs for GGUF quantization (unofficial) ☆366 · Updated 6 months ago
- The fastest way to fine-tune LLMs locally ☆333 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance ☆1,605 · Updated this week
- A modern web interface for managing and interacting with vLLM servers (www.github.com/vllm-project/vllm). Supports both GPU and CPU modes… ☆366 · Updated this week
- xllamacpp - a Python wrapper of llama.cpp ☆73 · Updated this week
- LLM inference in C/C++ ☆104 · Updated 2 weeks ago
- llmbasedos: Local-First OS Where Your AI Agents Wake Up and Work ☆282 · Updated last month
- ☆109 · Updated 5 months ago
- Open-source text embedding models with an OpenAI-compatible API ☆167 · Updated last year
- ☆209 · Updated last month
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp ☆170 · Updated 9 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆220 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆626 · Updated 2 weeks ago