Tensor library for machine learning
☆14,394 · Apr 9, 2026 · Updated last week
Alternatives and similar repositories for ggml
Users interested in ggml are comparing it to the libraries listed below.
- LLM inference in C/C++ ☆103,237 · Updated this week
- Port of OpenAI's Whisper model in C/C++ ☆48,661 · Mar 29, 2026 · Updated 2 weeks ago
- Inference Llama 2 in one file of pure C ☆19,379 · Aug 6, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76,536 · Updated this week
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++ ☆5,726 · Updated this week
- Universal LLM Deployment Engine with ML Compilation ☆22,414 · Apr 6, 2026 · Updated last week
- Python bindings for llama.cpp ☆10,181 · Updated this week
- Development repository for the Triton language and compiler ☆18,902 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch ☆8,121 · Updated this week
- Fast and memory-efficient exact attention ☆23,344 · Updated this week
- LLM training in simple, raw C/CUDA ☆29,511 · Jun 26, 2025 · Updated 9 months ago
- You like pytorch? You like micrograd? You love tinygrad! ❤️ ☆32,330 · Updated this week
- Minimalist ML framework for Rust ☆19,952 · Apr 9, 2026 · Updated last week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆42,029 · Updated this week
- [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models ☆6,152 · Jun 24, 2024 · Updated last year
- Large Language Model Text Generation Inference ☆10,830 · Mar 21, 2026 · Updated 3 weeks ago
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations ☆13,354 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆25,643 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,448 · Jun 2, 2025 · Updated 10 months ago
- MLX: An array framework for Apple silicon ☆25,423 · Updated this week
- Distribute and run LLMs with a single file. ☆24,121 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using the GGML library ☆1,885 · Jan 28, 2024 · Updated 2 years ago
- Inference code for Llama models ☆59,324 · Jan 26, 2025 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable) ☆14,471 · Mar 30, 2026 · Updated 2 weeks ago
- LlamaIndex is the leading document agent and OCR platform ☆48,601 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,865 · Jun 10, 2024 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,567 · Mar 23, 2025 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆56,599 · Nov 12, 2025 · Updated 5 months ago
- Transformer-related optimization, including BERT, GPT ☆6,412 · Mar 27, 2024 · Updated 2 years ago
- Fast inference engine for Transformer models ☆4,417 · Feb 4, 2026 · Updated 2 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆5,047 · Apr 11, 2025 · Updated last year
- Running large language models on a single GPU for throughput-oriented scenarios ☆9,375 · Oct 28, 2024 · Updated last year
- Lightweight, standalone C++ inference engine for Google's Gemma models ☆6,846 · Apr 8, 2026 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning ☆20,929 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆18,950 · Jul 29, 2024 · Updated last year
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator ☆19,864 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆9,324 · Jan 24, 2026 · Updated 2 months ago
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,536 · Jul 16, 2023 · Updated 2 years ago
- The original local LLM interface. Text, vision, tool-calling, training. UI + API, 100% offline and private. ☆46,493 · Updated this week
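Several entries in this list (k-bit quantization for PyTorch, GPTQ, QLoRA, INT4/INT8 RWKV inference) revolve around weight quantization. As a rough illustration of the shared idea, and not the actual API of any library above, a minimal symmetric per-tensor INT8 round-trip in plain Python might look like:

```python
# Sketch of symmetric INT8 quantization, the rough idea behind several
# of the libraries listed above. Illustrative only: real implementations
# use per-group scales, calibration, and packed storage.

def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] with one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 1.0, -0.99]
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
# Round-to-nearest keeps the per-weight error within half a scale step.
```

The per-tensor scale is the simplest variant; the quantization-focused projects above differ mainly in how they choose scales (per-channel or per-group) and in error-compensating schemes such as GPTQ's second-order weight updates.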