Tensor library for machine learning
☆14,252 · Last updated Mar 16, 2026
Alternatives and similar repositories for ggml
Users interested in ggml are comparing it to the libraries listed below.
- LLM inference in C/C++ (☆98,911, updated this week)
- Port of OpenAI's Whisper model in C/C++ (☆47,689, updated this week)
- Inference Llama 2 in one file of pure C (☆19,302, last updated Aug 6, 2024)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆74,135, updated this week)
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++ (☆5,591, last updated Mar 16, 2026)
- Universal LLM Deployment Engine with ML Compilation (☆22,246, last updated Mar 18, 2026)
- Python bindings for llama.cpp (☆10,089, updated this week)
- Development repository for the Triton language and compiler (☆18,708, updated this week)
- Accessible large language models via k-bit quantization for PyTorch (☆8,052, last updated Mar 17, 2026)
- Fast and memory-efficient exact attention (☆22,938, updated this week)
- LLM training in simple, raw C/CUDA (☆29,216, last updated Jun 26, 2025)
- You like pytorch? You like micrograd? You love tinygrad! ❤️ (☆31,715, updated this week)
- Minimalist ML framework for Rust (☆19,735, updated this week)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (☆41,869, last updated Mar 18, 2026)
- [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models (☆6,152, last updated Jun 24, 2024)
- Large Language Model Text Generation Inference (☆10,812, last updated Jan 8, 2026)
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations… (☆13,169, updated this week)
- SGLang is a high-performance serving framework for large language models and multimodal models (☆24,829, updated this week)
- An open platform for training, serving, and evaluating large language models; release repo for Vicuna and Chatbot Arena (☆39,445, last updated Jun 2, 2025)
- MLX: An array framework for Apple silicon (☆24,748, updated this week)
- Distribute and run LLMs with a single file (☆23,859, updated this week)
- Python bindings for the Transformer models implemented in C/C++ using the GGML library (☆1,883, last updated Jan 28, 2024)
- Inference code for Llama models (☆59,250, last updated Jan 26, 2025)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,431, last updated Mar 5, 2026)
- LlamaIndex is the leading document agent and OCR platform (☆47,963, updated this week)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,858, last updated Jun 10, 2024)
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model (☆1,562, last updated Mar 23, 2025)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆55,432, last updated Nov 12, 2025)
- Transformer-related optimization, including BERT, GPT (☆6,400, last updated Mar 27, 2024)
- Fast inference engine for Transformer models (☆4,368, last updated Feb 4, 2026)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (☆5,035, last updated Apr 11, 2025)
- Running large language models on a single GPU for throughput-oriented scenarios (☆9,379, last updated Oct 28, 2024)
- Lightweight, standalone C++ inference engine for Google's Gemma models (☆6,755, updated this week)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (☆20,841, last updated Mar 18, 2026)
- Instruct-tune LLaMA on consumer hardware (☆18,961, last updated Jul 29, 2024)
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator (☆19,643, updated this week)
- High-speed Large Language Model Serving for Local Deployment (☆9,060, last updated Jan 24, 2026)
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset (☆7,537, last updated Jul 16, 2023)
- The original local LLM interface. Text, vision, tool-calling, training, and more. 100% offline (☆46,348, updated this week)
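
Several of the entries above (k-bit quantization for PyTorch, GPTQ-based packages, INT4/INT8 RWKV inference) center on quantized inference. As a rough illustration of the shared idea, here is a minimal, self-contained sketch of symmetric per-tensor INT8 quantization; the function names are illustrative and not taken from any of the listed libraries, which use more sophisticated per-block and calibrated schemes.

```python
def quantize_int8(values):
    """Map floats to int8 codes in [-127, 127] with one per-tensor scale."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax > 0 else 1.0
    # Round to the nearest code and clamp to the symmetric int8 range.
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.0, 0.25, 0.75]
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
# The round trip is lossy, but the absolute error is bounded by scale / 2.
```

Storing one byte per weight plus a single scale is what lets these engines cut memory traffic roughly 4x versus FP32; the real libraries refine this with per-row or per-block scales to keep the error bound tight where weight magnitudes vary.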