ggml-org / ggml
Tensor library for machine learning
☆13,840 · Updated last week
Alternatives and similar repositories for ggml
Users interested in ggml are comparing it to the libraries listed below.
- Inference Llama 2 in one file of pure C ☆19,121 · Updated last year
- Python bindings for llama.cpp ☆9,917 · Updated 5 months ago
- LLM inference in C/C++ ☆93,398 · Updated this week
- Universal LLM Deployment Engine with ML Compilation ☆21,896 · Updated 3 weeks ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,815 · Updated last year
- Large Language Model Text Generation Inference ☆10,731 · Updated 2 weeks ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,527 · Updated 2 years ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,912 · Updated this week
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,318 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,876 · Updated last year
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++ ☆5,215 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆8,591 · Updated 5 months ago
- Development repository for the Triton language and compiler ☆18,178 · Updated this week
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,481 · Updated 7 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,414 · Updated last month
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,023 · Updated 9 months ago
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,380 · Updated last year
- Port of OpenAI's Whisper model in C/C++ ☆46,066 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,705 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.