ggerganov / ggml
Tensor library for machine learning
☆ 11,233 · Updated this week
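For context, here is a minimal sketch of what a ggml program looks like. It assumes the classic single-context CPU API (ggml_init, ggml_new_graph, ggml_graph_compute_with_ctx); exact function names and struct fields vary across ggml versions, so treat it as an illustration rather than the repository's official example.

```c
// Minimal ggml sketch: element-wise f = a*b + c on the CPU.
// Illustration only; assumes the classic context/graph API
// (ggml_graph_compute_with_ctx etc.), which may differ in newer ggml versions.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // one memory pool holds all tensors and the compute graph
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // declare the computation symbolically
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * f = ggml_add(ctx, ggml_mul(ctx, a, b), c);

    // fill the inputs
    for (int i = 0; i < 4; ++i) {
        ggml_set_f32_1d(a, i, (float) i);   // a = {0, 1, 2, 3}
        ggml_set_f32_1d(b, i, 2.0f);        // b = {2, 2, 2, 2}
        ggml_set_f32_1d(c, i, 1.0f);        // c = {1, 1, 1, 1}
    }

    // build the forward graph and evaluate it on one thread
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, f);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

    for (int i = 0; i < 4; ++i) {
        printf("f[%d] = %.1f\n", i, ggml_get_f32_1d(f, i));   // 1.0 3.0 5.0 7.0
    }

    ggml_free(ctx);
    return 0;
}
```

The same tensor/graph machinery underpins llama.cpp and whisper.cpp, both of which appear in the related projects below.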
Related projects
Alternatives and complementary repositories for ggml
- Python bindings for llama.cpp ☆8,141 · Updated this week
- LLM inference in C/C++ ☆68,097 · Updated this week
- Large Language Model Text Generation Inference ☆9,122 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆30,423 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,059 · Updated 5 months ago
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath ☆9,269 · Updated 3 months ago
- Inference Llama 2 in one file of pure C ☆17,476 · Updated 3 months ago
- Locally run an Instruction-Tuned Chat-Style LLM ☆10,252 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆7,919 · Updated 6 months ago
- Instruct-tune LLaMA on consumer hardware ☆18,653 · Updated 3 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆5,994 · Updated 2 months ago
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,248 · Updated 2 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆6,299 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆36,993 · Updated this week
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆12,672 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆12,427 · Updated last month
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,384 · Updated last year
- Universal LLM Deployment Engine with ML Compilation ☆19,215 · Updated this week
- Go ahead and axolotl questions ☆7,930 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆4,497 · Updated last month
- Running large language models on a single GPU for throughput-oriented scenarios ☆9,198 · Updated 3 weeks ago
- High-speed Large Language Model Serving on PCs with Consumer-grade GPUs ☆7,965 · Updated 2 months ago
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) ☆11,487 · Updated this week
- Fast and memory-efficient exact attention ☆14,279 · Updated this week
- Official inference library for Mistral models ☆9,738 · Updated last week
- Stable Diffusion and Flux in pure C/C++ ☆3,508 · Updated 3 weeks ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale ☆10,734 · Updated last week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆3,680 · Updated this week
- Port of OpenAI's Whisper model in C/C++ ☆35,738 · Updated this week