google / gemma.cpp
A lightweight, standalone C++ inference engine for Google's Gemma models.
☆6,491 · Updated last week
Alternatives and similar repositories for gemma.cpp
Users interested in gemma.cpp are comparing it to the libraries listed below.
- The official PyTorch implementation of Google's Gemma models ☆5,496 · Updated last month
- Gemma open-weight LLM library, from Google DeepMind ☆3,494 · Updated this week
- CoreNet: A library for training deep neural networks ☆7,013 · Updated 2 months ago
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python ☆6,011 · Updated 3 months ago
- High-speed Large Language Model serving for local deployment ☆8,231 · Updated 4 months ago
- A lightweight library for portable low-level GPU computation using WebGPU ☆3,877 · Updated 4 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization ☆9,731 · Updated last year
- Run PyTorch LLMs locally on servers, desktops, and mobile ☆3,597 · Updated this week
- Tensor library for machine learning ☆12,808 · Updated this week
- Examples in the MLX framework ☆7,632 · Updated last month
- A simple, performant, and scalable JAX LLM ☆1,815 · Updated last week
- An extensible deep learning library ☆2,158 · Updated this week
- On-device AI across mobile, embedded, and edge for PyTorch ☆3,012 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens ☆8,631 · Updated last year
- Inference Llama 2 in one file of pure C ☆18,543 · Updated 11 months ago
- Modeling, training, eval, and inference code for OLMo ☆5,757 · Updated this week
- Local AI API platform ☆2,765 · Updated last week
- Run Mixtral-8x7B models in Colab or on consumer desktops ☆2,312 · Updated last year
- Blazingly fast LLM inference ☆5,849 · Updated this week
- Large World Model: modeling text and video with millions of tokens of context ☆7,300 · Updated 8 months ago
- Development repository for the Triton language and compiler ☆16,114 · Updated this week
- Distribute and run LLMs with a single file ☆22,726 · Updated last week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,927 · Updated last year
- Official inference library for Mistral models ☆10,355 · Updated 3 months ago
- PyTorch-native post-training library ☆5,323 · Updated this week
- Training LLMs with QLoRA + FSDP ☆1,490 · Updated 8 months ago
- Implementation of a MatMul-free LM ☆3,016 · Updated 8 months ago
- Inference Llama 2 in one file of pure 🔥 ☆2,115 · Updated last year
- A minimal GPU design in Verilog to learn how GPUs work from the ground up ☆8,565 · Updated 10 months ago
- A PyTorch-native platform for training generative AI models ☆4,032 · Updated this week