google / gemma_pytorch
The official PyTorch implementation of Google's Gemma models
☆5,290 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for gemma_pytorch
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆5,991 · Updated this week
- Open-weights LLM from Google DeepMind. ☆2,477 · Updated this week
- PyTorch-native finetuning library. ☆4,336 · Updated this week
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆2,673 · Updated 3 months ago
- Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom dataset… ☆15,222 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆7,919 · Updated 6 months ago
- Fast and memory-efficient exact attention. ☆14,279 · Updated this week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆5,669 · Updated last month
- 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale. ☆10,734 · Updated last week
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. ☆3,597 · Updated last month
- Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory. ☆18,263 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,195 · Updated 4 months ago
- Go ahead and axolotl questions. ☆7,930 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,299 · Updated this week
- Official inference library for Mistral models. ☆9,738 · Updated last week
- Modeling, training, eval, and inference code for OLMo. ☆4,645 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆6,127 · Updated this week
- Train transformer language models with reinforcement learning. ☆10,086 · Updated this week
- Inference Llama 2 in one file of pure C. ☆17,476 · Updated 3 months ago
- Qwen2.5 is the large language model series developed by the Qwen team, Alibaba Cloud. ☆9,783 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆30,423 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆20,286 · Updated 3 months ago
- The official Meta Llama 3 GitHub site. ☆27,145 · Updated 3 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆10,059 · Updated 5 months ago
- Llama 3 implementation one matrix multiplication at a time. ☆13,741 · Updated 5 months ago
- A series of large language models trained from scratch by developers @01-ai. ☆7,711 · Updated last week
- Retrieval and retrieval-augmented LLMs. ☆7,613 · Updated this week
- Examples in the MLX framework. ☆6,235 · Updated last week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain… ☆8,681 · Updated last week