google / gemma_pytorch
The official PyTorch implementation of Google's Gemma models
☆5,571 · Updated 5 months ago
Alternatives and similar repositories for gemma_pytorch
Users interested in gemma_pytorch are comparing it to the libraries listed below.
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆6,615 · Updated this week
- Gemma open-weight LLM library, from Google DeepMind ☆3,812 · Updated last week
- PyTorch native post-training library ☆5,595 · Updated this week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,152 · Updated 2 months ago
- ☆4,105 · Updated last year
- Modeling, training, eval, and inference code for OLMo ☆6,124 · Updated 3 weeks ago
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,327 · Updated last year
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,151 · Updated last year
- Large World Model -- Modeling Text and Video with Millions Context ☆7,371 · Updated last year
- Training LLMs with QLoRA + FSDP ☆1,529 · Updated last year
- CoreNet: A library for training deep neural networks ☆7,024 · Updated last month
- Run PyTorch LLMs locally on servers, desktop and mobile ☆3,617 · Updated 2 months ago
- A PyTorch native platform for training generative AI models ☆4,719 · Updated this week
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆3,270 · Updated 8 months ago
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,891 · Updated 3 weeks ago
- ☆2,551 · Updated last year
- An Extensible Deep Learning Library ☆2,289 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,799 · Updated last year
- A simple, performant and scalable JAX LLM! ☆1,994 · Updated this week
- Examples in the MLX framework ☆7,990 · Updated last month
- Tools for merging pretrained large language models. ☆6,447 · Updated 3 weeks ago
- Robust recipes to align language models with human and AI preferences ☆5,422 · Updated 2 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆12,918 · Updated last week
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,066 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,621 · Updated last year
- Video+code lecture on building nanoGPT from scratch ☆4,535 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Updated last year
- ☆3,038 · Updated last year
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs. ☆4,732 · Updated 4 months ago
- Inference Llama 2 in one file of pure C ☆18,952 · Updated last year