Model parallel transformers in JAX and Haiku
☆6,368 · Jan 21, 2023 · Updated 3 years ago
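mesh-transformer-jax and several repositories below (Megatron, Mesh TensorFlow, GPT-NeoX) use tensor model parallelism, where individual weight matrices are split across devices and each device computes a partial result. A minimal NumPy sketch of a column-parallel linear layer, with device shards simulated as array slices (all shapes and variable names here are illustrative, not any library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_shards = 8, 16, 4

x = rng.normal(size=(2, d_model))     # batch of activations (replicated on every device)
W = rng.normal(size=(d_model, d_ff))  # full weight matrix of the layer

# Column parallelism: each "device" holds only a slice of W's output columns.
shards = np.split(W, n_shards, axis=1)          # n_shards slices of shape (d_model, d_ff // n_shards)

# Each device computes its partial output independently, with no communication...
partials = [x @ w_shard for w_shard in shards]

# ...and an all-gather concatenates the partials into the full activation.
y_parallel = np.concatenate(partials, axis=1)

# The sharded computation matches the unsharded matmul exactly.
assert np.allclose(y_parallel, x @ W)
```

In the real libraries the shards live on separate accelerators and the concatenation is a collective (e.g. an all-gather), but the arithmetic decomposition is the same.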
Alternatives and similar repositories for mesh-transformer-jax
Users interested in mesh-transformer-jax are comparing it to the libraries listed below.
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,426 · Apr 13, 2026 · Updated 3 weeks ago
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,279 · Feb 25, 2022 · Updated 4 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆241 · May 12, 2023 · Updated 2 years ago
- Repo for external large-scale work ☆6,554 · Apr 27, 2024 · Updated 2 years ago
- ☆2,964 · Apr 21, 2026 · Updated 2 weeks ago
- JAX-based neural network library ☆3,228 · Apr 30, 2026 · Updated last week
- OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically. ☆37,409 · Aug 17, 2024 · Updated last year
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆24,275 · Aug 15, 2024 · Updated last year
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,214 · Sep 30, 2025 · Updated 7 months ago
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆35,536 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,745 · Jan 8, 2024 · Updated 2 years ago
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆24,833 · Aug 14, 2024 · Updated last year
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆10,109 · Sep 7, 2024 · Updated last year
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models ☆160,288 · Updated this week
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,366 · Oct 28, 2024 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,258 · Jul 17, 2024 · Updated last year
- Flax is a neural network library for JAX that is designed for flexibility. ☆7,191 · Updated this week
- CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. ☆5,175 · Oct 27, 2025 · Updated 6 months ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆42,281 · Updated this week
- Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch ☆11,318 · May 11, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). ☆14,503 · Apr 28, 2026 · Updated last week
- Code for GPT-4chan ☆637 · Jun 3, 2022 · Updated 3 years ago
- ☆1,647 · Apr 27, 2023 · Updated 3 years ago
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,868 · Oct 11, 2025 · Updated 6 months ago
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆5,629 · Feb 17, 2024 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,114 · Jan 23, 2026 · Updated 3 months ago
- Ongoing research training transformer models at scale ☆16,253 · Updated this week
- Development repository for the Triton language and compiler ☆19,124 · Updated this week
- StableLM: Stability AI Language Models ☆15,717 · Apr 8, 2024 · Updated 2 years ago
- API for the GPT-J language model 🦜. Including a FastAPI backend and a streamlit frontend ☆334 · Oct 25, 2021 · Updated 4 years ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,513 · Jan 14, 2026 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆57,469 · Nov 12, 2025 · Updated 5 months ago
- GPT-3: Language Models are Few-Shot Learners ☆15,744 · Sep 18, 2020 · Updated 5 years ago
- Instruct-tune LLaMA on consumer hardware ☆18,937 · Jul 29, 2024 · Updated last year
- LLM inference in C/C++ ☆107,892 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,463 · May 1, 2026 · Updated last week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision ☆9,658 · Apr 29, 2026 · Updated last week
- A latent text-to-image diffusion model ☆72,974 · Jun 18, 2024 · Updated last year
- Mesh TensorFlow: Model Parallelism Made Easier ☆1,625 · Nov 17, 2023 · Updated 2 years ago