Mesh TensorFlow: Model Parallelism Made Easier
☆1,625 · Nov 17, 2023 · Updated 2 years ago
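As context for the libraries compared below: Mesh TensorFlow's approach to model parallelism is to split named tensor dimensions across a mesh of processors, with each processor computing a partial result that is then combined. The following is a minimal numpy sketch of that idea only; it does not use the actual `mesh_tensorflow` API, and the two "processors" are simulated in-process.

```python
import numpy as np

# Illustrative sketch (not the mesh_tensorflow API): shard the contraction
# dimension of a matmul across 2 simulated processors, compute partial
# products on each, then sum the partials (emulating an allreduce).

np.random.seed(0)
batch, d_in, d_out = 4, 6, 3
x = np.random.randn(batch, d_in)
w = np.random.randn(d_in, d_out)

# Split the shared "d_in" dimension across 2 processors.
x_shards = np.split(x, 2, axis=1)   # each shard: (batch, d_in/2)
w_shards = np.split(w, 2, axis=0)   # each shard: (d_in/2, d_out)

# Each processor computes a partial matmul over its shard.
partials = [xs @ ws for xs, ws in zip(x_shards, w_shards)]

# Summing the partials emulates the allreduce step.
y_parallel = sum(partials)

# The sharded result matches the unsharded computation.
assert np.allclose(y_parallel, x @ w)
```

The key design point this illustrates is that sharding a contraction dimension changes only where the arithmetic happens, not the result, at the cost of one collective sum.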
Alternatives and similar repositories for mesh
Users interested in mesh are comparing it to the libraries listed below.
- PyTorch extensions for high performance and large scale training. ☆3,407 · Apr 26, 2025 · Updated last year
- ☆2,963 · Apr 21, 2026 · Updated last week
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆593 · Updated this week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,692 · Dec 1, 2025 · Updated 4 months ago
- Ongoing research training transformer models at scale. ☆16,145 · Updated this week
- Lingvo ☆2,862 · Apr 21, 2026 · Updated last week
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". ☆6,508 · Jan 14, 2026 · Updated 3 months ago
- Enabling PyTorch on XLA Devices (e.g. Google TPU). ☆2,775 · Updated this week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more. ☆35,484 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆42,188 · Updated this week
- A GPipe implementation in PyTorch. ☆862 · Jul 25, 2024 · Updated last year
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training. ☆1,873 · Updated this week
- Make huge neural nets fit in memory. ☆2,837 · Apr 26, 2020 · Updated 6 years ago
- A performant and modular runtime for TensorFlow. ☆753 · Sep 4, 2025 · Updated 7 months ago
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch. ☆8,950 · Apr 20, 2026 · Updated last week
- A high performance and generic framework for distributed DNN training. ☆3,715 · Oct 3, 2023 · Updated 2 years ago
- Dataset, streaming, and file system extensions maintained by TensorFlow SIG-IO. ☆736 · Mar 11, 2026 · Updated last month
- TFX is an end-to-end platform for deploying production ML pipelines. ☆2,182 · Apr 21, 2026 · Updated last week
- Useful extra functionality for TensorFlow 2.x maintained by SIG-addons. ☆1,706 · Sep 4, 2025 · Updated 7 months ago
- Transformer related optimization, including BERT, GPT. ☆6,412 · Mar 27, 2024 · Updated 2 years ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆17,205 · Jun 2, 2023 · Updated 2 years ago
- Google Research ☆37,778 · Updated this week
- JAX-based neural network library. ☆3,226 · Updated this week
- Training and serving large-scale neural networks with auto parallelization. ☆3,186 · Dec 9, 2023 · Updated 2 years ago
- Flax is a neural network library for JAX that is designed for flexibility. ☆7,174 · Updated this week
- ☆367 · Apr 12, 2024 · Updated 2 years ago
- Trax — Deep Learning with Clear Code and Speed. ☆8,303 · Sep 26, 2025 · Updated 7 months ago
- Development repository for the Triton language and compiler. ☆19,040 · Updated this week
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries. ☆7,423 · Apr 13, 2026 · Updated 2 weeks ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,207 · Sep 30, 2025 · Updated 6 months ago
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,283 · Feb 25, 2022 · Updated 4 years ago
- PyTorch elastic training. ☆729 · Jun 15, 2022 · Updated 3 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment. ☆789 · Apr 24, 2023 · Updated 3 years ago
- Guide for building custom ops for TensorFlow. ☆385 · Mar 23, 2023 · Updated 3 years ago
- Optimized primitives for collective multi-GPU communication. ☆4,640 · Updated this week
- Efficiently computes derivatives of NumPy code. ☆7,484 · Updated this week
- Compiler for Neural Network hardware accelerators. ☆3,327 · May 11, 2024 · Updated last year
- "Multi-Level Intermediate Representation" Compiler Infrastructure. ☆1,768 · Apr 22, 2021 · Updated 5 years ago
- An implementation of a deep learning recommendation model (DLRM). ☆4,033 · Jan 12, 2026 · Updated 3 months ago