OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training
☆560 · Jan 13, 2025 · Updated last year
Alternatives and similar repositories for OpenDiLoCo
Users interested in OpenDiLoCo are comparing it to the libraries listed below.
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆852 · Nov 16, 2025 · Updated 4 months ago
- ☆48 · Jan 18, 2024 · Updated 2 years ago
- TOPLOC is a novel method for verifiable inference that enables users to verify that LLM providers are using the correct model configurat… ☆52 · Apr 14, 2025 · Updated 11 months ago
- Asynchronous P2P communication backend for decentralized pipeline parallelism ☆42 · Jun 9, 2025 · Updated 9 months ago
- Modded vLLM to run pipeline parallelism over public networks ☆40 · May 20, 2025 · Updated 10 months ago
- Distributed Training Over-The-Internet ☆984 · Oct 14, 2025 · Updated 5 months ago
- ☆137 · Mar 20, 2025 · Updated last year
- Manage ML configuration with pydantic ☆16 · Updated this week
- Solidity contracts for the decentralized Prime Network protocol ☆26 · Jul 6, 2025 · Updated 8 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Nov 18, 2025 · Updated 4 months ago
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆149 · Dec 11, 2023 · Updated 2 years ago
- DeMo: Decoupled Momentum Optimization ☆198 · Dec 2, 2024 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆133 · Dec 3, 2024 · Updated last year
- Peer-to-peer compute and intelligence network that enables decentralized AI development at scale ☆137 · Nov 10, 2025 · Updated 4 months ago
- ☆34 · Sep 10, 2024 · Updated last year
- PyTorch implementation of DiLoCo ☆22 · May 31, 2024 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆487 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆374 · Dec 12, 2024 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆891 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world. ☆2,402 · Jan 11, 2026 · Updated 2 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆148 · Sep 12, 2025 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆203 · Jul 17, 2024 · Updated last year
- Lightning training strategy for Hivemind ☆18 · Jan 20, 2026 · Updated 2 months ago
- A 7B-parameter model for mathematical reasoning ☆42 · Feb 17, 2025 · Updated last year
- Tools for merging pretrained large language models ☆6,867 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆2,116 · Aug 26, 2025 · Updated 6 months ago
- [NeurIPS 2024] Low-rank, memory-efficient optimizer without SVD ☆33 · Jul 1, 2025 · Updated 8 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆252 · Jan 31, 2025 · Updated last year
- Efficient Triton Kernels for LLM Training ☆6,216 · Updated this week
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆463 · Sep 27, 2024 · Updated last year
- A framework for PyTorch to enable fault management for collective communication libraries (CCL) such as NCCL ☆20 · Feb 9, 2026 · Updated last month
- PyTorch-native quantization and sparsity for training and inference ☆2,730 · Updated this week
- GRadient-INformed MoE ☆264 · Sep 25, 2024 · Updated last year
- An open infrastructure to democratize and decentralize the development of superintelligence for humanity ☆635 · Updated this week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,445 · Updated this week
- A PyTorch-native platform for training generative AI models ☆5,162 · Updated this week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,684 · Oct 28, 2024 · Updated last year
- Latent Large Language Models ☆19 · Aug 24, 2024 · Updated last year
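Several of the repositories above (prime, the torchft LocalSGD/DiLoCo strategies, the torch DiLoCo port) revolve around the DiLoCo idea of low-communication training: workers run many local optimizer steps and synchronize only at infrequent outer steps. A minimal NumPy sketch of that two-level loop on a toy quadratic, with purely illustrative hyperparameters and names (this is not OpenDiLoCo's API):

```python
import numpy as np

# Hedged sketch of a DiLoCo-style two-level optimizer (illustrative, not
# OpenDiLoCo's actual implementation): each worker runs H inner SGD steps
# locally with no communication, then a single all-reduce averages the
# "pseudo-gradients" (start parameters minus local parameters), which an
# outer optimizer applies with Nesterov-style momentum.

rng = np.random.default_rng(0)
dim, num_workers, H, outer_steps = 8, 4, 16, 20
inner_lr, outer_lr, beta = 0.05, 0.1, 0.9

A = np.diag(rng.uniform(0.5, 2.0, size=dim))  # toy quadratic loss 0.5 * x^T A x
theta = rng.normal(size=dim)                  # replicated global parameters
initial_loss = 0.5 * theta @ A @ theta
momentum = np.zeros(dim)

def run_inner(x):
    """H local SGD steps on noisy gradients -- no communication here."""
    for _ in range(H):
        grad = A @ x + 0.01 * rng.normal(size=dim)
        x = x - inner_lr * grad
    return x

for _ in range(outer_steps):
    local_params = [run_inner(theta.copy()) for _ in range(num_workers)]
    # One communication round per outer step: average the pseudo-gradients.
    pseudo_grad = theta - np.mean(local_params, axis=0)
    momentum = beta * momentum + pseudo_grad
    theta = theta - outer_lr * (pseudo_grad + beta * momentum)  # Nesterov-style

final_loss = 0.5 * theta @ A @ theta
print(final_loss < initial_loss)
```

The key communication-saving property is visible in the loop structure: parameters are exchanged once every H inner steps instead of every step, cutting communication by roughly a factor of H relative to per-step all-reduce.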