PrimeIntellect-ai / OpenDiloco
OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training
☆562 · Updated last year
Alternatives and similar repositories for OpenDiloco
Users interested in OpenDiloco are comparing it to the libraries listed below.
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆850 · Updated 2 months ago
- Distributed Training Over-The-Internet ☆975 · Updated 3 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a rough sketch of the idea appears after this list) ☆371 · Updated last year
- Async RL Training at Scale ☆1,044 · Updated this week
- ☆592 · Updated last year
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆674 · Updated 9 months ago
- ☆219 · Updated last year
- ☆957 · Updated 3 months ago
- Beyond Language Models: Byte Models are Digital World Simulators ☆334 · Updated last year
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆851 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆912 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆201 · Updated last year
- Pretraining and inference code for a large-scale depth-recurrent language model ☆861 · Updated last month
- Long context evaluation for large language models ☆226 · Updated 11 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆943 · Updated 2 months ago
- Scalable and robust tree-based speculative decoding algorithm ☆366 · Updated last year
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆902 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆356 · Updated 2 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers ☆347 · Updated last year
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… ☆839 · Updated last week
- LLM KV cache compression made easy ☆866 · Updated last week
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆475 · Updated this week
- ☆577 · Updated last year
- GRadient-INformed MoE ☆264 · Updated last year
- [ICML 2024] CLLMs: Consistency Large Language Models ☆410 · Updated last year
- Reference implementation of Megalodon 7B model ☆529 · Updated 8 months ago
- noise_step: Training in 1.58b With No Gradient Memory ☆220 · Updated last year
- PyTorch-native post-training at scale ☆613 · Updated this week
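
The memory-layers entry above describes the core mechanism: a trainable key-value lookup that adds parameters to a model while keeping per-token compute roughly flat, because each token reads only a few memory slots. As a rough, hypothetical sketch of that idea in PyTorch — not the repo's actual code or API, with all names here (`SparseMemoryLayer`, `num_keys`, `topk`) invented for illustration — it might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMemoryLayer(nn.Module):
    """Minimal sketch of a trainable key-value memory layer.

    Hypothetical illustration only: parameters grow with `num_keys`,
    but each token gathers just the top-k value slots, so per-token
    FLOPs stay roughly constant as the memory is enlarged.
    """

    def __init__(self, dim: int, num_keys: int = 4096, topk: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim, bias=False)
        self.keys = nn.Parameter(torch.randn(num_keys, dim) / dim**0.5)
        self.values = nn.Embedding(num_keys, dim)  # the "extra parameters"
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        q = self.query_proj(x)                   # (B, S, D)
        # Naive full scoring for clarity; real memory layers factor the
        # keys (product keys) so this lookup is sublinear in num_keys.
        scores = q @ self.keys.t()               # (B, S, num_keys)
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)  # sparse weights over k slots
        gathered = self.values(top_idx)          # (B, S, k, D)
        out = (weights.unsqueeze(-1) * gathered).sum(dim=-2)
        return x + out                           # residual connection


x = torch.randn(2, 8, 64)
layer = SparseMemoryLayer(dim=64)
print(layer(x).shape)  # torch.Size([2, 8, 64])
```

The key design point is that only the k gathered value vectors enter the output computation, so the value table can be scaled up (more parameters, more capacity) without the per-token cost scaling with it; the naive scoring step above is the part that practical implementations replace with a factored, sublinear lookup.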