NousResearch / DisTrO
Distributed Training Over-The-Internet
☆967 · Updated last month
Alternatives and similar repositories for DisTrO
Users interested in DisTrO are comparing it to the libraries listed below.
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆848 · Updated 3 weeks ago
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆550 · Updated 11 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆762 · Updated this week
- Async RL Training at Scale ☆909 · Updated last week
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,174 · Updated 10 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆656 · Updated 6 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆931 · Updated 3 weeks ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 4 months ago
- An open infrastructure to democratize and decentralize the development of superintelligence for humanity. ☆541 · Updated this week
- Pretraining and inference code for a large-scale depth-recurrent language model ☆852 · Updated last month
- On-device intelligence. ☆390 · Updated 8 months ago
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆315 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,262 · Updated last month
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆330 · Updated last year
- ☆864 · Updated 2 years ago
- VPTQ, A Flexible and Extreme low-bit quantization algorithm ☆668 · Updated 7 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆452 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆897 · Updated last month
- noise_step: Training in 1.58b With No Gradient Memory ☆221 · Updated 11 months ago
- Minimalistic large language model 3D-parallelism training ☆2,362 · Updated 3 weeks ago
- smol models are fun too ☆92 · Updated last year
- Official inference library for pre-processing of Mistral models ☆823 · Updated last week
- Fast parallel LLM inference for MLX ☆234 · Updated last year
- ☆581 · Updated last year
- System 2 Reasoning Link Collection ☆861 · Updated 8 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,917 · Updated 3 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,397 · Updated 7 months ago
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆565 · Updated last year
- Testing baseline LLM performance across various models ☆330 · Updated last week
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,034 · Updated 7 months ago