NousResearch / DisTrO
Distributed Training Over-The-Internet
☆959 · Updated 4 months ago
Alternatives and similar repositories for DisTrO
Users interested in DisTrO are comparing it to the repositories listed below.
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆537 · Updated 8 months ago
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆828 · Updated 4 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆702 · Updated this week
- Async RL Training at Scale ☆669 · Updated last week
- Official inference library for pre-processing of Mistral models ☆797 · Updated this week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆915 · Updated 5 months ago
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,151 · Updated 8 months ago
- An open infrastructure to democratize and decentralize the development of superintelligence for humanity. ☆490 · Updated this week
- smol models are fun too ☆93 · Updated 11 months ago
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆312 · Updated 3 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆816 · Updated 2 months ago
- ☆865 · Updated last year
- System 2 Reasoning Link Collection ☆855 · Updated 6 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆651 · Updated 4 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆449 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆879 · Updated last month
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆658 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,168 · Updated this week
- noise_step: Training in 1.58b With No Gradient Memory ☆221 · Updated 9 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆830 · Updated last month
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆322 · Updated 11 months ago
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,031 · Updated 5 months ago
- ☆572 · Updated last year
- On-device intelligence. ☆378 · Updated 6 months ago
- The Tensor (or Array) ☆449 · Updated last year
- nanoGPT style version of Llama 3.1 ☆1,429 · Updated last year
- Simple Python library/structure to ablate features in LLMs which are supported by TransformerLens ☆511 · Updated last year
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆931 · Updated last year
- Fast parallel LLM inference for MLX ☆220 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,246 · Updated last month