PrimeIntellect-ai / OpenDiloco
OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training
☆455 · Updated last month
Alternatives and similar repositories for OpenDiloco:
Users interested in OpenDiloco are comparing it to the libraries listed below.
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆663 · Updated this week
- Distributed Training Over-The-Internet ☆881 · Updated 3 months ago
- VPTQ: a flexible, extremely low-bit quantization algorithm ☆593 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the sketch after this list). ☆301 · Updated 2 months ago
- Long context evaluation for large language models ☆200 · Updated last week
- Efficient LLM Inference over Long Sequences ☆362 · Updated 2 weeks ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆195 · Updated 7 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆872 · Updated this week
- Muon optimizer: >30% sample efficiency with <3% wall-clock overhead ☆434 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,032 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆268 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆745 · Updated 5 months ago
- ☆200 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 4 months ago
- ☆506 · Updated 6 months ago
- ☆419 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,630 · Updated this week
- [ICML 2024] CLLMs: Consistency Large Language Models ☆379 · Updated 3 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆634 · Updated 2 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆849 · Updated last week
- [NeurIPS'24 Spotlight, ICLR'25] Speeds up long-context LLM inference by computing attention approximately with dynamic sparsity ☆924 · Updated last week
- An Open Source Toolkit For LLM Distillation ☆516 · Updated last month
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" ☆369 · Updated this week
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆294 · Updated 4 months ago
- LLM KV cache compression made easy ☆412 · Updated 2 weeks ago
- Scalable and robust tree-based speculative decoding algorithm ☆333 · Updated last month
- Fast, Flexible and Portable Structured Generation ☆748 · Updated this week
- PyTorch per-step fault tolerance (actively under development) ☆253 · Updated last week
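The memory-layer entry above describes a trainable key-value lookup that adds parameters without adding proportional compute. The sketch below illustrates that idea in plain PyTorch; it is an assumption-laden toy (the module name, sizes, and the naive full-key scoring are illustrative, not the referenced repository's code), and real product-key memory implementations factorize the lookup so scoring does not scan every key.

```python
# Toy memory layer: parameters live in a large key/value table, but each token
# only gathers its top-k values. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMemoryLayer(nn.Module):
    def __init__(self, d_model: int, num_keys: int = 4096, topk: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(num_keys, d_model) / d_model**0.5)
        self.values = nn.Parameter(torch.randn(num_keys, d_model) / d_model**0.5)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q = self.query_proj(x)                       # (B, T, D)
        scores = q @ self.keys.t()                   # (B, T, num_keys); naive full scan
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)      # (B, T, k)
        gathered = self.values[top_idx]              # (B, T, k, D) sparse value gather
        out = (weights.unsqueeze(-1) * gathered).sum(dim=-2)
        return x + out                               # residual connection

# Example: drop-in where a feed-forward block would sit in a transformer stack.
layer = ToyMemoryLayer(d_model=256)
y = layer(torch.randn(2, 16, 256))   # -> torch.Size([2, 16, 256])
```

Only `topk` value rows are gathered per token, so the value table can grow the parameter count substantially while the value-side FLOPs per token stay fixed; the naive `q @ keys.t()` scoring here is the part that product-key memories replace with a factorized lookup.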