PrimeIntellect-ai / OpenDiLoCo
OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training
☆537 · Updated 9 months ago
Alternatives and similar repositories for OpenDiLoCo
Users interested in OpenDiLoCo are comparing it to the libraries listed below.
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆831 · Updated 4 months ago
- Async RL Training at Scale ☆709 · Updated this week
- Distributed Training Over-The-Internet ☆961 · Updated last week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆342 · Updated 10 months ago
- VPTQ: a flexible, extreme low-bit quantization algorithm ☆659 · Updated 5 months ago
- Efficient LLM Inference over Long Sequences ☆390 · Updated 3 months ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆768 · Updated last week
- ☆827 · Updated this week
- A scalable and robust tree-based speculative decoding algorithm ☆359 · Updated 8 months ago
- ☆218 · Updated 8 months ago
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆548 · Updated last month
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆420 · Updated this week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆883 · Updated last month
- ☆572 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆248 · Updated 8 months ago
- noise_step: Training in 1.58b With No Gradient Memory ☆221 · Updated 9 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch -> CUDA problems ☆612 · Updated last week
- Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU ☆668 · Updated this week
- Atropos is a framework of reinforcement learning environments for collecting and evaluating LLM trajectories through diverse … ☆714 · Updated this week
- LLM KV cache compression made easy ☆650 · Updated last week
- Muon is Scalable for LLM Training ☆1,325 · Updated 2 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆886 · Updated 3 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculation of the attention… ☆1,136 · Updated 3 weeks ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆404 · Updated 11 months ago
- A FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆296 · Updated 2 months ago
- A throughput-oriented, high-performance serving framework for LLMs ☆904 · Updated last month
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆343 · Updated 5 months ago
- Beyond Language Models: Byte Models are Digital World Simulators ☆329 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆325 · Updated 11 months ago