NVIDIA-NeMo / RL
Scalable toolkit for efficient model reinforcement
☆1,293 · Updated this week
Alternatives and similar repositories for RL
Users interested in RL are comparing it to the libraries listed below.
- A project to improve skills of large language models ☆804 · Updated this week
- PyTorch-native post-training at scale ☆605 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,518 · Updated this week
- ☆957 · Updated 3 months ago
- Scalable toolkit for efficient model alignment ☆848 · Updated 3 months ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆902 · Updated this week
- Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime. ☆830 · Updated this week
- LLM KV cache compression made easy ☆866 · Updated last week
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆400 · Updated this week
- PyTorch building blocks for the OLMo ecosystem ☆763 · Updated this week
- Ring attention implementation with flash attention ☆973 · Updated 4 months ago
- Async RL Training at Scale ☆1,034 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆964 · Updated 10 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆623 · Updated last week
- Muon is Scalable for LLM Training ☆1,421 · Updated 6 months ago
- slime is an LLM post-training framework for RL Scaling. ☆3,571 · Updated last week
- OLMoE: Open Mixture-of-Experts Language Models ☆965 · Updated 4 months ago
- ☆579 · Updated 4 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆334 · Updated 3 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving (see the conceptual sketch after this list). ☆676 · Updated this week
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,203 · Updated 5 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆279 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training ☆2,529 · Updated last month
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆781 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,180 · Updated 4 months ago
- Large Context Attention ☆764 · Updated 3 months ago
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆2,058 · Updated 5 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆816 · Updated last week
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
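
For orientation on one recurring theme above, here is a minimal conceptual sketch of greedy speculative decoding in plain Python. It is an illustration only, not the API of any repository listed here: `speculative_decode`, `draft_next`, and `target_next` are hypothetical names, and the "models" are stand-in callables.

```python
from typing import Callable, List

def speculative_decode(
    draft_next: Callable[[List[int]], int],   # hypothetical cheap draft model: context -> next token
    target_next: Callable[[List[int]], int],  # hypothetical expensive target model: context -> next token
    prompt: List[int],
    k: int = 4,
    max_new: int = 16,
) -> List[int]:
    """Greedy speculative decoding: the draft proposes k tokens at a time;
    the target keeps the longest agreeing prefix and supplies its own token
    at the first mismatch."""
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1) Draft k candidate tokens autoregressively with the cheap model.
        ctx = list(out)
        proposal = []
        for _ in range(k):
            tok = draft_next(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # 2) Verify with the target model. (In a real system this is one
        #    batched forward pass over all k positions; replacing k sequential
        #    target calls with that single pass is the entire speedup.)
        for tok in proposal:
            verified = target_next(out)
            out.append(verified)
            if verified != tok or len(out) - len(prompt) >= max_new:
                break  # first mismatch (or budget hit): drop remaining proposals
    return out

# Toy usage: both "models" just count upward, so every draft token is accepted.
if __name__ == "__main__":
    count_up = lambda ctx: ctx[-1] + 1
    print(speculative_decode(count_up, count_up, prompt=[0], k=4, max_new=8))
    # -> [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

Real implementations verify all k draft tokens in a single target forward pass and, when sampling rather than decoding greedily, use an accept/reject rule on the two models' token probabilities; the sketch keeps only the control flow.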