NVIDIA-NeMo / RL
Scalable toolkit for efficient model reinforcement
☆1,227 · Updated this week
Alternatives and similar repositories for RL
Users interested in RL are comparing it to the libraries listed below.
- A project to improve skills of large language models ☆756 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,437 · Updated this week
- Scalable toolkit for efficient model alignment ☆848 · Updated 3 months ago
- PyTorch-native post-training at scale ☆585 · Updated this week
- ☆949 · Updated 2 months ago
- PyTorch building blocks for the OLMo ecosystem ☆681 · Updated this week
- Miles is an enterprise-facing reinforcement learning framework for large-scale MoE post-training and production workloads, forked from an… ☆714 · Updated this week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆888 · Updated this week
- slime is an LLM post-training framework for RL scaling ☆3,224 · Updated last week
- Training library for Megatron-based models with bi-directional Hugging Face conversion capability ☆347 · Updated this week
- LLM KV cache compression made easy ☆749 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆950 · Updated 3 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆614 · Updated last week
- Async RL Training at Scale ☆985 · Updated this week
- Muon is Scalable for LLM Training ☆1,397 · Updated 5 months ago
- Ring attention implementation with flash attention ☆961 · Updated 4 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆952 · Updated 9 months ago
- KernelBench: Can LLMs Write GPU Kernels? Benchmark + toolkit with Torch -> CUDA (+ more DSLs) ☆748 · Updated this week
- ☆575 · Updated 3 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,186 · Updated 4 months ago
- Recipes to scale inference-time compute of open models ☆1,123 · Updated 7 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆278 · Updated last month
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆576 · Updated 3 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆370 · Updated last year
- ☆1,067 · Updated this week
- ☆1,376 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆2,411 · Updated last month
- Large Context Attention ☆759 · Updated 3 months ago
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference ☆328 · Updated 2 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates ☆343 · Updated 3 weeks ago