NVIDIA-NeMo / RL
Scalable toolkit for efficient model reinforcement
☆857 · Updated this week
Alternatives and similar repositories for RL
Users interested in RL are comparing it to the libraries listed below.
- SkyRL: A Modular Full-stack RL Library for LLMs ☆818 · Updated this week
- A project to improve skills of large language models ☆553 · Updated this week
- Scalable toolkit for efficient model alignment ☆838 · Updated last month
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆516 · Updated this week
- slime is an LLM post-training framework for RL Scaling. ☆1,747 · Updated this week
- ☆423 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆343 · Updated 9 months ago
- Decentralized RL Training at Scale ☆569 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆847 · Updated 5 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆459 · Updated 3 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆265 · Updated last month
- Muon is Scalable for LLM Training ☆1,302 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆863 · Updated 6 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,076 · Updated 2 weeks ago
- Ring attention implementation with flash attention ☆864 · Updated last month
- ☆519 · Updated last month
- ☆216 · Updated 7 months ago
- LLM KV cache compression made easy ☆604 · Updated this week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆443 · Updated 3 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆313 · Updated 4 months ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆524 · Updated 2 months ago
- Efficient LLM Inference over Long Sequences ☆391 · Updated 2 months ago
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆269 · Updated last month
- PyTorch building blocks for the OLMo ecosystem ☆286 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆557 · Updated 2 weeks ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆395 · Updated 2 weeks ago
- Recipes to scale inference-time compute of open models ☆1,111 · Updated 3 months ago
- Large Context Attention ☆736 · Updated 7 months ago
- Procedural reasoning datasets ☆1,092 · Updated last week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆490 · Updated 7 months ago