ChenmienTan / RL2
☆373 · Updated this week
Alternatives and similar repositories for RL2
Users interested in RL2 are comparing it to the libraries listed below.
- The official implementation of Self-Play Preference Optimization (SPPO) ☆569 · Updated 5 months ago
- Codebase for Iterative DPO Using Rule-based Rewards ☆252 · Updated 3 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models ☆1,204 · Updated 6 months ago
- A scalable, end-to-end training pipeline for general-purpose agents ☆339 · Updated 2 weeks ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆231 · Updated 3 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions, all in one framework ☆242 · Updated last month
- Adds Sequence Parallelism to LLaMA-Factory ☆527 · Updated this week
- ☆216 · Updated 2 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆258 · Updated 10 months ago
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention ☆242 · Updated 2 months ago
- Recipes to train the self-rewarding reasoning LLMs ☆225 · Updated 4 months ago
- Train your Agent model via our easy and efficient framework ☆1,277 · Updated this week
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM… ☆182 · Updated last week
- R1-like Computer-use Agent ☆78 · Updated 3 months ago
- A recipe for online RLHF and online iterative DPO ☆522 · Updated 6 months ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ☆227 · Updated 9 months ago
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ☆249 · Updated 2 weeks ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆183 · Updated 2 months ago
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments ☆36 · Updated 5 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆264 · Updated 4 months ago
- Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,205 · Updated 3 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆209 · Updated 2 months ago
- [COLM'25] DeepRetrieval - 🔥 Training Search Agent with Retrieval Outcomes via Reinforcement Learning ☆585 · Updated last month
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆136 · Updated 4 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆412 · Updated 2 months ago
- Official code of the paper "Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models" ☆79 · Updated last month
- L1: Controlling How Long a Reasoning Model Thinks With Reinforcement Learning ☆228 · Updated 2 months ago
- Async pipelined version of Verl ☆108 · Updated 3 months ago
- slime is an LLM post-training framework aimed at RL scaling ☆596 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆499 · Updated this week