eric-haibin-lin / verl-data
☆12 · Updated 7 months ago
Alternatives and similar repositories for verl-data
Users interested in verl-data are comparing it to the libraries listed below.
- ☆64 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) · ☆112 · Updated 11 months ago
- ☆105 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning · ☆48 · Updated last year
- [NeurIPS'24 LanGame workshop] On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability · ☆41 · Updated 5 months ago
- ☆19 · Updated 11 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? · ☆32 · Updated 4 months ago
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" · ☆67 · Updated 8 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) · ☆89 · Updated last year
- ☆60 · Updated 6 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆56 · Updated 10 months ago
- [EMNLP 2025] CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward · ☆59 · Updated 4 months ago
- ☆53 · Updated 10 months ago
- The official repository of the paper "Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models" · ☆110 · Updated 4 months ago
- Process Reward Models That Think · ☆67 · Updated 3 weeks ago
- ☆111 · Updated last year
- ☆108 · Updated 3 months ago
- WideSearch: Benchmarking Agentic Broad Info-Seeking · ☆109 · Updated 2 months ago
- Verifiers for LLM Reinforcement Learning · ☆80 · Updated 8 months ago
- ☆20 · Updated last year
- ☆87 · Updated 4 months ago
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization · ☆43 · Updated 10 months ago
- Exploring whether LLMs perform case-based or rule-based reasoning · ☆30 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · Updated last year
- The code and data for the paper JiuZhang3.0 · ☆49 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models · ☆138 · Updated last year
- Extensive Self-Contrast Enables Feedback-Free Language Model Alignment · ☆21 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models · ☆69 · Updated last year
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches · ☆59 · Updated 9 months ago