agentica-project / verl-pipeline
Async pipelined version of Verl
☆60 · Updated 2 weeks ago
Alternatives and similar repositories for verl-pipeline:
Users interested in verl-pipeline are comparing it to the libraries listed below.
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆175 · Updated last month
- ☆149 · Updated 4 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆42 · Updated 5 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆132 · Updated 7 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆101 · Updated 4 months ago
- Reproducing R1 for Code with Reliable Rewards ☆179 · Updated this week
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning" ☆94 · Updated last week
- ☆125 · Updated 3 weeks ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆95 · Updated last month
- Collection of papers on scalable automated alignment ☆88 · Updated 6 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆143 · Updated last month
- ☆63 · Updated 5 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆131 · Updated 3 weeks ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆47 · Updated 2 months ago
- ☆57 · Updated last month
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆53 · Updated 8 months ago
- ☆137 · Updated 5 months ago
- ☆89 · Updated 7 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆182 · Updated 6 months ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆61 · Updated 2 weeks ago
- ☆98 · Updated 6 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆102 · Updated last month
- The official repository of the Omni-MATH benchmark ☆80 · Updated 4 months ago
- Based on the R1-Zero method, using rule-based rewards and GRPO on the Code Contests dataset ☆17 · Updated this week
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆34 · Updated 2 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆63 · Updated 3 weeks ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆321 · Updated 7 months ago
- Repository of the LV-Eval benchmark ☆63 · Updated 7 months ago
- ☆63 · Updated 5 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆122 · Updated 9 months ago