jinzhuoran / RAG-RewardBench
RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment
☆16 · Updated 9 months ago
Alternatives and similar repositories for RAG-RewardBench
Users interested in RAG-RewardBench are comparing it to the repositories listed below.
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". ☆22 · Updated last month
- ☆21 · Updated 5 months ago
- ☆59 · Updated last year
- ☆22 · Updated last year
- ☆37 · Updated last month
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- ☆45 · Updated this week
- Pitfalls of Rule- and Model-based Verifiers: A Case Study on Mathematical Reasoning. ☆23 · Updated 4 months ago
- ☆19 · Updated 6 months ago
- ☆62 · Updated 3 months ago
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆44 · Updated 3 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 2 months ago
- ☆50 · Updated 11 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; the code base comes from open-instruct and LA… ☆29 · Updated 10 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆23 · Updated 9 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆25 · Updated 10 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- Emergent Hierarchical Reasoning in LLMs/VLMs through Reinforcement Learning ☆30 · Updated 3 weeks ago
- ☆18 · Updated 5 months ago
- The official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions". ☆15 · Updated last month
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆43 · Updated 3 weeks ago
- ☆30 · Updated 9 months ago
- ☆13 · Updated last year
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆56 · Updated 3 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆12 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆69 · Updated 9 months ago
- Instruction-following benchmark for large reasoning models ☆42 · Updated last month
- The implementation of LeCo ☆31 · Updated 8 months ago