hkust-nlp / RL-Verifier-Robustness
From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning.
☆23 · Updated last month
Alternatives and similar repositories for RL-Verifier-Robustness
Users interested in RL-Verifier-Robustness are comparing it to the libraries listed below.
- ☆19 · Updated 7 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 11 months ago
- Extending context length of visual language models ☆12 · Updated 11 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 4 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆24 · Updated 10 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- ☆58 · Updated last year
- ☆50 · Updated last year
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆59 · Updated 5 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- ☆57 · Updated 3 weeks ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆62 · Updated 11 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… ☆30 · Updated 11 months ago
- Code of the EMNLP 2025 paper "UltraIF: Advancing Instruction Following from the Wild" ☆19 · Updated 7 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 7 months ago
- ☆13 · Updated last year
- ☆16 · Updated 5 months ago
- Instruction-following benchmark for large reasoning models ☆45 · Updated 3 months ago
- ☆21 · Updated 6 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated last month
- ☆64 · Updated 5 months ago
- ☆32 · Updated 6 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆67 · Updated 4 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆37 · Updated last year
- ☆22 · Updated 3 weeks ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- Resources and paper list for "Scaling Environments for Agents". This repository accompanies our survey on how environments contribute to … ☆25 · Updated last week
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆58 · Updated 5 months ago
- ☆45 · Updated last month
- RL with Experience Replay ☆48 · Updated 3 months ago