DualityRL / multi-attempt
☆19 · Updated 10 months ago
Alternatives and similar repositories for multi-attempt
Users interested in multi-attempt are comparing it to the repositories listed below.
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment · ☆16 · Updated last year
- ☆16 · Updated last year
- Sotopia-RL: Reward Design for Social Intelligence · ☆46 · Updated 5 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners · ☆85 · Updated 7 months ago
- ☆50 · Updated 11 months ago
- ☆47 · Updated 3 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" · ☆58 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? · ☆32 · Updated 5 months ago
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning · ☆24 · Updated 3 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism · ☆30 · Updated last year
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated last year
- The source code for running LLMs on the AAAR-1.0 benchmark · ☆17 · Updated 9 months ago
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories · ☆40 · Updated 5 months ago
- ☆45 · Updated 6 months ago
- ☆31 · Updated last year
- The official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions" · ☆15 · Updated 4 months ago
- ☆57 · Updated last week
- ☆17 · Updated 5 months ago
- Exploration of automated dataset selection approaches at large scales · ☆53 · Updated 10 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs · ☆48 · Updated last year
- ☆22 · Updated 5 months ago
- ☆70 · Updated 7 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · Updated 2 years ago
- A Recipe for Building LLM Reasoners to Solve Complex Instructions · ☆29 · Updated 3 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP'24) · ☆27 · Updated 3 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆41 · Updated 3 weeks ago
- The official implementation of the paper "Self-Updatable Large Language Models by Integrating Context into Model Parameters" · ☆13 · Updated 8 months ago
- ☆58 · Updated last year
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models · ☆27 · Updated last year
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs · ☆24 · Updated last year