ByteDance-BandAI / ReportBench
A comprehensive benchmark for evaluating deep research agents on academic survey tasks
☆26 · Updated last month
Alternatives and similar repositories for ReportBench
Users interested in ReportBench are comparing it to the repositories listed below.
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models" ☆51 · Updated last year
- [EMNLP 2025] Verification Engineering for RL in Instruction Following ☆40 · Updated last week
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 4 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 9 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆68 · Updated 11 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆25 · Updated last week
- ☆58 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 4 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆62 · Updated 11 months ago
- The official repo of "WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents" ☆74 · Updated 2 weeks ago
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- A comprehensive collection of work on learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆56 · Updated 4 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- WideSearch: Benchmarking Agentic Broad Info-Seeking ☆95 · Updated this week
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models (EMNLP Findings 2023) ☆28 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆36 · Updated last year
- ☆62 · Updated 4 months ago
- [EMNLP'24] LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆30 · Updated last year
- The source code for running LLMs on the AAAR-1.0 benchmark. ☆17 · Updated 6 months ago
- ☆30 · Updated 9 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆69 · Updated this week
- Evaluate the Quality of Critique ☆36 · Updated last year
- Code for the 2025 ACL publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆33 · Updated 3 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; the codebase builds on open-instruct and LA… ☆29 · Updated 10 months ago
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning. ☆23 · Updated last week
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆26 · Updated 10 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆114 · Updated 5 months ago