HumanEval-V / HumanEval-V-Benchmark
A Lightweight Visual Reasoning Benchmark for Evaluating Large Multimodal Models through Complex Diagrams in Coding Tasks
☆13Updated 9 months ago
Alternatives and similar repositories for HumanEval-V-Benchmark
Users interested in HumanEval-V-Benchmark are comparing it to the repositories listed below.
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning☆98Updated last year
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning☆69Updated 4 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models☆63Updated 11 months ago
- ☆32Updated 6 months ago
- ☆136Updated 2 months ago
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25]☆94Updated 7 months ago
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning.☆23Updated last month
- The rule-based evaluation subset and code implementation of Omni-MATH☆25Updated 11 months ago
- ☆19Updated 7 months ago
- ☆50Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint"☆38Updated last year
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models"☆50Updated last year
- ☆25Updated 8 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct☆189Updated 10 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy☆76Updated last month
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning"☆125Updated last month
- ☆22Updated 3 weeks ago
- [NeurIPS 2025 Spotlight] ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning☆133Updated 2 months ago
- [EMNLP 2024] Multi-modal reasoning problems via code generation.☆27Updated 9 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs.☆132Updated 8 months ago
- Official implementation for "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?"☆49Updated 5 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping☆59Updated 6 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style☆68Updated 4 months ago
- The official repository of NeurIPS'25 paper "Ada-R1: From Long-Cot to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization"☆20Updated 2 weeks ago
- A Sober Look at Language Model Reasoning☆89Updated last week
- ☆30Updated last year
- ☆58Updated last year
- Reproducing R1 for Code with Reliable Rewards☆12Updated 7 months ago
- Extending context length of visual language models☆12Updated 11 months ago
- ☆23Updated last year