HumanEval-V / HumanEval-V-Benchmark
A Lightweight Visual Reasoning Benchmark for Evaluating Large Multimodal Models through Complex Diagrams in Coding Tasks
☆12 · Updated 2 months ago
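HumanEval-V inherits the evaluation protocol of the HumanEval family: generated code is executed against unit tests and typically scored with pass@k. For reference, below is a minimal sketch of the unbiased pass@k estimator from the original HumanEval paper (Chen et al., 2021); the function name and the sample counts in the usage line are illustrative, not taken from this repository.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total completions sampled for a problem
    c: number of completions that pass all unit tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative usage: 20 samples per problem, 5 passing -> pass@1 = 0.25
print(pass_at_k(n=20, c=5, k=1))
```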
Alternatives and similar repositories for HumanEval-V-Benchmark
Users interested in HumanEval-V-Benchmark are comparing it to the repositories listed below.
- [EMNLP 2024] Multi-modal reasoning problems via code generation. ☆23 · Updated 3 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning): Diving into Self-Evolving Training for Multimodal Reasoning ☆58 · Updated 4 months ago
- Official repo for "HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation Task" ☆27 · Updated last month
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆29 · Updated 4 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆59 · Updated 3 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆22 · Updated last week
- Code for "A Sober Look at Progress in Language Model Reasoning" paper☆45Updated this week
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆95 · Updated last week
- A Survey on the Honesty of Large Language Models ☆57 · Updated 5 months ago
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" ☆82 · Updated last month
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆61 · Updated 5 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆72 · Updated 7 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆21 · Updated 4 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025.☆22Updated 2 months ago
- Reproducing R1 for Code with Reliable Rewards☆190Updated last week
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts☆31Updated 10 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning☆93Updated 11 months ago
- V1: Toward Multimodal Reasoning by Designing Auxiliary Task☆34Updated last month
- Code for the arXiv paper "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆19 · Updated 4 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆67 · Updated 3 months ago