HumanEval-V / HumanEval-V-Benchmark
A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating Large Multimodal Models through Coding Tasks
☆17 · Updated 2 months ago
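HumanEval-V evaluates large multimodal models by pairing visual context with coding tasks and scoring whether the generated code passes the task's tests. Below is a minimal sketch of what such an evaluation loop might look like; the Hugging Face dataset path, the field names (`image`, `function_signature`, `test_script`), and the `model.generate` interface are illustrative assumptions, not the benchmark's documented API.

```python
# Minimal sketch of a pass@1 loop for a visual coding benchmark such as
# HumanEval-V. The dataset path and the "image" / "function_signature" /
# "test_script" field names are assumptions for illustration; check the
# benchmark's repository for the real schema.
from datasets import load_dataset

def evaluate(model, split: str = "test") -> float:
    dataset = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split=split)  # assumed path
    passed = 0
    for task in dataset:
        # Assumed interface: the model sees the image plus the target
        # function signature and returns a complete implementation as text.
        code = model.generate(image=task["image"], prompt=task["function_signature"])
        scope: dict = {}
        try:
            # In practice, run generated code in a sandboxed subprocess.
            exec(code, scope)                 # define the candidate function
            exec(task["test_script"], scope)  # run the task's assertions
            passed += 1                       # no exception -> tests passed
        except Exception:
            pass                              # errors / failed asserts count as failures
    return passed / len(dataset)              # pass@1 with a single sample per task
```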
Alternatives and similar repositories for HumanEval-V-Benchmark:
Users interested in HumanEval-V-Benchmark are comparing it to the repositories listed below
- ☆61 · Updated this week
- ☆58 · Updated 5 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆21 · Updated 2 months ago
- M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆55 · Updated last month
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 6 months ago
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆18 · Updated 2 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆48 · Updated 2 months ago
- ☆27 · Updated 3 months ago
- ☆28 · Updated 3 months ago
- Training and Benchmarking LLMs for Code Preference. ☆32 · Updated 3 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆14 · Updated 2 months ago
- ☆20 · Updated 7 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆54 · Updated 2 months ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆45 · Updated 3 months ago
- ☆18 · Updated 4 months ago
- ☆20 · Updated 3 months ago
- ☆13 · Updated 7 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆23 · Updated 4 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆59 · Updated 3 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆57 · Updated 4 months ago
- The repository for the project "Fine-tuning Large Language Models with Sequential Instructions"; the codebase comes from open-instruct and LA… ☆29 · Updated 2 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆52 · Updated 4 months ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆22 · Updated 11 months ago
- ☆21 · Updated 7 months ago
- ☆12 · Updated 7 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆16 · Updated last month