pkunlp-icler / PCA-EVAL
[ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain
☆102 · Updated last year
Alternatives and similar repositories for PCA-EVAL:
Users interested in PCA-EVAL are comparing it to the repositories listed below
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆118 · Updated 2 weeks ago
- A Self-Training Framework for Vision-Language Reasoning ☆70 · Updated 2 months ago
- ☆68 · Updated 2 months ago
- ☆95 · Updated last year
- ☆125 · Updated 8 months ago
- ☆61 · Updated last year
- ☆37 · Updated 2 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆96 · Updated 3 weeks ago
- ☆143 · Updated 4 months ago
- ☆69 · Updated 3 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆113 · Updated 3 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆31 · Updated 8 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆127 · Updated 4 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆31 · Updated 3 months ago
- An RLHF Infrastructure for Vision-Language Models ☆167 · Updated 4 months ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆47 · Updated 4 months ago
- ☆64 · Updated 9 months ago
- Feeling confused about super alignment? Here is a reading list ☆42 · Updated last year
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆60 · Updated last week
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆79 · Updated 8 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆84 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆39 · Updated 3 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆37 · Updated 4 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆41 · Updated 8 months ago
- ☆73 · Updated last year
- ☆49 · Updated last year