mat-agent / MAT-Agent
☆46 · Updated last week
Alternatives and similar repositories for MAT-Agent
Users interested in MAT-Agent are comparing it to the libraries listed below.
- ☆74 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 5 months ago
- Code for "CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models" ☆16 · Updated 3 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ☆64 · Updated 3 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆69 · Updated 3 weeks ago
- ☆24 · Updated 4 months ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆64 · Updated last month
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆30 · Updated last week
- Official repo for EscapeCraft (a 3D environment for room escape) and the MM-Escape benchmark ☆16 · Updated 3 weeks ago
- A continuously updated collection of the latest papers, technical reports, and benchmarks on multimodal reasoning ☆45 · Updated 3 months ago
- ☆46 · Updated 2 months ago
- ☆150 · Updated 7 months ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆31 · Updated 2 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, PKU ☆47 · Updated last year
- An RLHF Infrastructure for Vision-Language Models ☆177 · Updated 7 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 · Updated last year
- TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆51 · Updated last week
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents ☆116 · Updated last month
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆119 · Updated 3 weeks ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆65 · Updated 11 months ago
- A comprehensive collection of process reward models ☆92 · Updated 2 weeks ago
- ☆101 · Updated this week
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆88 · Updated last year
- ☆80 · Updated 5 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 7 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆50 · Updated 6 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 3 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆123 · Updated 2 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆55 · Updated 10 months ago
- ☆100 · Updated last year