Brandon3964 / MultiModal-Task-Vector
[NeurIPS 2024] Official Code for the Paper "Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning"
☆23 · Updated 4 months ago
Alternatives and similar repositories for MultiModal-Task-Vector
Users interested in MultiModal-Task-Vector are comparing it to the repositories listed below.
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆96 · Updated 8 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆84 · Updated 2 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆132 · Updated 9 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆132 · Updated 3 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆80 · Updated 9 months ago
- MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models ☆30 · Updated 6 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆58 · Updated 2 months ago
- More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆43 · Updated 2 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆70 · Updated 3 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆110 · Updated last month
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆68 · Updated 2 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆105 · Updated last month
- ☆27 · Updated 2 months ago
- [ICML 2025] Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in…" ☆149 · Updated last month
- A Survey on Benchmarks of Multimodal Large Language Models ☆126 · Updated last month
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆187 · Updated 3 weeks ago
- ☆26 · Updated 6 months ago
- [MM 2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆56 · Updated last year
- ☆50 · Updated 2 weeks ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆79 · Updated 8 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆77 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated 9 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 6 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆78 · Updated 5 months ago
- Doodling our way to AGI ✏️ 🖼️ 🧠 ☆86 · Updated 2 months ago
- ☆78 · Updated last year
- ☆95 · Updated 4 months ago
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆134 · Updated last year
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆125 · Updated last week
- VLM2-Bench [ACL 2025 Main]: A Closer Look at How Well VLMs Implicitly Link Explicit Matching Visual Cues ☆41 · Updated 2 months ago