kaistAI / Volcano
[NAACL 2024] Vision-language model that reduces hallucinations through self-feedback-guided revision (sketched below). Also visualizes attention over image features for interpretability.
☆44 · Updated 8 months ago
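The self-feedback mechanism is a generate, critique, revise, and select loop. The sketch below illustrates that general pattern only, not the repository's actual code: the `model.generate(image, prompt)` interface, the `self_feedback_revise` helper, and all prompt strings are hypothetical stand-ins, not Volcano's real API or prompt templates.

```python
# Minimal sketch of a Volcano-style self-feedback revision loop.
# Assumptions (not from the repo): `model.generate(image, prompt) -> str`
# is a hypothetical chat-style VLM interface; prompts are paraphrased.

def self_feedback_revise(model, image, question, max_iters=3):
    answer = model.generate(image, question)
    for _ in range(max_iters):
        # 1. The model critiques its own answer against the image.
        feedback = model.generate(
            image,
            f"Question: {question}\nAnswer: {answer}\n"
            "Point out any claims in the answer not grounded in the image.",
        )
        # 2. The model revises the answer conditioned on that feedback.
        revised = model.generate(
            image,
            f"Question: {question}\nAnswer: {answer}\nFeedback: {feedback}\n"
            "Rewrite the answer so every claim is grounded in the image.",
        )
        # 3. Keep the revision only if the model itself judges it better;
        #    otherwise stop iterating.
        verdict = model.generate(
            image,
            f"Question: {question}\nA: {answer}\nB: {revised}\n"
            "Which answer is better grounded in the image? Reply 'A' or 'B'.",
        )
        if verdict.strip().upper().startswith("B"):
            answer = revised
        else:
            break
    return answer
```

The accept-or-stop check in step 3 reflects the idea that a revision should replace the original answer only when the model judges it better grounded in the image.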
Alternatives and similar repositories for Volcano
Users interested in Volcano are comparing it to the repositories listed below.
- Preference Learning for LLaVA ☆44 · Updated 6 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆73 · Updated 11 months ago
- ☆28 · Updated 2 months ago
- ☆24 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆73 · Updated 6 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆42 · Updated 6 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆53 · Updated last week
- Multimodal RewardBench ☆39 · Updated 2 months ago
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆38 · Updated last month
- ☆51 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆87 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆65 · Updated 11 months ago
- [ACL 2024] Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective ☆48 · Updated 6 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆44 · Updated 6 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆55 · Updated 8 months ago
- A Comprehensive Benchmark for Robust Multi-image Understanding ☆11 · Updated 8 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆83 · Updated last year
- [ACL 2024, Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆64 · Updated 8 months ago
- ☆99 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆52 · Updated last month
- ☆75 · Updated 4 months ago
- [ICML 2025] GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" ☆22 · Updated this week
- ☆18 · Updated 10 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 6 months ago
- ☆41 · Updated 4 months ago
- ☆25 · Updated 6 months ago
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- [CVPR 2024] HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data ☆45 · Updated 9 months ago
- The codebase for our EMNLP 2024 paper "Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model" ☆79 · Updated 3 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆56 · Updated 6 months ago