deepcs233 / Visual-CoT
[NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning
☆135 · Updated last month
Related projects
Alternatives and complementary repositories for Visual-CoT
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆148 · Updated last month
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆96 · Updated last week
- Official implementation of the Law of Vision Representation in MLLMs ☆131 · Updated this week
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆135 · Updated 6 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strategy ☆72 · Updated 7 months ago
- An RLHF Infrastructure for Vision-Language Models ☆104 · Updated this week
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆210 · Updated last month
- Visualizing the attention of vision-language models ☆72 · Updated 3 weeks ago
- ✨✨ MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆78 · Updated last week
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆43 · Updated 5 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆66 · Updated 9 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆73 · Updated 7 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆73 · Updated 6 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆115 · Updated last week
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models ☆246 · Updated last week
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models ☆231 · Updated last month
- A collection of visual instruction tuning datasets. ☆76 · Updated 8 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆100 · Updated 6 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆255 · Updated 8 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆182 · Updated 7 months ago
- Official repository of the MMDU dataset ☆75 · Updated last month
- A Survey on Benchmarks of Multimodal Large Language Models ☆64 · Updated last month
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆59 · Updated 5 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆72 · Updated 2 weeks ago
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆80 · Updated 5 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆132 · Updated last month
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆95 · Updated 4 months ago
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft ☆39 · Updated 4 months ago