tsunghan-wu / reverse_vlm
🔥 Official implementation of "Generate, but Verify: Reducing Visual Hallucination in Vision-Language Models with Retrospective Resampling"
★43 · Updated 2 months ago
Alternatives and similar repositories for reverse_vlm
Users interested in reverse_vlm are comparing it to the libraries listed below.
- Official implementation of MIA-DPO ★65 · Updated 7 months ago
- Official repository of 'ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing' ★56 · Updated 2 months ago
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ★31 · Updated 10 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ★155 · Updated last month
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph. ★28 · Updated last month
- ★14 · Updated 6 months ago
- The code repository of UniRL ★40 · Updated 3 months ago
- Official Repository of Personalized Visual Instruct Tuning ★32 · Updated 6 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ★80 · Updated last year
- 🔥 [ICLR 2025] Official PyTorch Model "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ★19 · Updated 7 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ★86 · Updated last year
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ★117 · Updated 3 weeks ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ★91 · Updated 11 months ago
- [CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ★189 · Updated 3 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ★43 · Updated 8 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ★63 · Updated 2 months ago
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ★24 · Updated 5 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ★139 · Updated last year
- ★37 · Updated last week
- ★74 · Updated 2 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ★32 · Updated 11 months ago
- Official PyTorch Code of ReKV (ICLR'25) ★49 · Updated 6 months ago
- ★33 · Updated 10 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ★65 · Updated 3 months ago
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models