mengchuang123 / VASparse-github
[CVPR 2025] VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification
☆23 · Updated last week
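As a rough illustration of what "visual-aware token sparsification" can mean in practice, the sketch below scores vision tokens by their relevance to a pooled text query and keeps only the top fraction. This is a generic, hypothetical example, not the VASparse implementation: the function name, tensor shapes, dot-product scoring, and keep ratio are all assumptions made for illustration.

```python
import torch


def sparsify_visual_tokens(visual_tokens: torch.Tensor,
                           text_query: torch.Tensor,
                           keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the visual tokens most relevant to a pooled text query, drop the rest.

    visual_tokens: (batch, num_visual_tokens, dim)
    text_query:    (batch, dim) pooled text representation
    """
    # Relevance score: dot product between each visual token and the text query.
    scores = torch.einsum("bnd,bd->bn", visual_tokens, text_query)

    # Keep the top-k tokens per sample (at least one).
    num_keep = max(1, int(visual_tokens.size(1) * keep_ratio))
    top_idx = scores.topk(num_keep, dim=1).indices  # (batch, num_keep)

    # Gather the selected tokens via batched indexing.
    batch_idx = torch.arange(visual_tokens.size(0)).unsqueeze(1)
    return visual_tokens[batch_idx, top_idx]


if __name__ == "__main__":
    v = torch.randn(2, 576, 1024)   # e.g. 576 ViT patch tokens (illustrative shapes)
    q = torch.randn(2, 1024)
    print(sparsify_visual_tokens(v, q, keep_ratio=0.25).shape)  # torch.Size([2, 144, 1024])
```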
Alternatives and similar repositories for VASparse-github:
Users interested in VASparse-github are comparing it to the libraries listed below.
- [ECCV 2024] Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models ☆16 · Updated 8 months ago
- Official PyTorch implementation of "RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language… ☆10 · Updated 3 months ago
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ☆20 · Updated last month
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆27 · Updated 8 months ago
- [IJCAI 2023] Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment ☆51 · Updated 11 months ago
- Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation". ☆22 · Updated last month
- p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay ☆32 · Updated 2 months ago
- PyTorch Implementation of "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Larg… ☆21 · Updated last month
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆76 · Updated 2 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆85 · Updated 3 months ago
- ☆40 · Updated 2 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆21 · Updated 2 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆33 · Updated last year
- The official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models". ☆35 · Updated last month
- ☆16 · Updated 4 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆28 · Updated last week
- [NeurIPS 2024] Code for Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models ☆36 · Updated 2 weeks ago
- [CVPR 2023] Diversity-Aware Meta Visual Prompting ☆81 · Updated last year
- 🔎 Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation". ☆30 · Updated last week
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆85 · Updated 4 months ago
- [CVPR 2023 Highlight & TPAMI] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning ☆113 · Updated 3 months ago
- Noise of Web (NoW) is a challenging noisy correspondence learning (NCL) benchmark containing 100K image-text pairs for robust image-text … ☆12 · Updated 4 months ago
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆50 · Updated 3 months ago
- ☆45 · Updated 4 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆57 · Updated 9 months ago
- Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal … ☆46 · Updated last month
- ☆56 · Updated last week
- Instruction Tuning in the Continual Learning paradigm ☆44 · Updated last month
- [CVPR 2025] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced evaluation mod… ☆14 · Updated last month
- [NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs" ☆19 · Updated 6 months ago