aimagelab / ReflectiVA
[CVPR 2025] Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering
☆35 · Updated 2 months ago
Alternatives and similar repositories for ReflectiVA
Users interested in ReflectiVA are comparing it to the repositories listed below.
- (ICML 2024) Improve Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning ☆27 · Updated 9 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆54 · Updated last year
- [CVPR 2025] Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval ☆16 · Updated 2 months ago
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆53 · Updated 7 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 9 months ago
- The official implementation for Candidate Set Re-ranking for Composed Image Retrieval (TMLR 01/2024) ☆19 · Updated last year
- The official implementation for BLIP4CIR with bi-directional training | Bi-directional Training for Composed Image Retrieval via Text Pro… ☆31 · Updated last year
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆83 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆36 · Updated 11 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 3 months ago
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge ☆63 · Updated last week
- Composed Video Retrieval ☆58 · Updated last year
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆48 · Updated 10 months ago
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆21 · Updated 3 months ago
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆21 · Updated 5 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 7 months ago
- [BMVC 2024 Oral ✨] Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization ☆18 · Updated 9 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆33 · Updated 2 months ago
- ✨ A curated list of papers on uncertainty in multi-modal large language models (MLLMs) ☆48 · Updated 2 months ago
- VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆29 · Updated 2 weeks ago
- [CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation" ☆33 · Updated last month
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- [CVPR 2023] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆62 · Updated 3 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆42 · Updated 3 weeks ago
- NegCLIP ☆32 · Updated 2 years ago
- [ICCV 2023] Composed Image Retrieval on Common Objects in Context (CIRCO) dataset ☆69 · Updated 10 months ago
- ☆50 · Updated last year
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆179 · Updated last month
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆38 · Updated 11 months ago