chancharikmitra / SAVs
Official Codebase for "Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers"
☆16 · Updated 2 months ago
Alternatives and similar repositories for SAVs
Users interested in SAVs are comparing it to the repositories listed below.
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆33 · Updated last month
- ☆91 · Updated last year
- ☆45 · Updated 7 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆69 · Updated 6 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆78 · Updated 9 months ago
- ☆119 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆91 · Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆85 · Updated 2 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆81 · Updated 5 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 9 months ago
- ☆31 · Updated last year
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 4 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 2 months ago
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆30 · Updated 4 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆129 · Updated 5 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆165 · Updated 10 months ago
- Official repository for CoMM Dataset ☆45 · Updated 7 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆35 · Updated last month
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆100 · Updated 2 months ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated 11 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆30 · Updated 9 months ago
- Visual self-questioning for large vision-language assistant. ☆42 · Updated 2 weeks ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆47 · Updated last year
- ☆85 · Updated 7 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆55 · Updated 9 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆72 · Updated this week
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆135 · Updated 3 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆163 · Updated 8 months ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆36 · Updated 4 months ago