hanmenghan / Skip-n
This repository contains the code for our paper "Skip \n: A Simple Method to Reduce Hallucination in Large Vision-Language Models".
☆11 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for Skip-n
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆20 · Updated 9 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆29 · Updated last month
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆19 · Updated 2 weeks ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆56 · Updated last year
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆36 · Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆17 · Updated 2 months ago
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆34 · Updated 8 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆27 · Updated 8 months ago
- Official code release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆32 · Updated last year
- [NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs" ☆17 · Updated last month
- [CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting, from the paper "Exploring the Transferability of Visual Prompt…" ☆32 · Updated 4 months ago
- VisualGPTScore for visio-linguistic reasoning ☆26 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆42 · Updated 4 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆22 · Updated 5 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (CVPR 2024) ☆41 · Updated 3 months ago
- [ACL 2023] Delving into the Openness of CLIP ☆23 · Updated last year
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆16 · Updated 3 weeks ago
- Official code of "Towards Event-oriented Long Video Understanding" ☆11 · Updated 3 months ago
- Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models (ACL Findings 2024) ☆13 · Updated 6 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆40 · Updated 3 months ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" ☆31 · Updated 7 months ago
- Official repo of the paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆20 · Updated last month
- NegCLIP ☆26 · Updated last year
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models" ☆32 · Updated last year
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆18 · Updated last week
- Code for the paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆18 · Updated 2 years ago