yongliang-wu / ExploreCfg
[NeurIPS2023] Exploring Diverse In-Context Configurations for Image Captioning
☆37 · Updated 5 months ago
Alternatives and similar repositories for ExploreCfg
Users interested in ExploreCfg are comparing it to the repositories listed below
- ☆117 · Updated 3 months ago
- [AAAI2025] Video Repurposing from User Generated Content: A Large-scale Dataset and Benchmark ☆11 · Updated last month
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆24 · Updated this week
- Evolving Temporal Reasoning Capability into LMMs via Temporal Consistent Reward ☆35 · Updated last month
- [CVPR 2024] How to Configure Good In-Context Sequence for Visual Question Answering ☆17 · Updated 8 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 · Updated 10 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆87 · Updated 5 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆207 · Updated last year
- [AAAI2025] Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient ☆30 · Updated last month
- [CVPR2025] Number it: Temporal Grounding Videos like Flipping Manga ☆79 · Updated last month
- ☆17 · Updated 5 months ago
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆109 · Updated this week
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆69 · Updated 9 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆146 · Updated last year
- The Code for Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models ☆14 · Updated 7 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆46 · Updated last year
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆28 · Updated last week
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆56 · Updated 10 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆83 · Updated last year
- A paper list on LLMs and multimodal LLMs ☆38 · Updated this week
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆41 · Updated last year
- Official repository for the A-OKVQA dataset ☆84 · Updated last year
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆45 · Updated last month
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge. ☆61 · Updated 2 months ago
- [CVPR25] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced evaluation mod… ☆16 · Updated last month
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆49 · Updated last year
- [NeurIPS2024] Repo for the paper 'ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models' ☆167 · Updated 3 months ago
- [CVPR'25] Interleaved-Modal Chain-of-Thought ☆39 · Updated 3 weeks ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆192 · Updated 3 weeks ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆116 · Updated 6 months ago