Jiaxuan-Li / EVCap
[CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension
☆60 · Updated last year
Alternatives and similar repositories for EVCap
Users interested in EVCap are comparing it to the repositories listed below.
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆54 · Updated last year
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 4 months ago
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆53 · Updated last year
- 【ICLR 2024, Spotlight】Sentence-level Prompts Benefit Composed Image Retrieval ☆91 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆50 · Updated 4 months ago
- NegCLIP ☆38 · Updated 2 years ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆38 · Updated last year
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆77 · Updated 8 months ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆93 · Updated last year
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆66 · Updated 2 months ago
- Code for the paper "LLMs Can Evolve Continually on Modality for X-Modal Reasoning" (NeurIPS 2024) ☆40 · Updated 11 months ago
- Composed Video Retrieval ☆61 · Updated last year
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆22 · Updated 11 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights ☆55 · Updated 8 months ago
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆63 · Updated 3 months ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆83 · Updated last year
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated last month
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆53 · Updated 7 months ago
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆44 · Updated last year
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆108 · Updated 6 months ago
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆63 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆151 · Updated last year
- [CVPR 2024] Improving language-visual pre-training efficiency by performing cluster-based masking on images ☆29 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆138 · Updated 3 months ago
- The official implementation of RAR ☆92 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 6 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated last year
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆142 · Updated last year