Jiaxuan-Li / EVCap
[CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension
☆57 · Updated last year
Alternatives and similar repositories for EVCap
Users interested in EVCap are comparing it to the repositories listed below.
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆51 · Updated last year
- NegCLIP ☆37 · Updated 2 years ago
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆54 · Updated 11 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 3 months ago
- 【ICLR 2024, Spotlight】Sentence-level Prompts Benefit Composed Image Retrieval ☆89 · Updated last year
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation ☆124 · Updated last year
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆60 · Updated last year
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆37 · Updated last year
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆76 · Updated 7 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆59 · Updated last year
- Code for paper "LLMs Can Evolve Continually on Modality for X-Modal Reasoning" (NeurIPS 2024) ☆37 · Updated 10 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆65 · Updated last month
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆149 · Updated last year
- [CVPR 2023 Highlight & TPAMI] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning ☆121 · Updated 9 months ago
- ☆69 · Updated last year
- [CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation" ☆49 · Updated 5 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆84 · Updated last year
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆82 · Updated last year
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆42 · Updated last year
- ☆20 · Updated 2 months ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆50 · Updated 3 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆52 · Updated 6 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆130 · Updated 2 months ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024)☆91Updated last year
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning☆46Updated 6 months ago
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models"☆139Updated last year
- [CVPRW-25 MMFM] Official repository of paper titled "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…☆50Updated last year
- [EMNLP'23] The official GitHub page for ''Evaluating Object Hallucination in Large Vision-Language Models''☆93Updated 2 months ago
- ☆59Updated 2 years ago