aimagelab / ReT
[CVPR 2025] Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval
☆15 · Updated 2 months ago
Alternatives and similar repositories for ReT
Users interested in ReT are comparing it to the repositories listed below.
- [BMVC 2024 Oral ✨] Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization ☆18 · Updated 8 months ago
- [CVPR 2025] Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering ☆33 · Updated 2 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 2 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆28 · Updated last year
- [CVPR 2025] Few-shot Recognition via Stage-Wise Retrieval-Augmented Finetuning ☆18 · Updated 2 months ago
- [CVPR 2025] Hyperbolic Safety-Aware Vision-Language Models ☆16 · Updated 2 months ago
- Official Implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 ☆53 · Updated last year
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆66 · Updated last month
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆34 · Updated 10 months ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆83 · Updated 7 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆43 · Updated 4 months ago
- [CVPR 2023] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆61 · Updated 3 months ago
- [CVPRW-25 MMFM] Official repository of paper titled "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…" ☆48 · Updated 9 months ago
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion ☆45 · Updated last month
- [CVPR 2024] FreeDA: Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation ☆45 · Updated 9 months ago
- [ICLR SSI-FM 2025] Detail-Oriented CLIP for Fine-Grained Tasks ☆49 · Updated 2 months ago
- Visual self-questioning for large vision-language assistants ☆41 · Updated 8 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆56 · Updated 2 weeks ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated 11 months ago
- Code for studying OpenAI's CLIP explainability ☆31 · Updated 3 years ago
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆19 · Updated 2 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆81 · Updated last year
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆54 · Updated 6 months ago
- Official code for paper "GRIT: Teaching MLLMs to Think with Images" ☆64 · Updated this week
- cliptrase ☆36 · Updated 9 months ago
- [CVPR 2024] PyTorch implementation of Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆43 · Updated last month
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆88 · Updated 7 months ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆91 · Updated last week
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆56 · Updated 4 months ago