iOPENCap / awesome-unimodal-training
Text-only (language-free) training for multimodal tasks: image/audio/video captioning, retrieval, and text-to-image generation
☆11 · Updated 6 months ago
Alternatives and similar repositories for awesome-unimodal-training:
Users interested in awesome-unimodal-training are comparing it to the repositories listed below.
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆87 · Updated 3 months ago
- ☆11 · Updated 3 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆49 · Updated last year
- Official implementation of ReVisionLLM: Recursive Vision-Language Model for Temporal Grounding in Hour-Long Videos ☆14 · Updated last month
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆63 · Updated 10 months ago
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆98 · Updated 4 months ago
- 🔥 Omni large models and datasets for understanding and generating multiple modalities ☆15 · Updated 6 months ago
- A hot-pluggable tool for visualizing LLaVA's attention ☆15 · Updated last year
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆19 · Updated last week
- Official repository for "Boosting Audio Visual Question Answering via Key Semantic-Aware Cues" (ACM MM 2024) ☆16 · Updated 6 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆46 · Updated 5 months ago
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆34 · Updated 9 months ago
- ☆12 · Updated 3 months ago
- [EMNLP 2023] Official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆82 · Updated last year
- Official implementation of RGNet: A Unified Retrieval and Grounding Network for Long Videos ☆15 · Updated last month
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆30 · Updated 6 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆52 · Updated 9 months ago
- Official code and data for "Beyond Literal Descriptions: Understanding and Locating Open-World Objects Aligned with H…" ☆17 · Updated 11 months ago
- A curated list of zero-shot captioning papers ☆22 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆58 · Updated 4 months ago
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆55 · Updated 10 months ago
- [ECCV 2024] Official implementation of CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆52 · Updated 7 months ago
- ☆11 · Updated 6 months ago
- Official implementation of "QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehens…" ☆68 · Updated last month
- Evaluation code for "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆24 · Updated 4 months ago
- Official PyTorch implementation of "Improved Probabilistic Image-Text Representations" (ICLR 2024) ☆58 · Updated 11 months ago
- Awesome multi-modal large language model papers and projects, with collections of popular training strategies, e.g., PEFT, LoRA ☆27 · Updated 8 months ago
- Official PyTorch repository for "TR-DETR: Task-Reciprocal Transformer for Joint Moment Retrieval and Highlight Detection" (AAAI 2024 Pape… ☆45 · Updated 2 months ago
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" ☆39 · Updated last year
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆46 · Updated 8 months ago