OPENCap / awesome-unimodal-training
Text-only (language-free) training for multimodal tasks: image/audio/video captioning, retrieval, and text-to-image generation.
☆11 · Updated 9 months ago
Alternatives and similar repositories for awesome-unimodal-training
Users interested in awesome-unimodal-training are comparing it to the libraries listed below.
- Mitigating Open-Vocabulary Caption Hallucinations (EMNLP 2024) ☆17 · Updated 9 months ago
- ☆11 · Updated 6 months ago
- An in-context learning research testbed ☆19 · Updated 4 months ago
- ☆11 · Updated 10 months ago
- Repo for the paper "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models" ☆12 · Updated 9 months ago
- A hot-pluggable tool for visualizing LLaVA's attention ☆22 · Updated last year
- [EMNLP'24 Main] Encoding and Controlling Global Semantics for Long-form Video Question Answering ☆19 · Updated 10 months ago
- Code for "Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models" ☆16 · Updated 10 months ago
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆26 · Updated 7 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated 9 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆54 · Updated last year
- [ECCV'24] Official implementation of Autoregressive Visual Entity Recognizer ☆14 · Updated last year
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆110 · Updated 3 weeks ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆36 · Updated 4 months ago
- Croc: Pretraining Large Multimodal Models with Cross-Modal Comprehension ☆24 · Updated 9 months ago
- [ACL'24 Findings] Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives ☆40 · Updated last month
- [IJCAI 2023] Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment ☆52 · Updated last year
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆36 · Updated last month
- TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models ☆16 · Updated 7 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year
- [EMNLP 2022] Official PyTorch code for "Modal-specific Pseudo Query Generation for Video Corpus Moment Retrieval" ☆10 · Updated last year
- ☆28 · Updated 9 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆56 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆69 · Updated 3 months ago
- Code and data for the ACL 2024 paper "Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space" ☆16 · Updated last year
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆46 · Updated last month
- [EMNLP'23] Official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆87 · Updated last year
- ☆10 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆132 · Updated 9 months ago
- [WACV 2025] Official PyTorch code for "Background-aware Moment Detection for Video Moment Retrieval" ☆16 · Updated 5 months ago