iOPENCap / awesome-unimodal-training
text-only training or language-free training for multimodal tasks (image/audio/video caption, retrieval, text2image)
☆11 · Updated last year
Alternatives and similar repositories for awesome-unimodal-training
Users interested in awesome-unimodal-training are comparing it to the libraries listed below.
- Mitigating Open-Vocabulary Caption Hallucinations (EMNLP 2024) ☆17 · Updated last year
- ☆11 · Updated 10 months ago
- ☆11 · Updated last year
- TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models ☆18 · Updated 10 months ago
- The Code for Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models ☆16 · Updated last year
- An in-context learning research testbed ☆19 · Updated 8 months ago
- [EMNLP'24 Main] Encoding and Controlling Global Semantics for Long-form Video Question Answering ☆18 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆60 · Updated last year
- A hot-pluggable tool for visualizing LLaVA's attention ☆24 · Updated last year
- [ECCV'24] Official Implementation of Autoregressive Visual Entity Recognizer ☆14 · Updated last year
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆39 · Updated 2 months ago
- Official Repo for FoodieQA paper (EMNLP 2024) ☆16 · Updated 5 months ago
- [ACL'24 Findings] Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives ☆44 · Updated 4 months ago
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs ☆25 · Updated 3 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated last year
- Repo for the paper "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models" ☆12 · Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 4 months ago
- The official implementation of RGNet: A Unified Retrieval and Grounding Network for Long Videos ☆17 · Updated 8 months ago
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆66 · Updated last year
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ☆23 · Updated 9 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆121 · Updated 2 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆62 · Updated last year
- The official implementation of "Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval", accepted by NeurIPS… ☆27 · Updated last year
- [IJCAI 2023] Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment ☆53 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆151 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆98 · Updated 3 months ago
- ☆12 · Updated last year
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆63 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆49 · Updated last year
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆52 · Updated 5 months ago