iOPENCap / awesome-unimodal-training
text-only training or language-free training for multimodal tasks (image/audio/video caption, retrieval, text2image)
☆11 · Updated last year
Alternatives and similar repositories for awesome-unimodal-training
Users interested in awesome-unimodal-training are comparing it to the libraries listed below.
- ☆11 · Updated 11 months ago
- Mitigating Open-Vocabulary Caption Hallucinations (EMNLP 2024) ☆19 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆58 · Updated last year
- ☆11 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆60 · Updated last year
- TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models ☆19 · Updated last year
- Repo for paper "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models". ☆12 · Updated last year
- [EMNLP'24 Main] Encoding and Controlling Global Semantics for Long-form Video Question Answering ☆18 · Updated last year
- Official implementation of paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆38 · Updated 9 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆75 · Updated 5 months ago
- [IJCAI 2023] Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment ☆53 · Updated last year
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆182 · Updated 3 months ago
- Code and data for ACL 2024 paper on 'Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space' ☆17 · Updated last year
- A hot-pluggable tool for visualizing LLaVA's attention. ☆24 · Updated last year
- ☆71 · Updated last year
- An in-context learning research testbed ☆19 · Updated 9 months ago
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆44 · Updated last year
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆39 · Updated 4 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆143 · Updated 4 months ago
- [ECCV'24] Official Implementation of Autoregressive Visual Entity Recognizer ☆14 · Updated last year
- ☆12 · Updated last year
- [ACL'24 Findings] Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives ☆47 · Updated 6 months ago
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆31 · Updated last year
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs ☆25 · Updated 4 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆180 · Updated last year
- [ICLR 2024] The official implementation of paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … ☆77 · Updated last year
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆34 · Updated last month
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆66 · Updated last year
- Official repo for the FoodieQA paper (EMNLP 2024) ☆17 · Updated 6 months ago
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆22 · Updated last year