ytaek-oh / fsc-clip
[EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality
☆20 · Updated last year
Alternatives and similar repositories for fsc-clip
Users interested in fsc-clip are comparing it to the repositories listed below.
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 · Updated last year
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆24 · Updated 8 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆33 · Updated 9 months ago
- ☆11 · Updated last year
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆31 · Updated 4 months ago
- Official Implementation for "SiLVR: A Simple Language-based Video Reasoning Framework" ☆19 · Updated 3 months ago
- ☆15 · Updated last year
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated last year
- [CVPR 2025] DiscoVLA: Discrepancy Reduction in Vision, Language, and Alignment for Parameter-Efficient Video-Text Retrieval ☆22 · Updated 5 months ago
- ☆15 · Updated last month
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated last year
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆34 · Updated 3 weeks ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 9 months ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆19 · Updated last month
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation