jiaangli / VLCA
Do Vision and Language Models Share Concepts? A Vector Space Alignment Study
☆14 · Updated 6 months ago
Alternatives and similar repositories for VLCA
Users who are interested in VLCA are comparing it to the repositories listed below.
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆16 · Updated 7 months ago
- Implementation of CounterCurate, the data curation pipeline for both physical and semantic counterfactual image-caption pairs. ☆18 · Updated 11 months ago
- ☆10 · Updated 7 months ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆20 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆56 · Updated 2 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation" (ICML 2023) ☆33 · Updated last year
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated 10 months ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆27 · Updated last week
- Distributed optimization infra for learning CLIP models ☆26 · Updated 8 months ago
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆31 · Updated 7 months ago
- Official repository of Personalized Visual Instruct Tuning ☆28 · Updated 3 months ago
- ☆32 · Updated last year
- Code and data for the ACL 2024 paper "Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space" ☆15 · Updated 10 months ago
- ☆18 · Updated 10 months ago
- ABC: Achieving Better Control of Multimodal Embeddings using VLMs ☆12 · Updated 2 months ago
- ☆18 · Updated 10 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆74 · Updated 11 months ago
- Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models ☆15 · Updated last month
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆67 · Updated last year
- 🔥 [ICLR 2025] Official PyTorch model for "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ☆15 · Updated 3 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) ☆73 · Updated last week
- LCA-on-the-line (ICML 2024 Oral) ☆11 · Updated 3 months ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆34 · Updated 9 months ago
- ☆19 · Updated last year
- ☆24 · Updated last year
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆13 · Updated 5 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 6 months ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated 8 months ago
- ☆11 · Updated 2 months ago