☆54 · updated Jan 17, 2025
Alternatives and similar repositories for locality-alignment
Users interested in locality-alignment are comparing it to the repositories listed below.
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding (☆50 · updated Jan 14, 2025)
- [TACL] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study (☆16 · updated Nov 22, 2024)
- Fully Open Framework for Democratized Multimodal Reinforcement Learning (☆43 · updated Dec 19, 2025)
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data (☆13 · updated Sep 30, 2023)
- [NeurIPS 2024] What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights (☆27 · updated Oct 28, 2024)
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" (☆58 · updated Aug 15, 2025)
- [ICLR 2025] Official PyTorch implementation of MMR: A Large-scale Benchmark Dataset for Multi-target and Multi-granularity Reasoning Segm… (☆24 · updated Apr 3, 2025)
- (☆20 · updated this week)
- A Novel Semantic Segmentation Network using Enhanced Boundaries in Cluttered Scenes (WACV 2025) (☆12 · updated Aug 11, 2025)
- [CVPR 2024] PairAug: What Can Augmented Image-Text Pairs Do for Radiology? (☆29 · updated Nov 11, 2024)
- [ICLR 2025] Reconstructive Visual Instruction Tuning (☆136 · updated Apr 9, 2025)
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions (☆138 · updated May 8, 2025)
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… (☆164 · updated Sep 27, 2025)
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" (☆33 · updated Oct 12, 2024)
- (☆28 · updated Jan 15, 2026)
- Radiology Language Evaluations (☆11 · updated Nov 17, 2023)
- (☆11 · updated Jul 31, 2022)
- (☆10 · updated Jul 5, 2024)
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025) (☆28 · updated Jan 26, 2025)
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) (☆37 · updated Nov 12, 2025)
- [CVPR 2024] NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation (☆16 · updated Oct 19, 2024)
- (☆10 · updated Apr 2, 2023)
- [NeurIPS 2022] Official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… (☆86 · updated Oct 29, 2023)
- (☆14 · updated Sep 11, 2025)
- Can 3D Vision-Language Models Truly Understand Natural Language? (☆20 · updated Mar 28, 2024)
- Code for the paper "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurI… (☆94 · updated Apr 29, 2024)
- [MICCAI 2025] Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology (☆12 · updated Jun 17, 2025)
- [CVPRW 2025] Official repository of the paper "Towards Evaluating the Robustness of Visual State Space Models" (☆26 · updated Jun 8, 2025)
- PyTorch code for the CVPR 2023 paper "ConStruct-VL: Data-Free Continual Structured VL Concepts Learning" (☆14 · updated Feb 5, 2024)
- Exploring prompt tuning with pseudo-labels for multiple modalities, learning settings, and training strategies (☆49 · updated Nov 8, 2024)
- Implementation of "Learning without Prejudices: Continual Unbiased Learning via Benign and Malignant Forgetting" (ICLR 2023) (☆13 · updated Apr 14, 2023)
- A new multi-task learning framework using Vision Transformers (☆11 · updated Jun 19, 2024)
- Preference Learning for LLaVA (☆59 · updated Nov 9, 2024)