bardisafa / PreSel
[CVPR 2025] An implementation of the paper "Pre-Instruction Data Selection for Visual Instruction Tuning"
☆16 · Updated 5 months ago
Alternatives and similar repositories for PreSel
Users interested in PreSel are comparing it to the repositories listed below.
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆46 · Updated 11 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆65 · Updated 2 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆37 · Updated last year
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆83 · Updated last year
- ☆21 · Updated last year
- [ACM Multimedia 2025] The official repo for Debiasing Large Visual Language Models, including a post-hoc debias method and Visual… ☆82 · Updated 9 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆50 · Updated last year
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆108 · Updated last year
- ☆60 · Updated last month
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆106 · Updated last year
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models ☆95 · Updated last month
- Official implementation for the CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning" ☆110 · Updated 2 years ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆29 · Updated last year
- [NeurIPS '24] Frustratingly easy Test-Time Adaptation of VLMs! ☆57 · Updated 8 months ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆54 · Updated last year
- Official code for the paper "Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-… ☆21 · Updated 6 months ago
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion ☆56 · Updated 7 months ago
- ☆27 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆49 · Updated last year
- Dataset pruning for ImageNet and LAION-2B ☆79 · Updated last year
- Augmenting with Language-guided Image Augmentation (ALIA) ☆80 · Updated 2 years ago
- 🌋👵🏻 Yo'LLaVA: Your Personalized Language and Vision Assistant ☆118 · Updated 8 months ago
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆77 · Updated 5 months ago
- [ICLR 2024] Real-Fake: Effective Training Data Synthesis Through Distribution Matching ☆78 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆92 · Updated last year
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆23 · Updated 7 months ago
- ☆25 · Updated 4 months ago
- Visual self-questioning for large vision-language assistants ☆45 · Updated 4 months ago
- ☆21 · Updated 7 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆53 · Updated 7 months ago