HKUST-LongGroup / DyME
Empowering Small VLMs to Think with Dynamic Memorization and Exploration
☆15 · Updated last month
Alternatives and similar repositories for DyME
Users interested in DyME are comparing it to the repositories listed below
- [NeurIPS 2023] Implementation of Foundation Model is Efficient Multimodal Multitask Model Selector ☆37 · Updated last year
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆29 · Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆23 · Updated last year
- An efficient tuning method for VLMs ☆80 · Updated last year
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆46 · Updated 10 months ago
- ☆24 · Updated 5 months ago
- [CVPR 2024 Highlight] ImageNet-D ☆44 · Updated last year
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆46 · Updated 2 years ago
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆46 · Updated 4 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆59 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆45 · Updated 11 months ago
- ☆26 · Updated 2 years ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023 ☆33 · Updated 2 years ago
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆21 · Updated 11 months ago
- Benchmarking Multi-Image Understanding in Vision and Language Models ☆12 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆50 · Updated last year
- This repository contains the code of the paper "Skip \n: A Simple Method to Reduce Hallucination in Large Vision-Language Models" ☆14 · Updated last year
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning ☆26 · Updated 3 years ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated last month
- Official code base for "Long-Tailed Diffusion Models With Oriented Calibration", ICLR 2024 ☆14 · Updated last year
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆37 · Updated 7 months ago
- Official code of "Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation", NeurIPS 2023 ☆26 · Updated last year
- [ECCV 2024, NeurIPS 2024] Benchmarking Generalized Out-of-Distribution Detection with Vision-Language Models ☆28 · Updated 10 months ago
- ☆37 · Updated last year
- ☆53 · Updated 10 months ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆23 · Updated 6 months ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆49 · Updated 5 months ago
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" ☆32 · Updated 4 months ago
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆52 · Updated 2 years ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆38 · Updated 6 months ago