ys-zong / VL-ICL
[ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning
☆70 · Sep 20, 2025 · Updated 4 months ago
Alternatives and similar repositories for VL-ICL
Users interested in VL-ICL are comparing it to the repositories listed below.
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data · ☆14 · Sep 30, 2023 · Updated 2 years ago
- [ICML 2024] Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations · ☆15 · Oct 28, 2023 · Updated 2 years ago
- [CVPR 2024 Highlight] ImageNet-D · ☆46 · Oct 15, 2024 · Updated last year
- Code for Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models · ☆17 · Oct 4, 2024 · Updated last year
- A simulated dataset of 9,536 charts with associated data annotations in CSV format · ☆26 · Feb 22, 2024 · Updated last year
- Counterfactual Reasoning VQA Dataset · ☆27 · Nov 23, 2023 · Updated 2 years ago
- (untitled) · ☆20 · Apr 23, 2024 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) · ☆96 · Oct 19, 2024 · Updated last year
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" · ☆19 · Feb 14, 2025 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models · ☆36 · Oct 3, 2024 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention · ☆61 · Jul 16, 2024 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions · ☆138 · May 8, 2025 · Updated 9 months ago
- [CVPR 2025] VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding · ☆24 · Mar 24, 2025 · Updated 10 months ago
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" · ☆42 · Jun 2, 2025 · Updated 8 months ago
- (untitled) · ☆46 · Nov 8, 2024 · Updated last year
- Implementation of "Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn" · ☆25 · Jun 19, 2023 · Updated 2 years ago
- [IROS 2025] Adjacent-view Transformers for Supervised Surround-view Depth Estimation · ☆14 · Nov 14, 2025 · Updated 3 months ago
- (untitled) · ☆10 · Jul 5, 2024 · Updated last year
- (untitled) · ☆12 · Dec 20, 2024 · Updated last year
- [CVPR 2025] Official code for the paper "Mimic In-Context Learning for Multimodal Tasks" · ☆24 · Jun 8, 2025 · Updated 8 months ago
- (untitled) · ☆11 · Jan 19, 2025 · Updated last year
- Official repository for the CoMM Dataset · ☆49 · Dec 31, 2024 · Updated last year
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual Question Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… · ☆89 · Jun 24, 2025 · Updated 7 months ago
- [NeurIPS 2024] What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights · ☆28 · Oct 28, 2024 · Updated last year
- (untitled) · ☆27 · Jul 6, 2024 · Updated last year
- [CVPR 2024] Official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" · ☆50 · Jun 16, 2025 · Updated 8 months ago
- Benchmarking Multi-Image Understanding in Vision and Language Models · ☆12 · Jul 29, 2024 · Updated last year
- [ICML 2025] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an… · ☆12 · Apr 17, 2025 · Updated 9 months ago
- (untitled) · ☆11 · Oct 2, 2024 · Updated last year
- (untitled) · ☆13 · Jul 2, 2025 · Updated 7 months ago
- Project for the SNARE benchmark · ☆11 · Jun 5, 2024 · Updated last year
- Official implementation of "Routing Experts: Learning to Route Dynamic Experts in Existing Multi-modal Large Language Models" · ☆17 · Mar 24, 2025 · Updated 10 months ago
- (untitled) · ☆25 · Nov 22, 2024 · Updated last year
- Implementation of the paper "Are We Done with Object-Centric Learning?" · ☆12 · Sep 11, 2025 · Updated 5 months ago
- [ACL 2025 Main] I0T: Embedding Standardization Method Towards Zero Modality Gap · ☆12 · Jun 18, 2025 · Updated 7 months ago
- (untitled) · ☆29 · Oct 18, 2022 · Updated 3 years ago
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS 2024] · ☆61 · Dec 10, 2024 · Updated last year
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" · ☆33 · Mar 15, 2024 · Updated last year
- The SVO-Probes Dataset for Verb Understanding · ☆31 · Jan 28, 2022 · Updated 4 years ago