foundation-multimodal-models / ConBench
[NeurIPS'24] Official implementation of paper "Unveiling the Tapestry of Consistency in Large Vision-Language Models".
☆38 · Updated last year
Alternatives and similar repositories for ConBench
Users interested in ConBench are comparing it to the repositories listed below.
- Official repository for CoMM Dataset ☆48 · Updated 11 months ago
- ☆120 · Updated last year
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆51 · Updated 11 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆39 · Updated 8 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆129 · Updated 8 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights ☆55 · Updated 8 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated last year
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models ☆60 · Updated 6 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆156 · Updated 2 months ago
- Dataset pruning for ImageNet and LAION-2B ☆79 · Updated last year
- [ICLR 2025] γ-MoD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆40 · Updated last month
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆65 · Updated last year
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆134 · Updated 9 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 · Updated 9 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆45 · Updated 2 months ago
- A collection of visual instruction tuning datasets ☆76 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers ☆34 · Updated 11 months ago
- The official implementation of "MLLMs-Augmented Visual-Language Representation Learning" ☆31 · Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆96 · Updated 4 months ago
- ☆124 · Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 4 months ago
- [NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs" ☆20 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆158 · Updated last year
- Pruning the VLLMs ☆106 · Updated last year
- [NeurIPS'24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ☆111 · Updated last year
- [ICCV 2025] p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay ☆43 · Updated 5 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆146 · Updated last month
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆110 · Updated last month
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆51 · Updated 11 months ago