foundation-multimodal-models / ConBench
[NeurIPS'24] Official implementation of paper "Unveiling the Tapestry of Consistency in Large Vision-Language Models".
☆34 · Updated 3 months ago
Alternatives and similar repositories for ConBench:
Users interested in ConBench are comparing it to the libraries listed below.
- Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" proposed by Pekin…☆71 · Updated 3 months ago
- The official code of the paper "PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction".☆51 · Updated 3 weeks ago
- ☆114 · Updated 7 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment☆56 · Updated 4 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation☆85 · Updated 4 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models☆113 · Updated 8 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Infe…☆89 · Updated 2 months ago
- Code release for VTW (AAAI 2025 Oral)☆30 · Updated last week
- ☆24 · Updated 8 months ago
- A collection of visual instruction tuning datasets.☆76 · Updated 10 months ago
- [NeurIPS'24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench.☆84 · Updated 6 months ago
- The official implementation of "MLLMs-Augmented Visual-Language Representation Learning"☆31 · Updated 10 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding☆27 · Updated last month
- Official repository for CoMM Dataset☆27 · Updated last month
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input☆62 · Updated 5 months ago
- ☆17 · Updated 5 months ago
- ☆95 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers.☆30 · Updated last month
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization"☆53 · Updated 5 months ago
- Official repo of γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models☆29 · Updated 3 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs☆33 · Updated 3 months ago
- The official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality"☆45 · Updated 2 weeks ago
- Adapting LLaMA Decoder to Vision Transformer☆26 · Updated 8 months ago
- Liquid: Language Models are Scalable Multi-modal Generators☆61 · Updated last month
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights☆35 · Updated 3 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era☆24 · Updated last month
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of…☆109 · Updated 2 months ago
- MMICL (PKU): a state-of-the-art VLM with in-context learning ability☆44 · Updated last year
- ☆87 · Updated last year
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control"☆53 · Updated last year