Yanqing0327 / MLLMs-Augmented
The official implementation of "MLLMs-Augmented Visual-Language Representation Learning"
☆31 · Updated last year
Alternatives and similar repositories for MLLMs-Augmented
Users interested in MLLMs-Augmented are comparing it to the repositories listed below
- [CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆67 · Updated last year
- Official repository for CoMM Dataset ☆48 · Updated 9 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆64 · Updated last year
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆42 · Updated 3 weeks ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆45 · Updated 9 months ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 5 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated last year
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆32 · Updated 6 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆48 · Updated last year
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆52 · Updated 6 months ago
- ☆119 · Updated last year
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆40 · Updated 3 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated last year
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models ☆55 · Updated 5 months ago
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated 2 years ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆37 · Updated last year
- ☆76 · Updated last year
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆104 · Updated 5 months ago
- Codes for ICLR 2025 Paper: Towards Semantic Equivalence of Tokenization in Multimodal LLM ☆75 · Updated 6 months ago
- ☆80 · Updated 11 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- [NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs" ☆20 · Updated last year
- Official Repository of Personalized Visual Instruct Tuning ☆32 · Updated 7 months ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated last week
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated 2 years ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆49 · Updated 7 months ago
- Official implementation of CVPR 2024 paper "vid-TLDR: Training Free Token Merging for Light-weight Video Transformer" ☆52 · Updated last week
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆135 · Updated last year
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆49 · Updated 10 months ago