Hao840 / ADEM-VL
PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning"
☆19 · Updated 5 months ago
Alternatives and similar repositories for ADEM-VL:
Users interested in ADEM-VL are comparing it to the libraries listed below.
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation (arXiv 2024) ☆57 · Updated last month
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆45 · Updated 2 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- Implementation of the model MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆21 · Updated 2 months ago
- Official PyTorch implementation of Self-emerging Token Labeling ☆32 · Updated last year
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆30 · Updated last year
- Project for "LaSagnA: Language-based Segmentation Assistant for Complex Queries" ☆53 · Updated 10 months ago
- [CVPR 2025] Official implementation of DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation, which outperforms SOTA… ☆20 · Updated last month
- AAPL: Adding Attributes to Prompt Learning for Vision-Language Models (CVPRw 2024) ☆34 · Updated 10 months ago
- [NeurIPS 2024] Official implementation of "Image Copy Detection for Diffusion Models" ☆16 · Updated 5 months ago
- Official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 3 months ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated 7 months ago
- ☆42 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆52 · Updated 3 months ago
- Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models ☆75 · Updated 6 months ago
- ☆25 · Updated last year
- 🤖 [ICLR 2025] Multimodal Video Understanding Framework (MVU) ☆30 · Updated last month
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆78 · Updated last month
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆26 · Updated 5 months ago
- Official codebase for "Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers" ☆13 · Updated this week
- [NeurIPS 2024] Official code release for "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆35 · Updated 2 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆16 · Updated 5 months ago
- Official PyTorch implementation of "No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding" ☆32 · Updated 10 months ago
- ☆43 · Updated 2 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆59 · Updated 8 months ago
- ☆39 · Updated 4 months ago
- OpenVLThinker: An Early Exploration of Vision-Language Reasoning via Iterative Self-Improvement ☆30 · Updated this week
- Multi-vision Sensor Perception and Reasoning (MS-PR) benchmark, assessing VLMs on their capacity for sensor-specific reasoning ☆13 · Updated last month
- Official repository of Personalized Visual Instruct Tuning ☆28 · Updated 3 weeks ago
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models ☆18 · Updated 3 months ago