360CVGroup / Inner-Adaptor-Architecture
LMM that addresses catastrophic forgetting, AAAI 2025
☆44 · Updated 3 months ago
Alternatives and similar repositories for Inner-Adaptor-Architecture
Users interested in Inner-Adaptor-Architecture are comparing it to the libraries listed below.
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated 9 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆56 · Updated 11 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆44 · Updated last year
- Official repository of the MMDU dataset ☆92 · Updated 9 months ago
- Official implementation of MIA-DPO ☆59 · Updated 5 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 4 months ago
- ☆50 · Updated last year
- HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆63 · Updated 4 months ago
- ☆83 · Updated 6 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 8 months ago
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆70 · Updated 4 months ago
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆105 · Updated 10 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 3 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆113 · Updated last month
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- ☆32 · Updated 3 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆37 · Updated last year
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆30 · Updated last year
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆50 · Updated 7 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 4 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆143 · Updated 8 months ago
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder. Unifies image understanding and generation. ☆37 · Updated last year
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆37 · Updated 5 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆26 · Updated 2 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated 11 months ago
- [ICCV 2025] Dynamic-VLM ☆21 · Updated 7 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆26 · Updated 6 months ago
- [ACL'24 (Oral)] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆67 · Updated 10 months ago
- ☆73 · Updated last year