deepglint / MVT
Margin-based Vision Transformer
☆41 · Updated last week
Alternatives and similar repositories for MVT
Users interested in MVT are comparing it to the libraries listed below.
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆92 · Updated 2 months ago
- [ICCV 2025] A Token-level Text Image Foundation Model for Document Understanding ☆121 · Updated last month
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆63 · Updated 11 months ago
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" ☆85 · Updated 5 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, plus the RL tool Vision-R1. ☆236 · Updated last month
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆256 · Updated 2 weeks ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆94 · Updated 3 months ago
- [ACM MM 2025] The official repository for the RealSyn dataset ☆37 · Updated 3 months ago
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆98 · Updated 3 months ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆68 · Updated 4 months ago
- ☆119 · Updated last year
- ☆72 · Updated 4 months ago
- Research code for the Multimodal-Cognition Team at Ant Group ☆167 · Updated 3 months ago
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆122 · Updated 3 months ago
- The official repo for "TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding" ☆42 · Updated last year
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆35 · Updated 3 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆272 · Updated last month
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆77 · Updated 4 months ago
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆42 · Updated 3 weeks ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆177 · Updated 11 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆181 · Updated 5 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆57 · Updated last year
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770). ☆157 · Updated last year
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆203 · Updated 8 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 11 months ago
- Video Benchmark Suite: Rapid Evaluation of Video Foundation Models ☆15 · Updated 9 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆142 · Updated 4 months ago
- ☆33 · Updated 2 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆104 · Updated 4 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆211 · Updated last year