deepglint / MVT
Margin-based Vision Transformer
☆36 · Updated last month
Alternatives and similar repositories for MVT
Users interested in MVT are comparing it to the libraries listed below.
- [ACM MM2025] The official repository for the RealSyn dataset ☆37 · Updated 2 months ago
- [EMNLP25 Main] The official code of "Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval" ☆14 · Updated last week
- A Simple Framework of Small-scale LMMs for Video Understanding ☆92 · Updated 3 months ago
- Official implementation of the paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆81 · Updated 4 months ago
- [ICCV2025] A Token-level Text Image Foundation Model for Document Understanding ☆116 · Updated 3 weeks ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆87 · Updated last month
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆33 · Updated 2 months ago
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆116 · Updated 2 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, as well as the RL tool Vision-R1. ☆237 · Updated last month
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆95 · Updated 2 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆209 · Updated last year
- Research Code for the Multimodal-Cognition Team in Ant Group ☆165 · Updated 2 months ago
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆216 · Updated 5 months ago
- New generation of CLIP with fine-grained discrimination capability, ICML 2025 ☆294 · Updated last week
- ☆119 · Updated last year
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆289 · Updated 7 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆156 · Updated 11 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆141 · Updated 3 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆175 · Updated 11 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆202 · Updated 7 months ago
- Video Benchmark Suite: Rapid Evaluation of Video Foundation Models ☆15 · Updated 8 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆196 · Updated 5 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆31 · Updated 5 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆265 · Updated 3 weeks ago
- LinVT: Empower Your Image-level Large Language Model to Understand Videos ☆82 · Updated 8 months ago
- Precision Search through Multi-Style Inputs ☆72 · Updated last month
- [CVPR2025] Official implementation of High Fidelity Scene Text Synthesis. ☆68 · Updated 5 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆179 · Updated 4 months ago
- A lightweight, flexible Video-MLLM developed by the TencentQQ Multimedia Research Team. ☆74 · Updated 11 months ago
- ☆81 · Updated last month