Mozhgan91 / LEO
LEO: A powerful Hybrid Multimodal LLM
☆18 · Updated 6 months ago
Alternatives and similar repositories for LEO
Users interested in LEO are comparing it to the repositories listed below.
- ☆36 · Updated last month
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆48 · Updated last month
- ☆23 · Updated 4 months ago
- ☆99 · Updated 4 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆45 · Updated 2 months ago
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆36 · Updated last month
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆74 · Updated 3 weeks ago
- 🚀 Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆28 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆54 · Updated 3 weeks ago
- [ICCV 2025] Official code for the paper: Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs ☆19 · Updated last month
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆115 · Updated last week
- Official implementation of MIA-DPO ☆63 · Updated 6 months ago
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆96 · Updated last month
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding ☆66 · Updated last month
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆78 · Updated 2 weeks ago
- ☆87 · Updated last month
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆85 · Updated 2 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆49 · Updated 3 weeks ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆56 · Updated last month
- ☆35 · Updated last month
- Official PyTorch implementation of RACRO (https://www.arxiv.org/abs/2506.04559) ☆17 · Updated last month
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆76 · Updated 3 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆63 · Updated 3 weeks ago
- [ICLR 2025] Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ☆64 · Updated 4 months ago
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆42 · Updated last year
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆30 · Updated 10 months ago
- [ICLR 2025 Spotlight] Official code repository for Interleaved Scene Graph ☆27 · Updated this week
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆78 · Updated 9 months ago
- ☆27 · Updated 4 months ago