Mozhgan91 / LEO
LEO: A powerful Hybrid Multimodal LLM
☆18 · Updated 5 months ago
Alternatives and similar repositories for LEO
Users who are interested in LEO are comparing it to the libraries listed below.
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆31 · Updated last month
- 🚀 Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆24 · Updated last month
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆45 · Updated last month
- ☆33 · Updated last week
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆16 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆51 · Updated 6 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆29 · Updated 3 months ago
- ☆34 · Updated 3 weeks ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆22 · Updated 2 months ago
- ☆22 · Updated 3 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 9 months ago
- Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models ☆16 · Updated last month
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆60 · Updated this week
- Official implementation of MC-LLaVA ☆32 · Updated last month
- ☆87 · Updated 3 weeks ago
- Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning" ☆32 · Updated 4 months ago
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆37 · Updated 5 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 4 months ago
- Official repository of Personalized Visual Instruct Tuning ☆31 · Updated 4 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆109 · Updated this week
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆42 · Updated last year
- ☆22 · Updated 4 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆29 · Updated 9 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆85 · Updated 3 weeks ago
- Official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆56 · Updated 8 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 4 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆48 · Updated 3 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated 9 months ago
- ☆19 · Updated last month