SHI-Labs / OLA-VLM
OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024
☆58 · Updated 2 months ago
Alternatives and similar repositories for OLA-VLM:
Users interested in OLA-VLM are comparing it to the repositories listed below.
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆76 · Updated last month
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆50 · Updated 4 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" ☆64 · Updated last month
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆31 · Updated 4 months ago
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆67 · Updated 2 months ago
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆53 · Updated this week
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 10 months ago
- ☆80 · Updated last month
- ☆115 · Updated 9 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆28 · Updated 6 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆133 · Updated 5 months ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated 9 months ago
- Official PyTorch Implementation of Self-emerging Token Labeling ☆33 · Updated last year
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆188 · Updated last month
- [ICLR 2025] Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆75 · Updated 4 months ago
- ☆91 · Updated 3 weeks ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 4 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- An open source implementation of CLIP (with TULIP support) ☆132 · Updated last month
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 7 months ago
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆20 · Updated 6 months ago
- [ICML 2025] VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆22 · Updated this week
- Matryoshka Multimodal Models ☆101 · Updated 3 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆72 · Updated 3 months ago
- ☆32 · Updated 3 months ago
- ☆45 · Updated 3 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆146 · Updated 2 weeks ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆42 · Updated 2 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆42 · Updated 10 months ago
- ☆73 · Updated this week