SHI-Labs / OLA-VLM
OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024
☆60 · Updated 5 months ago
Alternatives and similar repositories for OLA-VLM
Users interested in OLA-VLM are comparing it to the repositories listed below.
- Official implementation of "PyVision: Agentic Vision with Dynamic Tooling." ☆108 · Updated 2 weeks ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆104 · Updated last week
- Implementation of the model MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆22 · Updated last week
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆53 · Updated 2 weeks ago
- [AAAI2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆55 · Updated 3 months ago
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆20 · Updated 9 months ago
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆47 · Updated 2 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration