SHI-Labs / OLA-VLM
OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation (arXiv 2024)
☆58 · Updated last month
Alternatives and similar repositories for OLA-VLM:
Users interested in OLA-VLM are comparing it to the repositories listed below.
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆69 · Updated 2 weeks ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆48 · Updated 3 months ago
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆19 · Updated 5 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" ☆60 · Updated 3 weeks ago
- An open-source implementation of CLIP (with TULIP support) ☆122 · Updated 3 weeks ago
- ☆66 · Updated last week
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆53 · Updated 3 months ago
- Official repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆67 · Updated last month
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated 8 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆28 · Updated 6 months ago
- ☆78 · Updated this week
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆54 · Updated this week
- 🤖 [ICLR'25] Multimodal Video Understanding Framework (MVU) ☆32 · Updated 2 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆73 · Updated last week
- ☆33 · Updated 2 months ago
- Official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 3 months ago
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆21 · Updated last week
- ☆39 · Updated this week
- Official PyTorch implementation of Self-emerging Token Labeling ☆33 · Updated last year
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆59 · Updated 9 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 9 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆74 · Updated 4 months ago
- An improved LLamaGen model, fine-tuned on 500,000 high-quality images, each accompanied by over 300 token prompt… ☆30 · Updated 5 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆42 · Updated last month
- Official implementation of "Add-SD: Rational Generation without Manual Reference" ☆27 · Updated 7 months ago
- Matryoshka Multimodal Models ☆98 · Updated 2 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆181 · Updated last week
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆98 · Updated 3 weeks ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆41 · Updated 9 months ago