friedrichor / LLaVA-NeXT-Reproduced
Reproduced LLaVA-NeXT with training code and scripts.
☆10 · Updated 10 months ago
Alternatives and similar repositories for LLaVA-NeXT-Reproduced
Users interested in LLaVA-NeXT-Reproduced are comparing it to the repositories listed below.
- The official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 5 months ago
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆49 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆120 · Updated 7 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆64 · Updated this week
- Visual Instruction Tuning for the Qwen2 Base Model ☆34 · Updated 11 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆26 · Updated 2 months ago
- ☆115 · Updated 10 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 7 months ago
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" ☆19 · Updated last year
- Official implementation of MIA-DPO ☆58 · Updated 4 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆51 · Updated 5 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 2 months ago
- The official repo for "TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding" ☆40 · Updated 8 months ago
- ☆12 · Updated this week
- ☆84 · Updated 2 months ago
- ☆62 · Updated last month
- ☆84 · Updated last year
- ☆91 · Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆46 · Updated 2 months ago
- "Visual Prompt Selection for In-Context Learning Segmentation Framework" ☆14 · Updated 5 months ago
- Official repository of the paper "Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing" ☆53 · Updated last week
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆39 · Updated last month
- ☆18 · Updated 5 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 4 months ago
- ☆32 · Updated last year
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆74 · Updated 7 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆34 · Updated 5 months ago
- The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆74 · Updated 2 weeks ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆56 · Updated 2 weeks ago
- [CVPR 2025] Code release of "F-LMM: Grounding Frozen Large Multimodal Models" ☆91 · Updated last week