palchenli / VL-Instruction-Tuning
☆91 · Updated last year
Alternatives and similar repositories for VL-Instruction-Tuning:
Users who are interested in VL-Instruction-Tuning are comparing it to the libraries listed below.
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" ☆19 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated last year
- ☆133 · Updated last year
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" ☆48 · Updated 4 months ago
- ☆61 · Updated last year
- The official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strat… ☆76 · Updated last month
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆48 · Updated 8 months ago
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ☆37 · Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆27 · Updated 8 months ago
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆48 · Updated 10 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆68 · Updated last month
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 5 months ago
- ☆68 · Updated 2 months ago
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- ☆24 · Updated 10 months ago
- ☆95 · Updated last year
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆31 · Updated 3 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated 9 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆83 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆37 · Updated 4 months ago
- [CVPR 2024] HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data ☆44 · Updated 8 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆97 · Updated 2 weeks ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆73 · Updated 7 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆57 · Updated 9 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆46 · Updated last year
- ☆29 · Updated 7 months ago
- Official repository of MMDU dataset ☆86 · Updated 5 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 2 months ago
- ☆51 · Updated 7 months ago