AIDC-AI / Parrot
The code repository for "Parrot: Multilingual Visual Instruction Tuning" in PyTorch.
☆39 · Updated last month
Alternatives and similar repositories for Parrot
Users interested in Parrot are comparing it to the libraries listed below.
- Official implementation of MIA-DPO ☆58 · Updated 4 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 8 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 3 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆82 · Updated 5 months ago
- [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆122 · Updated 3 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆78 · Updated 4 months ago
- ☆46 · Updated last month
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆115 · Updated last month
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆52 · Updated 7 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆84 · Updated last year
- The code repository for "Wings: Learning Multimodal LLMs without Text-only Forgetting" [NeurIPS 2024] ☆18 · Updated 5 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆50 · Updated 7 months ago
- ☆77 · Updated 4 months ago
- [ICML 2025] Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆124 · Updated last week
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆63 · Updated 10 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆172 · Updated last week
- ☆84 · Updated 2 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆65 · Updated this week
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆74 · Updated 8 months ago
- Code & Dataset for the paper "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆53 · Updated 7 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆74 · Updated 11 months ago
- ☆25 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆37 · Updated 7 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆163 · Updated last week
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆65 · Updated last month
- Official repository of the MMDU dataset ☆91 · Updated 8 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- ☆85 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆68 · Updated last year