VT-NLP / MultiInstruct
MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning
☆135 · Updated last year
Alternatives and similar repositories for MultiInstruct:
Users interested in MultiInstruct are comparing it to the repositories listed below.
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆198 · Updated 11 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆130 · Updated 5 months ago
- Official repository for the A-OKVQA dataset ☆75 · Updated 9 months ago
- ☆33 · Updated last year
- ☆39 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆142 · Updated 10 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated last year
- ☆64 · Updated 5 years ago
- ☆139 · Updated 4 months ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆81 · Updated last year
- ☆28 · Updated 3 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆271 · Updated 11 months ago
- ☆94 · Updated last year
- ☆59 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆66 · Updated 3 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆63 · Updated 4 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated 8 months ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆95 · Updated last month
- 😎 Curated list of awesome LMM hallucination papers, methods & resources ☆148 · Updated 11 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆90 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆79 · Updated 10 months ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focused on Visual Info-Seeking Questions ☆18 · Updated 9 months ago
- [ICCV 2023 Oral] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆38 · Updated 6 months ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆63 · Updated 2 years ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆256 · Updated 8 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆50 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆285 · Updated last month
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆46 · Updated last year
- ☆133 · Updated last year