THUKElab / MESED
[AAAI 2024] MESED: A Multi-modal Entity Set Expansion Dataset with Fine-grained Semantic Classes and Hard Negative Entities
☆16 · Updated last year
Alternatives and similar repositories for MESED
Users interested in MESED are comparing it to the libraries listed below.
- ☆88 · Updated last year
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Updated 7 months ago
- Official resource for the paper "Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models" (ACL 20… ☆15 · Updated last year
- Official repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆62 · Updated 2 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆85 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR '23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- ☆14 · Updated 2 years ago
- [ACL 2024 Oral] This is the code repo for our ACL '24 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Mo… ☆39 · Updated last year
- [EMNLP 2023] InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions ☆25 · Updated last year
- ☆25 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆57 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆97 · Updated 2 years ago
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆52 · Updated 6 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆92 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆88 · Updated last year
- Codebase for ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memori… ☆52 · Updated 2 years ago
- [AAAI 2025] Math-PUMA: Progressive Upward Multimodal Alignment to Enhance Mathematical Reasoning ☆42 · Updated 9 months ago
- ☆87 · Updated 2 years ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆117 · Updated 7 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆59 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated 2 years ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆36 · Updated last year
- An Arena-style Automated Evaluation Benchmark for Detailed Captioning ☆56 · Updated 8 months ago
- ☆13 · Updated last year
- ☆11 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated 2 years ago
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆34 · Updated last year
- ☆101 · Updated 2 years ago
- ☆19 · Updated 2 years ago