THUKElab / MESEDLinks
[AAAI 2024] MESED: A Multi-modal Entity Set Expansion Dataset with Fine-grained Semantic Classes and Hard Negative Entities
☆16 · Updated last year
Alternatives and similar repositories for MESED
Users interested in MESED are comparing it to the libraries listed below.
- ☆82 · Updated last year
- Official resource for the paper "Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models" (ACL 20…) ☆12 · Updated last year
- MoCLE (first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379) ☆43 · Updated 2 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆97 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆81 · Updated 10 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated 10 months ago
- ☆14 · Updated last year
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆33 · Updated last year
- Code for SFT, RLHF, and DPO of vision-based LLMs, including the LLaVA models and LLaMA-3.2-vi… ☆113 · Updated 3 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆32 · Updated 7 months ago
- Code for "CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models" ☆19 · Updated 6 months ago
- Code and data for the ACL 2024 paper "Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space" ☆16 · Updated last year
- An Arena-style Automated Evaluation Benchmark for Detailed Captioning ☆55 · Updated 3 months ago
- [ICLR 2023] Code repo for the ICLR '23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa…" ☆53 · Updated last year
- A curated collection of multimodal dialogue system papers I have read (with some notes) ☆23 · Updated 2 years ago
- ☆13 · Updated 9 months ago
- Code and model for the AAAI 2024 paper "UMIE: Unified Multimodal Information Extraction with Instruction Tuning" ☆40 · Updated last year
- Codebase for the ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memori…" ☆51 · Updated last year
- ☆11 · Updated 8 months ago
- ACL 2023: Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning ☆40 · Updated 2 years ago
- ☆100 · Updated last year
- ☆25 · Updated last year
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆48 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆84 · Updated 8 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆73 · Updated last year
- My commonly used tools ☆61 · Updated 8 months ago
- ☆55 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆50 · Updated 2 months ago