lyan62 / FoodieQA
Official Repo for FoodieQA paper (EMNLP 2024)
☆15 · Updated 2 months ago
Alternatives and similar repositories for FoodieQA:
Users interested in FoodieQA are comparing it to the repositories listed below
- ☆22 · Updated 5 months ago
- Mosaic IT: Enhancing Instruction Tuning with Data Mosaics ☆17 · Updated 6 months ago
- Code and data for the ACL 2024 paper "Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space" ☆11 · Updated 6 months ago
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆32 · Updated 6 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆59 · Updated 2 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆41 · Updated 3 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆39 · Updated 3 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆48 · Updated last month
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated 7 months ago
- Code for "Reducing Hallucinations in Vision-Language Models via Latent Space Steering" ☆22 · Updated 2 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" ☆53 · Updated this week
- The benchmark and datasets of the ICML 2024 paper "VisionGraph: Leveraging Large Multimodal Models for Graph Theory Problems in Visual C… ☆13 · Updated 8 months ago
- Official implementation of "MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model". Our co… ☆14 · Updated last month
- Repo for the paper "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models" ☆10 · Updated 3 months ago
- [ACL 2024] Code repo for the ACL 2024 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Module … ☆36 · Updated 7 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆33 · Updated last month
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆50 · Updated 10 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆23 · Updated 3 months ago
- Visual question answering prompting recipes for large vision-language models ☆24 · Updated 4 months ago
- An easy-to-use hallucination detection framework for LLMs ☆55 · Updated 9 months ago
- MoCLE (first MLLM with MoE for instruction customization and generalization; https://arxiv.org/abs/2312.12379) ☆33 · Updated 9 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆21 · Updated 3 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆62 · Updated 7 months ago
- A Survey on the Honesty of Large Language Models ☆51 · Updated last month
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆38 · Updated 3 months ago
- ☆59 · Updated 7 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆79 · Updated 9 months ago
- PyTorch implementation of StableMask (ICML 2024) ☆12 · Updated 7 months ago
- Public code repo for the paper "Aligning LLMs with Individual Preferences via Interaction" ☆18 · Updated 3 months ago
- Official code repository for the paper "Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-intensive Tasks… ☆37 · Updated 2 months ago