Ruiyang-061X / Uncertainty-o
✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".
☆13 · Updated 4 months ago
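Judging from the paper title alone, Uncertainty-o is a model-agnostic framework for surfacing epistemic uncertainty in large multimodal models. A common black-box recipe in this space is to sample the model several times on the same query and measure how much the answers disagree. The sketch below illustrates that generic idea only; `predictive_entropy` and the toy answers are illustrative assumptions, not this repository's actual API.

```python
from collections import Counter
import math

def predictive_entropy(samples):
    """Shannon entropy of the empirical answer distribution.

    Higher values mean the model's repeated answers disagree more,
    a common proxy for epistemic uncertainty in sampling-based,
    black-box frameworks. Illustrative only, not Uncertainty-o's API.
    """
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical answers sampled for the same multimodal query:
low = predictive_entropy(["cat"] * 5)                   # all samples agree
high = predictive_entropy(["cat", "dog", "cat", "bird"])  # samples disagree
# `low` is numerically zero; `high` is positive, flagging uncertainty.
```

In practice the samples would come from repeated stochastic decoding (e.g. temperature sampling) of the multimodal model, and disagreement would typically be measured over semantically clustered answers rather than exact strings.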
Alternatives and similar repositories for Uncertainty-o
Users interested in Uncertainty-o are comparing it to the repositories listed below.
- 🔎 Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation". ☆39 · Updated 3 months ago
- ✨ A curated list of papers on uncertainty in multi-modal large language models (MLLMs). ☆49 · Updated 3 months ago
- Implementation of "DIME-FM: DIstilling Multimodal and Efficient Foundation Models" ☆15 · Updated last year
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆51 · Updated 10 months ago
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models" ☆20 · Updated 8 months ago
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆44 · Updated 6 months ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinatio… ☆22 · Updated 5 months ago
- ☆15 · Updated 2 years ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 · Updated last year
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆78 · Updated 4 months ago
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning ☆26 · Updated 2 years ago
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆44 · Updated last year
- ☆24 · Updated last week
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆73 · Updated 5 months ago
- The PyTorch implementation for "DEAL: Disentangle and Localize Concept-level Explanations for VLMs" (ECCV 2024 Strong Double Blind) ☆20 · Updated 8 months ago
- Code and dataset for the paper "LAMM: Label Alignment for Multi-Modal Prompt Learning" (AAAI 2024) ☆32 · Updated last year
- Multimodal-Composite-Editing-and-Retrieval-update ☆33 · Updated 8 months ago
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆39 · Updated last year
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆49 · Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated 10 months ago
- ☆27 · Updated last year
- [ICML 2024] Improve Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning ☆27 · Updated 9 months ago
- An efficient tuning method for VLMs ☆80 · Updated last year
- Code for studying OpenAI's CLIP explainability ☆33 · Updated 3 years ago
- Official code for "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models" (TCSVT 2023) ☆28 · Updated last year
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆46 · Updated last month
- [IJCV 2025] https://arxiv.org/abs/2304.04521 ☆15 · Updated 5 months ago
- [ICLR 2025] Official implementation of Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection ☆42 · Updated this week
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆48 · Updated last year
- [NeurIPS 2023] Generalized Logit Adjustment ☆38 · Updated last year