injadlu / DAMA
[ICML 2025] Official code of "DAMA: Data- and Model-aware Alignment of Multi-modal LLMs"
☆11 · Updated last month
Alternatives and similar repositories for DAMA
Users that are interested in DAMA are comparing it to the libraries listed below
- ☆47 · Updated 7 months ago
- Towards Modality Generalization: A Benchmark and Prospective Analysis ☆25 · Updated last month
- ECSO (Make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆25 · Updated 7 months ago
- [ICLR 2025] "Noisy Test-Time Adaptation in Vision-Language Models" ☆13 · Updated 4 months ago
- Reading notes on papers about OOD generalization ☆31 · Updated 6 months ago
- A Task of Fictitious Unlearning for VLMs ☆19 · Updated 2 months ago
- Instruction tuning in the continual learning paradigm ☆50 · Updated 4 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆60 · Updated 7 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Models (LRMs) ☆72 · Updated last week
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆88 · Updated 6 months ago
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆18 · Updated last month
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆26 · Updated 3 months ago
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" ☆19 · Updated 7 months ago
- Code for the ICLR 2025 paper "Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs" ☆16 · Updated last month
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attribution and Intervention" ☆18 · Updated 3 months ago
- [CVPR 2025] A ChatGPT-Prompted Visual Hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced evaluation mod… ☆17 · Updated 2 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting" ☆59 · Updated 11 months ago
- ☆11 · Updated 4 months ago
- ☆29 · Updated 2 years ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆34 · Updated 5 months ago
- Translation of the VHL repo to Paddle ☆25 · Updated last year
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆44 · Updated last year
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens ☆23 · Updated 3 months ago
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS 2023, Spotlight) ☆86 · Updated 7 months ago
- An implementation for MLLM oversensitivity evaluation ☆13 · Updated 7 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆91 · Updated 7 months ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinations… ☆22 · Updated 4 months ago
- ☆24 · Updated last year
- ☆18 · Updated last year
- The official implementation of the paper "Learning Concise and Descriptive Attributes for Visual Recognition" ☆44 · Updated last year