BillChan226 / SafeWatch
[ICLR 2025] Official implementation for "SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations"
☆41 · Updated 10 months ago
Alternatives and similar repositories for SafeWatch
Users interested in SafeWatch are comparing it to the repositories listed below
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆107 · Updated 5 months ago
- Accepted by IJCAI-24 Survey Track ☆224 · Updated last year
- ☆54 · Updated last year
- The reinforcement learning code for the SPA-VL dataset ☆42 · Updated last year
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) ☆172 · Updated 5 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆56 · Updated 11 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. ☆81 · Updated 10 months ago
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆73 · Updated 6 months ago
- Accepted by ECCV 2024 ☆176 · Updated last year
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… ☆182 · Updated 2 months ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆89 · Updated last year
- ☆55 · Updated last year
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆103 · Updated last year
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆43 · Updated 8 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆387 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆122 · Updated 3 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆67 · Updated last year
- A list of papers about Large Multimodal Models ☆31 · Updated 6 months ago
- An implementation for MLLM oversensitivity evaluation ☆17 · Updated last year
- ECSO (Make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆34 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆152 · Updated last year
- [ICLR 2025] MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs ☆42 · Updated 8 months ago
- This repository contains a collection of resources and papers on Detecting Multimedia Generated by Large AI Models ☆111 · Updated 4 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆94 · Updated last year
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆306 · Updated 7 months ago
- ☆40 · Updated last year
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs ☆22 · Updated 7 months ago
- (ICCV 2025) This repository is the official implementation of AIGI-Holmes: Towards Explainable and Generalizable AI-Generated Image Detect… ☆144 · Updated 4 months ago
- A library of visualization tools for the interpretability and hallucination analysis of large vision-language models (LVLMs). ☆42 · Updated 6 months ago
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆57 · Updated 2 months ago