AI45Lab / OpenRT
Open-source red teaming framework for MLLMs with 37+ attack methods
☆148 · Updated this week
Alternatives and similar repositories for OpenRT
Users interested in OpenRT are comparing it to the libraries listed below
- ☆55 · Updated last year
- Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs ☆30 · Updated 3 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆68 · Updated last year
- Official repository for “Reasoning in the Dark: Interleaved Vision-Text Reasoning in Latent Space” ☆17 · Updated 2 months ago
- 🔥 An open-source survey of the latest video reasoning tasks, paradigms, and benchmarks. ☆118 · Updated last week
- ☆33 · Updated 4 months ago
- [AAAI 2025] Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient ☆42 · Updated 8 months ago
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆44 · Updated 9 months ago
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆108 · Updated 6 months ago
- Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ☆73 · Updated 3 months ago
- [ACM MM 2023] Code Release of GCMA: Generative Cross-Modal Transferable Adversarial Attacks from Images to Videos ☆12 · Updated last year
- ☆154 · Updated 10 months ago
- [CVPR 2025] T2ISafety: Benchmark for Assessing Fairness, Toxicity, and Privacy in Image Generation ☆29 · Updated 6 months ago
- Code for the paper "AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin". ☆35 · Updated 6 months ago
- ☆67 · Updated 5 months ago
- (ICCV 2025) This repository is the official implementation of AIGI-Holmes: Towards Explainable and Generalizable AI-Generated Image Detect… ☆149 · Updated 5 months ago
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… ☆184 · Updated last week
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆107 · Updated last year
- A paper list about LLMs and Multimodal LLMs ☆52 · Updated last week
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆41 · Updated last year
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆234 · Updated 2 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆99 · Updated last year
- A paper list of Awesome Latent Space. ☆276 · Updated last week
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆127 · Updated 4 months ago
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆73 · Updated 7 months ago
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆60 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆78 · Updated 3 months ago
- 📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ☆339 · Updated 2 months ago
- Accepted by IJCAI-24 Survey Track ☆226 · Updated last year
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs ☆21 · Updated 8 months ago