wellzline / Trustworthy_T2I_DMs
☆12 · Updated 3 weeks ago
Alternatives and similar repositories for Trustworthy_T2I_DMs
Users interested in Trustworthy_T2I_DMs are comparing it to the repositories listed below.
- ☆27 · Updated 3 months ago
- Code and data for the paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?" (ACL 2025 Main) ☆16 · Updated last month
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" ☆80 · Updated 5 months ago
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆81 · Updated last year
- Official implementation of the NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" ☆45 · Updated 9 months ago
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models, by Yihua Zhang, Cho… ☆74 · Updated 8 months ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆17 · Updated 9 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆51 · Updated 6 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago
- ☆12 · Updated 8 months ago
- ☆37 · Updated last year
- PDM-based Purifier ☆22 · Updated 9 months ago
- ☆32 · Updated 6 months ago
- ☆65 · Updated 10 months ago
- A list of T2I safety papers, updated daily; discussion is welcome via GitHub Discussions ☆62 · Updated 11 months ago
- ☆14 · Updated 5 months ago
- Repository for the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source… ☆59 · Updated 6 months ago
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆31 · Updated 2 months ago
- ☆27 · Updated last year
- ☆19 · Updated last year
- "In-Context Unlearning: Language Models as Few Shot Unlearners" by Martin Pawelczyk, Seth Neel*, and Himabindu Lakkaraju* (ICML 2024) ☆28 · Updated last year
- A Task of Fictitious Unlearning for VLMs ☆20 · Updated 4 months ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆72 · Updated 4 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" ☆62 · Updated last year
- ☆20 · Updated last year
- ☆24 · Updated 5 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆29 · Updated last year
- Official repo for "Detecting, Explaining, and Mitigating Memorization in Diffusion Models" (ICLR 2024) ☆76 · Updated last year
- A large-scale dataset of high-quality synthetic images for evaluating social biases in LVLMs ☆13 · Updated 2 months ago
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models" (ICLR 2024) ☆32 · Updated 8 months ago