chs20 / RobustVLM
[ICML 2024] Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
☆89 · Updated 3 months ago
Related projects:
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆36 · Updated last month
- Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models ☆67 · Updated this week
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" ☆34 · Updated 2 months ago
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" ☆43 · Updated last month
- What do we learn from inverting CLIP models? ☆41 · Updated 6 months ago
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 ☆27 · Updated 8 months ago
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆61 · Updated 9 months ago
- Official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strat… ☆66 · Updated 5 months ago
- [ICLR 2024 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation…" ☆93 · Updated last month
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆47 · Updated last month
- Official codebase for "Image Hijacks: Adversarial Images Can Control Generative Models at Runtime" ☆28 · Updated last year
- Code for "Finetune Like You Pretrain: Improved Finetuning of Zero-Shot Vision Models" ☆86 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆39 · Updated 5 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆85 · Updated 8 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆61 · Updated 4 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆31 · Updated 4 months ago
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems ☆147 · Updated 10 months ago
- [CVPR 2024] Self-Discovering Interpretable Diffusion Latent Directions for Responsible Text-to-Image Generation ☆24 · Updated 4 months ago
- Official PyTorch repo of CVPR'23 and NeurIPS'23 papers on understanding replication in diffusion models ☆102 · Updated 9 months ago
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆22 · Updated 7 months ago
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models ☆19 · Updated last month
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆61 · Updated 5 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆100 · Updated 2 months ago
- Official repo for "Detecting, Explaining, and Mitigating Memorization in Diffusion Models" (ICLR 2024) ☆46 · Updated 5 months ago
- Code for the ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆76 · Updated 4 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆30 · Updated last month