ZSHsh98 / NSG-VD
[NeurIPS 2025 Spotlight] "Physics-Driven Spatiotemporal Modeling for AI-Generated Video Detection"
☆59 · Updated last month
Alternatives and similar repositories for NSG-VD
Users interested in NSG-VD are comparing it to the repositories listed below.
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆43 · Updated 9 months ago
- [NeurIPS25 & ICML25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆82 · Updated 8 months ago
- The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆87 · Updated 10 months ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆149 · Updated 6 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆67 · Updated last year
- ☆55 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆234 · Updated last year
- Official repo for ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models ☆24 · Updated 9 months ago
- Awesome-Low-Rank-Adaptation ☆124 · Updated last year
- A library of visualization tools for the interpretability and hallucination analysis of large vision-language models (LVLMs). ☆42 · Updated 7 months ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆80 · Updated 10 months ago
- ☆61 · Updated last year
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆173 · Updated 2 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) ☆173 · Updated 6 months ago
- Official code implementation of Robust Classification via a Single Diffusion Model ☆96 · Updated 9 months ago
- Awesome Low-Rank Adaptation ☆58 · Updated 4 months ago
- Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs ☆30 · Updated 3 months ago
- ☆66 · Updated 5 months ago
- [ICML 2025 Oral] ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence ☆41 · Updated 4 months ago
- [ACL 2025] Data and Code for the paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ☆52 · Updated 5 months ago
- [CVPR 2024 Highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) ☆28 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆127 · Updated 3 months ago
- Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ☆73 · Updated 2 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆84 · Updated 2 years ago
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib… ☆30 · Updated 5 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆163 · Updated 6 months ago
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre… ☆47 · Updated 5 months ago
- [AAAI, ICLR TP] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening ☆55 · Updated last year
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆109 · Updated 5 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆83 · Updated 4 months ago