[ICCV-2023] Gradient inversion attack, Federated learning, Generative adversarial network.
☆52, updated Jul 13, 2024
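As background for the repositories listed below: a gradient inversion attack reconstructs private training data from the gradients a client shares in federated learning. A minimal sketch of why this is possible, in plain NumPy (all names here are illustrative; this is not GIFD's actual GAN-based pipeline): for a fully connected layer with cross-entropy loss, `dL/dW` is the outer product of the logit error and the input, so the input can be read off the shared gradient exactly.

```python
import numpy as np

# Illustrative sketch only — a single fully connected layer, not GIFD itself.
rng = np.random.default_rng(0)
num_classes, dim = 5, 8

# "Client" side: a private example (x, y) produces a gradient update.
W = rng.normal(size=(num_classes, dim))
b = rng.normal(size=num_classes)
x = rng.normal(size=dim)            # private training input
y = 2                               # private label

logits = W @ x + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                # softmax
delta = probs - np.eye(num_classes)[y]  # dL/dlogits for cross-entropy

grad_W = np.outer(delta, x)         # dL/dW = delta x^T
grad_b = delta                      # dL/db = delta

# "Attacker" side: sees only grad_W and grad_b. Each row of grad_W equals
# grad_b[i] * x, so any row with a nonzero bias gradient recovers x exactly.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # prints True: the input is reconstructed
```

For deeper networks this closed-form trick no longer applies, which is why the works below (DLG, iDLG, GAN-prior methods such as GIFD) instead optimize a dummy input until its gradient matches the observed one.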
Alternatives and similar repositories for GIFD_Gradient_Inversion_Attack
Users interested in GIFD_Gradient_Inversion_Attack are comparing it to the repositories listed below.
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with (☆192, updated Sep 23, 2025)
- [ECCV-2024 oral] Source code for the paper "A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion At…" (☆24, updated Sep 22, 2024)
- [ECCV-2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks (☆38, updated Apr 23, 2025)
- GI-PIP: Do We Require Impractical Auxiliary Dataset for Gradient Inversion Attacks? (ICASSP 2024) (☆16, updated Apr 18, 2025)
- [ICCV-2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… (☆37, updated Jul 10, 2025)
- The code for the paper "Embracing Collaboration Over Competition: Condensing Multiple Prompts for Visual In-Context Learning" (CVPR'25) (☆15, updated Sep 25, 2025)
- End-to-End Gradient Inversion (Gradient Leakage in Federated Learning), https://ieeexplore.ieee.org/document/9878027 (☆11, updated Aug 19, 2022)
- [ICML 2022 / ICLR 2024] Source code for the papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C…" (☆48, updated Jul 18, 2025)
- The code for the paper "Efficient Self-Supervised Video Hashing with Selective State Spaces" (AAAI'25) (☆23, updated Aug 2, 2025)
- ☆10, updated Apr 21, 2022
- FGLA: Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients (☆14, updated Mar 17, 2026)
- ☆15, updated Jun 4, 2024
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" (☆57, updated May 4, 2023)
- Algorithms to recover input data from their gradient signal through a neural network (☆318, updated Apr 14, 2023)
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning (ICML 2023) (☆14, updated Mar 31, 2024)
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning (☆17, updated Sep 1, 2023)
- AutoML, Privacy Preserving, Federated Learning (☆26, updated Jun 8, 2023)
- Paper list for gradient inversion attacks in federated learning [IEEE TPAMI 2026] (☆32, updated Mar 20, 2026)
- The code for the paper "LCM: Locally Constrained Compact Point Cloud Model for Masked Point Modeling" (NeurIPS'24) (☆14, updated Dec 25, 2024)
- [CVPR-2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks (☆43, updated Nov 12, 2023)
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods (☆15, updated Apr 8, 2025)
- [NDSS'25] The official implementation of safety misalignment (☆18, updated Jan 8, 2025)
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" (☆62, updated Oct 24, 2022)
- Breaching privacy in federated learning scenarios for vision and text (☆319, updated Jan 24, 2026)
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images (☆42, updated Jan 25, 2024)
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers (☆16, updated Oct 24, 2024)
- [NeurIPS 2024] BoostAdapter: Improving Test-Time Adaptation via Regional Bootstrapping (☆19, updated Feb 28, 2026)
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks (☆14, updated Feb 6, 2024)
- A comprehensive resource repository for research on deep learning model inversion attacks and defenses (☆28, updated Nov 13, 2025)
- The code for "Improved Deep Leakage from Gradients" (iDLG) (☆166, updated Mar 4, 2021)
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" (☆60, updated Jan 5, 2026)
- ☆19, updated Sep 8, 2023
- [CVPR 2025] R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning (☆22, updated Aug 28, 2025)
- ☆24, updated Feb 17, 2026
- [NeurIPS 2019] Deep Leakage from Gradients (☆477, updated Apr 17, 2022)
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… (☆23, updated Oct 30, 2023)
- [ICLR'26] SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models (☆38, updated Mar 9, 2026)
- Code to replicate the Representation Noising paper and tools for evaluating defenses against harmful fine-tuning (☆24, updated Dec 12, 2024)
- Instance-wise Batch Label Restoration via Gradients in Federated Learning (ICLR 2023) (☆11, updated May 18, 2023)