inspire-group / DP-RandP
[NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes
☆12 (updated Jun 12, 2023)
Alternatives and similar repositories for DP-RandP
Users that are interested in DP-RandP are comparing it to the libraries listed below
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs (☆13, updated Jun 20, 2025)
- Learning to See by Looking at Noise (☆113, updated Nov 24, 2024)
- ☆15 (updated Apr 7, 2023)
- Code for our paper "Localizing Lying in Llama" (☆13, updated Apr 24, 2025)
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) (☆15, updated Jan 3, 2023)
- Private Adaptive Optimization with Side Information (ICML '22) (☆16, updated Jun 23, 2022)
- Computationally friendly hyper-parameter search with DP-SGD (☆25, updated Jan 7, 2025)
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) (☆47, updated Feb 28, 2023)
- ☆20 (updated May 6, 2022)
- Codebase for "Exploring the Landscape of Spatial Robustness" (ICML '19, https://arxiv.org/abs/1712.02779) (☆25, updated Sep 16, 2019)
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (☆18, updated May 13, 2019)
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… (☆27, updated Oct 5, 2022)
- Algorithms for Privacy-Preserving Machine Learning in JAX (☆151, updated this week)
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" (☆26, updated Jan 7, 2022)
- ☆28 (updated Nov 28, 2023)
- [AAAI '21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification (☆29, updated Dec 31, 2024)
- A PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" (☆36, updated Jun 27, 2023)
- Code to break Llama Guard (☆32, updated Dec 7, 2023)
- Completely remove Gemini's SynthID security so it can't detect that your image was made with AI. Simply clone the repository locally, run… (☆22, updated Dec 10, 2025)
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security (☆75, updated Dec 5, 2022)
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) (☆34, updated Jun 29, 2025)
- ☆32 (updated Sep 2, 2024)
- SpinWalk, a framework for Monte Carlo simulation modeling spins' random walk within a network. SpinWalk paper: (☆14, updated Jun 16, 2025)
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) (☆39, updated Dec 24, 2023)
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition (☆90, updated May 19, 2024)
- A modern look at the relationship between sharpness and generalization [ICML 2023] (☆43, updated Sep 11, 2023)
- ☆48 (updated Jun 19, 2024)
- ICCV 2021: we find that most existing backdoor-attack triggers in deep learning contain severe artifacts in the frequency domain. This rep… (☆48, updated Apr 27, 2022)
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] (☆90, updated Mar 24, 2023)
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" (☆11, updated Nov 21, 2022)
- Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization (ACM MM 2024) (☆18, updated Mar 31, 2025)
- Official implementation for "Purifying Quantization-conditioned Backdoors via Layer-wise Activation Correction with Distribution Approxim…" (☆12, updated Aug 14, 2024)
- ☆14 (updated Feb 26, 2025)
- Repo for the paper "Bounding Training Data Reconstruction in Private (Deep) Learning" (☆11, updated Jun 16, 2023)
- Code used to produce experimental results for the paper "Deep Structured Prediction with Nonlinear Output Activations" (☆11, updated May 6, 2019)
- Attacks using out-of-distribution adversarial examples (☆11, updated Nov 19, 2019)
- A run-time Trojan detection method exploiting STRong Intentional Perturbation (STRIP) of inputs; a multi-domain Trojan … (☆10, updated Mar 7, 2021)
- A Python implementation of the paper "GraRep: Learning Graph Representations with Global Structural Information" (☆11, updated Jun 7, 2017)
- Unofficial PyTorch implementation of "A Simple Framework for Contrastive Learning of Visual Representations" (☆10, updated Mar 11, 2020)