fabriceyhc / mode_nn_debugging
MODE: Automated Neural Network Model Debugging via State Differential Analysis and Input Selection - Replication Project
☆15 · Updated last year
Related projects
Alternatives and complementary repositories for mode_nn_debugging
- Code release for RobOT (ICSE'21) · ☆15 · Updated last year
- CC: Causality-Aware Coverage Criterion for Deep Neural Networks · ☆10 · Updated last year
- Reward Guided Test Generation for Deep Learning · ☆20 · Updated 3 months ago
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions · ☆38 · Updated 5 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf · ☆26 · Updated 2 years ago
- ICSE 2021 submission · ☆12 · Updated 2 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … · ☆99 · Updated 2 years ago
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] · ☆88 · Updated last year
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense · ☆14 · Updated 6 months ago
- Released code for Neurify (NIPS 2018) · ☆46 · Updated last year
- Machine Learning & Security Seminar @ Purdue University · ☆25 · Updated last year
- DeepLocalize: Fault Localization for Deep Neural Networks · ☆25 · Updated 3 years ago
- Source code for "Maximum Mean Discrepancy Test is Aware of Adversarial Attacks" (ICML 2021) · ☆19 · Updated 2 years ago
- White-box Fairness Testing through Adversarial Sampling · ☆12 · Updated 3 years ago
- Benchmarks for the VNN Comp 2022 · ☆9 · Updated 5 months ago
- Research Artifact of USENIX Security 2023 Paper: Precise and Generalized Robustness Certification for Neural Networks · ☆12 · Updated last year
- ADAPT is an open-source white-box testing framework for deep neural networks · ☆21 · Updated last year
- Interval attacks (adversarial ML) · ☆21 · Updated 5 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆44 · Updated 4 years ago
- RAB: Provable Robustness Against Backdoor Attacks · ☆39 · Updated last year
- The official repo for the GCP-CROWN paper · ☆12 · Updated 2 years ago
- Code release for DeepJudge (S&P'22) · ☆51 · Updated last year
- Fourth edition of VNN COMP (2023) · ☆16 · Updated last year