E0HYL / FINER-explain
CCS 2023 | Explainable malware and vulnerability detection with XAI, from the paper "FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis"
☆11 · Updated last year
Alternatives and similar repositories for FINER-explain
Users interested in FINER-explain are comparing it to the repositories listed below.
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" ☆59 · Updated 3 years ago
- Machine Learning & Security Seminar @ Purdue University ☆25 · Updated 2 years ago
- ☆66 · Updated 4 years ago
- Continuous Learning for Android Malware Detection (USENIX Security 2023) ☆71 · Updated last year
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆17 · Updated 8 months ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated 4 years ago
- Implementation for the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur…" ☆12 · Updated 3 years ago
- Hidden backdoor attack on NLP systems ☆47 · Updated 3 years ago
- ☆19 · Updated last year
- ☆24 · Updated last year
- Adversarial malware detection via a principled approach ☆22 · Updated 2 years ago
- ☆11 · Updated last year
- Code release for DeepJudge (S&P'22) ☆51 · Updated 2 years ago
- Code for the USENIX Security 2021 paper "CADE: Detecting and Explaining Concept Drift Samples for Security Applications" ☆140 · Updated 2 years ago
- Code implementation for "Traceback of Data Poisoning Attacks in Neural Networks" ☆19 · Updated 3 years ago
- Code for ML Doctor ☆91 · Updated last year
- ☆10 · Updated 4 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- ☆18 · Updated 4 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 6 months ago
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling ☆15 · Updated 7 months ago
- Instructions for requesting access to AdvDroidZero ☆12 · Updated last year
- FARE: Enabling Fine-grained Attack Categorization under Low-quality Labeled Data ☆26 · Updated 3 years ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆22 · Updated last year
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆27 · Updated 4 months ago
- ☆32 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- ☆55 · Updated 5 years ago