E0HYL / FINER-explain
CCS 2023 | Explainable malware and vulnerability detection with XAI, from the paper "FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis"
☆11 · Updated last year
Alternatives and similar repositories for FINER-explain
Users interested in FINER-explain are comparing it to the repositories listed below.
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" ☆59 · Updated 3 years ago
- ☆68 · Updated 5 years ago
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆20 · Updated 2 months ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- Continuous Learning for Android Malware Detection (USENIX Security 2023) ☆73 · Updated 2 years ago
- Machine Learning & Security Seminar @ Purdue University ☆25 · Updated 2 years ago
- ☆19 · Updated last year
- Code release for DeepJudge (S&P'22) ☆52 · Updated 2 years ago
- Implementation of the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur…" ☆11 · Updated 3 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated 5 years ago
- Hidden backdoor attack on NLP systems ☆47 · Updated 4 years ago
- ☆19 · Updated 4 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated 3 years ago
- Code for ML Doctor ☆91 · Updated last year
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆26 · Updated 7 months ago
- Code for the paper "SrcMarker: Dual-Channel Source Code Watermarking via Scalable Code Transformations" (IEEE S&P 2024) ☆33 · Updated last year
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆50 · Updated 6 months ago
- Adversarial malware detection in a principled way ☆22 · Updated 2 years ago
- Code implementation for "Traceback of Data Poisoning Attacks in Neural Networks" ☆20 · Updated 3 years ago
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated 10 months ago
- ☆149 · Updated last year
- ☆19 · Updated 3 years ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated last year
- ☆25 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 9 months ago
- ☆84 · Updated 4 years ago
- ☆11 · Updated last year
- ☆26 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 6 years ago
- [ICLR 2025] REFINE: Inversion-Free Backdoor Defense via Model Reprogramming ☆11 · Updated 9 months ago