msrocean / continual-learning-malware
This repository contains the code and data for the paper **On the Limitations of Continual Learning for Malware Classification**, accepted for publication at the First Conference on Lifelong Learning Agents (CoLLAs).
☆19 · Updated last year
Alternatives and similar repositories for continual-learning-malware
Users interested in continual-learning-malware are comparing it to the libraries listed below.
- Code for the paper Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers ☆59 · Updated 3 years ago
- Continuous Learning for Android Malware Detection (USENIX Security 2023) ☆71 · Updated last year
- CCS 2023 | Explainable malware and vulnerability detection with XAI in paper "FINER: Enhancing State-of-the-art Classifiers with Feature … ☆11 · Updated last year
- ☆66 · Updated 4 years ago
- Code release for RobOT (ICSE'21) ☆15 · Updated 2 years ago
- A Python library for Secure and Explainable Machine Learning ☆185 · Updated 2 months ago
- ☆24 · Updated last year
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples ☆19 · Updated 3 years ago
- SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models ☆67 · Updated last week
- Craft poisoned data using MetaPoison ☆53 · Updated 4 years ago
- ☆147 · Updated 10 months ago
- Library for training globally-robust neural networks. ☆29 · Updated 3 weeks ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆79 · Updated 2 years ago
- Code for the AsiaCCS 2021 paper: "Malware makeover: Breaking ML-based static analysis by modifying executable bytes" ☆53 · Updated last year
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆211 · Updated 2 months ago
- ☆23 · Updated 3 years ago
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated 7 months ago
- ☆11 · Updated last year
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆98 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 6 months ago
- Code for ML Doctor ☆91 · Updated last year
- A curated list of academic events on AI Security & Privacy ☆160 · Updated last year
- Exercises for practicing MLSec for Systems Security ☆10 · Updated 11 months ago
- ☆18 · Updated 4 years ago
- A united toolbox for running major robustness verification approaches for DNNs. [S&P 2023] ☆90 · Updated 2 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) ☆17 · Updated 4 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆13 · Updated 4 years ago
- Code Implementation for Traceback of Data Poisoning Attacks in Neural Networks ☆19 · Updated 3 years ago
- ☆28 · Updated 2 years ago