microsoft / CodeGenerationPoisoning
Proof of concept code for poisoning code generation models.
☆45 · Updated last year
Alternatives and similar repositories for CodeGenerationPoisoning
Users interested in CodeGenerationPoisoning are comparing it to the repositories listed below.
- ☆111 · Updated 10 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆45 · Updated last month
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆77 · Updated last year
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… ☆67 · Updated last year
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" ☆59 · Updated 3 years ago
- Universal Robustness Evaluation Toolkit (for Evasion) ☆31 · Updated last week
- Machine Learning & Security Seminar @ Purdue University ☆25 · Updated 2 years ago
- ☆66 · Updated 4 years ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆55 · Updated 2 months ago
- ☆44 · Updated 2 years ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆93 · Updated 9 months ago
- An implementation of the ACL 2024 Findings paper "Generalization-Enhanced Code Vulnerability Detection via Multi-Task Instruction Fine-Tu… ☆44 · Updated 11 months ago
- ☆16 · Updated 8 months ago
- Code for the AAAI 2023 paper "CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models" ☆29 · Updated 2 years ago
- Code release for RobOT (ICSE'21) ☆15 · Updated 2 years ago
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS'21) ☆17 · Updated 2 years ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆47 · Updated last month
- ☆20 · Updated last year
- ☆22 · Updated last year
- A curated list of academic events on AI Security & Privacy ☆150 · Updated 8 months ago
- On Training Robust PDF Malware Classifiers (USENIX Security'20) https://arxiv.org/abs/1904.03542 ☆29 · Updated 3 years ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆23 · Updated 6 months ago
- Fault-aware neural code rankers ☆28 · Updated 2 years ago
- ☆19 · Updated last year
- Example TrojAI Submission ☆24 · Updated 5 months ago
- ☆18 · Updated 2 years ago
- Code repository for the paper [USENIX Security 2023] "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" ☆25 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks ☆66 · Updated last year
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆14 · Updated last month
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆68 · Updated last year