purdue-hcss / SecureChainLinks
☆46 · Updated 3 months ago
Alternatives and similar repositories for SecureChain
Users interested in SecureChain are comparing it to the repositories listed below.
- Simultaneous evaluation of both the functionality and security of LLM-generated code. ☆30 · Updated last month
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆16 · Updated 9 months ago
- ☆20 · Updated last year
- Official implementation of the NeurIPS 2024 paper "BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens" ☆28 · Updated 9 months ago
- ☆125 · Updated last year
- Adversarial Attack for Pre-trained Code Models ☆10 · Updated 3 years ago
- 🔮 Reasoning for Safer Code Generation; 🥇 winning solution of the Amazon Nova AI Challenge 2025 ☆34 · Updated 4 months ago
- ☆15 · Updated 2 years ago
- Replication package for "Natural Attack for Pre-trained Models of Code", ICSE 2022 ☆49 · Updated last month
- Machine Learning & Security Seminar @ Purdue University ☆25 · Updated 2 years ago
- Official code repository for the paper "Exploiting the Adversarial Example Vulnerability of Transfer Learning of Source Code" ☆16 · Updated 3 months ago
- Backdooring Neural Code Search ☆14 · Updated 2 years ago
- White-box Fairness Testing through Adversarial Sampling ☆13 · Updated 4 years ago
- ☆50 · Updated last year
- CodeGuard+: Constrained Decoding for Secure Code Generation ☆17 · Updated last year
- ☆21 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆221 · Updated last month
- Siren: Byzantine-robust Federated Learning via Proactive Alarming (SoCC '21) ☆11 · Updated last year
- Adversarial Robustness for Code ☆16 · Updated 4 years ago
- ☆16 · Updated 2 years ago
- Official repo for the FSE '24 paper "CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking" ☆16 · Updated 9 months ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆50 · Updated 6 months ago
- ☆84 · Updated 3 months ago
- ☆15 · Updated 2 years ago
- ☆37 · Updated last year
- Benchmarking Large Language Models' Resistance to Malicious Code ☆13 · Updated last year
- [NeurIPS '24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆61 · Updated last month
- [CIKM 2024] Trojan Activation Attack: Attacking Large Language Models using Activation Steering for Safety Alignment ☆29 · Updated last year
- ☆18 · Updated last year
- Making code editing up to 7.7× faster using multi-layer speculation ☆24 · Updated 10 months ago