purdue-hcss / SecureChain
☆46 · Updated last month
Alternatives and similar repositories for SecureChain
Users interested in SecureChain are comparing it to the repositories listed below
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆15 · Updated 6 months ago
- ☆20 · Updated last year
- Official Implementation of NeurIPS 2024 paper - BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens ☆24 · Updated 6 months ago
- 🔮 Reasoning for Safer Code Generation; 🥇 Winner Solution of Amazon Nova AI Challenge 2025 ☆26 · Updated last month
- ☆15 · Updated last year
- ☆122 · Updated last year
- ☆49 · Updated last year
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated 2 years ago
- Simultaneous evaluation on both functionality and security of LLM-generated code. ☆26 · Updated 3 weeks ago
- Backdooring Neural Code Search ☆14 · Updated 2 years ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆44 · Updated 4 months ago
- White-box Fairness Testing through Adversarial Sampling ☆13 · Updated 4 years ago
- ☆35 · Updated 11 months ago
- ☆21 · Updated 10 months ago
- Replication Package for "Natural Attack for Pre-trained Models of Code", ICSE 2022 ☆47 · Updated last year
- Official repo for FSE'24 paper "CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking" ☆16 · Updated 6 months ago
- ☆15 · Updated last year
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆49 · Updated 2 months ago
- Siren: Byzantine-robust Federated Learning via Proactive Alarming (SoCC '21) ☆11 · Updated last year
- ☆18 · Updated 8 months ago
- This is the official code repository for the paper "Exploiting the Adversarial Example Vulnerability of Transfer Learning of Source Code". ☆13 · Updated 2 weeks ago
- Benchmarking Large Language Models' Resistance to Malicious Code ☆12 · Updated 10 months ago
- The official code for "An Engorgio Prompt Makes Large Language Model Babble on" ☆14 · Updated last month
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆52 · Updated 6 months ago
- [ACL 2024] The official GitHub repo for the paper "The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Pe… ☆78 · Updated last year
- ☆16 · Updated last year
- Adversarial Attack for Pre-trained Code Models ☆10 · Updated 3 years ago
- ☆18 · Updated last year
- ☆11 · Updated 11 months ago
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆60 · Updated 11 months ago