Co1lin / CWEval
Simultaneous evaluation of both the functionality and security of LLM-generated code.
☆27 · Updated last month
Alternatives and similar repositories for CWEval
Users interested in CWEval are comparing it to the libraries listed below.
- ☆48 · Updated last year
- ☆21 · Updated 11 months ago
- ☆82 · Updated last month
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆52 · Updated 3 months ago
- ☆108 · Updated 8 months ago
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆15 · Updated 7 months ago
- ☆53 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆209 · Updated 8 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆146 · Updated last year
- ☆119 · Updated 5 months ago
- Backdooring Neural Code Search ☆14 · Updated 2 years ago
- Adversarial Attack for Pre-trained Code Models ☆10 · Updated 3 years ago
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" ☆160 · Updated 6 months ago
- Agent Security Bench (ASB) ☆137 · Updated 3 weeks ago
- ☆11 · Updated last year
- 🔮 Reasoning for Safer Code Generation; 🥇 Winner Solution of Amazon Nova AI Challenge 2025 ☆28 · Updated 2 months ago
- ☆36 · Updated last year
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆60 · Updated last year
- ☆46 · Updated last month
- ☆35 · Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆52 · Updated 7 months ago
- Benchmarking Large Language Models' Resistance to Malicious Code ☆13 · Updated 10 months ago
- Code and data of the EMNLP 2022 paper "Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversaria… ☆61 · Updated 2 years ago
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆45 · Updated 2 weeks ago
- ☆20 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆53 · Updated 3 weeks ago
- Code for the AAAI 2023 paper "CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models" ☆33 · Updated 2 years ago
- Implementation for "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" ☆22 · Updated last year
- A survey on harmful fine-tuning attack for large language model ☆215 · Updated last week
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆160 · Updated 6 months ago