Co1lin / CWEval
Simultaneous evaluation of both the functionality and security of LLM-generated code.
☆27 · Updated 2 months ago
Alternatives and similar repositories for CWEval
Users interested in CWEval are comparing it to the libraries listed below.
- ☆48 · Updated last year
- Adversarial Attack for Pre-trained Code Models ☆10 · Updated 3 years ago
- 🔮 Reasoning for Safer Code Generation; 🥇 Winner Solution of Amazon Nova AI Challenge 2025 ☆31 · Updated 2 months ago
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆15 · Updated 7 months ago
- ☆82 · Updated 2 months ago
- ☆21 · Updated last year
- Backdooring Neural Code Search ☆14 · Updated 2 years ago
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆55 · Updated this week
- ☆55 · Updated last year
- ☆11 · Updated last year
- Benchmarking Large Language Models' Resistance to Malicious Code ☆13 · Updated 11 months ago
- Code for ACL (main) paper "JumpCoder: Go Beyond Autoregressive Coder via Online Modification" ☆27 · Updated last year
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment ☆28 · Updated last year
- ☆128 · Updated 2 weeks ago
- [ACL2024-Main] Data and Code for WaterBench: Towards Holistic Evaluation of LLM Watermarks ☆28 · Updated 2 years ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆38 · Updated 8 months ago
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆54 · Updated last month
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆149 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆215 · Updated 8 months ago
- ☆39 · Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆52 · Updated 7 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆84 · Updated last year
- LLM Unlearning ☆177 · Updated 2 years ago
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆60 · Updated last year
- ☆111 · Updated 9 months ago
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆35 · Updated 2 months ago
- A survey on harmful fine-tuning attacks for large language models ☆220 · Updated this week
- A Systematic Literature Review on Large Language Models for Automated Program Repair ☆215 · Updated last week
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆46 · Updated last month
- ☆124 · Updated last year