CS-EVAL / CS-Eval
CS-Eval is a comprehensive evaluation suite for assessing the cybersecurity capabilities of foundation models and general large language models.
☆43 · Updated 8 months ago
Alternatives and similar repositories for CS-Eval
Users interested in CS-Eval are comparing it to the repositories listed below.
- CyberMetric dataset ☆93 · Updated 7 months ago
- ☆47 · Updated 10 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆86 · Updated 6 months ago
- Repository for the paper "HackMentor: Fine-Tuning Large Language Models for Cybersecurity" ☆126 · Updated last year
- ☆100 · Updated last year
- ☆25 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆512 · Updated 10 months ago
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆57 · Updated 3 months ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆49 · Updated last week
- ☆26 · Updated 9 months ago
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… ☆74 · Updated last year
- SC-Safety: a multi-round adversarial safety benchmark for Chinese large language models ☆142 · Updated last year
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆101 · Updated 9 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆91 · Updated last week
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆73 · Updated 2 weeks ago
- Fudan BaiZe (白泽) LLM safety benchmark suite (Summer 2024 edition) ☆42 · Updated last year
- ☆35 · Updated last year
- Agent Security Bench (ASB) ☆100 · Updated last month
- This repo contains the code for the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G… ☆35 · Updated last month
- ☆70 · Updated last year
- ☆82 · Updated 8 months ago
- Awesome Large Language Models for Vulnerability Detection ☆207 · Updated this week
- An autonomous LLM-agent for large-scale, repository-level code auditing ☆192 · Updated 3 weeks ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] ☆236 · Updated last week
- AIBugHunter: A Practical Tool for Predicting, Classifying and Repairing Software Vulnerabilities ☆43 · Updated last year
- This project aims to consolidate and share high-quality resources and tools across the cybersecurity domain ☆229 · Updated last week
- AutoAudit: a large language model for cybersecurity ☆343 · Updated 5 months ago
- JailBench: a Chinese dataset for evaluating jailbreak attack risks in large language models [PAKDD 2025] ☆110 · Updated 5 months ago
- A vulnerability rule library: an open-source project dedicated to helping developers identify and avoid common security vulnerabilities by collecting, organizing, and analyzing vulnerability patterns across programming languages and commonly used libraries, with corresponding mitigations and best practices ☆25 · Updated this week
- 🪐 A Database of Existing Security Vulnerability Patches to Enable Evaluation of Techniques (single-commit; multi-language) ☆41 · Updated 3 months ago