CS-EVAL / CS-Eval
CS-Eval is a comprehensive suite for evaluating the cybersecurity capabilities of foundation models and large language models.
☆43 · Updated 7 months ago
Alternatives and similar repositories for CS-Eval
Users interested in CS-Eval are comparing it with the repositories listed below
- CyberMetric dataset ☆91 · Updated 5 months ago
- ☆35 · Updated 11 months ago
- Fudan 白泽 (BaiZe) LLM safety benchmark suite (Summer 2024 edition) ☆38 · Updated 10 months ago
- ☆98 · Updated last year
- The repository of the paper "HackMentor: Fine-Tuning Large Language Models for Cybersecurity" ☆121 · Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆82 · Updated 4 months ago
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆58 · Updated last month
- ☆44 · Updated 8 months ago
- SC-Safety: a multi-turn adversarial safety benchmark for Chinese LLMs ☆137 · Updated last year
- ☆66 · Updated 11 months ago
- JailBench: a Chinese dataset for evaluating jailbreak attack risks of large language models [PAKDD 2025] ☆98 · Updated 3 months ago
- Agent Security Bench (ASB) ☆89 · Updated last week
- Awesome Large Language Models for Vulnerability Detection ☆160 · Updated this week
- [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆97 · Updated 8 months ago
- The most comprehensive prompt hacking course available, recording our progress on a prompt engineering and prompt hacking cour… ☆84 · Updated 2 months ago
- ☆25 · Updated last year
- Benchmark data from the article "AutoPT: How Far Are We from End2End Automated Web Penetration Testing?" ☆16 · Updated 7 months ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆30 · Updated this week
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] ☆225 · Updated last year
- ☆26 · Updated 8 months ago
- 🪐 A Database of Existing Security Vulnerability Patches to Enable Evaluation of Techniques (single-commit; multi-language) ☆40 · Updated 2 months ago
- ☆36 · Updated last month
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ☆61 · Updated last week
- ☆29 · Updated 5 months ago
- This project aims to consolidate and share high-quality resources and tools across the cybersecurity domain ☆212 · Updated 2 months ago
- Investigating Large Language Models for Code Vulnerability Detection: An Experimental Study ☆31 · Updated 3 months ago
- The D-CIPHER and NYU CTF baseline LLM agents built for NYU CTF Bench ☆81 · Updated 2 months ago
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques", publis… ☆71 · Updated last year
- ☆55 · Updated last month
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆198 · Updated 8 months ago