gitkolento / SecProbe
SecProbe: a task-driven evaluation system for the safety capabilities of large language models
☆12 · Updated 4 months ago
Alternatives and similar repositories for SecProbe:
Users interested in SecProbe are comparing it to the repositories listed below:
- ☆13 · Updated last month
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆129 · Updated last month
- ☆8 · Updated 6 months ago
- JailBench: a Chinese dataset for evaluating jailbreak attack risks of large language models [PAKDD 2025] ☆73 · Updated 3 weeks ago
- ☆45 · Updated 3 months ago
- "Stones from other hills may polish one's jade": Fudan Whitzard-AI (白泽智能) releases JADE-DB, a demo dataset targeting domestic open-source and foreign commercial large models ☆391 · Updated 2 weeks ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆86 · Updated 5 months ago
- Fudan Whitzard large-model safety benchmark suite (Summer 2024 edition) ☆35 · Updated 8 months ago
- ☆79 · Updated 11 months ago
- A summary of adversarial attacks against large language models ☆24 · Updated last year
- ☆80 · Updated last month
- ☆16 · Updated 2 weeks ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆135 · Updated last month
- ☆20 · Updated 5 months ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆179 · Updated 6 months ago
- ☆17 · Updated last month
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLMs ☆29 · Updated 2 months ago
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models ☆127 · Updated last month
- 😎 An up-to-date, curated list of awesome papers, methods, and resources on attacks against Large Vision-Language Models ☆256 · Updated this week
- ☆13 · Updated 8 months ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆491 · Updated 4 months ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] ☆208 · Updated 9 months ago
- Accepted by ECCV 2024 ☆117 · Updated 5 months ago
- ☆28 · Updated 6 months ago
- Awesome Large Reasoning Model (LRM) Safety: a collection of security-related research on large reasoning models such as … ☆53 · Updated this week
- Agent Security Bench (ASB) ☆69 · Updated this week
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,288 · Updated 2 weeks ago
- ☆43 · Updated 9 months ago
- A GitHub repository summarizing research papers on AI security from the four top academic conferences ☆108 · Updated last year
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆51 · Updated 7 months ago