gitkolento / SecProbe
SecProbe: a task-driven security capability evaluation system for large language models
☆10 · Updated last month
Alternatives and similar repositories for SecProbe:
Users interested in SecProbe are comparing it to the repositories listed below.
- ☆13 · Updated last month
- JailBench: a Chinese benchmark dataset for evaluating jailbreak attack risks in large language models ☆31 · Updated 6 months ago
- ☆66 · Updated 2 months ago
- ☆11 · Updated 10 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆28 · Updated this week
- ☆32 · Updated 3 weeks ago
- ☆15 · Updated this week
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆108 · Updated 3 months ago
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆351Updated last month
- ☆78 · Updated 9 months ago
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,122 · Updated 2 weeks ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆86 · Updated 5 months ago
- ☆37 · Updated 7 months ago
- A summary of adversarial attacks against large language models ☆16 · Updated last year
- 😎 Up-to-date & curated list of awesome Attacks on Large Vision-Language Models papers, methods & resources. ☆184 · Updated last week
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆62 · Updated 3 months ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆182 · Updated 6 months ago
- The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models". ☆277 · Updated 2 months ago
- Accepted by IJCAI-24 Survey Track ☆183 · Updated 4 months ago
- ☆16 · Updated 7 months ago
- Red Queen Dataset and data generation template ☆10 · Updated 3 months ago
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,098 · Updated this week
- Papers and resources related to the security and privacy of LLMs 🤖 ☆467 · Updated last month
- A survey on harmful fine-tuning attacks on large language models ☆124 · Updated this week
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models ☆94 · Updated this week
- Fudan Whitzard (复旦白泽) large language model safety benchmark suite (Summer 2024 edition) ☆30 · Updated 5 months ago
- This GitHub repository summarizes research papers on AI security from the top four academic conferences. ☆105 · Updated last year
- [ICLR 2024] Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆22 · Updated 5 months ago
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆274 · Updated 3 months ago
- ☆23 · Updated 3 months ago