gitkolento / SecProbe
SecProbe: a task-driven evaluation system for the security capabilities of large language models
☆14 · Updated last year
Alternatives and similar repositories for SecProbe
Users who are interested in SecProbe are comparing it to the repositories listed below.
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,763 · Updated this week
- ☆12 · Updated last year
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆494Updated 2 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆224 · Updated 2 months ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆230 · Updated last week
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) ☆1,847 · Updated last week
- ☆28 · Updated 11 months ago
- ☆73 · Updated 2 weeks ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆271 · Updated 6 months ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆559 · Updated 7 months ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆273 · Updated this week
- ☆86 · Updated 5 months ago
- 😎 Up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources ☆485 · Updated last week
- ☆20 · Updated last year
- This GitHub repository summarizes a list of research papers on AI security from the four top academic conferences ☆175 · Updated 8 months ago
- ☆37 · Updated last year
- A collection list for Large Language Model (LLM) watermarks ☆56 · Updated 11 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLMs ☆39 · Updated last year
- Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024) ☆26 · Updated last year
- ☆63 · Updated 8 months ago
- ☆17 · Updated last year
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆427 · Updated last year
- A survey on harmful fine-tuning attacks for large language models ☆231 · Updated 3 weeks ago
- A summary of adversarial attacks against large language models ☆40 · Updated 2 years ago
- [ACL 2024 Main] Data and Code for WaterBench: Towards Holistic Evaluation of LLM Watermarks ☆30 · Updated 2 years ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆564 · Updated last year
- ☆26 · Updated last year
- ☆37 · Updated last year
- ☆162 · Updated last year
- ☆27 · Updated last year