STAIR-BUPT / STAIR-LLMGuardrails
☆12 · Updated last year
Alternatives and similar repositories for STAIR-LLMGuardrails
Users interested in STAIR-LLMGuardrails are comparing it to the libraries listed below.
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆216 · Updated last month
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆218 · Updated last year
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models ☆106 · Updated 2 months ago
- ☆114 · Updated 10 months ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆267 · Updated 4 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Updated 11 months ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆109 · Updated last year
- ☆71 · Updated 7 months ago
- ☆84 · Updated 3 months ago
- ☆25 · Updated last year
- ☆25 · Updated last year
- Awesome jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆82 · Updated this week
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆220 · Updated last month
- Accepted by ECCV 2024 ☆179 · Updated last year
- ☆26 · Updated 9 months ago
- Accepted by IJCAI-24 Survey Track ☆225 · Updated last year
- ☆55 · Updated last year
- ☆137 · Updated 9 months ago
- ☆153 · Updated last month
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆182 · Updated 6 months ago
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆47 · Updated 2 months ago
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆485Updated last month
- Red Queen Dataset and data generation template ☆23 · Updated last year
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" ☆412 · Updated 11 months ago
- ☆54 · Updated last year
- An LLM can Fool Itself: A Prompt-Based Adversarial Attack (ICLR 2024) ☆107 · Updated 11 months ago
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆103 · Updated last year
- ☆156 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 ☆553 · Updated 6 months ago
- Code for ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ☆31 · Updated 11 months ago