SolidShen / BAIT
🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access
⭐43 · Updated 3 months ago
Alternatives and similar repositories for BAIT
Users interested in BAIT are comparing it to the libraries listed below.
- ⭐16 · Updated last year
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" · ⭐19 · Updated 3 weeks ago
- ⭐34 · Updated 11 months ago
- Machine Learning & Security Seminar @Purdue University · ⭐25 · Updated 2 years ago
- Code for paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" · ⭐22 · Updated last year
- ⭐20 · Updated last year
- ⭐35 · Updated 11 months ago
- ⭐62 · Updated 8 months ago
- ⭐25 · Updated last year
- ⭐18 · Updated 3 years ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning · ⭐17 · Updated last year
- ⭐25 · Updated 3 years ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… · ⭐51 · Updated 5 months ago
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models · ⭐17 · Updated 8 months ago
- Official Implementation of NeurIPS 2024 paper - BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens · ⭐24 · Updated 5 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) · ⭐34 · Updated 2 months ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models · ⭐187 · Updated 6 months ago
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models · ⭐211 · Updated last month
- Agent Security Bench (ASB) · ⭐119 · Updated 3 months ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense · ⭐17 · Updated last year
- ⭐12 · Updated 6 months ago
- ⭐24 · Updated 7 months ago
- ⭐19 · Updated last year
- [NDSS'25] The official implementation of safety misalignment. · ⭐16 · Updated 8 months ago
- 🔥 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… · ⭐56 · Updated last month
- Code&Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] · ⭐89 · Updated 11 months ago
- ⭐82 · Updated 2 weeks ago
- ⭐14 · Updated this week
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning · ⭐15 · Updated 8 months ago
- ⭐66 · Updated 4 years ago