LLM-Integrity-Guard / JailMine
☆19 · Updated last year
Alternatives and similar repositories for JailMine
Users interested in JailMine are comparing it to the libraries listed below.
- ASCII-generator in Go ☆19 · Updated 4 months ago
- The official repository of the paper "The Digital Cybersecurity Expert: How Far Have We Come?" presented at IEEE S&P 2025 ☆19 · Updated 2 months ago
- Code for paper "SrcMarker: Dual-Channel Source Code Watermarking via Scalable Code Transformations" (IEEE S&P 2024) ☆29 · Updated last year
- Catch IPv6 NS on WAN and send it to LAN. (Should) make OpenWrt IPv6 ndp relay work. ☆13 · Updated last month
- Modified qemu for binary-only kernel tracing, address sanitizer and so on ☆19 · Updated last month
- ☆24 · Updated 6 months ago
- Academic Papers about LLM Application on Security ☆181 · Updated last month
- [EMNLP 24] Official Implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆16 · Updated 5 months ago
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆15 · Updated 4 months ago
- ☆82 · Updated last year
- ☆35 · Updated 10 months ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆101 · Updated 9 months ago
- This repository provides studies on the security of language models for code (CodeLMs). ☆50 · Updated 5 months ago
- Benchmarking Large Language Models' Resistance to Malicious Code ☆12 · Updated 8 months ago
- ☆26 · Updated 9 months ago
- Deploy and customize our own pwn.college - pwn.hust.college ☆56 · Updated this week
- ☆15 · Updated 2 years ago
- Code for paper "The Philosopher’s Stone: Trojaning Plugins of Large Language Models" ☆20 · Updated 10 months ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆41 · Updated 6 months ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆35 · Updated 2 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆48 · Updated 4 months ago
- ☆223 · Updated last year
- Code for Voice Jailbreak Attacks Against GPT-4o. ☆32 · Updated last year
- Official repo for FSE'24 paper "CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking" ☆16 · Updated 5 months ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- ☆120 · Updated last year
- Agent Security Bench (ASB) ☆102 · Updated last month
- The official repository for the guided jailbreak benchmark ☆11 · Updated last week
- ☆29 · Updated 10 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆183 · Updated 5 months ago