☆29 · Updated Aug 21, 2023
Alternatives and similar repositories for LMSanitator
Users interested in LMSanitator are comparing it to the libraries listed below.
- ☆18 · Updated Aug 15, 2022
- ☆13 · Updated Oct 21, 2021
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification (☆15 · Updated Nov 7, 2023)
- ☆25 · Updated Jun 23, 2021
- ☆26 · Updated Dec 1, 2022
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" (☆26 · Updated Aug 20, 2025)
- ☆32 · Updated Mar 4, 2022
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models (☆19 · Updated Mar 9, 2025)
- ☆14 · Updated Dec 12, 2023
- ☆14 · Updated Feb 26, 2025
- Code and datasets for the Salesforce AI Research paper on prompt leakage and multi-turn threats against LLMs (☆22 · Updated Nov 10, 2025)
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: //124.220.228.133:11107 (☆21 · Updated Aug 10, 2024)
- Composite Backdoor Attacks Against Large Language Models (☆24 · Updated Apr 12, 2024)
- ☆12 · Updated Feb 21, 2022
- RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents (☆24 · Updated Aug 23, 2024)
- ☆15 · Updated Dec 7, 2023
- Unofficial implementation of "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" (☆27 · Updated Jul 6, 2024)
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" (☆44 · Updated Sep 11, 2022)
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning…" (☆19 · Updated Jun 7, 2023)
- ☆27 · Updated Feb 1, 2023
- [WWW '25] Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability (☆18 · Updated May 30, 2025)
- ☆27 · Updated Nov 20, 2023
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models (☆22 · Updated Oct 5, 2025)
- ☆26 · Updated Aug 21, 2024
- Implementation of Self-Supervised Online Adversarial Purification (☆13 · Updated Aug 2, 2021)
- ☆18 · Updated Jun 15, 2021
- Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021) (☆25 · Updated Oct 21, 2021)
- [CVPR 2024] Official code implementation of "Data Poisoning based Backdoor Attacks to Contrastive Learning" (☆16 · Updated Feb 10, 2025)
- Official PyTorch implementation of the ACM MM 19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks" (☆11 · Updated Jun 7, 2021)
- Source code for the ACL 2025 paper "Unveiling Privacy Risks in LLM Agent Memory" (☆29 · Updated Dec 2, 2025)
- A curated list of papers & resources on data poisoning, backdoor attacks, and defenses against them (no longer maintained) (☆289 · Updated Jan 11, 2025)
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) (☆32 · Updated Jul 11, 2022)
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models (☆19 · Updated Feb 18, 2025)
- Code for the paper "Fast and Private Inference of Deep Neural Networks by Co-designing Activation Functions" (☆11 · Updated Mar 13, 2024)
- Two-party privacy-preserving neural network training using Split Learning and Homomorphic Encryption (CKKS scheme) (☆11 · Updated Sep 23, 2025)
- [NeurIPS'24] Official implementation of "PrivCirNet: Efficient Private Inference via Block Circulant Transformation" (☆15 · Updated Feb 26, 2026)
- ESEC/FSE'21: Prediction-Preserving Program Simplification (☆10 · Updated Oct 4, 2022)
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis (☆10 · Updated Sep 23, 2021)
- BERT distillation in practice, including distilling BERT into BiLSTM and TinyBERT (☆13 · Updated Apr 23, 2022)