ASTRAL-Group / SVIP
SVIP: Towards Verifiable Inference of Open-Source Large Language Models
☆13 · Updated 6 months ago
Alternatives and similar repositories for SVIP
Users interested in SVIP are comparing it to the repositories listed below.
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆20 · Updated 2 years ago
- Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆61 · Updated 2 years ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago
- ☆24 · Updated last year
- Official codes for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated 2 years ago
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆66 · Updated last year
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated last year
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆58 · Updated 11 months ago
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆19 · Updated 2 years ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆33 · Updated 9 months ago
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated 2 years ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated last year
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆25 · Updated last year
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) ☆10 · Updated last year
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Gold… ☆25 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- ☆33 · Updated 3 years ago
- The collection of papers about Private Evolution ☆17 · Updated 2 months ago
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable". ☆26 · Updated 9 months ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆83 · Updated 2 years ago
- ☆70 · Updated 10 months ago
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆16 · Updated last year
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks"