zzwjames / DPGBA
An official implementation of "Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective" (KDD 2024)
☆12 · Updated last year
Alternatives and similar repositories for DPGBA
Users interested in DPGBA are comparing it to the repositories listed below.
- An official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023) ☆61 · Updated 2 years ago
- ☆56 · Updated 3 years ago
- ☆33 · Updated 2 years ago
- A PyTorch implementation of "Backdoor Attacks to Graph Neural Networks" (SACMAT '21) ☆43 · Updated 4 years ago
- The implementation of the paper "AlphaSteer: Learning Refusal Steering with Principled Null-Space Constraint" ☆30 · Updated last month
- [ICLR 2024] Official repo of "BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models" ☆44 · Updated last year
- Official implementation of "Graph Unlearning" (ACM CCS 2022) ☆54 · Updated 5 months ago
- ☆27 · Updated 3 years ago
- The Python implementation of "UA-FedRec: Untargeted Attack on Federated News Recommendation" (KDD 2023) ☆19 · Updated 3 years ago
- [ICML 2023] "On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation" ☆23 · Updated 2 years ago
- [ICLR 2022] "Understanding and Improving Graph Injection Attack by Promoting Unnoticeability" ☆38 · Updated 2 years ago
- The official implementation of the NeurIPS 2024 Datasets and Benchmarks Track paper "GLBench: A Comprehensive Benchmark for Graphs with Large Langua…" ☆69 · Updated last year
- "In-Context Unlearning: Language Models as Few Shot Unlearners" (Martin Pawelczyk, Seth Neel*, and Himabindu Lakkaraju*; ICML 2024) ☆28 · Updated 2 years ago
- The implementation of OODGAT from KDD 2022: "Learning on Graphs with Out-of-Distribution Nodes" ☆23 · Updated 3 years ago
- Source code of the KDD 2022 paper "Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN" ☆28 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆79 · Updated this week
- ☆61 · Updated 7 months ago
- The code implementation for the paper "Data Poisoning Attacks to Deep Learning Based Recommender Systems" ☆17 · Updated 3 years ago
- ☆26 · Updated last year
- Certified (approximate) machine unlearning for simplified graph convolutional networks (SGCs) with theoretical guarantees (ICLR 2023) ☆20 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- ☆13 · Updated last year
- [ACL '25 Main] "SelfElicit: Your Language Model Secretly Knows Where the Relevant Evidence Is!" | Help your LLM make better use of context documents: a simple attention-based approach ☆24 · Updated 10 months ago
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 ☆37 · Updated last year
- A unified framework for recommender-system attacks ☆32 · Updated last year
- A PyTorch implementation of "Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?" (KDD 2025) ☆29 · Updated 2 months ago
- The repo for a survey of bias and fairness in IR with LLMs ☆59 · Updated 4 months ago
- Paper list for Fair Graph Learning (FairGL) ☆144 · Updated last year
- The code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆47 · Updated 2 months ago