ConvexPolytopePosioning
☆37 · Jan 10, 2020 · Updated 6 years ago
Alternatives and similar repositories for ConvexPolytopePosioning
Users that are interested in ConvexPolytopePosioning are comparing it to the libraries listed below.
- Bullseye Polytope Clean-Label Poisoning Attack ☆17 · Nov 5, 2020 · Updated 5 years ago
- KNN Defense Against Clean Label Poisoning Attacks ☆13 · Sep 24, 2021 · Updated 4 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Nov 11, 2020 · Updated 5 years ago
- Craft poisoned data using MetaPoison ☆54 · Apr 5, 2021 · Updated 5 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3) ☆75 · Apr 12, 2018 · Updated 8 years ago
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Mar 28, 2020 · Updated 6 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Nov 22, 2020 · Updated 5 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 6 years ago
- PyTorch implementation for the paper Classification from Positive, Unlabeled and Biased Negative Data. ☆19 · Nov 21, 2023 · Updated 2 years ago
- Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion ☆11 · Apr 1, 2024 · Updated 2 years ago
- ☆29 · Jun 17, 2024 · Updated last year
- ☆26 · Jan 25, 2019 · Updated 7 years ago
- Official codebase of our paper "Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Network For Secure Inferen… ☆15 · Nov 21, 2022 · Updated 3 years ago
- ☆27 · Oct 17, 2022 · Updated 3 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆83 · Jul 20, 2023 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Oct 3, 2023 · Updated 2 years ago
- Code and datasets for the Salesforce AI research paper on prompt leakage and multi-turn threats against LLMs ☆22 · Nov 10, 2025 · Updated 5 months ago
- ☆12 · Dec 9, 2020 · Updated 5 years ago
- ☆13 · Oct 21, 2021 · Updated 4 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆13 · Sep 6, 2023 · Updated 2 years ago
- PyTorch implementation of backdoor unlearning. ☆21 · Jun 8, 2022 · Updated 3 years ago
- The code reproduces the results of the experiments in the paper. In particular, it performs experiments in which machine-learning models … ☆20 · Aug 16, 2021 · Updated 4 years ago
- TrojanZoo provides a universal PyTorch platform to conduct security research (especially backdoor attacks/defenses) of image classifica… ☆303 · Aug 25, 2025 · Updated 8 months ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆14 · Aug 22, 2022 · Updated 3 years ago
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" ☆26 · Jan 7, 2022 · Updated 4 years ago
- Code for the paper "Unbiased Supervised Contrastive Learning" | ICLR 2023 https://openreview.net/forum?id=Ph5cJSfD2XN ☆12 · Sep 22, 2023 · Updated 2 years ago
- ☆27 · Dec 15, 2022 · Updated 3 years ago
- A unified benchmark problem for data poisoning attacks ☆161 · Oct 4, 2023 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆18 · May 13, 2019 · Updated 6 years ago
- ☆18 · Jun 15, 2021 · Updated 4 years ago
- Code for the ICLR 2023 paper "Better Generative Replay for Continual Federated Learning" ☆33 · Apr 23, 2023 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Oct 25, 2019 · Updated 6 years ago
- A paper summary of Backdoor Attack against Neural Network ☆13 · Aug 9, 2019 · Updated 6 years ago
- Official Implementation of "Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning" ☆13 · Feb 10, 2025 · Updated last year
- Code for the paper: On Symmetric Losses for Learning from Corrupted Labels ☆19 · May 11, 2019 · Updated 6 years ago
- ☆54 · Sep 11, 2021 · Updated 4 years ago
- ☆31 · Oct 7, 2021 · Updated 4 years ago
- ☆34 · Mar 28, 2022 · Updated 4 years ago
- A modular evaluation metrics and a benchmark for large-scale federated learning ☆12 · Jul 25, 2024 · Updated last year