☆21 · Mar 17, 2025 · Updated 11 months ago
Alternatives and similar repositories for AI-Supervision-Risk
Users interested in AI-Supervision-Risk are comparing it to the libraries listed below.
- ☆12 · May 6, 2022 · Updated 3 years ago
- [NeurIPS'23] Binary Classification with Confidence Difference ☆10 · May 13, 2024 · Updated last year
- Personal settings for Linux tools, including zsh, vim, tmux, and pip ☆11 · Dec 2, 2019 · Updated 6 years ago
- TFLlib: Trustworthy Federated Learning Library and Benchmark ☆62 · Nov 15, 2025 · Updated 3 months ago
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆13 · Dec 16, 2024 · Updated last year
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking ☆13 · Sep 8, 2023 · Updated 2 years ago
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆19 · Sep 18, 2025 · Updated 5 months ago
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Aug 17, 2023 · Updated 2 years ago
- Code for the CCE algorithm proposed in "Towards Compositionality in Concept Learning" at ICML 2024 ☆17 · Jun 2, 2024 · Updated last year
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆17 · Feb 27, 2025 · Updated last year
- Stanford CoreNLP annotator implementing jMWE for detecting multi-word expressions / collocations ☆15 · Jan 6, 2017 · Updated 9 years ago
- [NeurIPS 2023] "Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning" by Yihua Zhang*, Yimeng Zhang*, … ☆14 · Oct 12, 2023 · Updated 2 years ago
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" ☆24 · May 20, 2025 · Updated 9 months ago
- The code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 · Feb 26, 2024 · Updated 2 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆31 · Dec 12, 2021 · Updated 4 years ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆18 · Apr 24, 2024 · Updated last year
- Official PyTorch implementation of "Query-Efficient Black-Box Red Teaming via Bayesian Optimization" (ACL'23) ☆15 · Jul 9, 2023 · Updated 2 years ago
- Code for the paper "Multitasking Framework for Unsupervised Simple Definition Generation" at ACL 2022 ☆17 · Aug 17, 2022 · Updated 3 years ago
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" ☆24 · Aug 20, 2025 · Updated 6 months ago
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆58 · Oct 1, 2025 · Updated 5 months ago
- ☆27 · Apr 11, 2023 · Updated 2 years ago
- ☆54 · May 22, 2025 · Updated 9 months ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Feb 18, 2025 · Updated last year
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Aug 9, 2023 · Updated 2 years ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆21 · Oct 1, 2022 · Updated 3 years ago
- [NeurIPS'20] Semi-Supervised Partial Label Learning via Confidence-Rated Margin Maximization ☆21 · May 29, 2022 · Updated 3 years ago
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆27 · Mar 15, 2025 · Updated 11 months ago
- [ACL'25 Main] SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! | Help your LLM make better use of context documents: a simple attention-based approach ☆25 · Feb 17, 2025 · Updated last year
- Code and data for the EACL 2024 paper "Contextualization Distillation from Large Language Models for Knowledge Graph Completion" ☆24 · Oct 17, 2024 · Updated last year
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆33 · May 21, 2025 · Updated 9 months ago
- ☆29 · Aug 31, 2025 · Updated 6 months ago
- Code repo for the model organisms and convergent directions of EM papers ☆51 · Sep 22, 2025 · Updated 5 months ago
- ☆529 · Jul 25, 2025 · Updated 7 months ago
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆27 · Apr 9, 2024 · Updated last year
- Source code for MEA-Defender; the paper is accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 ☆29 · Nov 19, 2023 · Updated 2 years ago
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆28 · Oct 31, 2022 · Updated 3 years ago
- ☆646 · Jul 29, 2025 · Updated 7 months ago
- Machine Learning & Security Seminar @ Purdue University ☆25 · May 9, 2023 · Updated 2 years ago
- ☆11 · Dec 23, 2024 · Updated last year