snu-mllab / Bayesian-Red-Teaming
Official PyTorch implementation of "Query-Efficient Black-Box Red Teaming via Bayesian Optimization" (ACL'23)
☆15 · Updated 2 years ago
Alternatives and similar repositories for Bayesian-Red-Teaming
Users interested in Bayesian-Red-Teaming are comparing it to the libraries listed below
- This repository contains the official code for the paper: "Prompt Injection: Parameterization of Fixed Inputs" ☆32 · Updated 10 months ago
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con… ☆42 · Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆82 · Updated 10 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- Code for "Universal Adversarial Triggers Are Not Universal." ☆17 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆18 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆36 · Updated last year
- ☆29 · Updated last year
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- [EMNLP 2024] Official implementation of "Hierarchical Deconstruction of LLM Reasoning: A Graph-Based Framework for Analyzing Knowledge Ut… ☆21 · Updated 8 months ago
- This is the official repo for Towards Uncertainty-Aware Language Agent. ☆26 · Updated 11 months ago
- ☆43 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
- ☆20 · Updated 11 months ago
- ☆27 · Updated 2 years ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- ☆38 · Updated last year
- ☆10 · Updated last year
- ☆41 · Updated 10 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆74 · Updated 5 months ago
- ☆45 · Updated 11 months ago
- ☆35 · Updated 7 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆24 · Updated 7 months ago
- ☆23 · Updated last year
- ☆13 · Updated last month
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆27 · Updated last year
- ☆28 · Updated last year
- Code for the paper "Reasoning Models Better Express Their Confidence" ☆17 · Updated 2 months ago