i-gallegos / Fair-LLM-BenchmarkLinks
☆156Updated 2 years ago
Alternatives and similar repositories for Fair-LLM-Benchmark
Users that are interested in Fair-LLM-Benchmark are comparing it to the libraries listed below
Sorting:
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models.☆153Updated 4 months ago
- Repository for the Bias Benchmark for QA dataset.☆133Updated 2 years ago
- Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper☆85Updated 4 years ago
- This repository contains the data and code introduced in the paper "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Maske…☆127Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models☆86Updated last year
- Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense…☆184Updated 2 years ago
- ☆182Updated last year
- ☆38Updated 2 years ago
- ☆28Updated last year
- ☆57Updated 2 years ago
- Source code and data for ADEPT: A DEbiasing PrompT Framework (AAAI-23).☆15Updated last year
- LLM Unlearning☆178Updated 2 years ago
- ☆17Updated 2 years ago
- ☆26Updated 2 years ago
- Code and data for Marked Personas (ACL 2023)☆28Updated 2 years ago
- Code & Data for the paper "RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models"☆32Updated 4 years ago
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin…☆106Updated last year
- A resource repository for representation engineering in large language models☆146Updated last year
- ICLR2024 Paper. Showing properties of safety tuning and exaggerated safety.☆91Updated last year
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156☆46Updated 2 years ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models"☆122Updated 10 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.☆85Updated 10 months ago
- ☆116Updated last year
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment☆108Updated last year
- The lastest paper about detection of LLM-generated text and code☆282Updated 6 months ago
- UnQovering Stereotyping Biases via Underspecified Questions - EMNLP 2020 (Findings)☆21Updated 4 years ago
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?"☆80Updated last year
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization☆40Updated last year
- ☆12Updated last year
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models"☆103Updated 7 months ago