SALT-NLP / chain-of-thought-bias
☆25 · Updated 5 months ago
Alternatives and similar repositories for chain-of-thought-bias:
Users interested in chain-of-thought-bias are comparing it to the repositories listed below.
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [EMNLP 2022] "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ☆37 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆27 · Updated 11 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆80 · Updated 6 months ago
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua…" ☆32 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆95 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆106 · Updated 6 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 7 months ago
- [EMNLP 2024] Official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 5 months ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 8 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆54 · Updated 11 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 8 months ago
- [ICLR 2024] Paper examining properties of safety tuning and exaggerated safety ☆78 · Updated 10 months ago
- Repo for the paper "Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge" ☆13 · Updated last year
- Collection of papers on models' trustworthy applications, covering topics like model evaluation & analysis, security, c… ☆20 · Updated last year
- AbstainQA, ACL 2024 ☆25 · Updated 5 months ago
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023) https://arxiv.org/abs/2305.14888 ☆35 · Updated 9 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆29 · Updated 4 months ago