BunsenFeng / PoliLean
Code for "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models". ACL 2023. Best Paper Award.
☆40 · Updated last year
Alternatives and similar repositories for PoliLean
Users interested in PoliLean are comparing it to the repositories listed below.
- ☆88 · Updated 2 years ago
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆25 · Updated 8 months ago
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant se… ☆64 · Updated 2 years ago
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975☆38Updated last year
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners"☆111Updated 2 years ago
- ☆46Updated last year
- ☆86Updated 3 years ago
- Resolving Knowledge Conflicts in Large Language Models, COLM 2024☆18Updated last month
- Paper list of "The Life Cycle of Knowledge in Big Language Models: A Survey"☆59Updated 2 years ago
- ☆116Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge"☆78Updated 2 years ago
- ☆54Updated last year
- AbstainQA, ACL 2024☆28Updated last year
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”.☆103Updated 2 years ago
- ☆177Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model☆69Updated 3 years ago
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" in EMNLP 2023.☆41Updated last year
- The LM Contamination Index is a manually created database of contamination evidence for LMs. ☆81 · Updated last year
- Code and datasets for our ACL 2023 paper on cognitive reframing of negative thoughts ☆65 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆41 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆48 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆61 · Updated last year
- This repository contains data, code and models for contextual noncompliance. ☆24 · Updated last year
- ☆50 · Updated 3 years ago
- ☆78 · Updated last year
- Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆131 · Updated 2 years ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆118 · Updated last year
- Do Large Language Models Know What They Don’t Know? ☆101 · Updated last year