ZhaofengWu / counterfactual-evaluation
☆56 · Updated 3 months ago
Alternatives and similar repositories for counterfactual-evaluation
Users interested in counterfactual-evaluation are comparing it to the repositories listed below.
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆109 · Updated 2 years ago
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant se… ☆60 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- ☆44 · Updated 11 months ago
- ☆78 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆132 · Updated 2 years ago
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task ☆147 · Updated 10 months ago
- ☆177 · Updated last year
- ☆100 · Updated last year
- ☆29 · Updated last year
- ☆28 · Updated last year
- ☆48 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 11 months ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning" ☆102 · Updated 2 years ago
- [EMNLP 2022 Findings] Code for the paper "ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback" ☆27 · Updated 2 years ago
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated 2 years ago
- ☆75 · Updated last year
- Active Example Selection for In-Context Learning (EMNLP 2022) ☆49 · Updated last year
- ☆41 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆41 · Updated 2 years ago
- Supporting code for the ReCEval paper ☆29 · Updated 11 months ago
- ☆27 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- ☆87 · Updated 2 years ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆16 · Updated 7 months ago
- ☆75 · Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning" ☆65 · Updated 2 years ago
- Analyzing LLM alignment via token distribution shift ☆16 · Updated last year
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago