jaehunjung1 / Maieutic-Prompting
☆51 · Updated last year
Alternatives and similar repositories for Maieutic-Prompting
Users interested in Maieutic-Prompting are comparing it to the repositories listed below
- ☆75 · Updated last year
- ☆176 · Updated last year
- Code for Editing Factual Knowledge in Language Models ☆139 · Updated 3 years ago
- [EMNLP 2022] TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models ☆74 · Updated last year
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ☆38 · Updated last year
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆102 · Updated 2 years ago
- ☆88 · Updated 2 years ago
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆112 · Updated 3 years ago
- Entity-Based Knowledge Conflicts in Question Answering. Code repo for the EMNLP 2021 paper: https://aclanthology.org/2021.emnlp-main.565/ ☆73 · Updated 3 years ago
- ☆86 · Updated 2 years ago
- ☆35 · Updated 3 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆110 · Updated 2 years ago
- Official code for the papers "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Const… ☆63 · Updated last year
- ☆22 · Updated 3 years ago
- ☆36 · Updated last year
- [ICLR 2022] Towards Continual Knowledge Learning of Language Models ☆92 · Updated 2 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- ☆110 · Updated 3 years ago
- ☆50 · Updated 2 years ago
- ☆45 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆96 · Updated 4 years ago
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆60 · Updated 2 years ago
- ☆44 · Updated last year
- ☆64 · Updated 2 years ago
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆78 · Updated 2 years ago
- Code and data for the NeurIPS 2021 paper "A Dataset for Answering Time-Sensitive Questions" ☆73 · Updated 3 years ago
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆47 · Updated last year
- Repository for the ACL 2022 paper "Mix and Match: Learning-free Controllable Text Generation using Energy Language Models" ☆44 · Updated 3 years ago