successar / Eraser-Benchmark-Baseline-Models
Baseline for ERASER benchmark
☆17 · Updated 2 years ago
Alternatives and similar repositories for Eraser-Benchmark-Baseline-Models
Users interested in Eraser-Benchmark-Baseline-Models are comparing it to the libraries listed below.
- Implementation for https://arxiv.org/abs/2005.00652 ☆28 · Updated 2 years ago
- ☆24 · Updated 4 years ago
- ☆20 · Updated 3 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆99 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- NILE: Natural Language Inference with Faithful Natural Language Explanations ☆30 · Updated 2 years ago
- A unified approach to explain conditional text generation models. PyTorch. The code of the paper "Local Explanation of Dialogue Response Gene… ☆16 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- Code and datasets for the EMNLP 2020 paper "Calibration of Pre-trained Transformers" ☆61 · Updated 2 years ago
- Code repo for the ACL 2021 paper "Common Sense Beyond English: Evaluating and Improving Multilingual LMs for Commonsense Reasoning" ☆22 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- Code for "Evaluating Explanations for Reading Comprehension with Realistic Counterfactuals" ☆18 · Updated 4 years ago
- Code and data for our EMNLP 2020 paper "Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multiho… ☆28 · Updated 3 years ago
- ☆17 · Updated 5 years ago
- ☆58 · Updated 3 years ago
- ☆63 · Updated 5 years ago
- Code for the paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆22 · Updated 4 years ago
- ☆49 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆92 · Updated 3 years ago
- ☆17 · Updated 4 years ago
- Code and data for "Debiasing Methods in Natural Language Understanding Make Bias More Accessible" ☆14 · Updated 3 years ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems' Predictions?" ☆25 · Updated 2 years ago
- Code for ModularQA ☆28 · Updated 4 years ago
- ☆21 · Updated 4 years ago
- Accompanying code for "Injecting Numerical Reasoning Skills into Language Models" (Mor Geva*, Ankit Gupta*, and Jonathan Berant, ACL 2… ☆89 · Updated last year
- Repository accompanying our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆85 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… ☆84 · Updated 4 years ago
- Code for "How Many Data Points Is a Prompt Worth?" ☆48 · Updated 4 years ago