dig-team / hanna-benchmark-asg
HANNA, a large annotated dataset of Human-ANnotated NArratives for Automatic Story Generation (ASG) evaluation.
☆28 · Updated last month
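For readers who want to inspect the benchmark data itself, here is a minimal sketch of loading HANNA's human annotations with pandas. The filename `hanna_stories_annotations.csv`, the `Model` column, and the six criterion columns are assumptions about the repository's layout, not confirmed by this page; check the repo before relying on them.

```python
# Minimal sketch (assumptions: the annotations ship as a single CSV named
# "hanna_stories_annotations.csv", with a "Model" column and one column per
# human-evaluation criterion; none of this is confirmed by this listing).
import pandas as pd

df = pd.read_csv("hanna_stories_annotations.csv")

# Hypothetical criterion column names, matching the six human-evaluation
# criteria described in the HANNA paper.
criteria = ["Relevance", "Coherence", "Empathy",
            "Surprise", "Engagement", "Complexity"]

# Mean human score per story-generation system, one row per model.
print(df.groupby("Model")[criteria].mean().round(2))
```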
Related projects
Alternatives and complementary repositories for hanna-benchmark-asg
- First explanation metric (diagnostic report) for text generation evaluation ☆61 · Updated 4 months ago
- Benchmark for evaluating open-ended generation ☆44 · Updated 2 weeks ago
- The data and the PyTorch implementation for the models and experiments in the paper "Exploiting Asymmetry for Synthetic Training Data Generation…" ☆58 · Updated last year
- The code implementation of the EMNLP 2022 paper "DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation" ☆25 · Updated last year
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆35 · Updated last year
- Code base for In-Context Learning for Dialogue State Tracking ☆44 · Updated last year
- We construct and introduce DIALFACT, a testing benchmark dataset of crowd-annotated conversational claims, paired with pieces of evidence fr… ☆41 · Updated 2 years ago
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation". ☆16 · Updated 2 years ago
- Code for our paper "CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation" (ACL 2022) ☆32 · Updated 2 years ago
- Technical Report: Is ChatGPT a Good NLG Evaluator? A Preliminary Study ☆42 · Updated last year
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning". ☆92 · Updated last year
- Code for the paper InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning ☆97 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆39 · Updated last year
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆46 · Updated 11 months ago
- Dataset, metrics, and models for the TACL 2023 paper "MACSUM: Controllable Summarization with Mixed Attributes". ☆34 · Updated last year
- Official implementation of the ACL 2023 paper: "Zero-shot Faithful Factual Error Correction" ☆17 · Updated last year
- Code and data for the FACTOR paper ☆39 · Updated last year
- The official code and dataset for the EMNLP 2022 paper "COPEN: Probing Conceptual Knowledge in Pre-trained Language Models". ☆19 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆54 · Updated 10 months ago
- This code accompanies the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering". ☆18 · Updated last year