Xt-cyh / CoDI-Eval
☆22 · Updated 9 months ago
Alternatives and similar repositories for CoDI-Eval:
Users interested in CoDI-Eval are comparing it to the repositories listed below.
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆35 · Updated 9 months ago
- Evaluate the Quality of Critique ☆34 · Updated 10 months ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆39 · Updated 2 years ago
- ☆29 · Updated 3 months ago
- AbstainQA, ACL 2024 ☆25 · Updated 6 months ago
- EMNLP 2024 paper: "AppBench: Planning of Multiple APIs from Various APPs for Complex User Instruction" ☆11 · Updated 5 months ago
- Merging Generated and Retrieved Knowledge for Open-Domain QA (EMNLP 2023) ☆22 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆59 · Updated 9 months ago
- ☆15 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 4 months ago
- ☆41 · Updated last year
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 3 months ago
- Repository for the paper "CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models" ☆23 · Updated last year
- Resources for the ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning" ☆36 · Updated last year
- Technical report: "Is ChatGPT a Good NLG Evaluator? A Preliminary Study" ☆43 · Updated 2 years ago
- Repo for the outstanding paper @ ACL 2023 "Do PLMs Know and Understand Ontological Knowledge?" ☆31 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆28 · Updated 9 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆79 · Updated last year
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆68 · Updated 8 months ago
- ☆59 · Updated 7 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆67 · Updated last year
- ☆44 · Updated 5 months ago
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and…) ☆41 · Updated 5 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 8 months ago
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models (EMNLP Findings 2023) ☆26 · Updated last year
- Target-oriented Proactive Dialogue Systems with Personalization: Problem Formulation and Dataset Curation (EMNLP 2023) ☆30 · Updated 11 months ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆24 · Updated last year
- Dataset and baseline for the COLING 2022 long paper (oral) "ConFiguRe: Exploring Discourse-level Chinese Figures of Speech" ☆11 · Updated last year
- ☆31 · Updated last year
- MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation ☆27 · Updated last year