Xt-cyh / CoDI-Eval
☆22 · Updated 7 months ago
Alternatives and similar repositories for CoDI-Eval
Users interested in CoDI-Eval are comparing it to the repositories listed below.
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models (EMNLP Findings 2023) ☆28 · Updated 2 years ago
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 6 months ago
- ☆75 · Updated last year
- ☆14 · Updated 2 years ago
- ☆31 · Updated 10 months ago
- [EMNLP 2023] Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts ☆28 · Updated 2 years ago
- ☆51 · Updated last year
- Evaluate the Quality of Critique ☆36 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆30 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- The source code for running LLMs on the AAAR-1.0 benchmark ☆17 · Updated 8 months ago
- ☆41 · Updated 2 years ago
- Supporting code for the ReCEval paper ☆31 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆41 · Updated 2 years ago
- Code and data for the FACTOR paper ☆52 · Updated 2 years ago
- Code for ProTrix: Building Models for Planning and Reasoning over Tables with Sentence Context ☆18 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆126 · Updated last year
- ☆48 · Updated last year
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆42 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 7 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆79 · Updated last year
- Code for the ACL 2025 paper "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆33 · Updated 6 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- ☆35 · Updated last year
- Do Large Language Models Know What They Don’t Know? ☆102 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆53 · Updated last year