khuangaf / CHOCOLATE
Code and data for the ACL 2024 Findings paper "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning"
☆ 26 · Updated 11 months ago
Alternatives and similar repositories for CHOCOLATE:
Users interested in CHOCOLATE are comparing it to the repositories listed below.
- Code and data for paper "Context-faithful Prompting for Large Language Models" ☆ 39 · Updated 2 years ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆ 28 · Updated 10 months ago
- ☆ 42 · Updated 9 months ago
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆ 41 · Updated last year
- AbstainQA, ACL 2024 ☆ 25 · Updated 6 months ago
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆ 47 · Updated 4 months ago
- Supporting code for the ReCEval paper ☆ 28 · Updated 7 months ago
- ☆ 44 · Updated 8 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆ 55 · Updated 6 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆ 68 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆ 58 · Updated last year
- Official implementation of "Law of the Weakest Link: Cross Capabilities of Large Language Models" ☆ 42 · Updated 7 months ago
- An easy-to-use hallucination detection framework for LLMs ☆ 58 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆ 34 · Updated 7 months ago
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆ 36 · Updated 4 months ago
- MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following ☆ 16 · Updated 6 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating the critique ability of LLMs ☆ 39 · Updated 5 months ago
- ☆ 22 · Updated 9 months ago
- Code for "ProTrix: Building Models for Planning and Reasoning over Tables with Sentence Context" ☆ 17 · Updated 5 months ago
- Source code for running LLMs on the AAAR-1.0 benchmark ☆ 16 · Updated last month
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆ 16 · Updated 4 months ago
- ☆ 69 · Updated last year
- [COLM'24] "How Easily do Irrelevant Inputs Skew the Responses of Large Language Models?" ☆ 22 · Updated 6 months ago
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆ 36 · Updated last year
- ☆ 35 · Updated last year
- ☆ 40 · Updated 5 months ago
- ☆ 30 · Updated last year
- Evaluate the Quality of Critique ☆ 34 · Updated 11 months ago
- Code for the ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets" ☆ 54 · Updated 11 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆ 55 · Updated last year