psunlpgroup / ReaLMistake
This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses".
☆27, updated 6 months ago
Alternatives and similar repositories for ReaLMistake:
Users interested in ReaLMistake are comparing it to the libraries listed below.
- ☆22, updated 2 months ago
- Evaluate the Quality of Critique (☆35, updated 8 months ago)
- Scalable Meta-Evaluation of LLMs as Evaluators (☆43, updated last year)
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" (☆54, updated 11 months ago)
- AbstainQA, ACL 2024 (☆25, updated 4 months ago)
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search (☆69, updated 2 months ago)
- Tasks for describing differences between text distributions (☆16, updated 6 months ago)
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor (☆28, updated 10 months ago)
- [arXiv preprint] Official Repository for "Evaluating Language Models as Synthetic Data Generators" (☆34, updated 2 months ago)
- ☆33, updated 10 months ago
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval (☆40, updated 3 months ago)
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch (☆29, updated last week)
- ☆20, updated 8 months ago
- ☆27, updated 11 months ago
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs. EMNLP 2024 (☆20, updated 3 months ago)
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… (☆31, updated last year)
- Critique-out-Loud Reward Models (☆52, updated 4 months ago)
- ☆40, updated last year
- ☆17, updated 4 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning (☆53, updated 5 months ago)
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for…☆24Updated 2 months ago
- Benchmarking Benchmark Leakage in Large Language Models☆50Updated 9 months ago
- ☆40Updated last week
- ✨ Resolving Knowledge Conflicts in Large Language Models, COLM 2024☆15Updated 4 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions☆42Updated 7 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model☆42Updated last year
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… (☆31, updated 8 months ago)
- Codebase for Instruction Following without Instruction Tuning (☆33, updated 4 months ago)
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner (☆21, updated 7 months ago)
- ☆44, updated 5 months ago