Anni-Zou / DocBench
DocBench: A Benchmark for Evaluating LLM-based Document Reading Systems
☆43 · Updated 11 months ago
Alternatives and similar repositories for DocBench
Users interested in DocBench are comparing it to the repositories listed below
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated last year
- ☆48 · Updated 3 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated 7 months ago
- ☆74 · Updated last year
- ☆154 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 9 months ago
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆110 · Updated 11 months ago
- Tree prompting: easy-to-use scikit-learn interface for improved prompting. ☆41 · Updated last year
- Code implementation of synthetic continued pretraining ☆129 · Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆171 · Updated 2 months ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆95 · Updated last year
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆153 · Updated last year
- ☆133 · Updated 5 months ago
- ☆28 · Updated 8 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆33 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆198 · Updated 2 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆120 · Updated 7 months ago
- [NAACL 2025] The official implementation of paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆28 · Updated last year
- Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆64 · Updated 6 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆96 · Updated 9 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆30 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated 10 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆105 · Updated last month
- augmented LLM with self reflection ☆132 · Updated last year
- ☆135 · Updated 10 months ago
- ☆127 · Updated 11 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆46 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆113 · Updated 7 months ago
- Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated 11 months ago