Anni-Zou / DocBench
DocBench: A Benchmark for Evaluating LLM-based Document Reading Systems
☆52 · Updated last year
Alternatives and similar repositories for DocBench
Users who are interested in DocBench are comparing it to the libraries listed below.
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆154 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated last year
- Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse" ☆68 · Updated 7 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆127 · Updated 8 months ago
- ☆47 · Updated 4 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆109 · Updated 8 months ago
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- Code implementation of synthetic continued pretraining ☆135 · Updated 9 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆82 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆77 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆125 · Updated 11 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆122 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ☆74 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- ☆150 · Updated 2 weeks ago
- ☆155 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 3 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆159 · Updated 8 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆49 · Updated 9 months ago
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆169 · Updated last month
- [NAACL'24] Dataset, code and models for "TableLlama: Towards Open Large Generalist Models for Tables". ☆131 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆122 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆75 · Updated 11 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆138 · Updated 11 months ago
- Reformatted Alignment ☆112 · Updated last year
- ☆75 · Updated 7 months ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆99 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆31 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆47 · Updated 11 months ago