dependentsign / Awesome-LLM-based-Evaluators
✨✨Latest Papers about LLM-based Evaluators
☆30 · Updated last year
Alternatives and similar repositories for Awesome-LLM-based-Evaluators
Users interested in Awesome-LLM-based-Evaluators are comparing it to the libraries listed below.
- Github repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆186 · Updated 6 months ago
- EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining. ☆133 · Updated last year
- Awesome LLM for NLG Evaluation Papers ☆24 · Updated last year
- A curated list of awesome papers about information retrieval (IR) in the age of large language models (LLMs). These include retrieval augment… ☆69 · Updated 10 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆114 · Updated 11 months ago
- ☆75 · Updated 6 months ago
- ☆75 · Updated last year
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆150 · Updated 3 months ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆147 · Updated last year
- ☆54 · Updated 10 months ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆256 · Updated last year
- ☆179 · Updated 2 weeks ago
- Codes for papers on Large Language Models Personalization (LaMP) ☆163 · Updated 4 months ago
- Code Repo for EfficientRAG: Efficient Retriever for Multi-Hop Question Answering ☆49 · Updated 3 months ago
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆69 · Updated 10 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year
- A Survey of Attributions for Large Language Models ☆203 · Updated 10 months ago
- Multilingual Large Language Models Evaluation Benchmark ☆124 · Updated 10 months ago
- Repository for the Bias Benchmark for QA dataset. ☆118 · Updated last year
- Project for the paper entitled "Instruction Tuning for Large Language Models: A Survey" ☆179 · Updated 6 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆77 · Updated last month
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆126 · Updated 9 months ago
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆144 · Updated last month
- [ACL 2023] Code and data repo for the paper "Element-aware Summary and Summary Chain-of-Thought (SumCoT)" ☆53 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated 11 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆160 · Updated this week
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆98 · Updated 4 months ago
- Repository for the paper "Evaluating Open-QA Evaluation" ☆24 · Updated last year