tjunlp-lab / Awesome-LLMs-Evaluation-Papers
The papers are organized according to our survey: "Evaluating Large Language Models: A Comprehensive Survey".
☆761 Updated last year
Alternatives and similar repositories for Awesome-LLMs-Evaluation-Papers
Users who are interested in Awesome-LLMs-Evaluation-Papers are comparing it to the repositories listed below.
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,520 Updated last month
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆956 Updated last month
- List of papers on hallucination detection in LLMs. ☆862 Updated last week
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,014 Updated 5 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,740 Updated 4 months ago
- Reading list of instruction tuning. A trend starting from Natural-Instruction (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022). ☆769 Updated last year
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,162 Updated last year
- Aligning Large Language Models with Human: A Survey ☆730 Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmark, demos, leaderboard, papers, docs and models, mainly for Evaluation on LLMs… ☆527 Updated 6 months ago
- Must-read Papers on Knowledge Editing for Large Language Models. ☆1,082 Updated 2 months ago
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆484 Updated 7 months ago
- This repository contains a collection of papers and resources on Reasoning in Large Language Models. ☆564 Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 Updated last year
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆468 Updated last year
- This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback. ☆520 Updated 6 months ago
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆496 Updated last year
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆685 Updated 7 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆343 Updated last year
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆348 Updated last month
- ☆901 Updated 9 months ago
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆340 Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆840 Updated last week
- Paper List for In-context Learning 🌷 ☆853 Updated 7 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆488 Updated 10 months ago
- ☆532 Updated last year
- Summarizes existing representative LLM text datasets. ☆1,266 Updated last month
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆559 Updated last year
- Awesome-LLM-RAG: a curated list of advanced retrieval augmented generation (RAG) in Large Language Models ☆1,197 Updated 2 months ago
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,076 Updated 11 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆484 Updated 3 months ago