teacherpeterpan / self-correction-llm-papers
This is a collection of research papers on Self-Correcting Large Language Models with Automated Feedback.
☆511 · Updated 4 months ago
Alternatives and similar repositories for self-correction-llm-papers:
Users interested in self-correction-llm-papers are comparing it to the repositories listed below.
- LLM hallucination paper list ☆310 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆421 · Updated 5 months ago
- RewardBench: the first evaluation tool for reward models ☆526 · Updated 3 weeks ago
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆945 · Updated 3 months ago
- A collection of papers and resources on Reasoning in Large Language Models ☆557 · Updated last year
- Papers and Datasets on Instruction Tuning and Following ✨✨✨ ☆486 · Updated 11 months ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large … ☆1,000 · Updated 4 months ago
- Papers related to LLM agents published at top conferences ☆312 · Updated last year
- Aligning Large Language Models with Human: A Survey ☆726 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆334 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆471 · Updated 9 months ago
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆429 · Updated 2 months ago
- A series of technical reports on Slow Thinking with LLMs ☆581 · Updated this week
- Data and Code for Program of Thoughts (TMLR 2023) ☆263 · Updated 10 months ago
- Paper List for In-context Learning 🌷 ☆849 · Updated 5 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆475 · Updated 2 months ago
- A Survey on Data Selection for Language Models ☆218 · Updated 5 months ago
- The repository for the Tool Learning survey ☆330 · Updated 3 weeks ago
- Must-read Papers on Knowledge Editing for Large Language Models ☆1,039 · Updated 2 weeks ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆595 · Updated 2 months ago
- The repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models ☆451 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval ☆349 · Updated last year
- Source code for the Self-Evaluation Guided MCTS for online DPO ☆297 · Updated 7 months ago
- MAD: The first work to explore Multi-Agent Debate with Large Language Models :D ☆345 · Updated 2 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆817 · Updated 2 weeks ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆542 · Updated 3 months ago
- The repository for the survey of LLM4IR ☆472 · Updated 6 months ago
- A large-scale, fine-grained, diverse preference dataset (and models) ☆335 · Updated last year
- Paper collection on building and evaluating language model agents via executable language grounding ☆348 · Updated 10 months ago