ryokamoi / llm-self-correction-papers
List of papers on Self-Correction of LLMs.
☆72 · Updated 4 months ago
Alternatives and similar repositories for llm-self-correction-papers
Users interested in llm-self-correction-papers are comparing it to the repositories listed below.
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆46 · Updated 5 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 7 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 4 months ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆34 · Updated last year
- ☆69 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆101 · Updated 2 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations. ☆81 · Updated 9 months ago
- ☆65 · Updated 2 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- Revisiting Mid-training in the Era of RL Scaling ☆37 · Updated 3 weeks ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated last year
- Code for the ACL 2023 paper: Pre-Training to Learn in Context ☆108 · Updated 9 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ☆100 · Updated last month
- A paper list of multilingual pre-trained models (continually updated). ☆21 · Updated 10 months ago
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs (ACL 2023). ☆63 · Updated 5 months ago
- Exploration of automated dataset selection approaches at large scales. ☆40 · Updated 2 months ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆39 · Updated 2 years ago
- ☆17 · Updated 2 weeks ago
- Official implementation for "Law of the Weakest Link: Cross Capabilities of Large Language Models" ☆42 · Updated 7 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 3 months ago
- The source code for running LLMs on the AAAR-1.0 benchmark. ☆16 · Updated last month
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆38 · Updated last week
- ☆70 · Updated last week
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆82 · Updated last week
- Benchmarking Benchmark Leakage in Large Language Models ☆51 · Updated 11 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆47 · Updated 3 months ago
- ☆64 · Updated last month
- ☆120 · Updated 7 months ago