ryokamoi / llm-self-correction-papers
List of papers on Self-Correction of LLMs.
☆70 · Updated 2 weeks ago
Alternatives and similar repositories for llm-self-correction-papers:
Users interested in llm-self-correction-papers are comparing it to the repositories listed below.
- ☆62 · Updated 10 months ago
- Large language models (LLMs) made easy, EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆62 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆45 · Updated 11 months ago
- ☆76 · Updated 2 weeks ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated 10 months ago
- ☆61 · Updated 3 weeks ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; arXiv preprint arXiv:2403.… ☆40 · Updated 2 weeks ago
- This is the official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆89 · Updated 3 weeks ago
- ☆113 · Updated 3 months ago
- ☆48 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 3 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆69 · Updated last month
- Codebase accompanying the Summary of a Haystack paper. ☆75 · Updated 3 months ago
- ☆50 · Updated 2 months ago
- ☆27 · Updated last week
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated 9 months ago
- The paper list of multilingual pre-trained models (continually updated). ☆18 · Updated 6 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆28 · Updated 5 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆77 · Updated 5 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 2 weeks ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆67 · Updated 6 months ago
- ☆37 · Updated 3 months ago
- PyTorch building blocks for OLMo ☆47 · Updated this week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆53 · Updated 4 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 10 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆79 · Updated 11 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆62 · Updated last week
- ☆23 · Updated 3 weeks ago
- Learning to Retrieve by Trying - Source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆28 · Updated 2 months ago