Moocember / Optimization-by-PROmpting
☆78 · Updated 2 years ago
Alternatives and similar repositories for Optimization-by-PROmpting
Users interested in Optimization-by-PROmpting are comparing it to the libraries listed below.
- ☆129 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆101 · Updated 2 years ago
- Open Implementations of LLM Analyses ☆107 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆85 · Updated last year
- ☆29 · Updated 3 weeks ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆124 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆118 · Updated 2 years ago
- ☆150 · Updated 2 years ago
- ☆99 · Updated last year
- ☆122 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆80 · Updated last year
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆38 · Updated last year
- Learning to Retrieve by Trying - Source code for Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval ☆51 · Updated last year
- ☆139 · Updated last year
- Official repo of Respond-and-Respond: data, code, and evaluation ☆103 · Updated last year
- Plug-and-play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation ☆74 · Updated 2 years ago
- FuseAI Project ☆88 · Updated 11 months ago
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs ☆51 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- ☆49 · Updated 2 years ago
- ☆48 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- The repo for the paper "Shepherd: A Critic for Language Model Generation" ☆221 · Updated 2 years ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆90 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- ☆80 · Updated 9 months ago
- Official implementation for "Law of the Weakest Link: Cross Capabilities of Large Language Models" ☆43 · Updated last year