vicgalle / refined-dpo
Refined Direct Preference Optimization with Synthetic Data for Behavioral Alignment of LLMs
☆13 · Updated last year
Alternatives and similar repositories for refined-dpo
Users interested in refined-dpo are comparing it to the repositories listed below.
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format · ☆27 · Updated 2 years ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆60 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" · ☆91 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation · ☆29 · Updated 10 months ago
- Unofficial implementation of Chain-of-Thought Reasoning Without Prompting · ☆34 · Updated last year
- Aioli: A unified optimization framework for language model data mixing · ☆31 · Updated 10 months ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… · ☆27 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆45 · Updated 2 months ago
- [NeurIPS 2023 Main Track] Repository for the paper titled "Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…" · ☆76 · Updated last year
- [NeurIPS 2023] PyTorch code for Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind · ☆66 · Updated last year
- ☆29 · Updated last month
- ☆75 · Updated last year
- ☆27 · Updated 9 months ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning · ☆35 · Updated 2 years ago
- The official implementation of Self-Exploring Language Models (SELM) · ☆63 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models · ☆123 · Updated last year
- Tree prompting: easy-to-use scikit-learn interface for improved prompting · ☆40 · Updated 2 years ago
- ZYN: Zero-Shot Reward Models with Yes-No Questions · ☆35 · Updated 2 years ago
- ☆29 · Updated last week
- ☆100 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location · ☆84 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆61 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators