0xallam / Direct-Preference-Optimization
Direct Preference Optimization from scratch in PyTorch
⭐98 · Updated 2 months ago
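For context, "Direct Preference Optimization from scratch in PyTorch" refers to the DPO objective of Rafailov et al. (2023), which reduces preference learning to a single logistic loss over policy-to-reference log-probability ratios. The sketch below is a minimal illustration of that loss, not this repository's actual code; the function name, argument names, and the default β of 0.1 are assumptions made here for illustration.

```python
# Minimal sketch of the DPO loss (Rafailov et al., 2023). Illustrative only;
# names and defaults are assumptions, not the repository's API.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss for a batch of preference pairs (chosen vs. rejected responses)."""
    # Implicit rewards: scaled log-ratios of policy to frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that pushes the chosen reward above the rejected reward.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

The per-sequence log-probabilities passed in are typically obtained by summing token-level log-probs of each response under the trainable policy and a frozen reference model.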
Alternatives and similar repositories for Direct-Preference-Optimization
Users who are interested in Direct-Preference-Optimization are comparing it to the libraries listed below.
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ⭐141 · Updated 4 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ⭐226 · Updated last year
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ⭐318 · Updated 10 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ⭐112 · Updated last year
- A brief and partial summary of RLHF algorithms. ⭐129 · Updated 3 months ago
- ⭐300 · Updated 3 weeks ago
- ⭐109 · Updated 3 months ago
- ⭐276 · Updated 5 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ⭐158 · Updated last month
- ⭐203 · Updated 4 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ⭐131 · Updated 11 months ago
- ⭐190 · Updated 2 months ago
- ⭐65 · Updated 2 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ⭐114 · Updated 11 months ago
- ⭐97 · Updated 11 months ago
- A version of verl to support tool use ⭐251 · Updated last week
- [ICML 2024] Selecting High-Quality Data for Training Language Models ⭐176 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ⭐78 · Updated 5 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ⭐158 · Updated 9 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ⭐159 · Updated 3 weeks ago
- ⭐220 · Updated last month
- ⭐142 · Updated 7 months ago
- ⭐180 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ⭐121 · Updated 9 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ⭐65 · Updated 3 weeks ago
- A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF) ⭐171 · Updated 5 months ago
- RLHF implementation details of OAI's 2019 codebase ⭐187 · Updated last year
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ⭐114 · Updated 8 months ago
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ⭐164 · Updated last year
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ⭐124 · Updated 3 months ago