0xallam / Direct-Preference-Optimization
Direct Preference Optimization from scratch in PyTorch
☆113 · Updated 6 months ago
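For orientation, the objective this repo implements is the DPO loss of Rafailov et al. (2023): a logistic loss on the policy-vs-reference log-probability margin between a chosen and a rejected response. A minimal sketch in plain Python (the scalar interface and function name here are illustrative, not the repo's actual API, which operates on batched PyTorch tensors):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sketch of the DPO objective for one preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    response under the trainable policy or the frozen reference model.
    """
    # Log-ratio of chosen over rejected, under policy and reference
    pi_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    # beta scales how far the policy may drift from the reference
    margin = beta * (pi_logratio - ref_logratio)
    # -log sigmoid(margin): shrinks as the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When policy and reference agree, the margin is zero and the loss is log 2 ≈ 0.693; as the policy assigns relatively more probability to the chosen response, the loss decreases.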
Alternatives and similar repositories for Direct-Preference-Optimization
Users interested in Direct-Preference-Optimization are comparing it to the libraries listed below.
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆258 · Updated last year
- Critique-out-Loud Reward Models ☆70 · Updated 11 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 7 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆120 · Updated last year
- ☆318 · Updated 4 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆497 · Updated 11 months ago
- A Survey on Data Selection for Language Models ☆250 · Updated 5 months ago
- ☆211 · Updated 7 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆128 · Updated 6 months ago
- ☆269 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆171 · Updated 4 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆82 · Updated 8 months ago
- ☆280 · Updated 9 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆211 · Updated 2 years ago
- ☆207 · Updated 6 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆517 · Updated 8 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆84 · Updated 4 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆188 · Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆109 · Updated 2 months ago
- RewardBench: the first evaluation tool for reward models ☆640 · Updated 3 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆266 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆362 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆164 · Updated 6 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆142 · Updated last year
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆115 · Updated 4 months ago
- The repo for In-context Autoencoder ☆143 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆176 · Updated 3 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆121 · Updated last year
- RLHF implementation details of OAI's 2019 codebase ☆191 · Updated last year
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆124 · Updated 2 months ago