0xallam / Direct-Preference-Optimization
Direct Preference Optimization from scratch in PyTorch
☆123 · Updated 9 months ago
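For context on what this repository implements: DPO trains a policy directly on preference pairs with a single contrastive loss, with no separate reward model or RL loop. Below is a minimal PyTorch sketch of that loss, assuming per-sequence summed log-probabilities for the policy and a frozen reference model have already been computed; the function and tensor names are illustrative, not taken from this repository.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss sketch (Rafailov et al., 2023). Inputs are per-sequence
    summed log-probs of shape (batch,); names are illustrative."""
    # Implicit rewards: how much more likely each completion is under
    # the policy than under the frozen reference model.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Maximize the chosen-vs-rejected margin, scaled by beta.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()

# Smoke test with random stand-in log-probs (illustrative only).
policy_c = torch.randn(4, requires_grad=True)
policy_r = torch.randn(4, requires_grad=True)
loss = dpo_loss(policy_c, policy_r, torch.randn(4), torch.randn(4))
loss.backward()  # in real training, gradients flow into the policy model
```

Here `beta` plays the role of the KL-style penalty strength: larger values keep the policy closer to the reference model.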
Alternatives and similar repositories for Direct-Preference-Optimization
Users interested in Direct-Preference-Optimization are comparing it to the repositories listed below.
- A Survey on Data Selection for Language Models ☆254 · Updated 8 months ago
- ☆281 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆271 · Updated last year
- ☆329 · Updated 7 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆199 · Updated last month
- RewardBench: the first evaluation tool for reward models. ☆680 · Updated 7 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆151 · Updated 11 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆533 · Updated last year
- Critique-out-Loud Reward Models ☆71 · Updated last year
- ☆160 · Updated last year
- RLHF implementation details of OAI's 2019 codebase ☆197 · Updated 2 years ago
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆269 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆512 · Updated last year
- Project for the paper entitled `Instruction Tuning for Large Language Models: A Survey` ☆223 · Updated 5 months ago
- ☆274 · Updated 2 years ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆218 · Updated 2 years ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆182 · Updated 7 months ago
- ☆166 · Updated 3 months ago
- The repo for In-context Autoencoder ☆162 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆359 · Updated 2 years ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- ☆220 · Updated 9 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆184 · Updated 6 months ago
- ☆213 · Updated 10 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆136 · Updated last year
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆134 · Updated 6 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆96 · Updated 3 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year