Shentao-YANG / Preference_Grounded_Guidance
Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023).
☆16 Updated 7 months ago
Alternatives and similar repositories for Preference_Grounded_Guidance
Users interested in Preference_Grounded_Guidance are comparing it to the repositories listed below.
- Code for Paper (Preserving Diversity in Supervised Fine-tuning of Large Language Models) ☆36 Updated 3 months ago
- ☆44 Updated last year
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 Updated last year
- The code of "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 Updated last year
- AbstainQA, ACL 2024 ☆28 Updated 10 months ago
- Directional Preference Alignment ☆59 Updated 11 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 Updated 11 months ago
- ☆35 Updated last year
- Official code of the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le… ☆75 Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆108 Updated last year
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning ☆21 Updated 2 years ago
- ☆100 Updated last year
- Methods and evaluation for aligning language models temporally ☆29 Updated last year
- Code for ACL 2024 paper - Adversarial Preference Optimization (APO). ☆56 Updated last year
- ☆18 Updated last year
- Code and data used in the paper: "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 Updated last year
- The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning (NeurIPS 2022) ☆16 Updated 2 years ago
- ☆68 Updated last year
- Official code for the paper Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation ☆20 Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆47 Updated 10 months ago
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆60 Updated 2 years ago
- ☆14 Updated last month
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 Updated last year
- ☆27 Updated 2 years ago
- ☆56 Updated 3 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆61 Updated 10 months ago
- ☆43 Updated 5 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆29 Updated 8 months ago
- Evaluate the Quality of Critique ☆36 Updated last year
- Analyzing LLM Alignment via Token Distribution Shift ☆16 Updated last year