PorUna-byte / PAR
☆20 · Updated 3 months ago
Alternatives and similar repositories for PAR
Users interested in PAR are comparing it to the repositories listed below.
- [ICML 2025] Official code of "AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization" ☆22 · Updated last year
- ☆17 · Updated 4 months ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated 10 months ago
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner ☆28 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆49 · Updated last year
- A Sober Look at Language Model Reasoning ☆89 · Updated 3 weeks ago
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆29 · Updated last year
- Directional Preference Alignment ☆58 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆50 · Updated 9 months ago
- ☆30 · Updated last year
- ☆51 · Updated 10 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆52 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- ☆53 · Updated 10 months ago
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning" ☆19 · Updated 9 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- ☆49 · Updated 9 months ago
- ☆45 · Updated 9 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆31 · Updated 4 months ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆118 · Updated 7 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- ☆15 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- [ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (https://arxiv.org/abs/2410.02197) ☆33 · Updated 3 months ago
- ☆16 · Updated last year