ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment
☆56 · Jun 16, 2024 · Updated last year
Alternatives and similar repositories for exact-optimization
Users that are interested in exact-optimization are comparing it to the libraries listed below
- Direct preference optimization with f-divergences. ☆16 · Nov 3, 2024 · Updated last year
- Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Sep 24, 2024 · Updated last year
- For ACL25 paper "WAFFLE: Multi-Modal Model for Automated Front-End Development" - by Shanchao Liang and Nan Jiang and Shangshu Qian and L… ☆11 · May 28, 2025 · Updated 9 months ago
- Trust Region Preference Approximation: A simple and stable reinforcement learning algorithm for LLM reasoning ☆14 · Jun 28, 2025 · Updated 8 months ago
- Unofficially implements https://arxiv.org/abs/2112.05682 to get linear memory cost on attention for PyTorch ☆12 · Jan 16, 2022 · Updated 4 years ago
- lanmt ebm ☆12 · Jun 19, 2020 · Updated 5 years ago
- ☆11 · May 28, 2024 · Updated last year
- [ICLR 2025] Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization ☆12 · Jan 26, 2025 · Updated last year
- CMU Linguistic Annotation Backend ☆15 · Sep 22, 2025 · Updated 5 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆78 · Aug 17, 2024 · Updated last year
- ☆20 · Dec 14, 2024 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆68 · Oct 16, 2024 · Updated last year
- A library for constrained RLHF. ☆13 · Feb 19, 2024 · Updated 2 years ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Feb 22, 2024 · Updated 2 years ago
- ☆29 · Oct 3, 2023 · Updated 2 years ago
- ☆160 · Nov 23, 2024 · Updated last year
- The official implementation of Preference Data Reward-Augmentation. ☆18 · May 1, 2025 · Updated 10 months ago
- Official implementation of Language Models as Compilers: Simulating the Execution Of Pseudocode Improves Algorithmic Reasoning in Languag… ☆23 · Apr 8, 2024 · Updated last year
- ☆19 · Mar 25, 2025 · Updated 11 months ago
- Source codes for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆17 · Jan 8, 2025 · Updated last year
- ☆78 · Feb 22, 2024 · Updated 2 years ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆906 · Sep 30, 2025 · Updated 5 months ago
- The official implementation for Detector Guidance for Multi-Object Text-to-Image Generation (DG) ☆20 · Feb 7, 2024 · Updated 2 years ago
- Code repo for "Harnessing Negative Signals: Reinforcement Distillation from Teacher Data for LLM Reasoning" ☆33 · Jul 25, 2025 · Updated 7 months ago
- This is the official repository for all the code of TheoremLlama ☆47 · Aug 4, 2025 · Updated 7 months ago
- ☆320 · Sep 18, 2024 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆946 · Feb 16, 2025 · Updated last year
- Implementations of growing and pruning in neural networks ☆22 · Jul 26, 2023 · Updated 2 years ago
- ☆19 · May 6, 2023 · Updated 2 years ago
- A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction" ☆22 · May 29, 2024 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Jul 20, 2024 · Updated last year
- ☆18 · Mar 28, 2022 · Updated 3 years ago
- Dataset for Unified Editing, EMNLP 2023. This is a model editing dataset where edits are natural language phrases. ☆23 · Sep 4, 2024 · Updated last year
- ☆27 · Oct 8, 2021 · Updated 4 years ago
- Generate images from texts, in Russian ☆19 · Dec 13, 2021 · Updated 4 years ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Oct 27, 2024 · Updated last year
- Sys2Bench is a benchmarking suite designed to evaluate reasoning and planning capabilities of large language models across algorithmic, l… ☆29 · Mar 5, 2025 · Updated 11 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆151 · Feb 14, 2025 · Updated last year
- Combines the CPO and SimPO methods for improved reference-free preference learning. ☆56 · Aug 13, 2024 · Updated last year