Singla17 / dynamic-alignment-optimization
[EMNLP'24 (Main)] DRPO (Dynamic Rewarding with Prompt Optimization) is a tuning-free approach to self-alignment. DRPO uses a search-based optimization framework that lets LLMs iteratively self-improve and craft the best alignment instructions without any additional training.
☆23 · Updated 6 months ago
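As a rough illustration of the idea only (not the repository's actual implementation), a search-based prompt-optimization loop of this kind might look like the sketch below; `optimize_alignment_prompt`, the `propose`/`score` callbacks, and the toy reward are all hypothetical placeholders.

```python
# Minimal sketch of an iterative, tuning-free prompt-optimization loop:
# the LLM proposes revised alignment instructions, a dynamic reward scores
# them, and the best-scoring candidate is kept each round. All names and
# reward criteria here are assumptions, not this repository's API.
from typing import Callable, List, Tuple

def optimize_alignment_prompt(
    seed_prompt: str,
    propose: Callable[[str, str], List[str]],    # LLM call: (current prompt, feedback) -> candidate prompts
    score: Callable[[str], Tuple[float, str]],   # dynamic reward: prompt -> (score, textual feedback)
    iterations: int = 10,
    beam_size: int = 4,
) -> str:
    """Greedy search over candidate alignment instructions, keeping the best one found."""
    best_prompt = seed_prompt
    best_score, feedback = score(seed_prompt)
    for _ in range(iterations):
        # Ask the model to rewrite the current best prompt, guided by reward feedback.
        candidates = propose(best_prompt, feedback)[:beam_size]
        for candidate in candidates:
            candidate_score, candidate_feedback = score(candidate)
            if candidate_score > best_score:
                best_prompt, best_score, feedback = candidate, candidate_score, candidate_feedback
    return best_prompt

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real setup would call an LLM.
    def toy_propose(prompt: str, feedback: str) -> List[str]:
        return [prompt + " Be helpful.", prompt + " Be harmless.", prompt + " Cite sources."]

    def toy_score(prompt: str) -> Tuple[float, str]:
        return float(len(prompt)), "longer instructions score higher in this toy reward"

    print(optimize_alignment_prompt("You are an aligned assistant.", toy_propose, toy_score, iterations=3))
```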
Alternatives and similar repositories for dynamic-alignment-optimization
Users interested in dynamic-alignment-optimization are comparing it to the repositories listed below
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 4 months ago
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆39 · Updated 2 years ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆29 · Updated 4 months ago
- Evaluate the Quality of Critique ☆35 · Updated 11 months ago
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆41 · Updated last year
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆53 · Updated 8 months ago
- Supporting code for the ReCEval paper ☆28 · Updated 8 months ago
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆49 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆34 · Updated 8 months ago
- [NAACL 2025] The official implementation of paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆26 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆61 · Updated 10 months ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆41 · Updated last year
- ☆30 · Updated last year
- This is the repository for the paper "CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models" ☆24 · Updated last year
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs. ACL 2023. ☆63 · Updated 5 months ago
- Code for ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets" ☆56 · Updated 11 months ago
- ☆35 · Updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆48 · Updated 10 months ago
- First explanation metric (diagnostic report) for text generation evaluation ☆61 · Updated 2 months ago
- ☆69 · Updated last year
- Trending projects & awesome papers about data-centric LLM studies. ☆34 · Updated 3 weeks ago
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆29 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Official implementation of AAAI 2025 paper "Augmenting Math Word Problems via Iterative Question Composing" (https://arxiv.org/abs/2401.09… ☆20 · Updated 5 months ago
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆29 · Updated 9 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆46 · Updated 5 months ago
- Evaluation on Logical Reasoning and Abstract Reasoning Challenges ☆27 · Updated 3 weeks ago