yongchao98 / PROMST
Automatic prompt optimization framework for multi-step agent tasks.
☆31 · Updated 8 months ago
Alternatives and similar repositories for PROMST
Users interested in PROMST are comparing it to the repositories listed below.
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆52 · Updated 8 months ago
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 9 months ago
- Reformatted Alignment ☆113 · Updated 9 months ago
- ☆47 · Updated last month
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated last month
- ☆56 · Updated 8 months ago
- Contrastive Chain-of-Thought Prompting ☆64 · Updated last year
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… ☆19 · Updated 7 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper ☆33 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆47 · Updated 5 months ago
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents ☆190 · Updated 3 weeks ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆50 · Updated last year
- ☆36 · Updated 10 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆57 · Updated 9 months ago
- Code for ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆40 · Updated last week
- ☆102 · Updated 7 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆38 · Updated 4 months ago
- Code for ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets" ☆57 · Updated last year
- ☆24 · Updated 5 months ago
- This repository combines the CPO and SimPO methods for better reference-free preference learning. ☆53 · Updated 11 months ago
- ☆102 · Updated 9 months ago
- Towards Systematic Measurement for Long Text Quality ☆36 · Updated 10 months ago
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆81 · Updated last year
- ☆89 · Updated last month
- ☆41 · Updated 9 months ago
- ☆50 · Updated last year
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆75 · Updated last year
- ☆53 · Updated 2 weeks ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆48 · Updated 7 months ago