gao-g / prelude
Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits".
☆41 · Updated 7 months ago
Alternatives and similar repositories for prelude
Users interested in prelude are comparing it to the repositories listed below.
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner ☆25 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆101 · Updated last month
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 7 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆146 · Updated 8 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆33 · Updated 2 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆112 · Updated last year
- Critique-out-Loud Reward Models ☆68 · Updated 9 months ago
- Generating diverse counterfactual data for Natural Language Understanding tasks using Large Language Models (LLMs). The generator support… ☆37 · Updated last year
- Augmented LLM with self-reflection ☆129 · Updated last year
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆48 · Updated last year
- Data and code for the Corr2Cause paper (ICLR 2024) ☆107 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆82 · Updated last month
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆108 · Updated last year
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… ☆126 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆121 · Updated 7 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆60 · Updated 5 months ago
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆58 · Updated last year
- Natural Language Reinforcement Learning ☆92 · Updated 6 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆47 · Updated 5 months ago