gao-g / prelude
Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits".
Related projects
Alternatives and complementary repositories for prelude
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator"
- Evaluate the Quality of Critique
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs
- Repository for the paper "Tools Are Instrumental for Language Agents in Complex Environments"
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective"☆29Updated 6 months ago
- Generating diverse counterfactual data for Natural Language Understanding tasks using Large Language Models (LLMs). The generator support…☆35Updated last year
- Supporting code for ReCEval paper☆26Updated 2 months ago
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference)
- This repository contains data, code, and models for contextual noncompliance.
- Code for our paper "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models"
- 👻 Code and benchmark for our EMNLP 2023 paper "FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions"
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning
- Benchmarking Benchmark Leakage in Large Language Models
- Scalable Meta-Evaluation of LLMs as Evaluators
- Directional Preference Alignment
- Repo for "When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment"
- Accompanying code for "Boosted Prompt Ensembles for Large Language Models"