likenneth / dialogue_action_token
Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner
☆20 · Updated 6 months ago
Alternatives and similar repositories for dialogue_action_token:
Users interested in dialogue_action_token are comparing it to the repositories listed below.
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆109 · Updated 2 months ago
- ☆93 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆37 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆111 · Updated 4 months ago
- Critique-out-Loud Reward Models ☆47 · Updated 3 months ago
- Evaluate the Quality of Critique ☆35 · Updated 7 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆111 · Updated 2 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization" (APO). ☆49 · Updated 7 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆66 · Updated 6 months ago
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆53 · Updated 10 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated 11 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆88 · Updated 3 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆64 · Updated 5 months ago
- ☆81 · Updated this week
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆44 · Updated last month
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆64 · Updated last year
- Code and data used in the paper: "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆27 · Updated 7 months ago
- Natural Language Reinforcement Learning ☆67 · Updated 3 weeks ago
- Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits". ☆34 · Updated last month
- GenRM-CoT: Data release for verification rationales ☆42 · Updated 3 months ago
- ☆25 · Updated 8 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆98 · Updated last year
- ☆18 · Updated 8 months ago
- ☆36 · Updated 5 months ago
- Directional Preference Alignment ☆54 · Updated 3 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆94 · Updated this week
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆124 · Updated 9 months ago
- 🌾 OAT: Online AlignmenT for LLMs ☆81 · Updated 3 weeks ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆54 · Updated last month
- The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate Representations" (EACL 2024). Do not hesitate t… ☆69 · Updated 10 months ago