yubol-bobo / Awesome-Multi-Turn-LLMs
This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models".
☆79 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Multi-Turn-LLMs
Users interested in Awesome-Multi-Turn-LLMs are comparing it to the repositories listed below.
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆55 · Updated 7 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆124 · Updated 3 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆79 · Updated last month
- ☆241 · Updated last month
- ☆64 · Updated 3 weeks ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆251 · Updated this week
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆180 · Updated 6 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆56 · Updated this week
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆121 · Updated 3 months ago
- This is a unified platform for implementing and evaluating test-time reasoning mechanisms in Large Language Models (LLMs). ☆20 · Updated 6 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆129 · Updated this week
- ☆202 · Updated 3 months ago
- ☆205 · Updated 4 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆80 · Updated 6 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆157 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆228 · Updated 2 months ago
- ☆47 · Updated 4 months ago
- A comprehensive collection of process reward models. ☆95 · Updated 3 weeks ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆134 · Updated this week
- Test-time preference optimization (ICML 2025). ☆146 · Updated 2 months ago
- ☆113 · Updated 4 months ago
- A version of verl to support tool use ☆292 · Updated this week
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆67 · Updated last month
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆124 · Updated 8 months ago
- This is the implementation of LeCo. ☆31 · Updated 5 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆73 · Updated last month
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆145 · Updated 6 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆154 · Updated 4 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆114 · Updated last year
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 7 months ago