haoyangliu123 / awesome-deepseek-r1
A collection of recent reproduction papers and projects on DeepSeek-R1
☆31 · Updated 4 months ago
Alternatives and similar repositories for awesome-deepseek-r1
Users interested in awesome-deepseek-r1 are comparing it to the repositories listed below.
- ☆64 · Updated last month
- ☆222 · Updated last week
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆252 · Updated 2 weeks ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆228 · Updated 3 weeks ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆81 · Updated 10 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆125 · Updated last week
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆191 · Updated last week
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆140 · Updated 2 weeks ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆54 · Updated 6 months ago
- ☆47 · Updated 3 weeks ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆124 · Updated 3 months ago
- ☆139 · Updated last month
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆240 · Updated 3 weeks ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated 2 weeks ago
- ☆220 · Updated last month
- Paper List of Inference/Test-Time Scaling/Computing ☆264 · Updated last week
- This is my attempt to create a self-correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆35 · Updated this week
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆185 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆177 · Updated 5 months ago
- Paper list for Efficient Reasoning ☆509 · Updated this week
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆271 · Updated last week
- A comprehensive collection of process reward models ☆92 · Updated 2 weeks ago
- ☆242 · Updated last month
- Source code for the Self-Evaluation Guided MCTS for online DPO ☆318 · Updated 10 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆73 · Updated last week
- ☆44 · Updated 4 months ago
- Accepted LLM Papers at NeurIPS 2024 ☆37 · Updated 8 months ago
- [ICLR '25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆48 · Updated last month
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆141 · Updated 4 months ago
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆65 · Updated 2 months ago