RUC-GSAI / YuLan-Mini
A highly capable, lightweight 2.4B LLM trained on only 1T tokens of pre-training data, with all training details released.
☆217 · Updated 2 months ago
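A minimal usage sketch for trying the model with Hugging Face transformers; the checkpoint ID `yulan-team/YuLan-Mini` below is an assumption, not taken from this listing, so check the repository for the published checkpoint:

```python
# Minimal sketch: load and query YuLan-Mini via Hugging Face transformers.
# NOTE: the model ID below is assumed; verify it against the YuLan-Mini repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yulan-team/YuLan-Mini"  # assumed Hugging Face checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Compute 12 * 7 and explain the steps briefly."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```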
Alternatives and similar repositories for YuLan-Mini
Users interested in YuLan-Mini are comparing it to the repositories listed below.
- ☆169 · Updated 5 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 2 months ago
- ☆65 · Updated 10 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 6 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆256 · Updated 4 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 4 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆189 · Updated 2 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆130 · Updated 5 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆356 · Updated last week
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆91 · Updated 6 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆149 · Updated 9 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆244 · Updated 5 months ago
- ☆211 · Updated 7 months ago
- ☆90 · Updated 4 months ago
- ☆307 · Updated last year
- ☆318 · Updated 4 months ago
- The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆165 · Updated 2 months ago
- An O1 replication for coding ☆335 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆111 · Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆172 · Updated 2 months ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆254 · Updated last week
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆152 · Updated 2 weeks ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆176 · Updated 2 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆443 · Updated 4 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning ☆157 · Updated last week
- ☆293 · Updated 4 months ago
- ☆202 · Updated 5 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 9 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆236 · Updated last month
- ☆297 · Updated 4 months ago