xufangzhi / phi-Decoding
[Preprint] An inference-time decoding strategy with adaptive foresight sampling
☆88 · Updated this week
Alternatives and similar repositories for phi-Decoding:
Users interested in phi-Decoding are comparing it to the libraries listed below.
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning" ☆90 · Updated last week
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆72 · Updated 4 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆66 · Updated 3 weeks ago
- ☆59 · Updated 7 months ago
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆25 · Updated last month
- The official repository of the Omni-MATH benchmark. ☆80 · Updated 3 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆135 · Updated 2 months ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 2 months ago
- ☆76 · Updated 3 months ago
- [Preprint] A Generalizable and Purely Unsupervised Self-Training Framework ☆43 · Updated this week
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 5 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 6 months ago
- Knowledge Unlearning for Large Language Models ☆25 · Updated 2 weeks ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆75 · Updated 2 weeks ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆66 · Updated last month
- ☆55 · Updated 6 months ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆63 · Updated last month
- ☆56 · Updated last month
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated last year
- Reformatted Alignment ☆115 · Updated 6 months ago
- ☆44 · Updated 5 months ago
- ☆45 · Updated 2 months ago
- ☆82 · Updated 5 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- The rule-based evaluation subset and code implementation of Omni-MATH ☆19 · Updated 3 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆54 · Updated 2 months ago
- ☆45 · Updated last month
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆99 · Updated last month
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆35 · Updated 8 months ago