xufangzhi / phi-Decoding
[Preprint] An inference-time decoding strategy with adaptive foresight sampling
☆79 · Updated this week
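The description above only names the technique, so here is a minimal, self-contained sketch of what foresight sampling generally means: propose several candidate next steps, roll each one out a few steps ahead to estimate how promising it is, and sample the next step in proportion to those estimates. Everything below (the `toy_lm` stand-in model, the scoring function, and all parameters) is a hypothetical illustration, not the phi-Decoding repository's actual API or algorithm.

```python
import math
import random

# Toy stand-in for a language model: given a prefix, propose a next "step"
# (a short string) together with its log-probability. Purely illustrative.
def toy_lm(prefix: str) -> tuple[str, float]:
    step = random.choice(["A", "B", "C"])
    return step, math.log(1.0 / 3.0)

def rollout_score(prefix: str, depth: int = 3) -> float:
    """Roll out `depth` further steps and return the accumulated log-prob
    as a crude 'foresight' estimate of how promising this prefix is."""
    total = 0.0
    for _ in range(depth):
        step, logp = toy_lm(prefix)
        prefix += step
        total += logp
    return total

def foresight_sample(prefix: str, num_candidates: int = 4, depth: int = 3) -> str:
    """Sample one next step, weighting candidates by a short lookahead rollout."""
    candidates, scores = [], []
    for _ in range(num_candidates):
        step, logp = toy_lm(prefix)
        # Foresight value = immediate log-prob + score of a simulated short future.
        candidates.append(step)
        scores.append(logp + rollout_score(prefix + step, depth))
    # Softmax over foresight scores -> sampling weights.
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]

if __name__ == "__main__":
    prefix = "Q: 2+2=? Step:"
    for _ in range(5):
        prefix += " " + foresight_sample(prefix)
    print(prefix)
```

The softmax over rollout scores is just one plausible way to turn foresight estimates into sampling weights; the "adaptive" part of the repository's description presumably refers to deciding dynamically how much foresight to spend per step, which this toy version does not attempt.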
Alternatives and similar repositories for phi-Decoding:
Users interested in phi-Decoding are comparing it to the libraries listed below.
- Code for Paper: Teaching Language Models to Critique via Reinforcement Learning ☆84 · Updated last month
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆51 · Updated last month
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆73 · Updated 9 months ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆70 · Updated 4 months ago
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆22 · Updated last week
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆131 · Updated last month
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆96 · Updated 3 weeks ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆64 · Updated this week
- ☆82 · Updated 4 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆75 · Updated 2 months ago
- ☆59 · Updated 6 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated this week
- The official repository of the Omni-MATH benchmark. ☆77 · Updated 3 months ago
- ☆40 · Updated 3 weeks ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆95 · Updated 2 months ago
- ☆76 · Updated 2 months ago
- Knowledge Unlearning for Large Language Models ☆20 · Updated 2 weeks ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆104 · Updated last week
- ☆16 · Updated 2 months ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆60 · Updated 5 months ago
- ☆59 · Updated last week
- Reformatted Alignment ☆115 · Updated 6 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆67 · Updated 4 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 6 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆46 · Updated 4 months ago
- ☆49 · Updated 2 weeks ago
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆29 · Updated 11 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆29 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆109 · Updated 8 months ago