menhguin / minp_paper
Code Implementation, Evaluations, Documentation, Links and Resources for the Min P paper
☆33 · Updated 2 months ago
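For context, Min P sampling filters a model's next-token distribution with a dynamic threshold: a token survives only if its probability is at least a fraction `p_base` of the most likely token's probability. A minimal NumPy sketch of that idea (the function name and default value are illustrative, not the repo's actual API):

```python
import numpy as np

def min_p_sample(logits, p_base=0.1, rng=None):
    # Softmax: convert logits to a probability distribution
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Dynamic cutoff: scale the base threshold by the top token's probability
    threshold = p_base * probs.max()
    # Zero out tokens below the cutoff, then renormalize the survivors
    filtered = np.where(probs >= threshold, probs, 0.0)
    filtered /= filtered.sum()
    rng = np.random.default_rng() if rng is None else rng
    return rng.choice(len(probs), p=filtered)
```

Because the cutoff scales with the top token's probability, the filter is strict when the model is confident (a peaked distribution) and permissive when it is uncertain (a flat one), which is the behavior the paper evaluates.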
Alternatives and similar repositories for minp_paper
Users interested in minp_paper are comparing it to the repositories listed below.
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- ☆16 · Updated 2 months ago
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. ☆25 · Updated 3 months ago
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories ☆15 · Updated 2 weeks ago
- ☆24 · Updated 8 months ago
- ☆61 · Updated last year
- ☆79 · Updated 9 months ago
- ☆34 · Updated 11 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆36 · Updated 3 months ago
- A repository for research on medium-sized language models. ☆76 · Updated last year
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆59 · Updated last month
- ☆46 · Updated last month
- ☆17 · Updated 4 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆31 · Updated 2 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆25 · Updated 2 months ago
- ☆17 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆32 · Updated last year
- ☆89 · Updated this week
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆47 · Updated last month
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 8 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆90 · Updated 2 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 2 months ago
- ☆32 · Updated 4 months ago
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- ☆64 · Updated 2 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆102 · Updated 4 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆26 · Updated 6 months ago
- ☆45 · Updated 3 months ago