likenneth / q_probe
Q-Probe: A Lightweight Approach to Reward Maximization for Language Models
☆41 · Updated 11 months ago
Alternatives and similar repositories for q_probe
Users interested in q_probe are comparing it to the libraries listed below
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆24 · Updated 5 months ago
- NeurIPS 2024 tutorial on LLM Inference ☆45 · Updated 5 months ago
- ☆49 · Updated 3 weeks ago
- ☆27 · Updated this week
- The official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) ☆27 · Updated last week
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆32 · Updated 4 months ago
- Reinforcing General Reasoning without Verifiers ☆33 · Updated last week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 2 months ago
- A testbed for agents and environments that can automatically improve models through data generation. ☆24 · Updated 3 months ago
- Minimal implementation of the "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" paper (arXiv 2401.01335) ☆29 · Updated last year
- ☆21 · Updated 8 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- ☆61 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆70 · Updated 11 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning https://arxiv.org/pdf/2410.01044 ☆33 · Updated 8 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 8 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆42 · Updated this week
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 3 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆88 · Updated last week
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆53 · Updated last month
- ☆24 · Updated 8 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated 9 months ago
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories ☆15 · Updated 3 weeks ago
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated 8 months ago
- ☆93 · Updated 11 months ago