Dahoas / QDSyntheticData
☆13 · Updated 9 months ago
Alternatives and similar repositories for QDSyntheticData
Users interested in QDSyntheticData are comparing it to the libraries listed below.
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- ☆23 · Updated 8 months ago
- ☆25 · Updated 8 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- ☆21 · Updated 7 months ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 3 months ago
- ☆45 · Updated last year
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆65 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆24 · Updated last year
- Discriminator-Guided Chain-of-Thought Reasoning ☆47 · Updated 7 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Inverse Constitutional AI [ICLR 2025]: compressing pairwise preference data into a short constitution of principles. ☆25 · Updated this week
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 3 months ago
- NeurIPS 2024 tutorial on LLM Inference ☆43 · Updated 5 months ago
- ☆33 · Updated last year
- [arXiv] EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees ☆18 · Updated 2 months ago
- ☆27 · Updated last year
- Replicating O1 inference-time scaling laws ☆85 · Updated 5 months ago
- NaturalProver: Grounded Mathematical Proof Generation with Language Models ☆37 · Updated 2 years ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆28 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 11 months ago
- ☆34 · Updated last year
- ☆44 · Updated 9 months ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- ☆31 · Updated 4 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated last month
- ☆29 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆18 · Updated 11 months ago
- ☆24 · Updated 6 months ago