cmu-l3 / neurips2024-inference-tutorial-code
NeurIPS 2024 tutorial on LLM Inference
☆47 · Updated 11 months ago
Alternatives and similar repositories for neurips2024-inference-tutorial-code
Users interested in neurips2024-inference-tutorial-code are comparing it to the libraries listed below.
- ☆100 · Updated last year
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… · ☆75 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models · ☆64 · Updated 6 months ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) · ☆72 · Updated last year
- The repository contains code for Adaptive Data Optimization · ☆28 · Updated 11 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… · ☆32 · Updated last year
- Language models scale reliably with over-training and on downstream tasks · ☆100 · Updated last year
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated last year
- Replicating O1 inference-time scaling laws · ☆90 · Updated 11 months ago
- ☆75 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models · ☆67 · Updated 8 months ago
- ☆108 · Updated last year
- Exploration of automated dataset selection approaches at large scales. · ☆48 · Updated 8 months ago
- Long Context Extension and Generalization in LLMs · ☆62 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆60 · Updated last year
- ☆77 · Updated 2 weeks ago
- ☆155 · Updated 11 months ago
- ☆104 · Updated last year
- A brief and partial summary of RLHF algorithms. · ☆136 · Updated 8 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation · ☆68 · Updated 6 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models · ☆41 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆60 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" · ☆64 · Updated 9 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning · ☆52 · Updated last year
- ☆87 · Updated last year
- ☆124 · Updated 8 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples · ☆110 · Updated 3 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" · ☆121 · Updated last year
- ☆53 · Updated 9 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards · ☆44 · Updated 7 months ago