cmu-l3 / neurips2024-inference-tutorial-code
NeurIPS 2024 tutorial on LLM Inference
☆45 · Updated 7 months ago
Alternatives and similar repositories for neurips2024-inference-tutorial-code
Users that are interested in neurips2024-inference-tutorial-code are comparing it to the libraries listed below
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax ☆75 · Updated 11 months ago
- ☆99 · Updated last year
- ☆85 · Updated 2 months ago
- The repository contains code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆30 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆59 · Updated 3 months ago
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 5 months ago
- ☆61 · Updated last week
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆59 · Updated 5 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆33 · Updated last week
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 10 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆57 · Updated last year
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆46 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆81 · Updated 9 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 11 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆60 · Updated 6 months ago
- ☆114 · Updated 6 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- ☆101 · Updated 10 months ago
- ☆21 · Updated 3 months ago
- ☆34 · Updated 6 months ago
- The official project for our paper: Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers ☆30 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 5 months ago
- ☆53 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆59 · Updated last week