okarthikb / DPO
Implementation of Direct Preference Optimization
☆17 · Updated 2 years ago
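For context on what the repository implements, here is a minimal sketch of the DPO objective from Rafailov et al. (2023). The function and argument names are illustrative assumptions, not taken from the okarthikb/DPO codebase:

```python
# Minimal DPO loss sketch (assumed names, not this repo's API).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is a batch of summed per-token log-probs for a response."""
    # Log-ratios of the policy to the frozen reference model.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # DPO maximizes the margin between chosen and rejected log-ratios,
    # scaled by beta and passed through a log-sigmoid.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```

The reference log-probs are computed once with the frozen pre-DPO model, so no reward model or RL loop is needed; beta controls how far the policy may drift from the reference.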
Alternatives and similar repositories for DPO
Users interested in DPO are comparing it to the libraries listed below.
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆67 · Updated 7 months ago
- ☆89 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆76 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆47 · Updated last year
- ☆106 · Updated last year
- ☆109 · Updated last year
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 10 months ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- ☆53 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- ☆50 · Updated last year
- Code for Adaptive Data Optimization ☆29 · Updated last year
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated last year
- ☆23 · Updated 10 months ago
- A toolkit for scaling law research ⚖ ☆53 · Updated 10 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆47 · Updated last year
- ☆33 · Updated 11 months ago
- ☆45 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆52 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- ☆53 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆123 · Updated 10 months ago