okarthikb / DPO
Implementation of Direct Preference Optimization
☆17 · Updated 2 years ago
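For context, DPO (Rafailov et al., 2023) fine-tunes a policy directly on preference pairs by maximizing the log-sigmoid of the implicit reward margin between chosen and rejected completions, with no separate reward model or PPO loop. Below is a minimal sketch of that loss in PyTorch; the function and argument names are illustrative and not taken from this repository, which may organize its code differently.

```python
# Minimal DPO loss sketch (Rafailov et al., 2023); illustrative, not this repo's exact code.
# Inputs are per-sequence summed log-probs of the chosen/rejected completions under the
# trainable policy and the frozen reference model; beta trades off against KL to the reference.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: scaled log-ratio of policy vs. reference for each completion.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the probability that the chosen completion outranks the rejected one.
    loss = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
    return loss, chosen_rewards.detach(), rejected_rewards.detach()
```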
Alternatives and similar repositories for DPO
Users interested in DPO are comparing it to the libraries listed below.
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- ☆53 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- ☆23 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- NeurIPS 2024 tutorial on LLM Inference ☆47 · Updated last year
- ☆51 · Updated 2 years ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆79 · Updated last year
- ☆91 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆68 · Updated 9 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆78 · Updated last year
- ☆53 · Updated 2 years ago
- Using FlexAttention to compute attention with different masking patterns (a minimal sketch follows this list) ☆47 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 7 months ago
- Reinforcement Learning via Regressing Relative Rewards ☆38 · Updated last year
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 11 months ago
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆33 · Updated last year
- A toolkit for scaling law research ⚖ ☆55 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆123 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- ☆108 · Updated last year
- ☆123 · Updated 11 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆184 · Updated 8 months ago
- A repository for transformer critique learning and generation ☆89 · Updated 2 years ago
- ☆75 · Updated last year
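For the FlexAttention entry above, the general idea is that PyTorch's `torch.nn.attention.flex_attention` API (available from PyTorch 2.5) lets you express masking patterns as small Python functions instead of materializing dense masks. The sketch below shows a causal mask as one such pattern; it is illustrative only, assumes PyTorch ≥ 2.5, and is not code from the listed repository.

```python
# Minimal FlexAttention sketch with a custom mask pattern; assumes PyTorch >= 2.5.
# Illustrative only -- the listed repository may use different patterns and shapes.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

device = "cuda" if torch.cuda.is_available() else "cpu"
B, H, S, D = 2, 4, 128, 64
q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))

def causal(b, h, q_idx, kv_idx):
    # Keep only positions where the query attends to itself or earlier tokens.
    return q_idx >= kv_idx

# B=None / H=None broadcast the mask over batch and heads.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device=device)
out = flex_attention(q, k, v, block_mask=block_mask)
```

Swapping in a different `mask_mod` (sliding window, document boundaries, prefix-LM, etc.) changes the pattern without rewriting the attention kernel.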