andersonbcdefg / dpo-lora
direct preference optimization with only 1 model copy :)
☆14 · Updated 2 years ago
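The tagline refers to a known trick: when the policy is a LoRA-adapted copy of the base model, the DPO reference model is just the same network with the adapters disabled, so no second model copy is needed. Below is a minimal, hedged sketch of that idea — the `dpo_loss` formula follows the standard DPO objective, while `ToyLoraModel` is a hypothetical stand-in (not this repo's actual code) whose adapter toggle mimics switching LoRA on and off.

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) pair of sequence
    log-probs: -log(sigmoid(beta * (chosen margin - rejected margin)))."""
    logits = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    return math.log1p(math.exp(-logits))  # == -log(sigmoid(logits))

class ToyLoraModel:
    """Hypothetical stand-in for a LoRA-wrapped LM: a sequence log-prob is
    the frozen base score plus (optionally) the adapter's contribution.
    Disabling the adapter recovers the reference model for free."""
    def __init__(self, base_scores, adapter_deltas):
        self.base = base_scores       # {completion_id: base-model logprob}
        self.delta = adapter_deltas   # {completion_id: adapter contribution}
        self.adapter_enabled = True

    def logprob(self, key):
        lp = self.base[key]
        if self.adapter_enabled:
            lp += self.delta.get(key, 0.0)
        return lp

# Usage: one model object serves as both policy and reference.
model = ToyLoraModel(base_scores={"chosen": -1.5, "rejected": -1.5},
                     adapter_deltas={"chosen": 0.5, "rejected": -0.5})

model.adapter_enabled = True          # policy pass (adapters on)
pi_c, pi_r = model.logprob("chosen"), model.logprob("rejected")
model.adapter_enabled = False         # reference pass (adapters off)
ref_c, ref_r = model.logprob("chosen"), model.logprob("rejected")

loss = dpo_loss(pi_c, pi_r, ref_c, ref_r)
```

Here the adapter already prefers the chosen completion, so the loss falls below the uninformative value of log 2 ≈ 0.693 that you get when policy and reference agree exactly.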
Alternatives and similar repositories for dpo-lora
Users interested in dpo-lora are comparing it to the libraries listed below.
- ☆75 · Updated last year
- Code for the NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆234 · Updated 5 months ago
- Public Inflection Benchmarks ☆68 · Updated last year
- ModuleFormer is an MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 3 months ago
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- ☆125 · Updated 10 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- A 7B-parameter model for mathematical reasoning ☆40 · Updated 10 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆222 · Updated 2 weeks ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 10 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆190 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- ☆100 · Updated 6 months ago
- Evaluating LLMs with fewer examples ☆170 · Updated last year
- ☆62 · Updated 2 years ago
- Token-level adaptation of LoRA matrices for downstream-task generalization ☆14 · Updated last year
- ☆136 · Updated 9 months ago
- ☆147 · Updated 3 months ago
- nanoGPT-like codebase for LLM training ☆113 · Updated last month
- ☆111 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation ☆73 · Updated 8 months ago
- ☆126 · Updated 2 months ago
- NSA Triton kernels written with GPT5 and Opus 4.1 ☆69 · Updated 4 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated last month
- ☆200 · Updated 8 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆117 · Updated 2 years ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year