lilakk / BLEUBERI
Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following"
☆23 · Updated 3 weeks ago
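The paper's core recipe is simple to sketch: score a model response against one or more gold references with BLEU and use that score as a scalar reward for RL-style finetuning (e.g., GRPO). A minimal sketch of such a reward function, assuming the `sacrebleu` package; the `bleu_reward` helper is hypothetical, not the repository's actual API:

```python
# Hypothetical sketch of BLEU-as-reward (not the authors' code):
# compute sentence-level BLEU against gold references and rescale
# it to [0, 1] so it can serve as a scalar RL reward.
import sacrebleu

def bleu_reward(response: str, references: list[str]) -> float:
    """Sentence-level BLEU in [0, 1], usable as a reward signal."""
    # sacrebleu reports BLEU in [0, 100]; rescale to [0, 1].
    return sacrebleu.sentence_bleu(response, references).score / 100.0

refs = ["The capital of France is Paris."]
print(bleu_reward("Paris is the capital of France.", refs))  # partial n-gram overlap -> lower reward
print(bleu_reward("The capital of France is Paris.", refs))  # exact match -> 1.0
```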
Alternatives and similar repositories for BLEUBERI
Users who are interested in BLEUBERI are comparing it to the repositories listed below.
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Verifiers for LLM Reinforcement Learning ☆61 · Updated 2 months ago
- ☆35 · Updated last year
- ☆51 · Updated 7 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 5 months ago
- ☆15 · Updated 2 months ago
- ☆47 · Updated 4 months ago
- ☆20 · Updated 3 months ago
- PyTorch implementation for MRL ☆18 · Updated last year
- ☆27 · Updated this week
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆32 · Updated 2 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- ☆47 · Updated 10 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆59 · Updated last month
- ☆61 · Updated 3 weeks ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆33 · Updated 3 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 4 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆33 · Updated 8 months ago
- ☆24 · Updated 9 months ago
- ☆65 · Updated 2 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 9 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆55 · Updated this week
- The official code repo and data hub for the top_nsigma sampling strategy for LLMs. ☆26 · Updated 4 months ago
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found here… ☆31 · Updated last year
- A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback. ☆30 · Updated 2 months ago
- Code, results and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family. ☆29 · Updated 2 months ago
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year