austrian-code-wizard / c3po
☆27 · Updated this week
Alternatives and similar repositories for c3po
Users interested in c3po are comparing it to the libraries listed below.
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- ☆42 · Updated last month
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 11 months ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆26 · Updated 5 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Measuring and Controlling Persona Drift in Language Model Dialogs ☆17 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 3 months ago
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆34 · Updated last year
- Repository for Skill Set Optimization ☆13 · Updated 10 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆25 · Updated 2 months ago
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- Repository containing code for Adaptive Data Optimization ☆24 · Updated 5 months ago
- Verifiers for LLM Reinforcement Learning ☆55 · Updated last month
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆34 · Updated 11 months ago
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆42 · Updated this week
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆47 · Updated 6 months ago
- ☆48 · Updated last year
- Codebase accompanying the Summary of a Haystack paper ☆78 · Updated 8 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated 11 months ago
- ☆37 · Updated 2 years ago
- Learning to Retrieve by Trying - Source code for Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval ☆34 · Updated 7 months ago
- ☆24 · Updated 8 months ago
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- ☆69 · Updated last year