martin-wey / CodeUltraFeedback
CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025)
☆72 · Updated last year
Alternatives and similar repositories for CodeUltraFeedback
Users interested in CodeUltraFeedback are comparing it to the repositories listed below.
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆73 · Updated last year
- This repository contains code for Adaptive Data Optimization ☆28 · Updated 11 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆84 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax ☆76 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆86 · Updated 6 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at the ACL 2024 Student Research Workshop ☆64 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 4 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆47 · Updated 11 months ago
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 2 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆55 · Updated last year
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner ☆28 · Updated last year
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs. ACL 2023. ☆64 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 5 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year