amazon-science / llm-code-preference
Training and Benchmarking LLMs for Code Preference.
☆34 · Updated 8 months ago
Alternatives and similar repositories for llm-code-preference
Users interested in llm-code-preference are comparing it to the repositories listed below.
- RepoQA: Evaluating Long-Context Code Understanding ☆113 · Updated 9 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆62 · Updated 9 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆48 · Updated last year
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆67 · Updated 11 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆151 · Updated 9 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆59 · Updated last year
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆136 · Updated 3 weeks ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated 11 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆33 · Updated last year
- 🚀 SWE-bench Goes Live! ☆103 · Updated last week
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆102 · Updated 4 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆105 · Updated 2 months ago
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆139 · Updated 10 months ago
- Open-Source LLM Coders with Co-Evolving Reinforcement Learning ☆99 · Updated last week
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆51 · Updated this week
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated 10 months ago
- Code for paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆89 · Updated 2 years ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆109 · Updated 6 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆91 · Updated 2 months ago