ARiSE-Lab / CYCLE_OOPSLA_24
Open-source repository for the OOPSLA'24 paper "CYCLE: Learning to Self-Refine Code Generation"
☆10 · Updated last year
Alternatives and similar repositories for CYCLE_OOPSLA_24
Users interested in CYCLE_OOPSLA_24 are comparing it to the libraries listed below.
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆71 · Updated last year
- ☆28 · Updated this week
- Reinforcement Learning for Repository-Level Code Completion ☆40 · Updated last year
- ☆53 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆62 · Updated last year
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 6 months ago
- ☆17 · Updated 2 months ago
- Training and Benchmarking LLMs for Code Preference ☆36 · Updated 11 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆62 · Updated last year
- ☆41 · Updated 3 months ago
- ☆27 · Updated 9 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆82 · Updated last year
- ☆29 · Updated last year
- ☆72 · Updated last year
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated last year
- Official code implementation for the ACL 2025 paper "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆25 · Updated 5 months ago
- ☆35 · Updated 2 years ago
- ☆20 · Updated 6 months ago
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated last year
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆66 · Updated last year
- Syntax Error-Free and Generalizable Tool Use for LLMs via Finite-State Decoding ☆27 · Updated last year
- [KDD24-ADS] R-Eval: A Unified Toolkit for Evaluating Domain Knowledge of Retrieval Augmented Large Language Models ☆12 · Updated last year
- Code for the paper "SoAy: A Service-oriented APIs Applying Framework of Large Language Models" ☆27 · Updated 3 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆58 · Updated last month
- ☆33 · Updated last year
- ☆40 · Updated 5 months ago
- ☆22 · Updated last year
- [NeurIPS 2025 Spotlight] ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning ☆125 · Updated last month
- ☆32 · Updated last month
- [NeurIPS'25] Official implementation of RISE (Reinforcing Reasoning with Self-Verification) ☆30 · Updated 2 months ago