openai / code-align-evals-data
☆28 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for code-align-evals-data
- Code for the paper "Efficient Training of Language Models to Fill in the Middle" ☆167 · Updated last year
- ☆50 · Updated 4 months ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆96 · Updated 10 months ago
- ☆75 · Updated last year
- Repository for analysis and experiments in the BigCode project. ☆115 · Updated 7 months ago
- A set of utilities for running few-shot prompting experiments on large language models ☆112 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆44 · Updated 10 months ago
- This repository contains all the code for collecting code from GitHub at large scale. ☆105 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆118 · Updated last month
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆104 · Updated 5 months ago
- ☆105 · Updated 3 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆111 · Updated last month
- Script for downloading GitHub. ☆88 · Updated 4 months ago
- The test set for Koala ☆45 · Updated last year
- ☆175 · Updated last year
- Distill ChatGPT's coding ability into a small model (1B) ☆24 · Updated last year
- ☆147 · Updated 3 years ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆53 · Updated 2 months ago
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated last year
- Training language models to make programs faster ☆81 · Updated 6 months ago
- ☆47 · Updated last year
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆176 · Updated 9 months ago
- Releasing code for "ReCode: Robustness Evaluation of Code Generation Models" ☆48 · Updated 7 months ago
- ☆158 · Updated last year
- A repository for transformer critique learning and generation ☆85 · Updated 11 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆121 · Updated 3 months ago
- Minimal library to train LLMs on TPU in JAX with pjit(). ☆277 · Updated 10 months ago
- ☆221 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆73 · Updated 3 months ago
- A hard gym for programming ☆140 · Updated 4 months ago