openai / code-align-evals-data
☆28 · Updated 4 years ago
Alternatives and similar repositories for code-align-evals-data
Users interested in code-align-evals-data are comparing it to the repositories listed below.
- Code for the paper "Efficient Training of Language Models to Fill in the Middle" ☆188 · Updated 2 years ago
- ☆158 · Updated 4 years ago
- Repository for analysis and experiments in the BigCode project. ☆124 · Updated last year
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆90 · Updated 2 years ago
- Minimal library to train LLMs on TPU in JAX with pjit(). ☆298 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark: https://arxiv.org/abs/2306.14898 ☆227 · Updated last year
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated last year
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆116 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods. ☆154 · Updated last year
- A hard gym for programming ☆161 · Updated last year
- This repository contains all the code for large-scale collection of code from GitHub. ☆109 · Updated 2 years ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆132 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- ☆242 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆226 · Updated 2 years ago
- HellaSwag: Can a Machine _Really_ Finish Your Sentence? ☆220 · Updated 5 years ago
- For experiments involving InstructGPT. Currently used for documenting open research questions. ☆70 · Updated 2 years ago
- Code for the curation of The Stack v2 and StarCoder2 training data ☆117 · Updated last year
- ☆173 · Updated 2 years ago
- ☆179 · Updated 2 years ago
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆217 · Updated 2 years ago
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models ☆82 · Updated last year
- A set of utilities for running few-shot prompting experiments on large language models ☆123 · Updated last year
- A repository for transformer critique learning and generation ☆88 · Updated last year
- ☆111 · Updated last year
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆154 · Updated last year
- ☆120 · Updated last year
- A unified benchmark for math reasoning ☆88 · Updated 2 years ago