openai / code-align-evals-data
☆28 · Updated 4 years ago
Alternatives and similar repositories for code-align-evals-data
Users interested in code-align-evals-data are comparing it to the libraries listed below.
- Code for the paper "Efficient Training of Language Models to Fill in the Middle" ☆195 · Updated 2 years ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Updated last year
- For experiments involving InstructGPT. Currently used for documenting open research questions. ☆71 · Updated 3 years ago
- Minimal library to train LLMs on TPU in JAX with pjit(). ☆299 · Updated last year
- ☆160 · Updated 4 years ago
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆90 · Updated 2 years ago
- ☆180 · Updated 2 years ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆118 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- ☆248 · Updated 2 years ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- A hard gym for programming ☆162 · Updated last year
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models ☆85 · Updated 2 years ago
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark https://arxiv.org/abs/2306.14898 ☆231 · Updated last year
- A set of utilities for running few-shot prompting experiments on large language models ☆126 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- Language Models of Code are Few-Shot Commonsense Learners (EMNLP 2022) ☆86 · Updated 2 years ago
- This repository contains all the code for collecting large amounts of code from GitHub. ☆110 · Updated 2 years ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆161 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆136 · Updated last year
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆220 · Updated 2 years ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆155 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated last year
- Repository for analysis and experiments in the BigCode project. ☆127 · Updated last year
- ☆159 · Updated 2 years ago
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- The dataset and code for the paper "TheoremQA: A Theorem-driven Question Answering Dataset" ☆160 · Updated last year
- RL algorithm: Advantage-Induced Policy Alignment ☆66 · Updated 2 years ago
- A framework for human-readable prompt-based methods with large language models. Specially designed for researchers. (Deprecated, check out… ☆131 · Updated 2 years ago