facebookresearch / coder_reviewer_reranking
Official code release for the paper Coder Reviewer Reranking for Code Generation.
☆45 · Updated 2 years ago
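For context, the paper's reranking rule scores each sampled program by combining a "coder" score, the model's log-probability of the program given the prompt, with a "reviewer" score, the log-probability of the prompt given the program, and keeps the highest-scoring candidate. Below is a minimal Python sketch of that ranking step (an illustration only, not this repository's API; the log-probabilities are assumed to be computed separately with a code language model):

```python
# Minimal sketch of the Coder-Reviewer ranking rule (illustrative only):
# each candidate program is scored by the sum of a "coder" log-probability
# log p(program | prompt) and a "reviewer" log-probability
# log p(prompt | program); candidates are returned best-first.
# The log-probabilities are assumed to come from a code LM scored elsewhere.

from typing import List, Tuple


def rerank(candidates: List[Tuple[str, float, float]]) -> List[str]:
    """candidates: (program, coder_logprob, reviewer_logprob) triples."""
    ranked = sorted(candidates, key=lambda c: c[1] + c[2], reverse=True)
    return [program for program, _, _ in ranked]


if __name__ == "__main__":
    # Toy example with made-up scores: the second candidate wins because
    # its combined coder + reviewer log-probability is higher.
    print(rerank([("return a - b", -3.0, -9.0), ("return a + b", -3.5, -2.0)]))
```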
Alternatives and similar repositories for coder_reviewer_reranking
Users that are interested in coder_reviewer_reranking are comparing it to the libraries listed below
- A repository containing the Jupyter notebook code generation benchmark. ☆61 · Updated 3 years ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆48 · Updated last year
- This repo contains data and code for the paper "Reasoning over Public and Private Data in Retrieval-Based Systems." ☆46 · Updated last year
- ☆78 · Updated 4 months ago
- This project shows how to derive the total number of training tokens from a large text dataset from 🤗 datasets with Apache Beam and Data… ☆27 · Updated 2 years ago
- Finding semantically meaningful and accurate prompts. ☆47 · Updated last year
- ☆44 · Updated 8 months ago
- Google Research ☆46 · Updated 2 years ago
- Embedding Recycling for Language models ☆39 · Updated 2 years ago
- A repository for transformer critique learning and generation ☆90 · Updated last year
- This is the official PyTorch repo for "UNIREX: A Unified Learning Framework for Language Model Rationale Extraction" (ICML 2022). ☆26 · Updated 2 years ago
- Code for the NLP4Prog workshop paper "Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation" ☆21 · Updated 3 years ago
- Repository for the code and dataset for the paper: "Have LLMs Advanced enough? Towards Harder Problem Solving Benchmarks For Large Langu… ☆39 · Updated last year
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- ☆119 · Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- Code for paper "LEVER: Learning to Verifiy Language-to-Code Generation with Execution" (ICML'23)☆89Updated 2 years ago
- Few-shot Learning with Auxiliary Data ☆31 · Updated last year
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆55 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- ☆54 · Updated 2 years ago
- ☆48 · Updated last year
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆94 · Updated last year
- [ACL 2023]: Training Trajectories of Language Models Across Scales https://arxiv.org/pdf/2212.09803.pdf ☆24 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- ☆75 · Updated last year
- Minimum Description Length probing for neural network representations ☆18 · Updated 6 months ago