facebookresearch / coder_reviewer_reranking
Official code release for the paper Coder Reviewer Reranking for Code Generation.
☆45 · Updated 2 years ago
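The paper's core idea is to sample many candidate programs from a "coder" model and rerank them by combining the coder likelihood p(code | prompt) with a "reviewer" likelihood p(prompt | code). The snippet below is a minimal sketch of that scoring rule only, not the repository's actual interface; the `coder_logprob` and `reviewer_logprob` callables are assumed placeholders for model-based log-probability scoring.

```python
# Minimal sketch of Coder-Reviewer reranking (illustrative, not the repo's API).
from typing import Callable, List, Tuple


def coder_reviewer_rerank(
    prompt: str,
    candidates: List[str],
    coder_logprob: Callable[[str, str], float],     # assumed: log p(code | prompt)
    reviewer_logprob: Callable[[str, str], float],  # assumed: log p(prompt | code)
) -> List[Tuple[str, float]]:
    """Rank sampled programs by log p(code|prompt) + log p(prompt|code)."""
    scored = [
        (code, coder_logprob(prompt, code) + reviewer_logprob(prompt, code))
        for code in candidates
    ]
    # Highest combined score first; the top candidate is the one you would return.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Toy stand-ins for the two model scores, just to show the call shape.
    demo = coder_reviewer_rerank(
        "return the square of x",
        ["def f(x): return x * x", "def f(x): return x + x"],
        coder_logprob=lambda p, c: -0.1 * len(c),                  # placeholder
        reviewer_logprob=lambda p, c: 0.0 if "*" in c else -5.0,   # placeholder
    )
    print(demo[0][0])  # best-ranked candidate
```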
Alternatives and similar repositories for coder_reviewer_reranking
Users interested in coder_reviewer_reranking are comparing it to the libraries listed below:
- ☆80 · Updated 8 months ago
- Google Research ☆46 · Updated 3 years ago
- This repo contains data and code for the paper "Reasoning over Public and Private Data in Retrieval-Based Systems." ☆46 · Updated last year
- This project shows how to derive the total number of training tokens from a large text dataset from 🤗 datasets with Apache Beam and Data… ☆27 · Updated 3 years ago
- A repository containing the Jupyter notebook code generation benchmark. ☆62 · Updated 3 years ago
- This is the official PyTorch repo for "UNIREX: A Unified Learning Framework for Language Model Rationale Extraction" (ICML 2022). ☆26 · Updated 2 years ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- ☆44 · Updated last year
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Repo for the ICML'23 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- Code for our paper "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆57 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- Few-shot Learning with Auxiliary Data ☆31 · Updated 2 years ago
- Ranking of fine-tuned HF models as base models. ☆36 · Updated 2 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- ☆76 · Updated last year
- A diff tool for language models ☆44 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022) ☆74 · Updated 3 years ago
- ☆98 · Updated 2 years ago
- [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks ☆60 · Updated 3 years ago
- [ACL 2023] Training Trajectories of Language Models Across Scales (https://arxiv.org/pdf/2212.09803.pdf) ☆25 · Updated 2 years ago
- Code for "Natural Language to Code Translation with Execution" ☆41 · Updated 3 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year