microsoft / DataScienceProblems
A repository containing the Jupyter notebook code generation benchmark.
☆59 · Updated 3 years ago
Alternatives and similar repositories for DataScienceProblems
Users interested in DataScienceProblems are comparing it to the libraries listed below
- Official code release for the paper Coder Reviewer Reranking for Code Generation. ☆45 · Updated 2 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆44 · Updated 2 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated 2 years ago
- Code for generating the JuICe dataset. ☆37 · Updated 4 years ago
- Code for paper "LEVER: Learning to Verifiy Language-to-Code Generation with Execution" (ICML'23)☆90Updated 2 years ago
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" @ICLR 2023 ☆251 · Updated 2 years ago
- Code, datasets and results of the ChatGPT evaluation presented in paper "ChatGPT: Jack of all trades, master of none" ☆29 · Updated 2 years ago
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- This repo contains data and code for the paper "Reasoning over Public and Private Data in Retrieval-Based Systems." ☆46 · Updated last year
- Code for the NLP4Prog workshop paper "Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation" ☆21 · Updated 4 years ago
- PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022) ☆76 · Updated 3 years ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ☆62 · Updated 3 years ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆57 · Updated 2 years ago
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆80 · Updated last year
- A diff tool for language models ☆44 · Updated 2 years ago
- Evaluation suite for large-scale language models. ☆129 · Updated 4 years ago
- A unified benchmark for math reasoning ☆89 · Updated 3 years ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago