microsoft / prose-benchmarks
PROSE Public Benchmark Suite
☆24 · Updated last month
Related projects
Alternatives and complementary repositories for prose-benchmarks
- NaturalCodeBench (Findings of ACL 2024) ☆56 · Updated 3 weeks ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆52 · Updated last month
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆43 · Updated 10 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated 7 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆74 · Updated last month
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆52 · Updated 2 months ago
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆61 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆65 · Updated 4 months ago
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆34 · Updated 11 months ago
- Language Models of Code are Few-Shot Commonsense Learners (EMNLP 2022) ☆86 · Updated last year
- LMTuner: Make the LLM Better for Everyone ☆33 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆41 · Updated 2 weeks ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches (a sketch of the idea follows this list) ☆68 · Updated 8 months ago
- Repository for Skill Set Optimization ☆12 · Updated 3 months ago
- Lightweight tool to identify data contamination in LLM evaluation ☆40 · Updated 8 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆25 · Updated last year
- evol: augment any dataset online ☆55 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆99 · Updated last week
- Scalable Meta-Evaluation of LLMs as Evaluators ☆41 · Updated 8 months ago
- Scratchpad/Chain-of-Thought Prompts ☆12 · Updated 2 years ago
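The batch-prompting entry above describes packing several questions into a single model call and splitting the reply back out. Below is a minimal, hypothetical sketch of that general idea, not code from the listed repository; `call_llm` is a placeholder for whatever LLM client you actually use.

```python
import re

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client; replace with your own API call."""
    raise NotImplementedError

def batch_prompt(questions: list[str]) -> list[str]:
    # Number the questions so the model can mirror the numbering back.
    numbered = "\n".join(f"Q{i + 1}: {q}" for i, q in enumerate(questions))
    prompt = (
        "Answer each question on its own line, prefixed with A<number>:\n"
        + numbered
    )
    reply = call_llm(prompt)
    # Recover answers by their A<number>: prefixes; unanswered ones stay empty.
    answers = {
        int(m.group(1)): m.group(2).strip()
        for m in re.finditer(r"^A(\d+):\s*(.*)$", reply, re.MULTILINE)
    }
    return [answers.get(i + 1, "") for i in range(len(questions))]
```

A single call amortizes the prompt overhead across all questions, which is the cost saving batch prompting targets; the trade-off is that parsing can fail if the model breaks the numbering convention.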