microsoft / prose-benchmarks
PROSE Public Benchmark Suite
☆27 · Updated last week
Alternatives and similar repositories for prose-benchmarks
Users interested in prose-benchmarks are comparing it to the libraries listed below.
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆62 · Updated 11 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated 11 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆62 · Updated last year
- CodeUltraFeedback: Aligning large language models to coding preferences (TOSEM 2025) ☆72 · Updated last year
- ☆78 · Updated 6 months ago
- ☆53 · Updated last year
- ☆28 · Updated 3 weeks ago
- ☆39 · Updated 3 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆87 · Updated last year
- Language Models of Code are Few-Shot Commonsense Learners (EMNLP 2022) ☆85 · Updated 2 years ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Training and Benchmarking LLMs for Code Preference ☆36 · Updated 10 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆81 · Updated last year
- Official implementation of "Extending LLMs' Context Window with 100 Samples" ☆80 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 10 months ago
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆64 · Updated 2 years ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches ☆77 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Evol-augment any dataset online ☆60 · Updated 2 years ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆48 · Updated 8 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 10 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆68 · Updated last year
- ☆28 · Updated 8 months ago
- ☆41 · Updated last year
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆42 · Updated 2 months ago
- ☆65 · Updated last year
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆55 · Updated last month
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆86 · Updated last year