microsoft / prose-benchmarks
PROSE Public Benchmark Suite
☆27 · Updated 2 weeks ago
Alternatives and similar repositories for prose-benchmarks
Users interested in prose-benchmarks are comparing it to the repositories listed below.
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆59 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆48 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆62 · Updated 10 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated 10 months ago
- ☆78 · Updated 4 months ago
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆64 · Updated 2 years ago
- ☆35 · Updated last month
- ☆51 · Updated last year
- Language Models of Code are Few-Shot Commonsense Learners (EMNLP 2022) ☆86 · Updated 2 years ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches ☆76 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆68 · Updated 9 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Evaluating tool-augmented LLMs in conversation settings ☆85 · Updated last year
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆54 · Updated last week
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆81 · Updated last year
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆68 · Updated 11 months ago
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆85 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- Official implementation for "Extending LLMs’ Context Window with 100 Samples" ☆79 · Updated last year
- [EMNLP 2024] "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆26 · Updated 8 months ago
- ☆20 · Updated 4 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆52 · Updated 2 weeks ago
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆85 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆130 · Updated 10 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 9 months ago
- List of papers on Self-Correction of LLMs ☆74 · Updated 7 months ago
- ☆65 · Updated last year