microsoft / prose-benchmarks
PROSE Public Benchmark Suite
☆28 · Updated 3 months ago
Alternatives and similar repositories for prose-benchmarks
Users interested in prose-benchmarks are comparing it to the libraries listed below.
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆87 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆68 · Updated last year
- ☆41 · Updated 6 months ago
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆66 · Updated 2 years ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆64 · Updated last year
- ☆80 · Updated 8 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆73 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆89 · Updated last year
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆55 · Updated 4 months ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- Language Models of Code are Few-Shot Commonsense Learners (EMNLP 2022) ☆86 · Updated 2 years ago
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆86 · Updated last year
- ☆54 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆136 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆100 · Updated 2 years ago
- Open Implementations of LLM Analyses ☆108 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆81 · Updated last year
- [EMNLP 2023 Industry Track] A simple prompting approach that enables the LLMs to run inference in batches. ☆76 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- LMTuner: Make the LLM Better for Everyone ☆37 · Updated 2 years ago
- RepoQA: Evaluating Long-Context Code Understanding ☆125 · Updated last year
- ☆28 · Updated last month
- Plug-and-play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation ☆74 · Updated 2 years ago
- Evaluating tool-augmented LLMs in conversation settings ☆88 · Updated last year
- Source code for GreaTer (ICLR 2025) - Gradient Over Reasoning makes Smaller Language Models Strong Prompt Optimizers ☆34 · Updated 8 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆45 · Updated last year
- evol augment any dataset online ☆61 · Updated 2 years ago