infi-coder / infibench-evaluation-harness
The InfiBench variant of bigcode-evaluation-harness: a framework for evaluating autoregressive code generation language models.
☆15, updated last year
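For context on what a harness like this reports: the sketch below implements the standard unbiased pass@k estimator (Chen et al., 2021) commonly used by bigcode-evaluation-harness and similar frameworks. It is a minimal illustration, not code from this repository; the function name and the sample counts in the usage example are hypothetical.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper (Chen et al., 2021).

    n: total samples generated per problem
    c: number of samples that passed the unit tests
    k: budget of samples considered
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k): probability that a random size-k subset
    # contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 200 samples per problem, 37 of which pass the tests
for k in (1, 10, 100):
    print(f"pass@{k} = {pass_at_k(200, 37, k):.4f}")
```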
Alternatives and similar repositories for infibench-evaluation-harness
Users interested in infibench-evaluation-harness are comparing it to the libraries listed below.
- Evaluation results of code generation LLMs (☆31, updated 2 years ago)
- ☆28, updated 9 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models (☆62, updated last year)
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW (☆63, updated last year)
- Large Language Models Meet NL2Code: A Survey (☆35, updated 11 months ago)
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation (☆154, updated last year)
- ☆66, updated 10 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages (☆56, updated last year)
- NaturalCodeBench (Findings of ACL 2024) (☆67, updated last year)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback (☆71, updated last year)
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation (☆49, updated last year)
- ☆28, updated last week
- ☆53, updated last year
- RepoQA: Evaluating Long-Context Code Understanding (☆119, updated last year)
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations (☆82, updated last year)
- Knowledge transfer from high-resource to low-resource programming languages for Code LLMs (☆16, updated 2 months ago)
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" (☆47, updated 9 months ago)
- ☆46, updated 4 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI (☆107, updated 7 months ago)
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs (☆87, updated last year)
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… (☆14, updated 6 months ago)
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format (☆27, updated 2 years ago)
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" (☆54, updated last year)
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) (☆72, updated last year)
- Open Implementations of LLM Analyses (☆107, updated last year)
- ☆160, updated last year
- ☆41, updated 4 months ago
- Training and Benchmarking LLMs for Code Preference (☆36, updated 11 months ago)
- Systematic evaluation framework that automatically rates overthinking behavior in large language models (☆93, updated 5 months ago)
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test-case construction pipeline (☆56, updated 3 months ago)