infi-coder / infibench-evaluation-harness
The InfiBench variant of bigcode-evaluation-harness: a framework for evaluating autoregressive code generation language models.
☆15 · Updated 11 months ago
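For orientation, the upstream bigcode-evaluation-harness is driven from the command line via `accelerate launch main.py`. The minimal Python sketch below scripts such a run with `subprocess`; the checkpoint name, task name, and flags shown follow the upstream project's documented interface and are assumptions here, since the InfiBench variant may expose different tasks and options.

```python
# Minimal sketch: scripting an evaluation-harness run from Python.
# Assumes the upstream bigcode-evaluation-harness CLI conventions
# (accelerate launch main.py --model ... --tasks ...); the InfiBench
# variant may rename tasks or flags, so check its README before use.
import subprocess

cmd = [
    "accelerate", "launch", "main.py",
    "--model", "bigcode/starcoder2-3b",  # any HF causal LM checkpoint
    "--tasks", "humaneval",              # task name is an assumption
    "--limit", "10",                     # evaluate only the first 10 problems
    "--allow_code_execution",            # required to execute generated code
]
subprocess.run(cmd, check=True)
```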
Alternatives and similar repositories for infibench-evaluation-harness
Users interested in infibench-evaluation-harness are comparing it to the repositories listed below.
- ☆66 · Updated 9 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆62 · Updated last year
- Large Language Models Meet NL2Code: A Survey ☆35 · Updated 10 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆62 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated last year
- ☆53 · Updated last year
- ☆27 · Updated 8 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆71 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆71 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆153 · Updated last year
- ☆28 · Updated last week
- Accepted by Transactions on Machine Learning Research (TMLR) ☆131 · Updated last year
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆47 · Updated 8 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated last year
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆116 · Updated last year
- APIBench is a benchmark for evaluating the performance of API recommendation approaches released in the paper "Revisiting, Benchmarking a… ☆60 · Updated 2 years ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆56 · Updated 11 months ago
- ☆46 · Updated 4 months ago
- ☆51 · Updated last year
- Repo for the paper "CodeGen4Libs: A Two-Stage Approach for Library-Oriented Code Generation" ☆14 · Updated 2 years ago
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 6 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆105 · Updated 7 months ago
- ☆29 · Updated last week
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆87 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆82 · Updated last year
- TDD-Bench-Verified is a new benchmark for generating test cases for test-driven development (TDD) ☆24 · Updated 3 weeks ago
- Evaluation results of code generation LLMs ☆31 · Updated 2 years ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities ☆163 · Updated last year
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆49 · Updated 2 years ago