infi-coder / infibench-evaluation-harness
The InfiBench variant of bigcode-evaluation-harness, a framework for evaluating autoregressive code generation language models.
☆13 · Updated 9 months ago
Alternatives and similar repositories for infibench-evaluation-harness
Users interested in infibench-evaluation-harness are comparing it to the repositories listed below.
- ☆67 · Updated 7 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆59 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆68 · Updated 9 months ago
- ☆27 · Updated 6 months ago
- ☆46 · Updated 2 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆68 · Updated 11 months ago
- ☆51 · Updated last year
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆48 · Updated 2 years ago
- ☆91 · Updated this week
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆81 · Updated last year
- Open Implementations of LLM Analyses ☆106 · Updated 10 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆62 · Updated 10 months ago
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆97 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆56 · Updated 9 months ago
- ☆28 · Updated 3 weeks ago
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆45 · Updated 6 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆102 · Updated 5 months ago
- Training and Benchmarking LLMs for Code Preference ☆34 · Updated 8 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆85 · Updated last year
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza…" ☆20 · Updated 8 months ago
- Large Language Models Meet NL2Code: A Survey ☆35 · Updated 8 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆152 · Updated 10 months ago
- ☆1 · Updated 11 months ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆130 · Updated 10 months ago
- 🔔🧠 Easily experiment with popular language agents across diverse reasoning/decision-making benchmarks! ☆52 · Updated last month
- ☆23 · Updated 2 years ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆113 · Updated 9 months ago