infi-coder / infibench-evaluator
The evaluation framework for the InfiCoder-Eval benchmark.
☆20 · Updated 8 months ago
Alternatives and similar repositories for infibench-evaluator:
Users interested in infibench-evaluator are comparing it to the libraries listed below.
- Training and Benchmarking LLMs for Code Preference ☆33 · Updated 4 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (ACL 2024 SRW Oral) ☆58 · Updated 5 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆30 · Updated 8 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated 11 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆64 · Updated 6 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Code and dataset for the EMNLP 2022 Findings paper "Benchmarking Language Models for Code Syntax Understanding" ☆14 · Updated 2 years ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆133 · Updated 5 months ago
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML 2023) ☆85 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- A Lightweight Visual Reasoning Benchmark for Evaluating Large Multimodal Models through Complex Diagrams in Coding Tasks ☆6 · Updated last month
- CodeUltraFeedback: aligning large language models to coding preferences ☆70 · Updated 9 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆36 · Updated last week
- RepoQA: Evaluating Long-Context Code Understanding ☆106 · Updated 4 months ago
- [EMNLP 2023] Execution-Based Evaluation for Open Domain Code Generation ☆47 · Updated last year
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models ☆40 · Updated 2 weeks ago
- On the Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆38 · Updated 2 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆52 · Updated 11 months ago
- Source code for "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 5 months ago
- Official repository of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆45 · Updated 8 months ago