princeton-nlp / intercode
[NeurIPS 2023 D&B] Code repository for the InterCode benchmark: https://arxiv.org/abs/2306.14898
☆220 · Updated last year
Alternatives and similar repositories for intercode
Users interested in intercode are comparing it to the repositories listed below.
- Accepted by Transactions on Machine Learning Research (TMLR) ☆128 · Updated 8 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆167 · Updated 10 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆145 · Updated 8 months ago
- ☆97 · Updated 11 months ago
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆88 · Updated last year
- Evaluating LLMs with fewer examples ☆158 · Updated last year
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆76 · Updated 2 weeks ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆245 · Updated 7 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆145 · Updated last year
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" @ICLR 2023 ☆248 · Updated last year
- A benchmark that challenges language models to code solutions for scientific problems ☆124 · Updated 2 weeks ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task ☆183 · Updated this week
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆143 · Updated 10 months ago
- Can Language Models Solve Olympiad Programming? ☆116 · Updated 5 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆226 · Updated 7 months ago
- Open Source WizardCoder Dataset ☆158 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆468 · Updated 4 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆326 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities ☆153 · Updated last year
- Official repo for the ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆125 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆262 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆109 · Updated 7 months ago
- Scaling Data for SWE-agents ☆256 · Updated this week
- A set of utilities for running few-shot prompting experiments on large language models ☆121 · Updated last year
- Code for the paper 🌳 Tree Search for Language Model Agents ☆201 · Updated 11 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆55 · Updated 8 months ago
- ☆270 · Updated 2 years ago
- ☆118 · Updated 11 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆215 · Updated last month
- Run evaluation on LLMs using the HumanEval benchmark ☆414 · Updated last year