xlang-ai / DS-1000
[ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation".
☆221 · Updated last week
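
A minimal, hedged sketch of how one might load DS-1000 problems for evaluation — assuming the benchmark is mirrored on the Hugging Face Hub under the ID `xlangai/DS-1000` and exposes `prompt` and `reference_code` fields (the ID, split, and field names are assumptions; see the repository itself for the actual loading instructions and schema):

```python
# Minimal sketch, not the official DS-1000 harness.
# Assumes the benchmark is mirrored on the Hugging Face Hub as "xlangai/DS-1000"
# with "prompt" and "reference_code" columns; check the repo for the real schema.
from datasets import load_dataset

problems = load_dataset("xlangai/DS-1000", split="test")  # assumed dataset ID and split

example = problems[0]
print(example["prompt"])          # natural-language task plus partial code context
print(example["reference_code"])  # canonical solution checked by execution-based tests
```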
Related projects
Alternatives and complementary repositories for DS-1000
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆233 · Updated 6 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆133 · Updated 2 months ago
- Data and Code for Program of Thoughts (TMLR 2023) ☆243 · Updated 5 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆200 · Updated last month
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆104 · Updated 5 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆120 · Updated 3 months ago
- Open Source WizardCoder Dataset ☆153 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆111 · Updated 3 weeks ago
- A multi-programming language benchmark for LLMs ☆206 · Updated 2 weeks ago
- An Analytical Evaluation Board of Multi-turn LLM Agents ☆243 · Updated 5 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive evaluation benchmark for long-context language models ☆358 · Updated 4 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆57 · Updated 3 months ago
- [ACL 2024] AUTOACT: Automatic Agent Learning from Scratch for QA via Self-Planning ☆177 · Updated last month
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆143 · Updated 8 months ago
- A new tool-learning benchmark aiming for a well-balanced trade-off between stability and realism, built on ToolBench. ☆112 · Updated last month
- RewardBench: the first evaluation tool for reward models. ☆424 · Updated 2 weeks ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆292 · Updated 3 weeks ago
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark https://arxiv.org/abs/2306.14898 ☆194 · Updated 6 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆408 · Updated 8 months ago
- Generative Judge for Evaluating Alignment ☆216 · Updated 9 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆77 · Updated 4 months ago
- A Comprehensive Benchmark for Software Development. ☆85 · Updated 5 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆239 · Updated last year
- This is a collection of research papers on Self-Correcting Large Language Models with Automated Feedback. ☆431 · Updated last week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆435 · Updated 7 months ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆433 · Updated last month
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆315 · Updated 10 months ago