xlang-ai / DS-1000
[ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation".
☆222 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for DS-1000
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆133 · Updated 3 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆235 · Updated 7 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆122 · Updated 3 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆115 · Updated last month
- An Analytical Evaluation Board of Multi-turn LLM Agents ☆250 · Updated 6 months ago
- Data and Code for Program of Thoughts (TMLR 2023) ☆243 · Updated 6 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆240 · Updated last year
- A new tool-learning benchmark aiming at well-balanced stability and realism, based on ToolBench. ☆115 · Updated 2 months ago
- ToolBench, an evaluation suite for LLM tool-manipulation capabilities. ☆145 · Updated 8 months ago
- ☆117 · Updated last year
- A multi-programming-language benchmark for LLMs ☆208 · Updated this week
- Open Source WizardCoder Dataset ☆153 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆285 · Updated last month
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆57 · Updated 4 months ago
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… ☆104 · Updated 5 months ago
- A collection of practical code generation tasks and tests in open-source projects. Complementary to HumanEval by OpenAI. ☆121 · Updated 11 months ago
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆79 · Updated last year
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆167 · Updated last month
- ☆146 · Updated 3 months ago
- Generative Judge for Evaluating Alignment ☆218 · Updated 10 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆318 · Updated 10 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆214 · Updated last year
- [ICLR 2023] Code for the paper "Binding Language Models in Symbolic Languages" ☆302 · Updated last year
- ☆192 · Updated 3 months ago
- Paper collection on building and evaluating language model agents via executable language grounding ☆339 · Updated 6 months ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆435 · Updated 2 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆84 · Updated last week
- Accepted by Transactions on Machine Learning Research (TMLR) ☆120 · Updated last month
- ☆265 · Updated 11 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆74 · Updated 2 months ago