multi-swe-bench / MagentLess
☆12 · Updated 5 months ago
Alternatives and similar repositories for MagentLess
Users interested in MagentLess are comparing it to the repositories listed below.
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆85 · Updated last year
- LeetCode Training and Evaluation Dataset ☆45 · Updated 8 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆166 · Updated 4 months ago
- Collection of papers for scalable automated alignment. ☆94 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆134 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆181 · Updated 7 months ago
- [ACL 2025] FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation ☆37 · Updated last month
- ☆14 · Updated last year
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆33 · Updated 2 months ago
- ☆32 · Updated 7 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆67 · Updated last year
- ☆52 · Updated 10 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆62 · Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆147 · Updated 3 weeks ago
- ☆53 · Updated 7 months ago
- ☆70 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆183 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated 11 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | ACL 2024 SRW Oral ☆64 · Updated last year
- ☆47 · Updated 9 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- This repository contains code and data for the paper "TableEval: A Real-World Benchmark for Complex, Multilingual, and Multi-Structured T… ☆28 · Updated 6 months ago
- A Comprehensive Benchmark for Software Development. ☆124 · Updated last year
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models (EMNLP Findings 2023) ☆28 · Updated 2 years ago
- [ICLR 2025] Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist ☆34 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆68 · Updated last year
- A new tool-learning benchmark, based on ToolBench, that aims to balance stability and realism. ☆205 · Updated 8 months ago
- ☆58 · Updated last year
- ☆77 · Updated last year