multi-swe-bench / MagentLess
☆11 · Updated last month
Alternatives and similar repositories for MagentLess
Users that are interested in MagentLess are comparing it to the libraries listed below
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆129 · Updated last year
- Collection of papers for scalable automated alignment. ☆93 · Updated 10 months ago
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆27 · Updated 2 months ago
- ☆14 · Updated last year
- ☆68 · Updated last year
- ☆43 · Updated 5 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆171 · Updated 3 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆81 · Updated 8 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆119 · Updated last year
- ☆17 · Updated 10 months ago
- ☆11 · Updated 2 years ago
- EMNLP'2023: Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- Math evaluations of llama models. ☆10 · Updated last year
- ☆75 · Updated last year
- The repository for paper <Evaluating Open-QA Evaluation> ☆25 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆63 · Updated last year
- [ICLR 2025] Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist ☆33 · Updated 10 months ago
- ☆36 · Updated last month
- LeetCode Training and Evaluation Dataset ☆32 · Updated 4 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆60 · Updated 11 months ago
- A distributed, extensible, secure solution for evaluating machine generated code with unit tests in multiple programming languages. ☆56 · Updated 10 months ago
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- Explore what LLMs are really learning during SFT ☆29 · Updated last year
- ☆51 · Updated 3 months ago
- ☆20 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆111 · Updated 3 months ago
- ☆51 · Updated 6 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆69 · Updated 9 months ago