CoIR-team / coir
(ACL 2025 Main) A Comprehensive Benchmark for Code Information Retrieval.
☆101 · Updated last week
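For reference, a minimal sketch of evaluating a retriever on the benchmark. It assumes the `coir-eval` PyPI package and the entry points advertised in the repository README (`get_tasks`, `COIR`, and a custom dense-encoder wrapper); treat the exact names and task identifiers as assumptions and check the repository before relying on them.

```python
# Minimal sketch: evaluate a dense retriever on two CoIR tasks.
# Assumes `pip install coir-eval` and the README's entry points;
# API names are taken from the repo's docs and may have changed.
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel

model_name = "intfloat/e5-base-v2"

# Wrap a HuggingFace dense encoder in the benchmark's model interface.
model = YourCustomDEModel(model_name=model_name)

# Load a subset of the benchmark's retrieval tasks (task names assumed).
tasks = get_tasks(tasks=["codetrans-dsl", "stackoverflow-qa"])

# Run the evaluation and write per-task metrics (e.g. NDCG@10) to disk.
evaluation = COIR(tasks=tasks, batch_size=128)
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)
```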
Alternatives and similar repositories for coir
Users interested in coir are comparing it to the repositories listed below.
- This includes the original implementation of CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control. ☆61 · Updated 8 months ago
- MPLSandbox is an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler a… ☆178 · Updated 2 months ago
- Code and dataset of CodeSteer. ☆57 · Updated 2 months ago
- [EMNLP 2023] CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation. ☆54 · Updated last year
- Grimoire is All You Need for Enhancing Large Language Models. ☆116 · Updated last year
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoni… ☆169 · Updated 6 months ago
- [ACL 2024] CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and … ☆100 · Updated 10 months ago
- Code and Checkpoints for "Generate rather than Retrieve: Large Language Models are Strong Context Generators" in ICLR 2023. ☆286 · Updated 2 years ago
- LLM Benchmark for Code. ☆30 · Updated 10 months ago
- (NeurIPS 2024) AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning. ☆206 · Updated 2 weeks ago
- This is the official code repository of MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tas… ☆83 · Updated 2 months ago
- DocAgent is a system designed to generate high-quality, context-aware code documentation for Python codebases using a multi-agent approac… ☆269 · Updated 2 months ago
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs. ☆185 · Updated 2 weeks ago
- An Extensible Framework for Retrieval-Augmented LLM Applications: Learning Relevance Beyond Simple Similarity. ☆39 · Updated 6 months ago
- [ACL 2024] User-friendly evaluation framework: Eval Suite and benchmarks (UHGEval, HaluEval, HalluQA, etc.). ☆167 · Updated 2 weeks ago
- Recipes to train self-rewarding reasoning LLMs. ☆223 · Updated 3 months ago
- A library for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories. ☆45 · Updated last week
- Official code of the paper "Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models". ☆77 · Updated 3 weeks ago
- [EMNLP 2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models. ☆70 · Updated 2 weeks ago
- We leverage 14 datasets as OOD test data and conduct evaluations on 8 NLU tasks over 21 widely used models. Our findings confirm that … ☆93 · Updated last year
- Official Implementation of "Pay Attention to What You Need". ☆42 · Updated 4 months ago
- [NeurIPS 2024] EffiBench: Benchmarking the Efficiency of Automatically Generated Code. ☆54 · Updated 6 months ago
- Your efficient and accurate answer verification system for RL training. ☆30 · Updated this week
- Supports mixed-precision inference with vLLM. ☆84 · Updated 5 months ago
- Official implementation of RARE: Retrieval-Augmented Reasoning Modeling. ☆181 · Updated 3 weeks ago
- ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code (https://arxiv.org/abs/2311.098…). ☆301 · Updated 7 months ago
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response. ☆41 · Updated 6 months ago