CoIR-team / coir
A Comprehensive Benchmark for Code Information Retrieval.
☆83 · Updated 3 weeks ago
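For context, coir is distributed as a pip package (`coir-eval`). The sketch below follows the usage pattern from the project's README as best recalled; the module paths (`coir.data_loader`, `coir.evaluation`, `coir.models`) and the `YourCustomDEModel` wrapper are assumptions to verify against the repo before use.

```python
# Minimal sketch of running the CoIR benchmark, assuming the pip package
# `coir-eval` and the README's example interface (module paths and the
# YourCustomDEModel wrapper are assumptions, not verified here):
#   pip install coir-eval
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel

model_name = "intfloat/e5-base-v2"  # any dense-retrieval model on the HF Hub

# Load one retrieval task; the README lists others such as "cosqa",
# "codesearchnet", and "stackoverflow-qa".
tasks = get_tasks(tasks=["codetrans-dl"])

# Wrap the model and run the evaluation; per-task retrieval metrics
# (e.g. NDCG@10) are written to the output folder.
model = YourCustomDEModel(model_name=model_name)
evaluation = COIR(tasks=tasks, batch_size=128)
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)
```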
Alternatives and similar repositories for coir:
Users interested in coir are comparing it to the repositories listed below
- This includes the original implementation of CtrlA: Adaptive Retrieval-Augmented Generation via Inherent Control. ☆60 · Updated 6 months ago
- Grimoire is All You Need for Enhancing Large Language Models ☆113 · Updated last year
- MPLSandbox is an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler a… ☆174 · Updated this week
- [EMNLP 2023] CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation ☆52 · Updated last year
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin… ☆164 · Updated 4 months ago
- [ACL 2024] CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and … ☆97 · Updated 8 months ago
- ☆51 · Updated this week
- Code and dataset of CodeSteer ☆54 · Updated 3 weeks ago
- We leverage 14 datasets as OOD test data and conduct evaluations on 8 NLU tasks over 21 popularly used models. Our findings confirm that … ☆94 · Updated last year
- AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning (NeurIPS 2024) ☆190 · Updated last month
- This is the official code repository of MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tas… ☆79 · Updated 3 weeks ago
- An Extensible Framework for Retrieval-Augmented LLM Applications: Learning Relevance Beyond Simple Similarity. ☆39 · Updated 4 months ago
- LLM Benchmark for Code ☆31 · Updated 8 months ago
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response ☆40 · Updated 4 months ago
- Recipes to train the self-rewarding reasoning LLMs. ☆212 · Updated last month
- [NeurIPS 2024] EffiBench: Benchmarking the Efficiency of Automatically Generated Code ☆51 · Updated 4 months ago
- ☆102 · Updated last year
- This tool (enhance_long) aims to enhance Llama 2's long-context extrapolation capability at the lowest cost, preferably without … ☆45 · Updated last year
- Code and Checkpoints for "Generate rather than Retrieve: Large Language Models are Strong Context Generators" in ICLR 2023. ☆282 · Updated 2 years ago
- [EMNLP 2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models ☆66 · Updated 5 months ago
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc. ☆162 · Updated 5 months ago
- ☆46 · Updated 9 months ago
- Official Implementation of "Pay Attention to What You Need" ☆42 · Updated last month
- ☆47 · Updated 6 months ago
- The Official Repo of ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code (https://a… ☆296 · Updated 5 months ago
- Supports mixed-precision inference with vLLM ☆83 · Updated 3 months ago
- This repo contains my customised Python-based plot styles for NLP papers, and includes reproductions of my favourite papers' plots ☆39 · Updated last year
- A Unified Intermediate Representation for Graph Query Languages ☆65 · Updated 2 years ago
- Benchmarking LLMs via Uncertainty Quantification ☆221 · Updated last year
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆247 · Updated 7 months ago