JetBrains-Research / lca-baselines
Baselines for all tasks from the Long Code Arena benchmarks 🏟️
☆39 · Updated 10 months ago
Alternatives and similar repositories for lca-baselines
Users interested in lca-baselines are comparing it to the repositories listed below.
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆65 · Updated 3 years ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated last year
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆41 · Updated 11 months ago
- Reinforcement Learning for Repository-Level Code Completion ☆42 · Updated last year
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆67 · Updated last year
- ☆16 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at the ACL 2024 SRW ☆64 · Updated last year
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ☆66 · Updated 7 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆171 · Updated 5 months ago
- ☆44 · Updated 7 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆85 · Updated last year
- Benchmark ClassEval for class-level code generation. ☆145 · Updated last year
- ☆126 · Updated 2 years ago
- [EMNLP'22] Code for 'Exploring Representation-level Augmentation for Code Search' ☆27 · Updated 2 years ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation (reports pass@k; see the estimator sketch after this list) ☆165 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆186 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆59 · Updated last year
- ☆46 · Updated 3 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages (a minimal sketch of this run-against-tests pattern follows the list). ☆62 · Updated last year
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories ☆36 · Updated last year
- We introduce FixEval, a dataset for competitive programming bug fixing, along with a comprehensive test suite, and show the necessity of e… ☆26 · Updated 3 years ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆118 · Updated 2 years ago
- The official implementation for the paper 'Domain Adaptive Code Completion via Language Models and Decoupled Domain Databases' ☆14 · Updated 2 years ago
- ☆31 · Updated last year
- A comprehensive code-domain benchmark review of LLM research. ☆195 · Updated 4 months ago
- ☆33 · Updated last week
- ☆12 · Updated 11 months ago
- TDD-Bench-Verified is a new benchmark for generating test cases for test-driven development (TDD) ☆27 · Updated 4 months ago
- ☆56 · Updated last year
- EvoEval: Evolving Coding Benchmarks via LLM ☆81 · Updated last year
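Several of the benchmarks above (CRUXEval, HumanEval-XL, and others) score models with pass@k over sampled completions. Below is a minimal sketch of the standard unbiased pass@k estimator from Chen et al. (2021); the function name `pass_at_k` is ours, not the API of any listed repository.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: samples generated per problem, c: samples that passed,
    k: evaluation budget. Returns the probability that at least
    one of k samples drawn without replacement from the n passes.
    """
    if n - c < k:
        return 1.0  # too few failures to fill a size-k draw, so it must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# Sanity check: 1 of 2 samples passes, budget k=1 -> probability 0.5.
assert pass_at_k(2, 1, 1) == 0.5
```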
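Another pattern shared by several entries, most directly the unit-test evaluation harness noted in the list, is executing model-generated code against tests. A minimal Python sketch of that pattern, assuming hypothetical `candidate_code` and `test_code` strings with plain `assert` tests; this is illustrative only, and a subprocess gives far weaker isolation than the sandboxing real harnesses use.

```python
import os
import subprocess
import sys
import tempfile

def run_with_tests(candidate_code: str, test_code: str, timeout: float = 5.0) -> bool:
    """Run candidate code followed by its tests in a fresh interpreter.

    Returns True when the process exits cleanly (all asserts passed),
    False on any assertion failure, crash, or timeout.
    """
    # Write candidate + tests to a temp file so a clean interpreter runs them.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False  # treat hangs as failures
    finally:
        os.unlink(path)

if __name__ == "__main__":
    candidate = "def add(a, b):\n    return a + b\n"
    tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
    print("pass" if run_with_tests(candidate, tests) else "fail")
```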