amazon-science / Repoformer
Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024)
☆55 · Updated 10 months ago
Alternatives and similar repositories for Repoformer:
Users interested in Repoformer are comparing it to the repositories listed below.
- Reinforcement Learning for Repository-Level Code Completion ☆31 · Updated 8 months ago
- ☆43 · Updated 10 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆128 · Updated 5 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at the ACL 2024 SRW ☆59 · Updated 7 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆53 · Updated 6 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆139 · Updated 9 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆39 · Updated last month
- NaturalCodeBench (Findings of ACL 2024) ☆64 · Updated 6 months ago
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆62 · Updated 3 years ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆162 · Updated 8 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆110 · Updated 9 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆81 · Updated 7 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆74 · Updated 9 months ago
- ☆95 · Updated last month
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆64 · Updated 8 months ago
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs. ☆42 · Updated 10 months ago
- Code and data for "MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models" ☆39 · Updated 6 months ago
- Training and Benchmarking LLMs for Code Preference. ☆33 · Updated 5 months ago
- ☆44 · Updated 11 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆56 · Updated 8 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆58 · Updated 8 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆96 · Updated 8 months ago
- ☆28 · Updated 5 months ago
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning" ☆94 · Updated 3 weeks ago
- Official repo for "HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation Task" ☆27 · Updated 3 weeks ago
- ☆86 · Updated 6 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆30 · Updated 10 months ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆64 · Updated 2 weeks ago
- ☆63 · Updated 4 months ago