nuprl / MultiPL-T
Knowledge transfer from high-resource to low-resource programming languages for Code LLMs
☆13 · Updated 8 months ago
Alternatives and similar repositories for MultiPL-T
Users interested in MultiPL-T are comparing it to the repositories listed below.
- Training and Benchmarking LLMs for Code Preference ☆33 · Updated 6 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆53 · Updated 6 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆31 · Updated 10 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆48 · Updated 3 weeks ago
- ☆24 · Updated 6 months ago
- ☆61 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated last year
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆45 · Updated 4 months ago
- Large Language Models Meet NL2Code: A Survey ☆36 · Updated 5 months ago
- [ACL 2023] Code for ContraCLM: Contrastive Learning For Causal Language Model ☆33 · Updated last year
- ☆30 · Updated 2 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆60 · Updated 7 months ago
- Moatless Testbeds lets you create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆11 · Updated last month
- Code repository for the paper "Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass" ☆17 · Updated 8 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆64 · Updated 8 months ago
- [EMNLP 2023] Execution-Based Evaluation for Open-Domain Code Generation ☆48 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆64 · Updated 7 months ago
- ☆33 · Updated last year
- ☆34 · Updated last month
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Incremental Python parser for constrained generation of code by LLMs ☆16 · Updated 7 months ago
- ☆44 · Updated 11 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆44 · Updated 10 months ago
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ☆55 · Updated 2 weeks ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆67 · Updated 3 weeks ago
- ☆37 · Updated 10 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆137 · Updated 7 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆42 · Updated 9 months ago
- ☆43 · Updated 3 months ago
- ☆30 · Updated 6 months ago