ise-uiuc / xft
XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts
☆29 · Updated 7 months ago
Alternatives and similar repositories for xft:
Users interested in xft are comparing it to the libraries listed below.
- Training and Benchmarking LLMs for Code Preference. ☆32 · Updated 3 months ago
- ☆22 · Updated 3 months ago
- [NeurIPS'24] SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning ☆18 · Updated 3 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆62 · Updated 7 months ago
- Reinforcement Learning for Repository-Level Code Completion ☆22 · Updated 6 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆66 · Updated 10 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆125 · Updated 4 months ago
- ☆28 · Updated 3 months ago
- ☆33 · Updated last year
- ☆28 · Updated 3 months ago
- ☆13 · Updated 2 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆33 · Updated last month
- Baselines for all tasks from Long Code Arena benchmarks 🏟️ ☆27 · Updated 2 weeks ago
- RepoQA: Evaluating Long-Context Code Understanding ☆102 · Updated 3 months ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆109 · Updated last year
- ☆40 · Updated this week
- Releasing code for "ReCode: Robustness Evaluation of Code Generation Models" ☆52 · Updated 11 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆49 · Updated 6 months ago
- Source Code Data Augmentation for Deep Learning: A Survey. ☆64 · Updated 8 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆57 · Updated 4 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆77 · Updated 5 months ago
- ☆22 · Updated 5 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆130 · Updated 6 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated 10 months ago
- ☆20 · Updated last year
- [EMNLP'22] Code for 'Exploring Representation-level Augmentation for Code Search' ☆26 · Updated last year
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆34 · Updated this week
- Code and dataset for EMNLP 2022 Findings paper "Benchmarking Language Models for Code Syntax Understanding" ☆14 · Updated 2 years ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆47 · Updated 3 months ago