bigcode-project / octopack
🐙 OctoPack: Instruction Tuning Code Large Language Models
☆463 · Updated 3 months ago
Alternatives and similar repositories for octopack:
Users interested in octopack are comparing it to the repositories listed below.
- Fine-tune SantaCoder for Code/Text Generation. ☆191 · Updated 2 years ago
- Run evaluation on LLMs using human-eval benchmark ☆409 · Updated last year
- ☆270 · Updated 2 years ago
- A framework for the evaluation of autoregressive code generation language models. ☆934 · Updated 6 months ago
- Open Source WizardCoder Dataset ☆158 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆162 · Updated 8 months ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆307 · Updated 2 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆242 · Updated 6 months ago
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" @ICLR 2023 ☆243 · Updated last year
- ☆651 · Updated 6 months ago
- PaL: Program-Aided Language Models (ICML 2023) ☆489 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆139 · Updated 9 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆629 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆547 · Updated last year
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs. ☆940 · Updated 6 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆464 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆240 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆126 · Updated 7 months ago
- Generative Judge for Evaluating Alignment ☆236 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆376 · Updated 9 months ago
- A multi-programming language benchmark for LLMs ☆243 · Updated 3 months ago
- batched loras ☆341 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- ☆434 · Updated 8 months ago
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆136 · Updated 6 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆250 · Updated last year
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆682 · Updated 7 months ago