bigcode-project / octopack
🐙 OctoPack: Instruction Tuning Code Large Language Models
☆451 · Updated 2 weeks ago
Alternatives and similar repositories for octopack:
Users interested in octopack are comparing it to the repositories listed below.
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆233 · Updated 3 months ago
- ☆628 · Updated 3 months ago
- Run evaluation on LLMs using the HumanEval benchmark. ☆395 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. ☆886 · Updated 3 months ago
- ☆268 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024). ☆144 · Updated 6 months ago
- Open Source WizardCoder Dataset. ☆156 · Updated last year
- A multi-programming-language benchmark for LLMs. ☆227 · Updated 3 weeks ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code". ☆325 · Updated 3 weeks ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆541 · Updated 11 months ago
- Learning to Compress Prompts with Gist Tokens (https://arxiv.org/abs/2304.08467). ☆277 · Updated last week
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023). ☆130 · Updated 6 months ago
- PaL: Program-Aided Language Models (ICML 2023). ☆481 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898). ☆208 · Updated 9 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context". ☆451 · Updated 11 months ago
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" (ICLR 2023). ☆238 · Updated last year
- Fine-tune SantaCoder for code/text generation. ☆188 · Updated last year
- ☆172 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data. ☆495 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents. ☆540 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive evaluation benchmark for long-context language models