bigcode-project / octopack
🐙 OctoPack: Instruction Tuning Code Large Language Models
☆479 · Updated 11 months ago
Alternatives and similar repositories for octopack
Users interested in octopack are comparing it to the repositories listed below.
- ☆672 · Updated last year
- Run evaluation on LLMs using the human-eval benchmark ☆426 · Updated 2 years ago
- ☆277 · Updated 2 years ago
- Open Source WizardCoder Dataset ☆162 · Updated 2 years ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆184 · Updated last year
- A framework for the evaluation of autoregressive code generation language models ☆1,015 · Updated 5 months ago
- Fine-tune SantaCoder for code/text generation ☆194 · Updated 2 years ago
- ☆486 · Updated last year
- A multi-programming-language benchmark for LLMs ☆290 · Updated last week
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" (ICLR 2023) ☆251 · Updated 2 years ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆136 · Updated last year
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆323 · Updated 10 months ago
- PaL: Program-Aided Language Models (ICML 2023) ☆517 · Updated 2 years ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆263 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆166 · Updated 4 months ago
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆551 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆233 · Updated last year
- A hard gym for programming ☆163 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆555 · Updated 2 years ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆390 · Updated last year
- ☆85 · Updated 2 years ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- ☆379 · Updated 2 years ago
- Simple next-token prediction for RLHF ☆227 · Updated 2 years ago
- Official repository for LongChat and LongEval ☆533 · Updated last year
- ToolBench, an evaluation suite for LLM tool-manipulation capabilities ☆168 · Updated last year
- LLMs can generate feedback on their own output, use it to improve the output, and repeat the process iteratively ☆768 · Updated last year
- ☆173 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens (https://arxiv.org/abs/2304.08467) ☆303 · Updated 10 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
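Several of the repositories above (the human-eval harness, the bigcode evaluation framework, DS-1000, CRUXEval) report results as pass@k. As a point of reference, here is a minimal sketch of the standard unbiased pass@k estimator from the HumanEval paper, assuming `n` generated samples per problem of which `c` pass the tests; the function name is illustrative, not taken from any of the listed repositories:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations passes,
    given that c of the n generations pass."""
    if n - c < k:
        # fewer than k failing samples exist, so any draw of k
        # must include at least one passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 samples of which 2 pass, pass@1 is the plain pass rate 0.5, while pass@k grows toward 1.0 as k approaches n.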