abacaj / code-eval
Run evaluation on LLMs using human-eval benchmark
☆417 · Updated last year
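For context on what this kind of benchmark run looks like, below is a minimal sketch of HumanEval-style evaluation built on the openai/human-eval package and a Hugging Face text-generation pipeline. It is not code-eval's actual pipeline: the model name, decoding settings, and single-sample-per-task setup are placeholders for illustration.

```python
# Minimal HumanEval evaluation sketch (assumes: pip install human-eval transformers torch).
# The model and generation settings below are placeholders, not code-eval's defaults.
from human_eval.data import read_problems, write_jsonl
from transformers import pipeline

generator = pipeline("text-generation", model="bigcode/santacoder", trust_remote_code=True)

def generate_one_completion(prompt: str) -> str:
    # Greedy decoding for brevity; pass@k estimates normally need several sampled completions per task.
    out = generator(prompt, max_new_tokens=256, do_sample=False, return_full_text=False)
    return out[0]["generated_text"]

problems = read_problems()  # the 164 HumanEval programming problems
samples = [
    {"task_id": task_id, "completion": generate_one_completion(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Score the completions with the human-eval CLI, which executes model-written code:
#   evaluate_functional_correctness samples.jsonl
```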
Alternatives and similar repositories for code-eval
Users interested in code-eval are comparing it to the libraries listed below.
- ☆270 · Updated 2 years ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆472 · Updated 5 months ago
- Open Source WizardCoder Dataset ☆159 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- ☆661 · Updated 9 months ago
- Fine-tune SantaCoder for Code/Text Generation. ☆192 · Updated 2 years ago
- A framework for the evaluation of autoregressive code generation language models. ☆971 · Updated last week
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆309 · Updated 5 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆168 · Updated 11 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆251 · Updated 9 months ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆130 · Updated 9 months ago
- Official repository for LongChat and LongEval ☆524 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) ☆502 · Updated 2 years ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆306 · Updated last year
- ☆84 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆467 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark: https://arxiv.org/abs/2306.14898 ☆223 · Updated last year
- ☆311 · Updated last year
- ☆366 · Updated 2 years ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆659 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆388 · Updated last year
- ☆525 · Updated 8 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆151 · Updated last year
- A multi-programming-language benchmark for LLMs ☆265 · Updated 2 weeks ago
- LLMs can generate feedback on their own work, use it to improve the output, and repeat this process iteratively. ☆719 · Updated 9 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆553 · Updated last year
- Evol-augment any dataset online ☆59 · Updated 2 years ago
- A bagel, with everything. ☆323 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆509 · Updated last year
- Learning to Compress Prompts with Gist Tokens (https://arxiv.org/abs/2304.08467) ☆289 · Updated 5 months ago