abacaj / code-eval
Run evaluation on LLMs using the HumanEval benchmark
★426 · Updated 2 years ago
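Below is a minimal sketch of the kind of workflow code-eval automates: generating completions for HumanEval with a Hugging Face causal LM and scoring them with OpenAI's `human-eval` package. The model name, generation settings, and output file name are illustrative assumptions, not code-eval's actual interface.

```python
# Sketch: generate HumanEval completions and write them in the format
# expected by OpenAI's human-eval scorer. Assumes `pip install human-eval
# transformers torch`; model choice is hypothetical.
from human_eval.data import read_problems, write_jsonl
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoderbase-1b"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

problems = read_problems()  # {task_id: {"prompt": ..., "test": ..., ...}}
samples = []
for task_id, problem in problems.items():
    inputs = tokenizer(problem["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens, not the echoed prompt.
    completion = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Then compute pass@k, which executes the generated code against the
# tasks' unit tests (run this in a sandbox):
#   evaluate_functional_correctness samples.jsonl
```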
Alternatives and similar repositories for code-eval
Users interested in code-eval are comparing it to the repositories listed below.
- 🐙 OctoPack: Instruction Tuning Code Large Language Models · ★479 · Updated 10 months ago
- ★277 · Updated 2 years ago
- Open Source WizardCoder Dataset · ★162 · Updated 2 years ago
- ★672 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. · ★1,008 · Updated 5 months ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation · ★323 · Updated 10 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) · ★182 · Updated last year
- Fine-tune SantaCoder for Code/Text Generation. · ★194 · Updated 2 years ago
- ★84 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ★478 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) · ★518 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. · ★551 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning · ★663 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". · ★261 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) · ★136 · Updated last year
- Official repository for LongChat and LongEval · ★533 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive benchmark for evaluating long-context language models · ★391 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents · ★556 · Updated 2 years ago
- NexusRaven-13B, a new SOTA open-source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRaven… · ★318 · Updated 2 years ago
- ★559 · Updated last year
- ★313 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them · ★536 · Updated last year
- ★483 · Updated last year
- A multi-programming-language benchmark for LLMs · ★286 · Updated last month
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) · ★232 · Updated last year
- ★379 · Updated 2 years ago
- Compress your input to ChatGPT or other LLMs so they can process 2x more content while saving 40% memory and GPU time. · ★405 · Updated last year
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" · ★473 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" · ★315 · Updated 2 years ago
- LLMs can generate feedback on their own work, use it to improve the output, and repeat this process iteratively (see the sketch after this list). · ★763 · Updated last year
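The final entry above describes an iterative refinement loop (draft, critique, revise). Here is a minimal sketch of that pattern, assuming a generic `llm(prompt) -> str` callable; the prompt wording, iteration cap, and the `DONE` stop token are illustrative assumptions, not the repository's actual code.

```python
# Sketch of an iterative self-refinement loop: generate a draft, ask the
# model to critique it, then revise using that feedback, repeating until
# the critic is satisfied or the iteration budget runs out.
def self_refine(llm, task: str, max_iters: int = 3) -> str:
    output = llm(f"Solve the following task:\n{task}")
    for _ in range(max_iters):
        feedback = llm(
            f"Task:\n{task}\n\nDraft answer:\n{output}\n\n"
            "Give concrete feedback on errors or omissions, "
            "or reply DONE if the answer is already correct."
        )
        if "DONE" in feedback:  # critic accepts the current draft
            break
        output = llm(
            f"Task:\n{task}\n\nDraft answer:\n{output}\n\n"
            f"Feedback:\n{feedback}\n\n"
            "Rewrite the answer, applying the feedback."
        )
    return output
```

The same `llm` callable plays both generator and critic here; in practice the two roles can use different prompts, models, or sampling settings.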