abacaj / code-eval
Run evaluation on LLMs using the HumanEval benchmark
★427 · Updated 2 years ago
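code-eval automates the standard HumanEval workflow: generate a completion for each problem, write the samples to a JSONL file, and score functional correctness by executing the generated code against the benchmark's unit tests. As a minimal sketch of that underlying flow, the snippet below uses OpenAI's `human-eval` package (`pip install human-eval`) rather than code-eval's own scripts; `generate_one_completion` is a hypothetical stand-in for a call into your own model.

```python
# Minimal HumanEval evaluation loop, assuming OpenAI's `human-eval` package.
from human_eval.data import write_jsonl, read_problems

def generate_one_completion(prompt: str) -> str:
    # Hypothetical placeholder: call your model here and return only the
    # code that completes the function body given in `prompt`.
    raise NotImplementedError

# task_id -> {"prompt": ..., "entry_point": ..., "test": ..., ...}
problems = read_problems()

num_samples_per_task = 1  # 1 suffices for pass@1; raise for pass@10/pass@100
samples = [
    dict(task_id=task_id,
         completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# Score the samples (this executes untrusted model-generated code, so run
# it in a sandbox):
#   $ evaluate_functional_correctness samples.jsonl
```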
Alternatives and similar repositories for code-eval
Users interested in code-eval are comparing it to the libraries listed below.
- 🐙 OctoPack: Instruction Tuning Code Large Language Models · ★479 · Updated last year
- Open Source WizardCoder Dataset · ★163 · Updated 2 years ago
- (no description) · ★279 · Updated 2 years ago
- A framework for the evaluation of autoregressive code generation language models · ★1,020 · Updated 6 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks · ★551 · Updated last year
- Fine-tune SantaCoder for Code/Text Generation · ★196 · Updated 2 years ago
- Official repository for LongChat and LongEval · ★534 · Updated last year
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation · ★323 · Updated 11 months ago
- (no description) · ★671 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" · ★265 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning · ★668 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) · ★517 · Updated 2 years ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) · ★186 · Updated last year
- (no description) · ★313 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ★484 · Updated last year
- (no description) · ★380 · Updated 2 years ago
- NexusRaven-13B, a new SOTA open-source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… · ★318 · Updated 2 years ago
- (no description) · ★489 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents · ★556 · Updated 2 years ago
- (no description) · ★561 · Updated last year
- A multi-programming language benchmark for LLMs · ★297 · Updated last week
- (no description) · ★85 · Updated 2 years ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" · ★316 · Updated 2 years ago
- Accepted by Transactions on Machine Learning Research (TMLR) · ★137 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them · ★546 · Updated last year
- (no description) · ★592 · Updated last year
- A bagel, with everything. · ★326 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark · ★389 · Updated last year
- Compress your input to ChatGPT or other LLMs so they can process 2× more content while saving 40% memory and GPU time · ★410 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers · ★426 · Updated 2 years ago