abacaj / code-eval
Run evaluation on LLMs using human-eval benchmark
⭐400 · Updated last year
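code-eval follows the standard HumanEval flow: generate one completion per benchmark problem, write the samples to JSONL, and score functional correctness (pass@k). Below is a minimal sketch of that flow using OpenAI's `human-eval` package and a Hugging Face causal LM; the model name and generation settings are illustrative assumptions, not the repository's actual script.

```python
# Minimal HumanEval-style evaluation sketch (assumed workflow, not code-eval's own script).
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

model_name = "codellama/CodeLlama-7b-hf"  # hypothetical choice; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

samples = []
for task_id, problem in read_problems().items():
    # Greedy-decode one completion per problem from the function-signature prompt.
    inputs = tokenizer(problem["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    completion = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Score pass@1 with the human-eval CLI:
#   evaluate_functional_correctness samples.jsonl
```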
Alternatives and similar repositories for code-eval:
Users interested in code-eval are comparing it to the libraries listed below.
- ⭐268 · Updated last year
- Open Source WizardCoder Dataset · ⭐156 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models · ⭐458 · Updated last month
- Official repository for LongChat and LongEval · ⭐516 · Updated 9 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. · ⭐543 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. · ⭐907 · Updated 4 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model · ⭐513 · Updated last month
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 · ⭐147 · Updated 7 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents · ⭐543 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". · ⭐238 · Updated 4 months ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 · ⭐281 · Updated last month
- ⭐501 · Updated 4 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning · ⭐646 · Updated 9 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark · ⭐371 · Updated 8 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context · ⭐453 · Updated last year
- A bagel, with everything. · ⭐317 · Updated 11 months ago
- ⭐84 · Updated last year
- ⭐307 · Updated 9 months ago
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 · ⭐209 · Updated 10 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" · ⭐380 · Updated 3 weeks ago
- NexusRaven-13B, a new SOTA Open-Source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… · ⭐313 · Updated last year
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation · ⭐298 · Updated 3 weeks ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" · ⭐298 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] · ⭐289 · Updated 10 months ago
- Evol-augment any dataset online · ⭐59 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion. · ⭐499 · Updated 10 months ago
- ⭐325 · Updated last month
- Generative Judge for Evaluating Alignment · ⭐230 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ⭐619 · Updated 7 months ago
- Fine-tune SantaCoder for Code/Text Generation. · ⭐190 · Updated last year