abacaj / code-eval
Run evaluation on LLMs using the HumanEval benchmark
★411 · Updated last year
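For context, code-eval automates the standard HumanEval loop: generate one or more completions per problem, write them to JSONL, and score functional correctness. Below is a minimal sketch of that loop using OpenAI's human-eval package; `generate_one_completion` is a hypothetical stand-in for whatever model call you use.

```python
# Minimal HumanEval loop, assuming OpenAI's human-eval package
# (pip install human-eval). `generate_one_completion` is a hypothetical
# placeholder for an actual model call.
from human_eval.data import read_problems, write_jsonl

def generate_one_completion(prompt: str) -> str:
    # Call your LLM here and return only the completion (function body).
    raise NotImplementedError

problems = read_problems()  # the 164 HumanEval problems
samples = [
    {"task_id": task_id,
     "completion": generate_one_completion(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Score pass@k from the shell:
#   evaluate_functional_correctness samples.jsonl
```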
Alternatives and similar repositories for code-eval
Users interested in code-eval are comparing it to the repositories listed below.
- Open Source WizardCoder Dataset · ★158 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models · ★464 · Updated 3 months ago
- ★269 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ★461 · Updated last year
- A framework for the evaluation of autoregressive code generation language models · ★946 · Updated 7 months ago
- Official repository for LongChat and LongEval · ★518 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them · ★495 · Updated 11 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" · ★244 · Updated 7 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark · ★378 · Updated 10 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents · ★548 · Updated last year
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks · ★546 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ★635 · Updated 10 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" · ★344 · Updated last year
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation · ★307 · Updated 3 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] · ★554 · Updated 5 months ago
- Generative Judge for Evaluating Alignment · ★238 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) · ★219 · Updated last year
- ★309 · Updated 11 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning · ★651 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems [ICLR 2024] · ★164 · Updated 9 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware · ★727 · Updated 8 months ago
- ★652 · Updated 7 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model · ★526 · Updated 4 months ago
- ★517 · Updated 6 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" · ★464 · Updated last year
- Learning to Compress Prompts with Gist Tokens (https://arxiv.org/abs/2304.08467) · ★285 · Updated 3 months ago
- RewardBench: the first evaluation tool for reward models · ★582 · Updated this week
- FuseAI Project · ★570 · Updated 4 months ago
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] · ★371 · Updated 9 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) · ★326 · Updated 8 months ago