allenai / OLMo-Eval
Evaluation suite for LLMs
☆354 · Updated last week
Alternatives and similar repositories for OLMo-Eval
Users interested in OLMo-Eval are comparing it to the libraries listed below
- ☆524 · Updated 7 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆306 · Updated last year
- Train Models Contrastively in Pytorch ☆728 · Updated 3 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆464 · Updated last year
- Official repository for ORPO ☆458 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 8 months ago
- distributed trainer for LLMs ☆578 · Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,267 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆739 · Updated 9 months ago
- Generative Representational Instruction Tuning ☆660 · Updated 3 weeks ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆657 · Updated last year
- Reproducible, flexible LLM evaluations ☆222 · Updated last week
- Code for Quiet-STaR ☆735 · Updated 10 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆701 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆229 · Updated 8 months ago
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆376 · Updated 10 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆267 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆551 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 2 months ago
- ☆319 · Updated 10 months ago
- Automatic evals for LLMs ☆467 · Updated 3 weeks ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark ☆385 · Updated last year
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆466 · Updated last year
- A repository for research on medium sized language models. ☆504 · Updated last month
- ☆310 · Updated last year
- A simple unified framework for evaluating LLMs ☆225 · Updated 3 months ago
- Scaling Data-Constrained Language Models ☆338 · Updated 3 weeks ago
- Scalable toolkit for efficient model alignment ☆829 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆614 · Updated last month