allenai / OLMo-Eval
Evaluation suite for LLMs
☆366 · Updated 4 months ago
Alternatives and similar repositories for OLMo-Eval
Users that are interested in OLMo-Eval are comparing it to the libraries listed below
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆477 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆314 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- ☆552 · Updated last year
- Distributed trainer for LLMs ☆583 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- Official repository for ORPO ☆465 · Updated last year
- ☆313 · Updated last year
- Reproducible, flexible LLM evaluations ☆266 · Updated this week
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆660 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet. ☆268 · Updated 6 months ago
- Train Models Contrastively in PyTorch ☆754 · Updated 7 months ago
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆551 · Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,345 · Updated 2 weeks ago
- [NeurIPS D&B 2024] Generative AI for Math: MathPile ☆418 · Updated 7 months ago
- ☆320 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆554 · Updated 2 years ago
- Code for Quiet-STaR ☆741 · Updated last year
- A repository for research on medium-sized language models. ☆518 · Updated 5 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆204 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆277 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆390 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆539 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆319 · Updated this week
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆376 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆391 · Updated last year
- A simple unified framework for evaluating LLMs ☆254 · Updated 7 months ago
- Generative Representational Instruction Tuning ☆678 · Updated 4 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆355 · Updated last year