bigscience-workshop / lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
☆103 · Updated last year
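Few-shot evaluation of the kind this framework performs usually means prepending k labelled examples ("shots") to each test instance and scoring every answer choice by its log-likelihood under the model, then picking the highest-scoring choice. Below is a minimal, self-contained sketch of that loop; the scorer, prompt format, and all names are invented stand-ins for illustration, not the harness's actual API:

```python
# Toy illustration of few-shot, log-likelihood-based evaluation.
# Every function here is a stand-in invented for this sketch.
import math

def score(prompt: str, continuation: str) -> float:
    """Stand-in for the summed token log-probs of `continuation` given `prompt`.

    A real harness would query an autoregressive LM; here we simply reward
    continuations whose words appear in the prompt, so the example runs.
    """
    prompt_words = set(prompt.lower().split())
    return sum(math.log(0.9) if w in prompt_words else math.log(0.1)
               for w in continuation.lower().split())

def build_prompt(shots, question):
    """Concatenate k labelled examples before the test question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in shots]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

def evaluate(shots, question, choices):
    """Return the answer choice with the highest log-likelihood."""
    prompt = build_prompt(shots, question)
    scores = {c: score(prompt, " " + c) for c in choices}
    return max(scores, key=scores.get)

shots = [("Is the sky blue?", "yes"), ("Is fire cold?", "no")]
pred = evaluate(shots, "Is water wet?", ["yes", "no"])
```

Real harnesses differ mainly in how `score` is computed (batched model forward passes over tokenized prompt/continuation pairs) and in per-task prompt templates, but the shot-concatenation and argmax-over-choices structure is the same.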
Alternatives and similar repositories for lm-evaluation-harness:
Users interested in lm-evaluation-harness are comparing it to the repositories listed below.
- This project studies the performance and robustness of language models and task-adaptation methods. ☆149 · Updated 10 months ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆179 · Updated 2 years ago
- The official code for the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers. ☆126 · Updated last year
- This repository accompanies the paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?". ☆85 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning. ☆176 · Updated 2 months ago
- Code for Editing Factual Knowledge in Language Models. ☆136 · Updated 3 years ago
- Token-level Reference-free Hallucination Detection. ☆94 · Updated last year
- Automatic metrics for GEM tasks. ☆65 · Updated 2 years ago
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond". ☆58 · Updated 2 months ago
- A repo for training MLMs, CLMs, or T5-style models on the OLM pretraining data; it should work with any Hugging Face text dataset.