babylm / evaluation-pipeline-2024
The evaluation pipeline for the 2024 BabyLM Challenge.
☆33 · Updated last year
Alternatives and similar repositories for evaluation-pipeline-2024
Users interested in evaluation-pipeline-2024 are comparing it to the repositories listed below.
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- Benchmark API for Multidomain Language Modeling ☆25 · Updated 3 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 3 years ago
- ☆43 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. ☆99 · Updated 4 years ago
- Automatic metrics for GEM tasks ☆67 · Updated 3 years ago
- Query-focused summarization data ☆42 · Updated 2 years ago
- Code repository for the paper "Mission: Impossible Language Models." ☆56 · Updated 3 months ago
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆180 · Updated 3 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆60 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ☆138 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- ☆101 · Updated 3 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- ☆47 · Updated last year
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- Code and data for "A Systematic Assessment of Syntactic Generalization in Neural Language Models" ☆29 · Updated 4 years ago
- ☆11 · Updated last year
- ☆39 · Updated last year
- A library for finding knowledge neurons in pretrained transformer models. ☆158 · Updated 3 years ago
- The original implementation of Min et al. "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆158 · Updated 2 years ago
- ☆145 · Updated 11 months ago
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding ☆19 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆74 · Updated last year
- ☆38 · Updated last year
- Measuring the Mixing of Contextual Information in the Transformer ☆34 · Updated 2 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 3 years ago