babylm / evaluation-pipeline-2024
The evaluation pipeline for the 2024 BabyLM Challenge.
☆29 · Updated 4 months ago
Alternatives and similar repositories for evaluation-pipeline-2024:
Users interested in evaluation-pipeline-2024 are also comparing it to the libraries listed below.
- Automatic metrics for GEM tasks ☆65 · Updated 2 years ago
- ☆31 · Updated 9 months ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆75 · Updated last year
- ☆58 · Updated 2 years ago
- Code and data for "A Systematic Assessment of Syntactic Generalization in Neural Language Models" ☆26 · Updated 3 years ago
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding ☆18 · Updated 2 years ago
- Rationales for Sequential Predictions ☆40 · Updated 3 years ago
- Code for the paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆21 · Updated 4 years ago
- ☆45 · Updated last year
- Measuring the Mixing of Contextual Information in the Transformer ☆28 · Updated last year
- ☆39 · Updated 3 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆57 · Updated 9 months ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- Benchmark API for Multidomain Language Modeling ☆24 · Updated 2 years ago
- Paper: Lexicon Learning for Few-Shot Neural Sequence Modeling ☆16 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- ☆59 · Updated 2 years ago
- ☆37 · Updated 10 months ago
- Code for the paper "Implicit Representations of Meaning in Neural Language Models" ☆53 · Updated 2 years ago
- ☆31 · Updated last year
- ☆24 · Updated 3 years ago
- ☆27 · Updated 2 months ago
- ☆19 · Updated last year
- Code for the paper "Simulating Bandit Learning from User Feedback for Extractive Question Answering". ☆18 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- [NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers ☆21 · Updated last year
- ☆89 · Updated 2 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆96 · Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆90 · Updated 3 years ago