tau-nlp / scrolls
The official code for the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences".
☆69 · Updated last year
Alternatives and similar repositories for scrolls
Users interested in scrolls are comparing it to the libraries listed below.
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- Automatic metrics for GEM tasks ☆67 · Updated 3 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- Benchmark API for Multidomain Language Modeling ☆25 · Updated 3 years ago
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆113 · Updated 3 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- ☆46 · Updated last year
- ☆141 · Updated 9 months ago
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding ☆18 · Updated 2 years ago
- Code for Editing Factual Knowledge in Language Models ☆142 · Updated 3 years ago
- ☆113 · Updated 3 years ago
- Query-focused summarization data ☆42 · Updated 2 years ago
- The official implementation of "Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks" (NAACL 2022) ☆45 · Updated 2 years ago
- The LM Contamination Index is a manually curated database of contamination evidence for LMs ☆81 · Updated last year
- ☆75 · Updated 2 years ago
- Code associated with the ACL 2021 DExperts paper ☆118 · Updated 2 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… ☆282 · Updated 3 years ago
- ☆97 · Updated 3 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- ☆86 · Updated 3 years ago
- ☆58 · Updated 3 years ago
- ☆82 · Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation" ☆81 · Updated 2 weeks ago
- ☆24 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- ☆11 · Updated last year