tau-nlp / scrolls
The official code of the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences".
☆69, updated 2 years ago
Alternatives and similar repositories for scrolls
Users interested in scrolls are comparing it to the repositories listed below.
- DEMix Layers for Modular Language Modeling (☆54, updated 4 years ago)
- Benchmark API for Multidomain Language Modeling (☆25, updated 3 years ago)
- Automatic metrics for GEM tasks (☆67, updated 3 years ago)
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" (☆78, updated 2 years ago)
- ☆54, updated 2 years ago
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" (☆85, updated 3 years ago)
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) (☆113, updated 3 years ago)
- Dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia", EMNLP 2023 (☆42, updated 2 years ago)
- ☆47, updated 2 years ago
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… (☆285, updated 3 years ago)
- ☆35, updated 4 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings (☆75, updated last year)
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding (☆19, updated 3 years ago)
- Code for "Editing Factual Knowledge in Language Models" (☆142, updated 4 years ago)
- ☆88, updated 3 years ago
- ☆83, updated 2 years ago
- Query-focused summarization data (☆43, updated 2 years ago)
- ☆145, updated last year
- ☆11, updated last year
- Retrieval as Attention (☆82, updated 3 years ago)
- ☆75, updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions (☆183, updated 3 years ago)
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… (☆138, updated 2 years ago)
- ☆49, updated 2 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" (☆39, updated 3 years ago)
- ☆72, updated 2 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training (☆23, updated last year)
- ☆114, updated 3 years ago
- TBC (☆28, updated 3 years ago)
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… (☆99, updated 4 years ago)