mt-upc / transformer-contributions
Measuring the Mixing of Contextual Information in the Transformer
☆31 · Updated 2 years ago
Alternatives and similar repositories for transformer-contributions
Users interested in transformer-contributions are comparing it to the repositories listed below.
- ☆17 · Updated 2 years ago
- ☆38 · Updated last year
- The geometry of multilingual language model representations (EMNLP 2022). ☆21 · Updated 2 years ago
- Benchmark API for Multidomain Language Modeling ☆25 · Updated 3 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 2 years ago
- Materials for "Quantifying the Plausibility of Context Reliance in Neural Machine Translation" at ICLR'24 🐑 🐑 ☆15 · Updated last year
- ☆86 · Updated last year
- ☆20 · Updated last year
- ☆35 · Updated 3 years ago
- A curated list of research papers and resources on Cultural LLM. ☆46 · Updated 11 months ago
- ☆110 · Updated 3 years ago
- ☆64 · Updated 2 years ago
- Automatic metrics for GEM tasks ☆67 · Updated 2 years ago
- [NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers ☆21 · Updated 2 years ago
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" in EMNLP 2023. ☆42 · Updated last year
- Code associated with the ACL 2021 DExperts paper ☆115 · Updated 2 years ago
- ☆11 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 4 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated last year
- ☆75 · Updated last year
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆112 · Updated 3 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆94 · Updated 3 years ago
- Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models" ☆32 · Updated last year
- Easy-to-use framework for evaluating cross-lingual consistency of factual knowledge (supports LLaMA, BLOOM, mT5, RoBERTa, etc.). Paper he… ☆25 · Updated 2 weeks ago
- ☆58 · Updated 3 years ago
- ☆24 · Updated 4 years ago
- ☆29 · Updated 2 years ago
- Code for "Editing Factual Knowledge in Language Models" ☆139 · Updated 3 years ago