ldery / TARTAN
Official repo for the ICLR 2022 paper: Should We Be Pre-Training? Exploring End-Task Aware Training In Lieu of Continued Pre-training
☆9 · Updated 3 years ago
Alternatives and similar repositories for TARTAN
Users interested in TARTAN are comparing it to the repositories listed below.
- EMNLP 2021: Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆32 · Updated 3 years ago
- ☆58 · Updated 3 years ago
- Knowledge Infused Decoding ☆71 · Updated last year
- Meta Representation Transformation for Low-resource Cross-lingual Learning ☆40 · Updated 4 years ago
- The Stanford Word Substitution (Swords) Benchmark ☆32 · Updated 3 years ago
- Bootstrapped Unsupervised Sentence Representation Learning (ACL 2021) ☆30 · Updated 3 years ago
- [EMNLP 2021] Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning ☆17 · Updated last month
- Simple Questions Generate Named Entity Recognition Datasets (EMNLP 2022) ☆76 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- Code for "BERTifying the Hidden Markov Model for Multi-Source Weakly Supervised Named Entity Recognition" ☆32 · Updated 2 years ago
- Findings of ACL 2023: Optimizing Test-Time Query Representations for Dense Retrieval ☆30 · Updated last year
- Code for the EACL 2021 paper "Challenges in Automated Debiasing for Toxic Language Detection" by Xuhui Zhou, Maarten Sap, Swabha Swayamd… ☆19 · Updated 3 years ago
- Mutual Information Predicts Hallucinations in Abstractive Summarization ☆12 · Updated 2 years ago
- Code for the ACL 2022 paper "Synthetic Question Value Estimation for Domain Adaptation of Question Answering" ☆17 · Updated 3 years ago
- The official implementation of "Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks" (NAACL 2022) ☆44 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- This repository accompanies the paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆85 · Updated 3 years ago
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- Supervised Contrastive Learning for Downstream Optimized Sequence Representations ☆27 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- ☆31 · Updated last year
- The contrastive token loss function for reducing generative repetition of autoregressive neural language models ☆13 · Updated 3 years ago
- Code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" ☆48 · Updated 3 years ago
- Discretized Integrated Gradients for Explaining Language Models (EMNLP 2021) ☆27 · Updated 3 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 2 years ago
- Source code for the ICLR 2021 paper "Pre-training Text-to-Text Transformers for Concept-Centric Common Sense" ☆27 · Updated 3 years ago
- ACL 2022 paper: "Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost" ☆41 · Updated last year
- Code for the EMNLP 2021 paper "Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting" ☆17 · Updated 3 years ago
- ☆71 · Updated 3 years ago