nyu-mll / jiant-v1-legacy
The jiant toolkit for general-purpose text understanding models
(☆22, updated 4 years ago)
Alternatives and similar repositories for jiant-v1-legacy
Users interested in jiant-v1-legacy are comparing it to the libraries listed below:
- Code and pretrained models for the Findings of ACL 2022 paper "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrie…" (☆49, updated 3 years ago)
- Official implementation of the paper "MVP: Multi-task Supervised Pre-training for Natural Language Generation" (☆73, updated 2 years ago)
- (☆117, updated 3 years ago)
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation (☆119, updated 2 years ago)
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) (☆62, updated 3 years ago)
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models (☆109, updated 3 years ago)
- (☆39, updated 2 years ago)
- (☆78, updated last year)
- Detect hallucinated tokens for conditional sequence generation (☆64, updated 3 years ago)
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021 (☆55, updated 3 years ago)
- The Few-Shot Bot: Prompt-Based Learning for Dialogue Systems (☆118, updated 3 years ago)
- (☆67, updated 3 years ago)
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation (☆97, updated 2 years ago)
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti…) (☆21, updated 3 years ago)
- Repo for "On Learning to Summarize with Large Language Models as References" (☆43, updated 2 years ago)
- PyTorch code for "FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization" (NAACL 2022) (☆39, updated 2 years ago)
- (☆27, updated 3 years ago)
- Code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" (☆48, updated 3 years ago)
- An Empirical Study on Contrastive Search and Contrastive Decoding for Open-ended Text Generation (☆27, updated last year)
- Code for the CIKM 2019 paper "How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations" (☆32, updated 2 years ago)
- Official repository for the MIA 2022 (NAACL 2022 Workshop) Shared Task on Cross-lingual Open-Retrieval Question Answering (☆31, updated 3 years ago)
- Long-context pretrained encoder-decoder models (☆96, updated 2 years ago)
- [NAACL 2022] Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning (☆57, updated last year)
- reStructured Pre-training (☆98, updated 2 years ago)
- Token-level Reference-free Hallucination Detection (☆96, updated 2 years ago)
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" (☆18, updated 2 years ago)
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP 2021 (☆29, updated 2 years ago)
- An extension of the Transformers library to include a T5ForSequenceClassification class (☆39, updated 2 years ago)
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language (☆73, updated last year)
- The data and code for EmailSum (☆61, updated 4 years ago)