facebookresearch / ELECTRA-Fewshot-Learning
This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models".
☆48 · Updated 3 years ago
Alternatives and similar repositories for ELECTRA-Fewshot-Learning
Users interested in ELECTRA-Fewshot-Learning are comparing it to the libraries listed below.
- Code for the EMNLP 2021 paper "Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting" ☆17 · Updated 3 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 2 years ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 3 years ago
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- Code for a text augmentation method leveraging large-scale language models ☆62 · Updated 3 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 2 years ago
- ACL 2022 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost ☆41 · Updated last year
- Pre-training BART in Flax on The Pile dataset ☆22 · Updated 4 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 3 years ago
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- PyTorch reimplementation of the paper "SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization" ☆16 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 4 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago
- Knowledge Infused Decoding ☆71 · Updated last year
- Official implementation of the NeurIPS 2021 paper "One Question Answering Model for Many Languages with Cross-lingual Dense Passage Ret… ☆71 · Updated 3 years ago
- Official code and model checkpoints for the EMNLP 2022 paper "RankGen: Improving Text Generation with Large Ranking Models" (https://arx… ☆137 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- [EMNLP'21] Mirror-BERT: Converting pretrained language models into universal text encoders without labels ☆78 · Updated 3 years ago
- Code associated with the paper "Data Augmentation Using Pre-trained Transformer Models" ☆52 · Updated 2 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆98 · Updated 2 years ago
- Source code for "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) ☆53 · Updated 3 years ago
- TBC ☆27 · Updated 2 years ago
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models ☆109 · Updated 3 years ago
- ☆14 · Updated last year
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆16 · Updated 2 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆32 · Updated 3 years ago
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction" (EMNLP 2021) ☆55 · Updated 3 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆68 · Updated 2 years ago
- Simple Questions Generate Named Entity Recognition Datasets (EMNLP 2022) ☆76 · Updated 2 years ago