google-deepmind / pg19
☆229 · Updated 4 years ago
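For readers landing here for the dataset itself, below is a minimal, hedged sketch of streaming PG-19 with the Hugging Face `datasets` library. The `deepmind/pg19` dataset ID and the record fields (`short_book_title`, `text`) are assumptions based on the public dataset card, not something this listing guarantees.

```python
# Hedged sketch (not part of this repository): stream a few PG-19 books
# via the Hugging Face `datasets` library. The dataset ID and field names
# below are assumptions based on the public dataset card.
from itertools import islice

from datasets import load_dataset

# Streaming avoids downloading the full multi-gigabyte corpus up front.
pg19 = load_dataset("deepmind/pg19", split="train", streaming=True)

for book in islice(pg19, 3):
    # Each record is one Project Gutenberg book published before 1919.
    print(book["short_book_title"], len(book["text"]))
```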
Related projects
Alternatives and complementary repositories for pg19
- Repository containing code for the paper "How to Train BERT with an Academic Budget" ☆309 · Updated last year
- ☆315 · Updated 3 years ago
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" ☆202 · Updated 3 years ago
- A library for finding knowledge neurons in pretrained transformer models. ☆151 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆457 · Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆177 · Updated 2 years ago
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆111 · Updated last year
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆561 · Updated this week
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆187 · Updated 3 years ago
- The original implementation of Min et al., "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆156 · Updated last year
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆169 · Updated 4 years ago
- Scalable training for dense retrieval models. ☆271 · Updated last year
- MPNet: Masked and Permuted Pre-training for Language Understanding (https://arxiv.org/pdf/2004.09297.pdf) ☆288 · Updated 3 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆101 · Updated last year
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆112 · Updated 2 years ago
- Adversarial Natural Language Inference Benchmark ☆389 · Updated 2 years ago
- Neural Text Generation with Unlikelihood Training ☆310 · Updated 3 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆254 · Updated last year
- ☆158 · Updated last year
- ☆97 · Updated 2 years ago
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆172 · Updated 2 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆431 · Updated last year
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… ☆269 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- The official code of the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆68 · Updated 10 months ago
- LM Pretraining with PyTorch/TPU ☆132 · Updated 5 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆127 · Updated 6 months ago
- FairSeq repo with Apollo optimizer ☆107 · Updated 11 months ago
- BLEURT is a metric for Natural Language Generation based on transfer learning. ☆697 · Updated last year