lyeoni / gpt-pytorch
PyTorch Implementation of OpenAI GPT
☆117 · Updated last year
Related projects
Alternatives and complementary repositories for gpt-pytorch
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in PyTorch ☆222 · Updated last year
- An implementation of masked language modeling for PyTorch, made as concise and simple as possible ☆177 · Updated last year
- ☆456 · Updated 3 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆325 · Updated 10 months ago
- Code and Data for ACL 2020 paper "Few-Shot NLG with Pre-Trained Language Model" ☆189 · Updated last year
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- PyTorch Implementation of OpenAI GPT-2 ☆292 · Updated 4 months ago
- This is a repository with the code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, t… ☆303 · Updated 3 years ago
- Extremely simple and fast word2vec implementation with Negative Sampling + Sub-sampling ☆180 · Updated 3 years ago
- Language Modeling Example with Transformers and PyTorch Lightning ☆65 · Updated 4 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆145 · Updated 3 years ago
- Statistics and accepted paper list of NLP conferences with arXiv link ☆430 · Updated 3 years ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ☆309 · Updated last year
- Optimus: the first large-scale pre-trained VAE language model ☆376 · Updated last year
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆97 · Updated 6 years ago
- Fully featured implementation of Routing Transformer ☆284 · Updated 3 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆141 · Updated 2 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆71 · Updated last year
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed Training! ☆110 · Updated last year
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆225 · Updated 3 years ago
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆117 · Updated 3 years ago
- PyTorch implementation of Compressive Transformers, from DeepMind ☆155 · Updated 3 years ago
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models. ☆59 · Updated 2 years ago
- Transformers for Longer Sequences ☆575 · Updated 2 years ago
- Implementation of the first paper on word2vec ☆202 · Updated 2 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆132 · Updated last year
- [NAACL 2021] This is the code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆201 · Updated 2 years ago
- Fine-tuning GPT-2 Small for Question Answering ☆130 · Updated last year
- GeDi: Generative Discriminator Guided Sequence Generation ☆208 · Updated 2 years ago
- API for accessing the GraphLog dataset ☆89 · Updated 6 months ago