lucidrains / charformer-pytorch
Implementation of the GBST block from the Charformer paper, in Pytorch
★117 · Updated 3 years ago
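The GBST (Gradient-Based Subword Tokenization) block from the Charformer paper pools byte embeddings over several candidate block sizes, scores each candidate, mixes them with a softmax, and then downsamples the sequence. Below is a minimal, self-contained PyTorch sketch of that idea only — it is not this repo's actual API, and the class and parameter names (`SimpleGBST`, `block_sizes`, `downsample`) are illustrative:

```python
import torch
import torch.nn as nn

class SimpleGBST(nn.Module):
    """Illustrative sketch of Gradient-Based Subword Tokenization:
    mean-pool byte embeddings over candidate block sizes, score each
    candidate, softmax-mix them, then downsample the sequence.
    Assumes the sequence length is divisible by every block size
    and by the downsample factor."""

    def __init__(self, num_tokens=257, dim=64, block_sizes=(1, 2, 4), downsample=4):
        super().__init__()
        self.embed = nn.Embedding(num_tokens, dim)
        self.block_sizes = block_sizes
        self.score = nn.Linear(dim, 1)
        self.downsample = downsample

    def forward(self, ids):
        x = self.embed(ids)                          # (batch, seq, dim)
        b, n, d = x.shape
        cands, scores = [], []
        for s in self.block_sizes:
            # mean-pool non-overlapping blocks of size s, then repeat
            # each pooled vector so every position has a candidate
            pooled = x.view(b, n // s, s, d).mean(dim=2)
            up = pooled.repeat_interleave(s, dim=1)  # back to (b, n, d)
            cands.append(up)
            scores.append(self.score(up))
        cands = torch.stack(cands, dim=-2)           # (b, n, k, d)
        scores = torch.stack(scores, dim=-2)         # (b, n, k, 1)
        mixed = (cands * scores.softmax(dim=-2)).sum(dim=-2)
        # final mean-pool downsampling to shorten the sequence
        return mixed.view(b, n // self.downsample, self.downsample, d).mean(dim=2)
```

With a batch of byte IDs shaped `(2, 16)` and the defaults above, the output is `(2, 4, 64)`: the sequence is shortened by the downsample factor while each position carries a soft mixture over block sizes.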
Related projects
Alternatives and complementary repositories for charformer-pytorch
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ★70 · Updated 4 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ★75 · Updated 3 years ago
- Implementation of Mixout with PyTorch ★74 · Updated last year
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a…) ★46 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ★58 · Updated 2 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference ★79 · Updated 3 years ago
- ★42 · Updated 4 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ★145 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ★92 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ★136 · Updated last year
- Code accompanying our papers on the "Generative Distributional Control" framework ★117 · Updated last year
- ★63 · Updated 2 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer… ★55 · Updated 3 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ★222 · Updated last year
- ★67 · Updated 2 years ago
- Shared code for training sentence embeddings with Flax / JAX ★27 · Updated 3 years ago
- This repository hosts my experiments for the project I did with OffNote Labs. ★11 · Updated 3 years ago
- LM Pretraining with PyTorch/TPU ★132 · Updated 5 years ago
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" ★37 · Updated 2 years ago
- FairSeq repo with Apollo optimizer ★107 · Updated 11 months ago
- GPT, but made only out of MLPs ★86 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ★45 · Updated 3 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ★130 · Updated 5 months ago
- ★73 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ★132 · Updated last year
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ★31 · Updated 3 years ago
- PyTorch reimplementation of REALM and ORQA ★22 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ★95 · Updated last year
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ★46 · Updated 4 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ★55 · Updated 2 years ago