lucidrains / charformer-pytorch
Implementation of the GBST block from the Charformer paper, in Pytorch
★117 · Updated 4 years ago
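For context, the GBST (Gradient-Based Subword Tokenization) block implemented by this repo learns a soft choice among character blocks of different widths: it pools the character sequence at several block sizes, scores each candidate per position, and returns a softmax-weighted combination. Below is a minimal, hypothetical sketch of that idea in plain PyTorch; the class name `MiniGBST` and all implementation details are illustrative assumptions, not the repo's actual API.

```python
import torch
import torch.nn.functional as F
from torch import nn

class MiniGBST(nn.Module):
    """Illustrative sketch of GBST-style soft block selection (not the repo's API)."""

    def __init__(self, dim, max_block_size=4):
        super().__init__()
        self.block_sizes = list(range(1, max_block_size + 1))
        self.score = nn.Linear(dim, 1)  # scores each pooled candidate per position

    def forward(self, x):
        # x: (batch, seq, dim); seq assumed divisible by every block size
        b, n, d = x.shape
        candidates = []
        for size in self.block_sizes:
            # mean-pool non-overlapping blocks of this width
            pooled = x.view(b, n // size, size, d).mean(dim=2)
            # upsample back to the original length so candidates align per position
            candidates.append(pooled.repeat_interleave(size, dim=1))
        cands = torch.stack(candidates, dim=2)            # (b, n, num_sizes, d)
        weights = F.softmax(self.score(cands).squeeze(-1), dim=-1)  # (b, n, num_sizes)
        # softmax-weighted combination over block sizes, per position
        return torch.einsum('bns,bnsd->bnd', weights, cands)
```

The output keeps the input's shape; the paper additionally downsamples the result to shorten the sequence, which this sketch omits for brevity.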
Alternatives and similar repositories for charformer-pytorch
Users interested in charformer-pytorch are comparing it to the repositories listed below.
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ★147 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ★58 · Updated 2 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ★76 · Updated 4 years ago
- ★44 · Updated 4 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ★46 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ★82 · Updated 3 years ago
- FairSeq repo with Apollo optimizer ★114 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ★135 · Updated last year
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ★227 · Updated 2 years ago
- An official implementation of the "BPE-Dropout: Simple and Effective Subword Regularization" algorithm. ★53 · Updated 4 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework ★118 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ★69 · Updated 4 years ago
- PyTorch reimplementation of REALM and ORQA ★22 · Updated 3 years ago
- GPT, but made only out of MLPs ★89 · Updated 4 years ago
- ★46 · Updated 3 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference ★80 · Updated 3 years ago
- A set of neural transducers (e.g. sequence-to-sequence models) focusing on character-level tasks. ★76 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ★93 · Updated 2 years ago
- Cascaded Text Generation with Markov Transformers ★129 · Updated 2 years ago
- ★67 · Updated 2 years ago
- Implementation of Mixout with PyTorch ★75 · Updated 2 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ★54 · Updated 2 years ago
- As good as new: How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ★48 · Updated 3 years ago
- ★62 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ★75 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ★46 · Updated 4 years ago
- Official Pytorch implementation of "Roles and Utilization of Attention Heads in Transformer-based Neural Language Models" (ACL 2020) ★16 · Updated 3 months ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ★67 · Updated 2 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ★72 · Updated last year
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ★155 · Updated last year