lucidrains / charformer-pytorch
Implementation of the GBST block from the Charformer paper, in Pytorch
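To give a feel for what the GBST (gradient-based subword tokenization) block does, here is a simplified numpy sketch of the idea: candidate block representations are built by mean-pooling byte embeddings over several block sizes, each candidate is scored with a learned vector, a softmax over block sizes mixes them, and the result is downsampled. This is an illustrative sketch only, not the repo's actual API; the function `gbst_sketch` and the parameter `score_w` are hypothetical names introduced here.

```python
import numpy as np

def gbst_sketch(byte_embeds, score_w, block_sizes=(1, 2, 4), downsample=4):
    """Simplified sketch of GBST-style soft subword blocking (illustrative only)."""
    seq_len, dim = byte_embeds.shape
    candidates = []
    for b in block_sizes:
        # Mean-pool non-overlapping blocks of size b, then broadcast each
        # block representation back to every position inside its block.
        pad = (-seq_len) % b
        x = np.concatenate([byte_embeds, np.zeros((pad, dim))], axis=0)
        blocks = x.reshape(-1, b, dim).mean(axis=1)            # (n_blocks, dim)
        upsampled = np.repeat(blocks, b, axis=0)[:seq_len]     # (seq_len, dim)
        candidates.append(upsampled)
    cand = np.stack(candidates, axis=1)                        # (seq_len, n_sizes, dim)
    scores = cand @ score_w                                    # (seq_len, n_sizes)
    # Softmax over block sizes, then mix the candidates per position.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    mixed = (cand * weights[..., None]).sum(axis=1)            # (seq_len, dim)
    # Downsample the sequence by mean-pooling.
    pad = (-seq_len) % downsample
    mixed = np.concatenate([mixed, np.zeros((pad, dim))], axis=0)
    return mixed.reshape(-1, downsample, dim).mean(axis=1)

rng = np.random.default_rng(0)
embeds = rng.normal(size=(16, 8))        # 16 byte positions, embedding dim 8
out = gbst_sketch(embeds, rng.normal(size=8))
print(out.shape)                         # (4, 8): 16 positions downsampled by 4
```

In the real Charformer model the pooling and scoring use learned projections and the whole block is differentiable, so the "tokenization" is learned end-to-end with the rest of the network.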
⭐118 · Updated 4 years ago
Alternatives and similar repositories for charformer-pytorch
Users interested in charformer-pytorch are comparing it to the repositories listed below.
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. · ⭐147 · Updated 4 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP · ⭐58 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch · ⭐76 · Updated 4 years ago
- ⭐44 · Updated 5 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… · ⭐46 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… · ⭐138 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention · ⭐69 · Updated 5 years ago
- Implementation of Mixout with PyTorch · ⭐75 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. · ⭐81 · Updated 3 years ago
- ⭐62 · Updated 3 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch · ⭐235 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. · ⭐96 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch · ⭐46 · Updated 4 years ago
- GeDi: Generative Discriminator Guided Sequence Generation · ⭐208 · Updated 4 months ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch · ⭐76 · Updated 2 years ago
- GPT, but made only out of MLPs · ⭐89 · Updated 4 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer · ⭐54 · Updated 2 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language · ⭐74 · Updated last year
- This repo contains a set of neural transducers, e.g. sequence-to-sequence models, focusing on character-level tasks. · ⭐76 · Updated 2 years ago
- ⭐67 · Updated 3 years ago
- LM Pretraining with PyTorch/TPU · ⭐136 · Updated 6 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" · ⭐17 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… · ⭐67 · Updated 2 years ago
- ⭐219 · Updated 5 years ago
- ⭐46 · Updated 3 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention · ⭐133 · Updated last year
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. · ⭐105 · Updated 3 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference · ⭐79 · Updated 4 years ago
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" · ⭐39 · Updated 3 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … · ⭐55 · Updated 4 years ago