google-research / bigbird
Transformers for Longer Sequences
★591, updated 2 years ago
Alternatives and similar repositories for bigbird:
Users interested in bigbird are comparing it to the libraries listed below.
- Longformer: The Long-Document Transformer (★2,082, updated 2 years ago)
- Flexible components pairing 🤗 Transformers with PyTorch Lightning (★612, updated 2 years ago)
- An implementation of masked language modeling for PyTorch, made as concise and simple as possible (★178, updated last year)
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) (★326, updated last year)
- ★344, updated 3 years ago
- Code for using and evaluating SpanBERT. (★895, updated last year)
- The corresponding code from our paper "DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations". Do not hesitate to o… (★381, updated last year)
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… (★430, updated 2 years ago)
- Repository containing code for the "How to Train BERT with an Academic Budget" paper (★311, updated last year)
- Autoregressive Entity Retrieval (★781, updated last year)
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch (★857, updated last year)
- FastFormers: highly efficient transformer models for NLU (★704, updated last year)
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper (★531, updated 3 years ago)
- Officially supported AllenNLP models (★538, updated 2 years ago)
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723 (★725, updated 2 years ago)
- Long Range Arena for Benchmarking Efficient Transformers (★745, updated last year)
- A simple and working implementation of ELECTRA, the fastest way to pretrain language models from scratch, in PyTorch (★221, updated last year)
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" (★471, updated 2 years ago)
- Understanding the Difficulty of Training Transformers (★328, updated 2 years ago)
- Fully featured implementation of Routing Transformer (★288, updated 3 years ago)
- A visual analysis tool to explore learned representations in Transformer models (★586, updated last year)
- Code for the ALiBi method for transformer language models (ICLR 2022) (★515, updated last year)
- ★494, updated last year
- ★460, updated 3 years ago
- Reformer, the efficient Transformer, in PyTorch (★2,152, updated last year)
- An implementation of Performer, a linear attention-based transformer, in PyTorch (★1,114, updated 3 years ago)
- XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 ty… (★638, updated 2 years ago)
- Optimus: the first large-scale pre-trained VAE language model (★380, updated last year)
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations (★780, updated 9 months ago)
- ⛵️ The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) (★310, updated last year)