google-research / bigbird
Transformers for Longer Sequences
☆603 · Updated 2 years ago
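The repo itself ships TensorFlow code, but the quickest way to try BigBird's block-sparse attention is through the Hugging Face port. The sketch below is a minimal example under that assumption; the `google/bigbird-roberta-base` checkpoint id and the `attention_type` argument come from the `transformers` library, not from this repository.

```python
# Minimal sketch: running BigBird via the Hugging Face `transformers` port,
# not this repository's own TensorFlow code.
from transformers import AutoTokenizer, BigBirdModel

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base",
    attention_type="block_sparse",  # BigBird's sparse pattern; "original_full" falls back to dense attention
)

# Block-sparse attention keeps cost roughly linear in sequence length,
# so inputs far beyond BERT's 512-token limit fit in memory (up to 4096 here).
long_text = "BigBird scales transformers to longer sequences. " * 300
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=4096)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```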
Alternatives and similar repositories for bigbird:
Users interested in bigbird are comparing it to the libraries listed below.
- Longformer: The Long-Document Transformer ☆2,104 · Updated 2 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆472 · Updated 2 years ago
- Autoregressive Entity Retrieval ☆785 · Updated last year
- Code for using and evaluating SpanBERT. ☆896 · Updated last year
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ☆313 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ☆750 · Updated last year
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆327 · Updated last year
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆609 · Updated 2 years ago
- ☆501 · Updated last year
- XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 typologically diverse languages ☆643 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022); a minimal sketch of its attention bias appears after this list ☆521 · Updated last year
- A Visual Analysis Tool to Explore Learned Representations in Transformer Models ☆588 · Updated last year
- An efficient implementation of popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Updated 2 years ago
- FastFormers - highly efficient transformer models for NLU ☆705 · Updated 3 weeks ago
- PyTorch library for fast transformer implementations ☆1,695 · Updated 2 years ago
- Fast BPE ☆670 · Updated 9 months ago
- Reformer, the efficient Transformer, in PyTorch ☆2,163 · Updated last year
- [DEPRECATED] Repo for exploring multi-task learning approaches to learning sentence representations ☆791 · Updated 3 years ago
- An implementation of masked language modeling for PyTorch, made as concise and simple as possible ☆180 · Updated last year
- An implementation of Performer, a linear-attention-based transformer, in PyTorch ☆1,120 · Updated 3 years ago
- ☆345 · Updated 3 years ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Code associated with the "Don't Stop Pretraining" ACL 2020 paper ☆529 · Updated 3 years ago
- A simple and working implementation of ELECTRA, the fastest way to pretrain language models from scratch, in PyTorch ☆225 · Updated last year
- Officially supported AllenNLP models ☆540 · Updated 2 years ago
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP 2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.o… ☆604 · Updated 2 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 2 years ago
- UnifiedQA: Crossing Format Boundaries With a Single QA System ☆432 · Updated 2 years ago
- Fusion-in-Decoder ☆566 · Updated last year
- ☆490 · Updated last year
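ALiBi, listed above, is compact enough to sketch directly: instead of positional embeddings it adds a head-specific linear penalty on query-key distance to the pre-softmax attention scores. The helper below is a hypothetical illustration (the function name is ours, not from the ALiBi repo); the slope schedule follows the geometric sequence the paper describes for a power-of-two head count.

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Hypothetical helper, not code from the ALiBi repository.
    # Head-specific slopes: geometric sequence starting at 2^(-8/num_heads),
    # per the paper's recipe for num_heads a power of two.
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    # Relative distance j - i between key position j and query position i.
    positions = torch.arange(seq_len)
    distance = positions[None, :] - positions[:, None]   # (seq, seq)
    # In causal attention only j <= i survives the mask, so the bias is a
    # non-positive penalty that grows with how far back the key sits.
    return slopes[:, None, None] * distance[None, :, :]  # (heads, seq, seq)

# Usage sketch: add the bias to raw attention scores before softmax, e.g.
# scores = q @ k.transpose(-2, -1) / d_head**0.5 + alibi_bias(n_heads, T)
```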