Transformers for Longer Sequences
☆633, updated Sep 1, 2022
Alternatives and similar repositories for bigbird
Users interested in bigbird are comparing it to the libraries listed below.
- Longformer: The Long-Document Transformer (☆2,190, updated Feb 8, 2023)
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers (☆49, updated Mar 20, 2023)
- Chinese BigBird pre-trained model (☆96, updated Jul 5, 2022)
- Long Range Arena for Benchmarking Efficient Transformers (☆788, updated Dec 16, 2023)
- ☆12, updated Jan 8, 2021
- Using correlated substitutions to infer recombination rates in RNA viruses (☆18, updated Jan 26, 2023)
- ☆184, updated May 26, 2023
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" (☆6,503, updated Jan 14, 2026)
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (☆1,611, updated Aug 12, 2020)
- The implementation of DeBERTa (☆2,206, updated Sep 29, 2023)
- Linguistically-Informed Self-Attention for Semantic Role Labeling (old version) (☆27, updated Dec 3, 2021)
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (☆2,373, updated Mar 23, 2024)
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python (☆32,201, updated Sep 30, 2025)
- ☆1,296, updated Dec 15, 2022
- ↔️ Utilizing the RBERT model structure for the KLUE Relation Extraction task (☆15, updated Nov 15, 2022)
- Code for using and evaluating SpanBERT (☆906, updated Jul 25, 2023)
- ☆390, updated Oct 18, 2023
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP 2021: Phrase Retrieval Learns Passage Retrieval, Too (https://arxiv.o…) (☆607, updated Jun 15, 2022)
- State-of-the-Art Text Embeddings (☆18,534, updated this week)
- Resources for the NAACL 2018 paper "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents" (☆390, updated Mar 24, 2023)
- A PyTorch Lightning implementation of the Transformer network (☆11, updated Oct 22, 2020)
- Dense Passage Retriever: a set of tools and models for the open-domain Q&A task (☆1,862, updated Apr 6, 2023)
- Knowledge Discovery for COVID-19: search the full text of CORD-19 research papers on COVID-19 (☆27, updated Apr 22, 2020)
- Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive… (☆440, updated May 26, 2025)
- Reformer, the efficient Transformer, in PyTorch (☆2,189, updated Jun 21, 2023)
- ☆1,652, updated Jul 20, 2023
- Shared repository for open-sourced projects from the Google AI Language team (☆1,767, updated last week)
- ☆249, updated Mar 16, 2022
- ☆220, updated Jun 8, 2020
- An implementation of Performer, a linear-attention-based Transformer, in PyTorch (☆1,177, updated Feb 2, 2022)
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings (https://arxiv.org/abs/2104.08821) (☆3,649, updated Oct 16, 2024)
- PyTorch library for fast Transformer implementations (☆1,767, updated Mar 23, 2023)
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (☆3,282, updated Apr 14, 2023)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (☆9,608, updated this week)
- ☆3,697, updated Sep 21, 2022
- Ongoing research training transformer models at scale (☆15,985, updated this week)
- ☆10, updated Oct 15, 2019
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (☆22,086, updated Jan 23, 2026)
- Official PyTorch implementation of the Length-Adaptive Transformer (ACL 2021) (☆102, updated Nov 2, 2020)