coastalcph / hierarchical-transformers
Hierarchical Attention Transformers (HAT)
☆49 · Updated last year
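For quick orientation, below is a minimal sketch of loading a HAT checkpoint through 🤗 Transformers. The checkpoint identifier is an assumption (check the repository or the Hugging Face Hub for the published names), and HAT ships custom modeling code, hence `trust_remote_code=True`.

```python
# Minimal sketch: loading a Hierarchical Attention Transformer (HAT) checkpoint
# via Hugging Face Transformers. The model identifier below is an assumption;
# see the repository for the actual published checkpoints.
from transformers import AutoTokenizer, AutoModel

model_name = "kiddothe2b/hierarchical-transformer-base-4096"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

# Encode a long document; HAT splits it into segments internally.
inputs = tokenizer("A very long document. " * 200, truncation=True,
                   max_length=4096, return_tensors="pt")
outputs = model(**inputs)
# Assumes the custom model follows the standard Transformers output convention.
print(outputs.last_hidden_state.shape)
```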
Alternatives and similar repositories for hierarchical-transformers:
Users interested in hierarchical-transformers are comparing it to the repositories listed below.
- ☆21 · Updated 3 years ago
- The official repository for Efficient Long-Text Understanding Using Short-Text Models (Ivgi et al., 2022) paper ☆69 · Updated last year
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers ☆48 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any hugging face text dataset. ☆93 · Updated 2 years ago
- LTG-Bert ☆30 · Updated last year
- Long-context pretrained encoder-decoder models ☆94 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆31 · Updated 3 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆47 · Updated last year
- Embedding Recycling for Language models ☆38 · Updated last year
- An official repository for MIA 2022 (NAACL 2022 Workshop) Shared Task on Cross-lingual Open-Retrieval Question Answering. ☆30 · Updated 2 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated last year
- ☆53 · Updated 3 years ago
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models, EMNLP 2021 main conference", a high-qual… ☆46 · Updated 3 months ago
- Pytorch implementation of “Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement” ☆62 · Updated 4 years ago
- ☆17 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆45 · Updated 4 years ago
- Code for Stage-wise Fine-tuning for Graph-to-Text Generation ☆26 · Updated 2 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆224 · Updated last year
- ☆97 · Updated 2 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆75 · Updated 4 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆47 · Updated 2 years ago
- Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles ☆43 · Updated 9 months ago
- ☆13 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- Official codebase accompanying our ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu). ☆20 · Updated 2 years ago
- BioELECTRA ☆51 · Updated 3 years ago
- Using a business-level retrieval system (BM25) with Python in just a few lines; see the BM25 sketch after this list. ☆31 · Updated 2 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆18 · Updated last month
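As a companion to the BM25 item above, here is a minimal retrieval sketch. It uses the generic `rank_bm25` package rather than the listed repository's own API, which is an assumption about comparable functionality.

```python
# Minimal BM25 retrieval sketch using the rank_bm25 package
# (an assumption; the listed repository may expose a different API).
from rank_bm25 import BM25Okapi

corpus = [
    "Hierarchical attention transformers process long documents in segments.",
    "BM25 is a classic lexical ranking function for information retrieval.",
    "ELECTRA pretrains a discriminator to detect replaced tokens.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]

bm25 = BM25Okapi(tokenized_corpus)

query = "lexical ranking with BM25"
tokenized_query = query.lower().split()

# Score every document against the query and fetch the best match.
scores = bm25.get_scores(tokenized_query)
top_docs = bm25.get_top_n(tokenized_query, corpus, n=1)
print(scores)
print(top_docs)
```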