coastalcph / hierarchical-transformers
Hierarchical Attention Transformers (HAT)
☆45 · Updated 10 months ago
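For context on how HAT is typically used, below is a minimal loading sketch via 🤗 Transformers. The checkpoint id is an illustrative assumption, and `trust_remote_code=True` is assumed to be needed because HAT is a custom architecture distributed on the Hugging Face Hub; neither detail is stated on this page.

```python
# Minimal sketch: load a HAT encoder through 🤗 Transformers.
# The checkpoint id below is an assumed/illustrative example; custom
# architectures on the Hub generally require trust_remote_code=True.
from transformers import AutoTokenizer, AutoModel

checkpoint = "kiddothe2b/hierarchical-transformer-base-4096"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True)

# Encode a long document; HAT splits it into segments internally.
inputs = tokenizer("A very long document ... " * 200, truncation=True,
                   max_length=4096, return_tensors="pt")
outputs = model(**inputs)

# Most HF encoders expose last_hidden_state; adjust if the custom HAT
# code returns a different output structure.
print(outputs.last_hidden_state.shape)
```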
Related projects
Alternatives and complementary repositories for hierarchical-transformers
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆92 · Updated last year
- PyTorch implementation of "Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement" ☆61 · Updated 3 years ago
- ☆42 · Updated 2 years ago
- The official repository for the paper "Efficient Long-Text Understanding Using Short-Text Models" (Ivgi et al., 2022) ☆67 · Updated last year
- Long-context pretrained encoder-decoder models ☆95 · Updated 2 years ago
- ☆55 · Updated last year
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆31 · Updated 3 years ago
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers (see the loading sketch after this list) ☆47 · Updated last year
- ☆21 · Updated 3 years ago
- LTG-Bert ☆29 · Updated 10 months ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆45 · Updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆44 · Updated last year
- Pre-training BART in Flax on The Pile dataset ☆20 · Updated 3 years ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆29 · Updated last year
- Using a business-level retrieval system (BM25) with Python in just a few lines (see the BM25 sketch after this list). ☆31 · Updated last year
- Simple Questions Generate Named Entity Recognition Datasets (EMNLP 2022) ☆76 · Updated last year
- An official repository for the MIA 2022 (NAACL 2022 Workshop) Shared Task on Cross-lingual Open-Retrieval Question Answering. ☆31 · Updated 2 years ago
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆16 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆45 · Updated 3 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆51 · Updated last year
- Ensembling Hugging Face transformers made easy ☆63 · Updated last year
- ☆40 · Updated 3 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆97 · Updated last year
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆75 · Updated 2 months ago
- Efficient Memory-Augmented Transformers ☆34 · Updated last year
- SeqScore: Scoring for named entity recognition and other sequence labeling tasks ☆21 · Updated last month
- ACL22 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost ☆39 · Updated last year
- PropSegmEnt is an annotated dataset for segmenting English text into propositions, and recognizing proposition-level entailment relations… ☆18 · Updated last year
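Since the BigBird entry above refers to the model's 🤗 Transformers integration, here is a minimal loading sketch. It uses the standard public `google/bigbird-roberta-base` checkpoint and stock Transformers calls; it is generic library usage rather than anything specific to the listed repository.

```python
# Minimal sketch: load BigBird through 🤗 Transformers (PyTorch backend).
# google/bigbird-roberta-base is the standard public checkpoint; its
# block-sparse attention handles long inputs up to 4096 tokens.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = AutoModel.from_pretrained("google/bigbird-roberta-base")

text = "BigBird handles sequences of up to 4096 tokens. " * 100
inputs = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```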
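The BM25 entry above does not show its own API, so the sketch below uses the widely available rank_bm25 package as a stand-in to illustrate the "few lines of Python" idea; treat the package choice as an assumption, not the listed repository's interface.

```python
# Minimal BM25 retrieval sketch using the rank_bm25 package as a stand-in
# (pip install rank-bm25); this is not the API of the repository listed above.
from rank_bm25 import BM25Okapi

corpus = [
    "Hierarchical attention transformers process long documents segment by segment.",
    "BM25 is a classic lexical ranking function used in search engines.",
    "BigBird uses block-sparse attention to scale to long sequences.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]

bm25 = BM25Okapi(tokenized_corpus)

query = "lexical search ranking with BM25".lower().split()
print(bm25.get_scores(query))              # relevance score per document
print(bm25.get_top_n(query, corpus, n=1))  # best-matching document text
```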