coastalcph / hierarchical-transformers
Hierarchical Attention Transformers (HAT)
☆59 · Updated last year
Alternatives and similar repositories for hierarchical-transformers
Users interested in hierarchical-transformers are comparing it to the libraries listed below.
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers ☆49 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- Code for Relevance-guided Supervision for OpenQA with ColBERT (TACL'21) ☆41 · Updated 4 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 4 months ago
- LTG-Bert ☆34 · Updated last year
- ☆101 · Updated 2 years ago
- Ranking of fine-tuned HF models as base models. ☆36 · Updated last month
- Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022) ☆115 · Updated 3 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated last year
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- Bi-encoder entity linking architecture ☆51 · Updated last year
- PyTorch implementation of “Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement” ☆62 · Updated 4 years ago
- Frustratingly Simple Pretraining Alternatives to Masked Language Modeling (EMNLP 2021) ☆34 · Updated 3 years ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆66 · Updated last month
- An instruction-based benchmark for text improvements. ☆143 · Updated 2 years ago
- ☆15 · Updated last year
- ☆32 · Updated 2 years ago
- Ensembling Hugging Face transformers made easy ☆62 · Updated 2 years ago
- ☆58 · Updated 4 years ago
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea… ☆75 · Updated last year
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- ☆14 · Updated last month
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models, EMNLP 2021 main conference", a high-qual… ☆47 · Updated 11 months ago
- The official repository for the paper "Efficient Long-Text Understanding Using Short-Text Models" (Ivgi et al., 2022) ☆70 · Updated 2 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆14 · Updated 4 years ago
- ☆55 · Updated 2 years ago