JoaoLages / RATransformers
RATransformers: Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware!
☆41 · Updated 2 years ago
Alternatives and similar repositories for RATransformers:
Users interested in RATransformers are comparing it to the libraries listed below:
- ☆21 · Updated 3 years ago
- Embedding Recycling for Language models ☆38 · Updated last year
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆16 · Updated 3 years ago
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models, EMNLP 2021 main conference", a high-qual… ☆46 · Updated last month
- SPRINT Toolkit helps you evaluate diverse neural sparse models easily using a single click on any IR dataset. ☆43 · Updated last year
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆22 · Updated last year
- This repository contains the code for the paper 'PARM: Paragraph Aggregation Retrieval Model for Dense Document-to-Document Retrieval' pu… ☆40 · Updated 3 years ago
- ☆13 · Updated last year
- Shared code for training sentence embeddings with Flax / JAX ☆27 · Updated 3 years ago
- ☆22 · Updated 2 years ago
- Code & data for EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics". ☆16 · Updated 2 years ago
- ☆74 · Updated 3 years ago
- [ICLR 2023] PyTorch code of Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees ☆23 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆28 · Updated 2 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP'2021 ☆29 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- ☆17 · Updated last year
- Training T5 to perform numerical reasoning. ☆23 · Updated 3 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆47 · Updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆45 · Updated last year
- ☆55 · Updated 2 years ago
- Source code and data for "Like a Good Nearest Neighbor" ☆28 · Updated 2 weeks ago
- Apps built using Inspired Cognition's Critique. ☆58 · Updated last year
- Few-shot Learning with Auxiliary Data ☆26 · Updated last year
- Official codebase accompanying our ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu). ☆20 · Updated 2 years ago
- ☆45 · Updated 2 months ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆85 · Updated 2 years ago
- Exploring Few-Shot Adaptation of Language Models with Tables ☆23 · Updated 2 years ago