JoaoLages / RATransformers
RATransformers - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware!
☆41 · Updated last year
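For orientation, a minimal usage sketch of a relation-aware wrapper in the spirit of RATransformers is shown below. The `RATransformer` class, its `relation_kinds` argument, the model id, and the relation labels are assumptions drawn from the one-line description above, not a verified API.

```python
# Hypothetical sketch only -- class name, arguments, and attributes are
# assumptions about the ratransformers package, not its verified API.
from ratransformers import RATransformer

# Wrap a Hugging Face checkpoint and declare the relation types that the
# attention layers should be made aware of (labels here are illustrative).
ratransformer = RATransformer(
    "t5-small",                                  # assumed: any Hugging Face model id
    relation_kinds=["same_row", "same_column"],  # assumed: task-specific relation labels
)

model = ratransformer.model          # assumed: drop-in, relation-aware HF model
tokenizer = ratransformer.tokenizer  # assumed: tokenizer that also accepts relation annotations
```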
Related projects
Alternatives and complementary repositories for RATransformers
- Shared code for training sentence embeddings with Flax / JAX ☆27 · Updated 3 years ago
- Embedding Recycling for Language models ☆38 · Updated last year
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- Code & data for EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics". ☆16 · Updated 2 years ago
- Semantically Structured Sentence Embeddings ☆67 · Updated last month
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- Source code and data for Like a Good Nearest Neighbor ☆28 · Updated 9 months ago
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆16 · Updated 3 years ago
- SPRINT Toolkit helps you evaluate diverse neural sparse models easily using a single click on any IR dataset. ☆42 · Updated last year
- Open source library for few shot NLP ☆77 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆27 · Updated 2 years ago
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆22 · Updated last year
- Few-shot Learning with Auxiliary Data ☆26 · Updated 11 months ago
- Apps built using Inspired Cognition's Critique. ☆58 · Updated last year
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆84 · Updated 2 years ago
- This repository contains the code for the paper 'PARM: Paragraph Aggregation Retrieval Model for Dense Document-to-Document Retrieval' pu… ☆40 · Updated 2 years ago
- diagNNose is a Python library that facilitates a broad set of tools for analysing hidden activations of neural models. ☆81 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆17 · Updated last month
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ☆58 · Updated 2 years ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆46 · Updated 2 years ago
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as fields extraction, tokenization, prompting, batchi… ☆31 · Updated 5 months ago