IBM / model-recycling
Ranking of fine-tuned HF models as base models.
☆35 · Updated last year
Alternatives and similar repositories for model-recycling:
Users interested in model-recycling are comparing it to the repositories listed below.
- Embedding Recycling for Language Models ☆38 · Updated last year
- ☆22 · Updated 3 years ago
- RATransformers - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ☆41 · Updated 2 years ago
- ☆11 · Updated 4 months ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- SWIM-IR is a synthetic Wikipedia-based multilingual information retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- The official PyTorch repo for "UNIREX: A Unified Learning Framework for Language Model Rationale Extraction" (ICML 2022) ☆24 · Updated 2 years ago
- Code for the SaGe subword tokenizer (EACL 2023) ☆24 · Updated 4 months ago
- Few-shot Learning with Auxiliary Data ☆27 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP 2021 ☆29 · Updated 2 years ago
- ☆13 · Updated 6 months ago
- Official codebase accompanying the ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu) ☆20 · Updated 2 years ago
- ☆44 · Updated 5 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆19 · Updated 2 months ago
- This project develops compact transformer models tailored for clinical text analysis, balancing efficiency and performance for healthcare… ☆18 · Updated last year
- ☆14 · Updated 6 months ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆15 · Updated 3 years ago
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆16 · Updated 3 years ago
- Companion repo for the Vision Language Modelling YouTube series (https://bit.ly/3PsbsC2) by Prithivi Da. Open to PRs and collaborations ☆14 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- ☆21 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- Using short models to classify long texts ☆21 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- Code for the paper "SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts" (AKBC 2021), https://openreview.net/forum?id=OF… ☆29 · Updated 3 years ago
- ☆13 · Updated 2 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆31 · Updated last year
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Updated 3 years ago