IBM / model-recycling
Ranking of fine-tuned HF models as base models.
☆35 · Updated last year
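The point of the ranking is to pick an already fine-tuned checkpoint as the starting point for your own fine-tuning, rather than the pretrained base. A minimal sketch of that pattern with 🤗 Transformers (the checkpoint name below is a hypothetical placeholder, not a model from the actual ranking):

```python
# Sketch: recycle a fine-tuned checkpoint as the base for a new task.
# The checkpoint name is a hypothetical placeholder; substitute a model
# from the model-recycling ranking for your architecture.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

recycled_ckpt = "some-org/roberta-base-finetuned-mnli"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(recycled_ckpt)
model = AutoModelForSequenceClassification.from_pretrained(
    recycled_ckpt,
    num_labels=2,                  # label count of *your* target task
    ignore_mismatched_sizes=True,  # reinitialize the classification head
)
# From here, train on the target task with a standard Trainer loop.
```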
Related projects
Alternatives and complementary repositories for model-recycling
- Embedding Recycling for Language Models ☆38 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆44 · Updated last year
- ☆15 · Updated 3 months ago
- ☆22 · Updated 2 years ago
- Using short models to classify long texts ☆20 · Updated last year
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as field extraction, tokenization, prompting, batchi… ☆31 · Updated 5 months ago
- ☆14 · Updated last month
- ☆21 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- Few-shot Learning with Auxiliary Data ☆26 · Updated 11 months ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆27 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆92 · Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face (see the Pyserini sketch after this list) ☆32 · Updated last year
- This project develops compact transformer models tailored for clinical text analysis, balancing efficiency and performance for healthcare… ☆18 · Updated 7 months ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆17 · Updated last month
- Companion Repo for the Vision Language Modelling YouTube series - https://bit.ly/3PsbsC2 - by Prithivi Da. Open to PRs and collaborations ☆14 · Updated 2 years ago
- Code for equipping pretrained language models (BART, GPT-2, XLNet) with commonsense knowledge for generating implicit knowledge statement… ☆16 · Updated 3 years ago
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ☆41 · Updated last year
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆15 · Updated 3 years ago
- ☆46 · Updated this week
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆22 · Updated last year
- Code and pre-trained models for "ReasonBERT: Pre-trained to Reason with Distant Supervision", EMNLP 2021 ☆29 · Updated last year
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers ☆47 · Updated last year
- ☆11 · Updated 2 years ago
- Code for "Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking" (https://arxiv.org/abs/2… ☆12 · Updated last year
- The SPRINT Toolkit helps you evaluate diverse neural sparse models on any IR dataset with a single click. ☆42 · Updated last year
- ☆29 · Updated 9 months ago
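For the Pyserini entry above, a minimal retrieval sketch using one of Pyserini's prebuilt Lucene indexes (assuming `pyserini` and a Java runtime are installed; the query string is illustrative):

```python
# Sketch of plug-and-play BM25 search with Pyserini's prebuilt MS MARCO
# passage index, the kind of retrieval backend the interfaces above wrap.
from pyserini.search.lucene import LuceneSearcher

# Downloads and caches the prebuilt index on first use.
searcher = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")
hits = searcher.search("what is model recycling", k=5)  # illustrative query

for hit in hits:
    print(f"{hit.docid:15} {hit.score:.4f}")
```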