unicamp-dl / ExaRanker
☆29 · Updated last year
Alternatives and similar repositories for ExaRanker
Users interested in ExaRanker are comparing it to the libraries listed below.
- ☆101 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- Inquisitive Parrots for Search · ☆196 · Updated 3 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… · ☆49 · Updated last year
- Dense hybrid representations for text retrieval · ☆63 · Updated 2 years ago
- ☆37 · Updated 2 weeks ago
- ☆46 · Updated 3 years ago
- SPRINT Toolkit helps you evaluate diverse neural sparse models easily using a single click on any IR dataset. · ☆47 · Updated 2 years ago
- ☆86 · Updated 5 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. · ☆94 · Updated 2 years ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers · ☆133 · Updated last year
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. · ☆24 · Updated last year
- Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. · ☆79 · Updated 3 years ago
- Retrieval-Augmented Generation battle! · ☆58 · Updated last month
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" · ☆86 · Updated last year
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions · ☆46 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners · ☆116 · Updated 2 months ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. · ☆163 · Updated last year
- ☆53 · Updated last month
- Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022) · ☆114 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning · ☆99 · Updated 2 years ago
- ☆28 · Updated last year
- Scalable training for dense retrieval models. · ☆299 · Updated 3 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings · ☆21 · Updated 2 months ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval · ☆29 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- [EMNLP 2022] This is the code repo for our EMNLP'22 paper "COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contr… · ☆50 · Updated last year
- Embedding Recycling for Language Models · ☆39 · Updated 2 years ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI · ☆94 · Updated 2 years ago
- We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in … · ☆54 · Updated 2 years ago