google-research-datasets / swim-ir
SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages, generated using PaLM 2 and summarize-then-ask prompting.
☆48 · Updated last year
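The summarize-then-ask recipe behind SWIM-IR can be sketched in a few lines: prompt the model to summarize a passage, then prompt it again to write a query (in a target language) that the passage answers. The snippet below is a minimal, hypothetical illustration; the `generate` callable stands in for whatever LLM you have access to (PaLM 2 in the original work), and the prompt wording and output fields are assumptions, not the exact templates from the paper.

```python
from typing import Callable

def summarize_then_ask(
    passage: str,
    generate: Callable[[str], str],  # hypothetical LLM call; PaLM 2 in the original work
    target_lang: str = "en",
) -> dict:
    """Generate one synthetic query-passage pair, SWIM-IR style (sketch)."""
    # Step 1: summarize the passage so the query stays grounded in its key facts.
    summary = generate(
        f"Summarize the following passage in one or two sentences:\n\n{passage}"
    )
    # Step 2: ask for a search query, in the target language, that the passage answers,
    # conditioned on the summary from step 1.
    query = generate(
        f"Passage summary:\n{summary}\n\n"
        f"Write a natural search query in {target_lang} that this passage answers."
    )
    return {"query": query, "passage": passage, "lang": target_lang}

if __name__ == "__main__":
    # Toy usage with a stub generator; replace with a real LLM client.
    stub = lambda prompt: "stubbed model output"
    print(summarize_then_ask("The Eiffel Tower is in Paris...", stub, target_lang="de"))
```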
Alternatives and similar repositories for swim-ir:
Users interested in swim-ir are comparing it to the repositories listed below:
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆19 · Updated 3 months ago
- ☆28 · Updated last year
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆45 · Updated 6 months ago
- ☆29 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆31 · Updated last year
- Retrieval-Augmented Generation battle! ☆50 · Updated 4 months ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆31 · Updated 10 months ago
- 🌏 Modular retrievers for zero-shot multilingual IR. ☆27 · Updated last year
- ☆97 · Updated 2 years ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆43 · Updated 10 months ago
- ☆18 · Updated 8 months ago
- ☆45 · Updated 3 years ago
- SPRINT Toolkit helps you evaluate diverse neural sparse models easily using a single click on any IR dataset. ☆45 · Updated last year
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 · Updated 3 months ago
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆24 · Updated last year
- Embedding Recycling for Language models ☆38 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Efficient Memory-Augmented Transformers ☆34 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- ☆38 · Updated last year
- ☆14 · Updated 7 months ago
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆83 · Updated 8 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆46 · Updated 3 weeks ago
- Resources for the shared task on conversational question answering SCAI-QReCC 2021 ☆29 · Updated 2 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP'2021 ☆29 · Updated 2 years ago
- ☆42 · Updated 8 months ago
- Using a business-level retrieval system (BM25) with Python in just a few lines (see the sketch after this list). ☆31 · Updated 2 years ago
- ☆44 · Updated 5 months ago
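BM25 retrieval really can be wired up in a handful of lines. The sketch below uses the `rank_bm25` package as one common Python implementation (the repository above may use a different one), with whitespace tokenization as a simplifying assumption.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

corpus = [
    "SWIM-IR is a synthetic multilingual retrieval training set.",
    "BM25 is a classic lexical ranking function.",
    "Dense retrievers embed queries and passages into vectors.",
]
# Whitespace tokenization keeps the example short; real pipelines usually
# lowercase, strip punctuation, and handle language-specific tokenization.
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = "lexical ranking with BM25"
tokenized_query = query.lower().split()
print(bm25.get_scores(tokenized_query))               # relevance score per document
print(bm25.get_top_n(tokenized_query, corpus, n=1))   # best-matching document
```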